> Source: `mip-cehome-forumlist/README.md` (wupengFEX/mip-extensions-platform, MIT)

# mip-cehome-forumlist
mip-cehome-forumlist is a forum board list component.

Title|Content
----|----
Type|Not general-purpose
Supported layouts|responsive, fixed-height, fill, container, fixed
Required script|https://c.mipcdn.com/static/v1/mip-cehome-forumlist/mip-cehome-forumlist.js
## Examples
### Basic usage
```html
<mip-cehome-forumlist>
<div id="banner">
<div class="swiper-container swiper-container-horizontal">
<div class="swiper-wrapper">
<div class="swiper-slide"><a href="https://m.cehome.com/bbs/thread/info/785990"><mip-img src="https://img3.cehome.com/appad/17050708325714492.jpg"></mip-img></a></div>
</div>
</div>
<div class="swiper-pagination"></div>
</div>
<section id="indexContent">
<div class="indexNav">
<div class="navItem"><em id="threadNewLink">最新帖子</em></div>
<div class="navItem"><em id="threadTopLink">十大热帖</em></div>
<div class="navItem cur"><em>论坛版块</em></div>
</div>
<ul class="sectionNav clearfix">
<a href="https://m.cehome.com/bbs/mip/forum/threadlist/43/1/">
<li>
<mip-img class="navImg" src="https://img4.cehome.com/album/201706/29/143913n2blldho8bgblf22.png" alt="我爱我挖"></mip-img>
<em class="navTxt">我爱我挖</em>
</li>
</a>
<a href="https://m.cehome.com/bbs/mip/forum/threadlist/44/1/">
<li>
<mip-img class="navImg" src="https://img4.cehome.com/album/201706/29/1439090a4zutq43a6wn4wn.png" alt="杂谈图库"></mip-img>
<em class="navTxt">杂谈图库</em>
</li>
</a>
<a href="https://m.cehome.com/bbs/mip/forum/threadlist/53/1/">
<li>
<mip-img class="navImg" src="https://img4.cehome.com/album/201706/29/1439077mwwg17fvmhmmhzh.png" alt="选购询价"></mip-img>
<em class="navTxt">选购询价</em>
</li>
</a>
<a href="https://m.cehome.com/bbs/mip/forum/threadlist/28/1/">
<li>
<mip-img class="navImg" src="https://img4.cehome.com/album/201706/29/14390592d519gr5eek9ufe.png" alt="招聘求职"></mip-img>
<em class="navTxt">招聘求职</em>
</li>
</a>
</ul>
</section>
</mip-cehome-forumlist>
```
```html
<style mip-custom>
#banner .swiper-slide mip-img img {
width: 100%;
min-height: 100%;
display: block;
height:100%;
}
</style>
```
> Source: `README.md` (chenfengxu714/open_source_projects, MIT)

# Open Source Projects from PALLAS Lab
Below are links to different open source projects from [Prof. Keutzer](https://people.eecs.berkeley.edu/~keutzer)'s lab at UC Berkeley.
# Core Optimization Algorithms
* [AdaHessian: A Second-order Optimization Algorithm](https://github.com/amirgholami/adahessian)
* [HessianFlow: A Library for Hessian Based Algorithms in Machine Learning](https://github.com/amirgholami/HessianFlow)
* [PowerNorm: Rethinking Batch Normalization in Transformers](https://github.com/sIncerass/powernorm)
* [PyHessian: Neural Networks Through the Lens of the Hessian](https://github.com/amirgholami/PyHessian)
* [TRAttack: Trust Region Adversarial Attack](https://github.com/amirgholami/TRAttack)
# Neural Network Architecture Design
* [ANODE: Unconditionally Accurate Memory-efficient Gradients for Neural ODEs](https://github.com/amirgholami/anode)
* [DiracDeltaNet](https://github.com/Yang-YiFan/DiracDeltaNet)
* [LiDAR](https://github.com/bernwang/LiDAR-annotator)
* [LTP: Learned Token Pruning](https://github.com/kssteven418/ltp)
* [SqueezeNext: Hardware-aware Neural Network Design](https://github.com/amirgholami/SqueezeNext)
* [ShiftNet](https://github.com/alvinwan/shiftresnet-cifar)
* [SqueezeDet](https://github.com/BichenWuUCB/squeezeDet)
* [SqueezeSeg](https://github.com/BichenWuUCB/SqueezeSeg)
# Efficient Inference and Compression
* [BitPack: A Practical Tool to Efficiently Save Ultra-Low Precision/Mixed-Precision Quantized Models](https://github.com/Zhen-Dong/BitPack)
* [CoDeNet: Efficient Deployment of Input-Adaptive Object Detection on Embedded FPGAs](https://github.com/Zhen-Dong/CoDeNet)
* [HAP: Hessian Aware Pruning and Neural Implant](https://github.com/yaozhewei/hap)
* [HAWQ: Hessian Aware Quantization](https://github.com/Zhen-Dong/HAWQ)
* [IBERT: Integer Only BERT Quantization](https://github.com/kssteven418/I-BERT)
* [Q-ASR: Integer-only Zero-shot Quantization for Efficient Speech Recognition](https://github.com/kssteven418/Q-ASR)
* [ZeroQ: A Novel Zero Shot Quantization Framework](https://github.com/amirgholami/ZeroQ)
# Domain Randomization and Domain Adaptation
* [MADAN](https://github.com/Luodian/MADAN)
* [PCS for Few-shot Unsupervised Domain Adaptation](https://github.com/zhengzangw/PCS-FUDA)
> Source: `knowledge/memory_table/README.md` (pudongping/swoole-learn-demo, MIT)

# swoole memory table
> Share data between multiple processes.
## Example
```shell
root@dc705af7d5da:/var/www/swoole-learn-demo/memory_table# php table.php
Maximum number of rows in the table ====> 1024
Actual memory size used (bytes) ====> 194880
Total number of entries in the table ====> 3
array(3) {
["name"]=>
string(4) "Jack"
["age"]=>
int(25)
["height"]=>
float(1.88)
}
Number of entries in the table now ====> 2
string(1) "1"
==========>
array(3) {
["name"]=>
string(4) "Alex"
["age"]=>
int(18)
["height"]=>
float(1.68)
}
string(1) "3"
==========>
array(3) {
["name"]=>
string(4) "Mary"
["age"]=>
int(21)
["height"]=>
float(1.78)
}
```
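The repository's `table.php` itself is not shown in this README. A minimal sketch of what a script producing output like the above could look like follows; the column sizes, echoed labels, and the `size`/`memorySize` properties are assumptions for illustration, not the repo's actual code:

```php
<?php
// Hypothetical reconstruction, not the actual table.php from this repo.
$table = new Swoole\Table(1024);
$table->column('name', Swoole\Table::TYPE_STRING, 32);
$table->column('age', Swoole\Table::TYPE_INT);
$table->column('height', Swoole\Table::TYPE_FLOAT);
$table->create();

$table->set('1', ['name' => 'Alex', 'age' => 18, 'height' => 1.68]);
$table->set('2', ['name' => 'Jack', 'age' => 25, 'height' => 1.88]);
$table->set('3', ['name' => 'Mary', 'age' => 21, 'height' => 1.78]);

echo 'max rows ====> ', $table->size, PHP_EOL;          // assumed property
echo 'memory size ====> ', $table->memorySize, PHP_EOL; // assumed property
echo 'entry count ====> ', $table->count(), PHP_EOL;

var_dump($table->get('2')); // read one row back
$table->del('2');           // remove it again
echo 'entry count now ====> ', $table->count(), PHP_EOL;

foreach ($table as $key => $row) { // Swoole\Table is iterable
    var_dump($key);
    echo '==========>', PHP_EOL;
    var_dump($row);
}
```

Because the table lives in shared memory created before `fork()`, worker processes all see the same rows; that is the point of the demo output above.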
> Source: `README.md` (nathanssantos/node-express-mongo-boilerplate, MIT)

# node-express-mongo-boilerplate
> Source: `README.md` (RainMark/cops, Apache-2.0)

# cops
Code Oops Coroutine Example
> Source: `fabric-sdk-go/16177-17631/17021.md` (hyperledger-gerrit-archive/fabric-gerrit, Apache-2.0)

<strong>Project</strong>: fabric-sdk-go<br><strong>Branch</strong>: master<br><strong>ID</strong>: 17021<br><strong>Subject</strong>: [FAB-7830] Refactor client: delay error propagation<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Troy Ronda - troy@troyronda.com<br><strong>Assignee</strong>:<br><strong>Created</strong>: 1/20/2018, 9:43:58 AM<br><strong>LastUpdated</strong>: 1/22/2018, 10:28:11 AM<br><strong>CommitMessage</strong>:<br><pre>[FAB-7830] Refactor client: delay error propagation
This change adjusts fabsdk.NewClient to delay
execution in order to avoid erros on the call.
This allows for a cleaner client API.
Change-Id: Ib0455c7139ee3ab634a5723af724aba5c7d337bf
Signed-off-by: Troy Ronda <troy@troyronda.com>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Troy Ronda - troy@troyronda.com<br><strong>Reviewed</strong>: 1/20/2018, 9:43:58 AM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 9:44:09 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1013/ (1/2)</pre><strong>Reviewer</strong>: Troy Ronda - troy@troyronda.com<br><strong>Reviewed</strong>: 1/20/2018, 9:45:11 AM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review-2
WIP</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 9:48:32 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1132/ (2/2)</pre><strong>Reviewer</strong>: Troy Ronda - troy@troyronda.com<br><strong>Reviewed</strong>: 1/20/2018, 9:55:23 AM<br><strong>Message</strong>: <pre>Uploaded patch set 2.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 9:55:29 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1014/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 9:55:51 AM<br><strong>Message</strong>: <pre>Patch Set 1: Verified-1
Build Failed
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1132/ : ABORTED
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1132/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-x86_64/1132
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1013/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1013/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-s390x/1013</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 9:58:03 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1133/ (2/2)</pre><strong>Reviewer</strong>: Troy Ronda - troy@troyronda.com<br><strong>Reviewed</strong>: 1/20/2018, 10:02:35 AM<br><strong>Message</strong>: <pre>Uploaded patch set 3.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 10:02:44 AM<br><strong>Message</strong>: <pre>Patch Set 3:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1015/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 10:03:03 AM<br><strong>Message</strong>: <pre>Patch Set 2: Verified-1
Build Failed
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1133/ : ABORTED
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1133/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-x86_64/1133
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1014/ : ABORTED
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1014/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-s390x/1014</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 10:05:34 AM<br><strong>Message</strong>: <pre>Patch Set 3:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1134/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 10:10:26 AM<br><strong>Message</strong>: <pre>Patch Set 3: Verified-1
Build Failed
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1134/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1134/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-x86_64/1134
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1015/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1015/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-s390x/1015</pre><strong>Reviewer</strong>: Troy Ronda - troy@troyronda.com<br><strong>Reviewed</strong>: 1/20/2018, 10:18:50 AM<br><strong>Message</strong>: <pre>Removed Code-Review-2 by Troy Ronda <troy@troyronda.com>
</pre><strong>Reviewer</strong>: Troy Ronda - troy@troyronda.com<br><strong>Reviewed</strong>: 1/20/2018, 3:08:08 PM<br><strong>Message</strong>: <pre>Uploaded patch set 4.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 3:08:23 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1016/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 3:09:43 PM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1135/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/20/2018, 3:28:39 PM<br><strong>Message</strong>: <pre>Patch Set 4: Verified+1
Build Successful
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-x86_64/1135/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-x86_64/1135
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-verify-s390x/1016/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-verify-s390x/1016</pre><strong>Reviewer</strong>: Bob Stasyszyn - bob.stasyszyn@securekey.com<br><strong>Reviewed</strong>: 1/22/2018, 9:42:01 AM<br><strong>Message</strong>: <pre>Patch Set 4: Code-Review+1</pre><strong>Reviewer</strong>: Firas Qutishat - firas.qutishat@securekey.com<br><strong>Reviewed</strong>: 1/22/2018, 9:59:35 AM<br><strong>Message</strong>: <pre>Patch Set 4: Code-Review+2</pre><strong>Reviewer</strong>: Firas Qutishat - firas.qutishat@securekey.com<br><strong>Reviewed</strong>: 1/22/2018, 9:59:36 AM<br><strong>Message</strong>: <pre>Change has been successfully merged by Firas Qutishat</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 1/22/2018, 10:28:11 AM<br><strong>Message</strong>: <pre>Patch Set 4:
Build Successful
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-merge-x86_64/284/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-merge-x86_64/284
https://jenkins.hyperledger.org/job/fabric-sdk-go-tests-merge-s390x/231/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-go-tests-merge-s390x/231</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Troy Ronda - troy@troyronda.com<br><strong>Uploader</strong>: Troy Ronda - troy@troyronda.com<br><strong>Created</strong>: 1/20/2018, 9:43:58 AM<br><strong>UnmergedRevision</strong>: [7af5ad749557d5016230ee59a656fd16383a5453](https://github.com/hyperledger-gerrit-archive/fabric-sdk-go/commit/7af5ad749557d5016230ee59a656fd16383a5453)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 1/20/2018, 9:55:51 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br><strong>Approver</strong>: Troy Ronda - troy@troyronda.com<br><strong>Approved</strong>: 1/20/2018, 9:45:11 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: -1<br><br></blockquote><h3>PatchSet Number: 2</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Troy Ronda - troy@troyronda.com<br><strong>Uploader</strong>: Troy Ronda - troy@troyronda.com<br><strong>Created</strong>: 1/20/2018, 9:55:23 AM<br><strong>UnmergedRevision</strong>: [c171b50ac188a21d6a6b065b258b15e2ceea46a5](https://github.com/hyperledger-gerrit-archive/fabric-sdk-go/commit/c171b50ac188a21d6a6b065b258b15e2ceea46a5)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 1/20/2018, 10:03:03 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br><strong>Approver</strong>: Troy Ronda - troy@troyronda.com<br><strong>Approved</strong>: 1/20/2018, 9:45:11 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: -1<br><br></blockquote><h3>PatchSet Number: 3</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Troy Ronda - troy@troyronda.com<br><strong>Uploader</strong>: Troy Ronda - 
troy@troyronda.com<br><strong>Created</strong>: 1/20/2018, 10:02:35 AM<br><strong>UnmergedRevision</strong>: [3d03ded1f8d4edae72121c5771fc6ae3ac8e3acd](https://github.com/hyperledger-gerrit-archive/fabric-sdk-go/commit/3d03ded1f8d4edae72121c5771fc6ae3ac8e3acd)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 1/20/2018, 10:10:26 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br></blockquote><h3>PatchSet Number: 4</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Troy Ronda - troy@troyronda.com<br><strong>Uploader</strong>: Troy Ronda - troy@troyronda.com<br><strong>Created</strong>: 1/20/2018, 3:08:08 PM<br><strong>GitHubMergedRevision</strong>: [94ac20c7f4275632527ba8be864796c2e7c385d6](https://github.com/hyperledger-gerrit-archive/fabric-sdk-go/commit/94ac20c7f4275632527ba8be864796c2e7c385d6)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 1/20/2018, 3:28:39 PM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Firas Qutishat - firas.qutishat@securekey.com<br><strong>Approved</strong>: 1/22/2018, 9:59:35 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: Firas Qutishat<br><strong>Merged</strong>: 1/22/2018, 9:59:36 AM<br><br><strong>Approver</strong>: Bob Stasyszyn - bob.stasyszyn@securekey.com<br><strong>Approved</strong>: 1/22/2018, 9:42:01 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br></blockquote> | 137.944444 | 3,652 | 0.763754 | kor_Hang | 0.314546 |
> Source: `_posts/archive/33-20171114/2017-11-13-IBM-pitched-its-Watson-supercomputer-as-a-revolution-in-cancer-care-Its-nowhere-close.md` (polgarp/alg-exp, MIT)

---
layout: post
title: "IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close"
posturl: https://www.statnews.com/2017/09/05/watson-ibm-cancer/
tags:
- IBM
- Watson
- Healthcare
---
{% include post_info_header.md %}
The Wizard of Oz method is one of the suggested ways of prototyping AI experiences, but that approach is reserved for the design phase, not for a product already deployed. Besides the big story of IBM's marketing overhyping the product, there is another moral here: product development with AI tech is hard, and having the right data is even harder.
<!--more-->
{% include post_info_footer.md %}
> Source: `README.md` (Conduitry/degit, MIT)

# degit — straightforward project scaffolding
**degit** makes copies of git repositories. When you run `degit some-user/some-repo`, it will find the latest commit on https://github.com/some-user/some-repo and download the associated tar file to `~/.degit/some-user/some-repo/commithash.tar.gz` if it doesn't already exist locally. (This is much quicker than using `git clone`, because you're not downloading the entire git history.)
You can specify a specific branch, tag or commit hash...
```bash
degit some-user/some-repo#some-feature # branch
degit some-user/some-repo#v1.0.0 # tag
degit some-user/some-repo#1234abcd # commit hash
```
...or create a new folder for the project...
```bash
degit some-user/some-repo my-new-project
```
...and that's it. As simple as possible, and no simpler.
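The `#ref` suffix shown earlier is resolved by plain string parsing before any network request is made. A rough sketch of splitting a `user/repo#ref` spec (`parseSpec` is a hypothetical helper written for illustration, not degit's actual internals or API; degit itself falls back to the repo's latest default-branch commit, the `'HEAD'` default here is just a placeholder):

```javascript
// Hypothetical helper, not part of degit's API: split a source spec
// such as "some-user/some-repo#v1.0.0" into its parts.
function parseSpec(spec) {
  const [repoPart, ref = 'HEAD'] = spec.split('#');
  const [user, repo] = repoPart.split('/');
  return { user, repo, ref };
}

console.log(parseSpec('some-user/some-repo#v1.0.0'));
// → { user: 'some-user', repo: 'some-repo', ref: 'v1.0.0' }
```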
## Installation
```bash
npm install -g degit
```
## Not supported
* Windows
* Private repositories
* Anything that isn't GitHub
Pull requests are very welcome!
## License
[MIT](LICENSE). | 25.125 | 386 | 0.727363 | eng_Latn | 0.971906 |
> Source: `README.md` (calport/CodeLab0-Week5-GitDemo, Unlicense)

# CodeLab0-Week5-GitDemo
This is a temporary project for Code Lab
> Source: `_definitions/textStage_1997_Plachta.md` (WoutDLN/lexicon-scholarly-editing, CC-BY-4.0)

---
lemma: text (stage)
source: plachta_editionswissenschaft_1997
page: 139
language: German
contributor: Caroline
updated_by: Caroline
---
**Textstufe** (text stage): a textual unit within the [genesis of the text](genesis.html) that can also be distinguished chronologically from the preceding and the following textual unit.
> Source: `wdk-ddi-src/content/fltkernel/nf-fltkernel-fltretainswappedbuffermdladdress.md` (xiaoyinl/windows-driver-docs-ddi, CC-BY-4.0/MIT)

---
UID: NF:fltkernel.FltRetainSwappedBufferMdlAddress
title: FltRetainSwappedBufferMdlAddress function (fltkernel.h)
description: FltRetainSwappedBufferMdlAddress prevents the Filter Manager from freeing the memory descriptor list (MDL) for a buffer that was swapped in by a minifilter driver.
old-location: ifsk\fltretainswappedbuffermdladdress.htm
tech.root: ifsk
ms.assetid: 80498410-9617-414d-997c-0d55f891ba3c
ms.date: 04/16/2018
ms.keywords: FltApiRef_p_to_z_3832baaa-37bc-47cc-9df4-12c92fd0ddd8.xml, FltRetainSwappedBufferMdlAddress, FltRetainSwappedBufferMdlAddress function [Installable File System Drivers], fltkernel/FltRetainSwappedBufferMdlAddress, ifsk.fltretainswappedbuffermdladdress
ms.topic: function
f1_keywords:
- "fltkernel/FltRetainSwappedBufferMdlAddress"
req.header: fltkernel.h
req.include-header: Fltkernel.h
req.target-type: Universal
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: FltMgr.lib
req.dll: Fltmgr.sys
req.irql: Any level
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- fltmgr.sys
api_name:
- FltRetainSwappedBufferMdlAddress
product:
- Windows
targetos: Windows
req.typenames:
---
# FltRetainSwappedBufferMdlAddress function
## -description
<b>FltRetainSwappedBufferMdlAddress</b> prevents the Filter Manager from freeing the memory descriptor list (MDL) for a buffer that was swapped in by a minifilter driver.
## -parameters
### -param CallbackData [in]
Pointer to the callback data structure for the operation.
## -returns
None
## -remarks
When a minifilter driver swaps in a new buffer in a preoperation callback (<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/fltkernel/nc-fltkernel-pflt_pre_operation_callback">PFLT_PRE_OPERATION_CALLBACK</a>) routine, the Filter Manager automatically frees the buffer's MDL when the corresponding postoperation (<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/fltkernel/nc-fltkernel-pflt_post_operation_callback">PFLT_POST_OPERATION_CALLBACK</a>) callback routine returns.
The minifilter driver can prevent the Filter Manager from freeing the MDL by calling <b>FltRetainSwappedBufferMdlAddress</b> from the postoperation callback routine.
After calling <b>FltRetainSwappedBufferMdlAddress</b>, the caller is responsible for freeing the MDL by calling a routine such as <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdm/nf-wdm-iofreemdl">IoFreeMdl</a>.
<b>FltRetainSwappedBufferMdlAddress</b> can only be called from a postoperation callback routine.
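Put together, the remarks above suggest a pattern like the following hedged sketch (routine and variable names are illustrative, not from this documentation; the buffer use and error handling are omitted):

```c
/* Illustrative sketch only: a postoperation callback that takes ownership
   of the MDL for a buffer swapped in by the matching preoperation callback. */
FLT_POSTOP_CALLBACK_STATUS
MyPostReadCallback(
    _Inout_ PFLT_CALLBACK_DATA Data,
    _In_ PCFLT_RELATED_OBJECTS FltObjects,
    _In_opt_ PVOID CompletionContext,
    _In_ FLT_POST_OPERATION_FLAGS Flags
    )
{
    PMDL swappedMdl;

    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(CompletionContext);
    UNREFERENCED_PARAMETER(Flags);

    swappedMdl = FltGetSwappedBufferMdlAddress(Data);
    if (swappedMdl != NULL) {
        /* Prevent the Filter Manager from freeing the MDL when this
           callback returns; the minifilter now owns it. */
        FltRetainSwappedBufferMdlAddress(Data);

        /* ... use swappedMdl; when done, free it explicitly: */
        IoFreeMdl(swappedMdl);
    }

    return FLT_POSTOP_FINISHED_PROCESSING;
}
```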
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/fltkernel/nf-fltkernel-fltdecodeparameters">FltDecodeParameters</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/fltkernel/nf-fltkernel-fltgetswappedbuffermdladdress">FltGetSwappedBufferMdlAddress</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdm/nf-wdm-iofreemdl">IoFreeMdl</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/fltkernel/nc-fltkernel-pflt_post_operation_callback">PFLT_POST_OPERATION_CALLBACK</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/fltkernel/nc-fltkernel-pflt_pre_operation_callback">PFLT_PRE_OPERATION_CALLBACK</a>
> Source: `README.md` (EuleMitKeule/geoip_monitor, Unlicense)

# geoip_monitor
Basic IP Monitoring for Ubuntu with Grafana World Map Plugin Integration
INSTRUCTIONS:
1. Put geoip_monitor.py, geoip_monitor.log, and countries.csv in a folder
2. Run apt install geoip-bin (this provides the geoiplookup command)
3. Have python3 installed, along with the following Python modules (most are in the standard library; mysql.connector and pytz need a pip install):
-datetime
-time
-subprocess
-mysql.connector
-csv
-pytz
-logging
4. Have a MariaDB MySQL server running
5. Create the Database 'geoip' and a user 'geoip' with access to the database
6. Create table 'geoip1' in the database 'geoip' with these columns:
-latitude : float
-longitude: float
-datetime : datetime
-country : varchar
-ip : varchar
-port : int
6.5 Put the host and password for MariaDB in geoip_monitor.py
7. Configure your iptables logging:
-Run iptables -I INPUT 1 -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j LOG --log-prefix "iptables: "
-Run touch /var/log/iptables.log
-Run touch /etc/rsyslog.d/10-iptables.conf
-Run nano /etc/rsyslog.d/10-iptables.conf
-Put in :msg, contains, "iptables: " -/var/log/iptables.log
& ~
-Run systemctl restart rsyslog
7.5 Put `geoip_monitor.service` in `/etc/systemd/system/` and enable it via `systemctl enable geoip_monitor`.

8. Get the Grafana World Map Plugin.
9. Add your MariaDB as a data source to Grafana.
10. Create a World Map panel and feed it via this query:

        SELECT
          Count(*) as metric, latitude as latitude, longitude as longitude, datetime, ip, country, now()
        FROM
          geoip1
        WHERE
          datetime >= $__timeFrom() AND datetime <= $__timeTo()
        GROUP BY
          country

Should be working now.
# Pseudo-Types
Greenplum Database supports special-purpose data type entries that are collectively called *pseudo-types*. A pseudo-type cannot be used as a column data type, but it can be used to declare a function's argument or result type. Each of the available pseudo-types is useful in situations where a function's behavior does not correspond to simply taking or returning a value of a specific SQL data type.
Functions coded in procedural languages can use pseudo-types only as allowed by their implementation languages. The procedural languages all forbid use of a pseudo-type as an argument type, and allow only *void* and *record* as a result type.
A function with the pseudo-type *record* as a return data type returns an unspecified row type. The *record* represents an array of possibly-anonymous composite types. Since composite datums carry their own type identification, no extra knowledge is needed at the array level.
The pseudo-type *void* indicates that a function returns no value.
**Note:** Greenplum Database does not support triggers and the pseudo-type *trigger*.
The types *anyelement*, *anyarray*, *anynonarray*, and *anyenum* are pseudo-types called polymorphic types. Some procedural languages also support polymorphic functions using the types *anyarray*, *anyelement*, *anyenum*, and *anynonarray*.
The pseudo-type *anytable* is a Greenplum Database type that specifies a table expression—an expression that computes a table. Greenplum Database allows this type only as an argument to a user-defined function. See [Table Value Expressions](#topic_ig2_1pc_qfb) for more about the *anytable* pseudo-type.
For more information about pseudo-types, see the PostgreSQL documentation about [Pseudo-Types](https://www.postgresql.org/docs/9.4/datatype-pseudo.html).
**Parent topic:** [Data Types](data_types.html)
## <a id="topic_dbn_bpc_qfb"></a>Polymorphic Types
Four pseudo-types of special interest are *anyelement*, *anyarray*, *anynonarray*, and *anyenum*, which are collectively called *polymorphic* types. Any function declared using these types is said to be a polymorphic function. A polymorphic function can operate on many different data types, with the specific data types being determined by the data types actually passed to it at runtime.
Polymorphic arguments and results are tied to each other and are resolved to a specific data type when a query calling a polymorphic function is parsed. Each position \(either argument or return value\) declared as *anyelement* is allowed to have any specific actual data type, but in any given call they must all be the same actual type. Each position declared as *anyarray* can have any array data type, but similarly they must all be the same type. If there are positions declared *anyarray* and others declared *anyelement*, the actual array type in the *anyarray* positions must be an array whose elements are the same type appearing in the *anyelement* positions. *anynonarray* is treated exactly the same as *anyelement*, but adds the additional constraint that the actual type must not be an array type. *anyenum* is treated exactly the same as *anyelement*, but adds the additional constraint that the actual type must be an `enum` type.
When more than one argument position is declared with a polymorphic type, the net effect is that only certain combinations of actual argument types are allowed. For example, a function declared as `equal(*anyelement*, *anyelement*)` takes any two input values, so long as they are of the same data type.
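As a sketch of what such a declaration looks like - PostgreSQL-derived systems allow SQL-language functions to take polymorphic arguments (the name `is_equal` is illustrative):

```sql
CREATE FUNCTION is_equal(anyelement, anyelement) RETURNS boolean
AS 'SELECT $1 = $2' LANGUAGE sql;

SELECT is_equal(1, 2);                 -- both arguments resolve to integer
SELECT is_equal('a'::text, 'b'::text); -- both resolve to text
-- SELECT is_equal(1, 'a'::text);      -- fails: arguments must share one type
```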
When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also polymorphic, and the actual data type supplied as the argument determines the actual result type for that call. For example, if there were not already an array subscripting mechanism, one could define a function that implements subscripting as `subscript(*anyarray*, integer) returns *anyelement*`. This declaration constrains the actual first argument to be an array type, and allows the parser to infer the correct result type from the actual first argument's type. Another example is that a function declared as `myfunc(*anyarray*) returns *anyenum*` will only accept arrays of `enum` types.
Note that *anynonarray* and *anyenum* do not represent separate type variables; they are the same type as *anyelement*, just with an additional constraint. For example, declaring a function as `myfunc(*anyelement*, *anyenum*)` is equivalent to declaring it as `myfunc(*anyenum*, *anyenum*)`: both actual arguments must be the same `enum` type.
A variadic function \(one taking a variable number of arguments\) is polymorphic when its last parameter is declared as `VARIADIC *anyarray*`. For purposes of argument matching and determining the actual result type, such a function behaves the same as if you had declared the appropriate number of *anynonarray* parameters.
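A sketch of such a variadic polymorphic function, adapted from the PostgreSQL manual's `anyleast` example:

```sql
CREATE FUNCTION anyleast(VARIADIC anyarray) RETURNS anyelement
AS 'SELECT min($1[i]) FROM generate_subscripts($1, 1) g(i)' LANGUAGE sql;

SELECT anyleast(10, -1, 5);                  -- -1
SELECT anyleast('abc'::text, 'abd', 'aaa');  -- aaa
```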
For more information about polymorphic types, see the PostgreSQL documentation about [Polymorphic Arguments and Return Types](https://www.postgresql.org/docs/9.4/xfunc-c.html#AEN56822).
## <a id="topic_ig2_1pc_qfb"></a>Table Value Expressions
The *anytable* pseudo-type declares a function argument that is a table value expression. The notation for a table value expression is a `SELECT` statement enclosed in a `TABLE()` function. You can specify a distribution policy for the table by adding `SCATTER RANDOMLY`, or a `SCATTER BY` clause with a column list to specify the distribution key.
The `SELECT` statement is run when the function is called and the result rows are distributed to segments so that each segment runs the function with a subset of the result table.
For example, this table expression selects three columns from a table named `customer` and sets the distribution key to the first column:
```
TABLE(SELECT cust_key, name, address FROM customer SCATTER BY 1)
```
The `SELECT` statement may include joins on multiple base tables, `WHERE` clauses, aggregates, and any other valid query syntax.
The *anytable* type is only permitted in functions implemented in the C or C++ languages. The body of the function can access the table using the Greenplum Database Server Programming Interface \(SPI\) or the Greenplum Partner Connector \(GPPC\) API.
The *anytable* type is used in some user-defined functions in the Tanzu Greenplum Text API. The following GPText example uses the `TABLE` function with the `SCATTER BY` clause in the GPText function `gptext.index()` to populate the index `mydb.mytest.articles` with data from the messages table:
```
SELECT * FROM gptext.index(TABLE(SELECT * FROM mytest.messages
SCATTER BY distrib_id), 'mydb.mytest.messages');
```
For information about the function `gptext.index()`, see the Tanzu Greenplum Text documentation.
---
title: 'Getting Started with PTVS: Start Coding (Projects) | Microsoft Docs'
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-python
ms.topic: conceptual
ms.assetid: 14b85e70-b9a8-415c-a307-8c8c316a0495
caps.latest.revision: 7
author: kraigb
ms.author: kraigb
manager: jillfra
ms.openlocfilehash: 28622f290d82f86bf3d18cc4f40cfcfc8e953dad
ms.sourcegitcommit: 8b538eea125241e9d6d8b7297b72a66faa9a4a47
ms.translationtype: MTE95
ms.contentlocale: de-DE
ms.lasthandoff: 01/23/2019
ms.locfileid: "54781068"
---
# <a name="getting-started-with-ptvs-start-coding-projects"></a>Getting Started with PTVS: Start Coding (Projects)

[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]

With the Python Tools for Visual Studio (PTVS) you can manage your code.

You can watch these instructions in a very short [YouTube video](https://www.youtube.com/watch?v=KHPoVpL7zHg&list=PLReL099Y5nRdLgGAdrb_YeTdEnd23s6Ff&index=2).

If you have used Python before, you know that your project is defined by how you arrange the files on disk. This works great for simple Python projects, but once you have more files (web pages with JavaScript, unit tests, and build scripts), file systems become limiting. Visual Studio uses projects to accomplish three things.

- Identify the important files. Important files are the ones you check into a version control system (source files, resources, and so on), but not files that are generated as build output. Important files are also the ones you copy to another machine so that someone else can work on your app.
- Specify how files are to be used. You may have files that need to be processed by a tool whenever a file changes. Visual Studio projects can record this information.
- Define the boundaries of your components. If you have several components in your app, you can put each one in a separate project. These can ultimately be deployed to different servers, be built with different build or debug settings, or even be written in another language supported by Visual Studio, such as C++ or Node.js.

There are several project templates to help you get started. If you already have Python code you want to work with, the wizard for creating projects from existing code files helps you build a project that contains all of your files. Several web project templates exist for some popular frameworks, and further templates ship with the PTVS samples package. There are options that let you make the web templates work with other frameworks. The Python Application template is an empty, clean project, with one module to get you started.

Visual Studio shows the open projects in the Solution Explorer window, including all files, search paths, and Python environments. To add new items, select the project folder and choose "Add" and "New Item" from the context menu (right-click). You can select any item in the dialog, adjust the item's name, and add the item to the project.

You can drag and drop items into Solution Explorer. If you have already copied files into your project's directory tree, you can select "Show All Files" at the top of Solution Explorer. You can then select the items you want to add and choose "Include In Project" from the context menu.

You can watch these instructions in a very short [YouTube video](https://www.youtube.com/watch?v=KHPoVpL7zHg&list=PLReL099Y5nRdLgGAdrb_YeTdEnd23s6Ff&index=2).

## <a name="see-also"></a>See also

[Wiki documentation](https://github.com/Microsoft/PTVS/wiki/Projects) [PTVS videos - getting started and deep dives](https://www.youtube.com/playlist?list=PLReL099Y5nRdLgGAdrb_YeTdEnd23s6Ff)
# React
## Defining Components
### FunctionComponent.tsx
```ts
type Props = {
  ...
}

function FunctionComponent({ ... }: Props) {
  ...
}

export default FunctionComponent
```
Define components as above, following the pattern used in the CRA template.

Avoid class components. Use a class component only when you need component lifecycle methods that function components do not support.

In React 17, omitting the `import React from 'react'` statement when defining a component is recommended to optimize bundle size.

Component prop types can be defined with `PropTypes` or with TypeScript. The difference is that TypeScript checks types statically, while `PropTypes` checks them at runtime. Since TypeScript is said to generate the `PropTypes` code automatically, using only TypeScript is fine; runtime type checking also adds a small overhead at execution time.

When defining default values for component props, use ES6 default parameters instead of `defaultProps`. `defaultProps` is also expected to be deprecated later.
```ts
type Props = {
  prop1: number | undefined
  prop2?: number
}
```
`prop1` must be passed to the component explicitly but may be `number` or `undefined`, while `prop2` is a `number` that does not have to be passed at all. Inside the component, both end up with the same parameter type: `number | undefined`.

## Memoization

Memoizing the values you pass as component props with `useMemo()`, and the functions with `useCallback()`, is recommended. If the component receiving the props is wrapped in `memo()` (a pure component), memoization is mandatory.
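A minimal sketch of these rules (the `Item` shape and the `ItemList` child are illustrative; `ItemList` is assumed to be wrapped in `memo()`):

```tsx
import { useCallback, useMemo, useState } from 'react'

type Item = { id: number; price: number }

function Cart({ items }: { items: Item[] }) {
  const [selected, setSelected] = useState<number | null>(null)

  // Recomputed only when `items` changes
  const total = useMemo(() => items.reduce((sum, i) => sum + i.price, 0), [items])

  // Keeps a stable function identity across renders
  const handleSelectItem = useCallback((id: number) => setSelected(id), [])

  return (
    <ItemList
      items={items}
      total={total}
      selected={selected}
      onSelect={handleSelectItem}
    />
  )
}
```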
## Event Handler Naming
```tsx
import {
  ChangeEvent,
  MouseEvent as ReactMouseEvent,
  useCallback,
  useState,
} from 'react'

function FunctionComponent() {
  const [searchTerm, setSearchTerm] = useState('')

  // event handler functions
  const handleClickSearchButton = useCallback(
    (e: ReactMouseEvent<HTMLElement, MouseEvent>) => {
      ...
    },
    [...],
  )

  const handleChangeSearchTerm = useCallback(
    (e: ChangeEvent<HTMLInputElement>) => {
      setSearchTerm(e.target.value)
    },
    [],
  )

  // event handler props
  return (
    <>
      <SearchInput searchTerm={searchTerm} onChange={handleChangeSearchTerm} />
      <SearchButton onClick={handleClickSearchButton} />
    </>
  )
}

export default FunctionComponent
```
Name a prop that receives an event handler `on___`, and name the event handler function itself `handle___`.

## Defining Custom Hooks
### use___.ts
```ts
type Options = {
  ...
}

function use___({ ... }: Options) {
  ...
}

export default use___
```
Name a custom Hook `use___`. Inside a custom Hook, avoid JSX and stick to other React Hooks and plain JavaScript logic.

(The `type` name is open for discussion.)

## State Management

Prefer handling state locally instead of globally whenever you can. The reasoning is similar to why other languages recommend local variables over global ones. When you do need global state, use the React Context API.

When one piece of state has to be shared by several components, lifting the state management logic up to a common parent component is recommended.
# Examples
In this chapter you can find some example programs that you can use to learn
more about the runtime.
## Basic
- [Hello World](./examples/hello_world.md)
- [Import and Export Modules](./examples/import_export.md)
- [How to Manage Dependencies](./examples/manage_dependencies.md)
- [Fetch Data](./examples/fetch_data.md)
- [Read and Write Files](./examples/read_write_files.md)
## Advanced
- [Unix Cat](./examples/unix_cat.md)
- [File Server](./examples/file_server.md)
- [TCP Echo](./examples/tcp_echo.md)
- [Subprocess](./examples/subprocess.md)
- [Permissions](./examples/permissions.md)
- [OS Signals](./examples/os_signals.md)
- [File System Events](./examples/file_system_events.md)
- [Testing If Main](./examples/testing_if_main.md)
---
title: Losing Freedom
date: 29/05/2019
---
Only God knows how many millions or even billions of people struggle with some kind of addiction. To this day scientists still do not understand what causes it, although in some cases they can actually see the part of the brain where the cravings and desires sit.

Unfortunately, locating where such addictions live is not the same as being freed from them.

Addiction is merciless to everyone, not just to those who have it. Family members - parents, spouses, children - all suffer greatly when one member of the family is gripped by a power that seems impossible to shake off.

Drugs, alcohol, tobacco, gambling, pornography, sex, even food - what turns these things into an addiction is their habitual and progressive use, or the abuse of them. The person cannot stop even while knowing that it is doing harm. While enjoying the freedom of choice, the person becomes a slave of the addiction and in fact loses that freedom. Peter has a simple explanation for addiction and its results: "They promise them freedom, while they themselves are slaves of corruption; for whatever overcomes a person, to that he is enslaved" (2 Pet. 2:19).

`4. What things can become an addiction for people? Luke 16:13; Rom. 6:16; James 1:13-15; 1 John 2:16.`

Sin and addiction are not necessarily the same thing. It is possible to commit a sin that has nothing to do with an addiction, although such a sin can often become one. It is far better, in God's power, to stop the sin before it turns into an addiction. And, of course, the only lasting solution to the problem of sin and addiction is to receive a new heart. "Those who belong to Christ Jesus have crucified the flesh with its passions and desires" (Gal. 5:24). Paul also explains to the Romans what it means to die to this sinful nature that breeds addiction, so that we may live for Christ (Rom. 6:8-13), and then adds: "Put on the Lord Jesus Christ, and make no provision for the flesh, to gratify its desires" (Rom. 13:14).

`Who of us has not personally known the struggle with addiction, whether our own or someone else's, perhaps even a family member's? How can you help people see that this is not an admission of spiritual failure, even if as Christians they might still need professional help?`
+++
categories = []
comments = false
date = 2021-10-28T17:27:06-04:00
draft = false
showpagemeta = true
slug = ""
tags = []
title = "The subjective nature of experience"
description = "An analysis of why we may never \"put ourselves in another person's shoes\" and how this should affect the way we treat people"
+++
---
id: algo-overview
title: Overview
---
Ax supports:
* Bandit optimization
* Empirical Bayes with Thompson sampling
* Bayesian optimization
# Silent Blue

- **type**: manga
- **volumes**: 1
- **chapters**: 6
- **original-name**: Silent Blue
- **start-date**: 2011-10-08
- **end-date**: 2011-10-08
## Tags
- mystery
- drama
- josei
## Authors
- Andou
- Ikori (Story & Art)
## Synopsis
When Aoko was four, a meteorite struck her town. It then rained for twenty days straight, and the crater became what's now known as the Twenty Day Lake. Two decades later, Aoko can't remember the day the meteorite fell, or anything of her life in the town before. And so she dives deep into the lake, searching for that something in her memories that she can't let go of…
(Source: MU)
## Links
- [My Anime list](https://myanimelist.net/manga/110546/Silent_Blue)
---
title: What's New in PowerShell Core 6.1
description: New features and changes released in PowerShell Core 6.1
ms.date: 09/13/2018
---
# What's New in PowerShell Core 6.1
Below is a selection of some of the major new features and changes that have been introduced
in PowerShell Core 6.1.
There's also **tons** of "boring stuff" that make PowerShell faster and more stable (plus lots and lots of bug fixes)!
For a full list of changes, check out our [changelog on GitHub](https://github.com/PowerShell/PowerShell/blob/master/CHANGELOG.md).
And while we call out some names below, thank you to
[all of the community contributors](https://github.com/PowerShell/PowerShell/graphs/contributors)
that made this release possible.
## .NET Core 2.1
PowerShell Core 6.1 moved to .NET Core 2.1 after it was
[released in May](https://blogs.msdn.microsoft.com/dotnet/2018/05/30/announcing-net-core-2-1/),
resulting in a number of improvements to PowerShell, including:
- performance improvements (see [below](#performance-improvements))
- Alpine Linux support (preview)
- [.NET global tool support](/dotnet/core/tools/global-tools) - coming soon to PowerShell
- [`Span<T>`](/dotnet/api/system.span-1?view=netcore-2.1)
## Windows Compatibility Pack for .NET Core
On Windows, the .NET team shipped the [Windows Compatibility Pack for .NET Core](https://blogs.msdn.microsoft.com/dotnet/2017/11/16/announcing-the-windows-compatibility-pack-for-net-core/),
a set of assemblies that add a number of removed APIs back to .NET Core on Windows.
We've added the Windows Compatibility Pack to the PowerShell Core 6.1 release
so that any modules or scripts that use these APIs can rely on them being available.
The Windows Compatibility Pack enables PowerShell Core to use **more than 1900 cmdlets that ship with Windows 10 October 2018 Update and Windows Server 2019**.
## Support for Application Whitelisting
PowerShell Core 6.1 has parity with Windows PowerShell 5.1 supporting [AppLocker](https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/applocker-overview)
and [Device Guard](https://docs.microsoft.com/en-us/windows/security/threat-protection/device-guard/introduction-to-device-guard-virtualization-based-security-and-windows-defender-application-control) application whitelisting.
Application whitelisting allows granular control over which binaries are allowed to execute when used with PowerShell [Constrained Language mode](https://blogs.msdn.microsoft.com/powershell/2017/11/02/powershell-constrained-language-mode/).
## Performance improvements
PowerShell Core 6.0 made some significant performance improvements.
PowerShell Core 6.1 continues to improve the speed of certain operations.
For example, `Group-Object` has been sped up by 66%:
```powershell
Measure-Command { 1..100000 | % {Get-Random -Minimum 1 -Maximum 10000} | Group-Object }
```
| | Windows PowerShell 5.1 | PowerShell Core 6.0 | PowerShell Core 6.1 |
|--------------|------------------------|---------------------|---------------------|
| Time (sec) | 25.178 | 19.653 | 6.641 |
| Speed-up (%) | N/A | 21.9% | 66.2% |
Similarly, sorting scenarios like this one have improved by more than 15%:
```powershell
Measure-Command { 1..100000 | % {Get-Random -Minimum 1 -Maximum 10000} | Sort-Object }
```
| | Windows PowerShell 5.1 | PowerShell Core 6.0 | PowerShell Core 6.1 |
|--------------|------------------------|---------------------|---------------------|
| Time (sec) | 12.170 | 8.493 | 7.08 |
| Speed-up (%) | N/A | 30.2% | 16.6% |
`Import-Csv` has also been sped up significantly after a regression from Windows PowerShell.
The following example uses a test CSV with 26,616 rows and six columns:
```powershell
Measure-Command {$a = Import-Csv foo.csv}
```
| | Windows PowerShell 5.1 | PowerShell Core 6.0 | PowerShell Core 6.1 |
|--------------|------------------------|---------------------|------------------------|
| Time (sec) | 0.441 | 1.069 | 0.268 |
| Speed-up (%) | N/A | -142.4% | 74.9% (39.2% from WPS) |
Lastly, conversion from JSON into `PSObject` has been sped up by more than 50%
since Windows PowerShell.
The following example uses a ~2MB test JSON file:
```powershell
Measure-Command {Get-Content .\foo.json | ConvertFrom-Json}
```
| | Windows PowerShell 5.1 | PowerShell Core 6.0 | PowerShell Core 6.1 |
|--------------|------------------------|---------------------|------------------------|
| Time (sec) | 0.259 | 0.577 | 0.125 |
| Speed-up (%) | N/A | -122.8% | 78.3% (51.7% from WPS) |
## Check `system32` for compatible in-box modules on Windows
In the Windows 10 1809 update and Windows Server 2019,
we updated a number of in-box PowerShell modules to mark them as compatible with PowerShell Core.
When PowerShell Core 6.1 starts up, it will automatically include `$windir\System32`
as part of the `PSModulePath` environment variable.
However, it only exposes modules to `Get-Module` and `Import-Module`
if their `CompatiblePSEdition` is marked as compatible with `Core`.
```powershell
Get-Module -ListAvailable
```
> [!NOTE]
> You may see different available modules depending on what roles and features are installed.
```Output
...
Directory: C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules
ModuleType Version Name PSEdition ExportedCommands
---------- ------- ---- --------- ----------------
Manifest 2.0.1.0 Appx Core,Desk {Add-AppxPackage, Get-AppxPackage, Get-AppxPacka...
Manifest 1.0.0.0 BitLocker Core,Desk {Unlock-BitLocker, Suspend-BitLocker, Resume-Bit...
Manifest 1.0.0.0 DnsClient Core,Desk {Resolve-DnsName, Clear-DnsClientCache, Get-DnsC...
Manifest 1.0.0.0 HgsDiagnostics Core,Desk {New-HgsTraceTarget, Get-HgsTrace, Get-HgsTraceF...
Binary 2.0.0.0 Hyper-V Core,Desk {Add-VMAssignableDevice, Add-VMDvdDrive, Add-VMF...
Binary 1.1 Hyper-V Core,Desk {Add-VMDvdDrive, Add-VMFibreChannelHba, Add-VMHa...
Manifest 2.0.0.0 NetAdapter Core,Desk {Disable-NetAdapter, Disable-NetAdapterBinding, ...
Manifest 1.0.0.0 NetEventPacketCapture Core,Desk {New-NetEventSession, Remove-NetEventSession, Ge...
Manifest 2.0.0.0 NetLbfo Core,Desk {Add-NetLbfoTeamMember, Add-NetLbfoTeamNic, Get-...
Manifest 1.0.0.0 NetNat Core,Desk {Get-NetNat, Get-NetNatExternalAddress, Get-NetN...
Manifest 2.0.0.0 NetQos Core,Desk {Get-NetQosPolicy, Set-NetQosPolicy, Remove-NetQ...
Manifest 2.0.0.0 NetSecurity Core,Desk {Get-DAPolicyChange, New-NetIPsecAuthProposal, N...
Manifest 1.0.0.0 NetSwitchTeam Core,Desk {New-NetSwitchTeam, Remove-NetSwitchTeam, Get-Ne...
Manifest 1.0.0.0 NetWNV Core,Desk {Get-NetVirtualizationProviderAddress, Get-NetVi...
Manifest 2.0.0.0 TrustedPlatformModule Core,Desk {Get-Tpm, Initialize-Tpm, Clear-Tpm, Unblock-Tpm...
...
```
You can override this behavior to show all modules using the `-SkipEditionCheck` switch parameter.
We've also added a `PSEdition` property to the table output.
```powershell
Get-Module Net* -ListAvailable -SkipEditionCheck
```
```Output
Directory: C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules
ModuleType Version Name PSEdition ExportedCommands
---------- ------- ---- --------- ----------------
Manifest 2.0.0.0 NetAdapter Core,Desk {Disable-NetAdapter, Disable-NetAdapterBinding, ...
Manifest 1.0.0.0 NetConnection Core,Desk {Get-NetConnectionProfile, Set-NetConnectionProf...
Manifest 1.0.0.0 NetDiagnostics Desk Get-NetView
Manifest 1.0.0.0 NetEventPacketCapture Core,Desk {New-NetEventSession, Remove-NetEventSession, Ge...
Manifest 2.0.0.0 NetLbfo Core,Desk {Add-NetLbfoTeamMember, Add-NetLbfoTeamNic, Get-...
Manifest 1.0.0.0 NetNat Core,Desk {Get-NetNat, Get-NetNatExternalAddress, Get-NetN...
Manifest 2.0.0.0 NetQos Core,Desk {Get-NetQosPolicy, Set-NetQosPolicy, Remove-NetQ...
Manifest 2.0.0.0 NetSecurity Core,Desk {Get-DAPolicyChange, New-NetIPsecAuthProposal, N...
Manifest 1.0.0.0 NetSwitchTeam Core,Desk {New-NetSwitchTeam, Remove-NetSwitchTeam, Get-Ne...
Manifest 1.0.0.0 NetTCPIP Core,Desk {Get-NetIPAddress, Get-NetIPInterface, Get-NetIP...
Manifest 1.0.0.0 NetWNV Core,Desk {Get-NetVirtualizationProviderAddress, Get-NetVi...
Manifest 1.0.0.0 NetworkConnectivityStatus Core,Desk {Get-DAConnectionStatus, Get-NCSIPolicyConfigura...
Manifest 1.0.0.0 NetworkSwitchManager Core,Desk {Disable-NetworkSwitchEthernetPort, Enable-Netwo...
Manifest 1.0.0.0 NetworkTransition Core,Desk {Add-NetIPHttpsCertBinding, Disable-NetDnsTransi...
```
For more information about this behavior, check out [PowerShell RFC0025](https://github.com/PowerShell/PowerShell-RFC/blob/master/5-Final/RFC0025-PSCore6-and-Windows-Modules.md).
## Markdown cmdlets and rendering
Markdown is a standard for creating readable plaintext documents with
basic formatting that can be rendered into HTML.
We've added some cmdlets in 6.1 that allow you to convert and render
Markdown documents in the console, including:
- `ConvertFrom-Markdown`
- `Get-MarkdownOption`
- `Set-MarkdownOption`
- `Show-Markdown`
For example, `Show-Markdown` renders a Markdown file in the console:

For more information about how these cmdlets work, check out
[this RFC](https://github.com/PowerShell/PowerShell-RFC/blob/master/5-Final/RFC0025-Native-Markdown-Rendering.md).
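For example, `ConvertFrom-Markdown` can turn a Markdown file into HTML or into VT100-encoded text for console display (the file path below is a placeholder):

```powershell
# Convert a Markdown file to HTML
(ConvertFrom-Markdown -Path ./README.md).Html

# Convert to VT100-encoded text suitable for writing to the console
(ConvertFrom-Markdown -Path ./README.md -AsVT100EncodedString).VT100EncodedString
```

`Show-Markdown` wraps this conversion and writes the rendered result directly to the host.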
## Experimental feature flags
Experimental feature flags enable users to turn on features that haven't been finalized.
These experimental features aren't supported and may have bugs.
You can learn more about this feature in [PowerShell RFC0029](https://github.com/PowerShell/PowerShell-RFC/blob/master/5-Final/RFC0029-Support-Experimental-Features.md).
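As a rough sketch of how a flag gets turned on: experimental features are listed in the `ExperimentalFeatures` array of `powershell.config.json` and take effect at the next startup. The file location and feature name below are illustrative assumptions, not a supported recipe:

```powershell
# Read the system-wide config, add an experimental feature, and write it back.
# Editing $PSHOME\powershell.config.json typically requires elevation.
$configPath = Join-Path $PSHOME 'powershell.config.json'
$config = Get-Content $configPath -Raw | ConvertFrom-Json
$config | Add-Member -NotePropertyName ExperimentalFeatures `
                     -NotePropertyValue @('PSImplicitRemotingBatching') -Force
$config | ConvertTo-Json -Depth 5 | Set-Content $configPath
# Restart PowerShell for the flag to take effect
```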
## Web cmdlet improvements
Thanks to [@markekraus](https://github.com/markekraus), a whole slew of improvements have been made to our web cmdlets:
[`Invoke-WebRequest`](/powershell/module/microsoft.powershell.utility/invoke-webrequest)
and [`Invoke-RestMethod`](/powershell/module/microsoft.powershell.utility/invoke-restmethod).
- [PR #6109](https://github.com/PowerShell/PowerShell/pull/6109) - default encoding set to UTF-8 for `application/json` responses
- [PR #6018](https://github.com/PowerShell/PowerShell/pull/6018) - `-SkipHeaderValidation` parameter to allow `Content-Type` headers that aren't standards-compliant
- [PR #5972](https://github.com/PowerShell/PowerShell/pull/5972) - `Form` parameter to support simplified `multipart/form-data` support
- [PR #6338](https://github.com/PowerShell/PowerShell/pull/6338) - Compliant, case-insensitive handling of relation keys
- [PR #6447](https://github.com/PowerShell/PowerShell/pull/6447) - Add `-Resume` parameter for web cmdlets
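A quick sketch of two of these additions (the URLs and file names are placeholders):

```powershell
# -Form builds a multipart/form-data body from a hashtable;
# a FileInfo value (from Get-Item) becomes a file upload
Invoke-RestMethod -Uri 'https://api.contoso.com/upload' -Method Post -Form @{
    name = 'report'
    file = Get-Item ./report.pdf
}

# -Resume continues a partially downloaded file instead of starting over
Invoke-WebRequest -Uri 'https://downloads.contoso.com/big.iso' -OutFile ./big.iso -Resume
```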
## Remoting improvements
### PowerShell Direct tries to use PowerShell Core first
[PowerShell Direct](/virtualization/hyper-v-on-windows/user-guide/powershell-direct)
is a feature of PowerShell and Hyper-V that allows you to connect to a Hyper-V VM
without network connectivity or other remote management services.
In the past, PowerShell Direct connected using the inbox Windows PowerShell instance on the VM.
Now, PowerShell Direct first attempts to connect using any available `pwsh.exe` on the `PATH` environment variable.
If `pwsh.exe` isn't available, PowerShell Direct falls back to use `powershell.exe`.
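Connecting looks the same as before; only the endpoint selection on the guest changed (the VM name and credentials below are placeholders):

```powershell
# PowerShell Direct session into a local Hyper-V VM; the remote session now
# runs in pwsh.exe on the guest if one is found on PATH, else powershell.exe
Enter-PSSession -VMName 'TestVM' -Credential (Get-Credential)
```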
### `Enable-PSRemoting` now creates separate remoting endpoints for preview versions
`Enable-PSRemoting` now creates two remoting session configurations:
- One for the major version of PowerShell, for example `PowerShell.6`. This endpoint can be relied on across minor version updates as the "system-wide" PowerShell 6 session configuration
- One version-specific session configuration, for example: `PowerShell.6.1.0`
This behavior is useful if you want to have multiple PowerShell 6 versions installed and accessible
on the same machine.
Additionally, preview versions of PowerShell now get their own
remoting session configurations after running the `Enable-PSRemoting` cmdlet:
```powershell
C:\WINDOWS\system32> Enable-PSRemoting
```
Your output may be different if you haven't set up WinRM before.
```Output
WinRM is already set up to receive requests on this computer.
WinRM is already set up for remote management on this computer.
```
Then you can see separate PowerShell session configurations for
the preview and stable builds of PowerShell 6,
and for each specific version.
```powershell
Get-PSSessionConfiguration
```
```Output
Name : PowerShell.6.2-preview.1
PSVersion : 6.2
StartupScript :
RunAsUser :
Permission : NT AUTHORITY\INTERACTIVE AccessAllowed, BUILTIN\Administrators AccessAllowed, BUILTIN\Remote Management Users AccessAllowed
Name : PowerShell.6-preview
PSVersion : 6.2
StartupScript :
RunAsUser :
Permission : NT AUTHORITY\INTERACTIVE AccessAllowed, BUILTIN\Administrators AccessAllowed, BUILTIN\Remote Management Users AccessAllowed
Name : powershell.6
PSVersion : 6.1
StartupScript :
RunAsUser :
Permission : NT AUTHORITY\INTERACTIVE AccessAllowed, BUILTIN\Administrators AccessAllowed, BUILTIN\Remote Management Users AccessAllowed
Name : powershell.6.1.0
PSVersion : 6.1
StartupScript :
RunAsUser :
Permission : NT AUTHORITY\INTERACTIVE AccessAllowed, BUILTIN\Administrators AccessAllowed, BUILTIN\Remote Management Users AccessAllowed
```
### `user@host:port` syntax supported for SSH
SSH clients typically support a connection string in the format `user@host:port`.
With the addition of SSH as a protocol for PowerShell Remoting,
we've added support for this format of connection string:
`Enter-PSSession -HostName fooUser@ssh.contoso.com:2222`
## MSI option to add explorer shell context menu on Windows
Thanks to [@bergmeister](https://github.com/bergmeister), you can now enable an Explorer context menu on Windows that opens your
system-wide installation of PowerShell 6.1 from any folder in Windows Explorer:

## Goodies
### "Run as Administrator" in the Windows shortcut jump list
Thanks to [@bergmeister](https://github.com/bergmeister), the PowerShell Core shortcut's jump list now includes "Run as Administrator":

### `cd -` returns to previous directory
```powershell
C:\Windows\System32> cd C:\
C:\> cd -
C:\Windows\System32>
```
Or on Linux:
```ShellSession
PS /etc> cd /usr/bin
PS /usr/bin> cd -
PS /etc>
```
Also, `cd` and `cd --` change to `$HOME`.
### `Test-Connection`
Thanks to [@iSazonov](https://github.com/iSazonov), the [`Test-Connection`](/powershell/module/microsoft.powershell.management/test-connection)
cmdlet has been ported to PowerShell Core.
### `Update-Help` as non-admin
By popular demand, `Update-Help` no longer needs to be run as an administrator.
`Update-Help` now defaults to saving help to a user-scoped folder.
### New methods/properties on `PSCustomObject`
Thanks to [@iSazonov](https://github.com/iSazonov), we've added new methods and properties to `PSCustomObject`.
`PSCustomObject` now includes a `Count`/`Length` property like other objects.
```powershell
$PSCustomObject = [pscustomobject]@{foo = 1}
$PSCustomObject.Length
```
```Output
1
```
```powershell
$PSCustomObject.Count
```
```Output
1
```
This work also includes `ForEach` and `Where` methods that allow you
to operate and filter on `PSCustomObject` items:
```powershell
$PSCustomObject.ForEach({$_.foo + 1})
```
```Output
2
```
```powershell
$PSCustomObject.Where({$_.foo -gt 0})
```
```Output
foo
---
1
```
### `Where-Object -Not`
Thanks to [@SimonWahlin](https://github.com/SimonWahlin), we've added the `-Not` parameter to `Where-Object`.
Now you can filter an object at the pipeline for the non-existence of a property,
or a null/empty property value.
For example, this command returns all services that don't have any dependent services defined:
```powershell
Get-Service | Where-Object -Not DependentServices
```
### `New-ModuleManifest` creates a BOM-less UTF-8 document
Given our move to BOM-less UTF-8 in PowerShell 6.0,
we've updated the `New-ModuleManifest` cmdlet to create a BOM-less UTF-8 document
instead of a UTF-16 one.
### Conversions from PSMethod to Delegate
Thanks to [@powercode](https://github.com/powercode), we now support the conversion of a `PSMethod` into a delegate.
This allows you to do things like passing `PSMethod` `[M]::DoubleStrLen`
as a delegate value into `[M]::AggregateString`:
```powershell
class M {
static [int] DoubleStrLen([string] $value) { return 2 * $value.Length }
static [long] AggregateString([string[]] $values, [func[string, int]] $selector) {
[long] $res = 0
foreach($s in $values){
$res += $selector.Invoke($s)
}
return $res
}
}
[M]::AggregateString((gci).Name, [M]::DoubleStrLen)
```
For more info on this change, check out [PR #5287](https://github.com/PowerShell/PowerShell/pull/5287).
### Standard deviation in `Measure-Object`
Thanks to [@CloudyDino](https://github.com/CloudyDino), we've added a `StandardDeviation` property to `Measure-Object`:
```powershell
Get-Process | Measure-Object -Property CPU -AllStats
```
```Output
Count : 308
Average : 31.3720576298701
Sum : 9662.59375
Maximum : 4416.046875
Minimum :
StandardDeviation : 264.389544720926
Property : CPU
```
### `GetPfxCertificate -Password`
Thanks to [@maybe-hello-world](https://github.com/maybe-hello-world), `Get-PfxCertificate` now has the `Password` parameter,
which takes a `SecureString`. This allows you to use it non-interactively:
```powershell
$certFile = '\\server\share\pwd-protected.pfx'
$certPass = Read-Host -AsSecureString -Prompt 'Enter the password for certificate: '
$certThumbPrint = (Get-PfxCertificate -FilePath $certFile -Password $certPass ).ThumbPrint
```
### Removal of the `more` function
In the past, PowerShell shipped a function on Windows called `more` that wrapped `more.com`.
That function has now been removed.
Also, the `help` function was changed to use `more.com` on Windows, or the system's default pager
specified by `$env:PAGER` on non-Windows platforms.
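For example, on Linux or macOS you can point `help` at a specific pager (the pager choice here is illustrative):

```powershell
# Use less as the pager for help output on non-Windows platforms
$env:PAGER = 'less'
help Get-Process
```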
### `cd DriveName:` now returns users to the current working directory in that drive
Previously, using `Set-Location` or `cd` to return to a PSDrive sent users
to the default location for that drive.
Thanks to [@mcbobke](https://github.com/mcbobke), users are now sent to the last known current working directory for that session.
### Windows PowerShell type accelerators
In Windows PowerShell, we included the following type accelerators to make it easier to work with their respective types:
- `[adsi]`: `System.DirectoryServices.DirectoryEntry`
- `[adsisearcher]`: `System.DirectoryServices.DirectorySearcher`
- `[wmi]`: `System.Management.ManagementObject`
- `[wmiclass]`: `System.Management.ManagementClass`
- `[wmisearcher]`: `System.Management.ManagementObjectSearcher`
These type accelerators were not included in PowerShell 6, but have been added to PowerShell 6.1 running on Windows.
These type accelerators make it easy to construct AD and WMI objects.
For example, you can query using LDAP:
```powershell
[adsi]'LDAP://CN=FooUse,OU=People,DC=contoso,DC=com'
```
The following example creates a `Win32_OperatingSystem` WMI object:
```powershell
[wmi]"Win32_OperatingSystem=@"
```
```Output
SystemDirectory : C:\WINDOWS\system32
Organization : Contoso IT
BuildNumber : 18234
RegisteredUser : Contoso Corp.
SerialNumber : 12345-67890-ABCDE-F0123
Version : 10.0.18234
```
This example returns a `ManagementClass` object for the `Win32_OperatingSystem` class:
```powershell
[wmiclass]"Win32_OperatingSystem"
```
```Output
NameSpace: ROOT\cimv2
Name Methods Properties
---- ------- ----------
Win32_OperatingSystem {Reboot, Shutdown... {BootDevice, BuildNumber, BuildType, Caption...}
```
### `-lp` alias for all `-LiteralPath` parameters
Thanks to [@kvprasoon](https://github.com/kvprasoon), we now have a parameter alias `-lp` for all the
built-in PowerShell cmdlets that have a `-LiteralPath` parameter.
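For example (the paths are placeholders):

```powershell
# -lp is shorthand for -LiteralPath: the path is taken verbatim,
# so wildcard characters like [ and ] are not expanded
Get-ChildItem -lp 'C:\Temp\[backup]'
Remove-Item   -lp 'C:\Temp\[backup]\notes.txt'
```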
## Breaking Changes
### MSI-based installation paths on Windows
On Windows, the MSI package now installs to the following path:
- `$env:ProgramFiles\PowerShell\6\` for the stable installation of 6.x
- `$env:ProgramFiles\PowerShell\6-preview\` for the preview installation of 6.x
This change ensures that PowerShell Core can be updated/serviced by Microsoft Update.
For more information, check out [PowerShell RFC0026](https://github.com/PowerShell/PowerShell-RFC/blob/master/5-Final/RFC0026-MSI-Installation-Path.md).
### Telemetry can only be disabled with an environment variable
PowerShell Core sends basic telemetry data to Microsoft when it is launched. The data includes the
OS name, OS version, and PowerShell version. This data allows us to better understand the
environments where PowerShell is used and enables us to prioritize new features and fixes.
To opt-out of this telemetry, set the environment variable `POWERSHELL_TELEMETRY_OPTOUT` to `true`,
`yes`, or `1`. We no longer support deletion of the file
`DELETE_ME_TO_DISABLE_CONSOLEHOST_TELEMETRY` to disable telemetry.
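For example, to opt out persistently for the current user on Windows (on Linux/macOS, export the variable from your shell profile instead):

```powershell
# Future PowerShell Core sessions started by this user send no telemetry
[Environment]::SetEnvironmentVariable('POWERSHELL_TELEMETRY_OPTOUT', '1', 'User')
```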
### Disallowed Basic Auth over HTTP in PowerShell Remoting on Unix platforms
To prevent the use of unencrypted traffic, PowerShell Remoting on Unix platforms now requires usage
of NTLM/Negotiate or HTTPS.
For more information on these changes, check out [PR #6799](https://github.com/PowerShell/PowerShell/pull/6799).
### Removed `VisualBasic` as a supported language in `Add-Type`
In the past, you could compile Visual Basic code using the `Add-Type` cmdlet.
Visual Basic was rarely used with `Add-Type`. We removed this feature to reduce the size of PowerShell.
### Cleaned up uses of `CommandTypes.Workflow` and `WorkflowInfoCleaned`
For more information on these changes, check out [PR #6708](https://github.com/PowerShell/PowerShell/pull/6708).
---
title: HostConfiguration
category: feature
authors: ovedo, ybronhei
wiki_category: Feature
wiki_title: Features/HostConfiguration
wiki_revision_count: 18
wiki_last_updated: 2015-01-28
---
# Host Configuration Management
## Summary
oVirt 3.6 lets admin users set host configuration through the UI and API. The VDSM component is the primary interface to the host; editing the file /etc/vdsm/vdsm.conf controls various variables and attributes for storage, network, and virt life-cycle operations, such as communication details, hardware usage, kernel variables, and more. Each attribute belongs to a specific use case, and some are not exposed to modification by any engine API verbs.
This RFE aims to expose additional configuration attributes for flexibility and advanced usage. Users should NOT normally need to change any attribute. Changing or using VDSM internal variables depends on hardware capabilities, storage interfaces, and other specific cases, and such modifications should be made only with ongoing support. Within the scope of this feature we expose a specific host configuration file and allow the admin to modify it and restart the VDSM service, as described below.
In this phase of host configuration management we provide only the option to modify vdsm.conf. The API will provide general approach to manage configuration files. Following phases will be documented as part of <https://bugzilla.redhat.com/show_bug.cgi?id=838096>.
## Owner
* Name: Yaniv Bronheim
* Email: ybronhei@redhat.com
## Current Status
* <https://bugzilla.redhat.com/show_bug.cgi?id=1115171> - Tracks the status and additional discussion regarding the feature.
* Planning design and specification details for the feature.
* Plan to be fully supported as part of oVirt 3.6 release.
## Implementation Alternatives
1. Expose vdsm.conf from the host (via SSH) in the UI. The user will edit it and trigger the change. This will send the result back to VDSM (SSH based protocol, and not API based).
Advantages: Simple, Will support any cluster level
Disadvantages: No validation on fields, The administrator should be familiar with the options to configure (it won't be specified anywhere in the UI and depends on Vdsm implementation - probably will require some level of support (mailing list, IRC, etc...)
2. Expose new API which sends list of fields that are modifiable (and perhaps also the valid values) from VDSM. This will show the user the current values, and allow modification for those verbs.
Advantages: The UI can show the valid and default keys and values. The fields to modify are mandatory, therefore the UI can include description for each of the configurable fields
Disadvantages: Depends on cluster level which includes the new API. Requires specific logic for each field such as validation and conflicts
## The Chosen Approach
The feature will expose the vdsm.conf file, and the modification will be performed by the admin at their own risk. The main assumption is that support guides the user during the change process. No validation will be involved, only replacement of the current configuration file. Configuration at the cluster level won't be supported; to implement such a flow the user would need a manual script (for example, iterate over all the cluster's hosts, move each one to maintenance, use the engine's logic to perform the modification, and activate the host).
## User Flow
* "Advanced Host Configuration" tab will be exposed only through Host options (Right click on Host name).
* When host is not on maintenance the tab will show the content of current vdsm.conf file on host without the option to edit it.
* User requires to put host on maintenance to see the text area field enabled and then will be able to modify it with any content (**On connectivity issues an error label message will be shown**).
* After modifying the field, if user does not click on "Update Configuration" button the changes won't save or send to host at all.
* Clicking on "Update Configuration" will start updating flow -> SSH to host, replacing vdsm.conf content and restart vdsmd service - If the flow fails a popup message will be raised.
* On success the content will be updated and the user can activate the host.
* If Vdsm fails to start, the content will be reverted to last known working content. When entering back to "Advanced Host Configuration" tab the user will be able to see if the changes were committed or reverted.
* Any communication issue or ssh errors will be shown by popup message or error label.
## Implementation Details
### Vdsm Side
A new vdsm-tool verb that takes the path of a new conf file; the verb replaces vdsm.conf with that file's content. The verb doesn't validate the conf file. It saves a backup of the current /etc/vdsm/vdsm.conf, replaces the content, and restarts vdsmd if it was running when the verb was called.
In vdsmd_init_common.sh, if an init task fails, we check whether a backup file for vdsm.conf exists under /etc/vdsm/vdsm.conf.\*\*\*\*\* (the \*\*\*\*\* suffix is a timestamp). If one exists, the init script tries to restore the backup file and start again; once the service starts successfully, the backup file is cleaned up.
From the RHEV-H perspective, there is no need for additional persistence manipulation. vdsm.conf is persisted across restarts and upgrades as part of the current implementation.
### Engine Side
* Introduce GetHostConfigurationCommand which retrieves vdsm.conf content from host by ssh.
* Introduce SetHostConfigurationCommand(content) which by ssh commands replace vdsm.conf content, restart VDSM service. Engine will log the operation in audit log. The command runs only when host in maintenance.
### UX
* Introducing new tab called "Advanced Host Configuration" in Host options - The content of current vdsm.conf file on host will be exposed there in editable text area field. The content will be blocked if host not on maintenance mode.

### API
The REST API will allow modifying configuration files on the host. In the scope of this RFE we introduce the following APIs:
Retrieve the list of configuration file names:

    GET /hosts/{host:id}/configurationfiles
    Accept: application/xml

    <host_configuration_files>
      <host_configuration_file href="..." id="[hex encoding of name field]">
        <name>vdsm.conf</name>
        <encoding>base64</encoding>
      </host_configuration_file>
      ...
    </host_configuration_files>
Searching for a specific file is done with `GET /hosts/{host:id}/configurationfiles?search=name%3Dvdsm.conf`
Retrieve a specific configuration file:

    GET /hosts/{host:id}/configurationfiles/{configurationfile:id}
    Accept: application/xml or text/plain

    <host_configuration_file href="..." id="...">
      <name>vdsm.conf</name>
      <encoding>base64</encoding>
      <content>
      ...
      </content>
    </host_configuration_file>
Replace a configuration file:

    PUT /hosts/{host:id}/configurationfiles/{configurationfile:id}
    Content-Type: application/xml or text/plain

    <host_configuration_file href="..." id="...">
      <name>vdsm.conf</name>
      <encoding>base64</encoding>
      <content>
      ...
      </content>
    </host_configuration_file>
## Open Issues
* Do we need new action group that allows to change host configuration? Although seems like everyone that can edit the host should be able to do that as well
* Notification - how will we show the user that the update failed and we rolled back to the previous file? Currently, the user re-enters the "Advanced Host Configuration" tab and checks whether the content was saved or reverted.
* UX details - such as if we can have freestyle text area in form
### Documentation / External References
# cpp-fps
Tiny C++ tool: Monitor frequencies of things
[](LICENSE)
---
title: API, Kit de développement logiciel (SDK) et ressources Python SQL Azure Cosmos DB
description: Découvrez l’API et le Kit SDK Python SQL, y compris les dates de publication, les dates de suppression et les modifications apportées entre chaque version du Kit SDK Python Azure Cosmos DB.
author: Rodrigossz
ms.service: cosmos-db
ms.subservice: cosmosdb-sql
ms.devlang: python
ms.topic: reference
ms.date: 04/06/2021
ms.author: rosouz
ms.custom: devx-track-python
ms.openlocfilehash: 288c8c918a51bf399a73880e31269fab13edc09a
ms.sourcegitcommit: dcf1defb393104f8afc6b707fc748e0ff4c81830
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/27/2021
ms.locfileid: "123116097"
---
# <a name="azure-cosmos-db-python-sdk-for-sql-api-release-notes-and-resources"></a>Azure Cosmos DB Python SDK for SQL API: Release notes and resources
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
> [!div class="op_single_selector"]
> * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
> * [.NET SDK v2](sql-api-sdk-dotnet.md)
> * [.NET Core SDK v2](sql-api-sdk-dotnet-core.md)
> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md)
> * [Node.js](sql-api-sdk-node.md)
> * [Java SDK v4](sql-api-sdk-java-v4.md)
> * [Async Java SDK v2](sql-api-sdk-async-java.md)
> * [Sync Java SDK v2](sql-api-sdk-java.md)
> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
> * [Spring Data v3](sql-api-sdk-java-spring-v3.md)
> * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md)
> * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
> * [Python](sql-api-sdk-python.md)
> * [REST](/rest/api/cosmos-db/)
> * [REST Resource Provider](/rest/api/cosmos-db-resource-provider/)
> * [SQL](sql-query-getting-started.md)
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md)
> * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
| Page| Link |
|---|---|
|**SDK download**|[PyPI](https://pypi.org/project/azure-cosmos)|
|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/azure.cosmos?preserve-view=true&view=azure-python)|
|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)|
|**Get started**|[Get started with the Python SDK](create-sql-api-python.md)|
|**Currently supported platforms**|[Python 2.7](https://www.python.org/downloads/) and [Python 3.6+](https://www.python.org/downloads/)|
## Release history
## <a name="420"></a>4.2.0
**Bug fixes**
- Fixed bug where the continuation token is not honored when query_iterable is used to get results by page.
- Fixed bug where resource tokens are not honored for document reads and deletes.
**New features**
- Added support for passing `partitionKey` when querying the change feed.
## 4.1.0
- Added a deprecation warning for "lazy" indexing mode. The backend no longer allows creating containers with this mode and will set them to consistent instead.
**New features**
- Added the ability to set the analytical storage TTL when creating a new container.
**Bug fixes**
- Fixed support for `dict`s as inputs for the get_client APIs.
- Fixed Python 2/3 compatibility in query iterators.
- Fixed a type hint error.
- Fixed bug where options headers were not added to the upsert_item function.
- Fixed the error raised when a non-string ID is used in an item. It now raises TypeError rather than AttributeError.
## 4.0.0
* Stable release.
* Added HttpLoggingPolicy to the pipeline to enable passing in a custom logger for request and response headers.
### <a name="400b6"></a>4.0.0b6
* Fixed bug in synchronized_request for media APIs.
* Removed MediaReadMode and MediaRequestTimeout from ConnectionPolicy, since media requests are not supported.
### <a name="400b5"></a>4.0.0b5
* azure.cosmos.errors module deprecated and replaced by azure.cosmos.exceptions.
* Access condition parameters (`access_condition`, `if_match`, `if_none_match`) deprecated in favor of separate `match_condition` and `etag` parameters.
* Fixed bug in the routing map provider.
* Added query support for Distinct, Offset, and Limit.
* Default document query execution context now used for:
  * Change-feed queries
  * Single-partition queries (`partitionkey`, `partitionKeyRangeId` is present in options)
  * Non-document queries.
* Errors out for aggregates on multiple partitions, with "enable cross partition query" set to true, but no "value" keyword present.
* Hits the query plan endpoint for other scenarios to fetch the query plan.
* Added `__repr__` support for Cosmos entity objects.
* Updated documentation.
### <a name="400b4"></a>4.0.0b4
* Added support for a `timeout` keyword argument on all operations to specify an absolute timeout in seconds within which the operation must be completed. If the timeout value is exceeded, an `azure.cosmos.errors.CosmosClientTimeoutError` is raised.
* Added a new `ConnectionRetryPolicy` to manage retry behavior during HTTP connection errors.
* Added new constructor and per-operation configuration keyword arguments:
  * `retry_total` - Maximum retry attempts.
  * `retry_backoff_max` - Maximum retry wait time in seconds.
  * `retry_fixed_interval` - Fixed retry interval in milliseconds.
  * `retry_read` - Maximum number of socket read retry attempts.
  * `retry_connect` - Maximum number of connection error retry attempts.
  * `retry_status` - Maximum number of retry attempts on error status codes.
  * `retry_on_status_codes` - A list of specific status codes to retry on.
  * `retry_backoff_factor` - Factor to calculate wait time between retry attempts.
### <a name="400b3"></a>4.0.0b3
* Added `create_database_if_not_exists()` and `create_container_if_not_exists` functionalities to CosmosClient and Database respectively.
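A minimal sketch of the new idempotent creation helpers (this assumes the `azure-cosmos` 4.x package and a real account; the account URL, key, and resource names below are placeholders):

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; use your own account's values
client = CosmosClient(
    url="https://myaccount.documents.azure.com:443/",
    credential="<primary-key>",
)

# Both calls succeed whether or not the resource already exists,
# so setup code no longer needs to catch CosmosResourceExistsError
database = client.create_database_if_not_exists(id="app-db")
container = database.create_container_if_not_exists(
    id="items",
    partition_key=PartitionKey(path="/pk"),
)
```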
### <a name="400b2"></a>4.0.0b2
* La version 4.0.0b2 est la deuxième itération de nos efforts visant à créer une bibliothèque de client adaptée aux meilleures pratiques en matière de langage Python.
**Dernières modifications**
* La connexion client a été adaptée pour consommer le pipeline HTTP défini dans `azure.core.pipeline`.
* Les objets interactifs ont été renommés en tant que proxys. notamment :
* `Database` -> `DatabaseProxy`
* `User` -> `UserProxy`
* `Container` -> `ContainerProxy`
* `Scripts` -> `ScriptsProxy`
* Le constructeur de `CosmosClient` a été mis à jour :
* The `auth` parameter has been renamed to `credential` and now takes an authentication type directly. This means the primary key value, a dictionary of resource tokens, or a list of permissions can be passed in. However, the old dictionary format is still supported.
* The `connection_policy` parameter has been made a keyword-only parameter, and while it is still supported, each of the individual attributes of the policy can now be passed as explicit keyword arguments:
* `request_timeout`
* `media_request_timeout`
* `connection_mode`
* `media_read_mode`
* `proxy_config`
* `enable_endpoint_discovery`
* `preferred_locations`
* `multiple_write_locations`
* A new constructor has been added to `CosmosClient` to enable creation via a connection string retrieved from the Azure portal.
* Some `read_all` operations have been renamed to `list` operations:
* `CosmosClient.read_all_databases` -> `CosmosClient.list_databases`
* `Container.read_all_conflicts` -> `ContainerProxy.list_conflicts`
* `Database.read_all_containers` -> `DatabaseProxy.list_containers`
* `Database.read_all_users` -> `DatabaseProxy.list_users`
* `User.read_all_permissions` -> `UserProxy.list_permissions`
* All operations that accept `request_options` or `feed_options` parameters have been moved to keyword-only parameters. In addition, while these options dictionaries are still supported, each of the individual options in the dictionary is now supported as an explicit keyword argument.
* The error hierarchy now inherits from `azure.core.AzureError`:
* `HTTPFailure` has been renamed to `CosmosHttpResponseError`
* `JSONParseFailure` has been removed and replaced by `azure.core.DecodeError`
* Added additional errors for specific response codes:
* `CosmosResourceNotFoundError` for status 404
* `CosmosResourceExistsError` for status 409
* `CosmosAccessConditionFailedError` for status 412
* `CosmosClient` can now be run in a context manager to handle closing the client connection.
* Iterable responses (for example, query responses and list responses) are now of type `azure.core.paging.ItemPaged`. The `fetch_next_block` method has been replaced by a secondary iterator, accessible through the `by_page` method.
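The page-at-a-time access pattern described above can be approximated with a plain generator. This is a simplified, hypothetical stand-in for `azure.core.paging.ItemPaged` (the real type pages by server continuation tokens), shown only to illustrate what the `by_page` secondary iterator yields:

```python
def by_page(items, page_size):
    """Yield successive pages of `items`, mimicking ItemPaged.by_page().

    The real SDK fetches pages from the service; this model simply chunks
    an iterable to show the page-at-a-time access pattern.
    """
    page = []
    for item in items:
        page.append(item)
        if len(page) == page_size:
            yield page
            page = []
    if page:  # emit the final partial page
        yield page
```

With the real SDK, a query result can be consumed either item by item, or page by page through its `by_page()` method.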
### <a name="400b1"></a>4.0.0b1
Version 4.0.0b1 is the first preview of our efforts to create a user-friendly client library aligned with Python best practices. For more information about this, and preview releases of other Azure SDK libraries, see https://aka.ms/azure-sdk-preview1-python.
**Breaking changes: new API design**
* Operations are now scoped to a particular client:
* `CosmosClient`: This client handles account-level operations. This includes managing service properties and listing the databases within an account.
* `Database`: This client handles database-level operations. This includes creating and deleting containers, users, and stored procedures. It can be accessed from a `CosmosClient` instance by name.
* `Container`: This client handles operations for a particular container. This includes querying and inserting items and managing properties.
* `User`: This client handles operations for a particular user. This includes adding and deleting permissions and managing user properties.
These clients can be accessed by navigating down the client hierarchy using the `get_<child>_client` method. For full details on the new API, see the [reference documentation](https://aka.ms/azsdk-python-cosmos-ref).
* Clients are accessed by name rather than by ID. There is no longer a need to concatenate strings to build links.
* It is no longer necessary to import types and methods from individual modules. The public API surface is available directly in the `azure.cosmos` package.
* Individual request properties can be provided as keyword arguments rather than by constructing a separate `RequestOptions` instance.
### <a name="302"></a>3.0.2
* Added support for the MultiPolygon datatype
* Bug fix in the session read retry policy
* Bug fix for incorrect padding issues while decoding base64 strings
### <a name="301"></a>3.0.1
* Bug fix in LocationCache
* Bug fix in the endpoint retry logic
* Updated documentation
### <a name="300"></a>3.0.0
* Added multi-region write support
* Naming changes:
* DocumentClient to CosmosClient
* Collection to Container
* Document to Item
* Package name updated to "azure-cosmos"
* Namespace updated to "azure.cosmos"
### <a name="233"></a>2.3.3
* Added support for proxy
* Added support for reading the change feed
* Added support for collection quota headers
* Bug fix for the large session tokens issue
* Bug fix for the ReadMedia API
* Bug fix in the partition-key range cache
### <a name="232"></a>2.3.2
* Added default retries on connection issues.
### <a name="231"></a>2.3.1
* Updated documentation to reference Azure Cosmos DB instead of Azure DocumentDB.
### <a name="230"></a>2.3.0
* This SDK version requires the latest version of the Azure Cosmos DB Emulator, available for download from https://aka.ms/cosmosdb-emulator.
### <a name="221"></a>2.2.1
* Bug fix for the aggregate dictionary
* Bug fix for trimming slashes in the resource link
* Added tests for Unicode encoding
### <a name="220"></a>2.2.0
* Added support for the Request Units per Minute (RU/m) feature.
* Added support for a new consistency level called ConsistentPrefix.
### <a name="210"></a>2.1.0
* Added support for aggregation queries (COUNT, MIN, MAX, SUM, and AVG).
* Added an option for disabling SSL verification when running against the DocumentDB Emulator.
* Removed the restriction that the dependent requests module had to be exactly version 2.10.0.
* Lowered the minimum throughput on partitioned collections from 10,100 RU/s to 2,500 RU/s.
* Added support for enabling script logging during stored procedure execution.
* REST API version bumped to "2017-01-19" with this release.
### <a name="201"></a>2.0.1
* Made editorial changes to documentation comments.
### <a name="200"></a>2.0.0
* Added support for Python 3.5.
* Added support for connection pooling using the requests module.
* Added support for session consistency.
* Added support for TOP/ORDER BY queries for partitioned collections.
### <a name="190"></a>1.9.0
* Added retry policy support for throttled requests. (Throttled requests receive a request-rate-too-large exception, error code 429.) By default, DocumentDB retries nine times for each request when error code 429 is returned, honoring the retryAfter time specified in the response header.
A fixed retry interval can now be set as part of the RetryOptions property on the ConnectionPolicy object if you want to ignore the retryAfter time returned by the server between retries.
DocumentDB now waits a maximum of 30 seconds for each request that is being throttled (regardless of the retry count) and returns the response with error code 429.
This time can also be overridden in the RetryOptions property on the ConnectionPolicy object.
* DocumentDB now returns x-ms-throttle-retry-count and x-ms-throttle-retry-wait-time-ms as response headers in every request to denote the throttle retry count and the cumulative time the request waited between retries.
* Removed the RetryPolicy class and the corresponding property (retry_policy) exposed on the document_client class, and introduced a RetryOptions class that exposes the RetryOptions property on the ConnectionPolicy class, which can be used to override some of the default retry options.
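The throttling retry behavior described in this section can be modeled in plain Python. The sketch below is a simplified illustration of the documented policy (nine retries by default, an optional fixed interval, a 30-second cumulative cap), not the SDK's actual implementation; the parameter names mirror the documented RetryOptions settings but are assumptions here:

```python
def total_backoff(retry_after_ms, max_retry_attempt_count=9,
                  fixed_retry_interval_in_milliseconds=None,
                  max_wait_time_in_seconds=30):
    """Model the throttle (429) retry policy described above.

    Returns (attempts_made, total_wait_ms): retries honor the server's
    retryAfter hint unless a fixed interval is configured, and stop once
    the cumulative wait would exceed the maximum wait time.
    """
    total_ms = 0
    attempts = 0
    for _ in range(max_retry_attempt_count):
        wait = (fixed_retry_interval_in_milliseconds
                if fixed_retry_interval_in_milliseconds is not None
                else retry_after_ms)
        if total_ms + wait > max_wait_time_in_seconds * 1000:
            break  # give up and surface the 429 to the caller
        total_ms += wait
        attempts += 1
    return attempts, total_ms
```

For example, with a server hint of 10 seconds the model stops after three retries, matching the 30-second cumulative cap described above.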
### <a name="180"></a>1.8.0
* Added support for geo-replicated database accounts.
* Test fixes to move the global host and masterKey into the individual test classes.
### <a name="170"></a>1.7.0
* Added the time-to-live (TTL) feature for documents.
### <a name="161"></a>1.6.1
* Bug fixes related to server-side partitioning to allow special characters in the partition-key path.
### <a name="160"></a>1.6.0
* Added support for the server-side partitioned collections feature.
### <a name="150"></a>1.5.0
* Added the client-side partitioning framework to the SDK. Implemented the HashPartitionResolver and RangePartitionResolver classes.
### <a name="142"></a>1.4.2
* Implemented the Upsert operation. Added new UpsertXXX methods to support the Upsert feature.
* Implemented ID-based routing. No public API changes; all changes are internal.
### <a name="130"></a>1.3.0
* Release skipped to bring the version number into alignment with the other SDKs
### <a name="120"></a>1.2.0
* Supports geospatial indexing.
* Validates the ID property for all resources. Resource IDs cannot contain the characters `?, /, #, \\` or end with a space.
* Adds a new "index transformation progress" header to ResourceResponse.
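The ID validation rule above can be expressed as a small check. This helper is illustrative only — it mirrors the documented constraints and is not a function from the SDK:

```python
def is_valid_id(resource_id):
    """Return True if `resource_id` satisfies the documented ID rules:
    no '?', '/', '#', or '\\' characters, and no trailing space."""
    banned = set('?/#\\')
    return (not any(ch in banned for ch in resource_id)
            and not resource_id.endswith(' '))
```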
### <a name="110"></a>1.1.0
* Implements the V2 indexing policy
### <a name="101"></a>1.0.1
* Supports proxy connections
## <a name="release--retirement-dates"></a>Release and retirement dates
Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features, functionality, and optimizations are only added to the current SDK; as such, it is recommended that you always upgrade to the latest SDK version as early as possible.
> [!WARNING]
> After August 31, 2022, Azure Cosmos DB will no longer make bug fixes or provide support for versions 1.x and 2.x of the Azure Cosmos DB Python SDK for the SQL API. If you prefer not to upgrade, requests sent from versions 1.x and 2.x of the SDK will continue to be served by the Azure Cosmos DB service.
| Version | Release date | Retirement date |
| --- | --- | --- |
| [4.2.0](#420) |October 9, 2020 |--- |
| [4.1.0](#410) |August 10, 2020 |--- |
| [4.0.0](#400) |May 20, 2020 |--- |
| [3.0.2](#302) |November 15, 2018 |--- |
| [3.0.1](#301) |October 4, 2018 |--- |
| [2.3.3](#233) |September 8, 2018 |August 31, 2022 |
| [2.3.2](#232) |May 8, 2018 |August 31, 2022 |
| [2.3.1](#231) |December 21, 2017 |August 31, 2022 |
| [2.3.0](#230) |November 10, 2017 |August 31, 2022 |
| [2.2.1](#221) |September 29, 2017 |August 31, 2022 |
| [2.2.0](#220) |May 10, 2017 |August 31, 2022 |
| [2.1.0](#210) |May 1, 2017 |August 31, 2022 |
| [2.0.1](#201) |October 30, 2016 |August 31, 2022 |
| [2.0.0](#200) |September 29, 2016 |August 31, 2022 |
| [1.9.0](#190) |July 7, 2016 |August 31, 2022 |
| [1.8.0](#180) |June 14, 2016 |August 31, 2022 |
| [1.7.0](#170) |April 26, 2016 |August 31, 2022 |
| [1.6.1](#161) |April 8, 2016 |August 31, 2022 |
| [1.6.0](#160) |March 29, 2016 |August 31, 2022 |
| [1.5.0](#150) |January 3, 2016 |August 31, 2022 |
| [1.4.2](#142) |October 6, 2015 |August 31, 2022 |
| 1.4.1 |October 6, 2015 |August 31, 2022 |
| [1.2.0](#120) |August 6, 2015 |August 31, 2022 |
| [1.1.0](#110) |July 9, 2015 |August 31, 2022 |
| [1.0.1](#101) |May 25, 2015 |August 31, 2022 |
| 1.0.0 |April 7, 2015 |August 31, 2022 |
| 0.9.4-prerelease |January 14, 2015 |February 29, 2016 |
| 0.9.3-prerelease |December 9, 2014 |February 29, 2016 |
| 0.9.2-prerelease |November 25, 2014 |February 29, 2016 |
| 0.9.1-prerelease |September 23, 2014 |February 29, 2016 |
| 0.9.0-prerelease |August 21, 2014 |February 29, 2016 |
## <a name="faq"></a>Frequently asked questions (FAQ)
[!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
## <a name="next-steps"></a>Next steps
To learn more about Cosmos DB, see the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
# Azure Sql Database Managed Instance (SQL MI) with Virtual network gateway configured for point-to-site connection inside the new virtual network
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/201-sqlmi-new-vnet-w-point-to-site-vpn/PublicLastTestDate.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/201-sqlmi-new-vnet-w-point-to-site-vpn/PublicDeployment.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/201-sqlmi-new-vnet-w-point-to-site-vpn/FairfaxLastTestDate.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/201-sqlmi-new-vnet-w-point-to-site-vpn/FairfaxDeployment.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/201-sqlmi-new-vnet-w-point-to-site-vpn/BestPracticeResult.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/201-sqlmi-new-vnet-w-point-to-site-vpn/CredScanResult.svg" />
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-sqlmi-new-vnet-w-point-to-site-vpn%2Fazuredeploy.json" target="_blank">
<img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png"/>
</a>
<a href="http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-sqlmi-new-vnet-w-point-to-site-vpn%2Fazuredeploy.json" target="_blank">
<img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/visualizebutton.png"/>
</a>
This template allows you to create an [Azure SQL Database managed instance](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance) inside a new virtual network, with a virtual network gateway configured for point-to-site connections.
`Tags: Azure, SqlDb, Managed Instance, Point-to-Site VPN`
## Solution overview and deployed resources
This deployment will create an Azure virtual network with two subnets, _ManagedInstance_ and _GatewaySubnet_. The managed instance will be deployed in the _ManagedInstance_ subnet. The virtual network gateway will be created in the _GatewaySubnet_ subnet and configured for point-to-site VPN connections.
## Deployment using PowerShell
The easiest way to deploy this template is by running the following PowerShell script. The script creates and configures the VPN certificates and then runs the template deployment.
```powershell
$scriptUrlBase = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-sqlmi-new-vnet-w-point-to-site-vpn'
$parameters = @{
subscriptionId = '<subscriptionId>'
resourceGroupName = '<resourceGroupName>'
location = '<location>'
virtualNetworkName = '<virtualNetworkName>'
managedInstanceName = '<managedInstanceName>'
administratorLogin = '<login>'
administratorLoginPassword = '<password>'
certificateNamePrefix = '<certificateNamePrefix>'
}
Invoke-Command -ScriptBlock ([Scriptblock]::Create((New-Object System.Net.WebClient).DownloadString($scriptUrlBase+'/scripts/deploy.ps1'))) -ArgumentList $parameters, $scriptUrlBase
```
## Deployment from template
You can click the "Deploy to Azure" button at the beginning of this document or follow the instructions for command-line deployment using the scripts in the root of this repo, and populate the following parameters:
- Name of the Managed Instance that will be create including Managed Instance admin name and password
- Public self-signed root certificate data. For detailed information on this and setting up certificates for point-to-site VPN visit the [documentation](https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-certificates-point-to-site)
- Name of the Azure Virtual Network that will be created and configured, including the address range that will be associated to this VNet. Default address range is 10.0.0.0/16 but you could change it to fit your needs.
- Name of the subnet where Managed Instance will be created. The name will be _ManagedInstance_, if you don't want to change it. Default address range is 10.0.0.0/24 but you could change it to fit your needs.
- Address range for _GatewaySubnet_. Default address range is 10.0.1.0/28 but you could change it to fit your needs.
- VPN client address pool prefix - computer that connects via VPN would get address from this pool. This IP range must not overlap with virtual network IP address range. Default address pool prefix is 192.168.0.0/24 but you could change it to fit your needs.
- Sku name that combines service tear and hardware generation, number of virtual cores and storage size in GB. The table below shows supported combinations.
- License type that could be _BasePrice_ if you are eligable for [Azure Hybrid Use Benefit for SQL Server](https://azure.microsoft.com/en-us/pricing/hybrid-benefit/) or _LicenseIncluded_ otherwise
||GP_Gen4|GP_Gen5|BC_Gen4|BC_Gen5|
|----|------|-----|------|-----|
|Tier|General Purpose|General Purpose|Business Critical|Business Critical|
|Hardware|Gen 4|Gen 5|Gen 4|Gen 5|
|Min vCores|8|8|8|8|
|Max vCores|24|80|32|80|
|Min storage size|32|32|32|32|
|Max storage size|8192|8192|1024|1024 GB for 8, 16 vCores<br/>2048 GB for 24 vCores<br/>4096 GB for 32, 40, 64, 80 vCores|
## Important
During the public preview, deployment might take up to 6 hours. This is because the virtual cluster that hosts the instances needs some time to deploy. Each subsequent instance created in the same virtual cluster takes just a few minutes.
After the last managed instance is deprovisioned, the cluster stays alive for up to 24 hours. This is to avoid waiting for a new cluster to be provisioned in case the customer just wants to recreate the instance. During that period, the resource group and virtual network cannot be deleted. This is a known issue, and the Managed Instance team is working on resolving it.
---
site: freiburg
tags: [training]
title: Galaxy-ELIXIR Webinar 5 - Behind the scenes - Global Open Infrastructures at work
starts: 2020-05-28
ends: 2020-05-29
organiser:
name: Galaxy-Elixir
---
**Galaxy-ELIXIR Webinar Series: FAIR data and Open Infrastructures to tackle the COVID-19 pandemic**
The Galaxy Community and ELIXIR have organised a webinar series to demonstrate how open software and public research infrastructures can be used in analysing and publishing SARS-CoV2 data.
### Session 5: Behind the scenes: Global Open Infrastructures at work
28 May 2020, 17.00-18.00 CEST (starts at 16.00 BST, 11.00 EDT, 8.00 PDT) (29 May 2020 @ 1:00am AEST.)
This session will show participants how they can use the Galaxy compute capacities to run their own analyses. It will present the Pulsar network, which connects data centres and High-Performance Computing clusters that share their computation power in support of Galaxy Europe users, and it will provide examples of how to submit an analysis job from the user's perspective.
<a href="https://elixir-europe.org/events/webinar-galaxy-elixir-covid19" target="_blank">Programme Information and Registration</a>
#### Speaker: TBC
---
# inline-critical
Inline critical-path CSS and load the existing stylesheets asynchronously.
Existing link tags will also be wrapped in `<noscript>` so that users with JavaScript disabled will see the site rendered normally.
[![NPM version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Build Status][appveyor-image]][appveyor-url] [![Dependency Status][depstat-image]][depstat-url] [![Download][dlcounter-image]][dlcounter-url] [![Coverage Status][coveralls-image]][coveralls-url]
## Installation
This module is installed via npm:
``` bash
$ npm install inline-critical
```
## Example Usage
``` js
var inline = require('inline-critical');
var html = fs.readFileSync('test/fixtures/index.html', 'utf8');
var critical = fs.readFileSync('test/fixtures/critical.css', 'utf8');
var inlined = inline(html, critical);
```
## Example Usage ignoring stylesheets per regex
``` js
var inline = require('inline-critical');
var html = fs.readFileSync('test/fixtures/index.html', 'utf8');
var critical = fs.readFileSync('test/fixtures/critical.css', 'utf8');
var inlined = inline(html, critical, {
ignore: [/bootstrap/]
});
```
## CLI
inline-critical works well with standard input.
You can either pass in the html
```bash
cat index.html | inline-critical critical.css
```
or just flip things around
```bash
cat critical.css | inline-critical index.html
```
or pass in the file as an option
```bash
inline-critical critical.css index.html
```
without having to worry about the correct order
```bash
inline-critical index.html critical.css
```
Run `inline-critical --help` to see the list of options.
## inline(html, styles, options?)
- `html` is the HTML you want to use to inline your critical styles, or any other styles
- `styles` are the styles you're looking to inline
- `options` is an optional configuration object
- `minify` will minify the styles before inlining (default: true)
- `extract` will remove the inlined styles from any stylesheets referenced in the HTML
- `basePath` will be used when extracting styles to find the files references by `href` attributes
- `ignore` ignores matching stylesheets when inlining.
- `selector` defines the element used by loadCSS as a reference for inlining.
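A sketch combining several of the documented options (the values are hypothetical; `html` and `criticalCss` would be read from files as in the earlier examples, and the commented-out call requires `npm install inline-critical`):

```javascript
// Build an options object using the parameters documented above.
const options = {
  minify: true,                         // minify styles before inlining
  extract: true,                        // strip inlined rules from linked sheets
  basePath: 'dist',                     // resolve href references against ./dist
  ignore: [/bootstrap/, /print\.css$/], // leave these stylesheets untouched
  selector: 'link[rel="stylesheet"]',   // reference element used by loadCSS
};

// const inline = require('inline-critical');
// const inlined = inline(html, criticalCss, options);
```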
## License
MIT
[npm-url]: https://npmjs.org/package/inline-critical
[npm-image]: https://badge.fury.io/js/inline-critical.svg
[travis-url]: https://travis-ci.org/bezoerb/inline-critical
[travis-image]: https://secure.travis-ci.org/bezoerb/inline-critical.svg?branch=master
[appveyor-url]: https://ci.appveyor.com/project/bezoerb/inline-critical/branch/master
[appveyor-image]: https://ci.appveyor.com/api/projects/status/qb9esocjkpp6hw3q/branch/master?svg=true
[depstat-url]: https://david-dm.org/bezoerb/inline-critical
[depstat-image]: https://david-dm.org/bezoerb/inline-critical.svg
[dlcounter-url]: https://www.npmjs.com/package/inline-critical
[dlcounter-image]: https://img.shields.io/npm/dm/inline-critical.svg
[coveralls-url]: https://coveralls.io/github/bezoerb/inline-critical?branch=master
[coveralls-image]: https://coveralls.io/repos/github/bezoerb/inline-critical/badge.svg?branch=master
| 34.326087 | 278 | 0.748575 | eng_Latn | 0.589711 |
---
title: 'Tutorial: Create an Azure Active Directory B2C tenant'
description: In this tutorial, you learn how to prepare for registering your applications by creating an Azure Active Directory B2C tenant in the Azure portal.
services: B2C
author: msmimart
manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: tutorial
ms.date: 07/01/2020
ms.author: mimart
ms.subservice: B2C
ms.openlocfilehash: fbccbcf1ac85b63c5610b9904a84e5e6e3fb6c63
ms.sourcegitcommit: 4f1c7df04a03856a756856a75e033d90757bb635
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 08/07/2020
ms.locfileid: "87922192"
---
# <a name="tutorial-create-an-azure-active-directory-b2c-tenant"></a>Tutorial: Create an Azure Active Directory B2C tenant
Before your applications can interact with Azure Active Directory B2C (Azure AD B2C), they must be registered in a tenant that you manage.
In this article, you learn how to:
> [!div class="checklist"]
> * Create an Azure AD B2C tenant
> * Link your tenant to your subscription
> * Switch to the directory containing your Azure AD B2C tenant
> * Add the Azure AD B2C resource as a **Favorite** in the Azure portal
In the next tutorial, you learn how to register an application.
## <a name="prerequisites"></a>Prerequisites
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## <a name="create-an-azure-ad-b2c-tenant"></a>Create an Azure AD B2C tenant
1. Sign in to the [Azure portal](https://portal.azure.com/). Sign in with an Azure account that has been assigned at least the [Contributor](../role-based-access-control/built-in-roles.md) role within the subscription or a resource group within the subscription.
1. Select the directory that contains your subscription.
In the Azure portal toolbar, select the **Directory and subscription** icon, and then select the directory that contains your subscription. This is a different directory from the one that will contain your Azure AD B2C tenant.

1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
1. Search for **Azure Active Directory B2C**, and then select **Create**.
1. Select **Create a new Azure AD B2C Tenant**.

1. On the **Create a directory** page, enter the following:
- **Organization name**: Enter a name for your Azure AD B2C tenant.
- **Initial domain name**: Enter a domain name for your Azure AD B2C tenant.
- **Country or region**: Select your country or region from the list. This selection can't be changed later.
- **Subscription**: Select your subscription from the list.
- **Resource group**: Select a resource group that will contain the tenant. Or select **Create new**, enter a **Name** for the resource group, select the **Resource group location**, and then select **OK**.

1. Select **Review + create**.
1. Review your directory settings. Then select **Create**.
You can link multiple Azure AD B2C tenants to a single Azure subscription for billing purposes. To link a tenant, you must be an administrator in the Azure AD B2C tenant and be assigned at least the Contributor role within the Azure subscription. For more information, see [Link an Azure AD B2C tenant to a subscription](billing.md#link-an-azure-ad-b2c-tenant-to-a-subscription).
## <a name="select-your-b2c-tenant-directory"></a>Select your B2C tenant directory
To start using your new Azure AD B2C tenant, you need to switch to the directory that contains the tenant.
In the top menu of the Azure portal, select the **Directory and subscription** filter, and then select the directory that contains your Azure AD B2C tenant.
If your new Azure AD B2C tenant doesn't appear in the list at first, refresh your browser window and then select the **Directory and subscription** filter in the top menu again.

## <a name="add-azure-ad-b2c-as-a-favorite-optional"></a>Add Azure AD B2C as a favorite (optional)
This optional step makes it easier to select your Azure AD B2C tenant in the following sections and in all subsequent tutorials.
Instead of searching for *Azure AD B2C* in **All services** every time you want to work with your tenant, you can mark the resource as a favorite. You can then select it in the **Favorites** section of the portal menu to navigate quickly to your Azure AD B2C tenant.
You only need to perform this operation once. Before following these steps, make sure you have switched to the directory containing your Azure AD B2C tenant, as described in the previous section, [Select your B2C tenant directory](#select-your-b2c-tenant-directory).
1. Sign in to the [Azure portal](https://portal.azure.com).
1. On the Azure portal menu, select **All services**.
1. In the **All services** search box, search for **Azure AD B2C**, hover over the search result, and then select the star icon in the tooltip. **Azure AD B2C** now appears in the Azure portal under **Favorites**.
1. If you want to change the position of the new favorite, select **Azure AD B2C** in the Azure portal menu and drag it up or down to the desired position.

## <a name="next-steps"></a>Next steps
In this article, you learned how to:
> [!div class="checklist"]
> * Create an Azure AD B2C tenant
> * Link your tenant to your subscription
> * Switch to the directory containing your Azure AD B2C tenant
> * Add the Azure AD B2C resource as a **Favorite** in the Azure portal
Next, learn how to register a web application in your new tenant.
> [!div class="nextstepaction"]
> [Register your applications](tutorial-register-applications.md)
> Before leaving this world, everything is a process.
Hi, I'm lgd3608.
---
title: Aurelia - rediscover your choice in front-end frameworks
date: 'April 11, 2017'
tags: Aurelia
speaker: Jeff Shinrock
---
It's 2017 and the choice of front-end frameworks seems endless, and yet somehow
not a choice at all. React and Angular dominate every conversation surrounding
mature, robust, and accessible frameworks. But I went searching for something
that values standards over configuration, flexibility over opinion, and
accessibility over memorization. My search ended with Aurelia and I'd like to
introduce you to it.
# IoT Demo - MBA CDP Luiss 2022
Introduction to IoT - A mini IoT system with temperature sensors and an online dashboard

### What you need

2x Raspberry Pi Pico
1x Raspberry Pi 4 Model B
2x USB A male to USB Micro B male cables
1x USB A male to USB C male cable
1x Micro HDMI to HDMI cable
1x 32 GB micro SD card
1x prototyping board (breadboard)
1x red LED + resistor between 50 and 330 ohms
2x M-M jumper leads

### OS, software, and programming languages

- Windows 10
- Linux
- MicroPython
- Python3
- Thonny

### Python3 libraries for the gateway

- pip install paho-mqtt
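As a minimal sketch of how the gateway (the Raspberry Pi 4) could forward temperature readings to the dashboard with this library: the broker host and topic name below are illustrative assumptions, not part of the course material.

```python
import json
import time

BROKER_HOST = "localhost"          # assumption: an MQTT broker reachable by the gateway
TOPIC = "luiss/demo/temperature"   # hypothetical topic name

def build_payload(sensor_id, celsius, ts=None):
    """Serialize one temperature reading as JSON for the dashboard."""
    return json.dumps({
        "sensor": sensor_id,
        "temperature_c": round(celsius, 2),
        "timestamp": ts if ts is not None else time.time(),
    }, sort_keys=True)

def publish_reading(sensor_id, celsius):
    """Publish a single reading; requires `pip install paho-mqtt`."""
    import paho.mqtt.client as mqtt
    # With paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION2 as the
    # first argument to Client().
    client = mqtt.Client()
    client.connect(BROKER_HOST, 1883)
    client.publish(TOPIC, build_payload(sensor_id, celsius))
    client.disconnect()
```

On the Pico side, readings can be sent to the gateway over serial; calling `publish_reading("pico-1", 21.5)` on the gateway would then relay them to the broker the dashboard subscribes to.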
## Contacts

For any information, contact Andrea Coletta at **coletta[AT]di.uniroma1.it**.
# node-urx
Universal Robots client module for Node.js
## Base code
python-urx: https://github.com/SintefManufacturing/python-urx
## Universal Robots
- Homepage: https://www.universal-robots.com/
- URSim(Simulator): https://www.universal-robots.com/download/software-e-series/simulator-linux/offline-simulator-e-series-ur-sim-for-linux-5100/
## Install
```bash
$ npm install @things-factory/node-urx --save
```
## Examples
Run the examples from the examples directory.
### Moving Robot Arm
```javascript
const { UrRobot } = require('@things-factory/node-urx')
;(async function () {
var ur = new UrRobot('192.168.0.34')
await ur.connect()
console.log(await ur.getStatus())
let inputPose = await ur.getl(true)
inputPose[2] += 0.1
await ur.movel(inputPose, 0.1, 0.1, true, false)
inputPose[2] -= 0.1
await ur.movel(inputPose, 0.1, 0.1, true, false)
ur.disconnect()
console.log('done')
})()
})()
## API Documentation
...
## Test
`npm test`.
---
title: Azure Cache for Redis with Azure Private Link (preview)
description: Azure Private Endpoint is a network interface that connects you privately and securely to Azure Cache for Redis powered by Azure Private Link. In this article, you will learn how to create an Azure cache, an Azure virtual network, and a private endpoint using the Azure portal.
author: curib
ms.author: cauribeg
ms.service: cache
ms.topic: conceptual
ms.date: 10/14/2020
ms.openlocfilehash: 22bdf93e7236ae5220a6bb7c6ead898628bb51a1
ms.sourcegitcommit: 772eb9c6684dd4864e0ba507945a83e48b8c16f0
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/19/2021
ms.locfileid: "97007578"
---

# <a name="azure-cache-for-redis-with-azure-private-link-public-preview"></a>Azure Cache for Redis with Azure Private Link (public preview)

In this article, you will learn how to create a virtual network and an Azure Cache for Redis instance with a private endpoint using the Azure portal. You will also learn how to add a private endpoint to an existing Azure Cache for Redis instance.

Azure Private Endpoint is a network interface that connects you privately and securely to Azure Cache for Redis powered by Azure Private Link.

## <a name="prerequisites"></a>Prerequisites

* Azure subscription - [create one for free](https://azure.microsoft.com/free/)

> [!IMPORTANT]
> To use private endpoints, your Azure Cache for Redis instance must have been created after July 28, 2020.
> Currently, geo-replication, firewall rules, portal console support, multiple endpoints per clustered cache, and persistence for firewall and VNet-injected caches are not supported.
>
>

## <a name="create-a-private-endpoint-with-a-new-azure-cache-for-redis-instance"></a>Create a private endpoint with a new Azure Cache for Redis instance

In this section, you will create a new Azure Cache for Redis instance with a private endpoint.

### <a name="create-a-virtual-network"></a>Create a virtual network

1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.

   :::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Select Create a resource.":::

2. On the **New** page, select **Networking**, and then select **Virtual network**.

3. Select **Add** to create a virtual network.

4. In **Create virtual network**, enter or select the following information on the **Basics** tab:

   | Setting | Suggested value | Description |
   | ------------ | ------- | -------------------------------------------------- |
   | **Subscription** | Select your subscription from the drop-down list. | The subscription under which to create this virtual network. |
   | **Resource group** | Select a resource group from the drop-down list, or select **Create new** and enter a new resource group name. | Name of the resource group in which to create your virtual network and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
   | **Name** | Enter a virtual network name. | The name must begin with a letter or number, end with a letter, number, or underscore, and may contain only letters, numbers, underscores, periods, or hyphens. |
   | **Region** | Select a region from the drop-down list. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your virtual network. |

5. Select the **IP Addresses** tab, or select the **Next: IP Addresses** button at the bottom of the page.

6. On the **IP Addresses** tab, specify the **IPv4 address space** as one or more address prefixes in CIDR notation (for example, 192.168.1.0/24).

7. Under **Subnet name**, select **default** to edit the subnet's properties.

8. In the **Edit subnet** pane, specify a **Subnet name** and the **Subnet address range**. The subnet's address range must be in CIDR notation (for example, 192.168.1.0/24) and must be contained within the virtual network's address space.

9. Select **Save**.

10. Select the **Review + create** tab, or select the **Review + create** button.

11. Verify that all the information is correct, and select **Create** to provision the virtual network.

### <a name="create-an-azure-cache-for-redis-instance-with-a-private-endpoint"></a>Create an Azure Cache for Redis instance with a private endpoint

To create a cache instance, follow these steps.

1. Return to the Azure portal home page, or open the sidebar menu, and then select **Create a resource**.

1. On the **New** page, select **Databases**, and then select **Azure Cache for Redis**.

   :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::

1. On the **New Redis Cache** page, configure the settings for your new cache.

   | Setting | Suggested value | Description |
   | ------------ | ------- | -------------------------------------------------- |
   | **DNS name** | Enter a globally unique name. | The cache name must be a string of 1 to 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter and cannot contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
   | **Subscription** | Select your subscription from the drop-down list. | The subscription under which this new Azure Cache for Redis instance is created. |
   | **Resource group** | Select a resource group from the drop-down list, or select **Create new** and enter a new resource group name. | Name of the resource group in which your cache and other resources are created. By putting all your app resources in one resource group, you can easily manage or delete them together. |
   | **Location** | Select a location from the drop-down list. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
   | **Pricing tier** | Select a [pricing tier](https://azure.microsoft.com/pricing/details/cache/) from the drop-down list. | The pricing tier determines the size, performance, and features available for the cache. For more information, see the [Azure Cache for Redis overview](cache-overview.md). |

1. Select the **Networking** tab, or select the **Networking** button at the bottom of the page.

1. On the **Networking** tab, select **Private Endpoint** as the connectivity method.

1. Select the **Add** button to create your private endpoint.

   :::image type="content" source="media/cache-private-link/3-add-private-endpoint.png" alt-text="Under networking, add a private endpoint.":::

1. On the **Create a private endpoint** page, configure the settings for your private endpoint with the virtual network and subnet you created in the last section, and select **OK**.

1. Select the **Next: Advanced** tab, or select the **Next: Advanced** button at the bottom of the page.

1. On the **Advanced** tab of a Basic or Standard cache instance, select the enable toggle if you want to enable a non-TLS port.

1. On the **Advanced** tab of a Premium cache instance, configure the settings for the non-TLS port, clustering, and data persistence.

1. Select the **Next: Tags** tab, or select the **Next: Tags** button at the bottom of the page.

1. Optionally, on the **Tags** tab, enter a name and value if you want to categorize the resource.

1. Select **Review + create**. You're taken to the Review + create tab, where Azure validates your configuration.

1. After the green "Validation passed" message appears, select **Create**.

It takes a while for the cache to be created. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows **Running**, the cache is ready to use.
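Once the status shows **Running**, you can sanity-check connectivity from a virtual machine inside the virtual network. The sketch below is not part of the portal walkthrough; it uses the `redis-py` client, a placeholder cache name, and a placeholder access key, and assumes the default TLS port 6380:

```python
def cache_hostname(dns_name):
    """Full host name for the DNS name chosen when the cache was created."""
    return "{}.redis.cache.windows.net".format(dns_name)

def connect(dns_name, access_key):
    """Open a TLS connection on port 6380; requires `pip install redis`."""
    import redis
    # With a private endpoint, the same host name resolves to the private IP
    # when queried from inside the virtual network.
    return redis.Redis(
        host=cache_hostname(dns_name),
        port=6380,
        password=access_key,
        ssl=True,
    )

# Usage (from a VM in the VNet; name and key are placeholders):
#   r = connect("contoso-cache", "<primary-access-key>")
#   r.set("greeting", "hello")
#   r.get("greeting")
```

If the hostname resolves to a public IP instead of a private one, the VM is not using the virtual network's DNS, which is a common first thing to check.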
> [!IMPORTANT]
>
> There is a `publicNetworkAccess` flag that is `Disabled` by default.
> This flag lets you optionally allow both public and private endpoint access to the cache when set to `Enabled`. When set to `Disabled`, it allows only private endpoint access. You can set the value to `Disabled` or `Enabled` with the following PATCH request. Edit the value to reflect which flag you want for your cache.
> ```http
> PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01
> { "properties": {
>    "publicNetworkAccess":"Disabled"
>    }
> }
> ```
>

> [!IMPORTANT]
>
> To connect to a clustered cache, `publicNetworkAccess` must be set to `Disabled`, and there can be only one private endpoint connection.
>

## <a name="create-a-private-endpoint-with-an-existing-azure-cache-for-redis-instance"></a>Create a private endpoint with an existing Azure Cache for Redis instance

In this section, you will add a private endpoint to an existing Azure Cache for Redis instance.

### <a name="create-a-virtual-network"></a>Create a virtual network

To create a virtual network, follow these steps.

1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.

2. On the **New** page, select **Networking**, and then select **Virtual network**.

3. Select **Add** to create a virtual network.

4. In **Create virtual network**, enter or select the following information on the **Basics** tab:

   | Setting | Suggested value | Description |
   | ------------ | ------- | -------------------------------------------------- |
   | **Subscription** | Select your subscription from the drop-down list. | The subscription under which to create this virtual network. |
   | **Resource group** | Select a resource group from the drop-down list, or select **Create new** and enter a new resource group name. | Name of the resource group in which to create your virtual network and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
   | **Name** | Enter a virtual network name. | The name must begin with a letter or number, end with a letter, number, or underscore, and may contain only letters, numbers, underscores, periods, or hyphens. |
   | **Region** | Select a region from the drop-down list. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your virtual network. |

5. Select the **IP Addresses** tab, or select the **Next: IP Addresses** button at the bottom of the page.

6. On the **IP Addresses** tab, specify the **IPv4 address space** as one or more address prefixes in CIDR notation (for example, 192.168.1.0/24).

7. Under **Subnet name**, select **default** to edit the subnet's properties.

8. In the **Edit subnet** pane, specify a **Subnet name** and the **Subnet address range**. The subnet's address range must be in CIDR notation (for example, 192.168.1.0/24) and must be contained within the virtual network's address space.

9. Select **Save**.

10. Select the **Review + create** tab, or select the **Review + create** button.

11. Verify that all the information is correct, and select **Create** to provision the virtual network.

### <a name="create-a-private-endpoint"></a>Create a private endpoint

To create a private endpoint, follow these steps.

1. In the Azure portal, search for **Azure Cache for Redis**, and press Enter or select it from the search suggestions.

   :::image type="content" source="media/cache-private-link/4-search-for-cache.png" alt-text="Search for Azure Cache for Redis.":::

2. Select the cache instance you want to add a private endpoint to.

3. On the left side of the screen, select **(PREVIEW) Private Endpoint**.

4. Select the **Private Endpoint** button to create your private endpoint.

   :::image type="content" source="media/cache-private-link/5-add-private-endpoint.png" alt-text="Add a private endpoint.":::

5. On the **Create a private endpoint** page, configure the settings for your private endpoint.

   | Setting | Suggested value | Description |
   | ------------ | ------- | -------------------------------------------------- |
   | **Subscription** | Select your subscription from the drop-down list. | The subscription under which to create this private endpoint. |
   | **Resource group** | Select a resource group from the drop-down list, or select **Create new** and enter a new resource group name. | Name of the resource group in which to create your private endpoint and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
   | **Name** | Enter a private endpoint name. | The name must begin with a letter or number, end with a letter, number, or underscore, and may contain only letters, numbers, underscores, periods, or hyphens. |
   | **Region** | Select a region from the drop-down list. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your private endpoint. |

6. Select the **Next: Resource** button at the bottom of the page.

7. On the **Resource** tab, select your subscription, choose the resource type `Microsoft.Cache/Redis`, and then select the cache you want to connect the private endpoint to.

8. Select the **Next: Configuration** button at the bottom of the page.

9. On the **Configuration** tab, select the virtual network and subnet you created in the previous section.

10. Select the **Next: Tags** button at the bottom of the page.

11. Optionally, on the **Tags** tab, enter a name and value if you want to categorize the resource.

12. Select **Review + create**. You're taken to the **Review + create** tab, where Azure validates your configuration.

13. After the green **Validation passed** message appears, select **Create**.

## <a name="faq"></a>FAQ

### <a name="why-cant-i-connect-to-a-private-endpoint"></a>Why can't I connect to a private endpoint?

If your cache is already a VNet-injected cache, private endpoints cannot be used with your cache instance. If your cache instance is using an unsupported feature (listed below), you won't be able to connect to your private endpoint instance. Additionally, cache instances must have been created after July 28, 2020 to use private endpoints.

### <a name="what-features-are-not-supported-with-private-endpoints"></a>What features are not supported with private endpoints?

Geo-replication, firewall rules, portal console support, multiple endpoints per clustered cache, persistence to firewall, and zone redundancy.

### <a name="how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access"></a>How can I change my private endpoint to be disabled or enabled from public network access?

There is a `publicNetworkAccess` flag that is `Disabled` by default. This flag lets you optionally allow both public and private endpoint access to the cache when set to `Enabled`. When set to `Disabled`, it allows only private endpoint access. You can set the value to `Disabled` or `Enabled` with the following PATCH request. Edit the value to reflect which flag you want for your cache.

```http
PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01
{ "properties": {
   "publicNetworkAccess":"Disabled"
   }
}
```
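If you prefer to script this call, the sketch below builds the same management URL and sends the PATCH with the `requests` library. Obtaining a valid Azure AD bearer token (for example, via `az account get-access-token`) is assumed and not shown:

```python
def management_url(subscription, resource_group, cache_name,
                   api_version="2020-06-01"):
    """Build the ARM URL used by the PATCH request above."""
    return ("https://management.azure.com"
            "/subscriptions/{}".format(subscription) +
            "/resourceGroups/{}".format(resource_group) +
            "/providers/Microsoft.Cache/Redis/{}".format(cache_name) +
            "?api-version={}".format(api_version))

def set_public_network_access(subscription, resource_group, cache_name,
                              token, value="Disabled"):
    """Set publicNetworkAccess to 'Disabled' or 'Enabled'.

    Requires `pip install requests` and a bearer token with permission to
    update the cache resource.
    """
    import requests
    resp = requests.patch(
        management_url(subscription, resource_group, cache_name),
        headers={"Authorization": "Bearer {}".format(token)},
        json={"properties": {"publicNetworkAccess": value}},
    )
    resp.raise_for_status()
    return resp.json()
```

The subscription ID, resource group, and cache name are the same placeholders as in the request above; nothing here is specific to private endpoints beyond the flag being set.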
### <a name="are-network-security-groups-nsg-enabled-for-private-endpoints"></a>Are network security groups (NSG) enabled for private endpoints?

No, they are disabled for private endpoints. While subnets containing the private endpoint can have an NSG associated with them, the rules won't take effect on the traffic processed by the private endpoint. You must [disable network policy enforcement](../private-link/disable-private-endpoint-network-policy.md) to deploy private endpoints in a subnet. The NSG is still enforced on other workloads hosted in the same subnet. Routes on any client subnet will use a /32 prefix; changing the default routing behavior requires a similar UDR.

Control traffic by using NSG rules for outbound traffic on source clients. Deploy individual routes with a /32 prefix to override private endpoint routes. NSG flow logs and monitoring information for outbound connections are still supported and can be used.

### <a name="can-i-use-firewall-rules-with-private-endpoints"></a>Can I use firewall rules with private endpoints?

No, this is a current limitation of private endpoints. Private endpoints won't work correctly if firewall rules are configured on the cache.

### <a name="how-can-i-connect-to-a-clustered-cache"></a>How can I connect to a clustered cache?

`publicNetworkAccess` needs to be set to `Disabled`, and there can be only one private endpoint connection.

### <a name="since-my-private-endpoint-instance-is-not-in-my-vnet-how-is-it-associated-with-my-vnet"></a>Since my private endpoint instance is not in my VNet, how is it associated with my VNet?

It is only linked to your VNet. Because it's not in your VNet, NSG rules don't need to be modified for dependent endpoints.

### <a name="how-can-i-migrate-my-vnet-injected-cache-to-a-private-endpoint-cache"></a>How can I migrate my VNet-injected cache to a private endpoint cache?

You will need to delete your VNet-injected cache and create a new cache instance with a private endpoint.

## <a name="next-steps"></a>Next steps

* To learn more about Azure Private Link, see the [Azure Private Link documentation](../private-link/private-link-overview.md).
* To compare various network isolation options for your cache instance, see the [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
---
hdrnav: true
layout: page
title: About
permalink: /about/
navigation_weight: 0
---
### Our Value
Bugmark is a futures market for software issues.
Bugmark pioneers an approach using a two-sided
market, allowing both funders and workers to make
offers that are resolved auction style. We
believe the market signals generated by Bugmark
will be the most efficient way to allocate
resources to high-value issues.
### Bugmark Status
Bugmark is currently an experimental project. We
are conducting market simulations and user
exercises to better understand trading dynamics
and social behavior.
### Our Team
Bugmark has participation from independent
[software contractors][1], [Mozilla
Innovation][2], the [CHAOSS Project][3], and
Incentives Research, an economics consultancy.
[1]: http://mountainviewsmartcontracts.com
[2]: https://wiki.mozilla.org/Innovation
[3]: https://www.linuxfoundation.org/blog/chaoss-project-creates-tools-to-analyze-software-development-and-measure-open-source-community-health/
### Software Assets
To date we have built:
- a trading exchange with a restful API
- a command-line interface
- client-side bots for trading automation
- server-side bots for experimental simulations
- a framework for experimental trading applications
- an ability to trade in digital and fiat currencies
### Q3 Objectives
In Q3 2018, our key objectives are to:
1. begin trading across a half-dozen internal repos
2. develop relationships with our first corporate partners
3. build a customer-communications infrastructure
### Join Us!
We are looking for project contributors and participants!
- web design and usability experts
- web developers with Javascript / ReactJS skills
- back-end developers with Ruby and Elixir skills
- web marketers and business development experts
- people with fin-tech and capital markets expertise
- community managers and developer outreach experts
To get started, post an issue in one of our repos,
or send an email to our lead developer.
---
title: Index object (Word)
keywords: vbawd10.chm2429
f1_keywords:
- vbawd10.chm2429
ms.prod: word
api_name:
- Word.Index
ms.assetid: 6a2aab98-485b-01c3-8d9b-9e108b455e22
ms.date: 06/08/2017
localization_priority: Normal
---
# Index object (Word)
Represents a single index. The **Index** object is a member of the **Indexes** collection. The **[Indexes](Word.indexes.md)** collection includes all the indexes in the specified document.
## Remarks
Use **Indexes** (Index), where Index is the index number, to return a single **Index** object. The index number represents the position of the **Index** object in the document. The following example updates the first index in the active document.
```vb
If ActiveDocument.Indexes.Count >= 1 Then
ActiveDocument.Indexes(1).Update
End If
```
Use the **Add** method to create an index and add it to the **Indexes** collection. The following example creates an index at the end of the active document.
```vb
Set myRange = ActiveDocument.Content
myRange.Collapse Direction:=wdCollapseEnd
ActiveDocument.Indexes.Add Range:=myRange, Type:=wdIndexRunin
```
## See also
[Word Object Model Reference](overview/Word/object-model.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
# GCAPy
A Python library and script to parse GCAP files. GCAP stands for Game CAPture and it is a file-format created by the PSForever project to store recorded game records from PlanetSide.
The library currently only reads, not writes, GCAP files.
GCAPy supports three actions: metadata display, record extraction, and game record statistics. Metadata display shows information about the GCAP file, record extraction carves out selected records, and game record statistics give information about PlanetSide packets.
If you are looking for some example captures to play with, download them from here: http://files.psforever.net/captures/
## Installation
This was tested with Python 2.7 on Mac OSX, Linux, and Windows under Cygwin. Here's the quick install:
$ pip install git+https://github.com/psforever/gcapy.git
## Usage
Display the GCAP metadata
$ gcapy -m file.gcap
Display multiple GCAP metadata
$ gcapy -m file.gcap other-file.gcap
Extract records 1-20 and 45 and display as JSON
$ gcapy -xjr 1-20,45 file.gcap
Extract records from 2255 onwards in binary
$ gcapy -xor 2255- file.gcap
Run `gcapy -h` for the full usage statement.
GCAPy also comes with `gcapy-stats`, which parses GCAP files for statistics on packet types and their frequencies.
If you wanted to aggregate statistics for all GCAP files in the local directory, you would do this
$ gcapy-stats *.gcap
The statistics are output to STDOUT and progress is shown on STDERR. For multiple repeated stats collections,
a cache may be used
$ gcapy-stats --cache gcap.cache *.gcap
$ gcapy-stats --cache gcap.cache *.gcap # will load stats from cache and run much faster
## Notes
If you have downloaded GCAPy from GitHub without using pip, your .py files will have underscrores instead of hyphens. This will change the commands required to use the utility, i.e. gcapy-stats will be gcapy_stats.
| 41.673913 | 267 | 0.763693 | eng_Latn | 0.986463 |
# eventif-android
An Android app for browsing the schedule (talks, courses, etc.) of events.

Built with Android Studio using Kotlin.
---
title: "Compiler Error C2873"
ms.date: "11/04/2016"
f1_keywords: ["C2873"]
helpviewer_keywords: ["C2873"]
ms.assetid: 7a10036b-400e-4364-bd2f-dcd7370c5e28
---
# Compiler Error C2873
'symbol' : symbol cannot be used in a using-declaration
A `using` directive is missing a [namespace](../../cpp/namespaces-cpp.md) keyword. This causes the compiler to misinterpret the code as a [using declaration](../../cpp/using-declaration.md) rather than a [using directive](../../cpp/namespaces-cpp.md#using_directives).
---
layout: post
last_modified_at: 2021-03-30
title: Om Devasura pathaye namaha 11 times
youtubeId: DONj0lhCaS0
---
Om Devasura pathaye namaha
- Who is the lord of Asuras and Devas
{% include youtubePlayer.html id=page.youtubeId %}
[Next]({{ site.baseurl }}{% link split1/_posts/2014-08-04-Om pathye namaha 11 times.md%})
| 12.821429 | 90 | 0.665738 | eng_Latn | 0.518717 |
6fb5c7f855f0b28c82eceebe3fdeea6026c58b49 | 184 | md | Markdown | src/main/resources/docs/description/Generic_Debug_CSSLint.md | ruiteix/codacy-codesniffer | 4633bf6cf096bbba341a63f86e32c7b7637b8b4d | [
"Apache-2.0"
] | 6 | 2016-11-02T18:31:40.000Z | 2021-09-12T10:28:08.000Z | src/main/resources/docs/description/Generic_Debug_CSSLint.md | ruiteix/codacy-codesniffer | 4633bf6cf096bbba341a63f86e32c7b7637b8b4d | [
"Apache-2.0"
] | 57 | 2017-06-13T09:48:12.000Z | 2022-03-29T21:19:06.000Z | src/main/resources/docs/description/Generic_Debug_CSSLint.md | ruiteix/codacy-codesniffer | 4633bf6cf096bbba341a63f86e32c7b7637b8b4d | [
"Apache-2.0"
] | 18 | 2016-09-30T16:27:21.000Z | 2020-06-25T08:09:56.000Z | All css files should pass the basic csslint tests.
Valid: Valid CSS Syntax is used.
```
.foo: { width: 100%; }
```
Invalid: The CSS has a typo in it.
```
.foo: { width: 100 %; }
```
| 15.333333 | 50 | 0.61413 | eng_Latn | 0.895011 |
6fb5e21ebee026f891a5a119f2fc212defc8e405 | 3,645 | md | Markdown | README.md | tmandry/async-fundamentals-initiative | d3e9e0ccf0281a76603b779e9f9ef5a8f2136063 | [
"Apache-2.0",
"MIT"
] | 67 | 2021-09-02T16:16:42.000Z | 2022-03-13T05:07:51.000Z | README.md | tmandry/async-fundamentals-initiative | d3e9e0ccf0281a76603b779e9f9ef5a8f2136063 | [
"Apache-2.0",
"MIT"
] | 3 | 2021-09-09T01:10:37.000Z | 2022-02-01T17:44:06.000Z | README.md | tmandry/async-fundamentals-initiative | d3e9e0ccf0281a76603b779e9f9ef5a8f2136063 | [
"Apache-2.0",
"MIT"
] | 6 | 2021-09-09T01:10:23.000Z | 2022-03-16T00:08:18.000Z | # async fundamentals initiative

## What is this?
This page tracks the work of the async fundamentals [initiative], part of the wg-async-foundations [vision process]! To learn more about what we are trying to do, and to find out the people who are doing it, take a look at the [charter].
[charter]: ./CHARTER.md
[initiative]: https://lang-team.rust-lang.org/initiatives.html
[vision process]: https://rust-lang.github.io/wg-async-foundations/vision.html
## Current status
This is an **umbrella initiative** and, as such, it covers a number of subprojects.
See the [roadmap](./roadmap.md) for a list of individual milestones and their status.
| Subproject | Issue | Progress | State | [Stage] |
|-------------------------------|----------|--------------|-------|----------------|
| async fn | [#50547] | ▰▰▰▰▰ | ✅ | [Stabilized] |
| static async fn in trait | [#91611] | ▰▰▱▱▱ | 🦀 | [Development] |
| dyn async fn in trait | – | ▰▱▱▱▱ | 🦀 | [Proposal] |
| async drop | – | ▰▱▱▱▱ | 🦀 | [Proposal] |
| async closures | – | ▰▱▱▱▱ | 💤 | [Proposal] |
[#50547]: https://github.com/rust-lang/rust/issues/50547
[#91611]: https://github.com/rust-lang/rust/issues/91611
<!-- TODO: Fill these in
[Proposal issue]: (https://github.com/rust-lang/lang-team/)
[Tracking issue]: https://github.com/rust-lang/rust/
-->
[Stage]: https://lang-team.rust-lang.org/initiatives/process/stages.html
[Proposal]: https://lang-team.rust-lang.org/initiatives/process/stages/proposal.html
[Experimental]: https://lang-team.rust-lang.org/initiatives/process/stages/experimental.html
[Development]: https://lang-team.rust-lang.org/initiatives/process/stages/development.html
[Feature complete]: https://lang-team.rust-lang.org/initiatives/process/stages/feature-complete.html
[Stabilized]: https://lang-team.rust-lang.org/initiatives/process/stages/stabilized.html
Key:
* ✅ – phase complete
* 🦀 – phase in progress
* 💤 – phase not started yet
## How Can I Get Involved?
* Check for 'help wanted' issues on this repository!
* If you would like to help with development, please contact the [owner](./charter.md#membership) to find out if there are things that need doing.
* If you would like to help with the design, check the list of active [design discussions](./design-discussions) first.
* If you have questions about the design, you can file an issue, but be sure to check the [FAQ](./FAQ.md) or the [design discussions](./design-discussions) first to see if there is already something that covers your topic.
* If you are using the feature and would like to provide feedback about your experiences, please [open a "experience report" issue][experience-report].
* If you are using the feature and would like to report a bug, please open a regular issue.
We also participate on [Zulip][chat-link], feel free to introduce yourself over there and ask us any questions you have.
[open issues]: /issues
[experience-report]: https://github.com/rust-lang/async-fundamentals-initiative/issues/new?labels=experience-report&template=experience-report.md
[chat-link]: https://rust-lang.zulipchat.com/#narrow/stream/187312-wg-async-foundations
<!-- Should there be a dedicated team? -->
[team-toml]: https://github.com/rust-lang/team/blob/master/teams/wg-async-foundations.toml
## Building Documentation
This repository is also an mdbook project. You can view and build it using the
following command.
```
mdbook serve
```
| 50.625 | 237 | 0.687243 | eng_Latn | 0.876965 |
6fb657c143b1257c8f67c466b77acd774d480463 | 74 | md | Markdown | README.md | liaoerdong/springboot-samples | 0703966e05a1c61dc335f96f3da56f1757a5af22 | [
"Apache-2.0"
] | null | null | null | README.md | liaoerdong/springboot-samples | 0703966e05a1c61dc335f96f3da56f1757a5af22 | [
"Apache-2.0"
] | null | null | null | README.md | liaoerdong/springboot-samples | 0703966e05a1c61dc335f96f3da56f1757a5af22 | [
"Apache-2.0"
] | null | null | null | 作者:廖尔东
项目:springboot整合springcloud全家桶,dubbo,权限,seata等等
本项目仅为个人及本人单位内部学习用途
| 14.8 | 46 | 0.878378 | afr_Latn | 0.274495 |
6fb659e5cb981bed16feed00b6d36ebb10041e5f | 988 | markdown | Markdown | source/_posts/2015-07-01-dump-pe-with-windbg-script.markdown | 0cch/0CChBlog | 232ec67da5bf2207c0d89e445e5d310a946659dc | [
"MIT"
] | 4 | 2015-10-21T09:15:35.000Z | 2021-12-04T08:44:58.000Z | source/_posts/2015-07-01-dump-pe-with-windbg-script.markdown | 0cch/0CChBlog | 232ec67da5bf2207c0d89e445e5d310a946659dc | [
"MIT"
] | 9 | 2020-07-20T02:02:49.000Z | 2021-11-30T02:30:48.000Z | source/_posts/2015-07-01-dump-pe-with-windbg-script.markdown | 0cch/0CChBlog | 232ec67da5bf2207c0d89e445e5d310a946659dc | [
"MIT"
] | null | null | null | ---
author: admin
comments: true
date: 2015-07-01 00:11:15+00:00
layout: post
slug: 'dump-pe-with-windbg-script'
title: Dumping an in-memory PE file with a WinDbg script
categories:
- Tips
---
Recently I have seen some malware that downloads a PE file from the network and then relocates and initializes it directly in memory. To be able to dump such files, I wrote this WinDbg script.
{% codeblock lang:windbg %}
.foreach( place { !address /f:VAR,MEM_PRIVATE,MEM_COMMIT /c:"s -[1]a %1 %2 \"MZ\"" } )
{
ad *
.catch {
r @$t2 = place;
r @$t0 = place;
r @$t1 = @@C++(((ntdll!_IMAGE_DOS_HEADER *)@$t0)->e_lfanew);
r @$t0 = @$t0 + @$t1;
r @$t1 = $vvalid(@$t0, 4);
.if (@@C++(@$t1 && @@C++(((ntdll!_IMAGE_NT_HEADERS *)@$t0)->Signature) == 0x00004550))
{
r @$t1 = @@C++(((ntdll!_IMAGE_NT_HEADERS *)@$t0)->OptionalHeader.SizeOfImage);
.printf "%08x %08x\n", @$t2, @$t1;
aS /x start_addr @$t2
aS /x dump_size @$t1
.block {
aS target_file e:\\${start_addr}.dll
}
.block {
.printf "${target_file}"
.writemem "${target_file}" ${start_addr} L?${dump_size}
}
}
}
}
{% endcodeblock %} | 22.976744 | 88 | 0.597166 | yue_Hant | 0.398655 |
6fb6f0cca3c2241a999a5963f590f8361e9065bc | 1,510 | md | Markdown | src/php/gustav/doc/Dev-API.md | futape/gustav.futape.de | ea9f264a4efd5b5a2a2ad02b1797e2f0037f7638 | [
"CC0-1.0"
] | 1 | 2019-05-13T07:28:47.000Z | 2019-05-13T07:28:47.000Z | src/php/gustav/doc/Dev-API.md | futape/gustav.futape.de | ea9f264a4efd5b5a2a2ad02b1797e2f0037f7638 | [
"CC0-1.0"
] | null | null | null | src/php/gustav/doc/Dev-API.md | futape/gustav.futape.de | ea9f264a4efd5b5a2a2ad02b1797e2f0037f7638 | [
"CC0-1.0"
] | null | null | null | The Dev API contains class member that are not defined as `private` or those that should be but aren't due to technial restrictions for example. Moreover this section doesn't describe the publically available members, rather it describes the *real* members used to implement the public functionality. For example, the Dev API would describe the `__call()` method, while the public API would describe the methods made available using that method.
Please note, that, like the private API's ones, any class members documented in this section may change without releasing a new major version of Gustav. Therfore you should not rely on these members if you wish to update easily to future (non-major) versions.
The Dev API is spread across the following Gustav classes.
+ [Gustav](Dev-API%3A-Gustav)
+ [GustavHooks](Dev-API%3A-GustavHooks)
+ [GustavSrc](Dev-API%3A-GustavSrc)
+ [GustavSrcHooks](Dev-API%3A-GustavSrcHooks)
+ [GustavDest](Dev-API%3A-GustavDest)
+ [GustavDestHooks](Dev-API%3A-GustavDestHooks)
+ [GustavContent](Dev-API%3A-GustavContent)
+ [GustavContentHooks](Dev-API%3A-GustavContentHooks)
+ [GustavBlock](Dev-API%3A-GustavBlock)
+ [GustavBlockHooks](Dev-API%3A-GustavBlockHooks)
+ [GustavMatch](Dev-API%3A-GustavMatch)
+ [GustavMatchHooks](Dev-API%3A-GustavMatchHooks)
+ [GustavGenerator](Dev-API%3A-GustavGenerator)
+ [GustavGeneratorHooks](Dev-API%3A-GustavGeneratorHooks)
+ [GustavBase](Dev-API%3A-GustavBase)
+ [GustavBaseHooks](Dev-API%3A-GustavBaseHooks)
| 71.904762 | 447 | 0.771523 | eng_Latn | 0.861433 |
6fb823e5a0d704363d7f89ce9ef51c75c2e0497f | 1,509 | md | Markdown | docs/media/diagrams/README.md | donaldrich80/function-as-a-container | 098c53c3300d5c323e70762234b112f6cdde4f08 | [
"MIT"
] | 2 | 2021-02-07T05:33:02.000Z | 2021-04-01T03:01:05.000Z | docs/media/diagrams/README.md | donaldrich80/function-as-a-container | 098c53c3300d5c323e70762234b112f6cdde4f08 | [
"MIT"
] | null | null | null | docs/media/diagrams/README.md | donaldrich80/function-as-a-container | 098c53c3300d5c323e70762234b112f6cdde4f08 | [
"MIT"
] | 2 | 2020-11-04T05:55:13.000Z | 2021-02-07T05:33:05.000Z | ---
path: tree/master
source: media/diagrams/Dockerfile
---
# diagrams
[](https://hub.docker.com/r/donaldrich/function/diagrams)
## Documentation
### Diagrams
- [:octicons-book-16: Docs](https://diagrams.mingrammer.com)
- [:octicons-mark-github-16: mingrammer/diagrams](https://github.com/mingrammer/diagrams)
## Image Commands
### Container shell
```sh
docker pull donaldrich/function:diagrams
docker run -it --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$(pwd)":"/work" -w "/work" \
--hostname=diagrams \
--entrypoint="/bin/sh" \
--net="host" \
donaldrich/function:diagrams
```
### Check versions
```sh
docker pull donaldrich/function:diagrams && docker run -it --rm donaldrich/function:diagrams validate
```
### See config options
```sh
docker pull donaldrich/function:diagrams && docker run -it --rm donaldrich/function:diagrams help
```
### Dive into Image
```sh
docker pull donaldrich/function:diagrams && dive donaldrich/function:diagrams
```
### See Layer Info
```sh
docker pull donaldrich/function:diagrams && docker history donaldrich/function:diagrams
```
## Image Details
??? info ""
=== "Image"
```json
--8<--
image-info/diagrams.md
--8<--
```
=== "Layers"
```md
--8<--
layers/diagrams.md
--8<--
```
| 20.391892 | 230 | 0.66004 | yue_Hant | 0.574189 |
6fb91738fb1f02da8186aeaca0e8451f22674d0a | 4,938 | md | Markdown | docs/420-ESP8266-diverse-projects.md | NelisW/myOpenHab | f140112fcbc5798d7e2d19200d96bcd91318a98d | [
"CC0-1.0"
] | 31 | 2015-08-15T16:29:03.000Z | 2020-05-26T14:16:41.000Z | docs/420-ESP8266-diverse-projects.md | NelisW/myOpenHab | f140112fcbc5798d7e2d19200d96bcd91318a98d | [
"CC0-1.0"
] | 3 | 2016-08-13T19:20:48.000Z | 2017-07-14T09:04:56.000Z | docs/420-ESP8266-diverse-projects.md | NelisW/myOpenHab | f140112fcbc5798d7e2d19200d96bcd91318a98d | [
"CC0-1.0"
] | 8 | 2016-02-29T22:59:06.000Z | 2018-06-06T00:51:22.000Z | # ESP8266 Diverse Projects
## Web-based ESP environments
A somewhat confusing [post](http://www.instructables.com/id/ESP8266-based-web-configurable-wifi-general-purpos-1/)
The [ESPEasy project](http://www.esp8266.nu/index.php/Main_Page)
## Miscellaneous projects
<http://horaciobouzas.com/> has a number of project posts, some of which are in the amateur radio context. The author also sells PCBs for some of his projects.
[Connecting two ESP8266](http://randomnerdtutorials.com/how-to-make-two-esp8266-talk/), one as access point and one as station
[Using Blynk with ESP8266-as-Arduino-Uno-wifi](http://www.instructables.com/id/Connect-to-Blynk-using-ESP8266-as-Arduino-Uno-wifi/)
[Wireless logger ESP8266 NodeMCU v1.0 with Arduino IDE](http://www.instructables.com/id/ESP8266-NodeMCU-v10-ESP12-E-with-Arduino-IDE/)
## Weather station with DS18B20, DHT11 and BMP085
http://internetofhomethings.com/homethings/?p=345
http://www.survivingwithandroid.com/2016/04/iot-arduino-programming-monitor-environment.html
## Intro with remote power switching
http://www.instructables.com/id/An-inexpensive-IoT-enabler-using-ESP8266/?ALLSTEPS
## ESP with DHT and display
https://nathan.chantrell.net/20141230/wifi-mqtt-display-with-the-esp8266
https://github.com/nathanchantrell/esp_mqtt_oled
## ESPwifiwebserver
https://github.com/iot-playground/Arduino/blob/master/ESP8266ArduinoIDE/WiFiWebServer/WiFiWebServer.ino
This sketch demonstrates how to set up a simple HTTP-like server. The server sets a GPIO pin depending on the request:
- http://server_ip/gpio/0 sets GPIO2 low
- http://server_ip/gpio/1 sets GPIO2 high
server_ip is the IP address of the ESP8266 module and is printed to the serial output when the module is connected.
## WiFiManager
<https://github.com/tzapu/WiFiManager>
ESP8266 WiFi Connection manager with fallback web configuration portal
The configuration portal is of the captive variety, so on various devices it will present the configuration dialogue as soon as you connect to the created access point.
- when your ESP starts up, it sets it up in Station mode and tries to connect to a previously saved Access Point
- if this is unsuccessful (or no previous network saved) it moves the ESP into Access Point mode and spins up a DNS and WebServer (default ip 192.168.4.1)
- using any wifi enabled device with a browser (computer, phone, tablet) connect to the newly created Access Point
- because of the Captive Portal and the DNS server you will either get a 'Join to network' type of popup or get any domain you try to access redirected to the configuration portal
- choose one of the access points scanned, enter password, click save
- ESP will try to connect. If successful, it relinquishes control back to your app. If not, reconnect to AP and reconfigure.
## Simple Arduino IDE example
https://dzone.com/articles/its-time-develop-applications
Flashes two LEDs.
## ESPEasy
http://sourceforge.net/projects/espeasy/?source=typ_redirect
http://www.esp8266.com/viewtopic.php?f=29&t=4540
http://www.esp8266.nu/forum/viewtopic.php?t=37
http://www.whatimade.today/esp8266-on-websockets-mdns-ota-and-leds/
## OTA house project
http://www.whatimade.today/esp8266-on-websockets-mdns-ota-and-leds/
ESP8266 - On Websockets, mdns, OTA and LEDS
So what does this project consist of?
- An ESP8266 which controls a 7-meter, 5050 RGB 12V LED strip, 60 LEDs/meter.
- The ESP8266 is connected to the house router via WiFi.
- The ESP8266 receives data through a websocket on port 81, which means it can be controlled from any browser/OS.
- It supports mdns, so no need for IP address to communicate with the ESP8266 (Partially at the moment).
- It supports OTA (Over-The-Air) updates - So no need to connect it to the computer to update the code.
## iot-playground
http://iot-playground.com/
http://iot-playground.com/download
http://iot-playground.com/blog/2-uncategorised/67-arduino-esp8266-ide
http://iot-playground.com/blog/2-uncategorised/35-esp8266-firmware-update
http://iot-playground.com/blog/2-uncategorised/15-esp8266-wifi-temperature-and-humidity-sensor
http://iot-playground.com/blog/2-uncategorised/21-esp8266-wifi-air-pressure-and-weather-forecast-sensor
http://iot-playground.com/blog/2-uncategorised/41-esp8266-ds18b20-temperature-sensor-arduino-ide
http://iot-playground.com/blog/2-uncategorised/74-esp8266-wifi-pir-motion-sensor-easyiot-cloud-rest-api
## internetofhomethings
http://internetofhomethings.com/homethings/?page_id=207
## Weather station
https://github.com/squix78/platformio-test
## Diverse Projects
[Low power sensor board](https://github.com/z2amiller/sensorboard).
<http://randomnerdtutorials.com/7-weekend-projectstutorials-for-the-esp8266-wifi-module/>
<http://randomnerdtutorials.com/how-to-control-your-esp8266-from-anywhere-in-the-world/>
http://blog.thethings.io/connect-esp8266-to-the-internet-at-thethings-io/
| 37.984615 | 180 | 0.782503 | eng_Latn | 0.740552 |
6fb939d40a25bbc4cccdf9594f862af230953acf | 16,268 | md | Markdown | wdk-ddi-src/content/wdfio/nf-wdfio-wdfioqueuefindrequest.md | tianye606/windows-driver-docs-ddi | 23fec97f3ed3a0c99b117543982d34ee592501e7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/wdfio/nf-wdfio-wdfioqueuefindrequest.md | tianye606/windows-driver-docs-ddi | 23fec97f3ed3a0c99b117543982d34ee592501e7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/wdfio/nf-wdfio-wdfioqueuefindrequest.md | tianye606/windows-driver-docs-ddi | 23fec97f3ed3a0c99b117543982d34ee592501e7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:wdfio.WdfIoQueueFindRequest
title: WdfIoQueueFindRequest function (wdfio.h)
description: The WdfIoQueueFindRequest method locates the next request in an I/O queue, or the next request that matches specified criteria, but does not grant ownership of the request to the driver.
old-location: wdf\wdfioqueuefindrequest.htm
tech.root: wdf
ms.assetid: 379fc7ec-577a-48a4-83b0-4be4e8cfe1bf
ms.date: 02/26/2018
ms.keywords: DFQueueObjectRef_c0d57542-6256-4502-ad31-8b388857296f.xml, WdfIoQueueFindRequest, WdfIoQueueFindRequest method, kmdf.wdfioqueuefindrequest, wdf.wdfioqueuefindrequest, wdfio/WdfIoQueueFindRequest
ms.topic: function
f1_keywords:
- "wdfio/WdfIoQueueFindRequest"
req.header: wdfio.h
req.include-header: Wdf.h
req.target-type: Universal
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver: 1.0
req.umdf-ver: 2.0
req.ddi-compliance: DriverCreate, KmdfIrql, KmdfIrql2, wdfioqueuefindrequestfailed, wdfioqueueretrievefoundrequest, wdfioqueueretrievenextrequest
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Wdf01000.sys (KMDF); WUDFx02000.dll (UMDF)
req.dll:
req.irql: <= DISPATCH_LEVEL
topic_type:
- APIRef
- kbSyntax
api_type:
- LibDef
api_location:
- Wdf01000.sys
- Wdf01000.sys.dll
- WUDFx02000.dll
- WUDFx02000.dll.dll
api_name:
- WdfIoQueueFindRequest
product:
- Windows
targetos: Windows
req.typenames:
---
# WdfIoQueueFindRequest function
## -description
<p class="CCE_Message">[Applies to KMDF and UMDF]</p>
The <b>WdfIoQueueFindRequest</b> method locates the next request in an I/O queue, or the next request that matches specified criteria, but does not grant <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/request-ownership">ownership</a> of the request to the driver.
## -parameters
### -param Queue [in]
A handle to a framework queue object.
### -param FoundRequest [in, optional]
A request object handle that the driver received from a previous call to <b>WdfIoQueueFindRequest</b>. This parameter is optional and can be <b>NULL</b>.
### -param FileObject [in, optional]
A handle to a framework file object. This parameter is optional and can be <b>NULL</b>.
### -param Parameters [in, out]
A pointer to a driver-allocated <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdfrequest/ns-wdfrequest-_wdf_request_parameters">WDF_REQUEST_PARAMETERS</a> structure that receives parameters that are associated with the found request. This parameter is optional and can be <b>NULL</b>.
### -param OutRequest [out]
A pointer to a location that receives a handle to the found request. If no match is found, the location receives <b>NULL</b>.
## -returns
<b>WdfIoQueueFindRequest</b> returns STATUS_SUCCESS if the operation succeeds. Otherwise, this method might return one of the following values:
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_INVALID_PARAMETER</b></dt>
</dl>
</td>
<td width="60%">
The driver supplies an invalid handle.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_NOT_FOUND</b></dt>
</dl>
</td>
<td width="60%">
The request that is identified by the <i>FoundRequest</i> parameter cannot be found in the I/O queue.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_NO_MORE_ENTRIES</b></dt>
</dl>
</td>
<td width="60%">
The framework reached the end of the I/O queue without finding a request that matches the search criteria.
</td>
</tr>
</table>
This method also might return other <a href="https://docs.microsoft.com/windows-hardware/drivers/kernel/ntstatus-values">NTSTATUS values</a>.
A bug check occurs if the driver supplies an invalid object handle.
## -remarks
The <b>WdfIoQueueFindRequest</b> method searches a specified I/O queue and attempts to find an I/O request.
Your driver can call <b>WdfIoQueueFindRequest</b> only if the driver is using the manual <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/dispatching-methods-for-i-o-requests">dispatching method</a> for the specified I/O queue.
If <i>FileObject</i> is not <b>NULL</b>, <b>WdfIoQueueFindRequest</b> only examines requests that are associated with the specified file object handle.
If <i>FoundRequest</i> is <b>NULL</b>, this method locates the first request in the I/O queue that matches the <i>FileObject</i> value. If <i>FoundRequest</i> is not <b>NULL</b>, the method begins searching at the request that is identified by <i>FoundRequest</i>. To create an iterative loop, specify <b>NULL</b> for the first call, and then use the returned handle as the <i>FoundRequest</i> parameter for subsequent calls.
If <i>Parameters</i> is not <b>NULL</b>, this method copies the found request's parameters into the driver-supplied structure.
Every call to <b>WdfIoQueueFindRequest</b> that returns STATUS_SUCCESS increments the reference count of the request object whose handle is returned in <i>OutRequest</i>. Therefore, your driver must call <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/wdfobjectdereference">WdfObjectDereference</a> after you have finished using the handle.
Calling <b>WdfIoQueueFindRequest</b> does <i>not</i> grant the driver <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/request-ownership">ownership</a> of any requests. If you want your driver to obtain ownership of a request so that it can process the request, the driver must call <a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/kmdf-wdfioqueueretrievefoundrequest">WdfIoQueueRetrieveFoundRequest</a>. In fact, the driver can do only the following with the handle that it receives for the <i>OutRequest</i> parameter:
<ul>
<li>
Use it as the <i>FoundRequest</i> parameter in a subsequent call to <b>WdfIoQueueFindRequest</b>.
</li>
<li>
Use it as the <i>FoundRequest</i> parameter in a subsequent call to <a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/kmdf-wdfioqueueretrievefoundrequest">WdfIoQueueRetrieveFoundRequest</a>.
</li>
<li>
Use it as the input parameter in a subsequent call to <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/wdfobjectgettypedcontext">WdfObjectGetTypedContext</a> or a driver-defined method for accessing the object's <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/framework-object-context-space">context space</a>.
</li>
<li>
Use it as the input parameter to <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/wdfobjectdereference">WdfObjectDereference</a>.
</li>
</ul>
If a call to <b>WdfIoQueueFindRequest</b> returns STATUS_NOT_FOUND, a request that was previously in the queue has been removed. The request might have been canceled. A call to <a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/kmdf-wdfioqueueretrievefoundrequest">WdfIoQueueRetrieveFoundRequest</a> can also return STATUS_NOT_FOUND.
For more information about the <b>WdfIoQueueFindRequest</b> method, see <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/managing-i-o-queues">Managing I/O Queues</a>.
#### Examples
<b>Example 1</b>
The following code example is from the <a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/sample-kmdf-drivers">PCIDRV</a> sample driver. This example searches an I/O queue for a request that contains a specified I/O function code. If a matching request is found, the example calls <a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/kmdf-wdfioqueueretrievefoundrequest">WdfIoQueueRetrieveFoundRequest</a>.
```cpp
NTSTATUS
NICGetIoctlRequest(
IN WDFQUEUE Queue,
IN ULONG FunctionCode,
OUT WDFREQUEST* Request
)
{
NTSTATUS status = STATUS_UNSUCCESSFUL;
WDF_REQUEST_PARAMETERS params;
WDFREQUEST tagRequest;
WDFREQUEST prevTagRequest;
WDF_REQUEST_PARAMETERS_INIT(¶ms);
*Request = NULL;
prevTagRequest = tagRequest = NULL;
do {
WDF_REQUEST_PARAMETERS_INIT(¶ms);
status = WdfIoQueueFindRequest(
Queue,
prevTagRequest,
NULL,
¶ms,
&tagRequest
);
if (prevTagRequest) {
WdfObjectDereference(prevTagRequest);
}
if (status == STATUS_NO_MORE_ENTRIES) {
status = STATUS_UNSUCCESSFUL;
break;
}
if (status == STATUS_NOT_FOUND) {
//
// The prevTagRequest request has disappeared from the
// queue. There might be other requests that match
// the criteria, so restart the search.
//
prevTagRequest = tagRequest = NULL;
continue;
}
if (!NT_SUCCESS(status)) {
status = STATUS_UNSUCCESSFUL;
break;
}
if (FunctionCode == params.Parameters.DeviceIoControl.IoControlCode){
//
// Found a match. Retrieve the request from the queue.
//
status = WdfIoQueueRetrieveFoundRequest(
Queue,
tagRequest,
Request
);
WdfObjectDereference(tagRequest);
if (status == STATUS_NOT_FOUND) {
//
// The tagRequest request has disappeared from the
// queue. There might be other requests that match
// the criteria, so restart the search.
//
prevTagRequest = tagRequest = NULL;
continue;
}
if (!NT_SUCCESS(status)) {
status = STATUS_UNSUCCESSFUL;
break;
}
//
// Found a request.
//
ASSERT(*Request == tagRequest);
status = STATUS_SUCCESS;
break;
} else {
//
// This request is not the correct one. Drop the reference
// on the tagRequest after the driver obtains the next request.
//
prevTagRequest = tagRequest;
continue;
}
} while (TRUE);
return status;
}
```
<b>Example 2</b>
The following code example shows how you can create a general-purpose search routine that calls a search-specific subroutine. If your driver must search one or more queues for multiple types of information, you can provide multiple search-specific subroutines. Each time that your driver calls the general-purpose search routine, it specifies the address of one of your search-specific subroutines.
```cpp
//
// Type declaration for the driver's search-specific subroutines.
//
typedef BOOLEAN (*PFN_CALLBACK_COMPARE)(WDFREQUEST, ULONG);
//
// General-purpose search routine. One of the routine's
// parameters is the address of a search-specific
// subroutine. The search routine calls back to the
// subroutine.
//
WDFREQUEST
FindRequestWithMatchingData(
__in WDFQUEUE Queue,
__in PFN_CALLBACK_COMPARE CallbackCompare,
__in ULONG Data
)
{
WDFREQUEST prevTagRequest = NULL;
WDFREQUEST tagRequest = NULL;
WDFREQUEST outRequest = NULL;
NTSTATUS status = STATUS_INVALID_DEVICE_REQUEST;
PAGED_CODE();
do {
status = WdfIoQueueFindRequest(Queue,
prevTagRequest,
NULL,
NULL,
&tagRequest);
if (prevTagRequest) {
//
// WdfIoQueueFindRequest incremented the
// reference count of the prevTagRequest object,
// so we decrement the count here.
//
WdfObjectDereference(prevTagRequest);
}
if (status == STATUS_NO_MORE_ENTRIES) {
KdPrint(("WdfIoQueueFindRequest returned status 0x%x\n", status));
break;
}
if (status == STATUS_NOT_FOUND) {
//
// The prevTagRequest object is no longer
// in the queue.
//
prevTagRequest = tagRequest = NULL;
continue;
}
if ( !NT_SUCCESS(status)) {
KdPrint(("WdfIoQueueFindRequest failed 0x%x\n", status));
break;
}
//
// We have a handle to the next request that is
// in the queue. Now we call the subroutine
// that determines if this request matches our
// search criteria.
//
if (CallbackCompare(tagRequest, Data)) {
//
// We found a match. Get the request handle.
//
status = WdfIoQueueRetrieveFoundRequest(Queue,
tagRequest,
&outRequest);
//
// WdfIoQueueRetrieveFoundRequest incremented the
// reference count of the TagRequest object,
// so we decrement the count here.
//
WdfObjectDereference(tagRequest);
if (status == STATUS_NOT_FOUND) {
//
// The TagRequest object is no longer
// in the queue. But other requests might
// match our criteria, so we restart the search.
//
prevTagRequest = tagRequest = NULL;
continue;
}
if (!NT_SUCCESS(status)) {
KdPrint(("WdfIoQueueRetrieveFoundRequest failed 0x%x\n",
status));
}
//
// We found the request we were looking for.
//
break;
} else {
//
// The request did not match our criteria.
// Get another request.
//
prevTagRequest = tagRequest;
continue;
}
} while(TRUE);
return outRequest;
}
/
// An example of a driver's search-specific subroutine.
// Your driver can have multiple subroutines to handle
// multiple types of searches.
//
BOOLEAN
CallbackCheckForInfo1(
__in WDFREQUEST Request,
__in ULONG DataToBeMatched
)
{
PREQUEST_CONTEXT reqContext;
PAGED_CODE();
//
// Retrieve information that the driver has stored
// in the request object's context space.
//
reqContext = GetRequestContext(Request);
if (reqContext->ContextInfo1 == DataToBeMatched) {
return TRUE;
}
else {
return FALSE;
}
}
//
// This code shows a call to the FindRequestWithMatchingData routine.
//
WDFREQUEST matchedRequest = NULL;
...
matchedRequest = FindRequestWithMatchingData(readQueue,
CallbackCheckForInfo1,
INFO_VALUE);
if (matchedRequest != NULL) {
//
// Found a request with a context value of INFO_VALIUE.
//
...
}
...
```
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdfrequest/ns-wdfrequest-_wdf_request_parameters">WDF_REQUEST_PARAMETERS</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/kmdf-wdfioqueueretrievefoundrequest">WdfIoQueueRetrieveFoundRequest</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdfio/nf-wdfio-wdfioqueuestop">WdfIoQueueStop</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/wdf/wdfobjectdereference">WdfObjectDereference</a>
| 35.136069 | 557 | 0.629026 | eng_Latn | 0.7019 |
6fb9ae2f3f741638c2adda6dd777c48f5e136fff | 84 | md | Markdown | README.md | smorey2/KFrame | 972eb29e3469c328db77f1819b03fcadcceeb096 | [
"MIT"
] | null | null | null | README.md | smorey2/KFrame | 972eb29e3469c328db77f1819b03fcadcceeb096 | [
"MIT"
] | null | null | null | README.md | smorey2/KFrame | 972eb29e3469c328db77f1819b03fcadcceeb096 | [
"MIT"
] | null | null | null | KFrame -
=====================================================
##DESCRIPTION
XXX
| 12 | 53 | 0.238095 | oci_Latn | 0.998641 |
6fbb4c8a1964b457a9ec8fea8ac5fa83c486b7ae | 5,098 | md | Markdown | README.md | ChicoState/PantryDjango | e075f699f29a3d02ee2c9c2b9a80cae6fce789f4 | [
"MIT"
] | null | null | null | README.md | ChicoState/PantryDjango | e075f699f29a3d02ee2c9c2b9a80cae6fce789f4 | [
"MIT"
] | 10 | 2020-05-13T00:56:16.000Z | 2022-03-12T00:52:29.000Z | README.md | ChicoState/PantryDjango | e075f699f29a3d02ee2c9c2b9a80cae6fce789f4 | [
"MIT"
] | 5 | 2020-04-27T04:17:25.000Z | 2020-08-18T05:01:29.000Z | # Food Pantry [](https://travis-ci.org/ChicoState/PantryDjango)
Welcome to the Food Pantry open source project!
Chico State, like many other universities, has a food pantry for students who do not have access to enough to eat. The pantry provides food for students for free by storing donated and wholesale purchased bulk foods. We want to create a system to manage the inventory of a food pantry, as well as generate reports about the pantry's activities. We need to design a website that manages the information.
## Code of Conduct
The code of conduct for this project is given in
[Code of Conduct](Code_of_Conduct.md).
## Initial Contributors
Initial contributors are noted in [Contributors](Contributors.md)
## Licensing
This project is licensed under the MIT license given in
[License](LICENSE).
## Requirements
The software and technologies used in this project are listed in [Requirements](requirements.txt)
## Before UML diagram

### UML Diagram Description
We used two Abstract Factory design patterns, one Singleton design pattern for the inventory, and an Observer design pattern for the views. The details of the classes in the UML diagram are as follows.
Item (class):
The Item class is the client code; it doesn't depend on concrete factory classes such as ConcreteFactoryUPC and ConcreteFactoryPLU as long as it works with these objects via the AbstractFactoryItems interface, so the Item class can generate these objects dynamically.
ConcreteFactoryUPC:
This concrete class implements the AbstractFactoryItems interface. Objects of this class hold the data of UPC-type products.
ConcreteFactoryPLU:
This concrete class implements the AbstractFactoryItems interface. Objects of this class hold the data of PLU-type products.
AbstractFactoryItems:
This is the Abstract Factory interface, which is implemented by its concrete classes.
Inventory:
We used a Singleton Inventory class. We wanted a master copy of the inventory that can be used in different classes without creating new objects of the inventory. New objects of the Inventory class can certainly be created, but all of them point to the same instance of the inventory.
Similarly, we created the AbstractFactoryDonor interface to generate the providers dynamically using the Donors class client code.
The concrete factory classes AbstractFactoryIndividualDonor and AbstractFactoryOrganizationDonor implement the AbstractFactoryDonor interface.
Provider:
This class manages and stores the AbstractFactoryDonor objects. It implements essential functions like CreateDonor() and GetDonorID(), which are embedded in every item donated by a donor.
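As a rough illustration of the design described above, here is a minimal Python sketch of the Singleton Inventory and the Abstract Factory for items (the method names and data shapes are assumptions for illustration; the actual project code may differ):

```python
from abc import ABC, abstractmethod


class AbstractFactoryItems(ABC):
    """Abstract Factory interface implemented by the concrete item factories."""

    @abstractmethod
    def create_item(self, code, name):
        """Build an item record for the given product code and name."""


class ConcreteFactoryUPC(AbstractFactoryItems):
    def create_item(self, code, name):
        # UPC items are packaged products identified by a barcode.
        return {"type": "UPC", "code": code, "name": name}


class ConcreteFactoryPLU(AbstractFactoryItems):
    def create_item(self, code, name):
        # PLU items are produce identified by a price look-up code.
        return {"type": "PLU", "code": code, "name": name}


class Inventory:
    """Singleton: every instantiation returns the same master copy."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.items = []
        return cls._instance

    def add(self, item):
        self.items.append(item)


# Client code picks a factory dynamically based on the product type.
factory = ConcreteFactoryPLU()
Inventory().add(factory.create_item("4011", "Bananas"))

print(Inventory() is Inventory())  # True: all objects share one instance
print(len(Inventory().items))      # 1
```

Because `Inventory()` always hands back the same instance, any view or helper can read the master inventory without passing a reference around, which is the behavior the Singleton in the diagram is meant to provide.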
## After UML diagram (added appliance rental)

# Starting the web server
You need to be in the directory that contains the manage.py file (the PantryDjango directory). In the console, we can start the web server by running python manage.py runserver:
{% filename %}command-line{% endfilename %}
```
(myvenv) ~/DjangoPantry$ python manage.py runserver
```
If you are on a Chromebook, use this command instead:
{% filename %}Cloud 9{% endfilename %}
```
(myvenv) ~/DjangoPantry$ python manage.py runserver 0.0.0.0:8080
```
If you are on Windows and this fails with UnicodeDecodeError, use this command instead:
{% filename %}command-line{% endfilename %}
```
(myvenv) ~/DjangoPantry$ python manage.py runserver 0:8000
```
Now you need to check that your website is running. Open your browser (Firefox, Chrome, Safari, Internet Explorer or whatever you use) and enter this address:
{% filename %}browser{% endfilename %}
```
http://127.0.0.1:8000/
```
# Static Analysis Tool
We used Django Lint as our static analysis tool; it checks projects and applications that use the Django web development framework.
It reports on common programming errors and bad code smells, including checking for nullable CharField field types, the use of deprecated Django features (such as auto_now_add), and the absence of recommended options in settings.py. It aims to encourage the development of high-quality, re-usable Django applications. Django Lint is currently implemented as a wrapper around PyLint.
The official documentation and download of django Lint can be found [here](https://pypi.org/project/django-lint/)
# Travis CI Continuous Integration Status
[](https://travis-ci.org/ChicoState/PantryDjango)
# Acknowledgments & Inspiration
Software Design and Maintenance Professor and Instructor [Dr. Kevin Buffardi, PhD](https://www.csuchico.edu/csci/people/faculty/buffardi-kevin.shtml)
* [GitHub](https://github.com/kbuffardi)
* [GoogleScholar](https://scholar.google.com/citations?user=KmIt5HIAAAAJ&hl=en)
* [Twitter](https://twitter.com/drkevinbuffardi?lang=en)
* [LinkedIn](https://www.linkedin.com/in/kevin-buffardi-5a84351/)
| 45.115044 | 402 | 0.792468 | eng_Latn | 0.987593 |
6fbb57439239a7225f95afadfbf8d138c4313616 | 463 | md | Markdown | docs/devops/_sidebar.md | Vunovati/titus | 775e6836d5386a60c18d45ee225cbd07c3901892 | [
"Apache-2.0"
] | null | null | null | docs/devops/_sidebar.md | Vunovati/titus | 775e6836d5386a60c18d45ee225cbd07c3901892 | [
"Apache-2.0"
] | null | null | null | docs/devops/_sidebar.md | Vunovati/titus | 775e6836d5386a60c18d45ee225cbd07c3901892 | [
"Apache-2.0"
] | null | null | null | - [Home](/)
- [Quick Start Guide](/quick-start/)
- [Developers](/developers/)
- [DevOps](/devops/?id=devops)
- [Overview](/devops/?id=overview)
- [Deploy Titus](/devops/?id=deploy-titus)
- [Deploy on GCP](/devops/gcp/)
- [Deploy on AWS with Mira](https://github.com/nearform/titus/tree/master/packages/titus-infra-aws-mira)
- [Deploy Titus on Azure](/devops/azure/?id=Deploy-Titus-on-Azure)
- [Contribute to Titus](/contributing/)
| 42.090909 | 112 | 0.652268 | yue_Hant | 0.22652 |
6fbc3619fd2c49c04f0c61551e3af782b26ee735 | 205 | md | Markdown | content/influxdb/cloud/reference/glossary.md | clwluvw/docs-v2 | dc0c00fb59edd1580198242cb15109a01dfe40cc | [
"MIT"
] | 42 | 2019-10-14T18:38:17.000Z | 2022-03-29T15:34:49.000Z | content/influxdb/cloud/reference/glossary.md | clwluvw/docs-v2 | dc0c00fb59edd1580198242cb15109a01dfe40cc | [
"MIT"
] | 1,870 | 2019-10-14T17:03:50.000Z | 2022-03-30T22:23:24.000Z | content/influxdb/cloud/reference/glossary.md | clwluvw/docs-v2 | dc0c00fb59edd1580198242cb15109a01dfe40cc | [
"MIT"
] | 181 | 2019-11-08T19:40:05.000Z | 2022-03-25T10:01:02.000Z | ---
title: Glossary
description: >
Terms related to InfluxData products and platforms.
weight: 8
menu:
influxdb_cloud_ref:
name: Glossary
influxdb/cloud/tags: [glossary]
---
{{< duplicate-oss >}}
| 15.769231 | 53 | 0.712195 | eng_Latn | 0.75029 |
6fbc4c03aefc1ebf8cffb3668600383a15f600bc | 5,550 | md | Markdown | docs/sparkr-migration-guide.md | etspaceman/spark | 155a67d00cb2f12aad179f6df2d992feca8e003e | [
"Apache-2.0"
] | 4 | 2015-04-27T13:21:39.000Z | 2016-09-28T06:03:00.000Z | docs/sparkr-migration-guide.md | etspaceman/spark | 155a67d00cb2f12aad179f6df2d992feca8e003e | [
"Apache-2.0"
] | 33 | 2015-03-11T05:06:27.000Z | 2016-05-31T09:41:35.000Z | docs/sparkr-migration-guide.md | etspaceman/spark | 155a67d00cb2f12aad179f6df2d992feca8e003e | [
"Apache-2.0"
] | 3 | 2015-09-07T09:02:02.000Z | 2017-01-25T22:52:08.000Z | ---
layout: global
title: "Migration Guide: SparkR (R on Spark)"
displayTitle: "Migration Guide: SparkR (R on Spark)"
license: |
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
* Table of contents
{:toc}
Note that this migration guide describes the items specific to SparkR.
Many items of SQL migration can be applied when migrating SparkR to higher versions.
Please refer [Migration Guide: SQL, Datasets and DataFrame](sql-migration-guide.html).
## Upgrading from SparkR 2.4 to 3.0
- The deprecated methods `sparkR.init`, `sparkRSQL.init`, `sparkRHive.init` have been removed. Use `sparkR.session` instead.
- The deprecated methods `parquetFile`, `saveAsParquetFile`, `jsonFile`, `registerTempTable`, `createExternalTable`, and `dropTempTable` have been removed. Use `read.parquet`, `write.parquet`, `read.json`, `createOrReplaceTempView`, `createTable`, `dropTempView`, `union` instead.
## Upgrading from SparkR 2.3 to 2.4
- Previously, we don't check the validity of the size of the last layer in `spark.mlp`. For example, if the training data only has two labels, a `layers` param like `c(1, 3)` doesn't cause an error previously, now it does.
## Upgrading from SparkR 2.3 to 2.3.1 and above
- In SparkR 2.3.0 and earlier, the `start` parameter of `substr` method was wrongly subtracted by one and considered as 0-based. This can lead to inconsistent substring results and also does not match with the behaviour with `substr` in R. In version 2.3.1 and later, it has been fixed so the `start` parameter of `substr` method is now 1-based. As an example, `substr(lit('abcdef'), 2, 4))` would result to `abc` in SparkR 2.3.0, and the result would be `bcd` in SparkR 2.3.1.
## Upgrading from SparkR 2.2 to 2.3
- The `stringsAsFactors` parameter was previously ignored with `collect`, for example, in `collect(createDataFrame(iris), stringsAsFactors = TRUE))`. It has been corrected.
- For `summary`, option for statistics to compute has been added. Its output is changed from that from `describe`.
- A warning can be raised if versions of SparkR package and the Spark JVM do not match.
## Upgrading from SparkR 2.1 to 2.2
- A `numPartitions` parameter has been added to `createDataFrame` and `as.DataFrame`. When splitting the data, the partition position calculation has been made to match the one in Scala.
- The method `createExternalTable` has been deprecated to be replaced by `createTable`. Either methods can be called to create external or managed table. Additional catalog methods have also been added.
- By default, derby.log is now saved to `tempdir()`. This will be created when instantiating the SparkSession with `enableHiveSupport` set to `TRUE`.
- `spark.lda` was not setting the optimizer correctly. It has been corrected.
- Several model summary outputs are updated to have `coefficients` as `matrix`. This includes `spark.logit`, `spark.kmeans`, `spark.glm`. Model summary outputs for `spark.gaussianMixture` have added log-likelihood as `loglik`.
## Upgrading from SparkR 2.0 to 2.1
- `join` no longer performs Cartesian Product by default, use `crossJoin` instead.
## Upgrading from SparkR 1.6 to 2.0
- The method `table` has been removed and replaced by `tableToDF`.
- The class `DataFrame` has been renamed to `SparkDataFrame` to avoid name conflicts.
- Spark's `SQLContext` and `HiveContext` have been deprecated to be replaced by `SparkSession`. Instead of `sparkR.init()`, call `sparkR.session()` in its place to instantiate the SparkSession. Once that is done, that currently active SparkSession will be used for SparkDataFrame operations.
- The parameter `sparkExecutorEnv` is not supported by `sparkR.session`. To set environment for the executors, set Spark config properties with the prefix "spark.executorEnv.VAR_NAME", for example, "spark.executorEnv.PATH"
- The `sqlContext` parameter is no longer required for these functions: `createDataFrame`, `as.DataFrame`, `read.json`, `jsonFile`, `read.parquet`, `parquetFile`, `read.text`, `sql`, `tables`, `tableNames`, `cacheTable`, `uncacheTable`, `clearCache`, `dropTempTable`, `read.df`, `loadDF`, `createExternalTable`.
- The method `registerTempTable` has been deprecated to be replaced by `createOrReplaceTempView`.
- The method `dropTempTable` has been deprecated to be replaced by `dropTempView`.
- The `sc` SparkContext parameter is no longer required for these functions: `setJobGroup`, `clearJobGroup`, `cancelJobGroup`
## Upgrading from SparkR 1.5 to 1.6
- Before Spark 1.6.0, the default mode for writes was `append`. It was changed in Spark 1.6.0 to `error` to match the Scala API.
- SparkSQL converts `NA` in R to `null` and vice-versa.
- Since 1.6.1, withColumn method in SparkR supports adding a new column to or replacing existing columns
of the same name of a DataFrame.
| 71.153846 | 478 | 0.757477 | eng_Latn | 0.994139 |
6fbcc96ef36ad2469ea30348e32a24d781f41a40 | 7,774 | md | Markdown | docs/howto/how_to_add_a_new_fusion_system.md | jzjonah/apollo | bc534789dc0548bf2d27f8d72fe255d5c5e4f951 | [
"Apache-2.0"
] | 22,688 | 2017-07-04T23:17:19.000Z | 2022-03-31T18:56:48.000Z | docs/howto/how_to_add_a_new_fusion_system.md | WJY-Mark/apollo | 463fb82f9e979d02dcb25044e60931293ab2dba0 | [
"Apache-2.0"
] | 4,804 | 2017-07-04T22:30:12.000Z | 2022-03-31T12:58:21.000Z | docs/howto/how_to_add_a_new_fusion_system.md | WJY-Mark/apollo | 463fb82f9e979d02dcb25044e60931293ab2dba0 | [
"Apache-2.0"
] | 9,985 | 2017-07-04T22:01:17.000Z | 2022-03-31T14:18:16.000Z | # How to add a new fusion system
The detailed processing flow of fusion is shown below:

The fusion system introduced in this document is located in the Fusion Component listed below. The current architecture of the Fusion Component is shown here:

As we can see from the above structure, a fusion system is a derived class of `BaseFusionSystem`, which acts as an abstract class member of `ObstacleMultiSensorFusion`, located in the Fusion Component. Next, we will introduce how to add a new fusion system based on the current structure.
Apollo provides one fusion system -- Probabilistic fusion. It can easily be changed or replaced by other systems. The input of the system should be obstacle data generated by the detection and tracking of the upstream sensors, while the output should be fused and tracked obstacle data. This document introduces how to add a new fusion system; the basic task sequence is listed below:
1. Define a class that inherits `base_fusion_system`
2. Implement the class `NewFusionSystem`
3. Add config proto file for `NewFusionSystem`
4. Update config file to put your system into effect
The steps are elaborated below for better understanding:
## Define a class that inherits `base_fusion_system`
All the fusion systems shall inherit `base_fusion_system`,which defines basic class members and a set of interfaces. Here is an example of the system implementation:
```c++
namespace apollo {
namespace perception {
namespace fusion {
class NewFusionSystem : public BaseFusionSystem {
public:
NewFusionSystem();
~NewFusionSystem();
NewFusionSystem(const NewFusionSystem&) = delete;
NewFusionSystem& operator=(const NewFusionSystem&) = delete;
bool Init(const FusionInitOptions& init_options) override;
bool Fuse(const FusionOptions& options,
const base::FrameConstPtr& sensor_frame,
std::vector<base::ObjectPtr>* fused_objects) override;
std::string Name() const override;
}; // class NewFusionSystem
} // namespace fusion
} // namespace perception
} // namespace apollo
```
The function signatures and data structures used by `base_fusion_system` are pre-defined:
```c++
struct FusionInitOptions {
std::vector<std::string> main_sensors;
};
struct FusionOptions {};
struct alignas(16) Frame {
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
Frame() { sensor2world_pose.setIdentity(); }
void Reset() {
timestamp = 0.0;
objects.clear();
sensor2world_pose.setIdentity();
sensor_info.Reset();
lidar_frame_supplement.Reset();
radar_frame_supplement.Reset();
camera_frame_supplement.Reset();
}
// @brief sensor information
SensorInfo sensor_info;
double timestamp = 0.0;
std::vector<std::shared_ptr<Object>> objects;
Eigen::Affine3d sensor2world_pose;
// sensor-specific frame supplements
LidarFrameSupplement lidar_frame_supplement;
RadarFrameSupplement radar_frame_supplement;
CameraFrameSupplement camera_frame_supplement;
UltrasonicFrameSupplement ultrasonic_frame_supplement;
};
typedef std::shared_ptr<Frame> FramePtr;
typedef std::shared_ptr<const Frame> FrameConstPtr;
struct alignas(16) Object {
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
Object();
std::string ToString() const;
void Reset();
int id = -1;
PointCloud<PointD> polygon;
Eigen::Vector3f direction = Eigen::Vector3f(1, 0, 0);
float theta = 0.0f;
float theta_variance = 0.0f;
Eigen::Vector3d center = Eigen::Vector3d(0, 0, 0);
Eigen::Matrix3f center_uncertainty;
Eigen::Vector3f size = Eigen::Vector3f(0, 0, 0);
Eigen::Vector3f size_variance = Eigen::Vector3f(0, 0, 0);
Eigen::Vector3d anchor_point = Eigen::Vector3d(0, 0, 0);
ObjectType type = ObjectType::UNKNOWN;
std::vector<float> type_probs;
ObjectSubType sub_type = ObjectSubType::UNKNOWN;
std::vector<float> sub_type_probs;
float confidence = 1.0f;
int track_id = -1;
Eigen::Vector3f velocity = Eigen::Vector3f(0, 0, 0);
Eigen::Matrix3f velocity_uncertainty;
bool velocity_converged = true;
float velocity_confidence = 1.0f;
Eigen::Vector3f acceleration = Eigen::Vector3f(0, 0, 0);
Eigen::Matrix3f acceleration_uncertainty;
double tracking_time = 0.0;
double latest_tracked_time = 0.0;
MotionState motion_state = MotionState::UNKNOWN;
std::array<Eigen::Vector3d, 100> drops;
std::size_t drop_num = 0;
bool b_cipv = false;
CarLight car_light;
LidarObjectSupplement lidar_supplement;
RadarObjectSupplement radar_supplement;
CameraObjectSupplement camera_supplement;
FusionObjectSupplement fusion_supplement;
};
using ObjectPtr = std::shared_ptr<Object>;
using ObjectConstPtr = std::shared_ptr<const Object>;
```
## Implement the class `NewFusionSystem`
To ensure the new system functions properly, `NewFusionSystem` should at least override the interfaces Init(), Fuse(), and Name() defined in `base_fusion_system`. Init() is responsible for config loading, class member initialization, etc., and Fuse() implements the basic logic of the system. A concrete `NewFusionSystem.cc` example is shown:
```c++
namespace apollo {
namespace perception {
namespace fusion {
bool NewFusionSystem::Init(const FusionInitOptions& init_options) {
/*
Initialization of your system
*/
}
bool NewFusionSystem::Fuse(const FusionOptions& options,
const base::FrameConstPtr& sensor_frame,
std::vector<base::ObjectPtr>* fused_objects) {
/*
Implementation of your system
*/
}
std::string NewFusionSystem::Name() const {
/*
Return your system's name
*/
}
FUSION_REGISTER_FUSIONSYSTEM(NewFusionSystem); //register the new fusion_system
} // namespace fusion
} // namespace perception
} // namespace apollo
```
## Add config and param proto file for `NewFusionSystem`
Follow these steps to add a config proto file for the new system:
1. Define a `proto` for the new system configurations according to the requirements of your algorithm. As a reference, you can find and follow the `proto` definition of `probabilistic_fusion_config` at `modules/perception/proto/probabilistic_fusion_config.proto`
2. Once finishing your `proto`, for example `newfusionsystem_config.proto`, add the following content at the file header:
```protobuf
syntax = "proto2";
package apollo.perception.fusion;
message NewFusionSystemConfig {
double parameter1 = 1;
int32 parameter2 = 2;
}
```
3. Refer to `modules/perception/production/conf/perception/fusion/config_manager.config` and add your system path:
```protobuf
model_config_path: "./conf/perception/fusion/modules/newfusionsystem.config"
```
4. Refer to the `modules/probabilistic_fusion.config` in the same folder and create `newfusionsystem.config`:
```protobuf
model_configs {
# NewFusionSystem model.
name: "NewFusionSystem"
version: "1.0.0"
string_params {
name: "root_dir"
value: "./data/perception/fusion/"
}
string_params {
name: "config_file"
value: "newfusionsystem.pt"
}
}
```
5. Refer to `probabilistic_fusion.pt` and create `newfusionsystem.pt` file at `modules/perception/production/data/perception/fusion/`:
```
Note: The "*.pt" file should have the same format as the "proto" files defined in steps 1 and 2.
```
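As an illustration only (the values below are assumptions, not taken from the Apollo repository), a `newfusionsystem.pt` that matches the `NewFusionSystemConfig` proto from step 2 could look like:

```
parameter1: 0.5
parameter2: 10
```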
## Update config file to put your system into effect
To use your new fusion system in Apollo, you need to modify the value of `fusion_method` to your system's name in `fusion_component_conf.pb.txt`, located in the corresponding folder in `modules/perception/production/data/perception/fusion/`
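For example, the relevant setting in `fusion_component_conf.pb.txt` would be changed to something like the following (a sketch showing only this one field; the file's other fields are omitted):

```
fusion_method: "NewFusionSystem"
```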
Once you have finished the above modifications, your new fusion system should take effect in Apollo. | 33.947598 | 400 | 0.738616 | eng_Latn | 0.743614 |
6fbce1dbc4e5771349467c68eec3bc94ee6760ba | 86 | md | Markdown | README.md | Tita-Navarro/Learning-Angular | 695e3b03753b9edf4fd2362f75a8aca89eb97abf | [
"MIT"
] | null | null | null | README.md | Tita-Navarro/Learning-Angular | 695e3b03753b9edf4fd2362f75a8aca89eb97abf | [
"MIT"
] | null | null | null | README.md | Tita-Navarro/Learning-Angular | 695e3b03753b9edf4fd2362f75a8aca89eb97abf | [
"MIT"
] | null | null | null | # Learning-Angular
Mi primera app con Angular, ejercicios y lo necesario para crearla
| 28.666667 | 66 | 0.813953 | spa_Latn | 0.997346 |
6fbd179369442d48607939216c541d088a7cd778 | 220 | md | Markdown | README.md | papaiatis/wpf-stopwatch-control | 075cbdab973778f715d848b0a0c32692956673ea | [
"MIT"
] | 2 | 2020-05-28T12:21:40.000Z | 2022-03-26T08:30:36.000Z | README.md | papaiatis/wpf-stopwatch-control | 075cbdab973778f715d848b0a0c32692956673ea | [
"MIT"
] | null | null | null | README.md | papaiatis/wpf-stopwatch-control | 075cbdab973778f715d848b0a0c32692956673ea | [
"MIT"
] | null | null | null | # wpf-stopwatch-control
WPF Stopwatch Control
This is a simple WPF stopwatch control. It wraps a single Label control where the time is shown.
You can control the stopwatch with the Start(), Stop() and Pause() methods.
| 36.666667 | 96 | 0.777273 | eng_Latn | 0.998792 |
6fbe3f2f4ba3376da2170b86ac8d96926b3a5155 | 4,010 | md | Markdown | windows-driver-docs-pr/image/properties-for-wia-camera-minidrivers.md | i35010u/windows-driver-docs.zh-cn | e97bfd9ab066a578d9178313f802653570e21e7d | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-02-04T01:49:58.000Z | 2021-02-04T01:49:58.000Z | windows-driver-docs-pr/image/properties-for-wia-camera-minidrivers.md | i35010u/windows-driver-docs.zh-cn | e97bfd9ab066a578d9178313f802653570e21e7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/image/properties-for-wia-camera-minidrivers.md | i35010u/windows-driver-docs.zh-cn | e97bfd9ab066a578d9178313f802653570e21e7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Properties for WIA Camera Minidrivers
description: Properties for WIA camera minidrivers
ms.date: 04/20/2017
ms.localizationpriority: medium
ms.openlocfilehash: 9defc4784feb4aa727598f23429245a7bde9a32b
ms.sourcegitcommit: 418e6617e2a695c9cb4b37b5b60e264760858acd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/07/2020
ms.locfileid: "96830491"
---
# <a name="properties-for-wia-camera-minidrivers"></a>Properties for WIA Camera Minidrivers
The following is a complete list of all WIA properties that are unique to WIA camera minidrivers.
### <a name="required-properties-on-camera-root-items-microsoft-windows-xp-and-windows-me"></a>Required properties on camera root items (Microsoft Windows XP and Windows Me)
The WIA minidriver provides the following property:
[**WIA\_DPC\_PICTURES\_TAKEN**](./wia-dpc-pictures-taken.md)
### <a name="optional-properties-on-camera-root-items-windows-xp-and-windows-me"></a>Optional properties on camera root items (Windows XP and Windows Me)
The WIA minidriver provides the following properties:
[**WIA\_DPC\_ARTIST**](./wia-dpc-artist.md)
[**WIA\_DPC\_BATTERY\_STATUS**](./wia-dpc-battery-status.md)
[**WIA\_DPC\_BURST\_INTERVAL**](./wia-dpc-burst-interval.md)
[**WIA\_DPC\_BURST\_NUMBER**](./wia-dpc-burst-number.md)
[**WIA\_DPC\_CAPTURE\_DELAY**](./wia-dpc-capture-delay.md)
[**WIA\_DPC\_CAPTURE\_MODE**](./wia-dpc-capture-mode.md)
[**WIA\_DPC\_COMPRESSION\_SETTING**](./wia-dpc-compression-setting.md)
[**WIA\_DPC\_CONTRAST**](./wia-dpc-contrast.md)
[**WIA\_DPC\_COPYRIGHT\_INFO**](./wia-dpc-copyright-info.md)
[**WIA\_DPC\_DIGITAL\_ZOOM**](./wia-dpc-digital-zoom.md)
[**WIA\_DPC\_DIMENSION**](./wia-dpc-dimension.md)
[**WIA\_DPC\_EFFECT\_MODE**](./wia-dpc-effect-mode.md)
[**WIA\_DPC\_EXPOSURE\_COMP**](./wia-dpc-exposure-comp.md)
[**WIA\_DPC\_EXPOSURE\_INDEX**](./wia-dpc-exposure-index.md)
[**WIA\_DPC\_EXPOSURE\_METERING\_MODE**](./wia-dpc-exposure-metering-mode.md)
[**WIA\_DPC\_EXPOSURE\_MODE**](./wia-dpc-exposure-mode.md)
[**WIA\_DPC\_EXPOSURE\_TIME**](./wia-dpc-exposure-time.md)
[**WIA\_DPC\_FLASH\_MODE**](./wia-dpc-flash-mode.md)
[**WIA\_DPC\_FNUMBER**](./wia-dpc-fnumber.md)
[**WIA\_DPC\_FOCAL\_LENGTH**](./wia-dpc-focal-length.md)
[**WIA\_DPC\_FOCUS\_DISTANCE**](./wia-dpc-focus-distance.md)
[**WIA\_DPC\_FOCUS\_MANUAL\_DIST**](./wia-dpc-focus-manual-dist.md)
[**WIA\_DPC\_FOCUS\_METERING**](./wia-dpc-focus-metering.md)
[**WIA\_DPC\_FOCUS\_METERING\_MODE**](./wia-dpc-focus-metering-mode.md)
[**WIA\_DPC\_FOCUS\_MODE**](./wia-dpc-focus-mode.md)
[**WIA\_DPC\_PAN\_POSITION**](./wia-dpc-pan-position.md)
[**WIA\_DPC\_PICT\_HEIGHT**](./wia-dpc-pict-height.md)
[**WIA\_DPC\_PICT\_WIDTH**](./wia-dpc-pict-width.md)
[**WIA\_DPC\_PICTURES\_REMAINING**](./wia-dpc-pictures-remaining.md)
[**WIA\_DPC\_POWER\_MODE**](./wia-dpc-power-mode.md)
[**WIA\_DPC\_RGB\_GAIN**](./wia-dpc-rgb-gain.md)
[**WIA\_DPC\_SHARPNESS**](./wia-dpc-sharpness.md)
[**WIA\_DPC\_THUMB\_HEIGHT**](./wia-dpc-thumb-height.md)
[**WIA\_DPC\_THUMB\_WIDTH**](./wia-dpc-thumb-width.md)
[**WIA\_DPC\_TILT\_POSITION**](./wia-dpc-tilt-position.md)
[**WIA\_DPC\_TIMELAPSE\_INTERVAL**](./wia-dpc-timelapse-interval.md)
[**WIA\_DPC\_TIMELAPSE\_NUMBER**](./wia-dpc-timelapse-number.md)
[**WIA\_DPC\_TIMER\_MODE**](./wia-dpc-timer-mode.md)
[**WIA\_DPC\_TIMER\_VALUE**](./wia-dpc-timer-value.md)
[**WIA\_DPC\_UPLOAD\_URL**](./wia-dpc-upload-url.md)
[**WIA\_DPC\_WHITE\_BALANCE**](./wia-dpc-white-balance.md)
[**WIA\_DPC\_ZOOM\_POSITION**](./wia-dpc-zoom-position.md)
### <a name="required-properties-on-camera-child-items-able-to-transfer-data"></a>Required properties on camera child items able to transfer data
The WIA minidriver provides the following properties:
[**WIA\_IPC\_THUMBNAIL**](./wia-ipc-thumbnail.md)
[**WIA\_IPC\_THUMBNAIL\_HEIGHT**](./wia-ipc-thumbnail-height.md)
[**WIA\_IPC\_THUMBNAIL\_WIDTH**](./wia-ipc-thumbnail-width.md)
### <a name="optional-properties-on-camera-child-items-able-to-transfer-data"></a>Optional properties on camera child items able to transfer data
The WIA minidriver provides the following properties:
[**WIA\_IPC\_AUDIO\_AVAILABLE**](./wia-ipc-audio-available.md)
[**WIA\_IPC\_AUDIO\_DATA**](./wia-ipc-audio-data.md)
[**WIA\_IPC\_AUDIO\_DATA\_FORMAT**](./wia-ipc-audio-data-format.md)
[**WIA\_IPC\_NUM\_PICT\_PER\_ROW**](./wia-ipc-num-pict-per-row.md)
[**WIA\_IPC\_SEQUENCE**](./wia-ipc-sequence.md)
[**WIA\_IPC\_TIMEDELAY**](./wia-ipc-timedelay.md)
| 28.041958 | 141 | 0.623691 | yue_Hant | 0.694958 |
6fbec5e2e615f3519dc92a32185f3b770d3b2da1 | 5,839 | md | Markdown | docs/big-data-cluster/hdfs-tiering.md | gmilani/sql-docs.pt-br | 02f07ca69eae8435cefd74616a8b00f09c4d4f99 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/big-data-cluster/hdfs-tiering.md | gmilani/sql-docs.pt-br | 02f07ca69eae8435cefd74616a8b00f09c4d4f99 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/big-data-cluster/hdfs-tiering.md | gmilani/sql-docs.pt-br | 02f07ca69eae8435cefd74616a8b00f09c4d4f99 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Configure HDFS tiering
titleSuffix: SQL Server big data clusters
description: This article explains how to configure HDFS tiering to mount an external Azure Data Lake Storage file system into HDFS on a [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)].
author: nelgson
ms.author: negust
ms.reviewer: mikeray
ms.date: 08/21/2019
ms.topic: conceptual
ms.prod: sql
ms.technology: big-data-cluster
ms.openlocfilehash: 673b3eed760af4b36c494e2dd45cdfc8ed8e8dc8
ms.sourcegitcommit: b4ad3182aa99f9cbfd15f4c3f910317d6128a2e5
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/06/2019
ms.locfileid: "73706049"
---
# <a name="configure-hdfs-tiering-on-includebig-data-clusters-2019includesssbigdataclusters-ss-novermd"></a>Configure HDFS tiering on [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)]
[!INCLUDE[tsql-appliesto-ssver15-xxxx-xxxx-xxx](../includes/tsql-appliesto-ssver15-xxxx-xxxx-xxx.md)]
HDFS tiering provides the ability to mount an external, HDFS-compatible file system into HDFS. This article explains how to configure HDFS tiering for SQL Server Big Data Clusters. Currently, connections to Azure Data Lake Storage Gen2 and Amazon S3 are supported.
## <a name="hdfs-tiering-overview"></a>HDFS tiering overview
With tiering, applications can directly access data in a variety of external stores as if the data resided in the local HDFS. Mounting is a metadata operation, in which the metadata describing the namespace on the external file system is copied to the local HDFS. This metadata includes information about the external directories and files, along with their permissions and ACLs. The corresponding data is only copied on demand, when the data itself is accessed through a query, for example. The external file system data can then be accessed from the SQL Server big data cluster. You can run Spark jobs and SQL queries on this data in the same way you would run them on local data stored in HDFS in the cluster.
### <a name="caching"></a>Caching
Today, by default, 1% of the total HDFS storage is reserved for caching mounted data. Caching is a global setting across mounts.
> [!NOTE]
> HDFS tiering is a feature developed by Microsoft, and an earlier version of it was released as part of the Apache Hadoop 3.1 distribution. For more information, see [https://issues.apache.org/jira/browse/HDFS-9806](https://issues.apache.org/jira/browse/HDFS-9806) for details.
The following sections provide an example of how to configure HDFS tiering with an Azure Data Lake Storage Gen2 data source.
## <a name="refresh"></a>Refresh
HDFS tiering supports refresh. Refresh an existing mount to the latest snapshot of the remote data.
## <a name="prerequisites"></a>Prerequisites
- [Deployed big data cluster](deployment-guidance.md)
- [Big data tools](deploy-big-data-tools.md)
- **azdata**
- **kubectl**
## <a name="mounting-instructions"></a>Mounting instructions
Connections to Azure Data Lake Storage Gen2 and Amazon S3 are supported. Find instructions on how to mount against these storage types in the following articles:
- [How to mount ADLS Gen2 for HDFS tiering in a big data cluster](hdfs-tiering-mount-adlsgen2.md)
- [How to mount S3 for HDFS tiering in a big data cluster](hdfs-tiering-mount-s3.md)
## <a id="issues"></a> Known issues and limitations
The following list provides the known issues and current limitations when using HDFS tiering in [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)]:
- If the mount is stuck in a `CREATING` state for a long time, it has most likely failed. In this situation, cancel the command and delete the mount if needed. Verify that the parameters and credentials are correct before retrying the operation.
- Mounts cannot be created on existing directories.
- Mounts cannot be created within existing mounts.
- If any of the ancestors of the mount point do not exist, they will be created with permissions defaulting to r-xr-xr-x (555).
- Mount creation can take some time, depending on the number and size of the files being mounted. During this process, the files in the mount are not visible to users. While the mount is being created, all files are added to a temporary path, which defaults to `/_temporary/_mounts/<mount-location>`.
- The mount creation command is asynchronous. After the command is executed, the mount status can be checked to understand the state of the mount.
- When creating the mount, the argument used for **--mount-path** is essentially a unique identifier of the mount. The same string (including the trailing "/", if present) must be used in subsequent commands.
- Mounts are read-only. You cannot create directories or files within a mount.
- Mounting directories and files that can change is not recommended. After the mount is created, any changes or updates in the remote location are not reflected in the mount in HDFS. If changes occur in the remote location, you can choose to delete and re-create the mount so that it reflects the updated state.
## <a name="next-steps"></a>Next steps
For more information about [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)], see [What are [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)]?](big-data-cluster-overview.md).
---
title: OneDrive service description
ms.author: office365servicedesc
author: pamelaar
manager: gailw
audience: ITPro
ms.topic: reference
f1_keywords:
- onedrive-for-business-service-description
ms.service: o365-administration
localization_priority: Critical
ms.custom:
- Adm_ServiceDesc
- Adm_ServiceDesc_top
ms.assetid: 2f22b6f5-e154-4ef9-85fe-0d1daf9e27b3
description: See which OneDrive features are available in which plans.
ms.openlocfilehash: 4d35862b6cb6d27f866537e535b4001159d3e853
ms.sourcegitcommit: c117bb958f5b94682fd384b4770a920c6114559b
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 09/24/2021
ms.locfileid: "59669971"
---
# <a name="onedrive-service-description"></a>OneDrive service description

OneDrive for work and school accounts is online cloud storage that is provided to individually licensed users in an organization. Use it to protect work files and access them across devices. OneDrive lets you share files, collaborate on documents, and sync files to your computer. Learn more about [OneDrive features, functionality, and pricing](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business).

OneDrive is included in Microsoft 365 and Office 365 plans and in SharePoint plans, and it can also be purchased as a standalone plan.

## <a name="available-plans"></a>Available plans

For detailed information about the plans that make OneDrive available, see the [full subscription comparison table](https://go.microsoft.com/fwlink/?linkid=2139145).

## <a name="feature-availability"></a>Feature availability

The following table lists the major OneDrive features that are available across plans. Certain caveats apply; see the footnotes for more information. This table may change without notice. For the complete, up-to-date list of features, see the [OneDrive service description](/office365/servicedescriptions/onedrive-for-business-service-description).

| Feature | Standalone plan | Small business | Enterprise | Education | Government | Nonprofit |
|---------|-------------------|----------------|------------|-----------|------------|-------------|
| Storage<sup>1</sup> | Yes | Yes | Yes | Yes | Yes | Yes |
| Sync features | Yes | Yes | Yes | Yes | Yes | Yes |
| Sharing and collaboration features | Yes | Yes | Yes | Yes | Yes | Yes |
| Web features | Yes | Yes | Yes | Yes | Yes | Yes |
| Mobile features | Yes | Yes | Yes | Yes | Yes | Yes |
| IT admin, security, and compliance features | Yes | Yes | Yes | Yes | Yes | Yes |

<sup>1</sup> For the OneDrive storage that each user receives, see the OneDrive section of the [modern work plan comparison](https://go.microsoft.com/fwlink/?linkid=2139145).

## <a name="learn-more"></a>Learn more

For technical information about [OneDrive](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business), see the following resources:

- [OneDrive](/onedrive/onedrive)
- [Microsoft OneDrive blog](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/bg-p/OneDriveBlog)
- For most subscription plans, the default storage for each user's OneDrive is 1 TB. Depending on the plan and the number of licensed users, this storage can be increased up to 5 TB. For more information, see **Key features** on [Compare OneDrive cloud storage pricing and plans](https://www.microsoft.com/microsoft-365/onedrive/compare-onedrive-plans?activetab=tab:primaryr2).

### <a name="licensing-terms"></a>Licensing terms

For the licensing terms for products and services purchased through Microsoft commercial licensing programs, see the [Microsoft Product Terms](https://www.microsoft.com/licensing/terms/).

### <a name="messaging"></a>Messaging

To keep track of upcoming changes, including new and changed features, planned maintenance, and other important announcements, visit the Message center. For more information, see [Message center](/microsoft-365/admin/manage/message-center).

### <a name="accessibility"></a>Accessibility

Microsoft remains committed to the security of your data and the [accessibility](https://www.microsoft.com/trust-center/compliance/accessibility) of our services. For more information, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center) and the [Office Accessibility Center](https://support.microsoft.com/office/office-accessibility-center-resources-for-people-with-disabilities-ecab0fcf-d143-4fe8-a2ff-6cd596bddc6d).
## Creating *get-double* project
### Understanding `computed properties`
| 18.5 | 39 | 0.743243 | eng_Latn | 0.939772 |
6fbfb35404c135cf73cc4351aab9595bc31bfcf5 | 4,704 | md | Markdown | themes/gatsby-theme-flex/README.md | mjaydenkim/data-analysis | dc0aeaed3d005cba69bc6be488a3ce53ebe6d35f | [
"MIT"
] | 39 | 2019-11-19T10:57:24.000Z | 2019-11-28T02:34:21.000Z | themes/gatsby-theme-flex/README.md | mjaydenkim/data-analysis | dc0aeaed3d005cba69bc6be488a3ce53ebe6d35f | [
"MIT"
] | 7 | 2021-09-19T00:22:55.000Z | 2022-03-15T02:02:53.000Z | themes/gatsby-theme-flex/README.md | mjaydenkim/data-analysis | dc0aeaed3d005cba69bc6be488a3ce53ebe6d35f | [
"MIT"
] | 3 | 2019-11-21T18:36:23.000Z | 2019-11-24T02:20:26.000Z | <div align="center">
<h1>Page Builder Blocks for Gatsby</h1>
</div>
<p align="center">
Combine the power of <strong>Gatsby</strong>, <strong>MDX</strong> and <strong>Theme UI</strong> to build blazing fast websites.
</p>
<p align="center">
<a href="https://github.com/arshad/gatsby-themes/blob/master/LICENSE"><img src="https://img.shields.io/npm/l/gatsby-theme-flex.svg" alt="License"></a>
<a href="https://github.com/arshad/gatsby-themes/pulls"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg" alt="PRs welcome!" /></a>
<a href="https://twitter.com/arshadcn"><img src="https://img.shields.io/badge/Follow-%40arshadcn-1da1f2" alt="Follow @arshadcn" /></a>
</p>
<img src="https://arshad.io/uploads/gatsby-theme-flex.gif" />
## About
Flex is a Gatsby theme that ships with pre-built blocks that you can use in your Markdown pages. Then use Theme UI to customize the look and feel of your site.
[See a demo](https://flex.arshad.io)
## Features
- **Customizable** - All blocks can be extended and customized.
- **Extendable** - Build and add your own custom blocks.
- **Accessible** - All blocks are tested for accessiblity.
- **Dark mode** - Support for multiple color modes.
- **SEO** - SEO + Open Graph out of the box.
- **Code Highlighting** - Code highlighting with Prism.
## Quick Start
Create a new Flex site:
```shell
gatsby new site arshad/gatsby-starter-flex
```
Start the development server
```shell
cd site
gatsby develop
```
You should now be able to view your site at `http://localhost:8000`.
## Add a page
Create a page at `content/pages/home/index.mdx` and add the following to it:
```markdown
---
title: Home
excerpt: Welcome to the home page
is_front: TRUE
---
```
This will create a new page and set it as the front page. If you head to `http://localhost:8000`, you should see Home displayed as the page title.
## Add a block
Let's add a hero to the home page. Add the following to `content/pages/home/index.mdx`
```markdown
---
title: Home
excerpt: Welcome to the home page
is_front: TRUE
---
<Hero
heading="Expedita Aperiam"
lead="Sint quaerat et occaecati voluptate illum tenetur."
imageUrl="https://via.placeholder.com/800x450"
style={{
bg: "muted"
}}
/>
```
It's that easy.
### Available Blocks
Flex comes with a lot of pre-built blocks that you can use to build your site.
`Cta`, `Div`, `Faqs`, `Feature`, `Hero`, `Logos`, `PageHeader`, `Section`, `Button`, `Card`, `Faq`, `Lead`, `Link`, `Logo`, `Image`, `Testimonial` and many more.
## Theming
Flex makes use of [Theme UI](https://theme-ui.com) under the hood. This makes it very easy to customize and re-use your theme across pages.
### Customize the theme
Create a new file at `src/gatsby-theme-flex/theme.js` and add your custom theme values in there.
Let's change the default primary color.
```js
// src/gatsby-theme-flex/theme.js
export default {
colors: {
primary: `#ee00ff`,
},
}
```
Stop and restart the development server: `gatsby develop`.
## FAQs
We're still working on the documentation. But here's some quick answers to get you started.
### How to customize a block
Use [Component Shadowing](https://www.gatsbyjs.org/docs/themes/shadowing/). Example, to override the `Hero` block, create the following file `src/gatsby-theme-flex/blocks/hero.js` and create your custom `Hero` block.
To customize the `Header`, create the following `src/gatsby-theme-flex/layout/header.js`.
### How to add new props to a block
Shadow the block and add your new props. For example, the following code adds a `foo` prop to the card block:
```js
// src/gatsby-theme-flex/components/card.js
/** @jsx jsx */
import { jsx } from "theme-ui"
export default ({ style, heading, foo, children }) => {
...
{foo}
...
}
```
You can now use `foo` in your pages as follows:
```markdown
<Card
heading="This is the heading"
foo="bar"
/>
```
### How to add your own block
Shadow the `src/gatsby-theme-flex/blocks.js` component and add the following code:
```js
// src/gatsby-theme-flex/blocks.js
export * from "gatsby-theme-flex/src/blocks"
export { default as MyComponent } from "../my-component.js"
```
```js
// src/my-component.js
/** @jsx jsx */
import { jsx } from "theme-ui"
export default ({ foo }) => <div>{foo}</div>
```
You can now use `MyComponent` in your pages:
```markdown
<MyComponent foo="bar" />
```
## Support
Need help? Create an issue on the main repo [@arshad/gatsby-themes](https://github.com/arshad/gatsby-themes/issues) or ask me [@arshadcn](https://twitter.com/arshadcn).
## License
<a href="https://www.npmjs.com/package/gatsby-theme-flex"><img src="https://img.shields.io/npm/l/gatsby-theme-flex.svg" alt="License"></a>
---
name: Robbie Ouzts
job_title: Georgia Institute of Technology Graduate Career & Co-op Advisor
company:
industry:
headshot: robbie_ouzts.jpg
short_version: |
**Josh's talk is one of the best I have heard on salary negotiation, and our students gave him rave reviews! He is a dynamic speaker, with clear ideas that students can easily remember and apply as a student and continue to apply his negotiation principals as a professional.**
product: How to get promoted in 7 days
result_summary:
case_study_url:
---
**Josh's talk is one of the best I have heard on salary negotiation, and our students gave him rave reviews! He is a dynamic speaker, with clear ideas that students can easily remember and apply as a student and continue to apply his negotiation principals as a professional.**
His talk was just before our Fall Career Fair and provided some great information that many students have already used. Just today a student came by the office and wanted to reconfirm salary ranges. The student said, “I am already using some of Josh’s tips because I anticipate an offer on Wednesday.”
Josh spoke to our students about how to estimate their market value, stand out in their job interviews, get offers and negotiate their starting salary! Josh stressed preparation and practice for interviews, illustrated ways to find “your real market value” and understand your value before starting the interview process. Josh then provided his step-by-step process to avoid “giving the first number” and rationale behind his negotiation tactics. He also reminded students to look at the overall package considering other items other than salary.
**Josh has so much credibility as an engineer, recruiter and entrepreneur.** The student audience appreciated his personal story of how as a new graduate he failed to negotiate his first salaries, learned from his mistakes, recovered and doubled his salary in three years! Josh‘s story provided evidence that application of the _Fearless Salary Negotiation_ steps work. It also assured the student audience that lessons would be learned along their career path.
Josh was generous with his time after his presentation and spent over 1.5 hours answering individual student questions. He provided some customized advice for individual questions that was direct and gave students tips they can immediately apply as well as some ideas to navigate the negotiation process. **The talk, once again, was one of the best I have heard on negotiation!**
**I would recommend him as a speaker and coach! Josh’s ability to connect with the audience and provide great information is amazing. He adds value because he is willing to go the extra mile after the presentation answering individual questions.**
As confirmation of Josh's work, we plan to bring Josh back for our spring graduate student signature event the Career, Research, and Innovation Development Conference (CRIDC).
---
title: Get started with Global Inventory Accounting
description: This topic describes how to get started with Global Inventory Accounting.
author: AndersGirke
ms.date: 06/18/2021
ms.topic: article
audience: Application User
ms.reviewer: kamaybac
ms.custom: intro-internal
ms.search.region: Global
ms.author: aevengir
ms.search.validFrom: 2021-06-18
ms.dyn365.ops.version: 10.0.20
ms.openlocfilehash: 90fcbdc5c9dd4301225952d885794bd4d03ef825fd5590896be13eacfad1f979
ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/05/2021
ms.locfileid: "6773278"
---
# <a name="get-started-with-global-inventory-accounting"></a>Get started with Global Inventory Accounting

[!include [banner](../includes/banner.md)]

[!INCLUDE [preview-banner](../includes/preview-banner.md)]

Global Inventory Accounting lets you run multiple inventory accounting representations in Global Inventory Accounting ledgers. You must associate each Global Inventory Accounting ledger with a *convention*. A convention is a collection of the following types of accounting policies:

- Cost object
- Input measurement basis
- Cost flow assumption
- Cost element

> [!NOTE]
> Even after you activate Global Inventory Accounting, you can still do inventory accounting in Microsoft Dynamics 365 Supply Chain Management, as usual.

Global Inventory Accounting is an add-in. To make its features available, you must install it from Microsoft Dynamics Lifecycle Services (LCS). You can choose to evaluate it in a test environment before you activate it for production environments.

Global Inventory Accounting doesn't currently enable all the cost management capabilities that are built into Supply Chain Management. Therefore, it's important that you evaluate whether the currently available feature set will meet your requirements.

## <a name="how-to-get-the-global-inventory-accounting-public-preview"></a><a name="sign-up"></a>How to get the Global Inventory Accounting public preview

> [!IMPORTANT]
> To use Global Inventory Accounting, you must have an LCS-enabled high-availability environment (not a OneBox environment). In addition, you must be running Supply Chain Management version 10.0.19 or later.

To sign up for the Global Inventory Accounting public preview, email your LCS environment ID to the [Global Inventory Accounting team](mailto:GlobalInvAccount@microsoft.com). After you're approved for the program, the team will send a follow-up email that contains a Global Inventory Accounting beta key and your service endpoints. After you receive the beta key, you can [install the add-in](#install).

## <a name="licensing"></a>Licensing

Global Inventory Accounting is licensed with the standard inventory accounting capabilities that are available for Supply Chain Management. You don't need to purchase an additional license to use Global Inventory Accounting.

## <a name="prerequisites"></a>Prerequisites

### <a name="set-up-microsoft-power-platform-integration"></a>Set up Microsoft Power Platform integration

Before you enable the add-in functionality, you must integrate with Microsoft Power Platform by following these steps.

1. Open the LCS environment that you want to add the service to.
1. Go to **Full details**.
1. In the **Power Platform integration** section, select **Setup**.
1. In the **Power Platform environment setup** dialog box, select the check box, and then select **Setup**. Setup usually takes between 60 and 90 minutes.
1. After setup of the Microsoft Power Platform environment is completed, the page shows the name of your environment. Additionally, the **Power Platform integration** section shows the statement, "Power Platform environment setup completed." Global Inventory Accounting doesn't require a dual-write application.

For more information, see [Set up after environment deployment](../../fin-ops-core/dev-itpro/power-platform/overview.md#set-up-after-environment-deployment).

### <a name="set-up-dataverse"></a>Set up Dataverse

Before you set up Dataverse, add the Global Inventory Accounting principals to your tenant by following these steps.

1. Install the Azure AD module for Windows PowerShell v2, as described in [Install Azure Active Directory PowerShell for Graph](/powershell/azure/active-directory/install-adv2).
1. Run the following PowerShell commands.

    ```powershell
    Connect-AzureAD # (open a sign in window and sign in as a tenant user)
    New-AzureADServicePrincipal -AppId "7a1dd80f-c961-4a67-a2f5-d6a5d2f52cf9" -DisplayName "d365-scm-costaccountingservice"
    New-AzureADServicePrincipal -AppId "5f58fc56-0202-49a8-ac9e-0946b049718b" -DisplayName "d365-scm-operationdataservice"
    ```

Next, create application users for Global Inventory Accounting in Dataverse by following these steps.

1. Open the URL of your Dataverse environment.
1. Go to **Advanced Settings \> System \> Security \> Users**, and create an application user. Use the **View** field to change the page view to *Application Users*.
1. Select **New**.
1. Set the **Application ID** field to *7a1dd80f-c961-4a67-a2f5-d6a5d2f52cf9*.
1. Select **Assign Role**, and then select *System Administrator*. If there is a role named *Common Data Service User*, select it.
1. Repeat the previous steps, but set the **Application ID** field to *5f58fc56-0202-49a8-ac9e-0946b049718b*.

For more information, see [Create an application user](/power-platform/admin/create-users-assign-online-security-roles#create-an-application-user).

If the default language of your Dataverse installation isn't English, follow these steps.

1. Go to **Advanced Settings \> Administration \> Languages**.
1. Select *English* (*LanguageCode = 1033*), and then select **Apply**.

## <a name="install-the-add-in"></a><a name="install"></a>Install the add-in

Follow these steps to install the add-in so that you can use Global Inventory Accounting.

1. [Sign up](#sign-up) for the Global Inventory Accounting public preview.
1. Sign in to [LCS](https://lcs.dynamics.com/Logon/Index).
1. Go to **Preview feature management**.
1. Select the plus sign (**+**).
1. In the **Code** field, enter the beta key for the Global Inventory Accounting add-in. (You should have received your beta key in email when you signed up.)
1. Select **Unblock**.
1. Open the LCS environment that you want to add the service to.
1. Go to **Full details**.
1. Go to **Power Platform integration**, and select **Setup**.
1. In the **Power Platform environment setup** dialog box, select the check box, and then select **Setup**. Setup usually takes between 60 and 90 minutes.
1. After setup of the Microsoft Power Platform environment is completed, on the **Environment add-ins** FastTab, select **Install a new add-in**.
1. Select **Global Inventory Accounting**.
1. Follow the installation guide, and agree to the terms and conditions.
1. Select **Install**.
1. On the **Environment add-ins** FastTab, you will see that Global Inventory Accounting is being installed. After a few minutes, the status should change from *Installing* to *Installed*. (You might have to refresh the page to see this change.)

## <a name="set-up-the-integration"></a>Set up the integration

Follow these steps to set up the integration between Global Inventory Accounting and Supply Chain Management.

1. Sign in to Supply Chain Management.
1. Go to **System administration \> Feature management**.
1. Select **Check for updates**.
1. On the **All** tab, search for the feature that is named *Global Inventory Accounting*.
1. Select **Enable now**.
1. Go to **Global inventory accounting \> Setup \> Accounting parameters \> Integrations parameters**.
1. In the **Service endpoint data** and **Global inventory accounting endpoint** fields, enter the URLs from the email that the Global Inventory Accounting team sent when you signed up for the preview.

Global Inventory Accounting is now ready for use.
# Mini program that mimics the WeChat client UI; Moments and friends-list data are fetched through an API implemented in Java




API server code: https://github.com/phk422/wechat_interface.git
---
method: get
parameters: true
endpoint: users.info
authentication: true
category: users
permalink: /developer-guides/rest-api/users/info/
---
{% capture fullPath %}{{ "/api/v1/" | append: page.endpoint }}{% endcapture %}
# Info
{% include api/specific_endpoint.html category=page.category endpoint=page.endpoint method=page.method authentication=page.authentication fullPath=fullPath %}
## Query Parameters
{% include api/list_parameters.html category=page.category endpoint=page.endpoint method=page.method authentication=page.authentication fullPath=fullPath %}
## Example Call (Other Users)
```bash
curl -H "X-Auth-Token: 9HqLlyZOugoStsXCUfD_0YdwnNnunAJF8V47U3QHXSq" \
-H "X-User-Id: aobEdbYhXfu5hkeqG" \
http://localhost:3000/api/v1/users.info?userId=BsNr28znDkG8aeo7W
```
## Example Result (Regular User Callee)
```json
{
"user": {
"_id": "nSYqWzZ4GsKTX4dyK",
"type": "user",
"status": "offline",
"active": true,
"name": "Example User",
"utcOffset": 0,
"username": "example"
},
"success": true
}
```
## Example Call (Admin Callee Requesting the User's Rooms)
```bash
curl -H "X-Auth-Token: 9HqLlyZOugoStsXCUfD_0YdwnNnunAJF8V47U3QHXSq" \
-H "X-User-Id: aobEdbYhXfu5hkeqG" \
http://localhost:3000/api/v1/users.info?userId=BsNr28znDkG8aeo7W&fields={"userRooms": 1}
```
## Example Result
{% include api/example_result.html category=page.category endpoint=page.endpoint method=page.method authentication=page.authentication fullPath=fullPath parameters=page.parameters%}
## Change Log
| Version | Description |
| :--- | :--- |
| 0.70.0 | Added `rooms` property to response if the user request it and has the `view-other-user-channels` permission |
| 0.49.0 | Updated to support `userId` or `username` |
| 0.48.0 | Renamed to `users.info` |
| 0.35.0 | Added as `user.info` |
---
description: 'Learn more about: MFC ActiveX Controls: Advanced Topics'
title: 'MFC ActiveX Controls: Advanced Topics'
ms.date: 09/12/2018
helpviewer_keywords:
- MFC ActiveX controls [MFC], error codes
- MFC ActiveX controls [MFC], accessing invisible dialog controls
- MFC ActiveX controls [MFC], advanced topics
- FireError method [MFC]
- MFC ActiveX controls [MFC], database classes
- MFC ActiveX controls [MFC], special keys
- PreTranslateMessage method [MFC]
- MFC ActiveX controls [MFC], parameterized property
- ThrowError method [MFC]
ms.assetid: e9e34abb-8e2d-461e-bb9c-a1aec5dcecbd
ms.openlocfilehash: 1b37c3621c515153f068633b8272420a68a06c4e
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/11/2020
ms.locfileid: "97280737"
---
# <a name="mfc-activex-controls-advanced-topics"></a>MFC ActiveX Controls: Advanced Topics

This article covers advanced topics related to developing ActiveX controls. These include:

- [Using database classes in ActiveX controls](#_core_using_database_classes_in_activex_controls)
- [Implementing a parameterized property](#_core_implementing_a_parameterized_property)
- [Handling errors in your ActiveX control](#_core_handling_errors_in_your_activex_control)
- [Handling special keys in your control](#_core_handling_special_keys_in_your_control)
- [Accessing dialog controls that are invisible at run time](#_core_accessing_dialog_controls_that_are_invisible_at_run_time)

>[!IMPORTANT]
> ActiveX is a legacy technology that should not be used for new development. For more information about modern technologies that supersede ActiveX, see [ActiveX Controls](activex-controls.md).

## <a name="using-database-classes-in-activex-controls"></a><a name="_core_using_database_classes_in_activex_controls"></a> Using database classes in ActiveX controls

Because the ActiveX control classes are part of the class library, you can apply the same procedures and rules used for developing a standard MFC application with the database classes to developing an ActiveX control that uses the MFC database classes.

For a general overview of the MFC database classes, see [MFC Database Classes (DAO and ODBC)](../data/mfc-database-classes-odbc-and-dao.md). That article introduces both the MFC ODBC classes and the MFC DAO classes and guides you to either one.

> [!NOTE]
> DAO is supported through Office 2013. DAO 3.6 is the final version, and it is considered obsolete. The Visual C++ environment and wizards don't support DAO (although the DAO classes are included and you can still use them). Microsoft recommends that you use [OLE DB Templates](../data/oledb/ole-db-programming.md) or [ODBC and MFC](../data/odbc/odbc-and-mfc.md) for new projects. You should only use DAO in maintaining existing applications.

## <a name="implementing-a-parameterized-property"></a><a name="_core_implementing_a_parameterized_property"></a> Implementing a parameterized property

A parameterized property (sometimes called a property array) is a way to expose a homogeneous collection of values as a single property of the control. For example, you can use a parameterized property to expose an array or a dictionary as a property. In Visual Basic, such a property is accessed using array notation:

[!code-vb[NVC_MFC_AxVb#1](codesnippet/visualbasic/mfc-activex-controls-advanced-topics_1.vb)]

Use the Add Property Wizard to implement a parameterized property. The wizard implements the property by adding a pair of Get/Set functions that allow the control user to access the property using the notation above or in the standard way.

Similar to methods and properties, parameterized properties have a limit on the number of parameters allowed. For parameterized properties, the limit is 15 parameters (with one parameter reserved for storing the property value).

The following procedure adds a parameterized property, named Array, that can be accessed as a two-dimensional array of integers.

#### <a name="to-add-a-parameterized-property-using-the-add-property-wizard"></a>To add a parameterized property using the Add Property Wizard

1. Load your control's project.
1. In Class View, expand the library node of your control.
1. Right-click the interface node for your control (the second node of the library node) to open the shortcut menu.
1. On the shortcut menu, click **Add**, and then click **Add Property**.
1. In the **Property Name** box, type `Array`.
1. In the **Property Type** box, select **`short`**.
1. For **Implementation Type**, click **Get/Set Methods**.
1. In the **Get Function** and **Set Function** boxes, type unique names for your Get and Set functions, or accept the default names.
1. Using the **Parameter Name** and **Parameter Type** controls, add a parameter named *row* (of type *short*).
1. Add a second parameter named *column* (of type *short*).
1. Click **Finish**.

### <a name="changes-made-by-the-add-property-wizard"></a>Changes made by the Add Property Wizard

When you add a custom property, the Add Property Wizard makes changes to the control class header (.H) and implementation (.CPP) files.

The following lines are added to your control class .H file:

[!code-cpp[NVC_MFC_AxUI#35](codesnippet/cpp/mfc-activex-controls-advanced-topics_2.h)]

This code declares two functions, called `GetArray` and `SetArray`, that allow the user to request a specific row and column when accessing the property.

In addition, the Add Property Wizard adds the following lines to the control's dispatch map, located in the control class implementation (.CPP) file:

[!code-cpp[NVC_MFC_AxUI#36](codesnippet/cpp/mfc-activex-controls-advanced-topics_3.cpp)]

Finally, the implementations of the `GetArray` and `SetArray` functions are added to the end of the .CPP file. In most cases, you'll modify the Get function to return the value of the property. The Set function will usually contain code that should execute either before or after the property changes.

For this property to be useful, you could declare a two-dimensional array member variable of type **`short`** in the control class to store values for the parameterized property. You could then modify the Get function to return the value stored at the row and column indicated by the parameters, and modify the Set function to update the value referenced by the *row* and *column* parameters.
## <a name="handling-errors-in-your-activex-control"></a><a name="_core_handling_errors_in_your_activex_control"></a> Handling Errors in Your ActiveX Control
If error conditions occur in your control, you may need to report the error to the control container. There are two methods for reporting errors, depending on when the error occurs. If the error occurs within a property's Get or Set function, or within the implementation of an OLE Automation method, the control should call [COleControl::ThrowError](reference/colecontrol-class.md#throwerror), which signals to the control user that an error has occurred. If the error occurs at any other time, the control should call [COleControl::FireError](reference/colecontrol-class.md#fireerror), which fires the stock Error event.
To indicate the kind of error that has occurred, the control must pass an error code to `ThrowError` or `FireError`. An error code is an OLE status code, a 32-bit value. When possible, choose an error code from the standard set of codes defined in the OLECTL.H header file. The following table summarizes these codes.
### <a name="activex-control-error-codes"></a>ActiveX Control Error Codes
|Error|Description|
|-----------|-----------------|
|CTL_E_ILLEGALFUNCTIONCALL|Illegal function call|
|CTL_E_OVERFLOW|Overflow|
|CTL_E_OUTOFMEMORY|Out of memory|
|CTL_E_DIVISIONBYZERO|Division by zero|
|CTL_E_OUTOFSTRINGSPACE|Out of string space|
|CTL_E_OUTOFSTACKSPACE|Out of stack space|
|CTL_E_BADFILENAMEORNUMBER|Bad file name or number|
|CTL_E_FILENOTFOUND|File not found|
|CTL_E_BADFILEMODE|Bad file mode|
|CTL_E_FILEALREADYOPEN|File already open|
|CTL_E_DEVICEIOERROR|Device I/O error|
|CTL_E_FILEALREADYEXISTS|File already exists|
|CTL_E_BADRECORDLENGTH|Bad record length|
|CTL_E_DISKFULL|Disk full|
|CTL_E_BADRECORDNUMBER|Bad record number|
|CTL_E_BADFILENAME|Bad file name|
|CTL_E_TOOMANYFILES|Too many files|
|CTL_E_DEVICEUNAVAILABLE|Device unavailable|
|CTL_E_PERMISSIONDENIED|Permission denied|
|CTL_E_DISKNOTREADY|Disk not ready|
|CTL_E_PATHFILEACCESSERROR|Path/file access error|
|CTL_E_PATHNOTFOUND|Path not found|
|CTL_E_INVALIDPATTERNSTRING|Invalid pattern string|
|CTL_E_INVALIDUSEOFNULL|Invalid use of NULL|
|CTL_E_INVALIDFILEFORMAT|Invalid file format|
|CTL_E_INVALIDPROPERTYVALUE|Invalid property value|
|CTL_E_INVALIDPROPERTYARRAYINDEX|Invalid property array index|
|CTL_E_SETNOTSUPPORTEDATRUNTIME|Set not supported at run time|
|CTL_E_SETNOTSUPPORTED|Set not supported (read-only property)|
|CTL_E_NEEDPROPERTYARRAYINDEX|Need property array index|
|CTL_E_SETNOTPERMITTED|Set not permitted|
|CTL_E_GETNOTSUPPORTEDATRUNTIME|Get not supported at run time|
|CTL_E_GETNOTSUPPORTED|Get not supported (write-only property)|
|CTL_E_PROPERTYNOTFOUND|Property not found|
|CTL_E_INVALIDCLIPBOARDFORMAT|Invalid clipboard format|
|CTL_E_INVALIDPICTURE|Invalid picture|
|CTL_E_PRINTERERROR|Printer error|
|CTL_E_CANTSAVEFILETOTEMP|Can't save file to TEMP|
|CTL_E_SEARCHTEXTNOTFOUND|Search text not found|
|CTL_E_REPLACEMENTSTOOLONG|Replacements too long|
If necessary, use the CUSTOM_CTL_SCODE macro to define a custom error code for a condition not covered by one of the standard codes. The parameter of this macro should be an integer between 1000 and 32767, inclusive. For example:
[!code-cpp[NVC_MFC_AxUI#37](codesnippet/cpp/mfc-activex-controls-advanced-topics_4.cpp)]
If you are creating an ActiveX control to replace an existing VBX control, define your ActiveX control error codes with the same numeric values that the VBX control uses, to ensure that the error codes remain compatible.
## <a name="handling-special-keys-in-the-control"></a><a name="_core_handling_special_keys_in_your_control"></a> Handling Special Keys in the Control
In some cases you may want to handle certain keystroke combinations in a special way; for example, inserting a new line when the ENTER key is pressed in a multiline text box control, or moving among a group of edit controls when a directional arrow key is pressed.
If the base class of your ActiveX control is `COleControl`, you can override [CWnd::PreTranslateMessage](reference/cwnd-class.md#pretranslatemessage) to handle messages before the container processes them. When using this technique, always return **TRUE** if you handle the message in your override of `PreTranslateMessage`.
The following code example demonstrates one possible way of handling any messages related to the directional arrow keys.
[!code-cpp[NVC_MFC_AxUI#38](codesnippet/cpp/mfc-activex-controls-advanced-topics_5.cpp)]
For more information on handling the keyboard interface for an ActiveX control, see the ActiveX SDK documentation.
## <a name="accessing-dialog-controls-that-are-invisible-at-run-time"></a><a name="_core_accessing_dialog_controls_that_are_invisible_at_run_time"></a> Accessing Dialog Controls That Are Invisible at Run Time
You can create dialog controls that have no user interface and are invisible at run time. If an invisible-at-run-time ActiveX control is added to a dialog and [CWnd::GetDlgItem](reference/cwnd-class.md#getdlgitem) is used to access the control, the control will not work properly. Instead, you should use one of the following techniques to obtain an object that represents the control:
- Using the Add Member Variable Wizard, select **Control variable**, and then select the control's ID. Enter a member variable name and select the control's wrapper class as the **Control type**.
-or-
- Declare a local variable and subclass it as a dialog item. Insert code similar to the following (`CMyCtrl` is the wrapper class, and IDC_MYCTRL1 is the control's ID):
[!code-cpp[NVC_MFC_AxCont#19](codesnippet/cpp/mfc-activex-controls-advanced-topics_6.cpp)]
## <a name="see-also"></a>See also
[MFC ActiveX Controls](mfc-activex-controls.md)
# Android Animation
- Frame animation (AnimationDrawable): built from a sequence of images, similar to playing a film; requires many image resources
- Tween animation (does not change the view's actual position): rotate (RotateAnimation), alpha (AlphaAnimation), scale (ScaleAnimation), translate (TranslateAnimation)
- Property animation (changes the view's actual position): ValueAnimator, ObjectAnimator
- Layout animation: add the attribute android:layoutAnimation="@anim/layout_animation", android:animateLayoutChanges, LayoutTransition
- Shared element (scene transition): two screens contain the same view with the same id, or with the attribute android:transitionName="@string/share_element" set to the same value
## Frame Animation
###### Can easily cause OOM; avoid using too many large images
```java
// Frame animation resource file (in res/drawable): frame.xml
<animation-list android:oneshot="true">
<item android:drawable="@drawable/theme_translation_turn_day_01" android:duration="110"/>
<item android:drawable="@drawable/theme_translation_turn_day_02" android:duration="110"/>
<item android:drawable="@drawable/theme_translation_turn_day_03" android:duration="110"/>
<item android:drawable="@drawable/theme_translation_turn_day_04" android:duration="110"/>
<item android:drawable="@drawable/theme_translation_turn_day_05" android:duration="110"/>
<item android:drawable="@drawable/theme_translation_turn_day_06" android:duration="110"/>
</animation-list>
view.setBackgroundResource(R.drawable.frame);
AnimationDrawable ad = (AnimationDrawable) view.getBackground();
ad.start();
```
## Tween Animation
###### A tween animation can animate a view visually, but it does not change the view's internal property values

#### Alpha Animation
```java
// Java code
AlphaAnimation alpha = new AlphaAnimation(0,1);
alpha.setDuration(1000); // duration in ms
view.startAnimation(alpha);
// XML file (res/anim): alpha.xml
<?xml version="1.0" encoding="utf-8"?>
<alpha xmlns:android="http://schemas.android.com/apk/res/android"
android:fromAlpha="0"
android:toAlpha="1"
android:duration="3000"/>
view.startAnimation(AnimationUtils.loadAnimation(getActivity(),R.anim.alpha));
```
#### Rotate Animation
```java
// Java code
// new RotateAnimation(0, 360, 100, 50); // rotate from 0 to 360 degrees around the point (100, 50)
// new RotateAnimation(0, 360, Animation.RELATIVE_TO_SELF, 0.5f, Animation.RELATIVE_TO_SELF, 0.5f); // rotate from 0 to 360 degrees around the view's own center
RotateAnimation rotate = new RotateAnimation(0, 360); // rotate from 0 to 360 degrees
rotate.setDuration(1000);
rotate.setFillAfter(true); // stay at the end position
view.startAnimation(rotate);
// XML file (res/anim): rotate.xml
<?xml version="1.0" encoding="utf-8"?>
<rotate
xmlns:android="http://schemas.android.com/apk/res/android"
android:fromDegrees="0"
android:toDegrees="360"
android:pivotX="50%"
android:pivotY="50%"
android:duration="3000"/>
view.startAnimation(AnimationUtils.loadAnimation(getActivity(),R.anim.rotate));
```
#### Scale Animation
```java
// Java code
// new ScaleAnimation(0, 1, 0, 1, 100, 50); // scale relative to the point (100, 50)
// new ScaleAnimation(0, 1, 0, 1, Animation.RELATIVE_TO_SELF, 0.5f, Animation.RELATIVE_TO_SELF, 0.5f); // scale relative to the view's own center
ScaleAnimation scale = new ScaleAnimation(0, 1, 0, 1); // scale up from 0 to full size
scale.setDuration(1000);
scale.setFillAfter(true); // stay at the end position
view.startAnimation(scale);
// XML file (res/anim): scale.xml
<?xml version="1.0" encoding="utf-8"?>
<scale
xmlns:android="http://schemas.android.com/apk/res/android"
android:fromXScale="0"
android:toXScale="1"
android:fromYScale="0"
android:toYScale="1"
android:pivotX="50%"
android:pivotY="50%"
android:duration="3000"/>
view.startAnimation(AnimationUtils.loadAnimation(getActivity(),R.anim.scale));
```
#### Translate Animation
```java
// Java code
TranslateAnimation translate = new TranslateAnimation(0, 200, 0, 200); // move 200px right and 200px down relative to its own position
translate.setDuration(1000);
translate.setFillAfter(true); // stay at the end position
view.startAnimation(translate);
// XML file (res/anim): translate.xml
<?xml version="1.0" encoding="utf-8"?>
<translate
xmlns:android="http://schemas.android.com/apk/res/android"
android:fromXDelta="0"
android:toXDelta="200"
android:fromYDelta="0"
android:toYDelta="200"
android:duration="3000"/>
view.startAnimation(AnimationUtils.loadAnimation(getActivity(),R.anim.translate));
```
#### Combined Animations (AnimationSet)
```java
// Java code
AnimationSet set = new AnimationSet(true); // true: children share the set's interpolator
set.setDuration(2000);
AlphaAnimation alpha = new AlphaAnimation(0, 1);
alpha.setDuration(1000); // duration in ms
set.addAnimation(alpha);
TranslateAnimation translate = new TranslateAnimation(200, 0, 200, 0); // move from an offset of 200px right/down back to the original position
translate.setDuration(1000);
set.addAnimation(translate);
view.startAnimation(set);
// XML file (res/anim): set.xml
<?xml version="1.0" encoding="utf-8"?>
<set
xmlns:android="http://schemas.android.com/apk/res/android"
android:duration="3000"
android:shareInterpolator="true">
<alpha
android:fromAlpha="0"
android:toAlpha="1"/>
<translate
xmlns:android="http://schemas.android.com/apk/res/android"
android:fromXDelta="200"
android:fromYDelta="200"
android:toXDelta="0"
android:toYDelta="0"/>
</set>
view.startAnimation(AnimationUtils.loadAnimation(getActivity(),R.anim.set));
```
#### Animation Listener
```java
TranslateAnimation translate = new TranslateAnimation(200, 0, 200, 0); // move from an offset of 200px right/down back to the original position
translate.setDuration(1000);
translate.setAnimationListener(new Animation.AnimationListener() {
@Override
public void onAnimationStart(Animation animation) {
// animation started
}
@Override
public void onAnimationEnd(Animation animation) {
// animation ended
}
@Override
public void onAnimationRepeat(Animation animation) {
// animation repeated
}
});
```
#### Custom Tween Animation
- Extend Animation and override initialize() and applyTransformation()
- initialize() performs the initialization work
- applyTransformation() performs the matrix transformation (the Camera class simplifies matrix transformations)
## Property Animation
###### Introduced in API 11: ValueAnimator, ObjectAnimator, AnimatorSet. The effect is to change an object's property from one value to another over a time interval. The nineoldandroids library can be used for compatibility with earlier versions
###### When initializing an animation with ofObject, you must call setEvaluator to set an evaluator explicitly, because the system has no way of knowing what type the animation's intermediate Object values really are
#### ValueAnimator and ObjectAnimator workflow

#### ValueAnimator
###### ValueAnimator performs the animation calculation over a specified value range; we listen to the calculation process and operate on the view ourselves.
- ValueAnimator is only responsible for running the animation calculation over the specified numeric range
- We need to listen to the calculation process and then animate the view ourselves
##### Usage
- Create an instance: ofInt, ofFloat, ofObject (an evaluator must be specified)
- Add a listener
#### ObjectAnimator
- To build an animation with ObjectAnimator, the target object must have a set method for the corresponding property
- The setter must be named in camel case: set followed by each word with its first letter capitalized; setPropertyName corresponds to the property propertyName
- Only when the animation has a single transition value does the system call the property's get function to obtain the animation's initial value.
#### PropertyValuesHolder
```markdown
PropertyValuesHolder holds the property to animate and its corresponding values. An animation built with ofFloat(Object target, String propertyName, float… values) internally wraps the arguments into a PropertyValuesHolder instance to store the animation state. Once wrapped, subsequent operations are carried out through the PropertyValuesHolder.
```
#### Interpolator
###### You can change a value's position over time by overriding the interpolator, which alters the numeric progress
#### Evaluator
###### A converter: it maps the fractional progress to the corresponding value. You can change the value produced for a given progress by customizing the Evaluator.
- ofInt(0,400) specifies the animation's numeric range: from 0 to 400
- Interpolator: after the animation starts, the interpolator returns the numeric progress corresponding to the current animation progress, expressed as a fraction, e.g. 0.2
- Evaluator: the listener receives the concrete value for the current animation, not the fractional progress, so an Evaluator is needed to convert the fraction returned by the interpolator into the corresponding value
- Listener: in an AnimatorUpdateListener, call getAnimatedValue() to obtain the value returned by the Evaluator. The Evaluator is a converter that turns the fractional progress into the corresponding value
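The interpolator/evaluator split above can be sketched in plain Java with no Android dependency. The class and method names here are illustrative stand-ins that mirror what IntEvaluator.evaluate does, not the Android API itself:

```java
// Sketch of the evaluator step: given the fractional progress produced by the
// interpolator, compute the concrete animated value for the listener.
public class EvaluatorSketch {
    // Linear evaluation: start + fraction * (end - start)
    static int evaluate(float fraction, int startValue, int endValue) {
        return (int) (startValue + fraction * (endValue - startValue));
    }

    public static void main(String[] args) {
        // For ofInt(0, 400): halfway through, the animated value is 200.
        System.out.println(evaluate(0.0f, 0, 400)); // 0
        System.out.println(evaluate(0.5f, 0, 400)); // 200
        System.out.println(evaluate(1.0f, 0, 400)); // 400
    }
}
```

A custom TypeEvaluator only needs to change this mapping; a non-linear feel usually belongs in the interpolator instead.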
## Layout Animation
#### The child views of the layout appear one after another
```java
LinearLayout root = View.inflate(this, R.layout.root, null);
ScaleAnimation scale = new ScaleAnimation(0,1,0,1);
scale.setDuration(3000);
LayoutAnimationController lac = new LayoutAnimationController(scale,0.5f);
lac.setOrder(LayoutAnimationController.ORDER_REVERSE); // child view order: from bottom to top
root.setLayoutAnimation(lac);
```
## Layout Change Animation
###### Adding and removing views
```java
private void addBtn(){
Button btn = new Button(this);
btn.setText("add");
root.addView(btn);
root.setLayoutTransition(new LayoutTransition());
btn.setOnClickListener(new View.OnClickListener(){
@Override
public void onClick(View view){
root.removeView(view);
}
});
}
```
## Layout Animation for a ListView
```java
private ListView mListView;
private ArrayAdapter<String> mAdapter;
mAdapter=new ArrayAdapter<>(this,android.R.layout.simple_list_item_1,new String[]{"123","456"});
mListView.setAdapter(mAdapter);
ScaleAnimation scale = new ScaleAnimation(0,1,0,1);
scale.setDuration(3000);
LayoutAnimationController lac = new LayoutAnimationController(scale,0.5f);
mListView.setLayoutAnimation(lac);
```
## Configuring Layout Animation in Resource Files
```java
// Animation file (res/anim): scale.xml
<?xml version="1.0" encoding="utf-8"?>
<scale
xmlns:android="http://schemas.android.com/apk/res/android"
android:fromXScale="0"
android:toXScale="1"
android:fromYScale="0"
android:toYScale="1"
android:pivotX="50%"
android:pivotY="50%"
android:duration="3000"/>
// LayoutAnimationController configuration file (res/anim): list_anim.xml
<?xml version="1.0" encoding="utf-8"?>
<layoutAnimation
xmlns:android="http://schemas.android.com/apk/res/android"
android:animation="@anim/scale"
android:delay="0.5"/>
<ListView
android:id="@+id/list_view"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layoutAnimation="@anim/list_anim"/>
```
# React_Echarts
A project for testing ECharts with a variety of requirements
# CIR2 Project - Group 10
Group members: Eloi ANSELMET, Pierre BOISLEVE, Hugo MERLE, Tristan ROUX
# Project context:
- End-of-year project 2019/2020 for the CIR program, second cohort, at ISEN Yncrea Ouest - Nantes campus - Carquefou 44470
- Project goal: create / simulate a flight ticket booking website
- Versions of the tools used:
  - Server: RedHat (AWS)
  - PostgreSQL: 9.2.24
  - Apache2: 2.4.43
# System architecture:
- A database containing the flights, cities, taxes, prices, orders, and customers
- The project was developed entirely in PHP and SQL on the server side and in HTML, CSS (using Bootstrap), and JS on the client side.
# Feature list:
- From the index page (flight search page):
  - Enter the search parameters
  - If the departure or arrival airport does not exist, an error is returned to the user
  - If no flight is available for this route, an error is returned to the user
  - If the user enters more than 9 passengers, an error is returned to the user
  - Finally, if all the parameters are valid and flights exist, the user is sent to the flight results page
- From the affichageVol page (search results page):
  - Sort the flight list in ascending or descending order.
  - Choose the maximum price to display using the slider.
  - See the flights available on the following and previous days using the arrows
  - Return to the search page via the breadcrumb
  - Select a flight from the list to proceed to confirmation
- From the confirmationVol page (confirmation page for the flight chosen on the previous page)
  - Enter the last name, first name, email address, date of birth, and number of checked bags for each adult passenger
  - Enter the last name, first name, date of birth, and number of checked bags for each child passenger
  - Dynamic display of the route on a map
  - Possibility to return to the previous flight list
  - Confirm the choice with the validate button
- After confirming these choices:
  - Summary of the order, passenger by passenger
  - Total ticket price and price per child and per adult
  - Possibility to confirm or cancel the order with 2 buttons
  - If the order is confirmed, it is stored in the flights database, the user is returned to the index with a confirmation message, and a message is sent to their email inbox
  - If the order is cancelled, it is deleted from the database and the user is returned to the index with an order cancellation message
- User menu:
  - The user can log in with their email address and date of birth,
  - They will see all the orders they have placed
  - The user can cancel their bookings and those of their children
- Administrator menu:
  - The administrator can see the list of all flights booked on the site, and see the users' email addresses and dates of birth
  - The administrator can cancel any ticket
# Not Your Average Quiz

## Description
Take a challenge quiz using simple HTML, JS, and CSS. I used local storage to store the highscores.
Screenshot:

## Table of Contents
* [Installation](#installation)
* [Usage](#usage)
* [License](#license)
* [Contributing](#contributing)
* [Tests](#tests)
* [Questions](#questions)
## Installation
1. Clone this app to your desktop
2. Open it using VS Code
3. Go to index.html
4. Press Option + B to open it in the browser
## Usage
Free to use; have fun :)
## License
This project is licensed under the APACHE 2.0 license.
## Contributing
## Questions
If you have any questions about the repo, open an issue or contact me directly at mariohernandezk10@gmail.com. You can find more of my work at [MarioHernandez](https://github.com/mariohernandezk10/note_taker).
# Datadog Agent release for BOSH
* For Debian and RHEL/CentOS based stemcells
* Automatically defines tags based on deployments, names and jobs
* Process, network, ntp and disk integrations by default
* Monit processes are added automatically to process integration
* You can define additional integrations
# What this does
This includes the Debian and RHEL/CentOS releases in the package and unpacks them in the dd-agent directory.
While a source install would be preferable, we're balancing a number of concerns.
1. We want to ensure a consistent deployment among all customers.
1. We want to ensure a quick deployment.
Compiling Python takes a very long time (it took up to 30 minutes for Python alone on some machines we tested it on).
We also saw some disparities on some machines. It can create issues if it stomps on the system Python (which is hard to avoid in some cases), and it sometimes doesn't work.
So, our solution for this was to use our embedded python that we already have in our packages and unpack those packages (rather than installing them).
# Versioning
The BOSH release version follows the scheme:
`cf_major.cf_minor.agent_version`
The first two parts are the BOSH packaging versioning, and the third part is the packaged agent version, without the dot separator.
# Configuration
Upload the release to Bosh director
Create a `runtime-config.yaml` file:
```
releases:
- name: datadog-agent
version: 1
addons:
- name: dd-agent
jobs:
- name: dd-agent
release: datadog-agent
properties:
dd:
use_dogstatsd: yes
api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
tags: ["datadog", "bosh", "bosh:bosh-exp"]
tags:
owner: datadog
email: support@datadoghq.com
```
Upload runtime-config to Bosh Director: `bosh update runtime-config runtime-config.yaml`
Re-deploy the deployments to automatically add the agent.
# Development
This repository contains only the packaging pieces for the agent to be deployed with [Cloud Foundry BOSH][2].
The Datadog Agent itself is added as a blob, and contributions to it are accepted [here][1].
See [here][3] for more information about the structure of a BOSH release.
# Authors
Datadog (opensource@datadoghq.com)
Based on the [datadog-agent-boshrelease][4] project by [Springer Nature Platform Engineering][5]
# Contribute
If you notice a limitation or a bug with the BOSH packaging of the agent, feel free to open a Github issue or submit a PR on this repository.
If the issue is with the agent itself, check out the [Datadog Agent repository][1] for issues and PR.
# Support
Support for this project is available via standard [Datadog support channels][6].
# License
[Apache 2.0 License](LICENSE)
[1]: https://github.com/DataDog/datadog-agent
[2]: https://github.com/cloudfoundry/bosh
[3]: https://bosh.io/docs/create-release/
[4]: https://github.com/SpringerPE/datadog-agent-boshrelease
[5]: https://github.com/SpringerPE/
[6]: http://docs.datadoghq.com/help/
# Write the Docs Quorum
This is the place to learn about the Write the Docs Quorum pilot program.
## :sparkles: What is Write the Docs Quorum?
The Quorum program brings together various local [Write the Docs](https://www.writethedocs.org/) meetup chapters that are in a common time zone to provide quarterly super meetups over Zoom throughout the year.
These are quarterly regional remote meetups, hence the name: QRRM.
Write the Docs is running the Quorum program as a pilot in the U.S. East Coast region and U.S. West Coast regions for 2021. If the pilot program is successful and there is sufficient interest, the program might expand to EMEA and APEC regions later.
The following U.S. East Coast and Central meetups are currently participating in Quorum:
- [Austin, TX](https://www.meetup.com/WriteTheDocs-ATX-Meetup/)
- [Detroit, MI/Windsor, CAN](https://www.meetup.com/write-the-docs-detroit-windsor/)
- [Florida](https://www.meetup.com/write-the-docs-florida/)
- [New England](https://www.meetup.com/ne-write-the-docs/)
- [Philadelphia, PA](https://www.writethedocs.org/meetups/philly/)
- [Toronto, ON, CAN](https://www.meetup.com/Write-The-Docs-Toronto/)
- [Washington, D.C.](https://www.meetup.com/Write-the-Docs-DC/)
The following U.S. West Coast, Mountain, and Australian meetups are currently participating in Quorum:
- [Bay Area, CA](https://www.meetup.com/Write-the-Docs-Bay-Area/)
- [Los Angeles, CA](https://www.meetup.com/Write-the-Docs-LA/)
- [Portland, OR](https://www.meetup.com/Write-The-Docs-PDX/)
- [Seattle, WA](https://www.meetup.com/Write-The-Docs-Seattle/)
- [Australia](https://www.meetup.com/Write-the-Docs-Australia/)
Quorum pilot coordinator for 2021:
Alyssa Rock - @barbaricyawps on GitHub; feel free to DM her on the [Write the Docs Slack workspace](https://www.writethedocs.org/slack/) any time!
## :link: Meetup links
We have a separate parent Meetup for each quorum program.
Local chapter organizers will announce upcoming events and direct their members to join the parent Meetup to RSVP for events.
- [U.S. East Coast and Central](https://www.meetup.com/virtual-write-the-docs-east-coast-quorum/)
- [U.S. West Coast and Mountain](https://www.meetup.com/virtual-write-the-docs-west-coast-quorum/)
## :calendar: Meetup schedules
Meetups are held quarterly.
On a rotating basis, a different meetup will be responsible for the quarterly super meetup.
If it is safe to do so (coronavirus notwithstanding), individual chapters can still meet in person during the off months for social networking or for their own local educational presentation events.
Individual chapters are also welcome to organize in-person events to view the super meetups.
As a general rule, East Coast meetups will occur the first month of the quarter and West Coast meetups will occur in the second month of the quarter.
The off months for each quarter can act as a backup for meetings if needed.
We'll have a planning meeting the month before each meetup to plan the logistics for the upcoming quarter.
See [Quorum meetup schedule and calendar (detailed)](meetup-schedule-detailed.md) for more information.
Quorum meetup events will ideally occur during the 3rd week of the month, with the 4th week acting as a backup if needed. We offer speakers Monday through Thursday as the range of acceptable days of the week for presentations.
Meetups will be held at:
- 7:00 p.m. Eastern / 6:00 p.m. Central for the East Coast
- 7:00 p.m. Pacific / 8:00 p.m. Mountain for the West Coast (possibly subject to change)
## :hourglass_flowing_sand: Meeting agenda and times
Our meetings will last an hour and will follow this agenda and time structure:
- **7:00 to 7:10 - Social networking time** - For the first 10 minutes, remote attendees join the Zoom call and socialize.
- **7:10 to 7:15 - Announcements** - The emcee will give announcements and introduce the speaker.
- **7:15 to 7:45 - Presentation and Q&A** - The speaker can spend about 30-45 minutes giving a presentation, which includes the Q&A portion. The Q&A can expand to fill the time if the presentation is short.
- **7:45 to 8:00 - Breakout rooms by meetup** - We'll use Zoom breakout rooms to have people meet with their individual meetup organizers to say hi to other people in their meetup. Organizers can use that time to talk about job openings, talk about future meetups, and socialize.
See [Meeting agenda (detailed)](meeting-agenda-detailed.md) more detailed meeting instructions. See also: [Emcee script](emcee-script.md).
## :mega: Quorum meetup event publicity
Quorum events will be created on the parent Quorum meetup and then the local meetups will publish an announcement that tells their members about that event.
These announcements will encourage their local members to register at the Quorum Meetup event. Local meetups will not publish their own version of the event.
We want to ensure that we not only drive traffic from local meetups to Quorum, but that we also drive traffic back to the local meetups.
Traffic is hopefully driven back to local meetups by:
- Hosting breakout rooms at the end of the Quorum meetups in which local meetup leaders can connect in smaller groups with people who came to the Quorum meetup that are from a common area.
- Post links to each of the sponsoring local meetup groups in the Quorum meetup event details.
See [Quorum meetup publicity (detailed)](meetup-publicity-detailed.md) for more information about publicity.
## :trophy: Advantages of participating
If you are a local WTD meetup organizer, participating in WTD Quorum has many potential advantages:
- **Sharing the workload** - Volunteering as a WTD meetup organizer can involve a lot of work depending on how much support you have from the members and other organizers in your chapter.
Quorum super meetups can take some of the pressure off by reducing the number of meetups you have to coordinate on your own.
- **Attracting high quality speakers** - Because remote attendance increases audience sizes, high quality speakers can be better guaranteed they will have a good audience.
Remote meetups can also attract high quality speakers by allowing them to present from their preferred location, reducing the burden on their time.
- **Expanding access to high quality content** - Some meetups have a greater pool of potential high quality presenters to draw on than others simply because of geographic density.
Remote meetups allow WTD members from less dense geographic regions to have access to good content without needing to attend in person.
- **Reducing the need for venues or sponsors** - In-person meetings usually need a venue or sponsor to be successful. Coordinating sponsors and venues is frequently challenging for meetup organizers.
Remote meetups don’t have these needs.
## :heavy_check_mark: Responsibilities of participating meetups
Participating local meetup organizers agree to:
- Find a speaker for one approximately one event on a rotating basis.
- Either emcee or find an emcee for their event.
- Help promote each regional event with their meetup members--not just the month that they arranged the speaker.
- Attend as many regional super meetups for your region as possible.
- (Optional): It could help to have 1-2 core team members who can assist in coordinating the Zoom calls and communicating with meetup organizers.
For more information and tips for organizing a successful meetup when it's your turn, see [Organizing a meetup](meetup-organizing.md).
If you need to act as the Zoom coordinator for the event, see the [Zoom coordinator guide](zoom-coordinator-guide).
## :raised_hand: How to participate in Quorum
We can launch a new quorum in a region if we have at least 4 or more local meetups in a given region that are interested in participating.
To join the discussion, join the [Write the Docs Slack](https://www.writethedocs.org/slack/) and add yourself to the `#meetup-organizers-quorum`.
You can also send a direct message on Slack to Alyssa Rock, the current Quorum coordinator.
We also have a mailing list on [wtd-quorum on groups.io](https://groups.io/g/wtd-quorum).
We do our best to cross-post from Slack to the mailing list for archiving purposes.
## :clipboard: Viewing issues and project board for this repository
This repository uses the **ZenHub for Github** extension to manage the project board for this repository. You need to add this extension to your browser to view the project board.
This extension is available for Chrome and Firefox browsers. To get either extension, visit [ZenHub Browser Extension](https://www.zenhub.com/extension).
After installing the extension, you might be prompted to sign in using your GitHub account information.
## :hospital: Will Quorum meetups continue after the coronavirus pandemic?
The long-term plan is that regional Quorum events will become a permanent part of Write the Docs, assuming it provides value and there is sufficient interest from participants.
| 63.783217 | 280 | 0.769872 | eng_Latn | 0.997602 |
# Chaos experiments
Below you will find all the information about the chaos experiments defined in the project.
You can execute:
- Chaos using Spring Boot Chaos Monkey interacting directly with the framework
- Chaos using Chaos Toolkit and Spring Boot Chaos Monkey
- Chaos using shell scripts
## 1. Define steady states
Define a metric to check a steady state of your service and of course your entire system. __Start small__ with a service that is not critical.
## 2. Do not start in production
Of course, you can start in production, but keep in mind…
> The best place on earth is ... production!
> Josh Long
... so let’s keep production as the best place on earth and look for our first experiences on another stage. If all goes well, and you’re confident, run it in production.
## 3. Fire the Chaos Using Spring Boot Chaos Monkey
This project provides a Chaos Monkey for Spring Boot applications and will try to attack your running Spring Boot App.
> Everything from getting started to advanced usage is explained in the [Documentation for Chaos Monkey for Spring Boot](https://codecentric.github.io/chaos-monkey-spring-boot/latest/)
If Spring Boot Chaos Monkey is on your classpath and activated with profile name chaos-monkey, it will automatically hook into your application.
Now you can activate [watchers](https://codecentric.github.io/chaos-monkey-spring-boot/latest/#watchers), which look for classes to [assault](https://codecentric.github.io/chaos-monkey-spring-boot/latest/#assaults). There are also [runtime assaults](https://codecentric.github.io/chaos-monkey-spring-boot/latest/#runtime-assaults), which attack your whole application.

### Configure Spring Boot Chaos Monkey
The project comes with the Spring Boot Chaos Monkey configured for the following services:
* `customers-service`
* `visits-service`
* `vets-service`
This is done by adding the following dependency:
```
<!-- Chaos Monkey -->
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>chaos-monkey-spring-boot</artifactId>
</dependency>
```
And activating the Spring Configuration at global level (`application.yml`):
```
# Chaos Engineering
---
spring:
config:
activate:
on-profile: chaos-monkey
management.endpoint.chaosmonkey.enabled: true
chaos:
monkey:
enabled: true
watcher:
component: false
controller: false
repository: false
rest-controller: false
service: false
```
You can read more about the possible configuration options [here](https://codecentric.github.io/chaos-monkey-spring-boot/latest/#_properties).
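One detail worth noting: the HTTP endpoint described in the next section is only reachable if the `chaosmonkey` actuator endpoint is also exposed over the web. This is standard Spring Boot actuator behavior rather than something specific to this project; a minimal sketch (an assumption, not necessarily the project's exact configuration) looks like:

```
management:
  endpoints:
    web:
      exposure:
        include: health,info,chaosmonkey
```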
### Spring Boot Chaos Monkey HTTP Endpoint
| ID | Description | Methods |
| -------------------------------------- | ------------------------------------ | ------- |
| `/chaosmonkey` | Running Chaos Monkey configuration | GET |
| `/chaosmonkey/status` | Is Chaos Monkey enabled or disabled? | GET |
| `/chaosmonkey/enable` | Enable Chaos Monkey | POST |
| `/chaosmonkey/disable` | Disable Chaos Monkey | POST |
| `/chaosmonkey/watchers` | Running Watchers configuration. | GET |
| `/chaosmonkey/watchers` | Change Watchers Configuration | POST |
| `/chaosmonkey/assaults` | Running Assaults configuration | GET |
| `/chaosmonkey/assaults` | Change Assaults configuration | POST |
| `/chaosmonkey/assaults/runtime/attack` | Execute configured runtime Assault | POST |
### Assault Example
`POST` Assaults
Request to enable Latency & Exception Assault
`/chaosmonkey/assaults` - Request
```
{
"level": 5,
"latencyRangeStart": 2000,
"latencyRangeEnd": 5000,
"latencyActive": true,
"exceptionsActive": true,
"killApplicationActive": false
}
```
`/chaosmonkey/assaults` - Response `200 OK`
```
Assault config has changed
```
Define specific method attacks
`/chaosmonkey/assaults` - Request
```
{
"level": 5,
"latencyRangeStart": 2000,
"latencyRangeEnd": 5000,
"latencyActive": true,
"exceptionsActive": true,
"killApplicationActive": false,
"watchedCustomServices": [
"com.example.chaos.monkey.chaosdemo.controller.HelloController.sayHello",
"com.example.chaos.monkey.chaosdemo.controller.HelloController.sayGoodbye"
]
}
```
`/chaosmonkey/assaults` - Response `200 OK`
```
Assault config has changed
```
Define custom Exceptions
`/chaosmonkey/assaults` - Request
```
{
"level": 5,
"latencyRangeStart": 2000,
"latencyRangeEnd": 5000,
"latencyActive": true,
"exceptionsActive": true,
"killApplicationActive": false,
"exception": {
"type": "java.lang.IllegalArgumentException",
"arguments": [
{
"className": "java.lang.String",
"value": "custom illegal argument exception"
}
]
}
}
```
`/chaosmonkey/assaults` - Response `200 OK`
```
Assault config has changed
```
### Fire the Chaos Interacting Directly With Spring Boot Chaos Monkey
In order to activate Spring Boot Chaos Monkey's assault options and component instrumentation, you need to call the project's API.
For your convenience we're providing a [script](./scripts/chaos) that turns on various watchers and attacks. To print out the usage description just call the script without any parameters.
```
$ ./chaos/chaos-monkey/call_chaos.sh
usage: ./scripts/chaos/call_chaos.sh: <customers|visits|vets> <attacks_enable_exception|attacks_enable_killapplication|attacks_enable_latency|attacks_enable_memory|watcher_enable_component|watcher_enable_controller|watcher_enable_repository|watcher_enable_restcontroller|watcher_enable_service|watcher_disable>
First pick either customers, visits or vets
Then pick what to enable. Order matters!
Example
./scripts/chaos/call_chaos.sh visits attacks_enable_exception watcher_enable_restcontroller
```
The script takes at minimum 2 parameters.
* The first provides the name of the application for which you want to turn on Chaos Monkey features.
* The subsequent ones enable the attacks and watchers.
The name of the desired feature maps to a json file that gets posted to `http://localhost:${PORT}/actuator/chaosmonkey/assaults` and `http://localhost:${PORT}/actuator/chaosmonkey/watchers` respectively. Example of enabling the exception assault via rest controllers for the customers or visits microservice:
```
$ ./call_chaos.sh customers attacks_enable_exception watcher_enable_restcontroller
```
or
```
$ ./call_chaos.sh visits attacks_enable_exception watcher_enable_restcontroller
```
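Conceptually, the script maps the application name to a port and the feature name to an actuator endpoint before posting the matching JSON file. The sketch below illustrates that mapping; the port numbers are assumptions for illustration, not necessarily the ones this project uses:

```shell
# Illustrative mapping from script arguments to the target actuator URL.
# Ports are placeholders; check the project's compose/application config for the real ones.
APP="visits"
FEATURE="attacks_enable_exception"

case "$APP" in
  customers) PORT=8081 ;;
  visits)    PORT=8082 ;;
  vets)      PORT=8083 ;;
esac

case "$FEATURE" in
  attacks_*) ENDPOINT="assaults" ;;
  watcher_*) ENDPOINT="watchers" ;;
esac

echo "POST http://localhost:${PORT}/actuator/chaosmonkey/${ENDPOINT} <- ${FEATURE}.json"
```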
The default assault configuration is set to fail every 5th request. That means that the first four will work as if Chaos Monkey were disabled.
To disable the attack, execute the following command:
```
$ ./call_chaos.sh customers attacks_disable watcher_disable
```
or
```
$ ./call_chaos.sh visits attacks_disable watcher_disable
```
## 4. Fire the Chaos Using Chaos Toolkit & Spring Boot Chaos Monkey
[Chaos Toolkit](https://chaostoolkit.org/) has support for Chaos Monkey for Spring Boot so that you can run a variety of chaos engineering probes and actions against your Spring Boot applications and services including:
- __Enabling and Disabling the Chaos Monkey__ on a specific service at runtime (useful for turning on the Chaos Monkey support for only the duration of your chaos engineering experiment).
- __Enabling and Configuring chaos assaults__ on a specific service at runtime.
- __Inspecting and recording the configuration of the Chaos Monkey’s watcher and assaults__ through probes in your experiment. Useful for capturing this information for further analysis after your experiments conclude.
Install Chaos Toolkit by following the [official documentation](https://chaostoolkit.org/reference/usage/install/).
Set up a Python virtual environment and install [chaostoolkit-spring](https://chaostoolkit.org/drivers/spring/):
```
$ python3 -m venv ~/.venvs/chaostk
$ source ~/.venvs/chaostk/bin/activate
$ pip install chaostoolkit-spring
```
Run the experiment:
```
chaos run ./chaos/chaos-toolkit/[service]/[experiments].json
```
You can find all the Chaos Toolkit experiments in `./chaos/chaos-toolkit`
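To give an idea of their shape, the sketch below is an illustrative Chaos Toolkit experiment that enables Chaos Monkey through the `chaosspring` driver; the service URL, port, and names are assumptions, not the exact contents of the files in this repository:

```
{
  "version": "1.0.0",
  "title": "Visits service stays healthy while Chaos Monkey is enabled",
  "description": "Illustrative experiment sketch.",
  "steady-state-hypothesis": {
    "title": "Visits service responds",
    "probes": [
      {
        "type": "probe",
        "name": "visits-must-respond",
        "tolerance": 200,
        "provider": {
          "type": "http",
          "url": "http://localhost:8082/actuator/health"
        }
      }
    ]
  },
  "method": [
    {
      "type": "action",
      "name": "enable_chaosmonkey",
      "provider": {
        "type": "python",
        "module": "chaosspring.actions",
        "func": "enable_chaosmonkey",
        "arguments": {
          "base_url": "http://localhost:8082/actuator"
        }
      }
    }
  ]
}
```

Here `tolerance: 200` means the steady-state probe expects an HTTP 200 from the health endpoint before and after the method runs.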
## 5. Fire the Chaos Using Shell Scripts
You can execute additional chaos experiments using the collection of shell scripts located in `./chaos/chaos-scripts`.
To run the experiments, just execute any of the shell scripts and wait for the magic to happen ;-)
## 6. Implement active application monitoring
Check your monitoring and check if you can see the overall state of your system. There are many great tools out there to get a pleasant feeling about your entire system.
## 7. References
- https://principlesofchaos.org/?lang=ENcontent
- https://medium.com/chaos-toolkit/chaos-toolkit-loves-chaos-monkey-for-spring-boot-548352985c8f
- https://www.linkedin.com/pulse/chaos-engineering-introduction-using-spring-boot-chellimuthu/
# 2.2 Installation from Source Build
<a name="kliWz"></a>
# 1. Building SREWorks from Source
<a name="xPY76"></a>
## Preparing the Build Environment
- Kubernetes version **1.20** or later
- A machine with the `git` / `docker` commands installed
- A container image registry that you can push the built images to (via `docker push`)
- The source build includes an in-Pod image build stage, so it requires more server resources than the quick-install option (three nodes with 4 cores and 16 GB RAM each)

<a name="naB3D"></a>
## Pulling the SREWorks Source Code
```shell
git clone http://github.com/alibaba/sreworks.git -b v1.1 sreworks
cd sreworks
SW_ROOT=$(pwd)
```
<a name="bIQPN"></a>
## Building the SREWorks Base Container Images
In the `sreworks` directory, run the build script locally:
```shell
./build.sh --target all --build --tag v1.1
```
<a name="us2zd"></a>
## Pushing SREWorks to a Registry
Publish the build artifacts to an image registry, replacing the `SW_REPO` variable with your own container image registry.
```shell
SW_REPO="your-registry.***.com/sreworks"
docker login --username=sre****s your-registry.***.com
./build.sh --target all --push $SW_REPO --tag v1.1
```
<a name="jiRmc"></a>
# 2. Deploying SREWorks & Building the Operations App Container Images
The steps are roughly the same as for the quick install; override the `helm install` parameters below to trigger building the operations (O&M) applications' container images from source:
```shell
helm install sreworks $SW_ROOT/chart/sreworks-chart \
--kubeconfig="****" \
--create-namespace --namespace sreworks \
--set appmanager.home.url="https://your-website.***.com" \
--set build.enable=true \
--set global.images.tag="v1.1" \
--set global.images.registry=$SW_REPO
```
<a name="jPt3U"></a>
# 3. Helm Installation Parameter Reference
If you need the built operations apps pushed to a custom container image registry, pass the following parameters when running the Helm install command:
```shell
# Platform name
--set platformName="SREWorks"
# Platform logo; icon format requirements (e.g. 48*48)
--set platformLogo="https://sreworks.oss-cn-beijing.aliyuncs.com/logo/demo.png"
# Underlying storage class
--set global.storageClass="alicloud-disk-available"
# Container image registry used to start the SREWorks platform
--set global.images.registry="registry.cn-zhangjiakou.aliyuncs.com/sreworks"
# Image registry configuration for SaaS container builds
--set appmanager.server.docker.account="sreworks"
--set appmanager.server.docker.password="***"
--set appmanager.server.docker.registry="registry.cn-zhangjiakou.aliyuncs.com"
--set appmanager.server.docker.namespace="builds"
# Source repository used in source-build mode
--set source.branch="v1.1"
--set source.repo="https://code.aliyun.com/sreworks_public/mirror.git"
```
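Since it is easy to mistype a long `--set` list, one option is to assemble the full command into a variable and print it for review before executing it; the registry value below is a placeholder:

```shell
# Assemble an illustrative helm command first, inspect it, then run it manually.
SW_REPO="your-registry.example.com/sreworks"
CMD="helm install sreworks ./chart/sreworks-chart \
  --create-namespace --namespace sreworks \
  --set build.enable=true \
  --set global.images.tag=v1.1 \
  --set global.images.registry=${SW_REPO} \
  --set platformName=SREWorks"
echo "${CMD}"
```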
# Setting Credentials in Node\.js<a name="setting-credentials-node"></a>
There are several ways in Node\.js to supply your credentials to the SDK\. Some of these are more secure and others afford greater convenience while developing an application\. When obtaining credentials in Node\.js, be careful about relying on more than one source such as an environment variable and a JSON file you load\. You can change the permissions under which your code runs without realizing the change has happened\.
Here are the ways you can supply your credentials in order of recommendation:
1. Loaded from AWS Identity and Access Management \(IAM\) roles for Amazon EC2
1. Loaded from the shared credentials file \(`~/.aws/credentials`\)
1. Loaded from environment variables
1. Loaded from a JSON file on disk
1. Other credential\-provider classes provided by the JavaScript SDK
If more than one credential source is available to the SDK, the default precedence of selection is as follows:
1. Credentials that are explicitly set through the service\-client constructor
1. Environment variables
1. The shared credentials file
1. Credentials loaded from the ECS credentials provider \(if applicable\)
1. Credentials that are obtained by using a credential process specified in the shared AWS config file or the shared credentials file\. For more information, see [Loading Credentials in Node\.js using a Configured Credential Process](loading-node-credentials-configured-credential-process.md)\.
1. Credentials loaded from AWS IAM using the credentials provider of the Amazon EC2 instance \(if configured in the instance metadata\)
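The chain behavior can be illustrated without the SDK itself: try each provider in order and return the first set of credentials found. The helper names below are made up for the sketch — they are not classes from the AWS SDK:

```javascript
// Toy credential provider chain; mirrors the precedence above, not the SDK internals.
function fromEnv() {
  const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY } = process.env;
  if (AWS_ACCESS_KEY_ID && AWS_SECRET_ACCESS_KEY) {
    return { accessKeyId: AWS_ACCESS_KEY_ID, secretAccessKey: AWS_SECRET_ACCESS_KEY };
  }
  return null;
}

function fromStatic(credentials) {
  // Explicitly supplied credentials, e.g. set on a service-client constructor.
  return () => credentials || null;
}

function resolve(providers) {
  for (const provider of providers) {
    const creds = provider();
    if (creds) return creds;
  }
  throw new Error('No credentials found in the chain');
}

// Explicit credentials come first in the chain, so they win over the environment.
const creds = resolve([fromStatic({ accessKeyId: 'EXPLICIT', secretAccessKey: 'x' }), fromEnv]);
console.log(creds.accessKeyId); // → EXPLICIT
```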
For more information, see [Class: AWS\.Credentials](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Credentials.html) and [Class: AWS\.CredentialProviderChain](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CredentialProviderChain.html) in the API reference\.
**Warning**
While it is possible to do so, we do not recommend hard\-coding your AWS credentials in your application\. Hard\-coding credentials poses a risk of exposing your access key ID and secret access key\.
The topics in this section describe how to load credentials into Node\.js\.
**Topics**
+ [Loading Credentials in Node\.js from IAM Roles for EC2](loading-node-credentials-iam.md)
+ [Loading Credentials for a Node\.js Lambda Function](loading-node-credentials-lambda.md)
+ [Loading Credentials in Node\.js from the Shared Credentials File](loading-node-credentials-shared.md)
+ [Loading Credentials in Node\.js from Environment Variables](loading-node-credentials-environment.md)
+ [Loading Credentials in Node\.js from a JSON File](loading-node-credentials-json-file.md)
+ [Loading Credentials in Node\.js using a Configured Credential Process](loading-node-credentials-configured-credential-process.md) | 64.295455 | 426 | 0.794981 | eng_Latn | 0.992924 |
6fc4ffea1175a81cf019855260b9bb3d53156dba | 406 | md | Markdown | _notes/Clases_Aprendizaje_Profundo_Mariano/GradCAM.md | jRicciL/my-obsidian-digital-garden | 9ab25911a24076922edc5ea687d50073fce49af1 | [
"MIT"
] | 4 | 2022-03-11T20:13:27.000Z | 2022-03-30T19:16:54.000Z | _notes/Clases_Aprendizaje_Profundo_Mariano/GradCAM.md | jRicciL/my-obsidian-digital-garden | 9ab25911a24076922edc5ea687d50073fce49af1 | [
"MIT"
] | null | null | null | _notes/Clases_Aprendizaje_Profundo_Mariano/GradCAM.md | jRicciL/my-obsidian-digital-garden | 9ab25911a24076922edc5ea687d50073fce49af1 | [
"MIT"
] | null | null | null | ---
---
# Gradient Class Activation Mapping
#GradCAM
#SirajRaval
***
<iframe width="560" height="315" src="https://www.youtube.com/embed/Y8mSngdQb9Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
##### Resources
- [Grad-CAM with keras-vis](https://fairyonice.github.io/index2.html) | 33.833333 | 248 | 0.746305 | eng_Latn | 0.172754 |
6fc57cf5cc494534a52fb860db523ad2cdbf6585 | 359 | md | Markdown | docs/aron7awol/56994748.md | 3ll3d00d/beqcatalogue | ce38c769c437de382b511e14e60e131944f2ca7d | [
"MIT"
] | 1 | 2021-01-30T20:28:22.000Z | 2021-01-30T20:28:22.000Z | docs/aron7awol/56994748.md | 3ll3d00d/beqcatalogue | ce38c769c437de382b511e14e60e131944f2ca7d | [
"MIT"
] | 7 | 2020-09-14T21:51:16.000Z | 2021-04-03T14:48:01.000Z | docs/aron7awol/56994748.md | 3ll3d00d/beqcatalogue | ce38c769c437de382b511e14e60e131944f2ca7d | [
"MIT"
] | 1 | 2021-03-08T20:09:01.000Z | 2021-03-08T20:09:01.000Z | # Free Fire
## DTS-HD MA 5.1
**2016 • R • 1h 30m • Action, Crime, Mystery • aron7awol**
A crime drama set in 1970s Boston, about a gun sale which goes wrong.
[Discuss](https://www.avsforum.com/threads/bass-eq-for-filtered-movies.2995212/post-56994748) [TMDB](334521)


| 23.933333 | 109 | 0.693593 | yue_Hant | 0.625668 |
6fc5db2b96fe1853100070c2c5f056fe150d2246 | 781 | md | Markdown | AlchemyInsights/fix-0x8004de40-error-in-onedrive.md | isabella232/OfficeDocs-AlchemyInsights-pr.cs-CZ | 5ed88ef27055481eb0b053d1b3704fa2c5f67b4b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-19T19:05:56.000Z | 2020-05-19T19:05:56.000Z | AlchemyInsights/fix-0x8004de40-error-in-onedrive.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.cs-CZ | 5de78c659954b926467b06b68b46812f72d379d5 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:24:47.000Z | 2022-02-09T06:56:38.000Z | AlchemyInsights/fix-0x8004de40-error-in-onedrive.md | isabella232/OfficeDocs-AlchemyInsights-pr.cs-CZ | 5ed88ef27055481eb0b053d1b3704fa2c5f67b4b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2019-10-09T20:27:51.000Z | 2021-10-09T10:51:00.000Z | ---
title: Oprava 0x8004de40 chyby v OneDrive
ms.author: pebaum
author: pebaum
ms.date: 04/21/2020
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.openlocfilehash: bedb20c830f47e71ac3aa6efd87b9b280d8ef55f
ms.sourcegitcommit: ab75f66355116e995b3cb5505465b31989339e28
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 08/13/2021
ms.locfileid: "58323187"
---
# <a name="fix-0x8004de40-error-in-onedrive"></a>Oprava 0x8004de40 chyby v OneDrive
Kód chyby 0x8004de40 může OneDrive potíže s připojením k cloudu.
Další informace najdete v tématu [Kód chyby: 0x8004de40 při přihlašování](https://docs.microsoft.com/sharepoint/troubleshoot/administration/error-0x8004de40-in-onedrive) k OneDrive | 35.5 | 180 | 0.820743 | ces_Latn | 0.395447 |
6fc65a0b46dabf1a1297102d8129d055fbe52fbe | 557 | md | Markdown | trust/chad-johnston.md | alexnguyennz/vsct | 12a9a219115b1b35d6f067c6b46493b3b2ec1aa1 | [
"MIT"
] | 1 | 2022-01-18T01:45:49.000Z | 2022-01-18T01:45:49.000Z | trust/chad-johnston.md | alexnguyennz/vsct | 12a9a219115b1b35d6f067c6b46493b3b2ec1aa1 | [
"MIT"
] | null | null | null | trust/chad-johnston.md | alexnguyennz/vsct | 12a9a219115b1b35d6f067c6b46493b3b2ec1aa1 | [
"MIT"
] | null | null | null | ---
name: Chad Johnston
position: Trustee
image: /_public/img/trust/chad-johnston.webp
order: 5
---
A professional builder by trade, Chad is now the General Manager of Rydges Wellington Airport and has been in hospitality now for over 10 years. He discovered an unexpected passion for this dynamic industry and is motivated by assisting his team to grow and enjoy the varied environment of this trade.
Chad wants to give something back to the younger generation, he is a proud father of his 7 year old daughter and believes no child should ever go hungry. | 55.7 | 301 | 0.797127 | eng_Latn | 0.999891 |
6fc6969b4f15f8347ac9aad8b928e9d0eb22882c | 48 | md | Markdown | README.md | fcoeverardo/Esteometria | 5a0199f55bc8e208aff92a707b46e3da72b88159 | [
"MIT"
] | null | null | null | README.md | fcoeverardo/Esteometria | 5a0199f55bc8e208aff92a707b46e3da72b88159 | [
"MIT"
] | null | null | null | README.md | fcoeverardo/Esteometria | 5a0199f55bc8e208aff92a707b46e3da72b88159 | [
"MIT"
] | null | null | null | # Estequiometria
Aplicativo Android Educacional
| 16 | 30 | 0.875 | por_Latn | 0.733368 |
6fc6a87c0ed35506d721bb339350dea16202d980 | 2,957 | md | Markdown | add/metadata/System.Windows.Documents/TableColumnCollection.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-16T22:24:36.000Z | 2020-06-16T22:24:36.000Z | add/metadata/System.Windows.Documents/TableColumnCollection.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/System.Windows.Documents/TableColumnCollection.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-04-08T14:42:27.000Z | 2019-04-08T14:42:27.000Z | ---
uid: System.Windows.Documents.TableColumnCollection
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#IndexOf(System.Object)
---
---
uid: System.Windows.Documents.TableColumnCollection.CopyTo(System.Array,System.Int32)
---
---
uid: System.Windows.Documents.TableColumnCollection.SyncRoot
---
---
uid: System.Windows.Documents.TableColumnCollection.TrimToSize
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#Contains(System.Object)
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#IsReadOnly
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IEnumerable#GetEnumerator
---
---
uid: System.Windows.Documents.TableColumnCollection.Item(System.Int32)
---
---
uid: System.Windows.Documents.TableColumnCollection.Insert(System.Int32,System.Windows.Documents.TableColumn)
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#Item(System.Int32)
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#IsFixedSize
---
---
uid: System.Windows.Documents.TableColumnCollection.Count
---
---
uid: System.Windows.Documents.TableColumnCollection.CopyTo(System.Windows.Documents.TableColumn[],System.Int32)
---
---
uid: System.Windows.Documents.TableColumnCollection.IsSynchronized
---
---
uid: System.Windows.Documents.TableColumnCollection.IsReadOnly
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#Remove(System.Object)
---
---
uid: System.Windows.Documents.TableColumnCollection.CopyTo
---
---
uid: System.Windows.Documents.TableColumnCollection.IndexOf(System.Windows.Documents.TableColumn)
---
---
uid: System.Windows.Documents.TableColumnCollection.Capacity
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#Add(System.Object)
---
---
uid: System.Windows.Documents.TableColumnCollection.Contains(System.Windows.Documents.TableColumn)
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#Clear
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#Insert(System.Int32,System.Object)
---
---
uid: System.Windows.Documents.TableColumnCollection.RemoveRange(System.Int32,System.Int32)
---
---
uid: System.Windows.Documents.TableColumnCollection.Add(System.Windows.Documents.TableColumn)
---
---
uid: System.Windows.Documents.TableColumnCollection.Clear
---
---
uid: System.Windows.Documents.TableColumnCollection.Remove(System.Windows.Documents.TableColumn)
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#IList#RemoveAt(System.Int32)
---
---
uid: System.Windows.Documents.TableColumnCollection.System#Collections#Generic#IEnumerable{System#Windows#Documents#TableColumn}#GetEnumerator
---
---
uid: System.Windows.Documents.TableColumnCollection.RemoveAt(System.Int32)
---
| 23.846774 | 142 | 0.793372 | yue_Hant | 0.997115 |
6fc6d2bdc80d317e3305feeccf3dc8d6895367d9 | 2,452 | md | Markdown | ru/datasphere/api-ref/Project/get.md | OlesyaAkimova28/docs | 08b8e09d3346ec669daa886a8eda836c3f14a0b0 | [
"CC-BY-4.0"
] | 1 | 2022-03-03T01:02:33.000Z | 2022-03-03T01:02:33.000Z | ru/datasphere/api-ref/Project/get.md | OlesyaAkimova28/docs | 08b8e09d3346ec669daa886a8eda836c3f14a0b0 | [
"CC-BY-4.0"
] | null | null | null | ru/datasphere/api-ref/Project/get.md | OlesyaAkimova28/docs | 08b8e09d3346ec669daa886a8eda836c3f14a0b0 | [
"CC-BY-4.0"
] | null | null | null | ---
editable: false
---
# Method get
Returns the specified project.
## HTTP request {#https-request}
```
GET https://datasphere.api.cloud.yandex.net/datasphere/v1/projects/{projectId}
```
## Path parameters {#path_params}
Parameter | Description
--- | ---
projectId | Required. ID of the Project resource to return. To get the project ID use a [list](/docs/datasphere/api-ref/Project/list) request. The maximum string length in characters is 200.
## Response {#responses}
**HTTP Code: 200 - OK**
```json
{
"id": "string",
"folderId": "string",
"createdAt": "string",
"name": "string",
"description": "string",
"settings": {
"serviceAccountId": "string",
"subnetId": "string",
"dataProcClusterId": "string",
"commitMode": "string"
},
"limits": {
"maxUnitsPerHour": "integer",
"maxUnitsPerExecution": "integer"
}
}
```
A Project resource.
Field | Description
--- | ---
id | **string**<br><p>ID of the project.</p>
folderId | **string**<br><p>ID of the folder that the project belongs to.</p>
createdAt | **string** (date-time)<br><p>String in <a href="https://www.ietf.org/rfc/rfc3339.txt">RFC3339</a> text format.</p>
name | **string**<br><p>Name of the project. 1-63 characters long.</p>
description | **string**<br><p>Description of the project. 0-256 characters long.</p>
settings | **object**<br><p>Settings of the project.</p>
settings.<br>serviceAccountId | **string**<br><p>ID of the service account, on whose behalf all operations with clusters will be performed.</p>
settings.<br>subnetId | **string**<br><p>ID of the subnet where the DataProc cluster resides. Currently only subnets created in the availability zone ru-central1-a are supported.</p>
settings.<br>dataProcClusterId | **string**<br><p>ID of the DataProc cluster.</p>
settings.<br>commitMode | **string**<br><p>Commit mode that is assigned to the project.</p> <ul> <li>STANDARD: Commit happens after the execution of a cell or group of cells or after completion with an error.</li> <li>AUTO: Commit happens periodically. Also, automatic saving of state occurs when switching to another type of computing resource.</li> </ul>
limits | **object**<br><p>Limits of the project.</p>
limits.<br>maxUnitsPerHour | **integer** (int64)<br><p>The number of units that can be spent per hour.</p>
limits.<br>maxUnitsPerExecution | **integer** (int64)<br><p>The number of units that can be spent on the one execution.</p> | 41.559322 | 357 | 0.690457 | eng_Latn | 0.82124 |
6fc724192a5dc0aaecaad4e8a08cac6211857351 | 11,467 | md | Markdown | docs/xamarin-forms/platform/sign-in-with-apple/android-ios-sign-in.md | DamianSuess/xamarin-docs | 3bc1dea069f4e646c762efbd41b99f038284f01d | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-12-09T05:19:03.000Z | 2021-12-09T05:19:03.000Z | docs/xamarin-forms/platform/sign-in-with-apple/android-ios-sign-in.md | DamianSuess/xamarin-docs | 3bc1dea069f4e646c762efbd41b99f038284f01d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/xamarin-forms/platform/sign-in-with-apple/android-ios-sign-in.md | DamianSuess/xamarin-docs | 3bc1dea069f4e646c762efbd41b99f038284f01d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Use Sign In with Apple for Xamarin.Forms"
description: "Learn how to implement Sign In with Apple in your Xamarin.Forms mobile applications."
ms.prod: xamarin
ms.assetid: 2E47E7F2-93D4-4CA3-9E66-247466D25E4D
ms.technology: xamarin-forms
author: davidortinau
ms.author: daortin
ms.date: 09/10/2019
no-loc: [Xamarin.Forms, Xamarin.Essentials]
---
# Use Sign In with Apple in Xamarin.Forms
[ Download the sample](/samples/xamarin/xamarin-forms-samples/signinwithapple/)
Sign In with Apple is for all new applications on iOS 13 that use third-party authentication services. The implementation details between iOS and Android are quite different. This guide walks through how you can do this today in Xamarin.Forms.
In this guide and sample, specific platform services are used to handle Sign In with Apple:
- Android using a generic web service talking to Azure Functions with OpenID/OpenAuth
- iOS uses the native API for authentication on iOS 13, and falls back to a generic web service for iOS 12 and below
## A sample Apple sign in flow
This sample offers an opinionated implementation for getting Apple Sign In to work in your Xamarin.Forms app.
We use two Azure Functions to help with the authentication flow:
1. `applesignin_auth` - Generates the Apple Sign In Authorization URL and redirects to it. We do this on the server side, instead of the mobile app, so we can cache the `state` and validate it when Apple's servers send a callback.
2. `applesignin_callback` - Handles the POST callback from Apple and securely exchanges the authorization code for an Access Token and ID Token. Finally, it redirects back to the App's URI Scheme, passing back the tokens in a URL Fragment.
The mobile app registers itself to handle the custom URI scheme we have selected (in this case `xamarinformsapplesignin://`) so the `applesignin_callback` function can relay the tokens back to it.
When the user starts authentication, the following steps happen:
1. The mobile app generates a `nonce` and `state` value and passes them to the `applesignin_auth` Azure function.
2. The `applesignin_auth` Azure function generates an Apple Sign In Authorization URL (using the provided `state` and `nonce`), and redirects the mobile app browser to it.
3. The user enters their credentials securely in the Apple Sign In authorization page hosted on Apple's servers.
4. After the Apple Sign In flow finishes on Apple's servers, Apple Redirects to the `redirect_uri` which will be the `applesignin_callback` Azure function.
5. The request from Apple sent to the `applesignin_callback` function is validated to ensure the correct `state` is returned, and that the ID Token claims are valid.
6. The `applesignin_callback` Azure function exchanges the `code` posted to it by Apple, for an _Access Token_, _Refresh Token_, and _ID Token_ (which contains claims about the User ID, Name, and Email).
7. The `applesignin_callback` Azure function finally redirects back to the app's URI scheme (`xamarinformsapplesignin://`) appending a URI fragment with the Tokens (e.g. `xamarinformsapplesignin://#access_token=...&refresh_token=...&id_token=...`).
8. The Mobile app parses out the URI Fragment into an `AppleAccount` and validates the `nonce` claim received matches the `nonce` generated at the start of the flow.
9. The mobile app is now authenticated!
## Azure Functions
This sample uses Azure Functions. Alternatively, an ASP.NET Core Controller or similar web server solution could deliver the same functionality.
### Configuration
Several app settings need to be configured when using Azure Functions:
- `APPLE_SIGNIN_KEY_ID` - This is your `KeyId` from earlier.
- `APPLE_SIGNIN_TEAM_ID` - This is usually your _Team ID_ found in your [Membership Profile](https://developer.apple.com/account/#/membership)
- `APPLE_SIGNIN_SERVER_ID`: This is the `ServerId` from earlier. It's *not* your App _Bundle ID_, but rather the *Identifier* of the *Services ID* you created.
- `APPLE_SIGNIN_APP_CALLBACK_URI` - This is the custom URI Scheme you want to redirect back to your app with. In this sample `xamarinformsapplesignin://` is used.
- `APPLE_SIGNIN_REDIRECT_URI` - The *Redirect URL* you setup when creating your *Services ID* in the *Apple Sign In* Configuration section. To test, it might look something like: `http://local.test:7071/api/applesignin_callback`
- `APPLE_SIGNIN_P8_KEY` - The text contents of your `.p8` file, with all the `\n` newlines removed so it's one long string
### Security considerations
**Never** store your P8 key inside of your application code. Application code is easy to download and disassemble.
It is also considered a bad practice to use a `WebView` to host the authentication flow, and to intercept URL Navigation events to obtain the authorization code. At this time there is currently no fully secure way to handle Sign In with Apple on non iOS13+ devices without hosting some code on a server to handle the token exchange. We recommend hosting the authorization url generation code on a server so you can cache the state and validate it when Apple issues a POST callback to your server.
## A cross-platform sign in service
Using the Xamarin.Forms DependencyService, you can create separate authentication services that use the platform services on iOS, and a generic web service for Android and other non-iOS platforms based on a shared interface.
```csharp
public interface IAppleSignInService
{
bool Callback(string url);
Task<AppleAccount> SignInAsync();
}
```
On iOS, the native APIs are used:
```csharp
public class AppleSignInServiceiOS : IAppleSignInService
{
#if __IOS__13
    AuthManager authManager;
#endif

    bool Is13 => UIDevice.CurrentDevice.CheckSystemVersion(13, 0);
    WebAppleSignInService webSignInService;

    public AppleSignInServiceiOS()
    {
        if (!Is13)
            webSignInService = new WebAppleSignInService();
    }

    public async Task<AppleAccount> SignInAsync()
    {
        // Fallback to web for older iOS versions
        if (!Is13)
            return await webSignInService.SignInAsync();

        AppleAccount appleAccount = default;

#if __IOS__13
        var provider = new ASAuthorizationAppleIdProvider();
        var req = provider.CreateRequest();
        authManager = new AuthManager(UIApplication.SharedApplication.KeyWindow);
        req.RequestedScopes = new[] { ASAuthorizationScope.FullName, ASAuthorizationScope.Email };
        var controller = new ASAuthorizationController(new[] { req });
        controller.Delegate = authManager;
        controller.PresentationContextProvider = authManager;
        controller.PerformRequests();

        var creds = await authManager.Credentials;
        if (creds == null)
            return null;

        appleAccount = new AppleAccount();
        appleAccount.IdToken = JwtToken.Decode(new NSString(creds.IdentityToken, NSStringEncoding.UTF8).ToString());
        appleAccount.Email = creds.Email;
        appleAccount.UserId = creds.User;
        appleAccount.Name = NSPersonNameComponentsFormatter.GetLocalizedString(creds.FullName, NSPersonNameComponentsFormatterStyle.Default, NSPersonNameComponentsFormatterOptions.Phonetic);
        appleAccount.RealUserStatus = creds.RealUserStatus.ToString();
#endif

        return appleAccount;
    }

    public bool Callback(string url) => true;
}

#if __IOS__13
class AuthManager : NSObject, IASAuthorizationControllerDelegate, IASAuthorizationControllerPresentationContextProviding
{
    public Task<ASAuthorizationAppleIdCredential> Credentials
        => tcsCredential?.Task;

    TaskCompletionSource<ASAuthorizationAppleIdCredential> tcsCredential;
    UIWindow presentingAnchor;

    public AuthManager(UIWindow presentingWindow)
    {
        tcsCredential = new TaskCompletionSource<ASAuthorizationAppleIdCredential>();
        presentingAnchor = presentingWindow;
    }

    public UIWindow GetPresentationAnchor(ASAuthorizationController controller)
        => presentingAnchor;

    [Export("authorizationController:didCompleteWithAuthorization:")]
    public void DidComplete(ASAuthorizationController controller, ASAuthorization authorization)
    {
        var creds = authorization.GetCredential<ASAuthorizationAppleIdCredential>();
        tcsCredential?.TrySetResult(creds);
    }

    [Export("authorizationController:didCompleteWithError:")]
    public void DidComplete(ASAuthorizationController controller, NSError error)
        => tcsCredential?.TrySetException(new Exception(error.LocalizedDescription));
}
#endif
```
The `__IOS__13` compile flag is used to support iOS 13 while still building for legacy versions, which fall back to the generic web service.
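With both implementations registered, shared code can resolve whichever service applies at runtime. A minimal call site might look like this (the registration attributes shown in the comments are assumptions about how the sample wires things up; `DependencyService` itself is the standard Xamarin.Forms API):

```csharp
// Shared Xamarin.Forms code: resolve the platform's IAppleSignInService and sign in.
// Registration is assumed to happen per platform, e.g.:
//   [assembly: Dependency(typeof(AppleSignInServiceiOS))]  // iOS project
//   [assembly: Dependency(typeof(WebAppleSignInService))]  // other platforms
var appleSignInService = DependencyService.Get<IAppleSignInService>();
var account = await appleSignInService.SignInAsync();

if (account != null)
{
    // Apple only returns the name and email on the first authorization,
    // so persist them if you need them later.
    Console.WriteLine($"Signed in: {account.UserId} ({account.Email})");
}
```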
On Android, the generic web service with Azure Functions is used:
```csharp
public class WebAppleSignInService : IAppleSignInService
{
    // IMPORTANT: This is what you register each native platform's url handler to be
    public const string CallbackUriScheme = "xamarinformsapplesignin";
    public const string InitialAuthUrl = "http://local.test:7071/api/applesignin_auth";

    string currentState;
    string currentNonce;

    TaskCompletionSource<AppleAccount> tcsAccount = null;

    public bool Callback(string url)
    {
        // Only handle the url with our callback uri scheme
        if (!url.StartsWith(CallbackUriScheme + "://"))
            return false;

        // Ensure we have a task waiting
        if (tcsAccount != null && !tcsAccount.Task.IsCompleted)
        {
            try
            {
                // Parse the account from the url the app opened with
                var account = AppleAccount.FromUrl(url);

                // IMPORTANT: Validate the nonce returned is the same as our originating request!!
                if (!account.IdToken.Nonce.Equals(currentNonce))
                    tcsAccount.TrySetException(new InvalidOperationException("Invalid or non-matching nonce returned"));

                // Set our account result
                tcsAccount.TrySetResult(account);
            }
            catch (Exception ex)
            {
                tcsAccount.TrySetException(ex);
            }
        }

        tcsAccount.TrySetResult(null);
        return false;
    }

    public async Task<AppleAccount> SignInAsync()
    {
        tcsAccount = new TaskCompletionSource<AppleAccount>();

        // Generate state and nonce which the server will use to initiate the auth
        // with Apple. The nonce should flow all the way back to us when our function
        // redirects to our app
        currentState = Util.GenerateState();
        currentNonce = Util.GenerateNonce();

        // Start the auth request on our function (which will redirect to apple)
        // inside a browser (either SFSafariViewController, Chrome Custom Tabs, or native browser)
        await Xamarin.Essentials.Browser.OpenAsync($"{InitialAuthUrl}?&state={currentState}&nonce={currentNonce}",
            Xamarin.Essentials.BrowserLaunchMode.SystemPreferred);

        return await tcsAccount.Task;
    }
}
```
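The `Util.GenerateState` and `Util.GenerateNonce` helpers referenced above are not shown in the article. One plausible implementation (our sketch, not the sample's actual code) uses cryptographically random, URL-safe strings from the .NET BCL:

```csharp
using System;
using System.Security.Cryptography;

// Sketch of the nonce/state helpers referenced by WebAppleSignInService.
public static class Util
{
    static string RandomToken(int byteLength)
    {
        var bytes = new byte[byteLength];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);

        // Base64url-encode so the value is safe inside a query string.
        return Convert.ToBase64String(bytes)
            .TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }

    public static string GenerateState() => RandomToken(16);
    public static string GenerateNonce() => RandomToken(16);
}
```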
## Summary
This article described the steps necessary to set up Sign In with Apple for use in your Xamarin.Forms applications.
## Related links
- [XamarinFormsAppleSignIn (Sample)](https://github.com/Redth/Xamarin.AppleSignIn.Sample)
- [Sign In with Apple Guidelines](https://developer.apple.com/design/human-interface-guidelines/sign-in-with-apple/overview/) | 46.425101 | 496 | 0.742217 | eng_Latn | 0.8999 |
6fc72dcc420c80b6d90cd1302ee0333a9e756724 | 29 | md | Markdown | README.md | samuilll/BeginnerExams | 6eec7b295e684399254b44c60b1fd96509004803 | [
"MIT"
] | null | null | null | README.md | samuilll/BeginnerExams | 6eec7b295e684399254b44c60b1fd96509004803 | [
"MIT"
] | null | null | null | README.md | samuilll/BeginnerExams | 6eec7b295e684399254b44c60b1fd96509004803 | [
"MIT"
] | null | null | null | # BeginnerExams
Simple Exams
| 9.666667 | 15 | 0.827586 | eng_Latn | 0.62045 |
6fc7dc909fe4f6a6f29d53a2603b1e018e937260 | 78 | md | Markdown | README.md | ishan-mishra/Notebooks | 583f446e53f7c81686e7a684a73e75a829e76407 | [
"MIT"
] | null | null | null | README.md | ishan-mishra/Notebooks | 583f446e53f7c81686e7a684a73e75a829e76407 | [
"MIT"
] | null | null | null | README.md | ishan-mishra/Notebooks | 583f446e53f7c81686e7a684a73e75a829e76407 | [
"MIT"
] | null | null | null | # Notebooks
Repository to host Jupyter notebooks that I use for note-taking.
| 26 | 65 | 0.794872 | eng_Latn | 0.990719 |
6fc811ed76ce731e3426824e448cd69b630c8567 | 6,392 | md | Markdown | gallery/psget/repository/bootstrapping_nuget_proivder_and_exe.md | I-Cat/PowerShell-Docs.de-de | 17c06af567a068eea5e9ba58abca102b39b86482 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-01-16T06:05:39.000Z | 2019-01-16T06:05:39.000Z | gallery/psget/repository/bootstrapping_nuget_proivder_and_exe.md | I-Cat/PowerShell-Docs.de-de | 17c06af567a068eea5e9ba58abca102b39b86482 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | gallery/psget/repository/bootstrapping_nuget_proivder_and_exe.md | I-Cat/PowerShell-Docs.de-de | 17c06af567a068eea5e9ba58abca102b39b86482 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2016-10-23T13:34:36.000Z | 2021-04-05T00:14:47.000Z | # Bootstrap the NuGet provider and NuGet.exe with a single prompt for publish operations, and only the NuGet provider for non-publish operations
NuGet.exe is no longer included with the current NuGet provider. To publish a module or script, PowerShellGet requires NuGet.exe to create a .nupkg file and push it to the repository. The NuGet provider is required for non-publish operations such as find, install, update, and save.
Logic was added to bootstrap both the NuGet provider and NuGet.exe with a single prompt for publish operations, and to bootstrap only the NuGet provider for non-publish operations.
## When the NuGet provider is not available
```powershell
PS C:\windows\system32> find-module -Repository dtlgalleryint -verbose -name contoso
NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or
'C:\Users\manikb\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import the NuGet provider
now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): n
find-module : NuGet provider is required to interact with NuGet-based repositories. Please ensure that '2.8.5.201' or newer version of NuGet provider is installed.
At line:1 char:1
+ find-module -Repository dtlgalleryint -verbose -name contoso
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Find-Module], InvalidOperationException
+ FullyQualifiedErrorId : CouldNotInstallNuGetProvider,Find-Module
PS C:\windows\system32> find-module -Repository dtlgalleryint -verbose -name contoso
NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or
'C:\Users\manikb\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import the NuGet provider
now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
VERBOSE: Installing NuGet provider.
Version Name Type Repository Description
------- ---- ---- ---------- -----------
2.5 Contoso Module dtlgalleryint Contoso module
```
## When the NuGet provider is available and NuGet.exe is not available during a publish operation
```powershell
PS C:\windows\system32> Publish-Module -Name Contoso -Repository LocalRepo -Verbose
NuGet.exe is required to continue
PowerShellGet requires NuGet.exe to publish an item to the NuGet-based repositories. NuGet.exe must be available under one of the paths specified in PATH environment variable value. Do you want PowerShellGet to install NuGet.exe now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): N
Publish-Module : NuGet.exe is required to interact with NuGet-based repositories. Please ensure that NuGet.exe is available under one of the paths specified in PATH environment variable value.
At line:1 char:1
+ Publish-Module -Name Contoso -Repository LocalRepo -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Publish-Module], InvalidOperationException
+ FullyQualifiedErrorId : CouldNotInstallNuGetExe,Publish-Module
PS C:\windows\system32> Publish-Module -Name Contoso -Repository LocalRepo -Verbose
NuGet.exe is required to continue
PowerShellGet requires NuGet.exe to publish an item to the NuGet-based repositories. NuGet.exe must be available under one of the paths specified in PATH environment variable value. Do you want PowerShellGet to install NuGet.exe now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
VERBOSE: Installing NuGet.exe.
VERBOSE: Successfully published module 'Contoso' to the module publish location 'C:\LocalGallery'. Please allow few minutes for 'Contoso' to show up in the search results.
```
## When neither the NuGet provider nor NuGet.exe is available during a publish operation
```powershell
PS C:\windows\system32> Publish-Module -Name Contoso -Repository LocalRepo -Verbose
NuGet.exe and NuGet provider are required to continue
PowerShellGet requires NuGet.exe and NuGet provider version '2.8.5.201' or newer to interact with the NuGet-based repositories. Do you want PowerShellGet to install both NuGet.exe and NuGet provider now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): N
Publish-Module : PowerShellGet requires NuGet.exe and NuGet provider version '2.8.5.201' or newer to interact with the NuGet-based repositories. Please ensure that '2.8.5.201' or newer version of NuGet provider is installed and NuGet.exe is available under
one of the paths specified in PATH environment variable value.
At line:1 char:1
+ Publish-Module -Name Contoso -Repository LocalRepo -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Publish-Module], InvalidOperationException
+ FullyQualifiedErrorId : CouldNotInstallNuGetBinaries,Publish-Module
PS C:\windows\system32> Publish-Module -Name Contoso -Repository LocalRepo -Verbose
NuGet.exe and NuGet provider are required to continue
PowerShellGet requires NuGet.exe and NuGet provider version '2.8.5.201' or newer to interact with the NuGet-based repositories. Do you want PowerShellGet to install both NuGet.exe and NuGet provider now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
VERBOSE: Installing NuGet provider.
VERBOSE: Installing NuGet.exe.
VERBOSE: Successfully published module 'Contoso' to the module publish location 'C:\LocalGallery'. Please allow few minutes for 'Contoso' to show up in the search results.
```
<!--HONumber=Oct16_HO1-->
| 71.022222 | 336 | 0.738423 | eng_Latn | 0.539068 |
6fc828d1f3f78d2daa84166440f3a12854231d81 | 1,684 | md | Markdown | Draft/ATT&CK-Stuff/README.md | raninho/Infosec_Reference | b41a20c2d18346db32ac86dfc13e5c9665a29e69 | [
"MIT"
] | 7 | 2018-12-02T17:48:16.000Z | 2021-09-21T09:12:57.000Z | Draft/ATT&CK-Stuff/README.md | TaharAmine/Infosec_Reference | 2ced7dc306ff81779c2e316ee6dc36e18386ba53 | [
"MIT"
] | null | null | null | Draft/ATT&CK-Stuff/README.md | TaharAmine/Infosec_Reference | 2ced7dc306ff81779c2e316ee6dc36e18386ba53 | [
"MIT"
] | 2 | 2019-02-01T14:05:17.000Z | 2020-06-16T07:57:47.000Z |
#### MITRE ATT&CK Framework Mappings
---------------------------
* The MITRE ATT&CK Framework ([MITRE ATT&CK](https://attack.mitre.org/wiki/Main_Page)) catalogs the tactics and techniques used by attackers.
* This is a copy of the mappings with links to techniques and background information rather than APT reports.
* If you want to test your defenses against a lot of these things:
* [Atomic Red Team - Small and highly portable detection tests mapped to the Mitre ATT&CK Framework.](https://github.com/redcanaryco/atomic-red-team)
* [Collection](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Collection.md)
* [Command and Control](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Command_and_Control.md)
* [Credential Access](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Command_and_Control.md)
* [Defense Evasion](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Defense_Evasion.md)
* [Discovery](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Discovery.md)
* [Execution](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Execution.md)
* [Exfiltration](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Exfiltration.md)
* [Lateral Movement](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Lateral%20Movement.md)
* [Persistence](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Persistence.md)
* [Privilege Escalation](https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Privilege_Escalation.md) | 99.058824 | 151 | 0.789192 | yue_Hant | 0.416491 |
6fc83b52d3e8f6d251b491cfa002487ec3d1454d | 4,752 | md | Markdown | dynamics-nav-app/receivables-manage-receivables.md | MicrosoftDocs/nav-content.fi-fi | a8443948ed1f1f8c44db9d1f98d5053a7f911cb5 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T18:46:25.000Z | 2021-04-21T00:13:46.000Z | dynamics-nav-app/receivables-manage-receivables.md | MicrosoftDocs/nav-content.fi-fi | a8443948ed1f1f8c44db9d1f98d5053a7f911cb5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamics-nav-app/receivables-manage-receivables.md | MicrosoftDocs/nav-content.fi-fi | a8443948ed1f1f8c44db9d1f98d5053a7f911cb5 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-14T19:34:15.000Z | 2021-11-05T10:51:54.000Z | ---
title: "Myyntisaamisten hallintatehtävien yleiskatsaus"
description: "Ohjeaiheessa kerrotaan tehtävistä, joilla hallitaan myyntisaamisia ja kohdistetaan maksuja asiakas- ja toimittajatapahtumiin."
documentationcenter:
author: SorenGP
ms.prod: dynamics-nav-2018
ms.topic: article
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords: customer payment, debtor, balance due, AR
ms.date: 08/10/2017
ms.author: sgroespe
ms.translationtype: HT
ms.sourcegitcommit: b9b1f062ee6009f34698ea2cf33bc25bdd5b11e4
ms.openlocfilehash: b9a486d099a6a52bec6ac6b23c21a3c341c20b14
ms.contentlocale: fi-fi
ms.lasthandoff: 10/23/2017
---
# <a name="managing-receivables"></a>Myyntisaamisten hallinta
Säännöllinen vaihe missä tahansa rahoituskierrossa on pankkitilien täsmäyttäminen, mikä edellyttää maksujen kohdistamista asiakas- tai toimittajatapahtumiin, jotta myyntilaskut tai ostohyvityslaskut voidaan sulkea.
[!INCLUDE[d365fin](includes/d365fin_md.md)]issa voi rekisteröidä maksut nopeasti **Maksujen täsmäytyskirjauskansio** -ikkunassa tuomalla pankin tiliotteen tai syötteen. Maksut kohdistetaan avoimiin asiakas- tai toimittajatapahtumiin maksutekstin ja tapahtumatietojen välisen täsmäytyksen perusteella. Voit tarkastella ja muuttaa täsmäytyksiä ennen päiväkirjan kirjaamista sekä sulkea tapahtumien pankkitilitapahtumia päiväkirjaan kirjauksen aikana. Pankkitili täsmäytetään, kun kaikki maksut on kohdistettu.
Maksuja voi kuitenkin kohdistaa ja pankkitilejä täsmäyttää myös muualla.
* **Pankkitilin täsmäytystilaan** -ikkuna, josta voit myös tarkistaa tapahtumia. Lisätietoja on kohdassa [Toimintaohje: Pankkitilien täsmäyttäminen erikseen](bank-how-reconcile-bank-accounts-separately.md).
* Voit kohdistaa ja tarkistaa manuaalisesti **Maksurekisteröinti**-ikkunassa käteisenä, sekkinä tai pankkitapahtumana vastaanotetut maksut maksamattomia myyntiasiakirjoja vastaan. Huomaa, että tämä toiminto on käytettävissä vain myyntiasiakirjoja varten.
* **Kassapäiväkirja**-ikkunassa, jossa vastaanotot kirjataan antamalla manuaalisesti maksurivi soveltuvaan pääkirjaan tai soveltuvalle asiakkaalle tai toiselle tilille. Voit kohdistaa vastaanoton tai hyvityksen yhteen avoimeen tapahtumaan tai useisiin avoimiin tapahtumiin, ennen kuin kirjaat kassapäiväkirjan. Voit tehdä kohdistuksen myös asiakastapahtumista.
Toinen osa myyntisaamisten hallintaa on kerätä avoimet saldot, kuten viivästyskulut, ja lähettää muistutuksia. [!INCLUDE[d365fin](includes/d365fin_md.md)]issa nämä toimet voi tehdä erilaisilla tavoilla. Lisätietoja on kohdassa [Toimintaohje: Avointen saldojen perintä](receivables-collect-outstanding-balances.md).
Seuraavassa taulukossa on tehtäväsarja ja linkit tehtäviä kuvaaviin aiheisiin.
| Toiminta | Katso |
| --- | --- |
| Kohdista maksut avoimiin asiakas- tai toimittajatapahtumiin tuodun pankin tiliotetiedoston tai syötteen perusteella. Täsmäytä pankkitili sitten, kun kaikki maksut on kohdistettu. |[Maksujen kohdistaminen automaattisesti ja pankkitilien täsmäyttäminen](receivables-apply-payments-auto-reconcile-bank-accounts.md) |
| Kohdista maksut avoimiin asiakasmaksuihin maksamattomien myyntiasiakirjojen luettelon manuaalisen tapahtuman perusteella. |[Toimintaohje: Asiakkaan maksujen täsmäyttäminen manuaalisesti maksamattomien myyntiasiakirjojen luettelosta](receivables-how-reconcile-customer-payments-list-unpaid-sales-documents.md) |
| Kirjaa asiakkaiden kassaanmaksut tai hyvitykset kassapäiväkirjaan ja kohdista asiakastapahtumat päiväkirjasta tai kirjatuista tapahtumakirjauksista. |[Toimintaohje: Asiakkaan maksujen täsmäyttäminen manuaalisesti](receivables-how-apply-sales-transactions-manually.md) |
| Asiakkaiden muistuttaminen erääntyneistä summista, koron laskeminen ja viivästyskululaskut sekä myyntireskontran hallinta. |[Toimintaohje: Avointen saldojen perintä](receivables-collect-outstanding-balances.md) |
|Varmista. että tiedät toimitettujen nimikkeiden kulut määrittämällä lisätyt nimikekulut, kuten nimikkeiden myynnin jälkeen syntyvät rahti-, käsittely-, vakuutus- ja kuljetuskulut.|[Toimintaohje: Kaupan lisäkustannusten huomiointi nimikekulujen avulla](payables-how-assign-item-charges.md)|
|Määritä toleranssi, jonka mukaan järjestelmä sulkee laskun, vaikka maksu ja mahdolliset alennukset eivät täysin vastaa laskun koko summaa.|[Toimintaohje: Maksutoleranssien ja maksualennustoleranssien käsitteleminen](finance-payment-tolerance-and-payment-discount-tolerance.md)|
## <a name="see-also"></a>Katso myös
[Myynti](sales-manage-sales.md)
[Ostovelkojen hallinta](payables-manage-payables.md)
[[!INCLUDE[d365fin](includes/d365fin_md.md)] -ohjelman käyttäminen](ui-work-product.md)
[Yleiset liiketoimintatoiminnot](ui-across-business-areas.md)
| 95.04 | 507 | 0.837753 | fin_Latn | 0.999835 |
6fc8545ffce85bad8284dffbba29b6c27182e0b8 | 1,682 | md | Markdown | python_examples/order_assessments/README.md | kirank0220/api-examples | 9d6c51eeb2d4e38d95b0b7d88fd30fe96ef28d20 | [
"MIT"
] | 1 | 2021-12-20T16:49:00.000Z | 2021-12-20T16:49:00.000Z | python_examples/order_assessments/README.md | kirank0220/api-examples | 9d6c51eeb2d4e38d95b0b7d88fd30fe96ef28d20 | [
"MIT"
] | 2 | 2020-11-20T04:51:16.000Z | 2021-06-16T17:02:35.000Z | python_examples/order_assessments/README.md | kirank0220/api-examples | 9d6c51eeb2d4e38d95b0b7d88fd30fe96ef28d20 | [
"MIT"
] | 1 | 2020-11-20T04:46:17.000Z | 2020-11-20T04:46:17.000Z | This example application showcases the ability to order assessments for existing third parties from an Excel file, using the [CyberGRX API](https://api.cybergrx.com/v1/swagger/). This example is written in Python; the ordering code is contained in [order.py](./order.py). You should run all commands from this directory.
# Running the example
The first step is to configure a virtual environment for the application dependencies. Depending on the version of Python you are using, the following commands will differ slightly.
- Python 2: `pip install virtualenv && virtualenv env`
- Python 3: `pip3 install virtualenv && python3 -m venv env`
- `source env/bin/activate`
- `pip install -r requirements.txt`
At this point you are all set up to run the example, but before you do, create a file that holds your API token.
- Create the file `.auth-token` (for example with `vi .auth-token`), add the following line, and save it:
```
export CYBERGRX_API_TOKEN="API-V1 TOKEN FROM UI"
```
# Running the command
There is one command in this example. Before it can be run, set up the Python environment and the authentication settings:
- Remember to activate your Python environment with `source env/bin/activate` the first time you run the command
- `source .auth-token`
- Once you are done experimenting, remember to **remove** the `.auth-token` file so you do not leak sensitive information.
## Ordering assessments in bulk
This command will order assessments for third parties that do not have orders already.
- This command expects an Excel file that resembles `bulk-order.xlsx`; an example has been provided in this directory.
- All columns are required except for `Vendor Contact Phone`
- `python order.py bulk-order.xlsx`
| 62.296296 | 324 | 0.776457 | eng_Latn | 0.998583 |
6fc8fd47d9f1c72df68ac389ed4f87625e781181 | 1,666 | md | Markdown | _projects/1709-quetiapine-prescribing.md | ECHSBACHS/office-of-evaluation-sciences_COPY | 47b569d4bdfa9874bd313f470a34abc416d7ac47 | [
"CC0-1.0"
] | 1 | 2018-08-29T20:10:14.000Z | 2018-08-29T20:10:14.000Z | _projects/1709-quetiapine-prescribing.md | ECHSBACHS/office-of-evaluation-sciences_COPY | 47b569d4bdfa9874bd313f470a34abc416d7ac47 | [
"CC0-1.0"
] | null | null | null | _projects/1709-quetiapine-prescribing.md | ECHSBACHS/office-of-evaluation-sciences_COPY | 47b569d4bdfa9874bd313f470a34abc416d7ac47 | [
"CC0-1.0"
] | null | null | null | ---
title: Reducing Overprescribing of Quetiapine in Medicare Part D
permalink: /projects/quetiapine-prescribing/
tags: project
image: /assets/img/project-images/prescribe.jpg
image-credit: https://www.flickr.com/photos/worldbank/
abstract: /assets/abstracts/1709-Quetiapine-Prescribing.pdf
year: 2018
domain: Health
agency: Health and Human Services
summary: Peer comparison letters for high prescribers of quetiapine reduce prescription volume and improve guideline conformity of prescription fills.
---
## What was the challenge?
Antipsychotics such as quetiapine are often prescribed for reasons not supported by clinical evidence, increasing healthcare costs and potentially exposing patients to harms. The Center for Program Integrity (CPI) at the Centers for Medicare and Medicaid Services (CMS) partnered with the Office of Evaluation Sciences (OES) to improve the value and safety of quetiapine prescribing in Medicare Part D.
## What was the program change?
CPI and OES sent a series of peer comparison letters to high volume prescribers indicating that their quetiapine prescribing was extremely high relative to their within-state peers and that it was under review.
## How did the evaluation work?
CPI and OES randomly assigned high volume prescribers (N = 5,055) to get a treatment or control letter. CPI and OES compared the days of quetiapine supplied by the prescribers and the days of quetiapine received by the prescribers’ baseline patients (N = 89,500) over 9 months.
## What was the impact?
Sending peer comparison letters to high volume prescribers of quetiapine reduced prescribing, and did so without any detectable adverse impacts.
| 57.448276 | 402 | 0.810324 | eng_Latn | 0.996736 |
6fc931285bda714cab6615eab7d638287e6fa7c5 | 4,597 | md | Markdown | _posts/Java/2020-12-25-GC.md | CHS96/CHS96.github.io | a1309fb6fb2af15ff550bb88dd8d2026b480e3a6 | [
"MIT"
] | 1 | 2021-01-17T02:43:58.000Z | 2021-01-17T02:43:58.000Z | _posts/Java/2020-12-25-GC.md | CHS96/CHS96.github.io | a1309fb6fb2af15ff550bb88dd8d2026b480e3a6 | [
"MIT"
] | null | null | null | _posts/Java/2020-12-25-GC.md | CHS96/CHS96.github.io | a1309fb6fb2af15ff550bb88dd8d2026b480e3a6 | [
"MIT"
] | null | null | null | ---
title: "가비지 컬렉터(Garbage Collector)"
excerpt: "가비지 컬렉터(Garbage Collector)를 알아보자!"
categories:
- Java
last_modified_at: 2020-12-25T18:35:00
---
In C and C++ the developer has to manage memory directly, but in Java there is no need to: the JVM's garbage collector manages memory on the developer's behalf. As the name says, the garbage collector collects garbage. Before trying to understand the garbage collector, let's first look at what that garbage actually is.
In Java, garbage means objects in the heap memory area that are no longer referenced. What does that mean? Suppose we simply declare a String object as follows. When we studied the JVM's memory areas earlier, we saw that the stack area stores temporary values such as local variables and parameters, while the heap area stores objects created with `new`.

So after line 1 runs, the stack and heap areas are laid out as follows.

What happens when line 2 runs? The variable `str` is not allocated again on the stack. Instead, a new "After" object is allocated in the heap, and `str` is updated to reference it.

What if we now want to use the Before object? Since no variable references "Before", there is no way to obtain the Before object's address. The Before object has become an unreachable object. Removing such objects and reclaiming the memory is exactly what the **garbage collector** does.
So how does the JVM's garbage collector find and remove objects that are no longer referenced? A simple idea is to scan every variable allocated on the stack and record which heap objects are still referenced. Let's see how this works.
The garbage collector runs in two phases: **mark and sweep**. In the mark phase, the collector traverses every variable allocated on the stack (the reachable roots) and marks the objects they reference. While this happens, every thread except the one performing GC is suspended, which is why this pause is called stop-the-world. In the sweep phase, the objects in the heap that were not marked are removed.
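The mark and sweep phases can be illustrated with a toy object graph (illustrative only; integers stand in for heap objects, and the real collector works on actual object references):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy illustration of mark and sweep. `edges` records which objects an object
// references, and `roots` plays the role of the variables on the stack.
class MarkSweepDemo {

    // Mark phase: traverse everything reachable from the roots.
    static Set<Integer> mark(Map<Integer, List<Integer>> edges, List<Integer> roots) {
        Set<Integer> marked = new HashSet<>();
        Deque<Integer> work = new ArrayDeque<>(roots);
        while (!work.isEmpty()) {
            int obj = work.pop();
            if (marked.add(obj)) {
                work.addAll(edges.getOrDefault(obj, List.of()));
            }
        }
        return marked;
    }

    // Sweep phase: everything on the heap that was not marked is collected.
    static Set<Integer> sweep(Set<Integer> heap, Set<Integer> marked) {
        Set<Integer> survivors = new HashSet<>(heap);
        survivors.retainAll(marked);
        return survivors;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> edges = Map.of(1, List.of(2), 3, List.of(4));
        Set<Integer> heap = Set.of(1, 2, 3, 4);
        Set<Integer> marked = mark(edges, List.of(1)); // the stack only references object 1
        Set<Integer> survivors = sweep(heap, marked);  // objects 3 and 4 are collected
        System.out.println(survivors);                 // contains exactly 1 and 2
    }
}
```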
Since the heap is clearly the garbage collector's main target, let's look at the heap area in a bit more detail.
The heap is divided into five areas in total. The reason for splitting the heap into several areas is to make GC efficient. Up to JDK 7 there was a Permanent area, but since JDK 8 the Permanent area is gone and parts of it were moved into the Metaspace area.

GC is divided into **minor GC and major GC (full GC)**. Minor GC is the garbage collection that happens in the young generation, and major GC is the garbage collection that happens in the old generation. Let's go through them one by one.
First, how a minor GC proceeds.
1. When we create an object with the `new` keyword, it is first allocated in the Eden area. When Eden fills up, a minor GC occurs. The unreachable objects in Eden are the ones to be removed, so they are left in Eden to be cleared, while the reachable objects are moved to the survivor1 area.
2. When Eden fills up again and another minor GC occurs, the same process repeats: the reachable objects in Eden are moved to survivor1. If survivor1 is full, its reachable objects are moved to survivor2, and their age value is incremented as they move. This age later becomes the criterion for deciding which objects move to the old generation. Afterwards the Eden and survivor1 areas are cleared.
3. When yet another minor GC occurs, the process repeats. If survivor2 becomes full, its reachable objects are moved back to survivor1 with their age incremented, and survivor2 is cleared.
4. If minor GCs keep happening and the age of a reachable object in a survivor area passes a certain threshold, the object is judged likely to stay in use and is moved to the old generation. This step is called promotion. And what happens when the old generation itself fills up? Its garbage has to be removed as well, and that is exactly the major GC.
A major GC occurs when the old generation becomes full. Major GC takes much longer than minor GC, and while it runs every thread except the GC is stopped. The unreachable objects in the old generation are marked and then removed in the sweep step. Removing them leaves holes in the heap, so the heap is then compacted to remove the gaps. Because no other thread may touch the memory while objects are being moved, all threads are stopped.
So far we have seen how the GC manages memory. Finally, let's briefly look at the kinds of GC that exist. Depending on how the old generation is managed, collectors fall into four kinds: Serial, Parallel, CMS, and G1 (Garbage First).
**1. Serial Garbage Collector**
The most basic GC. It processes the young and old generations sequentially on a single CPU. When the old generation fills up, it uses the mark-sweep-compact algorithm: it scans the area sequentially, marks the unreachable objects, removes them, and compacts the surviving objects into one place. Because the approach is so simple, both minor and major GC are stop-the-world.
**2. Parallel Collector**
The Serial garbage collector is single-threaded and therefore quite inefficient. The Parallel collector improves performance by doing the work in parallel with multiple threads. Even so, both minor and major GC are still stop-the-world.
**3. CMS Collector**
The CMS collector grew out of the idea that shortening the stop-the-world pauses would improve performance. The collectors above clean the old generation with mark-sweep-compact; the CMS collector, by contrast, does not compact the freed space after removing unreachable objects. For this reason the CMS collector is a good fit when the heap is large.
**4. G1 Collector**
The CMS collector's weakness is the memory fragmentation caused by the gaps it leaves behind. The G1 collector improves on this. G1 does not keep fixed young and old areas; instead it divides the heap into many regions. When a region serving as old generation fills up, G1 walks that region, discards the unreachable objects, and copies the reachable ones into another region. The original region is then empty, and because the surviving objects are stored compactly in the new region, fragmentation disappears. Since extra regions are needed for this, G1 is likewise a good fit when the heap is large.
| 67.602941 | 399 | 0.749837 | kor_Hang | 1.00001 |
6fc98a48f9f502c3fd19fe8d7cecaa8c56187a60 | 428 | md | Markdown | static/crew/picard_dixonhill_crew.md | theglette/website | 2d914f27885ecfafcfb2c61e30fccf863877537a | [
"MIT"
] | null | null | null | static/crew/picard_dixonhill_crew.md | theglette/website | 2d914f27885ecfafcfb2c61e30fccf863877537a | [
"MIT"
] | 1 | 2021-04-10T07:44:18.000Z | 2021-04-10T07:52:39.000Z | static/crew/picard_dixonhill_crew.md | theglette/website | 2d914f27885ecfafcfb2c61e30fccf863877537a | [
"MIT"
] | null | null | null | ---
name: Detective Dixon Hill
rarity: 4
series: tng
memory_alpha:
bigbook_tier: 7
events: 21
in_portal: true
date: 14/10/2016
obtained: Post-Launch
mega: false
published: true
---
Not a daily use crew by any stretch, simply a collection piece who’s usable in spurts on the weekends. His namesake television show and some eventable traits will trigger bonuses for him, but his bases may not always make the thaw cost worth it.
| 26.75 | 245 | 0.778037 | eng_Latn | 0.996324 |
6fca02080deb44249f1938031af12ed269fdbd93 | 1,210 | md | Markdown | README.md | trockenasche/fdf2csv | 6647b0a784558607d28d7c2c56f0d3454c85fedb | [
"MIT"
] | 3 | 2018-02-01T21:30:15.000Z | 2020-03-04T16:20:48.000Z | README.md | trockenasche/fdf2csv | 6647b0a784558607d28d7c2c56f0d3454c85fedb | [
"MIT"
] | 3 | 2017-12-19T21:25:32.000Z | 2022-02-05T21:33:11.000Z | README.md | trockenasche/fdf2csv | 6647b0a784558607d28d7c2c56f0d3454c85fedb | [
"MIT"
] | 8 | 2019-06-10T09:04:46.000Z | 2022-02-04T03:54:45.000Z | FDF2CSV
=========
The Forms Data Format (FDF) is based on PDF, it uses the same syntax and has essentially the same file structure, but is much simpler than PDF, since the body of an FDF document consists of only one required object. Forms Data Format is defined in the PDF specification (since PDF 1.2). The Forms Data Format can be used when submitting form data to a server, receiving the response, and incorporating into the interactive form. It can also be used to export form data to stand-alone files that can be imported back into the corresponding PDF interactive form. Beginning in PDF 1.3, FDF can be used to define a container for annotations that are separate from the PDF document they apply to.
tl;dr
-----
FDF (Forms Data Format) is a file format for representing form data and annotations that are contained in a PDF form.<br>
This tool extracts all of that information to a CSV file.
Usage
=====
fdf2csv.py filename[#*.fdf]
Appends a row of data to the output filename.csv if it already exists. It is assumed that
the FDF fields are unique; they become the CSV column names.
The input filename can include the file path (with a leading tilde for home
page). The output CSV has the trailing digits removed.
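
For a sense of how such a conversion can work, here is a minimal Python sketch. It is not the actual `fdf2csv.py` implementation — the field pattern and helper names are assumptions — but it shows the core idea: pull simple `/T`/`/V` pairs out of an FDF body and append them as one CSV row.

```python
import csv
import os
import re

def fdf_to_row(fdf_text):
    """Extract (field, value) pairs from FDF text.

    Minimal sketch: handles only plain string fields of the form
    /T (name) /V (value); hex strings, escape sequences, and nested
    /Kids dictionaries are not covered.
    """
    pattern = re.compile(r"/T\s*\((?P<name>[^)]*)\)\s*/V\s*\((?P<value>[^)]*)\)")
    return {m.group("name"): m.group("value") for m in pattern.finditer(fdf_text)}

def append_row(csv_path, row):
    """Append one row to the CSV, writing a header first if the file is new."""
    is_new = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(row))
        if is_new:
            writer.writeheader()
        writer.writerow(row)

sample = "/T (email) /V (user@example.com) /T (name) /V (Ada)"
print(fdf_to_row(sample))  # → {'email': 'user@example.com', 'name': 'Ada'}
```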
| 60.5 | 691 | 0.775207 | eng_Latn | 0.988482 |
6fca33544cbf57b696a959ed9737046e22cb2f6d | 15,916 | md | Markdown | docs/language/informal/nosuchmethod-forwarding.md | omerlevran46/sdk | b1955d63ad678b651b09db3dd286136c4463f36b | [
"BSD-3-Clause"
] | 8,969 | 2015-05-16T16:49:24.000Z | 2022-03-31T19:54:40.000Z | docs/language/informal/nosuchmethod-forwarding.md | omerlevran46/sdk | b1955d63ad678b651b09db3dd286136c4463f36b | [
"BSD-3-Clause"
] | 30,202 | 2015-05-17T02:27:45.000Z | 2022-03-31T22:54:46.000Z | docs/language/informal/nosuchmethod-forwarding.md | omerlevran46/sdk | b1955d63ad678b651b09db3dd286136c4463f36b | [
"BSD-3-Clause"
] | 1,619 | 2015-05-16T21:36:42.000Z | 2022-03-29T20:36:59.000Z | ## NoSuchMethod Forwarding
Author: eernst@
**Status**: Background material, normative language now in dartLangSpec.tex.
**Version**: 0.7 (2018-07-10)
**This document** is an informal specification of the support in Dart 2 for
invoking `noSuchMethod` in situations where an attempt is made to invoke a
method that does not exist.
**The feature** described here, *noSuchMethod forwarding*, is a particular
approach whereby an implementation of `noSuchMethod` in a class _C_ causes
_C_ to be extended with a set of compiler generated forwarding methods, such
that an invocation of any method in the static interface of _C_ will become
a regular method invocation, which in turn invokes `noSuchMethod`.
## Motivation
In Dart 1.x, `noSuchMethod` will be invoked whenever an attempt is made to
call a method that does not exist.
In other words, consider an instance method invocation of a member named
_m_ on a receiver _o_ whose class _C_ does not have a member named _m_ (or
it has a member named _m_, but it does not admit the given invocation,
e.g., because the number of arguments is wrong). The properties of the
invocation are then specified using an instance _i_ of `Invocation`, and
`noSuchMethod` is then invoked with _i_ as the actual argument. Among other
things, _i_ specifies whether the invocation was a method call or an
invocation of a getter or a setter, and it specifies which actual arguments
were passed.
One difficulty with this design is that it requires developers to take
both method invocations and getter invocations into account, in order to
support a given method using `noSuchMethod`:
```dart
class Foo {
foo(x) {}
}
class MockFoo implements Foo {
// PS: Make sure that a tear-off of `_mockFoo` has the same type
// as a tear-off of `Foo.foo`.
_mockFoo(x) {
// ... implement mock behavior for `foo` here.
}
noSuchMethod(Invocation i) {
if (i.memberName == #foo) {
if (i.isMethod &&
i.positionalArguments.length == 1 &&
i.namedArguments.isEmpty) {
return _mockFoo(i.positionalArguments[0]);
} else if (i.isGetter) {
return _mockFoo;
}
}
return super.noSuchMethod(i);
}
}
```
The reason why the type of a tear-off of `_mockFoo` should be the same
as the type of a tear-off of `foo` is that the former should be able to
emulate the properties of the latter faithfully, including the response
it gives rise to when subjected to type tests, either explicitly or
implicitly.
Obviously, this is verbose, tedious, and difficult to maintain if the
claimed superinterfaces (`implements ...`) in the mock class introduce
a large number of methods with complex signatures. It is particularly
inconvenient if the mock behavior is simple and largely independent of
all those types.
The noSuchMethod forwarding approach eliminates much of this tedium
by means of compiler generated forwarding methods corresponding to all
the unimplemented methods. The example could then be expressed as
follows:
```dart
class Foo {
foo(x) {}
}
class MockFoo implements Foo {
noSuchMethod(Invocation i) {
if (i.memberName == #foo) {
if (i.isMethod &&
i.positionalArguments.length == 1 &&
i.namedArguments.isEmpty) {
// ... implement mock behavior for `foo` here.
}
}
return super.noSuchMethod(i);
}
}
```
With noSuchMethod forwarding, this causes a `foo` forwarding
method to be generated, with the signature declared in `Foo`
and with the necessary code to create and initialize a suitable
`Invocation` which will be passed to `noSuchMethod`.
## Syntax
The grammar remains unchanged.
## Static Analysis
We say that a class _C_ _has a non-trivial_ `noSuchMethod` if _C_ declares
or inherits a concrete method named `noSuchMethod` which is distinct
from the declaration in the built-in class `Object`.
*Note that such a declaration cannot be a getter or setter, and it must
accept one positional argument of type `Invocation`, due to the
requirement that it must correctly override the declaration of
`noSuchMethod` in the class `Object`. For instance, in addition to the
obvious choice `noSuchMethod(Invocation i)` it can be
`noSuchMethod(Object i, [String s])`, but not
`noSuchMethod(Invocation i, String s)`.*
If a concrete class _C_ has a non-trivial `noSuchMethod` then each
method signature (including getters and setters) which is a member of _C_'s
interface and for which _C_ does not have a concrete declaration is
_noSuchMethod forwarded_.
A concrete class _C_ that does _not_ have a non-trivial `noSuchMethod`
implements its interface (*it is a compile-time error not to do so*), but
there may exist superclasses of _C_ declared in other libraries whose
interfaces include some private methods for which _C_ has no concrete
declaration (*such members are by definition omitted from the interface of
_C_, because their names are inaccessible*). Similarly, even if a class _D_
does have a non-trivial `noSuchMethod`, there may exist abstract
declarations of private methods with inaccessible names in superclasses of
_D_ for which _D_ has no concrete declaration. In both of these situations,
such inaccessible private method signatures are _noSuchMethod forwarded_.
No other situations give rise to a noSuchMethod forwarded method
signature.
*This means that whenever it is stated that a class _C_ has a noSuchMethod
forwarded method signature, it is guaranteed to be a concrete class with a
non-trivial `noSuchMethod`, or the signature is guaranteed to be
inaccessible. In the former case, the developer expressed the intent to
obtain implementations of "missing methods" by having a non-trivial
`noSuchMethod` declaration, and in the latter case it is impossible to
write declarations in _C_ that implement the missing private methods, but
they will then be provided as generated forwarders.*
If a class _C_ has a noSuchMethod forwarded signature then an implicit
method implementation implementing that method signature is induced in _C_.
In the case where _C_ already contains an abstract declaration with the
same name, the induced method implementation replaces the abstract
declaration.
It is a compile-time error if a concrete class _C_ has a non-trivial
`noSuchMethod`, and a name `m` has a set of method signatures in the
superinterfaces of _C_ where none is most specific, and there is no
declaration in _C_ which provides such a most specific method signature.
*This means that even in the situation where everything else implies that a
noSuchMethod forwarder should be induced, signature ambiguities must still
be resolved by a developer-written declaration, it cannot be a consequence
of implicitly inducing a noSuchMethod forwarder. However, that
developer-written declaration could be an abstract method in the
concrete class itself.*
*Note that there is no most specific method signature if there are several
method signatures which are equally specific with respect to the argument
types and return type, but an optional formal parameter in these signatures
has different default values in different signatures.*
It is a compile-time error if a class _C_ has a noSuchMethod forwarded
method signature _S_ for a method named _m_, as well as an implementation
of _m_.
*This can only happen if that implementation is inherited and satisfies
some, but not all requirements of the noSuchMethod forwarded method
signature. In the example below, a `foo(int i)` implementation is inherited
and a superinterface declares `foo([int i])`. This is a compile-time error
because `C` does not have a method implementation with signature
`foo([int])`, but if one were to be implicitly induced it would override
`A.foo` (which is capable of accepting some but not all of the argument
lists that an implementation of `foo([int])` would allow). We have made
this an error because it would be error prone to induce a forwarder in `C`
which will silently override an `A.foo` which "almost" satisfies the
requirement in the superinterface. In particular, developers are likely to
be surprised if `A.foo` is not called even when it is passed a single
`int` argument, which precisely matches the declaration of `A.foo`.*
```dart
class A {
foo(int i) => null;
}
abstract class B {
foo([int i]);
}
class C extends A implements B {
noSuchMethod(Invocation i) => ...;
// Error on `foo`: Forwarder would override `A.foo`.
}
```
*Note that this makes it a breaking change, in situations where such a
signature conflict exists in some subtype like `C`, to change an abstract
method declaration to a method implementation: If `A` had been an abstract
class and `A.foo` an abstract method which was replaced by an `A.foo`
declaration which implements the method, the error on `foo` in class `C`
would be introduced because `A.foo` was implemented. There is a reasonably
practical workaround, though: implement `C.foo` with a signature that
resolves the conflict. That implementation might invoke `A.foo` in a
superinvocation, or it might forward to `noSuchMethod`, or some times one
and some times the other, that is up to the developer who writes `C.foo`.*
*Note that it is _not_ a compile-time error if the interface of _C_ has a
noSuchMethod forwarded method signature _S_ with name _m_, and a superclass
of _C_ also has a noSuchMethod forwarded method signature named _m_, such
that the implicitly induced implementation of the former overrides the
implicitly induced implementation of the latter. In other words, it is OK
for a generated forwarder to override another generated forwarder.*
*Note that when a class _C_ has an implicitly induced implementation of a
method, superinvocations in subclasses are allowed, just like they would
have been for a developer-written implementation.*
```dart
abstract class D { baz(); }
class E implements D {
noSuchMethod(Invocation i) => null;
}
class F extends E { baz() { super.baz(); }} // OK
```
## Dynamic Semantics
Assume that a class _C_ has an implicitly induced implementation of a
method _m_ with positional formal parameters
_T<sub>1</sub> a<sub>1</sub>..., T<sub>k</sub> a<sub>k</sub>_
and named formal parameters
_T<sub>k+1</sub> n<sub>1</sub>..., T<sub>k+m</sub> n<sub>m</sub>_.
Said implementation will then create an instance _i_ of the predefined
class `Invocation` such that its
- `isGetter` evaluates to true iff _m_ is a getter,
`isSetter` evaluates to true iff _m_ is a setter,
`isMethod` evaluates to true iff _m_ is a method.
- `memberName` evaluates to the symbol for the name _m_.
- `positionalArguments` evaluates to an immutable list whose
values are _a<sub>1</sub>..., a<sub>k</sub>_.
- `namedArguments` evaluates to an immutable map with the same keys
and values as
_{n<sub>1</sub>: n<sub>1</sub>..., n<sub>m</sub>: n<sub>m</sub>}_
*Note that the number of named arguments can be zero, in which case some of
the positional parameters can be optional. We do not need to mention
optional positional arguments separately, because they receive the same
treatment as required parameters (which are of course always positional).*
Finally the induced method implementation will invoke `noSuchMethod` with
_i_ as the actual argument, and return the result obtained from there.
*This determines the dynamic semantics of implicitly induced methods: The
declared return type and the formal parameters, with type annotations and
default values, are uniquely determined by the noSuchMethod forwarded
method signatures, and invocation of an implicitly induced method has the
same semantics of invocation of other methods. In particular, dynamic type
checks are performed on the actual arguments upon invocation when the
corresponding formal parameter is covariant.*
*This ensures, relying on the heap soundness and expression soundness of
Dart (which ensures that every expression of type _T_ will evaluate to an
entity of type _T_), that all statically type safe invocations will invoke
a method implementation, user-written or implicitly induced. In other
words, with statically checked calls there is no need for dynamic support
for `noSuchMethod` at all.*
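
To make this concrete, the following non-normative Dart sketch shows a statically checked call hitting an implicitly induced forwarder; `MockA` and its `noSuchMethod` behavior are illustrative assumptions, not part of the specification:

```dart
abstract class A {
  int foo(int a);
}

class MockA implements A {
  @override
  dynamic noSuchMethod(Invocation i) => i.positionalArguments[0] * 2;

  // `MockA` is concrete, has a non-trivial `noSuchMethod`, and does not
  // implement `foo`, so a forwarder is implicitly induced. It corresponds
  // roughly to the hand-written method below (illustrative only):
  //
  //   int foo(int a) => noSuchMethod(Invocation.method(#foo, [a])) as int;
}

void main() {
  A a = MockA();
  print(a.foo(21)); // the forwarder packages [21] into an Invocation; prints 42
}
```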
For a dynamic invocation of a member _m_ on a receiver _o_ that has a
non-trivial `noSuchMethod`, the semantics is such that an attempt to invoke
_m_ with the given actual arguments (including possibly some type
arguments) is made at first. If that fails (*because _o_ has no
implementation of _m_ which can be invoked with the given argument list
shape, be it a developer-written method or an implicitly induced
implementation*) `noSuchMethod` is invoked with an actual argument which is
an `Invocation` describing the actual arguments and invocation.
*This implies that dynamic invocations on receivers having a non-trivial
`noSuchMethod` will simply invoke the forwarders whenever possible.
Similarly, it will work for dynamic invocations as well as statically
checked ones to tear off a method which is in the interface of the receiver
and implemented as a generated forwarder.*
*The only remaining situation is when a dynamic invocation invokes a method
which is not present in the static interface of the receiver, or when a
method with that name is present, but its signature does not allow for the
given invocation (e.g., because some required arguments are omitted). In
this situation, the regular instance method invocation has failed (there is
no such regular method, and no such generated forwarder). Such a dynamic
invocation will then invoke `noSuchMethod`. In this situation, a
developer-written implementation of `noSuchMethod` should also support both
method invocations and tear-offs explicitly (as it should before this
feature was added), because there is no generated forwarder to do that.*
*This approach may incur a certain performance penalty, but only for these
invocations (which are dynamic, and have already failed to invoke an
existing method, regular or generated).*
*In return, this approach enforces the following simple invariant, for both
statically checked and dynamic invocations: Whenever an instance method is
invoked, and no such method exists, `noSuchMethod` will be invoked.*
*One special case to be aware of is where a forwarder is torn off and then
invoked with an actual argument list which does not match the formal
parameter list. In that situation we will get an invocation of
`Object.noSuchMethod` rather than the `noSuchMethod` in the original
receiver, because this is an invocation of a function object (and they do
not override `noSuchMethod`):*
```dart
class A {
dynamic noSuchMethod(Invocation i) => null;
void foo();
}
main() {
A a = new A();
dynamic f = a.foo;
// Invokes `Object.noSuchMethod`, not `A.noSuchMethod`, so it throws.
f(42);
}
```
## Updates
* Jul 10th 2018, version 0.7: Added requirement to generate forwarders
for inaccessible private methods even in the case where there is no
non-trivial `noSuchMethod`.
* Mar 22nd 2018, version 0.6: Added example to illustrate the case where a
torn-off method invokes `Object.noSuchMethod`, not the one in the
receiver, because of a non-matching actual argument list.
* Nov 27th 2017, version 0.5: Changed terminology to use 'implicitly
  induced method implementations', helping achieve a major simplification
  of the dynamic semantics.
* Nov 22nd 2017, version 0.4: Removed support for explicitly requesting
generated forwarder in conflict case. Improved the clarity of many
parts.
* Oct 5th 2017, version 0.3: Clarified that generated forwarders must
pass an `Invocation` to `noSuchMethod` which specifies the bindings
of formal arguments to actual arguments. Clarified the treatment of
default values for optional arguments.
* Sep 20th 2017, version 0.2: Many smaller adjustments, based on review
feedback.
* Sep 18th 2017, version 0.1: Created the first version of this document.
| 43.725275 | 76 | 0.773247 | eng_Latn | 0.999706 |
6fcb2251abb72ca1b9fe6ec91b00c9588d5fb59b | 80 | md | Markdown | README.md | Helio-Leoes/Ola_Mundo | fc46688548c8f48ef464ecf5d2e8783aaabd5cff | [
"MIT"
] | null | null | null | README.md | Helio-Leoes/Ola_Mundo | fc46688548c8f48ef464ecf5d2e8783aaabd5cff | [
"MIT"
] | null | null | null | README.md | Helio-Leoes/Ola_Mundo | fc46688548c8f48ef464ecf5d2e8783aaabd5cff | [
"MIT"
] | null | null | null | # Olá, Mundo!
Primeiro repositorio criado no Git
Essa linha foi pelo site!!
| 16 | 35 | 0.725 | por_Latn | 0.999855 |
6fcc106482951dd8be1ef9374ef926bd6f06cb1c | 536 | md | Markdown | zh/linux/w.md | reinhart1010/nix | a1803c718ead3b79854b65396c8967bd5ec32874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | zh/linux/w.md | reinhart1010/nix | a1803c718ead3b79854b65396c8967bd5ec32874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | zh/linux/w.md | reinhart1010/nix | a1803c718ead3b79854b65396c8967bd5ec32874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
layout: page
title: linux/w (中文)
description: "显示登录者及其进程。"
content_hash: e98b146e6f886f71f358e0e5a40aab6a7e841793
related_topics:
- title: English version
url: /en/linux/w.html
icon: bi bi-globe
---
# w
显示登录者及其进程。
更多信息:<https://www.geeksforgeeks.org/w-command-in-linux-with-examples/>.
- 显示当前登录的所有用户的信息:
`w`
- 显示有关特定用户的信息:
`w `<span class="tldr-var badge badge-pill bg-dark-lm bg-white-dm text-white-lm text-dark-dm font-weight-bold">用户</span>
- 显示信息而不包括标题:
`w --no-header`
- 显示信息不包括登录、JCPU 和 PCPU 列:
`w --short`
| 17.290323 | 120 | 0.708955 | yue_Hant | 0.442746 |
6fcc545d13ffa46fb072e20807f17db9403d1003 | 16 | md | Markdown | README.md | shinnlove/weactplus | 4c28c46146f50cd3e2bdbce87f1ed06d88b93ae8 | [
"Apache-2.0"
] | 1 | 2019-03-25T12:15:58.000Z | 2019-03-25T12:15:58.000Z | README.md | shinnlove/weactplus | 4c28c46146f50cd3e2bdbce87f1ed06d88b93ae8 | [
"Apache-2.0"
] | null | null | null | README.md | shinnlove/weactplus | 4c28c46146f50cd3e2bdbce87f1ed06d88b93ae8 | [
"Apache-2.0"
] | null | null | null | # weactplus
tp5
| 5.333333 | 11 | 0.75 | vie_Latn | 0.811792 |
6fcc75c2c901e76dbeabaf88eb6bb0f4e4e105d9 | 266 | md | Markdown | _posts/2021-09-07-travelguide.md | LWFlouisa/FraponicConlang | 5e835f6a91442bd1c4ff838132da0607fc07eea0 | [
"MIT"
] | null | null | null | _posts/2021-09-07-travelguide.md | LWFlouisa/FraponicConlang | 5e835f6a91442bd1c4ff838132da0607fc07eea0 | [
"MIT"
] | null | null | null | _posts/2021-09-07-travelguide.md | LWFlouisa/FraponicConlang | 5e835f6a91442bd1c4ff838132da0607fc07eea0 | [
"MIT"
] | null | null | null | ---
layout: post
title: Travel Phrases WIP
catagories: travelphrases
---
## Butler And Travel Guide
Perowu? - From where, where to?<br />
Lavokudasa? - Bathroom please.<br />
Usatobuso? - Where is the bus?<br />
Miyofamos - I'm hungry, is there a restaurant?<br />
| 22.166667 | 52 | 0.695489 | eng_Latn | 0.946648 |
6fcc98ee518eb29acf9f9d9c9a07234d036045b4 | 2,405 | md | Markdown | docs/src/faq.md | MasonProtter/ApproxFun.jl | 49c9d5893685fe68fad1abce7448159a488475cf | [
"BSD-3-Clause"
] | null | null | null | docs/src/faq.md | MasonProtter/ApproxFun.jl | 49c9d5893685fe68fad1abce7448159a488475cf | [
"BSD-3-Clause"
] | null | null | null | docs/src/faq.md | MasonProtter/ApproxFun.jl | 49c9d5893685fe68fad1abce7448159a488475cf | [
"BSD-3-Clause"
] | null | null | null | # Frequently Asked Questions
## Approximating functions
### How do I interpolate a function at a specified grid?
In the case where the grid is specified by `points(space,n)`, you can apply the default transform to data:
```@meta
DocTestSetup = quote
using ApproxFun
end
```
```jldoctest
julia> S = Chebyshev(1..2);
julia> p = points(S,20); # the default grid
julia> v = exp.(p); # values at the default grid
julia> f = Fun(S,ApproxFun.transform(S,v));
julia> f(1.1)
3.0041660239464347
julia> exp(1.1)
3.0041660239464334
```
ApproxFun has no inbuilt support for interpolating functions at other sets of points, but this can be accomplished manually by evaluating the basis at the set of points and using `\`:
```jldoctest
julia> S = Chebyshev(1..2);
julia> n = 50;
julia> p = range(1,stop=2,length=n); # a non-default grid
julia> v = exp.(p); # values at the non-default grid
julia> V = Array{Float64}(undef,n,n); # Create a Vandermonde matrix by evaluating the basis at the grid
julia> for k = 1:n
V[:,k] = Fun(S,[zeros(k-1);1]).(p)
end
julia> f = Fun(S,V\v);
julia> f(1.1)
3.0041660228311926
julia> exp(1.1)
3.0041660239464334
```
Note that an evenly spaced grid suffers from instability for large `n`. The easiest way around this is to use least squares with more points than coefficients, instead of interpolation:
```jldoctest
julia> S = Chebyshev(1..2);
julia> n = 100; m = 50;
julia> p = range(1,stop=2,length=n); # a non-default grid
julia> v = exp.(p); # values at the non-default grid
julia> V = Array{Float64}(undef,n,m); # Create a Vandermonde matrix by evaluating the basis at the grid
julia> for k = 1:m
V[:,k] = Fun(S,[zeros(k-1);1]).(p)
end
julia> f = Fun(S,V\v);
julia> f(1.1)
3.004166023946434
julia> exp(1.1)
3.0041660239464334
```
We can use this same approach for multivariate functions:
```jldoctest
julia> S = Chebyshev(0..1)^2;
julia> n = 1000; m = 50;
julia> using Random; Random.seed!(0); x = rand(n); y = rand(n);
julia> v = exp.(x .* cos.(y)); # values at the non-default grid
julia> V = Array{Float64}(undef,n,m); # Create a Vandermonde matrix by evaluating the basis at the grid
julia> for k = 1:m
V[:,k] = Fun(S,[zeros(k-1);1]).(x,y)
end
julia> f = Fun(S,V\v);
julia> f(0.1,0.2)
1.1029700685084018
julia> exp(0.1*cos(0.2))
1.1029701284210731
```
```@meta
DocTestSetup = nothing
```
| 21.096491 | 186 | 0.656549 | eng_Latn | 0.868554 |
6fcdbeb65ae138d126c8a252ddeb02e5a2dfb8f6 | 5,776 | md | Markdown | docs/advanced/creating-custom-registries.md | TarunavBA/gulp | 818bd73e5da1ad69f43eef84214c98d6392a73e4 | [
"MIT"
] | 27,647 | 2015-01-01T02:10:57.000Z | 2022-03-31T17:05:57.000Z | docs/advanced/creating-custom-registries.md | TarunavBA/gulp | 818bd73e5da1ad69f43eef84214c98d6392a73e4 | [
"MIT"
] | 1,761 | 2015-01-02T16:21:52.000Z | 2022-03-28T18:24:10.000Z | docs/advanced/creating-custom-registries.md | LaudateCorpus1/gulp | b7ebcdf9e87eec3fe51d834caf772d9c8d9e8c00 | [
"MIT"
] | 6,568 | 2015-01-02T02:53:54.000Z | 2022-03-24T18:00:35.000Z | <!-- front-matter
id: creating-custom-registries
title: Creating Custom Registries
hide_title: true
sidebar_label: Creating Custom Registries
-->
# Creating Custom Registries
Allows custom registries to be plugged into the task system, which can provide shared tasks or augmented functionality. Registries are registered using [`registry()`][registry-api-docs].
## Structure
In order to be accepted by gulp, custom registries must follow a specific format.
```js
// as a function
function TestRegistry() {}
TestRegistry.prototype.init = function (gulpInst) {}
TestRegistry.prototype.get = function (name) {}
TestRegistry.prototype.set = function (name, fn) {}
TestRegistry.prototype.tasks = function () {}
// as a class
class TestRegistry {
init(gulpInst) {}
get(name) {}
set(name, fn) {}
tasks() {}
}
```
If a registry instance passed to `registry()` doesn't have all four methods, an error will be thrown.
## Registration
If we want to register our example registry from above, we will need to pass an instance of it to `registry()`.
```js
const { registry } = require('gulp');
// ... TestRegistry setup code
// good!
registry(new TestRegistry())
// bad!
registry(TestRegistry())
// This will trigger an error: 'Custom registries must be instantiated, but it looks like you passed a constructor'
```
## Methods
### `init(gulpInst)`
The `init()` method of a registry is called at the very end of the `registry()` function. The gulp instance passed as the only argument (`gulpInst`) can be used to pre-define tasks using
`gulpInst.task(taskName, fn)`.
#### Parameters
| parameter | type | note |
|:---------:|:----:|------|
| gulpInst | object | Instance of gulp. |
### `get(name)`
The `get()` method receives a task `name` for the custom registry to resolve and return, or `undefined` if no task with that name exists.
#### Parameters
| parameter | type | note |
|:---------:|:----:|------|
| name | string | Name of the task to be retrieved. |
### `set(name, fn)`
The `set()` method receives a task `name` and `fn`. This is called internally by `task()` to provide user-registered tasks to custom registries.
#### Parameters
| parameter | type | note |
|:---------:|:----:|------|
| name | string | Name of the task to be set. |
| fn | function | Task function to be set. |
### `tasks()`
Must return an object listing all tasks in the registry.
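
Putting the four methods together, here is a minimal in-memory registry sketch that satisfies the contract above; the `SimpleRegistry` name and storage scheme are illustrative assumptions, not part of gulp's API:

```javascript
// Minimal custom registry satisfying the four-method contract.
class SimpleRegistry {
  constructor() {
    this._tasks = {};
  }
  init(gulpInst) {
    // Nothing to pre-register here; a shared-task registry would call
    // gulpInst.task(name, fn) in this method.
  }
  get(name) {
    return this._tasks[name]; // undefined when no such task exists
  }
  set(name, fn) {
    // Store and return the task, as task() expects.
    return (this._tasks[name] = fn);
  }
  tasks() {
    // Return a copy listing every registered task.
    return Object.assign({}, this._tasks);
  }
}

// Exercising the contract directly, without gulp:
const reg = new SimpleRegistry();
reg.set('clean', function clean(cb) { cb(); });
console.log(typeof reg.get('clean')); // "function"
console.log(reg.get('missing')); // undefined
```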
## Use Cases
### Sharing Tasks
To share common tasks with all your projects, you can expose an `init` method on the registry and it will receive an instance of gulp as the only argument. You can then use `gulpInst.task(name, fn)` to register pre-defined tasks.
For example, you might want to share a `clean` task:
```js
const fs = require('fs');
const util = require('util');
const DefaultRegistry = require('undertaker-registry');
const del = require('del');
function CommonRegistry(opts){
DefaultRegistry.call(this);
opts = opts || {};
this.buildDir = opts.buildDir || './build';
}
util.inherits(CommonRegistry, DefaultRegistry);
CommonRegistry.prototype.init = function(gulpInst) {
const buildDir = this.buildDir;
const exists = fs.existsSync(buildDir);
if(exists){
throw new Error('Cannot initialize common tasks. ' + buildDir + ' directory exists.');
}
gulpInst.task('clean', function(){
return del([buildDir]);
});
}
module.exports = CommonRegistry;
```
Then to use it in a project:
```js
const { registry, series, task } = require('gulp');
const CommonRegistry = require('myorg-common-tasks');
registry(new CommonRegistry({ buildDir: '/dist' }));
task('build', series('clean', function build(cb) {
// do things
cb();
}));
```
### Sharing Functionality
By controlling how tasks are added to the registry, you can decorate them.
For example, if you wanted all tasks to share some data, you can use a custom registry to bind them to that data. Be sure to return the altered task, as per the description of registry methods above:
```js
const { registry, series, task } = require('gulp');
const util = require('util');
const DefaultRegistry = require('undertaker-registry');
// Some task defined somewhere else
const BuildRegistry = require('./build.js');
const ServeRegistry = require('./serve.js');
function ConfigRegistry(config){
DefaultRegistry.call(this);
this.config = config;
}
util.inherits(ConfigRegistry, DefaultRegistry);
ConfigRegistry.prototype.set = function set(name, fn) {
var bound = fn.bind(this.config);
// Preserve internal properties and task metadata.
var task = Object.assign(bound, fn);
// The `DefaultRegistry` uses `this._tasks` for storage.
this._tasks[name] = task;
return task;
};
registry(new BuildRegistry());
registry(new ServeRegistry());
// `registry` will reset each task in the registry with
// `ConfigRegistry.prototype.set` which will bind them to the config object.
registry(new ConfigRegistry({
src: './src',
build: './build',
bindTo: '0.0.0.0:8888'
}));
task('default', series('clean', 'build', 'serve', function(cb) {
console.log('Server bind to ' + this.bindTo);
console.log('Serving' + this.build);
cb();
}));
```
## Examples
* [undertaker-registry][undertaker-registry-example]: The Gulp 4 default registry.
* [undertaker-common-tasks][undertaker-common-tasks-example]: Proof-of-concept custom registry that pre-defines tasks.
* [undertaker-task-metadata][undertaker-task-metadata-example]: Proof-of-concept custom registry that attaches metadata to each task.
[registry-api-docs]: ../api/registry.md
[undertaker-registry-example]: https://github.com/gulpjs/undertaker-registry
[undertaker-common-tasks-example]: https://github.com/gulpjs/undertaker-common-tasks
[undertaker-task-metadata-example]: https://github.com/gulpjs/undertaker-task-metadata
| 27.769231 | 229 | 0.706371 | eng_Latn | 0.896403 |
6fcddcf6a51c92a5d353f2ba1b719eed146ef086 | 2,054 | md | Markdown | README.md | soheilrt/radioDaal | 92aa1d4c893a6311a25688fc609c2fb567d6144c | [
"MIT"
] | 1 | 2021-01-06T14:32:54.000Z | 2021-01-06T14:32:54.000Z | README.md | soheilrt/radioDaal | 92aa1d4c893a6311a25688fc609c2fb567d6144c | [
"MIT"
] | null | null | null | README.md | soheilrt/radioDaal | 92aa1d4c893a6311a25688fc609c2fb567d6144c | [
"MIT"
] | 1 | 2021-01-07T08:04:53.000Z | 2021-01-07T08:04:53.000Z | 
# [Radio Daal](https://radiodaal.ir)

The website of Radio Daal, a Persian-language podcast about education, work, and emigration.

# About Radio Daal

Isn't it true that each of us can only say, write, and type a limited number of words in a lifetime? So isn't it better to use those words as well as we can, and share them with as many people as possible?

Radio Daal grew out of exactly that idea: I was already sitting down with my friends to chat and ask them about emigration, work, and life, so why not share their words and experiences with everyone else?

Especially since I see that many people, myself included, really don't know what options life after university, or life in the workplace, will put in front of them.

# Contributing

## Interviews

This podcast is built on friends taking part and sharing the experiences they have had. So if you think you have made a choice or walked a path that taught you something worth passing on, I would be glad to hear from you for a casual chat about it. You can find everything you need for this at [this page](https://radiodaal.ir/participate).

## Design

If any of you have design skills, I would appreciate it if you got in touch so we can add a bit more polish to the podcast.

# Contact

You can send me your suggestions, comments, and questions via [Telegram](https://t.me/radioDaalBot) or [email](radioDaalPodcast@gmail.com).

# Technical details

This is a static site. It is built with Jekyll and served via Netlify. One of the podcast's listeners has written a post about connecting the site to a domain, which you can read [here](https://virgool.io/@dany_kh/githubpagesdominir-zqjgtpk5pjij).
### Installation
```
brew upgrade ruby
brew link --overwrite ruby
echo 'export PATH="/usr/local/opt/ruby/bin:$PATH"' >> ~/.zshrc
gem install --user-install bundler jekyll
gem install jalalidate
gem install jekyll-paginate
```
### Running
- Build: `jekyll build --watch`
- Serve: `jekyll serve --host 0.0.0.0 --incremental --watch`
| 48.904762 | 364 | 0.755599 | pes_Arab | 0.987812 |