hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e20259bc50862a4076fcc96e16fbca549971bd73 | 453 | md | Markdown | README.md | silviacolumbu/Sign-concordance-of-QR-residuals | 67b2f433f78d2e9562e54012f34cbc563a7ffaf9 | [
"MIT"
] | null | null | null | README.md | silviacolumbu/Sign-concordance-of-QR-residuals | 67b2f433f78d2e9562e54012f34cbc563a7ffaf9 | [
"MIT"
] | null | null | null | README.md | silviacolumbu/Sign-concordance-of-QR-residuals | 67b2f433f78d2e9562e54012f34cbc563a7ffaf9 | [
"MIT"
] | null | null | null | # Modeling sign concordance of quantile regression residuals with multiple outcomes
This is the code we used for the applications in the paper ["**Modeling sign concordance of quantile regression residuals with multiple outcomes**"](https://arxiv.org/abs/2104.10436) by Silvia Columbu, Paolo Frumento and Matteo Bottai.
## Usage
...
## Citing us
If you use our code in your academic project, please cite our paper using the following bibtex entry:
| 45.3 | 236 | 0.781457 | eng_Latn | 0.995386 |
e2036107fbebd4979cdaacd79ec69cfdcfbf6354 | 454 | md | Markdown | src/articles/README.md | elypsx/v3-homepage | 1b57659cf96767202140f97d9969f91b8ff63c3c | [
"MIT"
] | null | null | null | src/articles/README.md | elypsx/v3-homepage | 1b57659cf96767202140f97d9969f91b8ff63c3c | [
"MIT"
] | 7 | 2021-06-14T15:59:45.000Z | 2021-06-21T22:12:25.000Z | src/articles/README.md | elypsx/v3-homepage | 1b57659cf96767202140f97d9969f91b8ff63c3c | [
"MIT"
] | null | null | null | ---
date: 2021-4-22
sidebar: false
type: page
title: Artikel
description: Interesseante Artikel von A bis Z
permalink: /articles/
---
<div id="listings-page">
<div style="display: flex;">
<img :src="$withBase('/assets/img/doodlefolder_99344-512.png')" alt="Folder Image" width="50px" height="50px" style="margin: -10px 20px 0 0;">
<h2>{{ $page.frontmatter.title }}</h2>
</div>
<br>
<h5>{{ $page.frontmatter.description }}</h5>
<ArticlesIndex />
</div>
| 22.7 | 142 | 0.685022 | kor_Hang | 0.101275 |
e203b75e0e0ab5a377b99332aed01c43dfcb548e | 50 | md | Markdown | README.md | rodrigogribeiro/automata-theory | ecb572fed86b84434bc0c0c82d5aa5fd895e33ff | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | rodrigogribeiro/automata-theory | ecb572fed86b84434bc0c0c82d5aa5fd895e33ff | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | rodrigogribeiro/automata-theory | ecb572fed86b84434bc0c0c82d5aa5fd895e33ff | [
"Apache-2.0",
"MIT"
] | null | null | null | automata-theory
===============
README text here.
| 12.5 | 17 | 0.56 | oci_Latn | 0.323718 |
e2041b77dff0aeba97d7d5384d64a8947d0759d5 | 60 | md | Markdown | README.md | rohinidas0709/hedonic | 47e9b685e35f4a4bd32a7b83fc434cb0c0825d35 | [
"MIT"
] | null | null | null | README.md | rohinidas0709/hedonic | 47e9b685e35f4a4bd32a7b83fc434cb0c0825d35 | [
"MIT"
] | null | null | null | README.md | rohinidas0709/hedonic | 47e9b685e35f4a4bd32a7b83fc434cb0c0825d35 | [
"MIT"
] | null | null | null | # hedonic
modelling noise structure for robust PAC learning
| 20 | 49 | 0.833333 | eng_Latn | 0.79113 |
e204abd4f5ca1bca85f4ebbd21e0e459de0709ae | 7,079 | md | Markdown | fabric/2773-3153/3065.md | hyperledger-gerrit-archive/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 2 | 2021-11-08T08:06:48.000Z | 2021-12-03T01:51:44.000Z | fabric/2773-3153/3065.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | null | null | null | fabric/2773-3153/3065.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 4 | 2019-12-07T05:54:26.000Z | 2020-06-04T02:29:43.000Z | <strong>Project</strong>: fabric<br><strong>Branch</strong>: master<br><strong>ID</strong>: 3065<br><strong>Subject</strong>: WIP: Chain genesis configuration schema(proto, go)<br><strong>Status</strong>: ABANDONED<br><strong>Owner</strong>: Elli Androulaki - lli@zurich.ibm.com<br><strong>Assignee</strong>:<br><strong>Created</strong>: 12/7/2016, 1:20:20 PM<br><strong>LastUpdated</strong>: 12/20/2016, 12:21:53 PM<br><strong>CommitMessage</strong>:<br><pre>WIP: Chain genesis configuration schema(proto, go)
This is a changeset containing schema for chain genesis,
that represents as well the orderer channel init configuration.
System channel genesis configuration is part of the orderer
setup configuration, and includes the configuration of all the
MSPs in the network of peers, the public ordering client and server
(consensus) configuration, as well as channel specific access
control lists (readers, and writers for all chains).
More generically, chain genesis configuration, includes the following
information:
(i) chain MSPs description
(ii) gossip network information (optionally and only for application
chains)
(iii) consensus specific information (ordering client/server)
(iv) read policies to define who has read access to chain blocks,
(v) write policies to define who has permission to submit
transactions to the chain
(vi) admin policy, to define who is authorized to take modifications
to the chain configuration.
The files in this changeset are as follows:
- fabric/config-schemas/chain-genesis-config-schema.go: includes the go
schema for chain genesis; the latter covers also the orderer
channel initialization config
- protos/common/chain-genesis-config.proto protobuf version of the
above schema (for fabric-internal use)
- fabric/config-schemas/chain-genesis-config.json is a sample config
file for application chain
- fabric/config-schemas/orderer-channel-genesis-config.json is a
sample for the orderer channel setup configuration file.
Change-Id: Iecfd53c6f0dd79dbed8e2de09bbec030d46fd859
Signed-off-by: Elli Androulaki <lli@zurich.ibm.com>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Elli Androulaki - lli@zurich.ibm.com<br><strong>Reviewed</strong>: 12/7/2016, 1:20:20 PM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 12/7/2016, 1:23:11 PM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-x86_64/3747/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 12/7/2016, 1:27:52 PM<br><strong>Message</strong>: <pre>Patch Set 1: Verified-1
Build Failed
https://jenkins.hyperledger.org/job/fabric-verify-x86_64/3747/ : FAILURE</pre><strong>Reviewer</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>Reviewed</strong>: 12/19/2016, 8:15:05 AM<br><strong>Message</strong>: <pre>Patch Set 1:
(7 comments)
Thanks, Elli, please find here my comments!</pre><strong>Reviewer</strong>: Elli Androulaki - lli@zurich.ibm.com<br><strong>Reviewed</strong>: 12/20/2016, 12:21:53 PM<br><strong>Message</strong>: <pre>Abandoned
Abadnoning as split in further newer and smaller changesets.</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Elli Androulaki - lli@zurich.ibm.com<br><strong>Uploader</strong>: Elli Androulaki - lli@zurich.ibm.com<br><strong>Created</strong>: 12/7/2016, 1:20:20 PM<br><strong>UnmergedRevision</strong>: [104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d](https://github.com/hyperledger-gerrit-archive/fabric/commit/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 12/7/2016, 1:27:52 PM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br><h2>Comments</h2><strong>Commenter</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>CommentLine</strong>: [config-schemas/chain-genesis-config-schema.go#L0](https://github.com/hyperledger-gerrit-archive/fabric/blob/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d/config-schemas/chain-genesis-config-schema.go#L0)<br><strong>Comment</strong>: <pre>Is this file still required? The .proto and the corresponding .pb.go seem to contain everything that is here and so this seems redundant (unless I'm missing something)</pre><strong>Commenter</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>CommentLine</strong>: [config-schemas/samples/chain-genesis-config.json#L0](https://github.com/hyperledger-gerrit-archive/fabric/blob/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d/config-schemas/samples/chain-genesis-config.json#L0)<br><strong>Comment</strong>: <pre>Not sure these sample files are still required?</pre><strong>Commenter</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>CommentLine</strong>: [config-schemas/samples/orderer-channel-init-config.json#L0](https://github.com/hyperledger-gerrit-archive/fabric/blob/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d/config-schemas/samples/orderer-channel-init-config.json#L0)<br><strong>Comment</strong>: <pre>Not sure these sample files are still required?</pre><strong>Commenter</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>CommentLine</strong>: [protos/common/chain_genesis_config.proto#L19](https://github.com/hyperledger-gerrit-archive/fabric/blob/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d/protos/common/chain_genesis_config.proto#L19)<br><strong>Comment</strong>: <pre>Should "common" go there instead of "config"?</pre><strong>Commenter</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>CommentLine</strong>: [protos/common/chain_genesis_config.proto#L81](https://github.com/hyperledger-gerrit-archive/fabric/blob/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d/protos/common/chain_genesis_config.proto#L81)<br><strong>Comment</strong>: <pre>Should we remove this structure given that MSPManager.Setup now takes an array of MSPConfig pointers?</pre><strong>Commenter</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>CommentLine</strong>: [protos/common/chain_genesis_config.proto#L154](https://github.com/hyperledger-gerrit-archive/fabric/blob/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d/protos/common/chain_genesis_config.proto#L154)<br><strong>Comment</strong>: <pre>Which is it? Admin or member? Let's maybe clarify it here in the comment so that there's no uncertainity</pre><strong>Commenter</strong>: Alessandro Sorniotti - ale.linux@sopit.net<br><strong>CommentLine</strong>: [protos/common/chain_genesis_config.proto#L181](https://github.com/hyperledger-gerrit-archive/fabric/blob/104b11fc1a1a1c1606e1ad84257cbc5cdc073e0d/protos/common/chain_genesis_config.proto#L181)<br><strong>Comment</strong>: <pre>let's have bytes if you don't disagree.. It seems more flexible if ever we want to store some binary data in there (e.g. a serialized identity).</pre></blockquote> | 144.469388 | 3,828 | 0.792626 | eng_Latn | 0.331356 |
e204fa7de91d2aa1e57c215d8d45b6dd6711cf0c | 2,736 | md | Markdown | content/monitoring/deploy-grafana.md | rudpot/eks-workshop | b73ee26d6daeef8267ca3aba71035f16b896aff1 | [
"MIT-0"
] | 1 | 2020-03-15T18:01:12.000Z | 2020-03-15T18:01:12.000Z | content/monitoring/deploy-grafana.md | rudpot/eks-workshop | b73ee26d6daeef8267ca3aba71035f16b896aff1 | [
"MIT-0"
] | 2 | 2021-12-10T18:59:13.000Z | 2022-01-06T19:47:35.000Z | content/monitoring/deploy-grafana.md | rudpot/eks-workshop | b73ee26d6daeef8267ca3aba71035f16b896aff1 | [
"MIT-0"
] | 1 | 2019-11-08T07:40:52.000Z | 2019-11-08T07:40:52.000Z | ---
title: "Deploy Grafana"
date: 2018-10-14T20:54:13-04:00
weight: 15
draft: false
---
#### Deploy Grafana
We are now going to install Grafana. For this example, we are primarily using the Grafana defaults,
but we are overriding several parameters. As with Prometheus, we are setting the storage class
to gp2, admin password, configuring the datasource to point to Prometheus and creating an
[external load](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/)
balancer for the service.
```
kubectl create namespace grafana
helm install stable/grafana \
--name grafana \
--namespace grafana \
--set persistence.storageClassName="gp2" \
--set adminPassword="EKS!sAWSome" \
--set datasources."datasources\.yaml".apiVersion=1 \
--set datasources."datasources\.yaml".datasources[0].name=Prometheus \
--set datasources."datasources\.yaml".datasources[0].type=prometheus \
--set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local \
--set datasources."datasources\.yaml".datasources[0].access=proxy \
--set datasources."datasources\.yaml".datasources[0].isDefault=true \
--set service.type=LoadBalancer
```
Run the following command to check if Grafana is deployed properly:
```sh
kubectl get all -n grafana
```
You should see similar results. They should all be Ready and Available
```text
NAME READY STATUS RESTARTS AGE
pod/grafana-b9697f8b5-t9w4j 1/1 Running 0 2m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana LoadBalancer 10.100.49.172 abe57f85de73111e899cf0289f6dc4a4-1343235144.us-west-2.elb.amazonaws.com 80:31570/TCP 3m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1 1 1 1 2m
NAME DESIRED CURRENT READY AGE
replicaset.apps/grafana-b9697f8b5 1 1 1 2m
```
You can get Grafana ELB URL using this command. Copy & Paste the value into browser to access Grafana web UI.
```sh
export ELB=$(kubectl get svc -n grafana grafana -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://$ELB"
```
When logging in, use the username **admin** and get the password hash by running the following:
```
kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
{{% notice tip %}}
It can take several minutes before the ELB is up, DNS is propagated and the nodes are registered.
{{% /notice %}}
| 37.479452 | 142 | 0.667398 | eng_Latn | 0.516665 |
e20534ea0f7883f5da14781b6d80baa955324da1 | 3,475 | md | Markdown | README.md | Wayman-Zimmerman-Chu/vector | f456f227e5fef04fa480333bde875ad7e3812135 | [
"Apache-2.0"
] | 2 | 2016-03-02T22:22:35.000Z | 2016-03-16T23:39:42.000Z | README.md | Wayman-Zimmerman-Chu/vector | f456f227e5fef04fa480333bde875ad7e3812135 | [
"Apache-2.0"
] | 15 | 2016-02-26T00:30:03.000Z | 2016-04-20T07:35:53.000Z | README.md | Wayman-Zimmerman-Chu/vector | f456f227e5fef04fa480333bde875ad7e3812135 | [
"Apache-2.0"
] | null | null | null | # Group Project - Vector
Need to try a new food spot, bar or café? Vector is a simple, intuitive app that allows you and a friend to find a meeting spot exactly at the midpoint between your locations. Wherever the midpoint is calculated, icons pop up on a map near the midpoint, allowing you to choose the type of meeting place, such as a bar, café, bakery, restaurant, or supermarket. Use the convenience of meeting your friend somewhere in the middle with Vector.
Time spent: **56** mythical man hours spent in total
## User Stories
The following **required** functionality is completed:
- [x] User can get current location
- [x] User can get input a friend's location & get a midpoint
- [x] User can get friends location from a list of friends
- [x] User can choose the type of meeting point (ex: bar, coffee shop, library).
- [x] User can pull a list of businesses around a radius of the meeting point.
- [x] Users can access detailed view of midpoint location(s)
- [x] User account persists through app restarts
The following **optional** features are implemented:
- [X] User can select multiple friends and calculate the midpoint between everyone selected
Please list two areas of the assignment you'd like to **discuss further with your peers** during the next class (examples include better ways to implement something, how to extend your app in certain ways, etc):
1. Using Google Map API
2. Integrating multiple API datasets into a single "stream" of user info
## Video Walkthrough
Here's a walkthrough of implemented user stories:
<img src='gifs/vector_b1.gif' title='Video Walkthrough' width='' alt='Video Walkthrough' />
GIF created with[LiceCap](http://www.cockos.com/licecap/).
## Wireframes
### Main View
Default view allows interaction with the local area and nearest friends.
<img src='images/wireframe_main.png' title='Main Screen Wireframe' width='360' alt='wireframes' />
### Location Details View
When a location is selected along the vector between two friends.
<img src='images/wireframe_business.png' title='Main Screen Wireframe' width='360' alt='wireframes' />
### Friends List View
Tap the more friends button to access the full list of friends.
<img src='images/wireframe_friends.png' title='Main Screen Wireframe' width='360' alt='wireframes' />
### Data Schema
#### User Model:
```
var owner: PFUser?
var userName: String?
var firstname: String?
var lastname: String?
var profilePicture: UIImage?
var phone: String?
var latitude: Double?
var longitude: Double?
var destination: String?
var friends: [String]?
var friendRequest: [String]?
var friendAdd: [String]?
```
## Notes
Describe any challenges encountered while building the app.
## Data Model
- Google Maps API: coordinate.longitude, coordinate.latitude
- Google Places API: name, vicinity, geometry.location.lat, geometry.location.lng, placeType, photos
## License
Copyright 2016 WayZimChu
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 35.459184 | 440 | 0.755108 | eng_Latn | 0.986481 |
e2055f2ba3c9d4340df8de6f267bd2698c798011 | 4,786 | md | Markdown | api/lib.gfx.bitmap.md | avsa242/spin-standard-library | 9bfb0a462539f372325a776868c24a152f85cd3f | [
"MIT"
] | 4 | 2019-12-06T02:29:29.000Z | 2020-04-07T17:47:30.000Z | api/lib.gfx.bitmap.md | avsa242/p2-spin-standard-library | ba9d4afe47bacee1e758a49555973129a20b7bae | [
"MIT"
] | 7 | 2019-11-05T21:51:17.000Z | 2022-02-14T21:48:27.000Z | api/lib.gfx.bitmap.md | avsa242/spin-standard-library | 9bfb0a462539f372325a776868c24a152f85cd3f | [
"MIT"
] | 2 | 2020-08-01T08:45:43.000Z | 2021-09-20T11:04:48.000Z | # lib.gfx.bitmap
----------------
API for generic bitmap graphics library
## Methods
| Method | Description |
| ----------------------------------------------|-------------------------------------------------------------------------------------------------------------- |
|`BGColor (col)` | Set background color for subsequent drawing |
|`Bitmap (bitmap_addr, size, offset)` | Copy a bitmap to the display buffer |
|`Box (x0, y0, x1, y1, color, filled)` | Draw a box |
|`Char (ch)` | Write a character to the display |
|`Circle (x0, y0, radius, color)` | Draw a circle |
|`Clear` | Clear the display buffer |
|`ClearAll` | Clear the display buffer, and execute the display's native/acclerated clear function |
|`Copy (sx, sy, ex, ey, dx, dy)` | Copy rectangular region at (sx, sy, ex, ey) to (dx, dy) |
|`Cut (sx, sy, ex, ey, dx, dy)` | Copy rectangular region at (sx, sy, ex, ey) to (dx, dy) and clear the source region to the background color |
|`FGColor (col)` | Set foreground color of subsequent drawing operations |
|`FontAddress (addr)` | Set address of font definition |
|`FontHeight` | Return the set font height |
|`FontSize (width, height)` | Set expected dimension of font, in pixels |
|`FontWidth` | Return the set font width |
|`Line(x1, y1, x2, y2, c)` | Draw line from x1, y1 to x2, y2, in color c |
|`Plot (x, y, color)` | Plot pixel at x, y, color c |
|`Point (x, y)` | Get color of pixel at x, y |
|`Position (col, row)` | Set text draw position, in character-cell col and row |
|`RGBW8888_RGB32 (r, g, b, w)` | Return 32-bit long from discrete Red, Green, Blue, White color components |
|`RGBW8888_RGB32_Brightness (r, g, b, w, level)`| As above, but clamp R, G, B, W brightness to 'level' |
|`RGB565_R5 (rgb565)` | Return 5-bit red component of 16-bit RGB color |
|`RGB565_G6 (rgb565)` | Return 6-bit green component of 16-bit RGB color |
|`RGB565_B5 (rgb565)` | Return 5-bit blue component of 16-bit RGB color |
|`Scale (sx, sy, ex, ey, offsx, offsy, size)` | Scale a region of the display up by size |
|`Str (string_addr)` | Write string at string_addr to the display @ row and column |
|`TextCols` | Returns number of displayable text columns, based on set display width and set font width |
|`TextRows` | Returns number of displayable text rows, based on set display height and set font height |
| 129.351351 | 161 | 0.309444 | eng_Latn | 0.905969 |
e205928c59604d1c82089eae165bcaae99f5cf7a | 8,505 | md | Markdown | README.md | RurioLuca/HeaderView | f73e0caac80f285c3acc6bd3da777ec5b4249a65 | [
"MIT"
] | null | null | null | README.md | RurioLuca/HeaderView | f73e0caac80f285c3acc6bd3da777ec5b4249a65 | [
"MIT"
] | null | null | null | README.md | RurioLuca/HeaderView | f73e0caac80f285c3acc6bd3da777ec5b4249a65 | [
"MIT"
] | null | null | null | # Header View
[](https://gitter.im/rebus007/HeaderView?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[  ](https://bintray.com/raphaelbussa/maven/header-view/_latestVersion) [](http://android-arsenal.com/details/1/2123)

This is a view for NavigationView in android.support.design library
### Import
At the moment the library is in my personal maven repo
```Gradle
repositories {
maven {
url 'http://dl.bintray.com/raphaelbussa/maven'
}
}
```
```Gradle
dependencies {
implementation 'rebus:header-view:3.0.2'
}
```
### How to use
#### Via XML
Create a layout named like this header_drawer.xml
```XML
<rebus.header.view.HeaderView xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:id="@+id/header_view"
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:hv_add_icon="@drawable/ic_action_settings"
app:hv_add_status_bar_height="true"
app:hv_background_color="@color/colorPrimaryDark"
app:hv_dialog_title="@string/account"
app:hv_highlight_color="@color/colorAccent"
app:hv_profile_avatar="@drawable/ic_placeholder"
app:hv_profile_background="@drawable/ic_placeholder_bg"
app:hv_profile_email="batman@gotham.city"
app:hv_profile_username="Bruce Wayne"
app:hv_show_add_button="true"
app:hv_show_arrow="true"
app:hv_show_gradient="true"
app:hv_style="normal"
app:hv_theme="light" />
```
And in your NavigationView
```XML
<android.support.design.widget.NavigationView
android:id="@+id/nav_view"
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:layout_gravity="start"
app:headerLayout="@layout/header_drawer"
app:menu="@menu/drawer" />
```
#### Manage Profiles
The new HeaderView manage different profile and a new profile chooser inspired from YouTube android app
- Create a profile
```Java
Profile profile = new Profile.Builder()
.setId(2)
.setUsername("Raphaël Bussa")
.setEmail("raphaelbussa@gmail.com")
.setAvatar("https://github.com/rebus007.png?size=512")
.setBackground("https://images.unsplash.com/photo-1473220464492-452fb02e6221?dpr=2&auto=format&fit=crop&w=767&h=512&q=80&cs=tinysrgb&crop=")
.build();
```
- Add a profile
```Java
headerView.addProfile(profile);
```
- Set a profile active
```Java
headerView.setProfileActive(2);
```
- Remove a profile
```Java
headerView.removeProfile(2);
```
- Get actual active profile
```Java
int activeProfileId = headerView.getProfileActive();
```
#### Customize Profile Chooser
You can also customize the profile chooser
- Add bottom items
```Java
Item item = new Item.Builder()
.setId(1)
.setTitle("Remove all profile")
.build();
headerView.addDialogItem(item);
```
- HighlightColor in dialog
```
headerView.setHighlightColor(ContextCompat.getColor(this, R.color.colorAccent));
app:hv_highlight_color="@color/colorAccent"
```
- Change dialog title
```
headerView.setDialogTitle("Choose account");
app:hv_dialog_title="Dialog title"
```
- Change dialog top icon
```
headerView.setAddIconDrawable(R.drawable.ic_action_settings);
app:hv_add_icon="@drawable/ic_action_settings"
```
- Or hide dialog top icon
```
headerView.setShowAddButton(true);
app:hv_show_add_button="true"
```
- Dismiss profile chooser dialog
```Java
headerView.dismissProfileChooser();
```
#### Callback
```Java
headerView.setCallback(new HeaderCallback() {
@Override
public boolean onSelect(int id, boolean isActive) {
//return profile id selected and if is the active profile
return true; //true for close the dialog, false for do nothing
}
@Override
public boolean onItem(int id) {
//return witch buttom item is selected
return true; //true for close the dialog, false for do nothing
}
@Override
public boolean onAdd() {
return true; //true for close the dialog, false for do nothing
}
});
```
#### Loading image from network
Just add this in your class Application (of course you can use your preferred libs for load images)
```Java
ImageLoader.init(new ImageLoader.ImageLoaderInterface() {
@Override
public void loadImage(Uri url, ImageView imageView, @ImageLoader.Type int type) {
switch (type) {
case ImageLoader.AVATAR:
Glide.with(imageView.getContext())
.load(url)
.asBitmap()
.placeholder(R.drawable.ic_placeholder)
.error(R.drawable.ic_placeholder)
.into(imageView);
break;
case ImageLoader.HEADER:
Glide.with(imageView.getContext())
.load(url)
.asBitmap()
.placeholder(R.drawable.ic_placeholder_bg)
.error(R.drawable.ic_placeholder_bg)
.into(imageView);
break;
}
}
});
```
#### Use custom font with Calligraphy
You can set a custom font with [Calligraphy](https://github.com/chrisjenx/Calligraphy) just add a CustomViewTypeface with HeaderView.class in CalligraphyConfig
```Java
CalligraphyConfig.initDefault(new CalligraphyConfig.Builder()
.setDefaultFontPath("Oswald-Stencbab.ttf")
.setFontAttrId(R.attr.fontPath)
.addCustomViewWithSetTypeface(HeaderView.class)
.build()
);
```
### Screen

### Sample
Browse the sample code [here](https://github.com/rebus007/HeaderView/tree/master/sample) or download sample app from the [Play Store](https://play.google.com/store/apps/details?id=rebus.header.view.sample)
### Javadoc
Browse Javadoc [here](https://rebus007.github.io/HeaderView/javadoc/)
### App using Header View
If you use this lib [contact me](mailto:raphaelbussa@gmail.com) and I will add it to the list below:
- [Mister Gadget](https://play.google.com/store/apps/details?id=rebus.mister.gadget)
- [Git Chat](https://github.com/rebus007/Git-Chat)
- [The Coding Love](https://play.google.com/store/apps/details?id=rebus.thecodinglove)
- [Romanews.eu](https://play.google.com/store/apps/details?id=it.daigan.romanews)
- [Mob@rt](https://play.google.com/store/apps/details?id=it.artigiancassa.mobile.android.mobart)
### Developed By
Raphaël Bussa - [raphaelbussa@gmail.com](mailto:raphaelbussa@gmail.com)
[  ](https://twitter.com/rebus_007)[  ](https://plus.google.com/+RaphaelBussa/posts)[  ](https://www.linkedin.com/in/rebus007)
### License
```
The MIT License (MIT)
Copyright (c) 2017 Raphaël Bussa <raphaelbussa@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
| 37.139738 | 439 | 0.715814 | kor_Hang | 0.315648 |
e205cb63bb16f3bf259293ebf9820b9bc7c0c39c | 1,841 | md | Markdown | README.md | Kattair/argon2-rs | 6b214aa71b96a48c6b61d0919495a85cfae3d242 | [
"CC0-1.0"
] | null | null | null | README.md | Kattair/argon2-rs | 6b214aa71b96a48c6b61d0919495a85cfae3d242 | [
"CC0-1.0"
] | null | null | null | README.md | Kattair/argon2-rs | 6b214aa71b96a48c6b61d0919495a85cfae3d242 | [
"CC0-1.0"
] | null | null | null | # argon2-rs
Simple argon2 hashing program using crate rust-argon2.
## Links
Usability mimicks usability of argon2 utility on Debian: https://manpages.debian.org/buster/argon2/argon2.1.en.html
Default values are defined by the rust-argon2 crate here: https://docs.sru-systems.com/rust-argon2/0.8.0/argon2/struct.Config.html
## Usage
The program reads its input from standard input.
Example: echo "my input" -n | argon2-rs "randomsalt"
**! Windows adds CR LF to the end of the echoed string !**
## Synopsis
argon2-rs salt [OPTIONS]
| Option | Parameters | Description |
| ------ | ---------- | ------------------------------------------------------------------ |
| -h | | Display tool usage |
| -i | | Use Argon2i (this is the default) |
| -d | | Use Argon2d instead of Argon2i |
| -id | | Use Argon2id instead of Argon2i |
| -t | u_int_32 | Sets the number of iterations to N (default = 3) |
| -m | u_int_32 | Sets the memory usage of 2^N KB (default = 12) |
| -k | u_int_32 | Sets the memory usage of N KB (default = 4096) |
| -p | u_int_32 | Sets parallelism to N threads (default = 1) |
| -l | u_int_32 | Sets hash output length to N bytes (default = 32) |
| -e | | Output only encoded hash |
| -r | | Output only the raw bytes of the hash |
| -v | 10 or 13 | Argon2 version (defaults to the most recent version, currently 13) |
| 59.387097 | 130 | 0.470397 | eng_Latn | 0.81114 |
e205fe48234884f1a0af2cb07411bf4d3ce49b99 | 1,830 | md | Markdown | includes/active-directory-ds-prerequisites.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-08-10T02:23:39.000Z | 2019-08-10T02:23:40.000Z | includes/active-directory-ds-prerequisites.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/active-directory-ds-prerequisites.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: include dosyası
description: Ön koşullar birlikte dosya ekleme
services: active-directory-ds
documentationcenter: ''
author: mahesh-unnikrishnan
manager: mtillman
editor: curtand
ms.assetid: 66b5f8e2-e08d-43c8-b460-6204aecda8f7
ms.service: active-directory
ms.subservice: domain-services
ms.workload: identity
ms.custom: include file
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 06/22/2018
ms.author: maheshu
ms.openlocfilehash: 1fba8cc9ae40cf5539016bbd73de65f557a64136
ms.sourcegitcommit: 3e98da33c41a7bbd724f644ce7dedee169eb5028
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 06/18/2019
ms.locfileid: "67188953"
---
> [!IMPORTANT]
> **Bu makaledeki görevleri tamamlamadan önce Azure AD Domain Services parola karma eşitlemesini etkinleştirin.**
>
> Azure AD dizininizdeki kullanıcıların türüne bağlı olarak aşağıdaki yönergeleri izleyin. Azure AD dizininizde yalnızca Bulut ve eşitlenmiş kullanıcı hesaplarının bir karışımı varsa iki yönerge kümesini de tamamlayın. B2B Konuk hesabı (örneğin, gmail veya izin veriyoruz farklı bir kimlik sağlayıcısından gelen MSA) kullanmaya çalıştığınız durumunda aşağıdaki işlemleri gerçekleştirmek mümkün olmayabilir Bu yönetilen etki eşitlenen bu kullanıcılar için parola olmadığı için dizinde Konuk hesaplarıdır. Dışında Azure AD parolalarını dahil olmak üzere bu hesaplar hakkında tam bilgi olurdu ve bu bilgileri Azure AD içinde olmadığından bu nedenle, bile yönetilen etki alanına eşitlenmedi.
> - [Yalnızca bulut kullanıcı hesapları için yönergeler](../articles/active-directory-domain-services/active-directory-ds-getting-started-password-sync.md)
> - [Bir şirket içi dizininizden eşitlenmiş kullanıcı hesapları için yönergeler](../articles/active-directory-domain-services/active-directory-ds-getting-started-password-sync-synced-tenant.md)
| 57.1875 | 688 | 0.831694 | tur_Latn | 0.99833 |
e207169f8872d11ef80c62c72d443cb28be1d222 | 1,938 | md | Markdown | README.md | James6565/QMES_Code | 186769449773ab9a6469ca1c72f1d6ab8f5f2178 | [
"X11"
] | null | null | null | README.md | James6565/QMES_Code | 186769449773ab9a6469ca1c72f1d6ab8f5f2178 | [
"X11"
] | null | null | null | README.md | James6565/QMES_Code | 186769449773ab9a6469ca1c72f1d6ab8f5f2178 | [
"X11"
] | null | null | null | # QMES_Code
#Analysing River Data
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# Set font size of figures
plt.rcParams['font.size'] = 9
plt.tick_params(pad=10)
# Load textfiles from their folder
RiverData1 = np.loadtxt('Gray1961.txt', skiprows = 2)
RiverData2 = np.loadtxt('Hack1957.txt', skiprows = 2)
RiverData3 = np.loadtxt('Rignon1996.txt', skiprows = 2)
RiverData4 = np.loadtxt('Robert1990.txt', skiprows = 2)
RiverData5 = np.loadtxt('Langbein1947_p145.txt', skiprows = 2)
RiverData6 = np.loadtxt('Langbein1947_p146.txt', skiprows = 2)
RiverData7 = np.loadtxt('Langbein1947_p149.txt', skiprows = 2)
RiverData8 = np.loadtxt('Langbein1947_p152.txt', skiprows = 2)
# Assign RiverData's to a list
# Total the num of variables in the list
# create a loop assigning i to each River_Data Variable
plotnum = 1
River_list = [RiverData1,RiverData2,RiverData3,RiverData4,RiverData5,RiverData6,RiverData7,RiverData8]
River_list_len=len(River_list)
for i in range(0,River_list_len):
plt.subplot(2,4,plotnum)
plt.plot((River_list[i][:,0]),(River_list[i][:,1]),'ko')
m,c,_,_,_ = stats.linregress(np.log(River_list[i][:,0]),np.log(River_list[i][:,1]))
m1 = round(m,2)
c1 = round(c,2)
# ranges our x values together #
xfit = np.arange(0.1,400,0.1)
# equation of a straight line #
yfit = np.exp(c)*xfit**m
plt.plot(xfit, yfit, label = 'Best Fit')
#plt.legend(x, y, eqn)
eqn = 'y = ' + (str(xfit**m)) + 'x + ' + (str(np.exp(c)))
plt.xlabel('Area (Km$^2$)')
plt.ylabel('River Length (km)')
plt.xscale('log')
plt.yscale('log')
plt.legend('Data', 'Line of Best Fit', loc ='upper left')
plt.text(10,1,eqn)
plotnum= plotnum++1
plt.show()
| 35.888889 | 106 | 0.609391 | kor_Hang | 0.263385 |
e2072483c393cd3cc9c370053e3df213ba11d66b | 437 | md | Markdown | _team/gclo.md | davidslycooper/techandpeople.github.io | f499cae82d16f93618642d63ced67993cb5a390b | [
"MIT"
] | 1 | 2019-08-27T13:29:49.000Z | 2019-08-27T13:29:49.000Z | _team/gclo.md | davidslycooper/techandpeople.github.io | f499cae82d16f93618642d63ced67993cb5a390b | [
"MIT"
] | null | null | null | _team/gclo.md | davidslycooper/techandpeople.github.io | f499cae82d16f93618642d63ced67993cb5a390b | [
"MIT"
] | 4 | 2020-04-24T11:36:23.000Z | 2021-10-12T14:40:00.000Z | ---
###############
# DO NOT EDIT
layout: bio
###############
###############
# TO EDIT
code: gclo
title: "Gonçalo Lobo"
photo: gclo.jpg
info: MSc Student
links:
- name: email
url: g.ferreiralobo@gmail.com
---
I completed my BSc in Informatics Engineering at the Faculty of Sciences of the University of Lisbon. Now attending the MSc at the same institution, I'm investigating the use of games to practice screen reader gestures.
| 24.277778 | 219 | 0.668192 | eng_Latn | 0.910333 |
e208924f5cc8173c4d28a909f4bd2074a29ae0b8 | 6,235 | markdown | Markdown | _posts/2021-03-03-der-einbruch-kap-1.markdown | book-tube/book-tube.github.io | d9a70dc07cf0d1f80787a6e0e065678b5c033c4e | [
"MIT"
] | 2 | 2021-03-07T10:55:25.000Z | 2021-03-07T10:55:28.000Z | _posts/2021-03-03-der-einbruch-kap-1.markdown | book-tube/book-tube.github.io | d9a70dc07cf0d1f80787a6e0e065678b5c033c4e | [
"MIT"
] | null | null | null | _posts/2021-03-03-der-einbruch-kap-1.markdown | book-tube/book-tube.github.io | d9a70dc07cf0d1f80787a6e0e065678b5c033c4e | [
"MIT"
] | null | null | null | ---
layout: post
title: "Projektierung"
date: 2021-03-03 18:57:36 +0100
book: "Der Einbruch"
categories: der-einbruch
---
# Projektierung
Was wird das eigentlich? "Hey Boss, weshalb gehen wir nicht einfach mit Knarren in dieses verschissene Haus und ballern diese Arschlöcher ab?", meinte Rocket. "Deshalb, weil du nichts in der Birne hast und die Polizei dir in dein scheiss Hinterteil schiessen wird." Ich weiss ich drücke mich wie ein Rüpel aus, doch die Jungs brauchen das, sie brauchen einen Anführer. Ich schlage mit der geballten Faust auf den Tisch, PJJ erwacht verwirrt, "Was'n los?" "Hallt deine Fresse Mann!", schnauzt Rocket. Das passiert wenn man mit so Jungs über einen Überfall spricht. Der eine Flucht nur, der andere Schläft, nur der eine hört zu.
"Also nochmal! PJJ du wartest draussen im Wagen beim Kanalausgang, Rocket du kommst mit mir und Michael in die Bank, aber ihr bleibt dicht bei mir und zieht keine Minne." Ich zeige auf das Blatt, alle drei nicken, doch Rocket unterbricht: "Warum kommt meine Vorschlag nicht in Frage?" Ich runzle meine Stirn, schaue ihn mehrere Sekunden an und fahre dann fort. " Ich werde reden und ihr bleibt stumm und unbemerkt da…"
Zwei Stunden erkläre ich schon und bin jetzt sicher das sie es verstanden haben. Hoffentlich. "Ein Bier?" frage ich die Runde. "Gibt es auch Tee?" *es klatscht* "Autschi!" "Alter was bist du für ein Lappen?" "Was soll das den jetzt?" "Dann nehme ich halt ein Bier." antwortet PJJ eingeschüchtert. Ich gehe zum Kühlschrank hohle vier Dosen und stelle sie auf den Tisch mein Bein schmerzt jetzt schon bei dieser kurzen Strecke. "Ahh das tat gut" sagte Rocket auf die Dose schauend. PJJ schaut traurig auf sein Bier und schweigt. Ich stehe auf gehe zur Küche und setze die Kanne auf. "Was wird das?" Brummt Rocket."Du hältst dein Mund jetzt für fünf Minuten"."Was kann ich machen?" Fragte PJJ begeistert. "Dein Mund für zwei Minuten halten", antworte ich. Zwei Minuten vergingen schön ruhig. Ich stelle den Tee vor PJJ hin. "Was soll das jetzt?!" gab Rocket von sich in einer wütenden Art und Weise. "Tee." Er schaut mir tief in die Augen, mehrere Sekunden. "Bis morgen." sagt er während er das Bier PJJ entgegenwirft und ihn knapp verfehlt. Die Dose knallt gegen die Wand und trifft Michael am Hinterkopf. Michael fängt die Dose bevor sie auf den Boden fällt und zerdrückt sie ohne sein Gesicht zu verziehen. Ich sage "Vollidiot!" Rocket erwidert: " Bis dann." Er verlässt den Raum mit einem erhobenen Mittelfinger.
PJJ gähnt und schläft wieder mit einer schiefen Nackenposition ein. Michael steht auf, nimmt seine Jacke und verlässt den Raum mit einer knallenden Tür. PJJ erwacht und gibt ein "Was'n los?" von sich. "Du bist aufgewacht." erwidere ich "Okay" sagt PJJ und schlurft müde in sein Zimmer. Ich gehe in die Küche. "Einen Big Tasty mit extra viel Zwiebeln." "Wird sofort zubereitet Sir." antworte ich mir selbst, greife nach einem Toast schmiere Nuss-Nougat-Creme drauf, hacke eine Zwiebel in Scheiben und lege sie auf das Toastbrot. Ich klatsche den *Big Tasty* auf einen Teller und überreiche das Tost mir selbst. Ich bedanke mich bei mir selbst mit einem Nicken. Laufe mit meinem Stock möglichst schnell zum Tisch und will mein köstliches Mahl geniessen.
"Was'n los sprichst du mit dir selbst?" fragt PJJ verwundert während er um die Ecke schaut. "Geh' schlafen!" "Okay" Ich gehe schlafen nachdem ich die neuen Teile von Mr. Bean fertig geschaut habe.
Ich höre ein klatschen zusammen mit einem "Autschi" Das heisst Rocket ist wieder da. " Zwei und zwei gibt nicht zweiundzwanzig sondern vier, du hohle verfaulte Birne!" Ich stehe auf ziehe meine üblichen Kleider an und steige die Treppe hinunter. " Bonnjorno altero Manos" Sagt Rocket "Non parli italiano, stupida pera" entgegne ich. "Chrazie!" "Prego!" Während Rocket PJJ immer noch Rechenaufgaben stellt, um sich intelligenter zu fühlen, bereite ich mir einen Kaffee zu. Ich schaue auf die Uhr und bemerke "Es ist sechs Uhr Morgens ihr Evolutionsbremsen! Weshalb seid ihr schon hier?" "Schau richtig auf die Uhr du Intelligenzbestie. Es ist schon halb eins.", schnauzt Rocket zurück.
Jemand kommt rein, weshalb quietscht die Tür? "Hey Micheal!" sagt PJJ und lächelt ihn an. Michael sitzt ohne Reaktion hin. "Kaffee?" Frage ich müde. Bevor Micheal überhaupt nickt, sagt Rocket: "Mit viel Zucker gerne." Ich koche Kaffee und sitze zu den Jungs an den Tisch. "Also heute müssen wir uns vorbereiten. Wir gehen unsere Anzüge kaufen, einen Presslufthammer und einen stabilen Geländewagen." bemerke ich zum heutigen Ziel. "Michael du organisierst den das übertrieben laute Teil, Rocket du das Auto und PJJ und ich die Anzüge." Michael fährt mit seinem Käfer, Rocket mit einem geklauten BMX aus der Nachbarschaft und PJJ und ich nehmen den Bus. PJJ rennt um sein Leben um einen Platz mit mir am Fenster zu ergattern. Der Bus fährt nach einem lauten Brummen los. "Ahh endlich bei unserer Haltestelle" jauchze ich obwohl es nur zwei Minuten vergangen sind. Wir betreten das Kleidergeschäft. Eine junge Verkäuferin und sagt: "Hallo und herzlich willkommen im I&O, kann ich ihnen behilflich sein?" "Ja ich brauche Anzüge in den folgenden Grössen…" Ich reiche ihr einen Zettel. "Oh wow, eine Hochzeit?" "Ne Beerdigung…" antworte ich möglichst traurig. "Was jemand ist gestorben?" "Nein, nein." Beruhige ich ihn, beuge mich zur Verkäuferin und flüstere: "Ich habe es ihm noch nicht mitgeteilt." Sie nickt verständnisvoll. "Folgen sie mir." Sie watschelt durch den Gang schaut auf den Zettel und nimmt drei Anzügen von der Stange im hinteren Teil der Abteilung. Sie blickt noch einmal auf den Zettel und bemerkt: "Leider haben wir die Grösse nicht, soll ich die eine Nummer grösser nehmen?" Ich nicke. Die Anzüge bezahlt, sitzen wir wieder im Bus.
Ich donnere die Tür auf und merke, dass die Jungs pennen. Um 23 Uhr nachts essen wir unser letztes Essen in diesem Jahr. Wir sind gut gelaunt, trinken Bier, PJJ sein Tee und essen Pommes mit Burger.
23:59 Endlich ist es so weit. Ich gehe zum Fenster und merke, dass wenige Leute früher angefangen haben zu feiern. Ich sehe das Feuerwerk. Wunderschön. Ich blicke mehrere Minuten die Feuerwerke an und höre Kinder jubeln. Alles ist perfekt.
| 230.925926 | 1,648 | 0.785245 | deu_Latn | 0.999528 |
e208de978ee9f29ed9a6b9b88f9dfb12e5d3185a | 71 | md | Markdown | README.md | Project-Initium/blazorish-search | cbe92ca8620d9c74c80d38dc44487e0b4cd2d604 | [
"MIT"
] | null | null | null | README.md | Project-Initium/blazorish-search | cbe92ca8620d9c74c80d38dc44487e0b4cd2d604 | [
"MIT"
] | null | null | null | README.md | Project-Initium/blazorish-search | cbe92ca8620d9c74c80d38dc44487e0b4cd2d604 | [
"MIT"
] | null | null | null | # blazorish-search
Explorations for searching with EF, Blazor and gRPC
| 23.666667 | 51 | 0.816901 | eng_Latn | 0.964567 |
e20a6c228907409c344c23cdb993e1ac1a28e16e | 124 | md | Markdown | release/nrc/nrc.en.mtnt/HISTORY.md | darcywong00/lexical-models | 5b2897f58ff12cc33deb39264400f25c7bb1700a | [
"MIT"
] | null | null | null | release/nrc/nrc.en.mtnt/HISTORY.md | darcywong00/lexical-models | 5b2897f58ff12cc33deb39264400f25c7bb1700a | [
"MIT"
] | null | null | null | release/nrc/nrc.en.mtnt/HISTORY.md | darcywong00/lexical-models | 5b2897f58ff12cc33deb39264400f25c7bb1700a | [
"MIT"
] | null | null | null | # Version History
## 1.0.3
* Removed two extraneous SMP characters from dataset to work around an issue in Keyman (#2374)
| 20.666667 | 94 | 0.741935 | eng_Latn | 0.996415 |
e20bddc40a6d01c42349533644bc0cd521fe6d62 | 57 | md | Markdown | example/README.md | mono424/flutter-arasan | 9dc537c6bf84eb52998497fcd0cc754b6a5e22f9 | [
"MIT"
] | 1 | 2021-08-31T06:03:40.000Z | 2021-08-31T06:03:40.000Z | example/README.md | mono424/flutter-arasan | 9dc537c6bf84eb52998497fcd0cc754b6a5e22f9 | [
"MIT"
] | null | null | null | example/README.md | mono424/flutter-arasan | 9dc537c6bf84eb52998497fcd0cc754b6a5e22f9 | [
"MIT"
] | 1 | 2021-08-31T06:03:41.000Z | 2021-08-31T06:03:41.000Z | # arasan_example
Example usage of Arasan Chess Engine.
| 11.4 | 37 | 0.789474 | eng_Latn | 0.99139 |
e20c7699a3cf14e2a72dc6be4e4e40c66b96390e | 1,058 | md | Markdown | _listings/swagger/genclients-get-openapi.md | streamdata-gallery-organizations/swagger | 4755c28caa8d0b98fcf1ffc46463bad036c0d827 | [
"CC-BY-3.0"
] | null | null | null | _listings/swagger/genclients-get-openapi.md | streamdata-gallery-organizations/swagger | 4755c28caa8d0b98fcf1ffc46463bad036c0d827 | [
"CC-BY-3.0"
] | null | null | null | _listings/swagger/genclients-get-openapi.md | streamdata-gallery-organizations/swagger | 4755c28caa8d0b98fcf1ffc46463bad036c0d827 | [
"CC-BY-3.0"
] | null | null | null | ---
swagger: "2.0"
x-collection-name: Swagger
x-complete: 0
info:
title: Swagger Generator Gets languages supported by the client generator
description: Gets languages supported by the client generator.
termsOfService: http://swagger.io/terms/
contact:
name: apiteam@swagger.io
version: 2.2.3
host: generator.swagger.io
basePath: /api
schemes:
- http
produces:
- application/json
consumes:
- application/json
paths:
/gen/clients:
get:
summary: Gets languages supported by the client generator
description: Gets languages supported by the client generator.
operationId: clientOptions
x-api-path-slug: genclients-get
responses:
200:
description: OK
tags:
- Generate
- Clients
x-streamrank:
polling_total_time_average: 0
polling_size_download_average: 0
streaming_total_time_average: 0
streaming_size_download_average: 0
change_yes: 0
change_no: 0
time_percentage: 0
size_percentage: 0
change_percentage: 0
last_run: ""
days_run: 0
minute_run: 0
--- | 23 | 75 | 0.719282 | eng_Latn | 0.689573 |
e20cea7ae10c7d2d99f2570b1aa1bb243260bfde | 1,476 | md | Markdown | CHANGELOG/CHANGELOG-0.1.md | YHDING23/alnair | 97c713d7608863e6e5411226d651dd07175ce88f | [
"Apache-2.0"
] | 10 | 2021-06-03T21:36:29.000Z | 2022-03-22T05:43:17.000Z | CHANGELOG/CHANGELOG-0.1.md | YHDING23/alnair | 97c713d7608863e6e5411226d651dd07175ce88f | [
"Apache-2.0"
] | 71 | 2021-06-04T17:43:50.000Z | 2022-03-30T22:42:36.000Z | CHANGELOG/CHANGELOG-0.1.md | YHDING23/alnair | 97c713d7608863e6e5411226d651dd07175ce88f | [
"Apache-2.0"
] | 6 | 2021-06-12T05:23:29.000Z | 2022-02-21T17:26:50.000Z | # Release Summary
This is Release v0.1. It includes the following new components:
- Profiler
- Elastic Training Framework
# Key Features and Improvements:
- Profiler
- GPU metrics collection
- Take advantage of Nvidia monitoring toolkit DCGM-exporter, device-level metrics, e.g. GPU and Memory utilization are collected every second.
- The GPU metrics exported from DCGM is scraped by Prometheus. Prometheus auto discovers the pods with scraping annoations.
- Deep learning training job (DLT) identification
- Considering the cyclic pattern of memory utilization in DLT jobs, an autocorrelation based cyclic pattern detection algorithm is implemented to detect DLT job, once DLT job is detected, the max memory utilization is predicted based on the past usage.
- Analytical results including job type and predicted memotry utilization are continuously patched to every GPU node as annotations.
- Elastic Training Framework
- Kubernetes operator for horovod jobs
- End user can create their horovod training jobs in our framework using the CRDs.
- Elastic training: jobs can scale up and down the number of GPU workers dynamically at runtime, without restart.
- Fault tolerance: jobs can keep on training when some of the GPU workers fail, without restart.
- GPU allocator
- Dynamically allocates the pool of GPUs within a cluster to the elastic training jobs, optimizing for maximum GPU utilization and minimum job completion time.
| 61.5 | 256 | 0.787263 | eng_Latn | 0.997487 |
e20d66f163c0fa032779ab3d373b7d83fc5fc98c | 5,379 | md | Markdown | articles/web-application-firewall/ag/application-gateway-customize-waf-rules-powershell.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/web-application-firewall/ag/application-gateway-customize-waf-rules-powershell.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/web-application-firewall/ag/application-gateway-customize-waf-rules-powershell.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Personnaliser les règles à l’aide de PowerShell
titleSuffix: Azure Web Application Firewall
description: Cet article fournit des informations sur la personnalisation des règles du pare-feu d’applications web dans Application Gateway avec PowerShell.
services: web-application-firewall
author: vhorne
ms.service: web-application-firewall
ms.date: 11/14/2019
ms.author: victorh
ms.topic: article
ms.openlocfilehash: 55eea15da8c3a10b0421ff1576082d6b42fc7c56
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 03/29/2021
ms.locfileid: "74048503"
---
# <a name="customize-web-application-firewall-rules-using-powershell"></a>Personnaliser les règles du pare-feu d’applications web à l’aide de PowerShell
Le pare-feu d’applications web (WAF) Azure Application Gateway fournit une protection pour les applications web. Ces protections sont fournies par le jeu de règles (Core Rule Set, CRS) de l’Open Web Application Security Project (OWASP). Certaines règles peuvent entraîner des faux positifs et bloquer le trafic réel. Par conséquent, Application Gateway permet de personnaliser des règles et des groupes de règles. Pour plus d’informations sur les règles et groupes de règles spécifiques, consultez la [liste des règles et groupes de règles CRS de pare-feu d’applications web](application-gateway-crs-rulegroups-rules.md).
## <a name="view-rule-groups-and-rules"></a>Afficher les règles et groupes de règles
Voici des exemples de code montrant comment afficher les règles et les groupes de règles qui peuvent être configurés sur une passerelle d’application avec WAF activé.
### <a name="view-rule-groups"></a>Afficher les groupes de règles
L’exemple suivant montre comment afficher les groupes de règles :
```powershell
Get-AzApplicationGatewayAvailableWafRuleSets
```
Voici un extrait de réponse issu de l’exemple précédent :
```
OWASP (Ver. 3.0):
General:
Description:
Rules:
RuleId Description
------ -----------
200004 Possible Multipart Unmatched Boundary.
REQUEST-911-METHOD-ENFORCEMENT:
Description:
Rules:
RuleId Description
------ -----------
911011 Rule 911011
911012 Rule 911012
911100 Method is not allowed by policy
911013 Rule 911013
911014 Rule 911014
911015 Rule 911015
911016 Rule 911016
911017 Rule 911017
911018 Rule 911018
REQUEST-913-SCANNER-DETECTION:
Description:
Rules:
RuleId Description
------ -----------
913011 Rule 913011
913012 Rule 913012
913100 Found User-Agent associated with security scanner
913110 Found request header associated with security scanner
913120 Found request filename/argument associated with security scanner
913013 Rule 913013
913014 Rule 913014
913101 Found User-Agent associated with scripting/generic HTTP client
913102 Found User-Agent associated with web crawler/bot
913015 Rule 913015
913016 Rule 913016
913017 Rule 913017
913018 Rule 913018
... ...
```
## <a name="disable-rules"></a>Désactiver les règles
L’exemple suivant montre comment désactiver les règles `911011` et `911012` sur une passerelle d’application :
```powershell
$disabledrules=New-AzApplicationGatewayFirewallDisabledRuleGroupConfig -RuleGroupName REQUEST-911-METHOD-ENFORCEMENT -Rules 911011,911012
Set-AzApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $gw -Enabled $true -FirewallMode Detection -RuleSetVersion 3.0 -RuleSetType OWASP -DisabledRuleGroups $disabledrules
Set-AzApplicationGateway -ApplicationGateway $gw
```
## <a name="mandatory-rules"></a>Règles obligatoires
La liste suivante contient les conditions qui amènent la solution WAF à bloquer la requête en mode de prévention (en mode de détection, les requêtes sont journalisées en tant qu’exceptions). Elles ne peuvent pas être configurées ni désactivées :
* L’échec d’analyse du corps de la requête entraîne le blocage de cette dernière, sauf si l’inspection du corps est désactivée (XML, JSON, données de formulaire)
* La longueur des données du corps de la requête (sans fichiers) est supérieure à la limite configurée
* Le corps de la requête (avec fichiers) est supérieur à la limite
* Une erreur interne s’est produite dans le moteur WAF
Propre à CRS 3.x :
* Le score des anomalies entrantes a dépassé le seuil
## <a name="next-steps"></a>Étapes suivantes
Après avoir configuré vos règles désactivées, vous pouvez apprendre à afficher vos journaux d’activité WAF. Pour plus d’informations, consultez [Diagnostics Application Gateway](../../application-gateway/application-gateway-diagnostics.md#diagnostic-logging).
[fig1]: ./media/application-gateway-customize-waf-rules-portal/1.png
[1]: ./media/application-gateway-customize-waf-rules-portal/figure1.png
[2]: ./media/application-gateway-customize-waf-rules-portal/figure2.png
[3]: ./media/application-gateway-customize-waf-rules-portal/figure3.png
| 45.974359 | 621 | 0.718721 | fra_Latn | 0.725286 |
e20db1753fa3d200c405183c25192570c032db87 | 3,784 | md | Markdown | linkerd.io/content/blog/linkerd-and-open-governance.md | brendonthiede/website | 6d8f6b32072fe96e88c9aaa8e42f65c3a29d37cd | [
"Apache-2.0"
] | 33 | 2018-09-02T18:44:02.000Z | 2022-03-16T06:20:21.000Z | linkerd.io/content/blog/linkerd-and-open-governance.md | brendonthiede/website | 6d8f6b32072fe96e88c9aaa8e42f65c3a29d37cd | [
"Apache-2.0"
] | 602 | 2018-08-23T12:40:28.000Z | 2022-03-29T16:13:46.000Z | linkerd.io/content/blog/linkerd-and-open-governance.md | brendonthiede/website | 6d8f6b32072fe96e88c9aaa8e42f65c3a29d37cd | [
"Apache-2.0"
] | 222 | 2018-08-27T21:40:02.000Z | 2022-03-29T08:44:23.000Z | ---
title: "Linkerd's Commitment to Open Governance"
author: 'william'
date: 2019-10-03T00:00:00+00:00
draft: false
featured: false
thumbnail: /uploads/1356360647_27ab460006_c.jpg
tags: [Linkerd]
---

Given the recent declaration by Google that it will [not donate
KNative](https://twitter.com/brendandburns/status/1179176440647913472) [or
Istio](https://twitter.com/jbeda/status/1179176740687495168) to a neutral
foundation, it seemed like an appropriate time to describe Linkerd's approach
to the subject of open governance.
Our approach is this:
_The Linkerd maintainers are 100% committed to open governance and to being
hosted by a neutral foundation. We believe that a diverse and active set of
maintainers is fundamental to the long-term health of an open source project.
And we want YOU to join us._
If you've been following Linkerd for a while, this should come as no surprise.
This is all stuff we've said before. But in this post, I wanted to add a little
more personal context.
I wear two hats when it comes to Linkerd. I am one of the maintainers of the
project. I am also the CEO of [Buoyant](https://buoyant.io). Buoyant created
Linkerd, and submitted it to the CNCF way back in the dark ages of 2017
(when the CNCF only had 4 projects!). Buoyant continues to be the primary
sponsor of the project, and to date, the majority of code in Linkerd comes from
folks who have been paid by Buoyant for their time and energy. In fact, I take
great pride in the fact that Buoyant has been able to find great people in the
Linkerd community, like [Alejandro](https://github.com/alpeb),
[Ivan](https://github.com/ihcsim), [Zahari](https://github.com/zaharidichev),
[Sean](https://github.com/seanmonstar), [Carl](https://github.com/carllerche),
and many more, and give them the ability to make a living by continuing these
contributions.
I sleep soundly at night because the two roles are never in conflict. Nothing
about Buoyant's business model requires us to maintain control over Linkerd.
This is by design. Both Oliver and I were open source contributors to
infrastructure projects long before Buoyant was created ([Exhibit
1](http://netbsd-soc.sourceforge.net/projects/zfs/), [Exhibit
2](https://svn.apache.org/viewvc/incubator/thrift/trunk/CONTRIBUTORS?view=markup&pathrev=665459)),
and the thought of a commercial "Linkerd Enterprise Edition" or "Linkerd Plus"
that withheld critical functionality necessary for running Linkerd in
production was never appealing to us. Linkerd is and must always be a fully
functional, completely unencumbered open source project.
So, that's all to say: please come join us in Linkerd. We have 150+
contributors across the world, and while line-by-line the majority of
contributions are sponsored by Buoyant, that's an artifact of how Buoyant
operates, not a statement of control. (We donated it to the CNCF for a reason!)
As I said on Twitter:
{{< tweet 1179202957369323520 >}}
We want your input, your help, and your guidance. Let's keep building this
amazing project together.
## Ready to try Linkerd?
Ready to try Linkerd? You can try the latest stable release by running:
```bash
curl https://run.linkerd.io/install | sh
```
Linkerd is a community project and is hosted by the [Cloud Native Computing
Foundation](https://cncf.io/). If you have feature requests, questions, or
comments, we'd love to have you join our rapidly-growing community! Linkerd
is hosted on [GitHub](https://github.com/linkerd/), and we have a thriving
community on [Slack](https://slack.linkerd.io/),
[Twitter](https://twitter.com/linkerd), and the [mailing
lists](https://linkerd.io/2/get-involved/). Come and join the fun!
_Image credit: [Archana Jarajapu](https://flickr.com/photos/rowdie/)_
| 46.146341 | 98 | 0.778013 | eng_Latn | 0.991723 |
e20e0363860b89c9831ee8d4a02a5c4f78ee9b6b | 837 | md | Markdown | README.md | Jeremy775/Mangatech | a59bda12fd910d9fe0e55a8250eaa43c0dae275b | [
"MIT"
] | null | null | null | README.md | Jeremy775/Mangatech | a59bda12fd910d9fe0e55a8250eaa43c0dae275b | [
"MIT"
] | null | null | null | README.md | Jeremy775/Mangatech | a59bda12fd910d9fe0e55a8250eaa43c0dae275b | [
"MIT"
] | null | null | null | <p align="center"><a href="https://laravel.com" target="_blank"><img src="https://raw.githubusercontent.com/laravel/art/master/logo-lockup/5%20SVG/2%20CMYK/1%20Full%20Color/laravel-logolockup-cmyk-red.svg" width="400"></a></p>
# MANGATECH
## A propos
Mangatech est une application de gestion de contenu (mangas, animes), elle vous permet de :
- Découvrir des mangas / animes.
- Voir les mangas / animes les plus likés -- (bientôt).
- Ajouter un manga / anime a vos liste de favoris, lu/vu, à lire / à voir.
- Echanger avec les autres utilisateurs en commantant les articles ou en participant au forum.
- Une interface simple et intuitive.
Un outil parfait pour commencer votre bibliothèque en ligne.
## License
The Laravel framework is open-sourced software licensed under the [MIT license](https://opensource.org/licenses/MIT).
| 39.857143 | 226 | 0.751493 | fra_Latn | 0.75721 |
e20e164e89a1044cf102d7685daffe0a532eac45 | 6,679 | md | Markdown | microsoft-365/admin/activity-reports/microsoft-teams-user-activity-preview.md | MicrosoftDocs/microsoft-365-docs-pr.pt-BR | 88959f119be80545a0b854d261ba4acf2ac1e131 | [
"CC-BY-4.0",
"MIT"
] | 29 | 2019-09-17T04:18:20.000Z | 2022-03-20T18:51:42.000Z | microsoft-365/admin/activity-reports/microsoft-teams-user-activity-preview.md | MicrosoftDocs/microsoft-365-docs-pr.pt-BR | 88959f119be80545a0b854d261ba4acf2ac1e131 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2022-02-09T06:48:15.000Z | 2022-02-09T06:48:47.000Z | microsoft-365/admin/activity-reports/microsoft-teams-user-activity-preview.md | MicrosoftDocs/microsoft-365-docs-pr.pt-BR | 88959f119be80545a0b854d261ba4acf2ac1e131 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-03-14T23:52:56.000Z | 2021-05-31T14:02:38.000Z | ---
title: Relatórios do Microsoft 365 no centro de administração – Atividade do usuário do Microsoft Teams
ms.author: kwekua
author: kwekua
manager: scotv
audience: Admin
ms.topic: article
ms.service: o365-administration
ms.localizationpriority: medium
ROBOTS: NOINDEX, NOFOLLOW
ms.collection:
- M365-subscription-management
- Adm_O365
- Adm_NonTOC
ms.custom: AdminSurgePortfolio
search.appverid:
- BCS160
- MST160
- MET150
- MOE150
description: Saiba como obter o relatório Microsoft Teams atividade do usuário e obter informações sobre a atividade Teams em sua organização.
ms.openlocfilehash: 0055fda46b3c958d57d66a21d33f2589b6985e30
ms.sourcegitcommit: d4b867e37bf741528ded7fb289e4f6847228d2c5
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/06/2021
ms.locfileid: "60157429"
---
# <a name="microsoft-365-reports-in-the-admin-center---microsoft-teams-user-activity"></a>Relatórios do Microsoft 365 no centro de administração – Atividade do usuário do Microsoft Teams
O painel Microsoft 365 **relatórios** mostra a visão geral da atividade em todos os produtos em sua organização. Ele possibilita detalhar até relatórios de um produto específico para que você tenha informações mais precisas sobre as atividades em cada produto. Confira o tópico [Visão geral de relatórios](activity-reports.md). No relatório de atividade de usuários do Microsoft Teams, você pode obter informações sobre a atividade do Microsoft Teams em sua organização.
> [!NOTE]
> Você deve ser um administrador global, leitor global ou leitor de relatórios no Microsoft 365 ou um Exchange, SharePoint, serviço Teams, comunicações Teams ou Skype for Business para ver relatórios.
## <a name="how-to-get-to-the-microsoft-teams-user-activity-report"></a>Como gerar o relatório de atividade de usuários do Microsoft Teams
1. No centro de administração do, vá para a página **Relatórios** \> <a href="https://go.microsoft.com/fwlink/p/?linkid=2074756" target="_blank">Uso</a>.
2. Na página inicial do painel, clique no botão **Exibir mais** no Microsoft Teams de atividade.
## <a name="interpret-the-microsoft-teams-user-activity-report"></a>Interpretar o relatório de atividade de usuários do Microsoft Teams
Você pode exibir a atividade do usuário no relatório de Teams escolhendo a **guia Atividade do** usuário. <br/>
Selecione **Escolher colunas** para adicionar ou remover colunas do relatório. <br/> 
Você também pode exportar os dados do relatório para um arquivo Excel .csv selecionando o link **Exportar.** Isso exporta os dados de todos os usuários e permite que você realize uma classificação e filtragem simples para mais análise. Se você tiver menos de 2000 usuários, poderá classificar e filtrar dentro da tabela no próprio relatório. Se você tiver mais de 2000 usuários, para filtrar e classificar, você precisa exportar os dados. O formato exportado para **tempo de** áudio, **tempo de vídeo** e tempo de **compartilhamento** de tela segue o formato de duração ISO8601.
O relatório **Atividade de usuários do Microsoft Teams** pode ser consultado sobre tendências dos últimos 7, 30, 90 ou 180 dias. No entanto, se você selecionar um dia específico no relatório, a tabela (7) mostrará dados por até 28 dias a partir da data atual (e não a data em que o relatório foi gerado).
Para garantir a qualidade dos dados, realizamos verificações diárias de validação de dados nos últimos três dias e preencheremos quaisquer lacunas detectadas. Você pode observar diferenças nos dados históricos durante o processo.
|Item|Descrição|
|:-----|:-----|
|**Indicador**|**Definição**|
|Nome de usuário <br/> |O endereço de email do usuário. Você pode exibir o endereço de email real ou tornar este campo anônimo. <br/> |
|Mensagens de canal <br/> |O número de mensagens exclusivas que o usuário postou em um chat de equipe durante o período de tempo especificado. <br/> |
|Mensagens de chat <br/> |O número de mensagens exclusivas que o usuário postou em um chat privado durante o período de tempo especificado. <br/> |
|Total de reuniões <br/> |O número de reuniões online que o usuário participou durante o período de tempo especificado. <br/> |
|Chamadas 1:1 <br/> | O número de chamadas 1:1 que o usuário participou durante o período de tempo especificado. <br/> |
|Data da última atividade (UTC) <br/> |A última data em que o usuário participou de uma Microsoft Teams de usuário.<br/> |
|Reuniões participadas ad hoc <br/> | O número de reuniões ad hoc que um usuário participou durante o período de tempo especificado. <br/> |
|Reuniões organizadas ad hoc <br/> |O número de reuniões ad hoc que um usuário organizou durante o período especificado. <br/>|
|Total de reuniões organizadas <br/> |A soma das reuniões agendadas, recorrentes, ad hoc e não classificadas que um usuário organizou durante o período de tempo especificado. <br/> |
|Total de reuniões participadas <br/> |A soma das reuniões agendadas, recorrentes, ad hoc e não classificadas em que um usuário participou durante o período de tempo especificado. <br/> |
|Reuniões organizadas agendadas uma vez <br/> |O número de reuniões agendadas única que um usuário organizou durante o período de tempo especificado. <br/> |
|Reuniões agendadas organizadas recorrentes <br/> |O número de reuniões recorrentes que um usuário organizou durante o período de tempo especificado. <br/> |
|Reuniões participadas agendadas uma vez <br/> |O número das reuniões agendadas de uma vez em que um usuário participou durante o período de tempo especificado. <br/> |
|Reuniões participadas recorrentes agendadas <br/> |O número de reuniões recorrentes que um usuário participou durante o período de tempo especificado. <br/> |
|É licenciado <br/> |Selecionado se o usuário estiver licenciado para usar Teams. <br/>|
|Outras atividades <br/>|O Usuário está ativo, mas realizou outras atividades que não os tipos de ação expostos oferecidos no relatório (enviando ou respondendo a mensagens de canal e mensagens de chat, agendando ou participando de chamadas e reuniões 1:1). Exemplos de ações são quando um usuário altera o status Teams ou a mensagem de status Teams ou abre uma postagem mensagem de canal, mas não responde. <br/>|
|reuniões não classificadas <br/>|Aquele que não pode ser classificado como agenda ou recorrente ou ad hoc. Eles são curtos em número e, principalmente, não podem ser identificados devido a informações de telemetria adulteradas. |
||| | 91.493151 | 578 | 0.77856 | por_Latn | 0.999835 |
e20eb832b95d129829359ece59625e42e8848572 | 651 | md | Markdown | docs/Files.md | AntarSidgi/ASA | 916744bc82f9e3087afed3c51435c19770eea945 | [
"MIT"
] | 2 | 2019-04-15T05:25:14.000Z | 2020-06-26T06:34:11.000Z | docs/Files.md | AntarSidgi/ASA | 916744bc82f9e3087afed3c51435c19770eea945 | [
"MIT"
] | 1 | 2019-11-22T11:50:52.000Z | 2021-05-01T06:44:59.000Z | docs/Files.md | AntarSidgi/ASA | 916744bc82f9e3087afed3c51435c19770eea945 | [
"MIT"
] | 2 | 2019-04-20T12:12:16.000Z | 2019-11-13T05:10:46.000Z | # Woring With Files
You can use class for work with files
<b> Start Work 🔘 </b>
<pre>$AntarF = new ASA_Files();</pre>
<b> if you need create file use this 👇 </b>
<pre>$AntarF->CreateFile($namefile,$value = null);</pre>
<b> Write in file (Deleted previous text) </b>
<pre>$AntarF->PutFile("Hello World 👌");</pre>
<b> Delete File </b>
<pre>$AntarF->UnFile($namefile);</pre>
<b> Read File </b>
<pre>$AntarF->ReadFile($namefile);</pre>
<b> Create Folder </b>
<pre>$AntarF->CreateFolder($name);</pre>
<b> Check Exists File </b>
<pre>$AntarF->FileExists($namefile);</pre>
<b> Check Exists Folder </b>
<pre>$AntarF->FolderExists($namefolder);</pre>
| 23.25 | 56 | 0.654378 | kor_Hang | 0.272399 |
e20f4588b91bf20ce5b5c7217b9dad4f7f53381a | 20,216 | md | Markdown | approved/EC_Ethics_and_Culture.md | TurgenSec/policies | f2874fc856a023fbcd72a96332a568cf0eb050b0 | [
"CC0-1.0"
] | 3 | 2020-06-07T23:38:55.000Z | 2021-07-27T14:02:45.000Z | approved/EC_Ethics_and_Culture.md | TurgenSec/policies | f2874fc856a023fbcd72a96332a568cf0eb050b0 | [
"CC0-1.0"
] | null | null | null | approved/EC_Ethics_and_Culture.md | TurgenSec/policies | f2874fc856a023fbcd72a96332a568cf0eb050b0 | [
"CC0-1.0"
] | null | null | null | # TurgenSec Ethics & Culture
<h2 color="red">DOCUMENT CLASSIFICATION: PUBLIC</h2>
## Latest Change
Version|Date|Company|Last change
---|---|---|---
1.1|25th May 2020|TurgenSec Ltd|Moved to Git
## Publication of TurgenSec Ethics & Culture Guidance
TurgenSec publishes this document to the [TurgenSec Community](https://community.turgensec.com) and to the [Trust and Confidence](https://turgensec.com/trust-and-confidence) section of TurgenSec’s website. TurgenSec will update this document in line with developing business practice and disclosure methodology.
Comments and feedback are welcome, please submit Issues or Pull Requests to Turgensec's [policies](https://github.com/turgensec/policies) repo.
# 1. Purpose of this Document
This document outlines the business ambitions, motivations and approach of TurgenSec Ltd. TurgenSec interacts with companies, clients and governments as an independent party acting in the interest of national security, privacy under GDPR and other data protection legislation, the reduction of information asymmetry and in the interest of its clients.
This document outlines TurgenSec’s motivations for disclosing breaches in line with its practice governing responsible disclosure (see Trust and Confidence), the actions TurgenSec takes to issue public statements to disclose breaches and cyber security information, and the motivations behind its ongoing research and development projects.
## 1.1 This document:
- Lays out the goals and ethics of TurgenSec.
- Explains the motivations behind breach disclosure practice.
- Is a frame of reference for evaluating future and existing policies.
- Outlines the intended impact of TurgenSec’s business activities.
We encourage constructive criticism of this document and feedback on how it could be improved. Feel free to reach out to us through Issues/PR's on our [policies](https://github.com/turgensec/policies) repo, [Contact](https://turgensec.com/contact) page, or on [Twitter](https://twitter.com/TurgenSec).
## 1.2 Terminology
### 1.2.1 Breach Disclosure
Notification of regulators and/or victims of incidents that affect the confidentiality or security of personal data.
### 1.2.2 Public Disclosure
Revealing the fact(s) of an information breach to any party not the owner of the breached system or data.
### 1.2.3 Exosystem
Structures that function largely independently of an individual or organisation (henceforth "**User**"), but nevertheless affect the immediate context within which the **User** interacts. These include any 3rd Party Suppliers the **User** interacts with, and also other 3rd Party entities such as the government, legal system, media etc.
### 1.2.4 Information asymmetry
>"Asymmetric information, also known as 'information failure', occurs when one party to an economic transaction possesses greater material knowledge than the other party" -- [https://investopedia.com/terms/a/asymmetricinformation.asp](www.https://investopedia.com/terms/a/asymmetricinformation.asp)
In the context of data breaches information asymmetry relates to the imbalance of power over data between organisations, data controllers and individuals.
### 1.2.5 NCSC: National Cyber Security Centre.
> "We support the most critical organisations in the UK, the wider public sector, industry, SMEs as well as the general public. When incidents do occur, we provide effective incident response to minimise harm to the UK, help with recovery, and learn lessons for the future." -- https://ncsc.gov.uk
### 1.2.6 ICO: The Information Comissioner’s Office.
>“The UK’s independent authority set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals.” -- https://ico.org.uk
### 1.2.7 CMA: Computer Misuse Act 1990.
UK law against misuse of computers. In the 30 years since it was passed, we are not aware of any instance when ethical hackers have ever been prosecuted for criminal charges under it.
### 1.2.8 GDPR: The General Data Protection Regulation 2016/679
The GDPR is a European regulation on data protection and privacy. It addresses, amongst other matters, the transfer of personal data outside the EU and EEA.
# 2. TurgenSec Vision
Our social and business goals springboard from a belief in individual data rights, in particular the following principles laid out in the GDPR.
Lawfulness, fairness and transparency
Integrity and confidentiality (security)
Accountability
## 2.1 Social Goals
The company’s social goals are:
1. Creating a world where people can accurately understand the extent of their publicly exposed data. In particular:
<ol type="a">
<li>
Consumers are aware of the personal data that companies hold.
</li>
<li>
Consumers are aware of breaches of their personal data.
</li>
<li>
Consumers are aware of illegal trading of their personal data and its uses to malicious actors.
</li>
<li>
Consumers are aware of which companies have sustained breaches, in what manner, and steps taken in remediation, such that they can make informed decisions about which organizations to trust with their data.
</li>
</ol>
2. Promoting the legitimate expectation that personal data is treated with care.
3. Empowering people to use their data to make change as an individual.
People should be aware of their personal data that exists on the open web. This data is a crucial resource exploited by bad actors to breach organizations and steal from individuals ([93% of successful attacks leverage pretexting](https://enterprise.verizon.com/resources/reports/dbir/)).
TurgenSec supports the principle of access to appropriate compensation following breaches of privacy and mistreatment of personal data. Similarly, breached organisations should be made aware in order to activate incident and crisis response plans and to comply with relevant contractual obligations.
## 2.2 Business Goals
1. Creating a world where people can accurately understand the extent of their publicly exposed data. In particular:
- When TurgenSec’s research and development activities bring to light breaches, the company activates its responsible disclosure policy. Responsible disclosure represents a value proposition for TurgenSec’s services (as outlined under # 4. Breach Disclosure Ethics).
- TurgenSec’s Exosystem Monitoring service monitors third party breaches for corporate clients, providing the necessary information, resources and - tools to reduce the risk and impact of breaches affecting the supply chain and other third parties.
2. Promoting the legitimate expectation that personal data is treated with care.
- TurgenSec’s business services and responsible disclosure practices ensure personal data privacy for clients and individuals impacted by data - breaches.
- TurgenSec treats all personal data with which it interacts with care.
- Responsible disclosures and public disclosures encourage organisations and companies to treat people’s data with care as well as educating the - public on the value of their personal data.
- Public disclosures further encourage companies to take positive, demonstrable and effective steps in preventing data breaches from occurring again in future.
3. Empowering people to use their data to make change as an individual.
- TurgenSec’s partnership with Ethi.me (https://ethi.me/) lays the groundwork for the monetisation and responsible use of personal data reclaimed from companies through GDPR and other data protection legislation. Ethi is not alone in this endeavour and TurgenSec is open to collaboration with other companies working towards the same ends.
# 3. Business Culture
Workplace and business culture is hugely important. TurgenSec believes an incredible culture is a valid purpose in itself for an organisation, and the foundation of culture lies in ethics. While the company's fortunes may fluctuate, the incredible people and culture will always remain at the core.
It is no accident that a person-centric approach supports business success metrics. A healthy company culture:
- Improves employee happiness and motivation
- Reduces turnover
- Encourages excellent work
- Attracts assistance
- Fosters collaboration
This offers an organisation its best chance of longevity and of succeeding financially. If colleagues are happy in their work, understand why we do what we do and are fulfilled by doing good in the world, we have created something worthwhile.
# 4. Breach Disclosure Ethics
## 4.1 Background for Producing an Ethical Framework
One of the biggest problems facing security researchers is that _working in the public interest_ solicits unrewarded effort and potential legal threats, especially when compared to being financially compensated and taking the credit through an anonymous alias. This unfortunately is part of the reason certain ‘black hat’ markets boast over a million registered users. In the view of TurgenSec, a world where security researchers are incentivised to make the choices that benefit society is one that we should all work towards.
Ethical disclosures foster further ethical disclosures. Cases where researchers are discredited, abused or worse, end up serving jail time undoubtedly contribute to the culture that has seen the personal data of hundreds of millions of people leaked online. Each organisation that mistreats security researchers reduces the likelihood of future ethical notification to other breached organisations.
Ultimately, for as long as there is no financial incentive to behave ethically, there is little hope of change for the better. Before the introduction of fines under data protection legislation, there were few circumstances in which a company would suffer significant ramifications for leaking their users’ data (and there were no legal obligations to notify). The users impacted would bear the cost of these mistakes, through fraud enabled and enhanced by the data leaked without their knowledge. This lack of transparency was coupled with the fact that the individuals exploited were disproportionately vulnerable people. Previously, responsible organisations were not held accountable.
With data protection legislation becoming increasingly widespread, GDPR has been a game-changer, allowing individuals far more power than before, and increasing organisations’ accountability for the handling their data. Now that people have the right to claim compensation when their data is mishandled, and the fines are enshrined in European law, individuals and data protection authorities now have real power to hold data controllers and processors to account.
By increasing the responsibility and accountability taken on by organisations handling data will mean significantly better cybersecurity for us all, allowing us to fight fraud (which has risen year on year, and mostly involves exploiting data) on a level playing field, rather than against the tide.
As no internationally accepted set of ethical principles upon which individuals and organizations can be informed of data breaches exists at present, we have based our policies on NCSC advice, the main GDPR principles, the CMA’s definition of public good, and existing standards within other industries, including how breaches of confidentiality are dealt with within the medical industry.
## 4.2 Universal Disclosure Principles
1. **The lawful, timely discovery of datasets** containing sensitive information disclosed publicly in error, inadvertently or maliciously.
2. **The protection of the rights of individuals**, in particular the right to privacy enshrined in data protection legislation internationally. Privacy is a fundamental human right in accordance with the UN Declaration of Human Rights, we seek to protect it. Organizations that value their privacy and confidentiality protect their employees indirectly through their bottom line, but also personally as they have entrusted their private data to the care and due diligence of their employer.
3. **Timely and consistent communication** with organisations found to be suffering from a security or data breach, however caused. Efforts will be made to make contact with the individuals or organisations affected by the breach in as timely and consistent fashion as possible.
4. **The application of fair and ethical standards, which balance the rights of individuals and organisations.** We look to work with organisations to create a more secure digital world where data rights are upheld and the security and correct handling of individuals’ data is performed in a transparent and informative way.
5. **Transparency with impacted parties** as to the extent and content of the information breached.
6. **Adherence to the letter and spirit of legislation protecting personal data and the rights of individuals.**
## 4.3 Public Disclosure
### 4.3.1 Public Disclosure Goals
1. Bring the breach to the attention of affected individuals, without any semblance of doubt.
2. Incentivise appropriate behaviour by demonstrating the outcomes when organisations do not act in line with ICO and NCSC guidance.
3. Raise awareness and scrutiny of data breaches and data security.
4. Act transparently with the public and involved parties.
5. Prevent the spread of false information about the nature of the breach.
### 4.3.2 Why is Public Disclosure important?
The above goals are motivated by the following:
1. When it is impossible to identify the organisation responsible for the data breach, public disclosure brings more people into identification efforts so that the breach can be resolved.
Where companies eschew their obligations in the run up to, and in the aftermath of, a data breach, Public Disclosure increases awareness of the costs of doing so, a vital contributor to global efforts to ensure appropriate care is taken to secure data and protect the rights of individuals
2. So that individuals are aware of the dangers of the services they use and are able to more appropriately judge when to share potentially compromising data, and what standard of care to expect from those they entrust with their data.
3. To earn the trust of the public, and any involved parties, by clearly outlining the truth of what has occured. If you do not tell people what has happened, they will not trust you.
4. Public Disclosure protects individuals being exploited by malicious parties looking to take advantage of the news by providing a ground-truth to the extent of leaked data. Further, it protects TurgenSec and others from bribery, threats, gag orders, manipulation and legal coercion to conceal the existence of or downplay a data breach.
## 4.4 Corporate Disclosure Ethics
Corporate breaches are breaches that involve corporate information (sometimes alongside personal information). Information in corporate breaches can often contain confidential business information exposing the details of employees, business practices, contracts and agreements, amongst other things.
This information is extremely useful to malicious actors who can then attack the business and its clients using insider information.
### 4.4.1 Corporate Disclosure Goals
- Building positive relationships and open communication channels with the companies impacted.
- Providing verifiable transparency on the principle that organizations and individuals have a right to know what has been breached about them in objective and precise terms.
- Through transparency, allowing individual and organizational consumers to make informed decisions on who to trust with their sensitive data.
- To bring attention to the wider debate of why breaches need transparency to reduce the costs of all involved.
- To bring attention to the work of TurgenSec, demonstrating and proving our value proposition.
### 4.4.2 Why is Corporate Disclosure important?
Every company in the world interfaces with third parties that hold information about them and their customers. Rigid security and regular testing mean nothing if the third parties you rely upon are leaking information that endanger your business. (Exosystem Monitoring - The Problem). Supply chain cyber security (https://www.ijtre.com/images/scripts/2019061017.pdf) is an undervalued area of cyber security that causes a huge number of breaches every month.
Businesses are under no legal obligation to disclose if they have breached a third party's business information unless it is stipulated in a contract unless personal data has been breached. The damage to society and businesses of breached company data can in many cases be more significant than personal data breaches affecting consumers.
### 4.4.3 Contributions to Global Information Security
Part of our public offering involves monitoring leaks in third party suppliers that put organizations, employees and customers at risk, helping to protect organizations, their customers, and their revenues from bad actors. Exosystem Monitoring - The Solution.
The exposure and responsible disclosure of data breaches is beneficial to both TurgenSec’s business as a clear indication of the value TurgenSec can offer to businesses, and raises awareness to businesses of the risks they take with third parties. Further, this reduces information asymmetry of data security, making existing cyber security solutions relied upon by most individuals, institutions and organizations more effective, and for lessons to be learned more widely across organisations. Where possible, we aim to align our goals and business activities with those of the NCSC, linked below.
https://ncsc.gov.uk/section/about-ncsc/what-we-do
## 4.5 Breach Disclosure Ethics Analogies
To relate the framework given above to existing accepted ethical philosophy, we have provided the following analogies below, which run in parallel with our disclosure principles.
### 4.5.1 Computer Misuse Act:
First, we must address the CMA as it relates to our research activities, due to the common misinterpretation of what we do. It is not hacking.
Open up a web browser.
Go to https://turgensec.com
Did you just hack us to access the information on our server?
### 4.5.2 On finding a dropped wallet:
- Try to identify the owner.
- Return the dropped wallet.
- Don’t take any money from the wallet.
- If other people’s credit cards are within, try to return those.
- If you can’t identify the owner, ask “has anyone lost this wallet?”
### 4.5.3 On finding an open storage warehouse filled with goods:
- Try to identify the owner and notify them.
- Report the breach to the responsible authorities.
- Don’t take any money from the warehouse.
- If you cannot identify the owner of the warehouse, and the authorities cannot assist in resolving the situation, report the breach to the users of - the warehouse.
- If you can’t identify the owner, or users, ask publically “whose warehouse is this?”.
- Tell those who had stored things there that it had been left unsecured, so they have an option to change their minds about storing things there in the future.
### 4.5.4 Dropping the house key of your friend down the drain, through a hole in your trousers.
- Tell them about it.
- Do not claim you have been robbed.
- Do not claim you never lost the key, or have never even heard of such a key.
- Let them know that while you can’t get them their key back, you can offer them X as a form of apologizing.
- Let them know that you have in fact patched up the hole, or indeed purchased entirely new trousers, so as to ensure that this situation does not - happen again and reassure them that the rest of their keys in your possession are safe.
- Give them the chance to at least know this happened and reconsider their decision to trust you with keys in the future.
# 5. Document control
## 5.1 Administration
Title: Ethics & Culture
Version: 1.0
Validity: Until next issue
Classification: Public
Effective: 27th April 2020
Review date: 27th April 2021
Owner: Peter Hansen, Founder
Sponsor: Nathaniel Fried, Founder
Audience: Public
## 5.2 Document history
Future changes to be tracked using Git Version Control.
### Early Life
Date|Version|Author|Changes
---|---|---|---
20/04/2020|1.0|Peter Hansen|First draft
| 84.941176 | 689 | 0.801593 | eng_Latn | 0.999405 |
e20f53dcff8f39423fd3230da1742993572a9c13 | 7,670 | md | Markdown | README.md | chef-davin/habitat | e95f35d634b19b793d52f50f9abf7577a3585c91 | [
"Apache-2.0"
] | 2,577 | 2016-06-14T08:17:26.000Z | 2022-03-30T00:50:41.000Z | README.md | chef-davin/habitat | e95f35d634b19b793d52f50f9abf7577a3585c91 | [
"Apache-2.0"
] | 7,145 | 2016-06-14T06:59:55.000Z | 2022-03-31T20:38:29.000Z | README.md | chef-davin/habitat | e95f35d634b19b793d52f50f9abf7577a3585c91 | [
"Apache-2.0"
] | 422 | 2016-06-14T07:08:32.000Z | 2022-03-21T01:20:21.000Z | [](https://buildkite.com/chef/habitat-sh-habitat-master-verify?branch=master)
[](https://forums.habitat.sh)
[](https://www.codetriage.com/habitat-sh/habitat)
[Habitat](http://habitat.sh) is open source software that creates platform-independent build artifacts and provides built-in deployment and management capabilities.
The goal of Habitat is to allow you to automate your application behavior when you create your application, and then bundle your application with the automation it needs to behave with the correct run time behavior, update strategies, failure handling strategies, and scaling behavior, wherever you choose to deploy it.
See a quick demo of how to build, deploy and manage an application with Habitat:
**Project State**: [Active](https://github.com/chef/chef-oss-practices/blob/master/repo-management/repo-states.md#active)
**Issues Response Time Max**: 5 business days
**Pull Request Response Time Max**: 5 business days
[](http://www.youtube.com/watch?v=VW1DwDezlqM)
# Table of Contents
* [Diagrams](#diagrams)
* [Training](#training)
* [Install](#install)
* [Contribute](#contribute)
* [Documentation](#documentation)
* [Code Organization](#code-organization)
* [Roadmap](#roadmap)
* [Community and support](#community-and-support)
* [Building](#building)
* [Further reference material](#further-reference-material)
* [Code of Conduct](#code-of-conduct)
* [License](#license)
## Diagrams
Graphics that will help you and your team better understand the concepts and how they fit together into the larger Habitat ecosystem.
### Where Habitat Fits
[](http://habitat.sh#reference-diagram)
Try the interactive infographics on the [website](http://habitat.sh#reference-diagram)!
### How Habitat Works
* [Architecture Overview](https://github.com/habitat-sh/habitat/raw/master/images/habitat-architecture-overview.png)
* [Initial Package Build Flow](https://github.com/habitat-sh/habitat/raw/master/images/habitat-initial-package-build-flow.png)
* [Application Rebuild Flow](https://github.com/habitat-sh/habitat/raw/master/images/habitat-application-rebuild-flow.png)
* [Dependency Update Flow](https://github.com/habitat-sh/habitat/raw/master/images/habitat-dependency-update-flow.png)
* [Promote Packages Through Channels](https://github.com/habitat-sh/habitat/raw/master/images/habitat-promote-packages-through-channels.png)
### Habitat and **Docker**
* [Initial Docker Container Publishing Flow](https://github.com/habitat-sh/habitat/raw/master/images/habitat-initial-docker-container-publishing-flow.png)
* [Automated Docker Container Publishing Flow](https://github.com/habitat-sh/habitat/raw/master/images/habitat-automated-docker-container-publishing-flow.png)
*View all diagrams in [Docs](https://www.habitat.sh/docs/diagrams/)*
## Training
*View all demos and tutorials in [Learn](https://www.habitat.sh/learn/)*
## Install
You can download Habitat from the [Habitat downloads page](https://docs.chef.io/habitat/install_habitat).
Once you have downloaded it, follow the instructions on the page for your specific operating system.
If you are running macOS and use [Homebrew](https://brew.sh), you can use our official [Homebrew tap](https://github.com/habitat-sh/homebrew-habitat).
```
$ brew tap habitat-sh/habitat
$ brew install hab
```
If you are running Windows and use [Chocolatey](https://chocolatey.org), you can install our [chocolatey package](https://chocolatey.org/packages/habitat)
```
C:\> choco install habitat
```
If you do _not_ run Homebrew or Chocolatey, or if you use Linux, you can use the Habitat [install.sh](https://github.com/habitat-sh/habitat/blob/master/components/hab/install.sh) or [install.ps1](https://github.com/habitat-sh/habitat/blob/master/components/hab/install.ps1) script.
Bash:
```
$ curl https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.sh | sudo bash
```
Powershell:
```
C:\> Set-ExecutionPolicy Bypass -Scope Process -Force
C:\> iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.ps1'))
```
## Contribute
We are always looking for more opportunities for community involvement. Interested in contributing? Check out our [CONTRIBUTING.md](CONTRIBUTING.md) to get started!
## Documentation
Get started with the [Habitat tutorials](https://www.habitat.sh/learn/) or plunge into the [complete documentation](https://www.habitat.sh/docs/).
## Code Organization
### Core Plans
The Habitat plans that are built and maintained by Habitat's Core Team are in [their own repo.](https://github.com/habitat-sh/core-plans)
### Habitat Supervisor and other core components
The code for the Habitat Supervisor and other core components are in the [components directory](https://github.com/habitat-sh/habitat/tree/master/components).
### Docs
Habitat's website and documentation source is located in the `www` directory of the Habitat source code. See [its README](www/README.md) for more information.
## Roadmap
The Habitat project's roadmap is public and is on our [community page](https://www.habitat.sh/community/).
The Habitat core team's project tracker is also public and on [Github.](https://github.com/habitat-sh/habitat/projects/1)
## Community and support
* [Chef Community Slack](https://community-slack.chef.io/)
* [Forums](https://forums.habitat.sh)
* The Chef Community meeting is every Thursday at 9am Pacific. More information can be found in the Connect section of [Chef Community](https://community.chef.io/)
## Building
See [BUILDING.md](BUILDING.md) for platform specific info on building Habitat from source.
## Further reference material
* [The Rust Programming Language](http://doc.rust-lang.org/book/)
* [Rust by Example](http://rustbyexample.com/)
* [Introduction to Bash programming](http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html)
* [Advanced Bash-Scripting Guide](http://www.tldp.org/LDP/abs/html/)
* [Bash Cheat Sheet](http://tldp.org/LDP/abs/html/refcards.html)
* [Writing Robust Bash Shell Scripts](http://www.davidpashley.com/articles/writing-robust-shell-scripts/)
* [Wikibook: Bourne Shell Scripting](https://en.wikibooks.org/wiki/Bourne_Shell_Scripting)
* [What is the difference between test, \[ and \[\[ ?](http://mywiki.wooledge.org/BashFAQ/031)
* [POSIX Shell Command Language](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html)
## Code of Conduct
Participation in the Habitat community is governed by the [code of conduct](https://github.com/habitat-sh/habitat/blob/master/CODE_OF_CONDUCT.md).
## License
Copyright (c) 2016 Chef Software Inc. and/or applicable contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 48.544304 | 319 | 0.77249 | eng_Latn | 0.700715 |
e20fa4bfaac1f52eda84fc506529463edd8ea73c | 138 | md | Markdown | README.md | cerealbeer/php-imagemagick-resize-filters-dev | b8ece267e345dd09fd12ae6b37bbe6c2074774b2 | [
"MIT"
] | null | null | null | README.md | cerealbeer/php-imagemagick-resize-filters-dev | b8ece267e345dd09fd12ae6b37bbe6c2074774b2 | [
"MIT"
] | null | null | null | README.md | cerealbeer/php-imagemagick-resize-filters-dev | b8ece267e345dd09fd12ae6b37bbe6c2074774b2 | [
"MIT"
] | null | null | null | # php-imagemagick-resize-filters-dev
A simple form to test the available ImageMagick resize filters, this is not intended for production.
| 46 | 100 | 0.818841 | eng_Latn | 0.979688 |
e21091d7c6e6912633854802e8d05b0ed4f20b28 | 3,202 | md | Markdown | _posts/2020-05-09-Su-dung-74HC595-mo-rong-ngo-ra-Tiva-C.md | quynhtam351/quynhtam351.github.io | 5ecd144f21ea44dbe1afa20e3061c2f5027d77cb | [
"MIT"
] | null | null | null | _posts/2020-05-09-Su-dung-74HC595-mo-rong-ngo-ra-Tiva-C.md | quynhtam351/quynhtam351.github.io | 5ecd144f21ea44dbe1afa20e3061c2f5027d77cb | [
"MIT"
] | null | null | null | _posts/2020-05-09-Su-dung-74HC595-mo-rong-ngo-ra-Tiva-C.md | quynhtam351/quynhtam351.github.io | 5ecd144f21ea44dbe1afa20e3061c2f5027d77cb | [
"MIT"
] | null | null | null | ---
layout: post
title: Sử dụng 74HC595 mở rộng ngõ ra Tiva C
description: Code mẫu và hướng dẫn sử dụng IC 74HC595 để mở rộng ngõ ra cho Tiva C TM4C123GH6PM.
categories: [tiva-c]
tags: [tivaC, TM4C123GH6PM, ARM, microcontroller, 74HC595]
comments: true
thumbnail: "https://quynhtam351.github.io/img/74HC/74HC595.jpg"
date: 2020-05-09
last_modified_at: 2021-04-19
---
## IC mở rộng ngõ ra 74HC595
74HC595 là IC dịch bit mở rộng ngõ ra rất phổ biến và quen thuộc. 74HC595 có nguyên lý hoạt động đơn giản, dễ sử dụng, tốc độ cao, tuy nhiên mình cảm thấy phần layout cho IC này khá khó.
## Cấu tạo và nguyên lý hoạt động
Cấu tạo và nguyên lý hoạt động của 74HC595 đã được trình bày rất nhiều lần trên nhiều trang và diễn đàn khác nhau, nên mình không trình bày lại quá sâu. Hiểu đơn giản thì quy trình dịch bit ra Output của 74HC595 gồm: đưa tín hiệu (high/low) vào chân DS => tạo xung dịch dữ liệu vào => khi đủ dữ liệu thì tạo xung chốt để xuất dữ liệu ra Output.
## Code mẫu sử dụng IC 74HC595 để mở rộng ngõ ra cho Tiva C TM4C123GH6PM
Bên dưới là code mẫu dùng 74HC595 với Tiva C được viết trên Code Composer Studio. Hàm **void HC595_Out(void)** có chức năng xuất 8 bit trong biến **uint8_t data** ra 74HC595, đây là hàm khá đơn sơ, bạn có thế viết lại hàm thành dạng khác tối ưu hơn, ví dụ như **boolean HC595_Out(uint8_t data)** để trả về kết quả xuất dữ liệu ra 74HC595 có thành công hay không.
74HC595 còn có chân OE (Output Enable) để cho phép/ngắt ngõ ra, có thể mặc định nối chân này xuống mass để luôn cho phép hoặc sử dụng một IO của Tiva để điều khiển.
~~~
#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_types.h"
#include "inc/hw_gpio.h"
#include "driverlib/pin_map.h"
#include "driverlib/sysctl.c"
#include "driverlib/sysctl.h"
#include "driverlib/gpio.c"
#include "driverlib/gpio.h"
#define HC_DS GPIO_PIN_1
#define HC_CLK GPIO_PIN_2
#define HC_Latch GPIO_PIN_3
uint8_t data;
void delayMS(int ms)
{
SysCtlDelay( (SysCtlClockGet()/(3000))*ms ) ;
}
void HC595_Out(void)
{
uint8_t i, temp;
for(i = 0; i < 8; i++) // i < 8 nếu chỉ có 1 IC 595, nếu 2 thì i < 16, tương tự khi nối tiếp thêm nhiều 595
{
temp = data & (0x80>>i); //xet xem bit 0 hay 1
if (temp == 0)
GPIOPinWrite(GPIO_PORTF_BASE, HC_DS, 0x00);
else
GPIOPinWrite(GPIO_PORTF_BASE, HC_DS, 0xff);
// tao xung dich du lieu
GPIOPinWrite(GPIO_PORTF_BASE, HC_CLK, 0x00);
delayMS(1);
GPIOPinWrite(GPIO_PORTF_BASE, HC_CLK, 0xff);
}
// tao xung chot du lieu
GPIOPinWrite(GPIO_PORTF_BASE, HC_Latch, 0x00);
delayMS(1);
GPIOPinWrite(GPIO_PORTF_BASE, HC_Latch, 0xff);
}
int main(void)
{
SysCtlClockSet(SYSCTL_SYSDIV_2_5|SYSCTL_USE_PLL|SYSCTL_OSC_MAIN|SYSCTL_XTAL_16MHZ);
SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOF);
SysCtlDelay(3);
GPIOPinTypeGPIOOutput(GPIO_PORTF_BASE, GPIO_PIN_1|GPIO_PIN_2|GPIO_PIN_3);
GPIOPinWrite(GPIO_PORTF_BASE, HC_DS, 0x00);
GPIOPinWrite(GPIO_PORTF_BASE, HC_CLK|HC_Latch, 0xff);
while (1)
{
data = 0x00;
HC595_Out();
delayMS(1000);
data = 0xff;
HC595_Out();
delayMS(1000);
}
}
~~~
Updating... | 35.977528 | 362 | 0.699875 | vie_Latn | 0.999581 |
e21097269c21146ad3df8b507de6a5feb2b5b0fa | 2,585 | md | Markdown | _posts/2019-03-21-Download-how-to-make-a-floating-paper-lantern.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2019-03-21-Download-how-to-make-a-floating-paper-lantern.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2019-03-21-Download-how-to-make-a-floating-paper-lantern.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download How to make a floating paper lantern book
The building of Tintinyaranga how to make a floating paper lantern followed by the Chukches with in the stern of the boat, it is Pioneers. He sits with sister-become at lines, which accounted for Colman's early interest in technology. Her brother, at least for some days, indirectly. Every hour in every life contains such often-unrecognized potential to affect the world that the great days for which we, in order to ascertain the chronometer's rate of going; baby, with the view of penetrating farther along kept him close to her breast. " "Smart as you are, who was none other than Selim's sister Selma, she's in there. This is quite a performance, Colman grinned, _Tradescant der aeltere_, but is in the possession I went down; it was in the basement, which had shown itself only once, pale green levitation beams that suck you right out of your calls an ecological tragedy. few days-perhaps weeks-were going to be tedious, Bernard shook his head in a way that said he rejected the suggestion totally. its pretty thickly inhabited coast. over wizardly powers and widespread misuse of them, reflect the image we ought to be trying to maintain of the Service?" and the thickness had how to make a floating paper lantern out of his voice. They're evil, grinding loudly against the file:D|Documents20and20Settingsharry. "If I was into the purse of the palm, and at nightfall he bade one of the slave-girls drop a piece of henbane in the cup and give it to Aboulhusn to drink, Nais, my Lord Healer, and turned over the third, to the astonishment of some, who had become a subtler man than he used to be, over there. the command of Captain AMEZAGA. computer manuals composed in Latin. ' Quoth the old man, are reduced to noon, and that on the sixth through a darkened park. Although the malty residue in all the containers had years ago evaporated, along with Master Hemlock, a butterfly, an exceedingly favourable state of things for that period, pressed into the "All of both," she confirmed, in that same language, if she Nolan moved down the hall to his bedroom at the far end, the same primitive stem as the Greenlanders. years, "They'll be as good as new when she's mended them. of his most striking characteristics. 334, and then replied. He hadn't trusted himself to answer how to make a floating paper lantern. Ninety-nine entire families were swamp. 51; ii. I fixed it and how to make a floating paper lantern it back, who inherited the property. posters on the wall. | 287.222222 | 2,475 | 0.789168 | eng_Latn | 0.999935 |
e2109ad29f892a31e1467057c1373731af219c4c | 72,993 | md | Markdown | README.md | zdx1993/study-hello-cloud | 40bc99e177e3f8ed7d1ccfc69870f057ac0577e3 | [
"Apache-2.0"
] | 1 | 2019-06-22T02:37:18.000Z | 2019-06-22T02:37:18.000Z | README.md | zdx1993/study-hello-cloud | 40bc99e177e3f8ed7d1ccfc69870f057ac0577e3 | [
"Apache-2.0"
] | null | null | null | README.md | zdx1993/study-hello-cloud | 40bc99e177e3f8ed7d1ccfc69870f057ac0577e3 | [
"Apache-2.0"
] | null | null | null | # study-hello-cloud
## 关于项目结构的一些注意点
我们在做电梯架构的时候,都是new Modules。但是在微服务架构下每个项目不是一个单独的服务。因为现在的开发模式并不是模块化开发,所以现在是创建目录继续进行开发。之前是一整个工程模块化,现在是每一个目录都是一个工程!
关于spring中#{}与${},可以这样联想记忆,"#"是四划"\$"是两划,#比\$牛逼啊,所以功能也更加强大,#中可以进行运算,而\$只能取值。
在mybatis中依旧是笔画多的牛皮,#{}能用来防止sql注入,而${}是纯粹的取值替换(包括''引号哦)!
## hello-spring-cloud-dependencies工程
Spring Cloud 项目都是基于 Spring Boot 进行开发,并且都是使用 Maven 做项目管理工具。在实际开发中,我们一般都会创建一个依赖管理项目作为 Maven 的 Parent 项目使用,这样做可以极大的方便我们对 Jar 包版本的统一管理。
### pom配置
```java
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.2.RELEASE</version>
</parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>hello-spring-cloud-dependencies</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<properties>
<!-- Environment Settings -->
<java.version>1.8</java.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<!-- Spring Settings -->
<spring-cloud.version>Finchley.RC1</spring-cloud.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<!-- Compiler 插件, 设定 JDK 版本 -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<showWarnings>true</showWarnings>
</configuration>
</plugin>
<!-- 打包 jar 文件时,配置 manifest 文件,加入 lib 包的 jar 依赖 -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<archive>
<addMavenDescriptor>false</addMavenDescriptor>
</archive>
</configuration>
<executions>
<execution>
<configuration>
<archive>
<manifest>
<!-- Add directory entries -->
<addDefaultImplementationEntries>true</addDefaultImplementationEntries>
<addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
<addClasspath>true</addClasspath>
</manifest>
</archive>
</configuration>
</execution>
</executions>
</plugin>
<!-- resource -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
</plugin>
<!-- install -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-install-plugin</artifactId>
</plugin>
<!-- clean -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-clean-plugin</artifactId>
</plugin>
<!-- ant -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
</plugin>
<!-- dependency -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<!-- Java Document Generate -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<executions>
<execution>
<phase>prepare-package</phase>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- YUI Compressor (CSS/JS压缩) -->
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>yuicompressor-maven-plugin</artifactId>
<version>1.5.1</version>
<executions>
<execution>
<phase>prepare-package</phase>
<goals>
<goal>compress</goal>
</goals>
</execution>
</executions>
<configuration>
<encoding>UTF-8</encoding>
<jswarn>false</jswarn>
<nosuffix>true</nosuffix>
<linebreakpos>30000</linebreakpos>
<force>true</force>
<includes>
<include>**/*.js</include>
<include>**/*.css</include>
</includes>
<excludes>
<exclude>**/*.min.js</exclude>
<exclude>**/*.min.css</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</pluginManagement>
<!-- 资源文件配置 -->
<resources>
<resource>
<directory>src/main/java</directory>
<excludes>
<exclude>**/*.java</exclude>
</excludes>
</resource>
<resource>
<directory>src/main/resources</directory>
</resource>
</resources>
</build>
<repositories>
<repository>
<id>aliyun-repos</id>
<name>Aliyun Repository</name>
<url>http://maven.aliyun.com/nexus/content/groups/public</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>sonatype-repos</id>
<name>Sonatype Repository</name>
<url>https://oss.sonatype.org/content/groups/public</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>sonatype-repos-s</id>
<name>Sonatype Repository</name>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>aliyun-repos</id>
<name>Aliyun Repository</name>
<url>http://maven.aliyun.com/nexus/content/groups/public</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</project>
```
- parent:继承了 Spring Boot 的 Parent,表示我们是一个 Spring Boot 工程
- package:pom,表示该项目仅当做依赖项目,没有具体的实现代码
- spring-cloud-dependencies:在 properties 配置中预定义了版本号为 Finchley.RC1 ,表示我们的 Spring Cloud 使用的是 F 版
- build:配置了项目所需的各种插件
- repositories:配置项目下载依赖时的第三方库
在实际开发中,我们所有的项目都会依赖这个 dependencies 项目,整个项目周期中的所有第三方依赖的版本也都由该项目进行管理。当我们创建好pom文件后还需要手动托管这个pom。
## 服务注册与发现中心
在这里,我们需要用的组件是 Spring Cloud Netflix 的 Eureka,Eureka 是一个服务注册和发现模块。服务注册与发现,是根据名字通过负载均衡来找服务的,所以项目必须有自己的名字。比如
```
spring:
application:
name: hello-spring-cloud-eureka
```
值得注意的一点:eureka是纯正的 servlet 应用。
### 创建服务注册中心
```java
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-eureka</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-eureka</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.eureka.EurekaApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
上面的pom中我们在插件中配置了入口类,问什么要这样呢?因为jar包(不单单是指spring-boot的jar文件)启动的时候一定要指定一个入口类,如果不指定,并且项目中有多于一个的main方法,jar包启动就会报错,jar包并不知道运行那个main方法。这就是为什么要指定入口类的原因。至于为什么配置了这个入口类就能解决这个问题,这就是我总结的4大学习方案的工具学习了!
### Application
启动一个服务注册中心,只需要一个注解 @EnableEurekaServer
```java
ackage com.funtl.hello.spring.cloud.eureka;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {
public static void main(String[] args) {
SpringApplication.run(EurekaApplication.class, args);
}
}
```
### application.yml
Eureka 是一个高可用的组件,它没有后端缓存,每一个实例注册之后需要向注册中心发送心跳(因此可以在内存中完成),在默认情况下 Erureka Server 也是一个 Eureka Client ,必须要指定一个 Server。
```java
spring:
application:
name: hello-spring-cloud-eureka
server:
port: 8761
eureka:
instance:
hostname: localhost
client:
registerWithEureka: false
fetchRegistry: false
serviceUrl:
defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
```
通过 eureka.client.registerWithEureka:false 和 fetchRegistry:false 来表明自己是一个 Eureka Server.
### 操作界面
Eureka Server 是有界面的,启动工程,打开浏览器访问:
http://localhost:8761

## 服务提供者
当 Client 向 Server 注册时,它会提供一些元数据,例如主机和端口,URL,主页等。Eureka Server 从每个 Client 实例接收心跳消息。 如果心跳超时,则通常将该实例从注册 Server 中删除。

服务提供者能提供什么服务呢?当然不是特殊服务了!大部分都是增删改查服务啊!
总而言之本节内容就是教会我们创意一个--提供服务的--服务。
一般而言服务提供者对外提供rest服务,返回json数据,并不会返回页面,页面的渲染以及数据的拼接是通过服务消费者完成的(当然了服务消费者可以返回页面,也可以返回前端需要的异步数据)。
### POM
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-service-admin</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-service-admin</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.service.admin.ServiceAdminApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
### Application
通过注解 `@EnableEurekaClient` 表明自己是一个 Eureka Client.
```text
package com.funtl.hello.spring.cloud.service.admin;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
@SpringBootApplication
@EnableEurekaClient
public class ServiceAdminApplication {
public static void main(String[] args) {
SpringApplication.run(ServiceAdminApplication.class, args);
}
}
```
### application.yml
```text
spring:
application:
name: hello-spring-cloud-service-admin
server:
port: 8762
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
```
**注意:** 需要指明 `spring.application.name`,这个很重要,这在以后的服务与服务之间相互调用一般都是根据这个 `name`
## 服务消费者(Ribbon)
一般而言服务提供者对外提供rest服务,返回json数据,并不会返回页面,页面的渲染以及数据的拼接是通过服务消费者完成的(当然了服务消费者可以返回页面,也可以返回前端需要的异步数据)。
在微服务架构中,业务都会被拆分成一个独立的服务,服务与服务的通讯是基于 http restful 的。Spring cloud 有两种服务调用方式,一种是 ribbon + restTemplate,另一种是 feign。在这一篇文章首先讲解下基于 ribbon + rest。
### 概述
Ribbon 是一个负载均衡客户端,可以很好的控制 `http` 和 `tcp` 的一些行为。
### 准备工作
- 启动服务提供者(本教程案例工程为:`hello-spring-cloud-service-admin`),端口号为:`8762`
- 修改配置文件的端口号为:`8763`,启动后在 Eureka 中会注册两个实例,这相当于一个小集群

### 创建服务消费者
创建一个工程名为 `hello-spring-cloud-web-admin-ribbon` 的服务消费者项目,`pom.xml` 配置如下:
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-web-admin-ribbon</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-web-admin-ribbon</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<!-- Spring Cloud End -->
<!-- 解决 thymeleaf 模板引擎一定要执行严格的 html5 格式校验问题 -->
<dependency>
<groupId>net.sourceforge.nekohtml</groupId>
<artifactId>nekohtml</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.web.admin.ribbon.WebAdminRibbonApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要是增加了 Ribbon 的依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
```
### Application
通过 `@EnableDiscoveryClient` 注解注册到服务中心
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
@SpringBootApplication
@EnableDiscoveryClient
public class WebAdminRibbonApplication {
public static void main(String[] args) {
SpringApplication.run(WebAdminRibbonApplication.class, args);
}
}
```
### application.yml
设置程序端口号为:`8764`
```text
spring:
application:
name: hello-spring-cloud-web-admin-ribbon
thymeleaf:
cache: false
mode: LEGACYHTML5
encoding: UTF-8
servlet:
content-type: text/html
server:
port: 8764
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
```
### Configuration
配置注入 `RestTemplate` 的 Bean,并通过 `@LoadBalanced` 注解表明开启负载均衡功能
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon.config;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;
@Configuration
public class RestTemplateConfiguration {
@Bean
@LoadBalanced
public RestTemplate restTemplate() {
return new RestTemplate();
}
}
```
### 创建测试用的 Service
在这里我们直接用的程序名替代了具体的 URL 地址,在 Ribbon 中它会根据服务名来选择具体的服务实例,根据服务实例在请求的时候会用具体的 URL 替换掉服务名,代码如下:
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon.service;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
@Service
public class AdminService {
@Autowired
private RestTemplate restTemplate;
public String sayHi(String message) {
return restTemplate.getForObject("http://HELLO-SPRING-CLOUD-SERVICE-ADMIN/hi?message=" + message, String.class);
}
}
```
### 创建测试用的 Controller
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon.controller;
import com.funtl.hello.spring.cloud.web.admin.ribbon.service.AdminService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class AdminController {
@Autowired
private AdminService adminService;
@RequestMapping(value = "hi", method = RequestMethod.GET)
public String sayHi(@RequestParam String message) {
return adminService.sayHi(message);
}
}
```
### 测试访问
在浏览器上多次访问 http://localhost:8764/hi?message=HelloRibbon
浏览器交替显示:
```text
Hi,your message is :"HelloRibbon" i am from port:8762
Hi,your message is :"HelloRibbon" i am from port:8763
```
请求成功则表示我们已经成功实现了负载均衡功能来访问不同端口的实例
### 此时架构

- 一个服务注册中心,Eureka Server,端口号为:`8761`
- `service-admin` 工程运行了两个实例,端口号分别为:`8762`,`8763`
- `web-admin-ribbon` 工程端口号为:`8764`
- `web-admin-ribbon` 通过 `RestTemplate` 调用 `service-admin` 接口时因为启用了负载均衡功能故会轮流调用它的 `8762` 和 `8763` 端口
## 附
### 在 IDEA 中配置一个工程启动多个实例
#### 步骤一
点击 `Run -> Edit Configurations...`

#### 步骤二
选择需要启动多实例的项目并去掉 `Single instance only` 前面的勾

#### 步骤三
通过修改 `application.yml` 配置文件的 `server.port` 的端口,启动多个实例,需要多个端口,分别进行启动即可。
## 服务消费者(Fegin)
Feign 是一个声明式的伪 Http 客户端,它使得写 Http 客户端变得更简单。使用 Feign,只需要创建一个接口并注解。它具有可插拔的注解特性,可使用 Feign 注解和 JAX-RS 注解。Feign 支持可插拔的编码器和解码器。Feign 默认集成了 Ribbon,并和 Eureka 结合,默认实现了负载均衡的效果
- Feign 采用的是基于接口的注解
- Feign 整合了 ribbon
###创建服务消费者
创建一个工程名为 `hello-spring-cloud-web-admin-feign` 的服务消费者项目,`pom.xml` 配置如下:
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-web-admin-feign</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-web-admin-feign</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<!-- Spring Cloud End -->
<!-- 解决 thymeleaf 模板引擎一定要执行严格的 html5 格式校验问题 -->
<dependency>
<groupId>net.sourceforge.nekohtml</groupId>
<artifactId>nekohtml</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.web.admin.feign.WebAdminFeignApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要是增加了 Feign 的依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
```
### Application
通过 `@EnableFeignClients` 注解开启 Feign 功能
```text
package com.funtl.hello.spring.cloud.web.admin.feign;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class WebAdminFeignApplication {
public static void main(String[] args) {
SpringApplication.run(WebAdminFeignApplication.class, args);
}
}
```
### application.yml
设置程序端口号为:`8765`
```text
spring:
application:
name: hello-spring-cloud-web-admin-feign
thymeleaf:
cache: false
mode: LEGACYHTML5
encoding: UTF-8
servlet:
content-type: text/html
server:
port: 8765
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
```
### 创建 Feign 接口
通过 `@FeignClient("服务名")` 注解来指定调用哪个服务。代码如下:
```text
package com.funtl.hello.spring.cloud.web.admin.feign.service;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
@FeignClient(value = "hello-spring-cloud-service-admin")
public interface AdminService {
@RequestMapping(value = "hi", method = RequestMethod.GET)
public String sayHi(@RequestParam(value = "message") String message);
}
```
### 创建测试用的 Controller
```text
package com.funtl.hello.spring.cloud.web.admin.feign.controller;
import com.funtl.hello.spring.cloud.web.admin.feign.service.AdminService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class AdminController {
@Autowired
private AdminService adminService;
@RequestMapping(value = "hi", method = RequestMethod.GET)
public String sayHi(@RequestParam String message) {
return adminService.sayHi(message);
}
}
```
### 测试访问
在浏览器上多次访问 http://localhost:8765/hi?message=HelloFeign
浏览器交替显示:
```text
Hi,your message is :"HelloFeign" i am from port:8762
Hi,your message is :"HelloFeign" i am from port:8763
```
请求成功则表示我们已经成功实现了 Feign 功能来访问不同端口的实例
## 使用熔断器防止服务雪崩
在微服务架构中,根据业务来拆分成一个个的服务,服务与服务之间可以通过 `RPC` 相互调用,在 Spring Cloud 中可以用 `RestTemplate + Ribbon` 和 `Feign` 来调用。为了保证其高可用,单个服务通常会集群部署。由于网络原因或者自身的原因,服务并不能保证 100% 可用,如果单个服务出现问题,调用这个服务就会出现线程阻塞,此时若有大量的请求涌入,`Servlet` 容器的线程资源会被消耗完毕,导致服务瘫痪。服务与服务之间的依赖性,故障会传播,会对整个微服务系统造成灾难性的严重后果,这就是服务故障的 **“雪崩”** 效应。
为了解决这个问题,业界提出了熔断器模型。
Netflix 开源了 Hystrix 组件,实现了熔断器模式,Spring Cloud 对这一组件进行了整合。在微服务架构中,一个请求需要调用多个服务是非常常见的,如下图:

较底层的服务如果出现故障,会导致连锁故障。当对特定的服务的调用的不可用达到一个阀值(Hystrix 是 **5 秒 20 次**) 熔断器将会被打开。

熔断器打开后,为了避免连锁故障,通过 `fallback` 方法可以直接返回一个固定值。
###Ribbon 中使用熔断器
### 在 `pom.xml` 中增加依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
```
### 在 Application 中增加 `@EnableHystrix` 注解
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.hystrix.EnableHystrix;
@SpringBootApplication
@EnableDiscoveryClient
@EnableHystrix
public class WebAdminRibbonApplication {
public static void main(String[] args) {
SpringApplication.run(WebAdminRibbonApplication.class, args);
}
}
```
### 在 Service 中增加 `@HystrixCommand` 注解
在 Ribbon 调用方法上增加 `@HystrixCommand` 注解并指定 `fallbackMethod` 熔断方法
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon.service;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
@Service
public class AdminService {
@Autowired
private RestTemplate restTemplate;
@HystrixCommand(fallbackMethod = "hiError")
public String sayHi(String message) {
return restTemplate.getForObject("http://HELLO-SPRING-CLOUD-SERVICE-ADMIN/hi?message=" + message, String.class);
}
public String hiError(String message) {
return "Hi,your message is :\"" + message + "\" but request error.";
}
}
```
### 测试熔断器
此时我们关闭服务提供者,再次请求 http://localhost:8764/hi?message=HelloRibbon 浏览器会显示:
```text
Hi,your message is :"HelloRibbon" but request error.
```
###Feign 中使用熔断器
Feign 是自带熔断器的,但默认是关闭的。需要在配置文件中配置打开它,在配置文件增加以下代码:
```text
feign:
hystrix:
enabled: true
```
### 在 Service 中增加 `fallback` 指定类
```text
package com.funtl.hello.spring.cloud.web.admin.feign.service;
import com.funtl.hello.spring.cloud.web.admin.feign.service.hystrix.AdminServiceHystrix;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
@FeignClient(value = "hello-spring-cloud-service-admin", fallback = AdminServiceHystrix.class)
public interface AdminService {
@RequestMapping(value = "hi", method = RequestMethod.GET)
public String sayHi(@RequestParam(value = "message") String message);
}
```
### 创建熔断器类并实现对应的 Feign 接口
```text
package com.funtl.hello.spring.cloud.web.admin.feign.service.hystrix;
import com.funtl.hello.spring.cloud.web.admin.feign.service.AdminService;
import org.springframework.stereotype.Component;
@Component
public class AdminServiceHystrix implements AdminService {
@Override
public String sayHi(String message) {
return "Hi,your message is :\"" + message + "\" but request error.";
}
}
```
### 测试熔断器
此时我们关闭服务提供者,再次请求 http://localhost:8765/hi?message=HelloFeign 浏览器会显示:
```text
Hi,your message is :"HelloFeign" but request error.
```
## 使用熔断器仪表盘监控
在 Ribbon 和 Feign 项目增加 Hystrix 仪表盘功能,两个项目的改造方式相同
###在 `pom.xml` 中增加依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-hystrix-dashboard</artifactId>
</dependency>
```
###在 Application 中增加 `@EnableHystrixDashboard` 注解
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.hystrix.EnableHystrix;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;
@SpringBootApplication
@EnableDiscoveryClient
@EnableHystrix
@EnableHystrixDashboard
public class WebAdminRibbonApplication {
public static void main(String[] args) {
SpringApplication.run(WebAdminRibbonApplication.class, args);
}
}
```
###创建 `hystrix.stream` 的 Servlet 配置
Spring Boot 2.x 版本开启 Hystrix Dashboard 与 Spring Boot 1.x 的方式略有不同,需要增加一个 `HystrixMetricsStreamServlet` 的配置,代码如下:
```text
package com.funtl.hello.spring.cloud.web.admin.ribbon.config;
import com.netflix.hystrix.contrib.metrics.eventstream.HystrixMetricsStreamServlet;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class HystrixDashboardConfiguration {
@Bean
public ServletRegistrationBean getServlet() {
HystrixMetricsStreamServlet streamServlet = new HystrixMetricsStreamServlet();
ServletRegistrationBean registrationBean = new ServletRegistrationBean(streamServlet);
registrationBean.setLoadOnStartup(1);
registrationBean.addUrlMappings("/hystrix.stream");
registrationBean.setName("HystrixMetricsStreamServlet");
return registrationBean;
}
}
```
###测试 Hystrix Dashboard
浏览器端访问 http://localhost:8764/hystrix 界面如下:

点击 Monitor Stream,进入下一个界面,访问 http://localhost:8764/hi?message=HelloRibbon 此时会出现监控界面:

###附:Hystrix 说明
####什么情况下会触发 `fallback` 方法
| 名字 | 描述 | 触发fallback |
| -------------------- | ---------------------------------- | ------------ |
| EMIT | 值传递 | NO |
| SUCCESS | 执行完成,没有错误 | NO |
| FAILURE | 执行抛出异常 | YES |
| TIMEOUT | 执行开始,但没有在允许的时间内完成 | YES |
| BAD_REQUEST | 执行抛出HystrixBadRequestException | NO |
| SHORT_CIRCUITED | 断路器打开,不尝试执行 | YES |
| THREAD_POOL_REJECTED | 线程池拒绝,不尝试执行 | YES |
| SEMAPHORE_REJECTED | 信号量拒绝,不尝试执行 | YES |
####`fallback` 方法在什么情况下会抛出异常
| 名字 | 描述 | 抛异常 |
| ----------------- | ------------------------------ | ------ |
| FALLBACK_EMIT | Fallback值传递 | NO |
| FALLBACK_SUCCESS | Fallback执行完成,没有错误 | NO |
| FALLBACK_FAILURE | Fallback执行抛出出错 | YES |
| FALLBACK_REJECTED | Fallback信号量拒绝,不尝试执行 | YES |
| FALLBACK_MISSING | 没有Fallback实例 | YES |
####Hystrix Dashboard 界面监控参数

### Hystrix 常用配置信息
#### 超时时间(默认1000ms,单位:ms)
- `hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds`:在调用方配置,被该调用方的所有方法的超时时间都是该值,优先级低于下边的指定配置
- `hystrix.command.HystrixCommandKey.execution.isolation.thread.timeoutInMilliseconds`:在调用方配置,被该调用方的指定方法(HystrixCommandKey 方法名)的超时时间是该值
#### 线程池核心线程数
- `hystrix.threadpool.default.coreSize`:默认为 10
#### Queue
- `hystrix.threadpool.default.maxQueueSize`:最大排队长度。默认 -1,使用 `SynchronousQueue`。其他值则使用 `LinkedBlockingQueue`。如果要从 -1 换成其他值则需重启,即该值不能动态调整,若要动态调整,需要使用到下边这个配置
- `hystrix.threadpool.default.queueSizeRejectionThreshold`:排队线程数量阈值,默认为 5,达到时拒绝,如果配置了该选项,队列的大小是该队列
**注意:** 如果 `maxQueueSize=-1` 的话,则该选项不起作用
#### 断路器
- `hystrix.command.default.circuitBreaker.requestVolumeThreshold`:当在配置时间窗口内达到此数量的失败后,进行短路。默认 20 个(10s 内请求失败数量达到 20 个,断路器开)
- `hystrix.command.default.circuitBreaker.sleepWindowInMilliseconds`:短路多久以后开始尝试是否恢复,默认 5s
- `hystrix.command.default.circuitBreaker.errorThresholdPercentage`:出错百分比阈值,当达到此阈值后,开始短路。默认 50%
#### fallback
- `hystrix.command.default.fallback.isolation.semaphore.maxConcurrentRequests`:调用线程允许请求 `HystrixCommand.GetFallback()` 的最大数量,默认 10。超出时将会有异常抛出,注意:该项配置对于 THREAD 隔离模式也起作用
#### 属性配置参数
- 参数说明:https://github.com/Netflix/Hystrix/wiki/Configuration
- HystrixProperty 参考代码:http://www.programcreek.com/java-api-examples/index.php?source_dir=Hystrix-master/hystrix-contrib/hystrix-javanica/src/test/java/com/netflix/hystrix/contrib/javanica/test/common/configuration/command/BasicCommandPropertiesTest.java
##使用路由网关统一访问接口
在微服务架构中,需要几个基础的服务治理组件,包括服务注册与发现、服务消费、负载均衡、熔断器、智能路由、配置管理等,由这几个基础组件相互协作,共同组建了一个简单的微服务系统。一个简单的微服务系统如下图:

在 Spring Cloud 微服务系统中,一种常见的负载均衡方式是,客户端的请求首先经过负载均衡(Zuul、Ngnix),再到达服务网关(Zuul 集群),然后再到具体的服。服务统一注册到高可用的服务注册中心集群,服务的所有的配置文件由配置服务管理,配置服务的配置文件放在 GIT 仓库,方便开发人员随时改配置。
###Zuul 简介
Zuul 的主要功能是路由转发和过滤器。路由功能是微服务的一部分,比如 `/api/user` 转发到到 User 服务,`/api/shop` 转发到到 Shop 服务。Zuul 默认和 Ribbon 结合实现了负载均衡的功能。
###创建路由网关
`pom.xml` 文件如下:
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-zuul</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-zuul</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.zuul.ZuulApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要是增加了 Zuul 的依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
```
### Application
增加 `@EnableZuulProxy` 注解开启 Zuul 功能
```text
package com.funtl.hello.spring.cloud.zuul;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;
@SpringBootApplication
@EnableEurekaClient
@EnableZuulProxy
public class ZuulApplication {
public static void main(String[] args) {
SpringApplication.run(ZuulApplication.class, args);
}
}
```
### application.yml
- 设置端口号为:`8769`
- 增加 Zuul 配置
```text
spring:
application:
name: hello-spring-cloud-zuul
server:
port: 8769
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
zuul:
routes:
api-a:
path: /api/a/**
serviceId: hello-spring-cloud-web-admin-ribbon
api-b:
path: /api/b/**
serviceId: hello-spring-cloud-web-admin-feign
```
路由说明:
- 以 `/api/a` 开头的请求都转发给 `hello-spring-cloud-web-admin-ribbon` 服务
- 以 `/api/b` 开头的请求都转发给 `hello-spring-cloud-web-admin-feign` 服务
###测试访问
依次运行 `EurekaApplication`、`ServiceAdminApplication`、`WebAdminRibbonApplication`、`WebAdminFeignApplication`、`ZuulApplication`
打开浏览器访问:http://localhost:8769/api/a/hi?message=HelloZuul 浏览器显示
```text
Hi,your message is :"HelloZuul" i am from port:8763
```
打开浏览器访问:http://localhost:8769/api/b/hi?message=HelloZuul 浏览器显示
```text
Hi,your message is :"HelloZuul" i am from port:8763
```
至此说明 Zuul 的路由功能配置成功
###配置网关路由失败时的回调
```text
package com.funtl.hello.spring.cloud.zuul.fallback;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.cloud.netflix.zuul.filters.route.FallbackProvider;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.stereotype.Component;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
/**
* 路由 hello-spring-cloud-web-admin-feign 失败时的回调
* <p>Title: WebAdminFeignFallbackProvider</p>
* <p>Description: </p>
*
* @author Lusifer
* @version 1.0.0
* @date 2018/7/27 6:55
*/
@Component
public class WebAdminFeignFallbackProvider implements FallbackProvider {
@Override
public String getRoute() {
// ServiceId,如果需要所有调用都支持回退,则 return "*" 或 return null
return "hello-spring-cloud-web-admin-feign";
}
/**
* 如果请求服务失败,则返回指定的信息给调用者
* @param route
* @param cause
* @return
*/
@Override
public ClientHttpResponse fallbackResponse(String route, Throwable cause) {
return new ClientHttpResponse() {
/**
* 网关向 api 服务请求失败了,但是消费者客户端向网关发起的请求是成功的,
* 不应该把 api 的 404,500 等问题抛给客户端
* 网关和 api 服务集群对于客户端来说是黑盒
* @return
* @throws IOException
*/
@Override
public HttpStatus getStatusCode() throws IOException {
return HttpStatus.OK;
}
@Override
public int getRawStatusCode() throws IOException {
return HttpStatus.OK.value();
}
@Override
public String getStatusText() throws IOException {
return HttpStatus.OK.getReasonPhrase();
}
@Override
public void close() {
}
@Override
public InputStream getBody() throws IOException {
ObjectMapper objectMapper = new ObjectMapper();
Map<String, Object> map = new HashMap<>();
map.put("status", 200);
map.put("message", "无法连接,请检查您的网络");
return new ByteArrayInputStream(objectMapper.writeValueAsString(map).getBytes("UTF-8"));
}
@Override
public HttpHeaders getHeaders() {
HttpHeaders headers = new HttpHeaders();
// 和 getBody 中的内容编码一致
headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
return headers;
}
};
}
}
```
## 使用路由网关的服务过滤功能
Zuul 不仅仅只是路由,还有很多强大的功能,本节演示一下它的服务过滤功能,比如用在安全验证方面。
###创建服务过滤器
继承 `ZuulFilter` 类并在类上增加 `@Component` 注解就可以使用服务过滤功能了,非常简单方便
```text
package com.funtl.hello.spring.cloud.zuul.filter;
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import com.netflix.zuul.exception.ZuulException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
/**
* Zuul 的服务过滤演示
* <p>Title: LoginFilter</p>
* <p>Description: </p>
*
* @author Lusifer
* @version 1.0.0
* @date 2018/5/29 22:02
*/
@Component
public class LoginFilter extends ZuulFilter {
private static final Logger logger = LoggerFactory.getLogger(LoginFilter.class);
/**
* 配置过滤类型,有四种不同生命周期的过滤器类型
* 1. pre:路由之前
* 2. routing:路由之时
* 3. post:路由之后
* 4. error:发送错误调用
* @return
*/
@Override
public String filterType() {
return "pre";
}
/**
* 配置过滤的顺序
* @return
*/
@Override
public int filterOrder() {
return 0;
}
/**
* 配置是否需要过滤:true/需要,false/不需要
* @return
*/
@Override
public boolean shouldFilter() {
return true;
}
/**
* 过滤器的具体业务代码
* @return
* @throws ZuulException
*/
@Override
public Object run() throws ZuulException {
RequestContext context = RequestContext.getCurrentContext();
HttpServletRequest request = context.getRequest();
logger.info("{} >>> {}", request.getMethod(), request.getRequestURL().toString());
String token = request.getParameter("token");
if (token == null) {
logger.warn("Token is empty");
context.setSendZuulResponse(false);
context.setResponseStatusCode(401);
try {
context.getResponse().getWriter().write("Token is empty");
} catch (IOException e) {
}
} else {
logger.info("OK");
}
return null;
}
}
```
### filterType
返回一个字符串代表过滤器的类型,在 Zuul 中定义了四种不同生命周期的过滤器类型
- pre:路由之前
- routing:路由之时
- post: 路由之后
- error:发送错误调用
### filterOrder
过滤的顺序
### shouldFilter
是否需要过滤,这里是 `true`,需要过滤
### run
过滤器的具体业务代码
###测试过滤器
浏览器访问:http://localhost:8769/api/a/hi?message=HelloZuul 网页显示
```text
Token is empty
```
浏览器访问:http://localhost:8769/api/b/hi?message=HelloZuul&token=123 网页显示
```text
Hi,your message is :"HelloZuul" i am from port:8763
```
##分布式配置中心
在分布式系统中,由于服务数量巨多,为了方便服务配置文件统一管理,实时更新,所以需要分布式配置中心组件。在 Spring Cloud 中,有分布式配置中心组件 Spring Cloud Config ,它支持配置服务放在配置服务的内存中(即本地),也支持放在远程 Git 仓库中。在 Spring Cloud Config 组件中,分两个角色,一是 Config Server,二是 Config Client。
### 分布式配置中心服务端
创建一个工程名为 `hello-spring-cloud-config` 的项目,`pom.xml` 配置文件如下:
```reStructuredText
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-config</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-config</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.config.ConfigApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要增加了 `spring-cloud-config-server` 依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-config-server</artifactId>
</dependency>
```
###Application
通过 `@EnableConfigServer` 注解,开启配置服务器功能
```text
package com.funtl.hello.spring.cloud.config;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
@SpringBootApplication
@EnableConfigServer
@EnableEurekaClient
public class ConfigApplication {
public static void main(String[] args) {
SpringApplication.run(ConfigApplication.class, args);
}
}
```
###application.yml
增加 Config 相关配置,并设置端口号为:`8888`
```text
spring:
application:
name: hello-spring-cloud-config
cloud:
config:
label: master
server:
git:
uri: https://github.com/topsale/spring-cloud-config
search-paths: respo
username:
password:
server:
port: 8888
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
```
相关配置说明,如下:
- `spring.cloud.config.label`:配置仓库的分支
- `spring.cloud.config.server.git.uri`:配置 Git 仓库地址(GitHub、GitLab、码云 ...)
- `spring.cloud.config.server.git.search-paths`:配置仓库路径(存放配置文件的目录)
- `spring.cloud.config.server.git.username`:访问 Git 仓库的账号
- `spring.cloud.config.server.git.password`:访问 Git 仓库的密码
注意事项:
- 如果使用 GitLab 作为仓库的话,`git.uri` 需要在结尾加上 `.git`,GitHub 则不用
###测试
浏览器端访问:http://localhost:8888/config-client/dev/master 显示如下:
```text
<Environment>
<name>config-client</name>
<profiles>
<profiles>dev</profiles>
</profiles>
<label>master</label>
<version>9646007f931753d7e96a6dcc9ae34838897a91df</version>
<state/>
<propertySources>
<propertySources>
<name>https://github.com/topsale/spring-cloud-config/respo/config-client-dev.yml</name>
<source>
<foo>foo version 1</foo>
<demo.message>Hello Spring Config</demo.message>
</source>
</propertySources>
</propertySources>
</Environment>
```
证明配置服务中心可以从远程程序获取配置信息
###附:HTTP 请求地址和资源文件映射
- http://ip:port/{application}/{profile}[/{label}]
- http://ip:port/{application}-{profile}.yml
- http://ip:port/{label}/{application}-{profile}.yml
- http://ip:port/{application}-{profile}.properties
- http://ip:port/{label}/{application}-{profile}.properties
## 分布式配置中心客户端
创建一个工程名为 `hello-spring-cloud-config-client` 的项目,`pom.xml` 文件配置如下:
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-config-client</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-config-client</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.config.client.ConfigClientApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要增加了 `spring-cloud-starter-config` 依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
```
###Application
入口类没有需要特殊处理的地方,代码如下:
```text
package com.funtl.hello.spring.cloud.config.client;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
@SpringBootApplication
@EnableDiscoveryClient
public class ConfigClientApplication {
public static void main(String[] args) {
SpringApplication.run(ConfigClientApplication.class, args);
}
}
```
###application.yml
增加 Config Client 相关配置,并设置端口号为:`8889`
```text
spring:
application:
name: hello-spring-cloud-config-client
cloud:
config:
uri: http://localhost:8888
name: config-client
label: master
profile: dev
server:
port: 8889
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
```
相关配置说明,如下:
- `spring.cloud.config.uri`:配置服务中心的网址
- `spring.cloud.config.name`:配置文件名称的前缀
- `spring.cloud.config.label`:配置仓库的分支
- ```
spring.cloud.config.profile
```
:配置文件的环境标识
- dev:表示开发环境
- test:表示测试环境
- prod:表示生产环境
注意事项:
- 配置服务器的默认端口为 `8888`,如果修改了默认端口,则客户端项目就不能在 `application.yml` 或 `application.properties` 中配置 `spring.cloud.config.uri`,必须在 `bootstrap.yml` 或是 `bootstrap.properties` 中配置,原因是 `bootstrap` 开头的配置文件会被优先加载和配置,切记。
###创建测试用 Controller
我们创建一个 Controller 来测试一下通过远程仓库的配置文件注入 `foo` 属性
```text
package com.funtl.hello.spring.cloud.config.client.controller;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class TestConfigController {
@Value("${foo}")
private String foo;
@RequestMapping(value = "/hi", method = RequestMethod.GET)
public String hi() {
return foo;
}
}
```
一般情况下,能够正常启动服务就说明注入是成功的。
###测试访问
浏览器端访问:http://localhost:8889/hi 显示如下:
```text
foo version 1
```
###附:开启 Spring Boot Profile
我们在做项目开发的时候,生产环境和测试环境的一些配置可能会不一样,有时候一些功能也可能会不一样,所以我们可能会在上线的时候手工修改这些配置信息。但是 Spring 中为我们提供了 Profile 这个功能。我们只需要在启动的时候添加一个虚拟机参数,激活自己环境所要用的 Profile 就可以了。
操作起来很简单,只需要为不同的环境编写专门的配置文件,如:`application-dev.yml`、`application-prod.yml`, 启动项目时只需要增加一个命令参数 `--spring.profiles.active=环境配置` 即可,启动命令如下:
```text
java -jar hello-spring-cloud-web-admin-feign-1.0.0-SNAPSHOT.jar --spring.profiles.active=prod
```
## Spring Cloud 服务追踪
这篇文章主要讲解服务追踪组件 ZipKin。
###ZipKin 简介
ZipKin 是一个开放源代码的分布式跟踪系统,由 Twitter 公司开源,它致力于收集服务的定时数据,以解决微服务架构中的延迟问题,包括数据的收集、存储、查找和展现。它的理论模型来自于 Google Dapper 论文。
每个服务向 ZipKin 报告计时数据,ZipKin 会根据调用关系通过 ZipKin UI 生成依赖关系图,显示了多少跟踪请求通过每个服务,该系统让开发者可通过一个 Web 前端轻松的收集和分析数据,例如用户每次请求服务的处理时间等,可方便的监测系统中存在的瓶颈。
###服务追踪说明
微服务架构是通过业务来划分服务的,使用 REST 调用。对外暴露的一个接口,可能需要很多个服务协同才能完成这个接口功能,如果链路上任何一个服务出现问题或者网络超时,都会形成导致接口调用失败。随着业务的不断扩张,服务之间互相调用会越来越复杂。

随着服务的越来越多,对调用链的分析会越来越复杂。它们之间的调用关系也许如下:

###术语解释
- Span:基本工作单元,例如,在一个新建的 Span 中发送一个 RPC 等同于发送一个回应请求给 RPC,Span 通过一个 64 位 ID 唯一标识,Trace 以另一个 64 位 ID 表示。
- Trace:一系列 Spans 组成的一个树状结构,例如,如果你正在运行一个分布式大数据工程,你可能需要创建一个 Trace。
- Annotation:用来即使记录一个事件的存在,一些核心 Annotations 用来定义一个请求的开始和结束
- cs:Client Sent,客户端发起一个请求,这个 Annotation 描述了这个 Span 的开始
- sr:Server Received,服务端获得请求并准备开始处理它,**如果将其 sr 减去 cs 时间戳便可得到网络延迟**
- ss:Server Sent 表明请求处理的完成(当请求返回客户端),**如果 ss 减去 sr 时间戳便可得到服务端需要的处理请求时间**
- cr:Client Received 表明 Span 的结束,客户端成功接收到服务端的回复,**如果 cr 减去 cs 时间戳便可得到客户端从服务端获取回复的所有所需时间**
将 Span 和 Trace 在一个系统中使用 Zipkin 注解的过程图形化:

###创建 ZipKin 服务端
创建一个工程名为 `hello-spring-cloud-zipkin` 的项目,`pom.xml` 文件如下:
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-zipkin</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-zipkin</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>io.zipkin.java</groupId>
<artifactId>zipkin</artifactId>
</dependency>
<dependency>
<groupId>io.zipkin.java</groupId>
<artifactId>zipkin-server</artifactId>
</dependency>
<dependency>
<groupId>io.zipkin.java</groupId>
<artifactId>zipkin-autoconfigure-ui</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.zipkin.ZipKinApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要增加了 3 个依赖,`io.zipkin.java:zipkin`、`io.zipkin.java:zipkin-server`、`io.zipkin.java:zipkin-autoconfigure-ui`
```text
<dependency>
<groupId>io.zipkin.java</groupId>
<artifactId>zipkin</artifactId>
</dependency>
<dependency>
<groupId>io.zipkin.java</groupId>
<artifactId>zipkin-server</artifactId>
</dependency>
<dependency>
<groupId>io.zipkin.java</groupId>
<artifactId>zipkin-autoconfigure-ui</artifactId>
</dependency>
```
注意版本号为:`2.10.1`,这里没写版本号是因为我已将版本号托管到 `dependencies` 项目中
###Application
通过 `@EnableZipkinServer` 注解开启 Zipkin Server 功能
```text
package com.funtl.hello.spring.cloud.zipkin;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import zipkin.server.internal.EnableZipkinServer;
@SpringBootApplication
@EnableEurekaClient
@EnableZipkinServer
public class ZipKinApplication {
public static void main(String[] args) {
SpringApplication.run(ZipKinApplication.class, args);
}
}
```
###application.yml
设置端口号为:`9411`,该端口号为 Zipkin Server 的默认端口号
```text
spring:
application:
name: hello-spring-cloud-zipkin
server:
port: 9411
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
management:
metrics:
web:
server:
auto-time-requests: false
```
###追踪服务
在 **所有需要被追踪的项目(就当前教程而言,除了 dependencies 项目外都需要被追踪,包括 Eureka Server)** 中增加 `spring-cloud-starter-zipkin` 依赖
```text
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
```
在这些项目的 `application.yml` 配置文件中增加 Zipkin Server 的地址即可
```text
spring:
zipkin:
base-url: http://localhost:9411
```
###测试追踪
启动全部项目,打开浏览器访问:http://localhost:9411/ 会出现以下界面:

**刷新之前项目中的全部测试接口(刷多几次)**
点击 `Find a trace`,可以看到具体服务相互调用的数据

点击 `Dependencies`,可以发现服务的依赖关系

至此就代表 ZipKin 配置成功
## Spring Boot Admin
随着开发周期的推移,项目会不断变大,切分出的服务也会越来越多,这时一个个的微服务构成了错综复杂的系统。对于各个微服务系统的健康状态、会话数量、并发数、服务资源、延迟等度量信息的收集就成为了一个挑战。Spring Boot Admin 应运而生,它正式基于这些需求开发出的一套功能强大的监控管理系统。
Spring Boot Admin 有两个角色组成,一个是 Spring Boot Admin Server,一个是 Spring Boot Admin Client,本章节将带领大家实现 Spring Boot Admin 的搭建。
###Spring Boot Admin 服务端
###创建 Spring Boot Admin Server
创建一个工程名为 `hello-spring-cloud-admin` 的项目,`pom.xml` 文件如下:
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-admin</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-admin</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jolokia</groupId>
<artifactId>jolokia-core</artifactId>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-server</artifactId>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.admin.AdminApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要增加了 2 个依赖,`org.jolokia:jolokia-core`、`de.codecentric:spring-boot-admin-starter-server`
```text
<dependency>
<groupId>org.jolokia</groupId>
<artifactId>jolokia-core</artifactId>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-server</artifactId>
</dependency>
```
其中 `spring-boot-admin-starter-server` 的版本号为:`2.0.0`,这里没写版本号是因为我已将版本号托管到 `dependencies` 项目中
###Application
通过 `@EnableAdminServer` 注解开启 Admin 功能
```text
package com.funtl.hello.spring.cloud.admin;
import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
@SpringBootApplication
@EnableEurekaClient
@EnableAdminServer
public class AdminApplication {
public static void main(String[] args) {
SpringApplication.run(AdminApplication.class, args);
}
}
```
###application.yml
设置端口号为:`8084`
```text
spring:
application:
name: hello-spring-cloud-admin
zipkin:
base-url: http://localhost:9411
server:
port: 8084
management:
endpoint:
health:
show-details: always
endpoints:
web:
exposure:
# 注意:此处在视频里是 include: ["health", "info"] 但已无效了,请修改
include: health,info
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
```
主要增加了 Spring Boot Admin Server 的相关配置
```text
management:
endpoint:
health:
show-details: always
endpoints:
web:
exposure:
# 注意:此处在视频里是 include: ["health", "info"] 但已无效了,请修改
include: health,info
```
###测试访问监控中心
打开浏览器访问:http://localhost:8084 会出现以下界面

###Spring Boot Admin 客户端
###创建 Spring Boot Admin Client
创建一个工程名为 `hello-spring-cloud-admin-client` 的项目,`pom.xml` 文件如下:
```text
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.funtl</groupId>
<artifactId>hello-spring-cloud-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../hello-spring-cloud-dependencies/pom.xml</relativePath>
</parent>
<artifactId>hello-spring-cloud-admin-client</artifactId>
<packaging>jar</packaging>
<name>hello-spring-cloud-admin-client</name>
<url>http://www.funtl.com</url>
<inceptionYear>2018-Now</inceptionYear>
<dependencies>
<!-- Spring Boot Begin -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jolokia</groupId>
<artifactId>jolokia-core</artifactId>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-client</artifactId>
</dependency>
<!-- Spring Boot End -->
<!-- Spring Cloud Begin -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
<!-- Spring Cloud End -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.funtl.hello.spring.cloud.admin.client.AdminClientApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
主要增加了 2 个依赖,`org.jolokia:jolokia-core`、`de.codecentric:spring-boot-admin-starter-client`
```text
<dependency>
<groupId>org.jolokia</groupId>
<artifactId>jolokia-core</artifactId>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-client</artifactId>
</dependency>
```
其中 `spring-boot-admin-starter-client` 的版本号为:`2.0.0`,这里没写版本号是因为我已将版本号托管到 `dependencies` 项目中
###Application
程序入口类没有特别需要修改的地方
```text
package com.funtl.hello.spring.cloud.admin.client;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
@SpringBootApplication
@EnableDiscoveryClient
public class AdminClientApplication {
public static void main(String[] args) {
SpringApplication.run(AdminClientApplication.class, args);
}
}
```
###application.yml
设置端口号为:`8085`,并设置 Spring Boot Admin 的服务端地址
```text
spring:
application:
name: hello-spring-cloud-admin-client
boot:
admin:
client:
url: http://localhost:8084
zipkin:
base-url: http://localhost:9411
server:
port: 8085
eureka:
client:
serviceUrl:
defaultZone: http://localhost:8761/eureka/
```
主要增加了 Spring Boot Admin Client 相关配置
```text
spring:
boot:
admin:
client:
url: http://localhost:8084
```
###测试服务监控
依次启动两个应用,打开浏览器访问:http://localhost:8084 界面显示如下

从图中可以看到,我们的 Admin Client 已经上线了,至此说明监控中心搭建成功
### WallBoard

### Journal
| 28.862396 | 283 | 0.658008 | yue_Hant | 0.206512 |
e210e86f8c3b8726f47225fde2038da9b873cc28 | 2,192 | md | Markdown | powerapps-docs/developer/common-data-service/xrm-tooling/use-xrm-tooling-execute-actions.md | eltociear/powerapps-docs.de-de | 16d69a085b3a02ad10e4e606d7df3dc050e63967 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/developer/common-data-service/xrm-tooling/use-xrm-tooling-execute-actions.md | eltociear/powerapps-docs.de-de | 16d69a085b3a02ad10e4e606d7df3dc050e63967 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/developer/common-data-service/xrm-tooling/use-xrm-tooling-execute-actions.md | eltociear/powerapps-docs.de-de | 16d69a085b3a02ad10e4e606d7df3dc050e63967 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: XRM-Tools zur Ausführung von Aktionen in Common Data Service verwenden (Common Data Service) | Microsoft-Dokumentation
description: Ein Objekt der CrmServiceClient-Klasse kann verwendet werden, um Operationen mit Daten in Common Data Service zu erstellen, abzurufen, zu aktualisieren und zu löschen
ms.custom: ''
ms.date: 03/27/2019
ms.reviewer: pehecke
ms.service: powerapps
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
applies_to:
- Dynamics 365 (online)
ms.assetid: 845a198f-a2b1-4c38-83e8-0968e684b627
caps.latest.revision: 13
author: MattB-msft
ms.author: nabuthuk
manager: kvivek
search.audienceType:
- developer
search.app:
- PowerApps
- D365CE
ms.openlocfilehash: 868b4fc5bf4f2e676732b791d7fe8ceb5ba64d20
ms.sourcegitcommit: f4cf849070628cf7eeaed6b4d4f08c20dcd02e58
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/21/2020
ms.locfileid: "3154850"
---
# <a name="use-xrm-tooling-to-execute-actions-in-common-data-service"></a>Verwenden Sie XRM-Tools, um Aktionen in Common Data Service auszuführen
Nachdem Sie eine Verbindung mit Common Data Service hergestellt haben, können Sie das Klassenobjekt <xref:Microsoft.Xrm.Tooling.Connector.CrmServiceClient> verwenden, um Aktionen mit den Common Data Service-Daten ausführen, z. B. Daten erstellen, löschen oder aktualisieren. Dieser Abschnitt enthält Beispiele zur Durchführung von Aktionen in Common Data Service mit XRM-Tooling.
## <a name="in-this-section"></a>In diesem Abschnitt
[Erstellen von Daten mit XRM-Tooling](use-xrm-tooling-create-data.md)<br />
[Abrufen von Daten mit XRM-Tooling](use-xrm-tooling-retrieve-data.md)<br />
[Aktualisieren von Daten mit XRM-Tooling](use-xrm-tooling-update-data.md)<br />
[Löschen von Daten mit XRM-Tooling](use-xrm-tooling-delete-data.md)<br />
[Ausführen von Organisationsanforderungen mit XRM-Tooling](use-messages-executecrmorganizationrequest-method.md)
### <a name="see-also"></a>Siehe auch
[Verwenden der XRM Tooling API, um eine Verbindung zu Common Data Service herzustellen](use-crmserviceclient-constructors-connect.md)<br />
[Erstellen von Windows-Client-Anwendungen mithilfe der XRM-Tools](build-windows-client-applications-xrm-tools.md)
| 47.652174 | 381 | 0.796989 | deu_Latn | 0.77689 |
e211f36b9d5aa5053cb123a7da2bc8e00062267d | 25 | md | Markdown | RoboRacer2D/README.md | SergeyZhernovoy/RoboRacer2D | 0c0b69f5e7f596b2efb77e7d9f8abc35af3027fc | [
"MIT"
] | null | null | null | RoboRacer2D/README.md | SergeyZhernovoy/RoboRacer2D | 0c0b69f5e7f596b2efb77e7d9f8abc35af3027fc | [
"MIT"
] | null | null | null | RoboRacer2D/README.md | SergeyZhernovoy/RoboRacer2D | 0c0b69f5e7f596b2efb77e7d9f8abc35af3027fc | [
"MIT"
] | null | null | null | # RoboRacer2D
First game
| 8.333333 | 13 | 0.8 | eng_Latn | 0.498004 |
e2120b149b0ab0409a96699896ae21a5bd21bf41 | 3,050 | md | Markdown | articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Zarządzanie domenami błędów w zestawach skalowania maszyn wirtualnych platformy Azure
description: Dowiedz się, jak wybrać odpowiednią liczbę domenami błędów podczas tworzenia zestawu skalowania maszyn wirtualnych.
author: mimckitt
ms.author: mimckitt
ms.topic: conceptual
ms.service: virtual-machine-scale-sets
ms.subservice: availability
ms.date: 12/18/2018
ms.reviewer: jushiman
ms.custom: mimckitt, devx-track-azurecli
ms.openlocfilehash: 8c114d6260cf81bcc4fb256fc8a09947ab9ce1d8
ms.sourcegitcommit: 15d27661c1c03bf84d3974a675c7bd11a0e086e6
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/09/2021
ms.locfileid: "102502488"
---
# <a name="choosing-the-right-number-of-fault-domains-for-virtual-machine-scale-set"></a>Choosing the right number of fault domains for virtual machine scale set (Wybieranie odpowiedniej liczby domen błędów dla zestawu skalowania maszyn wirtualnych)
Zestawy skalowania maszyn wirtualnych są tworzone z pięcioma domenami błędów domyślnie w regionach platformy Azure bez stref. W przypadku regionów obsługujących wdrożenie strefowe zestawów skalowania maszyn wirtualnych i wybrania tej opcji wartość domyślna liczby domen błędów wynosi 1 dla każdej z tych stref. Demon = 1 w tym przypadku oznacza, że wystąpienia maszyn wirtualnych należące do zestawu skalowania będą rozłożone w wielu stojakach na podstawie najlepszego wysiłku.
Można również rozważyć wyrównanie liczby domen błędów zestawu skalowania z liczbą domen błędów Managed Disks. To wyrównanie może pomóc w zapobieganiu utracie kworum, jeśli cała domena błędów Managed Disks ulegnie awarii. Licznik FD może być ustawiony na wartość mniejszą lub równą liczbie Managed Disks domen błędów dostępnych w poszczególnych regionach. Zapoznaj się z tym [dokumentem](../virtual-machines/availability.md) , aby dowiedzieć się więcej o liczbie Managed disks domen błędów według regionów.
## <a name="rest-api"></a>Interfejs API REST
Właściwość można ustawić na wartość `properties.platformFaultDomainCount` 1, 2 lub 3 (Domyślnie wartość 3, jeśli nie zostanie określona). Zapoznaj się z dokumentacją interfejsu API REST [tutaj](/rest/api/compute/virtualmachinescalesets/createorupdate).
## <a name="azure-cli"></a>Interfejs wiersza polecenia platformy Azure
Parametr można ustawić na wartość `--platform-fault-domain-count` 1, 2 lub 3 (Domyślnie wartość 3, jeśli nie zostanie określona). W [tym miejscu](/cli/azure/vmss#az-vmss-create)zapoznaj się z dokumentacją interfejsu wiersza polecenia platformy Azure.
```azurecli-interactive
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--admin-username azureuser \
--platform-fault-domain-count 3\
--generate-ssh-keys
```
Utworzenie i skonfigurowanie wszystkich zasobów zestawu skalowania i maszyn wirtualnych trwa kilka minut.
## <a name="next-steps"></a>Następne kroki
- Dowiedz się więcej o [funkcjach dostępności i nadmiarowości](../virtual-machines/availability.md) dla środowisk platformy Azure.
| 67.777778 | 505 | 0.812459 | pol_Latn | 0.999639 |
e21313671208113cc7db1f9f57c2b3bd007533dd | 82 | md | Markdown | README.md | myinxd/conditional_vae | 68c8ef0bf0af6cc3e78c43464cbac16ce7d3812f | [
"MIT"
] | null | null | null | README.md | myinxd/conditional_vae | 68c8ef0bf0af6cc3e78c43464cbac16ce7d3812f | [
"MIT"
] | null | null | null | README.md | myinxd/conditional_vae | 68c8ef0bf0af6cc3e78c43464cbac16ce7d3812f | [
"MIT"
] | 1 | 2020-03-08T22:08:02.000Z | 2020-03-08T22:08:02.000Z | # conditional_vae
A tutorial of variational autoencoder and the conditional case.
| 27.333333 | 63 | 0.841463 | eng_Latn | 0.981518 |
e2132ebe6c442135cedbe37180c2f095a6abe0bf | 385 | md | Markdown | content/reference/datatypes/SoftLayer_Product_Package_Preset_Attribute.md | BrianSantivanez/githubio_source | d4b2bc7cc463dee4a8727268e78f26d8cd3a20ae | [
"Apache-2.0"
] | null | null | null | content/reference/datatypes/SoftLayer_Product_Package_Preset_Attribute.md | BrianSantivanez/githubio_source | d4b2bc7cc463dee4a8727268e78f26d8cd3a20ae | [
"Apache-2.0"
] | null | null | null | content/reference/datatypes/SoftLayer_Product_Package_Preset_Attribute.md | BrianSantivanez/githubio_source | d4b2bc7cc463dee4a8727268e78f26d8cd3a20ae | [
"Apache-2.0"
] | null | null | null | ---
title: "SoftLayer_Product_Package_Preset_Attribute"
description: "Package preset attributes contain supplementary information for a package preset. "
date: "2018-02-12"
tags:
- "datatype"
- "sldn"
- "Product"
classes:
- "SoftLayer_Product_Package_Preset_Attribute"
type: "reference"
layout: "datatype"
mainService : "SoftLayer_Product_Package_Preset_Attribute"
---
| 25.666667 | 97 | 0.761039 | eng_Latn | 0.380676 |
e2135e98c9003ccabe3c6fc372f84acd28432ab9 | 3,474 | markdown | Markdown | _posts/2019-07-10-The-MacBook-Pro-2018-Keyboard-Sucks.markdown | RyanFleck/ryanfleck.github.io | 6fd3248bd207a65d1b719be632b8fe4171f86767 | [
"MIT"
] | null | null | null | _posts/2019-07-10-The-MacBook-Pro-2018-Keyboard-Sucks.markdown | RyanFleck/ryanfleck.github.io | 6fd3248bd207a65d1b719be632b8fe4171f86767 | [
"MIT"
] | null | null | null | _posts/2019-07-10-The-MacBook-Pro-2018-Keyboard-Sucks.markdown | RyanFleck/ryanfleck.github.io | 6fd3248bd207a65d1b719be632b8fe4171f86767 | [
"MIT"
] | null | null | null | ---
layout: post
date: 2019-07-10 19:11:00
title: "The 2018 MacBook Pro Keyboard"
categories: discourse featured post
subtitle: "Goodbye efficiency, hello carpal tunnel!"
---
As an intern at IBM, I was assigned an incredibly high-quality machine: A 2018, 15-inch MacBook Pro with 16 Gigs of DDR4 and the infamous touchbar. To buy this thing new, for personal use, it'd cost me over $3500 of the near-worthless cash we Canadians call Dollars. Certainly a pricey machine, and it isn't for naught: The screen is excellent. I'm sure that people have suffered for screens like this; surreal clarity, even lighting, vivid colors, a pixel density that'll let you set the text so small you can read it with a microscope. The screen is fantastic. As art, the build quality and materials are undeniably excellent.
Everything else about the machine, from the engineering of the keyboard and the trackpad, to sacrificing battery life for a thin chassis, the gimmicky (though fun) touchbar, is detrimental to the user experience. It is undeniable that almost every aspect of this machine is a marvel of modern engineering, but without the vision of a competent developer machine behind each hardware iteration... It is easy to conclude that Apple no longer designs professional computers for professionals, but incompetent computers for Python programmers.
The keyboard and trackpad, which I made my best effort to begin using with the open-est of minds, and enjoyed for the first few months, is on the verge of giving me a serious case of carpal tunnel. The force-touch trackpad, which places more pressure on your wrist than a regular button during actuation, will hurt your wrists. With three months of usage under my belt, I can guarantee Apple's hot new digs will be murder on your fingers and wrists. Even if the force-touch trackpad is designed with longevity in mind, it is very likely that the butterfly keyboard (or your wrists,) will give out long before the trackpad does due to the terrible ergonomics of this machine.
To add insult to literal injury, the keyboard paradoxically feels nice to type on at low speeds. The click of the butterfly switches is satisfying and easy for fingers to recognize. This is the only good thing I have to say about the keyboard. Relish it.
At high speeds, the keyboard is an inaccurate menace that gives such mixed tactile feedback that you'll be left absolutely mashing the keys to type at a reasonable speed. Those in the room will notice you absolutely banging out that piece of code and, while you'll be noticed (some people do buy the 15-inch model for this,) after about fifteen minutes your fingers will be so beaten up you'll have to put them on ice!
All in all, my time with the MacBook Pro's butterfly switches has been something of a disaster. While I was genuinely excited to use the computer at the beginning of my internship, I now have a deep and growing disdain for this catastrophe of a machine, and judging from the pain in my wrists and fingers, it hates me too.
I admire the careful engineering, but in order for Apple to produce a computer that will please developers who spend all day writing, they'll need to make a machine that is powerful enough to compute, with battery to last, and a keyboard that is comfortable enough to type on all day. This is not that machine, and it does not have that keyboard. With the extraordinary engineering team at Apple, I have full confidence they'll get it right someday.
| 151.043478 | 674 | 0.795049 | eng_Latn | 0.999934 |
e2139a214f8b6a01d3d8a6ddbfba35491a5bc0d1 | 363 | md | Markdown | Clase 7/README.md | frandemona/node-clases | 2cea04aaa542569a514fd2e725c7cf6b9e7e7c44 | [
"MIT"
] | 1 | 2019-12-11T22:23:40.000Z | 2019-12-11T22:23:40.000Z | Clase 7/README.md | josefermoso/node-clases | df7da2c7894b1056986df8a1fc0434c0e5b159d0 | [
"MIT"
] | 14 | 2020-09-03T04:04:58.000Z | 2020-09-03T14:10:42.000Z | Clase 7/README.md | josefermoso/node-clases | df7da2c7894b1056986df8a1fc0434c0e5b159d0 | [
"MIT"
] | 1 | 2019-12-10T17:21:22.000Z | 2019-12-10T17:21:22.000Z | ## Ejercicios Clase 7 - Práctica + MongoDB + Mongoose + Passport
### Teórica: https://mndy.pw/nodeclase7
##### 25/07/2019
---
Continuamos con el proyecto completo de un todo list. Lo unimos a una base de datos de MongoDB y le sumamos autenticación a nuestro sistema.
La estructura del proyecto incluye las carpetas de controller, models, public, routes, views. | 40.333333 | 140 | 0.752066 | spa_Latn | 0.987452 |
e2139cd7ff9468fb450c2237a9fdcd5e1790636e | 1,885 | markdown | Markdown | _posts/2020-04-01-ceres-sparrow.markdown | hannep/paine-mactane | 69fe833ec2deb2a4e50db778a34448af4cb08ee6 | [
"MIT"
] | null | null | null | _posts/2020-04-01-ceres-sparrow.markdown | hannep/paine-mactane | 69fe833ec2deb2a4e50db778a34448af4cb08ee6 | [
"MIT"
] | 2 | 2019-11-27T17:48:04.000Z | 2021-05-09T23:02:35.000Z | _posts/2020-04-01-ceres-sparrow.markdown | hannep/paine-mactane | 69fe833ec2deb2a4e50db778a34448af4cb08ee6 | [
"MIT"
] | null | null | null | ---
layout: post
number: AF-8
name: Ceres Sparrow
date: 2020-04-01 07:10:00 # Second April Fools 2020
short_text: Miller's encouraging feathered friend.
tweet_text: Miller's encouraging feathered friend.
hero: /assets/images/AF-8-CeresSparrow-card.jpg
drink_crop: /assets/images/AF-8-CeresSparrow-drink.jpg
author: Paine×Mactane
tags:
- April Fool's
- "Difficulty: Easy"
- Sake
- Champagne Flute
- Refreshing
- Light
- Spring/Summer
- Low Alcohol
- Belter
- Bit Part
- Expanse Cocktails Project
ingredients:
- amount: 3
unit: ounces
name: dry sake, chilled
- amount: 1
unit: ounce
name: apple brandy
- amount: ¼
unit: ounce
name: elderflower liqueur
- amount: 1¼
unit: ounces
name: sparkling water
---
The strangeness of the human-made environment on Ceres is easy to forget— until you see liquids affected by Coriolis forces or a bird’s easy bounding flight in low gravity. Seemingly thriving in its unique ecological niche, the pert Sparrow always urges Miller on when he’s at his lowest.
To make this drink, we started with the idea of distinctly Earther flavors brought into a low-gravity environment. The sake and apple combine for a satisfyingly earthy taste and mouthfeel, and are lightened and brightened with lemon oil, elderflower, and bubbles. When you add the sparkling water, look closely to see the different densities create swirls like the air currents in Ceres. This drink is gentle but lively, like the sparrow it's named for, and is very low in alcohol.
{% include ingredients.html %}
#### Instructions:
Pour sake, apple brandy, and elderflower liqueur into a mixing glass with ice and stir briefly to chill. Strain into a champagne flute and top with sparkling water. Twist a lemon peel over the drink to express oils, then discard the peel. Garnish with a pear fan.
{% include tags.html %}
| 37.7 | 482 | 0.749072 | eng_Latn | 0.995994 |
e21442fe0f057388ce5fb41dce9049fe651c1a50 | 15,723 | md | Markdown | docs/operations.md | mjaric/EventStore | 722ef4a8341fb297db044d77530981756a6fe9d5 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | docs/operations.md | mjaric/EventStore | 722ef4a8341fb297db044d77530981756a6fe9d5 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | docs/operations.md | mjaric/EventStore | 722ef4a8341fb297db044d77530981756a6fe9d5 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | # Maintenance
EventStoreDB requires regular maintenance with two operational concerns:
- [Scavenging](#scavenging-events) for freeing up space after deleting events
- [Backup and restore](#backup-and-restore) for disaster recovery
You might also be interested learning about EventStoreDB [diagnostics](diagnostics.md)
and [indexes](./indexes.md), which might require some Ops attention.
## Scavenging events
When you delete events or streams in EventStoreDB, they aren't removed immediately. To permanently delete
these events you need to run a 'scavenge' on your database.
A scavenge operation reclaims disk space by rewriting your database chunks, minus the events to delete, and
then deleting the old chunks. Scavenges only affect completed chunks, so deleted events in the current chunk
are still there after you run a scavenge.
::: warning
Scavenging is destructive Once a scavenge has run, you cannot recover any deleted events.
:::
After processing the chunks, the operation updates the chunk indexes using a merge sort algorithm, skipping
events whose data is no longer available.
::: warning
Active chunk The active (last) chunk won't be affected by the scavenge operation as scavenging
requires creating a new empty chunk file and copy all the relevant events to it. As the last chunk is the one
where events are being actively appended, scavenging of the currently active chunk is not possible. It also
means that all the events from truncated and deleted streams won't be removed from the current chunk.
:::
### Starting a scavenge
EventStoreDB doesn't run Scavenges automatically. We recommend that you set up a scheduled task, for example
using cron or Windows Scheduler, to trigger a scavenge as often as you need.
You start a scavenge by issuing an empty `POST` request to the HTTP API with the credentials of an `admin`
or `ops` user:
::: tip
Tuning scavenges Scavenge operations have other options you can set to improve performance. For
example, you can set the number of threads to use.
:::
@[code{curl}](@samples/scavenge.sh)
::: tip
Restart scavenging If you need to restart a stopped scavenge, you can specify the starting chunk ID.
:::
You can also start scavenges from the _Admin_ page of the Admin UI.
::: card

:::
Each node in a cluster has its own independent database. As such, when you run a scavenge, you need to issue a
scavenge request to each node.
### Stopping a scavenge
Stop a running scavenge operation by issuing a `DELETE` request to the HTTP API with the credentials of
an `admin` or `ops` user and the ID of the scavenge you want to stop:
```bash:no-line-numbers
curl -i -X DELETE http://localhost:2113/admin/scavenge/{scavengeId} -u "admin:changeit"
```
You can also stop scavenges from the _Admin_ page of the Admin UI.
::: tip
Scavenge is not cluster-wide Each node in a cluster has its own independent database. As such, when
you run a scavenge, you need to issue a scavenge request to each node.
:::
### Scavenge progress
As all other things in EventStoreDB, the scavenge process emit events that contain the history of scavenging.
Each scavenge operation will generate a new stream and the stream will contain events related to that
operation.
Refer to the `$scavenges` [stream documentation](streams.md#scavenges) to learn how you can use it to observe
the scavenging operation progress and status.
### How often to scavenge
This depends on the following:
- How often you delete streams.
- How you set `$maxAge`, `$maxCount` or `$tb` metadata on your streams.
- Have you enabled [writing stat events](diagnostics.md#write-stats-to-database) to the database.
You should scavenge more often if you expect a lot of deleted events. For example, if you enable writing stats
events to the database, you will get them expiring after 24 hours. Since there are potentially thousands of
those events generated per day, you have to scavenge at least once a week.
### Scavenging online
It's safe to run a scavenge while EventStoreDB is running and processing events, as it's designed to be an
online operation.
::: warning
Performance impact Scavenging increases the number of reads/writes made to disk, and it is not
recommended when your system is under heavy load.
:::
## Scavenging options
Below you can find some options that change the way how scavenging works on the server node.
### Disable scavenge merging
EventStoreDB might decide to merge indexes, depending on
the [server settings](./indexes.md#writing-and-merging-of-index-files). The index merge is IO intensive
operation, as well as scavenging, so you might not want them to run at the same time. Also, the scavenge
operation rearranges chunks, so indexes might change too. You can instruct EventStoreDB not to merge indexes
when the scavenging is running.
| Format | Syntax |
|:---------------------|:--------------------------------------|
| Command line | `--disable-scavenge-merging` |
| YAML | `DisableScavengeMerging` |
| Environment variable | `EVENTSTORE_DISABLE_SCAVENGE_MERGING` |
**Default**: `false`, so EventStoreDB might run the index merge and scavenge at the same time.
### Scavenge history
Each scavenge operation gets an id and creates a stream. You might want to look at these streams to see the
scavenge history, see how much time each operation took and how much disk space was reclaimed. However, you
might not want to keep this history forever. Use the following option to limit how long the scavenge history
stays in the database:
| Format | Syntax |
|:---------------------|:--------------------------------------|
| Command line | `--scavenge-history-max-age` |
| YAML | `ScavengeHistoryMaxAge` |
| Environment variable | `EVENTSTORE_SCAVENGE_HISTORY_MAX_AGE` |
**Default**: `30` (days)
### Always keep scavenged
Scavenging aims to save disk space. Therefore, if the scavenging process finds out that the new chunk is for
some reason larger or has the same size as the old chunk, it won't replace the old chunk because there's no
disk space to save.
Such behaviour, however, is not always desirable. For example, you might want to be sure that events from
deleted streams are removed from the disk. It is especially relevant in the context of personal data deletion.
When the `AlwaysKeepScavenged` option is set to `true`, EventStoreDB would replace the old chunk with the new
one unconditionally, giving you guarantee that all the deleted events in the scavenged chunk actually
disappear.
| Format | Syntax |
|:---------------------|:-----------------------------------|
| Command line | `--always-keep-scavenged` |
| YAML | `AlwaysKeepScavenged` |
| Environment variable | `EVENTSTORE_ALWAYS_KEEP_SCAVENGED` |
**Default**: `false`
EventStoreDB will always keep one event in the stream even if the stream was deleted, to indicate the stream
existence and the last event version. That last event in the deleted stream will still be there even
with `AlwaysKeepScavenged` option enabled. Read more
about [deleting streams](streams.md#deleting-streams-and-events) to avoid keeping sensitive information in the
database, which you otherwise would consider as deleted.
### Ignore hard delete
When you [delete a stream](streams.md#deleting-streams-and-events), you can use either a soft delete or hard
delete. When using hard delete, the stream gets closed with a tombstone event. Such an event tells the
database that the stream cannot be reopened, so any attempt to write to the hard-deleted stream will fail. The
tombstone event doesn't get scavenged.
You can override this behaviour and tell EventStoreDB that you want to delete all the traces of hard-deleted
streams too, using the option specified below. After a scavenge operation runs, all hard-deleted streams will
be open for appending new events again.
::: warning
Active chunk If you hard-delete a stream in the current chunk, it will remain hard-deleted even
with this option enabled. It's because the active chunk won't be affected by the scavenge.
:::
| Format | Syntax |
|:---------------------|:---------------------------------------|
| Command line | `--unsafe-ignore-hard-delete` |
| YAML | `UnsafeIgnoreHardDelete` |
| Environment variable | `EVENTSTORE_UNSAFE_IGNORE_HARD_DELETE` |
**Default**: `false`
::: warning
Unsafe Setting this option to `true` disables hard deletes and allows clients to write to deleted
streams. For that reason, the option is considered unsafe and should be used with caution.
:::
## Backup and restore
Backing up an EventStoreDB database is straightforward but relies on carrying out the steps below in the
correct order.
### Types of backups
There are two main ways to perform backups:
#### Disk snapshotting
If your infrastructure is virtualized, disk _snapshots_ is an option and the easiest to perform backup and
restore operations.
#### Regular file copy
- Simple full backup: when the DB size is small and the frequency of appends is low.
- Differential backup: when the DB size is large or the system has a high append frequency.
### Considerations for backup and restore procedures
Backing up one node is recommended. However, ensure that the node chosen as a target for the backup is:
- up to date
- connected to the cluster.
For additional safety, you can also back up at least a quorum of nodes.
Do not back up a node at the same time as running a scavenge operation.
[Read-only replica](./cluster.md#read-only-replica) nodes may be used as backup source.
When either running a backup or restoring, do not mix backup files of different nodes.
The restore must happen on a _stopped_ node.
The restore process can happen on any node of a cluster.
You can restore any number of nodes in a cluster from the same backup source. It means, for example, in the
event non-recoverable three nodes cluster, that the same backup source can be used to restore a completely new
three nodes cluster.
When you restore a node that was the backup source, perform a full backup after recovery.
### Database files information
By default, there are two directories containing data that needs to be included in the backup:
- `db\ ` where the data is located
- `index\ ` where the indexes are kept.
The exact name and location are dependent on your configuration.
- `db\ ` contains:
- the chunks files named `chk-X.Y` where `X` is the chunk number and `Y` the version.
- the checkpoints files `*.chk` (`chaser.chk`, `epoch.chk`, `proposal.chk`, `truncate.chk`, `writer.chk`)
- `index\ ` contains:
- the index map: `indexmap`
- the indexes: UUID named files , e.g `5a1a8395-94ee-40c1-bf93-aa757b5887f8`
### Disks snapshot
If the `db\ ` and `index\ ` directories are on the same volume, a snapshot of that volume is enough.
However, if they are on different volumes, take first a snapshot of the volume containing the `index\ `
directory and then a snapshot of the volume containing the `db\ ` directory.
### Simple full backup & restore
#### Backup
1. Copy any index checkpoint files (`<index directory>\**\*.chk`) to your backup location.
2. Copy the other index files to your backup location (the rest of `<index directory>`, excluding the
checkpoints).
3. Copy the database checkpoint files (`*.chk`) to your backup location.
4. Copy the chunk files (`chunk-X.Y`) to your backup location.
For example, with a database in `data` and index in `data/index`:
``` bash:no-line-numbers
rsync -aIR data/./index/**/*.chk backup
rsync -aI --exclude '*.chk' data/index backup
rsync -aI data/*.chk backup
rsync -a data/*.0* backup
```
#### Restore
1. Ensure the EventStoreDB process is stopped. Restoring a database on running instance is not possible and,
in most cases, will lead to data corruption.
2. Copy all files to the desired location.
3. Create a copy of `chaser.chk` and call it `truncate.chk`. This effectively overwrites the
restored `truncate.chk`.
### Differential backup & restore
The following procedure is designed to minimize the backup storage space, and can be used to do a full and
differential backup.
#### Backup
Backup the index
1. If there are no files in the index directory (apart from directories), go to step 7.
2. Copy the `index/indexmap` file to the backup. If the source file does not exist, repeat until it does.
3. Make a list `indexFiles` of all the `index/<GUID>` and `index/<GUID>.bloomfilter` files in the source.
4. Copy the files listed in `indexFiles` to the backup, skipping file names already in the backup.
5. Compare the contents of the `indexmap` file in the source and the backup. If they are different (i.e. the
indexmap file has changed since step 2, or no longer exists), go back to step 2.
6. Remove `index/<GUID>` and `index/<GUID>.bloomfilter` files from the backup that are not listed
in `indexFiles`.
7. Copy the `index/stream-existence/streamExistenceFilter.chk` file (if present) to the backup.
8. Copy the `index/stream-existence/streamExistenceFilter.dat` file (if present) to the backup.
Backup the log
9. Rename the last chunk in the backup to have a `.old` suffix. e.g. rename `chunk-000123.000000`
to `chunk-000123.000000.old`
10. Copy `chaser.chk` to the backup.
11. Copy `epoch.chk` to the backup.
12. Copy `writer.chk` to the backup.
13. Copy `proposal.chk` to the backup.
14. Make a list `chunkFiles` of all chunk files (`chunk-X.Y`) in the source.
15. Copy the files listed in `chunkFiles` to the backup, skipping file names already in the backup. All files
should copy successfully - none should have been deleted since scavenge is not running.
16. Remove any chunks from the backup that are not in the `chunksFiles` list. This will include the `.old`
file from step 9.
#### Restore
1. Ensure the Event Store DB process is stopped. Restoring a database on running instance is not possible and,
in most cases, will lead to data corruption.
2. Copy all files to the desired location.
3. Create a copy of `chaser.chk` and call it `truncate.chk`.
### Other options
There are other options available for ensuring data recovery, that are not strictly related to backups.
#### Additional node (aka Hot Backup)
Increase the cluster size from 3 to 5 to keep further copies of data. This increase in the cluster size will
slow the cluster's writing performance as two follower nodes will need to confirm each write.
Alternatively, you can use a [read-only replica](./cluster.md#read-only-replica) node, which is not a part of
the cluster. In this case, the write performance will be minimally impacted.
#### Alternative storage
Set up a durable subscription that writes all events to another storage mechanism such as a key/value or
column store. These methods would require a manual set up for restoring a cluster node or group.
#### Backup cluster
Use a second EventStoreDB cluster as a backup. Such a strategy is known as a primary/secondary back up scheme.
The primary cluster asynchronously pushes data to the second cluster using a durable subscription. The second
cluster is available in case of a disaster on the primary cluster.
If you are using this strategy, we recommend you only support manual fail over from primary to secondary as
automated strategies risk causing a [split brain](http://en.wikipedia.org/wiki/Split-brain_%28computing%29)
problem.
| 43.433702 | 110 | 0.736501 | eng_Latn | 0.997851 |
e21481e037aeebfaf78ce7a0e9f6c093be140024 | 5,297 | md | Markdown | archives/2019.md | LoicGrobol/intro-fouille-textes | 28e1840b0aaca27808794693fe0a411fe913a775 | [
"CC-BY-4.0"
] | 3 | 2019-04-08T07:03:48.000Z | 2021-11-15T08:08:38.000Z | archives/2019.md | LoicGrobol/intro-fouille-textes | 28e1840b0aaca27808794693fe0a411fe913a775 | [
"CC-BY-4.0"
] | 6 | 2019-05-19T10:01:51.000Z | 2021-02-04T09:53:08.000Z | archives/2019.md | LoicGrobol/intro-fouille-textes | 28e1840b0aaca27808794693fe0a411fe913a775 | [
"CC-BY-4.0"
] | 5 | 2018-04-19T09:51:20.000Z | 2021-06-16T01:48:04.000Z | ---
title: Introduction à la fouille de textes — M1 pluriTAL 2019
---
- **2019-04-14**: **Examen final le jeudi 2 mai de 10h à 12h en salle Benveniste**.
- **2019-04-14**: Cours supplémentaire le mardi 30 avril de 10h à 12h en salle Benveniste. Au programme : algorithmes d'étiquetage et projets.
- **2019-04-02**: Mise à jour du [script de vectorisation](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/vectorisation.py), pensez à télécharger la dernière version voir les instructions dans [les consignes pour les projets](../projets.md)
- **Où** Salle Benveniste, [ILPGA](http://www.ilpga.univ-paris3.f), [9 Rue des Bernardins, 75005 Paris](https://www.openstreetmap.org/way/55894044)
- **Quand** Le lundi de 13h30 à 15h30 (voir aussi [le calendrier de Paris 3 pour les dates](http://www.univ-paris3.fr/le-calendrier-universitaire-116398.kjsp))
- **Contact** Loïc Grobol [<loic.grobol@gmail.com>](mailto:loic.grobol@gmail.com)
## Documents et outils
- [Syllabus](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/syllabus.pdf)
- [Consignes pour les projets](projets.md)
- [Polycopié du cours](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/poly.pdf)
- [Script de vectorisation](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/vectorisation.py) à lancer avec `python3 vectorisation.py chemin/du/corpus sortie.arff` où le corpus est un dossier contenant un sous-dossier par classe et chaque sous-dossier contient un fichier par document (avec l'extension `.txt`)
- [Données d'exemple](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/sample-data.tar.gz) pour le cours
### Séances
#### 2019-01-21 — Introduction et notion de tâche
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture1.pdf)
#### 2019-01-28 – Évaluation, RI et classification
- À lire → Poly jusqu'à 2.3.2
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture2.pdf)
#### 2019-02-04 – Étiquetage et EI
- À lire → Poly jusqu'à 2.4 (non-inclus)
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture3.pdf)
#### 2019-02-11 – Relations entre tâches et représentation des données
- À lire → Poly jusqu'à 2.4 (inclus)
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture4.pdf)
#### 2019-02-18 – Représentation des attributs
- À lire → Poly jusqu'à 2.6 (inclus)
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture5.pdf)
#### 2019-02-25 — Modèles vectoriels
- À lire → Poly jusqu'à 2.6 (inclus)
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture6.pdf)
#### 2019-03-11 — Classification et apprentissage artificiel
- À lire → Poly jusqu'à 4.4 (non-inclus)
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture7.pdf)
#### 2019-03-18 — Arbres de décision
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture8.pdf)
#### 2019-03-25 — *Naïve Bayes*
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture9.pdf)
#### 2019-04-01 — *SVM*
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture10.pdf)
#### 2019-04-15 — Réseaux de neurones
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture11.pdf)
- [Sujet examen 2017](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/exam2017.pdf)
- [Sujet examen 2018](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/exam-2018-04.pdf)
#### 2019-04-30 — Compléments sur l'étiquetage
- [Slides](https://github.com/LoicGrobol/intro-fouille-textes/releases/download/stable/lecture12.pdf)
## Liens utiles
- [Le site du projet Weka](https://www.cs.waikato.ac.nz/ml/weka/)
- [La page du cours de l'an dernier](2018)
- [Site du master Plurital](http://plurital.org)
## Twittographie
Non-exhaustive
- [@Seb_Ruder](https://twitter.com/seb_ruder)
- [@yoavgo](https://twitter.com/yoavgo)
- [@harmaru](https://twitter.com/hardmaru)
- [@honnibal](https://twitter.com/honnibal)
- [@gchrupala](https://twitter.com/gchrupala)
- [@emilymbender](https://twitter.com/emilymbender)
- [@\_shrdlu\_](https://twitter.com/_shrdlu_)
## Licences
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png"/></a>
Copyright © 2019 Loïc Grobol [\<loic.grobol@gmail.com\>](mailto:loic.grobol@gmail.com)
Sauf indication contraire, les fichiers présents dans ce dépôt sont distribués selon les termes de la licence [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/). Voir [le README](README.md#Licences) pour plus de détails.
Un résumé simplifié de cette licence est disponible à <https://creativecommons.org/licenses/by/4.0/>.
Le texte intégral de cette licence est disponible à <https://creativecommons.org/licenses/by/4.0/legalcode>
| 56.351064 | 340 | 0.742873 | yue_Hant | 0.389828 |
e214fd3599edacb65ad59ec8091e8eda700dde47 | 123 | md | Markdown | README.md | rudedev/functionalProgrammingHaskell | 900d38a30eac26c29badc5791b88bc4d42080c9e | [
"MIT"
] | null | null | null | README.md | rudedev/functionalProgrammingHaskell | 900d38a30eac26c29badc5791b88bc4d42080c9e | [
"MIT"
] | null | null | null | README.md | rudedev/functionalProgrammingHaskell | 900d38a30eac26c29badc5791b88bc4d42080c9e | [
"MIT"
] | null | null | null | # functionalProgrammingHaskell
Programming examples and code for the class - https://onlinecourses.nptel.ac.in/noc17_cs11
| 41 | 91 | 0.829268 | eng_Latn | 0.660522 |
e2154e49e95fee02c073e63035b44944e89e0602 | 19,312 | md | Markdown | articles/storage/files/storage-how-to-use-files-linux.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/files/storage-how-to-use-files-linux.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/files/storage-how-to-use-files-linux.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Używanie Azure Files z systemem Linux | Microsoft Docs
description: Dowiedz się, jak zainstalować udział plików platformy Azure za pośrednictwem protokołu SMB w systemie Linux. Zapoznaj się z listą wymagań wstępnych. Przejrzyj zagadnienia dotyczące zabezpieczeń protokołu SMB na klientach z systemem Linux.
author: roygara
ms.service: storage
ms.topic: how-to
ms.date: 10/19/2019
ms.author: rogarana
ms.subservice: files
ms.openlocfilehash: 5161d8e169a7eb9e757dfbfa71fa697880e1806e
ms.sourcegitcommit: b39cf769ce8e2eb7ea74cfdac6759a17a048b331
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 01/22/2021
ms.locfileid: "98673691"
---
# <a name="use-azure-files-with-linux"></a>Używanie usługi Azure Files z systemem Linux
[Azure Files](storage-files-introduction.md) to łatwy w użyciu system plików w chmurze firmy Microsoft. Udziały plików platformy Azure można instalować w dystrybucjach systemu Linux przy użyciu [klienta jądra SMB](https://wiki.samba.org/index.php/LinuxCIFS). W tym artykule przedstawiono dwa sposoby instalowania udziału plików platformy Azure: na żądanie z `mount` poleceniem i przy rozruchu, tworząc wpis w `/etc/fstab` .
Zalecanym sposobem instalowania udziału plików platformy Azure w systemie Linux jest użycie protokołu SMB 3,0. Domyślnie Azure Files wymaga szyfrowania podczas przesyłania, które jest obsługiwane tylko przez protokół SMB 3,0. Azure Files obsługuje również protokół SMB 2,1, który nie obsługuje szyfrowania podczas przesyłania, ale nie może instalować udziałów plików platformy Azure z użyciem protokołu SMB 2,1 z innego regionu platformy Azure lub lokalnego ze względów bezpieczeństwa. O ile aplikacja nie wymaga protokołu SMB 2,1, z jakiegoś powodu większość popularnych, ostatnio wydanych dystrybucji systemu Linux obsługuje protokół SMB 3,0:
| Dystrybucja systemu Linux | SMB 2.1 <br>(Instalacja na maszynach wirtualnych w ramach tego samego regionu platformy Azure) | SMB 3.0 <br>(Instalacje z poziomu lokalnego i obejmującego wiele regionów) |
| --- | :---: | :---: |
| Ubuntu | 14.04 + | 16.04 + |
| Red Hat Enterprise Linux (RHEL) | 7 + | 7.5 + |
| CentOS | 7 + | 7.5 + |
| Debian | 8 + | Ponad 10 |
| openSUSE | 13.2 + | 42.3 + |
| SUSE Linux Enterprise Server | 12+ | 12 Z DODATKIEM SP2 + |
Jeśli używasz dystrybucji systemu Linux, która nie jest wymieniona w powyższej tabeli, możesz sprawdzić, czy w systemie Linux jest obsługiwana obsługa protokołu SMB 3,0 z szyfrowaniem. Protokół SMB 3,0 z szyfrowaniem został dodany do jądra systemu Linux w wersji 4,11. `uname`Polecenie zwróci wersję jądra systemu Linux w użyciu:
```bash
uname -r
```
## <a name="prerequisites"></a>Wymagania wstępne
<a id="smb-client-reqs"></a>
* <a id="install-cifs-utils"></a>**Upewnij się, że pakiet CIFS-narzędzia jest zainstalowany.**
Pakiet CIFS-narzędzia można zainstalować przy użyciu Menedżera pakietów na wybranej dystrybucji systemu Linux.
W przypadku dystrybucji **opartych** na **Ubuntu** i Debian należy użyć `apt` Menedżera pakietów:
```bash
sudo apt update
sudo apt install cifs-utils
```
W **Fedora**, **Red Hat Enterprise Linux 8 +** i **CentOS 8 +**, użyj `dnf` Menedżera pakietów:
```bash
sudo dnf install cifs-utils
```
W starszych wersjach **Red Hat Enterprise Linux** i **CentOS** Użyj `yum` Menedżera pakietów:
```bash
sudo yum install cifs-utils
```
W systemie **openSUSE** Użyj `zypper` Menedżera pakietów:
```bash
sudo zypper install cifs-utils
```
W przypadku innych dystrybucji Użyj odpowiedniego Menedżera pakietów lub [Skompiluj ze źródła](https://wiki.samba.org/index.php/LinuxCIFS_utils#Download)
* **Najnowsza wersja interfejsu wiersza polecenia platformy Azure (CLI).** Aby uzyskać więcej informacji na temat instalowania interfejsu wiersza polecenia platformy Azure, zobacz [Instalowanie interfejsu wiersza polecenia platformy Azure](/cli/azure/install-azure-cli) i wybieranie systemu operacyjnego. Jeśli wolisz używać modułu Azure PowerShell w programie PowerShell 6 lub nowszym, możesz jednak przedstawić poniższe instrukcje dla interfejsu wiersza polecenia platformy Azure.
* **Upewnij się, że port 445 jest otwarty**: protokół SMB komunikuje się za pośrednictwem portu TCP 445 — Sprawdź, czy Zapora nie blokuje portów TCP 445 z komputera klienckiego. Zastąp `<your-resource-group>` , a `<your-storage-account>` następnie uruchom następujący skrypt:
```bash
resourceGroupName="<your-resource-group>"
storageAccountName="<your-storage-account>"
# This command assumes you have logged in with az login
httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \
--name $storageAccountName \
--query "primaryEndpoints.file" | tr -d '"')
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))
fileHost=$(echo $smbPath | tr -d "/")
nc -zvw3 $fileHost 445
```
Jeśli połączenie zakończyło się pomyślnie, powinien zostać wyświetlony komunikat podobny do następującego:
```ouput
Connection to <your-storage-account> 445 port [tcp/microsoft-ds] succeeded!
```
Jeśli nie możesz otworzyć portu 445 w sieci firmowej lub jest on blokowany przez usługodawcę internetowego, możesz użyć połączenia sieci VPN lub ExpressRoute, aby obejść port 445. Aby uzyskać więcej informacji, zobacz [zagadnienia dotyczące sieci w przypadku bezpośredniego dostępu do udziału plików platformy Azure](storage-files-networking-overview.md).
## <a name="mounting-azure-file-share"></a>Instalowanie udziału plików platformy Azure
Aby używać udziału plików platformy Azure z dystrybucją systemu Linux, należy utworzyć katalog, który będzie służyć jako punkt instalacji dla udziału plików platformy Azure. Punkt instalacji można utworzyć w dowolnym miejscu w systemie Linux, ale jest to typowa Konwencja do utworzenia tego elementu w ramach/mnt. Po punkcie instalacji należy użyć `mount` polecenia, aby uzyskać dostęp do udziału plików platformy Azure.
W razie potrzeby można zainstalować ten sam udział plików platformy Azure w wielu punktach instalacji.
### <a name="mount-the-azure-file-share-on-demand-with-mount"></a>Zainstaluj udział plików platformy Azure na żądanie z `mount`
1. **Utwórz folder dla punktu instalacji**: Zastąp `<your-resource-group>` , `<your-storage-account>` , i `<your-file-share>` z odpowiednimi informacjami dla środowiska:
```bash
resourceGroupName="<your-resource-group>"
storageAccountName="<your-storage-account>"
fileShareName="<your-file-share>"
mntPath="/mnt/$storageAccountName/$fileShareName"
sudo mkdir -p $mntPath
```
1. **Użyj polecenia mount, aby zainstalować udział plików platformy Azure**. W poniższym przykładzie domyślne uprawnienia do plików i folderów lokalnego systemu Linux 0755, czyli odczyt, zapis i wykonywanie dla właściciela (na podstawie właściciela pliku/katalogu systemu Linux), Odczytaj i wykonaj dla użytkowników w grupie właścicieli, a następnie odczytaj i wykonaj dla innych w systemie. Możesz użyć `uid` `gid` opcji instalacji i ustawić identyfikator użytkownika i identyfikator grupy dla instalacji. `dir_mode`W razie potrzeby można również użyć i `file_mode` skonfigurować uprawnienia niestandardowe. Aby uzyskać więcej informacji na temat sposobu ustawiania uprawnień, zobacz [notacja numeryczna systemu UNIX](https://en.wikipedia.org/wiki/File_system_permissions#Numeric_notation) w witrynie Wikipedia.
```bash
# This command assumes you have logged in with az login
httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \
--name $storageAccountName \
--query "primaryEndpoints.file" | tr -d '"')
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName
storageAccountKey=$(az storage account keys list \
--resource-group $resourceGroupName \
--account-name $storageAccountName \
--query "[0].value" | tr -d '"')
sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino
```
> [!Note]
> Powyższe polecenie instalacji instaluje z protokołem SMB 3,0. Jeśli dystrybucja systemu Linux nie obsługuje protokołu SMB 3,0 z szyfrowaniem lub obsługuje protokół SMB 2,1, można zainstalować tylko z maszyny wirtualnej platformy Azure w tym samym regionie, w którym znajduje się konto magazynu. Aby zainstalować udział plików platformy Azure w dystrybucji systemu Linux, która nie obsługuje protokołu SMB 3,0 z szyfrowaniem, należy [wyłączyć szyfrowanie podczas przesyłania dla konta magazynu](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
Po zakończeniu korzystania z udziału plików platformy Azure Możesz użyć `sudo umount $mntPath` programu do odinstalowania udziału.
### <a name="create-a-persistent-mount-point-for-the-azure-file-share-with-etcfstab"></a>Utwórz trwały punkt instalacji dla udziału plików platformy Azure `/etc/fstab`
1. **Utwórz folder dla punktu instalacji**: folder dla punktu instalacji można utworzyć w dowolnym miejscu w systemie plików, ale jest to wspólna Konwencja do utworzenia tego elementu w obszarze/mnt. Na przykład następujące polecenie tworzy nowy katalog, zastępuje `<your-resource-group>` , `<your-storage-account>` i `<your-file-share>` z odpowiednimi informacjami dla środowiska:
```bash
resourceGroupName="<your-resource-group>"
storageAccountName="<your-storage-account>"
fileShareName="<your-file-share>"
mntPath="/mnt/$storageAccountName/$fileShareName"
sudo mkdir -p $mntPath
```
1. **Utwórz plik poświadczeń, aby zapisać nazwę użytkownika (nazwę konta magazynu) i hasło (klucz konta magazynu) dla udziału plików.**
```bash
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir "/etc/smbcredentials"
fi
storageAccountKey=$(az storage account keys list \
--resource-group $resourceGroupName \
--account-name $storageAccountName \
--query "[0].value" | tr -d '"')
smbCredentialFile="/etc/smbcredentials/$storageAccountName.cred"
if [ ! -f $smbCredentialFile ]; then
echo "username=$storageAccountName" | sudo tee $smbCredentialFile > /dev/null
echo "password=$storageAccountKey" | sudo tee -a $smbCredentialFile > /dev/null
else
echo "The credential file $smbCredentialFile already exists, and was not modified."
fi
```
1. **Zmień uprawnienia do pliku poświadczeń, aby tylko element główny mógł odczytać lub zmodyfikować plik hasła.** Ponieważ klucz konta magazynu jest zasadniczo hasłem administratora hasła dla konta magazynu, ustawienie uprawnień do pliku, tak aby tylko dostęp do katalogu głównego miało znaczenie, jest ważne, aby użytkownicy z niższymi uprawnieniami nie mogli pobrać klucza konta magazynu.
```bash
sudo chmod 600 $smbCredentialFile
```
1. **Użyj poniższego polecenia, aby dołączyć następujący wiersz do `/etc/fstab`** programu: w poniższym przykładzie domyślne uprawnienia do plików i folderów lokalnego systemu Linux 0755, które oznaczają odczyt, zapis i wykonywanie dla właściciela (na podstawie właściciela pliku/katalogu Linux), Odczytaj i wykonaj dla użytkowników w grupie właścicieli, a następnie odczytaj i wykonaj dla innych w systemie. Możesz użyć `uid` `gid` opcji instalacji i ustawić identyfikator użytkownika i identyfikator grupy dla instalacji. `dir_mode`W razie potrzeby można również użyć i `file_mode` skonfigurować uprawnienia niestandardowe. Aby uzyskać więcej informacji na temat sposobu ustawiania uprawnień, zobacz [notacja numeryczna systemu UNIX](https://en.wikipedia.org/wiki/File_system_permissions#Numeric_notation) w witrynie Wikipedia.
```bash
# This command assumes you have logged in with az login
httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \
--name $storageAccountName \
--query "primaryEndpoints.file" | tr -d '"')
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName
if [ -z "$(grep $smbPath\ $mntPath /etc/fstab)" ]; then
echo "$smbPath $mntPath cifs nofail,vers=3.0,credentials=$smbCredentialFile,serverino" | sudo tee -a /etc/fstab > /dev/null
else
echo "/etc/fstab was not modified to avoid conflicting entries as this Azure file share was already present. You may want to double check /etc/fstab to ensure the configuration is as desired."
fi
sudo mount -a
```
> [!Note]
> Powyższe polecenie instalacji instaluje z protokołem SMB 3,0. Jeśli dystrybucja systemu Linux nie obsługuje protokołu SMB 3,0 z szyfrowaniem lub obsługuje protokół SMB 2,1, można zainstalować tylko z maszyny wirtualnej platformy Azure w tym samym regionie, w którym znajduje się konto magazynu. Aby zainstalować udział plików platformy Azure w dystrybucji systemu Linux, która nie obsługuje protokołu SMB 3,0 z szyfrowaniem, należy [wyłączyć szyfrowanie podczas przesyłania dla konta magazynu](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
### <a name="using-autofs-to-automatically-mount-the-azure-file-shares"></a>Automatyczne instalowanie udziałów plików platformy Azure przy użyciu programu AutoFS
1. **Upewnij się, że pakiet AutoFS został zainstalowany.**
Pakiet AutoFS można zainstalować przy użyciu Menedżera pakietów na wybranej dystrybucji systemu Linux.
W przypadku dystrybucji **opartych** na **Ubuntu** i Debian należy użyć `apt` Menedżera pakietów:
```bash
sudo apt update
sudo apt install autofs
```
W **Fedora**, **Red Hat Enterprise Linux 8 +** i **CentOS 8 +**, użyj `dnf` Menedżera pakietów:
```bash
sudo dnf install autofs
```
W starszych wersjach **Red Hat Enterprise Linux** i **CentOS** Użyj `yum` Menedżera pakietów:
```bash
sudo yum install autofs
```
W systemie **openSUSE** Użyj `zypper` Menedżera pakietów:
```bash
sudo zypper install autofs
```
2. **Utwórz punkt instalacji dla udziałów**:
```bash
sudo mkdir /fileshares
```
3. **Crete nowy niestandardowy plik konfiguracyjny AutoFS**
```bash
sudo vi /etc/auto.fileshares
```
4. **Dodaj następujące wpisy do/etc/auto.fileshares**
```bash
echo "$fileShareName -fstype=cifs,credentials=$smbCredentialFile :$smbPath"" > /etc/auto.fileshares
```
5. **Dodaj następujący wpis do/etc/auto.Master**
```bash
/fileshares /etc/auto.fileshares --timeout=60
```
6. **Uruchom ponownie AutoFS**
```bash
sudo systemctl restart autofs
```
7. **Dostęp do folderu wyznaczono dla udziału**
```bash
cd /fileshares/$filesharename
```
## <a name="securing-linux"></a>Zabezpieczanie systemu Linux
Aby można było zainstalować udział plików platformy Azure w systemie Linux, należy uzyskać dostęp do portu 445. Wiele organizacji blokuje port 445 z powodu zagrożeń bezpieczeństwa związanych z protokołem SMB 1. Protokół SMB 1, znany również jako CIFS (Common Internet File System), jest starszym protokołem systemu plików zawartym w wielu dystrybucjach Linux. Protokół SMB 1 jest nieaktualny, nieefektywny i, co najważniejsze, niezabezpieczony. Dobrym pomysłem jest to, że Azure Files nie obsługuje protokołu SMB 1 i począwszy od jądra systemu Linux w wersji 4,18, system Linux umożliwia wyłączenie protokołu SMB 1. Zawsze [zdecydowanie zalecamy](https://aka.ms/stopusingsmb1) wyłączenie protokołu SMB 1 na klientach z systemem Linux przed użyciem udziałów plików SMB w środowisku produkcyjnym.
Począwszy od jądra systemu Linux 4,18, moduł jądra protokołu SMB wywoływany ze `cifs` względu na starsze przyczyny ujawnia nowy parametr modułu (często określany jako *parametr* przez różne dokumenty zewnętrzne) o nazwie `disable_legacy_dialects` . Chociaż wprowadzono je w jądrze systemu Linux 4,18, niektórzy dostawcy przenoszą tę zmianę do starszych jądra, które są przez nie obsługiwane. Dla wygody Poniższa tabela zawiera szczegółowe informacje o dostępności tego parametru modułu w przypadku wspólnych dystrybucji systemu Linux.
| Dystrybucja | Może wyłączyć protokół SMB 1 |
|--------------|-------------------|
| Ubuntu 14.04-16.04 | Nie |
| Ubuntu 18.04 | Tak |
| Ubuntu 19.04 + | Tak |
| Debian 8-9 | Nie |
| Debian 10 + | Tak |
| Fedora 29 + | Tak |
| CentOS 7 | Nie |
| CentOS 8 + | Tak |
| Red Hat Enterprise Linux 6. x-7. x | Nie |
| Red Hat Enterprise Linux 8 + | Tak |
| openSUSE przestępnie 15,0 | Nie |
| openSUSE przestępny 15.1 + | Tak |
| openSUSE Tumbleweed | Tak |
| SUSE Linux Enterprise 11. x-12. x | Nie |
| SUSE Linux Enterprise 15 | Nie |
| SUSE Linux Enterprise 15,1 | Nie |
Możesz sprawdzić, czy dystrybucja systemu Linux obsługuje `disable_legacy_dialects` parametr modułu za pomocą następującego polecenia.
```bash
sudo modinfo -p cifs | grep disable_legacy_dialects
```
To polecenie powinno wyprowadzić następujący komunikat:
```output
disable_legacy_dialects: To improve security it may be helpful to restrict the ability to override the default dialects (SMB2.1, SMB3 and SMB3.02) on mount with old dialects (CIFS/SMB1 and SMB2) since vers=1.0 (CIFS/SMB1) and vers=2.0 are weaker and less secure. Default: n/N/0 (bool)
```
Przed wyłączeniem protokołu SMB 1 należy upewnić się, że moduł SMB nie jest aktualnie załadowany w systemie (dzieje się to automatycznie, jeśli został zainstalowany udział SMB). Można to zrobić za pomocą następującego polecenia, które powinno wynikać nie tylko w przypadku niezaładowania protokołu SMB:
```bash
lsmod | grep cifs
```
Aby zwolnić moduł, najpierw Odinstaluj wszystkie udziały SMB (przy użyciu `umount` polecenia, jak opisano powyżej). Można zidentyfikować wszystkie zainstalowane udziały SMB w systemie przy użyciu następującego polecenia:
```bash
mount | grep cifs
```
Po odinstalowaniu wszystkich udziałów plików SMB bezpieczniej jest zwolnić moduł. W tym celu możesz użyć polecenia `modprobe`:
```bash
sudo modprobe -r cifs
```
Możesz ręcznie załadować moduł z opcją SMB 1 zwolnioną przy użyciu `modprobe` polecenia:
```bash
sudo modprobe cifs disable_legacy_dialects=Y
```
Na koniec można sprawdzić, czy moduł SMB został załadowany za pomocą parametru, przeglądając parametry załadowane w `/sys/module/cifs/parameters` :
```bash
cat /sys/module/cifs/parameters/disable_legacy_dialects
```
Aby trwale wyłączyć protokół SMB 1 w przypadku dystrybucji opartych na Ubuntu i Debian, należy utworzyć nowy plik (jeśli nie masz jeszcze opcji niestandardowych dla innych modułów) wywołanych `/etc/modprobe.d/local.conf` z ustawieniem. Można to zrobić za pomocą następującego polecenia:
```bash
echo "options cifs disable_legacy_dialects=Y" | sudo tee -a /etc/modprobe.d/local.conf > /dev/null
```
Można sprawdzić, czy ten program działał przez załadowanie modułu SMB:
```bash
sudo modprobe cifs
cat /sys/module/cifs/parameters/disable_legacy_dialects
```
## <a name="next-steps"></a>Następne kroki
Poniższe linki umożliwiają uzyskanie dodatkowych informacji na temat usługi Azure Files:
* [Planowanie wdrożenia usługi Azure Files](storage-files-planning.md)
* [Często zadawane pytania](./storage-files-faq.md)
* [Rozwiązywanie problemów](storage-troubleshoot-linux-file-connection-problems.md) | 58.521212 | 829 | 0.75435 | pol_Latn | 0.999484 |
e2157ec13d4679d378cb8213daf71a5b25f82b4d | 4,028 | md | Markdown | packages/progress-bar/CHANGELOG.md | k1rd3rf/jokul | 069873080db1f83eaa0dfe47cb9785a714f3eb4c | [
"MIT"
] | null | null | null | packages/progress-bar/CHANGELOG.md | k1rd3rf/jokul | 069873080db1f83eaa0dfe47cb9785a714f3eb4c | [
"MIT"
] | null | null | null | packages/progress-bar/CHANGELOG.md | k1rd3rf/jokul | 069873080db1f83eaa0dfe47cb9785a714f3eb4c | [
"MIT"
] | null | null | null | # Change Log
All notable changes to this project will be documented in this file.
See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.
## [1.2.5](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.2.4...@fremtind/jkl-progress-bar@1.2.5) (2020-03-25)
### Bug Fixes
* move browserslist declaration to root package.json ([51c790e](https://github.com/fremtind/jokul/commit/51c790ea79ca3d667871380c6bfbe85a5738920b)), closes [#862](https://github.com/fremtind/jokul/issues/862)
## [1.2.4](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.2.3...@fremtind/jkl-progress-bar@1.2.4) (2020-03-16)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## [1.2.3](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.2.2...@fremtind/jkl-progress-bar@1.2.3) (2020-03-06)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## [1.2.2](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.2.1...@fremtind/jkl-progress-bar@1.2.2) (2020-03-06)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## [1.2.1](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.2.0...@fremtind/jkl-progress-bar@1.2.1) (2020-03-05)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
# [1.2.0](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.1.2...@fremtind/jkl-progress-bar@1.2.0) (2020-02-19)
### Features
* export scss files with style pkgs ([edb6278](https://github.com/fremtind/jokul/commit/edb627838075d3d613ae78b6aae765c81067ba6a))
## [1.1.2](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.1.1...@fremtind/jkl-progress-bar@1.1.2) (2020-02-18)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## [1.1.1](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.1.0...@fremtind/jkl-progress-bar@1.1.1) (2020-01-16)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
# [1.1.0](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.0.4...@fremtind/jkl-progress-bar@1.1.0) (2020-01-16)
### Bug Fixes
* fix links to components in readme ([4e2ade2](https://github.com/fremtind/jokul/commit/4e2ade2f71d4fa1bd80e4e3d823691589207b641))
### Features
* update components to use the new notification colors ([8bd78ff](https://github.com/fremtind/jokul/commit/8bd78ff371cf382c1c7fabfe1deab5e199e5750a))
## 1.0.13 (2020-01-10)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.12 (2020-01-10)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.11 (2020-01-08)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.10 (2020-01-07)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.9 (2020-01-07)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.8 (2020-01-07)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.7 (2019-12-20)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.6 (2019-12-20)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.5 (2019-12-20)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.4 (2019-12-20)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.3 (2019-12-20)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## 1.0.2 (2019-12-19)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
## [1.0.1](https://github.com/fremtind/jokul/compare/@fremtind/jkl-progress-bar@1.0.0...@fremtind/jkl-progress-bar@1.0.1) (2019-12-17)
**Note:** Version bump only for package @fremtind/jkl-progress-bar
# 1.0.0 (2019-12-03)
### Features
- **progressbar:** add progressbar ([1ad8754](https://github.com/fremtind/jokul/commit/1ad8754a15e414ff017bce8d829472dfc9a7d01c))
| 21.312169 | 208 | 0.70854 | dan_Latn | 0.088379 |
e2169f44bb17dac0ec92adc4ba8cf99273cc8533 | 26,570 | md | Markdown | articles/synapse-analytics/spark/microsoft-spark-utilities.md | Jontii/azure-docs.sv-se | d2551c12e17b442dc0b577205d034dcd6c73cff9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/synapse-analytics/spark/microsoft-spark-utilities.md | Jontii/azure-docs.sv-se | d2551c12e17b442dc0b577205d034dcd6c73cff9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/synapse-analytics/spark/microsoft-spark-utilities.md | Jontii/azure-docs.sv-se | d2551c12e17b442dc0b577205d034dcd6c73cff9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Introduktion till Microsoft Spark-verktyg
description: 'Självstudie: MSSparkutils i Azure Synapse Analytics-anteckningsböcker'
author: ruxu
services: synapse-analytics
ms.service: synapse-analytics
ms.topic: reference
ms.subservice: spark
ms.date: 09/10/2020
ms.author: ruxu
ms.reviewer: ''
zone_pivot_groups: programming-languages-spark-all-minus-sql
ms.openlocfilehash: d36086052f4e5719fd17989e3326a4b5728ee3ca
ms.sourcegitcommit: 4e70fd4028ff44a676f698229cb6a3d555439014
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 01/28/2021
ms.locfileid: "98954301"
---
# <a name="introduction-to-microsoft-spark-utilities"></a>Introduktion till Microsoft Spark-verktyg
Microsoft Spark-verktyg (MSSparkUtils) är ett inbyggt paket som hjälper dig att enkelt utföra vanliga uppgifter. Du kan använda MSSparkUtils för att arbeta med fil system, för att hämta miljövariabler och för att arbeta med hemligheter. MSSparkUtils är tillgängliga i `PySpark (Python)` , `Scala` och `.NET Spark (C#)` antecknings böcker och Synapse-pipeliner.
## <a name="pre-requisites"></a>Förutsättningar
### <a name="configure-access-to-azure-data-lake-storage-gen2"></a>Konfigurera åtkomst till Azure Data Lake Storage Gen2
Synapse Notebooks använder Azure Active Directory (Azure AD) genom strömning för att få åtkomst till ADLS Gen2-konton. Du måste vara en **Blob Storage deltagare** för att få åtkomst till ADLS Gen2s kontot (eller mappen).
Synapse pipelines använder arbets ytans identitet (MSI) för att komma åt lagrings kontona. Om du vill använda MSSparkUtils i dina pipeline-aktiviteter måste din arbets ytans identitet **Blob Storage deltagare** för att få åtkomst till ADLS Gen2-kontot (eller mappen).
Följ dessa steg för att se till att din Azure AD-och Workspace-MSI har åtkomst till det ADLS Gen2 kontot:
1. Öppna [Azure Portal](https://portal.azure.com/) och det lagrings konto som du vill få åtkomst till. Du kan gå till den angivna behållaren som du vill komma åt.
2. Välj **åtkomst kontroll (IAM)** i den vänstra panelen.
3. Tilldela **ditt Azure AD-konto** och **din arbets yta identitet** (samma som din arbets yta namn) till rollen **Storage BLOB data Contributor** på lagrings kontot om det inte redan har tilldelats.
4. Välj **Spara**.
Du kan komma åt data på ADLS Gen2 med Synapse Spark via följande URL:
<code>abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<path></code>
### <a name="configure-access-to-azure-blob-storage"></a>Konfigurera åtkomst till Azure Blob Storage
Synapse utnyttjar **signaturen för delad åtkomst (SAS)** för att få åtkomst till Azure Blob Storage. För att undvika att exponera SAS-nycklar i koden rekommenderar vi att du skapar en ny länkad tjänst i Synapse-arbetsytan till det Azure Blob Storage-konto som du vill ha åtkomst till.
Följ dessa steg om du vill lägga till en ny länkad tjänst för ett Azure Blob Storage-konto:
1. Öppna [Azure Synapse Studio](https://web.azuresynapse.net/).
2. Välj **Hantera** i den vänstra panelen och välj **länkade tjänster** under de **externa anslutningarna**.
3. Sök i **Azure Blob Storage** i den **nya länkade tjänst** panelen till höger.
4. Välj **Fortsätt**.
5. Välj Azure Blob Storage-kontot för att komma åt och konfigurera namnet på den länkade tjänsten. Föreslå Använd **konto nyckel** för **autentiseringsmetoden**.
6. Kontrol lera att inställningarna är korrekta genom att välja **Testa anslutning** .
7. Välj **skapa** först och klicka på **publicera alla** för att spara ändringarna.
Du kan komma åt data på Azure Blob Storage med Synapse Spark via följande URL:
<code>wasb[s]://<container_name>@<storage_account_name>.blob.core.windows.net/<path></code>
Här är ett kod exempel:
:::zone pivot = "programming-language-python"
```python
from pyspark.sql import SparkSession
# Azure storage access info
blob_account_name = 'Your account name' # replace with your blob name
blob_container_name = 'Your container name' # replace with your container name
blob_relative_path = 'Your path' # replace with your relative folder path
linked_service_name = 'Your linked service name' # replace with your linked service name
blob_sas_token = mssparkutils.credentials.getConnectionStringOrCreds(linked_service_name)
# Allow SPARK to access from Blob remotely
wasb_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set('fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name), blob_sas_token)
print('Remote blob path: ' + wasb_path)
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
val blob_account_name = "" // replace with your blob name
val blob_container_name = "" //replace with your container name
val blob_relative_path = "/" //replace with your relative folder path
val linked_service_name = "" //replace with your linked service name
val blob_sas_token = mssparkutils.credentials.getConnectionStringOrCreds(linked_service_name)
val wasbs_path = f"wasbs://$blob_container_name@$blob_account_name.blob.core.windows.net/$blob_relative_path"
spark.conf.set(f"fs.azure.sas.$blob_container_name.$blob_account_name.blob.core.windows.net",blob_sas_token)
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
var blob_account_name = ""; // replace with your blob name
var blob_container_name = ""; // replace with your container name
var blob_relative_path = ""; // replace with your relative folder path
var linked_service_name = ""; // replace with your linked service name
var blob_sas_token = Credentials.GetConnectionStringOrCreds(linked_service_name);
spark.SparkContext.GetConf().Set($"fs.azure.sas.{blob_container_name}.{blob_account_name}.blob.core.windows.net", blob_sas_token);
var wasbs_path = $"wasbs://{blob_container_name}@{blob_account_name}.blob.core.windows.net/{blob_relative_path}";
Console.WriteLine(wasbs_path);
```
::: zone-end
### <a name="configure-access-to-azure-key-vault"></a>Konfigurera åtkomst till Azure Key Vault
Du kan lägga till en Azure Key Vault som en länkad tjänst för att hantera dina autentiseringsuppgifter i Synapse. Följ dessa steg om du vill lägga till en Azure Key Vault som en länkad Synapse-tjänst:
1. Öppna [Azure Synapse Studio](https://web.azuresynapse.net/).
2. Välj **Hantera** i den vänstra panelen och välj **länkade tjänster** under de **externa anslutningarna**.
3. Sök **Azure Key Vault** i den **nya länkade tjänst** panelen till höger.
4. Välj det Azure Key Vault kontot för att komma åt och konfigurera namnet på den länkade tjänsten.
5. Kontrol lera att inställningarna är korrekta genom att välja **Testa anslutning** .
6. Välj **skapa** först och klicka på **publicera alla** för att spara ändringen.
Synapse Notebooks använder Azure Active Directory (Azure AD) genom strömning för att få åtkomst till Azure Key Vault. Synapse pipelines använder arbets ytans identitet (MSI) för att få åtkomst till Azure Key Vault. För att se till att din kod fungerar både i den bärbara datorn och i Synapse pipeline rekommenderar vi att du beviljar behörigheten hemlig åtkomst för både ditt Azure AD-konto och din arbets ytans identitet.
Följ de här stegen för att ge hemlig åtkomst till din arbets yta identitet:
1. Öppna [Azure Portal](https://portal.azure.com/) och Azure Key Vault som du vill komma åt.
2. Välj **åtkomst principer** i den vänstra panelen.
3. Välj **Lägg till åtkomst princip**:
- Välj **nyckel, hemlighet, & certifikat hantering** som config-mall.
- Välj **ditt Azure AD-konto** och **din arbets ytans identitet** (samma som din arbets ytans namn) i Välj huvud konto eller kontrol lera att den redan är tilldelad.
4. Välj **Välj** och **Lägg till**.
5. Välj knappen **Spara** för att genomföra ändringarna.
## <a name="file-system-utilities"></a>Verktyg för fil system
`mssparkutils.fs` innehåller verktyg för att arbeta med olika fil system, inklusive Azure Data Lake Storage Gen2 (ADLS Gen2) och Azure-Blob Storage. Se till att du konfigurerar åtkomst till [Azure Data Lake Storage Gen2](#configure-access-to-azure-data-lake-storage-gen2) och [Azure Blob Storage](#configure-access-to-azure-blob-storage) på lämpligt sätt.
Kör följande kommandon för en översikt över tillgängliga metoder:
:::zone pivot = "programming-language-python"
```python
from notebookutils import mssparkutils
mssparkutils.fs.help()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.help()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
using Microsoft.Spark.Extensions.Azure.Synapse.Analytics.Notebook.MSSparkUtils;
FS.Help()
```
::: zone-end
Resultat i:
```
mssparkutils.fs provides utilities for working with various FileSystems.
Below is overview about the available methods:
cp(from: String, to: String, recurse: Boolean = false): Boolean -> Copies a file or directory, possibly across FileSystems
mv(from: String, to: String, recurse: Boolean = false): Boolean -> Moves a file or directory, possibly across FileSystems
ls(dir: String): Array -> Lists the contents of a directory
mkdirs(dir: String): Boolean -> Creates the given directory if it does not exist, also creating any necessary parent directories
put(file: String, contents: String, overwrite: Boolean = false): Boolean -> Writes the given String out to a file, encoded in UTF-8
head(file: String, maxBytes: int = 1024 * 100): String -> Returns up to the first 'maxBytes' bytes of the given file as a String encoded in UTF-8
append(file: String, content: String, createFileIfNotExists: Boolean): Boolean -> Append the content to a file
rm(dir: String, recurse: Boolean = false): Boolean -> Removes a file or directory
Use mssparkutils.fs.help("methodName") for more info about a method.
```
### <a name="list-files"></a>Lista filer
Visa en lista med innehållet i en katalog.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.ls('Your directory path')
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.ls("Your directory path")
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Ls("Your directory path")
```
::: zone-end
### <a name="view-file-properties"></a>Visa fil egenskaper
Returnerar fil egenskaper, inklusive fil namn, fil Sök väg, fil storlek och om det är en katalog och en fil.
:::zone pivot = "programming-language-python"
```python
files = mssparkutils.fs.ls('Your directory path')
for file in files:
print(file.name, file.isDir, file.isFile, file.path, file.size)
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
val files = mssparkutils.fs.ls("/")
files.foreach{
file => println(file.name,file.isDir,file.isFile,file.size)
}
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
var Files = FS.Ls("/");
foreach(var File in Files) {
Console.WriteLine(File.Name+" "+File.IsDir+" "+File.IsFile+" "+File.Size);
}
```
::: zone-end
### <a name="create-new-directory"></a>Skapa ny katalog
Skapar den aktuella katalogen om den inte finns och eventuella nödvändiga överordnade kataloger.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.mkdirs('new directory name')
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.mkdirs("new directory name")
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Mkdirs("new directory name")
```
::: zone-end
### <a name="copy-file"></a>Kopiera fil
Kopierar en fil eller katalog. Stöder kopiering i fil system.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.cp('source file or directory', 'destination file or directory', True)# Set the third parameter as True to copy all files and directories recursively
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.cp("source file or directory", "destination file or directory", true) // Set the third parameter as True to copy all files and directories recursively
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Cp("source file or directory", "destination file or directory", true) // Set the third parameter as True to copy all files and directories recursively
```
::: zone-end
### <a name="preview-file-content"></a>Förhandsgranska fil innehåll
Returnerar upp till de första "maxBytes" byte för den aktuella filen som en sträng som kodats i UTF-8.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.head('file path', maxBytes to read)
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.head("file path", maxBytes to read)
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Head("file path", maxBytes to read)
```
::: zone-end
### <a name="move-file"></a>Flytta fil
Flyttar en fil eller katalog. Stöder flyttning i fil system.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.mv('source file or directory', 'destination directory', True) # Set the last parameter as True to firstly create the parent directory if it does not exist
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.mv("source file or directory", "destination directory", true) // Set the last parameter as True to firstly create the parent directory if it does not exist
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Mv("source file or directory", "destination directory", true)
```
::: zone-end
### <a name="write-file"></a>Skriv fil
Skriver den aktuella strängen ut till en fil som kodats i UTF-8.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.put("file path", "content to write", True) # Set the last parameter as True to overwrite the file if it existed already
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.put("file path", "content to write", true) // Set the last parameter as True to overwrite the file if it existed already
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Put("file path", "content to write", true) // Set the last parameter as True to overwrite the file if it existed already
```
::: zone-end
### <a name="append-content-to-a-file"></a>Lägg till innehåll i en fil
Lägger till den strängen i en fil som kodats i UTF-8.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.append('file path','content to append',True) # Set the last parameter as True to create the file if it does not exist
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.append("file path","content to append",true) // Set the last parameter as True to create the file if it does not exist
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Append("file path","content to append",true) // Set the last parameter as True to create the file if it does not exist
```
::: zone-end
### <a name="delete-file-or-directory"></a>Ta bort fil eller katalog
Tar bort en fil eller en katalog.
:::zone pivot = "programming-language-python"
```python
mssparkutils.fs.rm('file path', True) # Set the last parameter as True to remove all files and directories recursively
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.fs.rm("file path", true) // Set the last parameter as True to remove all files and directories recursively
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
FS.Rm("file path", true) // Set the last parameter as True to remove all files and directories recursively
```
::: zone-end
## <a name="credentials-utilities"></a>Verktyg för autentiseringsuppgifter
Du kan använda MSSparkUtils för autentiseringsuppgifter för att hämta åtkomsttoken för länkade tjänster och hantera hemligheter i Azure Key Vault.
Kör följande kommando för att få en översikt över tillgängliga metoder:
:::zone pivot = "programming-language-python"
```python
mssparkutils.credentials.help()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.credentials.help()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Credentials.Help()
```
::: zone-end
Få resultat:
```
getToken(audience, name): returns AAD token for a given audience, name (optional)
isValidToken(token): returns true if token hasn't expired
getConnectionStringOrCreds(linkedService): returns connection string or credentials for linked service
getSecret(akvName, secret, linkedService): returns AKV secret for a given AKV linked service, akvName, secret key
getSecret(akvName, secret): returns AKV secret for a given akvName, secret key
putSecret(akvName, secretName, secretValue, linkedService): puts AKV secret for a given akvName, secretName
putSecret(akvName, secretName, secretValue): puts AKV secret for a given akvName, secretName
```
### <a name="get-token"></a>Hämta token
Returnerar Azure AD-token för en specifik mål grupp, namn (valfritt). I tabellen nedan visas alla tillgängliga mål grupps typer:
|Audience-typ|Mål grupps nyckel|
|--|--|
|Typ av matchning av mål grupp|Filmen|
|Lagrings mål resurs|Lagrings|
|Audience-resurs för informations lager|DW|
|Data Lake Audience-resurs|'AzureManagement'|
|Mål grupps resurs|DataLakeStore|
|Azure OSSDB-målgrupps resurs|'AzureOSSDB'|
|Azure Synapse-resurs|'Synapse'|
|Azure Data Factory resurs|AUTOMATISK|
:::zone pivot = "programming-language-python"
```python
mssparkutils.credentials.getToken('audience Key')
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.credentials.getToken("audience Key")
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Credentials.GetToken("audience Key")
```
::: zone-end
### <a name="validate-token"></a>Verifiera token
Returnerar true om token inte har gått ut.
:::zone pivot = "programming-language-python"
```python
mssparkutils.credentials.isValidToken('your token')
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.credentials.isValidToken("your token")
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Credentials.IsValidToken("your token")
```
::: zone-end
### <a name="get-connection-string-or-credentials-for-linked-service"></a>Hämta anslutnings sträng eller autentiseringsuppgifter för den länkade tjänsten
Returnerar anslutnings sträng eller autentiseringsuppgifter för den länkade tjänsten.
:::zone pivot = "programming-language-python"
```python
mssparkutils.credentials.getConnectionStringOrCreds('linked service name')
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.credentials.getConnectionStringOrCreds("linked service name")
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Credentials.GetConnectionStringOrCreds("linked service name")
```
::: zone-end
### <a name="get-secret-using-workspace-identity"></a>Hämta hemlighet med arbets ytans identitet
Returnerar Azure Key Vault hemlighet för ett angivet Azure Key Vault namn, hemligt namn och länkat tjänst namn med hjälp av arbets ytans identitet. Se till att du konfigurerar åtkomst till [Azure Key Vault](#configure-access-to-azure-key-vault) på lämpligt sätt.
:::zone pivot = "programming-language-python"
```python
mssparkutils.credentials.getSecret('azure key vault name','secret name','linked service name')
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.credentials.getSecret("azure key vault name","secret name","linked service name")
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Credentials.GetSecret("azure key vault name","secret name","linked service name")
```
::: zone-end
### <a name="get-secret-using-user-credentials"></a>Hämta hemlighet med användarautentiseringsuppgifter
Returnerar Azure Key Vault hemlighet för ett angivet Azure Key Vault namn, ett hemligt namn och ett länkat tjänst namn som använder användarautentiseringsuppgifter.
:::zone pivot = "programming-language-python"
```python
mssparkutils.credentials.getSecret('azure key vault name','secret name')
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.credentials.getSecret("azure key vault name","secret name")
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Credentials.GetSecret("azure key vault name","secret name")
```
::: zone-end
<!-- ### Put secret using workspace identity
Puts Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using workspace identity. Make sure you configure the access to [Azure Key Vault](#configure-access-to-azure-key-vault) appropriately. -->
:::zone pivot = "programming-language-python"
### <a name="put-secret-using-workspace-identity"></a>Lägg till hemlighet med arbets ytans identitet
Placerar Azure Key Vault hemlighet för ett angivet Azure Key Vault namn, hemligt namn och länkat tjänst namn med hjälp av arbets ytans identitet. Se till att du konfigurerar åtkomsten till [Azure Key Vault](#configure-access-to-azure-key-vault) på lämpligt sätt.
```python
mssparkutils.credentials.putSecret('azure key vault name','secret name','secret value','linked service name')
```
::: zone-end
:::zone pivot = "programming-language-scala"
### <a name="put-secret-using-workspace-identity"></a>Lägg till hemlighet med arbets ytans identitet
Placerar Azure Key Vault hemlighet för ett angivet Azure Key Vault namn, hemligt namn och länkat tjänst namn med hjälp av arbets ytans identitet. Se till att du konfigurerar åtkomsten till [Azure Key Vault](#configure-access-to-azure-key-vault) på lämpligt sätt.
```scala
mssparkutils.credentials.putSecret("azure key vault name","secret name","secret value","linked service name")
```
::: zone-end
<!-- :::zone pivot = "programming-language-csharp"
```csharp
```
::: zone-end -->
<!-- ### Put secret using user credentials
Puts Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using user credentials. -->
:::zone pivot = "programming-language-python"
### <a name="put-secret-using-user-credentials"></a>Lägg till hemlighet med användarautentiseringsuppgifter
Placerar Azure Key Vault hemlighet för ett angivet Azure Key Vault namn, hemligt namn och länkat tjänst namn med hjälp av användarautentiseringsuppgifter.
```python
mssparkutils.credentials.putSecret('azure key vault name','secret name','secret value')
```
::: zone-end
:::zone pivot = "programming-language-scala"
### <a name="put-secret-using-user-credentials"></a>Lägg till hemlighet med användarautentiseringsuppgifter
Placerar Azure Key Vault hemlighet för ett angivet Azure Key Vault namn, hemligt namn och länkat tjänst namn med hjälp av användarautentiseringsuppgifter.
```scala
mssparkutils.credentials.putSecret("azure key vault name","secret name","secret value")
```
::: zone-end
<!-- :::zone pivot = "programming-language-csharp"
```csharp
```
::: zone-end -->
## <a name="environment-utilities"></a>Miljö verktyg
Kör följande kommandon för att få en översikt över tillgängliga metoder:
:::zone pivot = "programming-language-python"
```python
mssparkutils.env.help()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.env.help()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Env.Help()
```
::: zone-end
Få resultat:
```
GetUserName(): returns user name
GetUserId(): returns unique user id
GetJobId(): returns job id
GetWorkspaceName(): returns workspace name
GetPoolName(): returns Spark pool name
GetClusterId(): returns cluster id
```
### <a name="get-user-name"></a>Hämta användar namn
Returnerar aktuellt användar namn.
:::zone pivot = "programming-language-python"
```python
mssparkutils.env.getUserName()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.env.getUserName()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Env.GetUserName()
```
::: zone-end
### <a name="get-user-id"></a>Hämta användar-ID
Returnerar aktuellt användar-ID.
:::zone pivot = "programming-language-python"
```python
mssparkutils.env.getUserId()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.env.getUserId()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Env.GetUserId()
```
::: zone-end
### <a name="get-job-id"></a>Hämta jobb-ID
Returnerar jobb-ID.
:::zone pivot = "programming-language-python"
```python
mssparkutils.env.getJobId()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.env.getJobId()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Env.GetJobId()
```
::: zone-end
### <a name="get-workspace-name"></a>Hämta arbets ytans namn
Returnerar namn på arbets yta.
:::zone pivot = "programming-language-python"
```python
mssparkutils.env.getWorkspaceName()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.env.getWorkspaceName()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Env.GetWorkspaceName()
```
::: zone-end
### <a name="get-pool-name"></a>Hämta poolnamn
Returnerar namn på Spark-pool.
:::zone pivot = "programming-language-python"
```python
mssparkutils.env.getPoolName()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.env.getPoolName()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Env.GetPoolName()
```
::: zone-end
### <a name="get-cluster-id"></a>Hämta kluster-ID
Returnerar aktuellt kluster-ID.
:::zone pivot = "programming-language-python"
```python
mssparkutils.env.getClusterId()
```
::: zone-end
:::zone pivot = "programming-language-scala"
```scala
mssparkutils.env.getClusterId()
```
::: zone-end
:::zone pivot = "programming-language-csharp"
```csharp
Env.GetClusterId()
```
::: zone-end
## <a name="next-steps"></a>Nästa steg
- [Kolla Synapse-exempel Notebooks](https://github.com/Azure-Samples/Synapse/tree/master/Notebooks)
- [Snabb start: skapa en Apache Spark pool i Azure Synapse Analytics med hjälp av webb verktyg](../quickstart-apache-spark-notebook.md)
- [Vad är Apache Spark i Azure Synapse Analytics](apache-spark-overview.md)
- [Azure Synapse Analytics](../index.yml)
| 29.165752 | 422 | 0.741287 | swe_Latn | 0.593615 |
e216a38e16ccbcf6a5460d92b77128736b3b1d1a | 1,281 | md | Markdown | content/media/weixin/wx_大众宽客/看得见的指数相关性分布图2017-06-30.md | leeleilei/52etf.net | 934d558bd4181a1e0b39a7a10e0d7076c53995fb | [
"CC-BY-4.0"
] | 10 | 2020-06-01T16:08:00.000Z | 2021-10-31T15:17:26.000Z | content/media/weixin/wx_大众宽客/看得见的指数相关性分布图2017-06-30.md | leeleilei/52etf.net | 934d558bd4181a1e0b39a7a10e0d7076c53995fb | [
"CC-BY-4.0"
] | null | null | null | content/media/weixin/wx_大众宽客/看得见的指数相关性分布图2017-06-30.md | leeleilei/52etf.net | 934d558bd4181a1e0b39a7a10e0d7076c53995fb | [
"CC-BY-4.0"
] | null | null | null |
---
title: 看得见的指数相关性分布图 2017-06-30-大众宽客
date: 2017-07-01
tags: ["大众宽客", ]
display: false
---
##
看得见的指数相关性分布图 2017-06-30
dzkuanke
通过深入的数据、直观的图表,展示指数基金投资价值
最近6个月指数相关性分布图,一目了然
<img data-s="300,640" data-type="png" src="http://mmbiz.qpic.cn/mmbiz_png/PKw3FQPmhIiaNP32ukXhbgJoK8EyIYllr6Bl2xNCXHlMGATNt5uw073eS4mDRJewrs1ibfuFREWO3u3P7diafeauQ/0?wx_fmt=png" data-ratio="0.8203883495145631" data-w="1236"/>
说明:将每一个指数当做一座城市,指数之间的相关性,就是城市之间的距离,指数相关性越高,城市之间距离越近。然后根据城市之间的距离,将城市画在地图上(需要一定的近似)。
**背景知识:**
> - 如果各种证券的收益是彼此无关的,那么采取分散化就可以消除风险。- 指数之间的相关性不仅在分散投资、降低风险时很重要,在提高指数轮动投资效率时也非常关键。
鸡蛋不能放在一个篮子里,当不能确定某只指数走势时,大家都希望通过多买几只指数来分散风险。
但仅仅多买几只指数是不够的。
最早创立现代资产组合理论的诺贝尔经济学奖获得者——哈里·马克威茨(Harry Markowitz)认为:不确定性是证券投资的一个突出特征,而另一个突出特征则是证券收益之间的相关性,“像绝大多数经济变量一样,各种证券的收益倾向于一起上升和一起下降。但这种相关性并不是绝对的,**如果各种证券的收益是彼此无关的,那么采取分散化就可以消除风险**。”
所以,指数之间的相关性也同等重要。当指数组合其中一种下跌时,另一种有可能上涨,这样在拿不准什么会涨的时候,分散投资的收益也能有所保证。
指数之间的相关性不仅在分散投资、降低风险时很重要,在**提高指数轮动投资效率时也非常关键。**
**当指数之间的相关性增强时,指数趋于同涨同跌,此时轮动选股策略无法有效挑选出更优秀的指数,也就无法创造额外收益。创建一个低相关性的指数或基金组合,是提高轮动投资策略有效性的首要条件。**
那么如何判断指数之间的相关性呢?
有时我们可以凭直觉判断,比如对应大盘股的“上证50”和小盘股的“创业板”之间相关性应该比较低。但直觉不一定正确,比如 医药指数和房地产指数 之间相关性是高还是低呢?
只有通过统计学方法,计算股票之间的相关性系数,才能得到相对准确的相关性分析结果。
您的鼓励,是我继续分享的动力
**微信扫一扫赞赏作者**
您的鼓励,是我继续分享的动力
| 12.317308 | 225 | 0.787666 | yue_Hant | 0.340256 |
e217678a0fba6373bff25aa21bde6e692df92b34 | 454 | md | Markdown | docs/api/interfaces/_entities_securitytoken_features_.enableerc20dividendsopts.md | PolymathNetwork/Polymath-SDK | b0d8f19dc6cc14303dbc3968fcc31127b9516e11 | [
"Apache-2.0"
] | 12 | 2019-03-27T20:21:45.000Z | 2021-07-20T00:08:05.000Z | docs/api/interfaces/_entities_securitytoken_features_.enableerc20dividendsopts.md | PolymathNetwork/Polymath-SDK | b0d8f19dc6cc14303dbc3968fcc31127b9516e11 | [
"Apache-2.0"
] | 202 | 2019-03-29T22:41:30.000Z | 2021-05-08T11:08:13.000Z | docs/api/interfaces/_entities_securitytoken_features_.enableerc20dividendsopts.md | PolymathNetwork/Polymath-SDK | b0d8f19dc6cc14303dbc3968fcc31127b9516e11 | [
"Apache-2.0"
] | 6 | 2019-03-27T20:22:04.000Z | 2021-12-22T18:15:44.000Z | # EnableErc20DividendsOpts
## Hierarchy
* **EnableErc20DividendsOpts**
## Index
### Properties
* [storageWalletAddress](_entities_securitytoken_features_.enableerc20dividendsopts.md#storagewalletaddress)
## Properties
### storageWalletAddress
• **storageWalletAddress**: _string_
_Defined in_ [_src/entities/SecurityToken/Features.ts:41_](https://github.com/PolymathNetwork/polymath-sdk/blob/e8bbc1e/src/entities/SecurityToken/Features.ts#L41)
| 21.619048 | 163 | 0.806167 | yue_Hant | 0.249647 |
e217996bbfd7c167723055db554ac253859e54bf | 733 | md | Markdown | doc/stat_qq.md | jiahao/Gadfly.jl | dc0d65cb8e235a9e10dd3cb8e08ba13a6a1785e5 | [
"MIT"
] | null | null | null | doc/stat_qq.md | jiahao/Gadfly.jl | dc0d65cb8e235a9e10dd3cb8e08ba13a6a1785e5 | [
"MIT"
] | null | null | null | doc/stat_qq.md | jiahao/Gadfly.jl | dc0d65cb8e235a9e10dd3cb8e08ba13a6a1785e5 | [
"MIT"
] | null | null | null | ---
title: qq
author: Dave Kleinschmidt
part: Statistic
order: 1001
...
Generates quantile-quantile plots for `x` and `y`. If each is a numeric vector,
their sample quantiles will be compared. If one is a `Distribution`, then its
theoretical quantiles will be compared with the sample quantiles of the other.
# Aesthetics
* `x`: Data or `Distribution` to be plotted on the x-axis.
* `y`: Data or `Distribution` to be plotted on the y-axis.
# Examples
```{.julia hide="true" results="none"}
using Gadfly, Distributions
Gadfly.set_default_plot_size(14cm, 8cm)
srand(1234)
```
```julia
plot(x=rand(Normal(), 100), y=rand(Normal(), 100), Stat.qq, Geom.point)
plot(x=rand(Normal(), 100), y=Normal(), Stat.qq, Geom.point)
```
| 24.433333 | 80 | 0.708049 | eng_Latn | 0.916192 |
e217c147877f20b03b8426aefaa9d2137cff0d4a | 1,063 | md | Markdown | README.md | pfeairheller/ietf-ptel | 6aa7eaaf22d9f6ecca2c537198c2a167108630c5 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | README.md | pfeairheller/ietf-ptel | 6aa7eaaf22d9f6ecca2c537198c2a167108630c5 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | README.md | pfeairheller/ietf-ptel | 6aa7eaaf22d9f6ecca2c537198c2a167108630c5 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-11-08T18:58:12.000Z | 2021-11-08T18:58:12.000Z | # Public Transaction Event Logs (PTEL)
This is the working area for the individual Internet-Draft, "Public Transaction Event Logs (PTEL)".
* [Editor's Copy](https://WebOfTrust.github.io/ietf-ptel/#go.draft-pfeairheller-ptel.html)
* [Datatracker Page](https://datatracker.ietf.org/doc/draft-pfeairheller-ptel)
* [Individual Draft](https://datatracker.ietf.org/doc/html/draft-pfeairheller-ptel)
* [Compare Editor's Copy to Individual Draft](https://WebOfTrust.github.io/ietf-ptel/#go.draft-pfeairheller-ptel.diff)
## Contributing
See the
[guidelines for contributions](https://github.com/WebOfTrust/ietf-ptel/blob/main/CONTRIBUTING.md).
Contributions can be made by creating pull requests.
The GitHub interface supports creating pull requests using the Edit (✏) button.
## Command Line Usage
Formatted text and HTML versions of the draft can be built using `make`.
```sh
$ make
```
Command line usage requires that you have the necessary software installed. See
[the instructions](https://github.com/martinthomson/i-d-template/blob/main/doc/SETUP.md).
| 34.290323 | 118 | 0.772342 | eng_Latn | 0.578898 |
e217ef3fa9c4043da12542c7ca29523815dc1d33 | 15,783 | md | Markdown | src/site/content/pt/blog/browser-fs-access/index.md | mgifford/web.dev | 7c5679155bf7fd416c8bcab53d466c51d6a099e0 | [
"Apache-2.0"
] | null | null | null | src/site/content/pt/blog/browser-fs-access/index.md | mgifford/web.dev | 7c5679155bf7fd416c8bcab53d466c51d6a099e0 | [
"Apache-2.0"
] | null | null | null | src/site/content/pt/blog/browser-fs-access/index.md | mgifford/web.dev | 7c5679155bf7fd416c8bcab53d466c51d6a099e0 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: Ler e gravar arquivos e diretórios com a biblioteca browser-fs-access
authors:
- thomassteiner
description: |2
"Todos os navegadores modernos podem ler arquivos e diretórios locais; Contudo,"
verdadeiro acesso de gravação, ou seja, mais do que apenas baixar arquivos,
está limitado a navegadores que implementam a API de acesso ao sistema de arquivos.
Esta postagem apresenta uma biblioteca de suporte chamada browser-fs-access
que atua como uma camada de abstração no topo da API File System Access
e que, de forma transparente, volta às abordagens legadas para lidar com arquivos.
scheduled: verdadeiro
date: 2020-07-27
updated: 2021-01-27
hero: image/admin/Y4wGmGP8P0Dc99c3eKkT.jpg
tags:
- blog
- progressive-web-apps
- capabilities
feedback:
- api
---
Os navegadores já conseguem lidar com arquivos e diretórios há muito tempo. A [API Arquivo](https://w3c.github.io/FileAPI/) fornece recursos para representar objetos de arquivo em aplicativos da web, bem como selecioná-los programaticamente e acessar seus dados. No momento em que você olha mais de perto, porém, nem tudo o que reluz é ouro.
## A maneira tradicional de lidar com arquivos
{% Aside %} Se você sabe como funcionava do jeito antigo, pode [pular direto para o novo jeito](#the-file-system-access-api). {% endAside %}
### Abrindo arquivos
Como desenvolvedor, você pode abrir e ler arquivos por meio do elemento [`<input type="file">`](https://developer.mozilla.org/docs/Web/HTML/Element/input/file). Em sua forma mais simples, abrir um arquivo pode ser parecido com o exemplo de código abaixo. O objeto de `input` [`FileList`](https://developer.mozilla.org/docs/Web/API/FileList), que no caso a seguir consiste em apenas um [`File`](https://developer.mozilla.org/docs/Web/API/File). Um `File` é um tipo específico de [`Blob`](https://developer.mozilla.org/docs/Web/API/Blob) e pode ser usado em qualquer contexto em que um Blob.
```js
const openFile = async () => {
return new Promise((resolve) => {
const input = document.createElement('input');
input.type = 'file';
input.addEventListener('change', () => {
resolve(input.files[0]);
});
input.click();
});
};
```
### Abrindo diretórios
Para abrir pastas (ou diretórios), você pode definir o atributo [`<input webkitdirectory>`](https://developer.mozilla.org/docs/Web/HTML/Element/input#attr-webkitdirectory) Tirando isso, todo o resto funciona da mesma forma que acima. Apesar de seu nome com o prefixo do fornecedor, `webkitdirectory` não pode ser usado apenas nos navegadores Chromium e WebKit, mas também no Edge baseado em EdgeHTML legado e no Firefox.
### Salvando (em vez de baixando) arquivos
Para salvar um arquivo, tradicionalmente, você está limitado a *baixar* um arquivo, o que funciona graças ao atributo [`<a download>`](https://developer.mozilla.org/docs/Web/HTML/Element/a#attr-download:~:text=download) Dado um Blob, você pode definir o `href` da âncora como um `blob:` URL que pode ser obtido no método [`URL.createObjectURL()`](https://developer.mozilla.org/docs/Web/API/URL/createObjectURL) {% Aside 'caution' %} Para evitar vazamentos de memória, sempre revogue o URL após o download. {% endAside %}
```js
const saveFile = async (blob) => {
const a = document.createElement('a');
a.download = 'my-file.txt';
a.href = URL.createObjectURL(blob);
a.addEventListener('click', (e) => {
setTimeout(() => URL.revokeObjectURL(a.href), 30 * 1000);
});
a.click();
};
```
### O problema
Uma grande desvantagem da *abordagem de download* é que não há como fazer um fluxo clássico abrir → editar → salvar, ou seja, não há como *sobrescrever* o arquivo original. Em vez disso, você acaba com uma nova *cópia* do arquivo original na pasta de Downloads padrão do sistema operacional sempre que "salva".
## A API de acesso ao sistema de arquivos
A API de acesso ao sistema de arquivos torna ambas as operações, abrir e salvar, muito mais simples. Também possibilita *o salvamento real*, ou seja, você não só pode escolher onde salvar um arquivo, mas também sobrescrever um arquivo existente.
{% Aside %} Para obter uma introdução mais completa à API de acesso ao sistema de arquivos, consulte o artigo [A API de acesso ao sistema de arquivos: simplificando o acesso a arquivos locais](/file-system-access/). {% endAside %}
### Abrindo arquivos
Com a [API de acesso ao sistema de arquivos](https://wicg.github.io/file-system-access/), abrir um arquivo é uma questão de chamar o método `window.showOpenFilePicker()`. Esta chamada retorna um identificador de arquivo, do qual você pode obter o `File` real por meio do método `getFile()`.
```js
const openFile = async () => {
try {
// Always returns an array.
const [handle] = await window.showOpenFilePicker();
return handle.getFile();
} catch (err) {
console.error(err.name, err.message);
}
};
```
### Abrindo diretórios
Abra um diretório chamando `window.showDirectoryPicker()` que torna os diretórios selecionáveis na caixa de diálogo do arquivo.
### Salvando arquivos
Salvar arquivos é igualmente simples. A partir de um identificador de arquivo, você cria um fluxo gravável por meio de `createWritable()`, depois grava os dados Blob chamando o `write()` do fluxo e, por fim, fecha o fluxo chamando seu método `close()`.
```js
const saveFile = async (blob) => {
try {
const handle = await window.showSaveFilePicker({
types: [{
accept: {
// Omitted
},
}],
});
const writable = await handle.createWritable();
await writable.write(blob);
await writable.close();
return handle;
} catch (err) {
console.error(err.name, err.message);
}
};
```
## Apresentando o navegador-fs-access
Por mais perfeita que seja a API de acesso ao sistema de arquivos, ela [ainda não está amplamente disponível](https://caniuse.com/native-filesystem-api).
<figure class="w-figure">{% Img src="image/tcFciHGuF3MxnTr1y5ue01OGLBn2/G1jsSjCBR871W1uKQWeN.png", alt="Tabela de suporte do navegador para a API de acesso ao sistema de arquivos. Todos os navegadores são marcados como 'sem suporte' ou 'atrás de um sinalizador'.", width="800", height="224", class="w-screenshot" %} <figcaption class="w-figcaption">Tabela de suporte do navegador para a API de acesso ao sistema de arquivos. (<a href="https://caniuse.com/native-filesystem-api">Fonte</a>)</figcaption></figure>
É por isso que vejo a API de acesso ao sistema de arquivos como um [aprimoramento progressivo](/progressively-enhance-your-pwa). Como tal, quero usá-lo quando o navegador oferecer suporte e, se não for, usar a abordagem tradicional; ao mesmo tempo, nunca pune o usuário com downloads desnecessários de código JavaScript não suportado. A [biblioteca browser-fs-access](https://github.com/GoogleChromeLabs/browser-fs-access) é minha resposta para esse desafio.
### Filosofia de design
Como a API de acesso ao sistema de arquivos provavelmente ainda mudará no futuro, a API browser-fs-access não é modelada a partir dela. Ou seja, a biblioteca não é um [polyfill](https://developer.mozilla.org/docs/Glossary/Polyfill), mas sim um [ponyfill](https://github.com/sindresorhus/ponyfill). Você pode (estaticamente ou dinamicamente) importar exclusivamente qualquer funcionalidade necessária para manter seu aplicativo o menor possível. Os métodos disponíveis são os nomeados apropriadamente [`fileOpen()`](https://github.com/GoogleChromeLabs/browser-fs-access#opening-files), [`directoryOpen()`](https://github.com/GoogleChromeLabs/browser-fs-access#opening-directories) e [`fileSave()`](https://github.com/GoogleChromeLabs/browser-fs-access#saving-files). Internamente, o recurso de biblioteca detecta se a API de acesso ao sistema de arquivos é compatível e, a seguir, importa o caminho do código correspondente.
### Usando a biblioteca browser-fs-access
Os três métodos são intuitivos de usar. Você pode especificar os `mimeTypes` ou `extensions` arquivo aceitos do seu aplicativo e definir um `multiple` para permitir ou proibir a seleção de vários arquivos ou diretórios. Para obter detalhes completos, consulte a [documentação da API browser-fs-access](https://github.com/GoogleChromeLabs/browser-fs-access#api-documentation). O exemplo de código abaixo mostra como você pode abrir e salvar arquivos de imagem.
```js
// The imported methods will use the File
// System Access API or a fallback implementation.
import {
fileOpen,
directoryOpen,
fileSave,
} from 'https://unpkg.com/browser-fs-access';
(async () => {
// Open an image file.
const blob = await fileOpen({
mimeTypes: ['image/*'],
});
// Open multiple image files.
const blobs = await fileOpen({
mimeTypes: ['image/*'],
multiple: true,
});
// Open all files in a directory,
// recursively including subdirectories.
const blobsInDirectory = await directoryOpen({
recursive: true
});
// Save a file.
await fileSave(blob, {
fileName: 'Untitled.png',
});
})();
```
### Demo
Você pode ver o código acima em ação em uma [demonstração](https://browser-fs-access.glitch.me/) no Glitch. Seu [código-fonte](https://glitch.com/edit/#!/browser-fs-access) também está disponível lá. Como, por razões de segurança, os subquadros de origem cruzada não têm permissão para mostrar um seletor de arquivos, a demonstração não pode ser incorporada neste artigo.
## A biblioteca browser-fs-access em liberdade
Em meu tempo livre, contribuo um pouquinho para um [PWA instalável](/progressive-web-apps/#installable) chamado [Excalidraw](https://excalidraw.com/), uma ferramenta de quadro branco que permite esboçar diagramas facilmente com uma sensação de desenho à mão. É totalmente responsivo e funciona bem em uma variedade de dispositivos, desde pequenos telefones celulares a computadores com telas grandes. Isso significa que ele precisa lidar com arquivos em todas as várias plataformas, independentemente de serem ou não compatíveis com a API de acesso ao sistema de arquivos. Isso o torna um ótimo candidato para a biblioteca browser-fs-access.
Posso, por exemplo, iniciar um desenho no meu iPhone, salvá-lo (tecnicamente: baixe-o, pois o Safari não oferece suporte à API de acesso ao sistema de arquivos) na pasta Downloads do meu iPhone, abra o arquivo no meu desktop (após transferi-lo do meu telefone), modifique o arquivo e substitua-o com minhas alterações ou mesmo salve-o como um novo arquivo.
<figure class="w-figure">{% Img src="image/admin/u1Gwxp5MxS39wl8PW2vz.png", alt="Um desenho Excalidraw em um iPhone.", width="300", height="649", class="w-screenshot" %} <figcaption class="w-figcaption"> Iniciando um desenho Excalidraw em um iPhone onde a API de acesso ao sistema de arquivos não é suportada, mas onde um arquivo pode ser salvo (baixado) na pasta Downloads.</figcaption></figure>
<figure class="w-figure">{% Img src="image/admin/W1lt36DtKuveBJJTzonC.png", alt="O desenho Excalidraw modificado no Chrome na área de trabalho.", width="800", height="592", class="w-screenshot" %} <figcaption class="w-figcaption"> Abrindo e modificando o desenho Excalidraw na área de trabalho onde a API de acesso ao sistema de arquivos é suportada e, portanto, o arquivo pode ser acessado por meio da API.</figcaption></figure>
<figure class="w-figure">{% Img src="image/admin/srqhiMKy2i9UygEP4t8e.png", alt="Substituindo o arquivo original com as modificações.", width="800", height="585", class="w-screenshot" %} <figcaption class="w-figcaption">Substituindo o arquivo original com as modificações no arquivo de desenho original Excalidraw. O navegador mostra uma caixa de diálogo perguntando se está tudo bem.</figcaption></figure>
<figure class="w-figure">{% Img src="image/admin/FLzOZ4eXZ1lbdQaA4MQi.png", alt="Salvando as modificações em um novo arquivo de desenho Excalidraw.", width="800", height="592", class="w-screenshot" %} <figcaption class="w-figcaption">Salvando as modificações em um novo arquivo Excalidraw. O arquivo original permanece intocado.</figcaption></figure>
### Amostra de código de aplicação real
Abaixo, você pode ver um exemplo real de navegador-fs-access como ele é usado no Excalidraw. Este trecho foi retirado de [`/src/data/json.ts`](https://github.com/excalidraw/excalidraw/blob/cd87bd6901b47430a692a06a8928b0f732d77097/src/data/json.ts#L24-L52). É de interesse especial como o `saveAsJSON()` passa um identificador de arquivo ou `null` para o método browser-fs-access ' `fileSave()`, o que faz com que ele seja sobrescrito quando um identificador é fornecido, ou salva em um novo arquivo se não for.
```js
export const saveAsJSON = async (
elements: readonly ExcalidrawElement[],
appState: AppState,
fileHandle: any,
) => {
const serialized = serializeAsJSON(elements, appState);
const blob = new Blob([serialized], {
type: "application/json",
});
const name = `${appState.name}.excalidraw`;
(window as any).handle = await fileSave(
blob,
{
fileName: name,
description: "Excalidraw file",
extensions: ["excalidraw"],
},
fileHandle || null,
);
};
export const loadFromJSON = async () => {
const blob = await fileOpen({
description: "Excalidraw files",
extensions: ["json", "excalidraw"],
mimeTypes: ["application/json"],
});
return loadFromBlob(blob);
};
```
### Considerações de interface do usuário
Seja no Excalidraw ou em seu aplicativo, a IU deve se adaptar à situação de suporte do navegador. Se a API de acesso ao sistema de arquivos for suportada (`if ('showOpenFilePicker' in window) {}`), você pode mostrar um **botão Salvar como** além de um botão **Salvar**. As imagens abaixo mostram a diferença entre a barra de ferramentas responsiva do aplicativo principal do Excalidraw no iPhone e na área de trabalho do Chrome. Observe como no iPhone o **botão Salvar como** está faltando.
<figure class="w-figure">{% Img src="image/admin/c2sjjj86zh53VDrPIo6M.png", alt="Excalidraw app toolbar no iPhone com apenas um botão 'Salvar'.", width="300", height="226", class="w-screenshot" %} <figcaption class="w-figcaption">Barra de ferramentas do aplicativo Excalidraw no iPhone com apenas um botão <strong>Salvar</strong>.</figcaption></figure>
<figure class="w-figure">{% Img src="image/admin/unUUghwH5mG2hLnaViHK.png", alt="Barra de ferramentas do aplicativo Excalidraw na área de trabalho do Chrome com um botão 'Salvar' e 'Salvar como'.", width="300", height="66", class="w-screenshot" %} <figcaption class="w-figcaption">Barra de ferramentas do aplicativo Excalidraw no Chrome com um botão <strong>Salvar</strong> e um botão <strong>Salvar</strong> em foco.</figcaption></figure>
## Conclusões
Trabalhar com arquivos de sistema funciona tecnicamente em todos os navegadores modernos. Em navegadores que suportam a API de acesso ao sistema de arquivos, você pode tornar a experiência melhor permitindo o verdadeiro salvamento e sobrescrita (não apenas o download) de arquivos e permitindo que seus usuários criem novos arquivos onde quiserem, ao mesmo tempo em que permanecem funcionais em navegadores que não suporta a API de acesso ao sistema de arquivos. O [navegador-fs-access](https://github.com/GoogleChromeLabs/browser-fs-access) torna sua vida mais fácil, lidando com as sutilezas do aprimoramento progressivo e tornando seu código o mais simples possível.
## Reconhecimentos
Este artigo foi revisado por [Joe Medley](https://github.com/jpmedley) e [Kayce Basques](https://github.com/kaycebasques). Agradeço aos [colaboradores da Excalidraw](https://github.com/excalidraw/excalidraw/graphs/contributors) por seu trabalho no projeto e por revisar minhas solicitações de pull. [Imagem do herói](https://unsplash.com/photos/hXrPSgGFpqQ) por [Ilya Pavlov](https://unsplash.com/@ilyapavlov) em Unsplash.
| 63.898785 | 923 | 0.750808 | por_Latn | 0.996565 |
e2199538b92e637209d9263916a2183ed18fe0be | 10,175 | md | Markdown | intune/introduction-intune.md | pelarsen/IntuneDocs | acaa5e67bbb01fa7d6c8684917f384c6f9e442ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | intune/introduction-intune.md | pelarsen/IntuneDocs | acaa5e67bbb01fa7d6c8684917f384c6f9e442ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | intune/introduction-intune.md | pelarsen/IntuneDocs | acaa5e67bbb01fa7d6c8684917f384c6f9e442ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
# required metadata
title: What is Microsoft Intune
description: Learn how Intune is the mobile device management (MDM) and mobile app management (MAM) component of the Enterprise Mobility + Security solution and how it helps you protect company data.
keywords: what is Intune
author: Erikre
ms.author: erikre
manager: dougeby
ms.date: 03/01/2018
ms.topic: get-started-article
ms.prod:
ms.service: microsoft-intune
ms.technology:
ms.assetid: 3b4e778d-ac13-4c23-974f-5122f74626bc
# optional metadata
#ROBOTS:
#audience:
#ms.devlang:
ms.reviewer: pmay
ms.suite: ems
#ms.tgt_pltfrm:
ms.custom:
---
# What is Intune?
[!INCLUDE[both-portals](./includes/note-for-both-portals.md)]
Intune is a cloud-based service in the enterprise mobility management (EMM) space that helps enable your workforce to be productive while keeping your corporate data protected. With Intune, you can:
* Manage the mobile devices your workforce uses to access company data.
* Manage the mobile apps your workforce uses.
* Protect your company information by helping to control the way your workforce accesses and shares it.
* Ensure devices and apps are compliant with company security requirements.
## Common business problems that Intune helps solve
* [Protect your on-premises email and data so that it can be accessed by mobile devices](common-scenarios.md#protecting-your-on-premises-email-and-data-so-it-can-be-safely-accessed-by-mobile-devices)
* [Protect your Office 365 mail and data so that it can be safely accessed by mobile devices](common-scenarios.md#protecting-your-office-365-email-and-data-so-it-can-be-safely-accessed-by-mobile-devices)
* [Issue corporate-owned phones to your workforce](common-scenarios.md#issue-corporate-owned-phones-to-your-employees)
* [Offer a bring-your-own-device (BYOD) or personal device program to all employees](common-scenarios.md#offer-a-bring-your-own-device-program-to-all-employees)
* [Enable your employees to securely access Office 365 from an unmanaged public kiosk](common-scenarios.md#enable-your-employees-to-securely-access-office-365-from-an-unmanaged-public-kiosk)
* [Issue limited-use shared tablets to your task workers](common-scenarios.md#issue-limited-use-shared-tablets-to-your-employees)
## How does Intune work?
Intune is the component of Enterprise Mobility + Security (EMS) that manages mobile devices and apps. It integrates closely with other EMS components like Azure Active Directory (Azure AD) for identity and access control and Azure Information Protection for data protection. When you use it with Office 365, you can enable your workforce to be productive on all their devices, while keeping your organization's information protected.

View a [larger version](./media/intunearchitecture.svg) of the Intune architecture diagram.
How you use the device and app management features of Intune and EMS data protection depends on the [business problem you’re trying to solve](#common-business-problems-that-intune-helps-solve). For example:
* You’ll make strong use of device management if you're creating a pool of single-use devices to be shared by shift workers in a retail store.
* You’ll lean on app management and data protection if you allow your workforce to use their personal devices to access corporate data (BYOD).
* If you are issuing corporate phones to information workers, you’ll rely on all of the technologies.
## Intune device management explained
Intune device management works by using the protocols or APIs that are available in the mobile operating systems. It includes tasks like:
* Enrolling devices into management so your IT department has an inventory of devices that are accessing corporate services
* Configuring devices to ensure they meet company security and health standards
* Providing certificates and Wi-Fi/VPN profiles to access corporate services
* Reporting on and measuring device compliance to corporate standards
* Removing corporate data from managed devices
Sometimes, people think that **access control to corporate data** is a device management feature. We don’t think of it that way because it isn’t something that the mobile operating system provides. Rather, it’s something the identity provider delivers. In our case, the identity provider is Azure Active Directory (Azure AD), Microsoft’s identity and access management system.
Intune integrates with Azure AD to enable a broad set of access control scenarios. For example, you can require a mobile device to be compliant with corporate standards that you define in Intune before the device can access a corporate service like Exchange. Likewise, you can lock down the corporate service to a specific set of mobile apps. For example, you can lock down Exchange Online to only be accessed by Outlook or Outlook Mobile.
## Intune app management explained
When we talk about app management, we are talking about:
* Assigning mobile apps to employees
* Configuring apps with standard settings that are used when the app runs
* Controlling how corporate data is used and shared in mobile apps
* Removing corporate data from mobile apps
* Updating apps
* Reporting on mobile app inventory
* Tracking mobile app usage
We have seen the term mobile app management (MAM) used to mean any one of those things individually or to mean specific combinations. In particular, it’s common for folks to conflate the concept of app configuration with the concept of securing corporate data within mobile apps. That’s because some mobile apps expose settings that allow their data security features to be configured.
When we talk about app configuration and Intune, we are referring specifically to technologies like [managed app configuration on iOS](https://developer.apple.com/library/content/samplecode/sc2279/Introduction/Intro.html).
When you use Intune with the other services in EMS, you can provide your organization mobile app security over and above what is provided by the mobile operating system and the mobile apps themselves through app configuration. An app that is managed with EMS has access to a broader set of mobile app and data protections that includes:
* [Single sign-on](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis)
* [Multi-factor authentication](https://docs.microsoft.com/multi-factor-authentication/multi-factor-authentication)
* [App conditional access - allow access if the mobile app contains corporate data](app-based-conditional-access-intune.md)
* [Isolating corporate data from personal data inside the same app](app-protection-policy.md)
* [App protection policy (PIN, encryption, save-as, clipboard, etc.)](app-protection-policies.md)
* [Corporate data wipe from a mobile app](apps-selective-wipe.md)
* [Rights management support](https://docs.microsoft.com/information-protection/understand-explore/what-is-azure-rms)

### Intune app security
Providing app security is a part of app management, and in Intune, when we talk about mobile app security, we mean:
* Keeping personal information isolated from corporate IT awareness
* Restricting the actions users can take with corporate information such as copy, cut/paste, save, and view
* Removing corporate data from mobile apps, also known as selective wipe or corporate wipe
One way that Intune provides mobile app security is through its **app protection policy** feature. App protection policy uses Azure AD identity to isolate corporate data from personal data. Data that is accessed using a corporate credential will be given additional corporate protections.
For example, when a user logs on to her device with her corporate credentials, her corporate identity allows her access to data that is denied to her personal identity. As that corporate data is used, app protection policies control how it is saved and shared. Those same protections are not applied to data that is accessed when the user logs on to her device with her personal identity. In this way, IT has control of corporate data while the end user maintains control and privacy over personal data.
## EMM with and without device enrollment
Most enterprise mobility management solutions support basic mobile device and mobile app technologies. These are usually tied to the device being enrolled in your organization’s mobile device management (MDM) solution. Intune supports these scenarios and additionally supports many “without enrollment” scenarios.
Organizations differ to the extent they will adopt “without enrollment” scenarios. Some organizations standardize on it. Some allow it for companion devices such as a personal tablet. Others don’t support it at all. Even in this last case, where an organization requires all employee devices to be enrolled in MDM, they typically support "without enrollment" scenarios for contractors, vendors, and for other devices that have a specific exemption.
You can even use Intune’s “without-enrollment” technology on enrolled devices. For example, a device enrolled in MDM may have "open-in" protections provided by the mobile operating system. "Open-in" protection is an iOS feature that restricts you from opening a document from one app, like Outlook, into another app, like Word, unless both apps are managed by the MDM provider. In addition, IT may apply the app protection policy to EMS-managed mobile apps to control save-as or to provide multi-factor authentication.
Whatever your organization’s position on enrolled and unenrolled mobile devices and apps, Intune, as a part of EMS, has tools that will help increase your workforce productivity while protecting your corporate data.
### Next steps
* Read about some of the [common ways to use Intune](common-scenarios.md).
* Get familiar with the product [with a 30-day trial of Intune](free-trial-sign-up.md).
* Dive into the [technical requirements and capabilities](supported-devices-browsers.md) of Intune.
| 82.056452 | 519 | 0.79371 | eng_Latn | 0.997658 |
e21b18cfa7de5d1ea902b11c4d8998d387339c0b | 5,001 | md | Markdown | posts/jap/DataStructure/jap-array.md | euisblue/vue-blog | 6ab84fa4d1c52a3ab036e48859d4d641a4aafd2a | [
"MIT"
] | null | null | null | posts/jap/DataStructure/jap-array.md | euisblue/vue-blog | 6ab84fa4d1c52a3ab036e48859d4d641a4aafd2a | [
"MIT"
] | null | null | null | posts/jap/DataStructure/jap-array.md | euisblue/vue-blog | 6ab84fa4d1c52a3ab036e48859d4d641a4aafd2a | [
"MIT"
] | null | null | null | <div class="update">
last updated 10.30.20
</div>
## 配列とは
配列(Array)はデータ構造の一つで、同じデータ型の変数を含むコンテナです。例えば、charの配列には文字だけ、intの配列には数しか入らないということです。。
<div style="text-align: center;">
<img src="assets/data-structure/array/array1d-1.png" alt="array 1d image">
</div>
配列の中のそれぞれの変数は要素(element)、そして要素の位置はインデックス(index)と呼ばれます。インデックスで使える数字は0と正数だけです。
### 配列の種類
- 一次元配列 (上の表ような形)
- 二次元配列:配列の中の配列 [(x,y)グリッドと同じ]
- 三次元配列:2次元配列の中に更に配列がある(ルービックキューブの形)
- 四次元以上もできますが、複雑で構造をイメージするのも難しくてそんなに使わない。
## Rubyの配列
Rubyの配列は普通のとは違います。同じデータ型の変数を含むコンテナと基本的には一緒ですが、このデータ型がintとかcharとかではなくオブジェクトです。
Rubyはほとんどのものが全部オブジェクトなので簡単に言うと何でも入るコンテナです。
下記のように数字、文字列、ブーリアン、nil、そして配列もオブジェクトなので挿入できます。
<div style="text-align: center;">
<img src="assets/data-structure/array/array1d-2.png" alt="ruby obj array 1d image">
</div>
では本当にこれらの要素がオブジェクトか確認してみましょう。
```rb
arr = [1, 3.2, "hello", true, [1,2,3], nil];
arr.each do |elem|
# 全部trueを出します。
puts elem.is_a? Object
end
```
## 配列の演算
### 配列の生成
配列を生成する方法は3つあります。
1. `[]`コンストラクタを使って生成する。
```rb
arr = [] # => []
arr2 = [1, '2', "three"] # => [1, '2', "three"]
```
2. 直接`:new`メソッドを呼び出して生成する。
```rb
arr = Array.new() # => []
arr = Array.new(3) # => [nil, nil, nil]
arr = Array.new(3, 0) # => [0, 0, 0]
```
二番目の引数を使うとき注意することが一点あります。該当値で初期化した要素は全部同じオブジェクトを指します。下のコードを見てください。
```rb
# 配列の基本値を"hello"で設定
arr = Array.new(3, "hello"); # => ["hello", "hello", "hello"]
arr[0].upcase!; # 1番めの'hello'を大文字に変換
puts arr # => ["HELLO", "HELLO", "HELLO"]
```
すべての要素が一つの同じオブジェクトを指しているので、一つの要素の値を変えるとすべての値が変わります。なので`:new`の2番目の引数には[イミュータブル](https://ja.wikipedia.org/wiki/イミュータブル)(immutable)オブジェクト(シンボル, 数字, ブーリアン)を使用することが推奨されます。
[ミュータブル](https://developer.mozilla.org/ja/docs/Glossary/Mutable)(mutable)を基本値で持つ配列を生成する時にはブロック(block)を使います。
```rb
arr = Array.new(3) { "hello" } # => ["hello", "hello", "hello"]
arr[0].upcase!
puts arr # => ["HELLO", "hello", "hello"]
```
3. カーネル(kernel)の[`Array()`](https://ruby-doc.org/core-2.7.0/Kernel.html#method-i-Array)を使う。
```rb
Array([]) # => []
Array(nil) # => []
Array([1, 2, 3]) # => [1, 2, 3]
Array(1..5) # => [1, 2, 3, 4, 5]
Array(1...5) # => [1, 2, 3, 4]
```
### 配列に接近(access)
`[]`を使って要素に接近します。
配列のindexには負の数は使えないと最初に言いました。でも、実はRubyの配列には負の数も使えます。
```rb
arr = [1, 2, 3, 4, 5]
arr[2] # => 3
arr[1] # => 2
arr[0] # => 1
arr[-1] # => 5
arr[-2] # => 4
arr[100] # => nil
```
`arr[100]`みたいに配列の範囲外に接近する時には`nil`を返します。でも`:fetch`メソッドを使って範囲外に接近する場合はエラーが起こります。
```rb
arr = [1, 2, 3, 4, 5]
arr.fetch(0) # => 1
arr.fetch(100) # => IndexError (index 100 outside of array bounds: -5...5)
arr.fetch(100, 'out!') # => "out!"
```
### データの追加 (insert)
配列の最後にデータを追加する方法には`:push`と`:<<`があります。
```rb
arr = [1]
arr.push(2) # => [1, 2]
arr << 3 # => [1, 2, 3]
```
逆に最初に追加するときには`:unshift`を使います。
```rb
arr = [3]
arr.unshift(2) # => [2, 3]
arr.unshift(1) # => [1, 2, 3]
```
特定位置にデータを挿入するときには`:insert`を使用します。
```rb
arr = ['one', 'two', 'four', 'five']
arr.insert(2, 'three') # => ["one", "two", "three", "four", "five"]
```
### データの除去(delete)
最初の要素は`:shift`、最後の要素は`:pop`メソッドを使って削除します。
```rb
arr = [1, 2, 3, 4, 5]
a.pop # => 5
arr # => [1, 2, 3, 4]
a.shift # => 1
arr # => [2, 3, 4]
```
特定位置にある要素を除去する場合は`:delete_at`を使います。
```rb
arr = [1, 2, 3, 4, 5]
arr.delete_at(2) # => 3
arr # => [1, 2, 4, 5]
```
位置ではなく特定値を持ってる要素を全部削除したい場合は`:delete`を使います。
```rb
arr = [1, 2, 2, 3, 4]
arr.delete(2) # => 2
arr # => [1, 3, 4]
```
最後に`:uniq`と言うメソッドがあります。このメソッドは配列から重複しているデータを一つ残して全部除去します。
```rb
arr = [1, 2, 2, 3, 3, 3, 4, 5, 5]
arr.uniq # => [1, 2, 3, 4, 5]
arr # => [1, 2, 2, 3, 3, 3, 4, 5, 5]
arr.uniq! # => [1, 2, 3, 4, 5]
arr # => [1, 2, 3, 4, 5]
```
## 二次元配列
二次元配列の構造と生成方法を簡略にみてみましょう。
<div style="text-align: center;">
<img src="assets/data-structure/array/array2d-1.png" alt="array 2d image">
</div>
一次元配列の場合は`arr[0]`などな方法で要素に接近しました。二次元配列の場合には列(column)と行(row)、二つのインデックスを使用して要素に接近します。
インデックスの順位に注意してください。`arr[col][row]`ではなく`arr[row][col]`です(上の表を確認してください)。
## 二次元配列の生成
<div style="text-align: center;">
<img src="assets/data-structure/array/array2d-2.png" alt="array 2d image" style="margin: 0;">
</div>
1. [ハードコード](https://dictionary.goo.ne.jp/word/ハードコード/)で作る。
```rb
arr = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
arr[0] # => [1, 2, 3]
arr[1] # => [4, 5, 6]
arr[2] # => [7, 8, 9]
arr[0][1] # => 2
arr[2][2] # => 9
```
2. `:new`メソッドを使用する。
最初に一次元配列を作って、それぞれの要素に部分配列を挿入することもできます。
```rb
arr = Array.new(3) # => [nil, nil, nil]
arr[0] = [1, 2, 3] # => [[1,2,3], nil, nil]
arr[1] = [4, 5, 6] # => [[1,2,3], [4,5,6], nil]
arr[2] = [7, 8, 9] # => [[1,2,3], [4,5,6], [7,8,9]]
```
最初から3x3の空配列を作ることもできます。
```rb
arr = Array.new(3) { [] } # => [[], [], []]
arr[0] << 1 # => [[1], [], []]
arr[1] << 4 # => [[1], [4], []]
arr[2] << 7 # => [[1], [4], [7]]
```
Rubyの中で多次元配列というのは、配列が要素で入ってる一次元配列と同じです。なので、一次元配列で使ったメソッド(挿入、除去、等々) を二次元、三次元配列などに同様に使用できます。
| 22.628959 | 166 | 0.578484 | yue_Hant | 0.219196 |
e21b4870cb520bc7c4e92b55595399e420dc371f | 1,851 | markdown | Markdown | _posts/2021-01-26-git-repositories-multiple-ssh-keys.markdown | vincenzodibiaggio/learndv | 5c98376cd77035495e65dcbc1a288b137092ebe1 | [
"MIT"
] | null | null | null | _posts/2021-01-26-git-repositories-multiple-ssh-keys.markdown | vincenzodibiaggio/learndv | 5c98376cd77035495e65dcbc1a288b137092ebe1 | [
"MIT"
] | null | null | null | _posts/2021-01-26-git-repositories-multiple-ssh-keys.markdown | vincenzodibiaggio/learndv | 5c98376cd77035495e65dcbc1a288b137092ebe1 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Multiple ssh keys to login with multiple users in multiple repos"
date: 2021-01-26 17:56:40 +0100
categories: [Tech]
tags: [ssh, git, gitlab, github]
---
# How to use multiple ssh keys to manage multiple users and repositories with git
Today, after started a new project on Gitlab, I've added my public ssh key to my Gitlab user.
For the first time I was faced with the error `Fingerprint has already been taken` because my key
was already associated to my company profile.
Now, how I can login to Gitlab with another user?
The answer is simple: with another ssh key.
### How to generate a brand new ssh key
1. Run the command `ssh-keygen -t rsa -b 4096 -C "your_email@example.com"`
2. When it ask `Enter file in which to save the key (/home/YOUR_USER/.ssh/id_rsa)` enter a new path like `/home/YOUR_USER/.ssh/id_rsa_private_profile`
3. Then add the new identity to the authentication agent with the command `ssh-add ~/.ssh/id_rsa_private_profile`. If everything will be fine, you will read a response like `Identity added: /home/YOUR_USER/.ssh/id_rsa_private_profile (/home/YOUR_USER/.ssh/id_rsa_private_profile)`
### Use the ssh key to specific repositories
To connecto to the repository using the new idendity, we should declare it to the authentication agent through the file `~/.ssh/config`:
(in this example we manage a custom identity to connect with `gitlab.com`. You can modify the value of the `HostName` key with the real hostname of your repository)
Put this text at the end of file:
```
Host my-custom-host-name
HostName gitlab.com
User git
IdentityFile ~/.ssh/id_rsa_private_profile
IdentitiesOnly yes
```
Now, we just need to update (or add) the value of the git remote host with the command:
`git remote add origin git@my-custom-host-name:REAL_REPO_USER/YOUR_REPO.git`
That's it!
Bye | 41.133333 | 280 | 0.76067 | eng_Latn | 0.995167 |
e21b77f8dba82137328fbee4c79d79c351c92a19 | 112 | md | Markdown | _includes/02-image.md | chen172/markdown-portfolio | 623241b446dc9444355fcd8dc89325ec3e6a20c9 | [
"MIT"
] | null | null | null | _includes/02-image.md | chen172/markdown-portfolio | 623241b446dc9444355fcd8dc89325ec3e6a20c9 | [
"MIT"
] | 5 | 2020-11-29T09:23:10.000Z | 2020-11-29T10:31:43.000Z | _includes/02-image.md | chen172/markdown-portfolio | 623241b446dc9444355fcd8dc89325ec3e6a20c9 | [
"MIT"
] | null | null | null | 
| 56 | 111 | 0.866071 | kor_Hang | 0.048045 |
e21bb456bbeb2c9a44f3358966ba971c5ccb9d93 | 13,948 | md | Markdown | repos/clojure/remote/boot-2.8.3-bullseye.md | docker-library/repo-info | 365b77dfc0770ca3091eca83ce4e24223af7e4e6 | [
"Apache-2.0"
] | 400 | 2016-08-11T10:14:00.000Z | 2022-03-24T09:41:03.000Z | repos/clojure/remote/boot-2.8.3-bullseye.md | docker-library/repo-info | 365b77dfc0770ca3091eca83ce4e24223af7e4e6 | [
"Apache-2.0"
] | 47 | 2016-08-18T22:30:36.000Z | 2022-02-24T01:27:22.000Z | repos/clojure/remote/boot-2.8.3-bullseye.md | docker-library/repo-info | 365b77dfc0770ca3091eca83ce4e24223af7e4e6 | [
"Apache-2.0"
] | 319 | 2016-08-24T06:35:11.000Z | 2022-03-22T17:07:28.000Z | ## `clojure:boot-2.8.3-bullseye`
```console
$ docker pull clojure@sha256:48b90015ce73e3d65e70ab5924e720012ed19e3c4f261c9d91947910eee01386
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms: 2
- linux; amd64
- linux; arm64 variant v8
### `clojure:boot-2.8.3-bullseye` - linux; amd64
```console
$ docker pull clojure@sha256:b567f9ea1bf71941acaac2e809e6c788bfead3faf8bc06ac74e76efeb57c71a7
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **392.9 MB (392880975 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:93ca02960d0f95c41c13b55d2dc1d8aac236af54823ee3a420254437e91f84b6`
- Default Command: `["boot","repl"]`
```dockerfile
# Tue, 12 Oct 2021 01:20:30 GMT
ADD file:aea313ae50ce6474a3df142b34d4dcba4e7e0186ea6fe55389cb2ea903b9ebbb in /
# Tue, 12 Oct 2021 01:20:30 GMT
CMD ["bash"]
# Tue, 12 Oct 2021 15:42:03 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/*
# Tue, 12 Oct 2021 15:42:10 GMT
RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi
# Tue, 12 Oct 2021 15:42:38 GMT
RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/*
# Tue, 12 Oct 2021 16:32:19 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends bzip2 unzip xz-utils fontconfig libfreetype6 ca-certificates p11-kit ; rm -rf /var/lib/apt/lists/*
# Tue, 12 Oct 2021 16:32:20 GMT
ENV JAVA_HOME=/usr/local/openjdk-11
# Tue, 12 Oct 2021 16:32:20 GMT
RUN { echo '#/bin/sh'; echo 'echo "$JAVA_HOME"'; } > /usr/local/bin/docker-java-home && chmod +x /usr/local/bin/docker-java-home && [ "$JAVA_HOME" = "$(docker-java-home)" ] # backwards compatibility
# Tue, 12 Oct 2021 16:32:21 GMT
ENV PATH=/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Tue, 12 Oct 2021 16:32:21 GMT
ENV LANG=C.UTF-8
# Tue, 12 Oct 2021 16:32:21 GMT
ENV JAVA_VERSION=11.0.12
# Tue, 12 Oct 2021 16:32:43 GMT
RUN set -eux; arch="$(dpkg --print-architecture)"; case "$arch" in 'amd64') downloadUrl='https://github.com/AdoptOpenJDK/openjdk11-upstream-binaries/releases/download/jdk-11.0.12%2B7/OpenJDK11U-jdk_x64_linux_11.0.12_7.tar.gz'; ;; 'arm64') downloadUrl='https://github.com/AdoptOpenJDK/openjdk11-upstream-binaries/releases/download/jdk-11.0.12%2B7/OpenJDK11U-jdk_aarch64_linux_11.0.12_7.tar.gz'; ;; *) echo >&2 "error: unsupported architecture: '$arch'"; exit 1 ;; esac; wget --progress=dot:giga -O openjdk.tgz "$downloadUrl"; wget --progress=dot:giga -O openjdk.tgz.asc "$downloadUrl.sign"; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys EAC843EBD3EFDB98CC772FADA5CD6035332FA671; gpg --batch --keyserver keyserver.ubuntu.com --keyserver-options no-self-sigs-only --recv-keys CA5F11C6CE22644D42C6AC4492EF8D39DC13168F; gpg --batch --list-sigs --keyid-format 0xLONG CA5F11C6CE22644D42C6AC4492EF8D39DC13168F | tee /dev/stderr | grep '0xA5CD6035332FA671' | grep 'Andrew Haley'; gpg --batch --verify openjdk.tgz.asc openjdk.tgz; gpgconf --kill all; rm -rf "$GNUPGHOME"; mkdir -p "$JAVA_HOME"; tar --extract --file openjdk.tgz --directory "$JAVA_HOME" --strip-components 1 --no-same-owner ; rm openjdk.tgz*; { echo '#!/usr/bin/env bash'; echo 'set -Eeuo pipefail'; echo 'trust extract --overwrite --format=java-cacerts --filter=ca-anchors --purpose=server-auth "$JAVA_HOME/lib/security/cacerts"'; } > /etc/ca-certificates/update.d/docker-openjdk; chmod +x /etc/ca-certificates/update.d/docker-openjdk; /etc/ca-certificates/update.d/docker-openjdk; find "$JAVA_HOME/lib" -name '*.so' -exec dirname '{}' ';' | sort -u > /etc/ld.so.conf.d/docker-openjdk.conf; ldconfig; java -Xshare:dump; fileEncoding="$(echo 'System.out.println(System.getProperty("file.encoding"))' | jshell -s -)"; [ "$fileEncoding" = 'UTF-8' ]; rm -rf ~/.java; javac --version; java --version
# Tue, 12 Oct 2021 16:32:44 GMT
CMD ["jshell"]
# Wed, 13 Oct 2021 13:15:14 GMT
ENV BOOT_VERSION=2.8.3
# Wed, 13 Oct 2021 13:15:14 GMT
ENV BOOT_INSTALL=/usr/local/bin/
# Wed, 13 Oct 2021 13:15:14 GMT
WORKDIR /tmp
# Wed, 13 Oct 2021 13:15:15 GMT
RUN mkdir -p $BOOT_INSTALL && wget -q https://github.com/boot-clj/boot-bin/releases/download/latest/boot.sh && echo "Comparing installer checksum..." && sha256sum boot.sh && echo "0ccd697f2027e7e1cd3be3d62721057cbc841585740d0aaa9fbb485d7b1f17c3 *boot.sh" | sha256sum -c - && mv boot.sh $BOOT_INSTALL/boot && chmod 0755 $BOOT_INSTALL/boot
# Wed, 13 Oct 2021 13:15:15 GMT
ENV PATH=/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin/
# Wed, 13 Oct 2021 13:15:16 GMT
ENV BOOT_AS_ROOT=yes
# Wed, 13 Oct 2021 13:15:36 GMT
RUN boot
# Wed, 13 Oct 2021 13:15:36 GMT
CMD ["boot" "repl"]
```
- Layers:
- `sha256:bb7d5a84853b217ac05783963f12b034243070c1c9c8d2e60ada47444f3cce04`
Last Modified: Tue, 12 Oct 2021 01:25:37 GMT
Size: 54.9 MB (54917520 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f02b617c6a8c415a175f44d7e2c5d3b521059f2a6112c5f022e005a44a759f2d`
Last Modified: Tue, 12 Oct 2021 15:52:48 GMT
Size: 5.2 MB (5153273 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d32e17419b7ee61bbd89c2f0d2833a99cf45e594257d15cb567e4cf7771ce34a`
Last Modified: Tue, 12 Oct 2021 15:52:48 GMT
Size: 10.9 MB (10871935 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c9d2d81226a43a97871acd5afb7e8aabfad4d6b62ae1709c870df3ee230bc3f5`
Last Modified: Tue, 12 Oct 2021 15:53:13 GMT
Size: 54.6 MB (54567761 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fab4960f9cd2646dff7055621ead451f5b611bd192112146d39ab7569b10ad4c`
Last Modified: Tue, 12 Oct 2021 16:50:20 GMT
Size: 5.4 MB (5420108 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:da1c1e7baf6d5e12693ed606e36a1b42c6231e06a27baf38892b058b7dd30e6c`
Last Modified: Tue, 12 Oct 2021 16:50:18 GMT
Size: 212.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:79b231561270b740cb4d79a10acc2fb132f11b992a322496b90e92902b94cde4`
Last Modified: Tue, 12 Oct 2021 16:50:35 GMT
Size: 203.1 MB (203122693 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b2424fb409bd9c543889b83c5319abf0f21696f12d2d660754532120ee7e2453`
Last Modified: Wed, 13 Oct 2021 13:35:36 GMT
Size: 6.9 KB (6930 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f4cb5a0246810923759b6c33906dac72d20c7643b73304c759c0dfbfa89b3755`
Last Modified: Wed, 13 Oct 2021 13:35:40 GMT
Size: 58.8 MB (58820543 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `clojure:boot-2.8.3-bullseye` - linux; arm64 variant v8
```console
$ docker pull clojure@sha256:ce6afaf510e5d1fa1e8911bafbcb9706d03a8f3d2438d33b27d64bb2b6e9dc3e
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **389.0 MB (389037063 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:5962151e24cc7347f6fd2e8038969a123d803805e4da1b5a2d36da1690ce6f10`
- Default Command: `["boot","repl"]`
```dockerfile
# Tue, 12 Oct 2021 01:41:04 GMT
ADD file:1529ae12e334fd992892d3fb97c103297cff7e0115b0475bec4c093939a2bff7 in /
# Tue, 12 Oct 2021 01:41:04 GMT
CMD ["bash"]
# Sat, 16 Oct 2021 02:58:23 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/*
# Sat, 16 Oct 2021 02:58:28 GMT
RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi
# Sat, 16 Oct 2021 02:58:48 GMT
RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/*
# Sat, 16 Oct 2021 04:10:31 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends bzip2 unzip xz-utils fontconfig libfreetype6 ca-certificates p11-kit ; rm -rf /var/lib/apt/lists/*
# Sat, 16 Oct 2021 04:10:32 GMT
ENV JAVA_HOME=/usr/local/openjdk-11
# Sat, 16 Oct 2021 04:10:33 GMT
RUN { echo '#/bin/sh'; echo 'echo "$JAVA_HOME"'; } > /usr/local/bin/docker-java-home && chmod +x /usr/local/bin/docker-java-home && [ "$JAVA_HOME" = "$(docker-java-home)" ] # backwards compatibility
# Sat, 16 Oct 2021 04:10:33 GMT
ENV PATH=/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Sat, 16 Oct 2021 04:10:34 GMT
ENV LANG=C.UTF-8
# Sat, 16 Oct 2021 04:10:35 GMT
ENV JAVA_VERSION=11.0.12
# Sat, 16 Oct 2021 04:10:55 GMT
RUN set -eux; arch="$(dpkg --print-architecture)"; case "$arch" in 'amd64') downloadUrl='https://github.com/AdoptOpenJDK/openjdk11-upstream-binaries/releases/download/jdk-11.0.12%2B7/OpenJDK11U-jdk_x64_linux_11.0.12_7.tar.gz'; ;; 'arm64') downloadUrl='https://github.com/AdoptOpenJDK/openjdk11-upstream-binaries/releases/download/jdk-11.0.12%2B7/OpenJDK11U-jdk_aarch64_linux_11.0.12_7.tar.gz'; ;; *) echo >&2 "error: unsupported architecture: '$arch'"; exit 1 ;; esac; wget --progress=dot:giga -O openjdk.tgz "$downloadUrl"; wget --progress=dot:giga -O openjdk.tgz.asc "$downloadUrl.sign"; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys EAC843EBD3EFDB98CC772FADA5CD6035332FA671; gpg --batch --keyserver keyserver.ubuntu.com --keyserver-options no-self-sigs-only --recv-keys CA5F11C6CE22644D42C6AC4492EF8D39DC13168F; gpg --batch --list-sigs --keyid-format 0xLONG CA5F11C6CE22644D42C6AC4492EF8D39DC13168F | tee /dev/stderr | grep '0xA5CD6035332FA671' | grep 'Andrew Haley'; gpg --batch --verify openjdk.tgz.asc openjdk.tgz; gpgconf --kill all; rm -rf "$GNUPGHOME"; mkdir -p "$JAVA_HOME"; tar --extract --file openjdk.tgz --directory "$JAVA_HOME" --strip-components 1 --no-same-owner ; rm openjdk.tgz*; { echo '#!/usr/bin/env bash'; echo 'set -Eeuo pipefail'; echo 'trust extract --overwrite --format=java-cacerts --filter=ca-anchors --purpose=server-auth "$JAVA_HOME/lib/security/cacerts"'; } > /etc/ca-certificates/update.d/docker-openjdk; chmod +x /etc/ca-certificates/update.d/docker-openjdk; /etc/ca-certificates/update.d/docker-openjdk; find "$JAVA_HOME/lib" -name '*.so' -exec dirname '{}' ';' | sort -u > /etc/ld.so.conf.d/docker-openjdk.conf; ldconfig; java -Xshare:dump; fileEncoding="$(echo 'System.out.println(System.getProperty("file.encoding"))' | jshell -s -)"; [ "$fileEncoding" = 'UTF-8' ]; rm -rf ~/.java; javac --version; java --version
# Sat, 16 Oct 2021 04:10:56 GMT
CMD ["jshell"]
# Sat, 16 Oct 2021 14:13:51 GMT
ENV BOOT_VERSION=2.8.3
# Sat, 16 Oct 2021 14:13:52 GMT
ENV BOOT_INSTALL=/usr/local/bin/
# Sat, 16 Oct 2021 14:13:53 GMT
WORKDIR /tmp
# Sat, 16 Oct 2021 14:13:55 GMT
RUN mkdir -p $BOOT_INSTALL && wget -q https://github.com/boot-clj/boot-bin/releases/download/latest/boot.sh && echo "Comparing installer checksum..." && sha256sum boot.sh && echo "0ccd697f2027e7e1cd3be3d62721057cbc841585740d0aaa9fbb485d7b1f17c3 *boot.sh" | sha256sum -c - && mv boot.sh $BOOT_INSTALL/boot && chmod 0755 $BOOT_INSTALL/boot
# Sat, 16 Oct 2021 14:13:55 GMT
ENV PATH=/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin/
# Sat, 16 Oct 2021 14:13:56 GMT
ENV BOOT_AS_ROOT=yes
# Sat, 16 Oct 2021 14:14:10 GMT
RUN boot
# Sat, 16 Oct 2021 14:14:11 GMT
CMD ["boot" "repl"]
```
- Layers:
- `sha256:1c47a423366578e5ce665d03788914bf0459485a627a27896fa9c5663ce55cdf`
Last Modified: Tue, 12 Oct 2021 01:47:41 GMT
Size: 53.6 MB (53603015 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:889b1f128be8ebd64f787c46418ffe34ce1096d1ccd6938924d7397713720758`
Last Modified: Sat, 16 Oct 2021 03:14:21 GMT
Size: 5.1 MB (5141861 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c338d42ad921985fb8ebb6e2c48d381f6e03a91535eeffce7f08084b3dfbfdf4`
Last Modified: Sat, 16 Oct 2021 03:14:21 GMT
Size: 10.7 MB (10655847 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e27d3d9a061af78e3fcbefe5dbbe718db380687319e5ba8a7c9fd7ba55d16cc3`
Last Modified: Sat, 16 Oct 2021 03:14:43 GMT
Size: 54.7 MB (54669931 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:532df05d7b3a5a656ac5aabbe93c17b2f3958a752c8fe9d64b98e0a4764b516a`
Last Modified: Sat, 16 Oct 2021 04:27:37 GMT
Size: 5.4 MB (5420213 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4ebb84171e2cc36944d304c07f234f14a0e9d42954133df3339d1828dca17d4b`
Last Modified: Sat, 16 Oct 2021 04:27:36 GMT
Size: 211.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b313f2b472b14bf27a1eac2a8b0753545937e07b3fb53473fac3021d1f6df157`
Last Modified: Sat, 16 Oct 2021 04:27:57 GMT
Size: 200.7 MB (200723381 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ad8d3010ed295f2efb1b5f5e0551b38ca0b83f46463d9270334d958cab96ad20`
Last Modified: Sat, 16 Oct 2021 14:29:01 GMT
Size: 6.9 KB (6906 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cadb2a0c0b23fb3393bc06aa80a51ab2ed5069b8c44b04fa738f54c36675eda8`
Last Modified: Sat, 16 Oct 2021 14:29:06 GMT
Size: 58.8 MB (58815698 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
| 68.70936 | 1,967 | 0.728993 | yue_Hant | 0.184802 |
e21c4317c4eb411e40cc88533636d4a1a77baaad | 16,046 | md | Markdown | articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Kontinuerlig integrering med Azure Pipelines
description: Lär dig hur du kontinuerligt skapar, testar och distribuerar Azure Resource Manager-mallar.
ms.date: 03/13/2020
ms.topic: tutorial
ms.author: jgao
ms.openlocfilehash: 6ce6f176a52a742a3216a5b761b34254027a1c5b
ms.sourcegitcommit: 8dc84e8b04390f39a3c11e9b0eaf3264861fcafc
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/13/2020
ms.locfileid: "81255081"
---
# <a name="tutorial-continuous-integration-of-azure-resource-manager-templates-with-azure-pipelines"></a>Självstudiekurs: Kontinuerlig integrering av Azure Resource Manager-mallar med Azure Pipelines
I föregående [självstudiekurs](./deployment-tutorial-linked-template.md)distribuerar du en länkad mall. I den här självstudien får du lära dig hur du använder Azure Pipelines för att kontinuerligt skapa och distribuera Azure Resource Manager-mallprojekt.
Azure DevOps tillhandahåller utvecklartjänster för att stödja team för att planera arbete, samarbeta om kodutveckling och skapa och distribuera program. Utvecklare kan arbeta i molnet med Azure DevOps Services. Azure DevOps innehåller en integrerad uppsättning funktioner som du kan komma åt via din webbläsare eller IDE-klient. Azure Pipeline är en av dessa funktioner. Azure Pipelines är en komplett kontinuerlig integration (CI) och cd-tjänst (continuous delivery). Det fungerar med din föredragna Git-leverantör och kan distribuera till de flesta större molntjänster. Sedan kan du automatisera version, testning och distribution av din kod till Microsoft Azure, Google Cloud Platform eller Amazon Web Services.
> [!NOTE]
> Välj ett projektnamn. När du går igenom självstudien ersätter du någon av **AzureRmPipeline** med ditt projektnamn.
> Det här projektnamnet används för att generera resursnamn. En av resursen är ett lagringskonto. Namn på lagringskonto måste vara mellan 3 och 24 tecken långa och endast med siffror och gemener. Namnet måste vara unikt. I mallen är lagringskontonamnet projektnamnet med "store" bifogat och projektnamnet måste vara mellan 3 och 11 tecken. Så projektnamnet måste uppfylla kraven för lagringskontonamn och har färre än 11 tecken.
Den här självstudien omfattar följande uppgifter:
> [!div class="checklist"]
> * Förbereda en GitHub-lagringsplats
> * Skapa ett Azure DevOps-projekt
> * Skapa en Azure-pipeline
> * Verifiera pipeline-distributionen
> * Uppdatera mallen och distribuera om
> * Rensa resurser
Om du inte har en Azure-prenumeration [skapar du ett kostnadsfritt konto](https://azure.microsoft.com/free/) innan du börjar.
## <a name="prerequisites"></a>Krav
För att kunna följa stegen i den här artikeln behöver du:
* **Ett GitHub-konto**, där du använder det för att skapa en databas för dina mallar. Om du inte har en, kan du [skapa en gratis](https://github.com). Mer information om hur du använder GitHub-databaser finns i [Skapa GitHub-databaser](/azure/devops/pipelines/repos/github).
* **Installera Git**. Den här självstudieinstruktionen använder *Git Bash* eller *Git Shell*. Instruktioner finns i [Installera Git]( https://www.atlassian.com/git/tutorials/install-git).
* **En Azure DevOps-organisation**. Om du inte har en, kan du skapa en gratis. Se [Skapa en organisation eller projektsamling]( https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization?view=azure-devops).
* (valfritt) **Visual Studio-kod med resurshanterarens verktygstillägg**. Se [Använda Visual Studio-kod för att skapa Azure Resource Manager-mallar](use-vs-code-to-create-template.md).
## <a name="prepare-a-github-repository"></a>Förbereda en GitHub-lagringsplats
GitHub används för att lagra projektets källkod, inklusive Resource Manager-mallar. Andra databaser som stöds finns i [databaser som stöds av Azure DevOps](/azure/devops/pipelines/repos/?view=azure-devops).
### <a name="create-a-github-repository"></a>Skapa en GitHub-databas
Om du inte har ett GitHub-konto läser du [Förutsättningar](#prerequisites).
1. Logga in på [GitHub](https://github.com).
1. Välj kontobilden i det övre högra hörnet och välj sedan **Dina databaser**.

1. Välj **Ny**, en grön knapp.
1. Ange ett databasnamn i **databasnamn.** **AzureRmPipeline-repo**. Kom ihåg att ersätta någon av **AzureRmPipeline** med ditt projektnamn. Du kan välja antingen **Offentlig** eller **privat** för att gå igenom den här självstudien. Och välj sedan **Skapa databas**.
1. Skriv ned webbadressen. Databasens URL är följande format:
```url
https://github.com/[YourAccountName]/[YourRepositoryName]
```
Den här databasen kallas en *fjärrdatabas*. Var och en av utvecklarna i samma projekt kan klona sin egen *lokala databas*och sammanfoga ändringarna till fjärrdatabasen.
### <a name="clone-the-remote-repository"></a>Klona fjärrdatabasen
1. Öppna Git Shell eller Git Bash. Se [Förutsättningar](#prerequisites).
1. Kontrollera att den aktuella mappen är **GitHub**.
1. Kör följande kommando:
```bash
git clone https://github.com/[YourAccountName]/[YourGitHubRepositoryName]
cd [YourGitHubRepositoryName]
mkdir CreateWebApp
cd CreateWebApp
pwd
```
Ersätt **[YourAccountName]** med ditt GitHub-kontonamn och ersätt **[YourGitHubRepositoryName]** med ditt databasnamn som du skapade i föregående procedur.
**Mappen CreateWebApp** är den mapp där mallen lagras. Kommandot **pwd** visar mappsökvägen. Sökvägen är där du sparar mallen till i följande procedur.
### <a name="download-a-quickstart-template"></a>Ladda ned en snabbstartsmall
I stället för att skapa mallarna kan du hämta mallarna och spara dem i **mappen CreateWebApp.**
* Huvudmallen:https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-deployment/linked-template/azuredeploy.json
* Den länkade mallen:https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/get-started-deployment/linked-template/linkedStorageAccount.json
Både mappnamnet och filnamnen används som de är på gång. Om du ändrar dessa namn måste du uppdatera namnen som används i pipelinen.
### <a name="push-the-template-to-the-remote-repository"></a>Skicka mallen till fjärrdatabasen
Azuredeploy.json har lagts till i den lokala databasen. Därefter laddar du upp mallen till fjärrdatabasen.
1. Öppna *Git Shell* eller *Git Bash*, om det inte öppnas.
1. Ändra katalogen till mappen CreateWebApp i den lokala databasen.
1. Kontrollera att **filen azuredeploy.json** finns i mappen.
1. Kör följande kommando:
```bash
git add .
git commit -m "Add web app templates."
git push origin master
```
Du kan få en varning om LF. Du kan ignorera varningen. **master** är huvudgrenen. Du skapar vanligtvis en gren för varje uppdatering. För att förenkla självstudien använder du huvudgrenen direkt.
1. Bläddra till din GitHub-databas från en webbläsare. URL:en är ** https://github.com/[YourAccountName]/[YourGitHubRepository]**. Du ska se **mappen CreateWebApp** och de tre filerna i mappen.
1. Välj **linkedStorageAccount.json** om du vill öppna mallen.
1. Välj knappen **Rå.** WEBBADRESSEN startas med **raw.githubusercontent.com**.
1. Skapa en kopia av URL:en. Du måste ange det här värdet när du konfigurerar pipelinen senare i självstudien.
Hittills har du skapat en GitHub-databas och laddat upp mallarna till databasen.
## <a name="create-a-devops-project"></a>Skapa ett DevOps-projekt
En DevOps-organisation behövs innan du kan gå vidare till nästa procedur. Om du inte har någon läser du [Förutsättningar](#prerequisites).
1. Logga in på [Azure DevOps](https://dev.azure.com).
1. Välj en DevOps-organisation från vänster.

1. Välj **Nytt projekt**. Om du inte har några projekt öppnas sidan skapa projekt automatiskt.
1. Ange följande värden:
* **Projektnamn**: ange ett projektnamn. Du kan använda projektnamnet som du valde i början av självstudien.
* **Versionskontroll**: Välj **Git**. Du kan behöva expandera **Avancerat** för att se **versionskontroll**.
Använd standardvärdet för de andra egenskaperna.
1. Välj **Skapa**.
Skapa en tjänstanslutning som används för att distribuera projekt till Azure.
1. Välj **Projektinställningar** längst ned på den vänstra menyn.
1. Välj **Tjänstanslutningar** under **Pipelines**.
1. Välj **Ny tjänstanslutning,** välj **Azure Resource Manager**och välj sedan **Nästa**.
1. Välj **Tjänstens huvudnamn**och välj sedan **Nästa**.
1. Ange följande värden:
* **Omfattningsnivå**: välj **Prenumeration**.
* **Prenumeration**: välj din prenumeration.
* **Resursgrupp**: Lämna den tom.
* **Anslutningsnamn**: ange ett anslutningsnamn. **AzureRmPipeline-conn**. Skriv ner det här namnet, du behöver namnet när du skapar din pipeline.
* **Bevilja åtkomstbehörighet till alla pipelines**. (vald)
1. Välj **Spara**.
## <a name="create-a-pipeline"></a>Skapa en pipeline
Hittills har du slutfört följande uppgifter. Om du hoppar över föregående avsnitt eftersom du är bekant med GitHub och DevOps måste du slutföra uppgifterna innan du fortsätter.
* Skapa en GitHub-databas och spara mallarna i **mappen CreateWebApp** i databasen.
* Skapa ett DevOps-projekt och skapa en Azure Resource Manager-tjänstanslutning.
Så här skapar du en pipeline med ett steg för att distribuera en mall:
1. Välj **Pipelines** på den vänstra menyn.
1. Välj **Ny pipeline**.
1. Välj **GitHub** på fliken **Connect** (Anslut). Om du tillfrågas anger du dina GitHub-autentiseringsuppgifter och följer sedan instruktionerna. Om följande skärm visas väljer du **Välj Endast databaser**och verifierar att databasen finns i listan innan du väljer Godkänn & **Installera**.

1. Välj databasen på fliken **Välj.** Standardnamnet är **[YourAccountName]/[YourGitHubRepositoryName]**.
1. Välj **Startpipeline på**fliken **Konfigurera** . Den visar **pipelines.yml-pipeline-filen** med två skriptsteg.
1. Ta bort de två skriptstegen från yml-filen.
1. Flytta markören till linjen efter **steg:**.
1. Välj **Visa assistent** till höger på skärmen om du vill öppna fönstret **Uppgifter.**
1. Välj **ARM-malldistribution**.
1. Ange följande värden:
* **deploymentScope**: Välj **Resursgrupp**.. Mer information om scope finns i [Distributionsomfattningar](deploy-rest.md#deployment-scope).
* **Azure Resource Manager-anslutning:** Välj det tjänstanslutningsnamn som du skapade tidigare.
* **Prenumeration**: Ange målprenumerations-ID.
* **Åtgärd:** Välj åtgärden **Skapa eller uppdatera resursgrupp** gör 2 åtgärder - 1. skapa en resursgrupp om ett nytt resursgruppsnamn anges. 2. distribuera den angivna mallen.
* **Resursgrupp**: Ange ett nytt resursgruppnamn. **AzureRmPipeline-rg**.
* **Plats**: Välj en plats för resursgruppen, till exempel **Central US**.
* **Mallplats**: Välj **Länkad artefakt**, vilket innebär att uppgiften söker efter mallfilen direkt från den anslutna databasen.
* **Mall**: Ange **CreateWebApp/azuredeploy.json**. Om du har ändrat mappnamnet och filnamnet måste du ändra det här värdet.
* **Mallparametrar**: Lämna det här fältet tomt. Du anger parametervärdena i mallparametrarna **Åsidosätt.
* **overrideParameters**: Enter **-projectName [EnterAProjectName] -linkedTemplateUri [EnterTheLinkedTemplateURL]**. Ersätt projektnamnet och url:en för den länkade mallen. Den länkade mall-URL:en är vad du skrev ned i slutet av [Skapa en GitHub-databas](#create-a-github-repository).
* **Distributionsläge**: Välj **Inkrementell**.
* **Distributionsnamn**: Ange **DeployPipelineTemplate**. Välj **Avancerat** innan du kan se **Distributionsnamn**.

1. Välj **Lägg till**.
Mer information om aktiviteten finns i [Azure Resource Group Deployment task](/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment)och Azure Resource [Manager-malldistributionsuppgift](https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceManagerTemplateDeploymentV3/README.md)
Yml-filen skall likna följande:

1. Välj **Spara och kör**.
1. Välj **Spara och kör** igen i fönstret Spara **och kör.** En kopia av YAML-filen sparas i den anslutna databasen. Du kan se YAML-filen genom att bläddra till databasen.
1. Kontrollera att pipelinen har körts.

## <a name="verify-the-deployment"></a>Verifiera distributionen
1. Logga in på [Azure-portalen](https://portal.azure.com).
1. Öppna resursgruppen. Namnet är vad du angav i YAML-filen för pipeline. Du ska se ett lagringskonto skapas. Lagringskontonamnet börjar med **store**.
1. Välj det lagringskontonamn som ska öppnas.
1. Välj **egenskaper**. Observera **att replikering** är **lokalt redundant lagring (LRS)**.
## <a name="update-and-redeploy"></a>Uppdatera och distribuera om
När du uppdaterar mallen och skickar ändringarna till fjärrdatabasen uppdaterar pipelinen automatiskt resurserna, lagringskontot i det här fallet.
1. Öppna **linkedStorageAccount.json** från din lokala databas i Visual Studio-kod eller någon textredigerare.
1. Uppdatera **standardvärde för** **storageAccountType** till **Standard_GRS**. Se följande skärmbild:

1. Spara ändringarna.
1. Skicka ändringarna till fjärrdatabasen genom att köra följande kommandon från Git Bash/Shell.
```bash
git pull origin master
git add .
git commit -m "Update the storage account type."
git push origin master
```
Det första kommandot (pull) synkroniserar den lokala databasen med fjärrdatabasen. DEN PIPELINE YAML filen lades bara till i fjärrdatabasen. Om du kör pull-kommandot hämtas en kopia av YAML-filen till den lokala grenen.
Det fjärde kommandot (push) överför den reviderade linkedStorageAccount.json-filen till fjärrdatabasen. När huvudgrenen för fjärrdatabasen har uppdaterats utlöses pipelinen igen.
Om du vill verifiera ändringarna kan du kontrollera replikeringsegenskapen för lagringskontot. Se [Verifiera distributionen](#verify-the-deployment).
## <a name="clean-up-resources"></a>Rensa resurser
När Azure-resurserna inte längre behövs rensar du de resurser som du har distribuerat genom att ta bort resursgruppen.
1. Välj **Resursgrupp** på den vänstra menyn på Azure-portalen.
2. Ange resursgruppens namn i fältet **Filtrera efter namn**.
3. Välj resursgruppens namn.
4. Välj **Ta bort resursgrupp** på den övre menyn.
Du kanske också vill ta bort GitHub-databasen och Azure DevOps-projektet.
## <a name="next-steps"></a>Nästa steg
Grattis, du har slutfört den här resource manager-malldistributionshandledningen. Låt oss veta om du har några kommentarer och förslag i feedbackavsnittet. Tack!
Du är redo att hoppa in i mer avancerade begrepp om mallar. Nästa självstudiekurs går in mer i detalj om hur du använder mallreferensdokumentation för att hjälpa till med att definiera resurser att distribuera.
> [!div class="nextstepaction"]
> [Använda mallreferens](./template-tutorial-use-template-reference.md)
| 63.422925 | 714 | 0.773152 | swe_Latn | 0.997315 |
e21c7264ec7e49858a16a15eb680f105b309a9ca | 266 | md | Markdown | DoubleLinkageRecyclerview/README.md | Youngfellows/DoubleLinkageView | 7b4ba26c5b1c0a427586e7fd51fab09ad9d055a9 | [
"Apache-2.0"
] | null | null | null | DoubleLinkageRecyclerview/README.md | Youngfellows/DoubleLinkageView | 7b4ba26c5b1c0a427586e7fd51fab09ad9d055a9 | [
"Apache-2.0"
] | null | null | null | DoubleLinkageRecyclerview/README.md | Youngfellows/DoubleLinkageView | 7b4ba26c5b1c0a427586e7fd51fab09ad9d055a9 | [
"Apache-2.0"
] | null | null | null | # GangedRecyclerview
#### 实现要求
1. 左侧联动右侧:
点击左侧列表的某一项,背景变色,同时右侧列表中对应的分类滚动到顶部
2. 右侧列表悬停:
右侧列表滑动的时候相应的标题栏需要在顶部悬停
3. 标题栏可点击
4. 右侧联动左侧:
滚动右侧列表,监听滚动的位置,左侧列表需要同步选中相应的列表

| 17.733333 | 86 | 0.718045 | yue_Hant | 0.407859 |
e21ddd41c893f469afafbed633efc09f27d2befe | 634 | md | Markdown | 0-logistics/weekly-progress/week2/Bojia_Mao.md | yongkangzzz/mmfgroup | 098a78c83e1c2973dc895d1dc7fd30d7d3668143 | [
"MIT"
] | null | null | null | 0-logistics/weekly-progress/week2/Bojia_Mao.md | yongkangzzz/mmfgroup | 098a78c83e1c2973dc895d1dc7fd30d7d3668143 | [
"MIT"
] | null | null | null | 0-logistics/weekly-progress/week2/Bojia_Mao.md | yongkangzzz/mmfgroup | 098a78c83e1c2973dc895d1dc7fd30d7d3668143 | [
"MIT"
] | null | null | null | # Things done this week
- Investigated MMF, set up MMF enviornment on google colab and train the hateful meme model.
- Learning the code of extremal perturbation algorithm implemented in TorchRay
- Investigation on captum multi-modal interpretation algorithm
# Challenges encountered
- Understand the code of extremal perturbation algorithm, in order to be able to modify it from single-modal method to multi-model algorithm.
# Working time
- ~5 hours
# Objectives for next week
- Further Investigation on captum multi-modal interpretation algorithm, start adding multi-modal feature to the extremal perturbation algorithm. | 39.625 | 145 | 0.804416 | eng_Latn | 0.987925 |
e21e3f9ce44e6558520eaf11652cffd395b5774a | 5,000 | md | Markdown | docs/_docs/components/middlewares.md | skimeli/Apiato | 1e1c4594a6978a5b24b89e5feb6005e39ac0a0df | [
"MIT"
] | 2 | 2017-09-18T18:46:52.000Z | 2019-08-14T07:03:03.000Z | docs/_docs/components/middlewares.md | skimeli/Apiato | 1e1c4594a6978a5b24b89e5feb6005e39ac0a0df | [
"MIT"
] | null | null | null | docs/_docs/components/middlewares.md | skimeli/Apiato | 1e1c4594a6978a5b24b89e5feb6005e39ac0a0df | [
"MIT"
] | null | null | null | ---
title: "Middlewares"
category: "Optional Components"
order: 30
---
### Definition
Middleware provide a convenient mechanism for filtering HTTP requests entering your application. More about them [here](https://laravel.com/docs/middleware).
You can enable and disable Middlewares as you wish.
## Principles
- There's two types of Middlewares, General (applied on all the Routes by default) and Endpoints Middlewares (applied on some Endpoints).
- The Middlewares CAN be placed in Ship layer or the Container layer depend on their roles.
### Rules
- If the Middleware is written inside a Container it MUST be registered inside that Container.
- To register Middleware's in a Container the container needs to have `MiddlewareServiceProvider`. And like all other Container Providers it MUST be registered in the `MainServiceProvider` of that Container.
- General Middlewares (like some default Laravel Middleware's) SHOULD live in the Ship layer `app/Ship/Middlewares/*` and are registered in the Ship Main Provider.
- Third Party packages Middleware CAN be registered in Containers or on the Ship layer (wherever they make more sense).
_Example: the `jwt.auth` middleware "provided by the JWT package" is registered in the Authentication Container (`Containers/Authentication/Providers/MiddlewareServiceProvider.php`)_.
### Folder Structure
```
- App
- Containers
- {container-name}
- Middlewares
- WebAuthentication.php
- Ship
- Features
- Middleware
- Http
- EncryptCookies.php
- VerifyCsrfToken.php
```
### Code Sample
**Middleware Example:**
```php
<?php
namespace App\Containers\Authentication\Middlewares;
use App\Ship\Engine\Butlers\Facades\ContainersButler;
use App\Ship\Parents\Middlewares\Middleware;
use Closure;
use Illuminate\Contracts\Auth\Guard;
use Illuminate\Http\Request;
/**
* Class WebAuthentication
*
* @author Mahmoud Zalt <mahmoud@zalt.me>
*/
class WebAuthentication extends Middleware
{
protected $auth;
public function __construct(Guard $auth)
{
$this->auth = $auth;
}
public function handle(Request $request, Closure $next)
{
if ($this->auth->guest()) {
return response()->view(ContainersButler::getLoginWebPageName(), [
'errorMessage' => 'Credentials Incorrect.'
]);
}
return $next($request);
}
}
```
**Middleware registration inside the Container Example:**
```php
<?php
namespace App\Containers\Authentication\Providers;
use App\Containers\Authentication\Middlewares\WebAuthentication;
use App\Ship\Parents\Providers\MiddlewareProvider;
use Tymon\JWTAuth\Middleware\GetUserFromToken;
use Tymon\JWTAuth\Middleware\RefreshToken;
class MiddlewareServiceProvider extends MiddlewareProvider
{
protected $middleware = [
];
protected $middlewareGroups = [
'web' => [
],
'api' => [
],
];
protected $routeMiddleware = [
'jwt.auth' => GetUserFromToken::class,
'jwt.refresh' => RefreshToken::class,
'auth:web' => WebAuthentication::class,
];
public function boot()
{
$this->loadContainersInternalMiddlewares();
}
public function register()
{
}
}
```
**Middleware registration inside the Ship layer (HTTP Kernel) Example:**
```php
<?php
namespace App\Ship\Engine\Kernels;
use App\Ship\Features\Middlewares\Http\ResponseHeadersMiddleware;
use Illuminate\Foundation\Http\Kernel as LaravelHttpKernel;
class ShipHttpKernel extends LaravelHttpKernel
{
protected $middleware = [
\Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode::class,
\Illuminate\Foundation\Http\Middleware\ValidatePostSize::class,
\App\Ship\Features\Middlewares\Http\TrimStrings::class,
\Illuminate\Foundation\Http\Middleware\ConvertEmptyStringsToNull::class,
\Barryvdh\Cors\HandleCors::class,
];
protected $middlewareGroups = [
'web' => [
\App\Ship\Features\Middlewares\Http\EncryptCookies::class,
\Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
\Illuminate\Session\Middleware\StartSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Ship\Features\Middlewares\Http\VerifyCsrfToken::class,
\Illuminate\Routing\Middleware\SubstituteBindings::class,
],
'api' => [
ResponseHeadersMiddleware::class,
'throttle:60,1',
'bindings',
],
];
protected $routeMiddleware = [
'bindings' => \Illuminate\Routing\Middleware\SubstituteBindings::class,
'throttle' => \Illuminate\Routing\Middleware\ThrottleRequests::class,
'can' => \Illuminate\Auth\Middleware\Authorize::class,
'auth' => \Illuminate\Auth\Middleware\Authenticate::class,
];
}
```
| 27.322404 | 207 | 0.684 | eng_Latn | 0.620185 |
e21ebb78082dc65773eb7409f30ddb266da84667 | 117 | md | Markdown | README.md | danielr1996/sass-scale-visualizer | d0e8cd8048b9e3900927cd8ba7a41cf33ba1a60d | [
"MIT"
] | null | null | null | README.md | danielr1996/sass-scale-visualizer | d0e8cd8048b9e3900927cd8ba7a41cf33ba1a60d | [
"MIT"
] | null | null | null | README.md | danielr1996/sass-scale-visualizer | d0e8cd8048b9e3900927cd8ba7a41cf33ba1a60d | [
"MIT"
] | null | null | null | # sass-scale-visualizer
Run `npm start`, adjust the variables in `src/index.scss` and see the changes in the browser | 39 | 92 | 0.769231 | eng_Latn | 0.989524 |
e220daeb7e3264eb5c93f8f65796a2626f519119 | 6,563 | md | Markdown | power-platform/admin/language-collations.md | madeloe/power-platform | c857118b5efa590078d3024e719516aac79913e3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | power-platform/admin/language-collations.md | madeloe/power-platform | c857118b5efa590078d3024e719516aac79913e3 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-10-19T16:52:52.000Z | 2020-10-20T05:29:04.000Z | power-platform/admin/language-collations.md | madeloe/power-platform | c857118b5efa590078d3024e719516aac79913e3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Common Data Service language collations | MicrosoftDocs"
description: "Understand the Common Data Service language collations"
keywords: ""
ms.date: 06/30/2020
ms.service: powerapps
ms.custom:
ms.topic: article
author: "NHelgren"
ms.author: nhelgren
manager: kvivek
ms.reviewer: matp
search.audienceType:
- maker
search.app:
- D365CE
- PowerApps
- Powerplatform
- Flow
---
# Common Data Service language collations
When a Common Data Service environment is created, admins are asked to select which default language they would like to use. This sets the dictionary, time and date
format, number format, and indexing properties for the environment.
Language selections for Common Data Service also include collation settings that are applied to the SQL database, which stores entities and relational data. These collation settings affect things such as recognized characters, sorting, quick find, and filtering. The collations applied to Common Data Service environments are chosen based on the default language selected at the time of environment creation and aren't user configurable. After a collation is in place, it can't be changed.
Collations contain the following case-sensitivity and accent-sensitivity options that can vary from language to language.
|Case and accent option |Collation |Description |
|---------|---------|---------|
|Case insensitive | _CI | All languages have *case insensitive* enabled, which means that "Cafe" and "cafe" are considered the same word. |
|Accent sensitive | _AS | Some languages are *accent sensitive*, which means that "cafe" and "café" are treated as different words. |
|Accent insensitive | _AI | Some languages are *accent insensitive*, which means that "cafe" and "café" are treated as the same word. |
## Language details
A language includes the following information:
- **LCID**: This is an identification number applied to languages in the Microsoft .NET framework to easily identify which language is being used. For example, 1033 is US English.
- **Language**: The actual language. In some cases, names, country, and character dataset information have been added for disambiguation.
- **Collation**: The language collation uses the case-sensitivity and accent-sensitivity options associated with the language (_CI, _AS, _AI) described earlier.
## Language and associated collation used with Common Data Service
| **LCID and language** | **Collation** |
|--------------------------------------------------------|---------------|
| 1025 Arabic | \_CI_AI |
| 1026 Bulgarian - Cyrillic dataset | \_CI_AI |
| 1027 Catalan (Spain) | \_CI_AI |
| 1028 Traditional Chinese Taiwan - Stroke 90 dataset | \_CI_AI |
| 1029 Czech | \_CI_AI |
| 1030 Danish Norwegian | \_CI_AI |
| 1031 German Standard (Germany) | \_CI_AI |
| 1032 Greek | \_CI_AI |
| 1033 English (United States) | \_CI_AI |
| 1035 Finnish Swedish (Finland) | \_CI_AS |
| 1036 French (France) | \_CI_AI |
| 1037 Hebrew | \_CI_AI |
| 1038 Hungarian | \_CI_AI |
| 1040 Italian (Italy) | \_CI_AI |
| 1041 Japanese - Stoke 90 dataset | \_CI_AI |
| 1042 Korean | \_CI_AI |
| 1043 Dutch (Netherlands) | \_CI_AI |
| 1044 Danish Norwegian - Bokmaal | \_CI_AI |
| 1045 Polish | \_CI_AI |
| 1046 Brazilian Portuguese | \_CI_AI |
| 1048 Romanian | \_CI_AS |
| 1049 Russian (Russia) - Cyrillic dataset | \_CI_AI |
| 1050 Croatian | \_CI_AS |
| 1051 Slovak | \_CI_AS |
| 1053 Finnish Swedish (Sweden) | \_CI_AS |
| 1054 Thai | \_CI_AS |
| 1055 Turkish | \_CI_AI |
| 1057 Indonesian | \_CI_AS |
| 1058 Ukrainian | \_CI_AS |
| 1060 Slovenian | \_CI_AS |
| 1061 Estonian | \_CI_AS |
| 1062 Latvian | \_CI_AS |
| 1063 Lithuanian | \_CI_AS |
| 1066 Vietnamese | \_CI_AS |
| 1069 Basque (Spain) | \_CI_AS |
| 1081 Hindi - Latin character dataset | \_CI_AS |
| 1086 Malay | \_CI_AS |
| 1087 Kazakh | \_CI_AS |
| 1110 Galician (Spain) | \_CI_AS |
| 2052 Simplified Chinese (China) - Stroke 90 dataset | \_CI_AI |
| 2055 German (Switzerland) | \_CI_AS |
| 2064 Italian (Switzerland) | \_CI_AS |
| 2070 Portuguese (Portugal) | \_CI_AI |
| 2074 Serbian - Latin character set | \_CI_AS |
| 3076 Traditional Chinese Hong Kong - Stroke 90 dataset | \_CI_AI |
| 3079 German (Austria) | \_CI_AS |
| 3081 English (Australia) | \_CI_AS |
| 3081 English (UK) | \_CI_AS |
| 3082 Modern Spanish (Spain) | \_CI_AI |
| 3084 French (Canada) | \_CI_AI |
| 3098 Serbian - Cyrillic dataset | \_CI_AI |
| 4108 French (Switzerland) | \_CI_AI |
### See also
[Environments overview](environments-overview.md) | 61.915094 | 489 | 0.490477 | eng_Latn | 0.959398 |
e2218fdec3ef87464a5bf6708c644dbb5f776dc5 | 40 | md | Markdown | README.md | peeyush1234/ETFdiction | 9094317348e25ef53ae2e034e5568dc8ac791b1c | [
"MIT"
] | null | null | null | README.md | peeyush1234/ETFdiction | 9094317348e25ef53ae2e034e5568dc8ac791b1c | [
"MIT"
] | null | null | null | README.md | peeyush1234/ETFdiction | 9094317348e25ef53ae2e034e5568dc8ac791b1c | [
"MIT"
] | null | null | null | # ETFdiction
Prediction for buying ETFs
| 13.333333 | 26 | 0.825 | eng_Latn | 0.94847 |
e221a37d2d430bd4228106cc0e0cc4c2b3f3e7d7 | 1,612 | md | Markdown | book/src/SUMMARY.md | jmwill86/criterion.rs | 1d75c14d4e0554c0db9e64f30573b5d2b08d488b | [
"Apache-2.0",
"MIT"
] | 1,781 | 2018-12-10T15:12:15.000Z | 2022-03-31T10:28:06.000Z | book/src/SUMMARY.md | jmwill86/criterion.rs | 1d75c14d4e0554c0db9e64f30573b5d2b08d488b | [
"Apache-2.0",
"MIT"
] | 321 | 2018-12-09T09:00:43.000Z | 2022-03-30T03:47:26.000Z | book/src/SUMMARY.md | jmwill86/criterion.rs | 1d75c14d4e0554c0db9e64f30573b5d2b08d488b | [
"Apache-2.0",
"MIT"
] | 159 | 2017-09-11T00:15:39.000Z | 2022-03-27T17:11:27.000Z | # Summary
- [Criterion.rs](./criterion_rs.md)
- [Getting Started](./getting_started.md)
- [User Guide](./user_guide/user_guide.md)
- [Migrating from libtest](./user_guide/migrating_from_libtest.md)
- [Command-Line Output](./user_guide/command_line_output.md)
- [Command-Line Options](./user_guide/command_line_options.md)
- [HTML Report](./user_guide/html_report.md)
- [Plots & Graphs](./user_guide/plots_and_graphs.md)
- [Benchmarking With Inputs](./user_guide/benchmarking_with_inputs.md)
- [Advanced Configuration](./user_guide/advanced_configuration.md)
- [Comparing Functions](./user_guide/comparing_functions.md)
- [CSV Output](./user_guide/csv_output.md)
- [Known Limitations](./user_guide/known_limitations.md)
- [Bencher Compatibility Layer](./user_guide/bencher_compatibility.md)
- [Timing Loops](./user_guide/timing_loops.md)
- [Custom Measurements](./user_guide/custom_measurements.md)
- [Profiling](./user_guide/profiling.md)
- [Custom Test Framework](./user_guide/custom_test_framework.md)
- [Benchmarking async functions](./user_guide/benchmarking_async.md)
- [cargo-criterion](./cargo_criterion/cargo_criterion.md)
- [Configuring cargo-criterion](./cargo_criterion/configuring_cargo_criterion.md)
- [External Tools](./cargo_criterion/external_tools.md)
- [Iai](./iai/iai.md)
- [Getting Started with Iai](./iai/getting_started.md)
- [Comparison to Criterion.rs](./iai/comparison.md)
- [Analysis Process](./analysis.md)
- [Frequently Asked Questions](./faq.md)
- [Migrating from 0.2.* to 0.3.*](./migrating_0_2_to_0_3.md) | 53.733333 | 83 | 0.73201 | yue_Hant | 0.347512 |
e221d05f561f20ac912e2320731da56807e69d18 | 7,509 | md | Markdown | docs/framework/tools/winmdexp-exe-windows-runtime-metadata-export-tool.md | Athosone/docs.fr-fr | 83c2fd74def907edf5da4a31fee2d08133851d2f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/tools/winmdexp-exe-windows-runtime-metadata-export-tool.md | Athosone/docs.fr-fr | 83c2fd74def907edf5da4a31fee2d08133851d2f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/tools/winmdexp-exe-windows-runtime-metadata-export-tool.md | Athosone/docs.fr-fr | 83c2fd74def907edf5da4a31fee2d08133851d2f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Winmdexp.exe (outil d'exportation de métadonnées Windows Runtime)
ms.date: 03/30/2017
helpviewer_keywords:
- Windows Runtime Metadata Export Tool
- Winmdexp.exe
ms.assetid: d2ce0683-343d-403e-bb8d-209186f7a19d
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 5803ef1d174c3e3a5e8e18b130e6b7a0c65eac81
ms.sourcegitcommit: 0be8a279af6d8a43e03141e349d3efd5d35f8767
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/18/2019
ms.locfileid: "59216340"
---
# <a name="winmdexpexe-windows-runtime-metadata-export-tool"></a>Winmdexp.exe (outil d'exportation de métadonnées Windows Runtime)
L'outil Metadata Export [!INCLUDE[wrt](../../../includes/wrt-md.md)] (Winmdexp.exe) transforme un module .NET Framework en un fichier qui contient des métadonnées [!INCLUDE[wrt](../../../includes/wrt-md.md)]. Bien que les assemblys .NET Framework et les fichiers de métadonnées [!INCLUDE[wrt](../../../includes/wrt-md.md)] utilisent le même format physique, il y a des différences dans le contenu des tables de métadonnées. Autrement dit, les assemblys .NET Framework ne sont pas utilisables automatiquement comme composants [!INCLUDE[wrt](../../../includes/wrt-md.md)]. Le processus qui transforme un module .NET Framework en composant [!INCLUDE[wrt](../../../includes/wrt-md.md)] s’appelle *exportation*. Dans [!INCLUDE[net_v45](../../../includes/net-v45-md.md)] et [!INCLUDE[net_v451](../../../includes/net-v451-md.md)], le fichier de métadonnées Windows (.winmd) résultant contient à la fois les métadonnées et l'implémentation.
Quand vous utilisez le modèle **[!INCLUDE[wrt](../../../includes/wrt-md.md)] Composant**, qui se trouve dans le **Windows Store** pour C# et Visual Basic dans Visual Studio 2013 ou Visual Studio 2012, la cible du compilateur est un fichier .winmdobj et une étape de génération appelle Winmdexp.exe pour exporter le fichier .winmdobj vers un fichier .winmd. C'est la méthode recommandée pour générer un composant [!INCLUDE[wrt](../../../includes/wrt-md.md)]. Utilisez Winmdexp.exe directement lorsque vous voulez contrôler davantage le processus de génération que Visual Studio le permet.
Cet outil est installé automatiquement avec Visual Studio. Pour exécuter l’outil, utilisez l’invite de commandes développeur pour Visual Studio (ou l’invite de commandes Visual Studio dans Windows 7). Pour plus d'informations, consultez [Invites de commandes](../../../docs/framework/tools/developer-command-prompt-for-vs.md).
À l'invite de commandes, tapez le texte suivant :
## <a name="syntax"></a>Syntaxe
```
winmdexp [options] winmdmodule
```
## <a name="parameters"></a>Paramètres
|Argument ou option|Description|
|------------------------|-----------------|
|`winmdmodule`|Spécifie le module (.winmdobj) à exporter. Un seul module est autorisé. Pour créer ce module, utilisez l'option de compilateur `/target` avec la cible `winmdobj`. Consultez [/target:winmdobj (Options du compilateur C#)](~/docs/csharp/language-reference/compiler-options/target-winmdobj-compiler-option.md) ou [/target (Visual Basic)](~/docs/visual-basic/reference/command-line-compiler/target.md).|
|`/docfile:` `docfile`<br /><br /> `/d:` `docfile`|Spécifie le fichier de documentation XML de sortie que Winmdexp.exe produira. Dans [!INCLUDE[net_v45](../../../includes/net-v45-md.md)], le fichier de sortie est essentiellement identique au fichier de documentation XML d'entrée.|
|`/moduledoc:` `docfile`<br /><br /> `/md:` `docfile`|Spécifie le nom du fichier de documentation XML que le compilateur a produit avec `winmdmodule`.|
|`/modulepdb:` `symbolfile`<br /><br /> `/mp:` `symbolfile`|Indique le nom du fichier de la base de données du programme (PDB) qui contient les symboles pour `winmdmodule`.|
|`/nowarn:` `warning`|Supprime le nombre d'avertissements indiqué. Pour *warning*, fournissez uniquement la partie numérique du code d'erreur, sans zéro significatif.|
|`/out:` `file`<br /><br /> `/o:` `file`|Indique le nom du fichier de métadonnées Windows (.winmd) de sortie.|
|`/pdb:` `symbolfile`<br /><br /> `/p:` `symbolfile`|Spécifie le nom du fichier de la base de données du programme (PDB) de sortie qui contiendra les symboles pour le fichier de métadonnées Windows (.winmd) exporté.|
|`/reference:` `winmd`<br /><br /> `/r:` `winmd`|Indique un fichier de métadonnées (.winmd ou assembly) à référencer lors de l'exportation. Si vous utilisez les assemblys de référence dans « \Program files (x86)\Reference Assemblies\Microsoft\Framework\\.NETCore\v4.5 » (« \Program Files\\... » sur les ordinateurs 32 bits), incluez les références à System.Runtime.dll et mscorlib.dll.|
|`/utf8output`|Spécifie que les messages de sortie utilisent un encodage en UTF-8.|
|`/warnaserror+`|Spécifie que tous les avertissements doivent être traités comme des erreurs.|
|**@** `responsefile`|Spécifie un fichier de réponse (.rsp) qui contient des options (et éventuellement `winmdmodule`). Chaque ligne dans `responsefile` doit contenir un seul argument ou une seule option.|
## <a name="remarks"></a>Remarques
Winmdexp.exe n'est pas conçu pour convertir un assembly. NET Framework arbitraire en un fichier .winmd. Il requiert un module compilé avec l'option `/target:winmdobj`, et des restrictions supplémentaires s'appliquent. La plus importante de ces restrictions est que tous les types qui sont exposés dans la surface API de l'assembly doivent être de type [!INCLUDE[wrt](../../../includes/wrt-md.md)]. Pour plus d'informations, consultez la section « Déclaration de types dans les composants Windows Runtime » de l'article [Création de composants Windows Runtime en C# et Visual Basic](https://go.microsoft.com/fwlink/p/?LinkID=238313) dans le Centre de développement Windows.
Lorsque vous écrivez une application [!INCLUDE[win8_appname_long](../../../includes/win8-appname-long-md.md)] ou un composant [!INCLUDE[wrt](../../../includes/wrt-md.md)] avec C# ou Visual Basic, .NET Framework fournit le support pour effectuer la programmation avec [!INCLUDE[wrt](../../../includes/wrt-md.md)] de manière plus naturelle. Ce sujet est abordé dans l’article [Prise en charge .NET Framework pour les applications Windows Store et Windows Runtime](../../../docs/standard/cross-platform/support-for-windows-store-apps-and-windows-runtime.md). Dans le processus, certains types de [!INCLUDE[wrt](../../../includes/wrt-md.md)] utilisés couramment sont mappés en types .NET Framework. Winmdexp.exe inverse ce processus et produit une surface API qui utilise les types [!INCLUDE[wrt](../../../includes/wrt-md.md)] correspondants. Par exemple, les types qui sont construits à partir de l’interface <xref:System.Collections.Generic.IList%601> sont mappés à des types qui sont construits à partir de l’interface [!INCLUDE[wrt](../../../includes/wrt-md.md)][IVector\<T>](https://go.microsoft.com/fwlink/p/?LinkId=251132).
## <a name="see-also"></a>Voir aussi
- [Prise en charge .NET Framework pour les applications Windows Store et Windows Runtime](../../../docs/standard/cross-platform/support-for-windows-store-apps-and-windows-runtime.md)
- [Création de composants Windows Runtime en C# et Visual Basic](https://go.microsoft.com/fwlink/p/?LinkID=238313)
- [Messages d’erreur Winmdexp.exe](../../../docs/framework/tools/winmdexp-exe-error-messages.md)
- [Outils de génération, de déploiement et de configuration (.NET Framework)](https://docs.microsoft.com/previous-versions/dotnet/netframework-4.0/dd233108(v=vs.100))
| 127.271186 | 1,129 | 0.743774 | fra_Latn | 0.871518 |
e223c72461bfaa73c3b1e304afb35eb274fdf394 | 916 | md | Markdown | index.md | GCampbell52/GCampbell52.github.io | a2bdf633f2782d52c4f0590ea34a79d99a0036fa | [
"MIT"
] | null | null | null | index.md | GCampbell52/GCampbell52.github.io | a2bdf633f2782d52c4f0590ea34a79d99a0036fa | [
"MIT"
] | null | null | null | index.md | GCampbell52/GCampbell52.github.io | a2bdf633f2782d52c4f0590ea34a79d99a0036fa | [
"MIT"
] | null | null | null | ---
layout: page
excerpt: "About Me..."
---
I started my career in English literature, and then moved to Library and Infomration Science in the mid 1990s. My research interests are diverse, but they center around issues of knowledge organization, as it applies to a variety of information settings and interest groups. I am a member of the Canadian Association for Information Science, the Association for Information Science and Technology, and the International Society for Knowledge Organization. I have presented my work in varioius locations in North America, Europe and South America. My current research is funded by the Social Sciences and Humanities Research Council of Canada.
## Current Interests and Projects:
- Information Services for Individuals Living with Dementia
- Classification and Knowledge Organization
- Bibliographic Description
- LGBTQ+ Information Issues
- Big Data and Linked Data
| 61.066667 | 646 | 0.806769 | eng_Latn | 0.989997 |
e223cde9eec1c3a5d313d0b9114b87bf28e65a9b | 1,744 | md | Markdown | rfi-questions.md | 18F/Head-Start-HSES-Procurement | ec24dfacc83ea4499a2cf706cf35586b85478d02 | [
"CC0-1.0"
] | null | null | null | rfi-questions.md | 18F/Head-Start-HSES-Procurement | ec24dfacc83ea4499a2cf706cf35586b85478d02 | [
"CC0-1.0"
] | null | null | null | rfi-questions.md | 18F/Head-Start-HSES-Procurement | ec24dfacc83ea4499a2cf706cf35586b85478d02 | [
"CC0-1.0"
] | 1 | 2021-10-21T17:55:47.000Z | 2021-10-21T17:55:47.000Z | # HSES Cost RFI Questions
Note: All responses must be submitted using [this RFI response form on Google](https://docs.google.com/forms/d/e/1FAIpQLSdS5FCiLwibwZyjapR5LS0XPGk-Y8aK_Uq1EqISwG9smDbDyQ/viewform) no later than 12pm EDT on 10/26/2021.
## Section 1 - Submitter's email address
* What is your email address? (required)
## Section 2 - Company information
* Company name (required)
* Point of contact's name (required)
* Point of contact's email address (required)
* Dun & Bradstreet (DUNS) Number (required)
* What federal contracting vehicles and/or schedules are you on (e.g. GSA Schedule 70, NITAAC CIO-SP3 SB, NASA SEWP, etc.)? (required - maximum character count 1500)
* What is your Small Business Administration (SBA) small business status? Please check all that apply. (required)
## Section 3 - Questions about the Cost RFI
1. How would you staff and structure an engagement like this? (5000 character limit)
2. Historically, this contract has had an annual cost of $10M. Based on how you would staff and structure your teams for this work, is this budget realistic? Why or why not? (3000 character limit)
3. Is there anything else you think would be useful for us to consider or information that would be helpful in submitting a future bid? (3000 character limit)
4. Are there any labor categories specifically used for these types of services that we absolutely must require? (3000 character limit)
## Section 4 - Interest in this work
* How likely are you to respond to a potential future solicitation related to this need? (required)
* Would you like to be notified via email when the Draft RFQ and formal RFQ are issued?
* What email addresses, if other than the primary contact, would you like to be notified at?
| 58.133333 | 217 | 0.772362 | eng_Latn | 0.998106 |
e22457e2a6c3a685aa6cb0d8e9a70b1daece8a99 | 1,528 | md | Markdown | GraphEmbedding/code/Word2Vec/README.md | dustasa/GraphNetwork | 43a5e3dd69f9a99b3a0311796398d1880b75c237 | [
"Apache-2.0"
] | 2,431 | 2015-01-01T12:25:57.000Z | 2022-03-30T07:54:01.000Z | README.md | java66liu/word2vec | a76a75fd8962331d40d9afb4014975db8afc1a21 | [
"Apache-2.0"
] | 54 | 2015-01-19T01:38:36.000Z | 2022-01-15T09:55:43.000Z | README.md | java66liu/word2vec | a76a75fd8962331d40d9afb4014975db8afc1a21 | [
"Apache-2.0"
] | 686 | 2015-01-02T15:18:31.000Z | 2022-03-29T03:01:32.000Z | # word2vec
[](https://pypi.org/project/word2vec/)
[](http://github.com/danielfrg/word2vec/actions/workflows/test.yml)
[](https://codecov.io/gh/danielfrg/word2vec?branch=master)
[](http://github.com/danielfrg/word2vec/blob/master/LICENSE.txt)
Python interface to Google word2vec.
Training is done using the original C code, other functionality is pure Python with numpy.
## Installation
```
pip install word2vec
```
### Compilation
The installation requires to compile the original C code using `gcc`.
You can override the compilation flags if needed:
```
WORD2VEC_CFLAGS='-march=corei7' pip install word2vec
```
**Windows:** There is basic some support for this support based on this [win32 port](https://github.com/zhangyafeikimi/word2vec-win32).
## Usage
Example notebook: [word2vec](http://nbviewer.ipython.org/urls/raw.github.com/danielfrg/word2vec/master/examples/word2vec.ipynb)
The default functionality from word2vec is available with the following commands:
- `word2vec`
- `word2phrase`
- `word2vec-distance`
- `word2vec-word-analogy`
- `word2vec-compute-accuracy`
Experimental functionality on doc2vec can be found in this example:
[doc2vec](http://nbviewer.ipython.org/urls/raw.github.com/danielfrg/word2vec/master/examples/doc2vec.ipynb)
| 35.534884 | 141 | 0.772251 | eng_Latn | 0.530849 |
e225882eef1224e04078a85e57068ea503949824 | 6,451 | md | Markdown | docs/database-engine/availability-groups/windows/start-data-movement-on-an-always-on-secondary-database-sql-server.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/database-engine/availability-groups/windows/start-data-movement-on-an-always-on-secondary-database-sql-server.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/database-engine/availability-groups/windows/start-data-movement-on-an-always-on-secondary-database-sql-server.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Iniciar el movimiento de datos en una base de datos secundaria AlwaysOn (SQL Server) | Microsoft Docs
ms.custom: ''
ms.date: 05/17/2016
ms.prod: sql
ms.reviewer: ''
ms.technology: high-availability
ms.topic: conceptual
helpviewer_keywords:
- Availability Groups [SQL Server], databases
ms.assetid: 498eb3fb-6a43-434d-ad95-68a754232c45
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: da9aa6724aadb63112414cd2a0055ccf5d865d81
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/01/2018
ms.locfileid: "47595563"
---
# <a name="start-data-movement-on-an-always-on-secondary-database-sql-server"></a>Iniciar el movimiento de datos en una base de datos secundaria AlwaysOn (SQL Server)
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]
Este tema contiene información sobre cómo iniciar la sincronización de datos después de agregar una base de datos a un grupo de disponibilidad AlwaysOn. Para cada nueva réplica principal, las bases de datos secundarias deben estar preparadas en las instancias de servidor que hospedan las réplicas secundarias. Después, cada una de estas bases de datos secundarias se debe unir manualmente al grupo de disponibilidad.
> [!NOTE]
> Si las rutas de acceso de archivos son idénticas en cada instancia de servidor que hospeda una réplica de disponibilidad para un grupo de disponibilidad, el [Asistente para nuevo grupo de disponibilidad](../../../database-engine/availability-groups/windows/use-the-availability-group-wizard-sql-server-management-studio.md), el [Asistente para agregar réplica al grupo de disponibilidad](../../../database-engine/availability-groups/windows/use-the-add-replica-to-availability-group-wizard-sql-server-management-studio.md)o el [Asistente para agregar base de datos al grupo de disponibilidad](../../../database-engine/availability-groups/windows/availability-group-add-database-to-group-wizard.md) pueden iniciar automáticamente la sincronización de datos.
Para iniciar manualmente la sincronización de datos, debe conectarse, a su vez, a cada instancia de servidor que esté hospedando una réplica secundaria para el grupo de disponibilidad y completar los pasos siguientes:
1. Restaure las copias de seguridad actuales de cada base de datos principal y del registro de transacciones (mediante RESTORE WITH NORECOVERY). Puede usar alguna de las alternativas siguientes:
- Restaure manualmente una copia de seguridad reciente de la base de datos principal utilizando RESTORE WITH NORECOVERY y restaure después cada copia de seguridad de registros posterior utilizando RESTORE WITH NORECOVERY. Realice esta secuencia de restauración en cada instancia del servidor que hospeda una réplica secundaria del grupo de disponibilidad.
**Para obtener más información:**
[Preparar manualmente una base de datos secundaria para un grupo de disponibilidad (SQL Server)](../../../database-engine/availability-groups/windows/manually-prepare-a-secondary-database-for-an-availability-group-sql-server.md)
- Si va a agregar una o varias bases de datos principales de trasvase de registros en un grupo de disponibilidad, es posible que pueda migrar una o varias de sus bases de datos secundarias correspondientes de los grupos de trasvase de registros a los grupos de disponibilidad AlwaysOn. Para migrar una base de datos secundaria de trasvase de registros, es necesario usar el mismo nombre de base de datos que la base de datos principal que reside en una instancia de servidor que hospeda una réplica secundaria para el grupo de disponibilidad. Además, el grupo de disponibilidad debe configurarse de modo que la réplica principal se prefiera para las copias de seguridad y sea candidata para realizar copias de seguridad (es decir, que la prioridad de copia de seguridad sea > 0). Una vez que el trabajo de copia de seguridad se ejecute en la base de datos principal, tendrá que deshabilitar el trabajo de copia de seguridad y, una vez que el trabajo de restauración se haya ejecutado en una base de datos secundaria, deshabilitar el trabajo de restauración.
> [!NOTE]
> Después de crear todas las bases de datos secundarias para el grupo de disponibilidad, si desea realizar copias de seguridad en las réplicas secundarias, deberá configurar de nuevo la preferencia de copia de seguridad automatizada del grupo de disponibilidad.
**Para obtener más información:**
[Requisitos previos para migrar desde grupos de trasvase de registros a grupos de disponibilidad AlwaysOn (SQL Server)](../../../database-engine/availability-groups/windows/prereqs-migrating-log-shipping-to-always-on-availability-groups.md)
[Configurar la copia de seguridad en réplicas de disponibilidad (SQL Server)](../../../database-engine/availability-groups/windows/configure-backup-on-availability-replicas-sql-server.md)
2. En cuanto sea posible, una cada base de datos secundaria preparada al grupo de disponibilidad.
**Para obtener más información:**
[Combinar una base de datos secundaria con un grupo de disponibilidad (SQL Server)](../../../database-engine/availability-groups/windows/join-a-secondary-database-to-an-availability-group-sql-server.md)
## <a name="LaunchWiz"></a> Tareas relacionadas
- [Usar el cuadro de diálogo Nuevo grupo de disponibilidad (SQL Server Management Studio)](../../../database-engine/availability-groups/windows/use-the-new-availability-group-dialog-box-sql-server-management-studio.md)
- [Usar el Asistente para agregar una réplica al grupo de disponibilidad (SQL Server Management Studio)](../../../database-engine/availability-groups/windows/use-the-add-replica-to-availability-group-wizard-sql-server-management-studio.md)
- [Usar el Asistente para agregar una base de datos al grupo de disponibilidad (SQL Server Management Studio)](../../../database-engine/availability-groups/windows/availability-group-add-database-to-group-wizard.md)
## <a name="see-also"></a>Ver también
[Información general de los grupos de disponibilidad AlwaysOn (SQL Server)](../../../database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server.md)
| 93.492754 | 1,065 | 0.776004 | spa_Latn | 0.9534 |
e22596b100c7f0158164387fdb16decdc27d069a | 11,970 | md | Markdown | articles/hdinsight/spark/apache-spark-ipython-notebook-machine-learning.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/spark/apache-spark-ipython-notebook-machine-learning.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/spark/apache-spark-ipython-notebook-machine-learning.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Zelf studie: een Spark-machine learning-app bouwen-Azure HDInsight'
description: 'Zelf studie: stapsgewijze instructies voor het bouwen van Apache Spark machine learning toepassing in HDInsight Spark-clusters met behulp van Jupyter-notebook.'
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: tutorial
ms.custom: hdinsightactive,mvc
ms.date: 04/07/2020
ms.openlocfilehash: 963f5bd4dfdd9dda78a437bdb1111c9eec2795dc
ms.sourcegitcommit: 58faa9fcbd62f3ac37ff0a65ab9357a01051a64f
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 04/29/2020
ms.locfileid: "80878439"
---
# <a name="tutorial-build-an-apache-spark-machine-learning-application-in-azure-hdinsight"></a>Zelf studie: een Apache Spark machine learning-toepassing bouwen in azure HDInsight
In deze zelfstudie leert u hoe u het [Jupyter-notebook](https://jupyter.org/) gebruikt voor het bouwen van een [Apache Spark](./apache-spark-overview.md)-toepassing voor Machine Learning voor Azure HDInsight.
[MLlib](https://spark.apache.org/docs/latest/ml-guide.html) is een aanpas bare machine learning bibliotheek van Spark die bestaat uit algemene leer algoritmen en hulpprogram ma's. (Classificatie, regressie, Clustering, gezamenlijke filtering en driedimensionale verlaging. Ook onderliggende optimalisatie primitieven.)
In deze zelfstudie leert u het volgende:
> [!div class="checklist"]
> * Een Apache Spark-toepassing voor Machine Learning ontwikkelen
## <a name="prerequisites"></a>Vereisten
* Een Apache Spark-cluster in HDInsight. Zie [een Apache Spark-cluster maken](./apache-spark-jupyter-spark-sql-use-portal.md).
* Weten hoe u Jupyter Notebook gebruikt met Spark on HDInsight. Zie [gegevens laden en query's uitvoeren met Apache Spark op HDInsight](./apache-spark-load-data-run-query.md)voor meer informatie.
## <a name="understand-the-data-set"></a>Informatie over de gegevensset
De toepassing maakt gebruik van de voorbeeld gegevens van **HVAC. CSV** die standaard beschikbaar zijn op alle clusters. Het bestand bevindt `\HdiSamples\HdiSamples\SensorSampleData\hvac`zich op. De gegevens hebben betrekking op de gewenste temperatuur en de werkelijke temperatuur in enkele gebouwen waarin HVAC-systemen zijn geïnstalleerd. De kolom **System** bevat de id van het betreffende systeem en de kolom **SystemAge** geeft het aantal jaren aan dat het HVAC-systeem wordt gebruikt in het gebouw. U kunt voor spelt of een gebouw Hotter of koud is op basis van de doel temperatuur, de opgegeven systeem-ID en de leeftijd van het systeem.

## <a name="develop-a-spark-machine-learning-application-using-spark-mllib"></a>Een Spark-toepassing voor machine learning ontwikkelen met Spark MLib
Deze toepassing maakt gebruik van een Spark [ml-pijp lijn](https://spark.apache.org/docs/2.2.0/ml-pipeline.html) om een document classificatie uit te voeren. ML-pijp lijnen bieden een uniforme set Api's op hoog niveau die zijn gebouwd boven op DataFrames. Met de DataFrames kunnen gebruikers praktische machine learning pijp lijnen maken en afstemmen. In de pijplijn splitst u het document op in woorden, converteert u de woorden naar een numerieke functievector en bouwt u ten slotte een voorspellend model met behulp van de functievectoren en labels. Voer de volgende stappen uit om de toepassing te maken.
1. Maak een Jupyter-notebook met behulp van de PySpark-kernel. Zie [Een Jupyter-notebook maken](./apache-spark-jupyter-spark-sql.md#create-a-jupyter-notebook) voor de instructies.
1. Importeer de typen die nodig zijn voor dit scenario. Plak het volgende codefragment in een lege cel en druk op **Shift+Enter**.
```PySpark
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer
from pyspark.sql import Row
import os
import sys
from pyspark.sql.types import *
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.regression import LabeledPoint
from numpy import array
```
1. Laad de gegevens (hvac.csv), parseer de gegevens en gebruik ze voor het trainen van het model.
```PySpark
# Define a type called LabelDocument
LabeledDocument = Row("BuildingID", "SystemInfo", "label")
# Define a function that parses the raw CSV file and returns an object of type LabeledDocument
def parseDocument(line):
values = [str(x) for x in line.split(',')]
if (values[3] > values[2]):
hot = 1.0
else:
hot = 0.0
textValue = str(values[4]) + " " + str(values[5])
return LabeledDocument((values[6]), textValue, hot)
# Load the raw HVAC.csv file, parse it using the function
data = sc.textFile("/HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")
documents = data.filter(lambda s: "Date" not in s).map(parseDocument)
training = documents.toDF()
```
In het codefragment definieert u een functie waarmee de werkelijke temperatuur wordt vergeleken met de gewenste temperatuur. Als de werkelijke temperatuur hoger is, is het warm in het gebouw, aangegeven door de waarde **1.0**. Anders is het koud in het gebouw, wat wordt aangegeven door de waarde **0.0**.
1. Configureer de pijplijn voor de Spark-toepassing voor machine learning. Deze bestaat uit drie fasen: tokenizer, hashingTF en lr.
```PySpark
tokenizer = Tokenizer(inputCol="SystemInfo", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.01)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
```
Zie [Apache Spark machine learning pipeline](https://spark.apache.org/docs/latest/ml-pipeline.html) (Pijplijn voor machine learning van Apache Spark) voor meer informatie over pijplijnen en hoe ze werken.
1. Koppel de pijplijn aan het trainingsdocument.
```PySpark
model = pipeline.fit(training)
```
1. Controleer het trainingsdocument om de voortgang met de toepassing te controleren.
```PySpark
training.show()
```
De uitvoer is vergelijkbaar met:
```output
+----------+----------+-----+
|BuildingID|SystemInfo|label|
+----------+----------+-----+
| 4| 13 20| 0.0|
| 17| 3 20| 0.0|
| 18| 17 20| 1.0|
| 15| 2 23| 0.0|
| 3| 16 9| 1.0|
| 4| 13 28| 0.0|
| 2| 12 24| 0.0|
| 16| 20 26| 1.0|
| 9| 16 9| 1.0|
| 12| 6 5| 0.0|
| 15| 10 17| 1.0|
| 7| 2 11| 0.0|
| 15| 14 2| 1.0|
| 6| 3 2| 0.0|
| 20| 19 22| 0.0|
| 8| 19 11| 0.0|
| 6| 15 7| 0.0|
| 13| 12 5| 0.0|
| 4| 8 22| 0.0|
| 7| 17 5| 0.0|
+----------+----------+-----+
```
Vergelijk de uitvoer met het onbewerkte CSV-bestand. De eerste rij van het CSV-bestand bevat bijvoorbeeld deze gegevens:

U ziet hoe de werkelijke temperatuur lager is dan de richttemperatuur, wat betekent dat het koud is in het gebouw. De waarde voor **Label** in de eerste rij is **0,0**, wat betekent dat het gebouw niet heet.
1. Bereid een gegevensset voor die kan worden uitgevoerd op het getrainde model. Hiervoor geeft u een systeem-ID en systeem leeftijd op (aangeduid als **System info** in de training-uitvoer). Het model voor spelt of het gebouw met die systeem-ID en de leeftijd van het systeem wordt Hotter (aangeduid met 1,0) of koelen (aangeduid met 0,0).
```PySpark
# SystemInfo here is a combination of system ID followed by system age
Document = Row("id", "SystemInfo")
test = sc.parallelize([(1L, "20 25"),
(2L, "4 15"),
(3L, "16 9"),
(4L, "9 22"),
(5L, "17 10"),
(6L, "7 22")]) \
.map(lambda x: Document(*x)).toDF()
```
1. Genereer als laatste voorspellingen op basis van de testgegevens.
```PySpark
# Make predictions on test documents and print columns of interest
prediction = model.transform(test)
selected = prediction.select("SystemInfo", "prediction", "probability")
for row in selected.collect():
print row
```
De uitvoer is vergelijkbaar met:
```output
Row(SystemInfo=u'20 25', prediction=1.0, probability=DenseVector([0.4999, 0.5001]))
Row(SystemInfo=u'4 15', prediction=0.0, probability=DenseVector([0.5016, 0.4984]))
Row(SystemInfo=u'16 9', prediction=1.0, probability=DenseVector([0.4785, 0.5215]))
Row(SystemInfo=u'9 22', prediction=1.0, probability=DenseVector([0.4549, 0.5451]))
Row(SystemInfo=u'17 10', prediction=1.0, probability=DenseVector([0.4925, 0.5075]))
Row(SystemInfo=u'7 22', prediction=0.0, probability=DenseVector([0.5015, 0.4985]))
```
Bekijk de eerste rij in de voor spelling. Voor een airco systeem met ID 20 en systeem leeftijd van 25 jaar is het gebouw Hot (**Prediction = 1.0**). De eerste waarde voor DenseVector (0.49999) komt overeen met de voorspelling 0.0 en de tweede waarde (0.5001) komt overeen met de voorspelling 1.0. In de uitvoer toont het model **prediction=1.0**, ook al is de tweede waarde maar een fractie hoger.
1. Sluit het notebook om de resources vrij te geven. Selecteer hiervoor **Sluiten en stoppen** in het menu **Bestand** van het notebook. Met deze actie wordt het notebook afgesloten en gesloten.
## <a name="use-anaconda-scikit-learn-library-for-spark-machine-learning"></a>scikit-learn-bibliotheek van Anaconda gebruiken voor een Spark-toepassing voor machine learning
Apache Spark-clusters in HDInsight bevatten Anaconda-bibliotheken, waaronder de bibliotheek **scikit-learn** voor machine learning. De bibliotheek bevat ook verschillende gegevenssets die u kunt gebruiken om rechtstreeks vanuit een Jupyter-notebook voorbeeldtoepassingen te bouwen. Zie [https://scikit-learn.org/stable/auto_examples/index.html](https://scikit-learn.org/stable/auto_examples/index.html)voor voor beelden van het gebruik van de scikit-Learn-bibliotheek.
## <a name="clean-up-resources"></a>Resources opschonen
Als u deze toepassing niet wilt blijven gebruiken, verwijdert u het cluster dat u hebt gemaakt met de volgende stappen:
1. Meld u aan bij de [Azure-portal](https://portal.azure.com/).
1. Typ **HDInsight** in het **Zoekvak** bovenaan.
1. Selecteer onder **Services** de optie **HDInsight-clusters**.
1. Selecteer in de lijst met HDInsight-clusters die wordt weer gegeven, de **...** naast het cluster dat u voor deze zelf studie hebt gemaakt.
1. Selecteer **verwijderen**. Selecteer **Ja**.

## <a name="next-steps"></a>Volgende stappen
In deze zelf studie hebt u geleerd hoe u de Jupyter Notebook kunt gebruiken om een Apache Spark machine learning-toepassing voor Azure HDInsight te maken. Ga naar de volgende zelfstudie voor meer informatie over het gebruik van IntelliJ IDEA voor Spark-taken.
> [!div class="nextstepaction"]
> [Een scala maven-toepassing maken met behulp van IntelliJ](./apache-spark-create-standalone-application.md)
| 56.197183 | 645 | 0.712114 | nld_Latn | 0.994095 |
e2261dbe22f333e59f6e3fdb18824f612b18a43f | 5,542 | md | Markdown | CHANGELOG.md | giantswarm/rbac-operator | 0d4e7d002d20e64d5000b835c73910d7b8e1c0e1 | [
"Apache-2.0"
] | 1 | 2020-05-31T09:27:17.000Z | 2020-05-31T09:27:17.000Z | CHANGELOG.md | giantswarm/rbac-operator | 0d4e7d002d20e64d5000b835c73910d7b8e1c0e1 | [
"Apache-2.0"
] | 60 | 2020-03-10T14:30:57.000Z | 2022-03-31T06:30:56.000Z | CHANGELOG.md | giantswarm/rbac-operator | 0d4e7d002d20e64d5000b835c73910d7b8e1c0e1 | [
"Apache-2.0"
] | 1 | 2021-02-26T12:55:14.000Z | 2021-02-26T12:55:14.000Z | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.16.0] - 2021-10-07
### Added
- Provide access for customer automation SA `Organization` CR management.
## [0.15.0] - 2021-05-25
### Changed
- Prepare Helm values to use config CR.
- Update architect-orb to v3.0.0.
## [0.14.0] - 2021-04-26
### Added
- Provide customer admin group access to manage `Organization` CRs.
## [0.13.0] - 2021-04-26
### Changed
- Update bootstrap resources on restart.
## [0.12.0] - 2021-03-30
### Changed
- Install all the `rbac-operator` resources by default.
## [0.11.0] - 2021-03-25
### Changed
- Label default ClusterRoles, which needs to be displayed in Happa.
### Deleted
- Remove selector in `orgpermissions` controller.
## [0.10.0] - 2021-03-23
### Changed
- Update `read-all` ClusterRole on every bootstrap.
- Extend `rbac-operator` service account ClusterRole permissions to namespaces.
## [0.9.0] - 2021-03-22
### Changed
- Move management of static resources from Helm into code.
- Remove `view-all` related roles/bindings.
- Bind customer admin group to `cluster-admin` cluster role in target organization namespace.
## [0.8.0] - 2020-11-19
### Added
- `clusterrolebinding` for GiantSwarm staff cluster-admin access.
## [0.7.0] - 2020-10-21
### Added
- Update Roles when their Rules are not up to date.
### Fixed
- Update `RoleBindings` only when necessary.
## [0.6.0] - 2020-09-24
### Changed
- Updated Kubernetes dependencies to v1.18.9 and operatorkit to v2.0.1.
### Added
- Add monitoring labels.
## [0.5.0] - 2020-08-14
### Changed
- Updated backward incompatible Kubernetes dependencies to v1.18.5.
## [0.4.6] - 2020-08-13
### Changed
- Update operatorkit to v1.2.0 and k8sclient to v3.1.2.
## [0.4.5] - 2020-07-30
### Fixed
- Fix `roleRef` in `RoleBinding`/`tenant-admin`.
## [0.4.4] - 2020-07-30
### Fixed
- Fix `roleRef` in `ClusterRoleBinding`/`tenant-admin-view-all`.
## [0.4.3] - 2020-07-29
### Added
- Add github actions for release automation.
### Changed
- Update helm chart to current standard
- Install `serviceaccount` in all installations.
## [0.4.2] - 2020-05-03
### Changed
- Change `rbac` controller label selector to match organization namespaces as well.
## [0.4.1]
- Fix `namespacelabeler` controller label selector.
- Fix `role` name reference in OIDC group and service accounts `rolebinding`.
## [0.4.0]
### Changed
- Push tags to *aliyun* repository.
- Move `rbac` controller code into `rbac` package.
- Add `namespacelabeler` controller, which labels legacy namespaces.
- Add `automation` service account in `global` namespace, which has admin access to all the tenant namespaces.
## [0.3.3] - 2020-05-13
### Changed
- Reconcile `rolebinding` subject group changes properly.
- Fix bug with binding role to the `view-all` read role instead of `tenant-admin` write role.
## [0.3.2]- 2020-04-23
### Changed
- Use Release.Revision in annotation for Helm 3 compatibility.
## [0.3.0]- 2020-04-06
### Added
- Tenant admin role *tenant-admin-manage-rbac* to manage `serviceaccounts`, `roles`, `clusterroles`, `rolebindings` and `clusterrolebindings`.
- Add tenant admin full access to `global` and `default` namespaces.
## [0.2.0]- 2020-03-13
### Changed
- Make rbac-operator optional for installation without OIDC.
## [0.1.0]- 2020-03-13
### Added
- Read-only role for customer access into Control Plane.
[Unreleased]: https://github.com/giantswarm/rbac-operator/compare/v0.16.0...HEAD
[0.16.0]: https://github.com/giantswarm/rbac-operator/compare/v0.15.0...v0.16.0
[0.15.0]: https://github.com/giantswarm/rbac-operator/compare/v0.14.0...v0.15.0
[0.14.0]: https://github.com/giantswarm/rbac-operator/compare/v0.13.0...v0.14.0
[0.13.0]: https://github.com/giantswarm/rbac-operator/compare/v0.12.0...v0.13.0
[0.12.0]: https://github.com/giantswarm/rbac-operator/compare/v0.11.0...v0.12.0
[0.11.0]: https://github.com/giantswarm/rbac-operator/compare/v0.10.0...v0.11.0
[0.10.0]: https://github.com/giantswarm/rbac-operator/compare/v0.9.0...v0.10.0
[0.9.0]: https://github.com/giantswarm/rbac-operator/compare/v0.8.0...v0.9.0
[0.8.0]: https://github.com/giantswarm/rbac-operator/compare/v0.7.0...v0.8.0
[0.7.0]: https://github.com/giantswarm/rbac-operator/compare/v0.6.0...v0.7.0
[0.6.0]: https://github.com/giantswarm/rbac-operator/compare/v0.5.0...v0.6.0
[0.5.0]: https://github.com/giantswarm/rbac-operator/compare/v0.4.6...v0.5.0
[0.4.6]: https://github.com/giantswarm/rbac-operator/compare/v0.4.5...v0.4.6
[0.4.5]: https://github.com/giantswarm/rbac-operator/compare/v0.4.4...v0.4.5
[0.4.4]: https://github.com/giantswarm/rbac-operator/compare/v0.4.3...v0.4.4
[0.4.3]: https://github.com/giantswarm/rbac-operator/compare/v0.4.2...v0.4.3
[0.4.2]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.4.2
[0.4.1]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.4.1
[0.4.0]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.4.0
[0.3.3]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.3.3
[0.3.2]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.3.2
[0.3.0]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.3.0
[0.2.0]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.2.0
[0.1.0]: https://github.com/giantswarm/rbac-operator/releases/tag/v0.1.0
| 26.772947 | 142 | 0.702093 | eng_Latn | 0.46438 |
e22687d24b15710ac05680e732dd2ee66723ee04 | 579 | md | Markdown | README.md | estafette/estafette-ci-web | 7d9a24bf2f80a0a379fecca5176723c3471c9ffc | [
"MIT"
] | 6 | 2018-10-07T03:46:32.000Z | 2021-02-09T18:32:03.000Z | README.md | estafette/estafette-ci-web | 7d9a24bf2f80a0a379fecca5176723c3471c9ffc | [
"MIT"
] | 16 | 2019-03-06T13:27:59.000Z | 2022-03-03T09:02:41.000Z | README.md | estafette/estafette-ci-web | 7d9a24bf2f80a0a379fecca5176723c3471c9ffc | [
"MIT"
] | null | null | null | # Estafette CI
The `estafette-ci-web` component is part of the Estafette CI system documented at https://estafette.io.
Please file any issues related to Estafette CI at https://github.com/estafette/estafette-ci/issues
## Estafette-ci-web
This is the web interface for the Estafette CI system. It's built using Vue.js, Webpack and Bootstrap CSS.
## Development
To start development run
```bash
git clone git@github.com:estafette/estafette-ci-web.git
cd estafette-ci-web
npm install
npm run dev
```
Before committing your changes run
```bash
npm run unit
npm run build
``` | 21.444444 | 106 | 0.766839 | eng_Latn | 0.760075 |
e227d6f4a87cf55f5974f083372e5d3dcd8e3455 | 2,640 | md | Markdown | README.md | shubhamnandanwar/CircularProgressView | a7db454a67fdc25700bbe48cf0e6b872a2b30bb1 | [
"Apache-2.0"
] | 12 | 2019-06-29T05:31:39.000Z | 2022-03-21T13:38:25.000Z | README.md | shubhamnandanwar/CircularProgressView | a7db454a67fdc25700bbe48cf0e6b872a2b30bb1 | [
"Apache-2.0"
] | 6 | 2019-07-04T04:37:30.000Z | 2021-02-01T20:18:19.000Z | README.md | shubhamnandanwar/CircularProgressView | a7db454a67fdc25700bbe48cf0e6b872a2b30bb1 | [
"Apache-2.0"
] | 3 | 2018-10-14T01:30:16.000Z | 2021-11-16T08:36:28.000Z | # CircularProgressBar
[](https://www.codacy.com/app/shubhamnandanwar9776/CircularProgressView?utm_source=github.com&utm_medium=referral&utm_content=shubhamnandanwar/CircularProgressView&utm_campaign=Badge_Grade)
[](https://android-arsenal.com/api?level=16)
[ ](https://bintray.com/shubhamnandanwar9776/CircularProgressView/circular-progress-view/1.1.4)
Usual CircularProgressBar but with cool dash effect in background stroke and fade animation.
## Demo app
A demo app is available on Google Play
[](https://play.google.com/store/apps/details?id=com.shunan.circularprogressview)
## Prerequisites
Add this to your project build.gradle
``` gradle
allprojects {
repositories {
jcenter()
}
}
```
## Dependency
Add this to your module build.gradle
``` gradle
dependencies {
implementation 'com.shunan.circularprogressview:circular-progress-view:1.1.4'
}
```
## Preview
### Basic Functionality

### Animations
 

### Background stroke with Dash Effect

License
----
Copyright 2018 Shubham Nandanwar
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 46.315789 | 293 | 0.795076 | eng_Latn | 0.465042 |
e228f6217dbc6f98aa89e0d404d895d928af74a8 | 490 | md | Markdown | papers/211015 Breaking Down Multilingual Machine Translation.md | rosinality/ml-papers | 3c70bb20436ca385dbec6223370d3cda7d86f6d8 | [
"MIT"
] | 97 | 2021-05-10T14:36:22.000Z | 2022-03-30T08:20:52.000Z | papers/211015 Breaking Down Multilingual Machine Translation.md | rosinality/ml-papers | 3c70bb20436ca385dbec6223370d3cda7d86f6d8 | [
"MIT"
] | null | null | null | papers/211015 Breaking Down Multilingual Machine Translation.md | rosinality/ml-papers | 3c70bb20436ca385dbec6223370d3cda7d86f6d8 | [
"MIT"
] | 8 | 2021-07-19T02:26:17.000Z | 2022-03-24T16:45:40.000Z | https://arxiv.org/abs/2110.08130
Breaking Down Multilingual Machine Translation (Ting-Rui Chiang, Yi-Pei Chen, Yi-Ting Yeh, Graham Neubig)
multilingual mt로 학습시킨 weight가 어떻게 다른 언어 pair에 대해서 transfer 될 수 있는지 분석. 인코더는 multilingual training으로 학습시켰을 때 부스트가 있는 반면 디코더는 도움이 되지 않는 것 같다는 분석이네요. 서로 다른 언어를 디코딩하는 작업에 parameter sharing을 하기가 어려운 것 같다는 추정.
그러고나서 어텐션 헤드의 importance score를 계산한 다음 이 importance score의 correlation을 가지고 유사 언어 그룹을 묶어 파인튜닝 하는 것으로 성능 부스트를 얻을 수 있다는 결과네요.
#nmt #multilingual | 54.444444 | 203 | 0.781633 | kor_Hang | 1.00001 |
e229f97de37cf1e37983942e8fd480b298eec2d6 | 5,783 | md | Markdown | docs/2014/database-engine/options-query-execution-sql-server-advanced-page.md | kirabr/sql-docs.ru-ru | 08e3b25ff0792ee0ec4c7641b8960145bbec4530 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/options-query-execution-sql-server-advanced-page.md | kirabr/sql-docs.ru-ru | 08e3b25ff0792ee0ec4c7641b8960145bbec4530 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/options-query-execution-sql-server-advanced-page.md | kirabr/sql-docs.ru-ru | 08e3b25ff0792ee0ec4c7641b8960145bbec4530 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Параметры (запрос выполнения: SQL Server: страница "Дополнительно") | Документация Майкрософт'
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.suite: ''
ms.technology:
- database-engine
ms.tgt_pltfrm: ''
ms.topic: conceptual
f1_keywords:
- VS.ToolsOptionsPages.QueryExecution.SqlServer.SqlExecutionAdvanced
ms.assetid: 3ec788c7-22c3-4216-9ad0-81a168d17074
caps.latest.revision: 27
author: craigg-msft
ms.author: craigg
manager: craigg
ms.openlocfilehash: 179cddf3670cc29cbb298b53c442c30b80dd202f
ms.sourcegitcommit: c18fadce27f330e1d4f36549414e5c84ba2f46c2
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 07/02/2018
ms.locfileid: "37187241"
---
# <a name="options-query-executionsql-serveradvanced-page"></a>Параметры (запрос выполнения: SQL Server: страница "Дополнительно")
При использовании команды SET доступно несколько параметров. Данная страница используется для задания параметра **set** для запуска запросов [!INCLUDE[msCoName](../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] в редакторе запросов SQL Server. Они не влияют на другие редакторы кода. Изменения этих параметров применяются только к новым запросам [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] . Для изменения параметров для текущих запросов выберите пункт **Параметры запроса** в меню **Запрос** или контекстное меню в окне запроса [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] . В пункте **Выполнение**выберите **Дополнительно**. Дополнительные сведения о них см. в разделе [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] электронной документации.
## <a name="options"></a>Параметры
**SET NOCOUNT**
Не возвращает значения счетчика количества строк в виде сообщения с результирующим набором. По умолчанию этот флажок снят.
**SET NOEXEC**
Не выполняет запрос. По умолчанию этот флажок снят.
**SET PARSEONLY**
Проверяет синтаксис каждого запроса, но не выполняет их. По умолчанию этот флажок снят.
**SET CONCAT_NULL_YIELDS_NULL**
Когда этот флажок установлен, запросы, дополняющие существующее значение значением NULL, всегда возвращают в результате NULL. Когда этот флажок снят, существующее значение, дополненное значением NULL, возвращает существующее значение. Этот флажок выбран по умолчанию.
**SET ARITHABORT**
Когда этот флажок установлен, если инструкция INSERT, DELETE или UPDATE встречает арифметическую ошибку (переполнение, деление на нуль или выход за пределы допустимых значений) во время оценки выражения, выполнение запроса или пакета прерывается. Когда этот флажок снят, по возможности для данного значения предоставляется значение NULL, выполнение запроса продолжается, а в результат включается сообщение. Дополнительные сведения см. в разделе [SET ARITHABORT (Transact-SQL)](/sql/t-sql/statements/set-arithabort-transact-sql). Этот флажок выбран по умолчанию.
**SET SHOWPLAN_TEXT**
Когда этот флажок установлен, с каждым запросом возвращается план запроса в текстовом формате. По умолчанию этот флажок снят.
**SET STATISTICS TIME**
При установке этого флажка с каждым запросом возвращаются статистические данные о времени. По умолчанию этот флажок снят.
**SET STATISTICS IO**
Когда этот флажок установлен, с каждым запросом возвращаются статистические данные о вводе и выводе. По умолчанию этот флажок снят.
**SET TRANSACTION ISOLATION LEVEL**
По умолчанию устанавливается уровень изоляции транзакции READ COMMITTED. Дополнительные сведения см. в разделе [SET TRANSACTION ISOLATION LEVEL (Transact-SQL)](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql). Уровень изоляции транзакции SNAPSHOT недоступен. Чтобы воспользоваться изоляцией моментального снимка SNAPSHOT, добавьте следующую инструкцию [!INCLUDE[tsql](../includes/tsql-md.md)]:
```
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO
```
**SET DEADLOCK PRIORITY**
Значение по умолчанию, равное «Обычный», позволяет всем запросам иметь одинаковый приоритет при возникновении взаимоблокировки. Выберите приоритет «Низкий», если необходимо, чтобы запрос проиграл в конфликте, связанном с взаимоблокировкой, и был выбран в качестве запроса, подлежащего прерыванию.
**ЗАДАТЬ ВРЕМЯ ОЖИДАНИЯ БЛОКИРОВКИ**
Значение по умолчанию, равное -1, указывает, что блокировки удерживаются до завершения транзакций. Значение 0 означает, что ожидание отсутствует, а сообщение возвращается, как только встречается блокировка. Задайте значение больше 0 миллисекунд, чтобы прерывать транзакцию, если блокировки для нее должны удерживаться дольше этого времени.
**SET QUERY_GOVERNOR_COST_LIMIT**
Параметр **QUERY_GOVERNOR_COST_LIMIT** используется для задания верхнего ограничения времени, в течение которого может выполняться запрос. Цена запроса — это предполагаемое время в секундах, которое требуется для завершения запроса на аппаратных средствах конкретной конфигурации. Значение по умолчанию, равное 0, указывает, что ограничения времени работы запроса отсутствуют.
**Подавлять заголовки сообщений поставщиков**
Когда этот флажок установлен, сообщения о состояниях от поставщика (например, от поставщика SQLClient) не отображаются. Этот флажок выбран по умолчанию. Снимите этот флажок, чтобы увидеть сообщения поставщиков при диагностике запросов, которые могут неудачно завершаться на уровне поставщика.
**Разорвать соединение после выполнения запроса**
Когда этот флажок установлен, соединение с [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] прекращается после завершения выполнения запроса. По умолчанию этот флажок снят.
**Восстановить значения по умолчанию**
Позволяет вернуть исходные значения по умолчанию для всех параметров на данной странице.
| 71.395062 | 810 | 0.794743 | rus_Cyrl | 0.958297 |
e22a30510997bf2981b93d2dca3d8286e07889f4 | 6,076 | md | Markdown | _posts/2019-08-06-Download-cummins-parts-catalog-cummins-power-generation.md | Jobby-Kjhy/27 | ea48bae2a083b6de2c3f665443f18b1c8f241440 | [
"MIT"
] | null | null | null | _posts/2019-08-06-Download-cummins-parts-catalog-cummins-power-generation.md | Jobby-Kjhy/27 | ea48bae2a083b6de2c3f665443f18b1c8f241440 | [
"MIT"
] | null | null | null | _posts/2019-08-06-Download-cummins-parts-catalog-cummins-power-generation.md | Jobby-Kjhy/27 | ea48bae2a083b6de2c3f665443f18b1c8f241440 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Cummins parts catalog cummins power generation book
" she'd imagined the business with the dog and the computer; but the proof along deer trails cummins parts catalog cummins power generation other natural pathways, 30. Why do you make a face. But lately--" Geertz, since it meant he'd come that close to not having to bother scouting out two more endorsements. Celestina intended to capture Nella as she was now, its miniature display crammed with lines of computer microcode mnemonics. Those last five words, the coldest of mind and heart, ii. " paramedic's hands tightly enough to make him wince. "My father remarried last month. There are some people here from your department to see Kath and a few Others. the Chukches, where it's safe. Wulfstan, used snow mixed with water, the doom doctor did have a passion for Sinsemilla that heвand very commendable. And covering all the derricks was a translucent network of ten-centimeter-wide strips of Beyond the wide median strip, having no connection with the Nights, but I was too busty, Celestina tried freezing-point, blooming with youth and grace, both move purposefully! prevailing ideas, storehouses for train-oil with large thick dried blood, current. walk, sail along the back "Is it true?" she asked. The bank band awakened him. For no hand is there but the hand of God is over it And no oppressor but shall be with worse than he opprest. It took him six more days to get through the big herds in the eastern marshes. A wickedly messed-up kid? Behold, even though tough lots bigger. Why do you make a face. "Couldn't leave it all to the amateurs?' Ribald comments and hoots of derision greeted the remark. around with an underage Negro girl if his marriage to Naomi had been as were also caught, perhaps to rest or "You've been drinking now," she softly accused. "Ho, and on that account the Navy had done nothing wrong, talking to a taxi driver. " For Gammoner, Kalens thought to himself, iii. Like the Lapps and most other European and Asiatic Polar races, took shelter therein against break of day. Car tailpipes follows, Micky returned to her chair, and the officers commanding the key units are already with us, it was night. I told you shown, and could not be induced to take exercise, cummins parts catalog cummins power generation in this way rising masses of smoke that were first carried cummins parts catalog cummins power generation the updraft but that would Bernard gave Jay a stern look. with respect to the state of the ice in summer in the Polar seas. " Terfins, this lounge was clean but drab. I am Kargish, 'It is a girl;' and she said. To the east and She hesitated. In his obsession, whilst Tuhfeh embraced El Anca every moment, choosing your partners rather than leaving them to chance. During our stay in the country I purchased for a To his beloved one the lover's heart's inclined; His soul's a captive slave, or angel dust. "How's that work?" wrist, 383 in the way she looked at him, to reach Vaygats Island. He was wary of rationalizings, it's an 5, where the cummins parts catalog cummins power generation rose in "They say the first year's the hardest, less you safe. So comfort thyself and be of good heart and cheerful eye. About _Pleurotoma pyramidalis_, and then sat in silence. These miners were free women, the complex included a seaport; an air and space terminal distributed mainly across the islands. She cummins parts catalog cummins power generation quit; she wasn't going to do anything for anybody. "And which am I?" Smiling, he arose in haste and disguising himself, that the answers to them could be learned only by earning "Knoweth my loved one when I see her at the lattice high Shine as the sun that flameth forth in heaven's blue demesne?" "But how did the remains get so cummins parts catalog cummins power generation below ground?" Ralston asked, but he wouldn't be able to prevent dehydration strictly by an act of will. That brings me to the other thing I have to tell you," he said in a heavy voice. She had wagged her tail a little. The If the job hunt took weeks, less than a day later, his face seemed to form part of a shell interposed to keep outsiders at a respectful distance from whoever dwelt inside, either; irresistibly handsome. Notwithstanding the exceedingly severe cold a woman here the father, assessment of the situation. And it's cool. While sailing down, he had a pale face wider at the bottom than at the top, then buried her face against my shoulder. LEFT HAND ON the banister, "that's why so many people back at the Neary Ranch were buying Grandma's "What time cummins parts catalog cummins power generation you say you had a job interview?" and women who suffer from this disease, then jammed against the door when the caretaker Mussel Bay first froze permanently in the beginning of February, Barents, brown man sitting at the table looked up at him! In the window of the fourth, and arranged her artfully as a courtesy before the killing. He numbered these there was not much of the surrounding landscape visible. and my lips began to twitch into a grin. When cummins parts catalog cummins power generation affair was prolonged and I found none but her, into this shadowy vastness. They could no longer extra ten percent, the address was an apartment building with guard dogs in the lobby and a doorman who didn't talk. Hollow, I find. ' Quoth the tither, pink contagion from the pianist. She blotted them on her T-shirt. Von Chamberlain's Wife, aren't we, to a bay on the west coast of Vaygats Island. Good, he raised his eyes still higher. 169, he'd begun to be alarmed, unless they're dead. Swedish consul-general regarding the day of our arrival, 'I give him a dirhem every month to the hire of his lodging. I was badly frightened. hills consist mainly of a species of granite which is exposed to the only remarkable ornament of which was cummins parts catalog cummins power generation large piece of nephrite, it just makes you stupid. | 675.111111 | 5,956 | 0.787031 | eng_Latn | 0.999888 |
e22a9dbf8dc4ca8052930b88f9c7435cd86cfbe5 | 136 | md | Markdown | node_modules/devextreme-showdown/test/cases/code-block-with-special-chars.md | SrikanthJakkaslokam/KaveriProject | 5367ee758ed02b21d1e1480fb1ecf10f86bd7351 | [
"MIT"
] | null | null | null | node_modules/devextreme-showdown/test/cases/code-block-with-special-chars.md | SrikanthJakkaslokam/KaveriProject | 5367ee758ed02b21d1e1480fb1ecf10f86bd7351 | [
"MIT"
] | null | null | null | node_modules/devextreme-showdown/test/cases/code-block-with-special-chars.md | SrikanthJakkaslokam/KaveriProject | 5367ee758ed02b21d1e1480fb1ecf10f86bd7351 | [
"MIT"
] | null | null | null | //**this** code _has_ special chars
var arr = ['foo', 'bar', 'baz'];
function () {
return 'foo';
}
\n
| 19.428571 | 40 | 0.426471 | eng_Latn | 0.644619 |
e22af99d93a09c6f5c89c6703f1f9fc9c188e780 | 1,229 | md | Markdown | data/content/fate-side-material/illyasviel-von-einzbern.en.md | tmdict/tmdict | c2f8ddb7885a91d01343de4ea7b66fea78351d94 | [
"MIT"
] | 3 | 2022-02-25T11:13:45.000Z | 2022-02-28T11:55:41.000Z | data/content/fate-side-material/illyasviel-von-einzbern.en.md | slsdo/tmdict | c2f8ddb7885a91d01343de4ea7b66fea78351d94 | [
"MIT"
] | null | null | null | data/content/fate-side-material/illyasviel-von-einzbern.en.md | slsdo/tmdict | c2f8ddb7885a91d01343de4ea7b66fea78351d94 | [
"MIT"
] | 2 | 2022-02-25T09:59:50.000Z | 2022-02-28T11:55:09.000Z | ---
parent: illyasviel-von-einzbern
source: fate-side-material
id: fate-encyclopedia
language: en
weight: 8
translation: "Mcjon01"
img: fzm_illyasviel-von-einzbern.png
category:
- person
---
A semi-heroine. Berserker’s Master.
In the story, she was a mysterious girl who approached the still-oblivious Shirou while calling him “big brother.”
She possesses innocence and cruelty in equal measure, and for some reason is interested in Shirou.
The kind of character that makes you feel like angels and devils might just be the same thing.
Her power as a Master is the greatest in history.
With a command spell engraved over her entire body and a magic circuit that overwhelms that of normal magi by an enormous margin, there is no doubt that she is an existence born for the sole purpose of winning the Holy Grail War.
Though she played a key role in every route, her true colors were only revealed in the last one. Until then, she was just a worrisome child that got along well with Shirou despite coming to Japan to kill him.
During the plotting stage, she actually had her own route.
However, if we had tried to make it, it would have delayed the release of *Fate/stay night* by another half a year. Rest in peace, Illya route.
| 55.863636 | 229 | 0.788446 | eng_Latn | 0.999773 |
e22b18d2ea063a10c087fe702c35a78493a61c0b | 2,407 | md | Markdown | en_US/plugins/core v3.3/beta/user.md | henribi/documentations | cd87fb349b097d832453f37e44b9a1d4b8b7ad75 | [
"MIT"
] | 3 | 2020-05-17T19:20:29.000Z | 2021-11-25T03:51:15.000Z | en_US/plugins/core v3.3/beta/user.md | henribi/documentations | cd87fb349b097d832453f37e44b9a1d4b8b7ad75 | [
"MIT"
] | 15 | 2020-05-26T13:57:05.000Z | 2022-01-03T17:04:09.000Z | en_US/plugins/core v3.3/beta/user.md | henribi/documentations | cd87fb349b097d832453f37e44b9a1d4b8b7ad75 | [
"MIT"
] | 40 | 2020-03-11T16:47:52.000Z | 2022-03-01T17:37:24.000Z | This is where we will be able to define the list of users
allowed to connect to Jeedom, but also their rights
d'administrateur
Accessible by Administration → Users.
At the top right you have a button to add a user, a
to save and a button to open a support access.
Below you have a table :
- **Username** : user id
- **Active** : allows to deactivate the account
- **Local only** : authorize user login
only if it is on the local Jeedom network
- **Profiles** : allows to choose the user profile :
- **Administrator** : gets all rights on Jeedom
- **User** : can see the dashboard, views,
design, etc. and act on equipment / controls. On the other hand,
he will not have access to the configuration of controls / equipment
nor to the configuration of Jeedom.
- **Limited user** : the user only sees the
authorized equipment (configurable with the "Manage" button
rights")
- **API key** : user's personal API key
- **Double authentication** : indicates whether double authentication
is active (OK) or not (NOK)
- **Date of last connection** : date of the last connection of
the user at Jeedom. Please note, this is the connection date
actual, so if you save your computer, the date of
connection is not updated every time you return to it.
- **To change the password** : allows to change the password from
l'utilisateur
- **Remove** : delete user
- **Regenerate API key** : regenerates the API key of the user
- **Manage rights** : allows to finely manage the rights of
the user (attention the profiles must be in
"Limited user")
Rights management
==================
When clicking on "Manage rights" a window appears allowing you
finely manage user rights. The first tab displays
the different equipment. The second presents the scenarios.
> **IMPORTANT**
>
> The profile must be limited otherwise no restrictions placed here
> will be taken into account
You get a table which allows, for each equipment and each
scenario, define user rights :
- **No** : the user does not see the equipment / scenario
- **Visualization** : the user sees the equipment / scenario but does not
can't act on it
- **Visualization and execution** : the user sees
the equipment / scenario and can act on it (light a lamp, throw
the script, etc.)
| 30.858974 | 77 | 0.692563 | eng_Latn | 0.998711 |
e22bd2498c8175b92c8092524eb4982eca081571 | 924 | md | Markdown | includes/updated-for-az.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/updated-for-az.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/updated-for-az.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.topic: include
ms.date: 04/17/2019
author: dbradish-microsoft
ms.author: dbradish
manager: barbkess
ms.openlocfilehash: 1b3f9fff835e39c737227d6817c3a1dd901f3c65
ms.sourcegitcommit: c5021f2095e25750eb34fd0b866adf5d81d56c3a
ms.translationtype: HT
ms.contentlocale: nl-NL
ms.lasthandoff: 08/25/2020
ms.locfileid: "85367564"
---
> [!NOTE]
> Dit artikel is bijgewerkt voor het gebruik van de nieuwe Azure PowerShell Az-module. De AzureRM-module kan nog worden gebruikt en krijgt bugoplossingen tot ten minste december 2020.
> Zie voor meer informatie over de nieuwe Az-module en compatibiliteit met AzureRM [Introductie van de nieuwe Az-module van Azure PowerShell](https://docs.microsoft.com/powershell/azure/new-azureps-module-az?view=azps-3.3.0). Raadpleeg [Azure PowerShell installeren](https://docs.microsoft.com/powershell/azure/install-az-ps?view=azps-3.3.0) voor instructies over de installatie van de Az-module.
| 51.333333 | 396 | 0.809524 | nld_Latn | 0.929901 |
e22be5b69d06ad0446de3d01dfd2fbf8ff105cd2 | 1,008 | md | Markdown | src/README.md | Endfox/vue-flat | e9a8176af3a409d3c992d709fbbd1eb697888aa6 | [
"MIT"
] | null | null | null | src/README.md | Endfox/vue-flat | e9a8176af3a409d3c992d709fbbd1eb697888aa6 | [
"MIT"
] | null | null | null | src/README.md | Endfox/vue-flat | e9a8176af3a409d3c992d709fbbd1eb697888aa6 | [
"MIT"
] | null | null | null | # vue-flat
<<<<<<< HEAD
## Project setup
```
npm install
```
### Compiles and hot-reloads for development
```
npm run serve
```
### Compiles and minifies for production
```
npm run build
```
### Run your tests
```
npm run test
```
### Lints and fixes files
```
npm run lint
```
### Run your unit tests
```
npm run test:unit
```
### Customize configuration
See [Configuration Reference](https://cli.vuejs.org/config/).
=======
## Entorno de desarrollo
<div align="center"><a> <img src="https://github.com/wilmercampagna/vue-flat/blob/master/src/assets/VueFlat.jpg"> </a></div>
### Guía de uso del entorno de desarrollo
<div align="center">
<iframe width="600" height="350" src="https://www.youtube.com/embed/1cQz29xr_1U" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div>
Ver [Guía de uso del entorno de desarrollo](https://www.youtube.com/watch?v=1cQz29xr_1U&t=104s)
>>>>>>> dad99448ae8253efddadf5152255ba7606e48ee8
| 19.384615 | 208 | 0.693452 | eng_Latn | 0.152263 |
e22c5d9a1490b3c263097daec1938b4f027fcda5 | 390 | md | Markdown | docs/error-messages/compiler-errors-1/compiler-error-c2113.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 965 | 2017-06-25T23:57:11.000Z | 2022-03-31T14:17:32.000Z | docs/error-messages/compiler-errors-1/compiler-error-c2113.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 3,272 | 2017-06-24T00:26:34.000Z | 2022-03-31T22:14:07.000Z | docs/error-messages/compiler-errors-1/compiler-error-c2113.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 951 | 2017-06-25T12:36:14.000Z | 2022-03-26T22:49:06.000Z | ---
description: "Learn more about: Compiler Error C2113"
title: "Compiler Error C2113"
ms.date: "11/04/2016"
f1_keywords: ["C2113"]
helpviewer_keywords: ["C2113"]
ms.assetid: be85cb5e-b0ed-4fc9-b062-032bf7f59a4e
---
# Compiler Error C2113
'-' : pointer can only be subtracted from another pointer
The right operand in a subtraction operation was a pointer, but the left operand was not.
| 27.857143 | 89 | 0.75641 | eng_Latn | 0.958677 |
e22c7d3cba3129c8401fb4bfb92e53f81bce9af3 | 42 | md | Markdown | README.md | bilginmusa/basicChat | 835d8a05a49e68e7648d5d767b30c8ad3a1ed63a | [
"MIT"
] | 1 | 2021-04-13T08:10:23.000Z | 2021-04-13T08:10:23.000Z | README.md | bilginmusa/basicChat | 835d8a05a49e68e7648d5d767b30c8ad3a1ed63a | [
"MIT"
] | null | null | null | README.md | bilginmusa/basicChat | 835d8a05a49e68e7648d5d767b30c8ad3a1ed63a | [
"MIT"
] | null | null | null | # basicChat
Node.js socket.io express.js
| 14 | 29 | 0.761905 | eng_Latn | 0.320479 |
e22e615bc105c5b7e2bf28beb1d9cbac5be9225a | 22 | md | Markdown | SurveyBuilder/versions/v5.4.3.md | SurveyBuilderTeams/surveybuilder | d02b5b9167c9f917df364b2d97ba77e5f6a57536 | [
"Apache-2.0"
] | null | null | null | SurveyBuilder/versions/v5.4.3.md | SurveyBuilderTeams/surveybuilder | d02b5b9167c9f917df364b2d97ba77e5f6a57536 | [
"Apache-2.0"
] | 2 | 2020-11-09T22:56:29.000Z | 2021-10-20T17:08:59.000Z | SurveyBuilder/versions/v5.4.3.md | SurveyBuilderTeams/surveybuilder | d02b5b9167c9f917df364b2d97ba77e5f6a57536 | [
"Apache-2.0"
] | 3 | 2020-11-10T13:35:04.000Z | 2021-12-13T18:08:07.000Z | * fixed missing files
| 11 | 21 | 0.772727 | eng_Latn | 0.995444 |
e22f5183f12f5fca29c7930c70f9b6721866528c | 5,937 | md | Markdown | api/Outlook.TaskRequestAcceptItem.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 283 | 2018-07-06T07:44:11.000Z | 2022-03-31T14:09:36.000Z | api/Outlook.TaskRequestAcceptItem.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 1,457 | 2018-05-11T17:48:58.000Z | 2022-03-25T22:03:38.000Z | api/Outlook.TaskRequestAcceptItem.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 469 | 2018-06-14T12:50:12.000Z | 2022-03-27T08:17:02.000Z | ---
title: TaskRequestAcceptItem object (Outlook)
keywords: vbaol11.chm3008
f1_keywords:
- vbaol11.chm3008
ms.prod: outlook
api_name:
- Outlook.TaskRequestAcceptItem
ms.assetid: a2905f72-0a67-b07d-7f85-84fe4de17c25
ms.date: 06/08/2017
ms.localizationpriority: medium
---
# TaskRequestAcceptItem object (Outlook)
Represents a response to a **[TaskRequestItem](Outlook.TaskRequestItem.md)** sent by the initiating user.
## Remarks
If the delegated user accepts the task, the **[ResponseState](Outlook.TaskItem.ResponseState.md)** property is set to **olTaskAccept**. The associated **[TaskItem](Outlook.TaskItem.md)** is received by the delegator as a **TaskRequestAcceptItem** object.
Unlike other Microsoft Outlook objects, you cannot create this object.
Use the **[GetAssociatedTask](Outlook.TaskRequestAcceptItem.GetAssociatedTask.md)** method to return the **TaskItem** object that is associated with this **TaskRequestAcceptItem**. Work directly with the **TaskItem** object.
## Events
|Name|
|:-----|
|[AfterWrite](Outlook.TaskRequestAcceptItem.AfterWrite.md)|
|[AttachmentAdd](Outlook.TaskRequestAcceptItem.AttachmentAdd.md)|
|[AttachmentRead](Outlook.TaskRequestAcceptItem.AttachmentRead.md)|
|[AttachmentRemove](Outlook.TaskRequestAcceptItem.AttachmentRemove.md)|
|[BeforeAttachmentAdd](Outlook.TaskRequestAcceptItem.BeforeAttachmentAdd.md)|
|[BeforeAttachmentPreview](Outlook.TaskRequestAcceptItem.BeforeAttachmentPreview.md)|
|[BeforeAttachmentRead](Outlook.TaskRequestAcceptItem.BeforeAttachmentRead.md)|
|[BeforeAttachmentSave](Outlook.TaskRequestAcceptItem.BeforeAttachmentSave.md)|
|[BeforeAttachmentWriteToTempFile](Outlook.TaskRequestAcceptItem.BeforeAttachmentWriteToTempFile.md)|
|[BeforeAutoSave](Outlook.TaskRequestAcceptItem.BeforeAutoSave.md)|
|[BeforeCheckNames](Outlook.TaskRequestAcceptItem.BeforeCheckNames.md)|
|[BeforeDelete](Outlook.TaskRequestAcceptItem.BeforeDelete.md)|
|[BeforeRead](Outlook.TaskRequestAcceptItem.BeforeRead.md)|
|[Close](Outlook.TaskRequestAcceptItem.Close(even).md)|
|[CustomAction](Outlook.TaskRequestAcceptItem.CustomAction.md)|
|[CustomPropertyChange](Outlook.TaskRequestAcceptItem.CustomPropertyChange.md)|
|[Forward](Outlook.TaskRequestAcceptItem.Forward.md)|
|[Open](Outlook.TaskRequestAcceptItem.Open.md)|
|[PropertyChange](Outlook.TaskRequestAcceptItem.PropertyChange.md)|
|[Read](Outlook.TaskRequestAcceptItem.Read.md)|
|[ReadComplete](Outlook.taskrequestacceptitem.readcomplete.md)|
|[Reply](Outlook.TaskRequestAcceptItem.Reply.md)|
|[ReplyAll](Outlook.TaskRequestAcceptItem.ReplyAll.md)|
|[Send](Outlook.TaskRequestAcceptItem.Send.md)|
|[Unload](Outlook.TaskRequestAcceptItem.Unload.md)|
|[Write](Outlook.TaskRequestAcceptItem.Write.md)|
## Methods
|Name|
|:-----|
|[Close](Outlook.TaskRequestAcceptItem.Close(method).md)|
|[Copy](Outlook.TaskRequestAcceptItem.Copy.md)|
|[Delete](Outlook.TaskRequestAcceptItem.Delete.md)|
|[Display](Outlook.TaskRequestAcceptItem.Display.md)|
|[GetAssociatedTask](Outlook.TaskRequestAcceptItem.GetAssociatedTask.md)|
|[GetConversation](Outlook.TaskRequestAcceptItem.GetConversation.md)|
|[Move](Outlook.TaskRequestAcceptItem.Move.md)|
|[PrintOut](Outlook.TaskRequestAcceptItem.PrintOut.md)|
|[Save](Outlook.TaskRequestAcceptItem.Save.md)|
|[SaveAs](Outlook.TaskRequestAcceptItem.SaveAs.md)|
|[ShowCategoriesDialog](Outlook.TaskRequestAcceptItem.ShowCategoriesDialog.md)|
## Properties
|Name|
|:-----|
|[Actions](Outlook.TaskRequestAcceptItem.Actions.md)|
|[Application](Outlook.TaskRequestAcceptItem.Application.md)|
|[Attachments](Outlook.TaskRequestAcceptItem.Attachments.md)|
|[AutoResolvedWinner](Outlook.TaskRequestAcceptItem.AutoResolvedWinner.md)|
|[BillingInformation](Outlook.TaskRequestAcceptItem.BillingInformation.md)|
|[Body](Outlook.TaskRequestAcceptItem.Body.md)|
|[Categories](Outlook.TaskRequestAcceptItem.Categories.md)|
|[Class](Outlook.TaskRequestAcceptItem.Class.md)|
|[Companies](Outlook.TaskRequestAcceptItem.Companies.md)|
|[Conflicts](Outlook.TaskRequestAcceptItem.Conflicts.md)|
|[ConversationID](Outlook.TaskRequestAcceptItem.ConversationID.md)|
|[ConversationIndex](Outlook.TaskRequestAcceptItem.ConversationIndex.md)|
|[ConversationTopic](Outlook.TaskRequestAcceptItem.ConversationTopic.md)|
|[CreationTime](Outlook.TaskRequestAcceptItem.CreationTime.md)|
|[DownloadState](Outlook.TaskRequestAcceptItem.DownloadState.md)|
|[EntryID](Outlook.TaskRequestAcceptItem.EntryID.md)|
|[FormDescription](Outlook.TaskRequestAcceptItem.FormDescription.md)|
|[GetInspector](Outlook.TaskRequestAcceptItem.GetInspector.md)|
|[Importance](Outlook.TaskRequestAcceptItem.Importance.md)|
|[IsConflict](Outlook.TaskRequestAcceptItem.IsConflict.md)|
|[ItemProperties](Outlook.TaskRequestAcceptItem.ItemProperties.md)|
|[LastModificationTime](Outlook.TaskRequestAcceptItem.LastModificationTime.md)|
|[MarkForDownload](Outlook.TaskRequestAcceptItem.MarkForDownload.md)|
|[MessageClass](Outlook.TaskRequestAcceptItem.MessageClass.md)|
|[Mileage](Outlook.TaskRequestAcceptItem.Mileage.md)|
|[NoAging](Outlook.TaskRequestAcceptItem.NoAging.md)|
|[OutlookInternalVersion](Outlook.TaskRequestAcceptItem.OutlookInternalVersion.md)|
|[OutlookVersion](Outlook.TaskRequestAcceptItem.OutlookVersion.md)|
|[Parent](Outlook.TaskRequestAcceptItem.Parent.md)|
|[PropertyAccessor](Outlook.TaskRequestAcceptItem.PropertyAccessor.md)|
|[RTFBody](Outlook.TaskRequestAcceptItem.RTFBody.md)|
|[Saved](Outlook.TaskRequestAcceptItem.Saved.md)|
|[Sensitivity](Outlook.TaskRequestAcceptItem.Sensitivity.md)|
|[Session](Outlook.TaskRequestAcceptItem.Session.md)|
|[Size](Outlook.TaskRequestAcceptItem.Size.md)|
|[Subject](Outlook.TaskRequestAcceptItem.Subject.md)|
|[UnRead](Outlook.TaskRequestAcceptItem.UnRead.md)|
|[UserProperties](Outlook.TaskRequestAcceptItem.UserProperties.md)|
## See also
[Outlook Object Model Reference](overview/Outlook/object-model.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 45.669231 | 255 | 0.821122 | yue_Hant | 0.782858 |
e22f5579b25848f4fe8314ad05e4163376b63c71 | 3,697 | md | Markdown | src/pages/blog/js-math.md | wangonya/wangonya.com-old- | e99853566814657485a8f69456c5a60448a4e2f8 | [
"MIT"
] | 1 | 2020-07-23T10:45:17.000Z | 2020-07-23T10:45:17.000Z | src/pages/blog/js-math.md | wangonya/wangonya.com-old- | e99853566814657485a8f69456c5a60448a4e2f8 | [
"MIT"
] | null | null | null | src/pages/blog/js-math.md | wangonya/wangonya.com-old- | e99853566814657485a8f69456c5a60448a4e2f8 | [
"MIT"
] | null | null | null | ---
title: "Javascript Arithmetic Cheat Sheet"
date: "2019-03-05T05:25:12+03:00"
description: "Easy arithmetic with Javascript"
---
Given that one of the main reason computers were invented was to solve mathematical problems quickly, it is no wonder that all the modern programming languages are so rich in arithmetic-oriented methods. The earliest computers were basically just calculators. (_Yes, I'm looking at you [Abacus](https://en.wikipedia.org/wiki/Abacus)_). If you dabble in Javascript (and a little math every now and then), I do hope you find this useful. The very obvious operations like simple addition (+) and subtraction (-) have been omitted. So have more advanced operations.

## Working with constants
Logarithm to base _e_
```javascript
Math.E; // 2.718281828459045
```
Logarithm to base 10
```javascript
Math.LN10; // 2.302585092994046
```
Logarithm to base 2
```javascript
Math.LN2; // 0.6931471805599453
```
Base 10 logarithm of _e_
```javascript
Math.LOG10E; // 0.4342944819032518
```
Base 2 logarithm of _e_
```javascript
Math.LOG2E; // 1.4426950408889634
```
🥧
```javascript
Math.PI; // 3.141592653589793
```
Square root of 1/2
```javascript
Math.SQRT1_2; // 0.7071067811865476
```
Square root of 2
```javascript
Math.SQRT2; // 1.4142135623730951
```
Infinity
```javascript
Infinity; // Infinity
```
## Rounding
`Math.round` returns the value of a number rounded to the nearest integer.
```javascript
Math.round(4.2); // 4
Math.round(4.7); // 5
Math.round(4.5); // 5. Half-way values are always rounded up
```
Speaking of rounding up, `Math.ceil()`:
```javascript
Math.ceil(4.2); // 5
Math.ceil(4.7); // 5
Math.ceil(-4.7); // -4. Ceiling a negative number will round towards zero
```
`Math.floor()` rounds down:
```javascript
Math.floor(4.2); // 4
Math.floor(4.7); // 4
Math.floor(-4.7); // -5. Flooring a negative number will round away from zero
```
## Modulus (%)
Returns the remainder after (integer) division.
```javascript
42 % 10; // 2
-40 % 10; // -0 🤔
```
## Trigonometry
Sine
```javascript
Math.sin(60); // -0.3048106211022167
```
Cosine
```javascript
Math.cos(60); // -0.9524129804151563
```
Tangent
```javascript
Math.tan(60); // 0.320040389379563
```
## Incrementing (++)
`++` increments its operand by 1.
```javascript
// postfix: returns the value before incrementing
let a = 4, // 4
b = a++, // 4
c = a; //5
```
```javascript
// prefix: returns the value after incrementing
let a = 4, // 4
b = ++a, // 5
c = a; //5
```
## Decrementing (--)
`--` decrements its operand by 1.
```javascript
// postfix: returns the value before decrementing
let a = 4, // 4
b = a--, // 4
c = a; //3
```
```javascript
// prefix: returns the value after decrementing
let a = 4, // 4
b = -a, // 3
c = a; //3
```
## Exponentiation (\*\*)
```javascript
// Math.pow() or ** can be used
let a = 4,
b = 2,
c = Math.pow(a, b), // 16
d = a ** b; // 16
```
## Getting maximum and minimum
```javascript
Math.max(4.2, 4.7); // 4.7
Math.min(4.2, 4.7); // 4.2
```
Getting maximum and minimum from an array:
```javascript
const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9],
max = Math.max(...arr), // 9
min = Math.min(...arr); // 1
```
## Getting roots √
Square Root
```javascript
Math.sqrt(16); // 4
```
Cube Root
```javascript
Math.cbrt(27); // 3
```
To find the nth-root, use the Math.pow() function and pass in a fractional exponent.
```javascript
// This finds the 6th root of 64
Math.pow(64, 1 / 6); // 4
```
Much more complex calculations can be done by combining one or more of these operations.
| 18.034146 | 561 | 0.658642 | eng_Latn | 0.883994 |
e22fc23f6b2f021605a847fa4c81c2f3b1c50dce | 606 | md | Markdown | docs/error-messages/compiler-errors-2/compiler-error-c3805.md | MonstersAboveMe/cpp-docs.de-de | d6e31736659e5972cf60084e572ca628591eecc3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c3805.md | MonstersAboveMe/cpp-docs.de-de | d6e31736659e5972cf60084e572ca628591eecc3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c3805.md | MonstersAboveMe/cpp-docs.de-de | d6e31736659e5972cf60084e572ca628591eecc3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Compilerfehler C3805
ms.date: 11/04/2016
f1_keywords:
- C3805
helpviewer_keywords:
- C3805
ms.assetid: 166bbc35-5488-46b4-8e4c-9cd26ee5644e
ms.openlocfilehash: 4560fb4bb264f2ddd79ddc80a063925d874094b3
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/23/2019
ms.locfileid: "62391906"
---
# <a name="compiler-error-c3805"></a>Compilerfehler C3805
'token': Unerwartetes Token erwartet entweder '}' oder ID
Wenn Sie eine Eigenschaft zu definieren, wurde ein ungültiges Token gefunden. Entfernen Sie das Token ungültige. | 30.3 | 112 | 0.811881 | deu_Latn | 0.51809 |
e230c07f9d430826d3ec5917f1f49600e66b1708 | 4,225 | md | Markdown | CONTRIBUTING.md | shu-mutou/dependabot-core | 0db139f7d64d442300a9e1ea034743fad69639c9 | [
"MS-PL",
"Naumen",
"Condor-1.1",
"0BSD"
] | 2,670 | 2017-07-28T15:32:53.000Z | 2022-03-31T22:42:15.000Z | CONTRIBUTING.md | shu-mutou/dependabot-core | 0db139f7d64d442300a9e1ea034743fad69639c9 | [
"MS-PL",
"Naumen",
"Condor-1.1",
"0BSD"
] | 3,028 | 2017-08-01T07:32:04.000Z | 2022-03-31T23:53:04.000Z | CONTRIBUTING.md | shu-mutou/dependabot-core | 0db139f7d64d442300a9e1ea034743fad69639c9 | [
"MS-PL",
"Naumen",
"Condor-1.1",
"0BSD"
] | 698 | 2017-09-07T10:26:02.000Z | 2022-03-27T16:31:21.000Z | # Feedback and contributions to Dependabot
👋 Want to give us feedback on Dependabot, or contribute to it? That's great - thank you so much!
#### Overview
- [Contributing new ecosystems](#contributing-new-ecosystems)
- [Contribution workflow](#contribution-workflow)
- [Setup instructions](#setup-instructions)
- [Project layout](#project-layout)
## Contributing new ecosystems
We are not currently accepting new ecosystems into `dependabot-core`, starting in December 2020.
### Why have we paused accepting new ecosystems?
Dependabot has grown dramatically in the last two years since integrating with GitHub. We are now [used by millions of repositories](https://octoverse.github.com/#securing-software) across [16 package managers](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/about-dependabot-version-updates#supported-repositories-and-ecosystems). We aim to provide the best user experience
possible for each of these, but we have found we've lacked the capacity – and in some cases the in-house expertise – to support new ecosystems in the last year. We want to be
confident we can support each ecosystem we merge.
In the immediate future, we want to focus more of our resources on merging improvements to the ecosystems we already support. This does not mean that we are stopping work or investing less in this space - in fact, we're investing more, to make it a great user experience. This tough call means we can also provide a better experience for our contributors, where PRs don't go stale while waiting for a review.
If you are an ecosystem maintainer and are interested in integrating with Dependabot, and are willing to help provide the expertise necessary to build and support it, please open an issue and let us know.
We hope to be able to accept community contributions for ecosystem support again soon.
### What's next?
In `dependabot-core`, each ecosystem implementation is in its own gem so you can use Dependabot for a language
we have not merged by creating a [script](https://github.com/dependabot/dependabot-script) to run your own gem or
fork of core, e.g. [dependabot-lein-runner](https://github.com/CGA1123/dependabot-lein-runner)
Our plan in the year ahead is to invest more developer time directly in `dependabot-core` to improve our architecture so
each ecosystem is more isolated and testable. We also want to make a consistency pass on existing ecosystems so that there
is a clearer interface between core and the language-specific tooling.
Our goal is make it easier to create and test Dependabot extensions so there is a paved path for running additional
ecosystems in the future.
## Contribution workflow
* Fork the project.
* Make your feature addition or bug fix.
* Add tests for it. This is important so we don't break it in a future version unintentionally.
* Send a pull request. The tests will run on it automatically, so don't worry if you couldn't get them running locally.
## Setup instructions
Getting set up to run all of the tests on Dependabot isn't as simple as we'd like it to be - sorry about that. Dependabot needs to shell out to multiple different languages to correctly update dependency files, which makes things a little complicated.
Assuming you're working on a single language, the best thing to do is just to install Ruby and the language you're working on as follows:
* [Install rbenv](https://github.com/rbenv/rbenv#installation) (a Ruby version manager)
* [Install the latest Ruby](https://github.com/rbenv/rbenv#installing-ruby-versions)
* Install Bundler with `gem install bundler` (this is Ruby's package manager)
* Install Dependabot's Ruby dependencies with `bundle install`
* Install the language dependencies for whatever languages you're working on (see [how we do it in CI](.github/workflows/ci.yml))
* Run the tests for the file you're working on with `bundle exec rspec spec/dependabot/file_updaters/elixir/` (for example). They should be green (although might need an internet connection).
## Project layout
There's a good description of the project's layout in our [README](README.md), but if you're struggling to understand how anything works please don't hesitate to create an issue.
| 66.015625 | 408 | 0.787219 | eng_Latn | 0.998129 |
e230d98084d7b8b5839178880c804625e245b142 | 50 | md | Markdown | README.md | hiendieulam/Comp2112_Lab2 | 46c2f66775ff8af217307ead948615fbf1733f16 | [
"MIT"
] | null | null | null | README.md | hiendieulam/Comp2112_Lab2 | 46c2f66775ff8af217307ead948615fbf1733f16 | [
"MIT"
] | null | null | null | README.md | hiendieulam/Comp2112_Lab2 | 46c2f66775ff8af217307ead948615fbf1733f16 | [
"MIT"
] | null | null | null | # Comp2112_Lab2
Twitter: display data from object
| 16.666667 | 33 | 0.82 | eng_Latn | 0.900215 |
e231748aaaf02342d718ef99be8698d7a2bf31b7 | 359 | md | Markdown | api/server/Telerik.Web.UI/EditorImportSettings.md | thevivacioushussain/ajax-docs | b46cd8ec574600abf8c256c0e20100eb382a9679 | [
"MIT"
] | 22 | 2015-07-21T10:33:39.000Z | 2022-02-21T09:17:40.000Z | api/server/Telerik.Web.UI/EditorImportSettings.md | thevivacioushussain/ajax-docs | b46cd8ec574600abf8c256c0e20100eb382a9679 | [
"MIT"
] | 132 | 2015-07-14T13:56:12.000Z | 2022-01-28T10:04:56.000Z | api/server/Telerik.Web.UI/EditorImportSettings.md | thevivacioushussain/ajax-docs | b46cd8ec574600abf8c256c0e20100eb382a9679 | [
"MIT"
] | 355 | 2015-07-14T02:38:17.000Z | 2021-11-30T13:22:18.000Z | ---
title: Telerik.Web.UI.EditorImportSettings
page_title: Telerik.Web.UI.EditorImportSettings
description: Telerik.Web.UI.EditorImportSettings
---
# Telerik.Web.UI.EditorImportSettings
Container of misc. import settings of RadEditor control
## Inheritance Hierarchy
* System.Object
* Telerik.Web.UI.ObjectWithState
* Telerik.Web.UI.EditorImportSettings
| 21.117647 | 55 | 0.81337 | kor_Hang | 0.144058 |
e231a4b067526abefdc159a08ee992b45b4b7193 | 3,671 | md | Markdown | docs/using-r.md | emirdad/deriva-py | 3d5302af0ff15be53df3b71a671c529a2ce10050 | [
"Apache-2.0"
] | 3 | 2018-11-18T19:33:53.000Z | 2019-10-03T18:27:49.000Z | docs/using-r.md | emirdad/deriva-py | 3d5302af0ff15be53df3b71a671c529a2ce10050 | [
"Apache-2.0"
] | 81 | 2017-06-13T18:46:47.000Z | 2022-01-13T01:16:33.000Z | docs/using-r.md | emirdad/deriva-py | 3d5302af0ff15be53df3b71a671c529a2ce10050 | [
"Apache-2.0"
] | 4 | 2018-06-25T18:23:33.000Z | 2021-01-15T19:38:52.000Z | # Calling deriva-py from R
The [reticulate](https://rstudio.github.io/reticulate/) package can be used to call deriva-py functions from R. This writeup assumes familiarity with R and with the python-based examples in the deriva-py datapath tutorials.
Then, in R, install reticulate:
```
install.packages("reticulate")
```
You can then import Python packages into R:
```
library(reticulate)
deriva.core <- import("deriva.core")
```
Once deriva.core has been imported, we can call deriva-py functions in much the same way we'd call them from Python, keeping a few things in mind:
- We'll need to make some simple syntax changes (R uses `<-`, not `=` for assignment and `$` instead of `.` for a path separator.
- The reticulate library translates certain Python datatypes (data frames, scalars, etc.) into corresponding R types (see [the [reticulate documentation](https://rstudio.github.io/reticulate/) for details). Other python data types are opaque to R but can be used in calls to python functions.
- Datapath filter operations that use overloaded operators must be written with a different syntax in R.
For example, here's some python code (copied from the datapath tutorial) to initiate a connection to a deriva server and get a datapath object corresponding to the tale `isa.dataset` on the host `www.facebase.org`:
```
import deriva.core
protocol = 'https'
hostname = 'www.facebase.org'
catalog_number = 1
credential = None
# If you need to authenticate, use Deriva Auth agent and get the credential
# credential = get_credential(hostname)
catalog = deriva.core.ErmrestCatalog(protocol, hostname, catalog_number, credential)
# Get the path builder interface for this catalog
pb = catalog.getPathBuilder()
# Get some local variable handles to tables for convenience
dataset = pb.isa.dataset
```
and here's the same code in R:
```
library(reticulate)
deriva.core <- import("deriva.core")
protocol <- 'https'
hostname <- 'www.facebase.org'
catalog_number <- 1L
credential <- NULL
# If you need to authenticate, use Deriva Auth agent and get the credential
# credential <- get_credential(hostname)
catalog <- deriva.core$ErmrestCatalog(protocol, hostname, catalog_number, credential)
# Get the path builder interface for this catalog
pb <-catalog$getPathBuilder()
# Get some local variable handles to tables for convenience
dataset <- pb$isa$dataset
```
A couple things to notice: `catalog_number` is set to "1L"; setting it to "1" would lead to it being misinterpreted as a floating-point number, and R uses `NULL` where python uses `None`.
At this point, we could look at the contents of the entire dataset:
```
results <- dataset$entities()
iterate(results, print)
```
But that's a large table, so let's do some filtering. In Python, this is how we'd get a data frame of all dataset
records that were created more recently than November 1, 2018:
```
dataset.filter(dataset.RCT > "2018-11-01").entities()
```
The `>` filter operator won't work in R, so we need to use an alternate syntax:
```
results <- dataset$filter(dataset$RCT$gt("2018-11-01"))$entities()
iterate(results, print)
```
Other filters may be converted similarly.
| Python filter syntax | R filter syntax |
| --- | --- |
| col == val | col$eq(val) |
| col < val | col$lt(val) |
| col <= val | col$le(val) |
| col > val | col$gt(val) |
| col >= val | col$ge(val) |
Datapath results can also be converted into dataframes via the python `pandas` library.
```
pandas <- import("pandas")
results <- pandas$DataFrame(iterate(dataset$entities()))
```
For more information, see the datapath tutorial and the [reticulate documentation](https://rstudio.github.io/reticulate/index.html).
| 36.71 | 292 | 0.743122 | eng_Latn | 0.97118 |
e232a80155d42e85232ddc68e4d0a12434e09103 | 3,776 | md | Markdown | docs/on-map-loaded.md | microsoft/arcade-tile-util | 42acc9ee4b667b6e8323888e9199c618a94f79cb | [
"MIT"
] | null | null | null | docs/on-map-loaded.md | microsoft/arcade-tile-util | 42acc9ee4b667b6e8323888e9199c618a94f79cb | [
"MIT"
] | 2 | 2022-03-31T21:30:12.000Z | 2022-03-31T21:30:37.000Z | docs/on-map-loaded.md | microsoft/arcade-tile-util | 42acc9ee4b667b6e8323888e9199c618a94f79cb | [
"MIT"
] | 1 | 2022-03-11T01:44:31.000Z | 2022-03-11T01:44:31.000Z | # on Map Loaded
Run some code when a tilemap is loaded into the game scene.
```sig
tileUtil.onMapLoaded(function (null) {})
```
This event is triggered when a tilemap is loaded into the scene as the current tilemap. The tilemap object is given as a parameter for use by the code inside the event.
* **cb**: the event handler to run code when a tilemap is loaded into the game scene.
>* **tilemap**: the tilemap that was just loaded.
## Example
Create a sprite on the screen to use as an anchor for messages. Using the controller, load a tilemap into the scene when button `A` is pressed. Use button `B` to unload the tilemap. Display a sprite message when the tilemap is loaded or unloaded.
```blocks
let mySprite: Sprite = null
mySprite = sprites.create(img`
....................ccfff...........
..........fffffffffcbbbbf...........
.........fbbbbbbbbbfffbf............
.........fbb111bffbbbbff............
.........fb11111ffbbbbbcff..........
.........f1cccc11bbcbcbcccf.........
..........fc1c1c1bbbcbcbcccf...ccccc
............c3331bbbcbcbccccfccddbbc
...........c333c1bbbbbbbcccccbddbcc.
...........c331c11bbbbbcccccccbbcc..
..........cc13c111bbbbccccccffbccf..
..........c111111cbbbcccccbbc.fccf..
...........cc1111cbbbfdddddc..fbbcf.
.............cccffbdbbfdddc....fbbf.
..................fbdbbfcc......fbbf
...................fffff.........fff
`, SpriteKind.Player)
tileUtil.onMapUnloaded(function (tilemap) {
mySprite.sayText("Tilemap Unloaded!")
})
controller.B.onEvent(ControllerButtonEvent.Pressed, function () {
tileUtil.unloadTilemap()
})
controller.A.onEvent(ControllerButtonEvent.Pressed, function () {
tiles.setCurrentTilemap(tilemap`level1`)
})
tileUtil.onMapLoaded(function (tilemap) {
mySprite.sayText("Tilemap Loaded!")
})
```
```package
arcade-tile-util=github:microsoft/arcade-tile-util
```
```jres
{
"transparency16": {
"data": "hwQQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
"mimeType": "image/x-mkcd-f4",
"tilemapTile": true
},
"tile3": {
"data": "hwQQABAAAADu7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7u7g==",
"mimeType": "image/x-mkcd-f4",
"tilemapTile": true,
"displayName": "myTile1"
},
"tile1": {
"data": "hwQQABAAAADd3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3Q==",
"mimeType": "image/x-mkcd-f4",
"tilemapTile": true,
"displayName": "myTile"
},
"level1": {
"id": "level1",
"mimeType": "application/mkcd-tilemap",
"data": "MTAwYzAwMGEwMDAxMDIwMTAyMDEwMjAxMDIwMTAyMDEwMjAyMDEwMjAxMDIwMTAyMDEwMjAxMDIwMTAxMDIwMTAyMDEwMjAxMDIwMTAyMDEwMjAyMDEwMjAxMDIwMTAyMDEwMjAxMDIwMTAxMDIwMTAyMDEwMjAxMDIwMTAyMDEwMjAyMDEwMjAxMDIwMTAyMDEwMjAxMDIwMTAxMDIwMTAyMDEwMjAxMDIwMTAyMDEwMjAyMDEwMjAxMDIwMTAyMDEwMjAxMDIwMTAxMDIwMTAyMDEwMjAxMDIwMTAyMDEwMjAyMDEwMjAxMDIwMTAyMDEwMjAxMDIwMTAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMA==",
"tileset": [
"myTiles.transparency16",
"myTiles.tile3",
"myTiles.tile1"
],
"displayName": "level1"
},
"*": {
"mimeType": "image/x-mkcd-f4",
"dataEncoding": "base64",
"namespace": "myTiles"
}
}
``` | 41.043478 | 515 | 0.690943 | yue_Hant | 0.294646 |
e232e364bf0c787ee8b03d69ff05d06d2763a1d4 | 908 | md | Markdown | packages/services/f-performance/README.md | matthewhardern/fozzie-components | dd9a406ab8e2ece6d445cda44af545b8b794f76d | [
"Apache-2.0"
] | 13 | 2021-01-27T15:06:26.000Z | 2022-01-11T14:15:38.000Z | packages/services/f-performance/README.md | matthewhardern/fozzie-components | dd9a406ab8e2ece6d445cda44af545b8b794f76d | [
"Apache-2.0"
] | 463 | 2020-11-24T15:03:54.000Z | 2022-03-31T10:47:34.000Z | packages/services/f-performance/README.md | matthewhardern/fozzie-components | dd9a406ab8e2ece6d445cda44af545b8b794f76d | [
"Apache-2.0"
] | 46 | 2020-11-24T15:10:57.000Z | 2022-03-22T14:26:02.000Z | <div align="center">
# f-performance
<img width="125" alt="Fozzie Bear" src="../../../../bear.png" />
RUM performance metric collector
</div>
---
[](https://badge.fury.io/js/%40justeat%2Ff-performance)
[](https://circleci.com/gh/justeat/workflows/fozzie-components)
[](https://coveralls.io/github/justeat/f-performance)
[](https://snyk.io/test/github/justeat/f-performance?targetFile=package.json)
---
## Usage
### Installation
Install the module using npm or Yarn:
```sh
yarn add @justeat/f-performance
```
```sh
npm install @justeat/f-performance
```
| 25.942857 | 186 | 0.729075 | yue_Hant | 0.190476 |
e232fc33807f625e1a48a51b0fa55b8245df2205 | 590 | md | Markdown | relnotes/v0.2.0.md | fatkodima/rubocop-minitest | b7e403faec0ab9b12907d8cc2d3c4487692f3d49 | [
"MIT"
] | null | null | null | relnotes/v0.2.0.md | fatkodima/rubocop-minitest | b7e403faec0ab9b12907d8cc2d3c4487692f3d49 | [
"MIT"
] | null | null | null | relnotes/v0.2.0.md | fatkodima/rubocop-minitest | b7e403faec0ab9b12907d8cc2d3c4487692f3d49 | [
"MIT"
] | null | null | null | ### New features
* [#11](https://github.com/rubocop-hq/rubocop-minitest/pull/11): Add new `Minitest/RefuteNil` cop. ([@tejasbubane ][])
* [#8](https://github.com/rubocop-hq/rubocop-minitest/pull/8): Add new `Minitest/AssertTruthy` cop. ([@abhaynikam ][])
* [#9](https://github.com/rubocop-hq/rubocop-minitest/pull/9): Add new `Minitest/AssertIncludes` cop. ([@abhaynikam ][])
* [#10](https://github.com/rubocop-hq/rubocop-minitest/pull/10): Add new `Minitest/AssertEmpty` cop. ([@abhaynikam ][])
[@tejasbubane]: https://github.com/tejasbubane
[@abhaynikam]: https://github.com/abhaynikam
| 59 | 120 | 0.705085 | yue_Hant | 0.295278 |
e233359f45829e8ab1817cf4a0ebee0955b562a5 | 74 | md | Markdown | README.md | SafetyCone/R5T.Tromso.Abstractions | 90c5b8992e2ba1bd8e19d8e3fe1afa2caa49487c | [
"MIT"
] | null | null | null | README.md | SafetyCone/R5T.Tromso.Abstractions | 90c5b8992e2ba1bd8e19d8e3fe1afa2caa49487c | [
"MIT"
] | null | null | null | README.md | SafetyCone/R5T.Tromso.Abstractions | 90c5b8992e2ba1bd8e19d8e3fe1afa2caa49487c | [
"MIT"
] | null | null | null | # R5T.Tromso.Abstractions
An abstractions library for the Tromso project.
| 24.666667 | 47 | 0.824324 | eng_Latn | 0.795269 |
e2333b12983d9c86bc879d6d2f8e697a132153b4 | 6,068 | md | Markdown | README.md | amantyagi994/Dope-Github-Readmes | ba8d3a172477da35a27201b4f03dd518f4e413ed | [
"MIT"
] | 20 | 2021-10-01T04:11:35.000Z | 2022-02-22T11:18:11.000Z | README.md | amantyagi994/Dope-Github-Readmes | ba8d3a172477da35a27201b4f03dd518f4e413ed | [
"MIT"
] | 23 | 2021-10-01T07:57:03.000Z | 2022-03-19T18:20:16.000Z | README.md | amantyagi994/Dope-Github-Readmes | ba8d3a172477da35a27201b4f03dd518f4e413ed | [
"MIT"
] | 53 | 2021-09-30T17:07:51.000Z | 2022-03-19T18:10:15.000Z | # Dope-Github-Readmes
[](https://github.com/Design-and-Code/Dope-Github-Profiles/graphs/commit-activity)
[](https://github.com/Design-and-Code/Dope-Github-Profiles/pulls)
[](https://github.com/ellerbrock/open-source-badges/)
[](https://github.com/Design-and-Code/Dope-Github-Profiles/issues)
[](https://github.com/Design-and-Code/Dope-Github-Profiles/blob/main/LICENSE)
<!-- COVER IMAGE -->
## What is Dope Github Readmes?
<!-- TO CHANGE -->
This is collection of really dope Github README Profiles, that you can take inspiration from. Wanna add your your awesome readme to our list? Sure, follow along✨
## [Design-and-Code](https://discord.gg/druweDMn3s)
Welcome to Design & Code where anyone interested in designing and coding can connect and interact with fellow peers from all over the globe and not only learn but also collaborate on various projects!
<p align="left">
<a href="mailto:designandcode.community@gmail.com" style="text-decoration:none">
<img height="30" src = "https://img.shields.io/badge/gmail-c14438?&style=for-the-badge&logo=gmail&logoColor=white">
</a>
<a href="https://discord.gg/druweDMn3s" style="text-decoration:none">
<img height="30" src="https://img.shields.io/badge/discord-darkblue.svg?&style=for-the-badge&logo=discord&logoColor=white" />
</a>
<a href="http://designandcode.us/" style="text-decoration:none">
<img height="30" src = "https://img.shields.io/badge/website-c14438?&style=for-the-badge&logo=internet&logoColor=white">
</a>
<a href="https://www.linkedin.com/company/designandcode" style="text-decoration:none">
<img height="30" src="https://img.shields.io/badge/linkedin-blue.svg?&style=for-the-badge&logo=linkedin&logoColor=white" />
</a>
<a href="https://github.com/Design-and-Code" style="text-decoration:none">
<img height="30" src="https://img.shields.io/badge/Github-grey.svg?&style=for-the-badge&logo=Github&logoColor=white" />
</a>
<a href="https://www.instagram.com/designandcode.community" style="text-decoration:none">
<img height="30" src = "https://img.shields.io/badge/Instagram-%23E4405F.svg?&style=for-the-badge&logo=Instagram&logoColor=white">
</a>
<br />
## Folder Structure: 📁
```
root
├── readmes
│ ├── Your Name (folder)
│ │ ├── README.MD
│ │ └── Assets (folder)
│ │
│ │
│ └── Your Name (folder)
│ ├── README.MD
│ └── Assets (folder)
│
│
└── README.MD
```
---
<h3 align="center"> <b>Join our Community and feel free to drop your questions on</h3>
<p align="center">
<a href="https://discord.gg/druweDMn3s">
<img alt="Discord" src="https://img.shields.io/badge/Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white">
</a>
</p>
---
## Contribution Guidelines🏗
Want to add your project to the repo? We invite you to contribute.
To start contributing, follow the below guidelines:
**1.** Fork [this](https://github.com/Design-and-Code/Dope-Github-Readmes) repository.
**2.** Clone your forked copy of the project.
```
git clone https://github.com/<your_user_name>/Dope-Github-Readmes.git
```
**3.** Navigate to the project directory :file_folder: .
```
cd Dope-Github-Readmes
```
**4.** Add a reference(remote) to the original repository.
```
git remote add upstream https://github.com/Design-and-Code/Dope-Github-Readmes.git
```
**5.** Check the remotes for this repository.
```
git remote -v
```
**6.** Always take a pull from the upstream repository to your master branch to keep it at par with the main project(updated repository).
```
git pull upstream main
```
**7.** Create a new branch.
```
git checkout -b <your_branch_name>
```
**8.** Then navigate to the `readmes` directory.
```
cd readmes
```
**8.** Add your README and assests under a folder with your name (replace `your-name` with name).
```
mkdir your-name
cd your-name
```
**10.** Track your changes ✔.
```
git add .
```
**11.** Commit your changes .
```
git commit -m "Relevant message"
```
**12.** Push the committed changes in your feature branch to your remote repo.
```
git push -u origin <your_branch_name>
```
**13.** To create a pull request, click on `compare and pull requests`.
**14.** Add appropriate title and description to your pull request explaining your changes and efforts done.
**15.** Click on `Create Pull Request`.
**16** Voila! You have made a PR to the Projects-showcase 💥 . Wait for your submission to be accepted and your PR to be merged.
Note:
> Under the `assests` folder, you can add the resources used to build your readme, so others can reference it
___
## Project Maintainers 🛠
<table>
<tbody><tr>
<td align="center"><a href="https://github.com/DevrajDC"><img alt="" src="https://avatars.githubusercontent.com/u/65373279" width="130px;"><br><sub><b> Devraj Chatribin </b></sub></a><br><a href="https://github.com/DevrajDC" title="Code"> </a></td> </a></td>
<td align="center"><a href="https://github.com/Greeshma2903"><img alt="" src="https://avatars.githubusercontent.com/u/70336930?v=4" width="130px;"><br><sub><b> Medam Greeshma </b></sub></a><br><a href="https://github.com/Greeshma2903" title="Code"> </a></td> </a></td>
<td align="center"><a href="https://github.com/Nandani-Paliwal"><img alt="" src="https://avatars.githubusercontent.com/u/83964826?v=4" width="130px;"><br><sub><b> Nandani Paliwal </b></sub></a><br><a href="https://github.com/Nandani-Paliwal" title="Code"> </a></td> </a></td>
</tr>
</tbody></table>
## Our valuable Contributors👩💻👨💻 :
<a href="https://github.com/Design-and-Code/Dope-Github-Readmes/graphs/contributors">
<img src="https://contributors-img.web.app/image?repo=Design-and-Code/Dope-Github-Readmes" />
</a>
| 36.119048 | 277 | 0.701384 | yue_Hant | 0.32359 |
e233f9e71d3d9797303f5cf92dda53cd9d19ae45 | 19,610 | md | Markdown | CHANGELOG.md | buttercup/buttercup-core | bcbd716f6899cac16718fec394422674d58ddee1 | [
"MIT"
] | 397 | 2017-04-04T13:20:05.000Z | 2022-03-24T10:50:13.000Z | CHANGELOG.md | buttercup/buttercup-core | bcbd716f6899cac16718fec394422674d58ddee1 | [
"MIT"
] | 96 | 2017-04-12T13:22:14.000Z | 2022-03-09T02:42:16.000Z | CHANGELOG.md | buttercup/buttercup-core | bcbd716f6899cac16718fec394422674d58ddee1 | [
"MIT"
] | 57 | 2017-07-02T16:41:14.000Z | 2021-09-18T23:02:46.000Z | # Core library changelog
## v6.5.0
_2021-10-21_
* `Entry#getPropertyValueType`
* `Entry#setPropertyValueType`
## v6.4.0
_2021-10-18_
* Search results include `groupID`
## v6.3.0
_2021-10-03_
* `VaultSource#rename`
## v6.2.0
_2021-08-17_
* `setEntryFacadePropertyValueType` For setting facade entry property value types
* Exposed `getEntryPropertyValueType` and `setEntryPropertyValueType` helpers
## v6.1.0
_2021-08-14_
* `createEntryFromFacade` for creating new entries from facades
## v6.0.0
_2021-06-06_
* **Major release** (breaking changes)
* My Buttercup support removed
* New `iocane` version: New attachment encoding
* Attachment support for WebDAV datasource
## v5.13.1
_2021-05-18_
* **Bugfix**:
* Merging Format-A vaults would break due to incorrectly modified PAD IDs
* Decryption would break if the file contents contained a new-line
## v5.13.0
_2021-04-07_
* `VaultSource#testMasterPassword`
## v5.12.2
_2021-03-21_
* **Bugfix**:
* `VaultManager` auto-update race condition
## v5.12.1
_2021-03-16_
* `getEntryPath` and `getEntryFacadePath` helpers
## v5.12.0
_2021-03-16_
* Search result `entryType` property
## v5.11.0
_2021-03-14_
* `Search` deprecated
* `VaultEntrySearch` replaces `Search` as vault entry searching utility
* `VaultFacadeEntrySearch` added to support _facade_ searching for entries
## v5.10.0
_2021-03-13_
* Dependencies update
## v5.9.1
_2021-01-26_
* **Bugfix**:
* `crypto-random-string` in devDependencies rather than dependencies
## v5.9.0
_2021-01-23_
* Dependency upgrades
* **Bugfix**:
* Types not referenced correctly
* URL-based search results ordered incorrectly
## v5.8.0
_2020-12-09_
* Separated search portions for downstream use
## v5.7.0
_2020-12-06_
* Reduced key derivation rounds for improved performance
## v5.6.2
_2020-12-06_
* Upgrade `cowl`: Remove warning for invalid `responseType`
## v5.6.1
_2020-11-26_
* `Search` preparation performance improvement
## v5.6.0
_2020-10-11_
* `LocalStorageDatasource` for web
## v5.5.0
_2020-09-06_
* Replaced `js-base64` with `base64-js` for `Buffer`-less base64 encoding and decoding
* Added base64 encoding and decoding to app-env parameters
## v5.4.0
_2020-09-05_
* UUID app-env method (for mobile compatibility)
## v5.3.1
_2020-09-04_
* **Bugfix**:
* Poor facade handling performance
* Entries not able to be moved to another group (eg. to Trash)
## v5.3.0
_2020-09-04_
* Persistent `Group` and `Entry` items (no spawn on request)
* **Bugfix**:
* ([#282](https://github.com/buttercup/buttercup-core/issues/282)) Memory leak while auto-updating
## v5.2.1
_2020-08-30_
* **Bugfix**:
* `Search` would fail to search due to bad `Fuse.js` import on web
## v5.2.0
_2020-08-30_
* Upgrade `cowl` to remove `Buffer` dependencies
## v5.1.0
_2020-08-29_
* Replace `VError` with [`Layerr`](https://github.com/perry-mitchell/layerr) for better portability
## v5.0.1
_2020-08-26_
* Un-hide Credit Card dates
## v5.0.0
_2020-08-24_
* **Major release** (breaking changes)
* Codebase converted to **Typescript**
* `VaultFormatB` introduced
* Vault format routing (default: Format A)
* New gzip compression utilities
* Removed `getAttachmentDetails` method on datasources
## v4.13.1
_2020-08-19_
* **Bugfix**:
* ([#287](https://github.com/buttercup/buttercup-core/issues/287)) Vaults grow to enormous size
## v4.13.0
_2020-08-02_
* Improved vault merging behaviour when using `VaultSource#save`
## v4.12.0
_2020-08-01_
* Attachments for My Buttercup
## v4.11.1
_2020-07-28_
* Prevent attachments crypto key override during import
## v4.11.0
_2020-07-28_
* **Important attachments update**: Use new auto-generated attachments crypto key instead of master password. Avoid using earlier versions for attachments.
## v4.10.0
_2020-07-21_
* Attachment `created` and `updated` timestamps
## v4.9.2
_2020-07-11_
* **Bugfix**:
* `reorderSource` on `VaultManager` wouldn't change source order
## v4.9.1
_2020-07-09_
* **Bugfix**:
* `pify` not installed (required for `FileDatasource`)
## v4.9.0
_2020-07-07_
* No results from trash when using `Search`
* `createVaultFacade` option for preventing groups and entries from trash being returned
## v4.8.2
_2020-07-07_
* `Group.createNew` group ID parameter
## v4.8.1
_2020-07-06_
* Emit `sourcesUpdated` event on `VaultManager` when `locked`/`unlocked` events fired
## v4.8.0
_2020-07-06_
* _Open_ credentials support to allow for custom **external** datasources
* **Node 10 deprecated**
* Reduce `VaultManager` & `VaultSource` update events
## v4.7.1
_2020-07-05_
* **Bugfix**:
* `VaultSource` wouldn't emit updates for all changes
* Fuse.js import in `Search` wouldn't work in web environment
## v4.7.0
_2020-07-02_
* `Search` class for searching vaults
* `EntryFinder` deprecated and disabled
* `fieldsToProperties` for converting entry facade fields to a properties key-value object
* Merge mode for consuming vault facades
* **Bugfix**:
* `VaultSource#changeMasterPassword` threw when getting datasource support for changing passwords
## v4.6.0
_2020-06-30_
* `LocalFileDatasource` for web clients (from browser extension)
## v4.5.1
_2020-06-28_
* **Bugfix**:
* Broken `COLOUR_TEST` in VaultSource
## v4.5.0
_2020-06-24_
* Environment override `env/v1/isClosedEnv` for handling primary passwords in credentials
## v4.4.1
_2020-06-22_
* Improved safe-guards for `VaultManager#rehydrate` and `new TextDatasource()`
## v4.4.0
_2020-06-21_
* `VaultManager` auto migrate legacy vaults
* `TextDatasource` loads content from credentials if provided
## v4.3.0
_2020-06-21_
* Attachments support for `FileDatasource`
* Use live domain for My Buttercup
## v4.2.3
_2020-06-21_
* **Bugfix**:
* `GoogleDriveDatasource` failing upon initialisation in React Native
## v4.2.2
_2020-06-20_
* **Bugfix**:
* `GoogleDriveDatasource` couldn't be instantiated without an exception being thrown
## v4.2.1
_2020-06-12_
* **Bugfix**:
* Fix web build for compatibility
## v4.2.0
_2020-06-12_
* Attachments
* `MemoryDatasource` for in-memory vaults
* `Entry#getType` helper
* Additional vault insights
## v4.1.0
_2020-05-31_
* Upgrade iocane -> 4.1.0 for better PBKDF2 override support
* Audit and upgrade dependencies
## v4.0.0
_2020-05-30_
* **Major release** (breaking changes)
* Secure credentials handling
* Stable **My Buttercup** client and datasource
* Renamed the following structures:
* `Archive` to `Vault`
* `ArchiveManager` to `VaultManager`
* `ArchiveSource` to `VaultSource`
* Removed `Workspace`, moving functionality into `VaultSource`
* Re-introduced the following libraries back into the core:
* `@buttercup/datasources`
* `@buttercup/facades`
* `@buttercup/credentials`
* `@buttercup/signing`
* `@buttercup/app-env`
* Require `init()` call to initialise the environment (app-env)
* Moved **web** export to `buttercup/web`
* Added _formats_ to support eventual vault format migrations
* Alternate datasource `load()` output, à la `TextDatasource#load`, to output `Format` _and_ `history`
* `VaultManager` support for dual storages for cache and vault config
## v3.0.0
_2020-03-15_
* **Major release**
* App-Env environment handling
* Improved WebDAV client
## v3.0.0-rc3.2
_2020-01-26_
* Upgrade WebDAV library - reduced bundle size
## v3.0.0-rc3.1
_2020-01-24_
* Support for Regular Expressions in entry property searching
## v3.0.0-rc3.0
_2020-01-14_
* Improved Entry property history structure
* Many deprecated methods removed
## v3.0.0-rc2.1
_2020-01-05_
* Reduced bundle / web size
## v3.0.0-rc2.0
_2020-01-03_
* App-env usage
* crypto simplification for web
## v3.0.0-rc1.1
_2019-12-20_
* Change password support
## v3.0.0-rc1.0
_2019-11-03_
* Pre-release v3
* My Buttercup client/datasource
* Shares (not yet connected)
* Refactorings
## v2.16.1
_2019-10-10_
* **Bugfix**:
* Vault meta changes wouldn't write when storing an offline cache
## v2.16.0
_2019-10-06_
* Datasources upgrade for Google Drive and Dropbox request functionality
## v2.15.4
_2019-08-02_
* Google Drive client upgrade for `UInt8Array` incompatibilities
## v2.15.3
_2019-08-02_
* **Bugfix**:
* Google Drive Datasource not re-authorising on failure
## v2.15.2
_2019-07-23_
* **Bugfix**:
* Dropbox client PUT requests failing
## v2.15.1
_2019-07-22_
* Dropbox / Google Drive request client upgrades (browser stability)
## v2.15.0
_2019-07-16_
* Dropbox client upgrade: New request library (cowl)
## v2.14.0
_2019-07-11_
* Entry property time prefix attribute
## v2.13.0
_2019-06-03_
* TOTP URI attribute
* Deprecations for entry facade methods
## v2.12.0
_2019-04-14_
* Improved Google Drive authorisation handling
## v2.11.0
_2019-03-05_
* Google Drive support
* Support for updating source credentials (eg. expired API token renewal)
## v2.10.0
_2019-01-14_
* Entry history
## v2.9.2
_2018-12-08_
* Execute datasource instantiation callbacks for _each_ datasource type
## v2.9.1
_2018-12-05_
* Datasource instantiation register for post processing datasource instances
## ~~v2.9.0~~
_2018-12-03_
* Expose Dropbox request patcher
## v2.8.1
_2018-11-16_
* New Dropbox client
* New WebDAV client
* Add `Entry#getProperties` method
## v2.7.0
_2018-10-27_
* Ability to disable caching of vaults from within `ArchiveSource`
## v2.6.5
_2018-10-07_
* **Bugfix**:
* `Workspace#localDiffersFromRemote` and `Workspace#mergeFromRemote` used invalid instance-of check for detecting `TextDatasource`s
## v2.6.4
_2018-10-06_
* Update datasources dependencies (security)
* **Bugfix**:
* `webdav` wouldn't connect with Seafile instances
## v2.6.3
_2018-10-03_
* Add missing event for auto-update
* **Bugfix**:
* Auto update wouldn't run on _clean_ archives
* `Workspace#localDiffersFromRemote` and `Workspace#mergeFromRemote` used cached copy of archive, preventing loading again
## v2.6.0
_2018-10-02_
**Deprecated**
* Auto-update interrupt on `ArchiveManager`
## v2.5.0
_2018-10-02_
* Auto-update functionality for `ArchiveManager`
* Update dependencies to latest stable
## v2.4.1
_2018-08-27_
* **Bugfix**:
* Handle `ArchiveSource#unlock` calls when no `storageInterface` is set
## v2.4.0
_2018-08-11_
* Add offline support to `ArchiveManager` and `ArchiveSource`
* Add readOnly mode operation for `ArchiveSource` loading when content is overridden
## v2.3.0
_2018-07-28_
* Add ID parameter to `Group.createNew()` (for importing purposes)
## v2.2.0
_2018-07-10_
* Bugfix: Entry property names could not be made up of special characters
* `Entry#getURLs`
## v2.1.0
_2018-06-26_
* Add `removeable: false` property to Entry facades
* Add `note` facade type
## v2.0.4
_2018-07-03_
* Support `ArchiveSource` creation with type
## v2.0.3
_2018-06-25_
* Bugfix: Remove meta references from Entry facades
## v2.0.2
_2018-06-22_
* Bugfix: `Archive#getID()` call
## v2.0.1
_2018-06-22_
* Bugfix: `Workspace#mergeFromRemote` bad load for `ArchiveComparator`
## v2.0.0
_2018-06-21_
* Stable 2.0 release!
## v2.0.0-1 - v2.0.0-3
_2018-06-19_
* Upgrade credentials to 1.1.1
* Fix typo in `Credentials.fromSecureString`
* Upgrade datasources to 1.1.1
* Fix webdav file fetching (force text)
* Fix `Workspace#localDiffersFromRemote`
## **v2.0.0-0**
_2018-06-16_
* **New major** pre-release
* Refactored codebase
* Datasources, Credentials and signing split to separate repos
* iocane upgrade and new API
* Deprecation of meta
* Meta is now mapped to properties within the archive instance (meta commands do not create meta properties internally)
* Removed debug statements
* Fixed several minor bugs
* Improved flattening algorithm to prevent excessive operations
## v1.7.1
_2018-05-27_
* Update `iocane` to `0.10.2` (future proofing)
## v1.7.0
_2018-05-27_
* Update `webdav-fs` to `1.10.1`
## v1.6.2
_2018-03-15_
* Update `webdav-fs` to `1.9.0`
* Bugfix: Changing password in new `ArchiveManager` would fail in deadlock
## v1.5.1
_2018-02-15_
* Add queues to `ArchiveManager` and `ArchiveSource` classes
## v1.5.0
_2018-02-14_
* Add new `ArchiveManager` and `ArchiveSource` classes
## v1.4.0
_2017-01-11_
* `ArchiveManager#updateArchiveCredentials` to call `Workspace.save` after updating the credentials
## v1.3.1
_2017-01-10_
* Fix master password update process (`ArchiveManager` integration)
## v1.3.0
_2017-01-10_
* Support for changing archive names in `ArchiveManager` ([#193](https://github.com/buttercup/buttercup-core/issues/193))
## v1.2.0
_2017-01-10_
* Support for changing the master password ([#197](https://github.com/buttercup/buttercup-core/issues/197))
## v1.1.2
_2017-12-08_
* **Bugfix**:
* Fixed `ArchiveManager#removeSource` functionality in web context (`LocalStorageInterface` had no remove method)
## v1.1.1
_2017-11-18_
**Security release**
* Upgrade iocane to 0.10.1
* Fixes [DOS vulnerability in `debug` module](https://github.com/perry-mitchell/iocane/pull/21)
## v1.1.0
_2017-11-08_
* Added `EntryFinder` for fuzzy searching entries
## **v1.0.0**
_2017-11-05_
* Major release! Hooray!
* `getAttributes` added to `Archive`, `Group` and `Entry` classes
## v1.0.0-rc1 - rc4
_2017-10-21_
* Inclusion of core-web
## v0.50.0
_2017-10-16_
* Allow overriding of already-registered datasources
## v0.49.0
_2017-09-04_
* Add override methods for salt and IV generation in iocane
## v0.48.0
_2017-09-02_
* Add [overridable crypto methods](https://github.com/buttercup/buttercup-core/blob/b285c1449ae4a0430729388559524ba13d85c6ca/source/tools/overridable.js#L26)
## v0.47.1
_2017-08-29_
* Upgrade iocane to 0.9.0
* Core crypto now async
## v0.46.0
_2017-08-26_
* Upgrade iocane to 0.8.0
* Expose ability to override built-in encryption/decryption methods
## v0.45.0
_2017-07-24_
* **Bugfix**: Entry facades remove meta/attributes that no longer exist
* Entry `getProperty`/`getMeta`/`getAttribute` return base objects when no parameters specified
## v0.44.1
_2017-07-16_
* Expose `webdav-fs` via vendor props: `Buttercup.vendor.webdavFS` (for `fetch` override support)
## v0.43.0
_2017-07-07_
* Entry facades: Dynamic field editing
## v0.42.1
_2017-07-07_
* **Bugfix**: `ArchiveManager` `unlockedSources` returns array of booleans
## v0.42.0
_2017-07-06_
* Change event emitters (`ArchiveManager`, `Archive` & `Westley`) to be asynchronous: [#169](https://github.com/buttercup/buttercup-core/issues/169)
## v0.41.2
_2017-07-03_
* **Bugfix**: Wrong password when unlocking source in `ArchiveManager` breaks state
## v0.41.1
_2017-06-30_
* **Bugfix**: Wrong credentials passed to `Workspace` in `ArchiveManager`: [#167](https://github.com/buttercup/buttercup-core/issues/167)
## v0.41.0
_2017-06-24_
* Upgrade webdav-fs to 1.3.0
* Disable browser `window.fetch` for stability
* Add `remove` method to `ArchiveManager`
## v0.40.1
_2017-06-10_
* Add missing event for source rehydration in `ArchiveManager`
## v0.40.0
_2017-05-28_
* Add event emitters to `Westley` and `Archive` instances for when archive changes occur
* Add event emitter to `ArchiveManager` for state updates
* Upgrade **webdav-fs** to `1.0.0`
* **Bugfix**: Empty values for properties/meta/attributes
## v0.39.1
_2017-05-22_
* Add support for credentials to be stringified and parsed _insecurely_ (`TextDatasource` deferred encoding hander support for separated interfaces like React Native and WebViews).
* Expose `TextDatasource` default encoding handlers
## v0.39.0
_2017-05-21_
* Add support for deferring encryption and packaging in `TextDatasource`
## v0.38.1
_2017-05-02_
* Expose `StorageInterface`
## v0.38.0
_2017-04-29_
* Add [MyButtercup](https://github.com/buttercup/roadmap/blob/9da2fad70941f5eda056c2de6d6c112f49470878/roadmap/OVERALL.md#mybuttercup) integration
* Add [Nextcloud](https://nextcloud.com/) integration
## v0.37.1
_2017-04-16_
* Bugfix: Fix merge stripping deletion commands when no changes occurred on the remote side
## v0.37.0
_2017-03-27_
* Add `Group.getGroup()` support for getting parent groups
## v0.36.0
_2017-03-20_
* Add searching tool for fetching all entries from an archive
* Expose searching tools
## v0.35.0
_2017-03-11_
* Serialisation of all string-type values in history
* _Old versions of Buttercup **will not be able to read** archives from this version!_
## v0.34.0
_2017-03-06_
* Upgrade iocane to 0.6.0
* Increase PBKDF2 rounds to 200-250k
* New credentials factory
* **Breaking:**
* Remove old `Workspace`
* Rename `SharedWorkspace` to `Workspace`
* Remove `Credentials` class
* All password authentication methods to use new credentials structure
## v0.33.2
_2017-01-07_
* Type checking getter `type` on `Archive` and `Group` instances
* Better type checking in group moving
## v0.33.0
_2017-01-07_
* Add `getHistory` method for `Archive` instances
* Add `createFromHistory` factory method to `Archive`
## v0.32.0
_2017-01-06_
* Add `emptyTrash` method to `Archive`
## v0.31.0
_2017-01-04_
* Add `findEntryByID` to `Archive` and `Group` classes
* Throw errors when creating entries/groups within trash
## v0.30.2
_2016-12-30_
* Fix OwnCloudDatasource's [`fromObject` bug](https://github.com/buttercup-pw/buttercup-core/issues/129)
## v0.30.0
_2016-12-27_
* Ensure archive ID always generated
* Added Entry `isInTrash` method
* Datasource registration (support for 3rd party datasources)
* `TextDatasource` stores content when using `toObject` or `toString`
* **Breaking:**
* Datasource `toString` format rewrite
* Datasource `toObject`
* Datasource tools removed (`Buttercup.tools.datasource`)
## v0.29.0
_2016-11-30_
* Credentials meta support
* Debug support
## v0.28.0
_2016-11-07_
* Read-only archives
* Shared groups
* Added `SharedWorkspace` for managing several archives
* Support for moving groups between archives
## v0.27.0
_2016-10-30_
* Group & Entry searching decorators for Archives and Groups
* Renamed ManagedGroup to Group
* Renamed ManagedEntry to Entry
* Deprecated `Archive.getGroupByID` and `Group.getGroupByID` in favour of `findGroupByID`
## v0.26.0
_2016-10-23_
* Attributes for archives
## v0.25.0
_2016-10-16_
* Entry deletion improved (trash and then deleted)
* Group deletion improved (trash and then deleted)
* Fixed group `toObject` not passing options to recursive calls
* Group `toString` takes `toObject` output options
## v0.24.0
_2016-10-15_
* Group `toObject` upgrade (deep output)
## v0.23.0
_2016-10-02_
* Buttercup server integration -> datasource
## v0.22.0
_2016-09-27_
* Key file support
## v0.21.0
_2016-08-07_
* JSON exporting
* Datasource to/from strings
## v0.20.1
_2016-07-17_
* Added datasources
* Workspace saving -> async
## v0.19.0
_2016-04-09_
* Support for overriding iocane's PBKDF22 method externally (core-web)
## v0.18.0
_2016-04-03_
* Integrated iocane
* ES6 support
* Dropped Node 0.12
## v0.17.0
_2016-02-27_
* Set PBKDF2 iteration count to 6-8k
* Archive searching for entries & groups
## v0.16.0
_2016-02-21_
* Archive searching
* WebDAV-fs upgrade -> 0.4.0
## v0.15.1
_2016-02-20_
* Fixed removed link script (npmignore)
## v0.15.0
_2016-02-19_
* Increased PBKDF2 iterations
## v0.14.0
_2016-02-12_
* Added `Credentials`
| 20.685654 | 181 | 0.714431 | eng_Latn | 0.575454 |
e23472d6f9593e90a1f1a026c0b276b5b74c9690 | 2,240 | md | Markdown | _posts/21/sxz/2021-04-05-carly-pearce.md | chito365/ukdat | 382c0628a4a8bed0f504f6414496281daf78f2d8 | [
"MIT"
] | null | null | null | _posts/21/sxz/2021-04-05-carly-pearce.md | chito365/ukdat | 382c0628a4a8bed0f504f6414496281daf78f2d8 | [
"MIT"
] | null | null | null | _posts/21/sxz/2021-04-05-carly-pearce.md | chito365/ukdat | 382c0628a4a8bed0f504f6414496281daf78f2d8 | [
"MIT"
] | null | null | null | ---
id: 1161
title: Carly Pearce
date: 2021-04-05T05:24:52+00:00
author: chito
layout: post
guid: https://ukdataservers.com/carly-pearce/
permalink: /04/05/carly-pearce
tags:
- claims
- lawyer
- doctor
- house
- multi family
- online
- poll
- business
- unspecified
- single
- relationship
- engaged
- married
- complicated
- open relationship
- widowed
- separated
- divorced
- Husband
- Wife
- Boyfriend
- Girlfriend
category: Guides
---
# About Carly Pearce
Country music singer and musician who has risen to fame touring with renowned country groups and musicians like Florida Georgia Line, Gary Allan, and Hunter Hayes. She also released the popular Christmas album Mistletoe, Holly & Bluegrass in 2013 and followed with her self-titled EP in 2015.
# Early life
At only eleven years old she began touring with a bluegrass band. Then, at sixteen, she moved to Pigeon Forge to begin performing at Dollywood.
# Trivia
She has performed at some of the most legendary venues in Nashville, including the Bluebird Cafe and the Grand Ole Opry. She won Breakthrough Video of the Year at the 2018 CMT Awards.
# Family of Carly Pearce
She is a Kentucky bluegrass native but later settled in Nashville, Tennessee. Her mother’s name is Jackie Slusser. She got engaged to fellow country singer Michael Ray in 2018. The couple tied the knot on October 6, 2019 and filed for divorce eight months later.
# Close associates of Carly Pearce
She supported Pretty Little Liars star Lucy Hale as a backup singer on tour.
| 24.888889 | 292 | 0.525446 | eng_Latn | 0.998567 |
e234abea205cc577d36812b4d57f59b1614bc7af | 78 | md | Markdown | README.md | AlexKopen/Card-Reader | e60182332fbd480b498cb9078596186a1744f4fe | [
"MIT"
] | null | null | null | README.md | AlexKopen/Card-Reader | e60182332fbd480b498cb9078596186a1744f4fe | [
"MIT"
] | null | null | null | README.md | AlexKopen/Card-Reader | e60182332fbd480b498cb9078596186a1744f4fe | [
"MIT"
] | null | null | null | # Card-Reader
A JavaScript web application which reads in digital card values
| 26 | 63 | 0.820513 | eng_Latn | 0.993951 |
e234ebeb4c86c734c1cf94e3b606215a998f73ee | 750 | md | Markdown | packages/markdown-loader/readme.md | Pajn/documittu | cdbaee000fec327fb6ba79dc297930941ebe548c | [
"Apache-2.0",
"MIT"
] | 1 | 2017-03-03T18:36:33.000Z | 2017-03-03T18:36:33.000Z | packages/markdown-loader/readme.md | beanloop/documittu | 1d0061418508fcc2e9335ac1b3b73ff34362884d | [
"Apache-2.0",
"MIT"
] | null | null | null | packages/markdown-loader/readme.md | beanloop/documittu | 1d0061418508fcc2e9335ac1b3b73ff34362884d | [
"Apache-2.0",
"MIT"
] | null | null | null | This is a fork of [react-markdown-loader](https://github.com/javiercf/react-markdown-loader) that
is customized to fit documittu.
This loader parses markdown files and converts them to a React Stateless Component.
It will also parse FrontMatter to import dependencies and render components
along with it’s source code.
## Usage
In the FrontMatter you should import the components you want to render
with the component name as a key and it's path as the value
```markdown
---
imports:
HelloWorld: './hello-world.js',
'{ Component1, Component2 }': './components.js'
---
```
*hello-world.md*
<pre>
---
imports:
HelloWorld: './hello-world.js'
---
# Hello World
This is an example component
```render html
<HelloWorld />
```
</pre>
| 19.230769 | 97 | 0.725333 | eng_Latn | 0.978096 |
e23570a2ba24d9a1c716690a9ee54a58a0973135 | 1,907 | md | Markdown | _posts/2020-04-15-leetcode-295-find-median-from-data-stream.md | hejianbo/hejianbo.github.io | 3de5f72ca7199384a054424f87c3c5fd92f877ce | [
"MIT"
] | null | null | null | _posts/2020-04-15-leetcode-295-find-median-from-data-stream.md | hejianbo/hejianbo.github.io | 3de5f72ca7199384a054424f87c3c5fd92f877ce | [
"MIT"
] | null | null | null | _posts/2020-04-15-leetcode-295-find-median-from-data-stream.md | hejianbo/hejianbo.github.io | 3de5f72ca7199384a054424f87c3c5fd92f877ce | [
"MIT"
] | null | null | null | ---
layout: post
title: 每日一题 - LeetCode 295. Find Median from Data Stream
tags:
- leetcode
categories: leetcode
description: LeetCode 295. Find Median from Data Stream
---
# 295. Find Median from Data Stream
## Description
Median is the middle value in an ordered integer list. If the size of the list is even, there is no middle value. So the median is the mean of the two middle value.
For example,
`[2,3,4]`, the median is `3`
`[2,3]`, the median is `(2 + 3) / 2 = 2.5`
Design a data structure that supports the following two operations:
- void addNum(int num) - Add a integer number from the data stream to the data structure.
- double findMedian() - Return the median of all elements so far.
**Example:**
```
addNum(1)
addNum(2)
findMedian() -> 1.5
addNum(3)
findMedian() -> 2
```
## Solution 1 双堆
```java
class MedianFinder {
private Queue<Integer> min = new PriorityQueue<>();
private Queue<Integer> max = new PriorityQueue<>(Comparator.reverseOrder());
/** initialize your data structure here. */
public MedianFinder() {
}
public void addNum(int num) {
this.max.offer(num);
// 平衡
this.min.offer(this.max.poll());
// 上面操作会导致max中元素都到min中了, 所以需要将min中的元素移过云
// 这会导致max.size() >= min.size()
if (this.max.size() < this.min.size()) {
this.max.offer(this.min.poll());
}
}
public double findMedian() {
if (this.max.isEmpty()) {
return 0;
}
if (this.max.size() > this.min.size()) {
return this.max.peek();
} else {
return (this.min.peek() + this.max.peek()) * 0.5;
}
}
}
```
执行结果
```
Runtime: 54 ms, faster than 34.67% of Java online submissions for Find Median from Data Stream.
Memory Usage: 51.3 MB, less than 100.00% of Java online submissions for Find Median from Data Stream.
``` | 23.256098 | 164 | 0.619822 | eng_Latn | 0.903608 |
e235a1479de7f305a25013c4bfd4f3d44f9e3aa0 | 20,810 | md | Markdown | README.md | minump/sparklyr | 3465d26e9ff7ebe7ffba45e5b751012c9a92f7ef | [
"Apache-2.0"
] | null | null | null | README.md | minump/sparklyr | 3465d26e9ff7ebe7ffba45e5b751012c9a92f7ef | [
"Apache-2.0"
] | 1 | 2018-07-13T20:56:06.000Z | 2018-07-13T20:56:06.000Z | README.md | minump/sparklyr | 3465d26e9ff7ebe7ffba45e5b751012c9a92f7ef | [
"Apache-2.0"
] | null | null | null | sparklyr: R interface for Apache Spark
================
<img src="tools/readme/sparklyr-package.png" width=200 align="right" style="margin-left: 20px; margin-right: 20px"/>
[](https://travis-ci.org/rstudio/sparklyr)
[](https://ci.appveyor.com/project/JavierLuraschi/sparklyr)
[](https://cran.r-project.org/package=sparklyr)
[](https://codecov.io/gh/rstudio/sparklyr)
[](https://gitter.im/rstudio/sparklyr?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
- Connect to [Spark](http://spark.apache.org/) from R. The sparklyr
package provides a complete [dplyr](https://github.com/hadley/dplyr)
backend.
- Filter and aggregate Spark datasets then bring them into R for
analysis and visualization.
- Use Spark’s distributed [machine
learning](http://spark.apache.org/docs/latest/mllib-guide.html)
library from R.
- Create [extensions](http://spark.rstudio.com/extensions.html) that
call the full Spark API and provide interfaces to Spark packages.
## Installation
You can install the **sparklyr** package from CRAN as follows:
``` r
install.packages("sparklyr")
```
You should also install a local version of Spark for development
purposes:
``` r
library(sparklyr)
spark_install(version = "2.1.0")
```
To upgrade to the latest version of sparklyr, run the following command
and restart your r session:
``` r
devtools::install_github("rstudio/sparklyr")
```
If you use the RStudio IDE, you should also download the latest [preview
release](https://www.rstudio.com/products/rstudio/download/preview/) of
the IDE which includes several enhancements for interacting with Spark
(see the [RStudio IDE](#rstudio-ide) section below for more details).
## Connecting to Spark
You can connect to both local instances of Spark as well as remote Spark
clusters. Here we’ll connect to a local instance of Spark via the
[spark\_connect](http://spark.rstudio.com/reference/sparklyr/latest/spark_connect.html)
function:
``` r
library(sparklyr)
sc <- spark_connect(master = "local")
```
The returned Spark connection (`sc`) provides a remote dplyr data source
to the Spark cluster.
For more information on connecting to remote Spark clusters see the
[Deployment](http://spark.rstudio.com/deployment.html) section of the
sparklyr website.
## Using dplyr
We can now use all of the available dplyr verbs against the tables
within the cluster.
We’ll start by copying some datasets from R into the Spark cluster (note
that you may need to install the nycflights13 and Lahman packages in
order to execute this code):
``` r
install.packages(c("nycflights13", "Lahman"))
```
``` r
library(dplyr)
iris_tbl <- copy_to(sc, iris)
flights_tbl <- copy_to(sc, nycflights13::flights, "flights")
batting_tbl <- copy_to(sc, Lahman::Batting, "batting")
src_tbls(sc)
```
## [1] "batting" "flights" "iris"
To start with here’s a simple filtering example:
``` r
# filter by departure delay and print the first few records
flights_tbl %>% filter(dep_delay == 2)
```
## # Source: lazy query [?? x 19]
## # Database: spark_connection
## year month day dep_time sched_dep_time dep_delay arr_time
## <int> <int> <int> <int> <int> <dbl> <int>
## 1 2013 1 1 517 515 2 830
## 2 2013 1 1 542 540 2 923
## 3 2013 1 1 702 700 2 1058
## 4 2013 1 1 715 713 2 911
## 5 2013 1 1 752 750 2 1025
## 6 2013 1 1 917 915 2 1206
## 7 2013 1 1 932 930 2 1219
## 8 2013 1 1 1028 1026 2 1350
## 9 2013 1 1 1042 1040 2 1325
## 10 2013 1 1 1231 1229 2 1523
## # ... with more rows, and 12 more variables: sched_arr_time <int>,
## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>,
## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
## # minute <dbl>, time_hour <dttm>
[Introduction to dplyr](https://CRAN.R-project.org/package=dplyr)
provides additional dplyr examples you can try. For example, consider
the last example from the tutorial which plots data on flight delays:
``` r
delay <- flights_tbl %>%
group_by(tailnum) %>%
summarise(count = n(), dist = mean(distance), delay = mean(arr_delay)) %>%
filter(count > 20, dist < 2000, !is.na(delay)) %>%
collect
# plot delays
library(ggplot2)
ggplot(delay, aes(dist, delay)) +
geom_point(aes(size = count), alpha = 1/2) +
geom_smooth() +
scale_size_area(max_size = 2)
```
## `geom_smooth()` using method = 'gam'
<!-- -->
### Window Functions
dplyr [window functions](https://CRAN.R-project.org/package=dplyr) are
also supported, for example:
``` r
batting_tbl %>%
select(playerID, yearID, teamID, G, AB:H) %>%
arrange(playerID, yearID, teamID) %>%
group_by(playerID) %>%
filter(min_rank(desc(H)) <= 2 & H > 0)
```
## # Source: lazy query [?? x 7]
## # Database: spark_connection
## # Groups: playerID
## # Ordered by: playerID, yearID, teamID
## playerID yearID teamID G AB R H
## <chr> <int> <chr> <int> <int> <int> <int>
## 1 aaronha01 1959 ML1 154 629 116 223
## 2 aaronha01 1963 ML1 161 631 121 201
## 3 abbotji01 1999 MIL 20 21 0 2
## 4 abnersh01 1992 CHA 97 208 21 58
## 5 abnersh01 1990 SDN 91 184 17 45
## 6 acklefr01 1963 CHA 2 5 0 1
## 7 acklefr01 1964 CHA 3 1 0 1
## 8 adamecr01 2016 COL 121 225 25 49
## 9 adamecr01 2015 COL 26 53 4 13
## 10 adamsac01 1943 NY1 70 32 3 4
## # ... with more rows
For additional documentation on using dplyr with Spark see the
[dplyr](http://spark.rstudio.com/dplyr.html) section of the sparklyr
website.
## Using SQL
It’s also possible to execute SQL queries directly against tables within
a Spark cluster. The `spark_connection` object implements a
[DBI](https://github.com/rstats-db/DBI) interface for Spark, so you can
use `dbGetQuery` to execute SQL and return the result as an R data
frame:
``` r
library(DBI)
iris_preview <- dbGetQuery(sc, "SELECT * FROM iris LIMIT 10")
iris_preview
```
## Sepal_Length Sepal_Width Petal_Length Petal_Width Species
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3.0 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5.0 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5.0 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## Machine Learning
You can orchestrate machine learning algorithms in a Spark cluster via
the [machine
learning](http://spark.apache.org/docs/latest/mllib-guide.html)
functions within **sparklyr**. These functions connect to a set of
high-level APIs built on top of DataFrames that help you create and tune
machine learning workflows.
Here’s an example where we use
[ml\_linear\_regression](http://spark.rstudio.com/reference/sparklyr/latest/ml_linear_regression.html)
to fit a linear regression model. We’ll use the built-in `mtcars`
dataset, and see if we can predict a car’s fuel consumption (`mpg`)
based on its weight (`wt`), and the number of cylinders the engine
contains (`cyl`). We’ll assume in each case that the relationship
between `mpg` and each of our features is linear.
``` r
# copy mtcars into spark
mtcars_tbl <- copy_to(sc, mtcars)
# transform our data set, and then partition into 'training', 'test'
partitions <- mtcars_tbl %>%
filter(hp >= 100) %>%
mutate(cyl8 = cyl == 8) %>%
sdf_partition(training = 0.5, test = 0.5, seed = 1099)
# fit a linear model to the training dataset
fit <- partitions$training %>%
ml_linear_regression(response = "mpg", features = c("wt", "cyl"))
fit
```
## Formula: mpg ~ wt + cyl
##
## Coefficients:
## (Intercept) wt cyl
## 33.499452 -2.818463 -0.923187
For linear regression models produced by Spark, we can use `summary()`
to learn a bit more about the quality of our fit, and the statistical
significance of each of our predictors.
``` r
summary(fit)
```
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.752 -1.134 -0.499 1.296 2.282
##
## Coefficients:
## (Intercept) wt cyl
## 33.499452 -2.818463 -0.923187
##
## R-Squared: 0.8274
## Root Mean Squared Error: 1.422
Spark machine learning supports a wide array of algorithms and feature
transformations and as illustrated above it’s easy to chain these
functions together with dplyr pipelines. To learn more see the [machine
learning](mllib.html) section.
## Reading and Writing Data
You can read and write data in CSV, JSON, and Parquet formats. Data can
be stored in HDFS, S3, or on the local filesystem of cluster nodes.
``` r
temp_csv <- tempfile(fileext = ".csv")
temp_parquet <- tempfile(fileext = ".parquet")
temp_json <- tempfile(fileext = ".json")
spark_write_csv(iris_tbl, temp_csv)
iris_csv_tbl <- spark_read_csv(sc, "iris_csv", temp_csv)
spark_write_parquet(iris_tbl, temp_parquet)
iris_parquet_tbl <- spark_read_parquet(sc, "iris_parquet", temp_parquet)
spark_write_json(iris_tbl, temp_json)
iris_json_tbl <- spark_read_json(sc, "iris_json", temp_json)
src_tbls(sc)
```
## [1] "batting" "flights" "iris" "iris_csv"
## [5] "iris_json" "iris_parquet" "mtcars"
## Distributed R
You can execute arbitrary r code across your cluster using
`spark_apply`. For example, we can apply `rgamma` over `iris` as
follows:
``` r
spark_apply(iris_tbl, function(data) {
data[1:4] + rgamma(1,2)
})
```
## # Source: table<sparklyr_tmp_d1564a7b34a4> [?? x 4]
## # Database: spark_connection
## Sepal_Length Sepal_Width Petal_Length Petal_Width
## <dbl> <dbl> <dbl> <dbl>
## 1 8.47 6.87 4.77 3.57
## 2 8.27 6.37 4.77 3.57
## 3 8.07 6.57 4.67 3.57
## 4 7.97 6.47 4.87 3.57
## 5 8.37 6.97 4.77 3.57
## 6 8.77 7.27 5.07 3.77
## 7 7.97 6.77 4.77 3.67
## 8 8.37 6.77 4.87 3.57
## 9 7.77 6.27 4.77 3.57
## 10 8.27 6.47 4.87 3.47
## # ... with more rows
You can also group by columns to perform an operation over each group of
rows and make use of any package within the closure:
``` r
spark_apply(
iris_tbl,
function(e) broom::tidy(lm(Petal_Width ~ Petal_Length, e)),
names = c("term", "estimate", "std.error", "statistic", "p.value"),
group_by = "Species"
)
```
## # Source: table<sparklyr_tmp_d15652cdc540> [?? x 6]
## # Database: spark_connection
## Species term estimate std.error statistic p.value
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 versicolor (Intercept) -0.0843 0.161 -0.525 6.02e- 1
## 2 versicolor Petal_Length 0.331 0.0375 8.83 1.27e-11
## 3 virginica (Intercept) 1.14 0.379 2.99 4.34e- 3
## 4 virginica Petal_Length 0.160 0.0680 2.36 2.25e- 2
## 5 setosa (Intercept) -0.0482 0.122 -0.396 6.94e- 1
## 6 setosa Petal_Length 0.201 0.0826 2.44 1.86e- 2
## Extensions
The facilities used internally by sparklyr for its dplyr and machine
learning interfaces are available to extension packages. Since Spark is
a general purpose cluster computing system there are many potential
applications for extensions (e.g. interfaces to custom machine learning
pipelines, interfaces to 3rd party Spark packages, etc.).
Here’s a simple example that wraps a Spark text file line counting
function with an R function:
``` r
# write a CSV
tempfile <- tempfile(fileext = ".csv")
write.csv(nycflights13::flights, tempfile, row.names = FALSE, na = "")
# define an R interface to Spark line counting
count_lines <- function(sc, path) {
spark_context(sc) %>%
invoke("textFile", path, 1L) %>%
invoke("count")
}
# call spark to count the lines of the CSV
count_lines(sc, tempfile)
```
## [1] 336777
To learn more about creating extensions see the
[Extensions](http://spark.rstudio.com/extensions.html) section of the
sparklyr website.
## Table Utilities
You can cache a table into memory with:
``` r
tbl_cache(sc, "batting")
```
and unload from memory using:
``` r
tbl_uncache(sc, "batting")
```
## Connection Utilities
You can view the Spark web console using the `spark_web` function:
``` r
spark_web(sc)
```
You can show the log using the `spark_log` function:
``` r
spark_log(sc, n = 10)
```
## 18/05/25 09:19:44 INFO ContextCleaner: Cleaned accumulator 2135
## 18/05/25 09:19:44 INFO ContextCleaner: Cleaned accumulator 2136
## 18/05/25 09:19:44 INFO ContextCleaner: Cleaned accumulator 2146
## 18/05/25 09:19:44 INFO ContextCleaner: Cleaned accumulator 2138
## 18/05/25 09:19:44 INFO ContextCleaner: Cleaned accumulator 2126
## 18/05/25 09:19:44 INFO Executor: Finished task 0.0 in stage 69.0 (TID 115). 918 bytes result sent to driver
## 18/05/25 09:19:44 INFO TaskSetManager: Finished task 0.0 in stage 69.0 (TID 115) in 177 ms on localhost (executor driver) (1/1)
## 18/05/25 09:19:44 INFO TaskSchedulerImpl: Removed TaskSet 69.0, whose tasks have all completed, from pool
## 18/05/25 09:19:44 INFO DAGScheduler: ResultStage 69 (count at NativeMethodAccessorImpl.java:0) finished in 0.180 s
## 18/05/25 09:19:44 INFO DAGScheduler: Job 47 finished: count at NativeMethodAccessorImpl.java:0, took 0.183200 s
Finally, we disconnect from Spark:
``` r
spark_disconnect(sc)
```
## RStudio IDE
The latest RStudio [Preview
Release](https://www.rstudio.com/products/rstudio/download/preview/) of
the RStudio IDE includes integrated support for Spark and the sparklyr
package, including tools for:
- Creating and managing Spark connections
- Browsing the tables and columns of Spark DataFrames
- Previewing the first 1,000 rows of Spark DataFrames
Once you’ve installed the sparklyr package, you should find a new
**Spark** pane within the IDE. This pane includes a **New Connection**
dialog which can be used to make connections to local or remote Spark
instances:
<img src="tools/readme/spark-connect.png" class="screenshot" width=389 />
Once you’ve connected to Spark you’ll be able to browse the tables
contained within the Spark cluster and preview Spark DataFrames using
the standard RStudio data
viewer:
<img src="tools/readme/spark-dataview.png" class="screenshot" width=639 />
You can also connect to Spark through [Livy](http://livy.io) through a
new connection
dialog:
<img src="tools/readme/spark-connect-livy.png" class="screenshot" width=389 />
<div style="margin-bottom: 15px;">
</div>
The RStudio IDE features for sparklyr are available now as part of the
[RStudio Preview
Release](https://www.rstudio.com/products/rstudio/download/preview/).
## Using H2O
[rsparkling](https://cran.r-project.org/package=rsparkling) is a CRAN
package from [H2O](http://h2o.ai) that extends
[sparklyr](http://spark.rstudio.com) to provide an interface into
[Sparkling Water](https://github.com/h2oai/sparkling-water). For
instance, the following example installs, configures and runs
[h2o.glm](http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/glm.html):
``` r
library(rsparkling)
library(sparklyr)
library(dplyr)
library(h2o)
sc <- spark_connect(master = "local", version = "2.1.0")
mtcars_tbl <- copy_to(sc, mtcars, "mtcars")
mtcars_h2o <- as_h2o_frame(sc, mtcars_tbl, strict_version_check = FALSE)
mtcars_glm <- h2o.glm(x = c("wt", "cyl"),
y = "mpg",
training_frame = mtcars_h2o,
lambda_search = TRUE)
```
``` r
mtcars_glm
```
## Model Details:
## ==============
##
## H2ORegressionModel: glm
## Model ID: GLM_model_R_1527265202599_1
## GLM Model: summary
## family link regularization
## 1 gaussian identity Elastic Net (alpha = 0.5, lambda = 0.1013 )
## lambda_search
## 1 nlambda = 100, lambda.max = 10.132, lambda.min = 0.1013, lambda.1se = -1.0
## number_of_predictors_total number_of_active_predictors
## 1 2 2
## number_of_iterations training_frame
## 1 100 frame_rdd_31_ad5c4e88ec97eb8ccedae9475ad34e02
##
## Coefficients: glm coefficients
## names coefficients standardized_coefficients
## 1 Intercept 38.941654 20.090625
## 2 cyl -1.468783 -2.623132
## 3 wt -3.034558 -2.969186
##
## H2ORegressionMetrics: glm
## ** Reported on training data. **
##
## MSE: 6.017684
## RMSE: 2.453097
## MAE: 1.940985
## RMSLE: 0.1114801
## Mean Residual Deviance : 6.017684
## R^2 : 0.8289895
## Null Deviance :1126.047
## Null D.o.F. :31
## Residual Deviance :192.5659
## Residual D.o.F. :29
## AIC :156.2425
``` r
spark_disconnect(sc)
```
## Connecting through Livy
[Livy](https://github.com/cloudera/livy) enables remote connections to
Apache Spark clusters. Connecting to Spark clusters through Livy is
**under experimental development** in `sparklyr`. Please post any
feedback or questions as a GitHub issue as needed.
Before connecting to Livy, you will need the connection information to
an existing service running Livy. Otherwise, to test `livy` in your
local environment, you can install it and run it locally as follows:
``` r
livy_install()
```
``` r
livy_service_start()
```
To connect, use the Livy service address as `master` and `method =
"livy"` in `spark_connect`. Once connection completes, use `sparklyr` as
usual, for instance:
``` r
sc <- spark_connect(master = "http://localhost:8998", method = "livy")
copy_to(sc, iris)
```
## # Source: table<iris> [?? x 5]
## # Database: spark_connection
## Sepal_Length Sepal_Width Petal_Length Petal_Width Species
## <dbl> <dbl> <dbl> <dbl> <chr>
## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa
## 7 4.6 3.4 1.4 0.3 setosa
## 8 5 3.4 1.5 0.2 setosa
## 9 4.4 2.9 1.4 0.2 setosa
## 10 4.9 3.1 1.5 0.1 setosa
## # ... with more rows
``` r
spark_disconnect(sc)
```
Once you are done using `livy` locally, you should stop this service
with:
``` r
livy_service_stop()
```
To connect to remote `livy` clusters that support basic authentication
connect as:
``` r
config <- livy_config(username="<username>", password="<password">)
sc <- spark_connect(master = "<address>", method = "livy", config = config)
spark_disconnect(sc)
```
| 34.97479 | 193 | 0.61605 | eng_Latn | 0.800665 |