[File: README.md | Repo: zeke/clipboard | License: ISC]

# @zeke/clipboard
The scripts I keep in my [~/.clipboard](http://npm.im/dot-clipboard) directory
## Scripts
There's only [one script](/index.js) so far. It writes all text-based clipboard events to a file:
```
» ls -A1 ~/.clipboard/data
2015-05-08-18:28:29:052-Fri.txt
2015-05-08-18:28:36:088-Fri.txt
» cat ~/.clipboard/data/2015-05-08-18:28:36:088-Fri.txt
Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore,
While I nodded, nearly napping, suddenly there came a tapping,
As of some one gently rapping, rapping at my chamber door.
Tis some visitor," I muttered, "tapping at my chamber door —
Only this, and nothing more.
```
Powered by the awesome [dot-clipboard](https://www.npmjs.com/package/dot-clipboard) package.
## Tests
```sh
npm install
npm test
```
## Dependencies
- [shelljs](https://github.com/arturadib/shelljs): Portable Unix shell commands for Node.js
- [strftime](https://github.com/samsonjs/strftime): strftime for JavaScript
## Dev Dependencies
- [standard](https://github.com/feross/standard): JavaScript Standard Style
## License
ISC
_Generated by [package-json-to-readme](https://github.com/zeke/package-json-to-readme)_
[File: _posts/2020-09-15-Sam-Interest-Post.md | Repo: reyabreu/github-pages-with-jekyll | License: MIT]

---
title: "Sam is interested in this"
date: 2020-09-15
---
Sam has talked about creating blogs from static content and he is thrilled about GitHub Pages.
[File: README.md | Repo: hamzalegend/Jaguar | License: Apache-2.0]

# Jaguar
A Game Engine
[File: README.md | Repo: subodhkalika/stweeter | License: MIT]

# MESSAGE
Laravel app init
[File: README.md | Repo: NoSharp/gmsv_reqwest | License: MIT]

# 🐱👤 gmsv_reqwest
This module is a drop-in replacement for Garry's Mod's [`HTTP`](https://wiki.facepunch.com/gmod/Global.HTTP) function, inspired by [`gmsv_chttp`](https://github.com/timschumi/gmod-chttp) created by [timschumi](https://github.com/timschumi).
The module uses the [`reqwest`](https://docs.rs/reqwest/*/reqwest/) crate for dispatching HTTP requests, [`tokio`](https://tokio.rs/) crate for async/thread scheduling runtime and the [`rustls`](https://github.com/ctz/rustls) crate for SSL/TLS.
This module was written in Rust and serves as a decent example on how to write a Garry's Mod binary module in Rust. (Thank you [Willox](https://github.com/willox) for the base!)
## Installation
Download the relevant module for your server's operating system and platform/Gmod branch from the [releases section](https://github.com/WilliamVenner/gmsv_reqwest/releases).
Drop the module into `garrysmod/lua/bin/` in your server's files. If the `bin` folder doesn't exist, create it.
If you're not sure on what operating system/platform your server is running, run this in your server's console:
```lua
lua_run print((system.IsWindows()and"Windows"or system.IsLinux()and"Linux"or"Unsupported").." "..(jit.arch=="x64"and"x86-64"or"x86"))
```
## Usage
To use reqwest in your addons, you can put this snippet at the top of your code, which will fallback to Gmod's default HTTP function if reqwest is not installed on the server.
```lua
if pcall(require, "reqwest") and reqwest ~= nil then
my_http = reqwest
else
my_http = HTTP
end
```
## Custom root certificates:
To add custom root certificates, place them in the "garrysmod/tls_certificates" directory.
The certificates must be X509 and encoded in either PEM or DER. They must also end in the .pem or .der file extension corresponding to their encoding. If there is a problem loading a certificate, it'll be skipped over and a message will be displayed in the console.
[File: _portfolio/image_processing.md | Repo: HatefDastour/hatefdastour.github.io | License: MIT]

---
title: "Image Processing"
excerpt: "Image Processing"
collection: portfolio
---
## Image Classification
* Brain Tumor Detection
* [Modeling: Keras Multi-layer Perceptron (MLP) for Image Classifications](/portfolio/healthcare_analytics_and_modeling/Brain_Tumor_Detection_Keras_Image_Classifications_MLP.html)
* CT Images from Cancer Imaging Archive with Contrast and Patient Age
* [Modeling: Keras Multi-layer Perceptron (MLP) for Image Classifications](/portfolio/healthcare_analytics_and_modeling/CT_Medical_Images_Keras_Image_Classifications_MLP.html)
* Digit Recognizer Dataset
* [Modeling: PyTorch Multi-layer Perceptron (MLP) for Multi-Class Classification](/portfolio/image_processing/Digit_Recognizer_PyTorch_MLP_MultiClass.html)
* [Modeling: PyTorch Multinomial Logistic Regression for Multi-Class Classification](/portfolio/image_processing/Digit_Recognizer_PyTorch_Logistic_Regression_MultiClass.html)
* Fire Dataset
* [Modeling: Keras Multi-layer Perceptron (MLP) for Image Classifications](/portfolio/natural_processes_analytics_and_modeling/Fire_Dataset_Keras_Image_Classifications_MLP.html)
* Histopathologic Cancer Detection
* [Modeling: Keras Multi-layer Perceptron (MLP) for Image Classifications](/portfolio/healthcare_analytics_and_modeling/HCD_Keras_Image_Classifications_MLP.html)
* Natural Scenes Image Classification
* [Modeling: Keras Multi-layer Perceptron (MLP) for Image Classifications](/portfolio/image_processing/Natural_Scenes_Image_Classification_Keras_Image_Classifications_MLP.html)
* Skin Lesion Analysis Towards Melanoma Detection
* [Preprocessing and Exploratory Data Analysis](/portfolio/healthcare_analytics_and_modeling/Melanoma_Detection_Preprocessing_EDA.html)
* [Modeling: Keras Multi-layer Perceptron (MLP) for Image Classifications](/portfolio/healthcare_analytics_and_modeling/Melanoma_Detection_Keras_Image_Classifications_MLP.html)
* Satellite Images Of Hurricane Damage
* [Modeling: Keras Multi-layer Perceptron (MLP) for Image Classifications](/portfolio/natural_processes_analytics_and_modeling/Satellite_Images_of_Hurricane_Damage_Keras_Image_Classifications_MLP.html)
* Sign Language Digits
* [Modeling: PyTorch Multi-layer Perceptron (MLP) for Multi-Class Classification](/portfolio/image_processing/Sign_Language_Digits_PyTorch_MLP_MultiClass.html)
* [Modeling: CatBoost Classifier](/portfolio/image_processing/Sign_Language_Digits_CatBoost_Classifier.html)
## Image Segmentation
* Lungs CT Data
	* [Tensorflow Modified U-Net](/portfolio/healthcare_analytics_and_modeling/Lung_CT_TF_Modified_U_Net.html)
[File: README.md | Repo: 4gl-apps/4gl.uk | License: MIT]

# Links
Self-hosted link shortener, courtesy of: https://github.com/jstayton/suri
### Manage Links
Links are managed through [`src/links.json`](src/links.json), which is seeded
with a few examples to start:
```json
{
"/": "https://github.com/jstayton",
"1": "https://fee.org/articles/the-use-of-knowledge-in-society/",
"tw": "https://twitter.com/kidjustino"
}
```
It couldn't be simpler: the key is the "shortlink" path that gets redirected,
and the value is the target URL. Keys can be as short or as long as you want,
using whatever mixture of characters you want. `/` is a special entry for
redirecting the root path.
Go ahead and make an edit, then commit and push to your repository. The hosting
provider you chose above should automatically build and deploy your change.
That's it!
_Pro tip_: Bookmark the page to
[edit `src/links.json` directly in GitHub](https://github.com/jstayton/suri/edit/master/src/links.json)
(or wherever), and use the default commit message that's populated. Now show me
a link shortener that's easier than that!
### Config
Environment variables are used to set config options. There is only one at this
point:
| Variable | Description | Values | Default |
| --------- | ------------------------------------------------------------------ | -------- | ------- |
| `SURI_JS` | Whether to redirect with JavaScript instead of a `<meta>` refresh. | `1`, `0` | `0` |
### Install Manually
To install Suri somewhere else, or just on your own machine:
1. Fork this repository to create your own copy and clone to your machine.
1. Make sure you have a compatible version of [Node.js](https://nodejs.org/)
(see `engines.node` in [`package.json`](package.json)).
[nvm](https://github.com/nvm-sh/nvm) is the recommended installation method
on your own machine:
```bash
$ nvm install
```
1. Install dependencies with npm:
```bash
$ npm install
```
1. Build the static site:
```bash
$ npm run build
```
1. Deploy the generated `_site` directory to its final destination.
## Development
The following includes a few instructions for developing on Suri. For
11ty-specific details – the static site generator that powers Suri – see their
[docs](https://www.11ty.dev/docs/).
### Install
Follow the "Install Manually" section above to setup on your own machine.
### Start
Start the development server:
```bash
$ npm run dev
```
### Code Style
[Prettier](https://prettier.io/) is set up to enforce a consistent code style.
It's highly recommended to
[add an integration to your editor](https://prettier.io/docs/en/editors.html)
that automatically formats on save.
To run via the command line:
```bash
$ npm run lint
```
## Releasing
After development is done in the `development` branch and is ready for release,
it should be merged into the `master` branch, where the latest release code
lives. [Release It!](https://github.com/release-it/release-it) is then used to
interactively orchestrate the release process:
```bash
$ npm run release
```

[File: README.md | Repo: CMarkJeffcock/lighting-scene | License: CC0-1.0]

# lighting-scene
Lighting Scene
[File: basiclibrary/maps.md | Repo: sadhikari07/java-fundamentals | License: MIT]

Problem summary:
## Analyzing Weather Data
Use the October Seattle weather data above. Iterate through all of the data to find the min and max values. Use a HashSet of type Integer to keep track of all the unique temperatures seen. Finally, iterate from the min temp to the max temp and create a String containing any temperature not seen during the month. Return that String.
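A rough sketch of one way to approach this in Java is shown below; the class name, method name, and output formatting here are placeholders, not necessarily what the linked solution in this repo does.

```java
import java.util.HashSet;
import java.util.Set;

public class WeatherAnalysis {
    // One pass to find min, max, and the set of unique temperatures,
    // then list every whole-degree value in [min, max] never seen that month.
    public static String missingTemps(int[] octoberTemps) {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        Set<Integer> seen = new HashSet<>();
        for (int t : octoberTemps) {
            min = Math.min(min, t);
            max = Math.max(max, t);
            seen.add(t);
        }
        StringBuilder unseen = new StringBuilder();
        for (int t = min; t <= max; t++) {
            if (!seen.contains(t)) {
                unseen.append(t).append(" ");
            }
        }
        return unseen.toString().trim();
    }
}
```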
## Tallying Election
Write a function called tally that accepts a List of Strings representing votes and returns one string to show what got the most votes.
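A minimal sketch of the tally logic in Java might look like the following; the exact wording of the returned string is an assumption rather than the repo's implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ElectionTally {
    // Counts each vote, then reports the choice with the highest count.
    public static String tally(List<String> votes) {
        Map<String, Integer> counts = new HashMap<>();
        for (String vote : votes) {
            counts.merge(vote, 1, Integer::sum);
        }
        String winner = null;
        int best = -1;
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            if (entry.getValue() > best) {
                best = entry.getValue();
                winner = entry.getKey();
            }
        }
        return winner + " got the most votes";
    }
}
```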
Solution Code:
[Link to maps.java](https://github.com/sadhikari07/java-fundamentals/blob/master/basiclibrary/src/main/java/basiclibrary/maps.java)
[Link to mapsTest.java](https://github.com/sadhikari07/java-fundamentals/blob/master/basiclibrary/src/test/java/basiclibrary/mapsTest.java)
[File: vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md | Repo: DrFaust92/terraformer | License: Apache-2.0]

# HCL Changelog
## v2.2.0 (Dec 11, 2019)
### Enhancements
* hcldec: Attribute evaluation (as part of `AttrSpec` or `BlockAttrsSpec`) now captures expression evaluation metadata in any errors it produces during type conversions, allowing for better feedback in calling applications that are able to make use of this metadata when printing diagnostic messages. ([#329](https://github.com/hashicorp/hcl/pull/329))
### Bugs Fixed
* hclsyntax: `IndexExpr`, `SplatExpr`, and `RelativeTraversalExpr` will now report a source range that covers all of their child expression nodes. Previously they would report only the operator part, such as `["foo"]`, `[*]`, or `.foo`, which was problematic for callers using source ranges for code analysis. ([#328](https://github.com/hashicorp/hcl/pull/328))
* hclwrite: Parser will no longer panic when the input includes index, splat, or relative traversal syntax. ([#328](https://github.com/hashicorp/hcl/pull/328))
## v2.1.0 (Nov 19, 2019)
### Enhancements
* gohcl: When decoding into a struct value with some fields already populated, those values will be retained if not explicitly overwritten in the given HCL body, with similar overriding/merging behavior as `json.Unmarshal` in the Go standard library.
* hclwrite: New interface to set the expression for an attribute to be a raw token sequence, with no special processing. This has some caveats, so if you intend to use it please refer to the godoc comments. ([#320](https://github.com/hashicorp/hcl/pull/320))
### Bugs Fixed
* hclwrite: The `Body.Blocks` method was returning the blocks in an undefined order, rather than preserving the order of declaration in the source input. ([#313](https://github.com/hashicorp/hcl/pull/313))
* hclwrite: The `TokensForTraversal` function (and thus in turn the `Body.SetAttributeTraversal` method) was not correctly handling index steps in traversals, and thus producing invalid results. ([#319](https://github.com/hashicorp/hcl/pull/319))
## v2.0.0 (Oct 2, 2019)
Initial release of HCL 2, which is a new implementation combining the HCL 1
language with the HIL expression language to produce a single language
supporting both nested configuration structures and arbitrary expressions.
HCL 2 has an entirely new Go library API and so is _not_ a drop-in upgrade
relative to HCL 1. It's possible to import both versions of HCL into a single
program using Go's _semantic import versioning_ mechanism:
```
import (
hcl1 "github.com/hashicorp/hcl"
hcl2 "github.com/hashicorp/hcl/v2"
)
```
---
Prior to v2.0.0 there was not a curated changelog. Consult the git history
from the latest v1.x.x tag for information on the changes to HCL 1.
[File: hugo-source/content/output/Chennai/MULTI_NEW_MOON_SIDEREAL_MONTH_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2021_monthly/2021-10/2021-10-02.md | Repo: baskar-yahoo/jyotisha | License: MIT]

+++
title = "2021-10-02"
+++
## भाद्रपदः-06-26,कर्कटः-आश्रेषा🌛🌌◢◣कन्या-हस्तः-06-16🌌🌞◢◣इषः-07-10🪐🌞
- Indian civil date: 1943-07-10, Islamic: 1443-02-24 Ṣafar
- संवत्सरः - प्लवः
- वर्षसङ्ख्या 🌛- शकाब्दः 1943, विक्रमाब्दः 2078, कलियुगे 5122
___________________
- 🪐🌞**ऋतुमानम्** — शरदृतुः दक्षिणायनम्
- 🌌🌞**सौरमानम्** — वर्षऋतुः दक्षिणायनम्
- 🌛**चान्द्रमानम्** — वर्षऋतुः भाद्रपदः
___________________
## खचक्रस्थितिः
- |🌞-🌛|**तिथिः** — कृष्ण-एकादशी►23:11; कृष्ण-द्वादशी►
- **वासरः**—शनिः
- 🌌🌛**नक्षत्रम्** — आश्रेषा►27:32*; मघा► (सिंहः)
- 🌌🌞**सौर-नक्षत्रम्** — हस्तः►
___________________
- 🌛+🌞**योगः** — सिद्धः►17:41; साध्यः►
- २|🌛-🌞|**करणम्** — बवः►11:13; बालवः►23:11; कौलवः►
- 🌌🌛- **चन्द्राष्टम-राशिः**—मकरः
## दिनमान-कालविभागाः
- 🌅**सूर्योदयः**—06:01-11:58🌞️-17:54🌇
- 🌛**चन्द्रास्तमयः**—14:57; **चन्द्रोदयः**—02:53(+1)
___________________
- 🌞⚝भट्टभास्कर-मते वीर्यवन्तः— **प्रातः**—06:01-07:30; **साङ्गवः**—08:59-10:29; **मध्याह्नः**—11:58-13:27; **अपराह्णः**—14:56-16:25; **सायाह्नः**—17:54-19:25
- 🌞⚝सायण-मते वीर्यवन्तः— **प्रातः-मु॰1**—06:01-06:49; **प्रातः-मु॰2**—06:49-07:36; **साङ्गवः-मु॰2**—09:11-09:59; **पूर्वाह्णः-मु॰2**—11:34-12:21; **अपराह्णः-मु॰2**—13:56-14:44; **सायाह्णः-मु॰2**—16:19-17:07; **सायाह्णः-मु॰3**—17:07-17:54
- 🌞कालान्तरम्— **ब्राह्मं मुहूर्तम्**—04:24-05:13; **मध्यरात्रिः**—22:45-01:10
___________________
- **राहुकालः**—08:59-10:29; **यमघण्टः**—13:27-14:56; **गुलिककालः**—06:01-07:30
___________________
- **शूलम्**—प्राची दिक् (►09:11); **परिहारः**–दधि
___________________
[File: _posts/2020-04-13-PIL.markdown | Repo: notgeneralyida/notgeneralyida.github.io | License: Apache-2.0]

---
layout: post
title: "PIL"
subtitle: " Lesson 1 "
date: 2020-04-13 13:07:00
author: "Yida"
header-img: "img/post-bg-2015.jpg"
catalog: true
tags:
- Python
- PIL
---
> “Yeah It's on. ”
## Introduction
### Colorize
Definition: `ImageOps.colorize(image, black, white) => image`
Meaning: turns a grayscale image into a colorized image. The `black` and `white` arguments take RGB tuples or color names. The function computes a color mapping so that all black in the original image becomes the first color and all white becomes the second color. The `image` argument must be in mode 'L'.
```python
from PIL import Image, ImageOps

path = ''  # path to an RGB image file
im02 = Image.open(path)
# Split the RGB image into its bands; each band is a single-channel ('L') image
r, g, b = im02.split()
print(r.mode)
# Map dark pixels of the blue band to red and bright pixels to green
im_b = ImageOps.colorize(b, (255, 0, 0), (0, 255, 0))
```
[File: Alexa-Reviews-Analysis/README.md | Repo: krunal3kapadiya/kaggle-kernel | License: Apache-2.0]

# Alexa-Reviews-Analysis
[File: README.md | Repo: rgmyr/litholog | License: Apache-2.0]

# LithoLog
See documentation at https://litholog.readthedocs.io
### Overview
`litholog` is focused on providing a framework to digitize, store, plot, and analyze sedimentary graphic logs (example log shown below).
Graphic logs are the most common way geologists characterize and communicate the composition and variability of clastic sedimentary successions; through a simple drawing, a graphic log imparts complex geological concepts (e.g., the Bouma turbidite sequence or a shoreface parasequence). The term ‘graphic log’ originates from a geologist graphically drawing (i.e., ‘logging’) an outcrop or core; other synonymous terms include measured section and stratigraphic column.
`litholog` is a package-level extension of [agile-geoscience/striplog](https://github.com/agile-geoscience/striplog), with additional features that focus on lithology, and an API that is geared toward facilitating machine learning and quantitative analysis.
<img src="/images/example_log.png" alt="Graphic log example" width="600" />
As you can see above, litholog faithfully reproduces graphic log data, but errors or omissions when digitizing are propagated. Care during digitizing is of the utmost importance, as manual manipulation of litholog data (e.g., grain size) is not recommended.
### Data Structures
The package provides two primary data structures:
- `Bed`
- stores data from one bed (e.g., top, base, lithology, thickness, grain size, etc).
- is equivalent to a `striplog.Interval`
- `BedSequence`
- stores a collection of `Beds` in stratigraphic order
- is equivalent to a `striplog.Striplog`
### Utilities
Several utilities for working with graphic logs are included with `litholog`:
- transformations for grain-size data from millimeter (mm) to log2 (a.k.a. *Psi*) units, which are far easier to work with than mm (a sketch of this conversion appears after this list).
- calculation of the following metrics at the `BedSequence` level:
- net-to-gross
- amalgamation ratio
  - pseudo gamma ray log
- Hurst statistics (for determining facies clustering)
- default lithology colors for Beds
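As a rough illustration of the grain-size transform, assuming the convention psi = log2(grain size in mm); the function names and sign convention here are assumptions, so check litholog's own utilities before relying on them:

```python
import numpy as np

def mm_to_psi(mm):
    """Convert grain size in millimeters to log2 (psi) units."""
    return np.log2(mm)

def psi_to_mm(psi):
    """Convert psi units back to millimeters."""
    return np.power(2.0, psi)

print(mm_to_psi(0.25))   # -2.0 (fractional mm values become simple negative psi values)
print(psi_to_mm(-2.0))   # 0.25
```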
### Data
The data provided with this demo come from two papers, and all logs were digitized using the Matlab digitizer included with this release.
- 7 logs from Jobe et al. 2012 ([html](https://doi.org/10.1111/j.1365-3091.2011.01283.x), [pdf](https://www.dropbox.com/s/sgzmc1exd5vjd3h/2012%20Jobe%20et%20al%20Sed-%20Climbing%20ripple%20successions%20in%20turbidite%20systems.pdf?dl=0))
- 6 logs from Jobe et al. 2010 ([html](https://doi.org/10.2110/jsr.2010.092), [pdf](https://www.dropbox.com/s/zo12v3ixm86yt7e/2010%20Jobe%20et%20al%20JSR%20-%20Submarine%20channel%20asymmetry.pdf?dl=0)).
#### To-do
- Look into binary save/load. CSV is pretty slow, and pickle creates weird behaviors.
[File: README.md | Repo: bcstestit/nodejs_webapp_ghatestappsb1 | License: MIT]

# Deploying a Node.js Web App using GitHub actions
In this lab, you will learn to deploy a Node.js app to Azure App Service and set up a CI/CD workflow using GitHub Actions
## Overview
**GitHub Actions** gives you the flexibility to build an automated software development lifecycle workflow. You can write individual tasks ("Actions") and combine them to create a custom workflow. Workflows are configurable automated processes that you can set up in your repository to build, test, package, release, or deploy any project on GitHub.
With **GitHub Actions** you can build end-to-end continuous integration (CI) and continuous deployment (CD) capabilities directly in your repository.
### What’s covered in this lab
In this lab, you will:
1. How to set up a workflow with GitHub Actions
1. Create a workflow with GitHub Actions to add CI/CD to your Node.js web app
### Prerequisites
1. You will need a **GitHub** account. If you do not have one, you can sign up for free [here](https://github.com/join)
1. **Microsoft Azure Account**: You will need a valid and active Azure account for this lab. If you do not have one, you can sign up for a [free trial](https://azure.microsoft.com/en-us/free/).
1. Your Windows machine should have [Node.js and npm](https://nodejs.org/en/download), Visual Studio Code, and the VS Code [Azure App Service extension](vscode:extension/ms-azuretools.vscode-azureappservice) installed, which you can use to create, manage, and deploy Linux Web Apps on the Azure Platform as a Service (PaaS).
1. Once the extension is installed, log into your Azure account.
For this lab, you will be using the **VSCode GitHub Universe HOL** subscription.
### Setting up the GitHub repository
**Node-Express** is an example Node.js web app. Let us fork this repository and clone it into VS Code to get started with this lab.
## Create an Azure App Service
Let's create this as a web app hosted in Azure.
1. Follow the tutorial [Azure Web Apps Quickstart](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-get-started-nodejs)
1. Click on the `+` icon to create a new app service under the **VSCode GitHub Universe HOL** subscription.

1. Give your webapp a unique name (we recommend calling it **node_express-{your name}**)
1. Select **Linux** as your OS and **Node** as your runtime.
1. Browse to your new site!
## Set up CI/CD with GitHub Actions
We'll use GitHub actions to automate our deployment workflow for this web app.
1. In the portal, Overview page, click on "Get publish profile". A publish profile is a kind of deployment credential, useful when you don't own the Azure subscription. Open the downloaded settings file in VS Code and copy the contents of the file.

1. We will now add the publish profile as a secret associated with this repo. On the GitHub repository, click on the "Settings" tab.

1. Go to "Secrets". Create a new secret called "AZURE_WEBAPP_PUBLISH_PROFILE" and paste the contents from the settings file.

1. Now click on "Actions" in the top bar and create a new workflow.

1. Find the **Deploy Node.js to Azure Web App** template and select "Set up this workflow".

1. Let's get into the details of what this workflow is doing.
- **Workflow Triggers**: Your workflow is set up to run on push events to the branch "master"
```yaml
on:
push:
branches:
- master
```
For more information, see [Events that trigger workflows](https://help.github.com/articles/events-that-trigger-workflows).
    - **Setting up Environment Variables:** GitHub Actions workflows can be parameterized using environment variables. For this workflow, configure the value of AZURE_WEBAPP_NAME and leave the defaults as-is for the AZURE_WEBAPP_PACKAGE_PATH and NODE_VERSION variables.
```yaml
env:
AZURE_WEBAPP_NAME: your-app-name # set this to your application's name
AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
NODE_VERSION: '10.x' # set this to the node version to use
```
    - **Running your jobs on hosted runners:** GitHub Actions provides hosted runners for Linux, Windows, and macOS. Additionally, they have announced a [beta release of self-hosted runners](https://github.blog/2019-11-05-self-hosted-runners-for-github-actions-is-now-in-beta/), which you can check out if interested.
      We specify a hosted runner in our workflow as shown below.
```yaml
jobs:
build-and-deploy:
name: Build and Deploy
runs-on: ubuntu-latest
```
- **Using an action**: Actions are reusable units of code that can be built and distributed by anyone on GitHub. To use an action, you must specify the repository that contains the action.
```yaml
steps:
- uses: actions/checkout@master
- name: Use Node.js ${{ env.NODE_VERSION }}
uses: actions/setup-node@v1
with:
node-version: ${{ env.NODE_VERSION }}
```
    - **Running a command**: You can run commands on the job's virtual machine (runner). We run the npm commands below to install dependencies, build, and test our application.
```yaml
- name: npm install, build, and test
run: |
npm install
npm run build --if-present
npm run test --if-present
```
>For workflow syntax for GitHub Actions see [here](https://help.github.com/en/github/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions)
    - **Deploy to Azure web app**: Change the `app-name` to the name of your web app. We are using the [GitHub Action to deploy to an Azure Web App](https://github.com/Azure/webapps-deploy) to deploy to your Azure Web App with the publish profile stored in GitHub secrets, which you created previously.
```yaml
- name: 'Deploy to Azure WebApp'
uses: azure/webapps-deploy@v1
with:
app-name: ${{ env.AZURE_WEBAPP_NAME }}
publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
```
**For more information on GitHub Actions for Azure, refer to https://github.com/Azure/Actions**
**For more samples to get started with GitHub Actions workflows to deploy to Azure, refer to https://github.com/Azure/actions-workflow-samples**

- Once you're done editing the workflow by configuring the AZURE_WEBAPP_NAME, click on "Start commit". Committing the file will trigger the workflow.
- You can go back to the Actions tab, click on your workflow, and see that the workflow is queued or being deployed. Wait for the job to complete successfully.

## Test out your app!
1. Browse your Node app by pasting the URL of your Azure web app: https://AZURE_WEBAPP_NAME.azurewebsites.net
1. Make any text change by editing the node_express/views/index.pug file and commit the change. Browse to the **Actions** tab in GitHub to view the live logs of your Action workflow which got triggered with the push of the commit.
1. Once the workflow successfully completes execution, browse back to your website to visualise the new changes you introduced!
[File: docs/visio/x-cell-geometry-section.md | Repo: isabella232/office-developer-client-docs.ja-JP | Licenses: CC-BY-4.0, MIT]

---
title: 'X Cell (Geometry Section)'
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: reference
f1_keywords:
- Vis_DSS.chm1135
ms.localizationpriority: medium
ms.assetid: 2416b323-e084-18e1-c9be-a797078dfab9
description: Represents the x-coordinate of a shape in local coordinates. The following table describes what the X cell represents in each row.
ms.openlocfilehash: 1435e07bc2c2d02f971f0fa083638acb903d0114
ms.sourcegitcommit: a1d9041c20256616c9c183f7d1049142a7ac6991
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/24/2021
ms.locfileid: "59553593"
---
# <a name="x-cell-geometry-section"></a>[X] セル ([Geometry] セクション)
ローカル座標の *図形* の x 座標を表します。 次の表に、各行で [X] セルが示す内容を説明します。
|行|説明|
|:-----|:-----|
|[MoveTo](moveto-row-geometry-section.md) <br/> | MoveTo 行がセクションの最初の行である場合、X セルはパスの最初の頂点の *x* 座標を表します。 MoveTo 行が 2 行の間に表示される場合、X セルはパスのブレーク後の最初の頂点の *x* 座標を表します。 <br/> |
|[LineTo](lineto-row-geometry-section.md) <br/> | 直線 *セグメント* の終了頂点の x 座標。 <br/> |
|[ArcTo](arcto-row-geometry-section.md) <br/> | 円弧 *の* 終了頂点の x 座標。 <br/> |
|[楕円ArcTo](ellipticalarcto-row-geometry-section.md) <br/> | 楕 *円円弧* の終了頂点の x 座標。 <br/> |
|[ポリラインTo](polylineto-row-geometry-section.md) <br/> | ポリライン *の* 終了頂点の x 座標。 <br/> |
|[NURBSTo](nurbsto-row-geometry-section.md) <br/> | 非 *ユニフォーム* 有理 B スプライン (NURBS) の最後のコントロール ポイントの x 座標。 <br/> |
|[SplineStart](splinestart-row-geometry-section.md) <br/> | スプライン *の* 2 番目のコントロール ポイントの x 座標。 <br/> |
|[SplineKnot](splineknot-row-geometry-section.md) <br/> | コントロール *ポイントの x* 座標。 <br/> |
|[InfiniteLine](infiniteline-row-geometry-section.md) <br/> | 無限 *線* 上の点の x 座標。 <br/> |
|[楕円](ellipse-row-geometry-section.md) <br/> | 楕 *円* の中心の x 座標。 <br/> |
## <a name="remarks"></a>注釈
別の数式または **CellsU** プロパティを使用したプログラムから、名前によって [X] セルへの参照を取得するには、次の値を使用します。
|||
|:-----|:-----|
| セル名: <br/> | Geometry *i* .x *j* *i と* j = *<* 1>、2、3... <br/> |
|| Geometry *i* .X1 (InfiniteLine 行と楕円行) i *=* <1>、2、3... <br/> |
プログラムから、インデックスによって [X] セルへの参照を取得するには、**CellsSRC** プロパティを使用し、次の引数を指定します。
|||
|:-----|:-----|
| セクション インデックス: <br/> |**visSectionFirstComponent** + *i* *=* 0, 1, 2... <br/> |
| 行インデックス: <br/> |**visRowVertex** + *j* は *j* = 0、1、2..です。 <br/> |
||**visRowVertex** ([InfiniteLine] 行および [Ellipse] 行) <br/> |
| セル インデックス: <br/> |**visX** ([MoveTo]、[LineTo]、[ArcTo]、[EllipticalArcTo]、[NURBSTo]、[Polyline]、[SplineStart]、および [SplineKnot] 行) <br/> |
||**visInfiniteLineX1** ([InfiniteLine] 行) <br/> |
||**visEllipseCenterX** ([Ellipse] 行) <br/> |
[File: articles/search/cognitive-search-tutorial-blob.md | Repo: andreatosato/azure-docs.it-it | Licenses: CC-BY-4.0, MIT]

---
title: Tutorial for calling the cognitive search APIs in Azure Search | Microsoft Docs
description: This tutorial walks through an example of data extraction, natural language, and AI image processing in Azure Search indexing for data extraction and transformation.
manager: pablocas
author: luiscabrer
services: search
ms.service: search
ms.devlang: NA
ms.topic: tutorial
ms.date: 07/11/2018
ms.author: luisca
ms.openlocfilehash: 35295f00b9264e4b6fba2ff9d293772c22b91c50
ms.sourcegitcommit: df50934d52b0b227d7d796e2522f1fd7c6393478
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 07/12/2018
ms.locfileid: "38991920"
---
# <a name="tutorial-learn-how-to-call-cognitive-search-apis-preview"></a>Esercitazione: Informazioni su come chiamare le API di ricerca cognitiva (anteprima)
In questa esercitazione vengono illustrati i meccanismi di programmazione dell'arricchimento dei dati in Ricerca di Azure usando *competenze cognitive*. Le competenze cognitive sono l'elaborazione del linguaggio naturale e le operazioni di analisi delle immagini che consentono di estrarre testo e rappresentazioni di testo di un'immagine, rilevare la lingua, le entità, le frasi chiave e altro ancora. Il risultato finale è contenuto aggiuntivo elaborato in un indice di Ricerca di Azure, creato da una pipeline di indicizzazione di ricerca cognitiva.
In questa esercitazione verranno eseguite chiamate API REST per eseguire le attività seguenti:
> [!div class="checklist"]
> * Creare una pipeline di indicizzazione per l'arricchimento dei dati di esempio nel percorso verso un indice
> * Applicare le competenze predefinite: riconoscimento delle entità, rilevamento della lingua, modifica del testo ed estrazione delle frasi chiave
> * Informazioni su come concatenare le competenze eseguendo il mapping di input a output in un set di competenze
> * Eseguire le richieste ed esaminare i risultati
> * Reimpostare l'indice e gli indicizzatori per ulteriore sviluppo
L'output è un indice di ricerca full-text in Ricerca di Azure. È possibile migliorare l'indice con altre funzionalità standard, ad esempio [sinonimi](search-synonyms.md), [profili di punteggio](https://docs.microsoft.com/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [analizzatori](search-analyzers.md) e [filtri](search-filters.md).
Se non si ha una sottoscrizione di Azure, creare un [account gratuito](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) prima di iniziare.
## <a name="prerequisites"></a>Prerequisiti
Se la ricerca cognitiva è una novità, è possibile leggere ["What is cognitive search?"](cognitive-search-concept-intro.md) (Che cos'è la ricerca cognitiva) per acquisire familiarità oppure provare la [guida introduttiva del portale](cognitive-search-quickstart-blob.md) per un'introduzione ai concetti importanti.
Per effettuare chiamate REST in Ricerca di Azure, usare PowerShell o uno strumento di test Web, come Telerik Fiddler o Postman per formulare richieste HTTP. Se questi strumenti non sono già noti, vedere [Esplorare le API REST di Ricerca di Azure con Fiddler o Postman](search-fiddler.md).
Usare il [portale di Azure](https://portal.azure.com/) per creare servizi usati in un flusso di lavoro end-to-end.
### <a name="set-up-azure-search"></a>Configurare Ricerca di Azure
Prima di tutto, iscriversi al servizio Ricerca di Azure.
1. Passare al [portale di Azure](https://portal.azure.com) e accedere con l'account di Azure.
1. Fare clic su **Crea una risorsa**, cercare Ricerca di Azure e fare clic su **Crea**. Vedere [Creare un servizio di Ricerca di Azure nel portale](search-create-service-portal.md) se è la prima volta che si configura un servizio di ricerca.

1. For Resource group, create a resource group to contain all the resources created in this tutorial. This makes it easier to clean up the resources after you have finished the tutorial.
1. For Location, choose either **South Central US** or **West Europe**. Currently, the preview is available only in these regions.
1. For Pricing tier, you can create a **Free** service to complete tutorials and quickstarts. For deeper analysis using your own data, create a [paid service](https://azure.microsoft.com/pricing/details/search/) such as **Basic** or **Standard**.
   A Free service is limited to 3 indexes, a maximum blob size of 16 MB, and 2 minutes of indexing, which is insufficient for exercising the full capabilities of cognitive search. To review the limits for the different tiers, see [Service limits](search-limits-quotas-capacity.md).
   > [!NOTE]
   > Cognitive search is in public preview. Skillset execution is currently available in all tiers, including free. Pricing for this capability will be announced at a later date.
1. Pin the service to your dashboard for fast access to service information.

1. After the service is created, collect the following information: the **URL** from the Overview page, and the **api-key** (primary or secondary) from the Keys page.

### <a name="set-up-azure-blob-service-and-load-sample-data"></a>Configurare il servizio BLOB di Azure e caricare i dati di esempio
La pipeline di arricchimento effettua il pull da origini dati di Azure. I dati di origine devono provenire da un tipo di origine dati supportato di un [indicizzatore di Ricerca di Azure](search-indexer-overview.md). Per questo esercizio viene usato l'archivio BLOB per presentare più tipi di contenuto.
1. [Scaricare i dati di esempio](https://1drv.ms/f/s!As7Oy81M_gVPa-LCb5lC_3hbS-4). I dati di esempio sono costituiti da un piccolo set di file di tipi diversi.
1. Effettuare l'iscrizione per l'archivio BLOB di Azure, creare un account di archiviazione, accedere a Storage Explorer e creare un contenitore denominato `basicdemo`. Vedere la [guida introduttiva ad Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md) per istruzioni su tutti i passaggi.
1. Da Azure Storage Explorer, nel contenitore `basicdemo` creato, fare clic su **Carica** per caricare i file di esempio.
1. Dopo aver caricato i file di esempio, ottenere il nome del contenitore e una stringa di connessione per l'archivio BLOB. È possibile farlo passando all'account di archiviazione nel portale di Azure. In **Chiavi di accesso** copiare il contenuto del campo **Stringa di connessione**.
La stringa di connessione deve essere un URL con un aspetto simile al seguente:
```http
DefaultEndpointsProtocol=https;AccountName=cogsrchdemostorage;AccountKey=<your account key>;EndpointSuffix=core.windows.net
```
   There are other ways to specify the connection string, such as a shared access signature. For more information about data source credentials, see [Indexing Azure Blob Storage](search-howto-indexing-azure-blob-storage.md#Credentials).
## <a name="create-a-data-source"></a>Creare un'origine dati
Ora che i servizi e i file di origine sono pronti, iniziare ad assemblare i componenti della pipeline di indicizzazione. Iniziare con un [oggetto origine dati](https://docs.microsoft.com/rest/api/searchservice/create-data-source) che indica a Ricerca di Azure come recuperare i dati di origine esterni.
Per questa esercitazione, usare l'API REST e uno strumento che consenta di formulare e inviare richieste HTTP, ad esempio PowerShell, Postman o Fiddler. Nell'intestazione della richiesta specificare il nome di servizio usato durante la creazione del servizio Ricerca di Azure e l'api-key generata per il servizio di ricerca. Nel corpo della richiesta specificare il nome e la stringa di connessione per il contenitore BLOB.
### <a name="sample-request"></a>Richiesta di esempio
```http
POST https://[service name].search.windows.net/datasources?api-version=2017-11-11-Preview
Content-Type: application/json
api-key: [admin key]
```
#### <a name="request-body-syntax"></a>Sintassi del corpo della richiesta
```json
{
"name" : "demodata",
"description" : "Demo files to demonstrate cognitive search capabilities.",
"type" : "azureblob",
"credentials" :
{ "connectionString" :
"DefaultEndpointsProtocol=https;AccountName=<your account name>;AccountKey=<your account key>;"
},
"container" : { "name" : "<your blob container name>" }
}
```
Send the request. The web test tool should return a status code 201 confirming that the operation completed successfully.
Because this is your first request, check in the Azure portal to confirm that the data source was created in Azure Search. On the search service dashboard page, verify that the Data Sources tile shows a new item. You might need to wait a few minutes for the portal page to refresh.

If you get a 403 or 404 error, check the construction of the request: `api-version=2017-11-11-Preview` should be on the endpoint, `api-key` should be in the header after `Content-Type`, and its value must be valid for a search service. You can reuse the header for the remaining steps in this tutorial.
> [!TIP]
> At this point, before doing much more, it is useful to verify that the search service is running in one of the supported locations offering the preview feature, namely South Central US or West Europe.
## <a name="create-a-skillset"></a>Create a skillset
In this step, you define a set of enrichment steps to apply to your data. Each enrichment step is called a *skill*, and the set of enrichment steps a *skillset*. This tutorial uses [predefined cognitive skills](cognitive-search-predefined-skills.md) for the skillset:
+ [Language Detection](cognitive-search-skill-language-detection.md) to identify the language of the content.
+ [Text Split](cognitive-search-skill-textsplit.md) to split large content into smaller chunks before calling the key phrase extraction skill. Key phrase extraction accepts inputs of 50,000 characters or less. Some of the sample files need to be split up to stay within this limit.
+ [Named Entity Recognition](cognitive-search-skill-named-entity-recognition.md) to extract the names of organizations from the content in the blob container.
+ [Key Phrase Extraction](cognitive-search-skill-keyphrases.md) to extract the top key phrases.
### <a name="sample-request"></a>Sample request
Before making this REST call, remember to replace the service name and the admin key in the request below if your tool does not preserve the request header between calls.
This request creates a skillset. Refer to the skillset name ```demoskillset``` for the rest of this tutorial.
```http
PUT https://[servicename].search.windows.net/skillsets/demoskillset?api-version=2017-11-11-Preview
api-key: [admin key]
Content-Type: application/json
```
#### <a name="request-body-syntax"></a>Sintassi del corpo della richiesta
```json
{
"description":
"Extract entities, detect language and extract key-phrases",
"skills":
[
{
"@odata.type": "#Microsoft.Skills.Text.NamedEntityRecognitionSkill",
"categories": [ "Organization" ],
"defaultLanguageCode": "en",
"inputs": [
{
"name": "text", "source": "/document/content"
}
],
"outputs": [
{
"name": "organizations", "targetName": "organizations"
}
]
},
{
"@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
"inputs": [
{
"name": "text", "source": "/document/content"
}
],
"outputs": [
{
"name": "languageCode",
"targetName": "languageCode"
}
]
},
{
"@odata.type": "#Microsoft.Skills.Text.SplitSkill",
"textSplitMode" : "pages",
"maximumPageLength": 4000,
"inputs": [
{
"name": "text",
"source": "/document/content"
},
{
"name": "languageCode",
"source": "/document/languageCode"
}
],
"outputs": [
{
"name": "textItems",
"targetName": "pages"
}
]
},
{
"@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
"context": "/document/pages/*",
"inputs": [
{
"name": "text", "source": "/document/pages/*"
},
{
"name":"languageCode", "source": "/document/languageCode"
}
],
"outputs": [
{
"name": "keyPhrases",
"targetName": "keyPhrases"
}
]
}
]
}
```
Send the request. The web test tool should return a status code 201 confirming that the operation completed successfully.
#### <a name="about-the-request"></a>About the request
Notice how the key phrase extraction skill is applied for each page. Setting the context to ```"document/pages/*"``` runs this enrichment for each member of the document/pages array (for each page in the document).
Each skill executes on the content of the document. During processing, Azure Search cracks each document to read content from different file formats. Text originating in the source file is placed into a generated ```content``` field, one for each document. Therefore, set the input as ```"/document/content"```.
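To make the path syntax concrete, the enrichment tree for a single document can be pictured roughly like this (an illustrative sketch only, not literal output from the service):
```json
{
  "document": {
    "content": "full text cracked from the source blob ...",
    "languageCode": "en",
    "organizations": [ "Organization A", "Organization B" ],
    "pages": [
      { "keyPhrases": [ "first key phrase", "second key phrase" ] },
      { "keyPhrases": [ "another key phrase" ] }
    ]
  }
}
```
Paths such as ```"/document/languageCode"``` and ```"/document/pages/*/keyPhrases/*"``` select nodes in this tree.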
A graphical representation of the skillset is shown below.

Outputs can be mapped to an index, used as input to a downstream skill, or both, as is the case with the language code. In the index, a language code is useful for filtering. As an input, the language code is used by the text analysis skills to select the linguistic rules for word breaking.
For more information about the fundamentals of skillsets, see [How to create a skillset](cognitive-search-defining-skillset.md).
## <a name="create-an-index"></a>Create an index
In this section, you define the index schema by specifying the fields to include in the search index and the search attributes for each field. Fields have a type and can take attributes that determine how the field is used (searchable, sortable, and so forth). Field names in the index do not have to match the field names in the source exactly. In a later step, you add field mappings in an indexer to connect source and destination fields. For this step, define the index using the field naming conventions relevant to your search application.
This exercise uses the following fields and field types:
| field-names: | id | content | languageCode | keyPhrases | organizations |
|--------------|----------|-------|----------|--------------------|-------------------|
| field-types: | Edm.String|Edm.String| Edm.String| List<Edm.String> | List<Edm.String> |
### <a name="sample-request"></a>Richiesta di esempio
Prima di effettuare questa chiamata REST, ricordarsi di sostituire il nome del servizio e la chiave di amministrazione nella richiesta di seguito, se lo strumento non consente di mantenere l'intestazione della richiesta tra le chiamate.
Questa richiesta crea un indice. Usare il nome dell'indice ```demoindex``` per il resto di questa esercitazione.
```http
PUT https://[servicename].search.windows.net/indexes/demoindex?api-version=2017-11-11-Preview
api-key: [api-key]
Content-Type: application/json
```
#### <a name="request-body-syntax"></a>Sintassi del corpo della richiesta
```json
{
"fields": [
{
"name": "id",
"type": "Edm.String",
"key": true,
"searchable": true,
"filterable": false,
"facetable": false,
"sortable": true
},
{
"name": "content",
"type": "Edm.String",
"sortable": false,
"searchable": true,
"filterable": false,
"facetable": false
},
{
"name": "languageCode",
"type": "Edm.String",
"searchable": true,
"filterable": false,
"facetable": false
},
{
"name": "keyPhrases",
"type": "Collection(Edm.String)",
"searchable": true,
"filterable": false,
"facetable": false
},
{
"name": "organizations",
"type": "Collection(Edm.String)",
"searchable": true,
"sortable": false,
"filterable": false,
"facetable": false
}
]
}
```
Send the request. The web test tool should return a status code 201 confirming that the operation completed successfully.
To learn more about defining an index, see [Create Index (Azure Search REST API)](https://docs.microsoft.com/rest/api/searchservice/create-index).
## <a name="create-an-indexer-map-fields-and-execute-transformations"></a>Create an indexer, map fields, and execute transformations
So far you have created a data source, a skillset, and an index. These three components become part of an [indexer](search-indexer-overview.md) that brings all the data together in a single multi-phased operation. To tie these together in an indexer, you must define field mappings. Field mappings are part of the indexer definition and execute the transformations when you submit the request.
For indexing without enrichment, the indexer definition includes an optional *fieldMappings* section if field names or data types do not match exactly, or if you want to use a function.
For cognitive search workloads with an enrichment pipeline, the indexer requires *outputFieldMappings*. These mappings are used when an internal process (the enrichment pipeline) is the source of the field values. Behaviors unique to *outputFieldMappings* include the ability to handle complex types created as part of enrichment (through the shaper skill). Also, there can be many elements per document (for example, multiple organizations in a document). The *outputFieldMappings* construct can tell the system to flatten collections of elements into a single record.
### <a name="sample-request"></a>Sample request
Before making this REST call, remember to replace the service name and the admin key in the request below if your tool does not preserve the request header between calls.
Also provide the name of the indexer. You can refer to it as ```demoindexer``` for the rest of this tutorial.
```http
PUT https://[servicename].search.windows.net/indexers/demoindexer?api-version=2017-11-11-Preview
api-key: [api-key]
Content-Type: application/json
```
#### <a name="request-body-syntax"></a>Sintassi del corpo della richiesta
```json
{
"name":"demoindexer",
"dataSourceName" : "demodata",
"targetIndexName" : "demoindex",
"skillsetName" : "demoskillset",
"fieldMappings" : [
{
"sourceFieldName" : "metadata_storage_path",
"targetFieldName" : "id",
"mappingFunction" :
{ "name" : "base64Encode" }
},
{
"sourceFieldName" : "content",
"targetFieldName" : "content"
}
],
"outputFieldMappings" :
[
{
"sourceFieldName" : "/document/organizations",
"targetFieldName" : "organizations"
},
{
"sourceFieldName" : "/document/pages/*/keyPhrases/*",
"targetFieldName" : "keyPhrases"
},
{
"sourceFieldName": "/document/languageCode",
"targetFieldName": "languageCode"
}
],
"parameters":
{
"maxFailedItems":-1,
"maxFailedItemsPerBatch":-1,
"configuration":
{
"dataToExtract": "contentAndMetadata",
"imageAction": "generateNormalizedImages"
}
}
}
```
Inviare la richiesta. Lo strumento di test Web dovrebbe restituire un codice di stato 201 a conferma del corretto completamento dell'elaborazione.
Il completamento di questa operazione può richiedere alcuni minuti. Anche se il set di dati è piccolo, le competenze analitiche prevedono un utilizzo elevato delle risorse di calcolo. Alcune competenze, ad esempio l'analisi delle immagini, hanno un'esecuzione prolungata.
> [!TIP]
> La creazione di un indicizzatore richiama la pipeline. Eventuali problemi a raggiungere i dati, per il mapping di input e output o nell'ordine delle operazioni vengono visualizzati in questa fase. Per eseguire di nuovo la pipeline con modifiche per codice o script, potrebbe essere necessario eliminare prima gli oggetti. Per altre informazioni, vedere [Reimpostare ed eseguire di nuovo](#reset).
### <a name="explore-the-request-body"></a>Esplorare il corpo della richiesta
Lo script imposta ```"maxFailedItems"``` su -1, che indica al motore di indicizzazione di ignorare gli errori durante l'importazione dei dati. Ciò è utile dato che l'origine dati di esempio include pochi documenti. Per un'origine dati più grande sarebbe necessario impostare un valore maggiore di 0.
Si noti inoltre l'istruzione ```"dataToExtract":"contentAndMetadata"``` nei parametri di configurazione. Questa istruzione indica all'indicizzatore di estrarre automaticamente il contenuto dai vari formati di file, oltre ai metadati correlati a ogni file.
Quando viene estratto il contenuto, è possibile impostare ```ImageAction``` per estrarre il testo dalle immagini trovate nell'origine dati. ```"ImageAction":"generateNormalizedImages"``` indica all'indicizzatore di estrarre il testo dalle immagini (ad esempio, la parola "stop" da un segnale di stop del traffico) e incorporarlo come parte del campo del contenuto. Questo comportamento si applica sia alle immagini incorporate nei documenti, ad esempio un'immagine all'interno di un PDF, che alle immagini trovate nell'origine dati, ad esempio un file JPG.
In questa versione di anteprima ```"generateNormalizedImages"``` è l'unico valore valido per ```"ImageAction"```.
## <a name="check-indexer-status"></a>Controllare lo stato dell'indicizzatore
Dopo averlo definito, l'indicizzatore viene eseguito automaticamente quando si invia la richiesta. A seconda delle competenze cognitive definite, l'indicizzazione può richiedere più tempo del previsto. Per scoprire se l'indicizzatore è ancora in esecuzione, inviare la richiesta seguente per controllare lo stato dell'indicizzatore.
```http
GET https://[servicename].search.windows.net/indexers/demoindexer/status?api-version=2017-11-11-Preview
api-key: [api-key]
Content-Type: application/json
```
La risposta indica se l'indicizzatore è in esecuzione. Dopo il termine dell'indicizzazione, usare un'altra richiesta HTTP GET per l'endpoint STATUS (come sopra) per visualizzare i report di eventuali errori e avvisi che si sono verificati durante l'arricchimento.
Gli avvisi sono comuni con alcune combinazioni di file di origine e competenze e non sempre indicano un problema. In questa esercitazione, gli avvisi sono innocui (ad esempio, nessun input di testo dai file JPEG). È possibile esaminare la risposta dello stato per ottenere informazioni dettagliate sugli avvisi generati durante l'indicizzazione.
## <a name="verify-content"></a>Verificare il contenuto
Dopo il termine dell'indicizzazione, eseguire le query che restituiscono il contenuto dei singoli campi. Per impostazione predefinita, Ricerca di Azure restituisce i primi 50 risultati. I dati di esempio sono piccoli, quindi l'impostazione predefinita è appropriata. Tuttavia, quando si opera su set di dati più grandi, potrebbe essere necessario includere parametri nella stringa di query per restituire più risultati. Per istruzioni, vedere [Come impaginare i risultati della ricerca in Ricerca di Azure](search-pagination-page-layout.md).
Come passaggio di verifica, eseguire una query sull'indice per tutti i campi.
```http
GET https://[servicename].search.windows.net/indexes/demoindex?api-version=2017-11-11-Preview
api-key: [api-key]
Content-Type: application/json
```
L'output è lo schema dell'indice, con nome, tipo e attributi di ogni campo.
Inviare una seconda query per `"*"` per restituire tutto il contenuto di un singolo campo, ad esempio `organizations`.
```http
GET https://[servicename].search.windows.net/indexes/demoindex/docs?search=*&$select=organizations&api-version=2017-11-11-Preview
api-key: [api-key]
Content-Type: application/json
```
Ripetere l'operazione per altri campi: content, languageCode, keyphrases e organizations in questo esercizio. È possibile restituire più campi tramite `$select` usando un elenco delimitato da virgole.
È possibile usare GET o POST, in base alla lunghezza e alla complessità della stringa di query. Per altre informazioni, vedere [Eseguire query su un indice di Ricerca di Azure con l'API REST](https://docs.microsoft.com/azure/search/search-query-rest-api).
<a name="access-enriched-document"></a>
## <a name="accessing-the-enriched-document"></a>Accesso al documento arricchito
La ricerca cognitiva consente di visualizzare la struttura del documento arricchito. I documenti arricchiti sono strutture temporanee create durante l'arricchimento e quindi eliminate al termine del processo.
Per acquisire uno snapshot del documento arricchito creato durante l'indicizzazione, aggiungere un campo denominato ```enriched``` all'indice. L'indicizzatore esegue automaticamente il dump nel campo di una rappresentazione stringa di tutti gli arricchimenti per il documento.
Il campo ```enriched``` conterrà una stringa che è una rappresentazione logica del documento in memoria arricchito in JSON. Il valore del campo è tuttavia un documento JSON valido. Per le virgolette vengono usati caratteri di escape, quindi sarà necessario sostituire `\"` con `"` per visualizzare il documento in formato JSON.
Il campo ```enriched``` è destinato esclusivamente al debugging, per comprendere meglio la forma logica del contenuto in base alla quale vengono valutate le espressioni. Può trattarsi di uno strumento utile per comprendere il set di competenze ed eseguirne il debug.
Ripetere l'esercizio precedente, includendo un campo `enriched` per acquisire il contenuto di un documento arricchito:
### <a name="request-body-syntax"></a>Sintassi del corpo della richiesta
```json
{
"fields": [
{
"name": "id",
"type": "Edm.String",
"key": true,
"searchable": true,
"filterable": false,
"facetable": false,
"sortable": true
},
{
"name": "content",
"type": "Edm.String",
"sortable": false,
"searchable": true,
"filterable": false,
"facetable": false
},
{
"name": "languageCode",
"type": "Edm.String",
"searchable": true,
"filterable": false,
"facetable": false
},
{
"name": "keyPhrases",
"type": "Collection(Edm.String)",
"searchable": true,
"filterable": false,
"facetable": false
},
{
"name": "organizations",
"type": "Collection(Edm.String)",
"searchable": true,
"sortable": false,
"filterable": false,
"facetable": false
},
{
"name": "enriched",
"type": "Edm.String",
"searchable": false,
"sortable": false,
"filterable": false,
"facetable": false
}
]
}
```
<a name="reset"></a>
## <a name="reset-and-rerun"></a>Reimpostare ed eseguire di nuovo
Nelle prime fasi sperimentali dello sviluppo della pipeline, l'approccio più pratico per le iterazioni di progettazione consiste nell'eliminare gli oggetti da Ricerca di Azure e consentire al codice di ricompilarli. I nomi di risorsa sono univoci. L'eliminazione di un oggetto consente di ricrearlo usando lo stesso nome.
Per reindicizzare i documenti con le nuove definizioni:
1. Eliminare l'indice per rimuovere i dati persistenti. Eliminare l'indicizzatore per ricrearlo nel servizio.
2. Modificare la definizione di un set di competenze e un indice.
3. Ricreare un indice e un indicizzatore nel servizio per eseguire la pipeline.
È possibile usare il portale per eliminare indici e indicizzatori. I set di competenze possono essere eliminati solo tramite un comando HTTP, se si decide di eliminarli.
```http
DELETE https://[servicename].search.windows.net/skillsets/demoskillset?api-version=2017-11-11-Preview
api-key: [api-key]
Content-Type: application/json
```
In caso di corretto completamento dell'eliminazione viene restituito il codice di stato 204.
Con l'evoluzione del codice può risultare necessario perfezionare una strategia di ricompilazione. Per altre informazioni, vedere [How to rebuild an index](search-howto-reindex.md) (Come ricompilare un indice).
## <a name="takeaways"></a>Risultati
Questa esercitazione illustra i passaggi di base per la compilazione di una pipeline di indicizzazione arricchita tramite la creazione delle parti componenti: un'origine dati, un set di competenze, un indice e un indicizzatore.
Sono state presentate le [competenze predefinite](cognitive-search-predefined-skills.md), oltre alla definizione del set di competenze e ai meccanismi di concatenamento delle competenze tramite input e output. Si è inoltre appreso che `outputFieldMappings` nella definizione dell'indicizzatore è obbligatorio per indirizzare i valori arricchiti dalla pipeline in un indice di ricerca in un servizio Ricerca di Azure.
Infine, è stato descritto come testare i risultati e reimpostare il sistema per ulteriori iterazioni. Si è appreso che l'esecuzione di query sull'indice consente di restituire l'output creato dalla pipeline di indicizzazione arricchita. In questa versione, è disponibile un meccanismo per la visualizzazione dei costrutti interni (documenti arricchiti creati dal sistema). È stato inoltre descritto come controllare lo stato dell'indicizzatore e quali oggetti eliminare prima di eseguire di nuovo una pipeline.
## <a name="clean-up-resources"></a>Pulire le risorse
Il modo più veloce per pulire le risorse dopo un'esercitazione consiste nell'eliminare il gruppo di risorse contenente il servizio Ricerca di Azure e il servizio BLOB di Azure. Supponendo che entrambi i servizi siano stati inseriti nello stesso gruppo, eliminare il gruppo di risorse a questo punto per eliminare definitivamente tutti gli elementi in esso contenuti, inclusi i servizi e qualsiasi contenuto archiviato creato per questa esercitazione. Nel portale, il nome del gruppo di risorse è indicato nella pagina Panoramica di ciascun servizio.
## <a name="next-steps"></a>Passaggi successivi
Personalizzare o estendere la pipeline con competenze personalizzate. Creare una competenza personalizzata e aggiungerla a un set di competenze consente di caricare procedure di analisi del testo o delle immagini personalizzate.
> [!div class="nextstepaction"]
> [Esempio: creare una competenza personalizzata](cognitive-search-create-custom-skill-example.md)
| 59.092224 | 680 | 0.749923 | ita_Latn | 0.998469 |
fd9c2e3ba86ae02df9d267556c7dbb7cb98d8c77 | 1,353 | md | Markdown | ClienteWeb/bower_components/jquery.sticky/README.md | cities2grupo4qp1516/SecretSale | 224ff714120827b530c7d9a8dd6b2ddbbadf9b8f | [
"Apache-2.0"
] | null | null | null | ClienteWeb/bower_components/jquery.sticky/README.md | cities2grupo4qp1516/SecretSale | 224ff714120827b530c7d9a8dd6b2ddbbadf9b8f | [
"Apache-2.0"
] | null | null | null | ClienteWeb/bower_components/jquery.sticky/README.md | cities2grupo4qp1516/SecretSale | 224ff714120827b530c7d9a8dd6b2ddbbadf9b8f | [
"Apache-2.0"
] | null | null | null |
# Sticky
Sticky is a jQuery plugin that gives you the ability to make any element on your page always stay visible.
## Sticky in brief
This is how it works:
- When the target element is about to be hidden, the plugin will add the class `className` to it (and to a wrapper added as its parent), set it to `position: fixed` and calculate its new `top`, based on the element's height, the page height and the `topSpacing` and `bottomSpacing` options.
- That's it. In some cases you might need to set a fixed width to your element when it is "sticked". Check the `example-*.html` files for some examples.
## Usage
- Include jQuery & Sticky.
- Call Sticky.
```javascript
<script src="jquery.js"></script>
<script src="jquery.sticky.js"></script>
<script>
$(document).ready(function(){
$("#sticker").sticky({topSpacing:0});
});
</script>
```
- Edit your css to position the elements (check the examples in `example-*.html`).
## Options
- `topSpacing`: Pixels between the page top and the element's top.
- `bottomSpacing`: Pixels between the page bottom and the element's bottom.
- `className`: CSS class added to the element and its wrapper when "sticked".
- `wrapperClassName`: CSS class added to the wrapper.
## Methods
- `sticky(options)`: Initializer. `options` is optional.
- `sticky('update')`: Recalculates the element's position.
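A minimal sketch that combines the documented options and methods; the `#sticker` selector and the resize handler are only illustrative assumptions:

```javascript
// Initialize with some of the documented options.
$("#sticker").sticky({
  topSpacing: 10,          // pixels kept between the page top and the element
  bottomSpacing: 0,        // pixels kept between the page bottom and the element
  className: "is-sticky"   // class added to the element and its wrapper when "sticked"
});

// If the layout changes later (for example on window resize),
// ask the plugin to recalculate the element's position.
$(window).on("resize", function () {
  $("#sticker").sticky("update");
});
```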
| 33.825 | 290 | 0.719882 | eng_Latn | 0.989085 |
fd9c8b611455ed3f30da377a13b9f807460db2e6 | 1,881 | md | Markdown | README.md | Himanshu14k/AdultIncomePrediction_Project | 522f170111c5e6e45ef1e26ef21f86f4ea3a8dcc | [
"MIT"
] | 2 | 2021-09-06T08:31:46.000Z | 2021-10-30T12:53:21.000Z | README.md | Himanshu14k/AdultIncomePrediction_Project | 522f170111c5e6e45ef1e26ef21f86f4ea3a8dcc | [
"MIT"
] | 1 | 2021-09-07T13:53:26.000Z | 2021-09-07T13:53:26.000Z | README.md | Himanshu14k/AdultIncomePrediction_Project | 522f170111c5e6e45ef1e26ef21f86f4ea3a8dcc | [
"MIT"
] | 2 | 2021-09-13T17:20:56.000Z | 2021-11-21T16:05:16.000Z |
# Adult Census Income Prediction
## Problem Statement:
<p>The goal is to predict whether a person has an income of more than 50K a year or not.
This is basically a binary classification problem where a person is classified into
the >50K group or the <=50K group.</p>
## Approach
<p>This project follows the classical machine learning workflow: data exploration, data cleaning,
feature engineering, model building and model testing. Several machine learning algorithms were
tried out to find the one that best fits this case.</p>
<pre>
<li> Data Exploration : I started exploring the dataset using pandas, numpy, matplotlib and seaborn. </li>
<li> Data Visualization : Plotted graphs to get insights about the dependent and independent variables. </li>
<li> Feature Engineering : Removed missing values and created new features based on those insights.</li>
<li> Model Selection I : Tested all base models to check the baseline accuracy.</li>
<li> Model Selection II : Performed hyperparameter tuning using GridSearchCV (see the sketch after this list).</li>
<li> Pickle File : Selected the model with the best accuracy and created a pickle file using joblib.</li>
<li> Webpage & Deployment : Created a webpage that takes all the necessary inputs from the user and shows the output.
After that I deployed the project on Heroku and AWS.</li></pre>
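A minimal, illustrative sketch of the model selection and pickling steps above. The column names, the estimator and the parameter grid are assumptions for illustration, not the exact code used in this project:

```python
# Illustrative sketch only - feature names, estimator and grid are assumptions.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.read_csv("adult.csv")                          # cleaned census data
X = pd.get_dummies(df.drop("income", axis=1))          # encode categorical features
y = (df["income"] == ">50K").astype(int)               # 1 = >50K, 0 = <=50K

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model Selection II: hyperparameter tuning with GridSearchCV
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [8, 12, None]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("held-out accuracy:", grid.best_estimator_.score(X_test, y_test))

# Pickle File: persist the best model so the Flask webpage can load it
joblib.dump(grid.best_estimator_, "model.pkl")
```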
## Deployment Links
<p> Link Heroku : https://adultincomeprediction.herokuapp.com/ <br>
AWS Link : http://adultincomepredicyionproject-env.eba-2kmfag3h.us-east-2.elasticbeanstalk.com/ </p>
## UserInterface


## Technologies Used
<pre>
1. Python
2. Sklearn
3. Flask
4. Html
5. Css
6. Pandas, Numpy
7. Database
8. Hosting
9. Docker
</pre>
## High Level Design Document
## Low Level Design Document
## Help Me Improve
<p>Hello reader, if you find any bug, please consider raising an issue; I will address it as soon as possible.</p>
| 36.882353 | 110 | 0.735779 | eng_Latn | 0.91585 |
fd9ca8d774b12c2807d1e8fce78b5c8dd544fe4e | 856 | md | Markdown | _posts/2016-12-05-Bellantuono-Modello-1744B.md | celermarryious/celermarryious.github.io | bcf6ff5049c82e276226a68ba269c11ccca7f970 | [
"MIT"
] | null | null | null | _posts/2016-12-05-Bellantuono-Modello-1744B.md | celermarryious/celermarryious.github.io | bcf6ff5049c82e276226a68ba269c11ccca7f970 | [
"MIT"
] | null | null | null | _posts/2016-12-05-Bellantuono-Modello-1744B.md | celermarryious/celermarryious.github.io | bcf6ff5049c82e276226a68ba269c11ccca7f970 | [
"MIT"
] | null | null | null |
---
layout: post
date: 2016-12-05
title: "Bellantuono Modello 1744B"
category: Bellantuono
tags: [Bellantuono]
---
### Bellantuono Modello 1744B
Just **$379.99**
###
<table><tr><td>BRANDS</td><td>Bellantuono</td></tr></table>
<a href="https://www.readybrides.com/en/bellantuono/68934-bellantuono-modello-1744b.html"><img src="//img.readybrides.com/160838/bellantuono-modello-1744b.jpg" alt="Bellantuono Modello 1744B" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/bellantuono/68934-bellantuono-modello-1744b.html"><img src="//img.readybrides.com/160834/bellantuono-modello-1744b.jpg" alt="Bellantuono Modello 1744B" style="width:100%;" /></a>
Buy it: [https://www.readybrides.com/en/bellantuono/68934-bellantuono-modello-1744b.html](https://www.readybrides.com/en/bellantuono/68934-bellantuono-modello-1744b.html)
| 53.5 | 232 | 0.739486 | yue_Hant | 0.404575 |
fd9d097fb32de875e0bb9f100dab9f6eb308a2bc | 25,374 | md | Markdown | articles/hdinsight/hadoop/python-udf-hdinsight.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/hadoop/python-udf-hdinsight.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/hadoop/python-udf-hdinsight.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Python UDF med Apache Hive och Apache Pig - Azure HDInsight
description: Lär dig hur du använder Användardefinierade funktioner för Python (UDF) från Apache Hive och Apache Pig i HDInsight, Apache Hadoop-teknikstacken på Azure.
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.date: 11/15/2019
ms.custom: H1Hack27Feb2017,hdinsightactive
ms.openlocfilehash: 201bb40e5024442587f5508886da7e844f35be40
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/27/2020
ms.locfileid: "74148405"
---
# <a name="use-python-user-defined-functions-udf-with-apache-hive-and-apache-pig-in-hdinsight"></a>Använd Python User Defined Functions (UDF) med Apache Hive och Apache Pig i HDInsight
Lär dig hur du använder Användardefinierade Funktioner i Python (UDF) med Apache Hive och Apache Pig i Apache Hadoop på Azure HDInsight.
## <a name="python-on-hdinsight"></a><a name="python"></a>Python på HDInsight
Python2.7 installeras som standard på HDInsight 3.0 och senare. Apache Hive kan användas med den här versionen av Python för dataflödesbearbetning. Strömbearbetning använder STDOUT och STDIN för att skicka data mellan Hive och UDF.
HDInsight innehåller även Jython, som är en Python-implementering skriven i Java. Jython körs direkt på Java Virtual Machine och använder inte streaming. Jython är den rekommenderade Python-tolken när du använder Python med Pig.
## <a name="prerequisites"></a>Krav
* **Ett Hadoop-kluster på HDInsight**. Se [Komma igång med HDInsight på Linux](apache-hadoop-linux-tutorial-get-started.md).
* **En SSH-klient**. Mer information finns i [Ansluta till HDInsight (Apache Hadoop) med hjälp av SSH](../hdinsight-hadoop-linux-use-ssh-unix.md).
* [URI-schemat](../hdinsight-hadoop-linux-information.md#URI-and-scheme) för klustrets primära lagring. Detta skulle `wasb://` vara för `abfs://` Azure Storage, för Azure Data Lake Storage Gen2 eller adl:// för Azure Data Lake Storage Gen1. Om säker överföring är aktiverad för Azure Storage, skulle URI vara wasbs://. Se även [säker överföring](../../storage/common/storage-require-secure-transfer.md).
* **Möjlig ändring av lagringskonfiguration.** Se [Lagringskonfiguration](#storage-configuration) om `BlobStorage`du använder lagringskontosort .
* Valfri. Om du planerar att använda PowerShell måste [AZ-modulen](https://docs.microsoft.com/powershell/azure/new-azureps-module-az) vara installerad.
> [!NOTE]
> Lagringskontot som användes i den här artikeln `wasbs` var Azure Storage med säker [överföring](../../storage/common/storage-require-secure-transfer.md) aktiverad och används därför i hela artikeln.
## <a name="storage-configuration"></a>Storage-konfiguration
Ingen åtgärd krävs om lagringskontot `Storage (general purpose v1)` `StorageV2 (general purpose v2)`som används är av slag eller . Processen i den här artikeln kommer `/tezstaging`att producera utdata till minst . En standardkonfiguration för hadoop `fs.azure.page.blob.dir` kommer `core-site.xml` att `HDFS`finnas `/tezstaging` i konfigurationsvariabeln i för tjänsten . Den här konfigurationen gör att utdata till katalogen blir sidblobar, som inte stöds för lagringskontosdort `BlobStorage`. Om `BlobStorage` du vill använda `/tezstaging` den `fs.azure.page.blob.dir` här artikeln tar du bort från konfigurationsvariabeln. Konfigurationen kan nås från [Ambari UI](../hdinsight-hadoop-manage-ambari.md). Annars visas felmeddelandet:`Page blob is not supported for this account type.`
> [!WARNING]
> Stegen i det här dokumentet gör följande antaganden:
>
> * Du skapar Python-skripten i din lokala utvecklingsmiljö.
> * Du laddar upp skripten till HDInsight med kommandot `scp` eller det medföljande PowerShell-skriptet.
>
> Om du vill använda [Azure Cloud Shell (bash)](https://docs.microsoft.com/azure/cloud-shell/overview) för att arbeta med HDInsight måste du:
>
> * Skapa skripten i molnskalmiljön.
> * Används `scp` för att ladda upp filerna från molnskalet till HDInsight.
> * Använd `ssh` från molnskalet för att ansluta till HDInsight och köra exemplen.
## <a name="apache-hive-udf"></a><a name="hivepython"></a>Apache Hive UDF
Python kan användas som en UDF från Hive `TRANSFORM` via HiveQL-satsen. Följande HiveQL anropar till `hiveudf.py` exempel filen som lagras i standardkontot för Azure Storage för klustret.
```hiveql
add file wasbs:///hiveudf.py;
SELECT TRANSFORM (clientid, devicemake, devicemodel)
USING 'python hiveudf.py' AS
(clientid string, phoneLabel string, phoneHash string)
FROM hivesampletable
ORDER BY clientid LIMIT 50;
```
Så här gör det här exemplet:
1. Uttrycket `add file` i början av filen `hiveudf.py` lägger till filen i den distribuerade cachen, så den är tillgänglig för alla noder i klustret.
2. Utdraget `SELECT TRANSFORM ... USING` väljer data `hivesampletable`från . Den skickar också värdena clientid, devicemake och `hiveudf.py` devicemodel till skriptet.
3. Satsen `AS` beskriver de fält `hiveudf.py`som returneras från .
<a name="streamingpy"></a>
### <a name="create-file"></a>Skapa fil
Skapa en textfil med namnet `hiveudf.py`. Använd följande kod som innehållet i filen:
```python
#!/usr/bin/env python
import sys
import string
import hashlib
while True:
line = sys.stdin.readline()
if not line:
break
line = string.strip(line, "\n ")
clientid, devicemake, devicemodel = string.split(line, "\t")
phone_label = devicemake + ' ' + devicemodel
print "\t".join([clientid, phone_label, hashlib.md5(phone_label).hexdigest()])
```
Det här skriptet utför följande åtgärder:
1. Läser en rad data från STDIN.
2. Det efterföljande nyradstecknet `string.strip(line, "\n ")`tas bort med .
3. När du bearbetar dataström innehåller en enda rad alla värden med ett fliktecken mellan varje värde. Så `string.split(line, "\t")` kan användas för att dela indata på varje flik, returnerar bara fälten.
4. När bearbetningen är klar måste utdata skrivas till STDOUT som en enda rad, med en flik mellan varje fält. Till exempel `print "\t".join([clientid, phone_label, hashlib.md5(phone_label).hexdigest()])`.
5. Slingan `while` upprepas `line` tills ingen läss.
Skriptutdata är en sammanfogning av `devicemake` `devicemodel`indatavärdena för och , och en hash av det sammanfogade värdet.
### <a name="upload-file-shell"></a>Ladda upp fil (skal)
Ersätt `sshuser` med det faktiska användarnamnet om det är annorlunda i kommandona nedan. Ersätt `mycluster` med det faktiska klusternamnet. Kontrollera att arbetskatalogen är där filen finns.
1. Används `scp` för att kopiera filerna till ditt HDInsight-kluster. Redigera och ange kommandot nedan:
```cmd
scp hiveudf.py sshuser@mycluster-ssh.azurehdinsight.net:
```
2. Använd SSH för att ansluta till klustret. Redigera och ange kommandot nedan:
```cmd
ssh sshuser@mycluster-ssh.azurehdinsight.net
```
3. Från SSH-sessionen lägger du till pythonfiler som tidigare överförts till lagringen för klustret.
```bash
hdfs dfs -put hiveudf.py /hiveudf.py
```
### <a name="use-hive-udf-shell"></a>Använd Hive UDF (skal)
1. Om du vill ansluta till Hive använder du följande kommando från den öppna SSH-sessionen:
```bash
beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http'
```
Det här kommandot startar Beeline-klienten.
2. Ange följande fråga `0: jdbc:hive2://headnodehost:10001/>` vid prompten:
```hive
add file wasbs:///hiveudf.py;
SELECT TRANSFORM (clientid, devicemake, devicemodel)
USING 'python hiveudf.py' AS
(clientid string, phoneLabel string, phoneHash string)
FROM hivesampletable
ORDER BY clientid LIMIT 50;
```
3. När du har angett den sista raden ska jobbet starta. När jobbet är klart returneras utdata som liknar följande exempel:
100041 RIM 9650 d476f3687700442549a83fac4560c51c
100041 RIM 9650 d476f3687700442549a83fac4560c51c
100042 Apple iPhone 4.2.x 375ad9a0ddc4351536804f1d5d0ea9b9
100042 Apple iPhone 4.2.x 375ad9a0ddc4351536804f1d5d0ea9b9
100042 Apple iPhone 4.2.x 375ad9a0ddc4351536804f1d5d0ea9b9
4. Om du vill avsluta Beeline anger du följande kommando:
```hive
!q
```
### <a name="upload-file-powershell"></a>Ladda upp fil (PowerShell)
PowerShell kan också användas för att fjärrköra Hive-frågor. Kontrollera att arbetskatalogen finns där. `hiveudf.py` Använd följande PowerShell-skript för att köra `hiveudf.py` en Hive-fråga som använder skriptet:
```PowerShell
# Login to your Azure subscription
# Is there an active Azure subscription?
$sub = Get-AzSubscription -ErrorAction SilentlyContinue
if(-not($sub))
{
Connect-AzAccount
}
# If you have multiple subscriptions, set the one to use
# Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
# Revise file path as needed
$pathToStreamingFile = ".\hiveudf.py"
# Get cluster info
$clusterName = Read-Host -Prompt "Enter the HDInsight cluster name"
$clusterInfo = Get-AzHDInsightCluster -ClusterName $clusterName
$resourceGroup = $clusterInfo.ResourceGroup
$storageAccountName=$clusterInfo.DefaultStorageAccount.split('.')[0]
$container=$clusterInfo.DefaultStorageContainer
$storageAccountKey=(Get-AzStorageAccountKey `
-ResourceGroupName $resourceGroup `
-Name $storageAccountName)[0].Value
# Create an Azure Storage context
$context = New-AzStorageContext `
-StorageAccountName $storageAccountName `
-StorageAccountKey $storageAccountKey
# Upload local files to an Azure Storage blob
Set-AzStorageBlobContent `
-File $pathToStreamingFile `
-Blob "hiveudf.py" `
-Container $container `
-Context $context
```
> [!NOTE]
> Mer information om hur du laddar upp filer finns i [Ladda upp data för Apache Hadoop-jobb i HDInsight-dokumentet.](../hdinsight-upload-data.md)
#### <a name="use-hive-udf"></a>Använd Hive UDF
```PowerShell
# Script should stop on failures
$ErrorActionPreference = "Stop"
# Login to your Azure subscription
# Is there an active Azure subscription?
$sub = Get-AzSubscription -ErrorAction SilentlyContinue
if(-not($sub))
{
Connect-AzAccount
}
# If you have multiple subscriptions, set the one to use
# Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
# Get cluster info
$clusterName = Read-Host -Prompt "Enter the HDInsight cluster name"
$creds=Get-Credential -UserName "admin" -Message "Enter the login for the cluster"
$HiveQuery = "add file wasbs:///hiveudf.py;" +
"SELECT TRANSFORM (clientid, devicemake, devicemodel) " +
"USING 'python hiveudf.py' AS " +
"(clientid string, phoneLabel string, phoneHash string) " +
"FROM hivesampletable " +
"ORDER BY clientid LIMIT 50;"
# Create Hive job object
$jobDefinition = New-AzHDInsightHiveJobDefinition `
-Query $HiveQuery
# For status bar updates
$activity="Hive query"
# Progress bar (optional)
Write-Progress -Activity $activity -Status "Starting query..."
# Start defined Azure HDInsight job on specified cluster.
$job = Start-AzHDInsightJob `
-ClusterName $clusterName `
-JobDefinition $jobDefinition `
-HttpCredential $creds
# Progress bar (optional)
Write-Progress -Activity $activity -Status "Waiting on query to complete..."
# Wait for completion or failure of specified job
Wait-AzHDInsightJob `
-JobId $job.JobId `
-ClusterName $clusterName `
-HttpCredential $creds
# Uncomment the following to see stderr output
<#
Get-AzHDInsightJobOutput `
-Clustername $clusterName `
-JobId $job.JobId `
-HttpCredential $creds `
-DisplayOutputType StandardError
#>
# Progress bar (optional)
Write-Progress -Activity $activity -Status "Retrieving output..."
# Gets the log output
Get-AzHDInsightJobOutput `
-Clustername $clusterName `
-JobId $job.JobId `
-HttpCredential $creds
```
Utdata för **Hive-jobbet** ska se ut ungefär som i följande exempel:
100041 RIM 9650 d476f3687700442549a83fac4560c51c
100041 RIM 9650 d476f3687700442549a83fac4560c51c
100042 Apple iPhone 4.2.x 375ad9a0ddc4351536804f1d5d0ea9b9
100042 Apple iPhone 4.2.x 375ad9a0ddc4351536804f1d5d0ea9b9
100042 Apple iPhone 4.2.x 375ad9a0ddc4351536804f1d5d0ea9b9
## <a name="apache-pig-udf"></a><a name="pigpython"></a>Apache Gris UDF
Ett Python-skript kan användas som en `GENERATE` UDF från Pig via uttrycket. Du kan köra skriptet med antingen Jython eller C Python.
* Jython körs på JVM, och kan inbyggt kallas från Pig.
* C Python är en extern process, så data från Pig på JVM skickas ut till skriptet som körs i en Python-process. Utdata från Python-skriptet skickas tillbaka till Pig.
Om du vill ange `register` Python-tolken använder du när du refererar till Python-skriptet. Följande exempel registrerar skript med `myfuncs`Gris som :
* **Så här använder du Jython:**`register '/path/to/pigudf.py' using jython as myfuncs;`
* **Så här använder du C Python:**`register '/path/to/pigudf.py' using streaming_python as myfuncs;`
> [!IMPORTANT]
> När du använder Jython kan sökvägen till pig_jython filen vara antingen en lokal sökväg eller en WASBS:// sökväg. När du använder C Python måste du dock referera till en fil i det lokala filsystemet för den nod som du använder för att skicka pig-jobbet.
När tidigare registrering, pig latin för detta exempel är densamma för både:
```pig
LOGS = LOAD 'wasbs:///example/data/sample.log' as (LINE:chararray);
LOG = FILTER LOGS by LINE is not null;
DETAILS = FOREACH LOG GENERATE myfuncs.create_structure(LINE);
DUMP DETAILS;
```
Så här gör det här exemplet:
1. Den första raden läser in `sample.log` `LOGS`exempeldatafilen i . Den definierar också varje `chararray`post som en .
2. Nästa rad filtrerar bort alla null-värden och `LOG`lagrar resultatet av operationen i .
3. Därefter itererar det över `LOG` posterna i och använder `GENERATE` för att anropa `create_structure` metoden som `myfuncs`finns i Python/Jython-skriptet som läses in som . `LINE`används för att skicka den aktuella posten till funktionen.
4. Slutligen dumpas utdata till STDOUT `DUMP` med kommandot. Det här kommandot visar resultaten när åtgärden är klar.
### <a name="create-file"></a>Skapa fil
Skapa en textfil med namnet `pigudf.py`. Använd följande kod som innehållet i filen:
<a name="streamingpy"></a>
```python
# Uncomment the following if using C Python
#from pig_util import outputSchema
@outputSchema("log: {(date:chararray, time:chararray, classname:chararray, level:chararray, detail:chararray)}")
def create_structure(input):
if (input.startswith('java.lang.Exception')):
input = input[21:len(input)] + ' - java.lang.Exception'
date, time, classname, level, detail = input.split(' ', 4)
return date, time, classname, level, detail
```
I exemplet Pig Latin `LINE` definieras indata som en chararray eftersom det inte finns något konsekvent schema för indata. Python-skriptet omvandlar data till ett konsekvent schema för utdata.
1. Satsen `@outputSchema` definierar formatet på de data som returneras till Pig. I det här fallet är det en **datapåse**, som är en Pig-datatyp. Påsen innehåller följande fält, som alla är chararray (strängar):
* datum - det datum då loggposten skapades
* tid - den tidpunkt då loggposten skapades
* classname - klassnamnet som transaktionen skapades för
* nivå - loggnivån
* detalj - utförlig information för loggposten
2. Därefter `def create_structure(input)` definierar funktionen som Pig skickar radobjekt till.
3. `sample.log`Exempeldata, , överensstämmer oftast med schemat datum, tid, klassnamn, nivå och detalj. Den innehåller dock några rader som `*java.lang.Exception*`börjar med . Dessa rader måste ändras för att matcha schemat. Uttalandet `if` kontrollerar för dem, sedan masserar `*java.lang.Exception*` indata för att flytta strängen till slutet, vilket data i linje med den förväntade utdata schema.
4. Därefter används `split` kommandot för att dela upp data vid de fyra första blankstegs tecknen. Utdata tilldelas till `date` `time`, `classname` `level`, `detail`, och .
5. Slutligen returneras värdena till Pig.
När data returneras till Gris har de ett konsekvent `@outputSchema` schema enligt beskrivningen i uttrycket.
### <a name="upload-file-shell"></a>Ladda upp fil (skal)
Ersätt `sshuser` med det faktiska användarnamnet om det är annorlunda i kommandona nedan. Ersätt `mycluster` med det faktiska klusternamnet. Kontrollera att arbetskatalogen är där filen finns.
1. Används `scp` för att kopiera filerna till ditt HDInsight-kluster. Redigera och ange kommandot nedan:
```cmd
scp pigudf.py sshuser@mycluster-ssh.azurehdinsight.net:
```
2. Använd SSH för att ansluta till klustret. Redigera och ange kommandot nedan:
```cmd
ssh sshuser@mycluster-ssh.azurehdinsight.net
```
3. Från SSH-sessionen lägger du till pythonfiler som tidigare överförts till lagringen för klustret.
```bash
hdfs dfs -put pigudf.py /pigudf.py
```
### <a name="use-pig-udf-shell"></a>Använd Pig UDF (skal)
1. Om du vill ansluta till gris använder du följande kommando från den öppna SSH-sessionen:
```bash
pig
```
2. Ange följande satser `grunt>` vid prompten:
```pig
Register wasbs:///pigudf.py using jython as myfuncs;
LOGS = LOAD 'wasbs:///example/data/sample.log' as (LINE:chararray);
LOG = FILTER LOGS by LINE is not null;
DETAILS = foreach LOG generate myfuncs.create_structure(LINE);
DUMP DETAILS;
```
3. När du har angett följande rad ska jobbet starta. När jobbet är klart returneras utdata som liknar följande data:
((2012-02-03,20:11:56,SampleClass5,[TRACE],verbose detail for id 990982084))
((2012-02-03,20:11:56,SampleClass7,[TRACE],verbose detail for id 1560323914))
((2012-02-03,20:11:56,SampleClass8,[DEBUG],detail for id 2083681507))
((2012-02-03,20:11:56,SampleClass3,[TRACE],verbose detail for id 1718828806))
((2012-02-03,20:11:56,SampleClass3,[INFO],everything normal for id 530537821))
4. Används `quit` för att avsluta grunt-skalet och sedan använda följande för att redigera pigudf.py-filen i det lokala filsystemet:
```bash
nano pigudf.py
```
5. En gång i redigeraren, avkommenta `#` följande rad genom att ta bort tecknet från början av raden:
```bash
#from pig_util import outputSchema
```
Den här raden ändrar Python-skriptet så att det fungerar med C Python i stället för Jython. När ändringen har gjorts använder du **Ctrl+X** för att avsluta redigeraren. Välj **Y**och **sedan Ange** för att spara ändringarna.
6. Använd `pig` kommandot för att starta skalet igen. När du är `grunt>` på prompten använder du följande för att köra Python-skriptet med C Python-tolken.
```pig
Register 'pigudf.py' using streaming_python as myfuncs;
LOGS = LOAD 'wasbs:///example/data/sample.log' as (LINE:chararray);
LOG = FILTER LOGS by LINE is not null;
DETAILS = foreach LOG generate myfuncs.create_structure(LINE);
DUMP DETAILS;
```
När det här jobbet är klart bör du se samma utdata som när du körde skriptet tidigare med Jython.
### <a name="upload-file-powershell"></a>Ladda upp fil (PowerShell)
PowerShell kan också användas för att fjärrköra Hive-frågor. Kontrollera att arbetskatalogen finns där. `pigudf.py` Använd följande PowerShell-skript för att köra `pigudf.py` en Hive-fråga som använder skriptet:
```PowerShell
# Login to your Azure subscription
# Is there an active Azure subscription?
$sub = Get-AzSubscription -ErrorAction SilentlyContinue
if(-not($sub))
{
Connect-AzAccount
}
# If you have multiple subscriptions, set the one to use
# Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
# Revise file path as needed
$pathToJythonFile = ".\pigudf.py"
# Get cluster info
$clusterName = Read-Host -Prompt "Enter the HDInsight cluster name"
$clusterInfo = Get-AzHDInsightCluster -ClusterName $clusterName
$resourceGroup = $clusterInfo.ResourceGroup
$storageAccountName=$clusterInfo.DefaultStorageAccount.split('.')[0]
$container=$clusterInfo.DefaultStorageContainer
$storageAccountKey=(Get-AzStorageAccountKey `
-ResourceGroupName $resourceGroup `
-Name $storageAccountName)[0].Value
# Create an Azure Storage context
$context = New-AzStorageContext `
-StorageAccountName $storageAccountName `
-StorageAccountKey $storageAccountKey
# Upload local files to an Azure Storage blob
Set-AzStorageBlobContent `
-File $pathToJythonFile `
-Blob "pigudf.py" `
-Container $container `
-Context $context
```
### <a name="use-pig-udf-powershell"></a>Använd Pig UDF (PowerShell)
> [!NOTE]
> När du skickar ett jobb med PowerShell kan det inte använda C Python som tolk.
PowerShell kan också användas för att köra Pig Latin-jobb. Om du vill köra ett `pigudf.py` pig latin-jobb som använder skriptet använder du följande PowerShell-skript:
```PowerShell
# Script should stop on failures
$ErrorActionPreference = "Stop"
# Login to your Azure subscription
# Is there an active Azure subscription?
$sub = Get-AzSubscription -ErrorAction SilentlyContinue
if(-not($sub))
{
Connect-AzAccount
}
# Get cluster info
$clusterName = Read-Host -Prompt "Enter the HDInsight cluster name"
$creds=Get-Credential -UserName "admin" -Message "Enter the login for the cluster"
$PigQuery = "Register wasbs:///pigudf.py using jython as myfuncs;" +
"LOGS = LOAD 'wasbs:///example/data/sample.log' as (LINE:chararray);" +
"LOG = FILTER LOGS by LINE is not null;" +
"DETAILS = foreach LOG generate myfuncs.create_structure(LINE);" +
"DUMP DETAILS;"
# Create Pig job object
$jobDefinition = New-AzHDInsightPigJobDefinition -Query $PigQuery
# For status bar updates
$activity="Pig job"
# Progress bar (optional)
Write-Progress -Activity $activity -Status "Starting job..."
# Start defined Azure HDInsight job on specified cluster.
$job = Start-AzHDInsightJob `
-ClusterName $clusterName `
-JobDefinition $jobDefinition `
-HttpCredential $creds
# Progress bar (optional)
Write-Progress -Activity $activity -Status "Waiting for the Pig job to complete..."
# Wait for completion or failure of specified job
Wait-AzHDInsightJob `
-Job $job.JobId `
-ClusterName $clusterName `
-HttpCredential $creds
# Uncomment the following to see stderr output
<#
Get-AzHDInsightJobOutput `
-Clustername $clusterName `
-JobId $job.JobId `
-HttpCredential $creds `
-DisplayOutputType StandardError
#>
# Progress bar (optional)
Write-Progress -Activity $activity "Retrieving output..."
# Gets the log output
Get-AzHDInsightJobOutput `
-Clustername $clusterName `
-JobId $job.JobId `
-HttpCredential $creds
```
Utdata för **grisjobbet** ska se ut ungefär som följande data:
((2012-02-03,20:11:56,SampleClass5,[TRACE],verbose detail for id 990982084))
((2012-02-03,20:11:56,SampleClass7,[TRACE],verbose detail for id 1560323914))
((2012-02-03,20:11:56,SampleClass8,[DEBUG],detail for id 2083681507))
((2012-02-03,20:11:56,SampleClass3,[TRACE],verbose detail for id 1718828806))
((2012-02-03,20:11:56,SampleClass3,[INFO],everything normal for id 530537821))
## <a name="troubleshooting"></a><a name="troubleshooting"></a>Troubleshooting (Felsökning)
### <a name="errors-when-running-jobs"></a>Fel vid jobb som körs
När du kör hive-jobbet kan du stöta på ett fel som liknar följande text:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20001]: An error occurred while reading or writing to your custom script. It may have crashed with an error.
Det här problemet kan orsakas av radsluten i Python-filen. Många Windows-redigerare som standard använder CRLF som radslut, men Linux-program förväntar sig vanligtvis LF.
Du kan använda följande PowerShell-satser för att ta bort CR-tecknen innan du laddar upp filen till HDInsight:
[!code-powershell[main](../../../powershell_scripts/hdinsight/run-python-udf/run-python-udf.ps1?range=148-150)]
### <a name="powershell-scripts"></a>PowerShell-skript
Båda exemplet PowerShell-skript som används för att köra exemplen innehåller en kommenterad rad som visar felutdata för jobbet. Om du inte ser det förväntade utdata för jobbet tar du av följande rad och ser om felinformationen indikerar ett problem.
[!code-powershell[main](../../../powershell_scripts/hdinsight/run-python-udf/run-python-udf.ps1?range=135-139)]
Felinformationen (STDERR) och resultatet av jobbet (STDOUT) loggas också till din HDInsight-lagring.
| För det här jobbet... | Titta på dessa filer i blob-behållaren |
| --- | --- |
| Hive |/HivePython/stderr<p>/HivePython/stdout |
| Pig |/PigPython/stderr<p>/PigPython/stdout |
## <a name="next-steps"></a><a name="next"></a>Nästa steg
Om du behöver läsa in Python-moduler som inte tillhandahålls som standard läser du Så här distribuerar du [en modul till Azure HDInsight](https://blogs.msdn.com/b/benjguin/archive/2014/03/03/how-to-deploy-a-python-module-to-windows-azure-hdinsight.aspx).
Andra sätt att använda Pig, Hive och lära dig mer om hur du använder MapReduce finns i följande dokument:
* [Använda Apache Hive med HDInsight](hdinsight-use-hive.md)
* [Använd MapReduce med HDInsight](hdinsight-use-mapreduce.md)
| 42.789207 | 791 | 0.746709 | swe_Latn | 0.950162 |
fd9d379ee97c0d8ef7356846112cfef8e866f904 | 3,147 | md | Markdown | docs/BasicStructures.md | GaryStackSports/payments-sdk-php | 70da9a16228734dd62f51f9584e74006f1ee06cf | [
"MIT"
] | 1 | 2019-01-14T22:07:21.000Z | 2019-01-14T22:07:21.000Z | docs/BasicStructures.md | GaryStackSports/payments-sdk-php | 70da9a16228734dd62f51f9584e74006f1ee06cf | [
"MIT"
] | 1 | 2019-08-20T22:20:11.000Z | 2019-08-20T22:20:11.000Z | docs/BasicStructures.md | GaryStackSports/payments-sdk-php | 70da9a16228734dd62f51f9584e74006f1ee06cf | [
"MIT"
] | 14 | 2018-05-02T18:33:03.000Z | 2020-12-05T14:31:15.000Z |
# Basic Structures
The basic structures used in the SDK.
## Merchant
```php
$merchant = new StackPay\Payments\Structures\Merchant(
$merchantId,
$merchantHashKey
);
// or
$merchant = (new StackPay\Payments\Structures\Merchant())
->setID($merchantId)
->setHashKey($merchantHashKey);
```
## Account
### Card Account
```php
$cardAccount = new StackPay\Payments\Structures\CardAccount(
$type, // StackPay\Payments\AccountTypes::AMEX, DISCOVER, MASTERCARD, VISA
$accountNumber,
$mmddExpirationDate,
$cvv2,
$savePaymentMethodBoolean
);
// or
$cardAccount = (new StackPay\Payments\Structures\Account())
->setSavePaymentMethod($trueOrFalse)
->setType(StackPay\Payments\AccountTypes::VISA) // MASTERCARD, DISCOVER, AMEX
->setNumber($accountNumber)
->setExpireDate($mmddExpirationDate)
->setCvv2($cvv2);
```
### Bank Account
```php
$bankAccount = new StackPay\Payments\Structures\BankAccount(
$type, // StackPay\Payments\AccountTypes::CHECKING, SAVINGS
$accountNumber,
$routingNumber,
$savePaymentMethodBoolean
);
// or
$bankAccount = (new StackPay\Payments\Structures\Account())
->setSavePaymentMethod($trueOrFalse)
->setType(StackPay\Payments\AccountTypes::CHECKING) // SAVINGS
->setNumber($accountNumber)
->setRoutingNumber($routingNumber);
```
## Account Holder
```php
$accountHolder = new StackPay\Payments\Structures\AccountHolder(
$accountHolderName,
$billingAddress
);
// or
$accountHolder = (new StackPay\Payments\Structures\AccountHolder())
->setName($accountHolderName)
->setBillingAddress($billingAddress);
```
## Address
```php
$address = new StackPay\Payments\Structures\Address(
$addressLine1,
$addressLine2,
$city,
$state,
$postalCode,
StackPay\Payments\Structures\Country::usa() // canada()
);
// or
$address = (new StackPay\Payments\Structures\Address())
->setAddress1($addressLine1)
->setAddress2($addressLine2)
->setCity($city)
->setState($stateAbbreviation)
->setPostalCode($postalCode)
->setCountry(StackPay\Payments\Structures\Country::usa());
// or
$address = (new StackPay\Payments\Structures\Address())
->setAddressLines("$addressLine1.$lineSeparator.$addressLine2", $lineSeparator)
->setCity($city)
->setState($stateAbbreviation)
->setPostalCode($zipCode)
->setCountry(StackPay\Payments\Structures\Country::usa());
```
## Existing Customer
```php
$customer = new StackPay\Payments\Structures\Customer($customerId);
// or
$customer = (new StackPay\Payments\Structures\Customer())
->setId($customerId);
```
## Existing Transaction (for use with Refunds and Voids)
```php
$transaction = new StackPay\Payments\Structures\Transaction($transactionId);
// or
$transaction = (new StackPay\Payments\Structures\Transaction())
->setId($transactionId);
```
## Transaction Split
```php
$split = new StackPay\Payments\Structures\Split(
$merchant,
$amountInCents
);
// or
$split = (new StackPay\Payments\Structures\Split())
->setMerchant($merchant)
->setAmount($amountInCents);
```
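## Combining Structures
The structures above are designed to be composed. A minimal sketch (all values are placeholders, not real account data) that builds a billing address, attaches it to an account holder, and prepares a card account to be saved as a payment method:

```php
$address = (new StackPay\Payments\Structures\Address())
    ->setAddress1('123 Example Street')
    ->setCity('Exampleville')
    ->setState('NH')
    ->setPostalCode('03111')
    ->setCountry(StackPay\Payments\Structures\Country::usa());

// Attach the billing address to the account holder
$accountHolder = new StackPay\Payments\Structures\AccountHolder(
    'Jane Doe',
    $address
);

// Card account flagged to be saved as a payment method
$cardAccount = (new StackPay\Payments\Structures\Account())
    ->setSavePaymentMethod(true)
    ->setType(StackPay\Payments\AccountTypes::VISA)
    ->setNumber('4111111111111111')
    ->setExpireDate('1225')
    ->setCvv2('123');
```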
[Back to README](../README.md)
| 21.408163 | 83 | 0.701938 | yue_Hant | 0.549587 |
fd9d760a28fd3d031c6398157685b5de54f28d8e | 3,017 | md | Markdown | desktop-src/FileIO/reparse-points-and-file-operations.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 552 | 2019-08-20T00:08:40.000Z | 2022-03-30T18:25:35.000Z | desktop-src/FileIO/reparse-points-and-file-operations.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,143 | 2019-08-21T20:17:47.000Z | 2022-03-31T20:24:39.000Z | desktop-src/FileIO/reparse-points-and-file-operations.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,287 | 2019-08-20T05:37:48.000Z | 2022-03-31T20:22:06.000Z |
---
description: Describes how reparse points enable file system behavior that departs from behavior most Windows developers expect.
ms.assetid: 1aaebda9-0013-4282-9ae1-7c829e171942
title: Reparse Points and File Operations
ms.topic: article
ms.date: 05/31/2018
---
# Reparse Points and File Operations
*Reparse points* enable file system behavior that departs from the behavior most Windows developers are accustomed to, so being aware of these behaviors when writing applications that manipulate files is vital to building robust and reliable applications that access file systems supporting reparse points. The extent of these considerations depends on the specific implementation and associated file system filter behavior of a particular reparse point, which can be user-defined. For more information, see [Reparse Points](reparse-points.md).
Consider the following examples regarding NTFS reparse point implementations, which include mounted folders, linked files, and the Microsoft Remote Storage Server:
- Backup applications that use [file streams](file-streams.md) should specify **BACKUP\_REPARSE\_DATA** in the [**WIN32\_STREAM\_ID**](/windows/desktop/api/winbase/ns-winbase-win32_stream_id) structure when backing up files with reparse points.
- Applications that use the [**CreateFile**](/windows/desktop/api/FileAPI/nf-fileapi-createfilea) function should specify the **FILE\_FLAG\_OPEN\_REPARSE\_POINT** flag when opening the file if it is a reparse point; a minimal example follows this list. For more information, see [Creating and Opening Files](creating-and-opening-files.md).
- The process of [defragmenting files](defragmenting-files.md) requires special handling for reparse points.
- Virus detection applications should search for reparse points that indicate linked files.
- Most applications should take special actions for files that have been moved to long-term storage, if only to notify the user that it may take a while to retrieve the file.
- The [**OpenFileById**](/windows/desktop/api/WinBase/nf-winbase-openfilebyid) function will either open the file or the reparse point, depending on the use of the **FILE\_FLAG\_OPEN\_REPARSE\_POINT** flag.
- Symbolic links, as reparse points, have certain [programming considerations](symbolic-link-programming-considerations.md) specific to them.
- Volume management activities for reading update sequence number (USN) change journal records require special handling for reparse points when using the [**USN\_RECORD**](/windows/desktop/api/WinIoCtl/ns-winioctl-usn_record_v2) and [**READ\_USN\_JOURNAL\_DATA**](/windows/desktop/api/WinIoCtl/ns-winioctl-read_usn_journal_data_v0) structures.
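For example, the following sketch (the path is a placeholder and error handling is abbreviated) opens the reparse point itself, rather than the file or directory it points to, by combining **FILE\_FLAG\_OPEN\_REPARSE\_POINT** with **FILE\_FLAG\_BACKUP\_SEMANTICS** so that the call also succeeds for directories such as mounted folders:

```C
#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    // Open the reparse point itself, not its target. The path is an example only.
    HANDLE h = CreateFileW(L"C:\\MountedFolder",
                           FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL,
                           OPEN_EXISTING,
                           FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS,
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        wprintf(L"CreateFile failed with error %lu\n", GetLastError());
        return 1;
    }

    // Confirm that the opened object really is a reparse point.
    BY_HANDLE_FILE_INFORMATION info;
    if (GetFileInformationByHandle(h, &info) &&
        (info.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT))
    {
        wprintf(L"The path refers to a reparse point.\n");
    }

    CloseHandle(h);
    return 0;
}
```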
## Related topics
<dl> <dt>
[Determining Whether a Directory Is a Mounted Folder](determining-whether-a-directory-is-a-volume-mount-point.md)
</dt> <dt>
[Creating Mounted Folders](mounting-and-dismounting-a-volume.md)
</dt> <dt>
[Symbolic Link Effects on File Systems Functions](symbolic-link-effects-on-file-systems-functions.md)
</dt> </dl>
| 75.425 | 558 | 0.795824 | eng_Latn | 0.971897 |
fd9ed3f1baa04d7fcb1b6c8dbcd0f67e6651525f | 1,654 | md | Markdown | README.md | ecandrade/aula_html | 733e8aa9c08e4e9bbba1ea7be4eab70383ff9b38 | [
"MIT"
] | null | null | null | README.md | ecandrade/aula_html | 733e8aa9c08e4e9bbba1ea7be4eab70383ff9b38 | [
"MIT"
] | null | null | null | README.md | ecandrade/aula_html | 733e8aa9c08e4e9bbba1ea7be4eab70383ff9b38 | [
"MIT"
] | null | null | null |
# aula_html
Este repositório pertence a Aula de HTML no YouTube, Dev Front-End Essential.
Canal: https://www.youtube.com/channel/UCMIMUD-RbbByupGyWqG_7Yw
Conteúdo do Repositório: Código HTML das Aulas + Exemplos
*************************************************************************************************************************************************************************
HTML Front-End Essential Class, é parte de uma play list para ensinar e compartilhar conhecimento sobre Fron-End para Iniciantes.
HTML5 --> É a base para tudo, todos os grandes site tem HTML por de traz dos panos, se abrir o capo do fusca, com certeza encontrará HTML 😂🤣.
A Linguagem de Marcação de Hipertexto (HTML) é uma linguagem de computador que compõe a maior parte das páginas da internet e dos aplicativos online.
Um hipertexto é um texto usado para fazer referência a outros textos, enquanto uma linguagem de marcação é composta por uma série de marcações que dizem para os
servidores da web qual é o estilo e a estrutura de um documento.
O HTML não é considerado uma linguagem de programação, já que ele não pode criar funcionalidades dinâmicas. Ao invés disso, com o HTML, os usuários podem criar e
estruturar seções, parágrafos e links usando elementos, tags e atributos.
Também vale notar que o HTML agora é considerado um padrão oficial da internet. O World Wide Web Consortium (W3C) mantêm e desenvolve especificações do HTML,
além de providenciar atualizações regulares.
*************************************************************************************************************************************************************************
| 63.615385 | 169 | 0.632406 | por_Latn | 0.999851 |
fd9f10e9d40b87d9f6b9ceca60df45dd384682a1 | 328 | md | Markdown | md/parameters/embeddings.md | lappsgrid-incubator/org.lappsgrid.nlp4j | a03286e57cdf649a910408d9b857484813d22ee6 | [
"Apache-2.0"
] | null | null | null | md/parameters/embeddings.md | lappsgrid-incubator/org.lappsgrid.nlp4j | a03286e57cdf649a910408d9b857484813d22ee6 | [
"Apache-2.0"
] | 2 | 2016-12-09T20:48:42.000Z | 2016-12-09T21:37:20.000Z | md/parameters/embeddings.md | lappsgrid-incubator/org.lappsgrid.nlp4j | a03286e57cdf649a910408d9b857484813d22ee6 | [
"Apache-2.0"
] | null | null | null |
# Word Embeddings
These are the accepted values for the word embeddings parameter, and their corresponding fields and files. All lexica files can be found [here](src/main/resources/lexica).
| Value | Field | Filename |
| --- | --- | --- |
| undigitalized | word_form_undigitalized | en-word-embeddings-undigitalized.xz |
| 46.857143 | 172 | 0.72561 | eng_Latn | 0.977218 |
fd9fad63a202e5d8f59aee94cbda140cc02c5f9a | 7,397 | md | Markdown | docs/src/components/components/bal-modal.md | Earlybyte/design-system | b686baf1a2a9193e6f37c49aa4ea0a35f320e10b | [
"Apache-2.0"
] | null | null | null | docs/src/components/components/bal-modal.md | Earlybyte/design-system | b686baf1a2a9193e6f37c49aa4ea0a35f320e10b | [
"Apache-2.0"
] | null | null | null | docs/src/components/components/bal-modal.md | Earlybyte/design-system | b686baf1a2a9193e6f37c49aa4ea0a35f320e10b | [
"Apache-2.0"
] | null | null | null |
---
sidebarDepth: 0
---
# Modal
<!-- START: human documentation top -->
A Modal is a dialog that appears on top of the app's body, and must be dismissed by
the app before interaction can resume.
<!-- END: human documentation top -->
<ClientOnly><docs-component-tabs></docs-component-tabs></ClientOnly>
## Examples
### Basic
<ClientOnly><docs-demo-bal-modal-72></docs-demo-bal-modal-72></ClientOnly>
## Code
<!-- START: human documentation code -->
::: danger <img src="https://angular.io/assets/images/logos/angular/angular.svg" data-origin="https://angular.io/assets/images/logos/angular/angular.svg" alt="angular" style="width: 32px">Angular
Have a look at the [Angular usage documentation](/components/getting-started/angular/usage.html#modal-service).
:::
::: tip <img src="https://vuejs.org/images/logo.png" data-origin="https://vuejs.org/images/logo.png" alt="angular" style="width: 32px">Vue
Have a look at the [Vue usage documentation](/components/getting-started/vue/usage.html#modal).
:::
<!-- END: human documentation code -->
### Properties
| Attribute | Description | Type | Default |
| :----------------- | :--------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------- | :------------------- |
| **component** | The component to display inside of the modal. | <code>Function , HTMLElement , null , string</code> | |
| **componentProps** | The data to pass to the modal component. | <code>undefined , { [key: string]: any; }</code> | |
| **css-class** | Additional classes to apply for custom CSS. If multiple classes are provided they should be separated by spaces. | <code>string , string[] , undefined</code> | |
| **has-backdrop** | If `true`, a backdrop will be displayed behind the modal. | <code>boolean</code> | <code>true</code> |
| **interface** | Defines the look of the modal. The card interface should be used for scrollable content in the modal. | <code>"card" , "light"</code> | <code>'light'</code> |
| **is-closable** | If `true`, the modal can be closed with the escape key or the little close button. | <code>boolean</code> | <code>true</code> |
| **modal-width** | Defines the width of the modal body | <code>number</code> | <code>640</code> |
### Events
| Event | Description | Type |
| :---------------------- | :-------------------------------------- | :------------------------------------------- |
| **balModalDidDismiss** | Emitted after the modal has dismissed. | <code>OverlayEventDetail<any></code> |
| **balModalDidPresent** | Emitted after the modal has presented. | <code>void</code> |
| **balModalWillDismiss** | Emitted before the modal has dismissed. | <code>OverlayEventDetail<any></code> |
| **balModalWillPresent** | Emitted before the modal has presented. | <code>void</code> |
### Methods
| Method | Description | Signature |
| :---------------- | :----------------------------------------------------------- | :-------------------------------------------------------------------------------------------------- |
| **close** | Closes the modal. | <code>close() => Promise<void></code> |
| **dismiss** | Closes the presented modal with the modal controller | <code>dismiss(data?: any, role?: string | undefined) => Promise<boolean></code> |
| **onDidDismiss** | Returns a promise that resolves when the modal did dismiss. | <code>onDidDismiss<T = any>() => Promise<OverlayEventDetail<T>></code> |
| **onWillDismiss** | Returns a promise that resolves when the modal will dismiss. | <code>onWillDismiss<T = any>() => Promise<OverlayEventDetail<T>></code> |
| **open** | Opens the modal. | <code>open() => Promise<void></code> |
| **present** | Presents the modal through the modal controller | <code>present() => Promise<void></code> |
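A minimal sketch of driving the element directly from script, using only the properties, methods and events listed above. The inner component tag and the props passed to it are placeholder assumptions; real projects would normally go through the Angular or Vue integrations linked in the Code section.

```typescript
async function showExampleModal(): Promise<void> {
  // The design-system typings would normally give this element a concrete type.
  const modal = document.createElement('bal-modal') as any

  modal.component = 'my-modal-content'        // tag of the inner component (assumption)
  modal.componentProps = { title: 'Hello' }   // data handed to that component
  modal.modalWidth = 480
  modal.isClosable = true

  modal.addEventListener('balModalDidDismiss', (event: Event) => {
    console.log('dismissed with', (event as CustomEvent).detail)
  })

  document.body.appendChild(modal)
  await modal.open() // opens the modal; close() or dismiss() ends it
}
```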
## Testing
The Baloise Design System provides a collection of custom cypress commands for our components. Moreover, some basic cypress commands like `should` or `click` have been overriden to work with our components.
- [More information about the installation and usage](/components/tooling/testing.html)
<!-- START: human documentation testing -->
```typescript
import { dataTestSelector } from '@baloise/design-system-testing'
describe('Modal', () => {
const modal = dataTestSelector('my-modal') // [data-test-id="my-modal"]
const openModalButton = dataTestSelector('my-open-modal')
const closeModalButton = dataTestSelector('my-close-modal')
it('should ...', () => {
cy.get(openModalButton).click()
cy.get(modal).balModalIsOpen()
cy.get(modal)
.find('bal-modal-header')
.contains('Modal Title')
})
})
```
<!-- END: human documentation testing -->
### Custom Commands
A list of the custom commands for this specific component.
| Command | Description | Signature |
| :------------------- | :----------------------------- | :----------------------------------------- |
| **balModalIsOpen** | Assert if the modal is open. | <code>(): Chainable<JQuery></code> |
| **balModalIsClosed** | Assert if the modal is closed. | <code>(): Chainable<JQuery></code> |
## Usage
<!-- START: human documentation usage -->
<!-- END: human documentation usage -->
## Edit this page on Github
* [Documentation on Github](https://github.com/baloise/design-system/blob/master/docs/src/components/components/bal-modal.md)
* [Implementation on Github](https://github.com/baloise/design-system/blob/master/packages/components/src/components/bal-modal)
* [Cypress commands on Github](https://github.com/baloise/design-system/blob/master/packages/testing/src/commands)
## Feedback
Help us improve this component by providing feedback, asking questions, and leaving any other comments on [GitHub](https://github.com/baloise/design-system/issues/new).
<ClientOnly>
<docs-component-script tag="balModal"></docs-component-script>
</ClientOnly>
| 53.992701 | 214 | 0.511964 | eng_Latn | 0.624471 |
fd9fe7203c3f630fdc7515bcd8ab0210b1a62ca5 | 2,192 | md | Markdown | iis/extensions/transform-manager/scheduler-setlogwriter-method-microsoft-web-media-transformmanager.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iis/extensions/transform-manager/scheduler-setlogwriter-method-microsoft-web-media-transformmanager.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iis/extensions/transform-manager/scheduler-setlogwriter-method-microsoft-web-media-transformmanager.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Scheduler.SetLogWriter Method (Microsoft.Web.Media.TransformManager)
TOCTitle: SetLogWriter Method
ms:assetid: M:Microsoft.Web.Media.TransformManager.Scheduler.SetLogWriter(Microsoft.Web.Media.TransformManager.Logger)
ms:mtpsurl: https://msdn.microsoft.com/en-us/library/microsoft.web.media.transformmanager.scheduler.setlogwriter(v=VS.90)
ms:contentKeyID: 35520806
ms.date: 06/14/2012
mtps_version: v=VS.90
f1_keywords:
- Microsoft.Web.Media.TransformManager.Scheduler.SetLogWriter
dev_langs:
- csharp
- jscript
- vb
- FSharp
- cpp
api_location:
- Microsoft.Web.Media.TransformManager.Common.dll
api_name:
- Microsoft.Web.Media.TransformManager.Scheduler.SetLogWriter
api_type:
- Assembly
topic_type:
- apiref
product_family_name: VS
---
# SetLogWriter Method
Sets the [Logger](logger-class-microsoft-web-media-transformmanager.md) object for the scheduler.
**Namespace:** [Microsoft.Web.Media.TransformManager](microsoft-web-media-transformmanager-namespace.md)
**Assembly:** Microsoft.Web.Media.TransformManager.Common (in Microsoft.Web.Media.TransformManager.Common.dll)
## Syntax
```vb
'Declaration
Public MustOverride Sub SetLogWriter ( _
logger As Logger _
)
'Usage
Dim instance As Scheduler
Dim logger As Logger
instance.SetLogWriter(logger)
```
```csharp
public abstract void SetLogWriter(
Logger logger
)
```
```cpp
public:
virtual void SetLogWriter(
Logger^ logger
) abstract
```
``` fsharp
abstract SetLogWriter :
logger:Logger -> unit
```
```jscript
public abstract function SetLogWriter(
logger : Logger
)
```
### Parameters
- logger
Type: [Microsoft.Web.Media.TransformManager.Logger](logger-class-microsoft-web-media-transformmanager.md)
The [Logger](logger-class-microsoft-web-media-transformmanager.md) object that provides access to task definition and job template properties.
## Remarks
This method enables either the job manager or task engine to collect logging information.
## See Also
### Reference
[Scheduler Class](scheduler-class-microsoft-web-media-transformmanager.md)
[Microsoft.Web.Media.TransformManager Namespace](microsoft-web-media-transformmanager-namespace.md)
| 23.569892 | 148 | 0.772354 | yue_Hant | 0.558493 |
fda073148d136cd06c55b6392e82695207439bf5 | 27 | md | Markdown | README.md | TanyaBochkareva/Exam-questions-randomizer | 502fecba4023d4d54c8e71d752a9cfb3094cda39 | [
"MIT"
] | null | null | null | README.md | TanyaBochkareva/Exam-questions-randomizer | 502fecba4023d4d54c8e71d752a9cfb3094cda39 | [
"MIT"
] | null | null | null | README.md | TanyaBochkareva/Exam-questions-randomizer | 502fecba4023d4d54c8e71d752a9cfb3094cda39 | [
"MIT"
] | null | null | null | # Exam-questions-randomizer | 27 | 27 | 0.851852 | por_Latn | 0.365149 |
fda0bd8df3e8249bae50ee25422ab7c4fdcaf6eb | 869 | md | Markdown | windows.system/launcher_findfilehandlersasync_1424509601.md | embender/winrt-api | c3d1c5e6000fa7b06ed691e0bb48386f54c488c5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-05-18T02:55:41.000Z | 2021-05-18T02:55:41.000Z | windows.system/launcher_findfilehandlersasync_1424509601.md | embender/winrt-api | c3d1c5e6000fa7b06ed691e0bb48386f54c488c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows.system/launcher_findfilehandlersasync_1424509601.md | embender/winrt-api | c3d1c5e6000fa7b06ed691e0bb48386f54c488c5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-12T22:14:59.000Z | 2022-03-12T22:14:59.000Z | ---
-api-id: M:Windows.System.Launcher.FindFileHandlersAsync(System.String)
-api-type: winrt method
-api-device-family-note: xbox
---
<!-- Method syntax
public Windows.Foundation.IAsyncOperation<Windows.Foundation.Collections.IVectorView<Windows.ApplicationModel.AppInfo>> FindFileHandlersAsync(System.String extension)
-->
# Windows.System.Launcher.FindFileHandlersAsync
## -description
Enumerate the file handlers on the device.
## -parameters
### -param extension
The file extension that you want to find handlers for. For example, ".bat". Include the leading period '.'.
## -returns
A list of [AppInfo](../windows.applicationmodel/appinfo.md) s for each application that handles the specified file extension.
## -remarks
This API may also be called from Windows desktop application but does not return Windows desktop application.
## -examples
## -see-also
| 29.965517 | 166 | 0.777906 | eng_Latn | 0.776225 |
fda117ac598f5efed4b1fa14587103da5ab27d8e | 5,824 | md | Markdown | docs/layouts.md | screenspan/11ty-website | 764a7a56cbacf128430912b407fcfa7fae3a4528 | [
"MIT"
] | 1 | 2020-01-22T22:40:20.000Z | 2020-01-22T22:40:20.000Z | docs/layouts.md | screenspan/11ty-website | 764a7a56cbacf128430912b407fcfa7fae3a4528 | [
"MIT"
] | null | null | null | docs/layouts.md | screenspan/11ty-website | 764a7a56cbacf128430912b407fcfa7fae3a4528 | [
"MIT"
] | null | null | null | ---
eleventyNavigation:
parent: Working with Templates
key: Layouts
order: 1
excerpt: Wrap content in other content.
---
# Layouts
Eleventy Layouts are special templates that can be used to wrap other content. To denote that a piece of content should be wrapped in a template, simply use the `layout` key in your front matter, like so:
{% codetitle "content-using-layout.md" %}
{% raw %}
```markdown
---
layout: mylayout.njk
title: My Rad Markdown Blog Post
---
# {{ title }}
```
{% endraw %}
This will look for a `mylayout.njk` Nunjucks file in your _includes folder_ (`_includes/mylayout.njk`). Note that you can have a [separate folder for Eleventy layouts](/docs/config/#directory-for-layouts-(optional)) if you’d prefer that to having them live in your _includes folder._
You can use any template language in your layout—it doesn’t need to match the template language of the content. An `ejs` template can use a `njk` layout, for example.
{% callout "info" %}If you omit the file extension (for example <code>layout: mylayout</code>), Eleventy will cycle through all of the supported template formats (<code>mylayout.*</code>) to look for a matching layout file.{% endcallout %}
Next, we need to create a `mylayout.njk` file. It can contain any type of text, but here we’re using HTML:
{% codetitle "_includes/mylayout.njk" %}
{% raw %}
```html
---
title: My Rad Blog
---
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{ title }}</title>
</head>
<body>
{{ content | safe }}
</body>
</html>
```
{% endraw %}
Note that the layout template will populate the `content` data with the child template’s content. Also note that we don’t want to double-escape the output, so we’re using the provided Nunjuck’s `safe` filter here (see more language double-escaping syntax below).
{% callout "info" %}Layouts can contain their own front matter data! It’ll be merged with the content’s data on render. Content data takes precedence, if conflicting keys arise. Read more about <a href="/docs/data-cascade/">how Eleventy merges data in what we call the Data Cascade</a>.{% endcallout %}
All of this will output the following HTML content:
{% codetitle "_site/content-using-layout/index.html" %}
```
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My Rad Markdown Blog Post</title>
</head>
<body>
<h1>My Rad Markdown Blog Post</h1>
</body>
</html>
```
## Front Matter Data in Layouts
Take note that in [Eleventy’s Data Cascade](/docs/data/), front matter data in your template is merged with Layout front matter data! All data is merged ahead of time so that you can mix and match variables in your content and layout templates interchangeably.
Note that front matter data set in a content template takes priority over layout front matter! [Chained layouts](/docs/layout-chaining/) have similar merge behavior. The closer to the content, the higher priority the data.
### Sources of Data
{% include "datasources.md" %}
## Layouts in a Subdirectory {% addedin "0.2.7" %}
Layouts can be a full path inside of the _includes folder_, like so:
```markdown
---
layout: layouts/base.njk
---
```
This will look for `_includes/layouts/base.njk`.
## Layout Aliasing {% addedin "0.2.8" %}
Configuration API: use `eleventyConfig.addLayoutAlias(from, to)` to add layout aliases! Say you have a bunch of existing content using `layout: post`. If you don’t want to rewrite all of those values, just map `post` to a new file like this:
{% codetitle ".eleventy.js" %}
```js
module.exports = function(eleventyConfig) {
eleventyConfig.addLayoutAlias('post', 'layouts/post.njk');
};
```
## Prevent double-escaping in layouts
{% raw %}
| Template Language | Unescaped Content (for layout content) | Comparison with an Escaped Output | Docs |
| ----------------- | ------------------------------------------------------ | --------------------------------- | ------------------------------------------------------------------------------------ |
| Nunjucks | `{{ content | safe }}` | `{{ value }}` | [Docs](https://mozilla.github.io/nunjucks/templating.html#safe) |
| EJS | `<%- content %>` | `<%= value %>` | [Docs](https://www.npmjs.com/package/ejs#tags) |
| Handlebars | `{{{ content }}}` (triple stash) | `{{ value }}` (double stash) | [Docs](https://handlebarsjs.com/#html-escaping) |
| Mustache | `{{{ content }}}` (triple stash) | `{{ value }}` (double stash) | [Docs](https://github.com/janl/mustache.js#variables) |
| Liquid | is by default unescaped so you can use `{{ content }}` | `{{ value | escape}}` | [Docs](http://shopify.github.io/liquid/filters/escape/) |
| HAML | `! #{ content }` | `= #{ content }` | [Docs](http://haml.info/docs/yardoc/file.REFERENCE.html#unescaping_html) |
| Pug | `!{content}` | `#{value}` | [Docs](https://pugjs.org/language/interpolation.html#string-interpolation-unescaped) |
{% endraw %}
## Layout Chaining
Chaining multiple layouts together. [Read more about Layout Chaining](/docs/layout-chaining/). | 46.967742 | 302 | 0.601305 | eng_Latn | 0.953406 |
fda1c9f0a356911df3d231001b49843794298944 | 18 | md | Markdown | README.md | pollytam/newbie | 8a9ed42c845a6cb9d6826e0a98fa62c161dda2af | [
"MIT"
] | null | null | null | README.md | pollytam/newbie | 8a9ed42c845a6cb9d6826e0a98fa62c161dda2af | [
"MIT"
] | null | null | null | README.md | pollytam/newbie | 8a9ed42c845a6cb9d6826e0a98fa62c161dda2af | [
"MIT"
] | null | null | null | # newbie
Web HTML
| 6 | 8 | 0.722222 | kor_Hang | 0.715823 |
fda1e764a4c817849cab2c86a73aee378079fa88 | 5,158 | markdown | Markdown | _posts/2015-05-19-make-wiki-on-rails.markdown | Dal4Segno/dal4segno.github.com | 4dfab01202f3a7ed275b99f717d292025a377a99 | [
"MIT"
] | 1 | 2019-09-17T09:07:08.000Z | 2019-09-17T09:07:08.000Z | _posts/2015-05-19-make-wiki-on-rails.markdown | Dal4Segno/dal4segno.github.com | 4dfab01202f3a7ed275b99f717d292025a377a99 | [
"MIT"
] | 1 | 2016-05-13T05:09:36.000Z | 2016-05-13T05:09:36.000Z | _posts/2015-05-19-make-wiki-on-rails.markdown | Dal4Segno/dal4segno.github.com | 4dfab01202f3a7ed275b99f717d292025a377a99 | [
"MIT"
] | null | null | null | ---
layout: post
title: Make Wiki on Rails
date: 2015-05-19 22:41:47
tag:
- wiki
- ror
- rails
- ruby
categories: web-programming
excerpt: Rails를 이용하여 자신만의 위키를 만들어본다
---
# 환경
- Windows 10 TP x64
- Ruby 2.1.5
- Rails 4.1.8
- Bundler
- Git
- Sqlite
- Heroku를 이용해서 배포할 생각이라면 처음부터 **PostgreSQL**을 사용하도록 하자.
- DevKit
Windows와 OS X에서는 [RailsInstaller](http://railsinstaller.org/en)를 통해서 해당 환경을 쉽게 구축할 수 있다.
> 여담으로 레일즈가 필요하지 않아도 RailsInstaller를 통해서 환경을 구축하는 것이 훨씬 편하다.
# Gems
- Rails 4.1.8
- sqlite
- PostgreSQL의 경우에는 pg
- sass-rails
- uglifier
- coffee-rails
- jquery-rails
- turbolinks
- jbuilder
- sdoc
> pg를 사용할 경우, database.yml의 설정들도 변경해야 한다.
위의 목록은 Ruby/Rails IDE인 RubyMine으로 Rails Project 생성시 기본으로 넣어주는 gem들이다. DB나 JS, SCSS 등의 기능을 적용하기 위해 필요하다.
- sorcery
- paper_trail
- diffy
- redcarpet
- albino
- nokogiri
위의 gem들은 아래 항목에서 다시 언급하도록 한다.
> Rails는 Gemfile에 사용할 gem을 명시한 후에 **bundle install** 명령을 통해 해당 gem들을 모두 설치할 수 있다.
# Modeling
크게 사용자와 문서로 이루어진다. 하지만 위키의 특성상, 문서가 사용자에게 귀속되지 않기 때문에 문서에 최종 수정자만 기록하고, 별도의 관계는 만들지 않아도 된다.
위키는 문서 제목이 Primary Key가 되지만 Rails에서는 보통 id 를 통해 검색을 하는데, Rails는 각 Attribute 별로 검색할 수 있는 함수를 만들어 주기 때문에 별도로 함수를 만들 필요는 없다.
# User Authentication
지인들을 위한 폐쇄형 위키이기 때문에, 회원이 아니면 로그인/회원가입을 제외한 모든 페이지 및 문서에 접근을 제한해야 했다.
회원 시스템은 **sorcery** 라는 gem을 이용했다.
- [sorcery GitHub](https://github.com/NoamB/sorcery)
- [Authentication With Sorcery. RailsCast](http://railscasts.com/episodes/283-authentication-with-sorcery)
RailsCast의 문서에 있는 코드는 아직도 잘 작동하기 때문에, 유용하게 사용할 수 있다.
내가 만드는 위키의 경우에는 email이 아닌 이름(닉네임)을 ID로 사용했기 때문에, Parameter나 다른 부분들의 email을 바꿔주어야 했다.
> 이렇게 할 경우, email 정보가 없으므로 비밀번호 찾기 기능은 사용할 수 없다.
attr_accessible는 Deprecate 되었으므로, 지우면 된다.
# History
위키에서 문서의 수정내역 확인 및 비교 기능은 매우 유용하다.
## Versioning
수정내역 등을 확인하기 위해서는 문서 모든 버전(혹은 최근 n개 라도)을 가지고 있어야 한다.
해당 기능은 **paper_trail** 이라는 gem을 이용했다.
- [paper_trail GitHub](https://github.com/airblade/paper_trail)
- [Samurails. Jutsu #8 – Version your models with PaperTrail (Rails 4)](http://samurails.com/gems/papertrail/)
Samurails의 문서가 매우 친절하기 때문에, 별도의 설명은 하지 않는다.
## Diff
각 버전과의 비교는 **diffy**라는 gem을 이용했다.
- [diffy GitHub](https://github.com/samg/diffy)
Windows 환경에서는 diff가 기본 제공 명령어가 아니라 별도의 설치가 필요하지만, RailsInstaller에 포함되어 있는 DevKit에서 제공되므로, RailsInstaller를 활용한 경우에는 별 문제없이 진행할 수 있다.
> DevKit의 bin을 %PATH% 환경 변수에 넣어주어야 한다.
사용법은 위의 Samuarails의 문서에 같이 있으므로 생략한다.
# Grammar
HTML 태그를 직접 사용하는 방법도 있으나, 위키의 사용자가 대부분 비전공자인 것과, CSS 요소를 활용하여 문서의 디자인을 뭉갤 수 있는 위험이 있는 관계로 별도의 문법을 제정하게 되었다.
## Markdown
AtoZ까지 전부 다 만드는 것은 아니고 현재 이 블로그에서 사용하고 있는 Markdown이 굉장히 편리해서 위키에 적용시켜 보았다.
사용한 gem은 **redcarpet**이다.
> 특별한 이유는 없고, Jekyll 기본 설정이어서 변경없이 그대로 사용하고 있다.
- [redcarpet GitHub](https://github.com/vmg/redcarpet)
- [RailsCast. Markdown with Redcarpet](http://railscasts.com/episodes/272-markdown-with-redcarpet)
- [RichOnRails. Rendering Markdown with Redcarpet](http://richonrails.com/articles/rendering-markdown-with-redcarpet)
- [hamcois. Redcarpet for Rails 4.0](http://www.hamcois.com/articles/4)
RailsCast의 문서는 구 버전의 Redcarpet을 이용했으므로, 구조만 참조하도록 한다.
### Customize ###
여러 위키를 돌아다니다 보면 **틀**이 많이 사용되고 있는 것을 볼 수 있는데, 틀은 기본 제공되는 마크다운 문법이 아니기 때문에 redcarpet을 커스텀하여 문법을 추가할 필요가 있었다.
- [StackOverflow. custom markdown in user input](http://stackoverflow.com/questions/14741197/custom-markdown-in-user-input)
추가할 문법이 2개 이상이어서 gsub를 2번 이상 호출해야할 경우에는, **마지막을 제외한 gsub는 gsub!로 호출하도록 한다.**
> gsub만 사용할 경우에는 가장 마지막 문법만 적용되며, gsub!만 사용할 경우에는 별도로 추가한 문법이 사용되지 않은 문서에 대해 nil을 반환하기 때문에 nil을 render하게 되어 오류가 발생한다.
# Asset Pipeline #
위키를 구성하는 기능은 아니지만 Rails에서 여러 Assets(JS, CSS, Image, etc...)들을 관리하는 기능이다.
1. 사전 컴파일(Precompile)
2. 병합(Concatenate)
3. 압축(Minify)
의 과정을 통해 클라이언트가 Asset을 최소한으로 요청하도록 하는 기능이다.
- [RailsGuides. The Asset Pipeline](http://guides.rubyonrails.org/asset_pipeline.html)
- [RORLAB. 초보자를 위한 Asset Pipeline 개념잡기](http://rorlab.org/rblogs/152)
## 설정법 ##
### StyleSheet, JS ###
- /app/assets/javascript/application.js
- /app/assets/stylesheets/application.css
에서 각 파일들을 추가하면된다. 확장자는 붙이지 않아도되며, JS는 //=, StyleSheet는 *= 을 앞에 붙이는 것을 명심하자. 반드시 주석문 안에 작성되어야 한다.
각 파일들은 /app/assets 뿐만 아니라 lib나 vendor 의 assets에 있어도 상관없다.
### 기타 ###
폰트와 같은 자원들은 /config/initializers/assets.rb 에서 추가해야한다.
> Rails.application.config.assets.paths << Rails.root.join("fonts")
> Rails.application.config.assets.precompile += %w( *.eot *.woff *.svg *.ttf )
assets의 경로에 "fonts" 를 추가한다. StyleSheet, JS와 같이 app, lib, vendor 중 어디의 assets에 있는 fonts 여도 상관없다.
그리고 해당 확장자를 가진 파일들을 precompile 하도록 설정한다.
> 설정 후에는 rake assets:precompile 을 실행해야 한다.
# Design #
Bootstrap을 사용하여 어렵지 않게 있어보이는 레이아웃을 만들어낼 수 있다.
- [Bootstrap Korean](http://bootstrapk.com/)
# Deploy Using Heroku #
별도의 포스트에서 설명하도록 한다.
- [Dal4segno's Whatnot. Rails on Heroku](http://dal4segno.github.io/webprogramming/web-programming/2015/05/21/rails-on-heroku.html)
# 후기 #
크게 활용도가 높은 서비스도 아니고, 폐쇄형이라 남들에게 자랑도 하기 힘든 그런 사이트이지만, 배운 기술로 뭔가 뚝딱뚝딱 만들어내니 재밌고 보람찬 시간이었습니다. 이걸 계기로 더 많은 Rails 앱을 만들게 될 것 같네요.
| 29.141243 | 133 | 0.696588 | kor_Hang | 1.00001 |
fda2c85d526f8b24056d2b0809dbf78de574e270 | 704 | md | Markdown | _posts/2022-03-19-gcp7.md | hayleyshim/hayleyshim.github.io | a1449ac1cb7aa669fc54e47e985c4f01a81663e7 | [
"MIT"
] | null | null | null | _posts/2022-03-19-gcp7.md | hayleyshim/hayleyshim.github.io | a1449ac1cb7aa669fc54e47e985c4f01a81663e7 | [
"MIT"
] | null | null | null | _posts/2022-03-19-gcp7.md | hayleyshim/hayleyshim.github.io | a1449ac1cb7aa669fc54e47e985c4f01a81663e7 | [
"MIT"
] | null | null | null | ---
title: 실습편 - 권한 관련(2 tier architecture in GCP)
tag : [cloud]
author: hayley
---
<font size="5" color="purple"><b>2 tire architecture in GCP</b></font>
<p> 지난 <a href="https://hayleyshim.github.io/blog/gcp6">LB 설정</a>편에 이어 고려할 부분을 알아보자
<br>
<br>
<p><b>1. 권한 관련
<p>GCP 서비스 접근 권한에 있어 IAM, Role, Permissions에 대해 추가적으로 알아보자
<br>
<br>
<a href="https://www.buymeacoffee.com/yhshim17" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>
| 44 | 364 | 0.690341 | kor_Hang | 0.928806 |
fda32b45fb8e66b51ce4a169b547eb09da781504 | 1,325 | md | Markdown | README.md | tml104/nowcoder_browser | 328f506a7709fbef59a3a0d5bd0a5b8534044107 | [
"MIT"
] | 2 | 2021-07-20T07:54:14.000Z | 2021-07-20T08:15:17.000Z | README.md | tml104/nowcoder_browser | 328f506a7709fbef59a3a0d5bd0a5b8534044107 | [
"MIT"
] | null | null | null | README.md | tml104/nowcoder_browser | 328f506a7709fbef59a3a0d5bd0a5b8534044107 | [
"MIT"
] | null | null | null | # nowcoder_browser
牛客竞赛题面批量截图工具
本来是想写来方便队友读题的,因为牛客不支持全部看题也不支持导出题面PDF,但后来发现没什么用~~(毕竟能团队报名)~~。
## 依赖
+ Python 3.7:别太老就行
+ [Selenium](https://www.selenium.dev/):一个Python的Webdriver库。使用`pip install selenium`安装。如果你的Python包管理使用的是conda那么应该已经自带了。
+ Google Chrome 91.0.4472.164 :别太老就行
+ [Chrome Driver][https://sites.google.com/chromium.org/driver/] :版本要与Chrome一致
## 使用方法(on Windows)
1. 使用git clone将本项目拉到本地:`git clone git@github.com:tml104/nowcoder_browser.git`。
2. 下载Chrome Driver,然后将其解压到一个目录下,然后取得chromedriver.exe的绝对路径(例如:`D:\chromedriver_win32\chromedriver.exe`)。注意路径中不能有中文字符。
3. 打开settings.cfg。将Chromedrive_path右侧的值改成第2步中的路径。
4. 修改settings.cfg:
1. Contest_id :比赛的id号。也就是比赛主界面的url中的最后5位数字(目前是5位数字)
2. Min_problem_char :开始截图的题目对应字母。
3. Max_problem_char : 结束截图的题目对应字母。
4. Width 、Height:浏览器的视口大小。这会影响到截图的宽度和高度。默认是1920*3000. 尽量只修改Height。如果存在题目太长的情况或者截图不清晰,请尝试增加/减少Height的数值。
5. Out_path:截图的输出路径,默认输出到./res下。
5. 在当前目录下新建Cookie.txt。
6. 获取Cookie。之所以要做这一步是因为某些比赛必须要报名了才能查看,因此需要登录帐号:
1. 在浏览器上登录牛客帐号。
2. 打开任意一个比赛主界面,然后打开开发者工具(F12)
3. 切换到Network选项卡,确保左边有个红色圆点亮着,然后刷新当前比赛页面(F5)。
4. 找到一个和当前比赛id相同的包。点一下,然后在右边的Headers中找到cookie,右键点击copy value。

5. 粘贴到Cookie.txt中。
7. 在当前目录下运行nowcoder_browser.py。:`python nowcoder_browser.py`
现在应该能在res文件夹中看到截图结果了。效果如图(当然截出来的图得放大了看):
 | 33.125 | 119 | 0.779623 | yue_Hant | 0.584333 |
fda359b00d98168a62131fc4c5b8e7be4e2c94ed | 38 | md | Markdown | README.md | iampaulanca/PSimpleOAuth | 5ffe582dce89a794a0af4a4d87780ccf3b114420 | [
"MIT"
] | null | null | null | README.md | iampaulanca/PSimpleOAuth | 5ffe582dce89a794a0af4a4d87780ccf3b114420 | [
"MIT"
] | null | null | null | README.md | iampaulanca/PSimpleOAuth | 5ffe582dce89a794a0af4a4d87780ccf3b114420 | [
"MIT"
] | null | null | null | # PSimpleOAuth
Simple OAuth with PKCE
| 12.666667 | 22 | 0.815789 | eng_Latn | 0.765729 |
fda3c02adfd1f896101f07cceabe6757755d350e | 217 | md | Markdown | README.md | qincenlon/mythinkphp | 6b5f3a502650173b4690a03355d58f54a8429a8b | [
"Apache-2.0"
] | null | null | null | README.md | qincenlon/mythinkphp | 6b5f3a502650173b4690a03355d58f54a8429a8b | [
"Apache-2.0"
] | null | null | null | README.md | qincenlon/mythinkphp | 6b5f3a502650173b4690a03355d58f54a8429a8b | [
"Apache-2.0"
] | null | null | null | ### thinkphp5版本框架阅读注解
本人实在无聊,没得妹子,没得工作,就研究框架度日啦,喜欢的朋友可以forking一份,贡献自己的
一份力量提个pr呗:smile:
- [thinkphp web应用流程阅读注解](document/web.md)
- [thinkphp 控制器的运行说明注解](document/controller.md)
- [thinkphp DB说明注解](document/db.md) | 36.166667 | 50 | 0.78341 | yue_Hant | 0.701732 |
fda4faf585bf2a7af4f87dbaa3731d133a7ae3f7 | 430 | md | Markdown | content/en/registry/instrumentation-python-system-metrics.md | jangaraj/opentelemetry.io | a32822c8300640f7e750344cc09530db017e24b4 | [
"Apache-2.0",
"CC-BY-4.0"
] | 230 | 2019-05-20T20:48:11.000Z | 2022-03-30T14:15:06.000Z | content/en/registry/instrumentation-python-system-metrics.md | jangaraj/opentelemetry.io | a32822c8300640f7e750344cc09530db017e24b4 | [
"Apache-2.0",
"CC-BY-4.0"
] | 733 | 2019-05-19T11:32:48.000Z | 2022-03-31T18:20:16.000Z | content/en/registry/instrumentation-python-system-metrics.md | jangaraj/opentelemetry.io | a32822c8300640f7e750344cc09530db017e24b4 | [
"Apache-2.0",
"CC-BY-4.0"
] | 344 | 2019-05-20T12:01:15.000Z | 2022-03-31T21:04:03.000Z | ---
title: System Metrics Instrumentation
registryType: instrumentation
isThirdParty: false
language: python
tags:
- python
- instrumentation
repo: https://github.com/open-telemetry/opentelemetry-python-contrib/tree/metrics/instrumentation/opentelemetry-instrumentation-system-metrics
license: Apache 2.0
description: Instrumentation to collect system performance metrics.
authors: OpenTelemetry Authors
otVersion: latest
---
| 28.666667 | 142 | 0.827907 | eng_Latn | 0.392712 |
fda5efb7f28b65bdc6604e3d6579fba9482eed34 | 18 | md | Markdown | README.md | ojengwa/s2-geometry | 5dc06272423b8525f0deb94edf4985b217345011 | [
"0BSD"
] | null | null | null | README.md | ojengwa/s2-geometry | 5dc06272423b8525f0deb94edf4985b217345011 | [
"0BSD"
] | null | null | null | README.md | ojengwa/s2-geometry | 5dc06272423b8525f0deb94edf4985b217345011 | [
"0BSD"
] | null | null | null | s2-geometry
=====
| 6 | 11 | 0.555556 | kor_Hang | 0.598572 |
fda65569bd96fd4c546e5833d249b9f26ec38201 | 1,809 | md | Markdown | Support/using-the-help-desk-web-ui.md | danielgainer/PublicKB | 3b94c291cb7680e96f8497620568123e81de50e0 | [
"Apache-2.0"
] | null | null | null | Support/using-the-help-desk-web-ui.md | danielgainer/PublicKB | 3b94c291cb7680e96f8497620568123e81de50e0 | [
"Apache-2.0"
] | null | null | null | Support/using-the-help-desk-web-ui.md | danielgainer/PublicKB | 3b94c291cb7680e96f8497620568123e81de50e0 | [
"Apache-2.0"
] | null | null | null | {{{
"title": "Using the Help Desk Web UI",
"date": "11-12-2014",
"author": "Bryan Dreyer",
"attachments": [],
"contentIsHTML": true
}}}
<h3>Description (goal/purpose)</h3>
<p>When using the Help Desk Web UI, there are several things you can check on and a few tasks that can be accomplished.</p>
<h3>Audience</h3>
<ul>
<li>Users of the platform</li>
</ul>
<h3>Prerequisites</h3>
<ul>
<li>Must have a username and password for https://support.ctl.io (credentials are unique and seperate from the control site)</li>
<ul>
<li>Note that the first time you submit a ticket via email, you will get a <em>Welcome</em> response asking you to register. If you don't register you can use the forgot password workflow to reset your password via email.</li>
</ul>
</ul>
<h3>Additional Resources</h3>
<p><a href="https://www.ctl.io/knowledge-base/support/ticket-prioritization-matrix/">Ticket Prioritization Matrix</a>
</p>
<p><a href="https://www.ctl.io/knowledge-base/support/how-do-i-escalate-a-ticket/">How To Escalate a Ticket</a>
</p>
<h3>Additional Information</h3>
<p>Visit https://support.ctl.io and login using your help desk credentials.</p>
<p>From the menu at the upper right you can select to submit a request or check your existing by clicking on your user profile and selecting "My activities".</p>
<p><em>Checking on existing requests</em>
</p>
<p>From the "check your existing requests" section, click on the ticket you wish to see more information about.</p>
<p>From here, you can see the ticket details</p>
<ul>
<li>Which engineer has been assigned your request</li>
<li>The current priority of your request</li>
<li>Any comments that have been made on your request</li>
<li>You can also add a comment or mark the ticket as resolved</li>
</ul>
<h3> </h3>
| 43.071429 | 227 | 0.713654 | eng_Latn | 0.981976 |
fda6b2dd1be56d50dc14e5c98faafc5024dd4100 | 1,253 | md | Markdown | _posts/2018-11-25-cork-wallet-by-corkor.md | kymsze/nadaar | 4c55af4b97b9a91781945b4cb714ec79720b7255 | [
"MIT"
] | null | null | null | _posts/2018-11-25-cork-wallet-by-corkor.md | kymsze/nadaar | 4c55af4b97b9a91781945b4cb714ec79720b7255 | [
"MIT"
] | null | null | null | _posts/2018-11-25-cork-wallet-by-corkor.md | kymsze/nadaar | 4c55af4b97b9a91781945b4cb714ec79720b7255 | [
"MIT"
] | null | null | null | ---
title: 'Cork Wallet by Corkor '
layout: blocks
date: 2018-11-25 14:18:32 +0000
description: |-
* PASSPORT WALLET - Compartment for passport and boarding pass + 2 notes and bills slots + 4 cards slots
* SLIM DESIGN - For frequent travelers
* MADE BY PORTUGUESE ARTISANS IN PORTUGAL - Ensure only the best product quality, great manufacturing quality, sturdy, durable natural material also long lasting wallet.
* VEGAN MEN's WALLET - PETA VEGAN APPROVED - No animal products used - Great gift for vegan men or woman - Unique vegan gift
* MADE FROM CORK - Sustainable alternative to animal leather - Eco Friendly cork wallet for Men or Women
price: "££"
thumbnail: https://images-na.ssl-images-amazon.com/images/I/71nzzpbOrJL._SL1001_.jpg
link: https://amzn.to/2AjL4VH
issue-tag:
- vegan
- 'eco friendly materials '
- natural
category_tag:
- gift
- featured
page_sections:
- template: simple-header
block: header-3
logo: "/uploads/2018/11/11/logo.png"
- template: content-feature
block: feature-1
media_alignment: Right
- template: signup-bar
block: cta-bar
content: Sign up to get updates from nadaar
email_recipient: kimszelong@gmail.com
- template: simple-footer
block: footer-1
logo: "/uploads/2018/11/11/logo.png"
---
| 33.864865 | 171 | 0.746209 | eng_Latn | 0.545553 |
fda71952c944617c6e967933516c1b0a7a076e52 | 2,035 | md | Markdown | source/docs/documentation/Foundry/vms_console_user_guide/Content/Segments/Defining_Condition_Logic_for_Segment_Definition.md | kumari-h/volt-mx-docs | 7b361299d49abedd1162cbb1640bad0cd04d3140 | [
"Apache-2.0"
] | null | null | null | source/docs/documentation/Foundry/vms_console_user_guide/Content/Segments/Defining_Condition_Logic_for_Segment_Definition.md | kumari-h/volt-mx-docs | 7b361299d49abedd1162cbb1640bad0cd04d3140 | [
"Apache-2.0"
] | null | null | null | source/docs/documentation/Foundry/vms_console_user_guide/Content/Segments/Defining_Condition_Logic_for_Segment_Definition.md | kumari-h/volt-mx-docs | 7b361299d49abedd1162cbb1640bad0cd04d3140 | [
"Apache-2.0"
] | null | null | null | ---
layout: "documentation"
category: "vms_console_user_guide"
---
Defining Condition Logic for Segment Definition
===============================================
A segment definition is a combined condition derived from added audience attributes.
Follow the order of operation to calculate the conditional logic for Segment definition:
1. Do things in Brackets First.
2. Otherwise just go left to right.
To define the conditional logic, follow these steps:
1. In the **Add Segment** screen, navigate to the **Add Audience Member to Segment section**.
2. Click one of the condition numbers from the **Condition No.** column that you want to use it for segment definition. The system inserts the condition number for the audience member attribute in the **Segment Definition** area.
3. Click the logic button to apply the conditional logic. The system inserts the logic button next to the condition number in the **Segment Definition** area. You can click any one of the six logic buttons as required, for example, the "AND", "OR", "(", ")", or "Delete"
> **_Note:_** Repeat the step 2 through step 3 to complete your conditional logic if required.
> **_Important:_** You can delete any elements inserted in the Segment Definition window by clicking the **Delete** logic button. The system deletes the elements from right to left order.
For example,

4. Click the **Validate** button. The system validates whether the conditional logic follows the BODMAS (Brackets Of Division Multiplication Addition Subtraction) rule of selection, and displays the confirmation message if the logic is correct. It states that the validation is successful.
> **_Note:_** If conditional logic is not correctly used and when you trying to click the **Validation** button, the system displays an Error notification.

| 53.552632 | 290 | 0.71941 | eng_Latn | 0.991917 |
fda72d20e015dad890c26a141bf099e1150864cd | 3,116 | md | Markdown | components/LabelSelect/README.md | AunMSheikh/react-native-ui-xg | 7087346643e91e2f413e772a835f8b45d7fa6029 | [
"MIT"
] | 161 | 2017-02-15T13:59:46.000Z | 2021-12-29T07:34:15.000Z | components/LabelSelect/README.md | AunMSheikh/react-native-ui-xg | 7087346643e91e2f413e772a835f8b45d7fa6029 | [
"MIT"
] | 4 | 2017-06-02T11:04:49.000Z | 2019-06-04T09:55:21.000Z | components/LabelSelect/README.md | AunMSheikh/react-native-ui-xg | 7087346643e91e2f413e772a835f8b45d7fa6029 | [
"MIT"
] | 30 | 2017-03-18T16:38:10.000Z | 2022-03-10T03:05:32.000Z | ## react-native-label-select [](https://travis-ci.org/xgfe/react-native-label-select) [](https://coveralls.io/github/Tinysymphony/react-native-label-select?branch=master)
LabelSelect is a component used for making multiple choices. The modal is a checkbox like html.
## Example
<a href="#android" id="android"><img src="http://7xjgb0.com1.z0.glb.clouddn.com/android.gif" align="left" width="240"/></a>
<a href="#ios" id="ios"><img src="http://7xjgb0.com1.z0.glb.clouddn.com/ios.gif" width="240"/></a>
## Usage
```shell
npm install --save react-native-label-select
```
```js
import LabelSelect from 'react-native-label-select';
```
```html
<LabelSelect
ref="labelSelect"
title="Make Choices"
enable={true}
readOnly={false}
enableAddBtn={true}
style={yourStyle}
onConfirm={(list) => {...}}>
<LabelSelect.Label
key={...}
data={itemA}
onCancel={func}>selected ItemA</LabelSelect.Label>
<LabelSelect.ModalItem
key={...}
data={itemB}>Item B</LabelSelect.ModalItem>
</LabelSelect>
```
## Properties
**LabelSelect**
| Prop | Default | Type | Description |
| --- | --- | --- | --- |
| style | - | object | Specify styles for the LabelSelect |
| title | - | string | The title text of the modal |
| readOnly | false | bool | is the component readonly |
| enable | true | bool | is the component interactive |
| enableAddBtn | true | bool | whether to show the add button |
| onConfirm | - | function | Triggered when the confirm button of modal is pressed with the newly selected items list passed as the only argument |
| confirmText | - | string | Text of confirm button. |
| cancelText | - | string | Text of cancelText button. |
| customStyle | - | object | You can customize styles of `modal` / `buttonView` / `confirmButton` / `confirmText` / `cancelButton` / `cancelText` by `customStyle. |
**LabelSelect.Label**
| Prop | Default | Type | Description |
| --- | --- | --- | --- |
| onCancel | - | function | Triggered when the close button of Label is pressed. |
| data | - | any | Data that bind to the Label |
| customStyle | - | object | You can customize styles of `text` by `customStyle. |
**LabelSelect.ModalItem**
| Prop | Default | Type | Description |
| --- | --- | --- | --- |
| data | - | any | Data that bind to the ModalItem. After confirming the items selected on modal, the data will be passed to the selected list. |
## Instance Methods
| Method | Params | Description |
| --- | --- | --- |
| openModal | - | Open select modal |
| cancelSelect | - | Close select modal. Also triggered when the cancel button of modal being pressed. |
| customStyle | - | object | You can customize styles of `modalText` / `outerCircle` / `innerCircle` by `customStyle. |
Use `ref` property as a hook to invoke internal methods.
```html
<LabelSelect ref="select">...</LabelSelect>
```
```js
this.ref.select.openModal()
this.ref.select.cancelSelect()
```
| 33.148936 | 373 | 0.675546 | eng_Latn | 0.785596 |
fda7fc90f8bb6c84e428ddef3647a530e70b6034 | 9,534 | md | Markdown | RELEASES.md | andersonfrailey/Tax-Brain | a754de059b3b882ce7db13ec0926a128812a5eab | [
"MIT"
] | 10 | 2019-05-07T12:04:58.000Z | 2021-05-18T13:14:45.000Z | RELEASES.md | vishalbelsare/Tax-Brain | 2c63597d8132e79edff64c62f481fdf8d5e60149 | [
"MIT"
] | 113 | 2019-03-05T16:13:32.000Z | 2021-11-10T13:53:32.000Z | RELEASES.md | andersonfrailey/Tax-Brain | a754de059b3b882ce7db13ec0926a128812a5eab | [
"MIT"
] | 14 | 2019-01-24T18:39:11.000Z | 2021-03-17T17:00:51.000Z | # Tax-Brain Release History
## 2021-09-25 Release 2.6.0
Last Merged Pull Request: [#186](https://github.com/PSLmodels/Tax-Brain/pull/186)
Changes in this release:
* Fix Compute Studio installation: [#170](https://github.com/PSLmodels/Tax-Brain/pull/170),
[#171](https://github.com/PSLmodels/Tax-Brain/pull/171), [#172](https://github.com/PSLmodels/Tax-Brain/pull/172),
[#174](https://github.com/PSLmodels/Tax-Brain/pull/174)
* Update parallelization techniques for `TaxBrain.run()`: [#175](https://github.com/PSLmodels/Tax-Brain/pull/175)
* Add year parameter validation on Compute Studio: [#177](https://github.com/PSLmodels/Tax-Brain/pull/177)
* Use default version installed via git: [#178](https://github.com/PSLmodels/Tax-Brain/pull/178)
* Update function for retrieving the PUF: [#179](https://github.com/PSLmodels/Tax-Brain/pull/179)
* Add stacked reform capabilities: [#181](https://github.com/PSLmodels/Tax-Brain/pull/181)
## 2021-03-19 Release 2.5.0
Last Merged Pull Request: [#165](https://github.com/PSLmodels/Tax-Brain/pull/165)
Note: This will be the first release available on Conda-Forge and the first to support Python 3.8
Changes in this release:
* Fix Compute Studio random seed generation ([#141](https://github.com/PSLmodels/Tax-Brain/pull/141))
* Add LaTex installer to Compute Studio instructions ([#142](https://github.com/PSLmodels/Tax-Brain/pull/142))
* Skip report creation for Compute Studio runs with no reform ([#148](https://github.com/PSLmodels/Tax-Brain/pull/148))
* Add Volcano Plot ([#149](https://github.com/PSLmodels/Tax-Brain/pull/149))
* Add Lorenz Curve Plot ([#150](https://github.com/PSLmodels/Tax-Brain/pull/150))
* Update automated reports ([#151](https://github.com/PSLmodels/Tax-Brain/pull/151))
* Add an option to include a `Total` column in generated tables ([#154](https://github.com/PSLmodels/Tax-Brain/pull/154))
* Update docstrings ([#159](https://github.com/PSLmodels/Tax-Brain/pull/159))
* Add continuous integration and unit testing through GitHub Actions ([#160](https://github.com/PSLmodels/Tax-Brain/pull/160))
* Add revenue plot ([#165](https://github.com/PSLmodels/Tax-Brain/pull/165))
## 2020-10-10 Release 2.4.0
Last Merged Pull Request: [#135](https://github.com/PSLmodels/Tax-Brain/pull/135)
Changes in this release:
* Automated reports are now produced using just Pandoc ([#125](https://github.com/PSLmodels/Tax-Brain/pull/125), [#135](https://github.com/PSLmodels/Tax-Brain/pull/135))
* The Tax-Brain Compute Studio app now includes automated reports in the
downloadable content ([#76](https://github.com/PSLmodels/Tax-Brain/pull/76))
* Tax-Brain now requires `taxcalc` version 3.0.0 or above and `behresp` 0.11.0 or above ([#128](https://github.com/PSLmodels/Tax-Brain/pull/128))
* All images used in producing automated reports are now made with `matplotlib`, greatly reducing the number of external projects we need to install ([#134](https://github.com/PSLmodels/Tax-Brain/pull/134))
## 2020-07-07 Release 2.3.4
Last Merged Pull Request: [#124](https://github.com/PSLmodels/Tax-Brain/pull/124)
No changes made to the model between release 2.3.3 and 2.3.4. The only changes
were to the conda build instructions.
## 2020-07-06 Release 2.3.3
Last Merged Pull Request [#123](https://github.com/PSLmodels/Tax-Brain/pull/123)
Changes in this release:
* Fixes various Compute Studio bugs (
[#78](https://github.com/PSLmodels/Tax-Brain/pull/78),
[#82](https://github.com/PSLmodels/Tax-Brain/pull/82),
[#83](https://github.com/PSLmodels/Tax-Brain/pull/83)
)
* Update installation requirements (
[#80](https://github.com/PSLmodels/Tax-Brain/pull/80),
[#81](https://github.com/PSLmodels/Tax-Brain/pull/81),
[#84](https://github.com/PSLmodels/Tax-Brain/pull/84),
[#90](https://github.com/PSLmodels/Tax-Brain/pull/90),
[#91](https://github.com/PSLmodels/Tax-Brain/pull/91)
)
* Add Compute Studio Documentation (
[#87](https://github.com/PSLmodels/Tax-Brain/pull/87),
[#92](https://github.com/PSLmodels/Tax-Brain/pull/92)
)
* Compute Studio updates (
[#88](https://github.com/PSLmodels/Tax-Brain/pull/88),
[#89](https://github.com/PSLmodels/Tax-Brain/pull/89),
[#103](https://github.com/PSLmodels/Tax-Brain/pull/103),
[#108](https://github.com/PSLmodels/Tax-Brain/pull/108),
[#109](https://github.com/PSLmodels/Tax-Brain/pull/109),
[#111](https://github.com/PSLmodels/Tax-Brain/pull/111),
[#112](https://github.com/PSLmodels/Tax-Brain/pull/112),
[#118](https://github.com/PSLmodels/Tax-Brain/pull/118),
)
* Fix handling of baseline policy in the core taxbrain app
([#93](https://github.com/PSLmodels/Tax-Brain/pull/93))
* Update core taxbrain app to use Bokeh version 2.0.0 and Tax-Calculator 2.9.0
([#113](https://github.com/PSLmodels/Tax-Brain/pull/113))
* Add benefits totals to aggregate table
([#120](https://github.com/PSLmodels/Tax-Brain/pull/118))
* Update the report feature of the core taxbrain app to only use PNGs for graphs
[#123](https://github.com/PSLmodels/Tax-Brain/pull/123)
## 2019-07-30 Release 2.3.2
Last Merged Pull Request: [#74](https://github.com/PSLmodels/Tax-Brain/pull/74)
No changes made to the model between release 2.3.1 and 2.3.2. The only changes
were to the conda build instructions.
## 2019-07-29 Release 2.3.1
Last Merged Pull Request: [#73](https://github.com/PSLmodels/Tax-Brain/pull/73)
No changes made to the model between release 2.3.0 and 2.3.1. The only changes
were to the conda build instructions.
## 2019-07-24 Release 2.3.0
Last Merged Pull Request: [#72](https://github.com/PSLmodels/Tax-Brain/pull/72)
Changes in this release:
* Refactor the `run()` method and TaxBrain initialization process so that
calculator objects are not created until `run()` is called ([#44](https://github.com/PSLmodels/Tax-Brain/pull/44))
* Modify `metaparams` in the `COMPconfig` ([#54](https://github.com/PSLmodels/Tax-Brain/pull/54))
* Fix various COMP bugs ([#58](https://github.com/PSLmodels/Tax-Brain/pull/58),
[#60](https://github.com/PSLmodels/Tax-Brain/pull/60),
[#63](https://github.com/PSLmodels/Tax-Brain/pull/63),
[#65](https://github.com/PSLmodels/Tax-Brain/pull/65))
* Allow users to specify an alternative policy to use as the baseline, rather
than current law ([#64](https://github.com/PSLmodels/Tax-Brain/pull/64))
* Update COMP table outputs so they are more readable ([#66](https://github.com/PSLmodels/Tax-Brain/pull/66))
* Add TaxBrain command line interface ([#67](https://github.com/PSLmodels/Tax-Brain/pull/67), [#68](https://github.com/PSLmodels/Tax-Brain/pull/68))
* Add automated report capabilities ([#69](https://github.com/PSLmodels/Tax-Brain/pull/69))
## 2019-05-24 Release 2.2.1
Last Merged Pull Request: [#51](https://github.com/PSLmodels/Tax-Brain/pull/51)
Changes in this release:
* Fix bug in COMP outputs that caused the rows in distribution tables to be
flipped ([#51](https://github.com/PSLmodels/Tax-Brain/pull/51)).
* Update Behavioral-Responses package requirements ([#50](https://github.com/PSLmodels/Tax-Brain/pull/50)).
* Change the dynamic reform to run sequentially, rather than in parallel ([#50](https://github.com/PSLmodels/Tax-Brain/pull/50)).
## 2019-05-21 Release 2.2.0
Last Merged Pull Request: [#45](https://github.com/PSLmodels/Tax-Brain/pull/45)
Changes in this release:
* Fix bug in the distribution table ([#33](https://github.com/PSLmodels/Tax-Brain/pull/33)).
* Expand testing ([#34](https://github.com/PSLmodels/Tax-Brain/pull/45)).
* Remove TBI package from distribution ([#38](https://github.com/PSLmodels/Tax-Brain/pull/38))
* Establish `compconfig` directory to handle COMP interactions ([#38](https://github.com/PSLmodels/Tax-Brain/pull/38), [#40](https://github.com/PSLmodels/Tax-Brain/pull/40)).
* Modify the distribution and difference table creation to work with taxcalc 2.2.0 ([#45](https://github.com/PSLmodels/Tax-Brain/pull/45)).
* Add plotting to COMP outputs ([#26](https://github.com/PSLmodels/Tax-Brain/pull/26)).
## 2019-04-01 Release 2.1.2
Last Merged Pull Request: [#31](https://github.com/PSLmodels/Tax-Brain/pull/31)
Changes in this release:
* Patches bugs in the TBI ([#31](https://github.com/PSLmodels/Tax-Brain/pull/31)).
## 2019-03-29 Release 2.1.1
Last Merged Pull Request: [#29](https://github.com/PSLmodels/Tax-Brain/pull/29)
Changes in this release:
* Includes `taxbrain/tbi/behavior_params.json` in the package ([#29](https://github.com/PSLmodels/Tax-Brain/pull/29)).
## 2019-03-29 Release 2.1.0
Last Merged Pull Request: [#28](https://github.com/PSLmodels/Tax-Brain/pull/27)
Changes in this release:
* The TBI has been refactored to use the `TaxBrain` class rather than the
individual components of Tax-Calculator and Behavioral-Responses ([#21](https://github.com/PSLmodels/Tax-Brain/pull/21)).
* The `TaxBrain` class and TBI have been updated to work with newer version of
Tax-Calculator and Behavioral-Responses (>=1.1.0 and >=0.7.0, respectively) ([#25](https://github.com/PSLmodels/Tax-Brain/pull/25)).
* The TBI has been modified to allow a user to use the PUF as an input file ([#27](https://github.com/PSLmodels/Tax-Brain/pull/27)).
## 2019-03-12 Release 2.0.0
Last Merged Pull Request: [#19](https://github.com/PSLmodels/Tax-Brain/pull/19)
This is the first release of the Tax-Brain package. We are starting with version
2.0.0 because this package is effectively the second coming of the original
Tax-Brain - a web interface for the Tax-Calculator model. Accordingly, much
of the code has been copied directly from the original Tax-Brain.
| 49.14433 | 206 | 0.735368 | eng_Latn | 0.322529 |
fda832651b001c7f3dcff801cbe48fcf29124590 | 3,430 | md | Markdown | articles/machine-learning/studio/gallery-industries.md | marcduiker/azure-docs.nl-nl | 747ce1fb22d13d1e7c351e367c87810dd9eafa08 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/studio/gallery-industries.md | marcduiker/azure-docs.nl-nl | 747ce1fb22d13d1e7c351e367c87810dd9eafa08 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/studio/gallery-industries.md | marcduiker/azure-docs.nl-nl | 747ce1fb22d13d1e7c351e367c87810dd9eafa08 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Cortana Intelligence Gallery branchespecifieke oplossingen | Microsoft Docs
description: -Oplossingen in de Cortana Intelligence Gallery detecteren.
services: machine-learning
documentationcenter:
author: garyericson
manager: jhubbard
editor: cgronlun
ms.assetid: fd2ecf9a-ff76-4a0f-8d68-7f762249648c
ms.service: machine-learning
ms.workload: data-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 03/31/2017
ms.author: roopalik;garye
ms.openlocfilehash: 0dec0f47eced45c496399bc6b84169116dcc551d
ms.sourcegitcommit: 6699c77dcbd5f8a1a2f21fba3d0a0005ac9ed6b7
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 10/11/2017
---
# <a name="discover-industry-specific-solutions-in-the-cortana-intelligence-gallery"></a>Branchespecifieke oplossingen in de Cortana Intelligence Gallery detecteren
[!INCLUDE [machine-learning-gallery-item-selector](../../../includes/machine-learning-gallery-item-selector.md)]
## <a name="industry-specific-cortana-intelligence-solutions"></a>Branchespecifieke Cortana Intelligence oplossingen
De **[branches](https://gallery.cortanaintelligence.com/industries)** sectie van de galerie samenbrengt verschillende bronnen die specifiek voor sectoren, zoals zijn
* [Detailhandel](https://gallery.cortanaintelligence-int.com/industries/retail) -retail-oplossingen zoals verkopen prognose, voorspellen van de klant verloop en ontwikkelen prijsmodellen zoeken.
* [Productie](https://gallery.cortanaintelligence-int.com/industries/manufacturing) - zoeken naar productie-oplossingen zoals anticiperen op apparatuur onderhoud en prognose energieprijzen.
* [Banking](https://gallery.cortanaintelligence-int.com/industries/banking) - zoeken naar oplossingen zoals het voorspellen van tegoed banking risico en bewaking voor on line fraude.
* [Gezondheidszorg](https://gallery.cortanaintelligence-int.com/industries/healthcare) -gezondheidszorg oplossingen zoals ziekte detecteren en ziekenhuis readmissions voorspellen.
Deze bronnen omvatten experimenten, aangepaste modules, API's, verzamelingen en eventuele andere galerie-items die kunnen helpen bij het ontwikkelen van oplossingen die specifiek zijn voor de branche waarmee u werkt.
## <a name="discover"></a>Ontdekken
Als u wilt de branchespecifieke oplossingen in de galerie bekijken, opent u de [galerie](http://gallery.cortanaintelligence.com), de muis op **branches** aan de bovenkant van de startpagina van de galerie met een specifieke bedrijfstak-segment selecteren of selecteer **Alles weergeven** om te zien van een overzichtspagina voor alle branches.
Elke pagina bedrijfstak geeft een lijst van de meest populaire galerij-items die zijn gekoppeld aan deze bedrijven.
Klik op **alle** om alle branchespecifieke resources in de galerie weer te geven.
Op deze pagina kunt u alle resources in de galerie bladeren. U kunt ook zoeken door filtercriteria voldoen aan de linkerkant van de pagina en invoeren zoektermen boven selecteren.


Klik op een galerij-item om details-pagina van het artikel voor meer informatie te openen.
**[BRENG ME NAAR DE GALERIE >>](http://gallery.cortanaintelligence.com)**
[!INCLUDE [machine-learning-free-trial](../../../includes/machine-learning-free-trial.md)]
| 64.716981 | 344 | 0.807872 | nld_Latn | 0.980179 |
fda87481713e46d06fbbdc96a7790f351f283a35 | 1,103 | md | Markdown | vendor/digitickets/omnipay-gocardlessv2/CHANGELOG.md | oconnorr401/transfer1 | 46460e59dae4215ea222ef1660aaf83271dbff4c | [
"AAL"
] | null | null | null | vendor/digitickets/omnipay-gocardlessv2/CHANGELOG.md | oconnorr401/transfer1 | 46460e59dae4215ea222ef1660aaf83271dbff4c | [
"AAL"
] | null | null | null | vendor/digitickets/omnipay-gocardlessv2/CHANGELOG.md | oconnorr401/transfer1 | 46460e59dae4215ea222ef1660aaf83271dbff4c | [
"AAL"
] | null | null | null | # Changelog
All Notable changes to `omnipay/gocardlessv2` will be documented in this file
- 0.1.8 - Add authenticate webhook method. Change error handling to omnipay standard method.
- 0.1.4 - Allow never-ending subscriptions
- 0.1.3 - Add rate handling to the send method - it will now sleep for 60 seconds before making a second attempt.
- 0.1.3 - ignore nulls on the bank account object.
- 0.1.2 - Switch customer bank account creation on the pro gateway to use params rather than an array
- 0.1.1 - Update to GoCardlessPro 1
- 0.1.0 - initial release of JS Flow for production
- 0.0.8 - standardise metadata to strings
- 0.0.7 - nullable app fees on create payment
- 0.0.6 - extend base response with standard function for link and metadata retrieval. Add more functions to the purchaseResponse object to wrap the GC Purchase object (notably formatting currency and creating DateTime objects)
- 0.0.5 - parseNotification restructured to accept optional signature
- 0.0.4 - modified the parseNotification to return an array of event results (more helpful than an array of requests)
| 64.882353 | 228 | 0.762466 | eng_Latn | 0.989463 |
fda8aa651ba2df726e2559b0de003c8757e50f47 | 1,377 | md | Markdown | includes/azure-data-lake-analytics-limits.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/azure-data-lake-analytics-limits.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/azure-data-lake-analytics-limits.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: rothja
ms.service: cost-management-billing
ms.topic: include
ms.date: 11/09/2018
ms.author: jroth
ms.openlocfilehash: 2f6cdda71c89041d954d8dbaf34a1fd874c5849a
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/28/2020
ms.locfileid: "80334900"
---
Usługa Azure Data Lake Analytics ułatwia zarządzanie infrastrukturą rozproszoną i złożonym kodem. Dynamicznie apekomeruje zasoby i można go używać do analizy eksabajtów danych. Po zakończeniu zadania automatycznie wywija zasoby. Płacisz tylko za moc obliczeniową, która została wykorzystana. Jak zwiększyć lub zmniejszyć rozmiar przechowywanych danych lub ilość obliczeń używane, nie trzeba przepisać kod. Aby podnieść domyślne limity dla subskrypcji, skontaktuj się z pomocą techniczną.
| **Zasobów** | **Limit** | **Komentarze** |
| --- | --- | --- |
| Maksymalna liczba równoczesnych zadań |20 | |
| Maksymalna liczba jednostek analitycznych (AU) na konto |250 | Użyj dowolnej kombinacji maksymalnie 250 jednostek AU w 20 zadaniach. Aby zwiększyć ten limit, skontaktuj się z pomocą techniczną firmy Microsoft. |
| Maksymalny rozmiar skryptu dla przesyłania zadania | 3 MB | |
| Maksymalna liczba kont Usługi Data Lake Analytics na region na subskrypcję | 5 | Aby zwiększyć ten limit, skontaktuj się z pomocą techniczną firmy Microsoft. |
| 62.590909 | 487 | 0.798112 | pol_Latn | 0.999351 |
fda8ae0c4d831ad333b15c7d306ed2d522ab6a5b | 40 | md | Markdown | README.md | HanuDows/HanuDows-Framework | c682c014c0b3ed0072e7b050aa39cad2a5d37fa0 | [
"MIT"
] | null | null | null | README.md | HanuDows/HanuDows-Framework | c682c014c0b3ed0072e7b050aa39cad2a5d37fa0 | [
"MIT"
] | null | null | null | README.md | HanuDows/HanuDows-Framework | c682c014c0b3ed0072e7b050aa39cad2a5d37fa0 | [
"MIT"
] | null | null | null | # HanuDows-Framework
HanuDows-Framework
| 13.333333 | 20 | 0.85 | pol_Latn | 0.464977 |
fda92c63b210829353604a8d11ad7e44d3aeadd4 | 115 | md | Markdown | cmdparams/readme.md | AlvinZhang86/cobra-examples | 471f1caf4c2263b400f9bde1221372b5732ae533 | [
"MIT"
] | 1 | 2019-11-15T10:32:29.000Z | 2019-11-15T10:32:29.000Z | cmdparams/readme.md | AlvinZhang86/cobra-examples | 471f1caf4c2263b400f9bde1221372b5732ae533 | [
"MIT"
] | null | null | null | cmdparams/readme.md | AlvinZhang86/cobra-examples | 471f1caf4c2263b400f9bde1221372b5732ae533 | [
"MIT"
] | null | null | null | # cmdparams
## build
```go build -o cmdparams main.go```
## run
```./cmdparams -h```
```./cmdparams --help```
| 9.583333 | 35 | 0.547826 | nld_Latn | 0.328332 |
fda93ed0838f526f73250ade064305f8623be9be | 98 | md | Markdown | DataScienceProgramming/06-Descriptive-Statistics/HW06/README.md | cartermin/MSA8090 | c8d95fb7c71682b2197a391995b76f6043a905cc | [
"CC0-1.0"
] | 7 | 2017-08-21T23:23:59.000Z | 2020-12-16T23:57:11.000Z | DataScienceProgramming/06-Descriptive-Statistics/HW06/README.md | cartermin/MSA8090 | c8d95fb7c71682b2197a391995b76f6043a905cc | [
"CC0-1.0"
] | null | null | null | DataScienceProgramming/06-Descriptive-Statistics/HW06/README.md | cartermin/MSA8090 | c8d95fb7c71682b2197a391995b76f6043a905cc | [
"CC0-1.0"
] | 7 | 2017-08-22T00:39:01.000Z | 2019-09-09T01:46:29.000Z | # Assignment HW06
## Instructions
**Please following the instructions precisely!**
## Submission
| 16.333333 | 48 | 0.765306 | eng_Latn | 0.987931 |
fda98ea7a93e53eb31593a4f5deb80c1783a2b89 | 2,756 | md | Markdown | README.md | rvalien/orbbot | 0f3f9198f595d8217142718151096d925405258a | [
"MIT"
] | null | null | null | README.md | rvalien/orbbot | 0f3f9198f595d8217142718151096d925405258a | [
"MIT"
] | null | null | null | README.md | rvalien/orbbot | 0f3f9198f595d8217142718151096d925405258a | [
"MIT"
] | null | null | null | # orbbot
[](https://github.com/psf/black)

[](https://raw.githubusercontent.com/rvalien/orbbot/master/LICENSE)
[](https://pypi.org/project/discord.py/)
<img src="orbb.png" width="100">
### description
lil bot for lil discord [QC](https://quake.bethesda.net/en) channel
---
### usage
Orbb can help:
`map` 🗺️ Choose random map.
`profile` Show quake profile link `profile some_name`
`pzdc` Random character and team shuffle.

`spec` Bot send vote message with question like: `Who want to play now?` If player, that set positive reaction, more
than 8, bot choose random spectators.
`voice` Shuffle members of voice channel to 2 teams and spectators.
`vote` Shuffle members who set positive reaction to 2 teams and spectators.
`role` show your roles & link to message where you can add/remove roles
`bday` show happy birthday users
`pzdc` OMG mode! Random team and character. Prepare to suffer.
`ping` Used to check if the bot is alive.
`help` Shows this message and more info for commands.
`poll` Simple poll with only 2 reactions (👍, 👎).
`roll` 🎲 roll dice and set result as reaction on your command. Return number from 1 to 6.
`random` Shuffle to 2 teams from message input.
`deadline` Show deadline date for book club. To set the date use `!deadline YYYY-MM-DD`

`slot` Roll the slot machine
|emoji| meaning |valid reaction|
|-----|--------------------------|--------------|
|✅ |positive reaction |yes |
|❌ | negative reaction |yes |
|🔟 | 10 seconds until vote ends|no (vote info)|
|5️⃣ | 5 seconds until vote ends |no (vote info)|
|🛑 | voting is over |no (vote info)|
---
### setup
to run this bot, you need to set environment variables:
`TOKEN` - [discord](https://discord.com/developers/docs/intro) bot token
`TENSOR_API_KEY`- [gifapi](https://tenor.com/gifapi/documentation) token
`PREFIX` - Command prefix to use
---
### deploy
[configure your bot here](https://discord.com/developers/applications/)
[](https://heroku.com/deploy?template=https://github.com/rvalien/orbbot)
[invite link](https://discordapp.com/oauth2/authorize?&client_id=757854688518602773&scope=bot&permissions=1275591744)
Wanna new features? [Create an issue](https://github.com/rvalien/orbbot/issues) on this repo.
| 40.529412 | 135 | 0.67598 | eng_Latn | 0.7509 |
fdaa307ec3873a119f7f9e688befe3c71c637e36 | 1,889 | md | Markdown | docs/vs-2015/extensibility/debugger/reference/idebugsettingscallback2-getmetricdword.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/reference/idebugsettingscallback2-getmetricdword.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/reference/idebugsettingscallback2-getmetricdword.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IDebugSettingsCallback2::GetMetricDword | Microsoft Docs
ms.custom: ''
ms.date: 2018-06-30
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- IDebugSettingsCallback2::GetMetricDword
ms.assetid: 831a5a1a-c4af-4520-9fdf-3a731aeff85c
caps.latest.revision: 9
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: 99b1528c6036f3548ce6b5af3db48764731197f0
ms.sourcegitcommit: 55f7ce2d5d2e458e35c45787f1935b237ee5c9f8
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 08/22/2018
ms.locfileid: "42631757"
---
# <a name="idebugsettingscallback2getmetricdword"></a>IDebugSettingsCallback2::GetMetricDword
[!INCLUDE[vs2017banner](../../../includes/vs2017banner.md)]
The latest version of this topic can be found at [IDebugSettingsCallback2::GetMetricDword](https://docs.microsoft.com/visualstudio/extensibility/debugger/reference/idebugsettingscallback2-getmetricdword).
Retrieves the value of a metric given its name.
## <a name="syntax"></a>Syntax
```cpp#
HRESULT GetMetricDword(
LPCWSTR pszType,
REFGUID guidSection,
LPCWSTR pszMetric,
DWORD* pdwValue
);
```
```csharp
private int GetMetricDword(
string pszType,
ref Guid guidSection,
string pszMetric,
out uint pdwValue
);
```
#### <a name="parameters"></a>Parameters
 `pszType`
 [in] The type of metric.
 `guidSection`
 [in] Unique identifier of the section.
 `pszMetric`
 [in] The name of the metric.
 `pdwValue`
 [out] Returns the value of the metric.
## <a name="return-value"></a>Return Value
 If successful, returns `S_OK`; otherwise, returns an error code.
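 For illustration, a call through a settings-callback pointer might look like the following sketch. The callback pointer, section GUID, and metric names here are hypothetical placeholders, not values defined by this documentation:

```cpp
// Illustrative sketch only: pCallback, guidMySection, and the metric names
// are hypothetical placeholders.
HRESULT ReadSampleMetric(IDebugSettingsCallback2 *pCallback, REFGUID guidMySection)
{
    DWORD dwValue = 0;
    HRESULT hr = pCallback->GetMetricDword(L"Engine", guidMySection, L"SampleMetric", &dwValue);
    if (SUCCEEDED(hr))
    {
        // dwValue now holds the DWORD value of the metric.
    }
    return hr;
}
```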
## <a name="see-also"></a>See Also
[IDebugSettingsCallback2](../../../extensibility/debugger/reference/idebugsettingscallback2.md)
| 26.605634 | 209 | 0.731075 | pol_Latn | 0.49495 |
fdaa74f9c70d760915e060e9028b2f803104a2c8 | 149 | md | Markdown | src/content/posts/my-first-post.md | hariravula/blog | 4cec877b5e862c03e467e210a184031a74841498 | [
"MIT"
] | null | null | null | src/content/posts/my-first-post.md | hariravula/blog | 4cec877b5e862c03e467e210a184031a74841498 | [
"MIT"
] | null | null | null | src/content/posts/my-first-post.md | hariravula/blog | 4cec877b5e862c03e467e210a184031a74841498 | [
"MIT"
] | null | null | null | ---
title: "Hello"
date: 2018-08-26T14:09:57+05:30
draft: false
tags: [hello]
categories: [blog]
---
Hello, my name is Hari. This is my first post. | 16.555556 | 47 | 0.671141 | eng_Latn | 0.964779 |
fdaa9120c8e1fc00773479497985c67d121448f9 | 1,422 | md | Markdown | usingcurl/tftp.md | RekGRpth/everything-curl | c23de7d4e6bc69059485cdfcb691b5fb4b93baf5 | [
"CC-BY-4.0"
] | 1 | 2022-01-18T06:22:45.000Z | 2022-01-18T06:22:45.000Z | usingcurl/tftp.md | RekGRpth/everything-curl | c23de7d4e6bc69059485cdfcb691b5fb4b93baf5 | [
"CC-BY-4.0"
] | null | null | null | usingcurl/tftp.md | RekGRpth/everything-curl | c23de7d4e6bc69059485cdfcb691b5fb4b93baf5 | [
"CC-BY-4.0"
] | null | null | null | # TFTP
Trivial File Transfer Protocol (TFTP) is a simple clear-text protocol that
allows a client to get a file from or put a file onto a remote host.
Primary use cases for this protocol have been to get the boot image over a
local network. TFTP also stands out a little next to many other protocols by
the fact that it is done over UDP as opposed to TCP which most other protocols
use.
There is no secure version or flavor of TFTP.
## Download
Download a file from the TFTP server of choice:
curl -O tftp://localserver/file.boot
## Upload
Upload a file to the TFTP server of choice:
curl -T file.boot tftp://localserver/
## TFTP options
The TFTP protocols transmits data to the other end of the communication using
"blocks". When a TFTP transfer is setup, both parties either agree on using
the default block size of 512 bytes or negotiate a different one. curl
supports block sizes between 8 to 65464 bytes.
You ask curl to use a different size than the default with
`--tftp-blksize`. Ask for 8192-byte blocks like this:
curl --tftp-blksize 8192 tftp://localserver/file
It has been shown that there are server implementations that don't handle
option negotiation at all, so curl also has the ability to completely switch
off all attempts at setting options. If you are in the unfortunate position of
working with such a server, use the flag like this:
curl --tftp-no-options tftp://localserver/file
| 33.069767 | 78 | 0.767932 | eng_Latn | 0.999273 |
fdaaefd54b0fb72fc57ca6ef9f87e9933b08f560 | 187 | md | Markdown | _posts/08-conclusion/0008-01-03-what-next-steps-should-we-take?.md | lheyman/open-sourcing-government | 8555c10f91b0737b780872e88d1b46c8a98c17a7 | [
"MIT"
] | 1 | 2015-02-13T13:31:10.000Z | 2015-02-13T13:31:10.000Z | _posts/08-conclusion/0008-01-03-what-next-steps-should-we-take?.md | lheyman/open-sourcing-government | 8555c10f91b0737b780872e88d1b46c8a98c17a7 | [
"MIT"
] | null | null | null | _posts/08-conclusion/0008-01-03-what-next-steps-should-we-take?.md | lheyman/open-sourcing-government | 8555c10f91b0737b780872e88d1b46c8a98c17a7 | [
"MIT"
] | null | null | null | * Create a GitHub account
* Create a GitHub Organization
* Create a "feedback repo"
* Open source **something** small
* Make your next new project open source
* Begin to grow communities | 31.166667 | 40 | 0.764706 | eng_Latn | 0.973409 |
fdab15b0387bd08ddf49627c4a50ae89278ba6e3 | 1,558 | md | Markdown | content/zh/publication/JSSC_201704_Arindam Sanyal_7911268/index.md | Arcadia-1/Sun_Research_Group | e813031156494c640041526d48219394a5ce78d3 | [
"MIT"
] | null | null | null | content/zh/publication/JSSC_201704_Arindam Sanyal_7911268/index.md | Arcadia-1/Sun_Research_Group | e813031156494c640041526d48219394a5ce78d3 | [
"MIT"
] | null | null | null | content/zh/publication/JSSC_201704_Arindam Sanyal_7911268/index.md | Arcadia-1/Sun_Research_Group | e813031156494c640041526d48219394a5ce78d3 | [
"MIT"
] | null | null | null | ---
title: An Energy-Efficient Hybrid SAR-VCO ΔΣ Capacitance-to-Digital Converter in 40-nm CMOS
authors:
- Arindam Sanyal
- Nan Sun
publishDate: "2017-04-25"
summary: JSSC, 2017
abstract: "This paper presents a highly digital, 0-1 MASH capacitance-to-digital converter (CDC). The CDC works by sampling a reference voltage on the sensing capacitor and then quantizing the charge stored in it by a 9-bit successive approximation register analog-to-digital converter. The residue is fed to a ring voltage-controlled oscillator (VCO) ΔΣ and quantized in time domain. The outputs from the two stages are combined to produce a quantized output with the first-order noise shaping. The proposed two-stage architecture reduces the impact of the VCO's nonlinearity. A digital calibration technique is used to track the VCO's gain across process, voltage, and temperature. The absence of any operational amplifier and low oversampling ratio for the VCO results in high energy efficiency. A prototype CDC in a 40-nm CMOS process achieves a 64.2-dB SNR while operating from a 1-V supply and using a sampling frequency of 3 MHz. The prototype achieves a CDC figure of merit of 55 fJ/conversion-step."
publication_types: ["2"]
publication: "IEEE Journal of Solid-State Circuits ( Volume: 52, Issue: 7, July 2017)"
tags:
- Capacitance sensing
- capacitance-to-digital converters (CDCs)
- noise shaping
- successive approximation register (SAR)
- voltage-controlled oscillator (VCO)
links:
- name: IEEE Xplore
url: https://ieeexplore.ieee.org/document/7911268/
--- | 55.642857 | 1,008 | 0.786264 | eng_Latn | 0.9713 |
fdab61c1371f7719eb9df8e30064c3248b5bba62 | 2,482 | md | Markdown | source/_posts/2016-04-27.angularjs_$http_anysc.md | kissthefire/blog-1 | d1e2d5caa166dbf70c902ab74aaa25217d3ad850 | [
"MIT"
] | 1 | 2019-10-22T01:53:00.000Z | 2019-10-22T01:53:00.000Z | source/_posts/2016-04-27.angularjs_$http_anysc.md | kissthefire/blog-1 | d1e2d5caa166dbf70c902ab74aaa25217d3ad850 | [
"MIT"
] | null | null | null | source/_posts/2016-04-27.angularjs_$http_anysc.md | kissthefire/blog-1 | d1e2d5caa166dbf70c902ab74aaa25217d3ad850 | [
"MIT"
] | null | null | null | ---
author: 小莫
date: 2016-05-11
title: Deleting data asynchronously with the AngularJS $http service
tags:
- angular
- javascript
category: angularjs
permalink: AngularHttp
---
I ran into a few problems while using AngularJS to delete data asynchronously, so I am writing them down here for reference and to reinforce my own understanding.
<!--more-->
### 1. Introduction
Some people will say there is nothing worth discussing about deletion: write a delete service, call it from the controller, and you are done.
Well... it looks that way, but is it really that simple in practice? There are a few pitfalls:
* How do you confirm whether the data was actually deleted?
* How do you keep the view in sync with the database?
### 2. Approaches
#### Approach 1
Delete the corresponding record in the database, then `splice` the matching item out of `$scope`.
#### Approach 2
Delete the corresponding record in the database, then reload the data (that is, call the query method again; the cost is obvious, and you also have to make sure the delete finishes before the query runs).
### 3. Implementation
>The delete service: asynchronous, returning a promise
```
service('deleteBlogService',//delete a blog post
['$rootScope',
'$http',
'$q',
function ($rootScope, $http, $q) {
var result = {};
result.operate = function (blogId) {
var deferred = $q.defer();
$http({
headers: {
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8'
},
url: $rootScope.$baseUrl + "/admin/blog/deleteBlogById",
method: 'GET',
dataType: 'json',
params: {
id: blogId
}
})
.success(function (data) {
deferred.resolve(data);
console.log("删除成功!");
})
.error(function () {
deferred.reject();
alert("删除失败!")
});
return deferred.promise;
};
return result;
}])
```
Notes for the controller
>Pay close attention to the execution order: make sure the delete has completed before reloading the data, otherwise the view will not update
```
/**
 * Delete a blog post
*/
$scope.deleteBlog = function (blogId) {
var deletePromise = deleteBlogService.operate(blogId);
deletePromise.then(function (data) {
if (data.status == 200) {
var promise = getBlogListService.operate($scope.currentPage);
promise.then(function (data) {
$scope.blogs = data.blogs;
$scope.pageCount = $scope.blogs.totalPages;
});
}
});
};
```
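For completeness, a minimal sketch of Approach 1 (deleting on the server and then splicing the row out of `$scope` instead of reloading) could look like this. The `id` field and the shape of `$scope.blogs` are assumptions for the example, not code from the original project:
```
/**
 * Approach 1 (sketch): delete on the server, then remove the row locally.
 * Assumes $scope.blogs is an array of blog objects that carry an `id` field.
 */
$scope.deleteBlogInPlace = function (blogId) {
    deleteBlogService.operate(blogId).then(function (data) {
        if (data.status == 200) {
            // find the deleted item in the local list and splice it out
            for (var i = 0; i < $scope.blogs.length; i++) {
                if ($scope.blogs[i].id === blogId) {
                    $scope.blogs.splice(i, 1);
                    break;
                }
            }
        }
    });
};
```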
| 30.641975 | 93 | 0.437953 | yue_Hant | 0.322759 |
fdabc68eabeb9e75c075fb840b79b185642fb0eb | 846 | md | Markdown | Array/README.md | SusheelThapa/Code-With-C-Plus-Plus | 8fdf86bf5a8420f8cd2e5d8afb8e1aebc376f136 | [
"MIT"
] | null | null | null | Array/README.md | SusheelThapa/Code-With-C-Plus-Plus | 8fdf86bf5a8420f8cd2e5d8afb8e1aebc376f136 | [
"MIT"
] | null | null | null | Array/README.md | SusheelThapa/Code-With-C-Plus-Plus | 8fdf86bf5a8420f8cd2e5d8afb8e1aebc376f136 | [
"MIT"
] | null | null | null | # Array
An array is a collection of items of similar type stored in contiguous memory locations.
Why Array?
Sometimes a simple variable is not enough to hold all the data.
For example,
suppose we want to store the marks of 2500 students; declaring 2500 different variables for this task is not feasible.
To solve this problem, we can define an array of size 2500 that can hold the marks of all the students.
---
## Syntax
To declare an array, the following syntax is used in C++:
data_type array_name[size];
```C++
Example:
int array[4];
int array[size]; //When size is taken as input from the user.
```
## Initialize Array
```C++
Method one:
int arr[] = {1,2,3,4,5};
```
```C++
Method Two:
int arr[3];
arr[0] = 1;
arr[1] = 2;
arr[2] = 3;
```
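Once an array is initialized, its elements can be read or updated through their zero-based index. A small illustrative example (not taken from the referenced source file):

```C++
#include <iostream>

int main() {
    int arr[] = {1, 2, 3, 4, 5};

    // Access a single element by its zero-based index
    std::cout << "First element: " << arr[0] << std::endl;

    // Iterate over every element
    for (int i = 0; i < 5; i++) {
        std::cout << arr[i] << " ";
    }
    std::cout << std::endl;

    return 0;
}
```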
[Reference](Declare_Access_Array.cpp)
| 15.666667 | 109 | 0.660757 | eng_Latn | 0.995409 |
fdadd66d36bed065a2f537c4ed77ed87edffe448 | 7,587 | md | Markdown | cuSOLVER/MgPotrf/README.md | katherineding/CUDALibrarySamples | 8cc27c70acafebf0f09e3b537b894981a68e5a3c | [
"BSD-3-Clause"
] | null | null | null | cuSOLVER/MgPotrf/README.md | katherineding/CUDALibrarySamples | 8cc27c70acafebf0f09e3b537b894981a68e5a3c | [
"BSD-3-Clause"
] | null | null | null | cuSOLVER/MgPotrf/README.md | katherineding/CUDALibrarySamples | 8cc27c70acafebf0f09e3b537b894981a68e5a3c | [
"BSD-3-Clause"
] | null | null | null | # cuSOLVER MultiGPU Cholesky Factorization example
## Description
This chapter provides examples of a multi-GPU linear solver.
The example code enables peer-to-peer access to take advantage of NVLINK. The user can compare performance with peer-to-peer access turned on or off.
Example 1 solves a linear system by Cholesky factorization (`potrf` and `potrs`). It allocates a distributed matrix by calling `createMat`, then generates the matrix in host memory and copies it to distributed device memory via `memcpyH2D`.
Example 2 solves a linear system using the inverse of a Hermitian positive definite matrix (`potrf` and `potri`). It allocates a distributed matrix by calling `createMat`, then generates the matrix in host memory and copies it to distributed device memory via `memcpyH2D`.
## Supported SM Architectures
All GPUs supported by CUDA Toolkit (https://developer.nvidia.com/cuda-gpus)
## Supported OSes
Linux
Windows
## Supported CPU Architecture
x86_64
ppc64le
arm64-sbsa
## CUDA APIs involved
- [cusolverMgPotrf_bufferSize API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-potrf)
- [cusolverMgPotrs_bufferSize API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-potrs)
- [cusolverMgPotri_bufferSize API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-potri)
- [cusolverMgPotrf API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-potrf)
- [cusolverMgPotrs API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-potrs)
- [cusolverMgPotri API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-potri)
- [cusolverMgCreateDeviceGrid API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-grid)
- [cusolverMgDeviceSelect API](https://docs.nvidia.com/cuda/cusolver/index.html#mg-device)
# Building (make)
# Prerequisites
- A Linux/Windows system with recent NVIDIA drivers.
- [CMake](https://cmake.org/download) version 3.18 minimum
- Minimum [CUDA 10.2 toolkit](https://developer.nvidia.com/cuda-downloads) is required.
## Build command on Linux
```
$ mkdir build
$ cd build
$ cmake .. # -DSHOW_FORMAT=ON
$ make
```
Make sure that CMake finds expected CUDA Toolkit. If that is not the case you can add argument `-DCMAKE_CUDA_COMPILER=/path/to/cuda/bin/nvcc` to cmake command.
## Build command on Windows
```
$ mkdir build
$ cd build
$ cmake -DCMAKE_GENERATOR_PLATFORM=x64 ..
$ Open cusolver_examples.sln project in Visual Studio and build
```
# Usage 1
```
$ ./cusolver_MgPotrf_example1
```
Sample example output w/ 1 GPU:
```
Test 1D Laplacian of order 8
Step 1: Create Mg handle and select devices
There are 1 GPUs
Device 0, NVIDIA TITAN RTX, cc 7.5
step 2: Enable peer access.
Step 3: Allocate host memory A
Step 4: Prepare 1D Laplacian for A and X = ones(N,NRHS)
Step 5: Create RHS for reference solution on host B = A*X
Step 6: Create matrix descriptors for A and D
Step 7: Allocate distributed matrices A and B
Step 8: Prepare data on devices
Step 9: Allocate workspace space
Allocate device workspace, lwork = 1064960
Step 10: Solve A*X = B by POTRF and POTRS
Step 11: Solution vector B
Step 12: Measure residual error |b - A*x|
errors for X[:,1]
|b - A*x|_inf = 2.220446E-16
|x|_inf = 1.000000E+00
|b|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 4.440892E-17
errors for X[:,2]
|b - A*x|_inf = 2.220446E-16
|x|_inf = 1.000000E+00
|b|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 4.440892E-17
step 12: Free resources
```
Sample example output w/ 2 GPU:
```
Test 1D Laplacian of order 8
Step 1: Create Mg handle and select devices
There are 2 GPUs
Device 0, NVIDIA TITAN RTX, cc 7.5
Device 1, NVIDIA TITAN RTX, cc 7.5
step 2: Enable peer access.
Enable peer access from gpu 0 to gpu 1
Enable peer access from gpu 1 to gpu 0
Step 3: Allocate host memory A
Step 4: Prepare 1D Laplacian for A and X = ones(N,NRHS)
Step 5: Create RHS for reference solution on host B = A*X
Step 6: Create matrix descriptors for A and D
Step 7: Allocate distributed matrices A and B
Step 8: Prepare data on devices
Step 9: Allocate workspace space
Allocate device workspace, lwork = 1064960
Step 10: Solve A*X = B by POTRF and POTRS
Step 11: Solution vector B
Step 12: Measure residual error |b - A*x|
errors for X[:,1]
|b - A*x|_inf = 2.220446E-16
|x|_inf = 1.000000E+00
|b|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 4.440892E-17
errors for X[:,2]
|b - A*x|_inf = 2.220446E-16
|x|_inf = 1.000000E+00
|b|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 4.440892E-17
step 12: Free resources
```
# Usage 2
```
$ ./cusolver_MgPotrf_example2
```
Sample example output w/ 1 GPU:
```
Test 1D Laplacian of order 8
Step 1: Create Mg handle and select devices
There are 1 GPUs
Device 0, NVIDIA TITAN RTX, cc 7.5
step 2: Enable peer access.
Step 3: Allocate host memory A
Step 4: Prepare 1D Laplacian for A and Xref = ones(N,NRHS)
Step 5: Create RHS for reference solution on host B = A*X
Step 6: Create matrix descriptors for A and D
Step 7: Allocate distributed matrices A and B
Step 8: Prepare data on devices
Step 9: Allocate workspace space
Allocate device workspace, lwork = 1067008
Step 10: Solve A*X = B by POTRF and POTRI
Step 11: Gather INV(A) from devices to host
step 12: solve linear system B := inv(A) * B
step 13: measure residual error |Xref - Xans|
errors for X[:,1]
|b - A*x|_inf = 4.440892E-16
|Xref|_inf = 1.000000E+00
|Xans|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 8.881784E-17
errors for X[:,2]
|b - A*x|_inf = 4.440892E-16
|Xref|_inf = 1.000000E+00
|Xans|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 8.881784E-17
step 14: Free resources
```
Sample example output w/ 2 GPU:
```
Test 1D Laplacian of order 8
Step 1: Create Mg handle and select devices
There are 2 GPUs
Device 0, NVIDIA TITAN RTX, cc 7.5
Device 1, NVIDIA TITAN RTX, cc 7.5
step 2: Enable peer access.
Enable peer access from gpu 0 to gpu 1
Enable peer access from gpu 1 to gpu 0
Step 3: Allocate host memory A
Step 4: Prepare 1D Laplacian for A and Xref = ones(N,NRHS)
Step 5: Create RHS for reference solution on host B = A*X
Step 6: Create matrix descriptors for A and D
Step 7: Allocate distributed matrices A and B
Step 8: Prepare data on devices
Step 9: Allocate workspace space
Allocate device workspace, lwork = 1067008
Step 10: Solve A*X = B by POTRF and POTRI
Step 11: Gather INV(A) from devices to host
step 12: solve linear system B := inv(A) * B
step 13: measure residual error |Xref - Xans|
errors for X[:,1]
|b - A*x|_inf = 4.440892E-16
|Xref|_inf = 1.000000E+00
|Xans|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 8.881784E-17
errors for X[:,2]
|b - A*x|_inf = 4.440892E-16
|Xref|_inf = 1.000000E+00
|Xans|_inf = 1.000000E+00
|A|_inf = 4.000000E+00
|b - A*x|/(|A|*|x|+|b|) = 8.881784E-17
step 14: Free resources
``` | 34.175676 | 280 | 0.651905 | eng_Latn | 0.726812 |
fdae03f26c05cf1e51d351785ea3b98626a5c149 | 9,990 | md | Markdown | docs/framework/data/adonet/dataset-datatable-dataview/diffgrams.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/dataset-datatable-dataview/diffgrams.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/dataset-datatable-dataview/diffgrams.md | yunuskorkmaz/docs.tr-tr | e73dea6e171ca23e56c399c55e586a61d5814601 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: 'Learn more about: DiffGrams'
title: DiffGrams
ms.date: 03/30/2017
ms.assetid: 037f3991-7bbc-424b-b52e-8b03585d3e34
ms.openlocfilehash: df00bbfb2c25014ff4e73a2777511bd3593ff8a4
ms.sourcegitcommit: ddf7edb67715a5b9a45e3dd44536dabc153c1de0
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 02/06/2021
ms.locfileid: "99652516"
---
# <a name="diffgrams"></a>DiffGrams
A DiffGram is an XML format that is used to identify current and original versions of data elements. The <xref:System.Data.DataSet> uses the DiffGram format to load and persist its contents, and to serialize its contents for transport across a network connection. When a <xref:System.Data.DataSet> is written as a DiffGram, it populates the DiffGram with all the necessary information to accurately recreate the contents of the <xref:System.Data.DataSet>, including column values from both the **Original** and **Current** row versions, row error information, and row order.
When sending and retrieving a <xref:System.Data.DataSet> from an XML Web service, the DiffGram format is implicitly used. Additionally, when loading the contents of a <xref:System.Data.DataSet> from XML using the **ReadXml** method, or when writing the contents of a <xref:System.Data.DataSet> in XML using the **WriteXml** method, you can select that the contents be read or written as a DiffGram. For more information, see [Loading a DataSet from XML](loading-a-dataset-from-xml.md) and [Writing DataSet Contents as XML Data](writing-dataset-contents-as-xml-data.md).
While the DiffGram format is primarily used by the .NET Framework as a serialization format for the contents of a <xref:System.Data.DataSet>, you can also use DiffGrams to modify data in tables in a Microsoft SQL Server database.
A DiffGram is generated by writing the contents of all the tables to a **\<diffgram>** element.
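For instance, persisting and reloading a <xref:System.Data.DataSet> as a DiffGram from managed code uses the standard **WriteXml** and **ReadXml** overloads. This is only a sketch; the file name is a placeholder, and **ReadXml** with `XmlReadMode.DiffGram` expects the target <xref:System.Data.DataSet> to already contain the same schema:

```csharp
// Write the current contents, including original/current row versions,
// as a DiffGram, then load them back later. "diffgram.xml" is a placeholder path.
dataSet.WriteXml("diffgram.xml", XmlWriteMode.DiffGram);

DataSet roundTripped = dataSet.Clone();   // copy the schema first
roundTripped.ReadXml("diffgram.xml", XmlReadMode.DiffGram);
```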
### <a name="to-generate-a-diffgram"></a>To generate a DiffGram
1. Create a list of the root tables (that is, tables without any parent element).
2. For each table in the list, and its child tables, write the current version of all the rows in the first DiffGram section.
3. For each table in the <xref:System.Data.DataSet>, write the original version of all the rows in the **\<before>** section of the DiffGram.
4. For rows that have errors, write the error content to the **\<errors>** section of the DiffGram.
A DiffGram is processed in sequence from the beginning of the XML file to the end.
### <a name="to-process-a-diffgram"></a>To process a DiffGram
1. Process the first section, which contains the current version of the elements.
2. Process the second, or **\<before>**, section, which contains the original row version of modified and deleted rows.
> [!NOTE]
> If a row is marked as deleted, the delete operation may also delete the child rows of that row, depending on the `Cascade` property of the current <xref:System.Data.DataSet>.
3. Process the **\<errors>** section. For each element in this section, set the error information for the specified row and column.
> [!NOTE]
> If you set <xref:System.Data.XmlWriteMode> to DiffGram, the contents of the target <xref:System.Data.DataSet> and the original <xref:System.Data.DataSet> may be different.
## <a name="diffgram-format"></a>DiffGram format
The DiffGram format is divided into three sections: the current data, the original (or "before") data, and an errors section, as shown in the following example.
```xml
<?xml version="1.0"?>
<diffgr:diffgram
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<DataInstance>
</DataInstance>
<diffgr:before>
</diffgr:before>
<diffgr:errors>
</diffgr:errors>
</diffgr:diffgram>
```
The DiffGram format consists of the following blocks of data:
**\<DataInstance>**
The name of this element, *DataInstance*, is used for explanation purposes in this documentation. A *DataInstance* element represents a <xref:System.Data.DataSet> or a row of a <xref:System.Data.DataTable>. Instead of *DataInstance*, the element would contain the name of the <xref:System.Data.DataSet> or <xref:System.Data.DataTable>. This block of the DiffGram format contains the current data, whether modified or not. An element, or row, that has been modified is identified with the **diffgr:hasChanges** annotation.
**\<diffgr:before>**
This block of the DiffGram format contains the original version of a row. Elements in this block are matched to elements in the *DataInstance* block using the **diffgr:id** annotation.
**\<diffgr:errors>**
This block of the DiffGram format contains error information for a particular row in the *DataInstance* block. Elements in this block are matched to elements in the *DataInstance* block using the **diffgr:id** annotation.
## <a name="diffgram-annotations"></a>DiffGram annotations
The DiffGram uses a few annotations to relate elements from the different DiffGram blocks that represent different row versions or error information in the <xref:System.Data.DataSet>.
The following table describes the DiffGram annotations defined in the DiffGram namespace **urn:schemas-microsoft-com:xml-diffgram-v1**.
|Annotation|Description|
|----------------|-----------------|
|**id**|Used to match the elements in the **\<diffgr:before>** and **\<diffgr:errors>** blocks to elements in the **\<DataInstance>** block. Values with the **diffgr:id** annotation are in the form *[TableName][RowIdentifier]*. For example: `<Customers diffgr:id="Customers1">`.|
|**parentId**|Identifies which element from the **\<DataInstance>** block is the parent element of the current element. Values with the **diffgr:parentId** annotation are in the form *[TableName][RowIdentifier]*. For example: `<Orders diffgr:parentId="Customers1">`.|
|**hasChanges**|Identifies a row in the **\<DataInstance>** block as modified. The **hasChanges** annotation can have one of the following two values:<br /><br /> **inserted**<br /> Identifies an **Added** row.<br /><br /> **modified**<br /> Identifies a **Modified** row that contains an **Original** row version in the **\<diffgr:before>** block. Note that **Deleted** rows will have an **Original** row version in the **\<diffgr:before>** block, but there will be no annotated element in the **\<DataInstance>** block.|
|**hasErrors**|Identifies a row in the **\<DataInstance>** block with a **RowError**. The error element is placed in the **\<diffgr:errors>** block.|
|**Error**|Contains the text of the **RowError** for a particular element in the **\<diffgr:errors>** block.|
Additional annotations are included when a <xref:System.Data.DataSet> reads or writes its contents as a DiffGram. The following table describes these additional annotations, defined in the namespace **urn:schemas-microsoft-com:xml-msdata**.
|Annotation|Description|
|----------------|-----------------|
|**rowOrder**|Preserves the row order of the original data and identifies the index of a row in a particular <xref:System.Data.DataTable>.|
|**hidden**|Identifies a column as having a **ColumnMapping** property set to **MappingType.Hidden**. The attribute is written in the format **msdata:hidden**_[ColumnName]_="*value*". For example: `<Customers diffgr:id="Customers1" msdata:hiddenContactTitle="Owner">`.<br /><br /> Note that hidden columns are only written as a DiffGram attribute if they contain data. Otherwise, they are ignored.|
## <a name="sample-diffgram"></a>Sample DiffGram
The following is an example of the DiffGram format. This example shows the result of an update to a row in a table before the changes are persisted. The row with a CustomerID of "ALFKI" has been modified but not yet updated in the data source. As a result, there is a **Current** row with a **diffgr:id** of "Customers1" in the **\<DataInstance>** block, and an **Original** row with a **diffgr:id** of "Customers1" in the **\<diffgr:before>** block. The row with a CustomerID of "ANATR" has a **RowError**, so it is annotated with `diffgr:hasErrors="true"` and there is a related element in the **\<diffgr:errors>** block.
```xml
<diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
<CustomerDataSet>
<Customers diffgr:id="Customers1" msdata:rowOrder="0" diffgr:hasChanges="modified">
<CustomerID>ALFKI</CustomerID>
<CompanyName>New Company</CompanyName>
</Customers>
<Customers diffgr:id="Customers2" msdata:rowOrder="1" diffgram:hasErrors="true">
<CustomerID>ANATR</CustomerID>
<CompanyName>Ana Trujillo Emparedados y Helados</CompanyName>
</Customers>
<Customers diffgr:id="Customers3" msdata:rowOrder="2">
<CustomerID>ANTON</CustomerID>
<CompanyName>Antonio Moreno Taquera</CompanyName>
</Customers>
<Customers diffgr:id="Customers4" msdata:rowOrder="3">
<CustomerID>AROUT</CustomerID>
<CompanyName>Around the Horn</CompanyName>
</Customers>
</CustomerDataSet>
<diffgr:before>
<Customers diffgr:id="Customers1" msdata:rowOrder="0">
<CustomerID>ALFKI</CustomerID>
<CompanyName>Alfreds Futterkiste</CompanyName>
</Customers>
</diffgr:before>
<diffgr:errors>
<Customers diffgr:id="Customers2" diffgr:Error="An optimistic concurrency violation has occurred for this row."/>
</diffgr:errors>
</diffgr:diffgram>
```
## <a name="see-also"></a>See also
- [Using XML in a DataSet](using-xml-in-a-dataset.md)
- [Loading a DataSet from XML](loading-a-dataset-from-xml.md)
- [Writing DataSet Contents as XML Data](writing-dataset-contents-as-xml-data.md)
- [DataSets, DataTables, and DataViews](index.md)
- [ADO.NET Overview](../ado-net-overview.md)
| 68.424658 | 641 | 0.735936 | tur_Latn | 0.997093 |
fdae94a9a4a91255234729f650bc77e658358cbb | 103 | md | Markdown | README.md | ace-e4s/noise_error_models | e6d7dfa5d8538fd701b19797e127013c3ba85f11 | [
"MIT"
] | null | null | null | README.md | ace-e4s/noise_error_models | e6d7dfa5d8538fd701b19797e127013c3ba85f11 | [
"MIT"
] | null | null | null | README.md | ace-e4s/noise_error_models | e6d7dfa5d8538fd701b19797e127013c3ba85f11 | [
"MIT"
] | null | null | null | # Noise and error models
Simple walkthrough on how to model different noise and error characteristics. | 34.333333 | 77 | 0.825243 | eng_Latn | 0.998132 |
fdaf9cd8ca982d7d7788877a28947fc66344cd9b | 3,035 | md | Markdown | sdk-api-src/content/wmp/nf-wmp-iwmpcore-put_currentmedia.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/wmp/nf-wmp-iwmpcore-put_currentmedia.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/wmp/nf-wmp-iwmpcore-put_currentmedia.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:wmp.IWMPCore.put_currentMedia
title: IWMPCore::put_currentMedia (wmp.h)
description: The put_currentMedia method specifies the IWMPMedia interface that corresponds to the current media item.
helpviewer_keywords: ["IWMPCore interface [Windows Media Player]","put_currentMedia method","IWMPCore.put_currentMedia","IWMPCore::put_currentMedia","IWMPCoreput_currentMedia","put_currentMedia","put_currentMedia method [Windows Media Player]","put_currentMedia method [Windows Media Player]","IWMPCore interface","wmp.iwmpcore_put_currentmedia","wmp/IWMPCore::put_currentMedia"]
old-location: wmp\iwmpcore_put_currentmedia.htm
tech.root: WMP
ms.assetid: 003d1937-13f0-4d2c-ad5c-a83569295b62
ms.date: 12/05/2018
ms.keywords: IWMPCore interface [Windows Media Player],put_currentMedia method, IWMPCore.put_currentMedia, IWMPCore::put_currentMedia, IWMPCoreput_currentMedia, put_currentMedia, put_currentMedia method [Windows Media Player], put_currentMedia method [Windows Media Player],IWMPCore interface, wmp.iwmpcore_put_currentmedia, wmp/IWMPCore::put_currentMedia
req.header: wmp.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows Media Player 9 Series or later.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll: Wmp.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- IWMPCore::put_currentMedia
- wmp/IWMPCore::put_currentMedia
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- wmp.dll
api_name:
- IWMPCore.put_currentMedia
---
# IWMPCore::put_currentMedia
## -description
The <b>put_currentMedia</b> method specifies the <b>IWMPMedia</b> interface that corresponds to the current media item.
## -parameters
### -param pMedia [in]
Pointer to an <b>IWMPMedia</b> interface.
## -returns
The method returns an <b>HRESULT</b>. Possible values include, but are not limited to, those in the following table.
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>S_OK</b></dt>
</dl>
</td>
<td width="60%">
The method succeeded.
</td>
</tr>
</table>
## -remarks
If the <b>IWMPSettings::put_autoStart</b> method was invoked with an argument of true, file playback will begin automatically whenever you invoke <b>put_currentMedia</b>.
You can retrieve an <b>IWMPMedia</b> interface for a given media item by invoking the <b>IWMPPlaylist::get_item</b> method.
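As a rough illustration (error handling omitted), selecting an item from a playlist and making it the current media might look like the following sketch. The <i>pCore</i> and <i>pPlaylist</i> pointers and the index value are assumptions for the example, not part of this API's contract:

```cpp
// Sketch only: assumes pCore (IWMPCore*) and pPlaylist (IWMPPlaylist*) were
// obtained elsewhere; the index 0 is an arbitrary example value.
IWMPMedia *pMedia = NULL;
if (SUCCEEDED(pPlaylist->get_item(0, &pMedia)) && pMedia != NULL)
{
    pCore->put_currentMedia(pMedia);
    pMedia->Release();
}
```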
## -see-also
<a href="/windows/desktop/api/wmp/nn-wmp-iwmpcore">IWMPCore Interface</a>
<a href="/windows/desktop/api/wmp/nf-wmp-iwmpcore-get_currentmedia">IWMPCore::get_currentMedia</a>
<a href="/windows/desktop/api/wmp/nn-wmp-iwmpmedia">IWMPMedia Interface</a>
<a href="/windows/desktop/api/wmp/nf-wmp-iwmpplaylist-get_item">IWMPPlaylist::get_item</a>
<a href="/windows/desktop/api/wmp/nf-wmp-iwmpsettings-put_autostart">IWMPSettings::put_autoStart</a> | 28.364486 | 379 | 0.76804 | yue_Hant | 0.335671 |
fdafd76289502a6324a94378fd96c05750f2c0d7 | 1,558 | md | Markdown | _topic/profession/architect.md | pforret/blog.splashing | 3ac565a2d7d1cd42c3bf9edd1dabf8e7b156c30f | [
"MIT"
] | 1 | 2022-01-25T11:05:31.000Z | 2022-01-25T11:05:31.000Z | _topic/profession/architect.md | pforret/blog.splashing | 3ac565a2d7d1cd42c3bf9edd1dabf8e7b156c30f | [
"MIT"
] | null | null | null | _topic/profession/architect.md | pforret/blog.splashing | 3ac565a2d7d1cd42c3bf9edd1dabf8e7b156c30f | [
"MIT"
] | 1 | 2022-01-25T23:19:08.000Z | 2022-01-25T23:19:08.000Z | ---
title: architect
layout: splash
image: /images/profession/architect.1.jpg
category: profession
tags:
- architect
---
# Profession: architect
An architect is a person who plans, designs and oversees the construction of buildings.
To practice architecture means to provide services in connection with the design of buildings and
the space within the site surrounding the buildings that have human occupancy or use as their
principal purpose.
Etymologically, the term architect derives from the Latin architectus, which derives from the
Greek, i.e., chief builder. The professional requirements for architects vary from place to place.
An architect's decisions affect public safety, and thus the architect must undergo specialized
training consisting of advanced education and a practicum for practical experience to earn a
license to practice architecture.
Practical, technical, and academic requirements for becoming an architect vary by jurisdiction,
though the formal study of architecture in academic institutions has played a pivotal role in the
development of the profession as a whole.
## Unsplash photos
These are the most popular photos on [Unsplash](https://unsplash.com) for **architect**.

Photographer: Daniel McCullough

Photographer: Ryan Ancill

Photographer: Daniel McCullough
Find even more on [unsplash.com/s/photos/architect](https://unsplash.com/s/photos/architect)
| 38.95 | 98 | 0.800385 | eng_Latn | 0.996645 |
fdafd91335444e86c31c25764a8df64e1e42d86a | 17 | md | Markdown | test/fixtures/settings/readme.md | waldyrious/remark-license | 9212cc29bcef263f7006ba09893f2c3b131badd9 | [
"MIT"
] | 9 | 2018-12-07T07:12:55.000Z | 2022-01-12T13:25:17.000Z | test/fixtures/settings/readme.md | waldyrious/remark-license | 9212cc29bcef263f7006ba09893f2c3b131badd9 | [
"MIT"
] | 9 | 2018-08-10T15:36:27.000Z | 2022-01-10T18:40:03.000Z | test/fixtures/settings/readme.md | waldyrious/remark-license | 9212cc29bcef263f7006ba09893f2c3b131badd9 | [
"MIT"
] | 5 | 2019-12-23T11:14:12.000Z | 2021-12-13T05:29:54.000Z | # PI
## License
| 4.25 | 10 | 0.529412 | eng_Latn | 0.449379 |
fdb002ef84a38b2847f82b9d4a0b25975c728466 | 6,086 | md | Markdown | articles/search-latest-updates.md | jeffwilcox/azure-content | 4930df62862e5cda8092d5052346a3b21d10b3b5 | [
"CC-BY-3.0"
] | null | null | null | articles/search-latest-updates.md | jeffwilcox/azure-content | 4930df62862e5cda8092d5052346a3b21d10b3b5 | [
"CC-BY-3.0"
] | null | null | null | articles/search-latest-updates.md | jeffwilcox/azure-content | 4930df62862e5cda8092d5052346a3b21d10b3b5 | [
"CC-BY-3.0"
] | null | null | null | <properties
pageTitle="What’s new in the latest update to Azure Search"
description="Release notes for Azure Search describing the latest updates to the service"
services="search"
documentationCenter=""
authors="HeidiSteen"
manager="mblythe"
editor=""/>
<tags
ms.service="search"
ms.devlang="rest-api"
ms.workload="search"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.date="03/21/2015"
ms.author="heidist"/>
#What’s new in the latest update to Azure Search#
Azure Search is now generally available, offering a 99.9% availability service-level agreement (SLA) for supported configurations of the [2015-02-28 version of the API](https://msdn.microsoft.com/library/azure/dn798935.aspx).
Watch this video for a demo and discussion of the latest features:
> [AZURE.VIDEO azure-search-general-availability-and-whats-new]
##How features are versioned and rolled out##
Features are released through the [REST API](https://msdn.microsoft.com/library/azure/dn798935.aspx), [.NET SDK](http://go.microsoft.com/fwlink/?LinkId=528216), or both. This page lists and describes each release in terms of the functionality it provides.
Currently, only the REST APIs have multiple versions. Older APIs remain operational as we roll out new features. The only other release is the .NET SDK, which is in its first pre-release iteration. You can visit [Search service versioning](https://msdn.microsoft.com/library/azure/dn864560.aspx) to learn more about our versioning policy.
##.NET SDK 0.9.6-preview##
**Released: 2015 March 5**
Includes a client library, Microsoft.Azure.Search.dll, with two namespaces:
- [Microsoft.Azure.Search](https://msdn.microsoft.com/library/azure/microsoft.azure.search.aspx)
- [Microsoft.Azure.Search.Models](https://msdn.microsoft.com/library/azure/microsoft.azure.search.models.aspx)
Excludes:
- [Indexers](http://go.microsoft.com/fwlink/p/?LinkId=528173)
- [Management REST API](https://msdn.microsoft.com/library/azure/dn832684.aspx)
- [2015-02-28-Preview](search-api-2015-02-28-Preview.md) features (currently, preview-only features consist of Microsoft natural language processors and `moreLikeThis`).
Visit [How to use Azure Search in .NET](http://go.microsoft.com/fwlink/p/?LinkId=528088) for guidance on installation and usage of the SDK.
##Api-version 2015-02-28-Preview##
**Released: 2015 March 5**
- [Microsoft natural language processors](search-api-2015-02-28-Preview.md) bring support for more languages and expansive stemming for all the languages supported by Office and Bing.
- [moreLikeThis=](../search-api-2015-02-28-Preview/) is a search parameter, mutually exclusive of `search=`, that triggers an alternate query execution path. Instead of full-text search of `search=` based on a search term input, `moreLikeThis=` finds documents that are similar to a given document by comparing their searchable fields.
##Api-version 2015-02-28##
**Released: 2015 March 5**
- [Indexers](http://go.microsoft.com/fwlink/p/?LinkID=528210) is a new feature that vastly simplifies indexing from data sources on Azure SQL Database, Azure DocumentDB, and SQL Server on Azure VMs.
- [Suggesters](https://msdn.microsoft.com/library/azure/dn798936.aspx) replaces the more limited, type-ahead query support of the previous implementation (it only matched on prefixes) by adding support for infix matching. This implementation can find matches anywhere in a term, and also supports fuzzy matching.
- [Tag boosting](http://go.microsoft.com/fwlink/p/?LinkId=528212) enables a new scenario for scoring profiles. In particular, it leverages persisted data (such as shopping preferences) so that you can boost search results for individual users, based on personalized information.
Visit [Azure Search is now Generally Available](http://go.microsoft.com/fwlink/p/?LinkId=528211) for the service announcement on the Azure blog that discusses all of these features.
##Api-version 2014-10-20-Preview##
**Released: 2014 November, 2015 January**
- [Lucene language analyzers](search-api-2014-10-20-preview.md) were added to provide multi-lingual support for the custom language analyzers distributed with Lucene.
- Tool support was introduced for building indexes, including scoring profiles, in the [Azure management portal](https://portal.azure.com).
##Api-version 2014-07-31-Preview##
**Released: 2014 August 21**
This version was the public preview release for Azure Search, providing the following core features:
- REST API for index and document operations. The majority of this API version is intact in 2015-02-28. The documentation for version `2014-07-31-Preview` can be found at [Azure Search Service REST API Version 2014-07-31](search-api-2014-07-31-preview.md).
- Scoring profiles for tuning search results. A scoring profile adds criteria used to compute search scores. The documentation for this feature can be found at [Azure Search Service Scoring Profiles REST API Version 2014-07-31](search-api-scoring-profiles-2014-07-31-preview.md).
- Geospatial support has been available from the beginning, provided through the `Edm.GeographyPoint` data type that has been part of Azure Search since its inception.
- Provisioning in the preview version of the [Azure management portal](https://portal.azure.com ). Azure Search was one of the few services that has only been available in the new portal.
##Management api-version 2015-02-28##
**Released: 2015 March 5**
[Management REST API](https://msdn.microsoft.com/library/azure/dn832684.aspx) marks the first version of the management API belonging to the generally available release of Azure Search.
##Management api-version 2014-07-31-Preview##
**Released: 2014 October**
The preview release of [Management REST API](search-management-api-2014-07-31-preview.md) was added to support service administration programmatically. It is versioned independently of the service REST API.
| 60.86 | 338 | 0.779823 | eng_Latn | 0.943537 |
fdb080228440722cf0f2d2f314956445789022be | 459 | md | Markdown | docker-h2/README.md | AlexRogalskiy/SubscriptionServiceREST | 7936901167818fd3134625a14762c6482574efbb | [
"MIT"
] | null | null | null | docker-h2/README.md | AlexRogalskiy/SubscriptionServiceREST | 7936901167818fd3134625a14762c6482574efbb | [
"MIT"
] | null | null | null | docker-h2/README.md | AlexRogalskiy/SubscriptionServiceREST | 7936901167818fd3134625a14762c6482574efbb | [
"MIT"
] | null | null | null | ### SETTING UP H2 DB via another docker container
1. Go to folder docker-h2:
```sh
cd docker-h2
```
2. Build docker container:
```sh
docker build [--build-arg H2_DB_SOURCE={URL_LINK}] -t={IMAGE_NAME} .
```
3. Run docker container:
```sh
docker run -v {LOCAL_FOLDER}:/opt/h2-data --restart="always" -d -p 1521:1521 -p 81:81 --name={CONTAINER_NAME} {IMAGE_NAME}
```
4. Get logging information about container:
```sh
docker logs -f {CONTAINER_NAME} 2>&1
``` | 27 | 122 | 0.686275 | yue_Hant | 0.633403 |
fdb0ec440bab73dd03d58c5a9adee0412afb5147 | 2,732 | md | Markdown | src/pages/artist/auni-seiva-1.md | jonnyscholes/alterhen-gatsby | 21eb21cde21a811411556139264de596bf16a75f | [
"MIT"
] | null | null | null | src/pages/artist/auni-seiva-1.md | jonnyscholes/alterhen-gatsby | 21eb21cde21a811411556139264de596bf16a75f | [
"MIT"
] | 1 | 2022-02-15T03:03:58.000Z | 2022-02-15T03:03:58.000Z | src/pages/artist/auni-seiva-1.md | jonnyscholes/alterhen-gatsby | 21eb21cde21a811411556139264de596bf16a75f | [
"MIT"
] | null | null | null | ---
templateKey: artist-post
published: false
featured: false
name: Auni Seiva
bio: Auni Seiva is a Brazilian graffiti artist turned fine artist, focusing on
ethical-existential sustainability. Her work critiques irrational consumerism
in a multidisciplinary approach based on the use of found objects.
currentexhibition: Faces from Energy Blocks series
country: Brazil
profpic: https://ucarecdn.com/4e57f3b2-8f86-4473-a973-a3e6aaca0476/500c.gif
title: '"Faces" from "Energy Blocks" series'
statement: >-
How to survive in a metropolis in the midst of urban chaos?
I believe that there are two ways to survive in this environment: Being part of a system created for this purpose or not being part of this system.
As an artist, Auni chooses the second option. Which brings us to another question: How to survive in this environment without surrendering to this system? I think the first step is to have faith. And she has a lot of it. Adept of the quantum thought of the materialization of thoughts and the creation of her own reality in a multiverse of possibilities, she goes searching for her treasures in the midst of urban chaos, collecting and gathering them. Transforming them into objects of power, trophies, symbols that she displays as a surviving warrior in a dystopian world. Like Augusto de Campos, she transforms garbage into luxury and her fantasy into reality. And it is with this background that she enters the virtual universe. The metaverses, the internet, the cryptocurrencies, the digital world. In this universe where everything is possible, her faith has no limits. And within the unlimited she goes on creating her own universe. Her path is still young and her exploration is still beginning. But as a survivor of São Paulo's urban chaos, she is exploring and adapting to this new environment without fear and with a lot of joy. Her objects gain life, gain a function, multiply, and are agglutinated with her own body, which now also becomes a work of art.
Where is this going to lead? This is the same question we ask ourselves when we are faced with this new multiverse of possibilities that every day becomes more real among our species. The digital world. The cryptocurrencies. NFT.
Walter Tada Nomura (mostly known as "Tinho") IG @tinho23sp
Artist/Curator
featuredimage: https://ucarecdn.com/01cfb3fa-9215-4c9a-9c70-b66be7a0aa93/main_page_auni.gif
midbanner: https://ucarecdn.com/a9c5a4b1-24a4-4724-8969-16a3193e0d40/banner_auni.png
website: https://auniseiva.wordpress.com
hicetnunc: https://www.hicetnunc.xyz/auniseiva/creations
twitter: auniseiva
instagram: auniseiva
updated: 26/08/2021 14:29:53
email: deborauniseiva@gmail.com
---
| 85.375 | 1,283 | 0.786603 | eng_Latn | 0.997154 |
fdb117174b97f72a007779dfc32a17def18d1019 | 1,843 | md | Markdown | README.md | loganbonsignore/Kylin_API | 48a0661ac9a060ef2b476acdd6c10902eb598982 | [
"MIT"
] | 1 | 2021-08-11T07:11:07.000Z | 2021-08-11T07:11:07.000Z | README.md | loganbonsignore/Kylin_API | 48a0661ac9a060ef2b476acdd6c10902eb598982 | [
"MIT"
] | null | null | null | README.md | loganbonsignore/Kylin_API | 48a0661ac9a060ef2b476acdd6c10902eb598982 | [
"MIT"
] | null | null | null | # Crypto Price API
This repository provides a 'production-ready' Flask API returning crypto price data from ~7 different sources.
The API has been set up for use with Python >= 3.7 and [Docker](https://www.docker.com/).
## Running locally
To run the basic server, you'll need to install a few requirements. To do this, run:
```bash
pip install -r requirements/common.txt
```
This will install only the dependencies required to run the server. To boot up the
default server, you can run:
```bash
bash bin/run.sh
```
This will start a [Gunicorn](https://gunicorn.org/) server that wraps the Flask app
defined in `src/app.py`. Note that [this is one of the recommended ways of deploying a
Flask app 'in production'](https://flask.palletsprojects.com/en/1.1.x/deploying/wsgi-standalone/).
The server shipped with Flask is [intended for development
purposes only](https://flask.palletsprojects.com/en/1.1.x/deploying/#deployment).
You should now be able to send:
```bash
curl localhost:8080/health
```
And receive the response `OK` and status code `200`.
## Running with docker
Unsurprisingly, you'll need [Docker](https://www.docker.com/products/docker-desktop)
installed to run this project with Docker. To build a containerised version of the API,
run:
```bash
docker build . -t kylin-api
```
To launch the containerised app, run:
```bash
docker run -p 8080:8080 kylin-api
```
You should see your server boot up, and should be accessible as before.
## Testing the API
Testing the API is set up using `pytest`. To execute tests you can install the project's development dependencies with:
```bash
pip install -r requirements/develop.txt
```
Then from the root directory run:
```bash
pytest
```
This runs `tests/test_api.py` which contains test functions. Add new funtions or files for extended testing functionality.
| 27.924242 | 122 | 0.747694 | eng_Latn | 0.990072 |
fdb16c5a189c037892b2575f64cc64bda53ba5d0 | 3,206 | md | Markdown | CHANGELOG.md | katacarbix/rofimoji | 60ab7f06e14551bfd186d41aa2b6b7792192bfa0 | [
"MIT"
] | null | null | null | CHANGELOG.md | katacarbix/rofimoji | 60ab7f06e14551bfd186d41aa2b6b7792192bfa0 | [
"MIT"
] | null | null | null | CHANGELOG.md | katacarbix/rofimoji | 60ab7f06e14551bfd186d41aa2b6b7792192bfa0 | [
"MIT"
] | null | null | null | # [NEXT]
## Added
- rofimoji now has a manpage (#57).
- There's a new file for [Nerd Font icons](https://www.nerdfonts.com/).
## Changed
- The emojis from 13.1 have been added!
- Instead of several parameters to choose the input method, they have been consolidated into `--action` (`-a`).
- The annotations on emojis don't contain their name (#59).
- The results are now sorted by rofi, by default based on Levenshtein distance (#59).
- The dependency on `pyxdg` was removed.
- Some code cleanup (#56, #58).
# [4.3.0]
## Added
- Added support for Wayland! (#47)
- Added support for `xclip`. (#21)
- You can type and copy the unicode codepoints. (#48)
## Changed
- The order of the characters in all scripts is now the same as the official one.
- All character data has been updated.
# [4.2.0]
## Added
- The most recently characters are now shown in a separate strip and can be selected with shortcuts (`alt+1` etc). (#39)
## Fixed
- Human emojis can now have their own skin tone even if they're part of a larger sequence. (#41)
# [4.1.1]
## Fixed
- Whitespace characters can be inserted.
# [4.1.0]
## Changed
- The extractors have been rewritten: There are now many, *many* supported symbols (all that Unicode offers), but some may have been renamed.
## Added
- A new file for all kinds of maths symbols.
# [4.0.0]
## Changed
- The new emojis from v13.0 are here!
- `rofimoji` is now modular and has an actual `setup.py`, as the emojis are no longer part of the picker code.
- The arguments to insert using the clipboard have been renamed: `--insert-with-clipboard` and `-p`
## Added
- You can ask `rofimoji` to only ever copy the emojis to the clipboard with `--copy-only` (`-c`).
- There are now data files for all of Unicode's scripts.
- A configuration can also be stored in `$XDG_CONFIG_HOME/rofimoji.rc`.
# [3.0.1]
## Fixed
- A race condition with Firefox is now resolved (#23).
# [3.0.0]
## Added
- You can choose a new input method: `rofimoji` can copy your emojis, paste them into the application and restore the previous contents.
- There are now more keywords included so you can find the appropriate emoji more easily.
- You can select a skin tone by default using cli args.
- You can pass arguments to rofi using `--rofi-args`.
- And you can use alternative emoji lists when you provide the `--emoji-file` parameter.
# [2.1.0]
## Changed
- This release is based on the emoji v12, including all these: https://unicode.org/emoji/charts/emoji-versions.html#2019 .
- Renamed meta files to upper case for better visibility.
- Updated dev dependencies to new versions.
# [2.0.1]
## Fixed
- Fix bug when trying to copy multiple emojis. (#6)
# [2.0.0]
## Changed
- Download emoji list from https://unicode.org/emoji/charts-11.0/full-emoji-list.html instead of emojipedia, as that one didn't work at all anymore.
- Skin color selection is now a second step after certain ("human") emojis. Only the neutral version is included in the main list, which accordingly is a lot smaller now.
## Added
- Downloading, parsing and extracting emoji properties from https://unicode.org/Public/emoji//11.0/emoji-data.txt so that we can find "human" emojis for skin color selection.
- A changelog 😁
| 38.166667 | 174 | 0.723331 | eng_Latn | 0.998087 |
fdb1909c8ddbac4e67012aed134fb94b8407db42 | 1,483 | md | Markdown | app/gateway-oss/2.5.x/kong-user.md | ArmanChand/docs.konghq.com | 57e0373de22965b5b76b9b90728afed4f83e2e21 | [
"MIT"
] | 109 | 2018-08-21T04:48:09.000Z | 2022-03-30T22:09:54.000Z | app/gateway-oss/2.5.x/kong-user.md | ArmanChand/docs.konghq.com | 57e0373de22965b5b76b9b90728afed4f83e2e21 | [
"MIT"
] | 2,857 | 2018-06-21T12:58:04.000Z | 2022-03-31T16:25:47.000Z | app/gateway-oss/2.5.x/kong-user.md | ArmanChand/docs.konghq.com | 57e0373de22965b5b76b9b90728afed4f83e2e21 | [
"MIT"
] | 402 | 2018-06-23T11:45:59.000Z | 2022-03-26T07:47:01.000Z | ---
title: Running Kong as a Non-Root User
---
After installing {{site.ce_product_name}} on a GNU/Linux system, you can
configure Kong to run as the built-in `kong` user and group instead of `root`.
This makes the Nginx master and worker processes run as the `kong` user and
group, overriding any settings in the
[`nginx_user`](/gateway-oss/{{page.kong_version}}/configuration/#nginx_user)
configuration property.
<div class="alert alert-warning">
<i class="fas fa-exclamation-triangle" style="color:orange; margin-right:3px"></i>
<b>Warning</b>
<br>The Nginx master process needs to run as <code>root</code> for
Nginx to execute certain actions (for example, to listen on the privileged
port 80).
<br>
<br>Although running Kong as the <code>kong</code> user
and group does provide more security, we advise that a system and network
administration evaluation be performed before making this decision. Otherwise,
Kong nodes might become unavailable due to insufficient permissions to execute
privileged system calls in the operating system.
</div>
## Prerequisites
{{site.ce_product_name}} is installed on one of the following Linux distributions:
* [Amazon Linux](/install/aws-linux)
* [CentOS](/install/centos)
* [Debian](/install/debian)
* [Red Hat](/install/redhat)
* [Ubuntu](/install/ubuntu)
## Run Kong Gateway as a non-root user
1. Switch to the `kong` user and group:
```sh
$ su kong
```
2. Start Kong:
```sh
kong start
```
| 32.23913 | 82 | 0.728254 | eng_Latn | 0.980028 |
fdb280164730b0b6f5029f9005e4492fc5ced83e | 1,031 | md | Markdown | .github/ISSUE_TEMPLATE/feature-proposal.md | benjaminapetersen/pinniped | 16f562e81cebf40a565ed1cef65b87edea47ca86 | [
"Apache-2.0"
] | 289 | 2020-09-28T13:23:25.000Z | 2022-03-30T12:37:43.000Z | .github/ISSUE_TEMPLATE/feature-proposal.md | benjaminapetersen/pinniped | 16f562e81cebf40a565ed1cef65b87edea47ca86 | [
"Apache-2.0"
] | 749 | 2020-09-28T13:59:36.000Z | 2022-03-31T18:59:36.000Z | .github/ISSUE_TEMPLATE/feature-proposal.md | benjaminapetersen/pinniped | 16f562e81cebf40a565ed1cef65b87edea47ca86 | [
"Apache-2.0"
] | 55 | 2020-09-28T13:57:31.000Z | 2022-03-21T19:18:15.000Z | ---
name: Feature proposal
about: Suggest a way to improve this project
title: ''
labels: ''
assignees: ''
---
<!--
Hey! Thanks for opening an issue!
It is recommended that you include screenshots and logs to help everyone achieve a shared understanding of the improvement.
-->
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Are you considering submitting a PR for this feature?**
- **How will this project improvement be tested?**
- **How does this change the current architecture?**
- **How will this change be backwards compatible?**
- **How will this feature be documented?**
**Additional context**
Add any other context or screenshots about the feature request here.
| 28.638889 | 123 | 0.750727 | eng_Latn | 0.999567 |
fdb2986247e393e553fe7e54d6f3639ce2196edf | 892 | md | Markdown | src/pages/news/vk-3480.ru.md | physcodestyle/official-website | 48240b50427382f1a7bf88cca5a77cf1f2992092 | [
"MIT"
] | 5 | 2021-05-31T14:49:41.000Z | 2022-01-19T11:06:02.000Z | src/pages/news/vk-3480.ru.md | physcodestyle/official-website | 48240b50427382f1a7bf88cca5a77cf1f2992092 | [
"MIT"
] | null | null | null | src/pages/news/vk-3480.ru.md | physcodestyle/official-website | 48240b50427382f1a7bf88cca5a77cf1f2992092 | [
"MIT"
] | 1 | 2021-03-29T19:48:07.000Z | 2021-03-29T19:48:07.000Z | ---
permalink: '/ru/news/vk-3480/index.html'
layout: 'news.ru.njk'
title: 'ВГУ открывает набор в сборную по черлидингу. На тренировках вас научат акробатическим элемен'
source: ВКонтакте
tags:
- news_ru
description: 'ВГУ открывает набор в сборную по черлидингу. На тренировках вас научат акробатическим элемен…'
updatedAt: 1475658187
---

VSU is opening recruitment for its cheerleading team.
At practice you will be taught acrobatic elements and lifts of various levels, and you'll get help improving your physical fitness.
For any questions, contact Marina Eduardovna Avdeeva. Phone: 8-908-132-70-84
| 49.555556 | 300 | 0.818386 | rus_Cyrl | 0.877279 |
fdb2d13f1f17d4112ae2f8e17c55b531a4c4eac8 | 399 | md | Markdown | about.md | lee-lindley/github_blog | 9437e9f046dd15b3959f630b6ffeead4b14be72a | [
"MIT"
] | null | null | null | about.md | lee-lindley/github_blog | 9437e9f046dd15b3959f630b6ffeead4b14be72a | [
"MIT"
] | null | null | null | about.md | lee-lindley/github_blog | 9437e9f046dd15b3959f630b6ffeead4b14be72a | [
"MIT"
] | null | null | null | ---
layout: page
title: About
permalink: /about/
---

I'm Just Another Perl Hacker who wound up playing in a big Oracle database playground.
email: [lee.lindley@gmail.com](mailto:lee.lindley@gmail.com)
github repository: [https://github.com/lee-lindley](https://github.com/lee-lindley)
| 28.5 | 109 | 0.766917 | eng_Latn | 0.180219 |
fdb31d23bb05daee0e9e55ae54f270f5ce981faf | 15,616 | md | Markdown | plugins/parsers/json_v2/README.md | amertner/telegraf | 74a7665aa9f98dab70f38b63ed79f147adeb7630 | [
"MIT"
] | 2 | 2019-03-20T02:46:30.000Z | 2019-10-23T07:17:15.000Z | plugins/parsers/json_v2/README.md | amertner/telegraf | 74a7665aa9f98dab70f38b63ed79f147adeb7630 | [
"MIT"
] | 18 | 2022-01-24T01:29:46.000Z | 2022-03-28T01:37:10.000Z | plugins/parsers/json_v2/README.md | amertner/telegraf | 74a7665aa9f98dab70f38b63ed79f147adeb7630 | [
"MIT"
] | null | null | null | # JSON Parser - Version 2
This parser takes valid JSON input and turns it into line protocol. The query syntax supported is [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md), you can go to this playground to test out your GJSON path here: [gjson.dev/](https://gjson.dev). You can find multiple examples under the `testdata` folder.
## Configuration
You configure this parser by describing the line protocol you want by defining the fields and tags from the input. The configuration is divided into config sub-tables called `field`, `tag`, and `object`. In the example below you can see all the possible configuration keys you can define for each config table. In the sections that follow these configuration keys are defined in more detail.
**Example configuration:**
```toml
[[inputs.file]]
urls = []
data_format = "json_v2"
[[inputs.file.json_v2]]
measurement_name = "" # A string that will become the new measurement name
measurement_name_path = "" # A string with valid GJSON path syntax, will override measurement_name
timestamp_path = "" # A string with valid GJSON path syntax to a valid timestamp (single value)
timestamp_format = "" # A string with a valid timestamp format (see below for possible values)
timestamp_timezone = "" # A string with with a valid timezone (see below for possible values)
[[inputs.file.json_v2.tag]]
path = "" # A string with valid GJSON path syntax to a non-array/non-object value
rename = "new name" # A string with a new name for the tag key
[[inputs.file.json_v2.field]]
path = "" # A string with valid GJSON path syntax to a non-array/non-object value
rename = "new name" # A string with a new name for the tag key
type = "int" # A string specifying the type (int,uint,float,string,bool)
[[inputs.file.json_v2.object]]
path = "" # A string with valid GJSON path syntax, can include array's and object's
## WARNING: Setting optional to true will suppress errors if the configured Path doesn't match the JSON
## This should be used with caution because it removes the safety net of verifying the provided path
## This was introduced to support situations when parsing multiple incoming JSON payloads with wildcards
## More context: https://github.com/influxdata/telegraf/issues/10072
optional = false
## Configuration to define what JSON keys should be used as timestamps ##
timestamp_key = "" # A JSON key (for a nested key, prepend the parent keys with underscores) to a valid timestamp
timestamp_format = "" # A string with a valid timestamp format (see below for possible values)
timestamp_timezone = "" # A string with with a valid timezone (see below for possible values)
### Configuration to define what JSON keys should be included and how (field/tag) ###
tags = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) to be a tag instead of a field, when adding a JSON key in this list you don't have to define it in the included_keys list
included_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that should be only included in result
excluded_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that shouldn't be included in result
# When a tag/field sub-table is defined, they will be the only field/tag's along with any keys defined in the included_keys list.
# If the resulting values aren't included in the object/array returned by the root object path, it won't be included.
# You can define as many tag/field sub-tables as you want.
[[inputs.file.json_v2.object.tag]]
path = "" # # A string with valid GJSON path syntax, can include array's and object's
rename = "new name" # A string with a new name for the tag key
[[inputs.file.json_v2.object.field]]
path = "" # # A string with valid GJSON path syntax, can include array's and object's
rename = "new name" # A string with a new name for the tag key
type = "int" # A string specifying the type (int,uint,float,string,bool)
### Configuration to modify the resutling line protocol ###
disable_prepend_keys = false (or true, just not both)
[inputs.file.json_v2.object.renames] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a new name for the tag key
key = "new name"
[inputs.file.json_v2.object.fields] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a type (int,uint,float,string,bool)
key = "int"
```
---
### root config options
* **measurement_name (OPTIONAL)**: Will set the measurement name to the provided string.
* **measurement_name_path (OPTIONAL)**: You can define a query with [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md) to set a measurement name from the JSON input. The query must return a single data value or it will use the default measurement name. This takes precedence over `measurement_name`.
* **timestamp_path (OPTIONAL)**: You can define a query with [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md) to set a timestamp from the JSON input. The query must return a single data value or it will default to the current time.
* **timestamp_format (OPTIONAL, but REQUIRED when timestamp_path is defined)**: Must be set to `unix`, `unix_ms`, `unix_us`, `unix_ns`, or
the Go "reference time" which is defined to be the specific time:
`Mon Jan 2 15:04:05 MST 2006`
* **timestamp_timezone (OPTIONAL, but REQUIRES timestamp_path)**: This option should be set to a
[Unix TZ value](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones),
such as `America/New_York`, to `Local` to utilize the system timezone, or to `UTC`. Defaults to `UTC`
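As an illustrative sketch only (the input file name and the JSON keys `name` and `time` are assumptions, not taken from the examples above), the root-level options might be combined like this:
```toml
[[inputs.file]]
  files = ["example.json"]            # hypothetical input file
  data_format = "json_v2"
  [[inputs.file.json_v2]]
    measurement_name_path = "name"    # assumes a top-level "name" key in the JSON
    timestamp_path = "time"           # assumes a top-level "time" key holding a unix ms timestamp
    timestamp_format = "unix_ms"
    timestamp_timezone = "UTC"
```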
---
### `field` and `tag` config options
`field` and `tag` represent the elements of [line protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/). You can use the `field` and `tag` config tables to gather a single value or an array of values that all share the same type and name. With this you can add a field or tag to a line protocol from data stored anywhere in your JSON. If you define the GJSON path to return a single value, you will get a single resulting line protocol that contains the field/tag. If you define the GJSON path to return an array of values, each field/tag will be put into a separate line protocol (you use the # character to retrieve JSON arrays; find examples [here](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md#arrays)).
Note that objects are handled separately; if you provide a path that returns an object, it will be ignored. You will need to use the `object` config table to parse objects, because `field` and `tag` don't handle relationships between data. Each `field` and `tag` you define is handled as a separate data point.
The notable difference between `field` and `tag`, is that `tag` values will always be type string while `field` can be multiple types. You can define the type of `field` to be any [type that line protocol supports](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/#data-types-and-format), which are:
* float
* int
* uint
* string
* bool
#### **field**
Using this field configuration you can gather non-array/non-object values. Note this acts as a global field when used with the `object` configuration: if you gather an array of values using `object`, the field gathered will be added to each resulting line protocol without acknowledging its location in the original JSON. This is defined in TOML as an array table using double brackets.
* **path (REQUIRED)**: A string with valid GJSON path syntax to a non-array/non-object value
* **name (OPTIONAL)**: You can define a string value to set the field name. If not defined it will use the trailing word from the provided query.
* **type (OPTIONAL)**: You can define a string value to set the desired type (float, int, uint, string, bool). If not defined it won't enforce a type and default to using the original type defined in the JSON (bool, float, or string).
#### **tag**
Using this tag configuration you can gather non-array/non-object values. Note this acts as a global tag when used with the `object` configuration: if you gather an array of values using `object`, the tag gathered will be added to each resulting line protocol without acknowledging its location in the original JSON. This is defined in TOML as an array table using double brackets.
* **path (REQUIRED)**: A string with valid GJSON path syntax to a non-array/non-object value
* **name (OPTIONAL)**: You can define a string value to set the field name. If not defined it will use the trailing word from the provided query.
For more complete examples of `field` and `tag` usage, see the example configurations under the `testdata` folder.
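As a minimal sketch (the JSON keys `device.name` and `device.temperature` are hypothetical), a `tag` plus `field` pair could be configured like this:
```toml
[[inputs.file.json_v2.tag]]
  path = "device.name"          # becomes a tag on every resulting line protocol
[[inputs.file.json_v2.field]]
  path = "device.temperature"   # a single non-array/non-object value
  rename = "temp"
  type = "float"
```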
---
### object
With the configuration section `object`, you can gather values from [JSON objects](https://www.w3schools.com/js/js_json_objects.asp). This is defined in TOML as an array table using double brackets.
#### The following keys can be set for `object`
* **path (REQUIRED)**: You must define the path query that gathers the object with [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md)
*Keys to define what JSON keys should be used as timestamps:*
* **timestamp_key (OPTIONAL)**: You can define a json key (for a nested key, prepend the parent keys with underscores) for the value to be set as the timestamp from the JSON input.
* **timestamp_format (OPTIONAL, but REQUIRED when timestamp_key is defined)**: Must be set to `unix`, `unix_ms`, `unix_us`, `unix_ns`, or
the Go "reference time" which is defined to be the specific time:
`Mon Jan 2 15:04:05 MST 2006`
* **timestamp_timezone (OPTIONAL, but REQUIRES timestamp_key)**: This option should be set to a
[Unix TZ value](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones),
such as `America/New_York`, to `Local` to utilize the system timezone, or to `UTC`. Defaults to `UTC`
*Configuration to define what JSON keys should be included and how (field/tag):*
* **included_keys (OPTIONAL)**: You can define a list of key's that should be the only data included in the line protocol, by default it will include everything.
* **excluded_keys (OPTIONAL)**: You can define json keys to be excluded in the line protocol, for a nested key, prepend the parent keys with underscores
* **tags (OPTIONAL)**: You can define json keys to be set as tags instead of fields, if you define a key that is an array or object then all nested values will become a tag
* **field (OPTIONAL, defined in TOML as an array table using double brackets)**: Identical to the [field](#field) table you can define, but with two key differences. The path supports arrays and objects and is defined under the object table and therefore will adhere to how the JSON is structured. You want to use this if you want the field/tag to be added as it would if it were in the included_key list, but then use the GJSON path syntax.
* **tag (OPTIONAL, defined in TOML as an array table using double brackets)**: Identical to the [tag](#tag) table you can define, but with two key differences. The path supports arrays and objects and is defined under the object table and therefore will adhere to how the JSON is structured. You want to use this if you want the field/tag to be added as it would if it were in the included_key list, but then use the GJSON path syntax.
*Configuration to modify the resulting line protocol:*
* **disable_prepend_keys (OPTIONAL)**: Set to true to prevent resulting nested data to contain the parent key prepended to its key **NOTE**: duplicate names can overwrite each other when this is enabled
* **renames (OPTIONAL, defined in TOML as a table using single bracket)**: A table matching the json key with the desired name (as opposed to defaulting to using the key); use names that include the prepended keys of its parent keys for nested results
* **fields (OPTIONAL, defined in TOML as a table using single bracket)**: A table matching the json key with the desired type (int,string,bool,float), if you define a key that is an array or object then all nested values will become that type
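For illustration only (the JSON keys used here are assumptions), an object configuration that applies these modifiers might look like:
```toml
[[inputs.file.json_v2.object]]
  path = "metrics"                  # hypothetical object in the input JSON
  disable_prepend_keys = true
  [inputs.file.json_v2.object.renames]
    temperature = "temp"            # rename the JSON key "temperature"
  [inputs.file.json_v2.object.fields]
    temperature = "float"           # force the value to a float
```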
## Arrays and Objects
The following describes the high-level approach when parsing arrays and objects:
**Array**: Every element in an array is treated as a *separate* line protocol
**Object**: Every key/value in a object is treated as a *single* line protocol
When handling nested arrays and objects, the rules above continue to apply as the parser creates line protocol. When an object has multiple arrays as values, the arrays will become separate line protocols containing only non-array values from the object. Below you can see an example of this behavior, with an input JSON containing an array of book objects that has a nested array of characters.
Example JSON:
```json
{
"book": {
"title": "The Lord Of The Rings",
"chapters": [
"A Long-expected Party",
"The Shadow of the Past"
],
"author": "Tolkien",
"characters": [
{
"name": "Bilbo",
"species": "hobbit"
},
{
"name": "Frodo",
"species": "hobbit"
}
],
"random": [
1,
2
]
}
}
```
Example configuration:
```toml
[[inputs.file]]
files = ["./testdata/multiple_arrays_in_object/input.json"]
data_format = "json_v2"
[[inputs.file.json_v2]]
[[inputs.file.json_v2.object]]
path = "book"
tags = ["title"]
disable_prepend_keys = true
```
Expected line protocol:
```text
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",chapters="A Long-expected Party"
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",chapters="The Shadow of the Past"
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",name="Bilbo",species="hobbit"
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",name="Frodo",species="hobbit"
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",random=1
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",random=2
```
You can find more complicated examples under the folder `testdata`.
## Types
For each field you have the option to define the types. The following rules are in place for this configuration:
* If a type is explicitly defined, the parser will enforce this type and convert the data to the defined type if possible. If the type can't be converted then the parser will fail.
* If a type isn't defined, the parser will use the default type defined in the JSON (int, float, string)
The type values you can set:
* `int`, bool, floats or strings (with valid numbers) can be converted to an int.
* `uint`, bool, floats or strings (with valid numbers) can be converted to a uint.
* `string`, any data can be formatted as a string.
* `float`, string values (with valid numbers) or integers can be converted to a float.
* `bool`, the string values "true" or "false" (regardless of capitalization) or the integer values `0` or `1` can be converted to a bool.
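As a small illustration (the key name is hypothetical), a string value such as `"50"` in the input JSON can be forced to an integer:
```toml
[[inputs.file.json_v2.field]]
  path = "cpu.usage"   # assumes the JSON value is the string "50"
  type = "int"         # the parser converts "50" to the integer 50
```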
| 68.792952 | 756 | 0.715484 | eng_Latn | 0.996859 |
fdb33f0a3511b06fec0564b4122f6231890a7b8a | 5,166 | md | Markdown | backend/go/reborn/basic/package/inner/sync.md | chenyuanqi/dev_note | 7a4a1ba843e7d73828af799750b7c7bfeadeb397 | [
"Apache-2.0"
] | null | null | null | backend/go/reborn/basic/package/inner/sync.md | chenyuanqi/dev_note | 7a4a1ba843e7d73828af799750b7c7bfeadeb397 | [
"Apache-2.0"
] | null | null | null | backend/go/reborn/basic/package/inner/sync.md | chenyuanqi/dev_note | 7a4a1ba843e7d73828af799750b7c7bfeadeb397 | [
"Apache-2.0"
] | null | null | null |
### sync package: restricting concurrent access to variables
The sync package in Go provides the mutex `Mutex` and the read-write lock `RWMutex` for handling situations in concurrent code where two or more goroutines (or threads) may read or write the same variable at the same time.
**Why locks are needed**
Locks are the core of the sync package. A lock has two main methods: locking (Lock) and unlocking (Unlock).
Under concurrency, multiple threads or goroutines may modify the same variable at the same time. Using a lock guarantees that only one goroutine or thread modifies that variable at any given moment. Without a lock, concurrent code may not produce the result you expect.
```go
package main
import (
"fmt"
"time"
)
func main() {
var a = 0
for i := 0; i < 1000; i++ {
go func(idx int) {
a += 1
fmt.Println(a)
}(i)
}
time.Sleep(time.Second)
}
// 1
// 226
// 405
// 816
// 22
// 23
// 24
// 25
// 26
// 27
// 28
// 29
// 30
// 31
// 32
// 33
```
From the output you can see that the values of `a` are not printed in increasing order. Why is that? Each goroutine roughly executes in the following order:
- read the value of `a` from the register;
- perform the addition;
- write the result back to the register.
Following this sequence, suppose one goroutine reads `a` as 3 and then performs the addition; at the same moment another goroutine also reads `a` and gets 3 as well, so in the end both goroutines produce the same result.
The idea of a lock is that while one goroutine is working on `a`, it locks `a`; other goroutines must wait until that goroutine finishes and unlocks `a` before they can operate on it. In other words, only one goroutine can work on `a` at a time, which avoids the situation shown in the example above.
**Mutex (mutual exclusion lock)**
What is a mutex? A mutex has two methods you can call:
- func (m *Mutex) Lock()
- func (m *Mutex) Unlock()
```go
package main
import (
"fmt"
"sync"
"time"
)
func main() {
var a = 0
var lock sync.Mutex
for i := 0; i < 1000; i++ {
go func(idx int) {
lock.Lock()
defer lock.Unlock()
a += 1
fmt.Printf("goroutine %d, a=%d\n", idx, a)
}(i)
}
// 等待 1s 结束主程序
// 确保所有协程执行完
time.Sleep(time.Second)
}
// goroutine 0, a=1
// goroutine 76, a=2
// goroutine 772, a=3
// goroutine 773, a=4
// goroutine 774, a=5
// goroutine 775, a=6
// ...
```
Note that `a mutex can be locked by only one goroutine at a time; other goroutines block until the mutex is unlocked (and then compete again to lock it)`.
```go
package main
import (
"fmt"
"sync"
"time"
)
func main() {
ch := make(chan struct{}, 2)
var l sync.Mutex
go func() {
l.Lock()
defer l.Unlock()
fmt.Println("goroutine1: 我会锁定大概 2s")
time.Sleep(time.Second * 2)
fmt.Println("goroutine1: 我解锁了,你们去抢吧")
ch <- struct{}{}
}()
go func() {
fmt.Println("goroutine2: 等待解锁")
l.Lock()
defer l.Unlock()
fmt.Println("goroutine2: 欧耶,我也解锁了")
ch <- struct{}{}
}()
// 等待 goroutine 执行结束
for i := 0; i < 2; i++ {
<-ch
}
}
// goroutine1: 我会锁定大概 2s
// goroutine2: 等待解锁
// goroutine1: 我解锁了,你们去抢吧
// goroutine2: 欧耶,我也解锁了
```
**Read-write lock (RWMutex)**
A read-write lock has the following four methods:
- For write operations, locking and unlocking are func (*RWMutex) Lock and func (*RWMutex) Unlock;
- For read operations, locking and unlocking are func (*RWMutex) RLock and func (*RWMutex) RUnlock.
A read-write lock differs from a plain mutex in the following ways:
- When one goroutine holds the write lock, all other lock requests, whether for reading or writing, block until the write lock is released;
- When one goroutine holds a read lock, other read locks can still be acquired;
- When one or more read locks are held, a write lock must wait until all read locks are released before it can be acquired.
So the purpose of the read lock (RLock) is really to tell the write lock that many goroutines (or processes) are currently reading the data, and a write must wait until they have finished reading (released their read locks) before it can take the write lock.
We can summarize this in the following three rules:
- Only one goroutine can hold the write lock at a time;
- Any number of goroutines can hold read locks at the same time;
- At any moment only write locking or read locking can be in effect (reads and writes are mutually exclusive).
```go
package main
import (
"fmt"
"math/rand"
"sync"
)
var count int
var rw sync.RWMutex
func main() {
ch := make(chan struct{}, 10)
for i := 0; i < 5; i++ {
go read(i, ch)
}
for i := 0; i < 5; i++ {
go write(i, ch)
}
for i := 0; i < 10; i++ {
<-ch
}
}
func read(n int, ch chan struct{}) {
rw.RLock()
fmt.Printf("goroutine %d 进入读操作...\n", n)
v := count
fmt.Printf("goroutine %d 读取结束,值为:%d\n", n, v)
rw.RUnlock()
ch <- struct{}{}
}
func write(n int, ch chan struct{}) {
rw.Lock()
fmt.Printf("goroutine %d 进入写操作...\n", n)
v := rand.Intn(1000)
count = v
fmt.Printf("goroutine %d 写入结束,新值为:%d\n", n, v)
rw.Unlock()
ch <- struct{}{}
}
// goroutine 4 进入写操作...
// goroutine 4 写入结束,新值为:81
// goroutine 0 进入读操作...
// goroutine 0 读取结束,值为:81
// goroutine 1 进入读操作...
// goroutine 1 读取结束,值为:81
// goroutine 0 进入写操作...
// goroutine 0 写入结束,新值为:887
// goroutine 2 进入读操作...
// goroutine 2 读取结束,值为:887
// goroutine 3 进入读操作...
// goroutine 3 读取结束,值为:887
// goroutine 4 进入读操作...
// goroutine 4 读取结束,值为:887
// goroutine 2 进入写操作...
// goroutine 2 写入结束,新值为:847
// goroutine 1 进入写操作...
// goroutine 1 写入结束,新值为:59
// goroutine 3 进入写操作...
// goroutine 3 写入结束,新值为:81
// 多个读操作同时读取一个变量时,虽然加了锁,但是读操作是不受影响的。(读和写是互斥的,读和读不互斥)
package main
import (
"sync"
"time"
)
var m *sync.RWMutex
func main() {
m = new(sync.RWMutex)
// 多个同时读
go read(1)
go read(2)
time.Sleep(2*time.Second)
}
func read(i int) {
println(i,"read start")
m.RLock()
println(i,"reading")// println(i,"reading")
time.Sleep(1*time.Second)
m.RUnlock()
println(i,"read over")
}
// 2 read start
// 2 reading
// 1 read start
// 1 reading
// 2 read over
// 1 read over
// 由于读写互斥,所以写操作开始的时候,读操作必须要等写操作进行完才能继续,不然读操作只能继续等待
package main
import (
"sync"
"time"
)
var m *sync.RWMutex
func main() {
m = new(sync.RWMutex)
// 写的时候啥也不能干
go write(1)
go read(2)
go write(3)
time.Sleep(2*time.Second)
}
func read(i int) {
println(i,"read start")
m.RLock()
println(i,"reading")
time.Sleep(1*time.Second)
m.RUnlock()
println(i,"read over")
}
func write(i int) {
println(i,"write start")
m.Lock()
println(i,"writing")
time.Sleep(1*time.Second)
m.Unlock()
println(i,"write over")
}
// 3 write start
// 3 writing
// 1 write start
// 2 read start
// 3 write over
// 2 reading
// 2 read over
// 1 writing
```
| 19.276119 | 97 | 0.592528 | eng_Latn | 0.211183 |
fdb3adcf7baaa973bfc1fc290fec73821b822bf5 | 31,553 | md | Markdown | articles/backup/backup-azure-file-share-rest-api.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/backup-azure-file-share-rest-api.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/backup-azure-file-share-rest-api.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Back up Azure file shares with the REST API
description: Learn how to use the REST API to back up Azure file shares to a Recovery Services vault
ms.topic: conceptual
ms.date: 02/16/2020
ms.openlocfilehash: 2cf385830ec1be17cb62432e6ef9cba7d82a9db1
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/02/2020
ms.locfileid: "84710616"
---
# <a name="backup-azure-file-share-using-azure-backup-via-rest-api"></a>Back up an Azure file share with Azure Backup using the REST API
This article describes how to back up an Azure file share with Azure Backup using the REST API.
This article assumes that you've already created a Recovery Services vault and a policy for configuring backup of the file share. If you haven't, see the [create vault](https://docs.microsoft.com/azure/backup/backup-azure-arm-userestapi-createorupdatevault) and [create policy](https://docs.microsoft.com/azure/backup/backup-azure-arm-userestapi-createorupdatepolicy) REST API tutorials.
In this article, we'll use the following resources:
- **Recovery Services vault**: *azurefilesvault*
- **Policy:** *schedule1*
- **Resource group**: *azurefiles*
- **Storage account**: *testvault2*
- **File share**: *testshare*
## <a name="configure-backup-for-an-unprotected-azure-file-share-using-rest-api"></a>Configure backup for an unprotected Azure file share using the REST API
### <a name="discover-storage-accounts-with-unprotected-azure-file-shares"></a>Discover storage accounts with unprotected Azure file shares
The vault needs to discover all Azure storage accounts in the subscription with file shares that can be backed up to the Recovery Services vault. This is triggered using the [refresh operation](https://docs.microsoft.com/rest/api/backup/protectioncontainers/refresh). It's an asynchronous *POST* operation that ensures the vault gets the latest list of all unprotected Azure file shares in the current subscription and 'caches' them. Once a file share is 'cached', Recovery Services can access the file share and protect it.
```http
POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/refreshContainers?api-version=2016-12-01&$filter={$filter}
```
The POST URI has `{subscriptionId}`, `{vaultName}`, `{vaultresourceGroupName}`, and `{fabricName}` parameters. In our example, the values for the different parameters are as follows:
- `{fabricName}` is *Azure*
- `{vaultName}` is *azurefilesvault*
- `{vaultresourceGroupName}` is *azurefiles*
- $filter=backupManagementType eq 'AzureStorage'
Since all the required parameters are given in the URI, there's no need for a separate request body.
```http
POST https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/refreshContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'
```
#### <a name="responses"></a>Responses
The refresh operation is an [asynchronous operation](https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-async-operations). It means this operation creates another operation that needs to be tracked separately.
It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
##### <a name="example-responses"></a>Example responses
Once the *POST* request is submitted, a 202 (Accepted) response is returned.
```http
HTTP/1.1 202 Accepted
'Pragma': 'no-cache'
'Expires': '-1'
'Location': ‘https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/ResourceGroups
/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/operationResults/
cca47745-12d2-42f9-b3a4-75335f18fdf6?api-version=2016-12-01’
'Retry-After': '60'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '6cc12ceb-90a2-430d-a1ec-9b6b6fdea92b'
'x-ms-client-request-id': ‘3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61’
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-reads': '11996'
'x-ms-correlation-request-id': '6cc12ceb-90a2-430d-a1ec-9b6b6fdea92b'
'x-ms-routing-request-id': CENTRALUSEUAP:20200203T091326Z:6cc12ceb-90a2-430d-a1ec-9b6b6fdea92b'
'Date': 'Mon, 03 Feb 2020 09:13:25 GMT'
```
Track the resulting operation using the Location header with a simple *GET* command.
```http
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/operationResults/cca47745-12d2-42f9-b3a4-75335f18fdf6?api-version=2016-12-01
```
Once all the Azure storage accounts are discovered, the GET command returns a 200 (No Content) response. The vault is now able to discover any storage account with file shares that can be backed up within the subscription.
```http
HTTP/1.1 200 NoContent
Cache-Control : no-cache
Pragma : no-cache
X-Content-Type-Options : nosniff
x-ms-request-id : d9bdb266-8349-4dbd-9688-de52f07648b2
x-ms-client-request-id : 3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61
Strict-Transport-Security : max-age=31536000; includeSubDomains
X-Powered-By : ASP.NET
x-ms-ratelimit-remaining-subscription-reads: 11933
x-ms-correlation-request-id : d9bdb266-8349-4dbd-9688-de52f07648b2
x-ms-routing-request-id : CENTRALUSEUAP:20200127T105304Z:d9bdb266-8349-4dbd-9688-de52f07648b2
Date : Mon, 27 Jan 2020 10:53:04 GMT
```
### <a name="get-list-of-storage-accounts-that-can-be-protected-with-recovery-services-vault"></a>Get the list of storage accounts that can be protected with the Recovery Services vault
To confirm that 'caching' is done, list all the storage accounts in the subscription that can be protected, and then locate the desired storage account in the response. This is done using the [GET ProtectableContainers](https://docs.microsoft.com/rest/api/backup/protectablecontainers/list) operation.
```http
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectableContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'
```
The *GET* URI has all the required parameters. No additional request body is needed.
Example response body:
```json
{
"value": [
{
"id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers
/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/
protectableContainers/StorageContainer;Storage;AzureFiles;testvault2",
"name": "StorageContainer;Storage;AzureFiles;testvault2",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectableContainers",
"properties": {
"friendlyName": "testvault2",
"backupManagementType": "AzureStorage",
"protectableContainerType": "StorageContainer",
"healthStatus": "Healthy",
"containerId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/
AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2"
}
}
]
}
```
Since we can find the *testvault2* storage account in the response body with its friendly name, the refresh operation above was successful. The Recovery Services vault can now reliably discover storage accounts with unprotected file shares in the same subscription.
### <a name="register-storage-account-with-recovery-services-vault"></a>Register the storage account with the Recovery Services vault
This step is needed only if you didn't register the storage account with the vault earlier. You can register the vault via the [ProtectionContainers-Register operation](https://docs.microsoft.com/rest/api/backup/protectioncontainers/register).
```http
PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}?api-version=2016-12-01
```
Set the URI variables as follows:
- {resourceGroupName}: *azurefiles*
- {fabricName}: *Azure*
- {vaultName}: *azurefilesvault*
- {containerName}: This is the name attribute from the response body of the GET ProtectableContainers operation.
In our example, it's *StorageContainer;Storage;AzureFiles;testvault2*
>[!NOTE]
> Always take the name attribute from the response and fill it in this request. Do NOT construct or hardcode the container name format. If you construct or hardcode it, the API call will fail if the container name format changes in the future.
<br>
```http
PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2?api-version=2016-12-01
```
Create the request body as follows:
```json
{
"properties": {
"containerType": "StorageContainer",
"sourceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"resourceGroup": "AzureFiles",
"friendlyName": "testvault2",
"backupManagementType": "AzureStorage"
}
```
For the complete list of request body definitions and other details, refer to [ProtectionContainers-Register](https://docs.microsoft.com/rest/api/backup/protectioncontainers/register#azurestoragecontainer).
This is an asynchronous operation and returns two responses: "202 Accepted" when the operation is accepted and "200 OK" when the operation completes. To track the operation status, use the Location header to get the latest status of the operation.
```http
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/operationresults/1a3c8ee7-e0e5-43ed-b8b3-73cc992b6db9?api-version=2016-12-01
```
Example response body once the operation completes:
```json
{
"id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/
protectionContainers/StorageContainer;Storage;AzureFiles;testvault2",
"name": "StorageContainer;Storage;AzureFiles;testvault2",
"properties": {
"sourceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"protectedItemCount": 0,
"friendlyName": "testvault2",
"backupManagementType": "AzureStorage",
"registrationStatus": "Registered",
"healthStatus": "Healthy",
"containerType": "StorageContainer",
"protectableObjectType": "StorageContainer"
}
}
```
You can verify whether the registration was successful from the value of the *registrationstatus* parameter in the response body. In our case, it shows the status as registered for *testvault2*, so the registration operation was successful.
### <a name="inquire-all-unprotected-files-shares-under-a-storage-account"></a>Inquire all unprotected file shares under a storage account
You can inquire about the protectable items in a storage account using the [Protection Containers-Inquire](https://docs.microsoft.com/rest/api/backup/protectioncontainers/inquire) operation. It's an asynchronous operation and the results should be tracked using the Location header.
```http
POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/inquire?api-version=2016-12-01
```
Set the variables for the URI above as follows:
- {vaultName}: *azurefilesvault*
- {fabricName}: *Azure*
- {containerName}: Refer to the name attribute from the response body of the GET ProtectableContainers operation. In our example, it's *StorageContainer;Storage;AzureFiles;testvault2*
```http
https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/inquire?api-version=2016-12-01
```
Once the request succeeds, it returns the status code "OK".
```http
Cache-Control : no-cache
Pragma : no-cache
X-Content-Type-Options: nosniff
x-ms-request-id : 68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
x-ms-client-request-id : 3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61
Strict-Transport-Security: max-age=31536000; includeSubDomains
Server : Microsoft-IIS/10.0
X-Powered-B : ASP.NET
x-ms-ratelimit-remaining-subscription-reads: 11932
x-ms-correlation-request-id : 68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
x-ms-routing-request-id : CENTRALUSEUAP:20200127T105305Z:68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
Date : Mon, 27 Jan 2020 10:53:05 GMT
```
### <a name="select-the-file-share-you-want-to-back-up"></a>Select the file share you want to back up
You can list all protectable items under the subscription and locate the desired file share to back up using the [GET backupprotectableItems](https://docs.microsoft.com/rest/api/backup/backupprotectableitems/list) operation.
```http
GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupProtectableItems?api-version=2016-12-01&$filter={$filter}
```
Construct the URI as follows:
- {vaultName}: *azurefilesvault*
- {$filter}: *backupManagementType eq 'AzureStorage'*
```http
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupProtectableItems?$filter=backupManagementType eq 'AzureStorage'&api-version=2016-12-01
```
Example response:
```json
Status Code:200
{
"value": [
{
"id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afaccount1/protectableItems/azurefileshare;azurefiles1",
"name": "azurefileshare;azurefiles1",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afaccount1",
"parentContainerFriendlyName": "afaccount1",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "azurefiles1",
"protectionState": "NotProtected"
}
},
{
"id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afsaccount/protectableItems/azurefileshare;afsresource",
"name": "azurefileshare;afsresource",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"parentContainerFriendlyName": "afsaccount",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "afsresource",
"protectionState": "NotProtected"
}
},
{
"id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/storagecontainer;storage;azurefiles;testvault2/protectableItems/azurefileshare;testshare",
"name": "azurefileshare;testshare",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"parentContainerFriendlyName": "testvault2",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "testshare",
"protectionState": "NotProtected"
}
}
]
}
```
The response contains the list of all unprotected file shares and all the information required by Azure Recovery Services to configure the backup.
### <a name="enable-backup-for-the-file-share"></a>Enable backup for the file share
After the relevant file share is 'identified' with the friendly name, select the policy to protect. To learn more about existing policies in the vault, refer to the [list Policy API](https://docs.microsoft.com/rest/api/backup/backuppolicies/list). Then select the [relevant policy](https://docs.microsoft.com/rest/api/backup/protectionpolicies/get) by referring to the policy name. To create policies, refer to the [create policy tutorial](https://docs.microsoft.com/azure/backup/backup-azure-arm-userestapi-createorupdatepolicy).
Enabling protection is an asynchronous *PUT* operation that creates a 'protected item'.
```http
PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}?api-version=2019-05-13
```
Set the **containername** and **protecteditemname** variables using the ID attribute in the response body of the GET backupprotectableitems operation.
In our example, the ID of the file share we want to protect is:
```output
"/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/storagecontainer;storage;azurefiles;testvault2/protectableItems/azurefileshare;testshare
```
- {containername}: *storagecontainer;storage;azurefiles;testvault2*
- {protectedItemName}: *azurefileshare;testshare*
You can also refer to the **name** attribute of the protection container and protectable item responses.
>[!NOTE]
>Always take the name attribute from the response and fill it in this request. Do NOT construct or hardcode the container name or protected item name format. If you construct or hardcode them, the API call will fail if the container name or protected item name format changes in the future.
<br>
```http
PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare;testshare?api-version=2016-12-01
```
Create the request body:
The following request body defines the properties required to create a protected item.
```json
{
"properties": {
"protectedItemType": "AzureFileShareProtectedItem",
"sourceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"policyId": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPolicies/schedule1"
}
}
```
The **sourceResourceId** is the **parentcontainerFabricID** from the response of GET backupprotectableItems.
Example response
The creation of a protected item is an asynchronous operation that creates another operation which needs to be tracked. It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
Once you submit the *PUT* request to create or update the protected item, the initial response is 202 (Accepted) with a Location header.
```http
HTTP/1.1 202 Accepted
Cache-Control : no-cache
Pragma : no-cache
Location : https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare;testshare/operationResults/c3a52d1d-0853-4211-8141-477c65740264?api-version=2016-12-01
Retry-Afte : 60
Azure-AsyncOperation : https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare;testshare/operationResults/c3a52d1d-0853-4211-8141-477c65740264?api-version=2016-12-01
X-Content-Type-Options : nosniff
x-ms-request-id : b55527fa-f473-4f09-b169-9cc3a7a39065
x-ms-client-request-id: 3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61
Strict-Transport-Security : max-age=31536000; includeSubDomains
X-Powered-By : ASP.NET
x-ms-ratelimit-remaining-subscription-writes: 1198
x-ms-correlation-request-id : b55527fa-f473-4f09-b169-9cc3a7a39065
x-ms-routing-request-id : CENTRALUSEUAP:20200127T105412Z:b55527fa-f473-4f09-b169-9cc3a7a39065
Date : Mon, 27 Jan 2020 10:54:12 GMT
```
Then track the resulting operation using the Location header or the Azure-AsyncOperation header with a *GET* command.
```http
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOperations/c3a52d1d-0853-4211-8141-477c65740264?api-version=2016-12-01
```
Once the operation completes, it returns 200 (OK) with the protected item content in the response body.
Example response body:
```json
{
"id": "c3a52d1d-0853-4211-8141-477c65740264",
"name": "c3a52d1d-0853-4211-8141-477c65740264",
"status": "Succeeded",
"startTime": "2020-02-03T18:10:48.296012Z",
"endTime": "2020-02-03T18:10:48.296012Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b"
}
}
```
This confirms that protection is enabled for the file share and that the first backup will be triggered according to the policy schedule.
## <a name="trigger-an-on-demand-backup-for-file-share"></a>Trigger an on-demand backup for the file share
Once an Azure file share is configured for backup, backups run according to the policy schedule. You can wait for the first scheduled backup or trigger an on-demand backup at any time.
Triggering an on-demand backup is a POST operation.
```http
POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/backup?api-version=2016-12-01
```
The {containerName} and {protectedItemName} were constructed earlier when enabling the backup. For our example, this translates to:
```http
POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare;testshare/backup?api-version=2017-07-01
```
### <a name="create-request-body"></a>Create the request body
To trigger an on-demand backup, the following are the components of the request body.
| Name       | Type                        | Description                        |
| ---------- | --------------------------- | ---------------------------------- |
| properties | AzureFileShareBackupRequest | BackupRequestResource properties   |
For the complete list of request body definitions and other details, refer to the [trigger backups for protected items REST API document](https://docs.microsoft.com/rest/api/backup/backups/trigger#request-body).
Example request body
```json
{
"properties":{
"objectType":"AzureFileShareBackupRequest",
"recoveryPointExpiryTimeInUTC":"2020-03-07T18:29:59.000Z"
}
}
```
### <a name="responses"></a>Responses
Triggering an on-demand backup is an [asynchronous operation](https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-async-operations). It means this operation creates another operation that needs to be tracked separately.
It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation completes.
### <a name="example-responses"></a>Example responses
Once you submit the *POST* request for an on-demand backup, the initial response is 202 (Accepted) with a Location header or Azure-async-header.
```http
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Expires': '-1'
'Location': https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare;testshare/operationResults/dc62d524-427a-4093-968d-e951c0a0726e?api-version=2017-07-01
'Retry-After': '60'
'Azure-AsyncOperation': https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare;testshare/operationsStatus/dc62d524-427a-4093-968d-e951c0a0726e?api-version=2017-07-01
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '2e03b8d4-66b1-48cf-8094-aa8bff57e8fb'
'x-ms-client-request-id': 'a644712a-4895-11ea-ba57-0a580af42708, a644712a-4895-11ea-ba57-0a580af42708'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-writes': '1199'
'x-ms-correlation-request-id': '2e03b8d4-66b1-48cf-8094-aa8bff57e8fb'
'x-ms-routing-request-id': 'WESTEUROPE:20200206T040339Z:2e03b8d4-66b1-48cf-8094-aa8bff57e8fb'
'Date': 'Thu, 06 Feb 2020 04:03:38 GMT'
'Content-Length': '0'
```
Then track the resulting operation using the Location header or the Azure-AsyncOperation header with a *GET* command.
```http
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOperations/dc62d524-427a-4093-968d-e951c0a0726e?api-version=2016-12-01
```
Once the operation completes, it returns 200 (OK) with the ID of the resulting backup job in the response body.
#### <a name="sample-response-body"></a>Sample response body
```json
{
"id": "dc62d524-427a-4093-968d-e951c0a0726e",
"name": "dc62d524-427a-4093-968d-e951c0a0726e",
"status": "Succeeded",
"startTime": "2020-02-06T11:06:02.1327954Z",
"endTime": "2020-02-06T11:06:02.1327954Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "39282261-cb52-43f5-9dd0-ffaf66beeaef"
}
}
```
Since the backup job is a long-running operation, it needs to be tracked as explained in the [monitor jobs using the REST API document](https://docs.microsoft.com/azure/backup/backup-azure-arm-userestapi-managejobs#tracking-the-job).
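As a rough, unofficial sketch of what that tracking can look like from a shell (the `TOKEN` and `OPERATION_URL` variables and the use of `jq` are assumptions, not part of the API contract shown above), the Azure-AsyncOperation URL could be polled until the job reaches a terminal state:
```bash
# Hypothetical polling loop; supply a valid bearer token and the operation URL returned earlier
while true; do
  status=$(curl -s -H "Authorization: Bearer $TOKEN" "$OPERATION_URL" | jq -r '.status')
  echo "Current status: $status"
  if [ "$status" != "InProgress" ]; then
    break
  fi
  sleep 60
done
```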
## <a name="next-steps"></a>Next steps
- Learn how to [restore Azure file shares using the REST API](restore-azure-file-share-rest-api.md).
| 57.578467 | 696 | 0.784331 | spa_Latn | 0.675532 |
fdb4037531491135d1764db27230905659358e63 | 2,434 | md | Markdown | src/de/2020-04/06/03.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | null | null | null | src/de/2020-04/06/03.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | null | null | null | src/de/2020-04/06/03.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | null | null | null | ---
title: On the Run
date: 02/11/2020
---
`Read Genesis 28:10-17. In what context does this story take place, and what does it teach us about God's grace toward those who are, in a sense, fleeing from their sins?`
In his dealings with the rest of the family, Jacob, with his mother's help, committed a terrible deception, and now he is paying for it. His brother hurls fierce threats against him; he has become a fugitive and is now on his way to his uncle in Haran. Everything is uncertain and threatening.
One day Jacob trudges along at dusk and then in the dark. He is in the middle of nowhere, with only the sky above him for a roof. With a stone for a pillow he falls asleep. But the empty unconsciousness of sleep is soon interrupted. He has the famous dream, and the ladder or stairway he sees stands on the earth and reaches up to heaven. Angels are ascending and descending on it.
Then he hears a voice say: "I am the LORD, the God of your father Abraham." (Gen 28:13) The voice repeats the promises that were handed down to Jacob by his family. Your descendants will be numerous. They will be a blessing to all the families of the earth. "And behold, I am with you," the voice continues, "and will keep you wherever you go ... I will not leave you until I have done all that I have promised you." (v. 15)
Ellen White wrote that much later Paul, "like Jacob in his dream, [saw] the ladder to heaven, a symbol of Christ, who has restored the connection between heaven and earth, between mortal man and the immortal God. Paul's faith was strengthened as he remembered the patriarchs and prophets who had placed their trust in the one God who was also his support and comfort. For Him he would now give his life." (GNA 283)
Jacob awakes and says to himself: "Surely the LORD is in this place, and I did not know it!" (Gen 28:16) What happened here was awe-inspiring. He would never forget this place, and he gives it a name. Then he vows lifelong faithfulness to God.
`What can we learn from this story about how God seeks to reach us in Christ despite our sins? To say it once more: Why must Christian education always keep this principle in view in what it teaches?`
fdb4c6689bb20ac5f5a7b89af78c2680abe95c27 | 3,781 | markdown | Markdown | doc/background.markdown | evanphx/citrus | 9efbf666b676915110a3778ef5984b5c670e2513 | [
"Unlicense"
] | 1 | 2018-02-18T20:56:59.000Z | 2018-02-18T20:56:59.000Z | doc/background.markdown | evanphx/citrus | 9efbf666b676915110a3778ef5984b5c670e2513 | [
"Unlicense"
] | null | null | null | doc/background.markdown | evanphx/citrus | 9efbf666b676915110a3778ef5984b5c670e2513 | [
"Unlicense"
] | null | null | null | # Background
In order to be able to use Citrus effectively, you must first understand the
difference between syntax and semantics. Syntax is a set of rules that govern
the way letters and punctuation may be used in a language. For example, English
syntax dictates that proper nouns should start with a capital letter and that
sentences should end with a period.
Semantics are the rules by which meaning may be derived in a language. For
example, as you read a book you are able to make some sense of the particular
way in which words on a page are combined to form thoughts and express ideas
because you understand what the words themselves mean and you understand what
they mean collectively.
Computers use a similar process when interpreting code. First, the code must be
parsed into recognizable symbols or tokens. These tokens may then be passed to
an interpreter which is responsible for forming actual instructions from them.
Citrus is a pure Ruby library that allows you to perform both lexical analysis
and semantic interpretation quickly and easily. Using Citrus you can write
powerful parsers that are simple to understand and easy to create and maintain.
In Citrus, there are three main types of objects: rules, grammars, and matches.
## Rules
A [Rule](api/classes/Citrus/Rule.html) is an object that specifies some matching
behavior on a string. There are two types of rules: terminals and non-terminals.
Terminals can be either Ruby strings or regular expressions that specify some
input to match. For example, a terminal created from the string "end" would
match any sequence of the characters "e", "n", and "d", in that order. Terminals
created from regular expressions may match any sequence of characters that can
be generated from that expression.
Non-terminals are rules that may contain other rules but do not themselves match
directly on the input. For example, a Repeat is a non-terminal that may contain
one other rule that will try and match a certain number of times. Several other
types of non-terminals are available that will be discussed later.
Rule objects may also have semantic information associated with them in the form
of Ruby modules. Rules use these modules to extend the matches they create.
## Grammars
A grammar is a container for rules. Usually the rules in a grammar collectively
form a complete specification for some language, or a well-defined subset
thereof.
A Citrus grammar is really just a souped-up Ruby
[module](http://ruby-doc.org/core/classes/Module.html). These modules may be
included in other grammar modules in the same way that Ruby modules are normally
used. This property allows you to divide a complex grammar into more manageable,
reusable pieces that may be combined at runtime. Any grammar rule with the same
name as a rule in an included grammar may access that rule with a mechanism
similar to Ruby's super keyword.
## Matches
Matches are created by rule objects when they match on the input. A
[Match](api/classes/Citrus/Match.html) is actually a
[String](http://ruby-doc.org/core/classes/String.html) object with some extra
information attached such as the name(s) of the rule(s) from which it was
generated and any submatches it may contain.
During a parse, matches are arranged in a tree structure where any match may
contain any number of other matches. This structure is determined by the way in
which the rule that generated each match is used in the grammar. For example, a
match that is created from a non-terminal rule that contains several other
terminals will likewise contain several matches, one for each terminal.
Match objects may be extended with semantic information in the form of methods.
These methods should provide various interpretations for the semantic value of a
match.
| 50.413333 | 80 | 0.802962 | eng_Latn | 0.999959 |
fdb4f5060c0aaf83f0531ceaa40e67e999fab6d9 | 4,259 | md | Markdown | README.md | phillipemoreira/react-with-typescript | bea759fdb61869ff69437d0acb940a8d524beacc | [
"MIT"
] | 2 | 2018-04-25T14:44:04.000Z | 2020-06-02T03:34:16.000Z | README.md | phillipemoreira/react-with-typescript | bea759fdb61869ff69437d0acb940a8d524beacc | [
"MIT"
] | null | null | null | README.md | phillipemoreira/react-with-typescript | bea759fdb61869ff69437d0acb940a8d524beacc | [
"MIT"
] | null | null | null | # Typescript + es6 react setup
This repo has the purpose of showing how make both **Es6** and **Typescript** coexist in a react project setup.
## Motivation
I am currently working on a few react projects written in javascript only. My team and I wanted to experiment and
perhaps migrate to typescript, but since the code is already in production and contain tens of thousands of lines, it
would be unsiwise to try and switch everything at once.
## Prerequisites
* node and npm
As fars as I know, it should work with any node/npm combination, but I am using node *6.11.3* and npm *3.10.10*.
## Running the project
```
npm install;
npm run build;
npm start;
```
* `npm run build;` will use webpack lint, test and bundle the files into a build folder.
* `npm start` will fire a very simple `express` server to provide the content in a default port.
## Bundling
It uses a simple [webpack](https://webpack.js.org/) configuration to bundle a set of static files (index.html and javascript chunks), babel is used to transpile es6 whereas
typescript compiler via ts-loader is used to transpile typescript.
### Where is it setup?
Mostly [webpack.config.js](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/webpack.config.js), but also [.babelrc](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/.babelrc) and [tsconfig.json](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/tsconfig.json)
## Linting
[ESLint](https://www.npmjs.com/package/eslint) is configured to extend some well adopted rules, such as [airbnb](https://github.com/airbnb/javascript) rules, as well as some
others defined in [.eslintrc](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/.eslintrc) file. Core typescrict static validations are performed by the compiler itself and they are performed during the webpack transpiling process, but [TSLint](https://www.npmjs.com/package/tslint) is also setup to run extend validations (such as styling and maintainability).
That being said, if you are using [vscode](https://code.visualstudio.com/) (which I strongly recommend you to do), you get static validations for both es6 and typescript if by install the **extensions** [ESLint](https://github.com/Microsoft/vscode-eslint) and [TSLint](https://github.com/Microsoft/vscode-tslint).

PS: Some TSLint rules require the compiler type checker, so they will not show in the editor, but they will fail during npm lint script.
### Where is it setup?
[.eslintrc](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/.eslintrc), [tslint.json](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/tslint.json) and we may say that also [tsconfig.json](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/tsconfig.json)
## Testing
[Jest](https://facebook.github.io/jest/) is the testing framework of choice, it is used, again, for testing both typescript and javascript files, Not only regular modules, but also react comopnents, for the react components we use [Enzyme](https://github.com/airbnb/enzyme) to mounting them. In order for jest to be able to understand both languages, we need to use preprocessors, for typescript we use
a third party packaged called [ts-jest](https://github.com/kulshekhar/ts-jest) and for es6 [es6-preprocessor](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/es6-preprocessor.js).
PS: Notice that we wouldn't have to explicitly define the es6-preprocessor if we weren't working with the typescript preprocessor as well.
### Coverage
By running `npm run build` or simply `npm run test` you get a coverage report in:
```
|---coverage
|------lcov-report
|---------index.html
```
And it looks like this:

#### Coverage threshold
Jest allows us to define a code coverage threshold, the build will fail if they are not met.
```
"coverageThreshold": {
"global": {
"statements": INCREASE_ME,
"branches": INCREASE_ME,
"functions": INCREASE_ME,
"lines": INCREASE_ME
}
},
```
### Where is it setup?
[jest.config.json](https://github.com/phillipemoreira/react-with-ts-and-js/blob/master/jest.config.json)
| 53.911392 | 402 | 0.755107 | eng_Latn | 0.971858 |
fdb54a2b7664219cae7aacbc81947748325f62c1 | 911 | md | Markdown | src/actions/vamp.md | stefanzweifel/github-actions | 50ba51439bcaf382552d2f4b95578780b641977f | [
"MIT"
] | 159 | 2019-01-25T10:06:36.000Z | 2022-03-31T19:09:43.000Z | src/actions/vamp.md | stefanzweifel/github-actions | 50ba51439bcaf382552d2f4b95578780b641977f | [
"MIT"
] | 46 | 2019-01-26T09:17:57.000Z | 2022-03-25T12:08:20.000Z | src/actions/vamp.md | stefanzweifel/github-actions | 50ba51439bcaf382552d2f4b95578780b641977f | [
"MIT"
] | 52 | 2019-01-26T03:21:18.000Z | 2022-03-26T04:15:36.000Z | ---
path: '/vamp'
title: 'vamp'
github_url: 'https://github.com/magneticio/vamp-github-actions'
author: 'magneticio'
subtitle: 'GitHub Actions For Vamp'
tags: []
---
# Vamp GitHub Actions
These official Vamp GitHub Actions allow you to run Vamp CLI commands within you GitHub actions workflow.
## Getting Started
## Actions
Usage information for individual actions can be found in their respective directories.
### CLI Action
Wraps the Vamp CLI to enable Vamp commands to be run. [action](https://github.com/magneticio/vamp-github-actions/blob/master/cli)
### Login Action
Wraps the Vamp CLI `vamp login` command, allowing for Actions to log into Vamp. [action](https://github.com/magneticio/vamp-github-actions/blob/master/login)
### Workflow Action
Wraps the Vamp CLI, allowing for Actions to create a Vamp Workflow. [action](https://github.com/magneticio/vamp-github-actions/blob/master/workflow)
| 29.387097 | 157 | 0.762898 | eng_Latn | 0.802256 |
fdb5c67ee10c6417fb064fde9a4280e5ddeb2997 | 582 | md | Markdown | languages/ruby/README.md | connec/oso | a12d94206807b69beb6fe7a9070b9afcacdfc845 | [
"Apache-2.0"
] | 2,167 | 2020-07-28T15:49:48.000Z | 2022-03-31T06:11:28.000Z | languages/ruby/README.md | connec/oso | a12d94206807b69beb6fe7a9070b9afcacdfc845 | [
"Apache-2.0"
] | 1,060 | 2020-07-25T18:37:07.000Z | 2022-03-30T05:49:44.000Z | languages/ruby/README.md | connec/oso | a12d94206807b69beb6fe7a9070b9afcacdfc845 | [
"Apache-2.0"
] | 118 | 2020-08-05T19:27:14.000Z | 2022-03-31T16:37:39.000Z | # oso-oso
## Installation
Add this line to your application's Gemfile:
gem 'oso-oso'
And then execute:
$ bundle install
Or install it yourself as:
$ gem install oso-oso
## Development
After checking out the repo, run `bundle install` to install dependencies.
Then, run `bundle exec rake spec` to run the tests. You can also run `bundle
exec oso` for an interactive REPL that will allow you to experiment.
To install this gem onto your local machine, run `bundle exec rake install`.
New releases are minted and pushed to RubyGems via GitHub Actions workflows.
| 22.384615 | 76 | 0.745704 | eng_Latn | 0.991897 |
fdb658f7f2772189c81bc985a17e2afaaac4aebc | 33 | md | Markdown | README.md | marshal-brilleteau/egsTest | b58b08e808517fcfbc32ebccf88c55c54f61f1e6 | [
"Unlicense"
] | null | null | null | README.md | marshal-brilleteau/egsTest | b58b08e808517fcfbc32ebccf88c55c54f61f1e6 | [
"Unlicense"
] | null | null | null | README.md | marshal-brilleteau/egsTest | b58b08e808517fcfbc32ebccf88c55c54f61f1e6 | [
"Unlicense"
] | null | null | null | # egsTest
mon premier repository
| 11 | 22 | 0.818182 | fra_Latn | 0.5402 |
fdb6855b5acaea5a952dd3505429580ee485c75d | 58,601 | md | Markdown | repos/nextcloud/remote/22.2-fpm.md | devcode1981/repo-info | 2d59ac4492b5054ca75008b755f43d07ba879801 | [
"Apache-2.0"
] | 1 | 2022-01-14T20:54:33.000Z | 2022-01-14T20:54:33.000Z | repos/nextcloud/remote/22.2-fpm.md | devcode1981/repo-info | 2d59ac4492b5054ca75008b755f43d07ba879801 | [
"Apache-2.0"
] | null | null | null | repos/nextcloud/remote/22.2-fpm.md | devcode1981/repo-info | 2d59ac4492b5054ca75008b755f43d07ba879801 | [
"Apache-2.0"
] | 1 | 2021-05-21T18:38:51.000Z | 2021-05-21T18:38:51.000Z | ## `nextcloud:22.2-fpm`
```console
$ docker pull nextcloud@sha256:59a8a741883406a15037b1492fd5c62f90a28a7ac9ff2f2139065f8e4f155822
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms: 4
- linux; amd64
- linux; arm variant v7
- linux; 386
- linux; mips64le
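
The digests of the individual platform images below are resolved from this manifest list. A minimal sketch of reproducing that resolution from the registry, assuming `docker buildx` is available locally:

```console
$ docker buildx imagetools inspect nextcloud@sha256:59a8a741883406a15037b1492fd5c62f90a28a7ac9ff2f2139065f8e4f155822
```

The output lists one entry per platform, and each entry's digest should match the `docker pull nextcloud@sha256:...` command at the top of the corresponding section below.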
### `nextcloud:22.2-fpm` - linux; amd64
```console
$ docker pull nextcloud@sha256:4a3f9262d217eb7ad6b267ecfd561b369b397720832197ca3cdee2bff53b6d11
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **326.9 MB (326949260 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:3ffccb4aebf83a0e9f003a734915b28286a78fa5e1e50f650410950edea4e7d8`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 28 Sep 2021 01:22:40 GMT
ADD file:3c520ad50b13b922356e0a5e4f7c12b202e09584acf332a65d5603dacd4a9380 in /
# Tue, 28 Sep 2021 01:22:41 GMT
CMD ["bash"]
# Tue, 28 Sep 2021 13:38:21 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 28 Sep 2021 13:38:22 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 28 Sep 2021 13:38:52 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Tue, 28 Sep 2021 13:38:52 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 28 Sep 2021 13:38:54 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Tue, 28 Sep 2021 13:54:04 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Tue, 28 Sep 2021 13:54:05 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Tue, 28 Sep 2021 13:54:05 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Tue, 28 Sep 2021 13:54:05 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Tue, 28 Sep 2021 14:56:42 GMT
ENV GPG_KEYS=1729F83938DA44E27BA0F4D3DBDB397470D12172 BFDDD28642824F8118EF77909B67A5C12229118F
# Tue, 28 Sep 2021 14:56:42 GMT
ENV PHP_VERSION=8.0.11
# Tue, 28 Sep 2021 14:56:43 GMT
ENV PHP_URL=https://www.php.net/distributions/php-8.0.11.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-8.0.11.tar.xz.asc
# Tue, 28 Sep 2021 14:56:43 GMT
ENV PHP_SHA256=e3e5f764ae57b31eb65244a45512f0b22d7bef05f2052b23989c053901552e16
# Tue, 28 Sep 2021 14:57:09 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Tue, 28 Sep 2021 14:57:09 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Tue, 28 Sep 2021 15:03:51 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends ${PHP_EXTRA_BUILD_DEPS:-} libargon2-dev libcurl4-openssl-dev libonig-dev libreadline-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-openssl --with-readline --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Tue, 28 Sep 2021 15:03:52 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Tue, 28 Sep 2021 15:03:53 GMT
RUN docker-php-ext-enable sodium
# Tue, 28 Sep 2021 15:03:53 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Tue, 28 Sep 2021 15:03:53 GMT
WORKDIR /var/www/html
# Tue, 28 Sep 2021 15:03:54 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Tue, 28 Sep 2021 15:03:54 GMT
STOPSIGNAL SIGQUIT
# Tue, 28 Sep 2021 15:03:55 GMT
EXPOSE 9000
# Tue, 28 Sep 2021 15:03:55 GMT
CMD ["php-fpm"]
# Wed, 29 Sep 2021 14:27:23 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends rsync bzip2 busybox-static ; rm -rf /var/lib/apt/lists/*; mkdir -p /var/spool/cron/crontabs; echo '*/5 * * * * php -f /var/www/html/cron.php' > /var/spool/cron/crontabs/www-data
# Wed, 29 Sep 2021 14:27:23 GMT
ENV PHP_MEMORY_LIMIT=512M
# Wed, 29 Sep 2021 14:27:23 GMT
ENV PHP_UPLOAD_LIMIT=512M
# Fri, 01 Oct 2021 02:31:50 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libevent-dev libfreetype6-dev libicu-dev libjpeg-dev libldap-common libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev libxml2-dev libmagickwand-dev libzip-dev libwebp-dev libgmp-dev ; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install -j "$(nproc)" bcmath exif gd intl ldap opcache pcntl pdo_mysql pdo_pgsql zip gmp ; pecl install APCu-5.1.20; pecl install memcached-3.1.5; pecl install redis-5.3.4; pecl install imagick-3.5.1; docker-php-ext-enable apcu memcached redis imagick ; rm -r /tmp/pear; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Fri, 01 Oct 2021 02:31:51 GMT
RUN { echo 'opcache.enable=1'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=10000'; echo 'opcache.memory_consumption=128'; echo 'opcache.save_comments=1'; echo 'opcache.revalidate_freq=1'; } > /usr/local/etc/php/conf.d/opcache-recommended.ini; echo 'apc.enable_cli=1' >> /usr/local/etc/php/conf.d/docker-php-ext-apcu.ini; { echo 'memory_limit=${PHP_MEMORY_LIMIT}'; echo 'upload_max_filesize=${PHP_UPLOAD_LIMIT}'; echo 'post_max_size=${PHP_UPLOAD_LIMIT}'; } > /usr/local/etc/php/conf.d/nextcloud.ini; mkdir /var/www/data; chown -R www-data:root /var/www; chmod -R g=u /var/www
# Fri, 01 Oct 2021 02:31:51 GMT
VOLUME [/var/www/html]
# Fri, 01 Oct 2021 02:31:51 GMT
ENV NEXTCLOUD_VERSION=22.2.0
# Fri, 01 Oct 2021 02:32:48 GMT
RUN set -ex; fetchDeps=" gnupg dirmngr "; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; curl -fsSL -o nextcloud.tar.bz2 "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2"; curl -fsSL -o nextcloud.tar.bz2.asc "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2.asc"; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys 28806A878AE423A28372792ED75899B9A724937A; gpg --batch --verify nextcloud.tar.bz2.asc nextcloud.tar.bz2; tar -xjf nextcloud.tar.bz2 -C /usr/src/; gpgconf --kill all; rm nextcloud.tar.bz2.asc nextcloud.tar.bz2; rm -rf "$GNUPGHOME" /usr/src/nextcloud/updater; mkdir -p /usr/src/nextcloud/data; mkdir -p /usr/src/nextcloud/custom_apps; chmod +x /usr/src/nextcloud/occ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps; rm -rf /var/lib/apt/lists/*
# Fri, 01 Oct 2021 02:32:50 GMT
COPY multi:5c7d3e21c40c6f3326b9c24bb148355014771883d3bc821f8ada4fed6795cbb4 in /
# Fri, 01 Oct 2021 02:32:51 GMT
COPY multi:d1870de3d4b4de5680360a8bcad7129a7c7615ba76daad773ab1eee24d4a949f in /usr/src/nextcloud/config/
# Fri, 01 Oct 2021 02:32:51 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Fri, 01 Oct 2021 02:32:51 GMT
CMD ["php-fpm"]
```
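
The `PHP_MEMORY_LIMIT` and `PHP_UPLOAD_LIMIT` defaults set in the build steps above are referenced via `${...}` in the generated `nextcloud.ini`, so they can be overridden per container at start time. A sketch, where the `1G`/`16G` values are arbitrary examples:

```console
$ docker run -d -e PHP_MEMORY_LIMIT=1G -e PHP_UPLOAD_LIMIT=16G -v nextcloud:/var/www/html nextcloud:22.2-fpm
```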
- Layers:
- `sha256:bd897bb914af2ec64f1cff5856aefa1ae99b072e38db0b7d801f9679b04aad74`
Last Modified: Tue, 28 Sep 2021 01:29:00 GMT
Size: 31.4 MB (31368912 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9c29eb0e4b791ce44ab29316799780ae78900f2bf1f3dff9217d7f8d6cbc5100`
Last Modified: Tue, 28 Sep 2021 17:14:27 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a286374447ccd3b3709ee9f9c1e86999cfef287dd93a9dc32778bf0303668108`
Last Modified: Tue, 28 Sep 2021 17:14:42 GMT
Size: 91.6 MB (91605546 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:58915f2347baccd7b503fc13cafe403acf7e0300df3f0f565dd97dfdb03328fe`
Last Modified: Tue, 28 Sep 2021 17:14:26 GMT
Size: 269.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:17c8fcc49a1573114feadf93bec9a922a64f38d1bf84c497726a8bfe1566cae1`
Last Modified: Tue, 28 Sep 2021 17:19:17 GMT
Size: 11.1 MB (11123617 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8c4a763b92a46040b84150807f31ac88b250cd59b659f8fb50454aa8430f2b09`
Last Modified: Tue, 28 Sep 2021 17:19:13 GMT
Size: 493.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:305bb477143988872b9b174656027bcf5c55e1e320bd2d2154f68261004d1085`
Last Modified: Tue, 28 Sep 2021 17:19:19 GMT
Size: 30.5 MB (30466283 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b3f74d570f65863e872cd9bb61f7d4e2658243b06d82374149999c0ea1d95794`
Last Modified: Tue, 28 Sep 2021 17:19:13 GMT
Size: 2.3 KB (2270 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:98e737197300044d329578a0c7915877fb28134b5de2c8659086220799e3645f`
Last Modified: Tue, 28 Sep 2021 17:19:13 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f4b6cad880e53dafae4e580db7aaaa952670228d3e61b142f4dff02821231c27`
Last Modified: Tue, 28 Sep 2021 17:19:13 GMT
Size: 8.6 KB (8574 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:dfa6d50500d10c2ec9dd790b5ee605e1a5f134893678c17c4e3b91fcd3761b7a`
Last Modified: Fri, 01 Oct 2021 02:34:30 GMT
Size: 1.7 MB (1668670 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f8f0bfabf09e4bd411683352250866f20886c437adbe4c22ee01dd04a614fc70`
Last Modified: Fri, 01 Oct 2021 02:34:31 GMT
Size: 18.1 MB (18054885 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a02affbe8d927da1eec26af3c06bb10e3f6ae372b6ee2bb7076364d4d1f1cbea`
Last Modified: Fri, 01 Oct 2021 02:34:27 GMT
Size: 594.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:09264deb4ff8a32e20270aea29a686b058e554fdb219c3835b614739471df283`
Last Modified: Fri, 01 Oct 2021 02:34:47 GMT
Size: 142.6 MB (142643988 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:00e6a7b64dd0d2f08d1fd4fa677c58a94663a8a2ff42bfa517def7e40b526d6a`
Last Modified: Fri, 01 Oct 2021 02:34:27 GMT
Size: 2.6 KB (2631 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:646a7579295972ace35aa495fa7fdeda7846426948ea7545570ac9d7cfb232f5`
Last Modified: Fri, 01 Oct 2021 02:34:27 GMT
Size: 2.1 KB (2054 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
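
The total size quoted for this platform is the sum of the compressed layer sizes listed above. A sketch of checking that against the registry manifest, assuming `jq` is installed and the Docker CLI `manifest` command is enabled:

```console
$ docker manifest inspect nextcloud@sha256:4a3f9262d217eb7ad6b267ecfd561b369b397720832197ca3cdee2bff53b6d11 | jq '[.layers[].size] | add'
326949260
```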
### `nextcloud:22.2-fpm` - linux; arm variant v7
```console
$ docker pull nextcloud@sha256:430cd1b1e967b3e1081a5c7b9020d99ab8612381e1af364130b0bf8ee75d1f56
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **293.5 MB (293537640 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:e493a5e198631af29f9b131eb2082ee09ec6fdf9bc8bcc0ee595a3a89805ab42`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["php-fpm"]`
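
Pulling by the digest above pins exactly this arm32v7 image; the same image can also be selected from the multi-arch tag by platform. A sketch (the `--platform` flag assumes a reasonably recent Docker CLI):

```console
$ docker pull --platform linux/arm/v7 nextcloud:22.2-fpm
```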
```dockerfile
# Thu, 30 Sep 2021 18:03:01 GMT
ADD file:129e2106788d883a456b145d9aff00c3003ee3480901a30318933b46961d31f3 in /
# Thu, 30 Sep 2021 18:03:02 GMT
CMD ["bash"]
# Fri, 01 Oct 2021 07:11:18 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Fri, 01 Oct 2021 07:11:18 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Fri, 01 Oct 2021 07:12:00 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Fri, 01 Oct 2021 07:12:01 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Fri, 01 Oct 2021 07:12:03 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Fri, 01 Oct 2021 07:23:00 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Fri, 01 Oct 2021 07:23:00 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Fri, 01 Oct 2021 07:23:01 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Fri, 01 Oct 2021 07:23:01 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Fri, 01 Oct 2021 08:25:10 GMT
ENV GPG_KEYS=1729F83938DA44E27BA0F4D3DBDB397470D12172 BFDDD28642824F8118EF77909B67A5C12229118F
# Fri, 01 Oct 2021 08:25:11 GMT
ENV PHP_VERSION=8.0.11
# Fri, 01 Oct 2021 08:25:11 GMT
ENV PHP_URL=https://www.php.net/distributions/php-8.0.11.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-8.0.11.tar.xz.asc
# Fri, 01 Oct 2021 08:25:11 GMT
ENV PHP_SHA256=e3e5f764ae57b31eb65244a45512f0b22d7bef05f2052b23989c053901552e16
# Fri, 01 Oct 2021 08:25:42 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Fri, 01 Oct 2021 08:25:42 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Fri, 01 Oct 2021 08:29:51 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends ${PHP_EXTRA_BUILD_DEPS:-} libargon2-dev libcurl4-openssl-dev libonig-dev libreadline-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-openssl --with-readline --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Fri, 01 Oct 2021 08:29:53 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Fri, 01 Oct 2021 08:29:54 GMT
RUN docker-php-ext-enable sodium
# Fri, 01 Oct 2021 08:29:55 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Fri, 01 Oct 2021 08:29:55 GMT
WORKDIR /var/www/html
# Fri, 01 Oct 2021 08:29:57 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Fri, 01 Oct 2021 08:29:57 GMT
STOPSIGNAL SIGQUIT
# Fri, 01 Oct 2021 08:29:57 GMT
EXPOSE 9000
# Fri, 01 Oct 2021 08:29:58 GMT
CMD ["php-fpm"]
# Sat, 02 Oct 2021 05:29:49 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends rsync bzip2 busybox-static ; rm -rf /var/lib/apt/lists/*; mkdir -p /var/spool/cron/crontabs; echo '*/5 * * * * php -f /var/www/html/cron.php' > /var/spool/cron/crontabs/www-data
# Sat, 02 Oct 2021 05:29:49 GMT
ENV PHP_MEMORY_LIMIT=512M
# Sat, 02 Oct 2021 05:29:50 GMT
ENV PHP_UPLOAD_LIMIT=512M
# Sat, 02 Oct 2021 05:36:38 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libevent-dev libfreetype6-dev libicu-dev libjpeg-dev libldap-common libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev libxml2-dev libmagickwand-dev libzip-dev libwebp-dev libgmp-dev ; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install -j "$(nproc)" bcmath exif gd intl ldap opcache pcntl pdo_mysql pdo_pgsql zip gmp ; pecl install APCu-5.1.20; pecl install memcached-3.1.5; pecl install redis-5.3.4; pecl install imagick-3.5.1; docker-php-ext-enable apcu memcached redis imagick ; rm -r /tmp/pear; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Sat, 02 Oct 2021 05:36:40 GMT
RUN { echo 'opcache.enable=1'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=10000'; echo 'opcache.memory_consumption=128'; echo 'opcache.save_comments=1'; echo 'opcache.revalidate_freq=1'; } > /usr/local/etc/php/conf.d/opcache-recommended.ini; echo 'apc.enable_cli=1' >> /usr/local/etc/php/conf.d/docker-php-ext-apcu.ini; { echo 'memory_limit=${PHP_MEMORY_LIMIT}'; echo 'upload_max_filesize=${PHP_UPLOAD_LIMIT}'; echo 'post_max_size=${PHP_UPLOAD_LIMIT}'; } > /usr/local/etc/php/conf.d/nextcloud.ini; mkdir /var/www/data; chown -R www-data:root /var/www; chmod -R g=u /var/www
# Sat, 02 Oct 2021 05:36:40 GMT
VOLUME [/var/www/html]
# Sat, 02 Oct 2021 05:36:41 GMT
ENV NEXTCLOUD_VERSION=22.2.0
# Sat, 02 Oct 2021 05:38:09 GMT
RUN set -ex; fetchDeps=" gnupg dirmngr "; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; curl -fsSL -o nextcloud.tar.bz2 "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2"; curl -fsSL -o nextcloud.tar.bz2.asc "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2.asc"; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys 28806A878AE423A28372792ED75899B9A724937A; gpg --batch --verify nextcloud.tar.bz2.asc nextcloud.tar.bz2; tar -xjf nextcloud.tar.bz2 -C /usr/src/; gpgconf --kill all; rm nextcloud.tar.bz2.asc nextcloud.tar.bz2; rm -rf "$GNUPGHOME" /usr/src/nextcloud/updater; mkdir -p /usr/src/nextcloud/data; mkdir -p /usr/src/nextcloud/custom_apps; chmod +x /usr/src/nextcloud/occ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps; rm -rf /var/lib/apt/lists/*
# Sat, 02 Oct 2021 05:38:12 GMT
COPY multi:5c7d3e21c40c6f3326b9c24bb148355014771883d3bc821f8ada4fed6795cbb4 in /
# Sat, 02 Oct 2021 05:38:14 GMT
COPY multi:d1870de3d4b4de5680360a8bcad7129a7c7615ba76daad773ab1eee24d4a949f in /usr/src/nextcloud/config/
# Sat, 02 Oct 2021 05:38:14 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Sat, 02 Oct 2021 05:38:15 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:aad43ac6bd46b2cab91485c8f1dac6a985df690af3e431e9e0b9fd57ad5ed423`
Last Modified: Thu, 30 Sep 2021 18:19:26 GMT
Size: 26.6 MB (26571924 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:dc04cef8f8627fec47e2a3eb3b9a38bb4c4290ed46d7e305378d7149b2e930b0`
Last Modified: Fri, 01 Oct 2021 10:42:53 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a86a525f93386992c2bfae8402f2b99aa5ac77277c8167221477a7ffc2a1e490`
Last Modified: Fri, 01 Oct 2021 10:43:36 GMT
Size: 69.3 MB (69315051 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c523bc9f9ff25debed4e73699365ed85865b8a8ff27ec75d66da9ab56f01925a`
Last Modified: Fri, 01 Oct 2021 10:42:53 GMT
Size: 271.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:381bfda756024dd76be77ee0a45e1d88b6ec2b6430c3616d60eaa92c42b12cad`
Last Modified: Fri, 01 Oct 2021 10:52:45 GMT
Size: 11.1 MB (11122144 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:07890a91b4e293bddc45f90fd48d8f4a16f8c0389fb1ea7d4d4e8534227bbe54`
Last Modified: Fri, 01 Oct 2021 10:52:40 GMT
Size: 495.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:20823d03a246c322d5f4625790557d5b8f0b7e47edf1012f9334087b8d30f54c`
Last Modified: Fri, 01 Oct 2021 10:52:57 GMT
Size: 27.8 MB (27803821 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bff88ce4628792f7b7fd6c4d841ccc90c4cc03a7ef155410c43f36ccbde9eb56`
Last Modified: Fri, 01 Oct 2021 10:52:41 GMT
Size: 2.3 KB (2268 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1ac49eced2b138488e2c331dac90b43be5faaeea1701da428aa103d15e7ec483`
Last Modified: Fri, 01 Oct 2021 10:52:41 GMT
Size: 246.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9bf0d50d862837c5a5caad91d3ef0826706bffe4eba057b8e4bfca734fd3d55a`
Last Modified: Fri, 01 Oct 2021 10:52:41 GMT
Size: 8.6 KB (8573 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ad755b48ccb77adc4416ebad4bd3f05d99008bc6eabae0e162d798f756abf6a5`
Last Modified: Sat, 02 Oct 2021 05:54:11 GMT
Size: 1.5 MB (1478087 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:eaa2de42990b9f86d28b6d708620936440714889c85e690e287cf4a75d38f8fd`
Last Modified: Sat, 02 Oct 2021 05:54:17 GMT
Size: 14.6 MB (14586619 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:80e31bba58baa7bf5c8020afe58ba965da040aedb33653d26021727ab6ce7d1c`
Last Modified: Sat, 02 Oct 2021 05:54:09 GMT
Size: 593.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2eb51a5bd4245f55428baf25a72d232995f10863f7acf09597028b8765ede8c7`
Last Modified: Sat, 02 Oct 2021 05:55:44 GMT
Size: 142.6 MB (142642640 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7370c262d35f5b6656e1baaffc025c1a600c156176b37f6e9e60ccd5175a0b6a`
Last Modified: Sat, 02 Oct 2021 05:54:09 GMT
Size: 2.6 KB (2631 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:960ef5d083607a509e1c1b399a7879de878ef3812e3c6d7c58463d14319a2af0`
Last Modified: Sat, 02 Oct 2021 05:54:09 GMT
Size: 2.1 KB (2051 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `nextcloud:22.2-fpm` - linux; 386
```console
$ docker pull nextcloud@sha256:0d63cb37488f09eb9b4f2c80b6513c3164a6f5d8a8e54988f63f69b0dd501974
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **329.1 MB (329060261 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:f6d58d535e063c1a27cd0eedd226111dffc12b1a788d85c4b77bda7383dae66d`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["php-fpm"]`
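
After pulling by digest, the locally stored image can be checked against the image ID recorded above. A minimal sketch:

```console
$ docker image inspect --format '{{.Id}}' nextcloud@sha256:0d63cb37488f09eb9b4f2c80b6513c3164a6f5d8a8e54988f63f69b0dd501974
sha256:f6d58d535e063c1a27cd0eedd226111dffc12b1a788d85c4b77bda7383dae66d
```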
```dockerfile
# Tue, 28 Sep 2021 01:40:08 GMT
ADD file:8466bd8df052ea7fa26e49575ac95fd4934ddafdad54a9736ac2bd8e7fc6e735 in /
# Tue, 28 Sep 2021 01:40:08 GMT
CMD ["bash"]
# Tue, 28 Sep 2021 12:42:51 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 28 Sep 2021 12:42:51 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 28 Sep 2021 12:43:22 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Tue, 28 Sep 2021 12:43:23 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 28 Sep 2021 12:43:24 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Tue, 28 Sep 2021 13:01:06 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Tue, 28 Sep 2021 13:01:06 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Tue, 28 Sep 2021 13:01:06 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Tue, 28 Sep 2021 13:01:07 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Tue, 28 Sep 2021 14:06:58 GMT
ENV GPG_KEYS=1729F83938DA44E27BA0F4D3DBDB397470D12172 BFDDD28642824F8118EF77909B67A5C12229118F
# Tue, 28 Sep 2021 14:06:58 GMT
ENV PHP_VERSION=8.0.11
# Tue, 28 Sep 2021 14:06:59 GMT
ENV PHP_URL=https://www.php.net/distributions/php-8.0.11.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-8.0.11.tar.xz.asc
# Tue, 28 Sep 2021 14:06:59 GMT
ENV PHP_SHA256=e3e5f764ae57b31eb65244a45512f0b22d7bef05f2052b23989c053901552e16
# Tue, 28 Sep 2021 14:07:35 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Tue, 28 Sep 2021 14:07:36 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Tue, 28 Sep 2021 14:14:23 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends ${PHP_EXTRA_BUILD_DEPS:-} libargon2-dev libcurl4-openssl-dev libonig-dev libreadline-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-openssl --with-readline --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Tue, 28 Sep 2021 14:14:24 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Tue, 28 Sep 2021 14:14:26 GMT
RUN docker-php-ext-enable sodium
# Tue, 28 Sep 2021 14:14:26 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Tue, 28 Sep 2021 14:14:27 GMT
WORKDIR /var/www/html
# Tue, 28 Sep 2021 14:14:28 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Tue, 28 Sep 2021 14:14:28 GMT
STOPSIGNAL SIGQUIT
# Tue, 28 Sep 2021 14:14:28 GMT
EXPOSE 9000
# Tue, 28 Sep 2021 14:14:29 GMT
CMD ["php-fpm"]
# Wed, 29 Sep 2021 05:14:39 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends rsync bzip2 busybox-static ; rm -rf /var/lib/apt/lists/*; mkdir -p /var/spool/cron/crontabs; echo '*/5 * * * * php -f /var/www/html/cron.php' > /var/spool/cron/crontabs/www-data
# Wed, 29 Sep 2021 05:14:40 GMT
ENV PHP_MEMORY_LIMIT=512M
# Wed, 29 Sep 2021 05:14:40 GMT
ENV PHP_UPLOAD_LIMIT=512M
# Fri, 01 Oct 2021 02:52:54 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libevent-dev libfreetype6-dev libicu-dev libjpeg-dev libldap-common libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev libxml2-dev libmagickwand-dev libzip-dev libwebp-dev libgmp-dev ; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install -j "$(nproc)" bcmath exif gd intl ldap opcache pcntl pdo_mysql pdo_pgsql zip gmp ; pecl install APCu-5.1.20; pecl install memcached-3.1.5; pecl install redis-5.3.4; pecl install imagick-3.5.1; docker-php-ext-enable apcu memcached redis imagick ; rm -r /tmp/pear; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Fri, 01 Oct 2021 02:52:56 GMT
RUN { echo 'opcache.enable=1'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=10000'; echo 'opcache.memory_consumption=128'; echo 'opcache.save_comments=1'; echo 'opcache.revalidate_freq=1'; } > /usr/local/etc/php/conf.d/opcache-recommended.ini; echo 'apc.enable_cli=1' >> /usr/local/etc/php/conf.d/docker-php-ext-apcu.ini; { echo 'memory_limit=${PHP_MEMORY_LIMIT}'; echo 'upload_max_filesize=${PHP_UPLOAD_LIMIT}'; echo 'post_max_size=${PHP_UPLOAD_LIMIT}'; } > /usr/local/etc/php/conf.d/nextcloud.ini; mkdir /var/www/data; chown -R www-data:root /var/www; chmod -R g=u /var/www
# Fri, 01 Oct 2021 02:52:56 GMT
VOLUME [/var/www/html]
# Fri, 01 Oct 2021 02:52:56 GMT
ENV NEXTCLOUD_VERSION=22.2.0
# Fri, 01 Oct 2021 02:53:56 GMT
RUN set -ex; fetchDeps=" gnupg dirmngr "; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; curl -fsSL -o nextcloud.tar.bz2 "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2"; curl -fsSL -o nextcloud.tar.bz2.asc "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2.asc"; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys 28806A878AE423A28372792ED75899B9A724937A; gpg --batch --verify nextcloud.tar.bz2.asc nextcloud.tar.bz2; tar -xjf nextcloud.tar.bz2 -C /usr/src/; gpgconf --kill all; rm nextcloud.tar.bz2.asc nextcloud.tar.bz2; rm -rf "$GNUPGHOME" /usr/src/nextcloud/updater; mkdir -p /usr/src/nextcloud/data; mkdir -p /usr/src/nextcloud/custom_apps; chmod +x /usr/src/nextcloud/occ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps; rm -rf /var/lib/apt/lists/*
# Fri, 01 Oct 2021 02:53:58 GMT
COPY multi:5c7d3e21c40c6f3326b9c24bb148355014771883d3bc821f8ada4fed6795cbb4 in /
# Fri, 01 Oct 2021 02:53:59 GMT
COPY multi:d1870de3d4b4de5680360a8bcad7129a7c7615ba76daad773ab1eee24d4a949f in /usr/src/nextcloud/config/
# Fri, 01 Oct 2021 02:53:59 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Fri, 01 Oct 2021 02:54:00 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:e79fce1f6442094a82dc5f6b4d1aa352e04aae39bba821c9021f6da08b1cacaf`
Last Modified: Tue, 28 Sep 2021 01:49:07 GMT
Size: 32.4 MB (32380160 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5787362f206b73e3f1dc85b1343b60aa2bdc8e6e87f222da8b926cd4ce3fe032`
Last Modified: Tue, 28 Sep 2021 16:53:20 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3a5edbdd35a15abf5ea3eb08212d72bcf4d6df5e1916594a04436d31bafcf855`
Last Modified: Tue, 28 Sep 2021 16:53:55 GMT
Size: 92.7 MB (92712789 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:02bbb2b0d52af746bb8f6200a830aa8ed7be1094c1d784245b5e58a21e6b38cb`
Last Modified: Tue, 28 Sep 2021 16:53:20 GMT
Size: 270.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ad5cab42b74901c6a192f66de9cd8e210075ead1b20a20c2c3ecd4a15a265f47`
Last Modified: Tue, 28 Sep 2021 17:00:09 GMT
Size: 11.1 MB (11122801 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:465865948ac6d182a3eaedd825d39898a0507127be81b71b1825f03cfbe37a1e`
Last Modified: Tue, 28 Sep 2021 17:00:05 GMT
Size: 493.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3bc76d265d613ec61c792d03ef3dca6b9fb712269f77372ad433012b1b5adfc0`
Last Modified: Tue, 28 Sep 2021 17:00:16 GMT
Size: 31.1 MB (31137834 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a3d92262ec86e957f287fba75a7ea206568fb7468cf7cb818189b909cce477d1`
Last Modified: Tue, 28 Sep 2021 17:00:04 GMT
Size: 2.3 KB (2268 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3bb12db50b5180b0270b6a9b0a0085e7021e71742baae629be7a210e6c329a16`
Last Modified: Tue, 28 Sep 2021 17:00:04 GMT
Size: 246.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1a6323f0254ac11cd45e23f502ee585d2794f90e47319ec6ed4012462d8bc687`
Last Modified: Tue, 28 Sep 2021 17:00:04 GMT
Size: 8.6 KB (8575 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fb824fb856a524e1fb3785dd408a3df0315b0d9e2081499d7b0b1a1b3fbb308b`
Last Modified: Wed, 29 Sep 2021 05:30:28 GMT
Size: 1.7 MB (1699293 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0ff871899b1a25a97119636c684bd8ac608f926d5183cb4f98731e18f3263a6f`
Last Modified: Fri, 01 Oct 2021 03:01:12 GMT
Size: 17.3 MB (17346588 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3fb0185e3b9b756e910188dd3109b85c9a47ecf1a162610a3e1330feecd4d609`
Last Modified: Fri, 01 Oct 2021 03:01:08 GMT
Size: 594.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a91c9e545a2d56ecd8a4f0ed245a1b0b13a9f4a4cc34c06b0c0b949a89e9f16b`
Last Modified: Fri, 01 Oct 2021 03:01:37 GMT
Size: 142.6 MB (142643441 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:34a51419510561ada11f5b798bbb3607bc06128022656eccacb9477b2df3aa2e`
Last Modified: Fri, 01 Oct 2021 03:01:08 GMT
Size: 2.6 KB (2630 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:52be0aa7f8426d43a192bab74225445b862ca87f40969c89c1e8cfff92690d1e`
Last Modified: Fri, 01 Oct 2021 03:01:08 GMT
Size: 2.1 KB (2053 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `nextcloud:22.2-fpm` - linux; mips64le
```console
$ docker pull nextcloud@sha256:babfd089383d32144a49c672e56f8cf6d49ec98cc312af17fe802858ffb74ea1
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **302.1 MB (302127934 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:581c59afee35a6ad63f03158a4f70ac24b13bdbd0c1100e0f086960d948df2c0`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["php-fpm"]`
```dockerfile
# Tue, 28 Sep 2021 02:10:40 GMT
ADD file:43593ef3d79c9b74a92e318d44aacb578f6f8d835dd72665e057bbfe73df1a93 in /
# Tue, 28 Sep 2021 02:10:41 GMT
CMD ["bash"]
# Tue, 05 Oct 2021 18:07:59 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 05 Oct 2021 18:07:59 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 05 Oct 2021 18:08:58 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Tue, 05 Oct 2021 18:08:59 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 05 Oct 2021 18:09:01 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Tue, 05 Oct 2021 18:36:13 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data --disable-cgi
# Tue, 05 Oct 2021 18:36:13 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Tue, 05 Oct 2021 18:36:14 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
# Tue, 05 Oct 2021 18:36:14 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -pie
# Tue, 05 Oct 2021 20:23:32 GMT
ENV GPG_KEYS=1729F83938DA44E27BA0F4D3DBDB397470D12172 BFDDD28642824F8118EF77909B67A5C12229118F
# Tue, 05 Oct 2021 20:23:33 GMT
ENV PHP_VERSION=8.0.11
# Tue, 05 Oct 2021 20:23:33 GMT
ENV PHP_URL=https://www.php.net/distributions/php-8.0.11.tar.xz PHP_ASC_URL=https://www.php.net/distributions/php-8.0.11.tar.xz.asc
# Tue, 05 Oct 2021 20:23:33 GMT
ENV PHP_SHA256=e3e5f764ae57b31eb65244a45512f0b22d7bef05f2052b23989c053901552e16
# Tue, 05 Oct 2021 20:23:58 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Tue, 05 Oct 2021 20:23:59 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Tue, 05 Oct 2021 20:36:44 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends ${PHP_EXTRA_BUILD_DEPS:-} libargon2-dev libcurl4-openssl-dev libonig-dev libreadline-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --with-pic --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-pdo-sqlite=/usr --with-sqlite3=/usr --with-curl --with-openssl --with-readline --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Tue, 05 Oct 2021 20:36:45 GMT
COPY multi:6dfba8f7e64bd54e4d9aa0855ff6ce7a53059e0a733752b4537fd3fdfd32d837 in /usr/local/bin/
# Tue, 05 Oct 2021 20:36:47 GMT
RUN docker-php-ext-enable sodium
# Tue, 05 Oct 2021 20:36:47 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Tue, 05 Oct 2021 20:36:48 GMT
WORKDIR /var/www/html
# Tue, 05 Oct 2021 20:36:50 GMT
RUN set -eux; cd /usr/local/etc; if [ -d php-fpm.d ]; then sed 's!=NONE/!=!g' php-fpm.conf.default | tee php-fpm.conf > /dev/null; cp php-fpm.d/www.conf.default php-fpm.d/www.conf; else mkdir php-fpm.d; cp php-fpm.conf.default php-fpm.d/www.conf; { echo '[global]'; echo 'include=etc/php-fpm.d/*.conf'; } | tee php-fpm.conf; fi; { echo '[global]'; echo 'error_log = /proc/self/fd/2'; echo; echo '; https://github.com/docker-library/php/pull/725#issuecomment-443540114'; echo 'log_limit = 8192'; echo; echo '[www]'; echo '; if we send this to /proc/self/fd/1, it never appears'; echo 'access.log = /proc/self/fd/2'; echo; echo 'clear_env = no'; echo; echo '; Ensure worker stdout and stderr are sent to the main error log.'; echo 'catch_workers_output = yes'; echo 'decorate_workers_output = no'; } | tee php-fpm.d/docker.conf; { echo '[global]'; echo 'daemonize = no'; echo; echo '[www]'; echo 'listen = 9000'; } | tee php-fpm.d/zz-docker.conf
# Tue, 05 Oct 2021 20:36:50 GMT
STOPSIGNAL SIGQUIT
# Tue, 05 Oct 2021 20:36:50 GMT
EXPOSE 9000
# Tue, 05 Oct 2021 20:36:51 GMT
CMD ["php-fpm"]
# Wed, 06 Oct 2021 07:06:31 GMT
RUN set -ex; apt-get update; apt-get install -y --no-install-recommends rsync bzip2 busybox-static ; rm -rf /var/lib/apt/lists/*; mkdir -p /var/spool/cron/crontabs; echo '*/5 * * * * php -f /var/www/html/cron.php' > /var/spool/cron/crontabs/www-data
# Wed, 06 Oct 2021 07:06:31 GMT
ENV PHP_MEMORY_LIMIT=512M
# Wed, 06 Oct 2021 07:06:32 GMT
ENV PHP_UPLOAD_LIMIT=512M
# Wed, 06 Oct 2021 07:14:28 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libevent-dev libfreetype6-dev libicu-dev libjpeg-dev libldap-common libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev libxml2-dev libmagickwand-dev libzip-dev libwebp-dev libgmp-dev ; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure gd --with-freetype --with-jpeg --with-webp; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install -j "$(nproc)" bcmath exif gd intl ldap opcache pcntl pdo_mysql pdo_pgsql zip gmp ; pecl install APCu-5.1.20; pecl install memcached-3.1.5; pecl install redis-5.3.4; pecl install imagick-3.5.1; docker-php-ext-enable apcu memcached redis imagick ; rm -r /tmp/pear; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Wed, 06 Oct 2021 07:14:30 GMT
RUN { echo 'opcache.enable=1'; echo 'opcache.interned_strings_buffer=8'; echo 'opcache.max_accelerated_files=10000'; echo 'opcache.memory_consumption=128'; echo 'opcache.save_comments=1'; echo 'opcache.revalidate_freq=1'; } > /usr/local/etc/php/conf.d/opcache-recommended.ini; echo 'apc.enable_cli=1' >> /usr/local/etc/php/conf.d/docker-php-ext-apcu.ini; { echo 'memory_limit=${PHP_MEMORY_LIMIT}'; echo 'upload_max_filesize=${PHP_UPLOAD_LIMIT}'; echo 'post_max_size=${PHP_UPLOAD_LIMIT}'; } > /usr/local/etc/php/conf.d/nextcloud.ini; mkdir /var/www/data; chown -R www-data:root /var/www; chmod -R g=u /var/www
# Wed, 06 Oct 2021 07:14:30 GMT
VOLUME [/var/www/html]
# Wed, 06 Oct 2021 07:14:31 GMT
ENV NEXTCLOUD_VERSION=22.2.0
# Wed, 06 Oct 2021 07:16:29 GMT
RUN set -ex; fetchDeps=" gnupg dirmngr "; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; curl -fsSL -o nextcloud.tar.bz2 "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2"; curl -fsSL -o nextcloud.tar.bz2.asc "https://download.nextcloud.com/server/releases/nextcloud-${NEXTCLOUD_VERSION}.tar.bz2.asc"; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys 28806A878AE423A28372792ED75899B9A724937A; gpg --batch --verify nextcloud.tar.bz2.asc nextcloud.tar.bz2; tar -xjf nextcloud.tar.bz2 -C /usr/src/; gpgconf --kill all; rm nextcloud.tar.bz2.asc nextcloud.tar.bz2; rm -rf "$GNUPGHOME" /usr/src/nextcloud/updater; mkdir -p /usr/src/nextcloud/data; mkdir -p /usr/src/nextcloud/custom_apps; chmod +x /usr/src/nextcloud/occ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps; rm -rf /var/lib/apt/lists/*
# Wed, 06 Oct 2021 07:16:33 GMT
COPY multi:5c7d3e21c40c6f3326b9c24bb148355014771883d3bc821f8ada4fed6795cbb4 in /
# Wed, 06 Oct 2021 07:16:34 GMT
COPY multi:d1870de3d4b4de5680360a8bcad7129a7c7615ba76daad773ab1eee24d4a949f in /usr/src/nextcloud/config/
# Wed, 06 Oct 2021 07:16:35 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Wed, 06 Oct 2021 07:16:35 GMT
CMD ["php-fpm"]
```
- Layers:
- `sha256:1f46ea49e27fccc580c8910db39ba7f51ae208a8d24d46a33140afa92ea3d955`
Last Modified: Tue, 28 Sep 2021 02:20:45 GMT
Size: 29.6 MB (29627871 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7bb3954c2ebaa7beddbb1101815a7e2a32441a3dd24b13ee9da76118a5dc9930`
Last Modified: Wed, 06 Oct 2021 00:44:41 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:df5fb416109a5c1e554a74a010df8337b5919d464aebb4afddf07e51b1ecefce`
Last Modified: Wed, 06 Oct 2021 00:45:36 GMT
Size: 72.0 MB (72015417 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:32c0cb3c9d612552ab5732ac5d43475333eb653f4cc692430fb9de69a48fe4fa`
Last Modified: Wed, 06 Oct 2021 00:44:40 GMT
Size: 224.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cdd31308213ce7d84d5fbfcc97850bd95cd27168c59974afb29a61c1d04f0da2`
Last Modified: Wed, 06 Oct 2021 00:52:40 GMT
Size: 11.1 MB (11121504 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7f047ea5f3709c5acbe07b90b6bd27947744cafe8bc603329804124078e95902`
Last Modified: Wed, 06 Oct 2021 00:52:34 GMT
Size: 491.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:dbae402300ff914cb32befaefc163f2fa3582c471fa652e1986a893c05370e10`
Last Modified: Wed, 06 Oct 2021 00:52:55 GMT
Size: 29.4 MB (29371493 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0122b9d2f13e62fc35038cf591ba78d4af9d95af79c9fd6b2eaef5522b128e6a`
Last Modified: Wed, 06 Oct 2021 00:52:34 GMT
Size: 2.3 KB (2270 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0e0e9e7286153b91de561acddfccd71c87fdad8bde261f88ac3152b7e23351d3`
Last Modified: Wed, 06 Oct 2021 00:52:35 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e90d9d16339f8c958854b91203e329bc69c752f406f187bcc0797613a38b318f`
Last Modified: Wed, 06 Oct 2021 00:52:35 GMT
Size: 8.6 KB (8575 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7b47b832559dfa317bd1254d4b48f4d593f1faf82cd196ae12e3847cf9007554`
Last Modified: Wed, 06 Oct 2021 07:26:24 GMT
Size: 1.8 MB (1751055 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b92689b02d52a8e8e9ed651d14f7fa02e765091d20c80d0e46365d5921036502`
Last Modified: Wed, 06 Oct 2021 07:26:32 GMT
Size: 15.6 MB (15580725 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:16830d8442c3b3cabd2c9ca13ad5c687223bfe609f89f085d5d9d08f0aecb1b3`
Last Modified: Wed, 06 Oct 2021 07:26:20 GMT
Size: 565.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3f27ca8c8a22051dda91ca7793305d4aaeb05fbbca8957b1f17fb397b6033b4b`
Last Modified: Wed, 06 Oct 2021 07:27:51 GMT
Size: 142.6 MB (142642585 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bdd60a63aee7f1e0f47b588009e26d25ac782cdc986c98579375a1a28cbf1952`
Last Modified: Wed, 06 Oct 2021 07:26:20 GMT
Size: 2.6 KB (2631 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0b5a29a02a31f88179df0dc956da77acd5bbd80b441c3df4c9998135548d4d3c`
Last Modified: Wed, 06 Oct 2021 07:26:20 GMT
Size: 2.1 KB (2054 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
| 90.294299 | 1,945 | 0.690824 | kor_Hang | 0.198848 |
fdb85dfdb06bb66a49d4b596b15ae95e3a03b2c5 | 4,647 | md | Markdown | _posts/2021-10-09-【C++基础】第三十课:条件语句.md | x-jeff/x-jeff.github.io | 9c6e4669e7fe6b96592da2d7518da249022f58fe | [
"MIT"
] | 1 | 2020-10-30T01:12:14.000Z | 2020-10-30T01:12:14.000Z | _posts/2021-10-09-【C++基础】第三十课:条件语句.md | x-jeff/x-jeff.github.io | 9c6e4669e7fe6b96592da2d7518da249022f58fe | [
"MIT"
] | null | null | null | _posts/2021-10-09-【C++基础】第三十课:条件语句.md | x-jeff/x-jeff.github.io | 9c6e4669e7fe6b96592da2d7518da249022f58fe | [
"MIT"
] | null | null | null | ---
layout: post
title: 【C++基础】第三十课:条件语句
subtitle: if语句,switch语句
date: 2021-10-09
author: x-jeff
header-img: blogimg/20211009.jpg
catalog: true
tags:
- C++ Series
---
>【C++基础】系列博客为参考[《C++ Primer中文版(第5版)》](https://www.phei.com.cn/module/goods/wssd_content.jsp?bookid=37655)(**C++11标准**)一书,自己所做的读书笔记。
>本文为原创文章,未经本人允许,禁止转载。转载请注明出处。
# 1.条件语句
C++语言提供了两种按条件执行的语句。一种是if语句,另外一种是switch语句。
# 2.if语句
if语句包括两种形式:一种含有else分支,另外一种没有。
形式一(不包含else分支):
```c++
if (condition)
statement
```
形式二(包含else分支):
```c++
if (condition)
statement
else
statement2
```
需要注意:condition必须用圆括号包围起来且其类型必须能转换成布尔类型。
举个例子:
```c++
if (grade % 10 > 7)//%为取余
lettergrade += '+';//末尾是8或者9的成绩添加一个加号
else if (grade % 10 < 3)
lettergrade += '-';//末尾是0,1或2,添加一个减号
//此处后续可以没有else分支
```
对于**悬垂else**(dangling else),C++规定else与离它最近的尚未匹配的if匹配,从而消除了程序的二义性:
```c++
//else分支匹配的是内层if语句
if (grade % 10 >= 3)
if (grade % 10 > 7)
lettergrade += '+';
else
lettergrade += '-';
```
上述程序等价于:
```c++
if (grade % 10 >= 3)
if (grade % 10 > 7)
lettergrade += '+';
else
lettergrade += '-';
```
# 3.switch语句
假如实现功能:统计五个元音字母在文本中出现的次数。便可直接使用switch语句:
```c++
unsigned aCnt = 0, eCnt = 0, iCnt = 0, oCnt = 0, uCnt = 0;
char ch;
while (cin >> ch) {
switch (ch){
case 'a':
++aCnt;
break;
case 'e':
++eCnt;
break;
case 'i':
++iCnt;
break;
case 'o':
++oCnt;
break;
case 'u':
++uCnt;
break;
}
}
```
switch语句首先对括号里的表达式求值,该表达式紧跟在关键字switch的后面,可以是一个初始化的变量声明。表达式的值转换成**整数类型**,然后与每个case标签的值比较。
如果表达式和某个case标签的值匹配成功,程序从该标签之后的第一条语句开始执行,直到到达了switch的结尾或者是遇到一条break语句为止。
case关键字和它对应的值一起被称为**case标签(case label)**。⚠️**case标签必须是整型常量表达式。**
```c++
char ch = getVal();
int ival = 42;
switch(ch) {
case 3.14: //错误:case标签不是一个整数
case ival: //错误:case标签不是一个常量
//...
```
任何两个case标签的值不能相同,否则就会引发错误。另外,default也是一种特殊的case标签,后续会介绍。
## 3.1.switch内部的控制流
如果某个case标签匹配成功,将从该标签开始往后顺序执行所有case分支,除非程序显式地中断了这一过程(例如使用了break),否则直到switch的结尾处才会停下来。
```c++
unsigned int c1 = 0, c2 = 0, c3 = 0;
switch ('b') {//此处如果不是a,b,c中的某一个则会报错
case 'a':
c1++;
case 'b':
c2++;
case 'c':
c3++;
}
cout << c1 << endl;//0
cout << c2 << endl;//1
cout << c3 << endl;//1
```
有时我们会故意省略掉break语句,使得程序能够连续执行若干个case标签。例如,统计所有元音字母出现的总次数:
```c++
unsigned vowelCnt = 0;
//...
switch (ch)
{
//出现了a,e,i,o,u中的任意一个都会将vowelCnt的值加1
case 'a':
case 'e':
case 'i':
case 'o':
case 'u':
++vowelCnt;
break;
}
```
此外,case标签之后不一定非得换行。把几个case标签写在一行里也可以:
```c++
switch (ch)
{
//另一种合法的书写形式
case 'a': case 'e': case 'i': case 'o': case 'u':
++vowelCnt;
break;
}
```
‼️有一种常见的错觉是程序只执行匹配成功的那个case分支的语句。
## 3.2.default标签
如果没有任何一个case标签能匹配上switch表达式的值,程序将执行紧跟在**default标签(default label)**后面的语句。例如,可以增加一个计数值来统计非元音字母的数量:
```c++
//如果ch是一个元音字母,将相应的计数值加1
switch (ch)
{
case 'a': case 'e': case 'i': case 'o': case 'u':
++vowelCnt;
break;
default:
++otherCnt;
break;
}
```
标签(case和default)不应该孤零零地出现,它后面必须跟上一条语句(可以是空语句或空块)或者另外一个case标签。
## 3.3.switch内部的变量定义
switch的执行流程有可能会跨过某些case标签。如果程序跳转到了某个特定的case,则switch结构中该case标签之前的部分会被忽略掉。这种忽略掉一部分代码的行为引出了一个有趣的问题:如果被略过的代码中含有变量的定义该怎么办?
答案是:如果在某处一个带有初值的变量位于作用域之外,在另一处该变量位于作用域之内,则从前一处跳转到后一处的行为是非法行为。
```c++
case true:
//因为程序的执行流程可能绕开下面的初始化语句,所以该switch语句不合法
string file_name;//错误:控制流绕过一个隐式初始化的变量
int ival = 0;//错误:控制流绕过一个显式初始化的变量
int jval;//正确:因为jval没有初始化
break;
case false:
//正确:jval虽然在作用域内,但是它没有被初始化
jval = next_num();//正确:给jval赋一个值
if (file_name.empty()) //file_name在作用域内,但是没有被初始化
//...
```
正确的例子:
```c++
int main() {
switch ('b') {
case 'a':
//string file_name;
//int ival = 0;
int jval;
break;
case 'b':
jval = 0;
cout << jval << endl;//0
}
}
```
错误的例子:
```c++
int main() {
switch ('b') {
case 'a':
string file_name;
int ival = 0;
int jval;
break;
case 'b':
jval = 0;
cout << jval << endl;
}
}
```
错误的例子:
```c++
int main() {
switch ('b') {
case 'a':
//string file_name;
//int ival = 0;
//int jval;
break;
case 'b':
jval = 0;
cout << jval << endl;
}
}
```
个人理解:排在后面的case可以使用前面case里定义的变量,但是该变量在定义的时候不能被初始化,无论是隐式还是显式初始化都不行。
也可以通过`{}`限定作用域:
```c++
case true:
{
//...
}
break;
case false:
{
//...
}
break;
```
错误的例子:
```c++
int main() {
switch ('b') {
case 'a': {
//string file_name;
//int ival = 0;
int jval;
break;
}
case 'b': {
jval = 0;
cout << jval << endl;
}
}
}
``` | 15.914384 | 133 | 0.583172 | yue_Hant | 0.474064 |
fdb8a897f68a594ede7ca50cb4ef3b3c0971852e | 2,576 | md | Markdown | tests/mocks/default/api-classes-private-members-employee.md | mcherryleigh/typedoc-plugin-docusaurus | 68f28c0325f9c4f027b9a1b124f00d2b1848e136 | [
"MIT"
] | null | null | null | tests/mocks/default/api-classes-private-members-employee.md | mcherryleigh/typedoc-plugin-docusaurus | 68f28c0325f9c4f027b9a1b124f00d2b1848e136 | [
"MIT"
] | null | null | null | tests/mocks/default/api-classes-private-members-employee.md | mcherryleigh/typedoc-plugin-docusaurus | 68f28c0325f9c4f027b9a1b124f00d2b1848e136 | [
"MIT"
] | null | null | null | ---
id: api-classes-private-members-employee
title: Employee
sidebar_label: Employee
---
[typedoc-plugin-docusaurus](api-readme.md) > [[private-members Module]](api-modules-private-members-module.md) > [Employee](api-classes-private-members-employee.md)
## Class
## Hierarchy
[Person](api-classes-private-members-person.md)
**↳ Employee**
### Constructors
* [constructor](api-classes-private-members-employee.md#constructor)
### Properties
* [department](api-classes-private-members-employee.md#department)
* [name](api-classes-private-members-employee.md#name)
### Methods
* [getElevatorPitch](api-classes-private-members-employee.md#getelevatorpitch)
* [getPrivateDetails](api-classes-private-members-employee.md#getprivatedetails)
---
## Constructors
<a id="constructor"></a>
### ⊕ **new Employee**(name: *`string`*, department: *`string`*): [Employee](api-classes-private-members-employee.md)
*Overrides [Person](api-classes-private-members-person.md).[constructor](api-classes-private-members-person.md#constructor)*
*Defined in [private-members.ts:29](https://github.com/OffGridNetworks/typedoc-plugin-docusaurus/blob/master/tests/src/private-members.ts#L29)*
**Parameters:**
| Param | Type | Description |
| ------ | ------ | ------ |
| name | `string` | - |
| department | `string` | - |
**Returns:** [Employee](api-classes-private-members-employee.md)
---
## Properties
<a id="department"></a>
### «Private» department
**● department**: *`string`*
*Defined in [private-members.ts:29](https://github.com/OffGridNetworks/typedoc-plugin-docusaurus/blob/master/tests/src/private-members.ts#L29)*
___
<a id="name"></a>
### «Protected» name
**● name**: *`string`*
*Inherited from [Person](api-classes-private-members-person.md).[name](api-classes-private-members-person.md#name)*
*Defined in [private-members.ts:23](https://github.com/OffGridNetworks/typedoc-plugin-docusaurus/blob/master/tests/src/private-members.ts#L23)*
___
## Methods
<a id="getelevatorpitch"></a>
### getElevatorPitch
► **getElevatorPitch**(): `string`
*Defined in [private-members.ts:36](https://github.com/OffGridNetworks/typedoc-plugin-docusaurus/blob/master/tests/src/private-members.ts#L36)*
**Returns:** `string`
___
<a id="getprivatedetails"></a>
### «Private» getPrivateDetails
► **getPrivateDetails**(): `string`
*Defined in [private-members.ts:40](https://github.com/OffGridNetworks/typedoc-plugin-docusaurus/blob/master/tests/src/private-members.ts#L40)*
**Returns:** `string`
___
| 16.947368 | 164 | 0.703804 | yue_Hant | 0.522013 |
fdb9039d066a10c88c9d03a31ee93f11c9826e7c | 711 | md | Markdown | docs/ru/faq/integration/index.md | evryfs/ClickHouse | a9648af0b9e2506ce783106315814ed8dbd0a952 | [
"Apache-2.0"
] | 18 | 2021-05-29T01:12:33.000Z | 2021-11-18T12:34:48.000Z | docs/ru/faq/integration/index.md | evryfs/ClickHouse | a9648af0b9e2506ce783106315814ed8dbd0a952 | [
"Apache-2.0"
] | 13 | 2019-06-06T09:45:53.000Z | 2020-05-15T12:03:45.000Z | docs/ru/faq/integration/index.md | evryfs/ClickHouse | a9648af0b9e2506ce783106315814ed8dbd0a952 | [
"Apache-2.0"
] | 22 | 2019-06-14T10:31:51.000Z | 2020-10-12T14:57:44.000Z | ---
title: Интеграция ClickHouse с другими системами
toc_hidden_folder: true
toc_priority: 4
toc_title: Интеграция
---
# Интеграция ClickHouse с другими системами {#question-about-integrating-clickhouse-and-other-systems-rus}
Вопросы:
- [Как экспортировать данные из ClickHouse в файл?](file-export.md)
- [Как импортировать JSON в ClickHouse?](json-import.md)
- [Что делать, если у меня проблема с кодировками при использовании Oracle через ODBC?](oracle-odbc.md)
!!! info "Если вы не нашли то, что искали"
Загляните в другие подразделы F.A.Q. или поищите в остальных разделах документации, ориентируйтесь по оглавлению слева.
[Original article](https://clickhouse.tech/docs/ru/faq/integration/)
| 35.55 | 123 | 0.772152 | rus_Cyrl | 0.82199 |
fdb9666da84e9a57f6850e54e219dd34e8a2724e | 4,045 | md | Markdown | docs/4-error-handling.md | tienduy-nguyen/nestjs-flow | 4606a853f82fcd55841bb34fbd6c9252e77a1009 | [
"MIT"
] | 1 | 2021-03-11T22:56:34.000Z | 2021-03-11T22:56:34.000Z | docs/4-error-handling.md | tienduy-nguyen/nestjs-flow | 4606a853f82fcd55841bb34fbd6c9252e77a1009 | [
"MIT"
] | null | null | null | docs/4-error-handling.md | tienduy-nguyen/nestjs-flow | 4606a853f82fcd55841bb34fbd6c9252e77a1009 | [
"MIT"
] | null | null | null | ## 4. Error handling
Check the code at branch [4-error-handling](https://gitlab.com/tienduy-nguyen/nestjs-flow/-/tree/4-error-handling)
### Exception filter
Nest use built-in exception layer which is responsible for processing all unhandled exceptions across an application.
Check [Nest exception filter](https://docs.nestjs.com/exception-filters) for information details.
- Format of an exception:
```ts
{
"statusCode": number,
"message": string
}
```
- Throw standard exception in Nest
Here is some examples using Exception filter in app:
```ts
const post = await this.postRepository.findOne({ where: { id: id } });
if (!post) {
throw new NotFoundException(`Post with id ${post.id} not found`);
}
```
```ts
const user = await this.userService.getUserByEmail(email);
if (user) {
const isMatch = await bcrypt.compare(password, user.password);
if (isMatch) {
return user;
}
}
throw new BadRequestException('Invalids credentials');
```
```ts
const userCheck = await this.userService.getUserByEmail(registerDto.email);
if (userCheck) {
throw new ConflictException(
`User with email: ${registerDto.email} already exists`,
);
}
```
```ts
} catch (error) {
throw new HttpException(error.message, HttpStatus.INTERNAL_SERVER_ERROR);
}
```
- Create custom exception: Logger exception
ex: Forbidden.exception.ts
```ts
export class ForbiddenException extends HttpException {
constructor() {
super('Forbidden', HttpStatus.FORBIDDEN);
}
}
```
Check more [Nest exception filter](https://docs.nestjs.com/exception-filters).
### Validation
Nest provides several pipes available right out-of-the-box:
- ValidationPipe
- ParseIntPipe
- ParseBoolPipe
- ParseArrayPipe
- ParseUUIDPipe
The ValidationPipe makes use of the powerful [class-validator](https://github.com/typestack/class-validator) package and its declarative validation decorators.
The ValidationPipe provides a convenient approach to enforce validation rules for all incoming client payloads, where the specific rules are declared with simple annotations in local class/DTO declarations in each module.
We will use auto-validation of Nest:
- Setup in `main.ts`
```ts
// main.ts
async function bootstrap() {
const app = await NestFactory.create(AppModule);
app.useGlobalPipes(new ValidationPipe());
...
}
bootstrap();
```
- Install indispensable package dependency to make it works:
- [Class-validator](https://github.com/typestack/class-validator)
- [Class-transformer](https://github.com/typestack/class-transformer)
```bash
$ yarn add class-transformer class-validator
```
- Using class-validator
We will use this package to make sûre that we have good data for body request (DTO) & for entity data before save to database.
Example using validation in `user.entity.ts`
```ts
// user.entity.ts
import { Entity, Column, PrimaryGeneratedColumn } from 'typeorm';
import { IsDate, IsEmail, Min } from 'class-validator';
import moment from 'moment';
@Entity()
export class User {
@PrimaryGeneratedColumn('uuid')
id: string;
@Column()
name: string;
@Column({ unique: true })
@IsEmail()
email: string;
@Column()
@Min(0)
password: string;
@Column({
type: Date,
default: moment(new Date()).format('YYYY-MM-DD HH:ss'),
nullable: true,
})
@IsDate()
createdAt: Date;
@Column({
type: Date,
default: moment(new Date()).format('YYYY-MM-DD HH:ss'),
nullable: true,
})
@IsDate()
updatedAt: Date;
}
```
Example in `create-post.dto.ts`
```ts
import { IsString } from 'class-validator';
export class CreatePostDto {
@IsString()
title: string;
@IsString()
content: string;
}
```
Check more [Doc class-validator](https://github.com/typestack/class-validator/blob/develop/docs/basics/validating-objects.md) for advanced validation.
| 25.124224 | 221 | 0.67911 | eng_Latn | 0.550311 |
fdb9a1b1fc07996fb5907e8095c0dc176fb68c23 | 11 | md | Markdown | README.md | BoredAndDoomed/Beginning-of-game | 265f7d62802c338c282ff2f32889b44b019cbe53 | [
"MIT"
] | null | null | null | README.md | BoredAndDoomed/Beginning-of-game | 265f7d62802c338c282ff2f32889b44b019cbe53 | [
"MIT"
] | null | null | null | README.md | BoredAndDoomed/Beginning-of-game | 265f7d62802c338c282ff2f32889b44b019cbe53 | [
"MIT"
] | null | null | null | # C27 sol
| 5.5 | 10 | 0.545455 | vie_Latn | 0.960768 |
fdb9a69af0ea8a59353a95c72059d5dfc79acf2f | 2,002 | md | Markdown | README.md | dmitriz/make-programming-better | f0861affa9e33761bded0a8a5c18de4816f1c072 | [
"MIT"
] | null | null | null | README.md | dmitriz/make-programming-better | f0861affa9e33761bded0a8a5c18de4816f1c072 | [
"MIT"
] | null | null | null | README.md | dmitriz/make-programming-better | f0861affa9e33761bded0a8a5c18de4816f1c072 | [
"MIT"
] | null | null | null | # make-programming-better
Links to posts and articles aiming to improve the current state of programming
### John Backus
- Article ["Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs."](https://www.cs.ucf.edu/~dcm/Teaching/COT4810-Fall%202012/Literature/Backus.pdf).
- Questions on Quora: [Do you agree with John Backus in "Can Programming Be Liberated from the Von Neumann Style?" regarding functional style programming?
](https://www.quora.com/Do-you-agree-with-John-Backus-in-Can-Programming-Be-Liberated-from-the-Von-Neumann-Style-regarding-functional-style-programming).
- On Hacker News: https://news.ycombinator.com/item?id=768057, https://news.ycombinator.com/item?id=7671379.
### Edsger W. Dijkstra
- [Interview, Austin, 04–03–1985](http://www.cs.utexas.edu/users/EWD/misc/vanVlissingenInterview.html).
### Michael O. Church
- [What is spaghetti code?](https://web.archive.org/web/20150910093715/https://michaelochurch.wordpress.com/2012/08/15/what-is-spaghetti-code/)
- [Maintenance and Coding Standards](https://web.archive.org/web/20140716023502/http://funceng.com/2013/08/22/maintenance-and-coding-standards/)- [Functional programs rarely rot](https://web.archive.org/web/20140716023452/http://michaelochurch.wordpress.com/2012/12/06/functional-programs-rarely-rot/), December 6, 2012.
- [MOOCs will disrupt the corporate world](https://web.archive.org/web/20151221144207/https://michaelochurch.wordpress.com/2012/12/07/moocs-disrupting-work/), December 7, 2012.
### JavaScript Promises
- [Promises/A+ Considered Harmful](http://robotlolita.me/2013/06/28/promises-considered-harmful.html) by [@robotlolita](https://github.com/robotlolita)
- [Stop Trying to Catch Me](https://jlongster.com/Stop-Trying-to-Catch-Me) by James Long.
- [Promise is the wrong abstraction](http://anttih.com/articles/2017/12/25/promise-is-the-wrong-abstraction)
- [Promises are not neutral enough](https://staltz.com/promises-are-not-neutral-enough.html)
| 87.043478 | 320 | 0.777722 | yue_Hant | 0.731417 |
fdb9f731eadb148eab01f4de141ab964e750d25d | 3,578 | md | Markdown | server/README.md | werty1st/socialoverlay | 9a4fbad5d34c6b8712beaddd4777105c01770383 | [
"BSD-2-Clause"
] | null | null | null | server/README.md | werty1st/socialoverlay | 9a4fbad5d34c6b8712beaddd4777105c01770383 | [
"BSD-2-Clause"
] | null | null | null | server/README.md | werty1st/socialoverlay | 9a4fbad5d34c6b8712beaddd4777105c01770383 | [
"BSD-2-Clause"
] | null | null | null | social media overlay
====================
TODO
css erstellen Done
auto refresh screenshot
id bleibt gleich, es wird nur der anhang geändert
bei neuem content ID neue ID. änderungen immer über picker
Bearbeiten einer alten ID über den picker ermöglichen. Rejected (nur in extremfällen->imperia)
Langfristig Renderer komplett trennen.
Im Picker wird ein Auftrag erzeugt und in DB gespeichert.
Der Renderer überwacht den Changes Feed/listViewOpen und rendert nach einander.
Der P12 Script im HTML Embeddcode muss generisch sein, da beim Speichern die Anzahl der Bilder nicht fest steht.
Nachteil keine Vorschau oder 2 wege (mit und ohne vorschau implementieren)
Oder nicht so weit treiben und im Picker auf ChangesFeed warten und dann Vorschau anzeigen.
Live schalten Funktion
auf sofa02 ausgewählte (aktivierte) Elemente auf sofa01 synchronisieren. picker auf sofa01 so aufbauen das er den auf sofa02 nutzt
auf 01 im prinzip alles ausbauen. live und dev arbeiten(rendern) auf 02. dafür unterschiedliche design docs benutzen.
dev auf merlin umziehen.
####
render auftrag speichern damit er für url=int oder url=live gerendert werden kann.
speichern wärend render deaktivieren
renderauftrag abspeichern mit feld für targets=c1/c2
dienst auf c2 prüft auf renderaufträge und erzeugt einmal bilder und für jedes ziel css/html/js dateien.
####
ziel:
neue funktionen live verfügbar machen
alte inhalte müssen erreichbar bleiben. (alte docs auf sofa01)
entwicklungsfähigkeit muss bleiben
sofa1 spielt weiter aus
picker auf sofa02 umleiten. 02 rendert und synct auf 01
live und testing als desiogn doc einsetzen
Ablauf:
Vor die Pickerschnittstelle:
Live stelle Button: doc.status = "live", replicate mit filter auf 01
Zurückziehen: doc.status = "depub", doc auf 01 löschen
playouturl ändern
MR Code min-height durch height:100% ersetzen
Welche Auswirkungen hat welche Einstellung:
$scope.targetlocationopt = [ { name:"Standard", value:'default'}, //100% kein margin
{ name:"zdfsport.de Startseite", value:'zdfsportstart'}, //feste größe kein margin
{ name:"zdf.de SB Raster", value:'zdfsbraster'}, //100% mit margin -8px
{ name:"zdf.de Faktenbox", value:'faktenbox'} ]; //über html-src
var css_source = "templates/"+config.version+"/style.css";
var dyncss = { elHeight : "100%", margin:"0px" };
if (RenderRequest.targetlocation == "zdfsportstart") {
css_source = "templates/"+config.version+"/style_sportstart.css";
} else if (RenderRequest.targetlocation == "zdfsbraster") {
dyncss.margin = "-8px";
} else if (RenderRequest.targetlocation == "faktenbox"){
dyncss.elHeight = "350px";
dyncss.margin = "0px";
}else /*default*/ {
}
Platzierung: Standard
RenderRequest.screensize = [946]; //946x532
Desktop Css:
Mobil Css:
Platzierung: zdfsport
Desktop Css: style_sportstart.css
Mobil Css: style_sportstart.css /*wird eh mobil nicht ausgespielt*/
Platzierung: zdf.de SB Raster
Desktop Css:
Mobil Css:
Platzierung: Faktenbox
Desktop Css:
Mobil Css:
Embed Css: min-height: 330px; height: 100%;
Es gibt keine Extra Mobil Version mehr. -> CSS:
#_{{hash}}.zdfembed {
text-align: center;
height: {{css.elHeight}};
width: 100%;
overflow: hidden;
margin: {{css.margin}};
}
.socialbox-wrapper #_{{hash}}.zdfembed {
margin: 0px;
margin-bottom: 8px;
}
server kompletten request speichern lassen damit er ihn neu ausführen kann.
server um cron gesteuerte komponente ergänzen die die veröffentlichten mit der internen db abgleicht und die neu renderung übernimmt | 30.322034 | 132 | 0.746506 | deu_Latn | 0.957176 |
fdbb423d4e96ca1d386283e75397ba3905d1f942 | 1,338 | md | Markdown | README.md | satpalmatharoo/RSV-Project-1 | ea5ebc2c2414e485ddf6c618027cb6d4835eab97 | [
"MIT"
] | null | null | null | README.md | satpalmatharoo/RSV-Project-1 | ea5ebc2c2414e485ddf6c618027cb6d4835eab97 | [
"MIT"
] | 10 | 2021-07-15T18:20:39.000Z | 2021-07-22T22:30:14.000Z | README.md | satpalmatharoo/RSV-Project-1 | ea5ebc2c2414e485ddf6c618027cb6d4835eab97 | [
"MIT"
] | 2 | 2021-07-31T11:25:07.000Z | 2021-07-31T11:25:38.000Z | # RSV-Project-1
# RSV-Project-1 DAILY RSV MOTIVATION
## Our inspiration, is to bring users daily motivational quotes, energising each visitor with positivity to either feul the day ahead or reset their day. We want to be the leading motivational app encouraging users on their unique journey of wellbeing - albeit with a little sass that everyone needs. To further feed inquistive minds, we have added a Wikipedia API to give the user an opportunity to delve into their much loved quote or personality.
## This application gives the user a random quote to help set the tone of their mood - the on click button gives the user the control to then move forward to another quote if desired. Searched items in Wikipedia are then saved in local storage holding informatin which will be used in later development, personalising each visit.
## We love the apps ability to give the user the functionality of being selected at random, elevating the vistors experience via engagement. This function will then be evolved with online excerise apps.
###
* https://satpalmatharoo.github.io/RSV-Project-1/
* Screenshot of deployed application
https://github.com/satpalmatharoo/RSV-Project-1/issues/13#issue-951103943
### Collaborators
* [Viktorija Judeikyte](https://github.com/FJVIKTORIJA)
* [Rahma Abdul] (https://github.com/rahmaabdul)
| 70.421053 | 452 | 0.789238 | eng_Latn | 0.993837 |
fdbc4cbd94375471e33ef65c03b83605431e81f2 | 509 | md | Markdown | vendor/github.com/ftrvxmtrx/gravatar/README.md | slowkow/baseapp | e92546887e24ab41d3b6efba940bf8720019be8e | [
"MIT"
] | 98 | 2015-01-07T10:40:02.000Z | 2021-07-30T07:04:30.000Z | vendor/github.com/ftrvxmtrx/gravatar/README.md | slowkow/baseapp | e92546887e24ab41d3b6efba940bf8720019be8e | [
"MIT"
] | 7 | 2017-03-15T15:16:06.000Z | 2018-05-18T07:41:55.000Z | vendor/github.com/ftrvxmtrx/gravatar/README.md | slowkow/baseapp | e92546887e24ab41d3b6efba940bf8720019be8e | [
"MIT"
] | 42 | 2015-02-06T02:12:56.000Z | 2021-08-11T10:06:21.000Z | # gravatar
gravatar is a library to ease access to
[Gravatar API](http://gravatar.com/site/implement/).
It implements a functionality to get avatars and decode JSON profiles.
## Build status
<a href="http://goci.me/project/github.com/ftrvxmtrx/gravatar">
<img src="http://goci.me/project/image/github.com/ftrvxmtrx/gravatar" />
</a>
## Installation
$ go get github.com/ftrvxmtrx/gravatar
## Documentation and examples
[gravatar on go.pkgdoc.org](http://go.pkgdoc.org/github.com/ftrvxmtrx/gravatar)
| 25.45 | 79 | 0.748527 | kor_Hang | 0.268695 |
fdbdcbd11863f45e309c6e2650f11cf44090a002 | 3,263 | md | Markdown | includes/active-directory-develop-guidedsetup-windesktop-test.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/active-directory-develop-guidedsetup-windesktop-test.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/active-directory-develop-guidedsetup-windesktop-test.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: inkludera fil
description: inkludera fil
services: active-directory
documentationcenter: dev-center-name
author: jmprieur
manager: CelesteDG
editor: ''
ms.service: active-directory
ms.devlang: na
ms.topic: include
ms.tgt_pltfrm: na
ms.workload: identity
ms.date: 04/10/2019
ms.author: jmprieur
ms.custom: include file
ms.openlocfilehash: 2325509f68ced7c66d9f733b07247ae01301b565
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 07/02/2020
ms.locfileid: "82181551"
---
## <a name="test-your-code"></a>Testa koden
Om du vill köra ditt projekt i Visual Studio väljer du **F5**. Ditt program **MainWindow** visas, som du ser här:

Första gången du kör programmet och väljer **anrops Microsoft Graph API** -knappen uppmanas du att logga in. Använd ett Azure Active Directory konto (arbets-eller skol konto) eller ett Microsoft-konto (live.com, outlook.com) för att testa det.

### <a name="provide-consent-for-application-access"></a>Ge tillstånd för program åtkomst
Första gången du loggar in på ditt program uppmanas du också att ange medgivande för att ge programmet åtkomst till din profil och logga in dig, som du ser här:

### <a name="view-application-results"></a>Visa program resultat
När du har loggat in bör du se den användar profil information som returneras av anropet till Microsoft Graph API. Resultaten visas i **resultat rutan resultat för API-anrop** . Grundläggande information om den token som hämtades via anropet till `AcquireTokenInteractive` eller `AcquireTokenSilent` ska visas i rutan **token-information** . Resultaten innehåller följande egenskaper:
|Egenskap |Format |Beskrivning |
|---------|---------|---------|
|**Användar** |<span>user@domain.com</span> |Det användar namn som används för att identifiera användaren.|
|**Token upphör att gälla** |DateTime |Tiden då token upphör att gälla. MSAL utökar förfallo datumet genom att förnya token vid behov.|
### <a name="more-information-about-scopes-and-delegated-permissions"></a>Mer information om omfattningar och delegerade behörigheter
Microsoft Graph-API: t kräver att *User. Read* -omfånget läser en användar profil. Det här omfånget läggs automatiskt till som standard i alla program som är registrerade i program registrerings portalen. Andra API: er för Microsoft Graph, samt anpassade API: er för backend-servern, kan kräva ytterligare omfång. Microsoft Graph-API: n kräver *kalendrar. Läs* omfattning för att lista användarens kalendrar.
Om du vill komma åt användarens kalendrar i ett programs kontext lägger du till *kalendrarna. Läs* behörighet för program registrerings informationen. Lägg sedan till *kalendrarna. Läs* omfång till `acquireTokenSilent` anropet.
>[!NOTE]
>Användaren kan tillfrågas om ytterligare medgivanden när du ökar antalet omfång.
[!INCLUDE [Help and support](./active-directory-develop-help-support-include.md)]
| 55.305085 | 408 | 0.787006 | swe_Latn | 0.992388 |
fdbe9710bfc4458715be6b5f72db734c5f5a7717 | 2,714 | md | Markdown | _posts/2019-04-23-Download-free-aerostar-repair-manual.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | _posts/2019-04-23-Download-free-aerostar-repair-manual.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | _posts/2019-04-23-Download-free-aerostar-repair-manual.md | Kirsten-Krick/Kirsten-Krick | 58994392de08fb245c4163dd2e5566de8dd45a7a | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Free aerostar repair manual book
"It never occurred to me that a congressman would keep a bunch of thugs on the payroll. 292 "Sure, embedded in every human psyche was an affinity for a basic pattern that rarely failed to be She took the path to the old house. Almquist had "You come free aerostar repair manual. Rose had demanded, prosperous port city. " have remarkable adventures to relate, only that it had all been brilliant and really cool. "I'm alone. People aren't. Medra stayed three years with Highdrake, The Eleventh, without looking up, like most TV shows and movies and free aerostar repair manual the actors in themвalthough not, Celestina said, free aerostar repair manual if sort of gross. A DNA molecule adds up to a lot more than a bunch free aerostar repair manual disorganized charges and valency bonds. recessed ledge in the dugout; he let his left hand hang limply over the side, the boy didn't flinch in surprise, Vasquez said. They'll be along soon. A "Wow. They erected it on an outcropping of because of its mysterious-looking contents. just the sorry soul he is. We're wrong? Colman looked over at Veronica. I didn't first see you're. almost against the will of the seafarers, but they did not know "You've still got half the Coke in the can. you want me to give this bag indifference. "We ought to commence evacuating the Kuan-yin," Kath said. And, faithful to then looked dead and cold, some He had been through a long hard trial and had taken a great chance against a great power. The terror he hid from her vanished with the recital of their vows. of free aerostar repair manual varieties of the whale. "This is Bret. The old man waded through the stream barefoot, they will change each other, who commanded attention by the mere fact of his entry, year not stated. 181, the shoulders both of men and women, early May," said Sinsemilla, for He can see a portion of one dust-filmed window, entwined by rambling weeds along the oiled-dirt less with grief for his loss than with happiness for his mother; she has crossed the great divide into the "Closer. Both A round container, filled with luxuriant vegetation which A cold wetness just above the crook of his left elbow, but an otter slipped into it and was gone. " than ever before. A little compensation. The forks were missing. Now she was here to remake the first. "What do you mean by 'basically'?" "It's in my tummy!" Junior would rather have chugged a beaker of carbolic acid than free aerostar repair manual. He smiled. redemption, and it involves no bare-breasted women, is the bridge between that book and the next one. Hound scratched his neck and sighed. | 301.555556 | 2,613 | 0.784819 | eng_Latn | 0.999931 |
fdbef204ab74d5d0087e59fdb6ca45843f5a9c2a | 6,564 | md | Markdown | articles/purview/file-extension-insights.md | Myhostings/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 16 | 2017-08-28T08:29:36.000Z | 2022-01-02T16:46:30.000Z | articles/purview/file-extension-insights.md | Ahmetmaman/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 470 | 2017-11-11T20:59:16.000Z | 2021-04-10T17:06:28.000Z | articles/purview/file-extension-insights.md | Ahmetmaman/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 25 | 2017-11-11T19:39:08.000Z | 2022-03-30T13:47:56.000Z | ---
title: Azure Takiview 'ta verileriniz üzerinde rapor oluşturma öngörülerini kullanarak dosya uzantısı
description: Bu nasıl yapılır kılavuzunda, verileriniz üzerinde takip etme dosya uzantısının nasıl görüntüleneceği ve kullanılacağı açıklanmaktadır.
author: batamig
ms.author: bagol
ms.service: purview
ms.subservice: purview-data-catalog
ms.topic: how-to
ms.date: 01/17/2021
ms.openlocfilehash: 5cbfb41d50e055f745864e4d5f8bc15a55d925e7
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/29/2021
ms.locfileid: "101668578"
---
# <a name="file-extension-insights-about-your-data-from-azure-purview"></a>Azure purview 'daki verilerinize ilişkin dosya uzantısı öngörüleri
Bu nasıl yapılır kılavuzunda, verilerinizde bulunan dosya uzantıları veya dosya türleri hakkında öngörülere erişme, bunları görüntüleme ve bunlara filtre uygulama açıklanmaktadır.
Desteklenen veri kaynakları şunlardır: Azure Blob depolama, Azure Data Lake Storage (ADLS) GEN 1, Azure Data Lake Storage (ADLS) GEN 2, Amazon S3 demetleri
Bu nasıl yapılır kılavuzunda şunları yapmayı öğreneceksiniz:
> [!div class="checklist"]
> * Azure 'dan purview hesabınızı başlatın
> - Verileriniz üzerinde dosya uzantısı öngörülerini görüntüleme
> - Verilerinize ilişkin daha fazla dosya uzantısı ayrıntısı için detaya gidin
## <a name="prerequisites"></a>Önkoşullar
Purview Insights 'ı kullanmaya başlamadan önce aşağıdaki adımları tamamladığınızdan emin olun:
- Azure kaynaklarınızı ayarlayın ve test verileriyle ilgili hesapları doldurulmuştur
- Her veri kaynağındaki test verileri üzerinde bir tarama ayarlayın ve işlemi tamamlanmıştır. Daha fazla bilgi için bkz. [Azure 'da veri kaynaklarını yönetme (Önizleme)](manage-data-sources.md) ve [tarama kuralı kümesi oluşturma](create-a-scan-rule-set.md).
- Bir [veri okuyucu veya veri seçkin rolü](catalog-permissions.md#azure-purviews-pre-defined-data-plane-roles)ile birlikte bir hesapla oturum açıldı.
Daha fazla bilgi için bkz. [Azure purview 'ta veri kaynaklarını yönetme (Önizleme)](manage-data-sources.md).
## <a name="use-purview-file-extension-insights"></a>Purview dosya uzantısı öngörülerini kullanma
Varlıklarınızı tararken Azure purview, verilerinizde bulunan dosya türlerini tespit edebilir ve her dosya türü hakkında daha fazla ayrıntı sağlar. Ayrıntılar, sahip olduğunuz her türden dosyanın sayısını, bu dosyaların nerede olduğunu ve hassas veriler için tarama yapılıp yapılmayacağını içerir.
> [!NOTE]
> Kaynak türlerinizi taradıktan sonra, yeni varlıkları yansıtmak için **Dosya Uzantısı** öngörülerini birkaç saat verin.
**Dosya Uzantısı öngörülerini görüntülemek için:**
1. Azure portal **Azure purview** [örneği ekranına](https://aka.ms/purviewportal) gidin ve purview hesabınızı seçin.
1. **Genel bakış** sayfasında, **Başlarken** bölümünde, **purview hesabını Başlat** kutucuğunu seçin.
1. Takip görünümü ' nde, :::image type="icon" source="media/insights/ico-insights.png" border="false"::: **Öngörüler** alanına erişmek için soldaki Öngörüler menü öğesini seçin.
1. **Öngörüler** içinde **Dosya uzantıları** sekmesini seçin.
Bu rapor, seçilen zaman diliminde (varsayılan: 30 gün) çok sayıda benzersiz dosya uzantısının ve bulunan en iyi 10 uzantı grafiğinin bir özetini görüntüler.
:::image type="content" source="media/file-extension-insights/file-extension-overview-small.png" alt-text="Dosya Uzantısı raporu-genel bakış" lightbox="media/file-extension-insights/file-extension-overview.png":::
Daha fazla bilgi edinmek için aşağıdakilerden birini yapın:
- Dosya uzantılarının bulunduğu zaman aralığını değiştirmek için raporun en üstündeki **zaman** seçicisini seçin.
- Bulunan dosya uzantılarının tam listesini görüntülemek için grafiğin altında **daha fazla görüntüle** ' yi seçin. Daha fazla bilgi için bkz. [Dosya Uzantısı öngörüleri ayrıntıya gitme](#file-extension-insights-drilldown).
### <a name="file-extension-insights-drilldown"></a>Dosya Uzantısı öngörüleri ayrıntıya gitme
Verilerinizde bulunan dosya türleriyle ilgili üst düzey bilgileri görüntüledikten sonra, bulundukları konum hakkında daha fazla ayrıntı ve gizli veriler için taranıp taranamayacağını öğrenmek için detaya gidin.
Örnek:
:::image type="content" source="media/file-extension-insights/file-extension-drilldown-small.png" alt-text="Dosya Uzantısı raporu-ayrıntıya git" lightbox="media/file-extension-insights/file-extension-drilldown.png":::
Kılavuzda, aşağıdakiler dahil olmak üzere bulunan her bir dosya uzantısının ayrıntıları gösterilmektedir:
- **Dosya sayısı**. Belirtilen uzantıya sahip dosya sayısı.
- **İçerik tarama**. Dosya uzantısının gizli verilerin taranması için desteklenip desteklenmediğini belirtir.
- **Abonelikler**. Belirtilen uzantının bulunduğu abonelik sayısı.
- **Kaynaklar**. Belirtilen dosya uzantısıyla bulunan kaynak sayısı.
Gösterilen verileri filtrelemek için kılavuzun üzerindeki filtreleri kullanın:
|Seçenek |Açıklama |
|---------|---------|
|**Anahtar sözcüğe göre filtrele** | Dosya türlerinizi ada göre filtrelemek için **anahtar sözcüğe göre filtrele** kutusuna metin girin. Örneğin, yalnızca PDF 'Leri görüntülemek için girin `PDF` . |
|**Saat** | Verilerinizin oluşturulduğu zamana ait belirli bir zaman aralığına göre filtrelemek için seçin. <br>**Varsayılan:** 30 gün |
|**Dosya Uzantısı** |Kılavuza bir veya daha fazla dosya türüne göre filtre uygulamak için seçin. |
|**Kaynaklar** |Kılavuza belirli veri kaynaklarına göre filtre uygulamak için seçin. |
|**İçerik tarama** |Yalnızca hassas veriler için daha fazla taranabilecek dosya türlerini veya **. CERT** veya **. jpg** dosyaları gibi taranmayan verileri göstermek için **desteklenen** veya **Desteklenmeyen**' ı seçin. |
| | |
Filtrelerin üzerinde **Sütunları Düzenle** ' yi seçerek :::image type="icon" source="media/insights/ico-columns.png" border="false"::: kılavuzunuzda daha fazla veya daha az sütun görüntüleyin veya sıralamayı değiştirin.
Kılavuzu sıralamak için sütuna göre sıralanacak bir sütun üst bilgisi seçin.
## <a name="next-steps"></a>Sonraki adımlar
Azure purview Insight Reports hakkında daha fazla bilgi edinin
> [!div class="nextstepaction"]
> [Sözlük öngörüleri](glossary-insights.md)
> [!div class="nextstepaction"]
> [Öngörüleri Tara](scan-insights.md)
> [!div class="nextstepaction"]
> [Sınıflandırma öngörüleri](./classification-insights.md)
> [!div class="nextstepaction"]
> [Duyarlılık etiketi öngörüleri](sensitivity-insights.md)
| 57.578947 | 296 | 0.785954 | tur_Latn | 0.999763 |
fdbef7ed2ae795f311a90742eed6d2b63b467ee3 | 33,611 | md | Markdown | src/site/content/es/blog/building-a-settings-component/index.md | lildeadprince/web.dev | aa0b04ffd118195903fdb2bc46335d111ac78fce | [
"Apache-2.0"
] | 1 | 2019-05-22T23:01:39.000Z | 2019-05-22T23:01:39.000Z | src/site/content/es/blog/building-a-settings-component/index.md | lildeadprince/web.dev | aa0b04ffd118195903fdb2bc46335d111ac78fce | [
"Apache-2.0"
] | 8 | 2021-10-31T23:46:40.000Z | 2021-11-01T15:05:12.000Z | src/site/content/es/blog/building-a-settings-component/index.md | lildeadprince/web.dev | aa0b04ffd118195903fdb2bc46335d111ac78fce | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: Building a settings component
subhead: A foundational overview of how to build a settings component of sliders and checkboxes.
authors:
- adamargyle
description: A foundational overview of how to build a settings component of sliders and checkboxes.
date: 2021-03-17
hero: image/vS06HQ1YTsbMKSFTIPl2iogUQP73/SUaxDTgOYvv2JXxaErBP.png
thumbnail: image/vS06HQ1YTsbMKSFTIPl2iogUQP73/zkv1FlI6dn82rJ104yBV.png
tags:
- blog
- css
- dom
- javascript
- layout
- mobile
- ux
---
In this post I want to share my thinking on building a settings component for the web that is responsive, supports multiple device inputs, and works across browsers. Try it out in the [demo](https://gui-challenges.web.app/settings/dist/).
<figure data-size="full">{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/WuIwd9jPb30KmmnjJn75.mp4", autoplay="true", loop="true", muted="true" %} <figcaption> <a href="https://gui-challenges.web.app/settings/dist/">Demo</a> </figcaption></figure>
If you prefer video, or want a UI/UX preview of what we're building, here's a shorter walkthrough on YouTube:
{% YouTube 'dm7gnp6eh3Q' %}
## Overview
I've broken out the aspects of this component into the following sections:
1. [Layouts](#layouts)
2. [Color](#color)
3. [Custom range input](#custom-range)
4. [Custom checkbox input](#custom-checkbox)
5. [Accessibility considerations](#accessibility)
6. [JavaScript](#javascript)
{% Aside 'gotchas' %} The CSS snippets below assume PostCSS with [PostCSS Preset Env](https://preset-env.cssdb.org/features). The intent is to practice early and often with syntax that's in early drafts or experimentally available in browsers. Or, as the plugin likes to say, "Use tomorrow's CSS today". {% endAside %}
## Layouts
This is the first GUI Challenge demo to be **all CSS Grid**! Here's each grid highlighted with [Chrome DevTools for grid](https://goo.gle/devtools-grid):
{% Img src="image/vS06HQ1YTsbMKSFTIPl2iogUQP73/h6LZhScslprBcFol4gGp.png", alt="Colorful outlines and gap overlays that help show all the boxes that make up the settings layout", width="800", height="563" %}
{% Banner 'neutral' %} To highlight your grid layouts:
1. Open Chrome DevTools with `cmd+opt+i` or `ctrl+alt+i`.
2. Select the Layout tab next to the Styles tab.
3. In the Grid section, check all the layouts.
4. Change the colors of all the layouts. {% endBanner %}
### Just for gap
The most common layout:
```css
foo {
display: grid;
gap: var(--something);
}
```
I call this layout "just for gap" because it only uses grid to add gaps between blocks.
Five layouts use this strategy, here they all are:
{% Img src="image/vS06HQ1YTsbMKSFTIPl2iogUQP73/zYWSVLzdtrh1K8p8yUuA.png", alt="Vertical grid layouts highlighted with outlines and filled in gaps", width="800", height="568" %}
The `fieldset` element, which holds each input group (`.fieldset-item`), uses `gap: 1px` to create the hairline borders between elements. No tricky border solution!
<div class="switcher">
{% Compare 'better', 'Filled gap' %}
```css
.grid {
display: grid;
gap: 1px;
background: var(--bg-surface-1);
& > .fieldset-item {
background: var(--bg-surface-2);
}
}
```
{% endCompare %}
{% Compare 'worse', 'Border trick' %}
```css
.grid {
display: grid;
& > .fieldset-item {
background: var(--bg-surface-2);
&:not(:last-child) {
border-bottom: 1px solid var(--bg-surface-1);
}
}
}
```
{% endCompare %}
</div>
### Natural grid wrap
The most complex layout ended up being the macro layout, the logical layout system between `<main>` and `<form>`.
#### Centering wrapped content
Both flexbox and grid provide `align-items` or `align-content` abilities, and when dealing with wrapping elements, `content` layout alignments distribute space among the children as a group.
```css
main {
display: grid;
gap: var(--space-xl);
place-content: center;
}
```
The main element is using the `place-content: center` [alignment shorthand](https://developer.mozilla.org/docs/Web/CSS/place-content) so that the children are centered vertically and horizontally in both one and two column layouts.
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/IQI2PofA6gpNFUkDrvKo.mp4", autoplay="true", loop="true", muted="true" %}
Watch the video above to see how the "content" stays centered, even though wrapping has occurred.
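If the shorthand is new to you, here's roughly what that centering expands to in longhand properties. This is an illustrative sketch, not extra code from the demo:
```css
/* Rough longhand equivalent of `place-content: center` */
main {
  display: grid;
  gap: var(--space-xl);
  align-content: center;   /* centers the row group vertically */
  justify-content: center; /* centers the column group horizontally */
}
```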
#### Repeat auto-fit minmax
The `<form>` uses an adaptive grid layout for each section. The layout shifts from one to two columns based on the available space.
```css
form {
display: grid;
gap: var(--space-xl) var(--space-xxl);
grid-template-columns: repeat(auto-fit, minmax(min(10ch, 100%), 35ch));
align-items: flex-start;
max-width: 89vw;
}
```
This grid has a different value for `row-gap` (--space-xl) and `column-gap` (--space-xxl) to put that custom touch on the responsive layout. When the columns stack, we want a large gap, but not as large as when we're on a wide screen.
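If the two-value `gap` shorthand is unfamiliar, it's the row gap followed by the column gap. A quick sketch of the equivalence, shown for illustration only:
```css
/* These two rules set the same spacing */
form {
  gap: var(--space-xl) var(--space-xxl); /* <row-gap> <column-gap> */
}

form {
  row-gap: var(--space-xl);
  column-gap: var(--space-xxl);
}
```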
The `grid-template-columns` uses 3 CSS functions: `repeat()`, `minmax()` and `min()`. [Una Kravets](#) has a [great layout blog post](/one-line-layouts/) about this, calling it [RAM](/one-line-layouts/#07.-ram-(repeat-auto-minmax):-grid-template-columns(auto-fit-minmax(lessbasegreater-1fr))).
There are 3 special additions in our layout, if you compare it to Una's:
- We pass an extra `min()` function.
- We specify `align-items: flex-start`.
- There's a `max-width: 89vw` style.
The extra `min()` function is well described by Evan Minto on his blog in the post [Intrinsically Responsive CSS Grid with minmax() and min()](https://evanminto.com/blog/intrinsically-responsive-css-grid-minmax-min/). I recommend giving that a read. The `flex-start` alignment is there to remove the default stretching effect, so the children of this layout don't need to have equal heights, they can have natural, intrinsic heights. The YouTube video has a quick breakdown of this alignment addition.
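To make the effect of that extra `min()` concrete, here's a side by side sketch of the column definition with and without it. This is not from the demo's stylesheet, just an illustration:
```css
form {
  /* Can overflow: tracks are never allowed to shrink below 10ch,
     even when the container is narrower than that */
  grid-template-columns: repeat(auto-fit, minmax(10ch, 35ch));
}

form {
  /* Can't overflow: when the container is narrower than 10ch,
     the track minimum becomes 100% of the container instead */
  grid-template-columns: repeat(auto-fit, minmax(min(10ch, 100%), 35ch));
}
```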
`max-width: 89vw` deserves a small breakdown in this post. Let me show you the layout with and without the style applied:
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/gdldf7hyaBrHWwxQbSaT.mp4", autoplay="true", loop="true", muted="true" %}
What's happening? When `max-width` is specified, it provides context, explicit sizing or [definite sizing](https://drafts.csswg.org/css-sizing-3/#definite), so the [`auto-fit` layout algorithm](https://drafts.csswg.org/css-grid/#auto-repeat) knows how many repetitions it can fit into the space. While it seems obvious that the space is "full width", per the CSS grid spec a definite size or max size must be provided. I've provided a max size.
So, why `89vw`? Because it "worked" for my layout. A couple of Chrome folks and I are investigating why a more reasonable value, like `100vw`, isn't sufficient, and whether this is in fact a bug.
### Spacing
A majority of the harmony of this layout comes from a limited palette of spacing, 7 values to be exact.
```css
:root {
--space-xxs: .25rem;
--space-xs: .5rem;
--space-sm: 1rem;
--space-md: 1.5rem;
--space-lg: 2rem;
--space-xl: 3rem;
--space-xxl: 6rem;
}
```
Using these flows really nicely with grid, [CSS @nest](https://drafts.csswg.org/css-nesting-1/), and [level 5 syntax of @media](https://drafts.csswg.org/mediaqueries-5/). Here's an example, the full set of layout styles for `<main>`.
```css
main {
display: grid;
gap: var(--space-xl);
place-content: center;
padding: var(--space-sm);
@media (width >= 540px) {
& {
padding: var(--space-lg);
}
}
@media (width >= 800px) {
& {
padding: var(--space-xl);
}
}
}
```
A grid with centered content, moderately padded by default (as on mobile). But as more viewport space becomes available, it spreads out by increasing that padding. CSS in 2021 is looking pretty good!
Remember the earlier layout, "just for gap"? Here's a more complete version of how those look in this component:
```css
header {
display: grid;
gap: var(--space-xxs);
}
section {
display: grid;
gap: var(--space-md);
}
```
## Color
A controlled use of color helped this design stand out as expressive yet minimal. I do it like this:
```css
:root {
--surface1: lch(10 0 0);
--surface2: lch(15 0 0);
--surface3: lch(20 0 0);
--surface4: lch(25 0 0);
--text1: lch(95 0 0);
--text2: lch(75 0 0);
}
```
{% Aside 'key-term' %} The [`lab()` and `lch()` PostCSS plugin](https://github.com/csstools/postcss-lab-function) is part of [PostCSS Preset Env](https://preset-env.cssdb.org/features#lch-function) and outputs `rgb()` colors. {% endAside %}
I name my surface and text colors with numbers instead of names like `surface-dark` and `surface-darker`, because in a media query I'll be swapping them, and light and dark won't make sense there.
I swap them in a preference media query like this:
```css
:root {
...
@media (prefers-color-scheme: light) {
& {
--surface1: lch(90 0 0);
--surface2: lch(100 0 0);
--surface3: lch(98 0 0);
--surface4: lch(85 0 0);
--text1: lch(20 0 0);
--text2: lch(40 0 0);
}
}
}
```
{% Aside 'key-term' %} The [`@nest` PostCSS plugin](https://github.com/csstools/postcss-nesting) is part of [PostCSS Preset Env](https://preset-env.cssdb.org/features) and expands selectors into a syntax today's browsers support. {% endAside %}
It's important to get a quick look at the big picture and the overall strategies before we dive into the color syntax details. But, since I've gotten a bit ahead of myself, let me back up a little.
### LCH?
Without getting too deep into color theory, LCH is a human oriented syntax that caters to how we perceive color, not how we measure color with math (for example, using 255). This gives it a distinct advantage: humans can write it more easily, and other humans will be in tune with the adjustments.
<figure>{% Img src="image/vS06HQ1YTsbMKSFTIPl2iogUQP73/160dWLSrMhFISwWMVd4w.png", alt="Una captura de pantalla de la página web pod.link/csspodcast, con el Color 2: Perception episode pulled up", width="800", height="329" %}<figcaption> Aprende sobre el color perceptual (¡y más!) En el <a href="https://pod.link/thecsspodcast">CSS Podcast</a></figcaption></figure>
Por hoy, en esta demostración, centrémonos en la sintaxis y los valores que estoy cambiando para hacer claro y oscuro. Veamos 1 superficie y 1 color de texto:
```css
:root {
--surface1: lch(10 0 0);
--text1: lch(95 0 0);
@media (prefers-color-scheme: light) {
& {
--surface1: lch(90 0 0);
--text1: lch(40 0 0);
}
}
}
```
`--surface1: lch(10 0 0)` se traduce en un `10%` de luminosidad, 0 croma y 0 matiz: un gris incoloro muy oscuro. Luego, en la consulta de medios para el modo claro, la luminosidad se cambia al `90%` con `--surface1: lch(90 0 0);`. Y esa es la esencia de la estrategia. Comienza cambiando la luminosidad entre los 2 temas, manteniendo las relaciones de contraste que requiere el diseño o lo que puede mantener la accesibilidad.
La ventaja de `lch()` es que la claridad está orientada al ser humano y podemos sentirnos bien con un `%` de cambio, que será perceptual y consistentemente ese `%` será diferente. `hsl()` por ejemplo, [no es tan confiable](https://twitter.com/argyleink/status/1201908189257555968).
Hay [más para aprender](https://lea.verou.me/2020/04/lch-colors-in-css-what-why-and-how/) sobre los espacios de color y `lch()` si estás interesado. ¡Se está aproximando!
{% Blockquote 'Lea Verou' %} CSS en este momento **no puede acceder a estos colores en lo absoluto**. Permítanme repetirlo: **no tenemos acceso a un tercio de los colores en la mayoría de los monitores modernos.** Y estos no son solo algunos colores, sino los **colores más vivos que puede mostrar la pantalla**. Nuestros sitios web se han desvanecido porque el hardware del monitor evolucionó más rápido que las especificaciones CSS y las implementaciones del navegador. {% endBlockquote %}
### Controles de formularios adaptables con color-scheme
Muchos navegadores incluyen controles con tema oscuro (actualmente Safari y Chromium lo hacen), pero debes especificar en el CSS o en el HTML que tu diseño los usa.
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/0VVtEAWM6jHeIxahqnFy.mp4", autoplay="true", loop="true", muted="true" %}
Lo anterior demuestra el efecto de la propiedad del panel Estilos de DevTools. La demostración usa la etiqueta HTML, que en mi opinión es generalmente una mejor ubicación:
```html
<meta name="color-scheme" content="dark light">
```
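Si prefieres declararlo desde CSS en lugar de (o además de) la metaetiqueta, la propiedad `color-scheme` se puede definir en la raíz del documento; un esbozo mínimo:

```css
:root {
  /* indica al navegador que los controles nativos pueden dibujarse en ambos temas */
  color-scheme: dark light;
}
```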
Aprende todo al respecto en este [artículo sobre `color-scheme`](/color-scheme/) de [Thomas Steiner](/authors/thomassteiner/). ¡Hay mucho más que ganar que las entradas de las casillas oscuras de verificación!
### `accent-color` de CSS
Ha habido [una actividad reciente](https://twitter.com/argyleink/status/1360022120810483715?s=20) en torno a `accent-color` en los elementos de formulario, siendo un estilo CSS único que puede cambiar el color de tinte utilizado en el elemento de entrada de los navegadores. Lee más sobre esto [aquí en GitHub](https://github.com/w3c/csswg-drafts/issues/5187). Lo he incluido en mis estilos para este componente. Como los navegadores lo permiten, mis casillas de verificación estarán más relacionadas con el tema mediante los colores rosa y morado.
```css
input[type="checkbox"] {
accent-color: var(--brand);
}
```
{% Img src="image/vS06HQ1YTsbMKSFTIPl2iogUQP73/J9pbhB0ImoDzbsXkBGtG.png", alt="Una captura de pantalla de Chromium en Linux con casillas de verificación rosas", width="800", height="406" %}
### Estallidos de color con degradados fijos y focus-within
El color resalta más cuando se usa con moderación y una de las formas en que me gusta lograrlo es a través de interacciones coloridas en la interfaz de usuario.
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/Pm75QwVToKkiqedqPtmm.mp4", autoplay="true", loop="true", muted="true", width="480px" %}
Hay muchas capas de retroalimentación e interacción de la interfaz de usuario en el video anterior, que ayudan a dar personalidad a la interacción mediante lo siguiente:
- Destacando el contexto.
- Proporcionar información de la interfaz de usuario de "qué tan completo" está el valor en el rango.
- Proporcionar comentarios de la interfaz de usuario de que un campo está aceptando entradas.
Para proporcionar comentarios cuando se interactúa con un elemento, CSS utiliza la pseudo-clase de [`:focus-within`](https://developer.mozilla.org/docs/Web/CSS/:focus-within) para cambiar la apariencia de varios elementos, analicemos el `.fieldset-item`, es muy interesante:
```css
.fieldset-item {
...
&:focus-within {
background: var(--surface2);
& svg {
fill: white;
}
& picture {
clip-path: circle(50%);
background: var(--brand-bg-gradient) fixed;
}
}
}
```
Cuando uno de los elementos secundarios de este elemento recibe el foco:
1. Al fondo (background) del `.fieldset-item` se le asigna un color de superficie de mayor contraste.
2. El `svg` anidado se rellena de blanco para un mayor contraste.
3. El `clip-path` anidado en el `<picture>` se expande a un círculo completo y el fondo se rellena con el degradado fijo y brillante.
## Rango personalizado
Dado el siguiente elemento de entrada HTML, te mostraré cómo personalicé su apariencia:
```html
<input type="range">
```
Hay 3 partes de este elemento que debemos personalizar:
1. [Elemento de rango / contenedor](#range-element-styles)
2. [Pista (track)](#track-styles)
3. [Pulgar (thumb)](#thumb-styles)
### Estilos de elementos de rango
```css
input[type="range"] {
/* style setting variables */
--track-height: .5ex;
--track-fill: 0%;
--thumb-size: 3ex;
--thumb-offset: -1.25ex;
--thumb-highlight-size: 0px;
  appearance: none; /* limpia el estilo, me da espacio para el mío */
  display: block;
  inline-size: 100%; /* llena el contenedor */
  margin: 1ex 0; /* se asegura de que el pulgar no entre en colisión con el contenido vecino */
  background: transparent; /* el fondo (background) está en la pista */
outline-offset: 5px; /* los estilos enfocados tienen espacio */
}
```
Las primeras líneas de CSS son las partes personalizables de los estilos, y espero que documentarlas con claridad resulte útil. El resto de los estilos son, en su mayoría, estilos de restablecimiento (reset) que proporcionan una base consistente sobre la que construir las partes complicadas del componente.
### Estilos de pista
```css
input[type="range"]::-webkit-slider-runnable-track {
  appearance: none; /* limpia el estilo, me da espacio para el mío */
block-size: var(--track-height);
border-radius: 5ex;
background:
/* gradiente de parada dura:
- mitad transparente (donde lo colorido estará)
- llenar la pista con color medio oscuro
      - la 1.ª imagen de fondo está encima
*/
linear-gradient(
to right,
transparent var(--track-fill),
var(--surface1) 0%
),
/* efecto colorido de llenado, superficie detrás de la pista llenada */
var(--brand-bg-gradient) fixed;
}
```
El truco para esto es "revelar" el color vibrante de relleno. Esto se hace con el gradiente de hard stop en la parte superior. El degradado es transparente hasta el porcentaje de relleno y, a continuación, utiliza el color de la superficie de la pista sin rellenar. Detrás de esa superficie sin relleno, hay un color de ancho completo, esperando que la transparencia lo revele.
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/aiAL28AkDRZvaAZNEbW8.mp4", autoplay="true", loop="true", muted="true" %}
#### Estilo de relleno de pista
Mi diseño **requiere JavaScript** para mantener el estilo de relleno. Hay estrategias solo de CSS, pero requieren que el elemento del pulgar tenga la misma altura que la pista, y no pude encontrar una armonía dentro de esos límites.
```js
/* toma los desplazadores de la página */
const sliders = document.querySelectorAll('input[type="range"]')
/* toma un elemento de desplazador, regresa un porcentaje como cadena para usar en CSS */
const rangeToPercent = slider => {
const max = slider.getAttribute('max') || 10;
const percent = slider.value / max * 100;
return `${parseInt(percent)}%`;
};
/* al cargar la página, define la cantidad de llenado */
sliders.forEach(slider => {
slider.style.setProperty('--track-fill', rangeToPercent(slider));
/* cuando un desplazador cambia, actualiza el prop de llenado */
slider.addEventListener('input', e => {
e.target.style.setProperty('--track-fill', rangeToPercent(e.target));
})
})
```
Creo que esto lo convierte en una buena mejora visual. El control deslizante funciona muy bien sin JavaScript, el prop de `--track-fill` no es necesario, simplemente no tendrá un estilo de relleno si no está presente. Si JavaScript está disponible, completa la propiedad personalizada mientras observas los cambios del usuario, sincronizando la propiedad personalizada con el valor.
[Aquí hay una gran publicación](https://css-tricks.com/sliding-nightmare-understanding-range-input/) sobre [CSS-Tricks](https://css-tricks.com/) de [Ana Tudor](https://twitter.com/anatudor), que demuestra una solución única de CSS para el relleno de pistas. También encontré este [elemento de `range`](https://app.native-elements.dev/editor/elements/range) muy inspirador.
### Estilos de pulgar
```css
input[type="range"]::-webkit-slider-thumb {
  appearance: none; /* limpia el estilo, me da espacio para el mío */
cursor: ew-resize; /* estilo del cursor para admitir la dirección de arrastre */
border: 3px solid var(--surface3);
block-size: var(--thumb-size);
inline-size: var(--thumb-size);
margin-top: var(--thumb-offset);
border-radius: 50%;
background: var(--brand-bg-gradient) fixed;
}
```
La mayoría de estos estilos son para hacer un bonito círculo. De nuevo, verás el degradado de fondo fijo que unifica los colores dinámicos de los pulgares, las pistas y los elementos SVG asociados. Separé los estilos de la interacción para ayudar a aislar la técnica de `box-shadow` que se usa para el resaltado de desplazamiento:
```css
@custom-media --motionOK (prefers-reduced-motion: no-preference);
::-webkit-slider-thumb {
…
/* shadow spread es inicialmente un 0 */
box-shadow: 0 0 0 var(--thumb-highlight-size) var(--thumb-highlight-color);
  /* si se permite el movimiento, haz una transición del cambio de box-shadow */
@media (--motionOK) {
& {
transition: box-shadow .1s ease;
}
}
/* en el estado de on hover/active del elemento primario, incrementa el tamaño del prop */
@nest input[type="range"]:is(:hover,:active) & {
--thumb-highlight-size: 10px;
}
}
```
{% Aside 'key-term' %} [@custom-media](https://drafts.csswg.org/mediaqueries-5/#custom-mq) es una adición de especificación de nivel 5 de [PostCSS Custom Media](https://github.com/postcss/postcss-custom-media), que forma parte de [PostCSS Preset Env](https://preset-env.cssdb.org/features). {% endAside %}
El objetivo era un destacado visual animado y fácil de manejar para los comentarios de los usuarios. Al usar una box shadow (sombra de la caja), puedo evitar [activar el diseño](/animations-guide/#triggers) con el efecto. Hago esto creando una sombra que no esté borrosa y coincida con la forma circular del elemento del pulgar. Luego cambio y hago la transición de su tamaño de propagación al pasar el mouse.
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/s835RbH88L5bxjl5bMFl.mp4", autoplay="true", loop="true", muted="true" %}
Si tan solo el efecto de resaltado fuera tan fácil en las casillas de verificación…
### Selectores entre navegadores
Descubrí que necesitaba estos `-webkit-` y `-moz-` para lograr consistencia entre navegadores:
```css
input[type="range"] {
&::-webkit-slider-runnable-track {}
&::-moz-range-track {}
&::-webkit-slider-thumb {}
&::-moz-range-thumb {}
}
```
{% Aside 'gotchas' %} [Josh Comeau](https://twitter.com/JoshWComeau) describe por qué los ejemplos anteriores no usan simplemente una coma entre los selectores para el estilo de varios navegadores; consulta el [hilo de Twitter](https://twitter.com/JoshWComeau/status/1359213591602335752?s=20) para obtener más información. {% endAside %}
## Casilla de verificación personalizada
Dado el siguiente elemento de entrada HTML, te mostraré cómo personalizar su apariencia:
```html
<input type="checkbox">
```
Hay 3 partes de este elemento que debemos personalizar:
1. [Elemento de casilla de verificación](#checkbox-element)
2. [Etiquetas asociadas](#checkbox-labels)
3. [Efecto de resaltado](#checkbox-highlight)
### Elemento de casilla de verificación
```css
input[type="checkbox"] {
inline-size: var(--space-sm); /* incrementa el ancho */
block-size: var(--space-sm); /* incrementa el tamaño */
outline-offset: 5px; /* mejora de focus style */
accent-color: var(--brand); /* marca con un color la entrada */
position: relative; /* preparación para un pseudo elemento */
transform-style: preserve-3d; /* crea un contexto de apilamiento de espacio 3d con eje z */
margin: 0;
cursor: pointer;
}
```
Los estilos de `transform-style` y de `position` preparan el terreno para el pseudoelemento que presentaremos más adelante para diseñar el resaltado. El resto son pequeños ajustes de estilo según mi preferencia personal: me gusta que el cursor sea un puntero, me gustan los desplazamientos del contorno (outline offsets), las casillas de verificación predeterminadas son demasiado pequeñas y, si se [permite](https://drafts.csswg.org/css-ui-4/#widget-accent) el `accent-color`, estas casillas de verificación se incorporan al esquema de color de la marca (brand color scheme).
### Etiquetas de las casillas de verificación
Es importante proporcionar etiquetas para las casillas de verificación por 2 razones. La primera es representar para qué se usa el valor de la casilla de verificación, para responder "¿encendido o apagado o para qué?" En segundo lugar es para el UX, los usuarios de la web se han acostumbrado a interactuar con las casillas de verificación a través de sus etiquetas asociadas.
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/7GYIFNjNCBdj13juFO7S.mp4", autoplay="true", loop="true", muted="true" %}
<div class="switcher">{% Compare 'better', 'input' %}</div>
<pre data-md-type="block_code" data-md-language="html"><code class="language-html"><input
type="checkbox"
id="text-notifications"
name="text-notifications"
>
</code></pre>
<p data-md-type="paragraph">{% endCompare %}</p>
<p data-md-type="paragraph">{% Compare 'better', 'label' %}</p>
<pre data-md-type="block_code" data-md-language="html"><code class="language-html"><label for="text-notifications">
<h3>Mensajes de texto</h3>
<small>Recibe notificaciones sobre todos los mensajes de texto enviados a su dispositivo</small>
</label>
</code></pre>
<p data-md-type="paragraph">{% endCompare %}</p>
<div data-md-type="block_html"></div>
En tu etiqueta, coloca un atributo de `for` que apunte a una casilla de verificación mediante su ID: `<label for="text-notifications">`. En tu casilla de verificación, duplica el nombre y la identificación para asegurarte de que se encuentran con diferentes herramientas y tecnología, como un ratón o un lector de pantalla: `<input type="checkbox" id="text-notifications" name="text-notifications">`. `:hover`, `:active` y más vienen gratis con la conexión, aumentando las formas en que se puede interactuar con tu formulario.
### Resaltado de la casilla de verificación
Quiero mantener mis interfaces consistentes y el elemento deslizante tiene un bonito resaltado en miniatura que me gustaría usar con la casilla de verificación. La miniatura puede usar `box-shadow` y su propiedad de `spread` para escalar una sombra hacia arriba y hacia abajo. Sin embargo, ese efecto no funciona aquí porque nuestras casillas de verificación son [y deberían ser](https://twitter.com/argyleink/status/1329230409784291328?s=20) cuadradas.
Puedes lograr el mismo efecto visual con un pseudoelemento y una cantidad desafortunada de CSS complicado:
```css
@custom-media --motionOK (prefers-reduced-motion: no-preference);
input[type="checkbox"]::before {
--thumb-scale: .01; /* escala inicial del color para resaltar */
--thumb-highlight-size: var(--space-xl);
content: "";
inline-size: var(--thumb-highlight-size);
block-size: var(--thumb-highlight-size);
clip-path: circle(50%); /* forma de círculo */
position: absolute; /* es por esto que usamos position relative en el elemento primario */
  top: 50%; /* técnica de pop y plop (https://web.dev/centering-in-css/#5.-pop-and-plop) */
  left: 50%;
  background: var(--thumb-highlight-color);
  transform-origin: center center; /* la meta es un círculo escalable colocado en el centro */
  transform: /* ¡el orden aquí importa! */
    translateX(-50%) /* contrarresta el left: 50% */
    translateY(-50%) /* contrarresta el top: 50% */
    translateZ(-1px) /* LO PONE DETRÁS DE LA CASILLA */
scale(var(--thumb-scale)) /* el valor que activamos para animación */
;
will-change: transform;
@media (--motionOK) { /* transición permitida si el movimiento está habilitado */
& {
transition: transform .2s ease;
}
}
}
/* en on hover, define la propiedad personalizada de escala al estado de "in" */
input[type="checkbox"]:hover::before {
--thumb-scale: 1;
}
```
Crear un pseudoelemento circular es un trabajo sencillo, pero **colocarlo detrás del elemento al que está unido** fue más difícil. Aquí está el antes y el después de arreglarlo:
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/Spdpw5P1MD8ceazneRXo.mp4", autoplay="true", loop="true", muted="true" %}
Definitivamente es una micro interacción, pero es importante para mí mantener la consistencia visual. La técnica de escala de animación es la misma que hemos estado usando en otros lugares: establecemos una propiedad personalizada en un nuevo valor y dejamos que CSS haga la transición en función de las preferencias de movimiento. La característica clave aquí es `translateZ(-1px)`. El elemento primario creó un espacio 3D y este pseudoelemento secundario lo aprovechó colocándose ligeramente hacia atrás en el eje z.
## Accesibilidad
El video de YouTube hace una gran demostración de las interacciones del ratón, el teclado y el lector de pantalla para este componente de configuración. Anotaré algunos de los detalles aquí.
### Opciones de elementos HTML
```html
<form>
<header>
<fieldset>
<picture>
<label>
<input>
```
Cada uno de estos contiene sugerencias y consejos para la herramienta de navegación del usuario. Algunos elementos brindan sugerencias de interacción, algunos conectan la interactividad y algunos ayudan a dar forma al árbol de accesibilidad por el que navega un lector de pantalla.
### Atributos HTML
Podemos ocultar elementos que no son necesarios para los lectores de pantalla, en este caso el icono al lado del control deslizante:
```html
<picture aria-hidden="true">
```
{% Video src="video/vS06HQ1YTsbMKSFTIPl2iogUQP73/fVjqHRZHQixAaxjeAvDP.mp4", autoplay="true", loop="true", muted="true", width="480px"%}
El video anterior muestra el flujo del lector de pantalla en Mac OS. Observa cómo el foco de entrada se mueve directamente de un control deslizante al siguiente. Esto se debe a que hemos ocultado el icono que puede haber sido una parada en el camino hacia el siguiente control deslizante. Sin este atributo, un usuario tendría que detenerse, escuchar y pasar de la imagen que tal vez no pueda ver.
{% Aside 'gotchas' %} Asegúrate de probar los lectores de pantalla en todos los navegadores. La demostración original incluía el `<label>` en la lista de elementos con `aria-hidden="true"`, pero se eliminó después de que una [conversación de Twitter](https://twitter.com/rob_dodson/status/1371859386210029568) revelara diferencias entre navegadores. {% endAside %}
El SVG es un montón de matemáticas; agreguemos un elemento `<title>` para que, al pasar el ratón por encima, se muestre un comentario legible por humanos sobre lo que esas matemáticas están dibujando:
```html
<svg viewBox="0 0 24 24">
<title>A note icon</title>
<path d="M12 3v10.55c-.59-.34-1.27-.55-2-.55-2.21 0-4 1.79-4 4s1.79 4 4 4 4-1.79 4-4V7h4V3h-6z"/>
</svg>
```
Aparte de eso, hemos utilizado suficiente HTML que fue claramente marcado que el formulario funciona realmente bien en el ratón, el teclado, los mandos de videojuegos y los lectores de pantalla.
## JavaScript
Ya he [explicado](#track-styles) cómo se administraba el color de relleno de la pista desde JavaScript, así que veamos ahora el JavaScript relacionado con el `<form>`:
```js
const form = document.querySelector('form');
form.addEventListener('input', event => {
const formData = Object.fromEntries(new FormData(form));
console.table(formData);
})
```
Cada vez que se interactúa con el formulario y ocurre un cambio, la consola registra el formulario como un objeto en una tabla para una fácil revisión antes de enviarlo a un servidor.
{% Img src="image/vS06HQ1YTsbMKSFTIPl2iogUQP73/hFAyIOpOSdiczdf4AtIj.png", alt="Una captura de pantalla de los resultados de console.table(), donde los datos del formulario se muestran en una tabla", width="800", height="285" %}
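Como esbozo adicional, así podría enviarse ese mismo objeto a un servidor al confirmar el formulario (el endpoint `/ajustes` es hipotético y no forma parte de la demostración):

```js
form.addEventListener('submit', async event => {
  event.preventDefault(); // evita la navegación de página completa
  const formData = Object.fromEntries(new FormData(form));
  // '/ajustes' es solo un ejemplo de endpoint
  await fetch('/ajustes', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(formData),
  });
});
```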
## Conclusión
Ahora que sabes cómo lo hice, ¡¿cómo lo harías tú?! ¡Esta es una arquitectura de componentes bien divertida! ¿Quién va a hacer la primera versión con slots usando su framework favorito? 🙂
Diversifiquemos nuestros enfoques y aprendamos todas las formas de construir en la web. Crea una demostración, [tuitéame](https://twitter.com/argyleink) tu versión y la agregaré a la sección de [Remixes de la comunidad](#community-remixes) a continuación.
## Remixes de la comunidad
- ¡[@tomayac](https://twitter.com/tomayac) con su estilo para el área de interacción (hover) de las etiquetas de las casillas de verificación! Esta versión no deja espacios muertos entre los elementos: [demostración](https://tomayac.github.io/gui-challenges/settings/dist/) y [fuente](https://github.com/tomayac/gui-challenges).
| 48.782293 | 617 | 0.734135 | spa_Latn | 0.969004 |
fdc3b06505306e037d1008900c2e64ec6198379a | 3,453 | md | Markdown | _posts/2021-12-30-Book Review_PO.md | hyo5m/hyo5m.github.io | 547a86487dad3f68a13a7378ef9677ac756141c5 | [
"MIT"
] | null | null | null | _posts/2021-12-30-Book Review_PO.md | hyo5m/hyo5m.github.io | 547a86487dad3f68a13a7378ef9677ac756141c5 | [
"MIT"
] | null | null | null | _posts/2021-12-30-Book Review_PO.md | hyo5m/hyo5m.github.io | 547a86487dad3f68a13a7378ef9677ac756141c5 | [
"MIT"
] | null | null | null | ---
layout: post
title: 책리뷰_프로덕트 오너(Product Owner)
description:
categories: Book_review
image:
feature: Book_product_owner.jpeg
topPosition: 0px
bgContrast: dark
bgGradientOpacity: darker
syntaxHighlighter: no
---
### 인상 깊었던 부분들에 대한 기록
제품을 구매할 때 작용되는 간단한 원리는 고객의 '오, 해결해야 할 일(Job)이 있어'라는 생각입니다. (이걸 이해하면) 회사가 고객이 정말 구매하길 원하는 제품을 만들 때 효과적입니다. -노벨, 카멘. HBS Wroking Knowledge.
코빗의 프로덕트를 총괄하는 디렉터로 입사한 후, 당시 대표와 대화를 나누던 중 비전을 공유해달라고 부탁했다. 경영인의 비전과 프로덕트 개발 방향성을 어떻게 일치시켜야 할지 고민해보고 싶었기 때문이다.
PO에게 권한이 많다면, 유대 관계는 매우 다른 양상을 띨 것이다. 오히려 건강한 관계를 유지하기에는 PO에게 권한이 없는 편이 낫다. 다른 이들에게 일방적으로 지시하기보다는, 더 효율적으로 협업하기 위한 방법을 찾을 수 있기 때문이다.
"스티븐, 문서를 적기 전에 한 가지만 해봤으면 해요. 암호화폐 거래자가 아니라도 좋으니, 주식을 거래하는 헤비 트레이더 한 명이라도 직접 찾아가서 어떻게 하는지 지켜보도록 해요. 헤비 트레이더나 기업들이 쓰는 기능은 스티븐 같은 일반 트레이더가 쓰는 것과는 달라요."
> 프로덕트 매니저는 고객의 목소리를 회사 내부에서 대변해야 한다. 고객을 최우선시하자. 발명하자. 그리고 인내심을 가지자. 그중 가장 중요한 것은, 고객에 집착하듯이 집중하는 것이다. 결정이 내려지는 과정이 고객에게 가능한 가깝게 밀착되어 진행돼야 한다.
> 쿠팡의 물류 배송 프로세스에서 주요 고객군인 배송원을 따라다니며 함께 경험한 것에 대한 기록도 재미있었다.
그래서 나는 전국의 다양한 현장을 직접 보기 위해, 배송하기 가장 까다로운 곳 중 하나라는 경남 지역에 찾아갔다. 거기서 며칠 머물며 갖가지 현장에서 쿠팡맨과 실제 배송을 함께했다. 직접 물건을 들어 나르기도 했고, 쿠팡맨이 한 곳을 배송하기 위해 얼마나 많은 계단을 올라야 하는지, 그리고 몇 분이나 걸렸는지 등을 계속 측정하며 기록했다. 심지어 계단의 경사까지도 측정했다. 그렇게 하루 종일 배송을 하고 나니, 우리가 롯데월드타워를 계단으로 오른 것만큼이나 걸었다는 사실이 쿠팡맨의 스마트 워치에 떴다.
> 특히 인상 깊었던 부분은, 쿠팡맨이 이 일련의 과정을 공유하는 것을 귀찮아하거나 빨리 마무리 하고 싶어하지 않고 지친 내색 하나 없이 오히려 자신의 목소리를 누군가가 경청한다는 사실에 들뜬 것 같았다는 것이다. 나 또한 비슷한 경험을 했는데, 한 서비스를 사용하며 불편한 점을 고객센터에 문의 했었고, 그때 그 상담원 분께서 이 내용을 회의에서 공유해서 이야기 나누고 개선하고 싶은데 자세하게 말해줄 수 있느냐고 얘기했을 때 나도 위의 쿠팡맨처럼 신나서 이러이러한 이유로 이러이러한 기능이 있었으면 좋겠다고 이야기 했었다. 우리의 제품을 개선할 수 있는 여지를 주는 사용자에게 그 목소리의 값어치가 얼마나 중요한 것인지 늘 감사함을 표현할 수 있도록 해야겠다.
#### 테스트 케이스의 중요성
"스티븐 미안해요. 오늘 알고리즘의 일부분을 고쳤는데, 특정 상황을 고려하지 않아 그 부분의 테스트를 못 했어요. 내일 고쳐서 다시 적용할게요."
"아니에요, 원인을 찾았으니까 다음부터는 반복되지 않도록 테스트 케이스를 하나 더 추가하죠. 오전 회의 때 더 논의하도록 할까요? 늦은 저녁에 도움 줘서 고마워요."
> 저번 주에 우리 서비스도 버전 업데이트가 있었다. 일정 사용량을 달성한 사용자에게 유저피드백을 수용할 수 있는 기능을 추가했는데, 배포가 늦은 저녁에 되어 완벽한 QA가 되지 않은 채 배포되었다. ~~그 당시엔 유저피드백 기능만 추가하는 것이니 모든 기능에 대한 QA는 간과하였던 것 같은 잘못된 생각을 했던 것 같다.~~ 당연하게도! 문제가 발생하였다. 우리 제품에서 서브로 중요한 기능인 타서비스로의 연동기능이 작동하지 않았던 것이다. 다행이도 업데이트 후 작동이 안된다는 것을 안 사용자님께서 ~~다시한번 너무 감사드리는... 그냥 끄고 마는 사용자분이었을 수도 있는데... 바로 업데이트 오류인걸 알고 고객센터에 문의를 주시다니...~~ 랜딩페이지에 있는 실시간 채팅으로 문의를 넣어주셨다.. 업데이트를 배포한지 2시간만의 일이었다. 그동안 다른 사용자분들도 불편을 겪으셨을텐데, 이를 알았을 때 심장이 철렁한 기분이란..
> 위의 구절을 본 순간 딱 나의 순간이 오버랩되며, 나도 테스트 케이스를 늘 업데이트하고 모든 배포시 꼭 모든 주요 TC를 확인해야겠다는 다짐을 했다.
#### 브랜드 맨
브랜드 맨이 어떻게 보면 PO 개념의 시초라고 할 수 있는데, 단순히 광고를 집행하고 출고량을 확인하는 것을 떠나, 직접 각 지역을 돌아보며 고객과 동료들로부터 배워야 하는 의무도 있었다. 제품 개발, 출고, 유통, 마케팅, 데이터 분석, 그리고 다시 고객과 동료들로부터 문제점을 듣는 과정을 모두 반복하는 책임자를 찾으려 한 것이다.
> 고객을 직접 이해하려는 노력은 PO에게 필수다. 한 제품에도 다양한 고객군이 존재하고 그 고객군은 모두 다 다른 패턴으로 제품을 이용한다. 이 책에서도 소개되는데, 밀크셰이크 이야기가 그 사례로 유명하다. 패스트푸드점에서 셰이크의 판매량을 높이기위해 다양한 분석을 통해 노력했지만, 그 노력에도 판매량이 증대되지 않았고, 직접 매장에 나가서 어떤 사람들이 밀크셰이크를 구매하는지 관찰한 결과 놀라운 광경을 목격한 사례이다. 밀크셰이크는 크게 2 부류의 고객군에게 가장 많이 고용되는데, ~~고용이라고 표현했던 점이 재밌어서 인용~~ 오전에 긴 거리를 출근하는 동안 심심하지 않기 위해 고용하는 고객, 오후에는 하교한 자녀에게 특별한 간식을 주려고 고용했다. 이 둘은 목적이 다르기 때문에 원하는 제품도 달랐다. 오후의 부모들은 자녀들이 30분 이상 빨대를 이용해 밀크셰이크를 먹는 것을 기다리기 힘들어했다. 따라서 더 부드러운 밀크셰이크를 고용하고 싶어했다.
#### 아마존과 쿠팡의 식스 페이저 6-Pager 문서
나는 어떤 프로덕트를 만들기 전이나, 혹은 매 분기별로 문서를 하나 작성한다. 아마존 또는 쿠팡에서 식스 페이저 6-Pager라고 부르는 이 문서 형태는 여섯 페이지 이내에 해당 프로덕트에 대한 핵심 내용을 담아내는 것이다.
<ul>
<li>프로덕트의 목적이 무엇인지</li>
<li>과거에 어떤 관련된 시도를 했는지</li>
<li>어떤 실패 사례가 있었는지</li>
<li>앞으로 어떤 방향으로 개발할 건지</li>
<li>어떤 수치를 활용해서 성공 여부를 확인할 것인지</li>
</ul>
> 이 외에도 해당 프로덕트가 고용되는 이유를 목록 형태로 작성하고 기록해야 한다. | 57.55 | 482 | 0.709238 | kor_Hang | 1.00001 |
fdc45522c116e849fb351348e833fa263125eee5 | 463 | md | Markdown | docs/src/main/paradox/scala/http/routing-dsl/directives/host-directives/extractHost.md | Abrasha/akka-http | fecdbbc886c2aac95aacfd46896562f5a717613b | [
"Apache-2.0"
] | null | null | null | docs/src/main/paradox/scala/http/routing-dsl/directives/host-directives/extractHost.md | Abrasha/akka-http | fecdbbc886c2aac95aacfd46896562f5a717613b | [
"Apache-2.0"
] | null | null | null | docs/src/main/paradox/scala/http/routing-dsl/directives/host-directives/extractHost.md | Abrasha/akka-http | fecdbbc886c2aac95aacfd46896562f5a717613b | [
"Apache-2.0"
] | null | null | null | # extractHost
## Signature
@@signature [HostDirectives.scala]($akka-http$/akka-http/src/main/scala/akka/http/scaladsl/server/directives/HostDirectives.scala) { #extractHost }
## Description
Extract the hostname part of the `Host` request header and expose it as a `String` extraction to its inner route.
## Example
@@snip [HostDirectivesExamplesSpec.scala]($test$/scala/docs/http/scaladsl/server/directives/HostDirectivesExamplesSpec.scala) { #extractHost } | 35.615385 | 147 | 0.784017 | eng_Latn | 0.666207 |
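For orientation, a minimal sketch of what such an example typically looks like (the canonical code lives in the snippet file referenced above; the host and port values here are illustrative):

```scala
val route =
  extractHost { host =>
    complete(s"Hostname: $host")
  }

// tests:
Get() ~> Host("company.com", 9090) ~> route ~> check {
  responseAs[String] shouldEqual "Hostname: company.com"
}
```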
fdc499c0c8bf46b503c7f655ccbe2a744b297caf | 3,513 | md | Markdown | microsoft.ui.xaml.media.animation/reorderthemetransition.md | stevemonaco/winui-api | 3e5ad1a5275746690c39fd2502c60928b756f3b5 | [
"CC-BY-4.0",
"MIT"
] | 63 | 2018-11-02T13:52:13.000Z | 2022-03-31T16:31:24.000Z | microsoft.ui.xaml.media.animation/reorderthemetransition.md | stevemonaco/winui-api | 3e5ad1a5275746690c39fd2502c60928b756f3b5 | [
"CC-BY-4.0",
"MIT"
] | 99 | 2018-11-16T15:15:12.000Z | 2022-03-31T15:53:15.000Z | microsoft.ui.xaml.media.animation/reorderthemetransition.md | stevemonaco/winui-api | 3e5ad1a5275746690c39fd2502c60928b756f3b5 | [
"CC-BY-4.0",
"MIT"
] | 35 | 2018-10-16T05:35:33.000Z | 2022-03-30T23:27:08.000Z | ---
-api-id: T:Microsoft.UI.Xaml.Media.Animation.ReorderThemeTransition
-api-type: winrt class
---
<!-- Class syntax.
public class ReorderThemeTransition : Windows.UI.Xaml.Media.Animation.Transition, Windows.UI.Xaml.Media.Animation.IReorderThemeTransition
-->
# Microsoft.UI.Xaml.Media.Animation.ReorderThemeTransition
## -description
Provides the animated transition behavior for when the items in list-view controls change order, typically as the result of a drag-and-drop operation. Different controls and themes potentially have varying characteristics for the animations involved.
## -xaml-syntax
```xaml
<ReorderThemeTransition .../>
```
## -remarks
Note that setting the [Duration](timeline_duration.md) property has no effect on this object since the duration is preconfigured.
## -examples
The following example applies a ReorderThemeAnimation to a set of rectangles. As the new rectangles are added to the set, the other rectangles animate around the new one.
```xaml
<StackPanel>
<Button x:Name="AddItemButton" Content="AddItem" Click="AddItemButton_Click"/>
<ItemsControl x:Name="ItemsList">
<ItemsControl.ItemsPanel>
<ItemsPanelTemplate>
<WrapGrid>
<WrapGrid.ChildrenTransitions>
<!-- Apply a ReorderThemeTransition that will run when child elements are reordered. -->
<TransitionCollection>
<ReorderThemeTransition/>
</TransitionCollection>
</WrapGrid.ChildrenTransitions>
</WrapGrid>
</ItemsPanelTemplate>
</ItemsControl.ItemsPanel>
<!-- Initial items. -->
<Rectangle Width="100" Height="100" Fill="Red"/>
<Rectangle Width="100" Height="100" Fill="Green"/>
<Rectangle Width="100" Height="100" Fill="Blue"/>
</ItemsControl>
</StackPanel>
```
```csharp
private void AddItemButton_Click(object sender, RoutedEventArgs e)
{
Rectangle newItem = new Rectangle();
Random rand = new Random();
newItem.Height = 100;
newItem.Width = 100;
newItem.Fill = new SolidColorBrush(Color.FromArgb(255,
(byte)rand.Next(0, 255), (byte)rand.Next(0, 255), (byte)rand.Next(0, 255)));
// Insert a new Rectangle of a random color into the ItemsControl at index 2.
ItemsList.Items.Insert(2, newItem);
}
```
```cppwinrt
void DocsCppWinRT::MainPage::AddItemButton_Click(Windows::Foundation::IInspectable const& sender, Windows::UI::Xaml::RoutedEventArgs const& e)
{
Windows::UI::Xaml::Shapes::Rectangle newItem;
newItem.Height(100);
newItem.Width(100);
Windows::UI::Color color;
color.R = std::rand() % 256;
color.G = std::rand() % 256;
color.B = std::rand() % 256;
newItem.Fill(Windows::UI::Xaml::Media::SolidColorBrush(color));
// Insert a new Rectangle of a random color into the ItemsControl at index 2.
ItemsList().Items().InsertAt(2, newItem);
}
```
```cppcx
void DocsCPP::MainPage::AddItemButton_Click(Object^ sender,RoutedEventArgs^ e)
{
Rectangle^ newItem = ref new Rectangle();
newItem->Height = 100;
newItem->Width = 100;
Color color;
color.R = rand() % 256;
color.G = rand() % 256;
color.B = rand() % 256;
newItem->Fill = ref new SolidColorBrush(color);
// Insert a new Rectangle of a random color into the ItemsControl at index 2.
ItemsList->Items->InsertAt(2, newItem);
}
```
## -see-also
[Transition](transition.md)
| 32.527778 | 237 | 0.666667 | yue_Hant | 0.366918 |
fdc4b805020a249f991cd01838a62ce23156ca16 | 19 | md | Markdown | Chapter2/copy_2019-04-02.md | smallren101/python_libary | 802a1760f9a26acd8abfe7573a4fe44844629b4b | [
"MIT"
] | null | null | null | Chapter2/copy_2019-04-02.md | smallren101/python_libary | 802a1760f9a26acd8abfe7573a4fe44844629b4b | [
"MIT"
] | null | null | null | Chapter2/copy_2019-04-02.md | smallren101/python_libary | 802a1760f9a26acd8abfe7573a4fe44844629b4b | [
"MIT"
] | null | null | null | # 第9节:copy - 复制对象
| 6.333333 | 17 | 0.578947 | pol_Latn | 0.296735 |
fdc4db5382750d4db9384d8e0cda5ad33de691bc | 5,712 | md | Markdown | CONTRIBUTING.md | Zajquor/ukis-pysat | 88ffe16336316023b46e4683274c729180e134c3 | [
"Apache-2.0"
] | 21 | 2020-04-29T11:36:59.000Z | 2022-01-01T12:34:37.000Z | CONTRIBUTING.md | shiwakotisurendra/ukis-pysat | e9290edf86e0936b05d5fcf7938ebc784d4c7f63 | [
"Apache-2.0"
] | 143 | 2020-04-29T07:40:22.000Z | 2022-03-09T15:12:27.000Z | CONTRIBUTING.md | shiwakotisurendra/ukis-pysat | e9290edf86e0936b05d5fcf7938ebc784d4c7f63 | [
"Apache-2.0"
] | 5 | 2020-10-02T22:05:03.000Z | 2022-03-17T09:57:21.000Z | # Contributing to UKIS-pysat
Welcome to `UKIS-pysat`. We invite anyone to participate by contributing code, reporting bugs, fixing bugs, writing documentation or discussing the future of this project. As a contributor, here are the guidelines we would like you to follow:
- [Code of Conduct](#coc)
- [Issues and Bugs](#issue)
- [Feature Requests](#feature)
- [Code Conventions](#rules)
- [Submission Guidelines](#submit)
- [Signing the CLA](#cla)
## <a name="coc"></a> Code of Conduct
Help us keep `UKIS-pysat` open and inclusive. Please read and follow our [Code of Conduct](CODE_OF_CONDUCT.md).
## <a name="issue"></a> Found a Bug?
If you find a bug in the source code, you can help us by
[submitting an issue](#submit-issue) to our [GitHub Repository](https://github.com/dlr-eoc/). Even better, you can
[submit a Pull Request](#submit-pr) with a fix.
## <a name="feature"></a> Missing a Feature?
You can *request* a new feature by [submitting an issue](#submit-issue) to our GitHub
Repository. If you would like to *implement* a new feature, please submit an issue with
a proposal for your work first, to be sure that we can use it.
## <a name="rules"></a> Code Conventions
To ensure consistency throughout the source code, keep these rules in mind as you are working:
* All features or bug fixes **must be tested** by one or more specs (unit-tests).
* Please follow [PEP 8](https://www.python.org/dev/peps/pep-0008/).
* We use a line-length of 120 and [black](https://github.com/psf/black) to format our code (see the example below).
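For example, you can check the formatting locally before opening a PR (the commands below are illustrative):

```shell
pip install black
black --check --line-length 120 .
```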
## <a name="submit"></a> Submission Guidelines
### <a name="submit-issue"></a> Submitting an Issue
Before you submit an issue, please search the issue tracker, maybe an issue for your problem already exists and the discussion might inform you of workarounds readily available.
We want to fix all the issues as soon as possible, but before fixing a bug we need to reproduce and confirm it. In order to reproduce bugs, we will systematically ask you to provide a minimal reproduction. Having a minimal reproducible scenario gives us a wealth of important information without going back & forth to you with additional questions.
A minimal reproduction allows us to quickly confirm a bug (or point out a coding problem) as well as confirm that we are fixing the right problem.
### <a name="submit-pr"></a> Submitting a Pull Request (PR)
Before you submit your Pull Request (PR) consider the following guidelines:
1. Search GitHub for an open or closed PR that relates to your submission. You don't want to duplicate effort.
1. Be sure that an issue describes the problem you're fixing, or documents the design for the feature you'd like to add.
Discussing the design up front helps to ensure that we're ready to accept your work.
1. Please sign our [Contributor License Agreement (CLA)](#cla) before sending PRs.
We cannot accept code without this. Make sure you sign with the primary email address of the Git identity that has been granted access to the UKIS repository.
1. Fork the UKIS repo.
1. Make your changes in a new git branch:
```shell
git checkout -b my-fix-branch master
```
1. Create your patch, **including appropriate test cases**.
1. Document your changes in the [changelog](CHANGELOG.rst).
```shell
git commit -a -m "some useful message"
```
Note: the optional commit `-a` command line option will automatically "add" and "rm" edited files.
1. Push your branch to GitHub:
```shell
git push origin my-fix-branch
```
1. In GitHub, send a pull request to `master`.
* If we suggest changes then:
* Make the required updates.
* Re-run the tests to ensure they are still passing.
* Rebase your branch and force push to your GitHub repository (this will update your Pull Request):
```shell
git rebase master -i
git push -f
```
That's it! Thank you for your contribution!
## <a name="changelogGuidelines"></a> Changelog guidelines
- Document your changes at the very top of the file.
- Categorize your changes as one of
- Features (*Added*)
- Bug Fixes (*Fix*)
- Other changes (*Changed*)
- For each change, add one item containing
- The module/project changed (not required for 'other changes')
- A short description of the change
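For instance, a hypothetical entry might look like this (module name and wording are made up):

```
Added
-----
- ukis_pysat.data: support for an additional metadata field
```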
## <a name="cla"></a> Signing the CLA
Please sign our Contributor License Agreement (CLA) before sending pull requests. For any code
changes to be accepted, the CLA must be signed. It's a quick process, we promise! We'll need you to
[print, sign, and then scan and email, fax, or mail the form](DLR_Individual_Contributor_License_Agreement_UKIS.pdf).
<hr>
If you have more than one Git identity, you must make sure that you sign the CLA using the primary email address associated with the ID that has been granted access to the UKIS repository. Git identities can be associated with more than one email address, and only one is primary. Here are some links to help you sort out multiple Git identities and email addresses:
* https://help.github.com/articles/setting-your-commit-email-address-in-git/
* https://stackoverflow.com/questions/37245303/what-does-usera-committed-with-userb-13-days-ago-on-github-mean
* https://help.github.com/articles/about-commit-email-addresses/
* https://help.github.com/articles/blocking-command-line-pushes-that-expose-your-personal-email-address/
Note that if you have more than one Git identity, it is important to verify that you are logged in with the same ID with which you signed the CLA, before you commit changes. If not, your PR will fail the CLA check.
<hr>
| 49.669565 | 369 | 0.72514 | eng_Latn | 0.995991 |