Elasticsearch ODM
=========
[](http://badge.fury.io/js/elasticsearch-odm)
***Like Mongoose but for Elasticsearch.*** Define models, perform CRUD operations, and build advanced search queries. Most commands and functionality that exist in Mongoose exist in this library. All asynchronous functions use Bluebird Promises instead of callbacks.
This is currently the only ODM/ORM library that exists for Elasticsearch on Node.js. [Waterline](https://github.com/balderdashy/waterline) has a [plugin](https://github.com/UsabilityDynamics/node-waterline-elasticsearch) for Elasticsearch, but it is incomplete and doesn't fully harness Elasticsearch's searching power.
[Loopback](https://github.com/strongloop/loopback) has a storage [plugin](https://github.com/drakerian/loopback-connector-elastic-search), but it also doesn't focus on important parts of Elasticsearch, such as mappings and efficient queries. This library automatically handles merging and updating Elasticsearch mappings based on your schema definition.
### Installation
If you currently have [npm elasticsearch](https://www.npmjs.com/package/elasticsearch) installed, you can remove it and access the client from [client](#client---elasticsearch) in this library if you still need it.
```sh
$ npm install elasticsearch-odm
```
### Features
- Easy to use API that mimics Mongoose, but cuts out the extras.
- Models, Schemas and Elasticsearch specific type mapping.
- Add Elasticsearch specific type options to your [Schema](#schemas), like boost, analyzer or score.
- Utilizes bulk and scroll features from Elasticsearch when needed.
- Easy [search queries](#query-options) without generating your own DSL.
- Seamlessly handles updating your Elasticsearch mappings based off your models [Schema](#schemas).
### Quick Start
You'll find the API is intuitive if you've used Mongoose or Waterline.
Example (no schema):
```js
var elasticsearch = require('elasticsearch-odm');
var Car = elasticsearch.model('Car');
var car = new Car({
type: 'Ford', color: 'Black'
});
elasticsearch.connect('my-index').then(function(){
// be sure to call connect before bootstrapping your app.
car.save().then(function(document){
console.log(document);
});
});
```
Example (using a [schema](#schemas)):
```js
var elasticsearch = require('elasticsearch-odm');
var carSchema = new elasticsearch.Schema({
type: String,
color: {type: String, required: true}
});
var Car = elasticsearch.model('Car', carSchema);
```
## API Reference
- [Core](#core)
- [`.connect(String/Object options)`](#connectstringobject-options---promise)
- [`new Schema(Object options)`](#new-schemaobject-options---schema)
  - [`.model(String modelName, Schema schema)`](#modelstring-modelname-optionalschema-schema---model)
- [`.client`](#client---elasticsearch)
- [`.stats()`](#stats)
- [`.createIndex(String index, Object mappings)`](#createindexstring-index-object-mappings)
- [`.removeIndex(String index)`](#removeindexstring-index)
- [Document](#document)
- [`.save()`](#save-document)
- [`.remove()`](#remove)
- [`.update(Object data)`](#updateobject-data---document)
- [`.set(Object data)`](#setobject-data---document)
- [`.toObject()`](#toobject)
- [Model](#model)
- [`.count()`](#count---object)
- [`.create(Object data)`](#createobject-data---document)
- [`.update(String id, Object data)`](#updatestring-id-object-data---document)
- [`.remove(String id)`](#removestring-id)
  - [`.removeByIds(Array ids)`](#removebyidsarray-ids)
- [`.set(String id)`](#setstring-id-object-data---document)
- [`.find(Object/String match, Object queryOptions)`](#findobjectstring-match-object-queryoptions---document)
- [`.findById(String id, Object queryOptions)`](#findbyidstring-id-object-queryoptions---document)
- [`.findByIds(Array ids, Object queryOptions)`](#findbyidsarray-ids-object-queryoptions---document)
- [`.findOne(Object/String match, Object queryOptions)`](#findoneobjectstring-match-object-queryoptions---document)
- [`.findAndRemove(Object/String match, Object queryOptions)`](#findandremoveobjectstring-match-object-queryoptions---object)
- [`.findOneAndRemove(Object/String match, Object queryOptions)`](#findoneandremoveobjectstring-match-object-queryoptions---object)
- [`.makeInstance(Object data)`](#makeinstanceobject-data---document)
- [`.toMapping()`](#tomapping)
- [Query Options](#query-options)
- [`page & per_page`](#page--per_page)
- [`fields`](#fields)
- [`sort`](#sort)
- [`q`](#q)
- [`must`](#must)
  - [`not`](#not)
- [`missing`](#missing)
- [`exists`](#exists)
- [Schemas](#schemas)
- [`Hooks and Middleware`](#hooks-and-middleware)
- [`Static and Instance Methods`](#static-and-instance-methods)
- [`Sync Mapping`](#sync-mapping)
### Core
Core methods can be called directly on the Elasticsearch ODM instance. These include methods to configure, connect, and get information from your Elasticsearch database. Most methods act upon the [official Elasticsearch client](https://www.npmjs.com/package/elasticsearch).
##### `.connect(String/Object options)` -> `Promise`
 Returns a promise that is resolved when the connection is complete. Can be passed a single index name, or a full configuration object. The default host is localhost:9200 when no host is provided, or when just an index name is used.
This method should be called at the start of your application.
***If the index name does not exist, it is automatically created for you.***
*You can also add any of the [Elasticsearch specific options](https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/configuration.html), like SSL configs.*
Example:
```js
// when bootstrapping your application
var elasticsearch = require('elasticsearch-odm');
elasticsearch.connect({
host: 'localhost:9200',
index: 'my-index',
logging: false, // true by default when NODE_ENV=development
  syncMapping: false, // see 'sync mapping' in Schemas documentation
ssl: {
ca: fs.readFileSync('./cacert.pem'),
rejectUnauthorized: true
}
});
// OR
elasticsearch.connect('my-index'); // default host localhost:9200
```
##### `new Schema(Object options)` -> `Schema`
Returns a new schema definition to be used for models.
##### `.model(String modelName, Optional/Schema schema)` -> `Model`
 Creates and returns a new Model, like calling Mongoose.model(). Takes a type name; in MongoDB terms this is the collection name. This is a global function and adds the model to the Elasticsearch ODM instance.
##### `.client` -> `Elasticsearch`
The raw instance to the underlying [Elasticsearch](https://www.npmjs.com/package/elasticsearch) client. Not really needed, but it's there if you need it, for example to run queries that aren't provided by this library.
##### `.stats()`
Returns a promise that is resolved with [index stats](https://www.elastic.co/guide/en/elasticsearch/reference/1.6/indices-stats.html) for the current Elasticsearch connections.
##### `.removeIndex(String index)`
 Takes an index name, and completely destroys the index. Resolves the promise when it's complete.
##### `.createIndex(String index, Object mappings)`
Takes an index name, and a json string or object representing your [mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html).
Resolves the promise when it's complete.
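 Taken together, these index helpers can be sketched as follows. This is only a sketch: it assumes a reachable cluster, the index name `scratch-index` is a placeholder, and the mapping body follows the standard Elasticsearch `properties` shape.

```js
var elasticsearch = require('elasticsearch-odm');

elasticsearch.connect('my-index')
  .then(function(){
    // Create a scratch index with an explicit mapping.
    return elasticsearch.createIndex('scratch-index', {
      properties: {name: {type: 'string', index: 'not_analyzed'}}
    });
  })
  .then(function(){
    return elasticsearch.stats(); // resolves with index stats
  })
  .then(function(stats){
    console.log(stats);
    return elasticsearch.removeIndex('scratch-index'); // destroys the index
  });
```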
### Document
Like Mongoose, instances of models are considered documents, and are returned from calls like find() & create(). Documents include the following functions to make working with them easier.
##### `.save()` -> `Document`
 Saves or updates the document. If it doesn't exist it is created. Like Mongoose, Elasticsearch's internal '_id' is copied to 'id' for you. If you'd like to force a custom id, set the id property to something before calling save(). Every document gets a createdOn and updatedOn property set with an ISO-8601 formatted time.
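 For example, forcing a custom id before the first save (a sketch; 'my-custom-id' is just a placeholder):

```js
var car = new Car({type: 'Ford', color: 'Black'});
car.id = 'my-custom-id'; // optional: set before save() to force a custom id
car.save().then(function(document){
  // document.id, document.createdOn and document.updatedOn are now set.
  console.log(document.id, document.createdOn, document.updatedOn);
});
```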
##### `.remove()`
 Removes the document and destroys the current document instance. No value is resolved, and missing documents are ignored.
##### `.update(Object data)` -> `Document`
Partially updates the document. Data passed will be merged with the document, and the updated version will be returned. This also sets the current model instance with the new document.
##### `.set(Object data)` -> `Document`
Completely overwrites the document with the data passed, and returns the new document. This also sets the current model instance with the new document.
*Will remove any fields in the document that aren't passed.*
##### `.toObject()`
Like Mongoose, strips all non-document properties from the instance and returns a raw object.
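 A sketch tying the document methods together (the id is a placeholder, and the document is assumed to have been saved previously):

```js
Car.findById('some-id').then(function(car){
  return car.update({color: 'Red'}); // partial merge into the document
}).then(function(car){
  var raw = car.toObject(); // plain object, stripped of instance methods
  console.log(raw);
  return car.remove(); // destroys the document
});
```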
### Model
Model definitions returned from .model() in core include several static functions to help query and manage documents. Most functions are similar to Mongoose, but due to the differences in Elasticsearch, querying includes some extra advanced features.
##### `.count()` -> `Object`
Object returned includes a 'count' property with the number of documents for this Model (also known as _type in Elasticsearch). See [Elasticsearch count](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-count.html).
##### `.create(Object data)` -> `Document`
A helper function. Similar to calling new Model(data).save(). Takes an object, and returns the new document.
##### `.update(String id, Object data)` -> `Document`
A helper function. Similar to calling new Model().update(data). Takes an id and a partial object to update the document with.
##### `.remove(String id)`
 Removes the document by its id. No value is resolved, and missing documents are ignored.
##### `.removeByIds(Array ids)`
 Helper function; see .remove(). Takes an array of ids.
##### `.set(String id, Object data)` -> `Document`
Completely overwrites the document matching the id with the data passed, and returns the new document.
*Will remove any fields in the document that aren't passed.*
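 The model-level helpers can be sketched as follows (a sketch only; a connection is assumed to be open and the ids passed to removeByIds are placeholders):

```js
Car.create({type: 'Ford', color: 'Blue'}).then(function(car){
  // Partially update by id, without loading a document instance first.
  return Car.update(car.id, {color: 'Green'});
}).then(function(car){
  return Car.count(); // resolves with an object like {count: <n>}
}).then(function(result){
  console.log(result.count);
  // Remove several documents at once by id.
  return Car.removeByIds(['some-id-1', 'some-id-2']);
});
```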
##### `.find(Object/String match, Object queryOptions)` -> `Document`
 There are four ways to call .find() and its siblings. You can mix and match styles.
- Passing only a match object like `.find({name:'Joe'})`
- Passing only a string to match against all document fields `.find('some string')`
- Passing [Query Options](#query-options) (match can be set to null/empty) `.find({}, {must: {active: true}, sort: 'createdOn'})`
- Use chaining options (alias for QueryOptions) `.find({}).must({active: true}).sort('createdOn').then(..)`
 Unlike Mongoose, finding exact matches requires the fields in your mapping to be set to 'not_analyzed'. By default `{index: 'not_analyzed'}` is added to all string fields in your Schema unless you override it.
*Depending on the [analyzer](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-analyzers.html) in your [mapping](https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping-intro.html), find queries like must, not, and matches may not find any results.*
match => Optional. An alias for the 'must' Query Option. Like Mongoose this matches name/value in documents. Also, instead of an object, just a string can be passed which will match against all document fields using the power of an Elasticsearch [QueryStringQuery](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html).
 queryOptions => Optional (can also use chaining instead). An object with Query Options. Here you can specify paging, filtering, sorting and other advanced options. [See here for more details](#query-options). You can set the first argument to null and only use filters from the query options if you want.
returns => Found documents, or null if nothing was found.
Example:
```js
var Car = elasticsearch.model('Car');
// Simple query.
Car.find({color: 'blue'}).then(function(results){
console.log(results);
});
// Nested query (for nested documents/properties).
Car.find({'location.city': 'New York'})
// Find all by passing null or empty object to first argument
Car.find(null, {sort: 'createdOn'})
// Search all fields using a QueryStringQuery.
Car.find('some text')
// Chained query without using Query Options.
// Instead of Mongoose .exec(), we call .then()
Car.find()
.must({color: 'blue'})
.exists('owner')
.sort('createdOn')
.then(...)
```
##### `.findById(String id, Object queryOptions)` -> `Document`
 Finds a document by id. The optional queryOptions argument can include 'fields' to specify the fields of the document you'd like to include.
##### `.findByIds(Array ids, Object queryOptions)` -> `Document`
Same as .findById() but for multiple documents.
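 For example (the ids here are placeholders):

```js
// Only fetch the 'color' field of a single document.
Car.findById('some-id', {fields: ['color']}).then(function(car){
  console.log(car);
});

// Fetch several documents at once.
Car.findByIds(['some-id', 'another-id']).then(function(cars){
  console.log(cars);
});
```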
##### `.findOne(Object/String match, Object queryOptions)` -> `Document`
Same arguments as .find(). Returns the first matching document.
##### `.findAndRemove(Object/String match, Object queryOptions)` -> `Object`
Same arguments as .find(). Removes all matching documents and returns their raw objects.
##### `.findOneAndRemove(Object/String match, Object queryOptions)` -> `Object`
Same arguments as .findAndRemove(). Removes the first found document.
##### `.makeInstance(Object data)` -> `Document`
Helper function. Takes a raw object and creates a document instance out of it. The object would need at least an id property. The document returned can be used normally as if it were returned from other calls like .find().
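 For example (a sketch with a made-up raw object):

```js
// A raw object, perhaps from an external source; it needs at least an id.
var car = Car.makeInstance({id: 'some-id', type: 'Ford', color: 'Black'});
// The instance can now be used like any other document.
car.update({color: 'Silver'}).then(function(updated){
  console.log(updated);
});
```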
##### `.toMapping()`
 Returns a complete Elasticsearch mapping for this model based off its schema. If no schema was used, it returns nothing. Used internally, but it's there if you'd like it.
### Query Options
 The query options object includes several options that are normally included in Mongoose chained queries, like sort and paging (skip/limit), as well as some advanced features from Elasticsearch.
The Elasticsearch [Query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html) and [Filter](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-filters.html) DSL is generated using best practices.
##### page & per_page
Type: `Integer`
 For most use cases, paging is better suited than skip/limit, so this library includes paging instead. Page 0 and 1 are the same thing, so either can be used. page and per_page each fall back to a default when only the other is set: page defaults to the first page, and per_page defaults to 10.
*Including page or per_page will result in the response being wrapped in a meta data object like the following. You can call toJSON and toObject on this response and it'll call that method on all document instances under the hits property.*
```js
// A paged response that is returned when page or per_page is set.
{
total: 0, // total documents found for the query.
hits: [], // a collection of document instances.
page: 0, // current page requested.
pages: 0 // total number of pages.
}
```
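 For example, requesting the second page of results (a sketch):

```js
Car.find(null, {page: 2, per_page: 20}).then(function(results){
  console.log(results.total, results.pages); // query-wide totals
  console.log(results.hits.length); // up to 20 document instances
});
```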
##### fields
Type: `Array or String`
 A list of fields to include in the documents returned. For example, you could pass 'id' to only return the matching document ids. See [Elasticsearch Fields](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-fields.html).
```js
// Query Options.
{
fields: ['name', 'age']
}
// Chained Query.
.find()
.fields(['name', 'age'])
.then(...)
```
##### sort
Type: `Array or String`
A list of fields to sort on. If multiple fields are passed then they are executed in order. Adding a '-' sign to the start of the field name makes it sort descending. Default is ascending. See [Elasticsearch Sort](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-sort.html).
Example:
```js
// Query Options.
{
sort: ['name', 'createdOn']
}
// Chained Query.
.find()
.sort(['name', 'createdOn'])
.then(...)
```
##### q
Type: `String`
A string to search all document fields with using Elasticsearch [QueryStringQuery](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html). This can be expensive, so use it sparingly.
Example:
```js
// Query Options.
{
q: 'Red dog run'
}
// Chained Query.
.find('Red dog run')
.then(...)
```
##### must
Type: `Object`
 Key value pairs to match documents against. Essentially it's the same as the first argument passed to Mongoose .find(). This is also an alias for the first argument passed to .find() in this library.
This is a 'must' [Bool Filter](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-bool-filter.html).
 ***Elasticsearch's internal [Tokenizers](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-tokenizers.html) are used, and fields are analyzed.***
*You can query nested fields using dot notation.*
Example:
```js
// Query Options.
{
must: {
name: 'Jim',
'location.country': 'Canada'
}
}
// Chained Query.
.find()
.must({name: 'Jim', 'location.country': 'Canada'})
.then(...)
```
##### not
Type: `Object`
The same as [must](#must), but matches documents where the key value pairs DON'T match.
This is a 'must_not' [Bool Filter](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-bool-filter.html) query.
*You can query nested fields using dot notation.*
Example:
```js
// Query Options.
{
not: {
name: 'Jim',
'location.country': 'Canada'
}
}
// Chained Query.
.find()
.not({name: 'Jim', 'location.country': 'Canada'})
.then(...)
```
##### missing
Type: `Array or String`
 A single field name, or array of field names. Matches documents where these field names are missing. A field is considered missing when it is null, empty, or does not exist. See [MissingFilter](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-missing-filter.html).
Example:
```js
// Query Options.
{
missing: ['description', 'name']
}
// Chained Query.
.find()
.missing(['description', 'name'])
.then(...)
```
##### exists
Type: `Array or String`
 A single field name, or array of field names. Matches documents where these field names exist. The opposite of [missing](#missing).
Example:
```js
// Query Options.
{
exists: ['description', 'name']
}
// Chained Query.
.find()
.exists(['description', 'name'])
.then(...)
```
### Schemas
Models don't require schemas, but it's best to use them - especially if you'll be making search queries. Elasticsearch-odm will generate and update Elasticsearch with the proper mappings based off your schema definition.
 The schemas are similar to Mongoose, but several new field types have been added which Elasticsearch supports. These are: **float**, **double**, **long**, **short**, **byte**, **binary**, and **geo_point**. Generally for numbers, only the Number type is needed (which converts to an Elasticsearch integer). You can read more about Elasticsearch types [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-core-types.html).
***NOTE***
- Types can be defined in several ways. The regular mongoose types exist, or you can use the actual type names Elasticsearch uses.
- You can also add any of the field options you see for [Elasticsearch Core Types](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-core-types.html)
- String types will default to `"index": "not_analyzed"`. See [Custom Field Mappings](https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping-intro.html#custom-field-mappings). This is so the .find() call acts like it does in Mongoose by only finding exact matches; however, this prevents the ability to do full text search on this field. Simply set `{"index":"analyzed"}` if you'd like full text search instead.
Example:
```js
// Before saving a document with this schema, your Elasticsearch
// mappings will automatically be updated.
// Note the various ways you can define a schema field type.
var carSchema = new elasticsearch.Schema({
// native type without options
available: Boolean,
// Elasticsearch type without options
  safetyRating: 'float',
// native array type
parts: [String],
// Elasticsearch array type
oldPrices: {type: ['double']},
// with options
color: {type: String, required: true},
// a field named 'type' must be defined like the following.
type: {type: String},
// nested document
owner: {
name: String,
age: Number,
// force a required field
location: {type: 'geo_point', required: true}
},
// nested document array
inspections: [{
date: Date,
grade: Number
}],
// Enable full-text search of this field.
  // NOTE: it's better to use the 'q' parameter in queryOptions
// during searches instead of must/not or match when using 'analyzed'
  description: {type:String, index: 'analyzed'},
// Ignore_malformed is an Elasticsearch Core Type field option for numbers
price: {type: 'double', ignore_malformed: true}
});
```
#### Hooks and Middleware
Schemas include pre and post hooks that function similar to Mongoose. Currently, there are pre/post hooks for 'save' and 'remove'.
**Pre Hooks**
 Same conventions as Mongoose. The function takes a done() callback that must be called when your function is finished. `this` is scoped to the current document. Passing an Error to done() will cancel the current operation. For example, in a pre 'save' hook, passing an error to done() will cause the document not to be saved and will return your error to the save() caller's rejection handler.
```js
var schema = new elasticsearch.Schema(...);
schema.pre('save', function(done){
console.log(this); // this = the current document
done(); // OR done(new Error('bad document'));
});
```
**Post Hooks**
Same conventions as Mongoose. Does not have a done() callback. Executed after the hooked method. The first argument is the current document which may or may not be a document instance (eg. post remove only receives the raw object as the document no longer exists).
```js
var schema = new elasticsearch.Schema(...);
schema.post('remove', function(document){
console.log(document);
});
```
#### Static and Instance Methods
Add methods to your schema with the same convention as Mongoose.
```js
// Instance method.
var schema = new elasticsearch.Schema(...);
schema.methods.getFullName = function(){
return this.firstName + ' ' + this.lastName;
};
// Static method.
schema.statics.findByColor = function(color){
return this.find({color: color});
};
```
#### Sync Mapping
 By default, an attempt will be made on connection to convert your schema definitions into Elasticsearch mappings, and send a [PUT mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) request to sync them. This can cause major issues if your schema's mappings have [conflicting types](https://www.elastic.co/guide/en/elasticsearch/reference/2.0/breaking_20_mapping_changes.html).
If you'd like to disable sync mapping, or if your node has mappings already configured, you can do it like so.
```js
elasticsearch.connect({
host: 'localhost:9200',
index: 'my-index',
syncMapping: false
});
```
### CHANGELOG
[See here.](https://github.com/zackiles/elasticsearch-odm/blob/master/CHANGELOG.md)
### CONTRIBUTING
 This is a library Elasticsearch desperately needed for Node.js. Currently the official npm [elasticsearch](https://www.npmjs.com/package/elasticsearch) client has about 23,000 downloads per week; many of those users would benefit from this library instead. Pull requests are welcome. There are [Mocha](https://github.com/mochajs/mocha) and [benchmark](https://www.npmjs.com/package/benchmark) tests in the root directory.
### TODO
- Browser build.
- Add support for [querying nested document arrays](https://www.elastic.co/guide/en/elasticsearch/guide/current/nested-query.html) with dot notation syntax.
- Add [scrolling](https://www.elastic.co/guide/en/elasticsearch/reference/1.6/search-request-scroll.html)
- Add a wrapper to enable streaming of document results.
- Add [snapshots/backups](https://www.elastic.co/guide/en/elasticsearch/reference/1.6/modules-snapshots.html)
- Allow methods to call Elasticsearch facets.
- Performance tweak application, fix garbage collection issues, and do benchmark tests.
- Integrate npm 'friendly' for use with expanding/collapsing parent/child documents.
- Use [source filtering](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html#get-source-filtering) instead of fields.
---
title: Tracking
ms.date: 03/30/2017
ms.assetid: afdcd9bd-b462-4b2a-aac7-bebf9c80be81
ms.openlocfilehash: 329dcaab093a4cb177fcba64e4bacbe9c9af4710
ms.sourcegitcommit: 8c28ab17c26bf08abbd004cc37651985c68841b8
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 10/07/2018
ms.locfileid: "48845906"
---
# <a name="tracking"></a>Tracking
This section contains samples that demonstrate tracking in Windows Workflow Foundation (WF) workflows.
## <a name="in-this-section"></a>In this section
 [Custom Tracking](../../../../docs/framework/windows-workflow-foundation/samples/custom-tracking.md)
 Demonstrates how a custom tracking participant is created and tracking data is emitted to the console.
 [Tracking Events into Event Tracing in Windows](../../../../docs/framework/windows-workflow-foundation/samples/tracking-events-into-event-tracing-in-windows.md)
 Demonstrates how to enable tracking in a [!INCLUDE[wf1](../../../../includes/wf1-md.md)] workflow service and generate tracking events in Event Tracing for Windows (ETW).
 [SQL Tracking](../../../../docs/framework/windows-workflow-foundation/samples/sql-tracking.md)
 Demonstrates how to create a custom SQL tracking participant that writes tracking records to a SQL database.
---
id: notable_critics
title: Notable Critics
description: A record of my various personal critics.
---
## Kaceytron
Coming soon!
## Jack Allison
Jack Allison is an internet podcaster/ex-Jimmy Kimmel writer who is definitely [not mad](https://www.youtube.com/watch?v=DmV09SkVjvA) and who is currently banned from using Twitter by his wife [[original](https://twitter.com/JackAMtheShow/status/1359911896690790409) | [backup](https://i.imgur.com/X95350H.png)].
Despite his time-out from social media, Jack still finds time to obsess over what I do both on and off stream. After I took a step back from my canvassing efforts in Omaha, Jack took it upon himself to [email me to celebrate](https://imgur.com/a/OTDFQMS). Unsatisfied with the lack of attention he received for his wacky antics, he [created a fake reddit account](https://i.imgur.com/jy0MXW6.png) to further our interactions by impersonating [a photographer of WOWT](https://www.wowt.com/authors/brandon-tvrdy/), a local media station in Omaha. My web dev, Cake, pulled [some information](https://i.imgur.com/ujsdcBh.png) from our back-end to confirm this dastardly ploy.
Jack — I mean, Brandon, flexes his prior media training and [asks me what appear to be several thoughtful questions](https://i.imgur.com/xTi1KjB.png) regarding my canvassing experiment in Omaha. I [do my best to respond in good faith](https://i.imgur.com/7IlKX3J.png) ([these](https://i.imgur.com/gjyViWk.png) are the linked "off the record" logs), but little did I know...[it was all a ruse](https://i.imgur.com/9MSulwm.png)!
After thoroughly owning me, Jack [takes to Reddit on his account](https://imgur.com/a/4dQX5T0) to expose me for my crimes, though he unfortunately doesn't have the required karma to post on his new account. Since he can't seem to make progress on Reddit, he decides to create a fake Twitter ([backup from the now deleted account](https://i.imgur.com/czHjDdL.png)) to tweet about his fake Reddit account's fake email that he sent to me to a whole bunch of people on Twitter, himself included. Unfortunately for Jack, he [accidentally leaks](https://streamable.com/of232a) on his stream that he's logged in to the very Reddit account he was masquerading under.
Unlucky! And definitely not mad.
| 107.571429 | 671 | 0.770252 | eng_Latn | 0.992 |
## Bibliography
* [Python RDFlib](https://github.com/RDFLib/rdflib)
* [Learning SPARQL - Bob DuCharme](http://www.learningsparql.com/)
* [Semantic Web for the Working Ontologist - Dean Allemang & Jim Hendler](http://workingontologist.org/)
* W3C
* [RDF](https://www.w3.org/RDF/)
* [SPARQL](https://www.w3.org/TR/rdf-sparql-query/)
* [Pulp Fiction](https://www.imdb.com/title/tt0110912/)
---
title: "Installing Istio for Knative"
weight: 15
type: "docs"
---
# Installing Istio for Knative
This guide walks you through manually installing and customizing Istio for use
with Knative.
If your cloud platform offers a managed Istio installation, we recommend
installing Istio that way, unless you need to customize your
installation.
## Before you begin
You need:
- A Kubernetes cluster created.
- [`istioctl`](https://istio.io/docs/setup/install/istioctl/) installed.
## Supported Istio versions
The current known-to-be-stable version of Istio tested in conjunction with Knative is **v1.9.5**.
Versions in the 1.9 line are generally fine too.
## Installing Istio
When you install Istio, there are a few options depending on your goals. For a
basic Istio installation suitable for most Knative use cases, follow the
[Installing Istio without sidecar injection](#installing-istio-without-sidecar-injection)
instructions. If you're familiar with Istio and know what kind of installation
you want, read through the options and choose the installation that suits your
needs.
You can easily customize your Istio installation with `istioctl`. The following sections
cover a few useful Istio configurations and their benefits.
### Choosing an Istio installation
You can install Istio with or without a service mesh:
- [Installing Istio without sidecar injection](#installing-istio-without-sidecar-injection)(Recommended
default installation)
- [Installing Istio with sidecar injection](#installing-istio-with-sidecar-injection)
If you want to get up and running with Knative quickly, we recommend installing
Istio without automatic sidecar injection. This install is also recommended for
users who don't need the Istio service mesh, or who want to enable the service
mesh by [manually injecting the Istio sidecars][1].
#### Installing Istio without sidecar injection
To install Istio without sidecar injection:

1. Create an `istio-minimal-operator.yaml` file using the following template:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
values:
global:
proxy:
autoInject: disabled
useMCP: false
# The third-party-jwt is not enabled on all k8s.
# See: https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens
jwtPolicy: first-party-jwt
addonComponents:
pilot:
enabled: true
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
```
1. Apply the YAML file by running the command:
```bash
istioctl install -f istio-minimal-operator.yaml
```
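
Optionally, you can sanity-check the result before moving on. This is a sketch that assumes `istioctl` and `kubectl` are on your `PATH` and pointed at the target cluster (`verify-install` is available in recent `istioctl` versions):

```bash
# Verify the in-cluster state against the manifest
istioctl verify-install -f istio-minimal-operator.yaml

# Watch the control-plane and ingress pods come up
kubectl get pods -n istio-system --watch
```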
#### Installing Istio with sidecar injection
If you want to enable the Istio service mesh, you must enable [automatic sidecar
injection][2]. The Istio service mesh provides a few benefits:
- Allows you to turn on [mutual TLS][3], which secures service-to-service
traffic within the cluster.
- Allows you to use the [Istio authorization policy][4], controlling the access
to each Knative service based on Istio service roles.
For automatic sidecar injection, set `autoInject: enabled` in addition to the earlier
operator configuration.
```yaml
spec:
  values:
    global:
      proxy:
        autoInject: enabled
```
#### Using Istio mTLS feature
Since there is network communication between the knative-serving namespace
and the namespace where your services run, some additional preparation is
needed in an mTLS-enabled environment.
1. Enable sidecar container on `knative-serving` system namespace.
```bash
kubectl label namespace knative-serving istio-injection=enabled
```
1. Set `PeerAuthentication` to `PERMISSIVE` on knative-serving system namespace
by creating a YAML file using the following template:
    ```yaml
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "knative-serving"
spec:
mtls:
mode: PERMISSIVE
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
After you install the cluster local gateway, your service and deployment for the local gateway are named `knative-local-gateway`.
### Updating the `config-istio` configmap to use a non-default local gateway
If you create a custom service and deployment for the local gateway with a name other than `knative-local-gateway`, you
need to update the gateway ConfigMap `config-istio` under the `knative-serving` namespace.
1. Edit the `config-istio` configmap:
```bash
kubectl edit configmap config-istio -n knative-serving
```
2. Replace the `local-gateway.knative-serving.knative-local-gateway` field with the custom service. As an example, if you name both
the service and deployment `custom-local-gateway` under the namespace `istio-system`, it should be updated to:
```
custom-local-gateway.istio-system.svc.cluster.local
```
As an example, if both the custom service and deployment are labeled with `custom: custom-local-gateway`, not the default
`istio: knative-local-gateway`, you must update the gateway instance `knative-local-gateway` in the `knative-serving` namespace:
```bash
kubectl edit gateway knative-local-gateway -n knative-serving
```
Replace the label selector with the label of your service:
```
istio: knative-local-gateway
```
For the service mentioned earlier, it should be updated to:
```
custom: custom-local-gateway
```
If there is a change in service ports (compared to that of
`knative-local-gateway`), update the port info in the gateway accordingly.
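
As a rough sketch, a `knative-local-gateway` Gateway edited for the hypothetical `custom-local-gateway` service might look like the following. The label key and the port number here are illustrative assumptions, not values prescribed by this guide:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: knative-local-gateway
  namespace: knative-serving
spec:
  selector:
    custom: custom-local-gateway  # label on your custom gateway pods
  servers:
    - port:
        number: 80                # adjust to match your custom service's ports
        name: http
        protocol: HTTP
      hosts:
        - "*"
```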
### Verifying your Istio install
View the status of your Istio installation to make sure the install was
successful. It might take a few seconds, so rerun the following command until
all of the pods show a `STATUS` of `Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
> Tip: You can append the `--watch` flag to the `kubectl get` commands to view
> the pod status in real time. Use `CTRL + C` to exit watch mode.
### Configuring DNS
Knative dispatches to different services based on their hostname, so it is recommended to have DNS properly configured.
To do this, begin by looking up the external IP address that Istio received:
```
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.0.2.24 34.83.80.117 15020:32206/TCP,80:30742/TCP,443:30996/TCP 2m14s
istio-pilot ClusterIP 10.0.3.27 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 2m14s
```
This external IP can be used with your DNS provider with a wildcard `A` record. However, for a basic non-production set
up, this external IP address can be used with `sslip.io` in the `config-domain` ConfigMap in `knative-serving`.
You can edit this by using the following command:
```
kubectl edit cm config-domain --namespace knative-serving
```
Given this external IP, change the content to:
```
apiVersion: v1
kind: ConfigMap
metadata:
name: config-domain
namespace: knative-serving
data:
# sslip.io is a "magic" DNS provider, which resolves all DNS lookups for:
# *.{ip}.sslip.io to {ip}.
34.83.80.117.sslip.io: ""
```
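
If you prefer a non-interactive edit, the same change can be applied with `kubectl patch`. This sketch assumes the example external IP above and a cluster reachable from your shell:

```bash
kubectl patch configmap config-domain --namespace knative-serving \
  --type merge --patch '{"data":{"34.83.80.117.sslip.io":""}}'
```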
## Istio resources
- For the official Istio installation guide, see the
[Istio Kubernetes Getting Started Guide](https://istio.io/docs/setup/kubernetes/).
- For the full list of available configs when installing Istio with `istioctl`, see
the
[Istio Installation Options reference](https://istio.io/docs/setup/install/istioctl/).
## Clean up Istio
See the [Uninstall Istio](https://istio.io/docs/setup/install/istioctl/#uninstall-istio).
## What's next
- Try the [Getting Started with App Deployment guide](../../serving/getting-started-knative-app.md) for Knative Serving.
[1]:
https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#manual-sidecar-injection
[2]:
https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#automatic-sidecar-injection
[3]: https://istio.io/docs/concepts/security/#mutual-tls-authentication
[4]: https://istio.io/docs/tasks/security/authz-http/
| 32.655039 | 131 | 0.734481 | eng_Latn | 0.988499 |
d3ea3b941af39f34b45324ef196beeeb256a6eb4 | 3,911 | md | Markdown | reference/docs-conceptual/core-powershell/ise/How-to-Create-a-PowerShell-Tab-in-Windows-PowerShell-ISE.md | clrxbl/powerShell-Docs.nl-nl | cd6651abcb7eba84fb4edfbc03ebb6896800a85a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | reference/docs-conceptual/core-powershell/ise/How-to-Create-a-PowerShell-Tab-in-Windows-PowerShell-ISE.md | clrxbl/powerShell-Docs.nl-nl | cd6651abcb7eba84fb4edfbc03ebb6896800a85a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | reference/docs-conceptual/core-powershell/ise/How-to-Create-a-PowerShell-Tab-in-Windows-PowerShell-ISE.md | clrxbl/powerShell-Docs.nl-nl | cd6651abcb7eba84fb4edfbc03ebb6896800a85a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.date: 06/05/2017
keywords: PowerShell-cmdlet
title: How to Create a PowerShell Tab in Windows PowerShell ISE
ms.assetid: c10c18c7-9ece-4fd0-83dc-a19c53d4fd83
ms.openlocfilehash: 4d4388d889f2178b2cd24cb0f3350aee37327625
ms.sourcegitcommit: cf195b090b3223fa4917206dfec7f0b603873cdf
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 04/09/2018
ms.locfileid: "30950043"
---
# <a name="how-to-create-a-powershell-tab-in-windows-powershell-ise"></a>How to create a PowerShell tab in Windows PowerShell ISE

Tabs in the Windows PowerShell Integrated Scripting Environment (ISE) let you create and use several execution environments at the same time within the same application.
Each PowerShell tab corresponds to a separate execution environment, or session.

> **NOTE**:
>
> Variables, functions, and aliases that you create in one tab do not carry over to another. They are separate Windows PowerShell sessions.

Use the following steps to open or close a tab in Windows PowerShell.
To rename a tab, set the [DisplayName](The-PowerShellTab-Object.md#displayname) property of the PowerShell tab scripting object.
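
For example, a tab can be renamed from the ISE scripting object model. Note that `$psISE` exists only inside the ISE itself, and the display names below are made-up examples:

```powershell
# Rename the currently active PowerShell tab
$psISE.CurrentPowerShellTab.DisplayName = "Local Session"

# Rename another open tab from the tab collection
$psISE.PowerShellTabs[1].DisplayName = "Remote Server 01"
```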
## <a name="to-create-and-use-a-new-powershell-tab"></a>To create and use a new PowerShell tab

On the **File** menu, click **New PowerShell Tab**. The new PowerShell tab always opens as the active window.
PowerShell tabs are numbered incrementally in the order in which they are opened.
Each tab is associated with its own Windows PowerShell console window.
You can have up to 32 PowerShell tabs with their own sessions open at a time (this is limited to 8 in Windows PowerShell ISE 2.0).

Note that clicking the **New** or **Open** icons on the toolbar does not create a new tab with a separate session.
Instead, these buttons open a new or existing script file on the currently selected tab, which has a session.
You can have multiple script files open in each tab and session.
The script tabs for a session appear below the session tabs only when the associated session is active.

To activate a PowerShell tab, click it. To choose from all of the PowerShell tabs that are open, on the **View** menu, click the PowerShell tab you want to use.
## <a name="to-create-and-use-a-new-remote-powershell-tab"></a>To create and use a new remote PowerShell tab

On the **File** menu, click **New Remote PowerShell Tab** to establish a session on a remote computer.
A dialog box appears and prompts you to enter the details required to establish the remote connection.
The remote tab functions just like a local PowerShell tab, but the commands and scripts are run on the remote computer.

## <a name="to-close-a-powershell-tab"></a>To close a PowerShell tab

You can use any of the following techniques to close a tab:

- Click the tab that you want to close.

- On the **File** menu, click **Close PowerShell Tab**, or click the Close button (**X**) on an active tab to close the tab.

If you have unsaved files open on the PowerShell tab that you are closing, you are prompted to save or discard them.
For more information about how to save a script, see [How to Save a Script](How-to-Write-and-Run-Scripts-in-the-Windows-PowerShell-ISE.md#how-to-save-a-script).

## <a name="see-also"></a>See also

- [Introducing the Windows PowerShell ISE](Introducing-the-Windows-PowerShell-ISE.md)
- [How to Use the Console Pane in the Windows PowerShell ISE](How-to-Use-the-Console-Pane-in-the-Windows-PowerShell-ISE.md)
d3ea5cafb348739c5403b5096fcd058cc5da1039 | 179 | md | Markdown | Botcamps/GFT_Start_4_Java/Introducao ao Git e GitHub/README.md | igorjuancc/dio | 266406a97ffd0a2320de209d2fa03de139058c78 | [
"MIT"
] | null | null | null | Botcamps/GFT_Start_4_Java/Introducao ao Git e GitHub/README.md | igorjuancc/dio | 266406a97ffd0a2320de209d2fa03de139058c78 | [
"MIT"
] | null | null | null | Botcamps/GFT_Start_4_Java/Introducao ao Git e GitHub/README.md | igorjuancc/dio | 266406a97ffd0a2320de209d2fa03de139058c78 | [
"MIT"
] | null | null | null | # Desafio de Projeto sobre Git/GitHub da DIO
Repositório criado para o Desafio de Projeto.
## Links Úteis
[Sintaxe Básica Markdown](https://www.markdownguide.org/basic-syntax/)
| 29.833333 | 70 | 0.776536 | por_Latn | 0.968307 |
d3eb2d4fe842bd651a2b1f89730f90ca357a2531 | 45 | md | Markdown | Specifications/Readme.md | donn/Phi | 74d5e476184d33b31bfbedcbe0c39517d53b679b | [
"Apache-2.0"
] | 6 | 2019-04-17T11:10:46.000Z | 2022-02-24T16:53:20.000Z | Specifications/Readme.md | donn/Phi | 74d5e476184d33b31bfbedcbe0c39517d53b679b | [
"Apache-2.0"
] | 6 | 2019-04-08T11:05:31.000Z | 2020-12-16T12:02:53.000Z | Specifications/Readme.md | donn/Phi | 74d5e476184d33b31bfbedcbe0c39517d53b679b | [
"Apache-2.0"
] | 2 | 2021-09-01T13:51:23.000Z | 2022-02-24T16:54:01.000Z | ## Specification up to date as of Phi v0.0.6. | 45 | 45 | 0.711111 | eng_Latn | 0.992583 |
d3eb5f1b4e59f74cd1d76708ef357144bfdeac83 | 5,949 | md | Markdown | content/blog/how-to-provide-next-order-coupon-in-woocommerce.md | sathishcartrabbit/site-1 | 1ac6f565d7ca827c7692b1cb2552d929516a335b | [
"MIT"
] | null | null | null | content/blog/how-to-provide-next-order-coupon-in-woocommerce.md | sathishcartrabbit/site-1 | 1ac6f565d7ca827c7692b1cb2552d929516a335b | [
"MIT"
] | null | null | null | content/blog/how-to-provide-next-order-coupon-in-woocommerce.md | sathishcartrabbit/site-1 | 1ac6f565d7ca827c7692b1cb2552d929516a335b | [
"MIT"
] | null | null | null | ---
path: "/blog/how-to-provide-next-order-coupon-in-woocommerce"
date: "2019-01-25"
datemodified: "2019-10-18"
title: "How to provide Next order coupon in WooCommerce"
description: "Learn to provide a coupon for the next purchase when a customer makes an order and send it within the order notification email itself"
author: "Siddharth"
image: "../images/How-to-provide-Next-order-coupon-in-WooCommerce/How-to-provide-Next-order-coupon-in-WooCommerce.png"
thumbnail: "../images/How-to-provide-Next-order-coupon-in-WooCommerce/How-to-provide-Next-order-coupon-in-WooCommerce.png"
category: 'woocommerce'
---
<toc></toc>
## Next Order Coupons and Customer Retention
It’s different these days. WooCommerce Stores (_literally every single store_) out there has understood how important customer retention is and they’ve been taking drastic measures to retain their customers. Because they know that,
“Acquiring a new customer costs 5x more than retaining a customer”
That’s some eye-opening stat ain’t it?
So, how do stores retain customers?
They engage, send personalized emails, conduct give away contests and use some other traditional <link-text url="https://www.retainful.com/blog/10-successful-ideas-to-boost-your-customer-retention-rate" rel="noopener" target="_blank">customer retention methods</link-text>.
That’s not to say these won’t work. But every one of them works on a “may or may not” basis.
You need something more powerful, something immediate that would allow you to say _**“There’s one more customer who’s going to be with my store forever”.**_ That’s the kind of solution you should be having!
And, here’s one such solution - Next Order Coupons aka Smart Coupons or WooCommerce Next Order Discount Coupons.
Let’s see how this works by assuming a scenario,
There’s this customer who is very much interested in a product of yours and he/she has finally placed the order!
Nice. Now you must do something in order to retain him/her forever!
Here’s what you can do - send a next order coupon along with their order confirmation email *(offering a 10 - 20% discount on their next order).*
**There are two things you achieve out of this,**
- You get to drive repeat purchases because your customer has just bought something he/she likes. And a sweet nudge eventually makes them visit your store again.
- Customer Retention - the moment he/she visits your store again, that’s customer retention happening.
And the best part is, you can be sending next order coupons with Retainful forever!
Too much talking? Let’s get to the point!
### Quick Guide - How to Create Next Order Coupon in WooCommerce
Before we start, we assume that you have already installed Retainful. We’ll be getting over with this in 2 quick steps.
#### Step 1 - Creating the Next Order WooCommerce Coupon
<br>
<small><em>**Note:** Below attached is a screenshot of Retainful Plugin’s Next Order Coupon section image.</em></small>
<br>

1. Open Retainful and toggle to the “Next Order Coupon” section.
2. Check the “Yes” radio button to enable next order coupons.
3. Choose from a variety of given order status options like completed order, canceled order and more. Coupons will not be generated by Retainful until the order status meets the chosen option.
4. Choose who you should be displaying your smart coupons to - the administrator, the customer or all users (and there are more options available).
5. Select your coupon type - Percentage or Flat Discount.
6. And then, mention the coupon value. Remember, if this space is left blank, coupons will not be generated. Also, the minimum value of the coupon should be 1.
7. Choose whether or not to send coupons for orders created in the backend.
8. Choose where the coupon should be displayed in the order notification email - after the order details, or the customer details or the order meta details.
9. Create a custom coupon message if you want to. And also make sure you enter the specifics in the coupon shortcodes,
- {{coupon_code}} - coupon code
- {{coupon_amount}} - coupon amount
- {{coupon_url}} - URL to apply coupon automatically
- {{coupon_expiry_date}} - coupon expiry date. If the coupon does not have any expiry date this will not be attached to the message.
We are done with step 1, the coupon creation part. Now, for the second step.
#### Step 2 - Setting Usage Restrictions for the Coupon
Retainful comes with power-packed coupon usage restriction options.

You will find the coupon usage restriction options when you scroll down.
1. The minimum and maximum spend values can be set. The coupon is valid only if the customer shops between the set values.
2. Set coupon validity date. You can simply leave it blank or enter 0 if you do not want the coupon to expire.
3. Choose multiple date formats for coupon expiry.
4. Choose to which products and categories your next order coupon should be displayed and not be displayed.
5. And once you are done with customizing your coupon, make sure you click Save.
Your Next Order Coupon is all set and ready to go live.
<cta url="https://app.retainful.com" rel="noopener" target="_blank">Try Retainful For Free!</cta>
Yeah, we know. It takes no more than 2 minutes to create a Next Order Coupon with Retainful. It’s all about how efficiently you make use of this to retain your customers.
Haven’t you started creating your WooCommerce Discount Coupon already?
<link-text url="https://www.retainful.com/next-order-coupon" rel="noopener" target="_blank">Start creating next order coupons</link-text>
and drive repeat purchases and retain more customers, and let us know once when you’ve sent a thousand smart coupons!
We’ll be waiting to hear from you! | 47.975806 | 273 | 0.758951 | eng_Latn | 0.997584 |
d3ebf2186f90f78bfc5dc5bdefb1b44a1cb9a1da | 645 | md | Markdown | docs/add/metadata/DocumentFormat.OpenXml.Office2010.Drawing.Charts/ShowSketchButton.meta.md | mairaw/open-xml-docs-ref-dotnet | 078db78b64aaade35a15f31e279cc81294f0bae1 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-01-14T16:13:01.000Z | 2021-08-30T21:20:02.000Z | docs/add/metadata/DocumentFormat.OpenXml.Office2010.Drawing.Charts/ShowSketchButton.meta.md | mairaw/open-xml-docs-ref-dotnet | 078db78b64aaade35a15f31e279cc81294f0bae1 | [
"CC-BY-4.0",
"MIT"
] | 12 | 2018-08-02T20:11:26.000Z | 2022-02-15T02:39:52.000Z | docs/add/metadata/DocumentFormat.OpenXml.Office2010.Drawing.Charts/ShowSketchButton.meta.md | mairaw/open-xml-docs-ref-dotnet | 078db78b64aaade35a15f31e279cc81294f0bae1 | [
"CC-BY-4.0",
"MIT"
] | 7 | 2018-10-11T16:55:35.000Z | 2021-05-05T15:13:48.000Z | ---
uid: DocumentFormat.OpenXml.Office2010.Drawing.Charts.ShowSketchButton
ms.author: "soliver"
manager: "soliver"
---
---
uid: DocumentFormat.OpenXml.Office2010.Drawing.Charts.ShowSketchButton.CloneNode(System.Boolean)
ms.author: "soliver"
manager: "soliver"
---
---
uid: DocumentFormat.OpenXml.Office2010.Drawing.Charts.ShowSketchButton.#ctor
ms.author: "soliver"
manager: "soliver"
---
---
uid: DocumentFormat.OpenXml.Office2010.Drawing.Charts.ShowSketchButton.Val
ms.author: "soliver"
manager: "soliver"
---
---
uid: DocumentFormat.OpenXml.Office2010.Drawing.Charts.ShowSketchButton.LocalName
ms.author: "soliver"
manager: "soliver"
---
| 21.5 | 96 | 0.772093 | yue_Hant | 0.396112 |
d3ec116bc459ea4ca6bce96175a252128126417b | 623 | md | Markdown | docs/uptime/monitoring/munin-server-monitoring/index.md | ciwise/server-docs | a7d4016a59aec83ceb5e2721383c75b24adfb226 | [
"CC-BY-4.0"
] | null | null | null | docs/uptime/monitoring/munin-server-monitoring/index.md | ciwise/server-docs | a7d4016a59aec83ceb5e2721383c75b24adfb226 | [
"CC-BY-4.0"
] | null | null | null | docs/uptime/monitoring/munin-server-monitoring/index.md | ciwise/server-docs | a7d4016a59aec83ceb5e2721383c75b24adfb226 | [
"CC-BY-4.0"
] | null | null | null | ---
deprecated: true
author:
name: Linode
email: docs@linode.com
description: 'A collection of guides that introduce and support the use of Munin.'
keywords: 'munin,server monitoring,munin linux,munin debian'
license: '[CC BY-ND 3.0](http://creativecommons.org/licenses/by-nd/3.0/us/)'
alias: ['server-monitoring/munin/']
modified: Tuesday, April 19th, 2011
modified_by:
name: Linode
published: 'Thursday, January 7th, 2010'
title: Munin Server Monitoring
---
Munin is a popular open source software solution for server monitoring. These guides will help you get up and running quickly with Munin on your Linux VPS.
| 34.611111 | 155 | 0.764045 | eng_Latn | 0.890001 |
d3ec2fa9d91c1e8c53de9cefd5b4b516b70ef1fc | 12,997 | md | Markdown | _posts/unity_shader/shader_study/2021-02-13-water-shader.md | rito15/Rito15.github.io | 1da1d7852aee3b31da194309aa21d3531ae8d872 | [
"MIT"
] | null | null | null | _posts/unity_shader/shader_study/2021-02-13-water-shader.md | rito15/Rito15.github.io | 1da1d7852aee3b31da194309aa21d3531ae8d872 | [
"MIT"
] | null | null | null | _posts/unity_shader/shader_study/2021-02-13-water-shader.md | rito15/Rito15.github.io | 1da1d7852aee3b31da194309aa21d3531ae8d872 | [
"MIT"
] | 5 | 2021-07-23T08:47:44.000Z | 2021-12-29T11:24:39.000Z | ---
title: Making a Water Shader
author: Rito15
date: 2021-02-13 19:24:00 +09:00
categories: [Unity Shader, Shader Study]
tags: [unity, csharp, shader, graphics, water]
math: true
mermaid: true
---
# Goal
---

- Build a water shader using a surface shader
<br>
# Table of Contents
---

- [1. Water Shader Basics](#water-shader-basics)
- [2. Applying the Fresnel Formula](#applying-the-fresnel-formula)
- [3. Making the Water Flow](#making-the-water-flow)
- [4. Applying Specular](#applying-specular)
- [5. Creating Waves](#creating-waves)
- [6. Controlling Transmittance](#controlling-transmittance)
- [7. Final Result](#final-result)
<br>
# Prerequisites
---

- A skybox based on a cubemap texture
- A water normal map texture
- A robot to drop into the water
<br>
# Water Shader Basics
---

The mesh uses Unity's default Plane.
For the normals, apply a normal map, and add a `_Tiling` property so the tiling can be adjusted with a simple float.
Also add a `_Strength` property so that the strength of the normals can be controlled.
```hlsl
Shader "Rito/Water"
{
Properties
{
_BumpMap("Normal Map", 2D) = "bump" {}
_Cube("Cube", Cube) = ""{}
[Space]
_Alpha("Alpha", Range(0, 1)) = 0.8
_Tiling("Normal Tiling", Range(1, 10)) = 1
_Strength("Normal Strength", Range(0, 2)) = 1
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
CGPROGRAM
#pragma surface surf Lambert alpha:fade
#pragma target 3.0
sampler2D _BumpMap;
samplerCUBE _Cube;
struct Input
{
float2 uv_BumpMap;
float3 worldRefl;
INTERNAL_DATA
};
float _Alpha;
float _Tiling;
float _Strength;
void surf (Input IN, inout SurfaceOutput o)
{
float3 reflColor = texCUBE(_Cube, WorldReflectionVector(IN, o.Normal));
o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap * _Tiling)) * _Strength;
o.Emission = reflColor;
o.Alpha = _Alpha;
}
ENDCG
}
FallBack "Legacy Shaders/Transparent/VertexLit"
}
```
<br>
# Applying the Fresnel Formula
---

The Fresnel formula uses the relationship between the view direction vector and the target's normal direction vector.
The closer the angle between the two vectors is to perpendicular, the lower the transmittance and the higher the reflectance.
It is written simply as `1 - saturate(dot(N, V))`, and it can be applied in a variety of ways.
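
In formula form, the Fresnel factor computed in the shader below is:

$$ F = 1 - \operatorname{saturate}(N \cdot V)^{p} \cdot k $$

where $p$ is `_FresnelPower` and $k$ is `_FresnelIntensity`.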
```hlsl
// Properties
_FresnelPower("Fresnel Power", Range(0, 10)) = 3
_FresnelIntensity("Fresnel Intensity", Range(0, 5)) = 1
// ...
void surf (Input IN, inout SurfaceOutput o)
{
float3 reflColor = texCUBE(_Cube, WorldReflectionVector(IN, o.Normal));
o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap * _Tiling)) * _Strength;
// Fresnel
float ndv = saturate(dot(normalize(o.Normal), IN.viewDir));
float fresnel = 1. - pow(ndv, _FresnelPower) * _FresnelIntensity;
o.Emission = reflColor * _CubeIntensity * fresnel + _CubeBrightness;
o.Alpha = _Alpha * (fresnel + _AlphaAdd);
}
```
<br>
# Making the Water Flow
---

Making the water flow is really simple:
just add time to the uv when sampling the normal texture.

With only that, however, the water feels more like it is "passing by" than flowing.
So sample the texture twice instead, let the two samples flow in opposite directions, and use the average of the two results.

Now the water really does feel like it is flowing.
```hlsl
// Properties
_FlowDirX("Flow Direction X", Range(-1, 1)) = 1
_FlowDirY("Flow Direction Y", Range(-1, 1)) = 0
_FlowSpeed("Flow Speed", Range(0, 10)) = 1
// ...
void surf (Input IN, inout SurfaceOutput o)
{
// Flow
float2 flowDir = normalize(float2(_FlowDirX, _FlowDirY));
float2 flow = flowDir * _Time.x * _FlowSpeed;
float3 normal1 = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap * _Tiling + flow)) * _Strength;
float3 normal2 = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap * _Tiling - flow * 0.3)) * _Strength * 0.5;
float3 reflColor = texCUBE(_Cube, WorldReflectionVector(IN, o.Normal));
o.Normal = (normal1 + normal2) * 0.5;
// Fresnel
float ndv = saturate(dot(normalize(o.Normal), IN.viewDir));
float fresnel = 1. - pow(ndv, _FresnelPower) * _FresnelIntensity;
o.Emission = reflColor * _CubeIntensity * fresnel + _CubeBrightness;
o.Alpha = _Alpha * (fresnel + _AlphaAdd);
}
```
<br>
# Applying Specular
---

Specular can be used to make light scatter nicely across the water surface.
Instead of the Lambert lighting used so far, write a custom lighting function and apply it.
The specular uses the cheap Blinn-Phong formula.
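
In formula form, the Blinn-Phong specular computed in the lighting function below is:

$$ H = \frac{L + V}{\lVert L + V \rVert}, \qquad \mathrm{spec} = \operatorname{saturate}(N \cdot H)^{p} $$

where $L$ is the light direction, $V$ the view direction, $N$ the surface normal, and $p$ is `_SpPower`.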
```hlsl
// Properties
_SpColor("Specular Color", Color) = (1, 1, 1, 1)
_SpPower("Specular Power", Range(10, 500)) = 150
_SpIntensity("Specular Intensity", Range(0, 10)) = 3
// ...
#pragma surface surf WaterSpecular alpha:fade
// ...
float4 LightingWaterSpecular(SurfaceOutput s, float3 lightDir, float3 viewDir, float atten)
{
    float3 H = normalize(lightDir + viewDir); // Blinn-Phong
float spec = saturate(dot(H, s.Normal));
spec = pow(spec, _SpPower);
float4 col;
col.rgb = spec * _SpColor.rgb * _SpIntensity * _LightColor0;
col.a = s.Alpha + spec;
return col;
}
```
<br>
# Creating Waves
---

Nothing as grand as real waves; the goal is just to make the water ripple more dynamically.

First, duplicate the Plane (there was only one) to make four of them.

Then add a vertex function, and offset each vertex's Y position by uv.x.
```hlsl
#pragma surface surf WaterSpecular alpha:fade vertex:vert
void vert(inout appdata_full v)
{
v.vertex.y += v.texcoord.x;
}
```
This produces a slope like this.
This time, modify it as follows so that the Planes connect to one another:
```hlsl
v.vertex.y += abs(v.texcoord.x * 2. - 1.);
```
Come to think of it, though, with a denser Plane mesh or tessellation there is no need to go through the trouble of stitching several Planes together.
So I made one right away:

- [Custom Plane Mesh Generator (Link)](../unity-toy-custom-plane-mesh-generator/)

This time, add properties so the various options can be controlled,
and use the uv's x and y so the wave can be set up in any direction.
Then feed it into a sin() function so that everything connects smoothly.
```hlsl
// Properties
_WaveCount("Wave Count", Int) = 0
_WaveHeight("WaveHeight", Range(0, 10)) = 1
_WaveDirX("Wave Direction X", Range(-1, 1)) = 0.5
_WaveDirY("Wave Direction Y", Range(-1, 1)) = 1
_WaveSpeed("Wave Speed", Range(0, 10)) = 1
// ...
#pragma surface surf WaterSpecular alpha:fade vertex:vert
// ...
float _WaveHeight, _WaveDirX, _WaveDirY, _WaveSpeed;
int _WaveCount;
void vert(inout appdata_full v)
{
float t = _Time.y * _WaveSpeed;
float2 waveDir = normalize(float2(_WaveDirX, _WaveDirY));
float wave;
wave = sin(abs(v.texcoord.x * waveDir.x) * _WaveCount + t) * _WaveHeight;
wave += sin(abs(v.texcoord.y * waveDir.y) * _WaveCount + t) * _WaveHeight;
v.vertex.y = wave / 2.;
}
// ...
```
<br>
<br>
# Controlling Transmittance
---

Until now the transmittance was set from the normal scattered by the normal map, which gave a result completely different from what was intended.
It looked as if the water surface were covered in fine, grainy occlusion.

So it was changed to control the transmittance from the view vector and the normal from before the normal map is applied.
```hlsl
[Space, Header(Penetration Options)]
_Penetration("Penetration", Range(0, 1)) = 0.2 // transmittance
_PenetrationThreshold("Penetration Threshold", Range(0, 50)) = 5
/* In the surf function */
// originNormal : the original normal, before the normal map is applied
float penet = pow(saturate(dot(originNormal, IN.viewDir)), _PenetrationThreshold) * _Penetration;
o.Alpha = _Alpha - penet;
```
<br>
# Final Result
---

```hlsl
Shader "Rito/Water"
{
Properties
{
[Header(Textures)]
_BumpMap("Normal Map", 2D) = "bump" {}
_Cube("Cube", Cube) = ""{}
[Space, Header(Basic Options)]
_Tint("Tint Color", Color) = (0, 0, 0.01, 1)
_Alpha("Alpha", Range(0, 1)) = 1
_CubeIntensity("CubeMap Intensity", Range(0, 2)) = 1
_CubeBrightness("CubeMap Brightness", Range(-2, 2)) = 0
[Space, Header(Penetration Options)]
        _Penetration("Penetration", Range(0, 1)) = 0.2 // transmittance
_PenetrationThreshold("Penetration Threshold", Range(0, 50)) = 5
[Space, Header(Normal Map Options)]
_Tiling("Normal Tiling", Range(1, 10)) = 2
_Strength("Normal Strength", Range(0, 2)) = 1
[Space, Header(Fresnel Options)]
_FresnelPower("Fresnel Power", Range(0, 10)) = 3
_FresnelIntensity("Fresnel Intensity", Range(0, 5)) = 1
[Space, Header(Lighting Options)]
_SpColor("Specular Color", Color) = (1, 1, 1, 1)
_SpPower("Specular Power", Range(10, 500)) = 300
_SpIntensity("Specular Intensity", Range(0, 10)) = 2
[Space, Header(Flow Options)]
_FlowDirX("Flow Direction X", Range(-1, 1)) = -1
_FlowDirY("Flow Direction Y", Range(-1, 1)) = 1
_FlowSpeed("Flow Speed", Range(0, 10)) = 1
[Space, Header(Wave Options)]
_WaveCount("Wave Count", Int) = 8
_WaveHeight("WaveHeight", Range(0, 10)) = 0.1
_WaveDirX("Wave Direction X", Range(-1, 1)) = -1
_WaveDirY("Wave Direction Y", Range(-1, 1)) = 1
_WaveSpeed("Wave Speed", Range(0, 10)) = 2
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
CGPROGRAM
#pragma surface surf WaterSpecular alpha:fade vertex:vert
#pragma target 3.0
sampler2D _BumpMap;
samplerCUBE _Cube;
struct Input
{
float2 uv_BumpMap;
float3 worldRefl;
float3 viewDir;
float3 Normal;
INTERNAL_DATA
};
float _Alpha, _Penetration, _PenetrationThreshold;
float _CubeIntensity, _CubeBrightness;
float _Tiling, _Strength;
float _FresnelPower, _FresnelIntensity;
float _FlowDirX, _FlowDirY, _FlowSpeed;
float4 _Tint, _SpColor;
float _SpPower, _SpIntensity, _DiffIntensity;
float _WaveHeight, _WaveDirX, _WaveDirY, _WaveSpeed;
int _WaveCount;
// Wave
void vert(inout appdata_full v)
{
float t = _Time.y * _WaveSpeed;
float2 waveDir = normalize(float2(_WaveDirX, _WaveDirY));
float wave;
wave = sin(abs(v.texcoord.x * waveDir.x) * _WaveCount + t) * _WaveHeight;
wave += sin(abs(v.texcoord.y * waveDir.y) * _WaveCount + t) * _WaveHeight;
v.vertex.y = wave / 2.;
}
void surf (Input IN, inout SurfaceOutput o)
{
float3 originNormal = o.Normal;
float3 reflColor = texCUBE(_Cube, WorldReflectionVector(IN, originNormal));
// Flow
float2 flowDir = normalize(float2(_FlowDirX, _FlowDirY));
float2 flow = flowDir * _Time.x * _FlowSpeed;
float3 normal1 = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap * _Tiling + flow));
float3 normal2 = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap * _Tiling * 0.5 - flow * 0.3)) * 0.5;
o.Normal = (normal1 + normal2) * 0.5;
// Fresnel
float ndv = saturate(dot(o.Normal * _Strength, IN.viewDir));
float fresnel = 1. - pow(ndv, _FresnelPower) * _FresnelIntensity;
// Penetration
float penet = pow(saturate(dot(originNormal, IN.viewDir)), _PenetrationThreshold) * _Penetration;
// FInal
o.Emission = (_Tint * 0.5) + (reflColor * _CubeIntensity * fresnel) + _CubeBrightness;
o.Alpha = _Alpha - penet;
}
float4 LightingWaterSpecular(SurfaceOutput s, float3 lightDir, float3 viewDir, float atten)
{
float3 H = normalize(lightDir + viewDir); // Binn Phong
float spec = saturate(dot(H, s.Normal));
spec = pow(spec, _SpPower);
float4 col;
col.rgb = spec * _SpColor.rgb * _SpIntensity * _LightColor0;
col.a = s.Alpha + spec;
return col;
}
ENDCG
}
FallBack "Legacy Shaders/Transparent/VertexLit"
}
```
<br>
# References
---
- 정종필, 테크니컬 아티스트를 위한 유니티 쉐이더 스타트업 (Unity Shader Startup for Technical Artists), 비엘북스, 2017
# Thank you Leet! I got it. You're the one!
---
layout: post
title : "Ch.8.11-2 Problem ID TRIPATHCNT"
subtitle: "Jongman Book"
type: "Algorithmic Problem Solving Strategies"
ps: true
jongman: true
text: true
author: "beenpow"
post-header: true
header-img: "img/1.jpg"
order: 1
date: "2019-12-09"
---
# 8.11-2 Problem: Counting the Number of Maximum Paths on a Triangle (Problem ID: TRIPATHCNT, Difficulty: Medium)
- Problem link: <https://algospot.com/judge/problem/read/TRIPATHCNT>
- This is a variation of [a problem solved earlier](https://beenpow.github.io/jongman/2019/12/07/Jongman-ch8-4-1/).
- The earlier problem only computed the maximum path sum; it did not count the optimal paths themselves.
## Solution Presented in the Book
```
count(y,x) = the number of maximum paths going from (y,x) down to the bottom row

             | count(y+1,x)                   if path2(y+1,x) > path2(y+1,x+1)
count(y,x) = | count(y+1,x+1)                 if path2(y+1,x) < path2(y+1,x+1)
             | count(y+1,x) + count(y+1,x+1)  if path2(y+1,x) = path2(y+1,x+1)
```
```cpp
int n, triangle[100][100];
int pathCache[100][100];

// Returns the maximum path sum obtainable by going down
// from position (y,x) to the bottom row.
int path2(int y, int x) {
    // Base case: we reached the bottom row
    if (y == n - 1) return triangle[y][x];
    // Memoization
    int& ret = pathCache[y][x];
    if (ret != -1) return ret;
    return ret = max(path2(y + 1, x), path2(y + 1, x + 1)) + triangle[y][x];
}

int countCache[100][100];

// Returns the number of maximum paths among all paths
// going down from (y,x) to the bottom row.
int count(int y, int x) {
    // Base case: we reached the bottom row
    if (y == n - 1) return 1;
    // Memoization
    int& ret = countCache[y][x];
    if (ret != -1) return ret;
    ret = 0;
    if (path2(y + 1, x + 1) >= path2(y + 1, x)) ret += count(y + 1, x + 1);
    if (path2(y + 1, x + 1) <= path2(y + 1, x)) ret += count(y + 1, x);
    return ret;
}
```
# .NET Docs
This repo contains work-in-progress documentation for .NET. To contribute, see the [Contributing Guide](CONTRIBUTING.md) and the [issues list](https://github.com/dotnet/docs/issues).
We welcome contributions to help us improve and complete the .NET docs. Feel free to copy/paste documentation from [.NET Framework docs](https://msdn.microsoft.com/library/w0x726c2.aspx) as a starting point for .NET docs. We anticipate that [Xamarin](http://developer.xamarin.com/api/root/classlib/), [Mono](http://docs.go-mono.com/?link=root%3a%2fclasslib) and [Unity](http://docs.unity3d.com/Manual/index.html) will also use this documentation.
This project has adopted the code of conduct defined by the Contributor Covenant
to clarify expected behavior in our community.
For more information, see the [.NET Foundation Code of Conduct](https://dotnetfoundation.org/code-of-conduct).
Samples Build Status
===
| Framework | Ubuntu 16.04 (x64) | Windows Server 2012 R2 (x64) |
| ------------- |------------| -----|
| .NET Core | [](http://seoul.westus.cloudapp.azure.com/job/dotnetcore-samples-ubuntu1604-x64/) | [](http://seoul.westus.cloudapp.azure.com/job/dotnetcore-samples-windows2012-x64/) |
# Fluttipedia
A Flutter app for playing the Wikipedia game [Getting to Philosophy](https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy) using the [Skiapoden](https://github.com/skiapoden/) [firstlink](https://skiapoden.herokuapp.com/) microservice.
## Getting Started
The project can be run as follows:
### 1. List the available emulators
```
flutter emulators
```
### 2. Launch a specific emulator
```
flutter emulators --launch Pixel_2_XL_API_28
```
### 3. Run the project
```
flutter run
```
## Technical Requirements
The following requirements were implemented at the code locations described below.
### Use of Flutter (3 pts.)
The app was built with Flutter using the MaterialApp widget (`main.dart`, lines 12-23), which allows a global theme to be defined. Both stateful and stateless widgets were used, and in part custom widgets were created (`widgets/TargetCard.dart`, `widgets/TutorialCard.dart`).
### Implementation & Use of a Custom Server Component (1 pt.)
To obtain the first link of a Wikipedia article, an external server component is used: it can be passed a term and responds with the first link of that article.
The server component was created as group work for the Software Testing (SWT) module. More information at https://skiapoden.herokuapp.com/firstlink.
This can be seen in the code in `screens/result.dart`, lines 62-66.
### Communication via HTTP (1 pt.)
Communication with the server component happens asynchronously via HTTP.
This can be seen in the code in `screens/result.dart`, lines 61-72.
### Local Persistence via SharedPreferences (1 pt.)
```
fluttipedia\lib\screens\Settings.dart
```
Lines 34-63
## Miscellaneous
Further notes on the app.
### Structure
The app consists of a MainPage and several SubPages.
The MainPage forms the basic scaffold of the app through the use of the Flutter widget MaterialApp. General specifications, such as the color palette, are defined there.
The individual SubPages provide the app's actual functionality. Care was taken to use Flutter widgets first and foremost, in order to adhere to Material Design.
### Navigation
Navigation happens mainly via the BottomNavigationBar. When the corresponding icon is tapped, the desired SubPage slides in from the side.
### Persistence
SharedPreferences are used for persistence. Flutter does not offer this feature by default, so the following dependency must be used:
```yaml
shared_preferences: ^0.4.3
```
---
templateKey: blog-post
title: Something else
date: 2018-12-03T17:52:16.900Z
description: a
tags:
- intro
---
bcd
---
sidebar: auto
---
# ZarinLink
ZarinLink and Personal Link are two practical services that make online payments easier, especially on social networks; both services are covered here.
## What is ZarinLink?
ZarinLink is a payment link that can be defined and used for a product with a fixed price; it is most useful on social networks, blogs, forums, and so on.
### How do I create a ZarinLink?
1. On the ZarinLink page, click .
2. Fill in the basic information fields.
3. Click "Create ZarinLink".
4. Specify which fields you want displayed in your ZarinLink.
5. If necessary, set a limit on the number of product orders.
6. Specify the return links for successful and failed payments. (Optional)
7. Click "Save".
::: warning Note
There is no limit on creating ZarinLinks; you can create as many as you like.
:::
### How do I use a ZarinLink?
1. On the ZarinLink page, click  on the ZarinLink you want to use.
2. Select "Show Code".
3. Pick whichever of the displayed codes suits your use case.
### Editing a ZarinLink
1. On the ZarinLink page, click  on the ZarinLink you want to edit.
2. Select "Edit".
3. Change the editable fields and click "Edit".
## Personal Link
A Personal Link can act like a card reader: any amount can be deposited to you by others.
Instead of the hassle of card-to-card transfers, you can use a Personal Link and send it to others.
### How do I create a Personal Link?
1. On the ZarinLink page, click .
2. Fill in and submit the relevant fields.
::: warning Note
Note that after a Personal Link is created, it cannot be changed or edited.
:::
# Gradle Changelog Plugin
[][jb:confluence-on-gh]
[][jb:twitter]
[![Gradle Plugin][gradle-plugin-shield]][gradle-plugin]
[][gh:build]
[][jb:slack]
**This project requires Gradle 6.6 or newer**
> **TIP:** Upgrade Gradle Wrapper with `./gradlew wrapper --gradle-version 7.2`
A Gradle plugin that provides tasks and helper methods to simplify working with a changelog that is managed in the [keep a changelog][keep-a-changelog] style.
## Table of contents
- [Usage](#usage)
- [Configuration](#configuration)
- [Tasks](#tasks)
- [`initializeChangelog`](#initializechangelog)
- [`getChangelog`](#getchangelog)
- [Extension Methods](#extension-methods)
- [`get`](#get)
- [`getOrNull`](#getornull)
- [`getUnreleased`](#getunreleased)
- [`getLatest`](#getlatest)
- [`getAll`](#getall)
- [`has`](#has)
- [`Changelog.Item`](#changelogitem)
- [Helper Methods](#helper-methods)
- [Usage Examples](#usage-examples)
- [Contributing](#contributing)
## Usage
Kotlin:
```kotlin
import org.jetbrains.changelog.date
plugins {
id("org.jetbrains.changelog") version "1.2.1"
}
tasks {
// ...
patchPluginXml {
changeNotes(provider { changelog.getUnreleased().toHTML() })
}
}
changelog {
version.set("1.0.0")
path.set("${project.projectDir}/CHANGELOG.md")
header.set(provider { "[${version.get()}] - ${date()}" })
itemPrefix.set("-")
keepUnreleasedSection.set(true)
unreleasedTerm.set("[Unreleased]")
groups.set(listOf("Added", "Changed", "Deprecated", "Removed", "Fixed", "Security"))
}
```
Groovy:
```groovy
import org.jetbrains.changelog.ExtensionsKt
plugins {
id 'org.jetbrains.changelog' version '1.2.1'
}
apply plugin: 'org.jetbrains.changelog'
intellij {
// ...
patchPluginXml {
changeNotes({ changelog.getUnreleased().toHTML() })
}
}
changelog {
version = "1.0.0"
path = "${project.projectDir}/CHANGELOG.md"
header = "[${-> version.get()}] - ${new SimpleDateFormat("yyyy-MM-dd").format(new Date())}"
headerParserRegex = ~/(\d+\.\d+)/
itemPrefix = "-"
keepUnreleasedSection = true
unreleasedTerm = "[Unreleased]"
groups = ["Added", "Changed", "Deprecated", "Removed", "Fixed", "Security"]
}
```
> **Note:** All the extension and tasks properties are now lazy (see [Lazy Configuration][gradle-lazy-configuration]).
>
> To set values in Kotlin DSL, use `.set(...)` method explicitly, like `changelog.version.set("1.0.0")`, in Groovy it is still possible to use `=` assignment.
>
> To access property value, call `.get()`, like: `changelog.version.get()` in both variants.
## Configuration
Plugin can be configured with the following properties set in the `changelog {}` closure:
| Property | Description | Default value |
|-------------------------|----------------------------------------------------------------------------|-------------------------------------------------------------------------------|
| **`version`** | Current version. By default, global project's version is used. | `project.version` |
| `groups` | List of groups created with a new Unreleased section. | `["Added", "Changed", "Deprecated", "Removed", "Fixed", "Security"]` |
| `header` | `String` or `Provider` that returns current header value. | `provider { "[${version.get()}]" }` |
| `headerParserRegex` | `Regex`/`Pattern`/`String` used to extract version from the header string. | `null`, fallbacks to [`ChangelogPluginConstants#SEM_VER_REGEX`][semver-regex] |
| `itemPrefix`           | Single item's prefix; allows customising the bullet sign.                  | `"-"`                                                                         |
| `keepUnreleasedSection` | Add Unreleased empty section after patching. | `true` |
| `patchEmpty` | Patches changelog even if no release note is provided. | `true` |
| `path` | Path to the changelog file. | `"${project.projectDir}/CHANGELOG.md"` |
| `unreleasedTerm` | Unreleased section text. | `"[Unreleased]"` |
> **Note:** `header` closure has the delegate explicitly set to the extension's context for the sake of the [Configuration cache][configuration-cache] support.
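The Kotlin example at the top of this README does not set `headerParserRegex`. As an illustrative sketch (assuming the `String` overload listed in the table above; this snippet is not from the original docs), the lazy property can be configured like this:

```kotlin
changelog {
    // A plain String pattern is one of the accepted types for this property,
    // alongside Regex and Pattern (see the table above).
    headerParserRegex.set("""(\d+\.\d+)""")
}
```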
## Tasks
The plugin introduces the following tasks:
| Task | Description |
|-----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
| [`getChangelog`](#getchangelog) | Retrieves changelog for the specified version. |
| [`initializeChangelog`](#initializechangelog) | Creates new changelog file with Unreleased section and empty groups. |
| [`patchChangelog`](#patchchangelog) | Updates the unreleased section to the given version. Requires *unreleased* section to be present in the changelog file. |
### `getChangelog`
Retrieves changelog for the specified version.
#### Options
| Option | Description |
|----------------|----------------------------------------------------|
| `--no-header` | Skips the first version header line in the output. |
| `--unreleased` | Returns Unreleased change notes. |
#### Examples
```bash
$ ./gradlew getChangelog --console=plain -q --no-header
### Added
- Initial project scaffold
- GitHub Actions to automate testing and deployment
- Kotlin support
```
### `initializeChangelog`
Creates new changelog file with Unreleased section and empty groups.
#### Examples
```bash
$ ./gradlew initializeChangelog
$ cat CHANGELOG.md
## [Unreleased]
### Added
- Example item
### Changed
### Deprecated
### Removed
### Fixed
### Security
```
### `patchChangelog`
Updates the unreleased section to the given version. Requires *unreleased* section to be present in the changelog file.
#### Options
| Option | Description |
|------------------|---------------------------------------------------------|
| `--release-note` | Adds custom release note to the latest changelog entry. |
#### Examples
```bash
$ cat CHANGELOG.md
## [Unreleased]
### Added
- A based new feature
$ ./gradlew patchChangelog
$ cat CHANGELOG.md
## [Unreleased]
### Added
## [1.0.0]
### Added
- A based new feature
```
```bash
$ cat CHANGELOG.md
## [Unreleased]
### Added
- This note will get overridden by the `--release-note`
$ ./gradlew patchChangelog --release-note='- Foo'
$ cat CHANGELOG.md
## [Unreleased]
### Added
## [1.0.0]
- Foo
```
## Extension Methods
All the methods are available via the `changelog` extension and allow for reading the changelog file within the Gradle tasks to provide the latest (or specific) change notes.
> **Note:** Following methods depend on the `changelog` extension, which is set in the *Configuration* [build phase][build-phases].
> To make it run properly, it's required to place the configuration before the fist usage of such a method.
> Alternatively, you can pass the Gradle closure to the task, which will postpone the method invocation.
### `get`
The method returns a `Changelog.Item` object for the specified version.
Throws `MissingVersionException` if the version is not available.
It is possible to specify the *unreleased* section by setting the `${changelog.unreleasedTerm}` value.
#### Parameters
| Parameter | Type | Description | Default value |
|-----------|----------|----------------------|------------------------|
| `version` | `String` | Change note version. | `${changelog.version}` |
#### Examples
Kotlin:
```kotlin
tasks {
patchPluginXml {
changeNotes.set(provider { changelog.get("1.0.0").toHTML() })
}
}
```
Groovy:
```groovy
tasks {
patchPluginXml {
changeNotes = { changelog.get("1.0.0").toHTML() }
}
}
```
### `getOrNull`
Same as `get`, but returns `null` instead of throwing `MissingVersionException`.
#### Parameters
| Parameter | Type | Description | Default value |
|-----------|----------|----------------------|------------------------|
| `version` | `String` | Change note version. | `${changelog.version}` |
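#### Examples

Kotlin - an illustrative sketch (falling back to the Unreleased section is not from the original docs, just one way to use the `null` return):

```kotlin
tasks {
    patchPluginXml {
        changeNotes.set(provider {
            // getOrNull returns null instead of throwing MissingVersionException,
            // so a fallback can be chosen explicitly.
            (changelog.getOrNull("1.0.0") ?: changelog.getUnreleased()).toHTML()
        })
    }
}
```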
### `getUnreleased`
The method returns a `Changelog.Item` object for the *unreleased* version.
#### Examples
Kotlin:
```kotlin
tasks {
patchPluginXml {
changeNotes.set(provider { changelog.getUnreleased().toHTML() })
}
}
```
Groovy:
```groovy
tasks {
patchPluginXml {
changeNotes = { changelog.getUnreleased().toHTML() }
}
}
```
### `getLatest`
The method returns the latest `Changelog.Item` object (first on the list).
#### Examples
Kotlin:
```kotlin
tasks {
patchPluginXml {
changeNotes.set(provider { changelog.getLatest().toHTML() })
}
}
```
Groovy:
```groovy
tasks {
patchPluginXml {
changeNotes = { changelog.getLatest().toHTML() }
}
}
```
### `getAll`
The method returns all available `Changelog.Item` objects.
#### Examples
Kotlin:
```kotlin
extension.getAll().values.map { it.toText() }
```
Groovy:
```groovy
extension.getAll().values().each { it.toText() }
```
### `has`
The method checks if the given version exists in the changelog.
#### Examples
Kotlin:
```kotlin
tasks {
patchPluginXml {
provider { changelog.has("1.0.0") }
}
}
```
Groovy:
```groovy
tasks {
patchPluginXml {
{ changelog.has("1.0.0") }
}
}
```
## `Changelog.Item`
Methods described in the above section return `Changelog.Item` object, which is a representation of the single changelog section for the specific version.
It provides a couple of properties and methods that allow altering the output form of the change notes:
### Properties
| Name | Type | Description |
|-----------|----------|----------------------|
| `version` | `String` | Change note version. |
### Methods
| Name | Description | Returned type |
|---------------------|--------------------------------|---------------|
| `noHeader()` | Excludes header part. | `this` |
| `noHeader(Boolean)` | Includes/excludes header part. | `this` |
| `getHeader()` | Returns header text. | `String` |
| `toText()` | Generates Markdown output. | `String` |
| `toPlainText()` | Generates Plain Text output. | `String` |
| `toString()` | Generates Markdown output. | `String` |
| `toHTML()` | Generates HTML output. | `String` |
## Helper Methods
| Name | Description | Returned type |
|----------------------------------------|----------------------------------------------------------------|---------------|
| `date(pattern: String = "yyyy-MM-dd")` | Shorthand for retrieving the current date in the given format. | `String` |
| `markdownToHTML(input: String)` | Converts given Markdown content to HTML output. | `String` |
| `markdownToPlainText(input: String)` | Converts given Markdown content to Plain Text output. | `String` |
> **Note:** To use package-level Kotlin functions in Groovy, you need to import the containing file as a class:
> ```groovy
> import org.jetbrains.changelog.ExtensionsKt
>
> changelog {
> header = { "[$version] - ${ExtensionsKt.date('yyyy-MM-dd')}" }
> }
> ```
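In a Kotlin build script the helpers can be imported directly; this sketch mirrors the configuration example at the top of this README:

```kotlin
import org.jetbrains.changelog.date

changelog {
    // date() defaults to the "yyyy-MM-dd" pattern listed in the table above.
    header.set(provider { "[${version.get()}] - ${date()}" })
}
```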
## Usage Examples
- [Unity Support for ReSharper and Rider](https://github.com/JetBrains/resharper-unity)
- [IntelliJ Platform Plugin Template](https://github.com/JetBrains/intellij-platform-plugin-template)
- [Package Search](https://plugins.jetbrains.com/plugin/12507-package-search)
## Contributing
### Integration tests
To perform integration tests with an existing project, bind the `gradle-changelog-plugin` sources in the Gradle settings file:
`settings.gradle`:
```
rootProject.name = "IntelliJ Platform Plugin Template"
includeBuild '/Users/hsz/Projects/JetBrains/gradle-changelog-plugin'
```
`settings.gradle.kts`:
```
rootProject.name = "IntelliJ Platform Plugin Template"
includeBuild("/Users/hsz/Projects/JetBrains/gradle-changelog-plugin")
```
[gh:build]: https://github.com/JetBrains/gradle-changelog-plugin/actions?query=workflow%3ABuild
[gh:gradle-intellij-plugin]: https://github.com/JetBrains/gradle-intellij-plugin
[jb:confluence-on-gh]: https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub
[jb:slack]: https://plugins.jetbrains.com/slack
[jb:twitter]: https://twitter.com/JBPlatform
[build-phases]: https://docs.gradle.org/current/userguide/build_lifecycle.html#sec:build_phases
[configuration-cache]: https://docs.gradle.org/6.8.2/userguide/configuration_cache.html
[keep-a-changelog]: https://keepachangelog.com/en/1.0.0
[gradle-plugin-shield]: https://img.shields.io/maven-metadata/v.svg?label=Gradle%20Plugin&color=blue&metadataUrl=https://plugins.gradle.org/m2/org/jetbrains/intellij/plugins/gradle-changelog-plugin/maven-metadata.xml
[gradle-plugin]: https://plugins.gradle.org/plugin/org.jetbrains.changelog
[gradle-lazy-configuration]: https://docs.gradle.org/current/userguide/lazy_configuration.html
[semver-regex]: https://github.com/JetBrains/gradle-changelog-plugin/blob/main/src/main/kotlin/org/jetbrains/changelog/ChangelogPluginConstants.kt#L38
# GO LANG - REST API
This simple project was built to study Go, Clean Architecture, and REST APIs.
## Features:
- Create User Account
- Create User Sessions with JWT
- Save a record of every user login
- Manage User Access Control (ACL) with UsersPermissions
### utils
- RUN: `go run src/*.go`
- BUILD: `go build src/*.go`
- TEST ALL FILES: `go test ./src/...`
- TEST COV: `go test ./src/... -cover`
- TEST COV: `go test ./src/... -cover -coverprofile=c.out && go tool cover -html=c.out -o coverage.html`
- CREATE MOCKS: `~/.asdf/installs/golang/1.15.2/packages/bin/mockgen -destination=mocks/mock_database_connection.go -package=mocks -source=core_module/frameworks/database/connection.go`
- Reference: https://levelup.gitconnected.com/unit-testing-using-mocking-in-go-f281122f499f
# rxjs-marbles
[](https://github.com/cartant/rxjs-marbles/blob/master/LICENSE)
[](https://www.npmjs.com/package/rxjs-marbles)
[](https://npmjs.org/package/rxjs-marbles)
[](http://travis-ci.org/cartant/rxjs-marbles)
[](https://david-dm.org/cartant/rxjs-marbles)
[](https://david-dm.org/cartant/rxjs-marbles#info=devDependencies)
[](https://david-dm.org/cartant/rxjs-marbles#info=peerDependencies)
[](https://greenkeeper.io/)
### What is it?
`rxjs-marbles` is an RxJS [marble testing](https://github.com/ReactiveX/rxjs/blob/master/doc/marble-testing.md) library that should be compatible with any test framework. It wraps the RxJS [`TestScheduler`](https://github.com/ReactiveX/rxjs/blob/5.4.2/src/testing/TestScheduler.ts) and provides methods similar to the [helper methods](https://github.com/ReactiveX/rxjs/blob/master/doc/marble-testing.md#api) used the `TestScheduler` API.
It can be used with [AVA](https://github.com/avajs/ava), [Jasmine](https://github.com/jasmine/jasmine), [Jest](https://facebook.github.io/jest/), [Mocha](https://github.com/mochajs/mocha) or [Tape](https://github.com/substack/tape) in the browser or in Node and it supports CommonJS and ES module bundlers.
### Why might you need it?
I created this package because I wanted to use RxJS marble tests in a number of projects and those projects used different test frameworks.
There are a number of marble testing packages available - including the Mocha-based implementation in RxJS itself - but I wanted something that was simple, didn't involve messing with globals and `beforeEach`/`afterEach` functions and was consistent across test frameworks.
If you are looking for something similar, this might suit.
## Install
Install the package using NPM:
```
npm install rxjs-marbles --save-dev
```
## Getting started
If you're just getting started with marble testing, you might be interested in how I wasted some of my time by not carefully reading the manual: [RxJS Marble Testing: RTFM](https://ncjamieson.com/marble-testing-rtfm/).
## Usage
`rxjs-marbles` contains framework-specific import locations. If there is a location for the test framework that you are using, you should use the specific import. Doing so will ensure that you receive the best possible integration with your test framework.
For example, importing from `"rxjs-marbles/jest"` will ensure that Jest's matcher is used and the output for test failures will be much prettier.
### With Mocha
Instead of passing your test function directly to `it`, pass it to the library's `marbles` function, like this:
```ts
import { marbles } from "rxjs-marbles/mocha";
import { map } from "rxjs/operators";
describe("rxjs-marbles", () => {
it("should support marble tests", marbles(m => {
const source = m.hot("--^-a-b-c-|");
const subs = "^-------!";
const expected = "--b-c-d-|";
const destination = source.pipe(
map(value => String.fromCharCode(value.charCodeAt(0) + 1))
);
m.expect(destination).toBeObservable(expected);
m.expect(source).toHaveSubscriptions(subs);
}));
});
```
### With other test frameworks
To see how `rxjs-marbles` can be used with other test frameworks, see the [examples](./examples) within the repository.
With AVA and Tape, the callback passed to the `marbles` function will receive an additional argument - the AVA `ExecutionContext` or the Tape `Test` - which can be used to specify the number of assertions in the test plan. See the framework-specific examples for details.
### Using cases for test variations
In addition to the `marbles` function, the library exports a `cases` function that can be used to reduce test boilerplate by specifying multiple cases for variations of a single test. The API is based on that of [`jest-in-case`](https://github.com/Thinkmill/jest-in-case), but also includes the marbles context.
The `cases` implementation is framework-specific, so the import must specify the framework. For example, with Mocha, you would import `cases` and use it instead of the `it` function, like this:
```ts
import { cases } from "rxjs-marbles/mocha";
import { map } from "rxjs/operators";
describe("rxjs-marbles", () => {
cases("should support cases", (m, c) => {
const source = m.hot(c.s);
const destination = source.pipe(
map(value => String.fromCharCode(value.charCodeAt(0) + 1))
);
m.expect(destination).toBeObservable(c.e);
}, {
"non-empty": {
s: "-a-b-c-|",
e: "-b-c-d-|"
},
"empty": {
s: "-|",
e: "-|"
}
});
});
```
### `TestScheduler` behaviour changes in RxJS version 6
In RxJS version 6, a `run` method was added to the `TestScheduler` and when it's used, the scheduler's behaviour is significantly changed.
`rxjs-marbles` now defaults to using the scheduler's `run` method. To use the scheduler's old behaviour, you can call the `configure` function, passing `{ run: false }`, like this:
```ts
import { configure } from "rxjs-marbles/mocha";
const { cases, marbles } = configure({ run: false });
```
### Dealing with deeply-nested schedulers
**WARNING**: `bind` is deprecated and can only be used with `configure({ run: false })`.
Sometimes, passing the `TestScheduler` instance to the code under test can be tedious. The context includes a `bind` method that can be used to bind a scheduler's `now` and `schedule` methods to those of the context's `TestScheduler`.
`bind` can be passed specific scheduler instances or can be called with no arguments to bind RxJS's `animationFrame`, `asap`, `async` and `queue` schedulers to the context's `TestScheduler`.
For example:
```ts
it("should support binding non-test schedulers", marbles(m => {
m.bind();
const source = m.hot("--^-a-b-c-|");
const subs = "^--------!";
const expected = "---a-b-c-|";
// Note that delay is not passed a scheduler:
const destination = source.delay(m.time("-|"));
m.expect(destination).toBeObservable(expected);
m.expect(source).toHaveSubscriptions(subs);
}));
```
### Changing the time per frame
**WARNING**: `reframe` is deprecated and can only be used with `configure({ run: false })`.
The RxJS `TestScheduler` defaults to 10 virtual milliseconds per frame (each character in the diagram represents a frame) with a maximum of 750 virtual milliseconds for each test.
If the default is not suitable for your test, you can change it by calling the context's `reframe` method, specifying the time per frame and the (optional) maximum time. The `reframe` method must be called before any of the `cold`, `flush`, `hot` or `time` methods are called.
The [examples](./examples) include tests that use `reframe`.
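As a sketch (this only works together with `configure({ run: false })`, and the frame values below are illustrative, not defaults):

```ts
import { configure } from "rxjs-marbles/mocha";

const { marbles } = configure({ run: false });

describe("rxjs-marbles", () => {
  it("should support reframe", marbles(m => {
    // 1 virtual millisecond per frame, 100 virtual milliseconds maximum.
    // reframe must be called before cold, hot or time.
    m.reframe(1, 100);
    const source = m.cold("--a--b--|");
    const expected = "--a--b--|";
    m.expect(source).toBeObservable(expected);
  }));
});
```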
### Alternate assertion methods
If the BDD syntax is something you really don't like, there are some alternative methods on the `Context` that are more terse:
```ts
const source = m.hot("--^-a-b-c-|", values);
const subs = "^-------!";
const expected = m.cold("--b-c-d-|", values);
const destination = source.map((value) => value + 1);
m.equal(destination, expected);
m.has(source, subs);
```
## API
The `rxjs-marbles` API includes the following functions:
* [configure](#configure)
* [marbles](#marbles)
* [observe](#observe)
* [fakeSchedulers](#fakeSchedulers)
<a name="configure"></a>
### configure
```ts
interface Configuration {
assert?: (value: any, message: string) => void;
assertDeepEqual?: (a: any, b: any) => void;
frameworkMatcher?: boolean;
run?: boolean;
}
function configure(options: Configuration): { marbles: MarblesFunction };
```
The `configure` method can be used to specify the assertion functions that are to be used. Calling it is optional; it's only necessary if particular assertion functions are to be used. It returns an object containing a `marbles` function that will use the specified configuration.
The default implementations simply perform the assertion and throw an error for failed assertions.
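For example, the assertions can be delegated to an assertion library of your choice; this sketch assumes chai:

```ts
import { expect } from "chai";
import { configure } from "rxjs-marbles/mocha";

// Wire the marble assertions to chai; any library with
// equivalent truthiness and deep-equality checks would work.
const { marbles } = configure({
  assert: (value, message) => expect(value, message).to.be.ok,
  assertDeepEqual: (a, b) => expect(a).to.deep.equal(b)
});
```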
<a name="marbles"></a>
### marbles
```ts
function marbles(test: (context: Context) => any): () => any;
function marbles<T>(test: (context: Context, t: T) => any): (t: T) => any;
```
`marbles` is passed the test function, which it wraps, passing the wrapper to the test framework. When the test function is called, it is passed the `Context` - which contains methods that correspond to the `TestScheduler` [helper methods](https://github.com/ReactiveX/rxjs/blob/master/doc/marble-testing.md#api):
```ts
interface Context {
autoFlush: boolean;
bind(...schedulers: IScheduler[]): void;
cold<T = any>(marbles: string, values?: any, error?: any): ColdObservable<T>;
configure(options: Configuration): void;
equal<T = any>(actual: Observable<T>, expected: Observable<T>): void;
equal<T = any>(actual: Observable<T>, expected: string, values?: { [key: string]: T }, error?: any): void;
equal<T = any>(actual: Observable<T>, subscription: string, expected: Observable<T>): void;
equal<T = any>(actual: Observable<T>, subscription: string, expected: string, values?: { [key: string]: T }, error?: any): void;
expect<T = any>(actual: Observable<T>, subscription?: string): Expect<T>;
flush(): void;
has<T = any>(actual: Observable<T>, expected: string | string[]): void;
hot<T = any>(marbles: string, values?: any, error?: any): HotObservable<T>;
reframe(timePerFrame: number, maxTime?: number): void;
readonly scheduler: TestScheduler;
teardown(): void;
time(marbles: string): number;
}
interface Expect<T> {
toBeObservable(expected: ColdObservable<T> | HotObservable<T>): void;
toBeObservable(expected: string, values?: { [key: string]: T }, error?: any): void;
toHaveSubscriptions(expected: string | string[]): void;
}
```
<a name="observe"></a>
### observe
In Jasmine, Jest and Mocha, the test framework recognises asynchronous tests by their taking a `done` callback or returning a promise.
The `observe` helper can be useful when an observable cannot be tested using a marble test. Instead, expectations can be added to the observable stream and the observable can be returned from the test.
See the [examples](./examples) for usage.
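A minimal sketch with Mocha (the operators and values are illustrative):

```ts
import { expect } from "chai";
import { of } from "rxjs";
import { map, tap } from "rxjs/operators";
import { observe } from "rxjs-marbles/mocha";

describe("rxjs-marbles", () => {
  // The observable returned from the callback is subscribed to and the
  // test completes when the observable completes.
  it("should support observe", observe(() =>
    of("alice").pipe(
      map(name => `Hi, ${name}!`),
      tap(greeting => expect(greeting).to.equal("Hi, alice!"))
    )
  ));
});
```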
<a name="fakeSchedulers"></a>
### fakeSchedulers
With Jest and Jasmine, the test framework can be configured to use its own concept of fake time. AVA, Mocha and Tape don't have built-in support for fake time, but the functionality can be added via `sinon.useFakeTimers()`.
In some situations, testing asynchronous observables with marble tests can be awkward. Where testing with marble tests is too difficult, it's possible to test observables using the test framework's concept of fake time, but the `now` method of the `AsyncScheduler` has to be patched. The `fakeSchedulers` helper can be used to do this.
See the [examples](./examples) for usage.
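A sketch with Mocha and sinon (the sinon wiring is an assumption of this example):

```ts
import { expect } from "chai";
import { timer } from "rxjs";
import * as sinon from "sinon";
import { fakeSchedulers } from "rxjs-marbles/mocha";

describe("rxjs-marbles", () => {
  let clock: sinon.SinonFakeTimers;
  beforeEach(() => { clock = sinon.useFakeTimers(); });
  afterEach(() => { clock.restore(); });

  // fakeSchedulers patches the AsyncScheduler's now method so that it
  // follows the faked time.
  it("should support fake time", fakeSchedulers(() => {
    let received: number | undefined;
    timer(100).subscribe(value => (received = value));
    clock.tick(50);
    expect(received).to.be.undefined;
    clock.tick(50);
    expect(received).to.equal(0);
  }));
});
```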
Also, I've written an article on the `fakeSchedulers` function: [RxJS: Testing with Fake Time](https://ncjamieson.com/testing-with-fake-time/).
[file: CHANGELOG.md, repo: solnic/transflow, license: MIT]

# 0.3.0 2015-08-19
## Added
- Support for steps that return [kleisli](https://github.com/txus/kleisli) monads (solnic)
- Support for setting default step options via flow DSL (solnic)
- Support for subscribing many listeners to a single step (solnic)
- Support for subscribing one listener to all steps (solnic)
## Changed
- Step objects are now wrapped using the `Step` decorator, which uses the `dry-pipeline` gem (solnic)
- Only `Transflow::StepError` errors can cause transaction failure (solnic)
[Compare v0.2.0...v0.3.0](https://github.com/solnic/transflow/compare/v0.2.0...v0.3.0)
# 0.2.0 2015-08-18
## Added
- Support for currying a publisher step (solnic)
[Compare v0.1.0...v0.2.0](https://github.com/solnic/transflow/compare/v0.1.0...v0.2.0)
# 0.1.0 2015-08-17
## Added
- `Transaction#call` will raise if options include an unknown step name (solnic)
- `Transflow` supports a shorter syntax for steps: `steps :one, :two, :three` (solnic)
- `step(name)` defaults to `step(name, with: name)` (solnic)
## Fixed
- `Transaction#to_s` displays steps in the order of execution (solnic)
## Internal
- Organize source code into separate files (solnic)
- Document public interface with YARD (solnic)
- Add unit specs for `Transaction` (solnic)
[Compare v0.0.2...v0.1.0](https://github.com/solnic/transflow/compare/v0.0.2...v0.1.0)
# 0.0.2 2015-08-16
## Added
- Ability to pass additional arguments to specific operations prior to calling the whole transaction (solnic)
[Compare v0.0.1...v0.0.2](https://github.com/solnic/transflow/compare/v0.0.1...v0.0.2)
# 0.0.2 2015-08-16
## Added
- Ability to publish events from operations via `publish: true` option (solnic)
- Ability to subscribe to events via `Transflow::Transaction#subscribe` interface (solnic)
[Compare v0.0.1...v0.0.2](https://github.com/solnic/transflow/compare/v0.0.1...v0.0.2)
# 0.0.1 2015-08-16
First public release \o/
[file: Module4/solutions/_filter1.md, repo: bioinformaticsdotca/CSHL_2019, license: CC-BY-3.0]

By looking at the INFO field, we can understand the filter:
* `QD` stands for `Variant Confidence/Quality by Depth`, which corresponds to the QUAL score normalized by the allele depth (AD) for a variant.
* `FS` stands for `Phred-scaled p-value using Fisher's exact test to detect strand bias`, which corresponds to the result of a Fisher's exact test to determine if there is strand bias between the forward and reverse strands for the reference or alternate allele.
* `MQ` stands for `RMS Mapping Quality`, which corresponds to an estimation of the overall mapping quality of the reads supporting a variant call.
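These annotations are typically combined into hard filters. As an illustrative sketch (the thresholds below are the commonly cited GATK hard-filter defaults, not values prescribed by this module):

```shell
# Label - rather than remove - variants whose annotations fail the thresholds.
gatk VariantFiltration \
  -V input.vcf.gz \
  --filter-expression "QD < 2.0"  --filter-name "QD2" \
  --filter-expression "FS > 60.0" --filter-name "FS60" \
  --filter-expression "MQ < 40.0" --filter-name "MQ40" \
  -O filtered.vcf.gz
```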
[file: README.md, repo: mgalante/virgencita, license: MIT]

# Virgencita del tiempo climático
[](https://travis-ci.org/reynico/virgencita)
### Setup
- Download [GeoIP city location database](https://dev.maxmind.com/geoip/geoip2/geolite2/)
- Uncompress and rename to `geo.mmdb`
- Set your `FORECAST_API_KEY_1`. Create an account at [DarkSky](https://darksky.net/)
- Set your `FORECAST_ANALYTICS_KEY` to collect metrics in Google Analytics
- Install the Python requirements `make install`
- Run with `python3 main.py`
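A sketch of wiring up the environment before the run (the variable names come from the list above; the values are placeholders):

```shell
# Export the keys the app reads from the environment, then start it.
export FORECAST_API_KEY_1="your-darksky-api-key"
export FORECAST_ANALYTICS_KEY="your-google-analytics-id"
# Confirm the shell picked the key up before launching `python3 main.py`.
echo "FORECAST_API_KEY_1=${FORECAST_API_KEY_1}"
```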
### How to test locally
Requires Docker 18.02.0+
- Edit `docker-compose.yml` with your keys
```
docker-compose build
docker-compose up
```
### Running unit tests
- Run `make test`
### Run lint
- Run `make lint`
[file: content/post/2017-01-31-looking-ahead-2017-projects.md, repo: scottslowe/weblog, license: MIT]

---
author: slowe
categories: Personal
comments: true
date: 2017-01-31T00:00:00Z
tags:
- Personal
- OSS
- Writing
- Podcast
title: 'Looking Ahead: My 2017 Projects'
url: /2017/01/31/looking-ahead-2017-projects/
---
For the last few years, I've been sharing my list of projects for each year (here's the [list for 2012][xref-1], the [list for 2013][xref-2], [2015's list][xref-3], and [last year's list][xref-4]---I didn't do a list for 2014). Toward the end of each year, I also publish a "report card" assessing my performance against that year's list (here's [the 2016 assessment][xref-6]). In this post, I'm going to share my list of planned projects for 2017.
Without further ado, here's the list for 2017:
1. **Finish the network automation book.** One way or another, the [network automation book][xref-5] I'm writing with Jason Edelman and Matt Oswalt is getting finished in 2017. (It's [available now as an Early Access edition][link-1] if you'd like to give it a look and provide some feedback.)
2. **Launch an open source book project.** This is something I've been tossing around for a while now. Since my efforts at making code contributions to an open source project aren't going so well (though I'm going to keep moving in that direction), I figured I'd contribute in a way I _know_ I can do. This is going to be a "cookbook"-style book, and the goal I'm setting for myself is to produce at least 2 "recipes" per quarter for the book. Stay tuned for some additional details soon. _(Stretch goal: Produce 3 "recipes" per quarter.)_
3. **Produce some video content.** My target this year is to produce at least one piece of video content every quarter. This will most likely be a brief video demo of a project, a walk-through of a particular configuration, or similar. _(Stretch goal: Publish 6 videos over the course of 2017, instead of only 4.)_
4. **Get the Full Stack Journey podcast back on track.** I launched [the Full Stack Journey podcast][link-4] last year, and managed to get 10 episodes published (reasonably close to my target of monthly episodes). Now that I have a better idea of what's involved ([this post][link-3] by Ethan Banks is truly _spot on_), I'm going to shoot for 9 episodes in 2017 and give myself some room for really busy periods (like VMworld). I know that heavy-duty podcasters will tell me that I need to do more in order to really build an audience, but---like my blog itself---the point of the podcast isn't monetization. It's to help the community. _(Stretch goal: publish 12 episodes in 2017.)_
5. **Complete a "wildcard project."** I've had this on the list for the last couple of years. The thinking behind including this is that it's pretty difficult (impossible?) to predict what sorts of opportunities will emerge over the course of the year, and I'd like to remain open to tackling a new or different project should it present itself. At the same time, it's entirely possible that a new or different project _won't_ appear, so I probably won't give myself a negative grade if some new unforeseen project doesn't emerge.
So there's my list of 2017 projects, and---importantly---associated goals for each project (along with stretch goals for some of the projects). Two other things that will keep me busy this year (aside from my day job, of course!) but which I'm _not_ going to list as projects include:
1. Migrating to Linux on my primary laptop (a new Dell Latitude E7370; see my brief hardware review [here][xref-7])
2. Supporting the VMUG community worldwide (I'll be in Pittsburgh in early March and St. Louis in early April, with more dates on the horizon; contact me if you're a VMUG leader and are interested in having me speak at your event)
I'd love to hear your feedback (positive or negative, but try to be constructive if at all possible). Feel free to [contact me on Twitter][link-2]. Thanks!
[link-1]: http://shop.oreilly.com/product/0636920042082.do
[link-2]: https://twitter.com/scott_lowe
[link-3]: http://ethancbanks.com/2017/01/27/starting-a-podcast-is-easy-continuing-is-hard/
[link-4]: http://fullstackjourney.com/
[xref-1]: {{< relref "2012-01-02-some-projects-for-2012.md" >}}
[xref-2]: {{< relref "2013-02-07-looking-ahead-my-2013-projects.md" >}}
[xref-3]: {{< relref "2015-01-16-looking-ahead-2015-projects.md" >}}
[xref-4]: {{< relref "2016-01-21-looking-ahead-2016-projects.md" >}}
[xref-5]: {{< relref "2015-12-28-next-gen-network-engineering-skills.md" >}}
[xref-6]: {{< relref "2016-12-22-looking-back-2016-project-report-card.md" >}}
[xref-7]: {{< relref "2017-01-30-review-dell-latitude-e7370.md" >}}
[file: _datasets/PacELF_Phase2_504.md, repo: DanielBaird/jkan, license: MIT]

---
schema: pacelf
title: Exploiting the potential of vector control for disease prevention
organization: Townson, H., Nathan, M. B., Zaim, M., Guillet, P., Manga, L., Bos, R., Kindhauser, M.
notes: N/A
access: Open
resources:
- name: Exploiting the potential of vector control for disease prevention
url: 'N/A'
format: Hardcopy
access: Open
pages: 942-947
category: Scientific Papers
access: Open
journal: Bulletin of the World Health Organization
publisher: N/A
language: English
hardcopy_location: JCU WHOCC Ichimori collection
work_location: Multicountry Global
year: N/A
decade: unspecified
PacELF_ID: 1248
---
[file: README.md, repo: scontini76/bashcov, license: MIT]

# Bashcov
[](https://rubygems.org/gems/bashcov)
[](https://travis-ci.org/infertux/bashcov)
[](https://coveralls.io/r/infertux/bashcov)
[](https://codeclimate.com/github/infertux/bashcov/maintainability)
[](http://inch-ci.org/github/infertux/bashcov)
Bashcov is a **code coverage analysis tool for Bash**.
In most cases, you'll want overall coverage results for your project from
[shUnit2](https://github.com/kward/shunit2),
[Bats](https://github.com/sstephenson/bats),
[bash_unit](https://github.com/pgrange/bash_unit),
[assert.sh](https://github.com/lehmannro/assert.sh),
etc.
Bashcov automatically takes care of this by caching and merging results when generating reports,
so your report includes coverage across your test suites and thereby gives you a better picture of blank spots.
It uses the [SimpleCov](https://github.com/colszowka/simplecov) coverage library to generate HTML reports.
SimpleCov gets installed automatically when you install Bashcov.
Here are example coverages generated by Bashcov:
[test app demo](https://infertux.github.com/bashcov/test_app/ "Coverage for the bundled test application") &
[RVM demo](https://infertux.github.com/bashcov/rvm/ "Coverage for RVM").
## Installation
`gem install bashcov`
If the `gem` command is unavailable, you need to [install Ruby](https://www.ruby-lang.org/en/documentation/installation/) first.
## Usage
`bashcov --help` prints all available options. Here are some examples:
bashcov ./script.sh
bashcov --skip-uncovered ./script.sh
bashcov -- ./script.sh --some --flags
bashcov --skip-uncovered -- ./script.sh --some --flags
`./script.sh` can be a mere Bash script or typically your CI script. Bashcov will keep track of all executed scripts.
It will create a directory named `./coverage/`; you may open `./coverage/index.html` to browse the coverage report.
### SimpleCov integration
You can leverage the underlying library [SimpleCov](https://github.com/colszowka/simplecov)
by adding a `.simplecov` file in your project's root, like [this](https://github.com/infertux/bashcov/blob/master/spec/test_app/.simplecov).
See [advanced usage](./USAGE.md) for more information.
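A minimal `.simplecov` sketch (the filter paths here are assumptions; any SimpleCov configuration can go in this file):

```ruby
# .simplecov - picked up from the project root when coverage starts.
SimpleCov.start do
  # Exclude third-party and test scripts from the coverage figures.
  add_filter "/vendor/"
  add_filter "/test/"
end
```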
## Contributing
Bug reports and patches are most welcome.
See the [contribution guidelines](https://github.com/infertux/bashcov/blob/master/CONTRIBUTING.md).
## License
MIT
[file: js/map.md, repo: g0xxx3uru/Career_Plan, license: MIT]

This document is aimed at improving JS skills and is edited as a tracking document every 3 months.
- Plans for 3 months
[file: service/README.md, repo: Ross1503/Brunel, license: Apache-2.0]

# Service
A basic JAX-RS web service implementation for Brunel that produces Javascript to render a Brunel visualization, given Brunel syntax and the data to visualize.
#### To start the Brunel server
Gradle can be used to deploy Brunel to TomEE and start the web server. From `/brunel`:
```gradle cargoRunLocal```
To confirm it is working, navigate to:
http://localhost:8080/brunel-service/brunel/interpret/d3?brunel_src=data('http%3A%2F%2Fbrunel.mybluemix.net%2Fsample_data%2FBGG%2520Top%25202000%2520Games.csv')%20chord%20x(categories)%20y(playerage)%20color(playerage)%20size(%23count)%20tooltip(%23all)&width=575&height=575
#### REST methods
The REST methods are defined in `BrunelService.java`. The main REST calls are:
````POST /brunel/interpret/d3````
Post a CSV on the payload (as a String)
Params:
src: The brunel syntax
width: The visualization width to use
height: The visualization height to use
visid: A unique identifier for the HTML tag containing the visualization
This returns the Javascript that will render the visualization.
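An illustrative invocation (host, port, passing the parameters via the query string, and the CSV path are assumptions of this sketch):

```shell
# Assemble the POST request; drop the leading echo to actually send it.
BASE="http://localhost:8080/brunel-service/brunel/interpret/d3"
QUERY="src=x(a)%20y(b)&width=575&height=575&visid=vis1"
echo curl -X POST "${BASE}?${QUERY}" -H "Content-Type: text/plain" --data-binary @data.csv
```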
````GET /brunel/interpret/d3````
Gets full HTML that can be placed into an `<iframe>`.
Params:
brunel_src: The brunel syntax
    brunel_url: Optionally, a URL pointing to a file containing the Brunel syntax.
width: The visualization width to use
height: The visualization height to use
    data: A URL reference to a CSV
This returns HTML that can be placed into an IFrame. It does not return the `<iframe>` HTML tag--only the contents.
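For example, a sketch of wrapping the GET endpoint in an `<iframe>` tag yourself (host and Brunel source are assumptions; the parameters are those listed above):

```shell
# Build the iframe markup that points at the GET endpoint.
BASE="http://localhost:8080/brunel-service/brunel/interpret/d3"
SRC="data('sample.csv')%20x(a)%20y(b)"
IFRAME="<iframe src=\"${BASE}?brunel_src=${SRC}&width=575&height=575\"></iframe>"
echo "${IFRAME}"
```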
[file: docs/index.md, repo: tatsuyafw/sider-docs, license: MIT]

---
title: Home
sidebar_label: Home
hide_title: true
---
# Sider Documentation
## About the documentation
This documentation is a collection of guides for Sider, analyzers and customization features.
We have prepared the content to improve your experience with Sider. If you have any problems or comments about Sider, please let us know.
## Getting Started
This section has documents about how to use Sider.
* [Introductory Videos](./getting-started/intro-videos.md)
* [Setting up Sider](./getting-started/setup.md)
* [Repository Settings](./getting-started/repository-settings.md)
* [Dashboard Overview](./getting-started/dashboard.md)
* [Custom Analysis Configuration](./getting-started/custom-configuration.md)
* [Working with Issues](./getting-started/working-with-issues.md)
* [Permission](./getting-started/permissions.md)
## Analysis Tools
This section has documents about analysis tools which Sider supports. We have analysis options to fit your project. Sider supports 20+ tools for 8 languages.
### Ruby
<details open>
<summary>6 tools are available.</summary>
* [RuboCop](./tools/ruby/rubocop.md)
* [Reek](./tools/ruby/reek.md)
* [Querly](./tools/ruby/querly.md)
* [Rails Best Practices](./tools/ruby/rails-bestpractices.md)
* [Brakeman](./tools/ruby/brakeman.md)
* [HAML-Lint](./tools/ruby/haml-lint.md)
</details>
### Java
<details open>
<summary>2 tools are available.</summary>
* [Checkstyle](./tools/java/checkstyle.md)
* [PMD](./tools/java/pmd.md)
</details>
### JavaScript and Flavors
<details open>
<summary>4 tools are available.</summary>
* [ESLint](./tools/javascript/eslint.md)
* [JSHint](./tools/javascript/jshint.md)
* [TSLint](./tools/javascript/tslint.md)
* [CoffeeLint](./tools/javascript/coffeelint.md)
</details>
### PHP
<details open>
<summary>3 tools are available.</summary>
* [Phinder](./tools/php/phinder.md)
* [PHPMD](./tools/php/phpmd.md)
* [PHP_CodeSniffer](./tools/php/codesniffer.md)
</details>
### Python
<details open>
<summary>1 tool is available.</summary>
* [Flake8](./tools/python/flake8.md)
</details>
### Swift
<details open>
<summary>1 tool is available.</summary>
* [SwiftLint](./tools/swift/swiftlint.md)
</details>
### CSS
<details open>
<summary>2 tools are available.</summary>
* [stylelint](./tools/css/stylelint.md)
* [SCSS-Lint](./tools/css/scss-lint.md)
</details>
### Go
<details open>
<summary>3 tools are available.</summary>
* [go vet](./tools/go/govet.md)
* [Golint](./tools/go/golint.md)
* [Go Meta Linter](./tools/go/gometalinter.md)
</details>
### Others
<details open>
<summary>2 tools are available.</summary>
* [Goodcheck](./tools/others/goodcheck.md)
* [Misspell](./tools/others/misspell.md)
</details>
## Custom Rules
This section has information, examples, and tips about custom rules. Learn how to create them for your projects.
* [Intro to Custom Rules](./custom-rules/introduction-to-custom-rules.md)
* [Goodcheck](./custom-rules/goodcheck.md)
## Advanced Settings
This section covers advanced features outside of the basic setup of Sider.
* [Inline Comments](./advanced-settings/inline-comments.md)
* [Private Dependencies](./advanced-settings/private-dependencies.md)
* [Restricting access to Close button](./advanced-settings/restricting-access-to-close-button.md)
* [Transferring a repository](./advanced-settings/transferring-a-repository.md)
## Billing and Plans
This section covers questions about billing, invoices, and plans.
* [Billing and Plans](./billing-and-plans.md)
## Troubleshooting
This section answers common questions about how to use Sider.
* [Troubleshooting](./troubleshooting.md)
## Sider Enterprise
Sider supports GitHub Enterprise with Sider Enterprise.
Read the following enterprise documents for Sider Enterprise setup.
* [Sider Enterprise Outline](./enterprise/outline.md)
* [Quickstart](./enterprise/quickstart.md)
* [Computer Resources](./enterprise/resources.md)
* [GitHub Setup](./enterprise/github.md)
* [Database Guide](./enterprise/database.md)
* [Object Storage Guide](./enterprise/storage.md)
* [Containers Guide](./enterprise/containers.md)
* [Updating Guide](./enterprise/updating.md)
* [Administration Guide](./enterprise/administration.md)
* [Scaling Guide](./enterprise/scaling.md)
* [Configuration Index](./enterprise/config.md)
* [Health Check Guide](./enterprise/healthcheck.md)
* [Testing for Sider running](./enterprise/testing/guide.md)
* [Release Notes](./enterprise/releases/changelog.md)
[file: basic/docs/ex.e--vega.md, repo: qabex/spmda, license: BSD-2-Clause]

#### [Vega-Lite](https://vega.github.io/vega-lite/)
This is `ex-g--vega.md`
```!vegalite
description: 'A simple bar chart with embedded data.',
data: {
values: [
{a: 'A', b: 28},
{a: 'B', b: 55},
{a: 'C', b: 43},
{a: 'D', b: 91},
{a: 'E', b: 81},
{a: 'F', b: 53},
{a: 'G', b: 19},
{a: 'H', b: 87},
{a: 'I', b: 52}
]
},
mark: 'bar',
encoding: {
x: {field: 'a', type: 'ordinal'},
y: {field: 'b', type: 'quantitative'}
}
```
```!vegalite-svg
data: {url: "https://vega.github.io/vega-datasets/data/cars.json"},
mark: "point",
encoding: {
x: {field: "Horsepower", type: "quantitative"},
y: {field: "Miles_per_Gallon", type: "quantitative"}
}
```
```!vegalite-canvas
{
"width": 300,
"height": 150,
"data": {
"sequence": {
"start": 0,
"stop": 12.7,
"step": 0.1,
"as": "x"
}
},
"transform": [
{
"calculate": "sin(datum.x)",
"as": "sin(x)"
}
],
"mark": "line",
"encoding": {
"x": {
"field": "x",
"type": "quantitative"
},
"y": {
"field": "sin(x)",
"type": "quantitative"
}
}
}
```
[file: articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md, repo: mKenfenheuer/azure-docs.de-de, license: CC-BY-4.0, MIT]

---
title: Tutorial`:` Use a managed identity to access Azure Key Vault – Windows – Azure AD
description: This tutorial shows you how to use a Windows VM system-assigned managed identity to access Azure Key Vault.
services: active-directory
documentationcenter: ''
author: barclayn
manager: daveba
editor: daveba
ms.service: active-directory
ms.subservice: msi
ms.devlang: na
ms.topic: tutorial
ms.tgt_pltfrm: na
ms.workload: identity
ms.date: 01/10/2020
ms.author: barclayn
ms.collection: M365-identity-device-management
ms.openlocfilehash: 2890eb2211ac0a105363742a0e900e52a577ed27
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/09/2020
ms.locfileid: "89255824"
---
# <a name="tutorial-use-a-windows-vm-system-assigned-managed-identity-to-access-azure-key-vault"></a>Tutorial: Use a Windows VM system-assigned managed identity to access Azure Key Vault

[!INCLUDE [preview-notice](../../../includes/active-directory-msi-preview-notice.md)]

This tutorial shows you how to use a system-assigned managed identity for a Windows virtual machine (VM) to access Azure Key Vault. Serving as a bootstrap, Key Vault makes it possible for your client application to use a secret to access resources not secured by Azure Active Directory (AD). Managed identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without including credentials in your code.

You learn how to:

> [!div class="checklist"]
> * Grant your VM access to a secret stored in a Key Vault
> * Get an access token using the VM identity and use it to retrieve the secret from the Key Vault

## <a name="prerequisites"></a>Prerequisites

[!INCLUDE [msi-tut-prereqs](../../../includes/active-directory-msi-tut-prereqs.md)]

## <a name="enable"></a>Enable

[!INCLUDE [msi-tut-enable](../../../includes/active-directory-msi-tut-enable.md)]

## <a name="grant-access"></a>Grant access

This section shows how to grant the VM access to a secret stored in a Key Vault. Using managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication. However, not all Azure services support Azure AD authentication. To use managed identities for Azure resources with those services, store the service credentials in Azure Key Vault, and use the VM's managed identity to access Key Vault to retrieve the credentials.

First, you need to create a Key Vault and grant the VM's system-assigned managed identity access to it.

1. At the top of the left navigation bar, select **Create a resource** > **Security + Identity** > **Key Vault**.
2. Provide a **Name** for the new Key Vault.
3. Locate the Key Vault in the same subscription and resource group as the VM you created earlier.
4. Select **Access policies**, and click **Add new**.
5. Under Configure from template, select **Secret Management**.
6. Choose **Select Principal**, and in the search field enter the name of the VM you created earlier. Select the VM in the result list, and click **Select**.
7. Click **OK** to add the new access policy, and then **OK** again to finish selecting the access policy.
8. Click **Create** to create the Key Vault.

Now add a secret to the key vault, so that you can later retrieve it with code running on the VM:
1. Select **All resources**, then find and select the key vault you created.
2. Select **Secrets**, and click **Add**.
3. Under **Upload options**, select **Manual**.
4. Enter a name and a value for the secret. The value can be anything you want.
5. Leave the activation and expiration dates empty, and leave **Enabled** set to **Yes**.
6. Click **Create** to create the secret.
## <a name="access-data"></a>Access data
This section shows how to get an access token using the VM identity and use it to retrieve the secret from the key vault. If you don't have PowerShell 4.3.1 or later installed, [download and install the latest version](/powershell/azure/).
First, use the VM's system-assigned managed identity to get an access token to authenticate to Key Vault:
1. In the portal, navigate to **Virtual Machines**, go to your Windows virtual machine, and in the **Overview** click **Connect**.
2. Enter the **Username** and **Password** that you added when you created the Windows VM.
3. Now that you have created a **Remote Desktop Connection** with the virtual machine, open PowerShell in the remote session.
4. In PowerShell, invoke the web request on the tenant to get the token for the local host on the specific port for the VM.
The PowerShell request:
```powershell
$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -Method GET -Headers @{Metadata="true"}
```
Next, extract the full response, which is stored as a JSON-formatted (JavaScript Object Notation) string in the `$response` object.
```powershell
$content = $response.Content | ConvertFrom-Json
```
Then, extract the access token from the response.
```powershell
$KeyVaultToken = $content.access_token
```
Finally, use the PowerShell `Invoke-WebRequest` command to retrieve the secret you created earlier in the key vault, passing the access token in the Authorization header. You'll need the URL of your key vault, which is in the **Essentials** section of the key vault's **Overview** page.
```powershell
(Invoke-WebRequest -Uri https://<your-key-vault-URL>/secrets/<secret-name>?api-version=2016-10-01 -Method GET -Headers @{Authorization="Bearer $KeyVaultToken"}).content
```
The response looks like this:
```powershell
{"value":"p@ssw0rd!","id":"https://mytestkeyvault.vault.azure.net/secrets/MyTestSecret/7c2204c6093c4d859bc5b9eff8f29050","attributes":{"enabled":true,"created":1505088747,"updated":1505088747,"recoveryLevel":"Purgeable"}}
```
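For readers scripting outside PowerShell, the JSON body above can be parsed the same way in any language. The minimal Python sketch below is illustrative only — it is not part of the tutorial, and the sample value is the dummy secret shown above:

```python
import json

# Dummy get-secret response body, copied from the sample output above.
sample = (
    '{"value":"p@ssw0rd!",'
    '"id":"https://mytestkeyvault.vault.azure.net/secrets/MyTestSecret/7c2204c6093c4d859bc5b9eff8f29050",'
    '"attributes":{"enabled":true,"created":1505088747,"updated":1505088747,"recoveryLevel":"Purgeable"}}'
)

def extract_secret(body: str) -> str:
    """Return the secret value from a Key Vault get-secret JSON body."""
    return json.loads(body)["value"]

print(extract_secret(sample))  # -> p@ssw0rd!
```

The same one-liner works on the real response returned by `Invoke-WebRequest`, since the body shape is identical.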
Once you've retrieved the secret from the key vault, you can use it to authenticate to a service that requires a name and password.
## <a name="disable"></a>Disable
[!INCLUDE [msi-tut-disable](../../../includes/active-directory-msi-tut-disable.md)]
## <a name="next-steps"></a>Next steps
In this tutorial, you learned how to use a system-assigned managed identity for a Windows virtual machine to access Azure Key Vault. To learn more about Azure Key Vault, see:
> [!div class="nextstepaction"]
>[Azure Key Vault](../../key-vault/general/overview.md)
# OPTIONAL - Setup Graphical Remote Desktop
## Introduction
This lab will show you how to deploy and configure noVNC Graphical Remote Desktop on an Oracle Enterprise Linux (OEL) instance.
### Objectives
- Deploy NoVNC Remote Desktop
- Configure Desktop
- Add Applications Shortcuts to Desktop
### Prerequisites
This lab assumes you have:
- An Oracle Enterprise Linux (OEL) that meets requirement for marketplace publishing
## **STEP 1**: Deploy noVNC
1. As root, create script */tmp/novnc-1.sh* to perform the first set of tasks.
```
<copy>
sudo su - || sudo sed -i -e 's|root:x:0:0:root:/root:.*$|root:x:0:0:root:/root:/bin/bash|g' /etc/passwd; sudo su -
</copy>
```
```
<copy>
cat > /tmp/novnc-1.sh <<EOF
#!/bin/bash
export appuser=\$1
echo "Installing X-Server required packages ..."
yum -y groupinstall "Server with GUI"
echo "Installing other required packages ..."
yum -y install \
tigervnc-server \
numpy \
mailcap \
nginx
yum -y localinstall \
http://mirror.dfw.rax.opendev.org:8080/rdo/centos7-master/deps/latest/noarch/novnc-1.1.0-6.el7.noarch.rpm \
http://mirror.dfw.rax.opendev.org:8080/rdo/centos7-master/deps/latest/noarch/python2-websockify-0.8.0-13.el7.noarch.rpm
echo "Updating VNC Service ..."
cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver_\${appuser}@:1.service
sed -i "s/<USER>/\${appuser}/g" /etc/systemd/system/vncserver_\${appuser}@:1.service
firewall-cmd --zone=public --permanent --add-service=vnc-server
firewall-cmd --zone=public --permanent --add-port=5901/tcp
firewall-cmd --zone=public --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=6080/tcp
firewall-cmd --reload
systemctl daemon-reload
systemctl enable vncserver_\${appuser}@:1.service
systemctl enable nginx
systemctl daemon-reload
echo "End of novnc-1.sh"
EOF
chmod +x /tmp/novnc-*.sh
</copy>
```
2. Create script */tmp/novnc-2.sh* to perform the second set of tasks.
```
<copy>
cat > /tmp/novnc-2.sh <<EOF
#!/bin/bash
#Enable and Start services
setsebool -P httpd_can_network_connect 1
systemctl daemon-reload
systemctl enable websockify.service
systemctl start websockify.service
nginx -s reload
systemctl restart nginx
echo "noVNC has been successfully deployed on this host and is using NGINX proxy. Open the browser and navigate to the URL below to validate"
echo ""
echo "#================================================="
echo "#"
echo "# http://`curl -s ident.me`/index.html?resize=remote"
echo "# or"
echo "# http://`curl -s ident.me`/password=LiveLabs.Rocks_99&resize=remote&autoconnect=true"
echo "# http://`curl -s ident.me`/index.html?password=LiveLabs.Rocks_99&resize=remote&autoconnect=true"
echo ""
EOF
chmod +x /tmp/novnc-*.sh
</copy>
```
3. Create websockify systemd service file
```
<copy>
cat > /etc/systemd/system/websockify.service <<EOF
[Unit]
Description=Websockify Service
After=network.target cloud-final.service
[Service]
Type=simple
User=oracle
ExecStart=/bin/websockify --web=/usr/share/novnc/ --wrap-mode=respawn 6080 localhost:5901
Restart=on-abort
[Install]
WantedBy=multi-user.target
EOF
</copy>
```
3. Create */etc/nginx/conf.d/novnc.conf* file
```
<copy>
pub_ip=`curl -s ident.me`
cat > /etc/nginx/conf.d/novnc.conf <<EOF
# nginx proxy config file for noVNC websockify
upstream vnc_proxy {
server 127.0.0.1:6080;
}
server {
listen 80;
listen [::]:80;
server_name $pub_ip;
access_log /var/log/nginx/novnc_access.log;
error_log /var/log/nginx/novnc_error.log;
location / {
proxy_pass http://vnc_proxy/;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 61s;
}
}
EOF
</copy>
```
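Note how the heredoc above mixes two kinds of variables: `$pub_ip` is expanded by the shell when the file is written, while the backslash-escaped `\$` variables are left literal for nginx to resolve at request time. The Python sketch below mimics that substitution to make the distinction concrete (illustrative only — the lab writes the file with `cat <<EOF`, and this trimmed template omits some directives):

```python
# Trimmed-down version of the novnc.conf template above. {public_ip} plays
# the role of the shell-expanded $pub_ip; the nginx runtime variables
# (e.g. $http_upgrade) stay literal, like the \$-escaped ones in the heredoc.
NOVNC_CONF_TEMPLATE = """\
upstream vnc_proxy {{
    server 127.0.0.1:6080;
}}
server {{
    listen 80;
    server_name {public_ip};
    location / {{
        proxy_pass http://vnc_proxy/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 61s;
    }}
}}
"""

def render_novnc_conf(public_ip: str) -> str:
    """Substitute the public IP, leaving nginx runtime variables untouched."""
    return NOVNC_CONF_TEMPLATE.format(public_ip=public_ip)

print(render_novnc_conf("203.0.113.10"))
```

If a substitution goes wrong (for example, a missing backslash before `$http_upgrade` in the heredoc), nginx receives an empty value and WebSocket upgrades silently fail — which is why the escaping in the original script matters.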
4. Ensure that the *EPEL* Yum repo is configured and enabled, i.e., that it contains the entry *enabled=1*. If not, update it accordingly before proceeding with the next step.
```
<copy>
sed -i -e 's|enabled=.*$|enabled=1|g' /etc/yum.repos.d/oracle-epel-ol7.repo
cat /etc/yum.repos.d/oracle-epel-ol7.repo|grep enable
</copy>
```

5. Run script *novnc-1.sh* with the desired VNC user as the sole input parameter. e.g. *oracle*
```
<copy>
export appuser=oracle
/tmp/novnc-1.sh ${appuser}
</copy>
```
6. Set password for VNC user.
```
<copy>
vncpasswd ${appuser}
</copy>
```
7. Provide password as prompted. e.g. "*LiveLabs.Rocks_99*". When prompted with *Would you like to enter a view-only password (y/n)?*, enter **N**
```
<copy>
LiveLabs.Rocks_99
</copy>
```
8. Su over to the VNC user account and enforce the password. when prompted
```
<copy>
sudo su - ${appuser}
vncserver
</copy>
```
9. Provide the same password as you did above. e.g. "*LiveLabs.Rocks_99*". When prompted with *Would you like to enter a view-only password (y/n)?*, enter **N**
```
<copy>
LiveLabs.Rocks_99
</copy>
```
10. Stop the newly started VNC Server running on "**:1**" and exit (or *CTRL+D*) the session as vnc user to go back to *root*
```
<copy>
vncserver -kill :1
exit
</copy>
```
11. Start VNC Server using *systemctl*
```
<copy>
systemctl start vncserver_${appuser}@:1.service
systemctl status vncserver_${appuser}@:1.service
</copy>
```
12. Run script *novnc-2.sh* to finalize
```
<copy>
export appuser=oracle
/tmp/novnc-2.sh
</copy>
```
13. After validating successful setup from the URL displayed by the script above, remove all setup scripts from "*/tmp*"
```
<copy>
rm -rf /tmp/novnc-*.sh
</copy>
```
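The two access URLs echoed by `novnc-2.sh` differ only in their query parameters. A small helper (hypothetical — the lab scripts simply echo the URLs directly) shows how they are composed:

```python
from urllib.parse import urlencode

def novnc_url(host, password=None):
    """Build a noVNC access URL like the ones printed by novnc-2.sh.

    `host` is the instance's public IP; `password` is optional and,
    when given, pre-fills the VNC password prompt so the session
    auto-connects.
    """
    params = {}
    if password:
        params["password"] = password
    params.update(resize="remote", autoconnect="true")
    return "http://%s/index.html?%s" % (host, urlencode(params))

print(novnc_url("203.0.113.10"))
print(novnc_url("203.0.113.10", "LiveLabs.Rocks_99"))
```

The `resize=remote` parameter tells noVNC to resize the remote framebuffer to the browser window, which is why it appears in both URL variants.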
## **STEP 2**: Configure Desktop
LiveLabs compute instances are password-less and only accessible via SSH keys. As a result, it's important to adjust some session settings to ensure a better user experience.
1. Launch your browser to the following URL
```
<copy>http://[your instance public-ip address]/index.html?resize=remote</copy>
```

2. Copy/Paste the Password below to login
```
<copy>LiveLabs.Rocks_99</copy>
```

3. Navigate to "*Applications >> System Tools >> Settings*"

4. Click on "*Privacy*" and set **Screen Lock** to *Off*

5. Click on "*Power*" and set **Blank Screen** under Power Saving to *Never*

6. Click on "*Notifications*" and set **Notifications Popups** and **Lock Screen Notifications** to *Off*

7. Scroll-down, Click on "*Devices >> Resolution*" and select **1920 x 1080 (16:9)**


## **STEP 3**: Add Applications to Desktop
For ease of access to the desktop applications provided on the instance and needed to perform the labs, follow the steps below to add shortcuts to the desktop. In the example below, we will add a shortcut for the *FireFox* browser.
1. On the desktop from the previous setup, click on *Home > Other Locations*, then navigate to *`/usr/share/applications`* and scroll-down to find *FireFox*

2. Right-click on *FireFox* and select *Copy to...*

3. Navigate to *Home > Desktop* and Click on *Select*

4. Double-click on the newly added icon on the desktop and click on *Trust and Launch*


5. Repeat steps above to add any other required Application the workshop may need to the Desktop (e.g. Terminal, SQL Developer, etc...)

## **STEP 4**: Enable Copy/Paste from Local to Remote Desktop (noVNC clipboard)
Perform the tasks below and add them to any workshop guide to instruct users on how to enable clipboard on the remote desktop for local-to-remote copy/paste.
During the execution of your labs you may need to copy text from your local PC/Mac to the remote desktop, such as commands from the lab guide. Direct copy/paste is not supported, but you may proceed as indicated below to enable an alternative local-to-remote clipboard with an input text field.
1. From your remote desktop session, click on the small gray tab on the middle-left side of your screen to open the control bar

2. Select the *clipboard* icon, Copy the sample text below and paste into the clipboard widget, then finally open up the desired application and paste accordingly using *mouse controls*
```
<copy>echo "This text was copied from my local computer"</copy>
```

*Note:* Please make sure you initialize your clipboard with steps *[1-3]* shown above before opening the target application in which you intend to paste the text. Otherwise will find the *paste* function grayed out in step 4 when attempting to paste.
You may now [proceed to the next lab](#next).
## Acknowledgements
* **Author** - Rene Fontcha, LiveLabs Platform Lead, NA Technology, September 2020
* **Contributors** - Robert Pastijn
* **Last Updated By/Date** - Rene Fontcha, LiveLabs Platform Lead, NA Technology, May 2021
First components of the ggspec function
================
<br/>
## Extracting elements
<br/>
``` r
p <- ggplot(data = iris) +
geom_point(aes(x = Petal.Width, y = Petal.Length)) +
geom_point(aes(x = Petal.Width, y = Petal.Length, color = Species), shape = 21, fill = "white") +
scale_y_log10()
p
```
<!-- -->
<br/>
### Extracting data
The first move will be to create the list `data` where all of the data
will live.
<!-- QUESTION: what to do about `waiver()` objects? -->
#### Intermediate data step
`data_int()` will create an intermediate-form for the data. The inputs
are the plot data and the plot layers. The result is a named list of
datasets, named `data-00`, `data-01`, …, where each list contains the
elements `metadata`, `variables`, and `hash`.
**TO - DO**:
- Need to name the datasets but will need to check for matching
hashsums and remove `NULL`s
- Would like only the default data to be able to be named
`data-00`
- The other data sets will be named `data-01`, `data-02`, and so
on.
- `metadata` will be a named list (names are variable names) of 3 (1
required, 2 optional)
- `type` = variable type (required)
- `levels` = levels of factor (optional)
- `timezone` = timezone of `date` or `POSIXct` (optional)
<!-- end list -->
``` r
name_data <- function(dat) {
# check for any matching hashsums
}
data_int <- function(data_plt, layers_plt) {
#join together default data and layer data
data_all <- append(list(data_plt), purrr::map(layers_plt, purrr::pluck, "data"))
#format the lists of data
data_all <- purrr::map(data_all, format_data_int)
# how to name the datasets??
# remove NULL entries
data_all <- purrr::discard(data_all, is.null)
data_all
}
```
<br/>
Helper functions:
`format_data_int()` will format each list of data so that it contains:
- `metadata`: discussed below
- `variables`: the data frame itself
- `hash`: the md5 hash of the data frame
<!-- end list -->
``` r
format_data_int <- function(dat) {
if(is.waive(dat) || is.null(dat)) return(NULL)
else {
list(
metadata = purrr::pluck(dat) %>% purrr::map(create_meta_levels),
variables = dat,
hash = digest::digest(dat)
)
}
}
```
`create_meta_levels()` will create the named list of up to 3 elements:
- `metadata`: could be a named list, using names of variables:
- `type`: first pass at `"quantitative"`, …, based on class,
etc.
- `levels`: optional, vector of strings, used for factor-levels
- `timezone`: optional, timezone of `date` or `POSIXct`
`case_type_vl()` converts the type into a Vega-lite type
``` r
case_type_vl <- function(type) {
case_when(
type == "Date" | type == "POSIXct" ~ "temporal",
type == "factor" | type == "character" | type == "logical" ~ "nominal",
type == "ordered" ~ "ordinal",
type == "numeric" ~ "quantitative"
)
}
create_meta_levels <- function(dat){
type = class(dat)
if(type == "factor" | type == "ordered") {
meta <- list(
type = case_type_vl(type),
levels = levels(dat)
)
  } else if (type == "Date" | type == "POSIXct") {
meta <- list(
type = case_type_vl(type),
timezone = NULL # use lubridate::tz or ??
)
} else {
meta <- list(
type = case_type_vl(type)
)
}
meta
}
```
<br/>
Example of the function in use:
``` r
test <- data_int(p$data, p$layers)
str(test)
```
## List of 1
## $ :List of 3
## ..$ metadata :List of 5
## .. ..$ Sepal.Length:List of 1
## .. .. ..$ type: chr "quantitative"
## .. ..$ Sepal.Width :List of 1
## .. .. ..$ type: chr "quantitative"
## .. ..$ Petal.Length:List of 1
## .. .. ..$ type: chr "quantitative"
## .. ..$ Petal.Width :List of 1
## .. .. ..$ type: chr "quantitative"
## .. ..$ Species :List of 2
## .. .. ..$ type : chr "nominal"
## .. .. ..$ levels: chr [1:3] "setosa" "versicolor" "virginica"
## ..$ variables:'data.frame': 150 obs. of 5 variables:
## .. ..$ Sepal.Length: num [1:150] 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## .. ..$ Sepal.Width : num [1:150] 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## .. ..$ Petal.Length: num [1:150] 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## .. ..$ Petal.Width : num [1:150] 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## .. ..$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
## ..$ hash : chr "d3c5d071001b61a9f6131d3004fd0988"
<br/>
This intermediate-form could be used to generate the ggspec-form; it
could also be useful later.
<br/>
#### Final data step
`data_spc()` will return a named list of datasets, named `data-00`,
`data-01`, … . Each list will have:
- `metadata`, as in `data_int()`
- `observations`, transpose of variables
<!-- end list -->
``` r
format_data_spec <- function(dat) {
list(
metadata = dat$metadata,
observations = purrr::transpose(dat$variables)
)
}
data_spc <- function(data_int) {
purrr::map(data_int, format_data_spec)
}
```
``` r
str(data_spc(data_int(p$data, p$layers)), max.level = 2)
```
## List of 1
## $ :List of 2
## ..$ metadata :List of 5
## ..$ observations:List of 150
<br/>
### Extracting layers
Within each layer-object, we need:
1. data (a reference id?)
2. geom
3. geom\_params (maybe)
4. mapping
5. aes\_params
6. stat (maybe)
7. stat\_params (maybe)
<br/>
#### Intermediate layers step
The ggspec layers are a function of the ggplot layers, but also of the
data and scales:
`layer_int()` calls `get_layers()` for each layer. `get_layers()`
returns …
- if `layer_plt` has no data, use `data-00`
- if `layer_plt` has data, hash it and compare against `data_int`, use
name
- make sure that the mapping field is a name in the dataset
- can use type from the dataset metadata for now, can incorporate
scales later
<!-- end list -->
``` r
get_layers <- function(layer) {
pluck_layer <- purrr::partial(purrr::pluck, .x = layer)
list(
data = list(),
geom = list(
class = pluck_layer("geom", class, 1)
),
mapping = pluck_layer("mapping") %>% purrr::map(get_mappings),
aes_params = pluck_layer("aes_params"),
stat = list(
class = pluck_layer("stat", class, 1)
)
)
}
layer_int <- function(layer_plt) {
purrr::map(layer_plt, get_layers)
}
```
<br/>
Helper functions:
``` r
get_mappings <- function(aes) {
list(field = rlang::get_expr(aes),
type = NULL
)
}
```
<br/>
Example of function being used:
``` r
str(layer_int(p$layers))
```
## List of 2
## $ :List of 5
## ..$ data : list()
## ..$ geom :List of 1
## .. ..$ class: chr "GeomPoint"
## ..$ mapping :List of 2
## .. ..$ x:List of 2
## .. .. ..$ field: symbol Petal.Width
## .. .. ..$ type : NULL
## .. ..$ y:List of 2
## .. .. ..$ field: symbol Petal.Length
## .. .. ..$ type : NULL
## ..$ aes_params: NULL
## ..$ stat :List of 1
## .. ..$ class: chr "StatIdentity"
## $ :List of 5
## ..$ data : list()
## ..$ geom :List of 1
## .. ..$ class: chr "GeomPoint"
## ..$ mapping :List of 3
## .. ..$ x :List of 2
## .. .. ..$ field: symbol Petal.Width
## .. .. ..$ type : NULL
## .. ..$ y :List of 2
## .. .. ..$ field: symbol Petal.Length
## .. .. ..$ type : NULL
## .. ..$ colour:List of 2
## .. .. ..$ field: symbol Species
## .. .. ..$ type : NULL
## ..$ aes_params:List of 2
## .. ..$ shape: num 21
## .. ..$ fill : chr "white"
## ..$ stat :List of 1
## .. ..$ class: chr "StatIdentity"
<br/>
#### Final layers step
In `layer_spc()` we will compare the layer data with `data_int` and
determine the types of the variable by comparing with `data_int` and
`scales_spc`
``` r
layer_spc <- function(layer_int, data_int, scales_spc) {
layer_int
}
```
<br/>
Example of function being used:
<br/>
### Extracting scales
I think that scales will be one-to-one:
`scales_spc()` calls `get_scales()` which operates on a single scale,
used with purrr::map(), to get …
will need to first check if there is even anything there…
``` r
get_scales <- function(scale) {
pluck_scale <- purrr::partial(purrr::pluck, .x = scale)
list(
name = pluck_scale("name"),
class = pluck_scale(class, 1),
aesthetics = pluck_scale("aesthetics"),
transform = list(
name = pluck_scale("trans", "name")
)
)
}
scale_spc <- function(scale_plt) {
purrr::map(scale_plt, get_scales)
}
```
``` r
str(scale_spc(p$scales$scales))
```
## List of 1
## $ :List of 4
## ..$ name : NULL
## ..$ class : chr "ScaleContinuousPosition"
## ..$ aesthetics: chr [1:10] "y" "ymin" "ymax" "yend" ...
## ..$ transform :List of 1
## .. ..$ name: chr "log-10"
<br/>
### Extracting labels
Finally, labels:
``` r
find_scale_labs <- function(labs) {
lab <- purrr::pluck(labs, "name")
if(!is.waive(lab)) {
names(lab) <- purrr::pluck(labs, "aesthetics", 1)
lab
}
}
labels_spc <- function(labels_plt, scales_plt) {
# Find the right way to deal with labels - we could run into a
# problem if we have, say, multiple color scales
# scale_labs <- purrr::map(p_scale$scales$scales, find_scale_labs)
# How to replace the labels with scale labels???
labels_plt
}
```
``` r
p_lab <- ggplot(iris) +
geom_point(aes(x = Petal.Width, y = Petal.Length)) +
geom_point(aes(x = Petal.Width, y = Petal.Length), color = "firebrick") +
scale_y_log10() +
labs(x = "new lab")
p_scale <- ggplot(iris) +
geom_point(aes(x = Petal.Width, y = Petal.Length)) +
geom_point(aes(x = Petal.Width, y = Petal.Length), color = "firebrick") +
scale_y_log10("new lab")
ps <- ggplot_build(p_scale)
```
<br/>
## All together now\!
<br/>
``` r
ggspec <- function(plt){
list(
data = data_spc(data_int(plt$data, plt$layers)),
layers = layer_spc(layer_int(plt$layers)),
scales = scale_spc(plt$scales$scales),
labels = labels_spc(plt$labels)
)
}
```
<br/>
Try it out:
``` r
str(ggspec(p), max.level = 3)
```
## List of 4
## $ data :List of 1
## ..$ :List of 2
## .. ..$ metadata :List of 5
## .. ..$ observations:List of 150
## $ layers:List of 2
## ..$ :List of 5
## .. ..$ data : list()
## .. ..$ geom :List of 1
## .. ..$ mapping :List of 2
## .. ..$ aes_params: NULL
## .. ..$ stat :List of 1
## ..$ :List of 5
## .. ..$ data : list()
## .. ..$ geom :List of 1
## .. ..$ mapping :List of 3
## .. ..$ aes_params:List of 2
## .. ..$ stat :List of 1
## $ scales:List of 1
## ..$ :List of 4
## .. ..$ name : NULL
## .. ..$ class : chr "ScaleContinuousPosition"
## .. ..$ aesthetics: chr [1:10] "y" "ymin" "ymax" "yend" ...
## .. ..$ transform :List of 1
## $ labels:List of 3
## ..$ x : chr "Petal.Width"
## ..$ y : chr "Petal.Length"
## ..$ colour: chr "Species"
<br/> <br/>
``` r
pp <- ggplot(iris) +
geom_point(aes(x = Petal.Width, y = Petal.Length, color = Species)) +
geom_point(aes(x = Petal.Width, y = Petal.Length), color = "firebrick", size = 1) +
scale_y_log10("new lab")
str(ggspec(p_scale), max.level = 3)
str(ggspec(pp), max.level = 3)
```
# drone.pytorch
---
title: NuGet Error NU3001
description: NU3001 error code
author: zhili1208
ms.author: lzhi
manager: rob
ms.date: 06/25/2018
ms.topic: reference
ms.reviewer: anangaur
f1_keywords:
- "NU3001"
---
# NuGet Error NU3001
*NuGet 4.6.0+*
### Issue
Invalid arguments to either the [sign command](../../tools/cli-ref-sign.md) or the [verify command](../../tools/cli-ref-verify.md).
### Solution
Check and correct the arguments provided.
---
title: Assign sequence values to an item
TOCTitle: Assign sequence values to an item
ms:assetid: e8dcb2e7-a64c-4522-af18-1845f15b4cf9
ms:mtpsurl: https://technet.microsoft.com/library/JJ838752(v=AX.60)
ms:contentKeyID: 50120635
author: Khairunj
ms.date: 04/18/2014
mtps_version: v=AX.60
audience: Application User
ms.search.region: Global
---
# Assign sequence values to an item
_**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2_
Use this procedure to assign sequences and sequence values to an item, an item group, or all items.
1. Click **Master planning** \> **Setup** \> **Sequencing** \> **Sequence value**.
2. Create a record.
3. In the **Sequence ID** field, select the identification code of a sequence.
4. In the **Item code** field, select one of the following options to indicate the item code and determine the value that you can select in the **Item relation** field:
- **Table** – Assign the sequence to an item. If you select this option, you can then select the identification code of the item in the **Item relation** field.
- **Group** – Assign the sequence to an item group. If you select this option, you can then select the identification code of the item group in the **Item relation** field.
- **All** – Assign the sequence to all the items. If you select this option, the **Item relation** field is not available.
5. In the **Item relation** field, select the item or item group to assign the sequence and the sequence value to.
> [!NOTE]
> This field is available only if you select either **Table** or **Group** in the **Item code** field.
6. In the **Value** field, select the sequence value to assign to the item, the item group, or all items.
## See also
[Set up product sequences](set-up-product-sequences.md)
[Set up sequence groups](set-up-sequence-groups.md)
[Sequence item (form)](https://technet.microsoft.com/library/jj838760\(v=ax.60\))
# clustergrammerShiny
An R Shiny wrapper around the Ma'ayan Lab's Clustergrammer JavaScript heatmap.
## Package Maintenance Working Group Governance
### Collaborators
### WG Membership
The package-maintenance team has two levels of membership: administrative
members and regular members.
If you'd like to be listed as regular team member, open a PR adding yourself
to [MEMBERS.md](MEMBERS.md) along with a few words on how you are planning
to contribute to the team's efforts.
Administrative members take on additional levels of responsibility with
respect to managing the [pkgjs](https://github.com/pkgjs) organization
and the other repositories managed by the working group. Administrative
members should have a long standing involvement in the working group.
Individuals who have made significant contributions and who wish to be
considered as an Administrative member may create an issue or
contact an Administrative WG member directly. It is not necessary
to wait for an Administrative WG member to nominate the individual.
There is no specific set of requirements or qualifications for WG
membership beyond these rules.
The WG may add additional Administrative members to the WG by
consensus. If there are any objections to adding any individual
member, an attempt should be made to resolve those objections
following the WG's Consensus Seeking Process.
A WG member may be removed from the WG by voluntary resignation, or by
consensus of the Administrative WG members.
Changes to Administrative WG membership should be posted in
the agenda, and may be suggested as any other agenda
item (see "WG Meetings" below).
If an addition or removal is proposed, this should be raised in an
issue, tagged for the next agenda, and consensus should be reached
in the issue. This is to ensure
that all Administrative members are given the opportunity to
participate in all membership decisions. The addition or removal
is considered approved once the issue has been open for 2 meetings
or 14 days whichever is longer, and at least half the Administrative
members are in favor.
No more than 1/3 of the Administrative WG members may be
affiliated with the same employer. If removal or resignation
of a WG member, or a change of employment by a WG member,
creates a situation where more than 1/3 of the WG membership
shares an employer, then the situation must be immediately
remedied by the resignation or removal of one or more
Administrative WG members affiliated with the
over-represented employer(s).
For the current list of members, see the project [README.md][].
### WG Meetings
The WG meets regularly as scheduled on the Node.js
[calendar](https://nodejs.org/calendar). Each meeting should be
published to YouTube.
Items are added to the WG agenda that are considered contentious or
are modifications of governance, contribution policy, WG membership,
or release process.
The intention of the agenda is not to approve or review all patches;
that should happen continuously on GitHub.
Any community member or contributor can ask that something be added to
the next meeting's agenda by logging a GitHub Issue. Any Collaborator,
WG member or the moderator can add the item to the agenda by adding
the ***package-maintenance-agenda*** tag to the issue.
Prior to each WG meeting the moderator will share the Agenda with
members of the WG. WG members can add any items they like to the
agenda at the beginning of each meeting. The moderator and the WG
cannot veto or remove items.
The moderator is responsible for summarizing the discussion of each
agenda item and sending the summary as a pull request after the meeting.
### Repository Management
All repositories under the management of the package maintenance team must include:
* LICENSE.md from the list approved by the OpenJS Foundation
* CODE_OF_CONDUCT.md referencing the Node.js Code of conduct in the admin repo.
* CONTRIBUTING.md which includes the current version of the Developer Certificate of Origin (DCO) used by the Node.js Project
* A PR template (or templates) used for all pull requests which includes the current version of the DCO used by the Node.js Project.
* A README.md which includes references to the GOVERNANCE.md in the package-maintenance repository as the authoritative governance which applies to the repository.
#### nodejs/package-maintenance
##### Maintainers
Maintainers for the nodejs/package-maintenance repo are managed
through the [package-maintenance](https://github.com/orgs/nodejs/teams/package-maintenance/)
team. Administrative members are given the maintainer role for that team
and can add/remove members as they join or leave the team.
##### Landing PRs
The package maintenance team policy on landing a PR in this repository
is for there to be:
- At least 4 approvals from regular members other than the author of the PR
- No blocking reviews
- 7 day period from the 4th approval to merging
- In the event there are 4 approvals but pending reviews still exist, a countdown of 21 days will start. During
the countdown period every possible effort must be made to contact the reviewer. If the reviewer cannot be
contacted to review the changes within the countdown period, and their requested changes are believed to be addressed, the PR may be landed. This rule is not intended to circumvent
the policy of consensus when it is known that consensus has not been reached.
All PRs shall follow the [contributions policy](CONTRIBUTING.md)
#### nodejs/ci-config-travis
##### Maintainers
Maintainers for this repository are managed
through the [ci-config-travis-maintainers](https://github.com/orgs/nodejs/teams/ci-config-travis-maintainers/)
team. Administrative members are given the maintainer role for that team
and can add/remove members as appropriate.
##### Landing PRs
The package maintenance team policy on landing a PR in this repository
is for there to be:
- At least 2 approvals from package-maintenance members other than the author of the PR
- No blocking reviews
- 72 hours since the PR was opened
#### nodejs/ci-config-github-actions
##### Maintainers
Maintainers for this repository are managed
through the [ci-config-github-actions-maintainers](https://github.com/orgs/nodejs/teams/ci-config-github-actions-maintainers/)
team. Administrative members are given the maintainer role for that team
and can add/remove members as appropriate.
##### Landing PRs
The package maintenance team policy on landing a PR in this repository
is for there to be:
- At least 2 approvals from package-maintenance members other than the author of the PR
- No blocking reviews
- 72 hours since the PR was opened
#### pkgjs organization
All members of the pkgjs organization will be required to have 2FA enabled,
and the repository will be configured to enforce this requirement.
##### Adding or removing repositories
Any repository created under the Pkgjs GitHub Organization is considered to be
a project under the ownership of the OpenJS Foundation, and thereby subject
to the Intellectual Property and Governance policies of the Foundation.
Any member may request the management of repositories within the
Pkgjs GitHub Organization by opening an issue in the
[package-maintenance repository][nodejs/package-maintenance].
The actions requested could be:
- Creating a new repository
- Deleting an existing repository
- Archiving an existing repository
- Transferring a repository into or out of the organization
Provided there are no objections from any members raised in
the issue, such requests are approved automatically after 72 hours. If any
objection is made, the request may be moved to a vote
of the Administrative members.
In certain cases, OpenJS Cross Project Council and/or OpenJS Foundation Board
of Directors approval may also be required. (Most likely in the case of
transferring a repository into or out of the organization).
##### Maintainers
Maintainers for the repositories in the Pkgjs organization are managed
through a team created for each repository. They will be named as
[REPO-maintainers](https://github.com/orgs/pkgjs/teams/REPO-maintainers/)
where REPO is the name of the repository. Administrative members are given
the maintainer role for these teams and can add/remove members as appropriate.
In addition, all maintainers for a given repository are given the maintainer
role for the teams to which they are added and can themselves add/remove
maintainers as appropriate.
##### Landing PRs
The package maintenance team policy on landing a PR in the repositories within the Pkgjs
organization is for there to be:
- At least 1 approval from package-maintenance members other than the author of the PR,
unless there is single maintainer for the repository. In that case the single maintainer
can land their own PRs.
- No blocking reviews
- 72 hours since the PR was opened
Certain types of pull requests can be fast-tracked and may land after a shorter
delay. For example:
* Focused changes that affect only documentation and/or the test suite:
* `code-and-learn` tasks often fall into this category.
* `good-first-issue` pull requests may also be suitable.
* Changes that fix regressions:
* Regressions that break the workflow (red CI or broken compilation).
* Regressions that happen right before a release, or reported soon after.
To propose fast-tracking a pull request, apply the `fast-track` label. Then add
a comment that Collaborators may upvote.
If someone disagrees with the fast-tracking request, remove the label. Do not
fast-track the pull request in that case.
The pull request may be fast-tracked if two Collaborators approve the
fast-tracking request. To land, the pull request itself still needs two
Collaborator approvals and a passing CI.
Collaborators may request fast-tracking of pull requests they did not author.
In that case only, the request itself is also one fast-track approval. Upvote
the comment anyway to avoid any doubt.
### Consensus Seeking Process
The WG follows a [Consensus Seeking][] decision-making model.
When an agenda item has appeared to reach a consensus the moderator
will ask "Does anyone object?" as a final call for dissent from the
consensus.
If an agenda item cannot reach a consensus a WG member can call for
either a closing vote or a vote to table the issue to the next
meeting. The call for a vote must be seconded by a majority of the WG
or else the discussion will continue. Simple majority wins.
Note that changes to administrative WG membership require
consensus. If there are any objections to adding or removing
individual members, an effort must be made to resolve
those objections. If consensus cannot be reached, a
vote may be called following the process above. Administrative
members are responsible for voting in these cases.
<a id="developers-certificate-of-origin"></a>
## Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
* (a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
* (b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
* (c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
* (d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
[Consensus Seeking]: http://en.wikipedia.org/wiki/Consensus-seeking_decision-making
[nodejs/package-maintenance]: https://github.com/nodejs/package-maintenance
| 43.80292 | 180 | 0.795701 | eng_Latn | 0.999797 |
d3f5b67503d4089f455861f10b98a86e3267fa15 | 3,511 | md | Markdown | aboutme.md | lassegs/skole | c30fae8436092f710e98bcf7a957032df84a7e00 | ["MIT"] | null | null | null | aboutme.md | lassegs/skole | c30fae8436092f710e98bcf7a957032df84a7e00 | ["MIT"] | null | null | null | aboutme.md | lassegs/skole | c30fae8436092f710e98bcf7a957032df84a7e00 | ["MIT"] | null | null | null |

---
id: 191
title: Om bloggen
date: 2014-02-28T11:09:18+00:00
author: lassegs
layout: page
guid: http://skole.lassegs.org/?page_id=191
wip_slogan:
- ""
wip_template:
- full
---
This blog is a collection of my lecture notes. The notes are my interpretations of what each lecture was about. They are therefore not complete, and you should allow for incomplete and outright incorrect information. At the same time, it is better that they are available than that they rot on some hard drive, and if anyone else finds them useful, so much the better.

My name is Lasse Gullvåg Sætre and I am taking the [bachelor's programme in human geography](http://www.uio.no/studier/program/samfunnsgeografi/) at the [University of Oslo](http://www.uio.no/), with a 40-credit group in [sociology.](http://www.uio.no/studier/program/sosiologi/) Most of the content on this blog is licensed under [Creative Commons CC-NC-SA](http://creativecommons.org/licenses/by-nc-sa/4.0/), which mainly means that you are free to share it, but not for commercial use, and under the same or a compatible license. Are your rights being violated by this site? That was not the intention; get in touch at the bottom of the page and we will sort it out.

Are you a lecturer who feels that the notes do not represent your lecture well enough? Are there errors or omissions? Get in touch at the bottom of this page and we will find a solution. Do you want a post taken down? Get in touch then as well, but first consider the following:

* Repetition is a very well-established pedagogical principle.
* <span style="line-height: 1.5em;">Many students depend on a job, which means they cannot always attend lectures.</span>
* <span style="line-height: 1.5em;">Many students fall ill during the academic year, and some are absent for longer periods because of illness.</span>
* <span style="line-height: 1.5em;">Today's students are heavy users of new technology and new forms of communication.</span>
* <span style="line-height: 1.5em;">The Act relating to universities and university colleges, section 4-3 no. 5, establishes that "the institutions shall, as far as possible and reasonable, adapt the study situation for students with special needs".</span>
* <span style="line-height: 1.5em;">Whether we like it or not, society and the way we interact have changed over recent decades. Today's students have other habits of socializing and studying; they live less structured, more flexible and impulsive lives than was common before. Teaching must adapt to this as well.</span>
* When society spends billions every year so that many thousands of lecturers can prepare and hold good lectures, it is a shame that many of these are held only once and for a very limited audience. One of Norway's most important resources is knowledge, and these lectures constitute an extra knowledge resource for Norway when they are made available to the public.

I encourage lecturers, [as does rector Ole Petter Ottersen](http://blogg.uio.no/unidir/ottersen/content/forelesninger-p%C3%A5-podcast-ett-%C3%A5r-etter), to podcast their lectures.
<div id='contact-form-191'>
</div>
<a href="http://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license"><img style="border-width: 0;" src="https://i0.wp.com/i.creativecommons.org/l/by-nc-sa/4.0/80x15.png?w=720" alt="Creative Commons License" data-recalc-dims="1" /></a>
This work is licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.

| 103.264706 | 620 | 0.77243 | nob_Latn | 0.991163 |
d3f68f953ffc2974a7725cc044a61e3b641291f5 | 206 | md | Markdown | _posts/2016-12-19-basic-git-topic.md | 3014zhangshuo/3014zhangshuo.github.io | ee7c5f1fbfb7ca3eff19bdb99dce6e08b22c0b67 | ["MIT"] | null | null | null | _posts/2016-12-19-basic-git-topic.md | 3014zhangshuo/3014zhangshuo.github.io | ee7c5f1fbfb7ca3eff19bdb99dce6e08b22c0b67 | ["MIT"] | null | null | null | _posts/2016-12-19-basic-git-topic.md | 3014zhangshuo/3014zhangshuo.github.io | ee7c5f1fbfb7ca3eff19bdb99dce6e08b22c0b67 | ["MIT"] | null | null | null |

---
layout: post
title: 'Git basics'
date: 2016-12-19 18:05
comments: true
categories:
---
`git init`
When you run `git init` in a new or existing directory, Git creates a `.git` directory, which holds almost everything Git stores and operates on. If you want to back up or copy a repository, copying this one directory somewhere else is essentially all you need.

| 22.888889 | 104 | 0.762136 | zho_Hans | 0.344778 |
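A quick way to see this for yourself (assuming `git` is installed; the directory name is arbitrary):

```shell
# Create an empty repository and inspect the .git directory it produces.
mkdir demo-repo && cd demo-repo
git init
ls -a .git   # HEAD, config, description, hooks, info, objects, refs, ...
```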
d3f78a11019aebb34e1f2ebc9f33b4dbf9000d5b | 1,853 | md | Markdown | README.md | DanielMSchmidt/java-method-parser | 757867011fdf8d4814f677351eb87401e3ebb076 | ["MIT"] | 1 | 2021-04-04T16:44:45.000Z | 2021-04-04T16:44:45.000Z | README.md | DanielMSchmidt/java-method-parser | 757867011fdf8d4814f677351eb87401e3ebb076 | ["MIT"] | 267 | 2017-10-12T08:28:22.000Z | 2021-06-05T12:05:13.000Z | README.md | DanielMSchmidt/java-method-parser | 757867011fdf8d4814f677351eb87401e3ebb076 | ["MIT"] | 4 | 2018-12-29T04:32:49.000Z | 2020-11-20T09:06:28.000Z |

# java-method-parser [![Build Status](https://travis-ci.org/DanielMSchmidt/java-method-parser.svg?branch=master)](https://travis-ci.org/DanielMSchmidt/java-method-parser) [![Coverage Status](https://coveralls.io/repos/github/DanielMSchmidt/java-method-parser/badge.svg?branch=master)](https://coveralls.io/github/DanielMSchmidt/java-method-parser?branch=master) [![BCH compliance](https://bettercodehub.com/edge/badge/DanielMSchmidt/java-method-parser?branch=master)](https://bettercodehub.com/)
> Parse a Java class and extract its methods into an equivalent JSON description
## Install
```
$ npm install java-method-parser
```
## Usage
```js
const fs = require("fs");
const javaMethodParser = require("java-method-parser");

// Read the source as a string and serialize the parsed result as JSON.
const content = fs.readFileSync("/path/to/java/Ponies.java", "utf8");
const output = javaMethodParser(content);

fs.writeFileSync("/path/to/project/ponies.json", JSON.stringify(output, null, 2));
```
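The returned `output` is a plain object describing the parsed class; its shape is assumed here from the JSON shown in the Example section below. A small sketch of consuming it, with sample data inlined:

```javascript
// Turns a parsed class description into human-readable Java-like signatures.
function formatSignatures(parsed) {
  return parsed.methods.map(
    (m) =>
      `${m.returnType} ${parsed.name}.${m.name}(` +
      m.args.map((a) => `${a.type} ${a.name}`).join(", ") +
      `)`
  );
}

// Sample data, matching the shape of the parser's JSON output.
const parsed = {
  name: "BasicName",
  methods: [
    {
      args: [
        { type: "ViewInteraction", name: "i" },
        { type: "Matcher<View>", name: "m" },
      ],
      name: "assertMatcher",
      returnType: "ViewInteraction",
    },
  ],
};

console.log(formatSignatures(parsed)); // prints the one formatted signature
```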
## Example
```java
package com.foo.bar.baz;
public class BasicName {
private BasicName() {}
/**
* asserting a matcher
*/
public static ViewInteraction assertMatcher(ViewInteraction i, Matcher<View> m) {}
// This is for asserting invisibility
public static ViewInteraction assertNotVisible(ViewInteraction i) {}
}
```
```json
{
"name": "BasicName",
"pcakage": "com.foo.bar",
"methods": [
{
"args": [
{
"type": "ViewInteraction",
"name": "i"
},
{
"type": "Matcher<View>",
"name": "m"
}
],
"comment": "asserting a matcher",
"name": "assertMatcher",
"returnType": "ViewInteraction"
},
{
"args": [
{
"type": "ViewInteraction",
"name": "i"
}
],
"comment": "This is for asserting invisibility",
"name": "assertNotVisible",
"returnType": "ViewInteraction"
}
]
}
```
## License
MIT © [Daniel Schmidt](http://danielmschmidt.de)
| 23.455696 | 495 | 0.664868 | yue_Hant | 0.29442 |
21b0b5b701a4dfb28691bb4dce5aa0a586e41a8c | 4,062 | md | Markdown | doc/NEWS.md | pgmasters/pgbackrest | 665da12ae7f519077512e60ce3a5ec6c6a9997fd | ["MIT"] | null | null | null | doc/NEWS.md | pgmasters/pgbackrest | 665da12ae7f519077512e60ce3a5ec6c6a9997fd | ["MIT"] | null | null | null | doc/NEWS.md | pgmasters/pgbackrest | 665da12ae7f519077512e60ce3a5ec6c6a9997fd | ["MIT"] | null | null | null |

**February 10, 2022**: [Crunchy Data](https://www.crunchydata.com) is pleased to announce the release of [pgBackRest](https://pgbackrest.org/) 2.37, the latest version of the reliable, easy-to-use backup and restore solution that can seamlessly scale up to the largest databases and workloads.
pgBackRest has recently introduced many exciting new features including a built-in TLS server, binary protocol, new authentication methods, backup history retention, restore enhancements, backup integrity enhancements, and increased option indexes.
IMPORTANT NOTE: pgBackRest 2.37 is the last version to support PostgreSQL 8.3/8.4.
pgBackRest supports a robust set of features for managing your backup and recovery infrastructure, including: parallel backup/restore, full/differential/incremental backups, multiple repositories, delta restore, parallel asynchronous archiving, per-file checksums, page checksums (when enabled) validated during backup, multiple compression types, encryption, partial/failed backup resume, backup from standby, tablespace and link support, S3/Azure/GCS support, backup expiration, local/remote operation via SSH or TLS, flexible configuration, and more.
You can install pgBackRest from the [PostgreSQL Yum Repository](https://yum.postgresql.org/) or the [PostgreSQL APT Repository](https://apt.postgresql.org). Source code can be downloaded from [releases](https://github.com/pgbackrest/pgbackrest/releases).
## Major New Features
### TLS Server
The TLS server provides an alternative to SSH for remote operations such as backup. Containers benefit because pgBackRest can be used as the entry point without any need for SSH. In addition, performance tests have shown TLS to be significantly faster than SSH. See [User Guide](https://pgbackrest.org/user-guide-rhel.html#repo-host/setup-tls).
### Binary Protocol
The binary protocol provides a faster and more memory efficient way for pgBackRest to communicate with local and remote processes while maintaining the ability to communicate between different architectures.
### New Authentication Methods
The GCS storage driver now supports automatic authentication on GCE instances and the S3 storage driver supports WebIdentity authentication. See [Config Reference](https://pgbackrest.org/configuration.html#section-repository/option-repo-gcs-key-type).
### Additional Backup Integrity Checks
A number of integrity checks were added to ensure the backup is valid or errors are detected as early as possible, including: loop while waiting for checkpoint LSN to reach replay LSN, check archive immediately after backup start, timeline and checkpoint checks before backup, check that clusters are alive and correctly configured during a backup, and warn when checkpoint_timeout exceeds db-timeout.
### Increase Maximum Index Allowed for PG/REPO Options to 256
Up to 256 PostgreSQL clusters and repositories may now be configured.
### Restore Enhancements
The restore command has a number of new features, including: db-exclude option (see [Config Reference](https://pgbackrest.org/configuration.html#section-restore/option-db-exclude)), link-map option can create new links (see [Config Reference](https://pgbackrest.org/configuration.html#section-restore/option-link-map)), automatically create data directory, restore --type=lsn (See [Command Reference](https://pgbackrest.org/command.html#command-restore/category-command/option-type)), and error when restore is unable to find a backup to match the time target.
### Backup History Retention
The backup manifest history can now be expired. See [Config Reference](https://pgbackrest.org/configuration.html#section-repository/option-repo-retention-history).
## Links
- [Website](https://pgbackrest.org)
- [User Guides](https://pgbackrest.org/user-guide-index.html)
- [Release Notes](https://pgbackrest.org/release.html)
- [Support](http://pgbackrest.org/#support)
[Crunchy Data](https://www.crunchydata.com) is proud to support the development and maintenance of [pgBackRest](https://github.com/pgbackrest/pgbackrest).
| 84.625 | 560 | 0.803545 | eng_Latn | 0.942546 |
21b127afbad2374d318875b33e8d2cdc9adaa48f | 2,543 | md | Markdown | README.md | irmccallum/poverty | a1e6878ac2ec90262e814f12a97df1932197647f | ["MIT"] | null | null | null | README.md | irmccallum/poverty | a1e6878ac2ec90262e814f12a97df1932197647f | ["MIT"] | null | null | null | README.md | irmccallum/poverty | a1e6878ac2ec90262e814f12a97df1932197647f | ["MIT"] | null | null | null |

# Poverty Mapping
## Download Data
1. WSF: data found in this publication - https://www.nature.com/articles/s41597-020-00580-5
2. VIIRS: https://eogdata.mines.edu/download_dnb_composites.html
3. Natural Earth data: Countries - https://www.naturalearthdata.com/downloads/50m-cultural-vectors/ (Admin 0 - Countries), Graticules - https://www.naturalearthdata.com/downloads/110m-physical-vectors/ and Populated Places (simple) https://www.naturalearthdata.com/downloads/10m-cultural-vectors/
4. Indicators: https://data.worldbank.org/indicator
5. DHS: https://dhsprogram.com/data/ and process/harmonize: https://github.com/mcooper/DHSwealth
## Run the following scripts in order:
Place the following scripts into a single folder, and create subdirectories for the respective datasets downloaded above. Then run the following scripts in the order they appear below:
`viirs_import.R` – this produces VIIRS vrt and tif by compressing file size, setting lit pixels to nodata, and unlit pixels to value of 1
`wsf_import.R` - produces a reprojected tif, set to same resolution as VIIRS, using the WSF 500 m percentage layer
`global_area.R` - produces a global area raster, set to same resolution as VIIRS, in km2 units
`cntry_rasterize.R` – rasterize sovereign state polygons needed for zonal statistics
`extract_zonal.R` - creates country level darkness statistics – exports csv file
`continent_stats.R` – produces table of continent stats on unlit footprints
`figure1_globe.R` - plots global %unlit per country on a global map and bar graphs (urban and rural)
`global_tifs.R` - produce global total WSF and unlit WSF tifs by area for figure2
`import_dhs.R` – import DHS data coming from harmonization: https://github.com/mcooper/DHSwealth
`explore_dhs.R` – reformat and visualize data
`figure2_dhs.R` – country level DHS boxplots, also exports dataset for validation statistics - change continent between Africa, Asia, South and North America to produce respective graphs and tables which are used as input to 'figure2_val.Rmd'
`figure2_val.Rmd` - country level validation of DHS and unlit settlements
`figure3_maps.R` – plot maps of wealth classes for 4 select countries and export all maps as tiff files
`figure4a/b/c.R` – plots a,b,c, wealth index, SHDI and income
## SI Scripts
`figureXXX_stats_SI.R` – creates scatterplots of unlit vs World Bank and FAO indicators, exports merged datasets for validation
`figureXXX_stats_confidence_SI.R` - computes an estimate, test statitic, significance test, and confidence interval
| 49.862745 | 295 | 0.787259 | eng_Latn | 0.909232 |
21b1c23bc8b665cc0026b0ea1f9f302573eb250f | 406 | md | Markdown | MD/YMT2015.md | vitroid/vitroid.github.io | cb4f06a4a4925a0e06a4001d3680be7998552b83 | ["MIT"] | null | null | null | MD/YMT2015.md | vitroid/vitroid.github.io | cb4f06a4a4925a0e06a4001d3680be7998552b83 | ["MIT"] | 1 | 2020-02-12T02:46:21.000Z | 2020-02-12T02:46:21.000Z | MD/YMT2015.md | vitroid/vitroid.github.io | cb4f06a4a4925a0e06a4001d3680be7998552b83 | ["MIT"] | null | null | null |

# YMT2015
Yagasaki, T., Matsumoto, M. & Tanaka, H.
Effects of thermodynamic inhibitors on the dissociation of methane hydrate: a molecular dynamics study.
Phys. Chem. Chem. Phys. 17, 32347–32357 (2015).
http://doi.org/10.1039/c5cp03008k
* Elucidated how thermodynamic inhibitors act on the dissociation of methane hydrate.

#water #research #clathratehydrate #papers #paper2015
| 16.916667 | 103 | 0.753695 | yue_Hant | 0.39154 |
21b1e009fde4c214e8cb1ec082c637011df6c048 | 60 | md | Markdown | README.md | anton-rudov/trueconf_exporter | f74ffeaa1f2d82275416940ce1d8b2e3275a9194 | ["Apache-2.0"] | null | null | null | README.md | anton-rudov/trueconf_exporter | f74ffeaa1f2d82275416940ce1d8b2e3275a9194 | ["Apache-2.0"] | null | null | null | README.md | anton-rudov/trueconf_exporter | f74ffeaa1f2d82275416940ce1d8b2e3275a9194 | ["Apache-2.0"] | null | null | null |

# trueconf_exporter
Simple Prometheus exporter for TrueConf
| 20 | 39 | 0.866667 | eng_Latn | 0.273294 |
21b1fa620dcbcd974b6fbb292a6a87ebf45948f6 | 918 | md | Markdown | kubernetes/docs/V1beta2PriorityLevelConfigurationCondition.md | Ankit01Mishra/java | 7ba55207a60e42128dd898e8c7cbec68617a8b2e | ["Apache-2.0"] | 2,389 | 2017-05-13T16:11:07.000Z | 2022-03-31T07:06:17.000Z | kubernetes/docs/V1beta2PriorityLevelConfigurationCondition.md | Ankit01Mishra/java | 7ba55207a60e42128dd898e8c7cbec68617a8b2e | ["Apache-2.0"] | 2,128 | 2017-05-10T17:53:06.000Z | 2022-03-30T22:21:15.000Z | kubernetes/docs/V1beta2PriorityLevelConfigurationCondition.md | Ankit01Mishra/java | 7ba55207a60e42128dd898e8c7cbec68617a8b2e | ["Apache-2.0"] | 1,112 | 2017-05-10T03:45:57.000Z | 2022-03-31T10:02:03.000Z |
# V1beta2PriorityLevelConfigurationCondition
PriorityLevelConfigurationCondition defines the condition of priority level.
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**lastTransitionTime** | [**OffsetDateTime**](OffsetDateTime.md) | `lastTransitionTime` is the last time the condition transitioned from one status to another. | [optional]
**message** | **String** | `message` is a human-readable message indicating details about last transition. | [optional]
**reason** | **String** | `reason` is a unique, one-word, CamelCase reason for the condition's last transition. | [optional]
**status** | **String** | `status` is the status of the condition. Can be True, False, Unknown. Required. | [optional]
**type** | **String** | `type` is the type of the condition. Required. | [optional]
| 51 | 183 | 0.672113 | eng_Latn | 0.744453 |
21b26682068055342726e185e3185f9a2731b8ef | 355 | md | Markdown | README.md | marviorocha/StationPRO | a20c24cdbcdb67cc0da56bdc054f832e6d129f1e | ["MIT"] | 3 | 2016-03-18T12:39:38.000Z | 2019-08-13T23:20:00.000Z | README.md | marviorocha/StationPRO | a20c24cdbcdb67cc0da56bdc054f832e6d129f1e | ["MIT"] | 8 | 2020-09-25T12:43:58.000Z | 2021-08-28T14:42:24.000Z | README.md | marviorocha/StationPRO | a20c24cdbcdb67cc0da56bdc054f832e6d129f1e | ["MIT"] | null | null | null |

# Station Pro - Plugin Development
This is a LAB Branch of the plugin Station Pro. All files have included this branch is for development with docker and wordpress.
for other code use Us can use a other Branch.
## Station Pro
## Licence
The theme is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
| 29.583333 | 129 | 0.766197 | eng_Latn | 0.998483 |
21b30c6e922cc08a22bba32af51638e673c961e6 | 546 | markdown | Markdown | _posts/2018-04-17-read.markdown | hoangvangiang/hoangvangiang.github.io | 73cc018b23ae8fed42995165dbc5614fc9aaf4b2 | ["Apache-2.0"] | 1 | 2021-08-06T16:26:04.000Z | 2021-08-06T16:26:04.000Z | _posts/2018-04-17-read.markdown | hoangvangiang/hoangvangiang.github.io | 73cc018b23ae8fed42995165dbc5614fc9aaf4b2 | ["Apache-2.0"] | null | null | null | _posts/2018-04-17-read.markdown | hoangvangiang/hoangvangiang.github.io | 73cc018b23ae8fed42995165dbc5614fc9aaf4b2 | ["Apache-2.0"] | null | null | null |

---
title: Read
homepage: https://github.com/Fastbyte01/Read
download: https://github.com/Fastbyte01/Read/archive/master.zip
demo: https://readjekyll.fastbyte01.it/
author: Fasbyte01
thumbnail: read.jpg
license: CCA 3.0 License
license_link: https://github.com/Fastbyte01/Read/blob/master/LICENSE.md
github_repo: Fastbyte01/Read
stars: 2
---
Read is a simple one page Jekyll theme developed by Fastbyte01 and based on Read Only template by Html5up.All the options are changable directly from the config file, so you don't need to touch the code.
| 39 | 203 | 0.791209 | eng_Latn | 0.710287 |
21b34b457f24fb9452a3a2eb5537a4e176163e24 | 2,346 | md | Markdown | docs/analytics/CAR-2020-11-007/index.md | petekalo/car | f1135e65e0001c304ca7cb34b78e4383fd98c57f | ["Apache-2.0"] | 1 | 2021-12-07T13:29:33.000Z | 2021-12-07T13:29:33.000Z | docs/analytics/CAR-2020-11-007/index.md | JMoretS21Sec/car | c1509d119233c9788d626a8be0bcbeddfd973feb | ["Apache-2.0"] | null | null | null | docs/analytics/CAR-2020-11-007/index.md | JMoretS21Sec/car | c1509d119233c9788d626a8be0bcbeddfd973feb | ["Apache-2.0"] | null | null | null |

---
title: "CAR-2020-11-007: Network Share Connection Removal"
layout: analytic
submission_date: 2020/11/30
information_domain: Host
subtypes: Process
analytic_type: TTP
contributors: Olaf Hartong
applicable_platforms: Windows
---
Adversaries may use network shares to exfiltrate data; they will then remove the shares to cover their tracks. This analytic looks for the removal of network shares via the command line, which is otherwise a rare event.
### ATT&CK Detection
|Technique|Subtechnique(s)|Tactic(s)|Level of Coverage|
|---|---|---|---|
|[Indicator Removal on Host](https://attack.mitre.org/techniques/T1070/)|[Network Share Connection Removal](https://attack.mitre.org/techniques/T1070/005/)|[Defense Evasion](https://attack.mitre.org/tactics/TA0005/)|High|
### Data Model References
|Object|Action|Field|
|---|---|---|
|[process](/data_model/process) | [create](/data_model/process#create) | [exe](/data_model/process#exe) |
|[process](/data_model/process) | [create](/data_model/process#create) | [command_line](/data_model/process#command_line) |
### Applicable Sensors
- [osquery_4.1.2](/sensors/osquery_4.1.2)
- [osquery_4.6.0](/sensors/osquery_4.6.0)
- [Sysmon_10.4](/sensors/sysmon_10.4)
- [Sysmon_11.0](/sensors/sysmon_11.0)
- [Sysmon_13](/sensors/sysmon_13)
### Implementations
#### Pseudocode - network shares being removed via the command line (Pseudocode, CAR native)
This is a pseudocode representation of the below splunk search.
```
processes = search Process:Create
target_processes = filter processes where (
(exe="C:\\Windows\\System32\\net.exe" AND command_line="*delete*") OR
command_line="*Remove-SmbShare*" OR
command_line="*Remove-FileShare*" )
output target_processes
```
#### Splunk Search - delete network shares (Splunk, Sysmon native)
looks network shares being deleted from the command line
```
(index=__your_sysmon_index__ EventCode=1) ((Image="C:\\Windows\\System32\\net.exe" AND CommandLine="*delete*") OR CommandLine="*Remove-SmbShare*" OR CommandLine="*Remove-FileShare*")
```
#### LogPoint Search - delete network shares (Logpoint, LogPoint native)
Looks for network shares being deleted from the command line.
```
norm_id=WindowsSysmon event_id=1 ((image="C:\Windows\System32\net.exe" command="*delete*") OR command="*Remove-SmbShare*" OR command="*Remove-FileShare*")
```
| 29.696203 | 221 | 0.73913 | eng_Latn | 0.575034 |
21b360de36abceeb75082a113e570b6bcc2f3f4d | 7,771 | md | Markdown | 4.0/README.md | I70l0teN4ik/nominatim-docker | e1fa2cbcaf0ce1cbb2e6da50dc8824d53a78690f | [
"CC0-1.0"
] | null | null | null | 4.0/README.md | I70l0teN4ik/nominatim-docker | e1fa2cbcaf0ce1cbb2e6da50dc8824d53a78690f | [
"CC0-1.0"
] | null | null | null | 4.0/README.md | I70l0teN4ik/nominatim-docker | e1fa2cbcaf0ce1cbb2e6da50dc8824d53a78690f | [
"CC0-1.0"
] | null | null | null | # Nominatim Docker (Nominatim version 4.0)
## Automatic import
Download the required data, initialize the database and start nominatim in one go
```
docker run -it --rm \
-e PBF_URL=https://download.geofabrik.de/europe/monaco-latest.osm.pbf \
-e REPLICATION_URL=https://download.geofabrik.de/europe/monaco-updates/ \
-p 8080:8080 \
--name nominatim \
mediagis/nominatim:4.0
```
Port 8080 is the nominatim HTTP API port and 5432 is the Postgres port, which you may or may not want to expose.
If you want to check that your data import was successful, you can use the API with the following URL: http://localhost:8080/search.php?q=avenue%20pasteur
## Configuration
### General Parameters
The following environment variables are available for configuration:
- `PBF_URL`: Which [OSM extract](#openstreetmap-data-extracts) to download and import. It cannot be used together with `PBF_PATH`. Check [https://download.geofabrik.de](https://download.geofabrik.de)
- `PBF_PATH`: Which [OSM extract](#openstreetmap-data-extracts) to import from the .pbf file inside the container. It cannot be used together with `PBF_URL`.
- `REPLICATION_URL`: Where to get updates from. Also available from Geofabrik.
- `REPLICATION_UPDATE_INTERVAL`: How often upstream publishes diffs (in seconds, default: `86400`). _Requires `REPLICATION_URL` to be set._
- `REPLICATION_RECHECK_INTERVAL`: How long to sleep if no update found yet (in seconds, default: `900`). _Requires `REPLICATION_URL` to be set._
- `IMPORT_WIKIPEDIA`: Whether to import the Wikipedia importance dumps, which improve the scoring of results. On a beefy 10 core server, this takes around 5 minutes. (default: `false`)
- `IMPORT_US_POSTCODES`: Whether to import the US postcode dump. (default: `false`)
- `IMPORT_GB_POSTCODES`: Whether to import the GB postcode dump. (default: `false`)
- `THREADS`: How many threads should be used to import (default: `16`)
- `NOMINATIM_PASSWORD`: The password to connect to the database with (default: `qaIACxO6wMR3`)
The following run parameters are available for configuration:
- `shm-size`: Size of the tmpfs in Docker, for bigger imports (e.g. Europe) this needs to be set to at least 1GB or more. Half the size of your available RAM is recommended. (default: `64M`)
### PostgreSQL Tuning
The following environment variables are available to tune PostgreSQL:
- `POSTGRES_SHARED_BUFFERS` (default: `2GB`)
- `POSTGRES_MAINTENANCE_WORK_MEM` (default: `10GB`)
- `POSTGRES_AUTOVACUUM_WORK_MEM` (default: `2GB`)
- `POSTGRES_WORK_MEM` (default: `50MB`)
- `POSTGRES_EFFECTIVE_CACHE_SIZE` (default: `24GB`)
- `POSTGRES_SYNCHRONOUS_COMMIT` (default: `off`)
- `POSTGRES_MAX_WAL_SIZE` (default: `1GB`)
- `POSTGRES_CHECKPOINT_TIMEOUT` (default: `10min`)
- `POSTGRES_CHECKPOINT_COMPLETITION_TARGET` (default: `0.9`)
See https://nominatim.org/release-docs/4.0.1/admin/Installation/#tuning-the-postgresql-database for more details on those settings.
### Import Style
The import style can be modified through an environment variable :
- `IMPORT_STYLE` (default: `full`)
Available options are :
- `admin`: Only import administrative boundaries and places.
- `street`: Like the admin style but also adds streets.
- `address`: Import all data necessary to compute addresses down to house number level.
- `full`: Default style that also includes points of interest.
- `extratags`: Like the full style but also adds most of the OSM tags into the extratags column.
See https://nominatim.org/release-docs/4.0.1/admin/Import/#filtering-imported-data for more details on those styles.
### Flatnode files
In addition you can also mount a volume / bind-mount on `/nominatim/flatnode` (see: Persistent container data) to use flatnode storage. This is advised for bigger imports (Europe, North America etc.), see: https://nominatim.org/release-docs/4.0.1/admin/Import/#flatnode-files. If the mount is available for the container, the flatnode configuration is automatically set and used.
## Persistent container data
If you want to keep your imported data across deletion and recreation of your container, make the following folder a volume:
- `/var/lib/postgresql/12/main` is the storage location of the Postgres database & holds the state about whether the import was successful
- `/nominatim/flatnode` is the storage location of the flatnode file.
So if you want to be able to kill your container and start it up again with all the data still present use the following command:
```
docker run -it --rm --shm-size=1g \
-e PBF_URL=https://download.geofabrik.de/europe/monaco-latest.osm.pbf \
-e REPLICATION_URL=https://download.geofabrik.de/europe/monaco-updates/ \
-e IMPORT_WIKIPEDIA=false \
-e NOMINATIM_PASSWORD=very_secure_password \
-v nominatim-data:/var/lib/postgresql/12/main \
-p 8080:8080 \
--name nominatim \
mediagis/nominatim:4.0
```
## OpenStreetMap Data Extracts
Nominatim imports OpenStreetMap (OSM) data extracts. The source of the data can be specified with one of the following environment variables:
- `PBF_URL` variable specifies the URL. The data is downloaded during initialization, imported and removed from disk afterwards. The data extracts can be freely downloaded, e.g., from [Geofabrik's server](https://download.geofabrik.de).
- `PBF_PATH` variable specifies the path to the mounted OSM extracts data inside the container. No .pbf file is removed after initialization.
It is not possible to define both `PBF_URL` and `PBF_PATH` sources.
The replication update can be performed only via HTTP.
A sample of `PBF_PATH` variable usage is:
``` sh
docker run -it --rm \
-e PBF_PATH=/nominatim/data/monaco-latest.osm.pbf \
-e REPLICATION_URL=https://download.geofabrik.de/europe/monaco-updates/ \
-p 8080:8080 \
-v /osm-maps/data:/nominatim/data \
--name nominatim \
mediagis/nominatim:4.0
```
where the _/osm-maps/data/_ directory contains _monaco-latest.osm.pbf_ file that is mounted and available in container: _/nominatim/data/monaco-latest.osm.pbf_
## Updating the database
Full documentation for Nominatim update available [here](https://nominatim.org/release-docs/4.0.1/admin/Update/). For a list of other methods see the output of:
```
docker exec -it nominatim sudo -u nominatim nominatim replication --help
```
The following command will keep updating the database forever:
```
docker exec -it nominatim sudo -u nominatim nominatim replication --project-dir /nominatim
```
If there are no updates available this process will sleep for 15 minutes and try again.
## Custom PBF Files
If you want your Nominatim container to host multiple areas from Geofabrik, you can use a tool, such as [Osmium](https://osmcode.org/osmium-tool/manual.html), to merge multiple PBF files into one.
``` sh
docker run -it --rm \
-e PBF_PATH=/nominatim/data/merged.osm.pbf \
-p 8080:8080 \
-v /osm-maps/data:/nominatim/data \
--name nominatim \
mediagis/nominatim:4.0
```
where the _/osm-maps/data/_ directory contains _merged.osm.pbf_ file that is mounted and available in container: _/nominatim/data/merged.osm.pbf_
## Development
If you want to work on the Docker image you can use the following command to build a local
image and run the container with
```
docker build -t nominatim . && \
docker run -it --rm \
-e PBF_URL=https://download.geofabrik.de/europe/monaco-latest.osm.pbf \
-e REPLICATION_URL=https://download.geofabrik.de/europe/monaco-updates/ \
-p 8080:8080 \
--name nominatim \
nominatim
```
## Docker Compose
In addition, we also provide a basic `contrib/docker-compose.yml` template which you use as a starting point and adapt to your needs. Use this template to set the environment variables, mounts, etc. as needed.
| 45.711765 | 379 | 0.754343 | eng_Latn | 0.947938 |
21b3780fc2cace2f17d1df993b0fb9833d58ca41 | 1,025 | md | Markdown | docs/sample/dubbo-body(http).md | xiejiajun/dubbo-go-pixiu | 60ed707059e32f1c16d49caba62512a08d32d4e9 | [
"ECL-2.0",
"Apache-2.0"
] | 20 | 2021-02-28T13:32:40.000Z | 2022-02-20T13:55:15.000Z | docs/sample/dubbo-body(http).md | xiejiajun/dubbo-go-pixiu | 60ed707059e32f1c16d49caba62512a08d32d4e9 | [
"ECL-2.0",
"Apache-2.0"
] | 9 | 2021-03-03T05:06:56.000Z | 2021-03-18T15:11:29.000Z | docs/sample/dubbo-body(http).md | xiejiajun/dubbo-go-pixiu | 60ed707059e32f1c16d49caba62512a08d32d4e9 | [
"ECL-2.0",
"Apache-2.0"
] | 10 | 2021-03-02T03:00:18.000Z | 2021-12-13T11:03:19.000Z | # Get the parameter from the body
> POST request
## Passthroughs
### Api Config
```yaml
name: pixiu
description: pixiu sample
resources:
- path: '/api/v1/test-dubbo/user'
type: restful
description: user
methods:
- httpVerb: POST
onAir: true
timeout: 1000ms
inboundRequest:
requestType: http
queryStrings:
- name: name
required: true
integrationRequest:
requestType: http
host: 127.0.0.1:8889
path: /UserProvider/CreateUser
group: "test"
version: 1.0.0
```
### Request
```bash
curl host:port/api/v1/test-dubbo/user -X POST -d '{"name": "tiecheng","id": "0002","code": 3,"age": 18}' --header "Content-Type: application/json"
```
### Response
- If the user is added for the first time, the response looks like:
```json
{
"id": "0002",
"code": 3,
"name": "tiecheng",
"age": 18,
"time": "0001-01-01T00:00:00Z"
}
```
- If you add the same user multiple times, the response looks like:
```json
{
"message": "data is exist"
}
``` | 17.372881 | 146 | 0.565854 | kor_Hang | 0.341161 |
21b3b8e58effbbd0884f7adcde1080546b19c531 | 684 | md | Markdown | README.md | teatwig/gruvbox-firefox-themes | be72914158d1843f5ad3aaefd66dc0bd41adc108 | [
"Unlicense"
] | null | null | null | README.md | teatwig/gruvbox-firefox-themes | be72914158d1843f5ad3aaefd66dc0bd41adc108 | [
"Unlicense"
] | null | null | null | README.md | teatwig/gruvbox-firefox-themes | be72914158d1843f5ad3aaefd66dc0bd41adc108 | [
"Unlicense"
] | null | null | null | # Gruvbox Firefox Themes
## Get the Themes
* [Gruvbox Dark Hard](https://addons.mozilla.org/firefox/addon/gruvbox-d-h/)
* [Gruvbox Dark Medium](https://addons.mozilla.org/firefox/addon/gruvbox-d-m/)
* [Gruvbox Dark Soft](https://addons.mozilla.org/firefox/addon/gruvbox-d-s/)
* [Gruvbox Light Hard](https://addons.mozilla.org/firefox/addon/gruvbox-light-hard/)
* [Gruvbox Light Medium](https://addons.mozilla.org/firefox/addon/gruvbox-light-medium/)
* [Gruvbox Light Soft](https://addons.mozilla.org/firefox/addon/gruvbox-light-soft/)
## Build them yourself
```sh
# generate dist dirs with manifests for each theme
npm run build
# pack the theme as a ZIP file
npm run package
```
| 34.2 | 88 | 0.745614 | kor_Hang | 0.243281 |
21b49bb2fb28a76cc236723ff1d4061a94eb2530 | 643 | md | Markdown | README.md | eloraafrin/Bangla-Character-Vowel--Classificaion-Using-Matlab | 919721ce7807c22dd8936609d5587bea0897c4a0 | [
"MIT"
] | 1 | 2018-06-07T14:31:18.000Z | 2018-06-07T14:31:18.000Z | README.md | eloraafrin/Bangla-Character-Vowel--Classificaion-Using-Matlab | 919721ce7807c22dd8936609d5587bea0897c4a0 | [
"MIT"
] | null | null | null | README.md | eloraafrin/Bangla-Character-Vowel--Classificaion-Using-Matlab | 919721ce7807c22dd8936609d5587bea0897c4a0 | [
"MIT"
] | 2 | 2019-07-28T15:35:36.000Z | 2020-09-12T08:17:54.000Z | # MNIST Digit Recognition using MATLAB
This is the neural network implementation of handwritten digit recognition based on Michael Nielsen's book: [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/index.html?) Chapter 1. This matlab code is a modified version of his python code which can be found [here](https://github.com/mnielsen/neural-networks-and-deep-learning).
This program was run in MATLAB 2016b. The main file is digitRecognition.m. In this file, you can configure the number of hidden layers, number of nodes per layers, number of training epochs, learning rate, and the size of the mini batches.
| 80.375 | 357 | 0.796267 | eng_Latn | 0.989997 |
21b54949fe9ed43946cbcbfd540d608fa64f42e0 | 878 | md | Markdown | Views.md | brody-rebne/401-Readings | 594ce91b5d25e6adbcf3e3bfbe74761bd63c81dd | [
"MIT"
] | null | null | null | Views.md | brody-rebne/401-Readings | 594ce91b5d25e6adbcf3e3bfbe74761bd63c81dd | [
"MIT"
] | null | null | null | Views.md | brody-rebne/401-Readings | 594ce91b5d25e6adbcf3e3bfbe74761bd63c81dd | [
"MIT"
] | null | null | null | ## Views
### Layouts
- Layout files define pieces of front-end content to be easily re-used throughout an app
- Built in .cshtml files, often as `Pages/Shared/_Layout.cshtml` (Razor Pages) or `Views/Shared/_Layout.cshtml` (views for controller)
- The layout defines a base template for views
- Can be more than one layout file, for apps with multiple base layouts
- Layout files can bring in other .cshtml files as partials
- Sections allow small portions of code to be organized more easily
- Sections are defined via `@section SectionName { <div>You can put any amount of html here</div> }`
- Sections are rendered via `@RenderSection("SectionName", required: false)`
- Exception is thrown if required section is not found
- Namespaces can be imported to any view via a `_ViewImports.cshtml`
- They are imported via directives including `@using`, `@inherits`, and `@model`
| 54.875 | 134 | 0.763098 | eng_Latn | 0.996543 |
21b58bd4862e199d772e1d0632f8d84024f6c19d | 2,354 | md | Markdown | content/recipes/Chicken_Enchilada_Casserole.md | rmrfslashbin/notes.improvisedscience.org | 97b630092633162b66e093d8dcb65dbe796e6a83 | [
"MIT"
] | null | null | null | content/recipes/Chicken_Enchilada_Casserole.md | rmrfslashbin/notes.improvisedscience.org | 97b630092633162b66e093d8dcb65dbe796e6a83 | [
"MIT"
] | null | null | null | content/recipes/Chicken_Enchilada_Casserole.md | rmrfslashbin/notes.improvisedscience.org | 97b630092633162b66e093d8dcb65dbe796e6a83 | [
"MIT"
] | null | null | null | ---
title: Chicken Enchilada Casserole
date: 2014-11-30T23:17:26-05:00
tags:
- chicken
- enchelada
- poblano
- peppers
- tortillas
- cheese
categories:
- casserole
- main
methods:
- baked
---
## Source
http://www.wholefoodsmarket.com/recipe/chicken-enchilada-casserole
## Equipment
- Parchment paper
- Large baking sheet
- Food processor
- 9 x 13 baking dish
## Ingredients
- 3 split skin-on, bone-in half chicken breasts (about 2 ¼ pounds
total)
- 5 poblano peppers, halved, stemmed and seeded
- 2 red bell peppers, seeded, stemmed and cut into quarters
- 1 cup cashews
- 1 tablespoon cider vinegar
- Pinch fine sea salt
- Pinch cayenne pepper
- 12 corn tortillas, halved
- 1 ½ cup salsa, preferably hot or medium-hot
- ½ cup low-sodium chicken or vegetable broth
- 1 cup shredded sharp cheddar cheese or nondairy cheddar-style cheese
## Preparation
### Roast Peppers and Chicken
- Preheat the oven to 400°F and line a baking sheet with parchment
paper.
- Place chicken on one side of the baking sheet and poblano and bell
peppers on the other side.
- Roast, stirring peppers once or twice until peppers are browned and
chicken is just cooked through, 40 to 45 minutes.
- Cool slightly, then remove and discard skin and bone from chicken
and shred meat.
- Dice peppers.
### Cashew Cream
- Meanwhile, place cashews in a small bowl and cover by about 1 inch
with boiling water.
- Let soak 30 minutes.
- Drain and discard soaking liquid.
- In a food processor, combine cashews, ½ cup water, vinegar, salt and
cayenne, and blend until smooth; add more water a tablespoon at a
time, if necessary, to purée.
### Salsa and Broth
- In a separate bowl, combine the salsa and broth.
### Construct Casserole
- Reduce the oven temperature to 375°F.
- Layer 1/3 of the tortillas in the bottom of a 9 x 13-inch casserole
dish.
- Spoon ½ cup of the salsa/both mixture over the tortillas.
- Top with half the chicken and half the roasted peppers, and drizzle
with half of the cashew cream.
- Repeat layering again, finishing with a layer of tortillas and salsa
mixture.
- Sprinkle with cheese and bake until bubbling and golden brown, about
35 minutes.
### Cool
Let cool 10 minutes before cutting and serving.
| 26.75 | 72 | 0.709006 | eng_Latn | 0.992932 |
21b59759767236adc9348eba2e8db8d4120fead3 | 2,100 | md | Markdown | README.md | grant-traynor/python_ipc_tests | 97c0e7675f2b56e018e6380f4662c239e0391cf5 | [
"Apache-2.0"
] | 1 | 2021-02-08T06:59:32.000Z | 2021-02-08T06:59:32.000Z | README.md | grant-traynor/python_ipc_tests | 97c0e7675f2b56e018e6380f4662c239e0391cf5 | [
"Apache-2.0"
] | null | null | null | README.md | grant-traynor/python_ipc_tests | 97c0e7675f2b56e018e6380f4662c239e0391cf5 | [
"Apache-2.0"
] | null | null | null | # python_ipc_tests
I/O Throughput / Exchange performance tests for various IPCs and Threaded / Multiprocessing methods in python
# Simple Result Overview
```
50,000 Messages Time(s) /msg(us) msg/sec
pipetest01 - Protocol.method 5.9 120 8400
pipetest02 - Protocol.Class - localhost 3.8 77 12950
pipetest03 - Protocol.Class - UDS 3.1 65 15000
pipetest04 - Protocol.Class - UDS + uvloop 0.4 9 110000
pipetest05 - Protocol.Class - lclhst + uvloop 0.4 9 110000
pipetest06 - coroutines - direct 0.1 3 350000
pipetest07 - coroutines - direct + uvloop 0.1 3 350000
pipetest08 - function call - no asyncio 0.007 0.1 6723141
threads - ping - deque 2.9 4.2 1717032
threads - ping - Queue 2.0 4.1 23600
```
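A minimal, runnable sketch of a threaded ping test like the `threads - ping - Queue` row above (the message count and timing harness here are illustrative, not the repo's actual benchmark code):

```python
import threading
import time
from queue import Queue

def ping_pong(n: int = 10_000) -> tuple[int, float]:
    """Round-trip n messages between two threads via a pair of Queues."""
    ping: Queue = Queue()
    pong: Queue = Queue()

    def responder() -> None:
        for _ in range(n):
            pong.put(ping.get())  # echo each message back

    t = threading.Thread(target=responder)
    t.start()
    start = time.perf_counter()
    for i in range(n):
        ping.put(i)
        assert pong.get() == i  # wait for the echo before sending the next
    elapsed = time.perf_counter() - start
    t.join()
    return n, elapsed

count, secs = ping_pong()
print(f"{count} round trips in {secs:.3f}s ({count / secs:,.0f} msg/sec)")
```

Swapping `queue.Queue` for `collections.deque` (with a spin or condition-variable wait) removes the locking overhead, which is the difference the deque and Queue rows above are measuring.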
# Machine Spec
8 Cores as follows
```
cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz
stepping : 7
microcode : 0x710
cpu MHz : 1334.006
cache size : 10240 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts
bugs : cpu_meltdown spectre_v1 spectre_v2
bogomips : 7182.54
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
```
| 42 | 475 | 0.628571 | eng_Latn | 0.284955 |
21b5cf7dcb39920db9c41923b7289104dcec956e | 12,601 | md | Markdown | socrata/jqyq-axtx.md | axibase/open-data-catalog | 18210b49b6e2c7ef05d316b6699d2f0778fa565f | [
"Apache-2.0"
] | 7 | 2017-05-02T16:08:17.000Z | 2021-05-27T09:59:46.000Z | socrata/jqyq-axtx.md | axibase/open-data-catalog | 18210b49b6e2c7ef05d316b6699d2f0778fa565f | [
"Apache-2.0"
] | 5 | 2017-11-27T15:40:39.000Z | 2017-12-05T14:34:14.000Z | socrata/jqyq-axtx.md | axibase/open-data-catalog | 18210b49b6e2c7ef05d316b6699d2f0778fa565f | [
"Apache-2.0"
] | 3 | 2017-03-03T14:48:48.000Z | 2019-05-23T12:57:42.000Z | # ResultsNOLA (historic)
## Dataset
| Name | Value |
| :--- | :---- |
| Catalog | [Link](https://catalog.data.gov/dataset/resultsnola) |
| Metadata | [Link](https://data.nola.gov/api/views/jqyq-axtx) |
| Data: JSON | [100 Rows](https://data.nola.gov/api/views/jqyq-axtx/rows.json?max_rows=100) |
| Data: CSV | [100 Rows](https://data.nola.gov/api/views/jqyq-axtx/rows.csv?max_rows=100) |
| Host | data.nola.gov |
| Id | jqyq-axtx |
| Name | ResultsNOLA (historic) |
| Attribution | City of New Orleans |
| Category | City Administration |
| Tags | resultsnola |
| Created | 2015-11-10T14:28:12Z |
| Publication Date | 2016-09-13T14:20:34Z |
## Description
This data includes all of the key performance indicators for ResultsNOLA, the City's performance management report. It is connected to the graphs showing data for past years on the goal pages on results.nola.gov.
## Columns
```ls
| Included | Schema Type | Field Name | Name | Data Type | Render Type |
| ======== | ============== | ================== | ================== | ============= | ============= |
| Yes | series tag | indicatorid | IndicatorID | text | number |
| Yes | series tag | strategyid | StrategyID | text | number |
| Yes | series tag | organization | Organization | text | text |
| Yes | series tag | name | Name | text | text |
| Yes | series tag | type | Type | text | text |
| Yes | series tag | target | Target | text | text |
| Yes | series tag | seasonality | Seasonality | text | text |
| Yes | series tag | direction | Direction | text | text |
| Yes | numeric metric | value | value | number | number |
| Yes | series tag | rowid | RowID | text | text |
| Yes | time | date | Date | calendar_date | calendar_date |
| No | | year | Year | number | number |
| No | | quarter | Quarter | number | number |
| No | | datelabel | DateLabel | text | text |
| Yes | numeric metric | total | Total | number | number |
| Yes | numeric metric | percent | Percent | number | number |
| Yes | numeric metric | ytd | YTD | number | number |
| No | | quarter_label | Quarter_Label | text | text |
| Yes | series tag | action_aggregation | Action_Aggregation | text | text |
| Yes | series tag | note | Note | text | text |
```
## Time Field
```ls
Value = date
Format & Zone = yyyy-MM-dd'T'HH:mm:ss
```
## Series Fields
```ls
Excluded Fields = quarter,datelabel,quarter_label,year
```
## Data Commands
```ls
series e:jqyq-axtx d:2016-03-31T00:00:00.000Z t:note="6,973 requisitions approved in 2016." t:indicatorid=101 t:action_aggregation="Maintain Below" t:organization=Budget t:name="Avg. days to approve requisitions for the purchase of goods or services" t:strategyid=20103 t:type=Average t:seasonality="Not Seasonal" t:target=1 t:rowid="Avg. days to approve requisitions for the purchase of goods or services_2016-03-31" t:direction=Under m:total=0.63 m:ytd=0.634975982 m:value=0.63
series e:jqyq-axtx d:2016-03-31T00:00:00.000Z t:note="111 of 140 project milestones delivered on schedule in 2016." t:indicatorid=201 t:action_aggregation="Maintain Above" t:organization="Capital Projects" t:name="Percent of projects delivered on schedule" t:strategyid=40103 t:type="Average Percent" t:seasonality="Not Seasonal" t:target=80% t:rowid="Percent of projects delivered on schedule_2016-03-31" t:direction=Over m:percent=82.4 m:value=82.4
series e:jqyq-axtx d:2016-03-31T00:00:00.000Z t:note="387 of 451 invoices paid within respective time periods in 2016." t:indicatorid=203 t:action_aggregation="Maintain Above" t:organization="Capital Projects" t:name="Percent of invoices paid within 30 days for bonds, 60 days for revolver funds, and 60 days for DCDBG funds" t:strategyid=20103 t:type="Average Percent" t:seasonality="Not Seasonal" t:target=80% t:rowid="Percent of invoices paid within 30 days for bonds, 60 days for revolver funds, and 60 days for DCDBG funds_2016-03-31" t:direction=Over m:percent=82.1 m:value=82.1
```
## Meta Commands
```ls
metric m:value p:double l:value t:dataTypeName=number
metric m:total p:double l:Total t:dataTypeName=number
metric m:percent p:integer l:Percent t:dataTypeName=number
metric m:ytd p:integer l:YTD t:dataTypeName=number
entity e:jqyq-axtx l:"ResultsNOLA (historic)" t:attribution="City of New Orleans" t:url=https://data.nola.gov/api/views/jqyq-axtx
property e:jqyq-axtx t:meta.view d:2017-09-25T07:24:28.196Z v:averageRating=0 v:name="ResultsNOLA (historic)" v:attribution="City of New Orleans" v:id=jqyq-axtx v:category="City Administration"
property e:jqyq-axtx t:meta.view.owner d:2017-09-25T07:24:28.196Z v:displayName=mschigoda v:id=ii98-542e v:screenName=mschigoda
property e:jqyq-axtx t:meta.view.tableauthor d:2017-09-25T07:24:28.196Z v:displayName=mschigoda v:roleName=publisher v:id=ii98-542e v:screenName=mschigoda
property e:jqyq-axtx t:meta.view.metadata.custom_fields.common_core d:2017-09-25T07:24:28.196Z v:Contact_Email=data@nola.gov
```
## Top Records
```ls
| indicatorid | strategyid | organization | name | type | target | seasonality | direction | value | rowid | date | year | quarter | datelabel | total | percent | ytd | quarter_label | action_aggregation | note |
| =========== | ========== | ================ | ================================================================================================================ | =============== | ====== | ============ | ========= | ===== | =========================================================================================================================== | =================== | ==== | ======= | ========= | ===== | ======= | =========== | ============= | ================== | =================================================================================== |
| 101 | 20103 | Budget | Avg. days to approve requisitions for the purchase of goods or services | Average | 1 | Not Seasonal | Under | 0.63 | Avg. days to approve requisitions for the purchase of goods or services_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | 0.63 | | 0.634975982 | Q1 | Maintain Below | 6,973 requisitions approved in 2016. |
| 201 | 40103 | Capital Projects | Percent of projects delivered on schedule | Average Percent | 80% | Not Seasonal | Over | 82.4 | Percent of projects delivered on schedule_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | | 82.4 | | Q1 | Maintain Above | 111 of 140 project milestones delivered on schedule in 2016. |
| 203 | 20103 | Capital Projects | Percent of invoices paid within 30 days for bonds, 60 days for revolver funds, and 60 days for DCDBG funds | Average Percent | 80% | Not Seasonal | Over | 82.1 | Percent of invoices paid within 30 days for bonds, 60 days for revolver funds, and 60 days for DCDBG funds_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | | 82.1 | | Q1 | Maintain Above | 387 of 451 invoices paid within respective time periods in 2016. |
| 401 | 20203 | Civil Service | Percent of internal customers who agree that training was useful to their position | Average Percent | 95% | Not Seasonal | Over | 92 | Percent of internal customers who agree that training was useful to their position_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | | 92 | 92.30769231 | Q1 | Maintain Above | 527 of 568 internal customers agreed training was useful to their position in 2016. |
| 402 | 20201 | Civil Service | Percent of eligible lists established within 60 days of the job announcement closing | Average Percent | 90% | Not Seasonal | Over | 96 | Percent of eligible lists established within 60 days of the job announcement closing_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | | 96 | 95.65217391 | Q1 | Maintain Above | NA |
| 404 | 20201 | Civil Service | Percent of employees selected from eligible lists who satisfactorily complete their initial probationary periods | Average Percent | 90% | Not Seasonal | Over | 87 | Percent of employees selected from eligible lists who satisfactorily complete their initial probationary periods_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | | 87 | 87.36842105 | Q1 | Maintain Above | 145 of 167 employees completed probationary periods in 2016. |
| 602 | 40201 | Code Enforcement | Number of inspections | Count | 15,000 | Not Seasonal | Over | 2264 | Number of inspections_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | 2264 | | 2264 | Q1 | Increase to | NA |
| 607 | 40201 | Code Enforcement | Number of blighted properties brought into compliance at hearing by property owners | Count | 750 | Not Seasonal | Over | 174 | Number of blighted properties brought into compliance at hearing by property owners_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | 174 | | 210 | Q1 | Increase to | NA |
| 603 | 40201 | Code Enforcement | Number of properties brought to initial hearing | Count | 2,500 | Not Seasonal | Over | 571 | Number of properties brought to initial hearing_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | 571 | | 573 | Q1 | Increase to | NA |
| 605 | 40201 | Code Enforcement | Percent of hearings reset due to failure to re-inspect the property | Average Percent | 3% | Not Seasonal | Under | 3.2 | Percent of hearings reset due to failure to re-inspect the property_2016-03-31 | 2016-03-31T00:00:00 | 2016 | 1 | 2016, Q1 | | 3.2 | 1.745200698 | Q1 | Maintain Below | |
``` | 113.522523 | 584 | 0.49377 | eng_Latn | 0.806554 |
21b629063b48c3769ec58763c67844ea2cb66a36 | 720 | md | Markdown | Edge_AI_Fundamentals/Lesson 3/Convert a Caffee Model/README.md | nesreensada/Intel-Edge-AI-for-IoT-Developers | b74bde926f17459196ff78ebd850c29c30df85d1 | [
"MIT"
] | null | null | null | Edge_AI_Fundamentals/Lesson 3/Convert a Caffee Model/README.md | nesreensada/Intel-Edge-AI-for-IoT-Developers | b74bde926f17459196ff78ebd850c29c30df85d1 | [
"MIT"
] | null | null | null | Edge_AI_Fundamentals/Lesson 3/Convert a Caffee Model/README.md | nesreensada/Intel-Edge-AI-for-IoT-Developers | b74bde926f17459196ff78ebd850c29c30df85d1 | [
"MIT"
] | null | null | null | # Convert a Caffe Model - Solution
First, you can start by checking out the documentation specific to Caffe models [here](https://docs.openvinotoolkit.org/2018_R5/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe.html).
I did notice an additional helpful argument here: `--input_proto`, which is used to specify
a `.prototxt` file to pair with the `.caffemodel` file when the model name and `.prototxt`
filename do not match.
Now, given that I was in the directory with the Caffe model file & `.prototxt` file, here is the
full command I used to convert my model:
```
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model squeezenet_v1.1.caffemodel --input_proto deploy.prototxt
```
| 48 | 199 | 0.790278 | eng_Latn | 0.974855 |
21b62ddbb2622782d0c458ec15e1145f270733fa | 724 | md | Markdown | ru/datalens/operations/chart/create-ring-chart.md | OlesyaAkimova28/docs | 08b8e09d3346ec669daa886a8eda836c3f14a0b0 | [
"CC-BY-4.0"
] | 117 | 2018-12-29T10:20:17.000Z | 2022-03-30T12:30:13.000Z | ru/datalens/operations/chart/create-ring-chart.md | OlesyaAkimova28/docs | 08b8e09d3346ec669daa886a8eda836c3f14a0b0 | [
"CC-BY-4.0"
] | 205 | 2018-12-29T14:58:45.000Z | 2022-03-30T21:47:12.000Z | ru/datalens/operations/chart/create-ring-chart.md | OlesyaAkimova28/docs | 08b8e09d3346ec669daa886a8eda836c3f14a0b0 | [
"CC-BY-4.0"
] | 393 | 2018-12-26T16:53:47.000Z | 2022-03-31T17:33:48.000Z | # Создание кольцевой диаграммы
Чтобы создать кольцевую диаграмму:
1. На [главной странице]({{link-datalens-main}}) сервиса нажмите **Создать чарт**.
1. В разделе **Датасет** выберите датасет для визуализации. Если у вас нет датасета, [создайте его](../dataset/create.md).
1. Выберите тип чарта **Кольцевая диаграмма**.
1. Перетащите измерение или показатель из датасета в секцию **Цвет**.
1. Перетащите показатель из датасета в секцию **Показатели**. Значения отобразятся в виде областей кольцевой диаграммы.
Чтобы отключить отображение числа в центре:
1. В верхней левой части экрана нажмите значок .
1. В окне **Настройки чарта** выключите **Итоги**.
1. Нажмите **Применить**. | 55.692308 | 122 | 0.754144 | rus_Cyrl | 0.9761 |
21b634781be462bd4f643cbf6a72fae5401221d7 | 6,158 | md | Markdown | articles/virtual-machines/image-version-managed-image-cli.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | ["CC-BY-4.0", "MIT"] | null | null | null | articles/virtual-machines/image-version-managed-image-cli.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | ["CC-BY-4.0", "MIT"] | null | null | null | articles/virtual-machines/image-version-managed-image-cli.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: Migrate from a managed image to an image version with the Azure CLI
description: Learn how to migrate from a managed image to an image version in a Shared Image Gallery by using the Azure CLI.
author: cynthn
ms.service: virtual-machines
ms.subservice: imaging
ms.topic: how-to
ms.workload: infrastructure
ms.date: 05/04/2020
ms.author: cynthn
ms.reviewer: akjosh
ms.custom: devx-track-azurecli
ms.openlocfilehash: 8631a411b26f91bc72e23ac7ff9fb2278f61168c
ms.sourcegitcommit: 11e2521679415f05d3d2c4c49858940677c57900
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 07/31/2020
ms.locfileid: "87502893"
---
# <a name="migrate-from-a-managed-image-to-an-image-version-using-the-azure-cli"></a>Migrate from a managed image to an image version using the Azure CLI
If you have an existing managed image that you want to migrate to a Shared Image Gallery, you can create a Shared Image Gallery image version directly from the managed image. After you have tested the new image, you can delete the source managed image. You can also migrate from a managed image to a Shared Image Gallery by using [PowerShell](image-version-managed-image-powershell.md).
Images in an image gallery have two components, which we will create in this example:
- An **image definition** carries information about the image and requirements for using it. This includes whether the image is Windows or Linux, specialized or generalized, release notes, and minimum and maximum memory requirements. It is a definition of a type of image.
- An **image version** is what is used to create a VM when using a Shared Image Gallery. You can have multiple versions of an image as needed for your environment. When you create a VM, the image version is used to create new disks for the VM. Image versions can be used multiple times.
## <a name="before-you-begin"></a>Before you begin
To complete this article, you must have an existing [Shared Image Gallery](shared-images-cli.md).
To complete the example in this article, you must have an existing managed image of a generalized VM. For more information, see [Capture a managed image](./linux/capture-image.md). If the managed image contains a data disk, the data disk size cannot be more than 1 TB.
As you work through this article, replace the resource group and VM names where needed.
## <a name="create-an-image-definition"></a>Create an image definition
Because managed images are always generalized images, you create an image definition using `--os-state generalized` for a generalized image.
Image definition names can be made up of uppercase or lowercase letters, digits, dots, and dashes.
For more information about the values you can specify for an image definition, see [Image definitions](./linux/shared-image-galleries.md#image-definitions).
Create an image definition in the gallery using [az sig image-definition create](/cli/azure/sig/image-definition#az-sig-image-definition-create).
In this example, the image definition is named *myImageDefinition*, and it is for a [generalized](./linux/shared-image-galleries.md#generalized-and-specialized-images) Linux OS image. To create a definition for images that use a Windows OS, use `--os-type Windows`.
```azurecli-interactive
resourceGroup=myGalleryRG
gallery=myGallery
imageDef=myImageDefinition
az sig image-definition create \
--resource-group $resourceGroup \
--gallery-name $gallery \
--gallery-image-definition $imageDef \
--publisher myPublisher \
--offer myOffer \
--sku mySKU \
--os-type Linux \
--os-state generalized
```
## <a name="create-the-image-version"></a>Create the image version
Create versions using [az sig image-version create](/cli/azure/sig/image-version#az-sig-image-version-create). You need to pass in the ID of the managed image to use as the baseline for creating the image version. You can use [az image list](/cli/azure/image?view#az-image-list) to get the IDs of your images.
```azurecli-interactive
az image list --query "[].[name, id]" -o tsv
```
Allowed characters for the image version are numbers and periods. The numbers must be within the range of a 32-bit integer. Format: *MajorVersion*.*MinorVersion*.*Patch*.
In this example, the version of our image is *1.0.0*, and we will create 1 replica in the *South Central US* region and 1 replica in the *East US 2* region using zone-redundant storage. Remember that you also need to include the *source* region as a replication target when choosing the target regions for replication.
Pass the ID of the managed image in the `--managed-image` parameter.
```azurecli-interactive
imageID="/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/images/myImage"
az sig image-version create \
--resource-group $resourceGroup \
--gallery-name $gallery \
--gallery-image-definition $imageDef \
--gallery-image-version 1.0.0 \
--target-regions "southcentralus=1" "eastus2=1=standard_zrs" \
--replica-count 2 \
--managed-image $imageID
```
> [!NOTE]
> You need to wait for the image version to completely finish being built and replicated before you can use the same managed image to create another image version.
>
> You can also store all of your image version replicas in [zone-redundant storage](../storage/common/storage-redundancy.md) by adding `--storage-account-type standard_zrs` when you create the image version.
>
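To tell when the version has finished building and replicating, you can poll its provisioning state. The following is an illustrative sketch, not part of the original article: it reuses the variables defined earlier, and `az sig image-version show` reports `Succeeded` once the version is ready.

```azurecli-interactive
az sig image-version show \
   --resource-group $resourceGroup \
   --gallery-name $gallery \
   --gallery-image-definition $imageDef \
   --gallery-image-version 1.0.0 \
   --query "provisioningState" --output tsv
```

Once this prints `Succeeded`, it should be safe to reuse the managed image for another image version.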
## <a name="next-steps"></a>Next steps
Create a VM from a [generalized image version](vm-generalized-image-version-cli.md).
For information about how to supply purchase plan information, see [Supply Azure Marketplace purchase plan information when creating images](marketplace-images.md).
| 59.211538 | 426 | 0.788892 | swe_Latn | 0.999391 |
21b74600de1040fe591d83e6486b3d219c7d9201 | 2,065 | md | Markdown | GrammarOfEquation.md | BoolPurist/SimpCalc | 4eb91363a38c5cb6d4b48dc446b1a8c3f9fb20d1 | ["MIT"] | null | null | null | GrammarOfEquation.md | BoolPurist/SimpCalc | 4eb91363a38c5cb6d4b48dc446b1a8c3f9fb20d1 | ["MIT"] | null | null | null | GrammarOfEquation.md | BoolPurist/SimpCalc | 4eb91363a38c5cb6d4b48dc446b1a8c3f9fb20d1 | ["MIT"] | null | null | null |
# Description of the Grammar of a valid equation
Start symbol = equation
| Symbol | | Subsymbols of Symbol |
|:----|:-:|:-------------------------------------------------------------------|
| equation | = | spaceOperand ( ( operator spaceOperand ) \| equationInParatheseWithSpace \| ( operator equationInParatheseWithSpace) )\* WhiteSpace\* |
| equationInParatheseWithSpace | = | WhiteSpace* equationInParathese |
| equationInParathese | = | openingParathese equation closingParathese surroundedOperatorWithParam |
| spaceOperand | = | WhiteSpace* operand |
| operand | = | ( operandAsNumber \| operandWithSurroundedOperator \| operandWithRightOperator \| operandAsFunctionWithBase \| operandAsFunction \| operandAsIntegerFunction ) |
| operandWithSurroundedOperator | = | operandAsNumber surroundedOperatorWithParam |
| surroundedOperatorWithParam | = | surroundedOperator ( integer \| equationInParathese ) |
| operandWithRightOperator | = | operatorWithoutNeededLeft ( integer \| equationInParathese ) |
| operandAsFunctionWithBase | = | operandFunctionBaseNeeded ( equationInParathese \| operand ) equationInParathese |
| operandAsFunction | = | operandFunction equationInParathese |
| operandAsIntegerFunction | = | integer operandFunctionsWithIntegers |
| operator | = | WhiteSpace* ( sign \| prioritySign ) |
| operandAsNumber | = | constSign\* \| (floatingNumber constSign\*) |
| floatingNumber | = | integer fractionalPartOfNumber? |
| integer | = | signSequence number |
| signSequence | = | sign* |
| fractionalPartOfNumber | = | ( . \| , ) number |
| constSign | = | pi \| π \| e |
| number | = | ( 0 \| 1 \| 2 \| 3 \| 4 \| 5 \| 6 \| 7 \| 8 \| 9 )+ |
| sign | = | + \| - |
| prioritySign | = | ( * \| / \| % ) |
| operandFunctionsWithIntegers | = | ! |
| surroundedOperator | = | ^ \| E \| surroundedOperatorWithoutNeededLeft |
| operatorWithoutNeededLeft | = | √ \| R |
| operandFunctionBaseNeeded | = | log |
| operandFunction | = | cos \| sin \| tan \| cocos \| cosin \| cotan \| ln |
| openingParathese | = | ( |
| closingParathese | = | ) |
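As an illustrative, non-normative sketch, the numeric-literal rules at the core of the table (`floatingNumber`, `integer`, `signSequence`, `fractionalPartOfNumber`) translate directly to a regular expression. The regex and helper function below are assumptions for demonstration only — they are not part of SimpCalc:

```python
import re

# Illustrative translation of the numeric rules from the grammar table:
#   floatingNumber         = integer fractionalPartOfNumber?
#   integer                = signSequence number
#   signSequence           = sign*            (sign = + | -)
#   number                 = one or more digits 0-9
#   fractionalPartOfNumber = ( . | , ) number
FLOATING_NUMBER = re.compile(r"[+-]*[0-9]+(?:[.,][0-9]+)?")

def is_floating_number(text: str) -> bool:
    """True if the whole string derives from the floatingNumber symbol."""
    return FLOATING_NUMBER.fullmatch(text) is not None

print(is_floating_number("-3,14"))  # True: sign, digits, comma fraction
print(is_floating_number("--+7"))   # True: signSequence allows repeated signs
print(is_floating_number(".5"))     # False: a leading digit is required
```

Note that, per the grammar, the fractional separator may be either `.` or `,`, and a bare separator with no trailing digits (for example `3.`) is not derivable.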
| 59 | 176 | 0.667797 | eng_Latn | 0.526652 |
21b7bf68826af2a05919b551e20e9f6f30eb9879 | 2,893 | md | Markdown | quay-enterprise/insert-custom-cert.md | Guaris/docs-1 | d403f2321e31f94b1e4b0cd7042d519bade54548 | ["Apache-2.0"] | 558 | 2015-01-01T08:52:16.000Z | 2021-12-04T19:48:28.000Z | quay-enterprise/insert-custom-cert.md | davyb1964/docs | d403f2321e31f94b1e4b0cd7042d519bade54548 | ["Apache-2.0"] | 670 | 2015-01-05T19:26:13.000Z | 2020-06-01T03:17:33.000Z | quay-enterprise/insert-custom-cert.md | davyb1964/docs | d403f2321e31f94b1e4b0cd7042d519bade54548 | ["Apache-2.0"] | 498 | 2015-01-03T03:37:42.000Z | 2022-02-17T15:13:50.000Z |
# Adding TLS Certificates to the Quay Enterprise Container
To add custom TLS certificates to Quay Enterprise, create a new directory named `extra_ca_certs/` beneath the Quay Enterprise config directory. Copy any required site-specific TLS certificates to this new directory.
## Example
### View certificate to be added to the container:
```
$ cat storage.crt
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
[...]
-----END CERTIFICATE-----
```
### Create certs directory and copy certificate there:
```
$ mkdir -p quay/config/extra_ca_certs
$ cp storage.crt quay/config/extra_ca_certs/
$ tree quay/config/
├── config.yaml
├── extra_ca_certs
│ ├── storage.crt
```
### Restart the QE container and check the certificate with `docker exec`:
Obtain the quay container's `CONTAINER ID` with `docker ps`:
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
5a3e82c4a75f quay.io/coreos/quay:v2.9.3 "/sbin/my_init" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 8443/tcp grave_keller
```
Restart the container with that ID:
```
$ docker restart 5a3e82c4a75f
```
Examine the certificate copied into the container namespace:
```
$ docker exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
```
## Add certs when deployed on Kubernetes
When deployed on Kubernetes, QE mounts a secret as a volume to store config assets. Unfortunately, this currently breaks the certificate-upload function of the superuser panel.
To get around this error, a base64-encoded certificate can be added to the secret *after* Quay Enterprise has been deployed.
Begin by base64 encoding the contents of the certificate:
```
$ cat ca.crt
-----BEGIN CERTIFICATE-----
MIIDljCCAn6gAwIBAgIBATANBgkqhkiG9w0BAQsFADA5MRcwFQYDVQQKDA5MQUIu
TElCQ09SRS5TTzEeMBwGA1UEAwwVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTE2
MDExMjA2NTkxMFoXDTM2MDExMjA2NTkxMFowOTEXMBUGA1UECgwOTEFCLkxJQkNP
UkUuU08xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTCCASIwDQYJKoZI
[...]
-----END CERTIFICATE-----
$ cat ca.crt | base64 -w 0
[...]
c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
```
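Before pasting the encoded string into the secret, it can be worth checking that the encoding round-trips cleanly — `-w 0` (GNU coreutils) disables line wrapping, since a wrapped value would corrupt the single-line secret entry. A minimal sketch; the certificate content below is a stand-in for your real `ca.crt`:

```shell
# Stand-in certificate material; substitute your real ca.crt.
printf 'demo certificate bytes\n' > ca.crt

# -w 0 keeps the base64 output on a single line.
encoded=$(base64 -w 0 < ca.crt)

# Decoding must reproduce the original file byte-for-byte.
printf '%s' "$encoded" | base64 -d > roundtrip.crt
cmp ca.crt roundtrip.crt && echo "round-trip OK"
```

On platforms without GNU coreutils (for example macOS), `-w 0` is unavailable; the BSD `base64` does not wrap by default.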
Use the `kubectl` tool to edit the quay-enterprise-config-secret.
```
$ kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret
```
Add an entry for the cert and paste the full base64 encoded string under the entry:
```
custom-cert.crt:
c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
```
Finally, recycle all QE pods. Use `kubectl delete` to remove all QE pods. The QE Deployment will automatically schedule replacement pods with the new certificate data.
| 31.791209 | 215 | 0.752506 | eng_Latn | 0.620094 |
21b7c0b686307b8d4866801d83099fc49dfe700b | 20,899 | md | Markdown | manifold-endpoint/README.md | josephkiran/azure-quickstart-templates | 07e7e1e87e5bee4bdc44d7510d1701c0574426aa | ["MIT"] | null | null | null | manifold-endpoint/README.md | josephkiran/azure-quickstart-templates | 07e7e1e87e5bee4bdc44d7510d1701c0574426aa | ["MIT"] | 2 | 2022-03-08T21:13:01.000Z | 2022-03-08T21:13:12.000Z | manifold-endpoint/README.md | josephkiran/azure-quickstart-templates | 07e7e1e87e5bee4bdc44d7510d1701c0574426aa | ["MIT"] | 1 | 2020-02-04T09:25:14.000Z | 2020-02-04T09:25:14.000Z |
# Manifold Platform Endpoint on CentOS
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/manifold-endpoint/PublicLastTestDate.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/manifold-endpoint/PublicDeployment.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/manifold-endpoint/FairfaxLastTestDate.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/manifold-endpoint/FairfaxDeployment.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/manifold-endpoint/BestPracticeResult.svg" />
<IMG SRC="https://azbotstorage.blob.core.windows.net/badges/manifold-endpoint/CredScanResult.svg" />
By clicking "Deploy to Azure" you agree to the Terms and Conditions below.
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fmanifoldtechnology%2Fazure-quickstart-templates%2Fmaster%2Fmanifold-endpoint%2Fazuredeploy.json" target="_blank"><img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png"/></a>
<a href="http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com%2Fmanifoldtechnology%2Fazure-quickstart-templates%2Fmaster%2Fmanifold-endpoint%2Fazuredeploy.json" target="_blank">
<img src="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/visualizebutton.png"/>
</a>
# This template deploys a Manifold Platform Server on a CentOS VM into the selected resource group.

Manifold's BaaS product allows enterprises to manage diverse account relationships. With Manifold, enterprises can now link accounts to Azure's cloud-based blockchain and establish relationships between accounts and accounting entities, simplifying complex tasks like managing shared P&Ls.

Manifold Platform uses the Oracle JVM, and use of this image signifies your agreement with the license for their product.
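If you prefer the CLI to the portal button, the same template can also be deployed with the Azure CLI. This is an illustrative sketch, not part of the original README: the resource group name and location are placeholders, and the template URI is the unescaped form of the Deploy to Azure link above.

```shell
az group create --name manifold-rg --location eastus

az deployment group create \
  --resource-group manifold-rg \
  --template-uri https://raw.githubusercontent.com/manifoldtechnology/azure-quickstart-templates/master/manifold-endpoint/azuredeploy.json
```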
Oracle Binary Code License Agreement for the Java SE Platform Products and JavaFX
ORACLE AMERICA, INC. ("ORACLE"), FOR AND ON BEHALF OF ITSELF AND ITS SUBSIDIARIES AND AFFILIATES UNDER COMMON CONTROL, IS WILLING TO LICENSE THE SOFTWARE TO YOU ONLY UPON THE CONDITION THAT YOU ACCEPT ALL OF THE TERMS CONTAINED IN THIS BINARY CODE LICENSE AGREEMENT AND SUPPLEMENTAL LICENSE TERMS (COLLECTIVELY "AGREEMENT"). PLEASE READ THE AGREEMENT CAREFULLY. BY SELECTING THE "ACCEPT LICENSE AGREEMENT" (OR THE EQUIVALENT) BUTTON AND/OR BY USING THE SOFTWARE YOU ACKNOWLEDGE THAT YOU HAVE READ THE TERMS AND AGREE TO THEM. IF YOU ARE AGREEING TO THESE TERMS ON BEHALF OF A COMPANY OR OTHER LEGAL ENTITY, YOU REPRESENT THAT YOU HAVE THE LEGAL AUTHORITY TO BIND THE LEGAL ENTITY TO THESE TERMS. IF YOU DO NOT HAVE SUCH AUTHORITY, OR IF YOU DO NOT WISH TO BE BOUND BY THE TERMS, THEN SELECT THE "DECLINE LICENSE AGREEMENT" (OR THE EQUIVALENT) BUTTON AND YOU MUST NOT USE THE SOFTWARE ON THIS SITE OR ANY OTHER MEDIA ON WHICH THE SOFTWARE IS CONTAINED.
1. DEFINITIONS. "Software" means the software identified above in binary form that you selected for download, install or use (in the version You selected for download, install or use) from Oracle or its authorized licensees, any other machine readable materials (including, but not limited to, libraries, source files, header files, and data files), any updates or error corrections provided by Oracle, and any user manuals, programming guides and other documentation provided to you by Oracle under this Agreement. "General Purpose Desktop Computers and Servers" means computers, including desktop and laptop computers, or servers, used for general computing functions under end user control (such as but not specifically limited to email, general purpose Internet browsing, and office suite productivity tools). The use of Software in systems and solutions that provide dedicated functionality (other than as mentioned above) or designed for use in embedded or function-specific software applications, for example but not limited to: Software embedded in or bundled with industrial control systems, wireless mobile telephones, wireless handheld devices, kiosks, TV/STB, Blu-ray Disc devices, telematics and network control switching equipment, printers and storage management systems, and other related systems are excluded from this definition and not licensed under this Agreement. "Programs" means (a) Java technology applets and applications intended to run on the Java Platform, Standard Edition platform on Java-enabled General Purpose Desktop Computers and Servers; and (b) JavaFX technology applications intended to run on the JavaFX Runtime on JavaFX-enabled General Purpose Desktop Computers and Servers. “Commercial Features” means those features identified in Table 1-1 (Commercial Features In Java SE Product Editions) of the Java SE documentation accessible at http://www.oracle.com/technetwork/java/javase/documentation/index.html. 
“README File” means the README file for the Software accessible at http://www.oracle.com/technetwork/java/javase/documentation/index.html.
2. LICENSE TO USE. Subject to the terms and conditions of this Agreement including, but not limited to, the Java Technology Restrictions of the Supplemental License Terms, Oracle grants you a non-exclusive, non-transferable, limited license without license fees to reproduce and use internally the Software complete and unmodified for the sole purpose of running Programs. THE LICENSE SET FORTH IN THIS SECTION 2 DOES NOT EXTEND TO THE COMMERCIAL FEATURES. YOUR RIGHTS AND OBLIGATIONS RELATED TO THE COMMERCIAL FEATURES ARE AS SET FORTH IN THE SUPPLEMENTAL TERMS ALONG WITH ADDITIONAL LICENSES FOR DEVELOPERS AND PUBLISHERS.
3. RESTRICTIONS. Software is copyrighted. Title to Software and all associated intellectual property rights is retained by Oracle and/or its licensors. Unless enforcement is prohibited by applicable law, you may not modify, decompile, or reverse engineer Software. You acknowledge that the Software is developed for general use in a variety of information management applications; it is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use the Software in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle disclaims any express or implied warranty of fitness for such uses. No right, title or interest in or to any trademark, service mark, logo or trade name of Oracle or its licensors is granted under this Agreement. Additional restrictions for developers and/or publishers licenses are set forth in the Supplemental License Terms.
4. DISCLAIMER OF WARRANTY. THE SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. ORACLE FURTHER DISCLAIMS ALL WARRANTIES, EXPRESS AND IMPLIED, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT.
5. LIMITATION OF LIABILITY. IN NO EVENT SHALL ORACLE BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, PUNITIVE OR CONSEQUENTIAL DAMAGES, OR DAMAGES FOR LOSS OF PROFITS, REVENUE, DATA OR DATA USE, INCURRED BY YOU OR ANY THIRD PARTY, WHETHER IN AN ACTION IN CONTRACT OR TORT, EVEN IF ORACLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. ORACLE'S ENTIRE LIABILITY FOR DAMAGES HEREUNDER SHALL IN NO EVENT EXCEED ONE THOUSAND DOLLARS (U.S. $1,000).
6. TERMINATION. This Agreement is effective until terminated. You may terminate this Agreement at any time by destroying all copies of Software. This Agreement will terminate immediately without notice from Oracle if you fail to comply with any provision of this Agreement. Either party may terminate this Agreement immediately should any Software become, or in either party's opinion be likely to become, the subject of a claim of infringement of any intellectual property right. Upon termination, you must destroy all copies of Software.
7. EXPORT REGULATIONS. You agree that U.S. export control laws and other applicable export and import laws govern your use of the Software, including technical data; additional information can be found on Oracle's Global Trade Compliance web site (http://www.oracle.com/us/products/export). You agree that neither the Software nor any direct product thereof will be exported, directly, or indirectly, in violation of these laws, or will be used for any purpose prohibited by these laws including, without limitation, nuclear, chemical, or biological weapons proliferation.
8. TRADEMARKS AND LOGOS. You acknowledge and agree as between you
and Oracle that Oracle owns the ORACLE and JAVA trademarks and all ORACLE- and JAVA-related trademarks, service marks, logos and other brand
designations ("Oracle Marks"), and you agree to comply with the Third
Party Usage Guidelines for Oracle Trademarks currently located at
http://www.oracle.com/us/legal/third-party-trademarks/index.html . Any use you make of the Oracle Marks inures to Oracle's benefit.
9. U.S. GOVERNMENT LICENSE RIGHTS. If Software is being acquired by or on behalf of the U.S. Government or by a U.S. Government prime contractor or subcontractor (at any tier), then the Government's rights in Software and accompanying documentation shall be only those set forth in this Agreement.
10. GOVERNING LAW. This agreement is governed by the substantive and procedural laws of California. You and Oracle agree to submit to the exclusive jurisdiction of, and venue in, the courts of San Francisco, or Santa Clara counties in California in any dispute arising out of or relating to this agreement.
11. SEVERABILITY. If any provision of this Agreement is held to be unenforceable, this Agreement will remain in effect with the provision omitted, unless omission would frustrate the intent of the parties, in which case this Agreement will immediately terminate.
12. INTEGRATION. This Agreement is the entire agreement between you and Oracle relating to its subject matter. It supersedes all prior or contemporaneous oral or written communications, proposals, representations and warranties and prevails over any conflicting or additional terms of any quote, order, acknowledgment, or other communication between the parties relating to its subject matter during the term of this Agreement. No modification of this Agreement will be binding, unless in writing and signed by an authorized representative of each party.
SUPPLEMENTAL LICENSE TERMS
These Supplemental License Terms add to or modify the terms of the Binary Code License Agreement. Capitalized terms not defined in these Supplemental Terms shall have the same meanings ascribed to them in the Binary Code License Agreement. These Supplemental Terms shall supersede any inconsistent or conflicting terms in the Binary Code License Agreement, or in any license contained within the Software.
A. COMMERCIAL FEATURES. You may not use the Commercial Features for running Programs, Java applets or applications in your internal business operations or for any commercial or production purpose, or for any purpose other than as set forth in Sections B, C, D and E of these Supplemental Terms. If You want to use the Commercial Features for any purpose other than as permitted in this Agreement, You must obtain a separate license from Oracle.
B. SOFTWARE INTERNAL USE FOR DEVELOPMENT LICENSE GRANT. Subject to the terms and conditions of this Agreement and restrictions and exceptions set forth in the README File incorporated herein by reference, including, but not limited to the Java Technology Restrictions of these Supplemental Terms, Oracle grants you a non-exclusive, non-transferable, limited license without fees to reproduce internally and use internally the Software complete and unmodified for the purpose of designing, developing, and testing your Programs.
C. LICENSE TO DISTRIBUTE SOFTWARE. Subject to the terms and conditions of this Agreement and restrictions and exceptions set forth in the README File, including, but not limited to the Java Technology Restrictions and Limitations on Redistribution of these Supplemental Terms, Oracle grants you a non-exclusive, non-transferable, limited license without fees to reproduce and distribute the Software, provided that (i) you distribute the Software complete and unmodified and only bundled as part of, and for the sole purpose of running, your Programs, (ii) the Programs add significant and primary functionality to the Software, (iii) you do not distribute additional software intended to replace any component(s) of the Software, (iv) you do not remove or alter any proprietary legends or notices contained in the Software, (v) you only distribute the Software subject to a license agreement that: (a) is a complete, unmodified reproduction of this Agreement; or (b) protects Oracle's interests consistent with the terms contained in this Agreement and that includes the notice set forth in Section H, and (vi) you agree to defend and indemnify Oracle and its licensors from and against any damages, costs, liabilities, settlement amounts and/or expenses (including attorneys' fees) incurred in connection with any claim, lawsuit or action by any third party that arises or results from the use or distribution of any and all Programs and/or Software. The license set forth in this Section C does not extend to the Software identified in Section G.
D. LICENSE TO DISTRIBUTE REDISTRIBUTABLES. Subject to the terms and conditions of this Agreement and restrictions and exceptions set forth in the README File, including but not limited to the Java Technology Restrictions and Limitations on Redistribution of these Supplemental Terms, Oracle grants you a non-exclusive, non-transferable, limited license without fees to reproduce and distribute those files specifically identified as redistributable in the README File ("Redistributables") provided that: (i) you distribute the Redistributables complete and unmodified, and only bundled as part of Programs, (ii) the Programs add significant and primary functionality to the Redistributables, (iii) you do not distribute additional software intended to supersede any component(s) of the Redistributables (unless otherwise specified in the applicable README File), (iv) you do not remove or alter any proprietary legends or notices contained in or on the Redistributables, (v) you only distribute the Redistributables pursuant to a license agreement that: (a) is a complete, unmodified reproduction of this Agreement; or (b) protects Oracle's interests consistent with the terms contained in the Agreement and includes the notice set forth in Section H, (vi) you agree to defend and indemnify Oracle and its licensors from and against any damages, costs, liabilities, settlement amounts and/or expenses (including attorneys' fees) incurred in connection with any claim, lawsuit or action by any third party that arises or results from the use or distribution of any and all Programs and/or Software. The license set forth in this Section D does not extend to the Software identified in Section G.
E. DISTRIBUTION BY PUBLISHERS. This section pertains to your distribution of the JavaTM SE Development Kit Software (“JDK”) with your printed book or magazine (as those terms are commonly used in the industry) relating to Java technology ("Publication"). Subject to and conditioned upon your compliance with the restrictions and obligations contained in the Agreement, Oracle hereby grants to you a non-exclusive, nontransferable limited right to reproduce complete and unmodified copies of the JDK on electronic media (the "Media") for the sole purpose of inclusion and distribution with your Publication(s), subject to the following terms: (i) You may not distribute the JDK on a stand-alone basis; it must be distributed with your Publication(s); (ii) You are responsible for downloading the JDK from the applicable Oracle web site; (iii) You must refer to the JDK as JavaTM SE Development Kit; (iv) The JDK must be reproduced in its entirety and without any modification whatsoever (including with respect to all proprietary notices) and distributed with your Publication subject to a license agreement that is a complete, unmodified reproduction of this Agreement; (v) The Media label shall include the following information: “Copyright [YEAR], Oracle America, Inc. All rights reserved. Use is subject to license terms. ORACLE and JAVA trademarks and all ORACLE- and JAVA-related trademarks, service marks, logos and other brand designations are trademarks or registered trademarks of Oracle in the U.S. and other countries.” [YEAR] is the year of Oracle's release of the Software; the year information can typically be found in the Software’s “About” box or screen. 
This information must be placed on the Media label in such a manner as to only apply to the JDK; (vi) You must clearly identify the JDK as Oracle's product on the Media holder or Media label, and you may not state or imply that Oracle is responsible for any third-party software contained on the Media; (vii) You may not include any third party software on the Media which is intended to be a replacement or substitute for the JDK; (viii) You agree to defend and indemnify Oracle and its licensors from and against any damages, costs, liabilities, settlement amounts and/or expenses (including attorneys' fees) incurred in connection with any claim, lawsuit or action by any third party that arises or results from the use or distribution of the JDK and/or the Publication; ; and (ix) You shall provide Oracle with a written notice for each Publication; such notice shall include the following information: (1) title of Publication, (2) author(s), (3) date of Publication, and (4) ISBN or ISSN numbers. Such notice shall be sent to Oracle America, Inc., 500 Oracle Parkway, Redwood Shores, California 94065 U.S.A , Attention: General Counsel.
F. JAVA TECHNOLOGY RESTRICTIONS. You may not create, modify, or change the behavior of, or authorize your licensees to create, modify, or change the behavior of, classes, interfaces, or subpackages that are in any way identified as "java", "javax", "sun", “oracle” or similar convention as specified by Oracle in any naming convention designation.
G. LIMITATIONS ON REDISTRIBUTION. You may not redistribute or otherwise transfer patches, bug fixes or updates made available by Oracle through Oracle Premier Support, including those made available under Oracle's Java SE Support program.
H. COMMERCIAL FEATURES NOTICE. For purpose of complying with Supplemental Term Section C.(v)(b) and D.(v)(b), your license agreement shall include the following notice, where the notice is displayed in a manner that anyone using the Software will see the notice:
Use of the Commercial Features for any commercial or production purpose requires a separate license from Oracle. “Commercial Features” means those features identified Table 1-1 (Commercial Features In Java SE Product Editions) of the Java SE documentation accessible at http://www.oracle.com/technetwork/java/javase/documentation/index.html
I. SOURCE CODE. Software may contain source code that, unless expressly licensed for other purposes, is provided solely for reference purposes pursuant to the terms of this Agreement. Source code may not be redistributed unless expressly provided for in this Agreement.
J. THIRD PARTY CODE. Additional copyright notices and license terms applicable to portions of the Software are set forth in the THIRDPARTYLICENSEREADME file accessible at http://www.oracle.com/technetwork/java/javase/documentation/index.html. In addition to any terms and conditions of any third party opensource/freeware license identified in the THIRDPARTYLICENSEREADME file, the disclaimer of warranty and limitation of liability provisions in paragraphs 4 and 5 of the Binary Code License Agreement shall apply to all Software in this distribution.
K. TERMINATION FOR INFRINGEMENT. Either party may terminate this Agreement immediately should any Software become, or in either party's opinion be likely to become, the subject of a claim of infringement of any intellectual property right.
L. INSTALLATION AND AUTO-UPDATE. The Software's installation and auto-update processes transmit a limited amount of data to Oracle (or its service provider) about those specific processes to help Oracle understand and optimize them. Oracle does not associate the data with personally identifiable information. You can find more information about the data Oracle collects as a result of your Software download at http://www.oracle.com/technetwork/java/javase/documentation/index.html.
For inquiries please contact: Oracle America, Inc., 500 Oracle Parkway,
Redwood Shores, California 94065, USA.
Last updated 02 April 2013
---
title: "How to: Manually Manage Buffered Graphics"
ms.date: "03/30/2017"
dev_langs:
- "csharp"
- "vb"
helpviewer_keywords:
- "flicker [Windows Forms], reducing by manually managing graphics"
- "graphics [Windows Forms], managing buffered"
ms.assetid: 4c2a90ee-bbbe-4ff6-9170-1b06c195c918
---
# How to: Manually Manage Buffered Graphics
For more advanced double buffering scenarios, you can use the [!INCLUDE[dnprdnshort](../../../../includes/dnprdnshort-md.md)] classes to implement your own double-buffering logic. The class responsible for allocating and managing individual graphics buffers is the <xref:System.Drawing.BufferedGraphicsContext> class. Every application has its own default <xref:System.Drawing.BufferedGraphicsContext> that manages all of the default double buffering for that application. You can retrieve a reference to this instance through the <xref:System.Drawing.BufferedGraphicsManager.Current%2A> property.
### To obtain a reference to the default BufferedGraphicsContext
- Retrieve the <xref:System.Drawing.BufferedGraphicsManager.Current%2A> property, as shown in the following code example.
[!code-csharp[System.Windows.Forms.LegacyBufferedGraphics#11](~/samples/snippets/csharp/VS_Snippets_Winforms/System.Windows.Forms.LegacyBufferedGraphics/CS/Class1.cs#11)]
[!code-vb[System.Windows.Forms.LegacyBufferedGraphics#11](~/samples/snippets/visualbasic/VS_Snippets_Winforms/System.Windows.Forms.LegacyBufferedGraphics/VB/Class1.vb#11)]
> [!NOTE]
> You do not need to call the `Dispose` method on the <xref:System.Drawing.BufferedGraphicsContext> reference that you receive from the <xref:System.Drawing.BufferedGraphicsManager> class. The <xref:System.Drawing.BufferedGraphicsManager> handles all of the memory allocation and distribution for default <xref:System.Drawing.BufferedGraphicsContext> instances.
For graphically intensive applications such as animation, you can sometimes improve performance by using a dedicated <xref:System.Drawing.BufferedGraphicsContext> instead of the <xref:System.Drawing.BufferedGraphicsContext> provided by the <xref:System.Drawing.BufferedGraphicsManager>. This enables you to create and manage graphics buffers individually, without incurring the performance overhead of managing all the other buffered graphics associated with your application, though the memory consumed by the application will be greater.
### To create a dedicated BufferedGraphicsContext
- Declare and create a new instance of the <xref:System.Drawing.BufferedGraphicsContext> class, as shown in the following code example.
[!code-csharp[System.Windows.Forms.LegacyBufferedGraphics#12](~/samples/snippets/csharp/VS_Snippets_Winforms/System.Windows.Forms.LegacyBufferedGraphics/CS/Class1.cs#12)]
[!code-vb[System.Windows.Forms.LegacyBufferedGraphics#12](~/samples/snippets/visualbasic/VS_Snippets_Winforms/System.Windows.Forms.LegacyBufferedGraphics/VB/Class1.vb#12)]
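The referenced code snippets are not expanded on this page. As a rough sketch of the overall pattern with a dedicated context, assuming a Windows Forms control (the control class, member names, and drawing calls below are illustrative, not the documented sample):

```csharp
// Sketch only: "BufferedPanel" and its drawing calls are invented for
// illustration; only the BufferedGraphics types come from the documentation.
using System;
using System.Drawing;
using System.Windows.Forms;

public class BufferedPanel : Panel
{
    // A dedicated context instead of BufferedGraphicsManager.Current.
    private readonly BufferedGraphicsContext context = new BufferedGraphicsContext();
    private BufferedGraphics buffer;

    protected override void OnResize(EventArgs e)
    {
        base.OnResize(e);
        // Reallocate the buffer whenever the drawing surface changes size.
        context.MaximumBuffer = new Size(Width + 1, Height + 1);
        buffer?.Dispose();
        buffer = context.Allocate(CreateGraphics(), ClientRectangle);
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        if (buffer == null) return;
        buffer.Graphics.Clear(BackColor);                         // draw off-screen
        buffer.Graphics.FillEllipse(Brushes.Red, 10, 10, 50, 50);
        buffer.Render(e.Graphics);                                // copy to screen
    }
}
```

Unlike the default context managed by <xref:System.Drawing.BufferedGraphicsManager>, a dedicated context created this way should be disposed, along with its buffers, when the control is disposed.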
## See also
- <xref:System.Drawing.BufferedGraphicsContext>
- [Double Buffered Graphics](double-buffered-graphics.md)
- [How to: Manually Render Buffered Graphics](how-to-manually-render-buffered-graphics.md)
| 84.210526 | 593 | 0.797188 | eng_Latn | 0.846556 |
21b8acb59b682c62d550c5478998c77a35329823 | 11,949 | md | Markdown | mdop/mbam-v1/security-considerations-for-mbam-10.md | MicrosoftDocs/mdop-docs-pr.it-it | c0a4de3a5407dee9cb0e7e8af61643dc2fc9ecf2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-20T21:13:51.000Z | 2021-04-20T21:13:51.000Z | mdop/mbam-v1/security-considerations-for-mbam-10.md | MicrosoftDocs/mdop-docs-pr.it-it | c0a4de3a5407dee9cb0e7e8af61643dc2fc9ecf2 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-08T05:27:50.000Z | 2020-07-08T15:39:35.000Z | mdop/mbam-v1/security-considerations-for-mbam-10.md | MicrosoftDocs/mdop-docs-pr.it-it | c0a4de3a5407dee9cb0e7e8af61643dc2fc9ecf2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-04T12:31:26.000Z | 2021-11-04T12:31:26.000Z | ---
title: Security considerations for MBAM 1.0
description: Security considerations for MBAM 1.0
author: dansimp
ms.assetid: 5e1c8b8c-235b-4a92-8b0b-da50dca17353
ms.reviewer: ''
manager: dansimp
ms.author: dansimp
ms.pagetype: mdop, security
ms.mktglfcycl: manage
ms.sitesec: library
ms.prod: w10
ms.date: 08/30/2016
ms.openlocfilehash: b06c343c8193293dde91bc7af2541f35f332d261
ms.sourcegitcommit: 354664bc527d93f80687cd2eba70d1eea024c7c3
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 06/26/2020
ms.locfileid: "10826195"
---
# Security Considerations for MBAM 1.0

This topic contains a brief overview of the accounts and groups, log files, and other security-related considerations for Microsoft BitLocker Administration and Monitoring (MBAM). For more information, follow the links in this article.

## General security considerations

**Understand the security risks.** The most serious risk to MBAM is that its functionality could be hijacked by an unauthorized user, who could then reconfigure BitLocker encryption and obtain BitLocker encryption key data on MBAM clients. However, the loss of MBAM functionality for a short period of time because of a denial-of-service attack would not generally have a catastrophic impact.

**Physically secure your computers.** Security is incomplete without physical security. Anyone with physical access to an MBAM server could potentially attack the entire client base. Any potential physical attacks must be considered high risk and mitigated appropriately. MBAM servers should be kept in a physically secure server room with controlled access. Secure these computers when administrators are not physically present by having the operating system lock the computer, or by using a secured screen saver.

**Apply the most recent security updates to all computers.** Stay informed about new updates for operating systems, Microsoft SQL Server, and MBAM by subscribing to the Security Notification service at <https://go.microsoft.com/fwlink/p/?LinkId=28819>.

**Use strong passwords or pass phrases.** Always use strong passwords with 15 or more characters for all MBAM and MBAM administrator accounts. Never use blank passwords. For more information about password concepts, see the "Account Passwords and Policies" white paper on TechNet (<https://go.microsoft.com/fwlink/p/?LinkId=30009>).

## Accounts and groups in MBAM

A best practice for user account management is to create domain global groups and add user accounts to them. Then, add the domain global accounts to the appropriate MBAM local groups on the MBAM servers.

### Active Directory Domain Services groups

No groups are created automatically during MBAM Setup. However, you should create the following Active Directory Domain Services global groups to manage MBAM operations.
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Group name</th>
<th align="left">Details</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p>MBAM Advanced Helpdesk Users</p></td>
<td align="left"><p>Create this group to manage the members of the MBAM Advanced Helpdesk Users local group that is created during MBAM Setup.</p></td>
</tr>
<tr class="even">
<td align="left"><p>MBAM Compliance Auditing DB Access</p></td>
<td align="left"><p>Create this group to manage the members of the MBAM Compliance Auditing DB Access local group that is created during MBAM Setup.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>MBAM Hardware Users</p></td>
<td align="left"><p>Create this group to manage the members of the MBAM Hardware Users local group that is created during MBAM Setup.</p></td>
</tr>
<tr class="even">
<td align="left"><p>MBAM Helpdesk Users</p></td>
<td align="left"><p>Create this group to manage the members of the MBAM Helpdesk Users local group that is created during MBAM Setup.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>MBAM Recovery and Hardware DB Access</p></td>
<td align="left"><p>Create this group to manage the members of the MBAM Recovery and Hardware DB Access local group that is created during MBAM Setup.</p></td>
</tr>
<tr class="even">
<td align="left"><p>MBAM Report Users</p></td>
<td align="left"><p>Create this group to manage the members of the MBAM Report Users local group that is created during MBAM Setup.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>MBAM System Administrators</p></td>
<td align="left"><p>Create this group to manage the members of the MBAM System Administrators local group that is created during MBAM Setup.</p></td>
</tr>
<tr class="even">
<td align="left"><p>BitLocker Encryption Exemptions</p></td>
<td align="left"><p>Create this group to manage the user accounts that should be exempted from BitLocker encryption starting on the computers that they log on to.</p></td>
</tr>
</tbody>
</table>
### MBAM Server local groups

MBAM Setup creates local groups to support MBAM operations. You should add the Active Directory Domain Services global groups to the appropriate MBAM local groups to configure MBAM security and data access permissions.
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Group name</th>
<th align="left">Details</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p>MBAM Advanced Helpdesk Users</p></td>
<td align="left"><p>Members of this group have expanded access to the Helpdesk features of Microsoft BitLocker Administration and Monitoring.</p></td>
</tr>
<tr class="even">
<td align="left"><p>MBAM Compliance Auditing DB Access</p></td>
<td align="left"><p>This group contains the computers that have access to the MBAM Compliance Auditing database.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>MBAM Hardware Users</p></td>
<td align="left"><p>Members of this group have access to some of the hardware capability features of Microsoft BitLocker Administration and Monitoring.</p></td>
</tr>
<tr class="even">
<td align="left"><p>MBAM Helpdesk Users</p></td>
<td align="left"><p>Members of this group have access to some of the Helpdesk features of Microsoft BitLocker Administration and Monitoring.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>MBAM Recovery and Hardware DB Access</p></td>
<td align="left"><p>This group contains the computers that have access to the MBAM Recovery and Hardware database.</p></td>
</tr>
<tr class="even">
<td align="left"><p>MBAM Report Users</p></td>
<td align="left"><p>Members of this group have access to the compliance and audit reports of Microsoft BitLocker Administration and Monitoring.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>MBAM System Administrators</p></td>
<td align="left"><p>Members of this group have access to all Microsoft BitLocker Administration and Monitoring features.</p></td>
</tr>
</tbody>
</table>
### SSRS report access account

The SQL Server Reporting Services (SSRS) service account provides the security context for running the MBAM reports that are available through SSRS. This account is configured during MBAM Setup.

## MBAM log files

During MBAM Setup, the following MBAM Setup log files are created in the %temp% folder of the user who installs the MBAM Server features.

**MBAM Server Setup log files**

<a href="" id="msi-five-random-characters--log"></a>MSI*\<five random characters\>*.log
Logs the actions taken during MBAM Setup and MBAM Server feature installation.

<a href="" id="installcompliancedatabase-log"></a>InstallComplianceDatabase.log
Logs the actions taken to create the MBAM Compliance Status database.

<a href="" id="installkeycompliancedatabase-log"></a>InstallKeyComplianceDatabase.log
Logs the actions taken to create the MBAM Recovery and Hardware database.

<a href="" id="addhelpdeskdbauditusers-log"></a>AddHelpDeskDbAuditUsers.log
Logs the actions taken to create the SQL Server logins on the MBAM Compliance Status database and to authorize the helpdesk web service on the database for reports.

<a href="" id="addhelpdeskdbusers-log"></a>AddHelpDeskDbUsers.log
Logs the actions taken to authorize the web services to the database for key recovery and to create the logins to the MBAM Recovery and Hardware database.

<a href="" id="addkeycompliancedbusers-log"></a>AddKeyComplianceDbUsers.log
Logs the actions taken to authorize the web services to the MBAM Compliance Status database for compliance reporting.

<a href="" id="addrecoveryandhardwaredbusers-log"></a>AddRecoveryAndHardwareDbUsers.log
Logs the actions taken to authorize the web services to the MBAM Recovery and Hardware database for key recovery.

**Note** To obtain additional MBAM Setup log files, you must install Microsoft BitLocker Administration and Monitoring by using the **msiexec** package and the **/l** *location* option. The log files are created in the specified location.
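For example, a verbose Setup log can be captured with a command line like the following (the .msi file name and paths are placeholders, not the actual MBAM installer package name):

```
msiexec /i MbamServerSetup.msi /l*v C:\Logs\MbamServerSetup.log
```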
**MBAM Client Setup log files**

<a href="" id="msi-five-random-characters--log"></a>MSI*\<five random characters\>*.log
Logs the actions taken during MBAM Client installation.
## MBAM database TDE considerations

The Transparent Data Encryption (TDE) feature available in SQL Server 2008 is a required installation prerequisite for the database instances that will host the MBAM database features.

With TDE, you can perform real-time, full database-level encryption. TDE is a well-suited choice for bulk encryption to meet regulatory compliance or corporate data security standards. TDE works at the file level, which is similar to two Windows features: the Encrypting File System (EFS) and BitLocker Drive Encryption, both of which also encrypt data on the hard drive. TDE does not replace cell-level encryption, EFS, or BitLocker.

When TDE is enabled on a database, all backups are encrypted. Special care must therefore be taken to ensure that the certificate that was used to protect the database encryption key (DEK) is backed up and maintained with the database backup. Without the certificate, the data will be unreadable. Back up the certificate along with the database. Each certificate backup has two files; both of these files should be archived. For security, it is best to archive them separately from the database backup file.

For an example of how to enable TDE for MBAM database instances, see [Evaluating MBAM 1.0](evaluating-mbam-10.md).
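As a rough Transact-SQL sketch of what enabling TDE and backing up its certificate involves (the database, certificate, password, and file names below are placeholders, not MBAM's actual object names):

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE MbamTdeCert WITH SUBJECT = 'TDE certificate for MBAM databases';

-- Back up the certificate and its private key; archive both files
-- separately from the database backups.
BACKUP CERTIFICATE MbamTdeCert
    TO FILE = 'C:\Backup\MbamTdeCert.cer'
    WITH PRIVATE KEY (FILE = 'C:\Backup\MbamTdeCert.pvk',
                      ENCRYPTION BY PASSWORD = '<strong password>');

USE [MbamRecoveryAndHardware];  -- placeholder database name
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE MbamTdeCert;
ALTER DATABASE [MbamRecoveryAndHardware] SET ENCRYPTION ON;
```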
For more information about TDE in SQL Server 2008, see [Database Encryption in SQL Server 2008 Enterprise Edition](https://go.microsoft.com/fwlink/?LinkId=269703).

## Related topics

[Security and Privacy for MBAM 1.0](security-and-privacy-for-mbam-10.md)
# Nanobar
A progress bar at the very top of a webpage.
VIEW DEMO: <http://chemzqm.github.io/nanobar/>
## Installation
Install with [component(1)](http://component.io):
$ component install chemzqm/nanobar
## API
```js
var Nanobar = require('nanobar');
```
### new Nanobar([parent]);
Append the nanobar to `parent` (defaults to `document.body`).
### Nanobar#update(number)
Go to a certain percent of progress.
```js
nanobar.update(30)
```
### Nanobar#dismiss()
Remove this nanobar.
### Nanobar#infinite()
Make the progress bar infinite.
## License
MIT
# infinity-stones
A Minecraft plugin that adds the Infinity Stones.
cf-php-fuel
====
- FuelPHP samples for Cloud Foundry
---
layout: post
title: Monologue night at Beer Station
date: 2008-07-15 15:34:41
description: 'Juan Aroca at Beer Station, monologues and beers'
tags: ['live-shows']
---
Given the poor movie lineup that fills theaters during the summer months, last Saturday we decided at the last minute to try something different. For the first time, we were going to see a live comedy monologue.

With my fondness for buying and booking things online growing *increcendo* (I think that word exists, but I don't think it's spelled that way), I got down to work and, from my <a href="http://es.beruby.com/" title="es.beruby.com/" id="link_0">BeRuby</a> home page, headed over to <a href="http://www.atrapalo.com" title="www.atrapalo.com" id="link_1">atrapalo.com</a> to search for shows in Madrid.

And there it was: for six euros, a thousand of the old pesetas, a price similar to a movie ticket, I found a monologue by <a href="http://www.juanaroca.es" title="www.juanaroca.es" id="link_2">Juan Aroca</a> at <a href="http://www.beerstation.es" title="www.beerstation.es" id="link_3">Beer Station</a>, for Saturday at 11:00 p.m.

As soon as we arrived at Plaza de Santo Domingo, the first thing we noticed was the venue's Guinness sign, followed by a Murphy's promotion. So while I picked up the tickets from "the girl in orange," my girlfriend took care of the liquid provisions: a Guinness for the lady and a Murphy's for the gentleman.

The place itself is really nice, but the best was still to come: the monologue.

It started with how modern society has become, with a nod to trousers worn down at the knees, moved on to how to tell the number of stars of a hostel from the feel of its towels, threw in a few jokes about civil servants, and quite a few more about Gypsies.

In short, it was great: during the two acts of roughly half an hour each we never stopped laughing. By way of example, the following joke:

> The curtain opens and a family of Gypsies appears, and they take the curtain with them. As Juan Aroca himself would say: "mu tonto, mu tonto" ("so dumb, so dumb").
<a href="http://www.flickr.com/photos/pacoguzman/2671762706/" title="Beer Station por Paco Guzman - Tikhon, en Flickr"><img src="http://farm4.static.flickr.com/3237/2671762706_04f0c5b291_m.jpg" alt="Beer Station" width="240" height="180"></a>
---
UID: NF:gdiplusheaders.Region.IsVisible(INT,INT,constGraphics)
title: Region::IsVisible
description: The Region::IsVisible method determines whether a point is inside this region.
tech.root: gdiplus
helpviewer_keywords: ["Region::IsVisible"]
ms.assetid: 6e7059c0-2029-4178-961a-88738894ee83
ms.date: 05/20/2019
ms.keywords: Region::IsVisible
targetos: Windows
req.assembly:
req.construct-type: function
req.ddi-compliance:
req.dll:
req.header: gdiplusheaders.h
req.idl:
req.include-header:
req.irql:
req.kmdf-ver:
req.lib:
req.max-support:
req.namespace:
req.redist:
req.target-min-winverclnt:
req.target-min-winversvr:
req.target-type:
req.type-library:
req.umdf-ver:
req.unicode-ansi:
f1_keywords:
- Region::IsVisible
- gdiplusheaders/Region::IsVisible
dev_langs:
- c++
topic_type:
- apiref
api_type:
- COM
api_location:
- gdiplusheaders.h
api_name:
- Region::IsVisible
---
# Region::IsVisible(INT,INT,Graphics*)
## -description
The **Region::IsVisible** method determines whether a point is inside this region.
## -parameters
### -param x
Integer that specifies the x-coordinate of the point to test.
### -param y
Integer that specifies the y-coordinate of the point to test.
### -param g
Optional.
Pointer to a <a href="/windows/desktop/api/gdiplusgraphics/nl-gdiplusgraphics-graphics">Graphics</a> object that contains the world and page transformations required to calculate the device coordinates of this region and the point.
The default value is **NULL**.
## -returns
Type: <b><a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a></b>
If the method succeeds, it returns Ok, which is an element of the <a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a> enumeration.
If the method fails, it returns one of the other elements of the <a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a> enumeration.
## -remarks
#### Examples
The following example creates a region from a path and then tests to determine whether a point is inside the region.
```cpp
VOID Example_IsVisibleXY(HDC hdc)
{
Graphics graphics(hdc);
Point points[] = {
Point(110, 20),
Point(120, 30),
Point(100, 60),
Point(120, 70),
Point(150, 60),
Point(140, 10)};
GraphicsPath path;
SolidBrush solidBrush(Color(255, 255, 0, 0));
path.AddClosedCurve(points, 6);
// Create a region from a path.
Region pathRegion(&path);
graphics.FillRegion(&solidBrush, &pathRegion);
// Check to see whether the point (125, 40) is in the region.
INT x = 125;
INT y = 40;
if(pathRegion.IsVisible(x, y, &graphics))
{
// The point is in the region.
}
// Fill a small circle centered at the point (125, 40).
SolidBrush brush(Color(255, 0, 0, 0));
graphics.FillEllipse(&brush, x - 4, y - 4, 8, 8);
}
```
## -see-also
<a href="/windows/desktop/api/gdiplusheaders/nl-gdiplusheaders-region">Region</a>
<a href="/windows/desktop/api/gdiplustypes/nl-gdiplustypes-rect">Rect</a>
<a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a> | 24.968 | 231 | 0.718039 | eng_Latn | 0.803664 |
---
title: CReBar Class | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-mfc
ms.topic: reference
f1_keywords:
- CReBar
- AFXEXT/CReBar
- AFXEXT/CReBar::AddBar
- AFXEXT/CReBar::Create
- AFXEXT/CReBar::GetReBarCtrl
dev_langs:
- C++
helpviewer_keywords:
- CReBar [MFC], AddBar
- CReBar [MFC], Create
- CReBar [MFC], GetReBarCtrl
ms.assetid: c1ad2720-1d33-4106-8e4e-80aa84f93559
author: mikeblome
ms.author: mblome
ms.workload:
- cplusplus
ms.openlocfilehash: 94fc1e0ccad8980e0ed5a1cc0f8c0262502e1398
ms.sourcegitcommit: 76b7653ae443a2b8eb1186b789f8503609d6453e
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 05/04/2018
---
# <a name="crebar-class"></a>CReBar-класс
Панель элементов управления, которая предоставляет макет, сохраняемость и сведения о состоянии для элементов управления главной панели.
## <a name="syntax"></a>Синтаксис
```
class CReBar : public CControlBar
```
## <a name="members"></a>Участники
### <a name="public-methods"></a>Открытые методы
|Имя|Описание|
|----------|-----------------|
|[CReBar::AddBar](#addbar)|Добавляет диапазон главной панели.|
|[CReBar::Create](#create)|Создает контейнер элементов управления и прикрепляет его к `CReBar` объекта.|
|[CReBar::GetReBarCtrl](#getrebarctrl)|Предоставляет прямой доступ к базовой стандартного элемента управления.|
## <a name="remarks"></a>Примечания
Объект Главная панель может содержать множество дочерних окон, обычно других элементов управления, включая поля ввода, панели инструментов и списки. Объект главной панели можно отобразить его дочерних окон на указанном точечном рисунке. Приложение может автоматически изменить главной панели или пользователь может вручную изменить размер главной панели, щелкните и перетащите его захвата.

## <a name="rebar-control"></a>Элемент управления главной панели
На главной панели объект ведет себя аналогично объект панели инструментов. Главная панель использует механизм щелкните и перетащите для изменения размера в своей зоне. Элемент управления главной панели может содержать один или несколько полосы с каждой зоны с любым сочетанием полосу захвата, битовая карта, текстовую метку и дочернего окна. Тем не менее зоны не может содержать более одного дочернего окна.
**CReBar** использует [CReBarCtrl](../../mfc/reference/crebarctrl-class.md) класса для его реализации. Можно получить доступ к этим элементом управления главной панели посредством [GetReBarCtrl](#getrebarctrl) Чтобы воспользоваться преимуществами возможностей настройки элемента управления. Дополнительные сведения об элементах управления главной панели см. в разделе `CReBarCtrl`. Дополнительные сведения об использовании элементов управления главной панели см. в разделе [использование CReBarCtrl](../../mfc/using-crebarctrl.md).
> [!CAUTION]
> Панель закрепление элементов управления MFC не поддерживают главной панели и объекты управления главной панели. Если **CRebar::EnableDocking** вызывается, приложение будет подтвердить.
## <a name="inheritance-hierarchy"></a>Иерархия наследования
[CObject](../../mfc/reference/cobject-class.md)
[CCmdTarget](../../mfc/reference/ccmdtarget-class.md)
[CWnd](../../mfc/reference/cwnd-class.md)
[CControlBar](../../mfc/reference/ccontrolbar-class.md)
`CReBar`
## <a name="requirements"></a>Требования
**Заголовок:** afxext.h
## <a name="addbar"></a> CReBar::AddBar
Call this member function to add a band to the rebar.
```
BOOL AddBar(
CWnd* pBar,
LPCTSTR pszText = NULL,
CBitmap* pbmp = NULL,
DWORD dwStyle = RBBS_GRIPPERALWAYS | RBBS_FIXEDBMP);
BOOL AddBar(
CWnd* pBar,
COLORREF clrFore,
COLORREF clrBack,
LPCTSTR pszText = NULL,
DWORD dwStyle = RBBS_GRIPPERALWAYS);
```
### <a name="parameters"></a>Параметры
`pBar`
Указатель на `CWnd` объект, который представляет дочернее окно, вставляемого в главной панели. Указанный объект должен иметь **WS_CHILD**.
`lpszText`
Указатель на строку, содержащую текст, отображаемый на главной панели. **Значение NULL,** по умолчанию. Текст, содержащийся в `lpszText` не является частью дочернее окно; он находится на главной панели, сам.
`pbmp`
Указатель на `CBitmap` объект отображался на фоне главной панели. **Значение NULL,** по умолчанию.
`dwStyle`
Объект `DWORD` содержащий стиль, применяемый к главной панели. В разделе **fStyle** функции описание структуры Win32 [REBARBANDINFO](http://msdn.microsoft.com/library/windows/desktop/bb774393) полный список стилей аппаратного контроллера управления.
*clrFore*
Объект **COLORREF** значение, представляющее цвет переднего плана для главной панели.
*clrBack*
Объект **COLORREF** значение, представляющее цвет фона главной панели.
### <a name="return-value"></a>Возвращаемое значение
Имеет ненулевое значение в случае успешного выполнения, иначе — 0.
### <a name="example"></a>Пример
[!code-cpp[NVC_MFC_CReBarCtrl#1](../../mfc/reference/codesnippet/cpp/crebar-class_1.cpp)]
## <a name="create"></a> CReBar::Create
Call this member function to create a rebar.
```
virtual BOOL Create(
CWnd* pParentWnd,
DWORD dwCtrlStyle = RBS_BANDBORDERS,
DWORD dwStyle = WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN | CBRS_TOP,
UINT nID = AFX_IDW_REBAR);
```
### <a name="parameters"></a>Параметры
`pParentWnd`
Указатель на `CWnd` окно которого Windows является родительским для строки состояния объекта. Обычно фрейм окна.
`dwCtrlStyle`
Стиль элемента управления главной панели. По умолчанию **RBS_BANDBORDERS**, который отображает ограничить строки для разделения смежные полосами в элементе управления главной панели. В разделе [стили элемента управления главной панели](http://msdn.microsoft.com/library/windows/desktop/bb774377) в Windows SDK список стилей.
`dwStyle`
Стили окна главной панели.
`nID`
Идентификатор дочернего окна главной панели
### <a name="return-value"></a>Возвращаемое значение
Имеет ненулевое значение в случае успешного выполнения, иначе — 0.
### <a name="example"></a>Пример
Далее приведен пример [CReBar::AddBar](#addbar).
## <a name="getrebarctrl"></a> CReBar::GetReBarCtrl
Эта функция-член обеспечивает прямой доступ к базовой стандартного элемента управления.
```
CReBarCtrl& GetReBarCtrl() const;
```
### <a name="return-value"></a>Возвращаемое значение
Ссылку на [CReBarCtrl](../../mfc/reference/crebarctrl-class.md) объекта.
### <a name="remarks"></a>Примечания
Вызовите эту функцию-член, чтобы воспользоваться преимуществами функций Windows главной панели стандартного элемента управления в настройки вашей главной панели. При вызове `GetReBarCtrl`, он возвращает объект ссылки на `CReBarCtrl` таким образом можно использовать любой набор функций-членов.
Дополнительные сведения об использовании `CReBarCtrl` вашей главной панели, в разделе [использование CReBarCtrl](../../mfc/using-crebarctrl.md).
### <a name="example"></a>Пример
[!code-cpp[NVC_MFC_CReBarCtrl#2](../../mfc/reference/codesnippet/cpp/crebar-class_2.cpp)]
## <a name="see-also"></a>См. также
[Пример MFC MFCIE](../../visual-cpp-samples.md)
[CControlBar-класс](../../mfc/reference/ccontrolbar-class.md)
[Диаграмма иерархии](../../mfc/hierarchy-chart.md)
| 42.897727 | 534 | 0.732185 | rus_Cyrl | 0.892768 |
21b9ab30a86012e7d8802e361a60a1e9a458871c | 350 | md | Markdown | document/current/content/features/transaction/2pc-transaction.en.md | codefairy08/incubator-shardingsphere-doc | 445033b6a2cf782609236d1e61e38a8d5b6be605 | [
"Apache-2.0"
] | 2 | 2019-01-27T07:33:40.000Z | 2022-02-02T14:44:15.000Z | document/current/content/features/transaction/2pc-transaction.en.md | slievrly/sharding-sphere-doc | 1659b6ae4f2ebf10da0a00e41f2dfb21e200045b | [
"Apache-2.0"
] | null | null | null | document/current/content/features/transaction/2pc-transaction.en.md | slievrly/sharding-sphere-doc | 1659b6ae4f2ebf10da0a00e41f2dfb21e200045b | [
"Apache-2.0"
] | 1 | 2019-09-26T05:53:18.000Z | 2019-09-26T05:53:18.000Z | +++
pre = "<b>3.4.2. </b>"
toc = true
title = "2PC Transaction"
weight = 2
+++
## Concept
* Fully support cross-database transactions.
* Use Atomikos by default; support to use SPI to upload other XA transaction managers.
## Supported Situation
* Sharding-JDBC can support users' own configurations of XA data source.
* Sharding-Proxy support.
| 18.421053 | 86 | 0.714286 | eng_Latn | 0.941151 |
21bb2b9401c80f36e3b8b630e20dbc50906b8d9a | 2,163 | md | Markdown | README.md | bmurley/devjournalr | 8287405895d8d7017a83f61aba98e0f778bdc819 | [
"MIT"
] | null | null | null | README.md | bmurley/devjournalr | 8287405895d8d7017a83f61aba98e0f778bdc819 | [
"MIT"
] | null | null | null | README.md | bmurley/devjournalr | 8287405895d8d7017a83f61aba98e0f778bdc819 | [
"MIT"
] | null | null | null | # devjournalr
Explorations into Data Journalism - updated as possible.
## 3-30-2021
[//]: # (Just started this data journal as we are beginning a Zoom meeting about scholarships. Want to see if this helps me figure out what system to use to keep data organized and present visually. It's now 15:43 and there's no solution for half of these scholarships and people are spending more time digging into secondary bullshit and not the issue at hand. And now a certain faculty member brings up the inevitable "we wuz robbed" angle.)
[//]: # (Have to be careful because this form of commenting doesn't follow the return.)
[//]: # (It's pretty obvious that I've been shut out of the decision-making, as every time I have tried to speak, someone has talked over me. And I'm tired of trying to fight to get my voice in edgewise. I'm just going to work with my students and do my classes and union stuff and separate myself from the rest of the department, because none of these people is at all interested in getting things done.)
Dialogue mapping:
Dialogue maps help teams and organizations collaborate and deliberate on complicated issues more effectively by facilitating a discussion that breaks down each issue and idea. In other words, dialogue maps visualize the critical thinking process, making it easier to consider problems from every angle.
What is a “wicked problem”?
First coined by Horst Rittel and Melvin M. Webber in 1973, “wicked problems” describe problems that are hard to define and difficult to solve. Though the term originally applied to social and cultural issues, it has also been adopted in product development and other business disciplines.
Wicked problems typically share the following characteristics:
There is no definitive end or stopping point.
Solutions are good or bad, rather than true or false.
There is no formula for solving a wicked problem.
Each problem is unique.
Wicked problems are symptoms of other problems.
There are no alternative solutions.
There are multiple explanations for a problem depending on individual perspectives.
In short: Wicked problems are complex issues that don’t have one clear-cut solution.
| 69.774194 | 443 | 0.793805 | eng_Latn | 0.999853 |
21bb2fdcfd2b2ef3480e7ddd6c8c76bab498d858 | 3,274 | md | Markdown | content/publication/moreira-2016-oikos/index.md | parameterizeit/starter-academic | 547e7b52b00c76868336c98cca5728a786e509fe | [
"MIT"
] | null | null | null | content/publication/moreira-2016-oikos/index.md | parameterizeit/starter-academic | 547e7b52b00c76868336c98cca5728a786e509fe | [
"MIT"
] | null | null | null | content/publication/moreira-2016-oikos/index.md | parameterizeit/starter-academic | 547e7b52b00c76868336c98cca5728a786e509fe | [
"MIT"
] | null | null | null | ---
# Documentation: https://sourcethemes.com/academic/docs/managing-content/
title: Plant defence responses to volatile alert signals are population-specific
subtitle: ''
summary: ''
authors:
- Xoaquín Moreira
- William K. Petry
- Johnattan Hernández-Cumplido
- Stéphanie Morelon
- Betty Benrey
tags: []
categories: []
date: '2016-01-01'
lastmod: 2020-09-19T17:49:53-04:00
featured: false
draft: false
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: ''
preview_only: false
# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2020-09-19T21:49:53.762834Z'
publication_types:
- 2
abstract: ''
publication: '*Oikos*'
url_pdf: moreira-2016-oikos.pdf
doi: 10.1111/oik.02891
url_dataset: https://doi-org/10.5061/dryad.d6rb0
---
### Abstract
Herbivore‐induced volatiles are widespread in plants. They can serve as alert signals that enable neighbouring leaves and plants to pre‐emptively increase defences and avoid herbivory damage. However, our understanding of the factors mediating volatile organic compound (VOC) signal interpretation by receiver plants and the degree to which multiple herbivores affect VOC signals is still limited. Here we investigated whether plant responses to damage‐induced VOC signals were population specific. As a secondary goal, we tested for interference in signal production or reception when plants were subjected to multiple types of herbivore damage. We factorially crossed the population sources of paired *Phaseolus lunatus* plants (same versus different population sources) with a mechanical damage treatment to one member of the pair (i.e. the VOC emitter, damaged versus control), and we measured herbivore damage to the other plant (the VOC receiver) in the field. Prior to the experiment, both emitter and receiver plants were naturally colonized by aphids, enabling us to test the hypothesis that damage from sap‐feeding herbivores interferes with VOC communication by including emitter and receiver aphid abundances as covariates in our analyses. One week after mechanical leaf damage, we removed all the emitter plants from the field and conducted fortnightly surveys of leaf herbivory. We found evidence that receiver plants responded using population‐specific ‘dialects’ where only receivers from the same source population as the damaged emitters suffered less leaf damage upon exposure to the volatile signals. We also found that the abundance of aphids on both emitter and receiver plants did not alter this volatile signalling during both production and reception despite well‐documented defence crosstalk within individual plants that are simultaneously attacked by multiple herbivores. 
Overall, these results show that plant communication is highly sensitive to genetic relatedness between emitter and receiver plants and that communication is resilient to herbivore co‐infestation.
| 72.755556 | 2,096 | 0.799633 | eng_Latn | 0.997049 |
21bb3d87dcac29325e2ff2b4116729582957b5fb | 47 | md | Markdown | docs-src/src/content/api/CsvHelper.TypeConversion/TypeConverterException.md | gsulc/CsvHelper | 27876bf922564d012b25ea52d3ec867eacccc5e3 | [
"MS-PL"
] | null | null | null | docs-src/src/content/api/CsvHelper.TypeConversion/TypeConverterException.md | gsulc/CsvHelper | 27876bf922564d012b25ea52d3ec867eacccc5e3 | [
"MS-PL"
] | null | null | null | docs-src/src/content/api/CsvHelper.TypeConversion/TypeConverterException.md | gsulc/CsvHelper | 27876bf922564d012b25ea52d3ec867eacccc5e3 | [
"MS-PL"
] | null | null | null | # TypeConverterException Class
Coming soon...
| 11.75 | 30 | 0.787234 | kor_Hang | 0.395161 |
21bcf87aa9a16d02b7a7bc5292eb4c21519813d1 | 12,141 | md | Markdown | aboutme.md | ByeongkiJeong/byeongkijeong.github.io | 00928511f4682476718eac596e0af2d92a9c5913 | [
"MIT"
] | null | null | null | aboutme.md | ByeongkiJeong/byeongkijeong.github.io | 00928511f4682476718eac596e0af2d92a9c5913 | [
"MIT"
] | null | null | null | aboutme.md | ByeongkiJeong/byeongkijeong.github.io | 00928511f4682476718eac596e0af2d92a9c5913 | [
"MIT"
] | 2 | 2019-01-14T06:06:12.000Z | 2019-04-30T08:07:39.000Z | ---
layout: page
title: About Me
subtitle:
---
<b>Byeongki Jeong</b>
21세기의 대리석 속에 감춰진 형상을 꺼내주는 일을 합니다.
* Currently working at Optimization&Analytics Office @SK Innovation
* A former member of [Business Intelligence and Data Analytics Laboratory @Konkuk university](https://sites.google.com/view/kkbizintelligence/)
* Research interest: Business NLP, Natural language understanding, Patent intelligence, Social intelligence, Data science
### Education
* <b>Ph.D degree</b> @Department of Industrial engineering, Konkuk university, 2017.03 ~ 2021.02
* <b>Bachelor's degree</b> @Department of Industrial engineering, Konkuk university, 2011.03 ~ 2017.02
### Papers
#### International
* Yoon, J., <b>Jeong, B.,</b> Kim, M., Lee, C.* (2021) An information entropy and latent Dirichlet allocation approach to noise patent filtering. Advanced Engineering Informatics, 47, 101243. [(Link)](https://www.sciencedirect.com/science/article/abs/pii/S1474034620302123)
* <b>Jeong, B.,</b> Ko, N., Yoon, J.*, Son, C. (2021). Trademark-based framework to uncover business diversification opportunities: Application of deep link prediction and competitive intelligence analysis. Computers in industry, 124, 103356. [(Link)](https://www.sciencedirect.com/science/article/pii/S016636152030590X)
* Ko, N., <b>Jeong, B.,</b> Yoon, J.*, Son, C. (2020). Patent-trademark linking framework for business competition analysis. Computers in industry, 122, 103242. [(Link)](https://www.sciencedirect.com/science/article/pii/S0166361519311169)
* Choi, J., <b>Jeong, B.</b>, Yoon, J.*, Coh, B., Lee, J. (2020). A novel approach to evaluating the business potential of intellectual properties: A machine learning-based predictive analysis of patent lifetime. Computers and Industrial Engineering, 145, 106544. [(Link)](https://www.sciencedirect.com/science/article/pii/S0360835220302783)
* Choi, J., <b>Jeong, B.</b> & Yoon, J*. (2019). Technology opportunity discovery under the dynamic change of focus technology fields: Application of sequential pattern mining to patent classifications. Technological Forecasting and Social Change. [(Link)](https://www.sciencedirect.com/science/article/pii/S0040162518312745)
* <b>Jeong, B.</b>, Yoon, J.*, Lee, J. (2019). Social media mining for product planning: a product opportunity mining approach based on topic modeling and sentiment analysis. International Journal of Information Management, 48, 280-290. [(Link)](http://www.sciencedirect.com/science/article/pii/S0268401217302955)
* Ko, N., <b>Jeong, B.</b>, Seo, W., & Yoon, J. (2019). A transferability evaluation model for intellectual property. Computers and Industrial Engineering, 131, 344-355. [(Link)](https://www.sciencedirect.com/science/article/pii/S0360835219302141)
* Yoon, J., <b>Jeong, B.</b>, Lee, W. H., & Kim, J*. (2018). Tracing the evolving trends in electronic skin (e-skin) technology using growth curve and technology position-based patent bibliometrics. IEEE Access, 6(1), 26530-26542. [(Link)](https://ieeexplore.ieee.org/document/8358697/)
* Ko, N., <b>Jeong, B.</b>, Choi, S., & Yoon, J*. (2017). Identifying Product Opportunities Using Social Media Mining: Application of Topic Modeling and Chance Discovery Theory. IEEE Access, 6(1), 1680-1693. [(Link)](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8166751)
* <b>Jeong, B.</b>, & Yoon, J*. (2017). Competitive Intelligence Analysis of Augmented Reality Technology Using Patent Information. Sustainability, 9(4), 497. [(Link)](http://www.mdpi.com/2071-1050/9/4/497)
* Song, C. H., Han, J. W., <b>Jeong, B.</b>, & Yoon, J*. (2017). Mapping the Patent Landscape in the Field of Personalized Medicine. Journal of Pharmaceutical Innovation, 1-11. [(Link)](https://link.springer.com/article/10.1007/s12247-017-9283-z)
#### Domestic
* 정재민, <b>정병기</b>, 윤장혁*. (2020) 비즈니스 기회 발굴을 위한 문제-해결방법 기반의 특허분석 방법 (A problem-solution based patent analysis approach for business opportunity discovery) , 지식재산연구, 15(2), 187-222. [(Link)](https://www.kiip.re.kr/journal/view.do?bd_gb=jor&bd_cd=1&bd_item=0&po_d_gb=&po_no=J00057&po_j_no=J00057&po_a_no=396)
* 정재민, <b>정병기</b>, 윤장혁*. (2019) 온라인 제품리뷰 데이터를 활용한 제품기회의 동적 모니터링 (Dynamic monitoring of product opportunities with online product review data) , 대한산업공학회지, 45(5), 387-401. [(Link)](http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE09218525)
* 윤장혁*, 고남욱, 정병기, 최재웅. (2019) 상표데이터기반 비즈니스 인텔리전스: 활용사례 및 연구이슈 (Trademark data-based business intelligence: applications and future research issues) , 대한산업공학회지, 45(4), 270-283. [(Link)](http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE08760709)
* 오승현, 고남욱, <b>정병기</b>, 최재웅, 윤장혁*, 이재민, 고병열. (2019) 제품혁신수준 분석을 위한 특허정보 기반 지표 개발(Development of patent information-based indexes for product innovativeness analysis) , 대한산업공학회지, 45(1), 11-21. [(Link)](http://www.dbpia.co.kr/Journal/ArticleDetail/NODE07612571)
* 이지호, <b>정병기</b>, 고남욱, 오승현, & 윤장혁*. (2018) 특허의 Problem-Solution 텍스트 마이닝을 활용한 기술경쟁정보 분석 방법. 지식재산연구, 13(3), 171-204. [(Link)](https://www.kiip.re.kr/journal/view.do?bd_gb=jor&bd_cd=1&bd_item=0&po_d_gb=&po_no=J00050&po_j_no=J00050&po_a_no=341)
* 이다혜, 최하영, <b>정병기</b>, & 윤장혁*. (2018). 특허 텍스트마이닝을 활용한 바이오 연료 기술 모니터링. 지식재산연구, 13(1), 285-312. [(Link)](https://www.kiip.re.kr/journal/view.do?bd_gb=jor&bd_cd=1&bd_item=0&po_d_gb=&po_no=J00048&po_j_no=J00048&po_a_no=327)
* <b>정병기</b>, 고남욱, 경진영, 최덕용, & 윤장혁*. (2017). 장벽특허의 무효화 분석을 위한 선행특허기술 발굴 시스템 개발. Entrue Journal of Information Technology, 16(1), 97-108. [(Link)](www.entrue.com/Contents/DownLoadFile?FileName=%5B3-3%5DRN-2016-02-012_%EC%9C%A4%EC%9E%A5%ED%98%81_2%EA%B5%90_p_3.pdf)
* <b>정병기</b>, 김정욱, & 윤장혁*. (2016). 융합기술의 동향분석을 위한 의미론적 특허분석 접근 방법. 지식재산연구, 11(4), 211-240. [(Link)](https://www.kiip.re.kr/journal/view.do?bd_gb=jor&bd_cd=1&bd_item=0&po_d_gb=&po_no=J00043&po_j_no=J00043&po_a_no=289)
* 김정욱, <b>정병기</b>, & 윤장혁*. (2016). 네트워크분석과 기술성장모형을 이용한 기술기획: 증강현실 기술의 특허를 활용하여. 대한산업공학회지, 42(5). [(Link)](http://www.dbpia.co.kr/Journal/ArticleDetail/NODE07021833)
* 김용현, <b>정병기</b>, & 윤장혁*. (2016). 특허경영활동이 기업 경영성과에 미치는 영향에 관한 연구: 국내 의료기기 제조 기업을 중심으로. 산업경영시스템학회지, 39(1), 1-8. [(Link)](http://www.ksie.or.kr/bbs/?bid=thesis&volumn=%C1%A639%B1%C7%20%C1%A61%C8%A3)
#### Conference
* <b>정병기</b>, & 윤장혁. (2019). 딥 러닝을 활용한 상표기반의 비즈니스 기회 발굴. 대한산업공학회 추계학술대회, 서울대학교, 서울
* <b>정병기</b>, & 윤장혁. (2018). Event Detection & Tracking 기반의 이머징 니즈 탐지 방법. 대한산업공학회 춘계공동학술대회, 현대호텔, 경주
* <b>정병기</b>, & 윤장혁. (2017). 텍스트마이닝과 메타휴리스틱알고리즘 기반의 신규 특허 컨셉 개발 방법. 대한산업공학회 춘계공동학술대회, Expo Convention Center, 여수
* 고남욱, <b>정병기</b>, & 윤장혁. (2017). 특허거래정보를 활용한 IP평가모형: 딥뉴럴네트워크 접근. 대한산업공학회 춘계공동학술대회, Expo Convention Center, 여수
* 최재웅, <b>정병기</b>, & 윤장혁. (2017). 딥러닝을 활용한 특허수명 예측분석. 대한산업공학회 춘계공동학술대회, Expo Convention Center, 여수
* <b>Jeong B.</b>, & Yoon J. (2016). Identifying product opportunities using topic modeling and sentiment analysis of social media data, APIEMS, Taipei
* Ko N., <b>Jeong B.</b>, & Yoon J. (2016). Product improvement opportunity analysis based on text mining of online VOC: application of topic modeling and chance discovery theory, APIEMS, Taipei
* <b>정병기</b>, 고남욱, & 윤장혁. (2016). 딥뉴럴네트워크 기반의 특허수명 예측모형 개발. 대한산업공학회 추계학술대회, 고려대학교, 서울
* 고남욱, <b>정병기</b>, & 윤장혁. (2016). 특허의 권리변동 정보를 이용한 IP평가모형 개발. 대한산업공학회 추계학술대회, 고려대학교, 서울
* <b>정병기</b>, & 윤장혁. (2016). 토픽모델링과 감성분석을 활용한 제품 개발 기회 도출. 대한산업공학회 춘계공동학술대회, ICC, 제주
* <b>정병기</b>, 김정욱, & 윤장혁. (2015). 토픽모델링을 활용한 증강현실기술의 특허경쟁정보분석. 대한산업공학회 추계학술대회, 연세대학교, 서울
* 김무진, <b>정병기</b>, & 윤장혁. (2015). 노이즈 특허 필터링 시스템 개발. 대한산업공학회 추계학술대회, 연세대학교, 서울
* 김용현, <b>정병기</b>, & 윤장혁. (2015). 특허경영활동이 기업 경영성과에 미치는 영향에 관한 연구: 국내 의료기기 제조 기업을 중심으로. 대한산업공학회 춘계공동학술대회, 라마다호텔, 제주
### Projects
* <b>2019.06~2019.12</b>, 빅데이터 등을 활용한 평가항목 객관화 체계 구축 (Development of a quantified evaluation framework for KTRS using big data), <i>기술보증기금</i>, Researcher
* <b>2018.06~2021.05</b>, 소셜-기술 데이터 애널리틱스 기반의 비즈니스 발굴-평가-운영 체계 (Social-technical data analytics-based business intelligence for business discovery, evaluation and operation), <i>한국연구재단</i>, Researcher
* <b>2018.11~2019.07</b>, 지적측량 업무량 변동예측 모델 개발을 위한 LX 내외부 데이터 처리 및 분석 방안 연구(Study on LX internal and external data processing and analysis for development of prediction model of cadastral survey workload), <i>한국국토정보공사</i>, Researcher
* <b>2018.11~2018.12</b>, 빅데이터 등을 활용한 KTRS 평가지표 객관화(Quantification of KTRS evaluation using big data), <i>기술보증기금</i>, Researcher
* <b>2018.06~2018.12</b>, 지식재산(IP) 가치평가 지원방안 연구(Study on automated process of intellectual property valuation), <i>기술보증기금</i>, Researcher
* <b>2018.05~2018.12</b>, 근미래이슈 탐지를 위한 오픈데이터 유형탐색 및 분석모델 연구 (Exploration and analysis model development of open data sources for near future issue detection), <i>한국과학기술정보연구원</i>, Researcher
* <b>2017.04~2018.03</b>, 기술역량기반 응용기술 후보 및 주요개발자 발굴 및 추천 시스템 (A system for discovering application technology candidates and their key researchers based on technological competence), <i>연구성과실용화진흥원</i>, Researcher
* <b>2017.03~2017.10</b>, 제품혁신 지표 및 분석모형 개발 (Development of product innovation indexes and technology opportunity discovery models), <i>한국과학기술정보연구원(KISTI)</i>, Researcher
* <b>2017.05~2017.07</b>, 특허자동평가 시스템 고도화 방안 연구 (A study on the advancement design of a patent evaluation system), <i>기술보증기금</i>, Researcher
* <b>2016.07~2016.12</b>, 지식재산(IP) 평가지원 시스템 프레임 설계 및 엔진 개발 (Intellectual property evaluation system framework design and its development), <i>기술보증기금</i>, Researcher
* <b>2016.06~2017.05</b>, NRF 보유 연구제안서의 분석을 통한 국내 R&D 동향 분석 (National R&D trend analysis using NRF research proposals), <i>한국연구재단(NRF)</i>, Researcher
* <b>2016.05~2017.04</b>, 토픽모델링과 감성분석을 활용한 제품 개발 기회 도출 (Identifying product development opportunities using topic modeling and sentiment analysis), <i>건국대학교</i>, Researcher
* <b>2015.11~2016.10</b>, 텍스트마이닝 기반의 시맨틱 특허분석 시스템 개발 (Development of a semantic patent analysis system based on text mining) (sub project of 장벽특허의 무효자료 자동 추출 시스템 개발), <i>중소기업청 (SMBA)</i>, Researcher
* <b>2015.09~2018.08</b>, 기술지능 기반의 차세대 제품-서비스 시스템 개발 접근 방법 (A technology intelligence-based approach for product-service systems development), <i>한국연구재단(NRF)</i>, Researcher
* <b>2015.05~2016.04</b>, 특허활동이 기업의 경영성과에 미치는 영향에 관한 연구 (The effects of qualitative patenting activities on firm outcomes), <i>건국대학교</i>, Researcher
* <b>2015.08~2015.10</b>, 특허 지능 방법론 연구: 텍스트 마이닝의 활용 (A patent intelligence methodology: application of text mining), <i>특허법인 해담</i>, Researcher
* <b>2015.03~2015.10</b>, 기술인지 모형 개발을 통한 제품 네트워크 구축 (Construction of product networks based on technology recognition models), <i>한국과학기술정보연구원(KISTI)</i>, Researcher
* <b>2014.12~2015.11</b>, 기업의 보유특허를 활용한 신기술기회 탐색 방법론: 인용정보기반 협업필터링의 적용 (A methodology to explore new technology opportunities based on firms' own patents), <i>건국대학교</i>, Researcher
* <b>2014.03~2014.10</b>, 대용량 데이터 분석 기술 개발 (A Study on Big Data Analysis Technology), <i>한국과학기술정보연구원(KISTI)</i>, Researcher
* <b>2012.09~2015.08</b>, 기술지능 기반의 차세대 제품-서비스 시스템 개발 접근 방법 (A technology intelligence-based approach for product-service systems development), <i>한국연구재단(NRF)</i>, Researcher
### Patent
* 윤장혁, 이지호, <b>정병기</b>. 사용자 생성 데이터 기반의 사용자 경험 분석 장치 및 방법 (APPARATUS AND METHOD FOR ANALYZING USER EXPERIENCE BASED ON USER-GENERATED DATA). 출원번호: 10-2020-0162997. 대한민국특허청.
* 유문재, 장영규, 송정진, 최은철, 김응찬, 고남욱, <b>정병기</b>, 윤장혁, IP 평가 모형을 이용한 IP 평가 방법 및 그 장치(IP VALUATION METHOD USING IP VALUATION MODEL AND APPARATUS THEREOF). 등록번호: 102078289. 대한민국특허청.
* 윤장혁, 이지호, <b>정병기</b>, 고남욱, 오승현. 기술 경쟁 정보 제공 장치 및 방법 (DEVICE AND METHOD FOR PROVIDING TECHNOLOGICAL COMPETITIVE INTELLIGENCE). 등록번호: 10-2221267. 대한민국특허청.
* 윤장혁, 최재웅, <b>정병기</b>. 기술 기회 발굴 장치 및 방법 (Device and method for discovering technology opportunities). 출원번호: 10-2018-0155555. 대한민국특허청.
* 윤장혁, 최재웅, <b>정병기</b>, 이재민, 고병열, 권오진. 특허수명예측장치 및 그 동작 방법(DISCRIMINATION APPARATUS FOR PATENT LIFETIME, AND CONTROL METHOD THEREOF). 등록번호: 101877235. 대한민국특허청.
### Prize
* <b>정병기</b>, 김정욱. (2016). 의미론적 융합 기술 분석 방법: 토픽모델링과 교차영향분석의 활용, 특허청장상, <i>제 11회 지식재산우수논문 공모전</i>, 특허청(KIPO)
### Contact me
* [jbk958(at)gmail.com](mailto:jbk958@gmail.com)
* [byeongkij(at)konkuk.ac.kr](mailto:byeongkij@konkuk.ac.kr)
| 127.8 | 344 | 0.719545 | kor_Hang | 0.96701 |
21be6344d765fffdbd5431cfb688ea4714ea9416 | 2,062 | md | Markdown | README.md | matthewbush55/note-taker | 4e18b76489ef7c6fe11d3085642e175948a20aa2 | [
"MIT"
] | null | null | null | README.md | matthewbush55/note-taker | 4e18b76489ef7c6fe11d3085642e175948a20aa2 | [
"MIT"
] | null | null | null | README.md | matthewbush55/note-taker | 4e18b76489ef7c6fe11d3085642e175948a20aa2 | [
"MIT"
] | null | null | null | # Note Taker
[](https://opensource.org/licenses/MIT)
## Table of Contents:
- [Description](#description)
- [Installation](#installation)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)
- [Questions](#questions)
## Description
This application allows users to use a web interface to create, save and delete notes with titles and content. It uses `node.js` & `JavaScript`, and `express.js` The app also allows users to navigate to notes that were taken previously.
Live application hosted on Heroku: https://note-taker-app-mrb.herokuapp.com/

## Installation
To install necessary dependencies, run the following command:
npm i
## Usage
After pulling down the repository, start the express server by running `node server.js`. Then, open a browser and navigate to http://localhost:3001 (a live version is hosted at https://note-taker-app-mrb.herokuapp.com/). You will be prompted to open the app by clicking "Get Started." To create a new note, click in the "Note Title" section and add a title. Then, click in the "Note Text" section and add the contents of your note. When both the title and content are filled in, a "save" button will appear in the upper-right corner, which can be clicked to save the note to the list. To delete a note, click the trash can icon next to the note you want to delete. Click on previous notes to pull up their content.
## License
This project is licensed under [License: MIT](https://opensource.org/licenses/MIT)
## Contributing
To contribute to this project (or any others), please contact me using the information in the Questions section below or by submitting a pull request.
> For more information on project contribution guidelines, please reference [Contributor Covenant](https://www.contributor-covenant.org/)
## Questions?
If you have any questions, please feel free to reach out. Thanks!
GitHub: https://github.com/matthewbush55
Email: matthewbush55@gmail.com
| 42.081633 | 714 | 0.761397 | eng_Latn | 0.983757 |
21bf751f1dae6f9fb768479507cbae9d823087d3 | 15,502 | md | Markdown | docs/standard/data/xml/rules-for-inferring-schema-node-types-and-structure.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/data/xml/rules-for-inferring-schema-node-types-and-structure.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/data/xml/rules-for-inferring-schema-node-types-and-structure.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Regeln für Rückschlussschemaknotentypen und Struktur
ms.date: 03/30/2017
ms.technology: dotnet-standard
ms.assetid: d74ce896-717d-4871-8fd9-b070e2f53cb0
ms.openlocfilehash: 6d66384dea7018bcc3b2dd8fde96f4fa2653f8e8
ms.sourcegitcommit: 5f236cd78cf09593c8945a7d753e0850e96a0b80
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 01/07/2020
ms.locfileid: "75710244"
---
# <a name="rules-for-inferring-schema-node-types-and-structure"></a>Rules for Inferring Schema Node Types and Structure

This topic describes how the schema inference process translates the node types found in an XML document into an XML Schema definition language (XSD) structure.
## <a name="element-inference-rules"></a>Element Inference Rules

This section describes the inference rules for element declarations. There are eight element declaration structures that can be inferred:

1. Element of simple type

2. Empty element

3. Empty element with attributes

4. Element with attributes and simple content

5. Element with a sequence of child elements

6. Element with a sequence of child elements and attributes

7. Element with a sequence of choices of child elements

8. Element with a sequence of choices of child elements and attributes

> [!NOTE]
> All `complexType` declarations are inferred as anonymous types. The only global element inferred is the root element; all other elements are local.

For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
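To make these structures concrete, the following is a minimal sketch (in Python, using only the standard library) of how a single element instance could be classified against a subset of the eight structures. This is an illustration only, not the .NET `XmlSchemaInference` implementation: the function name is invented, simple-type refinement is omitted, and the choice structures (7 and 8) are skipped because detecting them requires comparing multiple occurrences of an element.

```python
import xml.etree.ElementTree as ET

def classify(xml_text):
    # Illustrative classifier for a single element instance. Covers
    # structures 1-6 from the list above; choice detection (7 and 8)
    # needs multiple occurrences and is omitted from this sketch.
    e = ET.fromstring(xml_text)
    has_text = bool((e.text or "").strip())
    if len(e) == 0 and not e.attrib:
        return "simple typed element" if has_text else "empty element"
    if len(e) == 0 and e.attrib:
        return ("element with attributes and simple content"
                if has_text else "empty element with attributes")
    return ("element with a sequence of child elements and attributes"
            if e.attrib else "element with a sequence of child elements")

print(classify('<empty attribute1="text"/>'))
# empty element with attributes
```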
### <a name="simple-typed-element"></a>Simple Typed Element

The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema that is generated. The bolded element represents the schema inferred for the simple typed element.

For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<root>text</root>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="root" type="xs:string" />`<br /><br /> `</xs:schema>`|
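As a rough sketch of this rule — not the actual <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> implementation, which also refines simple types beyond `xs:string` (for example to `xs:int` or `xs:date`) — a text-only element with no attributes and no children maps to a single `xs:string` element declaration:

```python
import xml.etree.ElementTree as ET

def infer_simple_element(xml_text):
    # Sketch of the rule above: a root element with text content, no
    # attributes, and no child elements is inferred as xs:string.
    root = ET.fromstring(xml_text)
    if len(root) == 0 and not root.attrib and (root.text or "").strip():
        return '<xs:element name="{}" type="xs:string" />'.format(root.tag)
    raise ValueError("not a simple typed element")

print(infer_simple_element("<root>text</root>"))
# <xs:element name="root" type="xs:string" />
```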
### <a name="empty-element"></a>Empty Element

The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema that is generated. The bolded element represents the schema inferred for the empty element.

For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<empty/>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="empty" />`<br /><br /> `</xs:schema>`|
### <a name="empty-element-with-attributes"></a>Empty Element with Attributes

The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema that is generated. The bolded elements represent the schema inferred for the empty element with attributes.

For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<empty attribute1="text"/>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="empty">`<br /><br /> `<xs:complexType>`<br /><br /> `<xs:attribute name="attribute1" type="xs:string" use="required" />`<br /><br /> `</xs:complexType>`<br /><br /> `</xs:element>`<br /><br /> `</xs:schema>`|
### <a name="element-with-attributes-and-simple-content"></a>Element with Attributes and Simple Content

The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema that is generated. The bolded elements represent the schema inferred for an element with attributes and simple content.

For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<root attribute1="text">value</root>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="root">`<br /><br /> `<xs:complexType>`<br /><br /> `<xs:simpleContent>`<br /><br /> `<xs:extension base="xs:string">`<br /><br /> `<xs:attribute name="attribute1" type="xs:string" use="required" />`<br /><br /> `</xs:extension>`<br /><br /> `</xs:simpleContent>`<br /><br /> `</xs:complexType>`<br /><br /> `</xs:element>`<br /><br /> `</xs:schema>`|
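This rule can be sketched as follows — an illustrative approximation only (the function name is invented, and every attribute and the base type are fixed to `xs:string`, whereas the real inference engine refines types): text content plus attributes becomes an anonymous `complexType` whose `xs:simpleContent` extends `xs:string`, with one required attribute declaration per attribute.

```python
import xml.etree.ElementTree as ET

def infer_simple_content(xml_text):
    # Sketch of the rule above: an element with text content and attributes
    # is modeled as xs:simpleContent extending xs:string, plus one required
    # xs:string attribute declaration per attribute on the element.
    root = ET.fromstring(xml_text)
    attrs = "".join(
        '        <xs:attribute name="{}" type="xs:string" use="required" />\n'.format(n)
        for n in root.attrib
    )
    return ('<xs:element name="{}">\n'
            '  <xs:complexType>\n'
            '    <xs:simpleContent>\n'
            '      <xs:extension base="xs:string">\n'
            '{}'
            '      </xs:extension>\n'
            '    </xs:simpleContent>\n'
            '  </xs:complexType>\n'
            '</xs:element>').format(root.tag, attrs)

print(infer_simple_content('<root attribute1="text">value</root>'))
```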
### <a name="element-with-a-sequence-of-child-elements"></a>Element with a Sequence of Child Elements

The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema that is generated. The bolded elements represent the schema inferred for an element with a sequence of child elements.

> [!NOTE]
> An element is also treated as a sequence if it has only one child element.

For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<root>`<br /><br /> `<subElement/>`<br /><br /> `</root>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="root">`<br /><br /> `<xs:complexType>`<br /><br /> `<xs:sequence>`<br /><br /> `<xs:element name="subElement" />`<br /><br /> `</xs:sequence>`<br /><br /> `</xs:complexType>`<br /><br /> `</xs:element>`<br /><br /> `</xs:schema>`|
### <a name="element-with-a-sequence-of-child-elements-and-attributes"></a>Element with a sequence of child elements and attributes
The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema generated from it. The bolded elements represent the schema inferred for an element with a sequence of child elements and attributes.
> [!NOTE]
> An element is also treated as a sequence if it has only one child element.
For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<root attribute1="text">`<br /><br /> `<subElement1/>`<br /><br /> `<subElement2/>`<br /><br /> `</root>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="root">`<br /><br /> `<xs:complexType>`<br /><br /> `<xs:sequence>`<br /><br /> `<xs:element name="subElement1" />`<br /><br /> `<xs:element name="subElement2" />`<br /><br /> `</xs:sequence>`<br /><br /> `<xs:attribute name="attribute1" type="xs:string" use="required" />`<br /><br /> `</xs:complexType>`<br /><br /> `</xs:element>`<br /><br /> `</xs:schema>`|
### <a name="element-with-a-sequence-and-choices-of-child-elements"></a>Element with a sequence and choices of child elements
The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema generated from it. The bolded elements represent the schema inferred for an element with a sequence and choices of child elements.
> [!NOTE]
> The `maxOccurs` attribute of the `xs:choice` element is set to `"unbounded"` in the inferred schema.
For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<root>`<br /><br /> `<subElement1/>`<br /><br /> `<subElement2/>`<br /><br /> `<subElement1/>`<br /><br /> `</root>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="root">`<br /><br /> `<xs:complexType>`<br /><br /> `<xs:sequence>`<br /><br /> `<xs:choice maxOccurs="unbounded">`<br /><br /> `<xs:element name="subElement1" />`<br /><br /> `<xs:element name="subElement2" />`<br /><br /> `</xs:choice>`<br /><br /> `</xs:sequence>`<br /><br /> `</xs:complexType>`<br /><br /> `</xs:element>`<br /><br /> `</xs:schema>`|
### <a name="element-with-a-sequence-and-choice-of-child-elements-and-attributes"></a>Element with a sequence and choice of child elements and attributes
The following table shows the XML input to the <xref:System.Xml.Schema.XmlSchemaInference.InferSchema%2A> method, and the XML schema generated from it. The bolded elements represent the schema inferred for an element with a sequence and choice of child elements and attributes.
> [!NOTE]
> The `maxOccurs` attribute of the `xs:choice` element is set to `"unbounded"` in the inferred schema.
For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
|XML|Schema|
|---------|------------|
|`<?xml version="1.0"?>`<br /><br /> `<root attribute1="text">`<br /><br /> `<subElement1/>`<br /><br /> `<subElement2/>`<br /><br /> `<subElement1/>`<br /><br /> `</root>`|`<?xml version="1.0" encoding="utf-8"?>`<br /><br /> `<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xml`<br /><br /> `ns:xs="http://www.w3.org/2001/XMLSchema">`<br /><br /> `<xs:element name="root">`<br /><br /> `<xs:complexType>`<br /><br /> `<xs:sequence>`<br /><br /> `<xs:choice maxOccurs="unbounded">`<br /><br /> `<xs:element name="subElement1" />`<br /><br /> `<xs:element name="subElement2" />`<br /><br /> `</xs:choice>`<br /><br /> `</xs:sequence>`<br /><br /> `<xs:attribute name="attribute1" type="xs:string" use="required" />`<br /><br /> `</xs:complexType>`<br /><br /> `</xs:element>`<br /><br /> `</xs:schema>`|
### <a name="attribute-processing"></a>Attribute processing
When a new attribute is encountered on a node, it is added to the node's inferred definition with `use="required"`. The next time the same node is found in the instance, the inference process compares the attributes of the current instance with the ones already inferred. If any of the previously inferred attributes are missing from the instance, `use="optional"` is added to the attribute definition. New attributes are added to existing declarations with `use="optional"`.
### <a name="occurrence-constraints"></a>Occurrence constraints
During the schema inference process, the `minOccurs` and `maxOccurs` attributes are generated with the values `"0"` or `"1"`, and `"1"` or `"unbounded"`, for the inferred components of a schema. The values `"1"` and `"unbounded"` are used only when the values `"0"` and `"1"` cannot validate the XML document (for example, if `minOccurs="0"` does not accurately describe an element, `minOccurs="1"` is used).
### <a name="mixed-content"></a>Mixed content
If an element contains mixed content (for example, text interspersed with elements), the `mixed="true"` attribute is generated for the inferred complex type definition.
## <a name="other-node-type-inference-rules"></a>Other node type inference rules
The following table shows the inference rules for processing instruction, comment, entity reference, CDATA, document type, and namespace nodes.
|Node type|Translation|
|---------------|-----------------|
|Processing instruction|Ignored.|
|Comment|Ignored.|
|Entity reference|The <xref:System.Xml.Schema.XmlSchemaInference> class does not handle entity references. If an XML document contains entity references, you need to use a reader that expands the entities. For example, you can pass an <xref:System.Xml.XmlTextReader> class with its <xref:System.Xml.XmlTextReader.EntityHandling%2A> property set to the <xref:System.Xml.EntityHandling.ExpandEntities> member as a parameter. If entity references are encountered and the reader does not expand entities, an exception is thrown.|
|CDATA|All `<![CDATA[ … ]]>` sections in an XML document are inferred as `xs:string`.|
|Document type|Ignored.|
|Namespaces|Ignored.|
For more information about the schema inference process, see [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md).
## <a name="see-also"></a>See also
- <xref:System.Xml.Schema.XmlSchemaInference>
- [XML Schema Object Model (SOM)](../../../../docs/standard/data/xml/xml-schema-object-model-som.md)
- [Inferring an XML Schema](../../../../docs/standard/data/xml/inferring-an-xml-schema.md)
- [Inferring Schemas from XML Documents](../../../../docs/standard/data/xml/inferring-schemas-from-xml-documents.md)
- [Rules for Inferring Simple Types](../../../../docs/standard/data/xml/rules-for-inferring-simple-types.md)
| 100.662338 | 833 | 0.708618 | deu_Latn | 0.884807 |
21bfb6c9de8034c5d8ef304cf65c4900a348bdd4 | 1,242 | md | Markdown | README.md | Prometheus-life/Gardening | 1c52e72053488f8fe8cd16f842950dd219e3dea1 | [
"Unlicense"
] | null | null | null | README.md | Prometheus-life/Gardening | 1c52e72053488f8fe8cd16f842950dd219e3dea1 | [
"Unlicense"
] | null | null | null | README.md | Prometheus-life/Gardening | 1c52e72053488f8fe8cd16f842950dd219e3dea1 | [
"Unlicense"
] | null | null | null | # Gardening
This is a project focused on the lowest possible marginal cost for gardens, covering geoponics, hydroponics, and aeroponics and their associated efficiencies.
## [Geoponics](https://en.wikipedia.org/wiki/Agricultural_science)
Geoponics is the most basic form of agriculture. It uses soil as the growing medium and is the most wasteful of resources: it typically requires more maintenance for weed removal, uses the most water, and needs the most land for a given level of productivity.
## [Hydroponics](https://en.wikipedia.org/wiki/Hydroponics)
Hydroponics removes the soil medium and uses a nutrient solution to sustain the plants. It typically requires maintenance to ensure that the plants' roots don't become oversoaked in water; the roots need dry time to stay healthy.
## [Aeroponics](https://en.wikipedia.org/wiki/Aeroponics)
Aeroponics is a subset of hydroponics that uses misters or drippers to deliver the nutrient solution to the roots of the plants. With the right automation, this form of agriculture provides the highest yield while requiring little maintenance. It also provides continuous exposure of the roots to oxygen, which is essential for plant growth.
| 77.625 | 362 | 0.809179 | eng_Latn | 0.99931 |
21c02397ab67c6e242178033dd2f6e7eaf6f29d6 | 210 | md | Markdown | formatted/June 19th, 2020.md | lxyer/roam_test | 7f671a2ea391d2122a58b40a64cc3068d9b75de9 | [
"Apache-2.0"
] | null | null | null | formatted/June 19th, 2020.md | lxyer/roam_test | 7f671a2ea391d2122a58b40a64cc3068d9b75de9 | [
"Apache-2.0"
] | null | null | null | formatted/June 19th, 2020.md | lxyer/roam_test | 7f671a2ea391d2122a58b40a64cc3068d9b75de9 | [
"Apache-2.0"
] | null | null | null | - [x] 学习RR的教程
- [x] 下载相关RR Chrome插件
- [老婆](<老婆.md>)今天发了微信:2020年6月,不知不觉,薇薇美甲已经和大家一起走过了四年的光阴。 这4年,是收获更是成长,很幸福有一群可爱的顾客一直在默默的陪着我们,我们像朋友一样一起度过最好的美丽时光! 值此4周年店庆之际,特推出史无前例的店庆钜惠活动来感谢您的支持与陪伴,有需要的小伙伴可微信提前预定,真的是超大力度哦~爱你们
| 52.5 | 173 | 0.819048 | yue_Hant | 0.371527 |
21c05fc599f56ca1bcfa93eb0c404c46c165ba10 | 125 | md | Markdown | README.md | magero-team/magento-packager | 785c61c813224ca49dcac997f61afb03aed7e84f | [
"MIT"
] | null | null | null | README.md | magero-team/magento-packager | 785c61c813224ca49dcac997f61afb03aed7e84f | [
"MIT"
] | null | null | null | README.md | magero-team/magento-packager | 785c61c813224ca49dcac997f61afb03aed7e84f | [
"MIT"
] | null | null | null | Magero Magento Packager
============================
Tool for building packages of Magento extensions from source files
| 25 | 70 | 0.632 | eng_Latn | 0.875061 |
21c08999660905d170f2396fbd8f558357dee331 | 238 | md | Markdown | tests/test_generators/output/kitchen_sink_md2/description.md | dalito/linkml | 1bbf442f5c0dab5b6a4eb3309ef25b95c74d0892 | [
"CC0-1.0"
] | null | null | null | tests/test_generators/output/kitchen_sink_md2/description.md | dalito/linkml | 1bbf442f5c0dab5b6a4eb3309ef25b95c74d0892 | [
"CC0-1.0"
] | null | null | null | tests/test_generators/output/kitchen_sink_md2/description.md | dalito/linkml | 1bbf442f5c0dab5b6a4eb3309ef25b95c74d0892 | [
"CC0-1.0"
] | null | null | null | # Slot: description
URI: [https://w3id.org/linkml/tests/core/description](https://w3id.org/linkml/tests/core/description)
<!-- no inheritance hierarchy -->
## Properties
* Range: None
## Identifier and Mapping Information
| 10.818182 | 101 | 0.697479 | yue_Hant | 0.401726 |
21c18786292ece46ce69bbf1c2c9f1ba6c3d9712 | 12,939 | md | Markdown | articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md | MartinFuhrmann/azure-docs.de-de | 64a46d56152f41aad992d28721a8707649bde76a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md | MartinFuhrmann/azure-docs.de-de | 64a46d56152f41aad992d28721a8707649bde76a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md | MartinFuhrmann/azure-docs.de-de | 64a46d56152f41aad992d28721a8707649bde76a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Ändern des Lizenzierungsmodells für eine SQL Server-VM in Azure
description: Hier erfahren Sie, wie Sie die Lizenzierung für eine SQL Server-VM in Azure von „nutzungsbasierter Bezahlung“ (Pay-as-you-go, PAYG) in „Bring-Your-Own-License“ (BYOL) mit dem Azure-Hybridvorteil ändern.
services: virtual-machines-windows
documentationcenter: na
author: MashaMSFT
manager: jroth
tags: azure-resource-manager
ms.service: virtual-machines-sql
ms.devlang: na
ms.topic: conceptual
ms.tgt_pltfrm: vm-windows-sql-server
ms.workload: iaas-sql-server
ms.date: 11/13/2019
ms.author: mathoma
ms.reviewer: jroth
ms.openlocfilehash: 00262b48b8fa2fd1292554155e8ec8e933d886e6
ms.sourcegitcommit: 2f8ff235b1456ccfd527e07d55149e0c0f0647cc
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 01/07/2020
ms.locfileid: "75690909"
---
# <a name="change-the-license-model-for-a-sql-server-virtual-machine-in-azure"></a>Change the license model for a SQL Server virtual machine in Azure
This article describes how to change the license model for a SQL Server virtual machine (VM) in Azure by using the new SQL VM resource provider, **Microsoft.SqlVirtualMachine**.
There are two license models for a VM that hosts SQL Server: pay-as-you-go and Azure Hybrid Benefit (AHB). You can change the license model of your SQL Server VM by using the Azure portal, the Azure CLI, or PowerShell.
Pay-as-you-go means that the per-second cost of running the Azure VM includes the cost of the SQL Server license.
The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) allows you to use your own SQL Server license with a VM that runs SQL Server.
With the Azure Hybrid Benefit, you can use SQL Server licenses with Software Assurance ("qualified license") on Azure VMs. Customers who use the Azure Hybrid Benefit aren't charged for the use of a SQL Server license on a VM. But they must still pay for the cost of the underlying cloud compute (that is, the base rate), storage, and backups. They must also pay for I/O associated with their use of the services (as applicable).
According to the Microsoft Product Terms: "...customers must indicate that they are using Azure SQL Database (Managed Instance, Elastic Pool, and Single Database), Azure Data Factory, SQL Server Integration Services, or SQL Server Virtual Machines under the Azure Hybrid Benefit for SQL Server when configuring workloads on Azure."
To indicate the use of the Azure Hybrid Benefit for SQL Server on an Azure VM and stay compliant, you have three options:
- Provision a VM by using a bring-your-own-license SQL Server image from Azure Marketplace. This option is available only to customers who have an Enterprise Agreement.
- Provision a VM by using a pay-as-you-go SQL Server image from Azure Marketplace, and activate the Azure Hybrid Benefit.
- Self-install SQL Server on an Azure VM, [manually register it with the SQL VM resource provider](virtual-machines-windows-sql-register-with-resource-provider.md), and activate the Azure Hybrid Benefit.
The SQL Server license type is set when the VM is provisioned. You can change it at any time afterward. Switching between license models involves no downtime, doesn't restart the VM or the SQL Server service, adds no extra costs, and takes effect immediately. Activating the Azure Hybrid Benefit *reduces* cost.
## <a name="prerequisites"></a>Prerequisites
Changing the license model of your SQL Server VM has the following requirements:
- An [Azure subscription](https://azure.microsoft.com/free/).
- A [SQL Server VM](https://docs.microsoft.com/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-server-provision) registered with the [SQL VM resource provider](virtual-machines-windows-sql-register-with-resource-provider.md).
- [Software Assurance](https://www.microsoft.com/licensing/licensing-programs/software-assurance-default) is a requirement to use the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
## <a name="change-the-license-for-vms-already-registered-with-the-resource-provider"></a>Change the license for VMs already registered with the resource provider
# <a name="portaltabazure-portal"></a>[Portal](#tab/azure-portal)
[!INCLUDE [windows-virtual-machines-sql-use-new-management-blade](../../../../includes/windows-virtual-machines-sql-new-resource.md)]
You can change the license model directly from the portal:
1. Open the [Azure portal](https://portal.azure.com) and open the [SQL virtual machines resource](virtual-machines-windows-sql-manage-portal.md#access-the-sql-virtual-machines-resource) for your SQL Server VM.
1. Select **Configure** under **Settings**.
1. Select **Azure Hybrid Benefit**, and select the check box to confirm that you have a SQL Server license with Software Assurance.
1. Select **Apply** at the bottom of the **Configure** page.

# <a name="azure-clitabazure-cli"></a>[Azure CLI](#tab/azure-cli)
You can use the Azure CLI to switch your license model.
The following code snippet switches your license model from pay-as-you-go to bring-your-own-license (that is, to using the Azure Hybrid Benefit):
```azurecli-interactive
# Switch your SQL Server VM license from pay-as-you-go to bring-your-own
# example: az sql vm update -n AHBTest -g AHBTest --license-type AHUB
az sql vm update -n <VMName> -g <ResourceGroupName> --license-type AHUB
```
The following code snippet switches your bring-your-own-license model to pay-as-you-go:
```azurecli-interactive
# Switch your SQL Server VM license from bring-your-own to pay-as-you-go
# example: az sql vm update -n AHBTest -g AHBTest --license-type PAYG
az sql vm update -n <VMName> -g <ResourceGroupName> --license-type PAYG
```
# <a name="powershelltabazure-powershell"></a>[PowerShell](#tab/azure-powershell)
You can also change your license model by using PowerShell.
The following code snippet switches your license model from pay-as-you-go to bring-your-own-license (that is, to using the Azure Hybrid Benefit):
```powershell-interactive
# Switch your SQL Server VM license from pay-as-you-go to bring-your-own
Update-AzSqlVM -ResourceGroupName <resource_group_name> -Name <VM_name> -LicenseType AHUB
```
The following code snippet switches your bring-your-own-license model to pay-as-you-go:
```powershell-interactive
# Switch your SQL Server VM license from bring-your-own to pay-as-you-go
Update-AzSqlVM -ResourceGroupName <resource_group_name> -Name <VM_name> -LicenseType PAYG
```
---
## <a name="change-the-license-for-vms-not-registered-with-the-resource-provider"></a>Change the license for VMs not registered with the resource provider
If you provisioned a SQL Server VM from a pay-as-you-go image in Azure Marketplace, the SQL Server license type is pay-as-you-go. If you provisioned a SQL Server VM from a bring-your-own-license image in Azure Marketplace, the license type is AHUB. All SQL Server VMs provisioned through either default (pay-as-you-go) or bring-your-own-license Azure Marketplace images are automatically registered with the SQL VM resource provider, so the [license type can be changed](#change-the-license-for-vms-already-registered-with-the-resource-provider).
You may self-install SQL Server on an Azure VM only under the Azure Hybrid Benefit. To indicate the use of the Azure Hybrid Benefit as required by the Microsoft Product Terms, you should [register these VMs with the SQL VM resource provider](virtual-machines-windows-sql-register-with-resource-provider.md), setting the SQL Server license to Azure Hybrid Benefit.
You can change the pay-as-you-go or Azure Hybrid Benefit license type of a SQL Server VM only if the SQL Server VM is registered with the SQL VM resource provider.
## <a name="remarks"></a>Remarks
- Azure Cloud Solution Provider (CSP) customers can use the Azure Hybrid Benefit by first deploying a pay-as-you-go VM and then converting it to bring-your-own-license, if they have active Software Assurance.
- If you drop your SQL Server VM resource, it goes back to the hard-coded license setting of the image.
- The ability to change the license model is a feature of the SQL VM resource provider. Deploying an Azure Marketplace image through the Azure portal automatically registers a SQL Server VM with the resource provider. However, customers who self-install SQL Server need to [manually register their SQL Server VM](virtual-machines-windows-sql-register-with-resource-provider.md).
- Adding a SQL Server VM to an availability set requires re-creating the VM. As such, any VMs added to an availability set go back to the default pay-as-you-go license type, and the Azure Hybrid Benefit needs to be enabled again.
## <a name="limitations"></a>Limitations
Changing the license model is:
- Available only to customers with [Software Assurance](https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-overview).
- Supported only for the Standard and Enterprise editions of SQL Server. License changes for Express, Web, and Developer are not supported.
- Supported only for virtual machines deployed through the Azure Resource Manager model. SQL Server VMs deployed through the classic model are not supported.
- Available only for the public or Azure Government clouds.
- Supported only on virtual machines that have a single network interface.
## <a name="known-errors"></a>Known errors
### <a name="the-resource-microsoftsqlvirtualmachinesqlvirtualmachinesresource-group-under-resource-group-resource-group-was-not-found"></a>The resource 'Microsoft.SqlVirtualMachine/SqlVirtualMachines/\<resource-group>' under resource group '\<resource-group>' was not found
This error occurs when you try to change the license model on a SQL Server VM that has not been registered with the SQL VM resource provider:
`The Resource 'Microsoft.SqlVirtualMachine/SqlVirtualMachines/\<resource-group>' under resource group '\<resource-group>' was not found. The property 'sqlServerLicenseType' cannot be found on this object. Verify that the property exists and can be set.`
You need to register your subscription with the resource provider, and then [register your SQL Server VM with the resource provider](virtual-machines-windows-sql-register-with-resource-provider.md).
### <a name="the-virtual-machine-vmname-has-more-than-one-nic-associated"></a>The virtual machine '\<vmname\>' has more than one NIC associated
This error occurs on virtual machines that have more than one network interface. Remove one of the network interfaces before you change the license model. Although you can add the network interface back to the VM after changing the license model, operations in the Azure portal, such as automated backup and patching, will no longer be supported.
## <a name="next-steps"></a>Next steps
For more information, see the following articles:
* [Overview of SQL Server on a Windows virtual machine](virtual-machines-windows-sql-server-iaas-overview.md)
* [FAQ for SQL Server on Windows virtual machines](virtual-machines-windows-sql-server-iaas-faq.md)
* [Pricing guidance for SQL Server on Windows virtual machines](virtual-machines-windows-sql-server-pricing-guidance.md)
* [Release notes for SQL Server on Windows virtual machines](virtual-machines-windows-sql-server-iaas-release-notes.md)
| 79.87037 | 644 | 0.81065 | deu_Latn | 0.983581 |
21c4b5a260a05b0e6b058b83f45780d895c105a5 | 609 | md | Markdown | learn/smartcontracts/untitled.md | martinpospech/wiki | c9024a95ea9ca329ab3bc2483676363901a17639 | [
"MIT"
] | null | null | null | learn/smartcontracts/untitled.md | martinpospech/wiki | c9024a95ea9ca329ab3bc2483676363901a17639 | [
"MIT"
] | null | null | null | learn/smartcontracts/untitled.md | martinpospech/wiki | c9024a95ea9ca329ab3bc2483676363901a17639 | [
"MIT"
] | null | null | null | # Morley/Lorentz & Archetype
## **Morley / Lorentz / Indigo**
[Morley](https://hackage.haskell.org/package/morley) is a Haskell-based framework for meta-programming Michelson smart contracts.
**Key Resources**
* [Lorentz introductory blog post](https://serokell.io/blog/lorentz-implementing-smart-contract-edsl-in-haskell)
* [Lorentz documentation](https://gitlab.com/morley-framework/morley/-/tree/1722a7ab667a407ce4ed225bb1e5bce8434bfe77/)
## **Archetype**
[Archetype](https://archetype-lang.org) is a DSL for Tezos which facilitates formal verification and transcodes contracts to SmartPy and LIGO.
| 38.0625 | 142 | 0.778325 | eng_Latn | 0.394365 |
21c4bdbd5d7d5fefc7f4a250dee94e9e36f4fc79 | 1,644 | md | Markdown | README-Localized/README-ja-jp.md | think3dots/PRTL-Office | cbaddf317e84fb221740d63deb7ecbb7174e0746 | [
"MIT"
] | 28 | 2015-11-20T15:27:02.000Z | 2021-08-30T21:20:29.000Z | README-Localized/README-ja-jp.md | think3dots/PRTL-Office | cbaddf317e84fb221740d63deb7ecbb7174e0746 | [
"MIT"
] | 7 | 2016-02-23T19:00:30.000Z | 2019-07-31T23:59:05.000Z | README-Localized/README-ja-jp.md | think3dots/PRTL-Office | cbaddf317e84fb221740d63deb7ecbb7174e0746 | [
"MIT"
] | 23 | 2015-12-15T02:44:17.000Z | 2021-03-02T05:19:46.000Z | # ドキュメント アセンブリ Word アドイン サンプル
この Word アドインでは、以下を実行する方法が示されます。
1. OOXML からテンプレートの内容を挿入します。
2. 定型の段落を追加します。
3. 画像を挿入します。
4. テキストを検索し、コンテンツ コントロールを挿入します。
5. コンテンツ コントロールのテキストを変更します。
6. ファイルからコンテンツを挿入します。
7. ワイルドカードを使用して検索します。
8. フッターをドキュメントに挿入します。
9. setSelectedDataAsync API を使用して、OOXML をカスタマイズして挿入します。
## 前提条件
ドキュメント アセンブリ アドイン サンプルを使用するには、以下が必要です。
* Visual Studio 2015 または Visual Studio 2013 更新プログラム 5。
* Word 2016、または Word Javascript API をサポートする任意のクライアント。
## サンプル アドインの実行
Visual Studio でソリューションを開き、F5 キーを選択してサンプルを実行します。Word のインスタンスが起動し、Word にアドインが読み込まれた状態が表示されます。
## 質問とコメント
ドキュメント アセンブリ Word アドイン サンプルについて、Microsoft にフィードバックをお寄せください。質問や提案につきましては、このリポジトリの「[問題](https://github.com/OfficeDev/Word-Add-in-DocumentAssembly/issues)」セクションに送信できます。
アドイン開発全般の質問については、「[スタック オーバーフロー](http://stackoverflow.com/questions/tagged/Office365+API)」に投稿してください。質問またはコメントには、[office-js]、[word]、[API] のタグを付けてください。
## 詳細を見る
Word Javascript API ベースのアドインを作成するのに役立つその他のリソースを以下に示します。
* [Office アドイン プラットフォームの概要](https://msdn.microsoft.com/ja-jp/library/office/jj220082.aspx)
* [Word アドイン](https://github.com/OfficeDev/office-js-docs/blob/master/word/word-add-ins.md)
* [Word アドインのプログラミングの概要](https://github.com/OfficeDev/office-js-docs/blob/master/word/word-add-ins-programming-guide.md)
* [Word のスニペット エクスプローラー](http://officesnippetexplorer.azurewebsites.net/#/snippets/word)
* [Word アドインの JavaScript API リファレンス](https://github.com/OfficeDev/office-js-docs/tree/master/word/word-add-ins-javascript-reference)
* [Silly stories Word アドイン サンプル](https://github.com/OfficeDev/Word-Add-in-SillyStories)
## 著作権
Copyright (c) 2015 Microsoft.All rights reserved.
| 36.533333 | 165 | 0.796229 | yue_Hant | 0.787982 |
21c521799b983a8750c6f3b170febe26e4af3821 | 20 | md | Markdown | README.md | averykatko/splatscii | 55df577298db82b2485c7938eb58e5824a4156c7 | [
"0BSD"
] | 1 | 2015-07-09T13:24:11.000Z | 2015-07-09T13:24:11.000Z | README.md | averykatko/splatscii | 55df577298db82b2485c7938eb58e5824a4156c7 | [
"0BSD"
] | null | null | null | README.md | averykatko/splatscii | 55df577298db82b2485c7938eb58e5824a4156c7 | [
"0BSD"
] | null | null | null | # splatscii
a game?
| 6.666667 | 11 | 0.7 | slk_Latn | 0.288902 |
21c590b16ac18cb7bfd41670c80c4921f252f19b | 2,586 | md | Markdown | node_modules/egg-cookies/README.zh-CN.md | camellieeee/docker_egg | ff0a93e29b27232924bc886782c2282997af7d31 | [
"MIT"
] | 1 | 2019-04-29T12:39:02.000Z | 2019-04-29T12:39:02.000Z | node_modules/egg-cookies/README.zh-CN.md | camellieeee/docker_egg | ff0a93e29b27232924bc886782c2282997af7d31 | [
"MIT"
] | 1 | 2020-07-07T20:35:15.000Z | 2020-07-07T20:35:15.000Z | node_modules/egg-cookies/README.zh-CN.md | camellieeee/docker_egg | ff0a93e29b27232924bc886782c2282997af7d31 | [
"MIT"
] | null | null | null | # egg-cookies
[![NPM version][npm-image]][npm-url]
[![build status][travis-image]][travis-url]
[![Test coverage][codecov-image]][codecov-url]
[![David deps][david-image]][david-url]
[![Known Vulnerabilities][snyk-image]][snyk-url]
[![npm download][download-image]][download-url]
[npm-image]: https://img.shields.io/npm/v/egg-cookies.svg?style=flat-square
[npm-url]: https://npmjs.org/package/egg-cookies
[travis-image]: https://img.shields.io/travis/eggjs/egg-cookies.svg?style=flat-square
[travis-url]: https://travis-ci.org/eggjs/egg-cookies
[codecov-image]: https://codecov.io/gh/eggjs/egg-cookies/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/eggjs/egg-cookies
[david-image]: https://img.shields.io/david/eggjs/egg-cookies.svg?style=flat-square
[david-url]: https://david-dm.org/eggjs/egg-cookies
[snyk-image]: https://snyk.io/test/npm/egg-cookies/badge.svg?style=flat-square
[snyk-url]: https://snyk.io/test/npm/egg-cookies
[download-image]: https://img.shields.io/npm/dm/egg-cookies.svg?style=flat-square
[download-url]: https://npmjs.org/package/egg-cookies
Provides a wrapper for cookie operations for egg.
```js
ctx.cookies = new Cookies(ctx, keys);
ctx.cookies.get('key', options);
ctx.cookies.set('key', 'value', options);
```
## Initialization
When initializing, you need to pass a `keys` parameter of type Array; otherwise the `signed` and `encrypt` features of cookies can't be used.
Every time a signed or encrypted cookie is set or read, the keys are used for the crypto operations. Signing and encrypting always use the first key in the keys array, while verification and decryption try each key in order, from first to last. When reading a signed cookie, if it turns out the signature was not produced with the first key, the signature is updated to one produced with the first key. Reading an encrypted cookie does not perform such an update.
## Setting a cookie
Set a cookie with `cookies.set(key, value, options)`. The supported options are:
- path - `String` the path the cookie is valid for, defaults to `/`.
- domain - `String` the domain the cookie is valid for, defaults to `undefined`.
- expires - `Date` the expiration time of the cookie.
- maxAge - `Number` the maximum lifetime of the cookie; if set, it overrides the value of `expires`.
- secure - `Boolean` whether the cookie may only be transmitted over an encrypted channel. Note that it must not be set to true for http requests; for https it is automatically set to true.
- httpOnly - `Boolean` if set to true, the cookie value cannot be read in the browser.
- overwrite - `Boolean` if set to true, writing the same key again within one request overwrites the previously written value, defaults to false.
- signed - `Boolean` whether to sign the cookie; it must be paired with the `signed` option when calling get. The frontend then cannot tamper with this cookie. Defaults to true.
- encrypt - `Boolean` whether to encrypt the cookie; it must be paired with the `encrypt` option when calling get. The frontend then cannot read the real cookie value. Defaults to false.
## Reading a cookie
Read a cookie with `cookies.get(key, options)`. The supported options are:
- signed - `Boolean` whether to verify the cookie's signature; it must be paired with the `signed` option when calling set. The frontend then cannot tamper with this cookie. Defaults to true.
- encrypt - `Boolean` whether to decrypt the cookie; it must be paired with the `encrypt` option when calling set. The frontend then cannot read the real cookie value. Defaults to false.
## Deleting a cookie
Delete a cookie with `cookies.set(key, null)`. If the `signed` option is passed, the signature cookie is deleted as well.
## License
[MIT](LICENSE.txt)
| 39.784615 | 192 | 0.739366 | yue_Hant | 0.654674 |
21c5b05d5a0097e8bbbef408b3582a0596a47307 | 2,406 | md | Markdown | ce/marketing/manage-teams.md | astolz89/dynamics-365-customer-engagement | 32a7fe7f0478823b33451ff2653719835433d71a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/marketing/manage-teams.md | astolz89/dynamics-365-customer-engagement | 32a7fe7f0478823b33451ff2653719835433d71a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/marketing/manage-teams.md | astolz89/dynamics-365-customer-engagement | 32a7fe7f0478823b33451ff2653719835433d71a | [
"CC-BY-4.0",
"MIT"
] | 4 | 2022-02-10T04:56:47.000Z | 2022-02-10T07:32:49.000Z | ---
title: "Administer teams of users (Dynamics 365 for Marketing) | Microsoft Docs"
description: "Organize users into teams and apply roles to teams in Dynamics 365 for Marketing"
keywords: "administration; organization settings; user; team; role; permission; fiscal year"
ms.date: 04/01/2018
ms.service:
- "dynamics-365-marketing"
ms.custom:
- "dyn365-admin"
- "dyn365-marketing"
ms.topic: article
applies_to:
- "Dynamics 365 (online)"
- "Dynamics 365 Version 9.x"
ms.assetid: edc9a748-f08f-484c-ae86-0c20d20c1f6c
author: kamaybac
ms.author: kamaybac
manager: shellyha
ms.reviewer: renwe
topic-status: Drafting
search.audienceType:
- admin
- customizer
- enduser
search.app:
- D365CE
- D365Mktg
---
# Manage teams
[!INCLUDE[cc_applies_to_update_9_0_0](../includes/cc_applies_to_update_9_0_0.md)]
A team is a group of users who share and collaborate on business records. A user can be associated with multiple teams.
1. Go to **Settings** > **Advanced Settings** > **Organization** > **Team Management**.
1. Select **Add Team**.
1. Enter the following details, and then select **Add**.
- Team Name
   - Business Unit Name: The default value is your organization name.
- Team Manager: Select a manager for the team.
- Team Administrator: Select the person who will perform administrative tasks for the team, such as adding or removing members.
## Add members to the team
1. On the **Team Management** page, select the team you want to add members to, and then on the command bar, select **Add Members**.
1. Select the members, and then select **Add**.
## Remove members from the team
1. On the **Team Management** page, select the team you want to remove the members from, and then on the command bar, select **Remove Members**.
1. Select the members you want to remove, and then select **Remove**.
## Manage Roles
When a team is assigned a role, all team members are assigned the set of privileges associated with that role.
1. Go to **Settings** > **Advanced Settings** > **Organization** > **Team Management**.
1. Select a team you want to assign a role to, and then on the command bar, select **Manage Roles**.
1. In the **Manage Roles** dialog box, select the security role you want for the team, and then select **OK**.
### See also
[Manage security, users, and teams](../admin/manage-security-users-and-teams.md)
[Manage users](manage-users.md)
| 36.454545 | 144 | 0.72818 | eng_Latn | 0.99171 |
21c718058a1d36b21c9e7431e10ebc62d015ce9c | 28 | md | Markdown | docs/API/lfads.md | SachsLab/indl | 531d2e0c2ee765004aedc553af40e258262f86cb | [
"Apache-2.0"
] | 1 | 2021-02-22T01:39:50.000Z | 2021-02-22T01:39:50.000Z | docs/API/lfads.md | SachsLab/indl | 531d2e0c2ee765004aedc553af40e258262f86cb | [
"Apache-2.0"
] | null | null | null | docs/API/lfads.md | SachsLab/indl | 531d2e0c2ee765004aedc553af40e258262f86cb | [
"Apache-2.0"
] | null | null | null | # lfads
::: indl.model.lfads | 14 | 20 | 0.678571 | dan_Latn | 0.971302 |
21c7ab7877d4770a3680cf81b5b5c1945954a5eb | 2,336 | md | Markdown | src/mir/passes.md | longfangsong/rustc-dev-guide-cn | 912d5a599657e233b58ecf4f1b74cd14939864dc | [
"Apache-2.0",
"MIT"
] | 1 | 2021-02-05T06:52:12.000Z | 2021-02-05T06:52:12.000Z | src/mir/passes.md | longfangsong/rustc-dev-guide-cn | 912d5a599657e233b58ecf4f1b74cd14939864dc | [
"Apache-2.0",
"MIT"
] | null | null | null | src/mir/passes.md | longfangsong/rustc-dev-guide-cn | 912d5a599657e233b58ecf4f1b74cd14939864dc | [
"Apache-2.0",
"MIT"
] | null | null | null | # MIR passes
If you want to get the MIR for a given function (or constant, etc.), you can use the `optimized_mir(def_id)` query.
This will give you back the final, optimized MIR. For foreign def-ids, we simply read the MIR from the other crate's metadata.
But for local def-ids, the query will construct the MIR and then iteratively optimize it by applying a series of passes. This section describes how those passes work and how you can extend them.
To produce the `optimized_mir(D)` for a given def-id `D`, the MIR passes through several suites of optimizations, each represented by a query.
Each suite consists of multiple optimizations and transformations.
These suites represent useful intermediate points where we may want to access the MIR for type checking or other purposes:
- `mir_build(D)` —— not a query, but the construction of the initial MIR
- `mir_const(D)` —— applies some simple transformations to make the MIR ready for constant evaluation;
- `mir_validated(D)` —— applies some more transformations, making the MIR ready for borrow checking;
- `optimized_mir(D)` —— the final state, after all optimizations have been performed.
### Implementing and registering a pass
A `MirPass` is some bit of code that processes the MIR, typically (but not always) transforming it in some way. For example, it might perform an optimization.
The `MirPass` trait itself can be found in the [`rustc_mir::transform` module][mirtransform], and it basically consists of one method, `run_pass`,
which simply gets an `&mut Mir` (along with the tcx and some information about where it came from).
The MIR is therefore modified in place rather than rebuilt, which helps keep things efficient.
A good example of a basic MIR pass is [`NoLandingPads`], which walks the MIR and removes all edges that are due to unwinding; such unwinding can never happen when the program is compiled with `panic=abort`.
As you can see from its source, a MIR pass is defined by first declaring a dummy type, a struct with no fields, for example:
```rust
struct MyPass;
```
Next you implement the `MirPass` trait for this type.
You can then insert this pass into the appropriate list of passes found in a query like `optimized_mir`, `mir_validated`, etc.
(If this is an optimization, it should go into the `optimized_mir` list.)
If you are writing a pass, there's a good chance that you will want to use the [MIR visitor].
MIR visitors are a handy way to walk all the parts of the MIR, either to search for something or to make small edits.
### Stealing
The intermediate queries `mir_const()` and `mir_validated()` yield up a `&'tcx Steal<Mir<'tcx>>`, allocated using `tcx.alloc_steal_mir()`.
This indicates that the result may be **stolen** by the next suite of optimizations; this is an optimization that avoids cloning the MIR.
Attempting to use a stolen result will cause a panic in the compiler.
Therefore, it is important that you do not read directly from these intermediate queries except as part of the MIR processing pipeline.
Because of this stealing mechanism, some care must also be taken to ensure that, before the MIR at a particular phase in the pipeline is stolen, anything that may want to read from it has already done so.
Concretely, this means that if you have some query `foo(D)` that wants to access the result of `mir_const(D)` or `mir_validated(D)`,
you need to have the successor pass "force" `foo(D)` using `ty::queries::foo::force(...)`.
This will force the query to execute even though you don't directly require its result.
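The idea behind `Steal` can be illustrated with a small self-contained sketch (an analogy only; the real `Steal` type lives inside the compiler and is tied to the query system and its arenas):
```rust
use std::cell::{Ref, RefCell};

// A cell whose contents can be shared-read until a later pass *steals* them.
// After the steal, any further read panics, mirroring the compiler's behavior.
struct Steal<T> {
    value: RefCell<Option<T>>,
}

impl<T> Steal<T> {
    fn new(value: T) -> Self {
        Steal { value: RefCell::new(Some(value)) }
    }

    // Shared read access; panics if the value was already stolen.
    fn borrow(&self) -> Ref<'_, T> {
        Ref::map(self.value.borrow(), |opt| {
            opt.as_ref().expect("attempted to read stolen MIR")
        })
    }

    // Take ownership, leaving the cell empty; all later reads will panic.
    fn steal(&self) -> T {
        self.value.borrow_mut().take().expect("value already stolen")
    }
}

fn main() {
    let mir = Steal::new(vec!["bb0", "bb1"]);
    assert_eq!(mir.borrow().len(), 2); // readers run before the steal
    let owned = mir.steal();           // the next suite takes ownership
    assert_eq!(owned.len(), 2);
    // Calling mir.borrow() at this point would panic.
}
```
This is why forcing matters: it guarantees that all `borrow()`-style readers have run before the one `steal()` happens.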
As an example, consider MIR const qualification.
It wants to read the result yielded up by the `mir_const()` suite.
However, that result will be **stolen** by the `mir_validated()` suite.
If nothing were done, then `mir_const_qualif(D)` would succeed if it came before `mir_validated(D)`, and fail otherwise.
Therefore, `mir_validated(D)` will **force** `mir_const_qualif` before it actually steals,
thus ensuring that the reads have already happened (remember that [queries are memoized](../query.html), so executing the query a second time simply loads from a cache):
```text
mir_const(D) --read-by--> mir_const_qualif(D)
| ^
stolen-by |
| (forces)
v |
mir_validated(D) ------------+
```
This mechanism is a bit of a hack. Discussion of more elegant alternatives can be found in [rust-lang/rust#41710].
[rust-lang/rust#41710]: https://github.com/rust-lang/rust/issues/41710
[mirtransform]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/transform/
[`NoLandingPads`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/transform/no_landing_pads/struct.NoLandingPads.html
[MIR访问者]: ./visitor.html
[MIR visitor]: ./visitor.html
| 32.901408 | 128 | 0.750856 | yue_Hant | 0.53847 |
21c85fe09d17581d04801ca6226d4200961a7cc1 | 3,924 | md | Markdown | markdown/org/docs/patterns/florence/instructions/de.md | anna-puk/freesewing | 95c6ab42c46bf8d246fbb53a5f1448cfbf6b0105 | [
"MIT"
] | null | null | null | markdown/org/docs/patterns/florence/instructions/de.md | anna-puk/freesewing | 95c6ab42c46bf8d246fbb53a5f1448cfbf6b0105 | [
"MIT"
] | null | null | null | markdown/org/docs/patterns/florence/instructions/de.md | anna-puk/freesewing | 95c6ab42c46bf8d246fbb53a5f1448cfbf6b0105 | [
"MIT"
] | null | null | null | ### Step 1: Join the center seam
Join the curved seam that is the center of our mask by placing the _good sides together_ and sewing them in place.

<Note>Repeat this step for both the outer main fabric and the inner lining fabric.</Note>
### Step 2 (optional): Press the center seam
<Note>
This step has no functional value; it will merely make your mask look better.
So if you don't mind too much about that, feel free to skip it.
</Note>
Press open the seam allowance on the center seam so that the seam lies nice and flat.
Since this is a curved seam, it won't want to lie flat. But you can approach it with your iron from one side and then do the second half from the other side. Alternatively, you can use a tailor's ham or a pillow for pressing.
<Note>Repeat this step for both the outer main fabric and the inner lining fabric.</Note>
### Step 3: Sew the outer to the inner fabric and attach the ribbons

Now we will sew the inner (lining) fabric to the outer (main) fabric and attach the ribbons, all in one step.
- Place your lining fabric in front of you with the good side up.
- Then place two ribbons at the corners of one side (the right side in our example) so that they stick out of the mask a little, with the rest of the ribbon running inwards.
- Now place the main fabric on top of it with the good side down. You should now have both layers of your mask on top of each other with _good sides together_ and two ribbons sandwiched between them
- Pin all layers and ribbons together to hold them in place
- Now do the same on the other side
<Tip>
As you get some practice, you will find that you don't need to pin this, and can simply insert the ribbons as you approach a corner.
</Tip>
Now sew around the mask, making sure to leave one side open so that we can turn the mask inside out later.
<Warning>
Make sure no ribbons get caught in the seam, except where you want them to be.
Either route them through the opening you leave on one side, or bundle them up between the layers of your mask to keep them out of the way.
</Warning>
### Step 4: Turn the mask inside out
Technically, your mask is inside out right now. So turning it inside out will give us the outside out, or just normal.
Simply reach in through the side we left open and gently pull the mask through to turn it.
### Step 5 (optional): Press the mask
<Note>
This step has little functional value; it will merely make your mask look better.
So if you don't mind too much about that, feel free to skip it.
</Note>
Now that the mask is the way it should be, it's time to press it. Before doing so, make sure to fold the seam allowance of the side we left open inwards, so that we press it flat as if it was sewn.
### Step 6: Close the open side of the mask and edge-stitch around the border

Now it's time to close the side of our mask that we left open for turning it inside out.
We are not merely going to close the opening, but also edge-stitch around the entire mask to give our mask some extra stability, and keep the lining at the back.
Make sure the open side is folded neatly inwards, then sew around the entire mask.
### Step 7: Wear your mask, or sew a whole stack of them
That's it, you're done! You can now wear your mask.
Better yet: make a few more, so you can give masks to others too.
| 47.277108 | 272 | 0.779817 | deu_Latn | 0.999536 |
21c8d592fa597482c2c3852a1da47fcfdc274831 | 7 | md | Markdown | docs/archive/zip.md | yifengyou/go | c881d270242ef3d8429d05497f68c278a2b078f3 | [
"Unlicense"
] | 1 | 2021-08-23T04:06:03.000Z | 2021-08-23T04:06:03.000Z | docs/archive/zip.md | yifengyou/go | c881d270242ef3d8429d05497f68c278a2b078f3 | [
"Unlicense"
] | null | null | null | docs/archive/zip.md | yifengyou/go | c881d270242ef3d8429d05497f68c278a2b078f3 | [
"Unlicense"
] | null | null | null | # zip
| 2.333333 | 5 | 0.428571 | eus_Latn | 0.628688 |
21c916e8629fad4312c50cf898de8ec940124346 | 481 | md | Markdown | _pages/faqs/can-i-use-external-libraries.md | ShubhGurukul/dev-center | 3ce77c3c2c7568b1ade05100365d5d17926deec4 | [
"MIT"
] | 13 | 2016-12-07T14:30:21.000Z | 2020-07-20T16:25:56.000Z | _pages/faqs/can-i-use-external-libraries.md | algorithmiaio/dev-center | f5d138c24348c92c9a2687b7fbc12a9d646d6ea9 | [
"MIT"
] | 288 | 2016-03-03T18:37:41.000Z | 2022-02-28T13:00:49.000Z | _pages/faqs/can-i-use-external-libraries.md | ShubhGurukul/dev-center | 3ce77c3c2c7568b1ade05100365d5d17926deec4 | [
"MIT"
] | 19 | 2016-05-03T00:26:18.000Z | 2022-01-01T16:52:53.000Z | ---
layout: faq
title: "Can I use external libraries with my algorithms?"
categories: faqs
tags: [algo-dev-faq]
show_related: true
image:
teaser: /icons/fa-bolt.png
---
Of course! Dependencies are added to the algorithm through the Dependencies dialog inside the algorithm editor.
You can read more in the managing dependencies section of the [Python Algorithm Developers guide](https://algorithmia.com/developers/algorithm-development/languages/python#managing-dependencies). | 37 | 195 | 0.794179 | eng_Latn | 0.948854 |
21c9338940fe84815a9ecd5682821f80fb58b320 | 1,112 | md | Markdown | src/blog/2017/slub_debug.md | tych0/tycho.pizza | eadb3ae2c94ec17b74a49fcf87d4f111f40cacdb | [
"MIT"
] | 3 | 2016-07-06T14:59:43.000Z | 2019-06-21T22:25:12.000Z | src/blog/2017/slub_debug.md | tych0/tycho.pizza | eadb3ae2c94ec17b74a49fcf87d4f111f40cacdb | [
"MIT"
] | 4 | 2016-04-10T19:19:16.000Z | 2020-08-18T03:08:43.000Z | src/blog/2017/slub_debug.md | tych0/tycho.pizza | eadb3ae2c94ec17b74a49fcf87d4f111f40cacdb | [
"MIT"
] | 2 | 2017-09-22T18:57:24.000Z | 2020-08-17T14:42:11.000Z | ---
title: Just how expensive is slub_debug=p?
date: 2017-03-31
tags: linux, kernel, security
---
Recently, I became interested in a debugging option in the Linux kernel: the `slub_debug=p` boot parameter, which poisons slab objects so that use-after-free and out-of-bounds writes are easier to catch. To see what it costs, I ran the same kernel build benchmark with and without it.
### slub\_debug=p
```
Average Half load -j 2 Run (std deviation):
Elapsed Time 44.586 (1.67125)
User Time 73.874 (2.51294)
System Time 7.756 (0.741741)
Percent CPU 182.4 (0.547723)
Context Switches 13880.8 (157.161)
Sleeps 15745.2 (24.3146)
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 32.702 (0.400087)
User Time 89.22 (16.3062)
System Time 8.945 (1.37014)
Percent CPU 266.4 (88.5729)
Context Switches 15701 (1929.57)
Sleeps 15722.2 (78.1875)
```
### without slub\_debug=p
```
Average Half load -j 2 Run (std deviation):
Elapsed Time 40.614 (0.232873)
User Time 69.978 (0.503061)
System Time 5.09 (0.182209)
Percent CPU 184.4 (0.547723)
Context Switches 13596 (121.501)
Sleeps 15740.4 (46.4629)
Average Optimal load -j 4 Run (std deviation):
Elapsed Time 30.622 (0.171523)
User Time 86.233 (17.1381)
System Time 5.874 (0.853557)
Percent CPU 270.1 (90.3431)
Context Switches 15370.3 (1875.97)
Sleeps 15777.4 (74.43)
```
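For a rough sense of the overhead, here are the percentage differences computed from the average figures above (the numbers are copied straight from the two runs):
```python
# Averages taken from the two benchmark runs above.
with_debug = {"elapsed -j2": 44.586, "elapsed -j4": 32.702, "system -j2": 7.756}
without = {"elapsed -j2": 40.614, "elapsed -j4": 30.622, "system -j2": 5.090}

def overhead(a, b):
    """Percent increase of a over b."""
    return (a - b) / b * 100

for key in with_debug:
    print(f"{key}: +{overhead(with_debug[key], without[key]):.1f}%")
```
That works out to roughly 10% more wall-clock time at -j2, about 7% at -j4, and around 50% more system time, which is where the allocator work actually shows up.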
| 22.693878 | 71 | 0.716727 | kor_Hang | 0.399874 |
21ca42e16fce6d314ad52c8372f361a16e9f7036 | 879 | md | Markdown | _posts/2019-11-17-Preventing-History-From-Repeating-Itself.md | obscureButTrue/obscureButTrue | 99508a07ddd6280e91940dc17c700bce5e50121e | [
"MIT"
] | null | null | null | _posts/2019-11-17-Preventing-History-From-Repeating-Itself.md | obscureButTrue/obscureButTrue | 99508a07ddd6280e91940dc17c700bce5e50121e | [
"MIT"
] | null | null | null | _posts/2019-11-17-Preventing-History-From-Repeating-Itself.md | obscureButTrue/obscureButTrue | 99508a07ddd6280e91940dc17c700bce5e50121e | [
"MIT"
] | null | null | null | In the first half of the 22nd century, evidence of several civilisations on earth before our own became pressing. The decrypted artifacts found on the moon tell a clear story: life had emerged from planet earth several times already, only to be incinerated over and over again in nuclear war. Each time, eons of evolution were annihilated. As we are descending into a hopeless, debilitating geopolitical chaos yet again, we should not aim to prevent collapse in our own iteration, but instead aim to make collapse less likely in the next one. As uranium and other radioactive elements are earth's only truly non-renewable resources, we should remove as many sources as possible from the earth's mantle. This will reduce mining options for future civilisations of earth, and will bring closer the iteration where nuclear war is impossible due to a simple lack of fissile material.
| 439.5 | 878 | 0.816837 | eng_Latn | 0.999834 |
21ca8d1b3a5b8e6e409d8787cc81c6a41a10be3a | 1,653 | md | Markdown | docs/cppcx/wrl/raiseexception-function.md | LouisJustinTALLOT/cpp-docs.fr-fr | 1e9a7185459b26b2a69659615453d9eda6a2f2db | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cppcx/wrl/raiseexception-function.md | LouisJustinTALLOT/cpp-docs.fr-fr | 1e9a7185459b26b2a69659615453d9eda6a2f2db | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cppcx/wrl/raiseexception-function.md | LouisJustinTALLOT/cpp-docs.fr-fr | 1e9a7185459b26b2a69659615453d9eda6a2f2db | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: 'Learn more about: RaiseException function'
title: RaiseException function
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- internal/Microsoft::WRL::Details::RaiseException
helpviewer_keywords:
- RaiseException function
ms.assetid: f9c74f6d-112a-4d2e-900f-622f795d5dbf
ms.openlocfilehash: b5353757ff04ab12c0fc61da6b2e98b2df835ef0
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 12/11/2020
ms.locfileid: "97198438"
---
# <a name="raiseexception-function"></a>RaiseException function
Supports the WRL infrastructure and is not intended to be used directly from your code.
## <a name="syntax"></a>Syntax
```cpp
inline void __declspec(noreturn) RaiseException(
HRESULT hr,
DWORD dwExceptionFlags = EXCEPTION_NONCONTINUABLE);
```
### <a name="parameters"></a>Parameters
*hr*<br/>
The exception code of the raised exception; that is, the HRESULT of a failed operation.
*dwExceptionFlags*<br/>
Flag that indicates either a continuable exception (the flag value is zero) or a noncontinuable exception (the flag value is nonzero). By default, the exception is noncontinuable.
## <a name="remarks"></a>Remarks
Raises an exception in the calling thread.
For more information, see the Windows `RaiseException` function.
## <a name="requirements"></a>Requirements
**Header:** internal.h
**Namespace:** Microsoft::WRL::Details
## <a name="see-also"></a>See also
[Microsoft::WRL::Details namespace](microsoft-wrl-details-namespace.md)
| 31.188679 | 211 | 0.761645 | fra_Latn | 0.65959 |
21cb1751232d81a02bc030cbe61c11d528990cc5 | 71 | md | Markdown | htmlNotes.md | YaolinChen/yaolinchen.github.io | 94e666a7fce0ba7420a92e2950c8d7c077b4e64e | [
"CC-BY-3.0"
] | null | null | null | htmlNotes.md | YaolinChen/yaolinchen.github.io | 94e666a7fce0ba7420a92e2950c8d7c077b4e64e | [
"CC-BY-3.0"
] | null | null | null | htmlNotes.md | YaolinChen/yaolinchen.github.io | 94e666a7fce0ba7420a92e2950c8d7c077b4e64e | [
"CC-BY-3.0"
] | null | null | null | Note down some grammar here, along with the test.html page.
# font
| 10.142857 | 59 | 0.71831 | eng_Latn | 0.99166 |
21ccb1c85164bd4ac3df094b201821d95c75c333 | 4,585 | md | Markdown | articles/cdn/cdn-resource-health.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-09-04T06:39:25.000Z | 2019-09-04T06:43:40.000Z | articles/cdn/cdn-resource-health.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cdn/cdn-resource-health.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Monitor the health of Azure CDN resources | Microsoft Docs
description: Learn how to monitor the health of your Azure CDN resources by using Azure resource health.
services: cdn
documentationcenter: .net
author: zhangmanling
manager: zhangmanling
editor: ''
ms.assetid: bf23bd89-35b2-4aca-ac7f-68ee02953f31
ms.service: azure-cdn
ms.devlang: multiple
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: integration
ms.date: 01/23/2017
ms.author: mazha
ms.openlocfilehash: 6710f5e5b873f751ad21068acdc15d38574f8378
ms.sourcegitcommit: ccb9a7b7da48473362266f20950af190ae88c09b
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/05/2019
ms.locfileid: "67593437"
---
# <a name="monitor-the-health-of-azure-cdn-resources"></a>Monitor the health of Azure CDN resources
Azure CDN resource health is a subset of [Azure resource health](../resource-health/resource-health-overview.md). You can use resource health to monitor the health of your CDN resources and receive actionable guidance for troubleshooting problems.
>[!IMPORTANT]
>Azure CDN resource health only considers the health of global CDN delivery and API functionality. Azure CDN resource health does not check the health of individual CDN endpoints.
>
>The signals received by Azure CDN resource health can be delayed by up to 15 minutes.
## <a name="how-to-find-azure-cdn-resource-health"></a>How to find Azure CDN resource health
1. In the [Azure portal](https://portal.azure.com), browse to your CDN profile.
2. Click the **Settings** button.

3. Under *Support + troubleshooting*, click **Resource health**.

>[!TIP]
>You can also find your CDN resources in the *Resource health* tile on the *Help + support* blade. For quick access to *Help + support*, click the circled question mark (**?**) in the upper-right corner of the portal.
>
> 
## <a name="azure-cdn-specific-messages"></a>Azure CDN-specific messages
The health statuses related to Azure CDN resource health are shown below.
|Message | Recommended action |
|---|---|
|One or more of your CDN endpoints may have been stopped, removed, or misconfigured. | One or more of your CDN endpoints may have been stopped, removed, or misconfigured.|
|The CDN management service is currently unavailable. | Check back here for status updates. If the issue persists after the expected resolution time, contact support.|
|Your CDN endpoints may be affected by ongoing issues with some of our CDN providers. | Check back here for status updates. Use the troubleshooting tool for information about testing your origin and CDN endpoint. If the issue persists after the expected resolution time, contact support. |
|Configuration changes to your CDN endpoints are experiencing propagation delays. | Check back here for status updates. If the configuration changes do not fully propagate within the expected time, contact support.|
|We are having issues loading the supplemental portal. | Check back here for status updates. If the issue persists after the expected resolution time, contact support.|
|We are having issues with some of our CDN providers. | Check back here for status updates. If the issue persists after the expected resolution time, contact support. |
## <a name="next-steps"></a>Next steps
- [Azure resource health overview](../resource-health/resource-health-overview.md)
- [Troubleshooting CDN compression](./cdn-troubleshoot-compression.md)
- [Troubleshooting CDN 404 errors](./cdn-troubleshoot-endpoint.md)