hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17fa3701c94421ae1b7bd184e185be8d4f22cd0b | 205 | md | Markdown | README.md | so-glad/fusion | 9be2a3156fbd517b7d3fb7bd591ee8041c6f4a29 | [
"MIT"
] | null | null | null | README.md | so-glad/fusion | 9be2a3156fbd517b7d3fb7bd591ee8041c6f4a29 | [
"MIT"
] | 6 | 2017-05-21T10:19:20.000Z | 2017-06-21T20:37:58.000Z | README.md | so-glad/fusion | 9be2a3156fbd517b7d3fb7bd591ee8041c6f4a29 | [
"MIT"
] | null | null | null | # fusion
A user authorization authentication framework, principle like simov/grant or jaredhanson/passport, powered by oauth2, but also including server schema adjust refered from oauthjs/koa-oauth-server
| 68.333333 | 195 | 0.839024 | eng_Latn | 0.99865 |
17faef2da4886f4f880442575f237a5717908b00 | 36 | md | Markdown | README.md | Pacific1991/OCTMessageRecording2 | cfbe2a27cb1eedac1fe953a5123a5d8abec0e1d6 | [
"MIT"
] | null | null | null | README.md | Pacific1991/OCTMessageRecording2 | cfbe2a27cb1eedac1fe953a5123a5d8abec0e1d6 | [
"MIT"
] | null | null | null | README.md | Pacific1991/OCTMessageRecording2 | cfbe2a27cb1eedac1fe953a5123a5d8abec0e1d6 | [
"MIT"
] | null | null | null | # OCTMessageRecording2
chat for iOS
| 12 | 22 | 0.833333 | kor_Hang | 0.908811 |
17fb1ce02c9188b3c5de4901e67558c1ff9e4a85 | 1,390 | md | Markdown | README.md | monasca/monasca-perf | d85af02a880fe181a2148a3a0da45490b1277381 | [
"Apache-2.0"
] | 2 | 2019-05-11T08:24:50.000Z | 2020-12-17T14:57:03.000Z | README.md | monasca/monasca-perf | d85af02a880fe181a2148a3a0da45490b1277381 | [
"Apache-2.0"
] | 1 | 2019-11-07T05:02:11.000Z | 2019-11-07T05:02:11.000Z | README.md | monasca/monasca-perf | d85af02a880fe181a2148a3a0da45490b1277381 | [
"Apache-2.0"
] | 1 | 2019-12-10T13:39:05.000Z | 2019-12-10T13:39:05.000Z | # Tools and investigation results, regarding performance in Monasca:
monasca-perf
============
Performance testing tools:
perf.py - sends x requests as fast as it can to the api and exits.
time python perf.py
agent_simulator.py - sends requests once (default) or continously to the api.
Simulates agent sends by having a configurable send interval per process(server).
python agent_simulatory.py
perfprocess.py - sends x requests as fast as it can and exits.
time python perfprocess.py num_processes num_threads num_requests_per_thread num_metrics_per_request
check.py - after restarting the persisters, run this on the persister server for stats.
python check.py
influx_load.py - does a bunch of metric writes to an influx cluster
time python influx_load.py num_processes num_threads num_requests_per_thread num_metrics_per_request series_name db_username db_password
influx_host_list.py - Lists the hosts with data stored on influxdb + handles agent amplification
python influx_host_list.py dbusername dbpassword url
Performance evaluation tools:
analyze_persister.py - Queries the Monasca Java Persister metrics endpoint to print a summary of persister activity
python analyze_persister.py
kafka_client_perf_results
=========================
Comparison of throughput with different kafka Python clients:
Investigation results and tools
| 30.217391 | 145 | 0.788489 | eng_Latn | 0.972061 |
17fb48904ea2ab6e219eb2b0fb0412c9b5fc3534 | 70 | md | Markdown | README.md | mar7in91/sphinx-voice-recognition-example | f1475c27178b419de4301eef291d9886609ac038 | [
"MIT"
] | null | null | null | README.md | mar7in91/sphinx-voice-recognition-example | f1475c27178b419de4301eef291d9886609ac038 | [
"MIT"
] | null | null | null | README.md | mar7in91/sphinx-voice-recognition-example | f1475c27178b419de4301eef291d9886609ac038 | [
"MIT"
] | null | null | null | # sphinx-voice-recognition-example
Example to understand PocketSphinx
| 23.333333 | 34 | 0.857143 | eng_Latn | 0.944221 |
17fbad6be8a06b770812cc4d92f79b49a719e78b | 61 | md | Markdown | README.md | Jerusaland/natural-language-practices | db16196453821eec0353b8f7fa0effce5c8b7652 | [
"MIT"
] | null | null | null | README.md | Jerusaland/natural-language-practices | db16196453821eec0353b8f7fa0effce5c8b7652 | [
"MIT"
] | null | null | null | README.md | Jerusaland/natural-language-practices | db16196453821eec0353b8f7fa0effce5c8b7652 | [
"MIT"
] | null | null | null | # natural-language-practices
learning and practicing for NLP
| 20.333333 | 31 | 0.836066 | eng_Latn | 0.987615 |
17fbe0cfb0f0f6edb64d690b4e7878ca6c070c93 | 1,381 | md | Markdown | README.md | slaclab/mps_history | 225a00a3e079df2d288d99a1ea719703d7141bb4 | [
"BSD-3-Clause-LBNL"
] | null | null | null | README.md | slaclab/mps_history | 225a00a3e079df2d288d99a1ea719703d7141bb4 | [
"BSD-3-Clause-LBNL"
] | null | null | null | README.md | slaclab/mps_history | 225a00a3e079df2d288d99a1ea719703d7141bb4 | [
"BSD-3-Clause-LBNL"
] | null | null | null | # MPS History Server
### Running the Server
Run the history server with the command:
`./HistoryServer.sh`
There should be support for starting in various environments including local, dev, and production. However, this file can be edited to include other environments and parameters as the program changes.
### Optional Client
#### Testing
Included with the history server is a client that can be used for testing the server data flow with semi-realistic data. In this client, a connection to a configuration database is established through the MPS_Config class within the [MPS_Database](https://github.com/slaclab/mps_database/blob/master/mps_database/mps_config.py) module. Then, the appropriate data type(device, channel, etc.) is selected at random, packaged into a struct, and sent over a socket connection via local host. This is repeated multiple times for various different data types supported by the History Server.
#### Database Creation/Reset
There is also commented out functionality for creating and resetting the initial History Server Database with the appropriate table schema.
### Configuration File
The configuration file is responsible for including the needed filenames and paths for logging, and the configuration and history databases. There is separate sections for different environments, and can be edited to include more environments if needed.
| 86.3125 | 585 | 0.807386 | eng_Latn | 0.999451 |
17fc047f88d22131ad1c41ecf47b3b554ff2bd4c | 585 | md | Markdown | reading.md | gjwphper/JavaGuide | 5c6b58b2f0abf0481e1f65c6dbe4e57d865739bf | [
"Apache-2.0"
] | null | null | null | reading.md | gjwphper/JavaGuide | 5c6b58b2f0abf0481e1f65c6dbe4e57d865739bf | [
"Apache-2.0"
] | null | null | null | reading.md | gjwphper/JavaGuide | 5c6b58b2f0abf0481e1f65c6dbe4e57d865739bf | [
"Apache-2.0"
] | null | null | null | spring-clound
spring-boot
mybatis
kafka
大型网站架构
多线程
redis zset
distributed-system
mysql数据库索引
java集合
ArrayList源码+扩容机制分析.md
spring-boot
ConfigurationProperties
数据结构、算法
排序
常见算法
log2N等
抽查
大数据组件 了解
ThreadLocal
mysql 事务级别
======================================================================================================================================================
database
mysql
redis
system-design
distributed-system
api-gateway
message-queue
kafka
rpc
as
distribute
java
net
system
tools
| 9.285714 | 150 | 0.497436 | oci_Latn | 0.859329 |
17fc6fe6b6cc0b0793ba3d90aa323fffb34a9f9f | 420 | md | Markdown | catalog/hh-remix/en-US_hh-remix.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/hh-remix/en-US_hh-remix.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/hh-remix/en-US_hh-remix.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # HH Remix

- **type**: manga
- **volumes**: 1
- **chapters**: 4
- **original-name**: HH リミックス
- **start-date**: 2003-10-06
## Tags
- yaoi
## Authors
- Kanzaki
- Takashi (Story & Art)
## Sinopse
1-2. HH Remix (Heavenly Body) 3. A Ballad For You 4. Beloved
## Links
- [My Anime list](https://myanimelist.net/manga/7121/HH_Remix)
| 15.555556 | 64 | 0.609524 | yue_Hant | 0.195089 |
17fc7dd447e8284e74c8d6ce898d540d70973f89 | 2,179 | md | Markdown | articles/synapse-analytics/sql/develop-label.md | ialeksander1/azure-docs.pt-br | d5a7a2c2d4a31282f49bd1e35036cb1939911974 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/synapse-analytics/sql/develop-label.md | ialeksander1/azure-docs.pt-br | d5a7a2c2d4a31282f49bd1e35036cb1939911974 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/synapse-analytics/sql/develop-label.md | ialeksander1/azure-docs.pt-br | d5a7a2c2d4a31282f49bd1e35036cb1939911974 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Use etiquetas de consulta no Synapse SQL
description: Incluso neste artigo estão dicas essenciais para o uso de etiquetas de consulta no Synapse SQL.
services: synapse-analytics
author: filippopovic
manager: craigg
ms.service: synapse-analytics
ms.topic: conceptual
ms.subservice: ''
ms.date: 04/15/2020
ms.author: fipopovi
ms.reviewer: jrasnick
ms.custom: ''
ms.openlocfilehash: 47b476cbc6997ca5ec63968bdc269e2273662100
ms.sourcegitcommit: b80aafd2c71d7366838811e92bd234ddbab507b6
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/16/2020
ms.locfileid: "81430026"
---
# <a name="use-query-labels-in-synapse-sql"></a>Use etiquetas de consulta no Synapse SQL
Incluso neste artigo estão dicas essenciais para o uso de etiquetas de consulta no Synapse SQL.
> [!NOTE]
> SQL on-demand (preview) não suporta consultas de rotulagem.
## <a name="what-are-query-labels"></a>Quais são os rótulos de consulta
O pool SQL suporta um conceito chamado rótulos de consulta. Antes de entrar em qualquer profundidade, vamos examinar um exemplo:
```sql
SELECT *
FROM sys.tables
OPTION (LABEL = 'My Query Label')
;
```
A última linha marca a cadeia de caracteres ‘Meu Rótulo de Consulta’ à consulta. Essa marca é particularmente útil, pois o rótulo é capaz de executar consultas por meio de DMVs. A consulta para rótulos fornece um mecanismo para localizar consultas de problemas e ajuda a identificar o progresso através de uma execução elt.
Uma boa convenção de nomes é muito útil. Por exemplo, iniciar o rótulo com PROJECT, PROCEDURE, STATEMENT ou COMMENT identifica exclusivamente a consulta entre todos os códigos no controle de origem.
A consulta a seguir usa uma exibição de gerenciamento dinâmico para pesquisar por rótulo:
```sql
SELECT *
FROM sys.dm_pdw_exec_requests r
WHERE r.[label] = 'My Query Label'
;
```
> [!NOTE]
> É essencial colocar colchetes ou aspas duplas em torno da palavra do rótulo ao consultar. Rótulo é uma palavra reservada e causa um erro quando ele não é delimitado.
>
>
## <a name="next-steps"></a>Próximas etapas
Para obter mais dicas de desenvolvimento, confira [visão geral de desenvolvimento](develop-overview.md).
| 36.932203 | 323 | 0.780633 | por_Latn | 0.993698 |
17fcd159c351212d96462a2b8af023eb3aa0f9e1 | 305 | md | Markdown | src/pages/blog/2018-05-01-deliveryxpress-visits-phoenix-arizona.md | westwick/deliveryxpress-gatsby | 4b55b4ae1363282dab1e044bddc4bb64679e50ca | [
"MIT"
] | 1 | 2019-11-01T02:20:30.000Z | 2019-11-01T02:20:30.000Z | src/pages/blog/2018-05-01-deliveryxpress-visits-phoenix-arizona.md | westwick/deliveryxpress-gatsby | 4b55b4ae1363282dab1e044bddc4bb64679e50ca | [
"MIT"
] | null | null | null | src/pages/blog/2018-05-01-deliveryxpress-visits-phoenix-arizona.md | westwick/deliveryxpress-gatsby | 4b55b4ae1363282dab1e044bddc4bb64679e50ca | [
"MIT"
] | null | null | null | ---
templateKey: blog-post
title: DeliveryXpress visits Phoenix Arizona
date: 2018-05-01T00:00:00-04:00
description: >-
CEO Brian Pritchard has traveled out to Phoenix, Arizona to attend YUM! Brands
(IPHFHA) Pizza Hut Franchise Holders Association Spring Meeting
tags:
- '1'
---
AAAAAAAAAAAAAAAAAAA
| 25.416667 | 80 | 0.767213 | kor_Hang | 0.527801 |
17fcfbe77d61a14510c56573c5839e2cee30cbdc | 2,368 | md | Markdown | docs/New-AtlassianSession.md | Invoke-Automation/AtlassianCLI | 704a762531a61772beeafad4bddd762059fca6d0 | [
"MIT"
] | 2 | 2017-07-13T14:54:37.000Z | 2021-02-05T01:48:09.000Z | docs/New-AtlassianSession.md | Invoke-Automation/AtlassianCLI | 704a762531a61772beeafad4bddd762059fca6d0 | [
"MIT"
] | 15 | 2017-01-20T15:27:29.000Z | 2017-07-20T13:47:14.000Z | docs/New-AtlassianSession.md | Invoke-Automation/AtlassianCLI | 704a762531a61772beeafad4bddd762059fca6d0 | [
"MIT"
] | null | null | null | ---
external help file: AtlassianCLI-help.xml
online version:
schema: 2.0.0
---
# New-AtlassianSession
## SYNOPSIS
Initialises a new AtlassianSession
## SYNTAX
```
New-AtlassianSession [-Server] <String> [-Credential] <PSCredential> [-WhatIf] [-Confirm]
```
## DESCRIPTION
The New-AtlassianSession cmdlet creates a new AtlassianSession object.
If no current AtlassianSession is saved in the custom session variable (name specified in settings.xml) the newly created AtlassianSession will be saved in this variable.
If the custom session variable already has a value you will be prompted whether or not you want to replace it.
## EXAMPLES
### -------------------------- EXAMPLE 1 --------------------------
```
New-AtlassianSession -Server 'http://localhost:2990/jira'
```
Creates a new AtlassianSession that connects to the jira test environment running on your localhost.
Credentials will be prompted.
## PARAMETERS
### -Server
Specifies the Server URL to be used for the session.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Required: True
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Credential
Specifies the Credentials to be used for the session.
```yaml
Type: PSCredential
Parameter Sets: (All)
Aliases:
Required: True
Position: 2
Default value: (Get-Credential -Message "Enter Atlassian user credentials")
Accept pipeline input: False
Accept wildcard characters: False
```
### -WhatIf
Shows what would happen if the cmdlet runs.
The cmdlet is not run.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: wi
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Confirm
Prompts you for confirmation before running the cmdlet.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: cf
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
## INPUTS
### None
You cannot pipe input to this cmdlet.
## OUTPUTS
### AtlassianSession
Returns the newly created AtlassianSession object. (even if it is not saved)
## NOTES
The AtlassianSession object has a Save(\<Path\>) method that can be called to save an encrypted session file to disk.
Encrypted session files can only be used by the same user account.
## RELATED LINKS
| 21.142857 | 170 | 0.745355 | eng_Latn | 0.918459 |
17fd37d5153b3505adab5ee92b81da28fdc33e7f | 4,972 | md | Markdown | _chapters/chapters/web.md | EmilieHK/ar-for-eu-book | 3e20ac6dd06d8b8bf368be38fcd3370682ad7b3c | [
"CC-BY-3.0"
] | null | null | null | _chapters/chapters/web.md | EmilieHK/ar-for-eu-book | 3e20ac6dd06d8b8bf368be38fcd3370682ad7b3c | [
"CC-BY-3.0"
] | null | null | null | _chapters/chapters/web.md | EmilieHK/ar-for-eu-book | 3e20ac6dd06d8b8bf368be38fcd3370682ad7b3c | [
"CC-BY-3.0"
] | null | null | null | ---
layout: reading_chapter
title: AR for the Web (in progress)
hide: true
permalink: /chapter/web/
categories: chapter
visualizations:
---
Augmented Reality for the Web demands compatibility with the major browsers.
To reach this goal, developers needs access to the native device drivers and APIs of the different vendors in JavaScript.
## Enabling Technologies
In recent years, the Web has made a major leap in application maturity.
Almost all desktop applications like office packages, image and video editing have Web versions.
This is due to the availability of HTML5 and its many APIs exposing native device APIs to developers by JavaScript.
Moreover, Web browsers can communicate directly with each other without the need of a server, reducing the latency of many networks depending on applications like collaborative editing, video and voice chats, and computer games.
- [WebRTC](https://webrtc.org/)
- [Web Assembly](https://webassembly.org/)
- [Web Workers](https://www.w3.org/TR/workers/)
- [WebGL 2.0](https://www.khronos.org/registry/webgl/specs/latest/2.0/)
- [Progressive Web Apps](https://developers.google.com/web/progressive-web-apps/)
# WebVR
[WebVR](https://immersive-web.github.io/webvr/) provided a developer API to different virtual reality devices on the Web and is now superseded by WebXR.
# WebXR
WebXR (Mixed Reality on the Web) superseded WebVR in 2018.
The [W3c editor's draft](https://immersive-web.github.io/webxr/)
gives details on the WebXR Device API.
WebXR is an extension of the WebVR API covering augmented reality devices in the JavaScript API.
The new API has two goals.
First, it enhances the possibilities for new input devices like for gesture and speech recognition.
This gives to users new options to navigate and interact in the virtual environment.
Second, it gives a technical platform to create augmented reality content.
Moreover, it tackles incompatibilities of the predecessor with different browsers like Safari and Chrome.
The amount of code needed to create virtual experiences on different devices should be reduced,
A number of browsers is already supporting WebXR.
[//]: # (QRD*19)
# 3D Graphics Frameworks on the Web
A series of graphics frameworks exist which allow the display of 3D models on Web pages.
The presented frameworks introduce increasing levels of abstraction.
Whereas WebGL works with polygon primitives and shaders, three.js abstracts to scenes, objects and materials.
In turn, A-Frame uses three.js to provide an HTML-based description language for 3D scenes which can be executed as a WebXR experience.
## WebGL
[WebGL](https://get.webgl.org/) is a cross-platform, royalty-free API used to create 3D graphics in a Web browser.
## three.js
[three.js](https://threejs.org/) is a popular JavaScript framework for displaying 3D content on the web.
## A-Frame
[A-Frame](https://aframe.io/) is a web framework for building virtual and augmented reality experiences.
**Example of an A-Frame**:
<script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
<script>
AFRAME.registerComponent('hide-in-ar-mode', {
// Set this object invisible while in AR mode.
init: function() {
this.el.sceneEl.addEventListener('enter-vr', (ev) => {
this.wasVisible = this.el.getAttribute('visible');
if (this.el.sceneEl.is('ar-mode')) {
this.el.setAttribute('visible', false);
}
});
this.el.sceneEl.addEventListener('exit-vr', (ev) => {
if (this.wasVisible) this.el.setAttribute('visible', true);
});
}
});
</script>
<a-scene style="width: 500px; height: 500px" embedded>
<a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9" shadow></a-box>
<a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E" shadow></a-sphere>
<a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D" shadow></a-cylinder>
<a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4" shadow></a-plane>
<a-camera position="0 1.2 0"></a-camera>
<a-sky color="#ECECEC" hide-in-ar-mode></a-sky>
</a-scene>
The WebXR demo can be viewed on desktop browsers, as well as mobile browsers.
If a VR device is connected to the PC, it automatically activates once this page is loaded and the view is changed if the user clicks on the VR button.
To return to the view in the browser, press escape.
If no VR device is present, the VR button instead toggles a full-screen view of the 3D scene.
On smartphones, the VR button also works and switches to a stereoscopic view which can be used e.g. with Google Cardboard.
On newer smartphones which have ARCore or ARKit installed and with Chrome Browser version 79 or higher, an AR button will appear next to the VR button.
If the user clicks on it, the phone's camera will activate and the virtual objects are integrated into the camera stream.
| 48.745098 | 228 | 0.732904 | eng_Latn | 0.978358 |
17fdc82951f20c742daa3cf5e90dceac404f9d61 | 1,176 | md | Markdown | docs/framework/wcf/diagnostics/tracing/system-servicemodel-diagnostics-filternotmatchednodequotaexceeded.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/tracing/system-servicemodel-diagnostics-filternotmatchednodequotaexceeded.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/tracing/system-servicemodel-diagnostics-filternotmatchednodequotaexceeded.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: System.ServiceModel.Diagnostics.FilterNotMatchedNodeQuotaExceeded
ms.date: 03/30/2017
ms.assetid: 067f27ba-4d9e-4efb-8fa7-c23d2654d967
ms.openlocfilehash: c34c9b738f8386a7fbcbd7957711c22cd0dc5096
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/04/2018
ms.locfileid: "33480755"
---
# <a name="systemservicemodeldiagnosticsfilternotmatchednodequotaexceeded"></a>System.ServiceModel.Diagnostics.FilterNotMatchedNodeQuotaExceeded
System.ServiceModel.Diagnostics.FilterNotMatchedNodeQuotaExceeded
## <a name="description"></a>Beschreibung
Die Überprüfung der Nachricht mit dem Nachrichtenprotokollierungsfilter hat das Knotenkontingent überstiegen, das für den Filter festgelegt wurde.
## <a name="see-also"></a>Siehe auch
[Ablaufverfolgung](../../../../../docs/framework/wcf/diagnostics/tracing/index.md)
[Verwenden der Ablaufverfolgung zum Beheben von Anwendungsfehlern](../../../../../docs/framework/wcf/diagnostics/tracing/using-tracing-to-troubleshoot-your-application.md)
[Verwaltung und Diagnose](../../../../../docs/framework/wcf/diagnostics/index.md)
| 53.454545 | 174 | 0.792517 | deu_Latn | 0.465369 |
17fddb4bea4263f7eebd40fa0734175fed9953d9 | 12,221 | md | Markdown | src/pages/posts/2021-04-27T00:00:04-post.md | evanmacbride/reddit-digest | 47659c6b52d9b7d74025c517931e107cb2f6be94 | [
"MIT"
] | 1 | 2020-02-03T02:35:55.000Z | 2020-02-03T02:35:55.000Z | src/pages/posts/2021-04-27T00:00:04-post.md | evanmacbride/reddit-snapshots | 00dcad012a949243e7399a45dd9a37720cfe6576 | [
"MIT"
] | null | null | null | src/pages/posts/2021-04-27T00:00:04-post.md | evanmacbride/reddit-snapshots | 00dcad012a949243e7399a45dd9a37720cfe6576 | [
"MIT"
] | null | null | null | ---
title: '04/27/21 12:00AM UTC Snapshot'
date: '2021-04-27T00:00:04'
---
<ul>
<h2>Sci/Tech</h2>
<li><a href='https://www.newstatesman.com/business/companies/2021/04/ceos-are-hugely-expensive-why-not-automate-them'><img src='https://b.thumbs.redditmedia.com/yA1GWQ_KdSWg-dSMpXkInuLYMqzV6gfvP4C38nYyhos.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://www.newstatesman.com/business/companies/2021/04/ceos-are-hugely-expensive-why-not-automate-them'>CEOs are hugely expensive – why not automate them?</a></div>(newstatesman.com) posted by <a href='https://www.reddit.com/user/TypicalActuator0'>TypicalActuator0</a> in <a href='https://www.reddit.com/r/technology'>technology</a> 49864 points & 4327 <a href='https://www.reddit.com/r/technology/comments/myy0ab/ceos_are_hugely_expensive_why_not_automate_them/'>comments</a></div></li>
<li><a href='https://www.theguardian.com/commentisfree/2021/apr/25/elon-musk-jeff-bezos-space-moon-mars-workers-rights-unions'><img src='https://b.thumbs.redditmedia.com/AgyJBrGq_8iI7a2tV01_QyPX26YpD_FEQRz5mJBAgOk.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://www.theguardian.com/commentisfree/2021/apr/25/elon-musk-jeff-bezos-space-moon-mars-workers-rights-unions'>In space, no one will hear Bezos and Musk’s workers’ call for basic rights - If they’re serious about survival of the species, they need to act more responsibly toward working people here on terra firma.</a></div>(theguardian.com) posted by <a href='https://www.reddit.com/user/Gari_305'>Gari_305</a> in <a href='https://www.reddit.com/r/Futurology'>Futurology</a> 28182 points & 1749 <a href='https://www.reddit.com/r/Futurology/comments/myvgu5/in_space_no_one_will_hear_bezos_and_musks_workers/'>comments</a></div></li>
<li><a href='https://www.theverge.com/2021/4/26/22403344/diy-device-yayagram-telegram-voice-messages-physical-phone-switchboard'><img src='https://b.thumbs.redditmedia.com/hhxnQJqwnsrYbFtAmJrmOysO8zqvljgqfcGcmYKgN5g.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://www.theverge.com/2021/4/26/22403344/diy-device-yayagram-telegram-voice-messages-physical-phone-switchboard'>Inventive grandson builds Telegram messaging machine for 96-year-old grandmother</a></div>(theverge.com) posted by <a href='https://www.reddit.com/user/a_Ninja_b0y'>a_Ninja_b0y</a> in <a href='https://www.reddit.com/r/gadgets'>gadgets</a> 9468 points & 239 <a href='https://www.reddit.com/r/gadgets/comments/myw96j/inventive_grandson_builds_telegram_messaging/'>comments</a></div></li>
<li><a href='https://www.reddit.com/r/askscience/comments/myze4n/why_does_a_ball_bounce_higher_the_more_air/'><svg version='1.1' viewBox='-34 -12 104 64' preserveAspectRatio='xMidYMid slice' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
<title>text link thumbnail</title>
<path d='M12.19,8.84a1.45,1.45,0,0,0-1.4-1h-.12a1.46,1.46,0,0,0-1.42,1L1.14,26.56a1.29,1.29,0,0,0-.14.59,1,1,0,0,0,1,1,1.12,1.12,0,0,0,1.08-.77l2.08-4.65h11l2.08,4.59a1.24,1.24,0,0,0,1.12.83,1.08,1.08,0,0,0,1.08-1.08,1.64,1.64,0,0,0-.14-.57ZM6.08,20.71l4.59-10.22,4.6,10.22Z'>
</path>
<path d='M32.24,14.78A6.35,6.35,0,0,0,27.6,13.2a11.36,11.36,0,0,0-4.7,1,1,1,0,0,0-.58.89,1,1,0,0,0,.94.92,1.23,1.23,0,0,0,.39-.08,8.87,8.87,0,0,1,3.72-.81c2.7,0,4.28,1.33,4.28,3.92v.5a15.29,15.29,0,0,0-4.42-.61c-3.64,0-6.14,1.61-6.14,4.64v.05c0,2.95,2.7,4.48,5.37,4.48a6.29,6.29,0,0,0,5.19-2.48V26.9a1,1,0,0,0,1,1,1,1,0,0,0,1-1.06V19A5.71,5.71,0,0,0,32.24,14.78Zm-.56,7.7c0,2.28-2.17,3.89-4.81,3.89-1.94,0-3.61-1.06-3.61-2.86v-.06c0-1.8,1.5-3,4.2-3a15.2,15.2,0,0,1,4.22.61Z'>
</path>
</svg></a><div><div class='linkTitle'><a href='https://www.reddit.com/r/askscience/comments/myze4n/why_does_a_ball_bounce_higher_the_more_air/'>Why does a ball bounce higher the more air pressure it has?</a></div>(reddit.com) posted by <a href='https://www.reddit.com/user/Blister777'>Blister777</a> in <a href='https://www.reddit.com/r/askscience'>askscience</a> 2284 points & 195 <a href='https://www.reddit.com/r/askscience/comments/myze4n/why_does_a_ball_bounce_higher_the_more_air/'>comments</a></div></li>
<li><a href='https://i.redd.it/vllb02m3qhv61.png'><img src='https://b.thumbs.redditmedia.com/eWDIJw03lXAC2E1DWQxWpN0iL7eLOcl9zeAb5uqG08c.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://i.redd.it/vllb02m3qhv61.png'>Titanis walleri was the only terror bird to travel north during the Great American Biotic Interchange. However, it only managed to colonize a small part of North America for a limited time, and it went extinct during the early Pleistocene.</a></div>(i.redd.it) posted by <a href='https://www.reddit.com/user/Pardusco'>Pardusco</a> in <a href='https://www.reddit.com/r/Naturewasmetal'>Naturewasmetal</a> 2108 points & 49 <a href='https://www.reddit.com/r/Naturewasmetal/comments/myu59e/titanis_walleri_was_the_only_terror_bird_to/'>comments</a></div></li>
<li><a href='https://www.freethink.com/articles/farming-robot'><svg version='1.1' viewBox='-34 -14 104 64' preserveAspectRatio='xMidYMid meet' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
<title>link thumbnail</title>
<path d='M32,4H4A2,2,0,0,0,2,6V30a2,2,0,0,0,2,2H32a2,2,0,0,0,2-2V6A2,2,0,0,0,32,4ZM4,30V6H32V30Z'></path>
<path d='M8.92,14a3,3,0,1,0-3-3A3,3,0,0,0,8.92,14Zm0-4.6A1.6,1.6,0,1,1,7.33,11,1.6,1.6,0,0,1,8.92,9.41Z'></path>
<path d='M22.78,15.37l-5.4,5.4-4-4a1,1,0,0,0-1.41,0L5.92,22.9v2.83l6.79-6.79L16,22.18l-3.75,3.75H15l8.45-8.45L30,24V21.18l-5.81-5.81A1,1,0,0,0,22.78,15.37Z'></path>
</svg></a><div><div class='linkTitle'><a href='https://www.freethink.com/articles/farming-robot'>Farming Robot Kills 100,000 Weeds per Hour With Lasers</a></div>(freethink.com) posted by <a href='https://www.reddit.com/user/Captain_Vegetable'>Captain_Vegetable</a> in <a href='https://www.reddit.com/r/tech'>tech</a> 2001 points & 130 <a href='https://www.reddit.com/r/tech/comments/mz30y6/farming_robot_kills_100000_weeds_per_hour_with/'>comments</a></div></li>
<h2>Maker</h2>
# firebase-start-snippet
# build-parent-json-csv
[](https://github.com/adaptris-labs/build-parent-json-csv/actions)
The suggested name was supreme-giggle
This showcases using [interlok-build-parent](https://github.com/adaptris-labs/interlok-build-parent); with an actual real world deploy-able example for you.
# What it does
* jetty workflow that accepts a JSON Array, and returns you back CSV
- If the channel is started then `curl -si -XPOST -d'[{"column1": "line1"}, {"column1": "line2"}]' http://localhost:8080/api/csv` will give you some data
* there is a service-test.xml which tests the json-array-csv service
* there is limited error handling such that if you give it a non-JSON-array, it gives you a JSON stacktrace.
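The same call can be scripted. A minimal Python sketch, assuming the channel is started and the default address from the curl example above (`build_csv_request` is a hypothetical helper, not part of the project):

```python
import json
import urllib.request

def build_csv_request(rows, base_url="http://localhost:8080"):
    """Build the POST request for the json-array-csv workflow.

    `base_url` is the default jetty address from this README; adjust it
    if your Interlok instance listens elsewhere.
    """
    return urllib.request.Request(
        base_url + "/api/csv",
        data=json.dumps(rows).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires the channel to be started):
#   with urllib.request.urlopen(build_csv_request(
#           [{"column1": "line1"}, {"column1": "line2"}])) as resp:
#       print(resp.read().decode())  # the CSV comes back in the body
```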
## Getting started
* `./gradlew clean build`
* `(cd ./build/distribution && java -jar lib/interlok-boot.jar)`
* Login to the UI as usual via (http://localhost:8080/interlok); note that the adapter is _started_ but the channel is _stopped_
* Kill everything
* `./gradlew -PbuildEnv=dev clean build`
* `(cd ./build/distribution && java -jar lib/interlok-boot.jar)`
* Login to the UI as usual; the adapter is _started_ and channel is now _started_
* You can now also use the service-tester page.
By specifying a build environment, you are effectively copying `variables-local-dev.properties` to `variables-local.properties` in your output directory; this means that the channel is now marked as `autostart=true`. Also, with the build environment set to _dev_, you can use the service tester page in the UI, since the service tester jar files are now included as part of the distribution.
# FitNote-Backend
📇 FitNote backend -- the "power" backend, a solid backbone for healthy living.
# CheatSheet
[](https://github.com/darkmatter18/cheatsheet/netowrk)
[](https://github.com/darkmatter18/cheatsheet/stargazers)
[](https://github.com/darkmatter18/cheatsheet/blob/master/LICENSE)

This repo contains useful cheatsheets for several `Programming Languages`.
Feel free to use the CheatSheets to help learn new skills.
### The following languages and libraries are currently available:
- [C++](./C++)
- [Code Editors](./Code%20Editors)
- [GNU Emacs](./Code%20Editors/GNU%20Emacs)
- [Vim](./Code%20Editors/Vim)
- [Visual Studio Code](./Code%20Editors/Visual%20Studio%20Code/)
- [Computer Network](./Computer%20Network)
- [Docker](./Docker)
- [Git and GitHub](./Git%20and%20GitHub)
- [Hadoop](./Hadoop)
- [JavaScript](./JavaScript)
- [Java](./Java)
- [Linux](./Linux)
- [MATLAB](./MATLAB)
- [Neural Networks](./Neural%20Networks)
- [Python](./Python)
- [NumPy](./Python/NumPy)
- [pandas](./Python/pandas)
- [R](./R)
- [Shell](./Shell)
- [PowerShell](./Shell/PowerShell)
- [SQL](./SQL)
- [Statistics](./Statistics)
- [Time Complexity](./Time%20Complexity)
- [TypeScript](./TypeScript)
- [Regex](./Regex)
If you have a cheatsheet you would like to share feel free to contribute.
## Credits
This project exists thanks to all the people who [contribute](CONTRIBUTING.md).<br>
<a href="https://github.com/darkmatter18/cheatsheet/graphs/contributors"><img src="https://opencollective.com/cheatsheet/contributors.svg?width=890&button=false" /></a>
> ***Need help?***
***Feel free to contact me @ [in2arkadipb13@gmail.com](mailto:in2arkadipb13@gmail.com)***
[](https://github.com/darkmatter18/) [](https://twitter.com/Arkadipb21)
.. image:: http://i.imgur.com/CgGL5eU.png
------
.. image:: https://img.shields.io/pypi/v/thunderpush.svg
:target: https://pypi.python.org/pypi/thunderpush
.. image:: https://img.shields.io/travis/thunderpush/thunderpush/master.svg
:target: http://travis-ci.org/thunderpush/thunderpush
.. image:: https://img.shields.io/docker/pulls/kjagiello/thunderpush.svg
:target: https://hub.docker.com/r/kjagiello/thunderpush/
Thunderpush is a Tornado and SockJS based push service. It provides
a Beaconpush (beaconpush.com) inspired HTTP API and client.
Install
=======
::
pip install thunderpush
Using Docker
============
::
docker run -d -p 8080:8080 \
-e PUBLIC_KEY=public \
-e PRIVATE_KEY=secret \
kjagiello/thunderpush
Usage
=====
::
usage: thunderpush [-h] [-p PORT] [-H HOST] [-v] [-d] [-V] clientkey apikey
positional arguments:
clientkey client key
apikey server API key
optional arguments:
-h, --help show this help message and exit
-p PORT, --port PORT binds server to custom port
-H HOST, --host HOST binds server to custom address
-v, --verbose verbose mode
-d, --debug debug mode (useful for development)
-V, --version show program's version number and exit
JavaScript client
=================
In order to use the client provided by Thunderpush, you need to include the
following lines on your webpage.
.. code-block:: html
<script src="http://cdn.sockjs.org/sockjs-0.3.min.js"></script>
<script src="thunderpush.js"></script>
The only thing you have to do now is to make a connection to your Thunderpush
server in the following way:
.. code-block:: html
<script type="text/javascript">
Thunder.connect("thunder.example.com", "apikey", ["testchannel"], {log: true});
Thunder.listen(function(message) { alert(message); });
</script>
This code is all you need to start receiving messages pushed to the client
from your Thunderpush server. As you can see, we instructed the Thunder client
to display logs, which can be helpful for debugging your application.
For more examples of how to use Thunderpush, look into `examples <https://github.com/thunderpush/thunderpush/tree/master/examples>`_.
Open-source libraries for communicating with the HTTP API
=========================================================
Python: `python-thunderclient <https://github.com/thunderpush/python-thunderclient>`_
PHP: `php-thunderclient <https://github.com/thunderpush/php-thunderclient>`_
Java: `java-thunderclient <https://github.com/Sim00n/java-thunderclient>`_
Hubot: `hubot-thunderpush <https://github.com/thunderpush/hubot-thunderpush>`_
Ruby: `thunderpush-gem <https://github.com/welingtonsampaio/thunderpush-gem>`_
.NET: `ThunderClient.Net <https://github.com/primediabroadcasting/ThunderClient.Net>`_
Using the HTTP API
==================
Example of interacting with Thunderpush API using cURL::
curl \
-X POST \
-H "Content-Type: application/json" \
-H "X-Thunder-Secret-Key: secretkey" \
--data-ascii "\"Hello World!\"" \
http://thunder.example.com/api/1.0.0/[API key]/channels/[channel]/
All requests to the HTTP API must provide the *X-Thunder-Secret-Key* header,
which should contain the private API key.
Sending a message to a channel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
POST /api/1.0.0/[API key]/channels/[channel]/
The message should be sent as the body of the request. Only a valid JSON body
will be accepted.
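For scripting pushes, here is a minimal stdlib-only Python sketch mirroring the
cURL example above; the server address, keys and channel name are placeholders,
and `build_channel_post` is a hypothetical helper, not an official client:

```python
import json
import urllib.request

API_VERSION = "1.0.0"

def build_channel_post(server, api_key, secret_key, channel, message):
    """Build the POST request that pushes `message` (any JSON-serializable
    value) to every subscriber of `channel`."""
    url = "%s/api/%s/%s/channels/%s/" % (server, API_VERSION, api_key, channel)
    return urllib.request.Request(
        url,
        data=json.dumps(message).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Thunder-Secret-Key": secret_key,  # the private API key
        },
        method="POST",
    )

# To actually push (requires a running Thunderpush server):
#   urllib.request.urlopen(build_channel_post(
#       "http://thunder.example.com", "apikey", "secretkey",
#       "testchannel", "Hello World!"))
```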
Getting number of users online
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
GET /api/1.0.0/[API key]/users/
Checking presence of a user
^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
GET /api/1.0.0/[API key]/users/[user id]/
Sending a message to a user
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
POST /api/1.0.0/[API key]/users/[user id]/
The message should be sent as the body of the request. Only a valid JSON body
will be accepted.
Forcing logout of a user
^^^^^^^^^^^^^^^^^^^^^^^^
::
DELETE /api/1.0.0/[API key]/users/[user id]/
Always returns a 204 HTTP code.
Retrieving list of users in a channel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
GET /api/1.0.0/[API key]/channels/[channel]/
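All the endpoints above share one URL pattern. As a sketch, here are
hypothetical URL builders (the function names are mine, not part of any
official Thunderpush client):

```python
API_VERSION = "1.0.0"

def users_url(server, api_key):
    # GET: number of users online
    return "%s/api/%s/%s/users/" % (server, API_VERSION, api_key)

def user_url(server, api_key, user_id):
    # GET: presence check; POST: message to user; DELETE: force logout (204)
    return users_url(server, api_key) + "%s/" % user_id

def channel_url(server, api_key, channel):
    # GET: list of users in the channel; POST: message to the channel
    return "%s/api/%s/%s/channels/%s/" % (server, API_VERSION, api_key, channel)
```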
JavaScript client API
=====================
Connecting to the server
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: javascript
Thunder.connect(server, apiKey, channels, options)
Connects to the Thunderpush server and starts listening for incoming
messages.
server
Address of your Thunderpush server.
apiKey
Public api key.
channels
Array of channels you want to subscribe to.
options
Object with optional settings you may pass to Thunder:
log
Set it to true if you want to activate verbose mode. This will turn on
SockJS logs as well.
user
Set it to override the client generated user id.
protocol
Set it to "https" if you want to use it instead of "http".
Listening for messages
^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: javascript
Thunder.listen(handler)
Registers a callback function that will receive incoming messages. You can
register as many handlers as you want. The handler function should accept
one argument, which is the message itself.
Getting high CPU usage?
^^^^^^^^^^^^^^^^^^^^^^^
Before giving up on thunderpush, check its logs and look for
errors like this one `error: [Errno 24] Too many open files`. If you're seeing them,
it means that you've reached the limit of open file descriptors on your system.
The only thing you need to do is to raise the limit. The following SO answer will
tell you how to do it: http://stackoverflow.com/a/4578356/250162 Then simply
restart thunderpush, forget about the problem and get a cold one!
---
id: 2660
title: Gattu (2011) Watch Download pdisk Movie
date: 2021-10-16T05:19:48+00:00
author: tentrockers
layout: post
guid: https://tentrockers.online/gattu-2011-watch-download-pdisk-movie/
permalink: /gattu-2011-watch-download-pdisk-movie/
cyberseo_rss_source:
- 'https://www.pdiskmovies.net/feeds/posts/default?max-results=100&start-index=301'
cyberseo_post_link:
- https://www.pdiskmovies.net/2021/09/gattu-2011-watch-download-pdisk-movie.html
categories:
- Uncategorized
---
<div class="separator">
<a href="https://1.bp.blogspot.com/-BHs3gwj6KbA/YUcupm1WdJI/AAAAAAAAAOc/-qEizvYdmFAUGsr0zzzVrejpvaljJtFnwCLcBGAsYHQ/s1054/ergre.jpg" imageanchor="1"><img loading="lazy" border="0" data-original-height="1054" data-original-width="750" height="640" src="https://1.bp.blogspot.com/-BHs3gwj6KbA/YUcupm1WdJI/AAAAAAAAAOc/-qEizvYdmFAUGsr0zzzVrejpvaljJtFnwCLcBGAsYHQ/w456-h640/ergre.jpg" width="456" /></a>
</div>
<span><br /></span>
<div>
<div>
<span>An illiterate road boy has only one ambition – to fly kites and reign very best inside the blue skies. And he’s going to flip around everything in his small international to string his dream collectively.</span>
</div>
<div>
<span>Movie Review: So meet Gattu (Mohammad). A common avenue kid. Dreaming of flying sky-excessive. But no supersonic jets or choppers in his desires actually. Like the famous seagull known as Jonathon Livingston, who simplest desired to fly higher and better, what this terrible, illiterate boy (living in small-city Roorkee) lives for, is a string, a brightly coloured, diamond fashioned patang. And of course, a few breeze to boost his spirits and help his kite jump. The simplest riding force in his measly life (living underneath an open roof, with tin drums for walls, and bizarre-jobs for a paltry sum of Rs.20 a day) is his knack for flying kites. Even his chachu’s (Naresh Kumar), at instances, harsh and tyrannical pressures to paintings for a living don’t suppress his blithe spirit. In fact, with some cutely devilish developments, effervescent strength and glowing eyes, Gattu is a sweet soul, continually sunwashed with wish. He has only one problem – Kali! A mysterious black patang (believed to do patang ka jaadu-tona), who has for many years, defeated all other kites in town, and nobody is aware of who’s her master. Our Mr. Smarty Half-pants who thinks on his feet, realizes that the simplest manner to conquer Kali is to fly his kite from the very best point, that is the nearby school terrace. And he pulls all ‘strings’, literally, to take on the ‘black diamond’ in the blue skies. Even if it way sneaking into college, pretending to be ‘Agent Gattu’, disguised as a pupil, mendacity to chachu, developing a web of innovative testimonies and antics (of wafadar jasoos, aatankvadis, and barood) that leaves everyone convinced, charmed and pressured on the same time. Rightly, a thoughts of a child is certainly magical; all they want is sand to make a fortress. Or a hand-made kite to keep the sector.</span>
</div>
<div>
<span>Mohammad Samad’s overall performance is coronary heart-warming, with marvel in his eyes and playful innocence he grabs all of your attention. His confined world suddenly opens doorways, and as he receives lessons on gravity and goodness, technology and sach ji jeet, his big eyes twinkle with amazement – like he’s just noticed ET on Earth.</span>
</div>
<div>
<span>Naresh Kumar plays his element properly, as the austere chachu harrowed by means of Gattu’s guts, who eventually melts and embraces him.</span>
</div>
<div>
<span>Director Rajan Khosa has intelligently made a children’s film (which has been applauded inside the competition circuit) that actually crosses all age boundaries. It subtly throws light on compelling problems like baby illiteracy and child labour with out shouting from terrace-tops or turning into a preachy docu-drama. In one second the tale is solely simple, and within the other it’s profound enough to transport you, if now not theatrically surprise you. It’s greased with a kind of paradoxical truth, yet, it leaves you upbeat.</span>
</div>
<div>
<span>Gattu is a have to-look ahead to kids of every age (study: grown-united states of americaeven greater). And if you assume you’re too grown up for a kiddie movie, pass fly a kite. Maybe it’s the real problem with the arena, too many humans grow up too soon. We need to just let the babies in us rule the arena.</span>
</div>
</div>
[](https://kofilink.com/1/bnYybDY1MDAwYnJ2?dn=1) | 100.104167 | 1,841 | 0.771072 | eng_Latn | 0.989005 |
aa0064da04800defeffcd72d6db7f47a04fe974d | 2,224 | md | Markdown | docs-archive-a/2014/integration-services/transfer-error-messages-task-editor-general-page.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/integration-services/transfer-error-messages-task-editor-general-page.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-10-11T06:39:57.000Z | 2021-11-25T02:25:30.000Z | docs-archive-a/2014/integration-services/transfer-error-messages-task-editor-general-page.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-09-29T08:51:33.000Z | 2021-10-13T09:18:07.000Z | ---
title: Transfer Error Messages Task Editor (General Page) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: integration-services
ms.topic: conceptual
f1_keywords:
- sql12.dts.designer.transfererrormessagestask.general.f1
helpviewer_keywords:
- Transfer Error Messages Task Editor
ms.assetid: 67b21f48-4795-4128-81dc-743f7a95ef74
author: chugugrace
ms.author: chugu
ms.openlocfilehash: c656b2399586025a316bb32c97496efad0648e93
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87694639"
---
# <a name="transfer-error-messages-task-editor-general-page"></a>Transfer Error Messages Task Editor (General Page)
  Use the **General** page of the **Transfer Error Messages Task Editor** dialog box to name and describe the Transfer Error Messages task. The Transfer Error Messages task transfers one or more user-defined [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] error messages between instances of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)]. For more information about this task, see [Transfer Error Messages Task](control-flow/transfer-error-messages-task.md).
## <a name="options"></a>Options
 **Name**
 Type a unique name for the Transfer Error Messages task. This name is used as the label for the task icon.
> [!NOTE]
> Task names must be unique within a package.
 **Description**
 Type a description of the Transfer Error Messages task.
## <a name="see-also"></a>See Also
 [Integration Services Error and Message Reference](../../2014/integration-services/integration-services-error-and-message-reference.md)
 [Integration Services Tasks](control-flow/integration-services-tasks.md)
 [Transfer Error Messages Task Editor (Messages Page)](../../2014/integration-services/transfer-error-messages-task-editor-messages-page.md)
 [Expressions Page](expressions/expressions-page.md)
Samples for: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
"MIT"
] | null | null | null | _posts/2016-08-26-Vera-Wang-Fall-2016-NEW-Style-120-2016.md | hectodressdo/hectodressdo.github.io | f5caf00eee7c8be17c85e4fe9ee1724bad3853fe | [
"MIT"
] | null | null | null | _posts/2016-08-26-Vera-Wang-Fall-2016-NEW-Style-120-2016.md | hectodressdo/hectodressdo.github.io | f5caf00eee7c8be17c85e4fe9ee1724bad3853fe | [
"MIT"
] | null | null | null | ---
layout: post
date: 2016-08-26
title: "Vera Wang Fall 2016 NEW Style 120 2016"
category: Vera Wang
tags: [Vera Wang,2016]
---
### Vera Wang Fall 2016 NEW Style 120
Just **$389.99**
### 2016
<table><tr><td>BRANDS</td><td>Vera Wang</td></tr><tr><td>Years</td><td>2016</td></tr></table>
<a href="https://www.readybrides.com/en/vera-wang/96105-vera-wang-fall-2016-new-style-120.html"><img src="//img.readybrides.com/248228/vera-wang-fall-2016-new-style-120.jpg" alt="Vera Wang Fall 2016 NEW Style 120" style="width:100%;" /></a>
<!-- break -->
Buy it: [https://www.readybrides.com/en/vera-wang/96105-vera-wang-fall-2016-new-style-120.html](https://www.readybrides.com/en/vera-wang/96105-vera-wang-fall-2016-new-style-120.html)
# BowlingKata
2018-07-17 iOS@Taipei practice exercise
---
title: Apache Mesos - Container Storage Interface (CSI) Support
layout: documentation
---
# Container Storage Interface (CSI) Support
This document describes the [Container Storage Interface](https://github.com/container-storage-interface/spec)
(CSI) support in Mesos.
Currently, only CSI spec version 0.2 is supported in Mesos 1.6+ due to
incompatible changes between CSI version 0.1 and version 0.2. CSI version 0.1 is
supported in Mesos 1.5.
## Motivation
### Current Limitations of Storage Support in Mesos
Prior to 1.5, Mesos supports both [local persistent volumes](persistent-volume.md)
as well as [external persistent volumes](isolators/docker-volume.md). However,
both of them have limitations.
[Local persistent volumes](persistent-volume.md) do not support offering
physical or logical block devices directly. Frameworks do not have the choice to
select filesystems for their local persistent volumes. Although Mesos does
support [multiple local disks](multiple-disk.md), it's a big burden for
operators to configure each agent properly to be able to leverage this feature.
Finally, there is no well-defined interface allowing third-party storage vendors
to plug into Mesos.
[External persistent volumes](isolators/docker-volume.md) support in Mesos
bypasses the resource management part. In other words, using an external
persistent volume does not go through the usual offer cycle. Mesos does not
track resources associated with the external volumes. This makes quota control,
reservation, and fair sharing almost impossible to enforce. Also, the current
interface Mesos uses to interact with storage vendors is the
[Docker Volume Driver Interface](https://docs.docker.com/engine/extend/plugins_volume/)
(DVDI), which has several [limitations](https://docs.google.com/document/d/125YWqg_5BB5OY9a6M7LZcby5RSqBwo2PZzpVLuxYXh4/edit?usp=sharing).
### Container Storage Interface (CSI)
[Container Storage Interface](https://github.com/container-storage-interface/spec)
(CSI) is a specification that defines a common set of APIs for all interactions
between the storage vendors and the container orchestration platforms. It is the
result of a [close collaboration](https://github.com/container-storage-interface/community)
among representatives from the [Kubernetes](https://kubernetes.io/),
[CloudFoundry](https://www.cloudfoundry.org/), [Docker](https://www.docker.com/)
and Mesos communities. The primary goal of CSI is to allow storage vendors to
write one plugin that works with all container orchestration platforms.
It was an easy decision to build the storage support in Mesos using CSI. The
benefits are clear: it will fit Mesos into the larger storage ecosystem in a
consistent way. In other words, users will be able to use any storage system
with Mesos using a consistent API. The out-of-tree plugin model of CSI decouples
the release cycle of Mesos from that of the storage systems, making the
integration itself more sustainable and maintainable.
## Architecture
The following figure provides an overview about how Mesos supports CSI.

### First Class Storage Resource Provider
The [resource provider](resource-provider.md) abstraction is a natural fit for
supporting storage and CSI. Since CSI standardizes the interface between
container orchestrators and storage vendors, the implementation for the storage
resource provider should be the same for all storage systems that are
CSI-compatible.
As a result, Mesos provides a default implementation of LRP, called Storage
Local Resource Provider (SLRP), to provide general support for storage and CSI.
Storage External Resource Provider (SERP) support is [coming soon](https://issues.apache.org/jira/browse/MESOS-8371).
The storage resource providers serve as the bridges between Mesos and CSI plugins.
More details about SLRP can be found in the following [section](#storage-local-resource-provider).
### Standalone Containers for CSI Plugins
CSI plugins are long-running [gRPC](https://grpc.io/) services, like daemons.
Those CSI plugins are packaged as containers, and are launched by SLRPs using
the [standalone containers](standalone-containers.md) API from the agent.
Standalone containers can be launched without any tasks or executors. They use
the same isolation mechanism provided by the agent for task and executor
containers.
There is a component in each SLRP that is responsible for monitoring the health
of the CSI plugin containers and restarting them if needed.
## Framework API
### New Disk Source Types
Two new types of disk sources have been added: `RAW` and `BLOCK`.
```protobuf
message Resource {
message DiskInfo {
message Source {
enum Type {
PATH = 1;
MOUNT = 2;
BLOCK = 3; // New in 1.5
RAW = 4; // New in 1.5
}
optional Type type = 1;
}
}
}
```
The disk source type (i.e., `DiskInfo::Source::Type`) specifies the property of
a disk resource and how it can be consumed.
* `PATH`: The disk resource can be accessed using the Volume API (backed by a
POSIX compliant filesystem). The disk resource can be carved up into smaller
chunks.
* `MOUNT`: The disk resource can be accessed using the Volume API (backed by a
POSIX compliant filesystem). The disk resource cannot be carved up into
smaller chunks.
* `BLOCK`: (New in 1.5) The disk resource can be directly accessed on Linux
without any filesystem (e.g., `/dev/sdb`). The disk resource cannot be carved
up into smaller chunks.
* `RAW`: (New in 1.5) The disk resource cannot be accessed by the framework yet.
It has to be [converted](#new-offer-operations-for-disk-resources) into any of
the above types before it can be accessed. The disk resource cannot be carved
up into smaller chunks if it has an [ID](#disk-id-and-metadata) (i.e.,
[pre-existing disks](#pre-existing-disks)), and can be carved up into smaller
chunks if it does not have an [ID](#disk-id-and-metadata) (i.e.,
[storage pool](#storage-pool)).
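The consumability rules above can be sketched as a couple of helper predicates. This is a hedged illustration: the dict shape mirrors the v1 API's `Resource.DiskInfo.Source` as parsed JSON, which is an assumption — a real framework may work with protobuf objects instead.

```python
# Sketch: classifying offered disk resources by source type.

FS_TYPES = {"PATH", "MOUNT"}  # accessible via the Volume API

def _source(resource):
    return resource.get("disk", {}).get("source", {})

def is_fs_accessible(resource):
    """True if the disk can be used through a POSIX filesystem."""
    return _source(resource).get("type") in FS_TYPES

def is_carvable(resource):
    """True if the disk may be split into smaller chunks.

    PATH is always carvable; RAW is carvable only when it has no ID,
    i.e., when it is a storage pool rather than a pre-existing disk.
    """
    src = _source(resource)
    if src.get("type") == "PATH":
        return True
    if src.get("type") == "RAW":
        return "id" not in src
    return False  # MOUNT and BLOCK cannot be carved up
```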
### Disk ID and Metadata
Two more fields have been added to `DiskInfo.Source` to further describe the
disk source. They also allow CSI plugins to propagate plugin-specific information
to the framework.
```protobuf
message Resource {
message DiskInfo {
message Source {
// An identifier for this source. This field maps onto CSI
// volume IDs and is not expected to be set by frameworks.
optional string id = 4;
// Additional metadata for this source. This field maps onto CSI
// volume metadata and is not expected to be set by frameworks.
optional Labels metadata = 5;
}
}
}
```
* `id`: This maps to CSI [Volume ID](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#createvolume)
if the disk resource is backed by a [Volume](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#terminology)
from a CSI plugin. This field must not be set by frameworks.
* `metadata`: This maps to CSI [Volume Attributes](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#createvolume)
if the disk resource is backed by a [Volume](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#terminology)
from a CSI plugin. This field must not be set by frameworks.
### Storage Pool
A `RAW` disk resource may or may not have an ID (i.e., `DiskInfo.Source.id`),
depending on whether or not the `RAW` disk resource is backed by a CSI Volume. A
`RAW` disk resource not backed by a CSI Volume is usually referred to as a
storage pool (e.g., an LVM volume group, or EBS storage space, etc.).
The size of the storage pool is reported by the CSI plugin using the
[`GetCapacity` interface](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#getcapacity).
Currently, a storage pool must have a [profile](#profiles) defined. Any disk
resource created from the storage pool inherits the same profile as the storage
pool. See more details in the [profiles](#profiles) section.
### Pre-existing Disks
A `RAW` disk resource with an ID (i.e., `DiskInfo.Source.id`) is referred to as
a [pre-existing disk](#pre-existing-disks). Pre-existing disks are those
[CSI Volumes](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#terminology)
that are detected by the corresponding CSI plugin using the
[`ListVolumes` interface](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#listvolumes),
but have not gone through the dynamic provisioning process (i.e., via `CREATE_DISK`).
For example, operators might pre-create some LVM logical volumes before
launching Mesos. Those pre-created LVM logical volumes will be reported by the
LVM CSI plugin when Mesos invokes the `ListVolumes` interface, thus will be
reported as pre-existing disks in Mesos.
Currently, pre-existing disks do not have [profiles](#profiles). This may change
in the near future. See more details in the [profiles](#profiles) section.
### New Offer Operations for Disk Resources
To allow dynamic provisioning of disk resources, two new offer operations have
been added to the [scheduler API](scheduler-http-api.md#accept):
`CREATE_DISK` and `DESTROY_DISK`.
To learn how to use the offer operations, refer to the
[`ACCEPT`](scheduler-http-api.md#accept) call in the v1 scheduler API or the
[`acceptOffers`](app-framework-development-guide.md#api) method in the v0
scheduler API.
```protobuf
message Offer {
message Operation {
enum Type {
UNKNOWN = 0;
LAUNCH = 1;
LAUNCH_GROUP = 6;
RESERVE = 2;
UNRESERVE = 3;
CREATE = 4;
DESTROY = 5;
GROW_VOLUME = 11;
SHRINK_VOLUME = 12;
CREATE_DISK = 13; // New in 1.7.
DESTROY_DISK = 14; // New in 1.7.
}
optional Type type = 1;
}
}
```
#### `CREATE_DISK` operation
The offer operation `CREATE_DISK` takes a `RAW` disk resource
(`create_disk.source`) and creates a `MOUNT` or a `BLOCK` disk resource
(`create_disk.target_type`) from the source. The source `RAW` disk resource can
either be a storage pool (i.e., a `RAW` disk resource without an ID) or a
pre-existing disk (i.e., a `RAW` disk resource with an ID). The quantity of the
converted resource (either `MOUNT` or `BLOCK` disk resource) will be the same as
the source `RAW` resource.
```protobuf
message Offer {
message Operation {
message CreateDisk {
required Resource source = 1;
required Resource.DiskInfo.Source.Type target_type = 2;
}
optional CreateDisk create_disk = 15;
}
}
```
The created disk resource will have the disk [`id` and `metadata`](#disk-id-and-metadata)
set accordingly to uniquely identify the volume reported by the CSI plugin.
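As an illustration, a framework using the v1 HTTP API might wrap a `CREATE_DISK` operation in an `ACCEPT` call roughly as follows. This is a sketch only: the framework ID, offer ID, and resource values are hypothetical, and the JSON field names assume the usual protobuf-to-JSON mapping of the v1 API.

```python
import json
import uuid

def accept_with_create_disk(framework_id, offer_id, raw_disk,
                            target_type="MOUNT"):
    """Build an ACCEPT call body carrying one CREATE_DISK operation.

    `raw_disk` is the offered RAW disk resource as a JSON-style dict.
    Setting the operation `id` opts in to operation status updates
    (supported for resource provider resources since Mesos 1.6).
    """
    return {
        "framework_id": {"value": framework_id},
        "type": "ACCEPT",
        "accept": {
            "offer_ids": [{"value": offer_id}],
            "operations": [{
                "type": "CREATE_DISK",
                "id": {"value": str(uuid.uuid4())},
                "create_disk": {
                    "source": raw_disk,
                    "target_type": target_type,
                },
            }],
        },
    }

body = accept_with_create_disk(
    "fw-0000", "offer-0001",
    {"name": "disk", "type": "SCALAR", "scalar": {"value": 2048.0},
     "disk": {"source": {"type": "RAW", "profile": "fast"}}})
payload = json.dumps(body)  # would be POSTed to the master's /api/v1/scheduler
```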
Note that `CREATE_DISK` is different from [`CREATE`](persistent-volume.md).
`CREATE` creates a [persistent volume](persistent-volume.md) which indicates
that the data stored in the volume will be persisted until the framework
explicitly destroys it. It must operate on a non-`RAW` disk resource (i.e.,
`PATH`, `MOUNT` or `BLOCK`).
#### `DESTROY_DISK` operation
The offer operation `DESTROY_DISK` destroys a `MOUNT` or a `BLOCK` disk resource
(`destroy_disk.source`), which will result in a `RAW` disk resource. The
quantity of the `RAW` disk resource will be the same as the specified `source`,
unless it has an invalid profile (described later), in which case the
`DESTROY_DISK` operation will completely remove the disk resource.
```protobuf
message Offer {
message Operation {
message DestroyDisk {
required Resource source = 1;
}
optional DestroyDisk destroy_disk = 16;
}
}
```
This operation is intended to be a reverse operation of `CREATE_DISK`. In
other words, if the volume is created from a storage pool (i.e., a `RAW` disk
resource without an ID), the result of the corresponding `DESTROY_DISK` should
be a storage pool. And if the volume is created from a [pre-existing disk](#pre-existing-disks)
(i.e., a `RAW` disk resource with an ID), the result of the corresponding
`DESTROY_DISK` should be a pre-existing disk.
Currently, Mesos infers the result based on the presence of an assigned
[profile](#profiles) in the disk resource. In other words, if the volume to be
destroyed has a profile, the converted `RAW` disk resource will be a storage
pool (i.e., `RAW` disk resource without an ID). Otherwise, the converted `RAW`
disk resource will be a pre-existing disk (i.e., `RAW` disk resource with an
ID). This leverages the fact that currently, each storage pool must have a
profile, and pre-existing disks do not have profiles.
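That inference rule can be captured in a few lines. The dict shape is an assumption for illustration; only the presence of `profile` matters:

```python
def destroy_disk_result(volume):
    """Predict what DESTROY_DISK converts `volume` into.

    Mirrors the rule above: a volume with a profile came from a storage
    pool, so the resulting RAW resource is a storage pool (no ID); a
    volume without a profile reverts to a pre-existing disk (with an ID).
    """
    source = volume["disk"]["source"]
    return "storage pool" if "profile" in source else "pre-existing disk"
```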
#### Getting Operation Results
It is important for the frameworks to get the results of the above offer
operations so that they know if the dynamic disk provisioning is successful or
not.
Starting with Mesos 1.6.0, it is possible to opt in to receiving status updates
related to operations that affect resources managed by a resource provider. In
order to do so, the framework has to set the `id` field in the operation.
Support for operations affecting the agent default resources is [coming
soon](https://issues.apache.org/jira/browse/MESOS-8194).
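A scheduler that sets operation IDs can then correlate status updates with what it requested. The event shape below is a sketch with hypothetical field names following the v1 JSON mapping, not an exhaustive handler:

```python
pending = {}  # operation ID -> human-readable description of the request

def register_operation(op_id, description):
    pending[op_id] = description

def on_operation_status(event):
    """Consume one operation status event and report the outcome."""
    status = event["update_operation_status"]["status"]
    op_id = status["operation_id"]["value"]
    state = status["state"]  # e.g. OPERATION_FINISHED or OPERATION_FAILED
    description = pending.pop(op_id, "<unknown operation>")
    return f"{description}: {state}"
```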
## Profiles
The primary goal of introducing profiles is to provide an indirection to a set
of storage vendor-specific parameters for the disk resources. It provides a way
for the cluster operator to describe the classes of storage they offer and
abstracts away the low-level details of a storage system.
Each profile is just a simple string (e.g., "fast", "slow", "gold"), as
described below:
```protobuf
message Resource {
message DiskInfo {
message Source {
// This field serves as an indirection to a set of storage
// vendor specific disk parameters which describe the properties
// of the disk. The operator will setup mappings between a
// profile name to a set of vendor specific disk parameters. And
// the framework will do disk selection based on profile names,
// instead of vendor specific disk parameters.
//
// Also see the DiskProfile module.
optional string profile = 6;
}
}
}
```
A typical framework that needs storage is expected to perform disk
resource selection based on the `profile` of a disk resource, rather
than low-level storage vendor specific parameters.
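For example, a framework's resource matching might look only at the profile string and the size, treating everything else as opaque. The resource dicts here are illustrative:

```python
def select_disks(offered_resources, wanted_profile, min_mb):
    """Pick disk resources with the wanted profile and enough capacity."""
    matches = []
    for resource in offered_resources:
        if resource.get("name") != "disk":
            continue
        source = resource.get("disk", {}).get("source", {})
        if (source.get("profile") == wanted_profile
                and resource["scalar"]["value"] >= min_mb):
            matches.append(resource)
    return matches
```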
### Disk Profile Adaptor Module
In order to let cluster operators customize the mapping between profiles and
storage system-specific parameters, Mesos provides a [module](modules.md)
interface called `DiskProfileAdaptor`.
```cpp
class DiskProfileAdaptor
{
public:
struct ProfileInfo
{
csi::VolumeCapability capability;
google::protobuf::Map<std::string, std::string> parameters;
};
virtual Future<ProfileInfo> translate(
const std::string& profile,
const ResourceProviderInfo& resourceProviderInfo) = 0;
virtual Future<hashset<std::string>> watch(
const hashset<std::string>& knownProfiles,
const ResourceProviderInfo& resourceProviderInfo) = 0;
};
```
The module interface has a `translate` method that takes a profile and returns
the corresponding [CSI volume capability](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#createvolume)
(i.e., the `capability` field) and [CSI volume creation parameters](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#createvolume)
(i.e., the `parameters` field) for that profile. These two fields will be used to
call the CSI `CreateVolume` interface during dynamic provisioning (i.e.,
`CREATE_DISK`), or CSI `ControllerPublishVolume` and
`NodePublishVolume` when publishing (i.e., when a task using the disk resources
is being launched on a Mesos agent).
The `watch` method in the module interface allows Mesos to get notified about
the changes on the profiles. It takes a list of known profiles and returns a
future which will be set if the module detects changes to the known profiles
(e.g., a new profile is added). Currently, all profiles are immutable, thus are
safe to cache.
Since `ProfileInfo` uses protobuf from the CSI spec directly, there is an
implicit dependency between backward compatibility of the module interface and
the CSI spec version. Since CSI doesn't provide a backward compatibility
promise, modules have to be re-built against each release of Mesos.
### URI Disk Profile Adaptor
To demonstrate how to use the disk profile adaptor module, Mesos ships with a
default disk profile adaptor, called `UriDiskProfileAdaptor`. This module
polls the profile information (in JSON) from a configurable URI. Here are the
module parameters that can be used to configure the module:
* `uri`: URI to a JSON object containing the profile mapping. The module
supports both HTTP(s) and file URIs. The JSON object should consist of some
top-level string keys corresponding to the disk profile name. Each value
should contain a `ResourceProviderSelector` under `resource_provider_selector`
or a `CSIPluginTypeSelector` under `csi_plugin_type_selector` to specify the
set of resource providers this profile applies to, followed by a
`VolumeCapability` under `volume_capabilities` and arbitrary key-value pairs
under `create_parameters`. For example:
```json
{
"profile_matrix": {
"my-profile": {
"csi_plugin_type_selector": {
"plugin_type": "org.apache.mesos.csi.test"
},
"volume_capabilities": {
"mount": {
"fs_type": "xfs"
},
"access_mode": {
"mode": "SINGLE_NODE_WRITER"
}
},
"create_parameters": {
"type": "raid5",
"stripes": "3",
"stripesize": "64"
}
}
}
}
```
* `poll_interval`: How long to wait between polling the specified `uri`. If the
poll interval has elapsed since the last fetch, then the URI is re-fetched;
otherwise, a cached `ProfileInfo` is returned. If not specified, the URI is
only fetched once.
* `max_random_wait`: How long at most to wait between discovering a new set of
profiles and notifying the callers of `watch`. The actual wait time is a
uniform random value between 0 and this value. If the `--uri` points to a
centralized location, it may be good to scale this number according to the
number of resource providers in the cluster. [default: 0secs]
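The profile JSON consumed by `UriDiskProfileAdaptor` can be sanity-checked before deployment. The sketch below only verifies the shape described above — exactly one selector plus `volume_capabilities` per profile — and is not a substitute for the module's own validation:

```python
def validate_profile_matrix(doc):
    """Structurally check the profile JSON described above.

    Returns a list of problems (empty means the document looks usable).
    """
    matrix = doc.get("profile_matrix")
    if not isinstance(matrix, dict):
        return ["top-level 'profile_matrix' object is missing"]
    problems = []
    for name, spec in matrix.items():
        selectors = [key for key in ("resource_provider_selector",
                                     "csi_plugin_type_selector")
                     if key in spec]
        if len(selectors) != 1:
            problems.append(f"{name}: expected exactly one selector, "
                            f"got {selectors}")
        if "volume_capabilities" not in spec:
            problems.append(f"{name}: missing 'volume_capabilities'")
    return problems
```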
To enable this module, please follow the [modules documentation](modules.md):
add the following JSON to the `--modules` agent flag, and set agent flag
`--disk_profile_adaptor` to `org_apache_mesos_UriDiskProfileAdaptor`.
```json
{
"libraries": [
{
"file": "/PATH/TO/liburi_disk_profile.so",
"modules": [
{
"name": "org_apache_mesos_UriDiskProfileAdaptor",
"parameters": [
{
"key": "uri",
"value": "/PATH/TO/my_profile.json"
},
{
"key": "poll_interval",
"value": "1secs"
}
]
}
]
}
]
}
```
### Storage Pool Capacity and Profiles
The capacity of a [storage pool](#storage-pool) is usually tied to the profiles
of the volumes that the users want to provision from the pool. For instance,
consider an LVM volume group (a storage pool) backed by 1000G of physical
volumes. The capacity of the storage pool will be 1000G if the logical volumes
provisioned from the pool have `"raid0"` configuration, and will be 500G if the
logical volumes provisioned from the pool have `"raid1"` configuration.
In fact, it does not make sense to have a storage pool that does not have a
profile because otherwise the allocator or the framework will not be able to
predict how much space a volume will take, making resource management almost
impossible to implement.
Therefore, each storage pool must have a profile associated with it. The profile
of a storage pool is the profile of the volumes that can be provisioned from the
pool. In other words, the volumes provisioned from a storage pool inherit the
profile of the storage pool.
Mesos gets the capacity of a storage pool with a given profile by invoking the
CSI [`GetCapacity` interface](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#getcapacity)
with the corresponding volume capability and parameters associated with the
profile.
It is possible that a storage system is able to provide volumes with different
profiles. For example, the LVM volume group is able to produce both raid0 and
raid1 logical volumes, backed by the same physical volumes. In that case, Mesos
will report one storage pool per profile. In this example, assuming there are
two profiles: `"raid0"` and `"raid1"`, Mesos will report 2 `RAW` disk resources:
1. 1000G `RAW` disk resource with profile `"raid0"`
2. 500G `RAW` disk resource with profile `"raid1"`.
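The arithmetic behind these numbers can be sketched as a toy capacity model. The 1:1 and 2:1 overheads are assumptions matching the raid0/raid1 example; a real CSI plugin computes capacity itself via `GetCapacity`:

```python
def pool_capacity_gb(physical_gb, profile):
    """Toy per-profile capacity model for the LVM example above."""
    copies = {"raid0": 1, "raid1": 2}  # raid1 mirrors data across two copies
    return physical_gb // copies[profile]
```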
TODO(jieyu): Discuss correlated resources.
## Storage Local Resource Provider
[Resource Provider](resource-provider.md) is an abstraction in Mesos allowing
cluster administrators to customize the providing of resources and the handling
of operations related to the provided resources.
For storage and CSI support, Mesos provides a default implementation of the
resource provider interface that serves as the bridge between Mesos and the CSI
plugins. It is called the Storage Resource Provider. It is responsible for
launching CSI plugins, talking to CSI plugins using the gRPC protocol, reporting
available disk resources, handling offer operations from frameworks, and making
disk resources available on the agent where the disk resources are used.
Currently, each Storage Resource Provider instance manages exactly one CSI
plugin. This simplifies reasoning and implementation.
In Mesos 1.5, only the Storage Local Resource Provider (SLRP) is supported. This
means the disk resources it reports are tied to a particular agent node, and
thus cannot be used on other nodes. The Storage External Resource Provider
(SERP) is [coming soon](https://issues.apache.org/jira/browse/MESOS-8371).
### Enable gRPC Support
[gRPC](https://grpc.io/) must be enabled to support SLRP. To enable gRPC
support, configure Mesos with `--enable-grpc`.
### Enable Agent Resource Provider Capability
In order to use SLRPs, the agent needs to be configured to enable resource
provider support. Since resource provider support is an experimental feature, it
is not turned on by default in 1.5. To enable that, please set the agent flag
`--agent_features` to the following JSON:
```json
{
"capabilities": [
{"type": "MULTI_ROLE"},
{"type": "HIERARCHICAL_ROLE"},
{"type": "RESERVATION_REFINEMENT"},
{"type": "RESOURCE_PROVIDER"}
]
}
```
Note that although the capabilities `MULTI_ROLE`, `HIERARCHICAL_ROLE` and
`RESERVATION_REFINEMENT` are not strictly necessary for supporting resource
providers, they must be specified because the agent code already assumes those
capabilities are set, and the old code paths that assumed otherwise have
already been removed.
### SLRP Configuration
Each SLRP configures itself according to its `ResourceProviderInfo` which is
specified by the operator.
```protobuf
message ResourceProviderInfo {
required string type = 3;
required string name = 4;
repeated Resource.ReservationInfo default_reservations = 5;
// Storage resource provider related information.
message Storage {
required CSIPluginInfo plugin = 1;
}
optional Storage storage = 6;
}
```
* `type`: The type of the resource provider. This uniquely identifies a resource
provider implementation. For instance: `"org.apache.mesos.rp.local.storage"`.
The naming of the `type` field should follow the
[Java package naming convention](https://en.wikipedia.org/wiki/Java_package#Package_naming_conventions)
to avoid conflicts on the type names.
* `name`: The name of the resource provider. There could be multiple instances
of a type of resource provider. The name field is used to distinguish these
instances. It should be a legal [Java identifier](https://docs.oracle.com/javase/tutorial/java/nutsandbolts/variables.html)
to avoid conflicts on concatenation of type and name.
* `default_reservations`: If set, any new resources from this resource provider
will be reserved by default. The first `ReservationInfo` may have type
`STATIC` or `DYNAMIC`, but the rest must have `DYNAMIC`. One can create a new
reservation on top of an existing one by pushing a new `ReservationInfo` to
the back. The last `ReservationInfo` in this stack is the "current"
reservation. The new reservation's role must be a child of the current one.
* `storage`: Storage resource provider specific information (see more details
below).
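The stacking rules for `default_reservations` can be expressed as a small validation routine. This is a sketch; hierarchical roles are assumed to use `/` as the separator:

```python
def validate_reservation_stack(reservations):
    """Check the default_reservations rules described above."""
    errors = []
    for index, reservation in enumerate(reservations):
        if index == 0:
            if reservation["type"] not in ("STATIC", "DYNAMIC"):
                errors.append("first reservation must be STATIC or DYNAMIC")
        elif reservation["type"] != "DYNAMIC":
            errors.append(f"reservation {index} must be DYNAMIC")
        if index > 0:
            parent = reservations[index - 1]["role"]
            child = reservation["role"]
            if not child.startswith(parent + "/"):
                errors.append(f"role '{child}' is not a child of '{parent}'")
    return errors
```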
```protobuf
message CSIPluginInfo {
required string type = 1;
required string name = 2;
repeated CSIPluginContainerInfo containers = 3;
}
```
* `type`: The type of the CSI plugin. This uniquely identifies a CSI plugin
implementation. For instance: `"org.apache.mesos.csi.test"`. The naming
should follow the [Java package naming convention](https://en.wikipedia.org/wiki/Java_package#Package_naming_conventions)
to avoid conflicts on type names.
* `name`: The name of the CSI plugin. There could be multiple instances of the
same type of CSI plugin. The name field is used to distinguish these
instances. It should be a legal [Java identifier](https://docs.oracle.com/javase/tutorial/java/nutsandbolts/variables.html)
to avoid conflicts on concatenation of type and name.
* `containers`: CSI plugin container configurations (see more details below).
The [CSI controller service](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#controller-service-rpc)
will be served by the first container that contains `CONTROLLER_SERVICE`, and the
[CSI node service](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#node-service-rpc)
will be served by the first container that contains `NODE_SERVICE`.
```protobuf
message CSIPluginContainerInfo {
enum Service {
UNKNOWN = 0;
CONTROLLER_SERVICE = 1;
NODE_SERVICE = 2;
}
repeated Service services = 1;
optional CommandInfo command = 2;
repeated Resource resources = 3;
optional ContainerInfo container = 4;
}
```
* `services`: Whether the CSI plugin container provides the
[CSI controller service](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#controller-service-rpc),
the [CSI node service](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#node-service-rpc)
or both.
* `command`: The command to launch the CSI plugin container.
* `resources`: The resources to be used for the CSI plugin container.
* `container`: The additional `ContainerInfo` about the CSI plugin container.
Note that every isolation mechanism configured on the agent is applied to each
CSI plugin container.
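The "first container that contains the service" rule described above amounts to a simple scan. The container dicts are illustrative:

```python
def container_for(containers, service):
    """Return the first container config that offers `service`, or None."""
    for container in containers:
        if service in container.get("services", []):
            return container
    return None
```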
#### Sample SLRP Configuration
The following is a sample SLRP configuration that uses the [test CSI plugin](https://github.com/apache/mesos/blob/1.5.x/src/examples/test_csi_plugin.cpp)
provided by Mesos that provides both CSI controller and node services, and sets
the default reservation to `"test-role"`. The test CSI plugin will be built if
you configure Mesos with `--enable-tests-install`.
```json
{
"type": "org.apache.mesos.rp.local.storage",
"name": "test_slrp",
"default_reservations": [
{
"type": "DYNAMIC",
"role": "test-role"
}
],
"storage": {
"plugin": {
"type": "org.apache.mesos.csi.test",
"name": "test_plugin",
"containers": [
{
"services": [ "CONTROLLER_SERVICE", "NODE_SERVICE" ],
"command": {
"shell": true,
"value": "./test-csi-plugin --available_capacity=2GB --work_dir=workdir",
"uris": [
{
"value": "/PATH/TO/test-csi-plugin",
"executable": true
}
]
},
"resources": [
{ "name": "cpus", "type": "SCALAR", "scalar": { "value": 0.1 } },
{ "name": "mem", "type": "SCALAR", "scalar": { "value": 200.0 } }
]
}
]
}
}
}
```
### SLRP Management
#### Launching SLRP
To launch a SLRP, place the SLRP configuration JSON described in the
[previous section](#slrp-configuration) in a directory (e.g.,
`/etc/mesos/resource-providers`) and set the agent flag
`--resource_provider_config_dir` to point to that directory. The corresponding
SLRP will be loaded by the agent. It is possible to put multiple SLRP
configuration JSON files under that directory to instruct the agent to load
multiple SLRPs.
Alternatively, it is also possible to dynamically launch a SLRP using the [agent
v1 operator API](operator-http-api.md#agent-api). To use that, still set the
agent flag `--resource_provider_config_dir` to point to a configuration
directory (the directory may be empty). Once the agent is launched, hit the agent
`/api/v1` endpoint using the [`ADD_RESOURCE_PROVIDER_CONFIG`](operator-http-api.md#add_resource_provider_config)
call:
For example, here is the `curl` command to launch a SLRP:
```shell
curl -X POST -H 'Content-Type: application/json' -d '{"type":"ADD_RESOURCE_PROVIDER_CONFIG","add_resource_provider_config":{"info":<SLRP_JSON_CONFIG>}}' http://<agent_ip>:<agent_port>/api/v1
```
#### Updating SLRP
A SLRP can be updated by modifying the JSON configuration file. Once the
modification is done, restart the agent to pick up the new configuration.
Alternatively, the operator can dynamically update a SLRP using the [agent v1
operator API](operator-http-api.md#agent-api). When the agent is running, hit
the agent `/api/v1` endpoint using the
[`UPDATE_RESOURCE_PROVIDER_CONFIG`](operator-http-api.md#update_resource_provider_config)
call:
For example, here is the `curl` command to update a SLRP:
```shell
curl -X POST -H 'Content-Type: application/json' -d '{"type":"UPDATE_RESOURCE_PROVIDER_CONFIG","update_resource_provider_config":{"info":<NEW_SLRP_JSON_CONFIG>}}' http://<agent_ip>:<agent_port>/api/v1
```
*NOTE*: Currently, only `storage.containers` in the `ResourceProviderInfo` can
be updated. This allows operators to update the CSI plugin (e.g., upgrading)
without affecting running tasks and executors.
#### Removing SLRP
Removing a SLRP means that the agent will terminate the existing SLRP if it is
still running, and will no longer launch the SLRP during startup. The master and
the agent will think the SLRP has disconnected, similar to agent disconnection.
If there exists a task that is using the disk resources provided by the SLRP,
its execution will not be affected. However, offer operations (e.g.,
`CREATE_DISK`) for the SLRP will not succeed. In fact, if a SLRP is
disconnected, the master will rescind the offers related to that SLRP,
effectively preventing frameworks from performing operations on the disconnected
SLRP.
The SLRP can be re-added after its removal following the same instructions of
[launching a SLRP](#launching-slrp). Note that removing a SLRP is different from
marking a SLRP as gone, in which case the SLRP will not be allowed to be
re-added. Marking a SLRP as gone is not yet supported.
A SLRP can be removed by removing the JSON configuration file from the
configuration directory (`--resource_provider_config_dir`). Once the removal is
done, restart the agent to pick up the removal.
Alternatively, the operator can dynamically remove a SLRP using the
[agent v1 operator API](operator-http-api.md#agent-api). When the agent is
running, hit the agent `/api/v1` endpoint using the
[`REMOVE_RESOURCE_PROVIDER_CONFIG`](operator-http-api.md#remove_resource_provider_config)
call:
For example, here is the `curl` command to remove a SLRP:
```shell
curl -X POST -H 'Content-Type: application/json' -d '{"type":"REMOVE_RESOURCE_PROVIDER_CONFIG","remove_resource_provider_config":{"type":"org.apache.mesos.rp.local.storage","name":<SLRP_NAME>}}' http://<agent_ip>:<agent_port>/api/v1
```
#### Authorization
A new authorization action `MODIFY_RESOURCE_PROVIDER_CONFIG` has been added.
This action applies to adding/updating/removing a SLRP.
For the default Mesos local authorizer, a new ACL
`ACL.ModifyResourceProviderConfig` has been added, allowing operators to limit
access to the above API endpoints.
```protobuf
message ACL {
// Which principals are authorized to add, update and remove resource
// provider config files.
message ModifyResourceProviderConfig {
// Subjects: HTTP Username.
required Entity principals = 1;
// Objects: Given implicitly.
// Use Entity type ANY or NONE to allow or deny access.
required Entity resource_providers = 2;
}
}
```
Currently, the `Objects` entity has to be either `ANY` or `NONE`. Fine-grained
authorization of specific resource provider objects is not yet supported. Please
refer to the [authorization doc](authorization.md) for more details about the
default Mesos local authorizer.
### Standalone Containers for CSI Plugins
As already mentioned earlier, each SLRP instance manages exactly one CSI plugin.
Each CSI plugin consists of one or more containers running processes that
implement both the [CSI controller service](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#controller-service-rpc)
and the [CSI node service](https://github.com/container-storage-interface/spec/blob/v0.1.0/spec.md#node-service-rpc).
The CSI plugin containers are managed by the SLRP automatically. The operator
does not need to deploy them manually. The SLRP will make sure that the CSI
plugin containers are running and restart them if needed (e.g., failed).
The CSI plugin containers are launched using the standalone container API
provided by the Mesos agent. See more details about standalone container in the
[standalone container doc](standalone-container.md).
## Limitations
* Only local disk resources are supported currently. That means the disk
resources are tied to a particular agent node and cannot be used on a
different agent node. The external disk resources support is coming soon.
* The CSI plugin container cannot be a Docker container yet. Storage vendors
currently should package the CSI plugins in binary format and use the
[fetcher](fetcher.md) to fetch the binary executable.
* `BLOCK` type disk resources are not supported yet.
aa03f258d2d4f7165eb5db7713bcb251eea67cd3 | 1,630 | md | Markdown | docs/fundamentals/code-analysis/style-rules/ide0018.md | xh286286/docs.zh-cn | 454d0598540d1be670f17663bae7f4c5fd33b7fd | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-08T05:12:22.000Z | 2021-04-08T05:12:22.000Z | docs/fundamentals/code-analysis/style-rules/ide0018.md | xh286286/docs.zh-cn | 454d0598540d1be670f17663bae7f4c5fd33b7fd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/fundamentals/code-analysis/style-rules/ide0018.md | xh286286/docs.zh-cn | 454d0598540d1be670f17663bae7f4c5fd33b7fd | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-24T05:35:32.000Z | 2021-01-24T05:35:32.000Z | ---
title: IDE0018:内联变量声明
description: 了解代码分析规则 IDE0018:内联变量声明
ms.date: 09/30/2020
ms.topic: reference
f1_keywords:
- IDE0018
- csharp_style_inlined_variable_declaration
helpviewer_keywords:
- IDE0018
- csharp_style_inlined_variable_declaration
author: gewarren
ms.author: gewarren
dev_langs:
- CSharp
ms.openlocfilehash: e1473cb4866331a3ed6a32cf79b5145b1043a54e
ms.sourcegitcommit: 636af37170ae75a11c4f7d1ecd770820e7dfe7bd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/07/2020
ms.locfileid: "96590566"
---
# <a name="inline-variable-declaration-ide0018"></a>内联变量声明 (IDE0018)
|Property|值|
|-|-|
| **规则 ID** | IDE0018 |
| **标题** | 内联变量声明 |
| **类别** | Style |
| **Subcategory** | 表达式级首选项 (语言规则) |
| **适用的语言** | C# 7.0+ |
## <a name="overview"></a>概述
此样式规则与 `out` 变量是否声明为内联有关。 从 C# 7 开始,可以[在方法调用的实际参数列表中声明 out 变量](../../../csharp/language-reference/keywords/out-parameter-modifier.md#calling-a-method-with-an-out-argument),而不是在单独的变量声明中。
## <a name="csharp_style_inlined_variable_declaration"></a>csharp_style_inlined_variable_declaration
|Property|值|
|-|-|
| **选项名称** | csharp_style_inlined_variable_declaration
| **选项值** | `true` - `out` 变量在方法调用的参数列表中声明为内联为首选项(如可能)<br /><br />`false` - 在方法调用之前声明 `out` 变量为首选项 |
| **默认选项值** | `true` |
#### <a name="example"></a>示例
```csharp
// csharp_style_inlined_variable_declaration = true
if (int.TryParse(value, out int i) {...}
// csharp_style_inlined_variable_declaration = false
int i;
if (int.TryParse(value, out i) {...}
```
## <a name="see-also"></a>另请参阅
- [表达式级首选项](expression-level-preferences.md)
- [代码样式语言规则](language-rules.md)
- [代码样式规则参考](index.md)
| 26.721311 | 185 | 0.72638 | eng_Latn | 0.121474 |
aa0444a47b38b059e2ed3bc04b4cd61fd57d317a | 123 | md | Markdown | _posts/2015-10-5-demo-4.md | sgengler/site-polish | 863c04f3408b57ac41d6bc74d71ad81f6478de2f | [
"MIT"
] | null | null | null | _posts/2015-10-5-demo-4.md | sgengler/site-polish | 863c04f3408b57ac41d6bc74d71ad81f6478de2f | [
"MIT"
] | null | null | null | _posts/2015-10-5-demo-4.md | sgengler/site-polish | 863c04f3408b57ac41d6bc74d71ad81f6478de2f | [
"MIT"
] | null | null | null | ---
layout: post
title: Demo 4
slug: demo-1 demo-2 demo-3 demo-4
---
- Delayed entrance
- Use custom cubic-bezier easing.
| 13.666667 | 33 | 0.691057 | eng_Latn | 0.391921 |
aa04e2d17edddd062abe4c049cfb3acbf2eb95b6 | 62 | md | Markdown | README.md | hatamiarash7/ColorGame | 44d2d6375857e1716fe33b7cc1a110c2240f0da1 | [
"Apache-2.0"
] | null | null | null | README.md | hatamiarash7/ColorGame | 44d2d6375857e1716fe33b7cc1a110c2240f0da1 | [
"Apache-2.0"
] | null | null | null | README.md | hatamiarash7/ColorGame | 44d2d6375857e1716fe33b7cc1a110c2240f0da1 | [
"Apache-2.0"
] | null | null | null | ColorGame
===
ColorGame is a simple color based Android Game. | 15.5 | 47 | 0.774194 | eng_Latn | 0.882735 |
aa052be60f92266947e7b55d04af9ec97860fca3 | 485 | md | Markdown | README.md | vt-sailbot/sailbot-18 | 55b4c4c18f24ff0e7476a7aedea6089fd78c6a99 | [
"MIT"
] | 1 | 2018-09-07T16:47:20.000Z | 2018-09-07T16:47:20.000Z | README.md | vt-sailbot/sailbot-18 | 55b4c4c18f24ff0e7476a7aedea6089fd78c6a99 | [
"MIT"
] | null | null | null | README.md | vt-sailbot/sailbot-18 | 55b4c4c18f24ff0e7476a7aedea6089fd78c6a99 | [
"MIT"
] | null | null | null | # SailBOT
[]()
[]()
## Introduction
SailBOT is a competition to build and program an autonomous sailing robot capable of navigating obstacles in a dynamic, real-world sailing environment. The robot uses on-board sensor input to effectively sail towards marked latitude and longitude coordinate locations and avoid obstacles.
| 53.888889 | 288 | 0.791753 | eng_Latn | 0.927393 |
aa057b85171151a8744f5f37e46b33a9f965cdc1 | 2,061 | md | Markdown | business-central/ui-how-preview-post-results.md | MicrosoftDocs/dynamics365smb-docs-pr.nl-be | 133474ce2cae16f74c343157d941aa50aa57edcc | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-18T17:20:12.000Z | 2021-04-20T21:13:46.000Z | business-central/ui-how-preview-post-results.md | MicrosoftDocs/dynamics365smb-docs-pr.nl-be | 133474ce2cae16f74c343157d941aa50aa57edcc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | business-central/ui-how-preview-post-results.md | MicrosoftDocs/dynamics365smb-docs-pr.nl-be | 133474ce2cae16f74c343157d941aa50aa57edcc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Een voorbeeld van posten bekijken voordat u een document of dagboek boekt | Microsoft Docs
description: U kunt ervoor zorgen dat posten voor documenten en dagboeken correct zijn voordat u ze naar het grootboek boekt.
services: project-madeira
documentationcenter: ''
author: SusanneWindfeldPedersen
ms.service: dynamics365-business-central
ms.topic: conceptual
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 04/01/2021
ms.author: solsen
ms.openlocfilehash: fc137cb8c45d935be28d5ededc435d2d4648f8bb
ms.sourcegitcommit: a7cb0be8eae6ece95f5259d7de7a48b385c9cfeb
ms.translationtype: HT
ms.contentlocale: nl-BE
ms.lasthandoff: 07/08/2021
ms.locfileid: "6441056"
---
# <a name="preview-posting-results"></a>Voorbeeld van boekingsresultaten weergeven
In elk document en elk dagboek die kunnen worden geboekt, kunt u de knop **Voorbeeld van boeking weergeven** kiezen om de verschillende soorten posten te bekijken die worden gemaakt wanneer u het document of het dagboek boekt.
## <a name="to-preview-gl-entries-that-will-result-from-posting-a-purchase-invoice"></a>Een voorbeeld weergeven van grootboekposten die resulteren na boeking van een inkoopfactuur
1. Kies het zoekpictogram, voer **Inkoopfacturen** in en kies vervolgens de gerelateerde koppeling.
2. Maak een inkoopfactuur. Zie voor meer informatie [Inkopen vastleggen](purchasing-how-record-purchases.md).
3. Kies **Voorbeeld van boeking weergeven**.
4. Selecteer op de pagina **Voorbeeld van boeking weergeven** **Grootboekpost** en kies vervolgens **Verwante posten weergeven**.
Op de pagina **Voorbeeld GB-posten** ziet u welke posten worden gemaakt wanneer u de inkoopfactuur boekt.
## <a name="see-also"></a>Zie ook
[Documenten en dagboeken boeken](ui-post-documents-journals.md)
[Werken met [!INCLUDE[prod_short](includes/prod_short.md)]](ui-work-product.md)
[Algemene bedrijfsfunctionaliteit](ui-across-business-areas.md)
[!INCLUDE[footer-include](includes/footer-banner.md)] | 55.702703 | 226 | 0.794275 | nld_Latn | 0.990627 |
aa05bc81f91a25892d6a5621bb61a7bcd902811f | 5,639 | md | Markdown | beginner/getting_started.md | patcon/loomio-school | e4d721f254f6e6936781e1ec7a7a594fca334a15 | [
"MIT"
] | null | null | null | beginner/getting_started.md | patcon/loomio-school | e4d721f254f6e6936781e1ec7a7a594fca334a15 | [
"MIT"
] | null | null | null | beginner/getting_started.md | patcon/loomio-school | e4d721f254f6e6936781e1ec7a7a594fca334a15 | [
"MIT"
] | 1 | 2018-11-29T03:48:53.000Z | 2018-11-29T03:48:53.000Z | # Starting your group
Getting started is often the hardest part of anything. On Loomio, it involves getting everyone signed up, familiar with the tool, and comfortable making decisions together. If done well, the group will come to life as people take the initiative to start discussions and raise proposals.
Most Loomio groups need a champion to help them get going, and then they build their own momentum. If you're reading this, that champion is probably you!
## Introduce Loomio 👂👄🗣
Have a discussion with your group about using Loomio.
- **What problem** are you trying to solve with a new tool? (For example, including people who can't attend meetings, better documentation, or to keep making progress between meetings.)
- **What kind of decisions** do you want to make on Loomio?
- **Who's involved?** Does everyone in the group need to use it?
- What kind of **behaviour** is welcome, and what's not ok?
- How will Loomio complement your **existing processes**?
Some people will be apprehensive about adopting a new piece of software for online decision-making. Talk to people in your group to work through any concerns.
## Have a look around 👀
You need a good understanding of how to use Loomio, so you can assist others.
Make sure you understand the [group settings](https://loomio.gitbooks.io/manual/content/en/group_settings.html), especially privacy. Have a look through the menus and sections to familiarise yourself with the software.
<iframe width="560" height="315" src="https://www.youtube.com/embed/xwE0IM1k64E" frameborder="0" allowfullscreen></iframe>
## Update your group description 🌟
With a clear purpose, people can make judgement calls about what's best for the group. Understanding group context helps people get oriented.
Your Loomio group page has a space for information. Update this field so that everyone can see it when they visit the group. If you're unsure, start a discussion with your group about what you want to achieve together and what information people want on the group page.
<iframe width="560" height="315" src="https://www.youtube.com/embed/asZEbbiHH-s" frameborder="0" allowfullscreen></iframe>
## Upload a profile photo 👩🏽
Seeing someone's face next to the text they've written can make it feel more human. It's especially important if you're the one welcoming everyone in.
Upload your profile photo before sending invitations to join the group, so that your friendly image is included.
## Upload a group photo 🖼
Uploading a photo that has some meaning to your group significantly improves the sense of belonging. You can customise both the small square photo (eg, with a logo), and the big cover image (eg, with a group photo).
<iframe width="560" height="315" src="https://www.youtube.com/embed/8cdGR7FaGYg" frameborder="0" allowfullscreen></iframe>
<iframe width="560" height="315" src="https://www.youtube.com/embed/F2r2tKi3pxM" frameborder="0" allowfullscreen></iframe>
## Introductions thread 👋🏽
A round of introductions is a great way for people to get to know a bit about each other and to feel more comfortable. Even if people are already acquainted, they can share more specifically about their role or perspective in the Loomio group.
New groups start with an introductions thread automatically. We recommend you edit the thread to make it yours, with a prompt relevant to your group.
To edit the introductions thread, click into it and select the edit option in the context box.
<iframe width="560" height="315" src="https://www.youtube.com/embed/CGUH3UJrxfQ" frameborder="0" allowfullscreen></iframe>
## Invite people 🙋
You're ready to invite people into the group! Loomio is a group collaboration tool, so this is an important step.
Visit your group page and click the “Invite people” button.
* If you have their email addresses, you can send invitations to each member of your group.
* Or you can share the invitation link via email or however your team communicates.
Follow up on people who don't make it into the group and give them a nudge. You don't want to leave people out of decision-making.
If people join but don't introduce themselves, you might like to welcome them to the group with an @mention in the introductions thread:
> “Welcome to the group @Jane :) It's great to have you here! Would you mind saying a little bit about your work in this space?”
<iframe width="560" height="315" src="https://www.youtube.com/embed/nm9hoOofblw" frameborder="0" allowfullscreen></iframe>
## Champion the use of Loomio 🏆
### Demonstrate and invite participation.
If you model behaviour for others to emulate, your group will be more inclusive and engaging. You can help decisions progress constructively.
For example:
> - “@Jane that could be a good idea, why don’t you raise a proposal so we can see if the rest of the group agrees?”
- “We haven’t heard from @Bill and @Ngaire … what are your thoughts?”
- “We might be getting off topic here. I've started another Loomio discussion about that [here](http://www.loomio.org). Let's bring this back to the original focus.”
### Move decisions to Loomio
#### If discussions are happening via email...
Remind the group you've agreed to use Loomio and request people move the discussion over there. Sometimes it can be helpful to copy and paste what's been said so far and directly give everyone the Loomio link.
#### If decisions are being made in-person...
Recall why you wanted to use Loomio, and ask if the group wants to move the discussion online before concluding a decision. Common reasons are to include people who aren't in the room, and create documentation for future reference.
| 57.540816 | 286 | 0.771591 | eng_Latn | 0.999284 |
aa06c372432180dcf96ff2ce7f45dd35492709ac | 742 | md | Markdown | docs/ui-components/labels.md | jonqin2018/jonqin2018.github.io | c9b61ff3f79b353438d56f74b66adb86d70b7335 | [
"MIT"
] | null | null | null | docs/ui-components/labels.md | jonqin2018/jonqin2018.github.io | c9b61ff3f79b353438d56f74b66adb86d70b7335 | [
"MIT"
] | 1 | 2021-05-11T22:22:37.000Z | 2021-05-11T22:22:37.000Z | docs/ui-components/labels.md | jonqin2018/jonqin2018.github.io | c9b61ff3f79b353438d56f74b66adb86d70b7335 | [
"MIT"
] | null | null | null | ---
layout: default
title: Labels
parent: UI Components
nav_order: 3
---
# Labels
Use labels as a way to add an additional mark to a section of your docs. Labels come in a few colors. By default, labels will be blue.
<div class="code-example" markdown="1">
Default label
{: .label }
Blue label
{: .label .label-blue}
Stable
{: .label .label-green}
New release
{: .label .label-purple}
Coming soon
{: .label .label-yellow}
Deprecated
{: .label .label-red}
</div>
```markdown
Default label
{: .label }
Blue label
{: .label .label-blue}
Stable
{: .label .label-green}
New release
{: .label .label-purple}
Coming soon
{: .label .label-yellow}
Deprecated
{: .label .label-red}
```
| 14.269231 | 135 | 0.636119 | eng_Latn | 0.90464 |
aa06ea4680a2246126e29049c2e6f8c21cb2755d | 810 | md | Markdown | README.md | nemanjasdev/mTribesTest | ed594ee98ad88e3e67a78b985bce8308a63383de | [
"MIT"
] | null | null | null | README.md | nemanjasdev/mTribesTest | ed594ee98ad88e3e67a78b985bce8308a63383de | [
"MIT"
] | null | null | null | README.md | nemanjasdev/mTribesTest | ed594ee98ad88e3e67a78b985bce8308a63383de | [
"MIT"
] | null | null | null | # mTribesTest
Test Application for M-Tribes Company
When the user runs the app for the first time:
--- Home View ---
- A detailed list of cars is shown
- Cars with an unacceptable interior are shown in red, while cars with a good interior are shown in green.
- Views can be changed with a swipe, or by tapping the icon in the bottom NavigationBar.
--- Map View ---
- A single tap on a cluster centers it on the screen
- A double tap on a cluster zooms the map in slightly so that more pins become visible
- Typing text into the SearchView filters the markers; search is by car name
- Tapping a marker shows its InfoWindow, while the other markers disappear
- Tapping the InfoWindow brings the other markers back onto the map
- If GPS is turned on, tapping a marker also shows the distance between that car and the user's location
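The distance readout described in the last bullet is typically a great-circle (haversine) computation between the phone's GPS fix and the car's marker. The app itself is Android, so the Python sketch below only mirrors the math; it is illustrative and not taken from this project:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in
    decimal degrees (the quantity a marker popup would display)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
```

On Android itself the platform helper `Location.distanceBetween` performs an equivalent computation.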
Enjoy!
| 31.153846 | 106 | 0.75679 | eng_Latn | 0.998474 |
aa07757d5cc61b2377c09c22f5b1e3ebe6ab27f4 | 9,756 | md | Markdown | help/sources/home.md | pfd/experience-platform.en | bf13ed21dcc85f62d522bc570fc13c7cb6377844 | [
"MIT"
] | null | null | null | help/sources/home.md | pfd/experience-platform.en | bf13ed21dcc85f62d522bc570fc13c7cb6377844 | [
"MIT"
] | null | null | null | help/sources/home.md | pfd/experience-platform.en | bf13ed21dcc85f62d522bc570fc13c7cb6377844 | [
"MIT"
] | null | null | null | ---
keywords: Experience Platform;home;popular topics;source connectors;source connector;sources;data sources;data source;data source connection
solution: Experience Platform
title: Adobe Experience Platform Source Connectors overview
topic: overview
description: Adobe Experience Platform allows data to be ingested from external sources while providing you with the ability to structure, label, and enhance incoming data using Platform services. You can ingest data from a variety of sources such as Adobe applications, cloud-based storage, databases, and many others.
---
# Source connectors overview
Adobe Experience Platform allows data to be ingested from external sources while providing you with the ability to structure, label, and enhance incoming data using [!DNL Platform] services. You can ingest data from a variety of sources such as Adobe applications, cloud-based storage, databases, and many others.
[!DNL Experience Platform] provides a RESTful API and an interactive UI that lets you set up source connections to various data providers with ease. These source connections enable you to authenticate your third-party systems, set times for ingestion runs, and manage data ingestion throughput.
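As a rough sketch of what driving this through the API can look like, the JSON body of a "create connection" request can be assembled ahead of time. The payload shape below follows the general Flow Service pattern, but the spec ID, auth spec name, and credential keys are placeholders (assumptions), not verified values:

```python
import json

def build_connection_payload(name, connection_spec_id, auth_spec, auth_params):
    """Assemble a hypothetical 'create connection' request body.

    All identifiers here are placeholders: look up the real connection
    spec ID and credential field names for your source before use.
    """
    return {
        "name": name,
        "connectionSpec": {"id": connection_spec_id, "version": "1.0"},
        "auth": {"specName": auth_spec, "params": dict(auth_params)},
    }

payload = build_connection_payload(
    "My cloud storage connection",
    "<CONNECTION_SPEC_ID>",  # placeholder, not a real ID
    "Access Key",            # placeholder auth spec name
    {"accessKey": "<KEY>", "secretKey": "<SECRET>"},
)
body = json.dumps(payload)   # what would be POSTed to the Flow Service
```

The actual endpoint paths, headers, and per-source credential schemas are documented in the API tutorials for each connector.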
With [!DNL Experience Platform], you can centralize data you collect from disparate sources and use the insights gained from it to do more.
## Types of sources
Sources in [!DNL Experience Platform] are grouped into the following categories:
### Adobe applications
[!DNL Experience Platform] allows data to be ingested from other Adobe applications, including Adobe Analytics, Adobe Audience Manager, and [!DNL Experience Platform Launch]. See the following related documents for more information:
- [Adobe Audience Manager connector overview](connectors/adobe-applications/audience-manager.md)
- [Create an Adobe Audience Manager source connector in the UI](./tutorials/ui/create/adobe-applications/audience-manager.md)
- [Adobe Analytics Classifications Data connector overview](connectors/adobe-applications/classifications.md)
- [Create an Adobe Analytics Classifications Data source connector in the UI](./tutorials/ui/create/adobe-applications/classifications.md)
- [Adobe Analytics data connector overview](connectors/adobe-applications/analytics.md)
- [Create an Adobe Analytics source connector in the UI](./tutorials/ui/create/adobe-applications/analytics.md)
- [Create a Customer Attributes source connector in the UI](./tutorials/ui/create/adobe-applications/customer-attributes.md)
### Advertising
[!DNL Experience Platform] provides support for ingesting data from a third-party advertising system. See the following related documents for more information on specific source connectors:
- [[!DNL Google AdWords]](connectors/advertising/ads.md) connector
### Cloud Storage
Cloud storage sources can bring your own data into [!DNL Platform] without the need to download, format, or upload. Ingested data can be formatted as XDM JSON, XDM parquet, or delimited. Every step of the process is integrated into the Sources workflow using the user interface. See the following related documents for more information:
- [[!DNL Azure Data Lake Storage Gen2] connector](connectors/cloud-storage/adls-gen2.md)
- [[!DNL Azure Blob] connector](connectors/cloud-storage/blob.md)
- [[!DNL Amazon Kinesis] connector](connectors/cloud-storage/kinesis.md)
- [[!DNL Amazon S3] connector](connectors/cloud-storage/s3.md)
- [[!DNL Apache HDFS] connector](connectors/cloud-storage/hdfs.md)
- [[!DNL Azure Event Hubs] connector](connectors/cloud-storage/eventhub.md)
- [[!DNL Azure File Storage] connector](connectors/cloud-storage/azure-file-storage.md)
- [[!DNL FTP and SFTP] connector](connectors/cloud-storage/ftp-sftp.md)
- [[!DNL Google Cloud Storage] connector](connectors/cloud-storage/google-cloud-storage.md)
### Customer Relationship Management (CRM)
CRM systems provide data that can help build customer relationships, which in turn, create loyalty and drive customer retention. [!DNL Experience Platform] provides support for ingesting CRM data from [!DNL Microsoft Dynamics 365] and [!DNL Salesforce]. See the following related documents for more information:
- [[!DNL Microsoft Dynamics] connector](connectors/crm/ms-dynamics.md)
- [[!DNL Salesforce] connector](connectors/crm/salesforce.md)
### Customer Success
[!DNL Experience Platform] provides support for ingesting data from a third-party customer success application. See the following related documents for more information:
- [[!DNL Salesforce Service Cloud] connector](connectors/customer-success/salesforce-service-cloud.md)
- [[!DNL ServiceNow] connector](connectors/customer-success/servicenow.md)
### Database
[!DNL Experience Platform] provides support for ingesting data from a third-party database. See the following related documents for more information on specific source connectors:
- [[!DNL Amazon Redshift] connector](connectors/databases/redshift.md)
- [[!DNL Apache Hive on Azure HDInsights] connector](connectors/databases/hive.md)
- [[!DNL Apache Spark on Azure HDInsights] connector](connectors/databases/spark.md)
- [[!DNL Azure Data Explorer] connector](connectors/databases/data-explorer.md)
- [[!DNL Azure Synapse Analytics] connector](connectors/databases/synapse-analytics.md)
- [[!DNL Azure Table Storage] connector](connectors/databases/ats.md)
- [[!DNL Couchbase] connector](connectors/databases/couchbase.md)
- [[!DNL Google BigQuery] connector](connectors/databases/bigquery.md)
- [[!DNL GreenPlum] connector](connectors/databases/greenplum.md)
- [[!DNL HP Vertica] connector](connectors/databases/hp-vertica.md)
- [[!DNL IBM DB2] connector](connectors/databases/ibm-db2.md)
- [[!DNL Microsoft SQL Server] connector](connectors/databases/sql-server.md)
- [[!DNL MySQL] connector](connectors/databases/mysql.md)
- [[!DNL Oracle] connector](connectors/databases/oracle.md)
- [[!DNL Phoenix] connector](connectors/databases/phoenix.md)
- [[!DNL PostgreSQL] connector](connectors/databases/postgres.md)
### eCommerce
[!DNL Experience Platform] provides support for ingesting data from a third-party eCommerce system. See the following related documents for more information on specific source connectors:
- [[!DNL Shopify]](connectors/ecommerce/shopify.md)
### Marketing Automation
[!DNL Experience Platform] provides support for ingesting data from a third-party marketing automation system. See the following related documents for more information on specific source connectors:
- [[!DNL HubSpot] connector](connectors/marketing-automation/hubspot.md)
### Payments
[!DNL Experience Platform] provides support for ingesting data from a third-party payments system. See the following related documents for more information on specific source connectors:
- [[!DNL PayPal] connector](connectors/payments/paypal.md)
### Protocols
[!DNL Experience Platform] provides support for ingesting data from a third-party protocols system. See the following related documents for more information on specific source connectors:
- [[!DNL Generic OData] connector](connectors/protocols/odata.md)
## Access control for sources in data ingestion
Permissions for sources in data ingestion can be managed within the Adobe Admin Console. You can access permissions through the **[!UICONTROL Permissions]** tab in a particular product profile. From the **[!UICONTROL Edit Permissions]** panel, you can access the permissions pertaining to sources through the **[!UICONTROL data ingestion]** menu entry. The **[!UICONTROL View Sources]** permission grants read-only access to available sources in the **[!UICONTROL Catalog]** tab and authenticated sources in the **[!UICONTROL Browse]** tab, while the **[!UICONTROL Manage Sources]** permission grants full access to read, create, edit, and disable sources.
The following table outlines how the UI behaves based on different combinations of these permissions:
| Permission level | Description |
| ---- | ----|
| **[!UICONTROL View Sources]** On | Grant read-only access to sources in each source-type in the Catalog tab, as well as the Browse, Accounts, and Dataflow tabs. |
| **[!UICONTROL Manage Sources]** On | In addition to the functions included in **[!UICONTROL View Sources]**, grants access to **[!UICONTROL Connect Source]** option in **[!UICONTROL Catalog]** and to **[!UICONTROL Select Data]** option in **[!UICONTROL Browse]**. **[!UICONTROL Manage Sources]** also allows you to enable or disable **[!UICONTROL DataFlows]** and edit their schedules. |
| **[!UICONTROL View Sources]** Off and **[!UICONTROL Manage Sources]** Off | Revoke all access to sources. |
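The behaviour in the table above reduces to a small lookup. The sketch below encodes that logic; the capability names are illustrative labels for this example, not actual product identifiers:

```python
def sources_capabilities(view_sources: bool, manage_sources: bool) -> set:
    """Return the set of UI capabilities implied by the two permissions.

    Mirrors the table: Manage Sources includes everything View Sources
    grants; with both off, all access is revoked.
    """
    caps = set()
    if view_sources or manage_sources:
        caps |= {"read_catalog", "read_browse", "read_accounts", "read_dataflows"}
    if manage_sources:
        caps |= {"connect_source", "select_data", "toggle_dataflows", "edit_schedules"}
    return caps
```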
For more information about the available permissions granted through the Admin Console, including those for sources, see the [access control overview](../access-control/home.md).
## Terms and conditions {#terms-and-conditions}
By using any of the Sources labeled as beta (“Beta”), You hereby acknowledge that the Beta is provided ***“as is” without warranty of any kind***.
Adobe shall have no obligation to maintain, correct, update, change, modify, or otherwise support the Beta. You are advised to use caution and not to rely in any way on the correct functioning or performance of such Beta and/or accompanying materials. The Beta is considered Confidential Information of Adobe.
Any “Feedback” (information regarding the Beta including but not limited to problems or defects you encounter while using the Beta, suggestions, improvements, and recommendations) provided by You to Adobe is hereby assigned to Adobe including all rights, title, and interest in and to such Feedback.
Submit Open Feedback or create a Support Ticket to share your suggestions, report a bug, or request a feature enhancement.
| 72.266667 | 656 | 0.787003 | eng_Latn | 0.974317 |
aa0782bfe6970c5b3d10eab6bec0f1a95e0459e7 | 1,278 | md | Markdown | _posts/2017-09-25-ora-ubuntu-dock-supporta-le-barre-di-avanzamento-e-i-contatori-notifiche.md | DumbMahreeo/linuxhub.it | 201ca8534562fb22f013b919d599c547b461bd5c | [
"MIT"
] | null | null | null | _posts/2017-09-25-ora-ubuntu-dock-supporta-le-barre-di-avanzamento-e-i-contatori-notifiche.md | DumbMahreeo/linuxhub.it | 201ca8534562fb22f013b919d599c547b461bd5c | [
"MIT"
] | null | null | null | _posts/2017-09-25-ora-ubuntu-dock-supporta-le-barre-di-avanzamento-e-i-contatori-notifiche.md | DumbMahreeo/linuxhub.it | 201ca8534562fb22f013b919d599c547b461bd5c | [
"MIT"
] | null | null | null | ---
title: 'Ora Ubuntu Dock supporta le barre di avanzamento e i contatori notifiche'
published: 2017-09-25
layout: post
author: Mirko B.
author_github: mirkobrombin
tags:
- ubuntu
- gnome
---
<p>Piacevole sorpresa in Ubuntu 17.10. Infatti, nella Dock sarà ora possibile visualizzare la barra di avanzamento delle varie applicazioni e i contatori ad esse relativi: ad esempio, potranno essere visualizzati il progresso dei download di Firefox e i messaggi non letti di Telegram o Skype.</p><p><img class=" size-full wp-image-157" alt="" height="286" src="https://linuxhub.it/wordpress/wp-content/uploads/2017/09/ubuntu-dock-progress-bar-badge-1.png" width="554" /></p><p>Se state già utilizzando Ubuntu 17.10 riceverete presto l'aggiornamento. Se proprio non potete aspettare, il download è disponibile <a href="https://launchpad.net/ubuntu/+source/gnome-shell-extension-ubuntu-dock/0.6/+build/13508799">qui</a>.</p><p>Una volta installato è necessario riavviare la sessione di GNOME: per farlo rieffettuate il login oppure premete la combinazione di tasti <strong>ALT+F2</strong>, digitate <strong>r</strong> e premete invio.</p><p><strong>Download</strong> | https://launchpad.net/ubuntu/+source/gnome-shell-extension-ubuntu-dock/0.6/+build/13508799</p> | 116.181818 | 1,079 | 0.770736 | ita_Latn | 0.868523
aa07a843822ec814f719a6d37ed0df451d628d6c | 22 | md | Markdown | README.md | codeformuenster/busplanung | 9d493e2464bb153f30f65d8a129558eb986da869 | [
"Apache-2.0"
] | null | null | null | README.md | codeformuenster/busplanung | 9d493e2464bb153f30f65d8a129558eb986da869 | [
"Apache-2.0"
] | null | null | null | README.md | codeformuenster/busplanung | 9d493e2464bb153f30f65d8a129558eb986da869 | [
"Apache-2.0"
] | null | null | null | busplanung
==========
| 7.333333 | 10 | 0.454545 | deu_Latn | 0.90215 |
aa083ed0e3bbd845387dd180d7f61c67f4033a4c | 9,450 | md | Markdown | src/content/dips/es/DIP-0.md | NhlanhlaHasane/devcon-website | 0abd1ceb9fa0d4792facbfe905febc687d4b36e4 | [
"MIT"
] | 11 | 2020-12-03T02:42:10.000Z | 2021-09-07T07:55:31.000Z | src/content/dips/es/DIP-0.md | NhlanhlaHasane/devcon-website | 0abd1ceb9fa0d4792facbfe905febc687d4b36e4 | [
"MIT"
] | 1 | 2021-10-07T20:44:12.000Z | 2021-10-07T20:44:12.000Z | src/content/dips/es/DIP-0.md | NhlanhlaHasane/devcon-website | 0abd1ceb9fa0d4792facbfe905febc687d4b36e4 | [
"MIT"
] | 5 | 2020-11-28T18:57:55.000Z | 2022-03-26T08:39:49.000Z | ---
Summary: 'DIP significa Devcon Improvement Proposal. Un DIP es una propuesta presentada por miembros de la comunidad que describe una nueva característica o proceso deseado para mejorar Devcon. Un DIP debería ser conciso y proporcionar toda la información posible, así como una justificación para la propuesta.
El autor del DIP es responsable de presentar los argumentos a favor de un DIP propuesto, y los miembros de la comunidad podrán comentar al respecto. Corresponde al equipo de Devcon elegir qué propuestas son consideradas, revisadas y aceptadas.'
Github URL: https://github.com/efdevcon/DIPs/blob/master/DIPs/DIP-0.md
DIP: 0
Title: Propósito DIP y directrices
Status: Active
Themes: Meta
Tags: Other
Authors: Bettina Boon Falleur (@BettinaBF), Ligi (@ligi), Heather Davidson (@p0unce), Skylar (@skylarweaver), Joseph Schweitzer (@ethjoe)
Discussion: https://forum.devcon.org
Created: 2020-07-06
---
## ¿Qué es un DIP?
DIP significa Devcon Improvement Proposal. Un DIP es una propuesta presentada por miembros de la comunidad que describe una nueva característica o proceso deseado para mejorar Devcon. Un DIP debería ser conciso y proporcionar toda la información posible, así como una justificación para la propuesta.
El autor del DIP es responsable de presentar los argumentos a favor de un DIP propuesto, y los miembros de la comunidad podrán comentar al respecto. Corresponde al equipo de Devcon elegir qué propuestas son consideradas, revisadas y aceptadas.
## Justificación
El equipo de Devcon tiene la intención de proporcionar un mecanismo para recolectar de forma colaborativa información de la comunidad sobre lo que debe incluirse en el próximo Devcon. Si bien estamos entusiasmados de tener un proceso más formal para escuchar ideas de la comunidad (inspirado aproximadamente en los procesos más descentralizados de PEP, BIP y EIP), esto es un experimento, y debe entenderse que la aprobación de propuestas en última instancia corresponde exclusivamente al equipo de Devcon. Los DIPs se centran en la colaboración en el ecosistema, así que por favor revisen y colaboren en otras propuestas en lugar de enviar posibles duplicados.
## Temas
Hay dos maneras de sugerir un DIP. Nosotros, el equipo de Devcon, emitimos una lista de deseos (a través de RFPs) para implementaciones que nos gustaría que sucedieran en la próxima edición de la conferencia. ¡Échale un vistazo [aquí](https://forum.devcon.org/c/devcon-rfps/5)! También acogemos con gusto cualquier gran idea que mejore Devcon.
Aquí hay una lista de temas para inspirarte:
* **Ticketing** - Cualquier cosa útil con respecto al ticketing (por ejemplo, el uso de NFTs)
* **Social** - ¿Cómo hacer que la gente se conecte antes, durante y después de Devcon?
* **Sostenibilidad Ambiental** - Cualquier idea que nos ayude a hacer que Devcon sea más medioambientalmente sostenible
* **Experiencia virtual** - streaming en vivo y otras formas de interactuar con personas que no pueden estar físicamente presentes en Devcon
* **Compras & ID** - ¡Hagamos más para usar criptomonedas en Devcon! Compras in situ y offsite, reservas,...
* **Participación de la comunidad** - ¿Cómo podemos integrar más información de la comunidad en Devcon?
* **Arte & Belleza** - Un diseño genial para artículos de swag, un pianista de arte...
* **Freeform** - ¡Cualquier gran idea que tengas! Desde tutu-martes hasta traer tu propio T-rex para la ceremonia de cierre
* **Meta** - Cualquier mejora del proceso DIP en sí. Este proceso es nuevo, y podría beneficiarse de cualquier cosa que tengas en mente. Nos encantaría escuchar tus pensamientos.
## Etiquetas
Devcon tiene varios aspectos de su organización. Para ayudarnos a guiarle mejor, seleccione el área de enfoque que concierne a su DIP:
* **Operaciones de eventos** - Cualquier cosa relacionada con la organización general de Devcon
* **Producción de eventos** - Cualquier cosa que tenga que ver con la experiencia de los asistentes en el sitio
* **Software** - Cualquier cosa que tenga que ver con las necesidades de software, devcon.org, subdominios, ...
* **Comunicaciones** - Cualquier cosa que tenga que ver con las comunicaciones relacionadas con Devcon, los medios, artículos de blog, ...
* **Patrocinios** - Cualquier cosa que tenga que ver con patrocinios, beneficios y más.
* **Otras** - Cualquier otra cosa
## Recursos
Su DIP puede requerir recursos del equipo de Devcon, así que asegúrese de agregarlos a su borrador. Esto nos ahorrará un tiempo precioso al evaluar la viabilidad de un DIP.
* Soporte técnico
* Sala física en el lugar
* Soporte operacional
* Soporte de comunicación
* Otro: por favor especifique
## Criterios para un DIP exitoso
* Alineado con el Código de Conducta y los valores de Devcon
* La integración es impulsada por las partes interesadas/el proyecto/la comunidad, a quienes pertenece
* Disposición a colaborar con otros proyectos y personas
* Se puede integrar con otras dApps & proyectos en Devcon
* Tan bueno o mejor que la alternativa que no usa blockchain (explicar por qué y cómo)
* Mentalidad FOSS
## Equipo
El equipo de Devcon toma la decisión final sobre el estado de un DIP (Aceptado - Pospuesto - No implementado). Un equipo dedicado trabajará en conjunto para proporcionar una revisión técnica y operativa de todos los borradores de DIP presentados. Sus miembros son responsables de comunicarse con los autores de DIP, retransmitir información entre equipos y acompañar los proyectos a través de su fase de implementación y producción para asegurar que los DIPs aceptados estén listos para Devcon.
## Flujo de trabajo
#### Eureka!
Usted, el autor del DIP, acaba de encontrar una gran idea para Devcon. O bien (a) su DIP responde a una (o varias) de las RFP de Devcon, o bien (b) ha encontrado otra mejora que le gustaría sugerir.
#### Comentarios comunitarios
Esto es opcional para los DIPs que responden a un RFP.
Antes de escribir un borrador DIP, tómese el tiempo para examinar su idea. Abra un hilo de discusión en el [Foro de Devcon](https://forum.devcon.org/) y asegúrese de exponer claramente su idea para permitir que la comunidad proporcione sus comentarios. Esto se hace para asegurar que no pierda el tiempo escribiendo un DIP que no obtendrá suficiente tracción, no sea factible o sea un duplicado. Tómese cinco minutos para leer los RFPs y asegúrese de que su idea no encaja en uno; ¡puede que le ahorre algo de tiempo!
#### Proceso
1. **Borrador DIP** - ¡Elige tu pluma digital y escribe tu DIP! Sigue la ruta de ladrillo amarillo: [formato DIP](https://github.com/efdevcon/DIPs/blob/master/DIPs/DIP-0.md#dip-format)
2. **Enviar DIP** - Haga clic en el botón aterrador y envíe su DIP. Asegúrese de incluir toda la información requerida en la plantilla bajo el formato DIP a continuación.
3. **Revisión del editor** - Su DIP está ahora en manos del equipo de editores DIP para su revisión. Los editores pueden solicitar más información y pedir que vuelva a enviar el DIP.
4. **Reseña del equipo de Devcon**
* **Borrador** - Tu DIP está experimentando una rápida iteración y cambios
* **Aceptado** - ¡Tu idea DIP ha sido aprobada! ¡Ahora es el momento de trabajar en la implementación!
* **Pospuesto** - Tu DIP no será posible en el próximo Devcon, pero nos encanta la idea.
* **No implementado** - Oh no, qué malvado el equipo de Devcon; ponte en contacto con ellos y pide más contexto para entender su razonamiento
* **Cambios Solicitados** - El DIP necesita modificaciones antes de poder ser validado.
#### Implementación
* Definición de la línea de tiempo del proyecto
* Sincronización bimensual con editores DIP & otros autores DIP
* Colaboración con otros proyectos DIP
* Pruebas
#### Producción
* ¡Listo para despegar!
## Formato DIP
Todos los DIPs deben estar escritos en formato markdown. Por favor, utilice la siguiente plantilla:
```
---
DIP: (Número de DIP)
Título: (Piensa en un título corto, atractivo y descriptivo)
Estado: Borrador
Temas: (Ver temas arriba)
Etiquetas: (Por favor seleccione todos los que apliquen: Programación, ...)
Autores: (Emails de los contactos principales)
Recursos requeridos: Espacio físico en el lugar, Soporte de Operaciones, etc.)
Discusión: (URL de donde se discute este DIP, preferiblemente en https://forum.devcon. rg)
---
-----Resumen de Propuesta-----
__Resumen simple__
__50 palabras resumy__
-----Abstract-----
__200 palabra descripción de la propuesta. _
-----Motivación & Rationale-----
__Debajo hay algunas indicaciones útiles__
- ¿Cómo mejoraría la experiencia de los asistentes?
- ¿Cómo es esta solución mejor que una experiencia que no sea blockchain?
- ¿Cómo introduce esta propuesta a los asistentes a un nuevo caso de uso de blockchain/ethereum?
-----Implementación-----
__Debajo hay algunas indicaciones útiles__
- ¿Alguna parte de esta propuesta ha sido implementada en otros eventos? Si es así, por favor describa cómo se fue.
- ¿Necesita comentarios o datos de los asistentes después del evento?
-----Requisitos operacionales & Propiedad-----
__Por favor, responde a las siguientes preguntas:__
1. ¿Qué medidas son necesarias para aplicar la propuesta en Devcon?
2. ¿Quién será responsable de la aplicación efectiva de la propuesta? (es decir, trabajar el día 0)
3. ¿Con qué otros proyectos se podría integrar esta propuesta? (Puntos de bonus para la colaboración entre equipos :))
-----Enlaces & Información Adicional-----
__Cualquier información adicional__
```
---
title: Lesson 17 - basic programming practice
subtitle: Exercises on arrays
category:
  - basic programming practice
author: Đạt k20
date: 2020-10-21
featureImage: /uploads/baiviet/cpp.png
---
> Lesson 17: Write a program that reads a sequence of numbers until 0 is entered, then splits the even and odd numbers into two separate arrays. Compute the average of the positive even numbers and the average of the negative odd numbers.
> For questions about lesson 17, please reach out to [Duy Đạt](https://www.facebook.com/duydat2002/)
## Code:
```c++
#include <iostream>
using namespace std;
int so[1000], chan[1000], le[1000], i, dem, c, l, scd, sla, demcd, demla;
int main()
{
// lesson 17 - fithou.netlify.app - @Dat
i = 0;
so[0] = 1; // sentinel value so the loop body runs at least once
while (so[i] != 0)
{
i++;
cout << "Enter element " << i << " of the sequence: ";
cin >> so[i];
}
// print the sequence (without the terminating 0)
dem = i - 1;
cout << "The sequence you entered is: ";
for (i = 1; i <= dem; i++) cout << " " << so[i];
// split into even and odd arrays
c = 0; l = 0; demcd = 0; demla = 0; scd = 0; sla = 0;
for (i = 1; i <= dem; i++)
if (so[i] % 2 == 0)
{
c++;
chan[c] = so[i];
if (chan[c] > 0)
{
demcd++;
scd += chan[c];
}
}
else
{
l++;
le[l] = so[i];
if (le[l] < 0)
{
demla++;
sla += le[l];
}
}
// print the even and odd sequences
cout << "\nThe even numbers you entered are: ";
for (i = 1; i <= c; i++) cout << " " << chan[i];
cout << "\nThe odd numbers you entered are: ";
for (i = 1; i <= l; i++) cout << " " << le[i];
// guard against division by zero when a group is empty
if (demcd > 0) cout << "\nAverage of the positive even numbers: " << (float)scd / demcd;
if (demla > 0) cout << "\nAverage of the negative odd numbers: " << (float)sla / demla;
return 0;
}
```
# AspNet.Security.HandySignatur
Handy-Signatur [1] authentication provider for ASP.NET Core 2.1 [2].
:warning: Alpha, do not use in production!
## Usage
Install the NuGet package:
```
PM> Install-Package AspNet.Security.HandySignatur -Version 2.1.0-alpha.3
```
Adapt your `Startup` class to include the following:
```csharp
public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddAuthentication()
        .AddHandySignatur(options => { options.IdentityLinkDomainIdentifier = "TODO"; });
    // ...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // ...
    app.UseAuthentication();
    // ...
}
```
where the `IdentityLinkDomainIdentifier` is the identifier of the requesting authority in the private sector, encoded as a URN; see [3].
The following claims are available:
* NameIdentifier
* GivenName
* Surname
* Name
* DateOfBirth
To customize the redirect views, the `RedirectToAtrustViewCreator` and `RedirectFromAtrustViewCreator` options can be used.
## References
[1] https://www.handy-signatur.at/
[2] https://docs.microsoft.com/en-us/aspnet/core/?view=aspnetcore-2.1
[3] https://www.buergerkarte.at/konzept/securitylayer/spezifikation/20080220/tutorial/tutorial.html
### Changelog
All notable changes to this project will be documented in this file. Dates are displayed in UTC.
Generated by [`auto-changelog`](https://github.com/CookPete/auto-changelog).
#### 1.3.1
> 10 September 2021
- add optimized balance check to BoringERC20 [`#4`](https://github.com/sambacha/BoringSolidity/pull/4)
- Update README.md [`#2`](https://github.com/sambacha/BoringSolidity/pull/2)
- Natspec comments and gas savings [`#1`](https://github.com/sambacha/BoringSolidity/pull/1)
- refactor(repo): re-package [`438c2d1`](https://github.com/sambacha/BoringSolidity/commit/438c2d19ba1f58bb868622cb991c23925ab113fc)
- BoringOwnable tests added [`540d65b`](https://github.com/sambacha/BoringSolidity/commit/540d65b68af4f70f70cace5178a31dd2b608ab6e)
- Magic numbers & strings + linting + 100% test cov [`e270f75`](https://github.com/sambacha/BoringSolidity/commit/e270f7509104bfac7adbd8b13c3d844d1af76204)
This is the source of my Jekyll based website that is hosted on GitHub at broerkens.de
### How to Contribute
GitHub automatically updates the website on each commit using a custom workflow. Thus there are no restrictions with respect to Jekyll plugins.
Note: please use lowercase tag names in the posts. Otherwise the links to the tag pages will be broken.
You can locally check if the page gets rendered correctly using:
```
bundle install
bundle exec jekyll serve
```
### Custom Domain
In order to use the custom domain 'broerkens.de' the following configurations are required:
* in the GitHub settings of this repository set `GitHub Pages / Custom Domain` to `broerkens.de`
* in the file [_config.yml](_config.yml) set `url:` to `http://broerkens.de`
* at your DNS provider configure an A record for `broerkens.de` that points to `185.199.108.153`
* at your DNS provider configure a `permanent redirect` from subdomain `www.broerkens.de` to `broerkens.de`
See also [Managing a custom domain for your GitHub Pages site](https://docs.github.com/en/free-pro-team@latest/github/working-with-github-pages/managing-a-custom-domain-for-your-github-pages-site).
### Credits
* [Artem Sheludko](http://artemsheludko.com) for the great Jekyll thema [zolan](https://github.com/artemsheludko/zolan).
### License
MIT License
# GCNS - GPU with Coalescing but No Slicing
# alert-renderer

Renders alert blocks produced using the [editorjs-alert](https://github.com/vishaltelangre/editorjs-alert) plugin. This plugin takes the `data` object returned from the **editor.js**'s JSON output and renders the raw HTML code for the block.
## 🧰 Alert block

## :link: Links
- [editor.js](https://editorjs.io/)
- [editor.js-alert](https://github.com/vishaltelangre/editorjs-alert)
## :ledger: License
[Apache License 2.0](https://github.com/HexM7/alert-renderer/blob/main/LICENSE)
# Miuros
Miuros.com - Marketing Website
## Packages
Install Packages
`yarn install`
## Gatsby
Install Gatsby
`npm i -g gatsby-cli`
## Contentful
Add `.contentful.json`
```
{
"spaceId":"...",
"accessToken":"..."
}
```
To run dev environment
`yarn run dev`
To run build condition & test
```
gatsby build
gatsby serve
```
# MapBox Flutter
This repository contains code corresponding to the Youtube videos
- https://youtu.be/oFDx6tLipmw
- https://youtu.be/J1yLPFbjOkE
If you loved this work then you can support me by liking and sharing the video, and starring this repository! 😇
## 🤔 So what is this app now?
MapBox Flutter demonstrates an implementation of MapBox in a Flutter application, along with its Maps and Navigation SDKs. We will do this by building a simple project involving a hungry you on a bicycle and a set of restaurants and cafes! 😂
In the second video, we recreate a complete Uber-like experience to implement turn-by-turn navigation, and in the process also make use of the Mapbox Search APIs, Directions API, reverse geocoding, and the Maps SDK.
## 👩🏻💻 Where are the instructions to get the app up and running?
Well open your browser and IDE, we have got a few things to do! Follow along as we get the app up and running.
1. Head to https://account.mapbox.com and create a new account. Then go to your account page. You will see a public access token. This is something we will need later.
<img width="844" alt="Screenshot 2022-02-11 at 9 03 04 PM" src="https://user-images.githubusercontent.com/45942031/153620576-c7cac859-b403-4e1b-aea7-83df4dada119.png">
2. On the same page, click on **Create a token** and, on the next page, add a new secret token. Make sure that the `DOWNLOADS:READ` permission is checked here. After you create the token, you must copy it somewhere else because you can only see it once. <br>
<img width="1440" alt="Screenshot 2022-02-11 at 9 04 50 PM" src="https://user-images.githubusercontent.com/45942031/153621198-8b2f9c44-5d56-4e93-841e-3608cd0b24bf.png">
3. Now that you have both the tokens, you just need to replace them in appropriate places of the project code. Open the `starter_code/mapbox_navigation` folder and look for `YOUR_PUBLIC_ACCESS_TOKEN` and `YOUR_SECRET_TOKEN` in the entire project. Replace them with the tokens gotten in Step 1 and 2 respectively. To double check you can make sure that -
 * For `ios`: You must have put the public token in the `Runner/Info.plist` file.
 * For `android`: You must have put the public token as a string in `app/src/main/res/values/strings.xml` and the secret token as a string in the `gradle.properties` file.
4. To run the app in an IOS device/emulator, you will need 1 more step. Open or create `.netrc` file in your home directory. In there add the following lines:
```
machine api.mapbox.com
login mapbox
password <YOUR_SECRET_TOKEN>
```
5. Finally open your `assets` folder and add a `config/.env` file. You can add one key-value pair there:
```
MAPBOX_ACCESS_TOKEN="<YOUR_PUBLIC_ACCESS_TOKEN>"
```
That's it. You follow these 5 steps and you should be good to go. Now I would also recommend going through the documentation for [Android](https://docs.mapbox.com/android/maps/guides/install/) and [iOS](https://docs.mapbox.com/ios/maps/guides/install/), because there is a lot more to it, like adding permissions and stuff, which I have already done for you.
## 📚 Any supporting materials?
Sure. The complete demonstration for this app is done in the Youtube video mentioned above. Other than that, here are some additional resources -
- Medium article 1 - https://absatyaprakash01.medium.com/navigation-with-mapbox-for-flutter-apps-313687778686
- Medium article 2 - https://absatyaprakash01.medium.com/turn-by-turn-navigation-with-mapbox-16f874567b3c
- Mapbox documentation - https://docs.mapbox.com/
- Mapbox directions API playground - https://docs.mapbox.com/playground/directions/
- A list of all plugins used (thanks to the creators and Flutter community ❤️)
- Mapbox GL - https://pub.dev/packages/mapbox_gl
- Flutter Mapbox Navigation - https://pub.dev/packages/flutter_mapbox_navigation
- Shared Preferences - https://pub.dev/packages/shared_preferences
- Location - https://pub.dev/packages/location
- Other than these - carousel_slider, cached_network_image, dio and flutter_dotenv, etc.
## 😞 I ran this app, and I have an issue ...
Make sure to watch the video, and that might just clarify your issue, because I have demonstrated all the steps there. Still, if you have anything, just open an issue or comment on the YouTube video, or reach out to me and I'd love to help!
Thank you for visiting!
---
layout: splash
title: "Clean Code"
date: 2020-02-28 23:27:33 +0800
categories: ["programming"]
tags: ["development", "clean code"]
---
Why do we need clean code? What are the standards for clean code?
- [Introduction](#introduction)
- [What is clean code](#what-is-clean-code)
- [Why clean code is important](#why-clean-code-is-important)
- [How to achieve clean code](#how-to-achieve-clean-code)
- [Names](#names)
- [Express the usage](#express-the-usage)
- [Avoid disinformation](#avoid-disinformation)
- [Use of abbreviation](#use-of-abbreviation)
- [Name Length](#name-length)
- [Functions](#functions)
- [Function size](#function-size)
- [Function argument](#function-argument)
- [Function call](#function-call)
- [Form](#form)
- [Comments](#comments)
- [Code style](#code-style)
- [Data structure](#data-structure)
- [Books to Read](#books-to-read)
## Introduction
This section discusses the motivation for clean code. The specification of clean code and the design patterns used to achieve it are discussed later.
### What is clean code
Clean code usually refers to readable code. The code does not always need to be simple and easy to understand, since we have complex algorithms and mathematically or cache-optimized code that only appears easy to professionals. However, it must be straightforward in its meaning and able to represent its function without relying heavily on comments. It is easy to see that programs should be neatly formatted, as formatting is the first thing the reader notices. Correct indentation and spelling are the basic requirements. Beyond that, the advanced aspect of clean code is a proper structure and flexibility of the program, which allows co-developers to take over tasks and modify the code easily. This is also a matter of convenient deployment and unit testing in modern object-oriented programming. The specification of clean code can be applied generally to all languages and to various ways of programming. Above all, a straightforward meaning and fully functioning tests are the aims we want to achieve.
### Why clean code is important
As professional developers, in most cases, we do not write code only for ourselves. The code we write either contributes to the open source community or create business value when it is present to the customers. The code of professional developers affects both professional community and normal people. Messy code usually does harm to all.
For the professional community, messy code does harm to productivity. With messy code, newly hired people learn the old code slowly; it is the old code that teaches new people, rather than senior staff teaching the freshmen. As the development schedule lags, companies spend extra money to resolve the same issues. Messy code usually hides and scatters bugs across many lines, which makes maintenance more difficult. As the lifetime of a product mostly consists of maintenance, this is harmful to productivity and to the customers' experience. Newcomers will most likely hesitate to change the messy code and find it difficult to test, thus losing confidence in the availability and safety of the product.
Clean code usually gives us an independently deployable program, which is also independently developable. This provides convenience for team development and the management of massive projects.
### How to achieve clean code
If messy code already exists in a project, it is not easy to turn it into clean code. The project manager can choose either to reconstruct the code using a new framework, or to improve the code and tests starting from the easiest part and then expanding to the whole project. It is a smoother approach to settle the constraints and rules for clean code at the very beginning. The achievement is a rigid system where change and deployment are isolated from any behavior outside their scope. The settlement of clean code can be as simple as variable and function naming, or as involved as practices like test-driven development and behavior-driven development.
## Names
Variable and function names are the basic building block of a program. One saying states that writing a program should be the same as writing article where names and expressions directly express the intension of usage and can be read out as it is. This also means that names better express themselves without looking at the code used it.
### Express the usage
Here we present a simple example of variable naming, as follows. The constants are used as a function parameter to judge whether an input belongs to an interval. Mathematically, an interval has a start and an end, and a proper interval can include either endpoint, both, or neither. In the bad example, we cannot interpret the usage of the variable from its name, forcing the reader to investigate the details of the code. We can improve this by integrating the mathematical concept into the variable name, so that it clearly indicates its use for intervals. Also, the usage is restricted by calling `DataInterval::OPEN`, whose meaning is clear anywhere in the code.
```c++
/* Bad example */
const int INCLUDE_NONE = 0;
const int INCLUDE_SECOND = 1;
const int INCLUDE_FIRST = 2;
const int INCLUDE_BOTH = 3;
/* Good Example */
enum class DataInterval {
OPEN,
CLOSED,
OPEN_LEFT,
OPEN_RIGHT
};
```
### Avoid disinformation
A common mistake in programs is variable disinformation. Disinformation means that the variable is not used as its name suggests, or that the name is too vague to decide its usage. This may cause confusion throughout the program, as the meaning of the variable keeps changing. Since we are not considering programming in assembly language or embedded systems, where registers are always reused, extra variables surely do not cause performance issues. In the following example, we can see that ambiguity arises in the variable `flag`, where it is used for two purposes and its name does not reveal what it is holding. We can revise it into an `isXXX` format to indicate the success of a function call or judgement.
```c++
/* Bad example */
bool flag = // some input or calculation
if (flag) {
// do something
}
else {
// do something
}
flag = // a second calculation or return value
if (flag) {
// do something
}
else {
// do something
}
/* Good Example */
bool isSetTimeSuccess = // some input or calculation
if (isSetTimeSuccess) {
// do something
}
else {
// do something
}
bool isPutHandleSuccess = // a second calculation or return value
if (isPutHandleSuccess) {
// do something
}
else {
// do something
}
```
### Use of abbreviation
Programmers sometimes use abbreviations to simplify code writing and leave a comment somewhere in the code to state their meaning. Abbreviations should only be allowed by convention, like `iter`/`it` for iterator. It is difficult to communicate with random abbreviations, since they are hard to remember throughout the program. Also, with modern IDEs and compilers, type checking is automatic, so programmers have no reason to put the type as a prefix on each variable, such as `m` for member and `p` for pointer. The examples below show the correction of such wrong naming. In addition, `Info` can also be regarded as vague naming; we should specify what information is contained. The convention is that variables and classes should be nouns, and methods verbs.
```c++
/* Bad example */
uint32_t iSOF; // SOF = State of File
AOL* pLockInfo // AOL = Array of Locks
/* Good Example */
uint32_t fileState;
Locks* grantedLocks;
```
### Name Length
Usually, we keep names short while expressing the full usage, but there can be exceptions that allow long names. The general rule is to follow the large-scope and small-scope convention, where a name can be long in a small scope, as it is called from few places. Long names in a small scope also help the reader understand each step of a function call. Derived class names are naturally long, since more detail is added. We can observe in the example below that `WriteBlock` has a longer name, as it is a derived class. The `handle` method has a short name since it is a public method, while each private call inside has a long, fully expressive name.
```c++
/* Good Example */
class Block {};
class WriteBlock : public Block {};
class ReadBlock : public Block {};
void WriteBlock::handle() {
RequestWriteBlock();
PrepareWriteBlockForStorage();
SendWriteBlockToStorage();
UpdateWriteBlockSize();
}
```
## Functions
Back in the old days, there was a saying that a function should only do one thing and do it well. It is still true for the majority of cases.
### Function size
A smaller function makes it easy for the reader to capture its functionality. When scrolling down the page, readers can easily forget what was written above, since they have not yet understood what the function does. If a function does multiple jobs, it betrays the expression of its name, as one concise name can only express one functionality. We can always refactor one function into multiple function calls, and refactor a very long function into classes. Once we have this kind of clear view, we can quickly go through each function and class without looking at the details.
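To make this concrete, here is a small illustrative sketch (the helper names are our own, not from any particular codebase): a normalization routine split so that each function does one job and the top-level call reads like a summary of its steps.

```cpp
#include <cctype>
#include <string>

// Each helper does exactly one job and is named for it.
std::string Trim(const std::string& s) {
    size_t begin = s.find_first_not_of(' ');
    if (begin == std::string::npos) return "";
    size_t end = s.find_last_not_of(' ');
    return s.substr(begin, end - begin + 1);
}

std::string ToLower(std::string s) {
    for (char& c : s) c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return s;
}

// The top-level function reads as a one-line summary of the steps.
std::string NormalizeKey(const std::string& raw) {
    return ToLower(Trim(raw));
}
```

Each helper can also be read and tested in isolation, which is exactly the clear view described above.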
### Function argument
Anyone who has experience writing programs on the Windows platform will probably have observed that Windows system calls usually have very long argument lists, together with long documentation. This makes it inconvenient for the caller, who must do research before using the function, and the signatures are hard to remember. For general cases, we want to keep the function argument list short (at most three arguments), so that the arguments stick to the name of the function.
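A common remedy is to group related arguments into a single parameter object so the call site stays short; the sketch below uses hypothetical names:

```cpp
#include <string>

// Hypothetical options struct replacing render(int, int, bool, std::string).
struct RenderOptions {
    int width = 800;
    int height = 600;
    bool antialias = true;
    std::string title = "untitled";
};

// The function now takes one argument whose fields are named at the call site.
int RenderArea(const RenderOptions& options) {
    return options.width * options.height;  // stand-in for real rendering work
}
```

Adding a new option later extends the struct, not every caller's argument list.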
Using a Boolean as an argument (a double-intake) is a violation of the single-job constraint of functions. We had better have a good reason to do this. The double-intake problem is a choice between the rule constraint and the simplicity of the API. For open source libraries, we often see Boolean arguments for behavior alternation, which is reasonable if the call is split into further function calls in a simple way.
```c++
/* Bad Example */
void Push(int value, bool flag) {
if (flag) // insert value and extend buffer when full
else // wait for a pop when buffer is full
}
/* Good Example */
// insert value and extend buffer when full
void PushEnforce(int value);
// wait for a pop when buffer is full
void PushWait(int value);
```
There is a choice between a long argument list for constructors and a list of setters. Here we prefer the list of setters, one for each member. This decouples the derived class from the base class: adding a variable affects one line rather than the whole structure. We do not need to be concerned about the correctness of initialization, since it is taken care of by unit tests.
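A minimal sketch of the setter style (all names illustrative): adding a new member later means adding one setter, without touching every existing construction site:

```cpp
#include <string>
#include <utility>

class HttpRequest {
public:
    void SetUrl(std::string url) { url_ = std::move(url); }
    void SetTimeoutMs(int timeout_ms) { timeout_ms_ = timeout_ms; }

    const std::string& Url() const { return url_; }
    int TimeoutMs() const { return timeout_ms_; }

private:
    std::string url_;
    int timeout_ms_ = 1000;  // in-class default until a setter overrides it
};
```

In-class member initializers provide the defaults, so a unit test only needs to check the members it actually sets.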
If not really needed, do not output to an argument. The intuitive reading of a function is: arguments in, results returned. If we design a function in the style of some Linux system calls that output to an argument, we had better have outstanding documentation to alert the user about this behavior. In most cases we can return a single value as the result, or return a structure if multiple values are needed. Since C++11 has smart pointers and move construction, we do not need to be concerned about the performance penalty of returning by value, only about a proper order of function calls.
Do we need to care about invalid input to a function? If it is a public API, then yes, since we cannot predict user behavior. If it is only within the scope of a module, then it is not needed: correct input will be guaranteed by unit tests, while allowing null for an argument (defensive programming) makes the function complex.
### Function call
Some functions must be called in a fixed order, like (new, process, delete) or (lock, process, unlock); we can wrap the sequence into another function for system state consistency. The underlying idea is temporal coupling, and wrapping minimizes the coupling to the scope of a single function call (Go has no such problem thanks to `defer`).
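In C++, the (lock, process, unlock) sequence can be wrapped as described, so that no caller can get the order wrong; a minimal illustrative sketch using RAII:

```cpp
#include <mutex>

// The lock/process/unlock order lives inside one method, so every caller
// gets it right by construction.
class Counter {
public:
    void Increment() {
        std::lock_guard<std::mutex> guard(mutex_);  // lock
        ++value_;                                   // process
    }                                               // unlock on scope exit

    int Value() {
        std::lock_guard<std::mutex> guard(mutex_);
        return value_;
    }

private:
    std::mutex mutex_;
    int value_ = 0;
};
```

`std::lock_guard` releases the mutex even if the body throws, so the temporal coupling is confined to a single scope.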
A function must be careful about what it returns. A method that changes internal state should not have a return value: the caller cannot know whether the return value refers to the new state or the old state. If the program needs to capture the old state, it must do so inside the state-changing function, thereby hiding unnecessary detail from the caller. This also clearly separates queries from commands. A function should not return an instance that lets the caller see the system's query chain; the complexity exposed will certainly increase the coupling of the program. It is better to propagate dependency only one step at a time (the Law of Demeter). It is fine, however, to expose a call chain within the scope of a single class.
```c++
/* Bad Example */
System.OpenRootDir().OpenFile("test").Write("a", 3, 1);
/* Good Example */
String.lower().strip().split();
```
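The command/query separation mentioned above can be sketched as follows (hypothetical class): the command mutates state and returns nothing, while the query reports state and mutates nothing:

```cpp
#include <string>

class Document {
public:
    // Command: changes state, returns nothing.
    void SetTitle(const std::string& title) { title_ = title; }

    // Query: reports state, changes nothing (enforced by const).
    const std::string& Title() const { return title_; }

private:
    std::string title_;
};
```

Marking the query `const` lets the compiler enforce that it cannot mutate, so the separation is checked rather than merely promised.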
## Form
### Comments
When we need to write a comment, we should first ask ourselves whether the names have done their jobs. Usually names will explain everything in the program. There are things can do and cannot do:
- (can do) comments for copy right and explanation: explanation is restricted for limited cases, such as a brief technical detail.
- (cannot do) write document as comments: why not create a structured technical document?
- (cannot do) explain by repeating the code: useless words.
- (cannot do) TODOs without an external development plan and document: no one knows what to do or how to do it.
- (cannot do) talk about code far away: must put comment next to the code block which needs explanation
### Code style
The first rule of all: no matter what the code style is, it must be agreed upon by the team. Since a team shares the same project, everyone will work on someone else's part of the code in the future. Agreement on code style eases the burden of guessing and looking up definitions in the code. An easy practice is to use the same IDE auto-formatting configuration. Code that communicates and conveys information matters more than correctness alone; only once the correct message is conveyed does the code become easy to analyze. In addition, white space and blank lines can serve as separators between functional blocks, which is also important for communication.
A big project does not imply big file sizes. Imagine file content that fits into the display of the screen; isn't it easy to capture all the content at first glance? When a file has a thousand lines, it is hard to jump back and forth and get a full idea of the function calls the first time through. As a result, files should be split well, depending on outgoing calls and coupling.
Common rules for code management can be:
- scissors rule: all public members at the top and private ones at the bottom; but following language convention is the first priority.
- stepdown rule: parent functions at the top, called child functions at the bottom, with no calls pointing upward.
### Data structure
In data structure design, we mostly care about dependency management and layering. It is bad design to expose private variables through getters and setters; if callers have direct access to a private variable, then what is the purpose of it being private? Usually, callers want the content of a private variable for further calculation and checking, so the programmer can wrap this type of requirement in a public method. In addition, getters should return business objects rather than plain content. The rule is that getters and setters should not reveal how classes are implemented (private information). In general, classes protect us from new types but expose us to new methods; structures are the reverse.
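For example (names are illustrative), rather than exposing the raw balance through a getter and letting callers do the arithmetic, the check itself becomes the public method:

```cpp
// The class answers the business question directly instead of leaking the
// raw balance for callers to inspect.
class Account {
public:
    void Deposit(int cents) { balance_cents_ += cents; }
    bool CanWithdraw(int cents) const { return cents <= balance_cents_; }

private:
    int balance_cents_ = 0;  // implementation detail, never exposed
};
```

If the balance representation later changes (say, to a ledger of entries), only the class body changes; callers of `CanWithdraw` are untouched.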
In a good software design, code dependency should cross the boundary from the concrete implementation and point to the abstract side. For example, in designing a database application:
- there should be an access layer that knows the concrete schema and SQL; it is concerned with both the database and the application's access
- the application does not need to know anything about the schema; it just issues commands through the layer
- the layer should convert between business objects and data rows
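A rough sketch of such a boundary (all names hypothetical): the application depends only on the abstract gateway, while the concrete class on the other side knows how the rows are stored:

```cpp
#include <string>
#include <utility>
#include <vector>

// Business object: no schema or SQL details leak through it.
struct User {
    std::string name;
};

// Abstraction the application depends on.
class UserGateway {
public:
    virtual ~UserGateway() = default;
    virtual std::vector<User> FindByName(const std::string& name) = 0;
};

// Concrete side of the boundary; a real one would hold the SQL and schema.
class InMemoryUserGateway : public UserGateway {
public:
    explicit InMemoryUserGateway(std::vector<User> rows) : rows_(std::move(rows)) {}

    std::vector<User> FindByName(const std::string& name) override {
        std::vector<User> matches;
        for (const User& user : rows_) {
            if (user.name == name) matches.push_back(user);
        }
        return matches;
    }

private:
    std::vector<User> rows_;
};
```

The in-memory stand-in also makes the application testable without a database, which is one payoff of pointing the dependency at the abstraction.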
## Books to Read
- Refactoring: Improving the Design of Existing Code by Martin Fowler, with Kent Beck
- Test Driven Development by Kent Beck
- Clean Code by Robert Cecil Martin
- Fit for Developing Software: Framework for Integrated Tests by Rick Mugridge, Ward Cunningham
| 68.982906 | 1,002 | 0.779148 | eng_Latn | 0.999752 |
aa0d288ef8a9cca52862cf2c9b37ef859f4fb7d7 | 2,391 | md | Markdown | docs/writing-stories/parameters.md | blake-newman/storybook | 939a215ce2efb988487554093281e45200574c8c | [
"MIT"
] | null | null | null | docs/writing-stories/parameters.md | blake-newman/storybook | 939a215ce2efb988487554093281e45200574c8c | [
"MIT"
] | 67 | 2020-09-16T07:50:51.000Z | 2022-01-23T03:23:16.000Z | docs/writing-stories/parameters.md | blake-newman/storybook | 939a215ce2efb988487554093281e45200574c8c | [
"MIT"
] | 1 | 2020-09-17T04:53:59.000Z | 2020-09-17T04:53:59.000Z | ---
title: 'Parameters'
---
Parameters are a set of static, named metadata about a story, typically used to control the behavior of Storybook features and addons.
For example, let’s customize the backgrounds addon via a parameter. We’ll use `parameters.backgrounds` to define which backgrounds appear in the backgrounds toolbar when a story is selected.
## Story parameters
We can set a parameter for a single story with the `parameters` key on a CSF export:
```js
// Button.story.js
export const Primary = Template.bind({});
Primary.args = {
primary: true,
label: 'Button',
};
Primary.parameters = {
backgrounds: {
values: [
{ name: 'red', value: '#f00' },
{ name: 'green', value: '#0f0' },
],
},
};
```
## Component parameters
We can set the parameters for all stories of a component using the `parameters` key on the default CSF export:
```js
// Button.story.js
import Button from './';
export default {
title: 'Button',
component: Button,
parameters: {
backgrounds: {
values: [
{ name: 'red', value: '#f00' },
{ name: 'green', value: '#0f0' },
],
},
},
};
```
## Global parameters
We can also set the parameters for **all stories** via the `parameters` export of your [`.storybook/preview.js`](../configure/overview.md#configure-story-rendering) file (this is the file where you configure all stories):
```js
// Button.story.js
export const parameters = {
backgrounds: {
values: [
{ name: 'red', value: '#f00' },
{ name: 'green', value: '#0f0' },
],
},
};
```
Setting a global parameter is a common way to configure addons. With backgrounds, you configure the list of backgrounds that every story can render in.
## Rules of parameter inheritance
The global, component, and story parameters are combined as follows:
- More specific parameters take precedence (so a story parameter overwrites a component parameter which overwrites a global parameter).
- Parameters are **merged** so keys are only ever overwritten, never dropped.
The merging of parameters is important. It means it is possible to override a single specific sub-parameter on a per-story basis but still retain the majority of the parameters defined globally.
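As an illustrative sketch of those two rules (not Storybook's actual implementation), the combination behaves like a per-key merge in which more specific levels win:

```js
// Hypothetical parameter sets at the three levels.
const globalParams = {
  layout: "padded",
  backgrounds: { values: [{ name: "red", value: "#f00" }] },
};
const componentParams = { layout: "centered" };
const storyParams = { backgrounds: { disable: true } };

// More specific levels overwrite keys; object-valued keys merge instead of
// being dropped wholesale.
function mergeParameters(...levels) {
  const result = {};
  for (const level of levels) {
    for (const [key, value] of Object.entries(level)) {
      const existing = result[key];
      if (
        value && typeof value === "object" && !Array.isArray(value) &&
        existing && typeof existing === "object" && !Array.isArray(existing)
      ) {
        result[key] = { ...existing, ...value };
      } else {
        result[key] = value;
      }
    }
  }
  return result;
}

const merged = mergeParameters(globalParams, componentParams, storyParams);
// merged.layout is "centered", and merged.backgrounds keeps the global
// `values` while gaining the story-level `disable` flag.
```

So a story can flip `backgrounds.disable` without losing the globally configured background list.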
If you are defining an API that relies on parameters (e.g., an [**addon**](../api/addons.md)), it is a good idea to take this behavior into account.
| 28.807229 | 221 | 0.692179 | eng_Latn | 0.992678 |
aa0d4ad0395c56d1a933de131d855a0f9c071845 | 59 | md | Markdown | README.md | baranonen/trainsimulator | 6122407f9dee3a3e09ddb2310e16901a4f67011b | [
"MIT"
] | null | null | null | README.md | baranonen/trainsimulator | 6122407f9dee3a3e09ddb2310e16901a4f67011b | [
"MIT"
] | null | null | null | README.md | baranonen/trainsimulator | 6122407f9dee3a3e09ddb2310e16901a4f67011b | [
"MIT"
] | null | null | null | # trainsimulator
Train Simulator - İstanbul M2 Metro Line
| 19.666667 | 41 | 0.79661 | ind_Latn | 0.223858 |
aa0e1e53e0a0f95eb653cadabcd643eac19ce608 | 1,684 | md | Markdown | README.md | Mayankjh/Udacity_Project_3 | 8115a9a2af61243e18521ef1c2969d4e80768c68 | [
"MIT"
] | null | null | null | README.md | Mayankjh/Udacity_Project_3 | 8115a9a2af61243e18521ef1c2969d4e80768c68 | [
"MIT"
] | null | null | null | README.md | Mayankjh/Udacity_Project_3 | 8115a9a2af61243e18521ef1c2969d4e80768c68 | [
"MIT"
] | null | null | null | ## UDACITY PROJECT 3 CLASSIC ARCADE GAME
This is my version of the classic arcade game project for Udacity's Front-End Web Developer Nanodegree.
Installation:
* Download the repository: click **Download ZIP** on the right of the screen and then extract the zip file to your computer, or clone the repository using git.
* Navigate to where you unzipped the file or cloned the repository.
* Double-click index.html to open the game in your browser.
Or, point your browser to https://mayankjh.github.io/Udacity_Project_3/ to get the live experience.
## Gameplay:
1. Press enter to start the game.
2. Use the arrow keys to move.
3. The objective is to reach the top of the water and collect gems to score.
4. Each time you reach the top of the water, difficulty increases.
5. Collect hearts to gain extra lives.
6. Avoid the bugs, they kill you. You start with three lives.
## Resources I referred to while working on this project:
* http://irene.marin.cat/udacity/project3/index.html
* https://github.com/ncaron/frontend-nanodegree-arcade-game/blob/master/js/app.js
* https://github.com/asaki444/frontendfrogger/blob/master/js/app.js
* https://developer.mozilla.org/en-US/docs/Games/Techniques/2D_collision_detection
* http://stackoverflow.com/questions/4950115/removeeventlistener-on-anonymous-functions-in-javascript
* http://stackoverflow.com/questions/14542062/eventlistener-enter-key
* http://stackoverflow.com/questions/21553547/how-to-clear-timeout-in-javascript
* https://developer.mozilla.org/en-US/docs/Web/API/WindowTimers/clearTimeout
* https://discussions.udacity.com/t/jshint-warnings/34854/5
## LICENSE
The work and material is licensed Copyright © 2018 Mayank Jha https://mayankjha.mit-license.org/
| 42.1 | 127 | 0.784442 | eng_Latn | 0.779834 |
aa0e8da0ac3386aa4efdbb8f0c47f8a83af3da0c | 509 | md | Markdown | behavioral/state/README.md | emosdevelop/java-design-patterns | 91fa7762caec8a121c1d0bcdb2b06d0361222674 | [
"MIT"
] | null | null | null | behavioral/state/README.md | emosdevelop/java-design-patterns | 91fa7762caec8a121c1d0bcdb2b06d0361222674 | [
"MIT"
] | null | null | null | behavioral/state/README.md | emosdevelop/java-design-patterns | 91fa7762caec8a121c1d0bcdb2b06d0361222674 | [
"MIT"
] | null | null | null | # State
A substitute for a constantly changing state variable in an object. It gets rid of if-else/switch statement checks.<br>
Instead, it creates new concrete-state classes.
The state pattern has four members:
* Client
* Context - The state handler class.
* State - The abstraction that declares the handle method the context will call.
* Concrete State - The state implementation with the state's behaviour.
If there are many different states, there will be a lot of small classes in the codebase.
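The four members can be sketched compactly. The examples in this repo are Java; the following is an illustrative Python sketch with invented names:

```python
class State:                      # State: abstraction with the handle method
    def handle(self):
        raise NotImplementedError

class OnState(State):             # Concrete State
    def handle(self):
        return "switching off"

class OffState(State):            # Concrete State
    def handle(self):
        return "switching on"

class Switch:                     # Context: delegates to its current state
    def __init__(self):
        self.state = OffState()

    def press(self):
        result = self.state.handle()
        # Swap in the next concrete state instead of an if-else chain over flags.
        self.state = OnState() if isinstance(self.state, OffState) else OffState()
        return result
```

The calling code plays the role of the client; pressing the switch delegates to whichever concrete state is currently installed.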
 | 33.933333 | 113 | 0.779961 | eng_Latn | 0.997562 |
aa0ea0976beffe965769f8d2bcd15ec3351b6d65 | 5,105 | md | Markdown | ce/outlook-addin/user-guide/example-going-offline.md | Gen1a/dynamics-365-customer-engagement | ce3c02bfa54594f016166522e552982fb66a9389 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/outlook-addin/user-guide/example-going-offline.md | Gen1a/dynamics-365-customer-engagement | ce3c02bfa54594f016166522e552982fb66a9389 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/outlook-addin/user-guide/example-going-offline.md | Gen1a/dynamics-365-customer-engagement | ce3c02bfa54594f016166522e552982fb66a9389 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Example of going offline with Dynamics 365 for Outlook | MicrosoftDocs"
ms.custom:
ms.date: 01/11/2016
ms.reviewer:
ms.suite:
ms.tgt_pltfrm:
ms.topic: article
applies_to:
- Dynamics 365 apps
- Dynamics 365 apps (on-premises)
- Dynamics CRM 2013
- Dynamics CRM 2015
- Dynamics CRM 2016
- Dynamics CRM Online
ms.assetid: f622fe69-14b3-44e0-a4eb-093959910c70
caps.latest.revision: 35
author: mduelae
ms.author: mkaur
manager: kvivek
search.audienceType:
- admin
- customizer
- enduser
search.app:
- D365CE
- D365Outlook
---
# Example of going offline with Dynamics 365 for Outlook
Salespeople can make critical customer information available and up-to-date on business trips with filters. By specifying only the data you need to synchronize with your laptop, you can avoid wasting valuable laptop memory, stay current with the head office, and keep information on your laptop fresh. Meanwhile, managers and co-workers are up-to-date.
Using [!INCLUDE[pn_crm_for_outlook_short](../../includes/pn-crm-for-outlook-short.md)], you can set up and activate filters with criteria similar to Advanced Find by specifying the criteria for the [!INCLUDE[pn_microsoftcrm](../../includes/pn-microsoftcrm.md)] records that you want to be available when you go offline. In addition, you can change what data will be available when you synchronize by activating and deactivating the filters.
To see what data filters are being applied to your offline synchronization, in [!INCLUDE[pn_Outlook_short](../../includes/pn-outlook-short.md)], on the **File** menu, click or tap **Dynamics 365 apps** > **Go Offline** > **Manage Offline Filters**.
> [!NOTE]
> You can have more than one active filter so you can take larger, combined sets of data offline.
## Select the data you need with filters
To leverage local data, consider a trip to regional offices in the USA in Washington and Oregon. You would want to define needed information in the [!INCLUDE[pn_microsoftcrm](../../includes/pn-microsoftcrm.md)] database that applies to customers in these states.
First, create a filter of the data for a record type. Save this filter as your “master,” and call it “My Active Accounts”, for example. Second, modify this filter to create different versions for your business needs.
- To edit an existing filter, on either tab, double-click or tap the item in the list. To keep the original data group, make a copy using **Save As**, and add additional criteria, such as “Address 1: State/Province equals WA”. Save it with a new name such as “My Accounts in Washington.”
- Using **Save As** again, change the criteria to “Address 1: State/Province equals OR”, and name your new data group “My Accounts in Oregon.”
> [!IMPORTANT]
> Before your trip, deactivate all filters, except those that apply to the customers in the first area you are visiting.
## Deactivate or activate filters
- To deactivate a filter, on the **User Filters** tab, select one or more filters. On the tool bar, click or tap the **Deactivate** button (a red circle with a red square).
- To activate a filter, on the **User Filters** tab, select one or more filters and then click or tap the **Activate** button (a green circle with a green triangle).
## Take your data offline and synchronize your data
- In [!INCLUDE[pn_Outlook_short](../../includes/pn-outlook-short.md)], on the **Dynamics 365 apps** menu, click or tap **Go Offline**.
While you are offline, you can add new contacts and accounts or update the accounts and contacts on your laptop. When connecting to your company's network again, you can synchronize your data.
> [!IMPORTANT]
> [!INCLUDE[cc_Outlook_Offline_LocalAccess](../../includes/cc-outlook-offline-localaccess.md)]
## Go back online and synchronize your data
- In [!INCLUDE[pn_Outlook_short](../../includes/pn-outlook-short.md)], on the **Dynamics 365 apps** menu, click or tap **Go Online**.
Any updated data from your laptop will be synchronized with your company's [!INCLUDE[pn_microsoftcrm](../../includes/pn-microsoftcrm.md)] database. You can now deactivate and activate a new set of filters for your next visits, using the procedures explained earlier in this article.
## Combine data filters to take more information with you
Because filters are additive, you can have more than one active filter. For example, if you are going to the Northwest United States, you can activate the Washington and Oregon data filters you created and take both sets of data with you.
<a name="BMKM_MUprivacy"></a>
## Privacy notices
[!INCLUDE[cc_privacy_crm_outlook1](../../includes/cc-privacy-crm-outlook1.md)]
[!INCLUDE[cc_privacy_crm_sync_to_outlook](../../includes/cc-privacy-crm-sync-to-outlook.md)]
### See also
[Work offline with Dynamics 365 for Outlook](work-offline-dynamics-365-outlook.md)
[Choose records to work with offline in Dynamics 365 for Outlook](choose-records-work-offline.md)
[!INCLUDE[footer-include](../../includes/footer-banner.md)] | 58.678161 | 443 | 0.740842 | eng_Latn | 0.99389 |
aa0efd61e896a23c970d4df207a4b1c115873537 | 220 | md | Markdown | content/directory/Wide Angle Studios.md | jonmccon/gatsby-advanced-starter | ddb53a39d9c19c66622649d75657a8c3fe277f66 | [
"MIT"
] | null | null | null | content/directory/Wide Angle Studios.md | jonmccon/gatsby-advanced-starter | ddb53a39d9c19c66622649d75657a8c3fe277f66 | [
"MIT"
] | 23 | 2020-03-30T08:18:44.000Z | 2021-11-05T06:02:40.000Z | content/directory/Wide Angle Studios.md | jonmccon/gatsby-advanced-starter | ddb53a39d9c19c66622649d75657a8c3fe277f66 | [
"MIT"
] | 1 | 2021-07-22T07:08:49.000Z | 2021-07-22T07:08:49.000Z | ---
title: "Wide Angle Studios"
featuredImage: ./-hamburgers.png
website: "https://www.wideanglestudios.com/"
twit: ""
inst: ""
category: "W"
tags:
- Downtown
- small
- video
- marketing
---
description
| 13.75 | 44 | 0.636364 | eng_Latn | 0.333729 |
aa105cf917bcc1fab70970dd4ecc36656d90da27 | 966 | md | Markdown | book/commands/str_contains.md | lucatrv/nushell.github.io | f42f30bdc4ad1e04bb562cc9403bde0d4e21eeaa | [
"MIT"
] | null | null | null | book/commands/str_contains.md | lucatrv/nushell.github.io | f42f30bdc4ad1e04bb562cc9403bde0d4e21eeaa | [
"MIT"
] | null | null | null | book/commands/str_contains.md | lucatrv/nushell.github.io | f42f30bdc4ad1e04bb562cc9403bde0d4e21eeaa | [
"MIT"
] | null | null | null | ---
title: str contains
layout: command
version: 0.59.1
---
Checks if string contains pattern
## Signature
```> str contains (pattern) ...rest --insensitive```
## Parameters
- `pattern`: the pattern to find
- `...rest`: optionally check if string contains pattern by column paths
- `--insensitive`: search is case insensitive
## Examples
Check if string contains pattern
```shell
> 'my_library.rb' | str contains '.rb'
```
Check if string contains pattern case insensitive
```shell
> 'my_library.rb' | str contains -i '.RB'
```
Check if string contains pattern in a table
```shell
> [[ColA ColB]; [test 100]] | str contains 'e' ColA
```
Check if string contains pattern in a table
```shell
> [[ColA ColB]; [test 100]] | str contains -i 'E' ColA
```
Check if string contains pattern in a table
```shell
> [[ColA ColB]; [test hello]] | str contains 'e' ColA ColB
```
Check if string contains pattern
```shell
> 'hello' | str contains 'banana'
```
| 19.32 | 74 | 0.673913 | eng_Latn | 0.953397 |
aa10a81a9a0269e889006bb748b94b2af9ec383e | 4,484 | markdown | Markdown | _posts/2016-08-12-what-is-NewHavenIO.markdown | mzemel/newhavenio.github.io | f9387d2082a96d8cbbcb305259295d317b8d50a8 | [
"CC-BY-4.0",
"MIT"
] | 11 | 2015-12-26T21:02:27.000Z | 2018-05-01T00:26:17.000Z | _posts/2016-08-12-what-is-NewHavenIO.markdown | mzemel/newhavenio.github.io | f9387d2082a96d8cbbcb305259295d317b8d50a8 | [
"CC-BY-4.0",
"MIT"
] | 81 | 2015-11-06T18:56:25.000Z | 2022-02-26T03:40:45.000Z | _posts/2016-08-12-what-is-NewHavenIO.markdown | mzemel/newhavenio.github.io | f9387d2082a96d8cbbcb305259295d317b8d50a8 | [
"CC-BY-4.0",
"MIT"
] | 16 | 2016-01-23T15:11:39.000Z | 2019-07-11T18:29:30.000Z | ---
layout: post
title: "What is NewHaven.io and why does it matter?"
date: 2016-8-12
categories: update
author: Lou Rinaldi
source: internal
ext-url: none
---
You may be peripherally aware of NewHaven.io but not really understand what the group is or why it exists. In a nutshell, we're a nonprofit organization that serves the local tech community. But what does that actually _mean?_
### Aggregation of local interest groups
Technology is a funny thing. More importantly, humans are funny creatures. We obsess over bright shiny things and then when we lose interest in them we move on to the next one. One could easily argue that along with continuous improvement, it is the inherently ephemeral nature of our collective interest in different technologies that drives the cycle of obsolescence and change.
As geeks self-organize around their preferred technologies, related interest groups coalesce, grow, stagnate and die. These social constructs change as the technologies change. What stays the same is the underlying bedrock of people in the tech community. This foundation layer remains constant even if the elements which comprise it are constantly rearranging themselves into new configurations. It may slowly expand or contract as people move into and out of the local area, but it is reasonably stable. The people that make up this foundation are who NewHaven.io serves, and we're in it for the long haul.
### Communication and organization in the community
NewHaven.io leverages some well-known tools to achieve its goal of connecting the local tech community. In addition to [our own website](http://newhaven.io/), we make extensive use of [Meetup](http://www.meetup.com/newhavenio/) and [Slack](https://newhavenio-slackin.herokuapp.com/). [Meetup](http://www.meetup.com/newhavenio/) is used to organize and schedule in-person gatherings, while Slack facilitates online communication and collaboration. Many people first discover NewHaven.io through [Meetup](http://www.meetup.com/newhavenio/) by searching locally for events highlighting technologies of interest to them. Slack is a natural extension of the physical meetings, allowing the conversation to continue fluidly and asynchronously long after the lights have been turned off and the doors have closed.
Both our [Meetup](http://www.meetup.com/newhavenio/) and Slack are structured to allow for tech- or topic-specific engagement. Anyone from novice to expert can self-select areas that interest them and dive in as shallow or as deep as they choose. This includes proposing meetings where one can either present to a group as an expert or convene a learning session as a novice. We encourage both equally!
### Partnerships in the community
Local companies and educational institutions recognize the importance of NewHaven.io in maintaining and growing a thriving tech community. A healthy tech community makes New Haven more attractive to prospective students and IT professionals, which has a positive impact beyond the realm of technology. Sponsorship of NewHaven.io is one way organizations can contribute to our efforts toward achieving these outcomes. We're also interested in cross-promotional opportunities in the community.
We regularly collaborate on events with local organizations such as [ProductCamp Connecticut](http://www.pcampct.org/), [A100](http://indie-soft.com/a100/), [Independent Software](http://indie-soft.com/), [Yale University](http://yale.edu), [The Grove](http://grovenewhaven.com/), [Continuity](http://continuity.net), [Digital Surgeons](https://www.digitalsurgeons.com/), [SeeClickFix](http://seeclickfix.com), and [Core Informatics](http://www.coreinformatics.com/). Often times the value that NewHaven.io brings to these collaborations is our "human infrastructure," other times it's the subject matter expertise of our members. Our nonprofit financial apparatus can also be leveraged for management of larger-scale community events such as conferences. We're even exploring some exciting new event frontiers such as the blending of tech and fitness.
### How will _you_ get involved?
We welcome everyone into NewHaven.io with open arms. Keeping this engine running requires sustained effort, so we're always on the lookout for help. There are numerous opportunities for contributing based on your own personal talents and areas of interest, so [pop into our `#organizing` Slack channel](https://newhavenio-slackin.herokuapp.com/) and introduce yourself today!
| 128.114286 | 852 | 0.80107 | eng_Latn | 0.998859 |
aa1172eb4baef21e03e287b7788f86480122f809 | 1,655 | md | Markdown | articles/data-factory/control-flow-append-variable-activity.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2017-06-06T22:50:05.000Z | 2017-06-06T22:50:05.000Z | articles/data-factory/control-flow-append-variable-activity.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 41 | 2016-11-21T14:37:50.000Z | 2017-06-14T20:46:01.000Z | articles/data-factory/control-flow-append-variable-activity.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 7 | 2016-11-16T18:13:16.000Z | 2017-06-26T10:37:55.000Z | ---
title: Append Variable activity in Azure Data Factory
description: Learn how to set the Append Variable activity to add a value to an existing array variable defined in a Data Factory pipeline
ms.service: data-factory
ms.topic: conceptual
author: dcstwh
ms.author: weetok
ms.reviewer: jburchel
ms.date: 10/09/2018
ms.openlocfilehash: 1ca58fc208bb02d137b977e0b18857e8c87a5440
ms.sourcegitcommit: 32e0fedb80b5a5ed0d2336cea18c3ec3b5015ca1
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 03/30/2021
ms.locfileid: "104783847"
---
# <a name="append-variable-activity-in-azure-data-factory"></a>Append Variable activity in Azure Data Factory
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Use the Append Variable activity to add a value to an existing array variable defined in a Data Factory pipeline.
## <a name="type-properties"></a>Type properties
Property | Description | Required
-------- | ----------- | --------
name | Name of the activity in the pipeline | Yes
description | Text describing what the activity does | No
type | The activity type is AppendVariable | Yes
value | String literal or expression object value to append to the specified variable | Yes
variableName | Name of the variable that will be modified by the activity; the variable must be of type "Array" | Yes
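Put together, a minimal activity definition using these properties might look like the following sketch (the activity name, the variable name, and the expression are illustrative placeholders, not taken from the official documentation):

```json
{
    "name": "AppendFileName",
    "type": "AppendVariable",
    "typeProperties": {
        "variableName": "fileNames",
        "value": "@activity('GetFileMetadata').output.itemName"
    }
}
```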
## <a name="next-steps"></a>Next steps
Learn about a related control flow activity supported by Data Factory:
- [Set Variable activity](control-flow-set-variable-activity.md)
| 47.285714 | 184 | 0.786103 | ita_Latn | 0.971199 |
aa118e27357048f185844d31b9766a376d0d473c | 619 | md | Markdown | README.md | Nyarime/QCloud-Lighthouse-PromoCode-Helper | 32d1344b77274c7b6bee0359739db5763604add3 | [
"Apache-2.0"
] | 36 | 2022-02-26T19:52:24.000Z | 2022-03-27T15:20:26.000Z | README.md | Nyarime/QCloud-Lighthouse-PromoCode-Helper | 32d1344b77274c7b6bee0359739db5763604add3 | [
"Apache-2.0"
] | 2 | 2022-02-27T07:19:36.000Z | 2022-02-27T15:27:28.000Z | README.md | Nyarime/QCloud-Lighthouse-PromoCode-Helper | 32d1344b77274c7b6bee0359739db5763604add3 | [
"Apache-2.0"
] | 3 | 2022-02-27T07:14:06.000Z | 2022-02-28T03:06:03.000Z | # Tencent Cloud Lighthouse Coupon Minimum-Spend Helper
A calculator that works out the price of renewing Tencent Cloud Lighthouse application servers while meeting the minimum spending threshold of a Lighthouse coupon

The calculation applies only to the "Tencent Cloud Lighthouse" product; prices may fluctuate, and the results are for reference only
Welcome to join the Telegram group [MoeIDC](https://oh.sb/tg)
If you do not have a coupon, you can check out Tencent Cloud's promotions: [here](https://curl.qcloud.com/I8Z5glUD).
## How to use
Get discount.py and run it with Python 3, or download and run the packaged binary .exe file
Thanks to hmsjy2017!
### Parameters
For a better calculation, you need to provide the "renewal price" and the "minimum payment threshold", and they must satisfy "renewal price" ≤ "minimum payment threshold"
If you need to export the results, enter `y` and press Enter at the "Save results to LightHouse.txt? (y/n)" prompt
## Price changes
On January 19, 2022, Tencent Cloud discontinued the old Lighthouse bundles such as the 24 CNY plan, so results from older versions of this tool no longer apply in practice. Please check that you are using the latest version before running this script!
### Disclaimer
This script is only for calculating coupon-based purchase plans for Tencent Cloud Lighthouse servers. Do not use this project for any illegal activity!
Thanks!
| 20.633333 | 81 | 0.785137 | yue_Hant | 0.62661 |
aa11a715ba1ddc71b540aa80adef397163771d95 | 120 | md | Markdown | examples/ssr-with-data-loading-using-redux/CHANGELOG.md | sevenwestmedia-labs/project-watchtower | 3ec195f557a4bdc9e49de31ab9a4945629542119 | [
"MIT"
] | 15 | 2018-11-28T23:50:08.000Z | 2019-01-08T10:43:59.000Z | examples/ssr-with-data-loading-using-redux/CHANGELOG.md | sevenwestmedia/project-watchtower | 3ec195f557a4bdc9e49de31ab9a4945629542119 | [
"MIT"
] | 28 | 2019-01-12T08:58:37.000Z | 2022-02-12T11:56:45.000Z | examples/ssr-with-data-loading-using-redux/CHANGELOG.md | sevenwestmedia/project-watchtower | 3ec195f557a4bdc9e49de31ab9a4945629542119 | [
"MIT"
] | 5 | 2019-01-13T12:43:46.000Z | 2020-12-24T13:31:26.000Z | # ssr-with-data-loading-using-redux
## 1.0.1-beta.0
### Patch Changes
- 3f98adb: Bump versions for dependency updates
| 17.142857 | 47 | 0.725 | eng_Latn | 0.454078 |
aa12360641d47c65c08d95e587cf3894e0fdae7d | 14 | md | Markdown | ch04_05/README.md | MikeMKH/statistics-the-easier-way-with-r | ff4acda258e46d9413686ba642a9315e402a4198 | [
"Apache-2.0"
] | null | null | null | ch04_05/README.md | MikeMKH/statistics-the-easier-way-with-r | ff4acda258e46d9413686ba642a9315e402a4198 | [
"Apache-2.0"
] | null | null | null | ch04_05/README.md | MikeMKH/statistics-the-easier-way-with-r | ff4acda258e46d9413686ba642a9315e402a4198 | [
"Apache-2.0"
] | null | null | null | # chapter 4.5
| 7 | 13 | 0.642857 | eng_Latn | 0.797796 |
aa1294d6abd94d118bf4ef9ff6b8932490db6cc0 | 1,750 | md | Markdown | docs/deploy/ExternalShuffleBlockResolver.md | LingarajVB/apache-spark-internals | c1258aeb2304631210ea1d5dfd1dd2889c9aa7b6 | [
"Apache-2.0"
] | 1 | 2020-10-26T07:52:38.000Z | 2020-10-26T07:52:38.000Z | docs/deploy/ExternalShuffleBlockResolver.md | LingarajVB/apache-spark-internals | c1258aeb2304631210ea1d5dfd1dd2889c9aa7b6 | [
"Apache-2.0"
] | null | null | null | docs/deploy/ExternalShuffleBlockResolver.md | LingarajVB/apache-spark-internals | c1258aeb2304631210ea1d5dfd1dd2889c9aa7b6 | [
"Apache-2.0"
] | null | null | null | = ExternalShuffleBlockResolver
*ExternalShuffleBlockResolver* is...FIXME
== [[getBlockData]] getBlockData Method
[source, java]
----
ManagedBuffer getBlockData(
String appId,
String execId,
String blockId)
----
getBlockData parses `blockId` (in the format of `shuffle_[shuffleId]\_[mapId]_[reduceId]`) and returns the `FileSegmentManagedBuffer` that corresponds to `shuffle_[shuffleId]_[mapId]_0.data`.
getBlockData splits `blockId` to 4 parts using `_` (underscore). It works exclusively with `shuffle` block ids with the other three parts being `shuffleId`, `mapId`, and `reduceId`.
It looks up an executor (i.e. an `ExecutorShuffleInfo` in the `executors` private registry) for `appId` and `execId` to search for a storage:BlockDataManager.md#ManagedBuffer[ManagedBuffer].
The `ManagedBuffer` is indexed using a binary file `shuffle_[shuffleId]\_[mapId]_0.index` (that contains offset and length of the buffer) with a data file being `shuffle_[shuffleId]_[mapId]_0.data` (that is returned as `FileSegmentManagedBuffer`).
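The index-file lookup can be sketched as follows. This is a simplified illustration, assuming the index file is a sequence of 8-byte big-endian offsets, one more than the number of reduce partitions; it is not Spark's actual code:

```python
import struct

def segment_for_reduce(index_bytes, reduce_id):
    # Two consecutive offsets bound the segment for this reduce partition:
    # the segment starts at offsets[reduce_id] and its length is the
    # difference to offsets[reduce_id + 1].
    offset, next_offset = struct.unpack_from(">qq", index_bytes, reduce_id * 8)
    return offset, next_offset - offset
```

The returned (offset, length) pair is what a `FileSegmentManagedBuffer` wraps over the data file.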
It throws an `IllegalArgumentException` for block ids with fewer than four parts:
```
Unexpected block id format: [blockId]
```
or for non-`shuffle` block ids:
```
Expected shuffle block id, got: [blockId]
```
It throws a `RuntimeException` when no `ExecutorShuffleInfo` could be found.
```
Executor is not registered (appId=[appId], execId=[execId])
```
== [[logging]] Logging
Enable `ALL` logging level for `org.apache.spark.network.shuffle.ExternalShuffleBlockResolver` logger to see what happens inside.
Add the following line to `conf/log4j.properties`:
[source,plaintext]
----
log4j.logger.org.apache.spark.network.shuffle.ExternalShuffleBlockResolver=ALL
----
Refer to ROOT:spark-logging.md[Logging].
| 33.018868 | 247 | 0.762286 | eng_Latn | 0.835829 |
aa12afdfdf958b0ca890f6847b666a691f6858a1 | 3,383 | md | Markdown | src/da/2018-03/02/teacher-comments.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/da/2018-03/02/teacher-comments.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/da/2018-03/02/teacher-comments.md | OsArts/Bible-study | cfcefde42e21795e217d192a8b7a703ebb7a6c01 | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Activities and dialogue
date: 13/07/2018
---
**The Word of God**
* Jesus as Lord of all
* Read Acts 2:21 and 38. Peter's sermon on the day of Pentecost opens and closes with a reference to the "name" of the Lord.
* Discuss what the name of Jesus implies here with regard to his nature and authority!
* Compare Acts 2:17 and 33. Who "pours out" the Spirit? Is it the Father or the Son?
* Compare Acts 2:36 with 10:37.
* Where in the account in Acts 2 can you find evidence that the work of Jesus is not just for the Jews, but for everyone?
` `
* The Holy Spirit for everyone
* Notice the groups upon whom, according to Joel, the Holy Spirit was to be poured out:
* Verse 17: _________ people
* With regard to gender:
* With regard to age:
* With regard to social status:
` `
**Probing questions**
* What does it mean for the Christian church that Jesus is Lord of all?
* Should there be roles and authority in the church that favor, for example, the rich, white people, or men over the poor, people of color, or women?
` `
* The Seventh-day Adventist Church has expressed its belief in the equality of all Christians in the church in number 14 of its official fundamental beliefs, called "Unity in the Body of Christ":
* "The church is one body with many members, called from every nation, kindred, tongue, and people. In Christ we are a new creation; distinctions of race, culture, learning, and nationality, and differences between high and low, rich and poor, male and female, must not be divisive among us. We are all equal in Christ, who by one Spirit has bonded us into one fellowship with him and with one another; we are to serve and be served without partiality or reservation. Through the revelation of Jesus Christ in the Scriptures we share the same faith and hope, and we bear the same witness to all. This fellowship has its source in the triune God, who has adopted us as his children."
* Read and discuss the content in class.
` `
**Meeting everyday life**
Find examples of discrimination in this week's media, where groups are discriminated against simply because of gender, age, or social status.
` `
**Personal Christian life**
When you and I occasionally find ourselves in situations where we have to assess other people, for example at job interviews or for church positions, how can I make sure that I am not influenced by prejudice against particular groups?
` `
**Words and expressions**
* "Theophany" is a technical term from Greek, composed of the words Theos (God) and phaino (to appear). It is used of situations, and of descriptions of situations, in which God shows himself to people. In the Bible such appearances are often described in the same terms.
* "Gave them utterance". The verb that is translated "gave them utterance" in Acts 2:4 is also used by Luke in Acts 26:25, where it is rendered as speaking "sober truth": Paul's words are true and "sober", not, as Festus more than hinted, meaningless or mad.
* The expression "the last days" is used by Joel and interpreted by Peter as a designation for the "decisive times". It refers not primarily to a period that ends world history, but to the age that is decisive for history: God has become man, the atonement has been won, death has been defeated, the Spirit has been poured out, and all God's promises are secured.
In what way is what took place in Judea two thousand years ago decisive for your life and mine, and for world history?
` `
**Notes**
` `
| 51.257576 | 675 | 0.758203 | dan_Latn | 0.999904 |
aa14a42577b50e27a333dfd594191f20f9347e28 | 1,128 | md | Markdown | windows-driver-docs-pr/print/pscript-renderer.md | hugmyndakassi/windows-driver-docs | aa56990cc71e945465bd4d4f128478b8ef5b3a1a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-02-07T12:25:23.000Z | 2022-02-07T12:25:23.000Z | windows-driver-docs-pr/print/pscript-renderer.md | hugmyndakassi/windows-driver-docs | aa56990cc71e945465bd4d4f128478b8ef5b3a1a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/print/pscript-renderer.md | hugmyndakassi/windows-driver-docs | aa56990cc71e945465bd4d4f128478b8ef5b3a1a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Pscript Renderer
description: Pscript Renderer
keywords:
- PostScript Printer Driver WDK print , renderer
- Pscript WDK print , renderer
- renderer WDK Pscript
ms.date: 04/20/2017
---
# Pscript Renderer
The Pscript renderer is implemented as a [printer graphics DLL](printer-graphics-dll.md) and thus exports functions defined by the Microsoft Device Driver Interface (DDI) for graphics drivers. When an application calls Graphics Device Interface (GDI) functions to send text and images to a printer device, the kernel-mode graphics engine calls the renderer's graphics DDI functions. These graphics DDI functions assist GDI in drawing a print job's page images.
The renderer is also responsible for sending the rendered text and image data, along with printer command sequences, to the print spooler, which then directs the data stream and commands to the printer hardware.
You can modify Pscript's rendering operations by providing a [rendering plug-in](rendering-plug-ins.md), which is described in the section entitled [Customizing Microsoft's Printer Drivers](customizing-microsoft-s-printer-drivers.md).
| 37.6 | 460 | 0.795213 | eng_Latn | 0.97978 |
aa153c152efb8aded57a12109ea45f0754822c62 | 8,052 | md | Markdown | articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: How managed identities for Azure resources work with Azure Virtual Machines
description: A description of how managed identities for Azure resources work with Azure Virtual Machines.
services: active-directory
documentationcenter: ''
author: MarkusVi
manager: daveba
editor: ''
ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
ms.service: active-directory
ms.subservice: msi
ms.devlang: ''
ms.topic: conceptual
ms.custom: mvc
ms.date: 06/11/2020
ms.author: markvi
ms.collection: M365-identity-device-management
ms.openlocfilehash: b61fd2f9bc36743754a43b05629a798f0305d4e5
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 07/02/2020
ms.locfileid: "85609217"
---
# <a name="how-managed-identities-for-azure-resources-work-with-azure-virtual-machines"></a>Hur hanterade identiteter för Azure-resurser fungerar med virtuella Azure-datorer
Hanterade identiteter för Azure-resurser ger Azure-tjänster en automatiskt hanterad identitet i Azure Active Directory. Du kan använda den här identiteten för att autentisera till en tjänst som stöder Azure AD-autentisering, utan att ha autentiseringsuppgifter i din kod.
I den här artikeln får du lära dig hur hanterade identiteter fungerar med Azure Virtual Machines (VM).
## <a name="how-it-works"></a>Så här fungerar det
Internt är hanterade identiteter tjänstens huvud namn av en särskild typ, som endast kan användas med Azure-resurser. När den hanterade identiteten tas bort tas motsvarande tjänst objekt bort automatiskt.
När en användardefinierad eller systemtilldelad identitet skapas, utfärdar MSRP (Managed Identity Resource Provider) ett certifikat internt till identiteten.
Din kod kan använda en hanterad identitet för att begära åtkomsttoken för tjänster som stöder Azure AD-autentisering. Azure tar hand om de autentiseringsuppgifter som används av tjänstinstansen.
Följande diagram visar hur hanterade tjänstidentiteter fungerar med virtuella datorer i Azure (VM):

| Property | System-assigned managed identity | User-assigned managed identity |
|------|----------------------------------|--------------------------------|
| Creation | Created as part of an Azure resource (for example, an Azure virtual machine or Azure App Service). | Created as a stand-alone Azure resource. |
| Lifecycle | Shared lifecycle with the Azure resource that the managed identity is created with. <br/> When the parent resource is deleted, the managed identity is deleted as well. | Independent lifecycle. <br/> Must be explicitly deleted. |
| Sharing across Azure resources | Cannot be shared. <br/> It can only be associated with a single Azure resource. | Can be shared. <br/> The same user-assigned managed identity can be associated with more than one Azure resource. |
| Common use cases | Workloads that are contained within a single Azure resource. <br/> Workloads for which you need independent identities. <br/> For example, an application that runs on a single virtual machine | Workloads that run on multiple resources and can share a single identity. <br/> Workloads that need pre-authorization to a secure resource as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource |
## <a name="system-assigned-managed-identity"></a>Systemtilldelad hanterad identitet
1. Azure Resource Manager tar emot en begäran om att aktivera den systemtilldelade hanterade identiteten på en virtuell dator.
2. Azure Resource Manager skapar ett huvudnamn för tjänsten i Azure AD som representerar den virtuella datorns identitet. Tjänstens huvudnamn skapas i den Azure AD-klientorganisation som är betrodd av prenumerationen.
3. Azure Resource Manager konfigurerar identiteten på den virtuella datorn genom att uppdatera Azure Instance Metadata Service Identity-slutpunkten med klient-ID och certifikat för tjänstens huvud namn.
4. När den virtuella datorn har fått en identitet använder du informationen om tjänstens huvudnamn för att ge den virtuella datorn åtkomst till Azure-resurser. Använd rollbaserad åtkomstkontroll (RBAC) i Azure AD när du anropar Azure Resource Manager för att tilldela lämplig roll till tjänstens huvudnamn för den virtuella datorn. Ge din kod åtkomst till den specifika hemligheten eller nyckeln i Key Vault när du anropar Key Vault.
5. Din kod som körs på den virtuella datorn kan begära en token från Azure instance metadata service-slutpunkt, som endast kan nås från den virtuella datorn:`http://169.254.169.254/metadata/identity/oauth2/token`
- Resursparametern anger vilken tjänst som token ska skickas till. Använd `resource=https://management.azure.com/` för att autentisera mot Azure Resource Manager.
- API-versionsparametern anger IMDS-versionen, använd api-version=2018-02-01 eller högre.
6. Ett anrop görs till Azure AD för att begära en åtkomsttoken (se steg 5) med klient-ID:t och certifikatet som konfigurerades i steg 3. Azure AD returnerar en åtkomsttoken för JSON Web Token (JWT).
7. Koden skickar åtkomsttoken vid ett anrop till en tjänst som stöder Azure AD-autentisering.
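The token request in step 5 can be sketched in a few lines. This is a minimal Python illustration that only builds the request; the endpoint is reachable only from inside an Azure VM, so nothing is sent here. The header and query parameters follow the steps above.

```python
import urllib.parse
import urllib.request

IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource, api_version="2018-02-01"):
    """Build the token request described in step 5.

    The resource parameter names the service the token will be sent to;
    the api-version parameter selects the Instance Metadata Service version.
    """
    query = urllib.parse.urlencode({
        "api-version": api_version,
        "resource": resource,
    })
    request = urllib.request.Request(f"{IMDS_TOKEN_ENDPOINT}?{query}")
    # IMDS rejects requests that lack this header.
    request.add_header("Metadata", "true")
    return request

request = build_imds_token_request("https://management.azure.com/")
print(request.full_url)
```

Sending this request from inside the VM returns the JWT access token described in step 6.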
## <a name="user-assigned-managed-identity"></a>Användartilldelad hanterad identitet
1. Azure Resource Manager tar emot en begäran om att skapa en användartilldelad hanterad identitet.
2. Azure Resource Manager skapar ett huvudnamn för tjänsten i Azure AD som representerar den användartilldelade hanterade identiteten. Tjänstens huvudnamn skapas i den Azure AD-klientorganisation som är betrodd av prenumerationen.
3. Azure Resource Manager tar emot en begäran om att konfigurera den tilldelade hanterade identiteten på en virtuell dator och uppdaterar slut punkten för Azure Instance Metadata Service-identiteten med klient-ID och certifikat för den användardefinierade hanterade identitets tjänstens huvud namn.
4. När den användartilldelade hanterade identiteten har skapats använder du informationen om tjänstens huvudnamn för att ge identiteten åtkomst till Azure-resurser. Använd rollbaserad åtkomstkontroll (RBAC) i Azure AD när du anropar Azure Resource Manager för att tilldela lämplig roll till tjänstens huvudnamn för den användartilldelade identiteten. Ge din kod åtkomst till den specifika hemligheten eller nyckeln i Key Vault när du anropar Key Vault.
> [!Note]
> Du kan även utföra det här steget före steg 3.
5. Din kod som körs på den virtuella datorn kan begära en token från Azure-Instance Metadata Service identitetens slut punkt, endast tillgänglig från den virtuella datorn:`http://169.254.169.254/metadata/identity/oauth2/token`
- Resursparametern anger vilken tjänst som token ska skickas till. Använd `resource=https://management.azure.com/` för att autentisera mot Azure Resource Manager.
- Parametern för klient-ID anger den identitet som token har begärts för. Det här värdet krävs för att lösa tvetydigheter om mer än en användartilldelad identitet finns på samma virtuella dator.
- Parametern för API-version anger Azure Instance Metadata Service-versionen. Använd `api-version=2018-02-01` eller senare.
6. Ett anrop görs till Azure AD för att begära en åtkomsttoken (se steg 5) med klient-ID:t och certifikatet som konfigurerades i steg 3. Azure AD returnerar en åtkomsttoken för JSON Web Token (JWT).
7. Koden skickar åtkomsttoken vid ett anrop till en tjänst som stöder Azure AD-autentisering.
## <a name="next-steps"></a>Nästa steg
Kom igång med funktionen Hanterade identiteter för Azure-resurser med följande snabbstarter:
* [Använda en systemtilldelad hanterad identitet för en virtuell Windows-dator för åtkomst till Resource Manager](tutorial-windows-vm-access-arm.md)
* [Använda en systemtilldelad hanterad identitet för en virtuell Linux-dator för åtkomst till Resource Manager](tutorial-linux-vm-access-arm.md)
| 83.875 | 620 | 0.803403 | swe_Latn | 0.999754 |
aa15ba7f93b7e10f5cf6904d5907ff73f0063ded | 6,362 | md | Markdown | README.md | tatianagarcia/FluEgg | 6b7380e06aa12b5d04fa8834ff8851428a1490bb | [
"NCSA",
"Unlicense"
] | null | null | null | README.md | tatianagarcia/FluEgg | 6b7380e06aa12b5d04fa8834ff8851428a1490bb | [
"NCSA",
"Unlicense"
] | null | null | null | README.md | tatianagarcia/FluEgg | 6b7380e06aa12b5d04fa8834ff8851428a1490bb | [
"NCSA",
"Unlicense"
] | 2 | 2020-01-19T19:54:24.000Z | 2021-05-25T02:18:12.000Z | # Fluvial Egg Drift Simulator (FluEgg)
A three-dimensional Lagrangian model capable of evaluating the influence of flow velocity, shear dispersion and turbulent diffusion on the transport and dispersal patterns of Asian carp eggs is presented. The model’s variables include not only biological behavior (growth rate, density changes) but also the physical characteristics of the flow field, such as mean velocities and eddy diffusivities.
# Code Structure
The graphical user interface (GUI) code for FluEgg is FluEgg.m and FluEgg.fig.
The main function of FluEgg is called FluEgggui.
# Motivation
The transport of Asian carp eggs and fish in the early stages of development is very important on their life history and recruitment success. A better understanding of the transport and dispersal patterns of Asian carp at early life stages might give insight into the development and implementation of control strategies for Asian carp.
The FluEgg model was developed to evaluate the influence of flow velocity, shear dispersion and turbulent diffusion on the transport and dispersal patterns of Asian carp eggs. FluEgg output includes the three-dimensional location of the eggs at each time
step together with its growth stage. The output results can be used to estimate lateral, longitudinal or vertical egg distribution. In addition, it can be used to generate an egg breakthrough curve (egg concentration as a function of time) at a certain downstream location from the virtual spawning location. Egg breakthrough curves are important for understanding egg dispersion and travel times.
Egg vertical concentration distribution might give insights into egg suspension and settlement. Egg longitudinal concentration distributions can be used to estimate the streamwise and shear velocity, and minimum river length required for successful egg development.
Egg lateral distributions give information about dead zones, provided the input hydraulic data for the model is sufficiently well
described. The location of suitable spawning grounds can be predicted based on the egg growth stage and on the vertical, lateral
or longitudinal egg concentration distributions.
The FluEgg model has the capability to predict the drifting behavior of eggs based on the physical properties of the eggs and
the environmental and hydrodynamic characteristics of the stream where the eggs are drifting.
A complete description of the FluEgg model was presented by Garcia et al. (2013); users can refer to this paper for detailed information on both the mathematical model and the performance of the model.
# References
Garcia, T., Jackson, P.R.,Murphy, E.A., Valocchi, A.J., Garcia, M.H., 2013. Development of a Fluvial Egg Drift Simulator to evaluate the transport and dispersion of Asian carp eggs in rivers. Ecol. Model. 263, 211–222
# Installation
The FluEgg model is written in the MATLAB® programming language (MathWorks, Natick, MA, USA). It requires the Statistics and Image Processing toolboxes.
==============================================================================
FluEgg Release License
==============================================================================
University of Illinois/NCSA Open Source License
Copyright (c) 2011-2014 University of Illinois at Urbana-Champaign
All rights reserved.
Developed by: Tatiana Garcia
Ven Te Chow Hydrosystems Laboratory, Department of Civil and Environmental Engineering at University of Illinois at Urbana-Champaign
http://vtchl.illinois.edu
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal with the
Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers.
-Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimers in the documentation and/or other
materials provided with the distribution.
-Neither the names of Tatiana Garcia, Ven Te Chow Hydrosystems Laboratory, Department of Civil and Environmental Engineering at University of Illinois at
Urbana-Champaign, nor the names of its contributors may be used to endorse or promote products derived from this Software without specific prior written
permission.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE.
==============================================================================
Copyrights and Licenses for Third Party Software Distributed with FluEgg:
==============================================================================
The FluEgg software contains code written by third parties. Such software will
have its own individual LICENSE.TXT file in the directory in which it appears.
This file will describe the copyrights, license, and restrictions which apply
to that code.
The disclaimer of warranty in the University of Illinois/NCSA Open Source License
applies to all code in the FluEgg Distribution, and nothing in any of the
other licenses gives permission to use the names of Tatiana Garcia, Ven Te Chow Hydrosystems Laboratory, Department of Civil and Environmental Engineering at
University of Illinois at Urbana-Champaign or the University of Illinois to endorse or promote products derived from this
Software.
The following pieces of software have additional or alternate copyrights,
licenses, and/or restrictions:
Program/Function Developer Directory
---------------- --------- ---------
voxel.m Suresh Joel FluEggRepo\voxel.m
cells.m Suresh Joel FluEggRepo\voxel.m
| 84.826667 | 399 | 0.762339 | eng_Latn | 0.992983 |
aa16d9e20ea860c57e4328ba457f1f36669a5c64 | 1,284 | md | Markdown | MovieReviewsAnalysis/readme.md | dsdsgithub007/dsds007 | 17de91514948903186ab6878cbed61c2ae2ebee4 | [
"CC0-1.0"
] | null | null | null | MovieReviewsAnalysis/readme.md | dsdsgithub007/dsds007 | 17de91514948903186ab6878cbed61c2ae2ebee4 | [
"CC0-1.0"
] | null | null | null | MovieReviewsAnalysis/readme.md | dsdsgithub007/dsds007 | 17de91514948903186ab6878cbed61c2ae2ebee4 | [
"CC0-1.0"
] | null | null | null | > # Project Description
> The Film Junky Union, a new edgy community for classic movie enthusiasts, is developing a system for filtering and categorizing movie reviews.
> ## Project Goal
> The goal is to train a model to automatically detect negative reviews. Use the dataset of IMDB movie reviews with polarity labelling to build a model for classifying positive and negative reviews. Try to attain an F1 score of at least 0.85.
> ## Data Description
> * The data was provided by Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).
> ### Description of the selected fields
> * review: the review text
> * pos: the target, '0' for negative and '1' for positive
> * ds_part: 'train'/'test' for the train/test part of dataset, correspondingly
>## Libraries Used
> * Pandas
> * Matplotlib.pyplot, matplotlib.dates
> * scipy.stats
> * numpy
> * math
> * re
> * sklearn
> * nltk
> * spacy
> * lightgbm
> * xgboost
> * torch
> * transformers
> * tqdm
>## Models Evaluated
> * LogisticRegression
> * NLTK, TF-IDF, LR
> * spaCy, TF-IDF, LR
> * spaCy, TF-IDF, XGB
> * spaCy, TF-IDF, LGBM
> * BERT, LR
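As a rough, dependency-free illustration of the TF-IDF vectorization step that several of the models listed above rely on (the project itself uses scikit-learn, NLTK, and spaCy for this), here is a toy sketch:

```python
import math
from collections import Counter

def tfidf(docs):
    """Toy TF-IDF: weight each term by how frequent it is in a document
    and how rare it is across the corpus (with add-one smoothing)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: number of documents each term appears in.
    df = Counter(term for tokens in tokenized for term in set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        total = len(tokens)
        vectors.append({
            term: (count / total) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        })
    return vectors

vecs = tfidf(["a great movie", "a terrible movie"])
```

Terms shared by every review (like "movie" here) get weight zero, while review-specific terms such as "great" keep a positive weight, which is what makes the representation useful for sentiment classification.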
| 33.789474 | 263 | 0.714953 | eng_Latn | 0.929394 |
aa1744bcb07f84d085dccc07537fe156233128f4 | 171 | md | Markdown | README.md | ngaimen/stock_report_analysis | 05c17711cd4c0c93fe584a979ec18023b1a2357c | [
"Apache-2.0"
] | null | null | null | README.md | ngaimen/stock_report_analysis | 05c17711cd4c0c93fe584a979ec18023b1a2357c | [
"Apache-2.0"
] | null | null | null | README.md | ngaimen/stock_report_analysis | 05c17711cd4c0c93fe584a979ec18023b1a2357c | [
"Apache-2.0"
] | 1 | 2021-03-14T14:30:37.000Z | 2021-03-14T14:30:37.000Z | 提供财务报表数据和分析例子, 以及回归测试相关代码
## 财务报表分析
### 使用前提:
将finace.7z解压到当前目录
### 使用方法
运行evan_analysis.py 根据提示输入需要分析的财报即可.
## 回归测试
```
cd stock_back
```
Then write your own strategy; methods for downloading historical price data, among others, are provided.
| 10.6875 | 35 | 0.74269 | yue_Hant | 0.663967 |
aa186286f0b735aa567f28aaca69ed6b9835a913 | 2,017 | md | Markdown | docs/zh-cn/reference/scapi/fw/dotnet/neo/Blockchain.md | wschae/docs | 34f75851c3180fdccd6495adc0a757a4dd474e7d | [
"CC-BY-4.0"
] | 1 | 2021-12-02T10:05:37.000Z | 2021-12-02T10:05:37.000Z | docs/zh-cn/reference/scapi/fw/dotnet/neo/Blockchain.md | wschae/docs | 34f75851c3180fdccd6495adc0a757a4dd474e7d | [
"CC-BY-4.0"
] | 2 | 2021-01-29T18:40:48.000Z | 2021-07-19T14:47:38.000Z | docs/zh-cn/reference/scapi/fw/dotnet/neo/Blockchain.md | wschae/docs | 34f75851c3180fdccd6495adc0a757a4dd474e7d | [
"CC-BY-4.0"
] | null | null | null | # Blockchain 类
该类提供了访问区块链数据的一系列方法。
命名空间:[Neo.SmartContract.Framework.Services.Neo](../neo.md)
程序集:Neo.SmartContract.Framework
## 语法
```c#
public static class Blockchain
```
## Methods

| | Name | Description |
| ------------------------------------------------------ | ------------------------------------------------------ | ---------------------------------------- |
|  | [GetAccount(byte[])](Blockchain/GetAccount.md) | Gets an account (address) by the hash of a contract script |
|  | [GetAsset(byte[])](Blockchain/GetAsset.md) | Looks up an asset by asset ID |
|  | [GetBlock(byte[])](Blockchain/GetBlock.md) | Looks up a block by block hash |
|  | [GetBlock(uint)](Blockchain/GetBlock2.md) | Looks up a block by block height |
|  | [GetContract(byte[])](Blockchain/GetContract.md) | Gets the contents of a contract by its script hash |
|  | [GetHeader(byte[])](Blockchain/GetHeader.md) | Looks up a block header by block hash |
|  | [GetHeader(uint)](Blockchain/GetHeader2.md) | Looks up a block header by block height |
|  | [GetHeight()](Blockchain/GetHeight.md) | Gets the current block height |
|  | [GetTransaction(byte[])](Blockchain/GetTransaction.md) | Looks up a transaction by transaction ID |
|  | [GetValidators()](Blockchain/GetValidators.md) | Gets the public keys of the consensus nodes |
# Constructors
The Blockchain class is a static class, so it has no constructors. | 63.03125 | 158 | 0.499256 | yue_Hant | 0.784124 |
aa18a64357327da999f31c48a0aa48540f334c39 | 67 | md | Markdown | Exchange/ExchangeServer/collaboration/shared-mailboxes/index.md | gderderian/OfficeDocs-Exchange | 957fc509af8639c62c8ce2ef0035a532c60dfaef | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-02-06T10:04:57.000Z | 2019-02-06T10:05:56.000Z | Exchange/ExchangeServer/collaboration/shared-mailboxes/index.md | gderderian/OfficeDocs-Exchange | 957fc509af8639c62c8ce2ef0035a532c60dfaef | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Exchange/ExchangeServer/collaboration/shared-mailboxes/index.md | gderderian/OfficeDocs-Exchange | 957fc509af8639c62c8ce2ef0035a532c60dfaef | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-07-19T15:36:10.000Z | 2021-07-19T15:36:10.000Z | ---
redirect_url: shared-mailboxes
redirect_document_id: TRUE
---
| 13.4 | 30 | 0.761194 | eng_Latn | 0.306073 |
aa19bf252d9b866a8873cf7f7ca9fa77a6e0caea | 959 | md | Markdown | README.md | hicetnunc2000/test-ea | fade3ef435d2cda7aac85b46323aef18468a6471 | [
"MIT"
] | null | null | null | README.md | hicetnunc2000/test-ea | fade3ef435d2cda7aac85b46323aef18468a6471 | [
"MIT"
] | null | null | null | README.md | hicetnunc2000/test-ea | fade3ef435d2cda7aac85b46323aef18468a6471 | [
"MIT"
] | null | null | null | drand-node-ea
A Chainlink external adapter which gets randomness from [drand.love](https://drand.love).

It was built with the Serverless (sls) framework 2.8.0. You must configure it, run `sls deploy`, get its endpoint, build a bridge, and add tasks; then you'll have the job ID for this implementation (for such experiments, also see the testnet specifications in the repo).
```drandramdoness.sol```
Mainnet sample contract for interactions: 0xDA2Ed8129f72F6D3455127ebb40109449772B893. At the moment you must send 0.1 LINK in whichever implementation you use.
https://ipfs.io/ipfs/QmX5XLeuQDiZj2jqBXM5tTHi3dYhMEtNNDbQa7J5pvqvi1
uint256: 15193941169270745378893100680725550593904155809811730811544783570068588680135
```
{
"initiators": [
{
"type": "runlog",
"params": {
"address": "0x2Bd355065fE6b4Df4CE7c12f15b9a9b2a8392037"
}
}
],
"tasks": [
{
"type": "drand_ea"
},
{
"type": "ethint256"
},
{
"type": "ethtx"
}
]
}
```
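The adapter's core job, turning a drand beacon into a Chainlink external adapter response, can be sketched in a few lines. This is a hypothetical illustration (the real adapter runs as a Serverless handler and fetches the beacon from a drand node over HTTP; the field handling here is assumed):

```python
import json

def handle_request(event):
    """Shape a drand-style beacon into a Chainlink external adapter response.

    `event` mimics the request body Chainlink POSTs to a bridge; the beacon
    is stubbed here instead of being fetched from a drand endpoint.
    """
    body = json.loads(event["body"])
    job_run_id = body.get("id", "")

    # Stubbed drand beacon: a round number plus 32 bytes of randomness as hex.
    beacon = {
        "round": 1234,
        "randomness": "a1b2c3" + "00" * 29,
    }

    # Chainlink bridges expect the result under "data", echoing the run id.
    randomness_int = int(beacon["randomness"], 16)
    return {
        "jobRunID": job_run_id,
        "data": {"round": beacon["round"], "result": str(randomness_int)},
        "statusCode": 200,
    }

resp = handle_request({"body": json.dumps({"id": "1", "data": {}})})
print(resp["data"]["round"])
```

The `ethint256` task in the job spec above then writes the returned integer on-chain via `ethtx`.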
```MIT license``` | 25.918919 | 244 | 0.729927 | eng_Latn | 0.829272 |
aa19f19db2c56178b38039dd24ef2c87ca2f425a | 4,504 | md | Markdown | README.md | RoyalKir/MyoUnityAndroidPlugin | 01a9549ef6aa4f1e41dea11c8fc4bc44a9956c75 | [
"MIT"
] | 1 | 2015-10-05T03:06:52.000Z | 2015-10-05T03:06:52.000Z | README.md | RoyalKir/MyoUnityAndroidPlugin | 01a9549ef6aa4f1e41dea11c8fc4bc44a9956c75 | [
"MIT"
] | null | null | null | README.md | RoyalKir/MyoUnityAndroidPlugin | 01a9549ef6aa4f1e41dea11c8fc4bc44a9956c75 | [
"MIT"
] | null | null | null | # MyoUnityAndroidPlugin
Unofficial plugin which enables you to build Unity applications for Android with Myo support.
## Directory structure and Setup
You need to add both folders to your Unity projects Assets folder.
I tested the plugin with Unity 5.1.2 and a Galaxy Note 4, using Myo Android SDK 0.10.0 and Myo software version 1.0.0 to build the project. The "AndroidPlugin" itself is built as an Android bound service, so it is mentioned in the manifest file but not as the main application. This should make it easier to use a second plugin alongside it.
```
\Assets
+---MyoPlugin
| +---Demo
| | +---Scenes
| | | MyoDemoScene.unity
| | \---Scripts
| | MyoPluginDemo.cs
| +---Prefabs
| | MyoManager.prefab
| \---Scripts
| MyoBinding.cs
| MyoManager.cs
\---Plugins
\---Android
| AndroidManifest.xml
| AndroidPlugin.jar
\---libs
\---armeabi-v7a
libgesture-classifier.so
```
## Getting Started
1. Import MyoPlugin and Plugins folder into your Unity project.
2. Open "MyoPlugin/Demo/Scenes/MyoDemoScene.unity".
3. In your Build Settings switch your platform to Android.
4. In Player Settings you have to set your Bundle Identifier
5. Also set minimum API Level 18 (Android 4.3 'Jelly Bean'). Everything else in Player Settings should be optional.
6. Now you should be able to build and deploy to your Android device.
## Using the Demo
1. First you must pair your Myo device. Press the "AttachToAdjacent" button, then 'bump' the Myo gently against your device. Note: Currently no other attach method is implemented; see "Known Issues".
2. You should immediately gain control of the cube's rotation. Poses and Orientation will be displayed as well. You can use Myo vibration, uninitialize the plugin and initialize it again using the buttons.
## The API
When you build your own scene, you simply need a GameObject named "MyoManager" with the MyoManager script attached to it. For an out-of-the-box solution, there is also a prefab included which you can drag into your scene. You can now start the plugin via "MyoManager.Initialize"; the other methods are called in the same manner. The code below shows the methods that can be called through the MyoManager.
```C#
/// Manager class for Myo Android Plugin. Use only this public API to interface with Myo inside of Unity.
public class MyoManager : MonoBehaviour
{
/// Subscribe to this event to recieve Pose event notifications
public static event Action<MyoPose> PoseEvent;
/// More standard Myo events you can subscribe to
public static event Action AttachEvent, DetachEvent, ConnectEvent, DisconnectEvent,
ArmSyncEvent, ArmUnsyncEvent, LockEvent, UnlockEvent;
/// Initializes and enables Myo plugin
public static void Initialize();
/// Uninitialize and disables Myo plugin
public static void Uninitialize();
/// Automatically pairs with the first Myo device that touches the iOS device.
public static void AttachToAdjacent();
/// Gets the rotation of the Myo device, converted into Unity's coordinate system (See MyoToUnity).
public static Quaternion GetQuaternion()
/// Vibrates the Myo device for the specified length, Short, Medium, or Long.
public static void VibrateForLength( MyoVibrateLength length );
/// Tells you if the plugin is already initialized
public static bool GetIsInitialized(){
/// Tells you if a Myo is already attached to the device
public static bool GetIsAttached()
}
```
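As a usage sketch, a hypothetical MonoBehaviour that wires up the API above might look like this (it assumes the MyoManager prefab is present in the scene):

```C#
using UnityEngine;

public class MyoExample : MonoBehaviour
{
    void Start()
    {
        MyoManager.Initialize();
        MyoManager.PoseEvent += OnPose;
    }

    void OnPose(MyoPose pose)
    {
        // Rotate this object with the armband and buzz on every pose change.
        transform.rotation = MyoManager.GetQuaternion();
        MyoManager.VibrateForLength(MyoVibrateLength.Short);
    }

    void OnDestroy()
    {
        MyoManager.PoseEvent -= OnPose;
        MyoManager.Uninitialize();
    }
}
```

Unsubscribing in `OnDestroy` keeps the static events from holding on to destroyed objects.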
## Known Issues
- Only the 'attachToAdjacent' method for connecting the Myo works right now. You have to 'bump' your Myo gently against your Android device. The usual option of choosing a Myo from a screen isn't implemented yet.
- Currently you can only use one Myo per device. If I can get my hands on a second one, I will implement this functionality too.
## Support
This project was kind of rushed, so if you have any feedback just contact me (Florian.Strieg@Student.Reutlingen-University.DE). Although I cannot guarantee any direct support, I'm looking forward to developing this plugin a bit further.
## Thanks
Last but not least I would like you to know that I took inspiration and parts of code from https://github.com/zoiclabs/Myo-Unity-iOS-Plugin. I liked the architecture so hopefully the creator will be okay with it =)
| 47.410526 | 421 | 0.722913 | eng_Latn | 0.992269 |
aa19fa41aaa80c540bd52941feaef665b39b88c2 | 2,646 | md | Markdown | articles/backup/tutorial-backup-restore-files-windows-server.md | gitruili/azure-docs.zh-cn | 4853c7dd56dcb4f2609e927196d2e25b6026a5f8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/tutorial-backup-restore-files-windows-server.md | gitruili/azure-docs.zh-cn | 4853c7dd56dcb4f2609e927196d2e25b6026a5f8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/tutorial-backup-restore-files-windows-server.md | gitruili/azure-docs.zh-cn | 4853c7dd56dcb4f2609e927196d2e25b6026a5f8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Recover files from Azure to a Windows Server
description: This tutorial details how to recover items from Azure to a Windows Server.
services: backup
author: saurabhsensharma
manager: shivamg
keywords: windows server backup; restore files windows server; backup and disaster recovery
ms.service: backup
ms.topic: tutorial
ms.date: 2/14/2018
ms.author: saurse
ms.custom: mvc
ms.openlocfilehash: e05c80e52605e051bdd6815608ca8c12e1393727
ms.sourcegitcommit: 266fe4c2216c0420e415d733cd3abbf94994533d
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/01/2018
ms.locfileid: "34607015"
---
# <a name="recover-files-from-azure-to-a-windows-server"></a>将文件从 Azure 恢复到 Windows Server
使用 Azure 备份可以从 Windows Server 备份中恢复个别项目。 如果必须快速还原意外删除的文件,恢复个别文件会很有帮助。 本教程介绍如何使用 Microsoft Azure 恢复服务 (MARS) 代理从在 Azure 中执行的备份中恢复项目。 本教程介绍如何执行下列操作:
> [!div class="checklist"]
> * 启动个别项目的恢复
> * 选择恢复点
> * 从恢复点还原项目
本教程假定已经执行了[将 Windows Server 备份到 Azure](backup-configure-vault.md) 的步骤,并且在 Azure 中拥有至少一个 Windows Server 文件的备份。
## <a name="initiate-recovery-of-individual-items"></a>启动个别项目的恢复
使用 Microsoft Azure 恢复服务 (MARS) 代理安装名为“Microsoft Azure 备份”的有用用户界面向导。 Microsoft Azure 备份向导与 Microsoft Azure 恢复服务 (MARS) 代理协同工作,从存储在 Azure 中的恢复点检索备份数据。 使用 Microsoft Azure 备份向导确定要还原到 Windows Server 的文件或文件夹。
1. 打开“Microsoft Azure 备份”管理单元。 可以通过在计算机中搜索 **Microsoft Azure 备份**找到该代理。

2. In the wizard, click **Recover Data** in the **Actions** pane of the agent console to start the **Recover Data** wizard.

3. On the **Getting Started** page, select **This server (server name)**, and then click **Next**.
4. On the **Select Recovery Mode** page, select **Individual files and folders**, and then click **Next** to begin the recovery point selection process.
5. On the **Select Volume and Date** page, select the volume that contains the files or folders you want to restore, and click **Mount**. Select a date, and then select the time that corresponds to a recovery point from the drop-down menu. Dates shown in **bold** indicate that at least one recovery point is available on that day.

When you click **Mount**, Azure Backup makes the recovery point available as a disk. Browse the disk and recover files from it.
## <a name="restore-items-from-a-recovery-point"></a>从恢复点还原项目
1. 装载恢复卷后,单击“浏览”打开 Windows 资源管理器,并查找希望恢复的文件和文件夹。

You can open files directly from the recovery volume and verify them.
2. In Windows Explorer, copy the files and/or folders that you want to restore, and then paste them to any desired location on the server.

3. When you have finished restoring files and/or folders, on the **Browse and Recover Files** page of the **Recover Data** wizard, click **Unmount**.

4. Click **Yes** to confirm that you want to unmount the volume.

After the snapshot is unmounted, **Job Completed** appears in the **Jobs** pane of the agent console.
## <a name="next-steps"></a>后续步骤
这将完成有关将 Windows Server 数据备份和还原到 Azure 的教程。 若要了解有关 Azure 备份的详细信息,请参阅备份加密虚拟机的 PowerShell 示例。
> [!div class="nextstepaction"]
> [备份加密 VM](./scripts/backup-powershell-sample-backup-encrypted-vm.md)
| 33.493671 | 203 | 0.760393 | yue_Hant | 0.556867 |
aa1b025d58c24abf88fa6c3b9747febf67694133 | 7,163 | md | Markdown | content/blog/ferret-v0.8/index.md | MontFerret/montferret.github.io | 7702766144604bbb5c3d0a3f29ed57921a2e37da | [
"MIT"
] | 1 | 2020-01-28T18:10:55.000Z | 2020-01-28T18:10:55.000Z | content/blog/ferret-v0.8/index.md | MontFerret/montferret.github.io | 7702766144604bbb5c3d0a3f29ed57921a2e37da | [
"MIT"
] | 15 | 2019-05-06T14:27:28.000Z | 2021-10-08T16:02:15.000Z | content/blog/ferret-v0.8/index.md | MontFerret/montferret.github.io | 7702766144604bbb5c3d0a3f29ed57921a2e37da | [
"MIT"
] | 6 | 2019-07-23T19:47:57.000Z | 2021-10-06T11:51:02.000Z | ---
title: "Ferret v0.8"
subtitle: "More features, better API"
draft: false
author: "Tim Voronov"
authorLink: "https://www.twitter.com/ziflex"
date: "2019-07-23"
---
Hooray, **[Ferret v0.8](https://github.com/MontFerret/ferret/releases/tag/v0.8.0)** has been released!
It's been a while since the last release, but we worked hard to bring you a new and better Ferret. This release has many exciting new features, but unfortunately, there are also some breaking changes.
You can find the full changelog **[here](https://github.com/MontFerret/ferret/blob/master/CHANGELOG.md#080)**.
Let's go!
# What's added
## iframe
Ferret *finally* supports ``iframe`` elements.
When a page gets loaded, Ferret finds all available elements and provides access to them via the ``.frames`` property of a page object.
Here's an example of how to find a target frame:
{{< editor height="250px" readonly="true" >}}
LET page = DOCUMENT("https://www.w3schools.com/html/html_iframe.asp", {
driver: "cdp"
})
LET content = (
FOR f IN page.frames
FILTER f.URL == "https://www.w3schools.com/html/default.asp"
RETURN f.head.innerHTML
)
RETURN FIRST(content)
{{</ editor >}}
Alternatively, you can filter frames by URL, or access a target iframe by index if you know its position.
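For instance, when the frame's position is already known, indexing is enough. A minimal sketch (the index here is illustrative):

{{< code lang="fql" height="160px" >}}
LET page = DOCUMENT("https://www.w3schools.com/html/html_iframe.asp", {
    driver: "cdp"
})

// assumes the target iframe is the first one on the page
RETURN page.frames[0].URL
{{</ code >}}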
<div class="notification is-warning">
Due to CORS security policies, you may still have issues with an iframe if it points to another domain.
</div>
## Namespaces
With this release, we introduce a new language feature - **namespaces**.
Namespaces allow library authors (and us) to isolate functions into dedicated subsections.
Here is an example:
{{< code lang="fql" height="120px" >}}
LET blob = DOWNLOAD("https://raw.githubusercontent.com/MontFerret/ferret/master/assets/logo.png")
RETURN IO::FS::WRITE("logo.png", blob)
{{</ code >}}
To namespace a function, use the new ``namespace`` method. The ``namespace`` method is chainable:
{{< code lang="golang" height="380px" >}}
package main
import (
	"github.com/MontFerret/ferret/pkg/compiler"
)

func main() {
	c := compiler.New()

	// "fs" here stands for your own package that exposes the function to register
	c.Namespace("IO").Namespace("FS").RegisterFunction("Read", fs.Read)
}
{{</ code >}}
<div class="notification is-info">
In future releases, we will move HTML-related functions into the HTML:: namespace.
</div>
## XPath
A good web scraping tool needs XPath support, and Ferret finally has it!
Ferret provides a simple interface to the XPath engine for both drivers, CDP and HTTP.
It automatically detects the output value types and deserializes the results accordingly.
{{< code lang="fql" height="90px" >}}
RETURN XPATH(page, "//div[contains(@class, 'form-group')]")
{{</ code >}}
{{< code lang="fql" height="90px" >}}
RETURN XPATH(page, "count(//div)")
{{</ code >}}
These two queries will return 2 different types:
1. Returns an array of serialized elements (their inner HTML)
2. Returns a number indicating how many "div" elements are on the page.
## Regular expression operator
This release provides a shorthand for using regexp assertions:
{{< code lang="fql" height="90px" >}}
LET result = "foo" =~ "^f[o].$" // returns "true"
{{</ code >}}
{{< code lang="fql" height="90px" >}}
LET result = "foo" !~ "[a-z]+bar$" // returns "true"
{{</ code >}}
## New functions to manipulate DOM
There are some cases when you might need to change the existing DOM. To help with that, we added the ``INNER_HTML_SET`` and ``INNER_TEXT_SET`` functions.
{{< code lang="fql" height="185px" >}}
// Using document and selector
INNER_HTML_SET(doc, "body", "<span>Hello</span>")
INNER_TEXT_SET(doc, "body", "Hello")
// Or an element directly
INNER_HTML_SET(doc.body, "<span>Hello</span>")
INNER_TEXT_SET(doc.body, "Hello")
{{</ code >}}
## Viewport settings
In this release, you can override the default viewport values in headless mode.
{{< code lang="fql" height="185px" >}}
LET doc = DOCUMENT(@url, {
driver: 'cdp',
viewport: {
width: 1920,
height: 1080
}
})
{{</ code >}}
## Better emulation of user interaction
This is a big change in how Ferret handles page interactions.
Now Ferret interacts with pages in a more advanced way: your script can scroll up or down to an element, move the mouse, focus, and type, all with random delays. Just like a real person!
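For example, a simple form interaction now exercises that whole pipeline. A sketch (the URL and selectors are illustrative):

{{< code lang="fql" height="160px" >}}
LET doc = DOCUMENT("https://www.example.com", { driver: "cdp" })

// INPUT scrolls to the element, focuses it and types with random delays
INPUT(doc, "input[name='q']", "ferret")
CLICK(doc, "button[type='submit']")
{{</ code >}}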
## Other
There are many other small changes here and there, such as the new ``FOCUS``, ``ESCAPE_HTML``, ``UNESCAPE_HTML`` and ``DECODE_URI_COMPONENT`` functions, performance improvements, and changes to the internal design of some parts of the system.
# What's broken
We try to maintain backwards compatibility, but some of the new features required serious design changes that led to breaking compatibility with previous versions. As we approach the v1.0 release, the API is becoming more stable and will require fewer dramatic changes.
<div class="notification is-info">
Most of the breaking changes affect only embedded solutions, in particular the use of HTML drivers. There are no syntax changes, so existing scripts do not need to change.
</div>
## Virtual DOM structure
Work on ``iframe`` support required us to redesign the structure of the virtual DOM by introducing top level entity called ``HTMLPage``:
{{< code lang="golang" height="440px" >}}
type HTMLPage interface {
core.Value
core.Iterable
core.Getter
core.Setter
collections.Measurable
io.Closer
IsClosed() values.Boolean
GetURL() values.String
GetMainFrame() HTMLDocument
GetFrames(ctx context.Context) (*values.Array, error)
GetFrame(ctx context.Context, idx values.Int) (core.Value, error)
GetCookies(ctx context.Context) (*values.Array, error)
SetCookies(ctx context.Context, cookies ...HTTPCookie) error
DeleteCookies(ctx context.Context, cookies ...HTTPCookie) error
PrintToPDF(ctx context.Context, params PDFParams) (values.Binary, error)
CaptureScreenshot(ctx context.Context, params ScreenshotParams) (values.Binary, error)
WaitForNavigation(ctx context.Context) error
Navigate(ctx context.Context, url values.String) error
NavigateBack(ctx context.Context, skip values.Int) (values.Boolean, error)
NavigateForward(ctx context.Context, skip values.Int) (values.Boolean, error)
}
{{</ code >}}
Previously, ``HTMLDocument`` represented the open page, but ``iframe`` support introduces the need for multiple documents, one per frame. This led to the new top-level entity in the structure.
## Driver API
Because of the changes in the virtual DOM structure, the driver API has changed as well, to stay consistent with the new structure.
``Driver.LoadDocument`` and ``LoadDocumentParams`` are renamed to ``Driver.Open`` and ``Params``.
{{< code lang="golang" height="150px" >}}
type Driver interface {
io.Closer
Name() string
Open(ctx context.Context, params Params) (HTMLPage, error)
}
{{</ code >}}
## Other
In the context of API stabilization and consistency, there are some other minor changes in vDOM elements, such as extra return values (usually an error) or a ``Get`` prefix on some methods.
# Nore Chatting Web Application
Nore Chatting App is a web chat application like [Zalo](https://chat.zalo.me/) or [Skype](https://www.skype.com/en/). The server is built with [Node.js](https://nodejs.org/en/).
----
## Tech
Nore Chatting Application uses a number of open source projects to work properly:
* [React.js](https://reactjs.org/) - A JavaScript library for building user interfaces.
* [Redux](https://redux.js.org/) - A predictable state container for JavaScript apps.
* [Webpack](https://webpack.js.org/) - Webpack is used to compile JavaScript modules.
----
## Usage
Nore Chatting Application requires Node.js to run.
Install the dependencies and devDependencies and start the server.
```sh
$ cd nore-chatting-app
$ npm install
$ npm start
```
---
title: Basic concepts of flexbox
slug: Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox
tags:
- CSS
- Flex
- Guide
- axes
- concepts
- container
- flexbox
---
{{CSSRef}}
The Flexible Box Module, usually referred to as flexbox, was designed as a one-dimensional layout model, and as a method that could offer space distribution between items in an interface and powerful alignment capabilities. This article gives an outline of the main features of flexbox, which we will be exploring in more detail in the rest of these guides.
When we describe flexbox as being one dimensional we are describing the fact that flexbox deals with layout in one dimension at a time — either as a row or as a column. This can be contrasted with the two-dimensional model of [CSS Grid Layout](/en-US/docs/Web/CSS/CSS_Grid_Layout), which controls columns and rows together.
## The two axes of flexbox
When working with flexbox you need to think in terms of two axes — the main axis and the cross axis. The main axis is defined by the {{cssxref("flex-direction")}} property, and the cross axis runs perpendicular to it. Everything we do with flexbox refers back to these axes, so it is worth understanding how they work from the outset.
### The main axis
The main axis is defined by `flex-direction`, which has four possible values:
- `row`
- `row-reverse`
- `column`
- `column-reverse`
Should you choose `row` or `row-reverse`, your main axis will run along the row in the **inline direction**.

Choose `column` or `column-reverse` and your main axis will run from the top of the page to the bottom — in the **block direction**.

### The cross axis
The cross axis runs perpendicular to the main axis, therefore if your `flex-direction` (main axis) is set to `row` or `row-reverse` the cross axis runs down the columns.

If your main axis is `column` or `column-reverse` then the cross axis runs along the rows.

Understanding which axis is which is important when we start to look at aligning and justifying flex items; flexbox features properties that align and justify content along one axis or the other.
## Start and end lines
Another vital area of understanding is how flexbox makes no assumption about the writing mode of the document. In the past, CSS was heavily weighted towards horizontal and left-to-right writing modes. Modern layout methods encompass the range of writing modes and so we no longer assume that a line of text will start at the top left of a document and run towards the right hand side, with new lines appearing one under the other.
You can [read more about the relationship between flexbox and the Writing Modes specification](/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Relationship_of_Flexbox_to_Other_Layout_Methods#writing_modes) in a later article; however, the following description should help explain why we do not talk about left and right and top and bottom when we describe the direction that our flex items flow in.
If the `flex-direction` is `row` and I am working in English, then the start edge of the main axis will be on the left, the end edge on the right.

If I were to work in Arabic, then the start edge of my main axis would be on the right and the end edge on the left.

In both cases the start edge of the cross axis is at the top of the flex container and the end edge at the bottom, as both languages have a horizontal writing mode.
After a while, thinking about start and end rather than left and right becomes natural, and will be useful to you when dealing with other layout methods such as CSS Grid Layout which follow the same patterns.
## The flex container
An area of a document laid out using flexbox is called a **flex container**. To create a flex container, we set the value of the area's container's {{cssxref("display")}} property to `flex` or `inline-flex`. As soon as we do this the direct children of that container become **flex items**. As with all properties in CSS, some initial values are defined, so when creating a flex container all of the contained flex items will behave in the following way.
- Items display in a row (the `flex-direction` property's default is `row`).
- The items start from the start edge of the main axis.
- The items do not stretch on the main dimension, but can shrink.
- The items will stretch to fill the size of the cross axis.
- The {{cssxref("flex-basis")}} property is set to `auto`.
- The {{cssxref("flex-wrap")}} property is set to `nowrap`.
The result of this is that your items will all line up in a row, using the size of the content as their size in the main axis. If there are more items than can fit in the container, they will not wrap but will instead overflow. If some items are taller than others, all items will stretch along the cross axis to fill its full size.
You can see in the live example below how this looks. Try editing the items or adding additional items in order to test the initial behavior of flexbox.
{{EmbedGHLiveSample("css-examples/flexbox/basics/the-flex-container.html", '100%', 480)}}
### Changing flex-direction
Adding the {{cssxref("flex-direction")}} property to the flex container allows us to change the direction in which our flex items display. Setting `flex-direction: row-reverse` will keep the items displaying along the row, however the start and end lines are switched.
If we change `flex-direction` to `column` the main axis switches and our items now display in a column. Set `column-reverse` and the start and end lines are again switched.
The live example below has `flex-direction` set to `row-reverse`. Try the other values — `row`, `column` and `column-reverse` — to see what happens to the content.
{{EmbedGHLiveSample("css-examples/flexbox/basics/flex-direction.html", '100%', 350)}}
## Multi-line flex containers with flex-wrap
While flexbox is a one dimensional model, it is possible to cause our flex items to wrap onto multiple lines. In doing so, you should consider each line as a new flex container. Any space distribution will happen across that line, without reference to the lines either side.
To cause wrapping behavior add the property {{cssxref("flex-wrap")}} with a value of `wrap`. Now, should your items be too large to all display in one line, they will wrap onto another line. The live sample below contains items that have been given a width, the total width of the items being too wide for the flex container. As `flex-wrap` is set to `wrap`, the items wrap. Set it to `nowrap`, which is also the initial value, and they will instead shrink to fit the container because they are using initial flexbox values that allows items to shrink. Using `nowrap` would cause an overflow if the items were not able to shrink, or could not shrink small enough to fit.
{{EmbedGHLiveSample("css-examples/flexbox/basics/flex-wrap.html", '100%', 400)}}
Find out more about wrapping flex items in the guide [Mastering Wrapping of Flex Items](/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Mastering_Wrapping_of_Flex_Items).
## The flex-flow shorthand
You can combine the two properties `flex-direction` and `flex-wrap` into the {{cssxref("flex-flow")}} shorthand. The first value specified is `flex-direction` and the second value is `flex-wrap`.
In the live example below try changing the first value to one of the allowable values for `flex-direction` - `row`, `row-reverse`, `column` or `column-reverse`, and also change the second to `wrap` and `nowrap`.
{{EmbedGHLiveSample("css-examples/flexbox/basics/flex-flow.html", '100%', 400)}}
## Properties applied to flex items
To have more control over flex items we can target them directly. We do this by way of three properties:
- {{cssxref("flex-grow")}}
- {{cssxref("flex-shrink")}}
- {{cssxref("flex-basis")}}
We will take a brief look at these properties in this overview, and you can gain a fuller understanding in the guide [Controlling Ratios of Flex Items on the Main Axis](/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Controlling_Ratios_of_Flex_Items_Along_the_Main_Ax).
Before we can make sense of these properties we need to consider the concept of **available space**. What we are doing when we change the value of these flex properties is to change the way that available space is distributed amongst our items. This concept of available space is also important when we come to look at aligning items.
If we have three 100 pixel-wide items in a container which is 500 pixels wide, then the space we need to lay out our items is 300 pixels. This leaves 200 pixels of available space. If we don’t change the initial values then flexbox will put that space after the last item.

If we instead would like the items to grow and fill the space, then we need to have a method of distributing the leftover space between the items. This is what the `flex` properties that we apply to the items themselves, will do.
### The flex-basis property
The `flex-basis` is what defines the size of that item in terms of the space it leaves as available space. The initial value of this property is `auto` — in this case the browser looks to see if the items have a size. In the example above, all of the items have a width of 100 pixels and so this is used as the `flex-basis`.
If the items don’t have a size then the content's size is used as the flex-basis. This is why when we just declare `display: flex` on the parent to create flex items, the items all move into a row and take only as much space as they need to display their contents.
### The flex-grow property
With the `flex-grow` property set to a positive integer, flex items can grow along the main axis from their `flex-basis`. This will cause the item to stretch and take up any available space on that axis, or a proportion of the available space if other items are allowed to grow too.
If we gave all of our items in the example above a `flex-grow` value of 1 then the available space in the flex container would be equally shared between our items and they would stretch to fill the container on the main axis.
The flex-grow property can be used to distribute space in proportion. If we give our first item a `flex-grow` value of 2, and the other items a value of 1 each, 2 parts will be given to the first item (100px out of 200px in the case of the example above), 1 part each the other two (50px each out of the 200px total).
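As a sketch, the proportional distribution described above looks like this (the selectors are illustrative):

```css
.container {
  display: flex;
}

.container > div {
  flex-grow: 1; /* each item gets one share of the spare space */
}

.container > div:first-child {
  flex-grow: 2; /* the first item gets two shares (100px of the 200px above) */
}
```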
### The flex-shrink property
Where the `flex-grow` property deals with adding space in the main axis, the `flex-shrink` property controls how it is taken away. If we do not have enough space in the container to lay out our items, and `flex-shrink` is set to a positive integer, then the item can become smaller than the `flex-basis`. As with `flex-grow`, different values can be assigned in order to cause one item to shrink faster than others — an item with a higher value set for `flex-shrink` will shrink faster than its siblings that have lower values.
The minimum size of the item will be taken into account while working out the actual amount of shrinkage that will happen, which means that flex-shrink has the potential to appear less consistent than flex-grow in behavior. We’ll therefore take a more detailed look at how this algorithm works in the article [Controlling Ratios of items along the main axis](/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Controlling_Ratios_of_Flex_Items_Along_the_Main_Ax).
> **Note:** These values for `flex-grow` and `flex-shrink` are proportions. Typically if we had all of our items set to `flex: 1 1 200px` and then wanted one item to grow at twice the rate, we would set that item to flex: `2 1 200px`. However you could also use `flex: 10 1 200px` and `flex: 20 1 200px` if you wanted.
### Shorthand values for the flex properties
You will very rarely see the `flex-grow`, `flex-shrink`, and `flex-basis` properties used individually; instead they are combined into the {{cssxref("flex")}} shorthand. The `flex` shorthand allows you to set the three values in this order — `flex-grow`, `flex-shrink`, `flex-basis`.
The live example below allows you to test out the different values of the flex shorthand; remember that the first value is `flex-grow`. Giving this a positive value means the item can grow. The second is `flex-shrink` — with a positive value the items can shrink, but only if their total values overflow the main axis. The final value is `flex-basis`; this is the value the items are using as their base value to grow and shrink from.
{{EmbedGHLiveSample("css-examples/flexbox/basics/flex-properties.html", '100%', 510)}}
There are also some predefined shorthand values which cover most of the use cases. You will often see these used in tutorials, and in many cases these are all you will need to use. The predefined values are as follows:
- `flex: initial`
- `flex: auto`
- `flex: none`
- `flex: <positive-number>`
Setting `flex: initial` resets the item to the initial values of Flexbox. This is the same as `flex: 0 1 auto`. In this case the value of `flex-grow` is 0, so items will not grow larger than their `flex-basis` size. The value of `flex-shrink` is 1, so items can shrink if they need to rather than overflowing. The value of `flex-basis` is `auto`. Items will either use any size set on the item in the main dimension, or they will get their size from the content size.
Using `flex: auto` is the same as using `flex: 1 1 auto`; everything is as with `flex:initial` but in this case the items can grow and fill the container as well as shrink if required.
Using `flex: none` will create fully inflexible flex items. It is as if you wrote `flex: 0 0 auto`. The items cannot grow or shrink but will be laid out using flexbox with a `flex-basis` of `auto`.
The shorthand you often see in tutorials is `flex: 1` or `flex: 2` and so on. This is as if you used `flex: 1 1 0`. The items can grow and shrink from a `flex-basis` of 0.
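The equivalences described above, side by side (the class names are illustrative):

```css
.item-initial { flex: initial; } /* same as flex: 0 1 auto */
.item-auto    { flex: auto; }    /* same as flex: 1 1 auto */
.item-none    { flex: none; }    /* same as flex: 0 0 auto */
.item-grow    { flex: 2; }       /* same as flex: 2 1 0   */
```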
Try these shorthand values in the live example below.
{{EmbedGHLiveSample("css-examples/flexbox/basics/flex-shorthands.html", '100%', 510)}}
## Alignment, justification and distribution of free space between items
A key feature of flexbox is the ability to align and justify items on the main- and cross-axes, and to distribute space between flex items. Note that these properties are to be set on the flex container, not on the items themselves.
### align-items
The {{cssxref("align-items")}} property will align the items on the cross axis.
The initial value for this property is `stretch` and this is why flex items stretch to the height of the flex container by default. This might be dictated by the height of the tallest item in the container, or by a size set on the flex container itself.
You could instead set `align-items` to `flex-start` in order to make the items line up at the start of the flex container, `flex-end` to align them to the end, or `center` to align them in the centre. Try this in the live example — I have given the flex container a height in order that you can see how the items can be moved around inside the container. See what happens if you set the value of align-items to:
- `stretch`
- `flex-start`
- `flex-end`
- `center`
{{EmbedGHLiveSample("css-examples/flexbox/basics/align-items.html", '100%', 520)}}
### justify-content
The {{cssxref("justify-content")}} property is used to align the items on the main axis, the direction in which `flex-direction` has set the flow. The initial value is `flex-start` which will line the items up at the start edge of the container, but you could also set the value to `flex-end` to line them up at the end, or `center` to line them up in the centre.
You can also use the value `space-between` to take all the spare space after the items have been laid out, and share it out evenly between the items so there will be an equal amount of space between each item. To cause an equal amount of space on the right and left of each item use the value `space-around`. With `space-around`, items have a half-size space on either end. Or, to cause items to have equal space around them use the value `space-evenly`. With `space-evenly`, items have a full-size space on either end.
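A sketch of this main-axis distribution (the class name is illustrative):

```css
.container {
  display: flex;
  justify-content: space-between; /* spare space is shared between the items */
}
```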
Try the following values of `justify-content` in the live example:
- `flex-start`
- `flex-end`
- `center`
- `space-around`
- `space-between`
- `space-evenly`
{{EmbedGHLiveSample("css-examples/flexbox/basics/justify-content.html", '100%', 380)}}
In the article [Aligning Items in a Flex Container](/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Aligning_Items_in_a_Flex_Container) we will explore these properties in more depth, in order to have a better understanding of how they work. These simple examples however will be useful in the majority of use cases.
## Next steps
After reading this article you should have an understanding of the basic features of Flexbox. In the next article we will look at [how this specification relates to other parts of CSS](/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Relationship_of_Flexbox_to_Other_Layout_Methods).
| 78.977578 | 670 | 0.767261 | eng_Latn | 0.999713 |
---
title: Delete users or groups (Master Data Services) | Microsoft Docs
ms.custom: ''
ms.date: 03/01/2017
ms.prod: sql
ms.prod_service: mds
ms.reviewer: ''
ms.suite: sql
ms.technology:
- master-data-services
ms.tgt_pltfrm: ''
ms.topic: conceptual
helpviewer_keywords:
- deleting groups [Master Data Services]
- groups [Master Data Services], deleting
- users [Master Data Services], deleting
- deleting users [Master Data Services]
ms.assetid: 0bbf9d2c-b826-48bb-8aa9-9905db6e717f
caps.latest.revision: 7
author: leolimsft
ms.author: lle
manager: craigg
ms.openlocfilehash: e89369c4ece00ec5114801a560fc6cf81ef02fcf
ms.sourcegitcommit: de5e726db2f287bb32b7910831a0c4649ccf3c4c
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/12/2018
ms.locfileid: "35333720"
---
# <a name="delete-users-or-groups-master-data-services"></a>Delete users or groups (Master Data Services)
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md-winonly](../includes/appliesto-ss-xxxx-xxxx-xxx-md-winonly.md)]
  Delete users or groups when you no longer want them to access [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)].
  
  Note the following behavior when you delete users and groups:
  
-   If you delete a user who is a member of a group with access to [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)], that user can still access [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)] until you remove the user from Active Directory or from the local group.
-   If you delete a group, any users in the group who have accessed [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)] are displayed in the **Users** list until you delete them.
-   Security changes are not propagated to the MDS [!INCLUDE[ssMDSXLS](../includes/ssmdsxls-md.md)] for 20 minutes.
  
## <a name="prerequisites"></a>Prerequisites
 To perform this procedure:
  
-   You must have permission to access the **User and Group Permissions** functional area.
  
### <a name="to-delete-users-or-groups"></a>To delete users or groups
  
1.  In [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)], click **User and Group Permissions**.
2.  To delete a user, stay on the **Users** page. To delete a group, on the menu bar, click **Manage Groups**.
3.  In the grid, select the row for the user or group that you want to delete.
4.  Click **Delete selected user** or **Delete selected group**.
5.  In the confirmation dialog box, click **OK**.
  
## <a name="see-also"></a>See Also
 [Security (Master Data Services)](../master-data-services/security-master-data-services.md)
## In Depth
Quantity.TypeId gets the string representation of the selected quantity.
___
## Example File
 | 27.166667 | 72 | 0.791411 | eng_Latn | 0.747299 |
# proton_api.PortfolioManagementApi
All URIs are relative to *https://sandbox.hydrogenplatform.com/proton/v1*
Method | HTTP request | Description
------------- | ------------- | -------------
[**rebalancing_signal**](PortfolioManagementApi.md#rebalancing_signal) | **POST** /rebalancing_signal | Rebalancing Signal
# **rebalancing_signal**
> dict(str, object) rebalancing_signal(rebalancing_signal_request)
Rebalancing Signal
Generate rebalancing signals for a group of investments
### Example
```python
from __future__ import print_function
import time
import proton_api
from proton_api.rest import ApiException
from pprint import pprint

# Configure OAuth2 access token for authorization: oauth2
configuration = proton_api.Configuration()
# create an instance of the API class
api_instance = proton_api.AuthApi(proton_api.ApiClient(configuration))
# Obtain a token with one of the two credential flows:
api_token_response = api_instance.create_using_post_client_credentials("client_id", "password")
# OR
# api_token_response = api_instance.create_using_post_password_credentials("client_id", "password", "username", "secret")
configuration.access_token = api_token_response.access_token
# create an instance of the API class
api_instance = proton_api.PortfolioManagementApi(proton_api.ApiClient(configuration))
rebalancing_signal_request = proton_api.RebalancingSignalRequest() # RebalancingSignalRequest | Request payload for Rebalancing Signal
try:
# Rebalancing Signal
api_response = api_instance.rebalancing_signal(rebalancing_signal_request)
pprint(api_response)
except ApiException as e:
print("Exception when calling PortfolioManagementApi->rebalancing_signal: %s\n" % e)
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**rebalancing_signal_request** | [**RebalancingSignalRequest**](RebalancingSignalRequest.md)| Request payload for Rebalancing Signal |
### Return type
**dict(str, object)**
### Authorization
[oauth2](../README.md#oauth2)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
| 32.112676 | 180 | 0.749123 | eng_Latn | 0.266188 |
aa1ffd49e6834cc0a6008a3c54f3dbf1fb1c2191 | 2,194 | md | Markdown | desktop-src/SecAuthN/credential-security-support-provider.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2022-03-18T02:46:08.000Z | 2022-03-18T03:19:15.000Z | desktop-src/SecAuthN/credential-security-support-provider.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/SecAuthN/credential-security-support-provider.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
---
Description: The Credential Security Support Provider protocol (CredSSP) is a Security Support Provider that is implemented by using the Security Support Provider Interface (SSPI).
ms.assetid: b3006b89-d9fc-4444-a3c8-ad2698de501c
title: Credential Security Support Provider
ms.topic: article
ms.date: 05/31/2018
---
# Credential Security Support Provider
The Credential Security Support Provider protocol (CredSSP) is a Security Support Provider that is implemented by using the Security Support Provider Interface ([SSPI](sspi.md)). CredSSP lets an application delegate the user's credentials from the client to the target server for remote authentication. CredSSP provides an encrypted [Transport Layer Security Protocol](transport-layer-security-protocol.md) channel. The client is authenticated over the encrypted channel by using the Simple and Protected Negotiate (SPNEGO) protocol with either [Microsoft Kerberos](microsoft-kerberos.md) or [Microsoft NTLM](microsoft-ntlm.md).
> [!Caution]
> This is not constrained delegation. CredSSP passes the user's full credentials to the server without any constraint.
For information about SPNEGO, see [Microsoft Negotiate](microsoft-negotiate.md).
After the client and server are authenticated, the client passes the user's credentials to the server. The credentials are doubly encrypted under the SPNEGO and TLS session keys. CredSSP supports password-based logon as well as smart card logon based on both [*X.509*](/windows/desktop/SecGloss/x-gly) and PKINIT.
> [!IMPORTANT]
> CredSSP does not support Wow64 clients.
For more information about CredSSP, see the following topics.
| Topic | Description |
|-------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| [CredSSP Group Policy Settings](credssp-group-policy-settings.md)<br/> | Delegation of credentials by CredSSP can be controlled by using group policy settings.<br/> |
# No-Coin
NoCoin testing
<div style="display: flex; flex-wrap: nowrap; flex-flow: column; align-content: center; align-items: center; justify-content: center;">
<img src="https://ucarecdn.com/43f41248-6a21-4428-88b7-084e1f13e050/dumbflowerlogo480x270.png" alt="dumbflower logo">
<div style="display: flex; flex-flow: row; height: 20px; flex-wrap: nowrap; margin-bottom: 1.4rem;">
<img alt="Build Status" style="max-width: 105px; margin-left: 2px;" onClick="location.href='https://travis-ci.org/ace411/dumbflower'" src="https://travis-ci.org/ace411/dumbflower.svg?branch=master">
<img alt="Codacy Badge" style="max-width: 105px; margin-left: 2px;" onClick="location.href='https://www.codacy.com/app/ace411/dumbflower?utm_source=github.com&utm_medium=referral&utm_content=ace411/dumbflower&utm_campaign=Badge_Grade'" src="https://api.codacy.com/project/badge/Grade/86961fde07564ec388c4a93582f6ba7a">
<img alt="License" style="max-width: 105px; margin-left: 2px;" onClick="location.href='https://packagist.org/packages/chemem/dumbflower'" src="https://poser.pugx.org/chemem/dumbflower/license">
</div>
</div>
DumbFlower is a simple image manipulation library for PHP, built on top of the GD extension that ships with PHP. The documentation below should help you understand how to use the library.
# Installation
Before you can use the DumbFlower library, you should have either Git or Composer installed on your system. To install the package via Composer, type the following in your preferred command-line interface:
```
composer require chemem/dumbflower dev-master
```
# Usage
The DumbFlower package, like many PHP libraries, is namespaced. The table below is an enumeration of the library's function groupings and respective namespaces:
| Type | Namespace |
|------------------|-----------------------------------|
| Filters | ```Chemem\DumbFlower\Filters\``` |
| Resize functions | ```Chemem\DumbFlower\Resize\``` |
| Snapshots | ```Chemem\DumbFlower\Snapshot\``` |
# Supported Formats
Since DumbFlower builds on the PHP GD image extension, it supports most image formats listed in the [extension's documentation](http://php.net/manual/en/book.image.php). In case you do not remember what those formats are, the list below is a reminder:
- ```jpeg```
- ```gif```
- ```png```
- ```webp```
# Horizontal Pod Autoscaler
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization
Note: Make sure you have the `shoppingportal` namespace.
#### Enable metric service before proceeding further, more detail can be found [metric-server](https://github.com/meta-magic/kubernetes_workshop/tree/master/yaml/Logging-Monitoring/metrics_grafana)
Execute below YAML
````
kubectl create -f https://raw.githubusercontent.com/meta-magic/kubernetes_workshop/master/yaml/k8s-1-Basic/8-autoscaling-hpa/productreview-deployment.yaml
kubectl create -f https://raw.githubusercontent.com/meta-magic/kubernetes_workshop/master/yaml/k8s-1-Basic/8-autoscaling-hpa/productreview-service.yaml (the deployment is configured with CPU and memory settings)
kubectl create -f https://raw.githubusercontent.com/meta-magic/kubernetes_workshop/master/yaml/k8s-1-Basic/8-autoscaling-hpa/product-horizontal-scaler.yaml
````
Generate requests using the command below:
````
for ((i=0;i<10000;i++)); do curl http://192.168.99.100:30160/productreviewms/check/live; done  # the port will differ; use your service's NodePort
````
List the pods; you will see the number of pods increase as CPU utilization rises.
<img width="1255" alt="screen shot 2018-10-06 at 12 32 23 pm" src="https://user-images.githubusercontent.com/23295769/46568553-7f08dd00-c964-11e8-8bfa-c332f07cb9b3.png">
When CPU utilization returns to normal, the number of pods will scale back down automatically.
<img width="1184" alt="screen shot 2018-10-06 at 12 38 32 pm" src="https://user-images.githubusercontent.com/23295769/46568568-c4c5a580-c964-11e8-87e9-00e78d326fd2.png">
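The `product-horizontal-scaler.yaml` file applied above defines the HPA itself. A minimal manifest of that kind looks roughly like this (illustrative sketch — the names, namespace, and thresholds here are assumptions; check the repository file for the actual values):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: productreview-hpa
  namespace: shoppingportal
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: productreview
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```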
---
title: "Posts by Year"
permalink: /posts/
layout: posts
author_profile: true
---
[\[By Year\]](/posts/) [\[By Category\]](/categories/) [\[By Tag\]](/tags/)
# Sun won't build anymore
Game Modding | Call of Duty: Black Ops 3 | Radiant
---
<strong style="font-size: 1.4em;"><span style="text-decoration: underline;text-decoration-color: #34a7f9;"><span style="color:#34a7f9;">ModmeBot</span></span>:</strong>
<p>Thread By: Wild<br />Hello, I have this problem now where when I try and bake the lighting in my map it says it failed and at the bottom of the window it says "Sun FAILED needs more than 128mb". Whenever I try and compile the map it gives my error in launcher that says "Error: Failed to export D:/Steam/steamapps/common/Call of Duty Black Ops III/\map_source\zm\zm_agc.map because Volume #0: 10.67, 512.00, -42.63 Level sun volume missing from file!"<br /><br /><br /><br />What I don't understand is that it used to work fine, then I worked on my map like normal and now it doesn't work. I still do have a sun volume and when I go to cords it says in the error there is nothing there. It says there's a volume there but the closest volume to that place is my start_zone but its not actually touching those cords.</p>
---
<strong style="font-size: 1.4em;"><span style="text-decoration: underline;text-decoration-color: #34a7f9;"><span style="color:#34a7f9;">ModmeBot</span></span>:</strong>
<p>Reply By: DTZxPorter<br /><blockquote><em>AGC</em>Hello, I have this problem now where when I try and bake the lighting in my map it says it failed and at the bottom of the window it says "Sun FAILED needs more than 128mb". Whenever I try and compile the map it gives my error in launcher that says "Error: Failed to export D:/Steam/steamapps/common/Call of Duty Black Ops III/\map_source\zm\zm_agc.map because Volume #0: 10.67, 512.00, -42.63 Level sun volume missing from file!"<br /><br /><br /><br />What I don't understand is that it used to work fine, then I worked on my map like normal and now it doesn't work. I still do have a sun volume and when I go to cords it says in the error there is nothing there. It says there's a volume there but the closest volume to that place is my start_zone but its not actually touching those cords.</blockquote><br /><br />Watch your VRAM / RAM while compiling lighting, see if it nears any limits</p>
---
<strong style="font-size: 1.4em;"><span style="text-decoration: underline;text-decoration-color: #34a7f9;"><span style="color:#34a7f9;">ModmeBot</span></span>:</strong>
<p>Reply By: Wild<br /><blockquote><em>DTZxPorter</em><blockquote><em>AGC</em>Hello, I have this problem now where when I try and bake the lighting in my map it says it failed and at the bottom of the window it says "Sun FAILED needs more than 128mb". Whenever I try and compile the map it gives my error in launcher that says "Error: Failed to export D:/Steam/steamapps/common/Call of Duty Black Ops III/\map_source\zm\zm_agc.map because Volume #0: 10.67, 512.00, -42.63 Level sun volume missing from file!"<br /><br /><br /><br />What I don't understand is that it used to work fine, then I worked on my map like normal and now it doesn't work. I still do have a sun volume and when I go to cords it says in the error there is nothing there. It says there's a volume there but the closest volume to that place is my start_zone but its not actually touching those cords.</blockquote><br /><br />Watch your VRAM / RAM while compiling lighting, see if it nears any limits</blockquote><br /><br />I'm not sure if I was doing the right thing but I used task manager and the highest it went was 123mb but I still had like 7.1gb available. When I watch it while baking the light in radiant it goes up to 146mb and it says I have about 2.6gb available. I'm sorry if if I didn't do that right. I'm not very good with the technical side of making maps lol.</p>
---
UID: NF:fsrmreports.IFsrmReportScheduler.CreateScheduleTask
title: IFsrmReportScheduler::CreateScheduleTask (fsrmreports.h)
description: Creates a scheduled task that is used to trigger a report job.
helpviewer_keywords: ["CreateScheduleTask","CreateScheduleTask method [File Server Resource Manager]","CreateScheduleTask method [File Server Resource Manager]","FsrmReportScheduler class","CreateScheduleTask method [File Server Resource Manager]","IFsrmReportScheduler interface","FsrmReportScheduler class [File Server Resource Manager]","CreateScheduleTask method","IFsrmReportScheduler interface [File Server Resource Manager]","CreateScheduleTask method","IFsrmReportScheduler.CreateScheduleTask","IFsrmReportScheduler::CreateScheduleTask","fs.ifsrmreportscheduler_createscheduletask","fsrm.ifsrmreportscheduler_createscheduletask","fsrmreports/IFsrmReportScheduler::CreateScheduleTask"]
old-location: fsrm\ifsrmreportscheduler_createscheduletask.htm
tech.root: fsrm
ms.assetid: 983a6d05-417f-4aea-9652-955fd96e78f0
ms.date: 12/05/2018
ms.keywords: CreateScheduleTask, CreateScheduleTask method [File Server Resource Manager], CreateScheduleTask method [File Server Resource Manager],FsrmReportScheduler class, CreateScheduleTask method [File Server Resource Manager],IFsrmReportScheduler interface, FsrmReportScheduler class [File Server Resource Manager],CreateScheduleTask method, IFsrmReportScheduler interface [File Server Resource Manager],CreateScheduleTask method, IFsrmReportScheduler.CreateScheduleTask, IFsrmReportScheduler::CreateScheduleTask, fs.ifsrmreportscheduler_createscheduletask, fsrm.ifsrmreportscheduler_createscheduletask, fsrmreports/IFsrmReportScheduler::CreateScheduleTask
req.header: fsrmreports.h
req.include-header: FsrmReports.h, FsrmTlb.h
req.target-type: Windows
req.target-min-winverclnt: None supported
req.target-min-winversvr: Windows Server 2008
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl: FsrmReports.idl
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll: SrmSvc.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- IFsrmReportScheduler::CreateScheduleTask
- fsrmreports/IFsrmReportScheduler::CreateScheduleTask
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- SrmSvc.dll
api_name:
- IFsrmReportScheduler.CreateScheduleTask
- FsrmReportScheduler.CreateScheduleTask
---
# IFsrmReportScheduler::CreateScheduleTask
## -description
<p class="CCE_Message">[Starting with Windows Server 2012 this method is not supported; use the
<a href="/previous-versions/windows/desktop/fsrm/msft-fsrmscheduledtask">MSFT_FSRMScheduledTask</a> WMI class to manage
scheduled tasks.]
Creates a scheduled task that is used to trigger a report job.
## -parameters
### -param taskName [in]
The name of a <a href="/windows/desktop/TaskSchd/task-scheduler-start-page">Task Scheduler</a>
task to create. The string is limited to 230 characters.
### -param namespacesSafeArray [in]
A <b>VARIANT</b> that contains a <b>SAFEARRAY</b> of local
directory paths to verify (see Remarks). Each element of the array is a variant of type
<b>VT_BSTR</b>. Use the <b>bstrVal</b> member of the variant to set the
path.
### -param serializedTask [in]
An XML string that defines the Task Scheduler job. For details, see
<a href="/windows/desktop/TaskSchd/task-scheduler-schema">Task Scheduler Schema</a>.
## -returns
The method returns the following return values.
## -remarks
To run a report job on a schedule, the value of the <i>taskName</i> parameter and the value
of the <a href="/previous-versions/windows/desktop/api/fsrmreports/nf-fsrmreports-ifsrmreportjob-get_task">IFsrmReportJob::Task</a> property must be the
same.
Specify the same namespaces for this method that you specified for the
<a href="/previous-versions/windows/desktop/api/fsrmreports/nf-fsrmreports-ifsrmreportjob-get_namespaceroots">IFsrmReportJob::NamespaceRoots</a> property.
This method validates the namespace paths. For validation details, see the Remarks section of
<a href="/previous-versions/windows/desktop/api/fsrmreports/nf-fsrmreports-ifsrmreportscheduler-verifynamespaces">VerifyNamespaces</a>.
To generate the XML, you can use the Task Scheduler v2.0 interfaces to define the scheduled task; however, the
task definition must be v1.0 compatible. (Use the Task Scheduler API to define the task but not to register the
task—this method registers the task.) After defining the task, access the
<a href="/windows/desktop/api/taskschd/nf-taskschd-itaskdefinition-get_xmltext">ITaskDefinition::XmlText</a> property to get
the XML.
Note that FSRM ignores triggers in the XML that FSRM does not support. For the "MONTHLYDOW"
trigger, you cannot use the V2 extensions. For example, if you specify "WeeksOfMonth", you can
specify only one week of the month and it cannot be the fifth week. Also, for "DaysOfWeek", you
can specify only one day.
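For orientation, a v1.0-compatible task definition of the kind this method accepts has roughly the following shape (illustrative fragment — the trigger, command, and argument values are placeholders, not an exact FSRM task; see the Task Scheduler Schema link above for the authoritative format):

```xml
<Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <Triggers>
    <CalendarTrigger>
      <StartBoundary>2012-01-01T02:00:00</StartBoundary>
      <ScheduleByWeek>
        <DaysOfWeek>
          <Sunday />
        </DaysOfWeek>
        <WeeksInterval>1</WeeksInterval>
      </ScheduleByWeek>
    </CalendarTrigger>
  </Triggers>
  <Actions Context="Author">
    <Exec>
      <Command>%SystemRoot%\System32\storrept.exe</Command>
      <Arguments>reports generate /scheduled /task:"MyReportTask"</Arguments>
    </Exec>
  </Actions>
</Task>
```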
#### Examples
For an example, see
<a href="/previous-versions/windows/desktop/fsrm/scheduling-a-report-job">Scheduling a Report Job</a>.
<div class="code"></div>
## -see-also
<a href="/previous-versions/windows/desktop/fsrm/fsrmreportscheduler">FsrmReportScheduler</a>
<a href="/previous-versions/windows/desktop/api/fsrmreports/nn-fsrmreports-ifsrmreportscheduler">IFsrmReportScheduler</a>
<h1 align="center">Memory Game 📚 </h1>

### :bulb: About the project
<p>A simple memory game built with HTML5, CSS3, and plain JavaScript.</p>
---
title: Improve operational excellence with Advisor
description: Use Azure Advisor to improve and achieve operational excellence across your Azure subscriptions.
ms.topic: article
ms.date: 10/24/2019
ms.openlocfilehash: 63e88129a7418e82ea13429c33d8735e96616476
ms.sourcegitcommit: 7dacbf3b9ae0652931762bd5c8192a1a3989e701
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/16/2020
ms.locfileid: "92122617"
---
# <a name="achieve-operational-excellence-by-using-azure-advisor"></a>Achieve operational excellence by using Azure Advisor
Azure Advisor's operational excellence recommendations can help you with:
- Process and workflow efficiency.
- Resource manageability.
- Deployment best practices.
You can find these recommendations on the **Operational Excellence** tab of the Advisor dashboard.
## <a name="create-azure-service-health-alerts-to-be-notified-when-azure-problems-affect-you"></a>Create Azure Service Health alerts to be notified when Azure problems affect you
We recommend that you set up Azure Service Health alerts so you're notified when Azure problems affect you. [Azure Service Health](https://azure.microsoft.com/features/service-health/) is a free service that provides personalized guidance and support when you're affected by an Azure service issue. Advisor identifies subscriptions that don't have alerts configured and recommends configuring them.
## <a name="design-your-storage-accounts-to-prevent-reaching-the-maximum-subscription-limit"></a>Design your storage accounts to prevent reaching the maximum subscription limit
An Azure region can support a maximum of 250 storage accounts per subscription. If you reach that limit, you won't be able to create any more storage accounts in that subscription/region combination. Advisor checks your subscriptions and provides recommendations to help you design for fewer storage accounts in any subscription/region combination that is close to the limit.
## <a name="ensure-you-have-access-to-azure-cloud-experts-when-you-need-it"></a>Ensure you have access to Azure cloud experts when you need it
When you run a business-critical workload, it's essential to have access to technical support when you need it. Advisor identifies potentially business-critical subscriptions whose support plan doesn't include technical support, and recommends upgrading to an option that does.
## <a name="delete-and-re-create-your-pool-to-remove-a-deprecated-internal-component"></a>Delete and re-create your pool to remove a deprecated internal component
If your pool uses a deprecated internal component, delete and re-create the pool to improve stability and performance.
## <a name="repair-invalid-log-alert-rules"></a>Repair invalid log alert rules
Azure Advisor detects alert rules whose condition section contains invalid queries. You can create log alert rules in Azure Monitor and use them to run analytics queries at specified intervals. The query results determine whether an alert should fire. Analytics queries can become invalid over time because of changes to the resources, tables, or commands they reference. Advisor recommends that you correct the query in the alert rule to prevent it from being automatically disabled, ensuring full monitoring coverage of your resources in Azure. [Learn more about troubleshooting alert rules.](../azure-monitor/platform/alerts-troubleshoot-log.md)
## <a name="use-azure-policy-recommendations"></a>Use Azure Policy recommendations
Azure Policy is an Azure service that you can use to create, assign, and manage policies. These policies enforce rules and effects on your resources. The following Azure Policy recommendations can help you achieve operational excellence:
**Manage tags.** This policy adds or replaces the specified tag and value when a resource is created or updated. You can remediate existing resources by triggering a remediation task. This policy doesn't modify tags on resource groups.
**Enforce geo-compliance requirements.** This policy lets you restrict the locations that your organization can specify when deploying resources.
**Specify allowed virtual machine SKUs for deployments.** This policy lets you specify a set of VM SKUs that your organization can deploy.
**Enforce *Audit VMs that do not use managed disks*.**
**Enable *Inherit a tag from the resource group*.** This policy adds or replaces the specified tag and value from the parent resource group when a resource is created or updated. You can remediate existing resources by triggering a remediation task.
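As a concrete sketch, a tag policy rule of the kind described above generally has the following shape (illustrative JSON — the parameter names and effect details are placeholders, not a verbatim built-in policy definition):

```json
{
  "if": {
    "field": "[concat('tags[', parameters('tagName'), ']')]",
    "exists": "false"
  },
  "then": {
    "effect": "append",
    "details": [
      {
        "field": "[concat('tags[', parameters('tagName'), ']')]",
        "value": "[parameters('tagValue')]"
      }
    ]
  }
}
```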
Advisor recommends a few individual Azure policies that help customers achieve operational excellence by adopting best practices. If a customer decides to assign a recommended policy, we remove the recommendation. If the customer later decides to remove the policy, Advisor continues to suppress the recommendation, because we interpret the removal as a strong signal that:
1. The customer removed the policy because, despite Advisor's recommendation, it doesn't apply to their specific use case.
2. Having assigned and then removed the policy, the customer is familiar with it and can assign or remove it again as needed, without assistance, should it become relevant to their use case. If the customer finds it beneficial to assign the same policy again, they can do so in Azure Policy without needing a recommendation in Advisor. Note that this logic applies specifically to the policy recommendations in the Operational Excellence category. These rules don't apply to security recommendations.
## <a name="no-validation-environment-enabled"></a>No validation environment enabled
Azure Advisor has determined that you don't have a validation environment enabled in the current subscription. When creating your host pools, you selected \"No\" for \"Validation environment\" on the properties tab. Having at least one host pool with a validation environment enabled ensures business continuity for Windows Virtual Desktop deployments through early detection of potential issues. [Learn more](../virtual-desktop/create-validation-host-pool.md)
## <a name="ensure-production-non-validation-environment-to-benefit-from-stable-functionality"></a>Ensure production (non-validation) environment to benefit from stable functionality
Azure Advisor has detected that too many of your host pools have the validation environment enabled. For validation environments to best serve their purpose, you should have at least one, but never more than half, of your host pools in a validation environment. By maintaining a healthy balance between host pools with the validation environment enabled and those with it disabled, you can take full advantage of the multistage deployments that Windows Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select \"No\" next to the \"Validation environment\" setting.
## <a name="enable-traffic-analytics-to-view-insights-into-traffic-patterns-across-azure-resources"></a>Enable Traffic Analytics to view insights into traffic patterns across Azure resources
Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in Azure. Traffic Analytics analyzes Network Watcher network security group flow logs to provide insights into traffic flow. With Traffic Analytics, you can see which parts of the network consume the most bandwidth across Azure and non-Azure deployments, detect open ports, protocols, and malicious flows in your environment, and optimize your network deployment for better performance. You can process flow logs at 10-minute or 60-minute intervals, giving you faster analytics on your traffic. It's a good practice to enable Traffic Analytics for your Azure resources.
## <a name="next-steps"></a>Next steps
To learn more about Advisor recommendations, see the following resources:
* [Introduction to Advisor](advisor-overview.md)
* [Get started](advisor-get-started.md)
* [Advisor cost recommendations](advisor-cost-recommendations.md)
* [Advisor performance recommendations](advisor-performance-recommendations.md)
* [Advisor reliability recommendations](advisor-high-availability-recommendations.md)
* [Advisor security recommendations](advisor-security-recommendations.md)
* [Advisor REST API](/rest/api/advisor/)
# WebSharper.Community.PowerBI
WebSharper extension for PowerBI (https://github.com/Microsoft/PowerBI-JavaScript)
# Documentation and samples
1. Look at Client.Main() in WebSharper.Community.PowerBI.Sample to see how to use the plugin.
2. Look at App.config in WebSharper.Community.PowerBI.Sample to see how to get an accessToken to run the sample report, which is available from https://github.com/Microsoft/PowerBI-JavaScript/wiki (chapter "Sample REST API for embedding reports") - direct link from the wiki: https://powerbiembedapinode.azurewebsites.net/api/reports/c52af8ab-0468-4165-92af-dc39858d66ad
] | null | null | null | ---
layout: post
status: publish
published: true
title: Currency Converter updated!
wordpress_id: 1727
wordpress_url: http://www.pedrolamas.com/?p=1727
date: 2011-03-10 15:53:23.000000000 +00:00
categories:
- Mobilidade
tags:
- Microsoft
- Windows Phone
- WP7
- WP7Dev
- Coding4Fun
- Marketplace
- Currency Converter
---
I've just confirmed that [Currency Converter](tag/currency-converter/), the application I built in collaboration with [Coding4Fun](http://www.coding4fun.com/), now has its update available in the Windows Phone 7 Marketplace!

And what's new in this version? Well, it is faster and lighter on data traffic, since it now features a caching system for the conversion rates it uses!

You can find the updated [source code](http://currency.codeplex.com/) on [CodePlex](http://www.codeplex.com/), and an article about the development of this new version is coming soon! ;)
---
title: 'Configure Storybook'
---
Storybook is configured via a folder called `.storybook`, which contains various configuration files.
<div class="aside">
Note that you can change the folder that Storybook uses by setting the `-c` flag to your `start-storybook` and `build-storybook` scripts.
</div>
## Configure your Storybook project
The main configuration file is `main.js`. This file controls the Storybook server's behavior, so you must restart Storybook’s process when you change it. It contains the following:
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-main-default-setup.js.mdx',
]}
/>
<!-- prettier-ignore-end -->
The `main.js` configuration file is a [preset](../addons/addon-types.md) and, as such, has a powerful interface, but the key fields within it are:
- `stories` - an array of globs that indicates the [location of your story files](#configure-story-loading), relative to `main.js`.
- `addons` - a list of the [addons](https://storybook.js.org/addons/) you are using.
- `webpackFinal` - custom [webpack configuration](./webpack.md#extending-storybooks-webpack-config).
- `babel` - custom [babel configuration](./babel.md).
- `framework` - framework specific configurations to help the loading and building process.
<div class="aside">
💡 Tip: Customize your default story by referencing it first in the `stories` array.
</div>
See all the [available](#using-storybook-api) fields below if you need further customization.
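Since the snippet above is pulled in from an external include, here is a minimal standalone sketch of a `main.js`; the glob, addon list, and framework value are illustrative placeholders, not part of the official snippet:

```javascript
// .storybook/main.js — a minimal sketch; paths, addons, and framework
// are placeholders to adapt to your own project
const config = {
  stories: ['../src/**/*.stories.@(js|jsx|ts|tsx)'],
  addons: ['@storybook/addon-essentials'],
  framework: '@storybook/react',
  webpackFinal: async (webpackConfig) => {
    // extend or tweak the generated webpack config here if needed
    return webpackConfig;
  },
};
module.exports = config;
```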
### Feature flags
You can also provide additional feature flags in your Storybook configuration. Below is an abridged list of the features that are currently available.
| Configuration element | Description |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `storyStoreV7` | Configures Storybook to load stories [on demand](#on-demand-story-loading), rather than during boot up. <br/> `features: { storyStoreV7: true }` |
| `buildStoriesJson` | Generates a `stories.json` file to help story loading with the on demand mode. <br/> `features: { buildStoriesJson: true }` |
| `emotionAlias` | Provides backwards compatibility for Emotion. See the [migration documentation](https://github.com/storybookjs/storybook/blob/next/MIGRATION.md#emotion11-quasi-compatibility) for context.<br/> `features: { emotionAlias: false }` |
| `babelModeV7` | Enables the new [Babel configuration](./babel.md#v7-mode) mode for Storybook. <br/> `features: { babelModeV7: true }` |
| `postcss` | Disables the implicit PostCSS warning. See the [migration documentation](https://github.com/storybookjs/storybook/blob/next/MIGRATION.md#deprecated-implicit-postcss-loader) for context. <br/> `features: { postcss: false }` |
| `modernInlineRender` | Enables Storybook's modern inline rendering mode. <br/> `features: { modernInlineRender: false }` |
## Configure story loading
By default, Storybook will load stories from your project based on a glob (pattern-matching string) in `.storybook/main.js` that matches all files in your project with the extension `.stories.*`. The intention is that you colocate a story file with the component it documents.
```
•
└── components
├── Button.js
└── Button.stories.js
```
If you want to use a different naming convention, you can alter the glob using the syntax supported by [picomatch](https://github.com/micromatch/picomatch#globbing-features).
For example, if you wanted to pull both `.md` and `.js` files from the `my-project/src/components` directory, you could write:
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-main-js-md-files.js.mdx',
]}
/>
<!-- prettier-ignore-end -->
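As a concrete illustration of that glob (the exact pattern here is an assumption about what the referenced snippet shows):

```javascript
// sketch: match both .md and .js files under my-project/src/components
const config = {
  stories: ['../my-project/src/components/*.@(js|md)'],
};
module.exports = config;
```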
### With a configuration object
Additionally, you can customize your Storybook configuration to load your stories based on a configuration object. For example, if you wanted to load your stories from a `packages` directory, you could adjust your `stories` configuration field into the following:
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-storyloading-with-custom-object.js.mdx',
]}
/>
<!-- prettier-ignore-end -->
When Storybook starts, it will look for any file with the `.stories` extension inside the `packages/stories` directory and generate the titles for your stories.
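A sketch of what such a configuration object can look like (the `titlePrefix` value is a made-up example):

```javascript
// sketch: the object form of a `stories` entry (Storybook 6.4+)
const config = {
  stories: [
    {
      directory: '../packages/stories', // where to look for story files
      files: '*.stories.*',             // which files inside that directory
      titlePrefix: 'Components',        // hypothetical prefix for generated titles
    },
  ],
};
module.exports = config;
```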
### With a directory
You can also simplify your Storybook configuration and load the stories based on a directory. For example, if you want to load all the stories inside a `packages/MyStories`, you can adjust the configuration as such:
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-storyloading-with-directory.js.mdx',
]}
/>
<!-- prettier-ignore-end -->
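For example, the directory form can be as short as this sketch:

```javascript
// sketch: load every story file found under packages/MyStories
const config = {
  stories: ['../packages/MyStories'],
};
module.exports = config;
```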
### With a custom implementation
You can also adjust your Storybook configuration and implement your custom logic for loading your stories. For example, suppose you were working on a project that includes a particular pattern that the conventional ways of loading stories could not solve, in that case, you could adjust your configuration as follows:
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-storyloading-custom-logic.js.mdx',
]}
/>
<!-- prettier-ignore-end -->
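One possible shape for such custom logic — note that `findCustomStories` below is a hypothetical helper, not part of Storybook's API:

```javascript
// sketch: `stories` can also be an async function that receives the default
// list of entries and returns a post-processed list
const findCustomStories = async () => ['../generated/**/*.stories.js']; // hypothetical helper
const config = {
  stories: async (defaultEntries) => [
    ...defaultEntries,
    ...(await findCustomStories()),
  ],
};
module.exports = config;
```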
### On-demand story loading
As your Storybook grows in size, it gets challenging to load all of your stories in a performant way, slowing down the loading times and yielding a large bundle. Starting with Storybook 6.4, you can optimize your story loading by enabling the `storyStoreV7` feature flag in your configuration as follows:
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-on-demand-story-loading.js.mdx',
]}
/>
<!-- prettier-ignore-end -->
Once you've restarted your Storybook, you'll see an almost immediate performance gain in your loading times and also a decrease in the generated bundle.
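The flag lives under the `features` key of `main.js`; a sketch (pairing it with `buildStoriesJson` is optional):

```javascript
// sketch: opting in to on-demand story loading (Storybook 6.4+)
const config = {
  features: {
    storyStoreV7: true,
    buildStoriesJson: true, // optional companion flag for static builds
  },
};
module.exports = config;
```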
#### Known limitations
This feature is experimental, and it has some limitations on what you can and cannot do in your stories files. If you plan to use it, you'll need to take into consideration the following limitations:
- [CSF formats](../api/csf.md) from version 1 to version 3 are supported. The `storiesOf` construct is not.
- Custom `storySort` functions are allowed based on a restricted API.
## Configure your project with TypeScript
If you need to, you can also configure your Storybook using TypeScript. To get started, rename your `.storybook/main.js` to `.storybook/main.ts` and restart your Storybook. Then add a `.babelrc` file inside your project that includes the following Babel presets:
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-ts-config-babelrc.js.mdx',
]}
/>
<!-- prettier-ignore-end -->
### Using Storybook API
You can also use Storybook's API to configure your project with TypeScript. Under the hood, it mirrors the exact configuration you get by default. Below is an abridged Storybook configuration with TypeScript and additional information about each configuration element.
<!-- prettier-ignore-start -->
<CodeSnippets
paths={[
'common/storybook-main-default-setup.ts.mdx',
]}
/>
<!-- prettier-ignore-end -->
| Configuration element | Description |
| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `stories` | The array of globs that indicates the [location of your story files](#configure-story-loading), relative to `main.ts` |
| `staticDirs` | Sets a list of directories of [static files](./images-and-assets.md#serving-static-files-via-storybook-configuration) to be loaded by Storybook <br/> `staticDirs:['../public']` |
| `addons` | Sets the list of [addons](https://storybook.js.org/addons/) loaded by Storybook <br/> `addons:['@storybook/addon-essentials']` |
| `typescript` | Configures how Storybook handles [TypeScript files](./typescript.md) <br/> `typescript: { check: false, checkOptions: {} }` |
| `framework` | Configures Storybook based on a set of framework-specific settings <br/> `framework:'@storybook/svelte'` |
| `core` | Configures Storybook's internal features.<br/> `core: { builder: 'webpack5' }` |
| `features` | Enables Storybook's additional features.<br/> See table below for a list of available features `features: { storyStoreV7: true }` |
| `refs` | Configures [Storybook composition](../sharing/storybook-composition.md) <br/> `refs:{ example: { title: 'ExampleStorybook', url:'https://your-url.com' } }` |
| `logLevel` | Configures Storybook's logs in the browser terminal. Useful for debugging <br/> `logLevel: 'debug'` |
| `webpackFinal` | Customize Storybook's [Webpack](./webpack.md) setup <br/> `webpackFinal: async (config:any) => { return config; }` |
## Configure story rendering
To control the way stories are rendered and add global [decorators](../writing-stories/decorators.md#global-decorators) and [parameters](../writing-stories/parameters.md#global-parameters), create a `.storybook/preview.js` file. This is loaded in the Canvas tab, the “preview” iframe that renders your components in isolation. Use `preview.js` for global code (such as [CSS imports](../get-started/setup.md#render-component-styles) or JavaScript mocks) that applies to all stories.
The `preview.js` file can be an ES module and export the following keys:
- `decorators` - an array of global [decorators](../writing-stories/decorators.md#global-decorators)
- `parameters` - an object of global [parameters](../writing-stories/parameters.md#global-parameters)
- `globalTypes` - definition of [globalTypes](../essentials/toolbars-and-globals.md#global-types-and-the-toolbar-annotation)
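A minimal sketch of what `preview.js` can export; real preview files typically use ES module named exports (`export const decorators = …`), and the values below are illustrative only:

```javascript
// sketch of .storybook/preview.js contents (CommonJS here so the sketch is
// self-contained; real preview files usually use ES module named exports)
const decorators = [
  (Story) => Story(), // wrap every story; in practice this is framework-specific
];
const parameters = {
  layout: 'centered', // assumption: an example global parameter
};
const globalTypes = {
  theme: { name: 'Theme', description: 'Global theme', defaultValue: 'light' },
};
module.exports = { decorators, parameters, globalTypes };
```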
If you’re looking to change how to order your stories, read about [sorting stories](../writing-stories/naming-components-and-hierarchy.md#sorting-stories).
## Configure Storybook’s UI
To control the behavior of Storybook’s UI (the **“manager”**), you can create a `.storybook/manager.js` file.
This file does not have a specific API but is the place to set [UI options](./features-and-behavior.md) and to configure Storybook’s [theme](./theming.md).
---
title: Expression Evaluation in Break Mode | Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
helpviewer_keywords:
- break mode, expression evaluation
- debugging [Debugging SDK], expression evaluation
- expression evaluation, break mode
ms.assetid: 34fe5b58-15d5-4387-a266-72120f90a4b6
author: gregvanl
ms.author: gregvanl
manager: jillfra
ms.workload:
- vssdk
ms.openlocfilehash: 852b4284bbbf59ce8f3964d98f0464e583f1444f
ms.sourcegitcommit: 2193323efc608118e0ce6f6b2ff532f158245d56
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 01/25/2019
ms.locfileid: "55028672"
---
# <a name="expression-evaluation-in-break-mode"></a>Ausdrucksauswertung im Unterbrechungsmodus
Der folgende Abschnitt beschreibt den Prozess, der auftritt, wenn der Debugger im Unterbrechungsmodus befindet und muss die Auswertung von Ausdrücken durchzuführen.
## <a name="expression-evaluation-process"></a>Ausdruck Evaluierungsprozesses
Es folgen die grundlegenden Schritte bei der Auswertung eines Ausdrucks:
1. Ruft die Sitzungs-Debug-Manager (SDM) [IDebugStackFrame2::GetExpressionContext](../../extensibility/debugger/reference/idebugstackframe2-getexpressioncontext.md) zum Abrufen einer Ausdruck-Kontext-Schnittstelle [IDebugExpressionContext2](../../extensibility/debugger/reference/idebugexpressioncontext2.md).
2. Klicken Sie dann aufruft, das SDM [IDebugExpressionContext2::ParseText](../../extensibility/debugger/reference/idebugexpressioncontext2-parsetext.md) mit der Zeichenfolge analysiert werden.
3. Falls ParseText S_OK zurückgegeben wird, wird die Ursache des Fehlers zurückgegeben.
– andernfalls:
Wenn ParseText S_OK zurückgibt, das SDM kann, rufen Sie entweder [IDebugExpression2::EvaluateSync](../../extensibility/debugger/reference/idebugexpression2-evaluatesync.md) oder [IDebugExpression2::EvaluateAsync](../../extensibility/debugger/reference/idebugexpression2-evaluateasync.md) um einen endgültigen Wert aus den analysierten Ausdruck abzurufen.
- Bei Verwendung `IDebugExpression2::EvaluateSync`, die bestimmten Rückrufschnittstelle kommuniziert der Auswertung des laufenden Prozesses. Der endgültige Wert wird zurückgegeben, eine [IDebugProperty2](../../extensibility/debugger/reference/idebugproperty2.md) Schnittstelle.
- Bei Verwendung `IDebugExpression2::EvaluateAsync`, die bestimmten Rückrufschnittstelle kommuniziert der Auswertung des laufenden Prozesses. Nachdem die Auswertung abgeschlossen ist, sendet EvaluateAsync ein [IDebugExpressionEvaluationCompleteEvent2](../../extensibility/debugger/reference/idebugexpressionevaluationcompleteevent2.md) Schnittstelle über den Rückruf. Mit dieser Ereignisschnittstelle, der endgültige Wert in den Ergebnissen [GetResult](../../extensibility/debugger/reference/idebugexpressionevaluationcompleteevent2-getresult.md).
## <a name="see-also"></a>Siehe auch
[Aufrufen von debuggerereignissen](../../extensibility/debugger/calling-debugger-events.md) | 70.139535 | 555 | 0.812997 | deu_Latn | 0.855896 |
# COBOL - PDF
Generating PDF files using COBOL.
## Getting Started
Compiled and tested with:
Micro Focus Object COBOL (32-bit)
Version 4.0.38 Copyright (C) 1984-1998 Micro Focus Ltd.
### Installing
See sample.cob and the manual in doc folder.
## Authors
* [Edgar Olavo Garcia Dantas (eodantas@gmail.com)](https://github.com/eodantas)
## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
## Acknowledgments
* Thanks to [fpdf.org](www.fpdf.org) (where the project was based)
| 20.807692 | 98 | 0.735675 | eng_Latn | 0.589356 |
# Summary

- [a](a.md)
- [b](b.md)
# PHPExcel - OpenXML - Read, Write and Create spreadsheet documents in PHP - Spreadsheet engine
PHPExcel is a library written in pure PHP and providing a set of classes that allow you to write to and read from different spreadsheet file formats, like Excel (BIFF) .xls, Excel 2007 (OfficeOpenXML) .xlsx, CSV, Libre/OpenOffice Calc .ods, Gnumeric, PDF, HTML, ... This project is built around Microsoft's OpenXML standard and PHP.
Master: [](http://travis-ci.org/PHPOffice/PHPExcel)
Develop: [](http://travis-ci.org/PHPOffice/PHPExcel)
[](https://gitter.im/PHPOffice/PHPExcel)
## File Formats supported
### Reading
* BIFF 5-8 (.xls) Excel 95 and above
* Office Open XML (.xlsx) Excel 2007 and above
* SpreadsheetML (.xml) Excel 2003
* Open Document Format/OASIS (.ods)
* Gnumeric
* HTML
* SYLK
* CSV
### Writing
* BIFF 8 (.xls) Excel 95 and above
* Office Open XML (.xlsx) Excel 2007 and above
* HTML
* CSV
* PDF (using either the tcPDF, DomPDF or mPDF libraries, which need to be installed separately)
## Requirements
* PHP version 5.2.0 or higher
* PHP extension php_zip enabled (required if you need PHPExcel to handle .xlsx .ods or .gnumeric files)
* PHP extension php_xml enabled
* PHP extension php_gd2 enabled (optional, but required for exact column width autocalculation)
*Note:* PHP 5.6.29 has [a bug](https://bugs.php.net/bug.php?id=735300) that
prevents SQLite3 caching from working correctly. Use a newer (or older) version
of PHP if you need SQLite3 caching.
## Want to contribute?
PHPExcel development for the next version has moved under its new name, PhpSpreadsheet. Please head over to [PhpSpreadsheet](https://github.com/PHPOffice/PhpSpreadsheet#want-to-contribute) to contribute patches and features.
## License
PHPExcel is licensed under [LGPL (GNU LESSER GENERAL PUBLIC LICENSE)](https://github.com/PHPOffice/PHPExcel/blob/master/license.md)
| 45.808511 | 332 | 0.754296 | eng_Latn | 0.662838 |
---
title: 'CA2220: Finalizers should call base class finalizers'
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- CA2220
- FinalizersShouldCallBaseClassFinalizer
helpviewer_keywords:
- CA2220
- FinalizersShouldCallBaseClassFinalizer
ms.assetid: 48329f42-170d-45ee-a381-e33f55a240c5
author: gewarren
ms.author: gewarren
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: 034f80c9198ab098070e6642f4a4d96cff1744c5
ms.sourcegitcommit: 21d667104199c2493accec20c2388cf674b195c3
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/08/2019
ms.locfileid: "55927645"
---
# <a name="ca2220-finalizers-should-call-base-class-finalizer"></a>CA2220: Finalizer sollten Basisklassen-Finalizer aufrufen.
|||
|-|-|
|TypeName|FinalizersShouldCallBaseClassFinalizer|
|CheckId|CA2220|
|Kategorie|Microsoft.Usage|
|Unterbrechende Änderung|Nicht unterbrechende Änderung|
## <a name="cause"></a>Ursache
Ein Typ, überschreibt <xref:System.Object.Finalize%2A?displayProperty=fullName> ruft nicht die <xref:System.Object.Finalize%2A> -Methode in der Basisklasse.
## <a name="rule-description"></a>Regelbeschreibung
Der Abschluss muss durch die Vererbungshierarchie weitergegeben werden. Um dies zu gewährleisten, müssen Typen ihrer Basisklasse aufrufen <xref:System.Object.Finalize%2A> innerhalb ihrer eigenen <xref:System.Object.Finalize%2A> Methode. Den Aufruf von den Basisklassen-Finalizer wird von der C#-Compiler automatisch hinzugefügt.
## <a name="how-to-fix-violations"></a>Behandeln von Verstößen
Um einen Verstoß gegen diese Regel zu beheben, rufen Sie des Basistyps <xref:System.Object.Finalize%2A> aus Ihrem <xref:System.Object.Finalize%2A> Methode.
## <a name="when-to-suppress-warnings"></a>Wenn Sie Warnungen unterdrücken
Unterdrücken Sie keine Warnung dieser Regel. Einige Compiler, die die common Language Runtime als Ziel legen Sie einen Aufruf der Basistyp des Finalizers in die Microsoft intermediate Language (MSIL). Wenn eine Warnung dieser Regel gemeldet wird, wird der Compiler kein Aufruf eingefügt, und müssen Sie es an Ihrem Code hinzufügen.
## <a name="example"></a>Beispiel
Das folgende Visual Basic-Beispiel zeigt ein `TypeB` ordnungsgemäß aufruft, die <xref:System.Object.Finalize%2A> -Methode in der Basisklasse.
[!code-vb[FxCop.Usage.IDisposableBaseCalled#1](../code-quality/codesnippet/VisualBasic/ca2220-finalizers-should-call-base-class-finalizer_1.vb)]
## <a name="see-also"></a>Siehe auch
- [Dispose-Muster](/dotnet/standard/design-guidelines/dispose-pattern) | 44.245614 | 331 | 0.802934 | deu_Latn | 0.875046 |
Be nice. Fork and PR. No promises on review time.
---
UID: NF:bluetoothleapis.BluetoothGATTGetDescriptors
title: BluetoothGATTGetDescriptors function (bluetoothleapis.h)
description: Gets all the descriptors available for the specified characteristic.
helpviewer_keywords: ["BluetoothGATTGetDescriptors","BluetoothGATTGetDescriptors function [Bluetooth Devices]","bltooth.bluetoothgattgetdescriptors","bluetoothleapis/BluetoothGATTGetDescriptors"]
old-location: bltooth\bluetoothgattgetdescriptors.htm
tech.root: bltooth
ms.assetid: C4D51362-5D4E-45CC-8E29-10B201B5673C
ms.date: 12/05/2018
ms.keywords: BluetoothGATTGetDescriptors, BluetoothGATTGetDescriptors function [Bluetooth Devices], bltooth.bluetoothgattgetdescriptors, bluetoothleapis/BluetoothGATTGetDescriptors
req.header: bluetoothleapis.h
req.include-header:
req.target-type: Universal
req.target-min-winverclnt: Supported in Windows 8 and later versions of Windows.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: BluetoothApis.lib
req.dll: BluetoothAPIs.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- BluetoothGATTGetDescriptors
- bluetoothleapis/BluetoothGATTGetDescriptors
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- BluetoothAPIs.dll
- Ext-MS-Win-Bluetooth-APIs-l1-1-0.dll
api_name:
- BluetoothGATTGetDescriptors
---
# BluetoothGATTGetDescriptors function
## -description
The <b>BluetoothGATTGetDescriptors</b> function gets all the descriptors available for the specified characteristic.
## -parameters
### -param hDevice [in]
Handle to the Bluetooth device or service. If a service handle is passed, then the service must be the grandparent of the descriptor.
### -param Characteristic [in]
Pointer to <a href="/windows/desktop/api/bthledef/ns-bthledef-bth_le_gatt_characteristic">BTH_LE_GATT_CHARACTERISTIC</a> structure containing the parent characteristic of the descriptors to be retrieved.
### -param DescriptorsBufferCount [in]
The number of elements allocated for the <i>DescriptorsBuffer</i> parameter.
### -param DescriptorsBuffer [out, optional]
Pointer to buffer containing a <a href="/windows/desktop/api/bthledef/ns-bthledef-bth_le_gatt_descriptor">BTH_LE_GATT_DESCRIPTOR</a> structure into which to return descriptors.
### -param DescriptorsBufferActual [out]
Pointer to buffer into which the actual number of descriptors were returned in the <i>DescriptorsBuffer</i> parameter.
### -param Flags [in]
Flags to modify the behavior of <b>BluetoothGATTGetDescriptors</b>:
<table>
<tr>
<th>Flag</th>
<th>Description</th>
</tr>
<tr>
<td>
<b>BLUETOOTH_GATT_FLAG_NONE</b>
</td>
<td>
The client does not have specific GATT requirements (default).
</td>
</tr>
</table>
## -returns
This function returns the following values:
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>S_OK</b></dt>
</dl>
</td>
<td width="60%">
The operation completed successfully.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>ERROR_MORE_DATA</b></dt>
</dl>
</td>
<td width="60%">
The buffer parameter is NULL and the number of items available is being returned instead.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>ERROR_ACCESS_DENIED</b></dt>
</dl>
</td>
<td width="60%">
Returned if both a parent service and a service handle are provided and the service hierarchy does not roll up to the provided parent service handle.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>ERROR_INVALID_PARAMETER</b></dt>
</dl>
</td>
<td width="60%">
One of the following conditions occurred:
<ul>
<li><i>DescriptorsBuffer</i> is <b>NULL</b>, and <i>DescriptorsBufferCount</i> is 0.</li>
<li><i>DescriptorsBuffer</i> is non-<b>NULL</b>, but <i>DescriptorsBufferCount</i> is <b>NULL</b>.</li>
<li><i>DescriptorsBuffer</i> is non-<b>NULL</b>, and <i>DescriptorsBufferCount</i> is 0.</li>
</ul>
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>ERROR_INVALID_USER_BUFFER</b></dt>
</dl>
</td>
<td width="60%">
A buffer is specified, but the buffer count
size is smaller than what is required, in bytes.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>ERROR_BAD_COMMAND</b></dt>
</dl>
</td>
<td width="60%">
The current data in the cache appears to be
inconsistent, and is leading to internal errors.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>ERROR_NO_SYSTEM_RESOURCES</b></dt>
</dl>
</td>
<td width="60%">
The operation ran out of memory.
</td>
</tr>
</table>
## -remarks
Returned descriptors are cached upon successful retrieval of descriptors from the device directly. Unless a service-change event is received, the list of returned descriptors is not expected to change.

Profile drivers should pre-allocate a sufficiently large buffer for the array of
descriptors to be returned in. Callers can determine the necessary buffer size by passing a non-<b>NULL</b> value in <i>DescriptorsBufferActual</i> and <b>NULL</b> in <i>DescriptorsBuffer</i>.

Do not modify the returned descriptor structure,
and then use the modified structure in subsequent function calls. Behavior is undefined
if the caller does this.

The parent characteristic must be present in the
cache, otherwise the function will fail. The parent service must be a service returned by either <a href="/windows/desktop/api/bluetoothleapis/nf-bluetoothleapis-bluetoothgattgetservices">BluetoothGATTGetServices</a> or
<a href="/windows/desktop/api/bluetoothleapis/nf-bluetoothleapis-bluetoothgattgetincludedservices">BluetoothGATTGetIncludedServices</a>.
<b>Example</b>
```cpp
////////////////////////////////////////////////////////////////////////////
// Determine Descriptor Buffer Size
////////////////////////////////////////////////////////////////////////////
GetDescriptors:
hr = BluetoothGATTGetDescriptors(
hCurrService,
currGattChar,
0,
NULL,
&descriptorBufferSize,
BLUETOOTH_GATT_FLAG_NONE);
if (HRESULT_FROM_WIN32(ERROR_MORE_DATA) != hr) {
PrintHr("BluetoothGATTGetDescriptors - Buffer Size", hr);
goto Done; // Allow continuation
}
if (descriptorBufferSize > 0) {
pDescriptorBuffer = (PBTH_LE_GATT_DESCRIPTOR)
malloc(descriptorBufferSize
* sizeof(BTH_LE_GATT_DESCRIPTOR));
if (NULL == pDescriptorBuffer) {
printf("pDescriptorBuffer out of memory\r\n");
goto Done;
} else {
RtlZeroMemory(pDescriptorBuffer, descriptorBufferSize);
}
////////////////////////////////////////////////////////////////////////////
// Retrieve Descriptors
////////////////////////////////////////////////////////////////////////////
hr = BluetoothGATTGetDescriptors(
hCurrService,
currGattChar,
descriptorBufferSize,
pDescriptorBuffer,
&numDescriptors,
BLUETOOTH_GATT_FLAG_NONE);
if (S_OK != hr) {
PrintHr("BluetoothGATTGetDescriptors - Actual Data", hr);
goto Done;
}
if (numDescriptors != descriptorBufferSize) {
printf("buffer size and buffer size actual size mismatch\r\n");
goto Done;
}
}
```
## -see-also
<a href="/windows/desktop/api/bthledef/ns-bthledef-bth_le_gatt_characteristic">BTH_LE_GATT_CHARACTERISTIC</a>
<a href="/windows/desktop/api/bthledef/ns-bthledef-bth_le_gatt_descriptor">BTH_LE_GATT_DESCRIPTOR</a> | 29.229927 | 224 | 0.655762 | eng_Latn | 0.716427 |
# Path Planning Project
[](http://www.udacity.com/drive)
[](https://www.codacy.com/app/tech.svg/CarND-T3P1?utm_source=github.com&utm_medium=referral&utm_content=sgalkin/CarND-T3P1&utm_campaign=Badge_Grade)
[](https://www.codefactor.io/repository/github/sgalkin/carnd-t3p1)
[](https://bettercodehub.com/)
[](https://semaphoreci.com/sgalkin/carnd-t3p1)
[](https://codecov.io/gh/sgalkin/CarND-T3P1)
[](https://circleci.com/gh/sgalkin/CarND-T3P1)
---
## Overview
This project implements path planning algorithm for Udacity self-driving car
simulator.
### Goals
In this project the goal is to safely navigate around a virtual highway with
other traffic. Using the car's localization and sensor fusion data. There is
also a sparse map list of waypoints around the highway.
Acceptance criteria:
1. The car should try to go as close as possible to the 50 MPH speed limit,
which means passing slower traffic when possible.
2. The car should avoid hitting other cars at all cost.
3. The car should stay inside the marked road lanes at all times,
unless going from one lane to another.
4. The car should be able to make one complete loop around the 6946m highway.
5. The car should not experience total acceleration over 10 m/s^2
6. the car should not experience jerk that is greater than 10 m/s^3
Other vehicles on the highway:
1. Are driving ±10 MPH of the 50 MPH speed limit.
2. Might try to change lanes too.
## Demo
`./t3p1`
[Video](https://vimeo.com/260849645)
---
## Usage
```sh
Usage:
t3p1 [options]
Available options:
-w, --waypoints File with route waypoints (defaults: data/highway_map.csv)
-p, --port Port to use (default: 4567)
-h, --help print this help screen
```
---
## Dependencies
### Runtime
* [Term 3 Simulator](https://github.com/udacity/self-driving-car-sim/releases)
### Tools
* `cmake` >= 3.5
* All OSes: [click here for installation instructions](https://cmake.org/install/)
* `make` >= 4.1 (Linux, Mac), 3.81 (Windows)
* Linux: make is installed by default on most Linux distros
* Mac: [install Xcode command line tools to get make](https://developer.apple.com/xcode/features/)
* Windows: [Click here for installation instructions](http://gnuwin32.sourceforge.net/packages/make.htm)
* `gcc/g++` >= 5.4, clang
* Linux: gcc/g++ is installed by default on most Linux distros
* Mac: same deal as make - [install Xcode command line tools](https://developer.apple.com/xcode/features/)
* Windows: recommend using [MinGW](http://www.mingw.org/)
### Libraries not included into the project
* [`uWebSocketIO`](https://github.com/uWebSockets/uWebSockets) == v0.13.0
* Ubuntu/Debian: the repository includes [`install-ubuntu.sh`](./scripts/install-ubuntu.sh) that can be used to set
up and install `uWebSocketIO`
* Mac: the repository includes [`install-mac.sh`](./scripts/install-mac.sh)
that can be used to set up and install `uWebSocketIO`
* Windows: use either Docker, VMware, or even [Windows 10 Bash on Ubuntu](https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/)
### Libraries included into the project
* [`Eigen`](http://eigen.tuxfamily.org/index.php?title=Main_Page) - C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms
* [`JSON for Modern C++`](https://github.com/nlohmann/json) - JSON parser
* [`Catch2`](https://github.com/catchorg/Catch2) - Unit-testing framework
* [`ProgramOptions.hxx`](https://github.com/Fytch/ProgramOptions.hxx) - Single-header program options parsing library for C++11
* [`nanoflann`](https://github.com/jlblancoc/nanoflann) - a C++11 header-only library for Nearest Neighbor (NN) search wih KD-trees
* [`spline.h`](http://kluge.in-chemnitz.de/opensource/spline) - Cubic Spline interpolation in C++
## Build
### Local manual build
0. Clone this repo.
1. `mkdir build`
2. `cd build`
3. `cmake .. -DCMAKE_BUILD_TYPE=Release -DLOCAL_BUILD=ON -DDOCKER_BUILD=OFF`
4. `make`
5. `make test`
### Pre-built Docker container
0. docker pull sgalkin/carnd-t3p1
### Manual build using containerized development environment
0. Clone this repo.
1. `mkdir build`
2. `cd build`
3. `cmake .. -DCMAKE_BUILD_TYPE=Release -DLOCAL_BUILD=OFF -DDOCKER_BUILD=ON`
4. `make docker-build`
5. `make docker-test`
6. `make docker-run` or `make docker-shell; ./t3p1`
---
## Map
Each waypoint in the list contains [_x_,_y_,_s_,_dx_,_dy_] values.
* _x_ and _y_ are the waypoint's map coordinate position
* _s_ value is the distance along the road to get to that waypoint in meters
* _dx_ and _dy_ values define the unit normal vector pointing outward of
the highway loop
The highway's waypoints loop around so the frenet _s_ value,
distance along the road, goes from _0_ to _6945.554_.
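Because the track loops, any s value past the end wraps around. A small helper makes this explicit; it is an illustrative Python sketch, not the repository's actual C++ code:

```python
MAX_S = 6945.554  # total track length in meters, from the map description above

def wrap_s(s):
    """Normalize a frenet s value onto the looped highway."""
    return s % MAX_S
```

For example, a point 54.446 m past the end of the loop maps back near the start.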

## Protocol
The project uses the `uWebSocketIO` request-response protocol in communicating with the simulator.
_INPUT_: values provided by the simulator to the C++ program
```json
{
"x": "(float) - The car's x position in map coordinates",
"y": "(float) - The car's y position in map coordinates",
"yaw": "(float) - The car's yaw angle in the map",
"speed": "(float) - The car's speed in MPH",
"s": "(float) - The car's s position in frenet coordinates",
"d": "(float) - The car's d position in frenet coordinates",
"previous_path_x": "(Array<float>) - The global x positions of the previous path, processed points removed",
"previous_path_y": "(Array<float>) - The global y positions of the previous path, processed points removed",
"end_path_s": "The previous list's last point's frenet s value",
"end_path_d": "The previous list's last point's frenet d value",
"sensor_fusion": [ "2D vector, one entry per detected car; each inner array contains:",
[
"<id> - car's unique ID",
"<x> - car's x position in map coordinates",
"<y> - car's y position in map coordinates",
"<vx> - car's x velocity in m/s",
"<vy> - car's y velocity in m/s",
"<s> - car's s position in frenet coordinates",
"<d> - car's d position in frenet coordinates"
]
]
}
```
_OUTPUT_: values provided by the C++ program to the simulator
```json
{
"next_x": "(Array<float>) - The global x positions of the trajectory",
"next_y": "(Array<float>) - The global y positions of the trajectory"
}
```
## Algorithm
### Constants
* Path planning horizon - _1 second_
* Points in path - _50_ (horizon / simulator tick)
* Hard speed limit - _50 MPH_
* Recommended speed - _99%_ of hard speed limit
* Comfort forward gap (distance to the obstacle in front) - _30 m_
* Comfort backward gap (distance to the obstacle behind) - _10 m_
* Minimal gap (safety limit) - _15 m_
### Fusion
* Position of each vehicle predicted for the end of current path (up to 1 second)
* Constant velocity model is used for prediction
* Closest car (if any) and its velocity associated to each lane
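The constant-velocity prediction step can be sketched as follows; this is illustrative Python, not the repository's actual C++ code:

```python
def predict_s(s, vx, vy, horizon):
    """Constant-velocity prediction of another vehicle's frenet s position.

    s       -- current s position from sensor fusion, in meters
    vx, vy  -- velocity components from sensor fusion, in m/s
    horizon -- prediction horizon in seconds (up to 1 s in this project)
    """
    speed = (vx ** 2 + vy ** 2) ** 0.5  # velocity magnitude
    return s + speed * horizon
```

For example, a car at s = 100 m moving at 5 m/s is predicted at s = 105 m one second later.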
### Reference lane
* Cost function used in order to choose lane for the next iteration
* Safety check applied for each lane - no obstacles closer than _minimal gap_
* if the safety check fails, the only possible action is to keep the lane
* Main terms of the cost function
* penalty for changing lanes (try to stay on lane)
* penalty for using leftmost lane
* forward gap size
* velocity of the car in front (if any)
* Implicit FSM used in the project in order to protect the car during lane change
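An illustrative version of such a cost function is sketched below; the weights and the 22.1 m/s recommended speed are hypothetical stand-ins, not values taken from the repository:

```python
def lane_cost(lane, current_lane, forward_gap, front_speed,
              comfort_gap=30.0, recommended_speed=22.1):
    """Toy cost combining the terms listed above; lower is better."""
    cost = 0.0
    if lane != current_lane:
        cost += 1.0  # penalty for changing lanes
    if lane == 0:
        cost += 0.5  # penalty for using the leftmost lane
    # a small forward gap and a slow car in front both raise the cost
    cost += max(0.0, comfort_gap - forward_gap) / comfort_gap
    cost += max(0.0, recommended_speed - front_speed) / recommended_speed
    return cost
```

The planner would evaluate every lane that passes the safety check and keep the one with minimal cost.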
### Reference velocity
* Slow down if obstacle too close (closer than _minimal gap_)
* Follow the car in front with its velocity
* Speed up to the recommended velocity
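The three rules above can be sketched as a single update step; the 0.1 m/s step per tick and the 22.1 m/s target are illustrative assumptions, not values from the code:

```python
def next_velocity(v, gap, front_speed, min_gap=15.0,
                  comfort_gap=30.0, target=22.1, step=0.1):
    """One reference-velocity update per simulator tick (all values in m/s)."""
    if gap < min_gap:
        return v - step  # obstacle too close: slow down
    if gap < comfort_gap:
        # follow the car in front with its velocity
        return min(v + step, front_speed) if v < front_speed else v - step
    return min(v + step, target)  # road ahead is clear: speed up
```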
### Trajectory generation
* At each step the application extends the _previous path_ with new points,
this guarantees smoothness of the path
* Cubic spline used in the project as a reference trajectory
* Velocity is constant for each step
### Control
* The car uses a perfect controller and will visit every (_x_, _y_) point
it receives in the list every _0.02_ seconds
* The units for the (_x_, _y_) points are in meters
* The spacing of the points determines the speed of the car
* The vector going from a point to the next point in the list dictates the
angle of the car.
* Acceleration both in the tangential and normal directions is measured along
with the _jerk_ (the rate of change of total acceleration)
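The relation between point spacing and speed follows directly from the 0.02 s tick; a small Python sketch (illustrative only):

```python
TICK = 0.02    # the simulator consumes one path point every 20 ms
HORIZON = 1.0  # planning horizon used in this project, in seconds

N_POINTS = round(HORIZON / TICK)  # 50 points per planned path

def point_spacing(speed_mps):
    """Distance between consecutive path points at a constant speed."""
    return speed_mps * TICK
```

At roughly 22 m/s (close to the 50 MPH limit) consecutive points are about 0.44 m apart.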
## TODO
1. Try to use a more advanced cost function.
2. Increase test coverage.
3. Tune safety parameters.
4. Try to allow lane changes beyond the neighboring lane
---
title: Application.DisplayCommentIndicator Property (Excel)
keywords: vbaxl10.chm133123
f1_keywords:
- vbaxl10.chm133123
ms.prod: excel
api_name:
- Excel.Application.DisplayCommentIndicator
ms.assetid: 8617da4e-97cb-fe57-bb51-a9c671e2ff27
ms.date: 06/08/2017
---
# Application.DisplayCommentIndicator Property (Excel)
Returns or sets the way cells display comments and indicators. Can be one of the **[XlCommentDisplayMode](xlcommentdisplaymode-enumeration-excel.md)** constants.
## Syntax
_expression_ . **DisplayCommentIndicator**
_expression_ A variable that represents an **Application** object.
## Example
This example hides cell tips but retains comment indicators.
```vb
Application.DisplayCommentIndicator = xlCommentIndicatorOnly
```
## See also
#### Concepts
[Application Object](application-object-excel.md)
0.3.0 / 2014-10-20
------------------
- expand QR version from 10 to 40 / https://github.com/cryptocoinjs/qr-encode/pull/2
0.2.0 / 2014-07-02
------------------
* deleted `bower.json` and `component.json`
* added fixture tests
* refactored a lot of code into separate files
* works in Node.js now too
* added Travis-CI
* added Coveralls
0.1.0 / 2013-11-20
------------------
* changed package name
* removed AMD support
0.0.1 / 2013-11-12
------------------
* initial release
# Conan Wishlist
## DEPRECATION NOTICE: Use [Conan Center Index repository](https://github.com/conan-io/conan-center-index) instead.
This repository tracked user wishes for new Conan recipes prior to the start of the [Conan Center Index](https://github.com/conan-io/conan-center-index).
If a Conan package does not exist in the Index yet, please search the issues first to see whether someone else has already made a request for the library or tool you would like to have.
If such a request does not exist, feel free to [create](https://github.com/conan-io/conan-center-index/issues/new?labels=library+request&template=package_request.md&title=%5Brequest%5D+%3CLIBRARY-NAME%3E%2F%3CLIBRARY-VERSION%3E) one.
Existing issues in this repository will be closed and/or moved to the Index over time.
---
title: Plan NPS as a RADIUS Proxy
description: This topic provides planning guidance for a Network Policy Server RADIUS proxy deployment in Windows Server 2016.
manager: brianlic
ms.prod: windows-server
ms.technology: networking
ms.topic: article
ms.assetid: ca77d64a-065b-4bf2-8252-3e75f71b7734
ms.author: pashort
author: shortpatti
ms.openlocfilehash: 29a48275dfd56cbf223e0fca0c9c276f35a675cc
ms.sourcegitcommit: 6aff3d88ff22ea141a6ea6572a5ad8dd6321f199
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/27/2019
ms.locfileid: "71396016"
---
# <a name="plan-nps-as-a-radius-proxy"></a>Plan NPS as a RADIUS proxy
>Applies to: Windows Server (Semi-Annual Channel), Windows Server 2016
When you deploy Network Policy Server (NPS) as a Remote Authentication Dial-In User Service (RADIUS) proxy, NPS receives connection requests from RADIUS clients, such as network access servers and other RADIUS proxies, and then forwards these connection requests to servers running NPS or other RADIUS servers. You can use these planning guidelines to simplify your RADIUS deployment.
These planning guidelines do not include circumstances in which you want to deploy NPS as a RADIUS server. When you deploy NPS as a RADIUS server, NPS performs authentication, authorization, and accounting for connection requests for the local domain and for domains that trust the local domain.
Before you deploy NPS as a RADIUS proxy on your network, use the following guidelines to plan your deployment.
- Plan NPS configuration.
- Plan RADIUS clients.
- Plan remote RADIUS server groups.
- Plan attribute manipulation rules for message forwarding.
- Plan connection request policies.
- Plan NPS accounting.
## <a name="plan-nps-configuration"></a>Plan NPS configuration
When you use NPS as a RADIUS proxy, NPS forwards connection requests to a server running NPS or to other RADIUS servers for processing. Because of this, the domain membership of the NPS proxy is irrelevant. The proxy does not need to be registered in Active Directory Domain Services (AD DS) because it does not need access to the dial-in properties of user accounts. In addition, you do not need to configure network policies on the NPS proxy, because the proxy does not perform authorization for connection requests. The NPS proxy can be a domain member or a stand-alone server with no domain membership.
You must configure NPS to use the RADIUS protocol to communicate with RADIUS clients, which are also called network access servers. In addition, you can configure the types of events that NPS records in the event log, and you can enter a description for the server.
### <a name="key-steps"></a>Key steps
During the planning for NPS proxy configuration, you can use the following steps.
- Determine the RADIUS ports that the NPS proxy uses to receive RADIUS messages from RADIUS clients and to send RADIUS messages to members of remote RADIUS server groups. The defaults are User Datagram Protocol (UDP) ports 1812 and 1645 for RADIUS authentication messages, and UDP ports 1813 and 1646 for RADIUS accounting messages.
- If the NPS proxy is configured with multiple network adapters, determine the adapters over which you want to allow RADIUS traffic.
- Determine the types of events that you want NPS to record in the event log. You can log rejected connection requests, successful connection requests, or both.
- Determine whether you are deploying more than one NPS proxy. To provide fault tolerance, use at least two NPS proxies. Use one NPS proxy as the primary RADIUS proxy and the other as a backup. Each RADIUS client is then configured on both NPS proxies. If the primary NPS proxy becomes unavailable, RADIUS clients send Access-Request messages to the alternate NPS proxy.
- Plan the script used to copy one NPS proxy configuration to other NPS proxies, both to save administrative overhead and to prevent the incorrect configuration of a server. NPS provides Netsh commands that allow you to copy all or part of an NPS proxy configuration for import onto another NPS proxy. You can run the commands manually at the Netsh prompt. However, if you save your command sequence as a script, you can run the script again later if you decide to change your proxy configurations.
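As a rough illustration of this copy step, the configuration can be exported on one proxy and imported on another with the Netsh commands for NPS. The file path below is a placeholder; `exportPSK=YES` includes the shared secrets in the exported file, so protect it accordingly.

```
netsh nps export filename="C:\config\nps.xml" exportPSK=YES
rem Copy nps.xml to the target proxy, then run on that server:
netsh nps import filename="C:\config\nps.xml"
```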
## <a name="plan-radius-clients"></a>Plan RADIUS clients
RADIUS clients are network access servers, such as wireless access points, virtual private network (VPN) servers, 802.1X-capable switches, and dial-up servers. RADIUS proxies that forward connection request messages to RADIUS servers are also RADIUS clients. NPS supports all network access servers and RADIUS proxies that comply with the RADIUS protocol as described in RFC 2865, "Remote Authentication Dial-In User Service (RADIUS)," and RFC 2866, "RADIUS Accounting."
In addition, both wireless access points and switches must be capable of 802.1X authentication. If you want to deploy Extensible Authentication Protocol (EAP) or Protected Extensible Authentication Protocol (PEAP), access points and switches must support the use of EAP.
To test basic interoperability for PPP connections with wireless access points, configure the access point and the access client to use Password Authentication Protocol (PAP). Do not use additional PPP-based authentication protocols, such as PEAP, until you have tested the ones that you intend to use for network access.
### <a name="key-steps"></a>Key steps
During the planning for RADIUS clients, you can use the following steps.
- Document the vendor-specific attributes (VSAs) that you must configure in NPS. If your NASs require VSAs, record the VSA information for later use, when you configure network policies in NPS.
- Document the IP addresses of the RADIUS clients and the NPS proxy to simplify the configuration of all devices. When you deploy the RADIUS clients, you must configure them to use the RADIUS protocol, with the IP address of the NPS proxy entered as the authenticating server. And when you configure NPS to communicate with the RADIUS clients, you must enter the RADIUS client IP addresses into the NPS snap-in.
- Create shared secrets for configuration on the RADIUS clients and in the NPS snap-in. You must configure RADIUS clients with a shared secret, or password, that you also enter into the NPS snap-in when you configure the RADIUS clients in NPS.
## <a name="plan-remote-radius-server-groups"></a>Plan remote RADIUS server groups
When you configure a remote RADIUS server group on the NPS proxy, you are telling the NPS proxy where to send some or all of the connection request messages that it receives from RADIUS clients, such as network access servers, and from NPS or other RADIUS proxies.
You can use NPS as a RADIUS proxy to forward connection requests to one or more remote RADIUS server groups, and each group can contain one or more RADIUS servers. When you want the NPS proxy to forward messages to more than one group, configure one connection request policy per group. The connection request policy contains additional information, such as attribute manipulation rules, that tells the NPS proxy which messages to send to the remote RADIUS server group specified in the policy.
You can configure remote RADIUS server groups by using Netsh commands for NPS, by configuring a group directly under Remote RADIUS Server Groups in the NPS snap-in, or by running the New Connection Request Policy Wizard.
### <a name="key-steps"></a>Key steps
During the planning for remote RADIUS server groups, you can use the following steps.
- Identify the domains that contain the RADIUS servers to which the NPS proxy forwards connection requests. These domains contain the user accounts of the users who connect to the network through the RADIUS clients that you deploy.
- Determine whether you need to add new RADIUS servers in domains where RADIUS is not already deployed.
- Document the IP addresses of the RADIUS servers that you want to add to remote RADIUS server groups.
- Determine the number of remote RADIUS server groups that you need to create. In some cases, you might want to create one remote RADIUS server group per domain and add the RADIUS servers from that domain to the group. However, one domain might have a large number of resources, such as many users with user accounts in the domain, many domain controllers, and many RADIUS servers. Or the domain might cover a large geographical area, causing network access servers and RADIUS servers to be located far from each other. In these and possibly other circumstances, you can create multiple remote RADIUS server groups per domain.
- Create shared secrets for configuration on the NPS proxy and on the remote RADIUS servers.
## <a name="plan-attribute-manipulation-rules-for-message-forwarding"></a>Plan attribute manipulation rules for message forwarding
Attribute manipulation rules, which are configured in connection request policies, allow you to identify the Access-Request messages that you want to forward to a specific remote RADIUS server group.
You can configure NPS to forward all connection requests to one remote RADIUS server group without using attribute manipulation rules.
However, if you have more than one location to which you want to forward connection requests, you must create a connection request policy for each location and configure each policy with the remote RADIUS server group to which you want messages forwarded, using attribute manipulation rules to tell NPS which messages to forward.
You can create rules for the following attributes:
- Called-Station-ID. The phone number of the network access server (NAS). The value of this attribute is a character string. You can use pattern-matching syntax to specify area codes.
- Calling-Station-ID. The phone number used by the caller. The value of this attribute is a character string. You can use pattern-matching syntax to specify area codes.
- User-Name. The user name that is provided by the access client and that the NAS includes in the RADIUS Access-Request message. The value of this attribute is a character string that typically contains a realm name and a user account name.
To correctly replace or convert realm names in the user name of a connection request, you must configure attribute manipulation rules for the User-Name attribute on the appropriate connection request policy.
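As a rough model of such a rule (NPS attribute manipulation uses pattern-matching syntax), the following Python sketch strips a realm from a User-Name value. The realm `example.com` and the function are placeholders for illustration, not part of this article.

```python
import re

def strip_realm(user_name):
    """Remove a trailing @realm from a RADIUS User-Name (illustrative rule)."""
    return re.sub(r"@example\.com$", "", user_name)
```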
### <a name="key-steps"></a>Key steps
During the planning for attribute manipulation rules, you can use the following steps.
- Plan the routing of messages through your proxies to the remote RADIUS servers to verify that you have a logical path over which to forward messages to the RADIUS servers.
- Determine the attribute or attributes that you want to use in each connection request policy.
- Document the attribute manipulation rules that you plan to use for each connection request policy, matching the rules to the remote RADIUS server group to which you want messages forwarded.
## <a name="plan-connection-request-policies"></a>Plan connection request policies
A default connection request policy is configured when NPS is used as a RADIUS server. You can use additional connection request policies to define more specific conditions, to create attribute manipulation rules that tell NPS which messages to forward to remote RADIUS server groups, and to specify advanced attributes. Use the New Connection Request Policy Wizard to create either common or custom connection request policies.
### <a name="key-steps"></a>Key steps
During the planning for connection request policies, you can use the following steps.
- On each server running NPS that acts only as a RADIUS proxy, delete the default connection request policy.
- Plan the additional conditions and settings that are required for each policy, combining this information with the remote RADIUS server group and the attribute manipulation rules planned for the policy.
- Design your plan for distributing common connection request policies to all of your NPS proxies. Create common policies for multiple NPS proxies on one NPS, and then use Netsh commands for NPS to import the connection request policies and server configuration onto all of the other proxies.
## <a name="plan-nps-accounting"></a>Plan NPS accounting
When you configure NPS as a RADIUS proxy, you can configure it to perform RADIUS accounting with NPS format log files, database-compatible format log files, or NPS SQL Server logging.
You can also forward accounting messages to a remote RADIUS server group that performs accounting by using one of these logging formats.
### <a name="key-steps"></a>Key steps
During the planning for NPS accounting, you can use the following steps.
- Determine whether you want the NPS proxy to perform accounting services, or whether you want to forward accounting messages to a remote RADIUS server group.
- If you plan to forward accounting messages to other servers, plan to disable local NPS proxy accounting.
- If you plan to forward accounting messages to other servers, plan your connection request policy configuration steps. If you disable local accounting on the NPS proxy, accounting message forwarding must be enabled and correctly configured in each connection request policy that you configure on that proxy.
- Determine the log format that you want to use: IAS format log files, database-compatible format log files, or NPS SQL Server logging.
To configure load balancing for NPS as a RADIUS proxy, see [NPS Proxy Server Load Balancing](nps-manage-proxy-lb.md).