hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b13aab71cdf3dfd3577915b5543dc925ad116204 | 2,204 | md | Markdown | articles/stream-analytics/stream-analytics-output-error-policy.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 12 | 2017-08-28T07:45:55.000Z | 2022-03-07T21:35:48.000Z | articles/stream-analytics/stream-analytics-output-error-policy.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 441 | 2017-11-08T13:15:56.000Z | 2021-06-02T10:39:53.000Z | articles/stream-analytics/stream-analytics-output-error-policy.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 27 | 2017-11-13T13:38:31.000Z | 2022-02-17T11:57:33.000Z | ---
title: Output error policies in Azure Stream Analytics
description: Learn about the output error handling policies available in Azure Stream Analytics.
author: enkrumah
ms.author: ebnkruma
ms.service: stream-analytics
ms.topic: conceptual
ms.date: 12/04/2018
ms.custom: seodec18
ms.openlocfilehash: 19d762a55127af34e84185b11518aa6584acb5bd
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/29/2021
ms.locfileid: "98012414"
---
# <a name="azure-stream-analytics-output-error-policy"></a>Azure Stream Analytics output error policy
This article describes the output data error handling policies that can be configured in Azure Stream Analytics.
Output data error handling policies apply only to data conversion errors that occur when an output event produced by a Stream Analytics job does not conform to the schema of the target sink. You can configure this policy by choosing either **Retry** or **Drop**. In the Azure portal, within the Stream Analytics job, under **Configure**, select **Error policy** to make your selection.

## <a name="retry"></a>Retry
When an error occurs, Azure Stream Analytics retries writing the event indefinitely until the write succeeds. There is no timeout for retries. Eventually, all subsequent events are blocked from processing by the event that is being retried. This option is the default output error handling policy.
## <a name="drop"></a>Drop
Azure Stream Analytics drops any output event that results in a data conversion error. Dropped events cannot be recovered for reprocessing later.
All transient errors (for example, network errors) are retried regardless of the output error policy configuration.
## <a name="next-steps"></a>Next steps
[Azure Stream Analytics troubleshooting guide](./stream-analytics-troubleshoot-query.md) | 61.222222 | 430 | 0.819873 | pol_Latn | 0.999922 |
b13af89d949c8a0158ecec2b3a387e9cf2d56a89 | 2,889 | md | Markdown | docs/ssma/sybase/connect-to-azure-sql-db-sybasetosql.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssma/sybase/connect-to-azure-sql-db-sybasetosql.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssma/sybase/connect-to-azure-sql-db-sybasetosql.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Connect to Azure SQL Database (SybaseToSQL) | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.reviewer: ''
ms.technology: ssma
ms.topic: conceptual
ms.assetid: 96538007-1099-40c8-9902-edd07c5620ee
author: Shamikg
ms.author: Shamikg
ms.openlocfilehash: 68fbac69959d423477750a69bb6e5b06ab62af2b
ms.sourcegitcommit: b87d36c46b39af8b929ad94ec707dee8800950f5
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/08/2020
ms.locfileid: "68083475"
---
# <a name="connect-to-azure-sql-db--sybasetosql"></a>Connect to Azure SQL DB (SybaseToSQL)
Use the Connect to Azure SQL Database dialog box to connect to the Azure SQL DB database that you want to migrate.
To access this dialog box, on the File menu, select **Connect to Azure SQL Database**. If you have previously connected, the command is **Reconnect to Azure SQL Database**.
## <a name="options"></a>Options
**Server name**
Select or enter the server name for the connection to Azure SQL Database.
**Database**
Select or enter the database name.
> [!IMPORTANT]
> SSMA for Sybase does not support connecting to the master database in Azure SQL Database.
**User name**
Enter the user name that SSMA uses to connect to Azure SQL Database.
**Password**
Enter the password for the user name.
**Encrypt**
SSMA recommends an encrypted connection to Azure SQL Database.
## <a name="create-azure-database"></a>Create Azure database
If there are no databases in the Azure SQL DB account, you can create the first database.
To create a new database for the first time, follow these steps:
1. Click the Browse button in the Connect to Azure SQL Database dialog box.
2. If no databases are present, the following two menu items are displayed:
1. **(No databases found)**, which is disabled and always grayed out
2. **Create a new database**, which is enabled only if no databases exist in the Azure SQL DB account. When you click this menu item, the Create Azure Database dialog box is displayed with Database name and Size fields.
3. When the database is created, the following two parameters are provided as input:
1. **Database name:** Enter the database name.
2. **Database size:** Select the database size that you need to create in the Azure SQL DB account.
| 43.119403 | 259 | 0.759778 | deu_Latn | 0.994763 |
b13afea35ca150135c36c8be4650e88d01d32960 | 1,629 | md | Markdown | docs/standard-library/scoped-allocator.md | drvoss/cpp-docs.ko-kr | dda556c732d97e5959be3b39dc331ded7eda8bb3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/scoped-allocator.md | drvoss/cpp-docs.ko-kr | dda556c732d97e5959be3b39dc331ded7eda8bb3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/scoped-allocator.md | drvoss/cpp-docs.ko-kr | dda556c732d97e5959be3b39dc331ded7eda8bb3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: '<scoped_allocator> | Microsoft Docs'
ms.custom:
ms.date: 11/04/2016
ms.reviewer:
ms.suite:
ms.technology:
- cpp-standard-libraries
ms.tgt_pltfrm:
ms.topic: reference
f1_keywords:
- <scoped_allocator>
dev_langs:
- C++
helpviewer_keywords:
- scoped_allocator Header
ms.assetid: d20175b8-96be-4896-8141-3faba45e0005
caps.latest.revision:
author: corob-msft
ms.author: corob
manager: ghogen
ms.workload:
- cplusplus
ms.openlocfilehash: fb5a0c7258579e6126cc2770e6b6bd81ffc84b5c
ms.sourcegitcommit: d51ed21ab2b434535f5c1d553b22e432073e1478
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 02/23/2018
---
# <a name="ltscopedallocatorgt"></a><scoped_allocator>
Defines the container template class scoped_allocator_adaptor.
## <a name="syntax"></a>Syntax
```cpp
#include <scoped_allocator>
```
### <a name="operators"></a>Operators
|||
|-|-|
|[operator!=](../standard-library/scoped-allocator-operators.md#op_neq)|Tests if the scoped_allocator object on the left side of the operator is not equal to the scoped_allocator object on the right side.|
|[operator==](../standard-library/scoped-allocator-operators.md#op_eq_eq)|Tests if the scoped_allocator object on the left side of the operator is equal to the scoped_allocator object on the right side.|
### <a name="classes"></a>Classes
|||
|-|-|
|[scoped_allocator_adaptor Class](../standard-library/scoped-allocator-adaptor-class.md)|A class template that encapsulates a nest of one or more allocators.|
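The adaptor is most useful with nested containers, where the outer container's allocator should also be used to construct the inner elements. The following minimal sketch (the alias and function names are illustrative) uses `std::allocator` on both levels, so the propagation is invisible, but the same pattern applies to stateful allocators such as memory-pool allocators:

```cpp
#include <cassert>
#include <memory>
#include <scoped_allocator>
#include <string>
#include <vector>

// A string type that accepts an allocator; with std::allocator this is
// simply std::string.
using String = std::basic_string<char, std::char_traits<char>, std::allocator<char>>;

// The adaptor pairs the vector's allocator (outer) with the allocator
// used for each contained string (inner).
using Vec = std::vector<String,
    std::scoped_allocator_adaptor<std::allocator<String>, std::allocator<char>>>;

Vec make_names() {
    Vec v;
    // emplace_back forwards the inner allocator to each string via
    // uses-allocator construction; no per-element allocator argument is needed.
    v.emplace_back("hello");
    v.emplace_back("world");
    return v;
}
```

Because `scoped_allocator_adaptor` detects allocator-aware element types through `std::uses_allocator`, the inner strings receive the inner allocator automatically.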
## <a name="see-also"></a>See also
[Header Files Reference](../standard-library/cpp-standard-library-header-files.md)
[Thread Safety in the C++ Standard Library](../standard-library/thread-safety-in-the-cpp-standard-library.md)
[C++ Standard Library Reference](../standard-library/cpp-standard-library-reference.md)
| 27.610169 | 138 | 0.709024 | kor_Hang | 0.842903 |
b13b280ccef3bca7de09ab19d2ee7573a5467381 | 1,335 | md | Markdown | README.md | heyitzrare/status-picker-plus | 8883c46b49adfa7ef6321d54a8b1baf179b2f66c | [
"MIT"
] | null | null | null | README.md | heyitzrare/status-picker-plus | 8883c46b49adfa7ef6321d54a8b1baf179b2f66c | [
"MIT"
] | null | null | null | README.md | heyitzrare/status-picker-plus | 8883c46b49adfa7ef6321d54a8b1baf179b2f66c | [
"MIT"
] | null | null | null | 
# Status Picker Plus
*Should* make Discord's status picker look spicier. Shoutouts to [Tropical](https://github.com/Tropix126) for the original code and [LuckFire](https://github.com/LuckFire) for the tweaked version. Oh, and THANK YOU LUCKFIRE for archiving it so I could actually maybe possibly get it working when I feel like it!

# Installation
For **[Powercord](http://powercord.dev/)** (maybe also **[Vizality](https://vizality.com/)**, though it's untested) installation, go to **Themes -> Open a CMD / Powershell / Terminal / Gitbash** in the folder, and enter the following:
```
git clone https://github.com/heyitzrare/statuspicker-plus
```
**For [BetterDiscord](https://betterdiscord.net/):**
Unfortunately, due to the fact that the current version of the theme uses the `@import` command, and that I can't find a compiled version of the theme *anywhere,* BetterDiscord downloads are disabled. If you have/can make a compiled version of the theme, send it my way!
**For Browser / Web:**
Unfortunately, due to the fact that the current version of the theme uses the `@import` command, and that I can't find a compiled version of the theme *anywhere,* browser downloads are disabled. If you have/can make a compiled version of the theme, send it my way!
| 70.263158 | 311 | 0.750562 | eng_Latn | 0.972197 |
b13b5b6a1928944aaf5d9fb61b190d9e781a119c | 7,480 | md | Markdown | docs/codelabs/plz_query.md | samwestmoreland/please | 1616742eeefca3dd0b3194e4c1ec9a8542ec13c7 | [
"Apache-2.0"
] | 1,992 | 2016-08-08T11:14:10.000Z | 2022-03-31T08:29:57.000Z | docs/codelabs/plz_query.md | samwestmoreland/please | 1616742eeefca3dd0b3194e4c1ec9a8542ec13c7 | [
"Apache-2.0"
] | 1,059 | 2016-08-03T17:11:37.000Z | 2022-03-30T16:27:30.000Z | docs/codelabs/plz_query.md | samwestmoreland/please | 1616742eeefca3dd0b3194e4c1ec9a8542ec13c7 | [
"Apache-2.0"
] | 213 | 2016-12-09T15:37:00.000Z | 2022-03-23T23:08:26.000Z | summary: Tips and tricks - plz query
description: Tips and tricks to help you become productive with Please - using plz query to query the build graph
id: plz_query
categories: intermediate
tags: medium
status: Published
authors: Jon Poole
Feedback Link: https://github.com/thought-machine/please
# Tips and tricks - plz query
## Overview
Duration: 2
### Prerequisites
- You must have Please installed: [Install please](https://please.build/quickstart.html)
- You should have a basic understanding of using Please to build and test code
### What You’ll Learn
This codelab isn't exhaustive however it should give you an idea of the sort of things the Please CLI is capable of:
- Finding the dependencies of a target
- Including and excluding targets
- Printing information about targets as well as internal targets
## Setting up
Duration: 2
For this codelab we will be using the Please codelabs repo:
```
$ git clone https://github.com/thought-machine/please-codelabs
Cloning into 'please-examples'...
remote: Enumerating objects: 228, done.
remote: Total 228 (delta 0), reused 0 (delta 0), pack-reused 228
Receiving objects: 100% (228/228), 38.23 KiB | 543.00 KiB/s, done.
Resolving deltas: 100% (79/79), done.
```
We'll be using the getting started with go codelab for these examples:
```
$ cd please-codelabs/getting_started_go
```
## Finding dependencies of a target
Duration: 4
Please maintains a strict build graph representing each build target and the dependencies between them. Among other things,
this graph can be interrogated to determine the dependencies of a target:
```
$ plz query deps //src/greetings:greetings_test
//src/greetings:greetings_test
//src/greetings:greetings
//third_party/go:assert
//third_party/go:_assert-assert#download
```
This can be especially useful when trying to improve build performance. Unnecessary dependencies between targets can
cause certain rules to be rebuilt when they don't need to be.
### Internal rules
Woah what's that `//third_party/go:_assert-assert#download` target we just saw? That's an internal rule that was
generated by the `go_get()` rule `//third_party/go:assert`. Internal rules can be identified by their leading `_` in
their name. We can view more information about this one with `plz query print`:
```
$ plz query print //third_party/go:_assert-assert#download
# //third_party/go:_assert-assert#download:
remote_file(
name = '_assert-assert#download',
srcs = ['https://github.com/go-playground/assert/archive/v1.2.1.zip'],
outs = ['assert-assert.zip'],
building_description = 'Fetching...',
timeout = 600,
config = 'None',
visibility = ['PUBLIC'],
)
```
As we can see, `go_get()` has generated a remote file rule to download the github archive of the sources for that go
module! You shouldn't depend on these rules directly as they may change between minor releases of Please.
Most of the `plz query` sub-commands have a `--hidden` flag that can be used to include hidden targets:
```
$ plz query alltargets --hidden
//src:_main#lib
//src:_main#lib_srcs
//src:main
//src/greetings:_greetings#import_config
//src/greetings:_greetings#srcs
//src/greetings:_greetings_test#lib
//src/greetings:_greetings_test#lib_srcs
//src/greetings:_greetings_test#main
//src/greetings:_greetings_test#main_lib
//src/greetings:_greetings_test#main_lib_srcs
//src/greetings:greetings
//src/greetings:greetings_test
//third_party/go:_assert#a_rule
//third_party/go:_assert#get
//third_party/go:_assert#import_config
//third_party/go:_assert-assert#download
//third_party/go:assert
```
## Reverse dependencies
Duration: 2
If you're changing a build rule that you know has a wide reaching effect, it might be good to run all the tests that
will be affected by that change. Let's find the reverse dependencies of our internal download rule:
```
$ plz query revdeps //third_party/go:_assert-assert#download
//third_party/go:assert
```
Well that doesn't look quite right... We should see `//src/greetings:greetings_test` too.
Turns out finding reverse dependencies is quite a slow operation. Please limits this to just one level so you don't
accidentally lock up your terminal trying to walk the whole build graph. You can set the level with `--level=2` or if
you want to get all reverse dependencies, you can set it to `-1`:
```
$ plz query revdeps --level=-1 //third_party/go:_assert-assert#download
//src/greetings:greetings_test
//third_party/go:assert
```
Be careful, this can be slow on larger build graphs. You can use `--include=//src/foo/...` to limit the search to a
slice of your repository. More on this later in this codelab!
## Composing plz commands
Duration: 2
So we've managed to determine that targets that might be effected by our change. How do we run these tests? Please can
be instructed to listen for targets on standard input:
```
$ plz query revdeps --level=-1 //third_party/go:_assert-assert#download | plz test -
//src/greetings:greetings_test 1 test run in 0s; 1 passed [cached]
1 test target and 1 test run; 1 passed.
Total time: 110ms real, 0s compute.
```
The `-` at the end of `plz test -` indicates to Please that we will be supplying the targets to build over standard
input.
## Including and excluding targets
Duration: 2
Almost all Please commands can take in the `--include` and `--exclude` arguments. These can be used to specifically
exclude targets:
```
$ plz query revdeps --exclude //src/greetings:greetings_test --level=-1 //third_party/go:_assert-assert#download | plz test -
0 test targets and 0 tests run; 0 passed.
Total time: 40ms real, 0s compute.
```
As you can see, we excluded the test from earlier so `plz test` didn't run it. We can also exclude this on the test
command:
```
$ plz query revdeps --level=-1 //third_party/go:_assert-assert#download | plz test --exclude //src/greetings:greetings_test -
0 test targets and 0 tests run; 0 passed.
Total time: 40ms real, 0s compute.
```
### Including based on label
Targets can be labeled in Please. Most of the built-in rules apply some basic labels, e.g. the go rules apply the `go`
label to their targets. These can be very useful to run all tests for a given language:
```
$ plz build --include go --exclude //third_party/go/...
```
This will build all Go targets but will only build targets under `//third_party/go/...` if they're a dependency of a
target that needs to be built.
You may also add custom labels to your targets. Update `srcs/greetings/BUILD` as such:
### `src/greetings/BUILD`
```python
go_library(
name = "greetings",
srcs = ["greetings.go"],
visibility = ["//src/..."],
labels = ["my_label"], # Add a label to the library rule
)
go_test(
name = "greetings_test",
srcs = ["greetings_test.go"],
deps = [
":greetings",
"//third_party/go:assert",
],
external = True,
)
```
```
$ plz query alltargets --include=my_label
//src/greetings:greetings
$ plz build --include=my_label
Build finished; total time 300ms, incrementality 100.0%. Outputs:
//src/greetings:greetings:
plz-out/gen/src/greetings/greetings.a
```
This can be especially useful for separating out slow running tests:
```
$ plz test --exclude e2e
```
## What's next?
Duration: 1
Hopefully this has given you a taster for what is possible with `plz query`, however there's so much more. See the
[cli](/commands.html#query) for an idea of what's possible!
| 33.542601 | 125 | 0.735829 | eng_Latn | 0.994906 |
fafbba03130c31e0d5af98eb906a1ac3f840544e | 4,991 | md | Markdown | articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md | Microsoft/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 7 | 2017-08-28T08:02:11.000Z | 2021-05-05T07:47:55.000Z | articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 476 | 2017-10-15T08:20:18.000Z | 2021-04-16T05:20:11.000Z | articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 39 | 2017-08-03T09:46:48.000Z | 2021-11-05T11:41:27.000Z | ---
title: Development tools
titleSuffix: Azure Data Science Virtual Machine
description: Learn about the tools and integrated development environments available on the Data Science Virtual Machine.
keywords: data science tools, data science virtual machine, tools for data science, linux data science
services: machine-learning
ms.service: data-science-vm
author: lobrien
ms.author: laobri
ms.topic: conceptual
ms.date: 12/12/2019
ms.openlocfilehash: cecc195b8b97ffd9b25cf12898726352ddd698a9
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/30/2021
ms.locfileid: "100519447"
---
# <a name="development-tools-on-the-azure-data-science-virtual-machine"></a>Development tools on the Azure Data Science Virtual Machine
The Data Science Virtual Machine (DSVM) bundles several popular tools in a highly productive integrated development environment (IDE). Here are some of the tools provided on the DSVM.
## <a name="visual-studio-community-edition"></a>Visual Studio Community Edition
| Category | Value |
| ------------- | ------------- |
| What is it? | General-purpose IDE |
| Supported DSVM versions | Windows: Visual Studio 2017, Windows 2019: Visual Studio 2019 |
| Typical uses | Software development |
| How is it configured and installed on the DSVM? | Data Science workload (Python and R tools), Azure workload (Hadoop, Data Lake), Node.js, SQL Server tools, [Azure Machine Learning for Visual Studio Code](https://github.com/Microsoft/vs-tools-for-ai) |
| How to use and run it | Desktop shortcut (`C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\devenv.exe`). You can open Visual Studio graphically via the desktop icon or the **Start** menu. Search for programs (Windows key + S) followed by **Visual Studio**. From there, you can create projects in languages such as C#, Python, R, and Node.js. |
| Related tools on the DSVM | Visual Studio Code, RStudio, Juno |
> [!NOTE]
> You may get a message that your evaluation period has expired. Enter your Microsoft account credentials, or create a new free account to get access to Visual Studio Community.
## <a name="visual-studio-code"></a>Visual Studio Code
| Category | Value |
| ------------- | ------------- |
| What is it? | General-purpose IDE |
| Supported DSVM versions | Windows, Linux |
| Typical uses | Code editor and Git integration |
| How to use and run it | Desktop shortcut (`C:\Program Files (x86)\Microsoft VS Code\Code.exe`) on Windows, desktop shortcut or terminal (`code`) on Linux |
| Related tools on the DSVM | Visual Studio, RStudio, Juno |
## <a name="rstudio-desktop"></a>RStudio Desktop
| Category | Value |
| ------------- | ------------- |
| What is it? | Client IDE for the R language |
| Supported DSVM versions | Windows, Linux |
| Typical uses | R development |
| How to use and run it | Desktop shortcut (`C:\Program Files\RStudio\bin\rstudio.exe`) on Windows, desktop shortcut (`/usr/bin/rstudio`) on Linux |
| Related tools on the DSVM | Visual Studio, Visual Studio Code, Juno |
## <a name="rstudio-server"></a>RStudio Server
| Category | Value |
| ------------- | ------------- |
| What is it? | Web-based IDE for R |
| Supported DSVM versions | Linux |
| Typical uses | R development |
| How to use and run it | Enable the service with _systemctl enable rstudio-server_, then start the service with _systemctl start rstudio-server_. Then sign in to RStudio Server at http://your-vm-ip:8787. |
| Related tools on the DSVM | Visual Studio, Visual Studio Code, RStudio Desktop |
## <a name="juno"></a>Juno
| Category | Value |
| ------------- | ------------- |
| What is it? | Client IDE for the Julia language |
| Supported DSVM versions | Windows, Linux |
| Typical uses | Julia development |
| How to use and run it | Desktop shortcut (`C:\JuliaPro-0.5.1.1\Juno.bat`) on Windows, desktop shortcut (`/opt/JuliaPro-VERSION/Juno`) on Linux |
| Related tools on the DSVM | Visual Studio, Visual Studio Code, RStudio |
## <a name="pycharm"></a>PyCharm
| Category | Value |
| ------------- | ------------- |
| What is it? | Client IDE for the Python language |
| Supported DSVM versions | Windows 2019, Linux |
| Typical uses | Python development |
| How to use and run it | Desktop shortcut (`C:\Program Files\tk`) on Windows, desktop shortcut (`/usr/bin/pycharm`) on Linux |
| Related tools on the DSVM | Visual Studio, Visual Studio Code, RStudio |
| 57.367816 | 377 | 0.677419 | swe_Latn | 0.991025 |
fafbd7ac7108e0729694aa1e667a5756efbffa85 | 2,873 | md | Markdown | articles/mysql/howto-restart-server-powershell.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/mysql/howto-restart-server-powershell.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/mysql/howto-restart-server-powershell.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Restart server - Azure PowerShell - Azure Database for MySQL
description: This article describes how you can restart an Azure Database for MySQL server by using Azure PowerShell.
author: ajlam
ms.author: andrela
ms.service: mysql
ms.topic: how-to
ms.date: 4/28/2020
ms.custom: devx-track-azurepowershell
ms.openlocfilehash: 71d10078a704b2905cf055347f5ed4272ca8ef72
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/09/2020
ms.locfileid: "87502782"
---
# <a name="restart-azure-database-for-mysql-server-using-powershell"></a>Restart an Azure Database for MySQL server by using PowerShell
This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage during the operation.
The server restart is blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
The amount of time required to complete a restart depends on the MySQL recovery process. To reduce the restart time, we recommend that you minimize the amount of activity occurring on the server before the restart.
## <a name="prerequisites"></a>Prerequisites
To complete this how-to guide, you need:
- The [Az PowerShell module](/powershell/azure/install-az-ps) installed locally, or [Azure Cloud Shell](https://shell.azure.com/) in the browser
- An [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-powershell.md)
> [!IMPORTANT]
> While the Az.MySql PowerShell module is in preview, you must install it separately from the Az PowerShell module by using the following command: `Install-Module -Name Az.MySql -AllowPrerelease`.
> Once the Az.MySql PowerShell module is generally available, it will become part of future Az PowerShell module releases and will be available natively from within Azure Cloud Shell.
If you choose to use PowerShell locally, connect to your Azure account by using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
## <a name="restart-the-server"></a>Restart the server
Restart the server with the following command:
```azurepowershell-interactive
Restart-AzMySqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
```
## <a name="next-steps"></a>Next steps
> [!div class="nextstepaction"]
> [Create an Azure Database for MySQL server using PowerShell](quickstart-create-mysql-server-database-using-azure-powershell.md)
| 55.25 | 238 | 0.804386 | deu_Latn | 0.911649 |
fafc435d674be2480b659f9c7352fbfdf0004e11 | 898 | md | Markdown | docs/framework/wcf/diagnostics/etw/4024-wasclosealllistenerchannelinstancesfailed.md | cihanyakar/docs.tr-tr | 03b6c8998a997585f61b8be289df105261125239 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/etw/4024-wasclosealllistenerchannelinstancesfailed.md | cihanyakar/docs.tr-tr | 03b6c8998a997585f61b8be289df105261125239 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/etw/4024-wasclosealllistenerchannelinstancesfailed.md | cihanyakar/docs.tr-tr | 03b6c8998a997585f61b8be289df105261125239 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 4024 - WasCloseAllListenerChannelInstancesFailed
ms.date: 03/30/2017
ms.assetid: 73f0dc73-f0b7-4c13-8328-9fdc262009ec
ms.openlocfilehash: bf9a52a55df4281ffbd84a1c1f43bdce26cc730b
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 05/04/2018
ms.locfileid: "33465255"
---
# <a name="4024---wasclosealllistenerchannelinstancesfailed"></a>4024 - WasCloseAllListenerChannelInstancesFailed
## <a name="properties"></a>Properties
|||
|-|-|
|ID|4024|
|Keywords|ActivationServices|
|Level|Error|
|Channel|Microsoft-Windows-Application Server-Applications/Analytic|
## <a name="description"></a>Description
This event is emitted when closing all listener channel instances has failed.
## <a name="message"></a>Message
Error Code: %1
## <a name="details"></a>Details
| 30.965517 | 113 | 0.753898 | tur_Latn | 0.587572 |
fafc6654952906bec54a9c8422ff019b47b9d53e | 207 | md | Markdown | _posts/2017-01-01-advanced-examples.md | BotongXue/botongxue.github.io | 6670de33ca041dcdab1c14a5b243ad39dee11a79 | [
"Unlicense"
] | null | null | null | _posts/2017-01-01-advanced-examples.md | BotongXue/botongxue.github.io | 6670de33ca041dcdab1c14a5b243ad39dee11a79 | [
"Unlicense"
] | null | null | null | _posts/2017-01-01-advanced-examples.md | BotongXue/botongxue.github.io | 6670de33ca041dcdab1c14a5b243ad39dee11a79 | [
"Unlicense"
] | null | null | null | ---
title: "PUBLICATIONS"
mathjax: true
layout: post
categories: media
---

## MathJax
| 14.785714 | 114 | 0.743961 | yue_Hant | 0.116548 |
fafdd4ef7a292fbfd3cef7c84f0058e641d43ecb | 1,524 | md | Markdown | _posts/2021-09-23-SpringJobSchedulerCron.md | Hyun0JAM/Hyun0JAM.github.io | 8b36fbfd3be9987429d9814ff179ef70716ea0fb | [
"MIT"
] | 1 | 2020-10-14T08:57:18.000Z | 2020-10-14T08:57:18.000Z | _posts/2021-09-23-SpringJobSchedulerCron.md | Hyun0JAM/Hyun0JAM.github.io | 8b36fbfd3be9987429d9814ff179ef70716ea0fb | [
"MIT"
] | null | null | null | _posts/2021-09-23-SpringJobSchedulerCron.md | Hyun0JAM/Hyun0JAM.github.io | 8b36fbfd3be9987429d9814ff179ef70716ea0fb | [
"MIT"
] | null | null | null | ---
title : "Spring Job Scheduler - Cron Expression "
category : "Spring"
tages : [Spring,job schedule]
date : 2021-09-23T18:00:00
last_modified_at: 2021-09-23T18:00:00
comment : true
---
크론작업은 반복되는 주기로 예약되며 unix-cron(` * * * * * `)형식으로 지정된다.(ex.매분 매시간 매일 매주 매달)
## Cron Expression
### Option
`?` : no specific condition [usable only in the day-of-month and day-of-week fields]
`*` : matches every value
- start/step (e.g. 0/5): matches from the given start value onward, at every step interval
- start-end (e.g. 3-5): matches every value from 3 through 5 (3, 4, 5)
- x,y,z... (e.g. 1,3,5): matches only the listed values 1, 3, and 5
`L` : [usable only in the day-of-month and day-of-week fields]
- In the day-of-month field: `L` means the last day of the month. For example, that is the 31st in January, the 28th or 29th in February depending on leap year, and the 30th in April.
- In the day-of-week field: for example `6L` (6 = Saturday) runs on the last Saturday of the month. If the last week has no Saturday, the Saturday of the previous week matches.
`W` : [usable only in the day-of-month field]
- Finds the nearest weekday (Monday to Friday).
- With `15W`, if the 15th falls on a weekday (Monday to Friday), it matches on that day.
- With `15W`, if the 15th is a Saturday, it matches on the nearest Friday, the 14th.
- With `15W`, if the 15th is a Sunday, it matches on the nearest Monday, the 16th.
`#` : [usable only in the day-of-week field]
- For example, `3#2` means (Wednesday # 2nd week).
- That is, it matches on the Wednesday of the 2nd week of the month.
### Example
- `0 0/5 * * *` : every 5 minutes
- `10 0/5 * * * ?` : every 5 minutes, at 10 seconds past the minute.
- `0 30 10-13 ? * WED,FRI` : every Wednesday and Friday, at minute 30 of each hour from 10:00 to 13:00.
- `0 15 10 ? * 6#3` : at 10:15 on the 3rd Friday of every month
- `0 0/30 8-9 5,20 * ?` : every 30 minutes during the 8 o'clock and 9 o'clock hours on the 5th and 20th of every month (8:00, 8:30, 9:00, 9:30)
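The basic field syntaxes from the Option section above can be sketched as a tiny matcher in Python. This is a simplified illustration covering only `*`, `start/step`, `start-end`, and `x,y,z`; it is not how Spring actually evaluates cron expressions, and the special characters `?`, `L`, `W`, `#` are left out:

```python
def field_matches(field: str, value: int) -> bool:
    """Check one cron field against a concrete value.

    Supports the basic syntaxes described above:
    '*' (any), 'start/step', 'start-end', and 'x,y,z' lists.
    """
    for part in field.split(","):
        if part == "*":
            return True
        if "/" in part:  # e.g. '0/5': start at 0, then every 5 units
            start, step = (int(t) for t in part.split("/"))
            if value >= start and (value - start) % step == 0:
                return True
        elif "-" in part:  # e.g. '3-5': inclusive range 3, 4, 5
            lo, hi = (int(t) for t in part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:  # plain number, or one entry of 'x,y,z'
            return True
    return False

# '0/5' in the minute field matches minutes 0, 5, 10, ...
print(field_matches("0/5", 10))   # True
print(field_matches("3-5", 4))    # True
print(field_matches("1,3,5", 2))  # False
```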
---
**references**
- [https://gs.saro.me/dev?tn=548](https://gs.saro.me/dev?tn=548)
- [https://www.leafcats.com/94](https://www.leafcats.com/94)
- [https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&blogId=estern&logNo=110010101624](https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&blogId=estern&logNo=110010101624)
<!-- source: links/gql_websocket_link/CHANGELOG.md (Grohden/gql) -->
## 0.1.3
- Add `inactivityTimeout` parameter.
## 0.1.2
- Send `stop` command to the server once a subscription is canceled.
## 0.1.1
- Handle exceptions.
- Throw `WebSocketLinkParserException` and `WebSocketLinkServerException`.
## 0.1.0
- Port WebSocket link from `graphql_flutter`
- Use `WebSocketChannel` as the base for the connection
<!-- source: docs/standard/serialization/serialization-tools.md (Dodozz/docs.it-it) -->
---
title: Serialization tools
ms.date: 03/30/2017
ms.assetid: 593b675f-938c-44ff-807b-0ca9fea30103
ms.openlocfilehash: af0ed0df0e99245d3dacd31280574c36415d2a1e
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "61778313"
---
# <a name="serialization-tools"></a>Serialization tools
This section provides detailed information about the serialization tools. All of the tools can be run from the command line.
> [!IMPORTANT]
> For the .NET Framework tools to work, you must correctly set the Path, Include, and Lib environment variables by running SDKVars.bat, which is located in the \<SDK>\v2.0\Bin directory. SDKVars.bat must be run in every command shell.
## <a name="in-this-section"></a>In this section
|Tool|Description|
|----------|-----------------|
|[XML Serializer Generator Tool (Sgen.exe)](../../../docs/standard/serialization/xml-serializer-generator-tool-sgen-exe.md)|Creates an XML serialization assembly for the types in a specified assembly in order to improve the run-time performance of <xref:System.Xml.Serialization.XmlSerializer>.|
|[XML Schema Definition Tool (Xsd.exe)](../../../docs/standard/serialization/xml-schema-definition-tool-xsd-exe.md)|Generates XML schemas that conform to the XSD language proposed by the World Wide Web Consortium (W3C). This tool generates common language runtime classes and <xref:System.Data.DataSet> classes from an XSD schema file.|
## <a name="see-also"></a>See also
- [Tools](../../../docs/framework/tools/index.md)
<!-- source: readme.md (gkundu22/Google-Meet-Hack) -->
# Google Meet Hack
## How To Install
- Download the [Zip File](https://github.com/gkundu22/Google-Meet-Hack/archive/main.zip) and extract it.
- Open the Chrome browser and click the options button.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToInstall1.png" width="50%">
</p>
<br />
- Select More Tools, then select Extensions in the options menu.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToInstall2.png" width="50%">
</p>
<br />
- Turn on Developer Mode.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToInstall3.png" width="50%">
</p>
<br />
- Click on Load Unpacked.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToInstall4.png" width="50%">
</p>
<br />
- Select the extracted <b>Google Meet Hack</b> folder.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToInstall5.png" width="50%">
</p>
<br />
- Congratulations! Your extension is installed.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToInstall6.png" width="50%">
</p>
<br />
## How To Use
- Select the number of participants and click on Confirm.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToUse1.png" width="50%">
</p>
<br />
- The process is now started; click on Cancel to stop the extension.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToUse2.png" width="50%">
</p>
<br />
- You can mute the site by right-clicking on the Meet tab.
<p>
<img src="https://raw.githubusercontent.com/gkundu22/Google-Meet-Hack/main/img/howToUse3.png" width="50%">
</p>
<br />
<!-- source: _posts/2017-08-04-introducing-rx-devtools.md (KwintenP/kwintenp.github.com) -->
---
layout: post
cover: false
title: Introducing Rx devtools
date: 2017-08-04
subclass: 'post'
categories: 'casper'
published: true
disqus: true
---
Ever since I first started using RxJS, it has been my absolute favorite way of coding. I cannot imagine working in a world without observables anymore, nor can I understand how I was able to write code before. I have started sharing the knowledge I gained through blog posts, workshops and coaching.
One thing that always comes up while working with observables is the very high learning curve. It's really hard to grasp the concept when you're just starting. One of the reasons for this is that it's really hard to visualize and debug the observables in your application.
With this in the back of my mind, I started wondering how that could be fixed. If there only was a way to clearly see the data flowing in the streams of your application in realtime. That's how the idea for Rx Devtools was born.
## Introducing Rx Devtools
After first playing with the idea, I decided to create a small POC. This POC has grown into a Chrome extension that, as of today, can be used to visualise streams in real time! Take a look at the demo below (it's a YouTube video, please click :)):
[](https://youtu.be/stWGClDE_Gk)
On the left you can see the code we are debugging at the moment. Notice the `debug` operators on every observable. Here you can pass a name to track the streams.
On the right side you can see the plugin in action. On the left of the plugin, we have a list with one entry per observable we are debugging. When you click on one of them, you can see the actual marble diagrams with all of the operators. You can click on a marble to inspect the value it had at that moment in time. This way, you can not only see the value of every event being passed through the observable chain, but also see the moment in time it was produced and pushed down the chain.
If you for example have a combineLatest which doesn't seem to fire, there will probably be one source observable that is not producing a value. With the plugin, this is visualised in seconds!
For more information on the plugin, how to install, how it works and how to use it, I would like to point you to the <a href="https://github.com/kwintenp/rx-devtools" target="_blank">Github</a> page.
### What's next
The plugin as it exists today can definitely be used. It is, however, far from finished and still in an alpha phase. Over the next few weeks, I'll try to add as many features as possible. If you have any ideas for features you want to see added, feel free to create feature requests through GitHub issues.
If you find any bugs, which I'm certain you will, please report them in the form of GitHub issues. I will try to tackle them as soon as possible.
Happy debugging!
<!-- source: README.md (erick-ais/Cursos) -->
# Cursos
Notes, exercises, and study activities.
<!-- source: powerapps-docs/maker/canvas-apps/functions/function-datevalue-timevalue.md (gregdegruy/powerapps-docs) -->
---
title: DateValue, TimeValue, and DateTimeValue functions | Microsoft Docs
description: Reference information, syntax and examples for the DateValue, TimeValue, and DateTimeValue functions in Power Apps
author: gregli-msft
manager: kvivek
ms.service: powerapps
ms.topic: reference
ms.custom: canvas
ms.reviewer: tapanm
ms.date: 03/16/2020
ms.author: gregli
search.audienceType:
- maker
search.app:
- PowerApps
---
# DateValue, TimeValue, and DateTimeValue functions in Power Apps
Converts date, time, or both in a *string* to a *date/time* value.
## Description
- **DateValue** function converts a *date string* (for example, "10/01/2014") to a *date/time* value.
- **TimeValue** function converts a *time string* (for example, "12:15 PM") to a *date/time* value.
- **DateTimeValue** function converts a *date and time string* (for example, "January 10, 2013 12:13 AM") to a *date/time* value.
The **DateValue** function ignores any time information in the date string, and the **TimeValue** function ignores any date information in the time string.
> [!NOTE]
> The DateValue, TimeValue, and DateTimeValue functions by default use the language from the current user's settings. You can override it to ensure that strings are interpreted properly. For example, "10/1/1920" is interpreted as *October 1<sup>st</sup>* in "*en*" and as *January 10<sup>th</sup>* in "*fr*".
Dates must be in one of these formats:
- MM/DD/YYYY
- DD/MM/YYYY
- DD Mon YYYY
- Month DD, YYYY
To convert from numeric date, month and year components, read [Date](function-date-time.md). <br>
To convert from numeric hour, minute and second components, read [Time](function-date-time.md).
For more information, read:
- [Working with date and time](../show-text-dates-times.md).
- [Date/time and data types](data-types.md#date-time-and-datetime).
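The locale ambiguity described in the note above can be illustrated outside Power Apps with plain Python date parsing; the two `strptime` format strings below stand in for the 'en' and 'fr' interpretations (this is an analogy, not Power Apps code):

```python
from datetime import datetime

ambiguous = "10/1/1920"

# 'en' style interpretation: month/day/year -> October 1st
as_en = datetime.strptime(ambiguous, "%m/%d/%Y")
# 'fr' style interpretation: day/month/year -> January 10th
as_fr = datetime.strptime(ambiguous, "%d/%m/%Y")

print(as_en.month, as_en.day)  # 10 1
print(as_fr.month, as_fr.day)  # 1 10
```

The same input string yields two different dates, which is why the optional *Language* parameter matters when the string's origin is known.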
## Syntax
**DateValue**( *String* [, *Language* ])<br>
**DateTimeValue**( *String* [, *Language* ])<br>
**TimeValue**( *String* [, *Language* ])
* *String* - Required. A text string that contains a date, time, or combination date and time value.
* *Language* - Optional. A language string, such as would be returned by the first two characters from the [Language](function-language.md) function. If not provided, the language of the current user's settings is used.
## Examples
### DateValue
If you type **10/11/2014** into a text-input control named **Startdate**, and then set the [Text](../controls/properties-core.md) property of a label to these formulas:
- Convert a date from a string in the user's locale and show the result as a long date.
```powerapps-dot
Text( DateValue( Startdate.Text ), DateTimeFormat.LongDate )
```
Device set to **en** locale shows the label as **Saturday, October 11, 2014**.
> [!NOTE]
> You can use several options with **DateTimeFormat** compared to **LongDateTime**. To display a list of options, type the parameter followed by an exclamation sign (**!**) in the formula bar.
- Convert date from a string in the French locale and show the result as a long date. In this example, the months and day of the month are interpreted differently from English.
```powerapps-dot
Text( DateValue( Startdate.Text, "fr" ), DateTimeFormat.LongDate )
```
Device set to **en** locale shows the label as **Monday, November 10, 2014**.
If you typed **October 20, 2014** instead:
- Convert a date from a string in the user's locale and calculate the difference between two days, in days
```powerapps-dot
DateDiff( DateValue( Startdate.Text ), Today() )
```
Device set to **en** locale shows the label as **9**, indicating the number of days between October 11 and October 20. The [DateDiff](function-dateadd-datediff.md) function can also show the difference in months, quarters, or years.
### DateTimeValue
If you typed **10/11/2014 1:50:24.765 PM** into a text-input control named **Start**, and then set the [Text](../controls/properties-core.md) property of a label to the following formula:
- Convert both a date and time string in the current locale.
```powerapps-dot
Text( DateTimeValue( Start.Text ), DateTimeFormat.LongDateTime )
```
Device set to **en** locale shows the label as **Saturday, October 11, 2014 1:50:24 PM**.
> [!NOTE]
> You can use several options with **DateTimeFormat** compared to **LongDateTime**. To display a list of options, type the parameter followed by an exclamation sign (**!**) in the formula bar.
- Convert both a date and time string in the French locale. Month and day of the month are interpreted differently.
```powerapps-dot
Text( DateTimeValue( Start.Text, "fr"), DateTimeFormat.LongDateTime )
```
Device set to **en** locale shows the label as **Monday, November 10, 2014 1:50:24 PM**.
- Convert both a date and time string in the user's locale, and display the result with a fractional second.
```powerapps-dot
Text( DateTimeValue( Start.Text ), "dddd, mmmm dd, yyyy hh:mm:ss.fff AM/PM" )
```
Device set to **en** locale shows the label as **Saturday, October 11, 2014 01:50:24.765 PM**.
As an alternative, you can specify **hh:mm:ss.f** or **hh:mm:ss.ff** to round the time to the nearest 10<sup>th</sup> or 100<sup>th</sup> of a second.
### TimeValue
Name a text-input control **FinishedAt**, and set the [Text](../controls/properties-core.md) property of a label to this formula:
```powerapps-dot
If( TimeValue( FinishedAt.Text ) < TimeValue( "5:00:00.000 PM" ),
"You made it!",
"Too late!"
)
```
- If you type **4:59:59.999 PM** in the **FinishedAt** control, the label shows "*You made it!*"
- If you type **5:00:00.000 PM** in the **FinishedAt** control, the label shows "*Too late!*"
<!-- source: README.md (maito1201/csv4dynamo) -->
# csv4dynamo

Export and import CSV for DynamoDB.
This is an integration of [ma91n/dynamo2csv](https://github.com/ma91n/dynamo2csv) and [maito1201/csv2dynamo](https://github.com/maito1201/csv2dynamo).
# install
```
go get github.com/maito1201/csv4dynamo/cmd/csv4dynamo
```
# usage
```
csv4dynamo [options]
```
```
NAME:
csv4dynamo - export and import csv for DynamoDB
USAGE:
main.exe [global options] command [command options] [arguments...]
COMMANDS:
import, csv2dynamo, i import csv to DynamoDB
export, dynamo2csv, x export csv from DynamoDB
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
   --table-name value, -t value                                        [import export] target dynamo db table name (required)
--endpoint value, -e value [import export] endpoint of DynamoDB
--profile value, -p value [import export] profile of aws cli
--output value, --out value, -o value [import export] target output (default: stdout, e.g. ./out.txt), no file will be created if execute option is enabled
--csv value, -c value, --file value, -f value [import] file to import (e.g. ./tablename.csv)
   --execute                                                            [import] directly execute the import command (default: false)
--filter-expression value, --fex value [export] filter-expression to export (e.g. 'contains(#ts, :s)')
--expression-attribute-values value, --exp-values value, --xav value [export] expression-attribute-values to export (e.g. '{":s":{"S":"15:00:00Z"}}')
--expression-attribute-names value, --exp-names value, --xan value [export] expression-attribute-names to export (e.g. '{"#ts":"timestamp"}')
--help, -h show help (default: false)
```
# example
## import csv to DynamoDB
```
csv4dynamo --table-name sample-table --file ./testdata/sample.csv --out out.txt import
read and compile csv
progress: 2/2
complete!
aws dynamodb put-item --table-name sample-table --item {"s_value":{"S":"sample1"},"n_value":{"N":"1"},"bool_value":{"B":true}}
aws dynamodb put-item --table-name sample-table --item {"s_value":{"S":"sample2"},"n_value":{"N":"2"},"bool_value":{"B":false}}
```
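The CSV-to-item conversion shown above can be sketched in Python. The `s_`/`n_`/`bool_` column-name prefixes are taken from the sample CSV; treating them as a type convention is an assumption for illustration, not the tool's documented behavior (also note that canonical DynamoDB JSON uses `BOOL` for booleans, while the sample output above shows `B`):

```python
import csv
import io
import json

def row_to_item(row: dict) -> dict:
    """Build a DynamoDB-style item from one CSV row.

    The attribute type is inferred from a hypothetical column-name
    prefix (s_ = string, n_ = number, bool_ = boolean), mirroring the
    sample above. Canonical DynamoDB JSON would use "BOOL" instead
    of "B" for booleans; "B" is kept here to match the sample output.
    """
    item = {}
    for name, value in row.items():
        if name.startswith("s_"):
            item[name] = {"S": value}
        elif name.startswith("n_"):
            item[name] = {"N": value}  # DynamoDB sends numbers as strings
        elif name.startswith("bool_"):
            item[name] = {"B": value.lower() == "true"}
    return item

sample = "s_value,n_value,bool_value\nsample1,1,true\nsample2,2,false\n"
for row in csv.DictReader(io.StringIO(sample)):
    print(json.dumps(row_to_item(row)))
```

Each printed line corresponds (modulo whitespace) to the `--item` payload of one generated `aws dynamodb put-item` command above.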
## export csv from DynamoDB
```
csv4dynamo --table-name sample-table --filter-expression 'contains(#ts, :s)' --out out.csv export
```
# caution
This CLI may not work well with complex table conditions (e.g. tables where the data format of the attributes is not unified for each record).
<!-- source: agent/loading-bay/truck-park/README.md (Noddy76/data-highway) -->
# Road Truck Park
Truck Park is an application that is instructed to consume a batch of a road's data from Kafka up to a given set of
partition offsets and stream it to S3 in Avro format.
## Configuration Options
Arguments without a default are mandatory.
| Argument | Default | Description
|--- |--- |---
| `kafka.bootstrapServers` | | Kafka bootstrap servers for data to consume.
| `kafka.pollTimeout` | 100 (ms) | The time, in milliseconds, spent waiting in poll if data is not available in the buffer.
| `road.name` | | The road name.
| `road.topic` | | Kafka topic where road message data is stored.
| `road.offsets` | | Key/value delimited string of Kafka partitions and the end offset to process to.
| `road.model.topic` | | Kafka topic where road model data is stored. Used for schema lookup.
| `writer.flushBytesThreshold` | 134217728 (128Mi) | Byte threshold at which a file is closed and a new file started.
| `avroCodec.name` | deflate | Avro compression codec name.
| `avroCodec.level` | 3 | Avro compression level, if applicable.
| `s3.bucket` | | S3 bucket to upload data to.
| `s3.prefix` | | S3 key prefix (or 'directory') for data being uploaded.
| `s3.partSize` | 5242880 (5Mi) | Size of individual parts in S3 multipart upload.
| `s3.retry.maxAttempts` | 3 | Maximum number of retries to attempt individual part uploads.
| `s3.retry.sleepSeconds` | 1 | Number of seconds to sleep between retry attempts.
| `s3.async.poolSize` | 3 | Fixed number of threads available for concurrent uploads.
| `s3.async.queueSize` | 3 | Fixed queue size of part uploads waiting to execute.
| `s3.endpoint.url` | - | Location of S3 endpoint for data landing.
| `s3.endpoint.signingRegion` | - | Signing region of S3 endpoint for data landing.
| `metrics.graphiteEndpoint` | disabled | Graphite instance to send metrics to.
## Example YAML
```
kafka.bootstrapServers: kafka:9092
road:
name: my_road
topic: _roads.my_road
offsets: 0:123,234;1:124,235
model.topic: _roads
s3:
bucket: my-bucket
prefix: data
metrics.graphiteEndpoint: graphite:2003
```
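The `road.offsets` value in the YAML above packs one `partition:offset,offset` entry per `;`-separated segment. Below is a minimal Python sketch of parsing that format; interpreting the two numbers per partition as a start/end pair is an assumption for illustration, since the table only documents an end offset:

```python
def parse_offsets(spec: str) -> dict:
    """Parse 'partition:offset,offset;...' into {partition: (offsets...)}.

    Example spec from the README: '0:123,234;1:124,235'. Treating the
    pair as (start, end) offsets is an assumption made here.
    """
    result = {}
    for entry in spec.split(";"):
        partition, offsets = entry.split(":")
        result[int(partition)] = tuple(int(o) for o in offsets.split(","))
    return result

print(parse_offsets("0:123,234;1:124,235"))
# {0: (123, 234), 1: (124, 235)}
```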
<!-- source: repos/node/remote/17.2.0-alpine.md (sittichai-it/repo-info) -->
## `node:17.2.0-alpine`
```console
$ docker pull node@sha256:e64dc950217610c86f29aef803b123e1b6a4a372d6fa4bcf71f9ddcbd39eba5c
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms: 6
- linux; amd64
- linux; arm variant v6
- linux; arm variant v7
- linux; arm64 variant v8
- linux; ppc64le
- linux; s390x
### `node:17.2.0-alpine` - linux; amd64
```console
$ docker pull node@sha256:3888dace40316274a4666539d239e081976a01c2e460f3230b1bf49433d805f8
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **51.1 MB (51104816 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:bb1fcdaff9369c498f6161d85f8a082c10012c0a8d6b9c76f5140abce7841c78`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Fri, 12 Nov 2021 17:19:44 GMT
ADD file:762c899ec0505d1a32930ee804c5b008825f41611161be104076cba33b7e5b2b in /
# Fri, 12 Nov 2021 17:19:45 GMT
CMD ["/bin/sh"]
# Thu, 02 Dec 2021 01:27:43 GMT
ENV NODE_VERSION=17.2.0
# Thu, 02 Dec 2021 01:27:54 GMT
RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="915d7a201c51e90220fa4f608ff62b569e414d3c75ceb95e383e877ce5300257" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM  node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python3 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version
# Thu, 02 Dec 2021 01:27:54 GMT
ENV YARN_VERSION=1.22.15
# Thu, 02 Dec 2021 01:27:59 GMT
RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version
# Thu, 02 Dec 2021 01:28:00 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Thu, 02 Dec 2021 01:28:00 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Thu, 02 Dec 2021 01:28:00 GMT
CMD ["node"]
```
- Layers:
- `sha256:97518928ae5f3d52d4164b314a7e73654eb686ecd8aafa0b79acd980773a740d`
Last Modified: Fri, 12 Nov 2021 17:20:39 GMT
Size: 2.8 MB (2822981 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4796d9078153ec5c3d36d82022cf28b1dceb9123a074a048725f4f27d474d8cb`
Last Modified: Thu, 02 Dec 2021 01:47:46 GMT
Size: 45.9 MB (45930900 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c3cb450ea517aa64a9da0bf37c19226eba8869ba1864cc67f3ed13227234067c`
Last Modified: Thu, 02 Dec 2021 01:47:36 GMT
Size: 2.4 MB (2350484 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:dc9de54e9683649905626715008817451a2954e81b3268c399d3c0e794019de2`
Last Modified: Thu, 02 Dec 2021 01:47:36 GMT
Size: 451.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
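As a quick sanity check, the **Total Size** reported for this platform equals the sum of the four compressed layer sizes listed above:

```python
# Layer sizes in bytes, copied from the amd64 layer list above.
layer_sizes = [2822981, 45930900, 2350484, 451]

total = sum(layer_sizes)
print(total)  # 51104816, matching the "Total Size" shown for linux; amd64
```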
### `node:17.2.0-alpine` - linux; arm variant v6
```console
$ docker pull node@sha256:e7fe29810537fc24b2b4e1c2f48eb7254334f670a6082afcecacb38024ecc2c3
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **50.1 MB (50103978 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:6070502dcd2f2de49bd1765e0b3d851030b5dd7253596f1b967e3e654d506dbb`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Fri, 12 Nov 2021 16:49:42 GMT
ADD file:6daf1fe862c00673bf9cf4d7e20b0bf253a56e7fb8ed5e730a4466ab9186e18a in /
# Fri, 12 Nov 2021 16:49:44 GMT
CMD ["/bin/sh"]
# Thu, 02 Dec 2021 03:26:30 GMT
ENV NODE_VERSION=17.2.0
# Thu, 02 Dec 2021 03:57:15 GMT
RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="915d7a201c51e90220fa4f608ff62b569e414d3c75ceb95e383e877ce5300257" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM  node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python3 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version
# Thu, 02 Dec 2021 03:57:16 GMT
ENV YARN_VERSION=1.22.15
# Thu, 02 Dec 2021 03:57:26 GMT
RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version
# Thu, 02 Dec 2021 03:57:26 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Thu, 02 Dec 2021 03:57:27 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Thu, 02 Dec 2021 03:57:27 GMT
CMD ["node"]
```
- Layers:
- `sha256:56afcfda5d78cc243287acbaad250c5e8c0f47aae620dd7c51985b0d3c9b2728`
Last Modified: Fri, 12 Nov 2021 16:51:32 GMT
Size: 2.6 MB (2635392 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:51817ca1af8fc173a7ac19757768be207743d7687a104ed6a10e6a88e66fb23b`
Last Modified: Thu, 02 Dec 2021 06:51:09 GMT
Size: 45.1 MB (45066705 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a0656c50cb9c61f7e180a6400715da39edd3191248f9c4f2448d7ef8983c8da1`
Last Modified: Thu, 02 Dec 2021 06:50:36 GMT
Size: 2.4 MB (2401432 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5a129705d96a7741ae489c0c8881ec85f5c7fbc6acbdc8ddc044a13410fac019`
Last Modified: Thu, 02 Dec 2021 06:50:35 GMT
Size: 449.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:17.2.0-alpine` - linux; arm variant v7
```console
$ docker pull node@sha256:4c42708c7d7cade1eed1a0195d73b355a03d17aadd186d721e59efc2e6d48d32
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **49.4 MB (49408125 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:7016c40efe7db43bd2105aede6e0717ca47065a01a0ff6d5b07115adc6300c03`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Fri, 12 Nov 2021 16:57:35 GMT
ADD file:03e0720458c3475758bf4394afa56f2165198eb91e6e9581f7768e433744dd9b in /
# Fri, 12 Nov 2021 16:57:36 GMT
CMD ["/bin/sh"]
# Thu, 02 Dec 2021 03:18:54 GMT
ENV NODE_VERSION=17.2.0
# Thu, 02 Dec 2021 03:47:12 GMT
RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="915d7a201c51e90220fa4f608ff62b569e414d3c75ceb95e383e877ce5300257" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python3 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version
# Thu, 02 Dec 2021 03:47:13 GMT
ENV YARN_VERSION=1.22.15
# Thu, 02 Dec 2021 03:47:23 GMT
RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version
# Thu, 02 Dec 2021 03:47:24 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Thu, 02 Dec 2021 03:47:24 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Thu, 02 Dec 2021 03:47:25 GMT
CMD ["node"]
```
- Layers:
- `sha256:764d2e53e1a607f2d8261522185d5b9021ade3ec1a595664ee90308c00176899`
Last Modified: Fri, 12 Nov 2021 16:59:33 GMT
Size: 2.4 MB (2438618 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fbf63f2973deb89a506c38ca70f947d14f7b8e74f8ea61c5b8960d7d06fa2264`
Last Modified: Thu, 02 Dec 2021 07:12:45 GMT
Size: 44.6 MB (44567506 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:50634a5694bc898a5534a3c88e9cc7240a365c03f78ef225110e4f460a88694d`
Last Modified: Thu, 02 Dec 2021 07:12:13 GMT
Size: 2.4 MB (2401551 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:954703ba3202ac26d3ab0e3e6f718dc1ef94ec08de367f8a24a5a7bb8ac9f41d`
Last Modified: Thu, 02 Dec 2021 07:12:11 GMT
Size: 450.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:17.2.0-alpine` - linux; arm64 variant v8
```console
$ docker pull node@sha256:916bbd0627cc635b0602bb8665248fffbbfcb5ec5a274e4718d80a11e9b08d3f
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **51.1 MB (51131874 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:931d0903ad7e36f05a1d87f9416c1e3842e7d50d575583846717c3042d505012`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Fri, 12 Nov 2021 16:39:58 GMT
ADD file:400c0466b29ccad54e0f6c0acef22542992828678c96693ef1f9f4d0551935d8 in /
# Fri, 12 Nov 2021 16:39:58 GMT
CMD ["/bin/sh"]
# Thu, 02 Dec 2021 03:20:08 GMT
ENV NODE_VERSION=17.2.0
# Thu, 02 Dec 2021 03:52:55 GMT
RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="915d7a201c51e90220fa4f608ff62b569e414d3c75ceb95e383e877ce5300257" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python3 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version
# Thu, 02 Dec 2021 03:52:55 GMT
ENV YARN_VERSION=1.22.15
# Thu, 02 Dec 2021 03:53:02 GMT
RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version
# Thu, 02 Dec 2021 03:53:03 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Thu, 02 Dec 2021 03:53:03 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Thu, 02 Dec 2021 03:53:04 GMT
CMD ["node"]
```
- Layers:
- `sha256:be307f383ecc62b27a29b599c3fc9d3129693a798e7fcce614f09174cfe2d354`
Last Modified: Fri, 12 Nov 2021 16:40:59 GMT
Size: 2.7 MB (2717700 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1822d4fff1a728a41b5890a3451dc6fbb488ac5688cf6d30843344bceb67b198`
Last Modified: Thu, 02 Dec 2021 06:47:36 GMT
Size: 46.0 MB (46003881 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a1faa0e8c7a6b8f9e9e8e7d775b68367bc56a94cc55b4ddbc27bf8ceb83898bd`
Last Modified: Thu, 02 Dec 2021 06:47:27 GMT
Size: 2.4 MB (2409843 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:45097cae72dce0eee4769e2addf101bc610ae70bdc71eafb7a04d9d20b38d3f3`
Last Modified: Thu, 02 Dec 2021 06:47:26 GMT
Size: 450.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:17.2.0-alpine` - linux; ppc64le
```console
$ docker pull node@sha256:6e5a20a669dd78100e3ac7df0815c93130dac06d3d2e3a2ade65b839bd6cfa86
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **53.8 MB (53754904 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:c06fc9c304bf863991798686971f328278f8eb0ab1aee3afacd154da1a7c5bd1`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Fri, 12 Nov 2021 21:18:00 GMT
ADD file:4d45063079cb28a34f1e382fddb22f156ac99d5449aa05ed37cb653c1f7b80f2 in /
# Fri, 12 Nov 2021 21:18:01 GMT
CMD ["/bin/sh"]
# Thu, 02 Dec 2021 02:50:58 GMT
ENV NODE_VERSION=17.2.0
# Thu, 02 Dec 2021 03:25:33 GMT
RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="915d7a201c51e90220fa4f608ff62b569e414d3c75ceb95e383e877ce5300257" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python3 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version
# Thu, 02 Dec 2021 03:25:39 GMT
ENV YARN_VERSION=1.22.15
# Thu, 02 Dec 2021 03:25:56 GMT
RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version
# Thu, 02 Dec 2021 03:25:58 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Thu, 02 Dec 2021 03:26:00 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Thu, 02 Dec 2021 03:26:05 GMT
CMD ["node"]
```
- Layers:
- `sha256:72940440c1ab65eca4d38846164719ffde4b147543cc658d041407a925b13368`
Last Modified: Fri, 12 Nov 2021 21:19:32 GMT
Size: 2.8 MB (2817467 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:71cf2ced56be2acb0e93bac72c74fadfa90b88bec7cd45d08c014d5a4d75da08`
Last Modified: Thu, 02 Dec 2021 06:54:30 GMT
Size: 48.5 MB (48526756 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9b71aa92a84262c38a1008d454e6ddbf697880bbc62fe996e7cd1ec377acb270`
Last Modified: Thu, 02 Dec 2021 06:53:48 GMT
Size: 2.4 MB (2410231 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ff01adc54cc7f6b6c70a65d8f206a49f6014cf859b0e464716b041a8ad8276a5`
Last Modified: Thu, 02 Dec 2021 06:53:47 GMT
Size: 450.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `node:17.2.0-alpine` - linux; s390x
```console
$ docker pull node@sha256:f108b80f947981b33dca513de98932b73e4457c8fc49b970959920fb01218e43
```
- Docker Version: 20.10.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **50.9 MB (50865174 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:5bf3d8e9945d769ddc946aff12c83a2b5bbb4a2c7c8ae387e78fb4b78b5debcf`
- Entrypoint: `["docker-entrypoint.sh"]`
- Default Command: `["node"]`
```dockerfile
# Fri, 12 Nov 2021 16:41:35 GMT
ADD file:7e0cf02b3f015b1a0f867c03b2902b85f2140be1cee7af63c23f367a487e4577 in /
# Fri, 12 Nov 2021 16:41:36 GMT
CMD ["/bin/sh"]
# Thu, 02 Dec 2021 03:11:50 GMT
ENV NODE_VERSION=17.2.0
# Thu, 02 Dec 2021 03:47:48 GMT
RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="915d7a201c51e90220fa4f608ff62b569e414d3c75ceb95e383e877ce5300257" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python3 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 74F12602B6F1C4E913FAA37AD3A89613643B6201 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version
# Thu, 02 Dec 2021 03:47:51 GMT
ENV YARN_VERSION=1.22.15
# Thu, 02 Dec 2021 03:47:55 GMT
RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version
# Thu, 02 Dec 2021 03:47:56 GMT
COPY file:4d192565a7220e135cab6c77fbc1c73211b69f3d9fb37e62857b2c6eb9363d51 in /usr/local/bin/
# Thu, 02 Dec 2021 03:47:56 GMT
ENTRYPOINT ["docker-entrypoint.sh"]
# Thu, 02 Dec 2021 03:47:56 GMT
CMD ["node"]
```
- Layers:
- `sha256:817a13b0e05928f7491adbf1d2cf261ec35079112247bd03469bbe31156aca7c`
Last Modified: Fri, 12 Nov 2021 16:42:44 GMT
Size: 2.6 MB (2609278 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:57e459ae3a1ec049a44d02c665463bd4f1fc7dcac37cfcab4e336124e1a62bd5`
Last Modified: Thu, 02 Dec 2021 06:48:43 GMT
Size: 45.8 MB (45844269 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5c3e005cd27a213af071476910f59509b1f9c2941e52bade0cc018d6395b2192`
Last Modified: Thu, 02 Dec 2021 06:48:37 GMT
Size: 2.4 MB (2411177 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:50e24c07331e55a97c8805b46f41958077a9151875e978e58ce273c80b36dfd4`
Last Modified: Thu, 02 Dec 2021 06:48:37 GMT
Size: 450.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
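As a quick sanity check, the "Total Size" reported for each manifest matches the sum of its compressed layer sizes. A minimal sketch for the linux/s390x image above (the `layer_sizes` values are copied from the layer list; the variable name is our own):

```python
# Compressed layer sizes (bytes) listed above for node:17.2.0-alpine on linux/s390x.
layer_sizes = [2_609_278, 45_844_269, 2_411_177, 450]

# The manifest's stated "Total Size" is the compressed transfer size,
# i.e. the sum of the individual layer blobs.
total = sum(layer_sizes)
print(total)  # 50865174, matching the stated 50.9 MB (50865174 bytes)
```

The same check holds for the other architectures listed in this document.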
README.md | AdySan/ESP8266-HomeKit | Apache-2.0

# ESP8266-HomeKit
HomeKit server on ESP8266 RTOS with an API approach
==============================
Copyright 2016 HomeACcessoryKid - HacK - homeaccessorykid@gmail.com
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================
Public code implementing Apple's HomeKit protocol has been around for some time for more powerful processors
(notably https://github.com/KhaosT/HAP-NodeJS).
This is a rewrite for the ESP8266 for you to play with.
I have used ESP8266_RTOS_SDK-V1.5.0 and WolfCrypt 3.9.8 to build the software on.
# Code
The code provides all the services required to pair iOS with an IP device and to
operate that device once paired with multiple iOS devices.
It runs on even the smallest ESP8266 devices, like the ESP-1. It provides
an API layer for creating your HomeKit device without descending to the lower levels of the HAP protocol.
## Timings
Here are some preliminary timings.
### Pairing
Pairing is dominated by the SRP algorithm, which is very slow and computationally expensive.
Fortunately, this only happens once, when the iOS device is first associated with the HomeKit device:
Time 1: 25 seconds from boot until the server starts, so the initial interaction is split-second.
Time 2: 30 seconds (based on a build with DEBUG logging, which is slow).
### Verify
Verify happens every time an iOS device reconnects to the HomeKit device. Ideally, this should be as fast as possible.
Time: 1.2 seconds
## Memory
The HomeKit code is approximately 400 KB, and about 18 KB of RAM is left for other purposes.
Pairing uses so much RAM that most of the code must be launched only after pairing is done.
# Thanks
I want to thank a number of projects which made this possible:
1. https://github.com/KhaosT/HAP-NodeJS - which documents the HomeKit protocols for IP and allowed me to guess how they
were implemented.
2. https://github.com/aanon4/HomeKit.git - which inspired this README and should inspire us to look into assembly.
3. https://github.com/espressif/ESP8266_RTOS_SDK.git - Espressif for their great product
4. https://www.wolfssl.com/wolfSSL/Products-wolfcrypt.html - For a great one stop crypto library
# Notes
Please note that this software was produced without any reference to any proprietary documentation or information.
I am not an MFi licensee, nor do I have access to any related information.
Espressif uses MIT license.
WolfCrypt uses a GPLv2-or-higher license; for the purpose of this distribution you should use GPLv3.
This is based on the changes I had to make to WolfCrypt, and keeps compatibility with the Apache-2.0 license.
docs/payroll_uk/EmployeeStatutorySickLeave.md | mogest/xero-ruby | MIT

# XeroRuby::PayrollUk::EmployeeStatutorySickLeave
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**statutory_leave_id** | **String** | The unique identifier (guid) of a statutory leave | [optional]
**employee_id** | **String** | The unique identifier (guid) of the employee |
**leave_type_id** | **String** | The unique identifier (guid) of the \"Statutory Sick Leave (non-pensionable)\" pay item |
**start_date** | **Date** | The date when the leave starts |
**end_date** | **Date** | The date when the leave ends |
**type** | **String** | The type of statutory leave | [optional]
**status** | **String** | The status of the statutory leave | [optional]
**work_pattern** | **Array<String>** | The days of the work week the employee is scheduled to work at the time the leave is taken |
**is_pregnancy_related** | **Boolean** | Whether the sick leave was pregnancy related |
**sufficient_notice** | **Boolean** | Whether the employee provided sufficient notice and documentation as required by the employer supporting the sick leave request |
**is_entitled** | **Boolean** | Whether the leave was entitled to receive payment | [optional]
**entitlement_weeks_requested** | **Float** | The amount of requested time (in weeks) | [optional]
**entitlement_weeks_qualified** | **Float** | The amount of statutory sick leave time off (in weeks) that is available to take at the time the leave was requested | [optional]
**entitlement_weeks_remaining** | **Float** | A calculated amount of time (in weeks) that remains for the statutory sick leave period | [optional]
**overlaps_with_other_leave** | **Boolean** | Whether another leave (Paternity, Shared Parental specifically) occurs during the requested leave's period. While this is allowed, it could affect payment amounts | [optional]
**entitlement_failure_reasons** | **Array<String>** | If the leave requested was considered \"not entitled\", the reasons why are listed here. | [optional]
## Code Sample
```ruby
require 'xero-ruby'

instance = XeroRuby::PayrollUk::EmployeeStatutorySickLeave.new(statutory_leave_id: nil,
employee_id: nil,
leave_type_id: nil,
start_date: nil,
end_date: nil,
type: 'Sick',
status: 'Pending',
work_pattern: nil,
is_pregnancy_related: nil,
sufficient_notice: nil,
is_entitled: nil,
entitlement_weeks_requested: nil,
entitlement_weeks_qualified: nil,
entitlement_weeks_remaining: nil,
overlaps_with_other_leave: nil,
entitlement_failure_reasons: nil)
```
README.md | DeveloperInProgress/subqlstarter-terra | MIT

# SubQuery Starter Package for Indexing Terra
This is a starter project for indexing Terra using the [Terra subql/node module](https://github.com/DeveloperInProgress/subql).
# Setup
1. clone this repository
`git clone https://github.com/DeveloperInProgress/subqlstarter-terra.git`
2. install dependencies
```
cd subqlstarter-terra
yarn install
```
3. After running the Terra subql/node module once, build the project when it asks for the missing mapping file in './dist/index.js':
```
yarn build
```
4. Run the Terra subql/node module once again and the indexing process will begin.
README.md | rds0751/wagtail-torchbox | MIT

Torchbox.com on Wagtail
=======================
[](https://travis-ci.org/torchbox/wagtail-torchbox)
This project was originally a clone of [wagtaildemo](http://github.com/torchbox/wagtaildemo), customised for the Torchbox site.
Setup (with Vagrant)
--------------------
We recommend running Wagtail in a virtual machine using Vagrant, to ensure that the correct dependencies are in place.
### Dependencies
- [VirtualBox](https://www.virtualbox.org/)
- [Vagrant 1.1+](http://www.vagrantup.com)
### Installation
Run the following commands:
```bash
git clone [the url you copied above]
cd wagtail-torchbox
vagrant up
vagrant ssh
# then, within the SSH session:
./manage.py createcachetable
./manage.py migrate
./manage.py createsuperuser
./manage.py runserver 0.0.0.0:8000
```
To build static files you will additionally need to run the following commands:
```bash
cd /vagrant/tbx/core/static_src/
npm install
npm run build:prod
```
**Note:** You can run these commands within the VM, where node is pre-installed, but if you are using macOS you will likely have performance issues with them. macOS users are advised to have node on the host machine.
To install node on the host machine we recommend using [`nvm`](https://github.com/creationix/nvm). Once you have `nvm` installed simply run `nvm install` to install and activate the version of node required for the project. Refer to the nvm docs for more details about available commands.
After the installation, the app will be accessible on the host machine at http://localhost:8000/ - you can access the Wagtail admin interface at http://localhost:8000/admin/. The codebase is located on the host
machine, exported to the VM as a shared folder; code editing and Git operations will generally be done on the host.
To make code changes:
- Create a new branch for your work in the form `ticketnumber-briefdescription` e.g. `123-fix-dodgy-quotemarks`
- Make your code changes
- `git push origin branchname`
- Go back to Torchbox repo in the browser (https://github.com/torchbox/wagtail-torchbox)
- Click the big green 'New pull request' button
- Set the 'base' as 'master' and the 'compare' as your branch.
- Click 'create pull request.'
You will probably need to first merge to the staging branch in order to stage your changes and show them to the client. You still follow the process above, but add a comment to the pull request asking to manually merge to staging and deploy to the staging site. When you are ready to deploy, add another comment requesting that the pull request is merged to master and deployed.
### Download production data and media to local VM
Within the vagrant box:
```
heroku login
fab pull-production-data
fab pull-production-media
```
You may need to check on Heroku dashboard (https://dashboard.heroku.com) if you have the permission to access the `torchbox-production` app.
articles/kinect-dk/depth-camera.md | fuatrihtim/azure-docs.tr-tr | CC-BY-4.0, MIT

---
title: Azure Kinect DK depth camera
description: Understand the operating principles and key features of the depth camera in the Azure Kinect DK.
author: tesych
ms.author: tesych
ms.prod: kinect-dk
ms.date: 06/26/2019
ms.topic: conceptual
keywords: Kinect, Azure, sensor, SDK, depth camera, TOF, principles, performance, invalidation
ms.openlocfilehash: 22f04b983ed7c6a2ab19a5c1c709621655ee31c0
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/30/2021
ms.locfileid: "85277667"
---
# <a name="azure-kinect-dk-depth-camera"></a>Azure Kinect DK depth camera
This page covers how to use the depth camera in the Azure Kinect DK. The depth camera is the second of the two cameras. As mentioned in previous sections, the other camera is the RGB camera.
## <a name="operating-principles"></a>Operating principles
The Azure Kinect DK depth camera implements the amplitude modulated continuous wave (AMCW) time-of-flight (ToF) principle. The camera casts modulated illumination in the near-IR (NIR) spectrum onto the scene. It then records an indirect measurement of the time it takes the light to travel from the camera to the scene and back.
These measurements are processed to generate a depth map. A depth map is a set of Z-coordinate values for every pixel, measured in units of millimeters.
Along with a depth map, we also obtain a so-called clean IR reading. The value of the pixels in the clean IR reading is proportional to the amount of light returned from the scene. The image looks similar to a regular IR image. The figure below shows an example depth map (left) and a corresponding clean IR image (right).

## <a name="key-features"></a>Key features
Technical characteristics of the depth camera include:
- 1-megapixel ToF imaging chip with advanced pixel technology enabling higher modulation frequencies and depth precision.
- Two NIR laser diodes enabling near and wide field-of-view (FoV) depth modes.
- The world's smallest ToF pixel, at 3.5μm by 3.5μm.
- Per-pixel automatic gain selection enabling a large dynamic range, which allows near and far objects to be captured cleanly.
- Global shutter that allows for improved performance in sunlight.
- Multi-phase depth calculation method that enables robust accuracy even in the presence of chip, laser, and power-supply variation.
- Low systematic and random errors.

The depth camera transmits raw modulated IR images to the host PC. On the PC, the GPU-accelerated depth engine software converts the raw signal into depth maps. The depth camera supports several modes. The **narrow field-of-view (FoV) modes** are ideal for scenes with smaller extents in the X- and Y-dimensions but larger extents in the Z-dimension. If the scene has large X- and Y-extents but a smaller Z-range, the **wide FoV modes** are better suited.
The depth camera supports **2x2 binning modes** to extend the Z-range in comparison to the corresponding **unbinned modes**. Binning is done at the cost of lowering image resolution. All modes can be run at up to 30 frames per second (fps), with the exception of the 1-megapixel (MP) mode, which runs at a maximum frame rate of 15 fps. The depth camera also provides a **passive IR mode**. In this mode, the illuminators on the camera are not active and only ambient illumination is observed.
## <a name="camera-performance"></a>Camera performance
The camera's performance is measured in terms of systematic and random errors.
### <a name="systematic-error"></a>Systematic error
Systematic error is defined as the difference between the measured depth after noise removal and the correct (ground-truth) depth. We compute the temporal average over many frames of a static scene to eliminate depth noise as much as possible. More precisely, the systematic error is defined as:

Where *d<sub>t</sub>* denotes the measured depth at time *t*, *N* is the number of frames used in the averaging procedure, and *d<sub>gt</sub>* is the ground-truth depth.
The depth camera's systematic error specification excludes multi-path interference (MPI). MPI occurs when one sensor pixel integrates light that is reflected by more than one object. MPI is partially mitigated in the depth camera by using higher modulation frequencies, along with the depth invalidation that we will introduce later.
### <a name="random-error"></a>Random error
Let's assume we take 100 images of the same object without moving the camera. The depth of the object will be slightly different in each of the 100 images. This difference is caused by shot noise. Shot noise is the number of photons hitting the sensor, which varies by a random factor over time. We define this random error on a static scene as the standard deviation of depth over time, computed as:

Where *N* denotes the number of depth measurements, *d<sub>t</sub>* represents the depth measurement at time *t*, and *d̄* denotes the mean value computed over all depth measurements *d<sub>t</sub>*.
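As a rough illustration, both metrics can be computed from a stack of repeated depth measurements of a static scene. This is a generic sketch using only Python's standard library, not part of the Azure Kinect SDK, and the sample values are made up:

```python
from statistics import mean, pstdev

def depth_error_metrics(samples, ground_truth):
    """Systematic and random error for one pixel.

    samples: repeated depth measurements (mm) of a static scene over N frames.
    ground_truth: reference depth (mm) for that pixel.
    """
    d_bar = mean(samples)                  # temporal average over N frames
    systematic = d_bar - ground_truth      # bias left after noise averaging
    random_err = pstdev(samples)           # standard deviation of depth over time
    return systematic, random_err

# Four hypothetical measurements of a surface 999 mm away:
sys_err, rnd_err = depth_error_metrics([1000.0, 1002.0, 998.0, 1000.0], 999.0)
print(sys_err)            # 1.0 mm of systematic error
print(round(rnd_err, 3))  # 1.414 mm of random error
```

In practice the averaging and standard deviation are computed per pixel over the whole depth map, but the arithmetic is the same.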
## <a name="invalidation"></a>Invalidation
In certain situations, the depth camera may not provide correct values for some pixels. In these situations, depth pixels are invalidated. Invalid pixels are indicated with a depth value equal to 0. Reasons for the depth engine being unable to produce correct values include:
- Outside of the active IR illumination mask
- Saturated IR signal
- Low IR signal
- Filter outliers
- Multi-path interference
### <a name="illumination-mask"></a>Illumination mask
Pixels are invalidated when they are outside of the active IR illumination mask. We do not recommend using the signal of such pixels to compute depth. The figure below shows an example of invalidation by the illumination mask. The invalidated pixels are the black pixels outside the circle in the wide FoV modes (left) and the hexagon in the narrow FoV modes (right).

### <a name="signal-strength"></a>Signal strength
Pixels are invalidated when they contain a saturated IR signal. When pixels are saturated, phase information is lost. The image below shows an example of invalidation by a saturated IR signal. See the arrows pointing to example pixels in both the depth and IR images.

Invalidation can also happen when the IR signal is not strong enough to generate depth. The figure below shows an example of invalidation by a low IR signal. See the arrows pointing to example pixels in both the depth and IR images.

### <a name="ambiguous-depth"></a>Ambiguous depth
Pixels can also be invalidated if they received signals from more than one object in the scene. A common case where this sort of invalidation can be seen is in corners. Because of the scene geometry, the IR light from the camera is reflected off one wall and onto the other. The reflected light causes ambiguity in the measured depth of the pixel. Filters in the depth algorithm detect these ambiguous signals and invalidate the pixels.
The figures below show examples of invalidation by multi-path detection. You can also see how the same surface area invalidated from one camera view (top row) may appear again from a different camera view (bottom row). This image demonstrates that surfaces invalidated from one perspective may be visible from another.

Another common multi-path case is pixels that contain the mixed signal of foreground and background (such as around object edges). During fast motion, you may see more invalidated pixels around the edges. The additional invalidated pixels are due to the exposure interval of the raw depth capture.

## <a name="next-steps"></a>Next steps
[Coordinate systems](coordinate-systems.md)
# The-Complete-Guide-to-Bug-Bounty-Hunting
4f02129f09bc979dfde3a87d76e86f305d8ab0ba | 1,140 | md | Markdown | content/zh/nginx/nginx_static_file.md | aiopsclub/devopsclub | 96ab27994e1343ce99de3032932caf0ba7809229 | [
"MIT"
] | 2 | 2020-07-12T09:59:32.000Z | 2021-02-25T11:07:11.000Z | content/zh/nginx/nginx_static_file.md | aiopsclub/devopsclub | 96ab27994e1343ce99de3032932caf0ba7809229 | [
"MIT"
] | 59 | 2020-06-21T02:33:07.000Z | 2021-06-08T05:06:19.000Z | content/zh/nginx/nginx_static_file.md | aiopsclub/devopsclub | 96ab27994e1343ce99de3032932caf0ba7809229 | [
"MIT"
] | 1 | 2020-06-21T03:31:17.000Z | 2020-06-21T03:31:17.000Z | ---
title : "Nginx series: serving static files with nginx"
weight : 5
---
> As a web server, nginx delivers excellent performance when serving static files, and we can easily set up a file service with it to share files over the network. Let's take a look at the concrete configuration of an nginx static file service:
## 1. nginx configuration
```shell
# nginx.conf
user nginx;
error_log /var/log/nginx/error.log;
http {
server {
listen 80;
location / {
autoindex on;
root /data/www;
}
}
}
```
After this simple configuration and an `nginx -s reload`, nginx can act as a static file server. The key part of this configuration is the server block: nginx uses `location` to match the request URI and `root` to specify the root directory of the file service. When the index file [`index.html` by default] cannot be found, the `autoindex` directive returns the file listing of the service root directory in HTML format.
## 2. Static file rules
When the requested URI is /a/b/c.txt, nginx looks under /data/www for the file at the corresponding directory path, that is, /data/www/a/b/c.txt. There are several cases:
1. The file exists: the content of c.txt is returned directly;
2. The file does not exist: if autoindex is not enabled, a 404 page is returned; otherwise nginx first checks whether the directory /data/www/a/b exists. If it does, the file listing of /data/www/a/b is returned directly; otherwise a 404 page is returned.
Simply put, when a file request arrives, nginx joins the requested URI with parameters such as root and then looks up the corresponding file in the file system.
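The lookup rules above can be mimicked with a few lines of Python. This is just a toy model of the behavior described here, not nginx's actual implementation:

```python
import os
import tempfile

def resolve(root, uri, autoindex=True, index="index.html"):
    """Toy model of nginx static-file lookup: returns ('file', path),
    ('listing', path), or ('404', None)."""
    path = os.path.join(root, uri.lstrip("/"))
    if os.path.isfile(path):
        return ("file", path)                 # case 1: file exists
    idx = os.path.join(path, index)
    if os.path.isfile(idx):
        return ("file", idx)                  # directory with an index file
    if autoindex and os.path.isdir(path):
        return ("listing", path)              # autoindex directory listing
    return ("404", None)

# Throwaway tree standing in for /data/www: only a/b/c.txt exists.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
with open(os.path.join(root, "a", "b", "c.txt"), "w") as f:
    f.write("hello")
print(resolve(root, "/a/b/c.txt")[0])             # file
print(resolve(root, "/a/b")[0])                   # listing
print(resolve(root, "/a/b", autoindex=False)[0])  # 404
```

The real server also applies location matching and index handling with more nuance, but the path-joining behavior is the same idea.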
## 3. Result
```shell
# /data/www directory structure
[root@localhost www]# tree
.
├── a
└── abc
    └── e
1 directory, 2 files
```

## 4. Summary
An nginx configuration can contain multiple location blocks, supporting exact, prefix, and regular-expression matching, each following fixed match-order rules. These topics will be covered in dedicated articles; for now, we only need to know how to quickly set up our own file service.
This repository contains a backend for the `saw-core` library that uses
the `sbv` library for communication with SMT solvers.
---
title: Tutorial – Criar um conjunto de dimensionamento de máquinas virtuais para o Windows no Azure | Microsoft Docs
description: Neste tutorial, você aprenderá a usar o Azure PowerShell para criar e implantar um aplicativo altamente disponível em VMs Windows usando um conjunto de dimensionamento de máquinas virtuais
services: virtual-machine-scale-sets
documentationcenter: ''
author: cynthn
manager: gwallace
editor: ''
tags: azure-resource-manager
ms.assetid: ''
ms.service: virtual-machine-scale-sets
ms.workload: infrastructure-services
ms.tgt_pltfrm: na
ms.devlang: ''
ms.topic: tutorial
ms.date: 11/30/2018
ms.author: cynthn
ms.custom: mvc
ms.openlocfilehash: 66b9099c8989b5ad3df1d8e27eb33a19ee6f23eb
ms.sourcegitcommit: c105ccb7cfae6ee87f50f099a1c035623a2e239b
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 07/09/2019
ms.locfileid: "67708083"
---
# <a name="tutorial-create-a-virtual-machine-scale-set-and-deploy-a-highly-available-app-on-windows-with-azure-powershell"></a>Tutorial: Create a virtual machine scale set and deploy a highly available app on Windows with Azure PowerShell
A virtual machine scale set allows you to deploy and manage a set of identical, autoscaling virtual machines. You can scale the number of VMs in the scale set manually. You can also define rules to autoscale based on resource usage such as CPU, memory demand, or network traffic. In this tutorial, you deploy a virtual machine scale set in Azure and learn how to:
> [!div class="checklist"]
> * Use the Custom Script Extension to define an IIS site to scale
> * Create a load balancer for your scale set
> * Create a virtual machine scale set
> * Increase or decrease the number of instances in a scale set
> * Create autoscale rules
## <a name="launch-azure-cloud-shell"></a>Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/powershell](https://shell.azure.com/powershell). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press Enter to run it.
## <a name="scale-set-overview"></a>Scale set overview
A virtual machine scale set allows you to deploy and manage a set of identical, autoscaling virtual machines. VMs in a scale set are distributed across logical fault and update domains in one or more *placement groups*. Placement groups are groups of similarly configured VMs, similar to [availability sets](tutorial-availability-sets.md).
VMs are created as needed in a scale set. You define autoscale rules to control how and when VMs are added or removed from the scale set. These rules can be triggered based on metrics such as CPU load, memory usage, or network traffic.
Scale sets support up to 1,000 VMs when you use an Azure platform image. For workloads with significant installation or VM customization requirements, you may want to [create a custom VM image](tutorial-custom-images.md). You can create up to 300 VMs in a scale set when using a custom image.
## <a name="create-a-scale-set"></a>Create a scale set
Create a virtual machine scale set with [New-AzVmss](https://docs.microsoft.com/powershell/module/az.compute/new-azvmss). The following example creates a scale set named *myScaleSet* that uses the *Windows Server 2016 Datacenter* platform image. The Azure network resources for virtual network, public IP address, and load balancer are automatically created. When prompted, you can provide your own administrative credentials for the VM instances in the scale set:
```azurepowershell-interactive
New-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-Location "EastUS" `
-VMScaleSetName "myScaleSet" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-PublicIpAddressName "myPublicIPAddress" `
-LoadBalancerName "myLoadBalancer" `
-UpgradePolicyMode "Automatic"
```
It takes a few minutes to create and configure all the scale set resources and VMs.
## <a name="deploy-sample-application"></a>Deploy sample application
To test your scale set, install a basic web application. The Azure Custom Script Extension is used to download and run a script that installs IIS on the VM instances. This extension is useful for post-deployment configuration, software installation, or any other configuration or management task. For more information, see the [Custom Script Extension overview](extensions-customscript.md).
Use the Custom Script Extension to install a basic IIS web server. Apply the Custom Script Extension that installs IIS as follows:
```azurepowershell-interactive
# Define the script for your Custom Script Extension to run
$publicSettings = @{
"fileUris" = (,"https://raw.githubusercontent.com/Azure-Samples/compute-automation-configurations/master/automate-iis.ps1");
"commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File automate-iis.ps1"
}
# Get information about the scale set
$vmss = Get-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
# Use Custom Script Extension to install IIS and configure basic website
Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
-Name "customScript" `
-Publisher "Microsoft.Compute" `
-Type "CustomScriptExtension" `
-TypeHandlerVersion 1.8 `
-Setting $publicSettings
# Update the scale set and apply the Custom Script Extension to the VM instances
Update-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myScaleSet" `
-VirtualMachineScaleSet $vmss
```
## <a name="allow-traffic-to-application"></a>Allow traffic to application
To allow access to the basic web application, create a network security group with [New-AzNetworkSecurityRuleConfig](https://docs.microsoft.com/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](https://docs.microsoft.com/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md).
```azurepowershell-interactive
# Get information about the scale set
$vmss = Get-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
#Create a rule to allow traffic over port 80
$nsgFrontendRule = New-AzNetworkSecurityRuleConfig `
-Name myFrontendNSGRule `
-Protocol Tcp `
-Direction Inbound `
-Priority 200 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 80 `
-Access Allow
#Create a network security group and associate it with the rule
$nsgFrontend = New-AzNetworkSecurityGroup `
-ResourceGroupName "myResourceGroupScaleSet" `
-Location EastUS `
-Name myFrontendNSG `
-SecurityRules $nsgFrontendRule
$vnet = Get-AzVirtualNetwork `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name myVnet
$frontendSubnet = $vnet.Subnets[0]
$frontendSubnetConfig = Set-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet `
-Name mySubnet `
-AddressPrefix $frontendSubnet.AddressPrefix `
-NetworkSecurityGroup $nsgFrontend
Set-AzVirtualNetwork -VirtualNetwork $vnet
# Update the scale set and apply the Custom Script Extension to the VM instances
Update-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myScaleSet" `
-VirtualMachineScaleSet $vmss
```
## <a name="test-your-scale-set"></a>Test your scale set
To see your scale set in action, get the public IP address of your load balancer with [Get-AzPublicIPAddress](https://docs.microsoft.com/powershell/module/az.network/get-azpublicipaddress). The following example displays the IP address for *myPublicIP*, created as part of the scale set:
```azurepowershell-interactive
Get-AzPublicIPAddress `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myPublicIPAddress" | select IpAddress
```
Enter the public IP address into a web browser. The web app is displayed, including the hostname of the VM that the load balancer distributed traffic to:

To see the scale set in action, you can force-refresh your web browser to see the load balancer distribute traffic across all the VMs running your app.
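If you want to check the distribution programmatically rather than by refreshing the browser, a small helper can tally which instance served each response. The page format below is an assumption about the sample IIS site's output, not something guaranteed by this tutorial; adjust the pattern to whatever your page actually prints:

```python
import re
from collections import Counter

def count_hosts(pages):
    """Count responses per VM hostname, assuming each page contains
    a 'Hello World from host <name> !' line (hypothetical markup)."""
    hits = Counter()
    for html in pages:
        match = re.search(r"host\s+(\S+)\s*!", html)
        if match:
            hits[match.group(1)] += 1
    return hits

# Responses captured from three refreshes (made-up hostnames):
pages = [
    "Hello World from host myScaleSet000000 !",
    "Hello World from host myScaleSet000001 !",
    "Hello World from host myScaleSet000000 !",
]
print(count_hosts(pages))
```

Feeding it the bodies of repeated HTTP GETs against the load balancer's public IP would show the traffic spread across instances.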
## <a name="management-tasks"></a>Management tasks
Throughout the lifecycle of the scale set, you may need to run one or more management tasks. Additionally, you may want to create scripts that automate various lifecycle tasks. Azure PowerShell provides a quick way to do those tasks. Here are a few common tasks.
### <a name="view-vms-in-a-scale-set"></a>View VMs in a scale set
To view a list of VM instances in a scale set, use [Get-AzVmssVM](https://docs.microsoft.com/powershell/module/az.compute/get-azvmssvm) as follows:
```azurepowershell-interactive
Get-AzVmssVM `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
```
The following example output shows two VM instances in the scale set:
```powershell
ResourceGroupName Name Location Sku InstanceID ProvisioningState
----------------- ---- -------- --- ---------- -----------------
MYRESOURCEGROUPSCALESET myScaleSet_0 eastus Standard_DS1_v2 0 Succeeded
MYRESOURCEGROUPSCALESET myScaleSet_1 eastus Standard_DS1_v2 1 Succeeded
```
To view additional information about a specific VM instance, add the `-InstanceId` parameter to [Get-AzVmssVM](https://docs.microsoft.com/powershell/module/az.compute/get-azvmssvm). The following example views information about VM instance *1*:
```azurepowershell-interactive
Get-AzVmssVM `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet" `
-InstanceId "1"
```
### <a name="increase-or-decrease-vm-instances"></a>Increase or decrease VM instances
To see the number of instances you currently have in a scale set, use [Get-AzVmss](https://docs.microsoft.com/powershell/module/az.compute/get-azvmss) and query on *sku.capacity*:
```azurepowershell-interactive
Get-AzVmss -ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet" | `
Select -ExpandProperty Sku
```
You can then manually increase or decrease the number of virtual machines in the scale set with [Update-AzVmss](https://docs.microsoft.com/powershell/module/az.compute/update-azvmss). The following example increases the number of VMs in the scale set to *3*:
```azurepowershell-interactive
# Get current scale set
$scaleset = Get-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
# Set and update the capacity of your scale set
$scaleset.sku.capacity = 3
Update-AzVmss -ResourceGroupName "myResourceGroupScaleSet" `
-Name "myScaleSet" `
-VirtualMachineScaleSet $scaleset
```
It takes a few minutes to update the specified number of instances in your scale set.
### <a name="configure-autoscale-rules"></a>Configure autoscale rules
Rather than manually scaling the number of instances in your scale set, you can define autoscale rules. These rules monitor the instances in your scale set and respond accordingly based on metrics and thresholds you define. The following example scales out the number of instances by one when the average CPU load is greater than 60% over a 5-minute period. If the average CPU load then drops below 30% over a 5-minute period, the instances are scaled in by one instance:
```azurepowershell-interactive
# Define your scale set information
$mySubscriptionId = (Get-AzSubscription)[0].Id
$myResourceGroup = "myResourceGroupScaleSet"
$myScaleSet = "myScaleSet"
$myLocation = "East US"
$myScaleSetId = (Get-AzVmss -ResourceGroupName $myResourceGroup -VMScaleSetName $myScaleSet).Id
# Create a scale up rule to increase the number instances after 60% average CPU usage exceeded for a 5-minute period
$myRuleScaleUp = New-AzAutoscaleRule `
-MetricName "Percentage CPU" `
-MetricResourceId $myScaleSetId `
-Operator GreaterThan `
-MetricStatistic Average `
-Threshold 60 `
-TimeGrain 00:01:00 `
-TimeWindow 00:05:00 `
-ScaleActionCooldown 00:05:00 `
-ScaleActionDirection Increase `
-ScaleActionValue 1
# Create a scale down rule to decrease the number of instances after 30% average CPU usage over a 5-minute period
$myRuleScaleDown = New-AzAutoscaleRule `
-MetricName "Percentage CPU" `
-MetricResourceId $myScaleSetId `
-Operator LessThan `
-MetricStatistic Average `
-Threshold 30 `
-TimeGrain 00:01:00 `
-TimeWindow 00:05:00 `
-ScaleActionCooldown 00:05:00 `
-ScaleActionDirection Decrease `
-ScaleActionValue 1
# Create a scale profile with your scale up and scale down rules
$myScaleProfile = New-AzAutoscaleProfile `
-DefaultCapacity 2 `
-MaximumCapacity 10 `
-MinimumCapacity 2 `
-Rule $myRuleScaleUp,$myRuleScaleDown `
-Name "autoprofile"
# Apply the autoscale rules
Add-AzAutoscaleSetting `
-Location $myLocation `
-Name "autosetting" `
-ResourceGroup $myResourceGroup `
-TargetResourceId $myScaleSetId `
-AutoscaleProfile $myScaleProfile
```
For more design information on the use of autoscale, see [autoscale best practices](/azure/architecture/best-practices/auto-scaling).
## <a name="next-steps"></a>Next steps
In this tutorial, you created a virtual machine scale set. You learned how to:
> [!div class="checklist"]
> * Use the Custom Script Extension to define an IIS site to scale
> * Create a load balancer for your scale set
> * Create a virtual machine scale set
> * Increase or decrease the number of instances in a scale set
> * Create autoscale rules
Advance to the next tutorial to learn more about load balancing concepts for virtual machines.
> [!div class="nextstepaction"]
> [Load balance virtual machines](tutorial-load-balancer.md)
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="style.css">
<style>
.header {
background-image: url('https://github.com/IBM/Developer-Playground/blob/master/didact/images/data-quality.png?raw=true');
}
</style>
</head>
<body>
<div style="margin-top:2rem"></div>
<div class="hidden-state">$workspace_id</div>
<div class="header">
<div class="left-content">
<div class="apptitle">
Data Quality for AI
</div>
<div class="subheading">
Assess the quality of data sets for AI models.
</div>
</div>
</div>
<br>
<br>
<div class="section" style="font-size:16px; margin-top:-1.25rem">
<p>
Access to quality data can significantly reduce model building time, streamline data preparation efforts, and
improve the overall reliability of the AI pipeline.</p>
<p>
The Data Quality for AI integrated toolkit provides various data profiling and quality estimation metrics to
assess the quality of ingested data in a systematic and objective manner. This application allows you to
experiment with the Data Quality for AI toolkit to assess the quality of your dataset.
</p>
</div>
<div class="section">
<p style="font-size:24px">Learning Resources</p>
<div>
<a href="https://developer.ibm.com/learningpaths/data-quality-ai-toolkit/">Get started with Data Quality for
AI</a></br>
</div>
</div>
<div class="section">
<p style="font-size:24px">Included APIs</p>
<div>
<p><a href="https://developer.ibm.com/apis/catalog/dataquality4ai--data-quality-for-ai/Introduction">Data Quality
for AI API</a></p>
</div>
</div>
<div class="section">
<p style="font-size:24px">Pre-requisites</p>
<div>
<ol>
<li>
<p>IBM Account - <a href="https://ibm.com/registration?cm_sp=ibmdev--developer-sandbox--cloudreg">Create</a>
one for free.</p>
</li>
<li>Obtain API credentials </li>
<ul>
<li><a href="https://www.ibm.com/account/reg/us-en/signup?formid=urx-50307">Subscribe </a> to the Data Quality
for AI.</li>
<li>Check your <a href="https://developer.ibm.com/profile/myapis"> API Subscriptions</a>.</li>
<li>Select the subscription for Data Quality for AI to proceed.</li>
<li>You can obtain your Client ID/Secret from there. Otherwise, you can select "Generate API Key".</li>
</ul>
</ol>
</div>
</div>
<div class="section">
<p style="font-size:24px">Instructions</p>
<p>Please follow all the below steps in proper sequence.</p>
</div>
<div class="timeline-container">
<div class="timeline step git-clone">
<div class="content">
<p>Clone the GitHub repository.</p>
</div>
<input type="checkbox">
<a id="step" class="button is-dark is-medium" title="Get the Code"
href="didact://?commandId=extension.sendToTerminal&text=data-quality%7Cget-code%7Cdata-quality|git%20clone%20-b%20DART%20https://github.com/IBM/Developer-Playground.git%20${CHE_PROJECTS_ROOT}/data-quality/">Get
Code</a>
<span class="dot"></span>
</div>
<div class="timeline step install-dependencies">
<div class="content">
<p>Install required dependencies for executing application.</p>
</div>
<input type="checkbox">
<a id="step" class="button is-dark is-medium" title="Build the Application"
href="didact://?commandId=extension.sendToTerminal&text=data-quality%7Cbuild-application%7Cdata-quality|cd%20${CHE_PROJECTS_ROOT}/data-quality/DataQuality%20%26%26%20npm%20install%20--production">Install
Dependencies</a>
<span class="dot"></span>
</div>
<div class="timeline step configure-application">
<div class="content">
<p>Configure the application. See pre-requisites.</p>
</div>
<input type="checkbox">
<a id="step" class="button is-dark is-medium" title="Open the File"
href="didact://?commandId=extension.openFile&text=data-quality%7Cconfigure-application%7C${CHE_PROJECTS_ROOT}/data-quality/DataQuality/.env">Configure
Application</a>
<span class="dot"></span>
</div>
<div class="timeline step launch-application">
<div class="content">
<p>Launch the application in the preview window.</p>
</div>
<input type="checkbox">
<a id="step" class="button is-dark is-medium" title="Launch the Application"
href="didact://?commandId=extension.sendToTerminal&text=data-quality%7Claunch-application%7Cdata-quality|cd%20${CHE_PROJECTS_ROOT}/data-quality/DataQuality%20%26%26%20node%20server.js">Launch
Application</a>
<span class="dot"></span>
</div>
</div>
<br>
<div class="footer">
<div class="footer-cta">
<div class="footer-step stop-application" style="background:transparent">
<p>To edit or explore the application, make sure to stop it first.</p>
<a class="button is-dark is-medium" title="Stop Application"
href="didact://?commandId=vscode.didact.sendNamedTerminalCtrlC&text=data-quality">Stop Application</a>
</div>
<div class="footer-step explore-application" style="background:transparent">
<p>Explore and update the code as per your requirement.</p>
<a class="button is-dark is-medium" title="Explore the Code"
href="didact://?commandId=extension.openFile&text=data-quality%7Cexplore-code%7C${CHE_PROJECTS_ROOT}/data-quality/DataQuality/src/App.js">Explore
Code</a>
</div>
<div class="footer-step re-launch-application" style="background:transparent">
<p>Re-launch the application to view the changes made.</p>
<a class="button is-dark is-medium" title="Re-Launch the Application"
href="didact://?commandId=extension.sendToTerminal&text=data-quality%7Crelaunch-application%7Cdata-quality|cd%20${CHE_PROJECTS_ROOT}/data-quality/DataQuality%20%26%26%20npm%20install%20--only=dev%20%26%26%20rm%20-rf%20build%20%26%26%20npm%20run%20build%20%26%26%20node%20server.js">Re-Launch
Application</a>
</div>
<div class="footer-step git-push" style="background:transparent">
          <p style="margin-top:0.625rem;">Click to push code to your own GitHub repository. You will need a personal access
            token to complete this action via the CLI. Refer to this <a
              href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token">guide</a>
            for generating your personal access token.</p>
          <a class="button is-dark is-medium" title="Push code to GitHub"
            href="didact://?commandId=vscode.didact.sendNamedTerminalAString&text=sandbox%20terminal$$sh%20/github.sh">Push
            to GitHub</a>
</div>
</div>
<div class="image-div">
<p class="image-content">Want to explore this project more?
        <span style="font-size:15px;margin-top:0px;display:block;">Head over to the <a
            href="https://github.com/IBM/Developer-Playground/tree/DART" target="_blank">GitHub Repository</a></span>
<span style="font-size:15px;margin-top:0px;display:block;">For further assistance reach out to <a
href="https://github.com/IBM/Technology-Sandbox-Support/issues/new/choose" target="_blank"> Help &
Support</a></span>
<span style="font-size:15px;margin-top:0px;display:block;">Check out our <a
href="https://ibm.github.io/Technology-Sandbox-Support/" target="_blank"> FAQs</a></span>
</p>
<div class="image-btn">
        <a class="image-link" href="didact://?commandId=extension.openURL&text=data-quality%7Cview-product-details%7Chttps://www.ibm.com/products/dqaiapi" target="_blank">
View Product Details
<span>
<svg style="position: absolute; right: 0.625rem;" fill="#ffffff" focusable="false"
              preserveAspectRatio="xMidYMid meet" xmlns="http://www.w3.org/2000/svg" width="25" height="25"
viewBox="0 0 32 32" aria-hidden="true">
<path d="M18 6L16.6 7.4 24.1 15 3 15 3 17 24.1 17 16.6 24.6 18 26 28 16z"></path>
<title>Arrow right</title>
</svg>
</span>
</a>
<a class="image-link"
href="didact://?commandId=extension.openURL&text=data-quality%7Cget-trial-subscription%7Chttps://www.ibm.com/account/reg/us-en/signup?formid=urx-50307"
target="_blank">
Get Trial Subscription
<span>
<svg style="position: absolute; right: 0.625rem;" fill="#ffffff" focusable="false"
              preserveAspectRatio="xMidYMid meet" xmlns="http://www.w3.org/2000/svg" width="25" height="25"
viewBox="0 0 32 32" aria-hidden="true">
<path d="M18 6L16.6 7.4 24.1 15 3 15 3 17 24.1 17 16.6 24.6 18 26 28 16z"></path>
<title>Arrow right</title>
</svg>
</span>
</a>
</div>
</div>
</div>
</body>
<script src="progressive.js"></script>
</html>
# OpenStack on Ansible with Vagrant (unofficial)
## Note: this isn't the official OpenStack-Ansible project
You almost certainly want [openstack/openstack-ansible][1] instead, which
is the official OpenStack-Ansible project.
[1]: https://github.com/openstack/openstack-ansible
## Overview
This repository contains scripts that will deploy OpenStack into Vagrant virtual
machines. These scripts are based on the [Official OpenStack
Documentation](http://docs.openstack.org/), Havana release, except where
otherwise noted.
See also [Vagrant, Ansible and OpenStack on your laptop]
(http://www.slideshare.net/lorinh/vagrant-ansible-and-openstack-on-your-laptop)
on SlideShare, though this refers to a much older version of this repo and so is
now out of date.
## Install prereqs
You'll need to install:
* [Vagrant](http://vagrantup.com)
* [Ansible](http://ansible.github.com)
* [python-netaddr](https://pypi.python.org/pypi/netaddr/)
* [python-novaclient](https://pypi.python.org/pypi/python-novaclient) (recommended)
To install Ansible and the other required Python modules:
pip install ansible netaddr python-novaclient
## (Optional) Speed up your provisioning
Install [Vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier) plugin:
vagrant plugin install vagrant-cachier
It allows sharing a local directory containing package caches (Apt, Npm, …)
among VMs.
## Get an Ubuntu 12.04 (precise) Vagrant box
Download a 64-bit Ubuntu Vagrant box:
vagrant box add precise64 http://files.vagrantup.com/precise64.box
## Grab this repository
This repository uses a submodule that contains some custom Ansible modules for
OpenStack, so there's an extra command required after cloning the repo:
git clone http://github.com/openstack-ansible/openstack-ansible.git
cd openstack-ansible
git submodule update --init
## Bring up the cloud
make
This will boot four VMs (controller, network, storage, and a compute node),
install OpenStack, and attempt to boot a test VM inside of OpenStack.
If everything works, you should be able to ssh to the instance from any
of your vagrant hosts:
* username: `cirros`
* password: `cubswin:)`
Note: You may get a "connection refused" when attempting to ssh to the instance.
It can take several minutes for the ssh server to respond to requests, even
though the cirros instance has booted and is pingable.
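If you'd rather script the wait than retry by hand, a small polling helper can loop until ssh succeeds. This helper is not part of the repository — it is a generic sketch, and `<instance-ip>` is a placeholder for whatever address the test instance receives:

```shell
# Run a command repeatedly until it succeeds or we run out of attempts.
wait_for_ssh() {
  tries=0
  max_tries="${MAX_TRIES:-30}"
  until "$@"; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      echo "gave up after $tries attempts" >&2
      return 1
    fi
    sleep "${RETRY_DELAY:-10}"
  done
}

# Example (replace <instance-ip> with the IP nova reports for the test VM):
# wait_for_ssh ssh -o ConnectTimeout=5 cirros@<instance-ip> true
```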
## Vagrant hosts
The hosts for the standard configuration are:
* 10.1.0.2 (our cloud controller)
* 10.1.0.3 (compute node #1)
* 10.1.0.4 (the quantum network host)
* 10.1.0.5 (the swift storage host)
You should be able to ssh to these VMs (username: `vagrant`, password:
`vagrant`). You can also authenticate with the vagrant private key, which is
included here as the file `vagrant_private_key` (NOTE: git does not manage file
permissions, these must be set using "chmod 0600 vagrant_private_key" or ssh
and ansible will fail with an error).
## Interacting with your cloud
You can interact with your cloud directly from your desktop, assuming that you
have the [python-novaclient](http://pypi.python.org/pypi/python-novaclient/)
installed.
Note that the openrc file will be created on the controller by default.
---
title: Erros de conectividade transitórios - Base de Dados Azure para MariaDB
description: Saiba como lidar com erros de conectividade transitórios para a Base de Dados Azure para MariaDB.
keywords: conexão mysql,cadeia de ligação,problemas de conectividade,erro transitório,erro de ligação
author: savjani
ms.author: pariks
ms.service: mariadb
ms.topic: conceptual
ms.date: 3/18/2020
ms.openlocfilehash: 2a651d87411654f1a52c4a097f115ba848ce569c
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 03/29/2021
ms.locfileid: "98660043"
---
# <a name="handling-of-transient-connectivity-errors-for-azure-database-for-mariadb"></a>Handling of transient connectivity errors for Azure Database for MariaDB

This article describes how to handle transient errors when connecting to Azure Database for MariaDB.

## <a name="transient-errors"></a>Transient errors

A transient error, also known as a transient fault, is an error that will resolve itself. Typically, these errors manifest as a connection to the database server being dropped, or as new connections to a server failing to open. Transient errors can occur, for example, when a hardware or network failure happens. Another reason could be a new version of a PaaS service being rolled out. Most of these events are mitigated automatically by the system in less than 60 seconds. One of the best practices for designing and developing applications in the cloud is to expect transient errors. Assume that they can happen in any component at any time, and have the appropriate logic in place to handle these situations.
## <a name="handling-transient-errors"></a>Handling transient errors

Transient errors should be handled using retry logic. Situations that must be considered:

* An error occurs when you try to open a connection.
* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed.
* An active connection that is currently executing a command is dropped.

The first and second cases are fairly straightforward to handle. Try to open the connection again. When you succeed, the transient error has been mitigated by the system. You can use your Azure Database for MariaDB again. We recommend waiting before retrying the connection, and backing off if the first retries fail. This way, the system can use all available resources to overcome the error situation. A good pattern to follow is:

* Wait for 5 seconds before your first retry.
* For each subsequent retry, increase the wait exponentially, up to 60 seconds.
* Set a maximum number of retries, at which point your application considers the operation failed.
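As a rough illustration of that backoff pattern, the loop below is a generic sketch in Python, not an official Azure sample; `connect` stands for whatever callable opens your MariaDB connection:

```python
import time

def connect_with_backoff(connect, max_retries=5, first_wait=5.0,
                         max_wait=60.0, sleep=time.sleep):
    """Call `connect()` until it succeeds: wait 5 s before the first retry
    and double the wait (capped at 60 s) after each subsequent failure."""
    wait = first_wait
    for attempt in range(max_retries + 1):
        try:
            return connect()
        except Exception:
            if attempt == max_retries:
                raise  # give up: let the caller see the last error
            sleep(wait)
            wait = min(wait * 2, max_wait)
```

The `sleep` parameter is injectable only so the waiting behaviour can be tested without real delays.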
When a connection with an active transaction fails, it is harder to handle the recovery correctly. There are two cases: if the transaction was read-only in nature, it is safe to reopen the connection and retry the transaction. If, however, the transaction was also writing to the database, you must determine whether the transaction was rolled back, or whether it succeeded before the transient error happened. In that case, you might simply not have received the commit acknowledgment from the database server.

One way to do this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction. It will succeed if the previous transaction was rolled back and the client-generated unique ID does not yet exist in the system. It will fail, indicating a duplicate key violation, if the unique ID was previously stored because the previous transaction completed successfully.
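A toy sketch of that idempotent-retry idea follows; the table and error classes below are stand-ins for illustration, not a real driver API (MariaDB reports a real duplicate key as error 1062, ER_DUP_ENTRY):

```python
import uuid

class DuplicateKeyError(Exception):
    """Stands in for the driver's duplicate-key error (MariaDB error 1062)."""

class FakeTable:
    """Toy stand-in for a table whose client_id column has a UNIQUE constraint."""
    def __init__(self):
        self._ids = set()

    def insert(self, client_id, payload):
        if client_id in self._ids:
            raise DuplicateKeyError(client_id)
        self._ids.add(client_id)

def write_with_client_id(table, client_id, payload):
    """Retry-safe write: a duplicate-key violation means an earlier attempt
    already committed, so a retry reports success instead of writing twice."""
    try:
        table.insert(client_id, payload)
        return "committed"
    except DuplicateKeyError:
        return "already committed"

# The client generates one ID up front and reuses it for every retry.
client_id = uuid.uuid4().hex
```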
When your program communicates with Azure Database for MariaDB through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.

Make sure to test your retry logic. For example, try to execute your code while scaling the compute resources of your Azure Database for MariaDB server up or down. Your application should handle the brief downtime encountered during this operation without any problems.

## <a name="next-steps"></a>Next steps

* [Troubleshoot connection issues to Azure Database for MariaDB](howto-troubleshoot-common-connection-issues.md)
# Integrating OpenTelemetry PHP into Symfony Applications
## Introduction
As a developer, you might be wondering how OpenTelemetry could be beneficial to you. Without practical examples, the usefulness of distributed tracing can be difficult to grasp for people without a cloud or site reliability engineering background. This user guide shows how OpenTelemetry can be used to gain insights into exceptions happening within an application. This example uses the OpenTelemetry PHP library integrated into a Symfony application, bundled with Jaeger and Zipkin for visualizing data.
## Prerequisites
To follow this guide you will need:
* PHP Installed, this example uses PHP 7.4.
* [Composer](https://getcomposer.org/download/ ) for dependency management.
* [Symfony CLI](https://symfony.com/download) for managing your Symfony application.
* [Docker](https://docs.docker.com/get-docker/) for bundling our visualization tools. We have setup instructions for docker on this project's [readme](https://github.com/open-telemetry/opentelemetry-php#development).
## Step 1 - Creating a Symfony Application
Create a Symfony application by running the command `symfony new my_project_name`. We are calling this example `otel-php-symfony-basic-example`, so the command is as follows:
`symfony new otel-php-symfony-basic-example`.
## Step 2 - Require and Test Symfony Dependencies
To define our routes within our controller methods, let's require the Doctrine annotation library by running the command `composer require doctrine/annotations`.
We can test that routes defined within Controllers work by creating a `HelloController.php` file within the `src\Controller` folder as follows:
```
<?php
namespace App\Controller;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;
class HelloController extends AbstractController
{
/**
* @Route("/hello", name="hello")
*/
public function index(): Response
{
return new Response('Hello World');
}
}
```
To check out the routes available in our current project, run `php bin/console debug:router`.

Let's confirm that our application works by running the command `symfony server:start`.

You can navigate to `http://127.0.0.1:8000/hello` route to see the `Hello world` response returned from the HelloController index method above.

## Step 3 - Require the OpenTelemetry PHP Library
For this step, we require the OpenTelemetry PHP library by running the command `composer require open-telemetry/opentelemetry`. It is worth noting that this command pulls in the latest stable release of the library.
## Step 4 - Bundle Zipkin and Jaeger into the Application
To visualize traces from our application, we have to bundle open source tracing tools [Zipkin](https://zipkin.io/) and [Jaeger](https://www.jaegertracing.io/) into our application using docker.
Let's add a `docker-compose.yaml` file in the root of our project with the content as follows:
```
version: '3.7'
services:
zipkin:
image: openzipkin/zipkin-slim
ports:
- 9411:9411
jaeger:
image: jaegertracing/all-in-one
environment:
COLLECTOR_ZIPKIN_HTTP_PORT: 9412
ports:
- 9412:9412
- 16686:16686
```
To confirm that docker is installed and running on our system, we can run the hello world docker example using the command `docker run -it --rm hello-world`. If everything works well, run `docker-compose up -d` to pull in Zipkin and Jaeger. This might take some time, depending on your internet connection speed.

We can confirm that Zipkin is up by navigating to `http://localhost:9411/` on our browser. For Jaeger, navigating to `http://localhost:16686/` on our browser should display the Jaeger home page.


Now it is time to utilize our OpenTelemetry PHP Library to export traces to both Zipkin and Jaeger.
## Step 5
The entry point for all Symfony applications is the `index.php` file located in the `public` folder. Let's navigate to `public\index.php` to see what is happening. It is worth noting that resources (namespaces, classes, variables) created within the `index.php` file are available within the entire application; by default the index file imports all autoloaded classes within the vendor folder. It also imports the contents of the `.env` file. The other parts of the `index.php` file enable debugging as well as support request and response resolution using the application kernel.
To use open-telemetry specific classes we have to import them at the top of our index file, using the `use` keyword. This is what our imports look like:
```
use App\Kernel;
use OpenTelemetry\Contrib\Jaeger\Exporter as JaegerExporter;
use OpenTelemetry\Contrib\Zipkin\Exporter as ZipkinExporter;
use OpenTelemetry\Sdk\Trace\Clock;
use OpenTelemetry\Sdk\Trace\Sampler\AlwaysOnSampler;
use OpenTelemetry\Sdk\Trace\SamplingResult;
use OpenTelemetry\Sdk\Trace\SpanProcessor\BatchSpanProcessor;
use OpenTelemetry\Sdk\Trace\TracerProvider;
use OpenTelemetry\Trace as API;
use Symfony\Component\Dotenv\Dotenv;
use Symfony\Component\ErrorHandler\Debug;
use Symfony\Component\HttpFoundation\Request;
```
Next, we create a sampling result using the [AlwaysOnSampler](https://github.com/open-telemetry/opentelemetry-php/blob/main/sdk/Trace/Sampler/AlwaysOnSampler.php) class, just before the Kernel instance is created, as shown below:
```
$sampler = new AlwaysOnSampler();
$samplingResult = $sampler->shouldSample(
null,
md5((string) microtime(true)),
substr(md5((string) microtime(true)), 16),
'io.opentelemetry.example',
API\SpanKind::KIND_INTERNAL
);
```
Since we are looking to export traces to both Zipkin and Jaeger we have to make use of their individual exporters;
```
$jaegerExporter = new JaegerExporter(
'Hello World Web Server Jaeger',
'http://localhost:9412/api/v2/spans'
);
$zipkinExporter = new ZipkinExporter(
'Hello World Web Server Zipkin',
'http://localhost:9411/api/v2/spans'
);
```
Next, we create a tracer and add a span processor for each exporter (one for Jaeger and another for Zipkin). Then we proceed to start and activate a span for each tracer. We create the tracers only if the sampling decision is `RECORD_AND_SAMPLED`, as follows:
```
if (SamplingResult::RECORD_AND_SAMPLED === $samplingResult->getDecision()) {
$jaegerTracer = (new TracerProvider())
->addSpanProcessor(new BatchSpanProcessor($jaegerExporter, Clock::get()))
->getTracer('io.opentelemetry.contrib.php');
$zipkinTracer = (new TracerProvider())
->addSpanProcessor(new BatchSpanProcessor($zipkinExporter, Clock::get()))
->getTracer('io.opentelemetry.contrib.php');
$request = Request::createFromGlobals();
$jaegerSpan = $jaegerTracer->startAndActivateSpan($request->getUri());
$zipkinSpan = $zipkinTracer->startAndActivateSpan($request->getUri());
}
```
Finally, we end the active spans once sampling is complete by adding the following block at the end of the `index.php` file:
```
if (SamplingResult::RECORD_AND_SAMPLED === $samplingResult->getDecision()) {
$zipkinTracer->endActiveSpan();
$jaegerTracer->endActiveSpan();
}
```
Let's confirm that we can see exported traces on both Zipkin and Jaeger. To do that, we need to reload `http://127.0.0.1:8000/hello` or any other route on our Symfony server:

We also need to navigate to Zipkin and Jaeger on our browser, using the URLs `http://localhost:9411/` and `http://localhost:16686/`. Do ensure that both your Symfony server and Docker instance are running for this step.
For Jaeger, under Service you should see a `Hello World Web Server Jaeger` service; go ahead and click `Find Traces` to see exported traces.

Once we click on `Find Traces` you should be able to see traces like below:

We can click on a trace to get more information about the trace.

For Zipkin, we can visualize our trace by clicking on `Run Query`

Since resources in Symfony's `public\index.php` file are available to the entire application, we can use any of the already instantiated tracers within `HelloController`. In addition to the tracers, we can also utilize associated properties, methods and events.
Let's try using the span API to capture errors within our controller, as follows:
```
global $zipkinTracer;
if ($zipkinTracer) {
/** @var Span $span */
$span = $zipkinTracer->getActiveSpan();
$span->setAttribute('foo', 'bar');
$span->updateName('New name');
$zipkinTracer->startAndActivateSpan('Child span');
try {
throw new \Exception('Exception Example');
} catch (\Exception $exception) {
$span->setSpanStatus($exception->getCode(), $exception->getMessage());
}
$zipkinTracer->endActiveSpan();
}
```
We need to reload our `http://127.0.0.1:8000/hello` route, then navigate to Zipkin like before to see that our span name gets updated to `New name` and our `Exception Example` is visible:

## Summary
With the above example, we have been able to instrument a Symfony application using the OpenTelemetry PHP library. You can fork the example project [here](https://github.com/prondubuisi/otel-php-symfony-basic-example). You can also check out the original test application [here](https://github.com/dkarlovi/opentelemetry-php-user-test).
---
title: Building Forms
---
Filament comes with a powerful form builder which can be used to create intuitive, dynamic, and contextual forms in the admin panel.
Forms have a schema, which is an array that contains many form components. The schema defines the form's [fields](#fields), their [validation rules](#validation), and their [layout](#layout) in the form.
Here is an example form configuration for a `CustomerResource`:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
Components\TextInput::make('name')->autofocus()->required(),
Components\TextInput::make('email')->email()->required(),
Components\Select::make('type')
->placeholder('Select a type')
->options([
'individual' => 'Individual',
'organization' => 'Organization',
]),
Components\DatePicker::make('birthday'),
])
->columns(2);
}
```
> Please note: when building forms for resources, please ensure that you are using components within the `Filament\Resources\Forms\Components` namespace and not `Filament\Forms\Components`.
## Fields
Resource field classes are located in the `Filament\Resources\Forms\Components` namespace.
All field components have access to the following customization methods:
```php
Field::make($name)
->columnSpan($span = 1) // On large devices, this sets the number of columns that the field should span in the form.
->default($default) // Sets the default value for this field.
->dependable() // Reloads the form when this field is changed.
->disabled($disabled = false) // Make the field read-only.
->extraAttributes($attributes = []) // A key-value array of extra HTML attributes to pass to the field.
->helpMessage($message) // Sets an optional message below the field. It supports Markdown.
->hint($hint) // Sets an optional short message adjacent to the label. It supports Markdown.
->id($id) // Set the HTML ID of the field, which is otherwise automatically generated based on its name.
->label($label); // Set custom label text for with the field, which is otherwise automatically generated based on its name. It supports localization strings.
```
### Checkbox
```php
Checkbox::make($name)
->autofocus() // Autofocus the field.
->inline() // Render the checkbox inline with its label.
->stacked(); // Render the checkbox under its label.
```
### Date Picker
```php
DatePicker::make($name)
->autofocus() // Autofocus the field.
->displayFormat($format = 'F j, Y') // Set the display format of the field, using PHP date formatting tokens.
->firstDayOfWeek($day = 1) // Set the first day of the week in the calendar view, with 1 being Monday, and 0 or 7 being Sunday.
->format($format = 'Y-m-d') // Set the storage format of the field, using PHP date formatting tokens.
->maxDate($date) // Set the maximum date that can be selected.
->minDate($date) // Set the minimum date that can be selected.
->placeholder($placeholder) // Set the placeholder for when the field is empty. It supports localization strings.
->weekStartsOnMonday() // Set the first day of the week to Monday in the calendar view.
->weekStartsOnSunday(); // Set the first day of the week to Sunday in the calendar view.
```
### Date-time Picker
```php
DateTimePicker::make($name)
->autofocus() // Autofocus the field.
->displayFormat($format = 'F j, Y H:i:s') // Set the display format of the field, using PHP date formatting tokens.
->firstDayOfWeek($day = 1) // Set the first day of the week in the calendar view, with 1 being Monday, and 0 or 7 being Sunday.
->format($format = 'Y-m-d H:i:s') // Set the storage format of the field, using PHP date formatting tokens.
->maxDate($date) // Set the maximum date that can be selected.
->minDate($date) // Set the minimum date that can be selected.
->placeholder($placeholder) // Set the placeholder for when the field is empty. It supports localization strings.
->weekStartsOnMonday() // Set the first day of the week to Monday in the calendar view.
->weekStartsOnSunday() // Set the first day of the week to Sunday in the calendar view.
->withoutSeconds(); // Hide the seconds input.
```
### File Upload
```php
FileUpload::make($name)
->acceptedFileTypes($types = []) // Limit the type of files that can be uploaded using an array of mime types.
->avatar() // Make the field suitable for uploading and displaying a circular avatar.
->disk($disk) // Set a custom disk that uploaded files should be read from and written to.
->directory($directory) // Set a custom directory that uploaded files should be written to.
->image() // Allow only images to be uploaded.
->imageCropAspectRatio($ratio) // Crop images to this certain aspect ratio when they are uploaded, e.g: '1:1'.
->imagePreviewHeight($height) // Set the height of the image preview in pixels.
->imageResizeTargetHeight($height) // Resize images to this height (in pixels) when they are uploaded.
->imageResizeTargetWidth($width) // Resize images to this width (in pixels) when they are uploaded.
->loadingIndicatorPosition($position = 'right') // Set the position of the loading indicator.
->maxSize($size) // Set the maximum size of files that can be uploaded, in kilobytes.
->minSize($size) // Set the minimum size of files that can be uploaded, in kilobytes.
->panelAspectRatio($ratio) // Set the aspect ratio of the panel, e.g: '1:1'.
->panelLayout($layout) // Set the layout of the panel.
->placeholder($placeholder) // Set the placeholder for when no file has been uploaded. It supports localization strings.
->removeUploadButtonPosition($position = 'left') // Set the position of the remove upload button.
->uploadButtonPosition($position = 'right') // Set the position of the upload button.
->uploadProgressIndicatorPosition($position = 'right') // Set the position of the upload progress indicator.
->visibility($visibility = 'public'); // Set the visibility of uploaded files.
```
> Please note, it is the responsibility of the developer to delete these files from the disk if they are removed, as Filament is unaware if they are depended on elsewhere. One way to do this automatically is observing a [model event](https://laravel.com/docs/eloquent#events).
> To customize Livewire's default file upload validation rules, please refer to its [documentation](https://laravel-livewire.com/docs/file-uploads#global-validation).
> Available values for the position methods can be found on [Filepond's website](https://pqina.nl/filepond/docs/patterns/api/filepond-instance#styles).
> Support for multiple file uploads is coming soon. For more information, please see our [Development Roadmap](roadmap).
### Key-value
```php
KeyValue::make($name)
->addButtonLabel($label) // Set the add button label. It supports localization strings.
->deleteButtonLabel($label) // Set the delete button label. It supports localization strings.
->disableAddingRows($state = false) // Disable the addition of rows.
->disableDeletingRows($state = false) // Disable the deletion of rows.
->disableEditingKeys($state = false) // Disable the editing of keys.
    ->keyLabel($label) // Set the key field label. It supports localization strings.
->keyPlaceholder($placeholder) // Set the key field placeholder. It supports localization strings.
->sortable($sortable = true) // Allow the keys to be sorted using drag and drop.
->sortButtonLabel($label) // Set the sort button label. It supports localization strings.
    ->valueLabel($label) // Set the value field label. It supports localization strings.
->valuePlaceholder($placeholder); // Set the value field placeholder. It supports localization strings.
```
### Markdown Editor
```php
MarkdownEditor::make($name)
->attachmentDisk($disk) // Set a custom disk that uploaded attachments should be read from and written to.
->attachmentDirectory($directory) // Set a custom directory that uploaded attachments should be written to.
->autofocus() // Autofocus the field.
->disableAllToolbarButtons() // Disable all toolbar buttons.
->disableToolbarButtons($buttons = []) // Disable toolbar buttons. See below for options.
->enableToolbarButtons($buttons = []) // Enable toolbar buttons. See below for options.
->placeholder($placeholder); // Set the placeholder for when the field is empty. It supports localization strings.
```
#### Toolbar Buttons
```
attachFiles
bold
bullet
code
italic
link
number
preview
strike
write
```
### Rich Editor
```php
RichEditor::make($name)
->attachmentDisk($disk) // Set a custom disk that uploaded attachments should be read from and written to.
->attachmentDirectory($directory) // Set a custom directory that uploaded attachments should be written to.
->autofocus() // Autofocus the field.
->disableAllToolbarButtons() // Disable all toolbar buttons.
->disableToolbarButtons($buttons = []) // Disable toolbar buttons. See below for options.
->enableToolbarButtons($buttons = []) // Enable toolbar buttons. See below for options.
->placeholder($placeholder); // Set the placeholder for when the field is empty. It supports localization strings.
```
#### Toolbar Buttons
```
attachFiles
bold
bullet
code
heading
italic
link
number
quote
redo
strike
subheading
title
undo
```
### Select
```php
Select::make($name)
->autofocus() // Autofocus the field.
->emptyOptionsMessage($message) // Set the message for when there are no options available to pick from. It supports localization strings.
->noSearchResultsMessage($message) // Set the message for when there are no option search results. It supports localization strings.
->options($options = []) // Set the key-value array of available options to pick from.
->placeholder($placeholder); // Set the placeholder for when the field is empty. It supports localization strings.
```
> If you're looking to use a select for a `belongsTo()` relationship, please check out the [`BelongsToSelect` resource field](resources#managing-single-related-records).
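A hedged sketch of a select that combines the methods above (the field name and options are illustrative assumptions):
```php
Select::make('status')
    ->placeholder('Select a status')
    ->options([
        'draft' => 'Draft',
        'published' => 'Published',
    ])
    ->emptyOptionsMessage('No statuses are available.');
```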
### Tags Input
```php
TagsInput::make($name)
->autofocus() // Autofocus the field.
->placeholder($placeholder) // Set the placeholder for when the new tag field is empty. It supports localization strings.
->separator($separator = ','); // Set the separator that should be used between tags.
```
### Textarea
```php
Textarea::make($name)
->autocomplete($autocomplete = 'on') // Set up autocomplete for the field.
->autofocus() // Autofocus the field.
->cols($cols) // Set the number of columns the textarea spans.
->disableAutocomplete() // Disable autocomplete for the field.
->placeholder($placeholder) // Set the placeholder for when the field is empty. It supports localization strings.
->rows($rows); // Set the number of rows the textarea spans.
```
### Text Input
```php
TextInput::make($name)
->autocomplete($autocomplete = 'on') // Set up autocomplete for the field.
->autofocus() // Autofocus the field.
->disableAutocomplete() // Disable autocomplete for the field.
->email() // Require a valid email address to be provided.
->max($max) // Set a maximum numeric value to be provided.
->min($min) // Set a minimum numeric value to be provided.
->numeric() // Require a numeric value to be provided.
->password() // Obfuscate the field's value.
->placeholder($placeholder) // Set the placeholder for when the field is empty. It supports localization strings.
->postfix($postfix) // Set a postfix label to be displayed after the input.
->prefix($prefix) // Set a prefix label to be displayed before the input.
->tel() // Require a valid telephone number to be provided.
->type($type = 'text') // Set the input's HTML type.
->url(); // Require a valid URL to be provided.
```
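As an example of combining several of these methods, here is a minimal sketch of a price input (the field name, bounds, and currency prefix are illustrative assumptions):
```php
TextInput::make('price')
    ->numeric()
    ->min(0)
    ->max(10000)
    ->prefix('$')
    ->placeholder('0.00');
```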
### Toggle
The `onIcon()` and `offIcon()` methods support the name of any Blade icon component, and passes a set of formatting classes to it. By default, the [Blade Heroicons](https://github.com/blade-ui-kit/blade-heroicons) package is installed, so you may use the name of any [Heroicon](https://heroicons.com) out of the box. However, you may create your own custom icon components or install an alternative library if you wish.
```php
Toggle::make($name)
->autofocus() // Autofocus the field.
->inline() // Render the toggle inline with its label.
->offIcon($icon) // Set the icon that should be displayed when the toggle is off.
->onIcon($icon) // Set the icon that should be displayed when the toggle is on.
->stacked(); // Render the toggle under its label.
```
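For instance, a hedged sketch of a toggle with custom icons — assuming the Blade Heroicons component names `heroicon-o-check` and `heroicon-o-x`, and an illustrative field name:
```php
Toggle::make('is_published')
    ->onIcon('heroicon-o-check')
    ->offIcon('heroicon-o-x');
```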
## Validation
Filament provides a number of validation methods that can be applied to fields. Please refer to the [Laravel Validation docs](https://laravel.com/docs/validation#available-validation-rules) if you are unsure about any of these.
```php
->acceptedFileTypes($types = []) // Accepts an array of mime types, file upload field only.
->confirmed($field = '{field name}Confirmation') // Text-based fields only.
->email() // Text input field only.
->image() // File upload field only.
->max($value) // Text input field only.
->maxDate($date) // Date-based fields only.
->maxLength($length) // Text-based fields only.
->maxSize($size) // In kilobytes, file upload field only.
->min($value) // Text input field only.
->minDate($date) // Date-based fields only.
->minLength($length) // Text-based fields only.
->minSize($size) // In kilobytes, file upload field only.
->nullable() // Applied to all fields by default.
->numeric() // Text input field only.
->required()
->requiredWith()
->same($field) // Text-based fields only.
->tel() // Text input field only.
->unique($table, $column = '{field name}', $exceptCurrentRecord = false)
->url() // Text input field only.
```
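As a sketch, several of these rules chained onto a text input — the `users` table and `email` column passed to `unique()` are illustrative assumptions:
```php
TextInput::make('email')
    ->email()
    ->required()
    ->maxLength(255)
    ->unique('users', 'email', true); // Ignore the current record when editing.
```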
You may apply additional custom validation rules to any field using the `rules()` method:
```php
Field::make($name)
->rules(['alpha', 'ends_with:a']);
```
> Please note: when specifying **resource** field names in custom validation rules, you must prefix them with `record.`.
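For example, a custom rule that compares one resource field against another would use the `record.` prefix like this (the field names are hypothetical):
```php
TextInput::make('discount')
    ->rules(['lte:record.price']); // "discount" must not exceed the record's "price" field.
```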
## Layout
### Grid
By default, form fields are stacked on top of each other in one column. To change this across the entire form, you may chain the `columns()` method onto the form object:
```php
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
])
->columns(2);
}
```
Alternatively, you may customize the number of columns for a small part of the form using a Grid component:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Grid::make([
// ...
])->columns(2),
]);
}
```
### Section
You may want to separate your fields into sections, each with a heading and subheading. To do this, you can use a Section component:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Section::make(
'Heading',
'Subheading',
[
// ...
],
),
]);
}
```
If you don't require a subheading, you may use the `schema()` method to declare the section schema separately:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Section::make('Heading')
->schema([
// ...
]),
]);
}
```
You may use the `columns()` method to easily create a [grid](#grid) within the section:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Section::make(
'Heading',
'Subheading',
[
// ...
],
)->columns(2),
]);
}
```
Sections may be `collapsible()` to optionally hide content in long forms:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Section::make(
'Heading',
'Subheading',
[
// ...
],
)->collapsible(),
]);
}
```
You may `collapse()` sections by default:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Section::make(
'Heading',
'Subheading',
[
// ...
],
)->collapsed(),
]);
}
```
### Fieldset
You may want to group fields into a Fieldset. Each fieldset has a label, a border, and a two-column grid:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Fieldset::make(
'Label',
[
// ...
],
),
]);
}
```
You may use the `columns()` method to customize the number of columns in the fieldset:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Fieldset::make(
'Label',
[
// ...
],
)->columns(3),
]);
}
```
### Tabs
Some forms can be long and complex. You may want to use tabs to reduce the number of fields that are visible at once:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Tabs::make('Label')
->tabs([
Components\Tab::make(
'First Tab',
[
// ...
],
),
Components\Tab::make(
'Second Tab',
[
// ...
],
),
]),
]);
}
```
You may use the `columns()` method to easily create a [grid](#grid) within the tab:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Tabs::make('Label')
->tabs([
Components\Tab::make(
'Tab',
[
// ...
],
)->columns(2),
]),
]);
}
```
### Group
Groups are used to wrap multiple associated form components. They have no effect on the form visually, but are useful for applying modifications to many fields at once:
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Group::make([
// ...
]),
]);
}
```
### Placeholder
Placeholders can be used to render text-only "fields" within your forms. Each placeholder has a value, which cannot be changed by the user.
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
// ...
Components\Placeholder::make('website', 'filamentadmin.com'),
]);
}
```
## Dependent Fields
Dependent fields are fields that are modified based on the value of another. For example, you could show a group of fields based on the value of a Select.
The first step to setting up dependent fields is to apply the `dependable()` method to the field that should be watched for changes. When the value of this field is changed, the whole form will reload:
```php
Components\Select::make('type')
->placeholder('Select a type')
->options([
'individual' => 'Individual',
'organization' => 'Organization',
])
->dependable();
```
To modify fields based on the value of another, you may use the `when()` method. The first argument to this method is a callback that evaluates the `$record` object and returns true or false depending on whether the modifications should be applied. The second argument is a callback that makes modifications to the current field. If no second argument is supplied, the field will only be shown when the callback in the first argument returns true.
In this example, the fields in the [group](#group) will only be shown when the `type` field is set to `individual`:
```php
Components\Group::make([
// ...
])->when(fn ($record) => $record->type === 'individual');
```
Here, the `company_number` field will only be required when the `type` field is set to `organization`:
```php
Components\TextInput::make('company_number')
->when(
fn ($record) => $record->type === 'organization',
fn ($field) => $field->required(),
);
```
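Putting the two pieces together, a minimal sketch of a dependent form (the field names mirror the examples above):
```php
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
    return $form
        ->schema([
            Components\Select::make('type')
                ->placeholder('Select a type')
                ->options([
                    'individual' => 'Individual',
                    'organization' => 'Organization',
                ])
                ->dependable(), // Reload the form when this value changes.
            // Only shown while "individual" is selected:
            Components\Group::make([
                // ...
            ])->when(fn ($record) => $record->type === 'individual'),
            // Only required while "organization" is selected:
            Components\TextInput::make('company_number')
                ->when(
                    fn ($record) => $record->type === 'organization',
                    fn ($field) => $field->required(),
                ),
        ]);
}
```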
## Context Customization
You may customize forms based on the page they are used on. To do this, you can chain the `only()` or `except()` methods onto any form component.
```php
use App\Filament\Resources\CustomerResource\Pages;
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
Components\TextInput::make('name')
->required()
->only(Pages\CreateCustomer::class),
]);
}
```
In this example, the `name` field will `only()` be displayed on the `CreateCustomer` page.
```php
use App\Filament\Resources\CustomerResource\Pages;
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
return $form
->schema([
Components\TextInput::make('name')
->except(Pages\EditCustomer::class, fn ($field) => $field->required()),
]);
}
```
In this example, the `name` field will be required, `except()` on the `EditCustomer` page.
This is an incredibly powerful pattern, and allows you to completely customize a form contextually by chaining as many methods as you wish to the callback.
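As a combined sketch using both methods (the field names are illustrative assumptions):
```php
use App\Filament\Resources\CustomerResource\Pages;
use Filament\Resources\Forms\Components;
use Filament\Resources\Forms\Form;
public static function form(Form $form)
{
    return $form
        ->schema([
            // Shown only on the create page:
            Components\TextInput::make('password')
                ->only(Pages\CreateCustomer::class),
            // Required everywhere except the edit page:
            Components\TextInput::make('name')
                ->except(Pages\EditCustomer::class, fn ($field) => $field->required()),
        ]);
}
```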
## Developing Custom Components
To create a custom field, you may use:
```bash
php artisan make:filament-field CountrySelect --resource
```
This will create a new custom class and view for your field, which you may use in a form in the same way as any other field.
To create a generic form component, which may be commonly used for custom layouts, you may generate a class and view using:
```bash
php artisan make:filament-form-component SidebarLayout --resource
```
Alternatively, simple custom layouts may be created using a `View` component, and passing the name of a `$view` in your app:
```php
Components\View::make($view);
```
| 34.859216 | 419 | 0.652053 | eng_Latn | 0.97712 |
4f0432d709c3728bcc6d3e0f82f568f921d234b2 | 1,916 | md | Markdown | _posts/Algorithm/Doit/exercise/chap05/2022-01-21-chap05-4.md | ijnooyah/ijnooyah.github.io | 2f3d181ee9f66e5031da564737410d34e3c00b5f | [
"MIT"
] | null | null | null | _posts/Algorithm/Doit/exercise/chap05/2022-01-21-chap05-4.md | ijnooyah/ijnooyah.github.io | 2f3d181ee9f66e5031da564737410d34e3c00b5f | [
"MIT"
] | null | null | null | _posts/Algorithm/Doit/exercise/chap05/2022-01-21-chap05-4.md | ijnooyah/ijnooyah.github.io | 2f3d181ee9f66e5031da564737410d34e3c00b5f | [
"MIT"
] | null | null | null | ---
title: "[Doit Introductory Algorithm Study] Week 4, Session 3: Recursive Algorithms"
excerpt: "ch05-3 Tower of Hanoi, ch05-4 8-Queens Problem"
categories:
- Doit
tags:
- Algorithm
- Java
last_modified_at: 2022-01-19
---
# 05-3 Tower of Hanoi
## 1. Tower of Hanoi
Parameters of the `move` method:
no: the number of disks to move
x: the number of the starting peg
y: the number of the destination peg
```java
// Practice 5-6
package doit.exercise.chap05;
import java.util.Scanner;
// Tower of Hanoi
public class Hanoi {
    // Move no disks from peg x to peg y
    static void move(int no, int x, int y) {
        if (no > 1)
            move(no - 1, x, 6 - x - y);
        System.out.println("Move disk [" + no + "] from peg " + x + " to peg " + y);
        if (no > 1)
            move(no - 1, 6 - x - y, y);
    }
    public static void main(String[] args) {
        Scanner stdIn = new Scanner(System.in);
        System.out.println("Tower of Hanoi");
        System.out.print("Number of disks: ");
        int n = stdIn.nextInt();
        move(n, 1, 3); // Move the n disks on peg 1 to peg 3
    }
}
```
# 05-4 The 8-Queens Problem
## 1. What is the 8-Queens Problem?
The 8-queens problem frequently appears as an example to help with understanding recursive algorithms.
"Place 8 queens on an 8*8 chessboard so that they cannot attack and capture one another."
(A queen can move in a straight line in any of eight directions from the square on which it stands.)
## 2. Placing the Queens
## 3. Branching
Divide and conquer: a technique that, as with the Tower of Hanoi or the 8-queens problem, subdivides a problem and combines
the solutions of the subdivided smaller problems to solve the whole problem.
When subdividing a problem, you should design it so that the solution of the original problem can easily be derived from the solutions of the smaller problems.
```java
// Practice 5-7
package doit.exercise.chap05;
// Recursively enumerate the combinations that place one queen in each column.
public class QueenB {
    static int[] pos = new int[8]; // Position (row) of the queen in each column
    // Print the position of the queen in each column
    static void print() {
        for (int i = 0; i < 8; i++)
            System.out.printf("%2d", pos[i]);
        System.out.println();
    }
    // Place a queen in column i (column i, row j)
    static void set(int i) {
        for (int j = 0; j < 8; j++) {
            pos[i] = j; // Place the queen in row j.
            if (i == 7) // Queens have been placed in all columns (print when column is 7, i.e. the last column)
                print();
            else
                set(i + 1); // Place a queen in the next column.
        }
    }
    public static void main(String[] args) {
        set(0); // Start placing queens from column 0.
    }
}
```
| 17.577982 | 75 | 0.51618 | kor_Hang | 1.000009 |
4f05c5af5ae3f1dd27d52b3360c61eec14391cb3 | 383 | md | Markdown | translations/ru-RU/content/github/administering-a-repository/configuring-pull-request-merges.md | nyanthanya/Cuma_Info | d519c49504fc3818c1294f14e63ee944d2f4bd89 | [
"CC-BY-4.0",
"MIT"
] | 10 | 2021-05-10T06:53:57.000Z | 2021-05-17T00:08:22.000Z | translations/ru-RU/content/github/administering-a-repository/configuring-pull-request-merges.md | nyanthanya/Cuma_Info | d519c49504fc3818c1294f14e63ee944d2f4bd89 | [
"CC-BY-4.0",
"MIT"
] | 222 | 2021-04-08T20:13:34.000Z | 2022-03-18T22:37:27.000Z | translations/ru-RU/content/github/administering-a-repository/configuring-pull-request-merges.md | nyanthanya/Cuma_Info | d519c49504fc3818c1294f14e63ee944d2f4bd89 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-05-21T01:41:05.000Z | 2021-05-21T01:41:24.000Z | ---
title: Configuring pull request merges
intro: 'You can configure pull request merges on {% data variables.product.product_location %} to match your workflow and preferences for managing Git history.'
mapTopic: true
redirect_from:
- /articles/configuring-pull-request-merges
versions:
free-pro-team: '*'
enterprise-server: '*'
github-ae: '*'
topics:
- repositories
---
| 25.533333 | 160 | 0.741514 | eng_Latn | 0.912672 |
4f05d614918241b5162205ee49605d4e5b4405fa | 831 | md | Markdown | software/_posts/2018-01-01-mixt-web.md | hallettmiket/hallettmiket.github.io | 94230f6d864a36bd4c749187080e24c1dd1030e7 | [
"Unlicense",
"MIT"
] | 3 | 2018-05-03T11:02:03.000Z | 2018-06-04T19:41:35.000Z | software/_posts/2018-01-01-mixt-web.md | hallettmiket/hallettmiket.github.io | 94230f6d864a36bd4c749187080e24c1dd1030e7 | [
"Unlicense",
"MIT"
] | 4 | 2022-01-13T00:30:58.000Z | 2022-01-13T00:31:05.000Z | software/_posts/2018-01-01-mixt-web.md | hallettmiket/hallettmiket.github.io | 94230f6d864a36bd4c749187080e24c1dd1030e7 | [
"Unlicense",
"MIT"
] | 8 | 2018-05-30T01:46:45.000Z | 2021-11-06T17:27:38.000Z | ---
layout: software
title: "MIxT: web service"
year: 2018
authors: B Fjukstad, Lars Ailo Bongo, V Dumeaux, M Hallett
github: https://github.com/fjukstad/mixt
image: /images/papers/kvik.png
pdf: /pdfs/papers/fjukstad-bioRxiv.pdf
---
The <strong>Matched Interaction Across Tissues (MIxT)</strong> is a system designed for exploring and comparing transcriptional profiles from two or more matched tissues across individuals.
[MIxT Webserver](https://github.com/fjukstad/mixt) is a Go-based package that provides the dynamic web infrastructure using Docker. It provides compute services for large-scale, dynamic computations.
MIxT Webserver communicates with [MIxT App](https://github.com/vdumeaux/mixtApp) to gain access to properly formatted gene expression datasets and associated statistical analyses.
| 39.571429 | 218 | 0.795427 | eng_Latn | 0.876583 |
4f0653f92d7a532ba4ef05dfc48c7fb828c30913 | 632 | md | Markdown | content/lessons/full-document/index.md | Enva2712/frontend-class-2020 | ea9f602afe800b75e597dcc1a513973a9c0fdde0 | [
"MIT"
] | null | null | null | content/lessons/full-document/index.md | Enva2712/frontend-class-2020 | ea9f602afe800b75e597dcc1a513973a9c0fdde0 | [
"MIT"
] | 2 | 2021-03-01T21:17:35.000Z | 2021-09-20T21:58:50.000Z | content/lessons/full-document/index.md | Enva2712/frontend-class-2020 | ea9f602afe800b75e597dcc1a513973a9c0fdde0 | [
"MIT"
] | null | null | null | ---
title: Putting Things Together
lesson: 2
---
# Putting Things Together
In this unit, you will learn to create an entire HTML document, link to CSS in another file, and write CSS selectors.
## Introduction
So far, the HTML you have written is just content; but websites also have metadata, or information about the content. To add metadata, we need to restructure our HTML. One type of metadata we can add is a link to an external CSS file that tells the browser how to style our HTML. To write CSS in another file though, we need to learn about CSS selectors, which tell the browser which HTML element to apply some CSS to.
| 48.615385 | 418 | 0.772152 | eng_Latn | 0.999618 |
4f06a443d8f45acd32ef9531204f70595ebdbd1a | 378 | md | Markdown | dynect/README.md | jdamick/denominator | 60aef58a24175c9ecbd8b1a70ab2e95144e8b362 | [
"Apache-2.0"
] | 457 | 2015-01-02T22:54:05.000Z | 2022-03-05T16:40:45.000Z | dynect/README.md | jdamick/denominator | 60aef58a24175c9ecbd8b1a70ab2e95144e8b362 | [
"Apache-2.0"
] | 127 | 2015-01-05T17:20:19.000Z | 2021-04-01T09:19:50.000Z | dynect/README.md | jdamick/denominator | 60aef58a24175c9ecbd8b1a70ab2e95144e8b362 | [
"Apache-2.0"
] | 89 | 2015-01-06T02:49:28.000Z | 2022-01-12T02:06:21.000Z | ## Notable Behaviors
The following are notable when compared to different providers.
* `Zone.id()` is the `Zone.name()`
* Zone lists are 1 + N requests in order to zip with the SOA's ttl and rname.
* `Zone.ttl()` is the default for new records.
* The Zone's NS record set includes 4 Primary Service Records. These cannot be removed, so `put` requests will silently retain them.
| 54 | 132 | 0.743386 | eng_Latn | 0.999471 |
4f07b4451ecfa8ebc5bd123317c2e1c062a8909a | 1,143 | md | Markdown | src/pages/blog/2016/07/12-01.md | Neos21GitHub/neo.s21.xrea.com | 8b867683273ba36cf7cccd6ff0037e60f7243599 | [
"MIT"
] | null | null | null | src/pages/blog/2016/07/12-01.md | Neos21GitHub/neo.s21.xrea.com | 8b867683273ba36cf7cccd6ff0037e60f7243599 | [
"MIT"
] | null | null | null | src/pages/blog/2016/07/12-01.md | Neos21GitHub/neo.s21.xrea.com | 8b867683273ba36cf7cccd6ff0037e60f7243599 | [
"MIT"
] | null | null | null | ---
title : Shortcut keys to access toolbars placed on the taskbar in Windows 7/10
created : 2016-07-12
last-modified: 2016-07-12
header-date : true
path:
- /index.html Neo's World
- /blog/index.html Blog
- /blog/2016/index.html 2016
- /blog/2016/07/index.html July
hidden-info:
original-blog: Corredor
---
I have toolbars such as the "Links" toolbar and the "Quick Launch" toolbar on the Windows taskbar, and I want to select their links with a keyboard shortcut.
I looked into it in various ways, but there seems to be no keyboard shortcut that directly selects a toolbar placed on the taskbar.
__Pressing Win + T to select the taskbar and then pressing Tab twice to focus the toolbar__ looks like the fastest way. The next fastest operation is to _press Win + B to select the notification area, then Shift + Tab to focus the toolbar just before it_.
In the end, I'll probably just end up selecting it with the mouse anyway.
---
Note that the "Quick Launch" toolbar can be registered by choosing "New toolbar", entering `shell:quick launch`, and clicking OK.
If you uncheck "Lock the taskbar" and right-click on the toolbar's name, you can change whether the title and button names are displayed. For some reason this menu does not appear while the taskbar is locked.
## References
- [Summary of taskbar keyboard operations | Windows no Kayui Toko](http://kayuitoko.blog129.fc2.com/blog-entry-15.html)
- [Windows shortcut keys that don't seem to be widely known | grabacr.nét](http://grabacr.net/archives/313)
- [How to add the "Quick Launch" bar to Windows 7 - PC Trouble Q&A](http://www.724685.com/weekly/qa091223.htm)
## Addendum 2017-03-16
It turns out I wrote an article with exactly the same content 230 days later lol
- [Shortcut keys to access task tray icons and toolbars placed on the taskbar](/blog/2017/02/28-03.html)
| 28.575 | 121 | 0.776903 | yue_Hant | 0.563449 |
4f08ad72594f2a95260c2fc6fa7223ebf50b3a09 | 2,399 | md | Markdown | README.md | NeProgramist/Asynchronous | 38db8319a5ee2bc6d3d1f00897aa39046f8b7afa | [
"MIT"
] | 10 | 2020-05-31T17:01:26.000Z | 2021-01-13T17:13:35.000Z | README.md | NeProgramist/Asynchronous | 38db8319a5ee2bc6d3d1f00897aa39046f8b7afa | [
"MIT"
] | 4 | 2020-05-27T17:29:10.000Z | 2020-05-29T14:38:47.000Z | README.md | NeProgramist/Asynchronous | 38db8319a5ee2bc6d3d1f00897aa39046f8b7afa | [
"MIT"
] | null | null | null | # AsynKtronous Programming
Implementation of [Asynchronous Programming](https://github.com/HowProgrammingWorks/Index/blob/master/Courses/Asynchronous.md) patterns and principles in [Kotlin](https://github.com/JetBrains/kotlin) using [Kotlin Coroutines](https://github.com/Kotlin/kotlinx.coroutines).
- [Timers, timeouts and EventEmitter](https://youtu.be/LK2jveAnRNg):
- [x] [Timers](src/main/kotlin/Timers);
- [x] [EventEmitters](src/main/kotlin/EventEmitter);
- [Asynchronous Programming with callbacks](https://youtu.be/z8Hg6zgi3yQ):
- [x] [AsynchronousProgramming](src/main/kotlin/AsynchronousProgramming);
- [NonBlocking Asynchronous Iteration](https://youtu.be/wYA2cIRYLoA):
- [x] [NonBlocking](src/main/kotlin/NonBlocking);
- [Asynchronous Programming with Promises](https://youtu.be/RMl4r6s1Y8M):
- [x] [Promise](src/main/kotlin/Promise);
- [Asynchronous adapters: promisify, callbackify, asyncify](https://youtu.be/76k6_YkYRmU):
- [x] [AsyncAdapter](src/main/kotlin/AsyncAdapter);
- [Asynchronous Data Collectors](https://youtu.be/tgodt1JL6II):
- [x] [Collector](src/main/kotlin/Collector);
- [Thenable](https://youtu.be/DXp__1VNIvI):
- [x] [Thenable](src/main/kotlin/Promise/a-thenable.kt) (implemented partially);
- [ ] Implement all [repo](https://github.com/HowProgrammingWorks/Thenable);
- [Concurrent Queue](https://youtu.be/Lg46AH8wFvg);
- [x] [ConcurrentQueue](src/main/kotlin/ConcurrentQueue);
- [Revealing Constructor](https://youtu.be/leR5sXRkuJI):
- [x] [RevealingConstructor](src/main/kotlin/RevealingConstructor);
- [x] Used in [Collector/3-class.kt](src/main/kotlin/Collector/3-class.kt);
- [Futures](https://youtu.be/22ONv3AGXdk)
- [x] [Future](src/main/kotlin/Future);
- [Deferred](https://youtu.be/a2fVA1o-ovM):
- [x] [Deferred](src/main/kotlin/Deferred);
- [x] Used as part of [kotlinx.coroutines](https://github.com/Kotlin/kotlinx.coroutines) instead of Promises in most cases;
- [Actor Model](https://youtu.be/xp5MVKEqxY4):
- [ ] ActorModel;
- [Observer and Observable](https://youtu.be/_bFXuLcXoXg):
- [ ] Observer;
- [Asynchronous Function Composition](https://youtu.be/3ZCrMlMpOrM):
- [ ] AsyncCompose.
Note that due to such features as strong typing, JVM multithreading, etc., we didn't implement some examples as-is but modified them according to Kotlin programming traditions and practices.
| 61.512821 | 272 | 0.732388 | yue_Hant | 0.538728 |
4f0a28a585fd23ab0b4c92b83c7ad2ccb84fde91 | 163 | md | Markdown | domain/cnstats.ru/index.md | billfitzgerald/smmd | 9af567b54b39dc2872cf0ee6c3ada27627490c42 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-12-20T19:10:17.000Z | 2021-07-18T22:32:37.000Z | domain/cnstats.ru/index.md | billfitzgerald/smmd | 9af567b54b39dc2872cf0ee6c3ada27627490c42 | [
"CC-BY-4.0",
"MIT"
] | 8 | 2020-06-19T16:02:03.000Z | 2021-08-24T16:49:39.000Z | domain/cnstats.ru/index.md | billfitzgerald/smmd | 9af567b54b39dc2872cf0ee6c3ada27627490c42 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-29T20:36:31.000Z | 2020-06-29T20:36:31.000Z | ---
company-name: CNStats
domain: cnstats.ru
home: http://cnstats.ru/
privacy-policy: http://www.cn-software.com/en/privacy/
email: support@cn-software.com
---
| 14.818182 | 54 | 0.717791 | yue_Hant | 0.338358 |
4f0a606a9af69af3f5dac833d55de3ee5fe628be | 6,186 | md | Markdown | Exchange-Server-2013/modifying-multivalued-properties-exchange-2013-help.md | isabella232/OfficeDocs-Exchange-Test-pr.ru-ru | fa090640fde8845ab5043c30635b4f929777fc72 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2018-07-20T08:47:21.000Z | 2021-05-26T10:59:17.000Z | Exchange-Server-2013/modifying-multivalued-properties-exchange-2013-help.md | MicrosoftDocs/OfficeDocs-Exchange-Test-pr.ru-ru | fa090640fde8845ab5043c30635b4f929777fc72 | [
"CC-BY-4.0",
"MIT"
] | 24 | 2018-06-19T08:37:04.000Z | 2018-09-26T16:37:08.000Z | Exchange-Server-2013/modifying-multivalued-properties-exchange-2013-help.md | isabella232/OfficeDocs-Exchange-Test-pr.ru-ru | fa090640fde8845ab5043c30635b4f929777fc72 | [
"CC-BY-4.0",
"MIT"
] | 12 | 2018-06-19T07:21:50.000Z | 2021-11-15T11:19:10.000Z | ---
title: 'Modifying multivalued properties: Exchange 2013 Help'
TOCTitle: Modifying multivalued properties
ms:assetid: dc2c1062-ad79-404b-8da3-5b5798dbb73b
ms:mtpsurl: https://technet.microsoft.com/ru-ru/library/Bb684908(v=EXCHG.150)
ms:contentKeyID: 50489340
ms.date: 05/22/2018
mtps_version: v=EXCHG.150
ms.translationtype: MT
---
# Modifying multivalued properties
_**Applies to:** Exchange Server 2013_
_**Topic last modified:** 2015-03-09_
A multivalued property is a property that can contain more than one value. For example, the **BlockedRecipients** property of the **RecipientFilterConfig** object can accept multiple recipient addresses, as in the following examples:
- john@contoso.com
- kim@northwindtraders.com
- david@adatum.com
Because the **BlockedRecipients** property can accept more than one value, it is called a multivalued property. This topic explains how to use the Exchange Management Shell to add and remove values on an object's multivalued properties.
For more information about objects, see [Structured data](https://technet.microsoft.com/ru-ru/library/aa996386\(v=exchg.150\)). For more information about the Shell, see [Use PowerShell with Exchange 2013 (Exchange Management Shell)](https://technet.microsoft.com/ru-ru/library/bb123778\(v=exchg.150\)).
## Modifying a multivalued property vs. modifying a property that accepts only one value
Modifying multivalued properties is somewhat different from modifying properties that accept only one value. To modify a property that accepts only one value, you can assign a value directly to it, as in the following command.
```powershell
Set-TransportConfig -MaxSendSize 12MB
```
This command overwrites the stored value to give the **MaxSendSize** property a new value. That's fine for properties that accept only one value. However, it becomes a problem with multivalued properties. Suppose the **BlockedRecipients** property of the **RecipientFilterConfig** object is configured to accept the three values listed in the previous section. When you run the command `Get-RecipientFilterConfig | Format-List BlockedRecipients`, the following is displayed:
```powershell
BlockedRecipients : {david@adatum.com, kim@northwindtraders.com, john@contoso.com}
```
Now suppose you receive a request to add a new SMTP address to the blocked list. To add the new SMTP address, you run the following command:
```powershell
Set-RecipientFilterConfig -BlockedRecipients chris@contoso.com
```
When you run the command `Get-RecipientFilterConfig | Format-List BlockedRecipients` again, the following is displayed:
```powershell
BlockedRecipients : {chris@contoso.com}
```
This isn't what was expected. You wanted to add a new SMTP address to the existing list of blocked recipients, but instead the existing list of blocked recipients was replaced by the new SMTP address. This unintended result shows how modifying a multivalued property differs from modifying a property that accepts only one value. When you modify a multivalued property, you must make sure that you add or remove values without overwriting the entire list of values. The following section shows exactly how to do that.
## Modifying multivalued properties
Modifying multivalued properties is similar to modifying single-valued properties. You just need to add some extra syntax to tell the Shell that you want to add or remove values on the multivalued property instead of replacing everything stored in it. This syntax, together with the values you want to add to or remove from the property, is included as a parameter value when you run the cmdlet. The following table shows the syntax you need to add to a cmdlet parameter to modify multivalued properties.
### Multivalued property syntax
<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="header">
<th>Action</th>
<th>Syntax</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p>Add one or more values to a multivalued property</p></td>
<td>
```powershell
@{Add="<value1>", "<value2>", "<value3>"}
```
</td>
</tr>
<tr class="even">
<td><p>Remove one or more values from a multivalued property</p></td>
<td>
```powershell
@{Remove="<value1>", "<value2>", "<value3>"}
```
</td>
</tr>
</tbody>
</table>
The syntax that you choose from the multivalued property syntax table is specified as a parameter value in the cmdlet. For example, the following command adds multiple values to a multivalued property:
```powershell
Set-ExampleCmdlet -Parameter @{Add="Red", "Blue", "Green"}
```
When you use this syntax, the values you specify are added to or removed from the list of values already present in the property. Using the **BlockedRecipients** example above, we can add chris@contoso.com without overwriting the rest of the values in this property by using the following command:
```powershell
Set-RecipientFilterConfig -BlockedRecipients @{Add="chris@contoso.com"}
```
If you wanted to remove david@adatum.com from the list of values, the command would look like this:
```powershell
Set-RecipientFilterConfig -BlockedRecipients @{Remove="david@adatum.com"}
```
You can also use more complex combinations, such as adding and removing values on a property at the same time. To do this, insert a semicolon (`;`) between the `Add` and `Remove` actions. For example:
```powershell
Set-RecipientFilterConfig -BlockedRecipients @{Add="carter@contoso.com", "sam@northwindtraders.com", "brian@adatum.com"; Remove="john@contoso.com"}
```
If we use the command `Get-RecipientFilterConfig | Format-List BlockedRecipients` once more, we see that the email addresses for Carter, Sam, and Brian were added, and the address for John was removed.
```powershell
BlockedRecipients : {brian@adatum.com, sam@northwindtraders.com, carter@contoso.com, chris@contoso.com, kim@northwindtraders.com}
``` | 48.708661 | 555 | 0.799062 | rus_Cyrl | 0.957705 |
## dotfiles
Awesome dotfiles for [me](https://github.com/tvc12)
### 👓 Basic install
👉 Choose your options in `run.sh`
🏃 Grant execute permission for `run.sh`
```bash
chmod u+x run.sh
```
👉 Finally, run the script
```bash
./run.sh
```
### 💪 Create swap file
👉 Alter the size of the swap file
📖 Example: **16G** in the `create_swap.sh` file.
``` bash
sudo fallocate -l 16G /swapfile
```
🏃 Grant execute permission for `create_swap.sh`
```bash
chmod u+x create_swap.sh
```
👉 Run the script
``` bash
./create_swap.sh
```
### 🤝 Contributor
[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/0)[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/1)[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/2)[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/3)[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/4)[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/5)[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/6)[](https://sourcerer.io/fame/tvc12/tvc12/dotfiles/links/7)
### 😤 LICENSE
📙 [MIT](LICENSE) LICENSE
# TeQRCodeDemo
This is the official demo for scanning, recognizing, and generating QR codes. It was not written by me; I saw it on CocoaChina, and the author did not leave a GitHub link. If the author sees this, please contact me. Many thanks.
4f0c1e81b0cc3809e2d6fa13678e6e8166a5a418 | 5,420 | md | Markdown | docs/parallel/concrt/reference/event-class.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/parallel/concrt/reference/event-class.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/parallel/concrt/reference/event-class.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:53:26.000Z | 2020-05-28T15:53:26.000Z | ---
title: event Class
ms.date: 11/04/2016
f1_keywords:
- CONCRT/concurrency::event
- CONCRT/concurrency::event::reset
- CONCRT/concurrency::event::set
- CONCRT/concurrency::event::wait
- CONCRT/concurrency::event::wait_for_multiple
- CONCRT/concurrency::event::timeout_infinite
helpviewer_keywords:
- event class
ms.assetid: fba35a53-6568-4bfa-9aaf-07c0928cf73d
ms.openlocfilehash: 3d645cc09c61402059e9a86679c10ee703ee8031
ms.sourcegitcommit: 63784729604aaf526de21f6c6b62813882af930a
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/17/2020
ms.locfileid: "79443740"
---
# <a name="event-class"></a>event Class
A manual reset event which is explicitly aware of the Concurrency Runtime.
## <a name="syntax"></a>Syntax
```cpp
class event;
```
## <a name="members"></a>Members
### <a name="public-constructors"></a>Public Constructors
|Name|Description|
|----------|-----------------|
|[~event Destructor](#dtor)|Destroys an event.|
### <a name="public-methods"></a>Public Methods
|Name|Description|
|----------|-----------------|
|[reset](#reset)|Resets the event to a non-signaled state.|
|[set](#set)|Signals the event.|
|[wait](#wait)|Waits for the event to become signaled.|
|[wait_for_multiple](#wait_for_multiple)|Waits for multiple events to become signaled.|
### <a name="public-constants"></a>Public Constants
|Name|Description|
|----------|-----------------|
|[timeout_infinite](#timeout_infinite)|Value indicating that a wait should never time out.|
## <a name="remarks"></a>Remarks
For more information, see [Synchronization Data Structures](../../../parallel/concrt/synchronization-data-structures.md).
## <a name="inheritance-hierarchy"></a>Inheritance Hierarchy
`event`
## <a name="requirements"></a>Requirements
**Header:** concrt.h
**Namespace:** concurrency
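Before the member-by-member reference, a quick cross-language illustration: Python's standard `threading.Event` has analogous manual-reset semantics (`set` signals, `clear` plays the role of the reset, `wait` blocks with an optional timeout). This sketch is only an analogy, not the Concurrency Runtime:

```python
import threading

e = threading.Event()      # starts in the non-signaled state

results = []

def worker():
    # wait() blocks until the event is signaled; with a timeout it
    # returns True if signaled, False if the timeout elapsed first.
    results.append(e.wait(timeout=5.0))

t = threading.Thread(target=worker)
t.start()

e.set()                    # signal: every waiting thread wakes up
t.join()
print(results)             # [True]

print(e.is_set())          # True; a manual-reset event stays signaled
e.clear()                  # the "reset": back to non-signaled
print(e.is_set())          # False
```

Note how, unlike an auto-reset primitive, the event stays signaled after `set()` until it is explicitly reset.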
## <a name="ctor"></a>event
Constructs a new event.
```cpp
_CRTIMP event();
```
### <a name="remarks"></a>Remarks
## <a name="dtor"></a>~event
Destroys an event.
```cpp
~event();
```
### <a name="remarks"></a>Remarks
It is expected that there are no threads waiting on the event when the destructor runs. Allowing the event to be destroyed with threads still waiting on it results in undefined behavior.
## <a name="reset"></a>reset
Resets the event to a non-signaled state.
```cpp
void reset();
```
## <a name="set"></a>set
Signals the event.
```cpp
void set();
```
### <a name="remarks"></a>Remarks
Signaling the event can cause an arbitrary number of contexts waiting on the event to become runnable.
## <a name="timeout_infinite"></a>timeout_infinite
Value indicating that a wait should never time out.
```cpp
static const unsigned int timeout_infinite = COOPERATIVE_TIMEOUT_INFINITE;
```
## <a name="wait"></a>wait
Waits for the event to become signaled.
```cpp
size_t wait(unsigned int _Timeout = COOPERATIVE_TIMEOUT_INFINITE);
```
### <a name="parameters"></a>Parameters
*_Timeout*<br/>
Indicates the number of milliseconds before the wait times out. The value `COOPERATIVE_TIMEOUT_INFINITE` signifies that there is no timeout.
### <a name="return-value"></a>Return Value
If the wait was satisfied, the value `0` is returned; otherwise, the value `COOPERATIVE_WAIT_TIMEOUT` indicates that the wait timed out without the event becoming signaled.
> [!IMPORTANT]
> In a Universal Windows Platform (UWP) app, do not call `wait` on an ASTA thread, because this call can block the current thread and can cause the app to become unresponsive.
## <a name="wait_for_multiple"></a>wait_for_multiple
Waits for multiple events to become signaled.
```cpp
static size_t __cdecl wait_for_multiple(
_In_reads_(count) event** _PPEvents,
size_t count,
bool _FWaitAll,
unsigned int _Timeout = COOPERATIVE_TIMEOUT_INFINITE);
```
### <a name="parameters"></a>Parameters
*_PPEvents*<br/>
An array of events to wait on. The number of events within the array is indicated by the `count` parameter.
*count*<br/>
The count of events within the array supplied in the `_PPEvents` parameter.
*_FWaitAll*<br/>
If set to the value **true**, the parameter specifies that all events within the array supplied in the `_PPEvents` parameter must become signaled in order to satisfy the wait. If set to the value **false**, it specifies that any event within the array supplied in the `_PPEvents` parameter becoming signaled will satisfy the wait.
*_Timeout*<br/>
Indicates the number of milliseconds before the wait times out. The value `COOPERATIVE_TIMEOUT_INFINITE` signifies that there is no timeout.
### <a name="return-value"></a>Return Value
If the wait was satisfied, the index within the array supplied in the `_PPEvents` parameter which satisfied the wait condition is returned; otherwise, the value `COOPERATIVE_WAIT_TIMEOUT` indicates that the wait timed out without the condition being satisfied.
### <a name="remarks"></a>Remarks
If the parameter `_FWaitAll` is set to the value `true` to indicate that all events must become signaled to satisfy the wait, the index returned by the function carries no special significance other than the fact that it is not the value `COOPERATIVE_WAIT_TIMEOUT`.
> [!IMPORTANT]
> In a Universal Windows Platform (UWP) app, do not call `wait_for_multiple` on an ASTA thread, because this call can block the current thread and can cause the app to become unresponsive.
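To make the `_FWaitAll` semantics concrete, here is a hypothetical Python sketch built on `threading.Event`. The function name, the polling loop in the wait-any branch, and the returned index `0` in the wait-all branch are illustrative assumptions; the Concurrency Runtime does not work this way internally:

```python
import threading
import time

def wait_for_multiple(events, wait_all, timeout):
    """Return the index of an event satisfying the wait, or None on
    timeout (standing in for COOPERATIVE_WAIT_TIMEOUT)."""
    deadline = time.monotonic() + timeout
    if wait_all:
        for e in events:
            remaining = deadline - time.monotonic()
            if remaining <= 0 or not e.wait(remaining):
                return None    # timed out before all events were signaled
        return 0               # index carries no special significance here
    while True:                # wait-any: the first signaled event wins
        for i, e in enumerate(events):
            if e.is_set():
                return i
        if time.monotonic() >= deadline:
            return None
        time.sleep(0.001)

events = [threading.Event() for _ in range(3)]
events[1].set()
print(wait_for_multiple(events, wait_all=False, timeout=0.1))   # 1
print(wait_for_multiple(events, wait_all=True, timeout=0.05))   # None
for e in events:
    e.set()
print(wait_for_multiple(events, wait_all=True, timeout=0.1))    # 0
```

As in the documented API, the returned index is only meaningful in wait-any mode; in wait-all mode it merely signals that the wait did not time out.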
## <a name="see-also"></a>See also
[concurrency namespace](concurrency-namespace.md)
# Usage
## props and state
props are specified by the parent component and, once specified, no longer change during the lifetime of the component they are given to. For data that needs to change, we need to use state.
## Modal
How React Native's built-in Modal presents on iOS: it takes the ViewController that owns the Modal's parent component and has it present the ModalViewController.
---
layout: post
title: "Western expatriates are leaving Asia"
date: 2020-12-16T17:09:58.000Z
author: 经济学人en
from: https://www.economist.com/asia/2020/12/19/western-expatriates-are-leaving-asia
tags: [ 经济学人en ]
categories: [ 经济学人en ]
---
<!--1608138598000-->
[Western expatriates are leaving Asia](https://www.economist.com/asia/2020/12/19/western-expatriates-are-leaving-asia)
------
<div>
<img src="https://images.weserv.nl/?url=www.economist.com/img/b/1280/720/90/sites/default/files/images/print-edition/20201219_ASP003_0.jpg"/><div></div><aside ><div ><time itemscope="" itemType="http://schema.org/DateTime" dateTime="2020-12-19T00:00:00Z" >Dec 19th 2020</time><meta itemProp="author" content="The Economist"/></div></aside><p ><span data-caps="initial">W</span><small>HEN JAKARTA</small> went into lockdown in April Asian Tigers Group, a moving company, received a flurry of business from wealthy expatriates who had fled overnight. Removal teams were led into deserted homes by maids or colleagues. Without the owners on hand to sort belongings, they found themselves stripping beds and packing dirty sheets into boxes alongside broken toys and other junk.</p><p >All the expats who want to move away have now done so and there are few new arrivals. Asian Tigers is laying off staff for the first time in its 35-year history. “Unless the tap turns back on we’re in a tough spot,” says Bill Lloyd, head of the group’s Indonesian operations.</p><div id="" ><div><div id="econ-1"></div></div></div><p >Asian metropolises have long attracted migrants from the rich world. Fast-growing businesses, vast natural resources and unfamiliar commercial environments make for interesting work. Meanwhile, living costs are low, so expats can afford maids and big houses that would be out of reach back home. Around 3m migrants from the <small>OECD</small> were living in Asia last year, according to data from the club, which is composed mostly of rich countries, up from 2.3m in 1990. But the pandemic has underlined the drawbacks of living abroad, including distance from family and (in some places) a lack of good medical facilities. Unlike most immigrants, these are a well-off lot for whom relocation is a choice. Many have raced home rather than weather the pandemic in a foreign country.</p><p >That is a loss for both their home and host countries. 
People moving from the developed world to emerging markets make up a tiny portion of the world’s 270m international migrants. But they play an outsized role in the global economy, bringing new ideas and cosmopolitan connections wherever they go. A study in Canada found that a 10% increase in migrants from a given country is associated with a 1% increase in exports to that country and a 3% increase in imports from it. There is no reason why migration to developing economies should not produce similar benefits, says Amanda Klekowski von Koppenfels of the University of Kent. “We rarely think of the global North as a region that gains from emigration but it absolutely does,” she says. “This is a small but powerful movement.”</p><p >However, covid-19 has diminished the appeal of living abroad for many in the rich world. Iñigo Lumbreras de Mazarredo spent much of his 20s working his way up the ranks at a food-delivery firm in Asia. The best thing about expat life, he says, was the opportunity to travel widely and meet new people. After a stressful couple of weeks trying to pack up and go home, including four cancelled flights, the 29-year-old returned from Cambodia to Spain in April. “The main point of going abroad is to have an experience,” he explains. “Now that isn’t an option.”</p><p >Data are patchy on migration in Asia, but all the evidence points to a mass exodus. By June America had repatriated more than 15,000 of its citizens from the continent, including both tourists and migrants. Knight Frank, a multinational estate agent, has seen a big rise in expats looking to buy a base in their home country, particularly among those with elderly parents or children at boarding school back home. 
In a survey in June roughly 30% of the group’s agents said these clients were planning to move permanently and 60% said they wanted to split their time between their original and adoptive homes.</p><div id="" ><div><div id="econ-2"></div></div></div><p >For employers, the pandemic has accentuated the disadvantages of hiring pricey Westerners in Asia. Most countries have introduced quarantine rules and stalled visa applications, making it difficult to get people where they are supposed to be going. Eliminating expensive foreign postings is an easy way to save money in a recession. Meanwhile, the need to work from home has shown that colleagues can collaborate reasonably well at a great distance using video-calling. More than 50% of businesses have repatriated employees on long-term assignments abroad and only half of them expect to move them back within a year, according to a survey by <small>ECA</small> International, which helps firms relocate staff. Almost all the companies surveyed also said they were allowing expats to work from other places if they wanted to.</p><p >Covid-19 is accelerating a trend that was already under way, says Toby Fowlston of Robert Walters, a recruiting firm. Education and language skills across the region have improved markedly in recent years. There is much less need to fly in expats to get a job done. In Hong Kong, Mr Fowlston says, the growing influence of mainland China means that employers are looking for Mandarin-speakers. He estimates that expats occupy just a fifth of client-facing roles at investment banks in the city, down from a third five years ago.</p><p >Host governments are also obstructing the hiring of expats, imagining that this might reduce unemployment. In Malaysia firms can employ foreigners only if they cannot find a local applicant who fits the bill. Employers have to advertise jobs through a central portal, interview candidates within 30 days and report back to the authorities afterwards. 
In August the Singaporean government raised the minimum wage businesses have to pay foreigners to secure a visa. It also launched an investigation into 47 firms that, it suspects, have not given local applicants a fair shot. In a similar vein, several Asian countries say they will cancel the residence permits of foreigners who leave the country without obtaining special permission first.</p><p >Whether businesses and governments want them or not, there will always be Westerners eager to live in Asia. Most people move either for love or for work. The first group is not that flighty. For the second, the appeal of expatriate life is likely to return when borders reopen and social-distancing rules fall away.</p><p >Hector Drake and his wife recently moved to London to have their first child after a decade in Hong Kong, Beijing and Shanghai. If anything, Mr Drake says, the pandemic has revealed what a desirable place to live Asia is. The governments of Hong Kong, Singapore, Taiwan and Vietnam have done a far better job than most in the West of keeping the virus under control. The couple caught covid-19 shortly after returning to Britain. Mr Drake may put more thought into his health insurance next time, but he hopes to work abroad again. “People will chase opportunities if they are there,” he says. <span data-ornament="ufinish">■</span></p><p ><em>Editor’s note: Some of our covid-19 coverage is free for readers of </em>The Economist Today<em>, our daily <a href="https://www.economist.comhttps://my.economist.com/user#newsletter">newsletter</a>. 
For more stories and our pandemic tracker, see our <a href="https://www.economist.com/news/2020/03/11/the-economists-coverage-of-the-coronavirus">hub</a></em></p><p data-test-id="Footnote" >This article appeared in the Asia section of the print edition under the headline "Shipping out"</p>
</div>
---
autogenerated: true
title: How to write your own track feature analyzer algorithm for TrackMate
layout: page
categories: Tutorials
description: test description
---
{% include extendingtrackmatetutorials%}
Introduction
------------
This article is the second in the series dedicated to extending TrackMate with your own modules. Here we focus on creating **feature analyzers**: small algorithms that calculate one or several numerical values for the TrackMate results. The [previous article](How_to_write_your_own_edge_feature_analyzer_algorithm_for_TrackMate) focused on writing edge analyzers: algorithms that assign a numerical value to the link between two spots.
In this article, we will create a **feature analyzer for tracks** that calculates numerical values for whole tracks. To make it simple, and also to answer the request of a colleague, we will make an analyzer that reports the location of the starting and ending points of a track.
Actually, we will not learn much beyond what we saw previously. The only little change is that our analyzer will generate 6 numerical values instead of 1. We will use the [SciJava](SciJava) discovery mechanism as before, but just for the sake of it, we will introduce how to **disable** modules.
Track analyzers
---------------
All the track feature analyzers must implement {% include github org='fiji' repo='TrackMate' source='fiji/plugin/trackmate/features/track/TrackAnalyzer.java' label='TrackAnalyzer interface' %}. Like for the {% include github org='fiji' repo='TrackMate' source='fiji/plugin/trackmate/features/edges/EdgeAnalyzer.java' label='EdgeAnalyzer' %} interface, it extends both
- {% include github org='fiji' repo='TrackMate' source='fiji/plugin/trackmate/features/FeatureAnalyzer.java' label='FeatureAnalyzer' %} that helps you declare what you compute,
- and {% include github org='fiji' repo='TrackMate' source='fiji/plugin/trackmate/TrackMateModule.java' label='TrackMateModule' %}, that is in charge of the integration in TrackMate.
The only changes for us are two methods specific to tracks:
public void process( final Collection< Integer > trackIDs, final Model model );
that does the actual feature calculation for the specified tracks, and
public boolean isLocal();
that specifies whether the calculation of the features for one track affects only this track or all the tracks. For the discussion on local *vs* non-local feature analyzers, I refer you to the [previous article](How_to_write_your_own_edge_feature_analyzer_algorithm_for_TrackMate#isLocal.28.29).
Track feature analyzer header
-----------------------------
Like all TrackMate modules, you need to annotate your class to make it discoverable by TrackMate. It takes the following shape:
@Plugin( type = TrackAnalyzer.class )
public class TrackStartSpotAnalyzer implements TrackAnalyzer
{
// etc...
and that's good enough.
Declaring features
------------------
Declaring the features you provide is done as before. This time, a single analyzer returns 6 values, so you need to declare them. Here is the related code:
@Plugin( type = TrackAnalyzer.class )
public class TrackStartSpotAnalyzer implements TrackAnalyzer
{
private static final String KEY = "TRACK_START_SPOT_ANALYZER";
public static final String TRACK_START_X = "TRACK_START_X";
public static final String TRACK_START_Y = "TRACK_START_Y";
public static final String TRACK_START_Z = "TRACK_START_Z";
public static final String TRACK_STOP_X = "TRACK_STOP_X";
public static final String TRACK_STOP_Y = "TRACK_STOP_Y";
public static final String TRACK_STOP_Z = "TRACK_STOP_Z";
private static final List< String > FEATURES = new ArrayList< String >( 6 );
private static final Map< String, String > FEATURE_SHORT_NAMES = new HashMap< String, String >( 6 );
private static final Map< String, String > FEATURE_NAMES = new HashMap< String, String >( 6 );
private static final Map< String, Dimension > FEATURE_DIMENSIONS = new HashMap< String, Dimension >( 6 );
static
{
FEATURES.add( TRACK_START_X );
FEATURES.add( TRACK_START_Y );
FEATURES.add( TRACK_START_Z );
FEATURES.add( TRACK_STOP_X );
FEATURES.add( TRACK_STOP_Y );
FEATURES.add( TRACK_STOP_Z );
FEATURE_NAMES.put( TRACK_START_X, "Track start X" );
FEATURE_NAMES.put( TRACK_START_Y, "Track start Y" );
FEATURE_NAMES.put( TRACK_START_Z, "Track start Z" );
FEATURE_NAMES.put( TRACK_STOP_X, "Track stop X" );
FEATURE_NAMES.put( TRACK_STOP_Y, "Track stop Y" );
FEATURE_NAMES.put( TRACK_STOP_Z, "Track stop Z" );
FEATURE_SHORT_NAMES.put( TRACK_START_X, "X start" );
FEATURE_SHORT_NAMES.put( TRACK_START_Y, "Y start" );
FEATURE_SHORT_NAMES.put( TRACK_START_Z, "Z start" );
FEATURE_SHORT_NAMES.put( TRACK_STOP_X, "X stop" );
FEATURE_SHORT_NAMES.put( TRACK_STOP_Y, "Y stop" );
FEATURE_SHORT_NAMES.put( TRACK_STOP_Z, "Z stop" );
FEATURE_DIMENSIONS.put( TRACK_START_X, Dimension.POSITION );
FEATURE_DIMENSIONS.put( TRACK_START_Y, Dimension.POSITION );
FEATURE_DIMENSIONS.put( TRACK_START_Z, Dimension.POSITION );
FEATURE_DIMENSIONS.put( TRACK_STOP_X, Dimension.POSITION );
FEATURE_DIMENSIONS.put( TRACK_STOP_Y, Dimension.POSITION );
FEATURE_DIMENSIONS.put( TRACK_STOP_Z, Dimension.POSITION );
}
/*
* FEATUREANALYZER METHODS
*/
@Override
public List< String > getFeatures()
{
return FEATURES;
}
@Override
public Map< String, String > getFeatureShortNames()
{
return FEATURE_SHORT_NAMES;
}
@Override
public Map< String, String > getFeatureNames()
{
return FEATURE_NAMES;
}
@Override
public Map< String, Dimension > getFeatureDimensions()
{
return FEATURE_DIMENSIONS;
}
Let's compute them now.
Accessing tracks in TrackMate
-----------------------------
In the previous article, we went perhaps a bit quickly over how to access data in TrackMate. This is not the goal of this series, but here is a quick recap:
All the track structure is stored in a sub-component of the model called the {% include github org='fiji' repo='TrackMate' source='fiji/plugin/trackmate/TrackModel.java' label='TrackModel' %}. It stores the collection of links between spots that builds up a graph, and has some rather complex logic to maintain a list of connected components: the tracks.
The tracks themselves are indexed by their ID, stored as an `int`, which has no particular meaning. Once you have the ID of a track, you can get the spots it contains with
trackModel.trackSpots( trackID )
and its links (or edges) with
trackModel.trackEdges( trackID )
Let's exploit this.
Calculating the position of start and end points
------------------------------------------------
Well, it is just about retrieving a track and identifying its starting and end points. Here is the whole code for the processing method:
@Override
public void process( final Collection< Integer > trackIDs, final Model model )
{
// The feature model where we store the feature values:
final FeatureModel fm = model.getFeatureModel();
// Loop over all the tracks we have to process.
for ( final Integer trackID : trackIDs )
{
// The tracks are indexed by their ID. Here is how to get their
// content:
final Set< Spot > spots = model.getTrackModel().trackSpots( trackID );
// Or .trackEdges( trackID ) if you want the edges.
// This set is NOT ordered. If we want the first one and last one we
// have to sort them:
final Comparator< Spot > comparator = Spot.frameComparator;
final List< Spot > sorted = new ArrayList< Spot >( spots );
Collections.sort( sorted, comparator );
// Extract and store feature values.
final Spot first = sorted.get( 0 );
fm.putTrackFeature( trackID, TRACK_START_X, Double.valueOf( first.getDoublePosition( 0 ) ) );
fm.putTrackFeature( trackID, TRACK_START_Y, Double.valueOf( first.getDoublePosition( 1 ) ) );
fm.putTrackFeature( trackID, TRACK_START_Z, Double.valueOf( first.getDoublePosition( 2 ) ) );
final Spot last = sorted.get( sorted.size() - 1 );
fm.putTrackFeature( trackID, TRACK_STOP_X, Double.valueOf( last.getDoublePosition( 0 ) ) );
fm.putTrackFeature( trackID, TRACK_STOP_Y, Double.valueOf( last.getDoublePosition( 1 ) ) );
fm.putTrackFeature( trackID, TRACK_STOP_Z, Double.valueOf( last.getDoublePosition( 2 ) ) );
// Et voilà!
}
}
The whole code for the analyzer can be found {% include github org='fiji' repo='TrackMate-examples' source='plugin/trackmate/examples/trackanalyzer/TrackStartSpotAnalyzer.java' label='here' %}.
Wrapping up
-----------
And it works!
<figure><img src="/media/TrackMate TrackAnalyzerExample.png" title="TrackMate_TrackAnalyzerExample.png" width="600" alt="TrackMate_TrackAnalyzerExample.png" /><figcaption aria-hidden="true">TrackMate_TrackAnalyzerExample.png</figcaption></figure>
In the next article we will build a spot analyzer and complicate things a bit, by introducing the notion of *priority*. But before this, a short word on how to disable a module.
How to disable a module
-----------------------
Suppose you have in your code tree a TrackMate module you wish not to use anymore. The trivial way would be to delete its class, but here is another one that allows us to introduce [SciJava](SciJava) plugin annotation parameters.
The `@Plugin( type = TrackAnalyzer.class )` annotation accepts extra parameters on top of the `type` one. They all take the shape of a `key = value` pair, and a few of them allow the fine tuning of the TrackMate module integration.
The first one we will see is the `enabled` value. It accepts a `boolean` as value and by default it is `true`. Its usage is obvious:
{% include ambox text='If you want to disable a TrackMate module, add the `enabled = false` annotation parameter.' %}
Like this:
@Plugin( type = TrackAnalyzer.class, enabled = false )
Disabled modules are not even instantiated. They are as good as dead, except that you can change your mind easily. By the way, you can see that the TrackMate source tree has many of these disabled modules...
{% include person content='JeanYvesTinevez' %} ([talk](User_talk_JeanYvesTinevez)) 14:23, 11 March 2014 (CDT)
---
id: 3788
title: Sebastián Viera
author: chito
layout: post
guid: http://localhost/mbti/?p=3788
permalink: /hello3788
tags:
- say thank you
category: Guides
---
{: toc}
## Name
Sebastián Viera
* * *
## Nationality
Uruguay
* * *
## National Position
* * *
## Random data
* National kit
* Club
Junior
* Club Position
GK
* Club Kit
1
* Club Joining
  40544 (= 2011-01-01, reading the value as an Excel date serial)
* Contract Expiry
2019
* Rating
72
* Height
184 cm
* Weight
86 kg
* Preferred Foot
Right
* Birth Date
  30382 (= 1983-03-07, reading the value as an Excel date serial)
* Preferred Position
GK Medium / Medium
* Weak foot
3
* Skill Moves
1
* Ball Control
24
* Dribbling
7
* Marking
9
* Sliding Tackle
9
* Standing Tackle
9
* Aggression
30
* Reactions
76
* Attacking Position
14
* Interceptions
  22
# Orders
```
// Example tree
    A
   / \
  B   C
 / \
D   E
```
### scalameta/scalagen
- Leaf first
- Top to bottom of file (Left to right of tree)
- Multiple Annotations are processed left to right (outer to inner)
- Note: This order is subject to change.
`D -> E -> B -> C -> A`
### scalameta/paradise and scalamacros/paradise
- Leaf first
- Bottom to top of file (right to left)
- Multiple Annotations processed right to left (inner to outer)
`C -> E -> D -> B -> A`
### scalameta/scalameta traverse/transform
- Root first
- Top to bottom (Left to right of tree)
`A -> B -> D -> E -> C`
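The three orders can be reproduced with a small sketch; the nested-tuple encoding of the example tree below is an assumption for illustration, not scalagen's actual AST type:

```python
# Example tree A(B(D, E), C) as (name, children) tuples.
tree = ("A", [("B", [("D", []), ("E", [])]), ("C", [])])

def leaf_first_ltr(node):          # scalameta/scalagen
    name, children = node
    out = []
    for child in children:
        out += leaf_first_ltr(child)
    return out + [name]

def leaf_first_rtl(node):          # scalameta/paradise and scalamacros/paradise
    name, children = node
    out = []
    for child in reversed(children):
        out += leaf_first_rtl(child)
    return out + [name]

def root_first_ltr(node):          # scalameta traverse/transform
    name, children = node
    out = [name]
    for child in children:
        out += root_first_ltr(child)
    return out

print(leaf_first_ltr(tree))   # ['D', 'E', 'B', 'C', 'A']
print(leaf_first_rtl(tree))   # ['C', 'E', 'D', 'B', 'A']
print(root_first_ltr(tree))   # ['A', 'B', 'D', 'E', 'C']
```

The first two are postorder traversals that differ only in the direction children are visited; the third is a plain preorder traversal.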
## Generation
### Extension/Manipulation
- Result gets computed when transforming the annotee
- Gets added to the AST when transforming the annotee
### Companion and Transmutation
- Result gets computed when transforming the annotee
- Result is added to the AST during *parent* transformation,
before the parent has actually been transformed
### Inputs
The input to a given generator is the current state of the Tree.
For example, when transforming `B`,
its children will be D' and E', the already transformed versions of D and E.
Tree's merging procedure
===
Variant 1, when donorIndex <= (recipientIndex - recipientSize)
---
0: Initial tree layout (spaced for readability):

        nml kjb ih gfedcb a
        002 011 01 010202 4
            ---    ------

1: add the donor's children count to the recipient's children count

        nml kjb ih gfedcb a
        002 011 01 010203 4
                        ^

2: decrement the donor's parent's children count

        nml kjb ih gfedcb a
        002 011 01 010203 3
                          ^

3: relocate the donor tree next to the recipient tree, if needed

        nml ih kjb gfedcb a
        002 01 011 010203 3
               --- ------

4: remove the donor's top node

        nml ih kjgfedcb a
        002 01 01010203 3
               --
4f102c16aff0cedb7aa85c12243f0315dfca90f7 | 3,549 | md | Markdown | README.md | brosaplanella/liionpack | 4c3f61b6f28e1419974c8572669d70fc173a6959 | [
"MIT"
] | null | null | null | README.md | brosaplanella/liionpack | 4c3f61b6f28e1419974c8572669d70fc173a6959 | [
"MIT"
] | null | null | null | README.md | brosaplanella/liionpack | 4c3f61b6f28e1419974c8572669d70fc173a6959 | [
"MIT"
] | null | null | null | [](https://github.com/pybamm-team/liionpack/actions/workflows/python-app.yml)
[](https://liionpack.readthedocs.io/en/main/?badge=main)
[](https://codecov.io/gh/pybamm-team/liionpack)

# Overview of liionpack
*liionpack* takes a 1D PyBaMM model and makes it into a pack. You can either specify
the configuration e.g. 16 cells in parallel and 2 in series (16p2s) or load a
netlist
## Installation
Follow the steps given below to install the `liionpack` Python package. The package must be installed to run the included examples. It is recommended to create a virtual environment for the installation.
```bash
# Clone the repository
$ git clone https://github.com/pybamm-team/liionpack.git
# Create a virtual environment in the repository directory
$ cd liionpack
$ python -m venv .venv
# Activate the virtual environment and upgrade pip if venv installed an old version
$ source .venv/bin/activate
$ pip install --upgrade pip
# Install the required packages
$ pip install -r requirements.txt
# Install the liionpack package from within the repository
$ pip install -e .
```
Alternatively, use Conda to create a virtual environment then install the `liionpack` package.
```bash
# Clone the repository
$ git clone https://github.com/pybamm-team/liionpack.git
# Create a Conda virtual environment
$ cd liionpack
$ conda env create -f environment.yml
# Activate the conda environment
$ conda activate lipack
# Install the liionpack package from within the repository
$ pip install -e .
```
## Example Usage
The following code block illustrates how to use liionpack to perform a simulation:
```python
import liionpack as lp
import numpy as np
import pybamm
# Generate the netlist
netlist = lp.setup_circuit(Np=16, Ns=2, Rb=1e-4, Rc=1e-2, Ri=5e-2, V=3.2, I=80.0)
output_variables = [
'X-averaged total heating [W.m-3]',
'Volume-averaged cell temperature [K]',
'X-averaged negative particle surface concentration [mol.m-3]',
'X-averaged positive particle surface concentration [mol.m-3]',
]
# Heat transfer coefficients
htc = np.ones(32) * 10
# Cycling experiment, using PyBaMM
experiment = pybamm.Experiment(
["Charge at 50 A for 30 minutes", "Rest for 15 minutes", "Discharge at 50 A for 30 minutes", "Rest for 30 minutes"],
period="10 seconds",
)
# PyBaMM parameters
chemistry = pybamm.parameter_sets.Chen2020
parameter_values = pybamm.ParameterValues(chemistry=chemistry)
# Solve pack
output = lp.solve(netlist=netlist,
parameter_values=parameter_values,
experiment=experiment,
output_variables=output_variables,
htc=htc)
```
## Contributing to liionpack
If you'd like to help us develop liionpack by adding new methods, writing documentation, or fixing embarrassing bugs, please have a look at these [guidelines](https://github.com/pybamm-team/liionpack/blob/main/docs/contributing.md) first.
## Acknowledgments
PyBaMM-team acknowledges the funding and support of the Faraday Institution's multi-scale modelling project and Innovate UK.
The development work carried out by members at Oak Ridge National Laboratory was partially sponsored by the Office of Electricity under the United States Department of Energy (DOE).
| 35.138614 | 238 | 0.752888 | eng_Latn | 0.928494 |
4f114f0bdf873f4cd017ea67006b70b932c77821 | 4,138 | md | Markdown | notion_data/Knowledge Base f0e5e8cfbc1147918d3177d4e6c074d1/Academia 06dd963d8d3f470c82c07af8b291879d/Math Duality f763041d40f44bcf82a3dd0087e71c9b.md | Justin-Yuan/Papers | c72a073e5bee8a35b1f0accee345653b09f1e415 | [
"MIT"
] | 8 | 2018-12-02T20:20:29.000Z | 2021-11-08T05:36:34.000Z | notion_data/Knowledge Base f0e5e8cfbc1147918d3177d4e6c074d1/Academia 06dd963d8d3f470c82c07af8b291879d/Math Duality f763041d40f44bcf82a3dd0087e71c9b.md | Justin-Yuan/Papers | c72a073e5bee8a35b1f0accee345653b09f1e415 | [
"MIT"
] | null | null | null | notion_data/Knowledge Base f0e5e8cfbc1147918d3177d4e6c074d1/Academia 06dd963d8d3f470c82c07af8b291879d/Math Duality f763041d40f44bcf82a3dd0087e71c9b.md | Justin-Yuan/Papers | c72a073e5bee8a35b1f0accee345653b09f1e415 | [
"MIT"
] | null | null | null | # Math: Duality
---
## Convex Optimization Lecture 11 — CMU
link: [http://www.stat.cmu.edu/~ryantibs/convexopt-F15/scribes/11-dual-gen-scribed.pdf](http://www.stat.cmu.edu/~ryantibs/convexopt-F15/scribes/11-dual-gen-scribed.pdf)
### Lagrangian
- key idea: $f^* = \inf_x \, \sup_{u \ge 0,\,v} L(x,u,v)$

### Lagrange dual function
- key idea: $g(u,v) = \inf_x L(x,u,v)$
- NOTE: the infimum here does not require x to be taken in the feasible set

### Lagrange dual problem
- key idea: $g^* = \sup_{u\ge0,\,v} \, \inf_x L(x,u,v)$

### Weak duality
- key idea: $f^* \ge g^*$

- max-min inequality

- minimax theorem

### Strong duality

---
## Duality — Stanford
link: [https://web.stanford.edu/class/ee364a/lectures/duality.pdf](https://web.stanford.edu/class/ee364a/lectures/duality.pdf)
### formulation



### examples


### Complementary slackness

### KKT conditions

---
## Fenchel's duality theorem — Wiki
link: [https://en.wikipedia.org/wiki/Fenchel's_duality_theorem](https://en.wikipedia.org/wiki/Fenchel%27s_duality_theorem)
### formulation

### dual space

### convex conjugate

### illustration

---
## References
- [Reinforcement Learning via Fenchel-Rockafellar Duality](https://arxiv.org/pdf/2001.01866.pdf)
- [Convex Optimization: fall 2015 (CMU)](http://www.stat.cmu.edu/~ryantibs/convexopt-F15/)
- [Duality Theory of Constrained Optimization (MIT OCW)](https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec18_duality_thy.pdf)
- [Duality in Optimization](https://www.imo.universite-paris-saclay.fr/IMG/pdf/paris1and2_cle085e32.pdf)
4f1274810eb0b2f45b29145434a1d521776525a9 | 4,088 | md | Markdown | README.md | amark/mongous | 8199ed7e8481c85f62753b14e315639fd9225319 | [
"MIT"
] | 87 | 2015-01-02T07:04:01.000Z | 2022-03-15T20:26:48.000Z | README.md | amark/mongous | 8199ed7e8481c85f62753b14e315639fd9225319 | [
"MIT"
] | 13 | 2015-01-04T04:17:21.000Z | 2020-10-06T05:41:02.000Z | README.md | amark/mongous | 8199ed7e8481c85f62753b14e315639fd9225319 | [
"MIT"
] | 13 | 2015-01-16T13:20:21.000Z | 2021-07-13T14:57:57.000Z | Mongous
==========
Mongous, for hu*mongous*, is a simple and blazing fast MongoDB driver that uses a jQuery like syntax.
### How it works
var $ = require("mongous").Mongous;
$("database.collection").save({my:"value"});
$("database.collection").find({},function(r){
console.log(r);
});
Done. App development has never felt as close to the shell as this! Making it a breeze to grab'n'store anything anywhere in your code without the nasty hassle of connections, collections, and cascading callbacks.
### Database & Collections
- <code>db('Database.Collection')</code>
- Database is the name of your database
- Collection is the name of your collection
- Examples
- <code>db('blog.post')</code>
- <code>db('blog.post.body')</code>
### Commands
- **Update** <code>db('blog.post').update(find, update, ...)</code>
- find
is the object you want to find.
- update
is what you want to update find with.
- ...
- <code>{ upsert: true, multi: false }</code>
- <code> true, true </code>
- **Save** <code>db('blog.post').save(what)</code>
- what
is the object to be updated or created.
- **Insert** <code>db('blog.post').insert(what...)</code>
- what
is an object to be created,
or an array of objects to be created.
- Examples
- <code>db('blog.post').save({hello: 'world'})</code>
- <code>db('blog.post').save([{hello: 'world'}, {foo: 'bar'}])</code>
- <code>db('blog.post').save({hello: 'world'}, {foo: 'bar'})</code>
- **Remove** <code>db('blog.post').remove(what, ...)</code>
- what is the object to be removed.
- ...
true for atomic.
- **Find** <code>db('blog.users').find(..., function(reply){ })</code>
- reply
is the reply from MongoDB.
- reply.documents
are the documents that you found from MongoDB.
- ... <br/>
params are filtered by type
- Objects
- first object
is what you want to find.
- second object
are fields you want
<br/>Ex: <code>{ name: 1, age: 1 }</code>
- third object
is any of the following options:
<br/> <code>{ lim: x, skip: y, sort:{age: 1} }</code>
- Numbers
- first number
is the limit (return all if not specified)
- second number
is the skip
- Examples
- <code>db('blog.users').find(5, function(reply){ })</code><br/>
reply.documents is the first 5 documents,
- <code>db('blog.users').find(5, {age: 23}, function(reply){ })</code><br/>
with age of 23,
- <code>db('blog.users').find({age: 27}, 5, {name: 1}, function(reply){ })</code><br/>
and a name.
- <code>db('blog.users').find(5, {age: 27}, {name: 1}, {lim: 10}, function(reply){ })</code><br/>
is the same as the previous example, except the limit is 10 instead of 5.
- <code>db('blog.users').find(5, function(reply){ }, 2)</code><br/>
reply.documents skips the first 2 documents and is the next 3 documents.
- <code>db('blog.users').find(function(reply){ }, {age: 25}, {}, {limit: 5, skip: 2})</code><br/>
is the same as the previous example, except only for documents with the age of 25.
- <code>db('blog.users').find({}, {}, {sort: {age: -1}}, function(reply){ })</code><br/>
reply.documents is sorted by age in a descending (ascending when it is {age: 1}) order.
- **Operations** <code>db('blog.$cmd').find(command,1)</code>
- command
is the database operation command you want to perform.
- Example
<code>db('blog.$cmd').find({drop:"users"},1)</code><br/>
drops the users collection, deleting it.
- **Authentication** <code>db('blog.$cmd').auth(username,password,callback)</code>
- username, password <br/>
username and password of the 'blog' database
- callback <br/>
the callback function when authentication is finished.
- Example
- <code>db('blog.$cmd').auth('user','pass',function(reply){})</code><br/>
- **Open** <code>db().open(host,port)</code>
- Only necessary to call if you explicitly want a different host and port; otherwise it opens lazily.
Mongous is a reduction ('less is more') of node-mongodb-driver by Christian Kvalheim. | 39.68932 | 213 | 0.630382 | eng_Latn | 0.96547 |
4f127ca9238bfeb8e0f74b24d3e4b256de73e43d | 30 | md | Markdown | README.md | Ravi4pk/android | 195facb0376d91e652471dbf4068a045a1e8ce08 | [
"Apache-2.0"
] | null | null | null | README.md | Ravi4pk/android | 195facb0376d91e652471dbf4068a045a1e8ce08 | [
"Apache-2.0"
] | null | null | null | README.md | Ravi4pk/android | 195facb0376d91e652471dbf4068a045a1e8ce08 | [
"Apache-2.0"
] | null | null | null | # android
My Android Projects
| 10 | 19 | 0.8 | eng_Latn | 0.480017 |
4f12bf0284b7c9c9c0ee978af8b033b3f9261c71 | 532 | md | Markdown | pages/system-components/ansible-tower/controls/NIST-800-53-PS-3 3.md | ComplianceAsCode/uswds-opencontrol | 0b068f8433018c4b603057e1088a9930e9b303c5 | [
"CC0-1.0"
] | null | null | null | pages/system-components/ansible-tower/controls/NIST-800-53-PS-3 3.md | ComplianceAsCode/uswds-opencontrol | 0b068f8433018c4b603057e1088a9930e9b303c5 | [
"CC0-1.0"
] | null | null | null | pages/system-components/ansible-tower/controls/NIST-800-53-PS-3 3.md | ComplianceAsCode/uswds-opencontrol | 0b068f8433018c4b603057e1088a9930e9b303c5 | [
"CC0-1.0"
] | null | null | null | #NIST-800-53-PS-3 3
##Information With Special Protection Measures
#### Description
"The organization ensures that individuals accessing an information system processing, storing, or transmitting information requiring special protection:
(3)(a). Have valid access authorizations that are demonstrated by assigned official government duties; and
(3)(b). Satisfy [Assignment: organization-defined additional personnel screening criteria]."
No information found for the combination of standard NIST-800-53 and control PS-3 (3)
| 66.5 | 153 | 0.800752 | eng_Latn | 0.988487 |
4f13213c99b827859c17fe56560faf8de58595bf | 205 | md | Markdown | docs/1.18/node-v1beta1.md | justinwalz/k8s-alpha | 3b3cc2f1120daaf5dd74c147c0adec79720726f9 | [
"Apache-2.0"
] | 70 | 2020-05-13T10:44:17.000Z | 2021-11-15T10:42:11.000Z | docs/1.18/node-v1beta1.md | justinwalz/k8s-alpha | 3b3cc2f1120daaf5dd74c147c0adec79720726f9 | [
"Apache-2.0"
] | 2 | 2020-11-19T16:36:56.000Z | 2021-07-02T11:55:44.000Z | docs/1.18/node-v1beta1.md | justinwalz/k8s-alpha | 3b3cc2f1120daaf5dd74c147c0adec79720726f9 | [
"Apache-2.0"
] | 10 | 2020-06-23T09:05:57.000Z | 2021-06-02T00:02:55.000Z | ---
permalink: /1.18/node/v1beta1/
---
# package v1beta1
## Subpackages
* [overhead](node-v1beta1-overhead.md)
* [runtimeClass](node-v1beta1-runtimeClass.md)
* [scheduling](node-v1beta1-scheduling.md) | 15.769231 | 46 | 0.717073 | deu_Latn | 0.087509 |
4f143cc5f5bc5222cecd9752c2fe55d47e39ff14 | 2,581 | md | Markdown | articles/event-grid/scripts/event-grid-cli-subscribe-custom-topic.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/event-grid/scripts/event-grid-cli-subscribe-custom-topic.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/event-grid/scripts/event-grid-cli-subscribe-custom-topic.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure CLI script sample - Subscribe to custom topic | Microsoft Docs
description: This article provides a sample Azure CLI script that shows how to subscribe to Event Grid events for a custom topic.
services: event-grid
documentationcenter: na
author: spelluru
ms.service: event-grid
ms.devlang: azurecli
ms.topic: sample
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 01/23/2020
ms.author: spelluru
ms.openlocfilehash: 9d82a5c3d9723c26d5a98bb2f0c92a6739ffee25
ms.sourcegitcommit: f52ce6052c795035763dbba6de0b50ec17d7cd1d
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 01/24/2020
ms.locfileid: "76720129"
---
# <a name="subscribe-to-events-for-a-custom-topic-with-azure-cli"></a>Subscribe to events for a custom topic with Azure CLI
This script creates an Event Grid subscription to the events of a custom topic.
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)]
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
The preview sample script requires the Event Grid extension. To install it, run `az extension add --name eventgrid`.
## <a name="sample-script---stable"></a>Sample script - stable
[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-custom-topic/subscribe-to-custom-topic.sh "Subscribe to custom topic")]
## <a name="sample-script---preview-extension"></a>Sample script - preview extension
[!code-azurecli[main](../../../cli_scripts/event-grid/subscribe-to-custom-topic-preview/subscribe-to-custom-topic-preview.sh "Subscribe to custom topic")]
## <a name="script-explanation"></a>Script explanation
The script uses the following command to create the event subscription. Each command in the table links to its corresponding documentation.
| Command | Notes |
|---|---|
| [az eventgrid event-subscription create](https://docs.microsoft.com/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) | Create an Event Grid subscription. |
| [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
## <a name="next-steps"></a>Next steps
* For information about querying subscriptions, see [Query Event Grid subscriptions](../query-event-subscriptions.md).
* For more information about the Azure CLI, see the [Azure CLI documentation](https://docs.microsoft.com/cli/azure).
| 48.698113 | 210 | 0.784967 | hun_Latn | 0.988025 |
4f1497abc92e300bc1bdc19648e5617bd237a9fc | 1,030 | md | Markdown | docs/Model/School.md | bit9labs/clever-php | 638ee3d6f92db3b66eb3ea97dbb749f914bbbaf7 | [
"Apache-2.0"
] | null | null | null | docs/Model/School.md | bit9labs/clever-php | 638ee3d6f92db3b66eb3ea97dbb749f914bbbaf7 | [
"Apache-2.0"
] | null | null | null | docs/Model/School.md | bit9labs/clever-php | 638ee3d6f92db3b66eb3ea97dbb749f914bbbaf7 | [
"Apache-2.0"
] | null | null | null | # School
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**created** | **string** | | [optional]
**district** | **string** | | [optional]
**ext** | **object** | | [optional]
**high_grade** | **string** | | [optional]
**id** | **string** | | [optional]
**last_modified** | **string** | | [optional]
**location** | [**\Clever\Model\Location**](Location.md) | | [optional]
**low_grade** | **string** | | [optional]
**mdr_number** | **string** | | [optional]
**name** | **string** | | [optional]
**nces_id** | **string** | | [optional]
**phone** | **string** | | [optional]
**principal** | [**\Clever\Model\Principal**](Principal.md) | | [optional]
**school_number** | **string** | | [optional]
**sis_id** | **string** | | [optional]
**state_id** | **string** | | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 39.615385 | 161 | 0.532039 | yue_Hant | 0.176639 |
4f15ac89127623a6bed2014bc5d345dd4c1488d6 | 6,190 | md | Markdown | content/docs/leetcode/leetcode736/_index.md | Githubwyb/md-books | 16f181c66afa97306f86c90f24fff3dbf7d02db2 | [
"Apache-2.0"
] | null | null | null | content/docs/leetcode/leetcode736/_index.md | Githubwyb/md-books | 16f181c66afa97306f86c90f24fff3dbf7d02db2 | [
"Apache-2.0"
] | null | null | null | content/docs/leetcode/leetcode736/_index.md | Githubwyb/md-books | 16f181c66afa97306f86c90f24fff3dbf7d02db2 | [
"Apache-2.0"
] | null | null | null | ---
weight: 736
title: "736. Parse Lisp Expression"
---
# 题目
You are given a string expression representing a Lisp-like expression to return the integer value of.
The syntax for these expressions is given as follows.
- An expression is either an integer, let expression, add expression, mult expression, or an assigned variable. Expressions always evaluate to a single integer.
- (An integer could be positive or negative.)
- A let expression takes the form "(let v1 e1 v2 e2 ... vn en expr)", where let is always the string "let", then there are one or more pairs of alternating variables and expressions, meaning that the first variable v1 is assigned the value of the expression e1, the second variable v2 is assigned the value of the expression e2, and so on sequentially; and then the value of this let expression is the value of the expression expr.
- An add expression takes the form "(add e1 e2)" where add is always the string "add", there are always two expressions e1, e2 and the result is the addition of the evaluation of e1 and the evaluation of e2.
- A mult expression takes the form "(mult e1 e2)" where mult is always the string "mult", there are always two expressions e1, e2 and the result is the multiplication of the evaluation of e1 and the evaluation of e2.
- For this question, we will use a smaller subset of variable names. A variable starts with a lowercase letter, then zero or more lowercase letters or digits. Additionally, for your convenience, the names "add", "let", and "mult" are protected and will never be used as variable names.
- Finally, there is the concept of scope. When an expression of a variable name is evaluated, within the context of that evaluation, the innermost scope (in terms of parentheses) is checked first for the value of that variable, and then outer scopes are checked sequentially. It is guaranteed that every expression is legal. Please see the examples for more details on the scope.
Example 1:
```
Input: expression = "(let x 2 (mult x (let x 3 y 4 (add x y))))"
Output: 14
Explanation: In the expression (add x y), when checking for the value of the variable x,
we check from the innermost scope to the outermost in the context of the variable we are trying to evaluate.
Since x = 3 is found first, the value of x is 3.
```
Example 2:
```
Input: expression = "(let x 3 x 2 x)"
Output: 2
Explanation: Assignment in let statements is processed sequentially.
```
Example 3:
```
Input: expression = "(let x 1 y 2 x (add x y) (add x y))"
Output: 5
Explanation: The first (add x y) evaluates as 3, and is assigned to x.
The second (add x y) evaluates as 3+2 = 5.
```
Constraints:
- 1 <= expression.length <= 2000
- There are no leading or trailing spaces in expression.
- All tokens are separated by a single space in expression.
- The answer and all intermediate calculations of that answer are guaranteed to fit in a 32-bit integer.
- The expression is guaranteed to be legal and evaluate to an integer.
# Approach 1
## 分析
## Analysis
1. 首先需要对表达式进行解析,将每一个表达式解析成数组,这一步是需要将括号完整解析
```go
// parseExpression, preprocess the expression
// convert '(let x 1 y 2 (let x 2 y (add 1 2) x))' to ['let' 'x' '1' 'y' '2' '(let x 2 y (add 1 2) x)']
func parseExpression(expression string) (result []string)
```
2. 然后需要解析value
- value是变量,从作用域中取值
- 表达式,递归求值传入作用域
- 数字,解析
```go
// getValue, get value of input
// number => number
// var => paramMap[var]
// expression => parse(expression)
func getValue(input string, paramMap map[string]int) int
```
3. 重点来了,真正解析的函数,需要控制变量作用域,每一次递归,将上层的变量拷贝一份,生成自己的变量map
```go
// parse (let e1 v1 e2 v2 expr)
// will copy paramMap and make itself paramMap1
func parse(expression string, paramMap map[string]int) int
```
## 代码实现
```go
package main
import (
_ "fmt"
"strconv"
)
// parseExpression, preprocess the expression
// convert '(let x 1 y 2 (let x 2 y (add 1 2) x))' to ['let' 'x' '1' 'y' '2' '(let x 2 y (add 1 2) x)']
func parseExpression(expression string) (result []string) {
if expression[0] != '(' || expression[len(expression)-1] != ')' {
panic("expression not valid")
}
for index := 1; index < len(expression)-1; index++ {
// find full ()
if expression[index] == '(' {
stack := 1
i := index + 1
for ; i < len(expression); i++ {
if expression[i] == '(' {
stack++
continue
}
if expression[i] == ')' {
stack--
if stack == 0 {
break
}
}
}
result = append(result, expression[index:i+1])
index = i + 1
continue
}
// find next space
i := index + 1
for ; i < len(expression); i++ {
if expression[i] == ' ' {
break
}
}
if i == len(expression) {
result = append(result, expression[index:len(expression)-1])
break
}
result = append(result, expression[index:i])
index = i
}
return
}
// getValue, get value of input
// number => number
// var => paramMap[var]
// expression => parse(expression)
func getValue(input string, paramMap map[string]int) int {
if input[0] == '(' {
return parse(input, paramMap)
}
if v, ok := paramMap[input]; ok {
return v
}
tmp, err := strconv.ParseInt(input, 10, 64)
if err != nil {
panic(err)
}
return int(tmp)
}
// parse (let e1 v1 e2 v2 expr)
// will copy paramMap and make itself paramMap1
func parse(expression string, paramMap map[string]int) int {
if expression[0] != '(' || expression[len(expression)-1] != ')' {
panic("expression not valid")
}
paramMap1 := make(map[string]int)
// copy paramMap
for k, v := range paramMap {
paramMap1[k] = v
}
// split expression to array
arr := parseExpression(expression)
if arr[0] == "add" || arr[0] == "mult" {
if len(arr) != 3 {
panic("add need three element, " + expression)
}
if arr[0] == "add" {
return getValue(arr[1], paramMap1) + getValue(arr[2], paramMap1)
}
return getValue(arr[1], paramMap1) * getValue(arr[2], paramMap1)
}
if arr[0] == "let" {
// make self scope
for i := 2; i < len(arr)-1; i += 2 {
paramMap1[arr[i-1]] = getValue(arr[i], paramMap1)
}
return getValue(arr[len(arr)-1], paramMap1)
}
panic("not support " + arr[0])
}
func evaluate(expression string) int {
return parse(expression, nil)
}
```
| 30.048544 | 431 | 0.681583 | eng_Latn | 0.97861 |
4f17983299a81898423fe7cf11931d8b95460679 | 1,472 | md | Markdown | _posts/2018-05-06-week13.md | nyu-ossd-s18/aif228-weekly | b2a5d1efe062a9cbb6daa92b010b7b1b4023da17 | [
"MIT"
] | null | null | null | _posts/2018-05-06-week13.md | nyu-ossd-s18/aif228-weekly | b2a5d1efe062a9cbb6daa92b010b7b1b4023da17 | [
"MIT"
] | null | null | null | _posts/2018-05-06-week13.md | nyu-ossd-s18/aif228-weekly | b2a5d1efe062a9cbb6daa92b010b7b1b4023da17 | [
"MIT"
] | null | null | null | ---
layout: post
title: Week 13
---
### Reflections on Course
This semester I learned a lot about the open source community. I also learned how to work with GitHub, something I had almost no experience with before this course. I'd never really worked on a group programming project of this size either, so our work on Brackets was also a positive experience for me.
I guess a lot of the class had experience with git already, but I wish the course spent more time on the ins and outs of, and more advanced, git commands. But I liked how the course was structured so that we started with the history of open source software and then as the semester progressed we started working on them ourselves. I also enjoy computer science courses that touch on the historical background of what we're studying-I think it's helpful.
If I had to remove one thing it would be the guest lectures. I liked the concept but thought the actual presentations themselves always ended up pretty far from the realm of material relevant to the class. Maybe having guest presenters coming in for 20 to 30 minute blocks would be better since they could be more concise.
If I could add one thing to the course it would be more structure. I know that there's a syllabus on the course website but I'm still pretty unclear what, exactly, goes into our grades. The midterm grade printout was helpful, but it'd be nice to have at least some idea at different points in the semester how we're doing.
| 105.142857 | 453 | 0.788043 | eng_Latn | 0.999965 |
4f17e0ea7c470d2f249b212aebaf153630b289d4 | 28,347 | md | Markdown | _posts/2014-09-09-235.md | dmcpatrimonio/shcu2014 | e481810be96713f50ae8e839418b3ec25a4e1236 | [
"MIT"
] | null | null | null | _posts/2014-09-09-235.md | dmcpatrimonio/shcu2014 | e481810be96713f50ae8e839418b3ec25a4e1236 | [
"MIT"
] | null | null | null | _posts/2014-09-09-235.md | dmcpatrimonio/shcu2014 | e481810be96713f50ae8e839418b3ec25a4e1236 | [
"MIT"
] | null | null | null | ---
title: 'Géza Heller and the representation of modern Rio'
author:
- Niuxa Dias Drago
- Marcia Furriel Gálvez
- Sylvia Heller
category: Representations
---
# Abstract
This paper investigates aspects of pictorial representation and its
relation to the modern city through the work of the Hungarian artist and
architect Géza Heller, who immigrated to the city of Rio de Janeiro in
1935. The artist produced dozens of graphite, pen-and-ink and watercolor
drawings that record important transformations of the city, such as the
demolition of the houses on the slopes of the Morro do Castelo, the
construction of the new Esplanade buildings, the demolition of the first
generation of buildings on the Avenida Central for its verticalization,
and the demolition of the buildings of the Largo da Carioca and of the
blocks that gave way to Avenida Presidente Vargas, among others. He also
recorded the construction of important modern monuments, such as the
buildings of the Ministry of Education and Health and of the Central do
Brasil railway station, the latter of which he co-authored. His work is
thus a privileged field for studies of the city's memory and of its
urban transformations, which can be examined not only through his
subjects but equally through his draftsmanship. In the context of the
new wave of urban transformations under way in Rio de Janeiro, the work
of Géza Heller, recently brought to public attention by an exhibition at
the Parque das Ruínas, demonstrates its timeliness. Investigating it in
the light of concepts from Argan and Boyer, we find ourselves pressed to
review our relationship with the city and its history.
Keywords: Géza Heller, Rio de Janeiro, representation, modernity,
memory, urban landscape
# Introduction
"Whatever its antiquity, the work of art occurs as something that
happens in the present." G. C. Argan
This paper analyzes records of urban space made in Rio de Janeiro in the
1930s and 1940s by the Hungarian architect, draftsman and painter Géza
Heller. It starts from the premise that the artist, one of the few to
devote himself to the theme of urban space and of the city's
transformations in this important period of modernization, translated,
both in his line and in the subjects of his works, questions concerning
urban modernity, memory and the city as a work of art.1 We seek to
investigate the three layers that, according to Argan (1998, p.29),
shape a work: the artist's cultural notions, in tune with his society;
the technical and representational skills, related to his professional
practice; and the artist's personal contribution.
# The artist
The Hungarian architect Géza Heller arrived to settle in Rio de Janeiro
in 1935. Getúlio Vargas was already rehearsing his coup and imposing
great transformations on the city. To carry his modernization plan
forward, he would implement the dictatorship of the Estado Novo, a
regime certainly familiar to the European artist, who had fled the
Europe of the interwar years.
Heller had graduated in 1921 from the School of Architecture of
Budapest, where his younger brother could not enroll because of the law
that limited places for Jews. Ten years later, owing to this and other
laws that signaled the approach of the Nazi ethnic cleansing, he boarded
a ship bound for Montevideo, where some friends already lived. From this
voyage there are several drawings made on board, recording the ship
itself and the ports it called at: Rio de Janeiro (fig. 1) and Santos.
According to the artist, it was on this passage that he fell in love
with Rio de Janeiro, where he would settle four years later.
> 1 "Therefore, it \[the city\] is not only (\...) a container or a concentration of artistic products, but an artistic product itself. (\...) the work of art is no longer the expression of a single, well-defined artistic personality, but of a sum of components not necessarily concentrated in one person or one epoch." (ARGAN, 1998, p.73)
>
> 
>
> Figure 1. Géza Heller. Port of Rio de Janeiro. Source: catalogue of the exhibition Géza Heller, 2012, p.16
Since his diploma was not immediately recognized in Brazil, Heller, like many other professionals from Eastern Europe, ended up working in the offices of other immigrant architects, either from countries with more "status" in Brazil or already resident in the city for some time. He thus worked alongside the Scot Robert Prentice, the Hungarian Adalbert Szilard, settled in the city since 1926, and the Frenchman Henri Sajous. A superb draftsman, Heller was responsible for the presentation perspectives of the projects. His keen observation of the built environment extended from constructive details, recorded in the many notebooks the artist carried with him, in which he drew patterns, railings, and joints, to the urban environment, which he recorded in pen-and-ink drawings that also appeared as context in the large project perspectives.
His contribution to the development of the projects has not yet been properly assessed. All the production is attributed to the architects in charge, even though we know that Heller directed Sajous's Rio office after the latter settled in São Paulo in 1944. Only in the well-documented evolution of the design for the Central do Brasil railway station can we be certain of his visionary design capacity. But this capacity to understand the city's movement and project its future interests us here only insofar as it complements his ability to make his representation of the city historical.
Heller's collection holds a few dozen pen-and-ink drawings, graphite drawings, and watercolors depicting the city of Rio de Janeiro. Laranjeiras, where the architect lived until moving to the Minas Gerais town of Passa Quatro at the age of 85, and its surroundings, together with the city center, are the spaces he favored in his representations. Some of his records were published in the magazine *Arquitetura e Urbanismo*,2 of the Instituto de Arquitetura do Brasil, which shows the importance the architectural profession already granted them at the time.
In 1942 Heller became a painting student of Alberto da Veiga Guignard and, as early as 1943, exhibited in the first group show of what became known as the "Grupo Guignard". Throughout the 1940s his interest in the city persisted, and his research into pictorial techniques ran alongside his commitment to documenting the city's transformations. By the end of the decade his *in loco* records of the urban environment became rarer, although some of his happiest artistic translations of the city of Rio de Janeiro date from this period. The work shown in figure 2 bears witness to how Heller was able to find a fitting intersection between the language of modern painting, in this case the work of Paul Klee, and the landscape of the city.

> 2 Reproductions can be found in the issues of September and October 1938 and May and June 1939.
> 
>
> Figure 2. Géza Heller. Untitled. Source: personal collection of Sylvia Heller.
# The City

From 1937 to 1945, downtown Rio expanded toward a space that did not exist: toward the sea, with the landfills; toward the sky, with urban verticalization; and toward the site of the old downtown hills, almost all of them demolished because they stood in the way of "progress". What remained were the hills along the Valongo shore, on the inner side of Guanabara Bay, where the city did not yet want to go. The hill of the powerful Benedictines survived, as did Morro da Conceição (duly hidden by the *A Noite* "skyscraper") and the hills of Saúde, Pinto, and Gamboa by the sea, a zone of brothels and warehouses left to the commercial port.
The esplanade of Morro do Castelo and its adjacent landfill began to be occupied in 1930, in accordance with Alfred Agache's urban plan. Working in Henri Sajous's office, located in the newly inaugurated Nilomex building, Robert Prentice's work for the new esplanade, Heller would record the transformations right below his window. His drawings show the construction of the great ministries, in particular the modernist building of the Ministry of Education and Health, and of the planned blocks, as well as the last days of the old architecture lodged in the remnants of Morro do Castelo. In the background stand the vertical landmarks beginning to punctuate the city, and others condemned to disappear. Heller likewise records the transformations of Largo da Carioca, widened in 1938 with the demolition of the Imprensa Nacional and of the Hospital of the Third Order of Penitence. He also produced drawings showing the demolition of two great monuments of colonial architecture, destroyed to make way for Avenida Presidente Vargas: the Church of São Pedro dos Clérigos and the Church of Bom Jesus do Calvário.
Since the 1920s, the city had been feeling the effects of all the problems brought by industrialization: the growth of suburbs without infrastructure, deficient transportation, and a housing shortage, which generated speculation in urban land and irregular occupations. Prado Júnior had commissioned from the urbanist Alfred Agache a plan to solve the city's problems, which was never implemented, owing among other things to the Revolution of 1930. Nevertheless, "the formula presented by Agache for resolving the problems of the Old Republic, that is, the intervention of the State in the process of reproducing the urban labor force, would become the mainspring of the new regime that Getúlio Vargas implanted in the country." (ABREU, 1997, p.90).
Taking advantage of the weakening of the rural oligarchies, which was not immediately matched by a dominant industrial and financial bourgeoisie, Getúlio Vargas pushed through his modernization project, privileging institutional monuments and the great road axes inspired by Agache.3 The grand perspective of the broad Avenida Presidente Vargas (1942) was matched by the electrification of the Central do Brasil (1935) and the opening of Avenida Brasil (1946), which completed Getúlio's road plan. But the "drastic surgery" (LIMA, 1990) represented by the avenue overestimated the development of the center. In fact, the 1940s saw the growth of neighborhood centers and the *boom* of the south zone. The modest growth of the central area could easily be absorbed by the Castelo Esplanade and by the verticalization of the already established axis of Avenida Central (ABREU, 1997, pp.114-115).
Thus the enormous avenue remained quite different from the perspectives propagandized by the government and by the engineering and electric-elevator firms in the magazines, in photomontages of scale models showing the future (fig.3). The future of Avenida Presidente Vargas, flanked by the grandiose rows of skyscrapers that capital would build, was never completed. The future of the Castelo Esplanade, realization of the Agachean utopia, would find itself outdated before the tenth block was occupied. Meanwhile, Géza Heller busied himself recording the construction works, the remnants, the laborers, the changes of plan. The artist had grasped the meaning of the modern, and he knew that the modern was not Agache's studied enclave but the city in its movement, on its way toward a future of which one could only know that it would change at every instant. A future all the more uncertain the more the marks of the past were erased.
> 3 Maurício de Abreu (1997, pp.113-114) points out that "The Agache Plan had suggested the construction 'of a great avenue continuing the Mangue Canal' which, by requiring the demolition of all the buildings between the old General Câmara and São Pedro streets, 'would free the beautiful Candelária Church, which would fit perfectly into its perspective'. With small modifications, Agache's suggestion was implemented during the Estado Novo."
>
> 
>
> Figure 3. Design for Av. Presidente Vargas, published in the magazine PDF of April 1941. Source:
> <http://dc370.4shared.com/doc/-dGxYUmK/preview.html>
# The Representation

Rio de Janeiro was the city Heller adopted. In it, and with it, he developed his skills and became a full artist, without ever returning to Europe or taking it as a reference. The intense relationship he developed with Rio de Janeiro is recorded in his drawings, like a passion in a series of love letters. With drawing board and pen he felt his way through the city until he had inscribed it in the deepest part of his memory. Instead of the customary monuments, he preferred the alleys, the construction sites, the movement of the streets. His drawings are not the records of a tourist's outings, nor the technical surveys of an architect. They are the construction of the affective memory of someone who chose Rio de Janeiro as his home. By drawing, Heller came to know and to recognize the city. By drawing it, he became a carioca. In building his affective memory, he was also building the historical memory of the city.
For an immigrant, the creation of a memory is also a forgetting, or the overcoming of an earlier memory. Perhaps Géza Heller thus built an ambiguous relationship with his own memory. In fixing on paper moments of the city's transformation, he may also have been drawing his own movement of transformation. And in doing so, he set aside the remembrance of his difficult earlier experience.
> The relation between memory and forgetting can be objectified in a discourse, but, for the relation to exist, there must also exist the document capable of giving memory at least the same force as forgetting: the document that imposes itself as a pillar of memory and that memory inevitably tends to reject. (SARLO, 1997, p.41).
His line and his subjects show the city in transformation. The effort to record this movement translates an awareness that the modern is not only technology but also history. While the modernists of the new continent understood themselves as the founders of an artistic history,4 Heller took care to record each document of the past in its agony. That is why he sat down in front of the demolition of the theater and casino of the Passeio Público (fig. 4). While the cariocas celebrated the end of the "pretentious plaster"5 that had lasted only fifteen years, Heller recognized the document that, whatever its formal merits, constituted a chapter in the history of the city. Without it, how could the modern be part of this history? Argan (1998, p.74) highlights the "contrived concentration of historicity" in the old centers of cities, while new modern constructions are considered devoid of historicity.6 Precisely because he understood the historicity of the modern, Heller could be at once a documentarist and a designer; he could value each document of the past and still design a new architecture for the future.
> 
>
> Figure 4. Géza Heller. Demolition of the Beira-Mar Casino. Source: catalogue of the exhibition Géza Heller, 2012, p.31
His interest in the modern city, which is not the finished or ideal city but the "city of becoming",7 is translated in the rapid technique of his pen-and-ink records and, why not say it, in the greatest monument to come off his drawing board: the great clock tower of the Central do Brasil station. The great clock, facing the busiest expressway and railway in Brazil, marks for the city the establishment of a new time, and Heller's drawing, in which the old and the new station are juxtaposed, should be read in this sense (fig.5). It is an image in which past and present coexist for a moment, an allegory of modern transformation:

> Every discourse sets up a spatial order, a frozen image that captures the manner in which the transitory present is perceived. *Momentarily arresting disruptive and energetic forces*, representational forms become succinct records of what we consider to be present reality. These aesthetic models transform our sense of the real, for the image of the city is an abstract concept, an imaginary constructed form.8 (BOYER, 1996, p.31, our emphasis).

> 4 Argan (1998, pp.60-61) shows how American artists, "from Dewey to Langer and Arnheim," underestimated the historical genesis of art, "affirming its individualist character, that of an autonomous creative experience". We may extend this idea to the modernist movement in Brazil.
>
> 5 "The newspaper *A Noite* celebrated the end of the 'pretentious and florid plaster', whose design mixed eclectic and neocolonial elements" (PREFEITURA \..., 2012, p.30).
>
> 6 It is worth noting the primacy of Brazilian heritage institutions in recognizing the historical value of modern monuments, although the initial listings referred to isolated works rather than to urban ensembles.
>
> 7 "the modern city is opposed to the old one precisely insofar as it reflects the concept of a city that, having no charismatic institution, can keep changing without a providential order, so that precisely its continuous change is what is representative (\...)." (ARGAN, 1998, pp.74-75).
> 
>
> Figure 5. Géza Heller. Untitled. Source: catalogue of the exhibition Géza Heller, 2012, p.75
The artist's records draw our attention all the more for being unique, rare. The public outcry caused by the demolition of the eighteenth-century church of São Pedro dos Clérigos, to clear the path of Av. Presidente Vargas, might have been expected to inspire more records of that architecture about to be lost.9 Yet records of the little church are very rare, even though at a certain moment it stood isolated on a threatening esplanade, in an excellent position, in space and in time, to be recorded. The known photographs of it do not amount to ten, and, by all indications, only the Hungarian artist sat down in front of its demolition to immortalize it (Fig. 6). The painters and draftsmen in their studios, and the architects too busy designing, forgot to look at the city. Perhaps, watching the fury of its transformation, they were unable to understand that this was the city itself, and that there would be no new city, petrified for recording, in any future beyond. They waited to see what would be worth recording once the transformation was over, but it would never be over. For modern Brazilian artists, aesthetic experience could only lie outside, or beyond, everyday experience. There was no sense in recording an urban landscape that did not yet correspond to the zero point of the history they intended to build. "The real city", as Argan says (1998, p.74), "reflects the difficulties of making art, and the contradictory circumstances of the world in which it is made".

> 8 In the original: "*Every discourse sets up a spatial order, a frozen image that captures the manner in which a transitory present is perceived*. *Momentarily arresting disruptive and energetic forces, representational forms become succinct records of what we consider to be present reality. These aesthetic models transform our sense of the real, for the image of the city is an abstract concept, an imaginary constructed form."*
>
> 9 Evelyn Furquim Werneck Lima (1990, pp.39-40) records the controversy over the demolition and the proposals, published in the newspapers of the time, to relocate the church elsewhere.
> 
>
> Figure 6. Géza Heller. Demolition of the Church of São Pedro. Source: catalogue of the exhibition Géza Heller, 2012, p.79
Géza Heller knew that technique, too, was pressed by time. The line changes with the city, and the record is not only of the works in progress, or of the hill draining away, but of the eager line which, in the speed flowing from the hand, reflects the speed the artist intuits; far from wanting to freeze that speed, as a photograph does, he makes it one more theme that he represents. The artistic value of Heller's drawing lies in surpassing the historical moment it inhabits (the date), revealing in its form the character, the climate, and the contradictory sense of an epoch that, wanting to create perennial monuments, testified against all permanence.
The rapid line is the record of the artist's work, while also recording the contemporary, correlated work of the laborer. Together they are the work of the city in its historical trajectory. Both theme and technique sensitively translate the acceleration of modern time at its moment of paroxysm, the moment when things almost cease to count as things and count instead as symbols of this acceleration. Heller's line stands exactly at this point, where the theme is almost lost in the speed of the stroke, where we almost cease to see it in order to marvel at how far the artist adapted his technique to capture the sense of the time.
Most of his pen-and-ink records attest to his interest in construction, in the techniques and the labor that can only be seen in the course of the process, of the making. When he records the quarry of Morro da Viúva (Fig.7), belonging to the contractor Januzzi & Cia, Heller relegates the perennial image of Corcovado to the background and makes the quarry, whose granite was used to build the buildings of the new Castelo Esplanade, the focus of his representation. His theme was the city in transformation and, for an architect, transformation is material. "The city is a construction," says Argan (1998, p.75), "and the starting point of every construction is constructibility; before considering the city in relation to aesthetic categories, it must be considered in relation to the techniques that make it not only conceivable but designed (\...)."
> 
>
> Figure 7. Géza Heller. Morro da Viúva. Source: catalogue of the exhibition Géza Heller, 2012, p.23
It should also be noted that Heller's interest in the techniques that change processes and art is equally evident in his experience as a painter. The artist used all the techniques of drawing on paper (pencil, charcoal, oil pastel, pen and ink, watercolor, gouache), experimented with printmaking and with painting on various supports, and even created a special monotype technique (PREFEITURA\..., 2012, p.16).
What stands out in the body of his work is the insistence and care with which he recorded the old row houses he could see from the window of Henri Sajous's office, in the newly inaugurated *Nilomex* building on the Castelo esplanade. The small houses, once at the foot of Morro do Castelo, appear stranded between Avenida Rio Branco and the new Esplanade, and display the abandonment of those who see the end drawing near. This melancholy, and the remembrance of a more enduring time, did not escape Heller, who made of them some of his most meticulous records, in which we see clothes on the line, people talking in backyards, and small birds still searching for the hill that is gone (Fig.8).
> 
>
> Figure 8. Géza Heller. Demolition of the Policlínica. Source: catalogue of the exhibition Géza Heller, 2012, p.39
# Representation and History

Heller's records are an indispensable document for understanding the city, which appears in them in all its complexity. His drawings gather loose ends of historical threads that never appear joined in the official records, together with the artistic sense of the city, which only art itself can essentially translate. It is through them that we can construct the visual history of the city, make sense of scraps of urban fabric, and reassemble utopias. But, above all, it is through them that the history of architecture and urbanism can take its place in its proper field, which is that of the history of art.
The difference between a record like Heller's and a commercial image of the city is the same difference that separates art from information. Instead of the work open to present interpretations, a false certainty; instead of movement, the product:

> What today's technoscientific culture wants to substitute for historical probabilism, or for the search for truth, is the offer of exact, incontestable, immediate information (\...) which, because it can be verified, is not susceptible to criticism or demonstration and which, as soon as it ceases to be news, plunges into a past without depth or measure, losing all meaning (ARGAN, 1998, p.28).
Today, when Rio de Janeiro is undergoing transformations of similar magnitude, what artist records them? What line can grasp them, going beyond the authority of *posts* on social networks as complex documents of the new becoming? Who will care about the alienated labor of the thousands of workers who, not knowing what they are doing, are building the city? Who will care about the contradictions between the renderings (now computerized and animated) that the city government shows in its subway videos and the real city? Who will care about the contrasts that appear only in the movement of change?
The organic relationship with the spaces of the city has been run over by speed, by images, by the absence of interpersonal and historical relations in urban space. The affective record, through which the artist apprehends the city, has been definitively replaced by the simulacrum: by the click that abolishes the world outside the frame, by the three-dimensional illusion, by the virtual tour of internet applications.10
Since the sense of art and history has been lost to architects, after the great events we may have no documents like Heller's left to study. All the records will have been produced by the official press, by the real-estate market, or by the institutions promoting the events, excluding the real city from historiography. The architects and historians of the city will have to lift layer upon layer to see what was right in front of them. Alienated both from the principles of history and from the principles of constructive reality, architects will have lost the right to their city\...
# References

> ABREU, Maurício de. *Evolução Urbana do Rio de Janeiro*. Rio de Janeiro: IPLANRio, 1997. 3rd ed.
>
> ARGAN, G.C. *História da Arte como História da Cidade*. São Paulo: Martins Fontes, 1998.
>
> BAUDRILLARD, Jean. *Simulacros e Simulações*. Lisboa: Relógio D'Água, 1991.
>
> BOYER, Christine. *The City of Collective Memory: Its Historical Imagery and Architectural Entertainments*. Cambridge: MIT Press, 1996.
>
> LIMA, Evelyn F. W. *Avenida Presidente Vargas: uma drástica cirurgia*. Rio de Janeiro: Secretaria Municipal de Cultura -- DGDC, 1990.
>
> PREFEITURA DO RIO DE JANEIRO / CABBET Produções. *Géza Heller -- um carioca sonhador* (exhibition catalogue). 2012.
>
> SARLO, Beatriz. A história contra o esquecimento. In: MICELI, Sérgio (org.). *Paisagens imaginárias*. São Paulo: EDUSP, 1997.
> 10 "Today abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer the simulation of a territory, of a referential being, of a substance. It is the generation by models of a real without origin or reality: the hyperreal. The territory no longer precedes the map, nor does it survive it. It is now the map that precedes the territory -- precession of simulacra -- and it is the map that engenders territories whose fragments rot across the expanse of the map." (BAUDRILLARD, 1991, p.8).
| 51.63388 | 72 | 0.794758 | por_Latn | 0.999911 |
4f18453f273388147263d319b98de70aa0bc6e06 | 685 | md | Markdown | _posts/2020-02-04-marca-de-arroz-decide-romper-contrato-de-publicidade-com-regina-duarte.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2020-02-04-marca-de-arroz-decide-romper-contrato-de-publicidade-com-regina-duarte.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2020-02-04-marca-de-arroz-decide-romper-contrato-de-publicidade-com-regina-duarte.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | 1 | 2022-01-13T07:57:24.000Z | 2022-01-13T07:57:24.000Z | ---
layout: post
item_id: 2874640996
title: >-
Marca de arroz decide romper contrato de publicidade com Regina Duarte
author: Tatu D'Oquei
date: 2020-02-04 17:00:00
pub_date: 2020-02-04 17:00:00
time_added: 2020-02-04 23:28:36
category:
tags: []
---
A marca de arroz Cristal, de que Regina Duarte é garota-propaganda, decidiu romper o contrato de publicidade com a atriz, em virtude de sua decisão de entrar para o governo.
**Link:** [https://epoca.globo.com/guilherme-amado/marca-de-arroz-decide-romper-contrato-de-publicidade-com-regina-duarte-24229367](https://epoca.globo.com/guilherme-amado/marca-de-arroz-decide-romper-contrato-de-publicidade-com-regina-duarte-24229367)
| 38.055556 | 252 | 0.769343 | por_Latn | 0.791455 |
4f18803a38e4f55da0d03e2b70faa44a174d5d01 | 1,277 | md | Markdown | README.md | josancamon19/flutter_sqflite_notodo_app | f1fa579a005eb1235f381d8d119816390508b48c | [
"Apache-2.0"
] | 1 | 2019-03-09T09:39:39.000Z | 2019-03-09T09:39:39.000Z | README.md | josancamon19/flutter_sqflite_notodo_app | f1fa579a005eb1235f381d8d119816390508b48c | [
"Apache-2.0"
] | null | null | null | README.md | josancamon19/flutter_sqflite_notodo_app | f1fa579a005eb1235f381d8d119816390508b48c | [
"Apache-2.0"
] | 1 | 2019-02-09T04:04:27.000Z | 2019-02-09T04:04:27.000Z | # flutter no todo app
This code contains a no todo app in flutter, this app saves the data inside a sqflite database, this app makes part of the course [Flutter and Dart - The Complete Flutter App Development Course](https://www.udemy.com/flutter-dart-the-complete-flutter-app-development-course/learn/v4/content)
from Udemy and Paulo Dichone.
This app contains a full CRUD in [SQflite](https://github.com/tekartik/sqflite).
## Getting Started
For help getting started with Flutter, view the online
[documentation](https://flutter.io/).
## Screens

## License
Copyright 2018 Joan Cabezas
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 38.69697 | 291 | 0.75881 | eng_Latn | 0.9543 |
4f192b566d3128953133a79804549e23312e3589 | 6,509 | md | Markdown | articles/app-service/containers/quickstart-docker.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/app-service/containers/quickstart-docker.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/app-service/containers/quickstart-docker.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Implementación de una aplicación de Docker en Linux: Azure App Service'
description: Como implementar una imagen de Docker en Azure App Services para Linux
author: msangapu
ms.author: msangapu
ms.date: 08/28/2019
ms.topic: quickstart
ms.service: app-service
ms.openlocfilehash: 2a7dc477b4cd0be0c50569d84e10cfe1d666eac9
ms.sourcegitcommit: 88ae4396fec7ea56011f896a7c7c79af867c90a1
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 09/06/2019
ms.locfileid: "70392704"
---
# <a name="deploy-to-azure-using-docker"></a>Implementación en Azure con Docker
App Service en Linux proporciona pilas de aplicaciones predefinidas en Linux con compatibilidad con lenguajes como .NET, PHP o Node.js entre otros. También puede usar una imagen personalizada de Docker para ejecutar la aplicación web en una pila de aplicaciones aún sin definir en Azure. En este inicio rápido se muestra cómo implementar una imagen desde [Azure Container Registry](/azure/container-registry) (ACR) en App Service.
## <a name="prerequisites"></a>Requisitos previos
* Una [cuenta de Azure](https://azure.microsoft.com/free/?utm_source=campaign&utm_campaign=vscode-tutorial-docker-extension&mktingSource=vscode-tutorial-docker-extension)
* [Docker](https://www.docker.com/community-edition)
* [Visual Studio Code](https://code.visualstudio.com/)
* La [extensión de Azure App Service para VS Code](vscode:extension/ms-azuretools.vscode-azureappservice). Puede usar esta extensión para crear, administrar e implementar Web Apps de Linux en la Plataforma como servicio (PaaS) de Azure.
* La [extensión de Docker para VS Code](vscode:extension/ms-azuretools.vscode-docker). Puede usar esta extensión para simplificar la administración de imágenes y comandos locales de Docker e implementar imágenes de aplicaciones compiladas en Azure.
## <a name="create-an-image"></a>Crear una imagen
Para completar este inicio rápido, necesitará una imagen de aplicación web adecuada almacenada en [Azure Container Registry](/azure/container-registry). Siga las instrucciones del artículo [Guía de inicio rápido: Creación de un registro de contenedor privado con Azure Portal](/azure/container-registry/container-registry-get-started-portal), pero use la imagen `mcr.microsoft.com/azuredocs/go` en lugar de `hello-world`.
> [!IMPORTANT]
> Asegúrese de establecer la opción **Usuario administrador** en **Habilitar** al crear el registro de contenedor. También puede establecerla en la sección **Claves de acceso** de la página de registro en Azure Portal. Esta opción de configuración es necesaria para el acceso a App Service.
## <a name="sign-in"></a>Iniciar sesión
A continuación, inicie VS Code e inicie sesión en su cuenta de Azure con la extensión App Service. Para ello, seleccione el logotipo de Azure en la barra de actividades, vaya al explorador **APP SERVICE**, después, seleccione **Iniciar sesión en Azure** y siga las instrucciones.

## <a name="check-prerequisites"></a>Comprobación de los requisitos previos
Ahora puede comprobar si ha instalado y configurado todos los requisitos previos.
En VS Code, verá su dirección de correo electrónico de Azure en la barra de estado y la suscripción en el explorador de **APP SERVICE**.
A continuación, compruebe que tiene Docker instalado y en ejecución. El siguiente comando mostrará la versión de Docker si se está ejecutando.
```bash
docker --version
```
Por último, asegúrese de que Azure Container Registry está conectado. Para ello, seleccione el logotipo de Docker en la barra de actividad y, a continuación, vaya a **REGISTRIES** (Registros).

## <a name="deploy-the-image-to-azure-app-service"></a>Implementación de la imagen en Azure App Service
Ahora que todo está configurado, puede implementar la imagen en [Azure App Service](https://azure.microsoft.com/services/app-service/) directamente desde el explorador de extensiones de Docker.
Busque la imagen debajo del nodo **Registries** en el Explorador de **DOCKER** y expándalo para mostrar sus etiquetas. Haga clic con el botón derecho en una etiqueta y, después, seleccione **Implementar imagen en Azure App Service**.
Desde aquí, siga las indicaciones para elegir una suscripción, un nombre de aplicación único global, un grupo de recursos y un plan de App Service. Elija **B1 básico** como plan de tarifa y una región.
Después de la implementación, la aplicación está disponible en `http://<app name>.azurewebsites.net`.
A **resource group** is a named collection of all of your application's resources in Azure. For example, a resource group can contain a reference to a website, a database, and an Azure function.
An **App Service plan** defines the physical resources used to host your website. This quickstart uses a **Basic** hosting plan on **Linux** infrastructure, which means the site will be hosted on a Linux machine alongside other websites. If you start with the **Basic** plan, you can use the Azure portal to scale up so that yours is the only site running on a machine.
## <a name="browse-the-website"></a>Browse the website
The **Output** panel opens during deployment to indicate the status of the operation. When the operation completes, find the app you created in the **APP SERVICE** explorer, right-click it, and select **Browse Website** to open the site in your browser.
> [!div class="nextstepaction"]
> [I ran into an issue](https://www.research.net/r/PWZWZ52?tutorial=quickstart-docker&step=deploy-app)
## <a name="next-steps"></a>Next steps
You have successfully completed this quickstart.
Next, check out the other Azure extensions.
* [Cosmos DB](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-cosmosdb)
* [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions)
* [Azure CLI Tools](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azurecli)
* [Azure Resource Manager Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools)
Or get them all by installing the [Azure Tools](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) extension pack.
---
pid: 28d40fea-22af-4d40-b713-0e73606295d6
idno: TRL-7.4.2.09
thumbnail: https://dlc.services/thumbs/7/4/28d40fea-22af-4d40-b713-0e73606295d6/full/400,339/0/default.jpg
manifest: https://dlc.services/iiif-resource/delft/string1string2string3/kaartenproject-2007/TRL-7.4.2.09
order: '487'
layout: map
collection: kaartenproject
---
# Demo on two Ubuntu instances
This demo tutorial is a first step in demonstrating Pyrsia's capabilities.
You will setup two Pyrsia nodes on two separate Ubuntu instances, wire them
together in a very small p2p network, and use the regular Docker client on
Ubuntu to pull images off the Pyrsia network. The Pyrsia nodes use Docker Hub
as a fallback mechanism in case the image is not yet available in the Pyrsia
network.
```mermaid
flowchart TB
subgraph Instance 1
docker[Docker Client]-->PyrsiaNode1[Pyrsia Node]
end
subgraph Instance 2
docker2[Docker Client]-->PyrsiaNode2[Pyrsia Node]
end
PyrsiaNode1-->PyrsiaNode2
PyrsiaNode2-->PyrsiaNode1
PyrsiaNode1-->DockerHub[Docker Hub]
```
## Prerequisites
- Two Ubuntu instances with public IPs that allow inbound TCP traffic on port 44000.
We will refer to them as:
- `node1`
- `node2`
- We assume you have Docker installed. Follow these [instructions](https://docs.docker.com/engine/install/ubuntu/) if you do not.
> If you ran these steps before and if you want to start from a clean sheet, do this:
>
> ```sh
> apt-get remove pyrsia
> rm -rf /usr/local/var/pyrsia
> ```
## Demo Scenario
This demo consists of several steps: (scroll down for instructions)
1. [Installation and configuration](#install-and-configure-pyrsia)
- Install and configure Pyrsia on `node1`
- Install and configure Pyrsia on `node2`, make it connect to `node1`
2. [Docker pull on `node1`](#use-pyrsia)
- image is not available in the Pyrsia network
- image is requested from Docker Hub and stored locally, so it becomes
available in the Pyrsia network
3. [Use the Pyrsia CLI to check `node1` status](#use-the-cli-to-check-the-node-status)
4. [Docker pull on `node2`](#use-pyrsia)
- The same Docker image is pulled on `node2`
- `node2` requests the image from the Pyrsia network, in this specific case: `node1`.
5. [Use the Pyrsia CLI to check `node2` status](#use-the-cli-to-check-the-node-status)
6. [Docker pull on `node2`](#use-pyrsia)
- The same Docker image is pulled again on `node2`
- `node2` doesn't have to download the image again
These are the steps in more detail:
```mermaid
sequenceDiagram
participant User as User
participant Docker1 as Docker Daemon on node1
participant Node1 as Pyrsia Node on node1
participant PNW as Pyrsia Network
participant Docker2 as Docker Daemon on node2
participant Node2 as Pyrsia Node on node2
participant DockerHub as Docker Hub
User ->> Node1: Installs Pyrsia
activate User
note left of User: Installation
User ->> Node2: Installs Pyrsia and configures it to connect to node1
Node2 ->> Node1: Connects to peer node1 on port 44000<br>node1 and node2 now form the 'Pyrsia Network'
deactivate User
User ->> Docker1: docker pull image
activate User
note left of User: Pull on node1
Docker1 ->> Node1: request image through the Docker<br>Registry API running inside the Pyrsia node on port 7888
Node1 ->> PNW: Node1 checks if the image is available<br>locally or on the Pyrsia network
Node1 ->> DockerHub: The image is not available and Node1<br>requests the image from Docker Hub
Node1 ->> PNW: Node1 stores the image locally<br>and announces it availability on the Pyrsia Network
Node1 ->> Docker1: The Pyrsia node responds with the requested image
Docker1 ->> User: docker pull is completed successfully
deactivate User
User ->> Node1: pyrsia status the user uses the CLI to ask the<br>status the CLI connects to the Pyrsia node on port 7888
activate User
deactivate User
note left of User: Check Pyrsia<br>node status
User ->> Docker2: docker pull image
activate User
note left of User: Pull on node2
Docker2 ->> Node2: request image through the Docker Registry API<br>running inside the Pyrsia node on port 7888
Node2 ->> PNW: Node2 checks if the image is available locally<br>or on the Pyrsia network<br>In this case, it is available on Node1
Node2 ->> Node1: Node2 connects to port 44000 on Node1<br>to request and download the artifact
Node2 ->> PNW: Node2 stores the artifact locally and announces itself<br>as a provider for this artifact as well.
Node2 ->> Docker2: The Pyrsia node responds with the requested image
Docker2 ->> User: docker pull is completed successfully
deactivate User
User ->> Node2: pyrsia node --status the user uses the CLI to ask the status<br>the CLI connects to the Pyrsia node on port 7888
activate User
deactivate User
note left of User: Check Pyrsia<br>node status
User ->> Docker2: docker pull image
activate User
note left of User: Pull again on node2
Docker2 ->> Node2: request image through the Docker Registry API<br>running inside the Pyrsia node on port 7888
Node2 ->> Docker2: The Pyrsia node responds with the requested image<br>because it was already available locally
Docker2 ->> User: Docker pull is completed successfully
deactivate User
```
## Install and configure Pyrsia
> IMPORTANT: run the installation phase as `root`.
**On both instances:**
### Install Pyrsia
Follow the instructions below or have a look at the latest [Pyrsia documentation](https://pyrsia.io/docs/tutorials/quick-installation/).
```sh
curl -sS https://pyrsia.io/install.sh | sh
```
or run the commands listed below:
```sh
# Update system and install base tooling
sudo apt-get update
sudo apt-get install -y wget gnupg
# Add the Pyrsia keys to verify packages
wget -q -O - https://repo.pyrsia.io/repos/Release.key | gpg --dearmor > pyrsia.gpg
sudo install -o root -g root -m 644 pyrsia.gpg /etc/apt/trusted.gpg.d/
rm pyrsia.gpg
echo "deb https://repo.pyrsia.io/repos/nightly focal main" | sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt-get update
# Install
sudo apt-get install -y pyrsia
```
### Edit configuration
Both nodes will already be listening on port 44000 when they start.
Let's now edit the configuration on node2 to connect to node1 at startup.
**On `node2`:**
Edit `/etc/systemd/system/multi-user.target.wants/pyrsia.service` and add
`--peer /ip4/public_ip_of_node1/tcp/44000` to the `ExecStart` line so it looks
like this:
```sh
ExecStart=/usr/bin/pyrsia_node --host 0.0.0.0 -L /ip4/0.0.0.0/tcp/44000 --peer /ip4/public_ip_of_node1/tcp/44000
```
This will make sure `node2` connects to peer `node1` when it starts.
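If you prefer to script this edit instead of opening an editor, a `sed` one-liner can append the flag to the `ExecStart` line. This is a sketch: it operates on a temporary sample file, and `203.0.113.10` is a placeholder for the public IP of your `node1`. Point `UNIT` at the real unit file on `node2` to use it for real.

```shell
# Demonstrate appending "--peer ..." to the ExecStart line of a unit file.
# Real path on node2: /etc/systemd/system/multi-user.target.wants/pyrsia.service
UNIT=$(mktemp)
cat > "$UNIT" <<'EOF'
[Service]
ExecStart=/usr/bin/pyrsia_node --host 0.0.0.0 -L /ip4/0.0.0.0/tcp/44000
EOF

PEER="/ip4/203.0.113.10/tcp/44000"   # placeholder: public IP of node1
# "&" in the replacement re-inserts the matched ExecStart line, then appends the flag
sed -i "s|^ExecStart=.*|& --peer ${PEER}|" "$UNIT"

grep '^ExecStart=' "$UNIT"
```

Remember to run `systemctl daemon-reload` afterwards, as described below.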
Reload the daemon configuration:
```sh
systemctl daemon-reload
```
Restart the Pyrsia node:
```sh
service pyrsia restart
```
Check the daemon status:
```sh
service pyrsia status
```
You should see something very similar to:
```sh
● pyrsia.service - Pyrsia Node
Loaded: loaded (/lib/systemd/system/pyrsia.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-03-23 14:29:55 UTC; 5min ago
Main PID: 42619 (pyrsia_node)
Tasks: 11 (limit: 19189)
Memory: 3.4M
CGroup: /system.slice/pyrsia.service
└─42619 /usr/bin/pyrsia_node -H 127.0.0.1 --peer /ip4/1.2.3.4/tcp/44000 -L /ip4/0.0.0.0/tcp/44000
```
### Use the CLI to check the node status
Check the node status:
```sh
pyrsia -s
```
You should see something very similar to:
```sh
Connected Peers Count: 1
Artifacts Count: 3 {"manifests": 1, "blobs": 2}
Total Disk Space Allocated: 5.84 GB
Disk Space Used: 0.0002%
```
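For scripting or monitoring, individual fields can be pulled out of this status output with standard tools. A sketch; it parses a copy of the sample output above rather than calling a live node:

```shell
# Extract the peer count from `pyrsia -s`-style output.
# In a real script this would be: status=$(pyrsia -s)
status='Connected Peers Count: 1
Artifacts Count: 3 {"manifests": 1, "blobs": 2}
Total Disk Space Allocated: 5.84 GB
Disk Space Used: 0.0002%'

# Split on ": " and print the value for the "Connected Peers Count" line
peers=$(printf '%s\n' "$status" | awk -F': ' '/^Connected Peers Count/ {print $2}')
echo "peers=$peers"
```
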
List the node's peers:
```sh
pyrsia -l
```
You should see something very similar to:
```sh
Connected Peers:
["12D3KooWMD9ynPTdvhWMcdX7mh23Au1QpVS3ekTCQzpRTtd1g6h3"]
```
### Tail the log
```sh
tail -f /var/log/syslog
```
You should see something very similar to:
```sh
Mar 23 14:37:08 demo-pyrsia-node-2 pyrsia_node[42678]: DEBUG multistream_select::dialer_select > Dialer: Proposed protocol: /ipfs/id/1.0.0
Mar 23 14:37:08 demo-pyrsia-node-2 pyrsia_node[42678]: DEBUG multistream_select::dialer_select > Dialer: Received confirmation for protocol: /ipfs/id/1.0.0
Mar 23 14:37:08 demo-pyrsia-node-2 pyrsia_node[42678]: DEBUG libp2p_core::upgrade::apply > Successfully applied negotiated protocol
Mar 23 14:37:08 demo-pyrsia-node-2 pyrsia_node[42678]: Identify::Received: 12D3KooWMD9ynPTdvhWMcdX7mh23Au1QpVS3ekTCQzpRTtd1g6h3; IdentifyInfo { public_key: Ed25519(PublicKey(compressed): a94721f6a984901ec913ca8fac3963103f9f5f45fa5c484e9df8db469ab1e), protocol_version: "ipfs/1.0.0", agent_version: "rust-libp2p/0.34.0", listen_addrs: ["/ip4/1.1.1.1/tcp/44000", "/ip4/127.0.0.1/tcp/44000", "/ip4/10.128.0.14/tcp/44000", "/ip4/172.17.0.1/tcp/44000"], protocols: ["/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0", "/ipfs/kad/1.0.0", "/file-exchange/1"], observed_addr: "/ip4/2.2.2.2/tcp/52012" }
Mar 23 14:37:08 demo-pyrsia-node-2 pyrsia_node[42678]: DEBUG pyrsia::network::p2p > Identify::Received: adding address "/ip4/34.66.158.102/tcp/44000" for peer 12D3KooWMD9ynPTdvhWMcdX7mh23Au1QpVS3ekTCQzpRTtd1g6h3
Mar 23 14:37:08 demo-pyrsia-node-2 pyrsia_node[42678]: INFO pyrsia::network::handlers > Dialed "/ip4/34.66.158.102/tcp/44000"
```
## Use Pyrsia
Keep the log tail from the installation phase running and open a new terminal
on both instances. (doesn’t have to be `root`)
First on `node1`, pull any Docker image:
```sh
docker pull alpine
```
(make sure to remove it from the local Docker cache if you already pulled it
before: `docker rmi alpine`)
Look at the syslog to show what happened. Alternatively grep the syslog for ‘Step’.
```sh
cat /var/log/syslog | grep Step
> Step 1: Does "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" exist in the artifact manager?
> Step 1: NO, "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" does not exist in the artifact manager.
> Step 3: Retrieving "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" from docker.io
> Step 3: "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" successfully stored locally from docker.io
> Final Step: "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" successfully retrieved!
> Step 3: "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" successfully stored locally from docker.io
> Final Step: "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" successfully retrieved!
```
It shows that Pyrsia didn’t have the image yet, but it fetched it from Docker Hub instead.
Next on `node2`, pull the same Docker image:
```sh
docker pull alpine
```
Inspect the syslog on `node2`, or grep for ‘Steps’:
```sh
> Step 1: Does "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" exist in the artifact manager?
> Step 1: Does "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" exist in the artifact manager?
> Step 1: NO, "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" does not exist in the artifact manager.
> Step 1: NO, "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" does not exist in the artifact manager.
> Step 2: Does "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" exist in the Pyrsia network?
> Step 2: Does "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" exist in the Pyrsia network?
> Step 2: YES, "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" exists in the Pyrsia network.
> Step 2: "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" successfully stored locally from Pyrsia network.
> Final Step: "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" successfully retrieved!
> Step 2: YES, "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" exists in the Pyrsia network.
> Step 2: "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" successfully stored locally from Pyrsia network.
> Final Step: "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" successfully retrieved!
```
This shows the image wasn't available locally, but it was available in the
Pyrsia network, retrieved and stored locally.
Next, remove the image from the local docker cache, and retrieve it again:
```sh
docker rmi alpine
docker pull alpine
```
Inspect the syslog on `node2` again:
```sh
> Step 1: YES, "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" exist in the artifact manager.
> Final Step: "sha256:3d243047344378e9b7136d552d48feb7ea8b6fe14ce0990e0cc011d5e369626a" successfully retrieved!
> Final Step: "sha256:e9adb5357e84d853cc3eb08cd4d3f9bd6cebdb8a67f0415cc884be7b0202416d" successfully retrieved!
```
It will show the local Pyrsia node already had this Docker image and didn’t
have to download it again. Inspect the Pyrsia node status again on both nodes:
```sh
pyrsia -s
```
You should see something very similar to:
```sh
Connected Peers Count: 1
Artifacts Count: 3 {"manifests": 1, "blobs": 2}
Total Disk Space Allocated: 5.84 GB
Disk Space Used: 0.0002%
```
---
title: Lookup Function (Report Builder and SSRS) | Microsoft Docs
ms.date: 03/07/2017
ms.prod: reporting-services
ms.prod_service: reporting-services-sharepoint, reporting-services-native
ms.technology: report-design
ms.topic: conceptual
ms.assetid: e60e5bab-b286-4897-9685-9ff12703517d
author: maggiesMSFT
ms.author: maggies
ms.openlocfilehash: b4bebbcb2efb5dc8ef9bc056cb0bb62b3ee08fee
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/01/2018
ms.locfileid: "47679128"
---
# <a name="report-builder-functions---lookup-function"></a>Report Builder functions - Lookup function
  Returns the first matching value for the specified name from a dataset that contains name-value pairs.
> [!NOTE]
>  [!INCLUDE[ssRBRDDup](../../includes/ssrbrddup-md.md)]
## <a name="syntax"></a>Syntax

```
Lookup(source_expression, destination_expression, result_expression, dataset)
```

#### <a name="parameters"></a>Parameters
 *source_expression*
 (**Variant**) An expression that is evaluated in the current scope and that specifies the name or key to look up. For example, `=Fields!ProdID.Value`.
 *destination_expression*
 (**Variant**) An expression that is evaluated for each row in a dataset and that specifies the name or key to match on. For example, `=Fields!ProductID.Value`.
 *result_expression*
 (**Variant**) An expression that is evaluated for the row in the dataset where *source_expression* = *destination_expression*, and that specifies the value to retrieve. For example, `=Fields!ProductName.Value`.
 *dataset*
 A constant that specifies the name of a dataset in the report. For example, "Products".
## <a name="return"></a>Return value
 Returns a **Variant**; returns **Nothing** if there is no match.
## <a name="remarks"></a>Remarks
 For a name-value pair with a one-to-one relationship, use **Lookup** to retrieve the value from the specified dataset. For example, for an ID field in a table, you can use **Lookup** to retrieve the corresponding Name field from a dataset that is not bound to the data region.
 **Lookup** does the following:
- Evaluates the source expression in the current scope.
- Evaluates the destination expression for each row of the specified dataset, after filters have been applied, in the sort order of the specified dataset.
- On the first match of the source expression and the destination expression, evaluates the result expression for that row in the dataset.
- Returns the result expression value.
 To retrieve multiple values for a single name or key field with a one-to-many relationship, use the [LookupSet Function (Report Builder and SSRS)](../../reporting-services/report-design/report-builder-functions-lookupset-function.md). To call **Lookup** for multiple values, use the [Multilookup Function (Report Builder and SSRS)](../../reporting-services/report-design/report-builder-functions-multilookup-function.md).
 The following restrictions apply:
- **Lookup** is evaluated after all filter expressions have been applied.
- Only one level of lookup is supported. A source, destination, or result expression cannot include a reference to a lookup function.
- Source and destination expressions must evaluate to the same data type. The return type is the same as the data type of the evaluated result expression.
- Source, destination, and result expressions cannot include references to report or group variables.
- **Lookup** cannot be used as an expression for the following report items:
  - Dynamic connection strings for a data source.
  - Calculated fields in a dataset.
  - Query parameters in a dataset.
  - Filters in a dataset.
  - Report parameters.
  - The Report.Language property.
 For more information, see the [Aggregate Functions Reference (Report Builder and SSRS)](../../reporting-services/report-design/report-builder-functions-aggregate-functions-reference.md) and [Expression Scope for Totals, Aggregates, and Built-in Collections (Report Builder and SSRS)](../../reporting-services/report-design/expression-scope-for-totals-aggregates-and-built-in-collections.md).
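For comparison, the related functions mentioned above take the same argument pattern. The sketch below reuses the same hypothetical "Product" dataset as the example in this article; `Fields!ProductIDList.Value` is an invented field used only for illustration:

```
' One-to-many: returns all names that match a single key
=LookupSet(Fields!ProductID.Value, Fields!ID.Value, Fields!Name.Value, "Product")

' Many one-to-one lookups at once; the first argument is a collection
=Multilookup(Split(Fields!ProductIDList.Value, ","), Fields!ID.Value, Fields!Name.Value, "Product")
```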
## <a name="example"></a>Example
 In the following example, assume that a table is bound to a dataset that includes a field for the product identifier ProductID. A separate dataset called "Product" contains the corresponding product identifier ID and the product name Name.
 In the following expression, **Lookup** compares the value of ProductID to ID in each row of the dataset called "Product". When a match is found, the value of the Name field for that row is returned.

```
=Lookup(Fields!ProductID.Value, Fields!ID.Value, Fields!Name.Value, "Product")
```

## <a name="see-also"></a>See also
 [Expression Uses in Reports (Report Builder and SSRS)](../../reporting-services/report-design/expression-uses-in-reports-report-builder-and-ssrs.md)
 [Expression Examples (Report Builder and SSRS)](../../reporting-services/report-design/expression-examples-report-builder-and-ssrs.md)
 [Data Types in Expressions (Report Builder and SSRS)](../../reporting-services/report-design/data-types-in-expressions-report-builder-and-ssrs.md)
 [Expression Scope for Totals, Aggregates, and Built-in Collections (Report Builder and SSRS)](../../reporting-services/report-design/expression-scope-for-totals-aggregates-and-built-in-collections.md)
| 58.221154 | 479 | 0.761189 | deu_Latn | 0.983913 |
4f19ab7966d7c639f2184a457bfad936a4ea268a | 6,448 | md | Markdown | windows-apps-src/design/style/writing-style.md | TakaSoap/windows-uwp.zh-cn | d1376dc760c7334dc3c6f1770008943adda60c21 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/design/style/writing-style.md | TakaSoap/windows-uwp.zh-cn | d1376dc760c7334dc3c6f1770008943adda60c21 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/design/style/writing-style.md | TakaSoap/windows-uwp.zh-cn | d1376dc760c7334dc3c6f1770008943adda60c21 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 写入样式
description: 要让应用的文本看上去与其设计自然融为一体,使用正确的语音和语气非常重要。
keywords: UWP, Windows 10, 文本, 编写, 语音, 语气, 设计, UI, UX
ms.date: 05/07/2018
ms.topic: article
ms.localizationpriority: medium
ms.custom: RS5
ms.openlocfilehash: f5f38fdfb3ba9fd32a6e06ec19cdbfb6c3b09f24
ms.sourcegitcommit: 789bfe3756c5c47f7324b96f482af636d12c0ed3
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 08/09/2019
ms.locfileid: "68867688"
---
# <a name="writing-style"></a>写入样式

错误消息的措词方式、帮助文档的编写方式,甚至为按钮选择的文本都会对应用的可用性产生重大影响。 写作风格可能对用户体验的好坏产生重要影响。
## <a name="voice-and-tone-principles"></a>语音和声调原则
调查显示用户对于友好、有用和简洁的写作风格反应最佳。 作为此研究的一部分,Microsoft 确定了三个所有内容均适用的语音和声调原则,它们是 Fluent Design 不可或缺的组成部分。
### <a name="be-warm-and-relaxed"></a>温暖、轻松
最重要的是不要吓跑用户。 要通俗易懂,不要使用他们不理解的术语。 即使有错误发生,也不要因为任何问题而责怪用户。 相反,你的应用应该承担责任并提供将用户操作放在首位的热情指导。
### <a name="be-ready-to-lend-a-hand"></a>随时准备提供帮助
撰写风格始终表达出感同身受。 专注于介绍正在进行的操作,并提供用户所需的信息,而不要向他们加载过多不必要的信息。 如果可能,应始终在出现问题时提供解决方案。
### <a name="be-crisp-and-clear"></a>简洁、清晰
大多数时候,文本并非应用的焦点。 它的目的是指导用户,告诉他们正在进行的操作以及他们接下来应该怎么做。 不要在撰写应用文本时忽略这一点,不要假定用户会阅读每个字。 使用你的受众熟悉的语言,并确保能够轻松地过眼即懂。
## <a name="lead-with-whats-important"></a>突出重点
用户需要能够快速阅读和了解你的文本。 不要将篇幅浪费在不必要的介绍上。 最大限度地提高关键字的可见性,并且在添加之前始终展现核心思想。
:::row:::
:::column:::
 选择**筛选器**为图像添加效果。
:::column-end:::
:::column:::
 如果想要为图像添加视觉效果或者其他效果,请选择**筛选器**。
:::column-end:::
:::row-end:::
## <a name="emphasize-action"></a>强调操作
应用是由操作定义的。 用户在使用应用时会进行操作,而应用在响应用户时也会进行操作。 确保整个应用中的文本均使用主动语态 。 用户和功能应描述为它们主动执行操作,而不是对它们执行了操作。
:::row:::
:::column:::
 重启应用以查看更改。
:::column-end:::
:::column:::
 应用重启后,会应用所做的更改。
:::column-end:::
:::row-end:::
## <a name="short-and-sweet"></a>简短扼要
用户快速阅读文本时,通常会完全跳过大块的文字。 请不要牺牲必要的信息和展示效果,但也不要使用不必要的字词。 有时候,这意味着可以使用许多更简短的句子或片段。 其他情况下,这意味着需要谨慎地选择较长句子的措辞和结构。
:::row:::
:::column:::
 我们无法上传图片。 如果再次发生此情况,请尝试重启应用。 但是,不用担心,你的图片将会等待你返回。
:::column-end:::
:::column:::
 出错,我们无法上传图片。 请重试。如果再次遇到此问题,可能需要重启应用。 但是,不用担心,我们已经将你的操作保存到本地,你返回来以后即可继续操作。
:::column-end:::
:::row-end:::
## <a name="style-conventions"></a>风格惯例
如果你认为自己不会成为一名作家,那么尝试实施这些原则和建议可能会具有挑战性。 但不用担心,使用简单直接的语言是提供良好用户体验的好方法。 如果你仍然不确定如何组织语言,这里有一些有用的指南以供参考。 如果要了解详细信息,请查看 [Microsoft 样式指南](https://docs.microsoft.com/style-guide/welcome/)。
### <a name="addressing-the-user"></a>称呼用户
直接与用户进行交流。
* 始终称呼用户为“你”。
* 用“我们”来指代你自己的角度。 这样既热情,又有助于让用户感觉像是属于体验的一部分。
* 不要使用“我”来指代应用的角度,即使你是唯一创建它的人。
:::row:::
:::column:::
 我们无法将你的文件保存到该位置。
:::column-end:::
:::column:::
:::column-end:::
:::row-end:::
### <a name="abbreviations"></a>缩写词
当你需要在整个应用中多次提及产品、地点或技术概念时,缩写词可能会很有用。 它们可以节省空间,并且感觉上更加自然,前提是只要用户能够理解它们。
* 请勿假定用户已经熟悉了任何的缩写词,即使你认为它们很常见。
* 始终在用户第一次看到新的缩写词时定义它的含义。
* 请勿使用过于相似的缩写词。
* 如果你正在本地化自己的应用,或者如果你的用户将英语作为第二语言,请勿使用缩写。
:::row:::
:::column:::
 通用 Windows 平台 (UWP) 设计指南可帮助你设计和构建美观、优化的应用。 通过每个 UWP 应用中包含的设计功能,你可以构建适用于一系列设备的用户界面 (UI)。
:::column-end:::
:::column:::
:::column-end:::
:::row-end:::
### <a name="contractions"></a>语言简练
人们习惯于使用简练的语言,并且很乐意阅读简练的文字。 如果语言不够简练,你的应用会看起来过于正式,甚至是生硬。
* 当简练的语言能够自然融入到文本中时,请使用它们。
* 不要只是为了节省空间或者当这些简练的语言会让你的文字听起来拗口的情况下使用。
:::row:::
:::column:::
 喜欢这个图像的话,选择“保存” 即可将其添加到库中。 这样便可以与好友共享这个图像了。
:::column-end:::
:::column:::
:::column-end:::
:::row-end:::
### <a name="periods"></a>句号
用句号结尾的文本意味着该文本是一个完整的句子。 将句号用于较大的文本块,并且避免用于比一个完整句子较短的文本。
* 将句号用作工具提示、错误消息和对话中完整句子的结尾。
* 请勿将句号用作按钮、单选按钮、标签或复选框文本的结尾。
:::row:::
:::column:::

<b>没有连接。</b>
* 请检查网络电缆是否已插入。
* 请确保没有处于飞行模式下。
* 请查看无线开关是否已打开。
* 请重启路由器。
:::column-end:::
:::column:::
:::column-end:::
:::row-end:::
### <a name="capitalization"></a>大写
尽管大写字母很重要,但容易过度使用。
* 大写专有名词。
* 大写应用中文本的每个字符串的开头:每个句子、标签和标题的开头。
:::row:::
:::column:::

<b>哪部分遇到了问题?</b>
* 我忘记了密码。
* 它不接受密码。
* 其他人可能正在使用我的帐户。
:::column-end:::
:::column:::
:::column-end:::
:::row-end:::
## <a name="error-messages"></a>错误消息
当应用中出现问题时,用户会注意到这一点。 由于用户在遇到错误消息时可能会感到困惑或懊恼,因此良好的语音和语气可能会在这方面产生特别重要的影响。
最重要的是,错误消息不能责怪用户。 但是,同样重要的是,不要用他们不理解的信息使他们感到不知所措。 大多数时间,遇到错误的用户只是想尽快轻松地返回到他们之前正在进行的操作。 因此,编写的任何错误消息应该:
* 温暖、轻松 :使用谈话的口吻,避免使用陌生的术语和技术行话。
* 随时准备提供帮助 :尽可能告诉用户出现了什么问题,告知他们将会发生什么,并提供他们能够实施的可行解决方案。
* 简洁、清晰 :去除无关的信息。
:::row:::
:::column:::

<b>没有连接。</b>
* 请检查网络电缆是否已插入。
* 请确保没有处于飞行模式下。
* 请查看无线开关是否已打开。
* 请重启路由器。
:::column-end:::
:::column:::
:::column-end:::
:::row-end:::
## <a name="dialogs"></a>对话框
:::row:::
:::column:::
为应用中的任何对话创建文本时,许多适用于编写错误消息的建议也同样适用。 尽管用户期待对话,但这些对话仍然会中断应用的正常进程;并且这些对话需要简洁有用,以便用户可以回到他们之前正在进行的操作。
但最重要的是对话标题和其按钮之间的“调用和响应”。 请确保按钮是标题所提及问题的明确答案,并且格式在应用中保持一致。
:::column-end:::
:::column:::

<b>哪部分遇到了问题?</b>
1. 我忘记了密码
2. 它不接受我的密码
3. 其他人可能正在使用我的帐户
:::column-end:::
:::row-end:::
## <a name="buttons"></a>按钮
:::row:::
:::column:::
按钮上的文本需要足够简洁,让用户可以一目了然;并且要足够清楚,让该按钮的功能显而易见。 按钮上最长的文本应该只是简短的几个字词,许多文本应该都比这更短。
在编写按钮的文本时,请记住,每个按钮代表一个操作。 请务必在按钮文本中使用主动语态 ,使用代表操作而非反应的词。
:::column-end:::
:::column:::

* 立即安装
* Share
:::column-end:::
:::row-end:::
## <a name="spoken-experiences"></a>口语体验
在为 Cortana 等口语体验编写文本时,相同的通用原则和建议也适用。 在这些功能中,良好的写作原则甚至更为重要,因为你无法为用户提供其他视觉设计元素来补充说出的字词。
* 温暖、轻松 :以谈话的口吻与用户进行互动。 比其他任何方面都重要的是,要让口语体验听上去热情且平易近人,并且使用户敢于与之进行交谈。
* 随时准备提供帮助 :当用户询问无法做到的要求时,请提供替代建议。 就像在错误消息中一样,如果出现问题并且应用无法满足请求,它应该为用户提供一个可以尝试进行询问的可行选择。
* 简洁、清晰 :保持语言的简单。 口语体验中不适合使用长句或复杂的字词。
## <a name="accessibility-and-localization"></a>辅助功能和本地化
如果在编写文本时考虑到辅助功能和本地化,你的应用可能会接触到更多的受众。 这是仅通过文本无法达成的,尽管简单友好的语言是一个很好的开始。 有关详细信息,请参阅[辅助功能概述](https://docs.microsoft.com/windows/uwp/design/accessibility/accessibility-overview)和[本地化指南](https://docs.microsoft.com/windows/uwp/design/globalizing/globalizing-portal)。
* 随时准备提供帮助 :考虑到不同的体验。 避免使用可能对国际受众无意义的短语,并且请勿使用假设用户可以和不可以做什么的字词。
* 简洁、清晰 :避免使用不必要的特殊和专业词汇。 文本越简单,就越容易进行本地化。
## <a name="techniques-for-non-writers"></a>适合非作家的技巧
无需成为训练有素或经验丰富的作家即可为用户提供良好的体验。 选择那些对你来说听起来舒服的字词,它们同样也会为其他人带来舒适感。 但有时,这并没有听上去的那么容易。 如果遇到问题,这些技巧可以帮助你。
* 想象一下,你正在和一位朋友谈论你的应用。 如何向他们介绍此应用? 如何介绍其功能或如何为他们提供说明? 最好向尚未使用过的真实用户介绍此应用。
* 想象一下你会如何描述一个完全不同的应用。 例如,如果你正在编写游戏应用,想象一下该如何用语言或文字来介绍一款财务或新闻应用。 通过对比所使用的语言和结构,可以让更深入地了解适用于你所编写应用的正确字词。
* 关注类似的应用以获取灵感。
找到正确的字词是许多人纠结的一个问题,因此,难以确定最自然的字词时也不要感到沮丧。
| 25.895582 | 258 | 0.717122 | yue_Hant | 0.521784 |
4f19c98f6c7fb7e59a50fd8c7d5bd58721a5d175 | 2,329 | md | Markdown | samples/cl_hot_kernels/README.md | inteI-cloud/pti-gpu | df968c95687f15f871c9323d9325211669487bd2 | [
"MIT"
] | 1 | 2021-04-14T16:29:09.000Z | 2021-04-14T16:29:09.000Z | samples/cl_hot_kernels/README.md | inteI-cloud/pti-gpu | df968c95687f15f871c9323d9325211669487bd2 | [
"MIT"
] | null | null | null | samples/cl_hot_kernels/README.md | inteI-cloud/pti-gpu | df968c95687f15f871c9323d9325211669487bd2 | [
"MIT"
] | null | null | null | # OpenCL(TM) Hot Functions
## Overview
This sample is a simple LD_PRELOAD based tool that allows to collect OpenCL(TM) kernels within an application along with their total execution time and call count.
As a result, table like the following will be printed.
```
=== Device Timing Results: ===
Total Execution Time (ns): 370767821
Total Device Time for CPU (ns): 0
Total Device Time for GPU (ns): 174828332
== GPU Backend: ==
Kernel, Calls, SIMD, Time (ns), Time (%), Average (ns), Min (ns), Max (ns)
GEMM, 4, 32, 174828332, 100.00, 43707083, 43329166, 44306250
```
## Supported OS
- Linux
- Windows (*under development*)
## Prerequisites
- [CMake](https://cmake.org/) (version 3.12 and above)
- [Git](https://git-scm.com/) (version 1.8 and above)
- [Python](https://www.python.org/) (version 2.7 and above)
- [OpenCL(TM) ICD Loader](https://github.com/KhronosGroup/OpenCL-ICD-Loader)
- [Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver](https://github.com/intel/compute-runtime) to run on GPU
- [Intel(R) Xeon(R) Processor / Intel(R) Core(TM) Processor (CPU) Runtimes](https://software.intel.com/en-us/articles/opencl-drivers#cpu-section) to run on CPU
## Build and Run
### Linux
Run the following commands to build the sample:
```sh
cd <pti>/samples/cl_hot_kernels
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make
```
Use this command line to run the tool:
```sh
./cl_hot_kernels <target_application>
```
One may use [cl_gemm](../cl_gemm) or [dpc_gemm](../dpc_gemm) as target application:
```sh
./cl_hot_kernels ../../cl_gemm/build/cl_gemm
./cl_hot_kernels ../../dpc_gemm/build/dpc_gemm cpu
```
### Windows
Use Microsoft* Visual Studio x64 command prompt to run the following commands and build the sample:
```sh
cd <pti>\samples\cl_hot_kernels
mkdir build
cd build
cmake -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_LIBRARY_PATH=<opencl_icd_lib_path> ..
nmake
```
Use this command line to run the tool:
```sh
cl_hot_kernels.exe <target_application>
```
One may use [cl_gemm](../cl_gemm) or [dpc_gemm](../dpc_gemm) as target application:
```sh
cl_hot_kernels.exe ..\..\cl_gemm\build\cl_gemm.exe
cl_hot_kernels.exe ..\..\dpc_gemm\build\dpc_gemm.exe cpu
``` | 35.287879 | 163 | 0.69386 | eng_Latn | 0.604316 |
4f19d56cde1c1394d9bb2a955b25229fc5c2d678 | 636 | md | Markdown | labs/dragon/Specification.md | sabertazimi/cpp-primer | 7efd433e251bf9a03d765cd9adbc3de5eb538c75 | [
"MIT"
] | null | null | null | labs/dragon/Specification.md | sabertazimi/cpp-primer | 7efd433e251bf9a03d765cd9adbc3de5eb538c75 | [
"MIT"
] | 1 | 2021-09-12T16:09:20.000Z | 2021-09-12T16:09:20.000Z | labs/dragon/Specification.md | sabertazimi/cpp-primer | 7efd433e251bf9a03d765cd9adbc3de5eb538c75 | [
"MIT"
] | 1 | 2018-09-26T14:48:23.000Z | 2018-09-26T14:48:23.000Z | # Dragon Language Specification
## Declaration
```java
// defination section(without initialization)
...
// initialization section
...
// other sections
...
```
## Variable
### String
只存在字符串常量
### Array
```java
int[][] arr;
arr = new int[][10];
```
## Function
```java
// function definition
int sayHelloFunc = (void) => {
Print("Hello World!\n");
return 0;
};
```
## Scope
* A local variable may not share a name with a function parameter
* A local scope ({}, including a function definition) may not contain duplicate variable names
* Different local scopes have different lifetimes
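A hypothetical snippet illustrating these scope rules (Dragon-style pseudocode for this toy language, not taken from the official spec):

```java
int f = (int x) => {
    // int x;    // illegal: a local variable may not reuse a parameter name
    {
        int y;   // a new inner scope may introduce y
    }
    {
        int y;   // legal again: the previous y's lifetime has already ended
    }
    return 0;
};
```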
## OOP
* All varFields are private
* All methodFields are public
* Method overloading is not supported
* Method overriding is supported
* All methods are virtual
```java
Computer pc;
pc = new Mac();
pc.crash(); // => "Mac crash!"
```
# GitHub Host Changes
## Special note
This method is not stable, but at least it sometimes works.
# Step 1
This mainly addresses GitHub loading slowly or failing to open.
GitHub site IP lookup: [▷ GitHub.com : GitHub: Where the world builds software · GitHub](https://github.com.ipaddress.com/)
GitHub domain lookup: [▷ github.global.ssl.Fastly.net Website statistics and traffic analysis | Fastly | fastly.net](https://fastly.net.ipaddress.com/github.global.ssl.fastly.net)
GitHub static assets IP lookup: [▷ assets-cdn.Github.com Website statistics and traffic analysis | Github | github.com](https://github.com.ipaddress.com/assets-cdn.github.com)
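The addresses returned by the lookup sites then go into your hosts file (on Windows: `C:\Windows\System32\drivers\etc\hosts`). The IPs below are placeholders only; substitute the values you actually looked up:

```
# C:\Windows\System32\drivers\etc\hosts  (placeholder IPs)
xx.xx.xx.xx   github.com
xx.xx.xx.xx   github.global.ssl.fastly.net
xx.xx.xx.xx   assets-cdn.github.com
```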
# Step 2: Flush the DNS cache
```
ipconfig /flushdns
```
---
title: Introduction to IP flow verify in Azure Network Watcher | Microsoft Docs
description: This page provides an overview of the Network Watcher IP flow verify capability
services: network-watcher
documentationcenter: na
author: damendo
ms.service: network-watcher
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 11/30/2017
ms.author: damendo
ms.openlocfilehash: 69aca5e0901a0da8aa98fe310ac220898bf650b2
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 03/27/2020
ms.locfileid: "76845011"
---
# <a name="introduction-to-ip-flow-verify-in-azure-network-watcher"></a>Introduction to IP flow verify in Azure Network Watcher
IP flow verify checks whether a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen, IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.
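For reference, the same check can be issued from the Azure CLI with the `az network watcher test-ip-flow` command (the resource group, VM name, and IP values below are placeholders):

```
az network watcher test-ip-flow \
  --resource-group MyResourceGroup \
  --vm MyVm \
  --direction Inbound \
  --protocol TCP \
  --local 10.0.0.4:80 \
  --remote 203.0.113.7:60000
```

The command reports whether the flow is allowed or denied and, when denied, the name of the NSG rule responsible.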
IP flow verify looks at the rules of all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming whether a Network Security Group rule is blocking ingress or egress traffic to or from a virtual machine.
An instance of Network Watcher needs to be created in every region where you plan to run IP flow verify. Network Watcher is a regional service and can only be run against resources in the same region. The instance used does not affect the results of IP flow verify, as any route associated with the NIC or subnet is still returned.
![1][1]
## <a name="next-steps"></a>Next steps
Visit the following article to learn whether a packet is allowed or denied for a specific virtual machine through the portal. [Check whether traffic is allowed to a VM with IP flow verify using the portal](diagnose-vm-network-traffic-filtering-problem.md)
[1]: ./media/network-watcher-ip-flow-verify-overview/figure1.png
4f1bba18afb1ca2e93c00adadd95eba4512d13f3 | 984 | md | Markdown | doc/slice/iter/Split.md | danieleades/bitvec | 7d31e867c49e8de18658699fcb1361886b91b9bf | [
"MIT"
] | 366 | 2018-07-12T12:59:27.000Z | 2020-12-22T13:21:42.000Z | doc/slice/iter/Split.md | danieleades/bitvec | 7d31e867c49e8de18658699fcb1361886b91b9bf | [
"MIT"
] | 92 | 2018-10-22T17:05:20.000Z | 2020-12-24T12:32:36.000Z | doc/slice/iter/Split.md | danieleades/bitvec | 7d31e867c49e8de18658699fcb1361886b91b9bf | [
"MIT"
] | 43 | 2018-07-02T02:45:28.000Z | 2020-12-21T20:50:59.000Z | # Shared Bit-Slice Splitting
This iterator yields successive non-overlapping segments of a bit-slice,
separated by bits that match a predicate function. Splitting advances one
segment at a time, starting at the beginning of the bit-slice.
The matched bit is **not** included in the yielded segment.
It is created by the [`BitSlice::split`] method.
## Original
[`slice::Split`](core::slice::Split)
## API Differences
The predicate function receives both the index within the bit-slice, as well as
the bit value, in order to allow the predicate to have more than one bit of
information when splitting.
## Examples
```rust
use bitvec::prelude::*;
let bits = bits![0, 0, 0, 1, 1, 1, 0, 1];
let mut split = bits.split(|idx, _bit| idx % 3 == 2);
assert_eq!(split.next().unwrap(), bits![0; 2]);
assert_eq!(split.next().unwrap(), bits![1; 2]);
assert_eq!(split.next().unwrap(), bits![0, 1]);
assert!(split.next().is_none());
```
[`BitSlice::split`]: crate::slice::BitSlice::split
| 27.333333 | 79 | 0.707317 | eng_Latn | 0.981194 |
4f1d77dfabc9252d0703a7932ae2640811b74579 | 2,527 | md | Markdown | content/photography.md | cjbarker/cjbarker.com | 9c2dca09ed0cd30222ab229236152074442e4262 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | content/photography.md | cjbarker/cjbarker.com | 9c2dca09ed0cd30222ab229236152074442e4262 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | content/photography.md | cjbarker/cjbarker.com | 9c2dca09ed0cd30222ab229236152074442e4262 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | +++
type = "single"
title = "Photography"
description = "Point and Shoot"
tags = ["photography", "photos", "images"]
+++
### Photography by CJ Barker ###
I am an avid photographer who is more interested in capturing an image and the moment than obsessing about hardware specs. I simply enjoy taking photos.
<div class="row">
<div class="column">
<a href="/photography/trees.jpg" target="photos"><img src="/photography/trees.jpg" alt="Trees" /></a>
<a href="/photography/seagulls.jpg" target="photos"><img src="/photography/seagulls.jpg" alt="Sea Gulls" /></a>
<a href="/photography/sf_air.jpg" target="photos"><img src="/photography/sf_air.jpg" alt="San Francisco from the Sky" /></a>
<a href="/photography/shasta_lake.jpg" target="photos"><img src="/photography/shasta_lake.jpg" alt="Shasta Lake" /></a>
</div>
<div class="column">
<a href="/photography/seattle_bubble.jpg" target="photos"><img src="/photography/seattle_bubble.jpg" alt="Seattle Bubble" /></a>
<a href="/photography/utah.jpg" target="photos"><img src="/photography/utah.jpg" alt="Utah Mtn" /></a>
<a href="/photography/iceland.jpg" target="photos"><img src="/photography/iceland.jpg" alt="Icelandic Anchor" /></a>
<a href="/photography/seaside.jpg" target="photos"><img src="/photography/seaside.jpg" alt="SeaSide, OR" /></a>
</div>
<div class="column">
<a href="/photography/salmon.jpg" target="photos"><img src="/photography/salmon.jpg" alt="Salmon" /></a>
<a href="/photography/drums_abbey.jpg" target="photos"><img src="/photography/drums_abbey.jpg" alt="Drumset at Abbey Road Studios, London, EN" /></a>
<a href="/photography/seattle_buds.jpg" target="photos"><img src="/photography/seattle_buds.jpg" alt="Flower Buds at Kingstreet Stations in Seattle, WA" /></a>
<a href="/photography/wentachee.jpg" target="photos"><img src="/photography/wentachee.jpg" alt="Lake Wentachee, WA" /></a>
</div>
<div class="column">
<a href="/photography/greenland.jpg" target="photos"><img src="/photography/greenland.jpg" alt="Airplane Engine over Greenland" /></a>
<a href="/photography/seven_lake_basin.jpg" target="photos"><img src="/photography/seven_lake_basin.jpg" alt="Seven Lakes Basin, Olypic Pennisula, WA" /></a>
<a href="/photography/castle_rock.jpg" target="photos"><img src="/photography/castle_rock.jpg" alt="Castle Rock" /></a>
<a href="/photography/german_alps.jpg" target="photos"><img src="/photography/german_alps.jpg" alt="The Alps in Germany" /></a>
</div>
</div>
| 64.794872 | 163 | 0.691729 | eng_Latn | 0.126856 |
4f1fbe97589efa0f1cbda50cff2ed48ea05a46d5 | 1,305 | md | Markdown | CHAPTER 8 - Evaluation and Optimization/Evaluation and Optimization Lab/README.md | ipetel/Streaming_Data_Collection_Lab | 855bde873ee06f826c64453d03176ef6f3001fd1 | [
"Apache-2.0"
] | null | null | null | CHAPTER 8 - Evaluation and Optimization/Evaluation and Optimization Lab/README.md | ipetel/Streaming_Data_Collection_Lab | 855bde873ee06f826c64453d03176ef6f3001fd1 | [
"Apache-2.0"
] | null | null | null | CHAPTER 8 - Evaluation and Optimization/Evaluation and Optimization Lab/README.md | ipetel/Streaming_Data_Collection_Lab | 855bde873ee06f826c64453d03176ef6f3001fd1 | [
"Apache-2.0"
] | null | null | null | # Evaluation and Optimization Lab
Notice that this Lab is part B of <a href="https://github.com/ipetel/AWS_Certified_ML-acloud.guru/tree/master/CHAPTER%207%20-%20Algorithms/Algorithms%20Lab">Algorithms Lab</a>
**Use Case**
Mr. K has given us the go to use the Linear Learner model for identifying the legitimacy of a reported UFO sighting. He plans to send out a team of scientist to any reported sighting predicted probable or unexplained. Before deploying the model, he wants us to make sure we have the most optimized model, improve any performance in training, and possibly improve accuracy.
**Goal**
Tune our model to find the most optimized model for our problem.
**My Solution**
I have created an extended solution from the official one, in my solution I have extended explanation for every step<br>
read the notebook for additional explanation.<br>
hope you will find this notebook useful.<br>
**Note**
1. in most of code cell that I'm using external libraries I'm importing it every time, although I can import all of them once on the start of the notebook, I wanted to show what libraries I'm using on every step.
2. there are numerous times that I'm assigning the same variables with the same values on different cells, again just want to to show what variables are used on every step.
| 59.318182 | 372 | 0.784674 | eng_Latn | 0.999199 |
4f20aa5006263db164212bb8bb76ca9603acd14d | 8,485 | md | Markdown | curriculum/challenges/portuguese/10-coding-interview-prep/data-structures/use-depth-first-search-in-a-binary-search-tree.md | fcastillo-serempre/freeCodeCamp | 43496432d659bac8323ab2580ba09fa7bf9b73f2 | [
"BSD-3-Clause"
] | 172,317 | 2017-01-11T05:26:18.000Z | 2022-03-31T23:30:16.000Z | curriculum/challenges/portuguese/10-coding-interview-prep/data-structures/use-depth-first-search-in-a-binary-search-tree.md | fcastillo-serempre/freeCodeCamp | 43496432d659bac8323ab2580ba09fa7bf9b73f2 | [
"BSD-3-Clause"
] | 26,252 | 2017-01-11T06:19:09.000Z | 2022-03-31T23:18:31.000Z | curriculum/challenges/portuguese/10-coding-interview-prep/data-structures/use-depth-first-search-in-a-binary-search-tree.md | fcastillo-serempre/freeCodeCamp | 43496432d659bac8323ab2580ba09fa7bf9b73f2 | [
"BSD-3-Clause"
] | 27,418 | 2017-01-11T06:31:22.000Z | 2022-03-31T20:44:38.000Z | ---
id: 587d8257367417b2b2512c7e
title: Usar busca em profundidade em uma árvore binária de busca
challengeType: 1
forumTopicId: 301719
dashedName: use-depth-first-search-in-a-binary-search-tree
---
# --description--
Sabemos como procurar por um valor específico em uma árvore binária. Mas e se quisermos pesquisar a árvore inteira? E se não tivermos uma árvore ordenada e precisarmos simplesmente pesquisar por um valor? Aqui, vamos introduzir alguns métodos de travessia que podem ser usados para pesquisar essa estrutura de dados. O primeiro método será a busca em profundidade. Na busca em profundidade, uma determinada subárvore é pesquisada o mais profundamente possível antes da busca continuar para outra subárvore. Podemos realizar essa busca de três formas: Em ordem: começa a pesquisa no nó mais à esquerda e termina no nó mais à direita. Pré-ordem: pesquisa todas as raízes antes das folhas. Pós-ordem: pesquisa todas as folhas antes das raízes. Como você pode imaginar, você pode escolher métodos de busca diferentes, dependendo dos dados que sua árvore armazena e do que você está procurando. Em uma árvore binária de busca, uma travessia de ordem retorna os nós de forma ordenada.
# --instructions--
Aqui, vamos usar estes três métodos de pesquisa na nossa árvore binária de busca. A busca em profundidade é uma operação inerentemente recursiva que continua a pesquisar mais subárvores enquanto existirem nós filhos. Uma vez que você entende este conceito básico, você pode simplesmente reorganizar a ordem da pesquisa nos nós e nas subárvores para produzir qualquer uma das três buscas. Por exemplo, na busca de pós-ordem, a pesquisa deve, recursivamente, ir até o nó da folha antes de retornar qualquer um dos nós em si. Por outro lado, na busca de pré-ordem, a pesquisa deve retornar os nós primeiro e depois continuar a pesquisa pela árvore. Use os métodos em ordem (`inorder`), pré-ordem (`preorder`) e pós-ordem (`postorder`) na nossa árvore. Cada um desses métodos deve retornar um array de itens que representa a travessia da árvore. Certifique-se de retornar os valores numéricos em cada nó do array, não os nós em si. Por fim, retorne `null` se a árvore estiver vazia.
# --hints--
A estrutura de dados `BinarySearchTree` deve existir.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
}
return typeof test == 'object';
})()
);
```
A árvore binária de busca deve ter um método chamado `inorder`.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
return typeof test.inorder == 'function';
})()
);
```
A árvore binária de busca deve ter um método chamado `preorder`.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
return typeof test.preorder == 'function';
})()
);
```
A árvore binária de busca deve ter um método chamado `postorder`.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
return typeof test.postorder == 'function';
})()
);
```
O método `inorder` deve retornar um array com os valores de cada nó.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
if (typeof test.inorder !== 'function') {
return false;
}
test.add(7);
test.add(1);
test.add(9);
test.add(0);
test.add(3);
test.add(8);
test.add(10);
test.add(2);
test.add(5);
test.add(4);
test.add(6);
return test.inorder().join('') == '012345678910';
})()
);
```
O método `preorder` deve retornar um array com os valores de cada nó.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
if (typeof test.preorder !== 'function') {
return false;
}
test.add(7);
test.add(1);
test.add(9);
test.add(0);
test.add(3);
test.add(8);
test.add(10);
test.add(2);
test.add(5);
test.add(4);
test.add(6);
return test.preorder().join('') == '710325469810';
})()
);
```
O método `postorder` deve retornar um array com os valores de cada nó.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
if (typeof test.postorder !== 'function') {
return false;
}
test.add(7);
test.add(1);
test.add(9);
test.add(0);
test.add(3);
test.add(8);
test.add(10);
test.add(2);
test.add(5);
test.add(4);
test.add(6);
return test.postorder().join('') == '024653181097';
})()
);
```
O método `inorder` deve retornar `null` quando a árvore estiver vazia.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
if (typeof test.inorder !== 'function') {
return false;
}
return test.inorder() == null;
})()
);
```
O método `preorder` deve retornar `null` quando a árvore estiver vazia.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
if (typeof test.preorder !== 'function') {
return false;
}
return test.preorder() == null;
})()
);
```
O método `postorder` deve retornar `null` quando a árvore estiver vazia.
```js
assert(
(function () {
var test = false;
if (typeof BinarySearchTree !== 'undefined') {
test = new BinarySearchTree();
} else {
return false;
}
if (typeof test.postorder !== 'function') {
return false;
}
return test.postorder() == null;
})()
);
```
# --seed--
## --after-user-code--
```js
BinarySearchTree.prototype = Object.assign(
BinarySearchTree.prototype,
{
add: function(value) {
function searchTree(node) {
if (value < node.value) {
if (node.left == null) {
node.left = new Node(value);
return;
} else if (node.left != null) {
return searchTree(node.left);
}
} else if (value > node.value) {
if (node.right == null) {
node.right = new Node(value);
return;
} else if (node.right != null) {
return searchTree(node.right);
}
} else {
return null;
}
}
var node = this.root;
if (node == null) {
this.root = new Node(value);
return;
} else {
return searchTree(node);
}
}
}
);
```
## --seed-contents--
```js
var displayTree = tree => console.log(JSON.stringify(tree, null, 2));
function Node(value) {
this.value = value;
this.left = null;
this.right = null;
}
function BinarySearchTree() {
this.root = null;
// Only change code below this line
// Only change code above this line
}
```
# --solutions--
```js
var displayTree = tree => console.log(JSON.stringify(tree, null, 2));
function Node(value) {
this.value = value;
this.left = null;
this.right = null;
}
function BinarySearchTree() {
this.root = null;
this.result = [];
this.inorder = function(node) {
if (!node) node = this.root;
if (!node) return null;
if (node.left) this.inorder(node.left);
this.result.push(node.value);
if (node.right) this.inorder(node.right);
return this.result;
};
this.preorder = function(node) {
if (!node) node = this.root;
if (!node) return null;
this.result.push(node.value);
if (node.left) this.preorder(node.left);
if (node.right) this.preorder(node.right);
return this.result;
};
this.postorder = function(node) {
if (!node) node = this.root;
if (!node) return null;
if (node.left) this.postorder(node.left);
if (node.right) this.postorder(node.right);
this.result.push(node.value);
return this.result;
};
}
```
| 25.790274 | 978 | 0.6264 | por_Latn | 0.918229 |
[![deps][deps]][deps-url]
## Install
```bash
npm i -D @fyzu/logger
```
> ⚠️ We do not recommend installing this module globally
## Usage
```js
const createLogger = require('@fyzu/logger')
const logger = createLogger({ name: 'wds' })
logger.info('Server Starting')
```
## Options
|Name|Type|Default|Description|
|:--:|:--:|:-----:|:----------|
|[**`name`**](#name)|`{String}`|`'<unknown>'`|Log Name (**Required**)|
|[**`level`**](#level)|`{String}`|`'info'`|Log Level|
|[**`unique`**](#unique)|`{Boolean}`|`true`|Log Uniqueness|
|[**`timestamp`**](#timestamp)|`{Boolean}`|`false`|Log Timestamps|
### `name`
Specifies the name of the log to create. **This option is required**, and is used to differentiate between loggers when `@fyzu/logger` is used in multiple projects executing in the same process.
```js
const logger = createLogger({ name: 'wds' })
```
### `level`
Specifies the level the logger should use. A logger will not produce output for
any log level _beneath_ the specified level. Available levels and order are:
```js
[
'info',
'warn',
'error',
'trace',
'debug',
'silent'
]
```
```js
const logger = createLogger({ level: 'error' })
logger.error(err)
```
> ℹ️ The level names shown above correspond to the available logging methods,
with the notable exception of the `silent` level
### `timestamp`
If `true`, instructs the logger to display a timestamp for log output, preceding all other data.
```js
const logger = createLogger({ timestamp: true })
```
[deps]: https://david-dm.org/fyzu/logger.svg
[deps-url]: https://david-dm.org/fyzu/logger
# 📍 `useQueryParams`
Gets and sets query params
## Usage
```js
import { useQueryParams } from 'react-recipes';
function App() {
const { getParams, setParams } = useQueryParams();
const params = getParams();
return (
<div>
<button
onClick={() => {
setParams({ page: 1, order: 'ASC' });
}}
>
Set Params
</button>
<button
onClick={() => {
setParams({});
}}
>
Clear params
</button>
{Object.entries(params).map(([paramKey, paramValue]) => (
<p>
{paramKey}: {paramValue}
</p>
))}
</div>
);
}
```
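Under the hood, a hook like this typically wraps `URLSearchParams`. A framework-free sketch of the read half (illustrative only, not the library's actual implementation; in the real hook this would read `window.location` and trigger re-renders):

```js
// Parse a location.search string into a plain object of query params.
function getParamsFrom(search) {
  const params = {};
  for (const [key, value] of new URLSearchParams(search)) {
    params[key] = value;
  }
  return params;
}

const parsed = getParamsFrom('?page=1&order=ASC');
// parsed is { page: '1', order: 'ASC' }
```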
---
title: Create SharePoint features | Microsoft Docs
description: Create a SharePoint feature to group related SharePoint project items for easier deployment. Add features to the SharePoint solution. Use the feature designer.
ms.custom: SEO-VS-2020
ms.date: 02/02/2017
ms.topic: conceptual
dev_langs:
- VB
- CSharp
helpviewer_keywords:
- SharePoint development in Visual Studio, features
- features [SharePoint development in Visual Studio]
author: John-Hart
ms.author: johnhart
manager: jmartens
ms.technology: sharepoint-development
ms.workload:
- office
ms.openlocfilehash: 4eaf4a9541834215c0c66aed7bb9271faa79a1e3
ms.sourcegitcommit: b12a38744db371d2894769ecf305585f9577792f
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/13/2021
ms.locfileid: "126635648"
---
# <a name="create-sharepoint-features"></a>Create SharePoint features
You can use a SharePoint feature to group related SharePoint project items for easier deployment. You can create features, set their scopes, and mark other features as dependencies by using the SharePoint feature designer. The designer also generates a manifest, an XML file that describes each feature.
## <a name="add-features-to-the-sharepoint-solution"></a>Add features to the SharePoint solution
You can add a feature to the SharePoint solution by using Solution Explorer or the Packaging Explorer. You can use either of the following methods to add a feature.
- In **Solution Explorer**, open the shortcut menu for **Features**, and then choose **Add Feature**.
- In the **Packaging Explorer**, open the shortcut menu for the package, and then choose **Add Feature**.
## <a name="using-the-feature-designer"></a>Using the feature designer
A SharePoint solution can contain one or more SharePoint features, which are grouped under the Features node in Solution Explorer. Each feature has its own **Feature Designer** that you can use to customize the feature's properties. For more information, see [How to: Customize a SharePoint feature](../sharepoint/how-to-customize-a-sharepoint-feature.md). To distinguish features from one another, you can configure feature properties such as the title, description, version, and scope.
### <a name="feature-designer-options"></a>Feature designer options
After you create a feature, you can use the feature designer to customize it.
The following table describes the feature properties that appear in the feature designer.
|Property|Description|
|--------------|-----------------|
|Title|Optional. The default title of the feature is set to *SolutionName* *FeatureName*.|
|Description|Optional. A description of the SharePoint feature.|
|Scope|Required. If a feature is created by using **Solution Explorer**, the scope is set to Web by default.<br /><br /> - Farm: activates a feature for an entire server farm.<br /><br /> - Site: activates a feature for all websites in a site collection.<br /><br /> - Web: activates a feature for a specific website.<br /><br /> - WebApplication: activates a feature for all websites in a web application.|
|Items in the Solution|All the SharePoint items that can be added to the feature.|
|Items in the Feature|The SharePoint project items that have been added to the feature.|
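The manifest the designer generates follows the SharePoint Feature.xml schema. A minimal hand-written sketch (the Id, titles, and element location below are placeholders, not output from a real project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="00000000-0000-0000-0000-000000000000"
         Title="MySolution Feature1"
         Description="Deploys the project items for this feature."
         Scope="Web"
         Version="1.0.0.0">
  <ElementManifests>
    <ElementManifest Location="ListInstance1\Elements.xml" />
  </ElementManifests>
</Feature>
```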
## <a name="add-and-remove-sharepoint-project-items"></a>Add and remove SharePoint project items
You can select which SharePoint project items you want to add to a SharePoint feature for deployment. Use the **Feature Designer** to add and remove items in features, and to view the feature manifest. For more information, see [How to: Add and remove items in SharePoint features](../sharepoint/how-to-add-and-remove-items-to-sharepoint-features.md).
## <a name="add-feature-dependencies"></a>Add feature dependencies
You can configure the feature manifest so that the SharePoint server activates certain features before your feature is activated. For example, if your SharePoint feature depends on the functionality or data of other features, the SharePoint server can try to activate the features that your feature depends on. For more information, see [How to: Add and remove feature dependencies](../sharepoint/how-to-add-and-remove-feature-dependencies.md).
## <a name="see-also"></a>See also
- [How to: Customize a SharePoint feature](../sharepoint/how-to-customize-a-sharepoint-feature.md)
- [How to: Add and remove items in SharePoint features](../sharepoint/how-to-add-and-remove-items-to-sharepoint-features.md)
- [How to: Add and remove feature dependencies](../sharepoint/how-to-add-and-remove-feature-dependencies.md)
---
layout: post
title: VCO Controller for Repumper Sidebands
---
Documentation coming soon! Find the EAGLE files here:
* <a href="https://github.com/m-k-S/openMOT/blob/master/eda/eagle/VCORepumperSideband/VCO.brd">.brd file</a>
* <a href="https://github.com/m-k-S/openMOT/blob/master/eda/eagle/VCORepumperSideband/VCO.sch">.sch file</a>
<figure style="display: inline-block;
margin-left: auto;
margin-right: auto;
width: 40%;">
<img src="{{site.url}}/static/projects/mot/vco-sch.png"/>
</figure>
<figure style="display: inline-block;
margin-left: auto;
margin-right: auto;
width: 40%;">
<img src="{{site.url}}/static/projects/mot/vco-brd.png"/>
</figure>
---
title: MobX Blog-11 (Layout)
subTitle: Mobx-Blog (Frontend-2)
category: "Blog Making"
cover: logo.jpg
---
## Page layout
Let's do some light styling. We'll be using reactstrap anyway, so I'll only touch up the simple bits. (Not that I'm much good at it anyway, haha)
```js
- file: /mobx-blog/client/src/scss/Header.scss
.header {
.dropdown-item:hover {
background: #0275d8;
color: white;
}
}
```
```js
- file: /mobx-blog/client/src/scss/Footer.scss
.footer {
background: #292b2c;
height: 3rem;
color: white;
font-size: 1.2rem;
display: flex;
align-items: center;
justify-content: center;
}
```
```js
- file: /mobx-blog/client/src/scss/Page.scss
.page {
main {
background: #f7f7f7;
min-height: calc(100vh - 3rem - 56px);
padding: 1.2rem;
}
}
```
Then the footer snaps right into place, as shown below.

Since we'll be managing state with MobX, install MobX Developer Tools from the Chrome Web Store!

After installing it, open the developer tools, pick the MobX tab, and ta-da~

Oops... I took that screenshot before building the header. Haha.
### Hooking up the backend API
Let's point the proxy in package.json at the backend Koa server.
```js
- file: /mobx-blog/client/package.json
(...생략)
},
"proxy":"http://localhost:4000/"
}
```
Now let's build the header (navigation)!
```js
- file: /mobx-blog/client/src/components/common/Header.js
import React, { Component } from 'react'
import { Link } from 'react-router-dom'
// MobX
import { inject, observer } from 'mobx-react'
// React Strap
import { Navbar, NavbarBrand, NavbarToggler, Collapse, Nav,
NavItem, NavLink, UncontrolledDropdown, DropdownToggle, DropdownMenu, DropdownItem
} from 'reactstrap'
// Style
import 'scss/Header.scss'
@inject('header')
@observer
class Header extends Component {
render() {
const { header } = this.props
return (
<div className="header">
<Navbar color="primary" dark expand="md">
<NavbarBrand tag={Link} to="/">리액트 블로그</NavbarBrand>
<NavbarToggler onClick={header.toggleMenu} />
<Collapse isOpen={header.menuOpen} navbar>
<Nav className="ml-auto" color="white" navbar>
<NavItem>
<NavLink tag={Link} to="/">포스트</NavLink>
</NavItem>
<UncontrolledDropdown nav inNavbar>
<DropdownToggle nav caret>
사용자 인증
</DropdownToggle>
<DropdownMenu right>
<DropdownItem>로그아웃</DropdownItem>
<DropdownItem tag={Link} to="/auth/login">로그인</DropdownItem>
<DropdownItem divider />
<DropdownItem tag={Link} to="/auth/register">회원가입</DropdownItem>
</DropdownMenu>
</UncontrolledDropdown>
</Nav>
</Collapse>
</Navbar>
</div>
)
}
}
export default Header
```
The Navbar components are explained in detail at https://reactstrap.github.io/components/navbar/ , so the Nav structure should be easy to follow.
First, notice the decorators (@) used at the top for MobX state management.
`inject` brings in the store you want to use, and `observer` marks the component to be re-rendered when changes occur, which lets us use the **header** store named above.
Pulling the `header` store out of *this.props* also keeps the code below concise!
We haven't actually implemented the state management yet.
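Still, here's a minimal sketch of what the injected `header` store could look like (hypothetical, since the real store is built in the next post; shown as a plain class so it runs standalone, whereas in actual MobX code `menuOpen` would be an `observable` and `toggleMenu` an `action`):

```javascript
// Hypothetical sketch of the header store the component above injects.
// In the real MobX store, menuOpen would be @observable and toggleMenu an @action.
class HeaderStore {
  constructor() {
    this.menuOpen = false; // the Collapse starts closed
  }
  toggleMenu() {
    this.menuOpen = !this.menuOpen; // flipped by the NavbarToggler's onClick
  }
}

const header = new HeaderStore();
header.toggleMenu();
console.log(header.menuOpen); // true
```

The component only ever calls `header.toggleMenu` and reads `header.menuOpen`, so this tiny shape is all the Navbar needs.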
In the next post, we'll build the state used by this navigation bar and add user authentication (login/sign-up)!
4f25b57538593479048948ba66fcceb7805dd0e6 | 23874 | md | Markdown | _resources/2021-07-06-Day3-1.md | TextZip/site | 75b34c72cbeee6876f027ed9164c1f1af182607d | ["MIT"] | null | null | null | _resources/2021-07-06-Day3-1.md | TextZip/site | 75b34c72cbeee6876f027ed9164c1f1af182607d | ["MIT"] | null | null | null | _resources/2021-07-06-Day3-1.md | TextZip/site | 75b34c72cbeee6876f027ed9164c1f1af182607d | ["MIT"] | null | null | null |
---
title: Workshop Day 3 Session 1
tags: TeXt
layout: article
mode: normal
type: article
sharing: true
author: Automation and Robotics Club
show_author_profile: true
show_title: true
full_width: false
header: true
aside:
toc: true
sidebar:
nav: workshop-bar
---
<style>
img {
border-radius: 8px;
}
</style>
Content will be live after the registration deadline
{:.error}
<!--
# Introduction
Hello! Congratulations on making it to Day 3 of the workshop. In this session, you will be learning about the various sensors present in your kit and how they can be interfaced with the Arduino Uno.
# Motors and Motor Drivers
## About
In everyday life, we see rotating devices like a drill, wheels in a vehicle, and a fan that make our tasks easier because their ability to "turn" helps in these cases. But what is the cause that drives them to turn? Well, this cannot be the sole job of electrical or manual work but is a combined effort of certain laws that lead to an electric motor's principle.
## Electric Motor
Energy transformation is the basic principle of an electric motor, and it is pretty evident that in this case, it is from electrical energy to the energy that leads to the motion of a rotor which indeed is mechanical. But what is the cause of the rotation? There has to be a force responsible for this turning effect known as torque. Here, the interrelation between electric current and magnetism helps. It is a known fact that a current-carrying conductor experiences a force in a magnetic field and when it is a conducting loop, it experiences a torque! So torque of a motor is indeed an important factor for selecting a motor for our purpose.
### Components of an Electric Motor
The basic parts include:
- Rotor
- Stator
- Brush
- Commutator
- Armature
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/DC_motor_parts.png" alt="IR" width=auto height=auto>
### Working
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/DC_motor_working.png" alt="IR" width=auto height=auto>
It is essential to understand that providing power to a motor depends on its size and complexity. A smaller motor can be manually supplied power through a battery, whereas a larger motor, the ones that are used in industries and other high power-driven motors, requires a mechanism which, when energized, connects its terminals to the high power supply. This method of starting a motor is called a "direct on line starter method."
Another fact about these motors is that they draw a huge amount of current to go to the full-speed state from the initial state (to overcome inertia). This leads to drawing a huge amount of starting current from the power supply (for higher starting voltage across the terminals), which may cause damage to the circuits. So "adjustable speed-drives" and "current limiting resistors" are used to prevent problems to the circuits and the motor itself.
### Types of control in a system
Now that we got a good idea about motors let us discuss the two control systems: open-loop and closed-loop systems.
The reason behind discussing these is to know about how control is established in various systems.
#### 1) Open-loop system:
○ This is used when the system under consideration is not complex and need not require that much supervision for its working—for example, the lighting in your room, we just turn the lights on, but we do not ensure if it is working with the same brightness for every time interval. The same is the case with ordinary DC Motor; there is no special mechanism to ensure if it is maintaining a constant rpm or torque for every time interval. If it is fine, it works, else it undergoes certain problems.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/open_loop_system.png" alt="IR" width=auto height=auto>
The above picture shows an open-loop system.
● Commanded variable: The output that you desire.
● Controller: Arduino or any other controller
● Actuator: The device which actually does the process or which enables other devices to work on the process
● Process: The task which we want to perform(here, rotation of the motor)
● Controlled variable: The Actual result/output the we get
#### 2) Closed-loop system:
○ It is clear that in an open-loop schematic, there can be a deviation of the "controlled variable" from the "commanded variable" called the "error variable". This error can cause undesirable consequences when working on huge projects. Here comes the closed-loop system. The special feature of this is that there is a feedback mechanism with the help of sensors that lets the controller know about the situation of the controlled variable with which we can measure the error factor and make our output as desirable as possible.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/closed_loop_system.png" alt="IR" width=auto height=auto>
### Other types of Motors
#### 1) Servo Motor
- This motor follows a closed-loop system. So it has got the advantage of precision. This motor provides high torque along with the precise position of the shaft, which gives its feedback control.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/servo_working.png" alt="IR" width=auto height=auto>
- The above is the schematic of the servo motor. It is to be noted that input is given to it in the form of PWM signals (pulse-width modulation) which leads to the position control of the motor (Of Course, digital signals also can be used for its position control). Also the position sensors in this closed-loop control are generally potentiometers.
- This is not all; servos can also be encoded depending on the complexity of the task that is involved making them gain access to speed and torque feedback along with regular position control. And here comes the actual problem of using these complex closed-loop systems when it is essential to perform the task but such skilled workers are not available. Skilled programmers are essential for optimizing the internal algorithms of closed-loop systems which is not an easy task.
So there is another type of motor that is in much use in the real world, the stepper motor.
#### 2) Stepper Motor
This motor utilizes an open-loop system.
- Stepper motors have a permanent magnetic rotating shaft called a rotor and stationary electromagnets surrounding the rotor called the stator.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/stator_rotor.png" alt="IR" width=auto height=auto>
- Stepper motors have typically 50 to 100 electromagnet poles (pairs of north and south poles) generated either by a permanent magnet or an electric current.
- The greater the number of poles the more is the precision.
There are two main ways in which a stepper motor can be driven:
##### Full-step mode:
The rotor turns through a bigger angle at once. This can be done by energizing either one or two phases( in general) at a time.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/full_step_mode.png" alt="IR" width=auto height=auto>
It can be seen that the torque in two-phase-on mode is higher than in one-phase-on mode.
##### Microstepping:
This mode is essential when precision matters. Here, the rotor is controlled by two phases whose currents are varied so that the resulting magnetic field, and hence the torque on the rotor, changes gradually with time. This moves the rotor through small angles (depending on the magnitude of the current and how you vary it).
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/microstepping.png" alt="IR" width=auto height=auto>
In general, servo motors run more smoothly than stepper motors, except when microstepping is used.
A servo motor will typically provide 2-3 times the speed of a typical stepper motor. As the servo's speed increases, its torque remains constant, so it performs better than a stepper motor at higher speeds, usually above 1000 RPM.
## H-Bridge
The direction in which the motor rotates is expected to change depending on the situation and this cannot be always manually changed. Here comes the role of a H-Bridge which enables the change in direction of the motor by changing the state of the switches in its circuit which can be accomplished by logical HIGH or LOW to the respective switch.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/hb_1.png" alt="IR" width=auto height=auto>
It is evident from the above diagram about the way in which this works. When S1 and S4 are closed (S2 and S3 being open), the motor rotates in one direction while on closing S2 and S3 (keeping S1 and S4 open) the motor turns in the other direction.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/hb_2.png" alt="IR" width=auto height=auto>
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/hb_3.png" alt="IR" width=auto height=auto>
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/hb_4.png" alt="IR" width=auto height=auto>
The above diagram shows a short circuit, which may burn out the H-bridge.
It is also not advisable to drive the motor and change its direction at the same time.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/hb_5.png" alt="IR" width=auto height=auto>
## Motor Driver
A motor driver at a basic level is an integrated circuit chip used to control a motor as per given instructions with the help of H Bridge topology in the case of the L293D.
Well, lets break this down and look into it closely one at a time.
### Why do we need a motor driver?
- In the world of autonomous technology, we would require efficient communication between microcontrollers like the arduino and a motor which draws a huge amount of current to work at the desired rate. Meanwhile, micro controllers operate on low level voltage/current (Eg: Arduino has an operating voltage of 5V).
- Thus, to bridge this gap of power output from the arduino to the motor, a motor driver can act as the interface taking in the low current signals from the Arduino and converting them into higher current signals which can help drive the motor as required.
- Thus, we can control the speed and direction of rotation of motor based on voltage provided.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/motor_driver_1.png" alt="IR" width=auto height=auto>
### The L293D motor driver
We will now specifically look into the L293D
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/L293D_1.png" alt="IR" width=auto height=auto>
- It is a commonly used and a popular 16 pin motor driver IC.
- It is capable of running 2 motors at the same time and works on the principle of a H Bridge.
#### Working principle
The IC works on the principle of H-Bridge (this can be understood on checking how the IC conveys signals to the motor).
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/L293D_working.png" alt="IR" width=auto height=auto>
- As we can see, VCC2 is the pin that drives the motor at its required voltage, while VCC1 powers the 16-pin IC itself and is connected to the Arduino 5V pin. One important point: all the components in a project need a common ground, so all ground pins in the circuit are to be connected to the GND pin of the Arduino.
- It should also be noted that the enable pins are going to control the speed of the rotation through the PWM signals (Pulse width modulation) which can be established by connecting them to the PWM digital pins on Arduino.
[Reference](https://components101.com/articles/what-is-motor-driver-h-bridge-topology-and-direction-control)
### L293D PIN DESCRIPTION
The following image shows the pinout of the L293D.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/L293D_pins.png" alt="IR" width=auto height=auto>
Understanding this is very crucial to facilitate its usage in the circuit.
| Pin Number | Pin Name | Description |
| --- | --- | --- |
| 1 | Enable 1,2 | This pin enables the input pin Input 1(2) and Input 2(7) |
| 2 | Input 1 | Directly controls the Output 1 pin, Controlled by digital circuits |
| 3 | Output 1 | Connected to one end of Motor 1 |
| 4 | Ground | Ground pins are connected to ground of circuit (0V) |
| 5 | Ground | Ground pins are connected to ground of circuit (0V) |
| 6 | Output 2 | Connected to another end of Motor 1 |
| 7 | Input 2 | Directly controls the Output 2 pin, Controlled by digital circuits |
| 8 | Vcc2 (Vs) | Connected to Voltage pin for running motors (4.5V to 36V) |
| 9 | Enable 3,4 | This pin enables the input pin Input 3(10) and Input 4(15) |
| 10 | Input 3 | Directly controls the Output 3 pin, Controlled by digital circuits |
| 11 | Output 3 | Connected to one end of Motor 2 |
| 12 | Ground | Ground pins are connected to ground of circuit (0V) |
| 13 | Ground | Ground pins are connected to ground of circuit (0V) |
| 14 | Output 4 | Connected to another end of Motor 2 |
| 15 | Input 4 | Directly controls the Output 4 pin, Controlled by digital circuits |
| 16 | Vcc2 (Vss) | Connected to +5V to enable IC function |
Few points to be noted:
- There is a notch on the top of the IC, to guide us with proper connections.
- Input pins: The ones to which we give signals
- Output pins: The ones connected to the motors.
- The enable pins are there to “enable” the output pins. If enable pins are set to logic HIGH then the output pins match up to the input pins.
- If the enable pins are set to logic LOW then regardless of logic states of input pins, the output pins are always set to zero.
[Reference](https://components101.com/ics/l293d-pinout-features-datasheet)
You can also take a look at the [datasheet](https://components101.com/asset/sites/default/files/component_datasheet/L293D%20Datasheet.pdf) of L293D for more information.
Now that we have a fair idea of the the L293D, lets use it in a circuit to grasp the complete picture
### Using L293D with motors and Arduino
*Circuit on TinkerCAD*
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/L293D_arduino.png" alt="IR" width=auto height=auto>
#### Code
```c++
int speedPinR = 5; // connect enable pin 1,2 (PWM sets right motor speed)
int dirR1 = 4; // connect input pin 1 which controls output pin 1
int dirR2 = 2; // connect input pin 2 which controls output pin 2
int speedPinL = 6; // connect enable pin 3,4 (PWM sets left motor speed)
int dirL1 = 7; // connect input pin 3 which controls output pin 3
int dirL2 = 12; // connect input pin 4 which controls output pin 4
const int mSpeedL = 79; // to maintain approx 5000rpm for left wheel
const int mSpeedR = 79; // to maintain approx 5000rpm for right wheel
int dt=1000; // delay time
// Output pins 1,2 connected to right motor
// Output pins 3,4 connected to left motor
void setup()
{
Serial.begin(9600);
// set pinMode for all pins
pinMode(speedPinR, OUTPUT);
pinMode(dirR1, OUTPUT);
pinMode(dirR2, OUTPUT);
pinMode(speedPinL, OUTPUT);
pinMode(dirL1, OUTPUT);
pinMode(dirL2, OUTPUT);
}
void loop()
{
  digitalWrite(dirR1, HIGH); // input pin 1 set to high
  digitalWrite(dirR2, LOW); // input pin 2 set to low
  analogWrite(speedPinR,mSpeedR); // PWM on enable pin 1,2 sets right motor speed
  digitalWrite(dirL1, HIGH); // input pin 3 set to high
  digitalWrite(dirL2, LOW); // input pin 4 set to low
  analogWrite(speedPinL,mSpeedL); // PWM on enable pin 3,4 sets left motor speed
delay(dt); // set delay time
}
```
[TinkerCAD Simulation](https://www.tinkercad.com/things/l1ABNFeVqaz)
Refer to the above to get an idea on how to make the connections and how to work with the L293D.
Here is a very useful video by Paul MCWhorter. Do give it a watch!
https://www.youtube.com/watch?v=fPLEncYrl4Q
* * *
# Ultrasonic Sensor HC-SR04
## About
The HC-SR04 ultrasonic sensor is used to calculate the distance of an obstacle in the path of the bot. It consists of four pins: VCC, trigger, echo & ground. It also includes a transmitter and receiver, which transmit and receive the US signals, respectively.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/HC-SR04_1.png" alt="IR" width=auto height=auto>
## Working Principle
The HC-SR04 sensor detects the travel time of a signal from transmission till it's received. The sensor sends out a ping, and an echo is heard. The duration of travel of the signal is then measured by the sensor.
When the Trig pin of the sensor receives a pulse of HIGH for at least 10 microseconds, it will initiate the sensor, which will transmit out 8 cycles of ultrasonic burst and wait for the reflected ultrasonic burst. When the sensor detects US signals from the receiver, it will set the Echo pin to HIGH and delay for a period proportional to distance.
The sensor transmits 8 cycles of ultrasonic bursts since the receiver needs to hear enough cycles to reach its full output and set the Echo pin to HIGH.
## Pins Description
- VCC: This powers the sensor with +5V and is connected to the 5V supply of the Arduino.
- Trigger: It is an Input pin. It is connected to any digital pin on the Arduino. On sending a HIGH signal to this pin, the transmitter sends out US signals until a LOW signal is sent.
- Echo: It is an Output pin. It is connected to any digital pin on the Arduino. This pin goes HIGH for the duration equal to the time taken by the receiver to receive US signals after being transmitted.
- Ground: This pin is connected to the ground of the Arduino.
The following image shows the connection of the terminals of the sensor to the Arduino:
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/HC-SR04_2.png" alt="IR" width=auto height=auto>
[Reference](https://components101.com/sensors/ultrasonic-sensor-working-pinout-datasheet)
You can also refer to the [datasheet](https://cdn.sparkfun.com/datasheets/Sensors/Proximity/HCSR04.pdf) for more information.
## Distance Calculation
As described earlier, the HC-SR04 sensor sends out ultrasonic signals and receives the echo, then measures the pingTravelTime of the signal.
We know, $Speed = Distance \hspace{0.1cm} / \hspace{0.1cm}Time$
Sound waves travel at a speed of about $340 m/s$ at room temperature.
$\therefore Speed = 340 m/s$
The time given by HC-SR04 is in microseconds. We need to divide the time by 2 since this is the time taken by the sound waves from the transmitter to the object back to the receiver.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/HC-SR04_distance.png" alt="IR" width=auto height=auto>
Thus, with the travel time converted to seconds, the formula becomes,
$Distance = 340 * (pingTravelTime / 2)$
Equivalently, keeping the time in microseconds and working in centimeters, $Distance = 0.0343 * (pingTravelTime / 2)$.
You can watch the following videos to understand better:
- [Measuring Speed of Sound With HC-SR04 Sensor](https://www.youtube.com/watch?v=BTMMNsL0_b0)
- [Measuring Distance With HC-SR04 Ultrasonic Sensor](https://www.youtube.com/watch?v=2hwrDSVHQ-E)
## Code
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/HC-SR04_arduino.png" alt="IR" width=auto height=auto>
```c++
int trigPin = 12; // connect trig pin of HC-SR04
int echoPin = 11; // connect echo pin of HC-SR04
int pingTravelTime; // initialize variable
void setup() {
pinMode(trigPin,OUTPUT); // trig pin set as OUTPUT
pinMode(echoPin,INPUT); // echo pin set as INPUT
Serial.begin(9600);
}
void loop() {
digitalWrite(trigPin,LOW); // trig pin set to low
delayMicroseconds(10); // delay time in microseconds
digitalWrite(trigPin,HIGH); // trig pin set to high
delayMicroseconds(10); // delay time in microseconds
digitalWrite(trigPin,LOW); // trig pin set to low
pingTravelTime = pulseIn(echoPin,HIGH); // pulseIn() function explained below
delay(25); // delay time
Serial.println(pingTravelTime); // print pingTravelTime on Serial Monitor
}
```
[TinkerCAD Simulation](https://www.tinkercad.com/things/cGqyGisEibf)
## pulseIn() Function
The pulseIn() function measures the time period of either HIGH or LOW pulse input signal.
The syntax of pulseIn() function is:
`pulseIn(pin, value)`
- pin: the number of the pin on which the pulse is to be read.
- value: the value of the pulse, either HIGH or LOW.
In the above code, the pin is the echoPin, and the value is HIGH, i.e. the function measures the time period of the HIGH pulse input signal on the echoPin.
Hence it indirectly measures the travel time of the signal, so the pingTravelTime can be calculated using the pulseIn() function.
https://www.youtube.com/watch?v=M-UKXCUI0rE
* * *
# LM7805 Voltage Regulator
## About
Voltage regulators are used to ensure steady and constant output from voltage sources. The integrated circuits used for voltage regulation are called Voltage regulator ICs.
IC 7805 is a three-terminal Voltage Regulator that restricts the output voltage to 5V output for various ranges of input voltage. It acts as an excellent component against input voltage fluctuations for circuits, and adds an additional safety to your circuitry. Primary job of a voltage regulator is to drop a larger voltage to a smaller one and keep it stable, since that regulated voltage is being used to power sensitive electronics.
LM78xx series is a family of linear voltage regulators. This series of regulators are excellent for most purposes; they can handle up to almost 30V on the input and, depending on the package, up to 1A output current. The last two digits on the IC signify the voltage to which the input is regulated, for example, a 7805 regulates to give a 5V output and a 7812 gives a 12V output.
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/7805_1.jpg" alt="IR" width=auto height=auto>
## Pins Description
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/7805_pins.jpg" alt="IR" width=auto height=auto>
| Pin No. | Pin | Function | Description |
| --- | --- | --- | --- |
| 1 | INPUT | Input Voltage (7V-35V) | In this pin of the IC positive unregulated voltage is given in regulation |
| 2 | GROUND | Ground (0V) | This is the pin where ground is connected. It is common to both the input and the output |
| 3 | OUTPUT | regulated output; 5V(4.8V-5.2V) | The output of the regulated 5V is taken out at this pin of the IC regulator |
You can read more about the 7805 IC [here](https://www.electronicshub.org/understanding-7805-ic-voltage-regulator/).
A detailed video explanation on the 7805 IC can be found [here](https://www.youtube.com/watch?v=LEv26VH0S1E).
<img src="{{site.baseurl}}/assets/images/resources/Day3_Session1/7805_connection.jpg" alt="IR" width=auto height=auto>
[Tinkercad Simulation for Reference](https://www.tinkercad.com/things/cGUY894p0d4-lm7805-voltage-regulator)
[How to use LM7805 regulator in 5 volt dc power supply from 9 volt battery](https://www.youtube.com/watch?v=9KXN69eXzj8)
## 7805 Regulator Features
- 5V positive voltage regulator.
- Operating current is 5mA.
- Minimum input voltage is 7V.
- Maximum input voltage is 25V.
- Required very minimum external component to fully function.
- It can deliver up to 1.5A of current.
- Internal thermal overload and short circuit current limiting protection is available.
- Junction temperature maximum of 125-degree celsius.
- This IC has both internal current limiting and thermal shutdown features.
You can also refer to the [datasheet](http://ee-classes.usc.edu/ee459/library/datasheets/LM7805.pdf) for more information.
## Application of LM7805
### Used as
- fixed output regulator.
- regulated dual supply.
- In most devices as constant +5V output is needed for microcontrollers and sensors in most of the projects.
- current limiter for certain applications.
- polarity reversal protection circuit.
- Reverse bias protection circuit.
- positive regulator in negative configurations.
* * * -->
4f268b8b1c99203dc772c4478ab08d5571704b3a | 8918 | md | Markdown | examples/doodles.md | jfischoff/twitch | 9ed10b0aaff8aace3b8b2eb47d605d9107194b95 | ["MIT"] | 38 | 2015-01-21T14:41:40.000Z | 2020-12-13T17:46:53.000Z | examples/doodles.md | jfischoff/twitch | 9ed10b0aaff8aace3b8b2eb47d605d9107194b95 | ["MIT"] | 18 | 2015-04-23T17:36:12.000Z | 2021-10-01T11:53:56.000Z | examples/doodles.md | jfischoff/twitch | 9ed10b0aaff8aace3b8b2eb47d605d9107194b95 | ["MIT"] | 7 | 2015-08-01T17:21:35.000Z | 2017-12-26T19:40:22.000Z |
# What is the fastest Unboxed Mutable Reference?
MutableByteArray [^*] <br>
I was profiling a simple loop. Here is a simpler version:
```haskell
countForever ref = forever $ modifyIORef' ref (+1)
```
I counted 4 assembly instructions. <br>
"Addition takes 300 picoseconds, but there is memory access so let's say two nanoseconds," I thought.<br>
To profile, I wrote:
```haskell
loopHundredMillion = replicateM_ 100000000
iorefLoop = do
ref <- newIORef 0
loopHundredMillion $ modifyIORef' ref (+1)
```
I expected this to take ~ 1/5 ^th^ of a second. <br>
It took *1 sec*. <br>
Is five hundred million iterations an unrealistic top-line?<br>
As a sanity check I wrote an analogous program in C.
```c
//CountForever.c
int main (int argc, const char** argv)
{
int ref = 0;
    while (ref < 100000000)
{
ref++;
}
}
```
```bash
$ gcc -O2 CountForever.c -oCountForever && time ./CountForever
real 0m0.002s
user 0m0.001s
sys 0m0.001s
```
This was faster then I expected:
> Maybe **time** isn't that accurate when programs are fast?
On IRC [^1]
<pre><code class="irc">
<span class="irc-name-jfischoff">jfischoff</span><span class="irc-tilde"> ~</span> <span class="irc-jfischoff">Is there anyway to make modifyIORef' faster? </span>
<span class="irc-name-jfischoff">jfischoff</span><span class="irc-tilde"> ~ </span><span class="irc-jfischoff">I am surprised that in a second I was only able to update this ref</span>
<span class="irc-tilde">↪</span> <span class="irc-jfischoff">100 million times: timeout 1000000 $ forever $ modifyIORef' x (1+)</span>
<span class="irc-name-jfischoff">jfischoff</span><span class="irc-tilde"> ~</span> <span class="irc-jfischoff">where as c++ was able to do the same in 4 milliseconds</span>
<span class="irc-name-glguy">glguy</span><span class="irc-tilde"> ~</span> <span class="irc-glguy">c++ was able to do 1 update every 0.04 nanoseconds?</span>
<span class="irc-name-glguy">glguy</span><span class="irc-tilde"> ~</span> <span class="irc-glguy">an update rate of 25 gigahertz?</span>
<span class="irc-name-dv-">dv-</span><span class="irc-tilde"> ~</span> <span class="irc-dv-">gcc probably just <span class="irc-emphasis">replaced it with a constant</span></span>
<span class="irc-name-jfischoff">jfischoff</span><span class="irc-tilde"> ~ </span><span class="irc-jfischoff">dv-: perhaps</span>
<span class="irc-name-glguy">glguy</span><span class="irc-tilde"> ~</span> <span class="irc-glguy">That or C++ unlocks the fast mode of an Intel processor</span>
</code>
</pre>
Burn.
```bash
$ gcc -02 CountForver.c -S
```
```nasm
; CountForver.s
; Notice, there are no jumps or changes to the program counter.
; There is no looping.
_main:
.cfi_startproc
BB#0:
pushq %rbp
Ltmp2:
.cfi_def_cfa_offset 16
Ltmp3:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp4:
.cfi_def_cfa_register %rbp
xorl %eax, %eax
popq %rbp
ret
.cfi_endproc
```
dv- was right. The compiler got rid of the loop, I'm assuming, because I wasn't using the result. <br>
I added a **volatile** keyword to prevent optimizations.
```c
//CountForver.c
int main (int argc, const char** argv)
{
volatile int ref = 0;
while (ref < 100000000)
{
ref++;
}
}
```
```bash
$ gcc -O2 CountForever.c -oCountForever && time ./CountForever
real 0m0.178s
user 0m0.176s
sys 0m0.001s
```
So C can do *greater* than **500 million** increments per second, but why stop there? What about assembly?
```nasm
; c0Un7f0r3v3r.asm
EXTERN _exit
GLOBAL start
SECTION .data
align 8
iterationCount DD 100000000
result DD 0
SECTION .text
start:
; Copy the iteration count to the ecx register
; which is used by the loopnz instruction
mov ecx, [iterationCount]
loopBlock:
inc dword [result] ; Increment the value at the address of result
loopnz loopBlock ; Decrement the ecx counter and loop until ecx is zero
exitBlock:
push dword 0 ; Set the exit code to zero
mov eax, 1 ; Place the system call number (exit) in the eax reg
sub esp, 4 ; I have to add some dummy space to the stack for some reason
int 0x80 ; Exit
```
```bash
$ nasm -fmacho c0Un7f0r3v3r.asm &&\
> ld -macosx_version_min 10.7.0 -oc0Un7f0r3v3r c0Un7f0r3v3r.o &&\
> time ./c0Un7f0r3v3r;
real 0m0.176s
user 0m0.174s
sys 0m0.001s
```
So IORef is about *five times slower* than naive approaches in C and assembly. <br>
What gives? <br>
To the Core!
```bash
$ ghc-core -- -O2 IORefLoop.hs
```
```ghc-core
...
case readMutVar# @ RealWorld @ Int ref rwState0 of
_ { (# rwState1, value #) ->
case value of
_ { I# unboxedIntValue ->
case writeMutVar# @ RealWorld @ Int ref (I# (unboxedIntValue +# 1)) rwState1 of ...
```
I find GHC Core almost unreadable. <br>
One trick is to try to ignore most of the case statments.<br>
The first and the third case statements are not for scrutinizing alternatives, but are to ensure proper sequencing of IO actions. <br>
<pre class="ghc-core">
<code class="ghc-core">
<span class="state-token-case">case readMutVar# @ RealWorld @ Int ref rwState0 of
_ { (# rwState1, value #) -></span>
case value of
_ { I# unboxedIntValue ->
<span class="state-token-case">case writeMutVar# @ RealWorld @ Int ref (I# (unboxedIntValue +# 1)) rwState1 of ..</span>
</code>
</pre>
The second case statement unboxes the primitive int.
<pre class="ghc-core">
case value of _ { <span class="ghc-core-boxing">I# unboxedIntValue</span> }
</pre>
and the boxing when setting
<pre class="ghc-core">
case writeMutVar# @ RealWorld @ Int ref (<span class="ghc-core-boxing">I# (unboxedIntValue +# 1)</span>)
</pre>
`I#` is the `Int` constructor (the `#` means it is a compiler primitive). It wraps or boxes an unpacked, unboxed, "real" int. <br>
Most of the time the compiler can perform the unboxing automatically, but it can't in this case. [^2] <br>
If the problem is just boxing, then we need an unboxed mutable reference. I can think of two options: [Ptr Int](https://hackage.haskell.org/package/base-4.7.0.1/docs/Foreign-Ptr.html) and [MutableByteArray](http://hackage.haskell.org/package/primitive-0.5.3.0/docs/Data-Primitive-ByteArray.html#t:MutableByteArray). <br>
<blockquote class="twitter-tweet tw-align-center" lang="en"><p>Dropping truth bombs. “<a href="https://twitter.com/jfischoff">@jfischoff</a>: <a href="https://twitter.com/EvilHaskellTips">@EvilHaskellTips</a> IORefs? No, no, no. Ptrs + Storable. C is the only fast language.”</p>— Evil Haskell Tips (@EvilHaskellTips) <a href="https://twitter.com/EvilHaskellTips/status/430561860794843136">February 4, 2014</a></blockquote>
<script async src="http://platform.twitter.com/widgets.js" charset="utf-8"></script>
First the `Ptr`{.haskell} version
```haskell
ptrHertz = alloca $ \ptr -> do
poke ptr (0 :: Int)
loopHundredMillion $ do
i <- peek ptr
let !i' = i + 1
poke ptr i'
```
Using criterion I can see this gives me **0.230 secs**. <br>
Closer. <br>
Now the `MutableByteArray`{.haskell} version:
```haskell
mbaHertz = do
  mba <- newAlignedPinnedByteArray 8 8 -- 8 bytes for a 64-bit Int, 8-byte aligned
loopHundredMillion $ do
i <- readByteArray mba 0 :: IO Int
let !i' = i + 1
writeByteArray mba 0 i'
```
**0.189** seconds. <br>
About ***5 percent*** slower than the C and assembly versions. Nice. <br>
A MutableByteArray is stored as a length followed immediately by the data. The length is unnecessary in our case, so we are using more space than necessary. <br>
Maybe the extra length is causing a slowdown? <br>
Having an unboxed mutable value without that extra length would be nice; too bad we don't have one ... or do we? <br>
There is [MutVar](http://hackage.haskell.org/package/primitive-0.5.3.0/docs/Data-Primitive-MutVar.html), so how does it compare? <br>
Let's compare `MutVar`{.haskell}
```haskell
mutVarHertz = do
mv <- newMutVar 1
loopHundredMillion $ do
i <- readMutVar mv :: IO Int
let !i' = i + 1
writeMutVar mv i'
```
Terrible.<br>
There we have it, MutableByteArray FTW. <br>
Except I don't really want to use a MutableByteArray. It is not type safe; I had to essentially cast the data when reading it. It is less type safe than the C version. <br>
If only there were a safe wrapper around MutableByteArray. There is! [Vector](https://hackage.haskell.org/package/vector-0.10.11.0/docs/Data-Vector-Mutable.html).
So how does `Vector`{.haskell} fare?
TODO put in Vector code
TODO profile MutableArray
TODO add UArray code
[Sources](https://github.com/jfischoff/mutable-reference-benchmarks)
# Criterion Reports
## All versions
## Just the fast versions
## The single operations - this really shows off the accuracy of Criterion.
[^1]: <http://ircbrowse.net/browse/haskell?id=18867462&timestamp=1408726262#t1408726262>
[^2]:
    <http://ircbrowse.net/browse/haskell?id=18867577&timestamp=1408727702#t1408727702>
[^*]: Fastest on my computer for updating a single unboxed Int
---
title: Running Document Table | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-sdk
ms.topic: conceptual
helpviewer_keywords:
- read locks
- running document table (RDT), IVsDocumentLockHolder interface
- running document table (RDT)
- running document table (RDT), edit locks
- document data objects, running document table
ms.assetid: bbec74f3-dd8e-48ad-99c1-2df503c15f5a
caps.latest.revision: 19
ms.author: gregvanl
manager: jillfra
ms.openlocfilehash: 7ea32df892efa47c91d8292bdc9065080318a059
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/23/2019
ms.locfileid: "68155559"
---
# <a name="running-document-table"></a>Running Document Table
[!INCLUDE[vs2017banner](../../includes/vs2017banner.md)]
The IDE maintains the list of all currently open documents in an internal structure called the running document table (RDT). This list contains all documents open in memory, regardless of whether those documents are currently being edited. A document is any item that can be saved, including the files in a project or the main project file itself (for example, a .vcxproj file).
## <a name="elements-of-the-running-document-table"></a>Elements of the Running Document Table
 The running document table contains the following entries.

|Element|Description|
|-------------|-----------------|
|Document moniker|A string that uniquely identifies the document data object. For a project system that manages files, this is the absolute file path (for example, C:\MyProject\MyFile). The string is also used for projects that are stored in something other than a file system, such as stored procedures in a database. In that case, the project system can invent a unique string that it can recognize, and possibly parse, to determine how to save the document.|
|Hierarchy owner|The hierarchy object that owns the document, represented by an <xref:Microsoft.VisualStudio.Shell.Interop.IVsHierarchy> interface.|
|Item identifier|The item identifier of a particular item within the hierarchy. This value is unique for all documents in the hierarchy that owns the document, but it is not necessarily unique across different hierarchies.|
|Document data object|In every case, this is an `IUnknown` object. The IDE does not require any particular interface beyond `IUnknown` for the document data object of a custom editor. For a standard editor, however, the editor's implementation of the <xref:Microsoft.VisualStudio.Shell.Interop.IVsPersistDocData2> interface is required in order to handle the file-persistence calls from the project. For more information, see [Saving a Standard Document](../../extensibility/internals/saving-a-standard-document.md).|
|Flags|Flags that control whether a read or edit lock is applied, whether the document is saved, and so on. They can be specified when entries are added to the RDT. For more information, see the <xref:Microsoft.VisualStudio.Shell.Interop._VSRDTFLAGS> enumeration.|
|Edit lock count|The number of edit locks. An edit lock indicates that some editor has the document open for editing. When the edit lock count drops to zero, the user is prompted to save the document if it has been modified. For example, each time you open a document in an editor by using the **New Window** command, an edit lock is added for that document in the RDT. Before an edit lock can be set, the document must have a hierarchy or item identifier.|
|Read lock count|The number of read locks. A read lock indicates that the document is being read by some mechanism, such as a wizard. A read lock keeps a document alive in the RDT and indicates that the document cannot be edited. You can set a read lock even if the document has no hierarchy or item identifier. This feature lets you open a document in memory and enter it in the RDT without the document belonging to any hierarchy. This capability is rarely used.|
|Lock holder|An instance of an <xref:Microsoft.VisualStudio.Shell.Interop.IVsDocumentLockHolder> interface. A lock holder is implemented by features, such as wizards, that open and edit documents outside of an editor. A lock holder enables such a feature to add an edit lock to the document in order to prevent the document from being closed while it is still being edited. Typically, edit locks are added only by document windows (that is, editors).|
Each entry in the RDT has a unique hierarchy or item identifier, which typically corresponds to a node in a project. All documents available for editing are typically owned by a hierarchy. Entries in the RDT control which project (or, more precisely, which hierarchy) currently owns the document data object being edited. Using the information in the RDT, the IDE can prevent a document from being opened by more than one project at a time.
 The hierarchy also controls the persistence of the data and uses the information in the RDT to update the **Save** and **Save As** dialog boxes. When the user modifies a document and then chooses the **Exit** command on the **File** menu, the IDE prompts with the **Save Changes** dialog box, which displays all projects and project items that are currently modified. This lets users choose which documents they want to save. The list of documents to save (that is, those documents that have changes) is generated from the RDT. Any item that you expect to see in the **Save Changes** dialog box when the application exits must have a record in the RDT. The RDT coordinates which documents are saved and whether the user is prompted about an operation, as specified by the values in the flags entry for each document. For more information about the RDT flags, see the <xref:Microsoft.VisualStudio.Shell.Interop._VSRDTFLAGS> enumeration.
## <a name="edit-locks-and-read-locks"></a>Edit Locks and Read Locks
 Edit locks and read locks reside in the RDT. The document window increments and decrements the edit lock count. Therefore, when a user opens a new document window, the edit lock count is incremented by one. When the edit lock count reaches zero, the hierarchy is signaled to persist, or save, the data for the associated document. The hierarchy can then persist the data in any way, including as a file or as an item in a repository. You can use the <xref:Microsoft.VisualStudio.Shell.Interop.IVsRunningDocumentTable.LockDocument%2A> method of the <xref:Microsoft.VisualStudio.Shell.Interop.IVsRunningDocumentTable> interface to add edit locks and read locks, and the <xref:Microsoft.VisualStudio.Shell.Interop.IVsRunningDocumentTable.UnlockDocument%2A> method to remove those locks.
 Typically, when the document window for an editor is instantiated, the window frame automatically adds an edit lock for the document in the RDT. However, if you create a custom view of a document that does not use a standard document window (that is, one that does not implement the <xref:Microsoft.VisualStudio.Shell.Interop.IVsWindowFrame> interface), you must set your own edit lock. For example, a wizard edits a document without opening it in an editor. To allow wizards and similar entities to hold document locks, those entities must implement the <xref:Microsoft.VisualStudio.Shell.Interop.IVsDocumentLockHolder> interface. To register your document lock holder, call the <xref:Microsoft.VisualStudio.Shell.Interop.IVsRunningDocumentTable.RegisterDocumentLockHolder%2A> method and pass in your <xref:Microsoft.VisualStudio.Shell.Interop.IVsDocumentLockHolder> implementation. In this way, your document lock holder is added to the RDT. Another scenario for implementing a document lock holder is when you open a document through a special tool window. In that case, you cannot let closing the tool window close the document. By being registered as a document lock holder in the RDT, however, the IDE can call your implementation of the <xref:Microsoft.VisualStudio.Shell.Interop.IVsDocumentLockHolder.CloseDocumentHolder%2A> method to request that the document be closed.
## <a name="other-uses-of-the-running-document-table"></a>Other Uses of the Running Document Table
 Other entities in the IDE use the RDT to obtain information about documents. For example, the source control manager uses the RDT to tell the system to reload a document in the editor after it retrieves the latest version of the file. To do this, the source control manager looks up the files in the running document table to determine whether they are open. If they are, the source control manager first checks whether the hierarchy implements the <xref:Microsoft.VisualStudio.Shell.Interop.IVsPersistHierarchyItem2.ReloadItem%2A> method. If the project does not implement the <xref:Microsoft.VisualStudio.Shell.Interop.IVsPersistHierarchyItem2.ReloadItem%2A> method, the source control manager instead checks for an implementation of the <xref:Microsoft.VisualStudio.Shell.Interop.IVsPersistDocData2.ReloadDocData%2A> method on the document data object directly.
 The IDE also uses the RDT to surface (bring to the foreground) an open document when a user requests that document. For more information, see [Displaying Files by Using the Open File Command](../../extensibility/internals/displaying-files-by-using-the-open-file-command.md). To determine whether a file is open in the RDT, do one of the following.
- Query for the document moniker (that is, the full document path) to find out whether the item is open.
- Using the hierarchy or item identifier, ask the project system for the full document path, and then look up the item in the RDT.
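As an illustration only (this sketch is not part of the original topic), the moniker-based lookup described above might look like the following from a VSPackage. The helper name is hypothetical, and the choice of `RDT_NoLock` is an assumption; the returned document-data pointer must be released when it is not needed.

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.VisualStudio;
using Microsoft.VisualStudio.Shell.Interop;

internal static class RdtHelpers
{
    // Looks up a document in the RDT by its full path (moniker) without locking it.
    internal static bool IsDocumentOpenInRdt(IVsRunningDocumentTable rdt, string fullPath)
    {
        IVsHierarchy hierarchy;
        uint itemId;
        IntPtr docData;
        uint cookie;

        int hr = rdt.FindAndLockDocument(
            (uint)_VSRDTFLAGS.RDT_NoLock, fullPath,
            out hierarchy, out itemId, out docData, out cookie);

        // The returned document data pointer is add-ref'd and must be released.
        if (docData != IntPtr.Zero)
            Marshal.Release(docData);

        // A nonzero cookie indicates the document has an entry in the RDT.
        return ErrorHandler.Succeeded(hr) && cookie != 0;
    }
}
```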
## <a name="see-also"></a>See Also
 [RDT_ReadLock Usage](../../extensibility/internals/rdt-readlock-usage.md)
 [Persistence and the Running Document Table](../../extensibility/internals/persistence-and-the-running-document-table.md)
# Lighting
Lighting is the most important aspect of making a scene look good.
## Physically Based Rendering
There are many formulas for computing lighting on surfaces. The de facto industry standard, which is also used in ezEngine, is **P**hysically **B**ased **R**endering (PBR), which describes a surface in terms of its *color*, its surface normals, its *roughness*, and whether it is a *metal*. Using this data, very convincing lighting can be computed.
Therefore, the standard type of [material](../../materials/materials-overview.md) requires you to provide such textures. Optionally, an *occlusion texture* can emphasize the shading in small crevices.
## Static vs. Dynamic Lighting
Many games differentiate between *static* or *baked* lighting, and *dynamic* lighting. Static lighting is precomputed and typically stored in *lightmaps* (dedicated textures) and other data structures. Dynamic lighting does not require any preprocessing or extra data. Baked lighting typically has the advantage that it can look much better because it can simulate light bounces and thus illuminate areas that are not directly lit.
Currently ezEngine **only supports dynamic lighting**. That means every light source that you add to the scene can be moved around and change its color or brightness. It also means that every light source has a performance cost. The renderer uses a clustered forward rendering approach which can handle a relatively large amount of light sources efficiently. The most important rule is to reduce the number of *overlapping* light sources. The editor [render modes](../../editor/editor-views.md#render-modes) allow you to look for hotspots.
## Shadows
Dynamic lights have the disadvantage that they don't provide shadows by default. Instead, casting shadows is a separate process, which costs a lot of performance for every light source involved. Therefore, for each light source you must decide whether it should cast shadows. You can use many small fill lights, as long as they don't cast shadows, but you should keep the number of shadow-casting lights as low as possible, and each one should cover as small an area as possible.
For more details see the chapter about [dynamic shadows](dynamic-shadows.md).
## Light Component Types
There are different component types to provide different types of lighting:
* [Ambient Light Component](ambient-light-component.md): For lighting up a scene in general.
* [Directional Light Component](directional-light-component.md): For sun/moon light.
* [Point Light Component](point-light-component.md): For light bulbs and overall fill lights.
* [Spot Light Component](spot-light-component.md): For flashlights and directed lighting.
* [Sky Light Component](sky-light-component.md): For dynamic light contribution from the sky.
* [Reflection Probe Components](reflection-probe-components.md): For localized reflection probes.
## See Also
* [Materials](../../materials/materials-overview.md)
* [Dynamic Shadows](dynamic-shadows.md)
* [Render Modes](../../editor/editor-views.md#render-modes)
# Kafka Connect HTTP Connector
[](https://app.fossa.io/projects/git%2Bgithub.com%2Fthomaskwscott%2Fkafka-connect-http?ref=badge_shield)
kafka-connect-http is a [Kafka Connector](http://kafka.apache.org/documentation.html#connect)
for invoking HTTP APIs with data from Kafka.
Documentation for this connector is still a TODO.
# Development
You can build kafka-connect-http with Maven using the standard lifecycle phases.
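For example (these are the standard Maven lifecycle invocations, not project-specific goals):

```sh
# Compile, run tests, and package the connector JAR
mvn clean package

# Skip tests if you only need the artifact
mvn clean package -DskipTests
```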
# FAQ
Refer to the frequently asked questions on Kafka Connect HTTP here:
https://github.com/thomaskwscott/kafka-connect-http/wiki/FAQ
# Contribute
- Source Code: https://github.com/thomaskwscott/kafka-connect-http
- Issue Tracker: https://github.com/thomaskwscott/kafka-connect-http/issues
# License
The project is licensed under the Apache 2 license.
# steampipe-ubi
container image for steampipe installed onto UBI
run the container interactively
```sh
$ oc run -i -t --serviceaccount=steampipe steampipe --image=quay.io/erikerlandson/steampipe:latest --command -- /bin/bash
```
add "anyuid" to steampipe SA:
```sh
oc adm policy add-scc-to-user anyuid -z steampipe --as system:admin
```
---
title: Ballot order effects in the news; I’m skeptical of the claimed 5% effect.
date: '2019-11-16'
linkTitle: https://statmodeling.stat.columbia.edu/2019/11/15/ballot-order-effects-in-the-news-im-skeptical-of-the-claimed-5-effect/
source: Statistical Modeling, Causal Inference, and Social Science
description: 'Palko points us to this announcement by Marc Elias: BREAKING: In major
court victory ahead of 2020, Florida federal court throws out state’s ballot
order law that lists candidates of the governor’s party first on every ballot
for every office. Finds that it gave GOP candidates a 5% advantage. @AndrewGillum
lost in 2018 by .4% ...'
disable_comments: true
---
Palko points us to this announcement by Marc Elias: BREAKING: In major court victory ahead of 2020, Florida federal court throws out state’s ballot order law that lists candidates of the governor’s party first on every ballot for every office. Finds that it gave GOP candidates a 5% advantage. @AndrewGillum lost in 2018 by .4% ...
layout: post
title: What's new in C Sharp 6
description: ""
excerpt: "The rich feature list of C# 6, allows us to write more concise and readable code. The syntax avoids verbosity in many common scenarios and it leads to better developer productivity by helping us concentrate on developing the core features rather than getting entangled in the constructs of the language"
categories: blog
tags: CSharp .Net NewReleases Improvements
comments: true
share: true
readtime: true
---
Since its first release in January 2002 (Version 1.0), C# has come a long way. Along with .NET Framework 4.6, a new version of C#, has been released in July 2015 - though the language itself hasn't changed much, main thrust has been on introduction of Roslyn .NET Compiler Platform. However, version 6.0 consists of a rich list of syntactical improvements, leading to code that is more concise & more readable resulting in enhanced developer productivity. We'll look into the feature list, try out small examples and will see how this release changes the way we write C#.
# What's New in C# 6
C# has always been my favorite language, and it's getting more feature rich and attractive with every new release. Let's have a peek at the features in this release:
- [Auto-Property Initializers](#auto-property-initializers)
- [await in catch and finally blocks](#await-in-catch-and-finally-blocks)
- [Exception Filters](#exception-filters)
- [Expression-bodied Function Members](#expression-bodied-function-members)
- [Improvements in Overload Resolution Methods](#improvements-in-overload-resolution-methods)
- [index initializers](#index-initializers)
- [nameof Expressions](#nameof-expressions)
- [Null - Conditional Operators](#null-conditional-operators)
- [String Interpolation](#string-interpolation)
- [using static](#using-static)
Introduction of these features allows us to write more concise and readable code. The syntax avoids verbosity in many common scenarios and it leads to better developer productivity by helping us concentrate on developing the core features rather than getting entangled in the constructs of the language. Also while reading the code the improved syntax helps in understanding the design intent more easily.
In the sections to follow, we'll look into each of these features using some examples.
### Auto-Property Initializers
Earlier in order to make a read-only property we had to the following:
- A read-only-defined backing field
- Initialization of the backing field from within the constructor
- Explicit implementation of the property (rather than using an auto-property)
- An explicit getter implementation that returns the backing field
These properties would need to have setters and you would need to use that setter to initialize the data storage used by the backing field.
E.g., we used to create our properties with a getter and setter and initialized them in constructor with values as shown below.
- #### without initializers
```csharp
public class Employee
{
public string Name { get; set; }
public decimal Salary { get; set; }
public Employee()
{
/* Initializing property through constructor */
Name = "Douglas Bader";
Salary = 50000;
}
}
```
As this class grows, you may include other constructors. Each constructor needs to initialize these fields, or you'll introduce errors.
Now in C#6, *Auto-Property Initializers* allows us to declare initial value for a property as part of the property declaration.
- #### with initializers
```csharp
public class Employee
{
/* Property with inline initialization */
public string Name { get; set; } = "Douglas Bader";
public decimal Salary { get; set; } = 50000;
}
```
### Read-only auto-properties
In addition to flexibility with visibility, you can also initialize *Read-only auto-properties*. They provide a more concise syntax to create immutable types. The closest you could get to immutable types in earlier versions of C# was to declare private setters:
```csharp
public string Name { get; private set; };
public decimal Salary { get; private set; };
```
Using this syntax, the compiler doesn't ensure that the type really is immutable. It only enforces that the `Name` and `Salary` properties are not modified from any code outside the class.
Read-only auto-properties enable true read-only behavior. You declare the auto-property with only a get accessor:
```csharp
public string Name { get; };
public decimal Salary { get; };
```
The `Name` and `Salary` properties can be set only in the body of a constructor:
```csharp
public Employee(string name, double salary)
{
if (IsNullOrWhiteSpace(name))
throw new ArgumentException(message: "Cannot be blank", paramName: nameof(name));
Name = name;
Salary = salary;
}
```
Trying to set `Name` in another method generates a `CS0200` compilation error:
```csharp
public class Employee
{
public string Name { get; };
public void ChangeName(string newName)
{
// Generates CS 0200: Property or indexer cannot be assigned to -- it is read only
Name = newName;
}
}
```
This feature enables true language support for creating immutable types and using the more concise and convenient auto-property syntax.
## Await in Catch and Finally blocks
C# 5 not allowed to use the `await` keyword in `catch` and `finally` blocks, Now in any async method, you can use an await expression in a finally clause, you can also await in catch expressions. This is most often helpful while logging errors:
```csharp
public async static Task GetStockQuotes()
{
await logMethodEntrance();
HttpClient client = new HttpClient();
try
{
var result = await client.GetStringAsync("http://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote?format=json");
WriteLine(result);
}
catch (Exception exception)
{
/* If the first request throws an exception, this request will be executed. Both are asynchronous requests*/
var result = await client.GetStringAsync("http://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote");
WriteLine(result);
}
finally
{
await logMethodExit();
client.Dispose();
}
}
```
The implementation details for adding `await` support inside `catch` and `finally` clauses ensures that the behavior is consistent with the behavior for synchronous code. When code executed in a `catch` or `finally` clause throws, execution looks for a suitable `catch` clause in the next surrounding block. If there was a current exception, that exception is lost. The same happens with awaited expressions in `catch` and `finally` clauses: a suitable `catch` is searched for, and the current exception, if any, is lost.
## Exception Filters
Exception Filters are clauses that determine when a given catch clause should be applied. If the expression used for an exception filter evaluates to `true`, the catch clause performs its normal processing on an exception. If the expression evaluates to `false`, then the `catch` clause is skipped.
Exception filters have been supported in Visual Basic, but are new to the C# compiler. They allow you to specify a condition for a catch block. We typically used the following in C# 5:
```csharp
class Program
{
static void Main(string[] args)
{
var httpStatusCode = 404;
Write("HTTP Error: ");
try
{
throw new Exception(httpStatusCode.ToString());
}
catch (Exception ex)
{
if (ex.Message.Equals("500"))
Write("Bad Request");
else if (ex.Message.Equals("401"))
Write("Unauthorized");
else if (ex.Message.Equals("402"))
Write("Payment Required");
else if (ex.Message.Equals("403"))
Write("Forbidden");
else if (ex.Message.Equals("404"))
Write("Not Found");
}
ReadLine();
}
}
```
Rather than entering the catch block and checking to see which condition met our exception, we can now decide if we even want to enter the specific catch block.
```csharp
class Program
{
static void Main(string[] args)
{
var httpStatusCode = 404;
Write("HTTP Error: ");
try
{
throw new Exception(httpStatusCode.ToString());
}
catch (Exception ex) when (ex.Message.Equals("400"))
{
Write("Bad Request");
ReadLine();
}
catch (Exception ex) when (ex.Message.Equals("401"))
{
Write("Unauthorized");
ReadLine();
}
catch (Exception ex) when (ex.Message.Equals("402"))
{
Write("Payment Required");
ReadLine();
}
catch (Exception ex) when (ex.Message.Equals("403"))
{
Write("Forbidden");
ReadLine();
}
catch (Exception ex) when (ex.Message.Equals("404"))
{
Write("Not Found");
ReadLine();
}
ReadLine();
}
}
```
The code generated by exception filters provides better information about an exception that is thrown and not processed.
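A well-known idiom that builds on this (not covered above, but worth noting) is a filter used purely for its side effect: because the filter runs before the stack is unwound, a logging method that always returns `false` can observe every exception without ever handling it:

```csharp
using System;

class Program
{
    static bool LogException(Exception ex)
    {
        Console.WriteLine($"Logged: {ex.Message}");
        return false; // never true, so the filtered catch block is never entered
    }

    static void Main(string[] args)
    {
        try
        {
            try
            {
                throw new Exception("404");
            }
            catch (Exception ex) when (LogException(ex))
            {
                // Unreachable: the filter always returns false
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Handled: {ex.Message}");
        }
    }
}
```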
## Expression-bodied function members
You can now write functions and computed properties like lambda expressions, saving the unnecessary ceremony of defining a function or property statement block. Just use the lambda operator `=>` and then write the code that goes into your function or property body. Here is a simple example:
```csharp
public static double MultiplyNumbers(double num1, double num2) => num1 * num2;
```
You can use expression-bodied members in read only properties as well:
```csharp
public string FullName => $"{FirstName} {LastName}";
```
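The body of an expression-bodied member can be any single expression, including an interpolated string; for instance, an illustrative `ToString` override (a hypothetical class, not from the original post):

```csharp
public class Employee
{
    public string FirstName { get; } = "Douglas";
    public string LastName { get; } = "Bader";

    // Expression-bodied computed property
    public string FullName => $"{FirstName} {LastName}";

    // Expression-bodied method: overrides work too
    public override string ToString() => $"Employee: {FullName}";
}
```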
## Improvements in Overload Resolution Methods
This feature is easy to miss. There were constructs where previous versions of the C# compiler found some method calls involving method groups or lambda expressions ambiguous.
```csharp
class Program
{
private static void DoProcessing(Action item)
{
}
private static int DoProcessing(Func<int> item)
{
return item();
}
public static int FunCall()
{
return 50;
}
static void Main(string[] args)
{
int resultitem = DoProcessing(FunCall);
Console.WriteLine(resultitem);
Console.ReadKey();
}
}
```
In earlier versions, this code fails to compile. The compiler issues an error: “The call is ambiguous between the following methods or properties: `DoProcessing(System.Action)` and `DoProcessing(System.Func<int>)`”. There is also an error that `FunCall` has the wrong return type, and that `void` cannot be implicitly converted to `int`. In C# 6, however, this compiles without issue: the improved overload resolution recognizes that `FunCall` matches `Func<int>` and chooses that overload.
## Index Initializers
Index initializers make it possible to create and initialize objects with indexes at the same time.
This feature makes working with Dictionary initialization very easy:
```csharp
var dict = new Dictionary<string, string>()
{
["firstName"] = "Douglas",
["lastName"] = "Bader"
};
```
Any object that has an indexed getter or setter can be used with this syntax:
```csharp
class Program
{
public class ClassWithIndexer
{
        public string this[string index]
{
set
{
Console.WriteLine($"Index: {index}, value: {value}");
}
}
}
public static void Main()
{
var x = new ClassWithIndexer()
{
["firstName"] = "Douglas",
["lastName"] = "Bader"
};
Console.ReadKey();
}
}
```
If the class has multiple indexers, it is possible to assign through all of them in a single initializer:
```csharp
class Program
{
public class ClassWithIndexer
{
        public string this[string index]
{
set
{
Console.WriteLine($"Index: {index}, value: {value}");
}
}
public string this[int index]
{
set
{
Console.WriteLine($"Index: {index}, value: {value}");
}
}
}
public static void Main()
{
var x = new ClassWithIndexer()
{
["firstName"] = "Douglas",
            ["lastName"] = "Bader",
            ["role"] = "Fighter Pilot",    // string indexer
            [1] = "Ace"                    // int indexer
};
}
}
```
One important thing you should notice here is that the set accessor might behave differently than an [Add method used in collection initializers](#extension-add-methods-in-collection-initializers), as the examples below show.
Here, setting a value for the same key twice doesn't throw an exception; the first value is simply overwritten by the second:
```csharp
var names = new Dictionary<string, string>
{
["firstName"] = "Douglas",
["firstName"] = "Bader", // does not throw, second value overwrites the first one
};
```
Doing the same thing with the `Add`-based collection-initializer syntax, as in the example below, throws a run-time exception:
```csharp
var names = new Dictionary<string, string>
{
{ "firstName", "Douglas" },
{ "firstName", "Bader" }, // run-time ArgumentException: An item with the same key has already been added.
};
```
The new syntax ensures that associative containers can be initialized the same way sequential containers have been for many versions.
### Extension `Add` methods in collection initializers
When you use a collection initializer to create a collection, the compiler searches for an `Add` method on the collection type whose parameters match the types of the values in the collection initializer. This `Add` method is used to populate the collection with the values from the collection initializer.
If no matching `Add` method exists and you cannot modify the code for the collection, you can add an extension method called `Add` that takes the parameters the collection initializer requires. This is typically what you need to do when you use collection initializers for generic collections.
The following example shows how to add an extension method to the generic `List<T>` type so that a collection initializer can be used to add objects of type `Fighter`. The extension method enables you to use the shortened collection initializer syntax.
```csharp
public class Fighter
{
public int Id { get; set; }
public string Name { get; set; }
}
```
```csharp
public static class FighterListExtensions
{
    public static void Add(this List<Fighter> list, int id, string name)
    {
        list.Add(new Fighter {
            Id = id,
            Name = name
        });
    }
}
```
```csharp
public static void Main()
{
    var fighters = new List<Fighter> {
{
1,
"Erich, Hartmann"
},
{
2,
"Manfred, Richthofen"
},
{
3,
"Douglas, Bader"
}
};
}
```
## `nameof` Expressions
The `nameof` operator allows you to get the name of a code element, such as a variable, type, or member, as a string without hard-coding it as a literal. The operation is evaluated at compile time, which means you can rename a referenced identifier using an IDE's rename feature and the name string will update with it. This is also useful when throwing exceptions related to method arguments and when implementing `INotifyPropertyChanged`.
The `nameof` expression evaluates to the name of a symbol. It's very handy whenever you need the name of a variable, a property, or a member field.
One of the most common uses for `nameof` is to provide the name of a symbol that caused an exception:
```csharp
public void SayHello(string greeted)
{
if (greeted == null)
throw new ArgumentNullException(nameof(greeted));
Console.WriteLine("Hello, " + greeted);
}
```
The `nameof` operator is evaluated at compile time and changes the expression into a string literal. This is also useful for constant strings that are named after the member that exposes them, e.g.:
```csharp
public static class Strings
{
public const string Foo = nameof(Foo); // Rather than Foo = "Foo"
public const string Bar = nameof(Bar); // Rather than Bar = "Bar"
}
```
Since `nameof` expressions are compile-time constants, they can be used in attributes, case labels, `switch` statements, and so on.
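For example, a `nameof` result can serve as a `case` label. A small sketch (the `Person` type here is illustrative):

```csharp
using System;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public object GetProperty(string propertyName)
    {
        // nameof(...) is a constant string, so it is legal as a case label.
        switch (propertyName)
        {
            case nameof(FirstName):
                return FirstName;
            case nameof(LastName):
                return LastName;
            default:
                throw new ArgumentException("Unknown property", nameof(propertyName));
        }
    }
}
```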
`nameof` can also be used with enum members very conveniently:
```csharp
// So instead of writing
Console.WriteLine(Enum.One.ToString());
//you can write
Console.WriteLine(nameof(Enum.One));
```
The nameof operator can access non-static members using static-like syntax.
```csharp
// So instead of doing
string foo = "Foo";
string lengthName = nameof(foo.Length);
//you can write
string lengthName = nameof(string.Length);
```
The output will be `Length` in both examples. However, the latter prevents the creation of unnecessary instances.
Although the nameof operator works with most language constructs, there are some limitations. For example, you cannot use the nameof operator on open generic types or method return values:
```csharp
public static void Main()
{
Console.WriteLine(nameof(List<>)); // Compile-time error
Console.WriteLine(nameof(Main())); // Compile-time error
}
```
Furthermore, if you apply it to a generic type, the generic type parameter will be ignored:
```csharp
Console.WriteLine(nameof(List<int>)); // "List"
Console.WriteLine(nameof(List<bool>)); // "List"
```
The advantage of using the `nameof` operator over a constant string is that tools can understand the symbol. If you use refactoring tools to
rename the symbol, it will rename it in the `nameof` expression. Constant strings don't have that advantage. Try it yourself in your favorite editor:
rename a variable, and any `nameof` expressions will update as well.
The `nameof` expression produces the unqualified name of its argument (`FirstName` in the following example) even if you use the fully qualified name for the argument:
```csharp
public string FirstName
{
get { return firstName; }
set
{
if (value != firstName)
{
firstName = value;
PropertyChanged?.Invoke(this,
new PropertyChangedEventArgs(nameof(UXComponents.ViewModel.FirstName)));
}
}
}
private string firstName;
```
This `nameof` expression produces `FirstName`, not `UXComponents.ViewModel.FirstName`.
## Null-conditional operators
The `NullReferenceException` is the most common type of exception in C#, and avoiding it traditionally requires a couple of lines of null checks that look ugly and complicate code. You need to check every access of a variable to ensure you are not dereferencing `null`, or you will be greeted with a runtime exception. The *null-conditional operator* makes those checks much easier and more fluid.
You only need to replace the member access `.` with `?.`:
So instead of writing an explicit null check:
```csharp
var name = employee != null ? employee.Name : null;
```
using the null-conditional operator, you can write:
```csharp
var name = employee?.Name;
```
In the above example, the variable `name` is assigned `null` if the employee object is `null`. Otherwise, it gets assigned the value of the `Name` property. Most importantly, the `?.` means that this line of code does not generate a `NullReferenceException` when the `employee` variable is `null`. Instead, it short-circuits and produces `null`.
Also, note that this expression returns a `string`, regardless of the value of `employee`. In the case of short circuiting, the `null` value returned is typed to match the full expression.
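For a value-type member this typing rule means the result is lifted to a nullable type. A short sketch (the `Age` property is added here purely for illustration):

```csharp
using System;

class Employee
{
    public string Name { get; set; }
    public int Age { get; set; }    // hypothetical value-type member
}

class Program
{
    static void Main()
    {
        Employee employee = null;

        string name = employee?.Name; // string: reference types stay as-is
        int? age = employee?.Age;     // int? : value types are lifted
        // int bad = employee?.Age;   // compile-time error: int? is not int

        Console.WriteLine(name == null); // True
        Console.WriteLine(age.HasValue); // False
    }
}
```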
You can often use this construct with the *null coalescing* operator to assign a default value when one of the properties is `null`:
```csharp
name = employee?.Name ?? "Unspecified";
```
The right hand side operand of the `?.` operator is not limited to properties or fields. You can also use it to conditionally invoke methods. The most common use of member functions with the null conditional operator is to safely invoke delegates (or event handlers) that may be `null`. You'll do this by calling the delegate's `Invoke` method using the `?.` operator to access the member.
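A sketch of that pattern with a hypothetical `Downloader` class: before C# 6 you had to copy the delegate to a local variable and null-check it; `?.` collapses the copy-and-check into a single, thread-safe expression.

```csharp
using System;

class Downloader
{
    public event EventHandler<int> ProgressChanged;

    public void ReportProgress(int percent)
    {
        // Old pattern:
        //   var handler = ProgressChanged;
        //   if (handler != null) handler(this, percent);
        // New pattern: one read of the field, Invoke only if non-null.
        ProgressChanged?.Invoke(this, percent);
    }
}

class Program
{
    static void Main()
    {
        var d = new Downloader();
        d.ReportProgress(10); // no subscribers: nothing happens, no exception
        d.ProgressChanged += (sender, percent) =>
            Console.WriteLine("Progress: " + percent + "%");
        d.ReportProgress(50); // prints "Progress: 50%"
    }
}
```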
## String Interpolation
String Interpolation is the process of evaluating a string literal containing one or more placeholders, yielding a result in which the placeholders are replaced with their corresponding values. C# 6 contains new syntax for composing strings from a format string and expressions that can be evaluated to produce other string values.
Traditionally, you needed to use positional parameters in a method
like `string.Format`:
```csharp
public string Name
{
get
{
return string.Format("{0} {1}", FirstName, LastName);
}
}
```
With C# 6, the new string interpolation feature enables you to embed the expressions in the format string. Simply prefix the string with `$`:
```csharp
public string FullName => $"{FirstName} {LastName}";
```
This initial example used variable expressions for the substituted expressions. You can expand on this syntax to use any expression. For
example, you could compute an employee's grade point average as part of the interpolation:
```csharp
public string GetFormattedGradePoint() =>
$"Name: {LastName}, {FirstName}. G.P.A: {paygrades.Average()}";
```
Running the preceding example, you would find that the output for `paygrades.Average()` might have more decimal places than you would like. The string interpolation syntax supports all the format strings available using earlier formatting methods. You add the format strings inside the braces. Add a `:` following the expression to format:
```csharp
public string GetGradePointPercentage() =>
$"Name: {LastName}, {FirstName}. G.P.A: {paygrades.Average():F2}";
```
The preceding line of code will format the value for `paygrades.Average()` as a floating-point number with two decimal places.
The `:` is always interpreted as the separator between the expression being formatted and the format string. This can introduce problems when
your expression uses a `:` in another way, such as a conditional operator:
```csharp
public string GetGradePointPercentages() =>
$"Name: {LastName}, {FirstName}. G.P.A: {paygrades.Any() ? paygrades.Average() : double.NaN:F2}";
```
In the preceding example, the `:` is parsed as the beginning of the format string, not part
of the conditional operator. In all cases where this happens, you can
surround the expression with parentheses to force the compiler to interpret
the expression as you intend:
```csharp
public string GetGradePointPercentages() =>
$"Name: {LastName}, {FirstName}. G.P.A: {(paygrades.Any() ? paygrades.Average() : double.NaN):F2}";
```
There aren't any limitations on the expressions you can place between the braces. You can execute a complex LINQ query inside an interpolated
string to perform computations and display the result:
```csharp
public string GetAllpaygrades() =>
$@"All paygrades: {paygrades.OrderByDescending(g => g)
.Select(s => s.ToString("F2")).Aggregate((partial, element) => $"{partial}, {element}")}";
```
You can see from this sample that you can even nest a string interpolation expression inside another string interpolation expression. This example
is very likely more complex than you would want in production code. Rather, it is illustrative of the breadth of the feature. Any C# expression
can be placed between the curly braces of an interpolated string.
## using static
In C# 6, we can import the static members of a class into scope. This means we can use those static members directly, with no need to qualify them with their type name. The format is `using static` followed by the type whose static members you wish to import.
We often make abundant use of static methods in our code, e.g. utility methods. Calling a static method ordinarily requires its class name to precede it, and repeatedly typing the class name can obscure the meaning of your code. A common example is when you write classes that perform many numeric calculations.
Your code will be littered with `System.Math.Sin`, `System.Math.Sqrt` and other calls to different methods in the `System.Math` class. The new `using static` syntax can make these classes much cleaner to read. You specify the class you're using:
```csharp
using static System.Math;
```
And now, you can use any static method in the `System.Math` class without qualifying it. The `System.Math` class is a great use case for this feature because it does not contain any instance methods. You can also use `using static` to import the static methods of a class that has both static and instance methods. One of the most useful examples is `System.String`:
```csharp
using static System.String;
```
> You must use the fully qualified class name, `System.String`, in a `using static` directive. You cannot use the `string` keyword instead.
You can now call static methods defined in the `System.String` class without qualifying those methods as members of that class:
```csharp
if (IsNullOrWhiteSpace(lastName))
throw new ArgumentException(message: "Cannot be blank", paramName: nameof(lastName));
```
The `using static` feature and extension methods interact in interesting ways, and the language design included some rules that specifically address those interactions. The goal is to minimize any chance of breaking changes in existing codebases, including yours.
Extension methods are only in scope when called using the extension method invocation syntax, not when called as a static method. You'll often see this in LINQ queries. You can import the LINQ pattern by importing `System.Linq.Enumerable`.
```csharp
using static System.Linq.Enumerable;
```
This imports all the methods in the `System.Linq.Enumerable` class. However, the extension methods are only in scope when called as extension methods. They are not in scope if they are called using the static method syntax:
```csharp
public bool MakesDeansList()
{
return paygrades.All(g => g > 3.5) && paygrades.Any();
// Code below generates CS0103:
// The name 'All' does not exist in the current context.
//return All(paygrades, g => g > 3.5) && paygrades.Any();
}
```
This decision was made because extension methods are typically called using extension method invocation expressions. In the rare cases where they are called using static method call syntax, it is usually to resolve ambiguity, so requiring the class name as part of the invocation seems wise.
Apart from these, the `using static` directive also imports any nested types, enabling you to reference them without qualification.
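A small sketch (the `MyLib.Outer` names here are illustrative):

```csharp
using static MyLib.Outer;

namespace MyLib
{
    public static class Outer
    {
        public class Nested
        {
            public int Value { get; set; }
        }
    }
}

class Program
{
    static void Main()
    {
        // Nested is in scope without the MyLib.Outer. prefix:
        var n = new Nested { Value = 42 };
        System.Console.WriteLine(n.Value); // 42
    }
}
```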
---
title: "Oh, the places carbon goes!"
author: "Genelle Watkins"
layout: post
level2: Biotic controls
level3: soil carbon
level1: carbon sequestration
tags:
- soil
- permafrost
- sequestration
- carbon cycle
category: climate in plants and soil
figures: /img/2bii/
---
# The Carbon Cycle
Carbon is all around us, and inside of us! It is in the trees, the air we breathe, the oceans we swim in on a warm day, and within the ground beneath our feet. Like gravity, you cannot see the carbon molecules themselves, but they are always present. So, how does it work? Many people believe carbon is only related to the CO~2~ molecules retrieved by plants; however, the cycle goes much deeper, and the fate of carbon touches absolutely every part of our surroundings.

*Figure 1. The figure above shows the fate of carbon utilized in a variety of systems, above and belowground.*
# Pools of Carbon!
Not swimming pools, but distinct pools of carbon that come from different sources. These sources can include oil wells, coal seams, and aquifers. To combat the excess carbon in the atmosphere, natural resources such as forests or oceans can act as sinks, sequestering carbon and storing it in the earth's soil. Unfortunately, as sources of carbon emission have continually increased, natural sinks have declined over the past few decades. Before industrialization, atmospheric carbon dioxide stood at about 280 ppm; it has since reached 381 ppm. In the 1980s, carbon sinks had a storage capacity of about 56.3%, and they currently account for less than 54%.
Lai, R. (2004). [DOI: 10.1038/nature14338](https://www.nature.com/articles/nature14338)
Scharlemann, J. P. (2014). [DOI:10.4155/cmt.13.77](https://doi.org/10.4155/cmt.13.77)

*Figure 2. A simplification of the carbon cycle and how burning fossil fuels is upsetting the balance of this natural cycle.*

*Figure 3. The dynamic equilibrium of activities and processes that all contribute to losses and gains of soil carbon to determine the carbon balance in an ecosystem.*
There is an abundance of processes that affect carbon balance, and it is important to remember that soil organic carbon pools are only one small piece of the puzzle. This figure puts into perspective the activities and processes we should always keep in mind when thinking about carbon in plants and soil.
# Carbon Processes in different Plant Types (C~3~, C~4~, and CAM)
Plant species are unique in the photosynthetic pathways they use to create energy and store carbon. The C~3~, C~4~, and CAM pathways highlight the diversity not only of the plants themselves, but also of the biomes, moisture regimes, and times of day in which they operate. Ultimately, these different plant types can have a significant influence on litter quality and soil carbon content.
Castellano, M. J. (2015). [DOI: 10.1111/gcb.12982](https://lib.dr.iastate.edu/cgi/viewcontent.cgi?referer=https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Integrating+plant+litter+quality%2C+soil+organic+matter+stabilization%2C+and+the+carbon+saturation+concept.%C2%A0Global+change+biology%2C%C2%A021%289%29%2C+3200-3209.&btnG=&httpsredir=1&article=1099&context=agron_pubs)
![C3 and C4 pathways]({{site.baseurl}}{{page.figures}}C3_C4_Pathways.jpg)
*Figure 4. An overview on how C~3~ and C~4~ plants take in carbon. The majority of plants fall under the C~3~ pathway, but to learn more, read [Organic Geochemistry Journal Club--Papers and Cake](https://papersandcake.wordpress.com/2015/10/27/the-leaf-wax-composition-and-stable-carbon-isotope-values-of-conifers-should-we-care/), to learn more about advantages/disadvantages to each pathway.*
![CAM pathway]({{site.baseurl}}{{page.figures}}CAM_Pathway.jpg)
*Figure 5. CAM plants are for more xerophytic plant species, and have similar steps compared to the C~4~ plants. Learn more [here](http://lifeofplant.blogspot.com/2011/10/c4-and-cam-photosynthesis.html) to dive into the details!*
![Healy, Alaska]({{site.baseurl}}{{page.figures}}Healy_Alaska.jpg)
*Figure 6. Carbon is not just above ground collected by plants, but stored in great abundance in the soils belowground. Places like Healy, Alaska (_picture above_), offers a glimpse at how even colder places store the greatest amount of carbon! Check out the [blog](https://www.polartrec.com/expeditions/carbon-balance-in-warming-and-drying-tundra/journals/2011-05-03) that tracks data up there.*
Overall, it is important to realize that carbon in plants is more than just these complex processes. We see these plants every day, whether in agricultural settings (rice, wheat, barley, sugarcane) or in desert landscapes with plants such as cacti and other succulents.

*Figure 7. Examples of plants from each of the carbon pathways (C~3~=wheat, CAM=succulents, C~4~=maize).*
# Carbon Allocation
Carbon allocation differs above and below ground. Conventionally, carbon storage is measured in aboveground biomass such as vegetation (i.e. grasses, forests, other vegetation). However, belowground allocation is significant because soil depth is highly variable, and temperature and moisture have a noticeable impact on how much carbon is stored within soils (Litton et al., 2008). The figure below from Giardina et al.'s (2014) paper illustrates the processes of belowground carbon.

Giardina, P.C. (2014) [DOI](https://www.nature.com/articles/nclimate2322)
# Reducing the loss of Soil Carbon
Atmospheric carbon dioxide has risen significantly over the past several decades. [NOAA's](https://www.co2.earth/weekly-co2) Earth System Research Laboratory (ESRL) tracks weekly CO~2~ concentrations at the Mauna Loa Observatory on Hawaii Island; currently, atmospheric carbon dioxide sits at approximately 408.21 ppm (parts per million). To offset the increases of recent years, some researchers suggest that sequestering excess atmospheric carbon is as simple as growing more plants with proper management. The video below offers insight into how that may be possible:

*Soil Carbon: Putting Carbon Back Where it Belongs- In the Earth*
*Video*: Good news! Plants can quite literally change the face of the earth. By growing more plants, we can capture more carbon dioxide and water, and gain more production, biodiversity, and profit. In fact, a 1% change in soil organic matter across just one quarter of the world's land area could sequester 300 billion tons of physical CO~2~. Check out Tony Lovell's TED Talk [here](https://www.youtube.com/watch?v=wgmssrVInP0).
**Table 1.**The table below offers simple solutions to increase carbon inputs and reduce carbon losses within agriculture, one of the dominating landscapes responsible for carbon release.

# The Sleeping Giants
Most of the Arctic's sequestered carbon is located in thaw-vulnerable topsoils, within 3 m of the surface. Watch NASA's Jet Propulsion Laboratory (JPL) discuss the emissions of carbon from Arctic permafrost, often referred to as _The Sleeping Giant_:

Because of global climate change, permafrost will inevitably keep thawing, leading to dramatic shifts in soil and atmospheric carbon quantities. These large carbon pools must be monitored closely, and maps like the one below can help assess the amount of carbon stored in places like Yedoma, Alaska (_pictured below_). Quantifying those real-time changes will support strategies to mitigate and monitor the projected outcomes of thawing permafrost.


Schuur, E. A. G.(2015).[DOI: 10.1038/nature14338](https://pdfs.semanticscholar.org/7e5b/b2ada771107313c7a56cdaed9c60e24a2fed.pdf)
# Peatlands
Peatlands are built from thick organic soil layers, the product of increased plant production and decreased litter decomposition, and are usually found in boreal and tropical forests. Peatlands are an exceptional carbon storage source, but they are in jeopardy of being burned down, converted to other land uses, and, more alarmingly, of releasing millennia-old carbon back into the atmosphere. Climate change mediated through anthropogenic activity contributes to peatland emissions at an accelerated rate; however, there is potential to mitigate this if appropriate policies are put in place to help reduce our overall impact and offset our carbon footprint. To learn more about these sleeping giants, visit Casey's blog post.
Scharlemann, J. P. (2014).
# Learn more and read other blogs on Soil Carbon and Permafrost:
- <https://permaculturenews.org/2015/10/13/how-soil-and-carbon-are-related/>
- <http://blog.our-sci.net/2017/11/22/open-strategy-build-soil-carbon-part-1/>
- <https://www.nature.com/scitable/knowledge/library/soil-carbon-storage-84223790>
---
title: Get intent, Python
titleSuffix: Language Understanding - Azure Cognitive Services
description: In this quickstart, you pass an utterance to the LUIS endpoint and get the intent and entities back.
services: cognitive-services
author: lingliw
manager: digimobile
ms.custom: seodec18
ms.service: cognitive-services
ms.subservice: language-understanding
ms.topic: quickstart
ms.date: 04/19/19
ms.author: v-lingwu
ms.openlocfilehash: ff9f94ab190629c6b6468f7b23ab1331e819c67b
ms.sourcegitcommit: e77582e79df32272e64c6765fdb3613241671c20
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/14/2019
ms.locfileid: "67135993"
---
# <a name="quickstart-get-intent-using-python"></a>Quickstart: Get intent using Python

In this quickstart, you pass an utterance to the LUIS endpoint and get the intent and entities back.
[!INCLUDE [Quickstart introduction for endpoint](../../../includes/cognitive-services-luis-qs-endpoint-intro-para.md)]
## <a name="prerequisites"></a>Prerequisites
* [Python 3.6](https://www.python.org/downloads/) or later.
* [Visual Studio Code](https://code.visualstudio.com/)
[!INCLUDE [Use authoring key for endpoint](../../../includes/cognitive-services-luis-qs-endpoint-luis-repo-note.md)]
## <a name="get-luis-key"></a>Get a LUIS key
[!INCLUDE [Use authoring key for endpoint](../../../includes/cognitive-services-luis-qs-endpoint-get-key-para.md)]
## <a name="get-intent-with-browser"></a>Get intent with a browser
[!INCLUDE [Use authoring key for endpoint](../../../includes/cognitive-services-luis-qs-endpoint-browser-para.md)]
## <a name="get-intent--programmatically"></a>Get intent programmatically

You can use Python to get the same results that the browser window displayed in the previous step.

1. Copy one of the following code snippets into a file named `quickstart-call-endpoint.py`:
```python
########### Python 2.7 #############
import httplib, urllib, base64
headers = {
# Request headers includes endpoint key
# You can use the authoring key instead of the endpoint key.
# The authoring key allows 1000 endpoint queries a month.
'Ocp-Apim-Subscription-Key': 'YOUR-KEY',
}
params = urllib.urlencode({
# Text to analyze
'q': 'turn on the left light',
# Optional request parameters, set to default values
'verbose': 'false',
})
# HTTP Request
try:
# LUIS endpoint HOST for chinaeast region
conn = httplib.HTTPSConnection('chinaeast.api.cognitive.microsoft.com')
# LUIS endpoint path
# includes public app ID
conn.request("GET", "/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2?%s" % params, "{body}", headers)
response = conn.getresponse()
data = response.read()
# print HTTP response to screen
print(data)
conn.close()
except Exception as e:
print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################
```
```python
########### Python 3.6 #############
import requests
headers = {
# Request headers
'Ocp-Apim-Subscription-Key': 'YOUR-KEY',
}
params ={
# Query parameter
'q': 'turn on the left light',
# Optional request parameters, set to default values
'timezoneOffset': '0',
'verbose': 'false',
'spellCheck': 'false',
'staging': 'false',
}
try:
    r = requests.get('https://chinaeast2.api.cognitive.azure.cn/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2', headers=headers, params=params)
print(r.json())
except Exception as e:
print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################
```
2. Replace the value of the `Ocp-Apim-Subscription-Key` field with your LUIS endpoint key.
3. Install the dependency with `pip install requests`.
4. Run the script with `python ./quickstart-call-endpoint.py`. It displays the same JSON that you saw earlier in the browser window.
## <a name="luis-keys"></a>LUIS keys
[!INCLUDE [Use authoring key for endpoint](../../../includes/cognitive-services-luis-qs-endpoint-key-usage-para.md)]
## <a name="clean-up-resources"></a>Clean up resources

Delete the Python file.
## <a name="next-steps"></a>Next steps
> [!div class="nextstepaction"]
> [Add utterances](luis-get-started-python-add-utterance.md)
![CRI-O logo](https://cri-o.io/assets/images/logo%20with%20url.png)
# CRI-O Installation Instructions
This guide will walk you through the installation of [CRI-O](https://github.com/cri-o/cri-o), an Open Container Initiative-based implementation of the [Kubernetes Container Runtime Interface](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/container-runtime-interface-v1.md).
It is assumed you are running a Linux machine.
**Table of Contents**:
- [Install packaged versions of CRI-O](#install-packaged-versions-of-cri-o)
* [Supported versions](#supported-versions)
* [Installation Instructions](#installation-instructions)
+ [openSUSE](#openSUSE)
+ [Fedora 31 or later](#fedora-31-or-later)
+ [Other yum based operating systems](#other-yum-based-operating-systems)
+ [APT based operating systems](#apt-based-operating-systems)
- [Build and install CRI-O from source](#build-and-install-cri-o-from-source)
* [Runtime dependencies](#runtime-dependencies)
* [Build and Run Dependencies](#build-and-run-dependencies)
+ [Fedora - RHEL 7 - CentOS](#fedora---rhel-7---centos)
+ [RHEL 8](#rhel-8)
+ [Debian - Raspbian - Ubuntu](#debian---raspbian---ubuntu)
* [Get Source Code](#get-source-code)
* [Build](#build)
+ [Install with Ansible](#install-with-ansible)
+ [Build Tags](#build-tags)
* [Static builds](#static-builds)
+ [Creating a release archive](#creating-a-release-archive)
* [Download conmon](#download-conmon)
* [Setup CNI networking](#setup-cni-networking)
* [CRI-O configuration](#cri-o-configuration)
+ [Validate registries in registries.conf](#validate-registries-in-registriesconf)
+ [Recommended - Use systemd cgroups.](#recommended---use-systemd-cgroups)
+ [Optional - Modify verbosity of logs](#optional---modify-verbosity-of-logs)
+ [Optional - Modify capabilities and sysctls](#optional---modify-capabilities-and-sysctls)
* [Starting CRI-O](#starting-cri-o)
* [Using CRI-O](#using-cri-o)
## Install packaged versions of CRI-O
CRI-O builds for native package managers using [openSUSE's OBS](https://build.opensuse.org).
### Supported versions
Below is a compatibility matrix between versions of CRI-O (y-axis) and distributions (x-axis):
| | Fedora 31+ | openSUSE | CentOS_8 | CentOS_8_Stream | CentOS_7 | Debian_Unstable | Debian_Testing | Debian 10 | Raspbian_10 | xUbuntu_20.04 | xUbuntu_19.10 | xUbuntu_19.04 | xUbuntu_18.04 |
| ---- | ---------- | -------- | -------- | --------------- | -------- | --------------- | -------------- | --------- | ---------- | ------------- | ------------- | ------------- | ------------- |
| 1.18 | ✓ | ✓ | ✓ | ✓ | | ✓ | ✓ | | | ✓ | | | |
| 1.17 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 1.16 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
To install, choose a supported version for your operating system, and export it as a variable, like so:
`export VERSION=1.18`
We also save releases as subprojects. If, for instance, you'd like to use `1.18.3`, you can set
`export VERSION=1.18:1.18.3`
### Installation Instructions
#### openSUSE
```shell
sudo zypper install cri-o
```
#### Fedora 31 or later
```shell
sudo dnf module enable cri-o:$VERSION
sudo dnf install cri-o
```
For Fedora, we only support setting minor versions, e.g. `VERSION=1.18`; pinning patch versions (e.g. `VERSION=1.18.3`) is not supported.
#### Other yum based operating systems
To install on the following operating systems, set the environment variable $OS as the appropriate field in the following table:
| Operating system | $OS |
| ---------------- | ----------------- |
| Centos 8 | `CentOS_8` |
| Centos 8 Stream | `CentOS_8_Stream` |
| Centos 7 | `CentOS_7` |
And then run the following as root:
```shell
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
yum install cri-o
```
#### APT based operating systems
Note: this tutorial assumes you have `curl` and `gnupg` installed.
To install on the following operating systems, set the environment variable $OS as the appropriate field in the following table:
| Operating system | $OS |
| ---------------- | ----------------- |
| Debian Unstable | `Debian_Unstable` |
| Debian Testing | `Debian_Testing` |
| Raspberry Pi OS | `Raspbian_10` |
| Ubuntu 20.04 | `xUbuntu_20.04` |
| Ubuntu 19.10 | `xUbuntu_19.10` |
| Ubuntu 19.04 | `xUbuntu_19.04` |
| Ubuntu 18.04 | `xUbuntu_18.04` |
And then run the following as root:
```shell
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
apt-get update
apt-get install cri-o cri-o-runc
```
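When scripting these steps, the `$OS` value can also be derived from `/etc/os-release`. The helper below is a sketch that covers only the distributions listed in the table above (`os_for_kubic` is a made-up name, not part of CRI-O); verify its output before using it:

```shell
# Map an os-release ID/VERSION_ID pair to the Kubic $OS repository name.
# Covers only the APT-based distributions listed in the table above.
os_for_kubic() { # usage: os_for_kubic ID VERSION_ID
    case "$1:$2" in
        debian:10)    echo "Debian_10" ;;
        raspbian:10)  echo "Raspbian_10" ;;
        ubuntu:20.04) echo "xUbuntu_20.04" ;;
        ubuntu:19.10) echo "xUbuntu_19.10" ;;
        ubuntu:19.04) echo "xUbuntu_19.04" ;;
        ubuntu:18.04) echo "xUbuntu_18.04" ;;
        *)            return 1 ;;   # unknown distribution: fail loudly
    esac
}

# Example (on a real host):
#   . /etc/os-release && export OS="$(os_for_kubic "$ID" "$VERSION_ID")"
```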
Note: We include cri-o-runc because Ubuntu and Debian include their own packaged version of runc.
While this version should work with CRI-O, keeping the packaged versions of CRI-O and runc in sync ensures they work together.
If you'd like to use the distribution's runc, you'll have to add the file:
```toml
[crio.runtime.runtimes.runc]
runtime_path = ""
runtime_type = "oci"
runtime_root = "/run/runc"
```
to `/etc/crio/crio.conf.d/`
## Build and install CRI-O from source
### Runtime dependencies
- runc, Clear Containers runtime, or any other OCI-compatible runtime
- iproute
- iptables
The latest version of `runc` is expected to be installed on the system; it is picked up as the default runtime by CRI-O.
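To sanity-check that these dependencies are present before building, a small helper like the following can be used (a sketch — `check_deps` is an illustrative name, not part of CRI-O):

```shell
# Report any missing binaries; returns non-zero if at least one is absent.
check_deps() {
    missing=0
    for bin in "$@"; do
        if ! command -v "$bin" >/dev/null 2>&1; then
            echo "missing: $bin" >&2
            missing=$((missing + 1))
        fi
    done
    return "$missing"
}

check_deps runc ip iptables || echo "install the missing packages before running CRI-O" >&2
```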
### Build and Run Dependencies
#### Fedora - RHEL 7 - CentOS
**Required**
Fedora, RHEL 7, CentOS and related distributions:
```bash
yum install -y \
containers-common \
device-mapper-devel \
git \
glib2-devel \
glibc-devel \
glibc-static \
go \
gpgme-devel \
libassuan-devel \
libgpg-error-devel \
libseccomp-devel \
libselinux-devel \
pkgconfig \
make \
runc
```
**Please note**:
- `CentOS 8` (or higher): the `pkgconfig` package is replaced by `pkgconf-pkg-config`
- By default btrfs is not enabled. To add btrfs support, install the
following package: `btrfs-progs-devel`
- It is possible the distribution packaged version of runc is out of date. If you'd like to get the latest and greatest runc, consider using the one found in https://build.opensuse.org/project/show/devel:kubic:libcontainers:stable
#### RHEL 8
RHEL 8 distributions:\
Make sure you are subscribed to the following repositories: \
BaseOS/x86_64 \
Appstream/x86_64 \
CodeReady Linux Builder for x86_64
```
subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms
subscription-manager repos --enable=codeready-builder-for-rhel-8-x86_64-rpms
```
Follow the guide below to subscribe to the repositories if not already subscribed:\
https://access.redhat.com/solutions/265523
This requires go version 1.12 or greater:
```
yum module -y install go-toolset
```
```bash
yum install -y \
containers-common \
device-mapper-devel \
git \
make \
glib2-devel \
glibc-devel \
glibc-static \
  runc
```
Here is a link on how to install a source rpm on RHEL: \
https://www.itechlounge.net/2012/12/linux-how-to-install-source-rpm-on-rhelcentos/
Dependency: gpgme-devel \
Link: http://download.eng.bos.redhat.com/brewroot/packages/gpgme/1.10.0/6.el8/x86_64/
Dependency: go-md2man \
Command:
```
go get github.com/cpuguy83/go-md2man
```
The following dependencies also need to be installed:
```bash
libassuan \
libassuan-devel \
libgpg-error \
libseccomp \
libselinux \
pkgconf-pkg-config
```
#### Debian - Raspbian - Ubuntu
On Debian, Raspbian and Ubuntu distributions, [enable the Kubic project
repositories](../README.md#installing-crio) and install the following packages:
```bash
apt-get update -qq && apt-get install -y \
btrfs-tools \
containers-common \
git \
golang-go \
libassuan-dev \
libdevmapper-dev \
libglib2.0-dev \
libc6-dev \
libgpgme11-dev \
libgpg-error-dev \
libseccomp-dev \
libsystemd-dev \
libselinux1-dev \
pkg-config \
go-md2man \
cri-o-runc \
libudev-dev \
software-properties-common \
gcc \
make
```
**Caveats and Notes:**
If using an older release or a long-term support release, be careful to double-check that the version of `runc` is new enough (running `runc --version` should produce `spec: 1.0.0`), or else build your own.
Be careful to double-check that the version of golang is new enough; version
1.12.x or higher is required. If needed, newer golang versions are available at
[the official download website](https://golang.org/dl).
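One way to check this programmatically (a sketch; `version_ge` is an illustrative helper and relies on GNU `sort -V`):

```shell
# version_ge A B: succeed when version A >= version B (requires sort -V).
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if command -v go >/dev/null 2>&1; then
    go_version=$(go version | sed -E 's/.*go([0-9]+(\.[0-9]+)*).*/\1/')
    if version_ge "$go_version" "1.12"; then
        echo "golang $go_version is new enough"
    else
        echo "golang $go_version is too old; 1.12.x or higher is required" >&2
    fi
else
    echo "go not found on PATH" >&2
fi
```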
### Get Source Code
Clone the source code using:
```bash
git clone https://github.com/cri-o/cri-o # or your fork
cd cri-o
```
Make sure your `CRI-O` and `kubernetes` versions are of matching major and minor versions.
For instance, if you want to be compatible with the latest kubernetes release,
you'll need to use the latest tagged release of `CRI-O` on branch `release-1.18`.
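For example, to find the newest tag for a given minor version from a clone, a helper like this can be used (illustrative only; it assumes the upstream `vMAJOR.MINOR.PATCH` tag convention):

```shell
# Print the newest vX.Y.* tag from a list of tags supplied on stdin.
latest_tag_for_minor() { # usage: git tag --list | latest_tag_for_minor 1.18
    grep "^v$1\." | sort --version-sort | tail -n1
}

# In your cri-o clone you would then run, e.g.:
#   git checkout "$(git tag --list | latest_tag_for_minor 1.18)"
```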
### Build
To install with default buildtags using seccomp, use:
```bash
make
sudo make install
```
Otherwise, if you do not want to build `CRI-O` with seccomp support you can add `BUILDTAGS=""` when running make.
```bash
make BUILDTAGS=""
sudo make install
```
#### Install with Ansible
An [Ansible Role](https://github.com/alvistack/ansible-role-cri_o) is also available to automate the above steps:
``` bash
sudo su -
mkdir -p ~/.ansible/roles
cd ~/.ansible/roles
git clone https://github.com/alvistack/ansible-role-cri_o.git cri_o
cd ~/.ansible/roles/cri_o
pip3 install --upgrade --ignore-installed --requirement requirements.txt
molecule converge
molecule verify
```
#### Build Tags
`CRI-O` supports optional build tags for compiling support of various features.
To add build tags to the make option the `BUILDTAGS` variable must be set.
```bash
make BUILDTAGS='seccomp apparmor'
```
| Build Tag | Feature | Dependency |
|----------------------------------|-------------------------------------------------|--------------|
| seccomp | syscall filtering | libseccomp |
| selinux | selinux process and mount labeling | libselinux |
| apparmor | apparmor profile support | <none> |
`CRI-O` manages images with [containers/image](https://github.com/containers/image), which uses the following buildtags.
| Build Tag | Feature | Dependency |
|----------------------------------|-------------------------------------------------|--------------|
| containers_image_openpgp | use native golang pgp instead of cgo | <none> |
| containers_image_ostree_stub | disable use of ostree as an image transport | <none> |
`CRI-O` also uses [containers/storage](https://github.com/containers/storage) for managing container storage.
| Build Tag | Feature | Dependency |
|----------------------------------|-------------------------------------------------|--------------|
| exclude_graphdriver_btrfs | exclude btrfs as a storage option | <none> |
| btrfs_noversion | for building btrfs version < 3.16.1 | btrfs |
| exclude_graphdriver_devicemapper | exclude devicemapper as a storage option | <none> |
| libdm_no_deferred_remove | don't compile deferred remove with devicemapper | devicemapper |
| exclude_graphdriver_overlay | exclude overlay as a storage option | <none> |
| ostree | build storage using ostree | ostree |
### Static builds
It is possible to build a statically linked binary of CRI-O by using the
officially provided [nix](https://nixos.org/nix) package and the derivation of
it [within this repository](../nix). The builds are completely reproducible and
will create an `x86_64`/`amd64` stripped ELF binary for
[glibc](https://www.gnu.org/software/libc). These binaries are integration
tested as well and support the following features:
- apparmor
- btrfs
- device mapper
- gpgme
- seccomp
- selinux
To build the binaries locally either [install the nix package
manager](https://nixos.org/nix/download.html) or setup a new container image
from the root directory of this repository by executing:
```
make test-image-nix
```
Please note that you can specify the container runtime and image name by
specifying:
```
make test-image-nix \
CONTAINER_RUNTIME=podman \
TESTIMAGE_NIX=crionix
```
The overall build process can take a tremendous amount of CPU time depending on
the hardware. After the image has been successfully built, it should be possible
to build the binaries:
```
make build-static
```
There exists a pre-built container image used for the internal CI. This
means that invoking `make build-static` should work even without building the
image beforehand.
Note that the container runtime and nix image can be specified here, too. The
resulting binaries should now be available within:
- `bin/static/crio`
To build the binaries without any prepared container and via the already
installed nix package manager, simply run the following command from the root
directory of this repository:
```
nix build -f nix
```
The resulting binary should now be available in `result-bin/bin`.
#### Creating a release archive
A release bundle consists of all static binaries, the man pages and
configuration files like `00-default.conf`. The `release-bundle` target can be
used to build a new release archive within the current repository:
```
make release-bundle
…
Created ./bundle/crio-v1.15.0.tar.gz
```
### Download conmon
[conmon](https://github.com/containers/conmon) is a per-container daemon that `CRI-O` uses to monitor container logs and exit information.
`conmon` needs to be built and installed alongside `CRI-O`. Running:
```bash
git clone https://github.com/containers/conmon
cd conmon
make
sudo make install
```
will build and install `conmon` into your `$PATH`.
### Setup CNI networking
A proper description of setting up CNI networking is given in the
[`contrib/cni` README](/contrib/cni/README.md). But the gist is that you need to
have some basic network configurations enabled and CNI plugins installed on
your system.
### CRI-O configuration
If you are installing for the first time, generate and install configuration files with:
```
sudo make install.config
```
#### Validate registries in registries.conf
Edit `/etc/containers/registries.conf` and verify that the registries option has valid values in it. For example:
```
[registries.search]
registries = ['registry.access.redhat.com', 'registry.fedoraproject.org', 'quay.io', 'docker.io']
[registries.insecure]
registries = []
[registries.block]
registries = []
```
For more information about this file see [registries.conf(5)](https://github.com/containers/image/blob/master/docs/containers-registries.conf.5.md).
#### Optional - Modify verbosity of logs
Users can modify the `log_level` by specifying an overwrite like
`/etc/crio/crio.conf.d/01-log-level.conf` to change the verbosity of
the logs. Options are fatal, panic, error, warn, info (default), debug and
trace.
```
[crio.runtime]
log_level = "info"
```
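As a sketch, such a drop-in can be generated from the shell. The `CRIO_CONF_D` variable is illustrative — on a real host set it to `/etc/crio/crio.conf.d` (root required); the default below writes to a scratch directory so the commands are safe to try:

```shell
# Generate a log-level drop-in for CRI-O.
CRIO_CONF_D=${CRIO_CONF_D:-$(mktemp -d)}   # use /etc/crio/crio.conf.d on a real host
mkdir -p "$CRIO_CONF_D"
cat > "$CRIO_CONF_D/01-log-level.conf" <<'EOF'
[crio.runtime]
log_level = "debug"
EOF
echo "wrote $CRIO_CONF_D/01-log-level.conf"
```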
#### Optional - Modify capabilities and sysctls
By default, `CRI-O` uses the following capabilities:
```
default_capabilities = [
"CHOWN",
"DAC_OVERRIDE",
"FSETID",
"FOWNER",
"SETGID",
"SETUID",
"SETPCAP",
"NET_BIND_SERVICE",
"KILL",
]
```
and no sysctls
```
default_sysctls = [
]
```
Users can change either default by adding overwrites to `/etc/crio/crio.conf.d`.
### Starting CRI-O
Running `make install` will install the CRI-O binary to:
```bash
/usr/local/bin/crio
```
You can run it manually there, or you can set up a systemd unit file with:
```
sudo make install.systemd
```
And let systemd take care of running CRI-O:
``` bash
sudo systemctl daemon-reload
sudo systemctl enable crio
sudo systemctl start crio
```
### Using CRI-O
- Follow this [tutorial](tutorials/crictl.md) to quickly get started running simple pods and containers.
- To run a full cluster, see [the instructions](tutorials/kubernetes.md).
- To run with kubeadm, see [kubeadm instructions](tutorials/kubeadm.md).
# menuDropdownJquery
### Permutations of a String
##### Problem:
Input a string and print all permutations of its characters. For example, for the input string abc, print all strings composed of the characters
a, b and c: abc, acb, bac, bca, cab and cba.
##### Approach:
Consider the string in two parts: take the first character (e.g. a), swap it in turn with each of the characters after it, and recurse on the remaining characters; repeating these swaps at every level yields all permutations of the string.
##### Reference
[Source code](./Main.java)
schema-discovery module
===
This module is used to turn a database table or a CSV file into a usable TableSchema object, extracting metadata from the table such as field names, data types, etc.
The *schema-discovery-model* module also has additional model elements for querying data from a database.
# Classic béchamel sauce
Classic French sauce, base for a lot of dishes
- Cook time: 15-20 min
## Ingredients
- 5 Teaspoons unsalted butter
- 5 Teaspoons flour
- 1 liter whole milk
- Salt to taste
- Pepper to taste
- Pinch of nutmeg
## Directions
1. Put your butter in your pot and let it slowly melt on medium to low heat.
2. Once the butter is fully melted add the flour and stir it in to make a roux.
3. Keep stirring your roux on medium low heat until it gets lightly golden brown
and starts smelling a bit nutty.
4. Add about a glass of your whole milk and stir until combined; repeat this
process until you have a thick sauce in your pan and newly added milk combines
easily. Never stop stirring.
5. At this point you can add the rest of your milk; if you skip the previous
step you will end up with lumps of roux that are hard to get out.
6. Lower your heat to low and keep stirring; don't forget to get into the corners
of the pot, because your sauce will burn easily.
7. Once your sauce has the desired thickness, give it a taste and add salt
and pepper until it is to your liking. A pinch of nutmeg, preferably freshly
grated, will also go a long way.
## Contribution
- yiusa
;tags: basic sauce french italian
Feature name: market-framework
# Summary
The market framework is a set of concepts that define the markets available on a Vega network in terms of the product and instrument being traded on each, the trading mode and related parameters, and the risk model being used for margin calculations.
The market framework is described in Section 3 of the [whitepaper](https://vega.xyz/papers/vega-protocol-whitepaper.pdf).
# Guide-level explanation
The trading core will create order books, risk engines, etc. and accept orders and other instructions based on the data held within the market framework. Depending on the deployment context for the trading core, the market framework will be created and manipulated in different ways:
- In some private/permissioned Vega networks, the framework instances will be set up using configuration files.
- In later test network releases, the public Mainnet, and other private/permissioned networks, entities in the framework will be created by governance transactions.
- In scenario testing tools, etc. the framework may be created by both configuration and governance transactions.
- Changes to the market framework entities on a running Vega instance will always be made via governance transactions.
Out of scope for this ticket:
- the governance protocol and design of governance transactions is out of scope for this market framework design;
- risk models and risk engine;
- trading modes and trading mode parameters;
- products, smart products, and the first built-in product(s) to be built (futures, options)
- APIs through which clients can query and update market framework data
# Reference-level explanation
The market framework is essentially a set of data structures that configure and control almost all of the behaviour of a Vega network (the main exceptions being per-instance network and node configuration, and network-wide parameters that apply to all markets). These data structures are described in the sections below.
## Market
The market data structure collects all of the information required for Vega to operate a market. The component structures tradable instrument, instrument, and product may not exist in a Vega network at all unless defined and used by one (or more, in the case of products) markets. Risk models are a set of instances of a risk model data structure that are external to the market framework and provided by the risk model implementation. They are part of the Vega codebase and in the current version of the protocol, new risk models are not created by governance or configuration on a running Vega node. All structures in the market framework should be fully and unambiguously defined by their parameters. That is, two instances of a structure with precisely the same parameters are equivalent and identical, and should probably be de-duplicated on this basis within the implementation.
Data:
- **Identifier:** this should unambiguously identify a market
- **Status:** Proposed | Pending | Cancelled | Active | Suspended | Closed | Trading Terminated | Settled (see [market lifecycle spec](./0043-MKTL-market_lifecycle.md))
- **Trading mode:** this defines the trading mode (e.g. [continuous trading](#trading-mode---continuous-trading), [auction](#trading-mode---auctions)) and any required configuration for the trading mode. Note also that each trading mode in future will have very different sets of applicable parameters.
- **Tradable instrument:** an instance of or reference to a tradable instrument.
- **Mark price methodology:** reference to which [mark price](./0009-MRKP-mark_price.md) calculation methodology will be used.
- **Mark price methodology parameters:**
- Algorithm 1 / Last Traded Price: initial mark price
- **Price monitoring parameters**: a list of parameters, each specifying one price monitoring auction trigger and the associated auction duration.
- **Market activation time**: Read only, set by system when market opens. The date/time at which the opening auction uncrossed and the market first entered it's normal trading mode (empty if this had not happened)
- **Tick size**: (size of an increment in price in terms of the quote unit)
- **Quoted Decimal places**: number of decimals places for quote unit, e.g. if quote unit is USD and decimal places is 2 then prices are quoted in integer numbers of cents.
- **Position Decimal Places**: number of decimal places for orders and positions, i.e. if this is 2 then the smallest increment that can be traded is 0.01, for example 0.01 BTC in a BTSUSD market. (Note: it is agreed that initially the integer representation of the full precision of both order and positions can be required to fit into an int64, so this means that the largest position/order size possible reduces by a factor of ten for every extra decimal place used. this also means that, for instance, it would not be possible to create a BTCUSD market that allows order/position sizes equivalent to 1 sat.)
Note that Vega has a hard maximum of MAX_DECIMAL_PLACES_FOR_POSITIONS_AND_ORDERS as a "compile-time" parameter. A typical value would be MAX_DECIMAL_PLACES_FOR_POSITIONS_AND_ORDERS=6.
### Trading mode - continuous trading
Params:
- None currently
### Trading mode - Auctions
Params:
- **Call period end:** when the call period ends (date/time), may be empty if indefinite
A market can be in Auction Mode for a number of reasons:
- At market creation, markets will start in an [opening auction](./0026-AUCT-auctions.md#opening-auctions-at-creation-of-the-market), as a price discovery mechanism
- A market can be a [Frequent Batch Auction](./0026-AUCT-auctions.md#frequent-batch-auction), rather than continuous trading
- Due to [price monitoring](./0032-PRIM-price_monitoring.md) triggering a price discovery auction.
How markets operate during auction mode is a separate specification: [0026 - Auctions](./0026-AUCT-auctions.md)
## Tradable instrument
A tradable instrument is a combination of an instrument and a risk model. An instrument can only be traded when paired with a risk model, however regardless of the risk model, two identical instruments are expected to be fungible (see below).
Data:
- **Instrument:** an instance of or reference to a fully specified instrument.
- **Risk model:** a reference to a risk model *that is valid for the instrument* (NB: risk models may therefore be expected to expose a mechanism by which to test whether or not it can calculate risk/margins for a given instrument)
## Instrument
Uniquely and unambiguously describes something that can be traded on Vega, two identical instruments should be fungible, potentially (in the future, when multiple markets per instrument are allowed) even across markets. At least initially Vega will allow a maximum of one market per instrument, but the design should allow for this to be relaxed in the future when additional trading modes are added.
Instruments are the data structure that provides most of the metadata that allows for market discovery in addition to providing a concrete instance of a product to be traded. An instrument may also be described as a 'contract' (among other things) in trading literature and press.
Data:
- **Identifier:** a string/binary ID that uniquely identifies an instrument across all instruments now and in the future. Perhaps a hash of all the defining data references and parameters. These should be generated by Vega.
- **Code:** a short(ish...) code that does not necessarily uniquely identify an instrument, but is meaningful and relatively easy to type, e.g. FX:BTCUSD/DEC18, NYSE:ACN, ... (these will be supplied by humans either through config or as part of the market spec being voted on using the governance protocol.)
- **Name:** full and fairly descriptive name for the instrument.
- **Metadata fields:** see #85.
- **Product:** a reference to or instance of a fully specified product, including all required product parameters for that product.
## Product
Products define the behaviour of a position throughout the trade lifecycle. They do this by taking a pre-defined set of product parameters as inputs and emitting a stream of *lifecycle events* which enable Vega to margin, trade and settle the product.
Products will be of two types:
- **Built-ins:** products that are hard coded as part of Vega (built in futures and then options will be our first products).
- **Smart Products:** products that are defined in Vega's Smart Product language (future functionality)
Product lifecycle events:
- **Cash/asset flows:** these are consumed by the settlement engine and describe a movement of a number of some asset from (-ve value) or to (+ve value) the holder of a (long position), with the size of the flow specify the quantity of the asset per unit of long volume.
- **Trading Terminated:** this event moves a market to 'Trading Terminated' state, means that further trading is not possible (see [market lifecycle spec](./0043-MKTL-market_lifecycle.md)).
- **Settlement:** this event triggers final settlement of positions and release of margin, e.g. once settlement data is received from a data source/oracle and final settlement cashflows are calculated (see [market lifecycle spec](../protocol/0043-MKTL-market_lifecycle.md)).
Products must expose certain data to Vega WHEN they are instantiated as an instrument by providing parameters:
- **Settlement assets:** one or more assets that can be involved in settlement
- **Margin assets:** one or more assets that may be required as margin (usually the same set as settlement assets, but not always)
- **Price / quote units:** the unit in which prices (e.g. on the order book are quoted), usually but not always one of the settlement assets. Usually but not always (e.g. for bonds traded on yield, units = % return or options traded on implied volatility, units = % annualised vol) an asset (currency, commodity, etc.)
Products need to re-evaluate their logic when any of their inputs change (e.g. an oracle publishes a value, time passes, a parameter is changed, etc.), so Vega will need to notify them of such updates.
Data:
- **Product name/code/reference/instance:** to be obtained either via a specific string identifying a builtin, e.g. 'Future', 'Option' or in future smart product code OR a reference to a product (e.g. a hash of the compiled smart product) where an existing product is being reused. Stored as a reference to a built-in product instance or a 'compiled' bytecode/AST instance for the smart product language.
- **Product specific parameters** which can be single values or streams (e.g. events from an oracle), e.g. for a future:
- Settlement and margin asset
- Maturity date
- Oracle / settlement price data reference
- Minimum order size
- *Note: the specific parameters for a product are defined by the product and will vary between products, so the system needs to be flexible in this regard.*
Note: product definition for futures is out of scope for this ticket.
## Price monitoring parameters
[Price monitoring (spec)](./0032-PRIM-price_monitoring.md) parameters specify an array of price monitoring triggers and the associated auction durations. Each parameter contains the following fields:
- `horizon` - price projection horizon expressed as a year fraction over which price is to be projected by the risk model and compared to the actual market moves during that period. Must be positive.
- `probability` - probability level used in price monitoring. Must be in the (0,1) range.
- `auctionExtension` - auction duration (or extension in case market is already in auction mode) per breach of the `horizon`, `probability` trigger pair specified above. Must be greater than 0.
An arbitrary limit of 4 price parameters can be set per market. This prevents building up a confusing set of price monitoring rules on a market. 4 was chosen as a practical limit, but could be increased should the need arise.
----
# Pseudo-code / examples
## Market framework data structures
```rust
struct Market {
    id: String,
    status: String, // Proposed | Pending | Cancelled | Active | Suspended | Closed | Trading Terminated | Settled
    trading_mode: TradingMode,
    tradable_instrument: TradableInstrument,
}

enum TradingMode {
    ContinuousTrading { }, // in reality there will (eventually) be params here
    // DiscreteTrading { period: Duration, ... },
    // Auction { end_datetime: DateTime, ... },
    // RFQ { ... },
}

struct TradableInstrument {
    instrument: Instrument,
    risk_model: RiskModel,
}

struct Instrument {
    id: String,
    code: String,
    name: String,
    metadata: InstrumentMetadata,
    product: Product,
}

struct InstrumentMetadata {
    tags: Vec<String>,
}

enum Product {
    // maturity should be some sort of DateTime, settlement_asset is however we refer to crypto-assets (collateral) on Vega
    Future { maturity: String, oracle: Oracle, settlement_asset: String },
    // EuropeanOption {},
    // SmartProduct {},
}

enum Oracle {
    EthereumEvent { contract_id: String, event: String } // totally guessed at these :-)
    // ... more oracle types here...
}

enum RiskModel {
    BuiltinFutures { historic_volatility: f64 } // parameters here subject to change and may not be correct now
}
```
## Example of a market in the above structure
**Note:** all the naming conventions, IDs, etc. here are made up and are just examples of the kind of thing that might happen, and some fields are missing 🤷♀️.
```rust
Market {
    id: "BTC/DEC19",
    status: "Active",
    trading_mode: ContinuousTrading { ... },
    tradable_instrument: TradableInstrument {
        instrument: Instrument {
            id: "Crypto/BTCUSD/Futures/Dec19", // maybe a concatenation of all the data or maybe a hash/digest
            code: "FX:BTCUSD/DEC19",
            name: "December 2019 BTC vs USD future",
            metadata: InstrumentMetadata {
                tags: [
                    "asset_class:fx/crypto",
                    "product:futures",
                    "underlying:BTC/USD",
                    "fx/base:BTC",
                    "fx/quote:USD"
                ]
            },
            product: Future {
                maturity: "2019-12-31",
                // note: this example uses a signed-message price source for
                // settlement, a richer shape than the simple Oracle enum above
                settlementPriceSource: {
                    sourceType: "signedMessage",
                    sourcePubkeys: ["YOUR_PUBKEY_HERE"],
                    field: "price",
                    dataType: "decimal",
                    filters: [
                        { "field": "feed_id", "equals": "BTCUSD/EOD" },
                        { "field": "mark_time", "equals": "31/12/20" }
                    ]
                },
                settlement_asset: "Ethereum/Ether"
            }
        },
        risk_model: BuiltinFutures {
            historic_volatility: 0.15
        }
    }
}
```
---
title: files
type: input
categories: ["Local"]
---
<!--
THIS FILE IS AUTOGENERATED!
To make changes please edit the contents of:
lib/input/files.go
-->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Reads files from a path, where each discrete file will be consumed as a single
message.
```yaml
# Config fields, showing default values
input:
files:
path: ""
delete_files: false
```
The path can either point to a single file (resulting in only a single message)
or a directory, in which case the directory will be walked and each file found
will become a message.
### Metadata
This input adds the following metadata fields to each message:
``` text
- path
```
You can access these metadata fields using
[function interpolation](/docs/configuration/interpolation#metadata).
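For example, a hypothetical config (the directory path and the downstream processor are illustrative, not prescribed by this input) that consumes a directory and copies the `path` metadata into each message:

```yaml
# Sketch: read every file under ./inbox as its own message, delete each
# file once consumed, and record the source path on the payload.
input:
  files:
    path: ./inbox
    delete_files: true

pipeline:
  processors:
    - bloblang: |
        root = this
        root.source_file = meta("path")
```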
## Fields
### `path`
A path to either a directory or a file.
Type: `string`
Default: `""`
### `delete_files`
Whether to delete files once they are consumed.
Type: `bool`
Default: `false`
#### Parser Content
```Java
{
Name = exalms-567
Vendor = Microsoft
Product = Microsoft Windows
Lms = Direct
DataType = "windows-567"
IsHVF = true
TimeFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
Conditions = [ """"event_id":567,""", "Object Access Attempt:" ,""""@timestamp"""" ]
Fields = [
"""({event_name}Object Access Attempt)""",
""""@timestamp"\s*:\s*"({time}\d\d\d\d-\d\d-\d\dT\d\d:\d\d:\d\d\.\d\d\dZ)""",
""""computer_name"\s*:\s*"({host}.+?)"""",
""""record_number"\s*:\s*"({record_id}\d+)""",
"""({event_code}567)""",
""""user"\s*:\s*\{[^\}]*"identifier"\s*:\s*"({user_sid}[^"]+)""",
""""user"\s*:\s*\{[^\}]*"name"\s*:\s*"({user}[^"]+)""",
""""user"\s*:\s*\{[^\}]*"domain"\s*:\s*"({domain}[^"]+)""",
""""(param3|ObjectType)"\s*:\s*"({file_type}[^"]+)""",
""""(param5|ObjectName)"\s*:\s*"({file_path}[^"]+)""",
""""(param5|ObjectName)"\s*:\s*"([^"]*\\)?({file_name}[^\\\."]+(\.({file_ext}[^\.\\"]+))?)"""",
""""(param5|ObjectName)"\s*:\s*"({file_parent}.+?)\\+[^\\]+"""",
""""(param6|Accesses)"\s*:\s*"({accesses}.+?)"""",
]
DupFields = [ "host->dest_host" ]
}
{
Name = exalms-576
Vendor = Microsoft
Product = Microsoft Windows
Lms = Direct
DataType = "windows-privileged-access"
TimeFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
Conditions = [ """"Special privileges assigned to new logon""", """"event_id":576""", """"@timestamp""""]
Fields = [
"""({event_name}Special privileges assigned to new logon)""",
""""@timestamp"\s*:\s*"({time}[^"]+)"""",
""""computer_name"\s*:\s*"({host}[\w\-\.]+)""",
"""({event_code}576)""",
"""({ownership_privilege}SeTakeOwnershipPrivilege)""",
"""({environment_privilege}SeSystemEnvironmentPrivilege)""",
"""({debug_privilege}SeDebugPrivilege)""",
"""({tcb_privilege}SeTcbPrivilege)""",
""""record_number"\s*:\s*"({record_id}\d+)"""",
""""user"\s*:\s*\{.*?"identifier"\s*:\s*"({user_sid}[^"]+)"""",
""""user"\s*:\s*\{.*?"domain":"({domain}[^"]+)"""",
""""user"\s*:\s*\{.*?"name":"({user}[^"]+)"""",
""""(param4|Privileges)"\s*:\s*"({privileges}[^"]+)"""",
""""(param3|LogonID|logon_id)"\s*:\s*"(-|({logon_id}.+?))\s*"""",
""""(param3|LogonID|logon_id)"\s*:\s*"\(([^,\s]+(,|\s))?(-|({logon_id}.+?)\))"""",
]
DupFields = ["host->dest_host"]
}
```