---
layout: post
title: Emma
date: 2021-08-22
Author: Jack Zhu
tags: [novel, Jane Austen, love]
comments: true
---
I don't know why this book could grip me so tightly. It's like a magnet sitting there, and I cannot be separated from it until it's done. It's also interesting that *Jane Austen*'s books always come with a happy ending, so you won't feel sorry or sad once you close the last page. *Emma* is one such book.
It's not the book that makes me sad. It's always the reflection on my own life and the ensuing comparison. Comparison always brings sadness, since something good, elegant, and graceful will always stand out and dwarf what we currently own or the life we currently live. The age *Jane Austen* lived in was wonderful in some respects. People had time to read, to dance, to have parties, to indulge in trifling changes of emotion, to understand and ponder the subtle feelings of those around them, which is a huge extravagance to the modern us. Ladies and gentlemen could walk along the garden for hours, and could visit each other and talk for a long time. They could play games like conundrums to implicitly express their deep feelings towards a lady. They made comments about the people around them: their demeanor, their intelligence, their elegance, and their personality. They had a very good understanding of taste, and there was no such big discrepancy as we have today in views of what is good and bad, what is moral and what is not, a discrepancy that now lets everyone justify whatever lifestyle they live. That might be presumed an improvement for human beings, but it also leads to a lot of nonsense.

*Jane Austen*'s novels might be about rich people, and you might argue that poor people could never live such a life; theirs might have been even worse, like the life in *Dickens*'s *Great Expectations*. I think that's a fair point, and it's a little pity that *Jane* didn't depict the poor's life with her elegant pen. *Jane*'s family was not very rich, but her father had a *modest* income, and the atmosphere of her family was *open, amused, easy intellectual*, which might have made it hard for *Jane* to portray the poor's life as well as the rich's. Also, as a young lady, she was easily attracted by *beautiful* and *elegant* things, which normally came together with the rich. We can see she must have been very sensitive and self-aware, often engaging in self-reflection, just like *Emma*. This enabled her to write with very accurate feelings and subtle emotions, which we still love very much today.
The history of human society seems to prove that it doesn't necessarily go upwards as we might expect; sometimes it goes downwards. The direction is not all about material resources or advancement in science and medicine, but also about the spirit and the pursuit of the meaning of life, and maybe the latter is more important. In that sense, we might still be going downwards after the advent of the smartphone, which makes the *slow life* impossible. We cannot read a decent book since it has over 200 pages; we cannot watch a decent movie since it's longer than 100 minutes; we cannot enjoy a play at the opera, since it is too *boring*. We cannot help grabbing our phones, scrolling down and down through the infinite updates, falling into the rabbit hole, and forgetting what we were doing. We might argue that it's not the fault of the new technology but of the people themselves, which is not the full answer. We cannot deny the effect of the environment on us, especially when the temptation is within our reach. Watching a funny video clip is easier than watching a long movie; reading a summary of a novel's main plot is easier than reading the original masterpiece; expressing your feelings directly and explicitly is easier than spending days writing a poem with your full effort. We become hasty about everything and want to know the outcome immediately, whether it's a health check or a romance. I think it's really a pity and a shame for human beings to give up those wonderful things and focus only on quick gratification.
---
For *Emma*, I enjoyed reading it very much, maybe even more than *Pride and Prejudice*, since I love *Emma* and her personality very much. She was a little self-indulgent and proud, and a bit overconfident about her intelligence and her ability to perceive others' deep feelings. Her relationship with *Harriet* was very interesting, and in it readers can find a lot of flaws in *Emma*. What she did cost *Harriet* an engagement, and later led to the sad story with Mr. Elton. When *Mr. Knightley* behaved very gallantly and asked *Harriet* to dance, *Emma* thought about a possible attachment between them, but finally discovered her own attachment to *Mr. Knightley*. The interaction with *Frank Churchill* was also interesting, since *Emma* confronted herself a lot about whether a possible attachment existed and whether she should push it or retract. *Emma* was always self-centered; her relationship with *Harriet* was also meant to compensate for the departure of *Mrs. Weston*, and later she tried to introduce *Harriet* into a new social circle, which she thought much better than *Harriet*'s own, so that she could grow. And the whole plot about *Mr. Elton* also came entirely from herself and ended sadly. Especially, when *Harriet* expressed her feelings towards *Mr. Knightley* to *Emma*, *Emma* still accepted the love from *Mr. Knightley*, and sent *Harriet* to London to avoid her own guilt, until *Harriet* successfully built an attachment with her first pursuer again. And the relationship between them faded over time after the three marriages, which might have been better for *Harriet*, since it never brought her anything good but sadness and even *tragedy*. *Emma* also once insulted *Miss Bates* in public for being too talkative, which she felt bad about and tried to make up for the next day by visiting *Miss Bates*.
It seems I disapprove of *Emma* a lot. That's not true, even though I can see a lot of flaws in her. She is surely imperfect, but she is also very intelligent, open, and elegant. She is someone *Mr. Knightley* could talk to openly without scruples, e.g. he could blame her for being too brutal towards *Miss Bates*. She could understand him very well. In some ways, they match each other very well. *Emma* could improve by being together with *Mr. Knightley*, and *Mr. Knightley* could also feel a truer self in his interactions with *Emma*.
Just as *Mr. Knightley* put it:
> My Emma, does not everything serve to prove more and more the beauty of truth and sincerity in all our dealings with each other?
That kind of honesty, sincerity and truthfulness is really something worthy of a man's life.
8ac341d018f5350b221055335ab79787fe281ed1 | 10,830 | md | Markdown | docs/framework/wpf/advanced/sharing-message-loops-between-win32-and-wpf.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/sharing-message-loops-between-win32-and-wpf.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/sharing-message-loops-between-win32-and-wpf.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Sharing message loops between Win32 and WPF
ms.date: 03/30/2017
helpviewer_keywords:
- Win32 code [WPF], sharing message loops
- message loops [WPF]
- sharing message loops [WPF]
- interoperability [WPF], Win32
ms.assetid: 39ee888c-e5ec-41c8-b11f-7b851a554442
ms.openlocfilehash: 35a908cc26e6b70c9acd8732521837f2b20eaf5b
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/04/2018
---
# <a name="sharing-message-loops-between-win32-and-wpf"></a>Sharing Message Loops Between Win32 and WPF
This topic describes how to implement a message loop for interoperation with [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)], either by using existing message loop exposure on the <xref:System.Windows.Threading.Dispatcher>, or by creating a separate message loop on the [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] side of your interoperation code.
## <a name="componentdispatcher-and-the-message-loop"></a>ComponentDispatcher and the Message Loop
To support interoperation, a common scenario is to implement <xref:System.Windows.Interop.IKeyboardInputSink>, or to subclass classes that already implement <xref:System.Windows.Interop.IKeyboardInputSink>, such as <xref:System.Windows.Interop.HwndSource> or <xref:System.Windows.Interop.HwndHost>. However, keyboard sink support does not address all of the message loop needs you might have for sending and receiving messages across the interoperation boundary. To help formalize an application's message loop architecture, [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] provides the <xref:System.Windows.Interop.ComponentDispatcher> class, which defines a simple protocol for a message loop to follow.
<xref:System.Windows.Interop.ComponentDispatcher> is a static class that exposes several members. The scope of each method is implicitly tied to the calling thread. A message loop must call some of these [!INCLUDE[TLA2#tla_api#plural](../../../../includes/tla2sharptla-apisharpplural-md.md)] at critical times (as defined in the next section).
<xref:System.Windows.Interop.ComponentDispatcher> provides events that other components (such as the keyboard sink) can listen to. The <xref:System.Windows.Threading.Dispatcher> class calls the appropriate <xref:System.Windows.Interop.ComponentDispatcher> methods in the appropriate sequence. If you are implementing your own message loop, your code is responsible for calling <xref:System.Windows.Interop.ComponentDispatcher> methods in a similar fashion.
Calling <xref:System.Windows.Interop.ComponentDispatcher> methods on a thread will only invoke event handlers that were registered on that thread.
## <a name="writing-message-loops"></a>Writing Message Loops
The following is a checklist of <xref:System.Windows.Interop.ComponentDispatcher> members you will use if you write your own message loop:
- <xref:System.Windows.Interop.ComponentDispatcher.PushModal%2A>: the message loop should call this to indicate that the thread is modal.
- <xref:System.Windows.Interop.ComponentDispatcher.PopModal%2A>: the message loop should call this to indicate that the thread has reverted to nonmodal.
- <xref:System.Windows.Interop.ComponentDispatcher.RaiseIdle%2A>: the message loop should call this to indicate that <xref:System.Windows.Interop.ComponentDispatcher> should raise the <xref:System.Windows.Interop.ComponentDispatcher.ThreadIdle> event. <xref:System.Windows.Interop.ComponentDispatcher> will not raise <xref:System.Windows.Interop.ComponentDispatcher.ThreadIdle> if <xref:System.Windows.Interop.ComponentDispatcher.IsThreadModal%2A> is `true`, but message loops can choose to call <xref:System.Windows.Interop.ComponentDispatcher.RaiseIdle%2A> even when <xref:System.Windows.Interop.ComponentDispatcher> is in the modal state.
- <xref:System.Windows.Interop.ComponentDispatcher.RaiseThreadMessage%2A>: the message loop should call this to indicate that a new message is available. The return value indicates whether a listener to a <xref:System.Windows.Interop.ComponentDispatcher> event handled the message. If <xref:System.Windows.Interop.ComponentDispatcher.RaiseThreadMessage%2A> returns `true` (handled), the dispatcher should do nothing further with the message. If the return value is `false`, the dispatcher is expected to call the [!INCLUDE[TLA2#tla_win32](../../../../includes/tla2sharptla-win32-md.md)] function `TranslateMessage`, then call `DispatchMessage`.
## <a name="using-componentdispatcher-and-existing-message-handling"></a>Using ComponentDispatcher and Existing Message Handling
The following is a checklist of <xref:System.Windows.Interop.ComponentDispatcher> members you will use if you rely on the inherent [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] message loop.
- <xref:System.Windows.Interop.ComponentDispatcher.IsThreadModal%2A>: returns whether the application has gone modal (for example, a modal message loop has been pushed). <xref:System.Windows.Interop.ComponentDispatcher> can track this state because the class maintains a count of the <xref:System.Windows.Interop.ComponentDispatcher.PushModal%2A> and <xref:System.Windows.Interop.ComponentDispatcher.PopModal%2A> calls made by the message loop.
- The <xref:System.Windows.Interop.ComponentDispatcher.ThreadFilterMessage> and <xref:System.Windows.Interop.ComponentDispatcher.ThreadPreprocessMessage> events follow the standard delegate invocation rules. The delegates are invoked in an unspecified order, and all delegates are invoked even if the first one marks the message as handled.
- <xref:System.Windows.Interop.ComponentDispatcher.ThreadIdle>: indicates a good and efficient time to perform idle processing (there are no pending messages for the thread). <xref:System.Windows.Interop.ComponentDispatcher.ThreadIdle> will not be raised if the thread is modal.
- <xref:System.Windows.Interop.ComponentDispatcher.ThreadFilterMessage>: raised for all messages processed by the message pump.
- <xref:System.Windows.Interop.ComponentDispatcher.ThreadPreprocessMessage>: raised for all messages that were not handled during <xref:System.Windows.Interop.ComponentDispatcher.ThreadFilterMessage>.
A message is considered handled if, after either the <xref:System.Windows.Interop.ComponentDispatcher.ThreadFilterMessage> event or the <xref:System.Windows.Interop.ComponentDispatcher.ThreadPreprocessMessage> event, the `handled` parameter passed by reference in the event data is `true`. Event handlers should ignore the message if `handled` is `true`, because that means a different handler handled the message first. Event handlers for both events can modify the message; the dispatcher should dispatch the modified message rather than the original unmodified message. <xref:System.Windows.Interop.ComponentDispatcher.ThreadPreprocessMessage> is delivered to all listeners, but the architectural intent is that only the top-level window that contains the HWND the message is directed to should invoke code in response to the message.
## <a name="how-hwndsource-treats-componentdispatcher-events"></a>How HwndSource Treats ComponentDispatcher Events
If the <xref:System.Windows.Interop.HwndSource> is a top-level window (it has no parent HWND), it registers with <xref:System.Windows.Interop.ComponentDispatcher>. When <xref:System.Windows.Interop.ComponentDispatcher.ThreadPreprocessMessage> is raised, and the message is destined for the <xref:System.Windows.Interop.HwndSource> or a child window of the <xref:System.Windows.Interop.HwndSource>, <xref:System.Windows.Interop.HwndSource> calls its <xref:System.Windows.Interop.HwndSource.System%23Windows%23Interop%23IKeyboardInputSink%23TranslateAccelerator%2A>, <xref:System.Windows.Interop.IKeyboardInputSink.TranslateChar%2A>, <xref:System.Windows.Interop.IKeyboardInputSink.OnMnemonic%2A> keyboard sink sequence.
If the <xref:System.Windows.Interop.HwndSource> is not a top-level window (it has a parent HWND), no processing occurs. Only the top-level window is expected to perform this processing, and there must be a top-level window with keyboard sink support as part of every interoperation scenario.
If <xref:System.Windows.Interop.HwndHost.WndProc%2A> on an <xref:System.Windows.Interop.HwndSource> is called without an appropriate keyboard sink method being called first, the application receives higher-level keyboard events such as <xref:System.Windows.UIElement.KeyDown>. However, no keyboard sink methods are called, which bypasses desirable keyboard input model features such as access key support. This can happen because the message loop has not properly notified the <xref:System.Windows.Interop.ComponentDispatcher> of the relevant thread, or because the parent HWND did not invoke the appropriate keyboard sink responses.
A message destined for the keyboard sink might not be dispatched to the HWND if you previously added hooks for that message with the <xref:System.Windows.Interop.HwndSource.AddHook%2A> method. The message might have been handled directly at the message pump level and never dispatched to the `DispatchMessage` function.
## <a name="see-also"></a>See Also
<xref:System.Windows.Interop.ComponentDispatcher>
<xref:System.Windows.Interop.IKeyboardInputSink>
[WPF and Win32 Interoperation](../../../../docs/framework/wpf/advanced/wpf-and-win32-interoperation.md)
[Threading Model](../../../../docs/framework/wpf/advanced/threading-model.md)
[Input Overview](../../../../docs/framework/wpf/advanced/input-overview.md)
| 156.956522 | 905 | 0.817821 | deu_Latn | 0.971771 |
8ac3f1e3c268307e2e44be2802efb56f120c02bc | 2,842 | md | Markdown | README.md | aulbytj/Ruby_Capstone_bot | c76bde588875b4bba9270e106f0ebed4134de3de | [
"MIT"
] | 3 | 2020-05-22T00:46:26.000Z | 2021-11-08T12:12:32.000Z | README.md | aulbytj/Ruby_Capstone_bot | c76bde588875b4bba9270e106f0ebed4134de3de | [
"MIT"
] | 1 | 2021-10-12T21:11:47.000Z | 2021-10-12T21:11:47.000Z | README.md | aulbytj/Ruby_Capstone_bot | c76bde588875b4bba9270e106f0ebed4134de3de | [
"MIT"
] | null | null | null | # Build your own Bot - Ruby Capstone Project
> It was required in this project that I develop a bot the could be used in one of the following platform Slack, Twitter or Telegram.
## Built With
- Ruby
- Lita
- Redis
- Tested with RSpec
## Getting Started
Accept the invitation to the Slack workspace 'mrjaysneighborhood' [here](https://join.slack.com/t/mrjaysneighborhood/shared_invite/zt-e211lq47-cbJr0FnVZJjn79YCbuZqpg)\
Type `jaybot says` or `@jaybot says` in any of the channels for instructions on how to interact with jaybot.\
If you don't already have a Slack account now would be a great time to get one!😁
To see the code you can click [here](https://github.com/aulbytj/Ruby_Capstone_bot/tree/develop).
### Prerequisites
Must have a Slack account.\
If you would like to test the code locally, you must have Ruby installed.\
To run the tests, you must have RSpec and Redis installed.\
Must have Lita installed.
### Installation
To install RSpec open terminal and enter the following
```
gem install rspec
```
To check the version of RSpec that was installed
```
rspec --version
```
Take a minute and look through the various options available in rspec
```
rspec --help
```
To install Redis, download, extract and compile it with:
```
$ wget http://download.redis.io/releases/redis-6.0.3.tar.gz
$ tar xzf redis-6.0.3.tar.gz
$ cd redis-6.0.3
$ make
```
On macOS you can use Homebrew.
```
brew install redis
```
To install Lita running the following command in your shell:
```
gem install lita
```
### Run Test
Ensure that Redis is running: `redis-cli ping` should output `PONG`. If not, start the Redis server.
If installed with Homebrew:
```
brew services start redis
```
If installed manually, navigate to where Redis is installed and run it with
```
src/redis-server
```
Navigate to the directory `Ruby_Capstone_bot/jaybot/lita-dialog ` in the terminal and run
```
rspec
```
To test jaybot locally, Redis and Lita must be installed on your machine.
Start the Redis server: navigate to where you installed Redis and run
```
src/redis-server
```
```
git clone https://github.com/aulbytj/Ruby_Capstone_bot.git
cd Ruby_Capstone_bot/jaybot/
lita
```
## Authors
👤 **Aulbourn Knowles**
- Github: [@githubhandle](https://github.com/aulbytj)
- Twitter: [@twitterhandle](https://twitter.com/aulbytj)
- Linkedin: [linkedin](https://linkedin.com/in/aulbourn-knowles-b9971672)
## 🤝 Contributing
Contributions, issues and feature requests are welcome!
Feel free to check the [issues page](https://github.com/aulbytj/Ruby_Capstone_bot/issues).
## Show your support
Give a ⭐️ if you like this project!
## Acknowledgments
- [Microverse](https://www.microverse.org/), TSE's, [Neovim](https://neovim.io/), [Alacritty](https://github.com/alacritty/alacritty), [Lita](https://www.lita.io/)
## 📝 License
This project is [MIT](lic.url) licensed.
| 22.555556 | 166 | 0.736805 | eng_Latn | 0.962782 |
8ac3f5b70b882c350aa880712e836a724f100001 | 16,335 | md | Markdown | README.md | houy1/Multitarget-tracker | bee300e8bfd660c86cbeb6892c65a5b7195c9381 | [
"Apache-2.0"
] | 1,801 | 2015-01-19T16:28:03.000Z | 2022-03-31T12:28:56.000Z | README.md | houy1/Multitarget-tracker | bee300e8bfd660c86cbeb6892c65a5b7195c9381 | [
"Apache-2.0"
] | 168 | 2016-03-02T06:23:20.000Z | 2022-03-25T12:29:37.000Z | README.md | houy1/Multitarget-tracker | bee300e8bfd660c86cbeb6892c65a5b7195c9381 | [
"Apache-2.0"
] | 608 | 2015-01-19T16:27:51.000Z | 2022-03-30T02:07:56.000Z | [](https://github.com/Nuzhny007/Multitarget-tracker/actions?query=workflow%3Abuild-Ubuntu)
[](https://github.com/Smorodov/Multitarget-tracker/actions?query=workflow%3ACodeQL)
# Last changes
* TensorRT 8 for YOLO detectors
* Re-identification embeddings for persons and vehicles from OpenVINO correctly works!
# New videos!
* Vehicles speed calculation with YOLO v4 (Thanks [Sam Blake for great idea!](https://medium.com/hal24k-techblog/how-to-track-objects-in-the-real-world-with-tensorflow-sort-and-opencv-a64d9564ccb1))
[](https://youtu.be/qOHYvDwpsO0)
* First step to ADAS with YOLO v4
[](https://youtu.be/5cgg5fy90Xg)
# Multitarget (multiple objects) tracker
#### 1. Objects detector can be created with function [CreateDetector](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/Detector/BaseDetector.cpp) with different values of the detectorType:
1.1. Based on background subtraction: built-in Vibe (tracking::Motion_VIBE), SuBSENSE (tracking::Motion_SuBSENSE) and LOBSTER (tracking::Motion_LOBSTER); MOG2 (tracking::Motion_MOG2) from [opencv](https://github.com/opencv/opencv/blob/master/modules/video/include/opencv2/video/background_segm.hpp); MOG (tracking::Motion_MOG), GMG (tracking::Motion_GMG) and CNT (tracking::Motion_CNT) from [opencv_contrib](https://github.com/opencv/opencv_contrib/tree/master/modules/bgsegm). For foreground segmentation, contours from OpenCV are used, with the result as cv::RotatedRect
1.2. Haar face detector from OpenCV (tracking::Face_HAAR)
1.3. HOG pedestrian detector from OpenCV (tracking::Pedestrian_HOG) and C4 pedestrian detector from [sturkmen72](https://github.com/sturkmen72/C4-Real-time-pedestrian-detection) (tracking::Pedestrian_C4)
1.4. Detector based on opencv_dnn (tracking::DNN_OCV) and pretrained models from [chuanqi305](https://github.com/chuanqi305/MobileNet-SSD) and [pjreddie](https://pjreddie.com/darknet/yolo/)
1.5. YOLO detector (tracking::Yolo_Darknet) with darknet inference from [AlexeyAB](https://github.com/AlexeyAB/darknet) and pretrained models from [pjreddie](https://pjreddie.com/darknet/yolo/)
1.6. YOLO detector (tracking::Yolo_TensorRT) with NVidia TensorRT inference from [enazoe](https://github.com/enazoe/yolo-tensorrt) and pretrained models from [pjreddie](https://pjreddie.com/darknet/yolo/)
1.7. You can use a custom detector with bounding or rotated rectangles as output.
#### 2. Matching or solve an [assignment problem](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/Tracker/Ctracker.h):
2.1. Hungarian algorithm (tracking::MatchHungrian) with cubic time O(N^3) where N is the objects count
2.2. Algorithm based on weighted bipartite graphs (tracking::MatchBipart) from [rdmpage](https://github.com/rdmpage/maximum-weighted-bipartite-matching) with time O(M * N^2) where N is the objects count and M is the number of connections between detections in the frame and tracked objects. It can be faster than the Hungarian algorithm
2.3. [Distance](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/Tracker/Ctracker.h) between detections and objects: Euclidean distance in pixels between centers (tracking::DistCenters), Euclidean distance in pixels between rectangles (tracking::DistRects), Jaccard or IoU distance from 0 to 1 (tracking::DistJaccard)
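As a quick illustration of the Jaccard (IoU) distance option, here is a minimal self-contained sketch for axis-aligned boxes. It is not the project's actual implementation (which works on OpenCV rect types), just the underlying idea:

```cpp
#include <algorithm>

struct Rect { float x, y, w, h; };

// Intersection-over-Union of two axis-aligned boxes, in [0, 1].
float IoU(const Rect& a, const Rect& b)
{
    float x1 = std::max(a.x, b.x);
    float y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float iw = std::max(0.f, x2 - x1);          // intersection width
    float ih = std::max(0.f, y2 - y1);          // intersection height
    float inter = iw * ih;
    float uni = a.w * a.h + b.w * b.h - inter;  // union area
    return uni > 0.f ? inter / uni : 0.f;
}

// Jaccard distance: 0 means identical boxes, 1 means no overlap.
float JaccardDist(const Rect& a, const Rect& b)
{
    return 1.f - IoU(a, b);
}
```

A matcher would fill an N x M cost matrix with such distances and feed it to the assignment solver.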
#### 3. [Smoothing trajectories and predict missed objects](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/Tracker/Ctracker.h):
3.1. Linear Kalman filter from OpenCV (tracking::KalmanLinear)
3.2. Unscented Kalman filter from OpenCV (tracking::KalmanUnscented) with constant velocity or constant acceleration models
3.3. [Kalman goal](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/Tracker/Ctracker.h) is only coordinates (tracking::FilterCenter) or coordinates and size (tracking::FilterRect)
3.4. Simple [Abandoned detector](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/Tracker/Ctracker.h)
3.5. [Line intersection](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/CarsCounting.cpp) counting
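The line intersection counting (3.5) reduces to a segment-crossing test: a track is counted when the segment between an object's positions in two consecutive frames crosses the user-defined counting line. A hedged sketch of that geometric core (the helper names here are illustrative, not the project's API):

```cpp
struct Pt { float x, y; };

// Cross product sign: > 0 if c lies to the left of the ray a -> b.
static float Cross(const Pt& a, const Pt& b, const Pt& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if segment p1-p2 strictly crosses segment q1-q2
// (collinear/touching cases are ignored in this sketch).
bool SegmentsCross(const Pt& p1, const Pt& p2, const Pt& q1, const Pt& q2)
{
    float d1 = Cross(q1, q2, p1);
    float d2 = Cross(q1, q2, p2);
    float d3 = Cross(p1, p2, q1);
    float d4 = Cross(p1, p2, q2);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}
```

A counter would call this with the counting line as q1-q2 and the last two track points as p1-p2, and could also use the sign of d1 to tell the crossing direction.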
#### 4. [Advanced visual search](https://github.com/Smorodov/Multitarget-tracker/blob/master/src/Tracker/Ctracker.h) for objects if they have not been detected:
4.1. No search (tracking::TrackNone)
4.2. Built-in DAT (tracking::TrackDAT) from [foolwood](https://github.com/foolwood/DAT), STAPLE (tracking::TrackSTAPLE) from [xuduo35](https://github.com/xuduo35/STAPLE) or LDES (tracking::TrackLDES) from [yfji](https://github.com/yfji/LDESCpp); KCF (tracking::TrackKCF), MIL (tracking::TrackMIL), MedianFlow (tracking::TrackMedianFlow), GOTURN (tracking::TrackGOTURN), MOSSE (tracking::TrackMOSSE) or CSRT (tracking::TrackCSRT) from [opencv_contrib](https://github.com/opencv/opencv_contrib/tree/master/modules/tracking)
With this option, tracking can work much slower, but with better accuracy.
#### 5. Pipeline
5.1. Synchronous [pipeline - SyncProcess](https://github.com/Smorodov/Multitarget-tracker/blob/master/example/VideoExample.h):
- get frame from capture device;
- decoding;
- objects detection (1);
- tracking (2-4);
- show result.
This pipeline is good if all algorithms are fast and work faster than the time between two frames (40 ms for a device with 25 fps). Or it can be used if we have only 1 core for everything (no parallelization).
5.2. Pipeline with [2 threads - AsyncProcess](https://github.com/Smorodov/Multitarget-tracker/blob/master/example/VideoExample.h):
- the 1st thread takes frame t and performs capture, decoding and objects detection;
- the 2nd thread takes frame t-1 and the results from the first thread, and performs tracking and result presentation (this is the main thread).
So we have a latency of 1 frame, but with two free CPU cores we can roughly double the performance.
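The hand-off between the two stages can be sketched with a minimal thread-safe queue. This is an illustrative toy, not the actual VideoExample code (which also manages frame buffers, detector state and timing):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal thread-safe FIFO for passing work between pipeline stages.
template <typename T>
class SafeQueue
{
public:
    void Push(T v)
    {
        {
            std::lock_guard<std::mutex> lk(m_mutex);
            m_queue.push(std::move(v));
        }
        m_cond.notify_one();
    }
    // Blocks until an item is available.
    T Pop()
    {
        std::unique_lock<std::mutex> lk(m_mutex);
        m_cond.wait(lk, [this] { return !m_queue.empty(); });
        T v = std::move(m_queue.front());
        m_queue.pop();
        return v;
    }
private:
    std::queue<T> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cond;
};

struct Frame { int id = -1; };  // stand-in for a captured frame + its detections

// Thread 1: capture + "detection"; main thread: "tracking" on the previous frame.
std::vector<int> RunPipeline(int framesCount)
{
    SafeQueue<Frame> queue;
    std::vector<int> tracked;

    std::thread capture([&] {
        for (int i = 0; i < framesCount; ++i)
            queue.Push(Frame{ i });   // capture, decode, detect
        queue.Push(Frame{ -1 });      // poison pill: end of stream
    });

    for (;;)                          // main thread: tracking + display
    {
        Frame f = queue.Pop();
        if (f.id < 0)
            break;
        tracked.push_back(f.id);      // the tracking step would go here
    }
    capture.join();
    return tracked;
}
```

With one producer and one consumer, the FIFO preserves frame order, which is what lets the tracker consume frame t-1 while frame t is still being detected.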
5.3. A fully [asynchronous pipeline](https://github.com/Smorodov/Multitarget-tracker/tree/master/async_detector) can be used if the objects detector works at a low fps and we have 2 free CPU cores. In this case we use 4 threads:
- the 1st (main) thread is not busy and is used for the GUI and result presentation;
- the 2nd thread performs capture and decoding and puts frames into a thread-safe queue;
- the 3rd thread is used for objects detection on the newest frame from the queue;
- the 4th thread is used for objects tracking: it waits for a frame with detections from the 3rd thread and uses advanced visual search (4) on the intermediate frames from the queue until it gets a frame with detections.
This pipeline can be used with a slow but accurate DNN, tracking objects in the intermediate frames in real time without latency.
Also you can read [Wiki in Russian](https://github.com/Smorodov/Multitarget-tracker/wiki).
#### Demo Videos
* Mouse tracking:
[](https://www.youtube.com/watch?v=2fW5TmAtAXM)
* Motion Detection and tracking:
[](https://www.youtube.com/watch?v=GjN8jOy4kVw)
* Multiple Faces tracking:
[](https://www.youtube.com/watch?v=j67CFwFtciU)
* Simple Abandoned detector:
[](https://www.youtube.com/watch?v=fpkHRsFzspA)
#### Tested Platforms
1. Ubuntu Linux 18.04 with x86 processors
2. Ubuntu Linux 18.04 with Nvidia Jetson Nano (YOLO + darknet on GPU works!)
3. Windows 10 (x64 and x32 builds)
#### Build
1. Download project sources
2. Install CMake
3. Install OpenCV (https://github.com/opencv/opencv) and OpenCV contrib (https://github.com/opencv/opencv_contrib) repositories
4. Configure the project's CMakeLists.txt and set OpenCV_DIR (-DOpenCV_DIR=/path/to/opencv/build).
5. If opencv_contrib is not installed, disable the options: USE_OCV_BGFG=OFF, USE_OCV_KCF=OFF and USE_OCV_UKF=OFF
6. If you want to use the native darknet YOLO detector with CUDA + cuDNN, set BUILD_YOLO_LIB=ON (first install the CUDA and cuDNN libraries from Nvidia)
7. If you want to use the YOLO detector with TensorRT, set BUILD_YOLO_TENSORRT=ON (first install the TensorRT library from Nvidia)
8. To build the example with a low-fps detector (currently the native darknet YOLO detector) and the tracker working on each frame: BUILD_ASYNC_DETECTOR=ON
9. To build the example with line crossing detection (cars counting): BUILD_CARS_COUNTING=ON
10. Go to the build directory and run make
**Full build:**
git clone https://github.com/Smorodov/Multitarget-tracker.git
cd Multitarget-tracker
mkdir build
cd build
    cmake .. -DUSE_OCV_BGFG=ON -DUSE_OCV_KCF=ON -DUSE_OCV_UKF=ON -DBUILD_YOLO_LIB=ON -DBUILD_YOLO_TENSORRT=ON -DBUILD_ASYNC_DETECTOR=ON -DBUILD_CARS_COUNTING=ON
make -j
How to run cmake on Windows for Visual Studio 15 2017 Win64: [example](https://github.com/Smorodov/Multitarget-tracker/blob/master/data/cmake_vs2017.bat). You need to add directory with cmake.exe to PATH and change build params in cmake.bat
**Usage:**
Usage:
./MultitargetTracker <path to movie file> [--example]=<number of example 0..7> [--start_frame]=<start a video from this position> [--end_frame]=<play a video to this position> [--end_delay]=<delay in milliseconds after video ending> [--out]=<name of result video file> [--show_logs]=<show logs> [--gpu]=<use OpenCL> [--async]=<async pipeline> [--res]=<csv log file> [--settings]=<ini file> [--batch_size=<number of frames>]
./MultitargetTracker ../data/atrium.avi -e=1 -o=../data/atrium_motion.avi
Press:
* 'm' key for change mode: play|pause. When video is paused you can press any key for get next frame.
* Press Esc to exit from video
Params:
1. Movie file, for example ../data/atrium.avi
    2. [Optional] Number of example: 0 - MouseTracking, 1 - MotionDetector, 2 - FaceDetector, 3 - PedestrianDetector, 4 - OpenCV dnn objects detector, 5 - Yolo Darknet detector, 6 - YOLO TensorRT Detector, 7 - Cars counting
-e=0 or --example=1
3. [Optional] Frame number to start a video from this position
-sf=0 or --start_frame==1500
4. [Optional] Play a video to this position (if 0 then played to the end of file)
-ef=0 or --end_frame==200
5. [Optional] Delay in milliseconds after video ending
-ed=0 or --end_delay=1000
6. [Optional] Name of result video file
-o=out.avi or --out=result.mp4
7. [Optional] Show Trackers logs in terminal
-sl=1 or --show_logs=0
8. [Optional] Use built-in OpenCL
-g=1 or --gpu=0
9. [Optional] Use 2 threads for processing pipeline
-a=1 or --async=0
10. [Optional] Path to the csv file with tracking result
-r=res.csv or --res=res.csv
11. [Optional] Path to the ini file with tracker settings
-s=settings.ini or --settings=settings.ini
12. [Optional] Batch size - simultaneous detection on several consecutive frames
-bs=2 or --batch_size=1
More details here: [How to run examples](https://github.com/Smorodov/Multitarget-tracker/wiki/Run-examples).
#### Using MT Tracking as a library in your CMake project
Build MTTracking in the usual way, and choose an installation prefix where the library will be installed
(see [CMake Documentation](https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html) for the defaults).
In the `build` directory run
```
$ cmake --install .
```
This will generate the CMake files needed to find the MTTracking package with libraries and include files for
your project. E.g.
```
MTTrackingConfig.cmake
MTTrackingConfigVersion.cmake
MTTrackingTargets.cmake
```
In your CMake project, do the following:
```
find_package(MTTracking REQUIRED)
target_include_directories(MyProjectTarget PUBLIC ${MTTracking_INCLUDE_DIR})
target_link_libraries(MyProjectTarget PUBLIC MTTracking::mtracking MTTracking::mdetection)
```
You may need to provide CMake with the location to find the above `.cmake` files, e.g.
```
$ cmake -DMTTracking_DIR=<location_of_cmake_files> ..
```
If CMake succeeds at finding the package, you can use MTTracking in your project e.g.
```
#include <mtracking/Ctracker.h>
//...
std::unique_ptr<BaseTracker> m_tracker;
TrackerSettings settings;
settings.SetDistance(tracking::DistJaccard);
m_tracker = BaseTracker::CreateTracker(settings);
//...
```
And so on.
#### Thirdparty libraries
* OpenCV (and contrib): https://github.com/opencv/opencv and https://github.com/opencv/opencv_contrib
* Vibe: https://github.com/BelBES/VIBE
* SuBSENSE and LOBSTER: https://github.com/ethereon/subsense
* GTL: https://github.com/rdmpage/graph-template-library
* MWBM: https://github.com/rdmpage/maximum-weighted-bipartite-matching
* Pedestrians detector: https://github.com/sturkmen72/C4-Real-time-pedestrian-detection
* Non Maximum Suppression: https://github.com/Nuzhny007/Non-Maximum-Suppression
* MobileNet SSD models: https://github.com/chuanqi305/MobileNet-SSD
* YOLO v3 models: https://pjreddie.com/darknet/yolo/
* Darknet inference and YOLO v4 models: https://github.com/AlexeyAB/darknet
* NVidia TensorRT inference and YOLO v5 models: https://github.com/enazoe/yolo-tensorrt
* GOTURN models: https://github.com/opencv/opencv_extra/tree/c4219d5eb3105ed8e634278fad312a1a8d2c182d/testdata/tracking
* DAT tracker: https://github.com/foolwood/DAT
* STAPLE tracker: https://github.com/xuduo35/STAPLE
* LDES tracker: https://github.com/yfji/LDESCpp
* Ini file parser: https://github.com/benhoyt/inih
#### License
Apache 2.0: [LICENSE text](https://github.com/Smorodov/Multitarget-tracker/blob/master/LICENSE)
#### Project citations
1. Jeroen PROVOOST "Camera gebaseerde analysevan de verkeersstromen aaneen kruispunt", 2014 ( https://iiw.kuleuven.be/onderzoek/eavise/mastertheses/provoost.pdf )
2. Roberto Ciano, Dimitrij Klesev "Autonome Roboterschwarme in geschlossenen Raumen", 2015 ( https://www.hs-furtwangen.de/fileadmin/user_upload/fak_IN/Dokumente/Forschung_InformatikJournal/informatikJournal_2016.pdf#page=18 )
3. Wenda Qin, Tian Zhang, Junhe Chen "Traffic Monitoring By Video: Vehicles Tracking and Vehicle Data Analysing", 2016 ( http://cs-people.bu.edu/wdqin/FinalProject/CS585%20FinalProjectReport.html )
4. Ipek BARIS "CLASSIFICATION AND TRACKING OF VEHICLES WITH HYBRID CAMERA SYSTEMS", 2016 ( http://cvrg.iyte.edu.tr/publications/IpekBaris_MScThesis.pdf )
5. Cheng-Ta Lee, Albert Y. Chen, Cheng-Yi Chang "In-building Coverage of Automated External Defibrillators Considering Pedestrian Flow", 2016 ( http://www.see.eng.osaka-u.ac.jp/seeit/icccbe2016/Proceedings/Full_Papers/092-132.pdf )
6. Roberto Ciano, Dimitrij Klesev "Autonome Roboterschwarme in geschlossenen Raumen" in "informatikJournal 2016/17", 2017 ( https://docplayer.org/124538994-2016-17-informatikjournal-2016-17-aktuelle-berichte-aus-forschung-und-lehre-der-fakultaet-informatik.html )
7. Omid Noorshams "Automated systems to assess weights and activity in group-housed mice", 2017 ( https://pdfs.semanticscholar.org/e5ff/f04b4200c149fb39d56f171ba7056ab798d3.pdf )
8. RADEK VOPÁLENSKÝ "DETECTION, TRACKING AND CLASSIFICATION OF VEHICLES", 2018 ( https://www.vutbr.cz/www_base/zav_prace_soubor_verejne.php?file_id=181063 )
9. Márk Rátosi, Gyula Simon "Real-Time Localization and Tracking using Visible Light Communication", 2018 ( https://ieeexplore.ieee.org/abstract/document/8533800 )
10. Thi Nha Ngo, Kung-Chin Wu, En-Cheng Yang, Ta-Te Lin "A real-time imaging system for multiple honey bee tracking and activity monitoring", 2019 ( https://www.sciencedirect.com/science/article/pii/S0168169919301498 )
11. Tiago Miguel Rodrigues de Almeida "Multi-Camera and Multi-Algorithm Architecture for Visual Perception onboard the ATLASCAR2", 2019 ( http://lars.mec.ua.pt/public/LAR%20Projects/Vision/2019_TiagoAlmeida/Thesis_Tiago_AlmeidaVF_26Jul2019.pdf )
12. ROS, http://docs.ros.org/lunar/api/costmap_converter/html/Ctracker_8cpp_source.html
| 64.565217 | 563 | 0.759596 | eng_Latn | 0.461894 |
8ac4171ca42fb430f9c40991a3f1cff55fa906ba | 6,399 | md | Markdown | docs/csharp/how-to/search-strings.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-06T07:30:24.000Z | 2020-01-06T07:30:24.000Z | docs/csharp/how-to/search-strings.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/how-to/search-strings.md | emrekas/docs.tr-tr | 027bd2c6c93900a75cac7ac42531c89085f87888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'How to: search strings (C# Guide)'
ms.date: 02/21/2018
helpviewer_keywords:
- searching strings [C#]
- strings [C#], searching with String methods
- strings [C#], searching with regular expressions
ms.assetid: fb1d9a6d-598d-4a35-bd5f-b86012edcb2b
ms.openlocfilehash: b9c27e419d37b6c0730f214d3b2b9bbdf7e30d11
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/23/2019
ms.locfileid: "61672074"
---
# <a name="how-to-search-strings"></a>How to search strings
You can use two main strategies to search for text in strings. Methods of the <xref:System.String> class search for specific text. Regular expressions search for patterns in text.
[!INCLUDE[interactive-note](~/includes/csharp-interactive-note.md)]
The [string](../language-reference/keywords/string.md) type, an alias for the <xref:System.String?displayProperty=nameWithType> class, provides a number of useful methods for searching the contents of a string. Among them are <xref:System.String.Contains%2A>, <xref:System.String.StartsWith%2A>, <xref:System.String.EndsWith%2A>, <xref:System.String.IndexOf%2A>, and <xref:System.String.LastIndexOf%2A>. The <xref:System.Text.RegularExpressions.Regex?displayProperty=nameWithType> class provides a rich vocabulary for searching for patterns in text. In this article, you learn these techniques and how to choose the best method for your needs.
## <a name="does-a-string-contain-text"></a>Does a string contain text?
The <xref:System.String.Contains%2A?displayProperty=nameWithType>, <xref:System.String.StartsWith%2A?displayProperty=nameWithType>, and <xref:System.String.EndsWith%2A?displayProperty=nameWithType> methods search a string for specific text. The following example shows each of these methods, along with a variation that uses a case-insensitive search:
[!code-csharp-interactive[search strings using methods](../../../samples/snippets/csharp/how-to/strings/SearchStrings.cs#1)]
The preceding example demonstrates an important point about these methods: searches are **case-sensitive** by default. You use the <xref:System.StringComparison.CurrentCultureIgnoreCase?displayProperty=nameWithType> enumeration value to specify a case-insensitive search.
## <a name="where-does-the-sought-text-occur-in-a-string"></a>Where does the sought text occur in a string?
The <xref:System.String.IndexOf%2A> and <xref:System.String.LastIndexOf%2A> methods also search for text in strings. These methods return the location of the sought text. If the text isn't found, they return `-1`. The following example shows a search for the first and last occurrence of the word "methods" and displays the text in between.
[!code-csharp-interactive[search strings for indices](../../../samples/snippets/csharp/how-to/strings/SearchStrings.cs#2)]
## <a name="finding-specific-text-using-regular-expressions"></a>Finding specific text using regular expressions
The <xref:System.Text.RegularExpressions.Regex?displayProperty=nameWithType> class can be used to search strings. These searches can range in complexity from simple text patterns to complicated ones.
The following code example searches for the word "the" or "their" in a sentence, ignoring case. The static method <xref:System.Text.RegularExpressions.Regex.IsMatch%2A?displayProperty=nameWithType> performs the search. You give it the string to search and a search pattern. In this case, a third argument specifies a case-insensitive search. For more information, see <xref:System.Text.RegularExpressions.RegexOptions?displayProperty=nameWithType>.
The search pattern describes the text you are searching for. The following table describes each element of the search pattern. (The table below uses the single `\`, which is escaped as `\\` in a C# string.)
| Pattern | Meaning |
| -------- |-------------|
| the | matches the text "the" |
| (ir)? | matches 0 or 1 occurrence of "ir" |
| \s | matches a whitespace character |
[!code-csharp-interactive[Search using regular expressions](../../../samples/snippets/csharp/how-to/strings/SearchStrings.cs#3)]
> [!TIP]
> The `string` methods are usually the better choice when you are searching for an exact string. Regular expressions are better when you are searching for some pattern in a source string.
## <a name="does-a-string-follow-a-pattern"></a>Does a string follow a pattern?
The following code uses regular expressions to validate the format of each string in an array. The validation requires that each string take the form of a telephone number in which three groups of digits are separated by dashes, the first two groups contain three digits, and the third group contains four digits. The search pattern uses the regular expression `^\\d{3}-\\d{3}-\\d{4}$`. For more information, see [Regular expression language - quick reference](../../standard/base-types/regular-expression-language-quick-reference.md).
| Pattern | Meaning |
| -------- |-------------------------------------|
| ^ | matches the beginning of the string |
| \d{3} | matches exactly 3 digit characters |
| - | matches the '-' character |
| \d{3} | matches exactly 3 digit characters |
| - | matches the '-' character |
| \d{4} | matches exactly 4 digit characters |
| $ | matches the end of the string |
[!code-csharp-interactive[csProgGuideStrings#4](../../../samples/snippets/csharp/how-to/strings/SearchStrings.cs#4)]
This single search pattern matches many valid strings. Regular expressions are better for searching for, or validating against, a pattern than for finding a single text string.
You can try these samples by looking at the code in our [GitHub repository](https://github.com/dotnet/samples/tree/master/snippets/csharp/how-to/strings). Or you can download the samples [as a zip file](https://github.com/dotnet/samples/raw/master/snippets/csharp/how-to/strings.zip).
## <a name="see-also"></a>See also
- [C# programming guide](../programming-guide/index.md)
- [Strings](../programming-guide/strings/index.md)
- [LINQ and strings](../programming-guide/concepts/linq/linq-and-strings.md)
- <xref:System.Text.RegularExpressions.Regex?displayProperty=nameWithType>
- [.NET Framework regular expressions](../../standard/base-types/regular-expressions.md)
- [Regular expression language - quick reference](../../standard/base-types/regular-expression-language-quick-reference.md)
- [Best practices for using strings in .NET](../../standard/base-types/best-practices-strings.md)
| 74.406977 | 588 | 0.759025 | tur_Latn | 0.995818 |
8ac63e09f3b56bb7ad5a3f55846d99d7b9b700d2 | 1,742 | md | Markdown | README.md | igormartins4/clone-vercel-homepage | 69bead3d4134cfb40db049fa819a29939457280c | [
"MIT"
] | null | null | null | README.md | igormartins4/clone-vercel-homepage | 69bead3d4134cfb40db049fa819a29939457280c | [
"MIT"
] | null | null | null | README.md | igormartins4/clone-vercel-homepage | 69bead3d4134cfb40db049fa819a29939457280c | [
"MIT"
] | null | null | null | <h1 align="center">
UI Clone - Vercel (Homepage)
</h1>
<p align="center">Clone of the <a href="https://vercel.com">Vercel Homepage</a>, built for study purposes.</p>
<p align="center">Click <a href="https://www.youtube.com/playlist?list=PL85ITvJ7FLohTZv9cC5-PrZ39Q3cugWqp">here</a> to watch the original Rocketseat video on YouTube.</p>
<p align="center">
<a href="https://github.com/igormartins4/clone-vercel-homepage/graphs/contributors">
<img src="https://img.shields.io/github/contributors/igormartins4/clone-vercel-homepage?color=%23ff762a&style=for-the-badge" alt="Contributors">
</a>
<a href="https://github.com/igormartins4/clone-vercel-homepage/blob/main/LICENSE">
<img src="https://img.shields.io/github/license/igormartins4/clone-vercel-homepage?color=ff000ff&style=for-the-badge" alt="GitHub license" >
</a>
<a href="https://github.com/igormartins4/clone-vercel-homepage/stargazers">
<img alt="GitHub stars" src="https://img.shields.io/github/stars/igormartins4/clone-vercel-homepage?style=for-the-badge">
</a>
</p>
<hr>
## Participants
[<img src="https://avatars.githubusercontent.com/u/23300792?s=460&u=48142b8d548e9c7d1e69a3593b614e48a5513ad0&v=4" width="100px;"/>](https://github.com/igormartins4)
[Igor Martins](https://github.com/igormartins4)
## Resources used
- [x] HTML, SCSS, and JS
- [x] SVG images
## Start the development environment in VSCode
1. Open the project folder in `VSCode`
2. Install the `Live Server` and `Live Sass Compiler` extensions
3. Right-click `index.html` and select `Open with Live Server`
4. Open the **generated address** in your browser 🚀
<p align="center">
  <img src="assets\print.jpg" width="1000px" alt="Screenshot of the page">
</p> | 40.511628 | 172 | 0.729621 | por_Latn | 0.500838 |
8ac64d72483c431384233c9d30286edb17919c2e | 204 | md | Markdown | README.md | lin-wish/random-name | 91bae70aad4547e06388105136573a7c18525ed0 | [
"MIT"
] | null | null | null | README.md | lin-wish/random-name | 91bae70aad4547e06388105136573a7c18525ed0 | [
"MIT"
] | null | null | null | README.md | lin-wish/random-name | 91bae70aad4547e06388105136573a7c18525ed0 | [
"MIT"
] | null | null | null | # Random Names
A simple app for generating random names, written in Python and Flask.
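Under the hood, generating a random name usually amounts to sampling from word lists. A minimal standalone sketch (illustrative only; the word lists and function below are hypothetical, not this repo's actual code):

```python
import random

# Hypothetical word lists; replace with whatever name data the app uses.
FIRST_NAMES = ["Ada", "Grace", "Alan", "Edsger"]
LAST_NAMES = ["Lovelace", "Hopper", "Turing", "Dijkstra"]

def random_name():
    """Return a random 'First Last' combination."""
    return f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}"
```

In the Flask app, a route handler would simply return `random_name()` as its response.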
## Dependencies
- Flask
## Run
```bash
env FLASK_APP=main.py flask run
```
The server runs at :5000
## LICENSE
MIT
| 13.6 | 71 | 0.720588 | eng_Latn | 0.852955 |
8ac88e8ac6ac636849249ee1ef0f25a610f72a9c | 60 | md | Markdown | _meme-templates/468.md | harsshjain/meming-blog | 74a0370141cea9d98a1b4fc50b87ddaa545227c9 | [
"MIT"
] | 9 | 2016-08-08T16:24:17.000Z | 2020-06-24T14:29:42.000Z | _meme-templates/468.md | harsshjain/meming-blog | 74a0370141cea9d98a1b4fc50b87ddaa545227c9 | [
"MIT"
] | 2 | 2017-04-01T13:46:15.000Z | 2017-09-14T13:02:31.000Z | _meme-templates/468.md | harsshjain/meming-blog | 74a0370141cea9d98a1b4fc50b87ddaa545227c9 | [
"MIT"
] | 3 | 2017-09-27T09:10:16.000Z | 2020-07-17T16:40:45.000Z | ---
slug: Ugly-Twins
template_id: 468
title: Ugly Twins
---
| 10 | 17 | 0.683333 | yue_Hant | 0.107355 |
8ac89082dc44975dbb23d4978e035186af012146 | 1,023 | md | Markdown | README.md | Arquisoft/InciDashboard_e2b | 93c4acf8cad307034f6c767f830192377e0ab9a3 | [
"Unlicense"
] | 3 | 2018-03-07T11:32:10.000Z | 2018-03-07T20:08:52.000Z | README.md | Arquisoft/InciDashboard_e2b | 93c4acf8cad307034f6c767f830192377e0ab9a3 | [
"Unlicense"
] | 8 | 2018-03-20T16:12:10.000Z | 2018-04-16T08:46:47.000Z | README.md | Arquisoft/InciDashboard_e2b | 93c4acf8cad307034f6c767f830192377e0ab9a3 | [
"Unlicense"
] | 2 | 2018-03-25T21:54:08.000Z | 2018-03-26T12:42:45.000Z | # InciDashboard_e2b
[](https://travis-ci.org/Arquisoft/InciDashboard_e2b)
[](https://www.codacy.com/app/pabloHeviaV/InciDashboard_e2b?utm_source=github.com&utm_medium=referral&utm_content=Arquisoft/InciDashboard_e2b&utm_campaign=Badge_Grade)
[](https://codecov.io/gh/Arquisoft/InciDashboard_e2b)
InciDashboard e2b module
# Authors
* Pablo Hevia Viejo (UO251259)
* Pelayo Martínez Capela (UO250985)
* Gemma González Gil (UO236976)
* Erik Gabriel González García (UO224164)
# How to run
Before running the application, you must launch the "bootKafka.bat" file, which starts Kafka, Zookeeper, and the hsqldb database.
The users "pedro@example.com" and "maria@example.com", both with the password "1234", have been added for testing.
| 51.15 | 271 | 0.801564 | yue_Hant | 0.447161 |
8ac89abee75dd08f210e3246559fe4a97e1fba96 | 1,954 | md | Markdown | README.md | Gakur/instaclone1 | 5c4aa9128e0afaceb889e83e8b57545b558f398b | [
"MIT"
] | null | null | null | README.md | Gakur/instaclone1 | 5c4aa9128e0afaceb889e83e8b57545b558f398b | [
"MIT"
] | null | null | null | README.md | Gakur/instaclone1 | 5c4aa9128e0afaceb889e83e8b57545b558f398b | [
"MIT"
] | null | null | null | # PHOTOGRAM
>[Peter Gakure](https://github.com/Gakur/instaclone1)
# Description
This is a clone of the Instagram website, where people share images and videos for other users to view.
Users can sign up, log in, view and post photos, and search for and follow other users.
## Live Link
Click [View Site](https://instaclone-wk2.herokuapp.com/) to visit the site.
## User Story
* Sign in to the application to start using it.
* Upload pictures to the application.
* Search for different users using their usernames.
* See your profile with all your pictures.
* Follow other users and see their pictures on your timeline.
## Setup and Installation
To get the project up and running locally, follow the steps below.
##### Clone the repository:
```bash
git clone https://github.com/Gakur/instaclone1
```
##### Navigate into the folder and install requirements
```bash
cd instaclone/
```
##### Install and activate Virtual environment
```bash
pipenv shell
```
##### Setup Database
Set up your database user and password, then make migrations:
```bash
python manage.py makemigrations
```
Now migrate:
```bash
python manage.py migrate
```
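For the migration commands above to reach Postgres, the project's `settings.py` needs a `DATABASES` block along these lines (a sketch: the database name and credentials below are placeholders, so match them to the user and database you created):

```python
# settings.py (sketch; replace the placeholder name and credentials with your own)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'instaclone',        # hypothetical database name
        'USER': 'your_db_user',
        'PASSWORD': 'your_db_password',
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}
```

With this in place, `python manage.py migrate` connects to the configured database.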
##### Run the application
```bash
python manage.py runserver
```
##### Testing the application
```bash
python manage.py test
```
Open the application in your browser at `127.0.0.1:8000`.
## Technology used
* [Python3.8](https://www.python.org/)
* [Django](https://docs.djangoproject.com/en/2.2/)
* [Heroku](https://heroku.com)
## Known Bugs
* There are no known bugs currently, but pull requests are welcome in case you spot one
## Contact Information
If you have any questions or contributions, please email me at [petergakure97@gmail.com]
## License
* [](https://github.com/Gakur/instaclone1/Picture-Globe/blob/master/LICENSE)
* Copyright (c) 2021 **Peter Gakure** | 26.767123 | 165 | 0.686796 | eng_Latn | 0.917566 |
8ac93dd199e67d8200a876e57c69f6f2a7773767 | 525 | md | Markdown | guide/russian/mathematics/add-fractions-with-unlike-denominators/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 10 | 2019-08-09T19:58:19.000Z | 2019-08-11T20:57:44.000Z | guide/russian/mathematics/add-fractions-with-unlike-denominators/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 2,056 | 2019-08-25T19:29:20.000Z | 2022-02-13T22:13:01.000Z | guide/russian/mathematics/add-fractions-with-unlike-denominators/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 5 | 2018-10-18T02:02:23.000Z | 2020-08-25T00:32:41.000Z | ---
title: Add Fractions with Unlike Denominators
localeTitle: Добавить фракции в отличие от знаменателей
---
## Добавить фракции в отличие от знаменателей
Чтобы добавить фракции в отличие от знаменателей, вам нужно переписать фракции, чтобы все они имели один и тот же знаменатель. Один из способов сделать это - умножить все несопоставимые знаменатели вместе, чтобы создать общий знаменатель.
### Примеры
 | 47.727273 | 238 | 0.80381 | rus_Cyrl | 0.954286 |
8ac9e19bd5f473228763d91157444fdd4b97057c | 3,327 | md | Markdown | yong-hu-zhi-nan/canvas-she-ji-gong-zuo-liu-cheng-canvas-designing-workflows/qian-ming-signatures.md | lsj9383/celery-cn | 8d0099f0e8fda92ce99b03ce02f43655581a7558 | [
"Apache-2.0"
] | 1 | 2020-03-26T09:11:08.000Z | 2020-03-26T09:11:08.000Z | yong-hu-zhi-nan/canvas-she-ji-gong-zuo-liu-cheng-canvas-designing-workflows/qian-ming-signatures.md | lsj9383/celery-cn | 8d0099f0e8fda92ce99b03ce02f43655581a7558 | [
"Apache-2.0"
] | null | null | null | yong-hu-zhi-nan/canvas-she-ji-gong-zuo-liu-cheng-canvas-designing-workflows/qian-ming-signatures.md | lsj9383/celery-cn | 8d0099f0e8fda92ce99b03ce02f43655581a7558 | [
"Apache-2.0"
] | null | null | null | # 签名:Signatures
你刚刚在 [calling]() 指南中学习了如何通过 `delay` 方法来调用任务,并且这也是你常用的,但有时你可能希望传递任务签名给别的进程使用,或者作为其他函数的参数。
[signature()]() 包装了任务调用的参数、关键词参数和执行选项,以便传递给函数,甚至可以序列化后通过网络进行传输。
* 你可以创建签名,例如:
```python
>>> from celery import signature
>>> signature('tasks.add', args=(2, 2), countdown=10)
tasks.add(2, 2)
```
这个任务签名具有两个参数: (2, 2),以及 countdown 为10的执行选项。
* 或者你可以使用任务的签名方法来创建签名:
```python
>>> add.signature((2, 2), countdown=10)
tasks.add(2, 2)
```
* 也有创建签名的快捷方式:
```python
>>> add.s(2, 2)
tasks.add(2, 2)
```
* 签名也支持关键词参数:
```python
>>> add.s(2, 2, debug=True)
tasks.add(2, 2, debug=True)
```
* 任何签名对象都可以检查不同的字段:
```python
>>> s = add.signature((2, 2), {'debug': True}, countdown=10)
>>> s.args
(2, 2)
>>> s.kwargs
{'debug': True}
>>> s.options
{'countdown': 10}
```
* 签名也支持 `delay` 和 `apply_async` 的“调用API”,包括直接调用(\_\_call\_\_)。
调用签名,将直接在当前进程中执行任务:
```python
>>> add(2, 2)
4
>>> add.s(2, 2)()
4
```
`delay` 是我们代替 `apply_async` 的快捷方式:
```
>>> result = add.delay(2, 2)
>>> result.get()
4
>>> result = add.s(2, 2).delay()
>>> result.get()
4
```
`apply_async` 采用与 [app.Task.apply_async()]() 相同的调用参数。
```python
>>> add.apply_async(args, kwargs, **options)
>>> add.signature(args, kwargs, **options).apply_async()
>>> add.apply_async((2, 2), countdown=1)
>>> add.signature((2, 2), countdown=1).apply_async()
```
* 你不能通过 [s()]() 定义执行选项,但是可以通过 set 的链式调用解决:
```python
>>> add.s(2, 2).set(countdown=1)
proj.tasks.add(2, 2)
```
## 部分参数
使用签名,你可以在工作进程中执行任务:
```python
>>> add.s(2, 2).delay()
>>> add.s(2, 2).apply_async(countdown=1)
```
或者,你也可以直接在当前进程执行任务:
```python
>>> add.s(2, 2)()
4
```
可以指明 `delay`/`apply_async` 额外的参数、关键词参数或执行选项:
* 任何参数都将添加在签名参数前:
```python
>>> partial = add.s(2) # incomplete signature
>>> partial.delay(4) # 4 + 2
>>> partial.apply_async((4,)) # same
```
* 被添加的关键词参数将会和签名中的关键词参数合并,新添加的关键词参数优先:
```python
>>> s = add.s(2, 2, debug=False)
>>> s.delay(debug=True) # -> add(2, 2, debug=True)
>>> s.apply_async(kwargs={'debug': True}) # same
```
* 被添加的执行选项将会与签名中的执行选项合并,新添加的执行选项优先:
```python
>>> s = add.signature((2, 2), countdown=10)
>>> s.apply_async(countdown=1) # countdown is now 1
```
你也可以对签名进行克隆:
```python
>>> s = add.s(2)
proj.tasks.add(2)
>>> s.clone(args=(4,), kwargs={'debug': True})
proj.tasks.add(4, 2, debug=True)
```
## 不变性(Immutability)
部分参数通常在回调中使用,父任务的结果将会作为参数传递给链接或chord的回调任务。有时你希望指明一个不需要参数的回调,这时你可以设置签名为不变的。
```python
>>> add.apply_async((2, 2), link=reset_buffers.signature(immutable=True))
```
快捷方式 `.si()` 也能创建不变签名:
```python
>>> add.apply_async((2, 2), link=reset_buffers.si())
```
对于不变签名,只有执行选项可以进行设置,并无法使用部分参数和关键词参数。
注意:在该教材中,我有时会使用前缀 ~ 来进行签名。你不应该在生产环境中使用这样的代码,这仅仅是在 Python Shell 中进行测试的关键方式。
```python
>>> ~sig
>>> # is the same as
>>> sig.delay().get()
```
## 回调
任务可以使用 `apply_async` 的 link 参数来添加回调。
```python
add.apply_async((2, 2), link=other_task.s())
```
只有当任务成功退出,回调函数才能被执行,并且将父任务的结果作为参数传递给回调的任务。
如前所述,新传递的参数将会添加在签名指定的参数前。
如果你有一个签名:
```python
>>> sig = add.s(10)
```
接着 `sig.delay(result)` 将变为:
```python
>>> add.apply_async(args=(result, 10))
```
现在,让我们调用 `add` 任务,并设置回调。
```python
>>> add.apply_async((2, 2), link=add.s(8))
```
正如预期,首先第一个任务将会计算 2 + 2,接着回调任务将会计算 4 + 8。
| 18.081522 | 88 | 0.609558 | yue_Hant | 0.260182 |
8ac9f3550846e0eca7b30fb9cb757818701e4697 | 2,832 | md | Markdown | README.md | dOrgTech/necdao-snapshot-api | 368e4dc030939fe0166ecf0759326fc25fd5d86e | [
"MIT"
] | null | null | null | README.md | dOrgTech/necdao-snapshot-api | 368e4dc030939fe0166ecf0759326fc25fd5d86e | [
"MIT"
] | 6 | 2020-09-02T18:09:11.000Z | 2020-10-14T22:20:40.000Z | README.md | dOrgTech/necdao-snapshot-api | 368e4dc030939fe0166ecf0759326fc25fd5d86e | [
"MIT"
] | null | null | null | # necdao-snapshot-api
1. Log into psql and create a Postgres database and a user called admin:
```
sudo -u postgres psql
create database necdao;
\c necdao;
```
2. Run the following scripts in psql after connecting to the PostgresSQL database:
create table period (id bigserial not null, nec_to_distribute bigint, primary key(id));
create table week (id bigserial not null, nec_to_distribute bigint, snapshot_date timestamp with time zone, publish_date timestamp with time zone, unlock_date timestamp with time zone, contract_address varchar, closed boolean, fk_period_id bigint, constraint fk_week_period foreign key(fk_period_id) references period(id), start_date timestamp with time zone, end_date timestamp with time zone, primary key(id));
create table reward (fk_week_id bigint, id bigserial not null, primary key(id), address varchar, bpt_balance numeric, nec_earned numeric, constraint fk_reward_week foreign key(fk_week_id) references week(id));
create table users (id bigserial not null, email varchar, password varchar);
insert into users (email, password) values ('admin2', '$2y$10$.YhBbfUzXYbClV7DIUFY6.3ydJ2Bz2zduQwExxcwCLy9WdH7h2Ake');
create table reward_multiple (id bigserial not null, primary key(id), volume_minimum numeric, reward_multiple numeric);
INSERT INTO reward_multiple (volume_minimum, reward_multiple) VALUES
(0, 0.80),
(5000, 1.10),
(20000, 1.20),
(100000, 1.50),
(1000000, 2.00);
create user admin2 with password 'password';
alter database necdao owner to admin;
grant all privileges on database necdao to admin;
grant all privileges on table period to admin;
grant all privileges on table week to admin;
grant all privileges on table reward to admin;
grant all privileges on table users to admin;
grant all privileges on table reward_multiple to admin;
grant all privileges on sequence period_id_seq to admin;
grant all privileges on sequence reward_id_seq to admin;
grant all privileges on sequence users_id_seq to admin;
grant all privileges on sequence week_id_seq to admin;
grant all privileges on sequence reward_multiple_id_seq to admin;
This will create all appropriate tables and columns and will seed the users table with an admin2 account (email: 'admin2', password: 'password')
If you still cannot login, comment the auth code here and in the UI, create a user through register in the admin2 dashboard, then uncomment the auth code to login with that user.
1. Create a .env file with the following:
BALANCER_SUBGRAPH_API=https://api.thegraph.com/subgraphs/name/balancer-labs/balancer-beta
DATABASE_URL= -- url of the deployed postgres database --
SECRET_KEY=secret
PRIVATE_KEY= -- Private key of the wallet that has gas for contract deployment --
INFURA_API_KEY= -- infura api key --
DEVELOPMENT=true
4. Run the following:
```
yarn install
yarn dev
```
| 44.952381 | 412 | 0.790254 | eng_Latn | 0.967155 |
8aca6edcc9f1165f0aa8b66f73b3b5f5014e5fe3 | 1,563 | md | Markdown | docs/analysis-services/xmla/xml-elements-properties/new-element-xmla.md | RafaJPSantos/bi-shared-docs | 6cadbd4e8308de738424dacb62da7ebeb34bddbe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/xmla/xml-elements-properties/new-element-xmla.md | RafaJPSantos/bi-shared-docs | 6cadbd4e8308de738424dacb62da7ebeb34bddbe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/xmla/xml-elements-properties/new-element-xmla.md | RafaJPSantos/bi-shared-docs | 6cadbd4e8308de738424dacb62da7ebeb34bddbe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "New Element (XMLA) | Microsoft Docs"
description: Learn how the New element contains the new file system storage location used by a Folder element.
ms.date: 07/24/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: xmla
ms.topic: reference
ms.author: owend
ms.reviewer: owend
author: minewiskan
manager: kfile
---
# New Element (XMLA)
Contains the new file system storage location used by a [Folder](../xml-elements-properties/folder-element-xmla.md) element.
## Syntax
```xml
<Folder>
...
<New>...</New>
...
</Folder>
```
## Element characteristics
|Characteristic|Description|
|--------------------|-----------------|
|Data type and length|String|
|Default value|None|
|Cardinality|1-1: Required element that occurs once and only once.|
## Element relationships
|Relationship|Element|
|------------------|-------------|
|Parent elements|[Folder](../xml-elements-properties/folder-element-xmla.md)|
|Child elements|None|
## Remarks
The **New** element contains a UNC path that replaces the value of the **Original** element contained by the parent **Folder** element for all objects restored or synchronized, respectively, during a **Restore** or **Synchronize** command. The value of the **Original** element is compared to the value of the **StorageLocation** element for each cube, measure group, or partition and, if a match is found, the value of this element is used to update the **StorageLocation** of the object during restoration or synchronization. | 34.733333 | 530 | 0.683301 | eng_Latn | 0.962099 |
8acbca65c4cb93208b9abdb20e13934528ec2a4e | 8,832 | md | Markdown | doc/OlderChanges.md | spijdar/cltorch | 579800c5eb3809324056bd5a6cb1af892938ca5f | [
"BSD-3-Clause"
] | 308 | 2015-06-11T02:41:57.000Z | 2022-01-21T04:31:56.000Z | doc/OlderChanges.md | spijdar/cltorch | 579800c5eb3809324056bd5a6cb1af892938ca5f | [
"BSD-3-Clause"
] | 76 | 2015-06-21T11:57:46.000Z | 2022-02-26T07:23:16.000Z | doc/OlderChanges.md | spijdar/cltorch | 579800c5eb3809324056bd5a6cb1af892938ca5f | [
"BSD-3-Clause"
] | 29 | 2015-06-11T11:15:10.000Z | 2021-11-01T13:54:10.000Z | # Older changes
This page contains older changes, that have been moved from the [Recent Changes](https://github.com/hughperkins/cltorch#recent-changes) section on the main page.
For the most recent changes please see [Recent Changes](https://github.com/hughperkins/cltorch#recent-changes)
* 23rd July:
* Fixed memory leak on Intel HD Graphics
* 22th July:
* Performance improvement:
* All per-element operations are around 2-5 faster on NVIDIA and AMD now
* In the specific, this means that times for Karpathy's [char-rnn](http://github.com/karpathy/char-rnn) are around 2-3 times faster on NVIDIA and AMD cards, compared to before
* [colesbury](https://github.com/colesbury)'s pull request [#176](https://github.com/torch/cutorch/pull/176) ported to cltorch, 'Allow CudaTensors as indices'
* [andresy](https://github.com/andresy)'s pull request [#203](https://github.com/torch/cutorch/pull/203) ported to cltorch, 'expose retain and free for CudaStorage/CudaTensor'
* 19th July:
* Upgrade EasyCL version
* Need to explicitly enable timing now (just in case impacts performance)
* DumpTimings now shows count of number of calls, as well as timings
* 18th July:
* Added custom user kernels
* 16th July:
* Did some cleaning:
* source code now all in `src` directory, to keep the front page on github clean
* moved a bunch of stuff from this page to other pages, ie older changes, and list of what works
* 20x speed boost for Apply kernel, and char-rnn, on Intel HD5500 GPU
* 15th July:
* can pass point ClTensor now also to `:lt()`, `:gt()`, `:le()`, `:ge()`, `:eq()`, `:ne()`
* added profiling:
* `cltorch.setProfiling(1)` to enable (has a performance hit obviously, whilst enabled)
* `cltorch.dumpProfiling()` to dump timings since last dump
* timings are cumulative over kernel filename/kernelname combination
* 14th July:
* created point tensors:
* `:sum()` can return a point tensor, which stays on the GPU, eliminating gpu pipeline stall, see presentation above
* `add()`, `csub()`, `mul` and `div` can all accept a point tensor in place of their scalar argument
* `:prod()` can return a point tensor too now, as can `:max()`, `:min()`, `:all()`, and `:any()`
* can pass point ClTensor also to `:fill()` now
* 13th July:
* possible to use tensors without `:setDevice()` to same device as them first. Tested with `:sum()`, `:sum(1)`, and `:sum(2)` for now
* 12th July:
* add `cltorch.about()`, to provide build information
* 10th July:
* added cmin, cmax, for tensors and scalars (as per https://github.com/torch/cutorch/pull/198/files )
* 5th July:
* fixed some Mac build/load issues, so builds/loads on Mac now (thank you to mlajtos, szagouyko, centime, luo123n, and pdhvip for their enormous help with fixing this :-) )
* getDeviceProperties and so on now only show GPU and APU devices, ignores pure CPU devices (which pure CPU devices are not supported by cltorch at this time)
* added `cltorch.test()`, which runs unit tests
* 4th July:
* `torch.save` and `torch.load` implemented
* 27th June:
* fixed more bugs involving Tensor copy. Hopefully should be fixed permanently now :-P
* added `cltorch.dumpTimings()`, which will dump cumulative timings for various parts of the engine. It's mostly for usage by maintainers / optimizers.
* massive optimization for anything involving apply, reduce, reduceall, index etc => this makes the ltsm script at [karpathy/char-rnn](https://github.com/karpathy/char-rnn) run significantly faster when using OpenCL now :-)
* 26th June:
* add addcmul, and unit test
* add addcdiv, and unit test
* added `apply2` and `apply3` as synonyms for `map` and `map2`
* can use `x`, `y`, `z` instead of `*out`, `*in1` and `*in2`, in `apply`, `map`, etc
* fix a buffer copy bug (note: implies updating EasyCL, and rebuilding EasyCL, see notes on updating above)
* 25th June:
* added bernoulli (generates on host-side for now, but I guess this is fast enough for many things?)
* 24th June:
* added tests for `gather`, and removed some spam
* added `scatter` (for both tensor or float source)
* 23rd June:
* Fixed bug where operations such as apply and map on tensors with non-zero offset didnt work correctly (ie, `fill` etc after `narrow` or similar)
* Added `gather`
* 22nd June:
* Under the hood:
* Moved marking a buffer dirty, ie modified on the GPU, from [THClTensorMathBlas.cpp](https://github.com/hughperkins/cltorch/blob/9133fb4f0a23a86c48dcb5dc9cc7d44f44850a3f/lib/THCl/THClTensorMathBlas.cpp#L202) to [THClBlas.cpp](https://github.com/hughperkins/cltorch/blob/9133fb4f0a23a86c48dcb5dc9cc7d44f44850a3f/lib/THCl/THClBlas.cpp#L424)
* This fixes a bug in [clnn](https://github.com/hughperkins/clnn), where the results of a convolutional layer were not being written back to the output tensor
* tests pass now on an AMD gpu (actually I managed to scrounge access to a W9100 :-D )
* 21st June:
* Under the hood:
* Upgraded new THClKernels class to handle `THClTensorInfo`
* migrated Reduce, ReduceAll, etc to use THClKernels
* upgraded EasyCL to handle `uint`, `long`, `ulong`
* added `cltorch.finish()` and `cltorch.synchronize()`, both do same thing, which is a `clFinish()`, on current device
* made it possible to require both cutorch and cltorch, as long as one requires cutorch followed by cltorch, in that order
* 20th June:
* rename new `sub` method to `csub` so doesnt collide with existing `sub`
* added `cltorch.setTrace(1|0)`, which prints out every allocate or copy of gpu buffers (named 'wrapper's)
* removed `set` and `get` methods, because cause repeated gpu buffer copy (actually, get not too bad, but does copy whole buffer; set copies whole buffer, repeatedly :-P )
* modifed `ClStorage.__string__` to first copy whole storage to FloatStorage, once, then convert this to string, rather than using now non-existent `get`
* `torch.ClTensor{3,5,2}` will now first create this as a `FloatTensor` then call `copy` on this, to convert whole Tensor/Storage to `ClTensor` (avoids repeated `set` calls)
* added `normall`, ie can do `torch.norm(c)`, `torch.norm(c, exponent)`
* added `prod`, `prod(1)`, `prod(2)`
* `max(1)` and `min(1)` now return the indices too, as well as the max. Ditto for dimension 2.
* added `:all()` and `:any()`
* added `:indexFill()`
* added `:indexCopy()`
* added `:indexSelect()`
* added `torch.cumsum(x,2)` and `torch.cumsum(x,1)`
* added `torch.cumprod(x,2)` and `torch.cumprod(x,1)`
* Under the hood:
* created new THClKernels class:
* handles THClTensor kernel input
* provides `run` method that takes a dim3 `grid` and `block` input, as for cutorch kernel launches
* migrated TensorIndexed to use THClKernels
* 19th June:
* fixed a compile bug in EasyCL, when lua5.2/5.3 header files are present (not tested yet)
* added `a:sub(b)` method, which does element-wise subtraction of b from a, and puts results in a
* migrated to new version of EasyCL, with one fewer waitforevents, to try to boost perf a bit
* added `apply`, `map`, `map2` :-) (which run on GPU, at full speed)
* added 2-pass reduceall, ie can do reduceall on much larger tensors now
* 18th June:
* fixed a bug in clBLAS sger that meant that sger crashed on even tiny 5x5 matrices on nvidia, using either rowmajor or columnmajor :-) https://github.com/clMathLibraries/clBLAS/pull/109
* note that you will need to `git submodule update`, and `rm -Rf build/clBLAS`, in order to pick up the new version of clBLAS
* moved clBLAS initialization code out of inner loops => huge speed boost
* added `:neg()` operator, which negates the tensor (like `-` but without reallocation, I think)
* 15th-17th June:
* pow(x,y) no longer returns undefined values for x containing, or being, negative
* pow(x,y) now uses `pown` when y is an exact integer scalar (ie where (float)((int)y) == y)
* when no opencl-enabled devices enabled, now raise a THError, with a clear error message, rather than throwing a C++ exception, with no error message output
* under the hood: added cltorch.getState()
* renamed libTHCL.so to libTHCl.so
* added THCl include files to `install` section
* masked fill works now
* torch.addr works now
* 15th June:
  * `C:t()` working
* 14th June:
  * ReduceAll working :-) For now that means: `sometensor:sum()` works
  * `sometensor:sum(1)` and `sometensor:sum(2)` working too now :-)
  * `A:min()`, `A:max()` added
* created unit tests, in [test](test) directory, [cltorch-unit-tensor.lua](test/cltorch-unit-tensor.lua) which pass
* 13th June:
* added `cltorch.setDevice`/`cltorch.getDevice`, see [test-device.lua](test/test-device.lua) for an example
* added EasyCL includes to EasyCL install section, to remove build errors with "EasyCL.h" not found, etc
# rollup-plugin-natives
[](https://npmjs.org/package/rollup-plugin-natives)
Extract native modules (.node files) while creating a rollup bundle and put them in one place.
## Installation
```bash
npm install --save-dev rollup-plugin-natives
```
## Usage
In some cases you have native dependencies (usually required via `bindings` or `node-pre-gyp`)
and you have to put them somewhere accessible to the rolled-up bundle.
This plugin does exactly that.
```js
// rollup.config.js
import nativePlugin from 'rollup-plugin-natives';
export default {
input: 'main.js',
output: {
file: 'dist/bundle.js',
format: 'cjs'
},
plugins: [
nativePlugin({
// Where we want to physically put the extracted .node files
copyTo: 'dist/libs',
// Path to the same folder, relative to the output bundle js
destDir: './libs',
// Use `dlopen` instead of `require`/`import`.
            // This must be set to true if using a different file extension than '.node'
dlopen: false,
// Modify the final filename for specific modules
            // A function that receives the full path of the original file and returns the desired filename
            map: (modulePath) => 'filename.node',
            // Or a function that returns the desired filename and a specific destination to copy to
            map: (modulePath) => ({ name: 'filename.node', copyTo: 'C:\\Dist\\libs\\filename.node' }),
// Generate sourcemap
sourcemap: true
})
]
};
```
## License
MIT
## About...
This plugin was created by me and shared with you courtesy of [Silverbolt](http://silverbolt.ai/), the company I work for.
---
media: "Sciences et Avenir"
date: 1-10-2016
remoteurl: "http://www.lsm.in2p3.fr/communication/revue_presse/2016-10_Science%20et%20avenir.pdf"
title: "L'autre hypothèse : la particule de Majorana"
thumbnail: "http://www.lsm.in2p3.fr/communication/revue_presse/logo%20sciences%20et%20avenir.jpg"
abstract: |
  The answer to the antimatter enigma could emerge from the mountains on the Franco-Italian border. There, beneath 1,700 metres of rock, the Modane Underground Laboratory (LSM) is an oasis shielded from the invisible particles that constantly bombard the Earth's surface and pass through our bodies every second. (Article in French)
---
@cond CUDA_MODULES
GPU-Accelerated Computer Vision (cuda module) {#tutorial_table_of_content_gpu}
=============================================
Squeeze every last bit of computational power out of your system by using the power of your video card to
run the OpenCV algorithms.
- @subpage tutorial_gpu_basics_similarity
*Compatibility:* \> OpenCV 2.0
*Author:* Bernát Gábor
    This will give you a good grasp of how to approach coding with the GPU module, once you already know
    how to handle the other modules. As a test case it will port the similarity methods from the
tutorial @ref tutorial_video_input_psnr_ssim to the GPU.
- @subpage tutorial_gpu_thrust_interop
*Compatibility:* \>= OpenCV 3.0
This tutorial will show you how to wrap a GpuMat into a thrust iterator in order to be able to
use the functions in the thrust library.
@endcond
---
title: Windows Application Log | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.suite: ''
ms.technology:
- reporting-services-native
ms.tgt_pltfrm: ''
ms.topic: conceptual
helpviewer_keywords:
- Windows application logs [Reporting Services]
- logs [Reporting Services], Windows application logs
- application logs [Reporting Services]
ms.assetid: 742fd00e-aa6c-4c8a-b58f-c03c489b1699
caps.latest.revision: 31
author: markingmyname
ms.author: maghan
manager: craigg
ms.openlocfilehash: 5cc0848226b80c2c77345ed737f8acff68eba5bc
ms.sourcegitcommit: c18fadce27f330e1d4f36549414e5c84ba2f46c2
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/02/2018
ms.locfileid: "37150063"
---
# <a name="windows-application-log"></a>Windows Application Log
  [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] writes event messages to the Windows application log. You can use the message information written to the application log to review the events generated by report server applications running on the local system.
## <a name="viewing-report-server-events"></a>Viewing Report Server Events
 You can use Event Viewer to view the log files and to filter the messages they contain. For more information about the event messages, see [Errors and Events Reference (Reporting Services)](../troubleshooting/errors-and-events-reference-reporting-services.md). For more information about the Windows application log and Event Viewer, see the Windows product documentation.
 [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] has the following three event sources:
- Report Server (the Report Server Windows service)
- Report Manager
- Schedule and Delivery Processor
 [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] does not provide a way to disable logging of report server application events or to control which events are logged. The schema that describes report server event logging is fixed; it cannot be extended to support custom events.
 The following table describes the types of events that the report server writes to the application event log.
|Event type|Description|
|----------------|-----------------|
|Information|An event that describes a successful operation (for example, when the Report Server service starts)|
|Warning|An event that indicates a possible problem (for example, low free disk space)|
|Error|An event that describes a serious problem (for example, the service failed to start)|
|Success Audit|A security event that describes a successful logon|
|Failure Audit|An event that is logged when a logon attempt fails|
## <a name="see-also"></a>See Also
 [Reporting Services Log Files and Sources](../report-server/reporting-services-log-files-and-sources.md)
 [Errors and Events Reference (Reporting Services)](../troubleshooting/errors-and-events-reference-reporting-services.md)
---
title: IoC References
layout: default
category: Infusion
---
The Infusion IoC Framework uses a basic syntax for referencing objects in the current [context](Contexts.md).
References always take the syntactic form `{context-name}.some.path.segments`, which we name as the type `<reference>` - the meaning and form of the context name can vary and have a different meaning in different contexts:
<table id="reference-table">
<thead>
<tr>
            <th colspan="2">Different permitted forms for a &lt;reference&gt; string</th>
</tr>
<tr>
<th>Syntax</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
            <td><code>{&lt;componentRef&gt;}.&lt;path to member&gt;</code></td>
<td>
<ul>
<li>
                        <code>&lt;componentRef&gt;</code> is a reference to a component via one of its context names. It may be:
<ul>
<li><code>that</code>, to reference the current component e.g. <code>{that}</code></li>
<li>the fully qualified name of the component's type or one of its <a href="ComponentGrades.md">grade names</a> e.g. <code>{fluid.pagedTable}</code></li>
<li>the <strong>short name</strong> of the component's type or one of its <strong>gradeNames</strong>, i.e. the last segment of the fully namespaced name, e.g. <code>{pagedTable}</code></li>
<li>the component's <strong>member name</strong>, i.e. the name used when defining a subcomponent in a components block</li>
                            <li>an entire <code>&lt;reference&gt;</code> string in itself. This is useful in highly dynamic cases where the context name should be determined from some other options material, e.g. <code>{{that}.options.targetContext}</code></li>
</ul>
</li>
<li>
                        <code>&lt;path to member&gt;</code> is an EL path into the referenced component's members (this path may be empty)
</li>
</ul>
</td>
</tr>
<tr>
            <td><code>{arguments}.&lt;index&gt;</code></td>
<td>
<ul>
<li>
<code>{arguments}</code> refers to the array of arguments passed to a function. This form is used in the definition of <a href="Invokers.md">Invokers</a>
</li>
<li>
                        <code>&lt;index&gt;</code> is the 0-based numeric index of the desired argument
<div class="infusion-docs-note"><strong>Note:</strong>
The <code>arguments</code> context name can only be used in contexts where arguments are in scope - for example, as part of the arguments
to an <a href="InfusionEventSystem.md#registering-a-listener-to-an-event">event listener</a> or <a href="Invokers.md">invoker</a>, or within component configuration that is
                        being instantiated as part of a <a href="SubcomponentDeclaration.md#dynamic-subcomponents-with-a-source-event">dynamic component with a source event</a>.
</div>
</li>
</ul>
</td>
</tr>
<tr>
            <td><code>{source}.&lt;path to member&gt;</code></td>
<td>
<ul>
<li>
<code>{source}</code> refers to the particular instance of the <code>sources</code> array which was used to instantiate a
particular <a href="SubcomponentDeclaration.md#dynamic-components">dynamic component</a>. This context name is not valid outside a dynamic component definition.
</li>
</ul>
</td>
</tr>
<tr>
            <td><code>{&lt;iocss expression&gt;}.&lt;path to member&gt;</code></td>
<td>
<ul>
<li>
                        <code>&lt;iocss expression&gt;</code> is an <a href="IoCSS.md">IoCSS</a> expression referencing a component.
</li>
<li>
                        <code>&lt;path to member&gt;</code> is an EL path into the referenced component's members.
<div class="infusion-docs-note"><strong>Note:</strong>
Full IoCSS expressions are not valid in all contexts. They can primarily be used in the <code>target</code> field of the <a href="IoCSS.md#distributeoptions-format"><code>distributeOptions</code></a> record.
</div>
</li>
</ul>
</td>
</tr>
</tbody>
</table>
## Where To Use IoC References
IoC references may be used almost anywhere within a component's options, for example:
* in the [subcomponent definitions](SubcomponentDeclaration.md),
* in [invoker](Invokers.md) specifications,
* in [options distributions](IoCSS.md),
* in [listeners specifications](InfusionEventSystem.md#registering-a-listener-to-an-event), including as the left-hand side key specifying an event in a listeners block,
* in [events specifications](InfusionEventSystem.md#declaring-an-event-on-a-component), including as the left-hand side key specifying an event in an events block,
* or indeed almost anywhere else
## How IoC References are resolved
For a conventional IoC reference (of the style `<componentRef>` rather than the style `<iocss expression>`), a search is begun upwards from the site of the reference in the component tree to find the first component which matches the context name.
The following diagram shows a possible such reference site in green:
<!-- Diagram source within Google Drawings at https://docs.google.com/drawings/d/14ESiMe0q8_lzVsAE-CkUvZdU42A_rs0_IfYg54pNFjA/edit -->

The set of components which is in scope for resolution from this site is shown in yellow (circles) and orange (diamonds) in this diagram. These are components which are either:

1. an ancestor of the component holding the reference site, or
2. a sibling of such an ancestor, or
3. a component anywhere in the tree which has been marked with the grade [fluid.resolveRoot](Contexts.md#global-components-fluidresolveroot-and-fluidresolverootsingle) - these are the ones shown as orange diamonds
The context reference matches a component if it matches via the 2nd, 3rd or 4th rules in the first row of the [above table](#reference-table) - **either**
it agrees with a fully-qualified grade or type name of a component, **or** it agrees with the last path segment of such a name, **or** it agrees with the component's member name.
If no context name matches anywhere in the tree, the reference expression resolves to `undefined`. In this case, if the path segments following the context name in the reference expression are not empty, the framework will throw an error.
Components which are not in scope for resolution from the reference site (shown as a green pentagon) are shown as blue squares.
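To make the scope rules concrete, here is a simplified standalone sketch of the upward search (this is illustrative only, NOT Infusion's actual implementation, and the component shapes are invented for the example):

```javascript
// A component matches a context name if the name equals one of its grade
// names, the last segment of such a name, or its member name.
function matchesContext(component, contextName) {
    return component.gradeNames.some(function (grade) {
        return grade === contextName || grade.split(".").pop() === contextName;
    }) || component.memberName === contextName;
}

// Walk upwards from the reference site, checking each node and that node's
// siblings, and return the first match (or undefined when nothing matches).
function resolveContext(site, contextName) {
    for (var node = site; node; node = node.parent) {
        var candidates = [node].concat(node.parent ?
            node.parent.children.filter(function (c) { return c !== node; }) : []);
        for (var i = 0; i < candidates.length; i++) {
            if (matchesContext(candidates[i], contextName)) {
                return candidates[i];
            }
        }
    }
    return undefined;
}

var root = { memberName: "root", gradeNames: ["examples.pager"], children: [] };
var site = { memberName: "rangeAnnotator", gradeNames: ["examples.rangeAnnotator"],
    parent: root, children: [] };
root.children.push(site);
console.log(resolveContext(site, "pager") === root);          // true
console.log(resolveContext(site, "rangeAnnotator") === site); // true
console.log(resolveContext(site, "missing"));                 // undefined
```

Note that, just as described above, an unmatched context name simply yields `undefined` rather than an error.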
## Examples of `{<componentRef>}`
In the example below, the IoC reference `{that}` refers to the component in which it is being used.
```javascript
fluid.defaults("fluid.prefs.separatedPanel", {
gradeNames: ["fluid.prefs.prefsEditorLoader"],
listeners: {
onCreate: {
funcName: "fluid.prefs.prefsEditorLoader.hideReset",
args: ["{that}"]
}
}
});
```
This could equally be written using the short name of the `fluid.prefs.separatedPanel` component, as shown below:
```javascript
fluid.defaults("fluid.prefs.separatedPanel", {
gradeNames: ["fluid.prefs.prefsEditorLoader"],
listeners: {
onCreate: {
funcName: "fluid.prefs.prefsEditorLoader.hideReset",
args: ["{separatedPanel}"]
}
}
});
```
The above two examples are equivalent.
In the example below, the IoC expression `{fluid.prefs.enactor.tableOfContents}` refers to the component being defined by the `defaults` block.
The short name `tableOfContents` must not be used here, because it would not be unique: it would be unclear whether the short name was referring to `fluid.prefs.enactor.tableOfContents` or `fluid.tableOfContents`.
```javascript
fluid.defaults("fluid.prefs.enactor.tableOfContents", {
gradeNames: ["fluid.viewComponent", "fluid.prefs.enactor"],
components: {
tableOfContents: {
type: "fluid.tableOfContents",
container: "{fluid.prefs.enactor.tableOfContents}.container",
options: {
// ...
}
}
}
});
```
Another way to avoid the ambiguity mentioned above would be to use the member name, which is the name used when defining the subcomponent in the components block.
In the example below, `{toc}` refers to the member name used to define the subcomponent in the components block.
```javascript
fluid.defaults("fluid.prefs.enactor.tableOfContents", {
gradeNames: ["fluid.viewComponent", "fluid.prefs.enactor"],
components: {
toc: {
type: "fluid.tableOfContents",
container: "{fluid.prefs.enactor.tableOfContents}.container",
options: {
                components: {
                    levels: {
                        type: "fluid.tableOfContents.levels",
                        container: "{toc}.dom.tocContainer"
                    }
                }
}
}
}
});
```
## Examples of `{<componentRef>}.<path to member>`
The example below includes several IoC references. All of them are inside a subcomponent declaration and all include `{controllers}`, which in this case is a reference to the parent component. Specifically:
* `{controllers}.model` is a reference to the model that is a member of the parent component - note that this reference sets up a permanent [model relay](ModelRelay.md) between these two models;
* the IoC expressions in the subcomponent's events block are references to events defined on the parent component's event block;
* `{controllers}.dom.scrubberContainer` is a reference to one of the selectors defined on the parent component.
```javascript
fluid.defaults("fluid.videoPlayer.controllers", {
gradeNames: ["fluid.viewComponent"],
selectors: {
scrubberContainer: ".flc-videoPlayer-scrubberContainer"
},
events: {
onScrub: null,
onStartScrub: null,
afterScrub: null
},
components: {
scrubber: {
type: "fluid.videoPlayer.controllers.scrubber",
container: "{controllers}.dom.scrubberContainer",
options: {
model: "{controllers}.model",
events: {
onScrub: "{controllers}.events.onScrub",
afterScrub: "{controllers}.events.afterScrub",
onStartScrub: "{controllers}.events.onStartScrub"
}
}
}
}
});
```
## Examples of `{arguments}.n`
The example below uses the `{arguments}.n` syntax to deliver the first and second arguments passed to listeners of the `onMove` event to the `fluid.moduleLayout.onMoveListener` function.
```javascript
fluid.defaults("fluid.moduleLayoutHandler", {
gradeNames: ["fluid.layoutHandler"],
events: {
onMove: "{reorderer}.events.onMove"
},
listeners: {
onMove: {
funcName: "fluid.moduleLayout.onMoveListener",
args: ["{arguments}.0", "{arguments}.1", "{that}.layout"]
}
}
});
```
## Examples of `{<iocss expression>}`
The example below uses an [IoCSS](IoCSS.md) expression `{that > moreText}.options.selectors.images`. The expression refers to the `images` selector in the `moreText` subcomponent that is a direct descendant of the current component.
```javascript
fluid.defaults("gpii.explorationTool.enactors.showMoreText", {
gradeNames: ["fluid.viewComponent", "fluid.prefs.enactor"],
selectors: {
images: "img, [role~='img']"
}
});
fluid.defaults("gpii.explorationTool.enactorSet", {
gradeNames: ["fluid.uiEnhancer.starterEnactors"],
components: {
moreText: {
type: "gpii.explorationTool.enactors.showMoreText"
}
},
distributeOptions: {
source: "{that}.options.moreTextSelector",
removeSource: true,
target: "{that > moreText}.options.selectors.images"
}
});
```
## More Examples
### Example 1
```javascript
// Range Annotator
fluid.defaults("fluid.pagedTable.rangeAnnotator", {
gradeNames : ["fluid.component"],
listeners : {
"{pagedTable}.events.onRenderPageLinks" : {
funcName : "fluid.pagedTable.rangeAnnotator.onRenderPageLinks",
args : ["{pagedTable}", "{arguments}.0", "{arguments}.1"]
}
}
});
// Paged Table
fluid.defaults("fluid.pagedTable", {
gradeNames : ["fluid.pager", "fluid.table"],
components : {
rangeAnnotator : {
type : "fluid.pagedTable.rangeAnnotator"
}
}
// ...
});
```
The above example defines a `rangeAnnotator`, which is used as a subcomponent of a pagedTable. This definition uses several IoC references:
* the expression `{pagedTable}.events.onRenderPageLinks` is used to refer to the `onRenderPageLinks` event of the `pagedTable` component
* the IoC references:
    * `{pagedTable}.events.onRenderPageLinks` refers to the `pagedTable` component
    * `{arguments}.0` and `{arguments}.1` refer to the first and second arguments supplied when the source event `onRenderPageLinks` is fired
### Example 2
```javascript
fluid.defaults("fluid.videoPlayer.languageControls.eventBinder", {
gradeNames: ["fluid.component"],
listeners: {
"{button}.events.onPress": "{menu}.toggleView"
}
});
```
The above example uses two IoC references:
* `{button}.events.onPress` refers to the `onPress` event of the `button` component
* `{menu}.toggleView` refers to the `toggleView` method of the `menu` component
### Example 3
```javascript
fluid.defaults("fluid.uploader", {
gradeNames: ["fluid.viewComponent"],
components: {
uploaderImpl: {
type: "fluid.uploaderImpl"
}
},
distributeOptions: {
source: "{that}.options",
removeSource: true,
exclusions: ["components.uploaderContext", "components.uploaderImpl"],
target: "{that > uploaderImpl}.options"
}
});
```
The above example uses IoC references in the `distributeOptions` block:
* `{that}.options` identifies the `options` block of the current `that` (i.e. `fluid.uploader`)
* `{that > uploaderImpl}.options` identifies the `uploaderImpl` subcomponent of the current `that` (`fluid.uploader`) (see [IoCSS](IoCSS.md) for more information about this notation)
## Reserved IoC Names
The following context names are reserved within the IoC system:
* that
* arguments
* source
* sourcePath
* change
* instantiator
As a result, you should typically avoid defining types that use these names as the final segment (e.g. `todoList.source` or `todoList.panel.that`), since it will be impossible to resolve references to these components in many contexts.
---
title: "Constantine Capers: Flashes of Memory"
date: 2020-12-26T04:27:28.258Z
progress: 45
published: false
bios:
- name: Byron Constantine
image: /src/content/assets/untitled_artwork.png
    description: This is Byron Constantine, a private detective with short-term
      memory loss. Every day he wakes up and can't remember the day before. He's
      managed to solve over 40 cases despite that. He's 27 years old, has an
      impeccable sense of style, and plays the piano. How did he end up with
      short-term memory loss? Well, you might find out in this book.
- name: Mira Blayse
image: /src/content/assets/untitled_artwork-1.png
description:
This is Mira Blayse, an artist and accidental non-conformist. It’s
not her fault that her hair doesn’t stay styled. While she’s shy and prone
to blush, she isn’t afraid to go out of her comfort zone to find the
truth. She’s 22 years old, has a twin, is ambidextrous, and has an
unhealthy love of French toast.
---
Blurb Coming Soon!
---
path: "/blog/puppeteer"
date: "2019-07-04"
title: "React E2E testing with Puppeteer in CI - Part 1"
---
When we work on a project we want to code **fast**, to be *agile*. Thus, we focus on features; sometimes we do refactoring, but I confess that tests are often forgotten. However, there comes a time when the codebase is so big that a little change can break the whole application, and it can become scary to push new code to production.
A solution could have been to test every single React component in our codebase to reach the goal of 100% test coverage, but we didn't want to unit test our 1000 components, which could change at any time (remember, we're agile 🐇).
The best alternative we’ve found at [habx](https://www.habx.com) has been to integrate puppeteer’s tests in our CI. As Puppeteer is using chrome headless, it’s perfectly working in a docker environment that are available in CI. Thus, our tests are ran every time we push to a branch, for different roles and in **parallel** ! Theses articles will explain how we stabilize this testing process.
<table style="width: 500px;margin: 0 auto;">
<tbody>
<tr>
<td style="padding: 0 32px">
<img src="./puppeteer.jpg" alt="puppeteer"/>
</td>
<td style="padding: 0 32px">
<img src="./circleci.jpg" alt="circleci"/>
</td>
</tr>
</tbody>
</table>
# Setting up environments
First we need to be able to run Puppeteer and user actions in a test environment. This is already done by *smooth-code* with [jest-puppeteer](https://github.com/smooth-code/jest-puppeteer). So the basic config is as follows.
<br/><br/>
##### jest-puppeteer.config.js (at root folder)
```js
module.exports = {
launch: {
headless: true, // you can change that if you want to see what you're doing
args: [
'--no-sandbox',
'--disable-setuid-sandbox',
'--disable-gpu',
'--disable-dev-shm-usage',
'--disable-web-security',
],
},
}
```
<br/>and in your jest config ⬇
<br/><br/>
##### package.json for example
```json
{
...
"jest": {
"preset": "jest-puppeteer",
...
},
...
}
```
<br/>**Easy as that!** 👍<br/><br/>
Now for the CI, Puppeteer needs some dependencies, so we add them to the Dockerfile we use.<br/><br/>
##### Dockerfile to use in CI
```dockerfile
FROM node:8-slim
USER root
# See https://crbug.com/795759
RUN apt-get update && apt-get install -yq libgconf-2-4
# Install latest chrome dev package and fonts to support major charsets (Chinese, Japanese, Arabic, Hebrew, Thai and a few others)
# Note: this installs the necessary libs to make the bundled version of Chromium that Puppeteer
# installs, work.
RUN apt-get update && apt-get install -y wget --no-install-recommends \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst ttf-freefont \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get purge --auto-remove -y curl \
&& rm -rf /src/*.deb
# It's a good idea to use dumb-init to help prevent zombie chrome processes.
ADD https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64 /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init
# Uncomment to skip the chromium download when installing puppeteer. If you do,
# you'll need to launch puppeteer with:
# browser.launch({executablePath: 'google-chrome-unstable'})
# ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
# Install puppeteer so it's available in the container.
RUN npm i puppeteer
ENTRYPOINT ["dumb-init", "--"]
CMD ["google-chrome-unstable"]
USER root
```
<br/>At habx we use [CircleCI](https://circleci.com) for our continuous integration, but this example should work with other CIs (it works with [drone](https://drone.io/) 🤖)
<br/>So here is our CircleCI job
<br/><br/>
##### .circleci/config.yml
```yaml
test-puppeteer:
docker:
- image: jean9696/drone-puppeteer
name: puppeteer
steps:
- attach_workspace:
at: /root/
- run:
command: |
# Run a nginx server with our built app inside
apt-get update || true
apt-get -y install nginx
sed -i 's/root.*/root \/root\/project\/build\/;/' /etc/nginx/sites-enabled/default
nginx -c /etc/nginx/nginx.conf
chmod -R a+rx /root/
npm run test
```
(NB: don’t forget to use `--runInBand` jest option as it is recommended by jest-puppeteer)
Now we have our test environment set up. *What's next?*
# Tests setup
Ok! So now we can run headless Chrome locally and in the CI, but our app is private: it needs **authentication**. For now habx is using JWT to authenticate its users, so let's generate tokens for our test users! Thus, we need to configure a [jest global setup file](https://jestjs.io/docs/en/configuration#globalsetup-string) that does that for each test suite.
```js
/* The `page` variable here is globally injected by the jest-puppeteer preset so we can call puppeteer actions */
beforeAll(async () => {
  jest.setTimeout(30000) // good to have since scenarios can take time
  await page.setRequestInterception(true) // we enable puppeteer to intercept network requests
  const generatedJwt = generateJwt()
  // Now we add the authenticated JWT to every request we make
  page.on('request', request => {
const headers = Object.assign({}, request.headers(), {
Authorization: `${generatedJwt}`,
cookie: `jwt=${generatedJwt};`,
})
    // We don't want to send useless data from our test environment, so while we are here, we abort the request if it has nothing to do with our app; otherwise we send it
if (request.url().includes('hotjar') || request.url().includes('segment')) {
request.abort()
} else {
request.continue({ headers })
}
})
})
```
Great! Now we can use our app correctly. But what if something is wrong with the server? We need to log that…
So we add the following code (BTW, if you know a better way to log in Jest, please tell us!).
```js
page.on('pageerror', err => console.log('Page Error: ', err))
page.on('error', err => console.log('Error: ', err))
```
Now we're good to go! You can write your tests with [puppeteer's API](https://github.com/GoogleChrome/puppeteer/blob/v1.19.0/docs/api.md)
# Conclusion
Puppeteer has helped us many times, preventing us from pushing code that would have broken the app. Tests are made with real data, which is a really good point for corner cases that we can't predict. In addition, these tests often help us when we refactor on the client side, but also on the server side, because the tests call real APIs whose changes could otherwise break things.
---
category: Blog
tag: Computer Vision
comments: true
date: 2012-07-26 19:28:20
layout: post
slug: kalman-filter
title: Kalman Filter
keywords: [learn opencv, open source computer vision, kalman filter opencv, tracking with kalman filter, google summer of code]
---
Kalman Filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process, in a way that minimizes the mean of the squared error. It uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those that would be based on a single measurement alone.The filter is named for [Rudolf (Rudy) E. Kálmán](http://en.wikipedia.org/wiki/Rudolf_E._K%C3%A1lm%C3%A1n), one of the primary developers of its theory.
The algorithm works in a two-step process: in the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Because of the algorithm's recursive nature, it can run in real time using only the present input measurements and the previously calculated state; no additional past information is required.
Since the Kalman filter is a recursive estimator, it needs only the estimated state from the previous time step and the current measurement to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. This can be very helpful for improving object tracking. I implemented the Kalman Filter in SimpleCV. As of now, I'm just predicting the center of the object using its current and previous centers. I might add more features to the Kalman Filter later, e.g. pixel velocity, real-time velocity, areaRatio, etc.
**Kalman Filter with OpenCV**:
I tried using OpenCV version 2.4 to implement the Kalman Filter, but it turns out that the bindings are incomplete (see [here](http://answers.opencv.org/question/182/assertion-error-in-kalman-filter-python-opencv-240/)). As of now it's not possible to implement the Kalman Filter using cv2. So, cv it is.
**Create Kalman Filter**
    import cv2.cv as cv

    kalman = cv.CreateKalman(4, 2, 0)
    kalman_state = cv.CreateMat(4, 1, cv.CV_32FC1)
    kalman_process_noise = cv.CreateMat(4, 1, cv.CV_32FC1)
    kalman_measurement = cv.CreateMat(2, 1, cv.CV_32FC1)
Kalman filter is now ready to be used.
**Set Kalman Filter**
# set previous state for prediction
kalman.state_pre[0,0] = x
kalman.state_pre[1,0] = y
kalman.state_pre[2,0] = 0
kalman.state_pre[3,0] = 0
# set kalman transition matrix
kalman.transition_matrix[0,0] = 1
kalman.transition_matrix[0,1] = 0
kalman.transition_matrix[0,2] = 0
kalman.transition_matrix[0,3] = 0
kalman.transition_matrix[1,0] = 0
kalman.transition_matrix[1,1] = 1
kalman.transition_matrix[1,2] = 0
kalman.transition_matrix[1,3] = 0
kalman.transition_matrix[2,0] = 0
kalman.transition_matrix[2,1] = 0
kalman.transition_matrix[2,2] = 0
kalman.transition_matrix[2,3] = 1
kalman.transition_matrix[3,0] = 0
kalman.transition_matrix[3,1] = 0
kalman.transition_matrix[3,2] = 0
kalman.transition_matrix[3,3] = 1
# set Kalman Filter
cv.SetIdentity(kalman.measurement_matrix, cv.RealScalar(1))
cv.SetIdentity(kalman.process_noise_cov, cv.RealScalar(1e-5))
cv.SetIdentity(kalman.measurement_noise_cov, cv.RealScalar(1e-1))
cv.SetIdentity(kalman.error_cov_post, cv.RealScalar(1))
**Predict new points using Kalman filter**
    kalman_prediction = cv.KalmanPredict(kalman)
    predict_pt = (kalman_prediction[0,0], kalman_prediction[1,0])
**Kalman Correction**
    kalman_estimated = cv.KalmanCorrect(kalman, kalman_measurement)
    state_pt = (kalman_estimated[0,0], kalman_estimated[1,0])
**Changing Kalman Measurement**
    kalman_measurement[0, 0] = x
    kalman_measurement[1, 0] = y
**Pseudo Code**
    Create Kalman Filter
    Start Tracking
    While (some condition)
        x, y = track()
        Set Kalman Filter
        Change Kalman Measurements
        Predict Kalman
        Kalman Correction
        Update the center of the object
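For intuition, here is a tiny, library-free sketch of the predict step for a constant-velocity model (state = [x, y, vx, vy]); `cv.KalmanPredict` does this (plus the covariance bookkeeping) internally:

```python
# Library-free sketch: one predict step of a constant-velocity model.
F = [[1, 0, 1, 0],   # x'  = x + vx
     [0, 1, 0, 1],   # y'  = y + vy
     [0, 0, 1, 0],   # vx' = vx
     [0, 0, 0, 1]]   # vy' = vy

def predict(state):
    """Return F * state, i.e. where we expect the object in the next frame."""
    return [sum(F[r][c] * state[c] for c in range(4)) for r in range(4)]

print(predict([10.0, 20.0, 2.0, -1.0]))  # -> [12.0, 19.0, 2.0, -1.0]
```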
<iframe width="420" height="315" src="http://www.youtube.com/embed/ZGhGeKQMyVA" frameborder="0" allowfullscreen></iframe>
This was the video where I use Kalman Filter to predict the center of the objects. I am not using Kalman Correction as of now. I have implemented Kalman Filter in SimpleCV Tracking Feature. You can find my GitHub SimpleCV Kalman branch [here](https://github.com/jayrambhia/SimpleCV/tree/kalman_filter).
**Using Kalman Filter with SimpleCV**
    from SimpleCV import *

    def camshift():
        cam = Camera()
        img = cam.getImage()
        d = Display(img.size())
        bb1 = getBB() # Get Bounding Box from some method
        fs1 = []
        while True:
            try:
                img1 = cam.getImage()
                fs1 = img1.track("camshift", fs1, img, bb1, num_frames=5)
                fs1.drawBB()
                fs1.draw()
                fs1.drawPredict(color=Color.RED)
                img1.save(d)
            except KeyboardInterrupt:
                break

    camshift()
So, this is it. Kalman Filter.
P.S. Going back to campus in a few days. Quite excited.
**Example 1: Query lifecycle hooks**
Input:
```
tccli as DescribeLifecycleHooks --cli-unfold-argument
```
Output:
```
{
"Response": {
"TotalCount": 4,
"LifecycleHookSet": [
{
"LifecycleHookName": "terminate-topic",
"LifecycleTransitionType": "NORMAL",
"AutoScalingGroupId": "asg-8fbozqja",
"HeartbeatTimeout": 120,
"NotificationMetadata": "topic",
"NotificationTarget": {
"TargetType": "CMQ_TOPIC",
"TopicName": "one-topic",
"QueueName": ""
},
"CreatedTime": "2019-04-19T02:59:30Z",
"DefaultResult": "ABANDON",
"LifecycleHookId": "ash-oq76wsrx",
"LifecycleTransition": "INSTANCE_TERMINATING"
},
{
"LifecycleHookName": "launch-queue",
"LifecycleTransitionType": "NORMAL",
"AutoScalingGroupId": "asg-8fbozqja",
"HeartbeatTimeout": 120,
"NotificationMetadata": "queue",
"NotificationTarget": {
"TargetType": "CMQ_QUEUE",
"TopicName": "",
"QueueName": "one-queue"
},
"CreatedTime": "2019-04-19T02:57:14Z",
"DefaultResult": "CONTINUE",
"LifecycleHookId": "ash-fbjiexz7",
"LifecycleTransition": "INSTANCE_LAUNCHING"
},
{
"LifecycleHookName": "one-hook",
"LifecycleTransitionType": "NORMAL",
"AutoScalingGroupId": "asg-8fbozqja",
"HeartbeatTimeout": 360,
"NotificationMetadata": "",
"NotificationTarget": {
"TargetType": "",
"TopicName": "",
"QueueName": ""
},
"CreatedTime": "2019-04-19T02:56:02Z",
"DefaultResult": "CONTINUE",
"LifecycleHookId": "ash-heyubibl",
"LifecycleTransition": "INSTANCE_LAUNCHING"
},
{
"LifecycleHookName": "one-hook-default",
"LifecycleTransitionType": "NORMAL",
"AutoScalingGroupId": "asg-8fbozqja",
"HeartbeatTimeout": 300,
"NotificationMetadata": "",
"NotificationTarget": {
"TargetType": "",
"TopicName": "",
"QueueName": ""
},
"CreatedTime": "2019-04-19T02:51:24Z",
"DefaultResult": "CONTINUE",
"LifecycleHookId": "ash-8azjzxj9",
"LifecycleTransition": "INSTANCE_LAUNCHING"
}
],
"RequestId": "dff07f6e-bdbc-4532-baeb-e7fb3aebe248"
}
}
```
**Example 2: Query lifecycle hooks using a Filter**
Input:
```
tccli as DescribeLifecycleHooks --cli-unfold-argument \
--Filters.0.Name lifecycle-hook-id \
--Filters.0.Values ash-oq76wsrx ash-fbjiexz7 \
--Filters.1.Name auto-scaling-group-id \
--Filters.1.Values asg-8fbozqja
```
Output:
```
{
"Response": {
"TotalCount": 2,
"LifecycleHookSet": [
{
"LifecycleHookName": "terminate-topic",
"LifecycleTransitionType": "NORMAL",
"AutoScalingGroupId": "asg-8fbozqja",
"HeartbeatTimeout": 120,
"NotificationMetadata": "topic",
"NotificationTarget": {
"TargetType": "CMQ_TOPIC",
"TopicName": "one-topic",
"QueueName": ""
},
"CreatedTime": "2019-04-19T02:59:30Z",
"DefaultResult": "ABANDON",
"LifecycleHookId": "ash-oq76wsrx",
"LifecycleTransition": "INSTANCE_TERMINATING"
},
{
"LifecycleHookName": "launch-queue",
"LifecycleTransitionType": "NORMAL",
"AutoScalingGroupId": "asg-8fbozqja",
"HeartbeatTimeout": 120,
"NotificationMetadata": "queue",
"NotificationTarget": {
"TargetType": "CMQ_QUEUE",
"TopicName": "",
"QueueName": "one-queue"
},
"CreatedTime": "2019-04-19T02:57:14Z",
"DefaultResult": "CONTINUE",
"LifecycleHookId": "ash-fbjiexz7",
"LifecycleTransition": "INSTANCE_LAUNCHING"
}
],
"RequestId": "2d774a6c-bcaa-4805-b0cd-bd64519e2538"
}
}
```
bunyan-prettystream-circularsafe is a stream-based implementation of the [Bunyan][bunyan] CLI tool's pretty printing capabilities. It allows
apps using bunyan to log directly to the console or file in human readable format instead of as JSON without having to
run or pipe into the bunyan tool. This is useful for working with IDEs which do not have the ability to pipe console
output to another application (such as WebStorm).
This library is only really meant for development and should not be used on production environments.
This package is based on a fork of bunyan-prettystream which was originally created by Amar Suhail (https://github.com/mrrama/node-bunyan-prettystream).
This fork was originally created by Nathan Hadfield (https://github.com/hadfieldn/node-bunyan-prettystream), and contains code that makes logging objects with circular references safe,
along with other community-provided fixes and updated dependencies.
# Usage
```javascript
var bunyan = require('bunyan');
var PrettyStream = require('bunyan-prettystream');
var prettyStdOut = new PrettyStream();
prettyStdOut.pipe(process.stdout);
var log = bunyan.createLogger({
name: 'foo',
streams: [{
level: 'debug',
type: 'raw',
stream: prettyStdOut
}]
});
```
# Tests
Running the unit tests requires `mocha` to be installed.
```bash
make test
```
## Coverage
```bash
make coverage
```
# License
(The MIT License)
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
[bunyan]: https://github.com/trentm/node-bunyan
# Table of Contents
- [Class: EmcbUDPbroadcastMaster](#emcbudpbroadcastmaster)
- [Class Properties](#emcbudpbroadcastmaster-properties)
- [new EmcbUDPbroadcastMaster(args)](#new-emcbudpbroadcastmasterargs)
- [updateBroadcastUDPkey(key)](#updatebroadcastudpkeykey)
- [updateUnicastUDPkey(key)](#updateunicastudpkeyiddevice-key)
- [getMasterIPAddress()](#getMasterIPAddress)
- [getDevice(ipAddressOrIdDevice)](#getDeviceipaddressoriddevice)
- [discoverDevices([nonce])](#discoverdevicesnonce)
- [createDevice(idDevice, ipAddress, [unicastGetNextSequenceNumber])](#createDeviceidDevice-ipAddress-unicastGetNextSequenceNumber)
- [syncDeviceSequenceNumbers()](#syncdevicesequencenumbers)
- [getNextSequenceNumber([nonce])](#getnextsequencenumbernonce)
- [getBreakerRemoteHandlePosition()](#getbreakerremotehandleposition)
- [getMeterData()](#getmeterdata)
- [getDeviceStatus()](#getdevicestatus)
- [setNextSequenceNumber(desiredNextSequenceNumber)](#setnextsequencenumberdesirednextsequencenumber)
- [setBreakerState(desiredState[, maxAttempts])](#setbreakerstatedesiredstate-maxattempts)
- [setBargraphLEDToUserDefinedColor(enabled[, colorObj, blinking])](#setbargraphledtouserdefinedcolorenabled-colorobj-blinking)
- [setBargraphLEDToUserDefinedColorName(colorName[, duration, blinking])](#setbargraphledtouserdefinedcolornamecolorname-duration-blinking)
- [Class: EmcbUDPdeviceMaster](#emcbudpdevicemaster)
- [Class Properties](#emcbudpdevicemaster-properties)
- [EventEmitter Cheat Sheet](#eventemitter-cheat-sheet)
- [Class: logger](#logger)
- [Constants](#constants)
- [Network Configuration](#network-configuration)
- [EMCB_UDP_PORT](#EMCB_UDP_PORT)
- [EMCB UDP Application Layer](#emcb-udp-application-layer)
- [EMCB_UDP_IMPLEMENTED_PROTOCOL_VERSION](#EMCB_UDP_IMPLEMENTED_PROTOCOL_VERSION)
- [EMCB_UDP_MESSAGE_THROTTLE_TIME_MS](#EMCB_UDP_MESSAGE_THROTTLE_TIME_MS)
- [EMCB_UDP_LONGEST_IMPLEMENTED_MESSAGE_LENGTH](#EMCB_UDP_LONGEST_IMPLEMENTED_MESSAGE_LENGTH)
- [Header](#emcb-udp-application-layer-header)
- [EMCB_UDP_HEADER_START_MASTER](#EMCB_UDP_HEADER_START_MASTER)
- [EMCB_UDP_HEADER_START_SLAVE](#EMCB_UDP_HEADER_START_SLAVE)
- [Message Codes](#message-codes)
- [GET Message Codes](#get-message-codes)
- [EMCB_UDP_MESSAGE_CODE_GET_NEXT_SEQUENCE_NUMBER](#EMCB_UDP_MESSAGE_CODE_GET_NEXT_SEQUENCE_NUMBER)
- [EMCB_UDP_MESSAGE_CODE_GET_DEVICE_STATUS](#EMCB_UDP_MESSAGE_CODE_GET_DEVICE_STATUS)
- [EMCB_UDP_MESSAGE_CODE_GET_BREAKER_REMOTE_HANDLE_POSITION](#EMCB_UDP_MESSAGE_CODE_GET_BREAKER_REMOTE_HANDLE_POSITION)
- [EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA](#EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA)
- [SET Message Codes](#set-message-codes)
- [EMCB_UDP_MESSAGE_CODE_SET_NEXT_SEQUENCE_NUMBER](#EMCB_UDP_MESSAGE_CODE_SET_NEXT_SEQUENCE_NUMBER)
- [EMCB_UDP_MESSAGE_CODE_SET_BREAKER_REMOTE_HANDLE_POSITION](#EMCB_UDP_MESSAGE_CODE_SET_BREAKER_REMOTE_HANDLE_POSITION)
- [EMCB_UDP_MESSAGE_CODE_SET_BARGRAPH_LED_TO_USER_DEFINED](#EMCB_UDP_MESSAGE_CODE_SET_BARGRAPH_LED_TO_USER_DEFINED)
- [EMCB_UDP_MESSAGE_CODES](#EMCB_UDP_MESSAGE_CODES)
- [Enums and Parsed Data](#enums-and-parsed-data)
- [EMCB_UDP_ACK](#EMCB_UDP_ACK)
- [EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_RATE_LIMITED](#EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_RATE_LIMITED)
- [EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_BAD_SEQUENCE_NUMBER](#EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_BAD_SEQUENCE_NUMBER)
- [EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN](#EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN)
- [EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED](#EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED)
- [EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_FEEDBACK_MISMATCH](#EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_FEEDBACK_MISMATCH)
- [EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE](#EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE)
- [Events](#events)
- [EMCB_UDP_EVENT_QUEUE_DRAINED](#EMCB_UDP_EVENT_QUEUE_DRAINED)
- [EMCB_UDP_EVENT_DEVICE_DISCOVERED](#EMCB_UDP_EVENT_DEVICE_DISCOVERED)
- [EMCB_UDP_EVENT_DEVICE_REMOVED](#EMCB_UDP_EVENT_DEVICE_REMOVED)
- [EMCB_UDP_EVENT_DEVICE_IP_ADDRESS_CHANGED](#EMCB_UDP_EVENT_DEVICE_IP_ADDRESS_CHANGED)
- [Errors](#errors)
- [EMCB_UDP_ERROR_TIMEOUT](#EMCB_UDP_ERROR_TIMEOUT)
- [EMCB_UDP_EVENT_PARSER_ERROR](#EMCB_UDP_EVENT_PARSER_ERROR)
- [EMCB_UDP_ERROR_INVALID_DATA_LENGTH](#EMCB_UDP_ERROR_INVALID_DATA_LENGTH)
- [Others](#others)
- [EMCB_UDP_DEVICE_COLORS](#EMCB_UDP_DEVICE_COLORS)
## EmcbUDPbroadcastMaster
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) is the primary class exposed
by `require('emcbUDPmaster')`. This class manages all UDP traffic to the EMCBs.
It facilitates device discovery, [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster)
creation and management, and holds the message queues, manages timeouts, etc.
for both broadcast and unicast traffic.
In addition to the commands listed below,
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) extends the
[EventEmitter](https://nodejs.org/api/events.html) class and makes the events
described in [EventEmitter Cheat Sheet](#eventemitter-cheat-sheet) available to
`.on()`, `.once()`, etc.
```js
const { EmcbUDPbroadcastMaster } = require('emcbUDPmaster');
// or
const master0 = require('emcbUDPmaster').EmcbUDPbroadcastMaster
```
### EmcbUDPbroadcastMaster Properties
In addition to the functions described below, an
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) instance has the following
properties available to access:
- `ipAddress` _(String)_: The local network broadcast IP Address used by the
instance. This will be set asyncronously if no `broadcastIPAddress` is
provided to the constructor
- `port` _(Number)_: The Destination UDP port number used by the instance and
all [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) instances.
- `devices` _(Object)_: An object indexed by individual EMCB device `IP
Addresses`, which holds all [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster)
instances for the discovered devices.
- `udpSocket` _(dgram.Socket)_: The [Node.js udp4
Socket](https://nodejs.org/api/dgram.html#dgram_class_dgram_socket) used for
all local communication on the network.
- `unhandledMessages` _(Number)_: Integer number of times that we have received
data from the network without an active message to process it against. In
other words, this is the number of times EMCBs have provided data beyond the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) instance's timeout for a
message.
### new EmcbUDPbroadcastMaster(args)
Creates an instance of the Broadcast Master object.
- `args` _(Object)_
- `broadcastUDPKey` _(Buffer)_: UDP Key for signing/validating all broadcast
messages
- `unicastUDPKeys` _(Object)_: `[Key]: Value` pairs for unicast UDP Keys for
signing/validating messages
- _`$DEVICE_ID`_ _(Buffer)_: UDP Key for signing/validating all unicast
messages for the particular device ID.
- [`broadcastIPAddress`] _(String)_: Optional broadcast IP address for the
master to use.
- [`ifaceName`] _(String)_: Optional interface name (i.e. in the keys provided
by
[`os.networkInterfaces()`](https://nodejs.org/api/os.html#os_os_networkinterfaces))
to use as the network interface for UDP traffic. This will only be used if
`broadcastIPAddress` is not provided. If this key is __also__ not provided,
the instance will try to determine the "default" network interface via
[`local-ipv4-address`](https://www.npmjs.com/package/local-ipv4-address)
(with the caveats described at the link)
- [`port`] _(String)_: Optional destination UDP port number to use for all
communication. Defaults to [EMCB_UDP_PORT](#EMCB_UDP_PORT).
- [`sequenceNumber`] _(Number)_: Optional "Next" Sequence Number that we will
use when interacting with this device. Defaults to a random number within
the legal UInt32 range of 0 <= x <= 0xFFFFFFFF. **This value should be left
undefined or retreived from non-volatile memory and set to the last highest
sequence number used to maintain cybersecurity.**
- **RETURNS** `instance` _(EmcbUDPbroadcastMaster)_: an instantiated
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster).
```js
const { EmcbUDPbroadcastMaster } = require('./emcbUDPmaster');
var EMCBs = new EmcbUDPbroadcastMaster({
broadcastUDPKey : Buffer.from("DD4253D8725A02A0C1FA3417D809686FE397CC8148EFF5328CE436644849A225", "hex"),
unicastUDPKeys : {
"30000c2a690c7652" : Buffer.from("01C43A38DF5669F3D410602437EC2EF3DAEB12AED3C7EB3FA192D581D2AB9F20", "hex"),
}
})
```
**NOTE** For the class to do anything useful, you need to provide keys for the
devices on your local network gathered from the [EMCB Cloud
API](https://portal.developer.eatonem.com/).
### updateBroadcastUDPkey(key)
Updates the broadcast UDP Key provisioned via the [EMCB Cloud
API](https://portal.developer.eatonem.com/).
- `key` _(Buffer)_: UDP Key for signing/validating all broadcast messages
```javascript
const crypto = require("crypto")
EMCBs.updateBroadcastUDPkey(crypto.randomBytes(32))
// Don't expect to find any EMCBs this way due to the VERY low probability of randomly generating an in use key, but the API syntax should work :)
```
### updateUnicastUDPkey(idDevice, key)
Updates the unicast UDP Key for a particular device provisioned via the [EMCB
Cloud API](https://portal.developer.eatonem.com/).
- `idDevice` _(String)_: Device ID using the unicast key
- `key` _(Buffer)_: UDP Key for signing/validating all unicast messages for this
particular device ID.
```javascript
EMCBs.updateUnicastUDPkey("30000c2a690c7652", Buffer.from("DD4253D8725A02A0C1FA3417D809686FE397CC8148EFF5328CE436644849A225", "hex"))
```
### getMasterIPAddress()
Get the local interface `ipAddress` being used by the [`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster).
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following
data or throws an [`Error`](https://nodejs.org/api/errors.html):
- `ipAddress` _(String)_: The IP Address of the Master's interface that is
being used by the library.
```javascript
EMCBs.getMasterIPAddress()
    .then((data) => console.log(data.ipAddress))
```
### getDevice(ipAddressOrIdDevice)
Get the [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the specified
`ipAddress` or `idDevice`, assuming that it has been successfully discovered and
is communicating with the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster)/[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster).
- `ipAddressOrIdDevice` _(String)_: Local IP Address or Device ID of the device
- **RETURNS** [`instance`] _(EmcbUDPdeviceMaster | undefined)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the given
`ipAddressOrIdDevice` or `undefined` if none was found.
```javascript
console.log(EMCBs.getDevice("30000c2a690c7652").idDevice)
// 30000c2a690c7652
```
### discoverDevices([nonce])
Discover EMCBs on the local network using the provisioned UDP broadcast key.
This is a convenience wrapper that performs 4
[getNextSequenceNumber()](#getnextsequencenumbernonce) commands and returns a
`Promise` that will resolve with the list of all active devices within the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) or reject with an
[`Error`](https://nodejs.org/api/errors.html) if none have been found.
> [`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) will automatically call
> this function every 5 minutes to detect any devices added to the network.
- [`nonce`] _(Buffer)_: Optional 4 byte UInt32 held within a
[`Buffer`](https://nodejs.org/api/buffer.html). Defaults to
`crypto.random(4)`. **NOTE** - `nonce` should **NEVER** be provided in
production code (as a [Cryptographically secure pseudorandom
number](https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator)
is the safest thing to use here to prove device authenticity and prevent
messages from being replayed) but is included in the API in order to allow
developers control over the messages that they send for testing purposes.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following
data or throws an [`Error`](https://nodejs.org/api/errors.html):
- `data` _(Object)_:
- _`$IP_ADDRESS`_ _(EmcbUDPdeviceMaster)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the discovered at the
_`$IP_ADDRESS`_ key.
> **NOTE** - If there are any valid responses, the return `Promise` will
> resolve. It will only reject in the event that no devices have ever been
> discovered. A resolve does NOT guarantee that devices on the network have
> been discovered.
```javascript
EMCBs.discoverDevices()
.then((devices) => {
console.log("DISCOVER DEVICES COMPLETE - found " + Object.keys(devices).length + " EMCBs")
var coloredDeviceArray = []
for(var ipAddress in devices){
var device = devices[ipAddress]
coloredDeviceArray.push(chalk[device.chalkColor](device.idDevice));
}
console.log(coloredDeviceArray.join(chalk.reset(",")))
})
// 3000d8c46a572cf2,3000d8c46a572d8a,3000d8c46a572d5c,3000d8c46a572af0,3000d8c46a572c34,3000d8c46a572b08,3000d8c46a572aba,3000d8c46a572b34
// **NOTE** this (and all logs from each specific device) will be colorized in terminals that support ANSI escape codes!
```
### createDevice(idDevice, ipAddress, [unicastGetNextSequenceNumber])
Creates an [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for a given `idDevice`
at a given `ipAddress` (assuming its UDP Key is provided to [new
EmcbUDPbroadcastMaster(args)](#new-emcbudpbroadcastmasterargs)).
- `idDevice` _(String)_: Device ID.
- `ipAddress` _(String)_: The local network IP Address of the device.
- [`unicastGetNextSequenceNumber`] _(Boolean)_: Optional `true`/`false` to
determine if the device's sequence number should be obtained via a unicast
message. Defaults to true.
- **RETURNS** `device` _(EmcbUDPdeviceMaster)_: The newly created
  [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) instance.
### syncDeviceSequenceNumbers()
Sends a unicast
[setNextSequenceNumber](#setnextsequencenumberdesirednextsequencenumber) command
to each [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) instance that the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) has discovered in order to
sync their sequence numbers together.
> **NOTE** - this method should **NOT** be used in most applications.
> Specifically if you are using the
> [EMCB_UDP_EVENT_QUEUE_DRAINED](#EMCB_UDP_EVENT_QUEUE_DRAINED) event for
> polling (which you should be using), the `emcbUDPmaster` library will
> automatically take care of keeping device sequence numbers in sync by
> monitoring for consecutive timeouts from discovered devices.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves on success or
throws an [`Error`](https://nodejs.org/api/errors.html).
```javascript
EMCBs.syncDeviceSequenceNumbers()
```
### getNextSequenceNumber([nonce])
Gets the next Sequence Number and device IDs of the EMCBs on the local network
using the replayable Get Next Expected UDP Sequence Number Command with a
sequence number of `0x0000` to facilitate device discovery and synchronization.
- [`nonce`] _(Buffer)_: Optional 4 byte UInt32 held within a
[`Buffer`](https://nodejs.org/api/buffer.html). Defaults to
`crypto.random(4)`. **NOTE** - `nonce` should **NEVER** be provided in
production code (as a [Cryptographically secure pseudorandom
number](https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator)
is the safest thing to use here to prove device authenticity and prevent
messages from being replayed) but is included in the API in order to allow
developers control over the messages that they send for testing purposes.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following
data if there are any valid responses. Otherwise it will throw the same data
structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses
by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `nextSequenceNumber` _(Number)_: UInt32 expected value of the
Sequence Number in the next command to the device
- `idDevice` _(String)_: Device ID.
          - `protocolRevision` _(Number)_: UInt32 protocol revision number.
- `errors` _(Object)_: Optional object that will contain
[`Error`](https://nodejs.org/api/errors.html) objects decorated with an
additional `device` property, which is the relevant
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any
encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An
[`Error`](https://nodejs.org/api/errors.html) object describing the
error.
- `device` _(EmcbUDPdeviceMaster)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain
[`Error`](https://nodejs.org/api/errors.html) objects decorated with an
additional `device` property, which is the relevant
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any
timeouts
- _`$IP_ADDRESS`_ _(Error)_: An
[`Error`](https://nodejs.org/api/errors.html) object describing the
timeout.
- `device` _(EmcbUDPdeviceMaster)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - If there are any valid responses, the return `Promise` will
> resolve. It will only reject in the event that **ALL** responses are errors
> or timeouts.
```javascript
EMCBs.getNextSequenceNumber()
.then((data) => {
//data.responses[aParticularIPAddress] = {
// idDevice: '3000d8c46a572b08',
// nextSequenceNumber: 2286175166,
// protocolRevision: 1,
// device: {...}
// }
})
```
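Even when the returned `Promise` resolves, it can still carry per-device `errors` and `timeouts` alongside `responses`, so it is often useful to tally the three buckets before acting. Here is a sketch over the documented shape (`summarizeResult` is not part of this library):

```javascript
// Sketch: count entries in a { responses, errors, timeouts } result
// object (each keyed by IP address) as documented above.
function summarizeResult(result) {
    const count = (obj) => Object.keys(obj || {}).length;
    return {
        ok:       count(result.responses),
        errors:   count(result.errors),
        timeouts: count(result.timeouts),
    };
}

const sample = {
    responses: { "10.130.1.127": { idDevice: "3000d8c46a572b08", nextSequenceNumber: 2286175166, protocolRevision: 1 } },
    timeouts:  { "10.130.1.128": new Error("UDP Timeout") },
};

console.log(summarizeResult(sample)); // { ok: 1, errors: 0, timeouts: 1 }
```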
### getBreakerRemoteHandlePosition()
Gets the Remote Handle Position of the EMCB.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following
data if there are any valid responses. Otherwise it will throw the same data
structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses
by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `state` _(Number)_: UInt8 code representing the breaker's current
Feedback State. One of
`EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN`,
`EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED`, or
            `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_FEEDBACK_MISMATCH`.
- `stateString` _(String)_: A human readable string representing the
EMCB state. One of `"Open"`, `"Closed"`, or `"Feedback Mismatch"`.
- `errors` _(Object)_: Optional object that will contain
[`Error`](https://nodejs.org/api/errors.html) objects decorated with an
additional `device` property, which is the relevant
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any
encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An
[`Error`](https://nodejs.org/api/errors.html) object describing the
error.
- `device` _(EmcbUDPdeviceMaster)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain
[`Error`](https://nodejs.org/api/errors.html) objects decorated with an
additional `device` property, which is the relevant
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any
timeouts
- _`$IP_ADDRESS`_ _(Error)_: An
[`Error`](https://nodejs.org/api/errors.html) object describing the
timeout.
- `device` _(EmcbUDPdeviceMaster)_: The
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - If there are any valid responses, the returned `Promise` will
> resolve. It will only reject if **ALL** responses are errors or timeouts.
```javascript
async function logFeedbackState(){
const feedbackStates = await EMCBs.getBreakerRemoteHandlePosition()
for(var ipAddress in feedbackStates.responses){
var device = feedbackStates.responses[ipAddress].device
console.log(chalk[device.chalkColor](device.idDevice + " Remote Handle Position is " + feedbackStates.responses[ipAddress].stateString))
}
}
logFeedbackState()
// 30000c2a69113173 Remote Handle Position is Open
```
### getMeterData()
Gets the Current Metering Data for the EMCB.
> **NOTE** - this function will return the data that is transmitted over the
> wire. However, the data may not be updated within the EMCB HW. It is the
> responsibility of the user to check the `updateNum` to determine if the data
> is "stale" or not. Alternatively, the
> [EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA](#EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA)
> Event can be used, which will only be called when the data is fresh.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following
data if there are any valid responses. Otherwise it will throw the same data
structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `updateNum` _(Number)_: Integer Update number. Starts at 0 on boot and increments when periodic data is updated on the device.
- `frequency` _(Number)_: Integer Line frequency. mHz.
- `period` _(Number)_: Integer Period. The Number of milliseconds over which the returned data was accumulated.
- `mJp0` _(Number)_: Int64 Phase 0 Cumulative active energy. milliJoules = milliWatt-Second.
- `mVARsp0` _(Number)_: Int64 Phase 0 Cumulative reactive energy. mVARs.
- `mVAsp0` _(Number)_: UInt64 Phase 0 Cumulative apparent energy. mVAs.
- `LNmVp0` _(Number)_: Integer Phase 0 voltage RMS. mV.
- `mAp0` _(Number)_: Integer Phase 0 current RMS. mA.
- `q1mJp0` _(Number)_: UInt64 Quadrant 1 Phase 0 Cumulative Active energy. mJ.
- `q2mJp0` _(Number)_: UInt64 Quadrant 2 Phase 0 Cumulative Active energy. mJ.
- `q3mJp0` _(Number)_: UInt64 Quadrant 3 Phase 0 Cumulative Active energy. mJ.
- `q4mJp0` _(Number)_: UInt64 Quadrant 4 Phase 0 Cumulative Active energy. mJ.
- `q1mVARsp0` _(Number)_: UInt64 Quadrant 1 Phase 0 Cumulative Reactive energy. mVARs.
- `q2mVARsp0` _(Number)_: UInt64 Quadrant 2 Phase 0 Cumulative Reactive energy. mVARs.
- `q3mVARsp0` _(Number)_: UInt64 Quadrant 3 Phase 0 Cumulative Reactive energy. mVARs.
- `q4mVARsp0` _(Number)_: UInt64 Quadrant 4 Phase 0 Cumulative Reactive energy. mVARs.
- `q1mVAsp0` _(Number)_: UInt64 Quadrant 1 Phase 0 Cumulative Apparent energy. mVAs.
- `q2mVAsp0` _(Number)_: UInt64 Quadrant 2 Phase 0 Cumulative Apparent energy. mVAs.
- `q3mVAsp0` _(Number)_: UInt64 Quadrant 3 Phase 0 Cumulative Apparent energy. mVAs.
- `q4mVAsp0` _(Number)_: UInt64 Quadrant 4 Phase 0 Cumulative Apparent energy. mVAs.
- `mJp1` _(Number)_: Int64 Phase 1 Cumulative active energy. mJ.
- `mVARsp1` _(Number)_: Int64 Phase 1 Cumulative reactive energy. mVARs.
- `mVAsp1` _(Number)_: UInt64 Phase 1 Cumulative apparent energy. mVAs.
- `LNmVp1` _(Number)_: Integer Phase 1 voltage RMS. mV.
- `mAp1` _(Number)_: Integer Phase 1 current RMS. mA.
- `q1mJp1` _(Number)_: UInt64 Quadrant 1 Phase 1 Cumulative Active energy. mJ.
- `q2mJp1` _(Number)_: UInt64 Quadrant 2 Phase 1 Cumulative Active energy. mJ.
- `q3mJp1` _(Number)_: UInt64 Quadrant 3 Phase 1 Cumulative Active energy. mJ.
- `q4mJp1` _(Number)_: UInt64 Quadrant 4 Phase 1 Cumulative Active energy. mJ.
- `q1mVARsp1` _(Number)_: UInt64 Quadrant 1 Phase 1 Cumulative Reactive energy. mVARs.
- `q2mVARsp1` _(Number)_: UInt64 Quadrant 2 Phase 1 Cumulative Reactive energy. mVARs.
- `q3mVARsp1` _(Number)_: UInt64 Quadrant 3 Phase 1 Cumulative Reactive energy. mVARs.
- `q4mVARsp1` _(Number)_: UInt64 Quadrant 4 Phase 1 Cumulative Reactive energy. mVARs.
- `q1mVAsp1` _(Number)_: UInt64 Quadrant 1 Phase 1 Cumulative Apparent energy. mVAs.
- `q2mVAsp1` _(Number)_: UInt64 Quadrant 2 Phase 1 Cumulative Apparent energy. mVAs.
- `q3mVAsp1` _(Number)_: UInt64 Quadrant 3 Phase 1 Cumulative Apparent energy. mVAs.
- `q4mVAsp1` _(Number)_: UInt64 Quadrant 4 Phase 1 Cumulative Apparent energy. mVAs.
- `LLp01mV` _(Number)_: Integer Phase-phase voltage RMS. mV.
- `errors` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the error.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the timeout.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - If there are any valid responses, the returned `Promise` will
> resolve. It will only reject if **ALL** responses are errors or timeouts.
```javascript
async function logMeterData(){
const meterData = await EMCBs.getMeterData()
for(var ipAddress in meterData.responses){
var data = meterData.responses[ipAddress]
var device = data.device
logger.info(chalk[device.chalkColor](`${device.idDevice}: updateNum=${data.updateNum.toString().padStart(3)}, LN-Volts-p0=${(data.LNmVp0/1000.0).toString().padEnd(7, "0")}, LN-Volts-p1=${(data.LNmVp1/1000.0).toString().padEnd(7, "0")}, Amps-p0=${(data.mAp0/1000.0).toString().padStart(7)}, Amps-p1=${(data.mAp1/1000.0).toString().padStart(7)}, Frequency-Hz=${(data.frequency/1000.0).toString().padEnd(6, "0")}`))
}
}
logMeterData()
// 30000c2a69113173: updateNum=176, LN-Volts-p0=126.133, LN-Volts-p1=126.133, Amps-p0= 0.009, Amps-p1= 0.008, Frequency-Hz=60.030
```
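Since the energy counters above are cumulative, average power has to be derived from the delta between two successive reads. A minimal sketch, assuming the documented field meanings (`mJp0` in milliJoules, `period` in milliseconds) and contiguous reporting windows; the helper and sample values are hypothetical:

```javascript
// Derive average Phase 0 power in Watts from two successive
// getMeterData() responses for the same device.
function averagePowerWatts(prev, next) {
    // mJp0 is cumulative milliJoules (mW·s), so the delta over the
    // window gives average power: 1 mJ / 1 ms = 1 W.
    return (next.mJp0 - prev.mJp0) / next.period
}

// Hypothetical successive samples: 120 J accumulated over a 1000 ms window
const prevSample = { mJp0: 5000000, period: 1000 }
const nextSample = { mJp0: 5120000, period: 1000 }
console.log(averagePowerWatts(prevSample, nextSample) + " W") // 120 W
```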
### getDeviceStatus()
Gets the current Device Status (Breaker Remote Handle Position and Metering Data) of the EMCB.
> **NOTE** - this function will return the data that is transmitted over the
> wire. However, the metering data may not be updated within the EMCB HW. It
> is the responsibility of the user to check the `updateNum` to determine if the
> data is "stale" or not. Alternatively, the
> [EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA](#EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA)
> Event can be used, which will only be called when the data is fresh.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following data if there are any valid responses. Otherwise it will throw the same data structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `breaker` _(Object)_:
- `state` _(Number)_: UInt8 code representing the breaker's current Feedback State. One of `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN`, `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED`, or `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE`.
- `stateString` _(String)_: A human readable string representing the EMCB state. One of `"Open"`, `"Closed"`, or `"Feedback Mismatch"`.
- `meter` _(Object)_:
- `updateNum` _(Number)_: Integer Update number. Starts at 0 on boot and increments when periodic data is updated on the device.
- `frequency` _(Number)_: Integer Line frequency. mHz.
          - `period` _(Number)_: Integer Period. The number of seconds over which the returned data was accumulated (was a UInt8 in protocol version 1.07).
- `mJp0` _(Number)_: Int64 Phase 0 Cumulative active energy. milliJoules = milliWatt-Second.
- `mVARsp0` _(Number)_: Int64 Phase 0 Cumulative reactive energy. mVARs.
- `mVAsp0` _(Number)_: UInt64 Phase 0 Cumulative apparent energy. mVAs.
- `LNmVp0` _(Number)_: Integer Phase 0 voltage RMS. mV.
- `mAp0` _(Number)_: Integer Phase 0 current RMS. mA.
- `q1mJp0` _(Number)_: UInt64 Quadrant 1 Phase 0 Cumulative Active energy. mJ.
- `q2mJp0` _(Number)_: UInt64 Quadrant 2 Phase 0 Cumulative Active energy. mJ.
- `q3mJp0` _(Number)_: UInt64 Quadrant 3 Phase 0 Cumulative Active energy. mJ.
- `q4mJp0` _(Number)_: UInt64 Quadrant 4 Phase 0 Cumulative Active energy. mJ.
- `q1mVARsp0` _(Number)_: UInt64 Quadrant 1 Phase 0 Cumulative Reactive energy. mVARs.
- `q2mVARsp0` _(Number)_: UInt64 Quadrant 2 Phase 0 Cumulative Reactive energy. mVARs.
- `q3mVARsp0` _(Number)_: UInt64 Quadrant 3 Phase 0 Cumulative Reactive energy. mVARs.
- `q4mVARsp0` _(Number)_: UInt64 Quadrant 4 Phase 0 Cumulative Reactive energy. mVARs.
- `q1mVAsp0` _(Number)_: UInt64 Quadrant 1 Phase 0 Cumulative Apparent energy. mVAs.
- `q2mVAsp0` _(Number)_: UInt64 Quadrant 2 Phase 0 Cumulative Apparent energy. mVAs.
- `q3mVAsp0` _(Number)_: UInt64 Quadrant 3 Phase 0 Cumulative Apparent energy. mVAs.
- `q4mVAsp0` _(Number)_: UInt64 Quadrant 4 Phase 0 Cumulative Apparent energy. mVAs.
- `mJp1` _(Number)_: Int64 Phase 1 Cumulative active energy. mJ.
- `mVARsp1` _(Number)_: Int64 Phase 1 Cumulative reactive energy. mVARs.
- `mVAsp1` _(Number)_: UInt64 Phase 1 Cumulative apparent energy. mVAs.
- `LNmVp1` _(Number)_: Integer Phase 1 voltage RMS. mV.
- `mAp1` _(Number)_: Integer Phase 1 current RMS. mA.
- `q1mJp1` _(Number)_: UInt64 Quadrant 1 Phase 1 Cumulative Active energy. mJ.
- `q2mJp1` _(Number)_: UInt64 Quadrant 2 Phase 1 Cumulative Active energy. mJ.
- `q3mJp1` _(Number)_: UInt64 Quadrant 3 Phase 1 Cumulative Active energy. mJ.
- `q4mJp1` _(Number)_: UInt64 Quadrant 4 Phase 1 Cumulative Active energy. mJ.
- `q1mVARsp1` _(Number)_: UInt64 Quadrant 1 Phase 1 Cumulative Reactive energy. mVARs.
- `q2mVARsp1` _(Number)_: UInt64 Quadrant 2 Phase 1 Cumulative Reactive energy. mVARs.
- `q3mVARsp1` _(Number)_: UInt64 Quadrant 3 Phase 1 Cumulative Reactive energy. mVARs.
- `q4mVARsp1` _(Number)_: UInt64 Quadrant 4 Phase 1 Cumulative Reactive energy. mVARs.
- `q1mVAsp1` _(Number)_: UInt64 Quadrant 1 Phase 1 Cumulative Apparent energy. mVAs.
- `q2mVAsp1` _(Number)_: UInt64 Quadrant 2 Phase 1 Cumulative Apparent energy. mVAs.
- `q3mVAsp1` _(Number)_: UInt64 Quadrant 3 Phase 1 Cumulative Apparent energy. mVAs.
- `q4mVAsp1` _(Number)_: UInt64 Quadrant 4 Phase 1 Cumulative Apparent energy. mVAs.
- `LLp01mV` _(Number)_: Integer Phase-phase voltage RMS. mV.
- `errors` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the error.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the timeout.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - If there are any valid responses, the returned `Promise` will
> resolve. It will only reject if **ALL** responses are errors or timeouts.
```javascript
async function logDeviceStatus(){
const data = await EMCBs.getDeviceStatus()
for(var ipAddress in data.responses){
var status = data.responses[ipAddress]
var device = status.device
logger.info(chalk[device.chalkColor](`${device.idDevice}: Breaker State=${status.breaker.stateString}. Meter Data: updateNum=${status.meter.updateNum.toString().padStart(3)}, LN-Volts-p0=${(status.meter.LNmVp0/1000.0).toString().padEnd(7, "0")}, LN-Volts-p1=${(status.meter.LNmVp1/1000.0).toString().padEnd(7, "0")}, Amps-p0=${(status.meter.mAp0/1000.0).toString().padStart(7)}, Amps-p1=${(status.meter.mAp1/1000.0).toString().padStart(7)}, Frequency-Hz=${(status.meter.frequency/1000.0).toString().padEnd(6, "0")}`))
}
}
logDeviceStatus()
// 40000c2a69113173: Breaker State=Open. Meter Data: updateNum=178, LN-Volts-p0=126.164, LN-Volts-p1=126.164, Amps-p0= 0.01, Amps-p1= 0.009, Frequency-Hz=60.030
```
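The staleness check that the NOTE above calls for can be as simple as remembering the last `updateNum` seen per device. A sketch under that assumption (the cache and helper are our own bookkeeping, not library API):

```javascript
// idDevice -> last updateNum we processed
const lastUpdateNums = {}

// Returns true if this meter payload is fresh (updateNum advanced since
// the previous call for this device), false if it is stale.
function isFreshMeterData(idDevice, meter) {
    const stale = lastUpdateNums[idDevice] === meter.updateNum
    lastUpdateNums[idDevice] = meter.updateNum
    return !stale
}

console.log(isFreshMeterData("30000c2a69113173", { updateNum: 178 })) // true  (first sighting)
console.log(isFreshMeterData("30000c2a69113173", { updateNum: 178 })) // false (counter unchanged)
console.log(isFreshMeterData("30000c2a69113173", { updateNum: 179 })) // true  (counter advanced)
```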
### setNextSequenceNumber(desiredNextSequenceNumber)
Sets the next Sequence Number to be used by the EMCBs. For this command to
work, [getNextSequenceNumber](#getnextsequencenumbernonce) must have been run
successfully for the EMCB, so that the library knows the Sequence Number that
the device will currently accept.
> **NOTE** - this method should **NOT** be used in most applications.
> Specifically if you are using the
> [EMCB_UDP_EVENT_QUEUE_DRAINED](#EMCB_UDP_EVENT_QUEUE_DRAINED) event for
> polling (which you should be using), the `emcbUDPmaster` library will
> automatically take care of keeping device sequence numbers in sync by
> monitoring for consecutive timeouts from discovered devices.
- `desiredNextSequenceNumber` _(Number)_: The desired UInt32 value for the next Sequence Number between 0 and 0xFFFFFFFF.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following data if **ALL** responses are valid. Otherwise, it will throw the same data structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
        - `ack` _(Number)_: UInt8 ACK code provided by the device. A value of [EMCB_UDP_ACK](#EMCB_UDP_ACK) means the command was executed and acknowledged by the device. Any other value is a NACK (and is likely enumerated as an **EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_\*** Enum).
- `ackString` _(String | undefined)_: A human readable string representing `ack` value. One of `"Acknowledged"`, `"Rate Limited"`, `"Bad Sequence Number"`, or `undefined`.
- `nextSequenceNumber` _(Number | undefined)_: UInt32 expected value of the Sequence Number in the next command to the device. This value will only be set if `ack` === [EMCB_UDP_ACK](#EMCB_UDP_ACK).
- `errors` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the error.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the timeout.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - The returned `Promise` will resolve **ONLY** if all requests are
> successful. It will reject if **ANY** responses are errors or timeouts.
```javascript
const crypto = require("crypto") // for a random 32-bit value
EMCBs.setNextSequenceNumber(crypto.randomBytes(4).readUInt32LE(0))
```
### setBreakerState(desiredState[, maxAttempts])
Sets the desired breaker state. Will attempt to send the command successfully
up to maxAttempts times (defaults to 3).
- `desiredState` _(ENUM)_: One of the following constants: `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN`, `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED`, or `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE`
- `maxAttempts` _(Number)_: Optional maximum number of attempts to set the breaker state. Defaults to 3 and allows values from 1-10.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following data if **ALL** responses are valid. Otherwise, it will throw the same data structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
        - `ack` _(Number)_: UInt8 ACK code provided by the device. A value of [EMCB_UDP_ACK](#EMCB_UDP_ACK) means the command was executed and the breaker confirmed it is in the desired state. Any other value is a NACK.
- `state` _(Number)_: UInt8 code representing the breaker's current Feedback State. One of `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN`, `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED`, or `EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE`.
- `stateString` _(String)_: A human readable string representing the EMCB state. One of `"Open"`, `"Closed"`, or `"Feedback Mismatch"`.
- `errors` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the error.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the timeout.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - The returned `Promise` will resolve **ONLY** if all requests are
> successful. It will reject if **ANY** responses are errors or timeouts.
```javascript
function onSuccess(data, logger = console.log){
var responses = []
var errors = []
var timeouts = []
for(var ipAddress in data.responses){
var device = data.responses[ipAddress].device
data.responses[ipAddress].device = device.idDevice
responses.push(chalk[device.chalkColor](util.inspect(data.responses[ipAddress])))
}
for(var ipAddress in data.timeouts){
var errorString = data.timeouts[ipAddress].message
var device = data.timeouts[ipAddress].device
timeouts.push(chalk[device.chalkColor](device.idDevice + " - " + errorString))
}
for(var ipAddress in data.errors){
var errorString = data.errors[ipAddress].message
var device = data.errors[ipAddress].device
errors.push(chalk[device.chalkColor](device.idDevice + " - " + errorString))
}
// Sort for consistent rainbow colors!
responses.sort()
errors.sort()
timeouts.sort()
var responseStr = responses.length > 0 ? chalk.reset("\nResponses:\n") + responses.join(chalk.reset(',\n')) : ""
var errorStr = errors.length > 0 ? chalk.reset("\nErrors:\n") + errors.join(chalk.reset(',\n')) : ""
var timeoutStr = timeouts.length > 0 ? chalk.reset("\nTimeouts:\n") + timeouts.join(chalk.reset(',\n')) : ""
logger(responseStr + errorStr + timeoutStr + '\n')
}
function onError(err){
onSuccess(err, console.error)
}
//3-shot Open
EMCBs.setBreakerState(EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN, 3).then(onSuccess).catch(onError)
// 1-shot Close
EMCBs.setBreakerState(EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED, 1).then(onSuccess).catch(onError)
// 3-shot Toggle
EMCBs.setBreakerState(EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE).then(onSuccess).catch(onError)
// 3-shot Open to a specific device
EMCBs.getDevice("30000c2a690c7652").setBreakerState(EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN, 3).then(onSuccess).catch(onError)
```
### setBargraphLEDToUserDefinedColor(enabled[, colorObj, blinking])
Sets the EMCB Bargraph LEDs to a specific color.
- `enabled` _(Boolean)_: Controls if the User Defined Color control of the bargraph is enabled. If set to `false`, the Bargraph will return to normal operation and all other arguments will be ignored.
- `colorObj` _(Array)_: An optional 5 element array containing the colors for each individual rgb segment of the Bargraph LEDs. Element 0 is the LED closest to the "bump" on the EMCB.
- `[0]` _(Object)_:
- [`red`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`green`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blue`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blinking`] _(Boolean)_ An optional value to control if the LED segment should blink or not.
- `[1]` _(Object)_:
- [`red`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`green`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blue`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blinking`] _(Boolean)_ An optional value to control if the LED segment should blink or not.
- `[2]` _(Object)_:
- [`red`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`green`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blue`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blinking`] _(Boolean)_ An optional value to control if the LED segment should blink or not.
- `[3]` _(Object)_:
- [`red`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`green`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blue`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blinking`] _(Boolean)_ An optional value to control if the LED segment should blink or not.
- `[4]` _(Object)_:
- [`red`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`green`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blue`] _(Number)_: An optional UInt8 value (0-255) controlling the brightness for this led. Defaults to 0
- [`blinking`] _(Boolean)_ An optional value to control if the LED segment should blink or not.
- `duration` _(Number)_: Optional Integer with a value of 0 (the bargraph will stay this color until unset by the UDP API) or from 1-10737418 seconds that the bargraph will stay the set color. Defaults to 5 seconds.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following data if **ALL** responses are valid. Otherwise, it will throw the same data structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
        - `ack` _(Number)_: UInt8 ACK code provided by the device. A value of [EMCB_UDP_ACK](#EMCB_UDP_ACK) means the command was executed and acknowledged by the device. Any other value is a NACK.
- `errors` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the error.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the timeout.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - The returned `Promise` will resolve **ONLY** if all requests are
> successful. It will reject if **ANY** responses are errors or timeouts.
```javascript
var colorObj = new Array(5);
var color = {red: 0, green: 0, blue: 255, blinking: true};
colorObj.fill(color, 1, 4)
// Set the center 3 LEDs of the EMCB to blink blue until commanded differently
EMCBs.setBargraphLEDToUserDefinedColor(true, colorObj, 0)
```
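A `colorObj` is just a 5-element array, so it can also be generated programmatically. A hypothetical sketch that lights the first `n` segments green as a simple progress indicator (unset channels default to 0 per the argument description above; segment 0 is the LED closest to the "bump"):

```javascript
// Build a colorObj lighting the first `n` of the 5 bargraph segments green.
function progressColorObj(n) {
    return Array.from({ length: 5 }, (_, i) =>
        i < n ? { green: 255 } : { red: 0, green: 0, blue: 0 }
    )
}

// Light 3 of 5 segments for 10 seconds (hypothetical usage of the API above):
// EMCBs.setBargraphLEDToUserDefinedColor(true, progressColorObj(3), 10)
console.log(progressColorObj(3))
```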
### setBargraphLEDToUserDefinedColorName(colorName[, duration, blinking])
Sets the EMCB Bargraph LEDs to a specific named color. This is a convenience
function which leverages
[setBargraphLEDToUserDefinedColor](#setbargraphledtouserdefinedcolorenabled-colorobj-blinking)
under the hood.
- `colorName` _(String)_: The named color to set the bargraph to. The library looks up colors using [color-name-list](https://www.npmjs.com/package/color-name-list). Additionally, it supports all valid [chalk](https://www.npmjs.com/package/chalk) colors as well as `"off"`, `"clear"`, and `"reset"` to disable the User Defined Color.
- `duration` _(Number)_: Optional Integer with a value of 0 (the bargraph will stay this color until unset by the UDP API) or from 1-10737418 seconds that the bargraph will stay the set color. Defaults to 5 seconds.
- `blinking` _(Boolean)_: Optional value to blink the LEDs on the EMCB.
- **RETURNS** `Promise` _(Object)_: A `promise` that resolves with the following data if **ALL** responses are valid. Otherwise, it will throw the same data structure or an instance of an [`Error`](https://nodejs.org/api/errors.html).
- `data` _(Object)_:
- `responses` _(Object)_: Optional object that will contain parsed responses by IP Address for valid responses
- _`$IP_ADDRESS`_ _(Object)_:
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
        - `ack` _(Number)_: UInt8 ACK code provided by the device. A value of [EMCB_UDP_ACK](#EMCB_UDP_ACK) means the command was executed and acknowledged by the device. Any other value is a NACK.
- `errors` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any encountered errors, excluding timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the error.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
- `timeouts` _(Object)_: Optional object that will contain [`Error`](https://nodejs.org/api/errors.html) objects decorated with an additional `device` property, which is the relevant [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster), by IP Address for any timeouts
- _`$IP_ADDRESS`_ _(Error)_: An [`Error`](https://nodejs.org/api/errors.html) object describing the timeout.
- `device` _(EmcbUDPdeviceMaster)_: The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) for the response.
> **NOTE** - The returned `Promise` will resolve **ONLY** if all requests are
> successful. It will reject if **ANY** responses are errors or timeouts.
```javascript
for(var ipAddress in EMCBs.devices){
var device = EMCBs.devices[ipAddress]
// Blink the EMCB bargraph to the same color as what we are logging for 10 seconds!
device.setBargraphLEDToUserDefinedColorName(device.chalkColor, 10, true)
}
```
## EmcbUDPdeviceMaster
The [`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) exposes the same functionality
as the [`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster), but unicasts each
command to a specific device/IP address rather than using the broadcast IP
address. In addition to the commands listed below,
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) also extends the
[EventEmitter](https://nodejs.org/api/events.html) class and makes the events
described in [EventEmitter Cheat Sheet](#eventemitter-cheat-sheet) available to
`.on()`, `.once()`, etc.
Instances of this class are created and managed by the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) (rather than being created
directly) as a part of the [Device Discovery](#discoverdevicesnonce) process
(and more accurately during [getNextSequenceNumber](#getnextsequencenumbernonce)
responses). The instances can be obtained using the
[getDevice](#getdeviceipaddressoriddevice) function or by accessing the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster).`devices` property directly
by the device's `IP Address`.
The following commands from the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) are **NOT** available in
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) instances:
- [updateBroadcastUDPkey(key)](#updatebroadcastudpkeykey)
- [updateUnicastUDPkey(key)](#updateunicastudpkeyiddevice-key)
- [getMasterIPAddress()](#getmasteripaddress)
- [getDevice(ipAddressOrIdDevice)](#getdeviceipaddressoriddevice)
- [discoverDevices(nonce)](#discoverdevicesnonce)
- [syncDeviceSequenceNumbers](#syncdevicesequencenumbers)
These commands are identical to the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster), except that they unicast
the command to the specific `idDevice` IP address instead of broadcasting on
the broadcast address:
- [getNextSequenceNumber([nonce])](#getnextsequencenumbernonce)
- [getBreakerRemoteHandlePosition()](#getbreakerremotehandleposition)
- [getMeterData()](#getmeterdata)
- [getDeviceStatus()](#getdevicestatus)
- [setNextSequenceNumber(desiredNextSequenceNumber)](#setnextsequencenumberdesirednextsequencenumber)
- [setBreakerState(desiredState[, maxAttempts])](#setbreakerstatedesiredstate-maxattempts)
- [setBargraphLEDToUserDefinedColor(enabled[, colorObj, blinking])](#setbargraphledtouserdefinedcolorenabled-colorobj-blinking)
- [setBargraphLEDToUserDefinedColorName(colorName[, duration, blinking])](#setbargraphledtouserdefinedcolornamecolorname-duration-blinking)
### EmcbUDPdeviceMaster Properties
In addition to the functions listed above, an
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) has 4 additional properties that
are **NOT** available in [`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster)
instances:
- `chalkColor` _(string)_: This is a color assigned to the device from the
[EMCB_UDP_DEVICE_COLORS](#EMCB_UDP_DEVICE_COLORS) array during
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) instantiation. It can be used
with [chalk](https://www.npmjs.com/package/chalk) to help colorize logs.
- `idDevice` _(string)_: Device ID of the device
- `remoteHandlePosition` _(Number)_: UInt8 code representing the breaker's
current Feedback State. One of
`EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN`,
`EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED`, or
`EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE`.
- `meterData` _(object)_: The latest meter data that has been obtained from the
device. Identical to the return `data.responses[$IP_ADDRESS]` in
[getMeterData](#getmeterdata).
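As an illustrative sketch of consuming the `remoteHandlePosition` property, the snippet below maps position codes to readable strings. The numeric values here are stand-ins chosen for the example; real code should compare against the constants exported by the module (`EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_*`) rather than hard-coding values:

```javascript
// Stand-in values for illustration only; use the module's exported
// EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_* constants in real code.
const POSITION_OPEN = 0;
const POSITION_CLOSED = 1;

// Translate a UInt8 position code into a human-readable description.
function describeRemoteHandlePosition(code) {
    switch (code) {
        case POSITION_OPEN:   return "open";
        case POSITION_CLOSED: return "closed";
        default:              return "unknown";
    }
}

// e.g. describeRemoteHandlePosition(device.remoteHandlePosition)
console.log(describeRemoteHandlePosition(POSITION_CLOSED)); // closed
```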
## EventEmitter Cheat Sheet
Both [`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) extend the
[EventEmitter](https://nodejs.org/api/events.html) class. The following code
will register for every event and provides some commentary for the circumstances
under which events get called.
```js
// Called whenever there is a response to a GET_NEXT_SEQUENCE_NUMBER command
EMCBs.on(EMCB_UDP_MESSAGE_CODE_GET_NEXT_SEQUENCE_NUMBER, data => {
logger.info(chalk[data.device.chalkColor](`Sequence Number updated to 0x${data.nextSequenceNumber.toString(16).toUpperCase()} = ${data.nextSequenceNumber}`))
})
// Called whenever there is a response to a GET_DEVICE_STATUS command that contains fresh data
EMCBs.on(EMCB_UDP_MESSAGE_CODE_GET_DEVICE_STATUS, data => {
logger.info(chalk[data.device.chalkColor](`Received GET_DEVICE_STATUS response from ${data.device.ipAddress} with Device ID ${data.device.idDevice}`))
})
// Called whenever there is a response to a GET_DEVICE_DEBUG_DATA command
EMCBs.on(EMCB_UDP_MESSAGE_CODE_GET_DEVICE_DEBUG_DATA, data => {
var device = data.device;
delete data.device;
logger.info(chalk[device.chalkColor](`Received GET_DEVICE_DEBUG_DATA response from ${device.ipAddress} with Device ID ${device.idDevice}. Data = ${util.inspect(data, {breakLength: Infinity})}`))
})
// Called whenever the breaker feedback position changes - could be from a GET_BREAKER_REMOTE_HANDLE_POSITION, GET_DEVICE_STATUS, or SET_BREAKER_REMOTE_HANDLE_POSITION command)
EMCBs.on(EMCB_UDP_MESSAGE_CODE_GET_BREAKER_REMOTE_HANDLE_POSITION, function(data){
logger.info(chalk[data.device.chalkColor](`Breaker Feedback Position changed from ${data.lastState} to ${data.state}`))
})
// Called whenever there is new EMCB Meter data (as detected by seeing an update to updateNum) - could be GET_DEVICE_STATUS or GET_METER_TELEMETRY_DATA
EMCBs.on(EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA, function(data){
if(data.updateNum%5 === 0){
logger.info(chalk[data.device.chalkColor](`${data.device.idDevice}: updateNum=${data.updateNum.toString().padStart(3)}, LN-Volts-p0=${(data.LNmVp0/1000.0).toString().padEnd(7, "0")}, LN-Volts-p1=${(data.LNmVp1/1000.0).toString().padEnd(7, "0")}, Amps-p0=${(data.mAp0/1000.0).toString().padStart(7)}, Amps-p1=${(data.mAp1/1000.0).toString().padStart(7)}, Frequency-Hz=${(data.frequency/1000.0).toString().padEnd(6, "0")}`))
}
})
// Listening to an individual device instead of ALL devices works just fine too :)
// EMCBs.getDevice("10.130.116.50").on(EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA, function(meterData){
// console.log(meterData)
// })
// Called for every successful SET_NEXT_SEQUENCE_NUMBER command
EMCBs.on(EMCB_UDP_MESSAGE_CODE_SET_NEXT_SEQUENCE_NUMBER, data => {
logger.info(chalk[data.device.chalkColor](`SET_NEXT_SEQUENCE_NUMBER response "${data.ackString}" from ${data.device.ipAddress} with Device ID ${data.device.idDevice}.${data.ack === EMCB_UDP_ACK ? ` Sequence Number updated to 0x${data.nextSequenceNumber.toString(16).toUpperCase()} = ${data.nextSequenceNumber}` : ""}`))
})
// Called for every successful SET_BREAKER_REMOTE_HANDLE_POSITION command
EMCBs.on(EMCB_UDP_MESSAGE_CODE_SET_BREAKER_REMOTE_HANDLE_POSITION, data => {
logger.info(chalk[data.device.chalkColor](`SET_BREAKER_REMOTE_HANDLE_POSITION command succeeded!`))
})
// Called for every successful SET_BARGRAPH_LED_TO_USER_DEFINED
EMCBs.on(EMCB_UDP_MESSAGE_CODE_SET_BARGRAPH_LED_TO_USER_DEFINED, data => {
    logger.info(chalk[data.device.chalkColor](`SET_BARGRAPH_LED_TO_USER_DEFINED command succeeded!`))
})
// Called every time a device is discovered on the local network
EMCBs.on(EMCB_UDP_EVENT_DEVICE_DISCOVERED, data => {
logger.info(chalk[data.device.chalkColor](`Discovered EMCB ${data.device.idDevice} at ${data.device.ipAddress}!`))
})
// Called after 100 consecutive timeouts and multiple resync attempts with a particular device as we remove it from the list of devices currently "discovered" and available within the EmcbUDPbroadcastMaster
EMCBs.on(EMCB_UDP_EVENT_DEVICE_REMOVED, data => {
    logger.warn(chalk[data.device.chalkColor](`Removing EMCB at ${data.device.ipAddress} with Device ID ${data.device.idDevice}... Too many consecutive timeouts/errors.`))
})
// Called whenever a device IP address change is detected
EMCBs.on(EMCB_UDP_EVENT_DEVICE_IP_ADDRESS_CHANGED, data => {
logger.info(chalk[data.device.chalkColor](`Device ID ${data.device.idDevice} moved from ${data.oldIPaddress} to ${data.newIPaddress}`))
})
// Called whenever there is a device timeout
EMCBs.on(EMCB_UDP_ERROR_TIMEOUT, data => {
logger.warn(chalk[data.device.chalkColor](data.message))
})
// Called whenever there is a parser error - which can include a nack from the device, invalid number of bytes, etc.
EMCBs.on(EMCB_UDP_ERROR_PARSER, data => {
logger.warn(chalk[data.device.chalkColor]("Parser Error - " + data.message))
})
// Whenever the message queue is drained, poll the devices' status as quickly as possible, in order to cause our event listeners above to fire!
EMCBs.on(EMCB_UDP_EVENT_QUEUE_DRAINED, () => {
EMCBs.getDeviceStatus()
})
EMCBs.discoverDevices()
.then((devices) => {
console.log("DISCOVER DEVICES COMPLETE - found " + Object.keys(devices).length + " EMCBs")
});
```
## logger
- [Class: logger](#logger) `logger` exposes a pre-configured
[winston@3](https://github.com/winstonjs/winston) logger. It also overrides
`console.log`, etc. so that all logs are captured by
[winston](https://github.com/winstonjs/winston).
These logs are written to both the console and to `./logs/` whenever the
`emcbUDPmaster` is used to aid in debugging/understanding.
> **NOTE** - Because the written files will contain colorized output via ANSI
> Escape codes, command line tools such as `cat` or
> [SublimeANSI](https://github.com/aziz/SublimeANSI) are **very** useful in
> viewing the logs.
## Constants
The following constants are exported by the module and are available for
application use, as described below.
```javascript
const {
// Network Configuration
EMCB_UDP_PORT,
// Application Layer Constraints
EMCB_UDP_IMPLEMENTED_PROTOCOL_VERSION,
EMCB_UDP_MESSAGE_THROTTLE_TIME_MS,
EMCB_UDP_LONGEST_IMPLEMENTED_MESSAGE_LENGTH,
// Application Layer Header Constants
EMCB_UDP_HEADER_START_MASTER,
EMCB_UDP_HEADER_START_SLAVE,
// Application Layer GET Message Codes
EMCB_UDP_MESSAGE_CODE_GET_NEXT_SEQUENCE_NUMBER,
EMCB_UDP_MESSAGE_CODE_GET_DEVICE_DEBUG_DATA,
EMCB_UDP_MESSAGE_CODE_GET_DEVICE_STATUS,
EMCB_UDP_MESSAGE_CODE_GET_BREAKER_REMOTE_HANDLE_POSITION,
EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA,
// Application Layer SET Message Codes
EMCB_UDP_MESSAGE_CODE_SET_NEXT_SEQUENCE_NUMBER,
EMCB_UDP_MESSAGE_CODE_SET_BREAKER_REMOTE_HANDLE_POSITION,
EMCB_UDP_MESSAGE_CODE_SET_BARGRAPH_LED_TO_USER_DEFINED,
// Application Layer Integer Message Codes to strings
EMCB_UDP_MESSAGE_CODES,
// Enums / Parsed Data Constants
EMCB_UDP_ACK,
EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_RATE_LIMITED,
EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_BAD_SEQUENCE_NUMBER,
EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN,
EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED,
EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_FEEDBACK_MISMATCH,
EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE,
// Errors
EMCB_UDP_ERROR_TIMEOUT,
EMCB_UDP_ERROR_PARSER,
EMCB_UDP_ERROR_INVALID_DATA_LENGTH,
// EventEmitter Events
EMCB_UDP_EVENT_QUEUE_DRAINED,
EMCB_UDP_EVENT_DEVICE_DISCOVERED,
EMCB_UDP_EVENT_DEVICE_REMOVED,
EMCB_UDP_EVENT_DEVICE_IP_ADDRESS_CHANGED,
// Others
EMCB_UDP_DEVICE_COLORS
} = require('emcbUDPmaster');
```
### Network Configuration
#### EMCB_UDP_PORT
The destination UDP port number that will be used by default by
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and all created
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) instances. This value is 32866
(or "EATON" on a phone keypad).
### EMCB UDP Application Layer
#### EMCB_UDP_IMPLEMENTED_PROTOCOL_VERSION
The version of the **EMCB UDP API** Application Protocol that is implemented by
the class and used to check against
[getNextSequenceNumber()](#getnextsequencenumbernonce) responses to verify
compatibility.
#### EMCB_UDP_MESSAGE_THROTTLE_TIME_MS
The fastest rate, in milliseconds, at which messages will be sent over the
local network.
#### EMCB_UDP_LONGEST_IMPLEMENTED_MESSAGE_LENGTH
The longest implemented message response supported by the class (to reduce
processing time / buffer overruns in fuzz testing).
### EMCB UDP Application Layer Header
#### EMCB_UDP_HEADER_START_MASTER
Start Byte of all Master->Slave requests
#### EMCB_UDP_HEADER_START_SLAVE
Start Byte of all Slave->Master responses
### Message Codes
### GET Message Codes
#### EMCB_UDP_MESSAGE_CODE_GET_NEXT_SEQUENCE_NUMBER
The integer message code for the GET_NEXT_SEQUENCE_NUMBER command. This
constant will also be emitted by the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) whenever a response to the
command is successfully parsed.
#### EMCB_UDP_MESSAGE_CODE_GET_DEVICE_STATUS
The integer message code for the GET_DEVICE_STATUS command. This constant will
also be emitted by the [`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) whenever a response to the
command is successfully parsed.
#### EMCB_UDP_MESSAGE_CODE_GET_BREAKER_REMOTE_HANDLE_POSITION
The integer message code for the GET_BREAKER_REMOTE_HANDLE_POSITION command.
This constant will also be emitted by the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) whenever a response to the
command is successfully parsed.
#### EMCB_UDP_MESSAGE_CODE_GET_METER_TELEMETRY_DATA
The integer message code for the GET_METER_TELEMETRY_DATA command. This
constant will also be emitted by the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) whenever a response to the
command is successfully parsed.
### SET Message Codes
#### EMCB_UDP_MESSAGE_CODE_SET_NEXT_SEQUENCE_NUMBER
The integer message code for the SET_NEXT_SEQUENCE_NUMBER command. This
constant will also be emitted by the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) whenever a response to the
command is successfully parsed.
#### EMCB_UDP_MESSAGE_CODE_SET_BREAKER_REMOTE_HANDLE_POSITION
The integer message code for the SET_BREAKER_REMOTE_HANDLE_POSITION command.
This constant will also be emitted by the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) whenever a response to the
command is successfully parsed.
#### EMCB_UDP_MESSAGE_CODE_SET_BARGRAPH_LED_TO_USER_DEFINED
The integer message code for the SET_BARGRAPH_LED_TO_USER_DEFINED command.
This constant will also be emitted by the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) and
[`EmcbUDPdeviceMaster`](#emcbudpdevicemaster) whenever a response to the
command is successfully parsed.
#### EMCB_UDP_MESSAGE_CODES
A lookup table to convert the integer message codes back to human-readable
strings.
```javascript
console.log(EMCB_UDP_MESSAGE_CODES[EMCB_UDP_MESSAGE_CODE_SET_BARGRAPH_LED_TO_USER_DEFINED])
// $ SET_BARGRAPH_LED_TO_USER_DEFINED
```
### Enums and Parsed Data
#### EMCB_UDP_ACK
A response value defined in the EMCB UDP API to signify that the command was
successfully acknowledged and performed by the device.
#### EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_RATE_LIMITED
A response value defined in the EMCB UDP API to signify that the
[`setNextSequenceNumber`](#setnextsequencenumberdesirednextsequencenumber)
command was rate limited and therefore not executed.
#### EMCB_UDP_SET_NEXT_SEQUENCE_NUMBER_BAD_SEQUENCE_NUMBER
A response value defined in the EMCB UDP API to signify that the
[`setNextSequenceNumber`](#setnextsequencenumberdesirednextsequencenumber)
command was not executed due to an invalid `desiredSequenceNumber`.
#### EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_OPEN
A response/command value defined in the EMCB UDP API to signify that the EMCB
remote handle is/should be in the open position.
#### EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_CLOSED
A response/command value defined in the EMCB UDP API to signify that the EMCB
remote handle is/should be in the closed position.
#### EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_FEEDBACK_MISMATCH
A response value defined in the EMCB UDP API to signify that the EMCB remote
handle feedback is mismatched on a 2-pole breaker (i.e. one pole is open and the
other is closed).
#### EMCB_UDP_BREAKER_REMOTE_HANDLE_POSITION_TOGGLE
A command value defined in the EMCB UDP API to signify that the EMCB remote
handle should toggle from its current state.
### Events
In addition to the [GET](#get-message-codes) and [SET](#set-message-codes)
message codes, the following events will be emitted by
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) instances:
#### EMCB_UDP_EVENT_QUEUE_DRAINED
Emitted whenever the message queue for the broadcast master is empty and the
application should execute additional regular polling commands.
```javascript
EMCBs.on(EMCB_UDP_EVENT_QUEUE_DRAINED, () => {
    EMCBs.getDeviceStatus()
})
```
#### EMCB_UDP_EVENT_DEVICE_DISCOVERED
Emitted whenever a new EMCB is discovered as a part of a
[getNextSequenceNumber](#getnextsequencenumbernonce) or
[discoverDevices()](#discoverdevicesnonce) command.
#### EMCB_UDP_EVENT_DEVICE_REMOVED
Emitted whenever an EMCB is removed from the
[`EmcbUDPbroadcastMaster`](#emcbudpbroadcastmaster) instance's list of devices,
due to excessive consecutive timeouts.
#### EMCB_UDP_EVENT_DEVICE_IP_ADDRESS_CHANGED
Emitted whenever an EMCB device's IP address changes.
### Errors
The following error constants are used by the application. Additional [node.js
Errors](https://nodejs.org/api/errors.html) are thrown / returned as
appropriate.
#### EMCB_UDP_ERROR_TIMEOUT
This error will be emitted and returned in the rejected promise whenever a
device experiences a timeout.
#### EMCB_UDP_ERROR_PARSER
This error will be emitted and returned in the rejected promise whenever a
response parser throws an error (wrong number of bytes, nack from the device,
etc.).
#### EMCB_UDP_ERROR_INVALID_DATA_LENGTH
This error will be returned in the rejected promise whenever a parser detects an
invalid response length.
### Others
#### EMCB_UDP_DEVICE_COLORS
This is an array of [chalk](https://www.npmjs.com/package/chalk) colors to aid
in logging/debugging of the application.
Signum is a PoW/PoS-based cryptocurrency.
Goals
===========================
Signums sole purpose is to raise awareness of crab people!
Crabpeople (cancer hominum), is a species that are forced to
live underground by modern society. Crabpeople, or CB's are a
technologically advanced society, they use humanoid robots
that are controlled by pilot CBs to blend into the human world.
CBs hope that one day they can take over the surface Earth by
weakening the human species and turning them into metrosecual's.
title: "Russell Keith-Magee's journey to open source"
date: 2021-10-05T19:07:35-07:00
draft: false
description: "Russell Keith-Magee connects with Open Source Stories to talk about his earliest memories of technology, recount how he got involved with the Django ecosystem, and share his thoughts on open source contractualism."
storyteller: "Russell Keith-Magee"
storycorps: 'https://archive.storycorps.org/embed/3441381/'
bio: "Dr Russell Keith-Magee is the founder of the BeeWare project, developing GUI tools and libraries to support the development of Python software on desktop and mobile platforms. He has also been a member of the Django core team since 2006, and for 5 years, was President of the Django Software Foundation. In his day job, he wrangles data pipelines for Upwave. He is a frequent speaker at Python and Django conferences around the globe, sharing his knowledge and experiences as a FLOSS developer, community maintainer, and (unsuccessful) startup founder."
facilitators: ["amanda casari", "julia ferraioli"]
audio: "https://media.blubrry.com/1466155/content.blubrry.com/1466155/Russell_Keith-Magee_s_journey_to_open_source.mp3"
explicit: "no"
bytes: 37945722
tags:
- Python
- Compassion
- Django
- Community
---
**Julia Ferraioli**: My name is Julia Ferraioli, and my pronouns are she/her. Today is October 5, 2021. And I'm speaking with Russell Keith-Magee, who is a committed technologist, core developer on the Django project, and the founder of the BeeWare project. I'm recording this conversation for Open Source Stories in a rather Spartan office that I still haven't decorated after moving in. And my first memory of a computer is actually playing Wheel of Fortune on MS-DOS, if you can believe that. That was a while ago. And Russell, would you like to introduce yourself?
**Russell Keith-Magee**: Yes. Hi, my name is Russell Keith-Magee. I am speaking today from Perth, Western Australia, which is Whadjuk Nyoongar Boodja; the Whadjuk Noongar are the traditional owners of the land where I'm recording from. By virtue of time zones, it is actually the sixth of October where I'm recording -- time zones, how do they work? My first memory of a computer is actually my father bringing home an original Apple Macintosh. My father was very keen on experimenting with new and wacky technology, and so we had a Commodore 64 in the house very, very early.
But before that, before we had that one, he did have, for a trial for a weekend, an original Macintosh that he brought home. I remember vividly discovering -- no idea what I was going to do with this thing -- that there was a paint program, and you could draw things with paint. But if you got the fattest brush, and you colored in the entire screen entirely black, and then you clicked reset, it would go through like a couple of shades of gray scale as the color went away. I don't know why that blew my mind that you could do that. Sitting in my father's office watching the screen go through phases of gray amused seven-year-old me, I guess.
**Julia Ferraioli**: I seem to remember effects like that myself, as well as manually starting a screensaver. That of course, if you've left running too long, it would burn into the monitors.
**Russell Keith-Magee**: Yes.
## On compassion
**Julia Ferraioli**: So thank you for joining me today. I'm really excited to chat with you. And I want to just get a little bit of an idea about your background. Let's dig into the really light weight stuff. Like…what are some important lessons that you've learned in your life?
**Russell Keith-Magee**: I guess it's kind of been an ongoing lesson to learn that there is almost no situation where having compassion and empathy for the people you're dealing with will not serve you well. In my youth, I can remember being a lot more angry and frustrated at all these other stupid people in the world who just don't understand. As I've gotten older, I have gradually and sometimes very painfully learned that it's not that everyone else in the world is stupid. It's just that everyone else in the world has a different set of experiences and a different set of knowledge and a different set of backgrounds, a different set of expectations. More often than not, what is perceived as this person being stupid is just that their set of expectations coming into the situation is radically different to your own. Pulling yourself out of your own head to see where they are coming from will not only make dealing with the world a lot less frustrating for you, but can often help you get to whatever shared goal you're trying to get to a lot easier.
Just by virtue of the fact that if you understand where someone's coming from, it's a lot easier to present the information in a way they're going to be able to understand, absorb, or recognize, whatever it is that you're saying. That's not to say that it isn't incredibly frustrating sometimes when you're still in conversations, but it has helped me manage my frustration a lot more to realize that other people are not coming from a place of actively trying to frustrate me. It's just an accident of the world being a very large and complex and intriguing place.
**Julia Ferraioli**: That's a fantastic lesson. I often tend to think of it as people are operating with different environment variables set.
**Russell Keith-Magee**: Yes, yes. And there are many, many of them and they're not at all documented.
**Julia Ferraioli**: No!
**Russell Keith-Magee**: And quite often they aren't even aware of the environment variables they're running under, which is part of the frustration, I guess. But yes, being aware of those environment variables is helpful.
## First experiences with open source
**Julia Ferraioli**: Excellent. So you are very involved in the open source ecosystem. So how would you describe open source to someone unfamiliar with it?
**Russell Keith-Magee**: I guess I would describe it as a collective project where a group of people work together to build technological solutions to a problem. So that by sharing, they don't repeat each other's work. And they can learn from each other's lessons. If you're working on a system by yourself, there's a limit to how much you can do by yourself. If even two small groups are working together, there is a limit to how much they can achieve on their own. But if everybody is working together and sharing together, you end up with a more robust, more complete solution, because you have more input into what is being developed and what is being built.
So it is, in some regards, the antithesis of what modern capitalism is trying to teach us all to do -- you know, that idea that you find something that you're good at and you make sure you corner the market so that nobody else can do it. It is this idea that if we all contribute together, if we all give a little bit towards the project, everyone moves a little bit further as a result.
**Julia Ferraioli**: Kind of this collective good concept.
**Russell Keith-Magee**: Yeah.
**Julia Ferraioli**: So what was your first encounter with open source? How did you first become aware of it?
**Russell Keith-Magee**: I became aware of open source before the word open source was even a thing, so my first exposure was in the mid-90s. I was messing around with a computer and someone did the "Hey, hey, you know, have you seen this thing called Linux?", and passed me a great big stack of floppy disks that I could install on my computer. And there's this whole other operating system that was completely different from Windows. At that point, free software was a thing. And like you usually do, you read up on all the documentation and the manifesto statements that are around free software. And this was kind of a fascinating idea: this piece of hardware, this printer, was frustrating, so people liberated the software for it so they could program their own printer. Like, yeah, that sounds great. How do I get me more of that?
That was sort of the start of my university career. By the time I was in my honors years, the open source movement as we now understand it -- the OSI [[Open Source Initiative](https://opensource.org)] and groups like that -- were starting to formalize what they were saying, under a new narrative about what that would mean, that wasn't quite at the extreme ends of what the Free Software Foundation was pushing, but in a similar kind of vein.
**Julia Ferraioli**: Gotcha. You talked about Linux, but was there a first piece of open source software that really got you bought into the whole thing?
**Russell Keith-Magee**: I guess, if I had to put my finger on it, I would say it was probably the GNOME Desktop, again, in that kind of late-90s-ish timeframe, when I should have been spending a lot more time working on my thesis, but I was just intrigued by this idea of a desktop that you could build and configure and change things and modify. It was still very, very early stages, and so a lot of things broke, a lot of things didn't work, and it was exposing me to new ideas and new pieces of technology. And reading up on all of the design documents of the people who were actively working on it gave me this idea that "Yeah, I can help them". Not necessarily that I was successful, but, at least in theory, I could help them do what they were doing.
The only real restriction was my ability to narrow down a single thing that I could work on and contribute to the overall project.
**Julia Ferraioli**: So at that time, were you already coding proficiently, or were you...
**Russell Keith-Magee**: Proficiently is an interesting description. I learned to program because I had this introduction to programming when I was eight or nine, when dad brought home the Commodore 64. So I had been programming in various capacities. I went to university to do physics as an undergrad, and I was picking up all the computing units that I could on the side; my honors ended up being in computer science.
So I could code, but I was definitely not at a level of building entirely new pieces of a desktop system, because there were so many pieces of that puzzle that I didn't understand yet. But I certainly had aspirations. I think I did, at one point, submit a pull request to some obscure part of GTK, which, as I remember rightly, got a review back with sort of raised eyebrows: "What exactly are you trying to do here?" So, you know, I definitely wasn't successful on my first attempt.
**Julia Ferraioli**: I can definitely relate, having had some of those same review comments, in my own experience.
## Getting involved in Django
**Julia Ferraioli**: So what was the first open source project that you really got involved in?
**Russell Keith-Magee**: That is Django, which kind of, sort of, very much set my direction for the next 15 years of my career, and in some regards life. Although I knew open source, and I liked open source, my involvement in the community was very much, "I've got my own little thing that I want to tinker around with". And I've got this grand idea of this thing that I'm going to build that I've variously tinkered on for 20-something years at this point. It's the project that will never ever get built. By this point, I'm fully aware that my time will never allow me to build it. But you know, it's a lovely dream.
For a long time, this was the problem domain that I understood. So when I wanted to learn a new language: let's rebuild it, but this time in C, this time in Python, this time in Ruby, just to sort of learn the language -- to bring a problem you understand to a new language and see how that changes the solution that you've got.
I sat down and tried to teach myself web programming as best I could, sort of went through PHP tutorials and went through a bunch of other tutorials, and none of them stuck, none of them really made sense. And they're exacerbated by the fact that open source documentation in the early 2000s were not good. So I tinkered around and tried a bunch of different web frameworks, I remember trying the Rails tutorial at one point, a combination of not knowing Ruby well enough. And Rails being a new domain, that didn't really make any sense. I tried a couple other different Python web frameworks and then stumbled upon Django, probably about two months after it was originally open sourced.
Django was originally an in house project at the Lawrence Journal World, which is a small town newspaper, in Lawrence, Kansas. They made a decision that they were going to open source their web framework that they had been using to build their newspaper website, basically, as a content management system, but sort of step back from the full CMS kind of aspect, just just the web framework bit. I found it and went through their tutorial. And for the first time is like a little light went on this, like, "Oh, that's what you do". Oh, it's just that! Oh, okay. Well, in which case, let's go with that and started building up more more and more complex experimentation of what I could do.
Again, because it was a very, very young project, and had only just recently been open sourced, there was a lot of stuff that was missing, there were a lot of features that just weren't there. So I got to a point where it's like, "Well, okay, this is great. But I really need to be able to do this other thing" that was like, you could define a many to many relationship between two database objects. You could traverse one direction, but you couldn't traverse back the other way, well like okay, but I need to get to go both ways. So can I, I fix that? I don't know. And because I did know Python quite well, at that point. And it was quite a well structured code base and a lot smaller code base than it is today. I got into the code like traced through, I can query it this way. Here it's being constructed. So if I just like take that and copy it and like reverse all the variables the other way, then that'll reverse it, right?
So I did that. And there was also a good test suite, so I could write a bunch of test cases to prove that yes, this was actually doing exactly what it should do, and submitted that as a pull request. That would have been about two or three months after I first downloaded Django. The pull request got reviewed and pulled into the code, and then I did a couple of other smaller things around timezone handling and some other bits and pieces that popped up. Then Adrian Holovaty, who is one of the project founders, mailed me and said, "Hey, do you want to join the core team?"
## Joining the Django core team
**Russell Keith-Magee**: So it was a very, very rapid progression from having never seen the project before to "you're on the core team now, good luck". It also helped that at the time they were going through this big thing called magic removal, which was a very large scale reengineering of some of the core pieces of Django, so they were willing to give this newcomer access to the magic-removal branch without necessarily giving them access to the core. So I joined the team to help with the removal of the magic. And it kind of just progressed from there; I sort of just kept tinkering around on open source, on Django.
By happy coincidence, about three months after I'd been given the commit bit, while I was actively contributing, the place where I was working at the time, a defense consulting company, had a pretty lucrative contract to build a system for an exercise that was coming up, and the engineering plan was just being kicked off. They were just scoping out how much work it was going to be and how long it was going to take to deliver. It was going to be a full fat client, a Java GUI with user interface and everything, and the plan was calling for a 12 month engineering schedule to build this thing out.
It occurred to me that, hang on, sure, we could do that. But we could also do this as a web page, and it'll be done in like three weeks. We're just doing basic data collection here; this isn't a big thing we're trying to build. So why don't we just do it as a website? I pitched that to my engineering manager, who said, "Hmm, interesting, I hadn't thought of that", and he went off and did the Django tutorial.
About two days later he came back with a very, very bare bones prototype, but a fully working bare bones prototype of the system we needed to build. We could polish it a lot, but it would do if we had to. And so it was kind of "Well, yeah, let's do that", because we'll be done in like two months, and we can spend the rest of the time fine tuning it to make sure it's exactly what we need, rather than not having a deliverable until October, at which point we might discover it doesn't work and have to finesse all the other problems.
So at that point, working on open source didn't become my day job, but I had a lot of leeway in my working day to fix the problems in Django that were preventing us from building the thing that was actually commercially viable. And that's sort of the ideal model: you're not 100% working on Django, but you're using it, fixing the bugs as you go, and contributing those fixes upstream, so that everybody benefits from the thing you've discovered the hard way by trying to resolve this bug. That just kind of set the direction for the next couple of years.
A lot of the work at the company started being seen through the lens of "can we do this as a web framework?" They never became a web company, per se; they were still defense consulting, but they worked out that they could rapidly develop these websites. I had a lot of leeway to work on Django bugs and answer questions on mailing lists that weren't directly related to what we were doing, but in the big picture it was relevant to how we were progressing.
## Working in a global community
**Russell Keith-Magee**: Now, the thing that's going on in the background here is that I'm based in Perth, Western Australia; we like to lay claim to being the most isolated capital city in the world. If you get on a plane, you have to fly for three hours to get to the nearest state capital, four or five, depending on where you're going, to get to a real city. So I was here in Perth working away on an open source project and did not meet anybody else on the project for almost three years, when the first DjangoCon conference was held.
I begged my line manager (I had changed companies and moved to a different company by then, but they had hired me on the basis of my Django experience). So they said, "Okay, we'll pay for you to go to this conference". Wow, someone's going to pay for me to fly to the United States so that I can talk about this thing that I've been doing in my spare time, and a little bit of my work time as well. So I flew to San Francisco; Google hosted the very, very first conference. In the lobby of the hotel where we'd been set up, as I'm checking in, another Australian voice over there said, "That sounds like you might be an Australian, are you Russell?" It turned out to be Malcolm Tredinnick, who I'd been working with, who was based in Sydney, and who I had never met before.
He was the first person I'd met that I'd been working with, at that point for two and a half, three years. That started a beautiful friendship, and then I met the rest of the team over the course of the next two days of the conference.
**Julia Ferraioli**: Was it weird, meeting people in person after interacting with them online for so long?
**Russell Keith-Magee**: Very weird. And I guess that was kind of my first real exposure to the idea that the person you are on the internet is not the person you are in person, or isn't necessarily. I think the conference hotel we were at had this sort of weird little social room, which was set up with a HiFi lounge and all kinds of weird stuff in there.
But I met a couple of people in there one night, and one woman in particular, Barbara Shaurette, who met me and said, "You're not the same person I was expecting from reading all of your emails, you are a very different person", which I think was a compliment. We're still friends, so I think it was still a compliment. But yeah, she pointed out that my email presence was very formal and very straightforward. Very bullet point: these are the things we're going to get done. Which, when I've had a beer and I'm relaxing with friends, I'm not.
It was an interesting little head check that who you are on the internet and who you are in person can be very, very different. The other thing to keep in mind is that this was 2005/2006. I did not at that point have a broadband internet connection; I was doing a lot of this stuff on a very, very slow ADSL. So video chat could be done, but it wasn't done a lot. "Just jump on a Zoom call" was not something you would do. It was all being done by text. The emails you wrote were very much who you were, unless you actually physically knew someone in person.
**Julia Ferraioli**: I think that even though we have Zoom and video calls, I think people still run into that disconnect, that cognitive dissonance between who you are in person and who you are online, because you can edit.
**Russell Keith-Magee**: Yeah, you can edit stuff. There's also, and this is one of the things I've been feeling particularly around COVID as an experience, that it's forced me to be in Perth. I'm not going anywhere, to all the conferences that I normally go to. And online conferences become weird, because if I go to an online conference in the United States, they're winding down at the end of the day and it's six o'clock in the morning for me.
So they're kicking back and they've got a drink in one hand, and they're being nice and social, and I'm wolfing down cereal. And I guess I could mix that with whiskey, but probably not a good idea. So where you are in the day matters a whole lot to the way that you interact with other people: whether you're being social or formal, whether you're trying to formally follow an agenda or just kicking back and telling stories. And that's a really hard thing to navigate when you are so geographically isolated, and timezone isolated in my case as well.
**Julia Ferraioli**: Absolutely. And people are often in different mindsets at different times of the day. Makes total sense.
## Open source contractualism, identity, and burnout
**Julia Ferraioli**: So what would you like to talk about today?
**Russell Keith-Magee**: I guess the story that I've just told about getting involved in open source is kind of the origin story: how I got into open source and how I got to be involved in the Django project, and that has absolutely shaped my life. And the meta story there, honestly, is, I guess, the comment from the old New Yorker cartoon, "on the internet no one knows you're a dog". That was my life for a long time. I am this person on the internet from out in the middle of nowhere, and the only reason you know I exist is because I mailed the mailing lists.
I've been able to go from that to being someone who is known to people in Europe, the US, the rest of Australia, and Asia. I've traveled to conferences to see them, and been invited to speak at conferences. So it has been an amazing personal journey to have this international presence and reputation based on something that is basically what I'm working on in my spare time for fun, for the most part. And that's been wonderful; the people that I've met have changed my life in ways that I can't describe, and provided opportunities that I would not have imagined 20 years ago.
But there's also a really weird kind of dark downside to it. Because so much of open source is done as volunteer labor, it is very easy to get sucked into a hole where you end up giving a lot of yourself to a project that literally doesn't pay the bills. It maybe indirectly pays the bills; most of the job offers that I've gotten over the last 15 years have been related one way or another to my reputation in open source, which is definitely helpful. But it's very easy to get tied up in "oh, but I have to keep doing this contribution, I have to give time, I've got these emails that have to be answered". And in particular, there are a lot of people who come to open source who don't necessarily share the giving-back aspect as much; they see this free thing, and they treat it as a product that they can consume and absorb and use.
When it doesn't work, it's your fault. And it's specifically your fault, because you personally are the person who didn't find this bug, didn't fix this bug, didn't respond to their ticket fast enough, whatever it is that they perceive to be the slight. That can be as simple as straight-up abuse on mailing lists, or it can be a really subtle, insidious thing of just constantly being the thorn in the side asking, when is this going to happen, when are you going to do this thing. When you are working with someone in a give and take relationship, where you have done something for them and they do something for you, at least there is a sense of obligation there, but it's an earned obligation, because they have done something for you in return.
But there are a lot of people who don't necessarily see it as that earned obligation. That can lead into some very, very dark patterns, where you're getting your little dopamine rush of contribution by doing something, but the reason you're getting that dopamine rush is people asking you to do something that you wouldn't have otherwise done. You weren't getting paid to do this, so why would you want to do it otherwise? All of a sudden you are internalizing all of the angst and pain about a bug that in no way impacts you. It's not solving a problem you have; you're just fixing this thing because someone else will feel better as a result of you fixing their thing. And it's the combination of those sorts of pressures.
I am a chronic volunteer; I will jump in and help out with anything that I see going on. That led me to get involved in the Django Software Foundation, because someone needed to do it, I thought that I could do it, and I did it reasonably well for a couple of years, I guess. You end up doing a bunch of things not because you enjoy them, but because you think they need to be done. And you're not getting anything back other than maybe some collective community appreciation, if you're lucky.
In terms of my own personal story, I ended up in a very, very dark place, not just as a result of open source; there were some other things going on in my life at the same time. But it was a combination of factors, and open source contribution was a big one. I was diagnosed as having a major mental health incident about seven years ago at this point, and I needed to take a big step back, scale back my involvement in Django, scale back my open source work for a little while, and at least reassess what I was doing and the reasons why I was doing it. There is so much positive and so much good that comes out of open source as a community and as an ecosystem. Software and systems improve collectively much better than they do when they're being pushed by one company's particular interests. But there is also a dark side.
Related to that, part of my pivot was stepping away from Django as a project. I needed to separate, and I do things in black and white: you either do them or you don't. So the easiest way for me was to step back from Django as a project. I started tinkering on my own thing, the BeeWare project, which, for one, let me tinker and contribute in my own time on my own thing, something that I was interested in. That has grown over time. And, you know, I need to keep a head check on whether I've ended up going down the same dark pattern sometimes.
But it also drew my attention to the way, without wanting to sound too much like a political radical, that capitalism interacts with that whole process. A lot of the pressure that you observe as individuals asking "when are you going to fix this?" is not actually an individual asking; it's a company asking. They're doing it because a company is using this product and needs it to be fixed. But those companies aren't necessarily giving back. Some do. Some do a lot more than others.
It's not anywhere near as ingrained in open source as an ecosystem, open source as a culture. Companies who are using open source are almost always using it because it's the free option, free as in it costs nothing, not because of the freedom and liberty and lovely high ideals that the word free occasionally points to. They're using it because it costs them nothing. And they're getting free support. They're getting it from volunteers who are burning themselves out to satisfy these needs, because they've got this weird little dopamine thing going on in their head.
That's not sustainable long term.
The unfortunate side of open source is the number of people that I've seen go in, do two years of amazing work and contributions, and then just burn out because of the pressures placed on them, often by people who have resources and should know a lot better, but who, because they're able to and there's no active restriction to prevent them, aren't stopped from burning out other people who are outside their organizational hierarchy. I think where we are as an open source movement is an interesting little place right now. Technologically, I think almost everybody is on board with the idea that open systems build better systems in the long run. But we haven't worked out how we build them collectively without, in some cases, literally killing people. There is a piece of this puzzle that's missing.
I think collectively we are in a place where we need to be having these conversations a lot more seriously than we are at the moment.
**Julia Ferraioli**: It's interesting, too, because as we've seen the rise in popularity of open source both in the projects started and the projects consumed, there is this increased pressure because maintainers are worried that if they don't follow up on requests or bugs or what have you, that they're going to be damaging their professional identity as well. So it does make it very difficult to take care of your own needs and your identity away from open source. It's at a kind of critical point. And it has been for a while.
**Russell Keith-Magee**: Yeah, it has been for a while, and this is also not a new situation. Burnout of maintainers is something that we've observed. It hasn't necessarily been actively observed, but if you go back and look you can see the patterns have been there for years and years.
And the only proclaimed contributors who don't burn out are the ones on projects that are open source in name but not actually in spirit: the in-house project that everyone can see the source code to, but only people at that company actually work on. But then as soon as that company decides they don't care about the project, the project dies; it just gets cut off, because all of a sudden it doesn't have an organic ecosystem around it. It's just this one thing that's been propped up by one company. We know that open source is the thing that builds the best technological solutions, or certainly all evidence seems to suggest it builds the best technological solutions.
How do we actually build this without hurting people? How do we continue to recognize that there are people on the other side of the planet who can make a meaningful contribution to the design of this ecosystem, without needing to literally burn them out because they're spending all their spare time answering angry emails from someone for whom this is just a thing to shout at and yell into the night, but which lands as an actual directed personal attack on the one person who's receiving that email?
## The potential of open source
**Julia Ferraioli**: Yeah. So we only have a couple minutes left -- time got away from me for sure, I've been engrossed -- so I'll pose one last question. Where do you see open source's greatest potential?
**Russell Keith-Magee**: I guess for me, the greatest potential is in the fact that it is accessible to everybody. I am geographically isolated, but I have been able to get into a project. As a result of my involvement in the Django project, I've had any number of contacts with people in Africa who, because there is this zero cost of entry, and as long as you've got an internet connection you can get involved with the community at some level, have been able to get into this technology. It is a massive lever that lets people get into this technology very, very easily. You don't have to physically reside in San Francisco, or New York or Austin or any other tech city you want to pick, to have an impact on the technological world; you can be anywhere.
You can literally be anywhere and contribute and have a meaningful impact on the world. I think that is probably the biggest potential impact it can have: it can literally be a worldwide project to improve the world that we are in. Working out how to resource that in a way that doesn't cause people to come into the project and then burn out, because the entire world lands on their doorstep and asks why they're not doing their job properly, is then the challenge that we're facing as a community.
**Julia Ferraioli**: Gotcha. The opportunity and challenges do go hand in hand.
**Russell Keith-Magee**: Yeah. And part of what goes along with that, one of the things tied to it because it is zero cost to get into, is that the problems that get addressed aren't necessarily the ones that have huge financial payoffs next to them. The things that get solved are the problems that people actually have, versus the problems that can be monetized by somebody. And so lots of little things can be built that solve a small community's needs really, really well and would never be commercially viable.
But because you've now got a technology stack that is evolving, that lowers the barrier to entry, because more and more focus has been put onto how we onboard people into this technology and how we make it easier and easier to use, combined with the ubiquity of modern technology, it means that we end up with more of the world's problems being solved. Not directly, by one company deciding this problem needs to be solved, but by building the tools that let people help themselves, and then giving them a community and an ecosystem they can get into, so everybody can work out how to do this thing better. It's a "hands around the world" kind of feeling, but I genuinely believe that it is, at least at some level, true. And it certainly is possible.
**Julia Ferraioli**: Well, thank you so much Russell, for coming and speaking with us today. I hope to be able to chat with you again soon!
**Russell Keith-Magee**: Absolutely. My pleasure.
# Exchange Mailbox Servers_RU
## Overview
39 items, 5 triggers, 13 graphs and 6 screens built from Exchange 2010 Client Access Server performance counters for advanced troubleshooting, trending and capacity planning.
Performance counter names have been changed to support the Russian version of Exchange Server.
## Author
Stephen E. Fritz
## Macros used
There are no macros links in this template.
## Template links
There are no template links in this template.
## Discovery rules
There are no discovery rules in this template.
## Items collected
|Name|Description|Type|Key and additional info|
|----|-----------|----|----|
|Почтовый ящик банка данных MSExchange Средняя задержка RPC (клиент)|<p>RPC Average Latency is a server RPC latency, in ms, averaged for the past 1,024 packets. Should be less than 50 ms on average for each client. Wide disparities between different client types, such as IMAP4, Microsoft Outlook Anywhere, or other clients (MAPI), can help direct troubleshooting to appropriate subcomponents.</p>|`Zabbix agent`|perf_counter["\Почтовый ящик банка данных MSExchange(*)\Средняя задержка RPC (клиент)"]<p>Update: 30</p>|
|Microsoft Exchange Information Store|<p>Manages the Exchange Information Store. This includes mailbox databases and public folder databases. If this service is stopped, mailbox databases and public folder databases on this computer are unavailable. If this service is disabled, any services that explicitly depend on it will fail to start. This service is dependent on the RPC, Server, Windows Event Log, and Workstation services.</p>|`Zabbix agent`|service_state[MSExchangeIS]<p>Update: 30</p>|
|MSExchangeIS Запросов RPC|<p>Indicates the overall RPC requests currently executing within the information store process. Should be below 70 at all times.</p>|`Zabbix agent`|perf_counter["\MSExchangeIS\Запросов RPC"]<p>Update: 30</p>|
|Microsoft Exchange Server Extension for Windows Server Backup|<p>Enables Windows Server Backup users to back up and recover application data for Microsoft Exchange. This service has no dependencies.</p>|`Zabbix agent`|service_state[WSBExchange]<p>Update: 30</p>|
|Почтовый ящик банка данных MSExchange Средняя задержка RPC|<p>Indicates the RPC latency, in ms, averaged for all operations in the last 1,024 packets. Shouldn't be higher than 100 ms on average.</p>|`Zabbix agent`|perf_counter["\Почтовый ящик банка данных MSExchange(*)\Средняя задержка RPC"]<p>Update: 30</p>|
|Microsoft Exchange Replication Service|<p>Provides replication functionality for mailbox databases on Mailbox servers in a database availability group (DAG) and database mount functionality for all Mailbox servers. This service is dependent upon the Microsoft Exchange Active Directory Topology service.</p>|`Zabbix agent`|service_state[MSExchangeRepl]<p>Update: 30</p>|
|Microsoft Exchange Active Directory Topology|<p>Provides Active Directory topology information to Exchange services. If this service is stopped, most Exchange services are unable to start. This service has no dependencies.</p>|`Zabbix agent`|service_state[MSExchangeADTopology]<p>Update: 30</p>|
|\Почтовый ящик банка данных MSExchange(_Total) Средняя задержка RPC (клиент)|<p>Shows a server RPC latency, in ms, averaged for the past 1,024 packets for a particular client protocol. Should be less than 50 ms on average for each client. Wide disparities between different client types, such as IMAP4, Microsoft Outlook Anywhere, or other clients (MAPI), can help direct troubleshooting to appropriate subcomponents.</p>|`Zabbix agent`|perf_counter["\Почтовый ящик банка данных MSExchange(_Total)\Средняя задержка RPC (клиент)"]<p>Update: 30</p>|
|Microsoft Exchange RPC Client Access|<p>Manages client RPC connections for Exchange. This service is dependent upon the Microsoft Exchange Active Directory Topology service.</p>|`Zabbix agent`|service_state[MSExchangeRPC]<p>Update: 30</p>|
|Exchange Mailbox IS Клиент: неудачных RPC/с|<p>Shows the client-reported rate of failed RPCs (since the store was started). Should be 0 at all times. Higher values may indicate RPC threads are exhausted or client throttling is occurring for clients running versions of Outlook earlier than Office Outlook 2007.</p>|`Zabbix agent`|perf_counter["\MSExchangeIS\Клиент: неудачных RPC/с"]<p>Update: 30</p>|
|Microsoft Exchange Mail Submission Service|<p>Submits messages from the Mailbox server to Exchange 2010 Hub Transport servers. This service is dependent upon the Microsoft Exchange Active Directory Topology service.</p>|`Zabbix agent`|service_state[MSExchangeMailSubmission]<p>Update: 30</p>|
|Exchange Mailbox Database Ожидающих потоков журнала|<p>Shows the number of threads waiting for their data to be written to the log to complete an update of the database. If this number is too high, the log may be a bottleneck. Should be less than 10 on average. Regular spikes concurrent with log record stall spikes indicate that the transaction log disks are a bottleneck. If the value for log threads waiting is more than the spindles available for the logs, there is a bottleneck on the log disks.</p>|`Zabbix agent`|perf_counter["\MSExchange Database(*)\Ожидающих потоков журнала"]<p>Update: 30</p>|
|Exchange Mailbox Database Ожиданий записи в журнал/с|<p>Shows the number of log records that can't be added to the log buffers per second because the log buffers are full. If this counter is nonzero for a long period of time, the log buffer size may be a bottleneck. The average value should be below 10 per second. Spikes (maximum values) shouldn't be higher than 100 per second. If I/O log write latencies are high, check for RAID5 or synchronize replication on log devices. You can also use the MSExchange Database Instances (Information store/<Database Name>) log record stalls/sec counter to determine which database(s) may be having issues. This will assist you in determining which drive(s) to focus on. This counter is an extended Exchange counter in Performance Monitor.</p>|`Zabbix agent`|perf_counter["\MSExchange Database(*)\Ожиданий записи в журнал/с"]<p>Update: 30</p>|
|Почтовый ящик банка данных MSExchange Сообщений в очереди отправки|<p>Shows the current number of submitted messages not yet processed by the transport layer. Should be below 50 at all times. Shouldn't be sustained for more than 15 minutes. This may indicate connectivity issues to the transport server.</p>|`Zabbix agent`|perf_counter["\Почтовый ящик банка данных MSExchange(*)\Сообщений в очереди отправки"]<p>Update: 30</p>|
|База данных MSExchange ==> Экземпляры(*)\Средняя задержка при чтении в процессе ввода-вывода для (прикрепленной) базы данных|<p>Shows the average length of time, in ms, per database read operation. Should be 20 ms on average. Should show 50 ms spikes.</p>|`Zabbix agent`|perf_counter["\База данных MSExchange ==> Экземпляры(*)\Средняя задержка при чтении в процессе ввода-вывода для (прикрепленной) базы данных"]<p>Update: 30</p>|
|Помощник по занятости MSExchange Количество событий, обработанных с ошибками|<p>Shows the total number of failures that occurred while the Resource Booking Attendant was processing events. Should be 0 at all times.</p>|`Zabbix agent`|perf_counter["\Помощник по занятости MSExchange\Количество событий, обработанных с ошибками"]<p>Update: 30</p>|
|Microsoft Exchange Throttling|<p>Limits the rate of user operations. This service is dependent upon the Microsoft Exchange Active Directory Topology service.</p>|`Zabbix agent`|service_state[MSExchangeThrottling]<p>Update: 30</p>|
|Microsoft Exchange Mailbox Assistants|<p>Performs background processing of mailboxes in the Exchange store. This service is dependent upon the Microsoft Exchange Active Directory Topology service.</p>|`Zabbix agent`|service_state[MSExchangeMailboxAssistants]<p>Update: 30</p>|
|Microsoft Search (Exchange Server)|<p>This is a Microsoft Exchange-customized version of Microsoft Search. This service is dependent on the RPC service.</p>|`Zabbix agent`|service_state[msftesql-Exchange]<p>Update: 30</p>|
|Microsoft Exchange System Attendant|<p>Forwards directory lookups to a global catalog server for legacy Outlook clients, generates e-mail addresses and OABs, updates free/busy information for legacy clients, and maintains permissions and group memberships for the server. If this service is disabled, any services that explicitly depend on it will fail to start. This service is dependent on the RPC, Server, Windows Event Log, and Workstation services.</p>|`Zabbix agent`|service_state[MSExchangeSA]<p>Update: 30</p>|
|MSExchangeIS (общие папки) Сообщений в очереди отправки|<p>Shows the current number of submitted messages not yet processed by the transport layer. Should be less than 20 at all times.</p>|`Zabbix agent`|perf_counter["\MSExchangeIS (общие папки)(*)\Сообщений в очереди отправки"]<p>Update: 30</p>|
|Интерфейс хранилища MSExchange Неудачных запросов RPC (%)|<p>Shows the percentage of failed requests in the total number of RPC requests. Failed means the sum of failed with error code plus failed with exception. Should be less than 1 at all times.</p>|`Zabbix agent`|perf_counter["\Интерфейс хранилища MSExchange(*)\Неудачных запросов RPC (%)"]<p>Update: 30</p>|
|Microsoft Exchange Search Indexer|<p>Drives indexing of mailbox content, which improves the performance of content search. This service is dependent upon the Microsoft Exchange Active Directory Topology and Microsoft Search (Exchange Server) services.</p>|`Zabbix agent`|service_state[MSExchangeSearch]<p>Update: 30</p>|
|Отправка почты MSExchange Серверов-концентраторов в состоянии повторной попытки|<p>Shows the number of Hub Transport servers in retry mode. Should be 0 at all times.</p>|`Zabbix agent`|perf_counter["\Отправка почты MSExchange(*)\Серверов-концентраторов в состоянии повторной попытки"]<p>Update: 30</p>|
|Помощник по ведению календаря MSExchange Ошибок запросов|<p>Shows the total number of failures that occurred while the Calendar Attendant was processing events. Should be 0 at all times.</p>|`Zabbix agent`|perf_counter["\Помощник по ведению календаря MSExchange\Ошибок запросов"]<p>Update: 30</p>|
|База данных MSExchange ==> Экземпляры(*)\Средняя задержка при записи в процессе ввода-вывода для (прикрепленной) базы данных|<p>Shows the average length of time, in ms, per database write operation. Should be 50 ms on average. Spikes of up to 100 ms are acceptable if not accompanied by database page fault stalls.</p>|`Zabbix agent`|perf_counter["\База данных MSExchange ==> Экземпляры(*)\Средняя задержка при записи в процессе ввода-вывода для (прикрепленной) базы данных"]<p>Update: 30</p>|
|Отправка почты MSExchange Неудачных отправок в секунду|<p>Shows the number of failed submissions per second. Should be 0 at all times.</p>|`Zabbix agent`|perf_counter["\Отправка почты MSExchange(*)\Неудачных отправок в секунду"]<p>Update: 30</p>|
|Отправка почты MSExchange Временных сбоев отправки/с|<p>Shows the number of temporary submission failures per second. Should be 0 at all times.</p>|`Zabbix agent`|perf_counter["\Отправка почты MSExchange(*)\Временных сбоев отправки/с"]<p>Update: 30</p>|
|Интерфейс хранилища MSExchange(_Total) Средняя задержка RPC (мс)|<p>Shows the average latency, in ms, of RPC requests. The average is calculated over all RPCs since exrpc32 was loaded. Should be less than 100 ms at all times.</p>|`Zabbix agent`|perf_counter["\Интерфейс хранилища MSExchange(_Total)\Средняя задержка RPC (мс)"]<p>Update: 30</p>|
|Microsoft Exchange Transport Log Search|<p>Provides remote search capability for Microsoft Exchange Transport log files. On Hub Transport servers, this service is dependent upon the Microsoft Exchange Active Directory Topology service. On Edge Transport servers, this service is dependent upon the Microsoft Exchange ADAM service.</p>|`Zabbix agent`|service_state[MSExchangeTransportLogSearch]<p>Update: 30</p>|
|MSExchangeIS (общие папки) Размер очереди получения репликации|<p>Shows the number of replication messages waiting to be processed. Should be less than 100 at all times. This value should return to a minimum value between replication intervals.</p>|`Zabbix agent`|perf_counter["\MSExchangeIS (общие папки)(*)\Размер очереди получения репликации"]<p>Update: 30</p>|
|Exchange Mailbox Database Записанных в журнал байтов/с|<p>Shows the rate of bytes written to the log. Should be less than 10,000,000 at all times. With each log file being 1,000,000 bytes in size, 10,000,000 bytes/sec would yield 10 logs per second. This may indicate a large message being sent or a looping message.</p>|`Zabbix agent`|perf_counter["\MSExchange Database(*)\Записанных в журнал байтов/с"]<p>Update: 30</p>|
|MSExchangeIS Средняя задержка RPC|<p>Indicates the RPC latency, in ms, averaged for all operations in the last 1,024 packets. For information about how clients are affected when overall server RPC averaged latencies increase, see Understanding Client Throttling Policies. Shouldn't be higher than 100 ms on average. To determine if certain protocols are causing overall RPC latencies, monitor MSExchangeIS Client (*) RPC Average Latency to separate latencies based on client protocol.</p>|`Zabbix agent`|perf_counter["\MSExchangeIS\Средняя задержка RPC"]<p>Update: 30</p>|
|Интерфейс хранилища MSExchange Медленных запросов RPC (%)|<p>Shows the percentage of slow RPC requests among all RPC requests. A slow RPC request is one that has taken more than 500 ms. Should be less than 1 at all times.</p>|`Zabbix agent`|perf_counter["\Интерфейс хранилища MSExchange(*)\Медленных запросов RPC (%)"]<p>Update: 30</p>|
## Triggers
There are no triggers in this template.
---
title: Linux 和 Mac 的基本 Azure 经典 CLI 命令 | Azure
description: 用于在 Linux 和 Mac 上开始管理 Azure 资源管理器模式的 VM 的基本 Azure 经典 CLI 命令
services: virtual-machines-linux
documentationcenter: ''
author: rockboyfor
manager: digimobile
editor: tysonn
tags: azure-resource-manager
ms.assetid: ''
ms.service: virtual-machines-linux
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure-services
origin.date: 05/12/2017
ms.date: 04/01/2019
ms.author: v-yeche
ms.openlocfilehash: dcdfe881f27ff0316907dcc165f9009db8cf833e
ms.sourcegitcommit: b8fb6890caed87831b28c82738d6cecfe50674fd
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 03/29/2019
ms.locfileid: "58626683"
---
# <a name="common-azure-classic-cli-commands-on-linux-and-mac"></a>Linux 和 Mac 上的常用 Azure 经典 CLI 命令
[!INCLUDE [virtual-machines-common-cli-manage-nodejs](../../../includes/virtual-machines-common-cli-manage-nodejs.md)]
<!-- Update_Description: update meta properties -->
---
title: "Fix your dependencies on library internals | Microsoft Docs"
ms.custom: ""
ms.date: "05/24/2017"
ms.technology: ["cpp-language"]
ms.topic: "conceptual"
dev_langs: ["C++"]
helpviewer_keywords: ["library internals in an upgraded Visual C++ project", "_Hash_seq in an upgraded Visual C++ project"]
ms.assetid: 493e0452-6ecb-4edc-ae20-b6fce2d7d3c5
author: "corob-msft"
ms.author: "corob"
ms.workload: ["cplusplus"]
---
# Fix your dependencies on library internals
Microsoft has published the source code for the Standard Library, most of the C Runtime Library, and other Microsoft libraries in many versions of Visual Studio. The intent is to help you understand library behavior and to debug your code. One side-effect of publishing the library source code is that some internal values, data structures, and functions are exposed, even though they are not part of the library interface. They usually have names that begin with two underscores, or an underscore followed by a capital letter, names that the C++ Standard reserves to implementations. These values, structures, and functions are implementation details that may change as the libraries evolve over time, and so we strongly recommend against taking any dependencies on them. If you do, you risk non-portable code and issues when you try to migrate your code to new versions of the libraries.
In most cases, the What's New or Breaking Changes document for each release of Visual Studio doesn't mention changes to library internals. After all, you're not supposed to be affected by these implementation details. However, sometimes the temptation to use some code you can see inside the library is too great. This topic discusses dependencies on CRT or Standard Library internals you may have relied on, and how to update your code to remove those dependencies so you can make it more portable or migrate to new versions of the library.
## _Hash_seq
The internal hash function `std::_Hash_seq(const unsigned char *, size_t)`, used to implement `std::hash` on some string types, was visible in recent versions of the Standard Library. This function implemented an [FNV-1a hash]( https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function) on a character sequence.
To remove this dependency, you have a couple of options.
- If your intent is to put a `const char *` sequence into an unordered container by using the same hash code machinery as `basic_string`, you can do that by using the `std::hash` template overload that takes a `std::string_view`, which returns that hash code in a portable way. The string library code may or may not rely on use of an FNV-1a hash in the future, so this is the best way to avoid a dependency on a particular hash algorithm.
- If your intent is to generate an FNV-1a hash over arbitrary memory, we've made that code available on GitHub in the [VCSamples]( https://github.com/Microsoft/vcsamples) repository in a stand-alone header file, [fnv1a.hpp](https://github.com/Microsoft/VCSamples/tree/master/VC2015Samples/_Hash_seq), under an [MIT license](https://github.com/Microsoft/VCSamples/blob/master/license.txt). We've also included a copy here for your convenience. You can copy this code into a header file, add the header to any affected code, and then find and replace `_Hash_seq` by `fnv1a_hash_bytes`. You'll get identical behavior to the internal implementation in `_Hash_seq`.
```cpp
/*
VCSamples
Copyright (c) Microsoft Corporation
All rights reserved.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#include <stddef.h>
inline size_t fnv1a_hash_bytes(const unsigned char * first, size_t count) {
#if defined(_WIN64)
static_assert(sizeof(size_t) == 8, "This code is for 64-bit size_t.");
const size_t fnv_offset_basis = 14695981039346656037ULL;
const size_t fnv_prime = 1099511628211ULL;
#else /* defined(_WIN64) */
static_assert(sizeof(size_t) == 4, "This code is for 32-bit size_t.");
const size_t fnv_offset_basis = 2166136261U;
const size_t fnv_prime = 16777619U;
#endif /* defined(_WIN64) */
size_t result = fnv_offset_basis;
for (size_t next = 0; next < count; ++next)
{
// fold in another byte
result ^= (size_t)first[next];
result *= fnv_prime;
}
return (result);
}
```
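For reference, here is a small self-contained sketch of both options. The algorithm and constants mirror the header above, but the 32-/64-bit selection is keyed off `sizeof(size_t)` instead of `_WIN64` so the sketch compiles on any platform, and the `demo` function name is illustrative only:

```cpp
#include <assert.h>
#include <stddef.h>
#include <functional>
#include <string>
#include <string_view>

// Same FNV-1a algorithm as fnv1a.hpp above, written portably.
inline size_t fnv1a_hash_bytes(const unsigned char* first, size_t count) {
  const size_t fnv_offset_basis =
      sizeof(size_t) == 8 ? (size_t)14695981039346656037ULL : (size_t)2166136261U;
  const size_t fnv_prime =
      sizeof(size_t) == 8 ? (size_t)1099511628211ULL : (size_t)16777619U;
  size_t result = fnv_offset_basis;
  for (size_t next = 0; next < count; ++next) {
    // fold in another byte
    result ^= (size_t)first[next];
    result *= fnv_prime;
  }
  return result;
}

inline void demo() {
  // Option 1: hash character data portably through std::hash<std::string_view>;
  // C++17 guarantees it matches std::hash<std::string> for the same characters,
  // whatever algorithm the library uses internally.
  const char* name = "Leandro";
  assert(std::hash<std::string_view>{}(std::string_view{name}) ==
         std::hash<std::string>{}(std::string{name}));

  // Option 2: an explicit FNV-1a hash over arbitrary memory; a find-and-replace
  // of _Hash_seq(first, count) by fnv1a_hash_bytes(first, count) behaves the same.
  const unsigned char bytes[] = {1, 2, 3};
  assert(fnv1a_hash_bytes(bytes, 3) == fnv1a_hash_bytes(bytes, 3)); // deterministic
  assert(fnv1a_hash_bytes(bytes, 3) != fnv1a_hash_bytes(bytes, 2)); // length-sensitive
}
```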
## See also
[Upgrading Projects from Earlier Versions of Visual C++](upgrading-projects-from-earlier-versions-of-visual-cpp.md)<br/>
[Overview of potential upgrade issues (Visual C++)](overview-of-potential-upgrade-issues-visual-cpp.md)<br/>
[Upgrade your code to the Universal CRT](upgrade-your-code-to-the-universal-crt.md)
---
title: "How to Modify Rate Based Throttling Settings | Microsoft Docs"
ms.custom: ""
ms.date: "06/08/2017"
ms.prod: "biztalk-server"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "article"
f1_keywords:
- "Bts10.settings.HostRate"
ms.assetid: a99dfb29-dee6-4a43-8d34-45179d9d0b5e
caps.latest.revision: 14
author: "MandiOhlinger"
ms.author: "mandia"
manager: "anneta"
---
# How to Modify Rate Based Throttling Settings
Rate based throttling in [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] is applied to host instances that contain orchestrations or send adapters that receive and deliver or process messages that have been published to the MessageBox. Using the [!INCLUDE[btsSettingsDashboard](../includes/btssettingsdashboard-md.md)], you can modify the rate based throttling configuration settings of a given host, across a BizTalk group. These settings apply to all host instances assigned to the given host. This topic provides the step-by-step procedure to modify these settings.
The rate based throttling condition can be triggered under the following conditions:
- The amount of memory, the number of threads, or the number of database connections used by the host instance exceeds the throttling thresholds.
- The message delivery incoming rate for the host instance exceeds the message delivery outgoing rate * the specified rate overdrive factor (percent) value.
- The number of messages being processed concurrently by the host instance exceeds the in-process messages per CPU * the number of CPUs available on the box.
## Prerequisites
To perform this operation, you must be logged on as a member of the BizTalk Server Administrators group.
### To modify the rate based throttling settings of a host
1. In the **BizTalk Server Administration Console**, expand [!INCLUDE[btsBizTalkServerAdminConsoleui](../includes/btsbiztalkserveradminconsoleui-md.md)], right-click **BizTalk Group**, and then click **Settings**.
2. In the **BizTalk Settings Dashboard** dialog box, on the **Hosts** tab, click the **Rate Based Throttling** tab.
3. Do the following and click **Apply** to apply the modifications and proceed to another tab. Else, click **OK** to apply the modifications and exit the Settings Dashboard.
| Use this | To do this | Boundary values | Default value | Upgrade logic |
|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|---------------|---------------|
| **Host** | From the drop-down list, select the host representing the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] runtime instances. | - | - | - |
**Publishing**
| Use this | To do this | Boundary values | Default value | Upgrade logic |
|----------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|---------------|---------------------------------------------------------------------------------------------------|
| **Minimum number of samples** | Specify the minimum number of messages [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] will sample for the **Sampling window duration** before considering rate-based throttling.<br /><br /> If the actual number of samples in a sampling window fall below this value, then the samples are discarded and throttling is not applied. This value should be consistent with a rate at which messages can be published under a medium load. For example, if your system is expected to handle 1,000 documents per second under a medium load, then this parameter should be set to 1,000 \* Sample window duration in seconds (or more precisely, 1 \* **Sampling window duration** (seconds)). If the value is set too low, then the system may experience a throttling condition under low load. If the value is set too high, then there may not be enough samples for this technique to be effective. | 1 – Maximum value of type Integer | 100 | - |
| **Sampling window duration** | Specify the time-window (measured in seconds), which is used to calculate the publishing rate based on the samples collected. The duration should be increased if the latency required for publishing a single message is high. | 1 – Maximum value of type Integer | 15000 | - |
| **Rate overdrive factor** | Specify the percent to control how much higher you allow the request rate to be than the completion rate before a throttling condition occurs.<br /><br /> For example, if messages are being published at a rate of 200 per second and this parameter is set to 125, then the system allows the publication of up to 250 messages per second (125% \* 200 = 250) before applying throttling. Specifying too small a value for this parameter can cause the system to throttle more aggressively and could lead to over-throttling. Specifying too large a value for this parameter can cause under throttling and prevent the throttling mechanism from recognizing a legitimate throttling condition. | 1 – Maximum value of type Integer | 125 | - |
| **Maximum throttling delay** | Specify the maximum delay (in milliseconds) [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] imposes on a message instance due to throttling. The actual delay depends on the severity of the throttling condition. | 1 – Maximum value of type Integer | 300000 | - |
| **Throttling override** | Specify if you want to override message publishing throttling. | 0: Do not override<br /><br /> 1: Initiate throttling condition<br /><br /> 2: Do not throttle | 0 | Throttling parameters read from registry should be mapped one-to-one to host instance parameters. |
| **Throttling override severity** | Specify the severity of an inbound throttling condition.<br /><br /> A higher value increases the severity of an inbound throttling condition initiated when **Throttling override** is set to 1. | 1 – 1000 | 100 | Lowest of all host instance values. |
**Delivery**
| Use this | To do this | Boundary values | Default value | Upgrade logic |
|----------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|---------------|---------------------------------------------------------------------------------------------------|
| **Minimum number of samples** | Specify the minimum number of messages BizTalk will sample for the **Sampling window duration** before considering rate-based throttling.<br /><br /> If the actual number of samples in a sampling window falls below this value, then the samples are discarded and throttling is not applied. This value should be consistent with a rate at which messages can be delivered under a medium load. For example, if your system is expected to handle 1,000 documents per second under a medium load, then this parameter should be set to 1,000 \* Sample window duration in seconds (or more precisely, 1 \* **Sample window duration** (seconds) for this scenario).<br /><br /> If the value is set too low, then the system may experience a throttling condition under low load. If the value is set too high, then there may not be enough samples for this technique to be effective. | 1 – Maximum value of type Integer | 100 | - |
| **Sampling window duration** | Specify the time-window (in seconds), which is used to calculate the processing rate based on the samples collected. The duration should be increased if the latency required for processing a single message is high. | 1 – Maximum value of type Integer | 15000 | - |
| **Rate overdrive factor** | Specify the percent to control how much higher you allow the delivery rate to the Orchestration or Messaging engine to be than the completion rate before a throttling condition occurs.<br /><br /> For example, if messages are being processed at a rate of 200 per second and this parameter is set to 125, then the system allows the processing of up to 250 messages per second (125% \* 200 = 250) before applying throttling. Specifying too small a value for this parameter causes the system to throttle more aggressively and could lead to over throttling. Specifying too large a value for this parameter causes under throttling and prevent the throttling mechanism from recognizing a legitimate throttling condition. | 1 – Maximum value of type Integer | 125 | - |
| **Maximum throttling delay** | Specify the maximum delay [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] imposes on a message instance due to throttling. The actual delay depends on the severity of the throttling condition. | 1 – Maximum value of type Integer | 300000 | - |
| **Throttling override** | Specify if you want to override message delivery throttling. | 0: Do not override<br /><br /> 1: Initiate throttling condition<br /><br /> 2: Do not throttle | 0 | Throttling parameters read from registry should be mapped one-to-one to host instance parameters. |
| **Throttling override severity** | Specify the severity of the outbound throttling condition.<br /><br /> A higher value increases the severity of an outbound throttling condition initiated when **Throttling override** is set to 1. | 1 – 1000 | 100 | Lowest of all host instance values. |
> [!NOTE]
> To restore the default settings, click **Restore Defaults**.
## See Also
[How to Modify Host Settings](../core/how-to-modify-host-settings.md)
# CMS integration and publishing
If this project is meant to live within the [Star Tribune CMS](https://cms.clickability.com/cms), overall, this means that the markup and content are stored within the CMS, while the styling and JavaScript are managed externally, probably on S3.
## Setup
To test the content through a local [news-platform](https://github.com/MinneapolisStarTribune/news-platform/), make sure the following is true:
- Ensure that `ASSETS_STATIC_URL` environment variable set to `http://localhost:3000/` for `news-platform`. This is necessary to use the local version of the assets in this project.
- `news-platform` is installed and running.
### News-platform
`news-platform` TODO
### CMS pages
To setup an article to take advantage of this workflow, for each page needed:
1. Create an article.
- Set "Web Page View" (unsure?) to "Yes"
- (coming soon) Set the `Template overide` that is something like `Full page article vXX`.
1. Create a connected LCD
- See below for more about fields, but overall, these should be something like:
- `content`: Main body of content, this is likely the `build/_index-content.html` file that is rendered.
- `styles`: `news/projects/all/generator-test/styles.bundle.css`
- `scripts`: `news/projects/all/generator-test/app.bundle.js`
- `script libraries` or `style libraries`
1. Update `config.json`
### Configuration
Configuration to connect the project for development is managed in `config.json`. It should have at least one `pages` entry. For example:
```
"cms": {
"defaultArticleContentTemplateRewriteClass": "article-lcd-body-content",
"pages": [
{
"id": "index",
"articleId": "222222222",
"lcd": "11111111",
"default": true
},
{
"id": "page-two",
// Shared styles
"styles": "index",
"articleId": "33333333",
"lcd": "4444444",
"rewriteRules": {
"custom-class": "_template-id-to-replace"
}
}
]
}
```
So that we can develop in the `news-platform` environment, we use BrowserSync's rewrite rules to change content as it's served. This allows us to put the content of our local templates into the page served through `news-platform`.
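Concretely, a rewrite rule pairs a class from the local template with a placeholder in the CMS template. Here is a simplified, dependency-free sketch with hypothetical markup (the real matching is done by BrowserSync against the full article page):

```javascript
// Sketch of the rewrite behind `"custom-class": "_template-id-to-replace"`:
// the placeholder inside the matched element of the CMS-served page is swapped
// for locally built template content before the page reaches the browser.
const locallyBuiltTemplate = "<p>content rendered from the local template</p>";
const servedPage = '<div class="custom-class">_template-id-to-replace</div>';
const rewrittenPage = servedPage.replace(
  /<div class="custom-class">[\s\S]*?<\/div>/,
  `<div class="custom-class">${locallyBuiltTemplate}</div>`
);
console.log(rewrittenPage);
// => <div class="custom-class"><p>content rendered from the local template</p></div>
```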
#### Shared assets across pages
By using the `styles` or `js` properties in the configuration for a page, you can specify the styles or scripts that are used in that page. For instance, if you put `index` for all the `styles` in the pages, then that one style file will be used for all pages.
You can also, edit the templates directly if you want.
## Publishing
`gulp cms:info` TODO
`gulp cms:lcd` TODO
<!--
- [ ] Return a dump of the final schema
- [ ] https://github.com/leafac/sqlite-migration/issues/1
- [ ] https://github.com/trevyn/turbosql/blob/2e46e42a78f929cb2492a87e7124ba49d01178ca/turbosql-impl/src/lib.rs#L281
- [ ] One more reason why forward only migrations make sense: alter table is limited in sqlite3
I think the documentation should be more like a fork of the documentation of better-sqlite3 otherwise it’s a prerequisite to read the better-sqlite3 docs and understand what you’re wrapper does. I think the current docs should be more of a footnote. Otherwise I wouldn’t see people taking it seriously as they are quickly trying to evaluate a library and browse the API.
Also the migration stuff is awesome but it should be more transparent how it works. ie the “pragma how it works” section should be inline with the migration docs IMO. Also a few examples of how to check the current migration scheme version would be helpful.
Document the IN operator and how it may blow up the cache (https://github.com/leafac/sqlite/pull/2)
-->
<h1 align="center">@leafac/sqlite</h1>
<h3 align="center"><a href="https://npm.im/better-sqlite3">better-sqlite3</a> with <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals">tagged template literals</a></h3>
<p align="center">
<a href="https://github.com/leafac/sqlite"><img src="https://img.shields.io/badge/Source---" alt="Source"></a>
<a href="https://www.npmjs.com/package/@leafac/sqlite"><img alt="Package" src="https://badge.fury.io/js/%40leafac%2Fsqlite.svg"></a>
<a href="https://github.com/leafac/sqlite/actions"><img src="https://github.com/leafac/sqlite/workflows/.github/workflows/main.yml/badge.svg" alt="Continuous Integration"></a>
</p>
### Videos
[<img src="https://img.youtube.com/vi/3PCpXOPcVlM/0.jpg" width="200" /><br />Demonstration](https://youtu.be/3PCpXOPcVlM)
[<img src="https://img.youtube.com/vi/ORdYNOwpcsY/0.jpg" width="200" /><br />Code Review](https://youtu.be/ORdYNOwpcsY)
### Installation
```console
$ npm install @leafac/sqlite
```
Use @leafac/sqlite with [the es6-string-html Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=Tobermory.es6-string-html) for syntax highlighting on the queries in the tagged template literals.
### Features, Usage, and Examples
@leafac/sqlite is a [thin wrapper (approximately 100 lines of code)](src/index.ts) around better-sqlite3 which adds the following features:
#### Prepared Statements Management
To use better-sqlite3 you must create prepared statements and then call them with parameters, for example:
```typescript
import BetterSqlite3Database from "better-sqlite3";
const betterSqlite3Database = new BetterSqlite3Database(":memory:");
betterSqlite3Database.exec(
`CREATE TABLE "users" ("id" INTEGER PRIMARY KEY AUTOINCREMENT, "name" TEXT);`
);
const statement = betterSqlite3Database.prepare(
`INSERT INTO "users" ("name") VALUES (?)`
);
console.log(statement.run("Leandro Facchinetti")); // => { changes: 1, lastInsertRowid: 1 }
```
The benefit of this approach is that you may reuse the statements, which leads to better performance.
The problem with this approach is that you must manage statements in your application, and running simple queries becomes a two-step process.
@leafac/sqlite brings back the simplicity of issuing queries directly to the database object without losing the performance benefits of reusable prepared statements (see [§ How It Works](#how-it-works)).
#### The `sql` Tagged Template Literal
Queries in @leafac/sqlite must be created with the `sql` tagged template literal; simple untagged strings don’t work. @leafac/sqlite needs the tagged template literal to manage the prepared statements and to guarantee that the parameters are escaped safely (see [§ How It Works](#how-it-works)).
For example:
```typescript
import { Database, sql } from "@leafac/sqlite";
const database = new Database(":memory:");
database.execute(
sql`CREATE TABLE "users" ("id" INTEGER PRIMARY KEY AUTOINCREMENT, "name" TEXT);`
);
console.log(
database.run(
sql`INSERT INTO "users" ("name") VALUES (${"Leandro Facchinetti"})`
)
); // => { changes: 1, lastInsertRowid: 1 }
console.log(database.get<{ name: string }>(sql`SELECT * from "users"`)); // => { id: 1, name: 'Leandro Facchinetti' }
```
You may interpolate raw SQL with the `$${...}` form, for example:
```typescript
sql`SELECT * FROM "users" WHERE "name" = ${"Leandro Facchinetti"} $${sql` AND "age" = ${30}`}`;
```
#### Convenience Methods for Transactions
In better-sqlite3, transactions follow a preparation/execution two-step process similar to the one followed by statements, as described in [§ Prepared Statements Management](#prepared-statements-management), for example:
```typescript
const transaction = database.transaction(() => {
// Doesn’t execute immediately
});
// Execute the transaction
transaction();
```
@leafac/sqlite introduces convenience methods to execute a transaction in one step, for example:
```typescript
database.executeTransaction(() => {
// Executes immediately
});
```
The function passed to the better-sqlite3 `.transaction()` method may have parameters, which will correspond to the arguments passed when executing the transaction. The function passed to the @leafac/sqlite `.executeTransaction()` method must not have any parameters.
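Because the `.executeTransaction()` callback takes no parameters, any values it needs can be captured by closure instead. The snippet below is a plain-TypeScript sketch of that pattern — the `executeTransaction` here is a stand-in for the real method (it does not open a transaction), included only to illustrate the closure-capture idea:

```typescript
// Stand-in for @leafac/sqlite's method: the real one wraps `fn` in a SQLite
// transaction; this sketch only models the "prepare and run immediately" shape.
function executeTransaction<T>(fn: () => T): T {
  return fn();
}

const name = "Leandro Facchinetti"; // captured by the closure below
const result = executeTransaction(() => `inserted ${name}`);
console.log(result); // → inserted Leandro Facchinetti
```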
#### Native TypeScript Support
No need for `npm install --save-dev @types/...`.
#### A Lightweight Migration System
For example:
```typescript
// At an early point in the process of developing an application:
database.migrate(
sql`CREATE TABLE "users" ("id" INTEGER PRIMARY KEY AUTOINCREMENT, "name" TEXT);`
);
// At a later point a new migration is added:
database.migrate(
sql`CREATE TABLE "users" ("id" INTEGER PRIMARY KEY AUTOINCREMENT, "name" TEXT);`,
(database) => {
database.run(
sql`INSERT INTO "users" ("name") VALUES (${"Leandro Facchinetti"})`
);
}
);
```
The `.migrate()` method receives as parameters `` sql`...` `` queries and arbitrary functions. Only the parameters that have not been run before are executed to bring the database up to the most recent version, so you should call `.migrate()` at your application startup. Migrations are run on a transaction, so if one of them fails everything rolls back (if your arbitrary functions have side-effects you’ll have to manage them yourself).
##### No Down Migrations
Most migration systems provide a way to **undo** migrations; something called **down** migrations. `.migrate()` doesn’t provide a down migration mechanism.
I believe that down migrations are more trouble to maintain (they can be a lot of work!) than they’re worth, particularly in small applications. Why? Because down migrations have two main selling points:
1. You may go back and forward with the database schema in development (think of alternating back and forth while working on different feature branches that change the database schema).
2. You may rollback a deployment that goes wrong in production.
But I don’t think these selling points hold up:
1. You may recreate the database from scratch whenever you need in development.
2. You almost never want to run a down migration in production because that would make you lose data.
In case something goes wrong, `.migrate()` requires you to write a new migration that undoes the troublesome previous migration. The only way through is forward!
##### Don’t Change Migrations That Already Run
`.migrate()` doesn’t run migrations that it ran in the past, so if you change an existing migration, it won’t take effect. `.migrate()` has no mechanism to detect and warn about this kind of issue (it can’t, because arbitrary functions don’t lend themselves to this kind of inspection).
### API
The `Database` class is a subclass of the better-sqlite3 database, so all [better-sqlite3 database’s methods](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#class-database) are available in `Database`. If you need to use the traditional two-step workflow of explicitly preparing a statement as mentioned in [§ Prepared Statements Management](#prepared-statements-management), you can do that.
The `Database` class introduces the following new methods:
- `.run(query, options)`, `.get<T>(query, options)`, `.all<T>(query, options)`, and `.iterate<T>(query, options)`: Equivalent to the corresponding methods in [better-sqlite3’s statements](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#runbindparameters---object). The differences are:
1. These methods must be called on the database instead of on a prepared statement.
2. These methods work with queries generated with the `sql` tagged template literal.
3. **Advanced:** These methods accept an optional `options` parameter which should be an object with the `safeIntegers` field to control [the use of BigInt in the result](https://github.com/JoshuaWise/better-sqlite3/blob/v7.1.4/docs/integer.md). This changes the underlying statement until another query with the same statement sets `safeIntegers` to a different value. For example:
```typescript
console.log(
database.get<{ name: string }>(sql`SELECT * from "users"`, {
safeIntegers: true,
})
); // => { id: 1n, name: 'Leandro Facchinetti' }
console.log(database.get<{ name: string }>(sql`SELECT * from "users"`)); // => { id: 1n, name: 'Leandro Facchinetti' }
console.log(
database.get<{ name: string }>(sql`SELECT * from "users"`, {
safeIntegers: false,
})
); // => { id: 1, name: 'Leandro Facchinetti' }
```
- `.execute<T>(query)`: Equivalent to [better-sqlite3’s `.exec()`](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#execstring---this), but adapted to work with the queries generated with the `sql` tagged template literal.
You must not interpolate any parameters into queries passed to `.execute()`; for example, the following throws an error:
```typescript
database.execute(
sql`INSERT INTO "users" ("name") VALUES (${"Leandro Facchinetti"})`
); // => Throws an error
```
- `.executeTransaction<T>(fn)`, `.executeTransactionImmediate<T>(fn)`, and `.executeTransactionExclusive<T>(fn)`: Equivalent to [better-sqlite3’s `.transaction()`, `.transaction().immediate()`, and `.transaction().exclusive()`](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#transactionfunction---function), but execute the transaction immediately (see [§ Convenience Methods for Transactions](#convenience-methods-for-transactions)).
### How It Works
#### Prepared Statements Management & The `sql` Tagged Template Literal
The `sql` tag produces a data structure with the source of the query along with the parameters, for example, the following query:
```javascript
sql`INSERT INTO "users" ("name") VALUES (${"Leandro Facchinetti"})`;
```
becomes the following data structure:
```json
{
"source": "INSERT INTO \"users\" (\"name\") VALUES (?)",
"parameters": ["Leandro Facchinetti"]
}
```
The `Database` keeps a map from query sources to better-sqlite3 prepared statements (a **cache**; a technique called **memoization**). To run a query, `Database` picks up on the data structure produced by the `sql` tag and looks for the query source in the map; if it’s a hit, then `Database` reuses the prepared statement and only binds the new parameters; otherwise `Database` creates the prepared statement, uses it, and stores it for later.
There’s no cache eviction policy in @leafac/sqlite. The prepared statements for every query ever run hang around in memory for as long as the database object is alive (the statements aren’t eligible for garbage collection because they’re in the map). In most cases, that’s fine because there are only a limited number of queries; it’s the parameters that change. If that becomes a problem for you, you may access the cache under the `statements` property and implement your own cache eviction policy.
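The memoization technique described above can be sketched in a few lines of self-contained TypeScript. This is an illustration only, not the actual @leafac/sqlite source — the `sql` tag and `StatementCache` below are hypothetical stand-ins for the real implementations:

```typescript
// The tag joins the literal parts with "?" placeholders; parameters travel
// separately, which is what makes safe escaping and caching possible.
type Query = { source: string; parameters: unknown[] };

function sql(strings: TemplateStringsArray, ...parameters: unknown[]): Query {
  return { source: strings.join("?"), parameters };
}

class StatementCache {
  statements = new Map<string, { source: string; uses: number }>();

  getStatement(source: string) {
    let statement = this.statements.get(source);
    if (statement === undefined) {
      // Cache miss: "prepare" the statement and store it for later reuse.
      statement = { source, uses: 0 };
      this.statements.set(source, statement);
    }
    statement.uses++;
    return statement;
  }
}

const cache = new StatementCache();
const queryA = sql`INSERT INTO "users" ("name") VALUES (${"Leandro"})`;
const queryB = sql`INSERT INTO "users" ("name") VALUES (${"Louie"})`;
cache.getStatement(queryA.source);
cache.getStatement(queryB.source);
// Different parameters, same source → one cached statement, reused twice.
console.log(cache.statements.size); // → 1
```

In the real library the cached values are better-sqlite3 prepared statements rather than plain objects, and `.getStatement(source, options)` is the public hook for reaching them.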
You may also use the low-level `.getStatement(source: string, options: Options)` method to get a hold of the underlying prepared statement in the cache (for example, to use [`.pluck()`](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#plucktogglestate---this), [`.expand()`](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#expandtogglestate---this), [`.raw()`](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#rawtogglestate---this), [`.columns()`](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#columns---array-of-objects), and [`.bind()`](https://github.com/JoshuaWise/better-sqlite3/blob/master/docs/api.md#bindbindparameters---this)—though `.bind()` will probably render the prepared statement unusable by @leafac/sqlite).
#### Migration System
`.migrate()` uses the [`user_version` SQLite PRAGMA](https://www.sqlite.org/pragma.html#pragma_user_version) to store the number of migrations it ran in the past, and consults this number to avoid re-running migrations.
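The stored version counter is what makes re-running `.migrate()` safe. Here is a runnable, self-contained sketch of that logic — `state.userVersion` stands in for SQLite's `user_version` PRAGMA, and the string/function duality mirrors `.migrate()`'s mixed parameters (an illustration of the mechanism, not the real implementation):

```typescript
// Migrations are either SQL strings (logged here instead of executed) or
// arbitrary functions; only those beyond the stored version number run.
type Migration = (() => void) | string;

function migrate(
  state: { userVersion: number; log: string[] },
  ...migrations: Migration[]
): void {
  for (let index = state.userVersion; index < migrations.length; index++) {
    const migration = migrations[index];
    if (typeof migration === "string") state.log.push(`execute: ${migration}`);
    else migration();
  }
  state.userVersion = migrations.length; // advance the stored version
}

const state = { userVersion: 0, log: [] as string[] };
migrate(state, `CREATE TABLE "users" (...);`);
console.log(state.userVersion); // → 1
// Later, a second migration is added; only the new one runs:
migrate(state, `CREATE TABLE "users" (...);`, () => state.log.push("seed users"));
console.log(state.userVersion); // → 2
```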
### Related Projects
- <https://npm.im/@leafac/html>: Use tagged template literals as an HTML template engine.
### Prior Art
- <https://npm.im/better-sqlite3>: The basis for @leafac/sqlite. The rest of this document explains how they’re different.
- <https://npm.im/sql-template-strings>: This was the inspiration for using tagged template literals in this way. Unfortunately, sql-template-strings is incompatible with better-sqlite3, thus @leafac/sqlite.
- <https://npm.im/html-template-tag>: I love (and stole) the idea of using `$${...}` to mark safe interpolation from html-template-tag.
- <https://npm.im/package/pg-lit>, <https://npm.im/package/slonik>: These packages also feature tagged template literals for SQL, but they’re for [PostgreSQL](https://www.postgresql.org/) instead of SQLite.
- <https://npm.im/sqlite>, and <https://npm.im/better-sqlite3-helper>: These packages include lightweight migration systems. `.migrate()` is even more lightweight: It doesn’t support **down** migrations and it requires the migrations to be passed as an array, as opposed to, for example, being stored in SQL files. (But you can come up with this array in any way you want, including, for example, reading from a bunch of SQL files.)
- <https://github.com/trevyn/turbosql>: After having published `.migrate()` the author of Turbosql [reached out](https://github.com/leafac/sqlite-migration/issues/1) to say that they independently arrived at a similar design, but in the Rust ecosystem instead of Node.js. It’s great to have company!
### Changelog
#### 2.0.0
- [ESM-only](https://gist.github.com/sindresorhus/a39789f98801d908bbc7ff3ecc99d99c).
- Add support for the `IN` operator (https://github.com/leafac/sqlite/pull/2, thanks @mfbx9da4).
---
title: 'Tutorial: User provisioning for LinkedIn Elevate – Azure AD'
description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to LinkedIn Elevate.
services: active-directory
documentationcenter: ''
author: ArvindHarinder1
manager: CelesteDG
ms.assetid: d4ca2365-6729-48f7-bb7f-c0f5ffe740a3
ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 03/28/2019
ms.author: arvinh
ms.collection: M365-identity-device-management
ms.openlocfilehash: fa0a26eaeac431ed2c78c5bd938bbbe7dff14e0e
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/27/2020
ms.locfileid: "77057409"
---
# <a name="tutorial-configure-linkedin-elevate-for-automatic-user-provisioning"></a>Tutorial: Configure LinkedIn Elevate for automatic user provisioning
The objective of this tutorial is to show the steps you need to perform in LinkedIn Elevate and Azure AD to automatically provision and de-provision user accounts from Azure AD to LinkedIn Elevate.
## <a name="prerequisites"></a>Prerequisites
The scenario outlined in this tutorial assumes that you already have the following items:
* An Azure Active Directory tenant
* A LinkedIn Elevate tenant
* An administrator account on LinkedIn Elevate with access to the LinkedIn Account Center
> [!NOTE]
> Azure Active Directory integrates with LinkedIn Elevate using the [SCIM](http://www.simplecloud.info/) protocol.
## <a name="assigning-users-to-linkedin-elevate"></a>Assigning users to LinkedIn Elevate
Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. In the context of automatic user account provisioning, only the users and groups that have been "assigned" to an application in Azure AD are synchronized.
Before configuring and enabling the provisioning service, you need to decide which users or groups in Azure AD represent the users who need access to LinkedIn Elevate. Once decided, you can assign these users to LinkedIn Elevate by following the instructions here:
[Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md)
### <a name="important-tips-for-assigning-users-to-linkedin-elevate"></a>Important tips for assigning users to LinkedIn Elevate
* It is recommended that a single Azure AD user be assigned to LinkedIn Elevate to test the provisioning configuration. Additional users and/or groups may be assigned later.
* When assigning a user to LinkedIn Elevate, you must select the **User** role in the assignment dialog. The "Default Access" role does not work for provisioning.
## <a name="configuring-user-provisioning-to-linkedin-elevate"></a>Configuring user provisioning to LinkedIn Elevate
This section guides you through connecting your Azure AD to LinkedIn Elevate's SCIM user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in LinkedIn Elevate based on user and group assignment in Azure AD.
**Tip:** You may also choose to enable SAML-based single sign-on for LinkedIn Elevate, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
### <a name="to-configure-automatic-user-account-provisioning-to-linkedin-elevate-in-azure-ad"></a>To configure automatic user account provisioning to LinkedIn Elevate in Azure AD:
The first step is to retrieve your LinkedIn access token. If you are an Enterprise administrator, you can provision an access token yourself. In your Account Center, go to **Settings > Global Settings** and open the **SCIM Setup** panel.
> [!NOTE]
> If you are accessing the Account Center directly rather than through a link, you can reach it using the following steps.
1. Sign in to the Account Center.
2. Select **Settings > Admin**.
3. Click **Advanced Integrations** on the left sidebar. You are directed to the Account Center.
4. Click **+ Add new SCIM configuration** and follow the procedure, filling in each field.
> [!NOTE]
> When auto-assign licenses is not enabled, it means that only user data is synced.

> [!NOTE]
> When auto-license assignment is enabled, you need to note the application instance and license type. Licenses are assigned on a first come, first served basis until all the licenses are taken.

5. Click **Generate token**. You should see your access token displayed under the **Access token** field.
6. Save your access token to your clipboard or computer before leaving the page.
7. Next, sign in to the [Azure portal](https://portal.azure.com), and browse to the **Azure Active Directory > Enterprise Apps > All applications** section.
8. If you have already configured LinkedIn Elevate for single sign-on, search for your instance of LinkedIn Elevate using the search field. Otherwise, select **Add** and search for **LinkedIn Elevate** in the application gallery. Select LinkedIn Elevate from the search results, and add it to your list of applications.
9. Select your instance of LinkedIn Elevate, then select the **Provisioning** tab.
10. Set the **Provisioning Mode** to **Automatic**.

11. Fill in the following fields under **Admin Credentials**:
* In the **Tenant URL** field, enter `https://api.linkedin.com`.
* In the **Secret Token** field, enter the access token you generated in step 1, and click **Test Connection**.
* You should see a success notification on the upper-right side of the portal.
12. In the **Notification Email** field, enter the email address of a person or group who should receive provisioning error notifications, and check the checkbox below it.
13. Click **Save**.
14. In the **Attribute Mappings** section, review the user and group attributes that will be synchronized from Azure AD to LinkedIn Elevate. Note that the attributes selected as **Matching** properties will be used to match the user accounts and groups in LinkedIn Elevate for update operations. Select the Save button to commit any changes.

15. To enable the Azure AD provisioning service for LinkedIn Elevate, change the **Provisioning Status** to **On** in the **Settings** section.
16. Click **Save**.
This starts the initial synchronization of any users and/or groups assigned to LinkedIn Elevate in the Users and Groups section. Note that the initial synchronization takes longer than subsequent synchronizations, which occur approximately every 40 minutes while the service is running. You can use the **Synchronization Details** section to monitor progress and follow links to the provisioning activity logs, which describe all actions performed by the provisioning service on LinkedIn Elevate.
For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
## <a name="additional-resources"></a>Additional resources
* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.Shift
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.KeyValue
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.KeyData
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.Alt
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.#ctor(System.Windows.Forms.Keys)
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.KeyCode
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.Modifiers
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.IsInputKey
---
---
uid: System.Windows.Forms.PreviewKeyDownEventArgs.Control
---
# GoogleApi.IAMCredentials
IAM Service Account Credentials API client library.
Creates short-lived credentials for impersonating IAM service accounts. To enable this API, you must enable the IAM API (iam.googleapis.com).
## Installation
Install this package from [Hex](https://hex.pm) by adding
`google_api_iam_credentials` to your list of dependencies in `mix.exs`:
```elixir
def deps do
[{:google_api_iam_credentials, "~> 0.12"}]
end
```
## For more information
Product documentation is available at [https://cloud.google.com/iam/docs/creating-short-lived-service-account-credentials](https://cloud.google.com/iam/docs/creating-short-lived-service-account-credentials).
Library reference documentation is published on Hexdocs at
[https://hexdocs.pm/google_api_iam_credentials](https://hexdocs.pm/google_api_iam_credentials).
---
title:
description:
extends: _layouts.documentation
section: content
---
# Addons
In this guide, you can find all of the available plugins for your website, marketplace, or store.
* [Blog](/docs/plugins-create-a-blog)
* [Forum](/docs/plugins-forum-section)
- [View/edit forum topics](/docs/plugins-view-edit-forum-topics)
* [FAQ](/docs/plugins-faq-system)
* [Messaging](/docs/plugins-message-system)
* [Reviews](/docs/addons-review)
* [Subscription/Membership](/docs/plugins-membership-plans-to-post)
* [Social login](/docs/plugins-login-using-social-auth)
* [Black list](/docs/plugins-activate-black-list)
* [Auto locate](/docs/plugins-auto-locate-users)
* [Adblock detector](/docs/addons-adblock-detector)
* [Add to home screen](/docs/addons-add-to-homescreen)
* [eWallet](/docs/e-wallet)
8ad6c8d70c679a76179f523517a155d0abe4cd65 | 7,924 | md | Markdown | articles/operations-management-suite/operations-management-suite-monitoring-alerts.md | OpenLocalizationTestOrg/azure-docs-pr15_it-IT | a5b6eb257721d6a02db53be2d3b2bee1d9e5aa1c | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/operations-management-suite/operations-management-suite-monitoring-alerts.md | OpenLocalizationTestOrg/azure-docs-pr15_it-IT | a5b6eb257721d6a02db53be2d3b2bee1d9e5aa1c | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/operations-management-suite/operations-management-suite-monitoring-alerts.md | OpenLocalizationTestOrg/azure-docs-pr15_it-IT | a5b6eb257721d6a02db53be2d3b2bee1d9e5aa1c | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
	pageTitle="Managing alerts with Microsoft monitoring products | Microsoft Azure"
	description="An alert indicates an issue that requires attention from an administrator. This article describes the differences in how alerts are created and managed in System Center Operations Manager (SCOM) and Log Analytics, and best practices for using the two products in a hybrid alert management strategy."
services="operations-management-suite"
documentationCenter=""
authors="bwren"
manager="jwhit"
editor="tysonn" />
<tags
ms.service="operations-management-suite"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="infrastructure-services"
ms.date="09/06/2016"
ms.author="bwren" />
# <a name="managing-alerts-with-microsoft-monitoring"></a>Managing alerts with Microsoft monitoring
An alert indicates an issue that requires attention from an administrator. There are distinct differences between System Center Operations Manager (SCOM) and Log Analytics in the Operations Management Suite (OMS) in terms of how alerts are created, how they are managed and analyzed, and how you are notified that a critical issue has been detected.
## <a name="alerts-in-operations-manager"></a>Alerts in Operations Manager
Alerts in SCOM are generated by monitors, to indicate a specific problem, or by individual rules. A monitor may generate an alert when it enters an error state, while a rule may generate an alert to indicate a critical issue not directly related to the state of a managed object. Management packs include a set of workflows that create alerts for the application or service they manage. Part of the process of configuring a new management pack is tuning it to make sure you don't receive excessive alerts for issues you don't consider critical.

SCOM provides complete alert management: each alert has a state that administrators can modify as they work to resolve the issue. When the issue has been resolved, the administrator sets the alert to Closed, at which point it no longer appears in views that display active alerts. Alerts generated by monitors can be automatically resolved when the monitor returns to a healthy state.

## <a name="alerts-in-log-analytics"></a>Alerts in Log Analytics
An alert in Log Analytics is created by a log search that is automatically run at regular intervals. You can create an alert rule from a log search. If the query returns results that match the specified criteria, an alert is created. This may be a specific query that creates an alert if a particular event is detected, or a more general query that looks for any error events related to a particular application.
Log Analytics alerts are written to the OMS repository as events and can be retrieved with a log search. They don't have a state the way SCOM alerts do, so you can't indicate whether the issue has been resolved.

When SCOM is used as a data source for Log Analytics, SCOM alerts are written to the OMS repository when they are created and modified.

The [Alert Management solution](http://technet.microsoft.com/library/mt484092.aspx) provides a summary of active alerts and several common queries for retrieving different sets of alerts. This lets you analyze alerts more efficiently than a report in SCOM. You can drill down from the summaries into detailed data and create ad hoc queries to retrieve different groups of alerts.

## <a name="notifications"></a>Notifications
SCOM notifications send an email or a text in response to alerts that meet specific criteria. You can create different notification subscriptions that notify different people depending on criteria such as the object being monitored, the severity of the alert, the type of issue detected, or the time of day.
A few subscriptions can be used to implement a complete notification strategy across a large number of management packs.

Log Analytics can send an email notifying you that an alert was created by setting an email notification action on each [alert rule](http://technet.microsoft.com/library/mt614775.aspx). It doesn't have SCOM's ability to subscribe to multiple alerts with a single rule. In addition, you must create the alert rules yourself, since none come preconfigured with OMS.

You can't completely manage SCOM alerts in Log Analytics, since they can only be modified in the Operations Console. Log Analytics can, however, be used as part of an alert management process by providing analysis tools that SCOM alone doesn't offer.
## <a name="alert-remediation"></a>Alert remediation
[Remediation](http://technet.microsoft.com/library/mt614775.aspx) refers to an attempt to automatically correct the problem identified by an alert.
SCOM can run diagnostics and recoveries in response to a monitor entering a particular state. This occurs at the same time the monitor creates the alert. Diagnostics and recoveries are typically implemented as scripts that run with the agent. A diagnostic attempts to collect further information about the detected problem, while a recovery attempts to resolve it.
Log Analytics can start an [Azure Automation runbook](https://azure.microsoft.com/documentation/services/automation/) or call a webhook in response to a Log Analytics alert. Runbooks can contain complex logic implemented in PowerShell. The script runs in Azure and can access any Azure resources or external resources reachable from the cloud. Azure Automation can run runbooks on a server in your local data center, but this capability is not currently available when starting runbooks in response to Log Analytics alerts.
Both recoveries in SCOM and runbooks in OMS can include PowerShell scripts, but recoveries are harder to create and maintain because they must be contained within a management pack. Runbooks are stored in Azure Automation, which provides features for creating, testing, and managing them.
If you use SCOM as a data source for Log Analytics, you can create a Log Analytics alert with a log search that retrieves the SCOM alerts saved to the OMS repository. This makes it possible to run an Azure Automation runbook in response to a SCOM alert. Of course, because the runbook runs in Azure, this would not be a valid strategy for remediating on-premises problems.
## <a name="next-steps"></a>Next steps
- Learn the details of [alerts in System Center Operations Manager (SCOM)](https://technet.microsoft.com/library/hh212913.aspx).
8ad77f75965bd05c0a096ad711fdcfdff6de512a | 1,765 | md | Markdown | 2019-events.md | open-heterogeneous-computing-framework/conference | a930846a6a72f84a7eb7bf0f8880401cc7e7d946 | [
"Apache-2.0"
] | 5 | 2019-05-20T08:18:07.000Z | 2019-10-19T14:55:01.000Z | 2019-events.md | open-heterogeneous-computing-framework/conference | a930846a6a72f84a7eb7bf0f8880401cc7e7d946 | [
"Apache-2.0"
] | 7 | 2019-05-10T08:33:57.000Z | 2019-06-18T01:15:48.000Z | 2019-events.md | open-heterogeneous-computing-framework/conference | a930846a6a72f84a7eb7bf0f8880401cc7e7d946 | [
"Apache-2.0"
] | 2 | 2019-05-11T14:10:37.000Z | 2021-10-04T12:32:10.000Z | ### KubeCon/OSS China Co-located Event
#### Event: Open Heterogeneous Computing Framework Introduction
#### Time: 2019.06.24, from 8:00am - 16:00pm
#### Schedule: [sched link](https://kccncosschn19eng.sched.com/event/Nv2S/open-heterogeneous-computing-framework-introduction-hosted-by-huawei-additional-registration-fee-required?iframe=yes&w=100%&sidebar=yes&bg=no#)
#### Registration: need to first register for the main event.
#### Topic Proposal: Submit an [issue](https://github.com/open-heterogeneous-computing-framework/conference/issues/new) based upon the [CFP template](./cfp-template.md) by 2019.05.18
#### Agenda
* 9:00 - 10:00 Introduction - OHCF Overview, Zhipeng Huang (Huawei)
* 10:00 - 10:30 Strategy For NFV Acceleration from China Mobile, Sheng Wang (China Mobile)
* 10:30 - 11:00 AI Platform Practices and learnings in China Mobile, Yong Liu (China Mobile)
* 11:00 - 11:30 The application of large scale GPU virtualization on iFlyTech cloud, Ruichen Xu (iFlyTech) - [#7](https://github.com/open-heterogeneous-computing-framework/conference/issues/7)
* 11:30 - 12:00 TLS offloading solution using heterogeneous HW management, Xinran Wang (Intel) - [#3](https://github.com/open-heterogeneous-computing-framework/conference/issues/3)
* 12:00 - 13:30 Lunch break
* 13:30 - 15:00 OHCF Unconference - Topics Introduction (OpenStack, Kubernetes, RISC-V, Linuxboot, OCP, ...). Optional: Open Source: Accelerating Innovation in the AI Market, Ibrahim Haddad (LFAI Foundation) - [#6](https://github.com/open-heterogeneous-computing-framework/conference/issues/6)
* 15:00 - 16:00 OHCF Unconference - Discussion and Summary (All Attendee)
#### [Slides](https://github.com/open-heterogeneous-computing-framework/conference/tree/master/kubecon-shanghai-2019)
| 98.055556 | 293 | 0.763173 | yue_Hant | 0.288978 |
8ad7a6ae328c2f3811d0112bf26685f4bc0281a7 | 690 | md | Markdown | about.md | FoxyNPF/foxynpf.github.io | 591d5ece2898ad72b5f2fff23a9a140172fb3a70 | [
"MIT"
] | null | null | null | about.md | FoxyNPF/foxynpf.github.io | 591d5ece2898ad72b5f2fff23a9a140172fb3a70 | [
"MIT"
] | null | null | null | about.md | FoxyNPF/foxynpf.github.io | 591d5ece2898ad72b5f2fff23a9a140172fb3a70 | [
"MIT"
] | 1 | 2020-08-15T12:18:21.000Z | 2020-08-15T12:18:21.000Z | ---
layout: page
title: About
permalink: /about/
---
My name is Neil Fox and i've worked in Cyber Security for around 5 years for two major UK Telcos.
The majority of my time working in cyber has been in 3rd line/CERT type roles as an escalation point for high priority incidents, this has
also included a lot of threat hunting and in my current role a lot of focus on malware analysis.
I love what I do and think there is real value in sharing knowledge amongst the cyber industry. This is why i decided to setup this site,
I will be sharing write ups of malware analysis I have performed along with anything that i think is particluarly cool/useful relating to
incident response.
Neil
| 43.125 | 138 | 0.785507 | eng_Latn | 0.999951 |
8ad7b92296ec48bfe25dc91f8280585559042cd7 | 2,823 | md | Markdown | WindowsServerDocs/virtualization/hyper-v/best-practices-analyzer/Ensure-sufficient-physical-disk-space-is-available-when-virtual-machines-use-differencing.md | ilchiodi/windowsserverdocs.it-it | c9a108584b6430aed06a10c888377ec29480fd01 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/virtualization/hyper-v/best-practices-analyzer/Ensure-sufficient-physical-disk-space-is-available-when-virtual-machines-use-differencing.md | ilchiodi/windowsserverdocs.it-it | c9a108584b6430aed06a10c888377ec29480fd01 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/virtualization/hyper-v/best-practices-analyzer/Ensure-sufficient-physical-disk-space-is-available-when-virtual-machines-use-differencing.md | ilchiodi/windowsserverdocs.it-it | c9a108584b6430aed06a10c888377ec29480fd01 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Ensure sufficient physical disk space is available when virtual machines use differencing virtual hard disks
description: Online version of the text for this Best Practices Analyzer rule.
ms.prod: windows-server
manager: dongill
ms.technology: compute-hyper-v
ms.author: kathydav
ms.topic: article
ms.assetid: 71f99aab-f994-4022-9da0-d661965b95ac
author: kbdazure
ms.date: 8/16/2016
ms.openlocfilehash: 52e09cfb8695389d2c37def2c39ff43b4091fb76
ms.sourcegitcommit: b00d7c8968c4adc8f699dbee694afe6ed36bc9de
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/08/2020
ms.locfileid: "80861954"
---
# <a name="ensure-sufficient-physical-disk-space-is-available-when-virtual-machines-use-differencing-virtual-hard-disks"></a>Ensure sufficient physical disk space is available when virtual machines use differencing virtual hard disks
>Applies to: Windows Server 2016
For more information about best practices and scans, see [Run Best Practices Analyzer Scans and Manage Scan Results](https://go.microsoft.com/fwlink/p/?LinkID=223177).
|Property|Details|
|-|-|
|**Operating system**|Windows Server 2016|
|**Product/Feature**|Hyper-V|
|**Severity**|Warning|
|**Category**|Configuration|
In the following sections, italics indicates UI text that appears in the Best Practices Analyzer tool for this issue.
## <a name="issue"></a>Issue
*One or more virtual machines use differencing virtual hard disks.*
## <a name="impact"></a>Impact
*Differencing virtual hard disks require free space on the hosting volume so that space can be allocated when writes to the virtual hard disks occur. If the free space is exhausted, any virtual machine that relies on the physical storage might be affected. This affects the following virtual machines:*
\<list of virtual machines>
## <a name="resolution"></a>Resolution
*Monitor the free disk space to ensure that sufficient space is available for virtual hard disk expansion. Consider merging the differencing virtual hard disks into the parent. In Hyper-V Manager, inspect the differencing disk to determine the parent virtual hard disk. If you merge a differencing disk into a parent disk that is shared by other differencing disks, that action will corrupt the relationship between the other differencing disks and the parent disk, making them unusable. After you have verified that the parent virtual hard disk is not shared, you can use the Edit Disk wizard to merge the differencing disk into the parent virtual hard disk.*
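As a hedged sketch (not part of the BPA rule text; the paths are placeholders and the cmdlets assume the Hyper-V PowerShell module is installed), the parent of a differencing disk can also be inspected and merged from PowerShell:

```powershell
# Inspect the differencing disk and identify its parent (example path)
Get-VHD -Path 'D:\VHDs\child.vhdx' | Select-Object VhdType, ParentPath

# After verifying that no other differencing disk shares the same parent,
# merge the child's changes into the parent
Merge-VHD -Path 'D:\VHDs\child.vhdx' -DestinationPath 'D:\VHDs\parent.vhdx'
```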
| 60.06383 | 721 | 0.800567 | ita_Latn | 0.996716 |
8ad88860f4cf575980ff20809bd24b40a8586866 | 4,753 | md | Markdown | README.md | fransao/scalaPlay | 692af1edcca624acaa1e31079b84cec8378770c9 | [
"Apache-2.0"
] | 27 | 2015-02-21T14:20:00.000Z | 2021-05-25T00:10:14.000Z | README.md | fransao/scalaPlay | 692af1edcca624acaa1e31079b84cec8378770c9 | [
"Apache-2.0"
] | 2 | 2017-06-06T18:37:53.000Z | 2021-04-04T20:04:36.000Z | README.md | fransao/scalaPlay | 692af1edcca624acaa1e31079b84cec8378770c9 | [
"Apache-2.0"
] | 30 | 2015-01-14T11:27:52.000Z | 2022-01-15T20:35:59.000Z | # Essential Play Code
This repository contains exercises and solutions for
[Underscore's Essential Play](http://underscore.io/training/courses/essential-play/)
book and training course.
If you want to discuss the content or exercises with the authors,
join us in our chat room on Gitter:
[](https://gitter.im/underscoreio/scala?utm_source=essential-play-readme&utm_medium=badge&utm_campaign=essential-play)
## Using the Source Code
This repository contains two branches: one for `exercises` and one for `solutions`.
The directory structure is the same in each branch,
with each exercise stored as a standalone Play project in its own directory.
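For example, assuming the branch names above, switching between the two branches from inside your clone looks like this:

```shell
# Run inside the cloned repository
git checkout exercises   # work through the exercises
git checkout solutions   # compare against the finished solutions
```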
You will need to have Git and Java and an internet connection to run the exercises.
All other dependendencies are either included with the projects
or downloaded on demand during compilation.
See below for quick getting started instructions.
For more detailed instructions see Chapter 1 of the book.
### Note on Using Scala IDE
Older versions of this repo shipped with the `sbteclipse` plugin
to make it easier to set up Scala IDE.
Current best practices recommend installing `sbteclipse` as a global plugin.
I have removed it from this repo because it was causing conflicts
for people who were doing the right thing.
See the `sbteclipse` [documentation](https://github.com/typesafehub/sbteclipse)
for instructions on installing it as a global plugin.
### Getting Started on Linux or OS X
Complete the following steps outlined in Chapter 1 in the section entitled
"Setting up SBT for This Book":
1. Clone this repository to a directory on your hard drive,
e.g. `C:\essential-play-code`:
~~~
bash$ git clone https://github.com/underscoreio/essential-play-code.git
~~~
2. Change to the directory for the "hello world" exercise:
~~~
bash$ cd essential-play-code/chapter1-hello
~~~
3. Run the `sbt.sh` script.
You may have to wait while SBT downloads various dependencies:
~~~
bash$ ./sbt.sh
# Lots of output here...
# The first run will take a while...
[app] $
~~~
4. Type `run` at the SBT prompt.
   You may have to wait while SBT downloads various dependencies:
~~~
[app] $ run
# Lots of output here...
# The first run will take a while...
--- (Running the application from SBT, auto-reloading is enabled) ---
[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
(Server started, use Ctrl+D to stop and go back to the console...)
~~~
5. Open [http://localhost:9000](http://localhost:9000) in a web browser.
SBT will compile the application, which may take a while.
After this you should see the message `"Hello World!"` in your browser.
### Getting Started on Windows
You will need to have installed Git and Java (we recommend Oracle's Java 7 SDK).
Complete the following steps outlined in Chapter 1 in the section entitled
"Setting up SBT for This Book":
1. Clone this repository to a directory on your hard drive, e.g. `C:\essential-play-code`:
~~~
C:\> git clone https://github.com/underscoreio/essential-play-code.git ↩
C:\essential-play-code
~~~
2. Change to the directory for the "hello world" exercise:
~~~
C:\> cd\essential-play-code\chapter1-hello
~~~
3. Run the `sbt.bat` script.
You may have to wait while SBT downloads various dependencies:
~~~
C:\essential-play-code> sbt
# Lots of output here...
# The first run will take a while...
[app] $
~~~
4. Type `run` at the SBT prompt.
   You may have to wait while SBT downloads various dependencies:
~~~
[app] $ run
# Lots of output here...
# The first run will take a while...
--- (Running the application from SBT, auto-reloading is enabled) ---
[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
(Server started, use Ctrl+D to stop and go back to the console...)
~~~
5. Open [http://localhost:9000](http://localhost:9000) in a web browser.
SBT will compile the application, which may take a while.
After this you should see the message `"Hello World!"` in your browser.
| 30.273885 | 169 | 0.698717 | eng_Latn | 0.99435 |
8ad89263c5cc976f3662c27788118bbe55ed6331 | 5,105 | md | Markdown | articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md | mahakjain07031985/azure-docs | 1f20c9156a5b21754e1a9fe41178949c7b1132ad | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md | mahakjain07031985/azure-docs | 1f20c9156a5b21754e1a9fe41178949c7b1132ad | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-03-24T23:22:53.000Z | 2020-03-24T23:23:02.000Z | articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md | mahakjain07031985/azure-docs | 1f20c9156a5b21754e1a9fe41178949c7b1132ad | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Tutorial: Share anchors across sessions and devices'
description: In this tutorial, you learn how to share Azure Spatial Anchor identifiers between Android/iOS devices in Unity with a back-end service.
author: ramonarguelles
manager: vriveras
services: azure-spatial-anchors
ms.author: rgarcia
ms.date: 07/31/2020
ms.topic: tutorial
ms.service: azure-spatial-anchors
---
# Tutorial: Share Azure Spatial Anchors across sessions and devices
In this tutorial, you'll learn how to use [Azure Spatial Anchors](../overview.md) to create anchors during one session and then locate them, on the same device or on a different one. These same anchors could also be located by multiple devices in the same place and at the same time.

Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an app that can be deployed to two or more devices. Azure Spatial Anchors created by one instance can be shared to the others.
You'll learn how to:
> [!div class="checklist"]
> * Deploy an ASP.NET Core Web App in Azure that can be used to share anchors, storing them in memory for a duration of time.
> * Configure the AzureSpatialAnchorsLocalSharedDemo scene within the Unity Sample from our Quickstarts to take advantage of the Sharing Anchors Web App.
> * Deploy and run to one or more devices.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
[!INCLUDE [Share Anchors Sample Prerequisites](../../../includes/spatial-anchors-share-sample-prereqs.md)]
It's worth noting that, although you'll be using Unity and an ASP.NET Core web app in this tutorial, this is only one example of how to share Azure Spatial Anchor identifiers across devices. You can use other languages and back-end technologies to achieve the same goal.
[!INCLUDE [Create Spatial Anchors resource](../../../includes/spatial-anchors-get-started-create-resource.md)]
## Download the sample project
[!INCLUDE [Clone Sample Repo](../../../includes/spatial-anchors-clone-sample-repository.md)]
## Deploy your Sharing Anchors Service
## [Visual Studio](#tab/VS)
Open Visual Studio, and open the project at the `Sharing\SharingServiceSample` folder.
[!INCLUDE [Publish Azure](../../../includes/spatial-anchors-publish-azure.md)]
## [Visual Studio Code](#tab/VSC)
You will need to create a resource group and an App Service Plan before you deploy the service in VS Code.
### Sign in to Azure
Navigate to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and sign in to your Azure subscription.
### Create a resource group
[!INCLUDE [resource group intro text](../../../includes/resource-group.md)]
Next to **Resource Group**, select **New**.
Name the resource group **myResourceGroup** and select **OK**.
### Create an App Service plan
[!INCLUDE [app-service-plan](../../../includes/app-service-plan.md)]
Next to **Hosting Plan**, select **New**.
In the **Configure Hosting Plan** dialog box, use these settings:
| Setting | Suggested value | Description |
|-|-|-|
|App Service Plan| MySharingServicePlan | Name of the App Service plan. |
| Location | West US | The datacenter where the web app is hosted. |
| Size | Free | The [pricing tier](https://azure.microsoft.com/pricing/details/app-service/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) that determines hosting features. |
Select **OK**.
Open Visual Studio Code, and open the project at the `Sharing\SharingServiceSample` folder. Follow <a href="https://docs.microsoft.com/aspnet/core/tutorials/publish-to-azure-webapp-using-vscode?view=aspnetcore-2.2#open-it-with-visual-studio-code" target="_blank">this tutorial</a> to deploy the sharing service through Visual Studio Code, starting from the 'Open it with Visual Studio Code' section. Do not create another ASP.NET project as that tutorial describes, because you already have the project that needs to be deployed and published: the SharingServiceSample.
---
## Deploy the sample app
[!INCLUDE [Run Share Anchors Sample](../../../includes/spatial-anchors-run-share-sample.md)]
[!INCLUDE [Clean-up section](../../../includes/clean-up-section-portal.md)]
## Next steps
In this tutorial, you've deployed an ASP.NET Core Web App in Azure, and then configured and deployed a Unity App. You created Spatial Anchors with the app, and shared them with other devices by using your ASP.NET Core Web App.
You can improve your ASP.NET Core Web App so that it uses Azure Cosmos DB to persist the storage of your shared Spatial Anchor identifiers. Adding Azure Cosmos DB support will allow your ASP.NET Core Web App to create an anchor today, and come back days later to be able to locate it again, by using the anchor identifier stored in your web app.
> [!div class="nextstepaction"]
> [Use Azure Cosmos DB to Store Anchors](./tutorial-use-cosmos-db-to-store-anchors.md)
| 51.05 | 593 | 0.760235 | eng_Latn | 0.983104 |
8ad8f378757c1961174c3994471ed84f2607ecb7 | 6,318 | md | Markdown | src/api/instance-methods.md | qixiaobro/docs-next-zh-cn | 92dabf9be563ccca6729c42462a112b57f1500ef | [
"MIT"
] | 1 | 2021-05-25T07:09:21.000Z | 2021-05-25T07:09:21.000Z | src/api/instance-methods.md | qixiaobro/docs-next-zh-cn | 92dabf9be563ccca6729c42462a112b57f1500ef | [
"MIT"
] | null | null | null | src/api/instance-methods.md | qixiaobro/docs-next-zh-cn | 92dabf9be563ccca6729c42462a112b57f1500ef | [
"MIT"
] | null | null | null | # 实例方法
## $watch
- **参数:**
- `{string | Function} source`
- `{Function | Object} callback`
- `{Object} [options]`
- `{boolean} deep`
- `{boolean} immediate`
- `{string} flush`
- **Returns:** `{Function} unwatch`
- **Usage:**
  Watch a reactive property or a computed function on the component instance for changes. The callback gets called with the new value and the old value. We can only pass top-level `data`, `prop`, or `computed` property names as a string. For more complex expressions, use a function instead.
- **Example:**
```js
const app = Vue.createApp({
data() {
return {
a: 1,
b: 2,
c: {
d: 3,
e: 4
}
}
},
created() {
      // top-level property name
this.$watch('a', (newVal, oldVal) => {
        // do something
})
      // function for watching a single nested property
this.$watch(
() => this.c.d,
(newVal, oldVal) => {
          // do something
}
)
      // function for watching a complex expression
this.$watch(
        // every time the expression `this.a + this.b` yields a different result,
        // the handler will be called.
        // It's as if we were watching a computed property without defining it
() => this.a + this.b,
(newVal, oldVal) => {
          // do something
}
)
}
})
```
  When the watched value is an object or array, any changes to its properties or elements won't trigger the watcher because they reference the same object/array:
```js
const app = Vue.createApp({
data() {
return {
article: {
text: 'Vue is awesome!'
},
comments: ['Indeed!', 'I agree']
}
},
created() {
this.$watch('article', () => {
console.log('Article changed!')
})
this.$watch('comments', () => {
console.log('Comments changed!')
})
},
methods: {
    // These methods won't trigger a watcher because we changed only a property
    // of the Object/Array, not the object/array itself
changeArticleText() {
this.article.text = 'Vue 3 is awesome'
},
addComment() {
this.comments.push('New comment')
},
    // These methods will trigger a watcher because we replaced the object/array completely
changeWholeArticle() {
this.article = { text: 'Vue 3 is awesome' }
},
clearComments() {
this.comments = []
}
}
})
```
  `$watch` returns an unwatch function that stops firing the callback:
```js
const app = Vue.createApp({
data() {
return {
a: 1
}
}
})
const vm = app.mount('#app')
const unwatch = vm.$watch('a', cb)
// later, teardown the watcher
unwatch()
```
- **Option: deep**
  To also detect nested value changes inside objects, you need to pass in `deep: true` in the options argument. Note that you don't need to do so to listen for array mutations.
```js
vm.$watch('someObject', callback, {
deep: true
})
vm.someObject.nestedValue = 123
// callback is fired
```
- **Option: immediate**
  Passing `immediate: true` in the options argument will trigger the callback immediately with the current value of the expression:
```js
vm.$watch('a', callback, {
immediate: true
})
  // `callback` is fired immediately with the current value of `a`
```
  Note that with the `immediate` option you won't be able to unwatch the given property on the first callback call.
```js
  // This will throw an error
const unwatch = vm.$watch(
'value',
function() {
doSomething()
unwatch()
},
{ immediate: true }
)
```
  If you still want to call an unwatch function inside the callback, you should check its availability first:
```js
let unwatch = null
unwatch = vm.$watch(
'value',
function() {
doSomething()
if (unwatch) {
unwatch()
}
},
{ immediate: true }
)
```
- **Option: flush**
The `flush` option allows for greater control over the timing of the callback. It can be set to `'pre'`, `'post'` or `'sync'`.
The default value is `'pre'`, which specifies that the callback should be invoked before rendering. This allows the callback to update other values before the template runs.
The value `'post'` can be used to defer the callback until after rendering. This should be used if the callback needs access to the updated DOM or child components via `$refs`.
If `flush` is set to `'sync'`, the callback will be called synchronously, as soon as the value changes.
For both `'pre'` and `'post'`, the callback is buffered using a queue. The callback will only be added to the queue once, even if the watched value changes multiple times. The interim values will be skipped and won't be passed to the callback.
Buffering the callback not only improves performance but also helps to ensure data consistency. The watchers won't be triggered until the code performing the data updates has finished.
`'sync'` watchers should be used sparingly, as they don't have these benefits.
For more information about `flush` see [Effect Flush Timing](../guide/reactivity-computed-watchers.html#effect-flush-timing).
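  The buffering described above can be sketched in plain JavaScript (no Vue APIs involved — `createBufferedWatcher` is an illustrative stand-in, not part of Vue): several synchronous changes collapse into a single queued callback, so interim values are skipped.

  ```js
  // Minimal sketch of a buffered ('pre'/'post'-style) watcher queue.
  // Each synchronous change records the latest value; the callback is
  // queued at most once per flush and only ever sees the final value.
  function createBufferedWatcher(callback) {
    let pending = false
    let latest
    return function notify(value) {
      latest = value
      if (!pending) {
        pending = true
        queueMicrotask(() => {
          pending = false
          callback(latest) // interim values are never seen
        })
      }
    }
  }

  const seen = []
  const notify = createBufferedWatcher(value => seen.push(value))
  notify(1)
  notify(2)
  notify(3) // three synchronous changes...
  queueMicrotask(() => console.log(seen)) // ...one callback: [ 3 ]
  ```

  Vue's real scheduler is more involved (separate pre/post queues tied to rendering), but the skipping of interim values follows this same pattern.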
- **See also:** [Watchers](../guide/computed.html#侦听器)
## $emit
- **参数:**
- `{string} eventName`
- `[...args]`
  Trigger an event on the current instance. Any additional arguments will be passed into the listener's callback function.
- **Example:**
  Using `$emit` with only an event name:
```html
<div id="emit-example-simple">
<welcome-button v-on:welcome="sayHi"></welcome-button>
</div>
```
```js
const app = Vue.createApp({
methods: {
sayHi() {
console.log('Hi!')
}
}
})
app.component('welcome-button', {
template: `
<button v-on:click="$emit('welcome')">
Click me to be welcomed
</button>
`
})
app.mount('#emit-example-simple')
```
  Using `$emit` with additional arguments:
```html
<div id="emit-example-argument">
<advice-component v-on:give-advice="showAdvice"></advice-component>
</div>
```
```js
const app = Vue.createApp({
methods: {
showAdvice(advice) {
alert(advice)
}
}
})
app.component('advice-component', {
data() {
return {
adviceText: 'Some advice'
}
},
template: `
<div>
<input type="text" v-model="adviceText">
<button v-on:click="$emit('give-advice', adviceText)">
Click me for sending advice
</button>
</div>
`
})
```
- **See also:**
  - [`emits` option](./options-data.html#emits)
  - [Emitting a value with an event](../guide/component-basics.html#使用事件抛出一个值)
## $forceUpdate
- **Usage:**
  Force the component instance to re-render. Note it only affects the instance itself and child components with inserted slot content, not all child components.
## $nextTick
- **Arguments:**
- `{Function} [callback]`
- **Usage:**
  Defer the callback to be executed after the next DOM update cycle. Use it immediately after you've changed some data to wait for the DOM update. This is the same as the global `nextTick`, except that the callback's `this` context is automatically bound to the instance calling this method.
- **Example:**
```js
Vue.createApp({
// ...
methods: {
// ...
example() {
// modify data
this.message = 'changed'
// DOM is not updated yet
this.$nextTick(function() {
// DOM is now updated
// `this` is bound to the current instance
this.doSomethingElse()
})
}
}
})
```
- **See also:** [nextTick](global-api.html#nexttick)
| 20.057143 | 245 | 0.558088 | eng_Latn | 0.677068 |
8ada5676e8e3c7778b7257ac9083620ab2ded2d0 | 1,130 | md | Markdown | _episodes/2021-05-25-episode-109-bidoof.md | e-vanaubrey/icy-blog | 36af6803c8ddbfb474a97354b97b17f45167f93c | [
"MIT"
] | null | null | null | _episodes/2021-05-25-episode-109-bidoof.md | e-vanaubrey/icy-blog | 36af6803c8ddbfb474a97354b97b17f45167f93c | [
"MIT"
] | 1 | 2021-07-31T19:29:44.000Z | 2021-07-31T19:29:44.000Z | _episodes/2021-05-25-episode-109-bidoof.md | e-vanaubrey/icy-blog | 36af6803c8ddbfb474a97354b97b17f45167f93c | [
"MIT"
] | null | null | null | ---
layout: episode-post
title: "Episode 109: Bidoof"
date: 2021-05-25 16:55:15
season: "4"
thumbnail: /assets/img/uploads/109_bidoof.png
---
We act like a bunch of doofuses as we discuss the best recipes for Bidoof. Also: Garfield, getting Quenched, and some exciting new business opportunities. All that and more on I Chews You.
{:.links}
[](https://podcasts.apple.com/us/podcast/110-bidoof/id1455409177?i=1000523043168) [](https://open.spotify.com/episode/7kowblTrGMVmQcVlViGWos?si=LiJUOwp7Rxm6o5a30ymEBg) [](https://podcasts.google.com/feed/aHR0cHM6Ly9pY2hld3N5b3UubGlic3luLmNvbS9yc3M/episode/MDNjMWMzZmQtNTZiMi00YTE1LTg0M2YtOTcxNzM5YzhhYjc5?sa=X&ved=0CA0QkfYCahcKEwjw0sLshObwAhUAAAAAHQAAAAAQAQ) [](https://www.stitcher.com/s?eid=84212041) | 102.727273 | 786 | 0.793805 | yue_Hant | 0.310979 |
8adb0b211463973a913c541fe9b2ca4208a51541 | 1,916 | md | Markdown | docs/framework/unmanaged-api/metadata/imetadatatables-getstring-method.md | soelax/docs.de-de | 17beb71b6711590e35405a1086e6ac4eac24c207 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/metadata/imetadatatables-getstring-method.md | soelax/docs.de-de | 17beb71b6711590e35405a1086e6ac4eac24c207 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/metadata/imetadatatables-getstring-method.md | soelax/docs.de-de | 17beb71b6711590e35405a1086e6ac4eac24c207 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IMetaDataTables::GetString Method
ms.date: 03/30/2017
api_name:
- IMetaDataTables.GetString
api_location:
- mscoree.dll
api_type:
- COM
f1_keywords:
- IMetaDataTables::GetString
helpviewer_keywords:
- IMetaDataTables::GetString method [.NET Framework metadata]
- GetString method, IMetaDataTables interface [.NET Framework metadata]
ms.assetid: 895c35cf-b95d-4e3b-93b5-cfc1cf9044fc
topic_type:
- apiref
author: mairaw
ms.author: mairaw
ms.openlocfilehash: c457d8a6b3ab187b7d02c9c9be800c4ef1f0f58c
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 01/23/2019
ms.locfileid: "54537984"
---
# <a name="imetadatatablesgetstring-method"></a>IMetaDataTables::GetString-Methode
Ruft die Zeichenfolge am angegebenen Index aus der Tabellenspalte in den Gültigkeitsbereich des aktuellen ab.
## <a name="syntax"></a>Syntax
```
HRESULT GetString (
[in] ULONG ixString,
[out] const char **ppString
);
```
#### <a name="parameters"></a>Parameters
`ixString`
 [in] The index at which to start searching for the next value.
`ppString`
 [out] A pointer to a pointer to the returned string value.
## <a name="requirements"></a>Requirements
 **Platforms:** For more information, see [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
**Header:** Cor.h
 **Library:** Included as a resource in MsCorEE.dll
 **.NET Framework Versions:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## <a name="see-also"></a>See also
- [IMetaDataTables Interface](../../../../docs/framework/unmanaged-api/metadata/imetadatatables-interface.md)
- [IMetaDataTables2 Interface](../../../../docs/framework/unmanaged-api/metadata/imetadatatables2-interface.md)
| 33.034483 | 144 | 0.735908 | deu_Latn | 0.393593 |
8addad75b022b90971b3b2855dc15c816872bc2a | 3,353 | md | Markdown | content/news/2014/03/2014-03-17-introducing-openfda.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | 1 | 2022-02-11T11:53:47.000Z | 2022-02-11T11:53:47.000Z | content/news/2014/03/2014-03-17-introducing-openfda.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | null | null | null | content/news/2014/03/2014-03-17-introducing-openfda.md | afeijoo/digitalgov.gov | 117098d31802464d9696987980f4a400f3f6654c | [
"CC0-1.0"
] | null | null | null | ---
slug: introducing-openfda
date: 2014-03-17 12:00:06 -0400
title: Introducing openFDA
summary: ' Welcome to the new home of openFDA! We are incredibly excited to see so much interest in our work and hope that this site can be a valuable resource to those wishing to use public FDA data in both the public and private'
authors:
- dr-taha-kass-hout
topics:
- api
- data
- api
- cloud
- fda
- mobile-first
---
[{{< legacy-img src="2014/03/p\_blog\_introducing\_openFDA\_600x267.jpg" alt="Image of scientist reading printout from mainframe computer" >}}](https://s3.amazonaws.com/digitalgov/_legacy-img/2014/03/p_blog_introducing_openFDA_600x267.jpg)
Welcome to the new home of [openFDA](http://open.fda.gov/)! We are incredibly excited to see so much interest in our work and hope that this site can be a valuable resource to those wishing to use public FDA data in both the public and private sector to spur innovation, further regulatory or scientific missions, educate the public, and save lives.
Through openFDA, developers and researchers will have easy access to high-value FDA public data through [RESTful APIs](http://apievangelist.com/index.html) and structured file downloads. In short, our goal is to make it simple for an application, mobile, or Web developer, or all stripes of researchers, to use data from FDA in their work. We’ve done an extensive amount of research both internally and with potential external developers to identify which datasets are both in demand and have a high barrier to entry. As a result, our initial pilot project will cover a number of datasets from various areas within FDA, defined into three broad focus areas: Adverse Events, Product Recalls, and Product Labeling. These APIs won’t have one-to-one matching to FDA’s internal data organizational structure; rather, we intend to abstract on top of a myriad of datasets and provide appropriate metadata and identifiers when possible. Of course, we’ll always make the raw source data available for people who prefer to work that way (and it’s good to mention that we also will not be releasing any data that could potentially be used to identify individuals or other private information).
The openFDA initiative is one part of the larger Office of Informatics and Technology Innovation roadmap. As part of my role as FDA’s [Chief Health Informatics Officer](http://www.fda.gov/AboutFDA/CentersOffices/ucm349836.htm), I’m working to lead efforts to move FDA in to a cutting edge technology organization. You’ll be hearing more about our other initiatives, including Cloud Computing, High Performance Computing, Next Generation Sequencing, and mobile-first deployment in the near future.
As we work towards a release of openFDA we’ll begin to share more about our work and how you can get involved. In the meantime, I suggest you sign up for our listserv ([on our home page](http://open.fda.gov/)) to get the latest updates on the project. You can also reach our team via [email](mailto:open@fda.hhs.gov) if there is a unique partnership opportunity or other collaboration you wish to discuss.

_This post was originally published on the [openFDA blog](http://open.fda.gov/) by Dr. Taha Kass-Hout, the Chief Health Informatics Officer of the U.S. Food and Drug Administration (FDA)._
8ade4739ffff70661335c5e3096651dab8205562 | 88 | md | Markdown | README.md | tomasz-madej/wp-same-height-boxes | 6ad2604f8a9d9fb3010093c865f256f2a3eada3a | [
"MIT"
] | null | null | null | README.md | tomasz-madej/wp-same-height-boxes | 6ad2604f8a9d9fb3010093c865f256f2a3eada3a | [
"MIT"
] | null | null | null | README.md | tomasz-madej/wp-same-height-boxes | 6ad2604f8a9d9fb3010093c865f256f2a3eada3a | [
"MIT"
] | null | null | null | # wp-same-height-boxes
This plugin sets up the same height for all boxes inside section
| 29.333333 | 64 | 0.795455 | eng_Latn | 0.997634 |
8ae0c559bbf18900b573f316b3a386a75d849f1a | 1,595 | md | Markdown | Documentation/Java/main.md | ehailey1/treehopper-sdk | c242f939a93d93da11ff79577666130c15aecec7 | [
"MIT"
] | 3 | 2018-03-16T07:00:42.000Z | 2022-03-27T00:39:55.000Z | Documentation/Java/main.md | ehailey1/treehopper-sdk | c242f939a93d93da11ff79577666130c15aecec7 | [
"MIT"
] | 16 | 2016-08-12T18:51:04.000Z | 2021-04-16T16:14:07.000Z | Documentation/Java/main.md | ehailey1/treehopper-sdk | c242f939a93d93da11ff79577666130c15aecec7 | [
"MIT"
] | 6 | 2015-11-04T15:53:49.000Z | 2020-06-25T18:34:47.000Z | \mainpage Welcome
\section intro_sec Introduction
This documentation contains all Java-specific information for interfacing with Treehopper. For hardware documentation, or for documentation for other languages, visit <a href="https://docs.treehopper.io/">https://docs.treehopper.io/</a>.
\subsection features Features
Treehopper's Java API is designed to support many different execution contexts; you can integrate it into simple console applications that have 100% binary compatibility under Windows, macOS, Linux, and other UNIX-like operating systems. Treehopper's Java API also fully supports Android, allowing you to target smartphones and tablets.
\subsection libraries Libraries
In addition to the main API that allows you to manipulate and sample pins on the Treehopper, the Java API also includes an ever-growing library full of drivers for many different peripheral ICs, including IMUs and other sensors; GPIO expanders, DACs and ADCs; LED drivers, character and graphical displays; and motor drivers, rotary encoders, and other motion devices.
\subsection Modules
Treehopper's Java API is split across the following packages:
- io.treehopper: the base library. Provides GPIO, PWM, I2C, SPI, and base interface support. Requires one of these connectors:
- io.treehopper.desktop: provides platform-agnostic connectivity for traditional console or desktop applications running on Windows, macOS, or Linux.
- io.treehopper.android: provides connectivity for Android projects.
- io.treehopper.libraries: provides support for more than 100 commonly-used ICs and peripherals.
**Source:** docs/book/SUMMARY.md (wangxy518/cloud-provider-vsphere, Apache-2.0)

# Summary
* [Introduction](README.md)
* [VMware vSphere Storage Concepts](concepts/vmware_vsphere_storage.md)
* [In-Tree and Out-of-Tree Implementation Models](concepts/in_tree_vs_out_of_tree.md)
* [About vSphere Cloud Provider](concepts/vcp_overview.md)
* [Overview of the CPI](concepts/cpi_overview.md)
* [Overview of the CSI](concepts/csi_overview.md)
* [Glossary](glossary.md)
* [Cloud Provider Interface (CPI)](cloud_provider_interface.md)
* [Container Storage Interface (CSI)](container_storage_interface.md)
* [Cloud Config Spec](cloud_config.md)
## Tutorials
* [Running a Kubernetes cluster on vSphere with kubeadm](/tutorials/kubernetes-on-vsphere-with-kubeadm.md)
* [Deploying CCM and CSI with Zones Topology](/tutorials/deploying_cpi_and_csi_with_multi_dc_vc_aka_zones.md)
* [Deploying the (deprecated) in-tree vSphere Cloud Provider using kubeadm](./tutorials/k8s-vcp-on-vsphere-with-kubeadm.md)
**Source:** README.md (highwindmx/Web2Note, MIT)

# Web2Note
Take notes with the "SingleFile" add-on for Firefox and manage them.
https://addons.mozilla.org/en-US/firefox/addon/single-file/

| 48 | 106 | 0.7875 | yue_Hant | 0.232096 |
**Source:** articles/supply-chain/production-control/consumption.md (MicrosoftDocs/Dynamics-365-Operations.es-es, CC-BY-4.0 / MIT)

---
title: Calculate material consumption
description: This article provides information about the various options related to calculating material consumption.
author: johanhoffmann
ms.date: 06/20/2017
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: BOMDesignerEditBOM, BOMTable, ProdBOM
audience: Application User
ms.reviewer: kamaybac
ms.custom: 53401
ms.assetid: 9cff88e4-0425-4707-9178-3c2cb10df7c2
ms.search.region: Global
ms.search.industry: Manufacturing
ms.author: johanho
ms.search.validFrom: 2016-02-28
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: e62d49b5fa2b26c34106e5bbf3dfbc27145e5c4e9acf798b8faef273d8957e51
ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/05/2021
ms.locfileid: "6776881"
---
# <a name="calculate-material-consumption"></a>Calculate material consumption
[!include [banner](../includes/banner.md)]
This article provides information about the various options related to calculating material consumption.

The following options related to calculating material consumption are available on the **Setup** and **Step consumption** tabs of the **Line details** FastTab on the **Bill of materials** page.
## <a name="variable-and-constant-consumption"></a>Variable and constant consumption

In the **Consumption is** field, you can select whether consumption is calculated as a constant or a variable quantity. Select **Constant** if a fixed quantity or volume is required for the production, regardless of the quantity that is produced. Select **Variable**, the default setting, if the required quantity of material in the finished goods is proportional to the number of finished goods that are produced.
## <a name="calculating-consumption-from-a-formula"></a>Calculating consumption from a formula

In the **Formula** field, you can set up various formulas for calculating material consumption. If you use the default value, **Standard**, consumption is not calculated from a formula. The following formulas work together with the **Height**, **Width**, **Depth**, **Density**, and **Constant** fields:

- Height \* Constant
- Height \* Width \* Constant
- Height \* Width \* Depth \* Constant
- (Height \* Width \* Depth / Density) \* Constant
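As a rough, illustrative sketch (not product code), the last of these formulas can be written in Python, using the English field names Height, Width, Depth, Density, and Constant as parameters:

```python
def formula_consumption(height, width, depth, density, constant):
    """(Height * Width * Depth / Density) * Constant."""
    return height * width * depth / density * constant

# Hypothetical values: a 2 x 0.5 x 0.1 piece, density 0.8, constant factor 1.5
print(formula_consumption(2.0, 0.5, 0.1, 0.8, 1.5))  # 0.1875
```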
## <a name="rounding-up-and-multiples"></a>Rounding up and multiples

Together, the **Round-up** and **Multiples** fields enable rounding up of the material consumption value. For example, you can round up the value based on the material handling unit that the raw material is picked in for production. The following options are available in the **Round-up** field: **Quantity**, **Measurement**, and **Consumption**.

### <a name="quantity"></a>Quantity

If you select **Quantity** as the rounding mechanism, the quantity must be a multiple of the quantity that is specified. For example, if whole numbers are required, select **1** in the **Multiples** field. Numbers are then rounded up to a quantity that is divisible by 1.
### <a name="measurement"></a>Measurement

Typically, you select **Measurement** as the rounding mechanism when the raw material comes in specific dimensions. For example, a two-meter piece of metal pipe is required for a finished product, and the metal pipe is stocked in lengths of 4.5 meters. In this case, the **Measurement** rounding mechanism can be used to calculate how many metal pipes are required to produce a specific number of pieces of the finished product. In this example, the **Formula** field is set to **Height \* Constant**, the **Height** field is set to **2** to indicate the length of pipe that is required for the finished product, and the **Multiple** field is set to **4.5** to indicate that the pipe is picked in lengths of 4.5 meters. Here is the calculation:

1. Number of multiples required for 10 pieces of the finished product: 10 ÷ 2 = 5 pipes
2. Total consumption: 4.5 × 5 = 22.5 meters of metal pipe

It is assumed that 0.5 meters of pipe is scrapped for each of the five pipes consumed.
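A minimal Python sketch of this Measurement calculation (illustrative only; the helper and its parameter names are not product fields):

```python
import math

def measurement_rounding(pieces, length_per_piece, stock_length):
    """Round consumption up to whole stock lengths (the Measurement mechanism)."""
    pieces_per_length = int(stock_length // length_per_piece)  # 4.5 // 2 -> 2 pieces per pipe
    lengths_needed = math.ceil(pieces / pieces_per_length)     # 10 / 2 -> 5 pipes
    return lengths_needed * stock_length                       # 5 * 4.5 -> 22.5 meters

print(measurement_rounding(10, 2.0, 4.5))  # 22.5
```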
### <a name="consumption"></a>Consumption

Typically, you select **Consumption** as the rounding mechanism when the raw material must be picked in whole quantities of a specific material handling unit of the product. For example, two quarts of paint are used to produce one piece of a finished product, and the paint is picked in 25-quart cans. In this case, the **Consumption** rounding mechanism can be used to round the consumption up to whole 25-quart cans. Here is the calculation of the amount of paint that is required to produce 180 pieces of the finished product:

1. Paint required, excluding scrap: 180 × 2 = 360 quarts
2. Number of cans: 360 ÷ 25 = 14.4, which is rounded up to 15
3. Paint required, including scrap: 15 × 25 = 375 quarts
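The same idea for the Consumption mechanism, again as an illustrative sketch rather than product code:

```python
import math

def consumption_rounding(pieces, per_piece, handling_unit):
    """Round total consumption up to whole handling units (the Consumption mechanism)."""
    required = pieces * per_piece                # 180 * 2 = 360 quarts
    units = math.ceil(required / handling_unit)  # 360 / 25 = 14.4 -> 15 cans
    return units * handling_unit                 # 15 * 25 = 375 quarts

print(consumption_rounding(180, 2, 25))  # 375
```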
## <a name="step-consumption"></a>Step consumption

Step consumption is used to calculate constant consumption in quantity intervals. If you select **Step consumption** in the **Formula** field on the **Setup** tab, you can add information about the steps on the **Step consumption** tab. A fixed consumption quantity can be set up per interval of the produced quantity. For example, step consumption has been set up as shown in the following table.

| From series | Quantity |
|-------------|----------|
| 0.00 | 10.0000 |
| 100.00 | 20.0000 |
| 200.00 | 40.0000 |

The BOM quantity is 1, and the production quantity is 110. The formula for consumption is: From series (quantity) = Consumption. Because the production quantity of 110 falls in the from-series that starts at 100, the consumption quantity is 20.
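An illustrative sketch of the step-consumption lookup, using the values from the table above (the function and variable names are not product fields):

```python
def step_consumption(produced_qty, steps):
    """steps: (from_series, quantity) pairs sorted by from_series ascending."""
    quantity = 0.0
    for from_series, qty in steps:
        if produced_qty >= from_series:
            quantity = qty  # the highest from-series not exceeding the produced qty wins
    return quantity

steps = [(0.0, 10.0), (100.0, 20.0), (200.0, 40.0)]
print(step_consumption(110, steps))  # 20.0
```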
[!INCLUDE[footer-include](../../includes/footer-banner.md)]
**Source:** src/app/types/readme.md (geldersezweefvliegclub/Pegasus, MTLL)

Conversion from yml to ts:
npx openapi-typescript --default-non-nullable ../../../../Helios/Helios.git/docs/Startlijst.yml --output Startlijst.ts
**Source:** README.md (arlenyan/protobufjs-loader, MIT)

[![Build Status](https://travis-ci.org/kmontag/protobufjs-loader.svg?branch=master)](https://travis-ci.org/kmontag/protobufjs-loader)
[](https://github.com/semantic-release/semantic-release)
# protobufjs-loader
Webpack loader to translate
[protobuf](https://github.com/google/protobuf/) definitions to
[ProtoBuf.js](https://github.com/dcodeIO/ProtoBuf.js/)
modules. Equivalent to running your definitions through the [pbjs
CLI](https://github.com/dcodeIO/ProtoBuf.js/#pbjs-for-javascript).
This allows you to use the light or minimal ProtoBuf.js distributions
without an explicit compile step in your build pipeline.
# Install
``` sh
npm install --save-dev protobufjs-loader-webpack4
```
# Usage
``` javascript
// webpack.config.js
module.exports = {
...
module: {
rules: [{
test: /\.proto$/,
use: {
loader: 'protobufjs-loader',
options: {
/* controls the "target" flag to pbjs - true for
* json-module, false for static-module.
* default: false
*/
json: true,
/* import paths provided to pbjs.
* default: webpack import paths (i.e. config.resolve.modules)
*/
paths: ['/path/to/definitions'],
/* additional command line arguments passed to
* pbjs, see https://github.com/dcodeIO/ProtoBuf.js/#pbjs-for-javascript
* for a list of what's available.
* default: []
*/
pbjsArgs: ['--no-encode']
}
}
}]
}
};
```
``` javascript
// myModule.js
/* replaces e.g.:
*
* const protobuf = require('protobufjs/light');
* const jsonDescriptor = require('json!my/compiled/protobuf.js');
* const Root = protobuf.Root.fromJSON(jsonDescriptor);
*/
const Root = require('my/protobuf.proto');
```
| 30.347826 | 165 | 0.583572 | eng_Latn | 0.265756 |
**Source:** guides/content/developer/source/index.md (melvinhgf/spree, BSD-3-Clause)

---
title: "Source Code"
section: source-code
---
## Source Code
Spree's functionality is split across six different components:
* [**API**](http://api.spreecommerce.com): Provides an HTTP JSON API for several of Spree's components.
* [**Backend**](/developer/backend): Provides the admin backend component for Spree;
things like product and order management.
* [**cmd**](/developer/cmd): Provides the `spree` command, used for generating extensions.
* [**Core**](/developer/core): Provides the minimum necessary functionality for Spree
to work.
* [**Frontend**](/developer/frontend): Provides the front-facing functionality for
Spree; things like product viewing and checkout.
* [**Sample**](/developer/sample): Provides the sample data for Spree, used for
setting up a new Spree store.
**Source:** docs/visual-basic/misc/bc31113.md (juucustodio/docs.pt-br, CC-BY-4.0 / MIT)

---
title: Statement does not declare an 'AddHandler', 'RemoveHandler' or 'RaiseEvent' method
ms.date: 07/20/2015
f1_keywords:
- vbc31113
- bc31113
helpviewer_keywords:
- BC31113
ms.assetid: f8299c9d-6030-43e5-878e-8d2b042191b5
ms.openlocfilehash: b4a955398b84b215799f103153b3327b35de7289
ms.sourcegitcommit: f8c270376ed905f6a8896ce0fe25b4f4b38ff498
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/04/2020
ms.locfileid: "84405684"
---
# <a name="statement-does-not-declare-an-addhandler-removehandler-or-raiseevent-method"></a>Statement does not declare an 'AddHandler', 'RemoveHandler' or 'RaiseEvent' method

The statement does not supply an `AddHandler`, `RemoveHandler`, or `RaiseEvent` declaration in a `Custom Event` procedure. A custom event declaration is a block of code enclosed between the `Custom Event` and `End Event` statements. Within this block, each `Custom Event` procedure appears as an inner block enclosed between a declaration statement and an `End` statement.

**Error ID:** BC31113
## <a name="to-correct-this-error"></a>To correct this error

- Supply an `AddHandler`, `RemoveHandler`, or `RaiseEvent` declaration statement.
## <a name="see-also"></a>See also

- [Event Statement](../language-reference/statements/event-statement.md)
- [AddHandler Statement](../language-reference/statements/addhandler-statement.md)
- [RemoveHandler Statement](../language-reference/statements/removehandler-statement.md)
- [RaiseEvent Statement](../language-reference/statements/raiseevent-statement.md)
- [Events](../programming-guide/language-features/events/index.md)
**Source:** thunderclap-macros/README.md (jazzfool/thunderclap, Apache-2.0 / MIT)

# `derive`
All of these derives accept a `thunderclap_crate` attribute to specify the name of the Thunderclap crate;
```rust
use thunderclap as alternative_thunderclap;
#[derive(SomeThunderclapDerive)]
#[thunderclap_crate(alternative_thunderclap)] // <--
struct Foo // ...
```
Realistically, there's no need to use this. It is mainly used within Thunderclap itself so that internal types can `derive` where the only handle to the crate root is `crate::`.
## `PipelineEvent`
```rust
#[derive(PipelineEvent, Clone, Copy, PartialEq)]
enum MyEvent {
#[event_key(stop)]
Stop,
#[event_key(play)]
Play(f32),
#[event_key(rewind)]
Rewind {
seconds: u32,
play: bool,
},
}
```
Which resolves down to:
```rust
impl thunderclap::pipe::Event for MyEvent {
fn get_key(&self) -> &'static str {
match self {
MyEvent::Stop => "stop",
MyEvent::Play(..) => "play",
MyEvent::Rewind{..} => "rewind",
}
}
}
impl MyEvent { // These are automatically called by `pipeline!` to "cast" the event.
pub fn unwrap_as_stop(self) -> Option<()> {
if let MyEvent::Stop = self { Some(()) } else { None }
}
pub fn unwrap_as_play(self) -> Option<(f32)> {
if let MyEvent::Play(x0) = self { Some(x0) } else { None }
}
pub fn unwrap_as_rewind(self) -> Option<(u32, bool)> {
if let MyEvent::Rewind{seconds, play} = self { Some((seconds, play)) } else { None }
}
}
```
## `LayableWidget`
```rust
#[derive(LayableWidget)]
struct MyWidget {
#[widget_layout]
layout: WidgetLayoutEvents,
}
```
Expands to...
```rust
impl thunderclap::base::LayableWidget for MyWidget {
#[inline]
fn listen_to_layout(&mut self, layout: impl Into<Option<thunderclap::base::WidgetLayoutEventsInner>>) {
self.layout.update(layout);
}
#[inline]
fn layout_id(&self) -> Option<u64> {
self.layout.id()
}
}
```
## `DropNotifier`
```rust
#[derive(DropNotifier)]
struct MyWidget {
#[widget_drop_event]
drop_event: RcEventQueue<DropEvent>,
}
```
Expands to...
```rust
impl thunderclap::base::DropNotifier for MyWidget {
#[inline(always)]
fn drop_event(&self) -> &thunderclap::reclutch::event::RcEventQueue<thunderclap::base::DropEvent> {
&self.drop_event
}
}
```
Note that you'll still have to appropriately implement `Drop` to emit into `drop_event`;
```rust
// Manually implemented
impl Drop for MyWidget {
fn drop(&mut self) {
self.drop_event.emit_owned(DropEvent);
}
}
```
## `HasVisibility`
```rust
#[derive(HasVisibility)]
struct MyWidget {
#[widget_visibility]
visibility: Visibility,
}
```
Expands to...
```rust
impl thunderclap::base::HasVisibility {
#[inline]
fn set_visibility(&mut self, visibility: thunderclap::base::Visibility) {
self.visibility = visibility;
}
#[inline]
fn visibility(&self) -> thunderclap::base::Visibility {
self.visibility
}
}
```
TL;DR: setter and getter.
## `Repaintable`
```rust
#[derive(Repaintable)]
struct MyWidget {
#[repaint_target]
a: CommandGroup,
#[repaint_target]
b: CommandGroup,
#[widget_child]
#[repaint_target]
c: AnotherWidget, // <-- assuming this has a method called `repaint`.
}
```
Expands to...
```rust
impl thunderclap::base::Repaintable for MyWidget {
#[inline]
fn repaint(&mut self) {
self.a.repaint();
self.b.repaint();
self.c.repaint();
for child in thunderclap::base::WidgetChildren::children_mut(self) {
child.repaint();
}
}
}
```
## `Movable` and `Resizable`
Both these derives accept an attribute `widget_transform_callback`.
In the case of deriving both `Movable` and `Resizable`, note that "overlapping" derive attributes are valid, so in many scenarios you can write the attribute once for it to be applied to both derives.
Assume `<a/b>` means "interchangeable", since these two derives are almost identical.
```rust
#[derive(<Movable/Resizable>)]
#[widget_transform_callback(on_transform)]
struct MyWidget {
#[widget_rect]
rect: RelativeRect
// -- OR --
#[widget_<position/size>]
x: <RelativePoint/Size>,
}
```
Expands to...
```rust
impl thunderclap::base::<Movable/Resizable> for MyWidget {
fn set_<position/size>(&mut self, <position/size>: thunderclap::reclutch::display::<RelativePoint/Size>) {
self.rect.<origin/size> = <position/size>;
// -- OR --
self.x = <position/size>;
thunderclap::base::Repaintable::repaint(self);
self.on_transform();
}
#[inline]
fn <position/size>(&self) -> thunderclap::reclutch::display::<RelativePoint/Size> {
self.rect.<origin/size>
// -- OR --
self.x
}
}
```
Here the `// -- OR --` denotes that the derive can operate on either a point/size field or a rectangle field.
**Source:** README.md (martinschwinzerl/demotrack, MIT)

# demotrack
## About
`demotrack` is a stripped-down, stand-alone, proof-of-concept GPU-accelerated beam-dynamics particle tracking tool similar to [SixTrackLib](https://github.com/SixTrack/sixtracklib) or [SixTrack](https://github.com/SixTrack/sixtrack). It implements a particle model similar to `SixTrackLib` and a small set of beam elements:
- drifts
- multipole
- cavity
- coasting SpaceCharge
These elements are sufficient to track a simple FODO lattice. `demotrack` relies on [PyOpenCL](https://documen.tician.de/pyopencl/) for GPU-accelerated parallel execution. See the `examples/demo.py` file for a usage example.
## Installation
It should be sufficient to run
```
pip install -e .
```
from the main directory of your working copy.
**Source:** _posts/fastchallenge/2020-11-21-fastchallenge34.md (windowdong11/windowdong11.github.io, MIT)

---
title: "Data Structures / 👉 Algorithms - 34"
date: 2020-11-21 18:00:00
categories: fastcampus-challenge Problem-Solving
toc : true
usemathjax: true
---
## Contents

1. Basic searching - fundamentals

## Document Search

[Document Search](https://www.acmicpc.net/problem/1543)

Two strings are given.
Calling the first string the document and the second string the word,
the problem is to count how many times the word occurs in the document **without overlapping**.

"ababababa"
"aba"
-> **aba**b**aba**ba

At the very start, in **ab*aba***, the {0th, 1st, 2nd character} group and
the {2nd, 3rd, 4th character} group share the 2nd character, so the overlapping occurrence is not counted.

The document length (d) is at most 2500 and the word length (w) is at most 50.
Time limit 2 seconds / memory limit 128 MB.

A brute-force search at roughly $O(dw) = d*w = 125000$ operations should be fine.
```py
d = input()
w = input()
cnt = 0
i = 0
while i < len(d) - len(w) + 1:
found = True
for k in range(len(w)):
if d[i + k] != w[k]:
found = False
i += 1
break
if found:
i += len(w)
cnt += 1
print(cnt)
```
1. Select characters of the document one by one, starting from the first
2. Check whether the word w matches starting at the selected position
3. If it matches, add 1 to the match count, skip ahead by the length of w, select the next character, and repeat from step 2
4. If it does not match, select the next character and repeat from step 2

Slightly more readable code.

The string comparison can be done with slicing:
```py
d = input()
w = input()
i = 0
result = 0
while len(d) - i >= len(w):
if d[i : i + len(w)] == w:
result += 1
        i += len(w)
else:
i += 1
print(result)
```
## Bird

[Bird](https://www.acmicpc.net/problem/1568)

n is at most 1,000,000,000.
Time limit 2 seconds.

Everyone will think of simulating exactly what the problem statement says,
and then wonder, "can this finish in time?" -
but 1+2+3+... grows quickly.

Let's estimate the work in the first cycle, which has the largest number of iterations:

$1,000,000,000 = {n(n+1)\over2}$

$2,000,000,000 = n^2 + n$

$n \fallingdotseq 44721$

So it comes out to roughly $O(\sqrt n)$.
```py
n = int(input())
i = 1
t = 0
while n:
if i <= n:
n -= i
else:
i = 1
n -= i
i += 1
t += 1
print(t)
```
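As a quick sanity check of the $O(\sqrt n)$ estimate (illustrative, separate from the solution), find the smallest k with k(k+1)/2 ≥ 1,000,000,000:

```python
n, k = 1_000_000_000, 0
while k * (k + 1) // 2 < n:
    k += 1
print(k)  # 44721
```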

## Bestseller

[Bestseller](https://www.acmicpc.net/problem/1302)

Given the list of titles of the books sold, find the title of the best-selling book
(among ties, the alphabetically first title).
```py
n = int(input())
count = {}
for _ in range(n):
    title = input()
    count[title] = count.get(title, 0) + 1
# most copies sold first; alphabetically first title among ties
print(sorted(count.items(), key=lambda kv: (-kv[1], kv[0]))[0][0])
```

## Closing

Contents
**Source:** _posts/2002-07-26-berzerkers-bug.md (RC7502/RC7502.github.io, MIT)

---
layout: post
title: "Berzerker's Bug"
date: 2002-07-26 17:55
author: rcadmin
comments: true
categories: [Comics]
tags: []
---
<!--more--><img src="http://dl.bitsmack.com/comics/20020726.gif" alt="" />
| 18.636364 | 74 | 0.663415 | ita_Latn | 0.083436 |
**Source:** README.md (nmhung1210/reactjs-dev, MIT)

# reactjs-dev
Development package for reactjs
**Source:** _bagby/gbby-45g138.md (jpmantica/wax-sandbox, MIT)

---
pid: gbby-45g138
order: '138'
file_name: gbby-45g138.jpg
label: 'GBBY 45G/138: Football Game Scene - Notre Dame vs. Carnegie Tech - 1936'
_date: '1936'
object_type: glass plate negative
source: http://archives.nd.edu/Bagby/GBBY-45g138.jpg
thumbnail: "/img/derivatives/simple/gbby-45g138/thumbnail.jpg"
full: "/img/derivatives/simple/gbby-45g138/fullwidth.jpg"
layout: bagby_item
collection: bagby
---
**Source:** _pages/known-issues/hardware/specter-diy.md (bitstein/btcguide.github.io, MIT)

---
title: Specter DIY
---
{% include hw_experts.md %}
TODO: add more content
#### Cannot Buy Assembled Version
While the DIY version is great for expert users (and no longer requires soldering!), the overwhelming majority of users prefer a product they can purchase.
The fewer people that use a product, the less thoroughly it is scrutinized/tested.
Small nitpick: there is no currently available case, it's just raw electronics.
We expect both these issues to be resolved in the future, which will make this device a recommendable addition to your multisig setup.
#### No Physical Security
Not having a secure element means that if someone gets physical access to your Specter DIY device they can extract your seed.
There are two mitigations that make this not that big of a deal:
1. Many use-cases are already built around the idea of giving complete access to anyone who gets physical access to a device.
For example, if you're storing seed phrases on metal plates (with no passphrase) then an attacker who gets access to that plate has all the private keys associated with it.
To get the benefits of a secure element (enforcing PIN access to a secure element with both a limit on the number of attempts and an exponentially-increasing time-delay for guesses) means that you also need to remember a PIN.
2. A long passphrase can strongly mitigate this issue, and the iPhone-style keyboard is very good for entering passphrases.
#### Written in Python
This is not inherently a problem (python is a good general-purpose programming language!), but several hardware wallets with varying levels of multisig support are also written in python, such as Trezor, Coldcard, and Passport.
These hardware wallets share *a lot* of upstream code, and it's possible that if a vulnerability were discovered in one it would be present in the others.
{% include encouragement.md %}
**Source:** content/til/2020-08/2020-08-13.md (laysent/blog, MIT)

---
title: Google vs Trick Tool
date: '2020-08-13'
category: 'Open Source Code'
---
When users want to compare two things, they often type `xxx vs xxx` into Google. Google's search autocomplete does a good job of surfacing what users commonly type, so once you know a keyword A, you can enter `A vs ` into Google to trigger autocomplete and discover similar keywords B, C, D.

For example, finding keywords related to Webpack:



Since this method finds keywords B similar to a known keyword A, the same method can be applied recursively to find B's neighbors C, and so on. Repeating this produces an undirected graph of related keywords (an "undirected" graph because `vs` is assumed to be commutative).

[This tool](https://anvaka.github.io/vs/?query=) provides a visualization: enter a keyword and it recursively calls Google's API to build the graph. The tool's source code is [here](https://github.com/anvaka/vs); the code that actually queries the Google Suggestion API can be found [here](https://github.com/anvaka/vs/blob/2bceaa530dc933cc193bbcb54e8d11483769b9f5/src/lib/buildGraph.js#L121).

For an introduction to the Google Suggestion API (including the Google vs trick), see [this article](https://medium.com/applied-data-science/the-google-vs-trick-618c8fd5359f).
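A sketch of how such a crawler might build its suggestion queries; treat the endpoint and parameters as assumptions based on the linked project, not an official documented API:

```python
from urllib.parse import quote

def suggest_url(term):
    # Unofficial autocomplete endpoint commonly used for "A vs" queries (assumption).
    base = "https://suggestqueries.google.com/complete/search"
    return base + "?client=firefox&q=" + quote(term + " vs ")

print(suggest_url("webpack"))
# https://suggestqueries.google.com/complete/search?client=firefox&q=webpack%20vs%20
```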
**Source:** README.md (qiuanshmily/aggregate-payment, MIT)

# aggregate-payment
Aggregate payment
**Source:** FAQ-graphicspath.md (wilfriedh/texfaq.github.io, CC0-1.0)

---
title: Importing graphics from "somewhere else"
category: graphics
permalink: /FAQ-graphicspath
---
By default, graphics commands like `\includegraphics` look
"wherever TeX files are found" for the graphic file they're being
asked to use. This can reduce your flexibility if you choose to hold
your graphics files in a common directory, away from your (La)TeX
sources.
The simplest solution is to patch TeX's path, by changing the
default path. On most systems, the default path is taken from the
environment variable `TEXINPUTS`, if it's present; you can adapt that
to take in the path it already has, by setting the variable to
```latex
TEXINPUTS=.:<graphics path(s)>:
```
on a Unix system; on a Windows system the separator will be `;`
rather than `:`. The `.` is there to ensure
that the current directory is searched first; the trailing `:` says
"patch in the value of `TEXINPUTS` from your configuration file, here".
This method has the merit of efficiency ((La)TeX does _all_ of
the searches, which is quick), but it's always clumsy and may prove
inconvenient to use in Windows setups (at least).
The alternative is to use the [`graphics`](https://ctan.org/pkg/graphics) package command
`\graphicspath`; this command is of course also available to users
of the [`graphicx`](https://ctan.org/pkg/graphicx) and the [`epsfig`](https://ctan.org/pkg/epsfig) packages. The
syntax of `\graphicspath`'s one argument is slightly odd: it's a
sequence of paths (typically relative paths), each of which is
enclosed in braces. A slightly odd example (slightly modified from one
given in the [`graphics`](https://ctan.org/pkg/graphics) bundle documentation) is:
<!-- {% raw %} -->
```latex
\graphicspath{{eps/}{png/}}
```
<!-- {% endraw %} -->
which will search for graphics files in subdirectories `eps` and
`png` of the directory in which LaTeX is running. (Note that
the trailing `/` _is_ required.)
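For concreteness, a minimal document using that declaration might look like the following; the file name `fig1` is a made-up example, which `graphicx` would look for first in `eps/` and then in `png/`:

<!-- {% raw %} -->
```latex
\documentclass{article}
\usepackage{graphicx}
% search the eps/ and png/ subdirectories, in this order
\graphicspath{{eps/}{png/}}
\begin{document}
% finds e.g. eps/fig1.eps or png/fig1.png, whichever is present
\includegraphics[width=0.8\linewidth]{fig1}
\end{document}
```
<!-- {% endraw %} -->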
(Note that some (La)TeX systems will only allow you to use files in
the current directory and its sub-directories, for security reasons.
However, `\graphicspath` imposes no such restriction: as far as
_it_ is concerned, you can access files anywhere.)
Be aware that `\graphicspath` does not affect the operations of
graphics macros other than those from the graphics bundle — in
particular, those of the outdated [`epsf`](https://ctan.org/pkg/epsf) and
[`psfig`](https://ctan.org/pkg/psfig) packages are immune.
The slight disadvantage of the `\graphicspath` method is
inefficiency. The package will call (La)TeX once for each entry in
the list to look for a file, which of course slows things. Further,
(La)TeX remembers the name of any file it's asked to look up, thus
effectively losing memory, so that in the limit a document that uses a
huge number of graphical inputs could be embarrassed by lack of
memory. (Such "memory starvation" is pretty unlikely with any
ordinary document in a reasonably modern (La)TeX system, but it
should be borne in mind.)
If your document is split into a variety of directories, and each
directory has its associated graphics, the [`import`](https://ctan.org/pkg/import) package
may well be the thing for you; see the discussion
in the question
"[bits of document in other directories](FAQ-docotherdir)".
| 46.070423 | 113 | 0.758178 | eng_Latn | 0.999008 |
8ae994ff73eac989f1c653171b7cf853ccd2c43a | 11,055 | md | Markdown | _posts/2019-11-29-r-madhavan-hairstyles.md | comotecyn/-hairstyle | d77bbac3ea01d7130320d4f80b2dc57020aed1a0 | [
"MIT"
] | null | null | null | _posts/2019-11-29-r-madhavan-hairstyles.md | comotecyn/-hairstyle | d77bbac3ea01d7130320d4f80b2dc57020aed1a0 | [
"MIT"
] | null | null | null | _posts/2019-11-29-r-madhavan-hairstyles.md | comotecyn/-hairstyle | d77bbac3ea01d7130320d4f80b2dc57020aed1a0 | [
"MIT"
] | null | null | null | ---
id: 179
title: R Madhavan Hairstyles
date: 2019-11-29T02:00:05+00:00
author: masje
layout: post
guid: http://example.com/?p=179
permalink: /2019/11/29/r-madhavan-hairstyles/
categories:
- Uncategorized
tags:
- r madhavan hairstyles
---
[
<img class="img-fluid" src="https://i0.wp.com/bollywoodbio.se/wp-content/uploads/2019/07/r-madhavan-to-star-alongside-debutante-khushali-kumar-in-dahi-cheeni-620x570.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan To Star Alongside Debutante Khushali Kumar In Dahi" />](https://bollywoodbio.se/wp-content/uploads/2019/07/r-madhavan-to-star-alongside-debutante-khushali-kumar-in-dahi-cheeni-620x570.jpg)
R Madhavan To Star Alongside Debutante Khushali Kumar In Dahi
<img src="https://i0.wp.com/images.indianexpress.com/2015/10/madhavan-759.jpg" width="100%" align="left" style="margin-right: 8px;margin-bottom: 8px;" />
[
<img class="img-fluid" src="https://i0.wp.com/lookaside.fbsbx.com/lookaside/crawler/media/?media_id=189851791186996" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan S Fans Club Home Facebook" />](https://lookaside.fbsbx.com/lookaside/crawler/media/?media_id=189851791186996)
R Madhavan S Fans Club Home Facebook
[
<img class="img-fluid" src="https://i0.wp.com/www.filmibeat.com/img/2017/06/rmadhavanbirthdayspecial-01-1496308176.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan Birthday Special 5 Movies That Gave Him A Dedicated" />](https://www.filmibeat.com/img/2017/06/rmadhavanbirthdayspecial-01-1496308176.jpg)
R Madhavan Birthday Special 5 Movies That Gave Him A Dedicated
[
<img class="img-fluid" src="https://i0.wp.com/www.veethi.com/images/people/fullsize/R._Madhavan_20160113060139.jpeg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan Hairstyle Veethi" />](http://www.veethi.com/images/people/fullsize/R._Madhavan_20160113060139.jpeg)
R Madhavan Hairstyle Veethi
[
<img class="img-fluid" src="https://i0.wp.com/www.petaindia.com/wp-content/uploads/2014/03/MadhavansPhotoCreditAtulKasbekar.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan Peta S Person Of The Year Blog Peta India" />](https://www.petaindia.com/wp-content/uploads/2014/03/MadhavansPhotoCreditAtulKasbekar.jpg)
R Madhavan Peta S Person Of The Year Blog Peta India
[
<img class="img-fluid" src="https://i0.wp.com/english.cdn.zeenews.com/sites/default/files/2017/02/07/569102-maddy-twitter.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan On Board Chandamama Door Ke Starring Sushant Singh" />](https://english.cdn.zeenews.com/sites/default/files/2017/02/07/569102-maddy-twitter.jpg)
R Madhavan On Board Chandamama Door Ke Starring Sushant Singh
[
<img class="img-fluid" src="https://i0.wp.com/assets.charmboard.com/images/w_1280,h_720/x_455,y_131,w_815,h_461,c_crop,f_auto,q_auto,e_sharpen/h_541/im/lc/340194/r-madhavan-in-yaanji-vikram-vedha-2017.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan Biography Age Wiki Place Of Birth Height Quotes" />](https://assets.charmboard.com/images/w_1280,h_720/x_455,y_131,w_815,h_461,c_crop,f_auto,q_auto,e_sharpen/h_541/im/lc/340194/r-madhavan-in-yaanji-vikram-vedha-2017.jpg)
R Madhavan Biography Age Wiki Place Of Birth Height Quotes
[
<img class="img-fluid" src="https://i0.wp.com/media.newstracklive.com/uploads/entertainment/bollywood-news/Aug/27/big_thumb/aish-warya-madhavan-pics_59a28d73498b6.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Aishwarya Rai Bachchan Upsets Over R Madhavan S Role Opposite Her" />](https://media.newstracklive.com/uploads/entertainment/bollywood-news/Aug/27/big_thumb/aish-warya-madhavan-pics_59a28d73498b6.jpg)
Aishwarya Rai Bachchan Upsets Over R Madhavan S Role Opposite Her
[
<img class="img-fluid" src="https://i0.wp.com/www.pinkvilla.com/files/madsri_m.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Maara R Madhavan Kick Starts The Shoot Of The Film Alongside" />](https://www.pinkvilla.com/files/madsri_m.jpg)
Maara R Madhavan Kick Starts The Shoot Of The Film Alongside
[
<img class="img-fluid" src="https://i0.wp.com/akm-img-a-in.tosshub.com/indiatoday/images/story/201511/madhavan-story_647_112115061023.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Madhavan S Irudhi Suttru To Hit The Screens On January 29 Movies" />](https://akm-img-a-in.tosshub.com/indiatoday/images/story/201511/madhavan-story_647_112115061023.jpg)
Madhavan S Irudhi Suttru To Hit The Screens On January 29 Movies
[
<img class="img-fluid" src="https://i0.wp.com/img.republicworld.com/republic-prod/stories/promolarge/xxhdpi/cusfpttntbjnuwun_1590072256.jpeg?tr=w-812,h-464" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Yexjqngyog7ycm" />](https://img.republicworld.com/republic-prod/stories/promolarge/xxhdpi/cusfpttntbjnuwun_1590072256.jpeg?tr=w-812,h-464)
Yexjqngyog7ycm
[
<img class="img-fluid" src="https://i0.wp.com/assets.charmboard.com/images/w_633,h_406/x_0,y_24,w_633,h_358,c_crop,q_auto,f_auto/h_541/im/lc/120591/r-madhavan-in-jab-main-tumhare-saath-hoon-jodi-breakers-2012.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan Biography Age Wiki Place Of Birth Height Quotes" />](https://assets.charmboard.com/images/w_633,h_406/x_0,y_24,w_633,h_358,c_crop,q_auto,f_auto/h_541/im/lc/120591/r-madhavan-in-jab-main-tumhare-saath-hoon-jodi-breakers-2012.jpg)
R Madhavan Biography Age Wiki Place Of Birth Height Quotes
[
<img class="img-fluid" src="https://i0.wp.com/images.indianexpress.com/2015/10/madhavan-759.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="My Respect For Women Has Gone Up Tremendously After Growing My" />](https://images.indianexpress.com/2015/10/madhavan-759.jpg)
My Respect For Women Has Gone Up Tremendously After Growing My
[
<img class="img-fluid" src="https://i0.wp.com/i.pinimg.com/originals/e8/8d/25/e88d25ae620e2bc88d6d83491afbf64a.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="16 Pictures Of R Madhavan That Will Totally Justify Your" />](https://i.pinimg.com/originals/e8/8d/25/e88d25ae620e2bc88d6d83491afbf64a.jpg)
16 Pictures Of R Madhavan That Will Totally Justify Your
[
<img class="img-fluid" src="https://i0.wp.com/www.reportdoor.com/wp-content/uploads/2020/05/RHTDM-Reunion-Dia-Mirza-and-R-Madhavan-Collaborate-For-a.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Melgxx7izq Bwm" />](https://www.reportdoor.com/wp-content/uploads/2020/05/RHTDM-Reunion-Dia-Mirza-and-R-Madhavan-Collaborate-For-a.jpg)
Melgxx7izq Bwm
[
<img class="img-fluid" src="https://i0.wp.com/img-s-msn-com.akamaized.net/tenant/amp/entityid/BB12dKPi.img?h=300&w=300&m=6&q=60&o=f&l=f&x=521&y=764" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="R Madhavan Gets Nostalgic About His 90s Tv Show Ghar Jamai Says" />](https://img-s-msn-com.akamaized.net/tenant/amp/entityid/BB12dKPi.img?h=300&w=300&m=6&q=60&o=f&l=f&x=521&y=764)
R Madhavan Gets Nostalgic About His 90s Tv Show Ghar Jamai Says
[
<img class="img-fluid" src="https://i0.wp.com/varnam.my/wp-content/uploads/2018/03/mad.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Madhavan I M Too Old To Do Movies Like Alaipayuthey Varnam My" />](https://varnam.my/wp-content/uploads/2018/03/mad.jpg)
Madhavan I M Too Old To Do Movies Like Alaipayuthey Varnam My
[
<img class="img-fluid" src="https://i0.wp.com/cdn.dnaindia.com/sites/default/files/styles/full/public/2016/01/20/416423-madhavan.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="I Went Through The Lowest Point In My Life For Saala Khadoos R" />](https://cdn.dnaindia.com/sites/default/files/styles/full/public/2016/01/20/416423-madhavan.jpg)
I Went Through The Lowest Point In My Life For Saala Khadoos R
[
<img class="img-fluid" src="https://i0.wp.com/i.pinimg.com/236x/3f/6e/f6/3f6ef636f1fc056bcd578c6c1e146ee9--tamil-movies-online-crushes.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="7 Best R Madhavan Images R Madhavan Madhavan Actor Man Crush" />](https://i.pinimg.com/236x/3f/6e/f6/3f6ef636f1fc056bcd578c6c1e146ee9--tamil-movies-online-crushes.jpg)
7 Best R Madhavan Images R Madhavan Madhavan Actor Man Crush
[
<img class="img-fluid" src="https://i0.wp.com/assets.charmboard.com/images/w_1280,h_720/x_0,y_-1,w_1279,h_722,c_crop,f_auto,q_auto/h_541/im/lc/706431/r-madhavan-in-smile-with-the-amazon-prime-family-amazon-prime-video-2018.jpg" width="100%" onerror="this.onerror=null;this.src='https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQh_l3eQ5xwiPy07kGEXjmjgmBKBRB7H2mRxCGhv1tFWg5c_mWT';" alt="Get R Madhavan Hair Hairstyle In Amazon Prime Video Smile With" />](https://assets.charmboard.com/images/w_1280,h_720/x_0,y_-1,w_1279,h_722,c_crop,f_auto,q_auto/h_541/im/lc/706431/r-madhavan-in-smile-with-the-amazon-prime-family-amazon-prime-video-2018.jpg)
Get R Madhavan Hair Hairstyle In Amazon Prime Video Smile With | 115.15625 | 650 | 0.786341 | yue_Hant | 0.210941 |
8aea3abc9408bb4efb440639bd8f36b27e3ee62f | 1,614 | md | Markdown | hugo/content/zapisky/zapisek-ze-dne-20-04-2003.md | jan-martinek/me | bcf7cc4fb787e27823d66c5982af13266ffdef63 | [
"MIT"
] | null | null | null | hugo/content/zapisky/zapisek-ze-dne-20-04-2003.md | jan-martinek/me | bcf7cc4fb787e27823d66c5982af13266ffdef63 | [
"MIT"
] | null | null | null | hugo/content/zapisky/zapisek-ze-dne-20-04-2003.md | jan-martinek/me | bcf7cc4fb787e27823d66c5982af13266ffdef63 | [
"MIT"
] | null | null | null | ---
title: Entry from April 20, 2003
date: '2003-04-20 17:00:00 +0200'
date_gmt: '2003-04-20 15:00:00 +0200'
category: uncategorized
tags: []
comments: []
---
<p>So, <span style="font-weight:bold">Easter</span> is here...
The time when sadists get to indulge their urges to the full. I'm not much of a
sadist myself, so at our place Easter takes the form of visits to grandmothers who,
after a tickle with a juniper switch (or an imitation of one), present me with
eggs, chocolate bunnies and the like. Last year I experimentally held an electronic
Easter and managed to carol my way to one entirely analog chocolate egg. It is
probably the first egg since first grade that I did not get from one of my
grandmothers or other female relatives :). This year I have somehow lost my faith
in modern electronics, so I will most likely end up the same way as in previous
years...</p>
<p>I don't much like holidays, because I spend most of them at home. Since the
weather has got fairly decent, I at least properly burned off some energy on my
bike and rode 130 km over two days. For me, at the end of April, that is quite an
effort... The nicest thing of all is still the half hour of sitting by the gravel
pit beyond Hrozenkov, watching the ripples... Maybe even better than a roaring
river...</p>
<p>btw: if you have never seen a single Japanese ANIMATED film (don't let Pokémon
fool you), go and see <span style="font-weight:bold">Cesta do fantazie</span>
(Spirited Away); admittedly I haven't seen it yet, but it is amazing... (the film
won, for example, the 2002 Oscar for best animated feature and plenty of other
awards around the world, and became the most successful film of all time in Japan;
rating in Cinema magazine: Fuka: 100%) I have heard part of the soundtrack, and it
is wonderful...</p>
| 53.8 | 84 | 0.778191 | ces_Latn | 1.000006 |
8aeaf9c86e3cf5ba6cd5b1a3555d11b4501a5de1 | 6,686 | md | Markdown | _posts/2017-02-27-edge-lengths.md | trashbirdecology/luisdva.github.io | bbf52b4b464f55ac3a96c2dacf4ef329d027595f | [
"MIT"
] | 2 | 2020-02-10T18:41:28.000Z | 2020-04-02T03:00:13.000Z | _posts/2017-02-27-edge-lengths.md | trashbirdecology/luisdva.github.io | bbf52b4b464f55ac3a96c2dacf4ef329d027595f | [
"MIT"
] | 1 | 2021-12-13T15:48:53.000Z | 2021-12-13T15:48:53.000Z | _posts/2017-02-27-edge-lengths.md | trashbirdecology/luisdva.github.io | bbf52b4b464f55ac3a96c2dacf4ef329d027595f | [
"MIT"
] | 4 | 2020-10-14T20:09:27.000Z | 2021-12-13T15:33:17.000Z | ---
title: "Making sense of trees"
excerpt: Extracting edge lengths from R phylo objects.
category: rstats
tags:
- Jeff Hanson
- ape
- Paradis
header:
image: /assets/images/featureTrees.jpg
---
In the words of [Nick Matzke]( https://twitter.com/NickJMatzke):
> R's tree structure is pretty non-intuitive, compared to the "a tree is a collection of node objects" structure that is taught in most phylogenetics courses and used in e.g. C++ software. It's done this way because R likes lists, and doesn't like objects.
The structure of phylogenetic trees in R (namely [APE’s](https://academic.oup.com/bioinformatics/article/20/2/289/204981/APE-Analyses-of-Phylogenetics-and-Evolution-in-R) phylo objects) can be confusing because much of the information needed to describe how different nodes and branches make up a tree is implicit.
I learned this the hard way during the early days of my PhD research. My thesis work revolved around phylogenetic lineage ages, so one of my very first tasks after finding a suitable tree was to get a table with species and their ‘edge lengths’ as a way to (more or less) represent their evolutionary ages. At this time I was very new to R and I lost months trying to figure this out on my own. After going over the APE book I slowly realized that lineage age information is contained within the phylo objects and that it can be extracted, but only if you ask nicely.
Phylo objects have a vector of edge lengths for every node, but I was unable to figure out how to get a list of tips and internal nodes that would match up with the vector of edge lengths. After much struggle, I figured out that the _branching.times()_ function computes the distance from each node to the tips, producing a named vector that corresponds to the nodes in the tree. I also learned that the _mrca()_ function produces a matrix of node numbers corresponding to the most recent common ancestors between pairs of tips or nodes. The next challenge would be to put the two together.
I’m writing this brief post because I found an R script from early 2012 in a random USB stick, in which coding superstar [Jeff Hanson]( https://twitter.com/jeff_o_hanson) walked me through indexing, building loops, and functions in order to extract the species lineage ages for any phylogeny object.
The function below has a couple of loops that iterate through the mrca matrix and the branching times vector to produce a data frame of edge lengths for all the tree tips. The for loops make for slow processing if we have large trees but I was patient and that exact function created the main dataset that I analysed for a large part of my thesis. If you’re interested have a look at Jeff’s crisp programming to see how clever he was when coming up with the loops.
{% highlight r %}
# extracting edge lengths from a phylo object
# define function
# note that the function requires ape to be installed
phylLages <- function(inputTree){
require(ape)
# numeric vector with the branching times for each tip
apedates<-branching.times(inputTree)
# matrix of tips and most recent common ancestors
mrca.matrix<-mrca(inputTree)
### preliminary processing
Species = c()
Node_date = c()
### main processing
col_counter = 0
for (i in colnames(mrca.matrix)) {
# keep track of which column we're in
col_counter = col_counter + 1
# print message
cat(paste("Starting species ", as.character(col_counter), " out of ", as.character(length(colnames(mrca.matrix))), "\n", sep=""))
# add species name to export vector
Species = append(Species, i)
# extract a vector of node numbers for the species we're up to
currNodeVec = mrca.matrix[,col_counter]
# working out what the node dates are for each node for the species we're working with
currNodeDates = c()
row_counter = 0
for (currNodeNum in currNodeVec) {
row_counter = row_counter + 1
if (row_counter != col_counter) {
currNodeDates = append(currNodeDates, apedates[as.character(currNodeNum)])
}
}
# find out what the minimum number is
minDate = min(currNodeDates)
Node_date = append(Node_date, minDate)
}
exportFrame = data.frame(Species,Node_date)
cat("DONE!\n")
return(exportFrame)
}
{% endhighlight %}
A few years later I learned that the [BioGeoBEARS](http://phylo.wikidot.com/biogeobears) R package includes the _prt()_ function, which Nick Matzke wrote specifically to print the content of a tree into a tabular format, making all the implicit information explicit. This function is faster and provides much more information, and I was very relieved to see that the edge lengths it provides are the same Jeff and I obtained with the custom function.
Let's have a look. Both functions keep the tip order, so comparing the results is easy. This example uses the built-in tree of bird families (Sibley and Ahlquist 1990) from the _ape_ package.
<figure>
<a href="/assets/images/birdFams.png"><img src="/assets/images/birdFams.png"></a>
<figcaption>plotted with ggtree</figcaption>
</figure>
{% highlight r %}
# comparing the edge lengths with BioGeoBEARS::prt
# load libraries
library(ape)
library(dplyr)
library(stringr)
library(BioGeoBEARS)
# phylogeny of bird families (comes with ape)
data("bird.families")
# get a data frame of edge lengths using the Jeff Hanson & Luis Verde function
jhEdges <- phylLages(bird.families)
# get a data frame of edge lengths using prt
birdPrintout <- prt(bird.families)
## extract the data for the tips only
# note: the tips in this particular tree are families not species
birdPrintout_Tips <- birdPrintout %>% filter(str_detect(label,"Node")==FALSE) %>%
select(Species=label,Node_date=edge.length)
# quick comparison
head(jhEdges) == head(birdPrintout_Tips)
{% endhighlight %}
Now we can take a side-by-side glimpse of the two data frames, and time how long the two functions take to process the data.
{% highlight text %}
> bind_cols(jhEdges,birdPrintout_Tips) %>% head
Species Node_date Species Node_date
1 Struthionidae 17.1 Struthionidae 17.1
2 Rheidae 17.1 Rheidae 17.1
3 Casuariidae 9.5 Casuariidae 9.5
4 Apterygidae 9.5 Apterygidae 9.5
5 Tinamidae 21.8 Tinamidae 21.8
6 Cracidae 19.8 Cracidae 19.8
{% endhighlight %}
{% highlight r %}
system.time(phylLages(bird.families))
system.time(prt(bird.families))
{% endhighlight %}
_prt_ is about 4.5x faster than the loops, and this becomes relevant as the number of tips increases, or in the case of multiPhylo objects with varying edge lengths, so I recommend using it for any work on edge lengths.
| 48.449275 | 590 | 0.746036 | eng_Latn | 0.995208 |
8aeb23a609eeb2983744021b162bc523c82a7ee8 | 7,075 | md | Markdown | desktop-src/medfound/media-foundation-programming--essential-concepts.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 552 | 2019-08-20T00:08:40.000Z | 2022-03-30T18:25:35.000Z | desktop-src/medfound/media-foundation-programming--essential-concepts.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,143 | 2019-08-21T20:17:47.000Z | 2022-03-31T20:24:39.000Z | desktop-src/medfound/media-foundation-programming--essential-concepts.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,287 | 2019-08-20T05:37:48.000Z | 2022-03-31T20:22:06.000Z | ---
description: If you are new to digital media, this topic introduces some concepts that you will need to understand before writing a Media Foundation application.
ms.assetid: d76d655e-23f3-407c-97a1-be015b0de37d
title: 'Media Foundation: Essential Concepts'
ms.topic: article
ms.date: 05/31/2018
---
# Media Foundation: Essential Concepts
If you are new to digital media, this topic introduces some concepts that you will need to understand before writing a Media Foundation application.
- [Streams](#streams)
- [Compression](#compression)
- [Media Containers](#media-containers)
- [Formats](#formats)
- [Related topics](#related-topics)
## Streams
A *stream* is a sequence of media data with a uniform type. The most common types are audio and video, but a stream can contain almost any kind of data, including text, script commands, and still images. The term *stream* in this documentation does not imply delivery over a network. A media file intended for local playback also contains streams.
Usually, a media file contains either a single audio stream, or exactly one video stream and one audio stream. However, a media file might contain several streams of the same type. For example, a video file might contain audio streams in several different languages. At run time, the application would select which stream to use.
## Compression
*Compression* refers to any process that reduces the size of a data stream by removing redundant information. Compression algorithms fall into two broad categories:
- *Lossless* compression. Using a lossless algorithm, the reconstructed data is identical to the original.
- *Lossy* compression. Using a lossy algorithm, the reconstructed data is an approximation of the original, but is not an exact match.
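The lossless case is easy to check empirically with any general-purpose lossless codec. The sketch below uses DEFLATE via Python's `zlib` purely as an illustration (real media codecs such as FLAC are far more specialized, but the identity guarantee is the same): the round trip reproduces the input byte for byte.

```python
import zlib

# Toy "audio" stream: one second of a repetitive 8-bit ramp at 8 kHz.
original = bytes(i % 256 for i in range(8000))

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

assert restored == original             # lossless: exact reconstruction
assert len(compressed) < len(original)  # redundancy removed
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```

A lossy codec would fail the first assertion by design, trading exactness for a smaller result.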
In most other domains, lossy compression is not acceptable. (Imagine getting back an "approximation" of a spreadsheet!) But lossy compression schemes are well-suited to audio and video, for a couple of reasons.
The first reason has to do with the physics of human perception. When we listen to a complex sound, like a music recording, some of the information contained in that sound is not perceptible to the ear. With the help of signal processing theory, it is possible to analyze and separate the frequencies that cannot be perceived. These frequencies can be removed with no perceptual effect. Although the reconstructed audio will not match the original exactly, it will *sound* the same to the listener. Similar principles apply to video.
Second, some degradation in sound or image quality may be acceptable, depending on the intended purpose. In telephony, for example, audio is often highly compressed. The result is good enough for a phone conversation—but you wouldn't want to listen to a symphony orchestra over a telephone.
Compression is also called *encoding*, and a device that encodes is called an *encoder*. The reverse process is *decoding*, and the device is naturally called a *decoder*. The general term for both encoders and decoders is *codec*. Codecs can be implemented in hardware or software.
Compression technology has changed rapidly since the advent of digital media, and a large number of compression schemes are in use today. This fact is one of the main challenges for digital media programming.
## Media Containers
It is rare to store a raw audio or video stream as a computer file, or to send one directly over the network. For one thing, it would be impossible to decode such a stream, without knowing in advance which codec to use. Therefore, media files usually contain at least some of the following elements:
- File headers that describe the number of streams, the format of each stream, and so on.
- An index that enables random access to the content.
- Metadata that describes the content (for example, the artist or title).
- Packet headers, to enable network transmission or random access.
This documentation uses the term *container* to describe the entire package of streams, headers, indexes, metadata, and so forth. The reason for using the term *container* rather than *file* is that some container formats are designed for live broadcast. An application could generate the container in real time, never storing it to a file.
An early example of a media container is the AVI file format. Other examples include MP4 and Advanced Systems Format (ASF). Containers can be identified by file name extension (for example, .mp4) or by MIME type.
The following diagram shows a typical structure for a media container. The diagram does not represent any specific format; the details of each format vary widely.

Notice that the structure shown in the diagram is hierarchical, with header information appearing at the start of the container. This structure is typical of many (but not all) container formats. Also notice that the data section contains interleaved audio and video packets. This type of interleaving is common in media containers.
The term *multiplexing* refers to the process of packetizing the audio and video streams and interleaving the packets into the container. The reverse process, reassembling the streams from the packetized data, is called *demultiplexing*.
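A toy container makes the packetizing and interleaving idea concrete. The 5-byte packet header below (one stream-id byte plus a little-endian 4-byte payload length) is entirely made up for illustration; real formats such as ASF or MP4 are far more elaborate, but multiplexing and demultiplexing follow the same pattern:

```python
import struct

HEADER = struct.Struct("<BI")  # stream id (1 byte) + payload length (4 bytes)

def mux(packets):
    """Interleave (stream_id, payload) packets into a single byte stream."""
    out = bytearray()
    for stream_id, payload in packets:
        out += HEADER.pack(stream_id, len(payload))
        out += payload
    return bytes(out)

def demux(container):
    """Reassemble per-stream packet lists from the interleaved container."""
    streams, offset = {}, 0
    while offset < len(container):
        stream_id, length = HEADER.unpack_from(container, offset)
        offset += HEADER.size
        streams.setdefault(stream_id, []).append(container[offset:offset + length])
        offset += length
    return streams

AUDIO, VIDEO = 0, 1
container = mux([(VIDEO, b"frame-0"), (AUDIO, b"chunk-0"),
                 (VIDEO, b"frame-1"), (AUDIO, b"chunk-1")])
streams = demux(container)
print(streams[AUDIO])  # [b'chunk-0', b'chunk-1']
```

Note that `demux` cannot do anything sensible without knowing the header layout, which is exactly why parsing the container is a separate first step from decoding the streams.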
## Formats
In digital media, the term *format* is ambiguous. A format can refer to the type of *encoding*, such as H.264 video, or the *container*, such as MP4. This distinction is often confusing for ordinary users. The names given to media formats do not always help. For example, *MP3* refers both to an encoding format (MPEG-1 Audio Layer 3) and a file format.
The distinction is important, however, because reading a media file actually involves two stages:
1. First, the container must be parsed. In most cases, the number of streams and the format of each stream cannot be known until this step is complete.
2. Next, if the streams are compressed, they must be decoded using the appropriate decoders.
This fact leads quite naturally to a software design where separate components are used to parse containers and decode streams. Further, this approach lends itself to a plug-in model, so that third parties can provide their own parsers and codecs. On Windows, the Component Object Model (COM) provides a standard way to separate an API from its implementation, which is a requirement for any plug-in model. For this reason (among others), Media Foundation uses COM interfaces.
The following diagram shows the components used to read a media file:

Writing a media file also requires two steps:
1. Encoding the uncompressed audio/video data.
2. Putting the compressed data into a particular container format.
The following diagram shows the components used to write a media file:

## Related topics
<dl> <dt>
[Media Foundation Programming Guide](media-foundation-programming-guide.md)
</dt> </dl>
| 70.75 | 533 | 0.790106 | eng_Latn | 0.999733 |
8aebfbb46cb40f1e34ad6e08498be72fa4036d74 | 1,300 | md | Markdown | readme.md | andit-team/flightyIT | f0b81d2982b685fa6244cae9c42a806e4d2feaba | [
"MIT"
] | null | null | null | readme.md | andit-team/flightyIT | f0b81d2982b685fa6244cae9c42a806e4d2feaba | [
"MIT"
] | null | null | null | readme.md | andit-team/flightyIT | f0b81d2982b685fa6244cae9c42a806e4d2feaba | [
"MIT"
] | null | null | null | <p align="center"><img src="https://myflighty.com/images/logo-111x33.png" width="400"></p>
<p align="center">
<a href="#"><img src="https://travis-ci.org/laravel/framework.svg" alt="Build Status"></a>
<a href="#"><img src="https://poser.pugx.org/laravel/framework/d/total.svg" alt="Total Downloads"></a>
<a href="#"><img src="https://poser.pugx.org/laravel/framework/v/stable.svg" alt="Latest Stable Version"></a>
<a href="#"><img src="https://poser.pugx.org/laravel/framework/license.svg" alt="License"></a>
</p>
## flightyIT
FlightyIT is a web application with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. FlightyIT takes the pain out of development by easing common tasks used in many web projects.
FlightyIT is accessible, powerful, and provides the tools required for large, robust applications.
## Contributing
Only for AndIT Team
## Security Vulnerabilities
If you discover a security vulnerability within flightyIT, please send an e-mail to AND IT via [andit.andimpex@gmail.com](mailto:andit.andimpex@gmail.com). All security vulnerabilities will be promptly addressed.
## License
The FlightyIT software is proprietary; it is not open-source software licensed under the [MIT license](https://opensource.org/licenses/MIT).
| 46.428571 | 256 | 0.753077 | eng_Latn | 0.776346 |
8aec28e89c44f81035146e3780f4ed7df272ff2d | 5,007 | md | Markdown | README.md | textcreationpartnership/A94257 | bfa9b56dae43cf08bc1d994248ba1f34a9345bc8 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/A94257 | bfa9b56dae43cf08bc1d994248ba1f34a9345bc8 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/A94257 | bfa9b56dae43cf08bc1d994248ba1f34a9345bc8 | [
"CC0-1.0"
] | null | null | null | #A catalogue of all the cheifest rarities in the publick theater and Anatomie-Hall of the University of Leyden which are so set in order that all may easily bee found in their places. Sic erimus cunĉti postquam nos auferet oreus.#
A catalogue of all the cheifest rarities in the publick theater and Anatomie-Hall of the University of Leyden which are so set in order that all may easily bee found in their places. Sic erimus cunĉti postquam nos auferet oreus.
##General Summary##
**Links**
[TCP catalogue](http://www.ota.ox.ac.uk/tcp/) •
[HTML](http://tei.it.ox.ac.uk/tcp/Texts-HTML/free/A94/A94257.html) •
[EPUB](http://tei.it.ox.ac.uk/tcp/Texts-EPUB/free/A94/A94257.epub) •
[Page images (Historical Texts)](https://historicaltexts.jisc.ac.uk/eebo-99899304e)
**Availability**
To the extent possible under law, the Text Creation Partnership has waived all copyright and related or neighboring rights to this keyboarded and encoded edition of the work described above, according to the terms of the CC0 1.0 Public Domain Dedication (http://creativecommons.org/publicdomain/zero/1.0/). This waiver does not extend to any page images or other supplementary files associated with this work, which may be protected by copyright or other license restrictions. Please go to https://www.textcreationpartnership.org/ for more information about the project.
**Major revisions**
1. __2010-03__ __TCP__ *Assigned for keying and markup*
1. __2010-03__ __SPi Global__ *Keyed and coded from ProQuest page images*
1. __2010-04__ __Kayla Ondracek__ *Sampled and proofread*
1. __2010-04__ __Kayla Ondracek__ *Text and markup reviewed and edited*
1. __2011-06__ __pfs__ *Batch review (QC) and XML conversion*
##Content Summary##
#####Front#####
A CATALOGUE Of all the cheifeſt RARITIES In the Publick THEATER and ANATOMIE-HALL Of the Univerſity
1. JACOBUS VOORN Lecturis valere & gaudere.
1. In Theatrum ANATOMICUM, LUGD. BATAV.
#####Body#####
1. THESE MAY BE SEEN IN THE ENTRANCE.
1. IN THEATRUM ANATOMICUM, L. ƲGD. BATAV.
**Types of content**
* There are 24 **verse** lines!
* Oh, Mr. Jourdain, there is **prose** in there!
There are 13 **omitted** fragments!
@__reason__ (13) : duplicate (1), illegible (12) • @__extent__ (13) : 1 page (1), 1 letter (8), 1+ letters (3), 2 letters (1) • @__resp__ (12) : #UOM (12)
**Character listing**
|Text|string(s)|codepoint(s)|
|---|---|---|
|Latin-1 Supplement|àùûâ|224 249 251 226|
|Latin Extended-A|ĉſ|265 383|
|Latin Extended-B|Ʋ|434|
|Combining Diacritical Marks|̄|772|
|General Punctuation|•…|8226 8230|
|Superscripts and Subscripts|⁶|8310|
|CJKSymbolsandPunctuation|〈〉|12296 12297|
##Tag Usage Summary##
###Header Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__author__|3||
|2.|__availability__|1||
|3.|__biblFull__|1||
|4.|__change__|5||
|5.|__date__|8| @__when__ (1) : 2011-12 (1)|
|6.|__edition__|1||
|7.|__editionStmt__|1||
|8.|__editorialDecl__|1||
|9.|__encodingDesc__|1||
|10.|__extent__|2||
|11.|__fileDesc__|1||
|12.|__idno__|6| @__type__ (6) : DLPS (1), STC (2), EEBO-CITATION (1), PROQUEST (1), VID (1)|
|13.|__keywords__|1| @__scheme__ (1) : http://authorities.loc.gov/ (1)|
|14.|__label__|5||
|15.|__langUsage__|1||
|16.|__language__|1| @__ident__ (1) : eng (1)|
|17.|__listPrefixDef__|1||
|18.|__note__|8||
|19.|__notesStmt__|2||
|20.|__p__|11||
|21.|__prefixDef__|2| @__ident__ (2) : tcp (1), char (1) • @__matchPattern__ (2) : ([0-9\-]+):([0-9IVX]+) (1), (.+) (1) • @__replacementPattern__ (2) : http://eebo.chadwyck.com/downloadtiff?vid=$1&page=$2 (1), https://raw.githubusercontent.com/textcreationpartnership/Texts/master/tcpchars.xml#$1 (1)|
|22.|__profileDesc__|1||
|23.|__projectDesc__|1||
|24.|__pubPlace__|2||
|25.|__publicationStmt__|2||
|26.|__publisher__|2||
|27.|__ref__|1| @__target__ (1) : http://www.textcreationpartnership.org/docs/. (1)|
|28.|__revisionDesc__|1||
|29.|__seriesStmt__|1||
|30.|__sourceDesc__|1||
|31.|__term__|2||
|32.|__textClass__|1||
|33.|__title__|3||
|34.|__titleStmt__|2||
###Text Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__body__|1||
|2.|__closer__|3||
|3.|__desc__|13||
|4.|__div__|5| @__type__ (5) : title_page (1), to_the_reader (1), part (1), lists (1), poem (1)|
|5.|__front__|1||
|6.|__g__|14| @__ref__ (14) : char:EOLhyphen (12), char:cmbAbbrStroke (1), char:V (1)|
|7.|__gap__|13| @__reason__ (13) : duplicate (1), illegible (12) • @__extent__ (13) : 1 page (1), 1 letter (8), 1+ letters (3), 2 letters (1) • @__resp__ (12) : #UOM (12)|
|8.|__head__|18||
|9.|__hi__|135| @__rend__ (3) : sup (3)|
|10.|__item__|302||
|11.|__l__|24||
|12.|__list__|15||
|13.|__p__|4||
|14.|__pb__|15| @__facs__ (15) : tcp:152982:1 (1), tcp:152982:2 (2), tcp:152982:3 (2), tcp:152982:4 (2), tcp:152982:5 (2), tcp:152982:6 (2), tcp:152982:7 (2), tcp:152982:8 (2) • @__rendition__ (1) : simple:additions (1)|
|15.|__q__|1||
|16.|__salute__|1||
|17.|__seg__|1| @__rend__ (1) : decorInit (1)|
|18.|__signed__|1||
| 40.379032 | 570 | 0.689235 | eng_Latn | 0.349975 |
8aec2e87cf2a77aafaf25fb9d2b58423ab5385aa | 7,963 | md | Markdown | content/posts/livros-para-presentar-no-natal.md | danrocha/heloiche | 20b9dedb8f9ddad1f608569e381f20d6289955d2 | [
"MIT"
] | 1 | 2021-07-18T21:38:06.000Z | 2021-07-18T21:38:06.000Z | content/posts/livros-para-presentar-no-natal.md | danrocha/heloiche | 20b9dedb8f9ddad1f608569e381f20d6289955d2 | [
"MIT"
] | 4 | 2020-08-17T06:57:11.000Z | 2022-02-26T23:53:03.000Z | content/posts/livros-para-presentar-no-natal.md | danrocha/heloiche | 20b9dedb8f9ddad1f608569e381f20d6289955d2 | [
"MIT"
] | null | null | null | ---
title: "Dicas de livros para presentear neste natal" # required
slug: livros-para-presentar-no-natal
description: "Chegamos ao final de 2019! Ano difícil, mas com boas leituras, o que sempre nos ajuda a entender ou fugir ou viver outras realidades melhores ou piores."
date: 2019-12-25 09:00:00
author: Heloisa
tags: ['📚Heloiche Lê']
# cover: ./imgs/090220-heloiche-le.png # optional parallax post cover image
# fullscreen: false # optional - when `true`, makes parallax cover image take up full viewport height
excerpt: "Chegamos ao final de 2019! Ano difícil, mas com boas leituras, o que sempre nos ajuda a entender ou fugir ou viver outras realidades melhores ou piores."
---
import Book from '~/components/Book.vue'
We've reached the end of 2019!
A difficult year, but one with good reads, which always help us understand, escape, or live other realities, better or worse.
With the natural rush of December, I read less than I would have liked. Partly because, after so many years lived, year-ends become ever more melancholic, full of longing for times past, when family gatherings were so much larger and had so many fewer absences.
But it so happened that I read two autobiographies over these two weeks.
<book title="Prólogo, ato, epílogo: Memórias" author="Fernanda Montenegro" link="https://amzn.to/2LUZVfE">
<a href="https://www.amazon.com.br/Pr%C3%B3logo-ato-ep%C3%ADlogo-Fernanda-Montenegro/dp/8535932550/ref=as_li_ss_il?ie=UTF8&linkCode=li3&tag=heloiche-20&linkId=a02536a36b4e294dbe68d32d1d3fc0c8&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=8535932550&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
The first is by a woman I have admired for a long time: **_[Prólogo, ato, epílogo: Memórias](https://amzn.to/2LUZVfE)_**, by **Fernanda Montenegro**. A delight to read. I felt as if I were at a long afternoon coffee where times I also lived through were remembered, and much earlier ones became known to me. That is what I like about biographies. We learn a great deal from another person's view of the reality they actually lived, and, when it is contemporary to us, a new perspective is offered. So it was with Fernanda Montenegro, whom I came to admire even more on getting to know not the icon but a woman like so many others and, in so many ways, like me.
<book title="Formas de voltar para casa" author="Alejando Zambra" link="https://amzn.to/2PGWvOU">
<a href="https://www.amazon.com.br/Formas-Voltar-Para-Alejandro-Zambra/dp/8540506041/ref=as_li_ss_il?ie=UTF8&linkCode=li3&tag=heloiche-20&linkId=44e9f6178003908be171b1893d36101e&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=8540506041&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
Alejandro Zambra is a young and very talented author. A Chilean who grew up during the years of Pinochet's dictatorship. Here is a reality I did not live, but came to know from a distance. At the same time, leaving home and wanting to return to it as an eternal, deeply human desire; the realization that home as we once conceived it is no longer that idealized home. And our parents, with histories and convictions so often unknown to us, which must be respected even when we totally disagree with them.
---
That has everything to do with Christmas, doesn't it?
I always insist to my brother that we revisit our childhood places together. And he always resists. After reading this book, I understand his reason: not to break the spell that so brightens our childhood memories.
I have received many requests for recommendations of books to give for Christmas. It is hard to advise without knowing the recipients. But I hope I have helped, and chosen well.
Without further comment, here are the books I bought for grandchildren and dear friends. Some already read, others still on the wish list. We will return to them soon in future posts, after the necessary year-end break.
### For the children
<book title="Contos para garotos que sonham em mudar o mundo: 50 histórias inspiradoras de super-heróis de carne e osso" author="G.L. Marvel (Autor), Sandra Martha Dolinsky (Tradutor)" link="https://amzn.to/38CFbDj">
<a href="https://www.amazon.com.br/gp/product/8542214587/ref=as_li_ss_il?ie=UTF8&psc=1&linkCode=li3&tag=heloiche-20&linkId=9569060a84d329f6cbb3e309a0868b76&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=8542214587&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
<book title="Se os Tubarões Fossem Homens" author="Bertolt Brecht" link="https://amzn.to/36xBHjJ">
<a href="https://www.amazon.com.br/gp/product/859323402X/ref=as_li_ss_il?ie=UTF8&psc=1&linkCode=li3&tag=heloiche-20&linkId=7d4f7b4b4e44d75249ee82efeeea8257&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=859323402X&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
<book title="As Mil e Uma Noites" author="Ferreira Gullar" link="https://amzn.to/2PKIKij">
<a href="https://www.amazon.com.br/gp/product/8571061912/ref=as_li_ss_il?ie=UTF8&psc=1&linkCode=li3&tag=heloiche-20&linkId=acb23b5590306fbd3ba7b96556f8f42c&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=8571061912&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
### For the adults
<book title="Marrom e amarelo" author="Paulo Scott" link="https://amzn.to/35mNvFh">
<a href="https://www.amazon.com.br/Marrom-amarelo-Paulo-Scott/dp/855652091X/ref=as_li_ss_il?ie=UTF8&linkCode=li3&tag=heloiche-20&linkId=c6dfd45b02584b50936165f550751a41&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=855652091X&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
<book title="Torto arado" author="Itamar Vieira Junior" link="https://amzn.to/2LV1iel">
<a href="https://www.amazon.com.br/Torto-Arado-Em-Portugues-Brasil/dp/6580309318/ref=as_li_ss_il?ie=UTF8&linkCode=li3&tag=heloiche-20&linkId=88aa974228f18397d3285f44cb6c86db&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=6580309318&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
<book title="Os sete maridos de Evelyn Hugo" author="Taylor Jenkins Reid (Autor), Alexandre Boide (Tradutor)" link="https://amzn.to/2PHXwpU">
<a href="https://www.amazon.com.br/Os-sete-maridos-Evelyn-Hugo/dp/8584391509/ref=as_li_ss_il?ie=UTF8&linkCode=li3&tag=heloiche-20&linkId=c44b1d5560641fd89309fdeea0537586&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=8584391509&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
<book title="O acerto de contas de uma mãe" author="Sue Klebold (Autor), Ana Paula Doherty (Tradutor)" link="https://amzn.to/36CEfgt">
<a href="https://www.amazon.com.br/acerto-contas-uma-m%C3%A3e/dp/8576864568/ref=as_li_ss_il?ie=UTF8&linkCode=li3&tag=heloiche-20&linkId=a870203ce47abb1b4146982d0c3550f3&language=pt_BR" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=8576864568&Format=_SL250_&ID=AsinImage&MarketPlace=BR&ServiceVersion=20070822&WS=1&tag=heloiche-20&language=pt_BR" ></a>
</book>
---
A good end of year and a good start to the new one, everyone!
May we stay together in 2020, because reading is still the best thing that ever happens to us!
| 97.109756 | 634 | 0.784754 | por_Latn | 0.938061 |
8aed7049385b395b52e136911e587780f023b505 | 721 | md | Markdown | CONTRIBUTING.md | Cu3PO42/partial-array | 63ca517dc6ba828afb44f01ce84961fe581f7bf9 | [
"Apache-2.0",
"MIT"
] | null | null | null | CONTRIBUTING.md | Cu3PO42/partial-array | 63ca517dc6ba828afb44f01ce84961fe581f7bf9 | [
"Apache-2.0",
"MIT"
] | 9 | 2021-09-11T07:01:09.000Z | 2021-12-12T15:48:06.000Z | CONTRIBUTING.md | Cu3PO42/partial-array | 63ca517dc6ba828afb44f01ce84961fe581f7bf9 | [
"Apache-2.0",
"MIT"
] | 1 | 2021-09-26T06:53:22.000Z | 2021-09-26T06:53:22.000Z | # Contributing
🎉 First of all: thank you for taking the time to contribute to this project 🎉
The contributing guidelines are very simple:
- if you have a question or miss a feature simply open an issue describing, what you want to do and what doesn't work
- if you want to tackle some easy issues, have a look at the [good first issues] list
- if you're ready to implement changes yourself, simply fork the repository, make your changes, and open a pull request.
Note that code additions have to have the same license as this project.
Keep in mind to always use appropriate language.
[good first issues]: https://github.com/jfrimmel/partial-array/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22
| 48.066667 | 123 | 0.776699 | eng_Latn | 0.999317 |
8aed844451bb307c0fe58c5658627a41879873e8 | 333 | md | Markdown | src/components/aeModal/aeModal.md | etharner/aepp-components | d315ebbcad7f63e3a64a64194a3111ff6d0451d0 | [
"ISC"
] | 44 | 2017-10-22T10:31:20.000Z | 2022-01-14T09:09:24.000Z | src/components/aeModal/aeModal.md | etharner/aepp-components | d315ebbcad7f63e3a64a64194a3111ff6d0451d0 | [
"ISC"
] | 148 | 2017-10-23T17:07:29.000Z | 2021-09-21T18:07:24.000Z | src/components/aeModal/aeModal.md | etharner/aepp-components | d315ebbcad7f63e3a64a64194a3111ff6d0451d0 | [
"ISC"
] | 16 | 2017-11-12T16:57:11.000Z | 2022-03-06T07:02:16.000Z | ```js
new Vue({
data(){ return { modalVisible: false } },
template: `
<div>
<ae-button @click="modalVisible = true">Show modal</ae-button>
<ae-modal
v-if="modalVisible"
@close="modalVisible = false"
title="Modal title"
>
Modal content
</ae-modal>
</div>
`
})
```
| 18.5 | 68 | 0.51952 | kor_Hang | 0.11582 |
8aef67b729b8f10b5731b31628daff62aadbb4fe | 136 | md | Markdown | README.md | volr/compiler | 15d665a0d2a0c4f711064adff0e42c62c7964194 | [
"BSD-3-Clause"
] | null | null | null | README.md | volr/compiler | 15d665a0d2a0c4f711064adff0e42c62c7964194 | [
"BSD-3-Clause"
] | null | null | null | README.md | volr/compiler | 15d665a0d2a0c4f711064adff0e42c62c7964194 | [
"BSD-3-Clause"
] | null | null | null | [](https://travis-ci.org/volr/compiler)
# Volr language compiler
| 34 | 109 | 0.75 | yue_Hant | 0.302055 |
8aeff253ed67c860b56d6f3e556cc9c8300d3c55 | 782 | md | Markdown | packages/charts/README.md | mauroerta/ui5-webcomponents-react | 818773744f3c07fd9d53b143faf4a1e6b7a5c69b | [
"Apache-2.0"
] | 249 | 2019-06-28T10:42:37.000Z | 2022-03-17T06:34:17.000Z | packages/charts/README.md | mauroerta/ui5-webcomponents-react | 818773744f3c07fd9d53b143faf4a1e6b7a5c69b | [
"Apache-2.0"
] | 2,527 | 2019-06-17T06:59:53.000Z | 2022-03-31T15:24:15.000Z | packages/charts/README.md | mauroerta/ui5-webcomponents-react | 818773744f3c07fd9d53b143faf4a1e6b7a5c69b | [
"Apache-2.0"
] | 69 | 2019-06-19T10:26:17.000Z | 2022-03-01T10:47:45.000Z | # @ui5/webcomponents-react-charts
Chart library for ui5-webcomponents-react.
## Usage
### Installation
```bash
npm install @ui5/webcomponents-react-charts --save
```
### Documentation
You can find an interactive documentation in our [Storybook](https://sap.github.io/ui5-webcomponents-react/).
## Contribute
Please check our [Contribution Guidelines](https://github.com/SAP/ui5-webcomponents-react/blob/master/CONTRIBUTING.md).
## License
Please see our [LICENSE](https://github.com/SAP/ui5-webcomponents-react/blob/main/LICENSE) for copyright and license information.
Detailed information including third-party components and their licensing/copyright information is available via the [REUSE tool](https://api.reuse.software/info/github.com/SAP/ui5-webcomponents-react).
| 31.28 | 202 | 0.785166 | eng_Latn | 0.56916 |
8af08fbfb408f97aa7a76d23dea33b10763ada7a | 235 | md | Markdown | pull_request_template.md | Francesco149/protonfit | 28e3b49ce6c8cbb4ca14ed2e9a85a6d22d654301 | [
"Unlicense"
] | 59 | 2020-09-10T06:19:00.000Z | 2022-03-24T23:02:02.000Z | .github/pull_request_template.md | Francesco149/gopher | a1da48127b9eb2d60759e1d370450b0ee728c2de | [
"Unlicense"
] | 2 | 2021-05-10T01:36:55.000Z | 2021-05-24T03:24:53.000Z | .github/pull_request_template.md | Francesco149/gopher | a1da48127b9eb2d60759e1d370450b0ee728c2de | [
"Unlicense"
] | 6 | 2020-08-14T15:15:48.000Z | 2021-11-18T15:57:18.000Z | # Unlicensing your contributions
- [ ] If this is a non-trivial contribution (more than 15 lines of code changed), I will attach a contributor waiver as explained in
[the contributor guidelines](../blob/master/CONTRIBUTING.md) .
| 58.75 | 132 | 0.753191 | eng_Latn | 0.988164 |
8af12b4e5c7777dbd840e48d6f5ecf858d8c517b | 110 | md | Markdown | README.md | S-wkj/OS_file-management_MariaDB_control | a239e23da07a1a705d86ad78b7020c726774562a | [
"MIT"
] | null | null | null | README.md | S-wkj/OS_file-management_MariaDB_control | a239e23da07a1a705d86ad78b7020c726774562a | [
"MIT"
] | 1 | 2016-04-01T02:07:24.000Z | 2016-04-01T02:07:24.000Z | README.md | S-wkj/OS_file-management_MariaDB_control | a239e23da07a1a705d86ad78b7020c726774562a | [
"MIT"
] | null | null | null | # OS_file-management_MariaDB_control
operating system file management and MariaDB work together
- start work
| 22 | 58 | 0.845455 | eng_Latn | 0.972195 |
8af13cfc7e9dfb5655a8f79ce36d3bff3ac5dd4b | 1,251 | md | Markdown | basic-cpp/Graphs/SocialNetwork/README.md | gramai/school-related | 152d67b4e475c3ba7c9370d6de39b38fc453392d | [
"MIT"
] | null | null | null | basic-cpp/Graphs/SocialNetwork/README.md | gramai/school-related | 152d67b4e475c3ba7c9370d6de39b38fc453392d | [
"MIT"
] | null | null | null | basic-cpp/Graphs/SocialNetwork/README.md | gramai/school-related | 152d67b4e475c3ba7c9370d6de39b38fc453392d | [
"MIT"
] | null | null | null | # Social Network _Eng._
Let's take a non-oriented graph, that represents a social network.
A is friend with B if he
there is an edge between A and B (in this case
we say that the degree of friendship is 1). The
friends of my friends have the degree of friendship 2.
Considering a user, view all
his friends having the degree <= N (N is given).
### Example
Let the following graph be:
{(0,1);(2,3);(0,2);(0,6);(4,5);(7,5);(6,4)}
If the user is at node 0 and the maximum degree of friendship is 2 (N = 2)
the program will print:
- The user at node 0 has the following friends with max degree 2:
1 2 3 6 4
# Réseau Social _Fr._
Prenons un graphe non orienté, qui
représente un réseau social. Chaque sommet
représente un utilisateur. A est ami avec B s'il
existe une arête entre A et B (dans ce cas-là
nous disons que le degré d'amitié est 1). Les
amis de mes amis ont le degré d'amitié 2.
Tenant compte d'un utilisateur, affichez tous
ses amis ayant le degré <= N (N est donné).
### Example
Soit le graphe avec les arêtes:
{(0,1);(2,3);(0,2);(0,6);(4,5);(7,5);(6,4)}
Si l'utilisateur est au noeud 0 et le degré maximum d'amitié est 2 (N = 2), on affiche:
- L'utilisateur au noeud 0 a des amis avec le degré max 2 comme suit:
1 2 3 6 4 | 29.093023 | 87 | 0.707434 | fra_Latn | 0.485169 |
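A possible solution for the exercise above is a breadth-first search limited to depth N (the function name `friends_within` is my own, not part of any given starter code):

```python
from collections import deque

def friends_within(edges, start, max_degree):
    """Return all friends of `start` with friendship degree <= max_degree."""
    # Build an adjacency map for the undirected graph.
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    seen = {start}
    result = []
    queue = deque([(start, 0)])  # (node, degree of friendship)
    while queue:
        node, depth = queue.popleft()
        if depth == max_degree:
            continue  # do not expand beyond the requested degree
        for nxt in sorted(adj.get(node, ())):
            if nxt not in seen:
                seen.add(nxt)
                result.append(nxt)
                queue.append((nxt, depth + 1))
    return result

edges = [(0, 1), (2, 3), (0, 2), (0, 6), (4, 5), (7, 5), (6, 4)]
print(friends_within(edges, 0, 2))  # -> [1, 2, 6, 3, 4]
```

Run on the example graph, this yields the same set of friends as the expected output (1 2 3 6 4), in BFS order.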
8af144aafcd0ee2d468e6005bed8e0ccd0b6b3b3 | 3,494 | md | Markdown | docs/smoketest.md | mattray/knative-docs | 11883b667fafd64dd0908bc12a3d92dc88b948f2 | [
"Apache-2.0"
] | 3,383 | 2018-07-23T21:00:17.000Z | 2022-03-30T17:13:52.000Z | docs/smoketest.md | mattray/knative-docs | 11883b667fafd64dd0908bc12a3d92dc88b948f2 | [
"Apache-2.0"
] | 4,617 | 2018-07-23T21:55:06.000Z | 2022-03-31T21:52:36.000Z | docs/smoketest.md | mattray/knative-docs | 11883b667fafd64dd0908bc12a3d92dc88b948f2 | [
"Apache-2.0"
] | 1,240 | 2018-07-23T20:36:04.000Z | 2022-03-30T20:03:07.000Z | # Hidden smoketest page
<!-- is this page still needed? -->
Use this page to test your changes and ensure that there are not any issues,
unwanted behaviors, or regression that are caused by your changes.
This is a set of site elements that have causes issues in the past:
## Lists
- Top level:
1. A nested list item.
1. another level lower
1. Nested code sample: <br>Syntax: <code>{<code>{< readfile
file="../code-samples/community/serving/helloworld-java-quarkus/service.yaml"
code="true" lang="yaml" >}</code>}</code> <br>Example:
{{< readfile file="../code-samples/community/serving/helloworld-java-quarkus/service.yaml" code="true" lang="yaml" >}}
1. This should be the third bullet (3.).
1. More nested code: <br>Shortcode: <code>{<code>{< readfile
file="/code-samples/serving/hello-world/helloworld-go/Dockerfile" code="true"
lang="go" >}</code>}</code> <br>Example:
{{< readfile file="./code-samples/serving/hello-world/helloworld-go/Dockerfile" code="true" lang="go" >}}
1. Another nested ordered list item (2.)
## Code samples
The following use the
[`readfile` shortcode](https://github.com/knative/website/blob/main/layouts/shortcodes/readfile.md)
Shortcode: <code>{<code>{< readfile file="../hack/reference-docs-gen-config.json" code="true" lang="json" >}</code>}</code>
renders as:
{{< readfile file="../hack/reference-docs-gen-config.json" code="true" lang="json" >}}
Shortcode: <code>{<code>{< readfile file="./code-samples/serving/cloudevents/cloudevents-nodejs/service.yaml" code="true" lang="yaml" >}</code>}</code>
renders as:
{{< readfile file="./code-samples/serving/cloudevents/cloudevents-nodejs/service.yaml" code="true" lang="yaml" >}}
## Install version numbers and Clone branch commands
Examples of how the manual and dynamic version number or branch name can be
added in-line with the
[`version` shortcode](https://github.com/knative/website/blob/main/layouts/shortcodes/version.md)
(uses the define values from
[config/\_default/params.toml](https://github.com/knative/website/blob/main/config/_default/params.toml))
1. Shortcode: <code>{<code>{< version >}</code>}</code>
renders as: {{< version >}}
Example:
`kubectl apply version/{{< version >}}/is-the-latest/docs-version.yaml`
1. Shortcode: <code>{<code>{< version override="v0.2.2" >}</code>}</code>
renders as: {{< version override="v0.2.2" >}}
Example:
`kubectl apply the-version-override/{{< version override="v0.2.2" >}}/is-manually-specified.yaml`
1. Shortcode: <code>{<code>{< version patch=".20" >}</code>}</code>
renders as: {{< version patch=".20" >}}
Example:
`kubectl apply this-is-a-point-release/{{< version patch=".20" >}}/filename.yaml`
1. Shortcode: <code>{<code>{< branch >}</code>}</code>
   renders as: {{< branch >}}
Example:
`git clone -b "{{ branch }}" https://github.com/knative/docs knative-docs`
1. Shortcode: <code>{<code>{< branch override="release-0.NEXT" >}</code>}</code>
renders as: {{< branch override="release-0.NEXT" >}}
Example:
`git clone -b "{{< branch override="release-0.NEXT" >}}" https://github.com/knative/docs knative-docs`
## Tabs
How to include tabbed content in your page. Note that you can set a default tab.
{{< tab name="Regular example" >}}
This is a regular example tab.
{{< tab name="Include example" >}}
{{% readfile file="./code-samples/serving/multi-container/service.yaml" code="true" lang="yaml" %}}
| 39.704545 | 151 | 0.679164 | eng_Latn | 0.801968 |
8af1a841460d020ee043471f3cab1d9249f6a0ba | 1,853 | md | Markdown | _publishedPreprints/2020-07-01-preprintWhitley.md | paxcalpt/henriqueslab.github.io | d5f8fefbec1e1aab7581dabca2cc15cc7d03a671 | [
"MIT"
] | 1 | 2021-05-30T16:42:42.000Z | 2021-05-30T16:42:42.000Z | _publishedPreprints/2020-07-01-preprintWhitley.md | paxcalpt/henriqueslab.github.io | d5f8fefbec1e1aab7581dabca2cc15cc7d03a671 | [
"MIT"
] | null | null | null | _publishedPreprints/2020-07-01-preprintWhitley.md | paxcalpt/henriqueslab.github.io | d5f8fefbec1e1aab7581dabca2cc15cc7d03a671 | [
"MIT"
] | 15 | 2020-02-25T17:08:06.000Z | 2022-03-22T16:36:58.000Z | ---
title: "FtsZ treadmilling is essential for Z-ring condensation and septal constriction initiation in bacterial cell division"
collection: publications
date: 2020-07-02
venue: 'bioRxiv'
authors: 'Kevin D Whitley, Calum Jukes, Nicholas Tregidgo, Eleni Karinou, Pedro Almada, Ricardo Henriques, Cees Dekker, Séamus Holden'
paperurl: https://doi.org/10.1101/2020.07.01.182006
type: 'Preprint'
doi: 10.1101/2020.07.01.182006
preprint: 10.1101/2020.07.01.182006
theme: "microbiology, methods, software, hardware"
resources: "DeepAutoFocus"
---
<h2> Abstract </h2>
<p align= "justify">
Despite the central role of division in bacterial physiology, how division proteins work together as a nanoscale machine to divide the cell remains poorly understood. Cell division by cell wall synthesis proteins is guided by the cytoskeleton protein FtsZ, which assembles at mid-cell as a dense Z-ring formed of motile treadmilling filaments. However, although FtsZ treadmilling is essential for cell division, the function of FtsZ treadmilling motility remains unclear. Here, we systematically resolve the function of FtsZ treadmilling across each stage of division in the Gram-positive model organism Bacillus subtilis using a novel combination of nanofabrication, advanced microscopy, and microfluidics to measure the division-protein dynamics in live cells with ultrahigh sensitivity. We find that FtsZ treadmilling has two essential functions: mediating condensation of diffuse FtsZ filaments into a dense Z-ring, and initiating constriction by guiding septal cell wall synthesis. After constriction initiation, FtsZ treadmilling has a dispensable function in accelerating septal constriction rate. Our results show that FtsZ treadmilling is critical for assembling and initiating the bacterial cell division machine.
{% include paper-research-resources.html %}
| 92.65 | 1,223 | 0.816514 | eng_Latn | 0.985258 |
8af24e9e8951b6355550155d8a66d046df44daf0 | 1,497 | md | Markdown | vendor/src/github.com/yudai/golcs/README.md | ttvnp/bridge-server | e6f65f39c2f1aca7bca61b16c6fa364385729720 | [
"Apache-2.0"
] | 11 | 2015-12-05T19:31:50.000Z | 2021-08-12T23:46:47.000Z | vendor/src/github.com/yudai/golcs/README.md | ttvnp/bridge-server | e6f65f39c2f1aca7bca61b16c6fa364385729720 | [
"Apache-2.0"
] | 20 | 2017-04-06T22:40:25.000Z | 2020-04-24T04:48:04.000Z | vendor/src/github.com/yudai/golcs/README.md | ttvnp/bridge-server | e6f65f39c2f1aca7bca61b16c6fa364385729720 | [
"Apache-2.0"
] | 4 | 2016-05-10T03:44:57.000Z | 2021-08-12T23:48:49.000Z | # Go Longest Common Subsequence (LCS)
A package to calculate [LCS](http://en.wikipedia.org/wiki/Longest_common_subsequence_problem) of slices.
## Usage
```sh
go get github.com/yudai/golcs
```
```go
import " github.com/yudai/golcs"
left := []interface{}{1, 2, 5, 3, 1, 1, 5, 8, 3}
right := []interface{}{1, 2, 3, 3, 4, 4, 5, 1, 6}
lcs := golcs.New(left, right)
lcs.Values() // LCS values => []interface{}{1, 2, 5, 1}
lcs.IndexPairs() // Matched indices => [{Left: 0, Right: 0}, {Left: 1, Right: 1}, {Left: 2, Right: 6}, {Left: 4, Right: 7}]
lcs.Length() // Matched length => 4
lcs.Table() // Memo table
```
All the methods of `Lcs` cache their return values. For example, the memo table is calculated only once and reused when `Values()`, `Length()` and other methods are called.
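For reference, the memo table mentioned above is the classic LCS dynamic-programming table. A plain Python sketch of that algorithm (my own illustration, not this package's implementation) looks like this:

```python
def lcs_length(left, right):
    """Classic LCS dynamic program: table[i][j] is the LCS length
    of left[:i] and right[:j]."""
    table = [[0] * (len(right) + 1) for _ in range(len(left) + 1)]
    for i, l_item in enumerate(left, 1):
        for j, r_item in enumerate(right, 1):
            if l_item == r_item:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[len(left)][len(right)]

print(lcs_length([1, 2, 5, 3, 1, 1, 5, 8, 3],
                 [1, 2, 3, 3, 4, 4, 5, 1, 6]))  # -> 4, matching Length() above
```

Computing this table is the O(n*m) step, which is why the package computes it once and reuses it across method calls.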
## FAQ
### How can I give `[]byte` values to `Lcs()` as its arguments?
As `[]interface{}` is incompatible with `[]othertype` like `[]byte`, you need to create a `[]interface{}` slice and copy the values in your `[]byte` slice into it. Unfortunately, Go doesn't provide any mesure to cast a slice into `[]interface{}` with zero cost. Your copy costs O(n).
```go
leftBytes := []byte("TGAGTA")
left = make([]interface{}, len(leftBytes))
for i, v := range leftBytes {
left[i] = v
}
rightBytes := []byte("GATA")
right = make([]interface{}, len(rightBytes))
for i, v := range rightBytes {
right[i] = v
}
lcs.New(left, right)
```
## LICENSE
The MIT license (See `LICENSE` for detail)

---
title: "Incidencia de la malaria por cada 1.000 habitantes"
lang: es
permalink: /es/3-3-3/
sdg_goal: 3
layout: indicator
indicator: "3.3.3"
target_id: "3.3"
---

---
title: IPSTX IUnknown
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: reference
ms.prod: office-online-server
ms.localizationpriority: medium
api_name:
- IPSTX
api_type:
- COM
ms.assetid: 73752f57-6fbc-0201-bf95-0e75c56c04e6
description: Dernière modification le 9 mars 2015
ms.openlocfilehash: a2394b684751d2ff57d55e7c1fb2ac327545cb95
ms.sourcegitcommit: a1d9041c20256616c9c183f7d1049142a7ac6991
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/24/2021
ms.locfileid: "59620463"
---
# <a name="ipstx--iunknown"></a>IPSTX : IUnknown
**S’applique à** : Outlook 2013 | Outlook 2016
Cette interface fournit des fonctionnalités d’aide lors de la réplication via l’interface **[IOSTX.](iostxiunknown.md)**
|||
|:-----|:-----|
|Fourni par <br/> |Requête sur [IMsgStore](imsgstoreimapiprop.md) <br/> |
|Identificateur d’interface : <br/> |IID_IPSTX <br/> |
## <a name="vtable-order"></a>Ordre des vtables
|||
|:-----|:-----|
|**[GetLastError](ipstx-getlasterror.md)** <br/> |Obtient des informations étendues sur la dernière erreur. <br/> |
|**[GetSyncObject](ipstx-getsyncobject.md)** <br/> |Obtient l’interface **[IOSTX](iostxiunknown.md)** associée. <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
|**[EmulateSpooler](ipstx-emulatespooler.md)** <br/> |Définit un magasin local pour émuler le gestionnaire Outlook protocole pour mettre en file d’ensemble les messages sortants sur un serveur. <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
| *Membre d’espace réservé* <br/> | *Non pris en charge ou documenté.* <br/> |
## <a name="see-also"></a>Voir aussi
[À propos de l’API de réplication](about-the-replication-api.md)
[Constantes MAPI](mapi-constants.md)

---
title: 动画和计时帮助主题
ms.date: 03/30/2017
f1_keywords:
- AutoGeneratedOrientationPage
helpviewer_keywords:
- timing system [WPF]
- animation [WPF]
ms.assetid: 587e36f6-1957-424e-9d89-c43724f26d84
ms.openlocfilehash: 4936ba7bcd78c4867dae99df8bad11776cf655db
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "62010314"
---
# <a name="animation-and-timing-how-to-topics"></a>动画和计时帮助主题
 以下主题演示如何在应用程序中使用 [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] 动画和计时系统。
## <a name="in-this-section"></a>本节内容
[加速或减速播放动画](how-to-accelerate-or-decelerate-an-animation.md)
[在重复循环过程中累积动画值](how-to-accumulate-animation-values-during-repeat-cycles.md)
[向动画起始值添加动画输出值](how-to-add-an-animation-output-value-to-an-animation-starting-value.md)
[使用情节提要对属性进行动画处理](how-to-animate-a-property-by-using-a-storyboard.md)
[在不使用情节提要的情况下为属性设置动画效果](how-to-animate-a-property-without-using-a-storyboard.md)
[在 ControlTemplate 中设置动画效果](how-to-animate-in-a-controltemplate.md)
[在样式中设置动画效果](how-to-animate-in-a-style.md)
[为元素或画笔的不透明度设置动画效果](how-to-animate-the-opacity-of-an-element-or-brush.md)
[在不更改时间线速度的情况下更改时钟速度](change-the-speed-of-a-clock.md)
[在情节提要启动后使用其交互式方法对其进行控制](how-to-control-a-storyboard-after-it-starts.md)
[使用 From、To 和 By 控制动画](how-to-control-an-animation-using-from-to-and-by.md)
[定义名称范围](how-to-define-a-name-scope.md)
[在时钟状态发生变化时接收通知](how-to-receive-notification-when-clock-state-changes.md)
[重复动画](how-to-repeat-an-animation.md)
[搜寻情节提要](how-to-seek-a-storyboard.md)
[同步搜寻演示图板](how-to-seek-a-storyboard-synchronously.md)
[设置动画时长](how-to-set-a-duration-for-an-animation.md)
[在使用情节提要为属性设置动画效果后设置属性](how-to-set-a-property-after-animating-it-with-a-storyboard.md)
[使用子时间线简化动画](how-to-simplify-animations-by-using-child-timelines.md)
[指定演示图板动画之间的 HandoffBehavior](how-to-specify-handoffbehavior-between-storyboard-animations.md)
[为已经到达有效期末尾的时间线指定 FillBehavior](specify-the-fillbehavior-for-a-timeline.md)
[指定时间线是否自动反转](how-to-specify-whether-a-timeline-automatically-reverses.md)
[在属性值更改时触发动画](how-to-trigger-an-animation-when-a-property-value-changes.md)
[在情节提要启动之后使用事件触发器来控制情节提要](how-to-use-event-triggers-to-control-a-storyboard-after-it-starts.md)
## <a name="reference"></a>参考
<xref:System.Windows.Media.Animation.Timeline>
<xref:System.Windows.Media.Animation.Storyboard>
<xref:System.Windows.Media.Animation.BeginStoryboard>
<xref:System.Windows.Media.Animation.Clock>
<xref:System.Windows.Media.Animation>
## <a name="related-sections"></a>相关章节
[图形和多媒体](index.md)

---
weight: 70
title: Example setup for VMware Workstation Player
layout: redirect
---
### Setting up for VMware
To set up the Edge appliance in VMware Workstation Player, follow the steps below.
>**Info:** The following steps show a reference example. Refer to the VMware documentation for the exact setup. The final configuration also depends on the end user setup.
1. In VMware, navigate to **Player** > **File** > **Open** to import the Edge appliance.
2. Navigate to the folder where the Edge appliance files are located, select the OVF file and click **Open**.
3. Change the Edge appliance name if required and click **Import**. You can also change the storage path of the Edge appliance here.
>**Important:** On VMware Workstation, you must use UTC on your host machine. If you choose not to use UTC, you may have time sync issues. Set `rtc.diffFromUTC=0` in the .vmx file to avoid the time sync issues.
4. Start the Edge appliance by clicking **Play virtual machine**.
Next, perform the Edge appliance installation. See [Installing {{< product-c8y-iot >}} Edge](/edge/installation/).
### Setting up for vmnetcfg utility
You can use the VMware `vmnetcfg` utility to get the necessary details like the subnet mask and gateway IP required to configure the network.
The following example illustrates the network configuration on a Windows platform. For instructions on Linux platform, see VMware Knowledge Base.
1. Download the correct version of the `vmnetcfg` utility. It can also be extracted from the VMware Workstation Pro installer.
2. Save the vmnetcfg binary file (*vmnetcfg.exe*) in the VMware Workstation Player installation directory. In a Windows environment, this is usually *C:\Program Files (x86)\VMware\VMware Player*.<br>
3. Open the file with the appropriate rights. <br>
<img src="/images/edge/edge-vmware-05.png" name="Setting up VMware"/>
4. Select "NAT" as external connection.<br>
5. Click **NAT settings** to open the **NAT Settings** window.<br>
6. Note down the gateway IP address and close the **NAT settings** window.<br>
<img src="/images/edge/edge-vmware-06.png" name="Setting up VMware"/>
7. Click **DHCP Settings** to open the **DHCP Settings** window.<br>
8. In the fields **Starting IP address** and **Ending IP address**, change the IP range from 3 to 254, i.e. if your gateway IP is 192.168.117.2, set the IP range from 192.168.117.3 to 192.168.117.254.<br>
<img src="/images/edge/edge-vmware-07.png" name="Setting up VMware"/>
9. Click **OK** to save your settings.

# react-native-alias
Create Webpack-like aliases in React Native.
Sugar for Metro's `extraNodeModules` config option and `babel-plugin-module-resolver`.
Automatically installs shims and ships a list of well-known shims for built-in Node modules.
```plaintext
Usage: react-native-alias [options] [command]
Options:
-h, --help display help for command
Commands:
alias [options] <module:shim...> Add alias for given module:shim pairs
unalias [options] <modules...> Remove previously added alias
node [options] [modules...] Use a well known shim to shim builtin node modules
help [command] display help for command
```
| 33.55 | 86 | 0.697466 | eng_Latn | 0.937076 |

---
UID: NF:winuser.GetMessageA
title: GetMessageA function (winuser.h)
description: Retrieves a message from the calling thread's message queue. The function dispatches incoming sent messages until a posted message is available for retrieval.
helpviewer_keywords: ["GetMessage","GetMessage function [Windows and Messages]","GetMessageA","GetMessageW","_win32_GetMessage","_win32_getmessage_cpp","winmsg.getmessage","winui._win32_getmessage","winuser/GetMessage","winuser/GetMessageA","winuser/GetMessageW"]
old-location: winmsg\getmessage.htm
tech.root: winmsg
ms.assetid: VS|winui|~\winui\windowsuserinterface\windowing\messagesandmessagequeues\messagesandmessagequeuesreference\messagesandmessagequeuesfunctions\getmessage.htm
ms.date: 12/05/2018
ms.keywords: GetMessage, GetMessage function [Windows and Messages], GetMessageA, GetMessageW, _win32_GetMessage, _win32_getmessage_cpp, winmsg.getmessage, winui._win32_getmessage, winuser/GetMessage, winuser/GetMessageA, winuser/GetMessageW
req.header: winuser.h
req.include-header: Windows.h
req.target-type: Windows
req.target-min-winverclnt: Windows 2000 Professional [desktop apps only]
req.target-min-winversvr: Windows 2000 Server [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi: GetMessageW (Unicode) and GetMessageA (ANSI)
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: User32.lib
req.dll: User32.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- GetMessageA
- winuser/GetMessageA
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- User32.dll
- API-MS-Win-NTUser-IE-message-l1-1-0.dll
- ie_shims.dll
- API-MS-Win-RTCore-NTUser-Window-l1-1-0.dll
- minuser.dll
- Ext-MS-Win-NTUser-message-l1-1-0.dll
- Ext-MS-Win-NTUser-message-l1-1-1.dll
- Ext-MS-Win-RTCore-NTUser-Window-Ext-l1-1-0.dll
- Ext-MS-Win-NTUser-Message-l1-1-2.dll
- Ext-MS-Win-NTUser-Message-L1-1-3.dll
api_name:
- GetMessage
- GetMessageA
- GetMessageW
req.apiset: ext-ms-win-ntuser-message-l1-1-0 (introduced in Windows 8)
---
# GetMessageA function
## -description
Retrieves a message from the calling thread's message queue. The function dispatches incoming sent messages until a posted message is available for retrieval.
Unlike <b>GetMessage</b>, the <a href="/windows/desktop/api/winuser/nf-winuser-peekmessagea">PeekMessage</a> function does not wait for a message to be posted before returning.
## -parameters
### -param lpMsg [out]
Type: <b>LPMSG</b>
A pointer to an <a href="/windows/desktop/api/winuser/ns-winuser-msg">MSG</a> structure that receives message information from the thread's message queue.
### -param hWnd [in, optional]
Type: <b>HWND</b>
A handle to the window whose messages are to be retrieved. The window must belong to the current thread.
If <i>hWnd</i> is <b>NULL</b>, <b>GetMessage</b> retrieves messages for any window that belongs to the current thread, and any messages on the current thread's message queue whose <b>hwnd</b> value is <b>NULL</b> (see the <a href="/windows/desktop/api/winuser/ns-winuser-msg">MSG</a> structure). Therefore if hWnd is <b>NULL</b>, both window messages and thread messages are processed.
If <i>hWnd</i> is -1, <b>GetMessage</b> retrieves only messages on the current thread's message queue whose <b>hwnd</b> value is <b>NULL</b>, that is, thread messages as posted by <a href="/windows/desktop/api/winuser/nf-winuser-postmessagea">PostMessage</a> (when the <i>hWnd</i> parameter is <b>NULL</b>) or <a href="/windows/desktop/api/winuser/nf-winuser-postthreadmessagea">PostThreadMessage</a>.
### -param wMsgFilterMin [in]
Type: <b>UINT</b>
The integer value of the lowest message value to be retrieved. Use <b>WM_KEYFIRST</b> (0x0100) to specify the first keyboard message or <b>WM_MOUSEFIRST</b> (0x0200) to specify the first mouse message.
Use <a href="/windows/desktop/inputdev/wm-input">WM_INPUT</a> here and in <i>wMsgFilterMax</i> to specify only the <b>WM_INPUT</b> messages.
If <i>wMsgFilterMin</i> and <i>wMsgFilterMax</i> are both zero, <b>GetMessage</b> returns all available messages (that is, no range filtering is performed).
### -param wMsgFilterMax [in]
Type: <b>UINT</b>
The integer value of the highest message value to be retrieved. Use <b>WM_KEYLAST</b> to specify the last keyboard message or <b>WM_MOUSELAST</b> to specify the last mouse message.
Use <a href="/windows/desktop/inputdev/wm-input">WM_INPUT</a> here and in <i>wMsgFilterMin</i> to specify only the <b>WM_INPUT</b> messages.
If <i>wMsgFilterMin</i> and <i>wMsgFilterMax</i> are both zero, <b>GetMessage</b> returns all available messages (that is, no range filtering is performed).
## -returns
Type: <b>BOOL</b>
If the function retrieves a message other than <a href="/windows/desktop/winmsg/wm-quit">WM_QUIT</a>, the return value is nonzero.
If the function retrieves the <a href="/windows/desktop/winmsg/wm-quit">WM_QUIT</a> message, the return value is zero.
If there is an error, the return value is -1. For example, the function fails if <i>hWnd</i> is an invalid window handle or <i>lpMsg</i> is an invalid pointer. To get extended error information, call <a href="/windows/desktop/api/errhandlingapi/nf-errhandlingapi-getlasterror">GetLastError</a>.
Because the return value can be nonzero, zero, or -1, avoid code like this:
```
while (GetMessage( lpMsg, hWnd, 0, 0)) ...
```
The possibility of a -1 return value in the case that hWnd is an invalid parameter (such as referring to a window that has already been destroyed) means that such code can lead to fatal application errors. Instead, use code like this:
```
BOOL bRet;
while( (bRet = GetMessage( &msg, hWnd, 0, 0 )) != 0)
{
if (bRet == -1)
{
// handle the error and possibly exit
}
else
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
```
## -remarks
An application typically uses the return value to determine whether to end the main message loop and exit the program.
The <b>GetMessage</b> function retrieves messages associated with the window identified by the <i>hWnd</i> parameter or any of its children, as specified by the <a href="/windows/desktop/api/winuser/nf-winuser-ischild">IsChild</a> function, and within the range of message values given by the <i>wMsgFilterMin</i> and <i>wMsgFilterMax</i> parameters. Note that an application can only use the low word in the <i>wMsgFilterMin</i> and <i>wMsgFilterMax</i> parameters; the high word is reserved for the system.
Note that <b>GetMessage</b> always retrieves <a href="/windows/desktop/winmsg/wm-quit">WM_QUIT</a> messages, no matter which values you specify for <i>wMsgFilterMin</i> and <i>wMsgFilterMax</i>.
During this call, the system delivers pending, nonqueued messages, that is, messages sent to windows owned by the calling thread using the <a href="/windows/desktop/api/winuser/nf-winuser-sendmessage">SendMessage</a>, <a href="/windows/desktop/api/winuser/nf-winuser-sendmessagecallbacka">SendMessageCallback</a>, <a href="/windows/desktop/api/winuser/nf-winuser-sendmessagetimeouta">SendMessageTimeout</a>, or <a href="/windows/desktop/api/winuser/nf-winuser-sendnotifymessagea">SendNotifyMessage</a> function. Then the first queued message that matches the specified filter is retrieved. The system may also process internal events. If no filter is specified, messages are processed in the following order:
<ul>
<li>Sent messages </li>
<li>Posted messages </li>
<li>Input (hardware) messages and system internal events </li>
<li>Sent messages (again) </li>
<li>
<a href="/windows/desktop/gdi/wm-paint">WM_PAINT</a> messages </li>
<li>
<a href="/windows/desktop/winmsg/wm-timer">WM_TIMER</a> messages </li>
</ul>
To retrieve input messages before posted messages, use the <i>wMsgFilterMin</i> and <i>wMsgFilterMax</i> parameters.
<b>GetMessage</b> does not remove <a href="/windows/desktop/gdi/wm-paint">WM_PAINT</a> messages from the queue. The messages remain in the queue until processed.
If a top-level window stops responding to messages for more than several seconds, the system considers the window to be not responding and replaces it with a ghost window that has the same z-order, location, size, and visual attributes. This allows the user to move it, resize it, or even close the application. However, these are the only actions available because the application is actually not responding. When in the debugger mode, the system does not generate a ghost window.
<h3><a id="DPI_Virtualization"></a><a id="dpi_virtualization"></a><a id="DPI_VIRTUALIZATION"></a>DPI Virtualization</h3>
This API does not participate in DPI virtualization. The output is in the mode of the window that the message is targeting. The calling thread is not taken into consideration.
#### Examples
For an example, see <a href="/windows/desktop/winmsg/using-messages-and-message-queues">Creating a Message Loop</a>.
<div class="code"></div>
> [!NOTE]
> The winuser.h header defines GetMessage as an alias which automatically selects the ANSI or Unicode version of this function based on the definition of the UNICODE preprocessor constant. Mixing usage of the encoding-neutral alias with code that not encoding-neutral can lead to mismatches that result in compilation or runtime errors. For more information, see [Conventions for Function Prototypes](/windows/win32/intl/conventions-for-function-prototypes).
## -see-also
<b>Conceptual</b>
<a href="/windows/desktop/api/winuser/nf-winuser-ischild">IsChild</a>
<a href="/windows/desktop/api/winuser/ns-winuser-msg">MSG</a>
<a href="/windows/desktop/winmsg/messages-and-message-queues">Messages and Message Queues</a>
<a href="/windows/desktop/api/winuser/nf-winuser-peekmessagea">PeekMessage</a>
<a href="/windows/desktop/api/winuser/nf-winuser-postmessagea">PostMessage</a>
<a href="/windows/desktop/api/winuser/nf-winuser-postthreadmessagea">PostThreadMessage</a>
<b>Reference</b>
<a href="/windows/desktop/api/winuser/nf-winuser-waitmessage">WaitMessage</a>
## 1.实验目的:
- 了解操作系统开发实验环境
- 熟悉命令行方式的编译、调试工程
- 掌握基于硬件模拟器的调试技术
- 熟悉C语言编程和指针的概念
- 了解X86汇编语言

---
page_type: sample
products:
- office-sp
languages:
- javascript
- typescript
extensions:
contentType: samples
technologies:
- SharePoint Framework
platforms:
- React
createdDate: 11/28/2018 12:00:00 AM
---
# Image Gallery Built with Adaptive Cards
## Summary
This sample demonstrates the use of [Adaptive Cards](https://adaptivecards.io/) with the SharePoint Framework. Adaptive Cards are a great fit for bots, but they can be used just as effectively with SPFx to render content. This web part displays an image gallery from a SharePoint list.
![Web part preview][figure1]
When the web part is added to a SharePoint site, the source list containing the image information and the number of images to display can be configured from the web part properties.
The sample also provisions a list called "Adaptive Card Images" which can be used as an example to start using the web part.
![SharePoint Run][figure2]
### SharePoint Asset
A SharePoint list (named "Adaptive Card Images") is provisioned to store the image information. The schema of the list is as below.
![List Schema][figure3]
- The "Image Link" column stores the URL of the image to be displayed on the adaptive card.
- The "Navigation URL" column holds the URL to navigate to when the image on the adaptive card is clicked.
- The "Sort Order" column defines the order in which the images are displayed on the adaptive card.
The solution also provisions sample data to the "Adaptive Card Images" list.
![List Sample Data][figure4]
### NPM Packages Used
The following NPM packages are used to develop this sample.
1. sp-pnp-js (https://www.npmjs.com/package/sp-pnp-js)
2. adaptivecards (https://www.npmjs.com/package/adaptivecards)
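For orientation, the payload such a web part feeds to the `adaptivecards` renderer for one gallery image looks roughly like the following. This is a hand-written Adaptive Card JSON sketch — the exact schema version and field values used in the sample's source may differ:

```json
{
  "type": "AdaptiveCard",
  "version": "1.1",
  "body": [
    {
      "type": "Image",
      "url": "https://contoso.sharepoint.com/siteassets/photo1.jpg",
      "selectAction": {
        "type": "Action.OpenUrl",
        "url": "https://contoso.sharepoint.com/sitepages/gallery-item-1.aspx"
      }
    }
  ]
}
```

Here `url` would come from the "Image Link" column and the `selectAction` target from the "Navigation URL" column.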
## Used SharePoint Framework Version

## Applies to
* [SharePoint Framework Developer Preview](https://docs.microsoft.com/sharepoint/dev/spfx/sharepoint-framework-overview)
* [Office 365 developer tenant](https://docs.microsoft.com/sharepoint/dev/spfx/set-up-your-developer-tenant)
## Solution
Solution|Author(s)
--------|---------
react-adaptive-cards-image-gallery|[Nanddeep Nachan](https://www.linkedin.com/in/nanddeepnachan/) (SharePoint Consultant, [@NanddeepNachan](https://http://twitter.com/NanddeepNachan) )
|[Ravi Kulkarni](https://www.linkedin.com/in/ravi-kulkarni-a5381723/) (SharePoint Consultant)
## Version history
Version|Date|Comments
-------|----|--------
1.1.0|June 15, 2020|Upgrade to SPFx 1.10.0
1.0.0|November 28, 2018|Initial release
## Disclaimer
**THIS CODE IS PROVIDED *AS IS* WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING ANY IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR NON-INFRINGEMENT.**
---
## Prerequisites
- SharePoint Online tenant
- Site Collection created under the **/sites/** or **/**
## Minimal Path to Awesome
- Clone this repository.
- On the command prompt, navigate to the web part folder and execute:
- `npm i`
- `gulp bundle --ship`
- `gulp package-solution --ship`
- The package can be found at `\react-adaptive-cards-image-gallery\sharepoint\solution\react-adaptive-cards-image-gallery.sppkg`
- [Deploy the package](https://docs.microsoft.com/en-us/sharepoint/dev/spfx/web-parts/get-started/serve-your-web-part-in-a-sharepoint-page#deploy-the-helloworld-package-to-app-catalog) to the app catalog.
- [Install the client-side solution](https://docs.microsoft.com/en-us/sharepoint/dev/spfx/web-parts/get-started/serve-your-web-part-in-a-sharepoint-page#install-the-client-side-solution-on-your-site) to your SharePoint site.
- [Add web part to your SharePoint page](https://docs.microsoft.com/en-us/sharepoint/dev/spfx/web-parts/get-started/serve-your-web-part-in-a-sharepoint-page#add-the-helloworld-web-part-to-modern-page) named "Adaptive Cards Image Gallery".
## Features
This sample web part shows how adaptive cards can be used effectively with SharePoint Framework to render an image gallery with data stored in a SharePoint list.
- Integrating adaptive cards
- Rendering image gallery
- SharePoint assets provisioning
- Creating extensible services
- Using @sp-pnp-js
- Using @adaptivecards
[figure1]: ./assets/webpart-preview.png
[figure2]: ./assets/sharepoint-run.gif
[figure3]: ./assets/list-schema.png
[figure4]: ./assets/list-sample-data.png
<img src="https://telemetry.sharepointpnp.com/sp-dev-fx-webparts/samples/react-adaptive-cards-image-gallery" />
| 40.245455 | 298 | 0.763723 | eng_Latn | 0.814512 |