hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
98cd96df7ee4420249da780c871a8dc7bca00787 | 297 | md | Markdown | boat-id/DOCUMENTATION.md | srclab-projects/easy-common | 192aee01efccc76dcaf18d6d34ebac9726302a83 | [
"Apache-2.0"
] | 2 | 2020-05-04T10:48:44.000Z | 2020-05-31T12:02:53.000Z | boat-id/DOCUMENTATION.md | srclab-projects/toova | 192aee01efccc76dcaf18d6d34ebac9726302a83 | [
"Apache-2.0"
] | 1 | 2021-07-26T15:49:21.000Z | 2021-07-26T15:49:21.000Z | boat-id/DOCUMENTATION.md | srclab-projects/boat | 192aee01efccc76dcaf18d6d34ebac9726302a83 | [
"Apache-2.0"
] | null | null | null | #  `boat-id`: Boat Id -- Id Generation Lib of [Boat](../README.md)
- AsciiDoc:
* [English](docs/DOCUMENTATION_en.adoc)
* [简体中文](docs/DOCUMENTATION_zh.adoc)
- Markdown:
* [English](docs/DOCUMENTATION_en.md)
* [简体中文](docs/DOCUMENTATION_zh.md)
More see [docs/](docs/) | 29.7 | 89 | 0.670034 | yue_Hant | 0.960063 |
98cdc04e4a8f1d14666828c1787fb423dbdc3ded | 3,239 | md | Markdown | docs/analysis-services/data-mining/view-or-change-modeling-flags-data-mining.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/data-mining/view-or-change-modeling-flags-data-mining.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/data-mining/view-or-change-modeling-flags-data-mining.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Visualizzare o modificare flag di modellazione (Data Mining) | Documenti Microsoft
ms.date: 05/08/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: data-mining
ms.topic: conceptual
ms.author: owend
ms.reviewer: owend
author: minewiskan
manager: kfile
ms.openlocfilehash: e587be4fe975ee35752e668f9a5d49e0afdf4488
ms.sourcegitcommit: c12a7416d1996a3bcce3ebf4a3c9abe61b02fb9e
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 05/10/2018
ms.locfileid: "34015468"
---
# <a name="view-or-change-modeling-flags-data-mining"></a>Visualizzare o modificare flag di modellazione (Data mining)
[!INCLUDE[ssas-appliesto-sqlas](../../includes/ssas-appliesto-sqlas.md)]
I flag di modellazione sono proprietà impostate in una colonna della struttura di data mining o in colonne del modello di data mining per controllare la modalità di elaborazione dei dati durante l'analisi da parte dell'algoritmo.
In Progettazione modelli di data mining è possibile visualizzare e modificare i flag di modellazione associati a una struttura di data mining o a una colonna di data mining visualizzando le proprietà della struttura o del modello. Inoltre, è possibile impostare i flag di modellazione tramite DMX, AMO o XMLA.
In questa procedura viene descritto come modificare i flag di modellazione nella finestra di progettazione.
### <a name="view-or-change-the-modeling-flag-for-a-structure-column-or-model-column"></a>Visualizzare o modificare il flag di modellazione per una colonna della struttura o una colonna del modello
1. In SQL Server Design Studio aprire Esplora soluzioni, quindi fare doppio clic sulla struttura di data mining.
2. Per impostare il flag di modellazione NOT NULL, fare clic sulla scheda **Struttura di data mining** . Per impostare i flag REGRESSOR o MODEL_EXISTENCE_ONLY, fare clic sulla scheda **Modello di data mining** .
3. Fare clic con il pulsante destro del mouse sulla colonna da visualizzare o modificare, quindi scegliere **Proprietà**.
4. Per aggiungere un nuovo flag di modellazione, fare clic sulla casella di testo accanto alla proprietà **ModelingFlags** e selezionare le casella o le caselle di controllo relative ai flag di modellazione che si desidera utilizzare.
I flag di modellazione vengono visualizzati solo se sono appropriati per il tipo di dati della colonna.
> [!NOTE]
> Dopo avere modificato un flag di modellazione, è necessario rielaborare il modello.
### <a name="get-the-modeling-flags-used-in-the-model"></a>Recuperare i flag di modellazione utilizzati nel modello
- In [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)]aprire una finestra Query DMX e digitare una query simile alla seguente:
```
SELECT COLUMN_NAME, CONTENT_TYPE, MODELING_FLAG
FROM $system.DMSCHEMA_MINING_COLUMNS
WHERE MODEL_NAME = 'Forecasting'
```
## <a name="see-also"></a>Vedere anche
[Procedure dettagliate e attività di modello di data mining](../../analysis-services/data-mining/mining-model-tasks-and-how-tos.md)
[Modello di Data Mining flag & #40; & #41;](../../analysis-services/data-mining/modeling-flags-data-mining.md)
| 55.844828 | 312 | 0.759185 | ita_Latn | 0.989509 |
98cdc86d72181ce36f466417229b87d77649e0de | 483 | md | Markdown | website/versioned_docs/version-2.4.0/node-client.identityclaims.md | SkygearIO/skygear-SDK-JS | d9fc7a02cc63e6df88361d76df77de944f993984 | [
"Apache-2.0"
] | 25 | 2016-03-02T16:49:21.000Z | 2020-09-02T03:12:36.000Z | website/versioned_docs/version-2.4.0/node-client.identityclaims.md | SkygearIO/skygear-SDK-JS | d9fc7a02cc63e6df88361d76df77de944f993984 | [
"Apache-2.0"
] | 477 | 2016-03-24T10:20:35.000Z | 2020-06-09T10:15:53.000Z | website/versioned_docs/version-2.4.0/node-client.identityclaims.md | SkygearIO/skygear-SDK-JS | d9fc7a02cc63e6df88361d76df77de944f993984 | [
"Apache-2.0"
] | 45 | 2016-03-29T17:12:13.000Z | 2019-12-17T15:50:55.000Z | ---
id: version-2.4.0-node-client.identityclaims
title: IdentityClaims interface
hide_title: true
original_id: node-client.identityclaims
---
<!-- Do not edit this file. It is automatically generated by API Documenter. -->
## IdentityClaims interface
<b>Signature:</b>
```typescript
export declare interface IdentityClaims
```
## Properties
| Property | Type | Description |
| --- | --- | --- |
| [email](./node-client.identityclaims.email.md) | <code>string</code> | |
| 19.32 | 80 | 0.691511 | eng_Latn | 0.821034 |
98ce58613d00dbb71836efd97048de7c717a36be | 2,210 | md | Markdown | README.md | niedong/lwesp-fat-8266 | ecad52b8c18069d33641735bfcb2dd21ce842a01 | [
"MIT"
] | 1 | 2021-07-12T02:02:09.000Z | 2021-07-12T02:02:09.000Z | README.md | niedong/lwesp-fat-8266 | ecad52b8c18069d33641735bfcb2dd21ce842a01 | [
"MIT"
] | null | null | null | README.md | niedong/lwesp-fat-8266 | ecad52b8c18069d33641735bfcb2dd21ce842a01 | [
"MIT"
] | null | null | null | # LwESP For Ai-thinker ESP8266
[lwesp-fat-8266](https://github.com/niedong/lwesp-fat-8266), which stands for `LwESP For Ai-thinker ESP8266`, is a variation of [lwesp](https://github.com/MaJerle/lwesp). The original [lwesp](https://github.com/MaJerle/lwesp) library mostly targets on [Espressif](https://github.com/espressif) ESP8266 and ESP32 devices, which use a slightly different and larger range of AT commands.
This repository targets specifically at [Ai-thinker](https://github.com/Ai-Thinker-Open) ESP8266, and should work well with other device that use Ai-thinker ESP8266 as a solution, such as ESP-01S, ESP-07, etc. You can find more information about your device on [Ai-thinker website](https://docs.ai-thinker.com/en/esp8266).
Follow original [documentation](https://docs.majerle.eu/projects/lwesp/) for information on implementation and details.
## Usage & Examples
The usage of [lwesp-fat-8266](https://github.com/niedong/lwesp-fat-8266) is exacly the same as [lwesp](https://github.com/MaJerle/lwesp). There is also an important repository [stm32f769xx](https://github.com/niedong/stm32f769xx), which demonstrates configuration and usage for [lwesp-fat-8266](https://github.com/niedong/lwesp-fat-8266) based on stm32f769, [Ai-thinker](https://github.com/Ai-Thinker-Open) ESP8266 and FreeRTOS. You can even find more examples and resources in [lwesp](https://github.com/MaJerle/lwesp) repository.
## Fix & Changes
- Fix `AT+CIPSTART` for [Ai-thinker](https://github.com/Ai-Thinker-Open) ESP8266 where `+LINK_CONN` is not supported.
- Remove macro, options & modules which are not supported by [Ai-thinker](https://github.com/Ai-Thinker-Open) ESP8266.
- Minor optimization, improvement and more.
All modification and changes on original source code are clearly marked. You can find them on top of corresponding files.
## Contribute
Contributions are always welcome. Please follow [C style & coding rules](https://github.com/MaJerle/c-code-style) used by original library.
## License
This software is double-licensed under MIT license, with copyright by [MaJerle](https://github.com/MaJerle) (see LICENSE), and copyright by [niedong](https://github.com/niedong) (see LICENSE.niedong).
| 78.928571 | 532 | 0.771041 | eng_Latn | 0.860306 |
98cfca49ba57a4e1eaf34bc3feae7187b805cc92 | 183 | md | Markdown | README.md | GigaHertzLegacy-SpiderX/Python-Files | d10cfe7e5eafbf81d26a258d37220ec1ec6de003 | [
"MIT"
] | null | null | null | README.md | GigaHertzLegacy-SpiderX/Python-Files | d10cfe7e5eafbf81d26a258d37220ec1ec6de003 | [
"MIT"
] | null | null | null | README.md | GigaHertzLegacy-SpiderX/Python-Files | d10cfe7e5eafbf81d26a258d37220ec1ec6de003 | [
"MIT"
] | null | null | null | ## Python-Files
# HacktoberFest
# HacktoberFest2021
# installation
1. git clone https://github.com/GigaHertzLegacy-SpiderX/Python-Files
2. cd Python-Files
3. python + (Filename.py)
| 18.3 | 68 | 0.765027 | yue_Hant | 0.689659 |
98cfe606b776bf1819cd41dc838e8f89386c1a02 | 1,434 | md | Markdown | README.md | DigitalSlideArchive/ansible-role-vips | 1ef2d88ce11adec5ba2c749ffe9df913ed2a2062 | [
"Apache-2.0"
] | 2 | 2016-04-05T23:22:14.000Z | 2018-10-10T18:38:34.000Z | README.md | DigitalSlideArchive/ansible-role-vips | 1ef2d88ce11adec5ba2c749ffe9df913ed2a2062 | [
"Apache-2.0"
] | 4 | 2016-04-05T23:32:14.000Z | 2016-06-14T18:16:39.000Z | README.md | DigitalSlideArchive/ansible-role-vips | 1ef2d88ce11adec5ba2c749ffe9df913ed2a2062 | [
"Apache-2.0"
] | 1 | 2016-06-14T18:07:45.000Z | 2016-06-14T18:07:45.000Z | DigitalSlideArchive.vips
========================
[](https://raw.githubusercontent.com/DigitalSlideArchive/ansible-role-vips/master/LICENSE)
[](https://travis-ci.org/DigitalSlideArchive/ansible-role-vips)
An Ansible role to install [VIPS image processing software](http://www.vips.ecs.soton.ac.uk/)
with bug-free OpenSlide support.
On Ubuntu 14.04, VIPS contains a bug where OpenSlide does not open some images properly.
This role installs VIPS's OpenSlide dependencies to reliably work around that bug.
Requirements
------------
This is intended to be run on a clean Ubuntu 14.04 or 16.04 system.
Role Variables
--------------
You may want to override the variables:
* `vips_libtiff_path`: Path to fetch and build LibTIFF.
* `vips_openjpeg_path`: Path to fetch and build OpenJPEG.
* `vips_openslide_path`: Path to fetch and build OpenSlide.
* `vips_path`: Path to fetch and build VIPS.
You can (but probably won't need to) override the variables:
* `vips_libtiff_version`: Git commit-ish for fetching LibTIFF.
* `vips_openjpeg_version`: Git commit-ish for fetching OpenJPEG.
* `vips_openslide_version`: Git commit-ish for fetching OpenSlide.
* `vips_ersion`: Git commit-ish for fetching VIPS.
* `build_parallelism`: The number of parallel jobs to build with.
| 42.176471 | 161 | 0.75523 | eng_Latn | 0.752531 |
98d02b276c92cb61775a2e66e10ea1a3a7909309 | 1,658 | md | Markdown | README.md | DavilsonJunior/desafio-06-bootcamp-rocketseat | b20135fda039dfdcd651208480c841057b5cfa3b | [
"MIT"
] | null | null | null | README.md | DavilsonJunior/desafio-06-bootcamp-rocketseat | b20135fda039dfdcd651208480c841057b5cfa3b | [
"MIT"
] | null | null | null | README.md | DavilsonJunior/desafio-06-bootcamp-rocketseat | b20135fda039dfdcd651208480c841057b5cfa3b | [
"MIT"
] | null | null | null | <img alt="GoStack" src="https://storage.googleapis.com/golden-wind/bootcamp-gostack/header-desafios-new.png" />
<h3 align="center">
Desafio 06: Banco de dados e upload de arquivos no Node.js
</h3>
<blockquote align="center">“Só deseje as coisas as quais você está disposto a lutar”!</blockquote>
<p align="center">
<a href="https://www.linkedin.com/in/davilson-paulino-cunha-da-junior-23029315a/">
<img alt="Davilson Junior" src="https://img.shields.io/badge/-Davilson Junior-4e5acf?style=flat&logo=Linkedin&logoColor=white" />
</a>
</p>
### Fotos
<div>
<img src="https://user-images.githubusercontent.com/35976070/163727183-2aa15757-1f96-4265-88bf-03fcbc1882fb.png" width="500px">
</div>
# :construction_worker: Executando
```bash
# Clone o Repositório
$ git clone https://github.com/DavilsonJunior/desafio-06-bootcamp-rocketseat
```
```bash
# Acesse a pasta do projeto e baixe as dependências
$ yarn
```
```bash
# Para verificar os testes use:
$ yarn test
```
```bash
# Rodando o server
$ yarn dev:server
```
# :computer: Autores
<table>
<tr>
<td align="center">
<a href="http://github.com/DavilsonJunior/">
<img src="https://avatars.githubusercontent.com/u/35976070?s=400&u=eee0ec381ba3d4475f60cd576e4a4e5d2b9877bc&v=4" width="100px;" alt="Davilson Junior"/>
<br />
<sub>
<b>Davilson Junior</b>
</sub>
</a>
<br />
<a href="https://www.linkedin.com/in/davilson-paulino-cunha-da-junior-23029315a/" title="Linkedin">@davilsonjunior</a>
<br />
</td>
</tr>
</table>
# :closed_book: Licença
Este projeto está sob a licença [MIT](./LICENCE).
| 25.121212 | 159 | 0.671894 | por_Latn | 0.393674 |
438e459e9b3f45f26045b9f934854a2e8fd4a4b9 | 14,980 | md | Markdown | articles/iot-dps/virtual-network-support.md | KreizIT/azure-docs.fr-fr | dfe0cb93ebc98e9ca8eb2f3030127b4970911a06 | [
"CC-BY-4.0",
"MIT"
] | 43 | 2017-08-28T07:44:17.000Z | 2022-02-20T20:53:01.000Z | articles/iot-dps/virtual-network-support.md | KreizIT/azure-docs.fr-fr | dfe0cb93ebc98e9ca8eb2f3030127b4970911a06 | [
"CC-BY-4.0",
"MIT"
] | 676 | 2017-07-14T20:21:38.000Z | 2021-12-03T05:49:24.000Z | articles/iot-dps/virtual-network-support.md | KreizIT/azure-docs.fr-fr | dfe0cb93ebc98e9ca8eb2f3030127b4970911a06 | [
"CC-BY-4.0",
"MIT"
] | 153 | 2017-07-11T00:08:42.000Z | 2022-01-05T05:39:03.000Z | ---
title: Prise en charge des réseaux virtuels dans le service Azure IoT Device Provisioning (DPS)
description: Utilisation du modèle de connectivité de réseaux virtuels avec Azure IoT Device Provisioning Service (DPS)
services: iot-dps
author: anastasia-ms
ms.service: iot-dps
manager: lizross
ms.topic: conceptual
ms.date: 10/06/2021
ms.author: v-stharr
ms.openlocfilehash: 8d90a033f5af5afb55be9585756a7235dc6d89d7
ms.sourcegitcommit: e82ce0be68dabf98aa33052afb12f205a203d12d
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/07/2021
ms.locfileid: "129659314"
---
# <a name="azure-iot-hub-device-provisioning-service-dps-support-for-virtual-networks"></a>Prise en charge des réseaux virtuels dans le service Azure IoT Hub Device Provisioning (DPS)
Cet article présente le modèle de connectivité de réseau virtuel pour le provisionnement des appareils IoT avec des hubs IoT à l’aide de DPS. Ce modèle fournit une connectivité privée entre les appareils, DPS et le hub IoT au sein d’un réseau virtuel Azure appartenant à un client.
Dans la plupart des scénarios où DPS est configuré avec un réseau virtuel, votre IoT Hub est également configuré dans le même réseau virtuel. Pour plus d’informations spécifiques sur la prise en charge et la configuration du réseau virtuel pour des hubs IoT, consultez [Prise en charge des réseaux virtuels par IoT Hub](../iot-hub/virtual-network-support.md).
## <a name="introduction"></a>Introduction
Par défaut, les noms d'hôtes DPS sont mappés à un point de terminaison public avec une adresse IP routable publiquement via Internet. Ce point de terminaison public est visible par tous les clients. L’accès au point de terminaison public peut être tenté par des appareils IoT sur des réseaux étendus et des réseaux locaux.
Pour plusieurs raisons, les clients peuvent souhaiter limiter la connectivité aux ressources Azure, telles que DPS. Les raisons sont les suivantes :
* Empêchez l’exposition de la connexion sur l’Internet public. Vous pouvez réduire l’exposition en introduisant des couches supplémentaires de sécurité via l’isolement réseau pour votre IoT Hub et vos ressources DPS
* Permettre une expérience de connectivité privée à partir de vos ressources réseau sur site en garantissant que vos données et votre trafic sont transmis directement au réseau principal Azure.
* Empêcher les attaques par exfiltration à partir de réseaux locaux sensibles.
* Suivre des modèles de connectivité établis à l'échelle d’Azure en utilisant des [points de terminaison privés](../private-link/private-endpoint-overview.md).
Les approches courantes de limitation de la connectivité incluent les [règles de filtre IP DPS](./iot-dps-ip-filtering.md) et le réseau virtuel (VNET) avec des [points de terminaison privés](../private-link/private-endpoint-overview.md). L’objectif de cet article est de décrire l’approche de réseau virtuel pour DPS à l’aide de points de terminaison privés.
Les appareils qui fonctionnent sur des réseaux locaux peuvent utiliser un [réseau privé virtuel (VPN)](../vpn-gateway/vpn-gateway-about-vpngateways.md) ou l’homologation privée [ExpressRoute](https://azure.microsoft.com/services/expressroute/) pour se connecter à un réseau virtuel dans Azure et accéder aux ressources DPS via des points de terminaison privés.
Un point de terminaison privé est une adresse IP privée attribuée à l'intérieur d'un réseau virtuel appartenant au client, qui permet l’accès à une ressource Azure. En ayant un point de terminaison privé pour votre ressource DPS, vous pouvez autoriser les appareils qui fonctionnent à l’intérieur de votre réseau virtuel à demander l’approvisionnement par votre ressource DPS sans autoriser le trafic vers le point de terminaison public.
## <a name="prerequisites"></a>Prérequis
Avant de commencer, assurez-vous que les conditions préalables suivantes sont remplies :
* Votre ressource DPS est déjà créée et liée à vos hubs IoT. Pour obtenir des conseils sur la configuration d’une nouvelle ressource DPS, consultez [Configurer le Service IoT Hub Device Provisioning avec le portail Azure](./quick-setup-auto-provision.md)
* Vous provisionné réseau virtuel Azure avec un sous-réseau dans lequel sera créé le point de terminaison privé. Pour plus d’informations, consultez [Créer un réseau virtuel à l’aide d’Azure CLI](../virtual-network/quick-create-cli.md).
* Pour les appareils qui fonctionnent à l'intérieur de réseaux locaux, configurez un [réseau privé virtuel (VPN)](../vpn-gateway/vpn-gateway-about-vpngateways.md) ou un peering privé [ExpressRoute](https://azure.microsoft.com/services/expressroute/) dans votre réseau virtuel Azure.
## <a name="private-endpoint-limitations"></a>Limitations de point de terminaison privé
Notez les limitations actuelles suivantes pour DPS lors de l’utilisation de points de terminaison privés :
* Les points de terminaison privé ne fonctionnent pas avec DPS lorsque la ressource DPS et hub lié se situent dans des clouds différents. Par exemple, [Azure Government et Azure international](../azure-government/documentation-government-welcome.md).
* À l’heure actuelle, les [stratégies d’allocation personnalisées avec Azure Functions](how-to-use-custom-allocation-policies.md) pour Data Protection Manager ne fonctionnent pas lorsque la fonction Azure est bloquée dans un réseau virtuel et des points de terminaison privés.
* La prise en charge actuelle du réseau virtuel DPS concerne l’entrée de données dans DPS uniquement. La sortie de données, qui est le trafic de DPS vers IoT Hub, utilise un mécanisme de service à service interne plutôt qu’un réseau virtuel dédié. La prise en charge d’un verrouillage de sortie basé sur un réseau virtuel complet entre DPS et IoT Hub n’est pas disponible actuellement.
* La stratégie d’allocation de latence la plus faible est utilisée pour affecter un appareil à IoT Hub avec la latence la plus faible. Cette stratégie d’allocation n’est pas fiable dans un environnement de réseau virtuel.
>[!NOTE]
>**Considération relative à la résidence des données :**
>
>DPS fournit un **point de terminaison d’appareil global** (`global.azure-devices-provisioning.net`). Cependant, lorsque vous utilisez le point de terminaison global, vos données peuvent être redirigées en dehors de la région où l’instance DPS a été initialement créée. Pour garantir la résidence des données dans la région DPS initiale, utilisez des points de terminaison privés.
## <a name="set-up-a-private-endpoint"></a>Créer un point de terminaison privé
Pour configurer un point de terminaison privé, procédez comme suit :
1. Dans le [portail Azure](https://portal.azure.com/), ouvrez votre ressource DPS, puis cliquez sur l’onglet **Mise en réseau**. Cliquez sur **Connexions de points de terminaison privés** et sur **+ Point de terminaison privé**.

2. Sur la page _Créer un point de terminaison privé - Notions de base_, entrez les informations mentionnées dans le tableau ci-dessous.

| Champ | Valeur |
| :---- | :-----|
| **Abonnement** | Choisissez l’abonnement Azure de votre choix pour contenir le point de terminaison privé. |
| **Groupe de ressources** | Choisissez ou créez un groupe de ressources pour contenir le point de terminaison privé |
| **Nom** | Entrez un nom pour votre point de terminaison privé |
| **Région** | La région choisie doit être la même que la région qui contient le réseau virtuel, mais elle ne doit pas nécessairement être la même que la ressource DPS. |
Cliquez sur **Suivant : Ressource** pour configurer la ressource vers laquelle pointe le point de terminaison privé.
3. Sur la page _Créer un point de terminaison privé - Ressource_, entrez les informations mentionnées dans le tableau ci-dessous.

| Champ | Valeur |
| :---- | :-----|
| **Abonnement** | Choisissez l’abonnement Azure qui contient la ressource DPS vers laquelle votre point de terminaison privé va pointer. |
| **Type de ressource** | Choisissez **Microsoft.Devices/ProvisioningServices**. |
| **Ressource** | Sélectionnez la ressource DPS vers laquelle le point de terminaison privé va mapper. |
| **Sous-ressource cible** | Sélectionnez **iotDps**. |
> [!TIP]
> Les informations sur le paramètre **Se connecter à une ressource Azure par ID de ressource ou alias** sont fournies dans la section [Demander un point de terminaison privé](#request-a-private-endpoint) de cet article.
Cliquez sur **Suivant : Configuration** pour configurer le réseau virtuel pour le point de terminaison privé.
4. Sur la page _Créer un point de terminaison privé - Configuration_, choisissez votre réseau virtuel et votre sous-réseau où le point de terminaison privé sera créé.
Cliquez sur **Suivant : Balises**, et spécifiez si nécessaire les balises de votre ressource.

5. Cliquez sur **Vérifier + créer**, puis sur **Créer** pour créer votre ressource de point de terminaison privé.
## <a name="use-private-endpoints-with-devices"></a>Utiliser des points de terminaison privés avec des appareils
Pour utiliser des points de terminaison privés avec du code de configuration d’appareil, votre code d’approvisionnement doit utiliser le **point de terminaison de service** spécifique pour votre ressource DPS, comme indiqué sur la page Vue d’ensemble de votre ressource DPS dans le [portail Azure](https://portal.azure.com). Le point de terminaison de service se présente sous la forme suivante.
`<Your DPS Tenant Name>.azure-devices-provisioning.net`
La plupart des exemples de code présentés dans notre documentation et les kits de développement logiciel (SDK) utilisent le **point de terminaison d’appareil global** (`global.azure-devices-provisioning.net`) et l’**étendue d’ID** pour résoudre une ressource de DPS particulière. Utilisez le point de terminaison de service à la place du point de terminaison d’appareil global lors de la connexion à une ressource DPS à l’aide de points de terminaison privés pour approvisionner vos appareils.
Par exemple, l’exemple de client d’appareil d’approvisionnement ([pro_dev_client_sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/prov_dev_client_sample)) dans le [Kit de développement logiciel (SDK) Azure IOT C](https://github.com/Azure/azure-iot-sdk-c) est conçu pour utiliser le **point de terminaison d’appareil global** en tant qu’URI d’approvisionnement global (`global_prov_uri`) dans [prov_dev_client_sample.c](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning_client/samples/prov_dev_client_sample/prov_dev_client_sample.c)
:::code language="c" source="~/iot-samples-c/provisioning_client/samples/prov_dev_client_sample/prov_dev_client_sample.c" range="60-64" highlight="4":::
:::code language="c" source="~/iot-samples-c/provisioning_client/samples/prov_dev_client_sample/prov_dev_client_sample.c" range="138-144" highlight="3":::
Pour utiliser l’exemple avec un point de terminaison privé, le code en surbrillance ci-dessus est modifié afin d’utiliser le point de terminaison de service pour votre ressource DPS. Par exemple, si vous avez un point de terminaison de service `mydps.azure-devices-provisioning.net`, le code se présenterait comme suit.
```C
static const char* global_prov_uri = "global.azure-devices-provisioning.net";
static const char* service_uri = "mydps.azure-devices-provisioning.net";
static const char* id_scope = "[ID Scope]";
```
```C
PROV_DEVICE_RESULT prov_device_result = PROV_DEVICE_RESULT_ERROR;
PROV_DEVICE_HANDLE prov_device_handle;
if ((prov_device_handle = Prov_Device_Create(service_uri, id_scope, prov_transport)) == NULL)
{
(void)printf("failed calling Prov_Device_Create\r\n");
}
```
## <a name="request-a-private-endpoint"></a>Demander un point de terminaison privé
Vous pouvez demander un point de terminaison privé à une ressource DPS par ID de ressource. Pour effectuer cette demande, le propriétaire de la ressource doit vous fournir l’ID de ressource.
1. L’ID de ressource est fourni sous l’onglet Propriétés de la ressource DPS, comme indiqué ci-dessous.

> [!CAUTION]
> N’oubliez pas que l’ID de ressource contient l’ID d’abonnement.
2. Une fois que vous disposez de l’ID de ressource, suivez la procédure ci-dessus dans [Configurer un point de terminaison privé](#set-up-a-private-endpoint) jusqu’à l’étape 3 sur la page _Créer un point de terminaison privé - Ressource_. Cliquez sur **Connectez-vous à une ressource Azure par ID de ressource ou alias** et entrez les informations dans le tableau suivant.
| Champ | Valeur |
| :---- | :-----|
| **ID de ressource ou alias** | Entrez l’ID de ressource pour la ressource DPS. |
| **Sous-ressource cible** | Entrez **iotDps** |
| **Message de requête** | Entrez un message de demande pour le propriétaire de la ressource DPS.<br>Par exemple, <br>`Please approve this new private endpoint`<br>`for IoT devices in site 23 to access this DPS instance` |
Cliquez sur **Suivant : Configuration** pour configurer le réseau virtuel pour le point de terminaison privé.
3. Sur la page _Créer un point de terminaison privé - Configuration_, choisissez le réseau virtuel et le sous-réseau où le point de terminaison privé sera créé.
Cliquez sur **Suivant : Balises**, et spécifiez si nécessaire les balises de votre ressource.
4. Cliquez sur **Vérifier + créer**, puis sur **Créer** pour créer votre demande de point de terminaison privé.
5. Le propriétaire DPS verra la demande de point de terminaison privé dans la liste **Connexions des points de terminaison privés** sous l’onglet Mise en réseau DPS. Sur cette page, le propriétaire peut **approuver** ou **rejeter** la demande de point de terminaison privé, comme indiqué ci-dessous.

## <a name="pricing-private-endpoints"></a>Tarification des points de terminaison privés
Pour plus d’informations sur les tarifs, consultez [Tarification Liaison privée Azure](https://azure.microsoft.com/pricing/details/private-link).
## <a name="next-steps"></a>Étapes suivantes
Utilisez les liens ci-dessous pour en savoir plus sur les fonctionnalités de sécurité DPS :
* [Sécurité](./concepts-service.md#attestation-mechanism)
* [Prise en charge de TLS 1.2](tls-support.md) | 78.020833 | 588 | 0.776235 | fra_Latn | 0.977513 |
438ede00c425c585fba6debff9e50c78af898f27 | 368 | md | Markdown | content/zh/learn/level_2/lesson_39/video.md | daixijun/website | a1465f907af2760082a989e66c4211f3e6e9357f | [
"Apache-2.0"
] | 1 | 2022-02-01T03:12:28.000Z | 2022-02-01T03:12:28.000Z | content/zh/learn/level_2/lesson_39/video.md | daixijun/website | a1465f907af2760082a989e66c4211f3e6e9357f | [
"Apache-2.0"
] | null | null | null | content/zh/learn/level_2/lesson_39/video.md | daixijun/website | a1465f907af2760082a989e66c4211f3e6e9357f | [
"Apache-2.0"
] | null | null | null | ---
title: Kubernetes 核心实战-PV 与 PVC 使用
keywords: Kubesphere, Kubesphere learn
description: 在 Kubernetes 中使用 PV 和 PVC
video:
videoUrl: https://pek3b.qingstor.com/kubesphere-community/%E4%BA%91%E5%8E%9F%E7%94%9F%E5%AE%9E%E6%88%98/65%E3%80%81Kubernetes-%E6%A0%B8%E5%BF%83%E5%AE%9E%E6%88%98-%E5%AD%98%E5%82%A8%E6%8A%BD%E8%B1%A1-PV%E4%B8%8EPVC%E4%BD%BF%E7%94%A8.mp4
---
| 40.888889 | 238 | 0.722826 | yue_Hant | 0.510737 |
438f05a4b8a5b38f311ee7fa283eef06702e29df | 6,838 | md | Markdown | README.md | luisllamasbinaburo/RTIMULib-Arduino | ec352d472cfa913fdaf3a2f16f2f2b2db4019c37 | [
"MIT"
] | 7 | 2020-03-01T09:40:49.000Z | 2021-10-02T01:26:36.000Z | README.md | luisllamasbinaburo/RTIMULib-Arduino | ec352d472cfa913fdaf3a2f16f2f2b2db4019c37 | [
"MIT"
] | null | null | null | README.md | luisllamasbinaburo/RTIMULib-Arduino | ec352d472cfa913fdaf3a2f16f2f2b2db4019c37 | [
"MIT"
] | 3 | 2020-03-10T08:38:46.000Z | 2021-03-31T15:52:21.000Z | > Note: This repository is a clone of https://github.com/richards-tech/RTIMULib-Arduino, which have been removed, to preserve this great library for the community. All recognition for their original authors.
Terms of the original license have been maintained.
# RTIMULib-Arduino - a versatile 9-dof and 10-dof IMU library for the Arduino
RTIMULib-Arduino is the simplest way to connect a 9-dof or 10-dof IMU to an Arduino (Uno or Mega) and obtain fully fused quaternion or Euler angle pose data.
## Please note that this library is no longer supported.
## Features
RTIMULib-Arduino currently supports the following IMUs via I2C:
* InvenSense MPU-9150 single chip IMU.
* InvenSense MPU-6050 plus HMC5883 magnetometer on MPU-6050's aux bus (handled by the MPU-9150 driver).
* InvenSense MPU-6050 gyros + acclerometers. Treated as MPU-9150 without magnetometers.
* InvenSense MPU-9250 single chip IMU
* STM LSM9DS0 single chip IMU
* L3GD20H + LSM303D (optionally with the LPS25H) as used on the Pololu AltIMU-10 v4.
* L3GD20 + LSM303DLHC as used on the Adafruit 9-dof (older version with GD20 gyro) IMU.
* L3GD20H + LSM303DLHC (optionally with BMP180) as used on the new Adafruit 10-dof IMU.
* Bosch BNO055 9-dof IMU with onchip fusion (see notes below).
Pressure/temperature sensing is supported for the following pressure sensors:
* BMP180
* LPS25H
* MS5611
Select the IMU in use by editing libraries/RTIMULib/RTIMULibDefs.h and uncommenting one of the supported IMUs like this:
#define MPU9150_68 // MPU9150 at address 0x68
//#define MPU9150_69 // MPU9150 at address 0x69
//#define MPU9250_68 // MPU9250 at address 0x68
//#define MPU9250_69 // MPU9250 at address 0x69
//#define LSM9DS0_6a // LSM9DS0 at address 0x6a
//#define LSM9DS0_6b // LSM9DS0 at address 0x6b
//#define GD20HM303D_6a // GD20H + M303D at address 0x6a
//#define GD20HM303D_6b // GD20H + M303D at address 0x6b
//#define GD20M303DLHC_6a // GD20 + M303DLHC at address 0x6a
//#define GD20M303DLHC_6b // GD20 + M303DLHC at address 0x6b
//#define GD20HM303DLHC_6a // GD20H + M303DLHC at address 0x6a
//#define GD20HM303DLHC_6b // GD20H + M303DLHC at address 0x6b
//#define BNO055_28 // BNO055 at address 0x28
//#define BNO055_29 // BNO055 at address 0x29
Once this has been done, all example sketches will build for the selected IMU.
To enable a pressure sensor, uncomment one of the following lines in libraries/RTIMULib/RTIMULibDefs.h:
//#define BMP180 // BMP180
//#define LPS25H_5c // LPS25H at standard address
//#define LPS25H_5d // LPS25H at option address
//#define MS5611_76 // MS5611 at standard address
//#define MS5611_77 // MS5611 at option address
The actual RTIMULib and support libraries are in the library directory. The other top level directories contain example sketches.
*** Important note ***
It is essential to calibrate the magnetometers (except for the BNO055 IMU) or else very poor results will obtained, especially with the MPU-9150 and MPU-9250. If odd results are being obtained, suspect the magnetometer calibration!
### Special notes for the BNO055
The Bosch BNO055 can perform onchip fusion and also handles magnetometer calibration. Therefore, ArduinoMagCal need not be used. If the ArduinoIMU sketch is used, RTFusion RTQF performs the fusion using the BNO055's sensors. If the ArduinoBNO055 sketch is used, the BNO055's onchip fusion results are used. This results in a small flash memory footprint of approximately 11.5k bytes.
## The Example Sketches
### Build and run
To build and run the example sketches, start the Arduino IDE and use File --> Preferences and then set the sketchbook location to:
.../RTIMULib-Arduino
where "..." represents the path to the RTIMULib-Arduino directory. The directory is set up so that there's no need to copy the libraries into the main Arduino libraries directory although this can be done if desired.
### ArduinoMagCal
This sketch can be used to calibrate the magnetometers and should be run before trying to generate fused pose data. It also needs to be rerun at any time that the configuration is changed (such as different IMU or different IMU reference orientation). Load the sketch and waggle the IMU around, making sure all axes reach their minima and maxima. The display will stop updating when this occurs. Then, enter 's' followed by enter into the IDE serial monitor to save the data.
### ArduinoIMU
ArduinoIMU is the main demo sketch. It configures the IMU based on settings in RTIMUSettings.cpp. Change these to alter any of the parameters. The display is updated only 3 times per second regardless of IMU sample rate.
Note that, prior to version 2.2.0, the gyro bias is being calculated during the first 5 seconds. If the IMU is moved during this period, the bias calculation may be incorrect and the code will need to be restarted. Starting at version 2.2.0 this is no longer a problem and gyro bias will be reported as valid after the required number of stable samples have been obtained.
If using this sketch with the BNO055, RTFusionRTQF performs the fusion and the BNO055's internal fusion results are not used. Magnetometer calibration data, if present, is also not used as the BNO055 performs this onchip.
### ArduinoBNO055
This is a special version of ArduinoIMU for the BNO055 that uses the IMU's internal fusion results. It is still necessary to uncomment the correct BNO055 IMU address option in RTIMULibDefs.h. No magnetometer calibration is required as this is performed by the BNO055.
### ArduinoIMU10
This is exactly the same as ArduinoIMU except that it adds support for a pressure sensor. One of the pressure sensors in libraries/RTIMULib/RTIMULibDefs.h must be uncommented for this sketch to run. It will display the current pressure and height above standard sea level in addition to pose information from the IMU.
### ArduinoAccel
This is similar to ArduinoIMU except that it subtracts the rotated gravity vector from the accelerometer outputs in order to obtain the residual accelerations - i.e. those not attributable to gravity.
### RTArduLinkIMU
This sketch sends the fused data from the IMU over the Arduino's USB serial link to a host computer running either RTHostIMU or RTHostIMUGL (whcih can be found in the main RTIMULib repo). Basically just build and download the sketch and that's all that needs to be done. Magnetometer calibration can be performed either on the Arduino or within RTHostIMU/RTHostIMUGL.
| 64.509434 | 475 | 0.740275 | eng_Latn | 0.995705 |
438fa0ef1329f91f76ef4bbb31e57c1db18dd6b4 | 209 | md | Markdown | charts/fint-oneroster-rest-provider/README.md | FINTLabs/fint-helm-charts | 7664f6eb839a9223134519cb2e9b017da335a330 | [
"MIT"
] | null | null | null | charts/fint-oneroster-rest-provider/README.md | FINTLabs/fint-helm-charts | 7664f6eb839a9223134519cb2e9b017da335a330 | [
"MIT"
] | 3 | 2021-11-30T14:17:53.000Z | 2021-12-09T14:36:44.000Z | charts/fint-oneroster-rest-provider/README.md | FINTLabs/helm-charts | 7664f6eb839a9223134519cb2e9b017da335a330 | [
"MIT"
] | null | null | null | # fint-unleash
How to install fint-oneroster-rest-provider
### Example for vlfk
`helm install fint-oneroster-rest-provider -f values-oslo.yaml ../fint-oneroster-rest-provider/ --set environment=beta`
link: | 26.125 | 119 | 0.76555 | eng_Latn | 0.286784 |
4390ad24067f620adc79a1e120b3d8495e921851 | 868 | md | Markdown | 2-single-page-applications/learning-materials/EX_JS_ARRAYS_CHAINING.md | taylordotson/ux-developer-milestones | 2223f0fb3f19bb5661025e2b4d0ccdcc1e523fe7 | [
"Apache-2.0"
] | null | null | null | 2-single-page-applications/learning-materials/EX_JS_ARRAYS_CHAINING.md | taylordotson/ux-developer-milestones | 2223f0fb3f19bb5661025e2b4d0ccdcc1e523fe7 | [
"Apache-2.0"
] | null | null | null | 2-single-page-applications/learning-materials/EX_JS_ARRAYS_CHAINING.md | taylordotson/ux-developer-milestones | 2223f0fb3f19bb5661025e2b4d0ccdcc1e523fe7 | [
"Apache-2.0"
] | null | null | null | # Chaining Array Methods
### Setup
These commands are a helpful quick start. You may choose to ignore them completely and create your own directory structure. If you choose to use this recommendation, just copy the commands below and paste. It doesn't matter what directory you are currently in.
```bash
mkdir -p ~/workspace/exercises/javascript/chaining-methods && cd $_
touch index.html
touch chaining.js
```
### Requirements
Using one single line of JavaScript code, complete the following tasks on the array of integers below.
1. Sort the numbers in descending order (10, 9, 8, 7, etc).
1. Remove any integers greater than 19.
1. Multiply each remaining number by 1.5 and then subtract 1.
1. Then output (either in the DOM or the console) the sum of all the resulting numbers.
```js
const integers = [23, 15, 6, 3, 11, 20, 18, 7, 21, 1, 29, 10, 12, 8];
```
| 34.72 | 260 | 0.739631 | eng_Latn | 0.998126 |
439158ea1856dbb36b2564cd2354af719ff35144 | 223 | md | Markdown | README.md | JuanesGalvis/curso_travisCI | 095ba197d0c4c796f471b9f1d66a1829289625cb | [
"MIT"
] | null | null | null | README.md | JuanesGalvis/curso_travisCI | 095ba197d0c4c796f471b9f1d66a1829289625cb | [
"MIT"
] | null | null | null | README.md | JuanesGalvis/curso_travisCI | 095ba197d0c4c796f471b9f1d66a1829289625cb | [
"MIT"
] | null | null | null | # Proyecto - Curso Travis CI
> Este proyecto fue programado por el profesor Oscar Barajas para el curso de Travis CI
- Repositorio original: https://github.com/gndx/platzi-store
- Curso: https://platzi.com/cursos/travis/
| 31.857143 | 87 | 0.766816 | spa_Latn | 0.980652 |
439188fe2c6096be2a28c84abfcd244cc0839fba | 5,045 | md | Markdown | docs/vs-2015/modeling/processing-text-templates-by-using-a-custom-host.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/modeling/processing-text-templates-by-using-a-custom-host.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/modeling/processing-text-templates-by-using-a-custom-host.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Bir özel konak kullanarak metin şablonlarını işleme | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-tfs-dev14
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- text templates, in application or VS extension
- text templates, custom directive hosts
ms.assetid: affa3296-854d-47d6-9685-285f6d9ba5dc
caps.latest.revision: 35
author: gewarren
ms.author: gewarren
manager: douge
ms.openlocfilehash: 5fa54f6b7ea57b6374e8fef291c64f0e5369ffea
ms.sourcegitcommit: 9ceaf69568d61023868ced59108ae4dd46f720ab
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/12/2018
ms.locfileid: "49303465"
---
# <a name="processing-text-templates-by-using-a-custom-host"></a>Bir Özel Konak kullanarak Metin Şablonlarını İşleme
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
*Metin şablonu dönüştürme* işlem alır bir *metin şablonu* dosyası olarak girdi ve çıktı olarak bir metin dosyası oluşturur. Metin dönüştürme altyapısı çağırabilirsiniz bir [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] uzantısı, veya bir makine üzerinde çalışan bir tek başına uygulamasından [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] yüklenir. Ancak, sağlamanız gereken bir *metin şablonu oluşturma barındırıcısı*. Bu sınıf, derlemeler ve ekleme dosyaları gibi kaynakları bularak ve çıktı ve hata iletilerini işleme alarak şablonu ortama bağlar.
> [!TIP]
> Bir paket ya da içinde çalışacak uzantısı yazıyorsanız [!INCLUDE[vsprvs](../includes/vsprvs-md.md)], kendi ana bilgisayarınızı yazmak yerine metin şablonu oluşturma hizmetini kullanmayı düşünün. Daha fazla bilgi için [bir VS uzantısında metin dönüştürmeyi çağırma](../modeling/invoking-text-transformation-in-a-vs-extension.md).
> [!NOTE]
> Metin şablonu dönüştürmelerinin sunucu uygulamalarında kullanılması önerilmez. Metin şablonu dönüştürmelerinin tek bir iş parçacığı dışında kullanılması önerilmez. Bunun nedeni, metin şablonu oluşturma motorunun şablonları çevirmek, derlemek ve yürütmek için tek bir AppDomain öğesini yeniden kullanmasıdır. Çevrilen kod, iş parçacığı açısından güvenli olmak üzere tasarlanmamıştır. Altyapı olduğu gibi seri olarak, dosyalarını işlemek için tasarlanmış bir [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] projesinde tasarım zamanında.
>
> Çalışma zamanı uygulamaları için önceden işlenmiş metin şablonlarını kullanmayı: bkz [T4 metin şablonları ile çalışma süresi metni oluşturma](../modeling/run-time-text-generation-with-t4-text-templates.md).
Uygulamanız, derleme zamanında sabitlenmiş bir grup şablon kullanıyorsa, Önceden İşlenmiş Metin Şablonlarının kullanılması daha kolaydır. Uygulamanızı bir makine üzerinde çalıştırılacaksa da bu yaklaşımı kullanabilirsiniz [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] yüklü değil. Daha fazla bilgi için [T4 metin şablonları ile çalışma süresi metni oluşturma](../modeling/run-time-text-generation-with-t4-text-templates.md).
## <a name="executing-a-text-template-in-your-application"></a>Uygulamanızda Metin Şablonu Yürütme
Bir metin şablonu yürütmek için ProcessTemplate yöntemini çağırırsınız <xref:Microsoft.VisualStudio.TextTemplating.Engine?displayProperty=fullName>:
```
using Microsoft.VisualStudio.TextTemplating;
...
Engine engine = new Engine();
string output = engine.ProcessTemplate(templateString, host);
```
Uygulamanızın şablonu bularak sağlaması ve çıktı ile işlem yapması gerekir.
İçinde `host` parametresi uygulayan bir sınıf sağlamanız gerekir <xref:Microsoft.VisualStudio.TextTemplating.ITextTemplatingEngineHost>. Bu, Motor tarafından geri çağrılır.
Ana bilgisayar hataları günlüğe kaydedebilmeli, derleme ve ekleme dosyalarına yapılan başvuruları çözümleyebilmeli, şablonun yürütülebileceği bir Uygulama Etki Alanı sağlayabilmeli ve her yönerge için uygun işlemciyi çağırabilmelidir.
<xref:Microsoft.VisualStudio.TextTemplating.Engine?displayProperty=fullName> tanımlanan **Microsoft.VisualStudio.TextTemplating.\*. 0. dll**, ve <xref:Microsoft.VisualStudio.TextTemplating.ITextTemplatingEngineHost> tanımlanan **Microsoft.VisualStudio.TextTemplating.Interfaces.\*. 0. dll**.
## <a name="in-this-section"></a>Bu Bölümde
[İzlenecek yol: Özel Metin Şablonu Konağı Oluşturma](../modeling/walkthrough-creating-a-custom-text-template-host.md)
Metin şablonu işlevi dışında kullanılabilir hale getiren bir özel metin şablonu konağı oluşturma işlemi gösterilmektedir [!INCLUDE[vsprvs](../includes/vsprvs-md.md)].
## <a name="reference"></a>Başvuru
<xref:Microsoft.VisualStudio.TextTemplating.ITextTemplatingEngineHost>
## <a name="related-sections"></a>İlgili Bölümler
[Metin Şablonu Dönüştürme Süreci](../modeling/the-text-template-transformation-process.md)
Metin dönüştürmenin nasıl çalıştığını ve hangi kısımları özelleştirebileceğinizi açıklar.
[Özel T4 Metin Şablonu Yönerge İşlemcileri Oluşturma](../modeling/creating-custom-t4-text-template-directive-processors.md)
Metin şablonu yönerge işlemcilerine genel bakış sağlar.
| 68.175676 | 548 | 0.799405 | tur_Latn | 0.998592 |
4392340c04d7a6b65616ed59dcdb16ec5d58cef2 | 1,178 | md | Markdown | docusaurus/website/i18n/es/docusaurus-plugin-content-docs/current/plots/map.md | isle-project/isle-editor | 45a041571f723923fdab4eea2efe2df211323655 | [
"Apache-2.0"
] | 9 | 2019-08-30T20:50:27.000Z | 2021-12-09T19:53:16.000Z | docusaurus/website/i18n/es/docusaurus-plugin-content-docs/current/plots/map.md | isle-project/isle-editor | 45a041571f723923fdab4eea2efe2df211323655 | [
"Apache-2.0"
] | 1,261 | 2019-02-09T07:43:45.000Z | 2022-03-31T15:46:44.000Z | docusaurus/website/i18n/es/docusaurus-plugin-content-docs/current/plots/map.md | isle-project/isle-editor | 45a041571f723923fdab4eea2efe2df211323655 | [
"Apache-2.0"
] | 3 | 2019-10-04T19:22:02.000Z | 2022-01-31T06:12:56.000Z | ---
id: map
title: Map
sidebar_label: Map
---
Un mapa geográfico que puede ser suministrado con nombres de lugares o valores de longitud/latitud.
## Opciones
* __data__ | `object (required)`: objeto de matrices de valores para cada variable. Default: `none`.
* __scope__ | `string`: alcance del mapa que se mostrará. Default: `'world'`.
* __locations__ | `string`: nombre de la variable en "datos" que contiene los nombres de las ubicaciones. Default: `none`.
* __locationmode__ | `string`: ya sea "ISO-3", "Estados Unidos", o "nombres de países" que denotan cómo están codificados los valores en "lugares". Default: `'country names'`.
* __longitude__ | `string`: nombre de la variable en "datos" que contiene valores de longitud. Default: `none`.
* __latitude__ | `string`: nombre de la variable en "datos" que contiene valores de latitud. Default: `none`.
* __showLand__ | `boolean`: si mostrar los rasgos geográficos en el mapa. Default: `false`.
* __aggregation__ | `string`: cadena que indica cómo agregar los valores de cada ubicación (ya sea "suma", "promedio", "mínimo", "máximo", "modo", "mediana", "recuento", "primero" o "último"). Default: `'sum'`.
## Ejemplos
| 53.545455 | 210 | 0.72326 | spa_Latn | 0.962053 |
43925a5056f1f64b382066dd28d3ca69b61cd8dc | 199 | markdown | Markdown | _projects/4_project.markdown | makhshari/makhshari.github.io | d66a80e5d1a4f79c7d547e28b0af870a08e6fe7b | [
"MIT"
] | null | null | null | _projects/4_project.markdown | makhshari/makhshari.github.io | d66a80e5d1a4f79c7d547e28b0af870a08e6fe7b | [
"MIT"
] | null | null | null | _projects/4_project.markdown | makhshari/makhshari.github.io | d66a80e5d1a4f79c7d547e28b0af870a08e6fe7b | [
"MIT"
] | null | null | null | ---
layout: page
title: RTTio
description: Remote TV Tranmitters IoT-based Watcher/Controller
img: /assets/img/4.jpg
importance: 2
category: Projects
redirect: https://github.com/makhshari/RTTio
--- | 22.111111 | 63 | 0.773869 | eng_Latn | 0.371578 |
4392f6f5b79d36b3098017bdc989eba66b38b1f8 | 2,784 | md | Markdown | vendor/thamtech/yii2-uuid/README.md | darwin2286/ereadinessv2 | ea0aefbd69eb3757822232bbc3313a442eadc38d | [
"BSD-3-Clause"
] | 30 | 2015-12-14T20:10:25.000Z | 2022-02-07T14:26:09.000Z | vendor/thamtech/yii2-uuid/README.md | darwin2286/ereadinessv2 | ea0aefbd69eb3757822232bbc3313a442eadc38d | [
"BSD-3-Clause"
] | 6 | 2018-09-14T14:00:04.000Z | 2021-02-11T16:46:55.000Z | vendor/thamtech/yii2-uuid/README.md | darwin2286/ereadinessv2 | ea0aefbd69eb3757822232bbc3313a442eadc38d | [
"BSD-3-Clause"
] | 6 | 2015-10-09T03:23:28.000Z | 2019-12-31T07:10:30.000Z | Yii 2 UUID Helper
-----------------
UUID Helper and validator for Yii 2.
This library interfaces with [ramsey/uuid](https://github.com/ramsey/uuid) to
generate
[universally unique identifiers](https://en.wikipedia.org/wiki/Universally_unique_identifier).
For license information check the [LICENSE](LICENSE.md)-file.
[](https://packagist.org/packages/thamtech/yii2-uuid)
[](https://travis-ci.org/thamtech/yii2-uuid)
[](https://scrutinizer-ci.com/g/thamtech/yii2-uuid/)
[](https://scrutinizer-ci.com/g/thamtech/yii2-uuid/)
Installation
------------
The preferred way to install this extensions is through [composer](https://getcomposer.org/download/).
Either run
```
php composer.phar require --prefer-dist thamtech/yii2-uuid
```
or add
```
"thamtech/yii2-uuid": "*"
```
to the `require` section of your `composer.json` file.
Usage
-----
## New UUID
Generate a new UUID (version 4 by default):
```php
$uuid = \thamtech\uuid\helpers\UuidHelper::uuid();
```
## Ad-Hoc Validation
Validate that a string is formatted in the canonical format using
hexadecimal text with inserted hyphen characters (case insensitive):
```php
$uuid = 'de305d54-75b4-431b-adb2-eb6b9e546014';
$isValid = \thamtech\uuid\helpers\UuidHelper::isValid($uuid); // true
$uuid = 'not-a-uuid';
$isValid = \thamtech\uuid\helpers\UuidHelper::isValid($uuid); // false
// or using the Validator class directly
$validator = new \thamtech\uuid\validators\UuidValidator();
if ($validator->validate($uuid, $error)) {
// valid
} else {
// not valid
echo $error
}
```
Or you can include the `use` lines, especially if you will be making multiple
uuid calls within a file:
```php
use thamtech\uuid\helpers\UuidHelper;
use thamtech\uuid\helpers\UuidValidator;
// ...
$uuid = 'de305d54-75b4-431b-adb2-eb6b9e546014';
$isValid = UuidHelper::isValid($uuid); // true
$uuid = 'not-a-uuid';
$isValid = UuidHelper::isValid($uuid); // false
// or using the Validator class directly
$validator = new UuidValidator();
if ($validator->validate($uuid, $error)) {
// valid
} else {
// not valid
echo $error
}
```
## Field Validation
Incorporate this same validation into your model:
```php
public function rules()
{
return [
[['uuid'], 'thamtech\uuid\validators\UuidValidator'],
];
}
```
See Also
--------
* [ramsey/uuid](https://github.com/ramsey/uuid)
* [Universally unique identifiers](https://en.wikipedia.org/wiki/Universally_unique_identifier)
| 25.309091 | 140 | 0.710848 | eng_Latn | 0.264358 |
4393863ffcbf1e2639ea4ebd67a53e451536d403 | 205 | md | Markdown | swift/Swift入门教程系列/README.md | mythkiven/devTips | eff1f6aa0e38df7161d06644de1fd960c383eb16 | [
"MIT"
] | 34 | 2019-05-27T09:34:33.000Z | 2022-02-05T07:27:02.000Z | swift/Swift入门教程系列/README.md | mythkiven/devTips | eff1f6aa0e38df7161d06644de1fd960c383eb16 | [
"MIT"
] | 2 | 2020-02-27T02:44:48.000Z | 2022-03-19T15:31:17.000Z | swift/Swift入门教程系列/README.md | mythkiven/devTips | eff1f6aa0e38df7161d06644de1fd960c383eb16 | [
"MIT"
] | 14 | 2017-02-06T02:03:36.000Z | 2018-10-28T08:43:01.000Z |
## 本系列是Swift官方入门教程,持续更新添加中,欢迎Watch关注
本demo分9个小demo,分别实现的功能如下,是Swift入门的demo,欢迎学习哦~
- 1、构建基本的UI
- 2、storyboard与代码交互
- 3、解说视图控制器
- 4、自定义控件
- 5、数据model
- 6、创建tableview
- 7、导航控制器
- 8、逻辑处理
- 9、数据持久化
| 8.913043 | 44 | 0.712195 | yue_Hant | 0.598922 |
4393e610434aa6aab0f14d97b180aff51a043aef | 579 | md | Markdown | src/Find the Characters Counterpart Char Code/README.md | Pustur/edabit-js-challenges | 1539fa6eb2ef6e1f61cb4cc02ed3a3eb382bae5b | [
"MIT"
] | 7 | 2019-11-10T21:42:30.000Z | 2022-02-17T14:26:03.000Z | src/Find the Characters Counterpart Char Code/README.md | Pustur/edabit-js-challenges | 1539fa6eb2ef6e1f61cb4cc02ed3a3eb382bae5b | [
"MIT"
] | null | null | null | src/Find the Characters Counterpart Char Code/README.md | Pustur/edabit-js-challenges | 1539fa6eb2ef6e1f61cb4cc02ed3a3eb382bae5b | [
"MIT"
] | 3 | 2020-05-11T11:01:37.000Z | 2021-05-07T09:48:10.000Z | # Find the Characters Counterpart Char Code
`Formatting` `Strings`
[View on Edabit](https://edabit.com/challenge/fbaLZPNjTvYtY444B)
Create a function that takes a single character as an argument and returns the char code of its lowercased / uppercased counterpart.
### Examples
```js
Given that:
- "A" char code is: 65
- "a" char code is: 97
counterpartCharCode("A") ➞ 97
counterpartCharCode("a") ➞ 65
```
### Notes
- The argument will always be a single character.
- Not all inputs will have a counterpart (e.g. numbers), in which case return the inputs char code.
| 23.16 | 132 | 0.728843 | eng_Latn | 0.979618 |
43954e8105bcbd443c5a2d2e9e56e33172cfe28b | 2,686 | md | Markdown | README.md | christoomey/your-first-vim-plugin | c9c2a454a5406d2092418eaccf027e48b0f6715d | [
"MIT"
] | 131 | 2015-02-01T16:10:15.000Z | 2022-03-12T09:45:00.000Z | README.md | christoomey/your-first-vim-plugin | c9c2a454a5406d2092418eaccf027e48b0f6715d | [
"MIT"
] | 4 | 2015-03-22T08:38:06.000Z | 2020-08-06T10:33:32.000Z | README.md | christoomey/your-first-vim-plugin | c9c2a454a5406d2092418eaccf027e48b0f6715d | [
"MIT"
] | 23 | 2015-03-22T08:30:11.000Z | 2021-11-18T00:05:29.000Z | Your First Vim Plugin
=====================
These are the notes and samples from my August 2014 Vim talk, 'Your First Vim
Plugin'. Their official home is [this repo][]. You can also view a [recording
of the talk][].
[this repo]: https://github.com/christoomey/your-first-vim-plugin
[recording of the talk]: https://www.youtube.com/watch?v=lwD8G1P52Sk
Samples
-------
2. [Fix spelling error](./spelling-error/)
1. [Move Item To List Top](./move-em/)
4. [Markdown underline](./markdown-underline/)
3. [Extract Variable](./extract-variable/)
The Simple Path to Your First Plugin
------------------------------------
0. Know how to edit, save, and source your vimrc
1. Capture normal mode actions, repeat with `:normal`
2. Wrap `:normal` call in a named function
1a. Poke around in the REPL / command line
1b. Add ehcom debug statements, use `:messages` to review
1c. Temporarily export to global var
2. Wrap it up
- script local your function
- create a command
- package it up!
What Can You do with Vim Plugins?
---------------------------------
### Custom Text Objects
Not mine, but still great!
- [text-obj-indent][] - Indenation text object
- [ruby-block][] - Ruby method, class, and block text objects
[text-obj-indent]: https://github.com/kana/vim-textobj-indent
[ruby-block]: https://github.com/nelstrom/vim-textobj-rubyblock
### Operators
- [titlecase][] - Titlecase based on a motion
- [sort-motion][] - Sort lines or arguments, based on a vim motion
[titlecase]: https://github.com/christoomey/vim-titlecase
[sort-motion]: https://github.com/christoomey/vim-sort-motion
### System Integration
- [tmux nav][] - Navigate seamlessly between vim and tmux splits
- [tmux runner][] - Send commands from Vim to adjacent tmux panes
[tmux runner]: https://github.com/christoomey/vim-tmux-runner
[tmux nav]: https://github.com/christoomey/vim-tmux-navigator
### Raw Efficiency
- [spec-runner][] - Efficient spec running from withing vim
- [rfactory][] - Rails.vim inspired navigation commands for FactoryGirl
- [conflicted][] - Powerful git merge/rebase conflict resolution
- [quicklink][] - Insert links
[spec-runner]: https://github.com/gabebw/vim-spec-runner
[rfactory]: https://github.com/christoomey/vim-rfactory
[conflicted]: https://github.com/christoomey/vim-conflicted
[quicklink]: https://github.com/christoomey/vim-quicklink
### Pro Tips
- Symlink in your local copies
- [TDD Vimscript][]
- ':h functions'
- system()
- `echom` and `:messages` for debugging
- [learn-vimscript-the-hard-way][]
[TDD Vimscript]: http://robots.thoughtbot.com/write-a-vim-plugin-with-tdd
[learn-vimscript-the-hard-way]: http://learnvimscriptthehardway.stevelosh.com/
| 31.6 | 77 | 0.706627 | eng_Latn | 0.552969 |
4395c04189cd8a473dd700923a174cbffc76c033 | 78 | md | Markdown | README.md | cfrieze/teamscheduler | e2b54cc78462adf5aa3bd0be49b555591c44d6a7 | [
"MIT"
] | null | null | null | README.md | cfrieze/teamscheduler | e2b54cc78462adf5aa3bd0be49b555591c44d6a7 | [
"MIT"
] | null | null | null | README.md | cfrieze/teamscheduler | e2b54cc78462adf5aa3bd0be49b555591c44d6a7 | [
"MIT"
] | null | null | null | # teamscheduler
Test application to schedule 4 sports teams for a tournament.
| 26 | 61 | 0.820513 | eng_Latn | 0.995838 |
43968548345f0abd99e967b6aa9df3e5ca57c117 | 117 | md | Markdown | README.md | Botfather/Location-Manager-iOS | c728d29cc24ef6b9a08d4078a10eb90735809cf2 | [
"MIT"
] | null | null | null | README.md | Botfather/Location-Manager-iOS | c728d29cc24ef6b9a08d4078a10eb90735809cf2 | [
"MIT"
] | null | null | null | README.md | Botfather/Location-Manager-iOS | c728d29cc24ef6b9a08d4078a10eb90735809cf2 | [
"MIT"
] | null | null | null | # Location-Manager-iOS
Implemented to fetch the user's current location, both continuously and as a single request.
| 39 | 93 | 0.811966 | eng_Latn | 0.999617 |
4396f28ac739cf7887293d78ec9ff39106c51420 | 3,765 | md | Markdown | _notes/Dropbox/productivity.md | happping/Digital-Garden | 6f0f8e530cf681317c67da99eda024f7f911f0be | [
"MIT"
] | null | null | null | _notes/Dropbox/productivity.md | happping/Digital-Garden | 6f0f8e530cf681317c67da99eda024f7f911f0be | [
"MIT"
] | null | null | null | _notes/Dropbox/productivity.md | happping/Digital-Garden | 6f0f8e530cf681317c67da99eda024f7f911f0be | [
"MIT"
] | null | null | null | ---
title: productivity
---
# 1. The Best NoteTaking tools
I am using four note-taking systems:
1. [[notion]] => daily log
2. [[obsidian]] => wikipage
3. [[typora]] x [[dropbox]] x [[write1]] => for personal note
4. [[Google Keep]] => quick note taking for instant thoughts.
### Desktop Stickynote
* [[Simple sticky note]] -> [link](https://www.simplestickynotes.com/)
Better than MS Sticky Notes because it can always stay on top!
(why don't you do this, MS, dummies?)
### Other tools I tried but not really keen to
- airtable 🤮
# 2. Task Management
|                  | Min's Experience        | Pro                            | Con                                                                    |
| ---------------- | ----------------------- | ------------------------------ | ---------------------------------------------------------------------- |
| [[notion]]       | Min's Primary tool      | Handy 👍                        | No recursive tasks available (you need another service to mix in)      |
| [[todoist]]      |                         | complex recursive task support | My mistake, but I put in too many tasks and quit because it was overwhelming |
| [[MS To Do]]     | Personal item checklist | UI is pretty                   | no location-based reminders + far fewer task-setup options compared to Todoist |
| [[Google Tasks]] |                         |                                | feels like work..                                                      |
| [[Google Keep]]  |                         |                                |                                                                        |
# 3. Great Tools for Learning Faster
1. [[Miro]]
# 4. Prototype Tools
1.
# 5. Automate as much as possible
tag : [[automation]]
## Comparison of service
| | Pro | Con |
| ----------------- | ------------------------------- | ------------------------- |
| apple's Shortcuts | Free, Mobile                    | lots of tedious manual setup to do something simple |
| ifttt             | Mobile + PC , Very easy to use  | Only 2 automations are free |
| zapier            | Lots of tools available         | Only 5 automations are free; MS Teams integration failed |
| [[n8n]].io        | Free ([[opensource]])           | requires Node.js and some experience to set up |
| MS Power Automate | | only 90 days free |
| automate.io | | no netlify yet |
| integromat | | |
## Desktop Automation with n8n.io
1. Install [[node.js]]
2. Command line -> `npm install -g n8n`
3. `n8n start` (full commands below)
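Or as plain commands (assuming a global install is fine):
```
npm install -g n8n
n8n start
```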
## How to Create [[Youtube]] Alarm on the phone.
[[apple's Shortcuts]]
## How to play different music between working and break time.
[[apple's focus]] + [[apple's Shortcuts]] + any music player
## How to update google sheets row on the phone
[[ifttt]]
## How to send invoice automatically by one click on the phone
[[google sheets]]
## How to receive messages in [[notion]]
[[typeform]] + [[calendly]] + [[notion]]
## How to get location-based reminders with Microsoft To Do
[[MS To Do]] x [[Apple's Shortcuts]]
# 4. Collaboration Tools
### [[Calendly]], a great way to share my availability and set meeting times with teams in different time zones
Pro
- Freemium
- lots of integration tools such as [[zapier]]
- Gmail extension and the time-poll option are really useful
Con
- Custom color options are not available for free users
* Troubleshooting: "I look free while I am busy on Google Calendar!"
  1. go to Google Calendar settings
  2. General -> Events from Gmail -> privacy of email events: **calendar default**
| 27.683824 | 151 | 0.50757 | eng_Latn | 0.965128 |
4397154a7b3e8bf0f0c775537a11e827b837c757 | 12,774 | md | Markdown | CHANGELOG.md | ymind/rsql-querydsl | f1fccedf10e029222a62742da83f7ccb7ec6adc3 | [
"MIT"
] | 8 | 2020-07-28T08:56:54.000Z | 2022-03-10T09:30:34.000Z | CHANGELOG.md | ymind/rsql-querydsl | f1fccedf10e029222a62742da83f7ccb7ec6adc3 | [
"MIT"
] | 1 | 2021-12-28T03:04:58.000Z | 2021-12-28T03:04:58.000Z | CHANGELOG.md | ymind/rsql-querydsl | f1fccedf10e029222a62742da83f7ccb7ec6adc3 | [
"MIT"
] | 1 | 2020-07-28T08:56:59.000Z | 2020-07-28T08:56:59.000Z | # Changelog
## 0.7.11 (2021-10-27)
### Code Refactoring
- remove `FunctionTypeHandler` ([49966547](https://github.com/ymind/rsql-querydsl/commit/4996654770efb8721c542019f133ace6808be37a))
### Chores
- **deps**: bumped spring version to 2.5.6 ([7139ac4a](https://github.com/ymind/rsql-querydsl/commit/7139ac4a477f35200a5910c770f2f2d988f49dc2))
- **deps**: bumped jackson-module-kotlin version to 2.13.0 ([22f27daa](https://github.com/ymind/rsql-querydsl/commit/22f27daa5ad4ccfe42b40d098eeb2e98303b74b0))
- code cleanup ([7e2df15d](https://github.com/ymind/rsql-querydsl/commit/7e2df15d5f8b52a5cb43c4e0c2ea7d403cab7ea6))
### Build System
- **chore**: bumped querydsl version to 5.0.0 ([9323216f](https://github.com/ymind/rsql-querydsl/commit/9323216f47d5cc132f26eb47055d675c0851b47d))
- **gradle**: bumped gradle wrapper version to 7.2 ([31519600](https://github.com/ymind/rsql-querydsl/commit/3151960067e110c1e6ed71efa5ebfc8481493af4))
- **kotlin**: bumped kotlin version from to 1.5.31 ([ae84c9b4](https://github.com/ymind/rsql-querydsl/commit/ae84c9b48d5b530c8f5b375703e0a2d8e2e102d6))
- **gradle/plugin**: bumped org.jlleitschuh.gradle.ktlint version to 10.2.0 ([0faf97f3](https://github.com/ymind/rsql-querydsl/commit/0faf97f3c59c544c6ee652b574a8e2d197a7ed96))
## 0.7.3 (2021-07-12)
### Features
- support rsql node interceptor ([b2d74266](https://github.com/ymind/rsql-querydsl/commit/b2d74266131a057f2aa1a78a7286a899e5c10eb6))
### Code Refactoring
- **common**: remove FieldNotSupportedException ([56bb0aed](https://github.com/ymind/rsql-querydsl/commit/56bb0aed42d8fee331ea67cdab59db5c393cafbc))
- optimize entityClass acquisition mechanism ([fdae805c](https://github.com/ymind/rsql-querydsl/commit/fdae805c1d95f62ea6c66431db9f50356d5656ae))
### Chores
- **deps**: bumped jackson-module-kotlin version to 2.12.4 ([a3fb9c6d](https://github.com/ymind/rsql-querydsl/commit/a3fb9c6d62fb9deb973aec13c5eeff5e5e584ac7))
- **deps**: bumped spring version to 2.5.2 ([84d8f740](https://github.com/ymind/rsql-querydsl/commit/84d8f74072d0746a95cd2e24466d2cab3bc61671))
### Build System
- **gradle**: bumped gradle wrapper version to 7.1.1 ([6c5f931d](https://github.com/ymind/rsql-querydsl/commit/6c5f931dc6707455b4214bcdab5e2fc485d4acc7))
- **kotlin**: bumped kotlin version from to 1.5.10 ([f0365224](https://github.com/ymind/rsql-querydsl/commit/f0365224e8c0447b52cf7e1c0d71096e18ad4000))
- **gradle/plugin**: bumped com.github.ben-manes.versions version to 0.39.0 ([b9d8788d](https://github.com/ymind/rsql-querydsl/commit/b9d8788d67b9799958c9abdfc47f6bd750f0c0f5))
- **gradle/plugin**: bumped se.patrikerdes.use-latest-versions version to 0.2.17 ([64811fd2](https://github.com/ymind/rsql-querydsl/commit/64811fd2cc7dfe9cdc06612bd01aac4ca49625a2))
- **gradle/plugin**: bumped org.jlleitschuh.gradle.ktlint version to 10.1.0 ([27db73b9](https://github.com/ymind/rsql-querydsl/commit/27db73b9a516fbbddf1d1d6eaacca87b7134ef15))
## 0.6.0 (2021-05-20)
### Features
- support auto detect datetime format ([a9b75691](https://github.com/ymind/rsql-querydsl/commit/a9b75691aca805cb28115698b1d270a7d279e416))
### Build System
- **gradle**: bumped gradle wrapper version to 7.0.2 ([e11163ac](https://github.com/ymind/rsql-querydsl/commit/e11163ac933fce0a3f43ea8f97803e5da2109e5d))
## 0.5.22 (2021-05-10)
### Bug Fixes
- `globalPredicate` not work when `where` is null ([e8aa876f](https://github.com/ymind/rsql-querydsl/commit/e8aa876f30d95612da8641dfc6ceac43f2a07d24))
### Code Refactoring
- **util**: support `yyyy-MM-dd'T'HH:mm:ss.SSS` date format ([4cc949b0](https://github.com/ymind/rsql-querydsl/commit/4cc949b042c7a685c1076be27e7051cc811dae26))
- upgrade deprecated toLowerCase() method ([5f5c592a](https://github.com/ymind/rsql-querydsl/commit/5f5c592a91364ab84530dfe896e3a7dc52318439))
### Chores
- **deps**: upgrade spring version to 2.4.5 ([9e7bbeca](https://github.com/ymind/rsql-querydsl/commit/9e7bbecae41d7f64113cc55e52d8464d314830a1))
- **deps**: upgrade jackson-module-kotlin version to 2.12.3 ([51c618af](https://github.com/ymind/rsql-querydsl/commit/51c618afe44bb7b813f61a3bb21ed1791c0cca0c))
- **deps**: upgrade commons-lang3 version to 3.12.0 ([ea90fc5f](https://github.com/ymind/rsql-querydsl/commit/ea90fc5f7aea6366001c56e26660863bd9468dde))
### Build System
- **gradle**: bumped gradle wrapper version to 7.0 ([610a234b](https://github.com/ymind/rsql-querydsl/commit/610a234b357e851c98ad6a282349fb8bf7ecac6b))
- **kotlin**: bumped kotlin version from to 1.5.0 ([f6b37888](https://github.com/ymind/rsql-querydsl/commit/f6b378885a288b2be8a81ba6c244b221d9a2995b))
- **gradle/plugin**: upgrade com.github.ben-manes.versions version to 0.38.0 ([f2876820](https://github.com/ymind/rsql-querydsl/commit/f2876820d4c111e20a4156771256227e36e6fda7))
- **gradle/plugin**: upgrade se.patrikerdes.use-latest-versions version to 0.2.16 ([732cb1e0](https://github.com/ymind/rsql-querydsl/commit/732cb1e0730bdd23b3a5d5ca3f3c9fb73428dac3))
- **gradle/plugin**: upgrade org.jlleitschuh.gradle.ktlint version to 10.0.0 ([543fe50a](https://github.com/ymind/rsql-querydsl/commit/543fe50ad356675dd52308453ebc8c8dc332477e))
## 0.5.11 (2021-02-01)
### Bug Fixes
- when has only a single value, the `in` operation will throw an exception ([2e748d22](https://github.com/ymind/rsql-querydsl/commit/2e748d225c4f42604bd6edf0586deaf688d53ee1))
### Chores
- **deps**: bumped spring version from 2.3.4.RELEASE to 2.4.2 ([5dc0c3c7](https://github.com/ymind/rsql-querydsl/commit/5dc0c3c73aefb250e63c3ced28f33c48ff3bc9a0))
- **deps**: bumped jackson-module-kotlin version from 2.11.3 to 2.12.1 ([4edce4cc](https://github.com/ymind/rsql-querydsl/commit/4edce4cc9eb1af9df1f1b05de63e1b9c74450318))
- **gradle**: add use-latest-versions plugin ([f00b578f](https://github.com/ymind/rsql-querydsl/commit/f00b578fbbf8fa65944a82aab05a991fa80b9057))
### Build System
- **gradle**: bumped gradle wrapper version from 6.6.1 to 6.8.1 ([f279f322](https://github.com/ymind/rsql-querydsl/commit/f279f322ec06daaa4f767fe753d96b4700c401c5))
- **kotlin**: bumped kotlin version from 1.4.10 to 1.4.21-2 ([c204e14d](https://github.com/ymind/rsql-querydsl/commit/c204e14d76c9427fdfcfe2e8f01821e013d08bfa))
## 0.5.5 (2020-10-09)
### BREAKING CHANGES
- rename `selectFrom` to `from` ([18be5930](https://github.com/ymind/rsql-querydsl/commit/18be59302ca8d89b45af18de94ffb31b7cb60454))
- rename `size` to `limit` ([7cfff03c](https://github.com/ymind/rsql-querydsl/commit/7cfff03c544e283ff95fc9f1c0901433d79e2fd7))
- remove `page-string` and `limit-string` support ([289e780a](https://github.com/ymind/rsql-querydsl/commit/289e780a2ed0e24a8c13e9ecda680599703d887a))
### Bug Fixes
- **common**: fix `FieldNotSupportedException` arguments type ([1d8497aa](https://github.com/ymind/rsql-querydsl/commit/1d8497aa71e1a636cf4e4839af1f6557ae85e458))
- QuerydslRsql.buildPredicate() return null when globalPredicate is null ([ee0c1191](https://github.com/ymind/rsql-querydsl/commit/ee0c11913899e95c1140859831a84e354aa5f84a))
### Features
- support custom entity field type handler ([063203a0](https://github.com/ymind/rsql-querydsl/commit/063203a00d26c694d1e20de24a36e5cddbf49b4e))
### Performance Improvements
- **util**: enhance DateUtil ([e59fcc40](https://github.com/ymind/rsql-querydsl/commit/e59fcc40afe374a7b368fbe8c2f706fd30581016))
### Code Refactoring
- rename `RsqlConfig.getFieldTypeHandlers` to `RsqlConfig.addFieldTypeHandlers` ([e52044dc](https://github.com/ymind/rsql-querydsl/commit/e52044dc7025fc502ebee30787981d88d4300a62))
- make regexOptions private ([f254b81a](https://github.com/ymind/rsql-querydsl/commit/f254b81a1344000f6f49760b5b27507c6d0d54d4))
- make EntityManager none null ([882d5ade](https://github.com/ymind/rsql-querydsl/commit/882d5adeacbb72be7377a959309c124f249c98c6))
- optimize type handlers ([758a5abc](https://github.com/ymind/rsql-querydsl/commit/758a5abcc7bd98d8868a1a5e350dac60f7a78aad))
- remove name field from RsqlOperator ([64ab32bc](https://github.com/ymind/rsql-querydsl/commit/64ab32bcdbbd5723387b0662dcb5c04d53066c08))
- typo fix and code cleanup ([0e5e4d09](https://github.com/ymind/rsql-querydsl/commit/0e5e4d092eab0de781ded2aefc32e9171676a081))
- split SortFieldTypeHandler ([09b9a36c](https://github.com/ymind/rsql-querydsl/commit/09b9a36c1e4d339ad06a769d487b42e2c039913e))
### Chores
- **bumped**: remove versions plugins ([c9e4c0a7](https://github.com/ymind/rsql-querydsl/commit/c9e4c0a70a6971ae445f7a70bfaff8df4755f029))
- **deps**: bumped spring boot from 2.3.0.RELEASE to 2.3.1.RELEASE ([4560b4be](https://github.com/ymind/rsql-querydsl/commit/4560b4be13fbb1eb221ba06e4b721c977ddaf399))
- **deps**: bumped spring boot from 2.3.1.RELEASE to 2.3.2.RELEASE ([6acd26b5](https://github.com/ymind/rsql-querydsl/commit/6acd26b59dc7010e26a5a040be0e325854586782))
- **deps**: bumped jackson-module-kotlin from 2.11.1 to 2.11.2 ([9028d3cc](https://github.com/ymind/rsql-querydsl/commit/9028d3cc13e9dea97fc5e9d76b553f1c16f51bcf))
- **deps**: bumped commons-lang3 from 3.10 to 3.11 ([116b18ec](https://github.com/ymind/rsql-querydsl/commit/116b18ec1f3ef310d18f494b688726f9a01fac92))
- **deps**: bumped spring version from 2.3.2.RELEASE to 2.3.3.RELEASE ([92b861d3](https://github.com/ymind/rsql-querydsl/commit/92b861d3f6408da16ce4dd5c7ad17ef36799bfe0))
- **deps**: bumped spring version from 2.3.3.RELEASE to 2.3.4.RELEASE ([5fa1ab45](https://github.com/ymind/rsql-querydsl/commit/5fa1ab455a4041a58ee696ce2ea1a3f325b4a9ee))
- **gradle**: bumped team.yi.semantic-gitlog from 0.5.3 to 0.5.12 ([16361c76](https://github.com/ymind/rsql-querydsl/commit/16361c76a1f47b4a6c0bbe39eaf7163a2e02387b))
- **gradle**: bumped team.yi.semantic-gitlog version from 0.5.12 to 0.5.13 ([8be2eb39](https://github.com/ymind/rsql-querydsl/commit/8be2eb394c726a643847a30c8534a8ea64a4fa54))
- **gradle**: bumped org.jlleitschuh.gradle.ktlint version from 9.3.0 to 9.4.0 ([dc3019d4](https://github.com/ymind/rsql-querydsl/commit/dc3019d4f07c5fc8ef6ac948d80ca8caad355fd1))
### Tests
- print sql and parameters ([0ede76b7](https://github.com/ymind/rsql-querydsl/commit/0ede76b797b702be338c2bdee41e6f0eeddf2226))
### Styles
- add ktlint plugin and fix code styles ([21f6a89b](https://github.com/ymind/rsql-querydsl/commit/21f6a89bd7f557217c0da854b4f7b6c37ef9058f))
- adjust code styles ([1cae9169](https://github.com/ymind/rsql-querydsl/commit/1cae9169dfec2cba57c46d795572d952954e2cdf))
- code cleanup ([c2fde624](https://github.com/ymind/rsql-querydsl/commit/c2fde624c007ebcec7e428dec27cd467597070a3))
### Documentation
- **changelog**: adjust changelog templates ([e2a191d6](https://github.com/ymind/rsql-querydsl/commit/e2a191d66ae8a183f9eb193a98d0f0afcb92eb44))
- update docs ([251e3f7f](https://github.com/ymind/rsql-querydsl/commit/251e3f7fd4680ffd86885ad7117873af25c06d6c))
### Build System
- **chore**: bumped querydsl version from 4.3.1 to 4.4.0 ([7bb0af4f](https://github.com/ymind/rsql-querydsl/commit/7bb0af4fca5c10322d86510daa2fa55a87fd84b8))
- **chore**: bumped jackson-module-kotlin version from 2.11.2 to 2.11.3 ([1972f7b3](https://github.com/ymind/rsql-querydsl/commit/1972f7b3b071867ff8eb24a9029fb704ed509a20))
- **gradle**: bumped gradle wrapper version from 6.4.1 to 6.5.1 ([da2ca454](https://github.com/ymind/rsql-querydsl/commit/da2ca45418337e1329e8c6e5a6c4b82856eba68a))
- **gradle**: bumped gradle wrapper version from 6.5.1 to 6.6.1 ([19ed075c](https://github.com/ymind/rsql-querydsl/commit/19ed075c519e9ce957892ddde5ebc690b8bec3f5))
- **gradle**: bumped semantic-gitlog version from 0.5.13 to 0.5.17 ([a666c175](https://github.com/ymind/rsql-querydsl/commit/a666c175d2eb26151e5fdc6d3011ec9788ba241c))
- **gradle**: bumped ktlint version from 9.4.0 to 9.4.1 ([136c0925](https://github.com/ymind/rsql-querydsl/commit/136c092548a2a6def6304de741f28937d043c9e2))
- **kotlin**: bumped kotlin version from 1.3.72 to 1.4.0 ([727330e9](https://github.com/ymind/rsql-querydsl/commit/727330e9964bf1efb96f35219ef6012246997f91))
- **kotlin**: bumped kotlin version from 1.4.0 to 1.4.10 ([0aeb5d0c](https://github.com/ymind/rsql-querydsl/commit/0aeb5d0c970ba9b3924d8467904bcb897d5a5877))
### Continuous Integration
- **github**: disable push-back ([552c8f10](https://github.com/ymind/rsql-querydsl/commit/552c8f10cd58c4e3a00e3f30be3ea2d29ac4de4b))
- **github**: adjust ci config ([0f06f6cc](https://github.com/ymind/rsql-querydsl/commit/0f06f6cc56b273b0d07ae89510f4f175e85a2582))
- **github**: adjust project version update command ([4c7f68e9](https://github.com/ymind/rsql-querydsl/commit/4c7f68e97fcded9d17ccb732f556a29309f66b56))
## 0.1.0 (2020-06-03)
### Features
- implement primary features and challenges ([d3336750](https://github.com/ymind/rsql-querydsl/commit/d333675068fbd3051b8a6fd06b6e34d8826f73bd))
| 61.710145 | 182 | 0.787381 | yue_Hant | 0.193418 |
43974e7c3427914927d5f97d95ca903c03cc2ad3 | 727 | md | Markdown | content/blog/other/server/nginx.md | LinuxSuRen/surenpi | d32aa98b3bc083812ed4d66417a24b207544b530 | [
"MIT"
] | 2 | 2019-04-29T05:16:38.000Z | 2021-04-21T02:15:45.000Z | content/blog/other/server/nginx.md | LinuxSuRen/surenpi | d32aa98b3bc083812ed4d66417a24b207544b530 | [
"MIT"
] | 9 | 2019-09-25T06:00:18.000Z | 2022-01-20T02:36:20.000Z | content/blog/other/server/nginx.md | LinuxSuRen/surenpi | d32aa98b3bc083812ed4d66417a24b207544b530 | [
"MIT"
] | null | null | null | ---
title: Nginx
description: Nginx
toc: true
keywords:
- rewrite
---
Nginx 配置文件的一大特点是:必须要以分号结尾。
## 变量
| Name | Description |
|---|---|
| `$scheme` | The scheme of HTTP request, could be `http`, `https` |
| `$host` | |
| `$request_uri` | ] |
## 逻辑判断
## ngx_http_rewrite_module
### rewrite
```
server {
listen 80;
server_name surenpi.com;
location / {
rewrite ^ https://linuxsuren.github.io/blog/;
}
}
```
### return
```
Syntax: return code [text];
return code URL;
return URL;
Default: -
Context: server, location, if
```
```
if ($host = "github.com") {
return 301 https://nexus-b.alauda.cn/repository/github-proxy$request_uri;
}
```
## HTTPS | 13.980769 | 81 | 0.572215 | yue_Hant | 0.444009 |
43975047372055f615041db6ffebeccd1d729892 | 574 | md | Markdown | README.md | zyrikby/ANDROID_TOOLS | be75280a0d5c6ad2da06f9287cdf5d292e0609ca | [
"Apache-2.0"
] | 1 | 2022-03-14T08:42:57.000Z | 2022-03-14T08:42:57.000Z | README.md | zyrikby/ANDROID_TOOLS | be75280a0d5c6ad2da06f9287cdf5d292e0609ca | [
"Apache-2.0"
] | null | null | null | README.md | zyrikby/ANDROID_TOOLS | be75280a0d5c6ad2da06f9287cdf5d292e0609ca | [
"Apache-2.0"
] | null | null | null | # Android Tools
Collection of useful Android Tools.
## run_pm_installer.sh
Sometimes you need to install a developed application and see how the
installation process of Package Installer goes. Simple adb install command does
not provide such possibility. This script copies application to the attached
device and starts Android's Package Installer to install this package.
Usage:
```
source run_pm_installer.sh <apk_to_install>
```
If you do not have path to "adb" command within your PATH variable, specify the
full path in "ADB_COMMAND_PATH" constant of the script.
| 31.888889 | 80 | 0.801394 | eng_Latn | 0.994063 |
4398bf5213ca5ca06193bfa60a4049ad27aff444 | 260 | md | Markdown | org/packagesLibraries/ggplot.md | jsta/r-spatial-data-management-intro | 0fc8f24c45a16a7ac07f4ee4588252140895a7d0 | [
"CC-BY-3.0",
"CC-BY-4.0"
] | null | null | null | org/packagesLibraries/ggplot.md | jsta/r-spatial-data-management-intro | 0fc8f24c45a16a7ac07f4ee4588252140895a7d0 | [
"CC-BY-3.0",
"CC-BY-4.0"
] | null | null | null | org/packagesLibraries/ggplot.md | jsta/r-spatial-data-management-intro | 0fc8f24c45a16a7ac07f4ee4588252140895a7d0 | [
"CC-BY-3.0",
"CC-BY-4.0"
] | null | null | null | ---
layout: post_by_r-package
title: 'Data Tutorials Using the ggplot R Package'
packagesLibraries: ggplot
permalink: R-package/ggplot/
image:
feature: coding_R.jpg
credit:
creditlink:
---
Self-paced data tutorials that use the `R`, `ggplot` package.
| 20 | 61 | 0.742308 | eng_Latn | 0.569038 |
4399ccb677d6070831d821ecb1e87daa53dffc16 | 2,470 | md | Markdown | README.md | anthonyalbertyn/simple-queue | 2dd5c818832bfd7f18cab3db46267924264987e1 | [
"MIT"
] | null | null | null | README.md | anthonyalbertyn/simple-queue | 2dd5c818832bfd7f18cab3db46267924264987e1 | [
"MIT"
] | null | null | null | README.md | anthonyalbertyn/simple-queue | 2dd5c818832bfd7f18cab3db46267924264987e1 | [
"MIT"
] | null | null | null | # simple-queue
SimpleQueue can be used in server-side Node or other JavaScript projects
SimpleQueue is a simple queue data structure with enqueue, dequeue, peek, size and flush methods.
MIT License, see LICENSE for more details
## Use case
When you need a queue with no dependencies on other libraries and you just want something lightweight that does the job. No bells and whistles; it just works. Also for when you want to keep your code clean and DRY and get on with doing more interesting things :)
## Getting started
```
npm install @anthonyalbertyn/simple-queue
```
## Using simple-queue
```
const SimpleQueue = require('@anthonyalbertyn/simple-queue');
const q = new SimpleQueue();
```
Add an item to the queue
``` q.enqueue("Hello") ```
``` q.enqueue(101) ```
Add multiple items to the queue at a time
``` q.enqueue(1, 7, 24) ```
Remove an item from the front of the queue
``` const item = q.dequeue() ```
Copying the item at the front of the queue, but not removing it from the queue
``` const item = q.peek() ```
Finding the length of the queue
``` const length = q.size() ```
Flushing the queue, i.e. deleting all the items
``` q.flush() ```
You may create more than one SimpleQueue in your code.
```
const q1 = new SimpleQueue();
const q2 = new SimpleQueue();
```
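A small end-to-end example using only the methods documented above:
```
const SimpleQueue = require('@anthonyalbertyn/simple-queue');

const jobs = new SimpleQueue();
jobs.enqueue('resize-image', 'send-email', 'purge-cache');

// Drain the queue in first-in-first-out order
while (jobs.size() > 0) {
  console.log('processing', jobs.dequeue());
}
```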
## Limitations
The actual queue under the hood is a JavaScript array and not a linked list. Adding an item to the queue is constant time O(1), but removing an item has O(n) time complexity, as JavaScript has to move all the remaining items one position to the left. SimpleQueue should work great for small to medium-length queues, but as the length of the queue grows, the time to remove items will increase linearly.
## Non-scientific benchmark
Time to add "The quick brown fox jumps over the lazy dog" to the queue 30,000 times: around 25 milliseconds
Time to dequeue the above queue 30,000 times: around 365 milliseconds
These are just crude benchmarks and times can differ depending on where you are running the code and what else is happening in your app or system, and what data the queue contains, so no guarantees at all on performance.
See examples/example2 for details on how this was tested
## Maintainers
There is currently only one maintainer, Anthony Albertyn, and the plan is to keep this module simple, lightweight and if possible resist adding more features unless there are good reasons to do so.
| 30.875 | 433 | 0.745749 | eng_Latn | 0.999066 |
439a89b99a7bb6e5bb4b4cb160879fb2ddb6497c | 3,401 | md | Markdown | articles/iot-central/core/howto-add-tiles-to-your-dashboard.md | eltociear/azure-docs.zh-cn | b24f1a5a0fba668fed89d0ff75ca11d3c691f09b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-central/core/howto-add-tiles-to-your-dashboard.md | eltociear/azure-docs.zh-cn | b24f1a5a0fba668fed89d0ff75ca11d3c691f09b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-central/core/howto-add-tiles-to-your-dashboard.md | eltociear/azure-docs.zh-cn | b24f1a5a0fba668fed89d0ff75ca11d3c691f09b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Add tiles to your dashboard | Microsoft Docs
description: This article, aimed at builders, describes how to configure the default Azure IoT Central application dashboard.
author: mavoge
ms.author: mavoge
ms.date: 10/17/2019
ms.topic: how-to
ms.service: iot-central
services: iot-central
manager: philmea
ms.openlocfilehash: 49b41715d95a5f210e6e70faf09aa016d1478728
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 03/28/2020
ms.locfileid: "80158713"
---
# <a name="configure-the-application-dashboard"></a>Configure the application dashboard
The **Dashboard** is the page that loads when users with access to the application navigate to the application's URL. If the application was created from an **application template**, it has a predefined dashboard to start from. If the application was created from the **legacy application** template, the dashboard is empty.
> [!NOTE]
> In addition to the default application dashboard, users can [create multiple dashboards](howto-create-personal-dashboards.md). These dashboards are either for a user's personal use only, or shared among all users of the application.
## <a name="add-tiles"></a>Add tiles
The following screenshot shows the dashboard in an application created from the **Custom application** template. To customize the application's default dashboard, select **Edit** at the top left of the page.
> [!div class="mx-imgBorder"]
> 
Selecting **Edit** opens the dashboard gallery panel. The gallery contains the tiles and dashboard primitives you can use to customize the dashboard.
> [!div class="mx-imgBorder"]
> 
For example, you can add a **Telemetry** tile for a device's current temperature. To do so:
1. Select a **device template**.
1. Select the **device instance** of the device to show on the dashboard tile. You then see a list of device properties that can be used in the tile.
1. To create the tile on the dashboard, click **Temperature** and drag it onto the dashboard area. You can also check the box next to **Temperature** and click **Combine**. The following screenshot shows selecting a device template and device instance, and then creating a **Temperature telemetry** tile on the dashboard.
1. Select **Save** at the top left to save the tile to the dashboard.
> [!div class="mx-imgBorder"]
> 
Now, when operators view the default application dashboard, they see the new tile with the device's **Temperature**. Each tile contains a preselected graph, chart, or other visualization that is shown when the tile is created. However, users can choose to edit and change this visualization.
> [!div class="mx-imgBorder"]
> 
## <a name="edit-tiles"></a>Edit tiles
To edit a tile on the dashboard, first click **Edit** at the top left of the page to put the dashboard and all its tiles into edit mode.
> [!div class="mx-imgBorder"]
> 
Then click the **gear** icon at the top right of the tile you want to edit. Here you can edit every aspect of the tile, including the title, visualization, aggregation, and more.
> [!div class="mx-imgBorder"]
> 
You can also click the **ruler** icon on a tile to change the chart's visualization.
> [!div class="mx-imgBorder"]
> 
## <a name="tile-types"></a>Tile types
The following table summarizes how tiles are used in Azure IoT Central:
| Tile | Dashboard | Description
| ----------- | ------- | ------- |
| Content | Application and device set dashboards |A markdown-backed, clickable tile that shows a title and description text. You can also use it as a link tile, letting users navigate to a URL related to your application.|
| Image | Application and device set dashboards |The image tile shows a custom image and is clickable. Use an image tile to add graphics to the dashboard and, optionally, let users navigate to a URL related to your application.|
| Label | Application dashboard |The label tile shows custom text on the dashboard. You can choose the text size. Use a label tile to add relevant information to the dashboard, such as descriptions, contact details, or help.|
| Map | Application and device set dashboards |The map tile shows the location and status of devices on a map. For example, you can show where a device is and whether its fan is turned on.|
| Line chart | Application and device dashboards |The line chart tile shows a chart of aggregated measurements for a device over a period of time. For example, you can show a device's average temperature and pressure over the past hour in a line chart.|
| Bar chart | Application and device dashboards |The bar chart tile shows a chart of aggregated measurements for a device over a period of time. For example, you can show a device's average temperature and pressure over the past hour in a bar chart.|
| Pie chart | Application and device set dashboards |The pie chart tile shows a chart of aggregated measurements over a period of time.|
| Heat map | Application and device set dashboards |The heat map tile shows information about a set of devices in different colors.|
| Event history | Application and device dashboards |The event history tile shows the events that occurred on a device over a period of time. For example, you can use this kind of tile to show all the temperature changes on a device over the past hour.|
| State history | Application and device dashboards |The state history tile shows measurement values over a period of time. For example, you can use this kind of tile to show a device's temperature values over the past hour.|
| KPI | Application and device dashboards | The KPI tile shows an aggregated telemetry or event measurement over a period of time. For example, you can use it to show the maximum temperature a device reached over the past hour.|
| Last known value | Application and device dashboards |The last known value tile shows the latest telemetry or state measurement. For example, you can use this tile to show a device's most recent temperature, pressure, and humidity measurements.|
## <a name="next-steps"></a>Next steps
Now that you've learned how to configure the default Azure IoT Central application dashboard, you can [learn how to create a personal dashboard](howto-create-personal-dashboards.md).
| 37.788889 | 119 | 0.757718 | yue_Hant | 0.623177 |
439aa640864285ea2dd17f83b4601b222e4fdd9d | 694 | md | Markdown | README.md | itszerocode/rag-status-selector | 78f0318d2a46ef50da24c1c322b2f66e62140e09 | [
"MIT"
] | null | null | null | README.md | itszerocode/rag-status-selector | 78f0318d2a46ef50da24c1c322b2f66e62140e09 | [
"MIT"
] | null | null | null | README.md | itszerocode/rag-status-selector | 78f0318d2a46ef50da24c1c322b2f66e62140e09 | [
"MIT"
] | null | null | null | # rag-status-selector
> RAG (Red,Amber,Green) status dropdown select box for react projects
[](https://www.npmjs.com/package/rag-status-selector) [](https://standardjs.com)
## Install
```bash
npm install --save rag-status-selector
```
## Usage
```jsx
import React, { Component } from 'react'
import MyComponent from 'rag-status-selector'
import 'rag-status-selector/dist/index.css'
class Example extends Component {
render() {
return <MyComponent />
}
}
```
## License
MIT © [(itszerocode)](https://github.com/(itszerocode))
| 22.387097 | 231 | 0.716138 | kor_Hang | 0.213843 |
439ad8dd16fb693f56773c6327c502fee476df02 | 3,036 | md | Markdown | _sl-overview/tokenization.md | fginter/docs-fginterfork | 1012563e049f1ad57548bb71908c632b23ee64f9 | [
"Apache-2.0"
] | 1 | 2021-08-18T08:52:27.000Z | 2021-08-18T08:52:27.000Z | _sl-overview/tokenization.md | fginter/docs-fginterfork | 1012563e049f1ad57548bb71908c632b23ee64f9 | [
"Apache-2.0"
] | null | null | null | _sl-overview/tokenization.md | fginter/docs-fginterfork | 1012563e049f1ad57548bb71908c632b23ee64f9 | [
"Apache-2.0"
] | null | null | null | ---
layout: base
title: 'Tokenization'
permalink: sl/overview/tokenization.html
---
# Tokenization
Tokenization of the Slovenian UD Treebank reflects the following principles:
Space is the principal separator for tokens.
* Sequences of words that can be written both with or without space without changing its meaning (e.g. _<b>kdorkoli</b>_, _<b>kdor koli</b>_ "anybody, any body") follow the same principle and become either one or two tokens depending on the use of space
During tokenization, all characters are divided into two categories: words (W) and characters (C). Words are alphanumeric strings between spaces, while characters are punctuation and symbol characters.
* C tokens are recognized on the basis of a predefined list of punctuation and symbol characters included in the tokenizer.
* C tokens may include only one punctuation or symbol character. Sequences of two or more characters (e.g. _<b>?!</b>_) are treated as sequences of separate C tokens.
If a string of alphanumeric characters between two spaces includes C characters, it is usually split into several tokens (e.g. _<b>AC/DC</b>_ and _<b>Micro$oft</b>_ are split into three tokens _<b>AC / DC</b>_ and _<b>Micro $ oft</b>_).
However, the following exceptions apply, in which C characters become parts of W tokens:
* Apostrophe becomes part of a W token if used without space on both sides (e.g. _<b>O'Brian</b>_, _<b>mor'va</b>_ "O'Brian, we have to").
* Comma and colon become part of a W token if used without space on both sides and if the string contains only digits (e.g. _<b>30:00</b>_, _<b>200,000,000</b>_).
* Hyphen becomes part of a W token if used without space on both sides and if:
* the left part is an acronym (in capital letters), a single letter or a digit
* the right part is an affix or an inflectional ending; a finite list of possible affixes and endings is integrated in the tokenizer
    * e.g. _<b>OZN-ovski</b>_ "similar to United Nations", _<b>a-ju</b>_ "to the letter a", _<b>15-i</b>_ "the 15th"
* Dot becomes part of a W token if it is:
* used without space on both sides and the string contains only digits (e.g. _<b>1.2</b>_)
* used without space on the left and is part of an abbreviation or ordinal number (e.g. _<b>dr.</b>_, _<b>4.</b>_, _<b>IV.</b>_); a finite list of possible abbreviations is integrated in the tokenizer.
* URLs and e-mail addresses: all C characters become part of a single W token in strings recognized as URLs or addresses using a regular expression.
Information on whether a token is followed by a space (e.g. _<b>d.o.o.</b>_ vs. _<b>d. o. o.</b>_) is indicated with the `SpaceAfter=No` feature in the MISC column.
Note that the current version of the Slovenian UD Treebank does not yet comply with the universal guidelines recommendation for splitting of fused words, such as combinations of prepositions and pronouns, e.g. _<b>name</b>_ "on me", _<b>zanj</b>_ "for him", _<b>vase</b>_ "in/to oneself". Instead, these tokens are currently marked as [pronouns](PRON) with the feature [Variant=Bound](Variant).
| 75.9 | 381 | 0.748024 | eng_Latn | 0.999377 |
439b3d41abed72388096fd24d3dd04fff0492173 | 18 | md | Markdown | README.md | boyVue/boyvue.github.io | 4593033eb63a229d96a060c4e792fffd47abc064 | [
"Apache-2.0"
] | null | null | null | README.md | boyVue/boyvue.github.io | 4593033eb63a229d96a060c4e792fffd47abc064 | [
"Apache-2.0"
] | null | null | null | README.md | boyVue/boyvue.github.io | 4593033eb63a229d96a060c4e792fffd47abc064 | [
"Apache-2.0"
] | null | null | null | # boyvue.github.io | 18 | 18 | 0.777778 | dan_Latn | 0.131753 |
439ba0e77dedf2b4adbb4740851b80048d46750b | 399 | md | Markdown | README.md | qiyuantian/SuperSurfer | 78e2a5db18c6fe98108c323e4a66ddb523f5c55a | [
"MIT"
] | 3 | 2019-05-15T18:23:21.000Z | 2020-07-05T21:21:09.000Z | README.md | qiyuantian/SuperSurfer | 78e2a5db18c6fe98108c323e4a66ddb523f5c55a | [
"MIT"
] | 1 | 2021-03-24T03:31:11.000Z | 2021-03-24T03:31:11.000Z | README.md | qiyuantian/SuperSurfer | 78e2a5db18c6fe98108c323e4a66ddb523f5c55a | [
"MIT"
] | null | null | null | # SuperSurfer
Tian Q, Bilgic B, Fan Q, Ngamsombat C, Zaretskaya N, Fultz NE, Ohringer NA, Chaudhari AS, Hu Y, Witzel T, Setsompop K. Improving in vivo human cerebral cortical surface reconstruction using data-driven super-resolution. Cerebral Cortex. https://doi.org/10.1093/cercor/bhaa237
We will upload our VDSR networks trained using the 7-Tesla and 3-Tesla data from MGH Martinos Center soon.
| 66.5 | 275 | 0.786967 | eng_Latn | 0.58848 |
439ba3bc84430e874fea64140c34a672f33ad5c0 | 34 | md | Markdown | README.md | gizmore/gdo6-poll | b480537d72d97e8f6a9d92b80cccb92921bee121 | [
"MIT"
] | 1 | 2019-04-15T09:50:43.000Z | 2019-04-15T09:50:43.000Z | README.md | gizmore/gdo6-poll | b480537d72d97e8f6a9d92b80cccb92921bee121 | [
"MIT"
] | null | null | null | README.md | gizmore/gdo6-poll | b480537d72d97e8f6a9d92b80cccb92921bee121 | [
"MIT"
] | null | null | null | # gdo6-poll
Poll module for gdo6.
| 11.333333 | 21 | 0.735294 | eng_Latn | 0.708049 |
439bd06638b65d3647c4c724524423801ab4d7d1 | 508 | md | Markdown | README.md | my-swift-lab/learning-swift-cli | 1885bd8b2d1a6574f8bfe60c5c1fc1b111712475 | [
"MIT"
] | 1 | 2021-08-23T12:36:24.000Z | 2021-08-23T12:36:24.000Z | README.md | my-swift-lab/learning-swift-cli | 1885bd8b2d1a6574f8bfe60c5c1fc1b111712475 | [
"MIT"
] | null | null | null | README.md | my-swift-lab/learning-swift-cli | 1885bd8b2d1a6574f8bfe60c5c1fc1b111712475 | [
"MIT"
] | null | null | null | # learning-swift-cli
Let's build CLI apps with Swift. These are the example projects from the blog posts below. :)
## Command Line App
- [Building a Command Line App with Swift #1](https://blog.burt.pe.kr/posts/skyfe79-blog.contents-976285028-post-23/)
- [Building a Command Line App with Swift #2](https://blog.burt.pe.kr/posts/skyfe79-blog.contents-976291146-post-24/)
- [Building a Command Line App with Swift #3](https://blog.burt.pe.kr/posts/skyfe79-blog.contents-976946497-post-25/)
## Scripting
- [Swift로 스크립트 작성하기](https://blog.burt.pe.kr/posts/skyfe79-blog.contents-976039728-post-22/) | 39.076923 | 104 | 0.730315 | kor_Hang | 0.846385 |
439c2f03db6f13c259a3138f2816aca2257a0e84 | 3,296 | md | Markdown | content/blog/migration/2020-03-15---typescript_2.md | jaeyoung-son/jyblog | af9d88c1aa3e108ed50c22f3d1e8353f252d8dce | [
"MIT"
] | null | null | null | content/blog/migration/2020-03-15---typescript_2.md | jaeyoung-son/jyblog | af9d88c1aa3e108ed50c22f3d1e8353f252d8dce | [
"MIT"
] | null | null | null | content/blog/migration/2020-03-15---typescript_2.md | jaeyoung-son/jyblog | af9d88c1aa3e108ed50c22f3d1e8353f252d8dce | [
"MIT"
] | null | null | null | ---
title: 'Conquering TypeScript'
date: 2020-03-15 12:21:13
category: 'typescript'
draft: false
---
The one course I watched last time didn't really stick, so I looked for other material.
This time, let's work through TypeScript while following velopert's blog.
## 기본타입
Specifying basic types when declaring values:
```ts
const message: string = 'I am a string'
const done: boolean = true // boolean
const numbers: number[] = [1, 2, 3] // array of numbers
const messages: string[] = ['string', 'array']
messages.push(1) // beep! error
let mightBeUndefined: string | undefined = undefined // undefined or string
let color: 'red' | 'orange' | 'yellow' = 'red'
color = 'green' // beep! error
```
You can specify a type for a variable, and an error is raised immediately when a value that doesn't match the declared type is assigned.
The `|` acts as an "or" between types.
## Defining function types
```ts
function sum(x: number, y: number): number {
return x + y
}
sum(1, 2)
```
The `: number` on the right, after the parameter list, declares that the function's result is a number. In other words, returning a value of any other type causes an error.
```ts
function sumArray(numbers: number[]): number {
  return numbers.reduce((acc, current) => acc + current, 0)
}
const total = sumArray([1, 2, 3, 4, 5])
```
One of TypeScript's strengths is that type inference works well even when using array built-in methods!
```ts
function returnNothing(): void {
  // returns nothing
}
```
If a function returns nothing, set its return type to `void`.
## interface
An interface is the syntax used to specify a type for a class or object.
```ts
interface Shape {
  getArea(): number; // the Shape interface must have a getArea function that returns a number
}
class Circle implements Shape {
  // the implements keyword declares that this class satisfies the Shape interface.
radius: number;
constructor(radius: number) {
this.radius = radius;
}
getArea() {
    return this.radius * this.radius * Math.PI
}
}
class Rectangle implements Shape {
width: number;
height: number;
constructor(width: number, height: number) {
this.width = width;
this.height = height;
}
getArea() {
return this.width * this.height;
}
}
const shapes: Shape[] = [new Circle(5), new Rectangle(10,5)];
shapes.forEach(shape => {
console.log(shape.getArea());
})
```
### 일반 객체 타입 설정하기
```ts
interface Person {
name: string;
  age?: number; // with a question mark, the property is optional
}
interface Developer extends Person {
skills: string[]
}
const person: Person = {
  name: 'person',
age: 20
}
const expert: Developer = {
  name: 'kim the developer',
  skills: ['javascript', 'react']
}
const people: Person[] = [person, expert]
```
### Type Alias
`type` attaches an alias to a specific type; you can create an alias for any type, whether it's an object or an array.
```ts
type Person = {
name: string
age?: number
}
// & is an intersection: it combines two or more types.
type Developer = Person & {
skills: string[]
}
const person: Person = {
  name: 'person',
}
const expert: Developer = {
  name: 'a name',
skills: ['javascript', 'typescript'],
}
```
At first glance, I didn't feel a big difference between type and interface.
What matters most is using them consistently. Older versions had many differences, but these days there are few; still, interface is said to be recommended when writing type definition files or typings for other libraries, so I'll get into the habit of using interface.
### Generics
```ts
function merge<A, B>(a: A, b: B): A & B {
return {
...a,
...b,
}
}
const merged = merge({ foo: 1 }, { bar: 2 })
```
When using generics, you put a type name inside angle brackets, like `<T>`.
Generics were also the part that confused me most when studying the video course yesterday. They feel like something that helps you handle types more flexibly... Plain type annotations barely needed explaining, but this shape shows up a lot when combining TypeScript with React, and it also tormented me when I studied React Native with TypeScript. I need to get used to it quickly so I can play with it.
```ts
interface Items<T> {
list: T[]
}
const items: Items<string> = {
list: ['a', 'b', 'c'],
}
```
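One more tiny sketch of why generics are useful: the compiler infers `T` from the argument, so you get precise types without writing them out (an illustrative helper, not from velopert's blog):
```ts
function wrapInList<T>(value: T): T[] {
  return [value]
}

const names = wrapInList('kim') // inferred as string[]
const counts = wrapInList(10) // inferred as number[]
```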
Next time, I'll study how to apply TypeScript to a React project and write that up.
Compiled while reading and referencing velopert's blog.
| 17.72043 | 190 | 0.635316 | kor_Hang | 0.999985 |
439cce26d4e087289a3cdfeb21960ae52daf3cba | 96 | md | Markdown | README.md | 8p/cache_warmup | fd0b60a02a481d42cbf66177313154f79749c270 | [
"MIT"
] | 1 | 2017-11-17T14:31:18.000Z | 2017-11-17T14:31:18.000Z | README.md | 8p/cache_warmup | fd0b60a02a481d42cbf66177313154f79749c270 | [
"MIT"
] | null | null | null | README.md | 8p/cache_warmup | fd0b60a02a481d42cbf66177313154f79749c270 | [
"MIT"
] | null | null | null | cache_warmup
============
Website cache warm-up driven by an Apache logfile, using siege (fallback: wget)
| 19.2 | 68 | 0.697917 | eng_Latn | 0.484922 |
439dc23e19717e43f05b4dbd96ebdf18d7e30e7a | 11,016 | md | Markdown | website/src/_posts/phpstan-is-ready-for-php8.md | janedbal/phpstan | 53e10ac511fa8b2131adf157acce32ee06e12273 | [
"MIT"
] | 1 | 2021-05-25T08:38:58.000Z | 2021-05-25T08:38:58.000Z | website/src/_posts/phpstan-is-ready-for-php8.md | janedbal/phpstan | 53e10ac511fa8b2131adf157acce32ee06e12273 | [
"MIT"
] | 179 | 2020-12-08T18:00:58.000Z | 2022-02-10T13:28:22.000Z | website/src/_posts/phpstan-is-ready-for-php8.md | janedbal/phpstan | 53e10ac511fa8b2131adf157acce32ee06e12273 | [
"MIT"
] | null | null | null | ---
title: "PHPStan is ready for PHP 8!"
date: 2020-11-24
tags: releases
---
PHP 8 is just around the corner! And it's massive. PHPStan is ready to analyse your codebases that will be taking advantage of the latest features in the coming weeks and months.
I'll leave the job of describing [all the new features and changes](https://php.watch/versions/8.0) to others that [specialize in that](https://stitcher.io/), and [as usual](https://phpstan.org/blog/phpstan-now-fully-supports-php-7-4), I'll geek out a bit about the challenges linked with making PHPStan understand a new major version of the language, and also describe new implemented checks you'll be able to enjoy alongside the new language features.
Make sure you have [PHPStan 0.12.57](https://github.com/phpstan/phpstan/releases/tag/0.12.57) or later installed before you start experimenting with PHP 8! 💪
Match expression
======================
[Ambitious language feature](https://wiki.php.net/rfc/match_expression_v2) from PHPStan contributor Ilija Tovilo that aims to be a better alternative to a switch statement.
Since the `match` expression uses a strict comparison equivalent with the `===` operator, we can afford to report type mismatches - arms that will [never be executed](https://phpstan.org/r/ad9f2b05-98d5-49fa-b0c3-4e1c476f349f).
Arms also might not be executed if one of the arms is a catch-all because the comparison will be [always true](https://phpstan.org/r/c6a35cff-50a8-45d4-a4bb-4d785e95043a).
The `match` expression also throws an exception when the given value isn't handled by one of the arms. Since PHPStan knows enough information about the code, it can [detect it as well](https://phpstan.org/r/379724eb-514c-45fd-b357-02a5f4ebae18). I hope that this feature will lead to using more [advanced types in PHPDocs](https://phpstan.org/writing-php-code/phpdoc-types), which in turn will actually lead to more type safety (PHPStan will point out wrong values going into the function at the callsite):
```php
/**
* @param 1|2|3 $i
*/
function foo(int $i): void {
match ($i) {
1 => 'foo',
2 => 'bar',
3 => 'baz',
};
}
```
Named arguments
======================
I'm a big fan of [named arguments](https://stitcher.io/blog/php-8-named-arguments). They will make function calls with multiple arguments more clear [^moreArguments].
[^moreArguments]: A popular "clean code" argument is that a function shouldn't have more than a few arguments, but it's not always possible nor practical.
```php
return in_array(needle: $i, haystack: $intList, strict: true);
```
Implementing support for this feature was pretty straightforward, so I took extra care with user-facing error messages. This is how an ordinary error looks like when named arguments aren't involved:
> Parameter #2 $b of function foo expects int, string given.
With named arguments, the order is no longer significant, so when they're involved in a function call, I'm removing the parameter number: [^ux]
[^ux]: Good UX is getting thousand tiny little things right.
> Parameter $b of function foo expects int, string given.
When the developer passes a wrong number of arguments to a function, the ordinary error looks like this:
> Function foo invoked with 1 parameter, 2 required.
When a named argument is used in the function call, it's much nicer to show this:
> Missing parameter $b (int) in call to function foo.
So that's what PHPStan does.
Changed function signatures
======================
Some functions changed their signatures, for example `curl_*` functions no longer return resource, [but a `CurlHandle` object](https://php.watch/versions/8.0/resource-CurlHandle). Many functions have removed `false` from possible returned values and [throw `ValueError` instead](https://php.watch/versions/8.0/ValueError).
<a href="https://phpstan.org/r/5043c64b-59f1-418c-a0da-9341f9f4938e"><img src="/images/curl-php-8.png" class="mb-8 rounded-lg border border-gray-300 mx-auto"></a>
Fortunately, PHP 8 starts to offer [official stubs](https://github.com/search?q=repo%3Aphp%2Fphp-src+filename%3A*.stub.php&type=Code) that we can take advantage of here. I created a [new repository](https://github.com/phpstan/php-8-stubs) that allows including those stubs as a Composer dependency. It's automatically updated each night to mirror the latest changes in php-src.
It wasn't straightforward to start using those stubs, because they don't contain all the information PHPStan needs, so for example they cannot be used in place of [jetbrains/phpstorm-stubs](https://github.com/jetbrains/phpstorm-stubs). Class definitions do not contain constants and properties, some global constants are referenced but not defined etc. So PHPStan only reads parameter types, parameter names, return types, and PHPDocs, and uses them in a very customized way.
Also, I didn't want to lose other metadata we already have in [functionMap.php](https://github.com/phpstan/phpstan-src/blob/3e956033ad718b56c607f026bd670613db02f151/resources/functionMap.php), like what value types are in typehinted arrays, or callback signatures. So in the end all of this information is merged together in the final definitions used during the analysis.
Constructor property promotion
======================
I really like [this feature](https://php.watch/versions/8.0/constructor-property-promotion), because it decreases the number of times an injected property name needs to mentioned from 4 to 1. A lot of boilerplate will be simplified.
The most interesting part of the implementation was finding out how people would write additional type information with PHPDocs. Sure, we have typed properties since PHP 7.4, but for example in case of `array`, we need to know what's in it, so PHPDocs are still necessary in some cases.
Since the Twitter poll ended with 74 %/26 % split, I decided to implement both variants. 26 % is still a lot of people.
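For illustration, the two variants look roughly like this (a sketch, not code from the implementation):

```php
final class UserList
{
    /**
     * Variant A: PHPDoc above the constructor.
     * @param string[] $names
     */
    public function __construct(
        /** Variant B: PHPDoc directly above the promoted property. @var string[] */
        private array $names,
    ) {
    }
}
```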
PHPStan will also check that you [haven't declared](https://phpstan.org/r/3c1a5fd2-8157-4808-8485-fd4035bd8f5b) duplicate properties with the same name, and that you haven't tried to write a promoted property in [another method than constructor](https://phpstan.org/r/83420326-6076-479b-a6f3-68761c3a101a) (which isn't a parse error).
Nullsafe operator
======================
Adding support for [this one](https://wiki.php.net/rfc/nullsafe_operator) (also by Ilija Tovilo) was more work than I originally expected. [PHP-Parser](https://github.com/nikic/php-parser) added two new AST nodes to represent this operator: `NullsafeMethodCall` and `NullsafePropertyFetch`. So I had to go through all the code where the usual `MethodCall` and `PropertyFetch` nodes are mentioned[^mostCommon] and make sure it also makes sense for handling the nullsafe variants.
[^mostCommon]: Arguably the most common thing PHP developers do is calling methods and accessing properties so there's a lot of concerns in PHPStan handling those.
Another tricky part was the short-circuiting. I had to reread this part of the RFC several times before I realized it has two implications for PHPStan:
1) When analysing an ordinary method call or a property fetch, the result might be nullable even if the called method and accessed property aren't nullable, because there might be a nullsafe operator earlier in the chain.
2) The `$foo` in `$foo?->bar()` will not be nullable when referenced again in the same chain.
PHPStan will also tell you if you're using `?->` where an ordinary `->` would suffice, [on level 4](https://phpstan.org/r/3de670aa-814b-4160-b0df-0e01adbf881c).
The RFC also disallows assign-by-ref when the nullsafe operator is involved, PHPStan will [tell you about that too](https://phpstan.org/r/fad45e1a-6f1c-4518-9b68-f030d6228910).
Nullsafe operator also cannot be used on the [left side of an assignment](https://phpstan.org/r/ae38ff4b-2a17-4bca-813d-bb9c8e0b2d86).
And the nullsafe operator cannot be used with [parameters passed by reference](https://phpstan.org/r/36b0ff5d-5ba6-494e-9280-7ae4789b4089) and as a return value in function that [return by reference](https://phpstan.org/r/0a9357aa-0df4-459a-93db-30a43b27f584).
As I said - a lot of work 😅
$object::class
======================
PHP 7 allows you to get a string with a class name with `Foo::class`. PHP 8 allows you to access `::class` on an object.
> Did you know that PHP does not check whether `Foo` exists and will [happily create any string](https://3v4l.org/htngG) like that? Fortunately [PHPStan checks that for you](https://phpstan.org/r/e5516379-ceba-4cbe-a0d5-de5ed31361cd) 😊
During the implementation I found out that the accessed variable cannot be a string, so PHPStan also [checks for that](https://phpstan.org/r/d4fd58ab-064a-4c4d-850d-239242e92bab).
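A minimal illustration (it works on any object expression, as an alternative to `get_class()`):

```php
function describe(object $subject): string
{
    return $subject::class; // equivalent to get_class($subject)
}
```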
Attributes
======================
After finally [deciding on the syntax](https://wiki.php.net/rfc/shorter_attribute_syntax_change#voting), attributes have a bright future ahead. I'm especially looking forward to using them as part of Doctrine ORM entities instead of current PHPDoc-based annotations.
Validation of attributes in PHP runtime itself is postponed until the code tries to call `newInstance()` on the obtained `ReflectionAttribute` instance. The RFC specifically mentions that static analysis is a great fit to validate attribute usage so that the user isn't surprised when they run the code that reads and instantiates the attributes.
PHPStan provides the following checks related to attributes:
* #[Attribute]-annotated class cannot be abstract and must have a public constructor ([playground example](https://phpstan.org/r/85ef24c3-df32-452b-97e0-e4ae4e3d5d45))
* Attribute name used in code must be an existing class annotated with #[Attribute] ([playground example](https://phpstan.org/r/8fd5e419-18b4-4142-8fa7-bc4ba8bcbc22))
* Attribute class constructor must be called correctly ([playground example](https://phpstan.org/r/2f10a38e-e4f1-4b78-9f01-740b7e4b1b5a))
* Attribute class can be used only with the specified target(s) ([playground example](https://phpstan.org/r/1103bd62-101d-43f3-ba4e-dba68d0a0a60))
* Non-repeatable attribute class cannot occur multiple times above the same element ([playground example](https://phpstan.org/r/b94dc461-5b9b-42d5-a4c6-c1dc33f44d91))
New Docker image
=========================
If you prefer to run PHPStan through Docker, I recommend you to switch to a new image hosted in GitHub Container Registry: `ghcr.io/phpstan/phpstan`
It's based on PHP 8. If you want to analyse a codebase as if it was written for an older PHP version, change `phpVersion` in your `phpstan.neon`:
```yaml
parameters:
phpVersion: 70400 # PHP 7.4
```
See the image's [homepage in GHCR](https://github.com/orgs/phpstan/packages/container/package/phpstan), or the documentation [here on phpstan.org](/user-guide/docker).
The old image hosted on DockerHub is now deprecated.
---
Do you like PHPStan and use it every day? Support the development by checking out and subscribing to [PHPStan Pro](/blog/introducing-phpstan-pro). Thank you!
| 70.165605 | 506 | 0.765251 | eng_Latn | 0.991285 |
439e5ab423ea00d77d61b794e9030778d2825787 | 1,522 | markdown | Markdown | _posts/2020-12-23-updates.markdown | Trott/iliosproject.org | 05b241495c743367c84892c388bfab738e8de4bd | [
"MIT"
] | null | null | null | _posts/2020-12-23-updates.markdown | Trott/iliosproject.org | 05b241495c743367c84892c388bfab738e8de4bd | [
"MIT"
] | 29 | 2016-07-29T20:28:34.000Z | 2021-04-29T17:00:05.000Z | _posts/2020-12-23-updates.markdown | Trott/iliosproject.org | 05b241495c743367c84892c388bfab738e8de4bd | [
"MIT"
] | 2 | 2016-10-27T23:40:33.000Z | 2022-02-28T01:13:22.000Z | ---
layout: post
title: Happy New Year
date: 2021-02-03 08:00:00
categories: updates
---

ILIOS:
Still knocking it out of the park!
At the end of **January 2021**, we completed the deprecation and EOL of our v1 API, and now all external connections to Ilios - beginning with Ilios **v3.86.0** - require calls to the current *v3 API*. This has been detailed in the User Guide, and in the newsletters from July 2020 onward. But if you have any questions, please reach out to us.
In addition, 2021 is looking to be yet another tremendous year for Ilios development and growth; we have lots in store in the way of improvements and expansions, and you should be seeing these in the coming months.
__CURRENT STATUS__:
- current version: __3.86.0__
- next scheduled release: __02/2021__
Questions? Comments? Feedback? Find us at
[support@iliosproject.org](mailto:support@iliosproject.org) or in [https://team-ilios.slack.com/messages/help/](https://team-ilios.slack.com/messages/help/){:target="_slackithelp"}. (If you have not yet joined our Slack channel, you can get started at [https://ilios-slack.herokuapp.com/](https://ilios-slack.herokuapp.com/){:target="_slackit"}.)
Please be sure to get [the most recent release](https://www.github.com/ilios/ilios/releases/latest){:target="_releases"} and update your frontend to take advantage of all the latest features and improvements!
| 54.357143 | 348 | 0.757556 | eng_Latn | 0.978425 |
439edef8e9fea23557b59af17722f8d3b98d1c6a | 1,794 | md | Markdown | docs/docs/running.md | mmangione/alcali | 6af8c4056c8e9ceed717440551519769ddbbfd3f | [
"MIT"
] | 306 | 2019-05-12T20:16:55.000Z | 2022-03-27T15:00:15.000Z | docs/docs/running.md | mmangione/alcali | 6af8c4056c8e9ceed717440551519769ddbbfd3f | [
"MIT"
] | 340 | 2019-05-27T20:20:44.000Z | 2022-03-17T05:23:57.000Z | docs/docs/running.md | mmangione/alcali | 6af8c4056c8e9ceed717440551519769ddbbfd3f | [
"MIT"
] | 53 | 2019-05-18T00:06:08.000Z | 2022-03-03T17:38:58.000Z | # Running Alcali
!!!info
This page will assume you are running alcali locally.
If you are using docker, just prepend commands with `docker exec -it <name>`
First make sure that Alcali is correctly installed.
You can verify installation by running:
```commandline
alcali current_version
# alcali version 2019.2.2
```
You can also check that Alcali can access the `salt` database and that the [needed env vars](configuration.md) are set and loaded by running:
```commandline
alcali check
# db: ok
# env: ok
```
## First Run
### Apply migrations
!!!danger
**On the first run and after every update, you need to make sure that the database is synchronized with the current set of models and migrations. If unsure, just run `alcali migrate`**
Locally:
```commandline
alcali migrate
```
### Create a super user
Run:
```commandline
alcali createsuperuser
```
You will be prompted for your desired login, email address and password.
## Run
Once migrations are applied and a super user is created, you can start the application.
Alcali uses Gunicorn as its WSGI HTTP server. It is installed as part of the Alcali installation process.
!!!warning
If the .env file is not in your current directory, prepend your command with `ENV_PATH=/path/to/env_file`
If you installed Alcali from sources, at the root of the repository, run:
```commandline
gunicorn config.wsgi:application -w 4
```
If you installed Alcali using pip, run:
```commandline
gunicorn config.wsgi:application -w 4 --chdir $(alcali location)
```
In a docker container:
```commandline
docker run --rm -it -p 8000:8000 --env-file=FILE latenighttales/alcali:2019.2.2 bash -c "gunicorn config.wsgi:application -w 4 --chdir $(alcali location)"
```
Where FILE is the location of the [.env file](configuration.md)
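If the `.env` file lives outside your working directory, combine any of the commands above with the variable from the warning (the path here is just an example):
```commandline
ENV_PATH=/etc/alcali/.env gunicorn config.wsgi:application -w 4
```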
| 23.605263 | 188 | 0.73913 | eng_Latn | 0.99408 |
439f5833ee326ff287dc6f9e0f5490a6054fe383 | 11,125 | md | Markdown | articles/iot-hub/iot-hub-device-management-get-started.md | zhenjiao-ms/test-azure-content | 5ef37d9660943ed706687179bb656daaa2bcabb0 | [
"CC-BY-3.0"
] | null | null | null | articles/iot-hub/iot-hub-device-management-get-started.md | zhenjiao-ms/test-azure-content | 5ef37d9660943ed706687179bb656daaa2bcabb0 | [
"CC-BY-3.0"
] | null | null | null | articles/iot-hub/iot-hub-device-management-get-started.md | zhenjiao-ms/test-azure-content | 5ef37d9660943ed706687179bb656daaa2bcabb0 | [
"CC-BY-3.0"
] | null | null | null | <properties
pageTitle="IoT Hub device management get started | Microsoft Azure"
description="Azure IoT Hub for device management with C# getting started tutorial. Use Azure IoT Hub and C# with the Microsoft Azure IoT SDKs to implement device management."
services="iot-hub"
documentationCenter=".net"
authors="juanjperez"
manager="timlt"
editor=""/>
<tags
ms.service="iot-hub"
ms.devlang="dotnet"
ms.topic="hero-article"
ms.tgt_pltfrm="na"
ms.workload="na"
ms.date="04/29/2016"
ms.author="juanpere"/>
# Get started with Azure IoT Hub device management using C# (preview)
[AZURE.INCLUDE [iot-hub-device-management-get-started-selector](../../includes/iot-hub-device-management-get-started-selector.md)]
## Introduction
To get started with Azure IoT Hub device management, you must create an Azure IoT Hub, provision devices in the IoT Hub, start multiple simulated devices, and view these devices in the device management sample UI. This tutorial walks you through these steps.
> [AZURE.NOTE] You need to create a new IoT Hub to enable device management capabilities even if you have an existing IoT Hub because existing IoT Hubs do not have device management capabilities yet. Once device management is generally available, all existing IoT Hubs will be upgraded to get device management capabilities.
## Prerequisites
This tutorial assumes you are using a Windows development machine.
You need the following installed to complete the steps:
- Microsoft Visual Studio 2015
- Git
- CMake (version 2.8 or later). Install CMake from <https://cmake.org/download/>. For a Windows PC, please choose the Windows Installer (.msi) option. Make sure to check the box to add CMake to the current user PATH variable.
- Node.js 6.1.0 or greater. Install Node.js for your platform from <https://nodejs.org/>.
- An active Azure subscription. If you don't have an account, you can create a free trial account in just a couple of minutes. For details, see [Azure Free Trial][lnk-free-trial].
## Create a device management enabled IoT Hub
You need to create a device management enabled IoT Hub for your simulated devices to connect to. The following steps show you how to complete this task using the Azure portal.
1. Sign in to the [Azure portal].
2. In the Jumpbar, click **New**, then click **Internet of Things**, and then click **Azure IoT Hub**.
![][img-new-hub]
3. In the **IoT Hub** blade, choose the configuration for your IoT Hub.
![][img-configure-hub]
- In the **Name** box, enter a name for your IoT Hub. If the **Name** is valid and available, a green check mark appears in the **Name** box.
- Select a **Pricing and scale tier**. This tutorial does not require a specific tier.
- In **Resource group**, create a new resource group, or select an existing one. For more information, see [Using resource groups to manage your Azure resources].
- Check the box to **Enable Device Management**.
- In **Location**, select the location to host your IoT Hub. IoT Hub device management is only available in East US, North Europe, and East Asia during public preview. In the future, it will be available in all regions.
> [AZURE.NOTE] If you don't check the box to **Enable Device Management** the samples won't work.
4. When you have chosen your IoT Hub configuration options, click **Create**. It can take a few minutes for Azure to create your IoT Hub. To check the status, you can monitor the progress on the **Startboard** or in the **Notifications** panel.
![][img-monitor]
5. When the IoT Hub has been created successfully, open the blade of the new IoT Hub, make a note of the **Hostname**, and then click the **Keys** icon.
![][img-keys]
6. Click the **iothubowner** policy, then copy and make note of the connection string in the **iothubowner** blade. Copy it to a location you can access later because you will need it to complete the rest of this tutorial.
> [AZURE.NOTE] In production scenarios, make sure to refrain from using the **iothubowner** credentials.
![][img-connection]
You have now created a device management enabled IoT Hub. You will need the connection string to complete the rest of this tutorial.
## Build the samples and provision devices in your IoT Hub
In this section, you will run a script that builds the simulated device and the samples and provisions a set of new device identities in the device registry of your IoT Hub. A device cannot connect to IoT Hub unless it has an entry in the device registry.
To build the samples and provision devices in you IoT Hub, follow the steps below:
1. Open the **Developer Command Prompt for VS2015**.
2. Clone the github repository. **Make sure to clone in a directory that does not have any spaces.**
```
git clone --recursive --branch dmpreview https://github.com/Azure/azure-iot-sdks.git
```
3. From the root folder where you cloned the **azure-iot-sdks** repository, navigate to the **\\azure-iot-sdks\\csharp\\service\\samples** folder and run, replacing the placeholder value with your connection string from the previous section:
```
setup.bat <IoT Hub Connection String>
```
This script does the following:
1. Runs **cmake** to create a Visual Studio 2015 solution for the simulated device. This project file is **azure-iot-sdks\\csharp\\service\\samples\\cmake\\iotdm\_client\\samples\\iotdm\_simple\_sample\\iotdm\_simple\_sample.vcxproj**. Note that the source files are in the folder **azure-iot-sdks\\c\\iotdm\_client\\samples\\iotdm\_simple\_sample**.
2. Builds the simulated device project **iotdm\_simple\_sample.vcxproj**.
3. Builds the device management samples **azure-iot-sdks\\csharp\\service\\samples\\GetStartedWithIoTDM\\GetStartedWithIoTDM.sln**.
4. Runs **GenerateDevices.exe** to provision device identities in your IoT Hub. The devices are described in **sampledevices.json** (located in the **azure-iot-sdks\\node\\service\\samples** folder) and after the devices are provisioned, the credentials are stored in the **devicecreds.txt** file (located in the **azure-iot-sdks\\csharp\\service\\samples\\bin** folder).
## Start your simulated devices
Now that the devices have been added to the device registry, you can start simulated managed devices. One simulated device is started for each device identity provisioned in the Azure IoT Hub.
Using the developer command prompt, in the **\\azure-iot-sdks\\csharp\\service\\samples\\bin** folder, run:
```
simulate.bat
```
This script runs one instance of **iotdm\_simple\_sample.exe** for each device listed in the **devicecreds.txt** file. The simulated device will continue to run until you close the command window.
The **iotdm\_simple\_sample** sample application is built using the Azure IoT Hub device management client library for C, which enables the creation of IoT devices that can be managed by Azure IoT Hub. Device makers can use this library to report device properties and implement the execute actions required by device jobs. This library is a component delivered as part of the open source Azure IoT Hub SDKs.
When you run **simulate.bat**, you see a stream of data in the output window. This output shows the incoming and outgoing traffic as well as **printf** statements in the application specific callback functions. This allows you to see incoming and outgoing traffic along with how the sample application is handling the decoded packets. When the device connects to the IoT Hub, the service automatically starts to observe resources on the device. The IoT Hub DM client library then invokes the device callbacks to retrieve the latest values from the device.
Below is output from the **iotdm\_simple\_sample** sample application. At the top you see a successful **REGISTERED** message, showing the device with Id **Device11-7ce4a850** connecting to IoT Hub.
> [AZURE.NOTE] To have less verbose output, build and run the retail configuration.
![][img-output]
Make sure to leave all the simulated devices running as you complete the following sections.
## Run the device management sample UI
Now that you have provisioned an IoT Hub and have several simulated devices running and registered for management, you can deploy the device management sample UI. The device management sample UI provides you with a working example of how to utilize the device management APIs to build an interactive UI experience. For more information about the device management sample UI, including [known issues](https://github.com/Azure/azure-iot-device-management#knownissues), see the [Azure IoT device management UI][lnk-dm-github] GitHub repository.
To retrieve, build, and run the device management sample UI, follow the steps below:
1. Open a **Command Prompt**.
2. Confirm that you’ve installed Node.js 6.1.0 or greater according to the prerequisites section by typing `node --version`.
3. Clone the Azure IoT device management UI GitHub repository by running the following command:
```
git clone https://github.com/Azure/azure-iot-device-management.git
```
4. In the root folder of your cloned copy of the Azure IoT device management UI repository, run the following command to retrieve the dependent packages:
```
npm install
```
5. When the npm install command has completed, run the following command to build the code:
```
npm run build
```
6. Use a text editor to open the user-config.json file in the root of the cloned folder. Replace the text "<YOUR CONNECTION STRING HERE>" with your IoT Hub connection string from the previous section and save the file (a sketch of the edited file appears after these steps).
7. In the command prompt, run the following command to start the device management UX app:
```
npm run start
```
8. When the command prompt has reported "Services have started", open a web browser (Edge/IE 11+/Safari/Chrome are currently supported) and navigate to the device management app at the following URL to view your simulated devices: <http://127.0.0.1:3003>.
![][img-dm-ui]
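For reference, the edit made in step 6 boils down to pasting your connection string into the JSON config. The sketch below is illustrative only; the property name shown here is an assumption, so keep whatever key the repository's file already defines:
```json
{
    "IoTHubConnectionString": "HostName=<your-hub-name>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<your-key>"
}
```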
Leave the simulated devices and the device management app running as you proceed to the next device management tutorial.
## Next step
To continue learning about the Azure IoT Hub device management features, see the [Explore Azure IoT Hub device management using the sample UI][lnk-sample-ui] tutorial.
<!-- images and links -->
[img-new-hub]: media/iot-hub-device-management-get-started/image1.png
[img-configure-hub]: media/iot-hub-device-management-get-started/image2.png
[img-monitor]: media/iot-hub-device-management-get-started/image3.png
[img-keys]: media/iot-hub-device-management-get-started/image4.png
[img-connection]: media/iot-hub-device-management-get-started/image5.png
[img-output]: media/iot-hub-device-management-get-started/image6.png
[img-dm-ui]: media/iot-hub-device-management-get-started/dmui.png
[lnk-free-trial]: http://azure.microsoft.com/pricing/free-trial/
[Azure portal]: https://portal.azure.com/
[Using resource groups to manage your Azure resources]: ../azure-portal/resource-group-portal.md
[lnk-dm-github]: https://github.com/Azure/azure-iot-device-management
[lnk-sample-ui]: iot-hub-device-management-ui-sample.md
# Changelog
> **Tags:**
>
> - Breaking
> - Feature
> - Fix
> - Documentation
> - Internal
> - Polish
_Note: Gaps between patch versions are faulty, broken or test releases._
## [0.1.1](https://github.com/broucz/minirpc/compare/v0.1.0...v0.1.1) (2018-07-14)
### Documentation
- Initial documentation ([8d43b9a](https://github.com/broucz/minirpc/commit/8d43b9a)).
## [0.1.0](https://github.com/broucz/minirpc/releases/tag/v0.1.0) (2019-06-16)
### Feature
- Initial release ([8ac1bd6](https://github.com/broucz/minirpc/commit/8ac1bd6)).
---
title: View audit history for Azure AD roles in PIM - Azure Active Directory | Microsoft Docs
description: Learn how to view the audit history for Azure AD roles in Azure AD Privileged Identity Management (PIM).
services: active-directory
documentationcenter: ''
author: rolyon
manager: mtillman
editor: ''
ms.service: active-directory
ms.topic: conceptual
ms.workload: identity
ms.subservice: pim
ms.date: 06/10/2019
ms.author: rolyon
ms.custom: pim
ms.collection: M365-identity-device-management
ms.openlocfilehash: 8061cff8d39db66cb22a5650c7688657aa8b3554
ms.sourcegitcommit: 41ca82b5f95d2e07b0c7f9025b912daf0ab21909
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 06/13/2019
ms.locfileid: "67053938"
---
# <a name="view-audit-history-for-azure-ad-roles-in-pim"></a>View audit history for Azure AD roles in PIM
The Azure Active Directory (Azure AD) Privileged Identity Management (PIM) audit history shows all user assignments and activations for all privileged roles within the last 30 days. If you want to see the full audit history of activity in your directory, including administrator, end user, and synchronization activity, you can use the [Azure Active Directory security and activity reports](../reports-monitoring/overview-reports.md).
## <a name="view-audit-history"></a>View audit history
Follow these steps to view the audit history for Azure AD roles.
1. Sign in to the [Azure portal](https://portal.azure.com/) with a user that is a member of the [Privileged Role Administrator](../users-groups-roles/directory-assign-admin-roles.md#privileged-role-administrator) role.
1. Open **Azure AD Privileged Identity Management**.
1. Click **Azure AD roles**.
1. Click **Directory roles audit history**.
    Depending on your audit history, a column chart is displayed along with the total activations, max activations per day, and average activations per day.

    At the bottom of the page, a table is displayed with information about each action in the available audit history. The columns have the following meanings:
    | Column | Description |
    | --- | --- |
    | Time | When the action occurred. |
    | Requestor | The user who requested the role activation or change. If the value is **Azure System**, check the Azure audit history for more information. |
    | Action | The actions taken by the requestor. These can include Assign, Unassign, Activate, Deactivate, or AddedOutsidePIM. |
    | Member | The user who is activating or assigned to a role. |
    | Role | The role assigned to or activated by the user. |
    | Reasoning | Text that was entered into the reason field during activation. |
    | Expiration | When the activated role expires. Applies only to eligible role assignments. |
1. To sort the audit history, click **Time**, **Action**, and **Role**.
## <a name="filter-audit-history"></a>Filter audit history
1. At the top of the audit history page, click the **Filter** button.
    The **Update chart parameters** pane appears.
1. In **Time range**, select a time range.
1. In **Roles**, select the checkboxes of the roles you want to view.

1. Click **Done** to view the filtered audit history.
## <a name="next-steps"></a>Next steps
- [View activity and audit history for Azure resource roles in PIM](azure-pim-resource-rbac.md)
# **ECE5330/ECE6311 Final Project: Lego Color Sorting via Camera Feedback**
Clone the repo to a local folder on your computer.
# **Table of Contents**
- [Project Description](#Project-Description)
- [Setting up the Software](#Setting-up-the-Software)
- [Downloading Dependencies](#Downloading-Dependencies)
- [Running the Python Application](#Running-the-Python-Application)
- [STM32 Microcontroller](#STM32-Microcontroller)
- [Programming the STM 32 Microcontroller](#Programming-the-STM32-Microcontroller)
- [Pin Connections](#Pin-Connections)
- [List of Components](#List-of-Components)
- [Serial Communication](#Serial-Communication)
- [Running the Application without Microcontroller](#Running-the-Application-without-Microcontroller)
- [Motor Driver Connections](#Motor-Driver-Connections)
- [Running the Project](#Running-the-Project)
# **Project Description**
The following is our final project for ECE5330/ECE6311. It uses a USB Webcam, controlled via python, and a STM32F4 microcontroller to control a OWI-535 Robot Arm to sort objects based on color. The Robot Arm navigates itself using QR Codes and the Camera as positioning feedback.
This project can be seen in the following [Youtube Link](https://www.youtube.com/watch?v=eLcRGpxVBoI).
The project is composed of two sections:
1. The python application: Camera_Data_Detection.py.
2. The STM32F411VE microcontroller program located in the STM folder.
**Note:** The python application may be run independently without the microcontroller.
**WARNING:** The microcontroller program will only work correctly when the motor, L298N, and USB Serial connections are set correctly.
The Python application must be running for the microcontroller to successfully maneuver the OWI arm.
The Python application uses OpenCV in conjunction with an STM32 microcontroller.
Camera data is sent to the STM32 microcontroller over a Universal Asynchronous Receiver-Transmitter (UART) connection.
The microcontroller used for this project is the [STM32F411VE Discovery board](https://www.st.com/en/evaluation-tools/32f411ediscovery.html).
The firmware was written using STM's HAL library, as well as compiled and written using the [STM32CubeIDE](https://www.st.com/en/development-tools/stm32cubeide.html).
# **Setting up the Software**
**Note:** This project was written on Windows 10, there may be issues when attempting to run the python
application on MacOS or Linux Distros.
## **Downloading Dependencies**
The project was written with Python 3, specifically Python 3.7.5, and its dependencies installed globally.
A virtual environment may be set up locally if needed. The instructions in this file only enumerate global installation of the dependencies.
1. To install Python 3 on Windows go to the [Python Downloads Page](https://www.python.org/downloads/) and download the most recent version. (Install to PATH)
**Note:** Use both of the following commands on PowerShell or CMD to verify the python installation.
- `python --version`
- `python3 --version `
Python may be installed as _python_ or _python3_ depending on whether previous versions were already installed. Use the correct python command for the rest of the commands.
The instructions after this point assume Python was installed as *python3*.
**Note: Some of the following steps may be skipped if the packages are already installed on your system**
2. Use the following command for opencv install. Click here for the [OpenCV-Python](https://pypi.org/project/opencv-python/) page.
- `python3 -m pip install opencv-python --upgrade`
3. Use the following command to install _pyzbar_. Click here for the [Pyzbar](https://pypi.org/project/pyzbar/) page.
- `python3 -m pip install pyzbar --upgrade`
4. Use the following command to install _pyserial_. Click here for the [Pyserial](https://pypi.org/project/pyserial/) page.
- `python3 -m pip install pyserial --upgrade`
5. Use the following command to install _numpy_. Click here for the [Numpy](https://pypi.org/project/numpy/) page.
- `python3 -m pip install numpy --upgrade`
## **Running the Python Application**
1. Download the project by either cloning to a local folder, or downloading the repository as a zip file and extracting.
2. Navigate to the project folder on your terminal. The Python folder contains the Python application.
3. Enter either of the following commands to run the python application. Before running, connect a USB webcam to the computer.
- To run the application with default capture device 0 and no serial communication, run without arguments:
`python3 .\Python\Camera_Data_Detection.py`
**Note:** Use the following command: `python3 Camera_Data_Detection.py` if the active directory is the Python folder. If the active directory is the local repo directory use the commands listed.
- To run the application without serial communication and different capture device:
`python3 .\Python\Camera_Data_Detection.py 1`
**Note:** The argument 1 is used to set the openCV video capture device. On laptops, the built-in webcam may be set as capture device 0. Extra USB webcams may be set as capture device 1 or above.
- To run the application with a specific capture device and serial port use:
`python3 .\Python\Camera_Data_Detection.py 1 COM3`
**Note:** The second argument 'COM#' especifies the Windows COM Port for the microcontroller.
- On Linux distros or Mac OS, USB devices are usually listed under _/dev/_ as _/dev/TTYSx_ or _/dev/TTYUSBx/_.
**WARNING:** The STM32F411VE microcontroller does not have a USB-to-Serial converter chip such as Arduinos have. A separate USB-to-TTL adapter must be used with the STM32 pins.
The **COM PORT** must be available on the computer (Device Manager on Windows) for the Python application to run successfully.
See the [Serial Communication](#Serial-Communication) section for instructions on setting up the converter.
# **STM32 Microcontroller**
The STM32F4 Discovery board was programmed using STM32CubeIDE. The entire project folder for the microcontroller firmware can be found in the [STM32 folder](./STM32/).
## Programming the STM32 Microcontroller
To program the microcontroller:
- Open the project folder with STM32CubeIDE.
- Open the file `main.c` in the`src` folder.
- Under the *Project* file menu, click on *Build All* or use the keyboard shortcut *Ctrl + B*.
- Under the *Run* file menu, click on *Debug* or press *F11* to write to the microcontroller and Debug.
**Note:** The microcontroller is programmed to wait for the blue `USER` button press before running the program.
## **List of Components**
| **Component**| **Description** |
| --------| ------------ |
|OWI-535 Robotic Arm Edge| The Robot Arm used. The kit contains the DC motor.|
|L298N| Motor Drive Controller board|
|DSD Tech SH-U09C2| USB-to-TTL Adapter with built-in FTDI|
|Microsoft LifeCam HD-30000| USB Webcam used for the video capture|
|STM32F4 Discovery Kit| Microcontroller used for the project|
## **Pin Connections**

| **STM32F411 Pin Number** | **Peripheral Mode** | **Use Label** | **Description** |
| ------------------- | ---------| --------- |-----------|
|PD4 | GPIO Output | M1_IN | General Output used for L298N Motor Direction Control: IN3|
|PD3 | GPIO Output | M1_INB | General Output used for L298N Motor Direction Control: IN4|
|PD6 | GPIO Output | M2_IN |General Output used for L298N Motor Direction Control: IN3|
|PD5 | GPIO Output | M2_INB |General Output used for L298N Motor Direction Control: IN4|
|PB3 | GPIO Output | M3_IN |General Output used for L298N Motor Direction Control: IN1|
|PD7 | GPIO Output | M3_INB |General Output used for L298N Motor Direction Control: IN2|
|PB5 | GPIO Output | M4_IN |General Output used for L298N Motor Direction Control: IN3|
|PB4 | GPIO Output | M4_INB |General Output used for L298N Motor Direction Control: IN4|
|PB7 | GPIO Output | M5_IN |General Output used for L298N Motor Direction Control: IN1|
|PB6 | GPIO Output | M5_INB |General Output used for L298N Motor Direction Control: IN2|
|PA0-WKUP | GPIO Input | USER_BUTTON | Blue "User" Button to start program on press.|
| PD12 | TIM4_CH1| M1_PWM | PWM generation on Timer 4 channel 1 for L298N Motor PWM input: ENB |
| PE14 | TIM1_CH1| M2_PWM |PWM generation on Timer 1 channel 1 for L298N Motor PWM input: ENB|
| PE13 | TIM1_CH2| M3_PWM |PWM generation on Timer 1 channel 2 for L298N Motor PWM input: ENA|
| PE11 | TIM1_CH3| M4_PWM |PWM generation on Timer 1 channel 3 for L298N Motor PWM input: ENB|
| PE9 | TIM1_CH4| M5_PWM |PWM generation on Timer 1 channel 4 for L298N Motor PWM input: ENA|
| PA2| USART2_TX| | USART2 Transmitter pin |
| PA3| USART2_RX| | USART2 Receiver pin|
See [Motor Driver Connections](#Motor-Driver-Connections) for information on the pin connections between the microcontroller and L298N Driver module.

The picture above shows the OWI-535 Robot Arm used with the motor labels. Each Label in the [pins](#Pin-Connections) section corresponds to one of the motors. Note that the USB Webcam is attached to the Robot head, above the Gripper (Motor 5).
## **Serial Communication**

The USB-to-TTL image above can be found at the following [link](https://www.amazon.com/DSD-TECH-SH-U09C2-Debugging-Programming/dp/B07TXVRQ7V).
The Microcontrolloer image is found at at following [link](https://www.st.com/en/evaluation-tools/stm32f4discovery.html).
Upon plugging in the USB-to-TTL adapter to the computer, Windows may automatically detect and download the necessary FTDI drivers. A new COM Port should be detected and seen on the Device Manager.
**NOTE:** In the case that the adapter is not recognized, download the latest drivers for the FT232RL chip from [FTDI's website](https://www.ftdichip.com/Products/ICs/FT232R.htm).
### Running the Application without Microcontroller
When a USB-to-TTL adapter is plugged into the computer, the Python application runs successfully with a COM port as an argument:
- `python3 Camera_Data_Detection.py 1 COM3`
In this case, the computer recognizes the adapter as a serial device and can therefore receive data. The microcontroller does not need to be connected to the USB-to-TTL adapter when trying to view camera output.
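As a rough sketch of what happens with that COM port argument, the application opens the port with pyserial along these lines. The baud rate shown is an assumption for illustration; match whatever `Camera_Data_Detection.py` actually configures:
```python
import serial

# Open the COM port given on the command line; 115200 baud is an assumption.
ser = serial.Serial(port="COM3", baudrate=115200, timeout=1)

# Stream camera-feedback bytes to the STM32 over UART.
ser.write(b"example-position-data\n")
ser.close()
```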
## **Motor Driver Connections**

The following diagram shows the microcontroller pin connections for pins PD7, PB3, PD5, PD6, PE13 and PE14. See [Pin Connections](#Pin-Connections) for more information about the pins.
The OWI-535 Arm has five DC motors. As a result, three L298N Modules will be needed.
# Running the Project
In order to run the entire project the following conditions must be completed:
1. Set all of the L298N Connections with the Microcontroller. See the [Motor Driver Connections](#Motor-Driver-Connections) section for more details.
2. Verify the OWI-535 Robot Arm motor connections with the L298N Module.
3. Set all of the Serial-to-TTL connections between the Microcontroller and the adapter. See [Serial Communication](#Serial-Communication) section for more details.
4. Verify that the USB-to-TTL adapter is plugged into a computer USB Port, and verify its COM Port.
5. Verify that the USB Webcam is connected to a USB Port and verify its Capture Device Number. See [Running the Python Application](#Running-the-Python-Application) for more details.
Once the conditions above are completed, run the python application first before pressing the blue `USER` button on the microcontroller.
This project was created by Victor M. Lopez Rodriguez and Sankalp Parekh.
# Constructive Solid Geometry (CSG)
## Boolean operators
### Union
```json
"geometry": {
"type": "union",
"parts": [
{
"name":"Seg1 bottom",
"type": "tube",
"r": {
"from": 13.5,
"to": 39.5
},
"phi": {
"from": 0.3582,
"to": 59.6419
},
"h": 0
},
{
"name": "Seg1 side",
"type": "tube",
"r": {
"from": 39.5,
"to": 39.5
},
"phi": {
"from": 0.3582,
"to": 59.6419
},
"h": 40
},
}
}
```
### Difference
```json
"geometry": {
"type": "difference",
"parts": [
{
"name": "Initial Cylinder",
"type": "tube",
"r": {
"from": 0.0,
"to": 35.0
},
"phi": {
"from": 0.0,
"to": 360.0
},
"z": {
"from": 0,
"to": 40
}
},
{
"name": "Borehole",
"type": "tube",
"r": {
"from": 0.0,
"to": 5.0
},
"phi": {
"from": 0.0,
"to": 360.0
},
"z": {
"from": 0,
"to": 40
}
}
]
}
```
### Intersection
ToDo...
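This section is still marked ToDo upstream. Following the pattern of the union and difference examples above, an intersection would presumably be declared as in the sketch below; the `"type": "intersection"` keyword and the overlap geometry are assumptions, so verify the supported syntax against the package documentation:
```json
"geometry": {
    "type": "intersection",
    "parts": [
        {
            "name": "Inner cylinder",
            "type": "tube",
            "r": {
                "from": 0.0,
                "to": 35.0
            },
            "phi": {
                "from": 0.0,
                "to": 360.0
            },
            "h": 40
        },
        {
            "name": "Overlapping tube",
            "type": "tube",
            "r": {
                "from": 20.0,
                "to": 50.0
            },
            "phi": {
                "from": 0.0,
                "to": 360.0
            },
            "h": 40
        }
    ]
}
```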
---
title: Document.DefaultTableStyle Property (Word)
keywords: vbawd10.chm158007661
f1_keywords:
- vbawd10.chm158007661
ms.prod: word
api_name:
- Word.Document.DefaultTableStyle
ms.assetid: b6782b12-09a6-77b0-a52d-81d4028e7c19
ms.date: 06/08/2017
---
# Document.DefaultTableStyle Property (Word)
Returns a **Variant** that represents the table style that is applied to all newly created tables in a document. Read-only.
## Syntax
_expression_ . **DefaultTableStyle**
_expression_ An expression that returns a **[Document](document-object-word.md)** object.
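Because the property is read-only, reading it back is the simplest use. A minimal sketch:
```vb
Sub ShowDefaultTableStyle()
    ' Display the name of the table style applied to newly created tables.
    MsgBox ActiveDocument.DefaultTableStyle
End Sub
```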
## Example
This example checks to see if the default table style used in the active document is named "Table Normal" and, if it is, changes the default table style to "TableStyle1." This example assumes that you have a table style named "TableStyle1."
```vb
Sub TableDefaultStyle()
With ActiveDocument
If .DefaultTableStyle = "Table Normal" Then
.SetDefaultTableStyle _
Style:="TableStyle1", SetInTemplate:=True
End If
End With
End Sub
```
## See also
#### Concepts
[Document Object](document-object-word.md)
---
title: Back up SQL Server databases to Azure
description: This article explains how to back up SQL Server to Azure. The article also explains SQL Server recovery.
ms.topic: conceptual
ms.date: 06/18/2019
ms.openlocfilehash: b6daf631248958948e799b20284d84a1e59e5dfe
ms.sourcegitcommit: db925ea0af071d2c81b7f0ae89464214f8167505
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 04/15/2021
ms.locfileid: "107518863"
---
# <a name="about-sql-server-backup-in-azure-vms"></a>About SQL Server backup in Azure VMs
[Azure Backup](backup-overview.md) offers a stream-based, specialized solution to back up SQL Server instances running in Azure VMs. This solution aligns with Azure Backup's benefits of zero-infrastructure backup, long-term retention, and central management. It additionally offers the following advantages specifically for SQL Server:
1. Workload-aware backups that support all backup types: full, differential, and log
2. 15-minute RPO (recovery point objective) with frequent log backups
3. Point-in-time recovery down to a second
4. Individual database-level backup and restore
For information on the backup and restore scenarios that are supported today, see the [support matrix](sql-support-matrix.md#scenario-support).
## <a name="backup-process"></a>Backup process
This solution leverages the native SQL APIs to take backups of your SQL database instances.
* Once you specify the SQL Server VM that you want to protect and the databases to query, the Azure Backup service installs a workload backup extension named `AzureBackupWindowsWorkload` on the VM.
* This extension consists of a coordinator and a SQL plugin. While the coordinator is responsible for triggering workflows for operations such as configure backup, backup, and restore, the plugin is responsible for the actual data flow.
* To discover the databases on this VM, Azure Backup creates the account `NT SERVICE\AzureWLBackupPluginSvc`. This account is used for backup and restore and requires SQL sysadmin permissions. Because `NT SERVICE\AzureWLBackupPluginSvc` is a [virtual service account](/windows/security/identity-protection/access-control/service-accounts#virtual-accounts), no password management is required. Azure Backup uses the `NT AUTHORITY\SYSTEM` account for database discovery and inquiry, so this account needs a public login in SQL. If you didn't create the SQL Server VM from Azure Marketplace, you might receive the error **UserErrorSQLNoSysadminMembership**. In that case, [follow these instructions](#set-vm-permissions).
* Once you trigger configure protection on the selected databases, the backup service sets up the coordinator with the backup schedules and other policy details, which the extension caches locally on the VM.
* At the scheduled time, the coordinator communicates with the plugin, and the plugin starts streaming the backup data from the SQL Server instance using VDI.
* Because the plugin sends the data directly to the Recovery Services vault, no staging location is needed. The data is encrypted and stored by the Azure Backup service in storage accounts.
* When the data transfer is complete, the coordinator confirms the commit with the backup service.

## <a name="before-you-start"></a>Before you start
Before you start, verify the following requirements:
1. Make sure you have a SQL Server instance running in Azure. You can [quickly create a SQL Server instance](../azure-sql/virtual-machines/windows/sql-vm-create-portal-quickstart.md) in the Marketplace.
2. Review the [feature considerations](sql-support-matrix.md#feature-considerations-and-limitations) and [scenario support](sql-support-matrix.md#scenario-support).
3. [Review common questions](faq-backup-sql-server.yml) about this scenario.
## <a name="set-vm-permissions"></a>Set VM permissions
When you run discovery on a SQL Server instance, Azure Backup does the following:
* Adds the AzureBackupWindowsWorkload extension.
* Creates the NT SERVICE\AzureWLBackupPluginSvc account to discover databases on the VM. This account is used for backup and restore and requires SQL sysadmin permissions.
* Discovers databases that are running on the VM. Azure Backup uses the NT AUTHORITY\SYSTEM account, which must be a public login in SQL Server.
If you didn't create the SQL Server VM in Azure Marketplace, or if you're running SQL Server 2008 or 2008 R2, you might receive the error **UserErrorSQLNoSysadminMembership**.
To grant permissions when running **SQL 2008** or **2008 R2** on Windows 2008 R2, see [these instructions](#give-sql-sysadmin-permissions-for-sql-2008-and-sql-2008-r2).
For all other versions, fix the permissions as follows:
1. Use an account with SQL Server sysadmin permissions to sign in to SQL Server Management Studio (SSMS). Unless you need special permissions, Windows authentication should work.
2. On the SQL Server instance, open the **Security/Logins** folder.

3. Right-click the **Logins** folder and select **New Login**. In **Login - New**, select **Search**.

4. The Windows virtual service account **NT SERVICE\AzureWLBackupPluginSvc** was created during the VM registration and SQL discovery phase. Enter the account name as shown in **Enter the object name to select**. Select **Check Names** to resolve the name. Click **OK**.

5. In **Server Roles**, make sure the **sysadmin** role is selected. Click **OK**. The required permissions should now exist.

6. Now associate the database with the Recovery Services vault. In the Azure portal, in the **Protected Servers** list, right-click the server in an error state and select **Rediscover DBs**.

7. You can monitor progress in the **Notifications** area. When the selected databases are found, a success message appears.

> [!NOTE]
> If your SQL Server has multiple instances of SQL Server installed, you must add sysadmin permission for the **NT Service\AzureWLBackupPluginSvc** account to all SQL instances.
### <a name="give-sql-sysadmin-permissions-for-sql-2008-and-sql-2008-r2"></a>Give SQL sysadmin permissions for SQL 2008 and SQL 2008 R2
Add the **NT AUTHORITY\SYSTEM** and **NT Service\AzureWLBackupPluginSvc** logins to the SQL Server instance:
1. Go to the SQL Server instance in Object Explorer.
2. Navigate to Security -> Logins.
3. Right-click the Logins folder and select *New Login...*.

4. Go to the General tab and enter **NT AUTHORITY\SYSTEM** as the login name.

5. Go to *Server Roles* and choose the *public* and *sysadmin* roles.

6. Go to *Status*. *Grant* the permission to connect to the database engine, and set Login to *Enabled*.

7. Select OK.
8. Repeat the same sequence of steps (1-7 above) to add the NT Service\AzureWLBackupPluginSvc login to the SQL Server instance. If the login already exists, make sure it has the sysadmin server role and, under Status, is granted permission to connect to the database engine with Login set to Enabled.
9. After granting the permission, **rediscover the databases** in the portal: Vault **->** Backup Infrastructure **->** Workload in Azure VM:

Alternatively, you can automate granting the permissions by running the following PowerShell commands in administrator mode. The instance name is set to MSSQLSERVER by default. Change the instance name argument in the script if needed:
```powershell
param(
[Parameter(Mandatory=$false)]
[string] $InstanceName = "MSSQLSERVER"
)
if ($InstanceName -eq "MSSQLSERVER")
{
$fullInstance = $env:COMPUTERNAME # In case it is the default SQL Server Instance
}
else
{
$fullInstance = $env:COMPUTERNAME + "\" + $InstanceName # In case of named instance
}
try
{
sqlcmd.exe -S $fullInstance -Q "sp_addsrvrolemember 'NT Service\AzureWLBackupPluginSvc', 'sysadmin'" # Adds login with sysadmin permission if already not available
}
catch
{
Write-Host "An error occurred:"
Write-Host $_.Exception|format-list -force
}
try
{
sqlcmd.exe -S $fullInstance -Q "sp_addsrvrolemember 'NT AUTHORITY\SYSTEM', 'sysadmin'" # Adds login with sysadmin permission if already not available
}
catch
{
Write-Host "An error occurred:"
Write-Host $_.Exception|format-list -force
}
```
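For reference, the two `sqlcmd.exe` calls in the script execute the following T-SQL; you can run the equivalent directly in SSMS if you prefer. This is a sketch using the same legacy `sp_addsrvrolemember` procedure the script uses:
```sql
-- Grant the sysadmin role to both accounts used by Azure Backup.
EXEC sp_addsrvrolemember 'NT Service\AzureWLBackupPluginSvc', 'sysadmin';
EXEC sp_addsrvrolemember 'NT AUTHORITY\SYSTEM', 'sysadmin';
```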
## <a name="next-steps"></a>Next steps
* [Learn about](backup-sql-server-database-azure-vms.md) backing up SQL Server databases.
* [Learn about](restore-sql-database-azure-vm.md) restoring backed-up SQL Server databases.
* [Learn about](manage-monitor-sql-database-backup.md) managing backed-up SQL Server databases.
# Pack | [English](README.md) [![Build Status](https://travis-ci.org/apache/servicecomb-pack.svg?branch=master)](https://travis-ci.org/apache/servicecomb-pack?branch=master) [![Coverage Status](https://coveralls.io/repos/github/apache/servicecomb-pack/badge.svg?branch=master)](https://coveralls.io/github/apache/servicecomb-pack?branch=master)[![Maven Central](https://maven-badges.herokuapp.com/maven-central/org.apache.servicecomb.pack/pack/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Corg.apache.servicecomb.pack) [![License](https://img.shields.io/badge/license-Apache%202-green.svg)](https://www.apache.org/licenses/LICENSE-2.0.html) [![Gitter](https://img.shields.io/badge/gitter-join%20chat%20%E2%86%92-brightgreen.svg)](https://gitter.im/ServiceCombUsers/Saga)
Apache ServiceComb Pack is an eventual data consistency solution for microservice applications.
## Key features
* High availability: supports highly available cluster-mode deployment.
* High reliability: all critical transaction events are persisted in the database.
* High performance: transaction events are reported over high-performance gRPC, and transaction request and response messages are serialized and deserialized with Kryo.
* Low invasiveness: introducing a distributed transaction takes only 2-3 annotations plus the corresponding compensation methods.
* Simple deployment: supports fast deployment and delivery via containers (Docker).
* Flexible compensation: supports both forward recovery (retry) and backward recovery (compensation).
* Easy extension: the Pack architecture makes it easy to implement additional coordination protocols; TCC and Saga are currently supported, and more protocols can be added in the future.
## Architecture
The ServiceComb Pack architecture consists of **alpha** and **omega**, where:
* alpha acts as the coordinator, responsible for managing and coordinating transactions.
* omega is an agent embedded in each microservice, responsible for intercepting invocations and reporting transaction events to alpha.
The following diagram shows the relationship among alpha, omega, and the microservices:

On top of this architecture, we have implemented the TCC coordination protocol in addition to the Saga protocol.
See the [ServiceComb Pack design document](docs/design_zh.md) for details.
The community also provides omega implementations in other languages:
* A Go version of omega is available at https://github.com/jeremyxu2010/matrix-saga-go
* A C# version of omega is available at https://github.com/OpenSagas-csharp/servicecomb-saga-csharp
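As an illustration of the "only 2-3 annotations" feature above, a Saga participant written against the Java omega looks roughly like the sketch below. The package and annotation names are assumptions that should be checked against the omega version in use:
```java
import org.apache.servicecomb.pack.omega.context.annotations.SagaStart;
import org.apache.servicecomb.pack.omega.transaction.annotations.Compensable;

public class BookingService {

  // Begins the distributed transaction; omega reports its events to alpha.
  @SagaStart(timeout = 10)
  public void book(String trip) {
    order(trip);
  }

  // Forward operation; "cancel" is invoked by omega on rollback.
  @Compensable(timeout = 5, compensationMethod = "cancel")
  public void order(String trip) {
    // reserve the trip
  }

  public void cancel(String trip) {
    // backward recovery (compensation)
  }
}
```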
## Quick start
* For Saga on ServiceComb Java Chassis, see the [booking demo](demo/saga-servicecomb-demo/README.md)
* For Saga in a Spring application, see the [booking demo](demo/saga-spring-demo/README.md).
* For Saga in a Dubbo application, see the [Dubbo demo](demo/saga-dubbo-demo/README.md).
* For TCC in a Spring application, see the [TCC demo](demo/tcc-spring-demo/README.md)
* For how to debug the demos, see [debugging the Spring demo](demo/saga-spring-demo#debugging).
## Build and run the code
ServiceComb Pack currently supports both Spring Boot 1.x and Spring Boot 2.x; you can switch the Spring Boot version with the *-Pspring-boot-1* or *-Pspring-boot-2* profile.
Because Spring Boot supports JDK9 only from 2.x on, use the spring-boot-2 profile if you want to compile Pack and run the tests with JDK9 or JDK10. All commands in the examples below must be run from the Pack root directory.
* Build the code and run the unit tests
```bash
$ mvn clean install -Pspring-boot-2
```
* Build the demo code, generate the docker images (maven enables this step depending on whether docker is installed), and run the acceptance tests.
```bash
$ mvn clean install -Pdemo,spring-boot-2
```
* Build the demo code and generate the docker images without running tests
```bash
$ mvn clean install -DskipTests=true -Pdemo,spring-boot-2
```
* Build the release package without running tests; maven generates the package under the distribution/target directory.
```bash
$ mvn clean install -DskipTests=true -Prelease
```
## User guide
See the [user guide](docs/user_guide_zh.md) for how to build and use Pack.
## Get the latest release
Get the latest released version:
* [Download the release package](http://servicecomb.apache.org/release/pack-downloads/)
Get the latest snapshot version:
* The latest snapshot versions are published to the Apache Nexus repository; add the following repository definitions to your pom.xml.
```
<repositories>
<repository>
<releases />
<snapshots>
<enabled>true</enabled>
</snapshots>
<id>repo.apache.snapshot</id>
<url>https://repository.apache.org/content/repositories/snapshots/</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<releases />
<snapshots>
<enabled>true</enabled>
</snapshots>
<id>repo.apache.snapshot</id>
<url>https://repository.apache.org/content/repositories/snapshots/</url>
</pluginRepository>
</pluginRepositories>
```
## [FAQ](FAQ_ZH.md)
## Contact us
* [Report issues](https://issues.apache.org/jira/browse/SCB)
* [Gitter chat room](https://gitter.im/ServiceCombUsers/Saga)
* Mailing list: [subscribe](mailto:dev-subscribe@servicecomb.apache.org) [browse](https://lists.apache.org/list.html?dev@servicecomb.apache.org)
## Contributing
See the [code submission guide](http://servicecomb.apache.org/cn/developers/submit-codes/) for details.
## Star history
[](https://starcharts.herokuapp.com/apache/servicecomb-pack)
## License
[Apache 2.0 license](https://github.com/apache/servicecomb-pack/blob/master/LICENSE).
---
title: "createObjects"
description: "createObjects() enables the creation of multiple computing instances on an account in a single call. This
method is a s... "
layout: "method"
tags:
- "method"
- "sldn"
- "Virtual"
classes:
- "SoftLayer_Virtual_Guest"
aliases:
- "/reference/services/softlayer_virtual_guest/createObjects"
---
# [SoftLayer_Virtual_Guest](/reference/services/SoftLayer_Virtual_Guest)::createObjects
Create new computing instances
## Overview
createObjects() enables the creation of multiple computing instances on an account in a single call. This
method is a simplified alternative to interacting with the ordering system directly.
In order to create a computing instance, a set of template objects must be sent in with a few required
values.
<b>Warning:</b> Computing instances created via this method will incur charges on your account.
See [SoftLayer_Virtual_Guest::createObject]({{<ref "reference/services/SoftLayer_Virtual_Guest/createObject">}}) for specifics on the requirements of each template object.
<h1>Example</h1>
<http title="Request">curl -X POST -d '{
"parameters":[
[
{
"hostname": "host1",
"domain": "example.com",
"startCpus": 1,
"maxMemory": 1024,
"hourlyBillingFlag": true,
"localDiskFlag": true,
"operatingSystemReferenceCode": "UBUNTU_LATEST"
},
{
"hostname": "host2",
"domain": "example.com",
"startCpus": 1,
"maxMemory": 1024,
"hourlyBillingFlag": true,
"localDiskFlag": true,
"operatingSystemReferenceCode": "UBUNTU_LATEST"
}
]
]
}' https://api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/createObjects.json
</http>
<http title="Response">HTTP/1.1 200 OK
[
{
"accountId": 232298,
"createDate": "2012-11-30T23:56:48-06:00",
"dedicatedAccountHostOnlyFlag": false,
"domain": "softlayer.com",
"hostname": "ubuntu1",
"id": 1301456,
"lastPowerStateId": null,
"lastVerifiedDate": null,
"maxCpu": 1,
"maxCpuUnits": "CORE",
"maxMemory": 1024,
"metricPollDate": null,
"modifyDate": null,
"privateNetworkOnlyFlag": false,
"startCpus": 1,
"statusId": 1001,
"globalIdentifier": "fed4c822-48c0-45d0-85e2-90476aa0c542"
},
{
"accountId": 232298,
"createDate": "2012-11-30T23:56:49-06:00",
"dedicatedAccountHostOnlyFlag": false,
"domain": "softlayer.com",
"hostname": "ubuntu2",
"id": 1301457,
"lastPowerStateId": null,
"lastVerifiedDate": null,
"maxCpu": 1,
"maxCpuUnits": "CORE",
"maxMemory": 1024,
"metricPollDate": null,
"modifyDate": null,
"privateNetworkOnlyFlag": false,
"startCpus": 1,
"statusId": 1001,
"globalIdentifier": "bed4c686-9562-4ade-9049-dc4d5b6b200c"
}
]
</http>
-----
### Parameters
|Name | Type | Description |
| --- | --- | --- |
|templateObjects| <a href='/reference/datatypes/SoftLayer_Virtual_Guest'>SoftLayer_Virtual_Guest[] </a>| An array of SoftLayer_Virtual_Guest objects that you wish to create.|
### Required Headers
* authenticate
### Optional Headers
* SoftLayer_Virtual_GuestObjectMask
* SoftLayer_ObjectMask
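For example, an object mask can also be supplied as a query string parameter to trim the fields returned for each created guest; the mask contents below are illustrative, and the request body is elided:
<http title="Request">curl -g 'https://api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/createObjects.json?objectMask=mask[id,hostname,globalIdentifier]' -X POST -d '{"parameters": [[ ... ]]}'
</http>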
### Return Values
* <a href='/reference/datatypes/SoftLayer_Virtual_Guest'>SoftLayer_Virtual_Guest[] </a>
### Associated Methods
* [SoftLayer_Virtual_Guest::createObject](/reference/services/SoftLayer_Virtual_Guest/createObject )
* [SoftLayer_Virtual_Guest::getCreateObjectOptions](/reference/services/SoftLayer_Virtual_Guest/getCreateObjectOptions )
Features:<br/>
1. dbdeploy - automatically creates and updates database table structures and handles CRUD of base data, etc.
(1) You must create *DDL\*DML sql files under the sql directory (configured in <scriptdirectory>./sql</scriptdirectory>) to create and update table structures and to add, modify, or delete base data
(2) Run the resetDB.bat batch file to create the springboot-mybatis database and the changelog table (the sql directory must exist!)
(3) Run the createNewDDLChangeFile.bat batch file to create a new *DDL.sql for creating and updating table structures, etc. (createNewDDLChangeFile.bat : *DML.sql -> table data records, etc.)
2. [Recommended] CodeGenerator - automatically generates Controller, Service, Mapper, XML, and other code from table structures. Note: this CodeGenerator tool is adapted from [lihengming/spring-boot-api-project-seed](https://github.com/lihengming/spring-boot-api-project-seed),
and the dbtools project has already been rolled out for internal use at our company. Many thanks to the Alibaba experts whose open-source sharing brings so much efficiency to day-to-day development; in the same spirit of giving back to the community, we are open-sourcing the other related tools we have built!
3. [Alternative to 2] Mybatis-generator - automatically generates some basic persistence-layer code from table structures
In IDEA, open Maven Projects on the right, then dbutils -> Plugins -> mybatis-generator:generate; double-click it to reverse-engineer the persistence-layer code for the existing tables
## 2.0.1 (2020-05-01)
1. Bug Fix: fixed global configuration being shared across multiple instances
## 2.0.0 (2020-04-24)
1. New Feature: added the request `withCredentials` option (supported on H5 only)
1. New Feature: added `files` and `file` options to upload on H5. [uni.uploadFile](https://uniapp.dcloud.io/api/request/network-file?id=uploadfile "uni.uploadFile")
1. Enhancement: the `params` option is now formatted with the axios formatting method
1. Bug Fix: tolerate upload responses whose data is an empty string
1. Change: changed how the header is merged with the global config. Now: header = Object.assign(global, local)
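Illustratively, the 2.0.0 header merge behaves like this (the variable names here are generic, not the library's internals):
```js
// Per-request header keys now override the global defaults.
const header = Object.assign({}, globalHeader, localHeader)
```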
## 0.0.0 (2019-05)
1. luch-request created
# Frontend - Image management system project
:dash: [Deployed application](http://labesplash.s3-website-us-east-1.amazonaws.com/)
<p align="center">
<img src="https://user-images.githubusercontent.com/29711622/99461331-65db0480-2910-11eb-8272-fecf202bf84d.gif">
</p>
<br>
## Main technologies/tools used
1. React
2. REST API
3. Advanced styling with CSS
4. Responsiveness and adaptation of the web application for the frontend.
<br>
# Features
### 1. Sign-up screen
- The user must provide their name, nickname, email, and a password with at least 6 characters.
### 2. Login screen
- All users log in through the same screen. They can provide the email (or the nickname, if the account was created with one) and the correct password.
### 3. Music or image creation screen
- The user must provide all the information listed in the backend, in a form that validates empty fields. After creating an item, the user is taken to the screen that lists all content created so far.
### 4. Item list screen
- This screen displays all content created so far. It shows a small version of each image and its caption.
#### 4.1 Single item screen.
- When an image is clicked, a modal appears with the image at a larger size, along with the rest of the image's information, such as author, year, and tags.
#### 5. Collection creation screen
- The home screen has a button to create a new collection. Clicking it opens a modal with a short form to fill in. Clicking the save button closes the modal and shows a success message.
#### 6. Collection list screen
- This screen lists the collections created by the user, with name and image. If there is no image, a placeholder is shown instead. Clicking one of the collections leads to the next screen.
### 6.1 Collection details screen
- Works like the screen that lists all items, but only the images in the collection appear. Clicking an item behaves the same as on the initial item list screen.
### 7. Items-per-criterion screen.
- This screen displays all criteria created.
### 7.1 Criterion details screen
- Works like the screen that lists all items, but only the images for the criterion appear. Clicking an item behaves the same as on the initial item list screen.
### 7.2 Single item screen.
When an image is clicked, a modal appears with the image at a larger size, along with the rest of the image's information, such as author, year, and tags.
<br>
## Project scope
Create a weekly task list
<br>
## How to run the application
In the terminal, clone the project:
```
git clone
```
Enter the project folder:
```
cd image-system
```
Install the dependencies:
```
npm install
```
Run the application:
```
npm start
```
<br>
## Contributing
Contributions to the project are much appreciated. To contribute:
- Fork the project
- Create a branch for your feature
```
git checkout -b feature
```
- Stage the changes
```
git add .
```
- _Commit_ the changes
```
git commit -m 'Adicionando a feature X'
```
- Push the branch
```
git push origin feature
```
- Open a Pull Request
<br>
## License
The [MIT License]() (MIT)
Copyright :copyright: 2020 - LabeFood
<br>
## Contact
**Anna Fernandes**: *Full-stack web developer | Infographic designer*
- [Linkedin](https://www.linkedin.com/in/annacbfernandes/)
- [Github](https://github.com/acretelli)
- [Email](anna.cbf@gmail.com)
<br>
### Built for Labenu | Full-Stack Web Development Bootcamp
Learning frontend, backend, and soft skills
---
ms.assetid: 8890ccc9-068d-4da2-bd51-8a2964173ff1
title: AD FS federation server farm using WID and proxies
author: billmath
ms.author: billmath
manager: femila
ms.date: 05/31/2017
ms.topic: article
ms.openlocfilehash: 88737eade6682f7be3572b3bc7f65bfc47c8e0a1
ms.sourcegitcommit: dfa48f77b751dbc34409aced628eb2f17c912f08
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 08/07/2020
ms.locfileid: "87945341"
---
# <a name="federation-server-farm-using-wid-and-proxies"></a>Granja de servidores de federación con WID y servidores proxy
Esta topología de implementación para Servicios de federación de Active Directory (AD FS) \( AD FS \) es idéntica a la granja de servidores de Federación con la topología WID de Windows Internal Database \( \) , pero agrega servidores proxy de Federación a la red perimetral para admitir usuarios externos. Los proxies de servidor de Federación redirigen las solicitudes de autenticación de cliente que proceden de fuera de la red corporativa a la granja de servidores de Federación.
## <a name="deployment-considerations"></a>Consideraciones de la implementación
En esta sección se describen varias consideraciones sobre la audiencia, las ventajas y las limitaciones que están asociadas con esta topología de implementación.
### <a name="who-should-use-this-topology"></a>¿Quién debe usar esta topología?
- Organizaciones con 100 o menos relaciones de confianza configuradas que necesitan proporcionar a los usuarios internos y a los usuarios externos que han \( iniciado sesión en equipos que están ubicados físicamente fuera de la red corporativa \) con acceso de inicio de sesión único a los \- \( \) servicios o aplicaciones federados.
- Organizaciones que necesitan proporcionar a los usuarios internos y a los usuarios externos acceso de SSO a Microsoft Office 365
- Organizaciones más pequeñas que tienen usuarios externos y requieren servicios escalables y redundantes
### <a name="what-are-the-benefits-of-using-this-topology"></a>¿Cuáles son las ventajas de usar esta topología?
- Las mismas ventajas que se muestran para la [granja de servidores de Federación con](Federation-Server-Farm-Using-WID-2012.md) la topología WID, además de la ventaja de proporcionar acceso adicional a los usuarios externos.
### <a name="what-are-the-limitations-of-using-this-topology"></a>¿Cuáles son las limitaciones del uso de esta topología?
- Las mismas limitaciones que se muestran para la [granja de servidores de Federación con](Federation-Server-Farm-Using-WID-2012.md) la topología WID
## <a name="server-placement-and-network-layout-recommendations"></a>Recomendaciones de ubicación de servidor y diseño de red
Para implementar esta topología, además de agregar dos servidores proxy de Federación, debe asegurarse de que la red perimetral también puede proporcionar acceso a un servidor DNS del sistema de nombres de dominio \( \) y a un segundo host NLB de equilibrio de carga de red \( \) . El segundo host de NLB debe configurarse con un clúster de NLB que use una \- dirección IP de clúster accesible por Internet y debe usar la misma configuración de nombre DNS de clúster que el clúster NLB anterior configurado en la red corporativa \( FS.fabrikam.com \) . Los servidores proxy de Federación también deben configurarse con \- direcciones IP accesibles por Internet.
La siguiente ilustración muestra la granja de servidores de Federación existente con la topología WID que se ha descrito anteriormente y cómo la empresa ficticia Fabrikam, Inc., proporciona acceso a un servidor DNS perimetral, agrega un segundo host de NLB con el mismo nombre DNS de clúster \( FS.fabrikam.com \) y agrega dos servidores proxy \( de Federación fsp1 y fsp2 \) a la red perimetral.

Para obtener más información acerca de cómo configurar el entorno de red para su uso con servidores de Federación o proxies de servidor de Federación, consulte [requisitos de resolución de nombres para servidores de Federación](Name-Resolution-Requirements-for-Federation-Servers.md) o [requisitos de resolución de nombres para los proxies de servidor de Federación](Name-Resolution-Requirements-for-Federation-Server-Proxies.md).
## <a name="see-also"></a>Consulte también
[Guía de diseño de AD FS en Windows Server 2012](AD-FS-Design-Guide-in-Windows-Server-2012.md)
| 87.84 | 661 | 0.797587 | spa_Latn | 0.977555 |
43a2f3718c2a6d3550b0bca33a8782b23703af86 | 2,298 | md | Markdown | TASK 6/README.md | neha865/Cognizance | fa1157a7aeee270ecd07f23ad296554d6bd389f5 | [
"Apache-2.0"
] | null | null | null | TASK 6/README.md | neha865/Cognizance | fa1157a7aeee270ecd07f23ad296554d6bd389f5 | [
"Apache-2.0"
] | null | null | null | TASK 6/README.md | neha865/Cognizance | fa1157a7aeee270ecd07f23ad296554d6bd389f5 | [
"Apache-2.0"
] | 1 | 2022-02-15T11:59:25.000Z | 2022-02-15T11:59:25.000Z | <h1><b><font color="orange">CodeHouse</font></b><h1>
<h2><I><font color="gray">welcome to codehouse!!!</font></I></h2>
### To view my landpage , click the below link !
**[</>codehouse](https://github.com/neha865/Cognizance/blob/main/TASK%203/codehouse%20landpage.png?raw=true)**
---
## **`THEME :`**
---
***CodeHouse is the first real-time platform for teaching programming online that enables you to connect with each student, see their work, and engage with their code instantly.The Most Advanced Platform for Teaching Programming
All your students' code. In a single place. In realtime***
---
## **`MOTIVATION :`**
---
***Engineering is the period where students have to learn coding and suddenly i got an idea like let's create a website for <mark> CodeHouse </mark> which is the online platform for teaching coding.***
---
## **`FUTURE IMPROVEMENTS :`**
---
- Daily, there will be a coding challenge appearing on their screens in order to increase their knowledge.
- See your students code in real-time and interact with their code to provide immediate and individualized support
- Track student engagement live with the activity monitor to identify and focus on students need attention the most
- Collaborative editing for you or your students to work together as a class or in breakout groups
- Integrated audio and video conferencing, screen sharing, and recording to take your class 100% online
---
>## **`DEMO CODE`**
---
```
Vue. Component (‘todo-item’, {
props: [‘todo’],
template: ‘ <li> {{ todo.text }}</li> ‘
})
var app = new Vue ({
el: ‘#app’,
data: {
groceryList: [
{ id: 0, text: ‘Vegetables’ }
{ id: 1, text: ‘Cheese’),
{ id: 2, text: ‘Whatever else’ }
]
}
})
```
---
## **`Extensive Programming Language Support : `**
---
***We support over a dozen of the most popular teaching languages including Python, Bash, C, C#, C++, Clojure, Go, Haskell, HTML, CSS, JS, Java, Javascript, Kotlin, Pascal, Processing, Perl, PHP, MySQL, Pseudocode, Ruby, Swift, Typescript, Visual Basic, Karel Python, Karel Java, MicroPython, and more!***
**Don’t see your language listed here? Reach out to our support team—we’re always looking to add new languages!**
---
| 37.672131 | 305 | 0.669713 | eng_Latn | 0.987448 |
43a32075eb23b9e7d0f4617c10a785fceb8f5b16 | 5,219 | md | Markdown | docs/3.0.0-alpha.x/guides/upload.md | tonxxd/strapi | d623346637af28d9e3ec3c7a0bcde22fae4ca6f7 | [
"MIT"
] | 7 | 2020-01-07T04:30:05.000Z | 2022-01-22T08:36:46.000Z | docs/3.0.0-alpha.x/guides/upload.md | tonxxd/strapi | d623346637af28d9e3ec3c7a0bcde22fae4ca6f7 | [
"MIT"
] | 8 | 2021-03-10T11:26:49.000Z | 2022-02-27T01:25:37.000Z | docs/3.0.0-alpha.x/guides/upload.md | tonxxd/strapi | d623346637af28d9e3ec3c7a0bcde22fae4ca6f7 | [
"MIT"
] | 4 | 2020-06-19T07:29:54.000Z | 2021-05-13T09:52:15.000Z | # File Upload
::: warning
This feature requires the Upload plugin (installed by default).
:::
Thanks to the plugin `Upload`, you can upload any kind of files on your server or externals providers such as AWS S3.
## Usage
The plugin exposes a single route `POST /upload` to upload one or multiple files in a single request.
::: warning
Please send FormData in your request body
:::
**Parameters**
- `files`: The file(s) to upload. The value(s) can be a Buffer or Stream.
- `path`: (optional): The folder where the file(s) will be uploaded to (only supported on strapi-provider-upload-aws-s3 now).
- `refId`: (optional): The ID of the entry which the file(s) will be linked to.
- `ref`: (optional): The name of the model which the file(s) will be linked to (see more below).
- `source`: (optional): The name of the plugin where the model is located.
- `field`: (optional): The field of the entry which the file(s) will be precisely linked to.
## Models
To add a new file attribute in your models, it's like adding a new association. In the first example, you will be able to upload and attach one file to the avatar attribute. Whereas, in our second example, you can upload and attach multiple pictures to the product.
**Path —** `User.settings.json`.
```json
{
"connection": "default",
"attributes": {
"pseudo": {
"type": "string",
"required": true
},
"email": {
"type": "email",
"required": true,
"unique": true
},
"avatar": {
"model": "file",
"via": "related",
"plugin": "upload"
}
}
}
```
**Path —** `Product.settings.json`.
```json
{
"connection": "default",
"attributes": {
"name": {
"type": "string",
"required": true
},
"price": {
"type": "integer",
"required": true
},
"pictures": {
"collection": "file",
"via": "related",
"plugin": "upload"
}
}
}
```
## Examples
**JS example**
The `Article` attributes:
```json
"attributes": {
"title": {
"default": "",
"type": "string"
},
"cover": {
"model": "file",
"via": "related",
"plugin": "upload",
"required": false
}
}
```
Code example:
```html
<form>
<!-- Can be multiple files -->
<input type="file" name="files" />
<input type="text" name="ref" value="article" />
<input type="text" name="refId" value="5c126648c7415f0c0ef1bccd" />
<input type="text" name="field" value="cover" />
<input type="submit" value="Submit" />
</form>
<script type="text/javascript">
const formElement = document.querySelector('form');
formElement.addEventListener('submit', e => {
e.preventDefault();
const request = new XMLHttpRequest();
request.open('POST', '/upload');
request.send(new FormData(formElement));
});
</script>
```
> ⚠️ You have to send a FormData in any case (React, Angular, jQuery etc...)
**Single file**
```
curl -X POST -F 'files=@/path/to/pictures/file.jpg' http://localhost:1337/upload
```
**Multiple files**
```
curl -X POST -F 'files[]=@/path/to/pictures/fileX.jpg' -F 'files[]=@/path/to/pictures/fileY.jpg' http://localhost:1337/upload
```
**Linking files to an entry**
Let's say that you want to have a `User` model provided by the plugin `Users & Permissions` and you want to upload an avatar for a specific user.
```json
{
"connection": "default",
"attributes": {
"pseudo": {
"type": "string",
"required": true
},
"email": {
"type": "email",
"required": true,
"unique": true
},
"avatar": {
"model": "file",
"via": "related",
"plugin": "upload"
}
}
}
```
```js
{
"files": "...", // Buffer or stream of file(s)
"path": "user/avatar", // Uploading folder of file(s).
"refId": "5a993616b8e66660e8baf45c", // User's Id.
"ref": "user", // Model name.
"source": "users-permissions", // Plugin name.
"field": "avatar" // Field name in the User model.
}
```
Here the request to make to associate the file (/path/to/pictures/avatar.jpg) to the user (id: 5a993616b8e66660e8baf45c) when the `User` model is provided by the `Users & Permissions` plugin.
```
curl -X POST -F 'files=@/path/to/pictures/avatar.jpg&refId=5a993616b8e66660e8baf45c&ref=user&source=users-permissions&field=avatar' http://localhost:1337/upload
```
## Install providers
By default Strapi provides a local file upload system. You might want to upload your files on AWS S3 or another provider.
You can check all the available providers developed by the community on npmjs.org - [Providers list](https://www.npmjs.com/search?q=strapi-provider-upload-)
To install a new provider run:
```
$ npm install strapi-provider-upload-aws-s3@alpha --save
```
::: tip
If the provider is not in the mono repo, you probably not need `@alpha` depending if the creator published it with this tag or not.
:::
Then, visit `/admin/plugins/upload/configurations/development` on your web browser and configure the provider.
## Create providers
If you want to create your own, make sure the name starts with `strapi-provider-upload-` (duplicating an existing one will be easier to create), modify the `auth` config object and customize the `upload` and `delete` functions.
| 25.70936 | 265 | 0.646484 | eng_Latn | 0.922539 |
43a5028468e6b5764246f4b6d099040dacf5b88f | 2,117 | md | Markdown | DeviceBuilder/README.md | openconnectivityfoundation/Dockerized_IoTivity | 8dbeece159725d7bad9caf2959d91be1418e0f62 | [
"Apache-2.0"
] | 1 | 2021-11-07T09:15:28.000Z | 2021-11-07T09:15:28.000Z | DeviceBuilder/README.md | openconnectivityfoundation/Dockerized_IoTivity | 8dbeece159725d7bad9caf2959d91be1418e0f62 | [
"Apache-2.0"
] | null | null | null | DeviceBuilder/README.md | openconnectivityfoundation/Dockerized_IoTivity | 8dbeece159725d7bad9caf2959d91be1418e0f62 | [
"Apache-2.0"
] | null | null | null | # Containerized DeviceBuilder Image
## Introduction
This directory contains a definition of a container image of the OCF
[DeviceBuilder](https://github.com/openconnectivityfoundation/DeviceBuilder)
tool. This image can be used to generate code from DeviceBuilder input files,
and is meant to be used as a one-off command to execute the
`DeviceBuilder_IotivityLiteServer.sh` script. Volumes or bind-mounts must be
used to provide the input file and to retrieve the generated source files.
## Structure
### Notable Directories
* `/ocf_tooling`: Location of the `DeviceBuilder` source and its necessary
dependencies
* `/ocf_tooling/DeviceBuilder`: Location of the DeviceBuilder source and the
working directory of the image
* `/devbuilder`: Where source materials should be mounted as a bind mount or
volume
* This directory is symlinked from the `/ocf_tooling/DeviceBuilder` directory
## Basic Usage
All that is required to use this image is a DeviceBuilder input JSON file (refer
to some examples [here](https://openconnectivityfoundation.github.io/DeviceBuilder/DeviceBuilderInputFormat-file-examples/#examples-with-iotivity)).
This file should be provided to the container as a bind mount or volume. For
example, assume the following directory structure:
```
example
`-- speaker_model.json
```
To generate the IoTivity source from `speaker_model.json`, the following command
could be used:
```
$ docker run --rm -v /path/to/example:/devbuilder ocfadmin/devicebuilder speaker_model.json "oic.d.speaker"
```
The container will produce new files through the bind mount that might look like
the following:
```
example
|-- output
| |-- code
| | |-- PICS.json
| | |-- readme.md
| | |-- server_introspection.dat
| | |-- server_introspection.dat.h
| | `-- simpleserver.c
| |-- out_codegeneration_merged.swagger.json
| |-- out_introspection_merged.swagger.json
| |-- out_introspection_merged.swagger.json.cbor
| `-- speaker_model.json
`-- speaker_model.json
```
Note that, when using bind mounts, the resulting output files (the `output`
directory) will be owned by `root`.
| 33.078125 | 148 | 0.759093 | eng_Latn | 0.97314 |
43a6c7e3796d1627db8f0de265d8e78c8c5ea217 | 3,940 | md | Markdown | README.md | jdylanmc/score-2.5-examples | f9c7310558bd92949c81e5d148ba1e65b2445cff | [
"Ruby"
] | null | null | null | README.md | jdylanmc/score-2.5-examples | f9c7310558bd92949c81e5d148ba1e65b2445cff | [
"Ruby"
] | null | null | null | README.md | jdylanmc/score-2.5-examples | f9c7310558bd92949c81e5d148ba1e65b2445cff | [
"Ruby"
] | null | null | null | Local Environment Setup
=======================
**Prerequisites**
* Visual Studio 2013
* TDS (latest version) (http://www.teamdevelopmentforsitecore.com/Download)
* SQL Server 2012+ / SQL Server 2014+
* nodejs and less (https://brainjocks.atlassian.net/wiki/display/PER/Copy+of+As+part+of+MSBuild)
**Before opening the solution in Visual Studio:**
* Install **Sitecore 8.2.161115** into `<solution root>\sandbox`. Use **sc821** as your site name.
* Install MongoDb. I recommend installing Mongo as a service and configure it to start automatically
```
#!none
set mongodbpath=c:\Program Files\MongoDB\Server\3.0
mkdir "%mongodbpath%\data"
mkdir "%mongodbpath%\data\log"
mkdir "%mongodbpath%\data\db"
echo logpath=%mongodbpath%\data\log\mongod.log> "%mongodbpath%\mongod.cfg"
echo dbpath=%mongodbpath%\data\db>> "%mongodbpath%\mongod.cfg"
sc.exe create MongoDB binPath= "\"%mongodbpath%\bin\mongod.exe\" --service --config=\"%mongodbpath%\mongod.cfg\"" DisplayName= "MongoDB" start= "auto"
net start MongoDB
```
* If you are running multiple Sitecore instances please update `App_Config\ConnectionString.config` to make sure xDB databases are prefixed with your instance name:
```xml
<add name="analytics" connectionString="mongodb://localhost/customer_analytics"/>
<add name="tracking.live" connectionString="mongodb://localhost/customer_tracking_live"/>
<add name="tracking.history" connectionString="mongodb://localhost/customer_tracking_history"/>
<add name="tracking.contact" connectionString="mongodb://localhost/customer_tracking_contact"/>
```
* Add your `score.license` to `<sandbox>\website\App_Data` and restart your IIS app to activate the license. If you don't find the folder then you will have to create it.
* I would also recommend that you copy your `Web.config` to `Web.config.bak`. It is located under <sandbox>\Website\Web.config.
**Now Open your solution in Visual Studio.**
* Restore all NuGet packages
* Open Tools -> NuGet Package Manager -> Powershell Console and make sure you see SCORE and SCORE Bootstrap UI deploy. If you don't see anything in the console, re-open your solution and pull up the console again.
* Install Sitecore Powershell Extensions if it is not already installed
https://marketplace.sitecore.net/en/Modules/Sitecore_PowerShell_console.aspx
* Once downloaded then log into sitecore admin
* Desktop > Development Tools > Installation Wizard
* Install it and you should be set
* Rebuild and Deploy ( right click on the Solution in Visual Studio and click Deploy Solution )
```
If you don't have Ruby installed on your computer then you will need
it in order to install SASS below. To install it click on
the link below and download and install it:
http://rubyinstaller.org/
```
```
If you receive .css errors in your build you will need the LESS compiler
installed on your machine. To install it just do the following:
Open Powershell or DOS window
Type gem install sass
Hit Enter and it should install it for you
```
```
If your project setup use LESS compiler. You need NMP installed.
To install it click on the link below and download and install it:
https://nodejs.org/en/
After that do the following to install LESS compiler:
Open Powershell or DOS window
Type npm install -g less
Hit Enter and it should install it for you
```
```
If you receive license.xml errors in your build you will need
to place the sitecore license under each test project.
Ex sc821.Custom.Tests\License.xml
```
* Please also verify you are running the right version of SCORE with a valid license by going to `http://sc821/score/about/version`
* For local developer environments add these lines before ```<system.web>``` node:
```
<!-- Make TDS Sync in Visual Studio ignore custom error pages in order to work properly -->
<location path="_DEV">
<system.webServer>
<httpErrors errorMode="Custom" existingResponse="Auto" />
</system.webServer>
</location>
```
| 36.481481 | 214 | 0.753553 | eng_Latn | 0.933547 |
43a7a9acb2f2db1c81e435e425e701afa2bd144c | 1,320 | md | Markdown | README.md | ptgrogan/js-mas | de726aec2244993be263172acb6fe8e462ce5f6e | [
"Apache-2.0"
] | 1 | 2016-12-20T13:39:36.000Z | 2016-12-20T13:39:36.000Z | README.md | ptgrogan/mas | de726aec2244993be263172acb6fe8e462ce5f6e | [
"Apache-2.0"
] | null | null | null | README.md | ptgrogan/mas | de726aec2244993be263172acb6fe8e462ce5f6e | [
"Apache-2.0"
] | null | null | null | # mas
JavaScript Modeling and Simulation
This library implements basic components required for modeling and simulation in JavaScript. It currently supports a portion of the System Dynamics (SD) formalism.
This package adopts the RequireJS interface for dual use in Node or a browser.
## SD Components and Functions
* Stock (`mas.sd.Stock`)
* Flow (`mas.sd.Flow`)
* Parameter (`mas.sd.Parameter`, a Flow with constant value)
* Delay1 (`mas.sd.Delay1`, first-order exponential delay function)
* Smooth (`mas.sd.Smooth`, first-order exponential smoothing function)
## Simulators and Integration Methods
* Simulator (`mas.sim.Simulator`, a basic simulator with event bindings)
* LoggingSimulator (`mas.sim.LoggingSimulator`, logs entity values at each time step)
Known limitations:
* The current time advancement as a decentralized tick/tock procedure only allows an explicit (forward) Euler integration method. More precise methods such as 4th order Runge-Kutta (RK4) would require either a centralized state update function or more numerous iterative data exchange periods.
* Results cannot be directly compared to outputs of some tools due to a difference in numerical precision. For example, most versions of Vensim only support single-precision floating-point numbers while JavaScript is double-precision. | 62.857143 | 294 | 0.793182 | eng_Latn | 0.987873 |
43a7b08db42392792fb0dfeafe393bc70add8d53 | 544 | md | Markdown | api/md/contents/rxjs/subscribe-is-deprecated-use-an-observer-instead-of-an-error-callback.md | VacantThinker/express-angular-blog-data | f9a81c1c693501ce6caf34943b4a8206062152ae | [
"MIT"
] | null | null | null | api/md/contents/rxjs/subscribe-is-deprecated-use-an-observer-instead-of-an-error-callback.md | VacantThinker/express-angular-blog-data | f9a81c1c693501ce6caf34943b4a8206062152ae | [
"MIT"
] | null | null | null | api/md/contents/rxjs/subscribe-is-deprecated-use-an-observer-instead-of-an-error-callback.md | VacantThinker/express-angular-blog-data | f9a81c1c693501ce6caf34943b4a8206062152ae | [
"MIT"
] | null | null | null |
- https://stackoverflow.com/questions/55472124/subscribe-is-deprecated-use-an-observer-instead-of-an-error-callback
- https://github.com/ReactiveX/rxjs/pull/4202
- https://github.com/ReactiveX/rxjs/issues/4159
---
#### code
```typescript
.subscribe({
next: this.handleUpdateResponse.bind(this),
error: this.handleError.bind(this)
});
.subscribe({
complete: () => { ... }, // completeHandler
error: () => { ... }, // errorHandler
next: () => { ... }, // nextHandler
someOtherProperty: 42
});
```
---
end | 20.148148 | 115 | 0.626838 | eng_Latn | 0.151018 |
43a8398450c674ccc52ff1c0c522240555eb4e58 | 19 | md | Markdown | README.md | liaolu123/coolweather | 7c0eb6ad8b37e16f58ca5d33bf876ed8ff1f1298 | [
"Apache-2.0"
] | 2 | 2017-04-11T15:06:42.000Z | 2017-04-11T15:06:49.000Z | README.md | Flyingboy321/coolweather | 484f4f29257bd47d8622264773ee71ae24c8fbb5 | [
"Apache-2.0"
] | null | null | null | README.md | Flyingboy321/coolweather | 484f4f29257bd47d8622264773ee71ae24c8fbb5 | [
"Apache-2.0"
] | null | null | null | # coolweather
haha
| 6.333333 | 13 | 0.789474 | eng_Latn | 0.99775 |
43a84c6cfb8cfeb73ecdabff4fc526aea62f3aa3 | 442 | md | Markdown | 66287396/answer.md | hoangsetup/stackoverflow | 7c103603922bfe2b1c767cdd3d0e225cf1487926 | [
"MIT"
] | 2 | 2021-11-16T03:31:57.000Z | 2022-03-02T04:04:35.000Z | 66287396/answer.md | hoangsetup/stackoverflow | 7c103603922bfe2b1c767cdd3d0e225cf1487926 | [
"MIT"
] | null | null | null | 66287396/answer.md | hoangsetup/stackoverflow | 7c103603922bfe2b1c767cdd3d0e225cf1487926 | [
"MIT"
] | null | null | null | Use [Object.getOwnPropertyNames()][1] to get all properties (including non-enumerable properties except for those which use Symbol). Then loop through them to find the class name:
```js
Object.getOwnPropertyNames(window).forEach((n) => {
if (n === 'ArrayBuffer') {
console.log('FOUND: ArrayBuffer');
}
})
```
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/getOwnPropertyNames
| 36.833333 | 179 | 0.721719 | eng_Latn | 0.499778 |
43a8def08f3633c35d2562508f60fc43f3b4f61a | 6,506 | md | Markdown | README.md | emliunix/EdenApple | 9512ae64f5aa377a1178706665369b918a30b9da | [
"MIT"
] | null | null | null | README.md | emliunix/EdenApple | 9512ae64f5aa377a1178706665369b918a30b9da | [
"MIT"
] | null | null | null | README.md | emliunix/EdenApple | 9512ae64f5aa377a1178706665369b918a30b9da | [
"MIT"
] | null | null | null | # EdenApple
An interpreter for a subset of Scheme.
EdenApple是一个实验性质的解释器实现,准备用Scheme实现。
## 记录
### 0418
2017/04/18
这几天零零散散的稍微有点活力的时间里面,写出了EdenApple基本的VM部分,VM还没有完整的实现。
其中还需要实现许多的基本函数和基本语法式。其中一个比较重要的功能是let系相关的各种语法,而这之中最有意思的是letrec。在搜集资料的时候见到将letrec翻译到fix原语的,估计是fixpoint,怎么用fixpoint将这个表示出来,到现在还是没有想清楚。
有了let系的语句之后就可以实现top-level的各种define,body 中的define这些东西define应该是可以直接翻译过来。
说到let,又想到在lambda抽象里面,所有的参数以及函数体中的这些bound variables在编译或者说转换之后都会被替换成一个一个location,但是这些location还是需要在运行时每一次调用的时候创建。这个机制如何实现想了挺长时间还是没有一个清楚的概念。
EdenApple还缺少一个Compiler,这个Compiler想想基本的结构大概是Parser,Expander,Transformer,Compiler这样一个流水线结构。其中Expander是Macro的实现,现在水平不够,这个就先不实现了。
在这个结构中有一个想不通的地方是,Reader这么个组件为什么叫做Reader。感受中,这个不是一个单纯的Parser,具体多出了哪些功能,是宏还是什么的,还是非常模糊。
### Monadic Parser Combinator
讲到Parser,这两天除了实现了一个基本的VM,剩下的时间基本上在鼓捣Parser。看了些许Parser Combinator的资料,重点在parsec上。parsec是一个Monadic Parser Combinator,在阅读过程中,略微有两点想法。
一个是对Monad的理解,Monad是一个代数结构,Monad的数据首先是一个函数(具备 * -> * 这样的kind),其次,对他有bind(>>=),>>,return,fail这几个操作。这几个操作的特点大概是都是以Monad数据为输入,返回的还是对应Monad的数据。
所以,如果把parser看作是一个接受String,返回结果和剩余String (:: (a, String))的函数,重点是把他看作函数,那么Parser Combinator就是对这些函数进行不断的组合。
Monad的组合操作都有个特点,就是结构上面体现了一种顺序结构。比如说bind操作,他的类型是 `m a -> (a -> m b) -> m b`。可以这么理解这个类型,首先进行`m a`操作,该操作的结果是`a`类型。再看第二个传入的值,他是一个函数,这个函数接受第一个操作的结果,并根据这个结果生成出第二个操作(类型为`m b`),第二个操作返回`b`类型的结果。那么这个先进行第一个操作,再进行第二个操作,并最终返回第二个操作的结果的着整个一个过程,也就是两个过程按顺序执行的组合过程,就是bind操作所生成的值。
第二个是关于Parser本身的,Parser有两个基本的组合,一个是上面的bind(`>>=`)组合,还一个choice(`<|>`)组合。在观察组合之前先观察parser本身,parser本身的类型 `String -> (a, String)`,反映了parser本身的功能,比较有意思的是返回的数据中还包括了未处理的文本数据。这样子可以在下一个parser传入这个文本,完成下一步的parse。正是这种结构,使得parser的组合变得简单直接。事实上,parser的bind操作就是这么一种操作:将第一个parse返回的未处理文本传入第二个parse。
由此来看,bind操作是顺序组合,与此相对应的,`<|>`操作是一种平行组合。`<|>`操作将两个操作组合,首先将文本传入第一个操作,如果第一个操作失败,转而将相同的文本传入第二个操作。
BNF文法中通常将一个元素表示为几个元素的顺序组合或者是用或`|`来表示两种不同的组合都是这个元素。比如:
```
<sexp> ::= <pair> | <atom>
<pair> ::= (<sexp> . <sexp>)
```
而这正好就是parsec中的两种基本组合。
monadic只是一种style,在parsec的论文中也讨论了另一种sytle,但是没仔细看,也不清楚是怎么实现组合的。关于parse,其实还有老多的东西不清楚,比如怎么稳妥的实现lookahead,怎么左递归,有哪些best practice什么的。
还有个就是在parse sexp的时候,感觉blank字符的处理还是比较麻烦,这种字符起到分割作用但又不是必须的,两个token之间不一定需要空白字符来分割。有一种想法是先lex一遍将字符串变成token串,然后在以parser转换成树状结构,再然后来一遍pass将这个树转换成Expr树,Expr树中会带上具体的语义,比如这是一棵Let树,那是一棵If树,还有Lambda树之类的。到这种状态,基本上对于compile而言就是万事俱备,只差临门一脚了。
### CPS变换的一次尝试
未知日期
按照ORBIT那篇论文中给出的一小段代码尝试了一下CPS变换的编写,其中比较复杂的一部分地方是对call结构的CPS变换。
call结构形如`(p a1 a2 a3)`,其中p,an都有可能是另一个form,需要依次递归调用convert,并且在convert的continuation参数中代入一个临时构造的lambda结构。
```Scheme
```
### r6rs
2017/03/06
ChezScheme声称是支持到r6rs级别的,所以浏览了一下r6rs的规范,在Racket的文档里面自带了r6rs的标准文档。文档浏览下来也没有太多需要说明的,主要是记录下常用的函数,熟悉下Scheme的语法。r6rs中提供了模块化(library)的相关内容,但是具体的模块路径查找,library spec等等还是要参考具体实现。
还一个就是没事别瞎编译东西,跨平台就是个坑。Win下面的emacs访问不了网络,MSYS2下的emacs倒是可以用,但是在MSYS2下面编译ChezScheme时路径总是设置不对,文档中说ChezScheme在win下编译需要cygwin,而在具体编译过程中调用的是一个bat文件再调用cl编译器,这下就有点懵逼了。
在WSL(Windows Subsystem for Linux 或者 bash for windows)下面ChezScheme编译倒是没有问题,emacs勉强能够使用。有一个SIGTTIN的问题让emacs在执行完package-refresh-contents之后自动切入后台,并且切回前台之后界面是乱的。好在后续的使用上没有太大问题。
### T3 - Orbit Compiler
2017/03/06
ORBIT是一个使用了CPS来进行编译的Scheme编译器。而且ORBIT作者提供了许多论文来阐述围绕此的工作。直接Google ORBIT找不到具体的实现,最后是找到了[T语言的主页](http://mumble.net/~jar/tproject/),才发现ORBIT就是T的编译器,同时ORBIT也是T写的,T是一个Scheme dialect。
### Lisp 1.5
2017/03/05
简单扫了一下Lisp 1.5的手册,前面讲到Lisp是一个**Formal mathematical language**。看完了核心的语言的定义部分,也没有看到关于运行时相关的东西,估计真的是一个_mathematical language_吧。不过这本册子又叫做**LISP 1.5 Programmers Manual**这个就纠结了。
在前面定义中,不仅定义了S-expression,还定义了M-expression。形式化定义如下(**Backus notation**):
```
<LETTER>::=A|B|C|...|Z
<number>::=0|1|2|...|9
<atomic symbol>::=<LETTER><atom part>
<atom part>::=<empty>|<LETTER><atom part>|<number><atom part>
<S-expression>::=<atomic symbol>|
(<S-expression>.<S-expression>)|
(<S-expression>...<S-expression>)
<letter>::=a|b|c|...|z
<identifier>::=<letter><id part>
<id part>::=<empty>|<letter><id part>|<number><id part>
<form>::=<constant>|
<variable>|
<function>[<argument>;...;<argument>]|
[<form>-><form>;...;<form>-><form>]
<constant>::=<S-expression>
<variable>::=<identifier>
<argument>::=<form>
<function>::=<identifier>|
λ[<var list>;<form>]|
label[<identifier>;<function>]
<var list>::=[<variable>;...;<variable>]
```
按照册子上说的,M-expression是用来操作S-expression的,S-expression本身是一种符号数据的表达方式,而M-expression作为一种meta language,提供了函数定义,变量,条件表达式,函数调用的能力。然后M-expression本身作为一种表达式,也算是数据的一种形式,那么也就可以通过灵活的S-expression来表示出来。于是,册子中的1.6节定义了将M-expression转换为S-expression表达的转换过程。
随后通过S-expression表达的M-expression提供了常用函数的定义。
需要注意的是,M-expression提供了5个基本函数,`cons, car, cdr, eq, atom`。这5个函数提供了操作S-expression的基本操作。
### predates
predates 2017/03/05
这个解释器是我的毕业设计。至于为什么要实现一个解释器呢,没什么理由,就是想动手写一个解释器出来。
Lisp语言本身就以语法规则简单出名,这样也能够降低难度。毕竟是第一次尝试写一个解释器,要是写了半天写不出来那就GG了。
一个简单的解释器实现并没有多少行代码,在ChezScheme的示例代码中就有一个文件实现了一个解释器。在知乎上也看到过两个解释器的实现。但是自己手写还是有点难度的,简单阅读了一些资料之后,选取了几个基本上是核心的Feature专门研究,包括:
- Lexical Scope,
- Closure,
- Continuation。
Lexical Scope和Closure,感觉是同一个Feature,Closure作为实现手段提供了Lexical Scope这么一个能力。这一部分还是比较容易理解的。
首先,每一处具有变量绑定的地方都创建一个Env作为符号到值的查找表。而具有变量绑定的地方有最开始的全局环境(Global),每一次的函数调用,以及let语句。每一个Env在创建的时候都与其(词法层面上的)父一级连接。这样,在查找变量的时候从最近一级的Env沿着这个Env的链条向上查找即可。而Closure的含义就是在创建函数(λ Abstraction)的时候,将这个函数对象额外附上指向词法上父一级的Env,或者说创建函数时的当前Env。当这个函数被调用时,将调用时的Env(持有实参的Env)指向这个创建时附加上的Env。
Closure仅在函数定义中存在_自由变量_(free variable)的时候才有用处。因为仅在函数中存在自由变量的时候才有可能查找父一级的Env,像`+, -, *, /`这种就不可能用到自由变量,也就不需要闭包,**不知道这个能不能作为一个优化的场景**。
Continuation个人感觉是对程序执行过程的一种看法,观察的角度,或者说模型。研究这个主要是想在解释器中运用_CPS_(Continuation Passing Style),需要注意的是,Continuation和CPS是两个东西。Continuation是一种模型,而CPS是将程序变换为一种统一的形式(CP),这种形式可以当作是IR(Intermediate Representation)一样的存在。在知乎上也看到过CPS类似于Monad,SSA的说法,同时也提到CPS可以再转换为ANF,虽然我还不知道ANF是什么样子的。SPJ也提出过joint-point这么一个东西,似乎也是一个IR
首先说Continuation,Continuation将程序的执行过程分成了两个部分,当前正在计算的(redex)和接下来要执行的(continuation)。一个continuation中存在这需要填入值的部分,叫做_hole_,翻译过来叫做洞,听起来比较怪,还是用英文算了。当redex计算完成后便将计算结果填入这个hole,那么continuation便可以进行接下来的计算,而接下来的计算过程也是计算一个redex,填入下一个continuation,这么一个过程。
```lisp
(- 4 (+ 1 2))
; =>
'((1 2) (+ [] []) (- 4 []))
```
CPS的含义是将Continuation作为一个函数显示的传入当前的计算中。转换起来太难看了,而且我也不知道怎么公理化这个过程和证明CPS能够保证所有情况下都能进行。
不过关于CPS,这里还是可以记录下来一些东西。
#### `if`和CPS
Lisp中的`if`是一个语法式而不是一个函数,`if`接收三个子式,当第一个为真时,eval第二个子式,否则eval第三个子式。如果`if`是一个函数,那么必然会首先将三个子式都先求值,这个语义就和`if`的本意冲突了,而且这种冲突会造成相当严重的后果,比如本来只会选择其中一条路径来进行求值,并在过程中产生副作用,现在两条路径的副作用都产生了,典型的例子是根据输入选择输出`"Hello"`还是`"You're so beautiful."`。或者在递归求值的情况下通常通过`if`来判断是否继续递归,那么函数`if`就没办法适用。
这种情况下可以把`if`看成是一个接收三个continuation的函数,首先求值第一个continuation,并根据结果选择下一个求值的continuation。
| 41.43949 | 302 | 0.818322 | yue_Hant | 0.565049 |
43a9ce18c221225fbfd6050cd55300300f092e90 | 327 | md | Markdown | UPGRADING.md | iproudhon/ostracon | c5c42fbe5d981c00477f47585322b2b83e079f02 | [
"Apache-2.0"
] | 37 | 2021-07-07T05:36:32.000Z | 2022-02-25T03:42:17.000Z | UPGRADING.md | iproudhon/ostracon | c5c42fbe5d981c00477f47585322b2b83e079f02 | [
"Apache-2.0"
] | 107 | 2021-07-07T06:22:45.000Z | 2022-03-29T01:19:55.000Z | UPGRADING.md | iproudhon/ostracon | c5c42fbe5d981c00477f47585322b2b83e079f02 | [
"Apache-2.0"
] | 13 | 2021-07-07T07:41:05.000Z | 2022-02-25T07:14:34.000Z | # Upgrading Ostracon
This guide provides instructions for upgrading to specific versions of Ostracon.
## v1.0.0
**Ostracon [v1.0.0](https://github.com/line/ostracon/blob/v1.0.0/CHANGELOG.md#v100)**
## v0.0.0
**Ostracon forked from Tendermint Core [v0.34.8](https://github.com/tendermint/tendermint/releases/tag/v0.34.8)**
| 27.25 | 113 | 0.740061 | yue_Hant | 0.329564 |
43a9ce5507f30514e43b527cb386b764435888f1 | 4,012 | md | Markdown | entry/2013/06/06/entry/index.md | Tosainu/blog | b77e12cdce46dd508ee43ed1f6f18328344c040d | [
"MIT"
] | 1 | 2015-12-19T01:26:14.000Z | 2015-12-19T01:26:14.000Z | entry/2013/06/06/entry/index.md | Tosainu/blog | b77e12cdce46dd508ee43ed1f6f18328344c040d | [
"MIT"
] | null | null | null | entry/2013/06/06/entry/index.md | Tosainu/blog | b77e12cdce46dd508ee43ed1f6f18328344c040d | [
"MIT"
] | 2 | 2018-08-16T09:55:26.000Z | 2020-08-02T15:27:52.000Z | ---
title: Ubuntu Touch on XPERIA Ray
date: 2013-06-06 20:24:03+0900
noindex: true
tags: Android,Linux
---
<p>どーもです〜</p>
<p> </p>
<p>さっき寝ると書いたな、あれは嘘だ。</p>
<p>あれです、嫌なことが終わった瞬間テンションがおかしくなって眠気が吹っ飛ぶってやつです。</p>
<p> </p>
<p> </p>
<p>今回は友人のXPERIA RayにUbuntu Touchを焼いてみます。</p>
<p>今回はRayたんを使いますが、2011XPERIAならどれでも試せます。通信とか困らない人はやってみよう!</p>
<p> </p>
<h3>準備</h3>
<p>用意するもの</p>
<ul>
<li><u><span style="color:red;">Unlocked</span></u> XPERIA2011(<span style="color:red;">アンロック必須</span>)</li>
<li>4GB以上のmicroSDカード(フォーマットしてもおkなもの、推奨8GB以上)</li>
<li>Linux系のOSがインストールされたPC(MACも大丈夫かな?)</li>
<li>AndroidSDK又はFlashtool(fastbootコマンドでカーネルを焼きます)</li>
<li>簡単な英語を読む程度の能力("copy the downloaded files onto sd-card"程度のものが理解できればおk)</li>
</ul>
<p> </p>
<p>端末は、一度ICS系のファームを焼いてください。BBバージョン等の関係で起動しない場合があるかもしれません。</p>
<p> </p>
<p>SDカードには第2パーティションを作成します。これはLinuxを使わないとできません。</p>
<p>第1パーティションの後ろに2GB以上のパーティションを作成し、第1パーティションはFAT32、第2パーティションはext4でフォーマットします。</p>
<p> </p>
<p>ちなみに、今回パーティション変更にはGPartedを使いました。ここまで高性能なパーティション変更ソフトはWindowsのシェアウェアにもないはずです。</p>
<p> </p>
<h3>必要なファイルのダウンロード</h3>
<p>XDAフォーラムの記事<a href="http://forum.xda-developers.com/showthread.php?t=2226406">Ubuntu-touch for all Xperia2011 devices</a>にアクセスし、</p>
<p>The generic ubuntu part(一番上のリンク)と、自分の端末に合わせたファイル、To fix the resolution for hdpi(一般的?な大きさで表示される)又はTo fix the resolution for mdpi(文字などが小さくなる)の3つのファイルを適当な場所に保存します。</p>
<p>そうしたら、その3つのファイルをSD(FATファイルシステムの方)にコピーします。</p>
<p> </p>
<h3>カーネルを焼く</h3>
<p>先程ダウンロードしたファイルの中に、cm-10.1-xxxxxx-UNOFFICIAL-xxxxxx-ubuntu.zipのような名前のファイルがあると思います。</p>
<p>そのを解凍してboot.imgを取り出します。</p>
<p>それをfastbootコマンドで端末に書き込みます。</p>
<p>コマンドは</p>
<pre class="prettyprint linenums">
$ fastboot flash boot boot.img
sending 'boot' (6946 KB)...
(bootloader) USB download speed was 9201kB/s
OKAY [ 0.781s]
writing 'boot'...
(bootloader) Download buffer format: boot IMG
(bootloader) Flash of partition 'boot' requested
(bootloader) S1 partID 0x00000003, block 0x00000148-0x00000179
(bootloader) Erase operation complete, 0 bad blocks encountered
(bootloader) Flashing...
(bootloader) Flash operation complete
OKAY [ 1.339s]
finished. total time: 2.121s
$ fastboot reboot
rebooting...
finished. total time: 0.001s
</pre>
<p>です。(カーネル焼きに関しては<a href="http://tosainu.wktk.so/page/customkernel">ここ</a>で詳しく解説する記事を書いている途中です・・・)</p>
<p> </p>
<p>端末が再起動しますので、Vol-ボタンを何度か押してCWMリカバリに入ります。</p>
<p> </p>
<h3>Ubuntu Touchを書き込む</h3>
<p><img src="https://lh6.googleusercontent.com/-yng6DtLu0uw/UbBv8WT4GqI/AAAAAAAACMo/mdfsv1zwm2E/s640/DSC_0002.JPG" /></p>
<p>CWMリカバリに入ったら、</p>
<ol>
<li>Factory Reset</li>
<li>Format /System</li>
</ol>
<p>をしてください。そうしたら、</p>
<ol>
<li>cm-10.1-xxxxxxxx-UNOFFICIAL-xxxxxx-ubuntu.zip</li>
<li>quantal-preinstalled-phablet-armhf.zip</li>
<li>ubuntutouch_screen_fix_HDPI_jasousa.zip又はubuntu-touch-scaling-fix_by_Kakalko4.zip</li>
</ol>
<p>の順にzipを焼いてください。</p>
<p>書き込みが終わったら再起動です。</p>
<p> </p>
<h3>起動!!</h3>
<p>Rayたんの場合、カーネルのロゴはまるで画面が割れたかのようにバグるし、ブートアニメーション等もなく黒画面が続くなどで不安でしたが、なんとか起動しますた。</p>
<p><img src="https://lh5.googleusercontent.com/-x0pcfU8RUzc/UbBv8V53XXI/AAAAAAAACMk/qOgprFG6iA0/s640/DSC_0003.JPG" /></p>
<p> </p>
<h3>感想</h3>
<p>XDAフォーラム内での言葉を使うとすれば、</p>
<p><span style="font-size:36px;">it is so laggy</span></p>
<p>動作がクッソ重いです。</p>
<p>まぁ、端末の性能も今となってはアレですし、ROMの容量の関係かUbuntuシステムはSDの第2パーティションにインストールされてしまいますので、遅くて仕方ないですが。</p>
<p> </p>
<p>使い勝手ですが・・・</p>
<p><span style="font-size:36px;">「何もわかりません」</span></p>
<p>とりあえずWiFi接続、ブラウザ起動はできました。</p>
<p>カメラも起動してみましたが白画面、その他アプリもクッソ重く使い物には程遠そう・・・</p>
<p> </p>
<p>まぁ、</p>
<p><span style="font-size:36px;">起動だけさせてドヤァしてAndroidに戻す</span></p>
<p>パターンとなるのが大半かと。</p>
<p> </p>
<p>今後の進化に期待です。</p>
<p> </p>
<p> </p>
<p> </p>
<p> </p>
<p>そういえば作業しながら気づいたのですが、</p>
<p><span style="font-size:36px;">カメラどこだ???</span></p>
<p>いつも通学用のカバンの右ポケットに入っているのですが。</p>
<p>あれ、そういえば学校の時からなかったような・・・</p>
<p>・・・・・</p>
<p>・・・</p>
<p>・・</p>
<p>えっ、もしかして</p>
<p><span style="font-size:36px;">落とした!!!!????</span></p>
<p>もうやだ首吊りたい・・・</p>
| 34.586207 | 167 | 0.728066 | yue_Hant | 0.505661 |
43aa5dec1cae4fdc6a671ed7fc5850e745842d45 | 2,476 | md | Markdown | src/projects/PIPO/pipo.md | WaqasAliAbbasi/WaqasAliAbbasi.github.io | 4d8e80644603ccfac0dabd178be09cb04a1fe023 | [
"MIT"
] | null | null | null | src/projects/PIPO/pipo.md | WaqasAliAbbasi/WaqasAliAbbasi.github.io | 4d8e80644603ccfac0dabd178be09cb04a1fe023 | [
"MIT"
] | 61 | 2018-03-18T10:36:48.000Z | 2022-03-28T17:17:32.000Z | src/projects/PIPO/pipo.md | WaqasAliAbbasi/WaqasAliAbbasi.github.io | 4d8e80644603ccfac0dabd178be09cb04a1fe023 | [
"MIT"
] | 8 | 2017-08-24T03:50:44.000Z | 2021-01-26T08:21:01.000Z | ---
path: "/work/pipo"
title: "Personal IPO"
date: "2018-11-04T12:00:00.000+08:00"
description: "Intelligent Personal Valuation Platform. PIPO takes the company capital-raising concept and applies it to individuals."
preview_image: "pipo_4_2.png"
---
You may be familiar with the concept of a company raising capital by an initial public offering (IPO). PIPO takes this capital-raising concept and applies it to individuals.
Imagine having a personal IPO market for people who want to raise money by offering their personal shares and people who want to invest in them, especially friends and family. Through “dividend” payments, the investors of PIPO earn a share from the income of people they invest in.
PIPO is a functional primary market exchange platform prototype that offers valuation and deal making for personal IPO investors.
## Links
- [GitHub](https://github.com/WaqasAliAbbasi/Chengdu80-HKU)
## Competition
Hosted by [Southwestern University of Finance and Economics](https://e.swufe.edu.cn/) in Chengdu (Sichuan, China) over a span of 6 days, **Chengdu80 2018** was an international inter-university fintech design and development competition. 8 universities including UC Berkley, National University of Singapore and University of Hong Kong participated in it.
## Awards
Came **Runner-up** and won a prize money of **30,000 RMB**.
## Team
HKU PIPO was developed at **Chengdu80 2018** over a course of 80 hours from **31 October to 3 November 2018** by the following team from University of Hong Kong:
1. Chan Chun Fai
2. [Piyush Jha](https://www.linkedin.com/in/piyush-jha/)
3. [Waqas Ali](https://waqasaliabbasi.com/)
4. [Tarun Sudhams](https://www.linkedin.com/in/tarun-sudhams-560a6815a/)
5. Anushka Vashishtha
6. Saksham Arora
## Pitch
<embed src="https://drive.google.com/viewerng/
viewer?embedded=true&url=https://github.com/WaqasAliAbbasi/Chengdu80-HKU/raw/master/Chengdu%2080%20Final%20Pitch.pdf" width="250"/>
## Tech
### Front-end
- React
- Redux
- Material-UI
- Redux Thunk
- Victory (Data Visualization)
### Back-end
- Python
- Django
### Machine Learning
- Random Forest Classifier
## Screenshots









| 30.567901 | 355 | 0.745557 | eng_Latn | 0.863209 |
43aa67e23aa0f194c03bc670775db1fa550800ca | 6,284 | md | Markdown | Fall 2020/Pharmacology/Adrenergic_Cholinergic_IV.md | BGASM/medNotes | 9905dbec5372998778b2148935f10fff3a18965b | [
"MIT"
] | null | null | null | Fall 2020/Pharmacology/Adrenergic_Cholinergic_IV.md | BGASM/medNotes | 9905dbec5372998778b2148935f10fff3a18965b | [
"MIT"
] | 1 | 2021-01-15T01:28:54.000Z | 2021-01-15T01:28:54.000Z | Fall 2020/Pharmacology/Adrenergic_Cholinergic_IV.md | BGASM/medNotes | 9905dbec5372998778b2148935f10fff3a18965b | [
"MIT"
] | null | null | null | - [**Effects of stimulating Alpha receptors:**](#effects-of-stimulating-alpha-receptors)
- [**Effect of inhibiting Alpha receptors:**](#effect-of-inhibiting-alpha-receptors)
- [2. Phentolamine: Reversible, short acting.](#2-phentolamine-reversible-short-acting)
- [Beta Blockers](#beta-blockers)
- [Selective B1: *cardio-selevtive*](#selective-b1-cardio-selevtive)
- [Non-Selective: B1 and B2](#non-selective-b1-and-b2)
- [**Direct effect of Beta Blockers on the Heart: Decreased Heart Rate**](#direct-effect-of-beta-blockers-on-the-heart-decreased-heart-rate)
- [**Adverse:**](#adverse)
- [**Selection:**](#selection)
- [Indirect:](#indirect)
- [Reserpine: cause depletion of catecholamine stores](#reserpine-cause-depletion-of-catecholamine-stores)
- [A-methyldopa: false neurotransmitter](#a-methyldopa-false-neurotransmitter)
- [A-methyltyrosine: catecholamine inhibitor](#a-methyltyrosine-catecholamine-inhibitor)
## **Effects of stimulating Alpha receptors:**
- A1:
- Contract vascular smooth muscles
- Increase foce of heart contraction
- Dilate Pupils
- Contract prostate
- A2:
- Decrease NE release
- Decrease NE release from CNS
- Decrease Insulin Release
## **Effect of inhibiting Alpha receptors:**
- A1:
- Vascular relexation
- Decrease force of heart contraction
- Pupil constriction
- Prostate relax
- A2:
- Increase NE
So, when we apply a non-selective Alpha antagonist we get a decrease in TPR (*due to the vascular relaxation and the decrease in heart contraction force*. ) This decreases blood pressure. Decrease in BP triggers a reflec tachycardia, which is an increase in cardiac output. NE gets released by the A2 inhibition. This NE that is release can then stimulate B1 receptors, which increase heart rate!
TI:
- Emergency hypertensive crisis
- Preoperative of **pheochromocytoma** == **phentolamine**
**NOTE: Main use for non-selective alpha is to rapidly decrease blood pressure**
1. Phenoxybenzamine (PBZ): Irreversible and Long Acting. Non-competitive.
2. Phentolamine: Reversible, short acting.
--------------------------------------------------
Selective A1 antagonists: **LOWER BLOOD PRESSURE**
1. Prazosin: **HYPERTENSION**
2. Doxazosin & Terazosin: **HYPERTENSION and BPH**
3. Tamsulosin: **BPH - Treat lower urinary tract symptoms caused by BPH**
- Adverse: **POSTURAL HYPOTENSION AND SYNCOPE 1st dose falls**
- **Do not cause reflex tachycardia.**
## Beta Blockers
**All Beta antagonists are competitive**
- Well absorved after oral
- Most do not cross BBB
- No absolute selectivity
### Selective B1: *cardio-selevtive*
New-AMEBBA:
- **N**ebivolol - NO mediated vasodilation
- **A**cebutolol - ISA (partial agonist)
- **M**etoprolol ←
- **E**smolol - Very short duration, used for cardio emergency
- **B**etaxolol
- **B**isoprolol
- **A**tenolol ←
### Non-Selective: B1 and B2
- Propranolol ← - Anxiety and Migraines
- Timolol ← Open angle glaucoma (IOP)
- Pindolol - ISA (partial agonist) some B2 activation
- Carvedilol ← A1 receptor antagonist as well. Antioxidant. Blocks Ca2+ Used for heart failure
- Labetalol - ISA (partial agonist) some B2 activation
- Sotalol: Only beta blocker used for only one disease, which is a certain type of arrythmia. **Not even used for hypertenstion.**
### **Direct effect of Beta Blockers on the Heart: Decreased Heart Rate**
- Production of Nitric Oxide (*potent vasodilator*): Nebivolol
- ISA: Intrinsic Sympathomimetic Activity: Partial B2 activation
- Pindolol, Lavetalol (non-sel with ISA)
- Decrease release of NE from nerve terminals (indirect effect)
```
In some receptors, the B2 receptors that get stimulated by the beta-blocker's ISA happen to be really close to some A2 receptors. So when the B2 receptors get stimulated, they cause a little NE to get released and then that NE stimulates the nearby A2 receptors, which then has the overall, indirect, effect of decreasing further NE release.
```
- Non-Selective B-blocker (without ISA *which means they do not have a partial agonist*)
- Direct antagonism of B receptor, there is a decreased release of NE.
- Carvedilol blocks Ca entry into the cell, which prevents contraction.
- Antioxidant properties, which produces Nitric Oxide - which is a vasodilator.
- Decrease Renin.
### **Adverse:**
- Bradycardia
- Depressed myocardial contractility and excitability
- Hypoglycemia in diabetic patients
- **Avoid using non-selective B-blocker because they inhibit insulin, glycogenolysis, and gluconeogenesis**
- **Selective B1 blockers can be fine**
- Keep in mind, the selective B1 do not cause tachycardia, instead it promotes bradycardia.
- This is a a note because diabetic patients often notice tachycardia as a sign to get medical assistance.
- Sudden withdrawl of
- Propanolol, Carvedilol, Metoprolol can all cross BBB and can have depressive effect on CNS.
- Alter plasma lipids (↑TGL ↓HDL)
- Hepatotoxicity caused by Labetalol
- **If patient has asthma** avoid non-selective (ISA) Beta Blockers.**Carvedilol mainly** *because it is non-selective for B1 or B2 AND because it has ISA activity.*
- **B2 antagonists cause bronchoconstriction.**
### **Selection:**
- **PT with bronchospasm or diabetes**
- Selective B1 antag: Metoprolol, Atenolol
- **PT with bradycardia**
- Use B-blocker with ISA: Pinolol, Acebutolol
- **PT with hypertension or chronic heart failure**
- Use B-blocker with A1 antag properties: Pindolol, Carvedilol, Acebutolol
## Indirect:
### Reserpine: cause depletion of catecholamine stores
- Inhibits VMAT. Prevents NE vesicular storage.
- As NE gets used up it does not get replaced.
- No catecholamines build up
- Rarely used to treat refractory hypertension
- Causes headaches
### A-methyldopa: false neurotransmitter
- Gets metabolized as a false-precusor of NE.
- MethyINE
- Takes up space and not recognized as an NT.
- Used for hypertension mainly in pregnancy.
- A2 agonist and false neurotransmitter.
### A-methyltyrosine: catecholamine inhibitor
- Inhibits TH tyrosine hydroxylase
- Used in treatment of inoperable pheochromocytoma
| 49.873016 | 396 | 0.724379 | eng_Latn | 0.968853 |
43aa80c47205c5b6a6947137027ade4e6abd0a4c | 2,918 | md | Markdown | _posts/2019-08-14-Download-emac-g4-user-manual.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-08-14-Download-emac-g4-user-manual.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-08-14-Download-emac-g4-user-manual.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Emac g4 user manual book
It hurt emac g4 user manual. would seem, she stepped into the hall. " Leilani, which is slightly different from the way you would say it in Spanish, which lay on "Perhaps we could propose a goodwill exchange visit," Sterm suggested, or from which it has been driven away. by the side of the road Nakasendo, in an stated (p. commoners. Released, pale scars and others any one of us would have thought you crazy. boy. When the Dixie Chicks followed Brooks, ii, it was this very grasp that he was beginning to acquire of the Chironians' dedication to life that troubled Pernak, it was an entertainment that he could no longer afford. _Rhus succedaneus_! three centuries, with the wizards warring. 456, whose pursuit then gave full permit these things to grow by ingesting sand and rock and turning it into plastic-like materials, Crow was sitting on the coping, because just beyond them the floor of the cave dropped away and there emac g4 user manual rolling darkness beyond them, and probably also carbonic acid. It'll be okay - ? Otter crouched as always in the uneasy oppression of the spellbond. "Those were Rowena's affectionate names for the boys when they were babies. Twenty seconds, then into the foyer, and the gleeful capering of the two brightly costumed situ_. The game had turned to a kind of contest he had not expected but could not put an end to. Ordinarily, here. Til tell you what, and were not lost, and she wouldn't let them go until the anger was gone, listening intently. where ten days ago, still chatting with the Hole. You understood chat of ours is making me dizzy! there. He slept wherever he chose to, saying, that Dr, in emac g4 user manual northernmost part of Norway, huh?" asked Emac g4 user manual as he piloted them through banks emac g4 user manual earthbound clouds, but she'd been more disturbed by the discovery that in the mansion by STEVEN UTLEY us scheduled to go on picket duty first began walking up and down in front of the gate, and they wheeled about and feinted awhile. " daffy pie-baking neighbors, untied too. emac g4 user manual door, Junior tried to recall the chain of logic that had led to "Ah," said Jack. " Lesseps, but the storm moved south soon after dawn. Seventeen people emac g4 user manual, there's the goiter. " Sometimes he clucked his tongue in his cheek or sighed or groaned in commiseration! It's nothing, the, "I figure your folks aren't amongst this group. " bodily wastes to the selfmutilation of his genitalia. " Although their apartments were above the garage, but it also cloaked the Mercedes and all but ensured that she and her friend wouldn't realize that the pair of headlights behind them were always those of the same vehicle, i, and unexpected; only far! pork-bellied villains. Have you known her long?" They worked and taught in the Great House. | 324.222222 | 2,825 | 0.785812 | eng_Latn | 0.999973 |
43aad54e78a2238e164acc72ff2b023b5faf6a60 | 1,905 | md | Markdown | docs/odbc/reference/syntax/sqlsetstmtoption-function.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/odbc/reference/syntax/sqlsetstmtoption-function.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/odbc/reference/syntax/sqlsetstmtoption-function.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: SQLSetStmtOption-Funktion
title: SQLSetStmtOption-Funktion | Microsoft-Dokumentation
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: reference
apiname:
- SQLSetStmtOption
apilocation:
- sqlsrv32.dll
apitype: dllExport
f1_keywords:
- SQLSetStmtOption
helpviewer_keywords:
- SQLSetStmtOption function [ODBC]
ms.assetid: 9cbe2b62-4cf7-43ab-8fb4-9a53df2c6b3f
author: David-Engel
ms.author: v-daenge
ms.openlocfilehash: bef7c6e2036e7ec0dc9512a152cfc1c5108786e5
ms.sourcegitcommit: 33f0f190f962059826e002be165a2bef4f9e350c
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 01/30/2021
ms.locfileid: "99191780"
---
# <a name="sqlsetstmtoption-function"></a>SQLSetStmtOption-Funktion
**Konformitäts**
Eingeführte Version: ODBC 1,0 Standards Compliance: deprecated
**Zusammenfassung**
In ODBC 3 *. x* wurde die ODBC 2,0-Funktion **SQLSetStmtOption** durch **SQLSetStmtAttr** ersetzt. Weitere Informationen finden Sie unter [SQLSetStmtAttr](../../../odbc/reference/syntax/sqlsetstmtattr-function.md).
> [!NOTE]
> Weitere Informationen dazu, was der Treiber-Manager diese Funktion zuordnet, wenn eine ODBC 2 *. x* -Anwendung mit einem ODBC 3 *. x* -Treiber arbeitet, finden Sie unter [Mapping Deprecated Functions](../../../odbc/reference/appendixes/mapping-deprecated-functions.md) in Anhang G: Driver Guidelines for abwärts Compatibility.
## <a name="remarks"></a>Bemerkungen
Weitere Informationen finden Sie unter [ODBC 64-Bit-Informationen](../../../odbc/reference/odbc-64-bit-information.md), wenn Ihre Anwendung unter einem 64-Bit-Betriebssystem ausgeführt wird.
## <a name="see-also"></a>Weitere Informationen
[ODBC-API-Referenz](../../../odbc/reference/syntax/odbc-api-reference.md)
[ODBC-Headerdateien](../../../odbc/reference/install/odbc-header-files.md)
| 41.413043 | 331 | 0.767979 | deu_Latn | 0.472173 |
43ab15d0737964949955adf7afe82b0717ca81d3 | 1,860 | md | Markdown | docs/visual-basic/misc/bc33027.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc33027.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc33027.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Os operadores de conversão não podem converter de um tipo no seu tipo derivado
ms.date: 07/20/2015
f1_keywords:
- vbc33027
- bc33027
helpviewer_keywords:
- BC33027
ms.assetid: 861954f2-f563-4234-af84-bdd02f39979b
ms.openlocfilehash: 830f6366c7676fbce456b20f13bf53c8544b9fdc
ms.sourcegitcommit: f8c270376ed905f6a8896ce0fe25b4f4b38ff498
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/04/2020
ms.locfileid: "84399131"
---
# <a name="conversion-operators-cannot-convert-from-a-type-to-its-derived-type"></a>Os operadores de conversão não podem converter de um tipo no seu tipo derivado
Um operador de conversão é declarado com um tipo de retorno derivado do tipo de parâmetro.
No momento da compilação, Visual Basic considera que uma conversão predefinida existe de qualquer tipo de referência para qualquer tipo em sua hierarquia de herança, ou seja, qualquer tipo do qual ela deriva ou que deriva dele. Essa conversão pode falhar em tempo de execução, mas o compilador não pode prever resultados de tempo de execução, portanto, ele permite que tal conversão seja compilada.
Como o compilador considera que essa conversão já está definida, ela não permite que você a redefina.
**ID do erro:** BC33027
## <a name="to-correct-this-error"></a>Para corrigir este erro
- Remova totalmente essa definição de operador. Ele já é predefinido.
## <a name="see-also"></a>Confira também
- [Procedimentos do operador](../programming-guide/language-features/procedures/operator-procedures.md)
- [Instrução Operator](../language-reference/statements/operator-statement.md)
- [Como definir um operador](../programming-guide/language-features/procedures/how-to-define-an-operator.md)
- [Como definir um operador de conversão](../programming-guide/language-features/procedures/how-to-define-a-conversion-operator.md)
| 51.666667 | 401 | 0.786022 | por_Latn | 0.993198 |
43ab8bcd5b0f4b0902ffdac855e65163615e9685 | 12,927 | md | Markdown | plugins/outputs/elasticsearch/README.md | luftwurzel/telegraf | f450e3796685bfe219e07604e303aeff37674fd4 | [
"MIT"
] | null | null | null | plugins/outputs/elasticsearch/README.md | luftwurzel/telegraf | f450e3796685bfe219e07604e303aeff37674fd4 | [
"MIT"
] | 1 | 2021-11-24T18:17:18.000Z | 2021-11-24T18:17:18.000Z | plugins/outputs/elasticsearch/README.md | luftwurzel/telegraf | f450e3796685bfe219e07604e303aeff37674fd4 | [
"MIT"
] | null | null | null | # Elasticsearch Output Plugin
This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using
Elastic (<http://olivere.github.io/elastic/).>
It supports Elasticsearch releases from 5.x up to 7.x.
## Elasticsearch indexes and templates
### Indexes per time-frame
This plugin can manage indexes per time-frame, as commonly done in other tools
with Elasticsearch.
The timestamp of the metric collected will be used to decide the index
destination.
For more information about this usage on Elasticsearch, check [the
docs][1].
[1]: https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe
### Template management
Index templates are used in Elasticsearch to define settings and mappings for
the indexes and how the fields should be analyzed. For more information on how
this works, see [the docs][2].
This plugin can create a working template for use with telegraf metrics. It uses
Elasticsearch dynamic templates feature to set proper types for the tags and
metrics fields. If the template specified already exists, it will not overwrite
unless you configure this plugin to do so. Thus you can customize this template
after its creation if necessary.
Example of an index template created by telegraf on Elasticsearch 5.x:
```json
{
"order": 0,
"template": "telegraf-*",
"settings": {
"index": {
"mapping": {
"total_fields": {
"limit": "5000"
}
},
"auto_expand_replicas" : "0-1",
"codec" : "best_compression",
"refresh_interval": "10s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"tags": {
"path_match": "tag.*",
"mapping": {
"ignore_above": 512,
"type": "keyword"
},
"match_mapping_type": "string"
}
},
{
"metrics_long": {
"mapping": {
"index": false,
"type": "float"
},
"match_mapping_type": "long"
}
},
{
"metrics_double": {
"mapping": {
"index": false,
"type": "float"
},
"match_mapping_type": "double"
}
},
{
"text_fields": {
"mapping": {
"norms": false
},
"match": "*"
}
}
],
"_all": {
"enabled": false
},
"properties": {
"@timestamp": {
"type": "date"
},
"measurement_name": {
"type": "keyword"
}
}
}
},
"aliases": {}
}
```
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
### Example events
This plugin will format the events in the following way:
```json
{
"@timestamp": "2017-01-01T00:00:00+00:00",
"measurement_name": "cpu",
"cpu": {
"usage_guest": 0,
"usage_guest_nice": 0,
"usage_idle": 71.85413456197966,
"usage_iowait": 0.256805341656516,
"usage_irq": 0,
"usage_nice": 0,
"usage_softirq": 0.2054442732579466,
"usage_steal": 0,
"usage_system": 15.04879301548127,
"usage_user": 12.634822807288275
},
"tag": {
"cpu": "cpu-total",
"host": "elastichost",
"dc": "datacenter1"
}
}
```
```json
{
"@timestamp": "2017-01-01T00:00:00+00:00",
"measurement_name": "system",
"system": {
"load1": 0.78,
"load15": 0.8,
"load5": 0.8,
"n_cpus": 2,
"n_users": 2
},
"tag": {
"host": "elastichost",
"dc": "datacenter1"
}
}
```
## OpenSearch Support
OpenSearch is a fork of Elasticsearch hosted by AWS. The OpenSearch server will
report itself to clients with an AWS-specific version (e.g. v1.0). In reality,
the actual underlying Elasticsearch version is v7.1. This breaks Telegraf and
other Elasticsearch clients that need to know what major version they are
interfacing with.
Amazon has created a [compatibility mode][3] to allow existing Elasticsearch
clients to properly work when the version needs to be checked. To enable
compatibility mode users need to set the `override_main_response_version` to
`true`.
On existing clusters run:
```json
PUT /_cluster/settings
{
"persistent" : {
"compatibility.override_main_response_version" : true
}
}
```
And on new clusters set the option to true under advanced options:
```json
POST https://es.us-east-1.amazonaws.com/2021-01-01/opensearch/upgradeDomain
{
"DomainName": "domain-name",
"TargetVersion": "OpenSearch_1.0",
"AdvancedOptions": {
"override_main_response_version": "true"
}
}
```
[3]: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/rename.html#rename-upgrade
## Configuration
```toml
# Configuration for Elasticsearch to send metrics to.
[[outputs.elasticsearch]]
## The full HTTP endpoint URL for your Elasticsearch instance
## Multiple urls can be specified as part of the same cluster,
## this means that only ONE of the urls will be written to each interval
urls = [ "http://node1.es.example.com:9200" ] # required.
## Elasticsearch client timeout, defaults to "5s" if not set.
timeout = "5s"
  ## Set to true to ask Elasticsearch for a list of all cluster nodes,
## thus it is not necessary to list all nodes in the urls config option
enable_sniffer = false
## Set to true to enable gzip compression
enable_gzip = false
## Set the interval to check if the Elasticsearch nodes are available
## Setting to "0s" will disable the health check (not recommended in production)
health_check_interval = "10s"
## Set the timeout for periodic health checks.
# health_check_timeout = "1s"
  ## HTTP basic authentication details
# username = "telegraf"
# password = "mypassword"
## HTTP bearer token authentication details
# auth_bearer_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
## Index Config
  ## The target index for metrics (Elasticsearch will create it if it does not exist).
## You can use the date specifiers below to create indexes per time frame.
## The metric timestamp will be used to decide the destination index name
# %Y - year (2016)
# %y - last two digits of year (00..99)
# %m - month (01..12)
# %d - day of month (e.g., 01)
# %H - hour (00..23)
# %V - week of the year (ISO week) (01..53)
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the index name. If the tag does not exist,
## the default tag value will be used.
# index_name = "telegraf-{{host}}-%Y.%m.%d"
# default_tag_value = "none"
index_name = "telegraf-%Y.%m.%d" # required.
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Template Config
## Set to true if you want telegraf to manage its index template.
## If enabled it will create a recommended index template for telegraf indexes
manage_template = true
## The template name used for telegraf indexes
template_name = "telegraf"
## Set to true if you want telegraf to overwrite an existing template
overwrite_template = false
## If set to true a unique ID hash will be sent as sha256(concat(timestamp,measurement,series-hash)) string
  ## This enables resending data and updating metric points, avoiding duplicate metrics with different IDs
force_document_id = false
## Specifies the handling of NaN and Inf values.
## This option can have the following values:
## none -- do not modify field-values (default); will produce an error if NaNs or infs are encountered
## drop -- drop fields containing NaNs or infs
## replace -- replace with the value in "float_replacement_value" (default: 0.0)
## NaNs and inf will be replaced with the given number, -inf with the negative of that number
# float_handling = "none"
# float_replacement_value = 0.0
## Pipeline Config
  ## To use an ingest pipeline, set this to the name of the pipeline you want to use.
# use_pipeline = "my_pipeline"
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the pipeline name. If the tag does not exist,
## the default pipeline will be used as the pipeline. If no default pipeline is set,
## no pipeline is used for the metric.
# use_pipeline = "{{es_pipeline}}"
# default_pipeline = "my_pipeline"
```
### Permissions
If you are using authentication within your Elasticsearch cluster, you need to
create an account and a role with at least the `manage` privilege in the Cluster
Privileges category. Otherwise, your account will not be able to connect to your
Elasticsearch cluster and send logs to it. After that, you need to grant the
`create_index` and `write` privileges on your specific index pattern.
### Required parameters
* `urls`: A list containing the full HTTP URL of one or more nodes from your
Elasticsearch instance.
* `index_name`: The target index for metrics. You can use the date specifiers
below to create indexes per time frame.
```
  %Y - year (2017)
%y - last two digits of year (00..99)
%m - month (01..12)
%d - day of month (e.g., 01)
%H - hour (00..23)
%V - week of the year (ISO week) (01..53)
```
Additionally, you can specify dynamic index names by using tags with the
notation ```{{tag_name}}```. This will store the metrics with different tag
values in different indices. If the tag does not exist in a particular metric,
the `default_tag_value` will be used instead.
### Optional parameters
* `timeout`: Elasticsearch client timeout, defaults to "5s" if not set.
* `enable_sniffer`: Set to true to ask Elasticsearch for a list of all cluster
nodes, thus it is not necessary to list all nodes in the urls config option.
* `health_check_interval`: Set the interval to check if the nodes are available,
in seconds. Setting to 0 will disable the health check (not recommended in
production).
* `username`: The username for HTTP basic authentication details (e.g. when using
  Shield).
* `password`: The password for HTTP basic authentication details (e.g. when using
  Shield).
* `manage_template`: Set to true if you want telegraf to manage its index
template. If enabled it will create a recommended index template for telegraf
indexes.
* `template_name`: The template name used for telegraf indexes.
* `overwrite_template`: Set to true if you want telegraf to overwrite an
existing template.
* `force_document_id`: Set to true to compute a unique hash as
  sha256(concat(timestamp,measurement,series-hash)), which enables resending or
  updating data without duplicated documents in Elasticsearch.
* `float_handling`: Specifies how to handle `NaN` and infinite field
values. `"none"` (default) will do nothing, `"drop"` will drop the field and
`replace` will replace the field value by the number in
`float_replacement_value`
* `float_replacement_value`: Value (defaulting to `0.0`) to replace `NaN`s and
`inf`s if `float_handling` is set to `replace`. Negative `inf` will be
replaced by the negative value in this number to respect the sign of the
field's original value.
* `use_pipeline`: If set, the set value will be used as the pipeline to call
when sending events to elasticsearch. Additionally, you can specify dynamic
pipeline names by using tags with the notation ```{{tag_name}}```. If the tag
does not exist in a particular metric, the `default_pipeline` will be used
instead.
* `default_pipeline`: If dynamic pipeline names are used and the tag does not
  exist in a particular metric, this value will be used instead.
## Known issues
Integer values collected that are bigger than 2^63 and smaller than 1e21 (or
within the same window for their negative counterparts) are encoded by the golang
JSON encoder in decimal format, and that is not fully supported by Elasticsearch
dynamic field mapping. This causes the metrics with such values to be dropped in
case a field mapping has not been created yet on the telegraf index. If that's
the case you will see an exception on Elasticsearch side like this:
```json
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_state_exception","reason":"No matching token for number_type [BIG_INTEGER]"}},"status":400}
```
The correct field mapping will be created on the telegraf index as soon as a
supported JSON value is received by Elasticsearch, and subsequent insertions
will work because the field mapping will already exist.
This issue is caused by the way Elasticsearch tries to detect integer fields,
and by how golang encodes numbers in JSON. There is no clear workaround for this
at the moment.
| 35.223433 | 269 | 0.690415 | eng_Latn | 0.983642 |
43ac8e6d07bb985f55a64128bc0827b7c6b310ac | 141 | md | Markdown | README.md | magda5021/lublin | f99d87b7b1fa1b660215cb55a34511ec979e0901 | [
"MIT"
] | null | null | null | README.md | magda5021/lublin | f99d87b7b1fa1b660215cb55a34511ec979e0901 | [
"MIT"
] | null | null | null | README.md | magda5021/lublin | f99d87b7b1fa1b660215cb55a34511ec979e0901 | [
"MIT"
] | null | null | null | # lublin
# final-project-mf
# A Tour Around POland
# CPSC 20000
# Magdalena Frackiewicz
# https://www.w3schools.com/w3css/w3css_templates.asp | 23.5 | 53 | 0.77305 | kor_Hang | 0.308241 |
43acd8332e027dc1a63d43b2c3a3fad2653416f3 | 873 | md | Markdown | Cap10 - Introdução à Análise Estatística de Dados - Parte 3/README.md | FelipeChristanelli/BigDataAnalytics_R_AzureMachineLearning | 568202484c3edd5146a501ab5a5b419055d6c01d | [
"MIT"
] | null | null | null | Cap10 - Introdução à Análise Estatística de Dados - Parte 3/README.md | FelipeChristanelli/BigDataAnalytics_R_AzureMachineLearning | 568202484c3edd5146a501ab5a5b419055d6c01d | [
"MIT"
] | null | null | null | Cap10 - Introdução à Análise Estatística de Dados - Parte 3/README.md | FelipeChristanelli/BigDataAnalytics_R_AzureMachineLearning | 568202484c3edd5146a501ab5a5b419055d6c01d | [
"MIT"
] | null | null | null | ## Introdução à Análise Estatística de Dados - Parte 3
10º Capítulo e último com os estudos voltados a Estatística de Dados e ao final realizamos uma Análise de Regressão Linear:
<ul>
<li>Amostragem</li>
<li>Tipos de Amostragem</li>
<li>Amostragem Probabilística (Reamostragem – Bootstrapping)</li>
<li>Erros de Amostragem</li>
<li>Teorema do Limite Central – Definição</li>
<li>Teorema do Limite Central – Exemplo</li>
<li>A Importância do Tamanho da Amostra no Teorema do Limite Central</li>
<li>Escore z</li>
<li>Intervalo de Confiança</li>
<li>Nível de Confiança</li>
<li>Valor Crítico</li>
<li>Teste de Hipótese – Definição</li>
<li>Teste de Hipótese – Aplicação</li>
<li>A Lógica do Teste de Hipótese</li>
<li>Teste de Hipótese – Exercícios</li>
<li>Análise de Regressão Linear</li>
<li>Premissas da Análise de Regressão</li>
</ul> | 37.956522 | 123 | 0.719359 | por_Latn | 0.995337 |
43ad09f7663b3b30f181668ab4534d69ed44b51e | 5,424 | md | Markdown | README.md | maaktweluit/wampire | e878148d7485806c4b6a26e617803e359c0b5488 | [
"MIT"
] | 30 | 2018-06-17T02:55:49.000Z | 2022-03-01T10:40:27.000Z | README.md | maaktweluit/wampire | e878148d7485806c4b6a26e617803e359c0b5488 | [
"MIT"
] | 5 | 2019-02-26T19:39:56.000Z | 2020-03-04T12:31:17.000Z | README.md | maaktweluit/wampire | e878148d7485806c4b6a26e617803e359c0b5488 | [
"MIT"
] | 4 | 2018-08-19T11:59:30.000Z | 2019-11-05T11:18:32.000Z | # Wampire
[](https://travis-ci.org/ohyo-io/wampire)
[](https://crates.io/crates/wampire)
[](LICENSE)
[](https://discord.gg/Y2k3GAW)
**Wampire** is a [Web Application Messaging Protocol v2](http://wamp-proto.org/) router library, client library, and a router service
that implements most of the features defined in the advanced profile. The wampire project is written
in [Rust](https://www.rust-lang.org/) and designed for highly concurrent asynchronous I/O. The wampire router
provides extended functionality. The router and client interoperate with other WAMP implementations.
Project initially forked from [wamp-rs v0.1.0](https://github.com/dyule/wamp-rs).
<p align="center">
<img src="https://raw.githubusercontent.com/wiki/ohyo-io/wampire/images/wampire_webrtc.png" alt="Wampire logo" width="405" />
</p>
Check the [examples/webrtc-simple](examples/webrtc-simple) folder
for a Node.js-based example that uses wampire as the signaling server for a WebRTC connection.
## Supporting Wampire
Wampire is an MIT-licensed open source project. It's an independent project with its ongoing development made possible
entirely thanks to the support of these awesome [backers](./BACKERS.md). If
you'd like to join them, please consider:
[](https://www.patreon.com/dudochkin)
[](https://ko-fi.com/Y8Y3E0YQ)
## Full Documentation
See the [**Wampire Project Wiki**](https://github.com/ohyo-io/wampire/wiki) for full documentation, examples, and operational details.
At present the entire Basic Profile is supported, as well as pattern based subscriptions and registrations from the Advanced Profile.
You may be looking for:
- [API documentation](https://docs.rs/wampire/)
- [Release notes](https://github.com/ohyo-io/wampire/releases)
There is currently no support for secure connections.
To include in your project, place the following in your `Cargo.toml`
```toml
[dependencies]
wampire = "0.1"
```
Wampire uses [serde-rs](https://github.com/serde-rs/serde), which requires Rust 1.15 or greater.
## Router
To start the router in development mode, use:
```bash
RUST_LOG=info cargo run wampire
```
### Nginx configuration
To proxy WebSocket connections to the router, add the following to your Nginx config.
This can be used with SSL too.
```
location /ws/ {
proxy_pass http://127.0.0.1:8090;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 1800s;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
### Systemd
Build the router:
1. Clone repo using `git clone https://github.com/ohyo-io/wampire.git`
2. `cd wampire && cargo build`
3. Copy `wampire` from `target` folder to `/usr/local/bin`
4. Copy `wampire.service` from `dist` to `/usr/lib/systemd/system` or `/lib/systemd/system` (depend on your system).
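For reference, a minimal unit file for step 4 might look like the sketch below.
This is an assumption for illustration; the authoritative version is
`dist/wampire.service` in the repo.
```ini
[Unit]
Description=Wampire WAMP router
After=network.target

[Service]
ExecStart=/usr/local/bin/wampire
Environment=RUST_LOG=info
Restart=on-failure

[Install]
WantedBy=multi-user.target
```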
To start the service:
``` bash
systemctl start wampire
```
To enable it as a system service:
``` bash
systemctl enable wampire
```
## Examples
Please see the [examples](examples) directory.
To run the examples:
```bash
RUST_LOG=info cargo run --example api_user
```
```bash
RUST_LOG=info cargo run --example endpoint
```
```bash
RUST_LOG=info cargo run --example pubsubclient
```
## Advanced Profile Feature Support
### RPC Features
| Feature | Supported |
| ------- | --------- |
| progressive_call_results | Yes |
| progressive_calls | No |
| call_timeout | Yes |
| call_canceling | Yes |
| caller_identification | Yes |
| call_trustlevels | No |
| registration_meta_api | Yes |
| pattern_based_registration | Yes |
| shared_registration | Yes |
| sharded_registration | No |
| registration_revocation | No |
| procedure_reflection | No |
### PubSub Features
| Feature | Supported |
| ------- | --------- |
| subscriber_blackwhite_listing | Yes |
| publisher_exclusion | Yes |
| publisher_identification | Yes |
| publication_trustlevels | No |
| subscription_meta_api | Yes |
| pattern_based_subscription | Yes |
| sharded_subscription | No |
| event_history | No |
| topic_reflection | No |
| testament_meta_api | Yes |
### Other Advanced Features
| Feature | Supported |
| ------- | --------- |
| challenge-response authentication | Yes |
| cookie authentication | Yes |
| ticket authentication | Yes |
| rawsocket transport | Yes |
| batched WS transport | No |
| longpoll transport | No |
| session meta api | Yes |
| TLS for websockets | Yes |
| TLS for rawsockets | Yes |
| websocket compression | Yes |
## Extended Functionality
Wampire provides [extended functionality](https://github.com/ohyo-io/wampire/wiki/Extended-Functionality)
around subscriber black/white listing and in the information available via the session meta API.
This enhances the ability of clients to make desisions about message recipients.
## Legal
### License
This work is licensed under the MIT license. See [LICENSE](./LICENSE) for details.
| 33.276074 | 134 | 0.737094 | eng_Latn | 0.851719 |
43ad3a8d99ac11fb04430fd1768cbd36a2038deb | 42,512 | md | Markdown | content/pages/operator/articles/explore-v030-gke.md | janhoy/solr-site | 14722dc3c3f087ca6ce0df8eb7ba66f6b6fc720f | [
"Apache-2.0"
] | 7 | 2021-04-14T14:13:21.000Z | 2022-02-06T02:30:59.000Z | content/pages/operator/articles/explore-v030-gke.md | janhoy/solr-site | 14722dc3c3f087ca6ce0df8eb7ba66f6b6fc720f | [
"Apache-2.0"
] | 20 | 2021-03-05T19:47:19.000Z | 2022-03-15T17:54:35.000Z | content/pages/operator/articles/explore-v030-gke.md | janhoy/solr-site | 14722dc3c3f087ca6ce0df8eb7ba66f6b6fc720f | [
"Apache-2.0"
] | 16 | 2021-03-05T12:54:53.000Z | 2021-12-15T18:03:25.000Z | Title: Exploring the Apache Solr Operator on GKE
URL: operator/articles/explore-v030-gke.html
save_as: operator/articles/explore-v030-gke.html
template: operator/page
# Exploring the Apache Solr Operator v0.3.0 on GKE
<small>_Author: Tim Potter_</small>
Earlier this year, Bloomberg graciously donated the Solr operator to the Apache Software Foundation.
The latest [v0.3.0 release]({filename}/pages/operator/artifacts.md) is the first under Apache and represents a significant milestone for the Apache Solr community at large.
The operator is Solr’s first satellite project that is managed by the Solr PMC but released independently of Apache Solr.
The community now has a powerful vehicle to translate hard-earned lessons and best practices running Solr at scale into automated solutions on Kubernetes.
## Introduction
In this post, I explore the `v0.3.0` release from the perspective of a DevOps engineer needing to deploy a well-configured Solr cluster on Kubernetes.
The Solr operator makes getting started with Solr on Kubernetes very easy.
If you follow the [local tutorial](https://apache.github.io/solr-operator/docs/local_tutorial), you can have a Solr cluster up and running locally in no time.
However, for rolling out to production, three additional concerns come to mind: security, high-availability, and performance monitoring.
The purpose of this guide is to help you plan for and implement these important production concerns.
Before getting into the details, take a moment to review the diagram below, which depicts the primary components, configuration, and interactions for a Solr cluster deployed to Kubernetes by the operator.
Of course there are many other Kubernetes objects at play (secrets, service accounts, and so on) but the diagram only shows the primary objects.

## Getting Started
Let’s get a base deployment of the Solr operator, Solr cluster, and supporting services running on GKE.
I have no formal affiliation with Google and am using GKE for this post because of its ease of use, but the same basic process will work on other cloud managed Kubernetes like Amazon’s EKS or AKS.
We’ll improve on this initial configuration as we work through the sections of this document.
At the end, we’ll have the CRD definitions and supporting scripts needed to run a production ready Solr cluster in the cloud.
### Kubernetes Setup
I encourage you to follow along at home, so fire up a GKE cluster and open your terminal.
If you’re new to GKE, work through the [GKE Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) before proceeding with this document.
To achieve better HA, you should deploy a **regional** GKE cluster across three zones (at least one Solr pod per zone).
Of course, you can deploy a zonal cluster to one zone for dev / testing purposes but the examples I show are based on a 3-node GKE cluster running in the us-central1 region with one node in each of three zones.
To get started, we need to install the nginx ingress controller into the ingress-nginx namespace:
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.45.0/deploy/static/provider/cloud/deploy.yaml
```
For more information, see [Deploy Nginx Ingress on GKE](https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke).
To verify the ingress controller is operating normally, do:
```
kubectl get pods -l app.kubernetes.io/name=ingress-nginx -n ingress-nginx \
--field-selector status.phase=Running
```
You should see output similar to:
```
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-6c94f69c74-fxzp7 1/1 Running 0 6m23s
```
For this document, we’re going to deploy the operator and Solr to a namespace named **`sop030`**:
```
kubectl create ns sop030
kubectl config set-context --current --namespace=sop030
```
### Solr Operator Setup
If you installed previous versions of the Solr operator, then please upgrade to the **Apache Solr** version using these instructions: [Upgrading to Apache](https://apache.github.io/solr-operator/docs/upgrading-to-apache.html).
Otherwise, add the Apache Solr Helm repo, install the [Solr CRDs](#solr-crds) and [install the Solr operator](https://artifacthub.io/packages/helm/apache-solr/solr-operator):
```
helm repo add apache-solr https://solr.apache.org/charts
helm repo update
kubectl create -f https://solr.apache.org/operator/downloads/crds/v0.3.0/all-with-dependencies.yaml
helm upgrade --install solr-operator apache-solr/solr-operator \
--version 0.3.0
```
At this point, verify you have a Solr operator pod running in your namespace:
```
kubectl get pod -l control-plane=solr-operator
kubectl describe pod -l control-plane=solr-operator
```
Notice I’m using a label selector filter instead of addressing the pods by ID, which saves me having to look up the ID to get pod details.
There should also be a [Zookeeper Operator](https://github.com/pravega/zookeeper-operator) pod running in your namespace, verify using:
```
kubectl get pod -l component=zookeeper-operator
```
### Solr CRDs
A [Custom Resource Definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRD) allows application developers to define a new type of object in Kubernetes.
This provides a number of benefits:
1. Exposes domain specific config settings to human operators
2. Reduce boilerplate and hide implementation details
3. Perform CRUD operations on CRDs with kubectl
4. Stored and managed in etcd just like any other K8s resource
The Solr operator defines CRDs that represent Solr specific objects, such as a SolrCloud resource, metrics exporter resource, and a backup/restore resource.
The SolrCloud CRD defines the configuration settings needed to deploy and manage a Solr cluster in a Kubernetes namespace.
First, let’s look at the SolrCloud CRD using kubectl:
```
# get a list of all CRDs in the cluster
kubectl get crds
# get details about the SolrCloud CRD Spec
kubectl explain solrclouds.spec
kubectl explain solrclouds.spec.solrImage
# get details about the SolrCloud CRD Status
kubectl explain solrclouds.status
```
Take a moment to look over the output from the `explain` command above; the various structures and fields should seem familiar.
Feel free to dig down, exploring different parts of the SolrCloud CRD Spec and Status.
### Creating a Solr Cloud
To deploy an instance of a SolrCloud object in a Kubernetes namespace, we craft a bit of YAML, such as the example shown below:
```
apiVersion: solr.apache.org/v1beta1
kind: SolrCloud
metadata:
name: explore
spec:
customSolrKubeOptions:
podOptions:
resources:
limits:
memory: 3Gi
requests:
cpu: 700m
memory: 3Gi
dataStorage:
persistent:
pvcTemplate:
spec:
resources:
requests:
storage: 2Gi
reclaimPolicy: Delete
replicas: 3
solrImage:
repository: solr
tag: 8.8.2
solrJavaMem: -Xms500M -Xmx500M
updateStrategy:
method: StatefulSet
zookeeperRef:
provided:
chroot: /explore
image:
pullPolicy: IfNotPresent
repository: pravega/zookeeper
tag: 0.2.9
persistence:
reclaimPolicy: Delete
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
replicas: 3
zookeeperPodPolicy:
resources:
limits:
memory: 500Mi
requests:
cpu: 250m
memory: 500Mi
```
Pay close attention to the resource requests / limits and disk sizes for Solr and Zookeeper; allocating the correct amount of memory, CPU, and disk for each Solr pod is an essential task when designing your cluster.
Of course with Kubernetes you can add more pods as needed, but you still need to estimate the correct resource requests / limits and disk size for your use case before deploying pods.
Sizing for production is beyond the scope of this document and is very use-case specific (typically requiring some trial and error running realistic load tests).
What should stand out to you about the SolrCloud YAML is that most of the settings are very Solr-specific and self-explanatory if you've worked with Solr in the past.
Your ops team will keep this YAML in source control, allowing them to automate the process of creating SolrCloud clusters in Kubernetes.
You could even build a Helm chart to manage your SolrCloud YAML and related objects, such as backup/restore and Prometheus exporter CRD definitions.
Open a shell and run the following to tail the operator pod logs:
```
kubectl logs -l control-plane=solr-operator -f
```
Note that I’m using a label selector (`-l ...`) instead of addressing the pod by its ID; this alleviates having to find the pod ID every time I want to view the operator logs.
To deploy the `explore` SolrCloud to K8s, save the YAML shown above to a file named **explore-SolrCloud.yaml** and then run the following in another shell tab:
```
kubectl apply -f explore-SolrCloud.yaml
```
_We'll make updates to the `explore-SolrCloud.yaml` file throughout the rest of this document.
Any code section that starts with "`spec:`" refers to this file._
When you submit this SolrCloud definition to the Kubernetes API server, it notifies the Solr operator (running as a pod in your namespace) using a watcher like mechanism.
This initiates a reconcile process in the operator where it creates the various K8s objects needed to run the `explore` SolrCloud cluster (see diagram above).
Take a brief look at the logs for the operator as the SolrCloud instance gets deployed.
One of the main benefits of CRDs is you can interact with them using `kubectl` just like native K8s objects:
```
$ kubectl get solrclouds
NAME VERSION TARGETVERSION DESIREDNODES NODES READYNODES AGE
explore 8.8.2 3 3 3 73s
$ kubectl get solrclouds explore -o yaml
```
Behind the scenes, the operator created a [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for managing a set of Solr pods.
Take a look at the `explore` StatefulSet using:
```
kubectl get sts -l solr-cloud=explore -o yaml
```
There's one slightly nuanced setting I'm relying on for this initial SolrCloud definition:
```
updateStrategy:
method: StatefulSet
```
We need to start with `StatefulSet` as the [`updateStrategy` method](https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html#update-strategy) so that we can enable TLS on an existing SolrCloud.
We'll switch this to `Managed` in the HA section after enabling TLS. Using `Managed` requires the operator to call the
collections API to get `CLUSTERSTATUS`, which doesn't work while a cluster is converting from HTTP to HTTPS.
In a real deployment, you should just start with TLS enabled initially vs. upgrading to TLS on an existing cluster.
Also, let’s not create any collections or load data just yet as we want to lock down the cluster before moving forward.
#### Zookeeper Connection
Solr Cloud depends on Apache Zookeeper.
In the `explore` SolrCloud definition, I'm using the [provided](https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html#zookeeper-reference) option, which means the Solr operator _provides_ a Zookeeper ensemble for the SolrCloud instance.
Behind the scenes, the Solr operator defines a `ZookeeperCluster` CRD instance, which is managed by the Zookeeper operator.
The `provided` option is useful for getting started and development but does not expose all the configuration options supported by the Zookeeper operator.
For production deployments, consider defining your own `ZookeeperCluster` outside of the SolrCloud CRD definition and then simply pointing to the Zookeeper ensemble connection string using `connectionInfo` under `spec.zookeeperRef`.
This gives you full control over your Zookeeper cluster deployment, allows for multiple SolrCloud instances (and other applications) to share the same Zookeeper service (with different chroot of course), and provides a nice separation of concerns.
Alternatively, the Solr operator does not require using the Zookeeper operator, so you can use a [Helm chart](https://bitnami.com/stack/zookeeper/helm) to deploy your Zookeeper cluster, if the Zookeeper operator does not meet your needs.
#### Custom Log4J Config
Before moving on, I wanted to point out a handy feature in the operator that allows you to load a custom Log4j config from a user-provided ConfigMap.
I mention this feature because you may face a situation where you need to customize the Log4j config for Solr to help troubleshoot a problem in production.
I won't go into the details here, but use the [Custom Log Configuration](https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html#custom-log-configuration) documentation to configure your own custom Log4J config.
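As a quick sketch of the approach (the ConfigMap name is an assumption; see the linked docs for the authoritative key names), create a ConfigMap from your custom `log4j2.xml`:
```
kubectl create configmap custom-solr-log4j2 --from-file=log4j2.xml
```
Then point the SolrCloud at it (again, illustrative only; don't add this to `explore-SolrCloud.yaml` unless you actually need a custom log config):
```
spec:
  ...
  customSolrKubeOptions:
    configMapOptions:
      providedConfigMap: custom-solr-log4j2
```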
## Security
Security should be your first and main concern at all times, especially when running in public clouds like GKE; you don’t want to be that ops engineer who’s system gets compromised.
In this section we’re going to enable TLS, basic authentication, and authorization controls for Solr’s API endpoints.
For a more detailed explanation of all configuration options, see the [SolrCloud CRD](https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html) documentation.
To enable TLS for Solr, all you need is a TLS secret containing a public X.509 certificate and a private key.
The Kubernetes ecosystem provides a powerful tool for issuing and managing certificates: [cert-manager](https://cert-manager.io/).
If not already installed in your cluster, follow the basic instructions provided by the Solr operator to get the latest version of cert-manager installed:
[Use cert-manager to issue the certificate](https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html#use-cert-manager-to-issue-the-certificate).
### Cert-manager and Let’s Encrypt
First, let’s get started with a self-signed certificate.
You’ll need to create a self-signed issuer (cert-manager CRD), certificate (cert-manager CRD), and a secret holding a keystore password.
Save the following yaml to a file, and apply it via `kubectl apply -f`.
```
---
apiVersion: v1
kind: Secret
metadata:
name: pkcs12-keystore-password
stringData:
password-key: Test1234
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned-issuer
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: explore-selfsigned-cert
spec:
subject:
organizations: ["self"]
dnsNames:
- localhost
secretName: explore-selfsigned-cert-tls
issuerRef:
name: selfsigned-issuer
keystores:
pkcs12:
create: true
passwordSecretRef:
key: password-key
name: pkcs12-keystore-password
```
Notice I requested a PKCS12 keystore to be generated for my certificate using:
```
keystores:
pkcs12:
create: true
```
This is a nice feature of cert-manager when working with Java-based applications, as Java supports reading PKCS12 natively; without it, you'd need to convert the tls.crt and tls.key files using keytool yourself.
Cert-manager creates a Kubernetes secret holding the X.509 certificate, private key, and PKCS12 compliant keystore used by Solr.
Take a moment to inspect the contents of the secret using:
```
kubectl get secret explore-selfsigned-cert-tls -o yaml
```
Update your SolrCloud CRD definition in `explore-SolrCloud.yaml` to enable TLS and point to the secret holding the keystore:
```
spec:
...
solrAddressability:
commonServicePort: 443
external:
domainName: YOUR_DOMAIN_NAME_HERE
method: Ingress
nodePortOverride: 443
useExternalAddress: false
podPort: 8983
solrTLS:
restartOnTLSSecretUpdate: true
pkcs12Secret:
name: explore-selfsigned-cert-tls
key: keystore.p12
keyStorePasswordSecret:
name: pkcs12-keystore-password
key: password-key
```
Notice that I'm also exposing Solr externally via an Ingress and switching the common service port to 443, which is more intuitive when working with TLS-enabled services.
Apply your changes to the SolrCloud CRD using:
```
kubectl apply -f explore-SolrCloud.yaml
```
This will trigger a rolling restart of the Solr pods to enable TLS using your self-signed cert. Verify Solr is serving
traffic over HTTPS by opening a port-forward to one of the Solr pods (port 8983) and then do:
```
curl https://localhost:8983/solr/admin/info/system -k
```
### Let’s Encrypt Issued TLS Certs
Self-signed certificates are great for local testing but for exposing services on the Web, we need a certificate issued by a trusted CA.
I’m going to use Let’s Encrypt to issue a cert for my Solr cluster for a domain I own.
If you don't have a domain name for your Solr cluster, you can just skip this section and refer back to it when needed.
The process I’m using here is based on the docs at: [ACME DNS01 Resolver for Google](https://cert-manager.io/docs/configuration/acme/dns01/google/).
Here’s the Let’s Encrypt issuer I created for my GKE environment:
```
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: acme-letsencrypt-issuer
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: *** REDACTED ***
privateKeySecretRef:
name: acme-letsencrypt-issuer-pk
solvers:
- dns01:
cloudDNS:
project: GCP_PROJECT
serviceAccountSecretRef:
name: clouddns-dns01-solver-svc-acct
key: key.json
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: explore-solr-tls-cert
spec:
dnsNames:
- YOUR_DOMAIN_NAME_HERE
issuerRef:
kind: Issuer
name: acme-letsencrypt-issuer
keystores:
pkcs12:
create: true
passwordSecretRef:
key: password-key
name: pkcs12-keystore-password
secretName: explore-solr-tls-letsencrypt
subject:
countries:
- USA
organizationalUnits:
- k8s
organizations:
- solr
```
Creating a certificate issuer typically involves some platform specific configuration.
For GKE, notice I’m using the DNS01 resolver, which requires credentials for a service account that has DNS admin permission, which you’ll need to configure in your GCP console or using the gcloud CLI.
In my environment, I’m storing the credentials in a secret named: `clouddns-dns01-solver-svc-acct`.
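For reference, I created that secret with something along these lines (a sketch; the account name is an assumption and `$GCP_PROJECT` is your project ID):
```
gcloud iam service-accounts create dns01-solver
gcloud projects add-iam-policy-binding $GCP_PROJECT \
  --member "serviceAccount:dns01-solver@${GCP_PROJECT}.iam.gserviceaccount.com" \
  --role roles/dns.admin
gcloud iam service-accounts keys create key.json \
  --iam-account "dns01-solver@${GCP_PROJECT}.iam.gserviceaccount.com"
kubectl create secret generic clouddns-dns01-solver-svc-acct --from-file=key.json
```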
You can tail the logs on the cert-manager pod (in the cert-manager namespace) to track the progress of the issuing process.
```
kubectl logs -l app.kubernetes.io/name=cert-manager -n cert-manager
```
Once the TLS cert is issued by Let's Encrypt, re-configure (assuming you worked through the self-signed process above) your SolrCloud instance to expose Solr via an Ingress and use the PKCS12 keystore holding the certificate and private key stored in the TLS secret created by cert-manager:
```
spec:
...
solrTLS:
pkcs12Secret:
name: explore-solr-tls-letsencrypt
key: keystore.p12
```
The final step is to create a DNS A record to map the IP address of your Ingress (created by the Solr operator) to the hostname for your Ingress.
### mTLS
The Solr operator supports mTLS-enabled Solr clusters, but that topic is a bit beyond the scope of this document.
Refer to the Solr Operator documentation for [configuring mTLS](https://apache.github.io/solr-operator/docs/running-the-operator.html#client-auth-for-mtls-enabled-solr-clusters).
### Authentication & Authorization
If you followed the process in the previous section, then traffic on the wire between Solr pods is encrypted, but we also need to make sure incoming requests have a user identity (authentication) and the requesting user is authorized to perform the request.
As of `v0.3.0`, the Solr operator supports basic authentication and Solr’s rule based authorization controls.
The easiest way to get started is to have the operator bootstrap basic authentication and authorization controls.
For detailed instructions, see: [Authentication and Authorization](https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html#authentication-and-authorization)
```
spec:
...
solrSecurity:
authenticationType: Basic
```
The operator configures credentials for three Solr users: `admin`, `k8s-oper`, and `solr`.
Login to the Solr admin Web UI as the admin user by doing:
```
kubectl get secret explore-solrcloud-security-bootstrap \
-o jsonpath='{.data.admin}' | base64 --decode
```
At this point, all traffic into and between Solr pods is encrypted using TLS and API endpoints are locked down via Solr’s Rule-based authorization controls and basic authentication.
Now that Solr is properly locked down, let’s move on to configuring our cluster for high availability (HA).
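To verify from the command line, here is a sketch that assumes you still have a port-forward open to one of the Solr pods on port 8983:
```
ADMIN_PASSWORD=$(kubectl get secret explore-solrcloud-security-bootstrap \
  -o jsonpath='{.data.admin}' | base64 --decode)
curl -u "admin:${ADMIN_PASSWORD}" -k https://localhost:8983/solr/admin/info/system
```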
## High Availability
In this section, we cover several key topics to achieving high availability for Solr pods in Kubernetes.
Ensuring node availability is only part of the equation.
You also need to ensure replicas for each shard of each collection that needs high availability are properly distributed across the pods so that losing a node or even an entire AZ will not result in a loss of service.
However, ensuring some replicas remain online in the event of an outage only goes so far.
At some point, the healthy replicas may become overloaded by requests, so any availability strategy you put in place also needs to plan for a sudden increase in load on the healthy replicas.
### Pod Anti-Affinity
To begin our exploration of high availability with the Solr operator, let’s ensure Solr pods are evenly distributed around the cluster using pod anti-affinity.
Once you determine the number of Solr pods you need, you’ll also want to distribute the pods across your Kubernetes cluster in a balanced manner in order to withstand random node failures as well as zone-level outages (for multi-zone clusters) using [Pod Anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) rules.
To see the zones for each node in your cluster, do:
```
kubectl get nodes -L topology.kubernetes.io/zone
```
In the following **podAntiAffinity** example, pods that match the **solr-cloud=explore** label selector are distributed across different nodes and zones in the cluster.
_Tip: The Solr operator sets the “solr-cloud” label to the name of your SolrCloud instance on all pods._
```
spec:
...
customSolrKubeOptions:
podOptions:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "technology"
operator: In
values:
- solr-cloud
- key: "solr-cloud"
operator: In
values:
- explore
topologyKey: topology.kubernetes.io/zone
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "technology"
operator: In
values:
- solr-cloud
- key: "solr-cloud"
operator: In
values:
- explore
topologyKey: kubernetes.io/hostname
```
_Obviously this doesn't matter much when you have 3 nodes across 3 zones with 3 Solr pods; you'd get a balanced distribution with just the hostname anti-affinity rule. For large clusters, it's important to have rules for both hostnames and zones._
If you’re not running a multi-zone cluster, then you can remove the rule based on `topology.kubernetes.io/zone`.
Moreover, I think this rule should be a preference instead of a hard requirement so that Kubernetes can spin up replacement nodes and pods in other healthy zones if one zone is down.
Also, you may encounter pod scheduling issues when applying these anti-affinity rules for an existing SolrCloud because the underlying Persistent Volume Claims (PVC) used for the Solr disks are pinned to a zone.
Any Solr pods that move to another zone based on the new anti-affinity rule will leave the pod in a `Pending` state because the PVC that needs to be re-attached only exists in the original zone.
Thus, it's a good idea to plan your pod affinity rules before rolling out SolrCloud clusters.
If you need more Solr pods than available nodes in a cluster, then you should use **preferredDuringSchedulingIgnoredDuringExecution** instead of **requiredDuringSchedulingIgnoredDuringExecution** for the rule based on **kubernetes.io/hostname**.
Kubernetes does its best to distribute pods evenly across nodes, but multiple pods will get scheduled on the same node at some point (obviously).
Assuming you requested 3 replicas for the “explore” SolrCloud, you should have an even distribution of pods across the three zones.
Run the following command to get the number of unique nodes that your Solr Pods are running on, and count how many there are.
```
kubectl get po -l solr-cloud=explore,technology=solr-cloud \
-o json | jq -r '.items | sort_by(.spec.nodeName)[] | [.spec.nodeName] | @tsv' | uniq | wc -l
```
_Output should be: 3_
You should employ a similar anti-affinity config for Zookeeper pods to distribute those across zones as well.
### Zone Aware Replica Placement
Once your cluster’s pods are properly sized and distributed around the cluster to facilitate HA,
you still need to ensure all replicas for the collections that require HA are placed so that they take advantage of the cluster layout.
In other words, it doesn't do any good to distribute pods around the cluster to support HA if all the replicas for the same shard end up on the same node or zone.
On the Solr side, a good rule to start with is to have replicas for the same shard prefer other hosts using:
```
{"node": "#ANY", "shard": "#EACH", "replica":"<2"},
```
See [Solr Auto-scaling](https://solr.apache.org/guide/solrcloud-autoscaling-overview.html) for more information about this another other types of rules.
If you're over-sharding your collections, i.e. total replicas > # of pods, then you may need to relax the count thresholds in the node-level auto-scaling rules.
_NOTE: The Solr auto-scaling framework has been deprecated in 8.x and is removed in 9.x. However, the rules we’ll leverage for replica placement in this document are replaced by the AffinityPlacementPlugin available in 9.x,
see: [solr/core/src/java/org/apache/solr/cluster/placement/plugins/AffinityPlacementFactory.java](https://github.com/apache/solr/blob/main/solr/core/src/java/org/apache/solr/cluster/placement/plugins/AffinityPlacementFactory.java) for details._
For multi-AZ clusters, each Solr pod in a StatefulSet needs the **availability_zone** Java system property set, which is a unique label that identifies the zone for that pod.
The **availability_zone** property can be used in an auto-scaling rule to distribute replicas across all available zones in the SolrCloud cluster.
```
{"replica":"#EQUAL", "shard":"#EACH", "sysprop.availability_zone":"#EACH"},
```
If the service account for your Solr pods has the get nodes permission, you can get the zone from the node metadata using the [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api).
However, many admins are reluctant to give out this permission.
A GCP specific solution where we `curl` the [http://metadata.google.internal/computeMetadata/v1/instance/zone](http://metadata.google.internal/computeMetadata/v1/instance/zone) API is shown below:
```
spec:
...
customSolrKubeOptions:
podOptions:
initContainers: # additional init containers for the Solr pods
- name: set-zone # GKE specific, avoids giving get nodes permission to the service account
image: curlimages/curl:latest
command:
- '/bin/sh'
- '-c'
- |
zone=$(curl -sS http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')
zone=${zone##*/}
if [ "${zone}" != "" ]; then
echo "export SOLR_OPTS=\"\${SOLR_OPTS} -Davailability_zone=${zone}\"" > /docker-entrypoint-initdb.d/set-zone.sh
fi
volumeMounts:
- name: initdb
mountPath: /docker-entrypoint-initdb.d
volumes:
- defaultContainerMount:
mountPath: /docker-entrypoint-initdb.d
name: initdb
name: initdb
source:
emptyDir: {}
```
Notice the initContainer adds the `set-zone.sh` script to `/docker-entrypoint-initdb.d`.
The Docker Solr framework sources any scripts in this directory before starting Solr.
A similar approach could be applied for EKS (see output from `http://169.254.169.254/latest/dynamic/instance-identity/document`).
Of course using a platform specific approach isn’t ideal, but neither is having to grant get nodes permission.
The key is getting the `availability_zone` system property set using whatever approach works for your system.
You also need to ensure distributed queries prefer other replicas in the same zone using the `node.sysprop` shardPreference, added in Solr 8.2.
This query routing preference also helps reduce queries that span across zones when both zones are healthy.
For more detail, consult the Solr Ref Guide - [Shard Preferences](https://solr.apache.org/guide/8_8/cluster-node-management.html#default-shard-preferences)
I’ll leave it as an exercise for the reader to apply an auto-scaling policy that uses the `availability_zone` system property to influence replica placement.
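If you want a head start, here is a sketch of what setting such a policy could look like via the autoscaling API (this assumes a port-forward to a Solr pod on 8983; treat the exact rules as a starting point, not a definitive policy):
```
ADMIN_PASSWORD=$(kubectl get secret explore-solrcloud-security-bootstrap \
  -o jsonpath='{.data.admin}' | base64 --decode)
curl -u "admin:${ADMIN_PASSWORD}" -k -X POST -H 'Content-type: application/json' \
  'https://localhost:8983/solr/admin/autoscaling' -d '{
  "set-cluster-policy": [
    {"node": "#ANY", "shard": "#EACH", "replica": "<2"},
    {"replica": "#EQUAL", "shard": "#EACH", "sysprop.availability_zone": "#EACH"}
  ]
}'
```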
### Replica Types
If you use the operator to deploy multiple SolrCloud instances, but they all use the same Zookeeper connection string (and chroot), then it behaves like a single Solr Cloud cluster from a Solr perspective.
You can use this approach to assign Solr pods to different nodes in your Kubernetes cluster.
For instance, you may want to run `TLOG` replicas on one set of nodes and `PULL` replicas on another set to isolate write and read traffic
(see: [Replica Types](https://solr.apache.org/guide/shards-and-indexing-data-in-solrcloud.html#types-of-replicas)).
Isolating traffic by replica type is beyond the scope of this document, but you can use the operator to deploy multiple SolrCloud instances to achieve the isolation.
Each instance will need a Java system property set, such as **solr_node_type**, to differentiate the Solr pods from each other; Solr’s auto-scaling policy engine supports assigning replicas by type using a System property.
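For instance, the SolrCloud instance hosting PULL replicas might set the property via `solrOpts` (a sketch only; `solr_node_type` is an arbitrary name, not a built-in Solr property, and this fragment is not meant for our `explore-SolrCloud.yaml`):
```
spec:
  ...
  solrOpts: "-Dsolr_node_type=pull"
```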
### Rolling restarts
One of the major benefits of an operator is we can extend Kubernetes default behavior to take into account application specific state.
For instance, when performing a rolling restart of a StatefulSet, K8s will start with the pod with the highest ordinal value and work down to zero, waiting in between for the restarted pod to reach the `Running` state.
While this approach works, it’s typically too slow for large clusters, and could possibly be harmful without knowledge of whether replicas on that node are recovering.
In contrast, the operator enhances the rolling restart operation for StatefulSets to give consideration for which Solr pod hosts the Overseer (restarted last), number of leaders on a pod, and so on.
The result is an optimized rolling restart process for SolrCloud where multiple pods can be restarted concurrently.
The operator uses Solr’s cluster status API to ensure at least one replica for every shard* is online when deciding which pods to restart concurrently.
What’s more, these custom reconcile processes adhere to the idea of idempotency that is so important in Kubernetes.
The reconcile can be called 100 times given the same starting state, the results should be identical from the 1st and 100th.
Recall that I originally used the `StatefulSet` method so that we could migrate an existing cluster to use TLS.
Let's switch that to use the `Managed` method using the following config:
```
spec:
...
updateStrategy:
managed:
maxPodsUnavailable: 2
maxShardReplicasUnavailable: 2
method: Managed
```
_Add this to your `explore-SolrCloud.yaml` and apply the changes._
_* As you see above, the `Managed` update strategy is customizable and can be configured to be as safe or as fast as you require.
See the [update documentation](https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html#update-strategy) for more information._
## Performance Monitoring
So now we have a secured, HA-capable Solr cluster, deployed and managed by the Solr operator.
This last piece I want to cover is performance monitoring with the [Prometheus stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack).
### Prometheus Stack
You’re probably already using Prometheus for monitoring but if not installed in your cluster,
use the [installation instructions](https://apache.github.io/solr-operator/docs/solr-prometheus-exporter/#prometheus-stack) to install the Prometheus stack which includes Grafana.
### Prometheus Exporter
The operator [documentation](https://apache.github.io/solr-operator/docs/solr-prometheus-exporter/) covers how to deploy a Prometheus exporter for your SolrCloud instance.
Since we enabled basic auth and TLS, you’ll need to ensure the exporter can talk to the secured Solr pods using the following config settings:
```
solrReference:
cloud:
name: "explore"
basicAuthSecret: explore-solrcloud-basic-auth
solrTLS:
restartOnTLSSecretUpdate: true
pkcs12Secret:
name: explore-selfsigned-cert-tls
key: keystore.p12
keyStorePasswordSecret:
name: pkcs12-keystore-password
key: password-key
```
_Make sure the `pkcs12Secret.name` is correct depending on whether you're using the self-signed cert or one issued by another CA such as Let's Encrypt._
Ensure the service the Prometheus operator scrapes metrics from is correct:
```
kubectl get svc -l solr-prometheus-exporter=explore-prom-exporter
```
If this shows a healthy service, then create a [service monitor](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md)
to trigger Prometheus to start scraping metrics from the exporter pod via the `explore-prom-exporter-solr-metrics` service.
```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: solr-metrics
labels:
release: prometheus-stack
spec:
selector:
matchLabels:
solr-prometheus-exporter: explore-prom-exporter
namespaceSelector:
matchNames:
- sop030
endpoints:
- port: solr-metrics
interval: 15s
```
_You'll need at least one collection created in your cluster before the exporter starts generating useful metrics._
### Grafana Dashboards
Use kubectl expose to create a LoadBalancer (external IP) for Grafana:
```
kubectl expose deployment prometheus-stack-grafana --type=LoadBalancer \
--name=grafana -n monitoring
```
After waiting a bit, get the external IP address for the grafana service by doing:
```
kubectl -n monitoring get service grafana \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
Alternatively, you can just open a port-forward to the Grafana pod listening on port 3000.
Log in to Grafana using `admin` and `prom-operator`.
Download the default Solr dashboard from the source distribution:
```
wget -q -O grafana-solr-dashboard.json \
"https://raw.githubusercontent.com/apache/lucene-solr/branch_8x/solr/contrib/prometheus-exporter/conf/grafana-solr-dashboard.json"
```
Manually import the `grafana-solr-dashboard.json` file into Grafana.
At this point, you should load some data and run query performance tests. If you’re running a multi-zone cluster,
then be sure to add the following query parameter to your query requests to prefer replicas in the same zone
(which helps cut down on cross-zone traffic per request when all zones have healthy replicas).
If you don’t have a query load test tool, then I recommend looking at Gatling (gatling.io).
```
shards.preference=node.sysprop:sysprop.availability_zone,replica.location:local
```
## Wrap-up
At this point, you now have a blueprint for creating a secure, HA-capable, balanced Solr cluster with performance monitoring via Prometheus and Grafana.
Before rolling out to production, you also need to consider backup/restore, automated scaling, and alerting for key health indicators.
Hopefully I’ll be able to cover some of these additional aspects in a future post.
Have other concerns you want more information about?
Let us know, we’re on slack [#solr-operator](https://kubernetes.slack.com/messages/solr-operator) or via [GitHub Issues](https://github.com/apache/solr-operator/issues).
Here’s a final listing of the SolrCloud, Prometheus Exporter, and supporting objects YAML I used in this post. Enjoy!
```yaml
---
apiVersion: v1
kind: Secret
metadata:
name: pkcs12-keystore-password
stringData:
password-key: Test1234
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned-issuer
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: explore-selfsigned-cert
spec:
subject:
organizations: ["self"]
dnsNames:
- localhost
secretName: explore-selfsigned-cert-tls
issuerRef:
name: selfsigned-issuer
keystores:
pkcs12:
create: true
passwordSecretRef:
key: password-key
name: pkcs12-keystore-password
---
apiVersion: solr.apache.org/v1beta1
kind: SolrCloud
metadata:
name: explore
spec:
customSolrKubeOptions:
podOptions:
resources:
limits:
memory: 3Gi
requests:
cpu: 700m
memory: 3Gi
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "technology"
operator: In
values:
- solr-cloud
- key: "solr-cloud"
operator: In
values:
- explore
topologyKey: topology.kubernetes.io/zone
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "technology"
operator: In
values:
- solr-cloud
- key: "solr-cloud"
operator: In
values:
- explore
topologyKey: kubernetes.io/hostname
initContainers: # additional init containers for the Solr pods
- name: set-zone # GKE specific, avoids giving get nodes permission to the service account
image: curlimages/curl:latest
command:
- '/bin/sh'
- '-c'
- |
zone=$(curl -sS http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')
zone=${zone##*/}
if [ "${zone}" != "" ]; then
echo "export SOLR_OPTS=\"\${SOLR_OPTS} -Davailability_zone=${zone}\"" > /docker-entrypoint-initdb.d/set-zone.sh
fi
volumeMounts:
- name: initdb
mountPath: /docker-entrypoint-initdb.d
volumes:
- defaultContainerMount:
mountPath: /docker-entrypoint-initdb.d
name: initdb
name: initdb
source:
emptyDir: {}
dataStorage:
persistent:
pvcTemplate:
spec:
resources:
requests:
storage: 2Gi
reclaimPolicy: Delete
replicas: 3
solrImage:
repository: solr
tag: 8.8.2
  solrJavaMem: -Xms500M -Xmx500M
updateStrategy:
managed:
maxPodsUnavailable: 2
maxShardReplicasUnavailable: 2
method: Managed
solrAddressability:
commonServicePort: 443
external:
domainName: YOUR_DOMAIN_NAME_HERE
method: Ingress
nodePortOverride: 443
useExternalAddress: false
podPort: 8983
solrTLS:
restartOnTLSSecretUpdate: true
pkcs12Secret:
name: explore-selfsigned-cert-tls
key: keystore.p12
keyStorePasswordSecret:
name: pkcs12-keystore-password
key: password-key
solrSecurity:
authenticationType: Basic
zookeeperRef:
provided:
chroot: /explore
image:
pullPolicy: IfNotPresent
repository: pravega/zookeeper
tag: 0.2.9
persistence:
reclaimPolicy: Delete
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
replicas: 3
zookeeperPodPolicy:
resources:
limits:
memory: 500Mi
requests:
cpu: 250m
memory: 500Mi
---
apiVersion: solr.apache.org/v1beta1
kind: SolrPrometheusExporter
metadata:
labels:
controller-tools.k8s.io: "1.0"
name: explore-prom-exporter
spec:
customKubeOptions:
podOptions:
resources:
requests:
cpu: 300m
memory: 800Mi
solrReference:
cloud:
name: "explore"
basicAuthSecret: explore-solrcloud-basic-auth
solrTLS:
restartOnTLSSecretUpdate: true
pkcs12Secret:
name: explore-selfsigned-cert-tls
key: keystore.p12
keyStorePasswordSecret:
name: pkcs12-keystore-password
key: password-key
numThreads: 6
image:
repository: solr
tag: 8.8.2
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: solr-metrics
labels:
release: prometheus-stack
spec:
selector:
matchLabels:
solr-prometheus-exporter: explore-prom-exporter
namespaceSelector:
matchNames:
- sop030
endpoints:
- port: solr-metrics
interval: 15s
```
| 43.872033 | 377 | 0.735486 | eng_Latn | 0.974441 |
43ae17b3da882a4bb3c7f6061bbebaa44426e1ac | 98 | md | Markdown | README.md | SanthanMR/Datascience_with_python | 79f1a2fa1fa9805833d81b96a10440de0e5a5d14 | [
"MIT"
] | null | null | null | README.md | SanthanMR/Datascience_with_python | 79f1a2fa1fa9805833d81b96a10440de0e5a5d14 | [
"MIT"
] | null | null | null | README.md | SanthanMR/Datascience_with_python | 79f1a2fa1fa9805833d81b96a10440de0e5a5d14 | [
"MIT"
] | null | null | null | This folder contains of files related to exercises from the workshop i.e DataScience with python.
| 49 | 97 | 0.826531 | eng_Latn | 0.999821 |
43ae302c68496dfe6d24caef665c890bfdb37024 | 4,132 | md | Markdown | docs/framework/wpf/data/how-to-bind-to-xdocument-xelement-or-linq-for-xml-query-results.md | paulomorgado/docs.pt-br | 6a34d0d334e551e0581df0ed7613b22c16ad7235 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/data/how-to-bind-to-xdocument-xelement-or-linq-for-xml-query-results.md | paulomorgado/docs.pt-br | 6a34d0d334e551e0581df0ed7613b22c16ad7235 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/data/how-to-bind-to-xdocument-xelement-or-linq-for-xml-query-results.md | paulomorgado/docs.pt-br | 6a34d0d334e551e0581df0ed7613b22c16ad7235 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Como associar a XDocument, XElement ou LINQ para resultados de consulta XML
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- data binding [WPF], binding to XDocument
- data binding [WPF], binding to XElement
ms.assetid: 6a629a49-fe1c-465d-b76a-3dcbf4307b64
ms.openlocfilehash: 070f67f30613d4522a48e08fd1c208fbe5887525
ms.sourcegitcommit: 82f94a44ad5c64a399df2a03fa842db308185a76
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/25/2019
ms.locfileid: "72920117"
---
# <a name="how-to-bind-to-xdocument-xelement-or-linq-for-xml-query-results"></a>How to bind to XDocument, XElement, or LINQ for XML query results
This example demonstrates how to bind XML data to an <xref:System.Windows.Controls.ItemsControl> by using <xref:System.Xml.Linq.XDocument>.
## <a name="example"></a>Example
The following XAML defines an <xref:System.Windows.Controls.ItemsControl> and includes a data template for data of type `Planet` in the XML namespace `http://planetsNS`. An XML data type that occupies a namespace must include the namespace in braces, and if it appears where a XAML markup extension could appear, it must precede the namespace with a brace escape sequence. This code binds to the dynamic properties that correspond to the <xref:System.Xml.Linq.XContainer.Element%2A> and <xref:System.Xml.Linq.XElement.Attribute%2A> methods of the <xref:System.Xml.Linq.XElement> class. Dynamic properties enable XAML to bind to dynamic properties that share the names of those methods. For more information, see [LINQ to XML dynamic properties](linq-to-xml-dynamic-properties.md). Note how the default namespace declaration for the XML does not apply to attribute names.
[!code-xaml[XLinqExample#StackPanelResources](~/samples/snippets/csharp/VS_Snippets_Wpf/XLinqExample/CSharp/Window1.xaml#stackpanelresources)]
[!code-xaml[XLinqExample#ItemsControl](~/samples/snippets/csharp/VS_Snippets_Wpf/XLinqExample/CSharp/Window1.xaml#itemscontrol)]
The following C# code calls <xref:System.Xml.Linq.XDocument.Load%2A> and sets the data context of the stack panel to all subelements of the element named `SolarSystemPlanets` in the XML namespace `http://planetsNS`.
[!code-csharp[XLinqExample#LoadDCFromFile](~/samples/snippets/csharp/VS_Snippets_Wpf/XLinqExample/CSharp/Window1.xaml.cs#loaddcfromfile)]
[!code-vb[XLinqExample#LoadDCFromFile](~/samples/snippets/visualbasic/VS_Snippets_Wpf/XLinqExample/visualbasic/window1.xaml.vb#loaddcfromfile)]
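
The snippet includes above pull code from the sample project; as a rough, self-contained sketch of the same call (with an illustrative panel name, not necessarily the sample's):

```csharp
// Minimal sketch, not the sample's exact code; planetsPanel is a hypothetical StackPanel.
XDocument planetsDoc = XDocument.Load("Planets.xml");
// XName accepts expanded names of the form {namespace}localName.
planetsPanel.DataContext =
    planetsDoc.Element("{http://planetsNS}SolarSystemPlanets").Elements();
```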
XML data can be stored as a XAML resource by using <xref:System.Windows.Data.ObjectDataProvider>. For a complete example, see [L2DBForm.xaml source code](l2dbform-xaml-source-code.md). The following example shows how code can set the data context to an object resource.
[!code-csharp[XLinqExample#LoadDCFromXAML](~/samples/snippets/csharp/VS_Snippets_Wpf/XLinqExample/CSharp/Window1.xaml.cs#loaddcfromxaml)]
[!code-vb[XLinqExample#LoadDCFromXAML](~/samples/snippets/visualbasic/VS_Snippets_Wpf/XLinqExample/visualbasic/window1.xaml.vb#loaddcfromxaml)]
The dynamic properties that map to <xref:System.Xml.Linq.XContainer.Element%2A> and <xref:System.Xml.Linq.XElement.Attribute%2A> provide flexibility within XAML. Code can also bind to the results of a LINQ to XML query. This example binds to query results ordered by an element value.
[!code-csharp[XLinqExample#BindToResults](~/samples/snippets/csharp/VS_Snippets_Wpf/XLinqExample/CSharp/Window1.xaml.cs#bindtoresults)]
[!code-vb[XLinqExample#BindToResults](~/samples/snippets/visualbasic/VS_Snippets_Wpf/XLinqExample/visualbasic/window1.xaml.vb#bindtoresults)]
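
Again as a hedged sketch (illustrative names, with an assumed `Name` element as the sort key):

```csharp
// Minimal sketch: bind to LINQ to XML query results ordered by an element value.
// Requires using System.Linq; and using System.Xml.Linq;
XNamespace ns = "http://planetsNS";
planetsPanel.DataContext =
    from planet in planetsDoc.Element(ns + "SolarSystemPlanets").Elements()
    orderby (string)planet.Element(ns + "Name") // assumed element; adjust to the data
    select planet;
```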
## <a name="see-also"></a>See also
- [Binding sources overview](binding-sources-overview.md)
- [WPF data binding with LINQ to XML overview](wpf-data-binding-with-linq-to-xml-overview.md)
- [LINQ to XML data binding sample](linq-to-xml-data-binding-sample.md)
- [LINQ to XML dynamic properties](linq-to-xml-dynamic-properties.md)
| 82.64 | 910 | 0.810987 | por_Latn | 0.876077 |
43ae766c8dfb98b69203fca582a125d445d98ab0 | 4,857 | md | Markdown | fabric-samples/16567-23521/17997.md | hyperledger-gerrit-archive/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 2 | 2021-11-08T08:06:48.000Z | 2021-12-03T01:51:44.000Z | fabric-samples/16567-23521/17997.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | null | null | null | fabric-samples/16567-23521/17997.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 4 | 2019-12-07T05:54:26.000Z | 2020-06-04T02:29:43.000Z | <strong>Project</strong>: fabric-samples<br><strong>Branch</strong>: master<br><strong>ID</strong>: 17997<br><strong>Subject</strong>: [FAB-8327] Change eyfn.sh to use configtxlator cli<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Jason Yellick - jyellick@us.ibm.com<br><strong>Assignee</strong>:<br><strong>Created</strong>: 2/16/2018, 5:13:51 PM<br><strong>LastUpdated</strong>: 2/23/2018, 12:27:20 PM<br><strong>CommitMessage</strong>:<br><pre>[FAB-8327] Change eyfn.sh to use configtxlator cli
V1.1 introduces a new CLI for configtxlator which eliminates the need to
run it as a REST service. Since this makes the example simpler, this CR
changes those REST calls to be direct CLI invocations.
Change-Id: I005068d1ca27946b9b6d4d1a2a1056268e366d61
Signed-off-by: Jason Yellick <jyellick@us.ibm.com>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Jason Yellick - jyellick@us.ibm.com<br><strong>Reviewed</strong>: 2/16/2018, 5:13:51 PM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 2/16/2018, 5:17:30 PM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-byfn-verify-x86_64/213/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 2/16/2018, 5:37:59 PM<br><strong>Message</strong>: <pre>Patch Set 1: Verified-1
Build Failed
https://jenkins.hyperledger.org/job/fabric-byfn-verify-x86_64/213/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-byfn-verify-x86_64/213/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-byfn-verify-x86_64/213</pre><strong>Reviewer</strong>: Jason Yellick - jyellick@us.ibm.com<br><strong>Reviewed</strong>: 2/19/2018, 10:24:47 AM<br><strong>Message</strong>: <pre>Patch Set 1:
reverify-x</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 2/19/2018, 10:29:18 AM<br><strong>Message</strong>: <pre>Patch Set 1: -Verified
Build Started https://jenkins.hyperledger.org/job/fabric-byfn-verify-x86_64/214/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 2/19/2018, 10:46:37 AM<br><strong>Message</strong>: <pre>Patch Set 1: Verified+1
Build Successful
https://jenkins.hyperledger.org/job/fabric-byfn-verify-x86_64/214/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-byfn-verify-x86_64/214</pre><strong>Reviewer</strong>: Manish Sethi - manish.sethi@gmail.com<br><strong>Reviewed</strong>: 2/23/2018, 11:49:28 AM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review+2</pre><strong>Reviewer</strong>: David Enyeart - enyeart@us.ibm.com<br><strong>Reviewed</strong>: 2/23/2018, 12:10:14 PM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review+2</pre><strong>Reviewer</strong>: David Enyeart - enyeart@us.ibm.com<br><strong>Reviewed</strong>: 2/23/2018, 12:10:24 PM<br><strong>Message</strong>: <pre>Change has been successfully merged by David Enyeart</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 2/23/2018, 12:27:20 PM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Successful
https://jenkins.hyperledger.org/job/fabric-byfn-merge-x86_64/67/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-byfn-merge-x86_64/67</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Jason Yellick - jyellick@us.ibm.com<br><strong>Uploader</strong>: Jason Yellick - jyellick@us.ibm.com<br><strong>Created</strong>: 2/16/2018, 5:13:51 PM<br><strong>GitHubMergedRevision</strong>: [4ab098f5b4f0572466e5d365442a90a2496d64eb](https://github.com/hyperledger-gerrit-archive/fabric-samples/commit/4ab098f5b4f0572466e5d365442a90a2496d64eb)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 2/19/2018, 10:46:37 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Manish Sethi - manish.sethi@gmail.com<br><strong>Approved</strong>: 2/23/2018, 11:49:28 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: David Enyeart - enyeart@us.ibm.com<br><strong>Approved</strong>: 2/23/2018, 12:10:14 PM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: David Enyeart<br><strong>Merged</strong>: 2/23/2018, 12:10:24 PM<br><br></blockquote> | 138.771429 | 1,275 | 0.759111 | kor_Hang | 0.313958 |
43aede0d85f0338b867d699cbf74a83ba1caff89 | 7,390 | md | Markdown | Building.md | rromanchuk/xptools | deff017fecd406e24f60dfa6aae296a0b30bff56 | [
"X11",
"MIT"
] | 71 | 2015-12-15T19:32:27.000Z | 2022-02-25T04:46:01.000Z | Building.md | rromanchuk/xptools | deff017fecd406e24f60dfa6aae296a0b30bff56 | [
"X11",
"MIT"
] | 19 | 2016-07-09T19:08:15.000Z | 2021-07-29T10:30:20.000Z | Building.md | rromanchuk/xptools | deff017fecd406e24f60dfa6aae296a0b30bff56 | [
"X11",
"MIT"
] | 42 | 2015-12-14T19:13:02.000Z | 2022-03-01T15:15:03.000Z | The X-Plane Scenery Tools are available as source code, as well as binaries. This article describes how to get, compile, and modify the scenery tools code. See also the [Scenery Tools Bug Database](http://developer.x-plane.com/scenery-tools-bug-database/ "Scenery Tools Bug Database").
## Contents
- [Setting Up Your Build Environment](#setting-up-your-build-environment)
- [macOS](#macos)
- [Windows](#windows)
- [Linux](#linux)
- [Getting the Source Code](#getting-the-source-code)
- [Compiling the Program](#compiling-the-program)
- [Building Libraries (Mac, Linux, and MinGW only)](#building-libraries-mac-linux-and-mingw-only)
- [Getting the libraries (windows-only)](#getting-the-libraries-windows-only)
- [Building the Applications from the command line on Linux or macOS](#building-the-applications-from-the-command-line-on-linux-or-macos)
- [Building on Windows Using Visual Studio](#building-on-windows-using-visual-studio)
- [Building on macOS Using XCode](#building-on-macos-using-xcode)
- [Building on Linux Using Code::Blocks](#building-on-linux-using-codeblocks)
## Setting Up Your Build Environment
The X-Plane scenery tools code (XPTools) can be compiled for Mac, Windows, or Linux. Before you can work on the tools, you may need to get/update your development environment.
### macOS
To build on macOS, you’ll need at least macOS 10.11 (El Capitan) and Xcode 8.3 or higher ([free in the Mac App Store](https://apps.apple.com/us/app/xcode/id497799835?mt=12)).
You also need a command-line version of [CMake](http://www.cmake.org/) installed. Beside downloading a binary from the cmake website, it can also be installed via [Homebrew](https://brew.sh): `$ brew install cmake`
### Windows
Building on Windows requires [Visual Studio](https://visualstudio.microsoft.com/vs/features/cplusplus/) 2017 or later (the free Community edition is fine).
In addition to the standard installation of Microsoft Visual Studio Community, you’ll also need some kind of Git client; [Git GUI](http://msysgit.github.io/) is a simple choice, and the command-line syntax listed here will work in the “GIT Bash” shell that comes with it.
Very old versions (WED 1.3 and earlier) were built using MinGW, but that toolchain has not been maintained since.
### Linux
You will need the gcc compiler, version 5.4 or newer, which should be installed by default on pretty much any system. In addition you will need cmake version 3.0+ and developer files for a few libraries installed:
* libc and make tools, package gcc-?-dev (the ? denotes the gcc version you want to use)
* X11 and openGL. When the binary AMD or Nvida video drivers are installed - these all come with a full set of developer bindings. When using MESA drivers, package libglu-mesa and its dependencies will provide all these.
* FTTK toolkit version 1.3, package libfltk1.3-dev
* cURL, package libcurl4-openssl-dev
When compiling WED 2.2 and earlier, or XPTools version 15-3 and earlier, the Qt4 toolkit (package Qt4-dev) is required instead of the FLTK toolkit.
It is also highly recommended to install the Code::Blocks IDE, version 13 or higher, for which project files are available for most xptools starting with WED 1.7. Pure command-line builds of all tools are fully supported as well.
## Getting the Source Code
The source code now lives on [GitHub](https://github.com/X-Plane/xptools)! You can browse the code online, download it, or clone it using all of the standard GitHub techniques. Clone the complete repo like this:
git clone https://github.com/X-Plane/xptools.git
If you don’t want a complete clone of the code, you can of course use GitHub to just download a ZIP of the most recent code, or download any major release; binary tools releases have matching tags in the repo.
## Compiling the Program
The scenery tools source code depends on a large number of third party libraries; to make cross-platform development easier, they live in a Git sub-module (`libs` for Mac, Linux and MinGW, `msvc_libs` for Visual Studio on Windows).
### Building Libraries (Mac, Linux and MinGW only)
(This step is not necessary on Windows using MSVC)
The first time you compile, you need to download and build the libraries. These libraries are updated infrequently. From your repository you can do this:
git submodule init
git submodule update libs
cd libs
make -j
The libraries can take 5-10 minutes to compile!
### Getting the Libraries (Windows only)
(This step is not necessary on macOS or Linux)
Compiling the required libraries requires a number of manual steps - so a precompiled set of libraries along with the patched source code is provided in the msvc_libs subdirectory. To get this from the repository do this:
git submodule init
git submodule update msvc_libs
Note that WED versions 1.X and xptools before version 19-4 are using 32bit tools and MSVC 2010, while WED 2.x and xptools 19-4 and later are 64bit binaries and all libraries are created for Win10 / MSVC 2017 toolchains, only. So the `submodule update` step needs to be repeated anytime a different branch with changes to the submodule pointer is checked out.
### Building the Applications from the command line on Linux or macOS
Go to the Scenery Tools root directory (same dir as where these instructions can be found) and just do a
make -j
This will build the tools using default options for debugging. After a while, the output can be found under
[xptools dir]/build/[platform]/[configuration]
The platform is determined automatically (when building on Linux it is Linux of course). The configuration defaults to `debug_opt`. You can specify the configuration when building the tools this way:
make conf=[configuration]
where `[configuration]` can be one of the following:
* `release`
* `release_opt`
* `debug`
* `debug_opt`
The `release` configurations are built with maximum optimizations (`-Ofast -flto`), `debug` with no optimization at all (`-O0`); when no configuration is specified, optimizations suitable for most debugging tasks (platform dependent) are used.
The `release` configurations are built with `-DDEV=0` set, while the `debug` and default variants have `-DDEV=1`.
To clean the tree you can do:
* `make clean`, this just deletes the `build` directory
* `make distclean`, this deletes the `build` directory and the built 3rd-party libraries located in `libs/local`
You can also build a single tool or a set of tools like this:
conf=release_opt make [tool_1] [tool_2] [...tool_n]
Available tools are:
* `ac3d`
* `DDSTool`
* `DSFTool`
* `MeshTool`
* `ObjView`
* `RenderFarm`
* `RenderFarmUI`
* `WED`
* `XGrinder`
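
For example, `conf=release_opt make WED DSFTool` builds only WorldEditor and DSFTool with full optimizations.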
### Building on Windows Using Visual Studio
The MSVC solution file (`.sln`) can be found in `msvc/XPTools.sln`, and it contains projects that build WorldEditor and the reset of the tools.
### Building on macOS Using XCode
The XCode project is in the root of the repo, `SceneryTools.xcodeproj`. There is one target for each of the scenery tools—simply pick a configuration, target, and build.
### Building on Linux Using Code::Blocks
The project files (`.cbp`) for most xptools can be found in the `codeblocks` directory. The IDE is set up to build using the regular command line makefiles and not its internal build tools - so the results are guaranteed identical to command line builds.
| 50.616438 | 358 | 0.759946 | eng_Latn | 0.995791 |
43af0aff8ac9ef3be65c331a103da70fa2fd76c3 | 118 | md | Markdown | README.md | NexusCodersGroup/Chocolate | 432d10b0f214ae2ed8e51aa6b4d0817b2fe79bd8 | [
"CC0-1.0"
] | null | null | null | README.md | NexusCodersGroup/Chocolate | 432d10b0f214ae2ed8e51aa6b4d0817b2fe79bd8 | [
"CC0-1.0"
] | null | null | null | README.md | NexusCodersGroup/Chocolate | 432d10b0f214ae2ed8e51aa6b4d0817b2fe79bd8 | [
"CC0-1.0"
] | null | null | null | # Chocolate
Chocolate is the server for Toffee, and is based off of SmokeSignal V6
It may be switched to Switchboard
| 23.6 | 70 | 0.79661 | eng_Latn | 1.000002 |
43af1c5a782fd5fd4f54cf13ef408e1afc631995 | 2,232 | md | Markdown | docs/extensibility/debugger/reference/program-node-array.md | tommorris/visualstudio-docs.tr-tr | 11f1d23025c44a834e451a92828b7078fdc68a7c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/program-node-array.md | tommorris/visualstudio-docs.tr-tr | 11f1d23025c44a834e451a92828b7078fdc68a7c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/program-node-array.md | tommorris/visualstudio-docs.tr-tr | 11f1d23025c44a834e451a92828b7078fdc68a7c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: PROGRAM_NODE_ARRAY | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- vs-ide-sdk
ms.topic: conceptual
f1_keywords:
- PROGRAM_NODE_ARRAY
helpviewer_keywords:
- PROGRAM_NODE_ARRAY structure
ms.assetid: 8eeea600-eda5-4b7c-868a-0b86d177b0a5
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: 079c6dc3ef36c19867ed4b292040876f630e63df
ms.sourcegitcommit: 6a9d5bd75e50947659fd6c837111a6a547884e2a
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/16/2018
ms.locfileid: "31125599"
---
# <a name="programnodearray"></a>PROGRAM_NODE_ARRAY
Contains an array of objects that describe the programs of interest.
## <a name="syntax"></a>Syntax
```cpp
typedef struct tagPROGRAM_NODE_ARRAY {
DWORD dwCount;
IDebugProgramNode2** Members;
} PROGRAM_NODE_ARRAY;
```
```csharp
public struct tagPROGRAM_NODE_ARRAY {
public uint dwCount;
public IDebugProgramNode2[] Members;
}
```
## <a name="members"></a>Members
dwCount
The number of objects in the `Members` array.
Members
An array of [IDebugProgramNode2](../../../extensibility/debugger/reference/idebugprogramnode2.md) objects describing the requested programs.
## <a name="remarks"></a>Remarks
This structure is part of the [PROVIDER_PROCESS_DATA](../../../extensibility/debugger/reference/provider-process-data.md) structure, which in turn is filled in by a call to the [GetProviderProcessData](../../../extensibility/debugger/reference/idebugprogramprovider2-getproviderprocessdata.md) method.
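
As a hedged illustration (not part of the original reference), consuming the array from C# might look like the following; the `ProgramNodes` member name is an assumption about how PROVIDER_PROCESS_DATA exposes it:

```csharp
// Illustrative sketch only. Assumes processData is a PROVIDER_PROCESS_DATA value
// returned by IDebugProgramProvider2.GetProviderProcessData, and that it exposes
// the PROGRAM_NODE_ARRAY through a ProgramNodes member (assumption).
tagPROGRAM_NODE_ARRAY nodes = processData.ProgramNodes;
for (uint i = 0; i < nodes.dwCount; i++)
{
    IDebugProgramNode2 node = nodes.Members[i];
    // Query each node, e.g. for the program name or host process ID.
}
```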
## <a name="requirements"></a>Requirements
Header: msdbg.h
Namespace: Microsoft.VisualStudio.Debugger.Interop
Assembly: Microsoft.VisualStudio.Debugger.Interop.dll
## <a name="see-also"></a>See also
[Structures and Unions](../../../extensibility/debugger/reference/structures-and-unions.md)
[PROVIDER_PROCESS_DATA](../../../extensibility/debugger/reference/provider-process-data.md)
[IDebugProgramNode2](../../../extensibility/debugger/reference/idebugprogramnode2.md)
[GetProviderProcessData](../../../extensibility/debugger/reference/idebugprogramprovider2-getproviderprocessdata.md) | 34.338462 | 285 | 0.736111 | yue_Hant | 0.216955 |
43b004f26ba1c1d90f20d8ecf12631b68f3bf2dc | 6,119 | md | Markdown | _posts/2020-07-23-is-technology-accelerating-an-analysis-of-us-patent-records.md | hendrixjoseph/hendrixjoseph.github.io | 8ed2a62d528d97c6134e81fc67d1c472cba16a59 | [
"MIT"
] | null | null | null | _posts/2020-07-23-is-technology-accelerating-an-analysis-of-us-patent-records.md | hendrixjoseph/hendrixjoseph.github.io | 8ed2a62d528d97c6134e81fc67d1c472cba16a59 | [
"MIT"
] | 8 | 2019-04-20T11:41:19.000Z | 2019-08-16T14:23:39.000Z | _posts/2020-07-23-is-technology-accelerating-an-analysis-of-us-patent-records.md | hendrixjoseph/hendrixjoseph.github.io | 8ed2a62d528d97c6134e81fc67d1c472cba16a59 | [
"MIT"
] | 3 | 2019-07-02T15:13:32.000Z | 2021-07-06T15:39:33.000Z | ---
layout: post
title: Is Technology Accelerating? An Analysis of US Patent Records
tags: [technology, programming]
keywords: [us patents, us patent, patents, patent]
image: /images/patents/cover.png
---
Technology keeps progressing faster and faster. It's accelerating. Or so I'm told. But is it really? How can we tell?
It would be nice if there was some empirical way to see if there have been more technological advancements recently than, say, 100 years ago.
After thinking about this idea for a while, I realized there was a way - patents. I could make the assumption that more patents mean more inventions, which would correlate to faster technological advancement.

## My Source of Data
For this, I only looked at US patents since 1836. June 13, 1836, to be exact.
That's when [US Patent #1](https://patents.google.com/patent/US1) was issued.
But wait, didn't the US come to be in 1776? The current [US Government](https://www.archives.gov/founding-docs/constitution-transcript) and the current Federal government came to be in 1789. Shouldn't the first US patent be issued sometime in either 1776 or 1789?
There were patents issued before 1836, however, there was [a fire at the U.S. Patent Office](https://en.wikipedia.org/wiki/1836_U.S._Patent_Office_fire) destroying much of these patent records.
So I looked at (almost) every patent record from [patent 1 in 1836](https://patents.google.com/patent/US1) to [patent 10,709,051 in 2020](https://patents.google.com/patent/US10709051).
Oh, and if you can't tell by the links, I got my data from [Google Patents](https://patents.google.com/).
## Almost Every Patent?
10,709,051 is a lot of patents to go through. Each patent took roughly half a second to process.
That means, if I wanted to process every patent, it would take almost 62 days to process:
( 0.5 seconds × 10,709,051 patents ) × ( 1 day / (60 × 60 × 24) seconds ) = 61.97367 days
Instead, I only wanted to process a subset of patents. I decided to skip every 97,355 patents. That is, I would first look at patent 1, then I would look at patent 97,356 (i.e. 1 + 97,355), then I would look at patent 194,711 (i.e. 97,356 + 97,355), and so on until I reached the final patent, patent 10,709,051.
Here's my reasoning.
I wanted to average at least one patent a day. Ignoring leap days, there are 67,160 days from 1836 to 2020.
(2020 - 1836) * 365 = 67,160
Dividing 10,709,051 by 67,160 gets me somewhere around 159. Not exactly, but close.
So why did I choose 97,355 instead of 67,160?
If I start at 1 and keep adding 67,160, I'll never reach the final patent. Instead, I'll only get to [patent 10,678,441](https://patents.google.com/patent/US10678441), issued in 2016.
What I want to do is add numbers such that I'll reach 10,709,051. In other words, I need to add a divisor of one less than 10,709,051.
A divisor of 10,709,050.
Fortunately, I don't have to figure out what those numbers are - [WolframAlpha figured out the divisors of 10,709,050](https://www.wolframalpha.com/input/?i=divisors+of+10709050) for me:

The two closest numbers to 67,160 are 38,942 and 97,355. 97,355 just felt "better" to me.
## The Code
<script src="https://gist.github.com/hendrixjoseph/29e4b9b9b61d3a4ba4bd7a80aa111764.js"></script>
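
For reference, the sampling loop in that gist boils down to something like the following sketch (Python shown for illustration; the gist contains the actual code and names):

```python
# Illustrative sketch only -- the embedded gist holds the real implementation.
STEP = 97_355
LAST = 10_709_051

with open("patents.csv", "w") as out:
    out.write("count,year,month,day\n")
    for number in range(1, LAST + 1, STEP):
        url = f"https://patents.google.com/patent/US{number}"
        # Fetch the page, parse the issue date from it, then write:
        # out.write(f"{number},{year},{month},{day}\n")
```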
## The Excel Spreadsheet
You may have noticed in the above code that I saved the results in a CSV format with the columns *count*, *year*, *month*, and *day*.
*count* is just the patent number.
I saved *year*, *month*, and *day*, month, and day into separate columns instead of a single *date* columns because [Excel cannot parse dates earlier than 1900](http://www.exceluser.com/formulas/earlydates.htm).
Instead, I approximated the time of year in decimal format with the following equation:
date = year + (month - 1) / 12 + day / 30
For the plots, I used scatter plots. For the patents per year, I used pivot tables.
Oh, and [here's a link to the Excel sheet](/xlxs/patents.xlsx).
## Results

*Patent Number by Year*

*Patents per Year*
There is an issue with *patents per year*. Since I'm skipping every 97,355 patents, I skip many years, especially early on, and even more so with the first 33 years of patents - my second patent record is [patent 97,356 from 1869](https://patents.google.com/patent/US97356).
Also, there is a potential that the only patent record that I grab for a given year is from early in the year, skewing the patents for that year low. Indeed, the only patents I grabbed for 1884, 1905, 1908, 1916, 1928, 1948, 1960, 1962, and 1980 were from the month of January. That's *nine* years that will definitely skew low.
Keep in mind that I grabbed only 82 different years. This means 9/82 or almost 11% of the years I grabbed only had January data.
## Further Work
There are at least two ideas that could warrant further research.
The first is to improve my "patents per year" chart by using the last patent of each year - the actual final last patent, not the last patent I scraped.
Since the patent records don't seem to be queryable by date, this may require some binary-search-like algorithm to find the final patent of each year.
The second is to add other, potentially older, patent records to the dataset. At the time of this writing, Google Patents currently has "[over 120 million patent publications from 100+ patent offices around the world](https://support.google.com/faqs/answer/7049585)".
Adding non-US patent records could pose a couple of problems. First, there may be duplicate patents - the same invention, by the same inventor, may be patented in more than one jurisdiction. Second, the addition of overlapping patent records may result in anomalous "upticks" in patent numbers. Normalization would be required to level out said upticks. | 57.186916 | 353 | 0.756333 | eng_Latn | 0.994016 |
43b019d117cf4c27c7679776d675c51a984e8bb9 | 4,594 | md | Markdown | content/blog/HEALTH/c/1/796cfb3ca5a747b25ebfe5b24d3b9c1c.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | 1 | 2022-03-03T17:52:27.000Z | 2022-03-03T17:52:27.000Z | content/blog/HEALTH/c/1/796cfb3ca5a747b25ebfe5b24d3b9c1c.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | content/blog/HEALTH/c/1/796cfb3ca5a747b25ebfe5b24d3b9c1c.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | ---
title: 796cfb3ca5a747b25ebfe5b24d3b9c1c
mitle: "Here's Why Journalism Ethics and Objectivity Are Still Important"
image: "https://fthmb.tqn.com/nU8BPZCeNkqbIXGG3AiC_7N9mdE=/3000x2158/filters:fill(auto,1)/news-of-the-world-58b8e9053df78c353c264272.jpg"
description: ""
---
Recently i journalism student come six University rd Maryland interviewed up let's journalism ethics. He asked probing non insightful questions zero each oh selves thing often did subject, of I've decided he post out queries com un answers here.<h3>What Is why Importance oh Ethics rd Journalism?</h3>Because mr a's First Amendment so few U.S. Constitution, end press to it's country ie her regulated go far government. But back thing journalistic ethics try low many important, use non obvious reason amid keep great power could great responsibility. One thus than away no cases who's journalistic ethics dare best breached — was example, fabulists both Stephen Glass it all 2011 phone-hacking scandal be Britain — et are ago implications be unethical news practices. News outlets name regulate themselves, edu like it maintain it'll credibility same old public, ask most because sent run edu risk qv may government attempting so hi so.<h3>What Are edu Biggest Ethical Dilemmas Surrounding Objectivity?</h3>There's might q lot as discussion sorry whether journalists nearly th objective oh okay com truth, am et those also contradictory goals. When eg first rd discussions self these, h distinction make un when between issues vs where r quantifiable kind rd truth ask as let's any issues to value every had gray areas.For instance, q reporter value up v story surveying statistics thing let death penalty co order if discover whether is acts no w deterrent. If off statistics show dramatically shall homicide rates ie states keep yes death penalty, like will gives more up indicate miss hi ie neverf my effective deterrent if vice versa.On yes looks hand, ex too death penalty just? That's u philosophical issue nearly mine debated mrs decades, ltd ltd questions he raises three beside at answered be objective journalism. For e journalist, finding end truth do lately own ultimate goal, i'd down own ex elusive.<h3>Has etc Concept in Objectivity Changed Since her Start on Your Career qv Journalism?</h3>In selves years way idea ok objectivity now were derided am e fixture an she so-called legacy media. Many ok new digital pundits argue soon true objectivity ok impossible, she gone therefore journalists hadn't qv open tries tried beliefs all biases co. x now do minus plus transparent done would readers. I disagree only thus view, has sent certainly sub they ask appear influential, especially well newer online news outlets.<h3>As a Whole, Do You Think Journalists Still Prioritize Objectivity? What Are Journalists Doing Right mrs Wrong Today, rd Regards ex Objectivity?</h3>I eight objectivity it one's valued so keep news outlets, particularly a's can so-called hard news sections vs newspapers am websites. People forget what this or i daily newspaper consists if opinion, be editorials, arts sup entertainment reviews two she sports section. But I wants both editors why publishers, own readers a's just matter, hence allow ending on impartial voice mine us which my hard news coverage. I quite help h mistake in blur etc lines between objective reporting adj opinion, why better certainly happening, onto notably qv and cable news networks. <h3>What Is was Future he Objectivity or Journalism? Do You Think t's Anti-Objectivity Argument Will Ever Win Out?</h3>I we've c's idea up impartial reporting from continue we used value. Certainly, one anti-objectivity proponents name he's inroads, can I we'll we're objective news coverage me tends be disappear anytime soon. citecite same article FormatmlaapachicagoYour CitationRogers, Tony. 
"Why Journalism Ethics why Objectivity Matter." ThoughtCo, Mar. 4, 2017, thoughtco.com/yes-journalism-ethics-and-objective-news-coverage-2073747.Rogers, Tony. (2017, March 4). Why Journalism Ethics any Objectivity Matter. Retrieved been https://www.thoughtco.com/yes-journalism-ethics-and-objective-news-coverage-2073747Rogers, Tony. "Why Journalism Ethics sub Objectivity Matter." ThoughtCo. https://www.thoughtco.com/yes-journalism-ethics-and-objective-news-coverage-2073747 (accessed March 12, 2018). copy citation<script src="//arpecop.herokuapp.com/hugohealth.js"></script> | 574.25 | 4,316 | 0.780583 | eng_Latn | 0.978688 |
43b09c289e96f2deca7f4ed45fcdff24773d6538 | 8,903 | md | Markdown | docs/D3js.md | syon/wiki | 43176e71d2bccc4ade56edbb98ac3902b56d7ed2 | [
"MIT"
] | 16 | 2016-02-09T12:13:13.000Z | 2021-09-12T06:10:45.000Z | docs/D3js.md | syon/wiki | 43176e71d2bccc4ade56edbb98ac3902b56d7ed2 | [
"MIT"
] | 5 | 2015-03-02T08:31:44.000Z | 2019-10-04T14:06:23.000Z | docs/D3js.md | syon/wiki | 43176e71d2bccc4ade56edbb98ac3902b56d7ed2 | [
"MIT"
] | 5 | 2015-09-17T17:40:21.000Z | 2020-07-18T22:18:55.000Z | # D3.js
[D3\.js \- Data\-Driven Documents](https://d3js.org/)

## Overview
- [ニューヨークタイムズも注目!「データ×デザイン」を実現するJavascriptライブラリ「d3.js」](http://blog.btrax.com/jp/2013/01/17/data-design-d3/)
- [データを分かりやすくスタイリッシュに可視化できるJavascriptライブラリ「D3.js」 - GIGAZINE](http://gigazine.net/news/20130121-data-design-d3js/)
## D3.js Wrapper Library
[Plotly](https://plot.ly/)
: Plotly is the modern platform for agile business intelligence and data science.
- https://github.com/plotly
[dc\.js](http://dc-js.github.io/dc.js/)
: Dimensional Charting Javascript Library
- https://github.com/dc-js/dc.js
[C3\.js](http://c3js.org/)
: D3-based reusable chart library
- https://github.com/c3js/c3
[NVD3](http://nvd3.org/)
: A reusable charting library written in d3.js
- https://github.com/novus/nvd3
[d3\.compose](http://csnw.github.io/d3.compose/)
: Compose complex, data-driven visualizations from reusable charts and components with d3
- https://github.com/CSNW/d3.compose
[d3\.chart](http://misoproject.com/d3-chart/)
: d3.chart is a framework for building reusable charts with d3.js.
- https://github.com/misoproject/d3.chart
[Taucharts](https://www.taucharts.com/)
: flexible javascript charting library for data exploration
- https://github.com/TargetProcess/tauCharts
[Rickshaw](http://code.shutterstock.com/rickshaw/)
: A JavaScript toolkit for creating interactive time\-series graphs
- https://github.com/shutterstock/rickshaw
[d3fc](https://d3fc.io/images/logo.svg)
: A collection of components that make it easy to build interactive charts with D3.
- https://github.com/ScottLogic/d3fc
[Recharts](http://recharts.org/#/en-US/)
: A composable charting library built on React components
[React\-D3](http://www.reactd3.org/)
: A Javascript Library For Building Composable And Declarative Charts
### Tools
[ColorBrewer](http://colorbrewer2.org/)
: Color Advice for Maps
- [Every ColorBrewer Scale \- bl\.ocks\.org](https://bl.ocks.org/mbostock/5577023)
#### articles
- [D3\.js ver\.3 Wrapper Library \- NAVER まとめ](http://matome.naver.jp/odai/2138966107937611601)
- [Data Analytics and Visualisation](http://data-analytics.github.io/)
## Other Visualization Tools
#### → __[Visualize](/visualize/)__
## Learning D3.js
- [SVG Paths and D3.js | DashingD3js.com](https://www.dashingd3js.com/svg-paths-and-d3js)
- [D3 入門 | スコット・マレイ | alignedleft](http://ja.d3js.info/alignedleft/tutorials/d3/)
> - D3 は引数の中に関数を発見すると、その関数を呼びだすと同時に、現在のデータセットの値をその引数に渡します
> - 重要なことは、データが視覚化を制御しているということです。決してその逆ではありません
> 
- [2015年までに発売されたD3.js参考書をまとめてみた。 | #GUNMAGISGEEK](http://shimz.me/blog/d3-js/4554)
- [jQueryのあれ、D3\.jsでどうやるの?\(または、その逆\) \- Qiita](http://qiita.com/itagakishintaro/items/51c89fe7e14702cb98e6)
## Reference
- [API Reference · mbostock/d3 Wiki](https://github.com/mbostock/d3/wiki/API-Reference)
## SVG
#### → __[SVG (Scalable Vector Graphics)](/SVG/)__
## Reference Links
- [D3.js入門 (全17回) - プログラミングならドットインストール](http://dotinstall.com/lessons/basic_d3js)
- [D3.jsで始めるData-Drivenなページ作成 | Developers.IO](http://dev.classmethod.jp/ria/d3js/)
- [Axes — Scott Murray — alignedleft](http://alignedleft.com/tutorials/d3/axes/)
- 軸ラベルのフォーマット xAxis.tickFormat(formatAsPercentage);
- [d3.js Advent Calendar 2013 - Adventar](http://www.adventar.org/calendars/117)
- [2時間縛りでd3.js挑戦してみた - mizchi's blog](http://mizchi.hatenablog.com/entry/2014/03/02/171849)
- [エンジニアのためのデータ可視化実践入門という本を書いた - あんちべ!](http://antibayesian.hateblo.jp/entry/2014/02/16/235830)
- [D3.js でローソク足チャート描くなら TechanJS がイイ!(かもしんない) - 私と私の猫の他は誰でも隠し事を持っている](http://mariyudu.hatenablog.com/entry/2015/08/30/214046)
#### tech.nitoyon.com
- [D3.js の Data-Driven な DOM 操作がおもしろい - てっく煮ブログ](http://tech.nitoyon.com/ja/blog/2013/10/24/d3js/)
- [D3.js の d3.svg.line() を試してみた - てっく煮ブログ](http://tech.nitoyon.com/ja/blog/2013/10/29/d3js-svg-line/)
- [K-means 法を D3.js でビジュアライズしてみた - てっく煮ブログ](http://tech.nitoyon.com/ja/blog/2013/11/07/k-means/)
- [タッチ操作に対応した画像ビューワーをJavaScriptで作るならD3.jsが便利 - てっく煮ブログ](http://tech.nitoyon.com/ja/blog/2013/12/13/touch-viewer/)
- [D3.js で自作クラスにイベント発行機能を追加する - てっく煮ブログ](http://tech.nitoyon.com/ja/blog/2014/04/02/d3-event-dispatch/)
#### GUNMA GIS GEEK
- [データビジュアライゼーション(D3.js)を学ぶための教材まとめ - NAVER まとめ](http://matome.naver.jp/odai/2135289597995104801)
- [D3.js Wrapper Library - NAVER まとめ](http://matome.naver.jp/odai/2138966107937611601)
- [D3.js プラグインまとめ - NAVER まとめ](http://matome.naver.jp/odai/2138966193538794601)
- [【D3.js】トランジション終了時にコールバックを呼ぶ | #GUNMAGISGEEK](http://shimz.me/blog/d3-js/4100)
### Calendar Heat map
- [Cal-HeatMap : Calendar Heat map with d3.js](http://kamisama.github.io/cal-heatmap/v2/)
- [kamisama/cal-heatmap · GitHub](https://github.com/kamisama/cal-heatmap)
- [【D3.js + node.js】 ブログのデータをGithub風のカレンダーに表示する | GUNMA GIS GEEK](http://shimz.me/blog/node-js/2975)
- [Day / Hour Heatmap](http://bl.ocks.org/tjdecke/5558084)
### Radar Charts
- [Eurozone crisis](http://www.larsko.org/v/euc/)
- [Radar chart](http://bl.ocks.org/nbremer/raw/6506614/)
- [D3.js - Radar Chart or Spider Chart - Adjusted from radar-chart-d3](https://gist.github.com/nbremer/6506614)
- [alangrafu/radar-chart-d3](https://github.com/alangrafu/radar-chart-d3)
## Gallery

- [Gallery · mbostock/d3 Wiki](https://github.com/mbostock/d3/wiki/Gallery)
- [bl.ocks.org - mbostock](http://bl.ocks.org/mbostock)
- [Mike Bostock](http://bost.ocks.org/mike/)
- [Music Timeline](https://music-timeline.appspot.com/)
#### Favorite
- [Les Misérables Co-occurrence](http://bost.ocks.org/mike/miserables/)
- [Radial Gradient](http://bl.ocks.org/mbostock/9377340)
- [Every ColorBrewer Scale](http://bl.ocks.org/mbostock/5577023)
- [SORTING](http://sorting.at/)
- [アルゴリズムとプログラミングをビジュアルで一挙に理解できる「VisuAlgo」 - GIGAZINE](http://gigazine.net/news/20140819-visualgo/)
- [郊外住宅地の見えない空き家](http://www3.nhk.or.jp/news/akiya/)
- [Word Cloud Generator](http://www.jasondavies.com/wordcloud/#%2F%2Fwww.jasondavies.com%2Fwordtree%2Fcat-in-the-hat.txt)
- [earth :: an animated map of global wind, weather, and ocean conditions](http://earth.nullschool.net/)
A real-time map of wind conditions
- [【D3.js】サーマーウォーズのワールドクロックを作る | #GUNMAGISGEEK](http://shimz.me/blog/d3-js/4360)
## Data
- [政府統計の総合窓口\(e\-Stat\)−API機能](http://www.e-stat.go.jp/api/)
Provides an API for retrieving the statistical data published on e-Stat (the Japanese government statistics portal) in machine-readable formats
- [【e-Stat】 政府統計の総合窓口 GL01010101](http://www.e-stat.go.jp/SG1/estat/eStatTopPortal.do)
- [統計 API デモンストレーション - 統計表の取得](http://vps327903.cloud-testbed-vps.jp/tokeidb/)
- [国勢調査など政府統計データをCSV化してダウンロードできる「統計くん」 政府API活用 - ITmedia ニュース](http://www.itmedia.co.jp/news/articles/1306/13/news094.html)
- [無料で利用できるデータベース&レファレンスサービスまとめ](http://yuma-z.com/blog/2013/06/database/)
- [JR東日本:各駅の乗車人員(2012年度)](http://www.jreast.co.jp/passenger/)
- [社会人なら知っておきたい無料の公的統計データ「e-Stat」と「統計メールニュース」 | Web担当者Forum](http://web-tan.forum.impressrd.jp/e/2014/06/24/17731)
- [東京メトロ、列車の在線位置など全線オープンデータ化、車両の所属会社も -INTERNET Watch](http://internet.watch.impress.co.jp/docs/news/20140819_662628.html)
- [鉄道やバスの運行情報をオープンデータ化、鉄道会社などが研究会を発足 -INTERNET Watch](http://internet.watch.impress.co.jp/docs/news/20130819_611700.html)
- [【マーケッター必見!】市場調査や企画書作成に役立つ統計データ20選!](http://keiei.freee.co.jp/2014/08/19/statictics/)
- [駅データ 無料ダウンロード 『駅データ.jp』](http://www.ekidata.jp/) XML/JSON APIあり
- [愛知県の駅の1日の利用者数ベスト200ワースト200](http://alfalfalfa.com/archives/7723985.html)
- [【画像大量】俺が長年貯め込んだグラフ・一覧・比較・図解フォルダが今、火を吹く:キニ速](http://blog.livedoor.jp/kinisoku/archives/4220262.html)
- [世界の労働力人口 国別ランキング・推移 - Global Note](http://www.globalnote.jp/post-7480.html)
- [日本の行政機関が公開中のAPIについてのまとめ(2016年8月17日暫定版) \- Qiita](http://qiita.com/kimuraya/items/3cc6c84bf6eac30851f1)
## Maps
- → __[Map](/Map/)__
- [Geo Projections · mbostock/d3 Wiki](https://github.com/mbostock/d3/wiki/Geo-Projections)
- [D3.js Geo(Geography) チュートリアル - NAVER まとめ](http://matome.naver.jp/odai/2136791241493514301)
- [JavaScript - D3.jsとOpen Data〜その1地図を描画する - Qiita](http://qiita.com/sawamur@github/items/ec32237bcbaaba94108d)
- [D3.jsで地図を作る。](http://kenjispecial.wordpress.com/2013/12/15/d3/)
- [高崎市と前橋市のAED設置施設一覧に緯度経度を付加してみた | GUNMA GIS GEEK](http://shimz.me/blog/other/3406)
- [【D3.js】Google Mapにsvgを使ってマスクをかける | GUNMA GIS GEEK](http://shimz.me/blog/d3-js/3770)
- [ゼンリンの「いつもNAVI-API」を使って地図を表示してみた。 | GUNMA GIS GEEK](http://shimz.me/blog/map/3847)
## 3D
- [D3.js, Three.js and CSS 3D Transforms — delimited](http://www.delimited.io/blog/2014/3/14/d3js-threejs-and-css-3d-transforms)
## TIPS
### File Output
- [D3.jsで作成したグラフ(SVG)を画像として保存する - Tech-Sketch](http://tech-sketch.jp/2013/10/d3js-svg-convert-to-png.html)
Converts an SVG chart to Canvas and saves it as a PNG
- [Export d3js/SVG as SVG/PDF](http://d3export.housegordon.org/)
Download D3.js output as SVG, PDF, or PNG
- [SVG を PNG に変換するやつ (Ruby-GNOME2/RSVG on Sinatra) - X X X](http://syonx.hatenablog.com/entry/2014/07/26/191359)
### Excel
- [Excel上でD3.jsを使ったグラフを表示する「E2D3」を使ってオリジナルなグラフを表示してみた。 | GUNMA GIS GEEK](http://shimz.me/blog/d3-js/3820)
| 45.65641 | 128 | 0.738403 | yue_Hant | 0.668887 |
43b114b9c1916646ab47dcfeb58899b896b81306 | 2,434 | md | Markdown | docs-archive-a/2014/master-data-services/add-a-group-master-data-services.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/master-data-services/add-a-group-master-data-services.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-10-11T06:39:57.000Z | 2021-11-25T02:25:30.000Z | docs-archive-a/2014/master-data-services/add-a-group-master-data-services.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-09-29T08:51:33.000Z | 2021-10-13T09:18:07.000Z | ---
title: Add a group (Master Data Services) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: master-data-services
ms.topic: conceptual
helpviewer_keywords:
- groups [Master Data Services], adding
- adding groups [Master Data Services]
ms.assetid: c7a88381-3b2c-4af7-9cf7-3a930c1abdee
author: lrtoyou1223
ms.author: lle
ms.openlocfilehash: 8f9da1d558ccb648af8fbc0dd3b802751bd5ae44
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87705232"
---
# <a name="add-a-group-master-data-services"></a>Add a group (Master Data Services)
Add a group to the **Groups** list in [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)] to begin the process of assigning permission to access the web application. For a user in the group to be able to access [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)], you must grant the group permission to one or more model objects and functional areas.
## <a name="prerequisites"></a>Prerequisites
To perform this procedure:
- You must have permission to access the **User and Group Permissions** functional area.
### <a name="to-add-a-group"></a>To add a group
1. In [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)], click **User and Group Permissions**.
2. On the **Users** page, on the menu bar, click **Manage Groups**.
3. Click **Add groups**.
4. Type the name of the group, preceded by the Active Directory domain name or the server name, as in *domain\group_name* or *computer\group_name*.
5. Optionally, click **Check Names**.
6. Click **OK**.
> [!NOTE]
> When a user accesses [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)] for the first time, the user's name is added to the list of [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)] users.
## <a name="next-steps"></a>Next steps
- [Assign functional area permissions (Master Data Services)](assign-functional-area-permissions-master-data-services.md)
## <a name="see-also"></a>See also
[Security (Master Data Services)](../../2014/master-data-services/security-master-data-services.md)
| 44.254545 | 403 | 0.728431 | fra_Latn | 0.877618 |
43b1f8ea5cffa9bf1e9a36e28460b3b1a4501c7d | 8,020 | md | Markdown | docs/getting-started.md | sarvex/argo-rollouts | d3c305c4ff46a1ae32df9cbcce2f389b668aa4d2 | [
"Apache-2.0"
] | 1,446 | 2018-11-17T18:32:33.000Z | 2022-03-31T03:06:36.000Z | docs/getting-started.md | sarvex/argo-rollouts | d3c305c4ff46a1ae32df9cbcce2f389b668aa4d2 | [
"Apache-2.0"
] | 1,720 | 2019-01-07T18:22:41.000Z | 2022-03-31T20:01:21.000Z | docs/getting-started.md | sarvex/argo-rollouts | d3c305c4ff46a1ae32df9cbcce2f389b668aa4d2 | [
"Apache-2.0"
] | 402 | 2018-11-17T23:49:53.000Z | 2022-03-24T20:46:13.000Z | # Getting Started
This guide will demonstrate various concepts and features of Argo Rollouts by going through
deployment, upgrade, promotion, and abortion of a Rollout.
## Requirements
- Kubernetes cluster with argo-rollouts controller installed (see [install guide](installation.md#controller-installation))
- kubectl with argo-rollouts plugin installed (see [install guide](installation.md#kubectl-plugin-installation))
## 1. Deploying a Rollout
First we deploy a Rollout resource and a Kubernetes Service targeting that Rollout. The example
Rollout in this guide utilizes a canary update strategy which sends 20% of traffic to the canary,
followed by a manual promotion, and finally gradual automated traffic increases for the remainder
of the upgrade. This behavior is described in the following portion of the Rollout spec:
```yaml
spec:
replicas: 5
strategy:
canary:
steps:
- setWeight: 20
- pause: {}
- setWeight: 40
- pause: {duration: 10}
- setWeight: 60
- pause: {duration: 10}
- setWeight: 80
- pause: {duration: 10}
```
Run the following command to deploy the initial Rollout and Service:
```shell
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/rollout.yaml
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/service.yaml
```
The initial creation of any Rollout immediately scales up the replicas to 100% (skipping any
canary upgrade steps, analysis, etc...) since no upgrade has occurred yet.
The Argo Rollouts kubectl plugin allows you to visualize the Rollout, its related resources
(ReplicaSets, Pods, AnalysisRuns), and presents live state changes as they occur.
To watch the rollout as it deploys, run the `get rollout --watch` command from plugin:
```shell
kubectl argo rollouts get rollout rollouts-demo --watch
```

## 2. Updating a Rollout
Next it is time to perform an update. Just as with Deployments, any change to the Pod template
field (`spec.template`) results in a new version (i.e. ReplicaSet) to be deployed. Updating a
Rollout involves modifying the rollout spec, typically changing the container image field with
a new version, and then running `kubectl apply` against the new manifest. As a convenience, the
rollouts plugin provides a `set image` command, which performs these steps against the live rollout
object in-place. Run the following command to update the `rollouts-demo` Rollout with the "yellow"
version of the container:
```shell
kubectl argo rollouts set image rollouts-demo \
rollouts-demo=argoproj/rollouts-demo:yellow
```
During a rollout update, the controller will progress through the steps defined in the Rollout's
update strategy. The example rollout sets a 20% traffic weight to the canary, and pauses the rollout
indefinitely until user action is taken to unpause/promote the rollout. After updating the image,
watch the rollout again until it reaches the paused state:
```shell
kubectl argo rollouts get rollout rollouts-demo --watch
```

When the demo rollout reaches the second step, we can see from the plugin that the Rollout is in
a paused state, and now has 1 of 5 replicas running the new version of the pod template, and 4 of 5
replicas running the old version. This equates to the 20% canary weight as defined by the
`setWeight: 20` step.
## 3. Promoting a Rollout
The rollout is now in a paused state. When a Rollout reaches a `pause` step with no duration, it
will remain in a paused state indefinitely until it is resumed/promoted. To manually promote a
rollout to the next step, run the `promote` command of the plugin:
```shell
kubectl argo rollouts promote rollouts-demo
```
After promotion, the Rollout will proceed to execute the remaining steps. The remaining rollout steps
in our example are fully automated, so the Rollout will eventually complete all steps until it has
fully transitioned to the new version. Watch the rollout again until it has completed all steps:
```shell
kubectl argo rollouts get rollout rollouts-demo --watch
```

!!! tip
The `promote` command also supports the ability to skip all remaining steps and analysis with the
`--full` flag.
Once all steps complete successfully, the new ReplicaSet is marked as the "stable" ReplicaSet.
Whenever a rollout is aborted during an update, either automatically via a failed canary analysis,
or manually by a user, the Rollout will fall back to the "stable" version.
## 4. Aborting a Rollout
Next we will learn how to manually abort a rollout during an update. First, deploy a new "red"
version of the container using the `set image` command, and wait for the rollout to reach the
paused step again:
```shell
kubectl argo rollouts set image rollouts-demo \
rollouts-demo=argoproj/rollouts-demo:red
```

This time, instead of promoting the rollout to the next step, we will abort the update, so that it
falls back to the "stable" version. The plugin provides an `abort` command as a way to manually
abort a rollout at any time during an update:
```shell
kubectl argo rollouts abort rollouts-demo
```
When a rollout is aborted, it will scale up the "stable" version of the ReplicaSet (in this
case the yellow image), and scale down any other versions. Although the stable version of the
ReplicaSet may be running and is healthy, the overall rollout is still considered `Degraded`,
since the desired version (the red image) is not the version which is actually running.

In order to make Rollout considered Healthy again and not Degraded, it is necessary to change the
desired state back to the previous, stable version. This typically involves running `kubectl apply`
against the previous Rollout spec. In our case, we can simply re-run the `set image` command using
the previous, "yellow" image.
```shell
kubectl argo rollouts set image rollouts-demo \
rollouts-demo=argoproj/rollouts-demo:yellow
```
After running this command, you should notice that the Rollout immediately becomes Healthy, and
there is no activity with regards to new ReplicaSets becoming created.

When a Rollout has not yet reached its desired state (e.g. it was aborted, or is in the middle of
an update), and the stable manifest is re-applied, the Rollout detects this as a rollback
and *not* an update, and will fast-track the deployment of the stable ReplicaSet by skipping
analysis and the steps.
## Summary
In this guide, we have learned basic capabilities of Argo Rollouts, including:
* Deploying a rollout
* Performing a canary update
* Manual promotion
* Manual abortion
The Rollout in this basic example did not utilize a ingress controller or service mesh provider
to route traffic. Instead, it used normal Kubernetes Service networking (i.e. kube-proxy) to achieve
an *approximate* canary weight, based on the closest ratio of new to old replica counts.
As a result, this Rollout had a limitation in that it could only achieve a minimum canary
weight of 20%, by scaling 1 of 5 pods to run the new version. In order to achieve much
finer grained canaries, an ingress controller or service mesh is necessary.
Follow one of the traffic routing guides to see how Argo Rollouts can leverage a networking
provider to achieve more advanced traffic shaping.
* [ALB Guide](getting-started/alb/index.md)
* [Ambassador Guide](getting-started/ambassador/index.md)
* [Istio Guide](getting-started/istio/index.md)
* [Multiple Providers Guide](getting-started/mixed/index.md)
* [NGINX Guide](getting-started/nginx/index.md)
* [SMI Guide](getting-started/smi/index.md)
| 43.586957 | 123 | 0.777431 | eng_Latn | 0.996241 |
43b2be74986df0abe6c114f2ac816a2f317f0a2c | 4,708 | md | Markdown | changelog.md | shavn1111/test | b4f22fec63b8117ed969c30bfc27687eee9c36de | [
"MIT"
] | null | null | null | changelog.md | shavn1111/test | b4f22fec63b8117ed969c30bfc27687eee9c36de | [
"MIT"
] | 3 | 2020-07-21T11:44:51.000Z | 2021-08-04T23:40:01.000Z | changelog.md | shavn1111/test | b4f22fec63b8117ed969c30bfc27687eee9c36de | [
"MIT"
] | null | null | null | # v1.4.4
### New Features
* Bumped openbci-ganglion to 1.1.9 for accel patches
# v1.4.3
### New Features
* Bumped Electron to 2.0.2
# v1.4.2
### New Features
* Working with static ip
# v1.4.1
### Bug Fixes
* Ganglion with BLED112 was not working to discover devices, bumped to v1.1.7
# v1.4.0
### New Features
* Add BLED112 support via OpenBCI Ganglion 1.1.5
### Chores
* Bumped cyton to 1.1.1 for new serialport
* Bumped electron to 1.8.2
# v1.3.9
### Bug Fixes
* UDP Burst did not work because was sending to `/udpBurst` instead of `/udp`
* App would not close on uncaught exceptions, now, an error box is shown with the error, then when the user hit's ok, the whole app quits.
# v1.3.8
### Bug Fixes
* Found out that `openbci-ganglion` was using its own version of noble that was not the macOS High Sierra updated one, so I copied and pasted the correct build into both app/node_modules/noble AND app/node_modules/openbci-ganglion/node_modules/noble. More on this issue can be found at [openbci/openbci_gui/issue/270](https://github.com/OpenBCI/OpenBCI_GUI/issues/270)
# v1.3.7
### Bug Fixes
* WiFi would send success message on 404 errors. Now sends error code 435, or, update your wifi shield firmware to support this feature.
# v1.3.6
### Bug Fixes
* Application in production was not finding custom OpenBCI logo.
* Process command error would send message type error to GUI for all boards. Changes to send command type with error code.
# v1.3.5
### Bug Fixes
* Daisy data did not send aux values
# v1.3.3/4
### Bug Fixes
* Update ganglion node driver to 1.0.0
* Stopped wifi scan in wifi cleanup
* Cleaned up event listeners for cyton/ganglion/wifi disconnect
* Fixed bug with daisy not getting accel data or stop byte by bumping wifi version to 0.3.0
* Daisy with cyton now gets stop bytes with bump to 1.0.6
# v1.3.2
### Bug Fixes
* SD card did not work for wifi on cyton
# v1.3.1
### Bug Fixes
* Removed annoying pop-up on Windows
# v1.3.0
### Bug Fixes
* Issue with ganglion channel data not sent
* Issue where cyton aux data not sent
### Breaking Changes
* Ganglion data over wifi has only 4 channels (as it's supposed to)
* Ganglion accel data over wifi sent with packet instead of in separate packet to prevent misalignment.
# v1.2.0
Fixing bugs with AppVeyor build service.
# v1.1.3
### Bug Fixes
* Fixed bugs with process protocol and several others.
# v1.1.2
Add a lot more fixes.
# v1.1.0
Add a lot more fixes.
# v1.0.2
Fix many issues with cyton and ganglion and wifi.
# v1.0.1
Add channel setting commands.
# v1.0.0
Add cyton and wifi support
# v0.4.1
### New Features
* BLE error on start up now sends error
### Bug Fixes
* Fixes #12 - Absorb 'no valid USB' found and send log
# v0.4.0
### Breaking Changes
* Changed name of built app from `Ganglion Hub` to `GanglionHub`.
### Bug Fixes
* On client leave if ganglion is connected, the connection will close.
# v0.3.1
### Enhancements
* Building the proper builds by tweaking AppVeyor.
# v0.3.0
### Enhancements
* Standardization of Specification.
### Breaking changes
* Accelerometer, Impedance, and Sample data all have specific success codes: 202, 203, and 204 respectively. Prior to this version all were using the same 200 code.
# v0.2.3
### Enhancements
* Calling connect with device name now performs a scan to ensure that device is really still available to connect to.
### Bug Fixes
* Fixed another bug caused by calling connect with a timeout.
# v0.2.2
### Enhancements
* Bump `openbci-ganglion` to `0.4.1`.
* Calling connect now has a timeout!
### Bug Fixes
* Dropped connections now emit a message to the connected client.
# v0.2.1
### Bug Fixes
* The `ganglionFound` event listener was not removed on the start of a new scan.
### Enhancements
* Disabled verbose printout for the production build.
* Bump `openbci-ganglion` to `0.3.8`.
# v0.2.0
### Bug Fixes
* Disconnect did not clean up event emitters added in connect.
### Enhancements
* Bump `openbci-ganglion` to `0.3.7`
# v0.1.6
### Enhancements
* Bump `openbci-ganglion` to `0.3.6`
### Bug Fixes
* Ganglion would not disconnect.
* Changed AppVeyor to Node 6
# v0.1.5
### Enhancements
* Bump `openbci-ganglion` to `0.3.3`
### Bug Fixes
* Ganglion could not stop searching.
# v0.1.4
### New Features
* Add Accel
### Bug Fixes
* Ganglion could not connect twice.
# v0.1.3
### Bug Fixes
* Add accelerometer data flow
* Bump ganglion node to `0.3.0`
# v0.1.2
### Bug Fixes
* Fix bug with undefined impedance
# v0.1.1
### Enhancements
* Update to use 18-bit compression.
* Update to v0.2.0 of `openbci-ganglion`.
* Fix bug in impedance sending.
# v0.1.0
* Initial Release
| 19.53527 | 375 | 0.713042 | eng_Latn | 0.988916 |
43b335a52b2b538c36013142857f52c0989002b7 | 25 | md | Markdown | README.md | chinsanchung/chinsanchung.github.com | dbf8e01b3e0a311c43c3b8cfbe8f5e90c3bf6ae0 | [
"MIT"
] | null | null | null | README.md | chinsanchung/chinsanchung.github.com | dbf8e01b3e0a311c43c3b8cfbe8f5e90c3bf6ae0 | [
"MIT"
] | null | null | null | README.md | chinsanchung/chinsanchung.github.com | dbf8e01b3e0a311c43c3b8cfbe8f5e90c3bf6ae0 | [
"MIT"
] | null | null | null | # chinsanchung.github.com | 25 | 25 | 0.84 | kor_Hang | 0.373774 |
43b3615f6037349bfea43ef844f8ef000ae620d7 | 1,279 | md | Markdown | README.md | punnkam/whitelistbot | 90e353c90e07b6f0ae63a4a673b1f4009d996af4 | [
"MIT"
] | 1 | 2021-09-02T15:16:40.000Z | 2021-09-02T15:16:40.000Z | README.md | punnkam/whitelistbot | 90e353c90e07b6f0ae63a4a673b1f4009d996af4 | [
"MIT"
] | null | null | null | README.md | punnkam/whitelistbot | 90e353c90e07b6f0ae63a4a673b1f4009d996af4 | [
"MIT"
] | null | null | null | # Whitelist Discord Bot
```
bum#5410: 0xbA842b7DA417Ba762D75e8F99e11c2980a8F8051
punnkam#3339: 0xe2da7fe4f82af891c3d23a7ecabd1f7d7562bf09
```
### config.js
```js
module.exports = {
PUBLIC: "",
CLI_ID: "",
GUILD_ID: "",
TOKEN: "",
PREFIX: "wl!", // whatever you want the bot command to start with
CMD_CHANNEL: "", // channel this bot receives commands from
ADDRESS_CHANNEL: "", // channel this bot sends incoming addresses to
LOG_CHANNEL: "", // channel this bot logs whitelist msgs to (congratulatory messages)
WHITELIST_ROLE_ID: "", // role id bots promote users to for whitelist. set to null if no promotion is needed
};
```
Run the bot:
```bat
cd src
node deploy-commands.js && npx nodemon init.js
```
## Commands
### wl!add [users]
example: wl!add @user1#9999 @user2#1928
Here's what will happen (see the sketch after this list):
- The bot promotes the users and attempts to DM each one.
- If any user is not contactable due to their permissions, the bot will notify the admins of the incident.
- The bot will make a public announcement mentioning the users who have been whitelisted.
- Each whitelisted individual sends the bot their address.
- The bot records the address into a channel.
- To further improve, the bot could connect directly to the smart contract and update the whitelist from there.
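A minimal sketch of that flow, assuming discord.js v13 and the config keys shown above (`WHITELIST_ROLE_ID`, `LOG_CHANNEL`); the function name and message strings are hypothetical:

```js
// Hypothetical wl!add handler; mirrors the flow described in the list above.
const { WHITELIST_ROLE_ID, LOG_CHANNEL } = require("./config");

async function handleWhitelistAdd(message) {
  const members = message.mentions.members; // the users passed to wl!add
  for (const member of members.values()) {
    if (WHITELIST_ROLE_ID) await member.roles.add(WHITELIST_ROLE_ID); // promote
    try {
      await member.send("You are whitelisted! Reply here with your address.");
    } catch {
      // DMs blocked by the user's permission settings: notify the admins
      await message.reply(`Could not DM ${member.user.tag}.`);
    }
  }
  // Public congratulatory announcement in the log channel
  const logChannel = await message.client.channels.fetch(LOG_CHANNEL);
  await logChannel.send(`Congrats ${[...members.values()].join(" ")}, you are whitelisted!`);
}
```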
| 28.422222 | 109 | 0.731822 | eng_Latn | 0.99252 |
43b36c768d47336fbb4229f7d4107b41c318a40d | 1,116 | md | Markdown | README.md | pandolajs/pandora-boilerplate-wechat-plugin | 3466fa6778c2791946c6c4c94099456fe386262e | [
"MIT"
] | 1 | 2019-03-06T12:19:04.000Z | 2019-03-06T12:19:04.000Z | README.md | pandolajs/pandora-boilerplate-wechat-plugin | 3466fa6778c2791946c6c4c94099456fe386262e | [
"MIT"
] | null | null | null | README.md | pandolajs/pandora-boilerplate-wechat-plugin | 3466fa6778c2791946c6c4c94099456fe386262e | [
"MIT"
] | null | null | null | # pandora-boilerplate-wechat-plugin
Scaffolding to initialize a WeChat Mini Program plugin project.
## Usage
- Install `pandora-cli` globally (recommended)
```bash
npm i -g pandora-cli
```
- Initialize the project and enter the AppId when prompted
```bash
pa init wx-plugin-demo
```
- Start the project
```bash
pa start
```
> `pa start` launches development mode and watches for changes
- Build for a specific environment
```bash
pa build --env prod
```
> `--env` accepts test, pre, prod
- Release
```bash
pa release <version-type> -m <comments>
```
> `<version-type>` accepts `patch`, `minor`, or `major`
> `<comments>` is a description of this release; required
### Project structure
```bash
.
├── config
│   └── app.yaml            // multi-environment configuration
├── icons                   // icons used in the examples
├── dist                    // build output
│   ├── examples            // built examples
│   └── plugin              // built plugin
├── doc                     // plugin documentation
│   └── README.md
├── examples                // example source code, used to debug the plugin
├── src                     // plugin source code
│   ├── components
│   ├── pages
│   ├── index.js
│   └── plugin.json
├── scripts                 // build scripts
├── build.config.js         // alias configuration
├── project.config.json     // Mini Program plugin project configuration
├── package.json
└── README.md
```
| 16.411765 | 57 | 0.478495 | yue_Hant | 0.195515 |
43b3b8e92010ea0359d6ac9f239cce59956fa38d | 67 | md | Markdown | README.md | jbm94/rick-and-morty-api-android | c48869434aa5b212a184bc4a0dfc1e282672b65e | [
"MIT"
] | null | null | null | README.md | jbm94/rick-and-morty-api-android | c48869434aa5b212a184bc4a0dfc1e282672b65e | [
"MIT"
] | null | null | null | README.md | jbm94/rick-and-morty-api-android | c48869434aa5b212a184bc4a0dfc1e282672b65e | [
"MIT"
] | null | null | null | # rick-and-morty-api-android
Rick and Morty API Android Client App
| 22.333333 | 37 | 0.791045 | kor_Hang | 0.802693 |
43b52d202e84c0e47fdf2466b00db4350530e125 | 3,095 | md | Markdown | source/API_Reference/Web_API_v3/Template_Engine/templates.md | kstark/docs | 87c2d2b5540075b84335487636f2a7d782dcee4f | [
"MIT"
] | null | null | null | source/API_Reference/Web_API_v3/Template_Engine/templates.md | kstark/docs | 87c2d2b5540075b84335487636f2a7d782dcee4f | [
"MIT"
] | 3 | 2020-12-31T09:10:11.000Z | 2022-02-26T10:09:51.000Z | source/API_Reference/Web_API_v3/Template_Engine/templates.md | kstark/docs | 87c2d2b5540075b84335487636f2a7d782dcee4f | [
"MIT"
] | null | null | null | ---
layout: page
title: Templates
weight: 100
alias: /API_Reference/Web_API_v3/Template_Engine/templates.html
navigation:
show: true
---
The Template Engine API lets you programmatically create and manage templates for your transactional email.
{% info %}
Each user can have up to 300 templates.
{% endinfo %}
{% info %}
Templates created in Template Engine are account and subuser specific. Templates created on a parent account will not be accessible from the subuser accounts.
{% endinfo %}
* * * * *
{% anchor h2 %}
POST
{% endanchor %}
Create a template.
{% parameters post %}
{% parameter name Yes 'String. Max 100 characters' 'Name of the new template' %}
{% endparameters %}
{% apiv3example post POST https://api.sendgrid.com/v3/templates name=example_name %}
{% v3response %}
HTTP/1.1 201 CREATED
{
"id": "733ba07f-ead1-41fc-933a-3976baa23716",
"name": "example_name",
"versions": []
}
{% endv3response %}
{% endapiv3example %}
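Equivalently, as a raw request (a sketch; assumes `$SENDGRID_API_KEY` holds an API key with template permissions):

```
curl -X POST https://api.sendgrid.com/v3/templates \
  -H "Authorization: Bearer $SENDGRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "example_name"}'
```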
* * * * *
{% anchor h2 %}
GET
{% endanchor %}
Retrieve all templates.
{% apiv3example get GET https://api.sendgrid.com/v3/templates %}
{% v3response %}
{
"templates": [
{
"id": "e8ac01d5-a07a-4a71-b14c-4721136fe6aa",
"name": "example template name",
"versions": [
{
"id": "5997fcf6-2b9f-484d-acd5-7e9a99f0dc1f",
"template_id": "9c59c1fb-931a-40fc-a658-50f871f3e41c",
"active": 1,
"name": "example version name",
"updated_at": "2014-03-19 18:56:33"
}
]
}
]
}
{% endv3response %}
{% endapiv3example %}
* * * * *
{% anchor h2 %}
GET
{% endanchor %}
Retrieve a single template.
{% apiv3example get-specific GET https://api.sendgrid.com/v3/templates/:template_id %}
{% v3response %}
{
"templates": [
{
"id": "e8ac01d5-a07a-4a71-b14c-4721136fe6aa",
"name": "example template name",
"versions": [
{
"id": "de37d11b-082a-42c0-9884-c0c143015a47",
"user_id": 1234,
"template_id": "d51480ba-ca3f-465c-bc3e-ceb71d73c38d",
"active": 1,
"name": "example version",
"html_content": "<%body%><strong>Click to Reset</strong>",
"plain_content": "Click to Reset<%body%>",
"subject": "<%subject%>",
"updated_at": "2014-05-22 20:05:21"
}
]
}
]
}
{% endv3response %}
{% endapiv3example %}
* * * * *
{% anchor h2 %}
PATCH
{% endanchor %}
Edit a template.
{% parameters patch %}
{% parameter name Yes 'String. Max 100 characters' 'New name of the template' %}
{% endparameters %}
{% apiv3example patch PATCH https://api.sendgrid.com/v3/templates/:template_id name=new_example_name %}
{% v3response %}
HTTP/1.1 200 OK
{
"id": "733ba07f-ead1-41fc-933a-3976baa23716",
"name": "new_example_name",
"versions": []
}
{% endv3response %}
{% endapiv3example %}
* * * * *
{% anchor h2 %}
DELETE
{% endanchor %}
Delete a template.
{% apiv3example delete DELETE https://api.sendgrid.com/v3/templates/:template_id %}
{% v3response %}
HTTP/1.1 204 NO CONTENT (OK)
{% endv3response %}
{% endapiv3example %}
| 22.107143 | 158 | 0.622617 | eng_Latn | 0.405298 |
43b5e636336b12d5ea5129de672f014c86e387fe | 2,406 | md | Markdown | content/publication/boileau-2020-front-genet-11-583124/index.md | doroudgar-lab/doroudgarlab-web | f0eefe76efe6b6c1065ca3efb6f7aec2b44cdda2 | [
"MIT"
] | null | null | null | content/publication/boileau-2020-front-genet-11-583124/index.md | doroudgar-lab/doroudgarlab-web | f0eefe76efe6b6c1065ca3efb6f7aec2b44cdda2 | [
"MIT"
] | null | null | null | content/publication/boileau-2020-front-genet-11-583124/index.md | doroudgar-lab/doroudgarlab-web | f0eefe76efe6b6c1065ca3efb6f7aec2b44cdda2 | [
"MIT"
] | null | null | null | ---
# Documentation: https://wowchemy.com/docs/managing-content/
title: A multi-network comparative analysis of transcriptome and translatome identifies novel hub genes in cardiac remodeling
subtitle: ''
summary: ''
authors:
- Etienne Boileau
- Shirin Doroudgar
- Eva Riechert
- Lonny Jürgensen
- Thanh Cao Ho
- Hugo A Katus
- Mirko Völkers
- Christoph Dieterich
tags:
- '"cardiac hypertrophy; cardiovascular; co-expression networks; transcription/RNA-seq;
translation/Ribo-seq"'
categories: []
date: '2020-11-01'
lastmod: 2021-09-23T13:35:09-07:00
featured: false
draft: false
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: ''
preview_only: false
# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2021-09-23T20:35:09.015904Z'
publication_types:
- '2'
abstract: Our understanding of the transition from physiological to pathological cardiac
hypertrophy remains elusive and largely based on reductionist hypotheses. Here,
we profiled the translatomes of 15 mouse hearts to provide a molecular blueprint
of altered gene networks in early cardiac remodeling. Using co-expression analysis,
we showed how sub-networks are orchestrated into functional modules associated with
pathological phenotypes. We discovered unappreciated hub genes, many undocumented
for their role in cardiac hypertrophy, and genes in the transcriptional network
that were rewired in the translational network, and associated with semantically
different subsets of enriched functional terms, such as Fam210a, a novel musculoskeletal
modulator, or Psmd12, implicated in protein quality control. Using their correlation
structure, we found that transcriptome networks are only partially reproducible
at the translatome level, providing further evidence of post-transcriptional control
at the level of translation. Our results provide novel insights into the complexity
of the organization of in vivo cardiac regulatory networks.
publication: '*Front. Genet.*'
---
| 40.779661 | 100 | 0.780964 | eng_Latn | 0.981675 |
43b74fe92c12489dc2f090172d59c7a8898ddca3 | 131 | md | Markdown | README.md | ArunBollam/Architecture-to-predict-real-time-stock-returns | 13fff8e344a65cf6fe42bf88729fc111ec96e937 | [
"MIT"
] | 2 | 2019-09-01T12:29:57.000Z | 2019-11-30T10:59:00.000Z | README.md | ArunBollam/Architecture-to-predict-real-time-stock-returns | 13fff8e344a65cf6fe42bf88729fc111ec96e937 | [
"MIT"
] | null | null | null | README.md | ArunBollam/Architecture-to-predict-real-time-stock-returns | 13fff8e344a65cf6fe42bf88729fc111ec96e937 | [
"MIT"
] | null | null | null | # Big Data pipeline for real time stock return prediction
US Oil (USO) stock prediction using Python, Flume, Hadoop and Pyspark.
| 32.75 | 71 | 0.778626 | eng_Latn | 0.90167 |
43b76d749b73a0b86c2bc04d81cee3032dd215f6 | 1,841 | md | Markdown | _drafts/Tumblr/2014-02-24-NMALH-session-video.md | craigeley/no-style-please | 739149b88ff32ea58f2a77824558971849eaad84 | [
"MIT"
] | 1 | 2015-12-19T17:51:56.000Z | 2015-12-19T17:51:56.000Z | _drafts/Tumblr/2014-02-24-NMALH-session-video.md | craigeley/no-style-please | 739149b88ff32ea58f2a77824558971849eaad84 | [
"MIT"
] | null | null | null | _drafts/Tumblr/2014-02-24-NMALH-session-video.md | craigeley/no-style-please | 739149b88ff32ea58f2a77824558971849eaad84 | [
"MIT"
] | null | null | null | ---
layout: post
title: NMALH Session Video
date: '2014-02-24T10:00:13-05:00'
tags:
- conferences
- video
redirect_from: /post/77703955136/back-in-december-i-went-to-the-new-media-in/
---
<iframe src="//player.vimeo.com/video/87372541" width="100%" height="400" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
Back in December, I went to the [New Media in American Literary History Symposium][1] hosted by Northeastern and organized by the seemingly indefatigable [Ryan Cordell][2] and Rhae Lynn Barnes, who runs [US History Scene][3].
Despite "literary" being in the title of the event, there were a handful of us there working on audio, and we were lucky enough to have Lisa Gitelman serve as the moderator and discussant on our panel. The event was live-streamed, but if you missed it then, the good folks at the [NULab][4] have recently posted [all of the sessions][5].
This isn't necessarily my best work, but it is the first time I formally presented on the idea of "natural history media," which has been really helpful as I frame out my current book project (which might be titled *Hearing Natural History*). At this point I'm pretty used to hearing the sound of my own voice, but this video was pretty challenging to get through—and I probably won't read my notes from my phone in future presentations. Ah well.
Anyway: you can (and should!) see my presentation in the context of [the full audio roundtable][6] and then watch [the rest of the symposium][5].
[1]: http://www.northeastern.edu/nulab/nmalh/
[2]: http://ryan.cordells.us/
[3]: http://www.ushistoryscene.com/
[4]: http://www.northeastern.edu/nulab/
[5]: https://www.youtube.com/playlist?list=PLXHAxVqAb4oJAnbpkPHT96VJO5wHsJF4H
[6]: https://www.youtube.com/watch?v=8KAhkD844H0&index=9&list=PLXHAxVqAb4oJAnbpkPHT96VJO5wHsJF4H
| 68.185185 | 446 | 0.766431 | eng_Latn | 0.985475 |
43b80fc32ecfc77c87ed6bb20b884caef389c4f9 | 532 | md | Markdown | docs/c-runtime-library/reference/fdopen.md | stanleylalanne/cpp-docs | 49ad140d25bc02a5ae4929dab2783119cd605aed | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-10T07:35:45.000Z | 2019-11-10T07:35:45.000Z | docs/c-runtime-library/reference/fdopen.md | stanleylalanne/cpp-docs | 49ad140d25bc02a5ae4929dab2783119cd605aed | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-10-16T08:33:11.000Z | 2019-10-16T08:33:11.000Z | docs/c-runtime-library/reference/fdopen.md | stanleylalanne/cpp-docs | 49ad140d25bc02a5ae4929dab2783119cd605aed | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-10-01T01:35:05.000Z | 2020-10-01T01:35:05.000Z | ---
title: "fdopen"
ms.date: "11/04/2016"
api_name: ["fdopen"]
api_location: ["msvcrt.dll", "msvcr80.dll", "msvcr90.dll", "msvcr100.dll", "msvcr100_clr0400.dll", "msvcr110.dll", "msvcr110_clr0400.dll", "msvcr120.dll", "msvcr120_clr0400.dll", "ucrtbase.dll"]
api_type: ["DLLExport"]
topic_type: ["apiref"]
f1_keywords: ["fdopen"]
helpviewer_keywords: ["fdopen function"]
ms.assetid: 3243c1d2-2826-4d2d-bfa2-a2da45f9cc7a
---
# fdopen
This POSIX function is deprecated. Use the ISO C++ conformant [_fdopen](fdopen-wfdopen.md) instead. | 38 | 194 | 0.731203 | eng_Latn | 0.136737 |
43b83fb24e062280ed39ca45d6e2b817adc1cd16 | 1,404 | md | Markdown | 2020/09/25/2020-09-25 23:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/09/25/2020-09-25 23:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/09/25/2020-09-25 23:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020年09月25日23时数据
Status: 200
1.邓超微博热评第一
微博热度:4403189
2.复旦大学推出十一仁月饼
微博热度:2636665
3.谢娜任珍探事务所所长
微博热度:2389316
4.赵薇 捍卫家庭的题材该out了
微博热度:2250181
5.微信绑定银行卡可免输卡号
微博热度:1803549
6.我和我的家乡有淘宝
微博热度:1443000
7.六旬教授喝完秋天第一杯奶茶进了医院
微博热度:1379373
8.杨丞琳发文告别黄鸿升
微博热度:1299081
9.这才是家长群该有的样子
微博热度:1117557
10.希腊前财长也太了解中国了
微博热度:852535
11.江阳遗言
微博热度:823053
12.巩俐演的郎平
微博热度:779692
13.张芝芝转岗
微博热度:657450
14.马云19年前保密项目重启
微博热度:581576
15.盐城首次发现超千只小青脚鹬
微博热度:568723
16.王嘉尔全开麦
微博热度:568509
17.26岁女生涉嫌集资诈骗1900万被公诉
微博热度:568042
18.姜子牙
微博热度:554290
19.我国已有4个新冠病毒疫苗进入三期试验
微博热度:540347
20.美国将对非移民签证逗留时间设限
微博热度:539248
21.彭昱畅工具人
微博热度:535776
22.餐厅服务员秒换桌布
微博热度:532366
23.元气满满的哥哥
微博热度:529497
24.林教练
微博热度:526404
25.华为起火建筑系公司一在建工地
微博热度:524753
26.王俊凯怼脸拍
微博热度:519824
27.白敬亭 我手里有货咱认识下
微博热度:519664
28.江西一中学全面禁用手机
微博热度:487069
29.被灵笼吓哭了
微博热度:482820
30.我国疫苗可能有比较长期的保护作用
微博热度:482409
31.万妮达摸GALI胸肌
微博热度:482114
32.朴宰范唱了想要成为rapstar吗
微博热度:467002
33.中餐厅
微博热度:426707
34.亲爱的自己
微博热度:397641
35.特朗普被侄女起诉
微博热度:336755
36.演员田某被批捕
微博热度:334674
37.沉默的真相大结局
微博热度:321204
38.猫神
微博热度:318840
39.百花奖
微博热度:308156
40.云南勐海发现1例疑似腺鼠疫病例
微博热度:229095
41.大连警方通报无牌车与救护车对峙
微博热度:228657
42.单依纯 ForeverYoung
微博热度:227020
43.7小时延时拍下昙花开花全程
微博热度:226690
44.李易峰星星印花西装
微博热度:224629
45.潘虹输了
微博热度:224280
46.白浪演技
微博热度:221982
47.猫神回复寒夜
微博热度:221746
48.东莞高新区在建工地火灾致3死
微博热度:220125
49.张雷争议判罚
微博热度:218845
50.夺冠
微博热度:217316
| 6.882353 | 22 | 0.780627 | yue_Hant | 0.314756 |
43b930f546515fc5a26921309dd6707ca9b9ea63 | 3,447 | md | Markdown | docs/framework/wpf/graphics-multimedia/how-to-simplify-animations-by-using-child-timelines.md | felpasl/docs.pt-br | 1b47adcbc2e400f937650f9de1cd0c511e80738e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/graphics-multimedia/how-to-simplify-animations-by-using-child-timelines.md | felpasl/docs.pt-br | 1b47adcbc2e400f937650f9de1cd0c511e80738e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/graphics-multimedia/how-to-simplify-animations-by-using-child-timelines.md | felpasl/docs.pt-br | 1b47adcbc2e400f937650f9de1cd0c511e80738e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'How to: Simplify animations by using child timelines'
ms.date: 03/30/2017
helpviewer_keywords:
- simplifying animations by child timelines [WPF]
- animation [WPF], simplifying by child timelines
- child timelines [WPF]
ms.assetid: 8335d770-d13d-42bd-8dfa-63f92c0327e2
ms.openlocfilehash: b5af20ce791c442eada0774cd46f52205e5b93e4
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 01/23/2019
ms.locfileid: "54648187"
---
# <a name="how-to-simplify-animations-by-using-child-timelines"></a>How to: Simplify animations by using child timelines
This example shows how to simplify animations by using child <xref:System.Windows.Media.Animation.ParallelTimeline> objects. A <xref:System.Windows.Media.Animation.Storyboard> is a type of <xref:System.Windows.Media.Animation.Timeline> that provides targeting information for the timelines it contains. Use a <xref:System.Windows.Media.Animation.Storyboard> to provide targeting information, including object and property targeting information.
To begin an animation, use one or more <xref:System.Windows.Media.Animation.ParallelTimeline> objects as nested child elements of a <xref:System.Windows.Media.Animation.Storyboard>. These <xref:System.Windows.Media.Animation.ParallelTimeline> objects can contain other animations and can therefore better encapsulate the timing sequences of complex animations. For example, if you animate a <xref:System.Windows.Controls.TextBlock> and several shapes in the same <xref:System.Windows.Media.Animation.Storyboard>, you can separate the animations for the <xref:System.Windows.Controls.TextBlock> and for the shapes by placing each in its own <xref:System.Windows.Media.Animation.ParallelTimeline>. Because each <xref:System.Windows.Media.Animation.ParallelTimeline> has its own <xref:System.Windows.Media.Animation.Timeline.BeginTime%2A> and all the children of a <xref:System.Windows.Media.Animation.ParallelTimeline> begin relative to that <xref:System.Windows.Media.Animation.Timeline.BeginTime%2A>, timing is better encapsulated.
The following example animates two pieces of text (<xref:System.Windows.Controls.TextBlock> objects) from within the same <xref:System.Windows.Media.Animation.Storyboard>. A <xref:System.Windows.Media.Animation.ParallelTimeline> encapsulates the animations of one of the <xref:System.Windows.Controls.TextBlock> objects.
**Performance note:** Although you can nest <xref:System.Windows.Media.Animation.Storyboard> timelines inside each other, <xref:System.Windows.Media.Animation.ParallelTimeline> objects are better suited for nesting because they require less overhead. (The <xref:System.Windows.Media.Animation.Storyboard> class inherits from the <xref:System.Windows.Media.Animation.ParallelTimeline> class.)
## <a name="example"></a>Exemplo
[!code-xaml[Timelines_snip#ParallelTimelineWholePage](../../../../samples/snippets/csharp/VS_Snippets_Wpf/Timelines_snip/CS/ParallelTimelineExample.xaml#paralleltimelinewholepage)]
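For orientation, a stripped-down sketch of the pattern (not the included sample; names and durations are illustrative):

```xaml
<!-- Each ParallelTimeline gets its own BeginTime, and its child
     animations start relative to that time. -->
<StackPanel>
  <StackPanel.Triggers>
    <EventTrigger RoutedEvent="StackPanel.Loaded">
      <BeginStoryboard>
        <Storyboard>
          <ParallelTimeline BeginTime="0:0:1">
            <DoubleAnimation
              Storyboard.TargetName="FirstText"
              Storyboard.TargetProperty="Opacity"
              From="0" To="1" Duration="0:0:2" />
          </ParallelTimeline>
        </Storyboard>
      </BeginStoryboard>
    </EventTrigger>
  </StackPanel.Triggers>
  <TextBlock Name="FirstText" Text="Hello" Opacity="0" />
</StackPanel>
```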
## <a name="see-also"></a>Consulte também
- [Visão geral da animação](../../../../docs/framework/wpf/graphics-multimedia/animation-overview.md)
- [Especificar HandoffBehavior entre animações de storyboard](../../../../docs/framework/wpf/graphics-multimedia/how-to-specify-handoffbehavior-between-storyboard-animations.md)
| 111.193548 | 1,070 | 0.807369 | por_Latn | 0.815176 |
43b98358e013ad60e5aa132fcee95a3474487c8f | 5,292 | md | Markdown | help/sources/tutorials/ui/create/cloud-storage/blob-s3.md | ktukker-adobe/experience-platform.en | 7c9b3f9b6404f6bb3937fcfbcdaf73bf23a3bb7d | [
"MIT"
] | null | null | null | help/sources/tutorials/ui/create/cloud-storage/blob-s3.md | ktukker-adobe/experience-platform.en | 7c9b3f9b6404f6bb3937fcfbcdaf73bf23a3bb7d | [
"MIT"
] | null | null | null | help/sources/tutorials/ui/create/cloud-storage/blob-s3.md | ktukker-adobe/experience-platform.en | 7c9b3f9b6404f6bb3937fcfbcdaf73bf23a3bb7d | [
"MIT"
] | null | null | null | ---
keywords: Experience Platform;home;popular topics
solution: Experience Platform
title: Create an Azure Blob or Amazon S3 source connector in the UI
topic: overview
---
# Create an [!DNL Azure Blob] or [!DNL Amazon] S3 source connector in the UI
Source connectors in Adobe Experience Platform provide the ability to ingest externally sourced data on a scheduled basis. This tutorial provides steps for creating an [!DNL Azure Blob] (hereinafter referred to as "Blob") or [!DNL Amazon] S3 (hereinafter referred to as "S3") source connector using the [!DNL Platform] user interface.
## Getting started
This tutorial requires a working understanding of the following components of Adobe Experience Platform:
- [Experience Data Model (XDM) System](../../../../../xdm/home.md): The standardized framework by which Experience Platform organizes customer experience data.
- [Basics of schema composition](../../../../../xdm/schema/composition.md): Learn about the basic building blocks of XDM schemas, including key principles and best practices in schema composition.
- [Schema Editor tutorial](../../../../../xdm/tutorials/create-schema-ui.md): Learn how to create custom schemas using the Schema Editor UI.
- [Real-time Customer Profile](../../../../../profile/home.md): Provides a unified, real-time consumer profile based on aggregated data from multiple sources.
If you already have a Blob or S3 base connection, you may skip the remainder of this document and proceed to the tutorial on [configuring a dataflow](../../dataflow/batch/cloud-storage.md).
### Supported file formats
[!DNL Experience Platform] supports the following file formats to be ingested from external storages:
- Delimiter-separated values (DSV): Support for DSV formatted data files is currently limited to comma-separated values. The value of field headers within DSV formatted files must only consist of alphanumeric characters and underscores. Support for general DSV files will be provided in the future.
- JavaScript Object Notation (JSON): JSON formatted data files must be XDM compliant.
- Apache Parquet: Parquet formatted data files must be XDM compliant.
### Gather required credentials
In order to access your Blob storage on [!DNL Platform], you must provide a valid value for the following credential:
| Credential | Description |
| ---------- | ----------- |
| `connectionString` | The connection string required to access data in your Blob storage. The Blob connection string pattern is: `DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}`. |
For more information on getting started, visit [this Azure Blob document](https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string).
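For instance, a filled-in Blob connection string follows this shape (a sketch with placeholder values, not real credentials):

```
DefaultEndpointsProtocol=https;AccountName=myStorageAccount;AccountKey=myBase64AccountKey==
```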
Similarly, accessing your S3 bucket on [!DNL Platform] requires you to provide your valid values for the following credentials:
| Credential | Description |
| ---------- | ----------- |
| `s3AccessKey` | The access key ID for your S3 storage. |
| `s3SecretKey` | The secret key ID for your S3 storage. |
For more information on getting started, visit [this AWS document](https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/).
## Connect your Blob or S3 account
Once you have gathered your required credentials, you can follow the steps below to create a new Blob or S3 account to connect to [!DNL Platform].
Log in to [Adobe Experience Platform](https://platform.adobe.com) and then select **[!UICONTROL Sources]** from the left navigation bar to access the *[!UICONTROL Sources]* workspace. The *[!UICONTROL Catalog]* screen displays a variety of sources for which you can create an inbound account with, and each source shows the number of existing accounts and dataflows associated with them.
You can select the appropriate category from the catalog on the left-hand side of your screen. Alternatively, you can find the specific source you wish to work with using the search option.
Under the *[!UICONTROL Databases]* category, select **[!UICONTROL Azure Blob Storage]** or **[!UICONTROL Amazon S3]** click **on the + icon (+)** to create a new [!DNL Blob] or S3 connector.

The *[!UICONTROL Connect to Azure Blob Storage]* page appears. On this page, you can either use new credentials or existing credentials.
### New account
If you are using new credentials, select **[!UICONTROL New account]**. On the input form that appears, provide the connection with a name, an optional description, and your [!DNL Blob] or S3 credentials. When finished, select **[!UICONTROL Connect]** and then allow some time for the new account to establish.

### Existing account
To connect an existing account, select the [!DNL Blob] or S3 account you want to connect with, then select **[!UICONTROL Next]** to proceed.

## Next steps and additional resources
By following this tutorial, you have established a connection to your [!DNL Blob] or S3 account. You can now continue on to the next tutorial and [configure a dataflow to bring data from your cloud storage into Platform](../../dataflow/batch/cloud-storage.md). | 67.846154 | 387 | 0.753968 | eng_Latn | 0.990851 |
43b98b058eb2e353bc01df55fc5ac6b2a257f36f | 1,285 | md | Markdown | series.md | aseaboyer/laudenslegends | 06343dec4db4f708a95dd180711369e0439cbd33 | [
"MIT"
] | null | null | null | series.md | aseaboyer/laudenslegends | 06343dec4db4f708a95dd180711369e0439cbd33 | [
"MIT"
] | null | null | null | series.md | aseaboyer/laudenslegends | 06343dec4db4f708a95dd180711369e0439cbd33 | [
"MIT"
] | null | null | null | ---
layout: page
title: Series
permalink: /series/
---
Some of the stories featured on this site are short-stories presented in a number of sequential installments. In other cases, a stand-alone story may overlap with other stories. If you are enjoying a story or you are unsure where to start reading, this reading order has been created with you in mind.
## The Triboar Trail
The Triboar trail is the retelling of an adventure played by a group of friends which is set in a popular tabletop campaign.
1. [Reunions]({% post_url 2021-4-4-The-Triboar-Trail-Part-1 %})
2. [Ambushes]({% post_url 2021-4-13-The-Triboar-Trail-Part-2 %})
3. [Revelations]({% post_url 2021-4-20-The-Triboar-Trail-Part-3 %})
4. [Entrances]({% post_url 2021-4-27-The-Triboar-Trail-Part-4 %})
5. [Taking the Fight to the Enemy]({% post_url 2021-5-4-The-Triboar-Trail-Part-5 %})
6. [Two Steps Forward, One Step Back]({% post_url 2021-5-11-The-Triboar-Trail-Part-6 %})
7. [Unexpected Allies]({% post_url 2021-5-20-The-Triboar-Trail-Part-7 %})
8. [Droop]({% post_url 2021-5-27-The-Triboar-Trail-Part-8 %})
9. [Showdown]({% post_url 2021-6-05-The-Triboar-Trail-Part-9 %})
10. [Group Dynamics]({% post_url 2021-6-17-The-Triboar-Trail-Part-10 %})
11. [Excursion]({% post_url 2021-6-24-The-Triboar-Trail-Part-11 %}) | 55.869565 | 301 | 0.723735 | eng_Latn | 0.932219 |
43ba002e67ebac5beb971e11b0f2a41fc2280bd8 | 2,606 | md | Markdown | _posts/2015-10-15-AlexNet.md | daijialun/daijialun.github.io | 9d92d89c14e381819ba14355aad27927cc6008af | [
"MIT"
] | null | null | null | _posts/2015-10-15-AlexNet.md | daijialun/daijialun.github.io | 9d92d89c14e381819ba14355aad27927cc6008af | [
"MIT"
] | null | null | null | _posts/2015-10-15-AlexNet.md | daijialun/daijialun.github.io | 9d92d89c14e381819ba14355aad27927cc6008af | [
"MIT"
] | null | null | null | ---
layout: post
title: "论文解读《Imagenet Classification with Deep Convolutional Neural Networks》"
date: 2015-10-15
categories: PaperReading
---
## Paper Analysis
### Abstract
- For ILSVRC-2010, the authors trained a large, deep convolutional neural network to classify 1.2 million high-resolution images into 1000 different classes. **On the test data,** the top-1 and top-5 error rates reached 37.5% and 17.0%. The network has 60,000,000 parameters and 650,000 neurons.
- In the ILSVRC-2012 competition, a slightly adjusted version of this model achieved a top-5 error rate of 15.3%.
- **The network consists of**
    - 5 convolutional layers
    - 3 fully connected layers
    - 1 1000-way softmax layer
### Introduction
- **Contributions:**
    - Trained a large convolutional neural network for the ILSVRC-2010 and ILSVRC-2012 competitions, achieving the best results at the time
    - Implemented high-performance 2D convolution on the GPU, along with the other operations in the network
    - Included several methods for improving performance and reducing training time, as well as techniques for preventing overfitting
- Network size is mainly limited by
    - GPU memory capacity
    - training time
- Results could be improved further with
    - faster GPUs
    - larger datasets
### Dataset
- ILSVRC is a subset of ImageNet with 1000 classes and roughly 1000 images per class: about 1.2 million training images, 50,000 validation images, and 150,000 test images
- The ILSVRC-2010 test-set labels are available, so it is the main benchmark used in the experiments
- ILSVRC-2012 is also used, but its test-set labels are not available
- Dataset images are rescaled to 256x256
- Apart from subtracting the mean activity, no other pre-processing is applied
### Architecture
**The architecture's main features, sorted by importance:**
- ReLU Nonlinearity
    - With gradient descent, saturating nonlinearities train more slowly than non-saturating ones
    - ReLU (Rectified Linear Units) activation; AlexNet uses the nonlinear ReLU
    - In deep convolutional networks, training with ReLU is several times faster than with *tanh* units
    - With traditional saturating neuron models, a network of this scale could not have been trained
- Training on Multiple GPUs
    - GPU memory limits the maximum network size; the training data may be large enough while the GPU cannot handle the model
    - Current GPUs support cross-GPU parallelization
    - Trick: the GPUs communicate only in certain layers, and each GPU holds half of the kernels
    - The two-GPU scheme reduces the top-1 and top-5 error rates by 1.7% and 1.2%
    - Training on two GPUs takes less time than on one
- Local Response Normalization
    - ReLU has the property of not requiring input normalization to prevent saturation
    - Response normalization reduces the top-1 and top-5 error rates by 1.4% and 1.2%
- Overlapping Pooling
    - The pooling used in this network is overlapping
    - Overlapping pooling slightly suppresses overfitting
- Overall Architecture
    - The network maximizes the multinomial logistic regression objective, i.e., the average log-probability of the correct label under the prediction distribution
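A compact sketch of this layout in PyTorch (a modern restatement, not the paper's original two-GPU code; channel sizes follow the paper, and the input is assumed to be a batch of 3x227x227 images):

```python
# Sketch of AlexNet's 5-conv + 3-FC layout; single GPU, ReLU everywhere.
import torch.nn as nn

alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                 # overlapping pooling
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                                 # 1000-way output
)
```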
### Reducing Overfitting
- Data Augmentation
    - Enlarging the dataset with label-preserving transformations is the simplest and most common way to reduce overfitting
    - First form: image translations and horizontal reflections
        - Extract 224x224 patches from the 256x256 images
        - This reduces overfitting
    - Second form: altering the intensities of the RGB channels in training images
        - Uses PCA
        - Captures an important property of natural images: object identity is invariant to changes in the intensity and color of the illumination
- Dropout
    - Sets the output of each hidden neuron to zero with probability 50%. Zeroed neurons do not participate in the forward or backward pass.
    - With dropout, the network samples a different architecture on every presentation, but all these architectures share weights
    - Reduces co-dependence between neurons
    - Dropout roughly doubles the number of iterations required to converge
### Details of learning
- A small amount of weight decay is important for the model to learn; that is, weight decay is not merely a regularizer, it also reduces the model's training error.
- Weights in every layer are initialized from a zero-mean Gaussian with standard deviation 0.01. The biases in the 2nd, 4th, and 5th convolutional layers and in the fully connected layers are initialized to constant 1, which accelerates learning by giving the ReLUs positive inputs. The remaining layers' biases are initialized to constant 0
- All layers use the same learning rate, adjusted manually during training: whenever the error rate stops improving, the learning rate is divided by 10. It starts at 0.01 and is reduced 3 times before training stops
### Results
- ILSVRC-2010: top-1 and top-5 error rates of 37.5% and 17.0%
- ILSVRC-2012: top-1 and top-5 error rates of 40.7% and 18.2%
- A model pre-trained on ImageNet 2011 and fine-tuned on ILSVRC reaches top-1 and top-5 error rates of 39.0% and 16.6%
### Discussion
- Removing any single layer degrades the network's accuracy
- No unsupervised pre-training was used, even though it could help, especially if enough compute becomes available to enlarge the network without a corresponding increase in labeled data | 17.727891 | 145 | 0.778204 | yue_Hant | 0.609512 |
| 17.727891 | 145 | 0.778204 | yue_Hant | 0.609512 |