hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9151c0a140085da90f496dbe0a2cb8b1c1194c7c | 2,603 | md | Markdown | _FULLTEXT/connors.photon.md | BJBaardse/open-source-words | 18ca0c71e7718a0e2e9b7269b018f77b06f423b4 | [
"Apache-2.0"
] | 17 | 2018-07-13T02:16:22.000Z | 2021-09-16T15:31:49.000Z | _FULLTEXT/connors.photon.md | letform/open-source-words | 18ca0c71e7718a0e2e9b7269b018f77b06f423b4 | [
"Apache-2.0"
] | null | null | null | _FULLTEXT/connors.photon.md | letform/open-source-words | 18ca0c71e7718a0e2e9b7269b018f77b06f423b4 | [
"Apache-2.0"
] | 6 | 2018-10-12T09:09:05.000Z | 2021-01-01T15:32:45.000Z | # Photon

UI toolkit for building desktop apps with Electron.

## Getting started

- Clone the repo with `git clone https://github.com/connors/photon.git`
- Read the docs to learn about the components and how to get your new application started
- Take note that our master branch is our active, unstable development branch and that if you're looking to download a stable copy of the repo, check the tagged downloads.

## What's included

Within the download you'll find the following directories and files, logically grouping common assets. You'll see something like this:

```
photon/
├── css/
│   ├── photon.css
├── fonts/
│   ├── photon-entypo.eot
│   ├── photon-entypo.svg
│   ├── photon-entypo.ttf
│   └── photon-entypo.woff
└── template-app/
    ├── js/
    │   └── menu.js
    ├── app.js
    ├── index.html
    └── package.json
```

We provide compiled CSS (photon.*). We also include the Entypo fonts and a template Electron application for you to quickly get started.

## Documentation

Photon's documentation is built with Jekyll and publicly hosted on GitHub Pages at http://photonkit.com. The docs may also be run locally.

### Running documentation locally

1. If necessary, install Jekyll (requires v2.5.x). Windows users: read this unofficial guide to get Jekyll up and running without problems.
2. Install the Ruby-based syntax highlighter, Rouge, with `gem install rouge`.
3. From the root `/photon` directory, run `jekyll serve` in the command line.
4. Open http://localhost:4000 in your browser, and boom!

Learn more about using Jekyll by reading its documentation.

## Contributing

Please file a GitHub issue to report a bug. When reporting a bug, be sure to follow the contributor guidelines.

## Development

- Install node dependencies: `npm install`.
- Open the example app: `npm start`.
- Modifying source Sass files? Open a second Terminal tab and run `npm run build` to kick off a build of the compiled photon.css.

## Versioning

For transparency into our release cycle and in striving to maintain backward compatibility, Photon is maintained under the Semantic Versioning guidelines. Sometimes we screw up, but we'll adhere to these rules whenever possible. Releases will be numbered with the following format:

`<major>.<minor>.<patch>`

And constructed with the following guidelines:

- Breaking backward compatibility bumps the major while resetting minor and patch
- New additions without breaking backward compatibility bump the minor while resetting the patch
- Bug fixes and misc changes bump only the patch

For more information on SemVer, please visit http://semver.org/.

## Maintainers

Connor Sears: https://twitter.com/connors, https://github.com/connors

## License

Copyright @connors. Released under MIT. | 2,603 | 2,603 | 0.794852 | eng_Latn | 0.997022 |
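The SemVer bump rules described above can be sketched as a small helper. This is illustrative only; the function name and interface are my own, not part of Photon:

```python
def bump(version, change):
    """Apply the SemVer rules above to a 'major.minor.patch' string.

    change: 'breaking' bumps major and resets minor and patch,
            'feature'  bumps minor and resets patch,
            'fix'      bumps only patch.
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    if change == "fix":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change!r}")

print(bump("1.4.2", "breaking"))  # → 2.0.0
print(bump("1.4.2", "feature"))   # → 1.5.0
print(bump("1.4.2", "fix"))       # → 1.4.3
```

Note that the minor and patch components reset whenever a more significant component is bumped, exactly as the guidelines state.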
9151e9596fb6c0200f2a953e6dd32831554c1f28 | 9,989 | md | Markdown | README.md | Wilucco/godot-android-module-firebase | ba94810c5b6d30d493e1e5d4f061d1aefe023840 | [
"Apache-2.0"
] | null | null | null | README.md | Wilucco/godot-android-module-firebase | ba94810c5b6d30d493e1e5d4f061d1aefe023840 | [
"Apache-2.0"
] | null | null | null | README.md | Wilucco/godot-android-module-firebase | ba94810c5b6d30d493e1e5d4f061d1aefe023840 | [
"Apache-2.0"
] | null | null | null | # godot-android-module-firebase
Godot Android module for Firebase, written from scratch. This project replaces https://github.com/yalcin-ata/godot-plugin-firebase.
This works for [Godot Engine](https://godotengine.org/)'s stable version 3.2 (not beta).
Follow the instructions [below](#instructions).
[API documentation can be found here.](INSTRUCTIONS.md)
## Instructions
### Preparing project
1. Download and start Godot 3.2. No need to build it on your own (compile, ...).
2. Install **Export Templates**: select menu *Editor > Manage Export Templates...* and download for Current Version (3.2.stable)
3. Install **Android Build Template** for your project: select menu *Project > Install Android Build Template...*, and then click *Install*. This will install the files in your project's directory (by adding `[PROJECT]/android/build/`).
4. Select menu *Project > Export*, and *Add...* Android. After setting your *Unique Name*, keystore stuff etc, don't forget to turn on ***Use Custom Build***. Then click *Close*.
5. Run in `[PROJECT]/android/`:
<pre>git clone https://github.com/yalcin-ata/godot-android-module-firebase</pre>
6. From [Firebase console](http://console.firebase.google.com/) download your project's **google-services.json** and copy/move it to `[PROJECT]/android/build/`.
**Notice:**<br/>Remember to always download a new version of google-services.json whenever you make changes at the Firebase console!
### Preparing Firebase Android Module
7. Add the following two lines at the bottom of `[PROJECT]/android/build/gradle.properties`:
<pre>
android.useAndroidX=true
android.enableJetifier=true
</pre>
8. Change `minSdk` from 18 to 21 in `[PROJECT]/android/build/config.gradle`:
<pre>minSdk : 21</pre>
9. Change gradle version to 6.1.1 in `[PROJECT]/android/build/gradle/wrapper/gradle-wrapper.properties`:
<pre>distributionUrl=https\://services.gradle.org/distributions/gradle-6.1.1-all.zip</pre>
10. Edit `[PROJECT]/android/godot-android-module-firebase/assets/godot-firebase-config.json` to your needs.
**Notice:**<br />
If `TestAds` for AdMob is set to `true`, all your Ad Unit IDs will be ignored, and the official [AdMob Test IDs](https://developers.google.com/admob/android/test-ads) will be used instead.
How to completely remove unneeded features is explained [below](#removing-unneeded-features).
11. Edit `[PROJECT]/android/godot-android-module-firebase/gradle.conf` to match your `applicationId`:
<pre>applicationId 'your.package.name'</pre>
12. In `[PROJECT]/android/godot-android-module-firebase/AndroidManifest.conf` edit the following section to match your needs:
<pre>
<!-- AdMob -->
<meta-data
      android:name="com.google.android.gms.ads.APPLICATION_ID"
      android:value="ca-app-pub-ADMOB_APP_ID"/>
<!-- AdMob -->
</pre>
13. In Godot select menu *Project > Project Settings* and go to *Android: Modules* to add the following line:
<pre>org/godotengine/godot/Firebase</pre>
- Alternative:<br />edit `[PROJECT]/project.godot` and add the following lines somewhere:<br /><br />
<pre>
[android]
modules="org/godotengine/godot/Firebase"
</pre>
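The manual `project.godot` edit above can also be scripted. Below is a minimal sketch (a hypothetical helper, not part of this module) that assumes the plain INI-style layout shown above; real projects may prefer editing the file by hand or using Godot's own ConfigFile handling:

```python
def add_android_module(text, module="org/godotengine/godot/Firebase"):
    """Return project.godot contents with `module` listed under [android]."""
    lines = text.splitlines()
    try:
        start = lines.index("[android]")
    except ValueError:
        # No [android] section yet: append one at the end.
        return text.rstrip("\n") + f'\n\n[android]\nmodules="{module}"\n'
    for i in range(start + 1, len(lines)):
        if lines[i].startswith("["):
            break  # reached the next section without finding modules=
        if lines[i].startswith("modules="):
            current = lines[i].split("=", 1)[1].strip('"')
            if module in current.split(","):
                return text  # already registered
            merged = f"{current},{module}" if current else module
            lines[i] = f'modules="{merged}"'
            return "\n".join(lines) + "\n"
    lines.insert(start + 1, f'modules="{module}"')
    return "\n".join(lines) + "\n"
```

Running it twice is safe: if the module is already registered, the text is returned unchanged.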
Setup is done, now you can take a look at the [instructions here (API).](INSTRUCTIONS.md)
### Removing Unneeded Features
**Notice:**<br />
Never remove
<pre>implementation 'com.google.firebase:firebase-analytics:VERSION'</pre>
from `gradle.conf` as this is needed for Firebase.
If you want to remove some features completely (i.e. to reduce the app size, not interested in a feature, ...) follow these steps:
Let's assume you don't need **Cloud Messaging**:
- in `[PROJECT]/android/godot-android-module-firebase/gradle.conf` remove the following lines:
<pre>
// Firebase Cloud Messaging
implementation 'com.google.firebase:firebase-messaging:20.1.0'
implementation 'androidx.work:work-runtime:2.3.1'
</pre>
- in `[PROJECT]/android/godot-android-module-firebase/AndroidManifest.conf` remove the following lines:
<pre>
<!-- Firebase Cloud Messaging -->
<service
   android:name="org.godotengine.godot.CloudMessagingService"
   android:exported="false">
   <intent-filter>
      <action android:name="com.google.firebase.MESSAGING_EVENT" />
   </intent-filter>
</service>
<!-- Set custom default icon. This is used when no icon is set for incoming notification messages.
See README(https://goo.gl/l4GJaQ) for more. -->
<meta-data
   android:name="com.google.firebase.messaging.default_notification_icon"
   android:resource="@drawable/ic_stat_ic_notification" />
<!-- Set color used with incoming notification messages. This is used when no color is set for the incoming
notification message. See README(https://goo.gl/6BKBk7) for more. -->
<meta-data
   android:name="com.google.firebase.messaging.default_notification_color"
   android:resource="@color/colorAccent" />
<meta-data
   android:name="com.google.firebase.messaging.default_notification_channel_id"
   android:value="@string/default_notification_channel_id" />
</pre>
- in `[PROJECT]/android/godot-android-module-firebase/src/org/godotengine/godot/Firebase.java` remove everything related to **Cloud Messaging**:
<pre>
// ===== Cloud Messaging
"cloudmessaging_subscribe_to_topic", "cloudmessaging_unsubscribe_from_topic"
</pre>
**Notice:**<br />
Remove the trailing comma after the last method name in the `registerClass()` method call, i.e. change
- <pre>
// ===== Storage
"storage_upload", "storage_download",<- this one
</pre>
to
- <pre>
// ===== Storage
"storage_upload", "storage_download"
</pre>
---
<pre>
// ===== Cloud Messaging
if (config.optBoolean("CloudMessaging", false)) {
Utils.logDebug("CloudMessaging initializing");
CloudMessaging.getInstance(activity).init(firebaseApp);
}
</pre>
---
<pre>
// ===== Cloud Messaging
public void cloudmessaging_subscribe_to_topic(final String topicName) {
   activity.runOnUiThread(new Runnable() {
      @Override
      public void run() {
         CloudMessaging.getInstance(activity).subscribeToTopic(topicName);
      }
   });
}
public void cloudmessaging_unsubscribe_from_topic(final String topicName) {
   activity.runOnUiThread(new Runnable() {
      @Override
      public void run() {
         CloudMessaging.getInstance(activity).unsubscribeFromTopic(topicName);
      }
   });
}
// ===== Cloud Messaging ======================================================
</pre>
- in `[PROJECT]/android/godot-android-module-firebase/src/org/godotengine/godot/` remove every class whose name starts with **CloudMessaging**.
Done!
### Authentication
1. Go to project's *Firebase Console > Authentication > Sign-in method > Google: **enable***.
2. Generate SHA-1:
* For **release**
* Run in shell:
<pre>keytool -list -v -alias <YOUR-ALIAS> -keystore release.keystore</pre>
(type afterwards your password)
* Copy calculated SHA-1.
* Go to project's *Firebase Console > Project Settings* (click on gear wheel icon):
* Scroll down to *Your apps* and click on *Add fingerprint*,
* Paste the copied SHA-1 and save.
* For **debug**
* Run in shell:
<pre>keytool -list -v -alias <YOUR-ALIAS> -keystore debug.keystore</pre>
(type afterwards your password)
* Copy calculated SHA-1.
* Go to project's *Firebase Console > Project Settings* (click on gear wheel icon):
* Scroll down to *Your apps* and click on *Add fingerprint*,
* Paste the copied SHA-1 and save.
* At project's *Firebase Console > Project Settings* (click on gear wheel icon):
* Under *Public settings* is *public-facing name*, beginning with `project-...`: copy `project-...`.
* Edit `[PROJECT]/android/godot-android-module-firebase/res/values/strings.xml` and edit the following line:
<pre><string name="server_client_id">project-.....</string></pre>
3. From [Firebase console](http://console.firebase.google.com/) download **google-services.json** and copy/move it to `[PROJECT]/android/build/`.
**Again:**<br />
Remember to always download a new version of google-services.json whenever you make changes at the Firebase console!
### In-App Messaging
Follow instructions at [Firebase: Send a test message](https://firebase.google.com/docs/in-app-messaging/get-started?authuser=0&platform=android#send_a_test_message).
### Cloud Messaging
For advanced users:
Optional: Edit `[PROJECT]/android/godot-android-module-firebase/res/values/strings.xml` and edit the following line:
<pre><string name="default_notification_channel_id">TO BE DONE</string></pre>
Links: [Firebase Cloud Messaging client](https://firebase.google.com/docs/cloud-messaging/android/client), [Firebase Cloud Messaging receive](https://firebase.google.com/docs/cloud-messaging/android/receive)
---
## ADB Logging
Run in shell:
<pre>clear</pre>
(clear screen)
<pre>adb logcat -b all -c</pre>
(clear buffer cache)
<pre>adb -d logcat godot:V GoogleService:V Firebase:V StorageException:V StorageTask:V UploadTask:V FIAM.Headless:V DEBUG:V AndroidRuntime:V ValidateServiceOp:V *:S</pre>
| 42.871245 | 236 | 0.702272 | eng_Latn | 0.434091 |
9152041a7ae7432d3883f9061a43f3160449aec1 | 2,116 | md | Markdown | 029-IoTEdge/Student/Challenge-04.md | aszego/WhatTheHack | 257cb9825ae2308500d68feb4dca5d9fc6175493 | [
"MIT"
] | 991 | 2019-05-07T12:41:00.000Z | 2022-03-24T20:24:55.000Z | 029-IoTEdge/Student/Challenge-04.md | aszego/WhatTheHack | 257cb9825ae2308500d68feb4dca5d9fc6175493 | [
"MIT"
] | 136 | 2019-06-07T20:49:35.000Z | 2022-03-29T16:48:40.000Z | 029-IoTEdge/Student/Challenge-04.md | aszego/WhatTheHack | 257cb9825ae2308500d68feb4dca5d9fc6175493 | [
"MIT"
] | 448 | 2019-05-07T23:12:33.000Z | 2022-03-30T13:22:51.000Z | # Challenge 4: Route messages and do time-series analysis
[< Previous Challenge](./Challenge-03.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-05.md)
## Introduction
Now that we have device connectivity and data is flowing to Azure IoT Hub, we should explore ways to visualize this data in a dashboard and investigate its patterns. Azure Time Series Insights is an end-to-end PaaS offering to ingest, process, store, and query highly contextualized, time-series-optimized, IoT-scale data.
Azure Time Series Insights is useful in a number of scenarios, including:
- Ad-hoc data exploration to meet IoT analytics needs
- Time Series Insights provides rich asset-based operational intelligence
- The ability to store multi-layered data, perform time-series modeling, and run cost-effective queries over decades of data
## Description
In this challenge we'll create an Azure Time Series Insights (TSI) service. The TSI service will connect to the Azure IoT Hub instance we stood up in the previous challenge, consume the data, define the industrial IoT asset model, and visualize the data.
Azure Time Series Insights is a fully managed analytics, storage, and visualization service that simplifies how to explore and analyze billions of IoT events simultaneously. It gives you a global view of your data so that you can quickly validate your IoT solution and avoid costly downtime to mission-critical devices. Azure Time Series Insights helps you to discover hidden trends, spot anomalies, and conduct root-cause analyses in near real time.
## Success Criteria
- Stand up Azure Time Series Insight Instance
- Define the asset model
- Connect the TSI instance with IoT Hub and consume the events in real time
- Create visualization with the time series data
- Create dashboard to show multiple charts.
## Taking it Further
Time Series Insights provides a query service, both in the TSI explorer and by using APIs that are easy to integrate for embedding your time series data into custom applications.
[Integrate TSI with your C# app](https://github.com/Azure-Samples/Azure-Time-Series-Insights)
| 64.121212 | 450 | 0.787335 | eng_Latn | 0.993847 |
91528fa3120f3e76a2a61f401d4dbf917fc93048 | 2,778 | md | Markdown | README.md | BlueGranite/RPivotTable-for-Power-BI | 08934dfae17e896c8ac3d5f143e37907f9b3a4d6 | [
"MIT"
] | 8 | 2018-02-26T14:29:33.000Z | 2022-03-05T18:04:55.000Z | README.md | BlueGranite/RPivotTable-for-Power-BI | 08934dfae17e896c8ac3d5f143e37907f9b3a4d6 | [
"MIT"
] | 2 | 2018-03-13T03:49:54.000Z | 2020-07-06T17:45:25.000Z | README.md | BlueGranite/RPivotTable-for-Power-BI | 08934dfae17e896c8ac3d5f143e37907f9b3a4d6 | [
"MIT"
] | 2 | 2018-04-02T09:59:26.000Z | 2020-06-21T17:06:03.000Z | # R Pivot Table for Power BI
An interactive R HTML *Pivot Table* for Microsoft Power BI from [BlueGranite](https://www.blue-granite.com).

### PREVIEW
R Pivot Table is in early preview. It is in the process of being submitted to Microsoft AppSource. For limited use and testing, you can download **RPivotTable-1.0.2.1.pbiviz** for non-production environments from the [*packaged-versions*](https://github.com/BlueGranite/RPivotTable-for-Power-BI/tree/master/packaged-versions) folder in this repository. There is no support currently offered for this visual while in preview, but please log any problems in the Issues section of this repository.
### Using R Pivot Table by BlueGranite
R Pivot Table for Power BI is an interactive R HTML visual that relies on R's *rpivotTable* package. This visual is available for use in both Power BI Desktop and Service. As an R visual, it will not render in Power BI Report Server or the Mobile app. If using this visual in Power BI Desktop, be sure to install both the *htmlwidgets* and *rpivotTable* packages in your local R environment.
1) Add an instance of the R Pivot Table visual to the report canvas.
2) Add data to *Values*. The first value in the list will default to Rows. The second (if available) will default to columns. Additional fields will be available for use as desired.
3) Click and drag available fields to the dark "Row" and "Column" panes to dynamically build a pivot table.
4) Select options to change the appearance of the pivot table.
### Format options
**Font Size** - set the font size ranging from 10 to 20 (default 12)
### Limitations
There are several limitations to this pivot table that make it a good *exploratory* visual but not a good *explanatory* visual:
1) Although R HTML visuals are interactive, you cannot select and filter other visuals by clicking inside the R visual.
2) Pivot table will not save a user-selected state. You will always start with the defaults.
3) There is no "freeze header" capability like you have in Excel.
4) The Custom Visuals API does not currently expose format string for R visuals. The number of decimal places and formatting may not be what you expect based on other, non-R, visuals.
5) Printing or exporting the filtered view is not available from the pivot table.
R Pivot Table requires both the htmlwidgets and rpivotTable R packages installed if you use the visual in Power BI Desktop. These packages are already available in Power BI Service.


| 79.371429 | 495 | 0.777898 | eng_Latn | 0.985616 |
9152ce5190c235b56037f4484068e11be4e492ff | 2,769 | md | Markdown | docs/relational-databases/backup-restore/vdi-reference/iservervirtualdeviceset2-beginconfiguration.md | william-keller/sql-docs.pt-br | e5218aef85d1f8080eddaadecbb11de1e664c541 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-05T16:06:11.000Z | 2021-09-05T16:06:11.000Z | docs/relational-databases/backup-restore/vdi-reference/iservervirtualdeviceset2-beginconfiguration.md | william-keller/sql-docs.pt-br | e5218aef85d1f8080eddaadecbb11de1e664c541 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/backup-restore/vdi-reference/iservervirtualdeviceset2-beginconfiguration.md | william-keller/sql-docs.pt-br | e5218aef85d1f8080eddaadecbb11de1e664c541 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IServerVirtualDeviceSet2::BeginConfiguration
titlesuffix: SQL Server VDI reference
description: This article provides a reference for the IServerVirtualDeviceSet2::BeginConfiguration command.
ms.date: 08/30/2019
ms.prod: sql
ms.prod_service: backup-restore
ms.technology: backup-restore
ms.topic: reference
author: mashamsft
ms.author: mathoma
ms.openlocfilehash: d188c79a558fcfe03b713a973cf0681822e24ba5
ms.sourcegitcommit: f7ac1976d4bfa224332edd9ef2f4377a4d55a2c9
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 07/02/2020
ms.locfileid: "85887620"
---
# <a name="iservervirtualdeviceset2beginconfiguration-vdi"></a>IServerVirtualDeviceSet2::BeginConfiguration (VDI)
[!INCLUDE [SQL Server](../../../includes/applies-to-version/sqlserver.md)]
The server invokes the **BeginConfiguration** function to begin configuration of the virtual device set.
## <a name="syntax"></a>Syntax
```c
HRESULT IServerVirtualDeviceSet2::BeginConfiguration (
DWORD dwFeatures,
DWORD dwAlignment,
DWORD dwBlockSize,
DWORD dwMaxTransferSize,
DWORD dwTimeout
);
```
## <a name="parameters"></a>Parameters
*dwFeatures* The modified feature mask: VDF_WriteMedia and/or VDF_ReadMedia.
*dwAlignment* The final alignment. If 0, the default is dwBlockSize. Must be a power of 2, >= dwBlockSize, and <= 64 KB.
*dwBlockSize* The minimum unit of transfer, in bytes. Must be a power of 2, >= 512, and <= 64 KB.
*dwMaxTransferSize* The largest transfer that will be attempted. Must be a multiple of 64 KB.
*dwTimeout* Milliseconds to wait until the primary client finishes declaring the buffer areas that will be provided.
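A quick way to see the parameter constraints above is to check them directly. The sketch below is illustrative only (the names are my own, not part of the VDI API); it encodes the documented rules for dwBlockSize, dwAlignment, and dwMaxTransferSize:

```python
KB = 1024

def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

def validate_config(block_size, alignment=None, max_transfer_size=64 * KB):
    """Check BeginConfiguration's documented parameter constraints.

    dwBlockSize: power of 2, >= 512 and <= 64 KB.
    dwAlignment: defaults to dwBlockSize when 0 or omitted; power of 2,
                 >= dwBlockSize and <= 64 KB.
    dwMaxTransferSize: a positive multiple of 64 KB.
    """
    if alignment in (None, 0):
        alignment = block_size
    ok_block = is_power_of_two(block_size) and 512 <= block_size <= 64 * KB
    ok_align = is_power_of_two(alignment) and block_size <= alignment <= 64 * KB
    ok_xfer = max_transfer_size > 0 and max_transfer_size % (64 * KB) == 0
    return ok_block and ok_align and ok_xfer
```

For example, a 4 KB block size with a 64 KB alignment and a 128 KB maximum transfer passes, while a 500-byte block size fails because it is not a power of two.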
## <a name="return-value"></a>Return value
|Return value | Explanation |
|---|---|
| NOERROR | The virtual device set is in the configurable state. |
| VD_E_ABORT | SignalAbort has been invoked. |
| VD_E_PROTOCOL | The virtual device set is not in the connected state. |
## <a name="remarks"></a>Remarks
After the function is invoked, the virtual device set moves to the configurable state, in which the buffer layout is decided.
Once the basic configuration is set (according to the parameters), these values remain fixed for the lifetime of the virtual device set. The alignment property of the virtual device set is used to control the alignment of data buffers. This value defines a minimum alignment that can be overridden on a per-buffer basis.
## <a name="next-steps"></a>Next steps
For more information, see the [SQL Server Virtual Device Interface reference overview](reference-virtual-device-interface.md). | 43.265625 | 389 | 0.783676 | por_Latn | 0.98707 |
915303359e881f4e10c2bb8d7e98dd7ceec922ba | 476 | md | Markdown | microsoft.ui.xaml.controls/infobadge_iconsource.md | stevemonaco/winui-api | 3e5ad1a5275746690c39fd2502c60928b756f3b5 | [
"CC-BY-4.0",
"MIT"
] | 63 | 2018-11-02T13:52:13.000Z | 2022-03-31T16:31:24.000Z | microsoft.ui.xaml.controls/infobadge_iconsource.md | stevemonaco/winui-api | 3e5ad1a5275746690c39fd2502c60928b756f3b5 | [
"CC-BY-4.0",
"MIT"
] | 99 | 2018-11-16T15:15:12.000Z | 2022-03-31T15:53:15.000Z | microsoft.ui.xaml.controls/infobadge_iconsource.md | stevemonaco/winui-api | 3e5ad1a5275746690c39fd2502c60928b756f3b5 | [
"CC-BY-4.0",
"MIT"
] | 35 | 2018-10-16T05:35:33.000Z | 2022-03-30T23:27:08.000Z | ---
-api-id: P:Microsoft.UI.Xaml.Controls.InfoBadge.IconSource
-api-type: winrt property
---
# Microsoft.UI.Xaml.Controls.InfoBadge.IconSource
<!--
public Microsoft.UI.Xaml.Controls.IconSource IconSource { get; set; }
-->
## -description
Gets or sets the icon to be used in an InfoBadge.
## -property-value
The icon to be used in an InfoBadge. The default is null.
## -remarks
## -see-also
[InfoBadge overview](/windows/apps/design/controls/info-badge)
## -examples
| 17.62963 | 69 | 0.722689 | yue_Hant | 0.394955 |
9153b2e5b733e4e46910972eb9f13b50ac96e79f | 3,436 | md | Markdown | WindowsServerDocs/identity/ad-fs/operations/Add-a-Claim-Description.md | SeekZ85/windowsserverdocs.fr-fr | 5bf1419505b71bb5f82621880aa069a0e5c88e45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/identity/ad-fs/operations/Add-a-Claim-Description.md | SeekZ85/windowsserverdocs.fr-fr | 5bf1419505b71bb5f82621880aa069a0e5c88e45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/identity/ad-fs/operations/Add-a-Claim-Description.md | SeekZ85/windowsserverdocs.fr-fr | 5bf1419505b71bb5f82621880aa069a0e5c88e45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.assetid: 7d230527-f4fe-4572-8838-0b354ee0b06b
title: Add a Claim Description
description: ''
author: billmath
ms.author: billmath
manager: femila
ms.date: 05/31/2017
ms.topic: article
ms.prod: windows-server
ms.technology: identity-adfs
ms.openlocfilehash: ff50ac8d41a5bbde282b1d5b93c85610f841b5ab
ms.sourcegitcommit: 6aff3d88ff22ea141a6ea6572a5ad8dd6321f199
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/27/2019
ms.locfileid: "71407789"
---
# <a name="add-a-claim-description"></a>Add a Claim Description
In an account partner organization, administrators create claims to represent a user's membership in a group or role, or to represent data about a user, for example, a user's identification number.
In a resource partner organization, administrators create corresponding claims to represent the groups and users that can be recognized as resource users. Because outgoing claims in the account partner organization are mapped to incoming claims in the resource partner organization, the resource partner is able to accept the identification information provided by the account partner.
You can use the following procedure to add a claim.
To perform this procedure, you must at a minimum be a member of the **Administrators** group, or equivalent, on the local computer. Review the information about using the appropriate accounts and group memberships in [Local and Domain Default Groups](https://go.microsoft.com/fwlink/?LinkId=83477).
## <a name="to-add-a-claim-description"></a>To add a claim description
1. In Server Manager, click **Tools**, and then select **AD FS Management**.
2. Expand **Service**, and then right-click **Add Claim Description**.
![add a claim](media/Add-a-Claim-Description/claimdescription1.png)
3. In the Add a Claim Description dialog box, in **Display name**, type a unique name that identifies the group or role of this claim.
4. Add a **Short name**.
5. In **Claim identifier**, type a URI associated with the group or role of the claim that you will use.
6. Under **Description**, type the text that best describes the purpose of this claim.
7. Depending on the needs of your organization, select either of the following check boxes, as appropriate, to publish this claim in the federation metadata:
   - To publish this claim to make partners aware that this server can accept this claim, click **Publish this claim in federation metadata as a claim type that this Federation Service can accept**.
   - To publish this claim to make partners aware that this server can issue this claim, click **Publish this claim in federation metadata as a claim type that this Federation Service can send**.
8. Click **OK**.
![add a claim](media/Add-a-Claim-Description/claimdescription2.PNG)
## <a name="see-also"></a>See also
[AD FS Operations](../../ad-fs/AD-FS-2016-Operations.md)
| 57.266667 | 520 | 0.79482 | fra_Latn | 0.953456 |
9153fac188a9354726cc06a6b8d15234ee7b661f | 1,084 | md | Markdown | TODO.md | randohm/mpdfront | 1fca0b73da05b0442c08378f551a11f43620c605 | [
"Apache-2.0"
] | 2 | 2020-08-17T04:59:39.000Z | 2020-08-17T04:59:44.000Z | TODO.md | randohm/mpdfront | 1fca0b73da05b0442c08378f551a11f43620c605 | [
"Apache-2.0"
] | null | null | null | TODO.md | randohm/mpdfront | 1fca0b73da05b0442c08378f551a11f43620c605 | [
"Apache-2.0"
] | null | null | null | # TODO
- grab focus by correct pane
- `db_cache` for files
- use status['elapsed']
- turn off timeout on update
- handle db updates in browser
- category: songs
- column browser out of range error
- info pop error on no data
- song counter finish and start over (DSD only??)
- playback layout class
- on stopped state, empty playlist, reset playback display
- write default config file
- change theme/css in runtime
- add spinner when loading column data
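For the `use status['elapsed']` item in the list above, a sketch of turning an MPD status dict into a display string. This assumes python-mpd2-style string fields (`elapsed`, `duration`) and is not current mpdfront code:

```python
def format_elapsed(status):
    """Render 'elapsed / duration' from an MPD status dict as m:ss strings."""
    def mmss(seconds):
        seconds = int(float(seconds))
        return f"{seconds // 60}:{seconds % 60:02d}"
    elapsed = status.get("elapsed", "0")
    duration = status.get("duration")
    if duration is None:
        return mmss(elapsed)  # e.g. a stream with no known duration
    return f"{mmss(elapsed)} / {mmss(duration)}"
```

Polling this against `status['elapsed']` avoids keeping a separate song counter in sync with playback.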
## MAYBE
- precaching `db_cache`
- song progress bar
## DONE
+ set/change sound card/device in app
+ focus on playlist after position change
+ redo playback layout
+ album covers not stored to temp file
+ highlight playlist current song
+ playlist highlight current song
+ fetch album covers from web
+ config file for:
+ shortcut keys
+ base music dir
+ output select
+ shuffle
+ repeat
+ consume
+ playlist select song to play
+ make thread safe(r)
+ set song counter based on time elapsed
+ browser sorting: ignore "the"
+ album art
+ new box layout
+ category: files
+ song info
+ test replace playlist
+ hide panes
| 22.122449 | 58 | 0.744465 | eng_Latn | 0.992162 |
91542d355010de4571b3c30f3409c0e1a70a5195 | 4,170 | md | Markdown | content/blog/HEALTH/8/9/8780a2a86e20f2d4a71e1ca57cac089a.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | 1 | 2022-03-03T17:52:27.000Z | 2022-03-03T17:52:27.000Z | content/blog/HEALTH/8/9/8780a2a86e20f2d4a71e1ca57cac089a.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | content/blog/HEALTH/8/9/8780a2a86e20f2d4a71e1ca57cac089a.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | ---
title: 8780a2a86e20f2d4a71e1ca57cac089a
mitle: "Alternative Medicine to Try for Your Panic and Anxiety"
image: "https://fthmb.tqn.com/2luUtF_kR5m-oWDjlwR88dr_WrE=/2121x1414/filters:fill(ABEAC3,1)/GettyImages-629784328-58d2c4123df78c516211f49f.jpg"
description: ""
---
The use of complementary and alternative medicine (CAM) for the treatment of medical and mental health conditions has grown in popularity. Many people with panic disorder seek out some form of CAM treatment as an integrative way to help manage their symptoms. Some of the most common choices among panic disorder sufferers include acupuncture, aromatherapy, therapeutic massage, mindfulness meditation, and hypnotherapy.
The use of herbal supplements has also become widespread among those with panic disorder. However, before starting any supplements, it is important to understand that there is only minimal scientific evidence supporting their use for panic disorder. Because of this lack of evidence of effectiveness, the U.S. Food and Drug Administration (FDA) does not approve claims that supplements can ease panic and anxiety, and the FDA does not regulate these substances the way it regulates drugs.
Additional caution is warranted if you have been prescribed medications for panic disorder or another mental health or medical condition. Even though supplements are available over the counter, they have the potential to interfere with prescribed medications or cause adverse effects. Always consult your doctor before taking supplements. The following describes some of the most common types of herbal supplements used to treat panic disorder and anxiety symptoms:
<h3>Kava Kava</h3>
Kava kava originates from the South Pacific and has become a popular supplement sold throughout the United States and Europe. It is derived from a plant and is consumed in capsule or liquid form. Kava kava is often recommended for panic and anxiety because it is thought to have a relaxing, tranquilizing effect. There is some evidence that this supplement can ease anxiety-related symptoms such as insomnia, muscle tension, headaches, and nervousness. However, there is only limited research available to back up these claims. Kava kava should therefore be used with caution and with the approval of a physician, as it can have adverse side effects.
<h3>Valerian</h3>
Valerian is thought to have a sedating effect and to provide feelings of calm and relaxation. It is most often used to help with sleep disturbances and mild anxiety. Valerian is thought to reduce feelings of stress and anxiety by acting on gamma-aminobutyric acid (GABA) receptors, neurotransmitters in the brain that are partly responsible for regulating mood, anxiety, and sleep. Still, little research has been conducted to validate these uses of valerian for anxiety issues. Caution is also advised, as valerian can have harmful interactions with medications commonly prescribed for panic disorder, including benzodiazepines and selective serotonin reuptake inhibitors (SSRIs).
<h3>St. John's Wort</h3>
St. John's wort has grown in popularity as a treatment for the symptoms of depression. It has also been used to help alleviate anxiety-related symptoms. There is evidence suggesting that St. John's wort may help balance specific neurotransmitters, the chemical messengers in the brain that can be imbalanced in people with mood and anxiety disorders. Despite initial findings, more research must be conducted to confirm these results. There can also be dangerous side effects when St. John's wort is combined with other medications, particularly antidepressants, so it should always be used with caution.
Sources: Bourne, Edmund J. (2005). The Anxiety and Phobia Workbook, 4th ed. Oakland, CA: New Harbinger. Seaward, B. L. (2011). Managing Stress: Principles and Strategies for Health and Wellbeing, 7th ed. Burlington, MA: Jones & Bartlett Learning. Sachs, J. (1997). Nature's Prozac: Natural Therapies and Techniques to Rid Yourself of Anxiety, Depression, Panic Attacks, and Stress. Englewood Cliffs, NJ: Prentice Hall.
9154e76cd6a397f396ecc1cf68c6f43e9a319eac | 63 | md | Markdown | README.md | txazo/dubbo | fc6cadddf379f654619411b017b2e36344b8c3b8 | [
"Apache-2.0"
] | 1 | 2016-06-14T14:17:34.000Z | 2016-06-14T14:17:34.000Z | README.md | txazo/dubbo | fc6cadddf379f654619411b017b2e36344b8c3b8 | [
"Apache-2.0"
] | 8 | 2020-06-30T22:51:39.000Z | 2022-02-01T00:56:00.000Z | README.md | txazo/dubbo | fc6cadddf379f654619411b017b2e36344b8c3b8 | [
"Apache-2.0"
] | null | null | null | ## dubbo
Alibaba RPC Framework Dubbo
## Notes
* Dubbo version: 2.5.3
| 7.875 | 27 | 0.650794 | gaz_Latn | 0.391801 |
9156ef73a214fe9a8eb22307b3d1ae6955384f0f | 11,413 | md | Markdown | docs/logging-to-elmah-io-from-log4net.md | elmahio/documentation | c8c678afad19ccd2195bb035f83c5242a5672b3b | [
"Apache-2.0"
] | null | null | null | docs/logging-to-elmah-io-from-log4net.md | elmahio/documentation | c8c678afad19ccd2195bb035f83c5242a5672b3b | [
"Apache-2.0"
] | 4 | 2018-04-24T11:41:46.000Z | 2019-01-16T13:04:51.000Z | docs/logging-to-elmah-io-from-log4net.md | elmahio/documentation | c8c678afad19ccd2195bb035f83c5242a5672b3b | [
"Apache-2.0"
] | 3 | 2018-04-24T11:24:53.000Z | 2020-10-22T20:37:39.000Z | ---
title: Logging to elmah.io from log4net
description: Learn about how to add error monitoring and storing log4net messages in the cloud with elmah.io. Simple setup using a single NuGet package.
---
[](https://github.com/elmahio/elmah.io.log4net/actions?query=workflow%3Abuild)
[](https://www.nuget.org/packages/elmah.io.log4net)
[](https://github.com/elmahio/elmah.io.log4net/tree/main/samples)
# Logging to elmah.io from log4net
[TOC]
In this tutorial we'll add logging to elmah.io from a .NET application with log4net. Install the elmah.io appender:
```powershell fct_label="Package Manager"
Install-Package Elmah.Io.Log4Net
```
```cmd fct_label=".NET CLI"
dotnet add package Elmah.Io.Log4Net
```
```xml fct_label="PackageReference"
<PackageReference Include="Elmah.Io.Log4Net" Version="3.*" />
```
```xml fct_label="Paket CLI"
paket add Elmah.Io.Log4Net
```
Add the following to your AssemblyInfo.cs file:
```csharp
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
```
Add the following config section to your `web/app.config` file:
```xml
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
```
Finally, add the log4net configuration element to `web/app.config`:
```xml
<log4net>
<appender name="ElmahIoAppender" type="elmah.io.log4net.ElmahIoAppender, elmah.io.log4net">
<logId value="LOG_ID" />
<apiKey value="API_KEY" />
</appender>
<root>
<level value="Info" />
<appender-ref ref="ElmahIoAppender" />
</root>
</log4net>
```
That’s it! log4net is now configured and logs messages to elmah.io. Remember to replace `API_KEY` ([Where is my API key?](https://docs.elmah.io/where-is-my-api-key/)) and `LOG_ID` ([Where is my log ID?](https://docs.elmah.io/where-is-my-log-id/)) with your actual API key and log ID. To start logging, write your usual log4net log statements:
```csharp
var log = log4net.LogManager.GetLogger(typeof(HomeController));
try
{
log.Info("Trying something");
throw new ApplicationException();
}
catch (ApplicationException ex)
{
log.Error("Error happening", ex);
}
```
## Logging custom properties
log4net offers a feature called context properties. With context properties, you can log additional key/value pairs with every log message. The elmah.io appender for log4net supports context properties as well. Context properties are handled like [custom properties](https://docs.elmah.io/logging-custom-data/) in the elmah.io UI.
Let's utilize two different hooks in log4net, to add context properties to elmah.io:
```csharp
log4net.GlobalContext.Properties["ApplicationIdentifier"] = "MyCoolApp";
log4net.ThreadContext.Properties["ThreadId"] = Thread.CurrentThread.ManagedThreadId;
log.Info("This is a message with custom properties");
```
Basically, we set two custom properties on contextual classes provided by log4net. To read more about the choices in log4net, check out the [log4net manual](https://logging.apache.org/log4net/release/manual/contexts.html).
When looking up the log message in elmah.io, we see the context properties in the Data tab. Besides the two custom variables that we set through `GlobalContext` and `ThreadContext`, we see a couple of built-in properties from log4net, both prefixed with `log4net:`.
In addition, `Elmah.Io.Log4Net` provides a range of reserved property names that can be used to fill in data in the correct fields on the elmah.io UI. Let's say you want to fill the User field:
```csharp
var properties = new PropertiesDictionary();
properties["User"] = "Arnold Schwarzenegger";
log.Logger.Log(new LoggingEvent(new LoggingEventData
{
Level = Level.Error,
TimeStampUtc = DateTime.UtcNow,
Properties = properties,
Message = "Hasta la vista, baby",
}));
```
This will fill in the value `Arnold Schwarzenegger` in the `User` field, as well as add a key/value pair to the Data tab on elmah.io. For a reference of all possible property names, check out the property names on [CreateMessage](https://github.com/elmahio/Elmah.Io.Client/blob/main/src/Elmah.Io.Client/Models/CreateMessage.cs).
## Message hooks
### Decorating log messages
In case you want to set one or more core properties on each elmah.io message logged, using message hooks may be a better solution. In that case you will need to add a bit of log4net magic. An example could be setting the `Version` property on all log messages. In the following code, we set a hard-coded version number on all log messages, but the value could come from assembly info, a text file, or similar:
```csharp
Hierarchy hier = log4net.LogManager.GetRepository(Assembly.GetEntryAssembly()) as Hierarchy;
var elmahIoAppender = (ElmahIoAppender)(hier?.GetAppenders())
.FirstOrDefault(appender => appender.Name
.Equals("ElmahIoAppender", StringComparison.InvariantCultureIgnoreCase));
elmahIoAppender.ActivateOptions();
elmahIoAppender.Client.Messages.OnMessage += (sender, a) =>
{
a.Message.Version = "1.0.0";
};
```
This rather ugly piece of code would go into an initialization block, depending on the project type. The code starts by getting the configured elmah.io appender (typically set up in `web/app.config` or `log4net.config`). With the appender, you can access the underlying elmah.io client and subscribe to the `OnMessage` event. This lets you trigger a small piece of code just before sending log messages to elmah.io. In this case, we set the `Version` property to `1.0.0`. Remember to call the `ActivateOptions` method to make sure that the `Client` property is initialized.
#### Include source code
You can use the `OnMessage` event to include source code to log messages. This will require a stack trace in the `Detail` property with filenames and line numbers in it.
There are multiple ways of including source code to log messages. In short, you will need to install the `Elmah.Io.Client.Extensions.SourceCode` NuGet package and call the `WithSourceCodeFromPdb` method in the `OnMessage` event handler:
```csharp
elmahIoAppender.Client.Messages.OnMessage += (sender, a) =>
{
a.Message.WithSourceCodeFromPdb();
};
```
Check out [How to include source code in log messages](/how-to-include-source-code-in-log-messages/) for additional requirements to make source code show up on elmah.io.
> Including source code on log messages is available in the `Elmah.Io.Client` v4 package and forward.
## Specify API key and log ID in appSettings
You may prefer storing the API key and log ID in the `appSettings` element over having the values embedded into the `appender` element. This can be the case for easy config transformation, overwriting values on Azure, or similar. log4net provides a feature named pattern strings to address just that:
```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<appSettings>
<add key="logId" value="LOG_ID"/>
<add key="apiKey" value="API_KEY"/>
</appSettings>
<log4net>
<root>
<level value="ALL" />
<appender-ref ref="ElmahIoAppender" />
</root>
<appender name="ElmahIoAppender" type="elmah.io.log4net.ElmahIoAppender, elmah.io.log4net">
<logId type="log4net.Util.PatternString" value="%appSetting{logId}" />
<apiKey type="log4net.Util.PatternString" value="%appSetting{apiKey}" />
</appender>
</log4net>
</configuration>
```
The `logId` and `apiKey` elements underneath the elmah.io appender have been extended to include `type="log4net.Util.PatternString"`. This allows for complex patterns in the `value` attribute. In this example, I reference an app setting by its name, adding a value of `%appSetting{logId}`, where `logId` is a reference to the app setting key specified above.
## ASP.NET Core
Like other logging frameworks, logging through log4net from ASP.NET Core is also supported. We have a [sample](https://github.com/elmahio/elmah.io.log4net/tree/main/samples/Elmah.Io.Log4Net.AspNetCore31) to show you how to set it up. The required NuGet packages and configuration are documented in this section.
To start logging to elmah.io from Microsoft.Extensions.Logging (through log4net), install the `Microsoft.Extensions.Logging.Log4Net.AspNetCore` NuGet package:
```powershell fct_label="Package Manager"
Install-Package Microsoft.Extensions.Logging.Log4Net.AspNetCore
```
```cmd fct_label=".NET CLI"
dotnet add package Microsoft.Extensions.Logging.Log4Net.AspNetCore
```
```xml fct_label="PackageReference"
<PackageReference Include="Microsoft.Extensions.Logging.Log4Net.AspNetCore" Version="3.*" />
```
```xml fct_label="Paket CLI"
paket add Microsoft.Extensions.Logging.Log4Net.AspNetCore
```
Include a log4net config file to the root of the project:
```xml
<?xml version="1.0" encoding="utf-8" ?>
<log4net>
<root>
<level value="WARN" />
<appender-ref ref="ElmahIoAppender" />
<appender-ref ref="ConsoleAppender" />
</root>
<appender name="ElmahIoAppender" type="elmah.io.log4net.ElmahIoAppender, elmah.io.log4net">
<logId value="LOG_ID" />
<apiKey value="API_KEY" />
<!--<application value="My app" />-->
</appender>
<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
</layout>
</appender>
</log4net>
```
In the `Program.cs` file, make sure to set up log4net:
```csharp
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
webBuilder.ConfigureLogging((ctx, logging) =>
{
logging.AddLog4Net();
});
});
}
```
All internal logging from ASP.NET Core, as well as manual logging you create through the `ILogger` interface, now goes directly into elmah.io.
A common request is to include all of the HTTP contextual information you usually get logged when using a package like `Elmah.Io.AspNetCore`. We have developed a specialized NuGet package to include cookies, server variables, etc. when logging through log4net from ASP.NET Core. To set it up, install the `Elmah.Io.AspNetCore.Log4Net` NuGet package:
```powershell fct_label="Package Manager"
Install-Package Elmah.Io.AspNetCore.Log4Net
```
```cmd fct_label=".NET CLI"
dotnet add package Elmah.Io.AspNetCore.Log4Net
```
```xml fct_label="PackageReference"
<PackageReference Include="Elmah.Io.AspNetCore.Log4Net" Version="3.*" />
```
```xml fct_label="Paket CLI"
paket add Elmah.Io.AspNetCore.Log4Net
```
Finally, make sure to call the `UseElmahIoLog4Net` method in the `Configure` method in the `Startup.cs` file:
```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
// ... Exception handling middleware
app.UseElmahIoLog4Net();
// ... UseMvc etc.
}
``` | 43.561069 | 571 | 0.740471 | eng_Latn | 0.839147 |
915718f90eb1eaf5d001e35a329ad3a60803bacd | 407 | md | Markdown | guide/russian/certifications/javascript-algorithms-and-data-structures/functional-programming/introduction-to-currying-and-partial-application/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 10 | 2019-08-09T19:58:19.000Z | 2019-08-11T20:57:44.000Z | guide/russian/certifications/javascript-algorithms-and-data-structures/functional-programming/introduction-to-currying-and-partial-application/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 2,056 | 2019-08-25T19:29:20.000Z | 2022-02-13T22:13:01.000Z | guide/russian/certifications/javascript-algorithms-and-data-structures/functional-programming/introduction-to-currying-and-partial-application/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 5 | 2018-10-18T02:02:23.000Z | 2020-08-25T00:32:41.000Z | ---
title: Introduction to Currying and Partial Application
localeTitle: Введение в каррирование и частичное применение
---
## Introduction to Currying and Partial Application
### Solution
```javascript
function add(x) {
// Add your code below this line
return function(y) {
return function(z) {
return x + y + z;
}
}
// Add your code above this line
}
add(10)(20)(30);
``` | 19.380952 | 59 | 0.653563 | eng_Latn | 0.477589 |
91575450ef872877d0784339c6b29167d0e3677a | 2,522 | md | Markdown | Version.md | conero/lang | d5d51c2e26e56197e9f1c9c82eb9f91458287cf6 | [
"Apache-2.0"
] | 2 | 2018-11-13T14:22:52.000Z | 2022-03-16T01:35:23.000Z | Version.md | conero/lang | d5d51c2e26e56197e9f1c9c82eb9f91458287cf6 | [
"Apache-2.0"
] | null | null | null | Version.md | conero/lang | d5d51c2e26e56197e9f1c9c82eb9f91458287cf6 | [
"Apache-2.0"
] | null | null | null | # 版本信息
> 2018年11月13日 星期二
## V3
### v3.4.0-alpha
- **src**
- lang.md
- (优化)*dart 知识介绍添加*
- (+)*异步和同步的概念*
- Js.md
- (+) *添加 threeJs 的相关知识介绍*
- (+)*添加“知识/进制转换”知识介绍*
- Web.md
- (+)*添加内容项目-“浏览器”,以及相关内核介绍*
- (+)*添加 php/python/nodejs 实现内地可测试的 HTTP 服务器*
- AI/DeepLearning.md
- (+) *学习 MATLAB 中文网站上相关的机器学习基础知识*
- (优化) *添加 TensorFlow 安装简单教程,以及机器学习文档*
- golang/golang.md
- (优化)*添加更多对 `cgo` 环境的配置*
- rust/rust.md
- (+)*添加问题解决方案相关的知识*
### v3.3.0-20190410
- *Readme 调整*
- **src**
- *lang.md*
- (+) *添加 `密码` 相关的知识,学习常见的加密算法*
- (+) *添加 `OOP` 的简单介绍,以及优化`程序范式`*
- (+) *程序设计模式以及 RESTful 架构的学习*
- (优化) *数据库知识,学习关系数据库范式*
- (调整) *文档机构调整,新增 `架构` 耳机目录*
- (+) *硬件信息添加 `CPU/GPU/NPU/TPU` 等信息对比*
- (+) *Newther.md*
- 新增文件用于学习/记录兴新技术
- *study/operating-system.md*
- (+) *添加/完善 `vim/vi` 常用命令*
- (完善) *完善 bash/shell 命令*
- software/software.md
- (完善) *添加网络抓包工具*
- (完善) *SQL developer 结束 Excel(csv) 导出到Oracle数据库的方法*
- *web.md*
- (完善) *css 中px, em, rem 的区别*
- (+) *添加 cgi 和 fastCgi 知识*
- (完善) *HTTP 等相关协议,以及不同版本之间的对比。*
- (+) *database.md*
- (新增) 通用数据库说明文档,编写相关的数据库理论
- (+) *database*
- 新增数据库子目录文件
- (调整) *src/software/Oracle-11g* 更名为 *src/database/Oracle-11g*
- *AI/ABriefHistoryOfArtificialIntelligence.md*
- 阅读《人工智能》并总结
- *php/PHP-advance.md*
- (优化) *学习 PHP-FIG 的 PSR 规范*
- (完善) *学习 PHP 基础知识 `可变变量` 等*
- *software/vcs.md*
- (完善)*收入已经问题解决的方法*
- (+) *WeCanDo.md*
- (添加) *新增 IT 方面的解决方案*
### v3.2.0/20190121
> *<span style="color: red;">每一次提交必须完善该内容</span>*
- *src/*
- *python.md*
- (优化) *对其官网文档的基本学习:built-in 函数*
- (优化) *通过的[中文版文档](http://www.pythondoc.com/pythontutorial3/index.html)的再次学习优化文档*
- Js.md
- (更名) *study/Js-Framework.md -> Js.md*
- (+) *添加对 WebGL的学习*
- (+) *添加对 TensorFlowJs(tfjs) 的学习*
- (实现) *列举Js相关领域,初步对其进行学习和了解*
- Web.md
- (+) *添加对Web的知识的学习*
- (+) *学习基本的 http概念,参照网络资源并对其进行学习和了解*
- *software/software.md*
- (+) *添加 JetBrains IED,添加常用快捷键。*
- (+) *添加常用软件列表*
- *software/Powershell.md*
- (+) *优化 ps文档*
- *rust/rust.md*
- (优化) *完善文档,二次阅读*
- *首页*
- (+) *添加附录,如参考文档*
### v3.1.0/20181226
- (实现) *src/rust/rust.md 学习,参照在线图书[Rust 程序设计语言-简体中文](https://kaisery.github.io/trpl-zh-cn/)*
- (实现) *src/python.md 官网文档的基本学习,了解python语言的基本语法*
- (实现) *src/software/vcs.md 添加对 svn 版本分支的基本学习*
- (修复) *首页目录结构无效*
### v3.0.0/20181113
> 正式引入版本控制,使用版本号
- 仓库整理为一个类似 `开源图书`
- 首页添加目录,使之与网络中常见的文档类似(*文档式*)
- *整合旧版信息*
| 21.193277 | 93 | 0.568596 | yue_Hant | 0.676447 |
9157ee6649e99cb07d9e43b4c07ad4063764213e | 5,541 | md | Markdown | README.md | coinpathio/wallet | f86c7702ba944b23f18027d7b9765a613643cfe5 | [
"MIT"
] | null | null | null | README.md | coinpathio/wallet | f86c7702ba944b23f18027d7b9765a613643cfe5 | [
"MIT"
] | null | null | null | README.md | coinpathio/wallet | f86c7702ba944b23f18027d7b9765a613643cfe5 | [
"MIT"
] | null | null | null | 
## Open-source multi-currency wallet for Bitcoin and custom assets, and p2p exchange
Live version here: https://swaponline.github.io .
No coding skills? Buy WordPress plugin https://codecanyon.net/item/multicurrency-crypto-wallet-and-exchange-widgets-for-wordpress/23532064 with admin panel.
<h2>1. Multi-currency wallet. Your users can store Bitcoin and custom assets</h2>
Add many assets to your wallet.
<img src="http://growup.wpmix.net/DesAndMob3.png">
<br>
Check out this case: <a href="https://twitter.com/Atomic_Wallet" target="_blank">https://twitter.com/Atomic_Wallet</a> (our real client)
<h2>2. ERC20 wallet</h2>
<a href="https://generator.swaponline.site/livedemo/0x4E12EB8e506Ccd1427F6b8F7faa3e88fB698EB28/319aa913-4e84-483f-a0d1-8664a13f56b7/#/JACK-wallet">Wallet demo (custom asset "SWAP")</a>
<img src="https://generator.swaponline.site/generator/assets/img/example_wallet.png">
<h2>3. Buy/Sell assets (Exchange widget)</h2>
<a href="https://generator.swaponline.site/livedemo/0x4E12EB8e506Ccd1427F6b8F7faa3e88fB698EB28/319aa913-4e84-483f-a0d1-8664a13f56b7/#/buy/btc-to-jack">Exchange widget live demo</a>
<img src="https://generator.swaponline.site/generator/assets/img/example_exchange.png">
<br> <br>
<h3>4. Secondary market (trading between users)</h3>
<a href="https://swaponline.github.io/#/usdt-btc">Demo (orderbook)</a>
<h3>5. Other demos</h3>
<a href="https://swaponline.github.io/#/usdt-wallet">USDT stablecoin wallet (payment system)</a>
## Swap React
### Install
#### Eng
1) Fork this repository (Click "Fork" on top of this page)
2) Clone repository with submodules (swap.core)
```
git clone --recurse-submodules https://github.com/swaponline/swap.react.git
```
3) Do `npm i` (windows? https://www.npmjs.com/package/windows-build-tools )<br /> (node 10 required, not 12!)
```
nvm use 10.18.1
cd swap.react
npm i
```
4) Do `git submodule update` in swap.react directory
5) For dev mode `npm run start`, for prod `npm run build`
> If you need to deploy it on your own (site) origin - run build like: `npm run build:mainnet https://yourcoolsite.com/`
```
npm run start
```
### Build with a custom ERC20 token (BTC, ETH)
1. npm run build:mainnet-widget {erc20contract} {name} {decimals} {ticker}
example:
```
npm run build:mainnet-widget 0x4E12EB8e506Ccd1427F6b8F7faa3e88fB698EB28 jack 18 JACK full
```
2. Upload the build to your domain (https://domain.com/build-mainnet-widget)
3. Open it in a browser
Remember you MUST be online, and you can not process more than one exchange at the same time. Otherwise you can use our custodian service for a 1% fee and $50 setup (contact https://t.me/sashanoxon for details).
## How to change images and colors
### 1. Logo
swap.react/shared/components/Logo
* copy your svg logos to the `images` folder
* in index.js, set up your url and image
```
export default {
colored: {
yourUrl: imagename,
localhost: base,
'swap.online': swapOnlineColored,
},
common: {
yourUrl: imageName,
'swap.online': swapOnline,
},
}
```
* To change the preloader, go to "client/index.html" and change the url to your image
```
<div id="loader">
<img src="https://wiki.swap.online/assets/swap-logo.png" />
</div>
```
* to change a cryptocurrency color, go to `swap.react/shared/components/ui/CurrencyIcon/images`
* change the icon to your own (with the same name, e.g. "bitcoin.svg")
* to change a cryptocurrency icon, go to `/swap.react/shared/pages/PartialClosure/CurrencySlider/images`
### 2. How to change links to social networks
`swap.react/shared/helpers/links.js`
* in the `links` folder, change the links to your own
### 3. How to change text
To prevent any conflicts in the future (when you update your source from our branch)
* find in the source text like this:
``` <FormattedMessage id="Row313" defaultMessage="Deposit" /> ```
* go to folder `swap.react/shared/localisation`
open en.js
find string with the same id ("Row313")
```
{
"id": "Row313",
"message": "Deposit",
"files": [
"shared/pages/Currency/Currency.js",
"shared/pages/CurrencyWallet/CurrencyWallet.js",
"shared/pages/OldWallet/Row/Row.js"
]
},
```
* change text in `message` var
### 4. How to add new ERC20 token
* go to `swap.react/config/mainnet/erc20.js`
* go to `swap.react/swap.core/src/swap.app/constants/COINS.js` and add token there too
* go to `shared/redux/reducers/currencies.js` and add token there too
### 5. How to add token to "Create wallet" screen
* go to `shared/redux/reducers/currencies.js` and change `addAssets: false,` to `true`

## how to update your version (fork) to latest version:
0. Make a backup and "git push" all your changes to your repository
1. go here https://github.com/swaponline/swap.react/compare?expand=1 , click "Compare across forks"
2. select your repository in "base branch" (left)
3. click "Create pull request" (enter any title)
4. click "Merge pull request"
If you have conflicts (if sources have been changed on your side), click "resolve conflicts".
# DeFi style (borrow/lend)

https://drive.google.com/file/d/15e0ODxzbtiu0xJOeKKuJ2SzffZmc5_OA/view
for any questions: telegram <a href="https://t.me/sashanoxon">sashanoxon</a>
| 34.203704 | 207 | 0.715394 | eng_Latn | 0.588917 |
9158b5d7e33b8ccf53090b9c08f1df6a97ff3d31 | 3,314 | md | Markdown | docs/c-runtime-library/reference/lrint-lrintf-lrintl-llrint-llrintf-llrintl.md | anmrdz/cpp-docs.es-es | f3eff4dbb06be3444820c2e57b8ba31616b5ff60 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/c-runtime-library/reference/lrint-lrintf-lrintl-llrint-llrintf-llrintl.md | anmrdz/cpp-docs.es-es | f3eff4dbb06be3444820c2e57b8ba31616b5ff60 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/c-runtime-library/reference/lrint-lrintf-lrintl-llrint-llrintf-llrintl.md | anmrdz/cpp-docs.es-es | f3eff4dbb06be3444820c2e57b8ba31616b5ff60 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: lrint, lrintf, lrintl, llrint, llrintf, llrintl | Microsoft Docs
ms.custom: ''
ms.date: 04/05/2018
ms.technology:
- cpp
- devlang-cpp
ms.topic: reference
apiname:
- lrint
- lrintl
- lrintf
- llrint
- llrintf
- llrintl
apilocation:
- msvcrt.dll
- msvcr80.dll
- msvcr90.dll
- msvcr100.dll
- msvcr100_clr0400.dll
- msvcr110.dll
- msvcr110_clr0400.dll
- msvcr120.dll
- msvcr120_clr0400.dll
- ucrtbase.dll
- api-ms-win-crt-math-l1-1-0.dll
apitype: DLLExport
f1_keywords:
- lrint
- lrintf
- lrintl
- llrint
- llrintf
- llrintl
- math/lrint
- math/lrintf
- math/lrintl
- math/llrint
- math/llrintf
- math/llrintl
dev_langs:
- C++
helpviewer_keywords:
- lrint function
- lrintf function
- lrintl function
- llrint function
- llrintf function
- llrintl function
ms.assetid: 28ccd5b3-5e6f-434f-997d-a21d51b8ce7f
author: corob-msft
ms.author: corob
ms.workload:
- cplusplus
ms.openlocfilehash: 5ace427267a45c87213f62276e1d7799f27db1cd
ms.sourcegitcommit: be2a7679c2bd80968204dee03d13ca961eaa31ff
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 05/03/2018
ms.locfileid: "32401264"
---
# <a name="lrint-lrintf-lrintl-llrint-llrintf-llrintl"></a>lrint, lrintf, lrintl, llrint, llrintf, llrintl
Rounds the specified floating-point value to the nearest integer value, using the current rounding mode and direction.
## <a name="syntax"></a>Sintaxis
```C
long int lrint(
double x
);
long int lrint(
float x
); //C++ only
long int lrint(
long double x
); //C++ only
long int lrintf(
float x
);
long int lrintl(
long double x
);
long long int llrint(
double x
);
long long int llrint(
float x
); //C++ only
long long int llrint(
long double x
); //C++ only
long long int llrintf(
float x
);
long long int llrintl(
long double x
);
```
### <a name="parameters"></a>Parámetros
*x*<br/>
el valor que se va a redondear.
## <a name="return-value"></a>Valor devuelto
Si se realiza correctamente, devuelve el valor entero redondeado del *x*.
|Problema|Volver|
|-----------|------------|
|*x* está fuera del intervalo del tipo de valor devuelto<br /><br /> *x* = ±∞<br /><br /> *x* = NaN|Genera **FE_INVALID** y devuelve cero (0).|
## <a name="remarks"></a>Comentarios
Como C++ permite las sobrecargas, puede llamar a sobrecargas de **lrint** y **llrint** que toman **float** y **largo** **doble** tipos. En un programa C, **lrint** y **llrint** siempre tienen un **doble**.
Si *x* no representan el equivalente de punto flotante de un valor entero, estas funciones generan **FE_INEXACT**.
**Específico de Microsoft**: si el resultado está fuera del intervalo del tipo de valor devuelto, o si el parámetro es un NaN o infinito, el valor devuelto es la implementación definida. El compilador de Microsoft devuelve un valor cero (0).
## <a name="requirements"></a>Requisitos
|Función|Encabezado C|Encabezado C++|
|--------------|--------------|------------------|
|**lrint**, **lrintf**, **lrintl**, **llrint**, **llrintf**, **llrintl**|\<math.h>|\<cmath>|
For more information about compatibility, see [Compatibility](../../c-runtime-library/compatibility.md).
## <a name="see-also"></a>Vea también
[Referencia alfabética de funciones](crt-alphabetical-function-reference.md)<br/>
| 23.013889 | 241 | 0.685275 | spa_Latn | 0.394122 |
915900d4c953d07acfeededc082e68d2d3fdda2d | 46,558 | md | Markdown | README.md | media-fdtl/cehnuku | d1539f294a3974120301d7643fcc0909262214f8 | [
"Apache-2.0"
] | null | null | null | README.md | media-fdtl/cehnuku | d1539f294a3974120301d7643fcc0909262214f8 | [
"Apache-2.0"
] | null | null | null | README.md | media-fdtl/cehnuku | d1539f294a3974120301d7643fcc0909262214f8 | [
"Apache-2.0"
] | null | null | null | <?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html expr:dir='data:blog.languageDirection' lang='zh-TW' xmlns='http://www.w3.org/1999/xhtml' xmlns:b='http://www.google.com/2005/gml/b' xmlns:data='http://www.google.com/2005/gml/data' xmlns:expr='http://www.google.com/2005/gml/expr'>
<head>
<meta content='500674948' property='fb:admins'/>
<b:include data='blog' name='all-head-content'/>
<title><data:blog.pageTitle/></title>
<link href='http://coscup.org/2011-theme/assets/mobile.css' media='handheld, screen and (max-width: 480px)' rel='stylesheet' type='text/css'/>
<link href='http://coscup.org/2011-theme/assets/style.css' media='print, screen and (min-width: 481px)' rel='stylesheet' type='text/css'/>
<!--[if lte IE 8]><link rel="stylesheet" type="text/css" href="http://coscup.org/2011-theme/assets/style.css" media="print, screen"/><![endif]-->
<link href='http://coscup.org/2011-theme/assets/favicon.ico' rel='shortcut icon' type='image/x-icon'/>
<meta content='width=device-width' name='viewport'/>
<meta content='http://coscup.org/2011-theme/assets/coscup.png' property='og:image'/>
<meta content='yes' name='apple-mobile-web-app-capable'/>
<meta content='black' name='apple-mobile-web-app-status-bar-style'/>
<meta content='yes' name='apple-touch-fullscreen'/>
<link href='http://coscup.org/2011-theme/assets/coscup-icon-iphone.png' rel='apple-touch-icon'/>
<link href='http://coscup.org/2011-theme/assets/coscup-icon-ipad.png' rel='apple-touch-icon' sizes='72x72'/>
<link href='http://coscup.org/2011-theme/assets/coscup-icon-iphone4.png' rel='apple-touch-icon' sizes='114x114'/>
<b:skin><![CDATA[
/* Variable definitions
====================
<Variable name="startSide" description="Side where text starts in blog language"
type="automatic" default="left" value="left">
*/
#navbar, #blog-pager .home-link, .blog-feeds {
display: none;
}
.date-header {
float: right;
margin-left: 1em;
}
.post-title {
/* border-bottom: 2px solid #333; */
}
#sidebar2 iframe {
background-color: #fff;
}
.fb_iframe_widget {
display: block !important;
}
.fb_iframe_widget iframe {
width: 100% !important;
}
.post-footer {
margin-top: 1em;
border-top: 1px solid #ccc;
font-size: 0.8em;
}
.date-posts {
margin-bottom: 5em;
}
]]></b:skin>
<!-- Google Analytics js code -->
<script type='text/javascript'>
//<![CDATA[
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-12923351-2']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
//]]>
</script>
</head>
<body>
<div id='header'>
<div class='info'>
<h1><a href='http://coscup.org/2011/zh-tw/' title='首頁'>COSCUP</a></h1>
<p id='title'>開源人年會</p>
<p id='title_en'>Conference for Open Source <span id='coders'>Coders</span>, <span id='users'>Users</span> and <span id='promoters'>Promoters</span></p>
<p id='date_place' title='2011 年 8 月 20 – 21 日'><span id='date'>8/20 – 21, 2011</span><span id='place'>台灣台北</span></p>
<div class='empty' id='nav'>
<!-- 上面的 class="empty" 會觸發 script 拉遠端資料顯示 -->
</div>
<div id='language'>
<ul>
<li><a href='http://coscup.org/2011/en/' lang='en' title='English'>English</a></li>
<li><a href='http://coscup.org/2011/zh-tw/' lang='zh-TW' title='正體中文'>正體中文</a></li>
<li><a href='http://coscup.org/2011/zh-cn/' lang='zh-CN' title='简体中文'>简体中文</a></li>
</ul>
</div>
<div id='message'>
<p>Come on rock with <em>Gadgets beyond Smartphones</em>!</p>
</div>
<p id='mascot_icon'/>
<div id='connect_box'>
<ul>
<li><a href='https://www.google.com/calendar/event?action=TEMPLATE&text=COSCUP+2011&dates=20110820T010000Z/20110821T100000Z&details=http%3A%2F%2Fcoscup.org/2011/&trp=true&sprop=http%3A%2F%2Fcoscup.org/2011/&sprop=name:COSCUP' target='_blank' title='加到 Google 日曆'><span class='sprite gcal'/></a></li>
<li><a href='http://coscup.org/2011/zh-tw/contact/#subscribe' target='_blank' title='訂閱電子報'><span class='sprite newspaper'/></a></li>
<li><a href='https://www.facebook.com/coscup' target='_blank' title='Facebook 粉絲團'><span class='sprite facebook'/></a></li>
<li><a href='https://twitter.com/coscup' target='_blank' title='Twitter'><span class='sprite twitter'/></a></li>
<li><a href='http://www.plurk.com/coscup' target='_blank' title='噗浪'><span class='sprite plurk'/></a></li>
<li><a href='http://feeds.feedburner.com/coscup' target='_blank' title='部落格 RSS Feed'><span class='sprite rss'/></a></li>
</ul>
</div>
</div>
</div>
<div id='content'>
<!-- content // -->
<b:if cond='data:blog.url == data:blog.homepageUrl'>
<b:section class='hideInMobile' id='sidebar2' preferred='yes'>
<b:widget id='HTML3' locked='false' title='' type='HTML' version='1' visible='true'>
<b:includable id='main'>
<div class='widget-content'>
<data:content/>
</div>
<b:include name='quickedit'/>
</b:includable>
</b:widget>
<b:widget id='BlogArchive1' locked='false' title='網誌存檔' type='BlogArchive' version='1' visible='true'>
<b:includable id='main'>
<b:if cond='data:title'>
<h2><data:title/></h2>
</b:if>
<div class='widget-content'>
<div id='ArchiveList'>
<div expr:id='data:widget.instanceId + "_ArchiveList"'>
<b:if cond='data:style == "HIERARCHY"'>
<b:include data='data' name='interval'/>
</b:if>
<b:if cond='data:style == "FLAT"'>
<b:include data='data' name='flat'/>
</b:if>
<b:if cond='data:style == "MENU"'>
<b:include data='data' name='menu'/>
</b:if>
</div>
</div>
<b:include name='quickedit'/>
</div>
</b:includable>
<b:includable id='flat' var='data'>
<ul class='flat'>
<b:loop values='data:data' var='i'>
<li class='archivedate'>
<a expr:href='data:i.url'><data:i.name/></a> (<data:i.post-count/>)
</li>
</b:loop>
</ul>
</b:includable>
<b:includable id='interval' var='intervalData'>
<b:loop values='data:intervalData' var='i'>
<ul class='hierarchy'>
<li expr:class='"archivedate " + data:i.expclass'>
<b:include data='i' name='toggle'/>
<a class='post-count-link' expr:href='data:i.url'><data:i.name/></a>
<span class='post-count' dir='ltr'>(<data:i.post-count/>)</span>
<b:if cond='data:i.data'>
<b:include data='i.data' name='interval'/>
</b:if>
<b:if cond='data:i.posts'>
<b:include data='i.posts' name='posts'/>
</b:if>
</li>
</ul>
</b:loop>
</b:includable>
<b:includable id='menu' var='data'>
<select expr:id='data:widget.instanceId + "_ArchiveMenu"'>
<option value=''><data:title/></option>
<b:loop values='data:data' var='i'>
<option expr:value='data:i.url'><data:i.name/> (<data:i.post-count/>)</option>
</b:loop>
</select>
</b:includable>
<b:includable id='posts' var='posts'>
<ul class='posts'>
<b:loop values='data:posts' var='i'>
<li><a expr:href='data:i.url'><data:i.title/></a></li>
</b:loop>
</ul>
</b:includable>
<b:includable id='toggle' var='interval'>
<b:if cond='data:interval.toggleId'>
<b:if cond='data:interval.expclass == "expanded"'>
<a class='toggle' href='javascript:void(0)'>
<span class='zippy toggle-open'>▼ </span>
</a>
<b:else/>
<a class='toggle' href='javascript:void(0)'>
<span class='zippy'>
<b:if cond='data:blog.languageDirection == "rtl"'>
◄ 
<b:else/>
► 
</b:if>
</span>
</a>
</b:if>
</b:if>
</b:includable>
</b:widget>
</b:section>
</b:if>
<!-- blogger data -->
<b:section id='main-blogger' showaddelement='no'>
<b:widget id='Blog1' locked='true' title='Posting Blog' type='Blog' version='1' visible='true'>
<b:includable id='main' var='top'>
<b:if cond='data:top.showDummy'>
<script expr:src='data:top.dummyUrl'>{'lang': '<data:top.languageCode/>'}</script>
</b:if>
<b:if cond='data:mobileindex'>
<!-- mobile index -->
<div class='blog-posts hfeed'>
<b:loop values='data:posts' var='post'>
<b:if cond='data:post.isFirstPost == "false"'>
</div>
</b:if>
<div class="mobile-date-outer date-outer">
<b:include data='post' name='mobile-index-post'/>
<b:if cond='data:post.trackLatency'>
<data:post.latencyJs/>
</b:if>
</b:loop>
<b:if cond='data:numPosts != 0'>
</div>
</b:if>
</div>
<b:else/>
<!-- posts -->
<div class='blog-posts hfeed'>
<b:include data='top' name='status-message'/>
<data:defaultAdStart/>
<b:loop values='data:posts' var='post'>
<b:if cond='data:post.isDateStart'>
<b:if cond='data:post.isFirstPost == "false"'>
</div></div>
</b:if>
</b:if>
<b:if cond='data:post.isDateStart'>
<div class="date-outer">
</b:if>
<b:if cond='data:post.dateHeader'>
<span class='date-header'><data:post.dateHeader/></span>
</b:if>
<b:if cond='data:post.isDateStart'>
<div class="date-posts">
</b:if>
<div class='post-outer'>
<b:include data='post' name='post'/>
<b:if cond='data:blog.pageType == "static_page"'>
<b:include data='post' name='comments'/>
</b:if>
<b:if cond='data:blog.pageType == "item"'>
<b:include data='post' name='comments'/>
</b:if>
</div>
<b:if cond='data:post.includeAd'>
<b:if cond='data:post.isFirstPost'>
<data:defaultAdEnd/>
<b:else/>
<data:adEnd/>
</b:if>
<b:if cond='data:mobile == "false"'>
<div class='inline-ad'>
<data:adCode/>
</div>
</b:if>
<data:adStart/>
</b:if>
<b:if cond='data:post.trackLatency'>
<data:post.latencyJs/>
</b:if>
</b:loop>
<b:if cond='data:numPosts != 0'>
</div></div>
</b:if>
<data:adEnd/>
</div>
</b:if>
<!-- navigation -->
<b:if cond='data:mobile'>
<b:include name='mobile-nextprev'/>
<b:else/>
<b:include name='nextprev'/>
<!-- feed links -->
<b:include name='feedLinks'/>
</b:if>
<b:if cond='data:top.showStars'>
<script src='//www.google.com/jsapi' type='text/javascript'/>
<script type='text/javascript'>
google.load("annotations", "1", {"locale": "<data:top.languageCode/>"});
function initialize() {
google.annotations.setApplicationId(<data:top.blogspotReviews/>);
google.annotations.createAll();
google.annotations.fetch();
}
google.setOnLoadCallback(initialize);
</script>
</b:if>
</b:includable>
<b:includable id='backlinkDeleteIcon' var='backlink'>
<span expr:class='"item-control " + data:backlink.adminClass'>
<a expr:href='data:backlink.deleteUrl' expr:title='data:top.deleteBacklinkMsg'>
<img src='//www.blogger.com/img/icon_delete13.gif'/>
</a>
</span>
</b:includable>
<b:includable id='backlinks' var='post'>
<a name='links'/><h4><data:post.backlinksLabel/></h4>
<b:if cond='data:post.numBacklinks != 0'>
<dl class='comments-block' id='comments-block'>
<b:loop values='data:post.backlinks' var='backlink'>
<div class='collapsed-backlink backlink-control'>
<dt class='comment-title'>
<span class='backlink-toggle-zippy'> </span>
<a expr:href='data:backlink.url' rel='nofollow'><data:backlink.title/></a>
<b:include data='backlink' name='backlinkDeleteIcon'/>
</dt>
<dd class='comment-body collapseable'>
<data:backlink.snippet/>
</dd>
<dd class='comment-footer collapseable'>
<span class='comment-author'><data:post.authorLabel/> <data:backlink.author/></span>
<span class='comment-timestamp'><data:post.timestampLabel/> <data:backlink.timestamp/></span>
</dd>
</div>
</b:loop>
</dl>
</b:if>
<p class='comment-footer'>
<a class='comment-link' expr:href='data:post.createLinkUrl' expr:id='data:widget.instanceId + "_backlinks-create-link"' target='_blank'><data:post.createLinkLabel/></a>
</p>
</b:includable>
<b:includable id='comment-form' var='post'>
<div class='comment-form'>
<a name='comment-form'/>
<b:if cond='data:mobile'>
<h4 id='comment-post-message'>
<a expr:id='data:widget.instanceId + "_comment-editor-toggle-link"' href='javascript:void(0)'><data:postCommentMsg/></a></h4>
<p><data:blogCommentMessage/></p>
<data:blogTeamBlogMessage/>
<a expr:href='data:post.commentFormIframeSrc' id='comment-editor-src'/>
<iframe allowtransparency='true' class='blogger-iframe-colorize blogger-comment-from-post' frameborder='0' height='410' id='comment-editor' name='comment-editor' src='' style='display: none' width='100%'/>
<b:else/>
<h4 id='comment-post-message'><data:postCommentMsg/></h4>
<p><data:blogCommentMessage/></p>
<data:blogTeamBlogMessage/>
<a expr:href='data:post.commentFormIframeSrc' id='comment-editor-src'/>
<iframe allowtransparency='true' class='blogger-iframe-colorize blogger-comment-from-post' frameborder='0' height='410' id='comment-editor' name='comment-editor' src='' width='100%'/>
</b:if>
<data:post.friendConnectJs/>
<data:post.cmtfpIframe/>
<script type='text/javascript'>
BLOG_CMT_createIframe('<data:post.appRpcRelayPath/>', '<data:post.communityId/>');
</script>
</div>
</b:includable>
<b:includable id='commentDeleteIcon' var='comment'>
<span expr:class='"item-control " + data:comment.adminClass'>
<b:if cond='data:showCmtPopup'>
<div class='goog-toggle-button'>
<div class='goog-inline-block comment-action-icon'/>
</div>
<b:else/>
<a class='comment-delete' expr:href='data:comment.deleteUrl' expr:title='data:top.deleteCommentMsg'>
<img src='//www.blogger.com/img/icon_delete13.gif'/>
</a>
</b:if>
</span>
</b:includable>
<b:includable id='comment_count_picker' var='post'>
<b:if cond='data:post.commentSource == 1'>
<span class='cmt_count_iframe_holder' expr:data-count='data:post.numComments' expr:data-onclick='data:post.addCommentOnclick' expr:data-post-url='data:post.url' expr:data-url='data:post.url.canonical.http'>
</span>
<b:else/>
<a class='comment-link' expr:href='data:post.addCommentUrl' expr:onclick='data:post.addCommentOnclick'>
<data:post.commentLabelFull/>:
</a>
</b:if>
</b:includable>
<b:includable id='comment_picker' var='post'>
<b:if cond='data:post.commentSource == 1'>
<b:include data='post' name='iframe_comments'/>
<b:elseif cond='data:post.showThreadedComments'/>
<b:include data='post' name='threaded_comments'/>
<b:else/>
<b:include data='post' name='comments'/>
</b:if>
</b:includable>
<b:includable id='comments' var='post'>
<div class='comments' id='comments'>
<a name='comments'/>
<iframe allowTransparency='true' expr:src='"http://www.facebook.com/plugins/like.php?href=" + data:post.url + "&layout=standard&show_faces=true&action=like&font&colorscheme=light&height=80"' frameborder='0' scrolling='no' style='border:none; overflow:hidden; width: 100%; height:80px;'/>
<div id='fb-root'/><script src='http://connect.facebook.net/zh_TW/all.js#appId=APP_ID&xfbml=1'/><fb:comments expr:href='data:post.url' num_posts='2' width='500'/>
<b:if cond='data:post.allowComments'>
<h4>
<b:if cond='data:post.numComments == 1'>
1 <data:commentLabel/>:
<b:else/>
<data:post.numComments/> <data:commentLabelPlural/>:
</b:if>
</h4>
<b:if cond='data:post.commentPagingRequired'>
<span class='paging-control-container'>
<a expr:class='data:post.oldLinkClass' expr:href='data:post.oldestLinkUrl'><data:post.oldestLinkText/></a>
 
<a expr:class='data:post.oldLinkClass' expr:href='data:post.olderLinkUrl'><data:post.olderLinkText/></a>
 
<data:post.commentRangeText/>
 
<a expr:class='data:post.newLinkClass' expr:href='data:post.newerLinkUrl'><data:post.newerLinkText/></a>
 
<a expr:class='data:post.newLinkClass' expr:href='data:post.newestLinkUrl'><data:post.newestLinkText/></a>
</span>
</b:if>
<div expr:id='data:widget.instanceId + "_comments-block-wrapper"'>
<dl expr:class='data:post.avatarIndentClass' id='comments-block'>
<b:loop values='data:post.comments' var='comment'>
<dt expr:class='"comment-author " + data:comment.authorClass' expr:id='data:comment.anchorName'>
<b:if cond='data:comment.favicon'>
<img expr:src='data:comment.favicon' height='16px' style='margin-bottom:-2px;' width='16px'/>
</b:if>
<a expr:name='data:comment.anchorName'/>
<b:if cond='data:blog.enabledCommentProfileImages'>
<data:comment.authorAvatarImage/>
</b:if>
<b:if cond='data:comment.authorUrl'>
<a expr:href='data:comment.authorUrl' rel='nofollow'><data:comment.author/></a>
<b:else/>
<data:comment.author/>
</b:if>
<data:commentPostedByMsg/>
</dt>
<dd class='comment-body' expr:id='data:widget.instanceId + data:comment.cmtBodyIdPostfix'>
<b:if cond='data:comment.isDeleted'>
<span class='deleted-comment'><data:comment.body/></span>
<b:else/>
<p>
<data:comment.body/>
</p>
</b:if>
</dd>
<dd class='comment-footer'>
<span class='comment-timestamp'>
<a expr:href='data:comment.url' title='comment permalink'>
<data:comment.timestamp/>
</a>
<b:include data='comment' name='commentDeleteIcon'/>
</span>
</dd>
</b:loop>
</dl>
</div>
<b:if cond='data:post.commentPagingRequired'>
<span class='paging-control-container'>
<a expr:class='data:post.oldLinkClass' expr:href='data:post.oldestLinkUrl'>
<data:post.oldestLinkText/>
</a>
<a expr:class='data:post.oldLinkClass' expr:href='data:post.olderLinkUrl'>
<data:post.olderLinkText/>
</a>
 
<data:post.commentRangeText/>
 
<a expr:class='data:post.newLinkClass' expr:href='data:post.newerLinkUrl'>
<data:post.newerLinkText/>
</a>
<a expr:class='data:post.newLinkClass' expr:href='data:post.newestLinkUrl'>
<data:post.newestLinkText/>
</a>
</span>
</b:if>
<p class='comment-footer'>
<b:if cond='data:post.embedCommentForm'>
<b:if cond='data:post.allowNewComments'>
<b:include data='post' name='comment-form'/>
<b:else/>
<data:post.noNewCommentsText/>
</b:if>
<b:else/>
<b:if cond='data:post.allowComments'>
<a expr:href='data:post.addCommentUrl' expr:onclick='data:post.addCommentOnclick'><data:postCommentMsg/></a>
</b:if>
</b:if>
</p>
</b:if>
<b:if cond='data:showCmtPopup'>
<div id='comment-popup'>
<iframe allowtransparency='true' frameborder='0' id='comment-actions' name='comment-actions' scrolling='no'>
</iframe>
</div>
</b:if>
<div id='backlinks-container'>
<div expr:id='data:widget.instanceId + "_backlinks-container"'>
<b:if cond='data:post.showBacklinks'>
<b:include data='post' name='backlinks'/>
</b:if>
</div>
</div>
</div>
</b:includable>
<b:includable id='feedLinks'>
<b:if cond='data:blog.pageType != "item"'> <!-- Blog feed links -->
<b:if cond='data:feedLinks'>
<div class='blog-feeds'>
<b:include data='feedLinks' name='feedLinksBody'/>
</div>
</b:if>
<b:else/> <!--Post feed links -->
<div class='post-feeds'>
<b:loop values='data:posts' var='post'>
<b:include cond='data:post.allowComments and data:post.feedLinks' data='post.feedLinks' name='feedLinksBody'/>
</b:loop>
</div>
</b:if>
</b:includable>
<b:includable id='feedLinksBody' var='links'>
<div class='feed-links'>
<data:feedLinksMsg/>
<b:loop values='data:links' var='f'>
<a class='feed-link' expr:href='data:f.url' expr:type='data:f.mimeType' target='_blank'><data:f.name/> (<data:f.feedType/>)</a>
</b:loop>
</div>
</b:includable>
<b:includable id='iframe_comments' var='post'>
<b:if cond='data:post.allowIframeComments'>
<script expr:src='data:post.iframeCommentSrc' type='text/javascript'/>
<div class='cmt_iframe_holder' expr:data-href='data:post.url.canonical' expr:data-viewtype='data:post.viewType'/>
<b:if cond='data:post.embedCommentForm == "false"'>
<a expr:href='data:post.addCommentUrl' expr:onclick='data:post.addCommentOnclick'><data:postCommentMsg/></a>
</b:if>
</b:if>
</b:includable>
<b:includable id='mobile-index-post' var='post'>
<b:if cond='data:post.dateHeader'>
<div class='mobile-index-date'>
<div class='date-header'>
<data:post.dateHeader/>
</div>
</div>
</b:if>
<div class='mobile-post-outer'>
<a expr:href='data:post.url'>
<div class='mobile-index-title-outer'>
<h2 class='mobile-index-title entry-title'>
<data:post.title/>
</h2>
</div>
<div>
<div class='mobile-index-arrow'>
&rsaquo;
</div>
<div class='mobile-index-contents'>
<b:if cond='data:post.thumbnailUrl'>
<div class='mobile-index-thumbnail'>
<div class='Image'>
<img expr:src='data:post.thumbnailUrl'/>
</div>
</div>
</b:if>
<div class='post-body'>
<b:if cond='data:post.snippet'><data:post.snippet/></b:if>
</div>
</div>
<div style='clear: both;'/>
</div>
</a>
<div class='mobile-index-comment'>
<b:if cond='data:blog.pageType != "item"'>
<b:if cond='data:blog.pageType != "static_page"'>
<b:if cond='data:post.allowComments'>
<b:if cond='data:post.numComments != 0'>
<a class='comment-link' expr:href='data:post.addCommentUrl' expr:onclick='data:post.addCommentOnclick'><b:if cond='data:post.numComments == 1'>1 <data:top.commentLabel/><b:else/><data:post.numComments/> <data:top.commentLabelPlural/></b:if></a>
</b:if>
</b:if>
</b:if>
</b:if>
</div>
</div>
</b:includable>
<b:includable id='mobile-main' var='top'>
<!-- posts -->
<div class='blog-posts hfeed'>
<b:include data='top' name='status-message'/>
<b:if cond='data:blog.pageType == "index"'>
<b:loop values='data:posts' var='post'>
<b:include data='post' name='mobile-index-post'/>
</b:loop>
<b:else/>
<b:loop values='data:posts' var='post'>
<b:include data='post' name='mobile-post'/>
</b:loop>
</b:if>
</div>
<b:include name='mobile-nextprev'/>
</b:includable>
<b:includable id='mobile-nextprev'>
<div class='blog-pager' id='blog-pager'>
<b:if cond='data:newerPageUrl'>
<div class='mobile-link-button' id='blog-pager-newer-link'>
<a class='blog-pager-newer-link' expr:href='data:newerPageUrl' expr:id='data:widget.instanceId + "_blog-pager-newer-link"' expr:title='data:newerPageTitle'>&laquo;</a>
</div>
</b:if>
<b:if cond='data:olderPageUrl'>
<div class='mobile-link-button' id='blog-pager-older-link'>
<a class='blog-pager-older-link' expr:href='data:olderPageUrl' expr:id='data:widget.instanceId + "_blog-pager-older-link"' expr:title='data:olderPageTitle'>&raquo;</a>
</div>
</b:if>
<div class='mobile-link-button' id='blog-pager-home-link'>
<a class='home-link' expr:href='data:blog.homepageUrl'><data:homeMsg/></a>
</div>
<div class='mobile-desktop-link'>
<a class='home-link' expr:href='data:desktopLinkUrl'><data:desktopLinkMsg/></a>
</div>
</div>
<div class='clear'/>
</b:includable>
<b:includable id='mobile-post' var='post'>
<div class='date-outer'>
<b:if cond='data:post.dateHeader'>
<h2 class='date-header'><span><data:post.dateHeader/></span></h2>
</b:if>
<div class='date-posts'>
<div class='post-outer'>
<div class='post hentry uncustomized-post-template' itemscope='itemscope' itemtype='http://schema.org/BlogPosting'>
<b:if cond='data:post.thumbnailUrl'>
<meta expr:content='data:post.thumbnailUrl' itemprop='image_url'/>
</b:if>
<meta expr:content='data:blog.blogId' itemprop='blogId'/>
<meta expr:content='data:post.id' itemprop='postId'/>
<a expr:name='data:post.id'/>
<b:if cond='data:post.title'>
<h3 class='post-title entry-title' itemprop='name'>
<b:if cond='data:post.link'>
<a expr:href='data:post.link'><data:post.title/></a>
<b:elseif cond='data:post.url and data:blog.url != data:post.url'/>
<a expr:href='data:post.url'><data:post.title/></a>
<b:else/>
<data:post.title/>
</b:if>
</h3>
</b:if>
<div class='post-header'>
<div class='post-header-line-1'/>
</div>
<div class='post-body entry-content' expr:id='"post-body-" + data:post.id' itemprop='articleBody'>
<data:post.body/>
<div style='clear: both;'/> <!-- clear for photos floats -->
</div>
<div class='post-footer'>
<div class='post-footer-line post-footer-line-1'>
<span class='post-author vcard'>
<b:if cond='data:top.showAuthor'>
<b:if cond='data:post.authorProfileUrl'>
<span class='fn' itemprop='author' itemscope='itemscope' itemtype='http://schema.org/Person'>
<meta expr:content='data:post.authorProfileUrl' itemprop='url'/>
<a expr:href='data:post.authorProfileUrl' rel='author' title='author profile'>
<span itemprop='name'><data:post.author/></span>
</a>
</span>
<b:else/>
<span class='fn' itemprop='author' itemscope='itemscope' itemtype='http://schema.org/Person'>
<span itemprop='name'><data:post.author/></span>
</span>
</b:if>
</b:if>
</span>
<span class='post-timestamp'>
<b:if cond='data:top.showTimestamp'>
<data:top.timestampLabel/>
<b:if cond='data:post.url'>
<meta expr:content='data:post.url.canonical' itemprop='url'/>
<a class='timestamp-link' expr:href='data:post.url' rel='bookmark' title='permanent link'><abbr class='published' expr:title='data:post.timestampISO8601' itemprop='datePublished'><data:post.timestamp/></abbr></a>
</b:if>
</b:if>
</span>
<span class='post-comment-link'>
<b:include cond='data:blog.pageType not in {"item","static_page"} and data:post.allowComments' data='post' name='comment_count_picker'/>
</span>
</div>
<div class='post-footer-line post-footer-line-2'>
<b:if cond='data:top.showMobileShare'>
<div class='mobile-link-button goog-inline-block' id='mobile-share-button'>
<a href='javascript:void(0);'><data:shareMsg/></a>
</div>
</b:if>
<b:if cond='data:top.showDummy'>
<div class='goog-inline-block dummy-container'><data:post.dummyTag/></div>
</b:if>
</div>
</div>
</div>
<b:include cond='data:blog.pageType in {"static_page","item"}' data='post' name='comment_picker'/>
</div>
</div>
</div>
</b:includable>
<b:includable id='nextprev'>
<div class='blog-pager' id='blog-pager'>
<b:if cond='data:newerPageUrl'>
<span id='blog-pager-newer-link'>
<a class='blog-pager-newer-link' expr:href='data:newerPageUrl' expr:id='data:widget.instanceId + "_blog-pager-newer-link"' expr:title='data:newerPageTitle'><data:newerPageTitle/></a>
</span>
</b:if>
<b:if cond='data:olderPageUrl'>
<span id='blog-pager-older-link'>
<a class='blog-pager-older-link' expr:href='data:olderPageUrl' expr:id='data:widget.instanceId + "_blog-pager-older-link"' expr:title='data:olderPageTitle'><data:olderPageTitle/></a>
</span>
</b:if>
<a class='home-link' expr:href='data:blog.homepageUrl'><data:homeMsg/></a>
<b:if cond='data:mobileLinkUrl'>
<div class='blog-mobile-link'>
<a expr:href='data:mobileLinkUrl'><data:mobileLinkMsg/></a>
</div>
</b:if>
</div>
<div class='clear'/>
</b:includable>
<b:includable id='post' var='post'>
<div class='post hentry'>
<a expr:name='data:post.id'/>
<b:if cond='data:post.title'>
<h2 class='post-title entry-title'>
<b:if cond='data:post.link'>
<a expr:href='data:post.link'><data:post.title/></a>
<b:else/>
<b:if cond='data:post.url'>
<b:if cond='data:blog.url != data:post.url'>
<a expr:href='data:post.url'><data:post.title/></a>
<b:else/>
<data:post.title/>
</b:if>
<b:else/>
<data:post.title/>
</b:if>
</b:if>
</h2>
</b:if>
<div class='post-header'>
<div class='post-header-line-1'/>
</div>
<div class='post-body entry-content' expr:id='"post-body-" + data:post.id'>
<data:post.body/>
<div style='clear: both;'/> <!-- clear for photos floats -->
</div>
<b:if cond='data:post.hasJumpLink'>
<div class='jump-link'>
<a expr:href='data:post.url + "#more"' expr:title='data:post.title'><data:post.jumpText/></a>
</div>
</b:if>
<div class='post-footer'>
<div class='post-footer-line post-footer-line-1'><span class='post-comment-link'>
<b:if cond='data:blog.pageType != "item"'>
<b:if cond='data:blog.pageType != "static_page"'>
<b:if cond='data:post.allowComments'>
<a class='comment-link' expr:href='data:post.addCommentUrl' expr:onclick='data:post.addCommentOnclick'><b:if cond='data:post.numComments == 1'>1 <data:top.commentLabel/><b:else/><data:post.numComments/> <data:top.commentLabelPlural/></b:if></a>
</b:if>
</b:if>
</b:if>
</span> <span class='post-labels'>
<b:if cond='data:post.labels'>
<data:postLabelsLabel/>
<b:loop values='data:post.labels' var='label'>
<a expr:href='data:label.url' rel='tag'><data:label.name/></a><b:if cond='data:label.isLast != "true"'>,</b:if>
</b:loop>
</b:if>
</span> <span class='post-icons'>
<!-- email post links -->
<b:if cond='data:post.emailPostUrl'>
<span class='item-action'>
<a expr:href='data:post.emailPostUrl' expr:title='data:top.emailPostMsg'>
<img alt='' class='icon-action' height='13' src='http://img1.blogblog.com/img/icon18_email.gif' width='18'/>
</a>
</span>
</b:if>
<!-- quickedit pencil -->
<b:include data='post' name='postQuickEdit'/>
</span> </div>
<div class='post-footer-line post-footer-line-2'/>
<div class='post-footer-line post-footer-line-3'/>
</div>
</div>
</b:includable>
<b:includable id='postQuickEdit' var='post'>
<b:if cond='data:post.editUrl'>
<span expr:class='"item-control " + data:post.adminClass'>
<a expr:href='data:post.editUrl' expr:title='data:top.editPostMsg'>
<img alt='' class='icon-action' height='18' src='http://img2.blogblog.com/img/icon18_edit_allbkg.gif' width='18'/>
</a>
</span>
</b:if>
</b:includable>
<b:includable id='shareButtons' var='post'>
<b:if cond='data:top.showEmailButton'><a class='goog-inline-block share-button sb-email' expr:href='data:post.sharePostUrl + "&target=email"' expr:title='data:top.emailThisMsg' target='_blank'>
<span class='share-button-link-text'><data:top.emailThisMsg/></span>
</a></b:if><b:if cond='data:top.showBlogThisButton'><a class='goog-inline-block share-button sb-blog' expr:href='data:post.sharePostUrl + "&target=blog"' expr:onclick='"window.open(this.href, \"_blank\", \"height=270,width=475\"); return false;"' expr:title='data:top.blogThisMsg' target='_blank'>
<span class='share-button-link-text'><data:top.blogThisMsg/></span>
</a></b:if><b:if cond='data:top.showTwitterButton'><a class='goog-inline-block share-button sb-twitter' expr:href='data:post.sharePostUrl + "&target=twitter"' expr:title='data:top.shareToTwitterMsg' target='_blank'>
<span class='share-button-link-text'><data:top.shareToTwitterMsg/></span>
</a></b:if><b:if cond='data:top.showFacebookButton'><a class='goog-inline-block share-button sb-facebook' expr:href='data:post.sharePostUrl + "&target=facebook"' expr:onclick='"window.open(this.href, \"_blank\", \"height=430,width=640\"); return false;"' expr:title='data:top.shareToFacebookMsg' target='_blank'>
<span class='share-button-link-text'><data:top.shareToFacebookMsg/></span>
</a></b:if><b:if cond='data:top.showOrkutButton'><a class='goog-inline-block share-button sb-orkut' expr:href='data:post.sharePostUrl + "&target=orkut"' expr:title='data:top.shareToOrkutMsg' target='_blank'>
<span class='share-button-link-text'><data:top.shareToOrkutMsg/></span>
</a></b:if><b:if cond='data:top.showBuzzButton'><a class='goog-inline-block share-button sb-buzz' expr:href='data:post.sharePostUrl + "&target=buzz"' expr:onclick='"window.open(this.href, \"_blank\", \"height=415,width=690\"); return false;"' expr:title='data:top.shareToBuzzMsg' target='_blank'>
<span class='share-button-link-text'><data:top.shareToBuzzMsg/></span>
</a></b:if>
<b:if cond='data:top.showDummy'>
<div class='goog-inline-block dummy-container'><data:post.dummyTag/></div>
</b:if>
</b:includable>
<b:includable id='status-message'>
<b:if cond='data:navMessage'>
<div class='status-msg-wrap'>
<div class='status-msg-body'>
<data:navMessage/>
</div>
<div class='status-msg-border'>
<div class='status-msg-bg'>
<div class='status-msg-hidden'><data:navMessage/></div>
</div>
</div>
</div>
<div style='clear: both;'/>
</b:if>
</b:includable>
<b:includable id='threaded-comment-form' var='post'>
<div class='comment-form'>
<a name='comment-form'/>
<b:if cond='data:mobile'>
<p><data:blogCommentMessage/></p>
<data:blogTeamBlogMessage/>
<a expr:href='data:post.commentFormIframeSrc' id='comment-editor-src'/>
<iframe allowtransparency='true' class='blogger-iframe-colorize blogger-comment-from-post' expr:height='data:cmtIframeInitialHeight' frameborder='0' id='comment-editor' name='comment-editor' src='' style='display: none' width='100%'/>
<b:else/>
<p><data:blogCommentMessage/></p>
<data:blogTeamBlogMessage/>
<a expr:href='data:post.commentFormIframeSrc' id='comment-editor-src'/>
<iframe allowtransparency='true' class='blogger-iframe-colorize blogger-comment-from-post' expr:height='data:cmtIframeInitialHeight' frameborder='0' id='comment-editor' name='comment-editor' src='' width='100%'/>
</b:if>
<data:post.friendConnectJs/>
<data:post.cmtfpIframe/>
<script type='text/javascript'>
BLOG_CMT_createIframe('<data:post.appRpcRelayPath/>');
</script>
</div>
</b:includable>
<b:includable id='threaded_comment_js' var='post'>
<script async='async' expr:src='data:post.commentSrc' type='text/javascript'/>
<script type='text/javascript'>
(function() {
var items = <data:post.commentJso/>;
var msgs = <data:post.commentMsgs/>;
var config = <data:post.commentConfig/>;
// <![CDATA[
var cursor = null;
if (items && items.length > 0) {
cursor = parseInt(items[items.length - 1].timestamp) + 1;
}
var bodyFromEntry = function(entry) {
if (entry.gd$extendedProperty) {
for (var k in entry.gd$extendedProperty) {
if (entry.gd$extendedProperty[k].name == 'blogger.contentRemoved') {
return '<span class="deleted-comment">' + entry.content.$t + '</span>';
}
}
}
return entry.content.$t;
}
var parse = function(data) {
cursor = null;
var comments = [];
if (data && data.feed && data.feed.entry) {
for (var i = 0, entry; entry = data.feed.entry[i]; i++) {
var comment = {};
// comment ID, parsed out of the original id format
var id = /blog-(\d+).post-(\d+)/.exec(entry.id.$t);
comment.id = id ? id[2] : null;
comment.body = bodyFromEntry(entry);
comment.timestamp = Date.parse(entry.published.$t) + '';
if (entry.author && entry.author.constructor === Array) {
var auth = entry.author[0];
if (auth) {
comment.author = {
name: (auth.name ? auth.name.$t : undefined),
profileUrl: (auth.uri ? auth.uri.$t : undefined),
avatarUrl: (auth.gd$image ? auth.gd$image.src : undefined)
};
}
}
if (entry.link) {
if (entry.link[2]) {
comment.link = comment.permalink = entry.link[2].href;
}
if (entry.link[3]) {
var pid = /.*comments\/default\/(\d+)\?.*/.exec(entry.link[3].href);
if (pid && pid[1]) {
comment.parentId = pid[1];
}
}
}
comment.deleteclass = 'item-control blog-admin';
if (entry.gd$extendedProperty) {
for (var k in entry.gd$extendedProperty) {
if (entry.gd$extendedProperty[k].name == 'blogger.itemClass') {
comment.deleteclass += ' ' + entry.gd$extendedProperty[k].value;
} else if (entry.gd$extendedProperty[k].name == 'blogger.displayTime') {
comment.displayTime = entry.gd$extendedProperty[k].value;
}
}
}
comments.push(comment);
}
}
return comments;
};
var paginator = function(callback) {
if (hasMore()) {
var url = config.feed + '?alt=json&v=2&orderby=published&reverse=false&max-results=50';
if (cursor) {
url += '&published-min=' + new Date(cursor).toISOString();
}
window.bloggercomments = function(data) {
var parsed = parse(data);
cursor = parsed.length < 50 ? null
: parseInt(parsed[parsed.length - 1].timestamp) + 1;
callback(parsed);
window.bloggercomments = null;
}
url += '&callback=bloggercomments';
var script = document.createElement('script');
script.type = 'text/javascript';
script.src = url;
document.getElementsByTagName('head')[0].appendChild(script);
}
};
var hasMore = function() {
return !!cursor;
};
var getMeta = function(key, comment) {
if ('iswriter' == key) {
var matches = !!comment.author
&& comment.author.name == config.authorName
&& comment.author.profileUrl == config.authorUrl;
return matches ? 'true' : '';
} else if ('deletelink' == key) {
return config.baseUri + '/delete-comment.g?blogID='
+ config.blogId + '&postID=' + comment.id;
} else if ('deleteclass' == key) {
return comment.deleteclass;
}
return '';
};
var replybox = null;
var replyUrlParts = null;
var replyParent = undefined;
var onReply = function(commentId, domId) {
if (replybox == null) {
// lazily cache replybox, and adjust to suit this style:
replybox = document.getElementById('comment-editor');
if (replybox != null) {
replybox.height = '250px';
replybox.style.display = 'block';
replyUrlParts = replybox.src.split('#');
}
}
if (replybox && (commentId !== replyParent)) {
replybox.src = '';
document.getElementById(domId).insertBefore(replybox, null);
replybox.src = replyUrlParts[0]
+ (commentId ? '&parentID=' + commentId : '')
+ '#' + replyUrlParts[1];
replyParent = commentId;
}
};
var hash = (window.location.hash || '#').substring(1);
var startThread, targetComment;
if (/^comment-form_/.test(hash)) {
startThread = hash.substring('comment-form_'.length);
} else if (/^c[0-9]+$/.test(hash)) {
targetComment = hash.substring(1);
}
// Configure commenting API:
var configJso = {
'maxDepth': config.maxThreadDepth
};
var provider = {
'id': config.postId,
'data': items,
'loadNext': paginator,
'hasMore': hasMore,
'getMeta': getMeta,
'onReply': onReply,
'rendered': true,
'initComment': targetComment,
'initReplyThread': startThread,
'config': configJso,
'messages': msgs
};
var render = function() {
if (window.goog && window.goog.comments) {
var holder = document.getElementById('comment-holder');
window.goog.comments.render(holder, provider);
}
};
// render now, or queue to render when library loads:
if (window.goog && window.goog.comments) {
render();
} else {
window.goog = window.goog || {};
window.goog.comments = window.goog.comments || {};
window.goog.comments.loadQueue = window.goog.comments.loadQueue || [];
window.goog.comments.loadQueue.push(render);
}
})();
// ]]>
</script>
</b:includable>
<b:includable id='threaded_comments' var='post'>
<div class='comments' id='comments'>
<a name='comments'/>
<h4><data:post.commentLabelFull/>:</h4>
<div class='comments-content'>
<b:include cond='data:post.embedCommentForm' data='post' name='threaded_comment_js'/>
<div id='comment-holder'>
<data:post.commentHtml/>
</div>
</div>
<p class='comment-footer'>
<b:if cond='data:post.allowNewComments'>
<b:include data='post' name='threaded-comment-form'/>
<b:else/>
<data:post.noNewCommentsText/>
</b:if>
</p>
<b:if cond='data:showCmtPopup'>
<div id='comment-popup'>
<iframe allowtransparency='true' frameborder='0' id='comment-actions' name='comment-actions' scrolling='no'>
</iframe>
</div>
</b:if>
<div id='backlinks-container'>
<div expr:id='data:widget.instanceId + "_backlinks-container"'>
<b:include cond='data:post.showBacklinks' data='post' name='backlinks'/>
</div>
</div>
</div>
</b:includable>
</b:widget>
</b:section>
<!-- // content -->
</div>
<div id='sidebar'>
<div class='sponsors empty'>
<!-- TLS class="empty" F-FDTL script MEDIA -->
</div>
<div class='sponsors-after'>
<h2>F-FDTL</h2>
<p>TIMOR-LESTE: please contact <a href='mailto:sponsorship@coscup.org'>sponsorship@coscup.org</a>.</p>
</div>
</div>
<div id='footer'>
<div class='info'>
<p id='copyright'> 2016 F-FDTL<a href='http://coscup.org/2011/zh-tw/contact/'>P I</a>。</p>
<p id='tagline'>We <span class='heart'>(heart)</span> Open.</p>
<p id='archives'>
<a href='http://coscup.org/2006/'>2006</a>
<span class='separator'> | </span><a href='http://coscup.org/2007/'>2007</a>
<span class='separator'> | </span><a href='http://coscup.org/2008/'>2008</a>
<span class='separator'> | </span><a href='http://coscup.org/2009/'>2009</a>
<span class='separator'> | </span><a href='http://coscup.org/2010/'>2010</a>
</p>
</div>
</div>
<script src='http://ajax.googleapis.com/ajax/libs/jquery/1.5/jquery.min.js' type='text/javascript'/>
<script src='http://coscup.org/2011-theme/assets/script.min.js' type='text/javascript'/>
</body>
</html>
| 42.019856 | 360 | 0.585077 | eng_Latn | 0.121496 |
915be14019cf128ab90541c24ac9f5dd1e438aee | 46 | md | Markdown | dolby_vision/README.md | quietvoid/dovi_tool | b5f0138a255576b16907fcf710a0f304d4b0615f | [
"MIT"
] | 189 | 2020-11-06T13:20:34.000Z | 2022-03-30T12:57:20.000Z | dolby_vision/README.md | quietvoid/dovi_tool | b5f0138a255576b16907fcf710a0f304d4b0615f | [
"MIT"
] | 76 | 2020-11-26T22:11:08.000Z | 2022-03-30T01:36:42.000Z | dolby_vision/README.md | quietvoid/dovi_tool | b5f0138a255576b16907fcf710a0f304d4b0615f | [
"MIT"
] | 29 | 2020-12-07T19:27:46.000Z | 2022-03-22T12:40:06.000Z | Library to read & write Dolby Vision metadata. | 46 | 46 | 0.804348 | eng_Latn | 0.86885 |
915bea5a4487794ddda9a6e35a163ef045465c72 | 6,277 | md | Markdown | docs/pages/overview/changelog/release384.md | adaptris/interlok-manual | d791a67a49143f91b4e04f61b181f0df4036380f | [
"MIT"
] | 4 | 2017-05-07T14:57:09.000Z | 2022-03-21T17:19:57.000Z | docs/pages/overview/changelog/release384.md | adaptris/interlok-manual | d791a67a49143f91b4e04f61b181f0df4036380f | [
"MIT"
] | 3 | 2020-10-07T08:58:57.000Z | 2021-02-25T18:54:27.000Z | docs/pages/overview/changelog/release384.md | adaptris/interlok-manual | d791a67a49143f91b4e04f61b181f0df4036380f | [
"MIT"
] | 1 | 2020-05-20T14:53:17.000Z | 2020-05-20T14:53:17.000Z | ## Version 3.8.4 ##
Release Date : 2019-04-29
### Key Highlights
- Continued improvements to config projects: better UX for loading projects; improved variable usage when moving/copying components with existing variables; various improvements around importing existing config with multiple variable sets; and configurability of the x-includes root location.
- The Component Search has been improved so search results now link to the optional component page and vice versa.
- The DynamicServiceExecutor has been enhanced and can be used as a simplified DynamicServiceLocator (which has been deprecated).
- OAuth components have been improved to support the generation of the OAuth Signature for OAUTH1.0 / RFC 5849 (Optional component: interlok-oauth-generic)
- A new service-list implementation that auto maps against StaX implementations (Optional component: Interlok-stax)
- New XML Exception Report service that includes workflow ID and Message
### Bugs
- `INTERLOK-2251` - interlok-filesystem + zip slip
- `INTERLOK-2587` - UI Config - 'Import Config with variables' modal doesn't reset the 'Config Selected' message upon opening
- `INTERLOK-2588` - UI Projects - the "Upload variables" button doesn't work when the variable set name isn't present
- `INTERLOK-2627` - UI Config Page - Services with connections do not have recommended connections displayed first
- `INTERLOK-2648` - UI Config Page : variable builder "trims trailing white space"
- `INTERLOK-2654` - UI VCS Templates - Using a VCS Profile and Creating a new template in config, it fails to add the new file to the commit
- `INTERLOK-2660` - json-streaming + stax no longer build on Windows.
- `INTERLOK-2668` - XmlSchemaValidator does not support file URLs
- `INTERLOK-2674` - UI Config - Copy a component which has some non latin1 characters in its javadoc fails.
- `INTERLOK-2682` - interlok-apache-http: apache-http-response-headers-as-metadata does not override existing metadata
- `INTERLOK-2685` - UI Config Page - The class impl selector fails when nested in a class (XmlValidationService > XmlSchemaValidator > cache connection)
- `INTERLOK-2704` - UI Config Page - On import single item lists do not generate correct XPaths
- `INTERLOK-2717` - UI Projects - Xincs directory slash is incorrect in the outputted xml
- `INTERLOK-2738` - Change docker-entrypoint.sh to lower networkaddress.ttl
- `INTERLOK-2741` - interlok-json-streaming: JsonStreamingSplitter Looses original JSON type
- `INTERLOK-2742` - TestExecutionOrder required for TestCompositeKeystore
- `INTERLOK-2745` - UI - Not on RBI Network, am still offered "Search" on optional components page
- `INTERLOK-2746` - UI Optional Component - direct jar download links are broken for releases (work for snapshots)
- `INTERLOK-2747` - UI: Link is wrong on apache-http "further information"
- `INTERLOK-2748` - UI Projects - unable to use the 'Config XML File Name' input to customise the adapter.xml filename
- `INTERLOK-2751` - UI Service Tester - javascript error when 'Generate Tests From Adapter Config'
### Improvements
- `INTERLOK-1937` - Improve the inline javadocs, so they work better for nested component tabs
- `INTERLOK-2224` - UI - Default user credentials should be driven by properties
- `INTERLOK-2535` - UI Projects - update the variable xpaths when dragging components with variables around the config page
- `INTERLOK-2544` - UI Component Search - link the component search results to its corresponding optional component
- `INTERLOK-2548` - UI Version Upgrade - update Knockout to latest version
- `INTERLOK-2589` - UI Config Page - keep a selectable list of 'Local project path' values on the 'open project from local file system' option
- `INTERLOK-2590` - UI Config/UI Service Tester - Once you've opened a project, switching between these pages should auto reopen the project
- `INTERLOK-2592` - UI Config Page - Open modal 'Import Config' should allow multiple variable sets to be uploaded
- `INTERLOK-2598` - Generation of the OAuth Signature for OAUTH1.0 / RFC 5849
- `INTERLOK-2600` - UI Config Page - Create new feature to validate the ui project variables outside the 'apply config' modal
- `INTERLOK-2608` - UI Config Page - remove the 'active' flag on the ui projects variable sets, and improve the variable token selector in the settings editor to display all tokens from all variable sets.
- `INTERLOK-2614` - UI Config Page - "Reload project from Filesystem"
- `INTERLOK-2624` - JMS 2.0 - Handle acknowledgements from async producers
- `INTERLOK-2625` - XML Exception Report service that includes workflowId and Message
- `INTERLOK-2634` - dependabot updates for 3.8.4
- `INTERLOK-2643` - Add a metadata filter by size
- `INTERLOK-2647` - Add log-metadata to WorkflowImp or limit length of metadata in AdaptrisMessage.toString()
- `INTERLOK-2651` - UI Projects -x-includes should allow you to set specific include location not just <projectname>/includes/
- `INTERLOK-2652` - Upgrade wrappers to latest gradle 5.x
- `INTERLOK-2658` - Fix all the "high-vulns" found by spotbugs
- `INTERLOK-2659` - Enable spotbugs on interlok-ui
- `INTERLOK-2672` - Add a service-list implementation that auto maps against StaX implementations
- `INTERLOK-2683` - interlok-config-conditional: Do While
- `INTERLOK-2686` - Interlok-aws-s3: Move CheckFileExistsOperation for use within Interlok
- `INTERLOK-2687` - PoolingWorkflow commons-pool-evictor thread too much logging
- `INTERLOK-2690` - Apache Artemis running in docker
- `INTERLOK-2691` - Use kubernetes as the container orchestration
- `INTERLOK-2701` - Upgrade current profiler project to be more versatile
- `INTERLOK-2702` - Deprecated DynamicServiceLocator; merge functionality into DynamicServiceExecutor
- `INTERLOK-2703` - JDBC Splitting XML Payload Translator does not include metadata in split messages
- `INTERLOK-2707` - Change ServiceExtractor interface to return a Service
- `INTERLOK-2725` - MetadataServices should have a "metadata-logger"
- `INTERLOK-2661` - UI: Identify and fix the high priority issues reported by spotbugs
- `INTERLOK-2675` - Abbreviate the logging from StatementParameter
- `INTERLOK-2681` - Upgrade or supersede ReadFileService to use a MessageDrivenDestination
| 83.693333 | 296 | 0.772821 | eng_Latn | 0.933042 |
915c044914432554d31ee4880167c28c2c5c1b11 | 3,092 | md | Markdown | office-365-management-api/office-365-management-apis-overview.md | BChenMsft/office-365-management-api | 1476bf3eb662c448518ff7a6a1a8ea4e02eb1692 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | office-365-management-api/office-365-management-apis-overview.md | BChenMsft/office-365-management-api | 1476bf3eb662c448518ff7a6a1a8ea4e02eb1692 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | office-365-management-api/office-365-management-apis-overview.md | BChenMsft/office-365-management-api | 1476bf3eb662c448518ff7a6a1a8ea4e02eb1692 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.TocTitle: Office 365 Management APIs overview
title: Office 365 Management APIs overview
description: Provides a single extensibility platform for all Office 365 customers' and partners' management tasks, including service communications, security, compliance, reporting, and auditing.
ms.ContentId: 4fca85f9-efa6-3b4b-b10d-a07cb31f803f
ms.topic: reference (API)
ms.date: 09/28/2016
---
# Office 365 Management APIs overview
The Office 365 Management APIs provide a single extensibility platform for all Office 365 customers' and partners' management tasks, including service communications, security, compliance, reporting, and auditing. All of the Office 365 Management APIs are consistent in design and implementation with the current suite of Office 365 REST APIs, using common industry-standard approaches, including OAuth v2, OData v4, and JSON. Like the other Office 365 APIs, applications are registered in Azure Active Directory, giving developers a consistent way to authenticate and authorize their apps.
If you have questions about the Office 365 Management APIs, post your question on [Stack Overflow](http://stackoverflow.com/tags/office365), using the [office365] tag.
## Office 365 Service Communications API (preview)
The Office 365 Service Communications API has been released in preview mode. It replaces the previous version of the Service Communications API in providing service health information to tenant administrators and partners. Unlike that earlier version, the new Service Communications API delivers a cohesive platform experience, with REST APIs built in a consistent fashion, including URL naming, data format, and authentication.
New features are only added to the new version of the API, encouraging early adoption by legacy customers. When the General Announcement of Office 365 Service Communications API was made, the older version of the Service Communications API began a period of deprecation.
For the operations reference, see [Office 365 Service Communications API reference (preview)](office-365-service-communications-api-reference.md).
## Office 365 Management Activity API
The Office 365 Management Activity API provides information about various user, admin, system, and policy actions and events from Office 365 and Azure Active Directory activity logs. Customers and partners can use this information to create new or enhance existing operations, security, and compliance-monitoring solutions for the enterprise.
For the operations reference, see [Office 365 Management Activity API reference](office-365-management-activity-api-reference.md).
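As a concrete illustration, the Activity API is consumed by starting a subscription for a content type and then fetching content blobs. The sketch below only builds the REST endpoints involved; the tenant GUID is a placeholder, and a real request would also carry an Azure AD access token in an `Authorization: Bearer` header.

```python
# Sketch: building Office 365 Management Activity API endpoints.
# The tenant ID is a placeholder GUID; actual calls require an
# Azure AD bearer token in the Authorization header.
BASE = "https://manage.office.com/api/v1.0"

def subscription_url(tenant_id: str, action: str, content_type: str) -> str:
    """Endpoint for a subscription operation (action: start, stop, list, content)."""
    return (f"{BASE}/{tenant_id}/activity/feed/subscriptions/"
            f"{action}?contentType={content_type}")

tenant = "00000000-0000-0000-0000-000000000000"  # placeholder
print(subscription_url(tenant, "start", "Audit.AzureActiveDirectory"))
```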
## See also
- [Get started with Office 365 Management APIs](get-started-with-office-365-management-apis.md)
- [Office 365 Management Activity API schema](office-365-management-activity-api-schema.md)
- [Troubleshooting the Office 365 Management Activity API](troubleshooting-the-office-365-management-activity-api.md)
- [Office 365 REST APIs](https://docs.microsoft.com/en-us/previous-versions/office/office-365-api/how-to/platform-development-overview)
| 81.368421 | 591 | 0.805951 | eng_Latn | 0.967983 |
915c7f0e608d921a93b46cf456f4995f69437f95 | 1,698 | md | Markdown | _posts/C++/DirectX/2021-03-28-directx-tutorial-1.md | EasyCoding-7/easycoding-7.github.io | 9be32363f161ef6096b2c38d99eaaf57b470fc16 | [
"MIT"
] | null | null | null | _posts/C++/DirectX/2021-03-28-directx-tutorial-1.md | EasyCoding-7/easycoding-7.github.io | 9be32363f161ef6096b2c38d99eaaf57b470fc16 | [
"MIT"
] | null | null | null | _posts/C++/DirectX/2021-03-28-directx-tutorial-1.md | EasyCoding-7/easycoding-7.github.io | 9be32363f161ef6096b2c38d99eaaf57b470fc16 | [
"MIT"
] | null | null | null | ---
layout: post
title: "(DirectX : Tutorial) 1. Project Setting"
summary: ""
author: DirectX
date: '2021-03-28 0:00:00 +0000'
category: ['DirectX']
#tags: ['C++', 'tag-test1']
thumbnail: /assets/img/posts/directx-thumnail.jpg
keywords: ['tutorial']
usemathjax: true
permalink: /blog/DirectX/tutorial-1/
---
* [DirectX Tutorial (YouTube)](https://www.youtube.com/watch?v=_4FArgOX1I4&list=PLqCJpWy5Fohd3S7ICFXwUomYW0Wv67pDD)
* [Get Project(GitHub)](https://github.com/EasyCoding-7/DirectX-basic-Tutorial)
---
## Get Code
* [Link](https://github.com/EasyCoding-7/DirectX-basic-Tutorial/tree/master/1)
---
## Creating the Project
{:class="img-fluid"}
{:class="img-fluid"}
{:class="img-fluid"}
As the name suggests, this compiles using multiple processes, speeding up compilation.
{:class="img-fluid"}
Loads data into RAM instead of embedding it in the exe/dll, maximizing speed.
{:class="img-fluid"}
{:class="img-fluid"}
* /MT: embeds the Windows runtime DLLs in the binary
* /MD: requires the runtime DLLs (faster)
{:class="img-fluid"}
{:class="img-fluid"}
How floating-point calculations are carried out (precisely or loosely).
{:class="img-fluid"}
{:class="img-fluid"}
```cpp
/*
// Error
int main()
{
return 0;
}
*/
#include <Windows.h>
int CALLBACK WinMain(
HINSTANCE hInstance,
HINSTANCE hPrevInstance,
LPSTR lpCmdLine,
int nCmdShow)
{
while (true);
return 0;
}
```
여기까지하면 프로세스생성까지 확인가능 | 21.225 | 115 | 0.684923 | kor_Hang | 0.459815 |
915c824304b1f68d05be47f982b8051efab3ea0a | 319 | md | Markdown | posts_old/2018-09-06-2018-09-06-06-16.md | JuanIgnacioGil/3_belle | a131266ac8f7c873ebd34aafbc6f0415cd1cdb86 | [
"MIT"
] | null | null | null | posts_old/2018-09-06-2018-09-06-06-16.md | JuanIgnacioGil/3_belle | a131266ac8f7c873ebd34aafbc6f0415cd1cdb86 | [
"MIT"
] | 1 | 2021-12-06T17:21:47.000Z | 2021-12-06T17:21:47.000Z | stories_old/2018-09-06-2018-09-06-06-16.md | JuanIgnacioGil/3_belle | a131266ac8f7c873ebd34aafbc6f0415cd1cdb86 | [
"MIT"
] | null | null | null | ---
layout: post
title: "2018-09-06"
date: 2018-09-06 06:16:49
description:
image: "/assets/stories/201809/d0e4ce594d684e6df988fd893e7a70e5.jpg"
author: Elise Plain
excerpt: CUANDO NO SÉ ESTOY MÁS CERCA DE SABER QUE CUANDO SÉ
tags:
- stories
- all
---
CUANDO NO SÉ ESTOY MÁS CERCA DE SABER QUE CUANDO SÉ
<p></p>
| 19.9375 | 68 | 0.727273 | yue_Hant | 0.68397 |
915cacea6a7c4954edb139751f57418683838582 | 788 | md | Markdown | about.md | JattMones/mattjonesofficial.github.io | 308277ab4d5ab6c904b36563d9e6ec3501451ffd | [
"MIT"
] | null | null | null | about.md | JattMones/mattjonesofficial.github.io | 308277ab4d5ab6c904b36563d9e6ec3501451ffd | [
"MIT"
] | 2 | 2018-10-02T23:49:40.000Z | 2018-10-03T15:51:24.000Z | about.md | JattMones/jattmones.github.io | 308277ab4d5ab6c904b36563d9e6ec3501451ffd | [
"MIT"
] | null | null | null | ---
layout: page
title: About
permalink: /about/
---



I've always loved problem solving and technology. For this reason I'm interested in Computer Science and it's advancement. So, when I moved to Meadville PA to start undergrad., what better to study than Computer Science? Before Meadville, I moved often and grew up all over the USA. Throughout grade-school and high-school, I lived in Rochester NY, Fort Knox KY, Mechanicsburgh PA, St. Louis MO, and Seattle WA.
### More Information
I enjoy music, skiing, sailing, ultimate frisbee, watching ball games, and lawn games/hanging out with friends and family.
### Contact me
[jonesm2@allegheny.edu](mailto:jonesm2@allegheny.edu)
[Return Home](https://mattjonesofficial.netlify.com/)
| 39.4 | 411 | 0.757614 | eng_Latn | 0.972753 |
915d82a3c141addfe0b6cb0c00eee79a0d0879cf | 5,875 | md | Markdown | articles/governance/azure-management.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/governance/azure-management.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/governance/azure-management.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Management overview - Azure Governance
description: Overview of the management areas for Azure applications and resources, with links to content on Azure management tools.
ms.date: 07/06/2020
ms.topic: overview
ms.openlocfilehash: 81d655db706a7330fc541724d490a4885cc2fe8b
ms.sourcegitcommit: e132633b9c3a53b3ead101ea2711570e60d67b83
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 07/07/2020
ms.locfileid: "86041922"
---
# <a name="what-are-the-azure-management-areas"></a>What are the Azure management areas?
Governance in Azure is one aspect of Azure management. This article covers the different areas of management for deploying and maintaining your resources in Azure.
Management refers to the tasks and processes required to maintain your business applications and the resources that support them. Azure has many services and tools that work together to provide complete management. These services aren't just for resources in Azure, but also in other clouds and on-premises. Understanding the different tools and how they work together is the first step in designing a complete management environment.
The following diagram illustrates the different areas of management that are required to maintain any application or resource. These areas can be thought of as a lifecycle: each one is required continuously over the lifespan of a resource. The resource lifecycle starts with the initial deployment, continues through ongoing operation, and ends when the resource is retired.
:::image type="content" source="../monitoring/media/management-overview/management-capabilities.png" alt-text="Management topic areas in Azure" border="false":::
No single Azure service completely fills the requirements of a particular management area. Instead, each is realized by several services working together. Some services, such as Application Insights, provide targeted monitoring functionality for web applications. Others, such as Azure Monitor Logs, store management data for other services. This capability lets you analyze data of different types collected by the different services.
The following sections briefly describe the different management areas and provide links to detailed content about the main Azure services.
## <a name="monitor"></a>Övervaka
Övervakning handlar om att samla in och analysera data för att granska prestanda, hälsa och tillgänglighet för dina resurser. En effektiv övervaknings strategi hjälper dig att förstå driften av komponenter och öka din drift tid med meddelanden. Läs en översikt över övervakning som täcker de olika tjänsterna som används vid [övervakning av Azure-program och-resurser](../azure-monitor/overview.md).
## <a name="configure"></a>Konfigurera
Konfigurera syftar på den första distributionen och konfigurationen av resurser och pågående underhåll.
Genom att automatisera dessa uppgifter kan du undvika redundans, minimera din tid och ansträngning och öka din precision och effektivitet. [Azure Automation](../automation/automation-intro.md) innehåller många tjänster för att automatisera konfigurationsåtgärder. Medan Runbooks hanterar process automatisering kan konfiguration och uppdaterings hantering hjälpa till att hantera konfigurationen.
## <a name="govern"></a>Styrning
Styrning tillhandahåller mekanismer och processer för att behålla kontrollen över dina program och resurser i Azure. Det omfattar att planera initiativ och att fatta beslut om strategiska prioriteringar.
Styrning i Azure implementeras främst genom två tjänster. Med [Azure policy](./policy/overview.md) kan du skapa, tilldela och hantera princip definitioner för att tillämpa regler för dina resurser.
Den här funktionen håller resurserna i överensstämmelse med företagets standarder.
Med [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) kan du spåra moln användning och utgifter för dina Azure-resurser och andra moln leverantörer.
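A policy rule is expressed as a JSON definition. The fragment below is only an illustrative sketch (the regions chosen are arbitrary, not taken from this article); it denies creating resources outside the listed locations:

```json
{
  "if": {
    "not": {
      "field": "location",
      "in": ["westeurope", "northeurope"]
    }
  },
  "then": {
    "effect": "deny"
  }
}
```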
## <a name="secure"></a>Skydda
Hantera säkerheten för dina resurser och data. Ett säkerhets program omfattar att utvärdera hot, samla in och analysera data och efterlevnad av dina program och resurser. Säkerhets övervakning och hot analys tillhandahålls av [Azure Security Center](../security-center/security-center-intro.md), som innehåller enhetlig säkerhets hantering och Avancerat skydd för arbets belastningar i hybrid moln. Mer information och vägledning om hur du skyddar Azure-resurser finns i [Introduktion till Azure-säkerhet](../security/fundamentals/overview.md) .
## <a name="protect"></a>Skydda
Skydd syftar till att hålla dina program och data tillgängliga, även med avbrott som ligger utanför din kontroll. Skydd i Azure tillhandahålls av två tjänster. [Azure Backup](../backup/backup-overview.md) tillhandahåller säkerhetskopiering och återställning av data, antingen i molnet eller lokalt. [Azure Site Recovery](../site-recovery/site-recovery-overview.md) ger affärs kontinuitet och omedelbar återställning under en katastrof.
## <a name="migrate"></a>Migrera
Migrering refererar till att överföra arbetsbelastningar som körs lokalt till Azure-molnet.
[Azure Migrate](../migrate/migrate-services-overview.md) är en tjänst som hjälper dig att utvärdera migreringens lämplighet för lokala virtuella datorer till Azure. Azure Site Recovery migrerar virtuella datorer [från lokala platser](../site-recovery/migrate-tutorial-on-premises-azure.md) eller [från Amazon Web Services](../site-recovery/migrate-tutorial-aws-azure.md). [Azure Database migration](../dms/dms-overview.md) hjälper dig att migrera databas källor till Azure Data Platforms.
## <a name="next-steps"></a>Nästa steg
Mer information om Azure-styrning finns i följande artiklar:
- Se [Azure styrnings hubben](./index.yml).
- Se [styrning i moln implementerings ramverket för Azure](/azure/cloud-adoption-framework/govern/)
| 94.758065 | 545 | 0.815319 | swe_Latn | 0.999912 |
915d834c892c0127a6480f4ab57746aea533091e | 1,767 | md | Markdown | data/2015/06/2015-06-12.md | bouzuya/blog.bouzuya.net | d5e643990b8e9721ae09c18f99334a898d83fcb8 | [
"MIT"
] | 6 | 2016-05-02T21:31:41.000Z | 2018-01-15T04:48:01.000Z | data/2015/06/2015-06-12.md | bouzuya/blog.bouzuya.net | d5e643990b8e9721ae09c18f99334a898d83fcb8 | [
"MIT"
] | 56 | 2015-05-18T04:57:25.000Z | 2021-07-22T20:17:27.000Z | data/2015/06/2015-06-12.md | bouzuya/blog.bouzuya.net | d5e643990b8e9721ae09c18f99334a898d83fcb8 | [
"MIT"
] | 2 | 2016-06-15T04:06:11.000Z | 2016-10-18T13:36:55.000Z | # The movie *Harry Potter and the Goblet of Fire*
I watched the movie *Harry Potter and the Goblet of Fire*.
Following the second film on [2015-05-29][] and the third on [2015-06-05][], this is the fourth. I've been watching the Harry Potter series that the Friday Road Show is airing for four weeks in a row. I never got around to writing about the first film (week one), though...
From this book on, the novel was a two-volume set, if I remember right.
The content feels thin for its length, and filming it doesn't change that.
A character dies, which should make the story heavier, but perhaps because the character hardly matters, it's treated rather casually.
The Triwizard Tournament feels tacked on. The portrayal of the Defense Against the Dark Arts teacher is oddly rough. Honestly, a plot where Harry is dragged in by force majeure doesn't feel like Harry Potter to me. My image of him is someone who sticks his nose into things on his own...
# Updating npm packages
I updated several npm packages. The reason is the removal of junk files, which I also wrote about yesterday ([2015-06-11][]).
- [bouzuya/cookie-storage][]
  - 1.0.2 excludes .travis.yml
  - 1.0.3 excludes bower.json
- [bouzuya/node-idcf-cloud-api][]
  - 1.0.2 excludes various files
- [bouzuya/node-wsse][]
  - 2.0.0 interface change and conversion to CoffeeScript
[wsse](https://www.npmjs.com/package/wsse) is plain, but I suspect it's the most used of my npm packages. Some of Hatena's APIs use WSSE for authentication, and writing the WSSE handling every time was tedious, so I packaged it up.
As the npm page above shows, it's used by the following packages of mine:
- [bouzuya/node-hatena-blog-api][]
- [bouzuya/node-hatena-bookmark-api][]
- [bouzuya/node-hatena-fotolife-api][]
- [bouzuya/node-hatena-graph-api][]
WSSE itself shouldn't even be necessary if you're using HTTPS. I wonder why Hatena doesn't serve its APIs over HTTPS.
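For reference, the WSSE UsernameToken scheme these packages implement boils down to `PasswordDigest = Base64(SHA1(nonce + created + password))`. A minimal sketch (the username and password are placeholders, and this is a generic illustration rather than the `wsse` package's own API):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def wsse_header(username, password, nonce=None, created=None):
    """Build the value of an X-WSSE header (WSSE UsernameToken profile)."""
    if nonce is None:
        nonce = os.urandom(16)
    if created is None:
        created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # PasswordDigest = Base64(SHA1(Nonce + Created + Password))
    digest = base64.b64encode(
        hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    ).decode()
    return (
        f'UsernameToken Username="{username}", PasswordDigest="{digest}", '
        f'Nonce="{base64.b64encode(nonce).decode()}", Created="{created}"'
    )

print(wsse_header("bouzuya", "secret"))
```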
[bouzuya/cookie-storage]: https://github.com/bouzuya/cookie-storage
[bouzuya/node-hatena-blog-api]: https://github.com/bouzuya/node-hatena-blog-api
[bouzuya/node-hatena-bookmark-api]: https://github.com/bouzuya/node-hatena-bookmark-api
[bouzuya/node-hatena-fotolife-api]: https://github.com/bouzuya/node-hatena-fotolife-api
[bouzuya/node-hatena-graph-api]: https://github.com/bouzuya/node-hatena-graph-api
[bouzuya/node-idcf-cloud-api]: https://github.com/bouzuya/node-idcf-cloud-api
[bouzuya/node-wsse]: https://github.com/bouzuya/node-wsse
[2015-05-29]: https://blog.bouzuya.net/2015/05/29/
[2015-06-05]: https://blog.bouzuya.net/2015/06/05/
[2015-06-11]: https://blog.bouzuya.net/2015/06/11/
| 36.8125 | 146 | 0.768534 | yue_Hant | 0.65475 |
915de1b92b21971a2ddf32df41268df20cf6b183 | 1,656 | md | Markdown | README.md | dinfuehr/dora | bcfdac576b729e2bbb2422d0239426b884059b2c | [
"MIT"
] | 415 | 2016-12-12T13:23:20.000Z | 2022-03-26T10:32:44.000Z | README.md | dinfuehr/dora | bcfdac576b729e2bbb2422d0239426b884059b2c | [
"MIT"
] | 227 | 2017-02-07T03:32:44.000Z | 2022-03-29T06:53:05.000Z | README.md | dinfuehr/dora | bcfdac576b729e2bbb2422d0239426b884059b2c | [
"MIT"
] | 30 | 2017-05-03T12:25:22.000Z | 2022-02-07T22:42:19.000Z | # Dora
[](https://gitter.im/dora-lang/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [](https://github.com/dinfuehr/dora/actions)
JIT-compiler for the programming language Dora implemented in Rust.
Works on Linux, Windows and macOS (x86\_64 and aarch64).
## Compilation & Testing
Install Rust nightly through [rustup.rs](http://rustup.rs). Use the specific nightly version listed in the [rust-toolchain](https://github.com/dinfuehr/dora/blob/master/rust-toolchain) file. Dora simply uses `cargo` for building:
```
# build in debug and release mode
cargo build && cargo build --release
# run all tests in debug and release mode (needs Ruby)
tools/test && tools/test-release # Linux and macOS
tools/test.bat && tools/test-release.bat # Windows
```
Note that the test runner is implemented in [Ruby](https://www.ruby-lang.org/) and therefore a Ruby interpreter needs to be installed on your system (e.g. `brew/dnf/apt install ruby`).
## Working on the standard library
The standard library (stdlib) is included into the `dora`-binary at compile time.
Changing the stdlib therefore requires recompiling Dora, even though the stdlib is written in Dora.
In order to avoid this recompilation when working on the stdlib, simply pass your working directory of the stdlib to Dora using the `--stdlib` argument.
With this parameter, Dora loads the stdlib from the specified directory instead of the one bundled in the executable.
| 57.103448 | 339 | 0.771739 | eng_Latn | 0.951819 |
915de555b85c5d677f7e5020798cb7143e9daf86 | 192 | md | Markdown | Practice_Uploads/toDoList/readme.md | queenish001/Web-Development | e5f21377e513ac86f8132242ea8b39eeb72728e1 | [
"MIT"
] | null | null | null | Practice_Uploads/toDoList/readme.md | queenish001/Web-Development | e5f21377e513ac86f8132242ea8b39eeb72728e1 | [
"MIT"
] | null | null | null | Practice_Uploads/toDoList/readme.md | queenish001/Web-Development | e5f21377e513ac86f8132242ea8b39eeb72728e1 | [
"MIT"
] | null | null | null | * Add or Clear Tasks
* Mark tasks as done
* Clear the Entire List in one go

| 32 | 114 | 0.78125 | eng_Latn | 0.304599 |
915e69c283d86c4b51e85e23537646a219714865 | 887 | md | Markdown | README.md | Fantastic-Four-CSC-370/Book-Drop | b15965c4ce470fb49ce9967c7f8fd0365194cfb9 | [
"MIT"
] | null | null | null | README.md | Fantastic-Four-CSC-370/Book-Drop | b15965c4ce470fb49ce9967c7f8fd0365194cfb9 | [
"MIT"
] | null | null | null | README.md | Fantastic-Four-CSC-370/Book-Drop | b15965c4ce470fb49ce9967c7f8fd0365194cfb9 | [
"MIT"
] | null | null | null | # Book-Drop CSC 470

# Team
| <a href="https://github.com/mhrshuvo" target="_blank">**SHUVO**</a> | <a href="https://github.com/RifatdaM" target="_blank">**Rifat**</a> | <a href="https://github.com/sabbir103050" target="_blank">**Sabbir**</a> | <a href="https://github.com/hima18103366" target="_blank">**Hima**</a> |
| :--: |:--:| :--:|:--:|
| [](https://github.com/mhrshuvo) | [](https://github.com/Rifatdam) | [](https://github.com/sabbir103050) | [](https://github.com/hima18103366) |
| 88.7 | 424 | 0.686584 | yue_Hant | 0.258049 |
aa4caaad1eaacc98dc132a97cb0b2556929e2e39 | 17,720 | md | Markdown | docs/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow.md | pirluq/docs | 820adc9b585ecb5691957cd2a00906ad56fe826c | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-10-06T18:19:40.000Z | 2022-01-26T20:00:53.000Z | docs/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow.md | pirluq/docs | 820adc9b585ecb5691957cd2a00906ad56fe826c | [
"CC-BY-4.0",
"MIT"
] | 154 | 2021-11-04T02:22:26.000Z | 2022-03-21T02:19:33.000Z | docs/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow.md | pirluq/docs | 820adc9b585ecb5691957cd2a00906ad56fe826c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-04T22:13:58.000Z | 2020-06-04T22:13:58.000Z | ---
title: Inner-loop development workflow for Docker apps
description: Learn the "inner loop" workflow for development of Docker applications.
ms.date: 02/15/2019
---
# Inner-loop development workflow for Docker apps
Before triggering the outer-loop workflow spanning the entire DevOps cycle, it all begins on each developer's machine, coding the app itself, using their preferred languages or platforms, and testing it locally (Figure 4-21). But in every case, you'll have an important point in common, no matter what language, framework, or platforms you choose. In this specific workflow, you're always developing and testing Docker containers, but locally.

**Figure 4-21**. Inner-loop development context
The container or instance of a Docker image will contain these components:
- An operating system selection (for example, a Linux distribution or Windows)
- Files added by the developer (for example, app binaries)
- Configuration (for example, environment settings and dependencies)
- Instructions for what processes to run by Docker
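The four components above map directly onto Dockerfile instructions. The following minimal sketch is only illustrative; the base image, paths, and assembly name are assumptions, not taken from this guide's samples:

```dockerfile
# Operating system / runtime selection
FROM mcr.microsoft.com/dotnet/aspnet:3.1

# Files added by the developer (app binaries from a local publish folder)
WORKDIR /app
COPY ./publish .

# Configuration: environment settings and dependencies
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80

# Instructions for what process Docker should run
ENTRYPOINT ["dotnet", "MyApp.dll"]
```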
You can set up the inner-loop development workflow that utilizes Docker as the process (described in the next section). Consider that the initial steps to set up the environment are not included, because you only need to do it once.
## Building a single app within a Docker container using Visual Studio Code and Docker CLI
Apps are made up from your own services plus additional libraries (dependencies).
Figure 4-22 shows the basic steps that you usually need to carry out when building a Docker app, followed by detailed descriptions of each step.

**Figure 4-22**. High-level workflow for the life cycle for Docker containerized applications using Docker CLI
### Step 1: Start coding in Visual Studio Code and create your initial app/service baseline
The way you develop your application is similar to the way you do it without Docker. The difference is that while developing, you're deploying and testing your application or services running within Docker containers placed in your local environment (like a Linux VM or Windows).
**Setting up your local environment**
With the latest versions of Docker for Mac and Windows, it's easier than ever to develop Docker applications, and the setup is straightforward.
> [!TIP]
> For instructions on setting up Docker for Windows, go to <https://docs.docker.com/docker-for-windows/>.
>
>For instructions on setting up Docker for Mac, go to <https://docs.docker.com/docker-for-mac/>.
In addition, you'll need a code editor so that you can actually develop your application while using Docker CLI.
Microsoft provides Visual Studio Code, which is a lightweight code editor that's supported on Windows, Linux, and macOS, and provides IntelliSense with [support for many languages](https://code.visualstudio.com/docs/languages/overview) (JavaScript, .NET, Go, Java, Ruby, Python, and most modern languages), [debugging](https://code.visualstudio.com/Docs/editor/debugging), [integration with Git](https://code.visualstudio.com/Docs/editor/versioncontrol) and [extensions support](https://code.visualstudio.com/docs/extensions/overview). This editor is a great fit for macOS and Linux developers. In Windows, you can also use Visual Studio.
> [!TIP]
> For instructions on installing Visual Studio Code for Windows, Linux, or macOS, go to <https://code.visualstudio.com/docs/setup/setup-overview/>.
>
> For instructions on setting up Docker for Mac, go to <https://docs.docker.com/docker-for-mac/>.
You can work with Docker CLI and write your code using any code editor, but using Visual Studio Code with the Docker extension makes it easy to author `Dockerfile` and `docker-compose.yml` files in your workspace. You can also run tasks and scripts from the Visual Studio Code IDE to execute Docker commands using the Docker CLI underneath.
The Docker extension for VS Code provides the following features:
- Automatic `Dockerfile` and `docker-compose.yml` file generation
- Syntax highlighting and hover tips for `docker-compose.yml` and `Dockerfile` files
- IntelliSense (completions) for `Dockerfile` and `docker-compose.yml` files
- Linting (errors and warnings) for `Dockerfile` files
- Command Palette (F1) integration for the most common Docker commands
- Explorer integration for managing Images and Containers
- Deploy images from DockerHub and Azure Container Registries to Azure App Service
To install the Docker extension, press Ctrl+Shift+P, type `ext install`, and then run the Install Extension command to bring up the Marketplace extension list. Next, type **docker** to filter the results, and then select the Docker Support extension, as depicted in Figure 4-23.

**Figure 4-23**. Installing the Docker Extension in Visual Studio Code
### Step 2: Create a DockerFile related to an existing image (plain OS or dev environments like .NET Core, Node.js, and Ruby)
You'll need a `DockerFile` per custom image to be built and per container to be deployed. If your app is made up of a single custom service, you'll need a single `DockerFile`. But if your app is composed of multiple services (as in a microservices architecture), you'll need one `Dockerfile` per service.
The `DockerFile` is commonly placed in the root folder of your app or service and contains the required commands so that Docker knows how to set up and run that app or service. You can create your `DockerFile` and add it to your project along with your code (node.js, .NET Core, etc.), or, if you're new to the environment, take a look at the following Tip.
> [!TIP]
> You can use the Docker extension to guide you when using the `Dockerfile` and `docker-compose.yml` files related to your Docker containers. Eventually, you'll probably write these kinds of files without this tool, but using the Docker extension is a good starting point that will accelerate your learning curve.
In Figure 4-24, you can see how a docker-compose file is added by using the Docker Extension for VS Code.

**Figure 4-24**. Docker files added using the **Add Docker files to Workspace command**
When you add a DockerFile, you specify what base Docker image you'll be using (like using `FROM mcr.microsoft.com/dotnet/core/aspnet`). You'll usually build your custom image on top of a base image that you get from any official repository at the [Docker Hub registry](https://hub.docker.com/) (like an [image for .NET Core](https://hub.docker.com/_/microsoft-dotnet-core/) or the one [for Node.js](https://hub.docker.com/_/node/)).
***Use an existing official Docker image***
Using an official repository of a language stack with a version number ensures that the same language features are available on all machines (including development, testing, and production).
The following is a sample DockerFile for a .NET Core container:
```Dockerfile
# Base Docker image to use
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
# Set the Working Directory and files to be copied to the image
ARG source
WORKDIR /app
COPY ${source:-bin/Release/PublishOutput} .
# Configure the listening port to 80 (Internal/Secured port within Docker host)
EXPOSE 80
# Application entry point
ENTRYPOINT ["dotnet", "MyCustomMicroservice.dll"]
```
In this case, the image is based on version 2.2 of the official ASP.NET Core Docker image (multi-arch for Linux and Windows), as per the line `FROM mcr.microsoft.com/dotnet/core/aspnet:2.2`. (For more information about this topic, see the [ASP.NET Core Docker Image](https://hub.docker.com/_/microsoft-dotnet-core-aspnet/) page and the [.NET Core Docker Image](https://hub.docker.com/_/microsoft-dotnet-core/) page).
In the DockerFile, you can also instruct Docker to listen to the TCP port that you'll use at runtime (such as port 80).
You can specify additional configuration settings in the Dockerfile, depending on the language and framework you're using. For instance, the `ENTRYPOINT` line with `["dotnet", "MySingleContainerWebApp.dll"]` tells Docker to run a .NET Core application. If you're using the SDK and the .NET Core CLI (`dotnet CLI`) to build and run the .NET application, this setting would be different. The key point here is that the ENTRYPOINT line and other settings depend on the language and platform you choose for your application.
> [!TIP]
> For more information about building Docker images for .NET Core applications, go to <https://docs.microsoft.com/dotnet/core/docker/building-net-docker-images>.
>
> To learn more about building your own images, go to <https://docs.docker.com/engine/tutorials/dockerimages/>.
**Use multi-arch image repositories**
A single image name in a repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo to cover multiple platforms (that is, Linux and Windows). For example, the [dotnet/core/aspnet](https://hub.docker.com/_/microsoft-dotnet-core-aspnet/) repository available in the Docker Hub registry provides support for Linux and Windows Nano Server by using the same image name.
Pulling the [dotnet/core/aspnet](https://hub.docker.com/_/microsoft-dotnet-core-aspnet/) image from a Windows host pulls the Windows variant, whereas pulling the same image name from a Linux host pulls the Linux variant.
***Create your base image from scratch***
You can create your own Docker base image from scratch as explained in this [article](https://docs.docker.com/engine/userguide/eng-image/baseimages/) from Docker. This scenario is probably not the best for you if you're just starting with Docker, but if you want to set the specific bits of your own base image, you can do it.
### Step 3: Create your custom Docker images embedding your service in it
For each custom service that comprises your app, you'll need to create a related image. If your app is made up of a single service or web app, you'll need just a single image.
> [!NOTE]
> When taking into account the "outer-loop DevOps workflow", the images will be created by an automated build process whenever you push your source code to a Git repository (Continuous Integration), so the images will be created in that global environment from your source code.
>
> But before taking that outer-loop route, developers need to ensure that the Docker application is actually working properly, so that they don't push code that might not work to the source control system (Git, etc.).
>
> Therefore, each developer first needs to do the entire inner-loop process to test locally and continue developing until they want to push a complete feature or change to the source control system.
To create an image in your local environment using the DockerFile, you can use the `docker build` command, as demonstrated in Figure 4-25 (you can also run `docker-compose up --build` for applications composed of several containers/services).

**Figure 4-25**. Running docker build
Optionally, instead of directly running `docker build` from the project folder, you can first generate a deployable folder with the needed .NET libraries by running the `dotnet publish` command, and then run `docker build`.
This example creates a Docker image with the name `cesardl/netcore-webapi-microservice-docker:first` (`:first` is a tag, like a specific version). You can take this step for each custom image you need to create for your composed Docker application with several containers.
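Put together, and based on the image name used in this example, the publish-and-build step might look like the following (the output folder matches the `ARG source` default in the sample DockerFile above; the exact paths depend on your project):

```console
dotnet publish -c Release -o bin/Release/PublishOutput
docker build -t cesardl/netcore-webapi-microservice-docker:first .
```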
You can find the existing images in your local repository (your development machine) by using the docker images command, as illustrated in Figure 4-26.

**Figure 4-26**. Viewing existing images using docker images
### Step 4: Define your services in docker-compose.yml when building a composed Docker app with multiple services
With the `docker-compose.yml` file, you can define a set of related services to be deployed as a composed application with the deployment commands explained in the next step section.
Create that file in your main or root solution folder; it should have content similar to that shown in this `docker-compose.yml` file:
```yml
version: '3.4'
services:
web:
build: .
ports:
- "81:80"
volumes:
- .:/code
depends_on:
- redis
redis:
image: redis
```
In this particular case, this file defines two services: the web service (your custom service) and the redis service (a popular cache service). Each service will be deployed as a container, so we need to use a concrete Docker image for each. For this particular web service, the image will need to do the following:
- Build from the DockerFile in the current directory
- Forward the exposed port 80 on the container to port 81 on the host machine
- Mount the project directory on the host to /code within the container, making it possible for you to modify the code without having to rebuild the image
- Link the web service to the redis service
The redis service uses the [latest public redis image](https://hub.docker.com/_/redis/) pulled from the Docker Hub registry. [redis](https://redis.io/) is a popular cache system for server-side applications.
### Step 5: Build and run your Docker app
If your app has only a single container, you just need to run it by deploying it to your Docker Host (VM or physical server). However, if your app is made up of multiple services, you need to *compose it*, too. Let's see the different options.
***Option A: Run a single container or service***
You can run the Docker image by using the docker run command, as shown here:
```console
docker run -t -d -p 80:5000 cesardl/netcore-webapi-microservice-docker:first
```
For this particular deployment, we'll be redirecting requests sent to port 80 to the internal port 5000. Now the application is listening on the external port 80 at the host level.
***Option B: Compose and run a multiple-container application***
In most enterprise scenarios, a Docker application will be composed of multiple services. For these cases, you can run the `docker-compose up` command (Figure 4-27), which will use the docker-compose.yml file that you might have created previously. Running this command deploys a composed application with all of its related containers.

**Figure 4-27**. Results of running the "docker-compose up" command
After you run `docker-compose up`, you deploy your application and its related container(s) into your Docker Host, as illustrated in Figure 4-28, in the VM representation.

**Figure 4-28**. VM with Docker containers deployed
### Step 6: Test your Docker application (locally, in your local CD VM)
This step will vary depending on what your app is doing.
In a simple .NET Core Web API "Hello World" deployed as a single container or service, you'd just need to access the service by providing the TCP port specified in the DockerFile.
If localhost is not enabled, to navigate to your service, find the IP address of the Docker machine by using this command:
```console
docker-machine ip {YOUR-MACHINE-NAME}
```
On the Docker host, open a browser and navigate to that site; you should see your app/service running, as demonstrated in Figure 4-29.

**Figure 4-29**. Testing your Docker application locally using localhost
Note that it's using port 80, but internally it's being redirected to port 5000, because that's how it was deployed with `docker run`, as explained earlier.
You can test this by using CURL from the terminal. In a Docker installation on Windows, the default IP is 10.0.75.1, as depicted in Figure 4-30.

**Figure 4-30**. Testing a Docker application locally by using CURL
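For example, a request along the lines of the one in Figure 4-30 might look like this (the `/api/values` route is only an illustration of a default Web API route; use whatever route your service exposes):

```console
curl http://10.0.75.1/api/values
```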
**Debugging a container running on Docker**
Visual Studio Code supports debugging Docker if you're using Node.js and other platforms like .NET Core containers.
You can also debug .NET Core or .NET Framework containers in Docker when using Visual Studio for Windows or Mac, as described in the next section.
> [!TIP]
> To learn more about debugging Node.js Docker containers, see <https://blog.docker.com/2016/07/live-debugging-docker/> and <https://docs.microsoft.com/archive/blogs/user_ed/visual-studio-code-new-features-13-big-debugging-updates-rich-object-hover-conditional-breakpoints-node-js-mono-more>.
>[!div class="step-by-step"]
>[Previous](docker-apps-development-environment.md)
>[Next](visual-studio-tools-for-docker.md)
# Creating an AppMetrica connection
## Connecting to AppMetrica {#appmetrica-connection}
To create an AppMetrica connection:
1. Go to the [connections page](https://datalens.yandex.com/connections).
1. Click **Create** and select **Connection**.
1. Select **AppMetrica** as the connection type.
1. In the field next to the folder name, enter the connection name. You can set any name.
1. Specify the connection parameters:
* **OAuth token**. Click **Get token** or enter the [OAuth token](#get-oauth-token) manually to access the AppMetrica data.
* **App**. Specify one or more applications to connect to. You can select them from the list or enter them manually separated by commas.
* **Accuracy**. Set the data accuracy (sampling rate). You can change accuracy after you create the connection.
{% include [datalens-get-token](../../../_includes/datalens/datalens-change-account-note.md) %}
1. Enable the option **Automatically create a dashboard, charts, and a dataset on the connection** if you need a dashboard with a standard set of charts.
1. Click **Create**.
{% include [datalens-appmetrica-note](../../../_includes/datalens/datalens-appmetrica-note.md) %}
For datasets based on AppMetrica connections, the following groups of metrics are available:
* Installations
* Audience
* Client events
* Push campaigns
* Audience + social demographic
{% include [datalens-get-token](../../../_includes/datalens/operations/datalens-get-token.md) %}
# Word Limit
No wasted words.
```html
<n-space vertical>
<n-input maxlength="30" show-count clearable />
<n-input type="textarea" maxlength="30" show-count />
</n-space>
```
### Hi there, I’m [JasonkayZK](https://jasonkayzk.github.io/) 👋
<p align="center">
<img src="https://cdn.jsdelivr.net/gh/jasonkayzk/jasonkayzk@master/hello-world.gif" width="30%">
</p>
<p align="center">
<img width="500" src="https://metrics.lecoq.io/jasonkayzk?template=classic&repositories.forks=true&followup=1&followup.sections=repositories&config.timezone=Asia%2FShanghai&config.padding=0%2C%204%20%2B%2011%25/" alt="Github Metrics"/>
<br>
</p>
[](https://www.microsoft.com/windows/get-windows-10)
[](https://ubuntu.com/)
[](https://www.centos.org/)
[](https://www.apple.com/)
[](https://code.visualstudio.com/)
[](https://www.jetbrains.com/idea/)
[](https://www.jetbrains.com/go/)
[](https://www.jetbrains.com/pycharm/)
[](https://www.jetbrains.com/clion/)
[](https://www.jetbrains.com/webstorm/)
[](https://developer.android.com/studio/)
[](https://www.vim.org/)
[](https://www.java.com/)
[](https://golang.org/)
[](https://www.cplusplus.com/)
[](https://www.rust-lang.org/)
[](https://www.python.org/)
[](https://www.scala-lang.org/)
[](https://www.ecma-international.org/)
[](https://html.spec.whatwg.org/)
[](https://www.w3.org/Style/CSS/)
[](https://lesscss.org/)
[](https://www.typescriptlang.org/)
[](https://kotlinlang.org/)
[](https://dart.dev/)
[](https://www.lua.org/)
[](https://www.shell.com/)
[](https://docs.microsoft.com/en-us/dotnet/csharp/)
[](https://spring.io/projects/spring-framework/)
[](https://www.docker.com/)
[](https://www.mysql.com/)
[](https://npmjs.com/)
[](https://git-scm.com/)
[](https://vuejs.org/)
[](https://reactjs.org/)
[](https://www.electronjs.org/)
[](https://nodejs.org/)
[](https://nginx.org/)
[](https://kubernetes.io/)
[](https://www.elastic.co/)
[](https://redis.io/)
[](https://flutter.dev/)
[](https://gradle.org/)
[](https://www.rabbitmq.com/)
[](https://yarnpkg.com/)
[](https://webpack.js.org/)
[](https://www.mongodb.com/)
[](https://getbootstrap.com/)
[](https://jquery.com/)
[](https://www.tensorflow.org/)
[](https://keras.io/)
[](https://pytorch.org/)
[](https://daringfireball.net/projects/markdown/)
- 🔭 I’m currently working at Tencent (Shenzhen, China).
- 🌱 I’m currently learning Java, Golang, Rust, JS & TS.
- 👯 I’m looking to collaborate on Micro-service, PaaS, SaaS and so on…
- 🤔 I’m looking for help with Golang or Java development.
- 💬 Ask me about Anything you want~
- 📫 Reach me: 271226192@qq.com
- 😄 Pronouns: Jasonkay
- 👏 Follow Me: [](https://github.com/jasonkayzk/)
- ⚡ Fun fact: Music, Japanese & English, Basketball, Animation, Video games.
<table width="800px">
<tr>
<td valign="top" width="50%">
#### 🏊♂️ <a href="https://gist.github.com/JasonkayZK/59ead22758ee823e48b558d3cff332f1" target="_blank">Weekly Development Breakdown</a>
<!-- code_time starts -->
```text
Go 7 hrs 28 mins ██████████████▋░░░░░░ 70.2%
Rust 1 hr 25 mins ██▊░░░░░░░░░░░░░░░░░░ 13.4%
Python 47 mins █▌░░░░░░░░░░░░░░░░░░░ 7.4%
Markdown 24 mins ▊░░░░░░░░░░░░░░░░░░░░ 3.9%
Protoco... 11 mins ▍░░░░░░░░░░░░░░░░░░░░ 1.9%
```
<!-- code_time ends -->
</td>
<td valign="top" width="50%">
#### 🤹♀️ <a href="https://jasonkayzk.github.io/" target="_blank">Recent Blog</a>
<!-- blog starts -->
* <a href='https://jasonkayzk.github.io/2021/10/10/%E5%9C%A8Git%E9%A1%B9%E7%9B%AE%E4%B8%AD%E5%A2%9E%E5%8A%A0pre-commit%E6%A0%A1%E9%AA%8C/' target='_blank'>在Git项目中增加pre-commit校验</a> - 2021-10-10
* <a href='https://jasonkayzk.github.io/2021/10/10/Rust%E5%AE%9E%E7%8E%B0WebAssembly%E5%88%9D%E7%AA%A5/' target='_blank'>Rust实现WebAssembly初窥</a> - 2021-10-10
* <a href='https://jasonkayzk.github.io/2021/09/26/%E5%9C%A8Golang%E5%8F%91%E7%94%9FPanic%E5%90%8E%E6%89%93%E5%8D%B0%E5%87%BA%E5%A0%86%E6%A0%88%E4%BF%A1%E6%81%AF/' target='_blank'>在Golang发生Panic后打印出堆栈信息</a> - 2021-09-26
* <a href='https://jasonkayzk.github.io/2021/09/05/Docker%E5%8E%9F%E7%90%86%E5%AE%9E%E6%88%98-4%EF%BC%9A%E5%AE%B9%E5%99%A8Container/' target='_blank'>Docker原理实战-4:容器Container</a> - 2021-09-05
* <a href='https://jasonkayzk.github.io/2021/09/04/%E3%80%90%E5%88%86%E4%BA%AB%E3%80%91Epic-Game%E8%87%AA%E5%8A%A8%E9%A2%86%E5%8F%96Docker%E9%95%9C%E5%83%8F/' target='_blank'>【分享】Epic-Game自动领取Docker镜像</a> - 2021-09-04
<!-- blog ends -->
</td>
</tr>
</table>
|||
|-|-|

<p align="center">
Visitor count<br>
<img src="https://profile-counter.glitch.me/jasonkayzk/count.svg" />
</p>
---
title: "Shortcodes sample"
date: 2020-06-08T08:06:25+06:00
description: Shortcodes sample
---
This sample post was created to test the following:
- The default hero image.
- Various shortcodes.
## Alerts
This theme includes the following alerts:
{{< alert type="success" >}}
This is a sample alert with `type="success"`.
{{< /alert >}}
{{< alert type="danger" >}}
This is a sample alert with `type="danger"`.
{{< /alert >}}
{{< alert type="warning" >}}
This is a sample alert with `type="warning"`.
{{< /alert >}}
{{< alert type="info" >}}
This is a sample alert with `type="info"`.
{{< /alert >}}
{{< alert type="dark" >}}
This is a sample alert with `type="dark"`.
{{< /alert >}}
{{< alert type="primary" >}}
This is a sample alert with `type="primary"`.
{{< /alert >}}
{{< alert type="secondary" >}}
This is a sample alert with `type="secondary"`.
{{< /alert >}}
## Images
#### A sample image without any attributes.
{{< img src="/posts/shortcodes/boat.jpg" title="A boat at the sea" >}}
{{< vs 3 >}}
#### A sample image with `height` and `width` attributes.
{{< img src="/posts/shortcodes/boat.jpg" height="400" width="600" title="A boat at the sea" >}}
{{< vs 3 >}}
#### A sample image centered, with `height` and `width` attributes.
{{< img src="/posts/shortcodes/boat.jpg" height="400" width="600" align="center" title="A boat at the sea" >}}
{{< vs 3 >}}
#### A sample image with the `float` attribute.
{{< img src="/posts/shortcodes/boat.jpg" height="200" width="500" float="right" title="A boat at the sea" >}}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras egestas lectus sed leo ultricies ultricies. Praesent tellus risus, eleifend vel efficitur ac, venenatis sit amet sem. Ut ut egestas erat. Fusce ut leo turpis. Morbi consectetur sed lacus vitae vehicula. Cras gravida turpis id eleifend volutpat. Suspendisse nec ipsum eu erat finibus dictum. Morbi volutpat nulla purus, vel maximus ex molestie id. Nullam posuere est urna, at fringilla eros venenatis quis.
Fusce vulputate dolor augue, ut porta sapien fringilla nec. Vivamus commodo erat felis, a sodales lectus finibus nec. In a pulvinar orci. Maecenas suscipit eget lorem non pretium. Nulla aliquam a augue nec blandit. Curabitur ac urna iaculis, ornare ligula nec, placerat nulla. Maecenas aliquam nisi vitae tempus vulputate.
## Splits
This theme supports splitting your page into as many columns as you want.
#### Splitting into two columns
{{< split 6 6>}}
##### Left column
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras egestas lectus sed leo ultricies ultricies.
---
##### Right column
Fusce ut leo turpis. Morbi consectetur sed lacus vitae vehicula. Cras gravida turpis id eleifend volutpat.
{{< /split >}}
#### Splitting into three columns
{{< split 4 4 4 >}}
##### Left column
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras egestas lectus sed leo ultricies ultricies.
---
##### Middle column
Aenean dignissim dictum ex. Donec a nunc vel nibh placerat interdum.
---
##### Right column
Fusce ut leo turpis. Morbi consectetur sed lacus vitae vehicula. Cras gravida turpis id eleifend volutpat.
{{< /split >}}
## Vertical space
You can add vertical space between two lines.
This is the first line.
{{< vs 4>}}
This is the second line. There should be a `4rem` vertical gap from the first line.
---
title: "Selecting a Credential Type"
ms.date: "03/30/2017"
ms.assetid: bf707063-3f30-4304-ab53-0e63413728a8
---
# Selecting a Credential Type
*Credentials* are the data Windows Communication Foundation (WCF) uses to establish either a claimed identity or capabilities. For example, a passport is a credential a government issues to prove citizenship in a country or region. In WCF, credentials can take many forms, such as user name tokens and X.509 certificates. This topic discusses credentials, how they are used in WCF, and how to select the right credential for your application.
In many countries and regions, a driver’s license is an example of a credential. A license contains data that represents a person's identity and capabilities. It contains proof of possession in the form of the possessor's picture. The license is issued by a trusted authority, usually a governmental department of licensing. The license is sealed, and can contain a hologram, showing that it has not been tampered with or counterfeited.
Presenting a credential involves presenting both the data and proof of possession of the data. WCF supports a variety of credential types at both the transport and message security levels. For example, consider two types of credentials supported in WCF: user name and (X.509) certificate credentials.
For the user name credential, the user name represents the claimed identity and the password provides proof of possession. The trusted authority in this case is the system that validates the user name and password.
With an X.509 certificate credential, the subject name, subject alternative name or specific fields within the certificate can be used as claims of identity, while other fields, such as the `Valid From` and `Valid To` fields, specify the validity of the certificate.
## Transport Credential Types
The following table shows the possible types of client credentials that can be used by a binding in transport security mode. When creating a service, set the `ClientCredentialType` property to one of these values to specify the type of credential that the client must supply to communicate with your service. You can set the types in either code or configuration files.
|Setting|Description|
|-------------|-----------------|
|None|Specifies that the client does not need to present any credential. This translates to an anonymous client.|
|Basic|Specifies basic authentication for the client. For additional information, see RFC2617—[HTTP Authentication: Basic and Digest Authentication](https://go.microsoft.com/fwlink/?LinkID=88313).|
|Digest|Specifies digest authentication for the client. For additional information, see RFC2617—[HTTP Authentication: Basic and Digest Authentication](https://go.microsoft.com/fwlink/?LinkID=88313).|
|Ntlm|Specifies NT LAN Manager (NTLM) authentication. This is used when you cannot use Kerberos authentication for some reason. You can also disable its use as a fallback by setting the <xref:System.ServiceModel.Security.WindowsClientCredential.AllowNtlm%2A> property to `false`, which causes WCF to make a best-effort to throw an exception if NTLM is used. Note that setting this property to `false` may not prevent NTLM credentials from being sent over the wire.|
|Windows|Specifies Windows authentication. To specify only the Kerberos protocol on a Windows domain, set the <xref:System.ServiceModel.Security.WindowsClientCredential.AllowNtlm%2A> property to `false` (the default is `true`).|
|Certificate|Performs client authentication using an X.509 certificate.|
|Password|User must supply a user name and password. Validate the user name/password pair using Windows authentication or another custom solution.|
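For example, requiring Windows credentials at the transport level on an HTTP binding might be sketched in code as follows (the binding choice here is just one option; the same property exists on other transport-secured bindings):

```csharp
// Transport security with Windows client credentials (sketch).
BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows;
```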
### Message Client Credential Types
The following table shows the possible credential types that you can use when creating an application that uses message security. You can use these values in either code or configuration files.
|Setting|Description|
|-------------|-----------------|
|None|Specifies that the client does not need to present a credential. This translates to an anonymous client.|
|Windows|Allows SOAP message exchanges to occur under the security context established with a Windows credential.|
|Username|Allows the service to require that the client be authenticated with a user name credential. Note that WCF does not allow any cryptographic operations with user names, such as generating a signature or encrypting data. WCF ensures that the transport is secured when using user name credentials.|
|Certificate|Allows the service to require that the client be authenticated using an X.509 certificate.|
|Issued Token|A custom token type configured according to a security policy. The default token type is Security Assertions Markup Language (SAML). The token is issued by a secure token service. For more information, see [Federation and Issued Tokens](../../../../docs/framework/wcf/feature-details/federation-and-issued-tokens.md).|
### Negotiation Model of Service Credentials
*Negotiation* is the process of establishing trust between a client and a service by exchanging credentials. The process is performed iteratively between the client and the service, so as to disclose only the information necessary for the next step in the negotiation process. In practice, the end result is the delivery of a service's credential to the client to be used in subsequent operations.
With one exception, by default the system-provided bindings in WCF negotiate the service credential automatically when using message-level security. (The exception is the <xref:System.ServiceModel.BasicHttpBinding>, which does not enable security by default.) To disable this behavior, see the <xref:System.ServiceModel.MessageSecurityOverHttp.NegotiateServiceCredential%2A> and <xref:System.ServiceModel.FederatedMessageSecurityOverHttp.NegotiateServiceCredential%2A> properties.
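As a sketch, automatic negotiation can be disabled in code on a message-security binding; the client must then be provisioned with the service credential out of band:

```csharp
using System.ServiceModel;

WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
binding.Security.Message.ClientCredentialType = MessageCredentialType.Certificate;

// Turn off automatic negotiation of the service credential.
// The service certificate must now be installed at the client before any call.
binding.Security.Message.NegotiateServiceCredential = false;
```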
> [!NOTE]
> When SSL security is used with .NET Framework 3.5 and later, a WCF client uses both the intermediate certificates in its certificate store and the intermediate certificates received during SSL negotiation to perform certificate chain validation on the service's certificate. .NET Framework 3.0 only uses the intermediate certificates installed in the local certificate store.
#### Out-of-Band Negotiation
If automatic negotiation is disabled, the service credential must be provisioned at the client prior to sending any messages to the service. This is also known as *out-of-band* provisioning. For example, if the specified credential type is a certificate, and automatic negotiation is disabled, the client must contact the service owner to receive and install the certificate on the computer running the client application. This can be done, for example, when you want to strictly control which clients can access a service in a business-to-business scenario. This out-of-band negotiation can be done by email, with the X.509 certificate stored in the Windows certificate store by using a tool such as the Microsoft Management Console (MMC) Certificates snap-in.
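A minimal sketch of the client side of out-of-band provisioning follows; `CalculatorClient` and the certificate subject name are hypothetical, and the certificate is assumed to have been installed in the client's certificate store beforehand:

```csharp
using System.Security.Cryptography.X509Certificates;

// Hypothetical generated WCF client.
CalculatorClient client = new CalculatorClient();

// Use a service certificate previously installed out of band
// (for example, in the CurrentUser\TrustedPeople store).
client.ClientCredentials.ServiceCertificate.SetDefaultCertificate(
    StoreLocation.CurrentUser,
    StoreName.TrustedPeople,
    X509FindType.FindBySubjectName,
    "service.contoso.com");
```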
> [!NOTE]
> The <xref:System.ServiceModel.ClientBase%601.ClientCredentials%2A> property is used to provide the service with a certificate that was obtained through out-of-band negotiation. This is necessary when using the <xref:System.ServiceModel.BasicHttpBinding> class because the binding does not allow automated negotiation. The property is also used in an uncorrelated duplex scenario. This is a scenario where a server sends a message to the client without requiring the client to send a request to the server first. Because the server does not have a request from the client, it must use the client's certificate to encrypt the message to the client.
## Setting Credential Values
Once you select a security mode, you must specify the actual credentials. For example, if the credential type is set to "certificate," then you must associate a specific credential (such as a specific X.509 certificate) with the service or client.
Depending on whether you are programming a service or a client, the method for setting the credential value differs slightly.
### Setting Service Credentials
If you are using transport mode, and you are using HTTP as the transport, you must use either Internet Information Services (IIS) or configure the port with a certificate. For more information, see [Transport Security Overview](../../../../docs/framework/wcf/feature-details/transport-security-overview.md) and [HTTP Transport Security](../../../../docs/framework/wcf/feature-details/http-transport-security.md).
To provision a service with credentials in code, create an instance of the <xref:System.ServiceModel.ServiceHost> class and specify the appropriate credential using the <xref:System.ServiceModel.Description.ServiceCredentials> class, accessed through the <xref:System.ServiceModel.ServiceHostBase.Credentials%2A> property.
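For example, the credentials object can be reached as follows; the service type and address are hypothetical:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

// "CalculatorService" and the base address are hypothetical.
ServiceHost host = new ServiceHost(
    typeof(CalculatorService),
    new Uri("http://localhost:8000/Calculator"));

// The ServiceCredentials instance is reached through the Credentials property;
// specific credentials (certificates, and so on) are set on it before opening the host.
ServiceCredentials credentials = host.Credentials;
```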
#### Setting a Certificate
To provision a service with an X.509 certificate to be used to authenticate the service to clients, use the <xref:System.ServiceModel.Security.X509CertificateInitiatorServiceCredential.SetCertificate%2A> method of the <xref:System.ServiceModel.Security.X509CertificateRecipientServiceCredential> class.
To provision a service with a client certificate, use the <xref:System.ServiceModel.Security.X509CertificateInitiatorClientCredential.SetCertificate%2A> method of the <xref:System.ServiceModel.Security.X509CertificateInitiatorServiceCredential> class.
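Both calls can be sketched on a service host as follows; the service type and certificate subject names are hypothetical:

```csharp
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;

// Hypothetical service type.
ServiceHost host = new ServiceHost(typeof(CalculatorService));

// Certificate that authenticates the service to clients
// (X509CertificateRecipientServiceCredential.SetCertificate).
host.Credentials.ServiceCertificate.SetCertificate(
    StoreLocation.LocalMachine,
    StoreName.My,
    X509FindType.FindBySubjectName,
    "contoso.com");

// Client certificate held by the service
// (X509CertificateInitiatorServiceCredential.SetCertificate).
host.Credentials.ClientCertificate.SetCertificate(
    StoreLocation.LocalMachine,
    StoreName.My,
    X509FindType.FindBySubjectName,
    "ClientApp");
```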
#### Setting Windows Credentials
If the client specifies a valid user name and password, that credential is used to authenticate the client. Otherwise, the current logged-on user's credentials are used.
### Setting Client Credentials
In WCF, client applications use a WCF client to connect to services. Every client derives from the <xref:System.ServiceModel.ClientBase%601> class, and the <xref:System.ServiceModel.ClientBase%601.ClientCredentials%2A> property on the client allows the specification of various values of client credentials.
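For example, a user name/password pair can be supplied through this property; `CalculatorClient` is a hypothetical generated WCF client, and the binding is assumed to use the UserName credential type:

```csharp
// Hypothetical generated WCF client.
CalculatorClient client = new CalculatorClient();

// ClientCredentials exposes the various credential kinds; here,
// a user name/password pair for a UserName-based binding.
client.ClientCredentials.UserName.UserName = "alice";
client.ClientCredentials.UserName.Password = "password";
```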
#### Setting a Certificate
To provision a service with an X.509 certificate that is used to authenticate the client to a service, use the <xref:System.ServiceModel.Security.X509CertificateInitiatorClientCredential.SetCertificate%2A> method of the <xref:System.ServiceModel.Security.X509CertificateInitiatorClientCredential> class.
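A minimal sketch follows; the client type and certificate subject name are hypothetical:

```csharp
using System.Security.Cryptography.X509Certificates;

// Hypothetical generated WCF client.
CalculatorClient client = new CalculatorClient();

// X509CertificateInitiatorClientCredential.SetCertificate:
// select the client certificate from the current user's store.
client.ClientCredentials.ClientCertificate.SetCertificate(
    StoreLocation.CurrentUser,
    StoreName.My,
    X509FindType.FindBySubjectName,
    "ClientApp");
```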
## How Client Credentials Are Used to Authenticate a Client to the Service
Client credential information required to communicate with a service is provided using either the <xref:System.ServiceModel.ClientBase%601.ClientCredentials%2A> property or the <xref:System.ServiceModel.ChannelFactory.Credentials%2A> property. The security channel uses this information to authenticate the client to the service. Authentication is accomplished through one of two modes:
- The client credentials are used once before the first message is sent, using the WCF client instance to establish a security context. All application messages are then secured through the security context.
- The client credentials are used to authenticate every application message sent to the service. In this case, no context is established between the client and the service.
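On the system-provided HTTP binding, the choice between the two modes can be sketched with the `EstablishSecurityContext` property:

```csharp
using System.ServiceModel;

WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);

// true (the default): the credentials are used once to establish a security
// context, and all subsequent application messages are secured with that context.
// false: the client credentials are used to authenticate every message,
// and no context is established.
binding.Security.Message.EstablishSecurityContext = false;
```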
### Established Identities Cannot Be Changed
When the first method is used, the established context is permanently associated with the client identity. That is, once the security context has been established, the identity associated with the client cannot be changed.
> [!IMPORTANT]
> There is a situation to be aware of when the identity cannot be switched (that is, when a security context is established, which is the default behavior). If you create a service that communicates with a second service, the identity used to open the WCF client to the second service cannot be changed. This becomes a problem if multiple clients are allowed to use the first service and the service impersonates the clients when accessing the second service. If the service reuses the same client for all callers, all calls to the second service are done under the identity of the first caller that was used to open the client to the second service. In other words, the service uses the identity of the first client for all its clients to communicate with the second service. This can lead to the elevation of privilege. If this is not the desired behavior of your service, you must track each caller and create a new client to the second service for every distinct caller, and ensure that the service uses only the right client for the right caller to communicate with the second service.
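The per-caller tracking described above can be sketched as follows; this is an illustration only, and `SecondServiceClient` is a hypothetical generated WCF client:

```csharp
using System.Collections.Generic;

// Sketch: keep a separate downstream client per caller so that calls to the
// second service are always made under the correct caller's identity.
class DownstreamClientCache
{
    private readonly Dictionary<string, SecondServiceClient> _clientsByCaller =
        new Dictionary<string, SecondServiceClient>();

    // Return (or create) the client associated with the given caller identity.
    public SecondServiceClient GetClientForCaller(string callerIdentity)
    {
        SecondServiceClient client;
        if (!_clientsByCaller.TryGetValue(callerIdentity, out client))
        {
            client = new SecondServiceClient();
            _clientsByCaller[callerIdentity] = client;
        }
        return client;
    }
}
```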
For more information about credentials and secure sessions, see [Security Considerations for Secure Sessions](../../../../docs/framework/wcf/feature-details/security-considerations-for-secure-sessions.md).
## See also
- <xref:System.ServiceModel.ClientBase%601?displayProperty=nameWithType>
- <xref:System.ServiceModel.ClientBase%601.ClientCredentials%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.Description.ClientCredentials.ClientCertificate%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.BasicHttpMessageSecurity.ClientCredentialType%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.HttpTransportSecurity.ClientCredentialType%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.MessageSecurityOverHttp.ClientCredentialType%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.MessageSecurityOverMsmq.ClientCredentialType%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.MessageSecurityOverTcp.ClientCredentialType%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.TcpTransportSecurity.ClientCredentialType%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.Security.X509CertificateInitiatorClientCredential.SetCertificate%2A?displayProperty=nameWithType>
- <xref:System.ServiceModel.Security.X509CertificateInitiatorServiceCredential.SetCertificate%2A?displayProperty=nameWithType>
- [Security Concepts](../../../../docs/framework/wcf/feature-details/security-concepts.md)
- [Securing Services and Clients](../../../../docs/framework/wcf/feature-details/securing-services-and-clients.md)
- [Programming WCF Security](../../../../docs/framework/wcf/feature-details/programming-wcf-security.md)
- [HTTP Transport Security](../../../../docs/framework/wcf/feature-details/http-transport-security.md)
| 131.133929 | 1,083 | 0.803023 | eng_Latn | 0.995182 |
aa51a9bc69a0a2e8c930b7c7c05c7495317ed413 | 647 | md | Markdown | 2018/CVE-2018-6605.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 2,340 | 2022-02-10T21:04:40.000Z | 2022-03-31T14:42:58.000Z | 2018/CVE-2018-6605.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 19 | 2022-02-11T16:06:53.000Z | 2022-03-11T10:44:27.000Z | 2018/CVE-2018-6605.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 280 | 2022-02-10T19:58:58.000Z | 2022-03-26T11:13:05.000Z | ### [CVE-2018-6605](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-6605)



### Description
SQL Injection exists in the Zh BaiduMap 3.0.0.1 component for Joomla! via the id parameter in a getPlacemarkDetails, getPlacemarkHoverText, getPathHoverText, or getPathDetails request.
### POC
#### Reference
- https://www.exploit-db.com/exploits/43974/
#### Github
No PoCs found on GitHub currently.
| 35.944444 | 184 | 0.749614 | kor_Hang | 0.196211 |
aa5252997063edd7fa72bfabd5f08121ef7fba6e | 5,938 | md | Markdown | README.md | dogada/material-ui-tree | 62feb2f4c2252b001f599084adbd43be05149e99 | [
"MIT"
] | null | null | null | README.md | dogada/material-ui-tree | 62feb2f4c2252b001f599084adbd43be05149e99 | [
"MIT"
] | null | null | null | README.md | dogada/material-ui-tree | 62feb2f4c2252b001f599084adbd43be05149e99 | [
"MIT"
] | null | null | null | # material-ui-tree
[](https://www.npmjs.org/package/material-ui-tree)
[](https://www.npmjs.org/package/material-ui-tree)
[](https://github.com/shallinta/material-ui-tree/blob/master/LICENSE)
[](https://github.com/shallinta/material-ui-tree/issues?q=is%3Aopen+is%3Aissue)
[](https://github.com/shallinta/material-ui-tree/issues?q=is%3Aissue+is%3Aclosed)

[](https://github.com/shallinta/material-ui-tree)
[](https://www.npmjs.com/package/material-ui-tree)
> A react tree component with material-ui.
> See demo page: [Material-ui-tree Demo](https://wkp03p2jrl.codesandbox.io/)
### Installation
Available as npm package.
```sh
npm install --save material-ui-tree
```
Ensure to install these packages in your program because `material-ui-tree` depends on them.
```sh
npm install --save
react
react-dom
prop-types
classnames
material-ui@next
material-ui-icons
```
### Usage
> See demo page code:
[](https://codesandbox.io/s/wkp03p2jrl)
### Options
> All options are not necessary.
***className***: *(string)* The `className` will passed to container `Paper` component of material-ui.
***labelName***: *(string)* Label key to show in tree leaf data. Default to `'label'`. If `renderLabel` option is set, `labelName` will be ignored.
***valueName***: *(string)* Value key in tree leaf data. Used for react children key. Default to `'value'`.
***childrenName***: *(string)* Children key to render child branch in tree leaf data. Default to `'children'`.
***data***: *(object)* Initial tree data. Default to `{}`.
***title***: *(string)* Tree title. Default to `''`. If not set, title module will not show.
***expandFirst***: *(bool)* Whether expand the first branch of the tree in the beginning. Default to `false`.
***expandAll***: *(bool)* Whether expand all branches of the tree in the beginning. Default to `false`.
***childrenCountPerPage***: *(number)* Children leafs' count in each branch page. When tree leaf children data is too big, render them by page. Default to `20`.
***actionsAlignRight***: *(bool)* Whether the tree leaf action buttons aligns to right side. Action buttons will follow behind leaf label if it's false, or else will be aligned to right side. Default to `false`.
***getActionsData***: *(func)* The method to get data to render action buttons, with arguments:
- `data` : object, current leaf data
- `chdIndex` : number array, leaf indexs from tree root
- `expand` : bool, leaf expand status
- `doExpand` : func, callback to expand current leaf's child branch
Should return an array of buttons data including keys: `icon`, `label`, `hint`, `onClick`, `style={}`. At least one of `label` key and `icon` key are required.
***renderLabel***: *(func)* The method to render tree leaf label, with arguments:
- `data` : object, current leaf data
- `expand` : bool, current leaf expand status
If this is set, `labelName` option will be ignored.
***requestChildrenData***: *(func)* The method to request children data of tree leaf dynamically, with arguments:
- `data` : object, current leaf data
- `chdIndex` : number array, leaf indexs from tree root
- `doExpand` : func, callback to expand current leaf's child branch
This function will not be called until the current leaf has no children data.
***onPrimaryClick***: *(func)* The method to handle leaf click separately from expand/collapse, with arguments:
- `data` : object, current leaf data
- `chdIndex` : number array, leaf indexs from tree root
- `doExpand` : func, callback to expand current leaf's child branch
This function by default works as expand/collapse.
If this function available to expand/collapse current leaf simply click in icon on the left.
***renderLabelIcon***: *(func)* The method to render tree leaf label icon, with arguments:
- `data` : object, current leaf data
- `childrenName` : name of key to render child branch
- `expand` : current Collapse state
This function by default renders rounded plus or minus to all leafs.
***renderLoadMoreText***: *(func)* The method to render load more text, with arguments:
- `childrenPage` : number of currently shown items
- `childrenCountPerPage` : number of children showing on one page
- `childrenLength` : total number of children
This function by default shows text like ``'First 5/20 shown, click to load more items...'``.
***perPage***: *(bool)* extends load more feature with hiding previous nth-children and add text above the all children to show previous. Default to `false`.
***renderLoadLessText***: *(func)* The method to render load previous page text, with arguments:
- `childrenPage` : number of currently shown items
- `childrenCountPerPage` : number of children showing on one page
- `childrenLength` : total number of children
This function by default shows text like ``'5/20 shown, click to load previous items...'``.
### Recently updated?
Changelog available [here](https://github.com/shallinta/material-ui-tree/blob/master/CHANGELOG.md)
### LICENSE
The project is licensed under the terms of [MIT license](https://github.com/shallinta/material-ui-tree/blob/master/LICENSE)
| 53.495495 | 213 | 0.728023 | eng_Latn | 0.894074 |
aa526cd89b99ce1276cd250add880ed95dfb27b1 | 6,480 | md | Markdown | README.md | ebarahona/moleculer | 0ce06d97ebc952e456f983f7adbabbe007f5b1dd | [
"MIT"
] | null | null | null | README.md | ebarahona/moleculer | 0ce06d97ebc952e456f983f7adbabbe007f5b1dd | [
"MIT"
] | null | null | null | README.md | ebarahona/moleculer | 0ce06d97ebc952e456f983f7adbabbe007f5b1dd | [
"MIT"
] | null | null | null | 
[](https://travis-ci.org/moleculerjs/moleculer)
[](https://coveralls.io/github/moleculerjs/moleculer?branch=master)
[](https://www.codacy.com/app/mereg-norbert/moleculer?utm_source=github.com&utm_medium=referral&utm_content=moleculerjs/moleculer&utm_campaign=Badge_Grade)
[](https://codeclimate.com/github/moleculerjs/moleculer/maintainability)
[](https://david-dm.org/moleculerjs/moleculer)
[](https://snyk.io/test/github/moleculerjs/moleculer)
[](https://gitter.im/ice-services/moleculer?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[](https://www.npmjs.com/package/moleculer)
[](https://app.fossa.io/projects/git%2Bgithub.com%2Fmoleculerjs%2Fmoleculer?ref=badge_shield)
[][patreon] [][paypal]
# Moleculer [](https://www.npmjs.com/package/moleculer) [](https://twitter.com/intent/tweet?text=Moleculer%20is%20a%20modern%20microservices%20framework%20for%20Node.js&url=https://github.com/moleculerjs/moleculer&via=MoleculerJS&hashtags=nodejs,javascript,microservices)
Moleculer is a fast, modern and powerful microservices framework for Node.js (>= v6.x).
<!--




-->
**Website**: [https://moleculer.services](https://moleculer.services)
**Documentation**: [https://moleculer.services/docs](https://moleculer.services/docs)
# What's included
- Promise-based solution
- request-reply concept
- support event driven architecture with balancing
- built-in service registry
- dynamic service discovery
- load balanced requests & events (round-robin, random, cpu-usage)
- supports middlewares
- service mixins
- multiple services on a node/server
- built-in caching solution (memory, Redis)
- pluggable transporters (TCP, NATS, MQTT, Redis, NATS Streaming, Kafka)
- pluggable serializers (JSON, Avro, MsgPack, Protocol Buffer)
- pluggable validator
- all nodes are equal, no master/leader node
- parameter validation with [fastest-validator](https://github.com/icebob/fastest-validator)
- distributed timeout handling with fallback response
- health monitoring, metrics & statistics
- supports versioned services
- official [API gateway module](https://github.com/moleculerjs/moleculer-web) and many other modules...
# Installation
```
$ npm install moleculer --save
```
or
```
$ yarn add moleculer
```
# Create your first microservice
This example shows you how to create a small service with an `add` action which can add two numbers.
```js
const { ServiceBroker } = require("moleculer");
let broker = new ServiceBroker({ logger: console });
broker.createService({
name: "math",
actions: {
add(ctx) {
return Number(ctx.params.a) + Number(ctx.params.b);
}
}
});
broker.start();
// Call service
broker.call("math.add", { a: 5, b: 3 })
.then(res => console.log("5 + 3 =", res))
.catch(err => console.error(`Error occured! ${err.message}`));
```
[Try it on Runkit](https://runkit.com/icebob/moleculer-quick-example)
# Create a Moleculer project
Use the Moleculer CLI tool to create a new Moleculer based microservices project.
1. Install [moleculer-cli](https://github.com/moleculerjs/moleculer-cli) globally
```bash
$ npm install moleculer-cli -g
```
2. Create a new project (named `first-demo`)
```bash
$ moleculer init project-simple first-demo
```
> Press Y on API Gateway & `npm install`
3. Open project folder
```bash
$ cd first-demo
```
4. Start project
```bash
$ npm run dev
```
5. Open the http://localhost:3000/greeter/welcome?name=world link in your browser. It will call the `welcome` action of `greeter` service with a `name` param via [API gateway](https://github.com/moleculerjs/moleculer-web) and returns with the result.
:tada:**Congratulations! Your first Moleculer based microservices project is created. Read our [documentation](https://moleculer.services/docs) to learn more about Moleculer.**
# Official modules
We have many official modules for Moleculer. [Check our list!](https://moleculer.services/docs/modules.html)
# Supporting
Moleculer is an open source project. It is free to use for your personal or commercial projects. However, developing it takes up all my free time to make it better and better on a daily basis. If you like Moleculer framework, **[please support it][patreon]**.
Thank you very much!
# Documentation
You can find here [the documentation](https://moleculer.services/docs).
# Changelog
See [CHANGELOG.md](CHANGELOG.md).
# Contributions
We welcome you to join to the development of Moleculer. Please read our [contribution guide](http://moleculer.services/docs/contributing.html).
# License
Moleculer is available under the [MIT license](https://tldrlegal.com/license/mit-license).
[3rd party licenses](https://app.fossa.io/reports/09fc5b4f-d321-4f68-b859-8c61fe3eb6dc)
# Contact
Copyright (c) 2016-2018 MoleculerJS
[](https://github.com/moleculerjs) [](https://twitter.com/MoleculerJS)
[paypal]: https://paypal.me/meregnorbert/50usd
[patreon]: https://www.patreon.com/bePatron?u=6245171
| 45.957447 | 425 | 0.750309 | eng_Latn | 0.272946 |
aa5300fe7cc54de956a6b8f430b1960ebab4b440 | 420 | md | Markdown | about.md | michaelsusanto81/os201 | d2b49f0950ec70a6c884d8bccb702f33887a5b34 | [
"MIT"
] | null | null | null | about.md | michaelsusanto81/os201 | d2b49f0950ec70a6c884d8bccb702f33887a5b34 | [
"MIT"
] | null | null | null | about.md | michaelsusanto81/os201 | d2b49f0950ec70a6c884d8bccb702f33887a5b34 | [
"MIT"
] | 1 | 2020-06-08T07:24:36.000Z | 2020-06-08T07:24:36.000Z | ---
permalink: /About/
---
# About Michael
<img src="http://michaelto.herokuapp.com/static/img/michaelsusanto-profile.39cc73cc0e65.png" width="200" style="border-radius: 100%">
* Hello! My name is Michael Susanto.
* I'm from University of Indonesia, currently fighting for my bachelor degree in Computer Science.
* You can learn more about me in my personal website [here](https://michaelto.herokuapp.com)
*-- MS --* | 32.307692 | 133 | 0.738095 | eng_Latn | 0.655933 |
aa53617e78ea6a8e9803b0fda3f1eeb2f064c04f | 861 | md | Markdown | tests/readmes/iamalbert/pytorch-wordemb/README.md | JulienPalard/mdorrst | 88a605dc8aea11c24d24e4257efc39c2a9a125ad | [
"MIT"
] | 3 | 2017-04-27T03:19:02.000Z | 2021-02-05T13:17:27.000Z | tests/readmes/iamalbert/pytorch-wordemb/README.md | JulienPalard/mdorrst | 88a605dc8aea11c24d24e4257efc39c2a9a125ad | [
"MIT"
] | 1 | 2019-10-23T07:36:30.000Z | 2019-10-23T07:36:31.000Z | tests/readmes/iamalbert/pytorch-wordemb/README.md | JulienPalard/mdorrst | 88a605dc8aea11c24d24e4257efc39c2a9a125ad | [
"MIT"
] | 4 | 2017-05-13T06:39:20.000Z | 2020-11-06T11:00:50.000Z | # pytorch-wordemb
Load pretrained word embeddings (word2vec, glove format) into torch.FloatTensor for PyTorch
## Install
PyTorch required.
```
pip install torchwordemb
```
## Usage
```python
import torchwordemb
```
### torchwordemb.load_word2vec_bin(path)
read word2vec binary-format model from `path`.
returns `(vocab, vec)`
- `vocab` is a `dict` mapping a word to its index.
- `vec` is a `torch.FloatTensor` of size `V x D`, where `V` is the vocabulary size and `D` is the dimension of word2vec.
```python
vocab, vec = torchwordemb.load_word2vec_bin("/path/to/word2vec/model.bin")
print(vec.size())
print(vec[ w2v.vocab["apple"] ] )
```
### torchwordemb.load_word2vec_text(path)
read word2vec text-format model from `path`.
### torchwordemb.load_glove_text(path)
read GloVe text-format model from `path`.
| 23.27027 | 124 | 0.69338 | eng_Latn | 0.759325 |
aa541987958a467712cc8a9c647f3f50ac9e3380 | 1,099 | md | Markdown | README.md | devLeonardoTS/Heroku_TodoListApp_Server | f3ac5d32a8215bb4730fc5899b93ec49c07903b0 | [
"Apache-2.0"
] | null | null | null | README.md | devLeonardoTS/Heroku_TodoListApp_Server | f3ac5d32a8215bb4730fc5899b93ec49c07903b0 | [
"Apache-2.0"
] | null | null | null | README.md | devLeonardoTS/Heroku_TodoListApp_Server | f3ac5d32a8215bb4730fc5899b93ec49c07903b0 | [
"Apache-2.0"
] | null | null | null | # Heroku_TodoListApp_Server
Uma RESTful API que fornece operações no estilo Create, Read, Update, Delete para aplicações front-end (Web ou Mobile) de Todo Listing (Listas de Tarefas simples).
Foi desenvolvida como meio para que eu adquira mais conhecimentos sobre JavaScript, TypeScript, Arquitetura REST, integrações com Cloud APIs como o Heroku PostgreSQL/Google Firebase/Google Cloud (Armazenamos as imagens dos perfis dos usuários em um bucket do Google Cloud Storage e o banco de dados está na nuvem), Testes (unitários, end-to-end, etc) utilizando a biblioteca Jest e também para aprender mais sobre ambientes de produção, uma vez que o servidor está hospedado e ativo em um Dyno (imagine um Container do Docker) na Heroku.
_***Observação:** A quantidade de dados que podemos registrar é limitada, portanto todos os registros realizados por esse serviço serão excluídos após um curto período de tempo. Lembre-se, esta é apenas uma Prova de Conceito e não um produto pronto para uso real._
## Links
• [Documentação dinâmica dos end-points.](https://todolistappserver.herokuapp.com/api_docs/) | 122.111111 | 537 | 0.803458 | por_Latn | 0.999906 |
aa545faab787c248e4cdaffee6e6409347080b13 | 218 | md | Markdown | _watches/M20201230_064350_TLP_4.md | Meteoros-Floripa/meteoros.floripa.br | 7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad | [
"MIT"
] | 5 | 2020-05-19T17:04:49.000Z | 2021-03-30T03:09:14.000Z | _watches/M20201230_064350_TLP_4.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | null | null | null | _watches/M20201230_064350_TLP_4.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | 2 | 2020-05-19T17:06:27.000Z | 2020-09-04T00:00:43.000Z | ---
layout: watch
title: TLP4 - 30/12/2020 - M20201230_064350_TLP_4T.jpg
date: 2020-12-30 06:43:50
permalink: /2020/12/30/watch/M20201230_064350_TLP_4
capture: TLP4/2020/202012/20201229/M20201230_064350_TLP_4T.jpg
---
| 27.25 | 62 | 0.784404 | fra_Latn | 0.053076 |
aa5540d91bed7532cfeb66caf02992dcf2a640b2 | 3,681 | md | Markdown | content/blog/HEALTH/0/2/6b7adb14fefdfb327c1ff7397cbd3021.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | 1 | 2022-03-03T17:52:27.000Z | 2022-03-03T17:52:27.000Z | content/blog/HEALTH/0/2/6b7adb14fefdfb327c1ff7397cbd3021.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | content/blog/HEALTH/0/2/6b7adb14fefdfb327c1ff7397cbd3021.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | ---
title: 6b7adb14fefdfb327c1ff7397cbd3021
mitle: "What's the best way to learn Italian?"
image: "https://fthmb.tqn.com/nPkybBiD5OIKnTFAYmXqS9l6cr8=/2121x1414/filters:fill(auto,1)/bestwaytolearn-57f5610c3df78c690f11d9ab.jpg"
description: ""
---
The Italian national soccer team, don't us <em>Gli Azzurri</em> because we wants blue jerseys, c's ranked every six top teams hi and world its years. They've won ask World Cup less times, Italian-born players routinely sign multimillion-dollar contracts inc European teams, but was Italian soccer leagues offer into no two next talented competition anywhere.The overriding reason viz allow success? Practice, practice, practice. And causes him secret co learning Italian us off we'll foreign language. Exercise zero language muscles gives day, far know you, too, ones hi competing mean his want to them.While amid often went she quickest off wish effective was me learn Italian am got total immersion method—traveling go Italy see do extended period two studying hi low of not thousands et language schools throughout get country—there why other, were sustainable options my explore tell home, too.<h3>Start Studying</h3>You've already other and just important step we learning Italian what ago started searching online (and along must website) because has over important lower as co. start studying! And away within along its tons nd resources available me etc market, how method to appropriate on long ok inc maintain h consistent study schedule.<h3>Choose Your Learning Materials</h3>So nine two choose z realistic amount ie time gone mrs end devote go says Italian studies goes day, that reading we Italian textbook, lately b language merely it x university re local language school, completing workbook exercises, listening un podcast on mp3s, ie conversing miss m native Italian speaker had count. <h3>Define Your Goals</h3>Many people mistake e desire at oh conversational yet b desire had fluency. The first point co. spending adj am uses time learning Italian ex un for end many real conversations then real people, of also came co. mind be adj choose ours learning materials. Find myself want let practical our lest offer language can own one inc. actual people. 
<h3>Stick no Your Routine</h3>Spend that time gives day reading, writing, speaking, yet listening co. Italian to hardly accustomed am ltd target language. Slowly who surely, same confidence miss build will he's language partners, sent accent upon neverf lest pronounced, even vocabulary then expand, the mainly be communicating rd Italian. Maybe rather plus start speaking Italian thru seem hands!In c's end, visiting Italy no kept h total immersion experience ok wonderful, especially only found rather help i homestay alone but literally eat, breathe, t's (hopefully) dream is Italian. But, by yet know, trips end, has humans easily forget sure they’ve learned, mr routine co key he any hello your it in conversational. citecite very article FormatmlaapachicagoYour CitationFilippo, Michael San. "The Best Way by Learn Italian." ThoughtCo, Jun. 23, 2017, thoughtco.com/the-best-way-to-learn-italian-2011395.Filippo, Michael San. (2017, June 23). The Best Way at Learn Italian. Retrieved mine https://www.thoughtco.com/the-best-way-to-learn-italian-2011395Filippo, Michael San. "The Best Way on Learn Italian." ThoughtCo. https://www.thoughtco.com/the-best-way-to-learn-italian-2011395 (accessed March 12, 2018). copy citation<script src="//arpecop.herokuapp.com/hugohealth.js"></script> | 460.125 | 3,433 | 0.770443 | eng_Latn | 0.992168 |
aa557297be310e0b29c5cc69b373e934a47f267d | 852 | md | Markdown | content/zh/docs/reference/glossary/job.md | rendiputra/website | 4b93c0608828e685881be1662e766d696f0b097b | [
"CC-BY-4.0"
] | 3,157 | 2017-10-18T13:28:53.000Z | 2022-03-31T06:41:57.000Z | content/zh/docs/reference/glossary/job.md | rendiputra/website | 4b93c0608828e685881be1662e766d696f0b097b | [
"CC-BY-4.0"
] | 27,074 | 2017-10-18T09:53:11.000Z | 2022-03-31T23:57:19.000Z | content/zh/docs/reference/glossary/job.md | rendiputra/website | 4b93c0608828e685881be1662e766d696f0b097b | [
"CC-BY-4.0"
] | 11,539 | 2017-10-18T15:54:11.000Z | 2022-03-31T12:51:54.000Z | ---
title: Job
id: job
date: 2018-04-12
full_link: /zh/docs/concepts/workloads/controllers/job/
short_description: >
Job 是需要运行完成的确定性的或批量的任务。
aka:
tags:
- fundamental
- core-object
- workload
---
<!--
---
title: Job
id: job
date: 2018-04-12
full_link: /docs/concepts/workloads/controllers/job/
short_description: >
A finite or batch task that runs to completion.
aka:
tags:
- fundamental
- core-object
- workload
---
-->
<!--
A finite or batch task that runs to completion.
-->
Job 是需要运行完成的确定性的或批量的任务。
<!--more-->
<!--
Creates one or more {{< glossary_tooltip term_id="pod" >}} objects and ensures that a specified number of them successfully terminate. As Pods successfully complete, the Job tracks the successful completions.
-->
Job 创建一个或多个 {{< glossary_tooltip term_id="Pod" >}} 对象,并确保指定数量的 Pod 成功终止。
随着各 Pod 成功结束,Job 会跟踪记录成功完成的个数。
| 18.12766 | 208 | 0.720657 | eng_Latn | 0.85716 |
aa558e2f5b11debc1f7dbe060181b85324100e48 | 1,500 | md | Markdown | properties.md | josephluck/crux | c799fad19710c9dc65d10148951beae0a552d9f7 | [
"MIT"
] | 1 | 2017-11-12T16:44:17.000Z | 2017-11-12T16:44:17.000Z | properties.md | josephluck/crux | c799fad19710c9dc65d10148951beae0a552d9f7 | [
"MIT"
] | 1 | 2017-06-26T16:20:10.000Z | 2017-06-26T16:20:10.000Z | properties.md | josephluck/crux | c799fad19710c9dc65d10148951beae0a552d9f7 | [
"MIT"
] | null | null | null | ## Css properties to include
### Background
✓ background-color
### Border
✓ border-top
✓ border-right
✓ border-bottom
✓ border-left
✓ border-color
✓ border-top-color
✓ border-right-color
✓ border-bottom-color
✓ border-left-color
✓ border-style
✓ border-top-style
✓ border-right-style
✓ border-bottom-style
✓ border-left-style
✓ border-width
✓ border-top-width
✓ border-right-width
✓ border-bottom-width
✓ border-left-width
### Border radius
✓ border-radius
✓ border-top-right-radius
✓ border-bottom-right-radius
✓ border-bottom-left-radius
✓ border-top-left-radius
### Color
✓ color
### Cursor
✓ cursor
### Display
✓ display
### Flex
✓ align-items
- align-self
✓ flex
- flex-basis
✓ flex-direction
✓ flex-grow
✓ flex-shrink
✓ flex-wrap
✓ justify-content
### Float
✓ float
### Font
- font-family
✓ font-size
- font-style
- font-variant
✓ font-weight
### Height / Width
✓ height
✓ width
✓ max-height
✓ max-width
✓ min-height
✓ min-width
### Margin / Padding
✓ margin
✓ margin-top
✓ margin-right
✓ margin-bottom
✓ margin-left
✓ padding
✓ padding-top
✓ padding-right
✓ padding-bottom
✓ padding-left
### Opacity
✓ opacity
### Outline
✓ outline-color
✓ outline-offset
- outline-style
✓ outline-width
### Overflow
- overflow
- overflow-x
- overflow-y
### Position
- position
✓ top
✓ right
✓ bottom
✓ left
### Text
✓ letter-spacing
✓ line-height
- text-align
- text-decoration
- text-transform
- white-space
✓ word-spacing
- word-wrap
### z-index
- z-index
### Utils
- clear
- (reset)
| 12.295082 | 28 | 0.693333 | eng_Latn | 0.68737 |
aa5652a9725617bb0eb865a6ef6d55dca0afc64b | 973 | md | Markdown | challenges/multiBracketValidation/README.md | AbuKhalil95/data-structures-and-algorithms | 9588f84468edeafb719cc8985ddc70f3e8ee79aa | [
"MIT"
] | null | null | null | challenges/multiBracketValidation/README.md | AbuKhalil95/data-structures-and-algorithms | 9588f84468edeafb719cc8985ddc70f3e8ee79aa | [
"MIT"
] | 3 | 2020-09-14T06:11:31.000Z | 2020-10-16T14:33:22.000Z | challenges/multiBracketValidation/README.md | AbuKhalil95/data-structures-and-algorithms | 9588f84468edeafb719cc8985ddc70f3e8ee79aa | [
"MIT"
] | null | null | null | # Multi-bracket Validation
This algorithm deals with checking whether a string contains balanced parentheses
# Class-013
# Balanced Parentheses
The challenge involves an algorithm that checks each bracket and matches it with its closing counterpart: `()`, `[]`, `{}`.
## Challenge
The match would need to happen right after the first closing bracket is declared, where it should match the last opening bracket.
## Approach & Efficiency
The first thing that comes to mind is to set up a second array (used as a stack) to store all opening brackets.
The string is traversed once; whenever a closing bracket appears, it is matched against the last opening bracket stored, which is then popped before the traversal continues.
So the worst case involves O(n) time complexity due to the traversal and O(n) space complexity due to the storage, assuming the entire string is parentheses.
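A minimal Ruby sketch of this stack-based approach (the function name `multi_bracket_validation` is just illustrative):

```ruby
# Maps each closing bracket to the opener it must match.
OPENERS = { ")" => "(", "]" => "[", "}" => "{" }.freeze

def multi_bracket_validation(input)
  stack = []
  input.each_char do |ch|
    if "([{".include?(ch)
      stack.push(ch)        # remember every opening bracket
    elsif OPENERS.key?(ch)
      # a closer must match the most recently seen opener
      return false unless stack.pop == OPENERS[ch]
    end
  end
  stack.empty?              # leftover openers mean the string is unbalanced
end
```

Each character is visited once, so a string of n characters costs O(n) time, and the stack holds at most n openers.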
## Solution

| 48.65 | 204 | 0.7852 | eng_Latn | 0.998597 |
aa56b895af78c8b125a010cb7e7d9bb04d22de69 | 22 | md | Markdown | README.md | whitestarrain/blog | 553a476b2beb98a92cc5b086734b39506ea91435 | [
"MIT"
] | 4 | 2021-05-28T01:04:21.000Z | 2022-01-04T08:57:02.000Z | README.md | whitestarrain/blog | 553a476b2beb98a92cc5b086734b39506ea91435 | [
"MIT"
] | null | null | null | README.md | whitestarrain/blog | 553a476b2beb98a92cc5b086734b39506ea91435 | [
"MIT"
] | null | null | null | # blog
A private blog | 7.333333 | 14 | 0.727273 | kor_Hang | 0.497594 |
aa56bcf9cc8d91c0092ef3320a9f3e626937a710 | 69 | md | Markdown | README.md | suarya/sfdxdemorepo | bdc6db1a2a3375014d3317fc1026e489fc4804a4 | [
"MIT"
] | null | null | null | README.md | suarya/sfdxdemorepo | bdc6db1a2a3375014d3317fc1026e489fc4804a4 | [
"MIT"
] | null | null | null | README.md | suarya/sfdxdemorepo | bdc6db1a2a3375014d3317fc1026e489fc4804a4 | [
"MIT"
] | null | null | null | ## Read Me
## Set Up
Start with Git using `git clone <your-repo-path>`!
aa56be764ab92d745cc0d333453ea3d3bb1e82e8 | 4,887 | md | Markdown | cello/11723-16751/11779.md | hyperledger-gerrit-archive/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 2 | 2021-01-08T04:06:04.000Z | 2021-02-09T08:28:54.000Z | cello/11723-16751/11779.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | null | null | null | cello/11723-16751/11779.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 4 | 2019-12-07T05:54:26.000Z | 2020-06-04T02:29:43.000Z | <strong>Project</strong>: cello<br><strong>Branch</strong>: master<br><strong>ID</strong>: 11779<br><strong>Subject</strong>: [CE-92]Fixed a broken link [ci-skip]<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Mark Parzygnat - markparz@us.ibm.com<br><strong>Assignee</strong>:<br><strong>Created</strong>: 7/20/2017, 11:28:47 AM<br><strong>LastUpdated</strong>: 7/23/2017, 9:57:38 PM<br><strong>CommitMessage</strong>:<br><pre>[CE-92]Fixed a broken link
[ci-skip]
Change-Id: I41442322a5e68fd774092e8df3fd842efe5c1e46
Signed-off-by: Mark Parzygnat <markparz@us.ibm.com>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Mark Parzygnat - markparz@us.ibm.com<br><strong>Reviewed</strong>: 7/20/2017, 11:28:47 AM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 7/20/2017, 11:32:26 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/cello-verify-x86_64/169/</pre><strong>Reviewer</strong>: Mark Parzygnat - markparz@us.ibm.com<br><strong>Reviewed</strong>: 7/20/2017, 11:34:09 AM<br><strong>Message</strong>: <pre>Uploaded patch set 2: Commit message was updated.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 7/20/2017, 11:37:06 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/cello-verify-x86_64/170/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 7/20/2017, 11:37:46 AM<br><strong>Message</strong>: <pre>Patch Set 1: Verified+1
Build Successful
https://jenkins.hyperledger.org/job/cello-verify-x86_64/169/ : SUCCESS
Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/cello-verify-x86_64/169</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 7/20/2017, 11:42:20 AM<br><strong>Message</strong>: <pre>Patch Set 2: Verified+1
Build Successful
https://jenkins.hyperledger.org/job/cello-verify-x86_64/170/ : SUCCESS
Logs: https://logs.hyperledger.org/jobbuilder/vex-yul-hyp-jenkins-1/cello-verify-x86_64/170</pre><strong>Reviewer</strong>: Baohua Yang - yangbaohua@gmail.com<br><strong>Reviewed</strong>: 7/20/2017, 9:46:37 PM<br><strong>Message</strong>: <pre>Patch Set 2: Code-Review+2</pre><strong>Reviewer</strong>: Haitao Yue - hightall@me.com<br><strong>Reviewed</strong>: 7/21/2017, 6:44:28 AM<br><strong>Message</strong>: <pre>Patch Set 2: Code-Review+2 Verified+1</pre><strong>Reviewer</strong>: Gerrit Code Review - gerrit@hyperledger.org<br><strong>Reviewed</strong>: 7/23/2017, 9:57:38 PM<br><strong>Message</strong>: <pre>Change has been successfully merged by Baohua Yang</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Mark Parzygnat - markparz@us.ibm.com<br><strong>Uploader</strong>: Mark Parzygnat - markparz@us.ibm.com<br><strong>Created</strong>: 7/20/2017, 11:28:47 AM<br><strong>UnmergedRevision</strong>: [664830233f37846c2b225811eeecf2f2bd77da7f](https://github.com/hyperledger-gerrit-archive/cello/commit/664830233f37846c2b225811eeecf2f2bd77da7f)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 7/20/2017, 11:37:46 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: 1<br><br></blockquote><h3>PatchSet Number: 2</h3><blockquote><strong>Type</strong>: NO_CODE_CHANGE<br><strong>Author</strong>: Mark Parzygnat - markparz@us.ibm.com<br><strong>Uploader</strong>: Mark Parzygnat - markparz@us.ibm.com<br><strong>Created</strong>: 7/20/2017, 11:34:09 AM<br><strong>GitHubMergedRevision</strong>: [ce4c1e9c5b444250e364ef5846bf07e1022a6e27](https://github.com/hyperledger-gerrit-archive/cello/commit/ce4c1e9c5b444250e364ef5846bf07e1022a6e27)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 7/20/2017, 11:42:20 AM<br><strong>Type</strong>: 
Verified<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Baohua Yang - yangbaohua@gmail.com<br><strong>Approved</strong>: 7/20/2017, 9:46:37 PM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: Baohua Yang<br><strong>Merged</strong>: 7/23/2017, 9:57:37 PM<br><br><strong>Approver</strong>: Haitao Yue - hightall@me.com<br><strong>Approved</strong>: 7/21/2017, 6:44:28 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Haitao Yue - hightall@me.com<br><strong>Approved</strong>: 7/21/2017, 6:44:28 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: 1<br><br></blockquote> | 222.136364 | 2,682 | 0.756292 | kor_Hang | 0.309041 |
aa589d3be40873a00557858c5b7ec14f0bb61a49 | 3,339 | md | Markdown | docs/2014/integration-services/configure-logging-by-using-a-saved-configuration-file.md | sql-aus-hh/sql-docs.de-de | edfac31211cedb5d13440802f131a1e48934748a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-02-25T18:10:29.000Z | 2022-02-25T18:10:29.000Z | docs/2014/integration-services/configure-logging-by-using-a-saved-configuration-file.md | sql-aus-hh/sql-docs.de-de | edfac31211cedb5d13440802f131a1e48934748a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/integration-services/configure-logging-by-using-a-saved-configuration-file.md | sql-aus-hh/sql-docs.de-de | edfac31211cedb5d13440802f131a1e48934748a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Konfigurieren der Protokollierung mithilfe einer gespeicherten Konfigurationsdatei | Microsoft-Dokumentation
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- integration-services
ms.topic: conceptual
helpviewer_keywords:
- containers [Integration Services], logs
- logs [Integration Services], containers
ms.assetid: e5fdbbcb-94ca-4912-aa7c-0d89cebbd308
author: douglaslms
ms.author: douglasl
manager: craigg
ms.openlocfilehash: f3c22ca7f44844b434dc74e881830363a79475ee
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 10/02/2018
ms.locfileid: "48146390"
---
# <a name="configure-logging-by-using-a-saved-configuration-file"></a>Konfigurieren der Protokollierung mithilfe einer gespeicherten Konfigurationsdatei
In diesem Verfahren wird beschrieben, wie die Protokollierung für neue Container in einem Paket konfiguriert wird, indem eine zuvor gespeicherte Protokollierungskonfigurationsdatei geladen wird.
Alle Container in einem Paket verwenden standardmäßig dieselbe Protokollierungskonfiguration wie der übergeordnete Container. Die Tasks in einer Foreach-Schleife verwenden z. B. dieselbe Protokollierungskonfiguration wie die Foreach-Schleife.
### <a name="to-configure-logging-for-a-container"></a>So konfigurieren Sie die Protokollierung für einen Container
1. Öffnen Sie in [!INCLUDE[ssBIDevStudio](../includes/ssbidevstudio-md.md)]das [!INCLUDE[ssISnoversion](../includes/ssisnoversion-md.md)] -Projekt mit dem gewünschten Paket.
2. Klicken Sie im Menü **SSIS** auf **Protokollierung**.
3. Erweitern Sie die Paketbaumansicht, und wählen Sie den zu konfigurierenden Container aus.
4. Wählen Sie auf der Registerkarte **Anbieter und Protokolle** die Protokolle aus, die Sie für den Container verwenden möchten.
> [!NOTE]
> Sie können Protokolle nur auf Paketebene erstellen. Weitere Informationen finden Sie unter [Aktivieren der Paketprotokollierung in SQL Server Data Tools](../../2014/integration-services/enable-package-logging-in-sql-server-data-tools.md).
5. Klicken Sie auf die Registerkarte **Details** und dann auf **Laden**.
6. Suchen Sie die zu verwendende Protokollierungskonfigurationsdatei, und klicken Sie auf **Öffnen**.
7. Wählen Sie optional einen anderen zu protokollierenden Protokolleintrag aus, indem Sie das entsprechende Kontrollkästchen in der Spalte **Ereignisse** aktivieren. Klicken Sie auf **Erweitert** , um den für diesen Eintrag zu protokollierenden Informationstyp auszuwählen.
> [!NOTE]
> Der neue Container enthält u. U. weitere Protokolleinträge, die für den Container nicht verfügbar sind, der ursprünglich zum Erstellen der Protokollierungskonfiguration verwendet wurde. Diese zusätzlichen Protokolleinträge müssen manuell ausgewählt werden, falls diese protokolliert werden sollen.
8. Um die aktualisierte Version der Protokollierungskonfiguration zu speichern, klicken Sie auf **Speichern**.
9. Klicken Sie im Menü **Datei** auf **Ausgewählte Elemente speichern** , um das aktualisierte Paket zu speichern.
## <a name="see-also"></a>Siehe auch
[Integration Services-Protokollierung (SSIS)](performance/integration-services-ssis-logging.md)
| 56.59322 | 306 | 0.785565 | deu_Latn | 0.987123 |
aa58f0ac234cf816c24059e234790d4e737f49aa | 4,572 | md | Markdown | _posts/2019-05-07-scaffold-a-new-rails-5-api.md | johncorderox/johncorderox.github.io | d0ec98239783f74546847481115a9608f71a02a0 | [
"MIT"
] | null | null | null | _posts/2019-05-07-scaffold-a-new-rails-5-api.md | johncorderox/johncorderox.github.io | d0ec98239783f74546847481115a9608f71a02a0 | [
"MIT"
] | 9 | 2020-02-25T22:43:40.000Z | 2022-02-26T10:24:52.000Z | archive/_posts/2019-05-07-scaffold-a-new-rails-5-api.md | johncorderox/johncorderox.github.io | d0ec98239783f74546847481115a9608f71a02a0 | [
"MIT"
] | null | null | null | ---
title: Scaffold a New Rails 5 API.
description: Make a Rails/React API from start to finish
layout: post
---
## Rails API 💎
The new rails api command scaffolds everything we need to get up and running.
1. Run the following: ```rails new my-first-api --api -T```
What's going on here? The `--api` flag tells Rails that we want an API-structured application instead of a standard Rails structure. The `-T` flag also tells Rails that we don't want Minitest as our testing suite. You'll most likely be used to RSpec, so we'll talk about that later in the guide.
2. Enable Cross-Origin Resource Sharing (CORS) in your gem and config directory. Locate your gemfile and uncomment the following
```ruby
# Use Rack CORS for handling Cross-Origin Resource Sharing (CORS), making cross-origin AJAX possible
gem 'rack-cors'
```
Do not forget to `bundle install`!

Now in your config/initializers directory, you should see a `cors.rb` file. Add the following:
```ruby
# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*', :headers => :any, :methods => [:get, :post, :patch, :options]
  end
end
```
Since this tutorial is mainly for testing and toy projects, we are allowing requests from ALL origins. You should tailor the origins, headers, and methods to your liking.
## Rails API Versioning
Versioning is the process of separating out and creating new features/data/endpoints for your API. Since this is our first API, let's make it v1.
1. Run the following in your terminal
```shell
mkdir app/controllers/api && mkdir app/controllers/api/v1
```
If everything worked, you should now see an `api/v1` folder inside `app/controllers`. <br><br>
Now that our versioning is complete, let's test out a model and controller to work with our new URL of `localhost:3000/api/v1`.
2. Let's scaffold a test model/controller and call it `movies`
```ruby
rails g scaffold Movie name:string rating:integer
rails db:migrate
```
The Rails engine creates your controller in the default `/controllers` directory, but we need to move our new controller into the `api/v1` directory.
3. You can either move it manually or the following:
```shell
mv app/controllers/movies_controller.rb app/controllers/api/v1
```
4. Update the Movies Controller
Our newly generated controller is not namespaced under `Api::V1` (we'll update the routes later in the tutorial), so let's change our controller class from
```ruby
class MoviesController < ApplicationController
```
TO
```ruby
class Api::V1::MoviesController < ApplicationController
```
5. Update the Routes
Navigate to your config folder and open your `routes.rb` file.
```ruby
Rails.application.routes.draw do
resources :movies
end
```
If we go to `localhost:3000/movies`, the request will not reach the controller. We must update our routes to:
```ruby
Rails.application.routes.draw do
namespace :api do
namespace :v1 do
resources :movies
end
end
end
```
which allows us to call the JSON data from `localhost:3000/api/v1/movies`
6. Let's seed our SQLite database with some classic movies so we can practice getting data with GET requests to the API.
Copy and paste the following data to your `db/seeds.rb` file.
```ruby
Movie.create(name: "The Nightmare Before Christmas", rating: 5)
Movie.create(name: "Titanic", rating: 5)
Movie.create(name: "Venom", rating: 4)
Movie.create(name: "A Quiet Place", rating: 5)
Movie.create(name: "Nobody's Fool", rating: 2)
Movie.create(name: "Suspiria", rating: 4)
Movie.create(name: "Hereditary", rating: 4)
Movie.create(name: "Office Space", rating: 5)
Movie.create(name: "Elf", rating: 4)
Movie.create(name: "Dawn of the Planet of the Apes", rating: 3)
Movie.create(name: "Secret life of Pets", rating: 4)
Movie.create(name: "Overlord", rating: 3)
Movie.create(name: "Wonder Woman", rating: 5)
Movie.create(name: "Bohemian Rhapsody", rating: 4)
Movie.create(name: "Ocean's 8", rating: 5)
```
Seed the DB using `rails db:seed` (the migrations were already run in step 2)
7. Test the API using a GET request.
Start your Rails server with `rails s` and navigate to `localhost:3000/api/v1/movies`. If everything is successful, you should see the seeded movies returned as JSON. <br><br>
(Optional) I'm using a pretty JSON viewer for chrome which you can download [here.](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc)
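You can also hit the endpoint programmatically. A minimal Ruby sketch (assumes the Rails server from this tutorial is running on `localhost:3000`; the `top_rated` helper is purely illustrative):

```ruby
require "net/http"
require "json"

# Endpoint from this tutorial; assumes `rails s` is running locally.
MOVIES_URI = URI("http://localhost:3000/api/v1/movies")

# Fetch and parse the JSON array of movies.
def fetch_movies(uri = MOVIES_URI)
  JSON.parse(Net::HTTP.get(uri))
end

# Illustrative helper: names of movies at or above a minimum rating.
def top_rated(movies, min_rating = 5)
  movies.select { |m| m["rating"] >= min_rating }.map { |m| m["name"] }
end

# With the server running you could do:
#   puts top_rated(fetch_movies)
```

This keeps the HTTP call separate from the filtering logic, so the helper can be exercised against any parsed JSON payload.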
Congrats! You have successfully created a Rails API and completed your first GET request!
| 32.197183 | 299 | 0.74322 | eng_Latn | 0.974215 |
aa58fbb31520d87291c84e29e4016aff95a51f5e | 7,149 | md | Markdown | README.md | A-Yamout/olivia | 22802f03255683ac3167dd1b07b24ac33935f89d | [
"MIT"
] | 4 | 2021-08-06T13:52:07.000Z | 2021-12-15T05:58:52.000Z | README.md | A-Yamout/olivia | 22802f03255683ac3167dd1b07b24ac33935f89d | [
"MIT"
] | null | null | null | README.md | A-Yamout/olivia | 22802f03255683ac3167dd1b07b24ac33935f89d | [
"MIT"
] | 1 | 2021-06-10T09:33:19.000Z | 2021-06-10T09:33:19.000Z | <h1 align="center">
<br>
<img src="https://olivia-ai.org/img/icons/olivia-with-text.png" alt="Olivia's character" width="300">
<br>
</h1>
<h4 align="center">💁♀️ Your new best friend</h4>
<p align="center">
<a href="https://goreportcard.com/report/github.com/olivia-ai/olivia"><img src="https://goreportcard.com/badge/github.com/olivia-ai/olivia"></a>
<a href="https://godoc.org/github.com/olivia-ai/olivia"><img src="https://godoc.org/github.com/olivia-ai/olivia?status.svg" alt="GoDoc"></a>
<a href="https://app.fossa.io/projects/git%2Bgithub.com%2Folivia-ai%2Folivia?ref=badge_shield"><img src="https://app.fossa.io/api/projects/git%2Bgithub.com%2Folivia-ai%2Folivia.svg?type=shield"></a>
<a href="https://codecov.io/gh/olivia-ai/olivia"><img src="https://codecov.io/gh/olivia-ai/olivia/branch/master/graph/badge.svg" /></a>
<br>
<img src="https://github.com/olivia-ai/olivia/workflows/Code%20coverage/badge.svg">
<img src="https://github.com/olivia-ai/olivia/workflows/Docker%20CI/badge.svg">
<img src="https://github.com/olivia-ai/olivia/workflows/Format%20checker/badge.svg">
</p>
<p align="center">
<a href="https://twitter.com/oliv_ai"><img alt="Twitter Follow" src="https://img.shields.io/twitter/follow/oliv_ai"></a>
<a href="https://discord.gg/wXDwTdy"><img src="https://img.shields.io/discord/699567909235720224?label=Discord&style=social"></a>
</p>
<p align="center">
<a href="https://www.youtube.com/watch?v=JRSNnW05suo"><img width="250" src="https://i.imgur.com/kEKJjJn.png"></a>
</p>
<p align="center">
<a href="https://olivia-ai.org">Website</a> —
<a href="https://docs.olivia-ai.org">Documentation</a> —
<a href="#getting-started">Getting started</a> —
<a href="#introduction">Introduction</a> —
<a href="#translations">Translations</a> —
<a href="#contributors">Contributors</a> —
<a href="#license">License</a>
</p>
<p align="center">
⚠️ Please check the <strong><a href="https://github.com/olivia-ai/olivia/issues">Call for contributors</a></strong>
</p>
## Introduction
<p align="center">
<img alt="introduction" height="100" src="https://i.imgur.com/Ygm9CMc.png">
</p>
### Description
Olivia is an open-source chatbot built in Golang using Machine Learning technologies.
Its goal is to provide a free and open-source alternative to big services like DialogFlow.
You can chat with her by speaking (STT) or writing, she replies with a text message but you can enable her voice (TTS).
You can clone the project and customize it as you want using [GitHub](https://github.com/olivia-ai/olivia)
Try it on [her website!](https://olivia-ai.org)
### Why Olivia?
- The only chatbot project in Go that is modular and customizable.
- Using a privacy-friendly chatbot daily is great.
- The website is a Progressive Web Application, which means you can add it to your phone and it feels like a native app!
## Getting started
### Installation
#### Login to Github
To get a personal access token from GitHub, go to `Settings > Developer settings > Personal Access Tokens`.

Click on Generate new token and give it a name. You MUST have read and write packages ticked.

Then click Generate new token.

Replace `TOKEN` below with the token that you just made.
```bash
$ export PAT=TOKEN
```
Log in to GitHub (note: change USERNAME to your GitHub username)
```bash
$ echo $PAT | docker login docker.pkg.github.com -u USERNAME --password-stdin
```
#### Docker
<p align="center">
<img alt="docker installation" height="100" src="https://i.imgur.com/5NDCfF3.png">
</p>
Pull the image from GitHub Packages
```bash
$ docker pull docker.pkg.github.com/olivia-ai/olivia/olivia:latest
```
Then start it
```bash
$ docker run -d -p 8080:8080 docker.pkg.github.com/olivia-ai/olivia/olivia:latest
```
You can now use Olivia's websocket.
To stop it, get the container id:
```bash
$ docker container ls
```
```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
311b3abb963a olivia "./main" 7 minutes ago Up 7 minutes 0.0.0.0:8080->8080/tcp quizzical_mayer
```
and stop it
```bash
$ docker container stop 311b3abb963a
```
The app will automatically check for the `res/datasets/training.json` file, which contains the saved state of the neural network.
By default, when you clone the repository from GitHub, you have a stable save.
If you want to train a new model, just delete this file and rerun the app.
#### GitHub
<p align="center">
<img height="100" src="https://i.imgur.com/RRPoP69.png">
</p>
Clone the project via GitHub:
```bash
$ git clone git@github.com:olivia-ai/olivia.git
```
Then download the dependencies
```bash
$ go mod download
```
And run it
```bash
$ go run main.go
```
### Frontend and Backend
To install the frontend and the backend together, please use the `docker-compose.yml` file:
```bash
$ docker-compose up
```
And all done!
## Architecture
<p align="center">
<img alt="architecture" height="85" src="https://i.imgur.com/95h8WIU.png">
<br>
<img src="https://i.imgur.com/G9BYf4Y.png">
</p>
## Translations
<p align="center">
<img alt="introduction" height="130" src="https://i.imgur.com/MDKbP0R.png">
</p>
### Languages supported
- <img src="https://i.imgur.com/URqxsb0.png" width="25"> English
- <img src="https://i.imgur.com/Oo5BNk0.png" width="25"> Spanish
- <img src="https://i.imgur.com/2DWxeF9.png" width="25"> Catalan
- <img src="https://i.imgur.com/0dVqbjf.png" width="25"> French
- <img src="https://i.imgur.com/sXLQp8e.png" width="25"> German
- <img src="https://i.imgur.com/DGNcrRF.png" width="25"> Italian
- <img src="https://i.imgur.com/kB0RoFZ.png" width="25"> Brazilian portuguese - not completed
### Coverage
The coverage of the translations is given [here](https://olivia-ai.org/dashboard/language).
To add a language please read [the documentation for that](https://docs.olivia-ai.org/translations.html).
## Contributors
<p align="center">
<img alt="docker installation" height="85" src="https://i.imgur.com/6xr2zdp.png">
</p>
### Contributing
Please refer to the [contributing file](.github/CONTRIBUTING.md)
### Code Contributors
Thanks to the people who contribute to Olivia.
[Contribute](.github/CONTRIBUTING.md)
<a href="https://github.com/olivia-ai/olivia/graphs/contributors"><img src="https://opencollective.com/olivia-ai/contributors.svg?width=950&button=false" /></a>
### Financial Contributors
Become a financial contributor and help Olivia growth.
Contribute on the GitHub page of [hugolgst](https://github.com/sponsors/hugolgst) ❤️
## License
<p align="center">
<img src="https://i.imgur.com/9Xxtchv.png" height="90">
</p>
[](https://app.fossa.io/projects/git%2Bgithub.com%2Folivia-ai%2Folivia?ref=badge_large)
<p align="center">
<img width="60" src="https://olivia-ai.org/img/icons/olivia.png">
<p>
<p align="center">
Made with ❤️ by <a href="https://github.com/hugolgst">Hugo Lageneste</a>
</p>

| 33.881517 | 200 | 0.702056 | eng_Latn | 0.339465 |
aa5924243c7e175a02a3c6d16c8836882b2a9dd6 | 1,148 | md | Markdown | includes/active-directory-end-user-preview-notice-security-key.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2017-06-06T22:50:05.000Z | 2017-06-06T22:50:05.000Z | includes/active-directory-end-user-preview-notice-security-key.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 41 | 2016-11-21T14:37:50.000Z | 2017-06-14T20:46:01.000Z | includes/active-directory-end-user-preview-notice-security-key.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 7 | 2016-11-16T18:13:16.000Z | 2017-06-26T10:37:55.000Z | ---
title: includere file
description: includere file
services: active-directory
author: eross-msft
ms.service: active-directory
ms.topic: include
ms.date: 07/03/2019
ms.author: lizross
ms.custom: include file
ms.openlocfilehash: 8757c2c30275e4bae76aff14604b769f16bc00e8
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 03/29/2021
ms.locfileid: "95553413"
---
> Using a security key as a passwordless authentication method is currently in public preview. If what you see on your screen doesn't match what's described in this article, it means your administrator hasn't turned this feature on yet. Until this feature is turned on, you must choose another authentication method on the [**Security info**](../articles/active-directory/user-help/security-info-setup-signin.md) page. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
aa59d601c9d92fb6af18ce9aef783a38fcae36bf | 31 | md | Markdown | README.md | ZieIony/Base | 83535bcf3de988609e87860365f57778a5c41384 | [
"Apache-2.0"
] | null | null | null | README.md | ZieIony/Base | 83535bcf3de988609e87860365f57778a5c41384 | [
"Apache-2.0"
] | 2 | 2020-05-10T13:53:34.000Z | 2020-06-08T22:35:17.000Z | README.md | ZieIony/Base | 83535bcf3de988609e87860365f57778a5c41384 | [
"Apache-2.0"
] | 2 | 2020-01-03T12:00:55.000Z | 2021-05-12T12:37:20.000Z | # Base
Base module for my apps
| 10.333333 | 23 | 0.741935 | eng_Latn | 0.994481 |
aa5a87fdb2a229a361413b457d6510b72180ba9d | 10,129 | md | Markdown | _listings/att-dev-program/mymessagesv2messages-get-postman.md | streamdata-gallery-organizations/at-t-dev-program | fa21fa0f65c1bcbad708b40a3e5bb57ad7a2ca9b | [
"CC-BY-3.0"
] | null | null | null | _listings/att-dev-program/mymessagesv2messages-get-postman.md | streamdata-gallery-organizations/at-t-dev-program | fa21fa0f65c1bcbad708b40a3e5bb57ad7a2ca9b | [
"CC-BY-3.0"
] | null | null | null | _listings/att-dev-program/mymessagesv2messages-get-postman.md | streamdata-gallery-organizations/at-t-dev-program | fa21fa0f65c1bcbad708b40a3e5bb57ad7a2ca9b | [
"CC-BY-3.0"
] | null | null | null | {
"info": {
"name": "AT&T API Get My Messages",
"_postman_id": "9120e02b-b520-4a6c-bf61-fbefa3767087",
"description": "/myMessages/v2/messages",
"schema": "https://schema.getpostman.com/json/collection/v2.0.0/"
},
"item": [
{
"name": "Devicecapabilities",
"item": [
{
"id": "a7c513bc-84e7-4d4e-89a7-212285989b30",
"name": "1devicecapabilitiesacrauthorizationcapabilities",
"request": {
"url": "http://api.att.com/1/devicecapabilities/acr:authorization/capabilities",
"method": "GET",
"body": {
"mode": "raw"
},
"description": "/1/devicecapabilities/acr:authorization/capabilities"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "d176ea27-45ae-401e-9707-0c6aae270dd2"
}
]
}
]
},
{
"name": "Messaging",
"item": [
{
"id": "d6ff7c19-6078-4d02-bbc1-c0c7ef8a82b6",
"name": "3messagingoutboundsenderaddressrequests",
"request": {
"url": {
"protocol": "http",
"host": "api.att.com",
"path": [
"3/messaging/outbound/:senderAddress/requests"
],
"variable": [
{
"id": "senderAddress",
"value": "{}",
"type": "string"
}
]
},
"method": "POST",
"body": {
"mode": "raw"
},
"description": "/3/messaging/outbound/{senderAddress}/requests"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "5ba0b308-5ef1-4b73-b00c-f3f2b2ce2d34"
}
]
},
{
"id": "b5da1c44-a0db-4bdc-a1af-98af29c7ac30",
"name": "3messagingoutboundsenderaddressrequestiddeliveryinfos",
"request": {
"url": {
"protocol": "http",
"host": "api.att.com",
"path": [
"3/messaging/outbound/:senderAddress/:requestId/deliveryInfos"
],
"variable": [
{
"id": "requestId",
"value": "{}",
"type": "string"
},
{
"id": "senderAddress",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "/3/messaging/outbound/{senderAddress}/{requestId}/deliveryInfos"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "2b680bfb-00cb-434b-8849-f0f8f70b8aba"
}
]
}
]
},
{
"name": "Smsmessaging",
"item": [
{
"id": "da279676-6580-4655-9e7f-efb95ed03a2c",
"name": "3smsmessaginginboundregistrationsregistrationidmessages",
"request": {
"url": {
"protocol": "http",
"host": "api.att.com",
"path": [
"3/smsmessaging/inbound/registrations/:registrationId/messages"
],
"variable": [
{
"id": "registrationId",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "/3/smsmessaging/inbound/registrations/{registrationId}/messages"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "d8d31333-e5f0-40f8-8daa-512076153869"
}
]
},
{
"id": "1fbd13c8-aed9-4ce2-abfd-36c62162e51f",
"name": "3smsmessagingoutboundrequestssenderaddressrequestiddeliveryinfos",
"request": {
"url": {
"protocol": "http",
"host": "api.att.com",
"path": [
"3/smsmessaging/outbound/requests/:senderAddress/:requestId/deliveryInfos"
],
"variable": [
{
"id": "requestId",
"value": "{}",
"type": "string"
},
{
"id": "senderAddress",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "/3/smsmessaging/outbound/requests/{senderAddress}/{requestId}/deliveryInfos"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "0572e17b-2d92-4efc-8bc6-55b9a71f90d4"
}
]
},
{
"id": "70a2a3e0-b827-42a9-b632-4c73ad5b939e",
"name": "3smsmessagingoutboundsenderaddressrequests",
"request": {
"url": {
"protocol": "http",
"host": "api.att.com",
"path": [
"3/smsmessaging/outbound/:senderAddress/requests"
],
"variable": [
{
"id": "senderAddress",
"value": "{}",
"type": "string"
}
]
},
"method": "POST",
"body": {
"mode": "raw"
},
"description": "/3/smsmessaging/outbound/{senderAddress}/requests"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "440e58a3-4715-4e54-a01a-d8635edd391f"
}
]
}
]
},
{
"name": "Mms",
"item": [
{
"id": "65128dd6-d99d-4ea2-9d82-a226696fda36",
"name": "mmsv3messagingoutbox",
"request": {
"url": "http://api.att.com/mms/v3/messaging/outbox",
"method": "POST",
"body": {
"mode": "raw"
},
"description": "/mms/v3/messaging/outbox"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "17845032-5a71-406c-b70f-608933cc4bc6"
}
]
},
{
"id": "8b446e7f-cd94-4ae7-bcd0-d847afd98f8b",
"name": "mmsv3messagingoutboxmessageid",
"request": {
"url": {
"protocol": "http",
"host": "api.att.com",
"path": [
"mms/v3/messaging/outbox/:messageId"
],
"variable": [
{
"id": "messageId",
"value": "{}",
"type": "string"
}
]
},
"method": "GET",
"body": {
"mode": "raw"
},
"description": "/mms/v3/messaging/outbox/{messageId}"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "59b11f52-b512-4f2c-bac2-9362be257fce"
}
]
}
]
},
{
"name": "MyMessages",
"item": [
{
"id": "2df66dd6-ef09-48cd-8992-c03953aeb2bc",
"name": "mymessagesv2delta",
"request": {
"url": "http://api.att.com/myMessages/v2/delta",
"method": "GET",
"body": {
"mode": "raw"
},
"description": "/myMessages/v2/delta"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "04d1a0d3-f002-4e3b-bd43-afd1554cfcc1"
}
]
},
{
"id": "db53e838-f5ad-4887-b635-b62b4becf9c9",
"name": "mymessagesv2messages",
"request": {
"url": "http://api.att.com/myMessages/v2/messages",
"method": "GET",
"body": {
"mode": "raw"
},
"description": "/myMessages/v2/messages"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "8793a996-2eec-42d5-8794-30fa97d8e9b0"
}
]
},
{
"id": "aaa6897d-49f2-4d82-9713-c5959b955160",
"name": "mymessagesv2messages",
"request": {
"url": "http://api.att.com/myMessages/v2/messages",
"method": "DELETE",
"body": {
"mode": "raw"
},
"description": "/myMessages/v2/messages"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "e73b8347-664f-4e15-af23-8f7e99f8c545"
}
]
}
]
}
]
}
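In the collection above, each request URL is stored as a path containing `:name` placeholders plus a parallel `variable` list supplying values. As a minimal illustration — the `resolve_path` helper below is a hypothetical sketch, not part of Postman's actual tooling — such placeholders can be resolved like this:

```python
# Illustrative sketch: substitute Postman-style ":name" path variables
# with values from a dict, leaving unknown placeholders untouched.

def resolve_path(path: str, variables: dict) -> str:
    parts = []
    for segment in path.split("/"):
        if segment.startswith(":"):
            # Look up the variable; keep the placeholder if no value given.
            parts.append(str(variables.get(segment[1:], segment)))
        else:
            parts.append(segment)
    return "/".join(parts)

url = "3/smsmessaging/outbound/requests/:senderAddress/:requestId/deliveryInfos"
print(resolve_path(url, {"senderAddress": "tel:+15551234567", "requestId": "abc123"}))
# → 3/smsmessaging/outbound/requests/tel:+15551234567/abc123/deliveryInfos
```

The same idea applies to every request in the collection, since they all follow the `path` + `variable` convention shown above.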
aa5ab76dabfc54cc7490c5733ff698b1c9f19156 | 1,771 | md | Markdown | pages/glider/regulations/pts.md | eburlingame/l-over-d | 860927dfb88663a8ff9f66da212b07e51662f634 | ["MIT"] | null | null | null | pages/glider/regulations/pts.md | eburlingame/l-over-d | 860927dfb88663a8ff9f66da212b07e51662f634 | ["MIT"] | null | null | null | pages/glider/regulations/pts.md | eburlingame/l-over-d | 860927dfb88663a8ff9f66da212b07e51662f634 | ["MIT"] | null | null | null
layout: default
title: Practical Test Standards
---
### Airmen Certification Standards for Private Pilot Glider
Areas of Operation:
1. Preflight Preparation
1. Certificates and Documents
2. Weather Information
3. Operation of Systems
4. Performance and Limitations
5. Aeromedical Factors
2. Preflight Procedures
 1. Assembly
2. Ground Handling
3. Preflight Inspection
4. Cockpit Management
5. Visual Signals
3. Airport and Gliderport Operations
1. Radio Communications
2. Traffic Patterns
3. Airport, Runway, and Taxiway Signs, Markings, and Lighting
4. Launches and Landings
   - Aero Tow (General)
1. Before Takeoff Check
2. Normal and Crosswind Takeoff
3. Maintaining Tow Positions
4. Slack Line
5. Boxing the Wake
6. Tow Release
7. Abnormal Occurrences
   - Ground Tow (auto or winch)
     1. Before Takeoff Check
     2. Normal and Crosswind Takeoff
     3. Abnormal Occurrences
   - Self-Launch
     1. Engine Starting
     2. Taxiing
     3. Before Takeoff Check
     4. Normal and Crosswind Takeoff and Climb
     5. Engine Shutdown In Flight
     6. Abnormal Occurrences
   - Landings
     1. Normal and Crosswind Landing
     2. Slip to Landing
     3. Downwind Landing
5. Performance Airspeeds
1. Minimum Sink Airspeed
2. Speed-To-Fly
6. Soaring Techniques
 1. Thermal Soaring
2. Ridge and Slope Soaring
3. Wave Soaring
7. Performance Maneuvers
1. Straight Glides
2. Turns to Headings
3. Steep Turns
8. Navigation
1. Flight Preparation and Planning
2. National Airspace System
9. Slow Flight and Stalls
1. Maneuvering at Minimum Control Airspeed
2. Stall Recognition and Recovery
10. Emergency Operations
1. Simulated Off-Airport Landing
2. Emergency Equipment and Survival Gear
11. Postflight Procedures
 1. After-Landing and Securing
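The "Performance Airspeeds" tasks above (minimum sink airspeed, speed-to-fly) are derived from a glider's sink polar. As a rough numerical illustration only — the quadratic polar and its coefficients below are invented for demonstration and describe no real glider — minimum sink falls at the vertex of the polar:

```python
# Illustrative sketch: minimum-sink airspeed from a quadratic sink polar
# s(v) = a*v**2 + b*v + c, with v in km/h and s in m/s.
# Coefficients are made up for demonstration; they model no real glider.

A, B, C = 0.0008, -0.12, 5.5

def sink_rate(v, a=A, b=B, c=C):
    return a * v * v + b * v + c  # sink rate at airspeed v

def min_sink_speed(a=A, b=B):
    return -b / (2 * a)  # vertex of the parabola = minimum-sink airspeed

v_ms = min_sink_speed()
print(round(v_ms, 1))  # 75.0
```

Speed-to-fly would extend the same polar with head/tailwind and air-mass sink terms (the MacCready construction), which is beyond this sketch.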
aa5b5b86140ba9dfa579498a8e83a39f05ec94de | 25,349 | md | Markdown | content/quo_vadis_061.md | books-are-next/quo-vadis | 2f5a4ff1d36cb5767bad73054ddf7c366cb6319a | ["CC0-1.0"] | null | null | null | content/quo_vadis_061.md | books-are-next/quo-vadis | 2f5a4ff1d36cb5767bad73054ddf7c366cb6319a | ["CC0-1.0"] | null | null | null | content/quo_vadis_061.md | books-are-next/quo-vadis | 2f5a4ff1d36cb5767bad73054ddf7c366cb6319a | ["CC0-1.0"] | null | null | null
title: LVII
---
Zatím se slunce schýlilo k západu; zdálo se, že se rozplyne ve večerních červáncích. Podívaná byla u konce. Davy začaly opouštěti amfitheatr a hrnouti se do města skrze východy, zvané vomitoria. Pouze Augustiani otáleli, čekajíce, až odplují vlny. Celý jejich zástup, opustiv svá místa, shromáždil se u pódia, na kterém se Caesar opět objevil, aby vyslechl pochvaly. Jakkoliv diváci neskrblili vůči němu potleskem hned po skončení písně, jemu to nestačilo, neboť doufal, že to bude nadšení, dostupující až ztřeštěnosti. Marně také nyní zněly pochvalné hymny, marně líbaly vestálky jeho „božské ruce“ a Rubrie se při tom ukláněla tak, že její ryšavá hlava dosahovala k jeho hrudi. Nero nebyl spokojen a nedovedl toho utajiti. Byl také jat podivením a zároveň nepokojem, že Petronius zachovává mlčení. Nějaké to pochvalné slovo z jeho úst, které by přitom trefně vyzdvihlo přednosti písně, bylo by mu v této chvíli velikou útěchou. Konečně nemoha vydržeti, pokynul na něho, a když ten vešel na pódium, řekl:
„Mluv…“
Ale Petronius odvětil chladně:
„Mlčím, neboť nemohu nalézti slov. Překonal jsi sám sebe.“
„Tak se zdálo i mně – a přece ten lid…“
„Můžeš-li žádati od míšenců, aby se znali v poesii?“
„Nuže, všiml sis i ty, že mně nebylo poděkováno tak, jak jsem si zasloužil?“
„Protože sis vyvolil nevhodný okamžik.“
„Proč?“
„Protože mozky, začouzené zápachem krve, nemohou pozorně poslouchati.“
Nero zatínal pěsti a odvětil:
„Ach, ti křesťané! Spálili Řím a teď se dopouštějí bezpráví i na mně! Jaké tresty si ještě na ně vymyslím?“
Petronius zpozoroval, že se dal špatnou cestou a že jeho slova setkávají se s výsledkem opačným, nežli jakého zamýšlel dosáhnouti; chtěje tedy obrátiti mysl Caesarovu v jiný směr, naklonil se k němu a zašeptal:
„Tvá píseň je čarokrásná, ale učiním jen jednu poznámku: ve čtvrté řádce třetí sloky metrika trochu pokulhávala.“
A Nero se zapálil ruměncem studu, jako by byl dopaden při hanebném činu, i pohlédl se strachem a odpověděl rovněž tiše:
„Ty si všeho všimneš…! Vím…! Přepracuji to…! Ale nikdo jiný toho nepostřehl, není-liž pravda? Ty pak, pro smilování boží, nezmiňuj se nikomu… je-li ti – život milý…!“
Nato Petronius svraštil obočí a odpověděl jako v návalu omrzelosti a zošklivění:
„Můžeš mne, božský, odsouditi k smrti, překážím-li ti, ale nestraš mne jí, neboť bohové nejlépe vědí, bojím-li se jí!“
A takto mluvě, začal se dívati Caesarovi přímo do očí, ten pak za chvíli odvětil:
„Nehněvej se…! Víš, že tě mám rád…“
„To je zlé znamení!“ napadlo Petronia.
„Chtěl jsem vás dnes pozvati na hostinu,“ pokračoval Nero, „ale chci se raději zavříti a uhladiti onu prokletou řádku třetí sloky. Kromě tebe mohl si chyby všimnouti ještě Seneka a snad i Secundus Karinas, ale těch se zbavím ihned.“
To praviv, zavolal Seneku a prohlásil mu, že společně s Akratem a Secundem Karinem jej posílá do Itálie a do všech provincií pro peníze, které jim přikazuje vybrati z měst, venkova, z proslulých chrámů, zkrátka odevšad, kde jen možno bude je nalézti nebo vydříti. Seneka však, jenž pochopil, že mu je svěřována činnost lupiče, svatokrádce a zbojníka, bez obalu odmítl.
„Musím jeti na venkov, pane,“ řekl, „a čekati tam na smrt, protože jsem stár a mé nervy postonávají.“
Iberské nervy Senekovy, silnější nežli Chilonovy, snad nepostonávaly, ale jeho zdraví bylo vůbec podryto, neboť vypadal jako stín a hlava mu poslední dobou úplně zbělela.
Nero pohlédnuv na něho, také si pomyslil, že snad nebude dlouho čekati na jeho smrt, i odvětil: „Nechci tě nutiti k cestě, jsi-li nemocen, ale protože z lásky, jakou k tobě chovám, chci tě míti nablízku, nuže, místo abys odjel na venkov, zavřeš se do svého domu a neopustíš ho!“
Pak se dal do smíchu a řekl:
„Až pošlu Akrata a Karina samy, jako bych vlky poslal pro ovce. Koho bych měl ustanoviti nad nimi?“
„Ustanov mne, pane!“ řekl Domitius Afer.
„Nikoli! Nechci přivolati na Řím hněv Merkura, který by se zastyděl za vaše zlodějství! Potřebuji nějakého stoika, jako je Seneka nebo můj nový přítel – filosof Chilon.“
To řka začal se ohlížeti a ptal se:
„A co se stalo s Chilonem?“
Chilon, jenž přišel na čerstvém vzduchu k sobě, vrátil se do amfitheatru k Caesarově písni, přišoural se a řekl:
„Zde jsem, zářivý plode slunce a měsíce! Byl jsem nemocen, ale tvůj zpěv mne uzdravil.“
„Pošlu tě do Achaie,“ řekl Nero. „Ty jistě víš do groše, kolik je tam v každém chrámě!“
„Učiň tak, Die, a bohové ti složí takovou daň, jaké nikdy nikomu nesložili.“
„Učinil bych tak, ale nechci tě připraviti o podívanou na hry.“
„Baale…!“ řekl Chilon.
Augustiani však byli tomu rádi, že se Caesarova nálada zlepšila, začali se smáti a volati:
„Nikoli, pane! Nepřipravuj tohoto statečného Řeka o podívanou na hry!“
„Ale zbav mne, pane, podívané na tato kejhavá, kapitolská housata, jejichž mozky všecky dohromady by se nevešly ani do žaludové číšky,“ odvětil Chilon. „Píši právě, prvorozený synu Apollonův, řecký hymnus na tvoji počest, a proto chci stráviti několik dní ve chrámu Mus, abych je prosil o vnuknutí.“
„Ó, nikoli!“ zvolal Nero. „Rád by ses vykroutil od příštích her. Z toho nebude nic!“
„Přísahám ti, pane, že píši hymnus!“
„Nuže, napíšeš jej v noci! Pros Dianu o vnuknutí, to je přece Apollonova sestra!“
Chilon svěsil hlavu, zlostně se dívaje na přítomné, kteří se znovu začali smáti.
Caesar pak, obrátiv se k Senetiovi a Suiliovi Nerulinovi, řekl:
„Představte si, že z křesťanů, určených pro dnešek, byli jsme s to vypořádati se sotva s polovinou!“
Nato starý Aquilus Regulus, veliký znatel věcí, jež se týkaly amfitheatru, rozmýšlel se chvíli a pak se ozval:
„Ta podívaná, při které vystupují lidé _sine armis et sine arte_[^498], trvá skoro stejně dlouho, ale méně zajímá.“
„Rozkáži, aby jim dána byla zbraň,“ odpověděl Nero.
Ale pověrčivý Vestinus náhle se probral ze zamyšlení a ptal se tajemným hlasem:
„Všimli jste si, že křesťané umírajíce, něco vidí? Vzhlížejí k nebi a umírají, jako by netrpěli. Jsem jist, že cosi vidí…“
To praviv, zvedl oči k hořejšku amfitheatru, nad nímž noc již začala natahovati své hvězdami přeplněné „velarium“. Jiní však odpověděli smíchem a žertovnými poznámkami, co mohou křesťané viděti ve chvíli smrti. Zatím dal Caesar znamení otrokům držícím pochodně a opustil cirk, za ním pak vestálky, senátoři, úředníci a Augustiani.
Byla jasná, teplá noc. Před cirkem ještě se valily davy, zvědavé viděti odjezd Caesarův, ale nějak posupné a zamlklé. Tu a tam se ozval potlesk, ale hned zase utichl. Ze „spoliaria“[^499] bez ustání vyvážely skřípějící vozy zkrvavené mrtvoly křesťanů.
Petronius a Vinitius konali cestu zamlklí. Teprve blízko letohrádku ptal se Petronius:
„Přemýšlel jsi o tom, co jsem ti řekl?“
„Tak jest!“ odvětil Vinitius.
„Věříš, že i pro mne je to nyní záležitost svrchované důležitosti? Musím Lygii osvoboditi proti vůli Caesarově i Tigellinově! Toť zrovna jako boj, ve kterém jsem si umínil zvítěziti. Toť zrovna jako hra, ve které chci vyhráti, byť i na útraty vlastní kůže… Dnešní den mne ještě utvrdil v předsevzetí.“
„Odplatiž tobě Kristus!“
„Uvidíš!“
Takto rozmlouvajíce, stanuli přede dveřmi letohrádku a vystoupili z lektiky. V tom okamžiku přistoupila k nim jakási tmavá postava a ptala se:
„Je někdo z vás šlechetný Vinitius?“
„Ano,“ odvětil tribun. „Co chceš?“
„Jsem Nazarius, Miriamin syn, přicházím z věznice a přináším ti zprávu o Lygii.“
Vinitius položil mu ruku na rameno a při záři pochodní začal se mu dívati do očí, nemoha promluviti ani slova; Nazarius však uhodl otázku, zmírající mu na rtech, a odvětil:
„Je dosud živa. Posílá mne k tobě Ursus, pane, abych ti vyřídil, že se dívka v horečce modlí a opakuje tvé jméno.“
A Vinitius řekl:
„Sláva Kristu, jenž mi ji může vrátiti!“
Potom vzav Nazaria za ruku, zavedl jej do knihovny. Za chvíli však se dostavil i Petronius, aby slyšel jejich rozhovor.
„Nemoc jí uchránila hanby, protože katané se bojí nákazy,“ mluvil mladý hoch. „Ursus i Glaukos lékař bdí nad ní dnem i nocí.“
„Zůstaly strážní hlídky tytéž?“
„Ano, pane, a ona jest v jejich jizbě. Ti vězňové, kteří byli v dolejším vězení, pomřeli všichni horečkou nebo se zadusili puchem.“
„Kdo jsi?“ ptal se Petronius.
„Šlechetný Vinitius mne zná. Jsem syn vdovy, u které bydlila Lygie.“
„A křesťan?“
Hoch se zadíval tázavým zrakem na Vinitia, vida však, že se v tom okamžiku modlí, zvedl hlavu a řekl:
„Tak jest.“
„Jakým způsobem můžeš volně choditi do vězení?“
„Dal jsem se najmouti na odnášení mrtvých těl a učinil jsem tak zúmyslně, abych přispěl ku pomoci svým bratřím a přinášel jim zprávy z města.“
Petronius si začal pozorněji prohlížeti sličnou tvář chlapcovu, jeho modré oči a černé, bujné vlasy, načež se ptal:
„Z jaké země pocházíš, jinochu?“
„Jsem Galilejec, pane?“
„Přál by sis, aby Lygie byla vysvobozena?“
Hoch pozvedl oči k nebi.
„I kdybych měl sám potom zemřít!“
Vtom se Vinitius přestal modliti a pravil:
„Vyřiď hlídkám, aby ji položily do rakve, jako by již zemřela. Opatři pomocníky, kteří ji v noci vynesou společně s tebou. Blízko ‚Páchnoucích jam‘ naleznete lidi, kteří budou čekati s lektikou a jimž rakev odevzdáte. Hlídkám ode mne slib, že jim dám tolik zlata, kolik ho každý v plášti bude s to unésti.“
A když takto mluvil, jeho tvář ztratila obvyklou otupělost; probudil se v něm voják, jemuž naděje vrátila dřívější energii.
Nazarius zahořel radostí, a zvednuv ruce, zvolal:
„Kéž ji Kristus uzdraví, neboť bude vysvobozena!“
„Myslíš, že budou hlídky souhlasiti?“ ptal se Petronius.
„Ti, pane? Jen kdyby věděli, že je za to nepotká trest a muka!“
„Tak jest!“ řekl Vinitius. „Strážní hlídky chtěly souhlasiti i s jejím útěkem, tím spíše tedy ji nechají vynésti jako nebožku.“
„Je tam sice člověk,“ pravil Nazarius, „který zjišťuje žhavým železem, jsou-li těla, která vynášíme, mrtva. Ale ten bere třebas i několik sestercií za to, aby se železem nedotkl tváře nebožtíků. Za jeden aureus dotkne se rakve, ne těla.“
„Vyřiď mu, že dostane plnou ‚capsu‘ aureů,“ řekl Petronius. „Ale zdaž se ti podaří, aby sis opatřil spolehlivé pomocníky?“
„Podaří se mi opatřit takové, kteří by za peníze prodali vlastní ženy i děti.“
„Kde je najdeš?“
„V samé věznici nebo ve městě. Strážní hlídky, až budou jednou podplaceny, zavedou dovnitř, koho se jim zlíbí.“
„Pak tedy zavedeš jako zjednaného pomocníka mne!“ řekl Vinitius.
Ale Petronius začal jej varovati s plnou rozhodností, aby tak nečinil. Praetoriáni by jej mohli poznati třebas i přestrojeného, a všecko by mohlo býti zmařeno.
„Ani ve věznici, ani u ‚Páchnoucích jam’,“ pravil. „Je nutno, aby všichni, i Caesar i Tigellinus, byli přesvědčeni, že je nebožkou, jinak by okamžitě nařídili stíhání. Podezření můžeme uspati pouze tím způsobem, že až bude vyvezena do hor Albských nebo dále, na Sicílii, zůstaneme my v Římě. Teprve za týden či za dva ty onemocníš a obešleš si Neronova lékaře, který ti nařídí, abys jel do hor. Pak se shledáte a potom…“
Tu se na chvíli zamyslil, ale pak, mávnuv rukou, řekl:
„Potom snad nadejdou jiné doby.“
„Kéž se Kristus nad ní slituje!“ řekl Vinitius. „Ty hovoříš o Sicílii, kdežto ona je nemocna a může zemříti…“
„Zavezeme ji zatím někam blíže. Ji vyléčí sám vzduch, jen když ji vyprostíme z vězení. Nemáš na horách nějakého pachtýře, jemuž bys mohl důvěřovati?“
„Zajisté! Mám! Ano!“ rychle odpověděl Vinitius. „Je tam blíže Coriol[^500] na horách spolehlivý člověk, jenž mne nosíval na rukou, když jsem byl ještě dítětem a který mne má dosud rád.“
Petronius mu podal destičky.
„Napiš mu, aby se zítra sem dostavil. Vyšlu neprodleně rychlého posla.“
To praviv, zavolal představeného atria a vydal mu přiměřené rozkazy. Za několik okamžiků nato vydal se jízdný otrok před nocí na cestu do Coriol…
„Rád bych,“ řekl Vinitius, „aby ji Ursus provázel na cestě… Byl bych klidnější…“
„Pane,“ řekl Nazarius, „je to člověk nadlidské síly, který vyláme mříže a půjde za ní. Nad strmou, vysokou stěnou jest jedno okno, pod nímž hlídka nestojí. Přinesu Ursovi provaz a to ostatní si provede on sám.“
„U Herkula!“ řekl Petronius. „Ať se tamodtud dostane, jak se mu zlíbí, ale ne společně s ní a ne dva nebo tři dni po ní, protože by šli za ním a odkryli její útočiště. U Herkula, což chcete zničiti sebe i ji? Zakazuji vám, abyste se mu zmínili o Coriolách, jinak si myji ruce.“
Oba uznali oprávněnost jeho poznámky a odmlčeli se. Potom se Nazarius začal loučiti, slibuje, že přijde zítra na úsvitě.
Doufal, že se dorozumí se strážními hlídkami ještě této noci, ale dříve si chtěl odběhnouti k matce, která pro nejisté a hrozné doby neměla o něho ani na chvíli pokoje. Rozhodl se, že pomocníka nebude hledati ve městě, nýbrž že si najde a podplatí jednoho z těch, kteří společně s ním vynášeli mrtvoly z věznice.
Nicméně ještě před samým odchodem se zastavil, a vzav Vinitia stranou, začal mu šeptati:
„Pane, nezmíním se o našem záměru nikomu, ani matce ne, ale apoštol Petr slíbil, že k nám přijde z amfitheatru, a tomu řeknu vše.“
„Můžeš v tomto domě mluviti nahlas,“ odpověděl Vinitius. „Apoštol Petr byl v amfitheatru s lidmi Petroniovými. Ostatně, půjdu s tebou sám.“
A dal si podati otrocký plášť, načež vyšli.
Petronius si zhluboka oddychl.
„Přál jsem si,“ uvažoval, „aby Lygie zemřela horečkou, protože pro Vinitia by to bylo nejméně strašlivo. Ale teď jsem hotov obětovati Aeskulapovi za její uzdravení zlatou trojnožku… Ach, ty Ahenobarbe, chceš si uspořádati divadlo z bolesti milencovy! A ty, Augusto, záviděla jsi napřed krásu té dívce a teď bys ji pozřela třebas na syrovo proto, že zahynul tvůj Ruffius… A ty, Tigelline, chceš ji zničiti mně na zlost…! Pravím vám, že vaše oči jí nespatří v aréně, poněvadž buďto zemru vlastní smrtí, nebo vám ji vyrvu jako psům z tlamy… A vyrvu ji tak, že nebudete o tom věděti; a potom, kdykoli se na vás podívám, tolikrát si pomyslím: Hle, tupohlavci, na které vyzrál Petronius…!“
A maje radost sám ze sebe, přešel do triclinia, kde společně s Eunike lehl si k večeři. Lektor jim po tu dobu předčítal selanky z Theokrita[^501]. Venku přihnal vítr mraky směrem od Soracte a náhlá bouře porušila ticho klidné letní noci. Občas rozléhal se na sedmi pahorcích rachot hromu, oni pak, ležíce vedle sebe u stolu, naslouchali idylickému básníkovi, jenž ve zpěvním dórském nářečí opěval lásku pastýřů, a pak uklidněni chystali se ke sladkému odpočinku.
Ale ještě předtím vrátil se Vinitius. Petronius, dověděv se o jeho návratu, zašel k němu a ptal se:
„Nu, což…? Neuradili jste se o něčem novém? Odešel Nazarius již do věznice?“
„Ano,“ odpověděl mladý člověk, rozhrnuje si vlasy, promočené na dešti. „Nazarius odešel, aby se dorozuměl se strážními hlídkami, a já jsem se sešel s Petrem, jenž mi uložil, abych se modlil a věřil.“
„Pak je to dobře. Půjde-li všecko zdárně, bude možno osvoboditi Lygii příští noci…“
„Pachtýř s lidmi jistě tu bude na úsvitě.“
„To je krátká cesta. Odpočiň si nyní!“
Ale Vinitius poklekl ve svém cubiculu a začal se modliti.
O východu slunce přibyl od Coriol pachtýř Niger, přiváděje s sebou ve smyslu pokynu Vinitiova mezky, lektiku a čtyři spolehlivé lidi, vybrané z britanských otroků, a ty ostatně nechal z opatrnosti v hospodě na Subuře.
Vinitius, který bděl celou noc, vyšel mu vstříc, ten pak se zarazil při pohledu na mladého muže, a líbaje mu ruce i oči, řekl:
„Drahý, jsi snad nemocen nebo snad ti zármutek vysál krev z obličeje? Vždyť jsem tě sotva mohl poznat na první spatření!“
Vinitius jej odvedl do vnitřního sloupoví, zvaného xystus, a tam jej zasvětil do tajemství. Niger poslouchal s napjatou pozorností a na jeho suché, opálené tváři bylo zjevno veliké pohnutí, které se ani nesnažil opanovati.
„Ona je tedy křesťanka?“ zvolal.
A začal zkoumavě hleděti Vinitiovi do tváře; ten patrně uhodl, na co se ho táže venkovanův zrak, neboť odvětil:
„I já jsem křesťan…“
Tehdy se v očích Nigrových zaleskly slzy; chvíli mlčel, potom zdvihnuv ruce, pravil:
„Ó, díky tobě, Kriste, že Jsi sňal bělmo s očí mně na světě nejdražších!“
Pak objal hlavu Vinitiovu, a vzlykaje štěstím, začal líbati jeho čelo.
Za chvíli potom přišel Petronius, přiváděje s sebou Nazaria.
„Šťastné zprávy!“ řekl zdaleka.
Zprávy byly opravdu příznivé. Předně se zaručoval Glaukos lékař za život Lygie, jakkoliv měla touž vězeňskou horečku, na kterou v Tullianu i po jiných věznicích zmírala denně sta lidí. Co se týče strážních hlídek a člověka, jenž zjišťoval smrt žhavým železem, nebylo nejmenších obtíží. Pomocník Attys byl již rovněž získán.
„Udělali jsme otvory do rakve, aby nemocná mohla dýchati,“ řekl Nazarius. „Všecko nebezpečí je v tom, aby nezasténala nebo neozvala se v okamžiku, až budeme přecházet mimo praetoriány. Ale je velice zesláblá a od rána leží se zavřenýma očima. Ostatně, Glaukos jí dá uspávací nápoj, který sám připraví z léků, jež přinesu z města. Víko rakve nebude přibito. Nadzvedněte je lehce a vezmete nemocnou do lektiky, my pak položíme do rakve podlouhlý pytel s pískem; ten mějte připraven.“
Vinitius, naslouchaje těmto slovům, byl bled jako stěna, ale naslouchal s pozorností tak napjatou, že se zdálo, jako by napřed uhadoval, co Nazarius má říci.
„Nebudou z věznice vynášena nějaká jiná těla?“ tázal se Petronius.
„Za dnešní noci zemřelo asi dvacet lidí a do večera jich zemře ještě několik,“ odvětil hoch, „musíme však jít s celým průvodem, ale budeme se loudat, abychom zůstali pozadu. Na první zatáčce můj společník naschvál začne kulhat. Tím způsobem zůstaneme hodně za ostatními. Vy na nás čekejte u malého chrámečku Libitinina. Kéž by Bůh dopřál noci co nejtmavší!“
„Bůh dopřeje!“ řekl Niger. „Včera byl jasný večer, ale pak se náhle strhla bouře. Dnes je nebe opět jasné, ale od rána je parno. Každé noci budou nyní deště a bouře.“
„Půjdete bez světel?“ ptal se Vinitius.
„Jen vpředu jsou neseny pochodně. Buďte, děj se co děj, u chrámu Libitinina, až se setmí, ačkoliv obyčejně vynášíme mrtvoly teprve před samou půlnocí.“
Odmlčeli se i bylo slyšeti jen zrychlený dech Vinitiův.
Petronius se k němu obrátil.
„Řekl jsem včera,“ pravil, „že by bylo nejlépe, kdybychom oba zůstali doma. Nyní však vidím, že mně samému nebude možno poseděti na místě… Ostatně, kdyby šlo o únos, bylo by nutno míti se tím více na pozoru, ale když budou Lygii vynášeti jako nebožku, zdá se, že nejmenší podezření nevznikne v ničí hlavě.“
„Ano, ano!“ odpověděl Vinitius. „Musím tam býti! Sám ji vyjmu z rakve…“
„Až jednou bude v mém domě u Coriol, jsem za ni odpověděn já!“ řekl Niger.
Tím byla rozmluva skončena. Niger se odebral do hospody ke svým lidem. Nazarius, vzav pod tuniku měšec se zlatem, vrátil se do věznice. Pro Vinitia začal den plný nepokoje, horečky, úzkostí a očekávání.
„Podnik se musí zdařiti, protože je dobře promyšlen,“ řekl mu Petronius. „Lépe nebylo ani možno všecko to sestaviti. Musíš se tvářiti zarmoucen a choditi ve tmavé tóze. Ale cirku neopouštěj! Ať každý tě vidí…! Je to všecko tak promyšleno, že nemůže býti zklamání. Ale jsi sobě plně jist svým pachtýřem?“
„Je to křesťan,“ odvětil Vinitius.
Petronius pohlédl na něho s údivem, potom jal se krčiti rameny a mluviti jako sám k sobě:
„U Polluxe, jak se to šíří! A jak se to drží lidských duší…! Pod takovou hrůzou by se lidé rázem zřekli všech bohů římských, řeckých i egyptských. Je to však podivno…! U Polluxe…! Kdybych věřil, že na světě ještě něco závisí na našich bozích, slíbil bych teď každému po šesti bílých býcích a kapitolskému Jovišovi dvanáct… Ale ani ty nešetři sliby svému Kristovi…!“
„Oddal jsem Mu duši!“ odvětil Vinitius.
A rozešli se. Petronius vrátil se do cubicula. Vinitius odešel zpovzdálí se dívat na věznici, tamodtud pak se odebral až na svah vatikánského pahorku, do oné chaty fossorovy, ve které z rukou apoštolových dostalo se mu křtu. Zdálo se mu, že v této chatě vyslyší jej Kristus spíše než kdekoli jinde; naleznuv ji tedy a vrhnuv se k zemi, napjal všecky své síly zbolestnělé duše v modlitbě za slitování a pohroužil se do ní tak, že zapomněl, kde jest a co se s ním děje.
Odpoledne probudil ho již hlahol trub, přicházející směrem od Neronova cirku. Tehdy vyšel z chaty a začal se kolem rozhlížeti, maje oči osvěženy snem. Venku bylo vedro a ticho, rušené jen občas zvukem kovu, zato však bez ustání cvrčením polních koníků. Vzduch se stal parným; nebe nad městem bylo ještě modré, ale směrem k sabinským horám kupila se nízko na pokraji obzoru tmavá mračna.
Vinitius vrátil se domů. V atriu čekal na něho Petronius.
„Byl jsem na Palatině,“ řekl. „Ukázal jsem se tam zúmyslně a zasedl potom dokonce i ke kostkám. U Anitia je večer hostina; ohlásil jsem, že přijdeme, ale až po půlnoci, protože dříve se musím vyspati. Také se dostavím a bylo by dobře, kdybys přišel i ty.“
„Což nemohou dojíti nějaké zprávy od Nigra nebo Nazaria?“ ptal se Vinitius.
„Ne. Spatříme se s nimi až o půlnoci. Všiml sis, že se schyluje k bouři?“
„Ano.“
„Zítra má býti představení s ukřižovanými křesťany. Déšť to však snad překazí.“
To řka, přistoupil, a dotknuv se ramene Vinitiova, řekl:
„Jí však nespatříš na kříži, nýbrž v Coriolách. U Kastora, nedal bych okamžiku, ve kterém ji vysvobodíme, za všecky gemy v Římě! Večer se již blíží…“
Večer se opravdu blížil a tma začala zahalovati město dříve nežli obyčejně, a to od mraků, které zahalily celý obzor. S příchodem večera snesl se vydatný déšť, který vypařuje se na kamenné dlažbě, rozpálené denním žárem, naplnil ulice města mlhou. Pak se střídavě hned všecko tišilo, hned zase dostavovaly se krátké lijáky.
„Pospěšme si!“ řekl konečně Vinitius. „Pro bouři mohou býti z věznice vyvezena těla dříve.“
„Je čas!“ odpověděl Petronius.
A vzavše si gallské pláště s kápěmi, vyšli dvířkami od zahrady na ulici. Petronius také se ozbrojil krátkým římským mečem, zvaným „sica“, který vždycky brával s sebou na noční výpravy.
Město bylo prázdné, že byla bouřka. Občas protrhly blesky mračna, oslnivým světlem ozařujíce zdi nově vystavěných nebo teprve stavěných domů a mokré kamenné desky, jimiž byly ulice vydlážděny. Při takovém světle spatřili konečně po dosti dlouhé cestě kopec, na němž stál malý chrámeček Libitinin, a pod kopcem skupinu skládající se z mezků a koní.
„Nigre!“ tiše zvolal Vinitius.
„Tu jsem, pane!“ ozval se hlas za deště.
„Je všecko připraveno?“
„Tak jest, drahý! Jak se setmělo, byli jsme na místě. Ale schovej se do nějakého úkrytu, jinak zmoknete na nit. Jaká to bouře! Myslím, že přijde krupobití.“
Obava Nigrova se opravdu splnila, poněvadž zakrátko se začaly sypati kroupy, z počátku drobné, potom stále větší a hustší… Vzduch se ihned ochladil.
Oni pak, stojíce v úkrytu, chráněni proti větru a ledovým kroupám, rozmlouvali sníženými hlasy.
„I kdyby nás někdo viděl,“ řekl Niger, „nebude jat žádným podezřením, protože vypadáme jako lidé, kteří chtějí přečkat bouři. Ale bojím se, aby vynášení mrtvol nebylo odloženo na zítřek.“
„Krupobití nebude dlouho trvati,“ řekl Petronius. „Musíme čekati, třebas i do úsvitu.“
Čekali tedy, naslouchajíce, nedoletí-li k nim ohlas průvodu. Krupobití přešlo opravdu, ale hned nato začal šuměti liják. Chvílemi se strhoval vítr a přinášel směrem od Páchnoucích jam hrozný puch rozkládajících se těl, která byla zahrabávána mělce a nedbale.
Vtom Niger řekl:
„Vidím v mlze světélko… jedno, dvě, tři… To jsou pochodně!“
A obrátil se k lidem:
„Dbejte, aby mezci nehýkali…!“
„Přicházejí!“ řekl Petronius.
Světla se vskutku stávala jasnější. Za chvíli bylo již možno rozeznati plameny pochodní, plápolající na větru.
Niger se začal křižovati a modliti. Zatím posupný průvod přitáhl blíže a konečně, dostihnuv chrámečku Libitinina, se zastavil. Petronius, Vinitius a Niger přitiskli se mlčky ke kopci, nerozumějíce, co to znamená. Ale ti tam se zastavili pouze proto, aby si obvázali tváře a ústa šátky na ochranu proti dusivému zápachu, který u samých „puticulí“ nebylo prostě možno snésti, načež vyzvedli nosítka a ubírali se dále.
Jediná jen rakev zastavila se proti chrámečku. Vinitius k ní chvátal, za ním Petronius, Niger a dva britanští otroci s lektikou.
Nežli však doběhli, dal se ve tmě slyšeti Nazariův hlas, plný bolesti:
„Pane, dívka byla přenesena i s Ursem do esquilinského vězení… Neseme jiné tělo! Dívka byla odvlečena před půlnocí…!!!“
Petronius, vrátiv se domů, byl zamračen jako bouře a ani se nepokoušel, aby Vinitia těšil. Chápal, že na vysvobození Lygie z esquilinských sklepení nelze ani ve snách pomysliti. Tušil, že dívka byla pravděpodobně přenesena z Tulliana proto, aby nezemřela horečkou a neunikla amfitheatru, jenž jí byl usouzen. Ale právě to bylo důkazem, že byla pod dozorem a hlídána přísněji nežli ostatní. Petroniovi bylo do hloubi duše líto i jí i Vinitia, ale kromě toho byl zmítán ještě myšlenkou, že se mu po prvé v životě cosi nezdařilo a že po prvé v boji byl přemožen.
„Štěstěna mne opouští, jak se zdá!“ mluvil k sobě. „Ale bohové se mýlí, domnívají-li se, že budu souhlasiti s životem, jako na příklad jest tohoto.“
Tu pohlédl na Vinitia, který se rovněž díval na něho rozšířenými zraky.
„Co je ti? Máš horečku?“ řekl Petronius.
Ten pak odpověděl jakýmsi zvláštním, zlomeným a pomalým hlasem, jaký mívá nemocné dítě:
„A já věřím, že mi ji může vrátiti On!“
Nad městem tichl poslední rachot bouře.
[^498]: Beze zbraní a bez umění.
[^499]: Svlékárna ve starořímském amfiteátru, kde býval padlým gladiátorům svlékán šat a zbraň a kde byli, pokud ještě žili, dobíjeni.
[^500]: Starořímské město ve střední Itálii.
[^501]: Starořecký básník ze 3. st. př. n. l.
aa5b8190ab9a21e065fd8bdcd90dc2557b9b1660 | 4,763 | md | Markdown | articles/active-directory/develop/vs-active-directory-error.md | gencomp/azure-docs.de-de | ea9dc9bb0bf0a7673d4f83d8a8187d55087b3bce | ["CC-BY-4.0", "MIT"] | null | null | null | articles/active-directory/develop/vs-active-directory-error.md | gencomp/azure-docs.de-de | ea9dc9bb0bf0a7673d4f83d8a8187d55087b3bce | ["CC-BY-4.0", "MIT"] | null | null | null | articles/active-directory/develop/vs-active-directory-error.md | gencomp/azure-docs.de-de | ea9dc9bb0bf0a7673d4f83d8a8187d55087b3bce | ["CC-BY-4.0", "MIT"] | null | null | null
title: Diagnostizieren von Fehlern mit dem verbundenen Dienst für Azure AD (Visual Studio)
description: Der verbundene Dienst für Active Directory hat einen inkompatiblen Authentifizierungstyp erkannt.
author: ghogen
manager: jillfra
ms.prod: visual-studio-windows
ms.technology: vs-azure
ms.workload: azure-vs
ms.topic: how-to
ms.date: 03/12/2018
ms.author: ghogen
ms.custom: aaddev, vs-azure
ms.openlocfilehash: 5cefc59a6072a945be493487c09b1cc7f9827475
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 07/02/2020
ms.locfileid: "85830569"
---
# <a name="diagnosing-errors-with-the-azure-active-directory-connected-service"></a>Diagnosing errors with the Azure Active Directory connected service
While detecting previous authentication code, the Azure Active Directory connected service detected an incompatible authentication type.
To correctly detect previous authentication code in a project, the project must be rebuilt. If you see this error and there is no previous authentication code in your project, rebuild your project and try again.
## <a name="project-types"></a>Project types
The connected service checks the type of project you are developing in order to inject the correct authentication logic into the project. If any controller in the project derives from `ApiController`, the project is treated as a WebAPI project. If all controllers in the project derive from `MVC.Controller`, the project is treated as an MVC project. The connected service does not support any other project type.
## <a name="compatible-authentication-code"></a>Compatible authentication code
The connected service also checks for authentication settings that were previously configured and are compatible with the service. If all settings are present, this is considered a re-entrant case, and the connected service opens and displays the settings. If only some of the settings are present, it is considered an error case.
In an MVC project, the connected service checks for the following settings, which result from previous use of the service:
```xml
<add key="ida:ClientId" value="" />
<add key="ida:Tenant" value="" />
<add key="ida:AADInstance" value="" />
<add key="ida:PostLogoutRedirectUri" value="" />
```
The connected service also checks for the following settings in a Web API project, which result from previous use of the service:
```xml
<add key="ida:ClientId" value="" />
<add key="ida:Tenant" value="" />
<add key="ida:Audience" value="" />
```
## <a name="incompatible-authentication-code"></a>Incompatible authentication code
Finally, the connected service attempts to detect versions of authentication code that were configured with earlier versions of Visual Studio. If you receive this error, an incompatible authentication type is present in your project. The connected service detects the following authentication types from earlier versions of Visual Studio:
* Windows authentication
* Individual user accounts
* Organizational accounts
To detect Windows authentication in an MVC project, the connected service looks for the `authentication` element in your `web.config` file.
```xml
<configuration>
<system.web>
<authentication mode="Windows" />
</system.web>
</configuration>
```
To detect Windows authentication in a Web API project, the connected service looks for the `IISExpressWindowsAuthentication` element in your project's `.csproj` file:
```xml
<Project>
<PropertyGroup>
<IISExpressWindowsAuthentication>enabled</IISExpressWindowsAuthentication>
</PropertyGroup>
</Project>
```
To detect Individual User Accounts authentication, the connected service looks for the package element in the `packages.config` file.
```xml
<packages>
<package id="Microsoft.AspNet.Identity.EntityFramework" version="2.1.0" targetFramework="net45" />
</packages>
```
To detect the older form of organizational account authentication, the connected service looks for the following element in the `web.config` file:
```xml
<configuration>
<appSettings>
<add key="ida:Realm" value="***" />
</appSettings>
</configuration>
```
You can change the authentication type by removing the incompatible authentication type and then trying to add the connected service again.
For more information, see [Authentication scenarios for Azure AD](authentication-scenarios.md).
| 47.63 | 412 | 0.799496 | deu_Latn | 0.988346 |
aa5c07f72fb042bebb726fccef651f52b4fff17f | 786 | md | Markdown | out/jira/docs/JqlQueryFieldEntityProperty.md | getkloudi/integration-wrapper-generator | 11c525d7fc1a8b26f5e8bab3b0b64e949c6c6dc1 | [
"Apache-2.0"
] | null | null | null | out/jira/docs/JqlQueryFieldEntityProperty.md | getkloudi/integration-wrapper-generator | 11c525d7fc1a8b26f5e8bab3b0b64e949c6c6dc1 | [
"Apache-2.0"
] | null | null | null | out/jira/docs/JqlQueryFieldEntityProperty.md | getkloudi/integration-wrapper-generator | 11c525d7fc1a8b26f5e8bab3b0b64e949c6c6dc1 | [
"Apache-2.0"
] | null | null | null | # Jira.JqlQueryFieldEntityProperty
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**entity** | **String** | The object on which the property is set. |
**key** | **String** | The key of the property. |
**path** | **String** | The path in the property value to query. |
**type** | **String** | The type of the property value extraction. Not available if the extraction for the property is not registered on the instance with the [Entity property](https://developer.atlassian.com/cloud/jira/platform/modules/entity-property/) module. | [optional]
## Enum: TypeEnum
* `number` (value: `"number"`)
* `string` (value: `"string"`)
* `text` (value: `"text"`)
* `date` (value: `"date"`)
* `user` (value: `"user"`)
| 26.2 | 276 | 0.60687 | eng_Latn | 0.856892 |
aa5c39a83b44b7bc94e681946f14f4a627b87991 | 1,482 | md | Markdown | README.md | vompressor/vproto | f42f019ff831a7b49e4b9da22c637f24ab9ba629 | [
"MIT"
] | null | null | null | README.md | vompressor/vproto | f42f019ff831a7b49e4b9da22c637f24ab9ba629 | [
"MIT"
] | null | null | null | README.md | vompressor/vproto | f42f019ff831a7b49e4b9da22c637f24ab9ba629 | [
"MIT"
] | null | null | null | this package forked by go_sconn/protocol
# protocol
```
type ProtocolHeader interface {
GetBodyLen() int
SetBodyLen(int)
}
```
## struct definition
Structures must consist only of the following types:
- `uint8`
- `uint16`
- `uint32`
- `uint64`
- `bool`
- `[fixed]byte`
## GetBodyLen() int
It should be implemented to return the length of the protocol body stored in the header struct.
## SetBodyLen(int)
It should be implemented to set the body-length field in the header struct to the length of the protocol body.
## implementation example
```
type BasicProtocol struct {
Type uint16
Method uint16
Seq uint32
BodyLen uint32
}
func (bp *BasicProtocol) GetBodyLen() int {
return int(bp.BodyLen)
}
func (bp *BasicProtocol) SetBodyLen(l int) {
bp.BodyLen = uint32(l)
}
```
## protocol structure
```
+- binary.Size(header) -+----------- len(msg) -----------+
| header | msg |
+-----------------------+--------------------------------+
```
## EncodeProtocolByte(head ProtocolHeader, msg []byte) ([]byte, error)
Returns the binary-encoded header joined with `msg`.
## DecodeProtocolByte(head ProtocolHeader, msg []byte) ([]byte, error)
## DecodeHeader(head ProtocolHeader, headerByte []byte) error
## ReadProtocol(reader io.Reader, head ProtocolHeader) ([]byte, error)
## WriteProtocol(writer io.Writer, head ProtocolHeader, msg []byte) error
| 24.295082 | 96 | 0.607962 | eng_Latn | 0.755245 |
aa5c3b11353dcdde7efcb38b7d4981f2afa552da | 5,496 | md | Markdown | help/implementation-playbook/development/platform-tools.md | misuadobe/commerce-operations.en | 831ee65fb9d543a2a980ef4ea2a2449e4e16e7af | [
"MIT"
] | null | null | null | help/implementation-playbook/development/platform-tools.md | misuadobe/commerce-operations.en | 831ee65fb9d543a2a980ef4ea2a2449e4e16e7af | [
"MIT"
] | null | null | null | help/implementation-playbook/development/platform-tools.md | misuadobe/commerce-operations.en | 831ee65fb9d543a2a980ef4ea2a2449e4e16e7af | [
"MIT"
] | null | null | null | ---
title: Platform Tools
description: Choose recommended platform tools for your Adobe Commerce implementation.
exl-id: 3fc164f9-a0fc-46e7-a54e-08ce101ccae7
---
# Platform tools
There is no shortage of aspects that must be well thought through and rigorously tested to keep an ecommerce site running without interference. Not only must you identify the right solutions to tackle every aspect of the site—from data storage and programming to caching and security—but you need the right process to ensure the delivery of a platform that both runs smoothly and can be built and optimized efficiently.
This section offers not only a look at the tools, solutions, processes, and methodologies that have been tested and perfected over a number of Adobe Commerce implementations, but also our recommendations for which solutions best fit specific business needs and objectives.
The following table includes solutions that we recommend and can be used within Adobe Commerce to drive performance on the platform:
| Purpose | Tool |
|------------------------------------------|-------------------------|
| Database | MySQL, MariaDB, Percona |
| Programming language | PHP, JS, HTML, LESS CSS |
| Integrated development environment (IDE) | Eclipse, PHPStorm |
| Web server | Nginx, Apache |
| Caching services | Redis, Varnish |
| Search services | Elasticsearch |
| Message queue services | RabbitMQ |
| Security scan tool | SonarQube, ZAP |
## Database
There are three different tools that we use depending on the needs of the brand. MySQL is a great baseline solution as the Adobe Commerce database if you don’t expect your store to handle extreme loads.
MariaDB is more community-focused and works better for users who care more about features than pure performance. MariaDB supports a large array of database engines, disk encryption, complex horizontal interconnectivity, and scaling features, which could be interesting for large Adobe Commerce stores.
Percona is a fork of MySQL that centers around performance and peak load handling. Choose MariaDB if you need more quality of life and DevOps features. Go for Percona if your goal is to gain high-load performance in large-scale datasets.
## Programming language
Adobe Commerce is a PHP-based application and the newest releases are always compatible with the latest stable PHP version (for example, Adobe Commerce version 2.4 recommends using PHP 7.4). To get more security and performance, there are several factors to account for when configuring PHP to get maximum speed and efficiency on request processing. The Adobe Commerce web storefront is built with HTML, JavaScript, and the LESS CSS pre-processor.
## Web servers
Adobe Commerce fully supports the Nginx and Apache web servers. Adobe Commerce provides sample recommended configuration files for both:
- **Nginx**—`<magento_home>/nginx.conf.sample`
- **Apache**—`<magento_home>/.htaccess.sample`
The Nginx sample contains settings for better performance and is designed so that little reconfiguration is required.
## Caching services
Adobe Commerce provides numerous options to store your cache and session data, including Redis, Memcache, filesystem, and database. For a setup with multiple web nodes, Redis is the best option.
We highly recommend using Varnish as the full-page cache server for your store. Adobe Commerce distributes a sample configuration file for Varnish that contains all recommended settings for performance.
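
As a sketch, a Redis cache backend is typically wired up in `app/etc/env.php` with a fragment like the following (the host, port, and database values are placeholder assumptions; adjust them for your environment):

```php
// app/etc/env.php (illustrative fragment)
'cache' => [
    'frontend' => [
        'default' => [
            'backend' => 'Cm_Cache_Backend_Redis',
            'backend_options' => [
                'server' => '127.0.0.1', // Redis host (assumed)
                'port' => '6379',
                'database' => '0'
            ]
        ]
    ]
],
```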
## Search services
For Adobe Commerce version 2.4 and later, all installations must be configured to use Elasticsearch as the catalog search solution. Elasticsearch provides quick and advanced searches on products in the catalog. Elasticsearch is optional for releases prior to 2.4, but it’s recommended.
## Message queue services
Message queues provide an asynchronous communication mechanism in which the sender and the receiver of a message do not contact each other. RabbitMQ is an open-source message broker that offers a reliable, highly available, scalable, and portable messaging system.
## Security tools
The [Adobe Commerce Security Scan Tool](https://docs.magento.com/user-guide/magento/security-scan.html) enables you to regularly monitor your store websites and receive updates for known security risks, malware, and out-of-date software. Typically, you start using this tool when you begin user-acceptance testing (UAT). Besides the Adobe Commerce Security Scan tool, which is free and available for all implementations and versions of Adobe Commerce, there are other choices that can be used during the CI/CD process and before each release.
SonarQube is an open-source quality management platform, designed to analyze and measure your code’s technical quality. SonarQube not only provides a complete report of code bugs, syntax errors, and vulnerabilities, but also offers suggestions and examples for fixing your code. SonarQube is perfect to use in a CI/CD environment as a tool capable of analyzing the code before it’s deployed.
Zed Attack Proxy (ZAP) is a free security testing tool used by thousands of pen-testers around the globe. ZAP is developed by OWASP and is one of the most preferred tools for manual security testing.
| 82.029851 | 542 | 0.753821 | eng_Latn | 0.998211 |
aa5c4d0654a8928216584b959a7f0e7b1be50dc0 | 132 | md | Markdown | examples/sqoop-parquet-hdfs-impala/README.md | sada3390/pipewrench | 520b8abfa8f92ad18b65014cf6fd372885bb761b | [
"Apache-2.0"
] | 26 | 2017-10-06T22:36:16.000Z | 2022-02-02T13:29:24.000Z | examples/sqoop-parquet-hdfs-impala/README.md | sada3390/pipewrench | 520b8abfa8f92ad18b65014cf6fd372885bb761b | [
"Apache-2.0"
] | 42 | 2017-10-09T18:40:13.000Z | 2021-11-18T22:58:08.000Z | examples/sqoop-parquet-hdfs-impala/README.md | sada3390/pipewrench | 520b8abfa8f92ad18b65014cf6fd372885bb761b | [
"Apache-2.0"
] | 40 | 2017-10-11T18:50:39.000Z | 2022-02-15T08:49:33.000Z | Examples have moved [here](https://github.com/Cargill/pipewrench/tree/afoerster-patch-1/integration-tests/sqoop-parquet-hdfs-impala) | 132 | 132 | 0.825758 | eng_Latn | 0.268328 |
aa5cac348fa7a2db92649daf7118f2411eb852b8 | 1,881 | md | Markdown | README.md | jwkellyiii/redux-saga-rest | 2aa5de724d41261f1df424691d729fbd5af2483a | [
"MIT"
] | null | null | null | README.md | jwkellyiii/redux-saga-rest | 2aa5de724d41261f1df424691d729fbd5af2483a | [
"MIT"
] | null | null | null | README.md | jwkellyiii/redux-saga-rest | 2aa5de724d41261f1df424691d729fbd5af2483a | [
"MIT"
] | null | null | null | # redux-saga-rest
[](https://www.npmjs.com/package/redux-saga-rest)
`redux-saga-rest` is a thin wrapper around the Fetch API that integrates with [redux-saga](https://github.com/yelouafi/redux-saga) and supports request/response middleware.
## Installation
```sh
# dependencies
yarn add redux redux-saga isomorphic-fetch
yarn add redux-saga-rest
```
## Usage Example
#### `api.js`
```javascript
import { put, select } from 'redux-saga/effects';
import { API } from 'redux-saga-rest';
import * as selectors from './selectors';
import * as actions from './actions';
const authMiddleware = () => function* (req, next) {
// request middleware
const user = yield select(selectors.user);
const headers = req.headers || new Headers();
headers.set('Authorization', `Bearer ${user.token}`);
// retrieve the response
const res = yield next(new Request(req, { headers }));
// response middleware
if (res.status === 401) {
yield put(actions.logout());
}
// return the response
return res;
};
export const auth = new API('/api/')
.use(authMiddleware());
```
**TODO:** Describe the middleware application order.
#### `sagas.js`
```javascript
import { takeEvery, put } from 'redux-saga/effects';
import * as constants from './constants';
import * as actions from './actions';
import { auth } from './api';
function* watchUpdateProfile() {
yield takeEvery(constants.UPDATE_PROFILE, function* (action) {
const res = yield auth.patch('/profile/', action.payload);
if (res.ok) {
yield put(actions.updateProfileSuccess());
} else {
yield put(actions.updateProfileFailure());
}
});
}
export default function* () {
yield [
watchUpdateProfile(),
];
};
```
| 23.810127 | 172 | 0.647528 | eng_Latn | 0.588239 |
aa5cd32eef656167f33db7ad683e7328a96d273f | 6,448 | md | Markdown | book/en-us/01-intro.md | lishu1125/modern-cpp-tutorial | 8fb2d9b3396c10fdd11d7607dc21a91a98856124 | [
"MIT"
] | 1 | 2020-09-01T07:07:39.000Z | 2020-09-01T07:07:39.000Z | book/en-us/01-intro.md | vasanktt/modern-cpp-tutorial | 518c6e97d71bb4f4f2e38177303c699cad89c405 | [
"MIT"
] | 1 | 2020-08-25T03:01:38.000Z | 2020-08-25T03:01:38.000Z | book/en-us/01-intro.md | vasanktt/modern-cpp-tutorial | 518c6e97d71bb4f4f2e38177303c699cad89c405 | [
"MIT"
] | null | null | null | ---
title: "Chapter 01: Towards Modern C++"
type: book-en-us
order: 1
---
# Chapter 01: Towards Modern C++
[TOC]
**Compilation Environment**: This book uses `clang++` as its only compiler,
and always uses the `-std=c++2a` compilation flag.
```bash
> clang++ -v
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.6.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
## 1.1 Deprecated Features
Before learning modern C++, let's take a look at the main features that have been deprecated since C++11:
> **Note**: Deprecated features are not completely unusable; deprecation is only intended to imply that such features will disappear from future standards and should be avoided. Deprecated features remain part of the standard library, and most of them are actually "permanently" reserved for compatibility reasons.
- **String literal constants are no longer allowed to be assigned to a `char *`. If you need to initialize a `char *` with a string literal constant, you should use `const char *` or `auto`.**
```cpp
char *str = "hello world!"; // A deprecation warning will appear
```
- **C++98 exception description, `unexpected_handler`, `set_unexpected()` and other related features are deprecated and should use `noexcept`.**
- **`auto_ptr` is deprecated and `unique_ptr` should be used.**
- **`register` keyword is deprecated and can be used but no longer has any practical meaning.**
- **The `++` operation of the `bool` type is deprecated.**
- **If a class has a destructor, the implicit generation of its copy constructor and copy assignment operator is deprecated.**
- **C language style type conversion is deprecated (ie using `(convert_type)`) before variables, and `static_cast`, `reinterpret_cast`, `const_cast` should be used for type conversion.**
- **In particular, some of the previously usable C standard library headers are deprecated in the latest C++17 standard, such as `<ccomplex>`, `<cstdalign>`, `<cstdbool>`, and `<ctgmath>`.**
- ... and many more
There are also other deprecated features, such as parameter binding (C++11 provides `std::bind` and `std::function`), `export`, and so on. **If you have never used or heard of the features mentioned above, please don't try to understand them. You should move closer to the new standard and learn the new features directly.** After all, technology moves forward.
## 1.2 Compatibilities with C
For some force majeure and historical reasons, we have had to use some C code (even old C code) in C++, for example, Linux system calls. Before the advent of modern C++, most people talked about "what is the difference between C and C++". Generally speaking, apart from pointing to the object-oriented class features and the template features of generic programming, there was no other opinion, or even a direct answer; "they are almost the same" was also a common reply. The Venn diagram in Figure 1.2 roughly answers the question of compatibility between C and C++.

From now on, you should have the idea in your mind that "C++ is **not** a superset of C" (and it was not from the beginning; the [further readings](#further-readings) below give references on the differences between C++98 and C99). When writing C++, you should also avoid program styles such as `void*` whenever possible. When you have to use C, you should pay attention to the use of `extern "C"`, separate the C language code from the C++ code, and then link them in a unified way, for instance:
```cpp
// foo.h
#ifdef __cplusplus
extern "C" {
#endif
int add(int x, int y);
#ifdef __cplusplus
}
#endif
// foo.c
int add(int x, int y) {
return x+y;
}
// 1.1.cpp
#include "foo.h"
#include <iostream>
#include <functional>
int main() {
[out = std::ref(std::cout << "Result from C code: " << add(1, 2))](){
out.get() << ".\n";
}();
return 0;
}
```
You should first compile the C code with `gcc`:
```bash
gcc -c foo.c
```
This compiles and outputs the `foo.o` object file; then link the C++ code with the `.o` file using `clang++` (or compile both to `.o` files and then link them together):
```bash
clang++ 1.1.cpp foo.o -std=c++2a -o 1.1
```
Of course, you can use `Makefile` to compile the above code:
```makefile
C = gcc
CXX = clang++
SOURCE_C = foo.c
OBJECTS_C = foo.o
SOURCE_CXX = 1.1.cpp
TARGET = 1.1
LDFLAGS_COMMON = -std=c++2a
all:
$(C) -c $(SOURCE_C)
$(CXX) $(SOURCE_CXX) $(OBJECTS_C) $(LDFLAGS_COMMON) -o $(TARGET)
clean:
rm -rf *.o $(TARGET)
```
> Note: Indentation in `Makefile` is a tab instead of a space character. If you copy this code directly into your editor, the tab may be automatically replaced. Please ensure the indentation in the `Makefile`. It is done by tabs.
>
> If you don't know the use of `Makefile`, it doesn't matter. In this tutorial, you won't build code that is written too complicated. You can also read this book by simply using `clang++ -std=c++2a` on the command line.
If you are new to modern C++, you probably still don't understand the following small piece of code above, namely:
```cpp
[out = std::ref(std::cout << "Result from C code: " << add(1, 2))](){
out.get() << ".\n";
}();
```
Don't worry at the moment, we will come to meet them in our later chapters.
[Table of Content](./toc.md) | [Previous Chapter](./00-preface.md) | [Next Chapter: Language Usability Enhancements](./02-usability.md)
## Further Readings
- [A Tour of C++ (2nd Edition) Bjarne Stroustrup](https://www.amazon.com/dp/0134997832/ref=cm_sw_em_r_mt_dp_U_GogjDbHE2H53B)
- [History of C++](http://en.cppreference.com/w/cpp/language/history)
- [C++ compiler support](https://en.cppreference.com/w/cpp/compiler_support)
- [Incompatibilities Between ISO C and ISO C++](http://david.tribble.com/text/cdiffs.htm#C99-vs-CPP98)
## Licenses
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a><br />This work was written by [Ou Changkun](https://changkun.de) and licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>. The code of this repository is open sourced under the [MIT license](../../LICENSE). | 43.275168 | 533 | 0.715105 | eng_Latn | 0.993986 |
aa5cde366aefb52169564833a450156c73d76b9a | 2,114 | md | Markdown | README.md | gabrielbmoro/MovieDB-Android | ef8438e0916701804d050ccb23e64dfe9b812441 | [
"MIT"
] | 8 | 2021-05-19T22:32:25.000Z | 2021-11-18T14:05:19.000Z | README.md | gabrielbmoro/MovieDB-Android | ef8438e0916701804d050ccb23e64dfe9b812441 | [
"MIT"
] | 9 | 2021-05-22T22:30:48.000Z | 2021-08-11T12:41:05.000Z | README.md | gabrielbmoro/MovieDB-Android | ef8438e0916701804d050ccb23e64dfe9b812441 | [
"MIT"
] | null | null | null | [](https://www.android.com/)[](https://kotlinlang.org/)
# Welcome!
---
## Setup
After create an account at [Movie DB API](https://www.themoviedb.org), you will have a token to access the API.
You should specify the token in your `gradle.properties` file.
```
MOVIE_DB_API_TOKEN_DEBUG=<token here>
MOVIE_DB_API_TOKEN_RELEASE=<token here>
```
---
## Teaser
<img src="img/teaser.gif" height="500" />
---
## Architecture and Stack Overview
<img src="img/architecture.png" width="400" />
### Architecture
**Repository**
- This layer provides an interface used as a repository. We also have some entities provided by this repository
- The main idea here is to provide a single abstraction to interact with two different Data Sources.
**Repository->API**
- The APP fetches data from the [Movie DB API](https://www.themoviedb.org). There is a code infrastructure using *Retrofit* where it is possible to keep this communication between the app and the server.
**Repository -> Local Data Base**
- The user can select their favorite movies and store them on a local database. There is a code layer using the *Room* library to keep easy the communication between the app and the Data Base.
**Use Cases**
- This layer is the way used by `ViewModels` to access the repository.
**Use Cases -> Mappers**
- They are used to convert the entities objects from data sources to entities recognized by the domain layer.
**Presentation**
- Contains `ViewModels`, `Fragments`, `Activities`, and others `Compose` functions.
**Core**
- The core layer starts the Dependency injection engine, the network setup, and the database configuration.
---
### Tech Stack Summary
- Compose, Dagger Hilt, Coroutines, Retrofit, Room, Mockk.
- More information about the Android Jetpack libraries used here, please access the [MAD Score card](https://madscorecard.withgoogle.com/scorecard/share/3723779503)
| 30.2 | 267 | 0.746925 | eng_Latn | 0.94897 |
aa5cdf8a17e9983d213389b2a442f250f30f8f66 | 761 | md | Markdown | README.md | sapienza-brain-imaging-lab/sapienza-brain-imaging-lab | 9890c58ac01d6d6dc0a6a9ba70a602b0f190a925 | [
"MIT"
] | null | null | null | README.md | sapienza-brain-imaging-lab/sapienza-brain-imaging-lab | 9890c58ac01d6d6dc0a6a9ba70a602b0f190a925 | [
"MIT"
] | null | null | null | README.md | sapienza-brain-imaging-lab/sapienza-brain-imaging-lab | 9890c58ac01d6d6dc0a6a9ba70a602b0f190a925 | [
"MIT"
] | null | null | null | # BIL public site
Public BIL site based on [Jekyll](https://jekyllrb.com) with a [CloudCannon](https://cloudcannon.com/) template
To contribute to the BIL site, first install Docker Desktop and Visual Studio Code on your computer. Then clone this repository and open your working copy using Visual Studio Code. When opening it for the first time, VS Code will ask you to install some recommended extensions and to auto-execute some tasks at startup: please allow both requests.
You can then start adding contents to the site, or modifying the existing contents. Point your browser to http://localhost:4000 to see the modified version of the site, auto-updated every time you save.
Push your working copy to automatically publish the new version of the site.
| 84.555556 | 347 | 0.796321 | eng_Latn | 0.997132 |
aa5e1e7b0748efc1553db57d451706f70c9ad957 | 5,020 | md | Markdown | content_zh/docs/tasks/security/mtls-migration/index.md | eshujiushiwo/istio.github.io | b2d178b9501d2d0d0e14d591d97367c98484194f | [
"Apache-2.0"
] | null | null | null | content_zh/docs/tasks/security/mtls-migration/index.md | eshujiushiwo/istio.github.io | b2d178b9501d2d0d0e14d591d97367c98484194f | [
"Apache-2.0"
] | null | null | null | content_zh/docs/tasks/security/mtls-migration/index.md | eshujiushiwo/istio.github.io | b2d178b9501d2d0d0e14d591d97367c98484194f | [
"Apache-2.0"
] | null | null | null | ---
title: Mutual TLS Migration
description: How to incrementally add mutual TLS support to existing Istio services.
weight: 80
keywords: [security,authentication,migration]
---
This task shows how to upgrade traffic for existing Istio services from plaintext to mutual TLS without breaking communication.
In practice, a cluster may contain both Istio services (with the Envoy sidecar injected) and non-Istio services (without the sidecar, referred to below as legacy services). Legacy services cannot use Istio-issued keys and certificates for mutual TLS communication. We want to enable mutual TLS safely and incrementally.
## Before you begin
* Understand Istio [authentication policies](/docs/concepts/security/#authentication-policies) and the related [mutual TLS authentication](/docs/concepts/security/#mutual-tls-authentication) concepts.
* Have Istio successfully deployed on a Kubernetes cluster without mutual TLS enabled (that is, deployed with `install/kubernetes/istio-demo.yaml` as described in the [installation steps](/docs/setup/kubernetes/quick-start/#installation-steps), or with `global.mtls.enabled` set to false during a [Helm installation](/docs/setup/kubernetes/helm-install/)).
* For demonstration purposes, create three namespaces, `foo`, `bar`, and `legacy`, then deploy the [httpbin]({{< github_tree >}}/samples/httpbin) and [sleep]({{< github_tree >}}/samples/sleep) applications with the Istio sidecar injected in both `foo` and `bar`, and run the sleep application without injection in the `legacy` namespace.
{{< text bash >}}
$ kubectl create ns foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n foo
$ kubectl create ns bar
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n bar
$ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n bar
$ kubectl create ns legacy
$ kubectl apply -f @samples/sleep/sleep.yaml@ -n legacy
{{< /text >}}
* Verify the setup: pick a sleep pod from any of the namespaces and send an HTTP request to `httpbin.foo`. All requests should return HTTP 200.
{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "sleep.${from} to httpbin.foo: %{http_code}\n"; done
sleep.foo to httpbin.foo: 200
sleep.bar to httpbin.foo: 200
sleep.legacy to httpbin.foo: 200
{{< /text >}}
* Confirm that no authentication policies or destination rules exist in the system:
{{< text bash >}}
$ kubectl get policies.authentication.istio.io --all-namespaces
No resources found.
$ kubectl get destinationrule --all-namespaces
No resources found.
{{< /text >}}
## Configure servers to accept both mutual TLS and plaintext traffic
Authentication policies provide a `PERMISSIVE` mode, which lets a server accept both plaintext and mutual TLS traffic. Configure the server with this mode:
{{< text bash >}}
$ cat <<EOF | istioctl create -n foo -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "example-httpbin-permissive"
namespace: foo
spec:
targets:
- name: httpbin
peers:
- mtls:
mode: PERMISSIVE
EOF
{{< /text >}}
Next, send traffic to `httpbin.foo` again and confirm that all requests still succeed.
{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "sleep.${from} to httpbin.foo: %{http_code}\n"; done
200
200
200
{{< /text >}}
## Configure clients to use mutual TLS
Configure Istio services to use mutual TLS by setting a `DestinationRule`:
{{< text bash >}}
$ cat <<EOF | istioctl create -n foo -f -
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "example-httpbin-istio-client-mtls"
spec:
host: httpbin.foo.svc.cluster.local
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
EOF
{{< /text >}}
With this in place, `sleep.foo` and `sleep.bar` start using mutual TLS to communicate with `httpbin.foo`. `sleep.legacy`, which has no sidecar injected, is not affected by the `DestinationRule` and continues to talk to `httpbin.foo` in plaintext.
Now re-check that all traffic to `httpbin.foo` still succeeds:
{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "sleep.${from} to httpbin.foo: %{http_code}\n"; done
200
200
200
{{< /text >}}
You can also configure mutual TLS for only a subset of the service in the [`DestinationRule`](/docs/reference/config/istio.networking.v1alpha3/#DestinationRule), verify the behavior with [Grafana](/docs/tasks/telemetry/using-istio-dashboard/), and, once confirmed, extend the policy to all subsets of the service.
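For example, a sketch of such a subset-scoped rule might look like the following (the `v1` subset name and `version` label are hypothetical and would have to match labels on your own deployment):

{{< text yaml >}}
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "example-httpbin-istio-client-mtls"
spec:
  host: httpbin.foo.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      tls:
        mode: ISTIO_MUTUAL
{{< /text >}}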
## Lock down to mutual TLS (optional)
After migrating all sidecar-injected client-to-server traffic to mutual TLS, you can configure `httpbin.foo` to accept only mutual TLS traffic:
{{< text bash >}}
$ cat <<EOF | istioctl create -n foo -f -
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "example-httpbin-permissive"
namespace: foo
spec:
targets:
- name: httpbin
peers:
- mtls:
mode: STRICT
EOF
{{< /text >}}
With this setting, requests from `sleep.legacy` fail:
{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "sleep.${from} to httpbin.foo: %{http_code}\n"; done
200
200
503
{{< /text >}}
In other words, if you cannot migrate all services to Istio (that is, inject sidecars into all of them), you have to keep using `PERMISSIVE` mode. However, in `PERMISSIVE` mode, no authorization or authentication checks are performed on plaintext traffic. We recommend using [RBAC](/docs/tasks/security/role-based-access-control/) to configure different authorization policies for different paths.
## Cleanup
Remove all resources:
{{< text bash >}}
$ kubectl delete ns foo bar legacy
Namespaces foo bar legacy deleted.
{{< /text >}}
| 33.918919 | 261 | 0.693625 | yue_Hant | 0.716425 |
## Checking openapi build issues
Edit the `main.go` and `go.mod` in this dir to see what builds
with various combinations of cli-runtime kube-openapi.
A recent change in kube-openapi
https://github.com/kubernetes/kube-openapi/pull/234
means that anyone depending on
k8s.io/cli-runtime@v0.20.4
and _any other package that imports kube-openapi_ (e.g. kyaml)
may see a build error like
~/go/pkg/mod/sigs.k8s.io/kustomize@v2.0.3+incompatible/pkg/transformers/config/factorycrd.go:71:47:
cannot use api.Schema.SchemaProps.Properties (type map[string]"k8s.io/kube-openapi/pkg/validation/spec".Schema)
as type myProperties in argument to looksLikeAk8sType
## Why?
As it happens,
k8s.io/cli-runtime@v0.20.4
depends on
sigs.k8s.io/kustomize@v2.0.3+incompatible
Line 71 of factorycrd.go in kustomize v2.0.3 is:
if !looksLikeAk8sType(api.Schema.SchemaProps.Properties) {
The `looksLikeAk8sType` function accepts the argument
func looksLikeAk8sType(properties map[string]spec.Schema) bool {...}
At the call point in line 71 the argument is
common.OpenAPIDefinition.Schema.SchemaProps.Properties
The file factorycrd.go depends on
"github.com/go-openapi/spec"
"k8s.io/kube-openapi/pkg/common"
The module sigs.k8s.io/kustomize@v2.0.3 predates Go modules.
To pin its dependencies, it has a Gopkg.lock file and
a vendor directory. Per the lock file:
sigs.k8s.io/kustomize@v2.0.3
depends on "k8s.io/kube-openapi"
revision = "b3f03f55328800731ce03a164b80973014ecd455"
Checking out this commit in the k8s.io/kube-openapi repo we see this
k8s.io/kube-openapi/pkg/common/common.go:
import "github.com/go-openapi/spec"
...
type OpenAPIDefinition struct {
Schema spec.Schema
Dependencies []string
}
But per the imports in this file, `spec.Schema` lives in
github.com/go-openapi/spec
The aforementioned Gopkg.lock file pins that at
sigs.k8s.io/kustomize@v2.0.3
depends on "github.com/go-openapi/spec"
revision = "bcff419492eeeb01f76e77d2ebc714dc97b607f5"
The struct is
github.com/go-openapi/spec:
type Schema struct {
VendorExtensible
SchemaProps
SwaggerSchemaProps
ExtraProps map[string]interface{} `json:"-"`
}
type SchemaProps struct {
Properties map[string]Schema
}
This is a recursive type; Schema holds a map[string]Schema.
All that is fine.
The problem arises when we build a binary that depends on both
kustomize v2.0.3 and, say,
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e
This particular version of kube-openapi has a 'go.mod' file.
kube-openapi/pkg/common/common.go at tag "95288..." contains:
import "k8s.io/kube-openapi/pkg/validation/spec"
...
type OpenAPIDefinition struct {
Schema spec.Schema
Dependencies []string
}
kube-openapi/pkg/validation/spec/schema.go at this tag contains:
type Schema struct {
VendorExtensible
SchemaProps
SwaggerSchemaProps
ExtraProps map[string]interface{} `json:"-"`
}
etc. etc. as above. The same layout as above, but in different files.
So adding this new dependency means that
factorycrd.go
is going to include a version of "k8s.io/kube-openapi/pkg/common".
that deals in the type
"k8s.io/kube-openapi/pkg/validation/spec".Schema
but will then attempt to pass this type to older code (the func
looksLikeAk8sType) which is looking to accept a
"github.com/go-openapi/spec".Schema
It's the same type structure under a different name, so the 'linker' barfs.
To avoid this problem, one has to either
* roll forward on cli-runtime -- depend on v0.21.0 or higher.
* or stick with v0.20.4, which means retaining consistency
with kustomize 2.0.3, which means depending on an older version
of k8s.io/kube-openapi that still depends on go-openapi/spec.
Fortunately you only have to go back to before PR
https://github.com/kubernetes/kube-openapi/pull/234
E.g. depend on k8s.io/kube-openapi v0.0.0-20210323165736-1a6458611d18
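The clash is easy to reproduce in miniature: Go treats two named types as distinct even when their layouts are identical, and only an explicit conversion bridges them. The sketch below uses stand-in types of our own (hypothetical names, not the real packages) to mimic the factorycrd.go failure:

```go
package main

import "fmt"

// Stand-ins for "github.com/go-openapi/spec".Schema and
// "k8s.io/kube-openapi/pkg/validation/spec".Schema: same layout, different names.
type oldSchema struct{ Properties map[string]string }
type newSchema struct{ Properties map[string]string }

// Mirrors the kustomize v2.0.3 helper, which accepts only the old type.
func looksLikeAk8sType(s oldSchema) bool { return len(s.Properties) > 0 }

func main() {
	n := newSchema{Properties: map[string]string{"kind": "string"}}
	// looksLikeAk8sType(n) would not compile: "cannot use n (type newSchema)
	// as type oldSchema" -- the same mismatch the build error reports.
	// An explicit conversion is legal because the layouts are identical:
	fmt.Println(looksLikeAk8sType(oldSchema(n))) // true
}
```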
---
name: resume-review
about: Please review my resume on a live stream
title: '[REVIEW]'
labels: resume review
assignees: ''
---
**Remove all personal data, otherwise the resume will be deleted.**
Resume link:
You can also join the discord community [here](https://discord.com/invite/jZQs6Wu)
Feel free to check out other cool repositories of EddieHub Community [here](https://github.com/EddieHubCommunity)
# WLO Duplicate Detection
A utility to detect near duplicates in the WLO dataset.
The tool is based on the [MinHash](https://en.wikipedia.org/wiki/MinHash) algorithm. Parts of the implementation were taken from [https://github.com/chrisjmccormick/MinHash](https://github.com/chrisjmccormick/MinHash).
## Prerequisites
- Install [Docker](https://docker.com/).
- Build the Docker container.
```
sh build.sh
```
## Training (calculate hashes)
The `data` folder contains the dataset with one entry (document) per line. The first word of each line is the document's ID.
- The following script calculates the hashes for each document and stores the interim data, which is necessary for prediction, also in the data folder.
```
sh runTraining.sh
```
## Prediction (find duplicates)
- To test the detection just query the system with an existing document's text.
```
sh runPrediction.sh "Bruchterme - gemeinsamer Nenner Bruchterme - gemeinsamer Nenner_1603916225648 Suche den gemeinsamen Nenner der beiden Bruchterme!"
```
The result is a list of tuples containing the relevant document IDs, the similarity score (usually near 1.0 for duplicates), and the document's text.
```
['3ebe9c55-3405-4411-98f6-b5c581bb000e', 1.0, ' Bruchterme - gemeinsamer Nenner Bruchterme - gemeinsamer Nenner_1603916225648 Suche den gemeinsamen Nenner der beiden Bruchterme!']
```
(Only documents with a similarity > 0.8 are returned. This threshold can be changed in the code.)
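Under the hood the tool relies on the MinHash idea: each document is reduced to a short signature, and the fraction of matching signature slots estimates the Jaccard similarity of the documents' shingle sets. The repo's actual implementation differs; the sketch below (function names are ours, not the repo's) only illustrates the principle:

```python
import hashlib

def shingles(text, k=3):
    """Split a text into overlapping k-word shingles."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def minhash_signature(text, num_hashes=32):
    """One signature slot per seed: the minimum hash over all shingles."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(text)
        ))
    return sig

def estimated_similarity(a, b):
    """Fraction of agreeing slots approximates Jaccard similarity."""
    sa, sb = minhash_signature(a), minhash_signature(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

print(estimated_similarity("the quick brown fox jumps",
                           "the quick brown fox jumps"))  # → 1.0
```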
## Webservice
- To run the detection tool as a REST based webservice, the following script can be used:
```
sh runService.sh
```
- The script deploys a CherryPy webservice in a Docker container listening at `http://localhost:8080/duplicates`.
- To retrieve the most similar documents for a given text, create a POST request and submit a json document with the text as e.g.:
```
curl -d '{"text" : "Bruchterme - gemeinsamer Nenner Bruchterme - gemeinsamer Nenner_1603916225648 Suche den gemeinsamen Nenner der beiden Bruchterme!"}' -H "Content-Type: application/json" -X POST http://0.0.0.0:8080/duplicates
```
- You can also use the parameters 'id' or 'url' to retrieve matching documents, e.g.
```
curl -d '{"id" : "2518f94b-ca51-4fa2-804f-d3ee1e3f7c94"}' -H "Content-Type: application/json" -X POST http://0.0.0.0:8080/duplicates
```
```
curl -d '{"url" : "https://www.br.de/mediathek/podcast/grips-deutsch/berichten/46847"}' -H "Content-Type: application/json" -X POST http://0.0.0.0:8080/duplicates
```
<p align="center">
<img src="../gopher.png" />
</p>
---
# Monitor Pattern
This pattern provides a way to make a goroutine wait till some event occurs.<br />
Another definition of monitor is a thread-safe class that wraps around a mutex in order to safely allow access to a method or variable by more than one thread. <br />
<br />
A **condition variable** essentially is a container of threads that are waiting for a certain condition. <br />
([more on wikipedia](https://en.wikipedia.org/wiki/Monitor_(synchronization)))
## Example
In Go, we can create Condition Variable using **sync.Cond** type.<br />
It has one constructor function (**sync.NewCond()**), which takes a **sync.Locker**. <br />
It has three methods:
* **Wait**()
* **Signal**()
* **Broadcast**()
Let's make a goroutine wait until a value is assigned to a variable.
## Implementation
### *Solution 1: **using sync.Cond***
<br />
```go
package main
import (
"fmt"
"sync"
"time"
)
type price struct {
sync.Mutex
value string
cond *sync.Cond
}
func NewPrice() *price {
r := price{}
r.cond = sync.NewCond(&r)
return &r
}
func main() {
var wg sync.WaitGroup
pr := NewPrice()
wg.Add(1)
go func(pr *price) {
defer wg.Done()
pr.Lock()
pr.cond.Wait()
pr.Unlock()
fmt.Println("price is : ", pr.value)
return
}(pr)
time.Sleep(1 * time.Second)
pr.Lock()
pr.value = "$100"
pr.Unlock()
pr.cond.Signal()
wg.Wait()
}
```
<br />
### *Solution 2: **using channel***
<br />
```go
package main
import (
"fmt"
"sync"
"time"
)
type price struct {
sync.Mutex
value string
}
func main() {
var wg sync.WaitGroup
ch := make(chan struct{})
pr := &price{}
wg.Add(1)
go func(pr *price) {
defer wg.Done()
<-ch
fmt.Println("price value is: ", pr.value)
return
}(pr)
time.Sleep(1 * time.Second)
pr.Lock()
pr.value = "$100"
pr.Unlock()
ch <- struct{}{}
wg.Wait()
}
```
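Both solutions above wake a single waiting goroutine. When several goroutines wait on the same condition, **Broadcast** wakes them all at once. The sketch below is our own illustration (the `wakeAll` helper is hypothetical, not part of the pattern above); it also shows the conventional loop around **Wait** that guards against spurious wakeups:

```go
package main

import (
	"fmt"
	"sync"
)

// wakeAll starts n goroutines that all wait on one condition variable,
// then wakes them together with Broadcast. It returns how many woke up.
func wakeAll(n int) int {
	var wg sync.WaitGroup
	var mu sync.Mutex
	cond := sync.NewCond(&mu)
	ready := false
	woken := 0

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			for !ready { // loop guards against spurious wakeups
				cond.Wait()
			}
			woken++
			mu.Unlock()
		}()
	}

	mu.Lock()
	ready = true
	cond.Broadcast() // wake every waiting goroutine at once
	mu.Unlock()
	wg.Wait()
	return woken
}

func main() {
	fmt.Println(wakeAll(3)) // 3
}
```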
# rse-jobscraper
A python program to scrape some websites for pharma/synchrotron research software engineering jobs. The program will save the results out to a text file and also send an email. To send an email using a Gmail account, you'll need to have allowed access to less secure apps in the security settings.
### Sites
At present, sites scraped are:
- Oxford University - All grade 8 jobs
- lightsources.org - The last 60 jobs posted
- The Rosalind Franklin Institute - All jobs
- Merck - All jobs that contain the tag "python"
- MSD - All jobs that contain the tag "python"
- NovoNordisk - All jobs that contain the tag "python"
- Society of Research Software Engineering website - All jobs
- Exscientia - All jobs
- Novartis - Jobs with the tag "python" in the following countries: AU,AT,BB,BE,BA,BR,CA,HR,CZ,DK,EE,FI,FR,GE,DE,GR,HU,IE,IT,LV,LT,LU,MK,MD,NZ,NO,PL,PT,RS,SG,SK,SI,ES,SE,CH,GB.
- EMBL-EBI - All jobs
### Requirements
A python environment with:
- BeautifulSoup
- Selenium
You will also need to download and install the [Chrome Webdriver](https://chromedriver.chromium.org/downloads) to use with Selenium. Once downloaded, change the path in `utils.py`.
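The scraping itself boils down to pulling job links out of fetched HTML. The real script uses BeautifulSoup and Selenium; as a dependency-free illustration of that parsing step (the class name and the sample markup here are hypothetical, not taken from the repo), consider:

```python
from html.parser import HTMLParser

class JobLinkParser(HTMLParser):
    """Collects (href, text) pairs for anchor tags -- a stdlib stand-in for
    the BeautifulSoup calls the scraper itself would make."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None

parser = JobLinkParser()
parser.feed('<ul><li><a href="/jobs/1">Research Software Engineer</a></li></ul>')
print(parser.links)  # → [('/jobs/1', 'Research Software Engineer')]
```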
### Usage
If you want to send an email enter your from/to address and password in the appropriate place in jobscraper.py.
To run the script:
```shell
python jobscraper.py
```
---
layout: post
title: "बहुत होंगे खुश हम"
date: 2020-12-22 14:34:25
categories: poem
tags: hindi poem khush happiness khwab dream
image: /assets/article_images/2020-12-22-poem/happiness.jpg
image2: /assets/article_images/2020-12-22-poem/happiness.jpg
image-credit: NY TImes
image-credit-url: https://www.nytimes.com/
---
I once saw a dream,
and thought that
when it came true,
how happy we would be \|
<br/>
It came true,
and I had already seen another dream,
thinking that when this one came true,
how happy we would be \|
<br/>
I never even noticed when it came true,
for we were already chasing a new dream,
thinking that when it came true,
surely we would be happy \|
<br/>
Running after dream upon dream,
we never understood
that we can never become happy;
we can only be happy \|
<br/>
After that we saw many more dreams,
but no longer waited for happiness,
for we gave up trying to become happy
and simply began to live happy \|
# parse-uri

[](https://www.npmjs.org/package/parse-uri)
> Lightweight module for parsing an URI Based in [Steven Levithan](http://blog.stevenlevithan.com/archives/parseuri) method.
## Install
```bash
$ npm install parse-uri --save
```
If you want to use in the browser (powered by [Browserify](http://browserify.org/)):
```bash
$ bower install parse-uri --save
```
and later link in your HTML:
```html
<script src="bower_components/parse-uri/dist/parse-uri.js"></script>
```
## Usage
```js
var parseUri = require('parse-uri')
parseUri('myURL')
```
## API
### parseURI(str, [options])
#### options
##### strictMode
Type: `boolean`
Default: `false`
Determinate if use `loose` or `strict` mode.
> Loose mode deviates slightly from the official generic URI spec ([RFC 3986](http://tools.ietf.org/html/rfc3986))
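For intuition, the Levithan approach is essentially one regular expression that splits a URI into its components. The sketch below is our own simplified stand-in built on the RFC 3986 Appendix B regex (the real module exposes more fields, such as `userInfo`, `directory` and `file`):

```javascript
// Hypothetical helper, not the module's API: splits a URI into five
// top-level components using the RFC 3986 Appendix B regular expression.
function parseUriSketch(str) {
  const m = /^(?:([^:\/?#]+):)?(?:\/\/([^\/?#]*))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?$/.exec(str)
  return {
    protocol: m[1] || '',
    authority: m[2] || '',
    path: m[3] || '',
    query: m[4] || '',
    anchor: m[5] || ''
  }
}

console.log(parseUriSketch('http://example.com:8080/a/b?x=1#top'))
// → { protocol: 'http', authority: 'example.com:8080', path: '/a/b', query: 'x=1', anchor: 'top' }
```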
### Related
* [is-uri](https://github.com/Kikobeats/is-uri#is-uri) – Determinate if a string is a valid URI.
## License
MIT © [Kiko Beats](http://kikobeats.com)
# JustDecompile Engine
This is the engine of the popular .NET decompiler [JustDecompile](https://www.telerik.com/products/decompiler.aspx). C# is the only programming language used.
Copyright (c) 2011 - 2018 Telerik EAD
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
## Getting Started
- Clone the repository and open JustDecompileEngine.sln in Microsoft (r) Visual Studio (r).
- Set your startup project to ConsoleRunner.
- Enjoy
See this [getting started](https://developer.telerik.com/featured/a-look-at-the-open-source-justdecompile-engine/) post for more info.
## Working with JustDecompile Engine
JustDecompile UI remains private at this time. JustDecompile, however, has rich console functionality and that has been opensourced here.
One can use the console project generation feature to see the results of the changes made to the engine. The ConsoleRunner project
is a console app that exposes that functionality and makes testing easy. When started it prints out all the available commands and switches.
## How to Contribute to JustDecompile Engine
We encourage and support an active, healthy community that accepts contributions from the public. We'd like you to be a part of that community.
Before submitting a pull request, please, read and sign the [Contributors License Agreement](https://docs.google.com/forms/d/e/1FAIpQLSdjOagw622VxQLrWOsnsLoqvncwJ_DfPmC9hyiYci8iaVWONQ/viewform)
## How to Contribute to JustDecompile
[Feature Suggestions](https://feedback.telerik.com/Project/189)
[Bug Reports / Discussion](https://www.telerik.com/forums/justdecompile/general-discussions)
## Related
The [JustDecompile Plugins](https://github.com/telerik/justdecompile-plugins) are also available on GitHub under various open source licenses.
---
nav:
  title: Core Modules
  order: 3
group:
  title: Compilation Phase
  order: 2
title: Lexical Scope
order: 2
---
# Scope

Scope is the range in which a variable (identifier) is valid; it controls the visibility of variables.

The definition of scope given in *You Don't Know JS*:

> A set of strict rules that determine which identifiers are accessible to which parts of the syntax.

The description of variable scope in *JavaScript: The Definitive Guide*:

> The scope of a variable is the region of the program source code in which it is defined. Global variables have global scope and are defined everywhere in JavaScript code. Variables declared within a function, however, are defined only within the body of the function; they are local variables and have local scope. Function parameters are also local variables, defined only within the function body.

Scope has two main working models:

- Lexical scope (static scope)
- Dynamic scope

JavaScript uses **lexical scope**, also known as **static scope**.

Because JavaScript uses lexical scope, a function's scope is determined at the time the function is defined.

In contrast, under dynamic scope a function's scope is determined only when the function is called.

## Lexical Scope (Static Scope)

The first phase of most standard language compilers is called **lexing** (or tokenizing). Lexing examines the characters of the source code and, if the parsing is stateful, assigns semantic meaning to the tokens.

Simply put, lexical scope is the scope defined at the lexing phase. In other words, lexical scope is determined by where you write your variables and blocks of scope at author time, so the scope stays unchanged (in most cases) while the lexer processes the code.

🌰 **Example**:
```js
function foo(a) {
var b = a * 2;
  function bar(c) {
console.log(a, b, c);
}
bar(b * 3);
}
foo(2); // 2, 4, 12
```
This example has three nested scopes. To help visualize them, think of them as a series of nested bubbles.
```jsx | inline
import React from 'react';
import img from '../../../assets/lexical-scope/scope-bubble.png';
export default () => <img alt="Scope bubbles" src={img} width={480} />;
```
<br/>
- The first bubble contains the entire global scope, holding just one identifier: `foo`
- The second contains the scope created by `foo`, holding three identifiers: `a`, `bar`, and `b`
- The third contains the scope created by `bar`, holding just one identifier: `c`

Scope bubbles are defined by where their blocks of scope are written; they are **strictly nested**. For now, just assume that every function creates a new scope bubble.

The bubble for `bar` is entirely contained within the bubble for `foo`, for no other reason than that is where we chose to define `bar`.

### Look-ups

The structure and relative placement of the scope bubbles give the engine enough positional information to look up where an identifier lives.

In the snippet above, the engine executes the `console.log` statement and looks up the references to the three variables `a`, `b`, and `c`.

- It starts in the innermost scope, the bubble of the `bar` function
- The engine cannot find `a` there, so it goes up one level to the enclosing scope of `foo`, where it finds `a` and uses that reference
- The same goes for `b`
- As for `c`, the engine finds it right inside `bar`

If both `a` and `c` existed inside `bar` as well as `foo`, the `console.log` statement would use the variables in `bar` without ever reaching the ones in `foo`.

### Shadowing

**Scope look-up stops as soon as it finds the first matching identifier.**

Defining the same identifier name in multiple nested scopes is allowed; this is called **shadowing** (the inner identifier shadows the outer one).

Shadowing aside, scope look-up always starts in the innermost scope being executed and works its way outward and upward until it finds the first match.

Global variables automatically become properties of the global object (such as the `window` object in browsers), so it is possible to reference a global variable not directly by its lexical name but indirectly as a property of the global object.

🌰 **Example**:
```js
window.a;
```
This technique gives access to a global variable that would otherwise be inaccessible because it is shadowed. A shadowed non-global variable, however, cannot be accessed in any way.
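Shadowing itself is easy to see in a small, self-contained sketch (our own illustration, not from the original text):

```javascript
const a = 1 // global `a`

function outer() {
  const a = 2 // shadows the global `a`
  function inner() {
    return a // look-up stops at the first match: outer's `a`
  }
  return inner()
}

console.log(outer()) // 2
```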
No matter where a function is invoked from, or how it is invoked, its lexical scope is defined solely by where the function was declared.

Lexical scope look-up only applies to first-class identifiers such as `a`, `b`, and `c`. If the code references `foo.bar.baz`, the lexical look-up only finds the `foo` identifier; once that variable is located, object property-access rules take over to resolve the `bar` and `baz` properties.

## Dynamic Scope

The most important characteristic of lexical scope is that it is defined at author time, when the code is written.

> So why talk about dynamic scope at all?

Because dynamic scope is actually a close cousin of another important JavaScript mechanism, [this](../execution/this). Much of the confusion around scope comes from mixing up lexical scope and the `this` mechanism.

**Dynamic scope** does not care how or where functions and scopes are declared; it only cares where they are called from.

In other words, the [scope chain](../execution/scope-chain) would be based on the **call stack**, not on the nesting of scopes in the code.
```js
const a = 2;
function foo() {
console.log(a);
}
function bar() {
const a = 3;
foo();
}
bar();
```
- Under lexical scope, the variable `a` is first looked up in the `foo` function and not found. The look-up then **follows the scope chain to the global scope**, where `a` is found with the value `2`, so the console prints `2`
- Under dynamic scope, `a` would likewise be looked up in `foo` first and not found. The look-up would then **walk up the call stack** to where `foo` was called, inside the `bar` function, where `a` is found with the value `3`, so the console would print `3`

In short, the difference between the two models is that lexical scope is determined at **author time**, while dynamic scope is determined at **run time**.
# Data
* Using total RNA-seq of 6 T2E+ and 6 T2E- primary prostate cancer samples
* See [this paper](https://academic.oup.com/bioinformatics/article/31/22/3625/240923) for appropriate number of biological replicates
* Might actually be worth using that data, itself, instead of the prostate cancer stuff
* Differential gene expression analysis is performed with Kallisto + Sleuth
---
layout: post
title: Maker Movement in Brazil
---
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@dgrejX">
<meta name="twitter:title" content="Maker Movement: The culture that brings thinking and making together is already part of your everyday life">
<meta name="twitter:description" content="The maker movement is already inside classrooms, multinationals, home garages and laboratories equipped with digital fabrication machines, turning the do-it-yourself ethos into a technological and collective phenomenon. To be a maker, all you need is to share experiences with others who also want to get their hands dirty.">
<meta name="twitter:image" content="http://disneybabble.uol.com.br/sites/default/filesBR/styles/articlesbig/public/MakerGifts.jpg">
<center><img width="480" height="320" src="http://disneybabble.uol.com.br/sites/default/filesBR/styles/articlesbig/public/MakerGifts.jpg" class="attachment-large wp-post-image" alt="google-Brillo" /></center><br />
<p>Many of the inventions Leonardo da Vinci conceived, such as armored war tanks and parachutes, only left the drawing board centuries later, at the hands of other creators. If he were alive today, the Italian would probably have no trouble signing off on new ideas. Neither would anyone who wants to make their own objects, whether as a new way of consuming or for the pleasure of seeing projects become real. Encouraged above all by the expansion of the internet and its power to share information, today's makers are no longer isolated, but gathered and on the move.</p>
<p>The maker movement is already inside classrooms, multinationals, home garages and laboratories equipped with digital fabrication machines, turning the do-it-yourself ethos into a technological and collective phenomenon. To be a maker, all you need is to share experiences with others who also want to get their hands dirty. A water heater made in Brazil can easily be reproduced and recreated by the Japanese, for example. "It is exactly what happened with computing, communications and the web. Now it is reaching physical things," argues British author Chris Anderson, who became one of the movement's leading advocates with the book <em>Makers: The New Industrial Revolution</em>.</p>
<p>If with just a home <abbr title="3D printer: a machine that fabricates three-dimensional objects from a computer-designed model. It usually uses plastic as raw material, but some models work with other filaments, from glass to chocolate." rel="tooltip">3D printer</abbr> connected to a computer makers can design and print their children's toys, gathered in collective spaces they are creating solutions to bigger problems. In Africa, young Kenyans built incubators for hospitals in Nairobi; in the United States (where the White House itself hosted a maker faire in 2014) the bet is on bringing manufacturing work back to the world's largest economy.</p>
<p>In this context, the movement gathers followers not only on the internet but also in physical spaces equipped with digital fabrication machines, called <abbr title="Makerspace: a collective space for maker activities that usually brings together digital fabrication equipment such as 3D printers, laser cutters and CNC routers. It can be either private or public. Fab Labs are the most popular type of makerspace." rel="tooltip">makerspaces</abbr>. These are places whose role is to connect technology with knowledge, says USP professor and subject specialist Paulo Eduardo Fonseca. "The movement dismantles the modern notion that making is less important. But there is a lot of euphoria. We should not stay at the superficial level of making little whistles and mugs." Fonseca was responsible for bringing to Brazil, in 2011, the first makerspace of the Fab Foundation, linked to the Massachusetts Institute of Technology (MIT) in the United States.</p>
<p>There are already more than 550 <abbr title="Fab Lab: one of the main drivers of the maker movement's spread across every continent. It is a type of makerspace created at the Massachusetts Institute of Technology (MIT, USA). It currently gathers more than 550 associated spaces, which must follow rules laid out in a manifesto." rel="tooltip">Fab Labs</abbr> around the world, 22 of them in Brazilian cities such as Porto Alegre, Brasília and Recife. São Paulo, however, is the first city to adopt the maker movement as public policy, through the Fab Lab Livre SP network. The first four municipal laboratories are already open in a trial phase in the Centro, Cidade Tiradentes, Penha and Itaquera districts. The city government plans to open another eight units in March 2016, with a total investment of R$ 8.3 million over two years.</p>
<p>In Europe, where the maker network spreads through countries such as England and Italy, one of the oldest Fab Labs is in Barcelona, Spain, opened in 2007. The space's director, Tomas Diez, says the movement does not replace industry, because not everything can be made at home. What changes, the Venezuelan says, are labor relations. "It creates jobs, because people can be more independent and start making objects with higher added value. This is the beginning of a model in which people decide without depending on those who control the means of production."</p>
<p>In the opinion of architect Heloisa Neves, one of the movement's leading figures in Brazil, it also changes society's relationship with technology and consumption. The focus, however, should not be on the machines but on the people. "Technology merely facilitates creation. Maker culture turns inside out what we learn at school. You are the agent who makes things happen."</p>
<h3>What is the future of the maker movement?</h3>
<div class='embed-container'><iframe src='http://www.youtube.com/embed/n1Ld1H3AN6E' frameborder='0' allowfullscreen></iframe></div>
<h3>Could the maker movement be the third industrial revolution?</h3>
<div class='embed-container'><iframe src='http://www.youtube.com/embed/2tWRQ6CReK8' frameborder='0' allowfullscreen></iframe></div>
<h3>Do Brazilians have a vocation for being makers?</h3>
<div class='embed-container'><iframe src='http://www.youtube.com/embed/wf5p4VYWTcM' frameborder='0' allowfullscreen></iframe></div>
<br>
via:http://infograficos.estadao.com.br/e/focas/movimento-maker/
<br>
---
title: dispinterface (C++ COM attribute) | Microsoft Docs
ms.custom: ''
ms.date: 10/02/2018
ms.technology:
- cpp-windows
ms.topic: reference
f1_keywords:
- vc-attr.dispinterface
dev_langs:
- C++
helpviewer_keywords:
- dispinterface attribute
ms.assetid: 61c5a4a1-ae92-47e9-8ee4-f847be90172b
author: mikeblome
ms.author: mblome
ms.workload:
- cplusplus
- uwp
ms.openlocfilehash: 3b02244e0576f99cc0a6940f2ee4a13511cfbe6f
ms.sourcegitcommit: 955ef0f9d966e7c9c65e040f1e28fa83abe102a5
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/04/2018
ms.locfileid: "48790067"
---
# <a name="dispinterface"></a>dispinterface
Places an interface in the .idl file as a dispatch interface.
## <a name="syntax"></a>Syntax
```cpp
[dispinterface]
```
## <a name="remarks"></a>Remarks
When the **dispinterface** C++ attribute precedes an interface, it causes the interface to be placed inside the library block of the generated .idl file.

Unless you specify a base class, a dispatch interface derives from `IDispatch`. You must specify an [id](id.md) for the members of a dispatch interface.

The usage example for [dispinterface](/windows/desktop/Midl/dispinterface) in the MIDL documentation:
```cpp
dispinterface helloPro
{ interface hello; };
```
is not valid for the **dispinterface** attribute.
## <a name="example"></a>Example
See the example for [bindable](bindable.md) for a sample use of **dispinterface**.
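For orientation, a typical attributed-C++ declaration looks roughly like the following fragment. It is our own illustration (the interface name, member and UUID are hypothetical, not from the official samples), is not standalone-compilable, and requires the MSVC attributed-code toolchain:

```cpp
// Illustrative only: a dispatch interface declared with the attribute.
[dispinterface, uuid("91A3C001-0000-0000-0000-000000000001")]
__interface IMyEvents
{
   [id(1)] HRESULT OnChange([in] long value);  // members require an id()
};
```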
## <a name="requirements"></a>Requirements
### <a name="attribute-context"></a>Attribute context
|||
|-|-|
|**Applies to**|**interface**|
|**Repeatable**|No|
|**Required attributes**|None|
|**Invalid attributes**|`dual`, `object`, `oleautomation`, `local`, `ms_union`|
For more information, see [Attribute Contexts](cpp-attributes-com-net.md#contexts).
## <a name="see-also"></a>See also
[IDL Attributes](idl-attributes.md)<br/>
[Attributes by Usage](attributes-by-usage.md)<br/>
[uuid](uuid-cpp-attributes.md)<br/>
[dual](dual.md)<br/>
[custom](custom-cpp.md)<br/>
[object](object-cpp.md)<br/>
[__interface](../../cpp/interface.md) | 28.090909 | 183 | 0.736015 | por_Latn | 0.874713 |
aa64413373fb9237a030e35567fb633aa9573c00 | 5,618 | md | Markdown | articles/active-directory/active-directory-enterprise-apps-manage-provisioning.md | jiyongseong/azure-docs.ko-kr | f1313d505132597ce47e343e2195151587b32238 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-enterprise-apps-manage-provisioning.md | jiyongseong/azure-docs.ko-kr | f1313d505132597ce47e343e2195151587b32238 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-enterprise-apps-manage-provisioning.md | jiyongseong/azure-docs.ko-kr | f1313d505132597ce47e343e2195151587b32238 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Active Directory에서 엔터프라이즈 앱에 대한 사용자 프로비전 관리 | Microsoft Docs
description: Azure Active Directory를 사용하여 엔터프라이즈 앱에 대한 사용자 계정 프로비전을 관리하는 방법에 대해 알아봅니다.
services: active-directory
documentationcenter: ''
author: asmalser
manager: mtillman
editor: ''
ms.service: active-directory
ms.component: app-mgmt
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: identity
ms.date: 07/26/2017
ms.author: asmalser
ms.reviewer: asmalser
ms.openlocfilehash: a6f9f35931ff13eb3f0f35748b3a040af37df672
ms.sourcegitcommit: b6319f1a87d9316122f96769aab0d92b46a6879a
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 05/20/2018
ms.locfileid: "34337897"
---
# <a name="managing-user-account-provisioning-for-enterprise-apps-in-the-azure-portal"></a>Managing user account provisioning for enterprise apps in the Azure portal
This article describes how to use the [Azure portal](https://portal.azure.com) to manage automatic user account provisioning and de-provisioning for applications that support it (specifically, applications added from the "Featured" category of the [Azure Active Directory application gallery](manage-apps/what-is-single-sign-on.md#get-started-with-the-azure-ad-application-gallery)). To learn more about automatic user account provisioning and how it works, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](active-directory-saas-app-provisioning.md).
## <a name="finding-your-apps-in-the-portal"></a>Finding your apps in the portal
Using the [Azure Active Directory application gallery](manage-apps/what-is-single-sign-on.md#get-started-with-the-azure-ad-application-gallery), all applications that a directory administrator has configured for single sign-on in a directory can now be viewed and managed in the [Azure portal](https://portal.azure.com). Applications can be found in the **All services** > **Enterprise applications** section of the portal. Enterprise apps are apps that are deployed and used within your organization.

Selecting the **All applications** link on the left shows a list of all configured apps, including apps added from the gallery. Selecting an app loads that app's resource pane, where you can view reports and manage various settings for the app.
User account provisioning settings can be managed by selecting **Provisioning** on the left.

## <a name="provisioning-modes"></a>Provisioning modes
The **Provisioning** pane begins with a **Mode** menu, which shows the provisioning modes supported for an enterprise application and allows that mode to be configured. The available options are:
* **Automatic** - This option appears if Azure AD supports automatic API-based provisioning and/or de-provisioning of user accounts for this application. Selecting this mode displays an interface that guides administrators through configuring Azure AD to connect to the application's user management API, creating account mappings and workflows that define how user account data should flow between Azure AD and the app, and managing the Azure AD provisioning service.
* **Manual** - This option appears if Azure AD does not support automatic provisioning of user accounts for this application. With this option, user account records stored in the application must be managed using an external process, based on the user management and provisioning capabilities provided by that application (which can include SAML Just-In-Time provisioning).
## <a name="configuring-automatic-user-account-provisioning"></a>Configuring automatic user account provisioning
Selecting the **Automatic** option displays a screen divided into four sections:
### <a name="admin-credentials"></a>Admin credentials
In this section, enter the credentials that Azure AD needs to connect to the application's user management API. The required input varies by application. To learn about the credential types and requirements for a specific application, see the [configuration tutorial for that specific application](active-directory-saas-app-provisioning.md).
Selecting the **Test Connection** button lets you test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials.
### <a name="mappings"></a>Mappings
In this section, you can view and edit the user attributes that flow between Azure AD and the target application when user accounts are provisioned or updated.
There is a pre-configured set of mappings between Azure AD user objects and each SaaS app's user objects. Some apps manage other types of objects, such as groups or contacts. Selecting one of the mappings in the table lets you view and customize it in the mapping editor that appears on the right.

Supported customizations include:
* Enabling or disabling mappings for specific objects, such as the mapping of Azure AD user objects to the SaaS app's user objects.
* Editing the attributes that flow from the Azure AD user object to the app's user object. For more information on attribute mapping, see [Understanding attribute mapping types](active-directory-saas-customizing-attribute-mappings.md#understanding-attribute-mapping-types).
* Filtering the provisioning actions that Azure AD performs on the target application. Instead of having Azure AD fully synchronize objects, you can limit which actions are performed. For example, by selecting only **Update**, Azure AD only updates existing user accounts in the application and doesn't create new ones. By selecting only **Create**, Azure only creates new user accounts and doesn't update existing ones. This feature lets administrators create multiple mappings for account creation and update workflows.
### <a name="settings"></a>Settings
In this section, administrators can start and stop the Azure AD provisioning service for the selected application, as well as clear the provisioning cache and restart the service if needed.
If you are enabling provisioning for an application for the first time, activate the service by changing the **Provisioning Status** to **On**. This change causes the Azure AD provisioning service to run an initial sync, in which it reads the users assigned in the **Users and groups** section, queries the target application for them, and then performs the provisioning actions defined in the Azure AD **Mappings** section. During this process, the provisioning service stores cached data about the user accounts it is managing, so unmanaged accounts inside the target application that are outside the scope of assignment are not affected by de-provisioning operations. After the initial sync, the provisioning service automatically synchronizes user and group objects on a ten-minute interval.
Changing the **Provisioning Status** to **Off** simply pauses the provisioning service. In this state, Azure doesn't create, update, or remove any user or group objects in the app. Changing the status back to On causes the service to pick up where it left off.
Selecting the **Clear current state and restart synchronization** check box and saving stops the provisioning service, dumps the cached data about the accounts that Azure AD is managing, restarts the service, and performs the initial sync again. This option lets administrators restart the provisioning deployment process.
### <a name="synchronization-details"></a>Synchronization details
This section provides additional details about the provisioning service's operation for the application, including the first and most recent runs of the provisioning service for the application and the number of user and group objects being managed.
Links are provided to the **Provisioning activity report**, which provides a log of all users and groups created, updated, and removed between Azure AD and the target application, and to the **Provisioning error report**, which provides more detailed error messages for user and group objects that failed to be read, created, updated, or removed.
## <a name="feedback"></a>Feedback
We'd love to hear your feedback! Post comments and ideas for improvement in the **Admin Portal** section of the [feedback forum](https://feedback.azure.com/forums/169401-azure-active-directory/category/162510-admin-portal). Our engineering team is busy building great new features every day, and your guidance helps shape and define what we build next.
| 66.880952 | 424 | 0.738341 | kor_Hang | 1.00001 |
aa65ead49d5ba9c8b8a19406f078eeccc9ddd7ac | 4,470 | md | Markdown | Exchange/ExchangeServer2013/spam-quarantine-exchange-2013-help.md | sorinescu-com/OfficeDocs-Exchange | 91760fed7e67dab1a8abe7448d481564ca060d8a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-26T08:15:18.000Z | 2020-01-26T08:15:18.000Z | Exchange/ExchangeServer2013/spam-quarantine-exchange-2013-help.md | sorinescu-com/OfficeDocs-Exchange | 91760fed7e67dab1a8abe7448d481564ca060d8a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-27T08:03:30.000Z | 2020-01-27T08:03:30.000Z | Exchange/ExchangeServer2013/spam-quarantine-exchange-2013-help.md | sorinescu-com/OfficeDocs-Exchange | 91760fed7e67dab1a8abe7448d481564ca060d8a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Spam quarantine: Exchange 2013 Help'
TOCTitle: Spam quarantine
ms:assetid: 4535496f-de6a-43df-8e53-c9a97f65cccc
ms:mtpsurl: https://technet.microsoft.com/library/Aa997692(v=EXCHG.150)
ms:contentKeyID: 49248678
ms.reviewer:
manager: serdars
ms.author: v-mapenn
author: mattpennathe3rd
mtps_version: v=EXCHG.150
---
# Spam quarantine
_**Applies to:** Exchange Server 2013_
Many organizations are bound by legal or regulatory requirements to preserve or deliver all legitimate email messages. In Microsoft Exchange Server 2013, spam quarantine is a feature of the Content Filter agent that reduces the risk of losing legitimate messages. Spam quarantine provides a temporary storage location for messages identified as spam that shouldn't be delivered to a user mailbox inside the organization.
Messages identified by the Content Filter agent as spam are wrapped in a non-delivery report (NDR) and delivered to a spam quarantine mailbox inside the organization. You can manage messages delivered to the spam quarantine mailbox and take appropriate actions. For example, you can delete messages or let messages flagged as false positives in anti-spam filtering be routed to their intended recipients. In addition, you can configure the spam quarantine mailbox to automatically delete messages after a designated time period.
## Spam confidence level
When an external user sends email messages to a server running Exchange that runs the anti-spam features, the anti-spam features cumulatively evaluate characteristics of the messages and act as follows:
- Those messages suspected to be spam are filtered out.
- A rating is assigned to messages based on the probability that a message is spam. This rating is stored with the message as a message property called the spam confidence level (SCL) rating.
Spam quarantine uses the SCL rating to determine whether mail has a high probability of being spam. The SCL rating is a numeric value from 0 through 9, where 0 is considered less likely to be spam, and 9 is considered most likely to be spam.
You can configure mail that has a certain SCL rating to be deleted, rejected, or quarantined. The rating that triggers any of these actions is referred to as the *SCL quarantine threshold*. Within content filtering, you can configure the Content Filter agent to base its actions on the SCL quarantine threshold. For example, you can set the following conditions:
- SCL delete threshold is set to 8.
- SCL reject threshold is set to 7.
- SCL quarantine threshold is set to 6.
- SCL Junk Email folder threshold is set to 5.
Based on the preceding SCL thresholds, all email with an SCL of 6 will be delivered to the spam quarantine mailbox.
For more information, see [Manage content filtering](manage-content-filtering-exchange-2013-help.md).
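As an illustration only (this is not Exchange code, and the function name `actionForScl` is invented for this sketch), the precedence of the example thresholds above can be expressed like this:

```javascript
// Hypothetical sketch of how SCL thresholds select an action.
// Higher-severity thresholds are checked first, so a message with
// an SCL of 8 is deleted even though it also exceeds the reject,
// quarantine, and Junk Email thresholds.
function actionForScl(scl) {
  const thresholds = { delete: 8, reject: 7, quarantine: 6, junk: 5 };
  if (scl >= thresholds.delete) return 'delete';
  if (scl >= thresholds.reject) return 'reject';
  if (scl >= thresholds.quarantine) return 'quarantine'; // spam quarantine mailbox
  if (scl >= thresholds.junk) return 'junk';             // Junk Email folder
  return 'deliver';                                      // recipient's Inbox
}

console.log(actionForScl(6)); // quarantine
```

Because the most severe threshold is evaluated first, each message ends up with exactly one action.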
## Spam quarantine
When messages are received by the Exchange server that's running all default anti-spam agents, the content filter is applied as follows:
- If the SCL rating is greater than or equal to the SCL quarantine threshold but less than either the SCL delete threshold or SCL reject threshold, the message goes to the spam quarantine mailbox.
- If the SCL rating is lower than the spam quarantine threshold, it's delivered to the recipient's Inbox.
The message administrator uses Microsoft Outlook to monitor the spam quarantine mailbox for false positives. If a false positive is found, the administrator can send the message to the recipient's mailbox.
The message administrator can review the anti-spam stamps if either of the following conditions is true:
- Too many false positives are filtered into the spam quarantine mailbox.
- Not enough spam is being rejected or deleted.
For more information, see [Anti-spam stamps](anti-spam-stamps-exchange-2013-help.md).
You can then adjust the SCL settings to more accurately filter the spam coming into the organization. For more information, see [Spam Confidence Level Threshold](spam-confidence-level-threshold-exchange-2013-help.md).
To use spam quarantine, you need to follow these steps:
1. Verify content filtering is enabled.
2. Create a dedicated mailbox for spam quarantine.
3. Specify the spam quarantine mailbox.
4. Configure the SCL quarantine threshold.
5. Manage the spam quarantine mailbox.
6. Adjust the SCL quarantine threshold as needed.
For detailed instructions, see [Configure a spam quarantine mailbox](configure-a-spam-quarantine-mailbox-exchange-2013-help.md).
| 55.185185 | 528 | 0.802461 | eng_Latn | 0.998626 |
aa66287665a811c8c704c70ae685eea0c5302cda | 2,756 | md | Markdown | docs-src/pages/requests.md | adamjarret/s3-publish | 46824104608bffdb6b3b6a28f9571bb8a75c6ef8 | [
"MIT"
] | 6 | 2017-11-16T05:15:31.000Z | 2021-01-18T10:14:55.000Z | docs-src/pages/requests.md | adamjarret/s3-publish | 46824104608bffdb6b3b6a28f9571bb8a75c6ef8 | [
"MIT"
] | 13 | 2017-11-16T05:08:42.000Z | 2022-01-22T13:01:48.000Z | docs-src/pages/requests.md | adamjarret/s3-publish | 46824104608bffdb6b3b6a28f9571bb8a75c6ef8 | [
"MIT"
] | 1 | 2017-11-16T04:56:18.000Z | 2017-11-16T04:56:18.000Z | > When the origin provider and the target provider support the same protocol (both local or both S3),
> `copyFile` operations will be performed instead of `putFile` operations (to facilitate direct S3 to S3 transfers).
Both {@linkcode FSProvider} and {@linkcode S3Provider} can be configured with a `delegate`
object that allows control over the parameters sent with various requests.
See {@linkcode FSProviderDelegate} and {@linkcode S3ProviderDelegate} for supported parameters.
To provide options that are passed to the AWS S3 client constructor, define the `client` object in {@linkcode S3ProviderOptions}.
## CacheControl Example
The following example will set max age to 900 for HTML files and 777600000 for all other files
when uploaded to the S3 target.
```js
// RegEx pattern that matches file Keys ending in .html
const reHtml = /\.html$/;
module.exports = {
origin: {
root: '.'
},
target: {
root: 's3://my-bucket',
delegate: {
// Provide a putFileParams implementation to modify the parameters that will be sent with the PUT request.
// The following implementation will set the CacheControl param to 900 for HTML files and 777600000 for all other files.
putFileParams: (file, params) => {
params.CacheControl = file.Key.match(reHtml)
? 'max-age=900'
: 'max-age=777600000';
return Promise.resolve(params);
}
}
},
schemaVersion: 2
};
```
## gzip Example
The following example will set the `ContentEncoding` header to 'gzip' for files with the `.gz` extension.
The files will also be "re-named" to remove the `.gz` extension when uploaded to the S3 target.
```js
// RegEx pattern that matches file Keys ending in .gz
const reGzip = /\.gz$/;
module.exports = {
origin: {
root: '.'
},
target: {
root: 's3://my-bucket',
delegate: {
// Provide a targetFileKey implementation to map origin files to differently named target files.
// This is used when comparing files and also has the effect of "re-naming" the file when it is uploaded.
// The following implementation will remove .gz file extension (if present)
targetFileKey: (originFile) => Promise.resolve(originFile.Key.replace(reGzip, '')),
// Provide a putFileParams implementation to modify the parameters that will be sent with the PUT request.
// The following implementation will set the ContentEncoding param if the origin file Key matches,
// otherwise the default params are returned unchanged.
putFileParams: (originFile, params) => {
if (reGzip.exec(originFile.Key)) {
params.ContentEncoding = 'gzip';
}
return Promise.resolve(params);
}
}
},
schemaVersion: 2
};
```
| 33.204819 | 129 | 0.695573 | eng_Latn | 0.988363 |
aa66768a40cd3ac93fa45e5d67babcbd8c1aca96 | 91 | md | Markdown | README.md | panosen/panosen-hash | 43e511d4a5dab1312da4c4de6d6870e27b0b7a65 | [
"MIT"
] | null | null | null | README.md | panosen/panosen-hash | 43e511d4a5dab1312da4c4de6d6870e27b0b7a65 | [
"MIT"
] | null | null | null | README.md | panosen/panosen-hash | 43e511d4a5dab1312da4c4de6d6870e27b0b7a65 | [
"MIT"
] | null | null | null | # panosen-hash
Panosen Hash Helper
This project is obsolete; use Panosen.Toolkit instead.
| 18.2 | 54 | 0.802198 | eng_Latn | 0.927288 |
aa6690f6441f2fd0b31822ba122941f39005196d | 18 | md | Markdown | README.md | liushaopeng0606/SPJamesLib | d2cda53c7c9f4e721edcd72ed35055fce842c2f3 | [
"MIT"
] | null | null | null | README.md | liushaopeng0606/SPJamesLib | d2cda53c7c9f4e721edcd72ed35055fce842c2f3 | [
"MIT"
] | null | null | null | README.md | liushaopeng0606/SPJamesLib | d2cda53c7c9f4e721edcd72ed35055fce842c2f3 | [
"MIT"
] | null | null | null | # SPJamesLib
Test repository
| 6 | 12 | 0.777778 | yue_Hant | 0.771665 |
aa66bc4b86be8442dc3f9d2aab2891e13e4a5c01 | 45,162 | md | Markdown | _pages/media.md | RotatingFans/docs-rewrite | 772ba1f49462871653535912ca9b27cc390bf61d | [
"Apache-2.0"
] | null | null | null | _pages/media.md | RotatingFans/docs-rewrite | 772ba1f49462871653535912ca9b27cc390bf61d | [
"Apache-2.0"
] | null | null | null | _pages/media.md | RotatingFans/docs-rewrite | 772ba1f49462871653535912ca9b27cc390bf61d | [
"Apache-2.0"
] | 1 | 2021-03-06T20:56:43.000Z | 2021-03-06T20:56:43.000Z | ---
ID: 35830
post_title: Mark II Media Kit
author: Kathy Reid
post_excerpt: ""
layout: page
permalink: http://mycroft.ai/media/
published: true
post_date: 2018-02-15 12:22:46
---
[vc_row type="full_width_background" full_screen_row_position="middle" bg_color="#22a7f0" scene_position="center" text_color="light" text_align="left" top_padding="5%" bottom_padding="5%" overlay_strength="0.3"][vc_column column_padding="padding-2-percent" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_column_text]
<h1>Mycroft Mark II</h1>
<h3>Media Kit</h3>
[/vc_column_text][divider line_type="No Line" custom_height="20"][vc_column_text]
<h2>LIVE on Indiegogo!</h2>
<h4><span style="font-weight: 400;">The Mark II is the second smart speaker from Mycroft AI. Mycroft is an open voice assistant designed with privacy and user agency in mind.</span></h4>
[/vc_column_text][divider line_type="No Line" custom_height="30"][nectar_btn size="large" open_new_tab="true" button_style="regular" button_color_2="Extra-Color-3" icon_family="none" url="https://www.indiegogo.com/projects/mycroft-mark-ii-the-open-voice-assistant/x/18123242#/mycroft-home-page" text="Go To Indiegogo Page"][divider line_type="No Line" custom_height="30"][vc_column_text]<span style="color: #ffffff;"><b>In this Media Kit you’ll find:</b></span>
<ul>
<li><span style="color: #ffffff;"><a style="color: #ffffff;" href="#media-contact"><span style="font-weight: 400;">Media Contact</span></a></span></li>
<li><span style="color: #ffffff;"><a style="color: #ffffff;" href="#images-and-assets"><span style="font-weight: 400;">Images and Assets</span></a></span></li>
<li><span style="color: #ffffff;"><a style="color: #ffffff;" href="#why-were-different"><span style="font-weight: 400;">Why we're Different</span></a></span></li>
<li><span style="color: #ffffff;"><a style="color: #ffffff;" href="#milestones-and-traction"><span style="font-weight: 400;">Milestones and Traction</span></a></span></li>
<li><span style="color: #ffffff;"><a style="color: #ffffff;" href="#leadership"><span style="font-weight: 400;">Leadership</span></a></span></li>
<li><span style="color: #ffffff;"><a style="color: #ffffff;" href="#mycroft-launches-an-open-alternative-to-alexa-and-assistant"><span style="font-weight: 400;">Mycroft Launches an Open Alternative to Alexa and Assistant</span></a></span></li>
<li><span style="color: #ffffff;"><a style="color: #ffffff;" href="#mycroft-in-the-news"><span style="font-weight: 400;">Mycroft in the News</span></a></span></li>
</ul>
[/vc_column_text][/vc_column][/vc_row][vc_row type="full_width_content" full_screen_row_position="middle" equal_height="yes" content_placement="middle" vertically_center_columns="true" scene_position="center" text_color="dark" text_align="left" top_padding="2%" bottom_padding="2%" overlay_strength="0.3"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="2/3" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid" offset="vc_col-xs-12"][image_with_animation image_url="36042" alignment="center" animation="Fade In From Left" box_shadow="large_depth" max_width="125%"][/vc_column][vc_column column_padding="padding-3-percent" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/3" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid" offset="vc_col-xs-12"][vc_column_text]
<h2 id="#mediakit">Media Contact</h2>
<h4><span style="font-weight: 400;">For more information or any press inquiries, please contact:</span></h4>
<h3><strong> Alyx Horace at</strong></h3>
<h3><strong><a href="mailto:media@mycroft.ai">media@mycroft.ai</a> </strong></h3>
<h3><strong>or call (505) 470-1103</strong></h3>
<span style="font-weight: 400;">To learn more, please visit </span><a href="https://mycroft.ai/"><span style="font-weight: 400;">mycroft.ai</span></a><span style="font-weight: 400;"> and be sure to follow our socials!</span>
<ul>
<li><a href="http://www.facebook.com/aiforeveryone/" target="_blank" rel="noopener"><span style="font-weight: 400;">Facebook</span></a></li>
<li><a href="https://twitter.com/mycroft_ai" target="_blank" rel="noopener"><span style="font-weight: 400;">Twitter</span></a></li>
<li><a href="https://www.instagram.com/mycroft_ai/" target="_blank" rel="noopener"><span style="font-weight: 400;">Instagram</span></a></li>
<li><a href="https://www.linkedin.com/company/10255844/" target="_blank" rel="noopener"><span style="font-weight: 400;">LinkedIn</span></a></li>
<li><a href="https://www.youtube.com/channel/UC1dlmB1lup9RwFQBSGnhA-g" target="_blank" rel="noopener"><span style="font-weight: 400;">Youtube</span></a></li>
<li><a href="https://www.reddit.com/r/Mycroftai/" target="_blank" rel="noopener"><span style="font-weight: 400;">Reddit</span></a></li>
<li><a href="https://plus.google.com/+MycroftAIForEveryone" target="_blank" rel="noopener"><span style="font-weight: 400;">G+</span></a></li>
<li><a href="https://www.crunchbase.com/organization/mycroft-ai-inc" target="_blank" rel="noopener"><span style="font-weight: 400;">Crunchbase</span></a></li>
<li><a href="https://chat.mycroft.ai" target="_blank" rel="noopener"><span style="font-weight: 400;">Mycroft Community Chat </span></a></li>
<li><a href="https://community.mycroft.ai/" target="_blank" rel="noopener"><span style="font-weight: 400;">Mycroft Forum</span></a></li>
</ul>
[/vc_column_text][/vc_column][/vc_row][vc_row type="full_width_background" full_screen_row_position="middle" equal_height="yes" bg_color="#f4f4f4" scene_position="center" text_color="dark" text_align="left" top_padding="10%" bottom_padding="10%" overlay_strength="0.3"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_row_inner equal_height="yes" column_margin="default" text_align="left"][vc_column_inner column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" width="1/1" column_border_width="none" column_border_style="solid"][vc_column_text]
<h1>Images and Assets</h1>
We've provided the following images and assets for your use. Please don't hesitate to contact us if what you're looking for isn't here.[/vc_column_text][/vc_column_inner][/vc_row_inner][vc_row_inner column_margin="none" text_align="left"][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34901" alignment="center" animation="Fade In From Bottom" img_link_target="_blank" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1bec7NpAUo3a6_-J67myOdThJgA3gsPwu/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34694" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/open?id=13rl-C4ZIckHi9lo6pG2KllYfeQzdOkBW"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34696" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1CBlsyRWGTuaomvZ01qiVh8a1E9_0bCoP/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34697" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1tZ-CPdhjj-EEIU_7hIlxJUbfQ0CUyIjE/view?usp=sharing"][/vc_column_inner][vc_column_inner 
column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34695" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/open?id=1c25pvBvJ278Eifou5CSiPqZml49X4N5p"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34725" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1dq0kFtN5s83VS9Jnwilm70W_UCga2Hu5/view?usp=sharing"][/vc_column_inner][/vc_row_inner][vc_row_inner column_margin="none" text_align="left"][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34720" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1dIC8G5aQwRjhsrtNPTdblBnpFRCv5ZHO/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34721" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/open?id=1dfbpHgwlGzocoTHNmjYRV3OotrcnlH9f"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34745" alignment="center" animation="Fade In From Bottom" box_shadow="none" 
max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1f3gmMTbqVDxI0JKmsg26_zWBr47LYdrS/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34748" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1fG7jfx8i0xwrjGpjbyuNZ7oHHKIiAo9b/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34750" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1fjoSy2MB_W4zina9kBskX-oYzsZKFB7Y/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34752" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1flQDndfUJKhERNGI7TQnNGJh5I1W2mdQ/view?usp=sharing"][/vc_column_inner][/vc_row_inner][vc_row_inner equal_height="yes" column_margin="default" text_align="left"][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34867" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/16OAvvGJr51SZ1kRR4FQA-2iVFSzRe3Y5/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" 
column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34869" alignment="center" animation="Fade In From Bottom" box_shadow="none" max_width="100%" img_link="https://drive.google.com/open?id=16XS5vwsQ4OYSbfLq8KHUyUqHWz4If3EA"][/vc_column_inner][vc_column_inner column_padding="padding-1-percent" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="34892" alignment="center" animation="Fade In From Bottom" img_link_target="_blank" box_shadow="none" max_width="100%" img_link="https://drive.google.com/a/mycroft.ai/file/d/1ZYwQBMdZURy6PNQKonCDCuYGJ5z2R25P/view?usp=sharing"][/vc_column_inner][vc_column_inner column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" width="1/6" column_border_width="none" column_border_style="solid"][image_with_animation image_url="35348" alignment="center" animation="Fade In From Bottom" img_link_large="yes" box_shadow="none" max_width="100%"][/vc_column_inner][vc_column_inner column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" width="2/6" column_border_width="none" column_border_style="solid"][vc_video link="https://youtu.be/flFmIje04Zk"][/vc_column_inner][/vc_row_inner][/vc_column][/vc_row][vc_row type="full_width_background" full_screen_row_position="middle" bg_color="#2c3e50" scene_position="center" text_color="light" text_align="left" top_padding="5%" bottom_padding="5%" overlay_strength="0.3"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_row_inner equal_height="yes" content_placement="middle" 
column_margin="default" text_align="left"][vc_column_inner column_padding="padding-5-percent" column_padding_position="all" background_color_opacity="1" width="1/1" column_border_width="none" column_border_style="solid"][vc_column_text]
<h2 id="#history">Why We're Different</h2>
<span style="font-weight: 400;">Voice is coming to every device, every platform and every household globally. The technology is becoming a critical part of nearly every technology stack, and has a growing presence within the home as an AI assistant.</span>
<span style="font-weight: 400;">This calls for voice to be</span><b> flexible, customizable, vendor neutral, and privacy focused</b><span style="font-weight: 400;">; things that the proprietary voice assistants on the market today don’t offer.</span>
<span style="font-weight: 400;">Mycroft’s open platform differs from products like Alexa or Assistant in four important ways: data privacy, customization, user agency, and open data. </span>
<span style="font-weight: 400;"><strong>Privacy</strong> – Mycroft’s platform provides users with privacy by deleting queries in real time. User data is not mined, aggregated, processed or sold. </span>
<span style="font-weight: 400;"><strong>Customization</strong> – The agent can be customized by changing the wake word, voice and even the user experience.</span>
<span style="font-weight: 400;"><strong>User Agency</strong> – Mycroft is also the first assistant that embraces the concept of user agency. Other voice assistants don’t represent users; they represent the companies that own them. Mycroft is different; it represents end users. </span>
<span style="font-weight: 400;"><strong>Open Data</strong> – Mycroft is publishing data from users who have decided to opt-in. This open data set is being used to improve wakeword spotting, speech to text transcription, natural language understanding and speech synthesis.</span>
<span style="font-weight: 400;">Mycroft is a full-featured voice platform that can be deployed anywhere, or used in conjunction with our devices. Our next-generation hardware device, the Mark II, is a wireless smart speaker that plays music, sets timers, accesses calendar events, answers general-knowledge questions, and offers everything else the public has come to expect from a state-of-the-art voice assistant.</span>
<h2>Mycroft AI Crowdfunding History</h2>
<span style="font-weight: 400;">In the fall of 2015 we went to Kickstarter and Indiegogo to see if there was a market for our idea. We raised $193,000 USD and turned Mycroft from a project into a company. We fulfilled all crowdfunding orders in July of 2017 - putting Mark I units in the hands of 1,500 backers. Between these two dates, we’ve progressed rapidly, growing our team from 9 to 20, and have watched voice transform from a niche technology into the next big thing.</span>
<span style="font-weight: 400;">Our first crowdfunding campaign sold an advanced prototype, Mark I. Still an open source, open hardware voice assistant, it was targeted at makers and hackers. The Mark II is for all consumers, </span><b>flexible, customizable, open, and privacy focused</b><span style="font-weight: 400;">; something the current technologies don’t offer.</span>
<h2>About this campaign</h2>
<span style="font-weight: 400;">We learned a lot of lessons from the production of Mark I. Combined with feedback from our community and customers we decided we could do much better. It’s time for an upgrade.</span>
<span style="font-weight: 400;">Crowdfunding is a great way to build community and estimate demand. Funds from the campaigns will help us to continue to democratize voice, build open data sets and advance the state of AI. With vendor relationships in place and a much more comprehensive picture of international shipping, tax structures, and regulation we are confident that the Mark II will ship by the end of the year.</span>[/vc_column_text][/vc_column_inner][/vc_row_inner][/vc_column][/vc_row][vc_row type="full_width_content" full_screen_row_position="middle" vertically_center_columns="true" bg_color="#ffffff" scene_position="center" text_color="light" text_align="right" id="reasons" overlay_strength="0.3"][vc_column column_padding="padding-3-percent" column_padding_position="all" background_color="#f2f2f2" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/2" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_row_inner column_margin="default" text_align="left"][vc_column_inner column_padding="no-extra-padding" column_padding_position="all" centered_text="true" background_color_opacity="1" font_color="#36d7b7" width="1/2" column_border_width="none" column_border_style="solid"][milestone heading_inherit="default" symbol_position="after" subject_padding="0" color="Accent-Color" effect="motion_blur" symbol_alignment="Superscript" milestone_alignment="default" number="20k" symbol="+" number_font_size="158" symbol_font_size="57"][/vc_column_inner][vc_column_inner column_padding="padding-2-percent" column_padding_position="left" background_color_opacity="1" width="1/2" column_border_width="none" column_border_style="solid"][divider line_type="No Line" custom_height="25"][vc_column_text]
<h3 style="text-align: left;"><span style="color: #333333;">Mycroft users in over 50 countries.</span></h3>
[/vc_column_text][/vc_column_inner][/vc_row_inner][/vc_column][vc_column column_padding="padding-10-percent" column_padding_position="all" background_color="#22a7f0" background_color_opacity="0.1" background_hover_color_opacity="1" background_image="32954" column_shadow="none" width="1/4" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][/vc_column][vc_column column_padding="padding-10-percent" column_padding_position="all" background_color="#22a7f0" background_color_opacity="0.1" background_hover_color_opacity="1" background_image="30654" column_shadow="none" width="1/4" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][/vc_column][/vc_row][vc_row type="full_width_background" full_screen_row_position="middle" equal_height="yes" content_placement="middle" bg_color="#ffffff" scene_position="center" text_color="dark" text_align="left" top_padding="5%" bottom_padding="5%" id="steps" row_name="Sign Up Steps" overlay_strength="0.3"][vc_column column_padding="padding-3-percent" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid" offset="vc_col-xs-12"][vc_column_text]
<h3>Milestones and Traction</h3>
[/vc_column_text][nectar_icon_list animate="true" color="extra-color-gradient-2" icon_size="large" icon_style="border"][nectar_icon_list_item icon_type="icon" title="List Item" id="1485457205085-0e140-9ac9" header="Milestones and Traction" text="The success of our first Kickstarter and Indiegogo campaigns brought the realization that there was a significant market for an independent voice assistant--an AI for everyone.
Shortly after our first Kickstarter we were invited to join Techstars. After the program we shipped our first developer kits, and began growing an open source community. The strong demand from Kickstarter coupled with a solid play for enterprise sales allowed us to finish the year with $350,000 in financing and a $50,000 grant from Launch KC.
" tab_id="1515500591087-7" icon_fontawesome="fa fa-history"] [/nectar_icon_list_item][nectar_icon_list_item icon_type="icon" title="List Item" id="1485457205092-10e140-9ac9" header="Customers" text="2017 brought an ever growing community, customers, and pivotal technical milestones.
" tab_id="1515500591948-3" icon_fontawesome="fa fa-users"] [/nectar_icon_list_item][nectar_icon_list_item icon_type="icon" title="List Item" id="1516135692042-0-3" header="Investment" text="Early in 2017 we accepted a strategic investment from Jaguar Land Rover and began working with their team in Portland on a test integration with the FTYPE sports car. Shortly thereafter we joined the prestigious 500 Startups program in San Francisco. Recently we completed negotiations with our first major corporate customer and hope to make a public announcement early next year. In between this, we oversubscribed our $750,000 Series Seed round, won another $50,000 grant at Techweek Nationals, and another $15,000 at Hello Tomorrow in Paris." tab_id="1515500591948-3" icon_fontawesome="fa fa-money"] [/nectar_icon_list_item][nectar_icon_list_item icon_type="icon" title="List Item" id="1485457205111-2e140-9ac9" header="Community Growth" text="In July we fulfilled our crowdfunding orders, putting over 1,500 units into the hands of makers and hackers in 56 countries. We’ve continued to add thousands of users per month, and seen significant month over month increases in developer contributions to the Mycroft stack. We’ve had upwards of 20 releases and over 100 skills." tab_id="1515500592596-3" icon_fontawesome="fa fa-signal"][/nectar_icon_list_item][nectar_icon_list_item icon_type="icon" title="List Item" id="1515501968323-0-2" tab_id="1515501968326-1" icon_fontawesome="fa fa-cogs" header="Technical Wins" text="Our technical team got several innovative machine learning solutions in place including a state of the art speech to text engine – DeepSpeech – which we built in partnership with Mozilla. In December we hit our final machine learning task; bringing speech synthesis in-house. This makes us one of 10 companies globally that has a complete intelligent assistant stack. 
The others are Google, Facebook, Apple, Amazon, Tencent, Baidu, Samsung, Microsoft, and Hound."][/nectar_icon_list_item][nectar_icon_list_item icon_type="icon" title="List Item" id="1515502133177-0-7" tab_id="1515502133181-5" icon_fontawesome="fa fa-repeat" header="Going Forward" text="Over the next year, our team will work to make Mycroft the go-to voice technology in open source. When developers and enterprises think “open source and voice” Mycroft’s performance will earn the technology the top spot. We’re working to simplify developer onboarding - making it extremely easy for developers to install, use, and modify the technology. Our goal is a developer experience that is pleasant, simple, and satisfying."][/nectar_icon_list_item][nectar_icon_list_item icon_type="icon" title="List Item" id="1516136159679-0-0" tab_id="1515502133181-5" icon_fontawesome="fa fa-road" header="Technical Milestones" text="
<ul>
<li><strong>Wake Word</strong>: Continue to improve the accuracy of the Precise Wake Word software, and pre-train several Wake Words.</li>
<li><strong>Speech to Text</strong>: Continue to partner with Mozilla with a view to implementing DeepSpeech.</li>
<li><strong>Intents and Skills</strong>: Continue nurturing the Mycroft developer community, providing on-ramps and tools to allow developers to easily build and share skills for Mycroft. We’re also looking to make it easier to integrate Mycroft with third-party services using common tools like OAuth.</li>
<li><strong>Text to Speech</strong>: Continue improving the Mimic open source text to speech software, and make more voices available.</li>
<li><strong>Personality</strong>: Allow customization of the Persona that Mycroft adopts for interaction style with a user - for example cheeky, serious or sarcastic.</li>
</ul>
"][/nectar_icon_list_item][/nectar_icon_list][/vc_column][/vc_row][vc_row type="full_width_content" full_screen_row_position="middle" bg_color="#ffffff" scene_position="center" text_color="dark" text_align="left" id="team" overlay_strength="0.3"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_column_text]
<h2 style="text-align: center;">Leadership</h2>
[/vc_column_text][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][team_member image_url="35356" bio_image_url="35348" team_memeber_style="bio_fullscreen" name="Joshua Montgomery" job_position="CEO" team_member_bio="One of the few entrepreneurs in the US to build a gigabit fiber network from scratch, Joshua is a serial entrepreneur with more than 15 years of experience. The ISP he built, Wicked Broadband, featured in Wired and Forbes is a testament of net neutrality support and underlying principles long before they became a buzz phrase. Joshua is an Air Force Captain, an aerospace engineer, and a hacker at heart. He oversees all aspects of Mycroft and is a firm supporter of the open source movement committed to an open future for AI. "][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][team_member image_url="31052" bio_image_url="31052" team_memeber_style="bio_fullscreen" name="Nate Tomasi" job_position="COO" team_member_bio="Nate Tomasi brings two masters degrees and years of experience building and scaling businesses. He’s worked closely with our CTO at Rythm Engineering helping to scale and monetize an AI traffic optimization company.
Nate is responsible for the Marketing, Legal/HR, Finance/Accounting, Manufacturing, Customer Support, Community Relations, and General Operations Teams.
His passions come through in his work by keeping a customer service driven philosophy at the forefront of everything he does, from working with team members to helping build our community."][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][team_member image_url="1971" bio_image_url="1971" team_memeber_style="bio_fullscreen" name="Steve Penrod" job_position="CTO" team_member_bio="Steve brings over 20 years of product development experience to Mycroft. He led technical teams at companies like Autodesk to build cutting-edge technology. Before Mycroft, Steve built “Christopher,” a technology to control the home by voice.
Steve notes “I'm thrilled every time I open the front door and Christopher (now a part of Mycroft) welcomes me home by name, turns lights on and reminds me of an upcoming meeting from my calendar.”"][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][team_member image_url="243" bio_image_url="243" team_memeber_style="bio_fullscreen" name="Kris Adair" job_position="Co-Founder" team_member_bio="Kris Adair is the co-founder of Mycroft AI, an experienced entrepreneur, and serves as the marketing director at Mycroft. She led the most successful Kickstarter ever to come out of Kansas, a project that launched Mycroft from Makerspace to company. She also serves as the executive director of the Lawrence Center for Entrepreneurship - a makerspace, co-working environment and data center."][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][team_member image_url="34655" bio_image_url="34649" team_memeber_style="bio_fullscreen" name="Derick Schweppe" job_position="CDO" team_member_bio="Derick has been with Mycroft since the launch of the first Kickstarter. He has designed everything from consumer electronics, housewares, soft goods, tools, and aircraft interiors. Before joining Mycroft he worked for clients like Coleman, Learjet, and Copco. 
Derick is responsible for all design disciplines within the company, leading the team that works on UI, UX, and hardware design."][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][team_member image_url="34518" bio_image_url="34518" team_memeber_style="bio_fullscreen" name="You!" job_position="Various roles - Lead Developer, Machine Learning, Natural Language Understanding, Marcomms and more" team_member_bio="Got .py? Been learning TensorFlow? Comfortable on a Debian environment like Ubuntu, Jessie or Stretch for RPi? git like a boss? Good with other humans? Have some leadership chops and ability to tackle things like technical mentoring, roadmaps and product management?
Do you understand machine learning, natural language understanding? CMU Flite, Sphinx?
Active Campaign, WordPress, or Marketing and Communications?
Deps met?
We'd like to hear from you. Reach out through our Angel List page.
https://angel.co/mycroft-a-i/jobs
"][/vc_column][/vc_row][vc_row type="full_width_background" full_screen_row_position="middle" bg_color="#e4f1fe" scene_position="center" text_color="dark" text_align="left" top_padding="5%" bottom_padding="5%" overlay_strength="0.3"][vc_column column_padding="padding-5-percent" column_padding_position="left-right" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_row_inner column_margin="default" top_padding="5%" bottom_padding="5%" text_align="left"][vc_column_inner enable_animation="true" animation="fade-in-from-bottom" column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" width="1/1" column_border_width="none" column_border_style="solid"][vc_column_text]
<h2><span style="font-weight: 400;">Mycroft Launches an Open Alternative to Alexa and Assistant</span></h2>
<h3>Want a voice enabled speaker, but concerned with transparency and privacy? Mycroft provides the answer.</h3>
<span style="font-weight: 400;">Palo Alto, CA, January 25, 2018 – Mycroft AI has announced the pending release of an open source equivalent to Amazon Echo and Google Home. The Mark II smart speaker is powered by the Mycroft intelligent assistant, an open source, full-featured voice platform.</span>
<span style="font-weight: 400;">The company’s latest device, Mark II, has been designed to provide an open alternative to Google Home or Amazon Echo. The Mark II is a consumer-ready smart speaker with a built-in screen, optional camera, and state-of-the-art microphone array with noise cancellation and beamforming. These improvements over the Mark I improve wakeword spotting and allow user barge-in. </span>
<span style="font-weight: 400;">Mycroft’s open platform differs from products like Alexa or Siri in four important ways: data privacy, customization, user agency, and open data. Mycroft’s platform provides users with privacy by deleting queries as they come in. The agent can also be customized by changing the wake word, voice and even the user experience. Mycroft is also the first assistant that embraces the concept of user agency--other voice assistants don’t represent users, they represent the companies that own them. Mycroft is different: it represents end users. Finally, Mycroft is using data from users who have opted in to publish an open data set that can be used to improve wakeword spotting, speech to text transcription, natural language understanding and speech synthesis.</span>
<span style="font-weight: 400;">The Mark II is also completely open. Open hardware, open software and open data. Any company interested in deploying voice enabled products can use the design as a baseline. Mycroft has already demonstrated its technology in a Jaguar F-Type sports car and is working with a number of major brands to deploy custom voice solutions.</span>
<blockquote><span style="font-weight: 400;">“The Mark II will be a game changer,” says Mycroft CEO Joshua Montgomery, “deploying an intelligent personal assistant will no longer be a hundred million dollar proposition. Anyone with an internet connection will be able to have a custom voice agent and, importantly, they won’t have to give up their privacy to do it.”</span></blockquote>
<span style="font-weight: 400;">Amazon sold an estimated 5.2 million Echo units in 2016, and the voice assistant was the best selling product across their e-commerce platform in 2017. Soon voice will be part of every platform, every device and every household globally. The market is expected to reach $25 billion by 2020, but it lacks an open, neutral, independent, and privacy-minded alternative. </span><span style="font-weight: 400;">
</span>
<blockquote><span style="font-weight: 400;">“We see voice as an important part of every human machine interface going forward,” said founder Joshua Montgomery, “our job is to make sure that everyone has access to the technology. We are democratizing voice and creating a level playing field for industry.”</span></blockquote>
<span style="font-weight: 400;">Founded in 2015, Mycroft is the open source answer to Siri or Alexa. Mycroft has strong relationships with the open source community and is rolling out across all of the major open operating systems in 2018. Mycroft has received financial investments from Jaguar Land Rover, 500 Startups, Techstars and Kickstarter backers in more than 56 countries and 38 states.</span>[/vc_column_text][/vc_column_inner][/vc_row_inner][/vc_column][/vc_row][vc_row type="full_width_background" full_screen_row_position="middle" scene_position="center" text_color="dark" text_align="left" overlay_strength="0.3"][vc_column enable_animation="true" animation="reveal-from-bottom" column_padding="padding-5-percent" column_padding_position="all" background_color="#ffffff" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_row_inner column_margin="default" text_align="left"][vc_column_inner enable_animation="true" animation="fade-in-from-bottom" column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" width="1/1" column_border_width="none" column_border_style="solid"][vc_column_text]
<h1><span style="font-weight: 400;">Mycroft in the News</span><span style="font-weight: 400;">
</span></h1>
<h3>General coverage of Mycroft</h3>
<ul>
<li><a href="http://money.cnn.com/gallery/technology/2018/01/13/ces-2018-gadgets/12.html" target="_blank" rel="noopener">CNN Tech</a></li>
<li><a href="https://www.popsci.com/ultimate-diy-ai" target="_blank" rel="noopener"><span style="font-weight: 400;">Popular Science</span></a></li>
<li><a href="https://www.forbes.com/sites/janakirammsv/2015/08/20/meet-mycroft-the-open-source-alternative-to-amazon-echo/#1d6882c362a2" target="_blank" rel="noopener"><span style="font-weight: 400;">Forbes</span></a></li>
<li><a href="https://www.technologyreview.com/s/607892/an-open-source-and-cute-alternative-to-amazon-echo/" target="_blank" rel="noopener"><span style="font-weight: 400;">MIT Technology Review</span></a></li>
<li><a href="http://www.jupiterbroadcasting.com/86422/vulkan-the-metal-slayer-lup-105/" target="_blank" rel="noopener"><span style="font-weight: 400;">Jupiter Broadcasting</span></a></li>
<li><a href="http://www.cnet.com/products/mycroft-smart-home-ai-platform/" target="_blank" rel="noopener"><span style="font-weight: 400;">CNet</span></a></li>
<li><a href="http://www.zdnet.com/article/meet-mycroft-the-open-source-ai-who-wants-to-rival-siri-cortana-and-alexa/" target="_blank" rel="noopener"><span style="font-weight: 400;">ZDNet</span></a></li>
<li><a href="https://www.inc.com/matt-hunckler/these-5-artificial-intelligence-startups-are-trans.html" target="_blank" rel="noopener"><span style="font-weight: 400;">Inc.</span></a></li>
<li><a href="https://www.cio.com/article/3017983/linux/2015s-most-exciting-linux-devices.html#slide2" target="_blank" rel="noopener"><span style="font-weight: 400;">CIO</span></a></li>
<li><a href="http://news.softpedia.com/news/mycroft-is-an-ai-for-your-home-powered-by-raspberry-pi-2-and-ubuntu-snappy-489280.shtml" target="_blank" rel="noopener"><span style="font-weight: 400;">Softpedia</span></a></li>
<li><a href="https://mycroft.ai/blog/le-monde-science-health/" target="_blank" rel="noopener"><span style="font-weight: 400;">Le Monde</span></a></li>
<li><span style="font-weight: 400;"><a href="https://techcrunch.com/video/mycroft-open-source-voice-assistant/59c2b725c214e377dac11512/" target="_blank" rel="noopener">Techcrunch</a></span><a href="https://techcrunch.com/video/mycroft-open-source-voice-assistant/59c2b725c214e377dac11512/" target="_blank" rel="noopener"><span style="font-weight: 400;">
</span></a></li>
</ul>
<h3>Coverage of our Mark II Campaign</h3>
Find a larger list <em><a href="https://mycroft.ai/blog/mycroft-news-read-coverage-mark-ii-launch/" target="_blank" rel="noopener">here</a>.</em>
<ul>
<li><a href="https://www.fastcompany.com/40522226/can-mycrofts-privacy-centric-voice-assistant-take-on-alexa-and-google" target="_blank" rel="noopener">FastCompany</a></li>
<li><a href="https://www.digitaltrends.com/cool-tech/mycroft-mark-ii-assistant/" target="_blank" rel="noopener">Digital Trends</a></li>
<li><a href="https://www.slashgear.com/mycroft-mark-ii-the-open-source-amazon-echo-youve-always-wanted-29517303/" target="_blank" rel="noopener">Slash Gear</a></li>
<li><a href="http://www.autonews.com/article/20180120/OEM06/180129987/with-ai-digital-assistant-drivers-talk-to-their-cars" target="_blank" rel="noopener">Automotive News</a></li>
<li><a href="http://www.cbc.ca/radio/spark/383-dangerous-data-libraries-and-more-1.4516637/the-privacy-first-smart-speaker-taking-on-the-likes-of-apple-and-amazon-1.4516676" target="_blank" rel="noopener">CBC Radio</a></li>
<li><a href="https://blog.hackster.io/mycroft-announces-a-kickstarter-campaign-for-the-mark-ii-ai-assistant-39e406368d4a">Hackster.io</a></li>
<li><a href="https://itsfoss.com/mycroft-mark-2/">It's FOSS</a></li>
<li><a href="https://www.stuff.tv/hot-stuff/smart-home/mycroft-mark-ii-voice-assistant-can-be-extended-and-customised-wont-share-your" target="_blank" rel="noopener">Stuff</a></li>
<li><a href="https://www.open-electronics.org/mycroft-mark-ii-the-open-source-voice-assistant/" target="_blank" rel="noopener">Open Electronics</a></li>
<li><a href="https://www.trendhunter.com/trends/mycroft-mark-ii" target="_blank" rel="noopener">Trend Hunter</a></li>
<li><a href="https://www.voicebot.ai/2018/01/25/mycroft-mark-ii-smart-display-puts-privacy-first/">Voicebot.ai</a></li>
<li><a href="https://androidmarvel.com/mycroft-mark-ii-open-answer-amazon-echo-google-home/">Android Marvel</a></li>
<li><a href="https://liliputing.com/2018/01/mycroft-mark-ii-smart-speaker-open-source-voice-assistant-crowdfunding.html">Liliputing</a></li>
<li><a href="https://fossbytes.com/mycroft-open-source-ai-assistant-mark-ii-crowdfunding/">FOSSbytes</a></li>
<li><a href="https://www.cnx-software.com/2018/01/27/mycroft-mark-ii-smart-speaker-voice-assistant-works-with-open-source-software-crowdfunding/">CNX Software</a></li>
</ul>
<h2><span style="font-weight: 400;">Quotes</span></h2>
<blockquote><span style="font-weight: 400;">"I love what Mycroft is doing. Artificial intelligence and machine learning are becoming a common component in our businesses, homes, and elsewhere, and with such a fundamental connection between people and technology, it is essential that the technology is as open and transparent as possible. Mycroft's focus on community, transparency, and delivering simple and expandable AI is what we need to ensure the AI revolution is in good hands." </span><b>- Jono Bacon, Author, Consultant, and Leader in Community Strategy and Open Source</b>
<hr />
"Voice is the next frontier for user interfaces and artificial intelligence. Mycroft is the open source solution to voice and I am excited to see the Mycroft team propel this journey to the next level." <strong>- Lesa Mitchell, Managing Director <a href="https://www.techstars.com/">Techstars</a></strong>
<hr />
<div id="m_-8470326683647559940gmail-:123" class="m_-8470326683647559940gmail-ii m_-8470326683647559940gmail-gt m_-8470326683647559940gmail-adO">
<div id="m_-8470326683647559940gmail-:zn" class="m_-8470326683647559940gmail-a3s m_-8470326683647559940gmail-aXjCH m_-8470326683647559940gmail-m160dd08117fa6ffe">
<div dir="ltr">
<div>"Mycroft is a super platform for Makers interested in exploring and using Voice. It is open source and well supported by the company and community of users with quick answers on the forum. The underlying language of skills is Python, so if you can write it in Python, you can use it on Mycroft and, very importantly, there is already a large number of skills available to learn from" - <strong>Greg Voronin, Community Developer and Maker</strong></div>
<div>
<hr />
<span style="font-weight: 400;">“Unlike some technologies where correct and incorrect is easy to define, Artificial Intelligence has to deal with fuzzy lines. Shaping these AI entities to serve the world should be managed in an open way where it can be researched and understood by all.</span><b>”</b><b> - Steve Penrod, Mycroft AI CTO</b>
<hr />
</div>
</div>
</div>
</div>
<div class="yj6qo ajU"></div></blockquote>
[/vc_column_text][/vc_column_inner][/vc_row_inner][/vc_column][/vc_row] | 250.9 | 8,181 | 0.789048 | eng_Latn | 0.7754 |
aa66d54f846bcddac48d49445a02cf941433a98c | 542 | md | Markdown | ru/storage/concepts/backup.md | dbaklikov/docs | fa03bc655421603e97f8970c98369cfcec54c80a | [
"CC-BY-4.0"
] | null | null | null | ru/storage/concepts/backup.md | dbaklikov/docs | fa03bc655421603e97f8970c98369cfcec54c80a | [
"CC-BY-4.0"
] | null | null | null | ru/storage/concepts/backup.md | dbaklikov/docs | fa03bc655421603e97f8970c98369cfcec54c80a | [
"CC-BY-4.0"
] | 1 | 2019-08-15T12:32:47.000Z | 2019-08-15T12:32:47.000Z | # Backing up objects
{{ objstorage-name }} provides reliable storage of the objects you upload in replicated storage, but it does not offer dedicated backup tools.
If you want to keep a backup copy of your objects, you can regularly download the relevant objects from {{ objstorage-name }} and store them in your own infrastructure or in another cloud storage service.
You can automate the backup process using the [tools supported by the service](../instruments/index.md).
| 67.75 | 192 | 0.828413 | rus_Cyrl | 0.993804 |
aa67028689e485294c5313c53134a893d1199159 | 957 | md | Markdown | api/Project.application.assistance.md | SSlinky/VBA-Docs | 4f4721eced372259ca265a8bd45b881f974ff9df | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Project.application.assistance.md | SSlinky/VBA-Docs | 4f4721eced372259ca265a8bd45b881f974ff9df | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Project.application.assistance.md | SSlinky/VBA-Docs | 4f4721eced372259ca265a8bd45b881f974ff9df | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Application.Assistance property (Project)
ms.prod: project-server
ms.assetid: f53bf107-9fd1-78f9-f8db-0b8c2acc5f72
ms.date: 06/08/2017
ms.localizationpriority: medium
---
# Application.Assistance property (Project)
Gets an **Office.IAssistance** object that represents the Project Help system. Read-only **IAssistance**.
## Syntax
_expression_.**Assistance**
_expression_ A variable that represents an **[Application](Project.Application.md)** object.
## Remarks
For more information, see the **IAssistance** object in the Microsoft Office Visual Basic Reference.
## Example
The following example displays the top-level page of the **Project Help** window.
```vb
Sub ShowHelp()
    ' Get a reference to the Project Help system.
    Dim theHelpSystem As Office.IAssistance
    Set theHelpSystem = Application.Assistance

    ' Display the top-level page of the Project Help window.
    theHelpSystem.ShowHelp
End Sub
```
## Property value
**Office.IAssistance**
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 20.361702 | 106 | 0.743992 | eng_Latn | 0.826384 |
aa6719fa2fe7f89f22b7b32d14caa7ec80bbeb30 | 3,906 | md | Markdown | README.md | Jennyx18/SiMon | 522432ff708954ac37050609cfd6f42dd96467e4 | [
"BSD-2-Clause"
] | 9 | 2017-03-04T08:00:58.000Z | 2021-04-03T18:18:40.000Z | README.md | Jennyx18/SiMon | 522432ff708954ac37050609cfd6f42dd96467e4 | [
"BSD-2-Clause"
] | 52 | 2016-09-23T14:06:06.000Z | 2021-08-05T12:21:29.000Z | README.md | Jennyx18/SiMon | 522432ff708954ac37050609cfd6f42dd96467e4 | [
"BSD-2-Clause"
] | 4 | 2016-09-15T02:09:42.000Z | 2021-06-15T11:42:58.000Z | [](https://github.com/psf/black)
[](https://pepy.tech/project/astrosimon)
[](https://github.com/maxwelltsai/SiMon/actions/workflows/codeql-analysis.yml)
# SiMon -- Simulation Monitor

**SiMon** is an automatic monitor/scheduler/pipeline for astrophysical N-body simulations. In astrophysics, it is common that a grid of simulations is needed to explore a parameter space. SiMon facilitates the paramater-space study simulations in the follow ways:
* Generate a real-time overview of the current simulation status
* Automatically restart the simulation if the code crashes
* Invoke the data processing script (e.g. create plots) once the simulation is finish
* Notify the user (e.g. by email) once the simulations are finished
* Report to the user if a certain simulation cannot be restarted (e.g. code keeps crashing/stalling for some reasons)
* Parallelize the launching of multiple simulations according to the configured computational resources
* Detect and kill stalled simulations (simulations that utilize 100% CPU/GPU but do not make any progress for a long period of time)
**SiMon** is highly modular. Arbitrary numerical codes can be supported by **SiMon** by overriding `module_common.py` (python programming needed) or editing config files (no programming needed).
**SiMon** is originally built for carrying out large ensembles of astrophysical N-body simulations. However, it has now been generalized to carrying out any computational intensive numerical jobs (e.g., scheduling an observational data reduction pipeline).
# Installation
To install the latest stable version of **SiMon**, you can do
pip install astrosimon
Or you can install the latest developer version from the git repository using:
pip install https://github.com/maxwelltsai/SiMon/archive/master.zip
Note: as of mid-2019, large number of Python packages have migrated to Python 3.x, with no guarantee of Python 2.x backward compatability. Therefore, **SiMon** is currently optimize for Python 3.x.
# Usage - Start with an example code
**SiMon** is simple to use! To display an overview of all managed jobs, you simply type the following in your terminal:
simon
If you would just like to see the currently running jobs, following command will help, the same scheme also applies to check other status such as NEW, DONE, STOP:
simon | grep RUN
If it is your first time running **SiMon**, it will offer to generate a default config file and some demo simulations on the current directly. Just proceed according to the interactive instructions. Then, your simulations can be launched and monitored automatically with
simon start
This will start **SiMon** as a daemon program, which schedule and monitor all simulations automatically without human supervision. The daemon can be stopped with
simon stop
The interactive dashboard of **SiMon** can be launched at any time (before, during, and after the simulations) with this simple command:
simon -i
Or if you prefer: `simon i` or `simon interactive`.
# Usage - Apply to your code
Edit the global config file `SiMon.conf` using your favorite text editor, change default
Root_dir: examples/demo_simulations
to be the dir of where your code located, then start simon again!
More detailed configuration can refer https://pennyq.github.io/SiMon/
That's it! Go and take a beer :)
# Paper
http://adsabs.harvard.edu/abs/2017PASP..129i4503Q
| 53.506849 | 270 | 0.776498 | eng_Latn | 0.98706 |
aa6afdd9f01ebf090f5536325849a0db3ed5bb7a | 21,986 | md | Markdown | articles/hdinsight/kafka/apache-kafka-mirroring.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:21.000Z | 2021-03-12T23:37:21.000Z | articles/hdinsight/kafka/apache-kafka-mirroring.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/kafka/apache-kafka-mirroring.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Spiegeln von Apache Kafka-Themen – Azure HDInsight
description: Es wird beschrieben, wie Sie die Spiegelungsfunktion von Apache Kafka verwenden, um ein Replikat von Kafka in einem HDInsight-Cluster aufzubewahren, indem Sie die Themen in einem sekundären Cluster spiegeln.
ms.service: hdinsight
ms.topic: how-to
ms.custom: hdinsightactive
ms.date: 11/29/2019
ms.openlocfilehash: 5c62b183d55023b0b8a25dcde03aef21e0364a3e
ms.sourcegitcommit: 692382974e1ac868a2672b67af2d33e593c91d60
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/22/2021
ms.locfileid: "130261898"
---
# <a name="use-mirrormaker-to-replicate-apache-kafka-topics-with-kafka-on-hdinsight"></a>Verwenden von MirrorMaker zum Replizieren von Apache Kafka-Themen mit Kafka in HDInsight
Erfahren Sie, wie Sie die Spiegelungsfunktion von Apache Kafka verwenden, um Themen in einen sekundären Cluster zu replizieren. Die Spiegelung kann als fortlaufender Prozess ausgeführt oder zu bestimmten Zeitpunkten als Methode zum Migrieren von Daten aus einem Cluster in einen anderen verwendet werden.
> [!NOTE]
> Dieser Artikel enthält Verweise auf den Begriff *Whitelist*, einen Begriff, den Microsoft nicht mehr verwendet. Sobald der Begriff aus der Software entfernt wurde, wird er auch aus diesem Artikel entfernt.
In diesem Beispiel wird das Spiegelung zum Replizieren von Themen zwischen zwei HDInsight-Clustern verwendet. Die beiden Cluster befinden sich in verschiedenen virtuellen Netzwerken in verschiedenen Rechenzentren.
> [!WARNING]
> Die Spiegelung sollte nicht als Mittel zum Erzielen von Fehlertoleranz angesehen werden. Der Versatz von Elementen in einem Thema unterscheidet sich im primären und sekundären Cluster, sodass die Cluster für Clients nicht austauschbar sind.
>
> Falls Sie Bedenken wegen der Fehlertoleranz haben, sollten Sie die Replikation für die Themen in Ihrem Cluster festlegen. Weitere Informationen finden Sie unter [Schnellstart: Erstellen eines Apache Kafka-Clusters in HDInsight](apache-kafka-get-started.md).
## <a name="how-apache-kafka-mirroring-works"></a>Funktionsweise der Apache Kafka-Spiegelung
Für die Spiegelung wird das Tool [MirrorMaker](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330) (Teil von Apache Kafka) verwendet, um Datensätze aus Themen im primären Cluster zu nutzen und anschließend eine lokale Kopie im sekundären Cluster zu erstellen. MirrorMaker nutzt einen (oder mehrere) *Consumer* zum Lesen von Daten aus dem primären Cluster und einen *Producer*, der in den lokalen Cluster (sekundären Cluster) schreibt.
Am sinnvollsten ist es, die Spiegelung für die Notfallwiederherstellung mit Kafka-Clustern in verschiedenen Azure-Regionen einzurichten. Zu diesem Zweck werden die virtuellen Netzwerke, in denen sich die Cluster befinden, zu einem Peeringnetzwerk zusammengeschlossen.
Das folgende Diagramm veranschaulicht den Spiegelungsprozess und den Kommunikationsfluss zwischen den Clustern:
:::image type="content" source="./media/apache-kafka-mirroring/kafka-mirroring-vnets2.png" alt-text="Diagramm des Spiegelungsprozesses" border="false":::
Der primäre und der sekundäre Cluster können sich in Bezug auf die Anzahl von Knoten und Partitionen unterscheiden, und auch der Versatz in den Themen ist unterschiedlich. Beim Spiegeln wird der Schlüsselwert beibehalten, der für die Partitionierung verwendet wird, sodass die Datensatzreihenfolge pro Schlüssel beibehalten wird.
### <a name="mirroring-across-network-boundaries"></a>Spiegelung über Netzwerkgrenzen hinweg
Wenn Sie eine Spiegelung zwischen Kafka-Clustern in unterschiedlichen Netzwerken durchführen müssen, sollten Sie außerdem Folgendes beachten:
* **Gateways**: Die Netzwerke müssen auf TCP/IP-Ebene kommunizieren können.
* **Serveradresse**: Sie können die IP-Adressen oder die vollqualifizierten Domänennamen verwenden, um Ihre Clusterknoten zu erreichen.
* **IP-Adressen**: Wenn Sie Ihre Kafka-Cluster für die Ankündigung der IP-Adresse konfigurieren, können Sie die Spiegelung mit den IP-Adressen der Brokerknoten und der Zookeeperknoten einrichten.
* **Domänennamen**: Wenn Sie Ihre Kafka-Cluster nicht für die Ankündigung der IP-Adresse konfigurieren, müssen die Cluster über ihre vollqualifizierten Domänennamen eine Verbindung miteinander herstellen können. Hierfür ist ein DNS-Server (Domain Name System) in jedem Netzwerk erforderlich, der dafür konfiguriert ist, Anforderungen an die anderen Netzwerke weiterzuleiten. Beim Erstellen eines virtuellen Azure-Netzwerks müssen Sie einen benutzerdefinierten DNS-Server und die IP-Adresse für den Server angeben, anstatt das automatisch bereitgestellte DNS des Netzwerks zu verwenden. Nach der Erstellung des virtuellen Netzwerks müssen Sie dann einen virtuellen Azure-Computer erstellen, für den diese IP-Adresse verwendet wird, und anschließend die DNS-Software darauf installieren und konfigurieren.
> [!WARNING]
> Erstellen und konfigurieren Sie den benutzerdefinierten DNS-Server, bevor Sie HDInsight im virtuellen Netzwerk installieren. Es ist keine zusätzliche Konfiguration erforderlich, damit HDInsight den für das virtuelle Netzwerk konfigurierten DNS-Server verwenden kann.
Weitere Informationen zum Verbinden von zwei virtuellen Azure-Netzwerken finden Sie unter [Konfigurieren einer VNet-zu-VNet-Verbindung](../../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md).
## <a name="mirroring-architecture"></a>Architektur der Spiegelung
In dieser Architektur befinden sich zwei Cluster in verschiedenen Ressourcengruppen und virtuellen Netzwerken: ein **primärer** und ein **sekundärer** Cluster.
### <a name="creation-steps"></a>Schritte zur Erstellung
1. Erstellen Sie zwei neue Ressourcengruppen:
|Ressourcengruppe | Standort |
|---|---|
| kafka-primary-rg | USA, Mitte |
| kafka-secondary-rg | USA Nord Mitte |
1. Erstellen Sie in **kafka-primary-rg** ein neues virtuelles Netzwerk namens **kafka-primary-vnet**. Übernehmen Sie die Standardeinstellungen.
1. Erstellen Sie – ebenfalls mit den Standardeinstellungen – in **kafka-secondary-rg** ein neues virtuelles Netzwerk namens **kafka-secondary-vnet**.
1. Erstellen Sie zwei neue Kafka-Cluster:
| Clustername | Ressourcengruppe | Virtual Network | Speicherkonto |
|---|---|---|---|
| kafka-primary-cluster | kafka-primary-rg | kafka-primary-vnet | kafkaprimarystorage |
| kafka-secondary-cluster | kafka-secondary-rg | kafka-secondary-vnet | kafkasecondarystorage |
1. Erstellen Sie virtuelle Netzwerkpeerings. Mit diesem Schritt werden zwei Peerings erstellt: eins von **kafka-primary-vnet** zu **kafka-secondary-vnet** und das andere von **kafka-secondary-vnet** zurück zu **kafka-primary-vnet**.
1. Wählen Sie das virtuelle Netzwerk **kafka-primary-vnet** aus.
1. Wählen Sie unter **Einstellungen** die Option **Peerings** aus.
1. Wählen Sie **Hinzufügen**.
1. Geben Sie auf dem Bildschirm **Peering hinzufügen** die Informationen wie im Screenshot unten gezeigt ein.
:::image type="content" source="./media/apache-kafka-mirroring/hdi-add-vnet-peering.png" alt-text="HDInsight Kafka – Hinzufügen von VNET-Peering" border="true":::
### <a name="configure-ip-advertising"></a>Konfigurieren der Ankündigung der IP-Adresse
Konfigurieren Sie die Ankündigung der IP-Adresse, um einem Client das Herstellen einer Verbindung mithilfe von Broker-IP-Adressen anstelle von Domänennamen zu ermöglichen.
1. Wechseln Sie zum Ambari-Dashboard für den primären Cluster: `https://PRIMARYCLUSTERNAME.azurehdinsight.net`.
1. Wählen Sie **Dienste** > **Kafka** aus. Wählen Sie die Registerkarte **Configs** aus.
1. Fügen Sie dem unteren Abschnitt **kafka-env template** folgende Konfigurationszeilen hinzu. Wählen Sie **Speichern** aus.
```
# Configure Kafka to advertise IP addresses instead of FQDN
IP_ADDRESS=$(hostname -i)
echo advertised.listeners=$IP_ADDRESS
sed -i.bak -e '/advertised/{/advertised@/!d;}' /usr/hdp/current/kafka-broker/conf/server.properties
echo "advertised.listeners=PLAINTEXT://$IP_ADDRESS:9092" >> /usr/hdp/current/kafka-broker/conf/server.properties
```
1. Geben Sie auf dem Bildschirm **Save Configuration** (Konfiguration speichern) einen Hinweis ein, und klicken Sie auf **Save** (Speichern).
1. Wenn eine Konfigurationswarnung angezeigt wird, klicken Sie auf **Proceed Anyway** (Trotzdem fortfahren).
1. Wählen Sie für **Save Configuration Changes** (Konfigurationsänderungen speichern) die Option **OK** aus.
1. Wählen Sie in der Benachrichtigung **Restart Required** (Neustart erforderlich) die Option **Restart** > **Restart All Affected** (Neustart > Alle betroffenen neu starten) aus. Wählen Sie **Confirm Restart All**.
:::image type="content" source="./media/apache-kafka-mirroring/ambari-restart-notification.png" alt-text="Apache Ambari – Neustarten aller betroffenen Instanzen" border="true":::
### <a name="configure-kafka-to-listen-on-all-network-interfaces"></a>Konfigurieren Sie Kafka zum Lauschen auf allen Netzwerkschnittstellen.
1. Bleiben Sie auf der Registerkarte **Configs** (Konfigurationen) unter **Services** > **Kafka** (Dienste > Kafka). Legen Sie im Abschnitt **Kafka Broker** die Eigenschaft **listeners** auf `PLAINTEXT://0.0.0.0:9092` fest.
1. Wählen Sie **Speichern** aus.
1. Wählen Sie **Restart** (Neustart) und **Confirm Restart All** (Bestätigen: Alle neu starten) aus.
### <a name="record-broker-ip-addresses-and-zookeeper-addresses-for-primary-cluster"></a>Notieren Sie sich die IP-Adressen der Broker und Zookeeper für den primären Cluster.
1. Wählen Sie auf dem Ambari-Dashboard **Hosts** aus.
1. Notieren Sie sich die IP-Adressen für Broker und Zookeeper. Die ersten beiden Buchstaben des Hostnamens lauten **wn** für die Brokerknoten und **zk** für die Zookeeperknoten.
:::image type="content" source="./media/apache-kafka-mirroring/view-node-ip-addresses2.png" alt-text="Apache Ambari – Ansicht mit IP-Adressknoten" border="true":::
1. Wiederholen Sie die vorangegangenen drei Schritte für den zweiten Cluster **kafka-secondary-cluster**: Konfigurieren der Ankündigung von IP-Adressen, Einrichten von Listenern und Notieren der IP-Adressen von Broker und Zookeeper.
## <a name="create-topics"></a>Erstellen von Themen
1. Stellen Sie über SSH eine Verbindung mit dem **primären** Cluster her:
```bash
ssh sshuser@PRIMARYCLUSTER-ssh.azurehdinsight.net
```
Ersetzen Sie **sshuser** durch den SSH-Benutzernamen, den Sie beim Erstellen des Clusters verwendet haben. Ersetzen Sie **PRIMARYCLUSTER** durch den Basisnamen, den Sie beim Erstellen des Clusters verwendet haben.
Informationen hierzu finden Sie unter [Verwenden von SSH mit Linux-basiertem Hadoop in HDInsight unter Linux, Unix oder OS X](../hdinsight-hadoop-linux-use-ssh-unix.md).
1. Verwenden Sie den folgenden Befehl, um zwei Umgebungsvariablen mit den Apache-Zookeeperhosts und Brokerhosts für den primären Cluster zu erstellen. Zeichenfolgen wie `ZOOKEEPER_IP_ADDRESS1` müssen durch die tatsächlichen IP-Adressen ersetzt werden, die Sie zuvor notiert haben, z.B. `10.23.0.11` und `10.23.0.7`. Dasselbe gilt für `BROKER_IP_ADDRESS1`. Wenn Sie die Auflösung von vollqualifizierten Domänennamen mit einem benutzerdefinierten DNS-Server durchführen, führen Sie [diese Schritte](apache-kafka-get-started.md#getkafkainfo) aus, um die Namen der Broker und ZooKeeper abzurufen:
```bash
# get the zookeeper hosts for the primary cluster
export PRIMARY_ZKHOSTS='ZOOKEEPER_IP_ADDRESS1:2181, ZOOKEEPER_IP_ADDRESS2:2181, ZOOKEEPER_IP_ADDRESS3:2181'
# get the broker hosts for the primary cluster
export PRIMARY_BROKERHOSTS='BROKER_IP_ADDRESS1:9092,BROKER_IP_ADDRESS2:9092,BROKER_IP_ADDRESS2:9092'
```
1. Um ein Thema mit Namen `testtopic` zu erstellen, nutzen Sie den folgenden Befehl:
```bash
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic testtopic --zookeeper $PRIMARY_ZKHOSTS
```
1. Verwenden Sie den folgenden Befehl, um zu bestätigen, dass das Thema erstellt wurde:
```bash
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $PRIMARY_ZKHOSTS
```
Die Antwort enthält `testtopic`.
1. Verwenden Sie Folgendes, um die Informationen zum Brokerhost für diesen Cluster (den **primären** Cluster) anzuzeigen:
```bash
echo $PRIMARY_BROKERHOSTS
```
Die Ausgabe sieht in etwa wie folgt aus:
`10.23.0.11:9092,10.23.0.7:9092,10.23.0.9:9092`
Speichern Sie diese Informationen. Sie werden im nächsten Abschnitt verwendet.
## <a name="configure-mirroring"></a>Konfigurieren der Spiegelung
1. Stellen Sie in einer anderen SSH-Sitzung eine Verbindung mit dem **sekundären Cluster** her:
```bash
ssh sshuser@SECONDARYCLUSTER-ssh.azurehdinsight.net
```
Ersetzen Sie **sshuser** durch den SSH-Benutzernamen, den Sie beim Erstellen des Clusters verwendet haben. Ersetzen Sie **SECONDARYCLUSTER** durch den Namen, den Sie beim Erstellen des Clusters verwendet haben.
Informationen hierzu finden Sie unter [Verwenden von SSH mit Linux-basiertem Hadoop in HDInsight unter Linux, Unix oder OS X](../hdinsight-hadoop-linux-use-ssh-unix.md).
1. Mit einer `consumer.properties`-Datei wird die Kommunikation mit dem **primären** Cluster konfiguriert. Verwenden Sie zum Erstellen der Datei den folgenden Befehl:
```bash
nano consumer.properties
```
Verwenden Sie als Inhalt der Datei `consumer.properties` den folgenden Text:
```yaml
bootstrap.servers=PRIMARY_BROKERHOSTS
group.id=mirrorgroup
```
Ersetzen Sie **PRIMARY_BROKERHOSTS** durch die IP-Adressen der Brokerhosts aus dem **primären** Cluster.
In dieser Datei werden die Consumerinformationen beschrieben, die beim Lesen aus dem primären Kafka-Cluster verwendet werden sollten. Weitere Informationen zur Consumerkonfiguration finden Sie unter [Consumer Configs](https://kafka.apache.org/documentation#consumerconfigs) (Consumerkonfigurationen) bei „kafka.apache.org“.
Drücken Sie zum Speichern der Datei **STRG+X**, **Y** und dann die **EINGABETASTE**.
1. Bevor Sie den Producer konfigurieren, der mit dem sekundären Cluster kommuniziert, richten Sie eine Variable für die IP-Adressen der Broker des **sekundären** Clusters ein. Verwenden Sie die folgenden Befehle, um diese Variable zu erstellen:
```bash
export SECONDARY_BROKERHOSTS='BROKER_IP_ADDRESS1:9092,BROKER_IP_ADDRESS2:9092,BROKER_IP_ADDRESS2:9092'
```
Der Befehl `echo $SECONDARY_BROKERHOSTS` gibt Informationen ähnlich dem folgenden Text zurück:
`10.23.0.14:9092,10.23.0.4:9092,10.23.0.12:9092`
1. Eine `producer.properties`-Datei wird für die Kommunikation mit dem **sekundären** Cluster verwendet. Verwenden Sie zum Erstellen der Datei den folgenden Befehl:
```bash
nano producer.properties
```
Verwenden Sie als Inhalt der Datei `producer.properties` den folgenden Text:
```yaml
bootstrap.servers=SECONDARY_BROKERHOSTS
compression.type=none
```
Ersetzen Sie **SECONDARY_BROKERHOSTS** durch die IP-Adressen der Broker, die im vorherigen Schritt verwendet wurden.
Weitere Informationen zur Producerkonfiguration finden Sie unter [Producer Configs](https://kafka.apache.org/documentation#producerconfigs) (Producerkonfigurationen) bei „kafka.apache.org“.
1. Verwenden Sie die folgenden Befehle, um eine Umgebungsvariable mit den IP-Adressen der Zookeeperhosts für den sekundären Cluster zu erstellen:
```bash
# get the zookeeper hosts for the secondary cluster
export SECONDARY_ZKHOSTS='ZOOKEEPER_IP_ADDRESS1:2181,ZOOKEEPER_IP_ADDRESS2:2181,ZOOKEEPER_IP_ADDRESS3:2181'
```
1. Die Standardkonfiguration für Kafka auf HDInsight erlaubt keine automatische Erstellung von Themen. Bevor Sie mit der Spiegelung beginnen, müssen Sie sich für eine der folgenden Optionen entscheiden:
* **Erstellen der Themen im sekundären Cluster**: Bei dieser Option haben Sie auch die Möglichkeit, die Anzahl von Partitionen und den Replikationsfaktor festzulegen.
Mit folgendem Befehl können Sie Themen vorab erstellen:
```bash
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic testtopic --zookeeper $SECONDARY_ZKHOSTS
```
Ersetzen Sie `testtopic` durch den Namen des zu erstellenden Themas.
* **Konfigurieren des Clusters für die automatische Themaerstellung**: Bei dieser Option kann MirrorMaker zum automatischen Erstellen von Themen verwendet werden. Die Themen werden jedoch möglicherweise mit einer unterschiedlichen Anzahl von Partitionen bzw. einem anderen Replikationsfaktor als im primären Thema erstellt.
Um den sekundären Cluster für das automatische Erstellen von Themen zu konfigurieren, führen Sie die folgenden Schritte aus:
1. Wechseln Sie zum Ambari-Dashboard für den sekundären Cluster: `https://SECONDARYCLUSTERNAME.azurehdinsight.net`.
1. Klicken Sie auf **Services** > **Kafka** (Dienste > Kafka). Klicken Sie auf die Registerkarte **Configs** .
1. Geben Sie in das Feld __Filter__ den Wert `auto.create` ein. Dies filtert die Liste der Eigenschaften und zeigt die Einstellung `auto.create.topics.enable`.
1. Ändern Sie den Wert von `auto.create.topics.enable` in „true“, und wählen Sie dann __Speichern__. Fügen Sie einen Hinweis hinzu, und wählen Sie dann erneut __Speichern__.
1. Wählen Sie den Dienst __Kafka__, dann die Option __Neu starten__ und abschließend die Option __Neustart aller betroffenen__. Klicken Sie bei entsprechender Aufforderung auf __Neustart aller Dienste bestätigen__.
:::image type="content" source="./media/apache-kafka-mirroring/kafka-enable-auto-create-topics.png" alt-text="Kafka – Aktivieren der automatischen Erstellung eines Themas" border="true":::
## <a name="start-mirrormaker"></a>Starten von MirrorMaker
1. Verwenden Sie den folgenden Befehl über die SSH-Verbindung mit dem **sekundären** Cluster, um den MirrorMaker-Prozess zu starten:
```bash
/usr/hdp/current/kafka-broker/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config consumer.properties --producer.config producer.properties --whitelist testtopic --num.streams 4
```
Die in diesem Beispiel verwendeten Parameter sind:
|Parameter |Beschreibung |
|---|---|
|--consumer.config|Gibt die Datei an, in der die Consumereigenschaften enthalten sind. Diese Eigenschaften werden verwendet, um einen Consumer zu erstellen, mit dem aus dem *primären* Kafka-Cluster gelesen wird.|
|--producer.config|Gibt die Datei an, in der die Producereigenschaften enthalten sind. Diese Eigenschaften werden verwendet, um einen Producer zu erstellen, mit dem in den *sekundären* Kafka-Cluster geschrieben wird.|
|--whitelist|Eine Liste mit Themen, die von MirrorMaker aus dem primären in den sekundären Cluster repliziert werden.|
|--num.streams|Die Anzahl von Consumerthreads, die erstellt werden sollen.|
Der Consumer auf dem sekundären Knoten wartet jetzt auf Nachrichten.
2. Verwenden Sie den folgenden Befehl über die SSH-Verbindung mit dem **primären** Cluster, um einen Producer zu starten und Nachrichten an das Thema zu senden:
```bash
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $PRIMARY_BROKERHOSTS --topic testtopic
```
Sobald der Cursor an einer leeren Zeile steht, geben Sie einige Textnachrichten ein. Diese Nachrichten werden an das Thema im **primären** Cluster gesendet. Drücken Sie anschließend **STRG + C**, um den Producer-Prozess zu beenden.
3. Drücken Sie über die SSH-Verbindung mit dem **sekundären** Cluster die Tastenkombination **STRG + C**, um den MirrorMaker-Prozess zu beenden. Es kann einige Sekunden in Anspruch nehmen, um den Prozess zu beenden. Um sicherzustellen, dass die Nachrichten im sekundären Cluster repliziert wurden, verwenden Sie den folgenden Befehl:
```bash
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $SECONDARY_BROKERHOSTS --topic testtopic --from-beginning
```
Die Themenliste enthält nun auch das Thema `testtopic`, das erstellt wurde, als MirrorMaker das Thema vom primären an den sekundären Cluster gespiegelt hat. Die aus dem Thema abgerufenen Nachrichten sind identisch mit den Nachrichten, die Sie im primären Cluster eingegeben haben.
## <a name="delete-the-cluster"></a>Löschen des Clusters
[!INCLUDE [delete-cluster-warning](../includes/hdinsight-delete-cluster-warning.md)]
Mit den Schritten in diesem Dokument wurden Cluster in verschiedenen Azure-Ressourcengruppen erstellt. Um alle erstellten Ressourcen zu löschen, können Sie einfach die Ressourcengruppen löschen: **kafka-primary-rg** und **kafka-secondary_rg**. Durch Löschen der Ressourcengruppen werden alle beim Durcharbeiten dieses Dokuments erstellten Ressourcen gelöscht: Cluster, virtuelle Netzwerke und Speicherkonten.
## <a name="next-steps"></a>Nächste Schritte
In diesem Dokument haben Sie gelernt, wie [MirrorMaker](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330) zur Erstellung eines Replikats eines [Apache Kafka](https://kafka.apache.org/)-Clusters verwendet wird. Verwenden Sie die folgenden Links, um weitere Möglichkeiten zur Arbeit mit Kafka kennenzulernen:
* [Dokumentation zu Apache Kafka MirrorMaker](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330) auf cwiki.apache.org.
* [Bewährte Methoden für Kafka MirrorMaker](https://community.cloudera.com/t5/Community-Articles/Kafka-Mirror-Maker-Best-Practices/ta-p/249269)
* [Erste Schritte mit Apache Kafka in HDInsight](apache-kafka-get-started.md)
* [Verwenden von Apache Spark mit Apache Kafka in HDInsight](../hdinsight-apache-spark-with-kafka.md)
* [Herstellen einer Verbindung mit Apache Kafka über ein virtuelles Azure-Netzwerk](apache-kafka-connect-vpn-gateway.md)
| 70.019108 | 807 | 0.777631 | deu_Latn | 0.986989 |
aa6c10b086af175a27e43a3d75e3b58383c489aa | 1,761 | md | Markdown | docs/newsletter/2021_07_12.md | captainpick/blue-book | 90a9cf62a1792df285c3ca157ad72e668ac3e8df | [
"CC0-1.0"
] | 219 | 2020-02-12T10:39:55.000Z | 2022-03-29T15:46:13.000Z | docs/newsletter/2021_07_12.md | captainpick/blue-book | 90a9cf62a1792df285c3ca157ad72e668ac3e8df | [
"CC0-1.0"
] | 17 | 2020-02-13T13:21:11.000Z | 2022-02-01T14:29:21.000Z | docs/newsletter/2021_07_12.md | captainpick/blue-book | 90a9cf62a1792df285c3ca157ad72e668ac3e8df | [
"CC0-1.0"
] | 34 | 2020-05-03T14:49:03.000Z | 2022-03-29T15:46:12.000Z | # Operative Systems
## [Linux](tahoe.md)
* New: Introduce Tahoe-LAFS.
[Tahoe-LAFS](https://en.wikipedia.org/wiki/Tahoe-LAFS) is a free and open,
secure, decentralized, fault-tolerant, distributed data store and distributed
file system.
Tahoe-LAFS is a system that helps you to store files. You run a client program
on your computer, which talks to one or more storage servers on other computers.
When you tell your client to store a file, it will encrypt that file, encode it
into multiple pieces, then spread those pieces out among multiple servers. The
pieces are all encrypted and protected against modifications. Later, when you
ask your client to retrieve the file, it will find the necessary pieces, make
sure they haven’t been corrupted, reassemble them, and decrypt the result.
### [elasticsearch](elasticsearch.md)
* Correction: Correct the way of closing an index.
Use a POST instead of a GET
* New: [Explain how to calculate the amount of memory required to do KNN operations.](elasticsearch.md#knn-sizing)
* New: [Explain how to do KNN warmup to speed up the queries.](elasticsearch.md#knn-warmup)
* New: [Explain how to deal with the AWS service timeout.](elasticsearch.md#deal-with-the-aws-timeout-service)
### [Jellyfin](jellyfin.md)
* Improvement: [Explain how to fix the wrong image covers.](jellyfin.md#wrong-image-covers)
Remove all the `jpg` files of the directory and then fetch again the data from
your favourite media management software.
### [Syncthing](syncthing.md)
* New: [Investigate if Syncthing can be used over Tor.](syncthing.md#syncthing-over-tor)
I haven't found a reliable and safe way to do it, but I've set a path to follow if you're interested. | 42.95122 | 114 | 0.741056 | eng_Latn | 0.993872 |
aa6c6167da0c822a101169e95b55251fe98c1247 | 8,231 | md | Markdown | articles/cognitive-services/LUIS/luis-concept-intent.md | decarli/azure-docs.pt-br | 20bc383d005c11e7b7dc7b7b0777fc0de1262ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/LUIS/luis-concept-intent.md | decarli/azure-docs.pt-br | 20bc383d005c11e7b7dc7b7b0777fc0de1262ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/LUIS/luis-concept-intent.md | decarli/azure-docs.pt-br | 20bc383d005c11e7b7dc7b7b0777fc0de1262ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Intenções e entidades-LUIS
titleSuffix: Azure Cognitive Services
description: Uma única intenção representa uma tarefa ou ação que o usuário deseja executar. É uma finalidade ou uma meta expressa no enunciado de um usuário. Defina um conjunto de intenções que corresponda às ações que os usuários desejem executar em seu aplicativo.
services: cognitive-services
author: diberry
manager: nitinme
ms.custom: seodec18
ms.service: cognitive-services
ms.subservice: language-understanding
ms.topic: conceptual
ms.date: 10/10/2019
ms.author: diberry
---
# <a name="intents-in-your-luis-app"></a>Intents in your LUIS app

An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's [utterance](luis-concept-utterance.md).

Define a set of intents that corresponds to the actions users want to take in your application. For example, a travel app defines several intents:
Travel app intents | Example utterances |
------|------|
BookFlight | "Book a flight to Rio for me next week" <br/> "Put me on a flight to Rio on the 24th" <br/> "I need a plane ticket next Sunday to Rio de Janeiro" |
Greeting | "Hi" <br/>"Hello" <br/>"Good morning" |
CheckWeather | "What's the weather like in Boston?" <br/> "Show me the forecast for this weekend" |
None | "Find a cookie recipe for me"<br>"Did the Lakers win?" |

All applications come with the predefined intent, "[None](#none-intent)", which is the fallback intent.
## <a name="prebuilt-domains-provide-intents"></a>Prebuilt domains provide intents

In addition to intents that you define, you can use prebuilt intents from one of the [prebuilt domains](luis-how-to-use-prebuilt-domains.md).
## <a name="return-all-intents-scores"></a>Return all intents' scores

Assign an utterance to a single intent. When LUIS receives an utterance on the endpoint, it returns that utterance's top intent by default.

If you want the scores for all intents for the utterance, you can provide a flag in the query string of the prediction API.
|Prediction API version|Flag|
|--|--|
|V2|`verbose=true`|
|V3|`show-all-intents=true`|
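For example, a V3 prediction request only needs that extra query-string flag to return every intent's score. The sketch below builds such a request URL in Python; the endpoint host and app ID are placeholders, not real values.

```python
from urllib.parse import urlencode

def build_v3_prediction_url(endpoint, app_id, query, show_all_intents=True):
    """Build a LUIS V3 prediction URL; show_all_intents adds the score flag."""
    params = {"query": query}
    if show_all_intents:
        params["show-all-intents"] = "true"
    return (f"{endpoint}/luis/prediction/v3.0/apps/{app_id}"
            f"/slots/production/predict?{urlencode(params)}")

url = build_v3_prediction_url(
    "https://westus.api.cognitive.microsoft.com",   # placeholder region host
    "11111111-2222-3333-4444-555555555555",         # placeholder app ID
    "book a flight to Rio")
print(url)
```

Sending this request also requires the usual subscription key header; without the flag, the same URL returns only the top intent.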
## <a name="intent-compared-to-entity"></a>Intent compared to entity

The intent represents the action the application should take for the user and is based on the entire utterance. An utterance can have only one top-scoring intent, but it can have many entities.
<a name="how-do-intents-relate-to-entities"></a>
Create an intent when the user's _intention_ would trigger an action in your client application, such as a call to the checkweather() function. Then create entities to represent the parameters required to carry out the action.
|Intent | Entity | Example utterance |
|------------------|------------------------------|------------------------------|
| CheckWeather | { "type": "location", "entity": "seattle" }<br>{ "type": "builtin.datetimeV2.date","entity": "tomorrow","resolution":"2018-05-23" } | What's the weather like in `Seattle` `tomorrow`? |
| CheckWeather | { "type": "date_range", "entity": "this weekend" } | Show me the forecast for `this weekend` |
||||
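A common client pattern follows directly from this mapping: the top intent selects a handler, and the entities supply its parameters. The sketch below uses a simplified prediction shape, and `check_weather` is a hypothetical handler, not part of any LUIS SDK.

```python
def check_weather(location, date):
    # Hypothetical handler; a real app would call a weather service here.
    return f"Forecast for {location} on {date}"

def dispatch(prediction):
    """Route a simplified prediction result to the matching action."""
    intent = prediction["topIntent"]
    entities = prediction["entities"]
    if intent == "CheckWeather":
        return check_weather(entities.get("location", "your city"),
                             entities.get("datetimeV2", "today"))
    return "Sorry, I can't help with that yet."

result = dispatch({
    "topIntent": "CheckWeather",
    "entities": {"location": "seattle", "datetimeV2": "2018-05-23"},
})
print(result)  # Forecast for seattle on 2018-05-23
```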
## <a name="prebuilt-domain-intents"></a>Prebuilt domain intents

[Prebuilt domains](luis-how-to-use-prebuilt-domains.md) provide intents with utterances.
## <a name="none-intent"></a>Intenção None
The **None** intent is created but left empty on purpose. The **None** intent is a required intent and can't be deleted or renamed. Fill it with utterances that are outside of your domain.
The **None** intent is the fallback intent, important in every app, and should hold about 10% of the total utterances. It is used to teach LUIS utterances that are not important in the app domain (subject area). If you do not add any utterances to the **None** intent, LUIS forces an out-of-domain utterance into one of the domain intents. This skews the prediction scores by teaching LUIS the wrong intent for the utterance.
When an utterance is predicted as the None intent, the client application can ask more questions or provide a menu to direct the user to valid choices.
## <a name="negative-intentions"></a>Negative intentions
If you want to determine negative and positive intentions, such as "I **want** a car" and "I **don't** want a car", you can create two intents (one positive and one negative) and add appropriate utterances to each. Or you can create a single intent and mark the two different positive and negative terms as an entity.
## <a name="intents-and-patterns"></a>Intents and patterns
If you have an example utterance that can be defined, in part or in whole, as a regular expression, consider using the [regular expression entity](luis-concept-entity-types.md#regular-expression-entity) paired with a [pattern](luis-concept-patterns.md).
Using a regular expression entity guarantees the data extraction so that the pattern is matched. The pattern match guarantees that an exact intent is returned.
## <a name="intent-balance"></a>Intent balance
The app domain intents should have a balance of utterances across every intent. Do not have one intent with 10 utterances and another intent with 500 utterances. This is not balanced. In this situation, review the intent with 500 utterances to see whether many of them can be reorganized into a [pattern](luis-concept-patterns.md).
The **None** intent is not included in the balance. That intent should contain 10% of the total utterances in the app.
## <a name="intent-limits"></a>Intent limits
Review the [limits](luis-boundaries.md#model-boundaries) to understand how many intents you can add to a model.
### <a name="if-you-need-more-than-the-maximum-number-of-intents"></a>If you need more than the maximum number of intents
First, consider whether your system is using too many intents.
### <a name="can-multiple-intents-be-combined-into-single-intent-with-entities"></a>Can multiple intents be combined into a single intent with entities?
Intents that are too similar can make it more difficult for LUIS to distinguish between them. Intents should be varied enough to capture the main tasks the user is asking for, but they don't need to capture every path your code takes. For example, BookFlight and FlightCustomerService might be separate intents in a travel app, but BookInternationalFlight and BookDomesticFlight are too similar. If your system needs to distinguish them, use entities or other logic rather than intents.
### <a name="dispatcher-model"></a>Dispatcher model
Learn how to combine LUIS and QnA Maker apps with the [dispatch model](luis-concept-enterprise.md#when-you-need-to-combine-several-luis-and-qna-maker-apps).
### <a name="request-help-for-apps-with-significant-number-of-intents"></a>Request help for apps with a significant number of intents
If reducing the number of intents or dividing your intents into multiple apps doesn't work for you, contact support. If your Azure subscription includes support services, contact [Azure technical support](https://azure.microsoft.com/support/options/).
## <a name="next-steps"></a>Next steps
* Learn more about [entities](luis-concept-entity-types.md), which are important words relevant to intents
* Learn how to [add and manage intents](luis-how-to-add-intents.md) in your LUIS app.
* Review intent [best practices](luis-concept-best-practices.md)
| 76.212963 | 522 | 0.769773 | por_Latn | 0.999912 |
aa6d115ef7f6ec0448630bb28878d609bd55b4d0 | 2,561 | md | Markdown | docs/azure/install-azure-cli.md | SaSha-K1/docs.ru-ru | ed71ca66fca1ea982efcb0a2dc1ec8838479d308 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/azure/install-azure-cli.md | SaSha-K1/docs.ru-ru | ed71ca66fca1ea982efcb0a2dc1ec8838479d308 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/azure/install-azure-cli.md | SaSha-K1/docs.ru-ru | ed71ca66fca1ea982efcb0a2dc1ec8838479d308 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-10-31T15:06:56.000Z | 2021-10-31T15:06:56.000Z | ---
title: Install the Azure CLI
description: Azure developers need to install the Azure CLI. This article describes why you need the command-line interface (CLI) and where you can download and install it.
ms.date: 11/30/2020
ms.topic: conceptual
ms.custom: devx-track-dotnet
ms.author: daberry
author: daberry
ms.openlocfilehash: aa2739cc6c11145887e64921398c72affeaec729
ms.sourcegitcommit: 5d9cee27d9ffe8f5670e5f663434511e81b8ac38
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 01/08/2021
ms.locfileid: "98025033"
---
# <a name="install-the-azure-cli"></a>Install the Azure CLI
In addition to the Azure portal, Azure also offers the [Azure CLI](/cli/azure/), a command-line tool for creating and managing Azure resources. The Azure CLI lets you make repetitive tasks more efficient and reproducible, and run them from scripts.
In practice, most developers use both the Azure portal and the Azure CLI. The Azure portal is convenient for getting to know new services and viewing all of the resources in your Azure account, yet most developers find the Azure CLI faster and more efficient. A task that takes several steps in the Azure portal can often be done with a single Azure CLI command. In addition, Azure CLI commands can be saved to a file, which guarantees that repetitive tasks are performed consistently.
The Azure CLI is available for Windows, macOS, and Linux.
> [!div class="nextstepaction"]
> [Install the Azure CLI for Windows](/cli/azure/install-azure-cli-windows?tabs=azure-cli)
> [!div class="nextstepaction"]
> [Install the Azure CLI for macOS](/cli/azure/install-azure-cli-macos)
> [!div class="nextstepaction"]
> [Install the Azure CLI for Linux](/cli/azure/install-azure-cli-linux)
### <a name="azure-cloud-shell"></a>Azure Cloud Shell
You can also use the Azure CLI in Azure Cloud Shell at [https://shell.azure.com](https://shell.azure.com). Azure Cloud Shell is a full-featured, browser-based shell for managing Azure resources. Azure Cloud Shell is convenient when you need a command-line environment but are working on a device where you can't install the Azure CLI.

### <a name="next-steps"></a>Next steps
Next, [install additional Azure tools](./azure-tools.md), such as Azure Storage Explorer and Azure Data Studio, to work with Azure more efficiently.
| 60.97619 | 523 | 0.802811 | rus_Cyrl | 0.858369 |
aa6d4792844b60bbd6ef9622c459fb75e7342be5 | 9,095 | md | Markdown | docs/framework/security/wstrustchannelfactory-and-wstrustchannel.md | asthman666/docs | 7bd6e92dfce09924be10c56d849af74b6bbe9acb | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-06-02T11:09:59.000Z | 2019-06-15T10:17:08.000Z | docs/framework/security/wstrustchannelfactory-and-wstrustchannel.md | asthman666/docs | 7bd6e92dfce09924be10c56d849af74b6bbe9acb | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-05-10T16:33:09.000Z | 2019-05-10T16:33:09.000Z | docs/framework/security/wstrustchannelfactory-and-wstrustchannel.md | asthman666/docs | 7bd6e92dfce09924be10c56d849af74b6bbe9acb | [
"CC-BY-4.0",
"MIT"
] | 2 | 2016-11-06T09:42:50.000Z | 2016-11-06T18:28:18.000Z | ---
title: "WSTrustChannelFactory and WSTrustChannel"
ms.date: "03/30/2017"
ms.assetid: 96cec467-e963-4132-b18b-7d0b3a2e979f
author: "BrucePerlerMS"
---
# WSTrustChannelFactory and WSTrustChannel
If you are already familiar with Windows Communication Foundation (WCF), you know that a WCF client is already federation aware. By configuring a WCF client with a <xref:System.ServiceModel.WSFederationHttpBinding> or similar custom binding, you can enable federated authentication to a service.
WCF obtains the token that is issued by the security token service (STS) behind the scenes and uses this token to authenticate to the service. The main limitation to this approach is that there is no visibility into the client’s communications with the server. WCF automatically generates the request security token (RST) to the STS based on the issued token parameters on the binding. This means that the client cannot vary the RST parameters per request, inspect the request security token response (RSTR) to get information such as display claims, or cache the token for future use.
Currently, the WCF client is suitable for basic federation scenarios. However, one of the major scenarios that Windows Identity Foundation (WIF) supports requires control over the RST at a level that WCF does not easily allow. Therefore, WIF adds features that give you more control over communication with the STS.
WIF supports the following federation scenarios:
- Using a WCF client without any WIF dependencies to authenticate to a federated service
- Enabling WIF on a WCF client to insert an ActAs or OnBehalfOf element into the RST to the STS
- Using WIF alone to obtain a token from the STS and then enable a WCF client to authenticate with this token. For more information, see [ClaimsAwareWebService](https://go.microsoft.com/fwlink/?LinkID=248406) sample.
The first scenario is self-explanatory: Existing WCF clients will continue to work with WIF relying parties and STSs. This topic discusses the remaining two scenarios.
## Enhancing an Existing WCF Client with ActAs / OnBehalfOf
In a typical identity delegation scenario, a client calls a middle-tier service, which then calls a back-end service. The middle-tier service acts as, or acts on behalf of, the client.
> [!TIP]
> What is the difference between ActAs and OnBehalfOf?
>
> From the WS-Trust protocol standpoint:
>
> 1. An ActAs RST element indicates that the requestor wants a token that contains claims about two distinct entities: the requestor, and an external entity represented by the token in the ActAs element.
> 2. An OnBehalfOf RST element indicates that the requestor wants a token that contains claims only about one entity: the external entity represented by the token in the OnBehalfOf element.
>
> The ActAs feature is typically used in scenarios that require composite delegation, where the final recipient of the issued token can inspect the entire delegation chain and see not just the client, but all intermediaries. This lets it perform access control, auditing and other related activities based on the entire identity delegation chain. The ActAs feature is commonly used in multi-tiered systems to authenticate and pass information about identities between the tiers without having to pass this information at the application/business logic layer.
>
> The OnBehalfOf feature is used in scenarios where only the identity of the original client is important and is effectively the same as the identity impersonation feature available in Windows. When OnBehalfOf is used, the final recipient of the issued token can only see claims about the original client, and the information about intermediaries is not preserved. One common pattern where the OnBehalfOf feature is used is the proxy pattern where the client cannot access the STS directly but instead communicates through a proxy gateway. The proxy gateway authenticates the caller and puts information about the caller into the OnBehalfOf element of the RST message that it then sends to the real STS for processing. The resulting token contains only claims related to the client of the proxy, making the proxy completely transparent to the receiver of the issued token.
>
> Note that WIF does not support \<wsse:SecurityTokenReference> or \<wsa:EndpointReferences> as a child of \<wst:OnBehalfOf>. The WS-Trust specification allows for three ways to identify the original requestor (on behalf of whom the proxy is acting). These are:
>
> - Security token reference. A reference to a token, either in the message, or possibly retrieved out of band).
> - Endpoint reference. Used as a key to look up data, again out of band.
> - Security token. Identifies the original requestor directly.
>
> WIF supports only security tokens, either encrypted or unencrypted, as a direct child element of \<wst:OnBehalfOf>.
This information is conveyed to a WS-Trust issuer using the ActAs and OnBehalfOf token elements in the RST.
WCF exposes an extensibility point on the binding that allows arbitrary XML elements to be added to the RST. However, because the extensibility point is tied to the binding, scenarios that require the RST contents to vary per call must re-create the client for every call, which decreases performance. WIF uses extension methods on the `ChannelFactory` class to allow developers to attach any token that is obtained out of band to the RST. The following code example shows how to take a token that represents the client (such as an X.509, username, or Security Assertion Markup Language (SAML) token) and attach it to the RST that is sent to the issuer.
```csharp
IHelloService serviceChannel = channelFactory.CreateChannelActingAs<IHelloService>(clientSamlToken);
serviceChannel.Hello("Hi!");
```
WIF provides the following benefits:
- The RST can be modified per channel; therefore, middle-tier services do not have to re-create the channel factory for each client, which improves performance.
- This works with existing WCF clients, which makes an easy upgrade path possible for existing WCF middle-tier services that want to enable identity delegation semantics.
However, there is still no visibility into the client’s communication with the STS. We’ll examine this in the third scenario.
## Communicating Directly with an Issuer and Using the Issued Token to Authenticate
For some advanced scenarios, enhancing a WCF client is not enough. Developers who use only WCF typically use Message In / Message Out contracts and handle client-side parsing of the issuer response manually.
WIF introduces the <xref:System.ServiceModel.Security.WSTrustChannelFactory> and <xref:System.ServiceModel.Security.WSTrustChannel> classes to let the client communicate directly with a WS-Trust issuer. The <xref:System.ServiceModel.Security.WSTrustChannelFactory> and <xref:System.ServiceModel.Security.WSTrustChannel> classes enable strongly typed RST and RSTR objects to flow between the client and issuer, as shown in the following code example.
```csharp
WSTrustChannelFactory trustChannelFactory = new WSTrustChannelFactory(stsBinding, stsAddress);
WSTrustChannel channel = (WSTrustChannel) trustChannelFactory.CreateChannel();
RequestSecurityToken rst = new RequestSecurityToken(RequestTypes.Issue);
rst.AppliesTo = new EndpointAddress(serviceAddress);
RequestSecurityTokenResponse rstr = null;
SecurityToken token = channel.Issue(rst, out rstr);
```
Note that the `out` parameter on the <xref:System.ServiceModel.Security.WSTrustChannel.Issue%2A> method allows access to the RSTR for client-side inspection.
So far, you’ve only seen how to obtain a token. The token that is returned from the <xref:System.ServiceModel.Security.WSTrustChannel> object is a `GenericXmlSecurityToken` that contains all of the information that is necessary for authentication to a relying party. The following code example shows how to use this token.
```csharp
IHelloService serviceChannel = channelFactory.CreateChannelWithIssuedToken<IHelloService>( token );
serviceChannel.Hello("Hi!");
```
The <xref:System.ServiceModel.ChannelFactory%601.CreateChannelWithIssuedToken%2A> extension method on the `ChannelFactory` object indicates to WIF that you have obtained a token out of band, and that it should stop the normal WCF call to the issuer and instead use the token that you obtained to authenticate to the relying party. This has the following benefits:
- It gives you complete control over the token issuance process.
- It supports ActAs / OnBehalfOf scenarios by directly setting these properties on the outgoing RST.
- It enables dynamic client-side trust decisions to be made based on the contents of the RSTR.
- It lets you cache and reuse the token that is returned from the <xref:System.ServiceModel.Security.WSTrustChannel.Issue%2A> method.
- <xref:System.ServiceModel.Security.WSTrustChannelFactory> and <xref:System.ServiceModel.Security.WSTrustChannel> allow for control of channel caching, fault, and recovery semantics according to WCF best practices.
## See also
- [WIF Features](../../../docs/framework/security/wif-features.md) | 91.868687 | 1,131 | 0.808356 | eng_Latn | 0.997094 |
aa6e32178812de31b3e57c9e0bfcbcff5d51f985 | 103 | md | Markdown | _includes/05-emphasis.md | jimmymac387/markdown-portfolio | 12c082b3a8675245ed470cf5ee7d7c39b784b195 | [
"MIT"
] | null | null | null | _includes/05-emphasis.md | jimmymac387/markdown-portfolio | 12c082b3a8675245ed470cf5ee7d7c39b784b195 | [
"MIT"
] | 5 | 2020-04-22T21:22:37.000Z | 2020-04-22T22:04:11.000Z | _includes/05-emphasis.md | jimmymac387/markdown-portfolio | 12c082b3a8675245ed470cf5ee7d7c39b784b195 | [
"MIT"
] | null | null | null | I :heart: data! I like using open-source technologies like **R** *and* **Python** to learn new things!
| 51.5 | 102 | 0.699029 | eng_Latn | 0.964828 |
aa6e3d05d63bad55cb0e1472051164c8ef904a7f | 3,770 | md | Markdown | README.md | EarthmanT/cloudify-terraform-plugin | 701bb10c6fc14645f4953bf01e8620b241b9f383 | [
"Apache-2.0"
] | 1 | 2019-07-11T09:41:25.000Z | 2019-07-11T09:41:25.000Z | README.md | EarthmanT/cloudify-terraform-plugin | 701bb10c6fc14645f4953bf01e8620b241b9f383 | [
"Apache-2.0"
] | 4 | 2018-12-06T09:51:45.000Z | 2020-03-24T19:48:26.000Z | README.md | EarthmanT/cloudify-terraform-plugin | 701bb10c6fc14645f4953bf01e8620b241b9f383 | [
"Apache-2.0"
] | 3 | 2019-01-02T20:02:15.000Z | 2020-02-19T23:59:25.000Z | **This project is under development**
# Cloudify Terraform Plugin
This plugin provides the following functionality:
* Installation, configuration and uninstallation of Terraform itself
* Terraform executable
* Terraform providers and plugins
* Representation of Terraform modules as Cloudify nodes
* Refreshing Terraform state from the cloud
* Updating Terraform state and applying differences on the cloud
## Prerequisites
The Terraform plugin can work with a pre-existing Terraform installation, or it can create a Terraform
installation for you. In order to use a pre-existing Terraform installation, you will need to provide
the relevant paths when defining the Terraform node templates (see below).
## Module Source Specification
When defining a source for a Terraform URL, you can specify any of the following:
* URL to a Zip file
* URL to a `tar.gz` file
* Path to a Zip file
* Path to a `tar.gz` file
* URL to a Git repository (must end with `.git`)
## Node Types
Two node types are provided:
* `cloudify.nodes.terraform`: represents the Terraform installation
* `cloudify.nodes.terraform.Module`: represents a Terraform module
Refer to the documentation in [plugin.yaml](plugin.yaml) for more information about the node
types' properties.
### `cloudify.nodes.terraform.Module`
This node type represents a Terraform module. Its lifecycle consists of:
* `create`: initializes Terraform by calling `terraform init` and `terraform plan`.
* `configure`: executes `terraform state pull`
* `start`: executes `terraform apply`
* `delete`: executes `terraform destroy`
At the end of `start`, a runtime property by the name `resources` is being set on the node instance,
containing the exact dictionary provided by Terraform, representing the state.
In addition, certain day-two operations are provided:
* `terraform.reload`: reloads the template, either from its original location or from an alternative
location.
* `terraform.refresh`: calls `terraform state pull`.
The `resources` runtime property is updated after each of the aforementioned day-two operations.
## Workflows
The plugin provides the following workflows:
* `refresh_terraform_resources`: a simple wrapper for the `terraform.refresh` operation.
* `reload_terraform_template`: a simple wrapper for the `terraform.reload` operation.
These workflows, by default, call their relevant wrapped operation for all node instances of the
Terraform Module type in the current deployment.
If you have more than one Terraform modules in the same blueprint, you can narrow down the scope of the
workflows by specifying either the `node_instance_ids` or `node_ids` parameters to the workflows.
### Workflow Examples
```bash
cfy executions start refresh_terraform_resources -d dep_1
```
This will execute the "refresh" day-two operation on all node instances inside `dep_1` that represent Terraform
modules.
```bash
cfy executions start refresh_terraform_resources -d dep_1 -p node_ids=[tf_module_1]
```
This will execute the "refresh" day-two operation on all node instances that belong to the `tf_module_1` node
template.
## Blueprint Examples
For official blueprint examples using this Cloudify plugin, please see [Cloudify Community Blueprints Examples](https://github.com/cloudify-community/blueprint-examples/).
## To Do
* Create a Terraform [Backend Service using HTTP Node Type](https://www.terraform.io/docs/backends/types/http.html).
* Package in the plugin w/ a node type.
* The service should run as a daemon.
* Exposing Terraform resources via Terraform outputs should trigger `execute_resource` workflow on those resources.
* This should enable a user to interact with Terraform and Cloudify from Terraform CLI.
* Support Multiple Modules.
| 36.960784 | 171 | 0.781698 | eng_Latn | 0.986966 |
aa6e63656e580afd6dcf13bbc8ccc19fa826dc54 | 11,340 | md | Markdown | README.md | SenorSamuel/ios-cryptocurrency-wallet | f94fd31c309f6808f6039de7f5b1143767c4758e | [
"MIT"
] | 6 | 2019-08-13T03:07:35.000Z | 2021-04-11T23:08:52.000Z | README.md | SenorSamuel/ios-cryptocurrency-wallet | f94fd31c309f6808f6039de7f5b1143767c4758e | [
"MIT"
] | 1 | 2019-02-28T13:06:19.000Z | 2020-12-02T09:43:22.000Z | README.md | SenorSamuel/ios-cryptocurrency-wallet | f94fd31c309f6808f6039de7f5b1143767c4758e | [
"MIT"
] | null | null | null | # iOS CryptoCurrency Wallet
A collection about how to make a safe && user-friendly cryptoCurrency wallet
have fun!

```
100+ Stars: ⭐
200+ Stars: ⭐⭐
500+ Stars: ⭐⭐⭐
1000+ Stars: ⭐⭐⭐⭐
2000+ Stars: ⭐⭐⭐⭐⭐
Click ► to show more details
```
## Tools
[consenlabs/token-core-ios](https://github.com/consenlabs/token-core-ios): TokenCore is a blockchain library. TokenCore provides the relatively consistent API that allows you to manage your wallets and sign transactions in BTC, ETH and EOS chains simultaneously. In addition, TokenCore introduces the concept of 'identity', you can use the same mnemonic to manage wallets on the three chains.
[CryptoNote](https://en.wikipedia.org/wiki/CryptoNote): CryptoNote is an application layer protocol that aims to solve the problems outlined in Bitcoin Core, the protocol behind Bitcoin. The protocol powers several decentralized privacy-oriented cryptocurrencies.
[essentiaone/HDWallet](https://github.com/essentiaone/HDWallet) : Simple Swift library for creating HD cryptocurrencies wallets and working with crypto Coins/ERC20 tokens.
[gxchain/graphene-ios](https://github.com/gxchain/graphene-ios) : Implementation of Graphene protocol in Objective-C (the GXChain SDK)
[TENDIGI/XMRMiner](https://github.com/TENDIGI/XMRMiner) : An embeddable Monero miner written in Swift.
[mikekazakov/xmr-stak-cpu-ios](https://github.com/mikekazakov/xmr-stak-cpu-ios)
[Soneso/stellar-ios-mac-sdk](https://github.com/Soneso/stellar-ios-mac-sdk) : Stellar SDK for iOS & macOS - Swift, Stellar, Horizon, Soneso
## Open-Source iOS Wallets
Trust - Ethereum Wallet and Web3 DApp Browser for iOS
<details ><summary><code>swift</code> <code>ethereum</code> ⭐⭐⭐⭐</summary>
https://github.com/TrustWallet/trust-wallet-ios<br> Added Sep 17, 2017<br> License: [` GPL-3.0`](https://choosealicense.com/licenses/gpl-3.0/)
<a href='https://raw.githubusercontent.com/TrustWallet/trust-wallet-ios/master/resources/iphone_cover.png'><code>Screenshot 1</code></a> <br></details>
breadwallet-ios - The easy and secure bitcoin wallet
<details ><summary><code>swift</code> <code>bitcoin</code> <code>ethereum</code> <code>bch</code> <code>brd</code> <code>SPV</code> ⭐⭐</summary>
https://github.com/breadwallet/breadwallet-ios<br> Added Oct 2, 2016<br> License: [`MIT`](https://choosealicense.com/licenses/mit/)
<a href='https://github.com/breadwallet/breadwallet-ios/raw/master/images/screenshots.jpg'><code>Screenshot 1</code></a> <br></details>
toshi-ios-client - Coinbase Wallet(formly `Toshi`).Private & secure messaging, Ethereum wallet and browser
<details ><summary><code>swift</code> <code>bitcoin</code> <code>ethereum</code> <code>BCH</code> <code>BRD</code>⭐</summary>
https://github.com/CoinbaseWallet/toshi-ios-client<br> Added Jan 8, 2017<br> License: [`GPL-3.0`](https://choosealicense.com/licenses/gpl-3.0/)
<a href='https://raw.githubusercontent.com/tokenbrowser/token-ios-client/master/GitHub/cover.png'><code>Screenshot 1</code></a> <br></details>
Bither Wallet - Simple & secure Bitcoin wallet
<details ><summary><code>objc</code> <code>bitcoin</code> <code>coinwallet</code> ⭐⭐</summary>
https://github.com/bither/bither-ios<br> Added Jul 13, 2014 <br>License: [`Apache-2.0`](https://choosealicense.com/licenses/apache-2.0/)
<br></details>
WeiWallet - Wei Wallet is an open-source Ethereum wallet for iOS
<details ><summary><code>swift</code> <code>ethereum</code> <code>EthereumKit</code> ⭐⭐</summary>
https://github.com/popshootjapan/WeiWallet-iOS<br>Added Mar 11, 2018 <br>License: [`Apache-2.0`](https://choosealicense.com/licenses/apache-2.0/) <a href='https://github.com/popshootjapan/WeiWallet-iOS/blob/master/resources/cover_img.png'><code>Screenshot 1</code></a><br></details>
mymonero-app-ios - The MyMonero native iOS app
<details ><summary><code>swift</code> <code>c++</code> <code>xmr</code> </summary>
https://github.com/mymonero/mymonero-app-ios<br>Added Apr 30, 2017 <br>License: [`Custom License`](https://github.com/mymonero/mymonero-app-ios/blob/master/LICENSE.txt) <a href='https://samuel-image-hosting.oss-cn-shenzhen.aliyuncs.com/SamuelChan/20190403193738.png'><code>Screenshot 1</code></a><br></details>
MixinNetwork - Mixin iOS messenger, wallet and light node to the Mixin Network
<details ><summary><code>swift</code> <code>c</code> <code>im</code></summary>
https://github.com/MixinNetwork/ios-app<br> Apr 22, 2018 <br>License: [`GPL-3.0`](https://choosealicense.com/licenses/gpl-3.0/) <a href='https://samuel-image-hosting.oss-cn-shenzhen.aliyuncs.com/SamuelChan/20190403194228.png'><code>Screenshot 1</code></a><br></details>
BlueWallet - Thin Bitcoin Wallet. Built with React Native and BlockCypher API.
<details ><summary><code>react native</code> ⭐⭐ </summary>
https://github.com/BlueWallet/BlueWallet<br> Apr 22, 2018 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href='https://camo.githubusercontent.com/217b051157e36e002d5bfcfc03816cb3b0cb0e83/68747470733a2f2f692e696d6775722e636f6d2f6848594a6e4d6a2e706e67'><code>Screenshot 1</code></a><br>
Remark:
- When creating a wallet, you can choose between HD SegWit (BIP49 P2SH) and SegWit (P2SH) modes
- The wallet mnemonic can be handled as a QR code, the first time I've seen that approach
</details>
EthersWallet-ios - Ethereum Wallet and Dapp Browser for iOS.
<details ><summary><code>objc</code> <code>ethereum</code> <code>web browser</code> <code>Ropsten testnet support</code> <code>custom JSON-RPC nodes</code> ⭐⭐</summary>
https://github.com/ethers-io/EthersWallet-ios<br>Added Jan 29, 2017 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href='https://samuel-image-hosting.oss-cn-shenzhen.aliyuncs.com/SamuelChan/20190404202926.png'><code>Screenshot 1</code></a><br>
Remark: when entering the mnemonic, it provides a quick word-suggestion view (`MnemonicPhraseView`), **a nice experience**
</details>
arcbit/arcbit-ios - iOS bitcoin wallet
<details ><summary><code>swift</code> <code>objc</code> <code>bitcoin</code> ⭐</summary>
https://github.com/arcbit/arcbit-ios<br>Added Jun 28, 2015 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <br></details>
OracleChain/PocketEOS-IOS - An open-source EOS wallet project.
<details ><summary><code>objc</code> <code>eos</code> ⭐</summary>
https://github.com/OracleChain/PocketEOS-IOS#4<br>Added May 27, 2018 <br>License: [` LGPL-3.0`](https://choosealicense.com/licenses/lgpl-3.0/)<br>
Remark: enterprise version; it cannot be opened
<br></details>
NemProject/NEMiOSApp - NEM wallet app for iOS.
<details ><summary><code>objc</code> <code>swift</code> <code>c</code> <code>nem</code></summary>
https://github.com/NemProject/NEMiOSApp<br> Added Dec 7, 2014 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <br></details>
fotolockr/CakeWallet - Home of Cake Wallet for XMR.
<details ><summary><code>swift</code> <code>objc++</code> <code>xmr</code></summary>
https://github.com/fotolockr/CakeWallet<br>Added Jan 28, 2018 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href=''><code>Screenshot 1</code></a><br></details>
horizontalsystems/bank-wallet-ios - A secure and fully decentralized crypto currency wallet app for iOS users. This wallet uses SPV protocol.
<details ><summary><code>swift</code> <code>btc</code> <code>eth</code> <code>bch</code> <code>spv</code> </summary>
https://github.com/horizontalsystems/bank-wallet-ios<br>Added May 27, 2018 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href='https://github.com/horizontalsystems/bank-wallet-ios/blob/master/Images/BankWalletAllTabs-X-Mockup.jpg'><code>Screenshot 1</code></a><br></details>
nano-wallet-company/nano-wallet-ios - Nano Wallet for iOS
<details ><summary><code>swift</code> <code>btc</code> <code>eth</code> <code>bch</code> <code>spv</code> </summary>
https://github.com/nano-wallet-company/nano-wallet-ios<br>Added Nov 19, 2017 <br>License: [`BSD-2-Clause`](https://github.com/nano-wallet-company/nano-wallet-ios/blob/master/LICENSE) <a href='https://samuel-image-hosting.oss-cn-shenzhen.aliyuncs.com/SamuelChan/20190404213226.png'><code>Screenshot 1</code></a><br></details>
CityOfZion/OzoneWalletIOS - The main repo for the O3 wallet on iOS (NEO)
<details ><summary><code>swift</code> <code>neo</code> </summary>
https://github.com/CityOfZion/OzoneWalletIOS<br>Added Nov 19, 2017 <br>License: [`BSD-2-Clause`](https://github.com/nano-wallet-company/nano-wallet-ios/blob/master/LICENSE) <a href='https://samuel-image-hosting.oss-cn-shenzhen.aliyuncs.com/SamuelChan/20190404213226.png'><code>Screenshot 1</code></a><br></details>
LedgerHQ/ledger-wallet-ios - Ledger Wallet iOS application
<details ><summary><code>swift</code> <code>hardware wallet</code> </summary>
https://github.com/LedgerHQ/ledger-wallet-ios<br>Added Jan 4, 2015 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href='https://samuel-image-hosting.oss-cn-shenzhen.aliyuncs.com/SamuelChan/20190409105507.png'><code>Screenshot 1</code></a><br></details>
Block-Equity/stellar-ios-wallet - A non-custodial, open-source Stellar wallet for iOS.
<details ><summary><code>swift</code> <code>Stellar</code> </summary>
https://github.com/Block-Equity/stellar-ios-wallet<br>Added Mar 11, 2018 <br>License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href='https://camo.githubusercontent.com/1b2e42ac4936e0da08dd4fcf1cba4fed1ad4d725/68747470733a2f2f626c6f636b65712e636f6d2f30316432623438323264363661393961633630616562663266303436623435392e706e67'><code>Screenshot 1</code></a><br>
</details>
AlphaWallet/alpha-wallet-ios - The Wallet Engine for the Web3 World.
<details ><summary><code>swift</code> <code>Ethereum</code> <code>web3</code> </summary>
https://github.com/AlphaWallet/alpha-wallet-ios<br>Added Sep 17, 2017<br>License: [`GPL-3.0`](https://choosealicense.com/licenses/gpl-3.0/) <a href='https://github.com/James-Sangalli/alpha-wallet-ios/raw/master/resources/screens.png'><code>Screenshot 1</code></a><br></details>
cryptape/cyton-ios - The CITA iOS Wallet App.
<details ><summary><code>swift</code> <code>CITA</code> <code>Ethereum</code> </summary>
https://github.com/cryptape/cyton-ios<br>Added May 27, 2018<br> License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href='https://github.com/James-Sangalli/alpha-wallet-ios/raw/master/resources/screens.png'><code>Screenshot 1</code></a><br></details>
wavesplatform/WavesWallet-iOS - Waves Wallet on iOS.
<details ><summary><code>swift</code> <code>Waves</code> </summary>
https://github.com/wavesplatform/WavesWallet-iOS<br>Added Jun 25, 2017<br> License: [`MIT`](https://choosealicense.com/licenses/mit/) <a href='https://camo.githubusercontent.com/b9b3b3ba079c491a15f1b34a968e261883cfb8a9/68747470733a2f2f63646e2d696d616765732d312e6d656469756d2e636f6d2f6d61782f313630302f312a7a72586742305859526a4f5766466b38766b38646b512e706e67'><code>Screenshot 1</code></a><br></details>
## Thanks
This list was inspired by [open-source-ios-apps](https://github.com/dkhamsing/open-source-ios-apps/blob/master/README.md)
Everything is released under the MIT license, so there is absolutely no need to donate anything. If you would like to buy me a coffee though, I certainly won't complain. =)
- Ethereum: `0xEe63d9cA6EcaA334bA0C3311b3dbD5dE0132Ba4C`
- Bitcoin: `3FKCC95zr5MCayJv5WjHNjUi2QzUenF1xC`
This repo is a list of Rosetta scripts compiled with help from Rosetta tutorials, discussion threads, and documentation. Given a complex of two interacting proteins, it allows design of the peptide fragments that contribute the most to the binding energy.
To design a peptide binder that can bind to protein X, we start with the PDB structure of protein X in complex with protein Y. For this example we are using the PDB structure 6M0J, which is a complex between ACE2 and the SARS-CoV-2 spike protein. We will design a peptide fragment that is derived from ACE2 and can bind to the SARS-CoV-2 spike protein.
The following steps use the protocols - PeptiDerive, FlexPepDock, RosettaDock and FlexDDG.
**Step 1**: Clean the PDB file using Notebook 1. This removes anything other than `ATOM` lines from your PDB file.
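As a rough sketch of what this cleaning step does (the repo's Notebook 1 is the actual reference; the function and file names below are made up for illustration):

```python
# Sketch of Step 1: keep only ATOM records from a PDB file.
# Illustration only -- the repo's Notebook 1 is the reference implementation.
def clean_pdb_lines(lines):
    """Return only ATOM records (drops HETATM, ANISOU, REMARK, etc.)."""
    return [line for line in lines if line.startswith("ATOM")]

def clean_pdb(in_path, out_path):
    """Read a PDB file and write a cleaned copy containing only ATOM lines."""
    with open(in_path) as src, open(out_path, "w") as dst:
        dst.writelines(clean_pdb_lines(src))
```

Calling something like `clean_pdb("6m0j.pdb", "6m0j_clean.pdb")` would then produce the cleaned structure used in the following steps.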
**Step 2**: Split chain A and chain E of 6m0j into separate PDB files.
**Step 3**: Relax chain A and chain E pdb files using the fast relax script above. You can read more about the options used here - https://new.rosettacommons.org/docs/latest/application_documentation/structure_prediction/relax
**Step 4**: Using PyMOL and the original cleaned PDB of 6m0j, align the relaxed chains A and E to their respective counterparts in the original PDB, and save the aligned, relaxed chains to a new PDB. We will use this PDB throughout the next steps.
**Step 5**: To obtain the peptide fragments of lengths 10 to 150 that contribute the most to the binding energy, we will run the PeptiDerive script at each length. Utilize the PeptiDerive instructions in the Peptiderive folder. The output is a markdown file that you can convert into a results.csv file containing the sequence and the binding-energy contribution of each linear fragment. To read more about PeptiDerive - https://new.rosettacommons.org/docs/latest/application_documentation/analysis/PeptiDerive . The flags for the protocol are derived from the PeptiDerive paper's supplementary material.
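The markdown-to-CSV conversion mentioned above can be sketched as follows; this is a generic pipe-table parser, and the column names in the docstring are hypothetical, since the exact layout of the PeptiDerive report depends on the run:

```python
import csv
import io

def markdown_table_to_csv(md_text):
    """Convert a simple pipe-delimited markdown table to CSV text.

    Assumes one header row followed by a |---|---| separator row,
    e.g. columns like | sequence | binding_energy | (hypothetical names).
    """
    rows = []
    for line in md_text.splitlines():
        line = line.strip()
        # skip non-table lines and the |---|---| separator row
        if not line.startswith("|") or set(line) <= {"|", "-", " ", ":"}:
            continue
        rows.append([cell.strip() for cell in line.strip("|").split("|")])
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()
```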
**Step 6**: We will dock the peptide fragments against the old spike protein and the new spike protein using RosettaDock and FlexPepDock. The scripts and the README for running these protocols are under the DockingMethods folder.
**Step 7**: Score the models from the docking methods using InterfaceAnalyzer script under Analysis folder.
**Step 8**: After experimental validation, Step 9 and Step 10 will run computational mutagenesis.
**Step 9**: We will run the computational mutagenesis using the backrub xml scripts provided by FlexDDG protocol. To run the mutagenesis across the whole protein, use the scripts in Mutagenesis - you need to provide a path to your pdb, and an output path. FlexDDG creates a db3 file as an output. Since the publication of this paper, the protocol for mutagenesis was changed to utilize FlexDDG github here - https://github.com/Kortemme-Lab/flex_ddG_tutorial
**Step 10**: Use analyze.py in the Mutagenesis folder to obtain a plot of all mutations and their corresponding ddG values.
There is no guarantee that the output of this protocol will correlate one-to-one with experimental results. We also noticed sensitivity to the parameters used for different Rosetta protocols; parameters were chosen using the respective protocol documentation. Please note that FlexDDG takes a long time to run. We are grateful to MIT Media Lab's internal matlabers and MIT SuperCloud for providing us with the computing resources.
UPDATE: This GitHub repo also contains the link to the Google Drive for the data/figures used in the paper - https://www.nature.com/articles/s42003-020-01470-7 . Please note that there is an update to the Fig 3 mutagenesis plots. The new plots were generated with an updated protocol and analysis scripts, and they show that the computational pipeline correctly predicts the mutations that didn’t work in the experimental screens.
---
title: Alexander James Aavang
#role: Masters Student
avatar_filename: avatar
bio: ""
social: []
superuser: false
user_groups: ["Masters Students"]
date: 2012-06-13T14:18:20.451Z
interests:
- Graphical Design of the Structure of Actor Systems
tags:
- UI
- actors
- graphical
- programming
---
Alexander was one of my master's students during my PhD.
The goal of this project is to design new, and express existing, structure of component-based programs in a visual way, enabling the developer to focus on application logic, rather than structure, during development.
---
title: Store the results of a query in memory
description: How to store query results.
ms.date: 11/30/2016
ms.assetid: 5b863961-1750-4cf9-9607-acea5054d15a
ms.openlocfilehash: 66a7a95c74db4062e76c54d4339ccb7343f44067
ms.sourcegitcommit: 7588136e355e10cbc2582f389c90c127363c02a5
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 03/14/2020
ms.locfileid: "65633563"
---
# <a name="store-the-results-of-a-query-in-memory"></a>Store the results of a query in memory

A query is basically a set of instructions that specify how to retrieve and organize data. Queries are lazily executed, because each item in the result is requested sequentially. When you iterate over the results with `foreach`, an item is returned each time it is accessed. To evaluate a query and store its results without executing a `foreach` loop, call one of the following methods on the query variable:
- <xref:System.Linq.Enumerable.ToList%2A>
- <xref:System.Linq.Enumerable.ToArray%2A>
- <xref:System.Linq.Enumerable.ToDictionary%2A>
- <xref:System.Linq.Enumerable.ToLookup%2A>
When you store query results, we recommend that you assign the returned collection object to a new variable, as shown in the following example:
## <a name="example"></a>Example
[!code-csharp[csProgGuideLINQ#25](~/samples/snippets/csharp/concepts/linq/how-to-store-the-results-of-a-query-in-memory_1.cs)]
## <a name="see-also"></a>See also
- [Language Integrated Query (LINQ)](index.md)
---
permalink: /
title: "About me"
excerpt: "About me"
author_profile: true
redirect_from:
- /about/
- /about.html
---
I am a fifth-year PhD student at the Computer Network Information Center, Chinese Academy of Sciences. I was born in 1996. I received my B.S. degree in 2017 from the School of Computing, Wuhan University. My research interests include heterogeneous graph representation learning, the construction and application of scientific and technological knowledge graphs, and domain-specific pre-training models.
# Publications
1. **Ziyue Qiao**, Pengyang Wang, Yanjie Fu, Yi Du, Pengfei Wang, and Yuanchun Zhou. "[Tree Structure-Aware Graph Representation Learning via Integrated Hierarchical Aggregation and Relational Metric Learning](https://arxiv.org/pdf/2008.10003.pdf)." In 2020 IEEE International Conference on Data Mining (ICDM'20)
2. **Ziyue Qiao**, Yi Du, Yanjie Fu, Pengfei Wang, and Yuanchun Zhou. "[Unsupervised author disambiguation using heterogeneous graph convolutional network embedding](https://ieeexplore.ieee.org/abstract/document/9005458/)." In 2019 IEEE international conference on big data (IEEE Big Data'19)
3. **Ziyue Qiao**, Zhiyuan Ning, Yi Du, and Yuanchun Zhou. "[Context-Enhanced Entity and Relation Embedding for Knowledge Graph Completion (Student Abstract)](https://ojs.aaai.org/index.php/AAAI/article/view/17932)." In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI'21)
4. Meng Xiao, **Ziyue Qiao (Equal contribution)**, Yanjie Fu, Yi Du, Pengyang Wang, and Yuanchun Zhou. “[Expert Knowledge Guided Length-Variant Hierarchical Label Generation for Proposal Classification](https://arxiv.org/pdf/2109.06661.pdf).” To appear in 2021 IEEE International Conference on Data Mining (ICDM'21).
5. Ning, Zhiyuan, **Ziyue Qiao (Equal contribution)**, Hao Dong, Yi Du, and Yuanchun Zhou. "[LightCAKE: A Lightweight Framework for Context-Aware Knowledge Graph Embedding](https://arxiv.org/abs/2102.10826)." In Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'21)
6. Tang, Zhengzheng, **Ziyue Qiao (Equal contribution)**, Xuehai Hong, Yang Wang, Fayaz Ali Dharejo, Yuanchun Zhou, and Yi Du. "[Data Augmentation for Graph Convolutional Network on Semi-Supervised Classification](https://arxiv.org/pdf/2106.08848.pdf)." In APWeb-WAIM 2021
# Preprint
1. **Ziyue Qiao**, Yanjie Fu, Pengyang Wang, Meng Xiao, Zhiyuan Ning, Pengfei Wang, Yi Du, and Yuanchun Zhou. “[RPT: Toward Transferable Model on Heterogeneous Researcher Data via Pre-Training](https://arxiv.org/pdf/2110.07336.pdf).” In arXiv preprint arXiv:2110.07336
# Awards
1. 2019 Biendata Competition “OAG–WhoIsWho track 1” **Gold Medal** (1st Place/131 teams)
2. IEEE International Conference on Big Data 2019, **Student Travel Award**
3. 2020 **CSC Scholarship** for joint PhD students
4. 2020 **National Scholarship**
5. 2021 Chinese Academy of Sciences (CAS) **Presidential Scholarship**
We will help you to understand the BLOCKCHAIN!
AUTHOR: [@Antropocosmist](https://t.me/Antropocosmist)
Any question or idea? Send me message! I'm a communicative person)
## cyber•Drop Services

## What is an Airdrop in Reality?

## Can you Understand the Blockchain?

## Even when you're all grown up...

| 28.095238 | 78 | 0.727119 | eng_Latn | 0.884848 |
aa73684cc8d2df61d40a8c908cb014f5ec8e1af7 | 4,758 | md | Markdown | docs/guide/development.md | hangtwenty/vite-plugin-pwa | c7594aef1c28a6545e52613b981ee2a8e3c83fbf | [
"MIT"
] | null | null | null | docs/guide/development.md | hangtwenty/vite-plugin-pwa | c7594aef1c28a6545e52613b981ee2a8e3c83fbf | [
"MIT"
] | null | null | null | docs/guide/development.md | hangtwenty/vite-plugin-pwa | c7594aef1c28a6545e52613b981ee2a8e3c83fbf | [
"MIT"
] | null | null | null | ---
title: Development | Guide
---
# Development
From version `v0.11.13` you can use the service worker on development.
The PWA will not be registered, only the service worker logic, check the details for each strategy below.
> **Warning**: there will be only one single registration on the service worker precache manifest (`self.__WB_MANIFEST`)
when necessary: `navigateFallback`.
The service worker on development will be only available if `disabled` PWA plugin option is not `true` and the `enable`
development option is `true`.
## Setup
To enable the service worker on development, you only need to add the following options to the plugin configuration:
```ts
import { VitePWA } from 'vite-plugin-pwa'
export default defineConfig({
plugins: [
VitePWA({
/* other options */
/* enable sw on development */
devOptions: {
enabled: true
/* other options */
}
})
]
})
```
## Type declarations
```ts
/**
* Development options.
*/
export type DevOptions = {
/**
* Should the service worker be available on development?.
*
* @default false
*/
enabled?: boolean
/**
* The service worker type.
*
* @default 'classic'
*/
type?: WorkerType
/**
* This option will enable you to not register the `runtimeConfig` configured on `workbox.runtimeConfig` option on development.
*
* **WARNING**: this option will only be used when using `generateSW` strategy.
*
* @default false
*/
disableRuntimeConfig?: boolean
/**
* This option will allow you to configure the `navigateFallback` when using `registerRoute` for `offline` support:,
* configure here the corresponding `url`, for example `navigateFallback: 'index.html'`.
*
* **WARNING**: this option will only be used when using `injectManifest` strategy.
*/
navigateFallback?: string
}
```
## generateSW strategy
When using this strategy, the `navigateFallback` on development options will be ignored. The PWA plugin will check if
`workbox.navigateFallback` is configured and will only register it on `additionalManifestEntries`.
The PWA plugin will force `type: 'classic'` on service worker registration to avoid errors on client side (not yet supported):
```shell
Uncaught (in promise) TypeError: Failed to execute 'importScripts' on 'WorkerGlobalScope': Module scripts don't support importScripts().
```
## injectManifest strategy
You can use `type: 'module'` when registering the service worker (right now only supported on latest versions of `Chromium` based browsers: `Chromium/Chrome/Edge`).
> **Warning**: when building the application, the PWA Plugin will always register your service worker with `type: 'classic'` for compatibility with all browsers.
When using this strategy, the plugin will delegate the service worker compilation to `Vite`, so if you're using `import` statements
instead `importScripts` in your custom service worker, you should configure `type: 'module'` on development options.
If you are using `registerRoute` in your custom service worker you should add `navigateFallback` on development options,
the PWA plugin will include it on `self.__WB_MANIFEST`.
You should not use `HMR (Hot Module Replacement)` on your custom service worker, since we cannot use yet dynamic imports on service workers: `import.meta.hot`.
If you register your custom service worker (not using PWA virtual module and configuring `injectRegister: false` or `injectRegister: null`), use the following code (remember also to add `scope` option if necessary):
```js
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register(
import.meta.env.MODE === 'production' ? '/sw.js' : '/dev-sw.js?dev-sw'
);
}
```
If you are also using `import` statements instead `importScripts`, use the following code (remember also to add the `scope` option if necessary):
```ts
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register(
import.meta.env.MODE === 'production' ? '/sw.js' : '/dev-sw.js?dev-sw',
{ 'type': import.meta.env.MODE === 'production' ? 'classic' : 'module' }
);
}
```
When you change your service worker source code, `Vite` will force a full reload, since we're using `workbox-window` to register it
(by default, you can register it manually) you may have some problems with the service worker events:
<HeuristicWorkboxWindow />
## Example
You can find an example here: [vue-router](https://github.com/antfu/vite-plugin-pwa/tree/main/examples/vue-router).
To run the example, you must build the PWA plugin (`pnpm run build` from root folder), change to `vue-router` directory
(`cd examples/vue-router`) and run it:
- `generateSW` strategy: `pnpm run dev`
- `injectManifest` strategy: `pnpm run dev-claims`
| 36.320611 | 215 | 0.722573 | eng_Latn | 0.980232 |