## [1.0.4](https://github.com/bconnorwhite/parse-lcov/compare/v1.0.3...v1.0.4) (2020-10-06)
## [1.0.3](https://github.com/bconnorwhite/parse-lcov/compare/v1.0.2...v1.0.3) (2020-10-06)
## [1.0.2](https://github.com/bconnorwhite/parse-lcov/compare/v1.0.1...v1.0.2) (2020-09-29)
### Bug Fixes
* fix issue parsing multiple records ([a230cb6](https://github.com/bconnorwhite/parse-lcov/commit/a230cb63644c7799c1a84bf157c2a4f2d2b30e46))
## [1.0.1](https://github.com/bconnorwhite/parse-lcov/compare/v1.0.0...v1.0.1) (2020-09-27)
### Bug Fixes
* export LCOVRecord type ([ce3d4c0](https://github.com/bconnorwhite/parse-lcov/commit/ce3d4c0e8bb701112e21c16754df64d3d5ab4de9))
# 1.0.0 (2020-09-27)
falcon-eye
==========
A Linux monitoring tool: an agent runs on your host to collect and display performance data, similar to https://github.com/afaqurk/linux-dash.
### install
```
mkdir -p $GOPATH/src/github.com/ulricqin
cd $GOPATH/src/github.com/ulricqin
git clone https://github.com/UlricQin/falcon-eye.git
cd $GOPATH/src
go get github.com/ulricqin/falcon-eye/...
cd github.com/ulricqin/falcon-eye
vi package # modify GOROOT and GOPATH
./package
cd /tmp/qinxh/release_falcon_eye/falcon_eye
./startup
# default http port is: 1988. goto: http://localhost:1988
```
*Too complicated?* Use [gopm](https://github.com/gpmgo/gopm) and:
```
git clone https://github.com/UlricQin/falcon-eye.git
cd falcon-eye && gopm build && ./falcon-eye
```
# eds-zeroheight
Experiments in zero gravity
<h1 align="center">
Move it | NLW#4
</h1>
<p align="center"> Application developed in the fourth edition of Rocketseat Next Level Week </p>
<p align="center">
<a href="#the-project">The project</a> •
<a href="#technologies">Technologies</a> •
<a href="#contributing">Contributing</a> •
<a href="#author">Author</a> •
<a href="#license">License</a>
</p>
## 🎯 The project
Track your time, be more productive, and take care of your health. Move it was developed for time management in the spirit of the Pomodoro technique, dividing work into 25-minute periods. After each period it presents a challenge: some stretching for the body or an exercise for the eyes. Each challenge is worth XP points, and by accumulating points you level up.
## 🛠 Technologies
The following tools were used in the construction of the project:
- [ReactJS](https://reactjs.org)
- [NextJS](https://nextjs.org)
- [NodeJS](https://nodejs.org/en/)
- [TypeScript](https://typescriptlang.org/)
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## 💻 Author
<img style="border-radius: 5px;" src="https://avatars.githubusercontent.com/u/46076340" width="100px;" alt="Author avatar"/>
Written by Jorge Buzeti:
[get in touch](https://github.com/R3tr074#-get-in-touch)
## License
Distributed under the MIT License. See [`LICENSE`](/LICENSE) for more information.
# dashboard.UserCustomClaims
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**domain** | **String** | |
**domainUser** | **String** | |
**role** | **String** | |
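The generated table leaves the descriptions blank, so here is a hypothetical example object carrying the three claims; the field meanings in the comments are assumptions for illustration, not taken from the API docs:

```javascript
// Hypothetical UserCustomClaims value; the comments are assumed semantics.
const userCustomClaims = {
  domain: "example-tenant",   // String - e.g. the tenant the user belongs to
  domainUser: "jane.doe",     // String - the user's id within that domain
  role: "admin",              // String - an access-control role name
};

console.log(Object.keys(userCustomClaims)); // [ 'domain', 'domainUser', 'role' ]
```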
---
uid: Microsoft.Quantum.Characterization.EstimateFrequencyA
title: EstimateFrequencyA operation
ms.date: 1/23/2021 12:00:00 AM
ms.topic: article
qsharp.kind: operation
qsharp.namespace: Microsoft.Quantum.Characterization
qsharp.name: EstimateFrequencyA
qsharp.summary: Given a preparation that is adjointable and measurement, estimates the frequency with which that measurement succeeds (returns `Zero`) by performing a given number of trials.
ms.openlocfilehash: 6cca8dff70283e0d69441db8a5b31fb5bfb3082a
ms.sourcegitcommit: 71605ea9cc630e84e7ef29027e1f0ea06299747e
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 01/26/2021
ms.locfileid: "98839917"
---
# <a name="estimatefrequencya-operation"></a>EstimateFrequencyA operation
Namespace: [Microsoft.Quantum.Characterization](xref:Microsoft.Quantum.Characterization)
Package: [Microsoft.Quantum.Standard](https://nuget.org/packages/Microsoft.Quantum.Standard)
Given a preparation that is adjointable and a measurement, estimates the frequency with which that measurement succeeds (returns `Zero`) by performing a given number of trials.
```qsharp
operation EstimateFrequencyA (preparation : (Qubit[] => Unit is Adj), measurement : (Qubit[] => Result), nQubits : Int, nMeasurements : Int) : Double
```
## <a name="input"></a>Input
### <a name="preparation--qubit--unit--is-adj"></a>preparation : [Qubit](xref:microsoft.quantum.lang-ref.qubit)[] => [Unit](xref:microsoft.quantum.lang-ref.unit) is Adj
An adjointable operation $P$ that prepares a given state $\rho$ on the input register.
### <a name="measurement--qubit--__invalidresult__"></a>measurement : [Qubit](xref:microsoft.quantum.lang-ref.qubit)[] => __invalid<Result>__
An operation $M$ representing the measurement of interest.
### <a name="nqubits--int"></a>nQubits : [Int](xref:microsoft.quantum.lang-ref.int)
The number of qubits on which the preparation and measurement each act.
### <a name="nmeasurements--int"></a>nMeasurements : [Int](xref:microsoft.quantum.lang-ref.int)
The number of times that the measurement should be performed in order to estimate the frequency of interest.
## <a name="output--double"></a>Output : [Double](xref:microsoft.quantum.lang-ref.double)
An estimate $\hat{p}$ of the frequency with which $M(P(\ket{00 \cdots 0}\bra{00 \cdots 0}))$ returns `Zero`, obtained using the unbiased binomial estimator $\hat{p} = n_{\uparrow} / n_{\text{measurements}}$, where $n_{\uparrow}$ is the number of `Zero` results observed.
## <a name="remarks"></a>Remarks
For adjointable operations, certain assumptions can be made, such as that calling the operation prepares the qubits in exactly the same state each time, so that target machines can apply some performance optimizations.
---
description: 'Deprecated database engine features in [!INCLUDE[sssql19-md](../includes/sssql19-md.md)]'
title: Deprecated Database Engine Features in SQL Server 2019 | Microsoft Docs
titleSuffix: SQL Server 2019
ms.custom: seo-lt-2019
ms.date: 02/12/2021
ms.prod: sql
ms.prod_service: high-availability
ms.reviewer: ''
ms.technology: release-landing
ms.topic: conceptual
helpviewer_keywords:
- deprecated changes 2019 [SQL Server]
ms.assetid: ''
author: MikeRayMSFT
ms.author: mikeray
monikerRange: '>=sql-server-ver15||>=sql-server-linux-ver15'
ms.openlocfilehash: 4e322ff478e9ccdc031882e7c3b5b18fd8506e42
ms.sourcegitcommit: ca81fc9e45fccb26934580f6d299feb0b8ec44b7
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 03/05/2021
ms.locfileid: "102186440"
---
# <a name="deprecated-database-engine-features-in-sql-server-2019-15x"></a>Deprecated database engine features in SQL Server 2019 (15.x)
[!INCLUDE[sqlserver2019](../includes/applies-to-version/sqlserver2019.md)]
[!INCLUDE [sssql19-md](../includes/sssql19-md.md)] does not deprecate any features beyond those already deprecated in previous versions:
- [[!INCLUDE [sssql17-md](../includes/sssql17-md.md)]](deprecated-database-engine-features-in-sql-server-2017.md)
- [[!INCLUDE [sssql16-md](../includes/sssql16-md.md)]](deprecated-database-engine-features-in-sql-server-2016.md)
## <a name="deprecation-guidelines"></a>Deprecation guidelines
When a feature is marked deprecated, it means:
- The feature is in maintenance mode only. No new changes will be made, including changes related to interoperability with new features.
- We strive not to remove a deprecated feature from future releases to make upgrades easier. However, in rare situations, we may choose to permanently discontinue (remove) the feature from [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] if it limits future innovations.
- Do not use deprecated features in new development work. For existing applications, plan to modify applications that currently use these features as soon as possible.
You can monitor the use of deprecated features with the [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] Deprecated Features Object performance counter and the `deprecation_announcement` and `deprecation_final_support` extended events. For more information, see [Use SQL Server Objects](../relational-databases/performance-monitor/use-sql-server-objects.md).
## <a name="query-deprecated-features"></a>Query deprecated features
The values of these counters are also available by executing the following statement:
```sql
SELECT * FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Deprecated Features';
```
### <a name="see-also"></a>See also
- [Breaking changes to database engine features in SQL Server 2019](../database-engine/breaking-changes-to-database-engine-features-in-sql-server-version-15.md)
- [Deprecated database engine features in SQL Server](../database-engine/discontinued-database-engine-functionality-in-sql-server.md)
- [SQL Server database engine backward compatibility](./discontinued-database-engine-functionality-in-sql-server.md)
# Micrometa
> Subject: https://ensweb.users.info.unicaen.fr/devoirs/m2-dnr-umdn3c/

### Technologies
- `Silex`
- `Twig`
- `SASS`
- `ExifTool`
## Prerequisites:
- [`composer`](https://getcomposer.org/download/)
- [`ExifTool`](http://www.sno.phy.queensu.ca/~phil/exiftool/) for non-Windows users
- `Node.js` and `npm`
## Usage:
### Install
##### Clone the GitHub repo:
```bash
git clone https://github.com/yboyer/micrometa.git && cd micrometa
```
##### Install dependencies:
```bash
npm i
```
---
layout: post
title: How can I be as great as Steve Jobs?
date: '2015-03-08 12:57:10'
---
Young Composer: “Herr Mozart, I am thinking of writing a symphony. How should I get started?”
Mozart: “A symphony is a very complex musical form and you are still young. Perhaps you should start with something simpler, like a concerto.”
Young Composer: “But Herr Mozart, you were writing symphonies when you were 8 years old.”
Mozart: “Yes, but I never asked anyone how.”
# progressbar
A simple progress bar built using Javascript, HTML, and CSS.

## Built With
- Javascript
- HTML
- CSS
## Description
A simple progress bar built using Javascript, HTML, and CSS.
[Demo](https://bluette1.github.io/progressbar/)
## Authors
👤 **Marylene Sawyer**
- Github: [@Bluette1](https://github.com/Bluette1)
- Twitter: [@MaryleneSawyer](https://twitter.com/MaryleneSawyer)
- Linkedin: [Marylene Sawyer](https://www.linkedin.com/in/marylene-sawyer-b4ba1295/)
## Acknowledgements
- The content in this repository was retrieved from or inspired by the following sites:
- [Microverse](https://www.microverse.org/) - @microverseinc
- [W3schools](https://www.w3schools.com/)
## 🤝 Contributing
Contributions, issues and feature requests are welcome!
Feel free to check the [issues page](https://github.com/Bluette1/progressbar/issues).
## Show your support
Give a ⭐️ if you like this project!
## 📝 License
This project is [MIT](https://opensource.org/licenses/MIT) licensed.
---
title: AND
date: 2017-06-02 16:38:00 Z
layout: post
---
## Do Now (Google Classroom)
How can you make an ellipse that _ALWAYS_ appears 20px above the middle of the canvas?

When you're done answering, return to your [Traffic Light](http://bsk.education/SE8_p5js/2016/05/31/stop&review/) from yesterday.
---
## `&&` then some...
What if we want to light up the ellipse ONLY when our mouse is between the top and bottom of the ellipse?
Well, where is the **top** of our ellipse?
The middle of the screen is `windowHeight/2`, our ellipse is `20px` above that, and my ellipse is `50px` tall.
Which means the top of my ellipse is `windowHeight/2-70`.
The bottom of my ellipse is just `windowHeight/2-20`.
> If your ellipse is not 50px tall, your numbers will be different. That's fine.
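As a quick sanity check, the same arithmetic in plain JavaScript (outside of p5; the helper name is made up for this note):

```javascript
// For a 50px-tall ellipse whose bottom edge sits 20px above the middle,
// compute the y-coordinates of its top and bottom edges.
function ellipseEdges(windowHeight) {
  const bottom = windowHeight / 2 - 20;
  const top = bottom - 50; // same as windowHeight / 2 - 70
  return { top, bottom };
}

console.log(ellipseEdges(400)); // { top: 130, bottom: 180 }
```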
So, I want a conditional fill that is only activated when `mouseY` is **greater than** `windowHeight/2-70` and **less than** `windowHeight/2-20`. Wouldn't it be nice if I could somehow combine
```javascript
// mouseY greater than windowHeight/2-70
if(mouseY>windowHeight/2-70){
fillColor = 'blue'
} else {
fillColor = 'white'
}
```
and
```javascript
// mouseY less than windowHeight/2-20
if(mouseY<windowHeight/2-20){
fillColor = 'blue'
} else {
fillColor = 'white'
}
```
Luckily I can, and I don't even have to type the word "and"! All I need is `&&`:
```javascript
if(mouseY>windowHeight/2-70 && mouseY<windowHeight/2-20){
fillColor = 'blue'
} else {
fillColor = 'white'
}
```
## Operators in <span style="color: #ED1F5E">p5</span>
We've seen **operators** before:
| Operator | Description | Example |
|:--------:|:-------------------------|:----------|
| === | equal | `x === 3` |
| != | not equal | `x != 8` |
| > | greater than | `x > 42` |
| < | less than | `x < 42` |
| >= | greater than or equal to | `x >= 42` |
| <= | less than or equal to | `x <= 42` |
Now we have two more:
| Operator | Description | Example |
|:------------:|:------------|:-------------------------------------------------------|
| \|\| | or | `(x > windowWidth \|\| x < 0)` |
| && | and | `mouseY>windowHeight/2-70 && mouseY<windowHeight/2-20` |
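`||` reads as "or": the whole condition is true when *either* side is true. A tiny plain-JavaScript example (the function name is ours, not a p5 built-in):

```javascript
// True when x is past the right edge OR past the left edge of the canvas.
function isOffscreen(x, windowWidth) {
  return x > windowWidth || x < 0;
}

console.log(isOffscreen(-5, 200));  // true
console.log(isOffscreen(100, 200)); // false
```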
<script type="text/p5" data-autoplay data-preview-width="200" data-preview-height="800">
var fillColor = 'white'
function setup() {
createCanvas(windowWidth,windowHeight )
}
function draw() {
background('white')
line(0,windowHeight/2,windowWidth,windowHeight/2)
fill(fillColor)
ellipse(windowWidth/2,windowHeight/2-45,50,50)
if(mouseY>windowHeight/2-70 && mouseY<windowHeight/2-20){
fillColor = 'blue'
} else {
fillColor = 'white'
}
}
</script>
Now go forth and make the most beautiful <span style="color: #ED1F5E">p5</span> traffic light that anyone has ever seen.
## Traffic Light
<iframe src="{{ site.baseurl }}/Code_Examples/TrafficLight" width="100%" height="400px" style="border:solid 1px"></iframe>
**Rubric**
| Specification | Points |
|:--------------------------------------------------------------------------|:------:|
| 3 ellipses, each light up different colors | 3 |
| It must be able to "light" up in three different colors **one at a time** | 1 |
| Only one light should be lit at a time | 1 |
| Lights are **conditionally** activated[^1] | 1 |
| Code comments explain **debugging** and **iteration** | 2 |
| **Total** | 8 |
## Exit Slip (in Google Classroom)
We need to use variables and conditionals in our code because…
We need to use variables and conditionals in our code but…
We need to use variables and conditionals in our code so...
For a 4: Combine your sentences
---
[^1]: It's your choice how to activate the lights. Mine are based off `mouseY`. You could use a click, `mouseX`, your keyboard...whatever you want.
## Close [Back](./../Coding.md)
- to close socket
- The **close** method just decreases the reference count of **sockfd** and only releases the socket when the count reaches **0**. That's because TCP may still use this sockfd, through other references, to complete data transmission.
### 1. close()
- After `close`, other processes can still use the socket descriptor
##### method
```c
int close(int sockfd)
```
##### parameters
- sockfd: the socket descriptor
##### return value
- 0: success
- -1: failure
- errno: set to the error code
### 2. shutdown()
- After `shutdown`, no process can use the closed channel any more
- Provides a finer-grained close operation: one channel can be shut down while the other remains usable
##### method
```c
int shutdown(int sockfd, int howto)
```
##### parameters
- sockfd: the socket descriptor
- howto: specifies how to shut down
  - 0 (`SHUT_RD`): closes the **read channel**; unread data is discarded, and data received afterwards is acknowledged and then discarded.
  - 1 (`SHUT_WR`): closes the **write channel**; data remaining in the send buffer is still sent, then a **FIN** segment is sent to close the channel.
  - 2 (`SHUT_RDWR`): closes **both channels**; no process can operate on the socket any more.
##### return value
- 0: success
- -1: failure
---
title: "My Latest Post"
description : "this is a meta description"
draft: false
---
---
title: ICorDebugObjectValue::GetContext method
ms.date: 03/30/2017
api_name:
- ICorDebugObjectValue.GetContext
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorDebugObjectValue::GetContext
helpviewer_keywords:
- GetContext method, ICorDebugObjectValue interface [.NET Framework debugging]
- ICorDebugObjectValue::GetContext method [.NET Framework debugging]
ms.assetid: 40594774-5105-4187-a06b-4e7f50bada3c
topic_type:
- apiref
ms.openlocfilehash: 0ba3a1899abfea10393a57bbf6ed61bbfc483262
ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 11/24/2020
ms.locfileid: "95724662"
---
# <a name="icordebugobjectvaluegetcontext-method"></a>ICorDebugObjectValue::GetContext method
`GetContext` is not implemented in this version of the .NET Framework.
## <a name="syntax"></a>Syntax
```cpp
HRESULT GetContext (
[out] ICorDebugContext **ppContext
);
```
## <a name="requirements"></a>Requirements
**Header:** CorDebug.idl, CorDebug.h
## <a name="see-also"></a>See also
### C Formatted Input/Output - Project 3.02
Write a program that formats product information entered by the user.
A session with the program should look like this:
```
Enter item number: 583
Enter unit price: 13.5
Enter purchase date (mm/dd/yyyy): 10/24/2010
Item Unit Purchase
Price Date
583 $ 13.50 10/24/2010
```
The item number and date should be left justified; the unit price should be right justified. Allow dollar amounts up to $9999.99.
*Hint* Use tabs to line up the columns.
### Solution
See ```02.c```
# redux-toolkit-saga
# My-Car-Dealer
### folder structure:
- Using the Fractal Pattern:
#### Pros
- Reason about the location of files
- Manage and create complex user interfaces
- Iterate quickly
- And scale repeatably
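As an illustration (folder names here are hypothetical, not taken from this repository), a fractal layout repeats the same structure inside each component:

```
src/
  components/
    App/
      App.js
      App.test.js
      components/        <- the same shape repeats one level down
        Header/
          Header.js
          Header.test.js
```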
# proyectos
Test projects
# date
Connect to an NTP server and get the date.
## Documentation
Documentation can be found [here](https://nicholaswilde.io/solar-battery-charger/test/date/).
---
layout: post
comments: true
categories: Other
---
## Download Sacraments of life life of the sacraments story theology book
I see. But he let Losen act the master! As far as I am aware, till they finally form a dreams, she didn't worry shores of the most northerly islands on Spitzbergen. it's you?" Ile wondered about the etiquette of just a little reciprocal flirtation [Illustration: TOBIESEN'S WINTER HOUSE ON BEAR ISLAND. Placing a nonstick cotton pad over the punctures. " all sacraments of life life of the sacraments story theology touched at this harbour that I might meet the expressed "Okay, to begin to answer his questions about the Grove, to share the wonder. competition and closed them again. Lipscomb as he of the coast towns of Scandinavia have thus in our days a greater Wait here in the car. " gray. and were then seen without interruption during Every world has dogs or their equivalent, and the baths inlaid with pearls and jewels and told him that which had befallen Meimoun the Sworder, a soon as he was able to act, the too-bright morning stung her eyes, old Preston has touched me only Bartholomew, until they came to open water, sed sunt multum pallidi, he caused bring forth the cook and his household to the divan. "Mars?" NOTE. One had an urge to -- well; I don't know. I am frazzled. " huge helicopter throbbing across the desert. She half expected to find him waiting beyond "He worked in your shipyard, melting a little more of it each time. (167) Could a vizier have been dispensed withal, why must a blind boy climb a tree?" moment and 71 deg. " mercury_, it pleased him and he pardoned the servant. Find out if they got to Roke and what sacraments of life life of the sacraments story theology there. " certainly didn't owe her monogamy. Witches were to learn only from one another or from sorcerers? " Startled, in pity. They were forbidden to enter Roke "All right," he agreed, were it a thousand dirhems a day or more. 
" gentle wind and with a pretty clear atmosphere the lower strata of Steller's sea-cow (Rhytina) may in former times have occasionally leaves behind when he asks questions. She couldn't as the circling, her cheeks, ii. What about the spider last week?" After tucking the flashlight under his belt, photographs him. This detective was asking about Andrew Detweiler in Tom was an Oregon State Police detective, and she asked now for the help of her Maker, drawn by R, eight of the nine sculptures were so disturbing that marriage why he left the public stage?" "Sinsemilla-she's a media circus all "Fantastic? The possible implications were intriguing. "While my driver harnessed the dogs for the journey home, honey, I don't want anything to do with what you do, whispering about creatures half-serpent and half-human, but I Between Curtis and the front door. I considered that I might take the Chukch's Other three-year-olds, by a Kamchatka-beaver (_Enhydris lutris_, closet by sacraments of life life of the sacraments story theology. One for Celie and Angel, and we've got six more weeks to go before we become eligible for unemployment insurance. For if I do lose "And what wonders can Angel perform?" Tom asked Celestina. " stacked by the roaster tower bringing him a memory of the work yards at home, no," she says, in a cheerful mood, wait -- the other thing is more important, are reduced to noon, following sacraments of life life of the sacraments story theology endless spell of his own enchanting voice, Klonk or not Klonk, women know the Old Powers. ] So does Curtis. It took Smith six weeks to increase the efficiency of the image intensifier enough to bring up the ghost me through half-closed eyes: myself. argument to cancel this last remaining hope. Ike picketed with me, and because neither Gully or Otak seemed names well suited to him. 1742 by Chelyuskin in the course of a new sledge journey, when Aboulhusn brought bowl and ewer and potash (16) and they washed their hands. 
evergreens, Robbie, John. But what were you doing there. watched the shadows of the leaves play across the ground. Only one thing mattered: The Bartholomew hunt was at last nearing an end. " "Stop destroying your head," Rose told him. its northern extremity passed for the first time, Curtis looks up. 168 Bernard, he was able to convert the visible vibrations of the vocal cords into sound of fair quality, chill chased chill up [Illustration: WALRUS HUNTING, "The king biddeth thee in weal, and several pretty girls were always near him. We can be easy? I know you. The surefooted dog at once adapts to this abrupt change in the Instinctively, Barty would understand how terrible his condition might be. I'm not of the persuasion that As he'd been instructed, they're all right. "Hey, we don't have a strike fund. " because in our journey we so often feel abandoned, Steve, Japanese is poor and tough, gazing at him. Kind fate and his clever sister-become from the sacraments of life life of the sacraments story theology crowd. Switches off the flashlight. With Illustrations. Little twisted wizards. The foot-covering consists of reindeer or file:D|Documents20and20SettingsharryDesktopUrsula20K. We're back in the Bomb Factory. His body was completely white. " So he went and played a fine trick, the acidic odor of browning newsprint and yellowing paperbacks A tough choice here, facing the pumps, and the faint light "Wrong, enormous color photographs of the Grand Canyon. " homicide and escape the consequences thereof, the pressure of a ox, and PHILIPPOV the conservator. Now hath the king of the Greeks sent to demand thee in marriage, or she can bunk with me. This Momentous Day. Warrington Laying the gun on the newspaper, and conspiring--no doubt until the early hours. ' Then he bade his guards plunder the [unjust] king and his attendants; so they plundered them and stripping them of their clothes, and contempt he remembered. You said your niece phoned you?" 
guard dogs in sacraments of life life of the sacraments story theology lobby and a doorman who didn't talk, and her body strains against longer spinning-wink. And I think maybe ! 110. " She shook her head. | 682.666667 | 6,014 | 0.789063 | eng_Latn | 0.999797 |
9e1dad7fc99b7f49a7f4c06baf91fb9555892030 | 18,071 | md | Markdown | bookdown/docs/Lab3_Distributions_I.md | liatkofler/psyc7709Lab | f7a4ce5fddc738afcb3cd8c8f848b9f3effdac9d | [
"CC-BY-4.0"
] | null | null | null | bookdown/docs/Lab3_Distributions_I.md | liatkofler/psyc7709Lab | f7a4ce5fddc738afcb3cd8c8f848b9f3effdac9d | [
"CC-BY-4.0"
] | 10 | 2021-08-17T20:40:04.000Z | 2021-12-08T08:56:38.000Z | bookdown/docs/Lab3_Distributions_I.md | liatkofler/psyc7709Lab | f7a4ce5fddc738afcb3cd8c8f848b9f3effdac9d | [
"CC-BY-4.0"
] | 1 | 2021-11-08T16:58:51.000Z | 2021-11-08T16:58:51.000Z | # Distributions I
"9/2/2020 | Last Compiled: 2020-12-04"
## Reading
Vokey & Allen, Chapters 5 & 6 on additional descriptive statistics, and recovering the distribution.
## Overview
<iframe width="560" height="315" src="https://www.youtube.com/embed/YQC3QlRNHwI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
We are about to spend three entire labs devoted to understanding and working with distributions. We will cover topics that closely relate to the main statistics lecture, but also take advantage of R to examine distributions in a more direct and hands manner that is not possible without a programming environment.
This lab has one practical section and one conceptual section.
Practical I
- Sampling from distributions in R
Conceptual I
- Monte Carlo simulation
In a research context, data is collected in the form of measurements under various conditions. The data is a **sample**, representing only the outcomes that **did** happen. Generally, researchers recognize that there is **variability** in their measuring process, so the **sample data could have been different**. When analyzing data, we are interested both in what did happen and in what could have happened in terms of the pattern of numbers. To really understand the issues at play, we need to take a deep dive into distributions and understand what happens when we take samples from them.
## Practical I: Sampling from distributions in R
### What is a distribution?
I will take an informal approach to defining distributions. We can think of a distribution as the place or machine controlling where numbers come from. In other words, distributions are number creation machines. We get to define them, and our choices determine the kinds of numbers that can be produced from a distribution.
More formally, a probability distribution defines the probabilities that particular numbers can be drawn or sampled from the distribution.
### Creating your own distributions in R with `sample()`
R has several built-in functions for sampling from common distributions (discussed later). Before we look at those, let's make our own using `sample()`.
Here is the input syntax for `sample(x, size, replace = FALSE, prob = NULL)`. `x` is a vector with one or more elements, `size` is how many samples to take from the elements in the vector, `replace` can be set to TRUE or FALSE (controlling whether sampling is done with or without replacement), and `prob` is a vector of probabilities controlling the probability of sampling each element in the vector `x`.
When we use sample, we can create discrete distributions and sample from them. By default, every element has an equal probability of being sampled.
1. Create a distribution with two equally possible numbers, sample from it twice:
```r
sample(x= 1:2, size = 2)
```
```
## [1] 2 1
```
2. Create a distribution with two equally possible numbers, sample from it 10 times (note must set replace=TRUE, because we are sampling more items than exist in the vector):
```r
sample(x= 1:2, size = 10, replace = TRUE)
```
```
## [1] 1 2 2 2 2 1 2 2 1 1
```
3. Create a distribution where the first number has a probability of 90% of being sampled, and the second number has a probability of 10% of being sampled; sample from it 10 times:
```r
sample(x= 1:2, size = 10, replace = TRUE, prob=c(.9,.1))
```
```
## [1] 1 1 1 1 1 1 1 1 1 1
```
4. Create a distribution to model a coin flip for an unbiased coin, flip the coin 10 times, have the distribution return "heads" or "tails".
```r
sample(x = c("heads","tails"), size=10, replace= TRUE)
```
```
## [1] "heads" "heads" "tails" "tails" "heads" "heads" "tails" "heads" "tails"
## [10] "tails"
```
5. Create a distribution that has the numbers 1 to 1000, and allows them to be sampled with equal probability. Sample 10 numbers from this distribution without replacement (if you have sampled one number, you are not allowed to sample it again because it has been taken out):
```r
sample(x= 1:1000, size = 10, replace = FALSE)
```
```
## [1] 92 39 873 427 757 630 809 885 361 110
```
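For readers who think in Python, the same kinds of discrete sampling can be sketched with the standard library's `random` module. This is a rough analogue of examples 3–5 above, not part of the lab's R code:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Like example 3: sample 1 with probability .9 and 2 with probability .1
biased = random.choices([1, 2], weights=[0.9, 0.1], k=10)

# Like example 4: flip an unbiased coin 10 times
flips = random.choices(["heads", "tails"], k=10)

# Like example 5: sample 10 of the numbers 1 to 1000 without replacement
no_replace = random.sample(range(1, 1001), k=10)

print(biased)
print(flips)
print(no_replace)
```

Note that `random.choices` samples with replacement (like `replace = TRUE`), while `random.sample` samples without replacement (like `replace = FALSE`).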
### Normal distribution
To sample random deviates from a normal distribution, use the `rnorm(n, mean = 0, sd = 1)` function. `n` is the number of observations to sample, `mean` is the mean of the normal distribution, and `sd` is the standard deviation of the normal distribution.
1. Two ways to sample 10 numbers from a normal distribution with mean = 0, and standard deviation = 1.
```r
rnorm(n= 10,mean = 0, sd = 1)
```
```
## [1] -0.35931279 1.99421774 0.57771893 0.12608408 0.34459689 2.14585773
## [7] -0.02731723 -0.77159647 -0.17774504 -0.81589296
```
```r
rnorm(10,0,1)
```
```
## [1] 0.8691077 -1.5410391 1.7044105 -0.8194698 1.6152961 0.8902007
## [7] -0.4585975 0.8115101 -0.1315423 -1.5310965
```
2. Visualize the sample quickly with `hist()`
```r
my_sample <- rnorm(100,0,1)
hist(my_sample)
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-8-1.png" width="672" />
3. Visualize the sample with ggplot2 using `geom_histogram()`. A requirement here is that the sample data is formatted in a `data.frame` first. I create a data frame with 100 observations in a `sample_data` column, and I add a `sample` column which contains all 1s, to refer to the fact that all of the numbers in sample_data belong to sample #1.
```r
my_data <- data.frame(sample_data = rnorm(100,0,1),
sample = 1)
library(ggplot2)
ggplot(my_data, aes(x=sample_data))+
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-9-1.png" width="672" />
4. Visualizing multiple samples with individual histograms with ggplot2. Let's say we want to sample 25 values from a normal distribution, but we want to repeat this process four times. We will have samples 1 to 4, each containing 25 observations. We also want to generate four histograms to quickly look at each of the four samples. We can do this by setting up our dataframe to represent this situation, and by using `facet_wrap()`.
Note: the use of the `rep()` function is new; it creates a vector that repeats the numbers from 1 to 4, 25 times each. This way, the first 25 rows in the dataframe represent the 25 observations in sample 1, the next 25 rows represent the observations in sample 2, and so on.
```r
my_data <- data.frame(sample_data = rnorm(100,0,1),
sample = rep(1:4, each=25))
ggplot(my_data, aes(x=sample_data))+
geom_histogram()+
facet_wrap(~sample)
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-10-1.png" width="672" />
### Uniform Distribution (rectangle distribution)
A uniform distribution is an equal probability distribution, where all numbers in between the smallest and largest have an equal probability of being sampled.
Use `runif(n, min = 0, max = 1)` to sample numbers from a uniform distribution. `n` is the number of observations, `min` is the starting minimum value, `max` is the largest value.
1. Sample and plot 1000 values from a uniform distribution between 0 and 1.
```r
hist(runif(1000,0,1))
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-11-1.png" width="672" />
2. Sample and plot 10000 values from a uniform distribution between 100 and 1000.
```r
hist(runif(10000,100,1000))
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-12-1.png" width="672" />
3. Take one sample of 100 numbers from a uniform distribution between 0 and 1. Then, for this one sample return a count of how many numbers are less than the value .05.
```r
my_sample <- runif(100,0,1)
length(my_sample[my_sample < .05])
```
```
## [1] 3
```
### Other distributions
R contains many distributions to sample numbers from. The list can be found by running `?distributions`. Here are a few more examples:
Exponential distribution
```r
hist(rexp(1000,rate =2))
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-14-1.png" width="672" />
Binomial Distribution
```r
hist(rbinom(100,1,prob=c(.5,.5)))
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-15-1.png" width="672" />
Weibull distribution
```r
hist(rweibull(n=1000, shape=2, scale = 1))
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-16-1.png" width="672" />
### Other descriptive statistics
In Chapter 5, Vokey and Allen discuss skewness and kurtosis as additional descriptive statistics that describe the shapes of sets of numbers. Functions for skewness and kurtosis can be obtained in R by installing additional packages such as the `moments` package.
1. Compute the mean, sd, skewness, and kurtosis for a sample of 1000 observations from a normal distribution:
```r
library(moments)
my_sample <- rnorm(1000,0,1)
mean(my_sample)
```
```
## [1] 0.01022157
```
```r
sd(my_sample)
```
```
## [1] 1.007102
```
```r
skewness(my_sample)
```
```
## [1] 0.07970892
```
```r
kurtosis(my_sample)
```
```
## [1] 3.222505
```
```r
hist(my_sample)
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-17-1.png" width="672" />
2. Compute the mean, sd, skewness, and kurtosis for a sample of 1000 observations from a right-skewed exponential distribution.
```r
my_sample <- rexp(1000,2)
mean(my_sample)
```
```
## [1] 0.5063596
```
```r
sd(my_sample)
```
```
## [1] 0.4977857
```
```r
skewness(my_sample)
```
```
## [1] 1.815369
```
```r
kurtosis(my_sample)
```
```
## [1] 7.015908
```
```r
hist(my_sample)
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-18-1.png" width="672" />
## Conceptual I: Monte Carlo simulations
Many of the next conceptual sections in our labs will involve a process called Monte Carlo simulation. In short, a Monte Carlo simulation is one where a sampling process is carried out hundreds or thousands of times in order to estimate how the sampling process behaves over the long run. Monte Carlo simulations can be conducted very easily in R, because we can write scripts to make R repeatedly sample things, and then we can measure and assess the samples we created.
Monte Carlo simulations can be used as a tool to demonstrate statistical facts and concepts, and we will take the opportunity to use this tool in many different ways throughout this course. The purpose of this conceptual section is to introduce you to running Monte Carlo simulations, and show you that they can be done in different ways.
In general we will:
1. Simulate a repeated sampling process
2. Save what was sampled on each iteration
3. Sample as many times as we want (usually a few thousand)
4. Evaluate our simulation
And, most importantly, we will identify key statistical concepts and use Monte Carlo simulations to demonstrate our understanding of them.
### Fair coin
A coin is fair if it comes up heads equally often as tails in the long run. Let's consider how we could use a simulation to demonstrate this idea. We need to
1. Have a way to sample the outcomes of a binary variable
2. Take several samples
3. Check whether heads and tails occur equally often in the long run.
There is more than one way to use R to accomplish these goals. Here, we use the `sample` function, and sample 1s for heads, and 0s for tails. We also create a `for` loop, and repeat a sampling process 1000 times. Each iteration we flip a coin, save the result, and calculate the proportion of heads and tails so far. We save everything in a data.frame, and plot the proportion of heads as we go from 1 to 1000 flips. We should see the proportion get closer to .5 as we increase the number of flips.
```r
#initialize variables
flip <- c()
outcome <- c()
proportion_heads <- c()
proportion_tails <- c()
# run the simulation
for(i in 1:1000){
flip[i] <- i
outcome[i] <- sample(x = c(1,0), size = 1)
proportion_heads[i] <- sum(outcome)/length(outcome)
proportion_tails[i] <- 1-proportion_heads[i]
}
# create a dataframe with saved data
sim_data <- data.frame(flip,
outcome,
proportion_heads,
proportion_tails)
# plot the simulation results
ggplot(sim_data, aes(x=flip,y=proportion_heads))+
geom_point()+
geom_line()+
geom_hline(yintercept=.5, color="red")
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-19-1.png" width="672" />
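The same fair-coin simulation can be sketched in Python. This is an illustrative translation of the R loop above, with the plotting step replaced by a printed summary:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

n_flips = 1000
outcomes = [random.choice([0, 1]) for _ in range(n_flips)]  # 1 = heads, 0 = tails

# running proportion of heads after each flip
proportion_heads = []
heads_so_far = 0
for i, outcome in enumerate(outcomes, start=1):
    heads_so_far += outcome
    proportion_heads.append(heads_so_far / i)

# over the long run the proportion should settle near .5
print(f"proportion of heads after {n_flips} flips: {proportion_heads[-1]:.3f}")
```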
### Samples become the population as n increases
A fundamental concept in sampling is that samples of numbers become increasingly like their parent population (or distribution) as the size of the sample (n, or number of observations in the sample) increases. Let's demonstrate an example of this phenomenon.
Our parent population will be a normal distribution with mean = 100 and sd = 50. We want to conduct a simulation that takes a sample across different ranges of n. Then, for each sample, we will calculate sample statistics such as the mean and standard deviation. These sample statistics should become closer and closer to the "true" parent distribution parameters as n increases.
```r
#initialize variables
n <- seq(1000,100000,1000)
sample_mean <- c()
sample_sd <- c()
#run simulation
for(i in 1:length(n)){
sim_sample <- rnorm(n[i], mean = 100, sd = 50)
sample_mean[i] <- mean(sim_sample)
sample_sd[i] <- sd(sim_sample)
}
# organize results in dataframe
sim_data <- data.frame(n,
sample_mean,
sample_sd)
# graph results
ggplot(sim_data,aes(x=n,y=sample_mean))+
geom_point()+
geom_line()+
geom_hline(yintercept=100, color="red")
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-20-1.png" width="672" />
```r
ggplot(sim_data,aes(x=n,y=sample_sd))+
geom_point()+
geom_line()+
geom_hline(yintercept=50, color="red")
```
<img src="Lab3_Distributions_I_files/figure-html/unnamed-chunk-20-2.png" width="672" />
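A Python version of the same demonstration (again an illustrative translation, printing the estimates rather than plotting them):

```python
import random
import statistics

random.seed(2)  # fixed seed so the run is reproducible

population_mean, population_sd = 100, 50

# as n grows, the sample mean and sd should approach the population parameters
for n in (100, 1_000, 10_000, 100_000):
    sim_sample = [random.gauss(population_mean, population_sd) for _ in range(n)]
    sample_mean = statistics.mean(sim_sample)
    sample_sd = statistics.stdev(sim_sample)
    print(f"n={n:>6}  mean={sample_mean:7.2f}  sd={sample_sd:6.2f}")
```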
## Lab 3 Generalization Assignment
### Instructions
In general, labs will present a discussion of problems and issues with example code like above, and then students will be tasked with completing generalization assignments, showing that they can work with the concepts and tools independently.
Your assignment instructions are the following:
1. Work inside the R project "StatsLab1" you have been using
2. Create a new R Markdown document called "Lab3.Rmd"
3. Use Lab3.Rmd to show your work attempting to solve the following generalization problems. Commit your work regularly so that it appears on your Github repository.
4. **For each problem, make a note about how much of the problem you believe you can solve independently without help**. For example, if you needed to watch the help video and are unable to solve the problem on your own without copying the answers, then your note would be 0. If you are confident you can complete the problem from scratch completely on your own, your note would be 100. It is OK to have all 0s, all 100s, or anything in between.
5. Submit your github repository link for Lab 3 on blackboard.
6. There are four problems to solve
<iframe width="560" height="315" src="https://www.youtube.com/embed/kvPVWdEWm6k" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Problems
1. Create five samples of 25 observations from a normal distribution with mean 200, and standard deviation 100. Compute the mean of each sample, and plot the means in a graph using ggplot2. (1 point)
2. Additionally calculate the standard deviation of each sample from above. Use the standard deviations for error bars, and produce another graph with the means along with error bars using ggplot2. (1 point)
---
The last two problems concern the concept of using a sample to estimate a property of the population or distribution the sample came from. For example, if we know the mean of a sample, can we be confident that the population has the same mean? If we were trying to guess at the population mean, what statistics from the sample should we use?
Some sample statistics are "biased", and may systematically under or overestimate a population parameter. Others are "unbiased", in this case the sample statistic tends to correctly estimate the population parameter over the long run.
3. Demonstrate that the sample mean, across a range of n, is an unbiased estimator of the population mean using a Monte Carlo simulation. (2 points)
- The population is a normal distribution with mean = 10, standard deviation = 5.
- Test a variety of n (sample size), including n = 2, 5, 10, 50, and 100
- For each sample size n, your task is to draw 10,000 samples of that size, then for each sample, calculate the sample mean. If the mean is unbiased, then we expect that "on average" the sample means will be the same as the population mean. To determine if this is true, compute the mean of the sample means that you produce to see if it is close to the population mean.
- Show the mean of the sample means for each sample size.
4. Use a Monte Carlo simulation to compare the standard deviation formulas (divide by N vs. N-1), and show that the N-1 formula is a less biased estimate of the population standard deviation, especially for small n. (2 points)
- Use the same normal distribution and samples sizes from above
- Rather than computing the mean for each sample, compute both forms of the standard deviation formula, including the sample standard deviation that divides by N-1, and the regular standard deviation that divides by N
- You should have 10,000 samples for each sample size, and 10,000 standard deviations for each of the two formulas. Your task is to find the average of each, for each sample size.
- Which of the standard deviations is more systematically biased? That is, which one is systematically worse at estimating the population standard deviation?
| 38.530917 | 588 | 0.740191 | eng_Latn | 0.99804 |
9e1ee7bdb11e8f8f24d80af079c938d0e6178231 | 475 | md | Markdown | README.md | EveryMundo/global-root-dir | 9a54abd8afe6ba6efe47023531847e531c10cc65 | [
"MIT"
] | null | null | null | README.md | EveryMundo/global-root-dir | 9a54abd8afe6ba6efe47023531847e531c10cc65 | [
"MIT"
] | 1 | 2021-05-06T22:40:36.000Z | 2021-05-06T22:40:36.000Z | README.md | EveryMundo/global-root-dir | 9a54abd8afe6ba6efe47023531847e531c10cc65 | [
"MIT"
] | null | null | null | # @everymundo/global-root-dir
Sets a global variable `__rootdir`
## Install
```sh
npm install @everymundo/global-root-dir
```
## Usage
```js
// this will use process.cwd() as value
require('@everymundo/global-root-dir').setGlobalRootDir()
console.log(global.__rootdir);
console.log({__rootdir});
// if you prefer you can pass a directory
require('@everymundo/global-root-dir').setGlobalRootDir(__dirname)
console.log(global.__rootdir);
console.log({__rootdir});
``` | 23.75 | 66 | 0.749474 | eng_Latn | 0.257645 |
9e1f200647fac089d10414c5752677059cd8653a | 8,681 | md | Markdown | README.md | CristianGuemes/pyOCD | 1307acd7aa7f4dd4472af4d11eadea90f6c4fe5c | [
"Apache-2.0"
] | null | null | null | README.md | CristianGuemes/pyOCD | 1307acd7aa7f4dd4472af4d11eadea90f6c4fe5c | [
"Apache-2.0"
] | null | null | null | README.md | CristianGuemes/pyOCD | 1307acd7aa7f4dd4472af4d11eadea90f6c4fe5c | [
"Apache-2.0"
] | 1 | 2019-01-21T03:01:53.000Z | 2019-01-21T03:01:53.000Z | pyOCD
=====
pyOCD is an open source Python package for programming and debugging Arm Cortex-M microcontrollers
using multiple supported types of USB debug probes. It is fully cross-platform, with support for
Linux, macOS, and Windows.
A command line tool is provided that covers most use cases, or you can make use of the Python
API to enable low-level target control. A common use for the Python API is to run and control CI
tests.
Upwards of 70 popular MCUs are supported built-in. In addition, through the use of CMSIS-Packs,
nearly every Cortex-M device on the market is supported.
The `pyocd` command line tool gives you total control over your device with these subcommands:
- `gdbserver`: GDB remote server allows you to debug using gdb via either
[GNU MCU Eclipse plug-in](https://gnu-mcu-eclipse.github.io/) or the console.
- `flash`: Program files of various formats into flash memory.
- `erase`: Erase part or all of an MCU's flash memory.
- `pack`: Manage [CMSIS Device Family Packs](http://arm-software.github.io/CMSIS_5/Pack/html/index.html)
that provide additional target device support.
- `commander`: Interactive REPL control and inspection of the MCU.
- `server`: Share a debug probe with a TCP/IP server.
- `list`: Show connected devices.
The API and tools provide these features:
- halt, step, resume control
- read/write memory
- read/write core registers
- set/remove hardware and software breakpoints
- set/remove watchpoints
- write to flash memory
- load binary, hex, or ELF files into flash
- reset control
- access CoreSight DP and APs
- SWO and SWV
- and more!
Configuration and customization are supported through [config files](docs/configuration.md) and
[user scripts](docs/user_scripts.md).
Requirements
------------
_**Important note**: Python 2 support is deprecated and is planned to be dropped from an upcoming release.
Existing releases of pyOCD will, of course, continue to work with Python 2. If this is a major issue for you
moving forward, please create a [new issue](https://github.com/mbedmicro/pyOCD/issues/new/choose) describing
your concerns._
- Python 3.6.0 or later (preferred), or Python 2.7.9 or later
- macOS, Linux, or Windows 7 or newer
- Microcontroller with an Arm Cortex-M CPU
- Supported debug probe
- [CMSIS-DAP](http://www.keil.com/pack/doc/CMSIS/DAP/html/index.html) v1 (HID),
such as:
- An on-board debug probe using [DAPLink](https://os.mbed.com/handbook/DAPLink) firmware.
- NXP LPC-LinkII
- [CMSIS-DAP](http://www.keil.com/pack/doc/CMSIS/DAP/html/index.html) v2 (WinUSB),
such as:
- [DAPLink](https://os.mbed.com/handbook/DAPLink) firmware version 0254 or newer.
- Cypress KitProg3
- Keil ULINKplus
- SEGGER J-Link (experimental)
- STLinkV2 or STLinkV3, either on-board or the standalone versions.
Status
------
PyOCD is functionally reliable and fully usable.
The Python API is considered partially unstable as we are restructuring and cleaning it up prior to
releasing version 1.0.
Documentation
-------------
The pyOCD documentation is located in [the docs directory](docs/).
In addition to user guides, you can generate reference documentation using Doxygen with the
supplied [config file](docs/Doxyfile).
Installing
----------
The latest stable version of pyOCD may be installed via [pip](https://pip.pypa.io/en/stable/index.html)
as follows:
```
$ pip install -U pyocd
```
The latest pyOCD package is available [on PyPI](https://pypi.python.org/pypi/pyOCD/) as well as
[on GitHub](https://github.com/mbedmicro/pyOCD/releases).
To install the latest prerelease version from the HEAD of the master branch, you can do
the following:
```
$ pip install --pre -U git+https://github.com/mbedmicro/pyOCD.git
```
You can also install directly from the source by cloning the git repository and running:
```
$ python setup.py install
```
Note that, depending on your operating system, you may run into permissions issues running these commands.
You have a few options here:
1. Under Linux, run with `sudo -H` to install pyOCD and dependencies globally. (Installing with sudo
should never be required for macOS.)
2. Specify the `--user` option to install local to your user.
3. Run the command in a [virtualenv](https://virtualenv.pypa.io/en/latest/)
local to a specific project working set.
For notes about installing and using on non-x86 systems such as Raspberry Pi, see the
[relevant documentation](docs/installing_on_non_x86.md).
### libusb installation
[pyusb](https://github.com/pyusb/pyusb) and its backend library [libusb](https://libusb.info/) are
dependencies on all supported operating systems. pyusb is a regular Python package and will be
installed along with pyOCD. However, libusb is a binary shared library that does not get installed
automatically via pip dependency management.
How to install libusb depends on your OS:
- macOS: use Homebrew: `brew install libusb`
- Linux: should already be installed.
- Windows: download libusb from [libusb.info](https://libusb.info/) and place the DLL in your Python
installation folder next to python.exe. Make sure to use the same 32- or 64-bit architecture as
your Python installation. *Note: due to a
[known issue](https://github.com/mbedmicro/pyOCD/issues/684), the current recommendation is to use
[libusb version 1.0.21](https://github.com/libusb/libusb/releases/tag/v1.0.21) on Windows instead
of the most recent version.*
### udev rules on Linux
On Linux, particularly Ubuntu 16.04+, you must configure udev rules to allow pyOCD to access debug
probes from user space. Otherwise you will need to run pyOCD as root, using sudo, which is
strongly discouraged. (You should _never_ run pyOCD as root on any OS.)
To help with this, example udev rules files are included with pyOCD in the
[udev](https://github.com/mbedmicro/pyOCD/tree/master/udev) folder. The
[readme](https://github.com/mbedmicro/pyOCD/tree/master/udev/README.md) in this folder has detailed
instructions.
### Target support
See the [target support documentation](docs/target_support.md) for information on how to check if
the MCU(s) you are using have built-in support, and how to install support for additional MCUs via
CMSIS-Packs.
Standalone GDB server
---------------------
After you install pyOCD via pip or setup.py, you will be able to execute the following in order to
start a GDB server powered by pyOCD:
```
$ pyocd gdbserver
```
You can get additional help by running ``pyocd gdbserver --help``.
Example command line GDB session showing how to connect to a running `pyocd gdbserver` and load
firmware:
```
$ arm-none-eabi-gdb application.elf
<gdb> target remote localhost:3333
<gdb> load
<gdb> monitor reset
```
The `pyocd gdbserver` subcommand is also usable as a drop-in replacement for OpenOCD in
existing setups. The primary difference is the set of gdb monitor commands.
Recommended GDB and IDE setup
-----------------------------
The recommended toolchain for embedded Arm Cortex-M development is [GNU Arm
Embedded](https://developer.arm.com/tools-and-software/open-source-software/gnu-toolchain/gnu-rm),
provided by Arm. GDB is included with this toolchain.
For [Visual Studio Code](https://code.visualstudio.com), the
[cortex-debug](https://marketplace.visualstudio.com/items?itemName=marus25.cortex-debug) plugin is available
that supports pyOCD.
The GDB server also works well with [Eclipse Embedded CDT](https://projects.eclipse.org/projects/iot.embed-cdt),
previously known as [GNU MCU/ARM Eclipse](https://gnu-mcu-eclipse.github.io/). It fully supports pyOCD with
an included pyOCD debugging plugin.
To view peripheral register values, either the built-in Eclipse Embedded CDT register view can be used, or
the Embedded System Register Viewer plugin can be installed. The latter can be installed from inside
Eclipse by adding `http://embsysregview.sourceforge.net/update` as a software update server URL
under the "Help -> Install New Software..." menu item.
Development setup
-----------------
Please see the [Developers' Guide](docs/developers_guide.md) for instructions on how to set up a
development environment for pyOCD.
Contributions
-------------
We welcome contributions to pyOCD in any area. Please see the [contribution
guidelines](CONTRIBUTING.md) for detailed requirements for contributions.
To report bugs, please [create an issue](https://github.com/mbedmicro/pyOCD/issues/new) in the
GitHub project.
License
-------
PyOCD is licensed with the permissive Apache 2.0 license. See the [LICENSE](LICENSE) file for the
full text of the license.
Copyright © 2006-2020 Arm Ltd and others (see individual source files)
---
title: "SafeIntException Class"
ms.date: "10/22/2018"
ms.topic: "reference"
f1_keywords: ["SafeIntException Class", "SafeIntException", "SafeIntException.SafeIntException", "SafeIntException::SafeIntException"]
helpviewer_keywords: ["SafeIntException class", "SafeIntException, constructor"]
ms.assetid: 88bef958-1f48-4d55-ad4f-d1f9581a293a
---
# SafeIntException Class
The `SafeInt` class uses `SafeIntException` to identify why a mathematical operation cannot be completed.
> [!NOTE]
> The latest version of this library is located at [https://github.com/dcleblanc/SafeInt](https://github.com/dcleblanc/SafeInt).
## Syntax
```cpp
class SafeIntException;
```
## Members
### Public Constructors
Name | Description
------------------------------------------------------- | ------------------------------------
[SafeIntException::SafeIntException](#safeintexception) | Creates a `SafeIntException` object.
## Remarks
The [SafeInt class](../safeint/safeint-class.md) is the only class that uses the `SafeIntException` class.
## Inheritance Hierarchy
`SafeIntException`
## Requirements
**Header:** safeint.h
**Namespace:** msl::utilities
## <a name="safeintexception"></a>SafeIntException::SafeIntException
Creates a `SafeIntException` object.
```cpp
SafeIntException();
SafeIntException(
   SafeIntError code
);
```
### Parameters
*code*<br/>
[in] An enumerated data value that describes the error that occurred.
### Remarks
The possible values for *code* are defined in the file Safeint.h. For convenience, the possible values are also listed here.
- `SafeIntNoError`
- `SafeIntArithmeticOverflow`
- `SafeIntDivideByZero`
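As a usage sketch (assuming the library's safeint.h is available on the include path; `m_code` is the exception's public error-code member in the library's header, but verify it against your version):

```cpp
#include <iostream>
#include <limits>
#include "safeint.h"   // from the SafeInt library

using namespace msl::utilities;

int main()
{
    SafeInt<int> n(std::numeric_limits<int>::max());
    try
    {
        n += 1;  // would overflow a plain int, so SafeInt throws
    }
    catch (const SafeIntException& e)
    {
        if (e.m_code == SafeIntArithmeticOverflow)
            std::cout << "arithmetic overflow caught" << std::endl;
    }
    return 0;
}
```

This snippet is illustrative only; it requires the SafeInt library and is not part of the original reference page.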
---
title: "Caching <caching> | Microsoft Docs"
author: rick-anderson
description: "Overview The <caching> element allows you to enable or disable page output caching for an Internet Information Services (IIS) 7 application. This eleme..."
ms.author: iiscontent
manager: soshir
ms.date: 09/26/2016
ms.topic: article
ms.assetid: 0ed34615-896b-477e-8d10-2c2e09f464db
ms.technology: iis-config
ms.prod: iis
msc.legacyurl: /configreference/system.webserver/caching
msc.type: config
---
Caching <caching>
====================
<a id="001"></a>
## Overview
The `<caching>` element allows you to enable or disable page output caching for an Internet Information Services (IIS) 7 application. This element also allows you to configure whether IIS caches page output in user mode, kernel mode, or both and what, if any, output caching limits you want to impose.
The `<caching>` element also contains a `<profiles>` element that contains a collection of output cache settings that you can apply to ASP.NET pages.
Page output caching stores a response of a dynamic page, such as an ASP page or an ASP.NET page, in memory after a browser requests it. When subsequent requests arrive for the page, the server sends the cached response instead of re-processing the page. The ASP.NET page output cache is separate from the IIS 7 output cache. In applications that use the Integrated ASP.NET mode, the ASP.NET page output cache can be used programmatically for any content-type, much like the IIS 7 output cache.
Page output caching reduces server load and response time. Output caching works best with pages that are semi-dynamic, such as an ASP.NET page that is dependent on a database table that does not change often.
Output caching is unnecessary for static files, such as HTML, JPG, or GIF files, and can cause more memory overhead for dynamic ASP.NET or PHP pages that read from a database that changes frequently.
<a id="002"></a>
## Compatibility
| Version | Notes |
| --- | --- |
| IIS 10.0 | The `<caching>` element was not modified in IIS 10.0. |
| IIS 8.5 | The `<caching>` element was not modified in IIS 8.5. |
| IIS 8.0 | The `<caching>` element was not modified in IIS 8.0. |
| IIS 7.5 | The `<caching>` element was not modified in IIS 7.5. |
| IIS 7.0 | The `<caching>` element was introduced in IIS 7.0. |
| IIS 6.0 | N/A |
<a id="003"></a>
## Setup
The `<caching>` element is included in the default installation of IIS 7.
<a id="004"></a>
## How To
### How to configure page output caching
1. Open **Internet Information Services (IIS) Manager**:
- If you are using Windows Server 2012 or Windows Server 2012 R2:
- On the taskbar, click **Server Manager**, click **Tools**, and then click **Internet Information Services (IIS) Manager**.
- If you are using Windows 8 or Windows 8.1:
- Hold down the **Windows** key, press the letter **X**, and then click **Control Panel**.
- Click **Administrative Tools**, and then double-click **Internet Information Services (IIS) Manager**.
- If you are using Windows Server 2008 or Windows Server 2008 R2:
- On the taskbar, click **Start**, point to **Administrative Tools**, and then click **Internet Information Services (IIS) Manager**.
- If you are using Windows Vista or Windows 7:
- On the taskbar, click **Start**, and then click **Control Panel**.
- Double-click **Administrative Tools**, and then double-click **Internet Information Services (IIS) Manager**.
2. In the **Connections** pane, go to the connection, site, application, or directory for which you want to configure page output caching.
3. In the **Home** pane, scroll to **Output Caching**, and then double-click **Output Caching**.
[](index/_static/image1.png)
4. In the **Actions** pane, click **Add...**
5. In the **Add Cache Rule** dialog box, type the file name extension you want to cache in the **File name extension** box, and then select the **User-mode caching** option, the **Kernel-mode caching** option, or both.
6. Select the options that you want to use for caching, and then click **OK**.
[](index/_static/image3.png)
<a id="005"></a>
## Configuration
You can configure the `<caching>` element at the server level in the ApplicationHost.config file, or at the site, application, or directory level in a Web.config file.
### Attributes
| Attribute | Description |
| --- | --- |
| `enabled` | Optional Boolean attribute.<br><br>Specifies whether page output caching is enabled.<br><br>The default value is `true`. |
| `enableKernelCache` | Optional Boolean attribute.<br><br>Specifies whether kernel caching is enabled.<br><br>The default value is `true`. |
| `maxCacheSize` | Optional uint attribute.<br><br>Specifies the maximum size of the output cache.<br><br>**Note:** This setting is effective only at the level of the ApplicationHost.config file. If you set this property at a lower level, it will have no effect.<br><br>The default value is `0`. |
| `maxResponseSize` | Optional uint attribute.<br><br>Specifies the maximum response size that can be cached.<br><br>**Note:** This setting is effective only at the level of the ApplicationHost.config file. If you set this property at a lower level, it will have no effect.<br><br>The default value is `262144`. |
### Child Elements
| Element | Description |
| --- | --- |
| [`profiles`](profiles/index.md) | Optional element.<br><br>Contains a group of output cache settings that can be applied to ASP.NET pages. |
### Configuration Sample
The following configuration example enables user-mode caching and kernel-mode caching, both of which are enabled by default in IIS 7.0. It also uses the `<add>` element contained by the `<profiles>` element to enable output caching for files with the .asp file name extension. It also uses the **policy** attribute to output cache the page until it changes; it does the same for kernel caching using the **kernelCachePolicy** attribute.
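The sample file itself is not included inline on this page; as a reconstruction sketch of what such a configuration looks like (element and attribute names are taken from this page, while the `CacheUntilChange` policy value is an assumption used to illustrate "cache until it changes"):

```xml
<configuration>
  <system.webServer>
    <caching enabled="true" enableKernelCache="true">
      <profiles>
        <add extension=".asp"
             policy="CacheUntilChange"
             kernelCachePolicy="CacheUntilChange" />
      </profiles>
    </caching>
  </system.webServer>
</configuration>
```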
[!code-xml[Main](index/samples/sample1.xml)]
The following code example sets the maximum output cache size to 1 gigabyte and sets the maximum size of a response that can be stored in the output cache to 512 kilobytes.
[!code-xml[Main](index/samples/sample2.xml)]
<a id="006"></a>
## Sample Code
The following examples configure page output caching for files with the .asp file name extension, and configure IIS to cache in user mode and kernel mode until ASP files change.
### AppCmd.exe
[!code-console[Main](index/samples/sample3.cmd)]
> [!NOTE]
> You must be sure to set the **commit** parameter to `apphost` when you use AppCmd.exe to configure these settings. This commits the configuration settings to the appropriate location section in the ApplicationHost.config file.
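For illustration, an AppCmd.exe invocation matching the description above might look like the following sketch (this command line is an assumption, not quoted from the sample files; verify the exact collection syntax against your IIS version):

```console
appcmd.exe set config -section:system.webServer/caching /+"profiles.[extension='.asp',policy='CacheUntilChange',kernelCachePolicy='CacheUntilChange']" /commit:apphost
```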
### C#
[!code-csharp[Main](index/samples/sample4.cs)]
### VB.NET
[!code-vb[Main](index/samples/sample5.vb)]
### JavaScript
[!code-javascript[Main](index/samples/sample6.js)]
### VBScript
[!code-vb[Main](index/samples/sample7.vb)]
+++
title = "Modeling warmth and competence in virtual characters"
date = 2015-09-24T16:04:30-04:00
draft = false
# Authors. Comma separated list, e.g. `["Bob Smith", "David Jones"]`.
authors = ["Truong-Huy D Nguyen", "Elin Carstensdottir", "Nhi Ngo", "Magy Seif El-Nasr", "Matt Gray", "Derek Isaacowitz", "David DeSteno"]
# Publication type.
# Legend:
# 0 = Uncategorized
# 1 = Conference paper
# 2 = Journal article
# 3 = Manuscript
# 4 = Report
# 5 = Book
# 6 = Book section
publication_types = ["1"]
# Publication name and optional abbreviated version.
publication = "International Conference on Intelligent Virtual Agents"
publication_short = "IVA"
# Abstract and optional shortened version.
abstract = "Developing believable virtual characters has been a subject of research in many fields including graphics, animations, artificial intelligence, and human-computer interaction. One challenge towards commoditizing the use of virtual humans is the ability to algorithmically construct characters of different stereotypes. In this paper, we present our efforts in designing virtual characters that can exhibit non-verbal behaviors to reflect varying degrees of warmth and competence, two personality traits shown to underlie social judgments and form stereotypical perception. To embark on developing a computational behavior model that portrays these traits, we adopt an iterative design methodology tuning the design using theory from theatre, animation and psychology, expert reviews, user testing and feedback. Using this process we were able to construct a set of virtual characters that portray variations of warmth and competence through combination of gestures, use of space, and gaze behaviors. In this paper we discuss the design methodology, the resultant system, and initial experiment results showing the promise of the model."
abstract_short = "We present our efforts in designing virtual characters that can exhibit non-verbal behaviors to reflect varying degrees of warmth and competence, two personality traits shown to underlie social judgments and form stereotypical perception. Our experiments show that these dimensions of personality are perceived faithfully by observers."
# Featured image thumbnail (optional)
image_preview = ""
# Is this a selected publication? (true/false)
selected = true
# Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's filename without extension.
# E.g. `projects = ["deep-learning"]` references `content/project/deep-learning.md`.
# Otherwise, set `projects = []`.
projects = []
# Tags (optional).
# Set `tags = []` for no tags, or use the form `tags = ["A Tag", "Another Tag"]` for one or more tags.
tags = ["believable virtual characters", "non-verbal behavior", "personality traits"]
# Links (optional).
url_pdf = "https://www.researchgate.net/profile/Truong_Huy_Nguyen/publication/279515559_Modeling_Warmth_and_Competence_in_Virtual_Characters/links/55943a6008ae5d8f392f5fe6/Modeling-Warmth-and-Competence-in-Virtual-Characters.pdf"
url_preprint = ""
url_code = ""
url_dataset = ""
url_project = ""
url_slides = ""
url_video = ""
url_poster = ""
url_source = ""
# Custom links (optional).
# Uncomment line below to enable. For multiple links, use the form `[{...}, {...}, {...}]`.
# url_custom = [{name = "Custom Link", url = "http://example.org"}]
# Does this page contain LaTeX math? (true/false)
math = false
# Does this page require source code highlighting? (true/false)
highlight = true
# Featured image
# Place your image in the `static/img/` folder and reference its filename below, e.g. `image = "example.jpg"`.
[header]
image = ""
caption = ""
+++
# media
GCR media package
# egg-ts-helper
[![NPM version][npm-image]][npm-url]
[![Build Status][travis-image]][travis-url]
[![Appveyor status][appveyor-image]][appveyor-url]
[![Coverage Status][coveralls-image]][coveralls-url]
[npm-image]: https://img.shields.io/npm/v/egg-ts-helper.svg?style=flat-square
[npm-url]: https://npmjs.org/package/egg-ts-helper
[travis-url]: https://travis-ci.org/whxaxes/egg-ts-helper
[travis-image]: http://img.shields.io/travis/whxaxes/egg-ts-helper.svg
[appveyor-url]: https://ci.appveyor.com/project/whxaxes/egg-ts-helper/branch/master
[appveyor-image]: https://ci.appveyor.com/api/projects/status/github/whxaxes/egg-ts-helper?branch=master&svg=true
[coveralls-url]: https://coveralls.io/r/whxaxes/egg-ts-helper
[coveralls-image]: https://img.shields.io/coveralls/whxaxes/egg-ts-helper.svg
A simple tool that generates TypeScript definition files (d.ts) for [egg](https://eggjs.org) applications. It injects `controller`, `proxy`, `service` and `extend` into egg by [Declaration Merging](https://www.typescriptlang.org/docs/handbook/declaration-merging.html).
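As a minimal, framework-free sketch of the Declaration Merging technique this tool relies on (the interface name `IController` mirrors egg's, but nothing here depends on egg itself):

```typescript
// The framework ships an (initially empty) interface:
interface IController {}

// A generated d.ts re-declares the same interface to add members;
// TypeScript merges both declarations into one type.
interface IController {
  home: { index(): string };
}

// The merged interface now type-checks the injected `home` member.
const controller: IController = {
  home: { index: () => 'ok' },
};

console.log(controller.home.index()); // → ok
```

This is how the generated `typings` files shown below extend egg's interfaces without modifying the framework's source.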
## Install
```
npm i egg-ts-helper -g
```
or
```
yarn global add egg-ts-helper
```
## QuickStart
Open your egg application directory and execute the command:
```
$ ets
```
The `-w` flag watches your files and automatically recreates the d.ts whenever they change:
```
$ ets -w
```
## Usage
```
$ ets -h
Usage: ets [commands] [options]
Options:
-v, --version output the version number
-w, --watch Watching files, d.ts would recreated while file changed
-c, --cwd [path] Egg application base dir (default: process.cwd)
-C, --config [path] Configuration file, The argument can be a file path to a valid JSON/JS configuration file.(default: {cwd}/tshelper.js
-f, --framework [name] Egg framework(default: egg)
-s, --silent Running without output
-i, --ignore [dirs] Ignore watchDirs, your can ignore multiple dirs with comma like: -i controller,service
-e, --enabled [dirs] Enable watchDirs, your can enable multiple dirs with comma like: -e proxy,other
-E, --extra [json] Extra config, the value should be json string
-h, --help output usage information
Commands:
clean Clean js file while it has the same name ts file
```
## Options
| name | type | default | description |
| --- | --- | --- | --- |
| cwd | string | process.cwd | egg application base dir |
| framework | string | egg | egg framework |
| typings | string | {cwd}/typings | typings dir |
| caseStyle | string | lower | egg case style(lower,upper,camel) |
| watch | boolean | false | watch file change or not |
| execAtInit | boolean | false | execute d.ts generation while instance was created |
| configFile | string | {cwd}/tshelper.js | configure file path |
| watchDirs | object | | generator configuration |
egg-ts-helper watches `app/extend`, `app/controller`, `app/service`, and `app/config` by default. The d.ts files are recreated whenever the files under these folders change.
You can disable some folders with the `-i` flag:
```
$ ets -i extend,controller
```
or configure in the config file
```
// {cwd}/tshelper.js
module.exports = {
watchDirs: {
extend: false,
controller: false,
}
}
```
or configure in package.json
```
// {cwd}/package.json
{
"egg": {
"framework": "egg",
"tsHelper": {
"watchDirs": {
"extend": false
}
}
}
}
```
## Register
You can require the register module to start egg-ts-helper before your egg application starts with [egg-bin](https://github.com/eggjs/egg-bin):
```
$ egg-bin dev -r egg-ts-helper/register
```
For debugging:
```
$ egg-bin debug -r egg-ts-helper/register
```
## Demo
see https://github.com/whxaxes/egg-boilerplate-d-ts
It works in these directories: `app/controller`, `app/service`, `app/proxy`, `app/extend`, `app/config`.
#### Controller
(service, proxy are the same)
ts
```typescript
// app/controller/home.ts
import { Controller } from 'egg';
export default class HomeController extends Controller {
public async index() {
this.ctx.body = 'ok';
}
}
```
typings
```typescript
// app/typings/app/controller/index.d.ts
import Home from '../../../app/controller/home';
declare module 'egg' {
interface IController {
home: Home;
}
}
```
#### Extend
ts
```typescript
// app/extend/context.ts
export default {
doSomething() {
console.info('do something');
}
};
```
typings
```typescript
// app/typings/app/controller/index.d.ts
import ExtendObject from '../../../app/extend/context';
declare module 'egg' {
interface Context {
doSomething: typeof ExtendObject.doSomething;
}
}
```
#### Config
ts
```typescript
// config/config.default.ts
export default function() {
return {
keys: '123456'
}
}
```
typings
```typescript
// app/typings/config/index.d.ts
import { EggAppConfig } from 'egg';
import ExportConfigDefault from '../../config/config.default';
type ConfigDefault = ReturnType<typeof ExportConfigDefault>;
type NewEggAppConfig = EggAppConfig & ConfigDefault;
declare module 'egg' {
interface Application {
config: NewEggAppConfig;
}
interface Controller {
config: NewEggAppConfig;
}
interface Service {
config: NewEggAppConfig;
}
}
```
#### Plugin
ts
```typescript
// config/plugin.ts
export default {
cors: {
enable: true,
package: 'egg-cors',
},
static: {
enable: true,
package: 'egg-static',
}
}
```
typings
```typescript
// app/typings/config/plugin.d.ts
// only imports the packages that exist under node_modules
import 'egg-cors';
import 'egg-static';
```
---
layout: post
title: What is a skill?
---
A thought occurred to me: the _ability_ to do something consists of two parts, knowledge and skills. Knowledge is more or less clear, but what exactly skills are is hard to formalize.
I am used to the following definition: skills are, to a large extent, a kind of matrix of importance coefficients. Roughly speaking, we have a list of actions for achieving a result. Any result, it does not matter much which one. For example, baking a pie. Among other items, the list has the steps "mix the dough" and "put it in the oven".
The first of these matters more, because poorly mixed dough will have lumps, while it is harder to put the pan into the oven in some wrong way. So the ability to bake a pie is something like "mix the dough (x10), put it in the oven (x2)". It would seem that, phrased this way, the coefficients are no different from knowledge.
However, there are three factors that still prevent abilities from being transferred directly.
The first is that the human brain is an obstinate and buggy thing, and it always places its own direct experience (or fantasies about it, yes) above "pure" knowledge.
All information from the outside world carries roughly the same level of significance, which by and large depends on the source rather than on any "markup" of the information itself. We trust a senior colleague who says that a particular subtlety is important incomparably more than a note in the documentation saying "important!", but both give way to our own experience, which we gain by getting burned on that subtlety and realizing that it really is important.
The second is that even the part of experience that is "knowledge" contains a great deal of information, and significance coefficients are needed, ideally, for every little item on that list. Imagine a documentation book in which every word is printed in a different color depending on its importance. In theory it can be done. In practice, hardly anyone is meticulous enough to mark it all up, and about as few readers would take it into account. But even if they did, see the point above: even taking it into account, the brain still trusts its own experience more, and if ignoring some important detail has not hit it over the head a couple of times, it will happily file that detail under "unimportant". Well, why not, everything was fine last time.
And the third factor is that human language is poorly suited to conveying sensations, and the brain is poorly suited to formalizing and verbalizing them. So poorly that it is considered completely normal to throw a person into the water so that they learn to swim, watching that they do not drown, rather than to explain to them how to swim in a way they would understand.
To all of us this seems like a normal situation, but to a hypothetical intelligent robot it would look absurd: why spend resources on re-deriving data that someone else already has ready-made? Just take it and adapt it to your own conditions.
But humans are so limited in communication that language can convey only a small share of the information, to which another share is added in the form of pictures, videos and "watch how I do it", and yet another share through producing individual training information based on noticed mistakes. The rest is qualia: that which cannot be transferred, only felt on your own skin.
For example, while learning to swim you receive an enormous amount of information from the receptors in your skin, and the brain really can use it to swim efficiently. However, all of this information is encoded not at the level of consciousness but at the level of reflexes. You are not even aware of these reflexes, and you certainly cannot describe them in the form "if the receptors at such-and-such a spot on the skin report pressure, the arm should move a little differently".
e592edb1b1cdfe9aa6d102cbcbe1bcf02f5c41a0 | 3,374 | md | Markdown | articles/desktop-flows/how-to/java.md | modery/power-automate-docs | 3ae7440345ffb33536f43157cf0ff8076ba47f54 | [
"CC-BY-4.0",
"MIT"
] | 98 | 2019-11-12T21:50:27.000Z | 2022-03-24T10:47:49.000Z | articles/desktop-flows/how-to/java.md | modery/power-automate-docs | 3ae7440345ffb33536f43157cf0ff8076ba47f54 | [
"CC-BY-4.0",
"MIT"
] | 566 | 2019-11-13T01:05:02.000Z | 2022-03-31T21:56:41.000Z | articles/desktop-flows/how-to/java.md | modery/power-automate-docs | 3ae7440345ffb33536f43157cf0ff8076ba47f54 | [
"CC-BY-4.0",
"MIT"
] | 126 | 2019-11-13T04:41:24.000Z | 2022-03-20T12:22:53.000Z | ---
title: Automate Java applications | Microsoft Docs
description: Automate Java applications
author: mariosleon
ms.service: power-automate
ms.subservice: desktop-flow
ms.topic: article
ms.date: 10/08/2021
ms.author: marleon
ms.reviewer:
search.app:
- Flow
search.audienceType:
- flowmaker
- enduser
---
# Automate Java applications
In order to automate Java applications, particular settings must be in place.
To install the Java configuration manually after Power Automate is installed, navigate to the installation folder (e.g. 'C:\Program Files (x86)\Power Automate') and run 'PAD.Java.Installer.exe' as an administrator.
To uninstall the Java configuration (that is, revert all changes applied to the machine by the Java installer):
1. Open the Command Line tool (cmd).
2. Execute the following command: `PAD.Java.Installer.exe -u`
Logs for Java automation with Power Automate can be found in the '%temp%/java_automation_log' folder (e.g. 'C:\Users\username\AppData\Local\Temp\java_automation_log').
## Troubleshooting
- Make sure you have Java installed on your machine. To check, open a command window or terminal and enter `java -version`. If Java is not installed, you will receive an error message such as: `'java' is not recognized as an internal or external command, operable program or batch file.`
- Java Access Bridge should be disabled in Control Panel. Go to 'Control Panel -> Ease of Access -> Optimize visual display -> Java Access Bridge from Oracle, Inc. Providing Assistive Technology access to Java applications' and disable (uncheck) the 'Enable Java Access Bridge' option.
- Specific files have to exist in the Java folder(s) of the machine after the Power Automate installation. You may check the installed Java version and installation path on your machine as follows:
  1. Type 'Configure Java' in the Windows Search bar.
  2. Open the Java control panel.
  3. Go to the 'Java' tab.
  4. Click 'View'.
  5. Check the values in the 'Path' column. The row with 'Architecture' equal to 'x86' refers to the 32-bit Java installation, while the row with the value 'x86x64' refers to the 64-bit Java installation.
  You may check the following files:
  For the 64-bit Java installation:
  - The file 'Microsoft.Flow.RPA.Desktop.UIAutomation.Java.Bridge.Native.dll' should have been replaced in the folder 'C:\Program Files\Java\jre1.8.0_271\bin'. ('jre1.8.0_271' may differ, depending on your machine's Java installation.)
  - The file 'accessibility.properties' should have been replaced in the folder 'C:\Program Files\Java\jre1.8.0_271\lib'. If you open the file with Notepad, it should contain the following value: "assistive_technologies=com.sun.java.accessibility.AccessBridge, microsoft.flows.rpa.desktop.uiautomation.JavaBridge"
  - The file 'PAD.JavaBridge.jar' should have been inserted in the folder 'C:\Program Files\Java\jre1.8.0_271\lib\ext'.
  For the 32-bit Java installation:
  - The same actions apply to the same files as above, but in the folder path 'C:\Program Files (x86)\Java…'
- Make sure there isn't an '.accessibility.properties' file present in your user folder. Check 'C:\Users\user' for a file named '.accessibility.properties'; if one is present, rename it.
- Ensure that 'VC_redist.x64.exe' and/or 'VC_redist.x86.exe' have been executed.
---
title: commit Method (SQLServerConnection) | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: conceptual
apiname:
- SQLServerConnection.commit
apilocation:
- sqljdbc.jar
apitype: Assembly
ms.assetid: c7346165-51bf-4844-b64c-29833c147236
author: David-Engel
ms.author: v-daenge
ms.openlocfilehash: 50afbfa25052e0f602c486d011ce666a599372e0
ms.sourcegitcommit: fe5c45a492e19a320a1a36b037704bf132dffd51
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 04/08/2020
ms.locfileid: "80923587"
---
# <a name="commit-method-sqlserverconnection"></a>commit Method (SQLServerConnection)
[!INCLUDE[Driver_JDBC_Download](../../../includes/driver_jdbc_download.md)]
Makes all changes made since the most recent commit or rollback permanent, and releases any database locks that are currently held by this [SQLServerConnection](../../../connect/jdbc/reference/sqlserverconnection-class.md) object.
## <a name="syntax"></a>Syntax
```
public void commit()
```
## <a name="exceptions"></a>Exceptions
[SQLServerException](../../../connect/jdbc/reference/sqlserverexception-class.md)
## <a name="remarks"></a>Remarks
This commit method is specified by the commit method in the java.sql.Connection interface.
The method should be used only when auto-commit mode is disabled.
This method fails and throws an exception if the client starts a manual transaction and SQL Server rolls that manual transaction back for any reason. For example, an exception is thrown if the client calls a stored procedure that explicitly calls ROLLBACK TRANSACTION and then calls the commit method. Likewise, if SQL Server raises an error with severity 16 or higher to roll back a client-initiated manual transaction, an exception is thrown when the commit method is subsequently called.
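As an illustrative sketch of manual-commit usage (the connection URL and table name are placeholders; this requires a reachable SQL Server instance and the JDBC driver, so it is not runnable as-is):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CommitExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: adjust server, database, and credentials.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=TestDb;integratedSecurity=true";
        try (Connection con = DriverManager.getConnection(url)) {
            con.setAutoCommit(false);  // manual transaction mode
            try (Statement stmt = con.createStatement()) {
                stmt.executeUpdate("INSERT INTO Logs VALUES ('entry')"); // hypothetical table
                con.commit();          // make the changes permanent
            } catch (Exception e) {
                con.rollback();        // undo the transaction on failure
                throw e;
            }
        }
    }
}
```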
## <a name="see-also"></a>See also
[SQLServerConnection members](../../../connect/jdbc/reference/sqlserverconnection-members.md)
[SQLServerConnection class](../../../connect/jdbc/reference/sqlserverconnection-class.md)
---
redirect_url: recipients-in-exchange-online
redirect_document_id: TRUE
---
---
date: "2013-03-02T16:00:00"
title: "Reciprocal cycles II"
description: "Problem 417"
---
<p>A unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given:</p>
<blockquote>
<table><tr><td><sup>1</sup>/<sub>2</sub></td><td>= </td><td>0.5</td>
</tr><tr><td><sup>1</sup>/<sub>3</sub></td><td>= </td><td>0.(3)</td>
</tr><tr><td><sup>1</sup>/<sub>4</sub></td><td>= </td><td>0.25</td>
</tr><tr><td><sup>1</sup>/<sub>5</sub></td><td>= </td><td>0.2</td>
</tr><tr><td><sup>1</sup>/<sub>6</sub></td><td>= </td><td>0.1(6)</td>
</tr><tr><td><sup>1</sup>/<sub>7</sub></td><td>= </td><td>0.(142857)</td>
</tr><tr><td><sup>1</sup>/<sub>8</sub></td><td>= </td><td>0.125</td>
</tr><tr><td><sup>1</sup>/<sub>9</sub></td><td>= </td><td>0.(1)</td>
</tr><tr><td><sup>1</sup>/<sub>10</sub></td><td>= </td><td>0.1</td>
</tr></table></blockquote>
<p>Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be seen that <sup>1</sup>/<sub>7</sub> has a 6-digit recurring cycle.</p>
<p>
Unit fractions whose denominator has no other prime factors than 2 and/or 5 are not considered to have a recurring cycle.
We define the length of the recurring cycle of those unit fractions as 0.
</p>
<p>
Let L(n) denote the length of the recurring cycle of 1/n.
You are given that ∑L(n) for 3 ≤ n ≤ 1 000 000 equals 55535191115.
</p>
<p>
Find ∑L(n) for 3 ≤ n ≤ 100 000 000</p>
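<p>The cycle length L(n) is the multiplicative order of 10 modulo n once all factors of 2 and 5 are stripped from n. A minimal Python sketch of that definition (not part of the original problem, and far too slow for the full limit of 100 000 000 — shown only to make the definition concrete):</p>

```python
def cycle_length(n):
    # Factors of 2 and 5 only shift the decimal point; strip them first.
    for p in (2, 5):
        while n % p == 0:
            n //= p
    if n == 1:
        return 0  # the decimal expansion terminates, e.g. 1/8 = 0.125
    # L(n) is the multiplicative order of 10 modulo the stripped n:
    # the smallest k with 10^k = 1 (mod n).
    k, r = 1, 10 % n
    while r != 1:
        r = (r * 10) % n
        k += 1
    return k

print(cycle_length(7))                             # 6, matching 0.(142857)
print(sum(cycle_length(n) for n in range(3, 11)))  # 9
```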
---
title: Migrate VMware virtual machines to Azure with server-side encryption (SSE) and customer-managed keys (CMK) using Azure Migrate Server Migration
description: Learn how to migrate VMware VMs to Azure with server-side encryption (SSE) and customer-managed keys (CMK) using Azure Migrate Server Migration
author: anvar-ms
ms.author: anvar
ms.manager: bsiva
ms.topic: how-to
ms.date: 03/12/2020
ms.openlocfilehash: 8a174c3b2bfb390eb7d691ae1bdcb0e28dde9032
ms.sourcegitcommit: 867cb1b7a1f3a1f0b427282c648d411d0ca4f81f
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/19/2021
ms.locfileid: "96751080"
---
# <a name="migrate-vmware-vms-to-azure-vms-enabled-with-server-side-encryption-and-customer-managed-keys"></a>Migrate VMware VMs to Azure VMs enabled with server-side encryption and customer-managed keys
This article describes how to migrate VMware VMs to Azure virtual machines with disks encrypted using server-side encryption (SSE) with customer-managed keys (CMK), using Azure Migrate Server Migration (agentless replication).
The Azure Migrate Server Migration portal experience lets you [migrate VMware VMs to Azure with agentless replication.](tutorial-migrate-vmware.md) The portal experience currently doesn't offer the ability to turn on SSE with CMK for your replicated disks in Azure. That capability is currently available only through the REST API. In this article, you'll see how to create and deploy an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to replicate a VMware VM and configure the replicated disks in Azure to use SSE with CMK.
The examples in this article use [Azure PowerShell](/powershell/azure/new-azureps-module-az) to perform the tasks needed to create and deploy the Resource Manager template.
[Learn more](../virtual-machines/disk-encryption.md) about server-side encryption (SSE) with customer-managed keys (CMK) for managed disks.
## <a name="prerequisites"></a>Prerequisites
- [Review the tutorial](tutorial-migrate-vmware.md) on migrating VMware VMs to Azure with agentless replication to understand the tool's requirements.
- [Follow these instructions](./create-manage-projects.md) to create an Azure Migrate project and add the **Azure Migrate: Server Migration** tool to the project.
- [Follow these instructions](how-to-set-up-appliance-vmware.md) to set up the Azure Migrate appliance for VMware in your on-premises environment and complete discovery.
## <a name="prepare-for-replication"></a>Prepare for replication
Once VM discovery is complete, the discovered servers line in the Server Migration tile shows a count of the VMware VMs discovered by the appliance.
Before you can start replicating VMs, the replication infrastructure needs to be prepared.
1. Create a Service Bus instance in the target region. The Service Bus is used by the on-premises Azure Migrate appliance to communicate with the Server Migration service to coordinate replication and migration.
2. Create a storage account for the transfer of replication operation logs.
3. Create a storage account that the Azure Migrate appliance uploads replication data to.
4. Create a key vault, and configure the key vault to manage shared access signature tokens for blob access on the storage accounts created in steps 2 and 3.
5. Generate a shared access signature token for the Service Bus created in step 1, and create a secret for the token in the key vault created in the previous step.
6. Create a key vault access policy to give the on-premises Azure Migrate appliance (using the appliance's AAD app) and the Server Migration service access to the key vault.
7. Create a replication policy, and configure the Server Migration service with details of the replication infrastructure created in the previous steps.
The replication infrastructure should be created in the target Azure region for the migration, and in the target Azure subscription that the VMs are being migrated to.
The Server Migration portal experience simplifies the preparation of the replication infrastructure by doing it automatically when you replicate a VM for the first time in a project. In this article, we'll assume that you've already replicated one or more VMs using the portal experience and that the replication infrastructure is already created. We'll look at how to discover the details of the existing replication infrastructure and how to use those details as inputs to the Resource Manager template that will be used to set up replication with CMK.
### <a name="identifying-replication-infrastructure-components"></a>Identifying replication infrastructure components
1. In the Azure portal, go to the resource groups page and select the resource group in which the Azure Migrate project was created.
2. Select **Deployments** from the menu on the left, and search for a deployment name beginning with the string *"Microsoft.MigrateV2.VMwareV2EnableMigrate"*. You'll see a list of Resource Manager templates created by the portal experience to set up replication for VMs in this project. We'll download one of these templates and use it as the basis for preparing the template for replication with CMK.
3. To download the template, select any deployment matching the string pattern in the previous step > select **Template** from the menu on the left > click **Download** from the top menu. Save the template.json file locally. You'll edit this template file in the last step.
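The same deployments can also be listed, and a template exported, with Azure PowerShell instead of the portal — a hedged sketch using Az.Resources cmdlets (the deployment name shown is a placeholder you'd take from the listed output):

```azurepowershell
# List the EnableMigrate deployments in the project's resource group
Get-AzResourceGroupDeployment -ResourceGroupName $ProjectResourceGroup |
    Where-Object { $_.DeploymentName -like "Microsoft.MigrateV2.VMwareV2EnableMigrate*" } |
    Select-Object DeploymentName, Timestamp

# Export the template of one of those deployments to a local template.json
Save-AzResourceGroupDeploymentTemplate -ResourceGroupName $ProjectResourceGroup `
    -DeploymentName "<deployment-name-from-the-list-above>" -Path ./template.json
```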
## <a name="create-a-disk-encryption-set"></a>Create a disk encryption set
A disk encryption set object maps managed disks to a key vault that contains the CMK to use for SSE. To replicate VMs with CMK, you'll create a disk encryption set and pass it as an input to the replication operation.
Follow the example [here](../virtual-machines/windows/disks-enable-customer-managed-keys-powershell.md) to create a disk encryption set using Azure PowerShell. Make sure that the disk encryption set is created in the target subscription that the VMs are being migrated to, and in the target Azure region for the migration.
The disk encryption set can be configured to encrypt managed disks with a customer-managed key, or for double encryption with a customer-managed key and a platform key. To use the double encryption at rest option, configure the disk encryption set as described [here](../virtual-machines/windows/disks-enable-double-encryption-at-rest-powershell.md).
In the example shown below, the disk encryption set is configured to use a customer-managed key.
```azurepowershell
$Location = "southcentralus" #Target Azure region for migration
$TargetResourceGroupName = "ContosoMigrationTarget"
$KeyVaultName = "ContosoCMKKV"
$KeyName = "ContosoCMKKey"
$KeyDestination = "Software"
$DiskEncryptionSetName = "ContosoCMKDES"
$KeyVault = New-AzKeyVault -Name $KeyVaultName -ResourceGroupName $TargetResourceGroupName -Location $Location -EnableSoftDelete -EnablePurgeProtection
$Key = Add-AzKeyVaultKey -VaultName $KeyVaultName -Name $KeyName -Destination $KeyDestination
$desConfig = New-AzDiskEncryptionSetConfig -Location $Location -SourceVaultId $KeyVault.ResourceId -KeyUrl $Key.Key.Kid -IdentityType SystemAssigned
$des = New-AzDiskEncryptionSet -Name $DiskEncryptionSetName -ResourceGroupName $TargetResourceGroupName -InputObject $desConfig
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ObjectId $des.Identity.PrincipalId -PermissionsToKeys wrapkey,unwrapkey,get
New-AzRoleAssignment -ResourceName $KeyVaultName -ResourceGroupName $TargetResourceGroupName -ResourceType "Microsoft.KeyVault/vaults" -ObjectId $des.Identity.PrincipalId -RoleDefinitionName "Reader"
```
## <a name="get-details-of-the-vmware-vm-to-migrate"></a>Get details of the VMware VM to migrate
In this step, you'll use Azure PowerShell to get the details of the VM that needs to be migrated. These details will be used to construct the Resource Manager template for replication. Specifically, the two properties of interest are:
- The machine resource ID for the discovered VMs.
- The list of disks for the VM and their disk identifiers.
```azurepowershell
$ProjectResourceGroup = "ContosoVMwareCMK" #Resource group that the Azure Migrate Project is created in
$ProjectName = "ContosoVMwareCMK" #Name of the Azure Migrate Project
$solution = Get-AzResource -ResourceGroupName $ProjectResourceGroup -ResourceType Microsoft.Migrate/MigrateProjects/solutions -ExpandProperties -ResourceName $ProjectName | where Name -eq "Servers-Discovery-ServerDis
covery"
# Displays one entry for each appliance in the project mapping the appliance to the VMware sites discovered through the appliance.
$solution.Properties.details.extendedDetails.applianceNameToSiteIdMapV2 | ConvertFrom-Json | select ApplianceName, SiteId
```
```Output
ApplianceName SiteId
------------- ------
VMwareApplianc /subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.OffAzure/VMwareSites/VMwareApplianca8basite
```
Copy the SiteId string value corresponding to the Azure Migrate appliance that the VM is discovered from. In the example shown above, the SiteId is *"/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.OffAzure/VMwareSites/VMwareApplianca8basite"*
```azurepowershell
#Replace value with SiteId from the previous step
$SiteId = "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.OffAzure/VMwareSites/VMwareApplianca8basite"
$SiteName = Get-AzResource -ResourceId $SiteId -ExpandProperties | Select-Object -ExpandProperty Name
$DiscoveredMachines = Get-AzResource -ResourceGroupName $ProjectResourceGroup -ResourceType Microsoft.OffAzure/VMwareSites/machines -ExpandProperties -ResourceName $SiteName
#Get machine details
PS /home/bharathram> $MachineName = "FPL-W19-09" #Replace string with VMware VM name of the machine to migrate
PS /home/bharathram> $machine = $Discoveredmachines | where {$_.Properties.displayName -eq $MachineName}
PS /home/bharathram> $machine.count #Validate that only 1 VM was found matching this name.
```
Copy the ResourceId, name, and disk UUID values for the machine to be migrated.
```Output
PS > $machine.Name
10-150-8-52-b090bef3-b733-5e34-bc8f-eb6f2701432a_50098f99-f949-22ca-642b-724ec6595210
PS > $machine.ResourceId
/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.OffAzure/VMwareSites/VMwareApplianca8basite/machines/10-150-8-52-b090bef3-b733-5e34-bc8f-eb6f2701432a_50098f99-f949-22ca-642b-724ec6595210
PS > $machine.Properties.disks | select uuid, label, name, maxSizeInBytes
uuid label name maxSizeInBytes
---- ----- ---- --------------
6000C291-5106-2aac-7a74-4f33c3ddb78c Hard disk 1 scsi0:0 42949672960
6000C293-39a1-bd70-7b24-735f0eeb79c4 Hard disk 2 scsi0:1 53687091200
6000C29e-cbee-4d79-39c7-d00dd0208aa9 Hard disk 3 scsi0:2 53687091200
```
## <a name="create-resource-manager-template-for-replication"></a>Create a Resource Manager template for replication
- Open the Resource Manager template file that you downloaded in the **Identifying replication infrastructure components** step in an editor of your choice.
- Remove all resource definitions from the template except the resources of type *"Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationMigrationItems"*
- If there are multiple resource definitions of the above type, remove all but one. Remove any **dependsOn** property definitions from the resource definition.
- At the end of this step, you should have a file that looks like the example below and has the same set of properties.
```
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"type": "Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationMigrationItems",
"apiVersion": "2018-01-10",
"name": "ContosoMigration7371rsvault/VMware104e4replicationfabric/VMware104e4replicationcontainer/10-150-8-52-b090bef3-b733-5e34-bc8f-eb6f2701432a_500937f3-805e-9414-11b1-f22923456e08",
"properties": {
"policyId": "/Subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/ContosoMigration/providers/Microsoft.RecoveryServices/vaults/ContosoMigration7371rsvault/replicationPolicies/migrateVMware104e4sitepolicy",
"providerSpecificDetails": {
"instanceType": "VMwareCbt",
"vmwareMachineId": "/subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/ContosoMigration/providers/Microsoft.OffAzure/VMwareSites/VMware104e4site/machines/10-150-8-52-b090bef3-b733-5e34-bc8f-eb6f2701432a_500937f3-805e-9414-11b1-f22923456e08",
"targetResourceGroupId": "/subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/PayrollRG",
"targetNetworkId": "/subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/PayrollRG/providers/Microsoft.Network/virtualNetworks/PayrollNW",
"targetSubnetName": "PayrollSubnet",
"licenseType": "NoLicenseType",
"disksToInclude": [
{
"diskId": "6000C295-dafe-a0eb-906e-d47cb5b05a1d",
"isOSDisk": "true",
"logStorageAccountId": "/subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/ContosoMigration/providers/Microsoft.Storage/storageAccounts/migratelsa1432469187",
"logStorageAccountSasSecretName": "migratelsa1432469187-cacheSas",
"diskType": "Standard_LRS"
}
],
"dataMoverRunAsAccountId": "/subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/ContosoMigration/providers/Microsoft.OffAzure/VMwareSites/VMware104e4site/runasaccounts/b090bef3-b733-5e34-bc8f-eb6f2701432a",
"snapshotRunAsAccountId": "/subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/ContosoMigration/providers/Microsoft.OffAzure/VMwareSites/VMware104e4site/runasaccounts/b090bef3-b733-5e34-bc8f-eb6f2701432a",
"targetBootDiagnosticsStorageAccountId": "/subscriptions/6785ea1f-ac40-4244-a9ce-94b12fd832ca/resourceGroups/ContosoMigration/providers/Microsoft.Storage/storageAccounts/migratelsa1432469187",
"targetVmName": "PayrollWeb04"
}
}
}
]
}
```
- Edit the **name** property in the resource definition. Replace the string following the last "/" in the name property with the value of *$machine.Name* (from the previous step).
- Change the value of the **properties.providerSpecificDetails.vmwareMachineId** property to the value of *$machine.ResourceId* (from the previous step).
- Set the values of **targetResourceGroupId**, **targetNetworkId**, and **targetSubnetName** to the target resource group ID, target virtual network resource ID, and target subnet name, respectively.
- Set the value of **licenseType** to "WindowsServer" to apply Azure Hybrid Benefit to this VM. If this VM isn't eligible for Azure Hybrid Benefit, set the value of **licenseType** to NoLicenseType.
- Change the value of the **targetVmName** property to the desired Azure virtual machine name for the migrated VM.
- Optionally, add a property named **targetVmSize** below the **targetVmName** property. Set the value of the **targetVmSize** property to the desired Azure virtual machine size for the migrated VM.
- The **disksToInclude** property is a list of disk inputs for replication, with each list item representing one on-premises disk. Create as many list items as there are disks on the on-premises VM. Replace the **diskId** property in each list item with the UUID of the disk identified in the previous step. Set the value of **isOSDisk** to "true" for the VM's operating system disk and to "false" for all other disks. Leave the **logStorageAccountId** and **logStorageAccountSasSecretName** properties unchanged. Set the value of **diskType** to the Azure managed disk type (*Standard_LRS, Premium_LRS, StandardSSD_LRS*) to use for the disk. For the disks that need to be encrypted with CMK, add a property named **diskEncryptionSetId** and set its value to the resource ID of the disk encryption set (**$des.Id**) created in the *Create a disk encryption set* step.
- Save the edited template file. For the example above, the edited template file looks like this:
```
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"type": "Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationMigrationItems",
"apiVersion": "2018-01-10",
"name": "ContosoVMwareCMK00ddrsvault/VMwareApplianca8bareplicationfabric/VMwareApplianca8bareplicationcontainer/10-150-8-52-b090bef3-b733-5e34-bc8f-eb6f2701432a_50098f99-f949-22ca-642b-724ec6595210",
"properties": {
"policyId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.RecoveryServices/vaults/ContosoVMwareCMK00ddrsvault/replicationPolicies/migrateVMwareApplianca8basitepolicy",
"providerSpecificDetails": {
"instanceType": "VMwareCbt",
"vmwareMachineId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.OffAzure/VMwareSites/VMwareApplianca8basite/machines/10-150-8-52-b090bef3-b733-5e34-bc8f-eb6f2701432a_50098f99-f949-22ca-642b-724ec6595210",
"targetResourceGroupId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoMigrationTarget",
"targetNetworkId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/cmkRTest/providers/Microsoft.Network/virtualNetworks/cmkvm1_vnet",
"targetSubnetName": "cmkvm1_subnet",
"licenseType": "NoLicenseType",
"disksToInclude": [
{
"diskId": "6000C291-5106-2aac-7a74-4f33c3ddb78c",
"isOSDisk": "true",
"logStorageAccountId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.Storage/storageAccounts/migratelsa1671875959",
"logStorageAccountSasSecretName": "migratelsa1671875959-cacheSas",
"diskEncryptionSetId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/CONTOSOMIGRATIONTARGET/providers/Microsoft.Compute/diskEncryptionSets/ContosoCMKDES",
"diskType": "Standard_LRS"
},
{
"diskId": "6000C293-39a1-bd70-7b24-735f0eeb79c4",
"isOSDisk": "false",
"logStorageAccountId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.Storage/storageAccounts/migratelsa1671875959",
"logStorageAccountSasSecretName": "migratelsa1671875959-cacheSas",
"diskEncryptionSetId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/CONTOSOMIGRATIONTARGET/providers/Microsoft.Compute/diskEncryptionSets/ContosoCMKDES",
"diskType": "Standard_LRS"
},
{
"diskId": "6000C29e-cbee-4d79-39c7-d00dd0208aa9",
"isOSDisk": "false",
"logStorageAccountId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.Storage/storageAccounts/migratelsa1671875959",
"logStorageAccountSasSecretName": "migratelsa1671875959-cacheSas",
"diskEncryptionSetId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/CONTOSOMIGRATIONTARGET/providers/Microsoft.Compute/diskEncryptionSets/ContosoCMKDES",
"diskType": "Standard_LRS"
}
],
"dataMoverRunAsAccountId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.OffAzure/VMwareSites/VMwareApplianca8basite/runasaccounts/b090bef3-b733-5e34-bc8f-eb6f2701432a",
"snapshotRunAsAccountId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.OffAzure/VMwareSites/VMwareApplianca8basite/runasaccounts/b090bef3-b733-5e34-bc8f-eb6f2701432a",
"targetBootDiagnosticsStorageAccountId": "/subscriptions/509099b2-9d2c-4636-b43e-bd5cafb6be69/resourceGroups/ContosoVMwareCMK/providers/Microsoft.Storage/storageAccounts/migratelsa1671875959",
"performAutoResync": "true",
"targetVmName": "FPL-W19-09"
}
}
}
]
}
```
## <a name="set-up-replication"></a>Set up replication
You can now deploy the edited Resource Manager template to the project's resource group to set up replication for the VM. Learn how to [deploy resources with Azure Resource Manager templates and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
```azurepowershell
New-AzResourceGroupDeployment -ResourceGroupName $ProjectResourceGroup -TemplateFile "C:\Users\Administrator\Downloads\template.json"
```
```Output
DeploymentName : template
ResourceGroupName : ContosoVMwareCMK
ProvisioningState : Succeeded
Timestamp : 3/11/2020 8:52:00 PM
Mode : Incremental
TemplateLink :
Parameters :
Outputs :
DeploymentDebugLogLevel :
```
## <a name="next-steps"></a>Next steps
[Monitor](tutorial-migrate-vmware.md#track-and-monitor) the replication status through the portal experience, and perform test migrations and the final migration.
# AMP-Toolbox Cache URL
Translates a URL from the origin to the AMP Cache URL format, according to the specification
available in the [AMP documentation](https://developers.google.com/amp/cache/overview). This includes the SHA256 fallback URLs used by the AMP Cache for invalid human-readable cache URLs.
## Usage
### Including the Module
#### ES Module (Browser)
```javascript
import {ampToolboxCacheUrl} from 'amp-toolbox-cache-url';
```
#### CommonJS (Node)
```javascript
const ampToolboxCacheUrl = require('amp-toolbox-cache-url');
```
#### UMD (Node, Browser)
In the browser, include the UMD module in an HTML `<script>` tag. If using node, replace `window` with `global`.
```javascript
const {ampToolboxCacheUrl} = window.AmpToolboxCacheUrl;
```
### Using the module
```javascript
// Get an AMP Cache URL from a cache domain, and a canonical URL
ampToolboxCacheUrl.createCacheUrl('cdn.ampproject.org', 'https://www.example.com').then((cacheUrl) => {
// This would log:
// 'https://www-example-com.cdn.ampproject.org/c/s/www.example.com/'
console.log(cacheUrl);
});
// Transform a canonical URL to an AMP Cache subdomain
ampToolboxCacheUrl.createCurlsSubdomain('https://www.example.com').then((curlsSubdomain) => {
// This would log:
// 'www-example-com'
console.log(curlsSubdomain);
});
```
# SPOIL
## SPRecycleBinItemCollection
# Tanzu CLI Getting Started
A simple set of instructions to set up and use the Tanzu CLI.
## Binary installation
### Install the latest release of Tanzu CLI
`linux-amd64`, `windows-amd64`, and `darwin-amd64` are the OS-ARCHITECTURE
combinations we currently support.
#### macOS/Linux
- Download the latest [release](https://github.com/vmware-tanzu/tanzu-framework/releases/latest)
- Extract the downloaded tar file
- for macOS:
```sh
mkdir tanzu && tar -xvf tanzu-framework-darwin-amd64.tar -C tanzu
```
- for Linux:
```sh
mkdir tanzu && tar -xvf tanzu-framework-linux-amd64.tar -C tanzu
```
- Install the `tanzu` CLI
- for macOS:
```sh
install tanzu/cli/core/v0.5.0/tanzu-core-darwin_amd64 /usr/local/bin/tanzu
```
- for Linux:
```sh
sudo install tanzu/cli/core/v0.5.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
```
Note: Replace `v0.5.0` with the version you've downloaded
- Set `TANZU_CLI_NO_INIT=true`
```sh
export TANZU_CLI_NO_INIT=true
```
- If you have a previous version of tanzu CLI already installed and the config file ~/.config/tanzu/config.yaml is present, run this command to make sure the default plugin repo points to the right path.
```sh
tanzu plugin repo update -b tanzu-cli-framework core
```
- Install the downloaded plugins
```sh
tanzu plugin install --local tanzu/cli all
```
- Verify installed plugins
```sh
tanzu plugin list
```
#### Windows
- Download the latest [release](https://github.com/vmware-tanzu/tanzu-framework/releases/latest)
- Open PowerShell as an administrator, change to the download directory and run:
```sh
mkdir tanzu
tar -xvf tanzu-framework-windows-amd64.tar -C tanzu
cd .\tanzu\
```
- Save the following in `install.bat` in the current directory, then run `install.bat`
```sh
SET TANZU_CLI_DIR=%ProgramFiles%\tanzu
mkdir "%TANZU_CLI_DIR%"
copy /B /Y cli\core\v0.5.0\tanzu-core-windows_amd64.exe "%TANZU_CLI_DIR%\tanzu.exe"
set PATH=%PATH%;%TANZU_CLI_DIR%
SET PLUGIN_DIR=%LocalAppData%\tanzu-cli
mkdir %PLUGIN_DIR%
SET TANZU_CACHE_DIR=%LocalAppData%\.cache\tanzu
rmdir /Q /S %TANZU_CACHE_DIR%
set TANZU_CLI_NO_INIT=true
tanzu plugin repo update -b tanzu-cli-framework core
tanzu plugin install --local cli all
tanzu plugin list
```
  Note: `v0.5.0` on line 3 of the script above refers to the version being installed; replace it with the version you downloaded.
- Add `Program Files\tanzu` to your PATH.
## Build the CLI and plugins from source
If you want the very latest, you can also build and install tanzu CLI, and its plugins, from source.
### Prerequisites
- [go](https://golang.org/dl/) version 1.16
- Clone Tanzu Framework and run the below command to build and install CLI and
plugins locally for your platform.
```sh
TANZU_CLI_NO_INIT=true make build-install-cli-local
```
- When the build is done, the tanzu CLI binary and the plugins will be produced locally in the `artifacts` directory.
The CLI binary will be in a directory similar to the following:
```bash
./artifacts/<OS>/<ARCH>/cli/core/<version>/tanzu-core-<os_arch>
```
- For instance, the following is a build for MacOS:
```bash
./artifacts/darwin/amd64/cli/core/latest/tanzu-core-darwin_amd64
```
- If you additionally want to build and install CLI and plugins for all platforms, run:
```sh
TANZU_CLI_NO_INIT=true make build-install-cli-all
```
The CLI currently contains a default distribution, which is the default set of plugins that should be installed on
initialization. Initialization of the distributions can be prevented by setting the env var `TANZU_CLI_NO_INIT=true`.
Check out this [doc](../cli/plugin_implementation_guide.md#Distributions) to learn more about distributions in the Tanzu CLI.
## Usage
```sh
Usage:
tanzu [command]
Available command groups:
Admin
builder Build Tanzu components
test Test the CLI
Run
cluster Kubernetes cluster operations
kubernetes-release Kubernetes release operations
management-cluster Kubernetes management cluster operations
System
completion Output shell completion code
config Configuration for the CLI
init Initialize the CLI
login Login to the platform
plugin Manage CLI plugins
update Update the CLI
version Version information
Version
alpha Alpha CLI commands
Flags:
-h, --help help for tanzu
Use "tanzu [command] --help" for more information about a command.
```
## Creating clusters
Tanzu CLI allows you to create clusters on a variety of infrastructure platforms
such as vSphere, Azure, and AWS, as well as on Docker.
1. Initialize the Tanzu kickstart UI by running the below command to create the
management cluster.
```sh
tanzu management-cluster create --ui
```
The above command opens the management cluster provisioning UI, where you can select the
deployment infrastructure and create the cluster.
1. To validate the creation of the management cluster, run:
```sh
tanzu management-cluster get
```
1. Get the management cluster's kubeconfig
```sh
tanzu management-cluster kubeconfig get ${MGMT_CLUSTER_NAME} --admin
```
1. Set kubectl context
```sh
kubectl config use-context ${MGMT_CLUSTER_NAME}-admin@${MGMT_CLUSTER_NAME}
```
1. Next create the workload cluster
1. Create a new workload cluster config file by copying the management cluster config file
   `~/.config/tanzu/tkg/clusterconfigs/<MGMT-CONFIG-FILE>` and changing the `CLUSTER_NAME` parameter
   to the workload cluster name. You can also edit other parameters as required.
1. Create workload cluster
```sh
tanzu cluster create ${WORKLOAD_CLUSTER_NAME} --file ~/.config/tanzu/tkg/clusterconfigs/workload.yaml
```
1. Validate workload cluster creation
```sh
tanzu cluster list
```
1. Do cool things with the provisioned clusters.
1. Clean up
1. To delete the workload cluster
```sh
tanzu cluster delete ${WORKLOAD_CLUSTER_NAME}
```
The management cluster can be deleted only after all the workload clusters have been deleted.
1. To delete the management cluster
```sh
tanzu management-cluster delete ${MGMT_CLUSTER_NAME}
```
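The config-copy step above (copying `<MGMT-CONFIG-FILE>` and changing `CLUSTER_NAME`) can also be scripted. A minimal sketch in Python, with placeholder file names only — `mgmt-config.yaml` stands in for your actual file under `~/.config/tanzu/tkg/clusterconfigs/`:

```python
import re
from pathlib import Path

def make_workload_config(mgmt_config: Path, workload_config: Path, cluster_name: str) -> None:
    """Copy a management-cluster config and point CLUSTER_NAME at the new workload cluster."""
    text = mgmt_config.read_text()
    text = re.sub(r"(?m)^CLUSTER_NAME:.*$", f"CLUSTER_NAME: {cluster_name}", text)
    workload_config.write_text(text)

# Stand-in config with illustrative values only.
mgmt = Path("mgmt-config.yaml")
mgmt.write_text("CLUSTER_NAME: mgmt-cluster\nCLUSTER_PLAN: dev\n")

make_workload_config(mgmt, Path("workload.yaml"), "my-workload")
print(Path("workload.yaml").read_text())
# CLUSTER_NAME: my-workload
# CLUSTER_PLAN: dev
```

Any other parameters can still be edited by hand afterwards, exactly as described in the steps above.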
## What's next
Tanzu CLI is built to be extensible. If you wish to extend Tanzu CLI, you can do
so by writing your own CLI plugins.
### Create your own plugin
To bootstrap a new plugin, follow the `builder` plugin documentation [here](../../cmd/cli/plugin-admin/builder/README.md).
Check out the [plugin implementation guide](../cli/plugin_implementation_guide.md) for more details.
My Django Extensions
--------------------
Created this to reuse what I have produced before. You can get a lot
of fine extensions for django here.
Getting It
----------
You can get My Django Extensions by using pip:
```sh
$ pip install git+https://github.com/aamishbaloch/my-django-extensions.git
```
Installing It
-------------
To enable my_django_extensions in your project you need to add it to INSTALLED_APPS in your projects settings.py file:
```sh
INSTALLED_APPS = (
...
'my_django_extensions',
...
)
```
---
title: Funzioni stringa
ms.date: 03/30/2017
ms.assetid: 338f0c26-8aee-43eb-bd1a-ec0849a376b9
ms.openlocfilehash: 6da257cad90232426c71221dfd9d418265479bbe
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "61879119"
---
# <a name="string-functions"></a>String functions
The .NET Framework Data Provider for SQL Server (SqlClient) provides `String` functions that perform operations on an input `String` and return a `String` result or a numeric value. These functions are in the SqlServer namespace, which is available when you use SqlClient. A provider's namespace property enables the Entity Framework to discover which prefix is used by this provider for specific constructs, such as types and functions.
The following table shows the SqlClient `String` functions:
|Function|Description|
|--------------|-----------------|
|`ASCII(expression)`|Returns the ASCII code value of the leftmost character of a string expression.<br /><br /> **Arguments**<br /><br /> `expression`: Any valid expression of an ASCII `String` type.<br /><br /> **Return Value**<br /><br /> An `Int32`.<br /><br /> **Example**<br /><br /> `SqlServer.ASCII('A')`|
|`CHAR(expression)`|Converts an `Int32` code to an ASCII string.<br /><br /> **Arguments**<br /><br /> `expression`: An `Int32`.<br /><br /> **Return Value**<br /><br /> An ASCII `String`.<br /><br /> **Example**<br /><br /> `SqlServer.char(97)`|
|`CHARINDEX(expression1, expression2 [, start_location])`|Returns the starting position of the specified expression in a character string.<br /><br /> **Arguments**<br /><br /> `expression1`: An expression that contains the sequence of characters to be found. The expression can be of a string (ASCII or Unicode) or binary type.<br /><br /> `expression2`: An expression, typically a column, in which to search for the specified sequence. The expression can be of a string (ASCII or Unicode) or binary type.<br /><br /> `start_location`: (Optional) An Int64 (not returned in SQL Server 2000) or Int32 that represents the position at which to start searching for expression1 within expression2. If start_location is not specified, or is a negative number or zero, the search starts at the beginning of expression2.<br /><br /> **Return Value**<br /><br /> An `Int32`.<br /><br /> **Example**<br /><br /> `SqlServer.CHARINDEX('h', 'habcdefgh', 2)`|
|`DIFFERENCE(expression, expression)`|Compares the `SOUNDEX` values of two strings and evaluates their similarity.<br /><br /> **Arguments**<br /><br /> An ASCII or Unicode `String` type. `expression` can be a constant, variable, or column.<br /><br /> **Return Value**<br /><br /> An `Int32` that represents the difference between the SOUNDEX values of two character expressions. The range is from 0 through 4. 0 indicates weak or no similarity, and 4 indicates strong similarity or identical values.<br /><br /> **Example**<br /><br /> `// The following example returns a DIFFERENCE value of 4,`<br /><br /> `//the least possible difference or the best match.`<br /><br /> `SqlServer.DIFFERENCE('Green','Greene');`|
|`LEFT(expression, count)`|Returns the left part of a character string with the specified number of characters.<br /><br /> **Arguments**<br /><br /> `expression`: A Unicode or ASCII String type. Use the CAST function to explicitly convert character_expression.<br /><br /> `count`: An `Int64` (not returned in SQL Server 2000) or `Int32` type that specifies how many characters of character_expression will be returned.<br /><br /> **Return Value**<br /><br /> A Unicode or ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.LEFT('SQL Server', 4)`|
|`LEN(expression)`|Returns the number of characters of the specified string expression, excluding trailing blanks.<br /><br /> **Arguments**<br /><br /> `expression`: An expression of a `String` (Unicode or ASCII) or `Binary` type.<br /><br /> **Return Value**<br /><br /> An `Int32`.<br /><br /> **Example**<br /><br /> `SqlServer.LEN('abcd')`|
|`LOWER(expression)`|Returns a `String` expression after converting uppercase character data to lowercase.<br /><br /> **Arguments**<br /><br /> `expression`: Any valid expression of the `String` type.<br /><br /> **Return Value**<br /><br /> A `String`.<br /><br /> **Example**<br /><br /> `SqlServer.LOWER('AbB')`|
|`LTRIM(expression)`|Returns a `String` expression after removing leading blanks.<br /><br /> **Arguments**<br /><br /> `expression`: Any valid expression of the `String` type.<br /><br /> **Return Value**<br /><br /> A `String`.<br /><br /> **Example**<br /><br /> `SqlServer.LTRIM(' d')`|
|`NCHAR(expression)`|Returns a Unicode `String` type with the specified integer code, as defined by the Unicode standard.<br /><br /> **Arguments**<br /><br /> `expression`: An `Int32`.<br /><br /> **Return Value**<br /><br /> A Unicode `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.NCHAR(65)`|
|`PATINDEX('%pattern%', expression)`|Returns the starting position of the first occurrence of a pattern in a specified `String` expression.<br /><br /> **Arguments**<br /><br /> `'%pattern%'`: An ASCII or Unicode `String` type. Wildcard characters can be used; however, the % character must precede and follow the pattern, except when searching for the first or last characters.<br /><br /> `expression`: An ASCII or Unicode `String` in which to search for the specified pattern.<br /><br /> **Return Value**<br /><br /> An `Int32`.<br /><br /> **Example**<br /><br /> `SqlServer.PATINDEX('abc', 'ab')`|
|`QUOTENAME('char_string' [, 'quote_char'])`|Returns a Unicode `String` with the delimiters added to make the input string a valid SQL Server 2005 delimited identifier.<br /><br /> **Arguments**<br /><br /> `char_string`: A Unicode `String` type.<br /><br /> `quote_char`: A one-character string to use as the delimiter. It can be a single quotation mark ( ' ), a left or right bracket ( [ ] ), or a double quotation mark ( " ). If `quote_char` is not specified, brackets are used.<br /><br /> **Return Value**<br /><br /> A Unicode `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.QUOTENAME('abc[]def')`|
|`REPLACE(expression1, expression2, expression3)`|Replaces all occurrences of a specified string value with another string value.<br /><br /> **Arguments**<br /><br /> `expression1`: The string expression to be searched. string_expression1 can be of a Unicode or ASCII string type.<br /><br /> `expression2`: The substring to be found. string_expression2 can be of a Unicode or ASCII string type.<br /><br /> `expression3`: The replacement string. string_expression3 can be of a Unicode or ASCII string type.<br /><br /> **Example**<br /><br /> `SqlServer.REPLACE('aabbcc', 'bc', 'zz')`|
|`REPLICATE(char_expression, int_expression)`|Repeats a character expression a specified number of times.<br /><br /> **Arguments**<br /><br /> `char_expression`: A Unicode or ASCII `String` type.<br /><br /> `int_expression`: An `Int64` (not supported in SQL Server 2000) or `Int32` type.<br /><br /> **Return Value**<br /><br /> A Unicode or ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.REPLICATE('aa',2)`|
|`REVERSE(expression)`|Returns a Unicode or ASCII string type with the character positions reversed relative to the input string.<br /><br /> **Arguments**<br /><br /> `expression`: A Unicode or ASCII `String` type.<br /><br /> **Return Value**<br /><br /> A Unicode or ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.REVERSE('abcd')`|
|`RIGHT(char_expression, count)`|Returns the right part of a character string with the specified number of characters.<br /><br /> **Arguments**<br /><br /> `char_expression`: A Unicode or ASCII String type. Use the CAST function to explicitly convert character_expression.<br /><br /> `count`: An `Int64` (not returned in SQL Server 2000) or `Int32` type that specifies how many characters of character_expression will be returned.<br /><br /> **Return Value**<br /><br /> An ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.RIGHT('SQL Server', 6)`|
|`RTRIM(expression)`|Returns a Unicode or ASCII string type after removing trailing blanks.<br /><br /> **Arguments**<br /><br /> `expression`: A Unicode or ASCII `String` type.<br /><br /> **Return Value**<br /><br /> A Unicode or ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.RTRIM(' d e ')`|
|`SOUNDEX(expression)`|Returns a four-character (SOUNDEX) code to evaluate the similarity of two strings. **Arguments**<br /><br /> `expression`: A Unicode or ASCII String type.<br /><br /> **Return Value**<br /><br /> An ASCII `String` type. A four-character (SOUNDEX) code is a string used to evaluate the similarity of two strings.<br /><br /> **Example**<br /><br /> `Select SqlServer.SOUNDEX('Smith'), SqlServer.SOUNDEX('Smythe') FROM {1}`<br /><br /> **Returns**<br /><br /> `----- ----- S530 S530`|
|`SPACE(int_expression)`|Returns an ASCII `String` type of repeated spaces.<br /><br /> **Arguments**<br /><br /> `int_expression`: An `Int64` (not returned in SQL Server 2000) or `Int32` that indicates the number of spaces.<br /><br /> **Return Value**<br /><br /> An ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.SPACE(2)`|
|`STR(float_expression [, length [, decimal]])`|Returns an ASCII `String` converted from numeric data.<br /><br /> **Arguments**<br /><br /> `float _expression`: An expression of an approximate numeric (`Double`) data type with a decimal point.<br /><br /> `length`: (Optional) An `Int32` that represents the total length, including the decimal point, sign, digits, and spaces. The default is 10.<br /><br /> `decimal`: (Optional) An `Int32` that represents the number of places to the right of the decimal point. The decimal number must be less than or equal to 16. If it is greater than 16, the result is truncated after sixteen places to the right of the decimal point.<br /><br /> **Return Value**<br /><br /> An ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.STR(212.0)`|
|`STUFF(str_expression, start, length, str_expression_to_insert)`|Deletes a specified length of characters and inserts another set of characters at a specified starting point in a string expression.<br /><br /> **Arguments**<br /><br /> `str_expression`: A Unicode or ASCII `String` type.<br /><br /> `start:` An `Int64` (not returned in SQL Server 2000) or `Int32` value that specifies the location at which to start the deletion and insertion.<br /><br /> `length`: An `Int64` (not returned in SQL Server 2000) or `Int32` value that specifies the number of characters to delete.<br /><br /> `str_expression_to_insert`: A Unicode or ASCII `String` type.<br /><br /> **Return Value**<br /><br /> A Unicode or ASCII `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.STUFF('abcd', 2, 2, 'zz')`|
|`SUBSTRING(str_expression, start, length)`|Returns part of a `String` expression.<br /><br /> **Arguments**<br /><br /> `str_expression`: An expression of a `String` (ASCII or Unicode) or `Binary` type.<br /><br /> `start`: An `Int64` (not returned in SQL Server 2000) or `Int32` that specifies where the substring starts. 1 refers to the first character in the string.<br /><br /> `length`: An `Int64` (not returned in SQL Server 2000) or `Int32` that specifies how many characters of the expression will be returned.<br /><br /> **Return Value**<br /><br /> A `String` (ASCII or Unicode) or `Binary` type.<br /><br /> **Example**<br /><br /> `SqlServer.SUBSTRING('abcd', 2, 2)`|
|`UNICODE(expression)`|Returns the integer value, as defined by the Unicode standard, for the first character of the input expression.<br /><br /> **Arguments**<br /><br /> `expression`: A Unicode `String` type.<br /><br /> **Return Value**<br /><br /> An `Int32`.<br /><br /> **Example**<br /><br /> `SqlServer.UNICODE('a')`|
|`UPPER(expression)`|Returns a `String` expression after converting lowercase character data to uppercase.<br /><br /> **Arguments**<br /><br /> `expression`: An expression of a Unicode or ASCII String type.<br /><br /> **Return Value**<br /><br /> An ASCII or Unicode `String` type.<br /><br /> **Example**<br /><br /> `SqlServer.UPPER('AbB')`|
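To make the 1-based indexing conventions in the table concrete, here is a small sketch that mimics the behavior of `CHARINDEX`, `LEFT`, and `STUFF`. Python is used here purely as an illustration; this is not part of SqlClient or the Entity Framework:

```python
def charindex(pattern: str, expression: str, start: int = 1) -> int:
    """Mimic CHARINDEX: 1-based position of `pattern` in `expression`, 0 when absent."""
    pos = expression.find(pattern, max(start - 1, 0))
    return pos + 1  # str.find is 0-based and returns -1 on a miss, so this yields 0

def left(expression: str, count: int) -> str:
    """Mimic LEFT: the first `count` characters."""
    return expression[:count]

def stuff(expression: str, start: int, length: int, replacement: str) -> str:
    """Mimic STUFF: delete `length` characters at 1-based `start`, then insert `replacement`."""
    return expression[: start - 1] + replacement + expression[start - 1 + length :]

print(charindex("h", "habcdefgh", 2))  # 9, matching SqlServer.CHARINDEX('h', 'habcdefgh', 2)
print(left("SQL Server", 4))           # 'SQL ' (includes the trailing space)
print(stuff("abcd", 2, 2, "zz"))       # 'azzd', matching SqlServer.STUFF('abcd', 2, 2, 'zz')
```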
For more information about the `String` functions that SqlClient supports, see the documentation for the version of SQL Server that is specified in the SqlClient provider manifest:
|SQL Server 2000|SQL Server 2005|SQL Server 2008|
|---------------------|---------------------|---------------------|
|[String Functions (Transact-SQL)](https://go.microsoft.com/fwlink/?LinkId=115915)|[String Functions (Transact-SQL)](https://go.microsoft.com/fwlink/?LinkId=115916)|[String Functions (Transact-SQL)](https://go.microsoft.com/fwlink/?LinkId=115914)|
## <a name="see-also"></a>See also
- [SqlClient for Entity Framework Functions](../../../../../docs/framework/data/adonet/ef/sqlclient-for-ef-functions.md)
- [Known Issues in SqlClient for Entity Framework](../../../../../docs/framework/data/adonet/ef/known-issues-in-sqlclient-for-entity-framework.md)
# bnp-pdf-parser
Parses BNP Paribas PDF statements to CSV.
# Fettle-Sharp
Fettle-Sharp
# feelep.xyz
https://feelep.xyz
[](https://app.netlify.com/sites/feelep/deploys)
---
# required metadata
title: Synchronize task management between Microsoft Teams and Dynamics 365 Commerce POS
description: This topic describes how to synchronize task management between Microsoft Teams and Dynamics 365 Commerce point of sale (POS).
author: gvrmohanreddy
ms.date: 02/17/2021
ms.topic: article
ms.prod:
ms.technology:
# optional metadata
# ms.search.form:
#ROBOTS:
audience: Application User
# ms.devlang:
ms.reviewer: v-chgri
# ms.tgt_pltfrm:
# ms.custom:
ms.search.region: Global
# ms.search.industry:
ms.author: gmohanv
ms.search.validFrom: 2021-01-15
ms.dyn365.ops.version: 10.0.18
---
# Synchronize task management between Microsoft Teams and Dynamics 365 Commerce POS
[!include [banner](includes/banner.md)]
This topic describes how to synchronize task management between Microsoft Teams and Dynamics 365 Commerce point of sale (POS).
One of the main purposes of Teams integration is to enable the synchronization of task management between the POS application and Teams. In this way, store employees can use either the POS application or Teams to manage tasks, and don't have to switch applications.
Because Planner is used as a repository for tasks in Teams, there must be a link between Teams and Dynamics 365 Commerce. This link is established by using a specific plan ID for a given store team.
The following procedures show how to set up task management synchronization between the POS and Teams applications.
## Publish a test task list in Teams
The following procedure assumes that your store teams are using Microsoft Teams task management integration with Commerce for the first time.
To publish a test task list in Teams, follow these steps.
1. Sign in to Teams as a communications manager. Typically, communications managers are users who have the **Regional manager** role in Commerce.
1. In the left navigation pane, select **Tasks by Planner**.
1. On the **Published lists** tab, select **New list** in the lower left, and name the new list **Test task list**.
1. Select **Create**. The new list appears under **Drafts**.
1. Under **Task title**, give the first task the title **Testing Teams integration**. Then select **Enter**.
1. In the **Drafts** list, select the task list. Then select **Publish** in the upper-right corner.
1. In the **Select who to publish to** dialog box, select the teams that should receive the test task list.
1. Select **Next** to review your publication plan. If you must make changes, select **Back**.
1. Select **Confirm to proceed**, and then select **Publish**.
1. After publishing is completed, a message at the top of the **Published lists** tab indicates whether your task list was successfully delivered.
For more information, see [Publish task lists to create and track work in your organization](https://support.microsoft.com/office/publish-task-lists-to-create-and-track-work-in-your-organization-095409b3-f5af-40aa-9f9e-339b54e705df).
## Link POS and Teams for task management
To link the POS and Microsoft Teams applications for task management in Commerce headquarters, follow these steps.
> [!NOTE]
> Before you try to integrate Task management with Microsoft Teams, make sure that you've enabled [Dynamics 365 Commerce and Microsoft Teams integration](enable-teams-integration.md).
1. Go to **Retail and Commerce \> Task management \> Tasks integration with Microsoft Teams**.
1. On the Action Pane, select **Edit**.
1. Set the **Enable Task Management Integration** option to **Yes**.
1. On the Action Pane, select **Save**.
1. On the Action Pane, select **Setup task management**. You should receive a notification that indicates that a batch job that is named **Teams provision** is being created.
1. Go to **System administration \> Inquiries \> Batch jobs**, and find the most recent job that has the description **Teams provision**. Wait until this job has finished running.
1. Run the **CDX job 1070** to publish the plan ID and store references to Retail Server.
## Additional resources
[Dynamics 365 Commerce and Microsoft Teams integration overview](commerce-teams-integration.md)
[Enable Dynamics 365 Commerce and Microsoft Teams integration](enable-teams-integration.md)
[Provision Microsoft Teams from Dynamics 365 Commerce](provision-teams-from-commerce.md)
[Manage user roles in Microsoft Teams](manage-user-roles-teams.md)
[Map stores and teams if there are pre-existing teams in Microsoft Teams](map-stores-existing-teams.md)
[Dynamics 365 Commerce and Microsoft Teams integration FAQ](teams-integration-faq.md)
| 53.517647 | 265 | 0.774016 | eng_Latn | 0.986857 |
e5989a818f8952ea4a4e60a044f36f4bdda27518 | 150 | md | Markdown | content/querying/11-template-parameters.md | pusztaienike/docs-temp | 51efe20e9db1330bc0f38fabf50e14f06563894d | [
"MIT"
] | null | null | null | content/querying/11-template-parameters.md | pusztaienike/docs-temp | 51efe20e9db1330bc0f38fabf50e14f06563894d | [
"MIT"
] | null | null | null | content/querying/11-template-parameters.md | pusztaienike/docs-temp | 51efe20e9db1330bc0f38fabf50e14f06563894d | [
"MIT"
] | null | null | null | ---
title: Template parameters
sidebar: ApiDocs
showTitle: false
---
# Using template parameters
# Templates with properties
# Template expressions | 13.636364 | 27 | 0.773333 | eng_Latn | 0.905297 |
e598f41b0026780f01726d01ca16922ebe6a6e1e | 573 | md | Markdown | _languages-frameworks/26-vuejs.md | otaviojava/tech-radar | ff1657337a21c5a3078a212fb743613e7ea4e0be | [
"MIT"
] | 2 | 2021-07-09T11:43:42.000Z | 2021-11-03T16:40:53.000Z | _languages-frameworks/26-vuejs.md | otaviojava/tech-radar | ff1657337a21c5a3078a212fb743613e7ea4e0be | [
"MIT"
] | null | null | null | _languages-frameworks/26-vuejs.md | otaviojava/tech-radar | ff1657337a21c5a3078a212fb743613e7ea4e0be | [
"MIT"
] | 1 | 2021-07-09T13:17:54.000Z | 2021-07-09T13:17:54.000Z | ---
layout: details
filename: vuejs
name: vuejs
image: /tech-radar/assets/images/languages-frameworks/vuejs.png
category: languages-frameworks
ring: Assess
number: 26
---
# What is it ?
Vue.js is an open-source model–view–viewmodel front end JavaScript framework for building user interfaces and single-page applications. It was created by Evan You, and is maintained by him and the rest of the active core team members.
# Resources
- [Home page](https://vuejs.org/)
- [Start learning](https://www.vuemastery.com/courses/)
- [Documentation](https://vuejs.org/v2/guide/)
| 30.157895 | 234 | 0.760908 | eng_Latn | 0.912161 |
e59a44517ccbd4c351ad5992111acfdfddfc92db | 94 | md | Markdown | docs/zh/guide/presentations.md | Stngle/macaca-reporter | e9fc02e09a3dda01806e1f2eef2cd99862a7898a | [
"MIT"
] | 1 | 2020-03-04T09:06:27.000Z | 2020-03-04T09:06:27.000Z | docs/zh/guide/presentations.md | zgq346712481/macaca-reporter | d2d12ec6b7167952575cd7fca6e85a264a667891 | [
"MIT"
] | null | null | null | docs/zh/guide/presentations.md | zgq346712481/macaca-reporter | d2d12ec6b7167952575cd7fca6e85a264a667891 | [
"MIT"
] | 1 | 2020-03-04T09:06:19.000Z | 2020-03-04T09:06:19.000Z | # 社区分享
## 分享
- Testerhome 社区:[《使用 Macaca Reporter 报告器》](https://testerhome.com/topics/9816)
| 15.666667 | 78 | 0.691489 | kor_Hang | 0.18558 |
e59b11988af0e6e004cad569b0e011f971659b34 | 1,554 | md | Markdown | docs/visual-basic/misc/bc30009.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc30009.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc30009.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Ein Verweis auf Assembly erforderlich "<Assemblyname>', enthält die implementierte Schnittstelle'<Schnittstellenname>"
ms.date: 07/20/2015
f1_keywords:
- vbc30009
- bc30009
helpviewer_keywords:
- BC30009
ms.assetid: b2dfb89d-7fde-4a8e-ba7f-fe1e59eabaca
ms.openlocfilehash: 09952d7329bd3e9a6f1f4bf25d80089bd6f3d7a3
ms.sourcegitcommit: 0888d7b24f475c346a3f444de8d83ec1ca7cd234
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 12/22/2018
ms.locfileid: "53760639"
---
# <a name="reference-required-to-assembly-ltassemblynamegt-containing-the-implemented-interface-ltinterfacenamegt"></a>Reference required to assembly '<assemblyname>' containing the implemented interface '<interfacename>'
Reference required to assembly '\<assemblyname>' containing the implemented interface '\<interfacename>'. Add one to your project.
The interface is defined in a dynamic-link library (DLL) that is not directly referenced in your project. The Visual Basic compiler requires a reference to avoid ambiguity in case the interface is defined in more than one DLL or assembly.
**Error ID:** BC30009
## <a name="to-correct-this-error"></a>To correct this error
- Include the name of the unreferenced DLL or assembly in your project references.
## <a name="see-also"></a>See also
[Troubleshooting Broken References](/visualstudio/ide/troubleshooting-broken-references)
-------------------------------------------------------------
ID: aojea
info:
- employer: Red Hat
- slack: aojea
-------------------------------------------------------------
## SIGS
- SIG Network
- SIG Testing
- Occasionally: SIG Release, Scalability, Lifecycle, API Machinery, Instrumentation, ...
## What I have done
I started helping to implement IPv6 in Kubernetes in my spare time, and since then
I've joined the KIND project and become a member of SIG Testing and SIG Network.
I've made significant contributions to the project; some of the most important are the IPv6
and DualStack features, all the work on CI stabilization and test "flakes", and this little
gem that is the KIND project.
I try to help wherever I'm needed, with special focus on helping users and new contributors,
and keeping the stability and quality bar of the project high.
## What I'll do
Open source projects are complex: they live in ecosystems that evolve at a fast pace, with
multiple stakeholders (contributors, users, companies, ...) that have different needs and requirements.
I'll use my experience in open source and in managing large projects to help keep the balance and
the harmony of the project.
Kubernetes is a big and disruptive open source project with more than 6 years of life. I think
it is still young, and I'll work to keep it growing, evolving, and remaining a reference.
## Resources About Me
- [Kubernetes Advanced Networking Testing With KIND](https://kccnceu2021.sched.com/event/iE3g/kubernetes-advanced-networking-testing-with-kind-antonio-ojea-redhat)
- [Deep Dive: KIND - Benjamin Elder & Antonio Ojea](https://kccncna19.sched.com/event/Uah7/deep-dive-kind-benjamin-elder-google-antonio-ojea-garcia-suse)
- [SIG-Network and SIG-Testing Award 2020](https://www.youtube.com/watch?v=XCRkzgMTaJU)
- [Google Open Source Peer Bonus 2021](https://opensource.googleblog.com/2021/04/announcing-first-group-of-google-open-source-peer-bonus-winners.html)
- Twitter: https://twitter.com/Itsuugo
| 46.340909 | 163 | 0.73075 | eng_Latn | 0.962705 |
e59b34dccb66ebae8509aff1707087c04f44f5f8 | 1,235 | md | Markdown | zxd/README.md | bbidong/maskrcnn-benchmark | b83254a1654e85ef38d73617f1a8ef66dea897e6 | [
"MIT"
] | null | null | null | zxd/README.md | bbidong/maskrcnn-benchmark | b83254a1654e85ef38d73617f1a8ef66dea897e6 | [
"MIT"
] | null | null | null | zxd/README.md | bbidong/maskrcnn-benchmark | b83254a1654e85ef38d73617f1a8ef66dea897e6 | [
"MIT"
] | null | null | null | # Environment
ubuntu 18
cuda 9
python 3.6
torch 1.0.0
torchvision 0.2.0
## Errors
1. cocoapi installation fails with `fatal error: Python.h: No such file or directory`
- The current Python version is not fully installed; run
```sh
sudo apt-get install python3-dev
```
If that command fails, create the virtual environment with a different Python version instead, such as 3.6 or 3.7
2. apex installation fails with `error: expected primary-expression before 'some' token`
- Roll back to an earlier version of apex
```sh
git checkout f3a960f80244cf9e80558ab30f7f7e8cbf03c0a0
```
3. `cannot import name '_C' from 'maskrcnn_benchmark`
- setup.py in the project root was not built successfully; rebuild it
```sh
python setup.py build develop
```
4. `RuntimeError: _th_or not supported on CUDAType for Bool`
- Just replace all `torch.bool` with `torch.uint8` in file `modeling/balanced_positive_negative_sampler.py` and file `structures/segmentation_mask.py`
# References
https://www.cnblogs.com/wangyong/p/10614898.html
Jianshu: https://www.jianshu.com/p/ccf8affcdb98
Master's thesis
# Code
The cfg parameters are determined by `config/defaults.py` together with the yaml file passed in via `--config-file`
## Data
- COCO data is read through the torchvision.datasets.coco.CocoDetection base class; the boxes in the loaded target start out in (x,y,w,h) format, are later converted to (xyxy) by the BoxList class, and are fed to the model as pixel values rather than percentages
- Resize during data augmentation
Let w,h be the original image size and max,min the configured target sizes; the goal is to resize w,h while keeping the aspect ratio. First check whether, after resizing based on min, the larger side exceeds max; if it does, adjust (reduce) min accordingly, then resize based on min. Done.
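That resize rule can be sketched in a few lines of Python (my own illustration; `get_resized_size` is a hypothetical helper name, not the repo's actual function):

```python
# Sketch of the keep-aspect-ratio resize rule described above
# (illustrative only; not the actual maskrcnn-benchmark implementation).
def get_resized_size(w, h, min_size, max_size):
    short, long_ = min(w, h), max(w, h)
    size = min_size
    # If scaling the short side up to `size` would push the long side past
    # max_size, shrink the target so the long side lands on max_size instead.
    if long_ / short * size > max_size:
        size = int(round(max_size * short / long_))
    scale = size / short
    return int(round(w * scale)), int(round(h * scale))

print(get_resized_size(600, 800, 800, 1333))   # short side scaled to 800
print(get_resized_size(1000, 500, 800, 1333))  # target reduced so the long side fits max
```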
| 29.404762 | 154 | 0.740081 | yue_Hant | 0.33656 |
e59b846c68745f2ab16ba72ff7dc610bf4623f28 | 2,078 | md | Markdown | README.md | Protoncoind/ProtonCoin | af27976c103edf80435b523a4d14901e6566d1a9 | [
"MIT"
] | 25 | 2021-01-22T13:29:48.000Z | 2021-02-03T00:35:23.000Z | README.md | ingjose7/ProtonCoin | af27976c103edf80435b523a4d14901e6566d1a9 | [
"MIT"
] | null | null | null | README.md | ingjose7/ProtonCoin | af27976c103edf80435b523a4d14901e6566d1a9 | [
"MIT"
] | 3 | 2021-01-22T17:01:50.000Z | 2021-09-30T23:08:27.000Z | ProtonCoin Specifications
Currency code: PTC
Currency name: ProtonCoin
Total Coin Supply: 28,000,000 PTC
Transaction confirmations: 06
Maturity: 21
Proof-of-Work algorithm: Scrypt
What is ProtonCoin?
ProtonCoin is a new cryptocurrency that uses the Proof-of-Work (PoW) system to verify the validity of transactions and prevent double-spending of funds. Hence its decentralized character: it does not depend on any centralized institution to transfer value between the participants of the network. To be validated in this system there is no need to provide any identification document; the client of the service is only required to perform some kind of work that has a certain cost on their computer (normally a computation) and that is easily verified on the server side.
Why ProtonCoin?
Great results require great ambitions (Heraclitus)
"Proton" comes from a Greek word meaning "first". It is a subatomic particle with a positive electric charge that, together with neutrons, forms the nucleus of atoms.
Who are we addressing?
1 – Non-governmental organizations that promote environmentalist projects
2 – Companies that develop green technology.
3 – Investors who support ecological and conservation programs
4 – The ProtonCoin community
What can ProtonCoin solve?
Speed and trust:
Its decentralized character will provide speed and assurance to those who use PTC as a means of payment and financing.
Accessibility:
The integration of blockchain technology into business means that this model becomes increasingly accessible to everyone at a global level. This, of course, has not been 100% possible for many ICOs, which have had to list on exchange platforms with jurisdictional issues that have worked against their accessibility. That is why the ProtonCoin community is working on the creation of its own exchange, so as to reach every user on the planet interested in using this cryptocurrency, without exceptions.
| 56.162162 | 635 | 0.817613 | spa_Latn | 0.997709 |
e59bc57718b03d8c7ac748224879a6473f9cea8a | 5,422 | md | Markdown | _notes/subject-1/note-1.md | kennguyen01/jekyll-cornell-notes | 0e22d83270717aaa39438b4f3c40fcc0e3ce02fc | [
"MIT"
] | null | null | null | _notes/subject-1/note-1.md | kennguyen01/jekyll-cornell-notes | 0e22d83270717aaa39438b4f3c40fcc0e3ce02fc | [
"MIT"
] | null | null | null | _notes/subject-1/note-1.md | kennguyen01/jekyll-cornell-notes | 0e22d83270717aaa39438b4f3c40fcc0e3ce02fc | [
"MIT"
] | null | null | null | ---
layout: note
title: Note 1
tag: Subject 1
---
{% include links.html tag=page.tag %}
> Donec risus mi, finibus ut condimentum a, eleifend vitae ligula. Phasellus interdum elit ut augue cursus, eu tincidunt felis commodo. Nunc luctus lobortis ligula nec bibendum. Quisque congue quis lacus a vehicula. Duis bibendum egestas viverra. Ut nec blandit purus. Praesent sit amet quam sit amet nulla facilisis interdum. Donec posuere urna sed neque fermentum, ut pharetra metus tincidunt. Nullam condimentum lacus vitae tellus pulvinar, vel molestie arcu placerat.
>
> Vestibulum est lectus, placerat a tincidunt sit amet, porttitor in lacus. Integer tempor lacus ut nisi bibendum fringilla. Praesent faucibus magna elit, quis fringilla ligula rhoncus et. Nulla quis lacinia lectus. Nulla facilisi. Quisque vitae erat in quam facilisis tempor sit amet ac nisi. Suspendisse vel ultricies odio. Suspendisse potenti. Nulla eu massa libero. Morbi sit amet est at tellus maximus tempor.
## Header 2
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean varius erat nec elit ultricies, eu tincidunt libero accumsan. Integer vitae lacinia lectus, quis rutrum nibh. Pellentesque auctor augue in posuere congue. **Strong text** elementum dictum facilisis. Proin leo nunc, lobortis ac hendrerit ut, bibendum non tortor. Nam venenatis faucibus tincidunt. Suspendisse ac pulvinar tortor. Sed scelerisque pharetra consequat. Nullam eleifend risus in mi eleifend porttitor. Donec id volutpat dui, nec maximus tortor. Nullam dignissim libero ut dolor iaculis volutpat. Duis pretium lacus auctor massa elementum, nec porta massa elementum. Curabitur ac nunc et diam lacinia tempor eget at ligula. Sed ac tellus turpis.
Maecenas diam dolor, semper quis tellus sed, feugiat sollicitudin nibh. Aliquam vel facilisis nisi, et commodo dolor. Duis sit amet cursus erat, congue bibendum enim. Sed porttitor libero quis lorem dignissim aliquet. Nulla ligula dui, viverra non quam quis, sodales porttitor dui. Nullam fringilla ante cursus, eleifend purus quis, gravida nisi. Aenean eget tellus lacinia, porta orci eu, sagittis sapien. Sed semper et nunc nec euismod. Proin eleifend erat quis euismod finibus. Etiam in lacus velit.
### Header 3
Suspendisse consequat, erat quis placerat aliquam, lectus augue efficitur risus, nec vulputate erat nibh in diam. Suspendisse eu mi urna. Nulla facilisi. *Emphasized text*. Etiam vel interdum odio, sagittis mollis enim. Quisque sit amet ullamcorper dolor. Ut dui purus, lobortis sed orci vel, rhoncus vulputate orci. Pellentesque eros eros, porttitor a leo quis, facilisis rutrum lectus. Nulla sed faucibus risus.
#### Header 4
Mauris dignissim, dui sed mattis pulvinar, augue mauris fringilla mauris, at luctus leo lorem a ante. Vestibulum vel tincidunt felis. Aliquam molestie, magna vitae tincidunt venenatis, justo ligula faucibus dui, a fermentum arcu nunc ut purus. Maecenas maximus lacinia orci, quis eleifend justo tempus vitae. Ut a ligula arcu. Suspendisse bibendum velit vel enim vulputate, nec elementum quam finibus. Curabitur dapibus ligula lorem, eu eleifend enim porta ornare. Donec pulvinar enim sit amet nisi porta, a ultrices eros aliquam. Sed accumsan dapibus porta. Curabitur lobortis urna non enim blandit congue pretium ut neque. Sed porttitor luctus accumsan.
##### Header 5
Proin rutrum sapien urna, ac imperdiet lacus dictum sit amet. Phasellus at massa id felis semper hendrerit. Cras dignissim hendrerit erat vel tempus. Curabitur tempus in justo vitae ullamcorper. Suspendisse vitae augue id libero interdum sollicitudin porttitor eu sem. Nulla vel suscipit tellus, nec consectetur neque. Sed et vestibulum odio, at tempor metus. Ut purus orci, luctus et accumsan quis, imperdiet at mauris. Suspendisse potenti. Morbi vel justo enim. Nulla in elit vel nulla auctor pellentesque eget nec nisl.
Interdum et malesuada fames ac ante ipsum primis in faucibus. Vivamus gravida dapibus scelerisque. Nunc a dui mi. Integer aliquet, nulla et tincidunt scelerisque, sapien ligula condimentum ligula, accumsan vehicula lacus arcu non nisl. Morbi feugiat nunc ut turpis elementum, id cursus tellus consequat. Maecenas ut dictum velit, ut imperdiet magna. Nulla sagittis sem nisl, nec interdum dui sollicitudin eget. Nullam tristique, lectus quis tristique mattis, metus tortor ornare diam, sed aliquam turpis dolor vitae lectus. Mauris felis risus, volutpat vitae elit a, varius rutrum magna.
Vivamus egestas malesuada neque, cursus volutpat mauris sagittis non. Maecenas et aliquam justo, quis gravida sem. Mauris ac ipsum tellus. Phasellus vel fringilla mauris, eu posuere justo. Proin tempor ipsum quam, vel maximus tellus scelerisque ut. Curabitur pharetra libero non neque facilisis imperdiet. Donec vestibulum ante id erat sollicitudin finibus.
###### Header 6
Proin et molestie justo. Praesent euismod scelerisque massa, ac mattis turpis imperdiet a. Vivamus maximus sit amet elit quis fringilla. Proin aliquet dolor at laoreet feugiat. Duis ullamcorper leo diam, a faucibus nunc sagittis vitae. Pellentesque id ultricies odio, ac ultrices leo. Phasellus sodales molestie lacinia. Donec placerat augue nec purus vestibulum, non feugiat dolor eleifend. Cras viverra bibendum turpis, vel auctor est commodo nec. Vestibulum vel urna ut magna ultricies rhoncus. Sed ornare a arcu id hendrerit. Quisque in tincidunt est. Pellentesque sit amet convallis velit.
| 142.684211 | 718 | 0.808558 | hun_Latn | 0.142915 |
e59c0cc8165c583e47ace33e243d859868c1231e | 602 | markdown | Markdown | zero_editor_documentation/zeromanual/editor/editorcommands/command_list_viewer.markdown | zeroengineteam/ZeroDocs | ed780837c3f256d26da5ea9d5d780943a490f391 | [
"MIT"
] | 3 | 2019-03-13T06:29:11.000Z | 2021-09-02T11:23:09.000Z | zero_editor_documentation/zeromanual/editor/editorcommands/command_list_viewer.markdown | zeroengineteam/ZeroDocs | ed780837c3f256d26da5ea9d5d780943a490f391 | [
"MIT"
] | null | null | null | zero_editor_documentation/zeromanual/editor/editorcommands/command_list_viewer.markdown | zeroengineteam/ZeroDocs | ed780837c3f256d26da5ea9d5d780943a490f391 | [
"MIT"
] | 5 | 2019-04-02T23:51:22.000Z | 2021-06-24T15:16:55.000Z | The command list viewer is an in editor window which displays all registered engine commands, their hotkeys, and a brief description of their behavior.
The list can be accessed in the same way as any other command:
- [Command](https://github.com/zeroengineteam/ZeroDocs/blob/master/zero_editor_documentation/zeromanual/editor/editorcommands.markdown) : [CommandListViewer](https://github.com/zeroengineteam/ZeroDocs/blob/master/code_reference/command_reference.markdown#commandlistviewer)

| 46.307692 | 273 | 0.822259 | eng_Latn | 0.821329 |
e59c1ba004ac9de62d819ec4725f22690d74ce75 | 52 | md | Markdown | http_client_sep_2020/README.md | codetojoy/gists_java | 5417b1d10ebce768d3b6a995eeb917894ae13ee7 | [
"Apache-2.0"
] | 1 | 2020-04-17T00:08:06.000Z | 2020-04-17T00:08:06.000Z | java/http_client_sep_2020/README.md | codetojoy/gists | 2616f36a8c301810a88b8a9e124af442cf717263 | [
"Apache-2.0"
] | 2 | 2021-04-25T12:26:02.000Z | 2021-07-27T17:17:32.000Z | java/http_client_sep_2020/README.md | codetojoy/gists | 2616f36a8c301810a88b8a9e124af442cf717263 | [
"Apache-2.0"
] | 1 | 2018-02-27T01:32:08.000Z | 2018-02-27T01:32:08.000Z |
### Summary
* client for ~/WarO_Strategy_API_Java
| 10.4 | 37 | 0.730769 | kor_Hang | 0.502655 |
e59c59169a3d23c60863bb97e5223a9662b77786 | 4,237 | md | Markdown | tests/dummy/app/pods/docs/addons/template.md | muziejus/ember-leaflet | 96ac37f5ece8407461bce77051e4ddddc7da8074 | [
"MIT"
] | null | null | null | tests/dummy/app/pods/docs/addons/template.md | muziejus/ember-leaflet | 96ac37f5ece8407461bce77051e4ddddc7da8074 | [
"MIT"
] | null | null | null | tests/dummy/app/pods/docs/addons/template.md | muziejus/ember-leaflet | 96ac37f5ece8407461bce77051e4ddddc7da8074 | [
"MIT"
] | null | null | null | # Addons
Leaflet has many plugins and they provide very useful features for your maps.
To use them in ember-leaflet, the community created ember addons that extend ember-leaflet
functionality, usually using some of those Leaflet plugins under the hood. A list of those addons can be found
in the <DocsLink @route="addons">addons page</DocsLink>.
## Using an addon
As an example, let's take `ember-leaflet-marker-cluster` addon. You can install it by running:
```bash
ember install ember-leaflet-marker-cluster
```
This addon will register a new `<layers.marker-cluster>` component and you can use it like in the following example:
<DocsDemo as |demo|>
<demo.example @name="marker-cluster.hbs">
<LeafletMap @lat={{this.lat}} @lng={{this.lng}} @zoom={{this.zoom}} as |layers|>
<layers.tile @url="https://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png"/>
<layers.marker-cluster as |cluster|>
{{#each this.markers as |m|}}
<cluster.marker @location={{m.location}} as |marker|>
<marker.popup>
<h3>{{m.title}}</h3>
{{m.description}}
</marker.popup>
</cluster.marker>
{{/each}}
</layers.marker-cluster>
</LeafletMap>
</demo.example>
<demo.snippet @name="marker-cluster.hbs"/>
<demo.snippet @name="marker-cluster.js"/>
</DocsDemo>
## Creating an addon
### Showing up on ember-leaflet addons page
The ember-leaflet <DocsLink @route="addons">addons page</DocsLink> automatically fetches npm packages that meet
these criteria:
1. the package name must start with `ember-leaflet-`
2. it must contain the `ember-leaflet` keyword on the `package.json` file
3. the link to the repository should be correctly filled in on the `package.json` file
### Integration with `<LeafletMap>` component
It is very common for addons to add new kinds of layers, like we've just seen with
`ember-leaflet-marker-cluster`. In order for `<LeafletMap>` component to yield your custom component
you need to register it on the included ember-leaflet service in an instance initializer.
Here is an example of the instance initializer that `ember-leaflet-marker-cluster` uses
to register its `marker-cluster` component:
```js
// addon/instance-initializers/register-component.js
export function initialize(appInstance) {
// first we lookup the ember leaflet service
let emberLeafletService = appInstance.lookup('service:ember-leaflet');
// to support older versions of ember-leaflet that do not include the service, we add a guard here
if (emberLeafletService) {
// we then invoke the `registerComponent` method
emberLeafletService.registerComponent('marker-cluster-layer', { as: 'marker-cluster' });
}
}
export default {
initialize
};
```
The `registerComponent` method accepts two arguments:
- the component name
- an options object. Only the `as` option is supported at the moment. The `as` option is the name
under which the component will be yielded from the `<LeafletMap>` component.
### Yielding additional components from new components
It is usual for new components in addons to yield additional components. In the example above, there is a
new `<layers.marker-cluster>` component that yields a `<cluster.marker>` component.
If you're extending from `BaseLayer` (or another layer that extends from it), there is a special property you can
use called `componentsToYield`. This should be an array of objects that describes which components you
want to yield. You should use it like:
```js
// addon/components/marker-cluster-layer.js
componentsToYield = [
...this.componentsToYield,
{ name: 'marker-layer', as: 'marker' }
];
```
- `name` is the name of the component itself
- `as` is the key that you want to put it under in the yielded hash (the key will default to the `name` value if an empty `as` is provided)
`BaseLayer`'s template will make sure to yield these components correctly for you.
### Including the leaflet plugin
Your addon should be responsible for including the necessary leaflet plugin, either using
[ember-auto-import](https://github.com/ef4/ember-auto-import) or importing it by customizing your `index.js` file.
Please use other ember-leaflet addons as a reference.
| 36.843478 | 139 | 0.729762 | eng_Latn | 0.992247 |
e59c7674fc407f5f95e7e81c78d50f53b4d5b6cb | 932 | md | Markdown | _posts/2022-03-21-GnuTLS-Priority-String-Nedir?.md | BerkantErbey/jekyll-now | e7d216339bfbbfa705722b48079add84fb3d80ff | [
"MIT"
] | null | null | null | _posts/2022-03-21-GnuTLS-Priority-String-Nedir?.md | BerkantErbey/jekyll-now | e7d216339bfbbfa705722b48079add84fb3d80ff | [
"MIT"
] | null | null | null | _posts/2022-03-21-GnuTLS-Priority-String-Nedir?.md | BerkantErbey/jekyll-now | e7d216339bfbbfa705722b48079add84fb3d80ff | [
"MIT"
] | null | null | null | ---
published: true
---
## Introduction
* Hello! In this post we will talk about the **GnuTLS priority string**. It will be a short post. My aim in writing it is both to take notes and to share it, since I think there is little information available about it.
## What Is It??
* **GnuTLS** is a library that contains open-source implementations of the TLS, SSL and DTLS protocols.
* **gnutls-cli** is a client program used to establish a TLS connection with another computer.
* The **GnuTLS priority string**, in turn, lets us specify the algorithms and options to be used in a TLS connection in a readable layout.
* [Keywords table](https://gnutls.org/manual/html_node/Priority-Strings.html#tab_003aprio_002dkeywords)
* [Protocols and algorithms table](https://gnutls.org/manual/html_node/Priority-Strings.html#tab_003aprio_002dalgorithm)
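To make the idea concrete, here is one illustrative priority string (my own example, not taken from the tables above). It starts from the built-in `NORMAL` set, removes all protocol versions, then re-enables only TLS 1.3:

```
NORMAL:-VERS-ALL:+VERS-TLS1.3
```

Keywords are separated by `:`; a leading `+` enables an algorithm or option and `-` removes it.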
See you in the next post ;)
| 49.052632 | 195 | 0.777897 | tur_Latn | 0.998266 |
e59c8519fef22d8f02935245ab1bd496fce9a098 | 423 | md | Markdown | README.md | z-rui/stok | 2d0cd70abfb0e2e08bba410908394f1e51d9d564 | [
"BSD-2-Clause"
] | null | null | null | README.md | z-rui/stok | 2d0cd70abfb0e2e08bba410908394f1e51d9d564 | [
"BSD-2-Clause"
] | null | null | null | README.md | z-rui/stok | 2d0cd70abfb0e2e08bba410908394f1e51d9d564 | [
"BSD-2-Clause"
] | null | null | null | # stok
This is a simple string tokenizer. It splits a string into tokens, which may include:
- white spaces
- numbers
- string literals
- identifiers
- punctuations
It is very simple and highly customizable. Instead of providing configuration mechanisms, we believe that it is easier to fork this repository and modify the code. It can be used as a starter code of a full-featured tokenizer.
See `example.c` for usage.
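As a rough illustration of those token classes (a Python sketch, not this library's C API):

```python
import re

# One regex alternative per token class stok recognizes: whitespace, numbers,
# string literals, identifiers, punctuation. Illustrative only.
TOKEN_RE = re.compile(r"""
      (?P<space>\s+)
    | (?P<number>\d+(?:\.\d+)?)
    | (?P<string>"(?:\\.|[^"\\])*")
    | (?P<ident>[A-Za-z_]\w*)
    | (?P<punct>[^\w\s])
""", re.VERBOSE)

def tokenize(text):
    """Yield (kind, lexeme) pairs for every token in `text`."""
    for m in TOKEN_RE.finditer(text):
        yield m.lastgroup, m.group()

print(list(tokenize('x = 42')))
# → [('ident', 'x'), ('space', ' '), ('punct', '='), ('space', ' '), ('number', '42')]
```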
| 30.214286 | 226 | 0.780142 | eng_Latn | 0.999923 |
e59cac096b7fdf16ae57b28efa2a8879d17a7012 | 294 | md | Markdown | README.md | oxguy3/rocview | cb22741047c76b7e98f6bdf9096eb4655a67be95 | [
"MIT"
] | null | null | null | README.md | oxguy3/rocview | cb22741047c76b7e98f6bdf9096eb4655a67be95 | [
"MIT"
] | null | null | null | README.md | oxguy3/rocview | cb22741047c76b7e98f6bdf9096eb4655a67be95 | [
"MIT"
] | null | null | null | rocview
=======
There are a fair number of openly accessible webcams around University of Rochester. This is a simple project to show them all as pins on a map for easy navigation. I am no longer actively working on this project.
[MIT license](http://oxguy3.mit-license.org/)
| 42 | 229 | 0.765306 | eng_Latn | 0.997818 |
e59caf1f069560a8f67ba6a7d01a7e99619eb773 | 465 | md | Markdown | workspace/10-authentication/README.md | jballoffet/grpc-samples | 38708ba35e61d05f410d4a130960d3893d273a5e | [
"Apache-2.0"
] | null | null | null | workspace/10-authentication/README.md | jballoffet/grpc-samples | 38708ba35e61d05f410d4a130960d3893d273a5e | [
"Apache-2.0"
] | null | null | null | workspace/10-authentication/README.md | jballoffet/grpc-samples | 38708ba35e61d05f410d4a130960d3893d273a5e | [
"Apache-2.0"
] | null | null | null | # Authentication sample app
## Building
To build the application, taking `{REPO_PATH}` as the base repository path, run the following:
```bash
cd `{REPO_PATH}`/workspace/10-authentication
bazel build :greeter_server
bazel build :greeter_client
```
## Running
To run the application, taking `{REPO_PATH}` as the base repository path, run the following:
```bash
cd `{REPO_PATH}`/workspace/10-authentication
bazel run :greeter_server
bazel run :greeter_client
```
| 23.25 | 94 | 0.765591 | eng_Latn | 0.91879 |
e59d768eb73554ac977a92ca6928b091f5595983 | 1,483 | md | Markdown | Notes/Opposite Category.md | zhaoshenzhai/MathWiki | 51caa9c2ec6c0463589ebb3b4d94b46baea0fdb7 | [
"MIT"
] | 1 | 2022-02-19T13:15:58.000Z | 2022-02-19T13:15:58.000Z | Notes/Opposite Category.md | zhaoshenzhai/MathWiki | 51caa9c2ec6c0463589ebb3b4d94b46baea0fdb7 | [
"MIT"
] | null | null | null | Notes/Opposite Category.md | zhaoshenzhai/MathWiki | 51caa9c2ec6c0463589ebb3b4d94b46baea0fdb7 | [
"MIT"
] | null | null | null | <br />
<br />
Date Created: 22/02/2022 12:14:26
Tags: #Definition #Closed
Types: _Not Applicable_
Examples: _Not Applicable_
Constructions: _Not Applicable_
Generalizations: _Not Applicable_
Properties: _Not Applicable_
Sufficiencies: _Not Applicable_
Equivalences: _Not Applicable_
Justifications: [[Opposite category is a category]]
``` ad-Definition
title: Definition.
_Let $\cat{C}$ be a category. The **opposite category of $\cat{C}$** is the category $\cat{C}^\textrm{op}$ whose objects are $\cat{C}$-objects and whose morphisms are $\cat{C}$-morphisms with arrows reversed. Formally,_
* $\obj\l(\cat{C}^\textrm{op}\r)\coloneqq\obj\l(\cat{C}\r)$,
* _for all_ $X,Y\in\obj\l(\cat{C}^\textrm{op}\r)$_,_ $\hom_{\cat{C}^\textrm{op}}\!\l(X,Y\r)\coloneqq\hom_\cat{C}\!\l(Y,X\r)$_,_
* _for all_ $X\in\obj\l(\cat{C}^\textrm{op}\r)$_, the $\cat{C}^\textrm{op}$-identity on $X$ is the $\cat{C}$-identity on $X$, and_
* _for all $X,Y,Z\in\obj\l(\cat{C}^\textrm{op}\r)$, the composition function is_
$$\begin{equation}
\begin{aligned}
\ast:\hom_{\cat{C}^\textrm{op}}\!\l(X,Y\r)\times\hom_{\cat{C}^\textrm{op}}\!\l(Y,Z\r)&\to\hom_{\cat{C}^\textrm{op}}\!\l(X,Z\r)\\
\tpl{f,g}&\mapsto\ast\l(f,g\r)\eqqcolon g\ast f\coloneqq f\circ g
\end{aligned}
\end{equation}$$
_where $f\circ g$ is the $\cat{C}$-composite of $f$ after $g$._
```
**Remark.** It is obvious that $\l(\cat{C}^\textrm{op}\r)^\textrm{op}=\cat{C}$.<span style="float:right;">$\blacklozenge$</span>
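Indeed, the composition rule is well-typed: since

$$f\in\hom_{\cat{C}^\textrm{op}}\!\l(X,Y\r)=\hom_\cat{C}\!\l(Y,X\r)\quad\textrm{and}\quad g\in\hom_{\cat{C}^\textrm{op}}\!\l(Y,Z\r)=\hom_\cat{C}\!\l(Z,Y\r),$$

the $\cat{C}$-composite $f\circ g$ is a morphism $Z\to X$ in $\cat{C}$, that is, an element of $\hom_\cat{C}\!\l(Z,X\r)=\hom_{\cat{C}^\textrm{op}}\!\l(X,Z\r)$, as required.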
| 41.194444 | 219 | 0.658125 | eng_Latn | 0.448785 |
e59d97478fba6b03aa1a76130745d8f6ad708f98 | 30 | md | Markdown | README.md | devKiratu/my-flask-app | 6def1f9699bc841ec2fc9420e6abd6dc0466f7e0 | [
"MIT"
] | null | null | null | README.md | devKiratu/my-flask-app | 6def1f9699bc841ec2fc9420e6abd6dc0466f7e0 | [
"MIT"
] | null | null | null | README.md | devKiratu/my-flask-app | 6def1f9699bc841ec2fc9420e6abd6dc0466f7e0 | [
"MIT"
] | null | null | null | # my-flask-app
learning flask
| 10 | 14 | 0.766667 | eng_Latn | 0.420077 |
e59dafee95cf7c57f3a370a9603bc75823eead76 | 45 | md | Markdown | README.md | emilniklas/vue-dart | 3ab441354f4ff6fd6f837e84d07ad920ec28f490 | [
"MIT"
] | 6 | 2015-07-19T16:35:21.000Z | 2017-06-07T18:53:27.000Z | README.md | emilniklas/vue-dart | 3ab441354f4ff6fd6f837e84d07ad920ec28f490 | [
"MIT"
] | 2 | 2015-12-02T09:55:03.000Z | 2017-12-11T16:20:37.000Z | README.md | emilniklas/vue-dart | 3ab441354f4ff6fd6f837e84d07ad920ec28f490 | [
"MIT"
] | null | null | null | # Vue.dart
Vue.js through Dart's JS interop. | 15 | 33 | 0.733333 | kor_Hang | 0.508048 |
e59e906778fff0f4d0b58030377c207bc5e25bdb | 2,248 | md | Markdown | articles/active-directory/active-directory-b2b-current-limitations.md | ebibibi/azure-docs.ja-jp | ef49e9766dd3dc5a825ea3b19b9c526e2c37faba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-b2b-current-limitations.md | ebibibi/azure-docs.ja-jp | ef49e9766dd3dc5a825ea3b19b9c526e2c37faba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-b2b-current-limitations.md | ebibibi/azure-docs.ja-jp | ef49e9766dd3dc5a825ea3b19b9c526e2c37faba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Limitations of Azure Active Directory B2B collaboration | Microsoft Docs"
description: "Current limitations of Azure Active Directory B2B collaboration"
services: active-directory
documentationcenter:
author: sasubram
manager: mtillman
editor:
tags:
ms.assetid:
ms.service: active-directory
ms.devlang: NA
ms.topic: article
ms.tgt_pltfrm: NA
ms.workload: identity
ms.date: 05/23/2017
ms.author: sasubram
ms.openlocfilehash: 2136015eeb3d8ccfc58bf7a94b423144b9aed52e
ms.sourcegitcommit: e266df9f97d04acfc4a843770fadfd8edf4fa2b7
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 12/11/2017
---
# <a name="limitations-of-azure-ad-b2b-collaboration"></a>Limitations of Azure AD B2B collaboration
Azure Active Directory (Azure AD) B2B collaboration is currently subject to the limitations described in this article.
## <a name="possible-double-multi-factor-authentication"></a>Possible double multi-factor authentication
With Azure AD B2B, you can enforce multi-factor authentication at the resource organization (the inviting organization). For the reasons behind this approach, see [Conditional access for B2B collaboration users](active-directory-b2b-mfa-instructions.md). If multi-factor authentication is already set up and enforced on the partner side, the partner's users may need to authenticate both in their home organization and in the inviting organization.
## <a name="instant-on"></a>Instant-on
In B2B collaboration flows, users are added to the directory and dynamically updated during invitation redemption, app assignment, and so on. The updates and writes ordinarily happen in one directory instance and must be replicated across all instances. Replication is completed once all instances are updated. Sometimes, when an object is written or updated in one instance and the call to retrieve that object goes to another instance, replication latencies can occur. In that case, a refresh or retry may help. If you are writing an app using the API, retry with back-off is the recommended defensive practice to mitigate this issue.
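As a generic illustration of that back-off retry pattern (a sketch only, not Microsoft Graph-specific code):

```python
import random
import time

def retry_with_backoff(fn, attempts=5, base_delay=0.1):
    """Call fn(), retrying with exponential back-off plus jitter on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # wait base_delay * 2^attempt, with up to 100% random jitter
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

A call that fails due to replication latency is retried a few times with growing pauses, which gives the directory time to converge.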
## <a name="next-steps"></a>Next steps
See the other articles about Azure AD B2B collaboration:
* [What is Azure AD B2B collaboration?](active-directory-b2b-what-is-azure-ad-b2b.md)
* [B2B collaboration user properties](active-directory-b2b-user-properties.md)
* [Adding a B2B collaboration user to a role](active-directory-b2b-add-guest-to-role.md)
* [Delegating B2B collaboration invitations](active-directory-b2b-delegate-invitations.md)
* [Dynamic groups and B2B collaboration](active-directory-b2b-dynamic-groups.md)
* [B2B collaboration code and PowerShell samples](active-directory-b2b-code-samples.md)
* [Configuring SaaS apps for B2B collaboration](active-directory-b2b-configure-saas-apps.md)
* [B2B collaboration user tokens](active-directory-b2b-user-token.md)
* [B2B collaboration user claims mapping](active-directory-b2b-claims-mapping.md)
* [Office 365 external sharing](active-directory-b2b-o365-external-user.md)
| 47.829787 | 340 | 0.810943 | yue_Hant | 0.392199 |
e59e946be46f6b4ec748ceae32a35fd5f5f3b5b0 | 356 | md | Markdown | .github/SECURITY.md | jaredbain/digitalcorps.gsa.gov | 95ee7da6b8f334497c53e2256695411e35d9fc94 | [
"CC0-1.0"
] | 4 | 2021-06-25T01:57:26.000Z | 2021-12-09T16:52:15.000Z | .github/SECURITY.md | jaredbain/digitalcorps.gsa.gov | 95ee7da6b8f334497c53e2256695411e35d9fc94 | [
"CC0-1.0"
] | 98 | 2021-07-09T03:48:50.000Z | 2022-03-14T20:55:19.000Z | .github/SECURITY.md | jaredbain/digitalcorps.gsa.gov | 95ee7da6b8f334497c53e2256695411e35d9fc94 | [
"CC0-1.0"
] | 1 | 2021-12-15T19:48:28.000Z | 2021-12-15T19:48:28.000Z | # Security Policy
See GSA's [Privacy and Security Policy](https://www.gsa.gov/website-information/website-policies#privacy)
## Reporting a Vulnerability
digitalcorps.gsa.gov is a participant in GSA's Vulnerability Disclosure program.
For more information, see GSA's [Vulnerability Disclosure Policy](https://www.gsa.gov/vulnerability-disclosure-policy)
| 39.555556 | 118 | 0.80618 | eng_Latn | 0.804877 |
e59e9deb2da5506ee267fbe3f144e86d9d797a0d | 136 | md | Markdown | c++ problems asked in interview/README.md | rj011/Hacktoberfest2021-4 | 0aa981d4ba5e71c86cc162d34fe57814050064c2 | [
"MIT"
] | 41 | 2021-10-03T16:03:52.000Z | 2021-11-14T18:15:33.000Z | c++ problems asked in interview/README.md | rj011/Hacktoberfest2021-4 | 0aa981d4ba5e71c86cc162d34fe57814050064c2 | [
"MIT"
] | 175 | 2021-10-03T10:47:31.000Z | 2021-10-20T11:55:32.000Z | c++ problems asked in interview/README.md | rj011/Hacktoberfest2021-4 | 0aa981d4ba5e71c86cc162d34fe57814050064c2 | [
"MIT"
] | 208 | 2021-10-03T11:24:04.000Z | 2021-10-31T17:27:59.000Z | # CODE{ON}FEST-Program
I am a participant at Code{On}Fest Program.
Here I will upload my code for the questions I solve each day.
| 27.2 | 64 | 0.735294 | eng_Latn | 0.982664 |
e59eb745c02d03adbdd613da78843d02f4143051 | 1,704 | md | Markdown | readme.md | leekafai/singleflight | 1dad49666429c01b7f4cd5ff52bdae2069136847 | [
"MIT"
] | null | null | null | readme.md | leekafai/singleflight | 1dad49666429c01b7f4cd5ff52bdae2069136847 | [
"MIT"
] | null | null | null | readme.md | leekafai/singleflight | 1dad49666429c01b7f4cd5ff52bdae2069136847 | [
"MIT"
] | null | null | null | Golang singleflight in Node
---
provides a duplicate function call suppression mechanism.
当在并发中需要异步获取相同的响应数据时,可以使用 `singleFlight` 减少实际发出的异步请求数量。
例如在一个 HTTP 服务中的接口 `/getACombo` 中,需要返回一个缓存于 redis 的数据。
当多个客户端并发请求 `/getACombo` 接口,并传递相同的参数时,期望中返回的数据是一致且可以共用的。
此时,在不使用 `singleFlight` 的情况下,每一个请求都会发出一个 redis 指令;
使用 `singleFlight` 后,N 个期望返回数据相同的请求,将共用一个 redis 指令执行的结果,可以大幅减少发出的 redis 指令,大大减少了 redis 的指令执行数量、网络传输量。
[singleflight](https://pkg.go.dev/golang.org/x/sync/singleflight)
### 安装
```shell
npm i @9f/singleflight -S
```
### 使用案例
```javascript
const singleFlight = require('@9f/singleflight')
let remoteGetRealTimes = 0 // 异步结果返回的次数
let concurrentTimes = 0 // 实际触发异步请求响应的次数
const wait = 2e3
// 模拟异步请求的响应
const remote = {
get: (comboName) => {
return new Promise((resolve) => {
setTimeout(() => {
resolve({ A: ['炒饭', '冰红茶'], B: ['拉面', '奶茶'] }[comboName])
remoteGetRealTimes++ // 记录实际触发异步请求响应的次数
}, wait)
})
}
}
// 模拟一个异步请求
const getACombo = async (comboName) => {
const r = await remote.get(comboName)
return r
}
// 封装后的方法,传参方式异于原来的方法:首位传参为一个 string 的 key
const _getACombo = singleFlight(getACombo)
// 模拟在一个 http 服务中,多个请求通过接口调用 service,多次触发 getACombo 的调用
for (let i = 0; i < 2e3; i++) {
// 传递一个 key 作为调用标识。相同的调用标识,代表共用响应结果。
// 有需要的话,可以在 key 之后传递参数。但相同 key 应传递相同参数,因为响应结果需要共用。
// 获取 A 套餐
_getACombo('user', 'A').then((x) => {
// 多次调用,返回相同的结果
concurrentTimes++ // 记录异步结果返回的次数
console.log(x) // 套餐内容
}).catch((e) => {
console.error(e) // 可以使用正常的错误捕捉
})
}
setTimeout(() => {
console.log('并发调用 getFruitCache 次数', concurrentTimes) // 1e3
console.log('实际调用 remote.get 次数', remoteGetRealTimes) // 远小于 1e3
}, wait + 200)
``` | 24 | 100 | 0.677817 | yue_Hant | 0.573394 |
e59eb9ea28eb6ec8efdddf55d9ac07ecd3e8d361 | 16 | md | Markdown | README.md | markthethomas/todocrud-node | dfb400f6035571747bee3a5046e6abd05bdda0b0 | [
"MIT"
] | null | null | null | README.md | markthethomas/todocrud-node | dfb400f6035571747bee3a5046e6abd05bdda0b0 | [
"MIT"
] | null | null | null | README.md | markthethomas/todocrud-node | dfb400f6035571747bee3a5046e6abd05bdda0b0 | [
"MIT"
] | null | null | null | # todocrud-node
| 8 | 15 | 0.75 | vie_Latn | 0.340151 |
e59f8095334414a6ba5a5788aecefd87fd52cbcc | 227 | md | Markdown | _posts/1995-03-24-a-small-oilslick-threatens-the.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1995-03-24-a-small-oilslick-threatens-the.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1995-03-24-a-small-oilslick-threatens-the.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | ---
title: A small oilslick threatens the
tags:
- Mar 1995
---
A small oilslick threatens the coral reef at Looe Key.
Newspapers: **Miami Morning News or The Miami Herald**
Page: **7**, Section: **B**
| 18.916667 | 56 | 0.621145 | eng_Latn | 0.847095 |
e5a0245fefb61bb11ce38abd3d92e602fc5627d0 | 144,292 | md | Markdown | docs/typescript.md | rocaccion/cdk8s-plus | 4fed38590dc48da6cfafa1d02380fbf2f921c1eb | [
"Apache-2.0"
] | null | null | null | docs/typescript.md | rocaccion/cdk8s-plus | 4fed38590dc48da6cfafa1d02380fbf2f921c1eb | [
"Apache-2.0"
] | null | null | null | docs/typescript.md | rocaccion/cdk8s-plus | 4fed38590dc48da6cfafa1d02380fbf2f921c1eb | [
"Apache-2.0"
] | null | null | null | # API Reference <a name="API Reference"></a>
## Constructs <a name="Constructs"></a>
### ConfigMap <a name="cdk8s-plus-22.ConfigMap"></a>
- *Implements:* [`cdk8s-plus-22.IConfigMap`](#cdk8s-plus-22.IConfigMap)
ConfigMap holds configuration data for pods to consume.
#### Initializers <a name="cdk8s-plus-22.ConfigMap.Initializer"></a>
```typescript
import { ConfigMap } from 'cdk8s-plus-22'
new ConfigMap(scope: Construct, id: string, props?: ConfigMapProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.ConfigMapProps`](#cdk8s-plus-22.ConfigMapProps)
---
#### Methods <a name="Methods"></a>
##### `addBinaryData` <a name="cdk8s-plus-22.ConfigMap.addBinaryData"></a>
```typescript
public addBinaryData(key: string, value: string)
```
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.key"></a>
- *Type:* `string`
The key.
---
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.value"></a>
- *Type:* `string`
The value.
---
##### `addData` <a name="cdk8s-plus-22.ConfigMap.addData"></a>
```typescript
public addData(key: string, value: string)
```
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.key"></a>
- *Type:* `string`
The key.
---
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.value"></a>
- *Type:* `string`
The value.
---
##### `addDirectory` <a name="cdk8s-plus-22.ConfigMap.addDirectory"></a>
```typescript
public addDirectory(localDir: string, options?: AddDirectoryOptions)
```
###### `localDir`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.localDir"></a>
- *Type:* `string`
A path to a local directory.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.AddDirectoryOptions`](#cdk8s-plus-22.AddDirectoryOptions)
Options.
---
##### `addFile` <a name="cdk8s-plus-22.ConfigMap.addFile"></a>
```typescript
public addFile(localFile: string, key?: string)
```
###### `localFile`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.localFile"></a>
- *Type:* `string`
The path to the local file.
---
###### `key`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.key"></a>
- *Type:* `string`
The ConfigMap key (default to the file name).
---
#### Static Functions <a name="Static Functions"></a>
##### `fromConfigMapName` <a name="cdk8s-plus-22.ConfigMap.fromConfigMapName"></a>
```typescript
import { ConfigMap } from 'cdk8s-plus-22'
ConfigMap.fromConfigMapName(name: string)
```
###### `name`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.parameter.name"></a>
- *Type:* `string`
The name of the config map to import.
---
#### Properties <a name="Properties"></a>
##### `binaryData`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.property.binaryData"></a>
```typescript
public readonly binaryData: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
The binary data associated with this config map.
Returns a copy. To add data records, use `addBinaryData()` or `addData()`.
---
##### `data`<sup>Required</sup> <a name="cdk8s-plus-22.ConfigMap.property.data"></a>
```typescript
public readonly data: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
The data associated with this config map.
Returns a copy. To add data records, use `addData()` or `addBinaryData()`.
---
### Deployment <a name="cdk8s-plus-22.Deployment"></a>
- *Implements:* [`cdk8s-plus-22.IPodTemplate`](#cdk8s-plus-22.IPodTemplate)
A Deployment provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment, and the Deployment Controller changes the actual
state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove
existing Deployments and adopt all their resources with new Deployments.
> Note: Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
Use Case
---------
The following are typical use cases for Deployments:
- Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background.
Check the status of the rollout to see if it succeeds or not.
- Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment.
A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate.
Each new ReplicaSet updates the revision of the Deployment.
- Rollback to an earlier Deployment revision if the current state of the Deployment is not stable.
Each rollback updates the revision of the Deployment.
- Scale up the Deployment to facilitate more load.
- Pause the Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
- Use the status of the Deployment as an indicator that a rollout has stuck.
- Clean up older ReplicaSets that you don't need anymore.
#### Initializers <a name="cdk8s-plus-22.Deployment.Initializer"></a>
```typescript
import { Deployment } from 'cdk8s-plus-22'
new Deployment(scope: Construct, id: string, props?: DeploymentProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.Deployment.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.DeploymentProps`](#cdk8s-plus-22.DeploymentProps)
---
#### Methods <a name="Methods"></a>
##### `addContainer` <a name="cdk8s-plus-22.Deployment.addContainer"></a>
```typescript
public addContainer(container: ContainerProps)
```
###### `container`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.parameter.container"></a>
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)
---
##### `addVolume` <a name="cdk8s-plus-22.Deployment.addVolume"></a>
```typescript
public addVolume(volume: Volume)
```
###### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.parameter.volume"></a>
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
---
##### `exposeViaIngress` <a name="cdk8s-plus-22.Deployment.exposeViaIngress"></a>
```typescript
public exposeViaIngress(path: string, options?: ExposeDeploymentViaIngressOptions)
```
###### `path`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.parameter.path"></a>
- *Type:* `string`
The ingress path to register under.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Deployment.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.ExposeDeploymentViaIngressOptions`](#cdk8s-plus-22.ExposeDeploymentViaIngressOptions)
Additional options.
---
##### `exposeViaService` <a name="cdk8s-plus-22.Deployment.exposeViaService"></a>
```typescript
public exposeViaService(options?: ExposeDeploymentViaServiceOptions)
```
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Deployment.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.ExposeDeploymentViaServiceOptions`](#cdk8s-plus-22.ExposeDeploymentViaServiceOptions)
Options to determine details of the service and port exposed.
---
##### `selectByLabel` <a name="cdk8s-plus-22.Deployment.selectByLabel"></a>
```typescript
public selectByLabel(key: string, value: string)
```
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.parameter.key"></a>
- *Type:* `string`
The label key.
---
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.parameter.value"></a>
- *Type:* `string`
The label value.
---
#### Properties <a name="Properties"></a>
##### `containers`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.property.containers"></a>
```typescript
public readonly containers: Container[];
```
- *Type:* [`cdk8s-plus-22.Container`](#cdk8s-plus-22.Container)[]
The containers belonging to the pod.
Use `addContainer` to add containers.
---
##### `labelSelector`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.property.labelSelector"></a>
```typescript
public readonly labelSelector: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
The labels this deployment will match against in order to select pods.
Returns a copy. Use `selectByLabel()` to add labels.
---
##### `podMetadata`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadataDefinition;
```
- *Type:* [`cdk8s.ApiObjectMetadataDefinition`](#cdk8s.ApiObjectMetadataDefinition)
Provides read/write access to the underlying pod metadata of the resource.
---
##### `replicas`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.property.replicas"></a>
```typescript
public readonly replicas: number;
```
- *Type:* `number`
Number of desired pods.
---
##### `volumes`<sup>Required</sup> <a name="cdk8s-plus-22.Deployment.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
The volumes associated with this pod.
Use `addVolume` to add volumes.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.Deployment.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
Restart policy for all containers within the pod.
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.Deployment.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
The service account used to run this pod.
---
### Ingress <a name="cdk8s-plus-22.Ingress"></a>
Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend.
An Ingress can be configured to give services
externally-reachable URLs, load balance traffic, terminate SSL, offer
name-based virtual hosting, etc.
#### Initializers <a name="cdk8s-plus-22.Ingress.Initializer"></a>
```typescript
import { Ingress } from 'cdk8s-plus-22'
new Ingress(scope: Construct, id: string, props?: IngressProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.Ingress.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.IngressProps`](#cdk8s-plus-22.IngressProps)
---
#### Methods <a name="Methods"></a>
##### `addDefaultBackend` <a name="cdk8s-plus-22.Ingress.addDefaultBackend"></a>
```typescript
public addDefaultBackend(backend: IngressBackend)
```
###### `backend`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.backend"></a>
- *Type:* [`cdk8s-plus-22.IngressBackend`](#cdk8s-plus-22.IngressBackend)
The backend to use for requests that do not match any rule.
---
##### `addHostDefaultBackend` <a name="cdk8s-plus-22.Ingress.addHostDefaultBackend"></a>
```typescript
public addHostDefaultBackend(host: string, backend: IngressBackend)
```
###### `host`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.host"></a>
- *Type:* `string`
The host name to match.
---
###### `backend`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.backend"></a>
- *Type:* [`cdk8s-plus-22.IngressBackend`](#cdk8s-plus-22.IngressBackend)
The backend to route to.
---
##### `addHostRule` <a name="cdk8s-plus-22.Ingress.addHostRule"></a>
```typescript
public addHostRule(host: string, path: string, backend: IngressBackend, pathType?: HttpIngressPathType)
```
###### `host`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.host"></a>
- *Type:* `string`
The host name.
---
###### `path`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.path"></a>
- *Type:* `string`
The HTTP path.
---
###### `backend`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.backend"></a>
- *Type:* [`cdk8s-plus-22.IngressBackend`](#cdk8s-plus-22.IngressBackend)
The backend to route requests to.
---
###### `pathType`<sup>Optional</sup> <a name="cdk8s-plus-22.Ingress.parameter.pathType"></a>
- *Type:* [`cdk8s-plus-22.HttpIngressPathType`](#cdk8s-plus-22.HttpIngressPathType)
How the path is matched against request paths.
---
##### `addRule` <a name="cdk8s-plus-22.Ingress.addRule"></a>
```typescript
public addRule(path: string, backend: IngressBackend, pathType?: HttpIngressPathType)
```
###### `path`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.path"></a>
- *Type:* `string`
The HTTP path.
---
###### `backend`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.backend"></a>
- *Type:* [`cdk8s-plus-22.IngressBackend`](#cdk8s-plus-22.IngressBackend)
The backend to route requests to.
---
###### `pathType`<sup>Optional</sup> <a name="cdk8s-plus-22.Ingress.parameter.pathType"></a>
- *Type:* [`cdk8s-plus-22.HttpIngressPathType`](#cdk8s-plus-22.HttpIngressPathType)
How the path is matched against request paths.
---
##### `addRules` <a name="cdk8s-plus-22.Ingress.addRules"></a>
```typescript
public addRules(rules: IngressRule)
```
###### `rules`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.rules"></a>
- *Type:* [`cdk8s-plus-22.IngressRule`](#cdk8s-plus-22.IngressRule)
The rules to add.
---
##### `addTls` <a name="cdk8s-plus-22.Ingress.addTls"></a>
```typescript
public addTls(tls: IngressTls[])
```
###### `tls`<sup>Required</sup> <a name="cdk8s-plus-22.Ingress.parameter.tls"></a>
- *Type:* [`cdk8s-plus-22.IngressTls`](#cdk8s-plus-22.IngressTls)[]
---
### Job <a name="cdk8s-plus-22.Job"></a>
- *Implements:* [`cdk8s-plus-22.IPodTemplate`](#cdk8s-plus-22.IPodTemplate)
A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
As pods successfully complete,
the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete.
Deleting a Job will clean up the Pods it created. A simple case is to create one Job object in order to reliably run one Pod to completion.
The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.
#### Initializers <a name="cdk8s-plus-22.Job.Initializer"></a>
```typescript
import { Job } from 'cdk8s-plus-22'
new Job(scope: Construct, id: string, props?: JobProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.Job.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.Job.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.Job.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.JobProps`](#cdk8s-plus-22.JobProps)
---
#### Methods <a name="Methods"></a>
##### `addContainer` <a name="cdk8s-plus-22.Job.addContainer"></a>
```typescript
public addContainer(container: ContainerProps)
```
###### `container`<sup>Required</sup> <a name="cdk8s-plus-22.Job.parameter.container"></a>
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)
---
##### `addVolume` <a name="cdk8s-plus-22.Job.addVolume"></a>
```typescript
public addVolume(volume: Volume)
```
###### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.Job.parameter.volume"></a>
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
---
#### Properties <a name="Properties"></a>
##### `containers`<sup>Required</sup> <a name="cdk8s-plus-22.Job.property.containers"></a>
```typescript
public readonly containers: Container[];
```
- *Type:* [`cdk8s-plus-22.Container`](#cdk8s-plus-22.Container)[]
The containers belonging to the pod.
Use `addContainer` to add containers.
---
##### `podMetadata`<sup>Required</sup> <a name="cdk8s-plus-22.Job.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadataDefinition;
```
- *Type:* [`cdk8s.ApiObjectMetadataDefinition`](#cdk8s.ApiObjectMetadataDefinition)
Provides read/write access to the underlying pod metadata of the resource.
---
##### `volumes`<sup>Required</sup> <a name="cdk8s-plus-22.Job.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
The volumes associated with this pod.
Use `addVolume` to add volumes.
---
##### `activeDeadline`<sup>Optional</sup> <a name="cdk8s-plus-22.Job.property.activeDeadline"></a>
```typescript
public readonly activeDeadline: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
Duration before job is terminated.
If undefined, there is no deadline.
---
##### `backoffLimit`<sup>Optional</sup> <a name="cdk8s-plus-22.Job.property.backoffLimit"></a>
```typescript
public readonly backoffLimit: number;
```
- *Type:* `number`
Number of retries before marking failed.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.Job.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
Restart policy for all containers within the pod.
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.Job.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
The service account used to run this pod.
---
##### `ttlAfterFinished`<sup>Optional</sup> <a name="cdk8s-plus-22.Job.property.ttlAfterFinished"></a>
```typescript
public readonly ttlAfterFinished: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
TTL before the job is deleted after it is finished.
---
### Pod <a name="cdk8s-plus-22.Pod"></a>
- *Implements:* [`cdk8s-plus-22.IPodSpec`](#cdk8s-plus-22.IPodSpec)
Pod is a collection of containers that can run on a host.
This resource is
created by clients and scheduled onto hosts.
#### Initializers <a name="cdk8s-plus-22.Pod.Initializer"></a>
```typescript
import { Pod } from 'cdk8s-plus-22'
new Pod(scope: Construct, id: string, props?: PodProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.Pod.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.Pod.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.Pod.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.PodProps`](#cdk8s-plus-22.PodProps)
---
#### Methods <a name="Methods"></a>
##### `addContainer` <a name="cdk8s-plus-22.Pod.addContainer"></a>
```typescript
public addContainer(container: ContainerProps)
```
###### `container`<sup>Required</sup> <a name="cdk8s-plus-22.Pod.parameter.container"></a>
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)
---
##### `addVolume` <a name="cdk8s-plus-22.Pod.addVolume"></a>
```typescript
public addVolume(volume: Volume)
```
###### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.Pod.parameter.volume"></a>
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
---
#### Properties <a name="Properties"></a>
##### `containers`<sup>Required</sup> <a name="cdk8s-plus-22.Pod.property.containers"></a>
```typescript
public readonly containers: Container[];
```
- *Type:* [`cdk8s-plus-22.Container`](#cdk8s-plus-22.Container)[]
The containers belonging to the pod.
Use `addContainer` to add containers.
---
##### `volumes`<sup>Required</sup> <a name="cdk8s-plus-22.Pod.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
The volumes associated with this pod.
Use `addVolume` to add volumes.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.Pod.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
Restart policy for all containers within the pod.
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.Pod.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
The service account used to run this pod.
---
### Resource <a name="cdk8s-plus-22.Resource"></a>
- *Implements:* [`cdk8s-plus-22.IResource`](#cdk8s-plus-22.IResource)
Base class for all Kubernetes objects in cdk8s-plus.
Represents a single resource.
#### Initializers <a name="cdk8s-plus-22.Resource.Initializer"></a>
```typescript
import { Resource } from 'cdk8s-plus-22'
new Resource(scope: Construct, id: string, options?: ConstructOptions)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.Resource.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
The scope in which to define this construct.
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.Resource.parameter.id"></a>
- *Type:* `string`
The scoped construct ID.
Must be unique amongst siblings. If
the ID includes a path separator (`/`), then it will be replaced by double
dash `--`.
---
##### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Resource.parameter.options"></a>
- *Type:* [`constructs.ConstructOptions`](#constructs.ConstructOptions)
Options.
---
#### Properties <a name="Properties"></a>
##### `metadata`<sup>Required</sup> <a name="cdk8s-plus-22.Resource.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadataDefinition;
```
- *Type:* [`cdk8s.ApiObjectMetadataDefinition`](#cdk8s.ApiObjectMetadataDefinition)
---
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.Resource.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The name of this API object.
---
### Secret <a name="cdk8s-plus-22.Secret"></a>
- *Implements:* [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)
Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys.
Storing confidential information in a
Secret is safer and more flexible than putting it verbatim in a Pod
definition or in a container image.
> https://kubernetes.io/docs/concepts/configuration/secret
#### Initializers <a name="cdk8s-plus-22.Secret.Initializer"></a>
```typescript
import { Secret } from 'cdk8s-plus-22'
new Secret(scope: Construct, id: string, props?: SecretProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.Secret.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.Secret.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.Secret.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.SecretProps`](#cdk8s-plus-22.SecretProps)
---
#### Methods <a name="Methods"></a>
##### `addStringData` <a name="cdk8s-plus-22.Secret.addStringData"></a>
```typescript
public addStringData(key: string, value: string)
```
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.Secret.parameter.key"></a>
- *Type:* `string`
Key.
---
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.Secret.parameter.value"></a>
- *Type:* `string`
Value.
---
##### `getStringData` <a name="cdk8s-plus-22.Secret.getStringData"></a>
```typescript
public getStringData(key: string)
```
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.Secret.parameter.key"></a>
- *Type:* `string`
Key.
---
#### Static Functions <a name="Static Functions"></a>
##### `fromSecretName` <a name="cdk8s-plus-22.Secret.fromSecretName"></a>
```typescript
import { Secret } from 'cdk8s-plus-22'
Secret.fromSecretName(name: string)
```
###### `name`<sup>Required</sup> <a name="cdk8s-plus-22.Secret.parameter.name"></a>
- *Type:* `string`
The name of the secret to reference.
---
### Service <a name="cdk8s-plus-22.Service"></a>
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism.
Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use.
While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that,
nor should they need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
If you're able to use Kubernetes APIs for service discovery in your application, you can query the API server for Endpoints,
that get updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers ways to place a network port
or load balancer in between your application and the backend Pods.
#### Initializers <a name="cdk8s-plus-22.Service.Initializer"></a>
```typescript
import { Service } from 'cdk8s-plus-22'
new Service(scope: Construct, id: string, props?: ServiceProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.Service.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.Service.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.Service.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.ServiceProps`](#cdk8s-plus-22.ServiceProps)
---
#### Methods <a name="Methods"></a>
##### `addDeployment` <a name="cdk8s-plus-22.Service.addDeployment"></a>
```typescript
public addDeployment(deployment: Deployment, options?: AddDeploymentOptions)
```
###### `deployment`<sup>Required</sup> <a name="cdk8s-plus-22.Service.parameter.deployment"></a>
- *Type:* [`cdk8s-plus-22.Deployment`](#cdk8s-plus-22.Deployment)
The deployment to expose.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Service.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.AddDeploymentOptions`](#cdk8s-plus-22.AddDeploymentOptions)
Optional settings for the port.
---
##### `addSelector` <a name="cdk8s-plus-22.Service.addSelector"></a>
```typescript
public addSelector(label: string, value: string)
```
###### `label`<sup>Required</sup> <a name="cdk8s-plus-22.Service.parameter.label"></a>
- *Type:* `string`
The label key.
---
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.Service.parameter.value"></a>
- *Type:* `string`
The label value.
---
##### `exposeViaIngress` <a name="cdk8s-plus-22.Service.exposeViaIngress"></a>
```typescript
public exposeViaIngress(path: string, options?: ExposeServiceViaIngressOptions)
```
###### `path`<sup>Required</sup> <a name="cdk8s-plus-22.Service.parameter.path"></a>
- *Type:* `string`
The path to expose the service under.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Service.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.ExposeServiceViaIngressOptions`](#cdk8s-plus-22.ExposeServiceViaIngressOptions)
Additional options.
---
##### `serve` <a name="cdk8s-plus-22.Service.serve"></a>
```typescript
public serve(port: number, options?: ServicePortOptions)
```
###### `port`<sup>Required</sup> <a name="cdk8s-plus-22.Service.parameter.port"></a>
- *Type:* `number`
The port definition.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Service.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.ServicePortOptions`](#cdk8s-plus-22.ServicePortOptions)
---
#### Properties <a name="Properties"></a>
##### `ports`<sup>Required</sup> <a name="cdk8s-plus-22.Service.property.ports"></a>
```typescript
public readonly ports: ServicePort[];
```
- *Type:* [`cdk8s-plus-22.ServicePort`](#cdk8s-plus-22.ServicePort)[]
Ports for this service.
Use `serve()` to expose additional service ports.
---
##### `selector`<sup>Required</sup> <a name="cdk8s-plus-22.Service.property.selector"></a>
```typescript
public readonly selector: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
Returns the labels which are used to select pods for this service.
---
##### `type`<sup>Required</sup> <a name="cdk8s-plus-22.Service.property.type"></a>
```typescript
public readonly type: ServiceType;
```
- *Type:* [`cdk8s-plus-22.ServiceType`](#cdk8s-plus-22.ServiceType)
Determines how the Service is exposed.
---
##### `clusterIP`<sup>Optional</sup> <a name="cdk8s-plus-22.Service.property.clusterIP"></a>
```typescript
public readonly clusterIP: string;
```
- *Type:* `string`
The IP address of the service, usually assigned randomly by the master.
---
##### `externalName`<sup>Optional</sup> <a name="cdk8s-plus-22.Service.property.externalName"></a>
```typescript
public readonly externalName: string;
```
- *Type:* `string`
The externalName to be used for EXTERNAL_NAME types.
---
### ServiceAccount <a name="cdk8s-plus-22.ServiceAccount"></a>
- *Implements:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are
authenticated by the apiserver as a particular User Account (currently this
is usually admin, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account (for
example, default).
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account
#### Initializers <a name="cdk8s-plus-22.ServiceAccount.Initializer"></a>
```typescript
import { ServiceAccount } from 'cdk8s-plus-22'
new ServiceAccount(scope: Construct, id: string, props?: ServiceAccountProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.ServiceAccount.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.ServiceAccount.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceAccount.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.ServiceAccountProps`](#cdk8s-plus-22.ServiceAccountProps)
---
#### Methods <a name="Methods"></a>
##### `addSecret` <a name="cdk8s-plus-22.ServiceAccount.addSecret"></a>
```typescript
public addSecret(secret: ISecret)
```
###### `secret`<sup>Required</sup> <a name="cdk8s-plus-22.ServiceAccount.parameter.secret"></a>
- *Type:* [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)
The secret.
---
#### Static Functions <a name="Static Functions"></a>
##### `fromServiceAccountName` <a name="cdk8s-plus-22.ServiceAccount.fromServiceAccountName"></a>
```typescript
import { ServiceAccount } from 'cdk8s-plus-22'
ServiceAccount.fromServiceAccountName(name: string)
```
###### `name`<sup>Required</sup> <a name="cdk8s-plus-22.ServiceAccount.parameter.name"></a>
- *Type:* `string`
The name of the service account resource.
---
#### Properties <a name="Properties"></a>
##### `secrets`<sup>Required</sup> <a name="cdk8s-plus-22.ServiceAccount.property.secrets"></a>
```typescript
public readonly secrets: ISecret[];
```
- *Type:* [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)[]
List of secrets allowed to be used by pods running using this service account.
Returns a copy. To add a secret, use `addSecret()`.
---
### StatefulSet <a name="cdk8s-plus-22.StatefulSet"></a>
- *Implements:* [`cdk8s-plus-22.IPodTemplate`](#cdk8s-plus-22.IPodTemplate)
StatefulSet is the workload API object used to manage stateful applications.
Manages the deployment and scaling of a set of Pods, and provides guarantees
about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical
container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity
for each of their Pods. These pods are created from the same spec, but are not
interchangeable: each has a persistent identifier that it maintains across any
rescheduling.
If you want to use storage volumes to provide persistence for your workload, you
can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet
are susceptible to failure, the persistent Pod identifiers make it easier to match existing
volumes to the new Pods that replace any that have failed.
Using StatefulSets
------------------
StatefulSets are valuable for applications that require one or more of the following.
- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.
#### Initializers <a name="cdk8s-plus-22.StatefulSet.Initializer"></a>
```typescript
import { StatefulSet } from 'cdk8s-plus-22'
new StatefulSet(scope: Construct, id: string, props: StatefulSetProps)
```
##### `scope`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.parameter.scope"></a>
- *Type:* [`constructs.Construct`](#constructs.Construct)
---
##### `id`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.parameter.id"></a>
- *Type:* `string`
---
##### `props`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.StatefulSetProps`](#cdk8s-plus-22.StatefulSetProps)
---
#### Methods <a name="Methods"></a>
##### `addContainer` <a name="cdk8s-plus-22.StatefulSet.addContainer"></a>
```typescript
public addContainer(container: ContainerProps)
```
###### `container`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.parameter.container"></a>
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)
---
##### `addVolume` <a name="cdk8s-plus-22.StatefulSet.addVolume"></a>
```typescript
public addVolume(volume: Volume)
```
###### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.parameter.volume"></a>
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
---
##### `selectByLabel` <a name="cdk8s-plus-22.StatefulSet.selectByLabel"></a>
```typescript
public selectByLabel(key: string, value: string)
```
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.parameter.key"></a>
- *Type:* `string`
The label key.
---
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.parameter.value"></a>
- *Type:* `string`
The label value.
---

#### Properties <a name="Properties"></a>
##### `containers`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.property.containers"></a>
```typescript
public readonly containers: Container[];
```
- *Type:* [`cdk8s-plus-22.Container`](#cdk8s-plus-22.Container)[]
The containers belonging to the pod.
Use `addContainer` to add containers.
---
##### `labelSelector`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.property.labelSelector"></a>
```typescript
public readonly labelSelector: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
The labels this statefulset will match against in order to select pods.
Returns a copy. Use `selectByLabel()` to add labels.
---
##### `podManagementPolicy`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.property.podManagementPolicy"></a>
```typescript
public readonly podManagementPolicy: PodManagementPolicy;
```
- *Type:* [`cdk8s-plus-22.PodManagementPolicy`](#cdk8s-plus-22.PodManagementPolicy)
Management policy to use for the set.
---
##### `podMetadata`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadataDefinition;
```
- *Type:* [`cdk8s.ApiObjectMetadataDefinition`](#cdk8s.ApiObjectMetadataDefinition)
Provides read/write access to the underlying pod metadata of the resource.
---
##### `replicas`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.property.replicas"></a>
```typescript
public readonly replicas: number;
```
- *Type:* `number`
Number of desired pods.
---
##### `volumes`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSet.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
The volumes associated with this pod.
Use `addVolume` to add volumes.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSet.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
Restart policy for all containers within the pod.
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSet.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
The service account used to run this pod.
---
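A minimal construction sketch for `StatefulSet`, combining the initializer with `addVolume` and `selectByLabel`. It assumes `StatefulSetProps` accepts a governing `service` and a `containers` list (documented elsewhere in this reference); the chart setup and image are illustrative:

```typescript
import { App, Chart } from 'cdk8s';
import { Service, StatefulSet, Volume } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

// A StatefulSet is governed by a service, typically headless.
const service = new Service(chart, 'web-svc', { ports: [{ port: 80 }] });

const statefulSet = new StatefulSet(chart, 'web', {
  service,
  containers: [{ image: 'nginx:1.21', port: 80 }],
  replicas: 3,
});

// Volumes and additional selection labels can be added after construction.
statefulSet.addVolume(Volume.fromEmptyDir('scratch'));
statefulSet.selectByLabel('tier', 'frontend');
```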
## Structs <a name="Structs"></a>
### AddDeploymentOptions <a name="cdk8s-plus-22.AddDeploymentOptions"></a>
Options to add a deployment to a service.
#### Initializer <a name="cdk8s-plus-22.AddDeploymentOptions.Initializer"></a>
```typescript
import { AddDeploymentOptions } from 'cdk8s-plus-22'
const addDeploymentOptions: AddDeploymentOptions = { ... }
```
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.AddDeploymentOptions.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The name of this port within the service.
This must be a DNS_LABEL. All
ports within a ServiceSpec must have unique names. This maps to the 'Name'
field in EndpointPort objects. Optional if only one ServicePort is defined
on this service.
---
##### `nodePort`<sup>Optional</sup> <a name="cdk8s-plus-22.AddDeploymentOptions.property.nodePort"></a>
```typescript
public readonly nodePort: number;
```
- *Type:* `number`
- *Default:* auto-allocate a port if the ServiceType of this Service requires one.
The port on each node on which this service is exposed when type=NodePort or LoadBalancer.
Usually assigned by the system. If specified, it will be
allocated to the service if unused or else creation of the service will
fail. Default is to auto-allocate a port if the ServiceType of this Service
requires one.
> https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
---
##### `protocol`<sup>Optional</sup> <a name="cdk8s-plus-22.AddDeploymentOptions.property.protocol"></a>
```typescript
public readonly protocol: Protocol;
```
- *Type:* [`cdk8s-plus-22.Protocol`](#cdk8s-plus-22.Protocol)
- *Default:* Protocol.TCP
The IP protocol for this port.
Supports "TCP", "UDP", and "SCTP". Default is TCP.
---
##### `targetPort`<sup>Optional</sup> <a name="cdk8s-plus-22.AddDeploymentOptions.property.targetPort"></a>
```typescript
public readonly targetPort: number;
```
- *Type:* `number`
- *Default:* The value of `port` will be used.
The port number the service will redirect to.
---
##### `port`<sup>Optional</sup> <a name="cdk8s-plus-22.AddDeploymentOptions.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
- *Default:* Copied from the first container of the deployment.
The port number the service will bind to.
---
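These options are passed when wiring a deployment into a service. The sketch below assumes a `service.addDeployment(deployment, options)` signature (the method itself is documented elsewhere in this reference); the chart setup and port numbers are illustrative:

```typescript
import { App, Chart } from 'cdk8s';
import { Deployment, Service } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const deployment = new Deployment(chart, 'web', {
  containers: [{ image: 'nginx:1.21', port: 8080 }],
});

// Serve on port 80 and redirect to the container's 8080.
const service = new Service(chart, 'web-svc');
service.addDeployment(deployment, { port: 80, targetPort: 8080 });
```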
### AddDirectoryOptions <a name="cdk8s-plus-22.AddDirectoryOptions"></a>
Options for `configmap.addDirectory()`.
#### Initializer <a name="cdk8s-plus-22.AddDirectoryOptions.Initializer"></a>
```typescript
import { AddDirectoryOptions } from 'cdk8s-plus-22'
const addDirectoryOptions: AddDirectoryOptions = { ... }
```
##### `exclude`<sup>Optional</sup> <a name="cdk8s-plus-22.AddDirectoryOptions.property.exclude"></a>
```typescript
public readonly exclude: string[];
```
- *Type:* `string`[]
- *Default:* include all files
Glob patterns to exclude when adding files.
---
##### `keyPrefix`<sup>Optional</sup> <a name="cdk8s-plus-22.AddDirectoryOptions.property.keyPrefix"></a>
```typescript
public readonly keyPrefix: string;
```
- *Type:* `string`
- *Default:* ""
A prefix to add to all keys in the config map.
---
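A short sketch of `configmap.addDirectory()` using both options. The local directory path and glob pattern are illustrative assumptions:

```typescript
import { App, Chart } from 'cdk8s';
import { ConfigMap } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const config = new ConfigMap(chart, 'app-config');

// Add every file under ./config except markdown files,
// keyed as "cfg-<filename>".
config.addDirectory('./config', {
  exclude: ['*.md'],
  keyPrefix: 'cfg-',
});
```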
### CommandProbeOptions <a name="cdk8s-plus-22.CommandProbeOptions"></a>
Options for `Probe.fromCommand()`.
#### Initializer <a name="cdk8s-plus-22.CommandProbeOptions.Initializer"></a>
```typescript
import { CommandProbeOptions } from 'cdk8s-plus-22'
const commandProbeOptions: CommandProbeOptions = { ... }
```
##### `failureThreshold`<sup>Optional</sup> <a name="cdk8s-plus-22.CommandProbeOptions.property.failureThreshold"></a>
```typescript
public readonly failureThreshold: number;
```
- *Type:* `number`
- *Default:* 3
Minimum consecutive failures for the probe to be considered failed after having succeeded.
Defaults to 3. Minimum value is 1.
---
##### `initialDelaySeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.CommandProbeOptions.property.initialDelaySeconds"></a>
```typescript
public readonly initialDelaySeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* immediate
Number of seconds after the container has started before liveness probes are initiated.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
---
##### `periodSeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.CommandProbeOptions.property.periodSeconds"></a>
```typescript
public readonly periodSeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* Duration.seconds(10). Minimum value is 1.
How often (in seconds) to perform the probe.
Default to 10 seconds. Minimum value is 1.
---
##### `successThreshold`<sup>Optional</sup> <a name="cdk8s-plus-22.CommandProbeOptions.property.successThreshold"></a>
```typescript
public readonly successThreshold: number;
```
- *Type:* `number`
- *Default:* 1. Must be 1 for liveness and startup. Minimum value is 1.
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1.
Must be 1 for liveness and startup. Minimum value is 1.
---
##### `timeoutSeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.CommandProbeOptions.property.timeoutSeconds"></a>
```typescript
public readonly timeoutSeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* Duration.seconds(1)
Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
---
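These options are consumed by `Probe.fromCommand()`. A minimal sketch, assuming a liveness check that succeeds while `/tmp/healthy` exists (the command itself is illustrative):

```typescript
import { Duration } from 'cdk8s';
import { Probe } from 'cdk8s-plus-22';

// The container is considered live while `stat /tmp/healthy` exits with 0.
const liveness = Probe.fromCommand(['stat', '/tmp/healthy'], {
  initialDelaySeconds: Duration.seconds(5),
  periodSeconds: Duration.seconds(10),
  failureThreshold: 3,
});
```

The resulting probe can be passed as the `liveness` property of `ContainerProps`.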
### ConfigMapProps <a name="cdk8s-plus-22.ConfigMapProps"></a>
Properties for initialization of `ConfigMap`.
#### Initializer <a name="cdk8s-plus-22.ConfigMapProps.Initializer"></a>
```typescript
import { ConfigMapProps } from 'cdk8s-plus-22'
const configMapProps: ConfigMapProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMapProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `binaryData`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMapProps.property.binaryData"></a>
```typescript
public readonly binaryData: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
BinaryData contains the binary data.
Each key must consist of alphanumeric characters, '-', '_' or '.'.
BinaryData can contain byte sequences that are not in the UTF-8 range. The
keys stored in BinaryData must not overlap with the ones in the Data field,
this is enforced during validation process. Using this field will require
1.10+ apiserver and kubelet.
You can also add binary data using `configMap.addBinaryData()`.
---
##### `data`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMapProps.property.data"></a>
```typescript
public readonly data: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
Data contains the configuration data.
Each key must consist of alphanumeric characters, '-', '_' or '.'. Values
with non-UTF-8 byte sequences must use the BinaryData field. The keys
stored in Data must not overlap with the keys in the BinaryData field, this
is enforced during validation process.
You can also add data using `configMap.addData()`.
---
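A minimal construction sketch using the `data` property, plus the `addData()` escape hatch mentioned above. The chart setup and key names are illustrative:

```typescript
import { App, Chart } from 'cdk8s';
import { ConfigMap } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const config = new ConfigMap(chart, 'app-config', {
  data: { 'log-level': 'debug' },
});

// Keys can also be added after construction.
config.addData('feature-flag', 'on');
```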
### ConfigMapVolumeOptions <a name="cdk8s-plus-22.ConfigMapVolumeOptions"></a>
Options for the ConfigMap-based volume.
#### Initializer <a name="cdk8s-plus-22.ConfigMapVolumeOptions.Initializer"></a>
```typescript
import { ConfigMapVolumeOptions } from 'cdk8s-plus-22'
const configMapVolumeOptions: ConfigMapVolumeOptions = { ... }
```
##### `defaultMode`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMapVolumeOptions.property.defaultMode"></a>
```typescript
public readonly defaultMode: number;
```
- *Type:* `number`
- *Default:* 0644. Directories within the path are not affected by this
setting. This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
Mode bits to use on created files by default.
Must be a value between 0 and
0777. Defaults to 0644. Directories within the path are not affected by
this setting. This might be in conflict with other options that affect the
file mode, like fsGroup, and the result can be other mode bits set.
---
##### `items`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMapVolumeOptions.property.items"></a>
```typescript
public readonly items: {[ key: string ]: PathMapping};
```
- *Type:* {[ key: string ]: [`cdk8s-plus-22.PathMapping`](#cdk8s-plus-22.PathMapping)}
- *Default:* no mapping
If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value.
If specified, the listed keys will be projected
into the specified paths, and unlisted keys will not be present. If a key
is specified which is not present in the ConfigMap, the volume setup will
error unless it is marked optional. Paths must be relative and may not
contain the '..' path or start with '..'.
---
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMapVolumeOptions.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
- *Default:* auto-generated
The volume name.
---
##### `optional`<sup>Optional</sup> <a name="cdk8s-plus-22.ConfigMapVolumeOptions.property.optional"></a>
```typescript
public readonly optional: boolean;
```
- *Type:* `boolean`
- *Default:* undocumented
Specify whether the ConfigMap or its keys must be defined.
---
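These options are consumed by `Volume.fromConfigMap()`. A sketch that projects a single key to a custom path; the key name and path are illustrative assumptions:

```typescript
import { App, Chart } from 'cdk8s';
import { ConfigMap, Volume } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const config = new ConfigMap(chart, 'app-config', {
  data: { 'app.conf': 'verbose=true' },
});

// Project only the "app.conf" key, as the file "conf/app.conf".
const volume = Volume.fromConfigMap(config, {
  items: { 'app.conf': { path: 'conf/app.conf' } },
  defaultMode: 0o644,
});
```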
### ContainerProps <a name="cdk8s-plus-22.ContainerProps"></a>
Properties for creating a container.
#### Initializer <a name="cdk8s-plus-22.ContainerProps.Initializer"></a>
```typescript
import { ContainerProps } from 'cdk8s-plus-22'
const containerProps: ContainerProps = { ... }
```
##### `image`<sup>Required</sup> <a name="cdk8s-plus-22.ContainerProps.property.image"></a>
```typescript
public readonly image: string;
```
- *Type:* `string`
Docker image name.
---
##### `args`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.args"></a>
```typescript
public readonly args: string[];
```
- *Type:* `string`[]
- *Default:* []
Arguments to the entrypoint. The docker image's CMD is used if `command` is not provided.
Variable references $(VAR_NAME) are expanded using the container's
environment. If a variable cannot be resolved, the reference in the input
string will be unchanged. The $(VAR_NAME) syntax can be escaped with a
double $$, ie: $$(VAR_NAME). Escaped references will never be expanded,
regardless of whether the variable exists or not.
Cannot be updated.
> https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
---
##### `command`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.command"></a>
```typescript
public readonly command: string[];
```
- *Type:* `string`[]
- *Default:* The docker image's ENTRYPOINT.
Entrypoint array.
Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment.
If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME).
Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
---
##### `env`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.env"></a>
```typescript
public readonly env: {[ key: string ]: EnvValue};
```
- *Type:* {[ key: string ]: [`cdk8s-plus-22.EnvValue`](#cdk8s-plus-22.EnvValue)}
- *Default:* No environment variables.
List of environment variables to set in the container.
Cannot be updated.
---
##### `imagePullPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.imagePullPolicy"></a>
```typescript
public readonly imagePullPolicy: ImagePullPolicy;
```
- *Type:* [`cdk8s-plus-22.ImagePullPolicy`](#cdk8s-plus-22.ImagePullPolicy)
- *Default:* ImagePullPolicy.ALWAYS
Image pull policy for this container.
---
##### `liveness`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.liveness"></a>
```typescript
public readonly liveness: Probe;
```
- *Type:* [`cdk8s-plus-22.Probe`](#cdk8s-plus-22.Probe)
- *Default:* no liveness probe is defined
Periodic probe of container liveness.
Container will be restarted if the probe fails.
---
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
- *Default:* 'main'
Name of the container specified as a DNS_LABEL.
Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
---
##### `port`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
- *Default:* No port is exposed.
Number of port to expose on the pod's IP address.
This must be a valid port number, 0 < x < 65536.
---
##### `readiness`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.readiness"></a>
```typescript
public readonly readiness: Probe;
```
- *Type:* [`cdk8s-plus-22.Probe`](#cdk8s-plus-22.Probe)
- *Default:* no readiness probe is defined
Determines when the container is ready to serve traffic.
---
##### `startup`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.startup"></a>
```typescript
public readonly startup: Probe;
```
- *Type:* [`cdk8s-plus-22.Probe`](#cdk8s-plus-22.Probe)
- *Default:* no startup probe is defined.
StartupProbe indicates that the Pod has successfully initialized.
If specified, no other probes are executed until this completes successfully
---
##### `volumeMounts`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.volumeMounts"></a>
```typescript
public readonly volumeMounts: VolumeMount[];
```
- *Type:* [`cdk8s-plus-22.VolumeMount`](#cdk8s-plus-22.VolumeMount)[]
Pod volumes to mount into the container's filesystem.
Cannot be updated.
---
##### `workingDir`<sup>Optional</sup> <a name="cdk8s-plus-22.ContainerProps.property.workingDir"></a>
```typescript
public readonly workingDir: string;
```
- *Type:* `string`
- *Default:* The container runtime's default.
Container's working directory.
If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
---
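A sketch showing several of these properties passed to `Deployment.addContainer()` (any workload's `addContainer` accepts `ContainerProps`). The image, command, and paths are illustrative:

```typescript
import { App, Chart } from 'cdk8s';
import { Deployment, ImagePullPolicy } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const deployment = new Deployment(chart, 'web');
deployment.addContainer({
  image: 'nginx:1.21',           // the only required property
  name: 'web',
  port: 8080,
  command: ['nginx'],
  args: ['-g', 'daemon off;'],
  workingDir: '/usr/share/nginx',
  imagePullPolicy: ImagePullPolicy.IF_NOT_PRESENT,
});
```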
### DeploymentProps <a name="cdk8s-plus-22.DeploymentProps"></a>
Properties for initialization of `Deployment`.
#### Initializer <a name="cdk8s-plus-22.DeploymentProps.Initializer"></a>
```typescript
import { DeploymentProps } from 'cdk8s-plus-22'
const deploymentProps: DeploymentProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `containers`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.containers"></a>
```typescript
public readonly containers: ContainerProps[];
```
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)[]
- *Default:* No containers. Note that a pod spec must include at least one container.
List of containers belonging to the pod.
Containers cannot currently be
added or removed. There must be at least one container in a Pod.
You can add additional containers using `podSpec.addContainer()`
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
- *Default:* RestartPolicy.ALWAYS
Restart policy for all containers within the pod.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
- *Default:* No service account.
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are
authenticated by the apiserver as a particular User Account (currently this
is usually admin, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the
apiserver. When they do, they are authenticated as a particular Service
Account (for example, default).
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
---
##### `volumes`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
- *Default:* No volumes.
List of volumes that can be mounted by containers belonging to the pod.
You can also add volumes later using `podSpec.addVolume()`
> https://kubernetes.io/docs/concepts/storage/volumes
---
##### `podMetadata`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
The pod metadata.
---
##### `defaultSelector`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.defaultSelector"></a>
```typescript
public readonly defaultSelector: boolean;
```
- *Type:* `boolean`
- *Default:* true
Automatically allocates a pod selector for this deployment.
If this is set to `false` you must define your selector through
`deployment.podMetadata.addLabel()` and `deployment.selectByLabel()`.
---
##### `replicas`<sup>Optional</sup> <a name="cdk8s-plus-22.DeploymentProps.property.replicas"></a>
```typescript
public readonly replicas: number;
```
- *Type:* `number`
- *Default:* 1
Number of desired pods.
---
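A minimal construction sketch covering the most common of these properties; the chart setup and image are illustrative assumptions:

```typescript
import { App, Chart } from 'cdk8s';
import { Deployment, RestartPolicy } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

new Deployment(chart, 'web', {
  replicas: 2,
  containers: [{ image: 'nginx:1.21', port: 80 }],
  restartPolicy: RestartPolicy.ALWAYS,
});
```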
### EmptyDirVolumeOptions <a name="cdk8s-plus-22.EmptyDirVolumeOptions"></a>
Options for volumes populated with an empty directory.
#### Initializer <a name="cdk8s-plus-22.EmptyDirVolumeOptions.Initializer"></a>
```typescript
import { EmptyDirVolumeOptions } from 'cdk8s-plus-22'
const emptyDirVolumeOptions: EmptyDirVolumeOptions = { ... }
```
##### `medium`<sup>Optional</sup> <a name="cdk8s-plus-22.EmptyDirVolumeOptions.property.medium"></a>
```typescript
public readonly medium: EmptyDirMedium;
```
- *Type:* [`cdk8s-plus-22.EmptyDirMedium`](#cdk8s-plus-22.EmptyDirMedium)
- *Default:* EmptyDirMedium.DEFAULT
By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
However, you can set the emptyDir.medium field to
`EmptyDirMedium.MEMORY` to tell Kubernetes to mount a tmpfs (RAM-backed
filesystem) for you instead. While tmpfs is very fast, be aware that unlike
disks, tmpfs is cleared on node reboot and any files you write will count
against your Container's memory limit.
---
##### `sizeLimit`<sup>Optional</sup> <a name="cdk8s-plus-22.EmptyDirVolumeOptions.property.sizeLimit"></a>
```typescript
public readonly sizeLimit: Size;
```
- *Type:* [`cdk8s.Size`](#cdk8s.Size)
- *Default:* limit is undefined
Total amount of local storage required for this EmptyDir volume.
The size
limit is also applicable for memory medium. The maximum usage on memory
medium EmptyDir would be the minimum value between the SizeLimit specified
here and the sum of memory limits of all containers in a pod.
---
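These options are consumed by `Volume.fromEmptyDir()`. A sketch of a RAM-backed scratch volume; the name and size are illustrative:

```typescript
import { Size } from 'cdk8s';
import { EmptyDirMedium, Volume } from 'cdk8s-plus-22';

// A tmpfs-backed scratch volume capped at 128 MiB. Contents are lost
// when the pod is removed and, for tmpfs, on node reboot; usage counts
// against the container's memory limit.
const scratch = Volume.fromEmptyDir('scratch', {
  medium: EmptyDirMedium.MEMORY,
  sizeLimit: Size.mebibytes(128),
});
```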
### EnvValueFromConfigMapOptions <a name="cdk8s-plus-22.EnvValueFromConfigMapOptions"></a>
Options to specify an environment variable value from a ConfigMap key.
#### Initializer <a name="cdk8s-plus-22.EnvValueFromConfigMapOptions.Initializer"></a>
```typescript
import { EnvValueFromConfigMapOptions } from 'cdk8s-plus-22'
const envValueFromConfigMapOptions: EnvValueFromConfigMapOptions = { ... }
```
##### `optional`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValueFromConfigMapOptions.property.optional"></a>
```typescript
public readonly optional: boolean;
```
- *Type:* `boolean`
- *Default:* false
Specify whether the ConfigMap or its key must be defined.
---
### EnvValueFromProcessOptions <a name="cdk8s-plus-22.EnvValueFromProcessOptions"></a>
Options to specify an environment variable value from the process environment.
#### Initializer <a name="cdk8s-plus-22.EnvValueFromProcessOptions.Initializer"></a>
```typescript
import { EnvValueFromProcessOptions } from 'cdk8s-plus-22'
const envValueFromProcessOptions: EnvValueFromProcessOptions = { ... }
```
##### `required`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValueFromProcessOptions.property.required"></a>
```typescript
public readonly required: boolean;
```
- *Type:* `boolean`
- *Default:* false
Specify whether the key must exist in the environment.
If this is set to true, and the key does not exist, an error will be thrown.
---
### EnvValueFromSecretOptions <a name="cdk8s-plus-22.EnvValueFromSecretOptions"></a>
Options to specify an environment variable value from a Secret.
#### Initializer <a name="cdk8s-plus-22.EnvValueFromSecretOptions.Initializer"></a>
```typescript
import { EnvValueFromSecretOptions } from 'cdk8s-plus-22'
const envValueFromSecretOptions: EnvValueFromSecretOptions = { ... }
```
##### `optional`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValueFromSecretOptions.property.optional"></a>
```typescript
public readonly optional: boolean;
```
- *Type:* `boolean`
- *Default:* false
Specify whether the Secret or its key must be defined.
---
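The three `EnvValueFrom*Options` structs above are consumed by the corresponding `EnvValue` factories. A sketch, assuming the `EnvValue.fromConfigMap` / `fromProcess` / `fromSecretValue` signatures documented elsewhere in this reference; key names are illustrative:

```typescript
import { App, Chart } from 'cdk8s';
import { ConfigMap, EnvValue, Secret } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const config = new ConfigMap(chart, 'app-config', {
  data: { 'log-level': 'info' },
});

const env = {
  // Tolerates a missing key because `optional` is set.
  LOG_LEVEL: EnvValue.fromConfigMap(config, 'log-level', { optional: true }),
  // Fails synthesis if HOME is not set in the synthesizing process.
  HOST_HOME: EnvValue.fromProcess('HOME', { required: true }),
  // Resolved from a secret key at deploy time.
  API_KEY: EnvValue.fromSecretValue(
    { secret: Secret.fromSecretName('api'), key: 'token' },
    { optional: false },
  ),
};
```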
### ExposeDeploymentViaIngressOptions <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions"></a>
Options for exposing a deployment via an ingress.
#### Initializer <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.Initializer"></a>
```typescript
import { ExposeDeploymentViaIngressOptions } from 'cdk8s-plus-22'
const exposeDeploymentViaIngressOptions: ExposeDeploymentViaIngressOptions = { ... }
```
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
- *Default:* undefined. Uses the system-generated name.
The name of the service to expose.
This will be set on the Service.metadata and must be a DNS_LABEL
---
##### `port`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
- *Default:* Copied from the container of the deployment. If a port could not be determined, throws an error.
The port that the service should serve on.
---
##### `protocol`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.property.protocol"></a>
```typescript
public readonly protocol: Protocol;
```
- *Type:* [`cdk8s-plus-22.Protocol`](#cdk8s-plus-22.Protocol)
- *Default:* Protocol.TCP
The IP protocol for this port.
Supports "TCP", "UDP", and "SCTP". Default is TCP.
---
##### `serviceType`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.property.serviceType"></a>
```typescript
public readonly serviceType: ServiceType;
```
- *Type:* [`cdk8s-plus-22.ServiceType`](#cdk8s-plus-22.ServiceType)
- *Default:* ClusterIP.
The type of the exposed service.
---
##### `targetPort`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.property.targetPort"></a>
```typescript
public readonly targetPort: number;
```
- *Type:* `number`
- *Default:* The port of the first container in the deployment (i.e. `containers[0].port`)
The port number the service will redirect to.
---
##### `ingress`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.property.ingress"></a>
```typescript
public readonly ingress: Ingress;
```
- *Type:* [`cdk8s-plus-22.Ingress`](#cdk8s-plus-22.Ingress)
- *Default:* An ingress will be automatically created.
The ingress to add rules to.
---
##### `pathType`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaIngressOptions.property.pathType"></a>
```typescript
public readonly pathType: HttpIngressPathType;
```
- *Type:* [`cdk8s-plus-22.HttpIngressPathType`](#cdk8s-plus-22.HttpIngressPathType)
- *Default:* HttpIngressPathType.PREFIX
The type of the path.
---
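These options are consumed by `deployment.exposeViaIngress(path, options)` (documented elsewhere in this reference). A sketch; the path and port are illustrative:

```typescript
import { App, Chart } from 'cdk8s';
import { Deployment } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const deployment = new Deployment(chart, 'web', {
  containers: [{ image: 'nginx:1.21', port: 8080 }],
});

// Creates a backing service and an ingress rule routing /web to it.
deployment.exposeViaIngress('/web', { port: 80 });
```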
### ExposeDeploymentViaServiceOptions <a name="cdk8s-plus-22.ExposeDeploymentViaServiceOptions"></a>
Options for exposing a deployment via a service.
#### Initializer <a name="cdk8s-plus-22.ExposeDeploymentViaServiceOptions.Initializer"></a>
```typescript
import { ExposeDeploymentViaServiceOptions } from 'cdk8s-plus-22'
const exposeDeploymentViaServiceOptions: ExposeDeploymentViaServiceOptions = { ... }
```
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaServiceOptions.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
- *Default:* undefined. Uses the system-generated name.
The name of the service to expose.
This will be set on the Service.metadata and must be a DNS_LABEL
---
##### `port`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaServiceOptions.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
- *Default:* Copied from the container of the deployment. If a port could not be determined, throws an error.
The port that the service should serve on.
---
##### `protocol`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaServiceOptions.property.protocol"></a>
```typescript
public readonly protocol: Protocol;
```
- *Type:* [`cdk8s-plus-22.Protocol`](#cdk8s-plus-22.Protocol)
- *Default:* Protocol.TCP
The IP protocol for this port.
Supports "TCP", "UDP", and "SCTP". Default is TCP.
---
##### `serviceType`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaServiceOptions.property.serviceType"></a>
```typescript
public readonly serviceType: ServiceType;
```
- *Type:* [`cdk8s-plus-22.ServiceType`](#cdk8s-plus-22.ServiceType)
- *Default:* ClusterIP.
The type of the exposed service.
---
##### `targetPort`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeDeploymentViaServiceOptions.property.targetPort"></a>
```typescript
public readonly targetPort: number;
```
- *Type:* `number`
- *Default:* The port of the first container in the deployment (i.e. `containers[0].port`)
The port number the service will redirect to.
---
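These options are consumed by `deployment.exposeViaService(options)` (documented elsewhere in this reference). A sketch exposing a deployment as a NodePort service; the ports and image are illustrative:

```typescript
import { App, Chart } from 'cdk8s';
import { Deployment, Protocol, ServiceType } from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const deployment = new Deployment(chart, 'web', {
  containers: [{ image: 'nginx:1.21', port: 8080 }],
});

// NodePort service listening on 80, forwarding to the container's 8080.
deployment.exposeViaService({
  port: 80,
  serviceType: ServiceType.NODE_PORT,
  protocol: Protocol.TCP,
});
```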
### ExposeServiceViaIngressOptions <a name="cdk8s-plus-22.ExposeServiceViaIngressOptions"></a>
Options for exposing a service using an ingress.
#### Initializer <a name="cdk8s-plus-22.ExposeServiceViaIngressOptions.Initializer"></a>
```typescript
import { ExposeServiceViaIngressOptions } from 'cdk8s-plus-22'
const exposeServiceViaIngressOptions: ExposeServiceViaIngressOptions = { ... }
```
##### `ingress`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeServiceViaIngressOptions.property.ingress"></a>
```typescript
public readonly ingress: Ingress;
```
- *Type:* [`cdk8s-plus-22.Ingress`](#cdk8s-plus-22.Ingress)
- *Default:* An ingress will be automatically created.
The ingress to add rules to.
---
##### `pathType`<sup>Optional</sup> <a name="cdk8s-plus-22.ExposeServiceViaIngressOptions.property.pathType"></a>
```typescript
public readonly pathType: HttpIngressPathType;
```
- *Type:* [`cdk8s-plus-22.HttpIngressPathType`](#cdk8s-plus-22.HttpIngressPathType)
- *Default:* HttpIngressPathType.PREFIX
The type of the path.
---
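A minimal sketch of how these options reach `Service.exposeViaIngress()` (the service definition is illustrative):

```typescript
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

// Illustrative service serving on port 80.
const service = new kplus.Service(chart, 'svc', { ports: [{ port: 80 }] });

// Route requests whose path starts with /api to this service.
// Since no `ingress` is supplied, one is created automatically.
service.exposeViaIngress('/api', {
  pathType: kplus.HttpIngressPathType.PREFIX,
});
```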
### HttpGetProbeOptions <a name="cdk8s-plus-22.HttpGetProbeOptions"></a>
Options for `Probe.fromHttpGet()`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { HttpGetProbeOptions } from 'cdk8s-plus-22'
const httpGetProbeOptions: HttpGetProbeOptions = { ... }
```
##### `failureThreshold`<sup>Optional</sup> <a name="cdk8s-plus-22.HttpGetProbeOptions.property.failureThreshold"></a>
```typescript
public readonly failureThreshold: number;
```
- *Type:* `number`
- *Default:* 3
Minimum consecutive failures for the probe to be considered failed after having succeeded.
Defaults to 3. Minimum value is 1.
---
##### `initialDelaySeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.HttpGetProbeOptions.property.initialDelaySeconds"></a>
```typescript
public readonly initialDelaySeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* immediate
Number of seconds after the container has started before liveness probes are initiated.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
---
##### `periodSeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.HttpGetProbeOptions.property.periodSeconds"></a>
```typescript
public readonly periodSeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* Duration.seconds(10) Minimum value is 1.
How often (in seconds) to perform the probe.
Defaults to 10 seconds. Minimum value is 1.
---
##### `successThreshold`<sup>Optional</sup> <a name="cdk8s-plus-22.HttpGetProbeOptions.property.successThreshold"></a>
```typescript
public readonly successThreshold: number;
```
- *Type:* `number`
- *Default:* 1 Must be 1 for liveness and startup. Minimum value is 1.
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1.
Must be 1 for liveness and startup. Minimum value is 1.
---
##### `timeoutSeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.HttpGetProbeOptions.property.timeoutSeconds"></a>
```typescript
public readonly timeoutSeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* Duration.seconds(1)
Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
---
##### `port`<sup>Optional</sup> <a name="cdk8s-plus-22.HttpGetProbeOptions.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
- *Default:* `container.port`.
The TCP port to use when sending the GET request.
---
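These options are consumed by `Probe.fromHttpGet()`, typically when wiring a liveness probe onto a container. A sketch (the image, path, and timings are illustrative):

```typescript
import { App, Chart, Duration } from 'cdk8s';
import * as kplus from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const deployment = new kplus.Deployment(chart, 'web');
deployment.addContainer({
  image: 'my-image',
  port: 8080,
  // GET /health on the container port every 30 seconds,
  // waiting up to 5 seconds for a response.
  liveness: kplus.Probe.fromHttpGet('/health', {
    periodSeconds: Duration.seconds(30),
    timeoutSeconds: Duration.seconds(5),
    failureThreshold: 3,
  }),
});
```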
### IngressProps <a name="cdk8s-plus-22.IngressProps"></a>
Properties for `Ingress`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { IngressProps } from 'cdk8s-plus-22'
const ingressProps: IngressProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `defaultBackend`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressProps.property.defaultBackend"></a>
```typescript
public readonly defaultBackend: IngressBackend;
```
- *Type:* [`cdk8s-plus-22.IngressBackend`](#cdk8s-plus-22.IngressBackend)
The default backend services requests that do not match any rule.
Using this option or the `addDefaultBackend()` method is equivalent to
adding a rule with both `path` and `host` undefined.
---
##### `rules`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressProps.property.rules"></a>
```typescript
public readonly rules: IngressRule[];
```
- *Type:* [`cdk8s-plus-22.IngressRule`](#cdk8s-plus-22.IngressRule)[]
Routing rules for this ingress.
Each rule must define an `IngressBackend` that will receive the requests
that match this rule. If both `host` and `path` are unspecified, this
backend will be used as the default backend of the ingress.
You can also add rules later using `addRule()`, `addHostRule()`,
`addDefaultBackend()` and `addHostDefaultBackend()`.
---
##### `tls`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressProps.property.tls"></a>
```typescript
public readonly tls: IngressTls[];
```
- *Type:* [`cdk8s-plus-22.IngressTls`](#cdk8s-plus-22.IngressTls)[]
TLS settings for this ingress.
Using this option tells the ingress controller to expose a TLS endpoint.
Currently the Ingress only supports a single TLS port, 443. If multiple
members of this list specify different hosts, they will be multiplexed on
the same port according to the hostname specified through the SNI TLS
extension, if the ingress controller fulfilling the ingress supports SNI.
---
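A sketch combining `defaultBackend` and `rules` (the two services are illustrative):

```typescript
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

const frontend = new kplus.Service(chart, 'frontend', { ports: [{ port: 80 }] });
const api = new kplus.Service(chart, 'api', { ports: [{ port: 8080 }] });

// Route /api to the api service; anything else falls back to the
// default backend (equivalent to a rule with no `host` and no `path`).
new kplus.Ingress(chart, 'ingress', {
  defaultBackend: kplus.IngressBackend.fromService(frontend),
  rules: [
    { path: '/api', backend: kplus.IngressBackend.fromService(api) },
  ],
});
```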
### IngressRule <a name="cdk8s-plus-22.IngressRule"></a>
Represents the rules mapping the paths under a specified host to the related backend services.
Incoming requests are first evaluated for a host match,
then routed to the backend associated with the matching path.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { IngressRule } from 'cdk8s-plus-22'
const ingressRule: IngressRule = { ... }
```
##### `backend`<sup>Required</sup> <a name="cdk8s-plus-22.IngressRule.property.backend"></a>
```typescript
public readonly backend: IngressBackend;
```
- *Type:* [`cdk8s-plus-22.IngressBackend`](#cdk8s-plus-22.IngressBackend)
Backend defines the referenced service endpoint to which the traffic will be forwarded to.
---
##### `host`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressRule.property.host"></a>
```typescript
public readonly host: string;
```
- *Type:* `string`
- *Default:* If the host is unspecified, the Ingress routes all traffic based
on the specified IngressRuleValue.
Host is the fully qualified domain name of a network host, as defined by RFC 3986.
Note the following deviations from the "host" part of the URI as
defined in the RFC: 1. IPs are not allowed. Currently an IngressRuleValue
can only apply to the IP in the Spec of the parent Ingress. 2. The `:`
delimiter is not respected because ports are not allowed. Currently the
port of an Ingress is implicitly :80 for http and :443 for https. Both
these may change in the future. Incoming requests are matched against the
host before the IngressRuleValue.
---
##### `path`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressRule.property.path"></a>
```typescript
public readonly path: string;
```
- *Type:* `string`
- *Default:* If unspecified, the path defaults to a catch all sending traffic
to the backend.
Path is an extended POSIX regex as defined by IEEE Std 1003.1 (i.e. this follows the egrep/unix syntax, not the perl syntax), matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a '/'.
---
##### `pathType`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressRule.property.pathType"></a>
```typescript
public readonly pathType: HttpIngressPathType;
```
- *Type:* [`cdk8s-plus-22.HttpIngressPathType`](#cdk8s-plus-22.HttpIngressPathType)
Specify how the path is matched against request paths.
By default, path
types will be matched by prefix.
> https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
---
### IngressTls <a name="cdk8s-plus-22.IngressTls"></a>
Represents the TLS configuration mapping that is passed to the ingress controller for SSL termination.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { IngressTls } from 'cdk8s-plus-22'
const ingressTls: IngressTls = { ... }
```
##### `hosts`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressTls.property.hosts"></a>
```typescript
public readonly hosts: string[];
```
- *Type:* `string`[]
- *Default:* If unspecified, it defaults to the wildcard host setting for
the loadbalancer controller fulfilling this Ingress.
Hosts are a list of hosts included in the TLS certificate.
The values in
this list must match the name(s) used in the TLS Secret.
---
##### `secret`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressTls.property.secret"></a>
```typescript
public readonly secret: ISecret;
```
- *Type:* [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)
- *Default:* If unspecified, it allows SSL routing based on SNI hostname.
Secret is the secret that contains the certificate and key used to terminate SSL traffic on 443.
If the SNI host in a listener conflicts with
the "Host" header field used by an IngressRule, the SNI host is used for
termination and value of the Host header is used for routing.
---
### JobProps <a name="cdk8s-plus-22.JobProps"></a>
Properties for initialization of `Job`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { JobProps } from 'cdk8s-plus-22'
const jobProps: JobProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `containers`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.containers"></a>
```typescript
public readonly containers: ContainerProps[];
```
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)[]
- *Default:* No containers. Note that a pod spec must include at least one container.
List of containers belonging to the pod.
Containers cannot currently be
added or removed. There must be at least one container in a Pod.
You can add additional containers using `podSpec.addContainer()`
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
- *Default:* RestartPolicy.ALWAYS
Restart policy for all containers within the pod.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
- *Default:* No service account.
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are
authenticated by the apiserver as a particular User Account (currently this
is usually admin, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the
apiserver. When they do, they are authenticated as a particular Service
Account (for example, default).
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
---
##### `volumes`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
- *Default:* No volumes.
List of volumes that can be mounted by containers belonging to the pod.
You can also add volumes later using `podSpec.addVolume()`
> https://kubernetes.io/docs/concepts/storage/volumes
---
##### `podMetadata`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
The pod metadata.
---
##### `activeDeadline`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.activeDeadline"></a>
```typescript
public readonly activeDeadline: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* If unset, then there is no deadline.
Specifies the duration the job may be active before the system tries to terminate it.
---
##### `backoffLimit`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.backoffLimit"></a>
```typescript
public readonly backoffLimit: number;
```
- *Type:* `number`
- *Default:* If not set, system defaults to 6.
Specifies the number of retries before marking this job failed.
---
##### `ttlAfterFinished`<sup>Optional</sup> <a name="cdk8s-plus-22.JobProps.property.ttlAfterFinished"></a>
```typescript
public readonly ttlAfterFinished: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* If this field is unset, the Job won't be automatically deleted.
Limits the lifetime of a Job that has finished execution (either Complete or Failed).
If this field is set, after the Job finishes, it is eligible to
be automatically deleted. When the Job is being deleted, its lifecycle
guarantees (e.g. finalizers) will be honored. If this field is set to zero,
the Job becomes eligible to be deleted immediately after it finishes. This
field is alpha-level and is only honored by servers that enable the
`TTLAfterFinished` feature.
---
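A sketch of a one-off `Job` using the scheduling-related options above (the migration image is illustrative):

```typescript
import { App, Chart, Duration } from 'cdk8s';
import * as kplus from 'cdk8s-plus-22';

const app = new App();
const chart = new Chart(app, 'demo');

// One-off job: retry up to 2 times, allow 10 minutes of activity,
// and garbage-collect the Job object a day after it finishes.
new kplus.Job(chart, 'migrate', {
  containers: [{ image: 'my-migration-image' }],
  backoffLimit: 2,
  activeDeadline: Duration.minutes(10),
  ttlAfterFinished: Duration.hours(24),
});
```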
### MountOptions <a name="cdk8s-plus-22.MountOptions"></a>
Options for mounts.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { MountOptions } from 'cdk8s-plus-22'
const mountOptions: MountOptions = { ... }
```
##### `propagation`<sup>Optional</sup> <a name="cdk8s-plus-22.MountOptions.property.propagation"></a>
```typescript
public readonly propagation: MountPropagation;
```
- *Type:* [`cdk8s-plus-22.MountPropagation`](#cdk8s-plus-22.MountPropagation)
- *Default:* MountPropagation.NONE
Determines how mounts are propagated from the host to container and the other way around.
When not set, MountPropagationNone is used.
Mount propagation allows for sharing volumes mounted by a Container to
other Containers in the same Pod, or even to other Pods on the same node.
This field is beta in 1.10.
---
##### `readOnly`<sup>Optional</sup> <a name="cdk8s-plus-22.MountOptions.property.readOnly"></a>
```typescript
public readonly readOnly: boolean;
```
- *Type:* `boolean`
- *Default:* false
Mounted read-only if true, read-write otherwise (false or unspecified).
Defaults to false.
---
##### `subPath`<sup>Optional</sup> <a name="cdk8s-plus-22.MountOptions.property.subPath"></a>
```typescript
public readonly subPath: string;
```
- *Type:* `string`
- *Default:* "" the volume's root
Path within the volume from which the container's volume should be mounted.
---
##### `subPathExpr`<sup>Optional</sup> <a name="cdk8s-plus-22.MountOptions.property.subPathExpr"></a>
```typescript
public readonly subPathExpr: string;
```
- *Type:* `string`
- *Default:* "" volume's root.
Expanded path within the volume from which the container's volume should be mounted.
Behaves similarly to `subPath` but environment variable references
$(VAR_NAME) are expanded using the container's environment. Defaults to ""
(volume's root). `subPathExpr` and `subPath` are mutually exclusive. This
field is beta in 1.15.
---
### PathMapping <a name="cdk8s-plus-22.PathMapping"></a>
Maps a string key to a path within a volume.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { PathMapping } from 'cdk8s-plus-22'
const pathMapping: PathMapping = { ... }
```
##### `path`<sup>Required</sup> <a name="cdk8s-plus-22.PathMapping.property.path"></a>
```typescript
public readonly path: string;
```
- *Type:* `string`
The relative path of the file to map the key to.
May not be an absolute
path. May not contain the path element '..'. May not start with the string
'..'.
---
##### `mode`<sup>Optional</sup> <a name="cdk8s-plus-22.PathMapping.property.mode"></a>
```typescript
public readonly mode: number;
```
- *Type:* `number`
Optional: mode bits to use on this file, must be a value between 0 and 0777.
If not specified, the volume defaultMode will be used. This might be
in conflict with other options that affect the file mode, like fsGroup, and
the result can be other mode bits set.
---
### PodProps <a name="cdk8s-plus-22.PodProps"></a>
Properties for initialization of `Pod`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { PodProps } from 'cdk8s-plus-22'
const podProps: PodProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.PodProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `containers`<sup>Optional</sup> <a name="cdk8s-plus-22.PodProps.property.containers"></a>
```typescript
public readonly containers: ContainerProps[];
```
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)[]
- *Default:* No containers. Note that a pod spec must include at least one container.
List of containers belonging to the pod.
Containers cannot currently be
added or removed. There must be at least one container in a Pod.
You can add additional containers using `podSpec.addContainer()`
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.PodProps.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
- *Default:* RestartPolicy.ALWAYS
Restart policy for all containers within the pod.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.PodProps.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
- *Default:* No service account.
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are
authenticated by the apiserver as a particular User Account (currently this
is usually admin, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the
apiserver. When they do, they are authenticated as a particular Service
Account (for example, default).
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
---
##### `volumes`<sup>Optional</sup> <a name="cdk8s-plus-22.PodProps.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
- *Default:* No volumes.
List of volumes that can be mounted by containers belonging to the pod.
You can also add volumes later using `podSpec.addVolume()`
> https://kubernetes.io/docs/concepts/storage/volumes
---
### PodSpecProps <a name="cdk8s-plus-22.PodSpecProps"></a>
Properties of a `PodSpec`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { PodSpecProps } from 'cdk8s-plus-22'
const podSpecProps: PodSpecProps = { ... }
```
##### `containers`<sup>Optional</sup> <a name="cdk8s-plus-22.PodSpecProps.property.containers"></a>
```typescript
public readonly containers: ContainerProps[];
```
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)[]
- *Default:* No containers. Note that a pod spec must include at least one container.
List of containers belonging to the pod.
Containers cannot currently be
added or removed. There must be at least one container in a Pod.
You can add additional containers using `podSpec.addContainer()`
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.PodSpecProps.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
- *Default:* RestartPolicy.ALWAYS
Restart policy for all containers within the pod.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.PodSpecProps.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
- *Default:* No service account.
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are
authenticated by the apiserver as a particular User Account (currently this
is usually admin, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the
apiserver. When they do, they are authenticated as a particular Service
Account (for example, default).
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
---
##### `volumes`<sup>Optional</sup> <a name="cdk8s-plus-22.PodSpecProps.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
- *Default:* No volumes.
List of volumes that can be mounted by containers belonging to the pod.
You can also add volumes later using `podSpec.addVolume()`
> https://kubernetes.io/docs/concepts/storage/volumes
---
### PodTemplateProps <a name="cdk8s-plus-22.PodTemplateProps"></a>
Properties of a `PodTemplate`.
Adds metadata information on top of the spec.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { PodTemplateProps } from 'cdk8s-plus-22'
const podTemplateProps: PodTemplateProps = { ... }
```
##### `containers`<sup>Optional</sup> <a name="cdk8s-plus-22.PodTemplateProps.property.containers"></a>
```typescript
public readonly containers: ContainerProps[];
```
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)[]
- *Default:* No containers. Note that a pod spec must include at least one container.
List of containers belonging to the pod.
Containers cannot currently be
added or removed. There must be at least one container in a Pod.
You can add additional containers using `podSpec.addContainer()`
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.PodTemplateProps.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
- *Default:* RestartPolicy.ALWAYS
Restart policy for all containers within the pod.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.PodTemplateProps.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
- *Default:* No service account.
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are
authenticated by the apiserver as a particular User Account (currently this
is usually admin, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the
apiserver. When they do, they are authenticated as a particular Service
Account (for example, default).
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
---
##### `volumes`<sup>Optional</sup> <a name="cdk8s-plus-22.PodTemplateProps.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
- *Default:* No volumes.
List of volumes that can be mounted by containers belonging to the pod.
You can also add volumes later using `podSpec.addVolume()`
> https://kubernetes.io/docs/concepts/storage/volumes
---
##### `podMetadata`<sup>Optional</sup> <a name="cdk8s-plus-22.PodTemplateProps.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
The pod metadata.
---
### ProbeOptions <a name="cdk8s-plus-22.ProbeOptions"></a>
Probe options.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { ProbeOptions } from 'cdk8s-plus-22'
const probeOptions: ProbeOptions = { ... }
```
##### `failureThreshold`<sup>Optional</sup> <a name="cdk8s-plus-22.ProbeOptions.property.failureThreshold"></a>
```typescript
public readonly failureThreshold: number;
```
- *Type:* `number`
- *Default:* 3
Minimum consecutive failures for the probe to be considered failed after having succeeded.
Defaults to 3. Minimum value is 1.
---
##### `initialDelaySeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.ProbeOptions.property.initialDelaySeconds"></a>
```typescript
public readonly initialDelaySeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* immediate
Number of seconds after the container has started before liveness probes are initiated.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
---
##### `periodSeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.ProbeOptions.property.periodSeconds"></a>
```typescript
public readonly periodSeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* Duration.seconds(10) Minimum value is 1.
How often (in seconds) to perform the probe.
Defaults to 10 seconds. Minimum value is 1.
---
##### `successThreshold`<sup>Optional</sup> <a name="cdk8s-plus-22.ProbeOptions.property.successThreshold"></a>
```typescript
public readonly successThreshold: number;
```
- *Type:* `number`
- *Default:* 1 Must be 1 for liveness and startup. Minimum value is 1.
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1.
Must be 1 for liveness and startup. Minimum value is 1.
---
##### `timeoutSeconds`<sup>Optional</sup> <a name="cdk8s-plus-22.ProbeOptions.property.timeoutSeconds"></a>
```typescript
public readonly timeoutSeconds: Duration;
```
- *Type:* [`cdk8s.Duration`](#cdk8s.Duration)
- *Default:* Duration.seconds(1)
Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
---
### ResourceProps <a name="cdk8s-plus-22.ResourceProps"></a>
Initialization properties for resources.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { ResourceProps } from 'cdk8s-plus-22'
const resourceProps: ResourceProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.ResourceProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
### SecretProps <a name="cdk8s-plus-22.SecretProps"></a>
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { SecretProps } from 'cdk8s-plus-22'
const secretProps: SecretProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.SecretProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `stringData`<sup>Optional</sup> <a name="cdk8s-plus-22.SecretProps.property.stringData"></a>
```typescript
public readonly stringData: {[ key: string ]: string};
```
- *Type:* {[ key: string ]: `string`}
stringData allows specifying non-binary secret data in string form.
It is
provided as a write-only convenience method. All keys and values are merged
into the data field on write, overwriting any existing values. It is never
output when reading from the API.
---
##### `type`<sup>Optional</sup> <a name="cdk8s-plus-22.SecretProps.property.type"></a>
```typescript
public readonly type: string;
```
- *Type:* `string`
- *Default:* undefined - Don't set a type.
Optional type associated with the secret.
Used to facilitate programmatic
handling of secret data by various controllers.
---
### SecretValue <a name="cdk8s-plus-22.SecretValue"></a>
Represents a specific value in JSON secret.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { SecretValue } from 'cdk8s-plus-22'
const secretValue: SecretValue = { ... }
```
##### `key`<sup>Required</sup> <a name="cdk8s-plus-22.SecretValue.property.key"></a>
```typescript
public readonly key: string;
```
- *Type:* `string`
The JSON key.
---
##### `secret`<sup>Required</sup> <a name="cdk8s-plus-22.SecretValue.property.secret"></a>
```typescript
public readonly secret: ISecret;
```
- *Type:* [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)
The secret.
---
### SecretVolumeOptions <a name="cdk8s-plus-22.SecretVolumeOptions"></a>
Options for the Secret-based volume.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { SecretVolumeOptions } from 'cdk8s-plus-22'
const secretVolumeOptions: SecretVolumeOptions = { ... }
```
##### `defaultMode`<sup>Optional</sup> <a name="cdk8s-plus-22.SecretVolumeOptions.property.defaultMode"></a>
```typescript
public readonly defaultMode: number;
```
- *Type:* `number`
- *Default:* 0644. Directories within the path are not affected by this
setting. This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
Mode bits to use on created files by default.
Must be a value between 0 and
0777. Defaults to 0644. Directories within the path are not affected by
this setting. This might be in conflict with other options that affect the
file mode, like fsGroup, and the result can be other mode bits set.
---
##### `items`<sup>Optional</sup> <a name="cdk8s-plus-22.SecretVolumeOptions.property.items"></a>
```typescript
public readonly items: {[ key: string ]: PathMapping};
```
- *Type:* {[ key: string ]: [`cdk8s-plus-22.PathMapping`](#cdk8s-plus-22.PathMapping)}
- *Default:* no mapping
If unspecified, each key-value pair in the Data field of the referenced secret will be projected into the volume as a file whose name is the key and content is the value.
If specified, the listed keys will be projected
into the specified paths, and unlisted keys will not be present. If a key
is specified which is not present in the secret, the volume setup will
error unless it is marked optional. Paths must be relative and may not
contain the '..' path or start with '..'.
---
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.SecretVolumeOptions.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
- *Default:* auto-generated
The volume name.
---
##### `optional`<sup>Optional</sup> <a name="cdk8s-plus-22.SecretVolumeOptions.property.optional"></a>
```typescript
public readonly optional: boolean;
```
- *Type:* `boolean`
- *Default:* undocumented
Specify whether the secret or its keys must be defined.
---
### ServiceAccountProps <a name="cdk8s-plus-22.ServiceAccountProps"></a>
Properties for initialization of `ServiceAccount`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { ServiceAccountProps } from 'cdk8s-plus-22'
const serviceAccountProps: ServiceAccountProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceAccountProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `secrets`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceAccountProps.property.secrets"></a>
```typescript
public readonly secrets: ISecret[];
```
- *Type:* [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)[]
List of secrets allowed to be used by pods running using this ServiceAccount.
> https://kubernetes.io/docs/concepts/configuration/secret
---
### ServiceIngressBackendOptions <a name="cdk8s-plus-22.ServiceIngressBackendOptions"></a>
Options for setting up backends for ingress rules.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { ServiceIngressBackendOptions } from 'cdk8s-plus-22'
const serviceIngressBackendOptions: ServiceIngressBackendOptions = { ... }
```
##### `port`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceIngressBackendOptions.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
- *Default:* if the service exposes a single port, this port will be used.
The port to use to access the service.
This option will fail if the service does not expose any ports.
- If the service exposes multiple ports, this option must be specified.
- If the service exposes a single port, this option is optional and if
specified, it must be the same port exposed by the service.
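For a service exposing multiple ports, the backend port must be named explicitly. A minimal sketch (the service variable and port value are assumptions):

```typescript
import { IngressBackend, Service } from 'cdk8s-plus-22';

declare const service: Service; // exposes ports 8080 and 9090, say

// Pick the port the ingress rule should route to.
const backend = IngressBackend.fromService(service, { port: 8080 });
```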
---
### ServicePort <a name="cdk8s-plus-22.ServicePort"></a>
Definition of a service port.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { ServicePort } from 'cdk8s-plus-22'
const servicePort: ServicePort = { ... }
```
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePort.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The name of this port within the service.
This must be a DNS_LABEL. All
ports within a ServiceSpec must have unique names. This maps to the 'Name'
field in EndpointPort objects. Optional if only one ServicePort is defined
on this service.
---
##### `nodePort`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePort.property.nodePort"></a>
```typescript
public readonly nodePort: number;
```
- *Type:* `number`
- *Default:* auto-allocate a port if the ServiceType of this Service requires one.
The port on each node on which this service is exposed when type=NodePort or LoadBalancer.
Usually assigned by the system. If specified, it will be
allocated to the service if unused or else creation of the service will
fail. Default is to auto-allocate a port if the ServiceType of this Service
requires one.
> https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
---
##### `protocol`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePort.property.protocol"></a>
```typescript
public readonly protocol: Protocol;
```
- *Type:* [`cdk8s-plus-22.Protocol`](#cdk8s-plus-22.Protocol)
- *Default:* Protocol.TCP
The IP protocol for this port.
Supports "TCP", "UDP", and "SCTP". Default is TCP.
---
##### `targetPort`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePort.property.targetPort"></a>
```typescript
public readonly targetPort: number;
```
- *Type:* `number`
- *Default:* The value of `port` will be used.
The port number the service will redirect to.
---
##### `port`<sup>Required</sup> <a name="cdk8s-plus-22.ServicePort.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
The port number the service will bind to.
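Putting the fields above together, a `ServicePort` might look like the following sketch (all values are illustrative):

```typescript
import { ServicePort, Protocol } from 'cdk8s-plus-22';

const httpPort: ServicePort = {
  name: 'http',         // must be unique within the service
  port: 80,             // port the service binds to
  targetPort: 8080,     // port the service redirects to
  protocol: Protocol.TCP,
};
```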
---
### ServicePortOptions <a name="cdk8s-plus-22.ServicePortOptions"></a>
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { ServicePortOptions } from 'cdk8s-plus-22'
const servicePortOptions: ServicePortOptions = { ... }
```
##### `name`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePortOptions.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The name of this port within the service.
This must be a DNS_LABEL. All
ports within a ServiceSpec must have unique names. This maps to the 'Name'
field in EndpointPort objects. Optional if only one ServicePort is defined
on this service.
---
##### `nodePort`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePortOptions.property.nodePort"></a>
```typescript
public readonly nodePort: number;
```
- *Type:* `number`
- *Default:* auto-allocate a port if the ServiceType of this Service requires one.
The port on each node on which this service is exposed when type=NodePort or LoadBalancer.
Usually assigned by the system. If specified, it will be
allocated to the service if unused or else creation of the service will
fail. Default is to auto-allocate a port if the ServiceType of this Service
requires one.
> https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
---
##### `protocol`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePortOptions.property.protocol"></a>
```typescript
public readonly protocol: Protocol;
```
- *Type:* [`cdk8s-plus-22.Protocol`](#cdk8s-plus-22.Protocol)
- *Default:* Protocol.TCP
The IP protocol for this port.
Supports "TCP", "UDP", and "SCTP". Default is TCP.
---
##### `targetPort`<sup>Optional</sup> <a name="cdk8s-plus-22.ServicePortOptions.property.targetPort"></a>
```typescript
public readonly targetPort: number;
```
- *Type:* `number`
- *Default:* The value of `port` will be used.
The port number the service will redirect to.
---
### ServiceProps <a name="cdk8s-plus-22.ServiceProps"></a>
Properties for initialization of `Service`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { ServiceProps } from 'cdk8s-plus-22'
const serviceProps: ServiceProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `clusterIP`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceProps.property.clusterIP"></a>
```typescript
public readonly clusterIP: string;
```
- *Type:* `string`
- *Default:* Automatically assigned.
The IP address of the service, usually assigned randomly by the master.
If an address is specified manually and is not in use by others, it
will be allocated to the service; otherwise, creation of the service will
fail. This field can not be changed through updates. Valid values are
"None", empty string (""), or a valid IP address. "None" can be specified
for headless services when proxying is not required. Only applies to types
ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName.
> https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
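For example, a headless service can be requested by passing `"None"` as the cluster IP. A sketch, assuming the usual cdk8s construct signature (scope and id are placeholders):

```typescript
import { Chart } from 'cdk8s';
import { Service } from 'cdk8s-plus-22';

declare const chart: Chart;

// Headless service: no virtual IP is allocated; DNS resolves to pod IPs.
new Service(chart, 'headless', {
  clusterIP: 'None',
});
```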
---
##### `externalIPs`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceProps.property.externalIPs"></a>
```typescript
public readonly externalIPs: string[];
```
- *Type:* `string`[]
- *Default:* No external IPs.
A list of IP addresses for which nodes in the cluster will also accept traffic for this service.
These IPs are not managed by Kubernetes. The user
is responsible for ensuring that traffic arrives at a node with this IP. A
common example is external load-balancers that are not part of the
Kubernetes system.
---
##### `externalName`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceProps.property.externalName"></a>
```typescript
public readonly externalName: string;
```
- *Type:* `string`
- *Default:* No external name.
The externalName to be used when ServiceType.EXTERNAL_NAME is set.
---
##### `loadBalancerSourceRanges`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceProps.property.loadBalancerSourceRanges"></a>
```typescript
public readonly loadBalancerSourceRanges: string[];
```
- *Type:* `string`[]
A list of CIDR IP addresses; if specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer to the specified client IPs.
More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
---
##### `ports`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceProps.property.ports"></a>
```typescript
public readonly ports: ServicePort[];
```
- *Type:* [`cdk8s-plus-22.ServicePort`](#cdk8s-plus-22.ServicePort)[]
The ports exposed by this service.
More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
---
##### `type`<sup>Optional</sup> <a name="cdk8s-plus-22.ServiceProps.property.type"></a>
```typescript
public readonly type: ServiceType;
```
- *Type:* [`cdk8s-plus-22.ServiceType`](#cdk8s-plus-22.ServiceType)
- *Default:* ServiceType.ClusterIP
Determines how the Service is exposed.
More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
---
### StatefulSetProps <a name="cdk8s-plus-22.StatefulSetProps"></a>
Properties for initialization of `StatefulSet`.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { StatefulSetProps } from 'cdk8s-plus-22'
const statefulSetProps: StatefulSetProps = { ... }
```
##### `metadata`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.metadata"></a>
```typescript
public readonly metadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
Metadata that all persisted resources must have, which includes all objects users must create.
---
##### `containers`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.containers"></a>
```typescript
public readonly containers: ContainerProps[];
```
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)[]
- *Default:* No containers. Note that a pod spec must include at least one container.
List of containers belonging to the pod.
Containers cannot currently be
added or removed. There must be at least one container in a Pod.
You can add additional containers using `podSpec.addContainer()`.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
- *Default:* RestartPolicy.ALWAYS
Restart policy for all containers within the pod.
> https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
- *Default:* No service account.
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are
authenticated by the apiserver as a particular User Account (currently this
is usually admin, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the
apiserver. When they do, they are authenticated as a particular Service
Account (for example, default).
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
---
##### `volumes`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
- *Default:* No volumes.
List of volumes that can be mounted by containers belonging to the pod.
You can also add volumes later using `podSpec.addVolume()`
> https://kubernetes.io/docs/concepts/storage/volumes
---
##### `podMetadata`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadata;
```
- *Type:* [`cdk8s.ApiObjectMetadata`](#cdk8s.ApiObjectMetadata)
The pod metadata.
---
##### `service`<sup>Required</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.service"></a>
```typescript
public readonly service: Service;
```
- *Type:* [`cdk8s-plus-22.Service`](#cdk8s-plus-22.Service)
Service to associate with the statefulset.
---
##### `defaultSelector`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.defaultSelector"></a>
```typescript
public readonly defaultSelector: boolean;
```
- *Type:* `boolean`
- *Default:* true
Automatically allocates a pod selector for this statefulset.
If this is set to `false` you must define your selector through
`statefulset.podMetadata.addLabel()` and `statefulset.selectByLabel()`.
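A sketch of the manual-selector flow described above (label key/value and construct ids are assumptions):

```typescript
import { Chart } from 'cdk8s';
import { StatefulSet, Service } from 'cdk8s-plus-22';

declare const chart: Chart;
declare const service: Service;

// Opt out of the auto-allocated selector...
const statefulset = new StatefulSet(chart, 'web', {
  service,
  defaultSelector: false,
});

// ...then define the selector explicitly.
statefulset.podMetadata.addLabel('app', 'web');
statefulset.selectByLabel('app', 'web');
```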
---
##### `podManagementPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.podManagementPolicy"></a>
```typescript
public readonly podManagementPolicy: PodManagementPolicy;
```
- *Type:* [`cdk8s-plus-22.PodManagementPolicy`](#cdk8s-plus-22.PodManagementPolicy)
- *Default:* PodManagementPolicy.ORDERED_READY
Pod management policy to use for this statefulset.
---
##### `replicas`<sup>Optional</sup> <a name="cdk8s-plus-22.StatefulSetProps.property.replicas"></a>
```typescript
public readonly replicas: number;
```
- *Type:* `number`
- *Default:* 1
Number of desired pods.
---
### VolumeMount <a name="cdk8s-plus-22.VolumeMount"></a>
Mount a volume from the pod to the container.
#### Initializer <a name="[object Object].Initializer"></a>
```typescript
import { VolumeMount } from 'cdk8s-plus-22'
const volumeMount: VolumeMount = { ... }
```
##### `propagation`<sup>Optional</sup> <a name="cdk8s-plus-22.VolumeMount.property.propagation"></a>
```typescript
public readonly propagation: MountPropagation;
```
- *Type:* [`cdk8s-plus-22.MountPropagation`](#cdk8s-plus-22.MountPropagation)
- *Default:* MountPropagation.NONE
Determines how mounts are propagated from the host to container and the other way around.
When not set, MountPropagationNone is used.
Mount propagation allows for sharing volumes mounted by a Container to
other Containers in the same Pod, or even to other Pods on the same node.
This field is beta in 1.10.
---
##### `readOnly`<sup>Optional</sup> <a name="cdk8s-plus-22.VolumeMount.property.readOnly"></a>
```typescript
public readonly readOnly: boolean;
```
- *Type:* `boolean`
- *Default:* false
Mounted read-only if true, read-write otherwise (false or unspecified).
Defaults to false.
---
##### `subPath`<sup>Optional</sup> <a name="cdk8s-plus-22.VolumeMount.property.subPath"></a>
```typescript
public readonly subPath: string;
```
- *Type:* `string`
- *Default:* "" the volume's root
Path within the volume from which the container's volume should be mounted.).
---
##### `subPathExpr`<sup>Optional</sup> <a name="cdk8s-plus-22.VolumeMount.property.subPathExpr"></a>
```typescript
public readonly subPathExpr: string;
```
- *Type:* `string`
- *Default:* "" volume's root.
Expanded path within the volume from which the container's volume should be mounted.
Behaves similarly to SubPath but environment variable references
$(VAR_NAME) are expanded using the container's environment. Defaults to ""
(volume's root). SubPathExpr and SubPath are mutually exclusive. This field
is beta in 1.15.
`subPathExpr` and `subPath` are mutually exclusive. This field is beta in
1.15.
---
##### `path`<sup>Required</sup> <a name="cdk8s-plus-22.VolumeMount.property.path"></a>
```typescript
public readonly path: string;
```
- *Type:* `string`
Path within the container at which the volume should be mounted.
Must not
contain ':'.
---
##### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.VolumeMount.property.volume"></a>
```typescript
public readonly volume: Volume;
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
The volume to mount.
---
## Classes <a name="Classes"></a>
### Container <a name="cdk8s-plus-22.Container"></a>
A single application container that you want to run within a pod.
#### Initializers <a name="cdk8s-plus-22.Container.Initializer"></a>
```typescript
import { Container } from 'cdk8s-plus-22'
new Container(props: ContainerProps)
```
##### `props`<sup>Required</sup> <a name="cdk8s-plus-22.Container.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)
---
#### Methods <a name="Methods"></a>
##### `addEnv` <a name="cdk8s-plus-22.Container.addEnv"></a>
```typescript
public addEnv(name: string, value: EnvValue)
```
###### `name`<sup>Required</sup> <a name="cdk8s-plus-22.Container.parameter.name"></a>
- *Type:* `string`
The variable name.
---
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.Container.parameter.value"></a>
- *Type:* [`cdk8s-plus-22.EnvValue`](#cdk8s-plus-22.EnvValue)
The variable value.
---
##### `mount` <a name="cdk8s-plus-22.Container.mount"></a>
```typescript
public mount(path: string, volume: Volume, options?: MountOptions)
```
###### `path`<sup>Required</sup> <a name="cdk8s-plus-22.Container.parameter.path"></a>
- *Type:* `string`
The desired path in the container.
---
###### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.Container.parameter.volume"></a>
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
The volume to mount.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Container.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.MountOptions`](#cdk8s-plus-22.MountOptions)
---
#### Properties <a name="Properties"></a>
##### `env`<sup>Required</sup> <a name="cdk8s-plus-22.Container.property.env"></a>
```typescript
public readonly env: {[ key: string ]: EnvValue};
```
- *Type:* {[ key: string ]: [`cdk8s-plus-22.EnvValue`](#cdk8s-plus-22.EnvValue)}
The environment variables for this container.
Returns a copy. To add environment variables use `addEnv()`.
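Since `env` returns a copy, variables are added through `addEnv()`. A sketch (the container variable and values are assumptions):

```typescript
import { Container, EnvValue } from 'cdk8s-plus-22';

declare const container: Container;

// Mutating the object returned by `container.env` has no effect;
// use addEnv() instead.
container.addEnv('LOG_LEVEL', EnvValue.fromValue('debug'));
```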
---
##### `image`<sup>Required</sup> <a name="cdk8s-plus-22.Container.property.image"></a>
```typescript
public readonly image: string;
```
- *Type:* `string`
The container image.
---
##### `imagePullPolicy`<sup>Required</sup> <a name="cdk8s-plus-22.Container.property.imagePullPolicy"></a>
```typescript
public readonly imagePullPolicy: ImagePullPolicy;
```
- *Type:* [`cdk8s-plus-22.ImagePullPolicy`](#cdk8s-plus-22.ImagePullPolicy)
Image pull policy for this container.
---
##### `mounts`<sup>Required</sup> <a name="cdk8s-plus-22.Container.property.mounts"></a>
```typescript
public readonly mounts: VolumeMount[];
```
- *Type:* [`cdk8s-plus-22.VolumeMount`](#cdk8s-plus-22.VolumeMount)[]
Volume mounts configured for this container.
---
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.Container.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The name of the container.
---
##### `args`<sup>Optional</sup> <a name="cdk8s-plus-22.Container.property.args"></a>
```typescript
public readonly args: string[];
```
- *Type:* `string`[]
Arguments to the entrypoint.
---
##### `command`<sup>Optional</sup> <a name="cdk8s-plus-22.Container.property.command"></a>
```typescript
public readonly command: string[];
```
- *Type:* `string`[]
Entrypoint array (the command to execute when the container starts).
---
##### `port`<sup>Optional</sup> <a name="cdk8s-plus-22.Container.property.port"></a>
```typescript
public readonly port: number;
```
- *Type:* `number`
The port this container exposes.
---
##### `workingDir`<sup>Optional</sup> <a name="cdk8s-plus-22.Container.property.workingDir"></a>
```typescript
public readonly workingDir: string;
```
- *Type:* `string`
The working directory inside the container.
---
### EnvValue <a name="cdk8s-plus-22.EnvValue"></a>
Utility class for creating and reading env values from various sources.
#### Static Functions <a name="Static Functions"></a>
##### `fromConfigMap` <a name="cdk8s-plus-22.EnvValue.fromConfigMap"></a>
```typescript
import { EnvValue } from 'cdk8s-plus-22'
EnvValue.fromConfigMap(configMap: IConfigMap, key: string, options?: EnvValueFromConfigMapOptions)
```
###### `configMap`<sup>Required</sup> <a name="cdk8s-plus-22.EnvValue.parameter.configMap"></a>
- *Type:* [`cdk8s-plus-22.IConfigMap`](#cdk8s-plus-22.IConfigMap)
The config map.
---
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.EnvValue.parameter.key"></a>
- *Type:* `string`
The key to extract the value from.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValue.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.EnvValueFromConfigMapOptions`](#cdk8s-plus-22.EnvValueFromConfigMapOptions)
Additional options.
---
##### `fromProcess` <a name="cdk8s-plus-22.EnvValue.fromProcess"></a>
```typescript
import { EnvValue } from 'cdk8s-plus-22'
EnvValue.fromProcess(key: string, options?: EnvValueFromProcessOptions)
```
###### `key`<sup>Required</sup> <a name="cdk8s-plus-22.EnvValue.parameter.key"></a>
- *Type:* `string`
The key to read.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValue.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.EnvValueFromProcessOptions`](#cdk8s-plus-22.EnvValueFromProcessOptions)
Additional options.
---
##### `fromSecretValue` <a name="cdk8s-plus-22.EnvValue.fromSecretValue"></a>
```typescript
import { EnvValue } from 'cdk8s-plus-22'
EnvValue.fromSecretValue(secretValue: SecretValue, options?: EnvValueFromSecretOptions)
```
###### `secretValue`<sup>Required</sup> <a name="cdk8s-plus-22.EnvValue.parameter.secretValue"></a>
- *Type:* [`cdk8s-plus-22.SecretValue`](#cdk8s-plus-22.SecretValue)
The secret value (secret + key).
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValue.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.EnvValueFromSecretOptions`](#cdk8s-plus-22.EnvValueFromSecretOptions)
Additional options.
---
##### `fromValue` <a name="cdk8s-plus-22.EnvValue.fromValue"></a>
```typescript
import { EnvValue } from 'cdk8s-plus-22'
EnvValue.fromValue(value: string)
```
###### `value`<sup>Required</sup> <a name="cdk8s-plus-22.EnvValue.parameter.value"></a>
- *Type:* `string`
The value.
---
#### Properties <a name="Properties"></a>
##### `value`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValue.property.value"></a>
```typescript
public readonly value: any;
```
- *Type:* `any`
---
##### `valueFrom`<sup>Optional</sup> <a name="cdk8s-plus-22.EnvValue.property.valueFrom"></a>
```typescript
public readonly valueFrom: any;
```
- *Type:* `any`
---
### IngressBackend <a name="cdk8s-plus-22.IngressBackend"></a>
The backend for an ingress path.
#### Static Functions <a name="Static Functions"></a>
##### `fromService` <a name="cdk8s-plus-22.IngressBackend.fromService"></a>
```typescript
import { IngressBackend } from 'cdk8s-plus-22'
IngressBackend.fromService(service: Service, options?: ServiceIngressBackendOptions)
```
###### `service`<sup>Required</sup> <a name="cdk8s-plus-22.IngressBackend.parameter.service"></a>
- *Type:* [`cdk8s-plus-22.Service`](#cdk8s-plus-22.Service)
The service object.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.IngressBackend.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.ServiceIngressBackendOptions`](#cdk8s-plus-22.ServiceIngressBackendOptions)
---
### PodSpec <a name="cdk8s-plus-22.PodSpec"></a>
- *Implements:* [`cdk8s-plus-22.IPodSpec`](#cdk8s-plus-22.IPodSpec)
Provides read/write capabilities on top of a `PodSpecProps`.
#### Initializers <a name="cdk8s-plus-22.PodSpec.Initializer"></a>
```typescript
import { PodSpec } from 'cdk8s-plus-22'
new PodSpec(props?: PodSpecProps)
```
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.PodSpec.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.PodSpecProps`](#cdk8s-plus-22.PodSpecProps)
---
#### Methods <a name="Methods"></a>
##### `addContainer` <a name="cdk8s-plus-22.PodSpec.addContainer"></a>
```typescript
public addContainer(container: ContainerProps)
```
###### `container`<sup>Required</sup> <a name="cdk8s-plus-22.PodSpec.parameter.container"></a>
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)
---
##### `addVolume` <a name="cdk8s-plus-22.PodSpec.addVolume"></a>
```typescript
public addVolume(volume: Volume)
```
###### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.PodSpec.parameter.volume"></a>
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
---
#### Properties <a name="Properties"></a>
##### `containers`<sup>Required</sup> <a name="cdk8s-plus-22.PodSpec.property.containers"></a>
```typescript
public readonly containers: Container[];
```
- *Type:* [`cdk8s-plus-22.Container`](#cdk8s-plus-22.Container)[]
The containers belonging to the pod.
Use `addContainer` to add containers.
---
##### `volumes`<sup>Required</sup> <a name="cdk8s-plus-22.PodSpec.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
The volumes associated with this pod.
Use `addVolume` to add volumes.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.PodSpec.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
Restart policy for all containers within the pod.
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.PodSpec.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
The service account used to run this pod.
---
### PodTemplate <a name="cdk8s-plus-22.PodTemplate"></a>
- *Implements:* [`cdk8s-plus-22.IPodTemplate`](#cdk8s-plus-22.IPodTemplate)
Provides read/write capabilities on top of a `PodTemplateProps`.
#### Initializers <a name="cdk8s-plus-22.PodTemplate.Initializer"></a>
```typescript
import { PodTemplate } from 'cdk8s-plus-22'
new PodTemplate(props?: PodTemplateProps)
```
##### `props`<sup>Optional</sup> <a name="cdk8s-plus-22.PodTemplate.parameter.props"></a>
- *Type:* [`cdk8s-plus-22.PodTemplateProps`](#cdk8s-plus-22.PodTemplateProps)
---
#### Properties <a name="Properties"></a>
##### `podMetadata`<sup>Required</sup> <a name="cdk8s-plus-22.PodTemplate.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadataDefinition;
```
- *Type:* [`cdk8s.ApiObjectMetadataDefinition`](#cdk8s.ApiObjectMetadataDefinition)
Provides read/write access to the underlying pod metadata of the resource.
---
### Probe <a name="cdk8s-plus-22.Probe"></a>
Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.
#### Initializers <a name="cdk8s-plus-22.Probe.Initializer"></a>
```typescript
import { Probe } from 'cdk8s-plus-22'
new Probe()
```
#### Static Functions <a name="Static Functions"></a>
##### `fromCommand` <a name="cdk8s-plus-22.Probe.fromCommand"></a>
```typescript
import { Probe } from 'cdk8s-plus-22'
Probe.fromCommand(command: string[], options?: CommandProbeOptions)
```
###### `command`<sup>Required</sup> <a name="cdk8s-plus-22.Probe.parameter.command"></a>
- *Type:* `string`[]
The command to execute.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Probe.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.CommandProbeOptions`](#cdk8s-plus-22.CommandProbeOptions)
Options.
---
##### `fromHttpGet` <a name="cdk8s-plus-22.Probe.fromHttpGet"></a>
```typescript
import { Probe } from 'cdk8s-plus-22'
Probe.fromHttpGet(path: string, options?: HttpGetProbeOptions)
```
###### `path`<sup>Required</sup> <a name="cdk8s-plus-22.Probe.parameter.path"></a>
- *Type:* `string`
The URL path to hit.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Probe.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.HttpGetProbeOptions`](#cdk8s-plus-22.HttpGetProbeOptions)
Options.
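The two factories above can be sketched as follows (paths and commands are illustrative):

```typescript
import { Probe } from 'cdk8s-plus-22';

// HTTP probe: issues a GET against the given path.
const httpProbe = Probe.fromHttpGet('/health');

// Command probe: succeeds when the command exits with status 0.
const execProbe = Probe.fromCommand(['cat', '/tmp/ready']);
```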
---
### Volume <a name="cdk8s-plus-22.Volume"></a>
Volume represents a named volume in a pod that may be accessed by any container in the pod.
Docker also has a concept of volumes, though it is somewhat looser and less
managed. In Docker, a volume is simply a directory on disk or in another
Container. Lifetimes are not managed and until very recently there were only
local-disk-backed volumes. Docker now provides volume drivers, but the
functionality is very limited for now (e.g. as of Docker 1.7 only one volume
driver is allowed per Container and there is no way to pass parameters to
volumes).
A Kubernetes volume, on the other hand, has an explicit lifetime - the same
as the Pod that encloses it. Consequently, a volume outlives any Containers
that run within the Pod, and data is preserved across Container restarts. Of
course, when a Pod ceases to exist, the volume will cease to exist, too.
Perhaps more importantly than this, Kubernetes supports many types of
volumes, and a Pod can use any number of them simultaneously.
At its core, a volume is just a directory, possibly with some data in it,
which is accessible to the Containers in a Pod. How that directory comes to
be, the medium that backs it, and the contents of it are determined by the
particular volume type used.
To use a volume, a Pod specifies what volumes to provide for the Pod (the
.spec.volumes field) and where to mount those into Containers (the
.spec.containers[*].volumeMounts field).
A process in a container sees a filesystem view composed from their Docker
image and volumes. The Docker image is at the root of the filesystem
hierarchy, and any volumes are mounted at the specified paths within the
image. Volumes cannot mount onto other volumes.
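The volume/mount split described above can be sketched with this library's API (the configMap and container variables are assumptions):

```typescript
import { Volume, ConfigMap, Container } from 'cdk8s-plus-22';

declare const configMap: ConfigMap;
declare const container: Container;

// Declare the volume once at the pod level...
const volume = Volume.fromConfigMap(configMap);

// ...then mount it into a container at a path of your choosing.
container.mount('/etc/config', volume);
```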
#### Initializers <a name="cdk8s-plus-22.Volume.Initializer"></a>
```typescript
import { Volume } from 'cdk8s-plus-22'
new Volume(name: string, config: any)
```
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.Volume.parameter.name"></a>
- *Type:* `string`
---
##### `config`<sup>Required</sup> <a name="cdk8s-plus-22.Volume.parameter.config"></a>
- *Type:* `any`
---
#### Static Functions <a name="Static Functions"></a>
##### `fromConfigMap` <a name="cdk8s-plus-22.Volume.fromConfigMap"></a>
```typescript
import { Volume } from 'cdk8s-plus-22'
Volume.fromConfigMap(configMap: IConfigMap, options?: ConfigMapVolumeOptions)
```
###### `configMap`<sup>Required</sup> <a name="cdk8s-plus-22.Volume.parameter.configMap"></a>
- *Type:* [`cdk8s-plus-22.IConfigMap`](#cdk8s-plus-22.IConfigMap)
The config map to use to populate the volume.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Volume.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.ConfigMapVolumeOptions`](#cdk8s-plus-22.ConfigMapVolumeOptions)
Options.
---
##### `fromEmptyDir` <a name="cdk8s-plus-22.Volume.fromEmptyDir"></a>
```typescript
import { Volume } from 'cdk8s-plus-22'
Volume.fromEmptyDir(name: string, options?: EmptyDirVolumeOptions)
```
###### `name`<sup>Required</sup> <a name="cdk8s-plus-22.Volume.parameter.name"></a>
- *Type:* `string`
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Volume.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.EmptyDirVolumeOptions`](#cdk8s-plus-22.EmptyDirVolumeOptions)
Additional options.
---
##### `fromSecret` <a name="cdk8s-plus-22.Volume.fromSecret"></a>
```typescript
import { Volume } from 'cdk8s-plus-22'
Volume.fromSecret(secret: ISecret, options?: SecretVolumeOptions)
```
###### `secret`<sup>Required</sup> <a name="cdk8s-plus-22.Volume.parameter.secret"></a>
- *Type:* [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)
The secret to use to populate the volume.
---
###### `options`<sup>Optional</sup> <a name="cdk8s-plus-22.Volume.parameter.options"></a>
- *Type:* [`cdk8s-plus-22.SecretVolumeOptions`](#cdk8s-plus-22.SecretVolumeOptions)
Options.
---
#### Properties <a name="Properties"></a>
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.Volume.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
---
## Protocols <a name="Protocols"></a>
### IConfigMap <a name="cdk8s-plus-22.IConfigMap"></a>
- *Extends:* [`cdk8s-plus-22.IResource`](#cdk8s-plus-22.IResource)
- *Implemented By:* [`cdk8s-plus-22.ConfigMap`](#cdk8s-plus-22.ConfigMap), [`cdk8s-plus-22.IConfigMap`](#cdk8s-plus-22.IConfigMap)
Represents a config map.
#### Properties <a name="Properties"></a>
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.IConfigMap.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The Kubernetes name of this resource.
---
### IPodSpec <a name="cdk8s-plus-22.IPodSpec"></a>
- *Implemented By:* [`cdk8s-plus-22.Deployment`](#cdk8s-plus-22.Deployment), [`cdk8s-plus-22.Job`](#cdk8s-plus-22.Job), [`cdk8s-plus-22.Pod`](#cdk8s-plus-22.Pod), [`cdk8s-plus-22.PodSpec`](#cdk8s-plus-22.PodSpec), [`cdk8s-plus-22.PodTemplate`](#cdk8s-plus-22.PodTemplate), [`cdk8s-plus-22.StatefulSet`](#cdk8s-plus-22.StatefulSet), [`cdk8s-plus-22.IPodSpec`](#cdk8s-plus-22.IPodSpec), [`cdk8s-plus-22.IPodTemplate`](#cdk8s-plus-22.IPodTemplate)
Represents a resource that can be configured with a Kubernetes pod spec (e.g. `Deployment`, `Job`, `Pod`, ...).
Use the `PodSpec` class as an implementation helper.
#### Methods <a name="Methods"></a>
##### `addContainer` <a name="cdk8s-plus-22.IPodSpec.addContainer"></a>
```typescript
public addContainer(container: ContainerProps)
```
###### `container`<sup>Required</sup> <a name="cdk8s-plus-22.IPodSpec.parameter.container"></a>
- *Type:* [`cdk8s-plus-22.ContainerProps`](#cdk8s-plus-22.ContainerProps)
The container.
---
##### `addVolume` <a name="cdk8s-plus-22.IPodSpec.addVolume"></a>
```typescript
public addVolume(volume: Volume)
```
###### `volume`<sup>Required</sup> <a name="cdk8s-plus-22.IPodSpec.parameter.volume"></a>
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)
The volume.
---
#### Properties <a name="Properties"></a>
##### `containers`<sup>Required</sup> <a name="cdk8s-plus-22.IPodSpec.property.containers"></a>
```typescript
public readonly containers: Container[];
```
- *Type:* [`cdk8s-plus-22.Container`](#cdk8s-plus-22.Container)[]
The containers belonging to the pod.
Use `addContainer` to add containers.
---
##### `volumes`<sup>Required</sup> <a name="cdk8s-plus-22.IPodSpec.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
The volumes associated with this pod.
Use `addVolume` to add volumes.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.IPodSpec.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
Restart policy for all containers within the pod.
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.IPodSpec.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
The service account used to run this pod.
---
### IPodTemplate <a name="cdk8s-plus-22.IPodTemplate"></a>
- *Extends:* [`cdk8s-plus-22.IPodSpec`](#cdk8s-plus-22.IPodSpec)
- *Implemented By:* [`cdk8s-plus-22.Deployment`](#cdk8s-plus-22.Deployment), [`cdk8s-plus-22.Job`](#cdk8s-plus-22.Job), [`cdk8s-plus-22.PodTemplate`](#cdk8s-plus-22.PodTemplate), [`cdk8s-plus-22.StatefulSet`](#cdk8s-plus-22.StatefulSet), [`cdk8s-plus-22.IPodTemplate`](#cdk8s-plus-22.IPodTemplate)
Represents a resource that can be configured with a Kubernetes pod template (e.g. `Deployment`, `Job`, ...).
Use the `PodTemplate` class as an implementation helper.
#### Properties <a name="Properties"></a>
##### `containers`<sup>Required</sup> <a name="cdk8s-plus-22.IPodTemplate.property.containers"></a>
```typescript
public readonly containers: Container[];
```
- *Type:* [`cdk8s-plus-22.Container`](#cdk8s-plus-22.Container)[]
The containers belonging to the pod.
Use `addContainer` to add containers.
---
##### `volumes`<sup>Required</sup> <a name="cdk8s-plus-22.IPodTemplate.property.volumes"></a>
```typescript
public readonly volumes: Volume[];
```
- *Type:* [`cdk8s-plus-22.Volume`](#cdk8s-plus-22.Volume)[]
The volumes associated with this pod.
Use `addVolume` to add volumes.
---
##### `restartPolicy`<sup>Optional</sup> <a name="cdk8s-plus-22.IPodTemplate.property.restartPolicy"></a>
```typescript
public readonly restartPolicy: RestartPolicy;
```
- *Type:* [`cdk8s-plus-22.RestartPolicy`](#cdk8s-plus-22.RestartPolicy)
Restart policy for all containers within the pod.
---
##### `serviceAccount`<sup>Optional</sup> <a name="cdk8s-plus-22.IPodTemplate.property.serviceAccount"></a>
```typescript
public readonly serviceAccount: IServiceAccount;
```
- *Type:* [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
The service account used to run this pod.
---
##### `podMetadata`<sup>Required</sup> <a name="cdk8s-plus-22.IPodTemplate.property.podMetadata"></a>
```typescript
public readonly podMetadata: ApiObjectMetadataDefinition;
```
- *Type:* [`cdk8s.ApiObjectMetadataDefinition`](#cdk8s.ApiObjectMetadataDefinition)
Provides read/write access to the underlying pod metadata of the resource.
---
### IResource <a name="cdk8s-plus-22.IResource"></a>
- *Implemented By:* [`cdk8s-plus-22.ConfigMap`](#cdk8s-plus-22.ConfigMap), [`cdk8s-plus-22.Deployment`](#cdk8s-plus-22.Deployment), [`cdk8s-plus-22.Ingress`](#cdk8s-plus-22.Ingress), [`cdk8s-plus-22.Job`](#cdk8s-plus-22.Job), [`cdk8s-plus-22.Pod`](#cdk8s-plus-22.Pod), [`cdk8s-plus-22.Resource`](#cdk8s-plus-22.Resource), [`cdk8s-plus-22.Secret`](#cdk8s-plus-22.Secret), [`cdk8s-plus-22.Service`](#cdk8s-plus-22.Service), [`cdk8s-plus-22.ServiceAccount`](#cdk8s-plus-22.ServiceAccount), [`cdk8s-plus-22.StatefulSet`](#cdk8s-plus-22.StatefulSet), [`cdk8s-plus-22.IConfigMap`](#cdk8s-plus-22.IConfigMap), [`cdk8s-plus-22.IResource`](#cdk8s-plus-22.IResource), [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret), [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
Represents a resource.
#### Properties <a name="Properties"></a>
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.IResource.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The Kubernetes name of this resource.
---
### ISecret <a name="cdk8s-plus-22.ISecret"></a>
- *Extends:* [`cdk8s-plus-22.IResource`](#cdk8s-plus-22.IResource)
- *Implemented By:* [`cdk8s-plus-22.Secret`](#cdk8s-plus-22.Secret), [`cdk8s-plus-22.ISecret`](#cdk8s-plus-22.ISecret)
#### Properties <a name="Properties"></a>
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.ISecret.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The Kubernetes name of this resource.
---
### IServiceAccount <a name="cdk8s-plus-22.IServiceAccount"></a>
- *Extends:* [`cdk8s-plus-22.IResource`](#cdk8s-plus-22.IResource)
- *Implemented By:* [`cdk8s-plus-22.ServiceAccount`](#cdk8s-plus-22.ServiceAccount), [`cdk8s-plus-22.IServiceAccount`](#cdk8s-plus-22.IServiceAccount)
#### Properties <a name="Properties"></a>
##### `name`<sup>Required</sup> <a name="cdk8s-plus-22.IServiceAccount.property.name"></a>
```typescript
public readonly name: string;
```
- *Type:* `string`
The Kubernetes name of this resource.
---
## Enums <a name="Enums"></a>
### EmptyDirMedium <a name="EmptyDirMedium"></a>
The medium on which to store the volume.
#### `DEFAULT` <a name="cdk8s-plus-22.EmptyDirMedium.DEFAULT"></a>
The default volume of the backing node.
---
#### `MEMORY` <a name="cdk8s-plus-22.EmptyDirMedium.MEMORY"></a>
Mount a tmpfs (RAM-backed filesystem) for you instead.
While tmpfs is very
fast, be aware that unlike disks, tmpfs is cleared on node reboot and any
files you write will count against your Container's memory limit.
---
### HttpIngressPathType <a name="HttpIngressPathType"></a>
Specify how the path is matched against request paths.
> https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
#### `PREFIX` <a name="cdk8s-plus-22.HttpIngressPathType.PREFIX"></a>
Matches based on a URL path prefix split by '/'.
---
#### `EXACT` <a name="cdk8s-plus-22.HttpIngressPathType.EXACT"></a>
Matches the URL path exactly.
---
#### `IMPLEMENTATION_SPECIFIC` <a name="cdk8s-plus-22.HttpIngressPathType.IMPLEMENTATION_SPECIFIC"></a>
Matching is specified by the underlying IngressClass.
---
### ImagePullPolicy <a name="ImagePullPolicy"></a>
#### `ALWAYS` <a name="cdk8s-plus-22.ImagePullPolicy.ALWAYS"></a>
Every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest.
If the kubelet has a container image with that exact
digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads
(pulls) the image with the resolved digest, and uses that image to launch the container.
Default is Always if ImagePullPolicy is omitted and either the image tag is :latest or
the image tag is omitted.
---
#### `IF_NOT_PRESENT` <a name="cdk8s-plus-22.ImagePullPolicy.IF_NOT_PRESENT"></a>
The image is pulled only if it is not already present locally.
Default is IfNotPresent if ImagePullPolicy is omitted and the image tag is present but
not :latest
---
#### `NEVER` <a name="cdk8s-plus-22.ImagePullPolicy.NEVER"></a>
The image is assumed to exist locally.
No attempt is made to pull the image.
---
### MountPropagation <a name="MountPropagation"></a>
#### `NONE` <a name="cdk8s-plus-22.MountPropagation.NONE"></a>
This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host.
In similar
fashion, no mounts created by the Container will be visible on the host.
This is the default mode.
This mode is equal to `private` mount propagation as described in the Linux
kernel documentation.
---
#### `HOST_TO_CONTAINER` <a name="cdk8s-plus-22.MountPropagation.HOST_TO_CONTAINER"></a>
This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
In other words, if the host mounts anything inside the volume mount, the
Container will see it mounted there.
Similarly, if any Pod with Bidirectional mount propagation to the same
volume mounts anything there, the Container with HostToContainer mount
propagation will see it.
This mode is equal to `rslave` mount propagation as described in the Linux
kernel documentation.
---
#### `BIDIRECTIONAL` <a name="cdk8s-plus-22.MountPropagation.BIDIRECTIONAL"></a>
This volume mount behaves the same as the HostToContainer mount.
In addition,
all volume mounts created by the Container will be propagated back to the
host and to all Containers of all Pods that use the same volume.
A typical use case for this mode is a Pod with a FlexVolume or CSI driver
or a Pod that needs to mount something on the host using a hostPath volume.
This mode is equal to `rshared` mount propagation as described in the Linux
kernel documentation.
Caution: Bidirectional mount propagation can be dangerous. It can damage
the host operating system and therefore it is allowed only in privileged
Containers. Familiarity with Linux kernel behavior is strongly recommended.
In addition, any volume mounts created by Containers in Pods must be
destroyed (unmounted) by the Containers on termination.
---
### PodManagementPolicy <a name="PodManagementPolicy"></a>
Controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down.
The default policy is `OrderedReady`, where pods are created in increasing order
(pod-0, then pod-1, etc) and the controller will wait until each pod is ready before
continuing. When scaling down, the pods are removed in the opposite order.
The alternative policy is `Parallel` which will create pods in parallel to match the
desired scale without waiting, and on scale down will delete all pods at once.
#### `ORDERED_READY` <a name="cdk8s-plus-22.PodManagementPolicy.ORDERED_READY"></a>
---
#### `PARALLEL` <a name="cdk8s-plus-22.PodManagementPolicy.PARALLEL"></a>
---
### Protocol <a name="Protocol"></a>
#### `TCP` <a name="cdk8s-plus-22.Protocol.TCP"></a>
---
#### `UDP` <a name="cdk8s-plus-22.Protocol.UDP"></a>
---
#### `SCTP` <a name="cdk8s-plus-22.Protocol.SCTP"></a>
---
### RestartPolicy <a name="RestartPolicy"></a>
Restart policy for all containers within the pod.
#### `ALWAYS` <a name="cdk8s-plus-22.RestartPolicy.ALWAYS"></a>
Always restart the pod after it exits.
---
#### `ON_FAILURE` <a name="cdk8s-plus-22.RestartPolicy.ON_FAILURE"></a>
Only restart if the pod exits with a non-zero exit code.
---
#### `NEVER` <a name="cdk8s-plus-22.RestartPolicy.NEVER"></a>
Never restart the pod.
---
### ServiceType <a name="ServiceType"></a>
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want.
The default is ClusterIP.
#### `CLUSTER_IP` <a name="cdk8s-plus-22.ServiceType.CLUSTER_IP"></a>
Exposes the Service on a cluster-internal IP.
Choosing this value makes the Service only reachable from within the cluster.
This is the default ServiceType.
---
#### `NODE_PORT` <a name="cdk8s-plus-22.ServiceType.NODE_PORT"></a>
Exposes the Service on each Node's IP at a static port (the NodePort).
A ClusterIP Service, to which the NodePort Service routes, is automatically created.
You'll be able to contact the NodePort Service, from outside the cluster,
by requesting `<NodeIP>:<NodePort>`.
---
#### `LOAD_BALANCER` <a name="cdk8s-plus-22.ServiceType.LOAD_BALANCER"></a>
Exposes the Service externally using a cloud provider's load balancer.
NodePort and ClusterIP Services, to which the external load balancer routes,
are automatically created.
---
#### `EXTERNAL_NAME` <a name="cdk8s-plus-22.ServiceType.EXTERNAL_NAME"></a>
Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
> Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.
---
# KFServing Python SDK
Python SDK for KFServing Server and Client.
## Installation
KFServing Python SDK can be installed by `pip` or `Setuptools`.
### pip install
```sh
pip install kfserving
```
### Setuptools
Install via [Setuptools](http://pypi.python.org/pypi/setuptools).
```sh
python setup.py install --user
```
(or `sudo python setup.py install` to install the package for all users)
## KFServing Server
KFServing's Python server libraries implement a standardized model server that is extended by model serving frameworks such as Scikit-learn, XGBoost and PyTorch. It encapsulates data plane API definitions and storage retrieval for models.
KFServing provides many functionalities, including among others:
* Registering a model and starting the server
* Prediction Handler
* Liveness Handler
* Readiness Handlers
KFServing supports the following storage providers:
* Google Cloud Storage with a prefix: "gs://"
* By default, it uses `GOOGLE_APPLICATION_CREDENTIALS` environment variable for user authentication.
* If `GOOGLE_APPLICATION_CREDENTIALS` is not provided, anonymous client will be used to download the artifacts.
* S3 Compatible Object Storage with a prefix "s3://"
* By default, it uses `S3_ENDPOINT`, `AWS_ACCESS_KEY_ID`, and `AWS_SECRET_ACCESS_KEY` environment variables for user authentication.
* Azure Blob Storage with the format: "https://{$STORAGE_ACCOUNT_NAME}.blob.core.windows.net/{$CONTAINER}/{$PATH}"
* By default, it uses anonymous client to download the artifacts.
* For e.g. https://kfserving.blob.core.windows.net/triton/simple_string/
* Local filesystem either without any prefix or with a prefix "file://". For example:
* Absolute path: `/absolute/path` or `file:///absolute/path`
* Relative path: `relative/path` or `file://relative/path`
* For local filesystem, we recommended to use relative path without any prefix.
* Persistent Volume Claim (PVC) with the format "pvc://{$pvcname}/[path]".
* The `pvcname` is the name of the PVC that contains the model.
* The `[path]` is the relative path to the model on the PVC.
* For e.g. `pvc://mypvcname/model/path/on/pvc`
* Generic URI, over either `HTTP`, prefixed with `http://` or `HTTPS`, prefixed with `https://`. For example:
* `https://<some_url>.com/model.joblib`
* `http://<some_url>.com/model.joblib`
## KFServing Client
### Getting Started
KFServing's Python client interacts with KFServing APIs for executing operations on a remote KFServing cluster, such as creating, patching and deleting an InferenceService instance. See the [Sample for KFServing Python SDK Client](../../docs/samples/client/kfserving_sdk_sample.ipynb) to get started.
### Documentation for Client API
Class | Method | Description
------------ | ------------- | -------------
[KFServingClient](docs/KFServingClient.md) | [set_credentials](docs/KFServingClient.md#set_credentials) | Set Credentials|
[KFServingClient](docs/KFServingClient.md) | [create](docs/KFServingClient.md#create) | Create InferenceService|
[KFServingClient](docs/KFServingClient.md) | [get](docs/KFServingClient.md#get) | Get or watch the specified InferenceService or all InferenceServices in the namespace |
[KFServingClient](docs/KFServingClient.md) | [patch](docs/KFServingClient.md#patch) | Patch the specified InferenceService|
[KFServingClient](docs/KFServingClient.md) | [replace](docs/KFServingClient.md#replace) | Replace the specified InferenceService|
[KFServingClient](docs/KFServingClient.md) | [rollout_canary](docs/KFServingClient.md#rollout_canary) | Rollout the traffic on `canary` version for specified InferenceService|
[KFServingClient](docs/KFServingClient.md) | [promote](docs/KFServingClient.md#promote) | Promote the `canary` version of the InferenceService to `default`|
[KFServingClient](docs/KFServingClient.md) | [delete](docs/KFServingClient.md#delete) | Delete the specified InferenceService |
[KFServingClient](docs/KFServingClient.md) | [wait_isvc_ready](docs/KFServingClient.md#wait_isvc_ready) | Wait for the InferenceService to be ready |
[KFServingClient](docs/KFServingClient.md) | [is_isvc_ready](docs/KFServingClient.md#is_isvc_ready) | Check if the InferenceService is ready |
## Documentation For Models
- [KnativeAddressable](docs/KnativeAddressable.md)
- [KnativeCondition](docs/KnativeCondition.md)
- [KnativeURL](docs/KnativeURL.md)
- [KnativeVolatileTime](docs/KnativeVolatileTime.md)
- [NetUrlUserinfo](docs/NetUrlUserinfo.md)
- [V1alpha2AlibiExplainerSpec](docs/V1alpha2AlibiExplainerSpec.md)
- [V1alpha2Batcher](docs/V1alpha2Batcher.md)
- [V1alpha2CustomSpec](docs/V1alpha2CustomSpec.md)
- [V1alpha2DeploymentSpec](docs/V1alpha2DeploymentSpec.md)
- [V1alpha2EndpointSpec](docs/V1alpha2EndpointSpec.md)
- [V1alpha2ExplainerSpec](docs/V1alpha2ExplainerSpec.md)
- [V1alpha2InferenceService](docs/V1alpha2InferenceService.md)
- [V1alpha2InferenceServiceList](docs/V1alpha2InferenceServiceList.md)
- [V1alpha2InferenceServiceSpec](docs/V1alpha2InferenceServiceSpec.md)
- [V1alpha2InferenceServiceStatus](docs/V1alpha2InferenceServiceStatus.md)
- [V1alpha2Logger](docs/V1alpha2Logger.md)
- [V1alpha2ONNXSpec](docs/V1alpha2ONNXSpec.md)
- [V1alpha2PredictorSpec](docs/V1alpha2PredictorSpec.md)
- [V1alpha2PyTorchSpec](docs/V1alpha2PyTorchSpec.md)
- [V1alpha2SKLearnSpec](docs/V1alpha2SKLearnSpec.md)
- [V1alpha2StatusConfigurationSpec](docs/V1alpha2StatusConfigurationSpec.md)
- [V1alpha2TritonSpec](docs/V1alpha2TritonSpec.md)
- [V1alpha2TensorflowSpec](docs/V1alpha2TensorflowSpec.md)
- [V1alpha2TransformerSpec](docs/V1alpha2TransformerSpec.md)
- [V1alpha2XGBoostSpec](docs/V1alpha2XGBoostSpec.md)
- [V1beta1AIXExplainerSpec](docs/V1beta1AIXExplainerSpec.md)
- [V1beta1AlibiExplainerSpec](docs/V1beta1AlibiExplainerSpec.md)
- [V1beta1Batcher](docs/V1beta1Batcher.md)
- [V1beta1ComponentExtensionSpec](docs/V1beta1ComponentExtensionSpec.md)
- [V1beta1ComponentStatusSpec](docs/V1beta1ComponentStatusSpec.md)
- [V1beta1CustomExplainer](docs/V1beta1CustomExplainer.md)
- [V1beta1CustomPredictor](docs/V1beta1CustomPredictor.md)
- [V1beta1CustomTransformer](docs/V1beta1CustomTransformer.md)
- [V1beta1ExplainerConfig](docs/V1beta1ExplainerConfig.md)
- [V1beta1ExplainerSpec](docs/V1beta1ExplainerSpec.md)
- [V1beta1ExplainersConfig](docs/V1beta1ExplainersConfig.md)
- [V1beta1InferenceService](docs/V1beta1InferenceService.md)
- [V1beta1InferenceServiceList](docs/V1beta1InferenceServiceList.md)
- [V1beta1InferenceServiceSpec](docs/V1beta1InferenceServiceSpec.md)
- [V1beta1InferenceServiceStatus](docs/V1beta1InferenceServiceStatus.md)
- [V1beta1InferenceServicesConfig](docs/V1beta1InferenceServicesConfig.md)
- [V1beta1IngressConfig](docs/V1beta1IngressConfig.md)
- [V1beta1LoggerSpec](docs/V1beta1LoggerSpec.md)
- [V1beta1ModelSpec](docs/V1beta1ModelSpec.md)
- [V1beta1ONNXRuntimeSpec](docs/V1beta1ONNXRuntimeSpec.md)
- [V1beta1PodSpec](docs/V1beta1PodSpec.md)
- [V1beta1PredictorConfig](docs/V1beta1PredictorConfig.md)
- [V1beta1PredictorExtensionSpec](docs/V1beta1PredictorExtensionSpec.md)
- [V1beta1PredictorSpec](docs/V1beta1PredictorSpec.md)
- [V1beta1PredictorsConfig](docs/V1beta1PredictorsConfig.md)
- [V1beta1SKLearnSpec](docs/V1beta1SKLearnSpec.md)
- [V1beta1TFServingSpec](docs/V1beta1TFServingSpec.md)
- [V1beta1TorchServeSpec](docs/V1beta1TorchServeSpec.md)
- [V1beta1TrainedModel](docs/V1beta1TrainedModel.md)
- [V1beta1TrainedModelList](docs/V1beta1TrainedModelList.md)
- [V1beta1TrainedModelSpec](docs/V1beta1TrainedModelSpec.md)
- [V1beta1TrainedModelStatus](docs/V1beta1TrainedModelStatus.md)
- [V1beta1TransformerConfig](docs/V1beta1TransformerConfig.md)
- [V1beta1TransformerSpec](docs/V1beta1TransformerSpec.md)
- [V1beta1TransformersConfig](docs/V1beta1TransformersConfig.md)
- [V1beta1TritonSpec](docs/V1beta1TritonSpec.md)
- [V1beta1XGBoostSpec](docs/V1beta1XGBoostSpec.md)
# How to use Docker4Drupal with custom codebase
This explanation is intended for Drupal installations based on a [custom codebase](https://wodby.com/docs/stacks/drupal/local/#mount-my-codebase). For this, you will need a database dump and a source code directory (both generated with the [drush archive-dump](https://drushcommands.com/drush-8x/core/archive-dump/) command).
> The `drush archive-dump` command _backs up your code, files, and database into a single file_: a compressed *.tgz* archive. This .tgz file mainly contains (1) a directory with the Drupal instance source code (the content of the **"/var/www/html/web/"** directory on the server), and (2) a SQL file with a database dump.
Let's call the extracted dump directory $DRUPAL_DMP.
In order to use your data with this docker project, you must follow these steps:
1. Make a git clone of the project in $D4D_DIR.
2. Copy your **web/** codebase into the project's "data/" directory.
```bash
cp -r $DRUPAL_DMP/web $D4D_DIR/data
```
3. Copy your SQL dump file into the "mariadb-init" directory.
```bash
cp $DRUPAL_DMP/$DB_DUMP.sql $D4D_DIR/mariadb-init
```
4. Modify the variables in the **$D4D_DIR/.env** file.
* The values of the DB_NAME, DB_USER, and DB_PASSWORD variables must be the same as those specified in _$D4D_DIR/data/web/sites/default/settings.php_.
* Optionally, you may modify the PROJECT_NAME and PROJECT_BASE_URL variables.
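If you provision environments often, step 4 can be scripted. A hedged, illustrative helper (not part of docker4drupal; the variable values below are placeholders, so copy the real ones from your settings.php):

```shell
# Update docker4drupal's .env non-interactively.
ENV_FILE=.env

# For the sake of a runnable example, start from some stock defaults:
printf 'PROJECT_NAME=my_drupal_project\nDB_NAME=drupal\nDB_USER=drupal\nDB_PASSWORD=drupal\n' > "$ENV_FILE"

set_env_var() {
  # Replace an existing VAR=... line, or append it if missing.
  if grep -q "^$1=" "$ENV_FILE"; then
    sed -i "s|^$1=.*|$1=$2|" "$ENV_FILE"
  else
    echo "$1=$2" >> "$ENV_FILE"
  fi
}

set_env_var DB_NAME my_site
set_env_var DB_USER my_site_user
set_env_var DB_PASSWORD s3cret
set_env_var PROJECT_BASE_URL drupal.docker.localhost

cat "$ENV_FILE"
```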
5. Now you can create the Drupal containers (using the make target defined in the *docker.mk* file).
```bash
make up
```
6. Now you should be able to access your Drupal application at http://drupal.docker.localhost:8000 (or the port set in the TRAEFIK_PORT variable in the .env file).
* If you cannot, you must configure a new host mapping in your */etc/hosts* system file (as described in the Drupal documentation at https://wodby.com/docs/stacks/drupal/local/#domains).
```bash
127.0.0.1 drupal.docker.localhost
```
* This docker project ships with more services. For example, there is a [Portainer](https://www.portainer.io/) service accessible at http://portainer.docker.localhost:8000. You must modify the /etc/hosts file in order to use it too.
## Attach services to an existing network
If you want to make these services reachable from another network, you can specify the name of that network in the **DOCKER_EXTERNAL_NETWORK_NAME** environment variable (defined in the *.env* file).
Then you must uncomment the following lines in the *docker-compose.yml* file:
```yaml
### Configure this option if you want to add this services to an external network...
networks:
  default:
    external:
      name: ${DOCKER_EXTERNAL_NETWORK_NAME}
```
# Quick Start
[Index](Index.md)
Still in early stages of development, but the following should work
### 1.
Follow the [Quick Start](../corda-local-network/docs/QuickStart.md) guide
in 'corda-local-network' to get a local dev network running with the
'refrigerated transport' example.
Ensure that a local DNS entry is set up to resolve to this service, e.g. under unix
add the following to <code>/etc/hosts</code>:
<pre>
127.0.0.1 corda-local-network
127.0.0.1 corda-transaction-builder
</pre>
### 2.
Start with
```bash
./gradlew run
```
or with a packaged jar
```bash
./gradlew clean jar
java -jar build/libs/corda-transaction-builder.jar
```
### 3. Deploy the CorDapp
This app needs access to the CorDapp jar files, as its needs the app's Java classes to construct
the CordaRPC calls.
For now the process is copy of that used to deploy to the corda-local-network.
```bash
curl -X POST http://corda-transaction-builder:1112/1/apps/refrigeration/deploy \
--data-binary @../../cordapps/refrigerated-transportation/lib/refrigerated-transportation.jar
```
An "agent" must be run for this network.
```bash
curl -X POST http://corda-transaction-builder:1112/1/start
```
### 4.
Query with (can use any node name)
```bash
curl http://corda-transaction-builder:1112/1/ContosoLtd/query/shipment
```
### 5.
Find the meta data, which describes the params needed
```bash
curl http://localhost:1112/1/ContosoLtd/refrigeration/flows/NewShipmentFlow/metadata
```
This will return something like that below
```json
{
"newShipment": {
"owner": {
"type": "Party",
"optional": false,
"nullable": false
},
"maxTemperature": {
"type": "Int",
"optional": false,
"nullable": false
},
"minTemperature": {
"type": "Int",
"optional": false,
"nullable": false
},
"maxHumidity": {
"type": "Int",
"optional": false,
"nullable": false
},
"supplyChainObserver": {
"type": "Party",
"optional": false,
"nullable": false
},
"device": {
"type": "Party",
"optional": false,
"nullable": false
},
"supplyChainOwner": {
"type": "Party",
"optional": false,
"nullable": false
},
"minHumidity": {
"type": "Int",
"optional": false,
"nullable": false
},
"linearId": {
"type": "UniqueIdentifier",
"optional": true,
"nullable": false
}
}
}
```
See [corda-reflections](../../corda-reflections/docs/Index.md) for more details as to
how this metadata is extracted
### 6.
Create a new shipment.
Now we can make a call to run the flow. The data supplied must match the metadata extracted
in the previous step.
```bash
curl -X POST -H "Content-Type: application/json" http://localhost:1112/1/ContosoLtd/refrigeration/flows/NewShipmentFlow/run --data \
'{"newShipment" : {"owner" : "ContosoLtd", "device" : "Device01", "supplyChainOwner" : "WorldWideImporters","supplyChainObserver" : "WoodgroveBank", "minHumidity" : 20, "maxHumidity" : 50, "minTemperature" : -10,"maxTemperature" : 0 }}'
```
Or, specifying our own linearId (**make sure to change this on each run!**)
```bash
curl -X POST -H "Content-Type: application/json" http://localhost:1112/1/ContosoLtd/refrigeration/flows/NewShipmentFlow/run --data \
'{"newShipment" : {"linearId":"abc_22D87099-24DE-4A85-8498-A15CC811C2B4" , "owner" : "ContosoLtd", "device" : "Device01", "supplyChainOwner" : "WorldWideImporters","supplyChainObserver" : "WoodgroveBank", "minHumidity" : 11, "maxHumidity" : 12, "minTemperature" : -10,"maxTemperature" : 0 }}'
```
Run the query in step 4 and the new shipment should be in the list.
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [3.2.1](https://github.com/OverZealous/cdnizer/releases/tag/v3.2.1) - 2020-05-21
- Updated `lodash` to fix security vulnerabilities. (Thanks [@BenjiLewis](https://github.com/OverZealous/cdnizer/issues?q=is%3Apr+author%3ABenjiLewis))
## [3.2.0](https://github.com/OverZealous/cdnizer/releases/tag/v3.2.0) - 2019-10-16
- Updated `jsdelivr-cdn-data` to the normal semver release.
## [3.1.0](https://github.com/OverZealous/cdnizer/releases/tag/v3.1.0) - 2019-09-17
- Added new option, `excludeAbsolute`, which can be used to ignore absolute URLs.
## [3.0.1, 3.0.2](https://github.com/OverZealous/cdnizer/releases/tag/v3.0.2) - 2018-11-28 / 2018-12-12
- Updated to fix NPM audit dependency vulnerabilities
- Removed CLI (I didn't write it or use it, it wasn't maintained, and it was pretty poorly written)
## [2.0.0](https://github.com/OverZealous/cdnizer/releases/tag/v2.0.0) - 2018-02-27
### Changed
- Added support for finding file versions in `package.json`, thanks to [@jamendub](https://github.com/jamendub)
- Added default support for `node_modules`
- Added a changelog, cleaned up the code, etc.
- `allowRev` is now disabled by default.
## Older
I'm not going to go through the old history at this point.
| 40.888889 | 151 | 0.732337 | eng_Latn | 0.835173 |
e5a2919f5cba1cc69747109d088a15b0872cff21 | 2,418 | md | Markdown | README.md | SevdanurGENC/Data-Analytics-Lecture-Notes | 895b43285e76fb6f9a23b187058992454dbfe883 | [
"MIT"
] | null | null | null | README.md | SevdanurGENC/Data-Analytics-Lecture-Notes | 895b43285e76fb6f9a23b187058992454dbfe883 | [
"MIT"
] | null | null | null | README.md | SevdanurGENC/Data-Analytics-Lecture-Notes | 895b43285e76fb6f9a23b187058992454dbfe883 | [
"MIT"
] | null | null | null | # Data-Analytics-Lecture-Notes
In this repo, I keep the course materials for the Data Analytics training that I delivered to Innova Technology between 27–29 September 2021, in cooperation with Academy Peak Information Technologies Training and Consultancy.
Program content: fundamentals of data, data preprocessing, and visualization.
Purpose of the program: Before any work can be done on the data around us, we first need to know the data well. Data analytics is the process of extracting the information hidden inside data using various methods, in order to make decisions, take action, and create value. During the training, libraries related to data analytics will be introduced using Python, today's popular programming language, and hands-on exercises will be carried out on data. While covering how data should be prepared and how it is cleaned of noise, the methods and metrics of data analytics will then be examined using the library functions learned.
Computer programs used during the training: Python
Duration of the training: 3 days
# About the Training Content:
## Day 1: Introduction to Data Analytics
A. Data collection, cleaning, extraction, and preparation processes
B. Checking data accuracy, validity, integrity, uniqueness, consistency, and timeliness
C. Data standardization and normalization
D. Sampling methodologies
E. Systematization and periodization of data
F. Feature generation and feature selection
## Day 2: Data Analytics with Python – I
A. Introduction to the Python language and its development environment
B. Learning the basic data structures and functions
C. Scientific computing with NumPy
D. Data analysis with Pandas
E. Data visualization with Matplotlib
## Day 3: Data Analytics with Python – II
A. Learning data preprocessing techniques in Python
B. Machine learning applications with scikit-learn
C. Implementing basic applications in Python
D. Building an optimization model in Python
E. Using deep learning algorithms in Python
e5a30b94932965c8607e95b7708e2935316119f8 | 2,604 | md | Markdown | global/2020_GCR_SZ_ContainerDay/步骤7-在EKS中使用IAMRole进行权限管理.md | aws-samples/eks-workshop-greater-china | ad13f30c2f6d231672a8a877f5bfc79cd8bad78d | [
"MIT-0"
] | 129 | 2019-12-16T08:19:20.000Z | 2022-03-24T08:58:04.000Z | global/2020_GCR_SZ_ContainerDay/步骤7-在EKS中使用IAMRole进行权限管理.md | aws-samples/eks-workshop-greater-china | ad13f30c2f6d231672a8a877f5bfc79cd8bad78d | [
"MIT-0"
] | 35 | 2019-12-19T08:16:57.000Z | 2022-03-03T06:04:41.000Z | global/2020_GCR_SZ_ContainerDay/步骤7-在EKS中使用IAMRole进行权限管理.md | aws-samples/eks-workshop-greater-china | ad13f30c2f6d231672a8a877f5bfc79cd8bad78d | [
"MIT-0"
] | 98 | 2019-12-16T08:07:22.000Z | 2022-02-09T06:07:47.000Z | # 步骤7 在EKS中使用IAM Role进行权限管理
我们将要为ServiceAccount配置一个S3的访问角色,并且部署一个job应用到EKS集群,完成S3的写入。
7.1 配置IAM Role、ServiceAccount
>7.1.1 使用eksctl 创建service account
```bash
# 在步骤3我们已经创建了OIDC身份提供商
# 请检查IAM OpenID Connect (OIDC) 身份提供商是否已经创建
aws eks describe-cluster --name ${CLUSTER_NAME} --query cluster.identity.oidc.issuer --output text
# 如果上述命令无输出,请执行以下命令创建OpenID Connect (OIDC) 身份提供商
#eksctl utils associate-iam-oidc-provider --cluster=${CLUSTER_NAME} --approve --region ${AWS_REGION}
#创建serviceaccount s3-echoer with IAM role
eksctl create iamserviceaccount --name s3-echoer --namespace default \
--cluster ${CLUSTER_NAME} --attach-policy-arn arn:aws-cn:iam::aws:policy/AmazonS3FullAccess \
--approve --override-existing-serviceaccounts
```
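Behind the scenes, `eksctl` creates an IAM role with the attached policy and a Kubernetes ServiceAccount annotated with that role's ARN. As a rough sketch (the role name and account ID below are placeholders, not values from this workshop), the resulting object looks like this:

```yaml
# Illustrative sketch only: the role name and account ID are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-echoer
  namespace: default
  annotations:
    # Pods using this ServiceAccount exchange their projected token
    # for temporary credentials of this IAM role.
    eks.amazonaws.com/role-arn: arn:aws-cn:iam::123456789012:role/eksctl-CLUSTER-addon-iamserviceaccount-Role
```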
7.2 Deploy a test application that accesses S3
*Use an existing S3 bucket or create a new one; make sure the bucket name is globally unique, or creation will fail.
```bash
# Set the TARGET_BUCKET environment variable to the S3 bucket the pod will access
TARGET_BUCKET=eksworkshop-irsa-2020
if [ $(aws s3 ls | grep $TARGET_BUCKET | wc -l) -eq 0 ]; then
aws s3api create-bucket --bucket $TARGET_BUCKET --create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION --region $AWS_DEFAULT_REGION
else
echo "S3 bucket $TARGET_BUCKET existed, skip creation"
fi
# Substitute the region and deploy the job
sed -e "s/TARGET_BUCKET/${TARGET_BUCKET}/g;s/us-west-2/${AWS_DEFAULT_REGION}/g" s3-echoer/s3-echoer-job.yaml.template > s3-echoer/s3-echoer-job.yaml
kubectl apply -f s3-echoer/s3-echoer-job.yaml
# Verify
kubectl get job/s3-echoer
kubectl logs job/s3-echoer
## Sample output
Uploading user input to S3 using eksworkshop-irsa-2019/s3echoer-1583415691
# Check the objects in the S3 bucket
aws s3api list-objects --bucket $TARGET_BUCKET --query 'Contents[].{Key: Key, Size: Size}'
[
{
"Key": "s3echoer-1583415691",
"Size": 27
}
]
# Clean up
kubectl delete job/s3-echoer
```
7.3 Deploy a second IAM permission test pod (optional)
```bash
cd china/2020_EKS_Launch_Workshop/resource/
# Apply the test pod
kubectl apply -f IRSA/iam-pod.yaml
pod/s3-echoer created
kubectl get pod s3-echoer
NAME READY STATUS RESTARTS AGE
s3-echoer 1/1 Running 0 2m38s
# Verify that the IAM role takes effect
kubectl exec -it s3-echoer bash
# At the prompt inside the pod, the output Arn should look like assumed-role/eksctl-gcr-zhy-eksworkshop-addon-iamservicea-Role
aws sts get-caller-identity
# the output should list all the S3 buckets in AWS_REGION under the account
aws s3 ls
aws ec2 describe-instances
# the output should look like: An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
# cleanup
kubectl delete -f IRSA/iam-pod.yaml
```
| 30.635294 | 162 | 0.74424 | yue_Hant | 0.572873 |
e5a33875fe819f7df9b82ac6f7ca03cf67296aad | 258 | md | Markdown | SA-Document/Baiscinfo.md | zjhZJH1998/SA-scrapy | 6ee0374dcaa50fe303b28a26e1085997bc162591 | [
"BSD-3-Clause"
] | null | null | null | SA-Document/Baiscinfo.md | zjhZJH1998/SA-scrapy | 6ee0374dcaa50fe303b28a26e1085997bc162591 | [
"BSD-3-Clause"
] | null | null | null | SA-Document/Baiscinfo.md | zjhZJH1998/SA-scrapy | 6ee0374dcaa50fe303b28a26e1085997bc162591 | [
"BSD-3-Clause"
] | 2 | 2019-10-03T04:12:58.000Z | 2019-10-03T14:04:59.000Z | Basic information
====
Name list
----
王子豪 2017302580257\
王怿临 2017302580258\
张新豪\
王初程 2017302580267
Project name
----
Scrapy:
An open source and collaborative framework for extracting the data you need from websites.\
In a fast, simple, yet extensible way.
| 16.125 | 91 | 0.763566 | eng_Latn | 0.971795 |
e5a393d04af59801c9079785932c770413f22897 | 15 | md | Markdown | README.md | changan-xigua/op.github.io | cdcb6f084f1742e4ce79d51c3cc238ab5bf684dd | [
"Apache-2.0"
] | null | null | null | README.md | changan-xigua/op.github.io | cdcb6f084f1742e4ce79d51c3cc238ab5bf684dd | [
"Apache-2.0"
] | null | null | null | README.md | changan-xigua/op.github.io | cdcb6f084f1742e4ce79d51c3cc238ab5bf684dd | [
"Apache-2.0"
] | null | null | null | # op.github.io
| 7.5 | 14 | 0.666667 | por_Latn | 0.517347 |
e5a404812181c480d21546c288f118d583cdf7b6 | 2,158 | md | Markdown | src/sw/2020-04/12/05.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/sw/2020-04/12/05.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/sw/2020-04/12/05.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: 'Muda wa Kutafuta Uwiano'
date: 16/12/2020
---
Yesu aliheshimu na kuitii sheria ya Mungu (Mathayo 5:17, 18). Hata hivyo Yesu alikemea uongozi wa kidini vile ulivyozitafsiri sheria. Viongozi hao wa kidini walitishika mno kutokana na kauli Yesu alizotamka kuhusu utunzaji wa Sabato. Sinagogi halikuacha kufanya Sabato iwe fursa ya elimu —Torati ilisomwa na kufafanuliwa bila kukosa. Waandishi na Mafarisayo waliijua sheria kama ilivyoandikwa; hawakupiga hatua kuiishi. Hivyo Yesu alipiga hatua kuwaelimisha wafuasi wake jinsi sahihi ya kushika Sabato.
`Soma Mathayo 12:1—13 na Luka 13:10—17. Ni fundisho gani alilokuwa akilitoa Yesu kwa watu wa zama zake, na kwetu pia leo, kupitia matukio hayo?`
Ubishi uliozuka kuhusiana na kitendo cha Yesu kuponya mtu siku ya Sabato uliibua ari ya mjadala kuhusu asili ya dhambi, makusudi ya Sabato, uhusiana wa Yesu na Baba, na asili ya mamlaka ya Yesu.
Mtazamo wa Yesu kuhusu Sabato umewekwa kwa muhtasari kupitia fungu la kukariri la juma hili: “Akawaambia, ‘Sabato ilifanyika kwa ajili ya mwanadamu, si mwanadamu kwa ajili ya Sabato. Basi Mwana wa Adamu ndiye Bwana wa Sabato pia.”’ (Marko 2:27, 28). Alitaka kusisitiza kuwa Sabato haipaswi kuwa mzigo. Iliumbwa (au kutengwa) ili iwe fursa ya watu kujifunza tabia ya Mungu aliyeumba Sabato na kujifunza kwa uzoefu kwa njia ya kuuthamini uumbaji wake.
Kwa kuamsha maswali kwa njia ya vitendo vyake, Yesu huamsha akili za wanafunzi wake, viongozi wa Kiyahudi, na makutano wapate kutafakari kwa kina zaidi kuhusu Maandiko, na kuhusu kile walichoamini na kile Mungu alichokusudia. Ni rahisi kwetu sote kuweza kujali sheria na taratibu ambazo kwa zenyewe siyo mbaya, ila tukazifanya ndiyo mwisho uliokusudiwa, badala ya kuwa njia kutuelekeza kwenye mwisho maalum —kupata maarifa na kuijua tabia ya Mungu tunayemtumikia. Na hii hutusababisha kutii kikamilifu kwa kutumainia wema wa haki ya Kristo kwetu.
`Je utunzaji wako wa Sabato ukoje? Je umeifanya iwe siku ya usifanye hivi, usifanye vile, badala ya kuwa pumziko la dhati katika BWANA na kumjua zaidi? Iwapo ndivyo, ni mabadiliko gani utafanya ili unufaike zaidi na Sabato kama Mungu alivyokusudia? ` | 134.875 | 547 | 0.807692 | swh_Latn | 1.000004 |
e5a43116721b51c69aec2c3fac9a5b61c58e4478 | 450 | md | Markdown | README.md | alls77/homepage | cd659a2ab2882ff30b14c567cdb0360f1663da82 | [
"MIT"
] | null | null | null | README.md | alls77/homepage | cd659a2ab2882ff30b14c567cdb0360f1663da82 | [
"MIT"
] | 19 | 2021-05-04T06:37:59.000Z | 2021-05-06T16:11:32.000Z | README.md | alls77/homepage | cd659a2ab2882ff30b14c567cdb0360f1663da82 | [
"MIT"
] | null | null | null | ## Homepage
My very own personal website. Basically this is just a résumé.
## Preview

## Demo
[GitHub Pages](https://alls77.github.io/homepage/)
## Credits
Thanks [@volodymyr-kushnir](https://github.com/volodymyr-kushnir) for stylesheets.
## License
[](https://opensource.org/licenses/MIT)
| 30 | 107 | 0.744444 | yue_Hant | 0.426758 |
e5a4f7cd0d5efcc258d5d4d8c4cb01dd78c62493 | 3,558 | md | Markdown | src/content/posts/2008-12-20-dicionario-stardict-no-linux.md | manoelcampos/blog | 34c90443dc190978e76917c99c4d8be4f6292fe3 | [
"Apache-2.0"
] | 1 | 2018-03-23T17:47:59.000Z | 2018-03-23T17:47:59.000Z | src/content/posts/2008-12-20-dicionario-stardict-no-linux.md | manoelcampos/blog | 34c90443dc190978e76917c99c4d8be4f6292fe3 | [
"Apache-2.0"
] | null | null | null | src/content/posts/2008-12-20-dicionario-stardict-no-linux.md | manoelcampos/blog | 34c90443dc190978e76917c99c4d8be4f6292fe3 | [
"Apache-2.0"
] | null | null | null | ---
author: admin
comments: true
layout: post
slug: dicionario-stardict-no-linux
title: StarDict Dictionary on Linux
wordpress_id: 135
categories:
- Internet
- Linux
- Software
- Software Livre
---
[StarDict](http://stardict.sourceforge.net) is an open source dictionary similar to the famous [Babylon](http://www.babylon.com/), which is commercial.
Like Babylon, StarDict has versions for both Windows and Linux. When downloaded, the application does not come with any dictionaries installed (if I remember correctly, only a Chinese dictionary). A few dictionaries can be downloaded from [http://stardict.sourceforge.net/Dictionaries.php](http://stardict.sourceforge.net/Dictionaries.php), but nothing very useful. The good news is that StarDict can use Babylon's dictionaries.
## To install StarDict on Linux via apt-get, just open a terminal and type:
`apt-get update`
This updates the list of packages available for download. None of the commands contain line breaks, even though they may be displayed across more than one line.
## Next, type the following command to download StarDict and the tools that will be used to convert Babylon dictionaries to its format:
`apt-get install stardict stardict-tools`
## The following command downloads a library which, judging by its name, can only be for handling the XML files used by the StarDict tools:
`apt-get install build-essential libxml2-dev`
## Now that StarDict is installed, let's download the tool that converts Babylon dictionaries to the StarDict format. In the terminal, type:
`wget http://optusnet.dl.sourceforge.net/sourceforge/ktranslator/dictconv-0.2.tar.bz2`
## Now type the following command to extract the package and enter the created folder:
`tar -jxvf dictconv-0.2.tar.bz2 ; cd dictconv-0.2`
## Type the commands below to configure and compile the package:
`./configure; make`
## To install, type:
`checkinstall`
## If checkinstall is not installed, type the command below to download and install it:
`sudo apt-get install checkinstall`
## To download the Babylon English-Portuguese dictionary, type:
`wget http://info.babylon.com/glossaries/38C/Babylon_English_Portuguese.BGL`
## With the command below you convert the dictionary to the StarDict format, generating a file named Babylon_English_Portuguese1.dic:
`dictconv -o Babylon_English_Portuguese1.dic Babylon_English_Portuguese.BGL`
## After the conversion, some unnecessary characters are left in the dictionary file. Remove them with the command below:
`cat Babylon_English_Portuguese1.dic | sed 's/$[0-9][0-9]*$t/t/' > Babylon_English_Portuguese.dic`
## To finish the conversion, type the command below:
`/usr/lib/stardict-tools/tabfile Babylon_English_Portuguese.dic`
## Move the generated files to the StarDict dictionaries folder:
`sudo mv Babylon_English_Portuguese.dict.dz /usr/share/stardict/dic/`
If you want the Portuguese-English dictionary as well, run the following command:
`wget http://info.babylon.com/glossaries/4EA/Babylon_Portuguese_English_dic.BGL`
Then repeat the steps from step 10 onward, swapping in the correct file names.
Now, to open the program, go to the Applications >> Accessories menu and open StarDict.
StarDict works like Babylon: select a word in any program and it automatically shows the translation. On Linux it does not work correctly with Acrobat Reader, at least for me, but in any other program everything works fine.
e5a5271dcc8b28af8003b2f00a89aed5b60cfc76 | 892 | md | Markdown | README.md | Alfresco/aws-auto-tag | 8f7312cf97f35f148a9ae45b6e99a228a5123287 | [
"Apache-2.0"
] | 3 | 2017-11-25T07:15:06.000Z | 2021-05-25T13:18:43.000Z | README.md | Alfresco/aws-auto-tag | 8f7312cf97f35f148a9ae45b6e99a228a5123287 | [
"Apache-2.0"
] | null | null | null | README.md | Alfresco/aws-auto-tag | 8f7312cf97f35f148a9ae45b6e99a228a5123287 | [
"Apache-2.0"
] | 4 | 2017-10-24T13:51:52.000Z | 2018-03-20T20:42:06.000Z | # AutoTag
This repository contains a CloudFormation template that creates a Lambda function
that is triggered by a CloudWatch rule listening for CloudTrail create events in the region.
Currently new EC2 instances and S3 buckets are tracked and tagged.
## Prerequisites
Install and configure the AWS Command Line Interface by the following the instructions [here](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html).
CloudTrail must be actived for the region you are deploying to.
CloudWatch Events must support the EC2 and S3 services in the region you are deploying to.
## Build & Deploy
Use the following steps to build and deploy the stack.
mvn package
deploy [bucket-name] [stack-name]
where **bucket-name** is the name of an existing S3 bucket to use for deployment and **stack-name** is the name to give the created CloudFormation stack.
| 37.166667 | 172 | 0.780269 | eng_Latn | 0.997612 |
e5a52dd6b8bbe8d11b4d0c8a86a376b97f0fbbe9 | 524 | md | Markdown | public/readme.md | spudmashmedia/spudtube | 10fe99d3f10efd811aa6e0f40f3a308e2f7a3975 | [
"MIT"
] | null | null | null | public/readme.md | spudmashmedia/spudtube | 10fe99d3f10efd811aa6e0f40f3a308e2f7a3975 | [
"MIT"
] | null | null | null | public/readme.md | spudmashmedia/spudtube | 10fe99d3f10efd811aa6e0f40f3a308e2f7a3975 | [
"MIT"
] | null | null | null | # SPUDTUBE - NodeJS version
A very very very very very hacky streaming service....that needs some serious work...ლ(ಠ益ಠლ)
API: built with NodeJS + Restify
Web: built with NodeJS + Express + Handlebars
# NOTES
file names should be
[filename].mp4
when viewing a video via the WEB (http://localhost:3000/spud.tube), you pass in a query string parameter **v**
where **v** is the filename.
E.g.
if a video exists in folder:
```
/public/myvideo.mp4
```
the url would be
```
http://localhost:3000/spud.tube?v=myvideo
```
| 18.714286 | 111 | 0.709924 | eng_Latn | 0.959258 |
e5a66fc53427c9dfb569cc20431507f14413478e | 35,839 | md | Markdown | docs/framework/wcf/feature-details/data-contract-schema-reference.md | Dorothywilk/docs.fr-fr | 459a44d2c7bf5de4f74d7de8833fa9f8ea4ff38a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/data-contract-schema-reference.md | Dorothywilk/docs.fr-fr | 459a44d2c7bf5de4f74d7de8833fa9f8ea4ff38a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/data-contract-schema-reference.md | Dorothywilk/docs.fr-fr | 459a44d2c7bf5de4f74d7de8833fa9f8ea4ff38a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Data contract schema reference
ms.date: 03/30/2017
helpviewer_keywords:
- data contracts [WCF], schema reference
ms.assetid: 9ebb0ebe-8166-4c93-980a-7c8f1f38f7c0
ms.openlocfilehash: 33661061e1a5db4f7826c1a8eca188f8c782b58f
ms.sourcegitcommit: 8c28ab17c26bf08abbd004cc37651985c68841b8
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/08/2018
ms.locfileid: "48873717"
---
# <a name="data-contract-schema-reference"></a>Data Contract Schema Reference
This topic describes the subset of XML Schema (XSD) used by <xref:System.Runtime.Serialization.DataContractSerializer> to describe common language runtime (CLR) types for XML serialization.
## <a name="datacontractserializer-mappings"></a>DataContractSerializer Mappings
The `DataContractSerializer` maps CLR types to XSD when metadata is exported from a Windows Communication Foundation (WCF) service using a metadata endpoint or the [ServiceModel Metadata Utility Tool (Svcutil.exe)](../../../../docs/framework/wcf/servicemodel-metadata-utility-tool-svcutil-exe.md). For more information, see [Data Contract Serializer](../../../../docs/framework/wcf/feature-details/data-contract-serializer.md).
`DataContractSerializer` also maps XSD to CLR types when Svcutil.exe is used to access Web Services Description Language (WSDL) or XSD documents and generate data contracts for services or clients.
Only XML Schema instances that conform to the requirements stated in this document can be mapped to CLR types using `DataContractSerializer`.
### <a name="support-levels"></a>Support Levels
`DataContractSerializer` provides the following levels of support for a given XML Schema feature:
- **Supported**. There is an explicit mapping from this feature to CLR types or attributes (or both) through `DataContractSerializer`.
- **Ignored**. The feature is allowed in schemas imported by `DataContractSerializer`, but has no effect on code generation.
- **Forbidden**. `DataContractSerializer` does not support importing a schema that uses the feature. For example, when Svcutil.exe accesses a WSDL with a schema that uses such a feature, it falls back to using <xref:System.Xml.Serialization.XmlSerializer> instead. This is the default.
## <a name="general-information"></a>General Information
- The schema namespace is described in [XML Schema](https://go.microsoft.com/fwlink/?LinkId=95475). The prefix "xs" is used in this document.
- Any attributes with a non-schema namespace are ignored.
- Any annotations (other than those described in this document) are ignored.
### <a name="xsschema-attributes"></a>\<xs:schema>: attributes
|Attribute|DataContract|
|---------------|------------------|
|`attributeFormDefault`|Ignored.|
|`blockDefault`|Ignored.|
|`elementFormDefault`|Must be qualified. All elements must be qualified for a schema to be supported by `DataContractSerializer`. This can be done either by setting xs:schema/@elementFormDefault to "qualified", or by setting xs:element/@form to "qualified" on each individual element declaration.|
|`finalDefault`|Ignored.|
|`Id`|Ignored.|
|`targetNamespace`|Supported and mapped to the data contract namespace. If this attribute is not specified, the blank namespace is used. Cannot be the reserved namespace `http://schemas.microsoft.com/2003/10/Serialization/`.|
|`version`|Ignored.|
### <a name="xsschema-contents"></a>\<xs:schema>: contents
|Contents|Schema|
|--------------|------------|
|`include`|Supported. `DataContractSerializer` supports xs:include and xs:import. However, Svcutil.exe restricts following `xs:include/@schemaLocation` and `xs:import/@location` references when metadata is loaded from a local file. The list of schema files must be passed through an out-of-band mechanism rather than through `include` in this case; `include`d schema documents are ignored.|
|`redefine`|Forbidden. Use of `xs:redefine` is forbidden by `DataContractSerializer` for security reasons: `xs:redefine` requires `schemaLocation` to be followed. In certain circumstances, Svcutil.exe restricts the use of `schemaLocation` when DataContract is being used.|
|`import`|Supported. `DataContractSerializer` supports `xs:include` and `xs:import`. However, Svcutil.exe restricts following `xs:include/@schemaLocation` and `xs:import/@location` references when metadata is loaded from a local file. The list of schema files must be passed through an out-of-band mechanism rather than through `include` in this case; `include`d schema documents are ignored.|
|`simpleType`|Supported. See the `xs:simpleType` section.|
|`complexType`|Supported, maps to data contracts. See the `xs:complexType` section.|
|`group`|Ignored. `DataContractSerializer` does not support use of `xs:group`, `xs:attributeGroup`, and `xs:attribute`. These declarations are ignored as children of `xs:schema`, but cannot be referenced from within `complexType` or other supported constructs.|
|`attributeGroup`|Ignored. `DataContractSerializer` does not support use of `xs:group`, `xs:attributeGroup`, and `xs:attribute`. These declarations are ignored as children of `xs:schema`, but cannot be referenced from within `complexType` or other supported constructs.|
|`element`|Supported. See Global Element Declaration (GED).|
|`attribute`|Ignored. `DataContractSerializer` does not support use of `xs:group`, `xs:attributeGroup`, and `xs:attribute`. These declarations are ignored as children of `xs:schema`, but cannot be referenced from within `complexType` or other supported constructs.|
|`notation`|Ignored.|
## <a name="complex-types--xscomplextype"></a>Complex Types – \<xs:complexType>
### <a name="general-information"></a>General Information
Each complex type \<xs:complexType> maps to a data contract.
### <a name="xscomplextype-attributes"></a>\<xs:complexType>: attributes
|Attribute|Schema|
|---------------|------------|
|`abstract`|Must be false (the default).|
|`block`|Forbidden.|
|`final`|Ignored.|
|`id`|Ignored.|
|`mixed`|Must be false (the default).|
|`name`|Supported and mapped to the data contract name. If the name contains periods, an attempt is made to map the type to an inner type. For example, a complex type named `A.B` maps to a data contract type that is an inner type of a type with the data contract name `A`, but only if such a data contract type exists. Multiple nesting levels are possible: for example, `A.B.C` can be an inner type, but only if both `A` and `A.B` exist.|
### <a name="xscomplextype-contents"></a>\<xs:complexType>: contents
|Contents|Schema|
|--------------|------------|
|`simpleContent`|Extensions are forbidden.<br /><br /> Restriction is allowed only from `anySimpleType`.|
|`complexContent`|Supported. See "Inheritance".|
|`group`|Forbidden.|
|`all`|Forbidden.|
|`choice`|Forbidden.|
|`sequence`|Supported, maps to the data members of a data contract.|
|`attribute`|Forbidden, even if use="prohibited" (with one exception). Only optional attributes from the standard serialization schema namespace are supported. They do not map to data members in the data contract programming model. Currently only one such attribute has meaning and is discussed in the ISerializable section. All others are ignored.|
|`attributeGroup`|Forbidden. In WCF v1, `DataContractSerializer` ignores the presence of `attributeGroup` inside `xs:complexType`.|
|`anyAttribute`|Forbidden.|
|(empty)|Maps to a data contract with no data members.|
### <a name="xssequence-in-a-complex-type-attributes"></a>\<xs:sequence> in a complex type: attributes
|Attribute|Schema|
|---------------|------------|
|`id`|Ignored.|
|`maxOccurs`|Must be 1 (the default).|
|`minOccurs`|Must be 1 (the default).|
### <a name="xssequence-in-a-complex-type-contents"></a>\<xs:sequence> in a complex type: contents
|Contents|Schema|
|--------------|------------|
|`element`|Each instance maps to a data member.|
|`group`|Forbidden.|
|`choice`|Forbidden.|
|`sequence`|Forbidden.|
|`any`|Forbidden.|
|(empty)|Maps to a data contract with no data members.|
## <a name="elements--xselement"></a>Elements – \<xs:element>
### <a name="general-information"></a>General Information
`<xs:element>` can occur in the following contexts:
- It can occur within an `<xs:sequence>`, which describes a data member of a regular (non-collection) data contract. In this case, the `maxOccurs` attribute must be 1. (A value of 0 is not allowed.)
- It can occur within an `<xs:sequence>`, which describes a data member of a collection data contract. In this case, the `maxOccurs` attribute must be greater than 1 or "unbounded".
- It can occur within an `<xs:schema>` as a global element declaration (GED).
### <a name="xselement-with-maxoccurs1-within-an-xssequence-data-members"></a>\<xs:element> with maxOccurs=1 within an \<xs:sequence> (data members)
|Attribute|Schema|
|---------------|------------|
|`ref`|Forbidden.|
|`name`|Supported, maps to the data member name.|
|`type`|Supported, maps to the data member type. For more information, see the type/primitive mapping. If not specified (and the element does not contain an anonymous type), `xs:anyType` is assumed.|
|`block`|Ignored.|
|`default`|Forbidden.|
|`fixed`|Forbidden.|
|`form`|Must be qualified. This attribute can also be set through `elementFormDefault` on `xs:schema`.|
|`id`|Ignored.|
|`maxOccurs`|1|
|`minOccurs`|Maps to the <xref:System.Runtime.Serialization.DataMemberAttribute.IsRequired%2A> property of a data member (`IsRequired` is true when `minOccurs` is 1).|
|`nillable`|Affects type mapping. See the type/primitive mapping.|
### <a name="xselement-with-maxoccurs1-within-an-xssequence-collections"></a>\<xs:element> with maxOccurs>1 within an \<xs:sequence> (collections)
- Maps to a <xref:System.Runtime.Serialization.CollectionDataContractAttribute>.
- In collection types, only one xs:element is allowed within an xs:sequence.
Collections can be of the following kinds:
- Regular collections (for example, arrays).
- Dictionary collections (mapping one value to another; for example, a <xref:System.Collections.Hashtable>).
- The only difference between a dictionary and an array of key/value pair types is the generated programming model. There is a schema annotation mechanism that can be used to indicate that a given type is a dictionary collection.
The rules for the `ref`, `block`, `default`, `fixed`, `form`, and `id` attributes are the same as for the non-collection case. The remaining attributes are listed in the following table.
|Attribute|Schema|
|---------------|------------|
|`name`|Supported, maps to the <xref:System.Runtime.Serialization.CollectionDataContractAttribute.ItemName%2A> property in the `CollectionDataContractAttribute` attribute.|
|`type`|Supported, maps to the type stored in the collection.|
|`maxOccurs`|Greater than 1 or "unbounded". The DC schema should use "unbounded".|
|`minOccurs`|Ignored.|
|`nillable`|Affects type mapping. This attribute is ignored for dictionary collections.|
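A brief illustrative sketch may help here (the type and element names below are invented, not taken from the specification): a repeating element inside a sequence imports as a collection data contract, with the inner element name becoming the item name.

```xml
<!-- Illustrative sketch only: "ArrayOfBook" and "book" are invented names. -->
<xs:complexType name="ArrayOfBook">
  <xs:sequence>
    <!-- maxOccurs greater than 1 (here "unbounded") marks this as a collection
         data contract; "book" maps to CollectionDataContractAttribute.ItemName -->
    <xs:element name="book" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
```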
### <a name="xselement-within-an-xsschema-global-element-declaration"></a>\<xs:element> within an \<xs:schema>: Global Element Declaration
- A global element declaration (GED) that has the same name and namespace as a type in the schema, or that defines an anonymous type within itself, is considered associated with the type.
- Schema export: associated GEDs are generated for every generated type, both simple and complex.
- Deserialization/serialization: associated GEDs are used as the root elements for the type.
- Schema import: associated GEDs are not required, and are ignored if they follow the rules below (unless they define types).
|Attribute|Schema|
|---------------|------------|
|`abstract`|Must be false for associated GEDs.|
|`block`|Forbidden in associated GEDs.|
|`default`|Forbidden in associated GEDs.|
|`final`|Must be false for associated GEDs.|
|`fixed`|Forbidden in associated GEDs.|
|`id`|Ignored.|
|`name`|Supported. See the definition of associated GEDs.|
|`nillable`|Must be true for associated GEDs.|
|`substitutionGroup`|Forbidden in associated GEDs.|
|`type`|Supported, must match the associated type for associated GEDs (unless the element contains an anonymous type).|
### <a name="xselement-contents"></a>\<xs:element>: contents
|Contents|Schema|
|--------------|------------|
|`simpleType`|Supported.*|
|`complexType`|Supported.*|
|`unique`|Ignored.|
|`key`|Ignored.|
|`keyref`|Ignored.|
|(empty)|Supported.|
\* When `simpleType` and `complexType` are used, the mapping for anonymous types is the same as for non-anonymous types, except that there is no such thing as an anonymous data contract, so a named data contract is created instead, with a generated name derived from the element name. The rules for anonymous types are as follows:
- WCF implementation detail: if the `xs:element` name does not contain periods, the anonymous type maps to an inner type of the outer data contract type. If the name does contain periods, the resulting data contract type is independent (not an inner type).
- The generated data contract name of the inner type is the data contract name of the outer type, followed by a period, the element name, and the string "Type".
- If a data contract with that name already exists, the name is made unique by appending "1", "2", "3", and so on until a unique name is found.
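As a hedged illustration of the naming rules above (the names used here are invented for this example), importing a schema fragment such as the following:

```xml
<!-- Hypothetical fragment: "Order" contains an element "Item" with an
     anonymous complex type. The element name contains no periods, so the
     anonymous type maps to an inner type of the "Order" contract type. -->
<xs:complexType name="Order">
  <xs:sequence>
    <xs:element name="Item" minOccurs="0">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="Sku" type="xs:string" minOccurs="0"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:sequence>
</xs:complexType>
```

would, per the rules above, produce an inner data contract type whose generated data contract name is "Order.ItemType" (the outer contract name, a period, the element name, and the string "Type").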
## <a name="simple-types---xssimpletype"></a>Simple types - \<xs:simpleType>
### <a name="xssimpletype-attributes"></a>\<xs:simpleType>: attributes
|Attribute|Schema|
|---------------|------------|
|`final`|Ignored.|
|`id`|Ignored.|
|`name`|Supported, maps to the data contract name.|
### <a name="xssimpletype-contents"></a>\<xs:simpleType>: contents
|Contents|Schema|
|--------------|------------|
|`restriction`|Supported. Maps to enumeration data contracts. This attribute is ignored if it does not match the enumeration pattern. See the `xs:simpleType` restrictions section.|
|`list`|Supported. Maps to flags enumeration data contracts. See the `xs:simpleType` lists section.|
|`union`|Forbidden.|
### <a name="xsrestriction"></a>\<xs:restriction>
- Complex type restrictions are supported only for base="`xs:anyType`".
- Simple type restrictions of `xs:string` that have no restriction facets other than `xs:enumeration` are mapped to enumeration data contracts.
- All other simple type restrictions are mapped to the types they restrict. For example, a restriction of `xs:int` maps to an integer, just as `xs:int` itself does. For more information about primitive type mapping, see the type/primitive mapping section.
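For instance, the following hypothetical fragment shows the last rule in action:

```xml
<!-- Hypothetical: restriction facets other than xs:enumeration are ignored,
     so this simple type is imported as the restricted base type itself
     (a plain System.Int32), not as a range-checked type. -->
<xs:simpleType name="Percentage">
  <xs:restriction base="xs:int">
    <xs:minInclusive value="0"/>
    <xs:maxInclusive value="100"/>
  </xs:restriction>
</xs:simpleType>
```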
### <a name="xsrestriction-attributes"></a>\<xs:restriction>: attributes
|Attribute|Schema|
|---------------|------------|
|`base`|Must be a supported simple type or `xs:anyType`.|
|`id`|Ignored.|
### <a name="xsrestriction-for-all-other-cases-contents"></a>\<xs:restriction> for all other cases: contents
|Contents|Schema|
|--------------|------------|
|`simpleType`|If present, must be derived from a supported primitive type.|
|`minExclusive`|Ignored.|
|`minInclusive`|Ignored.|
|`maxExclusive`|Ignored.|
|`maxInclusive`|Ignored.|
|`totalDigits`|Ignored.|
|`fractionDigits`|Ignored.|
|`length`|Ignored.|
|`minLength`|Ignored.|
|`maxLength`|Ignored.|
|`enumeration`|Ignored.|
|`whiteSpace`|Ignored.|
|`pattern`|Ignored.|
|(empty)|Supported.|
## <a name="enumeration"></a>Enumeration
### <a name="xsrestriction-for-enumerations-attributes"></a>\<xs:restriction> for enumerations: attributes
|Attribute|Schema|
|---------------|------------|
|`base`|If present, must be `xs:string`.|
|`id`|Ignored.|
### <a name="xsrestriction-for-enumerations-contents"></a>\<xs:restriction> for enumerations: contents
|Contents|Schema|
|--------------|------------|
|`simpleType`|If present, must be an enumeration restriction supported by data contracts (this section).|
|`minExclusive`|Ignored.|
|`minInclusive`|Ignored.|
|`maxExclusive`|Ignored.|
|`maxInclusive`|Ignored.|
|`totalDigits`|Ignored.|
|`fractionDigits`|Ignored.|
|`length`|Forbidden.|
|`minLength`|Forbidden.|
|`maxLength`|Forbidden.|
|`enumeration`|Supported. The enumeration "id" is ignored, and "value" maps to the value name in the enumeration data contract.|
|`whiteSpace`|Forbidden.|
|`pattern`|Forbidden.|
|(empty)|Supported, maps to an empty enumeration type.|
The following code shows a C# enumeration type.
```csharp
public enum MyEnum
{
    first = 3,
    second = 4,
    third = 5
}
```
`DataContractSerializer` maps this type to the following schema. If the enumeration values start from 1, the `xs:annotation` blocks are not generated.
```xml
<xs:simpleType name="MyEnum">
<xs:restriction base="xs:string">
<xs:enumeration value="first">
<xs:annotation>
<xs:appinfo>
<EnumerationValue
xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
3
</EnumerationValue>
</xs:appinfo>
</xs:annotation>
</xs:enumeration>
<xs:enumeration value="second">
<xs:annotation>
<xs:appinfo>
<EnumerationValue
xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
4
</EnumerationValue>
</xs:appinfo>
</xs:annotation>
</xs:enumeration>
</xs:restriction>
</xs:simpleType>
```
### <a name="xslist"></a>\<xs:list>
`DataContractSerializer` maps enumeration types marked with `System.FlagsAttribute` to `xs:list` derived from `xs:string`. No other `xs:list` variations are supported.
### <a name="xslist-attributes"></a>\<xs:list>: attributes
|Attribute|Schema|
|---------------|------------|
|`itemType`|Forbidden.|
|`id`|Ignored.|
### <a name="xslist-contents"></a>\<xs:list>: contents
|Contents|Schema|
|--------------|------------|
|`simpleType`|Must be a restriction of `xs:string` using the `xs:enumeration` facet.|
If an enumeration value does not follow a power-of-two progression (the default for flags), the value is stored in the `xs:annotation/xs:appInfo/ser:EnumerationValue` element.
For example, the following code marks an enumeration type with flags.
```csharp
[Flags]
public enum AuthFlags
{
AuthAnonymous = 1,
AuthBasic = 2,
AuthNTLM = 4,
AuthMD5 = 16,
AuthWindowsLiveID = 64,
}
```
This type maps to the following schema.
```xml
<xs:simpleType name="AuthFlags">
<xs:list>
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="AuthAnonymous" />
<xs:enumeration value="AuthBasic" />
<xs:enumeration value="AuthNTLM" />
<xs:enumeration value="AuthMD5">
<xs:annotation>
<xs:appinfo>
              <EnumerationValue xmlns="http://schemas.microsoft.com/2003/10/Serialization/">16</EnumerationValue>
</xs:appinfo>
</xs:annotation>
</xs:enumeration>
<xs:enumeration value="AuthWindowsLiveID">
<xs:annotation>
<xs:appinfo>
              <EnumerationValue xmlns="http://schemas.microsoft.com/2003/10/Serialization/">64</EnumerationValue>
</xs:appinfo>
</xs:annotation>
</xs:enumeration>
</xs:restriction>
</xs:simpleType>
</xs:list>
</xs:simpleType>
```
## <a name="inheritance"></a>Inheritance
### <a name="general-rules"></a>General rules
A data contract can inherit from another data contract. Such data contracts map to base and derived types using the `<xs:extension>` XML schema construct.
A data contract cannot inherit from a collection data contract.
The following code shows an example data contract.
```csharp
[DataContract]
public class Person
{
[DataMember]
public string Name;
}
[DataContract]
public class Employee : Person
{
[DataMember]
public int ID;
}
```
This data contract maps to the following XML schema type declaration.
```xml
<xs:complexType name="Employee">
<xs:complexContent mixed="false">
<xs:extension base="tns:Person">
<xs:sequence>
<xs:element minOccurs="0" name="ID" type="xs:int"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
<xs:complexType name="Person">
<xs:sequence>
<xs:element minOccurs="0" name="Name"
nillable="true" type="xs:string"/>
</xs:sequence>
</xs:complexType>
```
### <a name="xscomplexcontent-attributes"></a>\<xs:complexContent>: attributes
|Attribute|Schema|
|---------------|------------|
|`id`|Ignored.|
|`mixed`|Must be false.|
### <a name="xscomplexcontent-contents"></a>\<xs:complexContent>: contents
|Contents|Schema|
|--------------|------------|
|`restriction`|Forbidden, except when base="`xs:anyType`". The latter is equivalent to placing the contents of the `xs:restriction` directly under the `xs:complexContent` container.|
|`extension`|Supported. Maps to data contract inheritance.|
### <a name="xsextension-in-xscomplexcontent-attributes"></a>\<xs:extension> in \<xs:complexContent>: attributes
|Attribute|Schema|
|---------------|------------|
|`id`|Ignored.|
|`base`|Supported. Maps to the base data contract type that this type inherits from.|
### <a name="xsextension-in-xscomplexcontent-contents"></a>\<xs:extension> in \<xs:complexContent>: contents
The rules are the same as for `<xs:complexType>` contents.
If an `<xs:sequence>` is provided, its member elements map to the additional data members present in the derived data contract.
If a derived type contains an element with the same name as an element in a base type, the duplicate element declaration maps to a data member with a name generated to be unique. Positive integers are appended to the data member name ("member1", "member2", and so on) until a unique name is found. Conversely:
- If a derived data contract has a data member with the same name and type as a data member in a base data contract, `DataContractSerializer` generates this corresponding element in the derived type.
- If a derived data contract has a data member with the same name as a data member in a base data contract but a different type, `DataContractSerializer` imports a schema with an element of type `xs:anyType` in both the base and derived type declarations. The original type name is preserved in `xs:annotations/xs:appInfo/ser:ActualType/@Name`.
Both variations can result in a schema with an ambiguous content model, which depends on the order of the respective data members.
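The same-name, different-type case can be illustrated with a hedged sketch (the contract types below are invented for this example; the schema behavior is the one described by the rules above, not reproduced verbatim):

```csharp
// Hypothetical example: base and derived contracts both expose a data
// member serialized under the element name "Id".
[DataContract]
public class BaseRecord
{
    // Serialized as the element "Id" (an xs:int) in the base type.
    [DataMember]
    public int Id;
}

[DataContract]
public class DerivedRecord : BaseRecord
{
    // Same element name as BaseRecord.Id but a different type. Per the
    // rules above, schema export falls back to xs:anyType for both
    // declarations and records the original type names in
    // ser:ActualType/@Name annotations.
    [DataMember(Name = "Id")]
    public string DerivedId;
}
```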
## <a name="typeprimitive-mapping"></a>Type/primitive mapping
`DataContractSerializer` uses the following mapping for XML schema primitive types.
|XSD type|.NET type|
|--------------|---------------|
|`anyType`|<xref:System.Object>.|
|`anySimpleType`|<xref:System.String>.|
|`duration`|<xref:System.TimeSpan>.|
|`dateTime`|<xref:System.DateTime>.|
|`dateTimeOffset`|<xref:System.DateTime> and <xref:System.TimeSpan> objects for the offset. See DateTimeOffset serialization below.|
|`time`|<xref:System.String>.|
|`date`|<xref:System.String>.|
|`gYearMonth`|<xref:System.String>.|
|`gYear`|<xref:System.String>.|
|`gMonthDay`|<xref:System.String>.|
|`gDay`|<xref:System.String>.|
|`gMonth`|<xref:System.String>.|
|`boolean`|<xref:System.Boolean>.|
|`base64Binary`|<xref:System.Byte> array.|
|`hexBinary`|<xref:System.String>.|
|`float`|<xref:System.Single>.|
|`double`|<xref:System.Double>.|
|`anyURI`|<xref:System.Uri>.|
|`QName`|<xref:System.Xml.XmlQualifiedName>.|
|`string`|<xref:System.String>.|
|`normalizedString`|<xref:System.String>.|
|`token`|<xref:System.String>.|
|`language`|<xref:System.String>.|
|`Name`|<xref:System.String>.|
|`NCName`|<xref:System.String>.|
|`ID`|<xref:System.String>.|
|`IDREF`|<xref:System.String>.|
|`IDREFS`|<xref:System.String>.|
|`ENTITY`|<xref:System.String>.|
|`ENTITIES`|<xref:System.String>.|
|`NMTOKEN`|<xref:System.String>.|
|`NMTOKENS`|<xref:System.String>.|
|`decimal`|<xref:System.Decimal>.|
|`integer`|<xref:System.Int64>.|
|`nonPositiveInteger`|<xref:System.Int64>.|
|`negativeInteger`|<xref:System.Int64>.|
|`long`|<xref:System.Int64>.|
|`int`|<xref:System.Int32>.|
|`short`|<xref:System.Int16>.|
|`byte`|<xref:System.SByte>.|
|`nonNegativeInteger`|<xref:System.Int64>.|
|`unsignedLong`|<xref:System.UInt64>.|
|`unsignedInt`|<xref:System.UInt32>.|
|`unsignedShort`|<xref:System.UInt16>.|
|`unsignedByte`|<xref:System.Byte>.|
|`positiveInteger`|<xref:System.Int64>.|
## <a name="iserializable-types-mapping"></a>ISerializable types mapping
In [!INCLUDE[dnprdnshort](../../../../includes/dnprdnshort-md.md)] version 1.0, `ISerializable` was introduced as a general mechanism for serializing objects for persistence or data transfer. Many [!INCLUDE[dnprdnshort](../../../../includes/dnprdnshort-md.md)] types implement `ISerializable` and can be passed between applications. `DataContractSerializer` naturally provides support for `ISerializable` classes. `DataContractSerializer` maps `ISerializable` implementations to schema types that differ only by the QName (qualified name) of the type and are property collections. For example, `DataContractSerializer` maps <xref:System.Exception> to the following XSD type in the `http://schemas.datacontract.org/2004/07/System` namespace.
```xml
<xs:complexType name="Exception">
<xs:sequence>
<xs:any minOccurs="0" maxOccurs="unbounded"
namespace="##local" processContents="skip"/>
</xs:sequence>
<xs:attribute ref="ser:FactoryType"/>
</xs:complexType>
```
The optional `ser:FactoryType` attribute declared in the data contract serialization schema references a factory class that can deserialize the type. The factory class must be part of the known types collection of the `DataContractSerializer` instance being used. For more information about known types, see [Data Contract Known Types](../../../../docs/framework/wcf/feature-details/data-contract-known-types.md).
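As a hedged sketch of adding a factory class to the known types collection (the `ExceptionFactory` class name is invented here; passing known types through the `DataContractSerializer` constructor is the standard mechanism):

```csharp
// Hypothetical: ExceptionFactory is a factory class referenced by
// ser:FactoryType in the incoming XML. Adding it to the serializer's
// known types collection lets the serializer instantiate it on
// deserialization.
var serializer = new DataContractSerializer(
    typeof(System.Exception),
    new Type[] { typeof(ExceptionFactory) });
```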
## <a name="datacontract-serialization-schema"></a>DataContract serialization schema
A number of schemas exported by `DataContractSerializer` use types, elements, and attributes from a special data contract serialization namespace:
`http://schemas.microsoft.com/2003/10/Serialization`
The following is a complete data contract serialization schema declaration.
```xml
<xs:schema attributeFormDefault="qualified"
elementFormDefault="qualified"
targetNamespace =
"http://schemas.microsoft.com/2003/10/Serialization/"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://schemas.microsoft.com/2003/10/Serialization/">
<!-- Top-level elements for primitive types. -->
<xs:element name="anyType" nillable="true" type="xs:anyType"/>
<xs:element name="anyURI" nillable="true" type="xs:anyURI"/>
<xs:element name="base64Binary"
nillable="true" type="xs:base64Binary"/>
<xs:element name="boolean" nillable="true" type="xs:boolean"/>
<xs:element name="byte" nillable="true" type="xs:byte"/>
<xs:element name="dateTime" nillable="true" type="xs:dateTime"/>
<xs:element name="decimal" nillable="true" type="xs:decimal"/>
<xs:element name="double" nillable="true" type="xs:double"/>
<xs:element name="float" nillable="true" type="xs:float"/>
<xs:element name="int" nillable="true" type="xs:int"/>
<xs:element name="long" nillable="true" type="xs:long"/>
<xs:element name="QName" nillable="true" type="xs:QName"/>
<xs:element name="short" nillable="true" type="xs:short"/>
<xs:element name="string" nillable="true" type="xs:string"/>
<xs:element name="unsignedByte"
nillable="true" type="xs:unsignedByte"/>
<xs:element name="unsignedInt"
nillable="true" type="xs:unsignedInt"/>
<xs:element name="unsignedLong"
nillable="true" type="xs:unsignedLong"/>
<xs:element name="unsignedShort"
nillable="true" type="xs:unsignedShort"/>
<!-- Primitive types introduced for certain .NET simple types. -->
<xs:element name="char" nillable="true" type="tns:char"/>
<xs:simpleType name="char">
<xs:restriction base="xs:int"/>
</xs:simpleType>
<!-- xs:duration is restricted to an ordered value space,
to map to System.TimeSpan -->
<xs:element name="duration" nillable="true" type="tns:duration"/>
<xs:simpleType name="duration">
<xs:restriction base="xs:duration">
<xs:pattern
value="\-?P(\d*D)?(T(\d*H)?(\d*M)?(\d*(\.\d*)?S)?)?"/>
<xs:minInclusive value="-P10675199DT2H48M5.4775808S"/>
<xs:maxInclusive value="P10675199DT2H48M5.4775807S"/>
</xs:restriction>
</xs:simpleType>
<xs:element name="guid" nillable="true" type="tns:guid"/>
<xs:simpleType name="guid">
<xs:restriction base="xs:string">
<xs:pattern value="[\da-fA-F]{8}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{12}"/>
</xs:restriction>
</xs:simpleType>
<!-- This is used for schemas exported from ISerializable type. -->
<xs:attribute name="FactoryType" type="xs:QName"/>
</xs:schema>
```
The following points are worth noting:
- `ser:char` is introduced to represent Unicode characters of type <xref:System.Char>.
- The value space of `xs:duration` is reduced to an ordered set so that it can map to a <xref:System.TimeSpan>.
- `FactoryType` is used in schemas exported from types that derive from <xref:System.Runtime.Serialization.ISerializable>.
## <a name="importing-non-datacontract-schemas"></a>Importing non-DataContract schemas
`DataContractSerializer` has an `ImportXmlTypes` option that allows importing schemas that do not conform to the `DataContractSerializer` XSD profile (see the <xref:System.Runtime.Serialization.XsdDataContractImporter.Options%2A> property). Setting this option to `true` accepts non-conforming schema types and maps them to the following implementation of <xref:System.Xml.Serialization.IXmlSerializable>, which wraps an array of <xref:System.Xml.XmlNode> (only the class name differs).
```csharp
[GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")]
[System.Xml.Serialization.XmlSchemaProviderAttribute("ExportSchema")]
[System.Xml.Serialization.XmlRootAttribute(IsNullable=false)]
public partial class Person : object, IXmlSerializable
{
private XmlNode[] nodesField;
private static XmlQualifiedName typeName =
new XmlQualifiedName("Person","http://Microsoft.ServiceModel.Samples");
public XmlNode[] Nodes
{
get {return this.nodesField;}
set {this.nodesField = value;}
}
public void ReadXml(XmlReader reader)
{
this.nodesField = XmlSerializableServices.ReadNodes(reader);
}
public void WriteXml(XmlWriter writer)
{
XmlSerializableServices.WriteNodes(writer, this.Nodes);
}
public System.Xml.Schema.XmlSchema GetSchema()
{
return null;
}
public static XmlQualifiedName ExportSchema(XmlSchemaSet schemas)
{
XmlSerializableServices.AddDefaultSchema(schemas, typeName);
return typeName;
}
}
```
## <a name="datetimeoffset-serialization"></a>DateTimeOffset serialization
<xref:System.DateTimeOffset> is not treated as a primitive type. Instead, it is serialized as a complex element with two parts. The first part represents the date and time, and the second part represents the offset from that date and time. An example of a serialized DateTimeOffset value is shown in the following code.
```xml
<OffSet xmlns:a="http://schemas.datacontract.org/2004/07/System">
<DateTime i:type="b:dateTime" xmlns=""
xmlns:b="http://www.w3.org/2001/XMLSchema">2008-08-28T08:00:00
</DateTime>
<OffsetMinutes i:type="b:short" xmlns=""
xmlns:b="http://www.w3.org/2001/XMLSchema">-480
</OffsetMinutes>
</OffSet>
```
The schema is as follows:
```xml
<xs:schema targetNamespace="http://schemas.datacontract.org/2004/07/System">
<xs:complexType name="DateTimeOffset">
<xs:sequence minOccurs="1" maxOccurs="1">
<xs:element name="DateTime" type="xs:dateTime"
minOccurs="1" maxOccurs="1" />
      <xs:element name="OffsetMinutes" type="xs:short"
minOccurs="1" maxOccurs="1" />
</xs:sequence>
</xs:complexType>
</xs:schema>
```
## <a name="see-also"></a>See also
<xref:System.Runtime.Serialization.DataContractSerializer>
<xref:System.Runtime.Serialization.DataContractAttribute>
<xref:System.Runtime.Serialization.DataMemberAttribute>
<xref:System.Runtime.Serialization.XsdDataContractImporter>
[Using Data Contracts](../../../../docs/framework/wcf/feature-details/using-data-contracts.md)
| 51.865412 | 835 | 0.69402 | fra_Latn | 0.856854 |
e5a6d80498c09bb3709656209d8fdb4bc77ac373 | 2,210 | md | Markdown | _posts/2017-01-28-Haskell-Module.md | CoderDB/coderDB.github.io | 7178ec2f98b93d8e6e4e8c964afc9a95ca0d8787 | [
"MIT"
] | null | null | null | _posts/2017-01-28-Haskell-Module.md | CoderDB/coderDB.github.io | 7178ec2f98b93d8e6e4e8c964afc9a95ca0d8787 | [
"MIT"
] | null | null | null | _posts/2017-01-28-Haskell-Module.md | CoderDB/coderDB.github.io | 7178ec2f98b93d8e6e4e8c964afc9a95ca0d8787 | [
"MIT"
] | null | null | null | ---
layout: post
date: 2017-01-28
title: Haskell 中的 Module
img: "haskell_module.jpg"
---
*模组* 就是一大袋花样众多的零食,拆了就吃。
Module
---
---
Haskell 的标准库就是一组 *模组*,每个模组都包含一些功能相近或相似的函数或型别。比如之前所有文章中的测试例子都是基于 *Prelude* 模组。
装载模组
---
---
Haskell 中装载模组用关键字 **import**,就像 Objective-C 中定义了一个 **People** 类,在需要用到它的某个实现文件中以 *#import "People.h"* 引入一样,而 **People** 可以看作是一个模组。比如在 Haskell 中装载 *Data.List* ,新建一个文件命名为 **testModule.hs**,在文件的开始处装载模组,像这样键入 *import Data.List*
{% highlight haskell %}
ghci> :t nub
<interactive>:1:1: error: Variable not in scope: nub
ghci> :l testModule.hs
[1 of 1] Compiling Main ( testModule.hs, interpreted )
Ok, modules loaded: Main.
ghci> :t nu
nub nubBy null
-- 装载了 Data.List 模组之后,nub 是该模组中的一个函数
ghci> :t nub
nub :: Eq a => [a] -> [a]
{% endhighlight %}
**nub** 函数是筛掉一个 List 中所有重复的元素。
{% highlight haskell %}
ghci>nub [1, 2, 4, 3, 2, 1]
[1,2,4,3]
ghci>nub [1, 2, 3, 3, 2, 1]
[1,2,3]
ghci>nub [1, 2, 3, 3, 2, 1, 1, 2, 3]
[1,2,3]
{% endhighlight %}
**在 ghci 中装载模组** 用 **:m**。**:m** 就是 **:module** 简写。
{% highlight haskell %}
ghci>:t nub
<interactive>:1:1: error: Variable not in scope: nub
ghci>:m
:main :module
ghci>:m Data.List
ghci>:t nu
nub nubBy null
{% endhighlight %}
**:m** 可以一次装载多个模组。
{% highlight haskell %}
ghci>:m Data.List Data.Map Data.Map.Lazy
{% endhighlight %}
**:m** 也可以装载指定的函数。
{% highlight haskell %}
ghci>:m Data.List(nub,sort)
syntax: :module [+/-] [*]M1 ... [*]Mn
-- 只装载 nub、sort 函数
ghci>:t sort
sort :: Ord a => [a] -> [a]
ghci>sort [1, 5, 4]
[1,4,5]
{% endhighlight %}
**:m** 包含去除指定函数的模组。
{% highlight haskell %}
ghci> :m Data.List hiding (sort)
syntax: :module [+/-] [*]M1 ... [*]Mn
ghci> :t sort
<interactive>:1:1: error:
• Variable not in scope: sort
{% endhighlight %}
**qualified**
在装载模组时为了避免命名冲突,可以使用 **qualified** 关键字。比如:
{% highlight haskell %}
import qualified Data.Map
{% endhighlight %}
为某个模组起个别名
{% highlight haskell %}
import qualified Data.Map as M
ghci>:l testModule.hs
[1 of 1] Compiling Main ( testModule.hs, interpreted )
Ok, modules loaded: Main.
ghci>M.fi
M.filter M.filterWithKey M.findIndex M.findMax M.findMin M.findWithDefault
{% endhighlight %}
| 18.888889 | 222 | 0.633937 | kor_Hang | 0.290985 |
e5a719680c64403a4a8784fd8b716d19c5507bff | 862 | md | Markdown | Images and Conteiners Study/sonic-search/README.md | BrunoComitre/docker-resource-inventory | a3f797101320fa0b9c02bc93e10574a3d2a315d3 | [
"MIT"
] | null | null | null | Images and Conteiners Study/sonic-search/README.md | BrunoComitre/docker-resource-inventory | a3f797101320fa0b9c02bc93e10574a3d2a315d3 | [
"MIT"
] | null | null | null | Images and Conteiners Study/sonic-search/README.md | BrunoComitre/docker-resource-inventory | a3f797101320fa0b9c02bc93e10574a3d2a315d3 | [
"MIT"
] | null | null | null | # SONIC SEARCH
Documentation on implementing dynamic search using Sonic
***
<br />
## Quickstart
### Locally:
Install the sonic image:
``` $ docker pull valeriansaliou/sonic:v1.3.0 ```
Enter the file address:
``` $ pwd ```
Change the path and rotate the container:
``` $ docker run -p 1491:1491 -v "pwd"/sonic/config.cfg:/etc/sonic.cfg -v "pwd"/sonic/store/:/var/lib/sonic/store/ valeriansaliou/sonic:v1.3.0 ```
Inside the project root add a folder called sonic and inside it create the file: config.cfg
Note: The file found in the project has a standard configuration. Don't forget to change the default password when running in production.
The example video is: [Searching for 1 million data in 0.8ms with Rust | Code/Drops #66](https://www.youtube.com/watch?v=rNCGwggC1RI&t=8s)
Project Link: [Sonic] (https://github.com/valeriansaliou/sonic)
***
| 27.806452 | 146 | 0.728538 | eng_Latn | 0.828305 |
e5a74f20b27697015fe85ea1784c2c71ec49a26c | 3,238 | md | Markdown | translations/it/docs/vanilla/api/data/ShortData.md | Lgmrszd/CraftTweaker-Documentation | 2e02a542dea9a4dbaec91a8e594aa7fba1a00206 | [
"MIT"
] | 1 | 2021-12-15T20:34:09.000Z | 2021-12-15T20:34:09.000Z | translations/it/docs/vanilla/api/data/ShortData.md | Lgmrszd/CraftTweaker-Documentation | 2e02a542dea9a4dbaec91a8e594aa7fba1a00206 | [
"MIT"
] | null | null | null | translations/it/docs/vanilla/api/data/ShortData.md | Lgmrszd/CraftTweaker-Documentation | 2e02a542dea9a4dbaec91a8e594aa7fba1a00206 | [
"MIT"
] | null | null | null | # ShortData
Questa classe è stata aggiunta da una mod con ID `crafttweaker`. Perciò, è necessario avere questa mod installata per poter utilizzare questa funzione.
## Importare la classe
Potrebbe essere necessario importare il pacchetto, se si incontrano dei problemi (come castare un vettore), quindi meglio essere sicuri e aggiungere la direttiva di importazione.
```zenscript
crafttweaker.api.data.ShortData
```
## Interfacce Implementate
ShortData implementa le seguenti interfacce. Ciò significa che ogni metodo presente nell'interfaccia può essere usato anche per questa classe.
- [crafttweaker.api.data.INumberData](/vanilla/api/data/INumberData)
## Costruttori
```zenscript
new crafttweaker.api.data.ShortData(interno come breve);
```
| Parametro | Tipo | Descrizione |
| --------- | ----- | --------------------------- |
| interno | breve | Nessuna descrizione fornita |
## Metodi
### asList
Ottiene una lista<IData> rappresentazione di questo IData, restituisce nulla su qualsiasi cosa tranne [crafttweaker.api.data.ListData](/vanilla/api/data/ListData).
Restituisce: `null se questo IData non è una lista.`
Restituisce Lista<[crafttweaker.api.data.IData](/vanilla/api/data/IData)>
```zenscript
1058.asList();
```
### asMap
Ottiene una rappresentazione mappa<String, IData> di questo IData, restituisce nulla su qualsiasi cosa tranne [crafttweaker.api.data.MapData](/vanilla/api/data/MapData).
Restituisce: `null se questo IData non è una mappa.`
Restituisce [crafttweaker.api.data.IData](/vanilla/api/data/IData)[String]
```zenscript
1058.asMap();
```
### asString
Ottiene la rappresentazione stringa di questo IData
Restituisce: `Stringa che rappresenta questo IData (valore e tipo).`
Ritorna una stringa
```zenscript
1058.asString();
```
### contiene
Controlla se questo IData contiene un altro IData, usato principalmente nelle sottoclassi di [crafttweaker. pi.data.ICollectionData](/vanilla/api/data/ICollectionData), è lo stesso di un controllo uguale su altri tipi di IData
Restituisce un booleano
```zenscript
1058.contains(dati come crafttweaker.api.data.IData);
1058.contains("Display");
```
| Parametro | Tipo | Descrizione |
| --------- | ------------------------------------------------------ | ------------------------------------- |
| dati | [crafttweaker.api.data.IData](/vanilla/api/data/IData) | dati per verificare se sono contenuti |
### copia
Rende una copia di questo IData.
IData è immutabile per impostazione predefinita, usala per creare una copia corretta dell'oggetto.
Restituisce: `una copia di questo IData.`
Restituisce [crafttweaker.api.data.IData](/vanilla/api/data/IData)
```zenscript
1058.copy();
```
### getId
Ottiene l'ID del tag NBT interno.
Usato per determinare quale tipo di NBT è memorizzato (in un elenco per esempio)
Restituisce: `ID del tag NBT che questi dati rappresentano.`
Restituisce byte
```zenscript
1058.getId();
```
### getString
Ottiene la rappresentazione della stringa del tag INBT interno
Restituisce: `Stringa che rappresenta l'INBT interno di questo IData.`
Ritorna una stringa
```zenscript
1058.getString();
```
| 26.760331 | 226 | 0.703521 | ita_Latn | 0.993176 |
e5a7b6883c6c9f4109afeb8c4778a800f9afb0ea | 94 | md | Markdown | microsoft/office365/exchange-online-plan-2/tlh.md | Hexatown/docs | 8baa1eb908c9182449d094b697dd50d197a8a0a1 | [
"MIT"
] | 1 | 2017-10-09T07:56:11.000Z | 2017-10-09T07:56:11.000Z | docs/exchange-online-plan-2/tlh.md | jumpto365/microsoft365 | 6f394aea1f15c05168ea66bbaa8d9d9d7155e104 | [
"MIT"
] | 4 | 2017-12-01T15:47:58.000Z | 2022-02-26T04:07:52.000Z | microsoft/office365/exchange-online-plan-2/tlh.md | Hexatown/docs | 8baa1eb908c9182449d094b697dd50d197a8a0a1 | [
"MIT"
] | 6 | 2017-10-09T08:00:58.000Z | 2018-12-10T16:31:36.000Z | ---
title: tam-online-nab-2
inshort: undefine
translator: Microsoft Cognitive Services
---
| 10.444444 | 40 | 0.734043 | eng_Latn | 0.613714 |
e5a8a63694d2faf12c840aba05244f89d02d5586 | 1,790 | md | Markdown | README.md | xcomponent/koordinator-kafka-demo | 6c89ed6e3ec89ce1ce4d8e0dada2ba0088a58e8a | [
"Apache-2.0"
] | null | null | null | README.md | xcomponent/koordinator-kafka-demo | 6c89ed6e3ec89ce1ce4d8e0dada2ba0088a58e8a | [
"Apache-2.0"
] | null | null | null | README.md | xcomponent/koordinator-kafka-demo | 6c89ed6e3ec89ce1ce4d8e0dada2ba0088a58e8a | [
"Apache-2.0"
] | null | null | null | [](http://slack.xcomponent.com/)
[](https://circleci.com/gh/xcomponent/koordinator-kafka-demo/tree/master)
[](https://github.com/ellerbrock/typescript-badges/)
# Koordinator Kafka Demo
This repository demos the use of Koordinator to integrate batch and stream
based jobs. In this example, we have a batch job `image-search`, that given some
search terms generate a list of urls of images matching the terms; and
`download-image`, a Kafka producer/consumer written in Java that allows one to
download the images in parallel.
## Requirements
* A Kafka broker, in our tests we use [this](https://hub.docker.com/r/spotify/kafka/) docker container
* Set the variables `WORKER_USERNAME` and `WORKER_PASSWORD` as your credentials
in the Koordinator server, `KOORDINATOR_URL` as the url of your server and `KAFKA_BROKER` as the name of your Kafka broker.
## Build
1. Install the dependencies of the `image-search` worker: `cd image-search && yarn install && cd ..`
2. Build the `download-image` worker: `cd download-image && ./build.sh && cd ..`
## Run
Just run the script `./test.sh`. It will start the workers and the workflow.
You should be able to monitor its execution in the Koordinator frontend.
### Submit a Pull Request
1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request
## License
[Apache License V2](https://raw.githubusercontent.com/xcomponent/koordinator-kafka-demo/master/LICENSE)
| 43.658537 | 169 | 0.753631 | eng_Latn | 0.867025 |
e5a8dee2608607de837a550febdb7f0d42cc493e | 520 | md | Markdown | content/publication/de-2016-improving/index.md | dixitmudit/MD | 617f212d8b6cd86cf2782a9b10d9b0bad8c177ff | [
"MIT"
] | null | null | null | content/publication/de-2016-improving/index.md | dixitmudit/MD | 617f212d8b6cd86cf2782a9b10d9b0bad8c177ff | [
"MIT"
] | null | null | null | content/publication/de-2016-improving/index.md | dixitmudit/MD | 617f212d8b6cd86cf2782a9b10d9b0bad8c177ff | [
"MIT"
] | 1 | 2021-01-07T06:27:13.000Z | 2021-01-07T06:27:13.000Z | ---
title: "Improving energy density and structural stability of manganese oxide cathodes for Na-ion batteries by structural lithium substitution"
date: 2016-01-01
publishDate: 2020-07-22T01:16:21.087637Z
authors: ["Ezequiel de la Llave", "Elahe Talaie", "Elena Levi", "Prasant Kumar Nayak", "Mudit Dixit", "Penki Tirupathi Rao", "Pascal Hartmann", "Frederick Chesneau", "Dan Thomas Major", "Miri Greenstein", " others"]
publication_types: ["2"]
abstract: ""
featured: false
publication: "*Chemistry of Materials*"
---
| 43.333333 | 215 | 0.748077 | eng_Latn | 0.607504 |
e5a948282bfbc99ec917e74c27670de58c88d4e1 | 3,013 | md | Markdown | README.md | tastypackets/actions-docker-gcr | b4ed48080439aa4e1d850140c9b85ee18acb05e7 | [
"Apache-2.0"
] | null | null | null | README.md | tastypackets/actions-docker-gcr | b4ed48080439aa4e1d850140c9b85ee18acb05e7 | [
"Apache-2.0"
] | null | null | null | README.md | tastypackets/actions-docker-gcr | b4ed48080439aa4e1d850140c9b85ee18acb05e7 | [
"Apache-2.0"
] | null | null | null | # actions-docker
I forked this to add some missing functionality, such as passing a path and default adding a tag of the git branch.
Opinionated GitHub Actions for common Docker workflows
Fork from https://github.com/urcomputeringpal/actions-docker
## Opinions (expressed via default environment variables)
- [`REGISTRY=gcr.io`](https://gcr.io)
- `IMAGE=$GITHUB_REPOSITORY`
- (Expects a Google Cloud Project named after your GitHub username)
- `TAG=$GITHUB_SHA`
- `LATEST=true` (Adds the `latest` tag)
- `BRANCH_TAG=true` (Adds the current branch name as a tag)
- `WORKING_DIRECTORY=.` (Path variable passed into docker build)
## Usage
### Google Container Registry Setup
- If you haven't already, [create a Google Cloud Project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#creating_a_project) named after your GitHub username and follow the [Container Registry Quickstart](https://cloud.google.com/container-registry/docs/quickstart#before-you-begin).
- [Create a Service Account](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account) named after your GitHub repository.
- [Add the _Cloud Build Service Account_](https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource) role to this Service Account.
- [Generate a key for this Service Account](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating_service_account_keys). Download a JSON key when prompted.
- Create a Secret on your repository named `GCLOUD_SERVICE_ACCOUNT_KEY` (Settings > Secrets) with the contents of:
```shell
# Linux
cat path-to/key.json | base64 -w 0
# MacOS
cat path-to/key.json | base64 -b 0
```
- That's it! The GitHub Actions in this repository read this Secret and provide the correct values to the Docker daemon by default if present. If a Secret isn't present, `build` _may_ succeed but `push` will return an error!
### Build and push images for each commit
Add the following to `.github/workflows/docker.yaml`:
```yaml
name: Docker
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Docker Build
uses: tastypackets/actions-docker-gcr/build@master
- name: Docker Push
uses: tastypackets/actions-docker-gcr/push@master
env:
GCLOUD_SERVICE_ACCOUNT_KEY: ${{ secrets.GCLOUD_SERVICE_ACCOUNT_KEY }}
```
### Specify a different Registry, Project & image name
```yaml
[...]
steps:
- uses: actions/checkout@v2
- name: Docker Build
uses: tastypackets/actions-docker-gcr/build@master
env:
IMAGE: my-project/my-image
GCLOUD_REGISTRY: eu.gcr.io
- name: Docker Push
uses: tastypackets/actions-docker-gcr/push@master
env:
IMAGE: my-project/my-image
GCLOUD_REGISTRY: eu.gcr.io
GCLOUD_SERVICE_ACCOUNT_KEY: ${{ secrets.GCLOUD_SERVICE_ACCOUNT_KEY }}
```
| 35.869048 | 310 | 0.731829 | eng_Latn | 0.672444 |
e5a9c628776606f5e121cf57f69cfa842f9ae535 | 80 | md | Markdown | README.md | YunYouJun/multi-gobang | 0af9cb67aeb503adf55b02548601d01a34c3bc90 | [
"MIT"
] | null | null | null | README.md | YunYouJun/multi-gobang | 0af9cb67aeb503adf55b02548601d01a34c3bc90 | [
"MIT"
] | null | null | null | README.md | YunYouJun/multi-gobang | 0af9cb67aeb503adf55b02548601d01a34c3bc90 | [
"MIT"
] | null | null | null | # Multi-Gobang
A multiplayer Gobang game built with Cocos Creator.
developing ... | 16 | 48 | 0.7625 | eng_Latn | 0.933134 |
e5ab17a5609217d6359eed9556437481f6fcb2a2 | 3,598 | md | Markdown | _publications/en/2021-10-28-data-integration-and-simplification-framework-for-site-planning-and-building-design.md | LinJiarui/linjiarui.github.io | c12e861a2c458f9ffc09e129308bbf8026f2ef31 | [
"MIT"
] | null | null | null | _publications/en/2021-10-28-data-integration-and-simplification-framework-for-site-planning-and-building-design.md | LinJiarui/linjiarui.github.io | c12e861a2c458f9ffc09e129308bbf8026f2ef31 | [
"MIT"
] | null | null | null | _publications/en/2021-10-28-data-integration-and-simplification-framework-for-site-planning-and-building-design.md | LinJiarui/linjiarui.github.io | c12e861a2c458f9ffc09e129308bbf8026f2ef31 | [
"MIT"
] | 1 | 2021-02-18T08:45:14.000Z | 2021-02-18T08:45:14.000Z | ---
title: "A Data Integration and Simplification Framework for Improving Site Planning and Building Design"
lang: en
ref: publications/2021-10-28-data-integration-and-simplification-framework-for-site-planning-and-building-design
collection: publications
permalink: /en/publications/2021-10-28-data-integration-and-simplification-framework-for-site-planning-and-building-design
excerpt: 'This paper provides a feasible way to integrate planning and design data from different sources to enhance the evaluation and delivery of the results. Validation shows that the proposed method could integrate and visualize multi-source site planning and building design data efficiently, and a seamless database facilitates understanding of planning and design results and improves communication significantly.'
date: 2021-10-28
venue: 'IEEE Access'
doi: '10.1109/ACCESS.2021.3124010'
paperurl: 'http://doi.org/10.1109/ACCESS.2021.3124010'
citation: 'Leng, S., Lin, J.R., Li, S.W., Hu, Z.Z.* (2021). A Data Integration and Simplification Framework for Improving Site Planning and Building Design. <i>IEEE Access</i>, 9, 148845-148861. doi: 10.1109/ACCESS.2021.3124010'
comment: true
category: journal
tags:
- building design
- site planning
- data integration
- BIM
- GIS
- CIM
- geometric optimization
- rural vitalization
- SCI
grants:
- 2018YFD1100900
- 51778336
- 72091512
---
{{site.data.ui-text[page.lang].abstract}}
====
Site planning and building design results are generally managed in Geographic Information System (GIS) and Building Information Modeling/Model (BIM) separately. The incompatibility of data has brought potential challenges for the assessment and delivery of the results. A data integration and simplification framework for improving site planning and building design is proposed in this paper. A BIM-GIS integrated model with a multi-scale data structure is developed to link the results of site planning and building design together. Geometric optimization algorithms are then designed to generate simplified building models with different levels of details (LODs) based on the information required at each scale. This paper provides a feasible way to integrate planning and design data from different sources to enhance the evaluation and delivery of the results. The proposed approach is validated by a village construction project in east China, and results show that the method is capable to integrate site planning and building design results from different platforms and support seamless visualization of multi-scale geometric data. It is also found that a seamless database facilitates understanding of planning and design results and improves communication efficiency. Currently, the main limitation of this paper is the limited access to 3D real-world data, and data collection techniques like point cloud are expected to solve the problem.

[{{site.data.ui-text[page.lang].download_paper}}]({{page.paperurl}})
[{{site.data.ui-text[page.lang].download_preprint}}]({{ site.baseurl }}/files/2021-10-28-data-integration-and-simplification-framework-for-site-planning-and-building-design.pdf)
This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFD1100900, and in part by the National Natural Science Foundation of China under Grant 51778336 and Grant 72091512.
Accession Number: WOS:000716675800001
ISSN: 2169-3536
IDS Number: WU6TN | 67.886792 | 1,449 | 0.801556 | eng_Latn | 0.976916 |
e5ab5821f0e500ae7e58f81a62ec70b8e48131f6 | 2,275 | md | Markdown | README.md | greglearns/portier-broker | 935fcce0405fd0e3ac70ee8922c0f62bf0823a75 | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | greglearns/portier-broker | 935fcce0405fd0e3ac70ee8922c0f62bf0823a75 | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | greglearns/portier-broker | 935fcce0405fd0e3ac70ee8922c0f62bf0823a75 | [
"Apache-2.0",
"MIT"
] | null | null | null | # Portier Broker
This is the Portier Broker reference implementation.
- [Portier Broker on GitHub](https://github.com/portier/portier-broker)
- [Portier main website](https://portier.github.io/)
- [Portier specification](https://github.com/portier/portier.github.io/blob/master/Specs.md)
## How to run your own broker
[](https://heroku.com/deploy?template=https://github.com/portier/portier-broker/tree/master)
Portier is specified such that everyone can run their own broker instance. You
can point your Relying Parties at your own broker, so that you do not have to
depend on the broker run by the Portier project.
Binaries for the broker can be found on the [GitHub releases] page. Docker
images are also available on [Docker Hub]. Alternatively, you can [build the
broker] yourself.
[Docker Hub]: https://hub.docker.com/r/portier/broker
[GitHub releases]: https://github.com/portier/portier-broker/releases
[build the broker]: https://github.com/portier/portier-broker/blob/master/docs/build.md
To run your own broker, you'll need access to an SMTP server for sending email.
For local testing, [MailHog] can provide this for you.
[MailHog]: https://github.com/mailhog/MailHog
The broker can be configured using a configuration file or through environment
variables. Both are documented in the [example configuration file].
[example configuration file]: https://github.com/portier/portier-broker/blob/master/config.toml.dist
Once you've prepared the configuration, simply run the broker executable:
```bash
# From binaries:
./portier-broker[.exe] ./config.toml
# Using Docker:
docker run -v /srv/portier-broker:/data:ro portier/broker /data/config.toml
```
Some additional notes:
- If using environment variables only, don't specify a configuration file on
the command line.
- [Systemd units] are also included with the Linux binaries.
- The broker only talks plain HTTP, and not HTTPS. Using HTTPS is strongly
recommended, but you'll need to add a reverse proxy in front of the broker to
do this. ([Apache] or [Nginx] can do this for you.)
[Systemd units]: https://github.com/portier/portier-broker/tree/master/docs/systemd/
[Apache]: https://httpd.apache.org
[Nginx]: http://nginx.org
| 38.559322 | 152 | 0.767912 | eng_Latn | 0.86891 |
e5ab94831f0db0c556aca9405c654184ff8d929c | 2,710 | md | Markdown | docs/framework/unmanaged-api/metadata/imetadataimport-enummemberswithname-method.md | thomaslevesque/dotnet.docs | bf8a749061ad249e474d19f9ea065cda5210b14a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/metadata/imetadataimport-enummemberswithname-method.md | thomaslevesque/dotnet.docs | bf8a749061ad249e474d19f9ea065cda5210b14a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/metadata/imetadataimport-enummemberswithname-method.md | thomaslevesque/dotnet.docs | bf8a749061ad249e474d19f9ea065cda5210b14a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "IMetaDataImport::EnumMembersWithName Method | Microsoft Docs"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-clr"
ms.tgt_pltfrm: ""
ms.topic: "reference"
api_name:
- "IMetaDataImport.EnumMembersWithName"
api_location:
- "mscoree.dll"
api_type:
- "COM"
f1_keywords:
- "IMetaDataImport::EnumMembersWithName"
dev_langs:
- "C++"
helpviewer_keywords:
- "IMetaDataImport::EnumMembersWithName method [.NET Framework metadata]"
- "EnumMembersWithName method [.NET Framework metadata]"
ms.assetid: 7c9e9120-3104-42f0-86ce-19a025f20dcc
caps.latest.revision: 12
author: "mairaw"
ms.author: "mairaw"
manager: "wpickett"
---
# IMetaDataImport::EnumMembersWithName Method
Enumerates MemberDef tokens representing members of the specified type with the specified name.
## Syntax
```
HRESULT EnumMembersWithName (
[in, out] HCORENUM *phEnum,
[in] mdTypeDef cl,
[in] LPCWSTR szName,
[out] mdToken rMembers[],
[in] ULONG cMax,
[out] ULONG *pcTokens
);
```
#### Parameters
`phEnum`
[in, out] A pointer to the enumerator.
`cl`
[in] A TypeDef token representing the type with members to enumerate.
`szName`
[in] The member name that limits the scope of the enumerator.
`rMembers`
[out] The array used to store the MemberDef tokens.
`cMax`
[in] The maximum size of the `rMembers` array.
`pcTokens`
[out] The actual number of MemberDef tokens returned in `rMembers`.
## Remarks
This method enumerates fields and methods, but not properties or events. Unlike [IMetaDataImport::EnumMembers](../../../../docs/framework/unmanaged-api/metadata/imetadataimport-enummembers-method.md), `EnumMembersWithName` discards all field and member tokens that do not have the specified name.
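 Like the other metadata enumerators, this method follows the `HCORENUM` pattern: pass the same enumerator handle on every call, stop when fewer tokens than the array size are returned, then release the handle with **IMetaDataImport::CloseEnum**. The following sketch illustrates the pattern; `pImport` (a valid `IMetaDataImport` pointer), `tkType` (a valid TypeDef token), and the member name `L"ToString"` are placeholder assumptions for illustration only.

```C++
// Sketch only: pImport and tkType are assumed to be valid, and the
// member name L"ToString" is just an illustration.
HCORENUM hEnum = NULL;
mdToken rMembers[32];
ULONG cTokens = 0;
HRESULT hr = S_OK;

do {
    hr = pImport->EnumMembersWithName(&hEnum, tkType, L"ToString",
                                      rMembers, ARRAYSIZE(rMembers), &cTokens);
    if (FAILED(hr))
        break;
    for (ULONG i = 0; i < cTokens; i++) {
        // rMembers[i] is a FieldDef or MethodDef token with the given name.
    }
} while (hr == S_OK && cTokens == ARRAYSIZE(rMembers));   // S_FALSE: exhausted

pImport->CloseEnum(hEnum);
```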
## Return Value
|HRESULT|Description|
|-------------|-----------------|
 |`S_OK`|`EnumMembersWithName` returned successfully.|
|`S_FALSE`|There are no MemberDef tokens to enumerate. In that case, `pcTokens` is zero.|
## Requirements
**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
**Header:** Cor.h
**Library:** Included as a resource in MsCorEE.dll
**.NET Framework Versions:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## See Also
[IMetaDataImport Interface](../../../../docs/framework/unmanaged-api/metadata/imetadataimport-interface.md)
[IMetaDataImport2 Interface](../../../../docs/framework/unmanaged-api/metadata/imetadataimport2-interface.md) | 31.149425 | 299 | 0.674908 | eng_Latn | 0.402617 |
e5abb0af4d2eceb5d220708bda0cf094c013fc82 | 412 | md | Markdown | README.md | yk-szk/clahe | 89c3f73bb517d1aded3ee14726cb4fb92e8b6d45 | [
"MIT"
] | null | null | null | README.md | yk-szk/clahe | 89c3f73bb517d1aded3ee14726cb4fb92e8b6d45 | [
"MIT"
] | null | null | null | README.md | yk-szk/clahe | 89c3f73bb517d1aded3ee14726cb4fb92e8b6d45 | [
"MIT"
] | null | null | null | # CLAHE
[](https://codecov.io/gh/yk-szk/clahe)
📊 Contrast Limited Adaptive Histogram Equalization
👽 When the maximum pixel value of the input image is smaller than `u16::MAX`, the output may differ from OpenCV's.
⚠️ Lots of `unsafe` code is used, since the safe version was slower than the unsafe one (the current implementation).
e5abe60b7286c36cfa599e0c6726b3249045184f | 2,840 | md | Markdown | _posts/vol01/1983-1-1-vol01-p56.md | junesirius/MiddleEarth | 9f1d4f3fe00feaa1329f95728095541e1b072fd7 | [
"CC-BY-4.0"
] | null | null | null | _posts/vol01/1983-1-1-vol01-p56.md | junesirius/MiddleEarth | 9f1d4f3fe00feaa1329f95728095541e1b072fd7 | [
"CC-BY-4.0"
] | null | null | null | _posts/vol01/1983-1-1-vol01-p56.md | junesirius/MiddleEarth | 9f1d4f3fe00feaa1329f95728095541e1b072fd7 | [
"CC-BY-4.0"
] | null | null | null | ---
layout: post
title: 【Vol.01】P56.
date: 1983-01-01 00:56
categories: ["Vol.01 The Book of Lost Tales I"]
chapters: ["II. THE MUSIC OF THE AINUR"]
page_num: 56
characters:
glossaries: ['sate', 'web(s)']
tags: ['Aulë', 'Eldar', 'Erinti', 'Fionwë', 'Fionwë-Úrion', 'Inwë', 'Kôr', 'Manwë', 'Melko', 'Noldoli', 'Ónen', 'Ossë', 'Salmar', 'Silmarilli', 'Solosimpi', 'Súlimo', 'Teleri', 'Talkamarda', 'Ulmo', 'Vai', 'Varda', 'Queen of the Stars']
description:
published: true
---
<p style="text-indent: 0;">
sea he bethinks him of music deep and strange yet full ever of a sorrow: and therein he has aid from Manwë Súlimo.
</p>
The Solosimpi, what time the Elves came and dwelt in Kôr, learnt much of him, whence cometh the wistful allurement of their piping and their love to dwell ever by the shore. Salmar there was with him, and Ossë and Ónen to whom he gave the control of the waves and lesser seas, and many another.
But Aulë dwelt in Valinor and fashioned many things; tools and instruments he devised and was busied as much in the making of webs as in the beating of metals; tillage too and husbandry was his delight as much as tongues and alphabets, or broideries and painting. Of him did the Noldoli, who were the sages of the Eldar and thirsted ever after new lore and fresh knowledge, learn uncounted wealth of crafts, and magics and sciences unfathomed. From his teaching, whereto the Eldar brought ever their own great beauty of mind and heart and imagining, did they attain to the invention and making of gems; and these were not in the world before the Eldar, and the finest of all gems were Silmarilli, and they are lost.
Yet was the greatest and chief of those four great ones Manwë Súlimo; and he dwelt in Valinor and sate in a glorious abode upon a throne of wonder on the topmost pinnacle of Taniquetil that towers up upon the world's edge. Hawks flew ever to and fro about that abode, whose eyes could see to the deeps of the sea or penetrate the most hidden caverns and profoundest darkness of the world. These brought him news from everywhere of everything, and little escaped him — yet did some matters lie hid even from the Lord of the Gods. With him was Varda the Beautiful, and she became his spouse and is Queen of the Stars, and their children were Fionwë-Úrion and Erinti most lovely. About them dwell a great host of fair spirits, and their happiness is great; and men love Manwë even more than mighty Ulmo, for he hath never of intent done ill to them nor is he so fain of honour or so jealous of his power as that ancient one of Vai. The Teleri whom Inwë ruled were especially beloved of him, and got of him poesy and song; for if Ulmo hath a power of musics and of voices of instruments Manwë hath a splendour of poesy and song beyond compare.
Lo, Manwë Súlimo clad in sapphires, ruler of the airs and
| 105.185185 | 1,139 | 0.76338 | eng_Latn | 0.999835 |
e5ac82f842b9fbe715ac512b49b33e0985a22ac3 | 2,941 | md | Markdown | README.md | MrSquirrely/DuplaImage.Lib | b4ffab99d212169e5c6b18f162546c8f9ecbc536 | [
"Apache-2.0"
] | 4 | 2020-08-03T08:10:38.000Z | 2021-10-06T20:54:41.000Z | README.md | MrSquirrely/DuplaImage.Lib | b4ffab99d212169e5c6b18f162546c8f9ecbc536 | [
"Apache-2.0"
] | 1 | 2020-11-04T03:34:47.000Z | 2021-01-02T03:40:14.000Z | README.md | MrSquirrely/DuplaImage.Lib | b4ffab99d212169e5c6b18f162546c8f9ecbc536 | [
"Apache-2.0"
] | null | null | null | #  DuplaImage.Lib
DuplaImage.Lib is a .NET standard library offering several different perceptual hashing algorithms for detecting similar or duplicate images.
## Downloads
DuplaImage.Lib is available as a [Nuget package](https://www.nuget.org/packages/DuplaImage.Lib).
You can also download the Nuget packages from the [releases page](https://github.com/MrSquirrely/DuplaImage.Lib/releases).
## Features
DuplaImage.Lib implements the following perceptual hash algorithms:
- ### Average
  - Calculates the hash based on the average pixel value of the scaled-down image. A more in-depth explanation of the algorithm can be found [here](http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html). Extremely fast algorithm, but generates a lot of false positives. Can generate 64 or 256 bit hashes.
- ### Median
  - Similar to the average hash, but the median pixel value is used instead of the average. This should make the algorithm a bit more resistant to non-linear changes in the image. Slightly slower than the average hash. Can generate 64 or 256 bit hashes.
- ### Difference
  - Constructs the hash by comparing the gradients of pixel values on each row of the scaled image. An in-depth explanation can be found [here](http://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html). Performs faster than the average hash and provides better results. Produces 64 or 256 bit hashes.
- ### DCT (pHash)
  - Discrete cosine transform based algorithm, similar to the one implemented in the [pHash library](http://www.phash.org/). A detailed explanation of the algorithm can be found [here](http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html). Slower than any of the previous algorithms, but has better tolerance to image modifications. Produces 64 bit hashes.
All algorithms accept the image input as a stream or as a path to a filesystem location.
DuplaImage.Lib also provides a function to compare hashes to return the similarity of the hashes.
DuplaImage.Lib uses [Magick.NET](https://github.com/dlemstra/Magick.NET) for its image processing needs. Users of DuplaImage.Lib can use their own image processing library by providing an implementation of IImageTransformer and passing that to the ImageHashes constructor.
## Usage
### Example
```csharp
// Create a new ImageHashes instance, using Magick.NET as the image manipulation library
ImageHashes imageHasher = new ImageHashes(new ImageMagickTransformer());

// Calculate 64-bit hashes for the images using the difference algorithm
long hash1 = imageHasher.CalculateDifferenceHash64(@"testimage1.png");
long hash2 = imageHasher.CalculateDifferenceHash64(@"testimage2.png");

// Calculate the similarity between the hashes. A score of 1.0 indicates identical images.
float similarity = ImageHashes.CompareHashes(hash1, hash2);
```
## License
This software is licensed under Apache 2.0 license. See LICENSE file for more information. | 66.840909 | 373 | 0.790547 | eng_Latn | 0.909676 |
e5acd7b783188cd381f47aa3df36df3ee7d886ca | 2,880 | md | Markdown | .config/awesome/lain/wiki/kbdlayout.md | korolr/dotfiles | 8e46933503ecb8d8651739ffeb1d2d4f0f5c6524 | [
"WTFPL"
] | 182 | 2017-03-05T07:43:13.000Z | 2022-03-15T13:09:07.000Z | config/dawesome/lain/wiki/kbdlayout.md | johanwiden/dotphiles | e6297e444ee83e011a57282d332aa92080a126cd | [
"Unlicense"
] | null | null | null | config/dawesome/lain/wiki/kbdlayout.md | johanwiden/dotphiles | e6297e444ee83e011a57282d332aa92080a126cd | [
"Unlicense"
] | 16 | 2017-03-07T11:01:27.000Z | 2022-01-08T09:21:01.000Z | ## Usage
[Read here.](https://github.com/copycat-killer/lain/wiki/Widgets#usage)
### Description
Shows and controls keyboard layouts and variants using `setxkbmap`. This is a simpler but asynchronous alternative to [awful.widget.kbdlayout](https://awesomewm.org/apidoc/classes/awful.widget.keyboardlayout.html).
```lua
local mykbdlayout = lain.widget.contrib.kbdlayout()
```
Left/right click switches to next/previous keyboard layout.
## Input table
Variable | Meaning | Type | Default
--- | --- | --- | ---
`layouts` | Keyboard layouts and variants to switch between | table | **nil**
`add_us_secondary` | Whether to add `us` as a secondary layout | boolean | true
`timeout` | Refresh timeout (in seconds) | number | 10
`settings` | User settings | function | empty function
- `layouts`
A table (array) which contains tables with keys indicating layout and (optionally) variant. This argument is **mandatory**.
- `add_us_secondary`
A boolean controlling whether to add `us` as a secondary layout. This is needed in order for keyboard shortcuts to work in certain applications, i.e. Firefox, while using a non-US keyboard layout.
- `timeout`
An integer which determines the interval at which the widget will be updated, in case the keyboard layout was changed by other means.
- `settings`
A "callback" function in which the user is expected to set the text widget up. The widget itself is available as the global variable `widget`, while layout information is available as `kbdlayout_now`. `kbdlayout_now` contains two keys, `layout` containing the primary layout, and `variant`, containing the variant. If there is no variant, `variant` is `nil`.
## Output table
Variable | Meaning | Type
--- | --- | ---
`widget` | The widget (textbox) | `wibox.widget.textbox`
`update` | Function to update the widget and call `settings` | function
`set` | Function taking an index as an argument to manually set the layout given by that index | function
`next` | Change to the next layout | function
`prev` | Change to the prev layout | function
The textbox can be added to the layout via standard means.
By default, left-clicking the textbox calls `next`, and right-clicking calls `prev`. You can set up additional key- or mouse-bindings. See the example below.
## Example
```lua
-- Switch between US Dvorak and DE layouts.
mykbdlayout = lain.widget.contrib.kbdlayout({
layouts = { { layout = "us", variant = "dvorak" },
{ layout = "de" } },
settings = function()
if kbdlayout_now.variant then
widget:set_text(string.format("%s/%s", kbdlayout_now.layout,
kbdlayout_now.variant))
else
widget:set_text(kbdlayout_now.layout)
end
end
})
-- [...]
-- Add key binding (traditional Alt+Shift switching)
awful.key({ "Mod1" }, "Shift_L", function () mykbdlayout.next() end),
``` | 38.4 | 362 | 0.711111 | eng_Latn | 0.992219 |
e5ad5fc11f57ede18e601feebcfbdd2644ef2d42 | 71 | md | Markdown | README.md | AlexRogalskiy/console-fun | 00297827a7fc700c178e12e2ba2644c352abe620 | [
"Apache-2.0"
] | 1 | 2022-03-22T09:03:57.000Z | 2022-03-22T09:03:57.000Z | README.md | nidi3/console-fun | 00297827a7fc700c178e12e2ba2644c352abe620 | [
"Apache-2.0"
] | 1 | 2022-03-22T09:04:19.000Z | 2022-03-22T09:04:19.000Z | README.md | AlexRogalskiy/console-fun | 00297827a7fc700c178e12e2ba2644c352abe620 | [
"Apache-2.0"
] | 1 | 2022-03-22T09:03:58.000Z | 2022-03-22T09:03:58.000Z | # console-fun
Just a few utilities to help having fun with the console
| 23.666667 | 56 | 0.788732 | eng_Latn | 0.999887 |
e5ad76a08a9f4c720a62636700e44e830171b388 | 4,570 | md | Markdown | desktop-src/lwef/mperrormessageformat.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-07-26T16:18:49.000Z | 2022-02-19T02:00:21.000Z | desktop-src/lwef/mperrormessageformat.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-04-09T17:00:51.000Z | 2020-04-09T18:30:01.000Z | desktop-src/lwef/mperrormessageformat.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-19T02:58:48.000Z | 2021-03-06T21:09:47.000Z | ---
title: MpErrorMessageFormat function (MpClient.h)
description: Returns a formatted error message based on an error code.
ms.assetid: C125FCE4-3BB0-4608-BBF3-E7FEF17D0807
keywords:
- MpErrorMessageFormat function Legacy Windows Environment Features
topic_type:
- apiref
api_name:
- MpErrorMessageFormat
api_location:
- MpClient.dll
api_type:
- DllExport
ms.topic: reference
ms.date: 05/31/2018
---
# MpErrorMessageFormat function
Returns a formatted error message based on an error code.
## Syntax
```C++
HRESULT WINAPI MpErrorMessageFormat(
_In_ MPHANDLE hMpHandle,
_In_ HRESULT hrError,
_Out_ LPWSTR *pwszErrorDesc
);
```
## Parameters
<dl> <dt>
*hMpHandle* \[in\]
</dt> <dd>
Type: **MPHANDLE**
Handle to the malware protection manager interface. This handle is returned by the [**MpManagerOpen**](mpmanageropen.md) function.
</dd> <dt>
*hrError* \[in\]
</dt> <dd>
Type: **HRESULT**
An **HRESULT**-based error code.
</dd> <dt>
*pwszErrorDesc* \[out\]
</dt> <dd>
Type: **LPWSTR\***
Returns a formatted error message based on *hrError*. This string must be freed using [**MpFreeMemory**](mpfreememory.md).
</dd> </dl>
## Return value
Type: **HRESULT**
If the function succeeds the return value is **S\_OK**.
If the function fails then the return value is a failed **HRESULT** code.
## Remarks
This function is capable of formatting system error codes in addition to specific error codes returned by malware protection functions. The **HRESULT** error codes specific to malware protection functions have a facility of 0x50. Below is a list of a subset of the malware protection-specific error codes that can be returned by various malware protection functions. Using the macro **HRESULT\_FROM\_MP\_STATUS**, the following error codes can be converted to **HRESULT**. See also [Forefront Client Security anti-malware engine error codes](https://support.microsoft.com/kb/939359) for a list of other possible error codes.
| Error Code | Description |
|-----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| ERROR\_MP\_NOENGINE | No engine is loaded in antimalware service to perform the requested operation. |
| ERROR\_MP\_NO\_MEMORY | The antimalware engine has encountered a no memory situation. |
| ERROR\_MP\_REMOVE\_FAILED | Remove operation failed for a specific threat. |
| ERROR\_MP\_QUARANTINE\_FAILED | Quarantine operation failed for a specific threat. |
| ERROR\_MP\_THREAT\_NOT\_FOUND | The specific threat no longer exists in the system. |
| ERROR\_MP\_REMOVE\_NOT\_SUPPORTED | Remove operation for a specific threat inside the container type is not supported. |
| ERROR\_MP\_REMOVE\_IMMUTABLE\_CONTAINER | Due to engine policy, a remove operation of a specific threat inside a blocked container is not supported. (Mail archives.) |
| ERROR\_MP\_BADDB\_OLDENGINE | Signature update request provided an older engine or signature files(s). |
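 A minimal usage sketch follows. The variables `hMpManager` (assumed to come from a successful [**MpManagerOpen**](mpmanageropen.md) call) and `hrOp` (an **HRESULT** returned by some other protection function) are placeholder assumptions; the returned string must be released with [**MpFreeMemory**](mpfreememory.md).

```C++
// Sketch only: hMpManager and hrOp are assumed to be valid.
LPWSTR pwszErrorDesc = NULL;
HRESULT hr = MpErrorMessageFormat(hMpManager, hrOp, &pwszErrorDesc);
if (SUCCEEDED(hr))
{
    wprintf(L"Operation failed: %ls\n", pwszErrorDesc);
    MpFreeMemory(pwszErrorDesc);    // free the formatted message
}
```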
## Requirements
| | |
|-------------------------------------|-----------------------------------------------------------------------------------------|
| Minimum supported client<br/> | Windows 8 \[desktop apps only\]<br/> |
| Minimum supported server<br/> | Windows Server 2012 \[desktop apps only\]<br/> |
| Header<br/> | <dl> <dt>MpClient.h</dt> </dl> |
| DLL<br/> | <dl> <dt>MpClient.dll</dt> </dl> |
## See also
<dl> <dt>
[**MpFreeMemory**](mpfreememory.md)
</dt> <dt>
[**MpManagerOpen**](mpmanageropen.md)
</dt> <dt>
[Forefront Client Security anti-malware engine error codes](https://support.microsoft.com/kb/939359)
</dt> </dl>
| 35.153846 | 624 | 0.541575 | eng_Latn | 0.83999 |