hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1c37a08e547d5d7617df693e636da4104f62a388 | 805 | md | Markdown | docs/compre/com-2.md | nieyafei/front-end-interview-discard | b32e2162e708710ca47944045864faa9ee2ade71 | [
"MIT"
] | 185 | 2018-03-27T08:13:06.000Z | 2020-06-03T05:32:17.000Z | docs/compre/com-2.md | nieyafei/front-end-interview-discard | b32e2162e708710ca47944045864faa9ee2ade71 | [
"MIT"
] | 80 | 2018-03-18T12:38:09.000Z | 2020-04-01T02:09:36.000Z | docs/compre/com-2.md | nieyafei/front-end-interview-discard | b32e2162e708710ca47944045864faa9ee2ade71 | [
"MIT"
] | 52 | 2018-03-28T09:32:22.000Z | 2020-06-01T02:55:56.000Z | # What is the difference between module, chunk, and bundle in webpack?
### Module
!> A discrete chunk of functionality with a smaller surface area than a complete program. Well-written modules provide solid abstraction and encapsulation boundaries, so that each module in an application has a coherent design and a clear purpose.

Module resolution: one module can act as a dependency of another, and the resolver is a library that helps find a module's absolute path... Modules are searched for in all of the directories specified in resolve.modules.
### Chunk
!> Chunk is a webpack-specific term used internally to manage the build process. Bundles are composed of chunks, of which there are several types (for example, entry chunks and child chunks). Typically a chunk corresponds directly to an output bundle, but some configurations do not produce a one-to-one relationship.
### Bundle
!> Produced from a number of distinct modules, bundles contain the final, already loaded and compiled versions of the source files.

Bundle splitting: this process offers one way of optimizing a build, allowing webpack to generate multiple bundles for a single application. As a result, when some bundles change, the bundles that are independent of them are unaffected, which reduces the amount of code that has to be republished and therefore re-downloaded by the client, taking advantage of browser caching.
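To see the three terms together, here is a minimal, illustrative `webpack.config.js`; the entry names and file paths are invented for the example:

```javascript
// webpack.config.js -- illustrative only; entry names and paths are made up.
const path = require('path');

module.exports = {
  // Each entry point becomes an "entry chunk"; the modules it imports
  // (and their dependencies) are assigned to chunks during the build.
  entry: {
    app: './src/app.js',
    admin: './src/admin.js',
  },
  output: {
    // Each chunk is emitted as a bundle file, e.g. app.bundle.js
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
  optimization: {
    // Bundle splitting: shared modules (e.g. from node_modules) are pulled
    // into a separate chunk so it can be cached independently by the browser.
    splitChunks: {
      chunks: 'all',
    },
  },
  resolve: {
    // Module resolution: directories searched when resolving imports.
    modules: ['node_modules'],
  },
};
```

With this configuration, two entry chunks plus a shared chunk are produced, each emitted as its own bundle.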
**References:**
[webpack glossary](https://webpack.docschina.org/glossary)
**Question source:**
[The five most easily confused concepts in webpack](https://juejin.im/post/5cede821f265da1bbd4b5630) | 35 | 163 | 0.798758 | yue_Hant | 0.860588 |
1c3846d25f4ce01301f4e99a456d7c8146eebd04 | 1,876 | md | Markdown | snap/README.md | Lin-Buo-Ren/mari0-ce-snap | 0cdc3c442f1bc1e863240c4fc7c53ee3df1ccec9 | [
"MIT"
] | 4 | 2019-03-11T16:12:57.000Z | 2021-04-10T05:55:27.000Z | snap/README.md | brlin-tw/mari0-ce-snap | 0cdc3c442f1bc1e863240c4fc7c53ee3df1ccec9 | [
"MIT"
] | null | null | null | snap/README.md | brlin-tw/mari0-ce-snap | 0cdc3c442f1bc1e863240c4fc7c53ee3df1ccec9 | [
"MIT"
] | 1 | 2021-04-10T05:55:28.000Z | 2021-04-10T05:55:28.000Z | # Unofficial Snap Packaging for Mari0: Community Edition
<!--
Use the Staticaly service for easy access to in-repo pictures:
https://www.staticaly.com/
-->

**This is the unofficial snap for Mari0: Community Edition**, *"The open-source, community-driven counterpart to Alesan's Entities"*. It works on Ubuntu, Fedora, Debian, and other major Linux distributions.
[](https://build.snapcraft.io/user/Lin-Buo-Ren/mari0-ce-snap)

Published for <img src="http://anything.codes/slack-emoji-for-techies/emoji/tux.png" align="top" width="24" /> with 💝 by Snapcrafters
## Installation
([Don't have snapd installed?](https://snapcraft.io/docs/core/install))
### In a Terminal
```bash
# Install the snap #
sudo snap install mari0-ce

# Connect the snap to optional security confinement interfaces #
## To allow the game to use a joystick ##
sudo snap connect mari0-ce:joystick

# Launch the application #
mari0-ce
```
### The Graphical Way
[](https://snapcraft.io/mari0-ce)
## What is Working
* Launch
* Keyboard & mouse gameplay
## What is NOT Working...yet
Check out the [issue tracker](https://github.com/Lin-Buo-Ren/mari0-ce-snap/issues) for known issues.
## Support
* Report issues regarding using this snap to the issue tracker:
<https://github.com/Lin-Buo-Ren/mari0-ce-snap/issues>
* You may also post on the Snapcraft Forum, under the `snap` topic category:
<https://forum.snapcraft.io/c/snap>
| 41.688889 | 207 | 0.729744 | eng_Latn | 0.599977 |
1c38526735c503edc50fd69136b86c10cbfee41e | 544 | md | Markdown | _subcommittee-members/paul-miller.md | nhsland/apperta.org | 30d5daf8a533bd86b9c3567914f824557193f2cd | [
"MIT"
] | null | null | null | _subcommittee-members/paul-miller.md | nhsland/apperta.org | 30d5daf8a533bd86b9c3567914f824557193f2cd | [
"MIT"
] | null | null | null | _subcommittee-members/paul-miller.md | nhsland/apperta.org | 30d5daf8a533bd86b9c3567914f824557193f2cd | [
"MIT"
] | null | null | null | ---
name: Paul Miller
subcommittee: Clinical Content
photo: '/img/paul-miller.png'
role: Member
bio: Paul has worked across Scotland as a GP and is currently a partner in a practice in Paisley. He is the Clinical Lead for "Scottish Clinical Information Management in Practice" - SCIMP, which is a GP-focused organisation providing health informatics advice and leadership to NHS Scotland. He is a member of the RCGP Health Informatics Group and a 2018 Founding Fellow of the Faculty of Clinical Informatics.
twitter:
www:
github:
linkedin:
--- | 49.454545 | 408 | 0.788603 | eng_Latn | 0.995386 |
1c386f4f92342b366b800fb3d00c9e0379f42e63 | 686 | md | Markdown | includes/active-directory-b2c-advanced-audience-warning.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | 43 | 2017-08-28T07:44:17.000Z | 2022-02-20T20:53:01.000Z | includes/active-directory-b2c-advanced-audience-warning.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | 676 | 2017-07-14T20:21:38.000Z | 2021-12-03T05:49:24.000Z | includes/active-directory-b2c-advanced-audience-warning.md | miiitch/azure-docs.fr-fr | e313657eaf54f5b2ed1a87723e447fb546a6beb4 | [
"CC-BY-4.0",
"MIT"
] | 153 | 2017-07-11T00:08:42.000Z | 2022-01-05T05:39:03.000Z | ---
author: msmimart
ms.service: active-directory-b2c
ms.topic: include
ms.date: 04/09/2021
ms.author: mimart
ms.openlocfilehash: ad22114b917fbe241748d3e34d8ee730becb1c2b
ms.sourcegitcommit: 20f8bf22d621a34df5374ddf0cd324d3a762d46d
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/09/2021
ms.locfileid: "107260695"
---
> [!NOTE]
> In Active Directory B2C, [custom policies](../articles/active-directory-b2c/user-flow-overview.md) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [user flows](../articles/active-directory-b2c/user-flow-overview.md).
1c388855bcc4880dc5705d23eaf82a4946a34bbb | 2,899 | md | Markdown | articles/financials/localizations/tasks/set-up-bank-accounts-iso20022-direct-debits.md | FrankDahl/dynamics-365-unified-operations-public | 51f1f95209a88778c549e01690b504925a7c61c2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/financials/localizations/tasks/set-up-bank-accounts-iso20022-direct-debits.md | FrankDahl/dynamics-365-unified-operations-public | 51f1f95209a88778c549e01690b504925a7c61c2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/financials/localizations/tasks/set-up-bank-accounts-iso20022-direct-debits.md | FrankDahl/dynamics-365-unified-operations-public | 51f1f95209a88778c549e01690b504925a7c61c2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
# required metadata
title: Set up customers and customer bank accounts for ISO20022 direct debits
description: This task walks you through setting up a customer bank account and a customer direct debit mandate which are required to generate the customer payment file like ISO20022 direct debit.
author: mrolecki
manager: AnnBe
ms.date: 10/31/2017
ms.topic: business-process
ms.prod:
ms.service: dynamics-ax-applications
ms.technology:
# optional metadata
# ms.search.form:
audience: Application User
# ms.devlang:
ms.reviewer: shylaw
ms.search.scope: Operations
# ms.tgt_pltfrm:
# ms.custom:
ms.search.region: Global
# ms.search.industry:
ms.author: mrolecki
ms.search.validFrom: 2016-06-30
ms.dyn365.ops.version: AX 7.0.0
---
# Set up customers and customer bank accounts for ISO20022 direct debits
[!include [task guide banner](../../includes/task-guide-banner.md)]
This task walks you through setting up a customer bank account and a customer direct debit mandate, which are required to generate a customer payment file such as ISO20022 direct debit. Depending on the customer payment formats that are set up, additional information, not covered in this procedure, might be required for a customer or a customer bank account.
This task was created using the demo data company DEMF with a legal entity primary address in Germany.
This is the fourth of five procedures that demonstrate the customer payment process using electronic reporting configurations.
## Set up a customer bank account
1. Go to Accounts receivable > Customers > All customers.
2. Use the Quick Filter to find records. For example, filter on the Account field with a value of 'DE-010'.
3. In the list, click the link in the selected row.
4. On the Action Pane, click Customer.
5. Click Bank accounts.
6. Click New.
7. In the Bank account field, type a value.
8. In the Name field, type a value.
* For example, enter 'EUR bank'.
9. In the Bank groups field, enter or select a value.
10. In the IBAN field, type a value.
* For example, enter 'DE36200400000628808808'.
11. In the SWIFT code field, type a value.
    * For example, enter 'COBADEFFXXX'. Note that SWIFT/BIC is not mandatory for many payment formats; however, it is recommended to have it registered for a bank account.
12. Click Save.
13. Close the page.
14. Click Edit.
15. Expand the Payment defaults section.
16. In the Bank account field, enter or select a value.
## Add a direct debit mandate
1. Expand the Direct debit mandates section.
2. Click Add.
3. In the Creditor bank account field, enter or select a value.
* For example, select DEMF OPER.
4. In the Signature date field, enter a date.
5. Click Yes to confirm the date update.
6. In the Signature location field, enter or select a value.
7. In the Expected number of payments field, enter a number.
8. Click OK.
9. Click Save.
| 38.653333 | 359 | 0.756813 | eng_Latn | 0.992812 |
1c389b6d19b7958e9fa32395dc45c280e31d847c | 1,313 | md | Markdown | development.md | coredna/magna-plugin-google-places-autocomplete | 3abb41aa25e6292aeef5ab7d2abda6050558bccf | [
"MIT"
] | null | null | null | development.md | coredna/magna-plugin-google-places-autocomplete | 3abb41aa25e6292aeef5ab7d2abda6050558bccf | [
"MIT"
] | 1 | 2021-05-11T08:19:54.000Z | 2021-05-11T08:19:54.000Z | development.md | coredna/magna-plugin-google-places-autocomplete | 3abb41aa25e6292aeef5ab7d2abda6050558bccf | [
"MIT"
] | null | null | null | # Development Instructions
> Ensure you replace **my-plugin** with the name of your plugin in the following commands
Install the magna cli into your system
```bash
npm install --global magna-cli
```
To begin development run the following commands
```bash
magna create plugin my-plugin
cd magna-plugin-my-plugin
git init
npm install
npm run dev
```
Once you have filled in the plugin wizard, you can add your plugin code to `your-plugin/src/index.js`.
When your plugin is ready to test in your website you can use [npm link](https://docs.npmjs.com/cli/link.html) to symlink your local package into your build without needing to publish to the npm repository.
```bash
npm link
cd your-website
npm link @coredna/magna-plugin-my-plugin
```
Once your plugin has been linked, you can make changes to your plugin and the build system of your main website should live-reload, letting you keep your normal workflow.
When your plugin is ready, create a new repository on the [coredna github account](https://github.com/organizations/coredna/repositories/new)
```bash
git remote add origin https://github.com/coredna/magna-plugin-my-plugin.git
git push -u origin master
```
Your plugin is now ready to publish to npm.
```bash
npm login # use your login credentials linked to the coredna account
npm publish --access=public
```
| 30.534884 | 206 | 0.7738 | eng_Latn | 0.980616 |
1c38b57885ae420eb0e6200669db334b0f48a1cf | 1,936 | md | Markdown | src/da/2020-01/04/05.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/da/2020-01/04/05.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/da/2020-01/04/05.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: The fourth man
date: 22/01/2020
---
`Read Dan 3:19-27. What happens? Who is this fourth person in the fire?`

When Nebuchadnezzar had the faithful Jews thrown into the fire, he is surprised to see a fourth person inside the furnace. As best he can, the king identifies the fourth as "a son of the gods" (Dan 3:25).

The king can say no more; but we know who the fourth person is. He appeared to Abraham before the destruction of Sodom and Gomorrah, wrestled with Jacob at the brook Jabbok, and appeared to Moses in a burning bush. He is Jesus Christ in His pre-incarnate form, who comes to show that God stands with His people in their difficulties.

Ellen White says: "But the Lord did not forget His own. When His witnesses were cast into the furnace, the Saviour revealed Himself to them in person, and together they walked in the midst of the fire. The flames lost their power to consume when He who is Lord over heat and cold came near" (Profeter og konger, p. 247).

And God says through the prophet Isaiah: "When you pass through the waters, I will be with you; and through the rivers, they shall not overwhelm you; when you walk through fire you shall not be burned, and the flame shall not consume you" (Isa 43:2).

We love stories like these; but they also raise questions about others who were not miraculously rescued from persecution because of their faith. Such people knew the same experience as Isaiah and Zechariah, who were both killed by ungodly kings. All the way through Bible history and right up to our own time, faithful Christians have been subjected to terrible suffering that ended not in miraculous deliverance but in a painful death. Here we have an account in which the faithful were saved by a miracle; but we know that it does not usually go that way.

`To think about: What, on the other hand, is the miraculous deliverance that all of God's faithful followers will receive, no matter what their fate here may be? See 1 Cor 15:12-26.`
1c39a333f2d8ddebbca684e4d6e2f106b039991c | 1,603 | md | Markdown | _posts/2018-02-09-ALYCE-Paris-Mother-of-the-Bride-Jean-de-Lys-Dress-Style-29693-2014.md | gownthlala/gownthlala.github.io | f86dbdf6fa0e98646ca4cb470fa2928a56e04eec | [
"MIT"
] | null | null | null | _posts/2018-02-09-ALYCE-Paris-Mother-of-the-Bride-Jean-de-Lys-Dress-Style-29693-2014.md | gownthlala/gownthlala.github.io | f86dbdf6fa0e98646ca4cb470fa2928a56e04eec | [
"MIT"
] | null | null | null | _posts/2018-02-09-ALYCE-Paris-Mother-of-the-Bride-Jean-de-Lys-Dress-Style-29693-2014.md | gownthlala/gownthlala.github.io | f86dbdf6fa0e98646ca4cb470fa2928a56e04eec | [
"MIT"
] | null | null | null | ---
layout: post
date: 2018-02-09
title: "ALYCE Paris Mother of the Bride - Jean de Lys Dress Style 29693 2014"
category: ALYCE Paris
tags: [ALYCE Paris,2014]
---
### ALYCE Paris Mother of the Bride - Jean de Lys Dress Style 29693
Just **$339.99**
### 2014
<table><tr><td>BRANDS</td><td>ALYCE Paris</td></tr><tr><td>Years</td><td>2014</td></tr></table>
<a href="https://www.readybrides.com/en/alyce-paris/56386-alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.html"><img src="//img.readybrides.com/133100/alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.jpg" alt="ALYCE Paris Mother of the Bride - Jean de Lys Dress Style 29693" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/alyce-paris/56386-alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.html"><img src="//img.readybrides.com/133101/alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.jpg" alt="ALYCE Paris Mother of the Bride - Jean de Lys Dress Style 29693" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/alyce-paris/56386-alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.html"><img src="//img.readybrides.com/133099/alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.jpg" alt="ALYCE Paris Mother of the Bride - Jean de Lys Dress Style 29693" style="width:100%;" /></a>
Buy it: [https://www.readybrides.com/en/alyce-paris/56386-alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.html](https://www.readybrides.com/en/alyce-paris/56386-alyce-paris-mother-of-the-bride-jean-de-lys-dress-style-29693.html)
| 94.294118 | 342 | 0.733624 | yue_Hant | 0.619555 |
1c3b6a8ca86962b33a1e5421579cc634b7c1ab97 | 2,545 | md | Markdown | README.md | lisc119/Soiling-Loss-Analysis | 979424a9b355a0ad6196492d105d1905844cd238 | [
"MIT"
] | null | null | null | README.md | lisc119/Soiling-Loss-Analysis | 979424a9b355a0ad6196492d105d1905844cd238 | [
"MIT"
] | null | null | null | README.md | lisc119/Soiling-Loss-Analysis | 979424a9b355a0ad6196492d105d1905844cd238 | [
"MIT"
] | null | null | null | # Estimating soiling losses at photovoltaic plants
This repository contains the final project of the Data Science bootcamp at Propulsion Academy in April 2021.
The project was done in collaboration with [Nispera](https://nispera.com/).
## Authors
[Lisa Crowther](https://www.linkedin.com/in/lisa-m-crowther/), [Lina Siegrist](https://www.linkedin.com/in/lina-sc/), [Marcus Lindberg](https://www.linkedin.com/in/marcuslindberg/)
## Supervisors
[Badru Stanicki](https://www.linkedin.com/in/badrustanicki/), [Dipanjan Sarkar
](https://www.linkedin.com/in/dipanzan/)
## Installation
A local installation of `conda` is required to set up the project environment.

Using `conda`, install the environment using the provided **`environment.yml`** file as follows:

```bash
conda env create --file environment.yml
```

If installation was successful, activate your environment and you will be ready to get started.

```bash
conda activate pv_analysis
```

Please see the respective `readme` in the `/notebooks/` directory for more information on the notebooks showcasing parts (or the entirety) of the pipeline.
## Purpose
The detection of soiling losses at photovoltaic plants and the decision of when to clean the panels is an important business problem. The cost of cleaning the panels at such large scale plants needs to be balanced against the losses in output occurring from soiling. The challenge was to identify output losses that occur due to soiling, in the absence of soiling sensor data that would quantify soiling.
We developed a semi-automated pipeline to analyse photovoltaic panel performance using power output data to detect parkwide soiling related losses, and to further analyse soiling of individual strings of panels to identify clusters of most-soiled strings. This could be useful for recommendations for cleaning and maintenance, allowing soiling detection from power output, temperature and irradiance data alone.
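As a rough illustration of the underlying idea (this sketch is not the repository's pipeline; all data, column names, and thresholds below are synthetic assumptions), soiling tends to show up as a slow downward drift in an irradiance-normalized performance ratio:

```python
# Illustrative sketch only -- NOT the repository's pipeline.
# Idea: normalize power by irradiance (a crude performance ratio),
# then look for a gradual downward drift that suggests soiling.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2021-01-01", periods=90, freq="D")
irradiance = rng.uniform(4.0, 7.0, size=len(days))           # synthetic kWh/m2/day
soiling = 1.0 - 0.002 * np.arange(len(days))                 # synthetic 0.2 %/day drift
power = 100.0 * irradiance * soiling + rng.normal(0, 2, len(days))  # synthetic kWh

df = pd.DataFrame({"irradiance": irradiance, "power": power}, index=days)
df["pr"] = df["power"] / df["irradiance"]                    # crude performance ratio
df["pr_norm"] = df["pr"] / df["pr"].iloc[:7].mean()          # relative to a "clean" baseline week

# A sustained drop of the smoothed ratio below ~97 % of baseline flags likely soiling.
smoothed = df["pr_norm"].rolling(14, min_periods=7).median()
flagged = smoothed < 0.97
print(f"First flagged day: {flagged.idxmax().date()} "
      f"(final relative PR: {smoothed.iloc[-1]:.3f})")
```

A real pipeline would additionally account for temperature effects, clear-sky filtering, and cleaning/rain events, but the normalize-then-trend structure is the same.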
## Data
- Project data is provided by Nispera. Data is not disclosed because of the NDA.
- For 2 different parks, data is provided as follows.
- Data of production at string level, with 5 min resolution
- Data of environmental information(irradiation, ambient temperature and panel temperature) with 5 min resolution
## Flowchart
<img src="soiling_pipeline.svg" title="flow chart" width="70%">
<!--  -->
## Requirements
The [Environment](environment.yml)
file contains all the necessary python packages.
| 41.048387 | 411 | 0.781925 | eng_Latn | 0.990929 |
1c3e054036ebdf4c28734ecfee5a84b0c0b7d0ff | 83 | md | Markdown | _featured_tags/study-gan.md | kdh3071/kdh3071.github.io | 2009c6bdf78e3adb902467842aeef4e9fb4c1dd2 | [
"MIT"
] | null | null | null | _featured_tags/study-gan.md | kdh3071/kdh3071.github.io | 2009c6bdf78e3adb902467842aeef4e9fb4c1dd2 | [
"MIT"
] | null | null | null | _featured_tags/study-gan.md | kdh3071/kdh3071.github.io | 2009c6bdf78e3adb902467842aeef4e9fb4c1dd2 | [
"MIT"
] | null | null | null | ---
layout: tag-blog
title: GAN
slug: gan
category: study
menu: false
order: 5
---
| 9.222222 | 16 | 0.674699 | eng_Latn | 0.283662 |
1c3e1b987f84ebc1193717df31e6e11ff695ac19 | 105 | md | Markdown | README.md | D-hanashri33/Neutral_Network_ | 1751a9b9afffdf4ba9d3b79ad0994b7331468a24 | [
"Apache-2.0"
] | null | null | null | README.md | D-hanashri33/Neutral_Network_ | 1751a9b9afffdf4ba9d3b79ad0994b7331468a24 | [
"Apache-2.0"
] | null | null | null | README.md | D-hanashri33/Neutral_Network_ | 1751a9b9afffdf4ba9d3b79ad0994b7331468a24 | [
"Apache-2.0"
] | null | null | null | # Neutral_Network_
Predicting the turbine energy yield (TEY) of a gas turbine, using ambient variables as features.
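As an illustration of the stated task (this sketch is not the repository's code; the data, feature names, and model settings below are synthetic assumptions), a small neural-network regressor can be fit to ambient features:

```python
# Minimal sketch (not the repo's actual code): fit a small neural network
# to predict a turbine-energy-yield-like target from "ambient" variables.
# All data here is synthetic; real use would load the project's dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))  # stand-ins for ambient temperature, pressure, humidity
# Synthetic target loosely dependent on the features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
```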
| 35 | 85 | 0.838095 | eng_Latn | 0.902301 |
1c3e6053f83d3d643701f964637c2514270f7fe6 | 2,341 | md | Markdown | README.md | Baryshych/Index | ea8f4a4b135d03105623e0190a141a3b6e1ec33c | [
"MIT"
] | null | null | null | README.md | Baryshych/Index | ea8f4a4b135d03105623e0190a141a3b6e1ec33c | [
"MIT"
] | 1 | 2021-02-24T21:17:25.000Z | 2021-05-03T07:38:41.000Z | README.md | Baryshych/Index | ea8f4a4b135d03105623e0190a141a3b6e1ec33c | [
"MIT"
] | null | null | null | # Index and course status
## A sequential two-year program

- [Introduction and knowledge overview](Courses/Introduction.md)
(a few video lectures available)
- [Programming fundamentals (year 1)](Courses/Fundamentals.md)
(up to 95% of the material available)
- [Programming (year 2)](Courses/Advanced.md)
(up to 50% of the material available)

## Standalone courses

- [Asynchronous programming](Courses/Asynchronous.md)
(95% of code examples and video lectures available)
- [The Node.js technology stack](Courses/NodeJS.md)
(90% of code and video lectures available)
- [Design patterns](Courses/Patterns.md)
(50% of code examples and video lectures available)
- [Programming paradigms](Courses/Paradigms.md)
(50% of code and a few video lectures available)
- [Metaprogramming and multi-paradigm programming](Courses/Metaprogramming.md)
(50% of code and a few video lectures available)
- [Algorithms and data structures](Courses/AlgAndData.md)
(up to 20% of the material available; different variants still need to be merged)
- [Design of network protocols and services](Courses/Network.md)
(50% of code examples and a few video lectures available)
- [Development tools and the software life cycle](Courses/Tools.md)
(a few video lectures available; needs expanding)
- [Functional programming](Courses/Functional.md)
(20% of code examples and a few video lectures available)
- [Object-oriented programming](Courses/OOP.md)
(20% of code examples and a few video lectures available)
- [Operating systems](Courses/OS.md)
(curriculum still to be drawn up)
- [Systems programming](Courses/System.md)
(curriculum still to be drawn up)
- [Information systems architecture](Courses/Architecture.md)
(a few video lectures available)
- [Web technologies](Courses/Web.md)
(25% of code and a few video lectures available)
- [Parallel programming](Courses/Parallel.md)
(10% of examples and a few video lectures available)
- [Database design](Courses/Databases.md)
(curriculum still to be drawn up)
- [Highload and scalable systems](Courses/Highload.md)
(a few video lectures available)
- [User interface design](Courses/UI-UX.md)
(curriculum still to be drawn up)
- [Information systems security](Courses/Security.md)
(curriculum still to be drawn up)
- [Software quality, testing, and reliability](Courses/Quality.md)
(a few video lectures available)
| 43.351852 | 91 | 0.777446 | rus_Cyrl | 0.768557 |
1c3e8b104e6c48545d0dd744c039aeef2f0ca1fd | 1,359 | md | Markdown | README.md | MichaelJWelsh/unleashed | 7836d0887f85239f9161010654b2b11ead6f539b | [
"MIT"
] | 1 | 2017-09-25T13:14:40.000Z | 2017-09-25T13:14:40.000Z | README.md | MichaelJWelsh/unleashed | 7836d0887f85239f9161010654b2b11ead6f539b | [
"MIT"
] | null | null | null | README.md | MichaelJWelsh/unleashed | 7836d0887f85239f9161010654b2b11ead6f539b | [
"MIT"
] | 1 | 2018-09-17T16:29:13.000Z | 2018-09-17T16:29:13.000Z | # Unleashed

## Background
Unleashed is my *FIRST* major programming project, Java project, and 2D platformer game, created way back in high school. Going into it, I had limited knowledge of project organization and game development. Initially, I did not know what a game engine or double buffering was, just to give you an idea. I did not know how to use Slick2D or the LWJGL. **Take the code itself with a grain of salt.**
My friend Ben Siti and I entered a competition hosted by the Future Business Leaders of America (FBLA) in which we had about a month to create a platformer with at least three levels and a score system. Ben did all the graphics and I wrote the code. Time was a *HUGE* constraint because we could only work on it in our free time. I had to teach myself game development theory and learn how to operate Slick2D and the LWJGL. After learning all this, we were two weeks in, Ben completed the graphics of the main character, and I was done writing the boilerplate game engine code.
The two weeks that followed were full of obstacles and "budget cuts". What's funny is we built this game, which is all about manipulating gravity, when we hadn't taken any physics courses yet. In the end, this mess of a project made us go far. We didn't win, but we learned a lot.
| 135.9 | 578 | 0.779249 | eng_Latn | 0.999967 |
1c3e8c2b20974bf6f4134be80698e8595681ff34 | 1,712 | md | Markdown | README.md | ssmak/directory-versioning | 3e1881c0965e6a5752c1cafa036ba22b606a806f | [
"MIT"
] | 1 | 2019-05-09T09:07:52.000Z | 2019-05-09T09:07:52.000Z | README.md | ssmak/directory-versioning | 3e1881c0965e6a5752c1cafa036ba22b606a806f | [
"MIT"
] | null | null | null | README.md | ssmak/directory-versioning | 3e1881c0965e6a5752c1cafa036ba22b606a806f | [
"MIT"
] | null | null | null | <h1 align="center">directory-versioning</h1>
<h5 align="center">Make versioning for directory with Git.</h5>
<br />
<div align="center">
<a href="https://github.com/ssmak/directory-versioning">
<img src="https://img.shields.io/badge/version-v1.0.5-blueviolet.svg" />
</a>
<a href="https://www.npmjs.com/package/directory-versioning">
<img src="https://img.shields.io/badge/env-nodejs-orange.svg" />
</a>
</div>
<br />
<div align="center">
<a href="https://nodei.co/npm/directory-versioning/"><img src="https://nodei.co/npm/directory-versioning.png?compact=true"></a>
</div>
<br />
## History
The manager calls a meeting with me and complains that the website is unstable. "Anything updated yesterday?" asks the manager.<br />
The production environment can be complex, and more than one person can add, update, or delete content. The result is "I don't know who made the change."<br />
The solution is to version any directories that you care about, then view and roll back changes on demand.
<br />
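Conceptually, the tool automates git snapshots of a directory; the idea can be sketched with plain git commands (illustrative only, not the package's actual implementation):

```shell
# Illustrative only: what automated directory versioning boils down to.
DIR="$(mktemp -d)"
cd "$DIR"
git init -q
git config user.email "demo@example.com"   # placeholder identity for the demo
git config user.name "demo"

echo "v1" > file.txt
git add -A && git commit -qm "snapshot 1"

echo "v2" > file.txt
git add -A && git commit -qm "snapshot 2"

git rev-list --count HEAD   # prints 2: one commit per snapshot
```

Rolling back is then just checking out an earlier commit of the directory's repository.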
<div align="center">
<a href="https://paypal.me/ssmak">
<img src="https://img.shields.io/badge/Donate-PayPal-green.svg" alt="PayPal Donation" />
</a>
<br />
<a href="https://paypal.me/ssmak">
<img src="https://www.paypalobjects.com/webstatic/mktg/logo/AM_mc_vs_dc_ae.jpg" alt="PayPal" />
</a>
</div>
## Installation + Use
1. Install the npm globally
``` bash
npm install -g directory-versioning
```
2. Start versioning for a directory
```bash
directory-versioning --path <PATH_OF_THE_FOLDER> [--d] [--i 10] [--q]
```
d: Run as a long-running process.<br />
i: Time interval, in seconds.<br />
q: Quiet mode.
## Test
``` bash
npx mocha
OR
npm test
```
## License
MIT
| 30.035088 | 160 | 0.690421 | eng_Latn | 0.51116 |
1c3ead7507b05f0c0be63a348cc42e6ba7378641 | 1,157 | md | Markdown | docs/standard-library/is-final-class.md | POMATOpl/cpp-docs.pl-pl | ae1925d41d94142f6a43c4e721d45cbbbfeda4c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/is-final-class.md | POMATOpl/cpp-docs.pl-pl | ae1925d41d94142f6a43c4e721d45cbbbfeda4c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/is-final-class.md | POMATOpl/cpp-docs.pl-pl | ae1925d41d94142f6a43c4e721d45cbbbfeda4c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Learn more about the is_final class
title: is_final class
ms.date: 11/04/2016
f1_keywords:
- type_traits/std::is_final
helpviewer_keywords:
- is_final
ms.assetid: 9dbad82f-6685-4909-94e8-98e4a93994b9
ms.openlocfilehash: 04660309205689e14200cb5d214ce5dc80efb88f
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 12/11/2020
ms.locfileid: "97323744"
---
# <a name="is_final-class"></a>is_final class

Tests whether a type is a class type marked `final`.
## <a name="syntax"></a>Syntax
```cpp
template <class T>
struct is_final;
```
### <a name="parameters"></a>Parameters

*T*\
The type to query.

## <a name="remarks"></a>Remarks

An instance of this type predicate holds true if the type *T* is a class type marked `final`, and otherwise holds false. If *T* is a class type, it must be a complete type.
## <a name="requirements"></a>Requirements

**Header:**\<type_traits>

**Namespace:** std

## <a name="see-also"></a>See also

[<type_traits>](../standard-library/type-traits.md)\
[final specifier](../cpp/final-specifier.md)
| 24.617021 | 189 | 0.74503 | pol_Latn | 0.913996 |
1c3f46fe8fce51e2fb7a22443a49577ff5d7183d | 51,930 | md | Markdown | aspnet/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-models-and-data-access.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-models-and-data-access.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-models-and-data-access.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-models-and-data-access
title: ASP.NET los modelos y el acceso a los datos de MVC 4 | Microsoft Docs
author: rick-anderson
description: 'Nota: en este laboratorio práctico se supone que tiene conocimientos básicos de ASP.NET MVC. Si no ha usado ASP.NET MVC antes, le recomendamos que pase a ASP.NET MVC 4...'
ms.author: riande
ms.date: 02/18/2013
ms.assetid: 634ea84b-f904-4afe-b71b-49cccef4d9cc
msc.legacyurl: /mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-models-and-data-access
msc.type: authoredcontent
ms.openlocfilehash: 90635b617930d0a9c126795f4c8790d542e33dc9
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 03/06/2020
ms.locfileid: "78451471"
---
# <a name="aspnet-mvc-4-models-and-data-access"></a>ASP.NET MVC 4 Models and Data Access

by [Web Camps Team](https://twitter.com/webcamps)

[Download Web Camps Training Kit](https://aka.ms/webcamps-training-kit)

This hands-on lab assumes you have basic knowledge of **ASP.NET MVC**. If you have not used **ASP.NET MVC** before, we recommend you go over the **ASP.NET MVC 4 Fundamentals** hands-on lab.

This lab walks you through the enhancements and new features described above by applying minor changes to a sample web application provided in the Source folder.
> [!NOTE]
> All sample code and snippets are included in the Web Camps Training Kit, available from [Microsoft-Web/WebCampTrainingKit Releases](https://aka.ms/webcamps-training-kit). The project specific to this lab is available at [ASP.NET MVC 4 Models and Data Access](https://github.com/Microsoft-Web/HOL-MVC4ModelsAndDataAccess).
In the **ASP.NET MVC Fundamentals** Hands-on Lab, you have been passing hard-coded data from the Controllers to the View templates. But, in order to build a real Web application, you might want to use a real database.

This Hands-on Lab will show you how to use a database engine in order to store and retrieve the data needed for the Music Store application. To accomplish that, you will start with an existing database and create the Entity Data Model from it. Throughout this lab, you will meet the **Database First** approach as well as the **Code First** approach.

However, you can also use the **Model First** approach, create the same model using the tools, and then generate the database from it.

*Database First vs. Model First*
After generating the Model, you will make the proper adjustments in the StoreController to provide the Store Views with the data taken from the database, instead of using hard-coded data. You will not need to make any change to the View templates because the StoreController will be returning the same ViewModels to the View templates, although this time the data will come from the database.

**The Code First Approach**

The Code First approach allows us to define the model from the code without generating classes that are generally coupled with the framework.

In Code First, model objects are defined with POCOs, "Plain Old CLR Objects". POCOs are simple plain classes that have no inheritance and do not implement interfaces. We can automatically generate the database from them, or we can use an existing database and generate the class mapping from the code.

The benefit of using this approach is that the Model remains independent from the persistence framework (in this case, Entity Framework), as the POCO classes are not coupled with the mapping framework.
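
As a standalone illustration (the class and property names below mirror the lab's model, but this sketch is not taken from the lab's source), a POCO entity is just a plain class with auto-properties — no framework base class, no interfaces:

```csharp
using System;
using System.Collections.Generic;

// Plain Old CLR Objects: no inheritance from a framework base class and no
// interfaces — just properties that map naturally to table columns.
public class Genre
{
    public int GenreId { get; set; }        // primary key by Code First convention
    public string Name { get; set; }
    public List<Album> Albums { get; set; } // navigation property to related rows
}

public class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public int GenreId { get; set; }        // foreign key back to Genre
}

public class Program
{
    public static void Main()
    {
        // The classes are usable on their own, with or without a database.
        var rock = new Genre { GenreId = 1, Name = "Rock", Albums = new List<Album>() };
        rock.Albums.Add(new Album { AlbumId = 1, Title = "Sample Title", GenreId = 1 });
        Console.WriteLine($"{rock.Name}: {rock.Albums.Count} album(s)");
    }
}
```

Because nothing here references Entity Framework, the same classes can be unit-tested in memory and later mapped to tables by the framework.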
> [!NOTE]
> This lab is based on ASP.NET MVC 4 and a version of the Music Store sample application customized and minimized to fit only the features shown in this Hands-on Lab.
>
> If you wish to explore the whole **Music Store** tutorial application, you can find it in [MVC-Music-Store](https://github.com/evilDave/MVC-Music-Store).
<a id="Prerequisites"></a>
<a id="Prerequisites"></a>
### <a name="prerequisites"></a>Prerequisites

You must have the following items to complete this lab:

- [Microsoft Visual Studio Express 2012 for Web](https://www.microsoft.com/visualstudio/eng/products/visual-studio-express-for-web) or superior (read [Appendix A](#AppendixA) for instructions on how to install it).
<a id="Setup"></a>
<a id="Setup"></a>
### <a name="setup"></a>Setup

**Installing Code Snippets**

For convenience, much of the code you will be managing along this lab is available as Visual Studio code snippets. To install the code snippets run the **.\Source\Setup\CodeSnippets.vsi** file.

If you are not familiar with Visual Studio Code Snippets and want to learn how to use them, you can refer to the appendix from this document "[Appendix C: Using Code Snippets](#AppendixC)".
---
<a id="Exercises"></a>
<a id="Exercises"></a>
## <a name="exercises"></a>Exercises

This Hands-on Lab is comprised of the following exercises:

1. [Exercise 1: Adding a Database](#Exercise1)
2. [Exercise 2: Creating a Database Using Code First](#Exercise2)
3. [Exercise 3: Querying the Database with Parameters](#Exercise3)

> [!NOTE]
> Each exercise is accompanied by an **End** folder containing the resulting solution you should obtain after completing the exercises. You can use this solution as a guide if you need additional help working through the exercises.

Estimated time to complete this lab: **35 minutes**.
<a id="Exercise1"></a>
<a id="Exercise_1_Adding_a_Database"></a>
### <a name="exercise-1-adding-a-database"></a>Exercise 1: Adding a Database

In this exercise, you will learn how to add a database with the tables of the MusicStore application to the solution in order to consume its data. Once the database is generated with the model and added to the solution, you will modify the StoreController class to provide the View template with the data taken from the database, instead of using hard-coded values.
<a id="Ex1Task1"></a>
<a id="Task_1_-_Adding_a_Database"></a>
#### <a name="task-1---adding-a-database"></a>Task 1 - Adding a Database

In this task, you will add an already created database with the main tables of the MusicStore application to the solution.

1. Open the **Begin** solution located at the **Source/Ex1-AddingADatabaseDBFirst/Begin/** folder.

1. You will need to download some missing NuGet packages before continuing. To do this, click the **Project** menu and select **Manage NuGet Packages**.
2. In the **Manage NuGet Packages** dialog, click **Restore** in order to download the missing packages.
3. Finally, build the solution by clicking **Build** | **Build Solution**.

> [!NOTE]
> One of the advantages of using NuGet is that you don't have to ship all the libraries in your project, reducing the project size. With NuGet Power Tools, by specifying the package versions in the Packages.config file, you will be able to download all the required libraries the first time you run the project. This is why you will have to run these steps after you open an existing solution from this lab.

2. Add the **MvcMusicStore** database file. In this Hands-on Lab, you will use an already created database called **MvcMusicStore.mdf**. To do that, right-click the **App\_Data** folder, point to **Add** and then click **Existing Item**. Browse to **\Source\Assets** and select the **MvcMusicStore.mdf** file.

*Adding an Existing Item*

*MvcMusicStore.mdf database file*

The database has been added to the project. Even when the database is located inside the solution, you can query and update it as if it were hosted in a different database server.

*MvcMusicStore database in Solution Explorer*

3. Verify the connection to the database. To do this, double-click **MvcMusicStore.mdf** to establish a connection.

*Connecting to MvcMusicStore.mdf*
<a id="Ex1Task2"></a>
<a id="Task_2_-_Creating_a_Data_Model"></a>
#### <a name="task-2---creating-a-data-model"></a>Task 2 - Creating a Data Model

In this task, you will create a data model to interact with the database added in the previous task.

1. Create a data model that will represent the database. To do this, in Solution Explorer right-click the **Models** folder, point to **Add** and then click **New Item**. In the **Add New Item** dialog, select the **Data** template and then the **ADO.NET Entity Data Model** item. Change the data model name to **StoreDB.edmx** and click **Add**.

*Adding the StoreDB ADO.NET Entity Data Model*

2. The **Entity Data Model Wizard** will appear. This wizard will guide you through the creation of the model layer. Since the model should be created based on the existing database recently added, select **Generate from database** and click **Next**.

*Choosing the model content*

3. Since you are generating a model from a database, you will need to specify the connection to use. Click **New Connection**.

4. Select **Microsoft SQL Server Database File**, and click **Continue**.

*Choose data source dialog*

5. Click **Browse** and select the **MvcMusicStore.mdf** database located in the **App\_Data** folder and click **OK**.

*Connection properties*

6. The generated class should have the same name as the entity connection string, so change its name to **MusicStoreEntities** and click **Next**.

*Choosing the data connection*

7. Choose the database objects to use. As the Entity model will use just the database's tables, select the **Tables** option and make sure that the **Include foreign key columns in the model** and **Pluralize or singularize generated object names** options are also selected. Change the Model Namespace to **MvcMusicStore.Model** and click **Finish**.

*Choosing the database objects*

> [!NOTE]
> If a security warning dialog is shown, click **OK** to run the template and generate the classes for the model entities.

8. An entity diagram for the database will appear, while a separate class that maps each table to the database will be created. For example, the **Albums** table will be represented by an **Album** class, where each column in the table will map to a class property. This will allow you to query and work with objects that represent rows in the database.

*Entity diagram*

> [!NOTE]
> The T4 templates (.tt) run code to generate the entity classes and will overwrite the existing classes with the same name. In this example, the "Album", "Genre" and "Artist" classes were overwritten with the generated code.
<a id="Ex1Task3"></a>
<a id="Task_3_-_Building_the_Application"></a>
#### <a name="task-3---building-the-application"></a>Task 3 - Building the Application

In this task, you will check that, although the model generation has removed the **Album**, **Genre** and **Artist** model classes, the project builds successfully by using the new data model classes.

1. Build the project by selecting the **Build** menu item and then **Build MvcMusicStore**.

*Building the project*

2. The project builds successfully. Why does it still work? It works because the database tables have fields that include the properties that you were using in the removed classes **Album** and **Genre**.

*Builds succeeded*

3. While the designer displays the entities in a diagram format, they are really C# classes. Expand the **StoreDB.edmx** node in the Solution Explorer and then **StoreDB.tt**; you will see the new generated entities.

*Generated files*
<a id="Ex1Task4"></a>
<a id="Task_4_-_Querying_the_Database"></a>
#### <a name="task-4---querying-the-database"></a>Task 4 - Querying the Database

In this task, you will update the StoreController class so that, instead of using hardcoded data, it will query the database to retrieve the information.

1. Open **Controllers\StoreController.cs** and add the following field to the class to hold an instance of the **MusicStoreEntities** class, named **storeDB**:

(Code Snippet - *Models And Data Access - Ex1 storeDB*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample1.cs)]

2. The **MusicStoreEntities** class exposes a collection property for each table in the database. Update the **Browse** action method to retrieve a Genre with all its **Albums**.

(Code Snippet - *Models And Data Access - Ex1 Store Browse*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample2.cs)]
> [!NOTE]
> You are taking advantage of a feature of .NET called **LINQ** (language-integrated query) to write strongly-typed query expressions against these collections, which execute code against the database and return objects to program against.
>
> For more information about LINQ, please visit the [MSDN site](https://msdn.microsoft.com/library/bb397926&#040;v=vs.110&#041;.aspx).
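
As a standalone illustration (not taken from the lab's source), the same LINQ operators work against a plain in-memory collection, which makes the query shape easy to try without a database:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Genre
{
    public string Name { get; set; }
}

public class Program
{
    public static void Main()
    {
        var genres = new List<Genre>
        {
            new Genre { Name = "Rock" },
            new Genre { Name = "Jazz" },
            new Genre { Name = "Pop" }
        };

        // Single() expects exactly one match and throws otherwise, which is
        // why it suits a lookup by a unique value such as a genre name.
        var genre = genres.Single(g => g.Name == "Jazz");
        Console.WriteLine(genre.Name); // prints "Jazz"
    }
}
```

With Entity Framework, the same lambda expression is translated into SQL and executed by the database instead of in memory.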
3. Update the **Index** action method to retrieve all the genres.

(Code Snippet - *Models And Data Access - Ex1 Store Index*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample3.cs)]

4. Update the **GenreMenu** action method to retrieve all the genres and transform the collection into a list.

(Code Snippet - *Models And Data Access - Ex1 Store GenreMenu*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample4.cs)]
<a id="Ex1Task5"></a>
<a id="Task_5_-_Running_the_Application"></a>
#### <a name="task-5---running-the-application"></a>Task 5 - Running the Application

In this task, you will check that the Store Index page will now display the Genres stored in the database instead of the hardcoded ones. There is no need to change the View template because the **StoreController** is returning the same entities as before, although this time the data will come from the database.

1. Rebuild the solution and press **F5** to run the application.

2. The project starts in the Home page. Verify that the menu of **Genres** is no longer a hardcoded list, and that the data is retrieved directly from the database.

*Browsing Genres from the database*

3. Now browse to any genre and verify that the albums are populated from the database.

*Browsing Albums from the database*
<a id="Exercise2"></a>
<a id="Exercise_2_Creating_a_Database_Using_Code_First"></a>
### <a name="exercise-2-creating-a-database-using-code-first"></a>Exercise 2: Creating a Database Using Code First

In this exercise, you will learn how to use the Code First approach to create a database with the tables of the MusicStore application, and how to access its data.

Once the model is generated, you will modify the StoreController to provide the View template with the data taken from the database, instead of using hard-coded values.

> [!NOTE]
> If you have completed Exercise 1 and have already worked with the Database First approach, you will now learn how to get the same results with a different process. The tasks that are common with Exercise 1 have been marked to make your reading easier. If you have not completed Exercise 1 but would like to learn the Code First approach, you can start from this exercise and get a full coverage of the topic.
<a id="Ex2Task1"></a>
<a id="Task_1_-_Populating_Sample_Data"></a>
#### <a name="task-1---populating-sample-data"></a>Task 1 - Populating Sample Data

In this task, you will populate the database with sample data when it is initially created using Code First.

1. Open the **Begin** solution located at the **Source/Ex2-CreatingADatabaseCodeFirst/Begin/** folder. Otherwise, you might continue using the **End** solution obtained by completing the previous exercise.

1. If you opened the provided **Begin** solution, you will need to download some missing NuGet packages before continuing. To do this, click the **Project** menu and select **Manage NuGet Packages**.
2. In the **Manage NuGet Packages** dialog, click **Restore** in order to download the missing packages.
3. Finally, build the solution by clicking **Build** | **Build Solution**.

> [!NOTE]
> One of the advantages of using NuGet is that you don't have to ship all the libraries in your project, reducing the project size. With NuGet Power Tools, by specifying the package versions in the Packages.config file, you will be able to download all the required libraries the first time you run the project. This is why you will have to run these steps after you open an existing solution from this lab.

2. Add the **SampleData.cs** file to the **Models** folder. To do that, right-click the **Models** folder, point to **Add** and then click **Existing Item**. Browse to **\Source\Assets** and select the **SampleData.cs** file.

*Sample data populate code*

3. Open the **Global.asax.cs** file and add the following *using* statements.

(Code Snippet - *Models And Data Access - Ex2 Global Asax Usings*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample5.cs)]

4. In the **Application\_Start()** method, add the following line to set the database initializer.

(Code Snippet - *Models And Data Access - Ex2 Global Asax SetInitializer*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample6.cs)]
<a id="Ex2Task2"></a>
<a id="Task_2_-_Configuring_the_connection_to_the_Database"></a>
#### <a name="task-2---configuring-the-connection-to-the-database"></a>Task 2 - Configuring the connection to the Database

Now that you have already added a database to the project, you will write the connection string in the **Web.config** file.

1. Add a connection string in **Web.config**. To do that, open **Web.config** at the project root and replace the connection string named DefaultConnection with this line in the **<connectionStrings>** section:

*Web.config file location*
[!code-xml[Main](aspnet-mvc-4-models-and-data-access/samples/sample7.xml)]
<a id="Ex2Task3"></a>
<a id="Task_3_-_Working_with_the_Model"></a>
#### <a name="task-3---working-with-the-model"></a>Task 3 - Working with the Model

Now that you have already configured the connection to the database, you will link the model with the database tables. In this task, you will create a class that will be linked to the database with Code First. Notice that there is an existing POCO model class that should be modified.

> [!NOTE]
> If you have completed Exercise 1, you will notice that this step was performed by a wizard. By doing Code First, you will manually create the classes that will be linked to data entities.

1. Open the POCO model class **Genre** from the **Models** project folder and include an ID. Use an int property with the name **GenreId**.

(Code Snippet - *Models And Data Access - Ex2 Code First Genre*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample8.cs)]

> [!NOTE]
> To work with Code First conventions, the class Genre must have a primary key property that will be automatically detected.
>
> You can read more about Code First conventions in this [MSDN article](https://msdn.microsoft.com/library/hh161541&#040;v=vs.103&#041;.aspx).
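
As a rough, hypothetical illustration of that name-based convention (a deliberately simplified stand-in for what Entity Framework actually does internally), the key lookup can be pictured with plain reflection:

```csharp
using System;
using System.Linq;
using System.Reflection;

public class Genre
{
    public int GenreId { get; set; }   // matches the "<ClassName>Id" convention
    public string Name { get; set; }
}

public class Program
{
    // Simplified sketch of the name-based rule Code First uses to discover
    // a primary key: a property called "Id" or "<ClassName>Id",
    // matched case-insensitively.
    public static string FindConventionalKey(Type t)
    {
        return t.GetProperties()
                .Select(p => p.Name)
                .FirstOrDefault(n =>
                    n.Equals("Id", StringComparison.OrdinalIgnoreCase) ||
                    n.Equals(t.Name + "Id", StringComparison.OrdinalIgnoreCase));
    }

    public static void Main()
    {
        Console.WriteLine(FindConventionalKey(typeof(Genre))); // "GenreId"
    }
}
```

Because **GenreId** satisfies the naming rule, no attribute or fluent configuration is needed for Entity Framework to pick it as the key.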
2. Now, open the POCO model class **Album** from the **Models** project folder and include the foreign keys; create properties with the names **GenreId** and **ArtistId**. This class already has **GenreId** for the primary key.

(Code Snippet - *Models And Data Access - Ex2 Code First Album*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample9.cs)]

3. Open the POCO model class **Artist** and include the **ArtistId** property.

(Code Snippet - *Models And Data Access - Ex2 Code First Artist*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample10.cs)]

4. Right-click the **Models** project folder and select **Add | Class**. Name the file **MusicStoreEntities.cs**. Then, click **Add.**

*Adding a new item*

*Adding a class*

5. Open the class you have just created, **MusicStoreEntities.cs**, and include the namespaces **System.Data.Entity** and **System.Data.Entity.Infrastructure**.

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample11.cs)]

6. Replace the class declaration to extend the **DbContext** class: declare a public **DbSet** and override the **OnModelCreating** method. After this step you will get a domain class that will link your model with the Entity Framework. In order to do that, replace the class code with the following:

(Code Snippet - *Models And Data Access - Ex2 Code First MusicStoreEntities*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample12.cs)]

> [!NOTE]
> With Entity Framework **DbContext** and **DbSet**, you will be able to query the POCO class Genre. By extending the **OnModelCreating** method, you are specifying in the **code** how Genre will be mapped to a database table. You can find more information about DbContext and DbSet in this MSDN article: [link](https://msdn.microsoft.com/library/system.data.entity.dbcontext(v=vs.103).aspx)
<a id="Ex2Task4"></a>
<a id="Task_4_-_Querying_the_Database"></a>
#### <a name="task-4---querying-the-database"></a>Task 4 - Querying the Database

In this task, you will update the StoreController class so that, instead of using hardcoded data, it will retrieve it from the database.

> [!NOTE]
> This task is in common with Exercise 1.
>
> If you completed Exercise 1, you will notice these steps are the same in both approaches (Database First or Code First). They are different in how the data is linked with the model, but the access to the data entities is still transparent from the controller.

1. Open **Controllers\StoreController.cs** and add the following field to the class to hold an instance of the **MusicStoreEntities** class, named **storeDB**:

(Code Snippet - *Models And Data Access - Ex1 storeDB*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample13.cs)]

2. The **MusicStoreEntities** class exposes a collection property for each table in the database. Update the **Browse** action method to retrieve a Genre with all its **Albums**.

(Code Snippet - *Models And Data Access - Ex2 Store Browse*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample14.cs)]

> [!NOTE]
> You are taking advantage of a feature of .NET called **LINQ** (language-integrated query) to write strongly-typed query expressions against these collections, which execute code against the database and return objects to program against.
>
> For more information about LINQ, please visit the [MSDN site](https://msdn.microsoft.com/library/bb397926(v=vs.110).aspx).

3. Update the **Index** action method to retrieve all the genres.

(Code Snippet - *Models And Data Access - Ex2 Store Index*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample15.cs)]

4. Update the **GenreMenu** action method to retrieve all the genres and transform the collection into a list.

(Code Snippet - *Models And Data Access - Ex2 Store GenreMenu*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample16.cs)]
<a id="Ex2Task5"></a>
<a id="Task_5_-_Running_the_Application"></a>
#### <a name="task-5---running-the-application"></a>Task 5 - Running the Application

In this task, you will check that the Store Index page will now display the Genres stored in the database instead of the hardcoded ones. There is no need to change the View template because the **StoreController** is returning the same **StoreIndexViewModel** as before, although this time the data will come from the database.

1. Rebuild the solution and press **F5** to run the application.

2. The project starts in the Home page. Verify that the menu of **Genres** is no longer a hardcoded list, and that the data is retrieved directly from the database.

*Browsing Genres from the database*

3. Now browse to any genre and verify that the albums are populated from the database.

*Browsing Albums from the database*
<a id="Exercise3"></a>
<a id="Exercise_3_Querying_the_Database_with_Parameters"></a>
### <a name="exercise-3-querying-the-database-with-parameters"></a>Exercise 3: Querying the Database with Parameters

In this exercise, you will learn how to query the database using parameters, and how to use Query Result Shaping, a feature that reduces the number of database accesses by retrieving data in a more efficient way.

> [!NOTE]
> To learn more about Query Result Shaping, visit the following [MSDN article](https://msdn.microsoft.com/library/bb896272&#040;v=vs.100&#041;.aspx).
<a id="Ex3Task1"></a>
<a id="Task_1_-_Modifying_StoreController_to_Retrieve_Albums_from_Database"></a>
#### <a name="task-1---modifying-storecontroller-to-retrieve-albums-from-database"></a>Task 1 - Modifying StoreController to Retrieve Albums from Database

In this task, you will change the **StoreController** class to access the database in order to retrieve albums from a specific genre.

1. Open the **Begin** solution located at the **Source\Ex3-QueryingTheDatabaseWithParametersCodeFirst\Begin** folder if you want to use the Code First approach, or the **Source\Ex3-QueryingTheDatabaseWithParametersDBFirst\Begin** folder if you want to use the Database First approach. Otherwise, you might continue using the **End** solution obtained by completing the previous exercise.

1. If you opened the provided **Begin** solution, you will need to download some missing NuGet packages before continuing. To do this, click the **Project** menu and select **Manage NuGet Packages**.
2. In the **Manage NuGet Packages** dialog, click **Restore** in order to download the missing packages.
3. Finally, build the solution by clicking **Build** | **Build Solution**.

> [!NOTE]
> One of the advantages of using NuGet is that you don't have to ship all the libraries in your project, reducing the project size. With NuGet Power Tools, by specifying the package versions in the Packages.config file, you will be able to download all the required libraries the first time you run the project. This is why you will have to run these steps after you open an existing solution from this lab.

2. Open the **StoreController** class to change the **Browse** action method. To do this, in the **Solution Explorer**, expand the **Controllers** folder and double-click **StoreController.cs**.

3. Change the **Browse** action method to retrieve albums for a specific genre. To do this, replace the following code:

(Code Snippet - *Models And Data Access - Ex3 StoreController BrowseMethod*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample17.cs)]

> [!NOTE]
> To populate a collection of the entity, you need to use the **Include** method to specify you want to retrieve the albums too. You can use the **.Single()** extension in LINQ because in this case only one genre is expected for an album. The **Single()** method takes a lambda expression as a parameter, which in this case specifies a single Genre object such that its name matches the value defined.
>
> You will take advantage of a feature that allows you to indicate other related entities you want loaded too when the Genre object is retrieved. This feature is called **Query Result Shaping**, and it enables you to reduce the number of times needed to access the database to retrieve information. In this scenario, you will want to pre-fetch the Albums for the Genre you retrieve.
>
> The query includes **Genres.Include("Albums")** to indicate that you want the related albums as well. This will result in a more efficient application, since it will retrieve both Genre and Album data in a single database request.
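
As an in-memory analogue (this is not Entity Framework code — genre and album names are made up for illustration), the effect of pre-fetching related rows can be pictured like this: the albums are attached to their genres up front, so later lookups need no further data access:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Album
{
    public string Title { get; set; }
    public int GenreId { get; set; }
}

public class Genre
{
    public int GenreId { get; set; }
    public string Name { get; set; }
    public List<Album> Albums { get; set; }
}

public class Program
{
    public static void Main()
    {
        var genres = new List<Genre>
        {
            new Genre { GenreId = 1, Name = "Pop" },
            new Genre { GenreId = 2, Name = "Rock" }
        };
        var albums = new List<Album>
        {
            new Album { Title = "Album A", GenreId = 2 },
            new Album { Title = "Album B", GenreId = 1 },
            new Album { Title = "Album C", GenreId = 1 }
        };

        // Attach every genre's albums up front, mimicking how
        // Include("Albums") pre-fetches related rows together with the
        // genres in a single database request instead of one query per genre.
        foreach (var g in genres)
            g.Albums = albums.Where(a => a.GenreId == g.GenreId).ToList();

        var pop = genres.Single(g => g.Name == "Pop");
        Console.WriteLine($"{pop.Name}: {pop.Albums.Count} albums"); // Pop: 2 albums
    }
}
```

Without the up-front step, reading `pop.Albums` would require a separate lookup per genre — which is exactly the per-row round trip that Query Result Shaping avoids.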
<a id="Ex3Task2"></a>
<a id="Task_2_-_Running_the_Application"></a>
#### <a name="task-2---running-the-application"></a>Task 2 - Running the Application

In this task, you will run the application and retrieve albums of a specific genre from the database.

1. Press **F5** to run the application.

2. The project starts in the Home page. Change the URL to **/Store/Browse?Genre=Pop** to verify that the results are being retrieved from the database.

*Browsing /Store/Browse?Genre=Pop*
<a id="Ex3Task3"></a>
<a id="Task_3_-_Accessing_Albums_by_Id"></a>
#### <a name="task-3---accessing-albums-by-id"></a>Task 3 - Accessing Albums by Id

In this task, you will repeat the previous procedure to get albums by their Id.

1. Close the browser if needed, to return to Visual Studio. Open the **StoreController** class to change the **Details** action method. To do this, in the **Solution Explorer**, expand the **Controllers** folder and double-click **StoreController.cs**.

2. Change the **Details** action method to retrieve album details based on their **Id**. To do this, replace the following code:

(Code Snippet - *Models And Data Access - Ex3 StoreController DetailsMethod*)

[!code-csharp[Main](aspnet-mvc-4-models-and-data-access/samples/sample18.cs)]
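
As a small standalone sketch (the album ids and titles below are made up for illustration), the lookup by id has the same `Single`-with-a-lambda shape as the genre lookup, only matching on the key instead of the name:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
}

public class Program
{
    // Same shape as the Details action's query: match the primary key
    // value passed in through the URL, e.g. /Store/Details/51.
    public static Album FindAlbum(IEnumerable<Album> albums, int id)
    {
        return albums.Single(a => a.AlbumId == id);
    }

    public static void Main()
    {
        var albums = new List<Album>
        {
            new Album { AlbumId = 50, Title = "First Sample" },
            new Album { AlbumId = 51, Title = "Second Sample" }
        };
        Console.WriteLine(FindAlbum(albums, 51).Title); // prints "Second Sample"
    }
}
```

As with the genre query, `Single` throws if no album (or more than one) matches the id, which is the behavior you want for a primary-key lookup.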
<a id="Ex3Task4"></a>
<a id="Task_4_-_Running_the_Application"></a>
#### <a name="task-4---running-the-application"></a>Task 4 - Running the Application

In this task, you will run the application in a web browser and obtain album details by their Id.

1. Press **F5** to run the application.

2. The project starts in the Home page. Change the URL to **/Store/Details/51** or browse the genres and select an album to verify that the results are being retrieved from the database.

*Browsing /Store/Details/51*

> [!NOTE]
> Additionally, you can deploy this application to Windows Azure Web Sites following [Appendix B: Publishing an ASP.NET MVC 4 Application using Web Deploy](#AppendixB).
---
<a id="Summary"></a>
<a id="Summary"></a>
## <a name="summary"></a>Summary

By completing this Hands-on Lab, you have learned the fundamentals of ASP.NET MVC Models and Data Access, using the **Database First** approach as well as the **Code First** approach:

- How to add a database to the solution in order to consume its data
- How to update Controllers to provide View templates with the data taken from the database instead of hard-coded data
- How to query the database using parameters
- How to use Query Result Shaping, a feature that reduces the number of database accesses by retrieving data in a more efficient way
- How to use both the Database First and Code First approaches in Microsoft Entity Framework to link your database with the model
<a id="AppendixA"></a>
<a id="Appendix_A_Installing_Visual_Studio_Express_2012_for_Web"></a>
## <a name="appendix-a-installing-visual-studio-express-2012-for-web"></a>Appendix A: Installing Visual Studio Express 2012 for Web

You can install **Microsoft Visual Studio Express 2012 for Web** or another "Express" version using the **[Microsoft Web Platform Installer](https://www.microsoft.com/web/downloads/platform.aspx)**. The following instructions guide you through the steps required to install *Visual Studio Express 2012 for Web* using the *Microsoft Web Platform Installer*.

1. Go to [https://go.microsoft.com/?linkid=9810169](https://go.microsoft.com/?linkid=9810169). Alternatively, if you have already installed the Web Platform Installer, you can open it and search for the product "<em>Visual Studio Express 2012 for Web with Windows Azure SDK</em>".

2. Click **Install Now**. If you do not have the **Web Platform Installer**, you will be redirected to download and install it first.

3. Once the **Web Platform Installer** is open, click **Install** to start the setup.

*Instalar Visual Studio Express*
4. Read all the products' licenses and terms and click **I Accept** to continue.

*Aceptación de los términos de licencia*
5. Wait until the downloading and installation process completes.


*Installation progress*
6. When the installation completes, click **Finish**.


*Installation completed*
7. Click **Exit** to close the Web Platform Installer.
8. To open Visual Studio Express for Web, go to the **Start** screen and start writing "**VS Express**", then click the **VS Express for Web** tile.


*VS Express for Web tile*
<a id="AppendixB"></a>
<a id="Appendix_B_Publishing_an_ASPNET_MVC_4_Application_using_Web_Deploy"></a>
## <a name="appendix-b-publishing-an-aspnet-mvc-4-application-using-web-deploy"></a>Appendix B: Publishing an ASP.NET MVC 4 Application using Web Deploy

This appendix will show you how to create a new web site from the Windows Azure Management Portal and publish the application you obtained by following the lab, taking advantage of the Web Deploy publishing feature provided by Windows Azure.
<a id="ApxBTask1"></a>
<a id="Task_1_-_Creating_a_New_Web_Site_from_the_Windows_Azure_Portal"></a>
#### <a name="task-1---creating-a-new-web-site-from-the-windows-azure-portal"></a>Task 1: Creating a New Web Site from the Windows Azure Portal

1. Go to the [Windows Azure Management Portal](https://manage.windowsazure.com/) and sign in using the Microsoft credentials associated with your subscription.
> [!NOTE]
> With Windows Azure you can host 10 ASP.NET web sites for free and then scale as your traffic grows. You can sign up [here](https://aka.ms/aspnet-hol-azure).


*Sign in to the Windows Azure Management Portal*
2. Click **New** on the command bar.


*Creating a new web site*

3. Click **Compute** | **Web Site**. Then select the **Quick Create** option. Provide an available URL for the new web site and click **Create Web Site**.
> [!NOTE]
> A Windows Azure Web Site is the host for a web application running in the cloud that you can control and manage. The Quick Create option allows you to deploy a completed web application to the Windows Azure Web Site from outside the portal. It does not include steps for setting up a database.


*Creating a new web site using Quick Create*
4. Wait until the new **Web Site** is created.
5. Once the web site is created, click the link under the **URL** column. Check that the new web site is working.


*Browsing to the new web site*


*Web site running*
6. Go back to the portal and click the name of the web site under the **Name** column to display the management pages.


*Opening the web site management pages*
7. In the **Dashboard** page, under the **quick glance** section, click the **Download publish profile** link.

> [!NOTE]
> The *publish profile* contains all of the information required to publish a web application to a Windows Azure Web Site for each enabled publication method. The publish profile contains the URLs, user credentials and database strings required to connect to and authenticate against each of the endpoints for which a publication method is enabled. **Microsoft WebMatrix 2**, **Microsoft Visual Studio Express for Web** and **Microsoft Visual Studio 2012** support reading publish profiles to automate the configuration of these programs for publishing web applications to Windows Azure Web Sites.


*Downloading the web site publish profile*
8. Download the publish profile file to a known location. Further in this exercise you will see how to use this file to publish a web application to Windows Azure Web Sites from Visual Studio.


*Saving the publish profile file*
<a id="ApxBTask2"></a>
<a id="Task_2_-_Configuring_the_Database_Server"></a>
#### <a name="task-2---configuring-the-database-server"></a>Task 2: Configuring the Database Server

If your application makes use of SQL Server databases, you will need to create a SQL Database server. If you want to deploy a simple application that does not use SQL Server, you might skip this task.

1. You will need a SQL Database server for storing the application database. You can view the SQL Database servers from your subscription in the Windows Azure Management Portal at **SQL Databases** | **Servers** | **Server's Dashboard**. If you do not have a server created, you can create one using the **Add** button on the command bar. Take note of the **server name and URL, administrator login name and password**, as you will use them in the next tasks. Do not create the database yet, as it will be created in a later stage.


*SQL Database Server Dashboard*
2. In the next task you will test the database connection from Visual Studio. For that reason, you need to include your local IP address in the server's list of **Allowed IP Addresses**. To do that, click **Configure**, select the IP address from **Current Client IP Address**, paste it in the **Start IP Address** and **End IP Address** text boxes, and click the add button.


*Adding the Client IP Address*
3. Once the **Client IP Address** is added to the allowed IP addresses list, click **Save** to confirm the changes.


*Confirming the changes*
<a id="ApxBTask3"></a>
<a id="Task_3_-_Publishing_an_ASPNET_MVC_4_Application_using_Web_Deploy"></a>
#### <a name="task-3---publishing-an-aspnet-mvc-4-application-using-web-deploy"></a>Task 3: Publishing an ASP.NET MVC 4 Application using Web Deploy

1. Go back to the ASP.NET MVC 4 solution. In the **Solution Explorer**, right-click the web site project and select **Publish**.


*Publishing the web site*
2. Import the publish profile you saved in the first task.


*Importing the publish profile*
3. Click **Validate Connection**. Once the validation is complete, click **Next**.

> [!NOTE]
> Validation is complete once you see a green checkmark appear next to the Validate Connection button.


*Validating the connection*
4. On the **Settings** page, under the **Databases** section, click the button next to your database connection's text box (i.e. **DefaultConnection**).


*Web deploy configuration*
5. Configure the database connection as follows:

   - In the **Server name**, type your SQL Database server URL using the *tcp:* prefix.
   - In **User name**, type your server administrator login name.
   - In **Password**, type your server administrator login password.
   - Type a new database name.


*Configuring the destination connection string*
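With those values in place, the destination connection string that gets built follows the usual SQL Database pattern. The fragment below is only illustrative — replace the bracketed server name, login, password, and database name with the values you noted in Task 2:

```
Server=tcp:[your-server-name].database.windows.net,1433;Database=[your-database-name];User ID=[admin-login]@[your-server-name];Password=[your-password];Trusted_Connection=False;Encrypt=True;
```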
6. Then click **OK**. When prompted to create the database, click **Yes**.


*Creating the database*
7. The connection string you will use to connect to SQL Database in Windows Azure is shown in the Default Connection textbox. Then click **Next**.


*Connection string pointing to SQL Database*
8. On the **Preview** page, click **Publish**.


*Publishing the web application*

9. Once the publishing process finishes, your default browser will open the published web site.
<a id="AppendixC"></a>
<a id="Appendix_C_Using_Code_Snippets"></a>
## <a name="appendix-c-using-code-snippets"></a>Appendix C: Using Code Snippets

With code snippets, you have all the code you need at your fingertips. The lab document will tell you exactly when you can use them, as shown in the following figure.


*Using Visual Studio code snippets to insert code into your project*
***To add a code snippet using the keyboard (C# only)***

1. Place the cursor where you would like to insert the code.
2. Start typing the snippet name (without spaces or hyphens).
3. Watch as IntelliSense displays matching snippets' names.
4. Select the correct snippet (or keep typing until the entire snippet's name is selected).
5. Press the Tab key twice to insert the snippet at the cursor location.

*Comience a escribir el nombre del fragmento de código.*

*Presione TAB para seleccionar el fragmento de código resaltado*

*Presione la tecla TAB de nuevo y el fragmento de código se expandirá*
***To add a code snippet using the mouse (C#, Visual Basic and XML)***

1. Right-click where you want to insert the code snippet.
2. Select **Insert Snippet** followed by **My Code Snippets**.
3. Pick the relevant snippet from the list by clicking on it.


*Right-click where you want to insert the code snippet and select Insert Snippet*


*Pick the relevant snippet from the list by clicking on it*
| 69.61126 | 700 | 0.761429 | spa_Latn | 0.982632 |
1c3ffe0e1e9537f06462cfb869c6768d446be5ad | 20,798 | md | Markdown | articles/fin-ops-core/dev-itpro/analytics/er-design-multilingual-reports.md | MicrosoftDocs/Dynamics-365-Operations.et-ee | dc7d4df9666186a929909ca4d7f4ca8b41df301d | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-18T17:13:59.000Z | 2021-04-20T21:13:45.000Z | articles/fin-ops-core/dev-itpro/analytics/er-design-multilingual-reports.md | MicrosoftDocs/Dynamics-365-Operations.et-ee | dc7d4df9666186a929909ca4d7f4ca8b41df301d | [
"CC-BY-4.0",
"MIT"
] | 7 | 2017-12-08T15:04:50.000Z | 2019-04-30T11:45:50.000Z | articles/fin-ops-core/dev-itpro/analytics/er-design-multilingual-reports.md | MicrosoftDocs/Dynamics-365-Operations.et-ee | dc7d4df9666186a929909ca4d7f4ca8b41df301d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Mitmekeelsete aruannete kujundamine elektroonilises aruandluses
description: Selles teemas selgitatakse, kuidas saate kasutada elektroonilise aruandluse (ER) silte mitmekeelsete aruannete kujundamiseks ja loomiseks.
author: NickSelin
ms.date: 09/03/2021
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: ERDataModelDesigner, ERModelMappingDesigner, EROperationDesigner, ERExpressionDesignerFormula
audience: Application User, Developer, IT Pro
ms.reviewer: kfend
ms.custom: ''
ms.assetid: ''
ms.search.region: Global
ms.author: nselin
ms.search.validFrom: 2016-02-28
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: e199b350101e10ba3e424894f4dc9881d05c9558
ms.sourcegitcommit: 81bc42551e6c9af6ad38908afb606ee1f8d3c44b
ms.translationtype: HT
ms.contentlocale: et-EE
ms.lasthandoff: 09/03/2021
ms.locfileid: "7473401"
---
# <a name="design-multilingual-reports-in-electronic-reporting"></a>Mitmekeelsete aruannete kujundamine elektroonilises aruandluses
[!include[banner](../includes/banner.md)]
[!include[banner](../includes/preview-banner.md)]
## <a name="overview"></a>Ülevaade
Ärikasutajana saate kasutada raamistikku [Elektrooniline aruandlus (ER)](general-electronic-reporting.md), et konfigureerida väljaminevate dokumentide vorminguid, mis tuleb luua eri riikide või regioonide õigusnõuete järgi. Kui nõuete järgi tuleb väljaminevad dokumendid luua eri riikide või regioonide jaoks mitmes keeles, saate konfigureerida ühe ER-i [vormingu](general-electronic-reporting.md#FormatComponentOutbound), mis sisaldab keelest sõltuvaid ressursse. Sel viisil saate vormingut kasutada mitmeid kordi, et luua väljaminevaid dokumente eri riikide või regioonide jaoks. Soovi korral võite kasutada üht ER-i vormingut, et luua väljaminev dokument eri keeltes asjakohastele klientidele, hankijatele, tütarettevõtetele või teistele pooltele.
Te saate konfigureerida ER-i andmemudeleid ja mudelivastendusi konfigureeritud ER-i vormingute andmeallikateks, et määratleda andmevoog, mis täpsustab, millised rakenduse andmed loodud dokumentidesse lisatakse. ER-i konfiguratsiooni[pakkujana](general-electronic-reporting.md#Provider) saate [avaldada](tasks/er-upload-configuration-into-lifecycle-services.md#upload-a-configuration-into-lcs) konfigureeritud [andmemudeleid](general-electronic-reporting.md#data-model-and-model-mapping-components), [mudelivastendusi](general-electronic-reporting.md#data-model-and-model-mapping-components) ja [vorminguid](general-electronic-reporting.md#FormatComponentOutbound) ER-i lahenduse komponentidena, et luua kindlaid väljaminevaid dokumente. Samuti saate lubada klientidel avaldatud ER-i lahendust [üles laadida](general-electronic-reporting-manage-configuration-lifecycle.md), et seda oleks võimalik kasutada ja kohandada. Kui te arvate, et kliendid võivad rääkida teises keeles, saate konfigureerida ER-i komponente nii, et need sisaldaksid keelest sõltuvaid ressursse. Sel viisil on võimalik kujundamise ajal esitada muudetava ER-i komponendi sisu kliendi eelistatud keeles.
Saate konfigureerida keelest sõltuvaid ressursse ER-i siltidena. Seejärel saate silte kasutada ER-i komponentide konfigureerimiseks järgmistel eesmärkidel.
- Kujundamise ajal
- Esitage konfigureeritud ER-i komponentide sisu kasutaja eelistatud keeles.
- Käitusajal
- Looge väljaminevate dokumentide jaoks keelest sõltuv sisu.
- Lisage hoiatus-ja tõrketeated kasutaja eelistatud keeles.
- Paluge kasutaja eelistatud keeles täita kohustuslikud väljad.
ER-i silte saab konfigureerida igas ER-i [konfiguratsioonis](general-electronic-reporting.md#Configuration), mis sisaldab eri komponente. Silte on võimalik hallata ER-i andmemudelite, ER-i mudelivastenduste ja ER-i vormingukomponentide konfigureeritud loogikast sõltumatult.
Iga ER-i silt on tuvastatav ID abil, mis on seda silti sisaldavas ER-i konfiguratsioonis kordumatu. Iga silt võib sisaldada silditeksti igas keeles, mida toetatakse praeguses Microsoft Dynamics 365 Finance'i eksemplaris. Toetatud keeled hõlmavad rakendatud kohanduste keeli.
## <a name="entry"></a>Kirje
ER-i andmemudeli, ER-i mudelivastenduse või ER-i vormingu loomisel kuvatakse suvand **Tõlgi** iga kord, kui valite välja, mis võib sisaldada tõlgitavat sisu. Selle suvandi valimisel saate valitud välja siduda ER-i sildiga <a name="TextTranslationPane">paanil</a> **Teksti tõlge**. Saate valida olemasoleva ER-i sildi või lisada uue ER-i sildi, kui see ei ole veel saadaval. ER-i sildi valimisel või lisamisel saate lisada seotud teksti iga keele jaoks, mida toetatakse praeguses Finance'i eksemplaris.
Järgmisel illustratsioonil on näha, kuidas toimib tõlkimine muudetavas ER-i andmemudelis. Selles näites on muudetava **Arve mudeli** välja **PurchaseOrder** atribuut **Kirjeldus** tõlgitud Austria saksa (DE-AT) ja jaapani (JA) keelde.

Tõlkida saab ainult selliste siltide teksti, mis asuvad muudetavas ER-i komponendis. Näiteks kui valite ER-i mudelivastenduse andmeallikas oleva sildiatribuudi puhul suvandi **Tõlgi** ning valite seejärel ER-i emaandmemudelis oleva ER-i sildi, näete te sildi sisu, aga ei saa seda muuta. Sellistel juhtudel pole väli **Tõlgitud tekst** kättesaadav, nagu on näidatud järgmisel illustratsioonil.

> [!NOTE]
> Te ei saa kasutada kujundajaid, et kustutada silt, mis on sisestatud muudetavasse ER-i komponenti.
## <a name="scope"></a>Ulatus
ER-i siltidele saab viidata ER-i komponentide mitmes tõlgitavas atribuudis.
### <a name="data-model-component"></a>Andmemudeli komponent
ER-i andmemudeli konfigureerimisel saate sellele lisada ER-i silte. ER-i andmemudelisse lisatava ER-sildiga saab siduda mudeliüksuse atribuute **Silt** ja **Kirjeldus**, iga mudeli välju ja iga <a id="LinkModelEnum"></a>mudeli loendamisväärtust.

Kui ER-i andmemudel konfigureeritakse sel viisil, esitatakse selle sisu ER-i andmemudeli kujundaja kasutajatele nende eelistatud keeles. Seetõttu on mudeli haldamine lihtsam. Järgmistel illustratsioonidel on näha, kuidas see funktsioon töötab kasutajate puhul, kelle eelistatud keeleks on seatud DE-AT ja JA.


### <a name="model-mapping-component"></a>Mudelivastenduse komponent
Kuna ER-i mudelivastendus põhineb ER-i andmemudelil, siis esitatakse viidatud andmemudeli elementide sildid mudelivastenduse kujundajas kasutaja eelistatud keeles. Järgmisel illustratsioonil on näha, kuidas selgitatakse muudetavas mudelivastenduses välja **PurchaseOrder** tähendust konfigureeritud andmemudelisse lisatud atribuudi **Kirjeldus** sildi abil. Pange tähele, et silt on esitatud kasutaja eelistatud keeles (selles näites DE-AT).

Kui andmeallika **Kasutaja sisendparameeter** atribuut **Silt** on konfiguratsioonis seotud ER-i sildiga, esitatakse sellele andmeallikale vastav parameetriväli käitusajal kasutaja dialoogiboksis kasutaja eelistatud keeles.
### <a name="format-component"></a>Vormingu komponent
ER-i vormingu konfigureerimisel saate sellele lisada ER-i silte. Iga konfigureeritud andmeallika atribuute **Silt** ja **Spikritekst** on võimalik siduda ER-i sildiga, mis lisatakse ER-i vormingusse. Iga <a id="LinkFormatEnum"></a>vormingu loendamisväärtuse atribuute **Silt** ja **Kirjeldus** saab samuti siduda ER-i sildiga, millele pääseb juurde muudetavas ER-i vormingus.
> [!NOTE]
> Samuti saate need atribuudid siduda sellise ER-i emaandmemudeli ER-sildiga, mis taaskasutab mudeli silte igas ER-i vormingus, mis on selle ER-i andmemudeli jaoks konfigureeritud.
Kui ER-i vorming konfigureeritakse sel viisil, esitatakse vormingu sisu ER-i toimingute kujundaja kasutajatele nende eelistatud keeles. Seetõttu on vormingu haldamine ja konfigureeritud loogika analüüsimine lihtsam.
Kuna ER-i vorming põhineb ER-i andmemudelil, siis esitatakse andmemudeli elementides viidatud sildid ER-i vormingu kujundajas kasutaja eelistatud keeles.
Kui andmeallika **Kasutaja sisendparameeter** atribuut **Silt** on seotud ER-i sildiga, esitatakse parameetrile vastav väli käitusajal kasutaja dialoogiboksis viibana. Järgmistel illustratsioonidel on näha, kuidas siduda andmeallika **Kasutaja sisendparameeter** atribuut **Silt** kujundamise ajal ER-sildiga, et kasutajatelt küsitaks parameetrit käitusajal eri kasutaja eelistatud keeltes (näitena on toodud Ameerika Ühendriikide inglise (EN-US) keel ja keel DE-AT).



### <a name="expressions"></a>Avaldised
Sildi kasutamiseks ER-i [avaldises](er-formula-language.md) peate kasutama süntaksit **@"GER\_LABEL:X"**, kus eesliide **@** näitab, et operand viitab sildile, **GER\_LABEL** näitab, et kaasatud on ER-i silt, ja **X** on ER-sildi ID.

Süsteemi (rakenduse) sildile viitamiseks kasutage süntaksit **@"X"**, kus eesliide **@** näitab, et operand viitab sildile, ja **X** on süsteemi sildi ID.

#### <a name="model-mapping"></a>Mudeli vastendamine
ER-i mudelivastenduse avaldist saab konfigureerida sildi abil. Kui seda vastendust kasutab ER-i vorming, mis käivitati väljamineva dokumendi loomiseks, sisaldab käivitamise kontekst keelekoodi. Konfigureeritud avaldise silt täidetakse sildi tekstiga, mis on konfigureeritud selle konteksti keele jaoks.
Kui viidatud sildil ei ole mudelivastendust kasutava vormingu käivitamise konteksti keele jaoks tõlget, kasutatakse selle asemel sildi teksti keeles EN-US.
#### <a name="format"></a>Vorming
ER-i vormingu ER-i avaldist saab konfigureerida siltide abil. Kui see vorming käivitatakse väljamineva dokumendi loomiseks, sisaldab käivitamise kontekst keelekoodi. Konfigureeritud avaldise silt täidetakse sildi tekstiga, mis on konfigureeritud selle konteksti keele jaoks.


Aruande loomiseks kasutaja eelistatud keeles saate konfigureerida ER-i vormingu komponendi **FAIL**.

Kui konfigureerite ER-i vormingu sel viisil, luuakse aruanne ER-i siltidest võetud sobiva teksti põhjal. Järgmistel illustratsioonidel on näha aruannete näited kasutaja keeltes EN-US ja DE-AT.


Kui viidatud sildil ei ole vormingu käivitamise konteksti keele jaoks tõlget, kasutatakse selle asemel sildi teksti keeles EN-US.
## <a name="language"></a>Keel
ER toetab eri viise, kuidas määrata loodud aruande keel. Saate valida vahekaardi **Vorming** väljal **Keele-eelistused** järgmiste väärtuste vahel.
- **Ettevõtte eelistus** – looge aruanne ettevõtte määratud keeles.

- **Kasutaja eelistus** – looge aruanne kasutaja eelistatud keeles.
- **Selgelt määratletud** – looge aruanne keeles, mis määratakse kujundamise ajal.

- **Käitusajal määratletud** – looge aruanne keeles, mis määratakse käitusajal. Kui valite selle väärtuse, konfigureerige väljal **Keel** ER-i avaldis, mis tagastab keele koodi, nt vastava kliendi keel.

## <a name="culture-specific-formatting"></a>Kultuuripõhine vormindamine
ER toetab erinevaid viise, kuidas määrata loodud aruande kultuuri. Seetõttu saab kuupäeva, kellaaja ja numbriväärtuste puhul kasutada õiget kultuurispetsiifilist vormingut. ER-vormingu kujundamisel saate vahekaardi **Vorming** väljal **Kultuuri-eelistused** valida ühe järgmistest väärtustest iga vormingukomponendi kohta **tavapärase\\faili**, **Exceli\\faili**, **PDF-\\faili** või **PDF\\Mergeri** tüübi kohta:
- **Kasutaja eelistus** – vormindage väärtused vastavalt kasutaja eelistatud kultuurile. See kultuur on määratletud **kasutaja suvandite** lehe vahekaardi **Eelistused** väljal **Kuupäev, kellaaeg ja numbrivorming**.

- **Eraldi määratletud** – vormindage väärtused vastavalt kujunduse jooksul määratud kultuurile.

- **Käitusajal määratletud** – vormindage väärtused vastavalt käitusaja jooksul määratud kultuurile. Kui valite selle väärtuse vahekaardi **Vastendamine** väljal **Kuupäev, kellaaeg ja numbrivorming**, konfigureerige ER-avaldis, mis tagastab kultuuri koht kultuurikoodi, nt vastava kliendi kultuuri.

> [!NOTE]
> ER-i komponent, mille jaoks määratlete kindla kultuuri, võib sisaldada tütar-ER-komponente, mis on konfigureeritud tekstiväärtuse sisestamiseks. Vaikimisi kasutatakse emakomponendi kultuuri nende komponentide väärtuste vormindamiseks. Järgmiste sisseehitatud ER-funktsioonide abil saate konfigureerida nende komponentide seoseid ja rakendada väärtuse vormindamiseks alternatiivset kultuuri:
>
> - [DATEFORMAT](er-functions-datetime-dateformat.md#syntax-2)
> - [DATETIMEFORMAT](er-functions-datetime-datetimeformat.md#syntax-2)
> - [NUMBERFORMAT](er-functions-text-numberformat.md#syntax-2)
>
> Versioonis 10.0.20 ja uuemates formaadikomponentides kasutatakse **Common\\failitüüpi** ja **Exceli\\failitüüpi**, et vormindada väärtusi loodud dokumendist [PDFi teisendamisel](electronic-reporting-destinations.md#OutputConversionToPDF).
## <a name="translation"></a>Tõlge
Saate lisada vajalikud ER-i sildid muudetavale ER-i komponendile. ER-i sildi lisamisel saab seda tõlkida kahel viisil: käsitsi ja automaatselt.
### <a name="manual-translation"></a>Käsitsi tõlkimine
Kui lisate **Teksti tõlkimise** [paanil](#TextTranslationPane) ER-i sildi, saate selle käsitsi tõlkida kõigisse keeltesse, mida toetatakse praeguses Finance'i eksemplaris. Valige eelistatud keel jaotise **Süsteemikeel** või **Kasutaja keel** väljal **Keel**, sisestage sobiv tekst asjakohasele väljale **Tõlgitud tekst** ja valige seejärel käsk **Tõlgi**. Seda protsessi tuleb korrata iga vajaliku keele ja iga lisatud sildi puhul.
### <a name="automatic-translation"></a>Automaatne tõlkimine
ER-i komponente konfigureeritakse sellise ER-i konfiguratsiooni mustandiversioonis, milles asub muudetav ER-i komponent.

Nagu selles teemas eespool kirjeldatud, saate lisada muudetavale ER-i komponendile vajalikud ER-i sildid. Sel viisil saate täpsustada ER-i siltide teksti keeles EN-US. Seejärel saate eksportida ER-i komponendi sildid sisseehitatud ER-i funktsiooni abil. Valige muudetavat ER-i komponenti sisaldava ER-i konfiguratsiooni mustandiversioon ja valige seejärel **Vahetus \> Ekspordi sildid**.

Saate eksportida kas kõik sildid või sildid ühe keele jaoks, mille määrate ekspordi alguses. Sildid eksporditakse ZIP-failina, mis sisaldab XML-faile. Iga XML-fail sisaldab silte ühe keele jaoks.

Seda vormingut kasutatakse siltide automaatseks tõlkimiseks välise tõlketeenuse abil, nt [Dynamics 365 Translation Service](../lifecycle-services/translation-service-overview.md). Tõlgitud siltide kättesaamisel saate need importida tagasi sellise ER-i konfiguratsiooni mustandiversiooni, mis sisaldab neid silte omavaid ER-i komponente. Valige muudetavat ER-i komponenti sisaldava ER-i konfiguratsiooni mustandiversioon ja valige **Vahetus \> Laadi sildid**.

Tõlgitud sildid imporditakse valitud ER-i konfiguratsiooni. Selles ER-i konfiguratsioonis leiduvad tõlgitud sildid asendatakse. Kui ER-i konfiguratsioonis pole mõnda tõlgitud silti, siis see lisatakse.
## <a name="lifecycle"></a>Töötsükkel
Muudetava ER-i komponendi silte hoitakse koos komponendi muu sisuga ER-i konfiguratsiooni sobivas versioonis.
ER-i aluskomponendi siltidele saab viidata ER-i komponendi tuletatud versioonis, mille te loote oma muudatuste tutvustamiseks.
ER-i versioonimine kontrollib sildi määramist ER-komponendi mis tahes atribuudile. Sildi määramise muudatused salvestatakse sellise muudetava ER-i komponendi muudatusloendisse (delta), mis on loodud antud ER-komponendi tuletatud versioonina. Need muudatused kinnitatakse, kui tuletatud versioon muutub uue alusversiooni aluseks.
## <a name="functions"></a>Funktsioonid
Sisseehitatud ER-i funktsioonil [LISTOFFIELDS](er-functions-list-listoffields.md) on juurdepääs ER-i siltidele, mis on konfigureeritud mõne ER-i komponendi jaoks.
Nagu selles teemas eespool kirjeldatud, saab asjakohases ER-i komponendis kättesaadava ER-i sildiga siduda iga [mudeli](#LinkModelEnum) või [vormingu](#LinkFormatEnum) ER-i loendamisväärtuse atribuute **Silt** ja **Kirjeldus**. Saate konfigureerida ER-i avaldise, milles kutsute ER-i loendamist argumendina kasutades välja funktsiooni **LISTOFFIELDS**. Avaldis tagastab loendi, mis sisaldab kirjet selle funktsiooni argumendina määratletud ER-i loendi iga väärtuse kohta. Iga kirje sisaldab ER-i loendamisväärtusega seotud ER-i sildi väärtust.
- Atribuudiga **Silt** seotud ER-i sildi väärtus salvestatakse tagastatud kirje väljale **Silt**.
- Atribuudiga **Kirjeldus** seotud ER-i sildi väärtus salvestatakse tagastatud kirje väljale **Kirjeldus**.
## <a name="performance"></a><a name=performance></a>Jõudlus
Kui konfigureerite ER-vormingu komponenti, et luua aruanne eelistatud [keeles](#language), või importida sissetulev dokument, kus sisu sõelutakse eelistatud keele järgi, on soovitatav lubada funktsioonihalduse tööruumis **praeguse kasutaja eelistatud keel** funktsioon [Funktsioonihalduse](../../fin-ops/get-started/feature-management/feature-management-overview.md) tööruumis. See funktsioon aitab parandada jõudlust, eriti ER-vormingu komponentide puhul, mis sisaldavad mitmeid viiteid ER-i valemite ja sidumiste [siltidele](general-electronic-reporting-formula-designer.md#TestFormula) ning palju kinnitusreegliid eelistatud keelekasutajateadete loomiseks.
## <a name="additional-resources"></a>Lisaressursid
- [Elektroonilise aruandluse ülevaade](general-electronic-reporting.md)
- [Elektroonilise aruandluse funktsioonid](er-formula-language.md#Functions)
[!INCLUDE[footer-include](../../../includes/footer-banner.md)]
| 84.889796 | 1,172 | 0.811376 | est_Latn | 0.999986 |
1c40ca7aacbb927807d0c7a3a453ade67e447e3b | 5,441 | md | Markdown | iis/web-development-reference/native-code-api-reference/ihttptokenkey-getcachename-method.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iis/web-development-reference/native-code-api-reference/ihttptokenkey-getcachename-method.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iis/web-development-reference/native-code-api-reference/ihttptokenkey-getcachename-method.md | baxter40/iis-docs | 484babba6fc20bdfc12a1a3fbceb5efc17afc356 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "IHttpTokenKey::GetCacheName Method"
ms.date: "10/07/2016"
ms.assetid: b3c94361-aaff-ccae-fb26-99db535f08fa
---
# IHttpTokenKey::GetCacheName Method
Returns the name of the global token cache.
## Syntax
```cpp
PCWSTR GetCacheName(
VOID
) const;
```
### Parameters
This method takes no parameters.
## Return Value
A pointer to a constant null-terminated Unicode string that contains the name of the global token cache. The default is `TOKEN_CACHE_NAME` (described in [IIS Caching Constants](../../web-development-reference/native-code-api-reference/caching-constants.md)).
## Remarks
The cache name represents the unique name of the global cache where user data can be stored and retrieved.
## Notes for Implementers
[IHttpTokenKey](../../web-development-reference/native-code-api-reference/ihttptokenkey-interface.md) implementers are responsible for memory management with this data; therefore, `IHttpTokenKey` implementers that use dynamic memory allocation must release or call `delete` on the `PCWSTR` pointer when it is no longer needed.
Classes that directly implement the `IHttpTokenKey` interface should not override the default `GetCacheName` method, because the `TOKEN_CACHE_NAME` value instructs clients that are holding an [IHttpCacheKey](../../web-development-reference/native-code-api-reference/ihttpcachekey-interface.md) pointer that the pointer may be safely downcast to an `IHttpTokenKey` pointer.
## Notes for Callers
`IHttpTokenKey` implementers are responsible for memory management with this data; therefore, `IHttpTokenKey` clients must not release or call `delete` on the returned `PCWSTR` pointer when this data is no longer needed. Furthermore, clients must not cast this data to a pointer that is not a `const` or change the state of the memory referenced by this `PCWSTR`; otherwise, an access violation will be thrown or the data will become invalid.
## Notes for Inheritors
Interfaces that extend the `IHttpTokenKey` interface may override the `GetCacheName` method. However, the returned value must not collide with currently defined values, including those returned from the `IHttpTokenKey::GetCacheName`, [IFileKey::GetCacheName](../../web-development-reference/native-code-api-reference/ifilekey-getcachename-method.md), and [IUriKey::GetCacheName](../../web-development-reference/native-code-api-reference/iurikey-getcachename-method.md) methods.
## Example
The following code example demonstrates how to create a global module that listens for [GL_CACHE_OPERATION](../../web-development-reference/native-code-api-reference/request-processing-constants.md) and [GL_CACHE_CLEANUP](../../web-development-reference/native-code-api-reference/request-processing-constants.md) events and then writes the `GetCacheName` information to the Event Viewer.
> [!CAUTION]
> [!INCLUDE[iisver](../../wmi-provider/includes/iisver-md.md)] generates a large number of events in the Event Viewer. To avoid a log overflow error in a production environment, you should generally avoid writing cache information to the event log. For demonstration purposes, this code example writes an entry to the Event Viewer in debug mode only.
<!-- TODO: review snippet reference [!CODE [IHttpTokenKey#2](IHttpTokenKey#2)] -->
The above code writes a new event to the Event Viewer, where the Data box contains a string that is similar to the following.
```
IHttpTokenKey::GetCacheName: TOKEN
```
Your module must export the [RegisterModule](../../web-development-reference/native-code-api-reference/pfn-registermodule-function.md) function. You can export this function by creating a module definition (.def) file for your project, or you can compile the module by using the `/EXPORT:RegisterModule` switch. For more information, see [Walkthrough: Creating a Request-Level HTTP Module By Using Native Code](../../web-development-reference/native-code-development-overview/walkthrough-creating-a-request-level-http-module-by-using-native-code.md).
You can optionally compile the code by using the `__stdcall (/Gz)` calling convention instead of explicitly declaring the calling convention for each function.
## Requirements
|Type|Description|
|----------|-----------------|
|Client|- IIS 7.0 on [!INCLUDE[winvista](../../wmi-provider/includes/winvista-md.md)]<br />- IIS 7.5 on Windows 7<br />- IIS 8.0 on Windows 8<br />- IIS 10.0 on Windows 10|
|Server|- IIS 7.0 on [!INCLUDE[winsrv2008](../../wmi-provider/includes/winsrv2008-md.md)]<br />- IIS 7.5 on Windows Server 2008 R2<br />- IIS 8.0 on Windows Server 2012<br />- IIS 8.5 on Windows Server 2012 R2<br />- IIS 10.0 on Windows Server 2016|
|Product|- IIS 7.0, IIS 7.5, IIS 8.0, IIS 8.5, IIS 10.0<br />- [!INCLUDE[iisexp75](../../web-development-reference/native-code-api-reference/includes/iisexp75-md.md)], [!INCLUDE[iisexp80](../../web-development-reference/native-code-api-reference/includes/iisexp80-md.md)], [!INCLUDE[iisexp100](../../web-development-reference/native-code-api-reference/includes/iisexp100-md.md)]|
|Header|Httpcach.h|
## See Also
[IHttpTokenKey Interface](../../web-development-reference/native-code-api-reference/ihttptokenkey-interface.md)
[IHttpCacheKey::GetCacheName Method](../../web-development-reference/native-code-api-reference/ihttpcachekey-getcachename-method.md)
# Athena
Athena is a decentralized and voluntary justice system whose main focus is to offer a better solution to conflicts that may occur inside a community. The target audience is any group of people who have agreed to follow a set of rules and want a way of moderating possible conflicts.
**Disclaimer:** The current version is just a prototype; the code, in its current state, is far from usable.
For inspiration:
* https://medium.com/kleros/kleros-a-decentralized-justice-protocol-for-the-internet-38d596a6300d
* https://medium.com/the-crowdjury/the-crowdjury-a-crowdsourced-court-system-for-the-collaboration-era-66da002750d8
### How it works
The Athena blockchain works by having a diversity of transaction types that can be used to create contracts, accuse people of breaking them, and solve conflicts by sending a verdict, which determines whether the accused must send a payment in order to follow the sentence. The way communities enforce the payment of fines is up to them; Athena does not solve this problem, as it is a justice system, not a police one.
Additionally, it is the responsibility of the parties involved to provide the necessary infrastructure (online or physical) for executing trials. Therefore, Athena is a solution to the frequent unfairness that people encounter in a centralized and coercive justice system, but it does not solve police or trial-execution problems.
For one to understand how Athena works, it is extremely important to understand the transaction types. There are four types of transactions: the **contract**, the **accusation**, the **verdict** and the **appeal**. These transactions can be sent by making a "/send_transaction" POST request to any node, containing the necessary parameters. One of the most important parameters, needed in all forms of transactions, is the **ID**. The ID is a dictionary that contains the user's username (any string with a maximum of 64 characters), e-mail, public key, a nonce (an arbitrary number) and a hash, which needs to have at least five leading zeros.
This ID is extremely important, because it can be used to avoid spam attacks, as well as give a real-world form of identification to the people who may sign a contract together. Nonetheless, the ID information will be publicly available, so it is important that the people who decide to create an ID do not expose any private information or personal e-mails.
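A minimal sketch of how such an ID could be generated in Python. The serialization format and the use of SHA-256 are assumptions for illustration — the description above only requires a nonce and a hash with at least five leading zeros:

```python
import hashlib
import json


def mine_id(username: str, email: str, public_key: str, difficulty: int = 5) -> dict:
    """Search for a nonce whose ID hash has `difficulty` leading zeros."""
    nonce = 0
    while True:
        payload = json.dumps(
            {"username": username, "email": email,
             "public_key": public_key, "nonce": nonce},
            sort_keys=True,
        ).encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith("0" * difficulty):
            # A valid ID: anyone can re-hash the same fields to verify it.
            return {"username": username, "email": email,
                    "public_key": public_key, "nonce": nonce, "hash": digest}
        nonce += 1


# Athena requires difficulty 5; a lower value is used here so the demo is fast.
demo = mine_id("michael", "michael@example.com", "<public key here>", difficulty=3)
print(demo["nonce"], demo["hash"][:8])
```

The proof-of-work makes IDs cheap to verify but costly to mass-produce, which is what deters spam.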
In these next topics, we will have a closer look at the transaction types and what they are used for.
### The Contract
A contract can be created by sending a transaction of type **contract** and can be signed by any user who agrees with its rules. When someone signs a contract, he is saying that he agrees with the rules written and that he promises to follow them and accept the judges' verdict.
In order for a contract to be valid, it must contain the following parameters: the **sender's ID** (all transactions should have it), the **rules** (a list of strings), the **list of judges** (a list of IDs) and the **signatures** (a list of RSA signatures with their corresponding IDs).
### The Accusation
After having signed a contract, anyone is able to send an **accusation**. In order to send one, the accuser needs to provide his own **ID**, the **accused's ID** and the **contract that was broken**. Both the accused and the first judge on the list will be notified (by e-mail). The judge, after hearing both parties and their lawyers (optional), will send a transaction of type **verdict**. How the judge decides whether the accused is guilty is up to him and the parties involved.
### The Verdict
If someone sends an accusation, a judge will be notified and requested to send a **verdict** within 30 days. He will then hear both the accuser and the accused and make his decision. His decision becomes official when he sends a verdict to the network, which must include **his own ID**, the **accusation he is responding to** (of type *accusation*), **whether the accused is guilty** (a boolean) and a **description** explaining his decision (a string).
If the judge doesn't send the transaction within the 30-day limit, the next judge on the list will be notified. If this repeats, the next one on the list will be requested, and so on until there are no more judges, which would automatically declare the accused innocent. Therefore, it is very important to make a decently sized list before sending a contract to the network.
A verdict must be stored by the interested parties and can be shown in communities that recognize this system as proof that someone is either innocent or guilty.
### The Appeal
If someone disagrees with a verdict, he can still send a transaction of type **appeal**, which must include the **sender's ID** and the **verdict he disagrees with**. The next two judges on the list (after the one who condemned him) will, after hearing both parties, make their own decisions by sending transactions of type **verdict**. If both decisions oppose the one made by the first judge, he won't receive his reward and the verdict is reverted.
### Practical example
Let's suppose, for instance, that Michael, Jim, Pam, Ryan and Dwight want to start a book club. One of them (let's say Dwight) would write a set of rules that all of them agree with. Suppose the rules are the following:
* Every month a member must lend three books to the club
* Any member can borrow a maximum of three books every month
* Every end of the month, the books must be given back
Additionally, he would write a list of judges and their rewards. Suppose the list is the following and the reward is 0.001 BTC:
* Angela
* Andy
* Stanley
* Meredith
* Oscar
After creating this contract by sending a transaction to the network, those who agree with it can start sending their IDs as a response to the contract.
So, for instance, if Michael agrees with the rules written and trusts the listed judges, he will send a **Node_Request** of type **Signature** containing his ID. If Michael were the only one to do this, the community would only be formed by him and Dwight. In our example, though, all of them do the same as Michael, because they trust the judges and agree with the rules, forming a community of five people.
At this moment, each one would need to lend three books to a community safe (as stated by the initial contract). Let's suppose, though, that Jim doesn't want to follow the rules and decides to lend only two books. If this happens, any member can send a transaction of type **accusation** with, as parameters, Jim's ID and the contract that was broken.
Angela, the first judge on the list, will then call a meeting (not obligatory, but recommended) and analyze each party's side. If Angela doesn't send a verdict within 30 days, she will lose her reward and the next judge (Andy) will be called. If the same thing happens, the new judge will be the next on the list (Stanley). This may continue until there are no more judges, in which case Jim would automatically be declared innocent.
Assuming this doesn't happen, and supposing Angela is dishonest, she would declare Jim innocent. Dwight could then send a new transaction of type **appeal**, and both Andy and Stanley would issue their own verdicts. If they are honest and competent, they would send verdicts stating that Jim is actually guilty. This means he would be condemned to pay a fine and Angela would not receive her reward.
What happens if Jim does not pay is up to Dwight, Michael, Ryan and Pam. In this case, he would probably be expelled, but in bigger communities, like private cities, the defendant could even be arrested or have justice denied in the future, depending on the severity and on how the community deals with these cases.
### How it is better than the current justice system
The main reasons our current system is not very good are that it is not voluntary, it is largely corrupt, it is incompetent, and the victim does not earn anything after being wronged. Most of these bad aspects come from the fact that our justice system is centralized and that there is no real incentive for judges to make the right decision and refuse bribery other than their own moral beliefs and the fear of being fired, which most of the time does not materialize.
Athena solves most of these issues by being a decentralized, immutable and voluntary network. It is decentralized in the sense that there is no central authority controlling Athena or its nodes. It is immutable in the sense that no one is able to change a contract without altering its structure, which would invalidate it. It is voluntary in the sense that no one is forced to accept other people's contracts.
Athena also offers judges an incentive to make the right decision: money. Every judge is paid, and if it is later proved that he made the wrong decision, he has to give back the money he received. Additionally, the list of judges is created by the people who signed the contract. This means that, ideally, only trusted people would judge anyone's case.
### Athena components
#### The Accounts
After initializing the program, a JSON file called "account.json" will be created. This file must contain all contract copies, all signatures and accusations received, and all verdicts against or in favor of this person.
Each account will have an ID, which contains a username (a string with a maximum of 64 characters), an e-mail, a public key, a nonce, and a hash (with five leading zeros).
It is with this account that the user will be able to send and receive transactions. Whenever the account is initialized, the program will make a request to a trusted node to see if there is any transaction with the user's public key as a target. If that is the case, it will warn the user.
#### The Nodes
Each node will have a "temp_transactions.json" file in which every transaction received from the network will be stored. When a user wants to send a transaction, he will make a "/send_transaction" POST request to a node, which will, if the transaction is valid, store it for 24 hours.
In order to avoid spam attacks, the same account will only be able to send twenty transactions per hour.
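A sketch of how a node might enforce that limit with a sliding one-hour window per account. The data structure is an assumption, since the document doesn't prescribe one:

```python
import time
from collections import defaultdict, deque

MAX_PER_HOUR = 20
WINDOW_SECONDS = 3600


class SpamGuard:
    """Per-account sliding-window rate limiter (hypothetical node helper)."""

    def __init__(self):
        self._history = defaultdict(deque)  # public_key -> recent timestamps

    def allow(self, public_key, now=None):
        now = time.time() if now is None else now
        window = self._history[public_key]
        # Drop timestamps that fell out of the one-hour window.
        while window and now - window[0] >= WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_PER_HOUR:
            return False  # the node rejects this transaction
        window.append(now)
        return True
```

A node would call `allow()` with the sender's public key before storing any incoming transaction.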
Every *cycle*, the nodes will make a "/synchronize" GET request, which verifies whether the other nodes have any additional transactions and, if they are valid, adds these transactions to the JSON file. Additionally, the nodes will delete any transactions older than 24 hours.
# Using AWS CloudTrail with Interface VPC Endpoints<a name="cloudtrail-and-interface-VPC"></a>
If you use Amazon Virtual Private Cloud \(Amazon VPC\) to host your AWS resources, you can establish a private connection between your VPC and AWS CloudTrail\. You can use this connection to enable CloudTrail to communicate with your resources on your VPC without going through the public internet\.
Amazon VPC is an AWS service that you can use to launch AWS resources in a virtual network that you define\. With a VPC, you have control over your network settings, such as the IP address range, subnets, route tables, and network gateways\. With VPC endpoints, the routing between the VPC and AWS services is handled by the AWS network, and you can use IAM policies to control access to service resources\.
To connect your VPC to CloudTrail, you define an *interface VPC endpoint* for CloudTrail\. An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported AWS service\. The endpoint provides reliable, scalable connectivity to CloudTrail without requiring an internet gateway, network address translation \(NAT\) instance, or VPN connection\. For more information, see [What is Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/) in the *Amazon VPC User Guide*\.
Interface VPC endpoints are powered by AWS PrivateLink, an AWS technology that enables private communication between AWS services using an elastic network interface with private IP addresses\. For more information, see [AWS PrivateLink](https://aws.amazon.com/privatelink/)\.
The following steps are for users of Amazon VPC\. For more information, see [Getting Started](https://docs.aws.amazon.com/vpc/latest/userguide/GetStarted.html) in the *Amazon VPC User Guide*\.
## Availability<a name="cloudtrail-interface-VPC-availability"></a>
CloudTrail currently supports VPC endpoints in the following AWS Regions:
+ US East \(Ohio\)
+ US East \(N\. Virginia\)
+ US West \(N\. California\)
+ US West \(Oregon\)
+ Asia Pacific \(Mumbai\)
+ Asia Pacific \(Seoul\)
+ Asia Pacific \(Singapore\)
+ Asia Pacific \(Sydney\)
+ Asia Pacific \(Tokyo\)
+ Canada \(Central\)
+ Europe \(Frankfurt\)
+ Europe \(Ireland\)
+ Europe \(London\)
+ Europe \(Paris\)
+ South America \(São Paulo\)
## Create a VPC Endpoint for CloudTrail<a name="create-VPC-endpoint-for-CloudTrail"></a>
To start using CloudTrail with your VPC, create an interface VPC endpoint for CloudTrail\. For more information, see [Creating an Interface Endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint.html) in the *Amazon VPC User Guide*\.
You don't need to change the settings for CloudTrail\. CloudTrail calls other AWS services using either public endpoints or private interface VPC endpoints, whichever are in use\.
# github-query-script
Following are the steps to execute both scripts:
1. In case of windows, open a command prompt or windows power shell.
2. With Python already set up, first make sure that the github3 API library is installed; if not, run the following command:
pip install --pre github3.py
https://github.com/sigmavirus24/github3.py
3. Delete any old “repositories.txt” or “code.csv” files in the directory where the scripts are present so that the following script can create a new file with the latest results.
4. Then update both the get_matching_repositories.py and get_matching_code_files.py scripts with a valid GitHub username and password.
5. Now go to the directory via command prompt or power shell where the scripts are present.
6. To execute the first python script, run the following command:
python get_matching_repositories.py
This will extract Java repositories and export their information into the “repositories.txt” file, as explained in the previous section.
7. Now in order to execute the second script, run the following command:
python get_matching_code_files.py --username <githubusername> --repoFile repositories.txt --outputFile code.csv
This will give the final results in a “code.csv” file, which contains the list of repositories along with the names of the files that use the javax.crypto library, as explained in the previous section.
---
title: TransmitPipeline (SendPort ノード) |Microsoft Docs
ms.custom: ''
ms.date: 06/08/2017
ms.prod: biztalk-server
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- TransmitPipeline node [binding file]
ms.assetid: cee55451-6c3f-4e37-aef9-870d4b5d23aa
caps.latest.revision: 8
author: MandiOhlinger
ms.author: mandia
manager: anneta
ms.openlocfilehash: c4985117385195447a1f4dc45586e0c04da5b3da
ms.sourcegitcommit: 381e83d43796a345488d54b3f7413e11d56ad7be
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/07/2019
ms.locfileid: "65243079"
---
# <a name="transmitpipeline-sendport-node"></a>TransmitPipeline (SendPort Node)
The TransmitPipeline node under the SendPort node of a binding file provides specific information about the send pipeline that is bound to the send port exported with the binding file.
## <a name="nodes-in-the-transmitpipeline-node"></a>Nodes in the TransmitPipeline node
The following table lists the properties that can be set for this node of a binding file.
|**Name**|**Node type**|**Data type**|**Description**|**Restrictions**|**Comments**|
|--------------|-------------------|-------------------|---------------------|----------------------|------------------|
|Name|Attribute|xs:string|Specifies the name of the send pipeline.|Optional|Default value: empty|
|FullyQualifiedName|Attribute|xs:string|Specifies the fully qualified name of the pipeline, which includes the name of the assembly the pipeline is deployed to.|Optional|Default value: empty|
|Type|Attribute|xs:int|Specifies the type of the pipeline.|Required|Default value: none<br /><br /> For the possible values, see the [Microsoft.BizTalk.ExplorerOM.PipelineType](http://msdn.microsoft.com/library/microsoft.biztalk.explorerom.pipelinetype.aspx) enumeration.|
|TrackingOption|Attribute|PipelineTrackingTypes (SimpleType)|Specifies the tracking options for the pipeline.|Required|Default value: none<br /><br /> For the possible values, see the [Microsoft.BizTalk.ExplorerOM.PipelineTrackingTypes](http://msdn.microsoft.com/library/microsoft.biztalk.explorerom.pipelinetrackingtypes.aspx) enumeration.|
|Description|Attribute|xs:string|Specifies the description of the send pipeline.|Optional|Default value: empty|
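For illustration, a TransmitPipeline node using these properties might look like the following in a binding file. The attribute values below are examples, not taken from a real export:

```xml
<TransmitPipeline Name="PassThruTransmit"
                  FullyQualifiedName="Microsoft.BizTalk.DefaultPipelines.PassThruTransmit, Microsoft.BizTalk.DefaultPipelines, Version=3.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                  Type="2"
                  TrackingOption="ServiceStartEnd MessageSendReceive"
                  Description="Default pass-through send pipeline" />
```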
---
title: Azure Relay API overview | Microsoft Docs
description: Overview of available Azure Relay APIs
services: event-hubs
documentationcenter: na
author: sethmanheim
manager: timlt
editor: ''
ms.assetid: fdaa1d2b-bd80-4e75-abb9-0c3d0773af2d
ms.service: service-bus-relay
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 07/03/2017
ms.author: sethm
---
# Available Relay APIs
## Runtime APIs
The following table lists all currently available Relay runtime clients.
The [additional information](#additional-information) section contains more information about the status of each runtime library.
| Language/Platform | Available feature | Client package | Repository |
| --- | --- | --- | --- |
| .NET Standard | Hybrid Connections | [Microsoft.Azure.Relay](https://www.nuget.org/packages/Microsoft.Azure.Relay/) | [GitHub](https://github.com/azure/azure-relay-dotnet) |
| .NET Framework | WCF Relay | [WindowsAzure.ServiceBus](https://www.nuget.org/packages/WindowsAzure.ServiceBus/) | N/A |
| Node | Hybrid Connections | [`hyco-ws`](https://www.npmjs.com/package/hyco-ws)<br/>[`hyco-websocket`](https://www.npmjs.com/package/hyco-websocket) | [GitHub](https://github.com/Azure/azure-relay-node) |
### Additional information
#### .NET
The .NET ecosystem has multiple runtimes, hence there are multiple .NET libraries for Azure Relay. The .NET Standard library can be run using either .NET Core or the .NET Framework, while the .NET Framework library can only be run in a .NET Framework environment. For more information on .NET Frameworks, see [framework versions](/dotnet/articles/standard/frameworks#framework-versions).
## Next steps
To learn more about Azure Relay, visit these links:
* [What is Azure Relay?](relay-what-is-it.md)
* [Relay FAQ](relay-faq.md)
# SPDX Steering Committee Minutes, November 30, 2021
## Attending
* Jilayne Lovejoy
* Paul Madick
* Phil Odence
* Gary O'Neall
* Kate Stewart
* Steve Winslow
## Not attending
* Jack Manbeck
## Minutes
* Discussed member sign-ups under new governance: email announcement forthcoming after General Meeting; reviewed dates; need to add link to website; discussed thoughts on Member Representatives nominations process as part of later announcement.
* Publishing summaries of Steering Committee minutes: Steve to draft and submit to spdx/meetings repo
* Third party certifiers: received inquiry from community participant about becoming a third party certifier; noted no current project "conformance definition" other than SPDX specification; suggested guiding towards becoming a member of SPDX and also that the requester could explore with community about developing such a conformance or best-practices program.
# LAShowroom TaxJar Bundle
[Build Status](https://travis-ci.org/lashowroom/taxjar-bundle)
[Code Quality](https://scrutinizer-ci.com/g/lashowroom/taxjar-bundle/?branch=master)
[Code Coverage](https://scrutinizer-ci.com/g/lashowroom/taxjar-bundle/?branch=master)
[SensioLabs Insight](https://insight.sensiolabs.com/projects/d0511188-be97-4f86-81e9-de526832f3c5)
This bundle integrates the TaxJar API client into Symfony, and exposes a service on top.
The 2.x-1.x branch is version 1 of this library and works with version 2.x of Symfony.
---
title: IRON DEFICIENCY
---
`IRON DEFICIENCY`
Young leaves all yellow/bright yellow tips;
Marginal scorch, die back of growth;
Usually lime induced.
# CheckMeIn
a system for checking into and out of a building
# Setup
You'll need a Python venv; set it up like this:
1. ```python3 -m venv venv```
2. ```source venv/bin/activate```
3. ```pip install -r requirements.txt```
You'll also need a `sessions` directory to store session data
## Running tests
To make sure you haven't broken anything, run the tests using:
python -m pytest tests
---
title: 'Procedura: Caricare e scaricare gli assembly (C#)'
ms.date: 07/20/2015
ms.assetid: 6a4f490f-3576-471f-9533-003737cad4a3
ms.openlocfilehash: 52f7173efe497ab286c607db681f256983adc077
ms.sourcegitcommit: 0be8a279af6d8a43e03141e349d3efd5d35f8767
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 04/18/2019
ms.locfileid: "59342041"
---
# <a name="how-to-load-and-unload-assemblies-c"></a>How to: Load and unload assemblies (C#)
The assemblies that your program references are loaded automatically at build time, but you can also load specific assemblies into the current application domain at run time. For more information, see [How to: Load assemblies into an application domain](../../../../framework/app-domains/how-to-load-assemblies-into-an-application-domain.md).
There is no way to unload an individual assembly without unloading all of the application domains that contain it. Even if the assembly goes out of scope, the actual assembly file remains loaded until all application domains that contain it are unloaded.
If you want to unload some assemblies but not others, consider creating a new application domain, executing the code inside that domain, and then unloading that application domain. For more information, see [How to: Unload an application domain](../../../../framework/app-domains/how-to-unload-an-application-domain.md).
### <a name="to-load-an-assembly-into-an-application-domain"></a>To load an assembly into an application domain
1. Use one of the several load methods contained in the <xref:System.AppDomain> and <xref:System.Reflection> classes. For more information, see [How to: Load assemblies into an application domain](../../../../framework/app-domains/how-to-load-assemblies-into-an-application-domain.md).
### <a name="to-unload-an-application-domain"></a>To unload an application domain
1. There is no way to unload an individual assembly without unloading all of the application domains that contain it. To unload application domains, use the `Unload` method of <xref:System.AppDomain>. For more information, see [How to: Unload an application domain](../../../../framework/app-domains/how-to-unload-an-application-domain.md).
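A minimal sketch of both steps on the .NET Framework. The assembly display name and the `Worker.exe` helper are placeholders, not real components:

```csharp
using System;
using System.Reflection;

class Example
{
    static void Main()
    {
        // Load: bring a specific assembly into the current application domain.
        Assembly asm = Assembly.Load(
            "MyLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null");
        Console.WriteLine(asm.FullName);

        // Unload: individual assemblies cannot be unloaded, so run the
        // disposable work in its own application domain instead.
        AppDomain worker = AppDomain.CreateDomain("WorkerDomain");
        try
        {
            worker.ExecuteAssembly("Worker.exe"); // placeholder helper exe
        }
        finally
        {
            // Unloading the domain unloads every assembly it loaded.
            AppDomain.Unload(worker);
        }
    }
}
```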
## <a name="see-also"></a>See also
- [C# Programming Guide](../../../../csharp/programming-guide/index.md)
- [Assemblies in .NET](../../../../standard/assembly/index.md)
- [How to: Load assemblies into an application domain](../../../../framework/app-domains/how-to-load-assemblies-into-an-application-domain.md)
# Authentication and Application Scope in OSM
OSM is built to be scalable to any organization; whether there are 16 levels of organization or 1, OSM needs to adapt to its environment. When it comes to authentication and scope, there are 3 key concepts to understand:
- Organization Level
- Organizational Units
- Application Scope
### Organizational Level
At the root of any OSM instance, there exists a _root organization level_ from which all other organizational levels branch out. For example, a simple 3-tiered organizational hierarchy may look like this:
```
root node
⇩
cost center
⇩
department
⇩
team sector
```
The root node exists across all _application scopes_ (discussed further down). Organizational levels can then be restricted to a certain application scope, or made available to the general scope. This organizational chart is stored in the `sys_organization` table which, for this particular organization, would look something like this:
```
+--------------------+-------------+---------+--------------+
| organization_level | beholden_to | claim | scope_prefix |
+--------------------+-------------+---------+--------------+
| Root Node | Root Node | global | SYS |
| Cost Center | Root Node | cstcntr | SYS |
| Department | Cost Center | dept | SYS |
| Team Sector | Department | tmsctr | SYS |
+--------------------+-------------+---------+--------------+
```
When a user is defined to a specific organizational level, they will only be able to see records from tables whose `sys_ol_claim` is less than or equal to their organizational level. For example, consider a table of internal news articles:
```
+------------+---------------------+--------------+--------------+
| created_at | title | sys_ol_claim | sys_ou_claim |
+------------+---------------------+--------------+--------------+
| 2019-08-01 | Good News Everyone! | cstcntr | 09S667 |
| 2019-08-01 | Sad News Everyone | tmsctr | 7CS354 |
+------------+---------------------+--------------+--------------+
```
The first article would be visible to everyone who is assigned to Cost Center 09S667, as well as everyone who is assigned to any Team Sector that falls under Cost Center 09S667. The sad post would be visible to anyone in Team Sector 7CS354, as well as anyone assigned to the Cost Center which that Team Sector falls under. However, users assigned to other Team Sectors under that Cost Center would not be able to see the sad news article.
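The up-and-down-the-chain visibility described above can be sketched in a few lines. The unit IDs come from the examples in this article, except for the sibling Team Sector `7CS999`, which is a hypothetical addition for contrast; the helper names are illustrative, not actual OSM code:

```python
# Sketch of the record-visibility rule; names are illustrative only.

# org unit -> its parent unit (mirrors the `descendent_of` column);
# "7CS999" is a hypothetical sibling Team Sector added for contrast.
PARENTS = {
    "7CS354": "09S667",
    "7CS999": "09S667",
    "09S667": None,
}

def chain(unit):
    """All units from `unit` up to the top of its chain."""
    out = []
    while unit is not None:
        out.append(unit)
        unit = PARENTS.get(unit)
    return out

def visible(user_unit, record_unit):
    """A record is visible when it was posted at the user's own unit,
    at a broader unit above the user, or at a unit below the user,
    but not at a sibling unit."""
    return record_unit in chain(user_unit) or user_unit in chain(record_unit)

# The Cost Center article ("Good News") is visible down the chain:
print(visible("7CS354", "09S667"))  # True
# The Team Sector article ("Sad News") is visible up the chain:
print(visible("09S667", "7CS354"))  # True
# ...but not to a sibling Team Sector:
print(visible("7CS999", "7CS354"))  # False
```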
### Organizational Units
With Organizational Levels defining the overall layout of the organization, _Organizational Units_ implement that layout. Consider this selection from the `sys_organization_unit` table:
```
+--------+---------------------+-------------+---------------+------------+
| sys_id | unit_name | ou_level | descendent_of | auth_claim |
+--------+---------------------+-------------+---------------+------------+
| cst667 | Cost Center #09S667 | Cost Center | 789EEF | 09S667 |
| tsc354 | Team Sector #7CS354 | Team Sector | 09S667 | 7CS354 |
+--------+---------------------+-------------+---------------+------------+
```
Here we can see a simple hierarchy. From our news example above, it is clear how our `auth_claim` field is used and where it comes from. Users are defined to these Organizational Units in the `sys_user_org` table. Consider the following selection from that table:
```
+---------------+----------+
| user_id | org_unit |
+---------------+----------+
| administrator | tsc354 |
+---------------+----------+
```
As we can see here, the "administrator" user is defined to the Team Sector #7CS354 Organizational Unit. The system also knows that for any applications the user is accessing in the `SYS` scope, that user will only be shown information available to that Organizational Unit.
### Application Scope
Because OSM has the option for several different _application scopes_, it's important to understand how Organization Levels and Organizational Units interact with those application scopes.
Simply put, an Application Scope is a way to separate permissions and data between solutions. Technically, all solutions may be a part of the global application scope. This may cause problems if, for example, a certain solution is publicly accessible and you wish to prevent public users from accessing API data that may be a part of the global application scope.
Scopes themselves are very minimal. They are only there to provide a high level authentication and authorization scheme. A sample scope may look something like so:
```
+---------------+--------------+---------------+
| friendly_name | scope_prefix | global_access |
+---------------+--------------+---------------+
| Point of Sale | POS | false |
+---------------+--------------+---------------+
```
The `scope_prefix` is typically how scopes are identified, though they do have a `sys_id` as well.
By default, users are able to query any global route from any application. For example, a user of the `SYS` scope can query any table using the `/api/q/tablename` route, but they are restricted from accessing `/api/scope/POS`. Users that are part of another scope can access global routes such as `/api/q/tablename` as well as scope-specific routes such as `/api/scope/POS/csearch/`. This behavior can be controlled using the `global_access` flag on the `sys_app_scope` table.
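The routing rule above can be sketched as a small predicate. The function and its arguments are illustrative assumptions, not actual OSM code; `scope_allows_global` stands in for the `global_access` flag on the user's scope record:

```python
# Illustrative sketch of the route-access rule; not actual OSM code.

def can_access(user_scope, route_scope=None, scope_allows_global=True):
    """route_scope is None for global routes such as /api/q/<table>;
    for scoped routes such as /api/scope/POS/... it is the scope_prefix.
    scope_allows_global mirrors the `global_access` flag on the
    user's scope record."""
    if route_scope is not None:
        # Scoped routes are reachable only from within the same scope.
        return user_scope == route_scope
    # Global routes are reachable when the user's scope permits it.
    return scope_allows_global

# A SYS user can hit global routes but not POS-scoped ones:
print(can_access("SYS"))                             # True
print(can_access("SYS", route_scope="POS"))          # False
# A POS user can hit POS routes; global access depends on the flag:
print(can_access("POS", route_scope="POS"))          # True
print(can_access("POS", scope_allows_global=False))  # False
```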
When designing solutions, the scoping is handled automatically provided standard routing functions are used. This concept is described more in `Designing and Implementing Solutions`.
| 60.421053 | 476 | 0.632753 | eng_Latn | 0.9968 |
1c440a85b25206b3ae5ddaf3200197ab96d12d63 | 105 | md | Markdown | README.md | juanma-cvega/booking-service | 1d4d6699b80d98e21294b000f75dcca9c7ec1877 | [
"Apache-2.0"
] | null | null | null | README.md | juanma-cvega/booking-service | 1d4d6699b80d98e21294b000f75dcca9c7ec1877 | [
"Apache-2.0"
] | null | null | null | README.md | juanma-cvega/booking-service | 1d4d6699b80d98e21294b000f75dcca9c7ec1877 | [
"Apache-2.0"
] | null | null | null | # booking-service
Application that allows users to create and manage rooms and their bookings.
| 35 | 86 | 0.809524 | eng_Latn | 0.999897 |
1c44d30c2120aa36b2f463cf626fcabbc16c1881 | 798 | md | Markdown | vendor/gonum.org/v1/gonum/.github/ISSUE_TEMPLATE/bug_report.md | jcpowermac/api | 082f8e2a947ea8b4ed15c9c0f7b190d1fd35e6bc | [
"Apache-2.0"
] | 61 | 2019-05-06T08:48:02.000Z | 2022-03-27T11:44:34.000Z | vendor/gonum.org/v1/gonum/.github/ISSUE_TEMPLATE/bug_report.md | jcpowermac/api | 082f8e2a947ea8b4ed15c9c0f7b190d1fd35e6bc | [
"Apache-2.0"
] | 3 | 2019-06-18T07:06:14.000Z | 2021-05-30T13:41:59.000Z | vendor/gonum.org/v1/gonum/.github/ISSUE_TEMPLATE/bug_report.md | jcpowermac/api | 082f8e2a947ea8b4ed15c9c0f7b190d1fd35e6bc | [
"Apache-2.0"
] | 42 | 2019-05-06T08:48:04.000Z | 2022-02-18T23:47:23.000Z | ---
name: Bug Report
about: Report a problem with Gonum
---
<!--
Please make sure your issue title matches the Go convention: a summary
of the problem, prefixed by the primary affected package.
-->
### What are you trying to do?
### What did you do?
<!-- Please include a link to a minimal reproducer here. -->
### What did you expect to happen?
### What actually happened?
### What version of Go and Gonum are you using?
<!--
Paste the output of `go version` and if you are installing Gonum from source, paste
the output of `(cd $(go env GOPATH)/src/gonum.org/v1/gonum && git rev-parse HEAD)`.
If you are using modules, also paste the output of `grep gonum.org/v1/gonum go.sum`,
executed in the root of your dependent module.
-->
### Does this issue reproduce with the current master?
| 23.470588 | 84 | 0.70802 | eng_Latn | 0.999208 |
1c4503cc14d88feeee61218ea0ed7a5937b527ca | 3,674 | md | Markdown | _episodes/introduction.md | cgeroux/DH-cloud-course | a946241fcc6011a95a3465c22d70e8c0d6e2365a | [
"CC-BY-4.0"
] | null | null | null | _episodes/introduction.md | cgeroux/DH-cloud-course | a946241fcc6011a95a3465c22d70e8c0d6e2365a | [
"CC-BY-4.0"
] | null | null | null | _episodes/introduction.md | cgeroux/DH-cloud-course | a946241fcc6011a95a3465c22d70e8c0d6e2365a | [
"CC-BY-4.0"
] | null | null | null | ---
layout: episode
title: "Introduction"
teaching: 15
exercises: 0
questions:
- "Who are we?"
- "Who are you?"
- "What will we do in this course?"
objectives:
- "Introduce the instructors."
- "Get to know the students and how they want to use the cloud."
- "Provide an overview of the course."
- "Ensure everyone has their environments setup."
keypoints:
- "[**Cloud computing**](../reference#cloud-computing) is very flexible and has many diverse uses."
- "Setup of Compute Canada cloud environments is left to its users."
- "In this course we will setup a cloud environment to run WordPress."
- "We will see methods that ease the setup of cloud environments."
start: true
start_time: 0
---
## Introductions
Let's get to know each other, starting with introductions and a brief background about yourself. Tell us why you are interested in this course and what you hope to get out of it.
## Use cases
Cloud computing offers a wide range of usage possibilities: from running basic HTML sites for publishing works, to collaborative wikis, to persistent Twitter/web scrapers automating data collection and processing, to platforms that help support whole research communities. Possible use cases are varied and wide-ranging. One particular example using the Compute Canada cloud is [Voyant Tools](https://voyant-tools.org/), a web-based text reading and analysis environment. It is a scholarly project designed to facilitate reading and interpretive practices for digital humanities students and scholars as well as for the general public.
## Motivation
Humanities and social sciences often make use of computers in varied and diverse ways typically not needed by the more traditional high performance computing tasks. Cloud computing provides the required flexibility for these varied and diverse use cases. However, this flexibility comes at a cost. To facilitate a wide array of possible use cases, much of the configuration and setup must be undertaken by individual researchers and not by the central bodies which manage the cloud infrastructure. With each cloud setup being customized and unique, it becomes difficult to manage centrally in a consistent manner, and much of the management and customization is left to the end cloud user.
## Overview
This course will start by introducing some basic concepts about the Internet and cloud in general to lay the foundation for the episodes to come. By the end of the first day you should have the basic tools and knowledge in hand to start working with the cloud, having created a web-server and your first website hosted on your own cloud server.
In day two we will use WordPress as an example of how to set up and configure more complex website software. WordPress is a content management system (CMS). There are many different CMSs available which have many components in common. We will start by manually walking through the steps to create a WordPress site in the cloud. Understanding the manual steps of installing and configuring a CMS will allow you to generalize to other CMSs, for example Wikimedia, Omeka, and Drupal. Day two will also cover some more advanced ways to interact with the cloud using your terminal.
In day three, knowing how a manual installation and setup works, we will look at ways to automate this task. In the morning we will learn how to automate the installation and setup of software, and in the afternoon how to automate the creation of cloud resources.
Before we begin make sure you have completed the steps in the [setup](../setup) page. If you have had difficulty with any part of them, now is the time to let us know and we will help you solve them.
| 89.609756 | 686 | 0.795863 | eng_Latn | 0.999789 |
1c455a804481c9ee9accb3cc6bab4ad93ed3a7d8 | 283 | md | Markdown | build/docstrings/body/CutAggPasses_param.md | ykrist/rust-grb | 9042347a1609bdbf4d94f30fa618e28e4920e864 | [
"MIT"
] | 3 | 2021-05-22T23:25:29.000Z | 2022-02-23T10:53:16.000Z | build/docstrings/body/CutAggPasses_param.md | ykrist/rust-grb | 9042347a1609bdbf4d94f30fa618e28e4920e864 | [
"MIT"
] | 4 | 2021-06-02T16:20:31.000Z | 2022-01-11T00:43:46.000Z | build/docstrings/body/CutAggPasses_param.md | ykrist/rust-grb | 9042347a1609bdbf4d94f30fa618e28e4920e864 | [
"MIT"
] | 2 | 2021-04-26T08:23:46.000Z | 2021-05-22T23:37:55.000Z | A non-negative value indicates the maximum number of constraint aggregation passes performed during cut generation.
Overrides the `Cuts` parameter.
Changing the value of this parameter rarely produces a significant benefit.
Note: Only affects mixed integer programming (MIP) models | 47.166667 | 115 | 0.830389 | eng_Latn | 0.992872 |
1c45c62add9e81444eed81c801690e5aa3ce1f49 | 3,595 | md | Markdown | doc/MergeBasesApi.md | sowderca/azure_devops_sdk | 1ef1b3b5f72dca3d5075d211f97196caa99494ad | [
"MIT"
] | 2 | 2019-10-07T12:30:29.000Z | 2021-03-19T11:49:53.000Z | doc/MergeBasesApi.md | sowderca/azure_devops_sdk | 1ef1b3b5f72dca3d5075d211f97196caa99494ad | [
"MIT"
] | null | null | null | doc/MergeBasesApi.md | sowderca/azure_devops_sdk | 1ef1b3b5f72dca3d5075d211f97196caa99494ad | [
"MIT"
] | null | null | null | # azure_devops_sdk.api.MergeBasesApi
## Load the API package
```dart
import 'package:azure_devops_sdk/api.dart';
```
All URIs are relative to *https://app.vssps.visualstudio.com*
Method | HTTP request | Description
------------- | ------------- | -------------
[**list**](MergeBasesApi.md#list) | **GET** /{organization}/{project}/_apis/git/repositories/{repositoryNameOrId}/commits/{commitId}/mergebases |
# **list**
> List<GitCommitRef> list(organization, repositoryNameOrId, commitId, otherCommitId, project, apiVersion, otherCollectionId, otherRepositoryId)
Find the merge bases of two commits, optionally across forks. If otherRepositoryId is not specified, the merge bases will only be calculated within the context of the local repositoryNameOrId.
### Example
```dart
import 'package:azure_devops_sdk/api.dart';
// TODO Configure OAuth2 access token for authorization: oauth2
//defaultApiClient.getAuthentication<OAuth>('oauth2').accessToken = 'YOUR_ACCESS_TOKEN';
var api_instance = MergeBasesApi();
var organization = 'organization_example'; // String | The name of the Azure DevOps organization.
var repositoryNameOrId = 'repositoryNameOrId_example'; // String | ID or name of the local repository.
var commitId = 'commitId_example'; // String | First commit, usually the tip of the target branch of the potential merge.
var otherCommitId = 'otherCommitId_example'; // String | Other commit, usually the tip of the source branch of the potential merge.
var project = 'project_example'; // String | Project ID or project name
var apiVersion = 'apiVersion_example'; // String | Version of the API to use. This should be set to '5.1-preview.1' to use this version of the api.
var otherCollectionId = '38400000-8cf0-11bd-b23e-10b96e4ef00d'; // String | The collection ID where otherCommitId lives.
var otherRepositoryId = '38400000-8cf0-11bd-b23e-10b96e4ef00d'; // String | The repository ID where otherCommitId lives.
try {
var result = api_instance.list(organization, repositoryNameOrId, commitId, otherCommitId, project, apiVersion, otherCollectionId, otherRepositoryId);
print(result);
} catch (e) {
print("Exception when calling MergeBasesApi->list: $e\n");
}
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**organization** | **String**| The name of the Azure DevOps organization. | [default to null]
**repositoryNameOrId** | **String**| ID or name of the local repository. | [default to null]
**commitId** | **String**| First commit, usually the tip of the target branch of the potential merge. | [default to null]
**otherCommitId** | **String**| Other commit, usually the tip of the source branch of the potential merge. | [default to null]
**project** | **String**| Project ID or project name | [default to null]
**apiVersion** | **String**| Version of the API to use. This should be set to '5.1-preview.1' to use this version of the api. | [default to null]
**otherCollectionId** | [**String**](.md)| The collection ID where otherCommitId lives. | [optional] [default to null]
**otherRepositoryId** | [**String**](.md)| The repository ID where otherCommitId lives. | [optional] [default to null]
### Return type
[**List<GitCommitRef>**](GitCommitRef.md)
### Authorization
[oauth2](../README.md#oauth2)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
| 48.581081 | 192 | 0.716273 | eng_Latn | 0.716272 |
1c462638202c99436def84eaaec7edb3a6d35378 | 1,556 | md | Markdown | Disaster Recovery/SafeHaven-Delivering-Process.md | danielgainer/PublicKB | 3b94c291cb7680e96f8497620568123e81de50e0 | [
"Apache-2.0"
] | null | null | null | Disaster Recovery/SafeHaven-Delivering-Process.md | danielgainer/PublicKB | 3b94c291cb7680e96f8497620568123e81de50e0 | [
"Apache-2.0"
] | null | null | null | Disaster Recovery/SafeHaven-Delivering-Process.md | danielgainer/PublicKB | 3b94c291cb7680e96f8497620568123e81de50e0 | [
"Apache-2.0"
] | null | null | null | {{{
"title": "Process of Delivering SafeHaven As a Disaster Recovery Service",
"date": "1-06-2016",
"author": "Shasha Zhu",
"attachments": [],
"related-products" : [],
"contentIsHTML": false,
"sticky": true
}}}
### Overview
This article covers the major steps to use SafeHaven as a disaster
recovery service.

### Get an Order From Marketing Team
To start the process, you simply contact the SafeHaven team,
safehavenmarketing@ctl.io, requesting a free trial or direct order. A SafeHaven
sales lead will deliver a demo and work with you on capacity planning and
compatibility validation. You will be transferred to the SafeHaven on-boarding team
when the PoC or direct order is approved.
### Start Working With On-Boarding Team
The SafeHaven on-boarding team provides a customized design based on your specific
requirements. An experienced on-boarding engineer will be assigned to pair with you and provide professional guidance on how to set up the SafeHaven environment and manage the DR solution. The on-boarding project normally takes 30 to 45 days to complete. Depending on your specific requirements and the number of servers to protect, the on-boarding time varies.
### Get Help From Customer Care Team
After on-boarding SafeHaven, you should be able to manage the solution by
yourself. If you need help, contact our Customer Care team. This
[Knowledge Base](https://www.ctl.io/knowledge-base/support/how-do-i-report-a-support-issue/) article explains how to open a ticket requesting support.
| 50.193548 | 355 | 0.775064 | eng_Latn | 0.995917 |
1c47d57aa13ee6815685c67b3b33674fa6de801a | 1,577 | md | Markdown | README.md | bitrise-steplib/bitrise-step-build-status-change | f5f91ef827b5af0cd2326db94deb57f36b792240 | [
"MIT"
] | 2 | 2021-06-22T11:28:20.000Z | 2021-06-24T16:01:44.000Z | README.md | bitrise-steplib/bitrise-step-build-status-change | f5f91ef827b5af0cd2326db94deb57f36b792240 | [
"MIT"
] | 2 | 2019-12-04T19:29:30.000Z | 2021-10-15T08:06:53.000Z | README.md | bitrise-steplib/bitrise-step-build-status-change | f5f91ef827b5af0cd2326db94deb57f36b792240 | [
"MIT"
] | 1 | 2019-06-21T23:02:45.000Z | 2019-06-21T23:02:45.000Z | # Build Status Change
> Exports environment variables to be able to detect if the currently running build's status has changed to a previous one.
## Inputs
- access_token: __(required)__ __(sensitive)__
> Your access token for the account that has access to the Bitrise app.
## Outputs
### Exported Environment variables
- BUILD_STATUS_CHANGED: Build Status Changed
> True if the actual build status is different from the previous one.
- PREVIOUS_BUILD_STATUS: Previous Build Status
> Status text of the previous build.
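A follow-up script step in the workflow might branch on these variables. The snippet below is an illustrative sketch of such a consumer; it is not shipped with the step:

```python
# Illustrative consumer of the step's exported variables.
import os

def status_change(env=None):
    """Return (changed, previous_status) from the exported variables."""
    env = os.environ if env is None else env
    changed = env.get("BUILD_STATUS_CHANGED", "false").lower() == "true"
    previous = env.get("PREVIOUS_BUILD_STATUS", "unknown")
    return changed, previous

changed, previous = status_change({"BUILD_STATUS_CHANGED": "true",
                                   "PREVIOUS_BUILD_STATUS": "error"})
if changed:
    print(f"Build status flipped (previous status: {previous})")
```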
## Contribute
1. Fork this repository
1. Make changes
1. Submit a PR
## How to run this step from source
1. Clone this repository
1. `cd` to the cloned repository's root
1. Create a bitrise.yml (if not yet created)
1. Prepare a workflow that contains a step with the id: `path::./`
> For example:
> ```yaml
> format_version: "6"
> default_step_lib_source: https://github.com/bitrise-io/bitrise-steplib.git
>
> workflows:
> my-workflow:
> steps:
> - path::./:
> inputs:
> - my_input: "my input value"
> ```
1. Run the workflow: `bitrise run my-workflow`
## About
This is an official Step managed by Bitrise.io and is available in the [Workflow Editor](https://www.bitrise.io/features/workflow-editor) and in our [Bitrise CLI](https://github.com/bitrise-io/bitrise) tool. If you see something in this readme that you haven't seen before, please visit our knowledge base to read more about it:
- devcenter.bitrise.io
- discuss.bitrise.io
- blog.bitrise.io
| 31.54 | 327 | 0.703868 | eng_Latn | 0.985365 |
1c49279ed2ea22fcac7e10891ce3c0dbcb877833 | 4,041 | md | Markdown | site/content/3.0/components/navbar.md | nkdas91/material-style | 87c03cdb251990ad0e94f866c32e8261d1601458 | [
"MIT"
] | 4 | 2018-11-21T06:53:34.000Z | 2020-01-31T15:38:08.000Z | site/content/3.0/components/navbar.md | nkdas91/material-style | 87c03cdb251990ad0e94f866c32e8261d1601458 | [
"MIT"
] | null | null | null | site/content/3.0/components/navbar.md | nkdas91/material-style | 87c03cdb251990ad0e94f866c32e8261d1601458 | [
"MIT"
] | null | null | null | ---
layout: docs
title: Navbar
group: components
toc: true
keywords: layout, navbar
---
# Navbar
<p class="fs-4 ms-0 mb-4 text-secondary">
Navigation bars offer a persistent and convenient way to switch between primary destinations in an app.
</p>
{{< example codeId="code1" >}}
<nav class="navbar navbar-expand-lg navbar-dark bg-green">
<div class="container-fluid">
<a class="navbar-brand" href="#">Navbar</a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse"
data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent"
aria-expanded="false"
aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav me-auto mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="#">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#">Link</a>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button"
data-bs-toggle="dropdown"
aria-expanded="false">
Dropdown
</a>
<ul class="dropdown-menu" aria-labelledby="navbarDropdown">
<li><a class="dropdown-item" href="#">Action</a></li>
<li><a class="dropdown-item" href="#">Another action</a></li>
<li>
<hr class="dropdown-divider">
</li>
<li><a class="dropdown-item" href="#">Something else here</a></li>
</ul>
</li>
<li class="nav-item">
<a class="nav-link disabled" href="#" tabindex="-1" aria-disabled="true">Disabled</a>
</li>
</ul>
<form class="d-flex">
<input class="form-control me-2" type="search" placeholder="Search" aria-label="Search" autocomplete="off">
<button class="btn btn-yellow" type="button">Search</button>
</form>
</div>
</div>
</nav>
{{< /example >}}
## Auto hide
<div class="border rounded-3">
<div class="p-4 d-flex justify-content-center">
<a class="btn btn-success rounded-pill px-4" href="/materialstyle/3.0/demos/navbar-auto-hide">
View Demo <i class="bi bi-box-arrow-up-right"></i>
<span class="ripple-surface"></span>
</a>
</div>
<div class="d-flex justify-content-end">
<btn class="btn btn-sm btn-outline-purple border-0 rounded-0 d-flex align-items-center" data-bs-toggle="collapse" href="#code2">
<i class="bi bi-code-slash fs-5 me-1"></i> CODE
<span class="ripple-surface"></span>
</btn>
</div>
<div class="collapse" id="code2">
```html
<!DOCTYPE html>
<html>
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Material Style CSS -->
<link rel="stylesheet"
href="https://unpkg.com/@materialstyle/materialstyle@3.0.0/dist/css/materialstyle.min.css">
<title>Material Style</title>
</head>
<body>
<!-- Navbar -->
<nav class="navbar navbar-expand-sm bg-purple navbar-dark fixed-top auto-hide">
<div class="container-fluid">
<a class="navbar-brand d-flex align-items-center" href="javascript:">
<i class="bi bi-star-fill me-2"></i>Brand
</a>
</div>
</nav>
<div class="container-fluid p-2" style="margin-top: var(--navbar-fixed-height);">
<!-- Your content here -->
</div>
<!-- Footer -->
<footer class="bg-dark text-white p-3">
Footer
</footer>
<!-- Popper JS -->
<script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.10.2/dist/umd/popper.min.js"
integrity="sha384-7+zCNj/IqJ95wo16oMtfsKbZ9ccEh31eOz1HGyDuCQ6wgnyJNSYdrPa03rtR1zdB"
crossorigin="anonymous"></script>
<!-- Material Style JS -->
<script src="https://unpkg.com/@materialstyle/materialstyle@3.0.0/dist/js/materialstyle.min.js"></script>
</body>
</html>
```
</div>
</div>
| 31.084615 | 132 | 0.615194 | eng_Latn | 0.127961 |
1c49681d3fb56f6d07593746fd279d084256e489 | 129 | md | Markdown | src/pages/2019-12-11-post-one/index.md | corey-hammond/gatsby_crash_course | dfad1bc4ffc6fc11b186d72d75cb08475fcd4e4d | [
"MIT"
] | null | null | null | src/pages/2019-12-11-post-one/index.md | corey-hammond/gatsby_crash_course | dfad1bc4ffc6fc11b186d72d75cb08475fcd4e4d | [
"MIT"
] | null | null | null | src/pages/2019-12-11-post-one/index.md | corey-hammond/gatsby_crash_course | dfad1bc4ffc6fc11b186d72d75cb08475fcd4e4d | [
"MIT"
] | null | null | null | ---
path: "/post-one"
date: "2019-12-11"
title: "My First Gatsby Post"
author: "Corey"
---
This is my first blog post in Gatsby
| 14.333333 | 36 | 0.658915 | eng_Latn | 0.919787 |
1c4a69cfe1154c255c0102540cfa1281ca6d2eda | 3,912 | md | Markdown | articles/application-gateway/ingress-controller-expose-websocket-server.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/application-gateway/ingress-controller-expose-websocket-server.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/application-gateway/ingress-controller-expose-websocket-server.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Expose a WebSocket server to Application Gateway
description: This article provides information on how to expose a WebSocket server to Application Gateway with the ingress controller for AKS clusters.
services: application-gateway
author: caya
ms.service: application-gateway
ms.topic: article
ms.date: 11/4/2019
ms.author: caya
ms.openlocfilehash: 01fde82e69917f59f6519524c4c8828feb84a4f9
ms.sourcegitcommit: 018e3b40e212915ed7a77258ac2a8e3a660aaef8
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 11/07/2019
ms.locfileid: "73795965"
---
# <a name="expose-a-websocket-server-to-application-gateway"></a>Expose a WebSocket server to Application Gateway
As described in the Application Gateway v2 documentation, it [provides native support for the WebSocket and HTTP/2 protocols](https://docs.microsoft.com/azure/application-gateway/overview#websocket-and-http2-traffic). Note that for both Application Gateway and the Kubernetes ingress there is no user-configurable setting to selectively enable or disable WebSocket support.
The Kubernetes deployment YAML below shows the minimal configuration used to deploy a WebSocket server, which is the same as deploying a regular web server:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: websocket-server
spec:
selector:
matchLabels:
app: ws-app
replicas: 2
template:
metadata:
labels:
app: ws-app
spec:
containers:
- name: websocket-app
imagePullPolicy: Always
image: your-container-repo.azurecr.io/websockets-app
ports:
- containerPort: 8888
imagePullSecrets:
- name: azure-container-registry-credentials
---
apiVersion: v1
kind: Service
metadata:
name: websocket-app-service
spec:
selector:
app: ws-app
ports:
- protocol: TCP
port: 80
targetPort: 8888
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: websocket-repeater
annotations:
kubernetes.io/ingress.class: azure/application-gateway
spec:
rules:
- host: ws.contoso.com
http:
paths:
- backend:
serviceName: websocket-app-service
servicePort: 80
```
Provided that all prerequisites are fulfilled, and that you have an Application Gateway controlled by a Kubernetes ingress in AKS, the deployment above would result in a WebSocket server exposed on port 80 of the Application Gateway's public IP and on the `ws.contoso.com` domain.
The following curl command tests the WebSocket server deployment:
```sh
curl -i -N -H "Connection: Upgrade" \
-H "Upgrade: websocket" \
-H "Origin: http://localhost" \
-H "Host: ws.contoso.com" \
-H "Sec-Websocket-Version: 13" \
-H "Sec-WebSocket-Key: 123" \
http://1.2.3.4:80/ws
```
## <a name="websocket-health-probes"></a>WebSocket health probes
If your deployment does not explicitly define health probes, Application Gateway attempts an HTTP GET on your WebSocket server endpoint.
Depending on the server implementation ([here](https://github.com/gorilla/websocket/blob/master/examples/chat/main.go) is one example), WebSocket-specific headers may be required (`Sec-Websocket-Version`, for instance).
Because Application Gateway does not add WebSocket headers, its health probe will most likely get a `400 Bad Request` response from the WebSocket server.
As a result, Application Gateway marks your pods as unhealthy, which eventually results in a `502 Bad Gateway` for the consumers of the WebSocket server.
To avoid this, you may need to add an HTTP GET handler for health checks to your server (`/health`, for instance, returning `200 OK`).
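A minimal plain-HTTP health endpoint can run alongside the WebSocket server so the probe never has to speak WebSocket. The sketch below uses only the Python standard library; the `/health` path and the port are assumptions, not values required by Application Gateway:

```python
# Minimal /health endpoint sketch for Application Gateway probes.
# Path and port are illustrative; run it beside the WebSocket server.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = b"OK"
            self.send_response(200)  # what the probe needs to see
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep probe traffic out of the logs

def make_health_server(port=0):
    """Port 0 asks the OS for a free port, which is handy in tests."""
    return ThreadingHTTPServer(("127.0.0.1", port), HealthHandler)

# To serve standalone: make_health_server(8888).serve_forever()
```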
| 39.918367 | 399 | 0.759202 | pol_Latn | 0.994341 |
1c4a88299377470973c135b50e651502ecdf607f | 146 | md | Markdown | README.md | ZPVIP/smart_cal | 2b06ded9384a42519886710d00ded70efd0ccce4 | [
"Apache-2.0"
] | null | null | null | README.md | ZPVIP/smart_cal | 2b06ded9384a42519886710d00ded70efd0ccce4 | [
"Apache-2.0"
] | null | null | null | README.md | ZPVIP/smart_cal | 2b06ded9384a42519886710d00ded70efd0ccce4 | [
"Apache-2.0"
] | 1 | 2020-05-02T14:58:20.000Z | 2020-05-02T14:58:20.000Z | # smart_cal
## Make an exe

    ocra --windows smart_cal_fx.rb

Try the Windows binary in the `release_exe` folder.

## Ruby way

    ruby ./smart_cal_fx.rb
| 13.272727 | 31 | 0.712329 | eng_Latn | 0.948602 |
1c4a9d916fb66d41342d743ee3ab03c20f248884 | 1,023 | md | Markdown | README.md | SaddamDeveloper/FileManagementSystem | 2a0fabbc5a5b886d45eca088c282ade146e87833 | [
"MIT"
] | null | null | null | README.md | SaddamDeveloper/FileManagementSystem | 2a0fabbc5a5b886d45eca088c282ade146e87833 | [
"MIT"
] | 6 | 2019-05-23T12:27:12.000Z | 2022-02-26T15:20:59.000Z | README.md | SaddamDeveloper/FileManagementSystem | 2a0fabbc5a5b886d45eca088c282ade146e87833 | [
"MIT"
] | null | null | null | ## File Management System Final
run the project by following steps
1. git clone https://github.com/SaddamDeveloper/FileManagementSystemFinal.git
2. cd FileManagementSystemFinal
3. npm install
4. composer update
5. rename .env.example file as .env file
6. open .env file change the dbname as 'fms', username 'root', password leave blank
7. run command 'php artisan key:generate'
8. open phpmyadmin by going http://localhost/phpmyadmin create database 'fms'
9. import the 'fms.sql' file
10. run command 'php artisan serve'
11. run 'php artisan jwt:secret' and 'php artisan cache:clear' if any error of login
### important links
https://medium.com/@ripoche.b/create-a-spa-with-role-based-authentication-with-laravel-and-vue-js-ac4b260b882f
## Screenshot

## License
The Laravel framework is open-source software licensed under the [MIT license](https://opensource.org/licenses/MIT).
| 26.921053 | 124 | 0.773216 | eng_Latn | 0.507479 |
1c4ab603af25734bb4afdc07f98ab0d0e54967c3 | 535 | md | Markdown | sessions/RRVNAM.md | YaguraStation/stateofthemap-2020 | 09bed7968dcc44e7dcf271922594afcec6f9a12d | [
"Apache-2.0"
] | null | null | null | sessions/RRVNAM.md | YaguraStation/stateofthemap-2020 | 09bed7968dcc44e7dcf271922594afcec6f9a12d | [
"Apache-2.0"
] | null | null | null | sessions/RRVNAM.md | YaguraStation/stateofthemap-2020 | 09bed7968dcc44e7dcf271922594afcec6f9a12d | [
"Apache-2.0"
] | null | null | null | ---
layout: session
title: "Winds of Change in OpenStreetMap"
subtitle: "The next 15 years"
code: "RRVNAM"
speaker_names_with_affiliations: ["Allan Mustard"]
room: "Track 1"
length: "20"
time: "Saturday, 10:20"
time_iso: "2020-07-04T10:20:00Z"
resources: []
recording: True
pad: https://pad.sotm.bitcast.co.za/p/winds-of-change-in-openstreetmap
---
OSM Foundation board chairperson Allan Mustard offers his personal assessment of challenges facing OSM and how he thinks the community and the OSM Foundation Board could deal with them.
| 33.4375 | 185 | 0.770093 | eng_Latn | 0.887442 |
1c4ade7a1dcc06c30981fd319976ec3fc40902d1 | 11,868 | md | Markdown | docs/xquery/comparison-expressions-xquery.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/xquery/comparison-expressions-xquery.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/xquery/comparison-expressions-xquery.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Comparison Expressions (XQuery) | Microsoft Docs
ms.custom: ''
ms.date: 08/09/2016
ms.prod: sql
ms.prod_service: sql
ms.reviewer: ''
ms.technology:
- database-engine
ms.topic: language-reference
dev_langs:
- XML
helpviewer_keywords:
- node comparison operators [XQuery]
- comparison expressions [XQuery]
- node order comparison operators [XQuery]
- expressions [XQuery], comparison
- comparison operators [XQuery]
- value comparison operators
ms.assetid: dc671348-306f-48ef-9e6e-81fc3c7260a6
author: rothja
ms.author: jroth
manager: craigg
ms.openlocfilehash: e2cc3870ca0b302175c6fdd546b730e73956485f
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/01/2018
ms.locfileid: "47824787"
---
# <a name="comparison-expressions-xquery"></a>Comparison Expressions (XQuery)
[!INCLUDE[tsql-appliesto-ss2008-xxxx-xxxx-xxx-md](../includes/tsql-appliesto-ss2008-xxxx-xxxx-xxx-md.md)]
XQuery provides the following types of comparison operators:
- General comparison operators
- Value comparison operators
- Node comparison operator
- Node order comparison operators
## <a name="general-comparison-operators"></a>General Comparison Operators
You can use the general comparison operators to compare atomic values, sequences, or any combination of the two.
The general operators are shown in the following table.
|Operator|Description|
|--------------|-----------------|
|=|Equal to|
|!=|Not equal to|
|\<|Less than|
|>|Greater than|
|\<=|Less than or equal to|
|>=|Greater than or equal to|
When you compare two sequences by using the general comparison operators, if a value in the second sequence compares True against a value in the first sequence, the overall result is True. Otherwise, it is False. For example, (1, 2, 3) = (3, 4) is True, because the value 3 occurs in both sequences.
```
declare @x xml
set @x=''
select @x.query('(1,2,3) = (3,4)')
```
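As a cross-check of the "any pair matches" (existential) semantics described above, the same rule can be sketched outside T-SQL. The following Python function is illustrative only and is not part of the documented SQL Server behavior:

```python
def general_eq(seq1, seq2):
    """XQuery-style general '=': True if ANY value in seq1 equals ANY value in seq2."""
    return any(a == b for a in seq1 for b in seq2)

print(general_eq((1, 2, 3), (3, 4)))  # True  -- 3 occurs in both sequences
print(general_eq((1, 2), (5, 4)))     # False -- no pair of values is equal
```

The same existential rule applies to the other general operators (`!=`, `<`, and so on), each testing its own condition over every pair of values.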
The comparison requires that the values be comparable. In particular, they are statically checked. For numeric comparisons, numeric type promotion can occur. For example, if the decimal value 10 is compared against the double value 1e1, the decimal value is converted to double. Note that this could generate imprecise results, because comparisons between double values cannot be exact.
If one of the values is untyped, it is cast to the type of the other value. In the following example, the value 7 is treated as an integer. Before the comparison, the untyped value /a[1] is converted to an integer. The integer comparison returns True.
```
declare @x xml
set @x='<a>6</a>'
select @x.query('/a[1] < 7')
```
However, if the untyped value is compared to a string or to another untyped value, it is cast to xs:string. In the following query, the string 6 is compared to the string "17". The query returns False, because of the string comparison.
```
declare @x xml
set @x='<a>6</a>'
select @x.query('/a[1] < "17"')
```
The following query retrieves small-size pictures of a product model from the product catalog in the AdventureWorks sample database. The query compares a sequence of atomic values returned by `PD:ProductDescription/PD:Picture/PD:Size` with the singleton sequence "small". If the comparison is True, it returns the \<Picture> element.
```
WITH XMLNAMESPACES ('http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription' AS PD)
SELECT CatalogDescription.query('
for $P in /PD:ProductDescription/PD:Picture[PD:Size = "small"]
return $P') as Result
FROM Production.ProductModel
WHERE ProductModelID=19
```
The following query compares a sequence of telephone numbers in \<number> elements with the string literal "112-111-1111". The query compares the sequence of telephone number elements in the AdditionalContactInfo column to determine whether the document includes a specific telephone number for a particular customer.
```
WITH XMLNAMESPACES (
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ContactTypes' AS act,
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ContactInfo' AS aci)
SELECT AdditionalContactInfo.value('
/aci:AdditionalContactInfo//act:telephoneNumber/act:number = "112-111-1111"', 'nvarchar(10)') as Result
FROM Person.Contact
WHERE ContactID=1
```
The query returns True. This indicates that the document contains that number. The following query is a slightly modified version of the previous one. In this query, the telephone number values retrieved from the document are compared with a sequence made up of two telephone numbers. If the comparison is True, the \<number> element is returned.
```
WITH XMLNAMESPACES (
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ContactTypes' AS act,
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ContactInfo' AS aci)
SELECT AdditionalContactInfo.query('
if (/aci:AdditionalContactInfo//act:telephoneNumber/act:number = ("222-222-2222","112-111-1111"))
then
/aci:AdditionalContactInfo//act:telephoneNumber/act:number
else
()') as Result
FROM Person.Contact
WHERE ContactID=1
```
This is the result:
```
\<act:number
xmlns:act="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ContactTypes">
111-111-1111
\</act:number>
\<act:number
xmlns:act="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ContactTypes">
112-111-1111
\</act:number>
```
## <a name="value-comparison-operators"></a>Value Comparison Operators
Value comparison operators are used to compare atomic values. Note that in queries you can use the general comparison operators instead of the value comparison operators.
The value comparison operators are shown in the following table.
|Operator|Description|
|--------------|-----------------|
|eq|Equal to|
|ne|Not equal to|
|lt|Less than|
|gt|Greater than|
|le|Less than or equal to|
|ge|Greater than or equal to|
If the two values being compared both satisfy the condition of the chosen operator, the expression returns True. Otherwise, it returns False. If one of the values is an empty sequence, the result of the expression is False.
You can use these operators only on singleton atomic values. Therefore, you cannot specify a sequence as one of the operands.
For example, the following query retrieves the \<Picture> elements of a product model where the picture size is "small":
```
SELECT CatalogDescription.query('
declare namespace PD="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription";
for $P in /PD:ProductDescription/PD:Picture[PD:Size eq "small"]
return
$P
') as Result
FROM Production.ProductModel
WHERE ProductModelID=19
```
Note the following from the previous query:
- `declare namespace` defines the namespace prefix that is used later in the query.
- The \<Size> element value is compared with the specified atomic value, "small".
- Note that because the operators work only on atomic values, the **data()** function is implicitly used to retrieve the node value. Therefore, `data($P/PD:Size) eq "small"` returns the same result.
This is the result:
```
\<PD:Picture
xmlns:PD="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription">
\<PD:Angle>front\</PD:Angle>
\<PD:Size>small\</PD:Size>
\<PD:ProductPhotoID>31\</PD:ProductPhotoID>
\</PD:Picture>
```
Note that the rules for type promotion are the same as the ones used for general comparisons. Also, for value comparisons, [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] uses the same casting rules for untyped values that it uses for general comparisons. The rules in the XQuery specification, however, always cast the untyped value to xs:string for value comparisons.
## <a name="node-comparison-operator"></a>Node Comparison Operator
The node comparison operator, **is**, applies only to node types. The result returned indicates whether the two nodes passed as operands represent the same node in the source document. This operator returns True if the two operands represent the same node. Otherwise, it returns False.
The following query checks whether work center 10 is the first one in the manufacturing process of a specific product model.
```
WITH XMLNAMESPACES ('http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelManuInstructions' AS AWMI)
SELECT ProductModelID, Instructions.query('
if ( (//AWMI:root/AWMI:Location[@LocationID=10])[1]
is
(//AWMI:root/AWMI:Location[1])[1] )
then
<Result>equal</Result>
else
<Result>Not-equal</Result>
') as Result
FROM Production.ProductModel
WHERE ProductModelID=7
```
This is the result:
```
ProductModelID Result
-------------- --------------------------
7 <Result>equal</Result>
```
## <a name="node-order-comparison-operators"></a>Node Order Comparison Operators
Node order comparison operators compare pairs of nodes based on their positions in a document.
The comparisons, based on the document order of the nodes, are as follows:
- `<<`: Does **operand 1** precede **operand 2** in document order?
- `>>`: Does **operand 1** follow **operand 2** in document order?
The following query returns True if, in the product catalog description for a specific product, the \<Warranty> element precedes the \<Maintenance> element in document order.
```
WITH XMLNAMESPACES (
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelDescription' AS PD,
'http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/ProductModelWarrAndMain' AS WM)
SELECT CatalogDescription.value('
(/PD:ProductDescription/PD:Features/WM:Warranty)[1] <<
(/PD:ProductDescription/PD:Features/WM:Maintenance)[1]', 'nvarchar(10)') as Result
FROM Production.ProductModel
where ProductModelID=19
```
Note the following from the previous query:
- The **value()** method of the **xml** data type is used in the query.
- The Boolean value of the query is converted to **nvarchar(10)** and returned.
- The query returns True.
## <a name="see-also"></a>See Also
[Type System (XQuery)](../xquery/type-system-xquery.md)
[XQuery Expressions](../xquery/xquery-expressions.md)
| 46.359375 | 461 | 0.706775 | ita_Latn | 0.984138 |
1c4afe9a9876c14ec7a1e08ecaa3fb0096d1a0d5 | 2,418 | md | Markdown | source/.Archive/SUMMARY.md/2020-02-08 18-09-28.md | CharlotteFallices/Swift_Chinese_Documents | b1fcb1e5d57d0df17ebf3001ad664289e1aea024 | [
"Artistic-2.0"
] | 1 | 2020-02-10T02:56:36.000Z | 2020-02-10T02:56:36.000Z | source/.Archive/SUMMARY.md/2020-02-08 18-09-28.md | CharlotteFallices/Swift_Chinese_Documents | b1fcb1e5d57d0df17ebf3001ad664289e1aea024 | [
"Artistic-2.0"
] | null | null | null | source/.Archive/SUMMARY.md/2020-02-08 18-09-28.md | CharlotteFallices/Swift_Chinese_Documents | b1fcb1e5d57d0df17ebf3001ad664289e1aea024 | [
"Artistic-2.0"
] | null | null | null | # 教程
* 欢迎使用 Swift
* [关于 Swift](./01_welcome_to_swift/01_about_swift.html)
* [版本兼容性](./01_welcome_to_swift/02_version_compatibility.html)
* [Swift 初见](./01_welcome_to_swift/03_a_swift_tour.html)
* [Swift 版本历史记录](./04_revision_history/04_revision_history.html)
* Swift 教程
* [基础部分](./02_language_guide/01_The_Basics.html)
* [基本运算符](./02_language_guide/02_Basic_Operators.html)
* [字符串和字符](./02_language_guide/03_Strings_and_Characters.html)
* [集合类型](./02_language_guide/04_Collection_Types.html)
* [控制流](./02_language_guide/05_Control_Flow.html)
* [函数](./02_language_guide/06_Functions.html)
* [闭包](./02_language_guide/07_Closures.html)
* [枚举](./02_language_guide/08_Enumerations.html)
* [类和结构体](./02_language_guide/09_Structures_And_Classes.html)
* [属性](./02_language_guide/10_Properties.html)
* [方法](./02_language_guide/11_Methods.html)
* [下标](./02_language_guide/12_Subscripts.html)
* [继承](./02_language_guide/13_Inheritance.html)
* [构造过程](./02_language_guide/14_Initialization.html)
* [析构过程](./02_language_guide/15_Deinitialization.html)
* [可选链](./02_language_guide/16_Optional_Chaining.html)
* [错误处理](./02_language_guide/17_Error_Handling.html)
* [类型转换](./02_language_guide/18_Type_Casting.html)
* [嵌套类型](./02_language_guide/19_Nested_Types.html)
* [扩展](./02_language_guide/20_Extensions.html)
* [协议](./02_language_guide/21_Protocols.html)
* [泛型](./02_language_guide/22_Generics.html)
* [不透明类型](./02_language_guide/23_Opaque_Types.html)
* [自动引用计数](./02_language_guide/24_Automatic_Reference_Counting.html)
* [内存安全](./02_language_guide/25_Memory_Safety.html)
* [访问控制](./02_language_guide/26_Access_Control.html)
* [高级运算符](./02_language_guide/27_Advanced_Operators.html)
* 语言参考
* [关于语言参考](./03_language_reference/01_About_the_Language_Reference.html)
* [词法结构](./03_language_reference/02_Lexical_Structure.html)
* [类型](./03_language_reference/03_Types.html)
* [表达式](./03_language_reference/04_Expressions.html)
* [语句](./03_language_reference/05_Statements.html)
* [声明](./03_language_reference/06_Declarations.html)
* [特性](./03_language_reference/07_Attributes.html)
* [模式](./03_language_reference/08_Patterns.html)
* [泛型参数](./03_language_reference/09_Generic_Parameters_and_Arguments.html)
* [语法总结](./03_language_reference/10_Summary_of_the_Grammar.html)
* 翻译贡献者
* [贡献者名单](./contributors.html)
| 49.346939 | 77 | 0.748966 | yue_Hant | 0.914931 |
1c4b4f6f1a6276a5d725603e3d030876f0376564 | 1,550 | md | Markdown | desktop-src/extensible-storage-engine/esentsecondaryindexcorruptedexception-constructor.md | nbassett-MSFT/win32 | 13f1bdd126606b889e607594f7e8f7cfe58eb008 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-27T04:48:11.000Z | 2021-11-27T04:48:11.000Z | desktop-src/extensible-storage-engine/esentsecondaryindexcorruptedexception-constructor.md | nbassett-MSFT/win32 | 13f1bdd126606b889e607594f7e8f7cfe58eb008 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/extensible-storage-engine/esentsecondaryindexcorruptedexception-constructor.md | nbassett-MSFT/win32 | 13f1bdd126606b889e607594f7e8f7cfe58eb008 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-01-01T04:19:14.000Z | 2022-01-01T04:19:14.000Z | ---
title: EsentSecondaryIndexCorruptedException constructor
TOCTitle: 'EsentSecondaryIndexCorruptedException constructor '
ms:assetid: M:Microsoft.Isam.Esent.Interop.EsentSecondaryIndexCorruptedException.#ctor
ms:mtpsurl: https://msdn.microsoft.com/en-us/library/microsoft.isam.esent.interop.esentsecondaryindexcorruptedexception.esentsecondaryindexcorruptedexception(v=EXCHG.10)
ms:contentKeyID: 55102678
ms.date: 07/30/2014
ms.topic: reference
f1_keywords:
- Microsoft.Isam.Esent.Interop.EsentSecondaryIndexCorruptedException.EsentSecondaryIndexCorruptedException
dev_langs:
- CSharp
- JScript
- VB
- other
api_name:
- Microsoft.Isam.Esent.Interop.EsentSecondaryIndexCorruptedException..ctor
topic_type:
- apiref
- kbSyntax
api_type:
- Managed
api_location:
- Microsoft.Isam.Esent.Interop.dll
ROBOTS: INDEX,FOLLOW
---
# EsentSecondaryIndexCorruptedException constructor
Initializes a new instance of the EsentSecondaryIndexCorruptedException class.
**Namespace:** [Microsoft.Isam.Esent.Interop](hh596136\(v=exchg.10\).md)
**Assembly:** Microsoft.Isam.Esent.Interop (in Microsoft.Isam.Esent.Interop.dll)
## Syntax
``` vb
'Declaration
Public Sub New
'Usage
Dim instance As New EsentSecondaryIndexCorruptedException()
```
``` csharp
public EsentSecondaryIndexCorruptedException()
```
## See also
#### Reference
[EsentSecondaryIndexCorruptedException class](dn350606\(v=exchg.10\).md)
[EsentSecondaryIndexCorruptedException members](dn350607\(v=exchg.10\).md)
[Microsoft.Isam.Esent.Interop namespace](hh596136\(v=exchg.10\).md)
| 25.833333 | 169 | 0.809677 | yue_Hant | 0.820042 |
1c4b6a40080e4bff4496a18ef4e2c3c907953537 | 5,075 | md | Markdown | Example/Auth/README.md | pmendesTekever/firebase-ios-sdk | edb6a9ecfcbf231dc99f811f5cd51379933bc552 | [
"Apache-2.0"
] | 8 | 2020-08-06T21:46:53.000Z | 2020-12-03T18:30:33.000Z | Example/Auth/README.md | pmendesTekever/firebase-ios-sdk | edb6a9ecfcbf231dc99f811f5cd51379933bc552 | [
"Apache-2.0"
] | 6 | 2021-09-10T06:19:03.000Z | 2022-03-20T22:26:17.000Z | Example/Auth/README.md | pmendesTekever/firebase-ios-sdk | edb6a9ecfcbf231dc99f811f5cd51379933bc552 | [
"Apache-2.0"
] | 1 | 2020-05-08T15:34:22.000Z | 2020-05-08T15:34:22.000Z | ### Running Sample Application
In order to run this application, you'll need to follow the following steps!
#### GoogleService-Info.plist files
You'll need valid `GoogleService-Info.plist` files for those samples. To get your own
`GoogleService-Info.plist` files:
1. Go to the [Firebase Console](https://console.firebase.google.com/)
2. Create a new Firebase project, if you don't already have one
3. For each sample app you want to test, create a new Firebase app with the sample app's bundle
identifier (e.g. `com.google.FirebaseExperimental1.dev`)
4. Download the resulting `GoogleService-Info.plist` and place it in
[Sample/GoogleService-Info.plist](Sample/GoogleService-Info.plist)
#### GoogleService-Info\_multi.plist files
1. Create a second sample app and download its `GoogleService_Info.plist` file. This can be in the
same Firebase project as the one above, or a different one. Use a different app bundle identifier
(e.g. `com.google.FirebaseExperimental2.dev`).
2. Save this plist file as `GoogleService-Info_multi.plist` in
[Sample/GoogleService-Info\_multi.plist](Sample/GoogleService-Info_multi.plist).
This enables testing that FirebaseAuth continues to work after switching the Firebase App in the
runtime.
#### Getting your own Credential files
Please follow the instructions in
[Sample/AuthCredentialsTemplate.h](Sample/AuthCredentialsTemplate.h)
to generate the AuthCredentials.h file.
#### Application.plist file
Generate the `Sample/Application.plist` file from
[Sample/ApplicationTemplate.plist](Sample/ApplicationTemplate.plist) by replacing `$BUNDLE_ID` and
`$REVERSED_CLIENT_ID` with their values from `GoogleService-Info.plist` and
`$REVERSED_CLIENT_MULTI_ID` with its value from `GoogleService-Info_multi.plist`.
This could be done in bash via something like this from within the `Sample` directory:
```bash
$ BUNDLE_ID=`xmllint --xpath "/plist/dict/key[.='BUNDLE_ID']/following-sibling::string[1]/text()" GoogleService-Info.plist`
$ REVERSED_CLIENT_ID=`xmllint --xpath "/plist/dict/key[.='REVERSED_CLIENT_ID']/following-sibling::string[1]/text()" GoogleService-Info.plist`
$ REVERSED_CLIENT_MULTI_ID=`xmllint --xpath "/plist/dict/key[.='REVERSED_CLIENT_ID']/following-sibling::string[1]/text()" GoogleService-Info_multi.plist`
$ sed \
-e 's/\$BUNDLE_ID/'$BUNDLE_ID'/g' \
-e 's/\$REVERSED_CLIENT_ID/'$REVERSED_CLIENT_ID'/g' \
-e 's/\$REVERSED_CLIENT_MULTI_ID/'$REVERSED_CLIENT_MULTI_ID'/g' \
ApplicationTemplate.plist > Application.plist
```
#### Sample.entitlements file
In order to test the "Reset Password In App" feature you will need to create a dynamic link for your
Firebase project in the Dynamic Links section of the Firebase Console. Once the link is created,
please copy the contents of
[Sample/SampleTemplate.entitlements](Sample/SampleTemplate.entitlements)
into a file named `Sample/Sample.entitlements` and replace `$KAPP_LINKS_DOMAIN` with your own
relevant appLinks domain. Your appLinks domains are domains that your app will handle as universal
links, in this particular case you can obtain this domain from the aforementioned Dynamic Links
section of the Firebase Console.
### Running SwiftSample Application
In order to run this application, you'll need to follow the following steps!
#### GoogleService-Info.plist files
You'll need valid `GoogleService-Info.plist` files for those samples. To get your own
`GoogleService-Info.plist` files:
1. Go to the [Firebase Console](https://console.firebase.google.com/)
2. Create a new Firebase project, if you don't already have one
3. For each sample app you want to test, create a new Firebase app with the sample app's bundle
identifier (e.g. `com.google.FirebaseExperimental2.dev`)
4. Download the resulting `GoogleService-Info.plist` and place it in
[SwiftSample/GoogleService-Info.plist](SwiftSample/GoogleService-Info.plist)
#### Info.plist file
Please follow the instructions in
[SwiftSample/InfoTemplate.plist](SwiftSample/InfoTemplate.plist)
to generate the correct Info.plist file.
#### Getting your own Credential files
Please follow the instructions in
[SwiftSample/AuthCredentialsTemplate.swift](SwiftSample/AuthCredentialsTemplate.swift)
to generate the AuthCredentials.swift file.
### Running API tests
In order to run the API tests, you'll need to follow the following steps!
#### Getting your own Credential files
Please follow the instructions in
[ApiTests/AuthCredentialsTemplate.h](ApiTests/AuthCredentialsTemplate.h)
to generate the AuthCredentials.h file.
#### Console
In the Firebase conosle for your test project, you'll need to enable the
following auth providers:
* Email/Password
* Google
* Facebook
* Anonymous
You'll also need to create a user with email
`user+email_existing_user@example.com` and password of `password`.
## Usage
```
$ pod update
$ open Firebase.xcworkspace
```
Then select an Auth scheme and run.
| 40.6 | 153 | 0.785419 | eng_Latn | 0.915243 |
1c4be4db7c19a335be4a34785d11aefab3765451 | 3,487 | md | Markdown | _posts/2021-06-06-basic-cs.md | zpxlffjrm/zpxlffjrm.github.io | 3f5f7052a5a67a2b3cf85b15b251d8911f7b6050 | [
"MIT"
] | null | null | null | _posts/2021-06-06-basic-cs.md | zpxlffjrm/zpxlffjrm.github.io | 3f5f7052a5a67a2b3cf85b15b251d8911f7b6050 | [
"MIT"
] | null | null | null | _posts/2021-06-06-basic-cs.md | zpxlffjrm/zpxlffjrm.github.io | 3f5f7052a5a67a2b3cf85b15b251d8911f7b6050 | [
"MIT"
] | null | null | null | ---
title: "The Internet"
toc: true
toc_sticky: true
categories:
- basic-cs
last_modified_at: 2020-06-6T21:51:00+00:00
---
This time, I'd like to briefly summarize the Internet.
## The Internet
How are we able to connect to the Internet and watch videos?
To get straight to the conclusion: we connect to the Internet and watch videos by using communication protocols that follow various standards, together with several pieces of hardware.
### Switches? Routers? DNS? DHCP?
The hardware devices listed above are only **some** of the equipment used for network communication. Other devices certainly exist; we simply aren't covering them here. A network is by no means built from just these four.
### The OSI 7-layer model
Before briefly summarizing those devices, you first need to know this concept.
The OSI 7-layer model was developed by the International Organization for Standardization (ISO); it describes computer network protocol design and communication divided into layers.
It may look difficult, but it is easy to understand if you think about how a network would have to be built, starting from layer 1.
### Layer 1: the physical layer
Physical signal connection and transmission over LAN cables.
### Layer 2: the data link layer
The layer that transmits units of data called frames.
It communicates with neighboring nodes, detects collisions and errors in the physical signal, retransmits as needed, and controls the flow of data. In short, it is responsible for reliable transmission.
It distinguishes nearby devices by their MAC (Media Access Control) address, which you may have heard of.
Its transmission protocols include connection methods you have probably encountered, such as Ethernet and LAN.
Unusually, it has two sublayers:
1. The MAC (Media Access Control) sublayer.
This sublayer handles the flow control described above and, at the same time, handles multiplexing (a technique for transmitting more than one data stream over a single transmission medium at the same time, by dividing time, frequency, space, and so on; AM and FM radio are examples).
2. The LLC sublayer provides a kind of interface so that layer 3 can use multiplexing.
### Layer 3: the network layer
This layer is easy to understand if you think about connecting several layer-2 networks together.
Networks such as LAN and Ethernet let you talk to the computer right next to you, but when several of these layer-2 networks exist and must in turn be connected to one another, how can a node at the edge of one network learn the address of a node at the edge of another network?
The answer is to use IP addresses in place of MAC addresses.
(MAC addresses are still used at layer 2! Layer 3 simply needs a different addressing scheme.)
Each computer is assigned an address like 192.168.1.1, which you have probably seen before.
This kind of address is called an IPv4 address; it is expressed using four 8-bit numbers.
That means only addresses from 0.0.0.0 to 255.255.255.255 can be used.
One peculiarity is that certain addresses are reserved for specific purposes.
(Addresses of the form 192.168.X.X are for internal networks, such as the home router you commonly use.)
However, for reasons such as the ever-growing scale of IoT, the remaining IPv4 addresses have run short.
> According to Wikipedia, as of January 30, 2012, of IPv4's 2^32 = 4,294,967,296 addresses, 3,410,303,904 were allocated, 588,514,560 were reserved for special use, and 296,148,832 were unallocated.

That is why IPv6 addresses were introduced. Where the old addresses were 32 bits, an IPv6 address is 128 bits, written in hexadecimal and divided into 8 groups of 16 bits each.
Where the old scheme could use only about 4.3 billion addresses, roughly 4.3 billion x 4.3 billion x 4.3 billion x 4.3 billion addresses become possible.
Even so, IPv4 is still widely used.
This layer exchanges packets.
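The address-space figures quoted above are easy to verify yourself: an IPv4 address is 32 bits and an IPv6 address is 128 bits, so the address spaces are 2^32 and 2^128. A quick sketch in Python (added here for illustration):

```python
# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)  # 4294967296 -- the ~4.3 billion addresses mentioned above

# The 2012 Wikipedia snapshot quoted above adds back up to the full space.
allocated, special_use, unallocated = 3_410_303_904, 588_514_560, 296_148_832
print(allocated + special_use + unallocated == ipv4_space)  # True
```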
### Layer 4: the transport layer
So layer 3 completes communication between computers.
But that is only communication between computers; we still cannot tell which programs should be connected to each other.
Layer 4 is what handles connections between programs, that is, between hosts.
This layer communicates by exchanging data segments or datagrams.
Its transmission protocols are TCP and UDP. The difference between the two is whether, when corrupted data is received, you spend a little extra time requesting the correct data again, or you prioritize raw speed and do not request it again.
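The TCP/UDP distinction shows up directly in socket APIs: TCP sockets are stream sockets and UDP sockets are datagram sockets. A minimal sketch using Python's standard socket module (illustration only; no data is actually sent):

```python
import socket

# TCP: connection-oriented; corrupted or lost data is retransmitted (reliable, slower).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP: connectionless; no retransmission (less reliable, faster).
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type == socket.SOCK_STREAM)  # True
print(udp_sock.type == socket.SOCK_DGRAM)   # True

tcp_sock.close()
udp_sock.close()
```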
### Layer 5: the session layer
When using a website, you are sometimes told your session has expired and you have been logged out, forcing you to sign in again.
The layer that maintains these connections, or sessions, is the session layer.
Because it was judged that the application layer (described below) could provide this functionality well enough, in practice this layer is usually merged into layer 7, the application layer.
### Layer 6: the presentation layer
The layer that guarantees hosts agree on the data they use, through data encryption/decryption, encoding, and so on.
It, too, was judged to be something layer 7 could provide, so in practice it is rarely used on its own.
### Layer 7: the application layer
The layer that provides the protocols used by user-written programs and applications.
It provides protocols such as FTP and HTTP.
### Characteristics
This layer diagram does not quite match how things are actually used. For one thing, layers 6 and 5 have been absorbed into layer 7, and in reality layers 3 and 4, and occasionally even layer 2, are combined.
Data is handed down from layer 7 to 6 to ... to 1, being wrapped at each layer in a different kind of packaging. That packaging can only be unwrapped by the same layer on a router or on another computer.
An easy way to understand this is the way a parcel is delivered.
The sender packs the contents into a parcel; only the recipient can see the contents.
The parcel is picked up by a courier and loaded onto a truck; the parcel cannot be inspected until it reaches the terminal hub.
다시 돌아와서 스위치 라우터 DNS DHCP에 대해 설명하자면
스위치는 L2(제 2계층)에서 동작하며 보내야할 곳에 데이터를 전송한다.
MAC주소를 이용하여 전송한곳과 전송할 곳을 확인하고 전송한다.
라우터는 스위치와 비슷하다 다만 L3 계층에서 IP주소를 이용하여 어디로 라우팅을
해야할지 결정한다. (일반적인 공유기의 라우터가 아닌 네트워크 라우터를 말한다)
DNS는 사람이 이해하기 쉬운 도메인주소를 컴퓨터가 이해할수 있는 IP주소로 변환해주는
일종의 시스템이다.
하나의 루트 서버 내부에 .net, .com등을 관장하는 여러 서버들이 존재하며, 해당 서버
들이 요청받은 도메인주소에 해당되는 IP주소를 반환하게 된다.
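The name-to-address translation DNS performs can be observed from any resolver API. A minimal Python sketch (the hostname `localhost` is just an example; resolving a public domain would go through the DNS hierarchy described above):

```python
import socket

def resolve_ipv4(hostname):
    """Return the IPv4 addresses the system resolver finds for hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, (address, port)).
    return sorted({info[4][0] for info in infos})

print(resolve_ipv4("localhost"))  # typically ['127.0.0.1']
```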
DHCP is a protocol for assigning IP addresses to new clients inside a network.
Using a server that implements this protocol, a new client can be dynamically assigned an address on the internal network and managed.
1c4c2d93e4fa529452a9fad5c3cf433f747a7686 | 490 | md | Markdown | docs/basic.md | xrr2016/data-list | 976eada7f28d98c6dee6eb8710920548fb1777a6 | [
"MIT"
] | null | null | null | docs/basic.md | xrr2016/data-list | 976eada7f28d98c6dee6eb8710920548fb1777a6 | [
"MIT"
] | null | null | null | docs/basic.md | xrr2016/data-list | 976eada7f28d98c6dee6eb8710920548fb1777a6 | [
"MIT"
] | null | null | null | 基础用法
```vue
<template>
<div style="height: 400px; overflow: auto">
<data-list ref="dataList" :url="url">
      <!-- Use slot-scope to receive the returned data from the data-list component -->
<ul slot-scope="props">
<li v-for="(item, index) in props.list" :key="index">
{{item.name}}
</li>
</ul>
</data-list>
</div>
</template>
<script>
export default {
data() {
return {
url: 'https://easy-mock.com/mock/5c323f1b2188f1589db6af5f/data-list',
}
}
}
</script>
```
| 18.846154 | 75 | 0.555102 | kor_Hang | 0.113937 |
1c4c710c9ca17f29deac61c6a67fd9f592433e0f | 12,061 | md | Markdown | articles/machine-learning/studio/manage-new-webservice.md | changeworld/azure-docs.tr-tr | a6c8b9b00fe259a254abfb8f11ade124cd233fcb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/studio/manage-new-webservice.md | changeworld/azure-docs.tr-tr | a6c8b9b00fe259a254abfb8f11ade124cd233fcb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/studio/manage-new-webservice.md | changeworld/azure-docs.tr-tr | a6c8b9b00fe259a254abfb8f11ade124cd233fcb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Web hizmetlerini yönetme
titleSuffix: ML Studio (classic) - Azure
description: Microsoft Azure Machine Learning Web Hizmetleri portalını kullanarak Machine Learning New ve Classic Web hizmetlerinizi yönetin. Klasik Web hizmetleri ve Yeni Web hizmetleri farklı temel teknolojilere dayandığından, her biri için biraz farklı yönetim yeteneklerine sahipsiniz.
services: machine-learning
ms.service: machine-learning
ms.subservice: studio
ms.topic: conceptual
author: likebupt
ms.author: keli19
ms.custom: previous-ms.author=yahajiza, previous-author=YasinMSFT
ms.date: 02/28/2017
ms.openlocfilehash: 2277aa3de5955efe5a3e4cb938fa557352f89006
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/28/2020
ms.locfileid: "79217969"
---
# <a name="manage-a-web-service-using-the-azure-machine-learning-studio-classic-web-services-portal"></a>Manage a web service using the Azure Machine Learning Studio (classic) Web Services portal
[!INCLUDE [Notebook deprecation notice](../../../includes/aml-studio-notebook-notice.md)]
You can manage your Machine Learning New and Classic web services using the Microsoft Azure Machine Learning Web Services portal. Because Classic web services and New web services are based on different underlying technologies, you have slightly different management capabilities for each of them.
In the Machine Learning Web Services portal you can:
* Monitor how the web service is being used.
* Configure the description, update the web service's keys (New only), update your storage account key (New only), enable logging, and enable or disable sample data.
* Delete the web service.
* Create, delete, or update billing plans (New only).
* Add and delete endpoints (Classic only).
>[!NOTE]
>You can also manage Classic web services on the **Web services** tab in [Machine Learning Studio (classic)](https://studio.azureml.net).
## <a name="permissions-to-manage-new-resources-manager-based-web-services"></a>Permissions to manage New Resources Manager-based web services
New web services are deployed as Azure resources. As such, you must have the correct permissions to deploy and manage New web services. To deploy or manage New web services, you must be assigned a contributor or administrator role on the subscription to which the web service is deployed. If you invite another user to a machine learning workspace, you must assign them to a contributor or administrator role on the subscription before they can deploy or manage web services.
If the user does not have the correct permissions to access resources in the Azure Machine Learning Web Services portal, they receive the following error when trying to deploy a web service:
*Web Service deployment failed. This account does not have sufficient access to the Azure subscription that contains the Workspace. To deploy a Web Service to Azure, the same account must be invited to the Workspace and granted access to the Azure subscription that contains the Workspace.*
For more information on creating a workspace, see [Create and share an Azure Machine Learning Studio (classic) workspace](create-workspace.md).
For more information on setting access permissions, see [Manage access using RBAC and the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## <a name="manage-new-web-services"></a>Manage New web services
To manage your New web services:
1. Sign in to the [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portal using your Microsoft Azure account - use the account that is associated with the Azure subscription.
2. On the menu, click **Web Services**.
This displays a list of deployed web services for your subscription.
To manage a web service, click **Web Services**. From the Web Services page you can:
* Click the web service to manage it.
* Click the Billing Plan for the web service to update it.
* Delete a web service.
* Copy a web service and deploy it to another region.
When you click a web service, the web service Quickstart page opens. The web service Quickstart page has two menu options that let you manage your web service:
* **DASHBOARD** - Lets you view web service usage.
* **CONFIGURE** - Lets you add descriptive text, update the key for the storage account associated with the web service, and enable or disable sample data.
### <a name="monitoring-how-the-web-service-is-being-used"></a>Monitoring how the web service is being used
Click the **DASHBOARD** tab.
From the dashboard, you can view the overall usage of your web service over a period of time. You can select the period to view from the Period dropdown menu at the upper right of the usage charts. The dashboard shows the following information:
* **Requests Over Time** displays a step graph of the number of requests over the selected time period. It can help you identify whether you are experiencing spikes in usage.
* **Request-Response Requests** displays the total number of Request-Response calls the service received over the selected time period, and how many of them failed.
* **Ortalama İstek-Yanıt İşlem Süresi,** alınan istekleri yürütmek için gereken sürenin ortalamasını görüntüler.
* **Toplu İşlemler,** hizmetin seçili zaman dilimi içinde aldığı toplam Toplu İşlem sayısını ve bunların kaç tanesinin başarısız olduğunu görüntüler.
* **Ortalama İş Gecikmesi,** alınan istekleri yürütmek için gereken sürenin ortalamasını görüntüler.
* **Hatalar,** web hizmetine yapılan aramalarda oluşan toplam hata sayısını görüntüler.
* **Hizmet Maliyetleri,** hizmetle ilişkili faturalandırma planının ücretlerini görüntüler.
### <a name="configuring-the-web-service"></a>Web hizmetini yapılandırma
**CONFIGURE** menüsü seçeneğini tıklatın.
Aşağıdaki özellikleri güncelleştirebilirsiniz:
* **Açıklama,** Web hizmeti için bir açıklama girmenize olanak tanır.
* **Başlık,** Web hizmeti için bir başlık girmenizi sağlar
* **Tuşlar** birincil ve ikincil API tuşlarınızı döndürmenizi sağlar.
* **Depolama hesabı anahtarı,** Web hizmeti değişiklikleriyle ilişkili depolama hesabının anahtarını güncelleştirmenize olanak tanır.
* **Örnek verileri etkinleştir,** İstek-Yanıt hizmetini sınamak için kullanabileceğiniz örnek verileri sağlamanıza olanak tanır. Machine Learning Studio'da (klasik) web hizmetini oluşturduysanız, örnek veriler modelinizi eğitmek için kullandığınız verilerden alınır. Hizmeti programlı bir şekilde oluşturduysanız, veriler JSON paketinin bir parçası olarak sağladığınız örnek verilerden alınır.
### <a name="managing-billing-plans"></a>Faturalandırma planlarını yönetme
Web hizmetleri Quickstart sayfasından **Planlar** menüsü seçeneğini tıklatın. Ayrıca, bu planı yönetmek için belirli Web hizmetiyle ilişkili planı da tıklatabilirsiniz.
* **Yeni** yeni bir plan oluşturmanıza olanak sağlar.
* **Plan Ekle/Kaldır örneği,** kapasite eklemek için varolan bir planı "ölçeklendirmenize" olanak tanır.
* **Yükseltme/Düşürme,** kapasite eklemek için varolan bir planı "ölçeklendirmenize" olanak tanır.
* **Silme,** bir planı silmenizi sağlar.
Panosunu görüntülemek için bir planı tıklatın. Pano, belirli bir süre boyunca anlık görüntü veya plan kullanımı sağlar. Görüntülenebilmek için süreyi seçmek için, panonun sağ üst kısmındaki **Dönem** açılır görünümünü tıklatın.
Plan panosu aşağıdaki bilgileri sağlar:
* **Plan Açıklaması,** planla ilişkili maliyetler ve kapasite yle ilgili bilgileri görüntüler.
* **Plan Kullanımı,** plana karşı ücretlendirilen hareket ve işlem saatlerinin sayısını görüntüler.
* **Web Hizmetleri,** bu planı kullanan Web hizmetlerinin sayısını görüntüler.
* **Aramalara Göre En İyi Web Hizmeti,** plana karşı ücretlendirilen çağrılar yapan en iyi dört Web hizmetini görüntüler.
* **Compute Hrs tarafından En İyi Web Hizmetleri,** plana karşı ücretlendirilen işlem kaynaklarını kullanan en iyi dört Web hizmetlerini görüntüler.
## <a name="manage-classic-web-services"></a>Klasik Web Hizmetlerini Yönetin
> [!NOTE]
> Bu bölümdeki yordamlar, Azure Machine Learning Web Services portalı üzerinden Klasik web hizmetlerinin yönetilmesiyle ilgilidir. Machine Learning Studio (klasik) ve Azure portalı üzerinden Klasik Web hizmetlerini yönetme hakkında daha fazla bilgi için [bkz.](manage-workspace.md)
>
>
Klasik Web hizmetlerinizi yönetmek için:
1. Microsoft Azure hesabınızı kullanarak [Microsoft Azure Machine Learning Web Services](https://services.azureml.net/quickstart) portalında oturum açın - Azure aboneliğiyle ilişkili hesabı kullanın.
2. Menüde Klasik **Web Hizmetleri'ni**tıklatın.
Klasik Web Hizmeti'ni yönetmek için **Klasik Web Hizmetleri'ni**tıklatın. Klasik Web Hizmetleri sayfasından şunları yapabilirsiniz:
* İlişkili uç noktaları görüntülemek için web hizmetini tıklatın.
* Bir web hizmetini silin.
Klasik Web hizmetini yönettiğinizde, uç noktaların her birini ayrı ayrı yönetirsiniz. Web Hizmetleri sayfasında bir web hizmetini tıklattığınızda, hizmetle ilişkili uç noktaların listesi açılır.
Klasik Web Hizmeti bitiş noktası sayfasında, hizmete uç noktaları ekleyebilir ve silebilirsiniz. Uç noktaları ekleme hakkında daha fazla bilgi için [bkz.](create-endpoint.md)
Web hizmeti Quickstart sayfasını açmak için uç noktalardan birini tıklatın. Quickstart sayfasında, web hizmetinizi yönetmenize olanak tanıyan iki menü seçeneği vardır:
* **PANO** - Web hizmeti kullanımını görüntülemenizi sağlar.
* **CONFIGURE** - Açıklayıcı metin eklemenize, hata günlüğe kaydetmeyi açmanızı, Web hizmetiyle ilişkili depolama hesabının anahtarını güncelleştirmenizi ve örnek verileri etkinleştirmenizi ve devre dışı etmenizi sağlar.
### <a name="monitoring-how-the-web-service-is-being-used"></a>Web hizmetinin nasıl kullanıldığını izleme
**PANO** sekmesini tıklatın.
Panodan, web hizmetinizin genel kullanımını belirli bir süre içinde görüntüleyebilirsiniz. Kullanım grafiklerinin sağ üst kısmındaki Dönem açılır menüsünden görüntülenebilen dönemi seçebilirsiniz. Pano aşağıdaki bilgileri gösterir:
* **Zaman Daki İstekler,** seçili zaman dilimindeki istek sayısının adım grafiğini görüntüler. Kullanımda ani artışlar yaşanıp yaşamadığınızı belirlemenize yardımcı olabilir.
* **İstek-Yanıt İstekleri,** hizmetin seçili zaman dilimi içinde aldığı toplam İstek-Yanıt çağrısı sayısını ve bunların kaçtanesinin başarısız olduğunu görüntüler.
* **Ortalama İstek-Yanıt İşlem Süresi,** alınan istekleri yürütmek için gereken sürenin ortalamasını görüntüler.
* **Toplu İşlemler,** hizmetin seçili zaman dilimi içinde aldığı toplam Toplu İşlem sayısını ve bunların kaç tanesinin başarısız olduğunu görüntüler.
* **Ortalama İş Gecikmesi,** alınan istekleri yürütmek için gereken sürenin ortalamasını görüntüler.
* **Hatalar,** web hizmetine yapılan aramalarda oluşan toplam hata sayısını görüntüler.
* **Hizmet Maliyetleri,** hizmetle ilişkili faturalandırma planının ücretlerini görüntüler.
### <a name="configuring-the-web-service"></a>Web hizmetini yapılandırma
**CONFIGURE** menüsü seçeneğini tıklatın.
Aşağıdaki özellikleri güncelleştirebilirsiniz:
* **Açıklama,** Web hizmeti için bir açıklama girmenize olanak tanır. Açıklama gerekli bir alandır.
* **Günlüğe kaydetme,** bitiş noktasında hata günlüğe kaydetmeyi etkinleştirmenize veya devre dışı betmenizi sağlar. Günlük İşlemi hakkında daha [logging for Machine Learning web services](web-services-logging.md)fazla bilgi için bkz.
* **Örnek verileri etkinleştir,** İstek-Yanıt hizmetini sınamak için kullanabileceğiniz örnek verileri sağlamanıza olanak tanır. Machine Learning Studio'da (klasik) web hizmetini oluşturduysanız, örnek veriler modelinizi eğitmek için kullandığınız verilerden alınır. Hizmeti programlı bir şekilde oluşturduysanız, veriler JSON paketinin bir parçası olarak sağladığınız örnek verilerden alınır.
# MEGBCI2020
We release a 306-channel MEG-BCI dataset recorded at a 1 kHz sampling frequency during four mental imagery tasks (hand imagery, feet imagery, subtraction imagery, and word-generation imagery). The dataset contains two sessions of MEG recordings performed on separate days from 17 healthy participants using a typical BCI imagery paradigm. To our knowledge, it is the only publicly available MEG imagery BCI dataset. The scientific community can use it to develop novel pattern-recognition and machine learning methods for detecting brain activity related to motor imagery (MI) and cognitive imagery (CI) tasks from MEG signals.
Dataset and Article Links:
```
Dataset: https://doi.org/10.6084/m9.figshare.c.5101544
Article: https://doi.org/10.1038/s41597-021-00899-7
```
We have provided the MEG BCI dataset in two different file formats:
1. Brain Imaging Data Structure **(BIDS)**. To read more [click](https://bids.neuroimaging.io/index.html) and under BIDS format the raw data is avialable in Functional Image File Format **(.fif)** files. To read more [click](https://www.dropbox.com/s/q58whpso2jt9tx0/Fiff.pdf?dl=0)
2. MAT-file is the data file format MATLAB **(.mat)**. To read more [click](https://in.mathworks.com/help/matlab/import_export/mat-file-versions.html)
In this repository, we have provided Matlab scripts for the following tasks:
1. **Step0_script_fif2bids.m**: Script to convert MEG data from the Elekta MEG format (.fif) to BIDS format.
2. **Step1_script_bids2mat.m**: Script to convert MEG data from BIDS format to .MAT format.
3. **Step2_script_mat2features.m**: Script to extract the motor and cognitive imagery features using the common spatial patterns (CSP) algorithm.
4. **Step3_script_ClassifyFeatures.m**: Script for single-trial MEG classification to produce the baseline results.
**Note:** We have used the [FieldTrip](https://www.fieldtriptoolbox.org/) toolbox for basic pre-processing of the MEG BCI dataset. As a dependency, we recommend downloading FieldTrip and adding it to your MATLAB path if you want to reproduce our results.
***
#### References:
Please cite our work:
```
@article{Rathee2021,
title = {{A magnetoencephalography dataset for motor and cognitive imagery-based brain-computer interface}},
author = {Rathee, Dheeraj and Raza, Haider and Roy, Sujit and Prasad, Girijesh},
doi = {10.1038/s41597-021-00899-7},
issn = {2052-4463},
journal = {Scientific Data},
number = {1},
pages = {120},
url = {https://doi.org/10.1038/s41597-021-00899-7},
volume = {8},
year = {2021}
}
```
# Storage Selector
Storage Selector is used to select from a list of possible Block Device implementations and FileSystem implementations.
## Storage options
The following storage options are supported:
- `SD_CARD` - Uses SPI (defaults to onboard card slot if available).
- `SPI_FLASH` - Must be a standard Serial NOR Flash device (defaults to onboard flash if available).
- `DATA_FLASH` - Supports the SPI interface for the AT45DB device family.
- `INTERNAL_FLASH` - Uses internal flash of device.
- `HEAP` - RAM backed device, most useful for debugging.
Most of these options have additional configuration options. They are designed to have reasonable defaults, but you are encouraged to override these values to suit your needs. See the corresponding driver for more information.
## Selecting the storage
In your application's `mbed_app.json`, add the following lines:
```json
{
"target_overrides": {
"K64F": {
"storage-selector.storage": "SPI_FLASH"
}
}
}
```
Where `K64F` should be replaced by your target and `SPI_FLASH` should be replaced by the storage option.
## Using the storage
In your application, instantiate the Block Device object like this:
```
#include "storage-selector/storage-selector.h"
BlockDevice* bd = storage_selector();
```
## Adding new storage options
The following must be true for all new storage options:
- The driver must compile **even when not in use**.
- This means if the driver requires config options, you must provide reasonable defaults.
- The driver must not add to the ROM/RAM usage when not in use.
- The driver must be wrapped in `#ifdef DEVICE_*` guards for the Mbed HAL APIs it uses to ensure it compiles.
- An example of this can be seen in the [sd-driver](https://github.com/ARMmbed/sd-driver/blob/master/SDBlockDevice.h#L26).
## Filesystem options
The following filesystem options are supported:
- `FAT` - FAT filesystem.
- `LITTLE` - Experimental fail-safe embedded filesystem.
Most filesystems have additional configuration options. They are designed to have reasonable defaults, but you are encouraged to override these values to suit your needs. See the corresponding filesystem for more information.
## Selecting the filesystem
In your application's `mbed_app.json`, add the following lines:
```json
{
"target_overrides": {
"NUCLEO_F429ZI": {
"storage-selector.filesystem": "FAT",
"storage-selector.mount-point": "\"sd\""
}
}
}
```
Where `NUCLEO_F429ZI` should be replaced by your target, `FAT` should be replaced by the filesystem option, and `mount-point` should be replaced with the filesystem mounting point. Note the escaped double quotes around the mounting point. You only need to configure a filesystem if you intend to use one.
## Multiple filesystems/partitions
You need to configure an additional config parameter to use multiple filesystems/partitions:
```
{
"target_overrides": {
"NUCLEO_F429ZI": {
"storage-selector.filesystem-instances": 2
}
}
}
```
The following is an example of using the `MBRBlockDevice` to use multiple partitions.
```c++
// Get the block device
BlockDevice *bd = storage_selector();
bd->init();
// Create and initialize partitions
MBRBlockDevice::partition(bd , 1, 0x83, 0, 512*1024);
MBRBlockDevice::partition(bd , 2, 0x83, 512*1024, 1024*1024);
MBRBlockDevice part1(bd, 1);
MBRBlockDevice part2(bd, 2);
// Mount the filesystems
FileSystem *fs1 = filesystem_selector("fs1", &part1, 1);
FileSystem *fs2 = filesystem_selector("fs2", &part2, 2);
```
## Using the filesystem
In your application, you can instantiate the filesystem like this:
```
FileSystem* fs = filesystem_selector();
```
This will automatically instantiate the Block Device selected and pass it to the selected filesystem and mount it at the configured location. Alternatively, a selected filesystem can be instantiated manually:
```
BlockDevice* sd = storage_selector();
FileSystem* fs = filesystem_selector("sd", sd);
```
## Adding new filesystem options
The following must be true for all new filesystem options:
- The filesystem must compile **even when not in use**.
- This means if the filesystem requires config options, you must provide reasonable defaults.
- The filesystem must not add to the ROM/RAM usage when not in use.
parse-dburi
==========
Parse and stringify a DB URI.
Installation
------------
```
npm install parse-dburi
```
Usage
-----
```js
var dbUri = require('parse-dburi');
// parse a database uri
dbUri.parse('mysql://username:password@example.com:12345/db_name');
/*
{ uri: 'mysql://username:password@example.com:12345/db_name',
fullUri: 'mysql://username:password@example.com:12345/db_name',
protocol: 'mysql',
host: 'example.com',
port: 12345,
username: 'username',
password: 'password',
database: 'db_name' }
*/
// fill an incomplete uri with defaults
var defaults = {
protocol: 'mysql',
username: 'username',
password: 'password',
port: 12345
};
dbUri.parse('example.com/db_name', defaults);
/*
{ uri: 'example.com/db_name',
fullUri: 'mysql://username:password@example.com:12345/db_name',
protocol: 'mysql',
host: 'example.com',
port: 12345,
username: 'username',
password: 'password',
database: 'db_name' }
*/
// fill an incomplete uri with defaults
dbUri.resolve('example.com/db_name', defaults);
// 'mysql://username:password@example.com:12345/db_name'
// stringify a db connection object
dbUri.stringify({
protocol: 'mysql',
host: 'example.com',
port: 12345,
username: 'username',
password: 'password',
database: 'db_name'
});
// 'mysql://username:password@example.com:12345/db_name'
```
License
-------
MIT license - http://www.opensource.org/licenses/mit-license.php
# Change homes for new uid after setting sssd
```
ls -l /home | awk '{print $9}' | xargs -I {} chown -R {}:{} /home/{}
```
---
jupyter:
jupytext:
formats: ipynb,md
text_representation:
extension: .md
format_name: markdown
format_version: '1.3'
jupytext_version: 1.10.3
kernelspec:
display_name: Python 3
language: python
name: python3
---
# Single source localization
```python
import sys
sys.path.insert(0, "../")
```
```python
from gm_simulator import Source, Detector, World
import numpy as np
import matplotlib.pyplot as plt
from typing import List
import seaborn as sns
```
```python
# Construct world
world = World()
world.add_source(Source(loc=[5, 8, 0], intensity=1))
world.add_source(Source(loc=[-7, -2, 0], intensity=2.3))
ax = world.visualize_world()
```
```python
# Set detectors
div_num = 10
x_lin = np.linspace(-10, 10, div_num)
y_lin = np.linspace(10, -10, div_num)
x_grid, y_grid = np.meshgrid(x_lin, y_lin)
# Constant height
z = 1.5
detectors = []
for x, y in zip(x_grid.flatten(), y_grid.flatten()):
detectors.append(Detector(loc=[x, y, z]))
cnts = world.get_measuments(detectors)
world.visualize_world(detectors)
print(f"max_count: {cnts.max()}")
viz = cnts.reshape(div_num, div_num)
plt.figure()
plt.imshow(viz, vmin=0, vmax=np.percentile(cnts, 99))
```
<!-- #region -->
# Localization
Radiation localization assuming only one source.
State: $\mathbf{s} =(\mathbf{x}_{0} ,q_{0} )\in \mathbb{R}^{4}$, where $q_{0}$ is the intensity of the radiation source and $\mathbf{x}_{0} =[x_{0} ,y_{0} ,z_{0} ]^{\top }$ is its location.

Measurement: $y_{i} \in \mathbb{N}$ is the count detected by detector $i$ located at $\mathbf{y}_{i} \in \mathbb{R}^{3}$.

The probability of $y_{i}$ is formulated as follows:

\begin{gather*}
y_{i} \sim \mathrm{Pois}( \lambda _{i}) =\frac{\lambda _{i}^{y_{i}} e^{-\lambda _{i}}}{y_{i} !} ,\\
\lambda _{i} =\Gamma \frac{q_{0}}{||\mathbf{x}_{0} -\mathbf{y}_{i} ||^{2}} ,
\end{gather*}

where $\Gamma$ is a detector-dependent factor (measurement duration times sensitivity).

E.g., the likelihood after observing three measurements $\mathcal{Y} =\{y_{i} :0\leqq i\leqq 2\}$ is

\begin{gather*}
\mathrm{P}(\mathcal{Y} |\mathbf{s}) =\prod _{i=0}^{|\mathcal{Y}|-1}\mathrm{Pois}( \lambda _{i}) ,\\
\log\mathrm{P}(\mathcal{Y} |\mathbf{s}) =\sum _{i=0}^{|\mathcal{Y}|-1}( y_{i}\log \lambda _{i} -\lambda _{i} -\log y_{i}!) .
\end{gather*}
<!-- #endregion -->
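As a quick numerical sanity check of the measurement model above (a standalone sketch using only the standard library, independent of this notebook's `World`/`Detector` objects): for a single count $y$, the Poisson log-likelihood $y\log\lambda-\lambda-\log y!$ is maximized at $\lambda=y$, and the detector factor $\Gamma$ only rescales $\lambda$.

```python
# Standalone sanity check (stdlib only, independent of the notebook pipeline):
# for a single count y, l(lmd) = y*log(lmd) - lmd - log(y!) peaks at lmd = y.
import math


def pois_loglik(y: int, lmd: float) -> float:
    # Log of the Poisson pmf; math.lgamma(y + 1) == log(y!)
    return y * math.log(lmd) - lmd - math.lgamma(y + 1)


y = 7
grid = [0.1 * k for k in range(1, 201)]  # lambda values in (0, 20]
best = max(grid, key=lambda lmd: pois_loglik(y, lmd))
print(best)  # the maximizer is lambda = y = 7
```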
```python
def get_pois_lambda(detector: Detector, loc: List[float], q: float) -> float:
"""
Get lambda for Poisson distribution
Parameters
----------
detector: Detector
Detector
loc: List[float]
Location of the radiation source such that [x, y, z] [m].
q: float
Intensity of the radiation source [MBq]. (default is 1.0)
Returns
-------
lambda: float
lambda for Poisson
"""
# Factor
factor = detector.duration * detector.factor
# Locations
detec_loc = np.array(detector.loc)
src_loc = np.array(loc)
# distance
dist = np.linalg.norm(src_loc - detec_loc)
lmd = factor * q / dist / dist
return lmd
```
```python
def get_loglikelihood(y: int, lmd: float) -> float:
"""
Get log likelihood.
Parameters
----------
y: int
Measurement count
lmd: float
lambda in Poisson distribution
Returns
-------
logli: float
log likelihood
"""
    # y*log(lmd) - lmd - log(y!)

logli = y * np.log(lmd) - lmd - np.log(np.arange(1, y + 1)).sum()
return logli
```
```python
def sum_loglikelihood(
cnts: List[int], detectors: List[Detector], target_s: List[float]
) -> float:
"""
Sum loglikelihood from observation.
Parameters
----------
cnts: List[int]
Counts from multiple measurements
detectors: List[Detector]
List of detectors.
target_s:List[float]
Target state [x, y, z, p]
Returns
-------
logli: float
Sum of log likelihood from multiple measurements
"""
sum_logli = []
for i, y in enumerate(cnts):
lmd = get_pois_lambda(detectors[i], target_s[:3], target_s[3])
logli = get_loglikelihood(y, lmd)
sum_logli.append(logli)
return sum(sum_logli)
```
```python
from math import factorial
p = lambda k, l: l ** k * np.exp(-l) / factorial(k)
```
```python
l = 3.0
ret = []
for k in range(15):
ret.append(p(k, l))
plt.plot(ret)
print(sum(ret))
```
## Construct world for example
```python
# Construct world
world = World()
world.add_source(Source(loc=[1, 2.5, 0], intensity=2))
world.visualize_world(figsize=(5, 5))
```
```python
# Set detectors (locations are known)
detector_locations = [
[-1.0, 0, 0],
[0, 0, 0],
[1.0, 0, 0],
[-1.0, 2, 0],
[0, 2, 0],
[1.0, 2, 0],
]
detectors = []
for loc in detector_locations:
detectors.append(Detector(loc=loc))
cnts = world.get_measuments(detectors)
print(f"max_count: {cnts.max()}")
locs = np.array(detector_locations)
fig = plt.figure()
ax = fig.add_subplot(111)
sc = ax.scatter(locs[:, 0], locs[:, 1], c=cnts, cmap=plt.cm.jet)
_ = fig.colorbar(sc, orientation="horizontal")
ax.set_aspect("equal")
```
```python
world.visualize_world(detectors, figsize=(6, 6), plotsize=2)
```
```python
assert len(world.sources) == 1, "should be single source"
# Groundtruth
gt_loc = world.sources[0].loc
gt_q = world.sources[0].intensity
gt_s = list(gt_loc + [gt_q])
print(f"groundtruth state: {gt_s}")
```
```python
print("Check if the groundtruth gives maximum log likelihood")
diff = 0.1
for i in range(-1, 3):
# Copy to keep original value
target_s = list(gt_s)
if i < 0:
print("Groundtruth:", target_s)
else:
# Add perturbation
target_s[i] += diff
print("Add perturbation:", target_s)
sum_logli = sum_loglikelihood(cnts, detectors, target_s)
print(f" Likelihood: {sum_logli:.3f}")
```
```python
def sample(mu: List[float], std: List[float], num: int = 1) -> np.ndarray:
"""
Sample location from Normal distribution(mu[:3], std[:3]**2)
and intensity from Exponential distribution(mu[3])
Parameters
----------
mu:float
std:float
num:int
Returns
-------
sample_s: ndarray
Sampled states whose shape is (num, 4)
"""
assert len(mu) == 4, "len(mu) should be 4"
assert len(std) == 3, "len(std) should be 3"
sample_loc = np.random.normal(mu[:3], std[:3], (num, 3))
sample_q = np.random.exponential(mu[3], (num, 1))
sample_s = np.hstack([sample_loc, sample_q])
return sample_s
```
## Cross-entropy method
Reference: https://youtu.be/mJlAfKc4990?t=4296
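Before applying it to the radiation problem, here is a minimal cross-entropy method sketch on a toy 1-D objective (standard library only; the objective `f`, sample size, elite fraction, and iteration count are illustrative choices, not values from this notebook): sample from a Gaussian, keep the top-scoring "elite" samples, and refit the Gaussian to the elite.

```python
# Toy cross-entropy method demo (stdlib only; f, sample size, elite fraction,
# and iteration count are illustrative assumptions). Maximize f(x) = -(x-3)^2.
import math
import random

random.seed(0)


def f(x: float) -> float:
    return -(x - 3.0) ** 2


mu, sigma = 0.0, 5.0  # initial sampling distribution
for _ in range(30):
    samples = [random.gauss(mu, sigma) for _ in range(200)]
    elite = sorted(samples, key=f, reverse=True)[:20]  # keep top 10%
    mu = sum(elite) / len(elite)
    var = sum((x - mu) ** 2 for x in elite) / len(elite)
    sigma = max(math.sqrt(var), 1e-3)  # keep a small exploration floor

print(mu)  # converges near the optimum x = 3
```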
```python
world
```
```python
init_mu = [0, 0, 0, 0.1]
std = [0.5, 0.5, 0] # std[2]=0 so that z=0
eval_num = 100
top_n_percentile = 90
# Initial sample
mu = sample(init_mu, std).squeeze()
# Save current max
curr_max = -np.inf
curr_max_mu = None
for i in range(1000):
# Sample
sample_s = sample(mu, std, eval_num)
# Calculate loglikehood
logli_hist = []
for s in sample_s:
logli = sum_loglikelihood(cnts, detectors, s)
logli_hist.append(logli)
# Decide threshold
logli_hist = np.array(logli_hist)
thresh = np.percentile(logli_hist, top_n_percentile)
# Calculate next mu
mu = sample_s[logli_hist > thresh].mean(axis=0)
# Update max mu
max_idx = np.argmax(logli_hist)
if logli_hist[max_idx] > curr_max:
curr_max = logli_hist[max_idx]
curr_max_mu = sample_s[max_idx]
if not i % 100:
print(f"[{i}] Max: {curr_max:.3f}, Param:{curr_max_mu}")
print("Estimation:", curr_max_mu)
print("Groundtruth:", gt_s)
```
## Particle filter
### Practice for particle filter
Estimate mu and std for Normal distribution
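The resampling step used later in this notebook can be sketched on its own (standard library only; the weights here are made-up values, not measurements): draw N particle indices with probability proportional to the normalized weights, so high-weight particles are duplicated and low-weight ones die out.

```python
# Standalone sketch of multinomial resampling (stdlib only; the weights are
# illustrative made-up values, assumed already normalized).
import random

random.seed(0)

weights = [0.1, 0.2, 0.3, 0.4]
N = 1000
indices = random.choices(range(len(weights)), weights=weights, k=N)

counts = [indices.count(i) for i in range(len(weights))]
print(counts)  # roughly proportional to the weights
```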
```python
# Visualize groundtruth
gt_mu = -10.0
gt_scale = 5.5
pts = np.random.normal(gt_mu, gt_scale, size=1000)
sns.histplot(pts)
```
```python
def g(x):
"""Observation"""
y = np.random.normal(x[0], np.exp(x[1]))
return y
def loglikelihood(ys, x):
"""
Log likelihood function.
P(mu, sigma | {y0,...,yn})
"""
n = len(ys)
mu = x[0]
sq_std = np.exp(x[1]) ** 2 # variance
# Calculate log likelihood
sq_diff = (np.array(ys) - mu) ** 2
logli = (
-0.5 * n * np.log(2 * np.pi)
- 0.5 * n * np.log(sq_std)
- 0.5 / sq_std * sq_diff.sum()
)
return logli
def importance_sampling(weight, ys, x):
weight_updated = weight * np.exp(loglikelihood(ys, x))
return weight_updated
```
```python
# State
# x[0]: loc
# x[1]: log(std) so that 0<std
gt_x = [gt_mu, np.log(gt_scale)]
N_m = 100 # Number of measurement
N_p = 1000 # Number of particle
ys = [g(gt_x) for _ in range(N_m)] # Measurement
xs = np.random.normal(0, 5, (N_p, 2)) # Initial particle
ws = np.full(shape=(N_p), fill_value=1.0)
```
```python
fig, ax = plt.subplots()
ax.plot(gt_mu, gt_scale, "rd")
print(f"Groundtruth: loc:{gt_mu:.3f}, scale:{gt_scale:.3f}")
for epoch in range(100):
# Update
w_updated = []
for i in range(N_p):
w_updated.append(importance_sampling(ws[i], ys, xs[i]))
# Normalize weight
sum_w = sum(w_updated)
w_updated = [w / sum_w for w in w_updated]
# Resampling
new_xs = []
resamples = np.random.multinomial(N_p, w_updated)
for i, n in enumerate(resamples):
for _ in range(n):
# Add noise because no transition model
noise = np.random.normal(loc=0.0, scale=0.5, size=(2))
new_xs.append(xs[i] + noise)
xs = np.array(new_xs)
ws = np.full(shape=(N_p), fill_value=1.0)
# Estimation
est_x = xs.mean(axis=0)
est_loc = est_x[0]
est_scale = np.exp(est_x[1])
if not epoch % 5:
print(f"Pred: loc:{est_loc:.3f}, scale:{est_scale:.3f}")
ax.plot(est_loc, est_scale, "bd")
```
### Single source localization
```python
# # Try different detector locations
# detector_locations = [
# [-1.0, 1, 0],
# [0, -0.5, 0],
# [1, 1, 0],
# [0, 2, 0],
# ]
# detectors = []
# for loc in detector_locations:
# detectors.append(Detector(loc=loc))
# cnts = world.get_measuments(detectors)
# cnts
```
```python
# Make sure if world is set correctly
assert len(world.sources) == 1, "should be single source"
# Groundtruth
gt_loc = world.sources[0].loc
gt_q = world.sources[0].intensity
gt_s = list(gt_loc + [gt_q])
print(f"groundtruth state: {gt_s}")
world.visualize_world(detectors, figsize=(5, 5), plotsize=2)
```
```python
N_p = 1000 # Number of particle
# Initial particle
is_uniform = True
if is_uniform:
# Uniform
xs_loc = np.random.uniform(-5, 5, size=(N_p, 3))
xs_loc[:, 2] = 0
xs_q = np.random.uniform(0.1, 10, size=(N_p, 1))
xs = np.hstack([xs_loc, xs_q])
else:
# Normal distribution
init_mu = [0, 0, 0, 1]
std = [1, 1, 0] # std[2]=0 so that z=0
xs = sample(init_mu, std, N_p)
ws = np.full(shape=(N_p), fill_value=1.0)
```
```python
def importance_sampling(weight, cnt, detectors, x):
    # TODO how to use loglikelihood for sampling?
    logli = sum_loglikelihood(cnt, detectors, x)
    weight_updated = weight * np.exp(logli)
    return weight_updated
```
```python
ax = world.visualize_world(detectors, figsize=(5, 5), plotsize=2)
print("Groundtruth:", gt_s)
for epoch in range(10):
# Update
w_updated = []
for i in range(N_p):
w_updated.append(importance_sampling(ws[i], cnts, detectors, xs[i]))
# Normalize weight
sum_w = sum(w_updated)
w_updated = [w / sum_w for w in w_updated]
# Resampling
new_xs = []
resamples = np.random.multinomial(N_p, w_updated)
for i, n in enumerate(resamples):
for _ in range(n):
# Add noise because no transition model
# Noise should be zero-centered
noise = np.random.multivariate_normal(
mean=[0, 0, 0, 0], cov=np.diag([0.05, 0.05, 0, 0.1])
)
new_xs.append(xs[i] + noise)
xs = np.array(new_xs)
    xs[xs[:, 3] <= 0, 3] = 0.01  # Intensity must stay positive
ws = np.full(shape=(N_p), fill_value=1.0)
# Estimation
est_x = xs.mean(axis=0)
if not epoch % 10:
print("Estimation:", est_x)
ax.plot(est_x[0], est_x[1], "cd")
print("Final estimation:", est_x)
ax.plot(est_x[0], est_x[1], "yd")
ax.scatter(xs[:, 0], xs[:, 1], alpha=0.05)
```
## MCMC
### NumPyro
```python
import arviz as az
import jax
import jax.numpy as jnp
import numpy as np
import numpyro
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS, Predictive
```
```python
numpyro.enable_validation(True)
numpyro.set_platform("cpu")
draws = 2000
chains = 2
```
```python
# with handlers.seed(rng_seed=0):
# y_ = model(detectors, Y=None)
# y_
```
```python
# Y = None
# with handlers.seed(rng_seed=0):
# # Prior
# rad_loc = numpyro.sample('rad_loc', dist.Normal(loc=jnp.array([0,0,0]), scale=jnp.array([5,5,0.01])))
# rad_q = numpyro.sample('rad_q', dist.Exponential(rate=0.1))
# # Lambda for Poisson
# lmds = []
# for detec in detectors:
# factor = detec.duration * detec.factor
# d = jnp.linalg.norm(rad_loc - jnp.array(detec.loc))
# lmds.append(factor*rad_q/d/d)
# lmds = numpyro.deterministic('lambda', jnp.array(lmds))
# Y_obs = numpyro.sample("obs", dist.Poisson(lmds), obs=Y)
```
```python
def model(detectors, Y=None):
# Prior
rad_loc = numpyro.sample('rad_loc', dist.Normal(loc=jnp.array([0,0,0]), scale=jnp.array([5,5,0.01])))
rad_q = numpyro.sample('rad_q', dist.Exponential(rate=0.1))
# Lambda for Poisson
lmds = []
for detec in detectors:
factor = detec.duration * detec.factor
d = jnp.linalg.norm(rad_loc - jnp.array(detec.loc))
lmds.append(factor*rad_q/d/d)
lmds = numpyro.deterministic('lambda', jnp.array(lmds))
Y_obs = numpyro.sample("obs", dist.Poisson(lmds), obs=Y)
return Y_obs
```
```python
Y = jnp.array(cnts)
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=draws, num_warmup=1000, num_chains=chains)
rng_key = jax.random.PRNGKey(0)
mcmc.run(rng_key, detectors, Y)
```
```python
print("Groundtruth:", gt_s)
```
```python
mcmc.print_summary()
```
```python
az.plot_trace(mcmc, var_names=["rad_loc", "rad_q"], combined=False);
```
```python
ax = world.visualize_world(detectors, figsize=(5, 5), plotsize=2)
s = mcmc.get_samples()
est_x = s["rad_loc"]
ax.plot(est_x[:, 0].mean(), est_x[:, 1].mean(), "yd")
ax.scatter(est_x[::10, 0], est_x[::10, 1], alpha=0.05)
```
### Pyro
NumPyro is a lot faster than Pyro for this model, largely because its NUTS sampler is JIT-compiled end to end with JAX.
```python
import arviz as az
import numpy as np
import pyro
import pyro.distributions as dist
import torch
# torch.multiprocessing.set_start_method("spawn")
from pyro.infer import MCMC, NUTS, Predictive
```
```python
pyro.enable_validation(True)
pyro.set_rng_seed(0)
draws = 1000
chains = 1
```
```python
def model(detectors, Y=None):
# Prior
rad_loc = pyro.sample('rad_loc', dist.Normal(loc=torch.zeros(3), scale=torch.tensor([2, 2, 0.001])))
rad_q = pyro.sample('rad_q', dist.Exponential(rate=0.5))
# Lambda for Poisson
lmds = []
for detec in detectors:
factor = detec.duration * detec.factor
d = torch.norm(rad_loc - torch.tensor(detec.loc))
lmds.append(factor*rad_q/d/d)
    lmds = pyro.deterministic('lambda', torch.stack(lmds))  # torch.stack keeps the autograd graph; torch.tensor(lmds) would detach it and break NUTS
Y_obs = pyro.sample("obs", dist.Poisson(lmds), obs=Y)
return Y_obs
```
```python
Y = torch.tensor(cnts)
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=draws, num_chains=1)
mcmc.run(detectors, Y)
```
```python
print("Groundtruth:", gt_s)
```
```python
mcmc.summary()
```
---
author: F_Ms
comments: true
date: 2016-03-24 16:29:34+00:00
layout: post
link: https://imf.ms/?p=603
slug: '%e7%86%99%e5%85%83%e7%81%af'
title: 熙
wordpress_id: 603
categories:
- picture-white-sex-feelings
post_format:
- image
tags:
- 浓黑淡白话怀情
---
![黑白-色情怀_熙元灯饰_灯店[003]](/img/post/wp/2016/03/黑白-色情怀_熙元灯饰_灯店003.jpg)
熙
---
title: Ocoupier
letter: O
permalink: "/definitions/bld-ocoupier.html"
body: An occupant; one who is in the enjoyment of a thing
published_at: '2018-07-07'
source: Black's Law Dictionary 2nd Ed (1910)
layout: post
---
---
author: viviworld
comments: true
date: 2014-10-26 14:40:18+00:00
layout: post
link: http://www.labazhou.net/2014/10/ice-cream-and-distributed-systems/
slug: ice-cream-and-distributed-systems
title: Ice Cream and Distributed Systems
wordpress_id: 1160
categories:
- Programming
tags:
- two-phase commit
- distributed systems
---
Can we keep an accurate total of the ice cream?

When I was a kid, I really loved ice cream. I still like it now, but back then I was more than a little fanatical about it. My parents knew that this delicious mixture of fat and sugar was best eaten only occasionally, so they carefully limited the quantity. Naturally, I found the holes in the system. First I would go to my mom and ask whether I could have ice cream, and she would give an answer. If she said no, I would go and ask my dad the same question. This strategy increased my chances of getting a yes, because my parents' decisions were not consistent. Once in a while I could even eat the bowl my mom had approved and then try my dad for a second one.

After this trick had been going on for a while, my parents caught on. They decided they had to give me consistent answers, and the only way to do that was to talk to each other every time I asked for ice cream. Their coordination method was very effective. It guaranteed consistent answers, at the cost of making me wait a little longer for them.

The method stopped working once my parents were at work. As a kid, I could always find a good excuse to talk to one of them at any moment, but their jobs kept them from talking to each other. Once again, I could turn the situation to my rich, sweet, creamy advantage. Because my parents could not communicate, I was able to force inconsistent decisions. Was it my parents' fault for wanting to make consistent decisions about ice cream while being unable to talk to each other?

<blockquote>Assume the network consists of at least two nodes, so that it can be divided into two disjoint, non-empty sets {G1, G2}. The basic idea of the proof is to assume that all messages between G1 and G2 are lost. If a write occurs in G1 and a read occurs afterwards in G2, then the read cannot return the result of the earlier write.</blockquote>

Assuming my parents have no watches, so that their decisions can be based only on the messages they receive and their internal state, [Gilbert and Lynch](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.67.6951) proved that in general they cannot make consistent decisions. That is the general result about writes and reads. Can they do better in certain special cases?

### Making clever use of clocks

At the time, my parents' ice-cream policy was one bowl a week. They would confer before leaving for work each day and decide that, while I had not yet used up my weekly quota, only one of them could hand out ice-cream decisions for the next eight hours. If it was my mom's turn and I phoned my dad for a decision, he would tell me that he could not give one. As long as I could reach my mom, I got a consistent answer. If I could not reach her, I was out of luck. Even so, there was a remedy: if mom worked overtime, dad would notice that more than eight hours had passed and then make the decision himself.

Soon enough, nimble little me realized that dad's watch ran faster than mom's. When he got home, I would go ask him for a vanilla one. He would look at his watch, see that eight hours had passed, and conclude that mom had lost her authority to decide. After checking that the bowl in the fridge was still empty, to make sure mom had not made a decision during the day, he would let me have it. I would wolf it down and then phone my mom. Her watch told her the eight hours were not yet up, so she gave me a second bowl. I had beaten the system!

<blockquote>[The client's lease term] is shortened by a margin E to account for clock drift. At a minimum, the correct functioning of leases requires only that clocks have a known, bounded drift.</blockquote>

If my parents had read [Gray and Cheriton](http://web.stanford.edu/class/cs240/readings/89-leases.pdf), they would have known how to fix their lease protocol. Mom and dad would have to measure the skew between the rates of their two watches, and add some extra time (E) onto dad's clock before assuming that mom no longer held the lease.

### Adding up the results

As fad diets swept the nation, my parents came to think that ice cream was not as bad as they had imagined. Being responsible parents, they still wanted to track my consumption to check their assumptions. During working hours, mom and dad fell back to making inconsistent decisions; each of them simply kept their own count of the servings they had approved. Once they were both home again, they added their separate counts to get an accurate total.

Tracking flavors was a bit harder. Every time I asked for a scoop, they wrote down the flavor I had been allowed. Occasionally I would open the freezer, find that flavor was gone, and call back to have my total decremented. Being a forgetful little kid, I could not remember whether mom or dad had recorded the "yes", so I would call one of them at random to record the "no". That did not matter, because they could still keep their counts independently and add them up into an accurate total at the end of the day. An accurate total, that is, until disaster struck.

I got permission from mom for strawberry ice cream, but could not find any in its usual spot between the ice trays and the frozen fruit. I called back to report the miss, but the line dropped before I could say goodbye. Flustered at being unable to reach mom, I reported the same thing to dad. When the totals were added up at the end of the day, my parents were baffled by a negative total. Had we invented "negative ice cream"?

Unfortunately, my parents had not been following the development of conflict-free replicated data types. If they had, they would have solved this problem with an [OR set](https://hal.inria.fr/hal-00738680/PDF/RR-8083.pdf), which uses unique tags to track additions and removals. Armed with that paper, and having studied CRDTs new and old, could they have gone back to limiting my quota? The intent is clear: if we can count independently, and if we can manage a set independently, can we also enforce the same one-bowl limit independently each day? Sadly, no. The important difference is that incrementing a counter and adding to a set are commutative, whereas decrementing only when the total is greater than zero is not.

### Getting everyone to agree

After their frustrating war over flavor tracking, my parents brought in their part-time housekeeper Mary to help with the problem. Having lost faith in the fad-diet books, both of my parents spent part of their working time looking into the health properties of ice cream, and they changed their minds often. They wanted to track carefully how much I ate, especially since the dose mattered a great deal to my health. Mom and dad decided that Mary could let me have some only if they both agreed. Mary was happy to take on the task, but there was one big problem: she hated making phone calls. Fortunately, she loved texting. Unfortunately, text messages still came with strange, costly replies.

Mary, mom, and dad sat down together and tried to work out how to get through this with the fewest messages. Mary invented a simple approach: when I asked her whether I could have ice cream, she texted both my dad and my mom at the same time to ask their opinions, and neither would change their mind before receiving Mary's next message. If they both agreed, she would go ahead and let them know she was preparing the dessert. If either of them said no, she would let them know the bowl was staying empty. This protocol, which they named Two-phase Commit after the frozen and liquid phases that ice cream passes through, took four messages to complete. Could Mary do better and save some money on texting?

<blockquote>In the absence of coordinator failures, any commit protocol ... requires at least 2(N-1) messages to complete a transaction.</blockquote>

Luckily, my parents did not waste too much time thinking about it. Mary stumbled upon the paper by [Cynthia Dwork and Dale Skeen](http://dl.acm.org/citation.cfm?id=806705), which establishes exactly the condition Mary needed to know: as long as Mary was the one sending the text messages, there was no protocol better than hers.
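The message count in the story can be checked with a toy model. The sketch below is illustrative only: it counts one vote message per participant and one decision message back, which for two participants gives the four messages the protocol needs (matching the 2(N-1) bound for N = 3 nodes); the names follow the story, not any real library.

```python
def two_phase_commit(votes):
    """Toy 2PC message count: each participant sends its vote (phase 1),
    then the coordinator sends each one the decision (phase 2)."""
    messages = len(votes)            # phase 1: one vote per participant
    decision = all(votes.values())   # commit only if everyone says yes
    messages += len(votes)           # phase 2: one decision per participant
    return decision, messages

ok, n = two_phase_commit({"mom": True, "dad": True})      # commits in 4 messages
veto, m = two_phase_commit({"mom": True, "dad": False})   # a single no aborts, still 4 messages
```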
* Original post: [http://brooker.co.za/blog/2014/10/25/ice-cream.html](http://brooker.co.za/blog/2014/10/25/ice-cream.html)
* Translator's note 1: The commutative property is a widely used mathematical term meaning that the order of certain operations can be changed without changing the final result. Commutativity is a basic property in most branches of mathematics, and many mathematical proofs rely on it. For simple operations it was long assumed to hold without being given a specific name, until the 19th century, when mathematicians began to formalize mathematical theories. [http://zh.wikipedia.org/wiki/交換律](http://zh.wikipedia.org/wiki/交換律)
* Translator's note 2: Two-phase commit refers to an algorithm, used in computer networking and databases, designed to keep all nodes of a distributed system consistent when committing a transaction. The idea can be summarized as follows: the participants report the success or failure of their operations to a coordinator, and the coordinator, based on the feedback from all participants, decides whether each participant should commit or abort the operation. [http://zh.wikipedia.org/wiki/二阶段提交](http://zh.wikipedia.org/wiki/二阶段提交)
# user
Displays the specified user's timeline.
```
twnyan user {<username / tweet number> [count] | <command>}
twnyan ur {<username / tweet number> [count] | <command>}
```
- The `@` in the username may be omitted
- If the count is omitted, the default value from the configuration file is used
## Commands
### user own
Displays your own timeline.
```
twnyan user own [count]
```
- If the count is omitted, the default value from the configuration file is used
---
author: slowe
comments: true
date: 2007-04-09 20:07:01+00:00
layout: post
slug: call-for-prayer
title: Call For Prayer
wordpress_id: 439
categories: Personal
tags:
- Christianity
- Personal
---
I know that my weblog defies the "normal rules" of how one should run a weblog in that I freely mix both personal and professional topics on the same site. As I said in [my very first post][1], I can't hide who I am, and I am a Christian. If that means that my Christianity bleeds into everything else, so be it.
I don't know if you read my site for my technical articles, or for my occasional personal post, and I don't know if any of you reading out there are indeed Christians. If you are a follower of Christ, then I'd like to make a simple request of you: please pray for my family.
My family is going through a rough time right now. The details of the specific situation aren't important; what's important is that I---we---continue to lean upon the Lord and His strength in order to make it through the storm. To do that, we've already enlisted the help of many prayer warriors that we know personally. Now I seek the help of prayer warriors that I don't know personally.
This situation that we are going through will come to a head sometime next week. If you could, please remember my family in your prayers throughout the remainder of this week. My wife and I are praying together with the family nightly, and praying with each other every day. Our friends, our pastors (past and present), Christian co-workers, and fellow church members also have us on their prayer lists. Will you pray as well? I know that through the combined prayer and faith of the believers that my family can emerge through this situation with a powerful testimony to share with others.
[1]: {% post_url 2005-05-11-welcome %}
# vim-colors-off
For a number of weeks, I ran vim with `syntax off`. It was quite nice,
with only two annoyances:
- Bright white on jet black was a bit off-putting.
- There were cases when I did miss the lack of color, vimdiff for
example.
Therefore, I aimed to find or create a colorscheme to solve these two
issues.
The result is very much based on the [pencil][] colorscheme, which is
surprising because it's a very colorful colorscheme, but:
- It uses a very sane approach to defining and setting colors
- It has nice background and foreground colors
- In the areas where I do want color, I like how it colors things
[pencil]: https://github.com/reedes/vim-colors-pencil

## Installation
- Use your preferred plugin manager
- Add "pbrisbin/vim-colors-off" as a plugin
## Usage
```
:colorscheme off
```
Supports both `background=light` and `background=dark`.
## Options
- `g:colors_off_a_little`: Set to `1` to bring back _a little_ color, here and there. Default `0`.
## Trouble-shooting
**Plugin fails to update**: we recently switched our default branch; this can confuse
some existing checkouts. The solution is to remove and re-add the plugin. How to do that
depends on the plugin manager you use.
---
[LICENSE](./LICENSE)
---
name: Calling Factory Methods
description: This sample demonstrates using XAML to call a factory method that can be used to initialize an object. For more information about this sample, see...
topic: sample
languages:
- csharp
products:
- xamarin
technologies:
- xamarin-forms
urlFragment: xaml-callingfactorymethods
---
Calling Factory Methods
=======================
This sample demonstrates using XAML to call a factory method that can be used to initialize an object.
For more information about this sample, see [Passing Arguments in XAML](https://developer.xamarin.com/guides/xamarin-forms/xaml/passing-arguments/).
Author
------
David Britch
---
id: crypto
title: Crypto
custom_edit_url: https://github.com/deltanet-lab/libra-website-cn/edit/master/docs/crates/crypto.md
---
The crypto component hosts all of the cryptographic primitives we use in Libra: hashing, signatures, and key derivation/generation. The parts of the library that use `traits.rs` contain type-safe cryptographic APIs, a verifiable random function (VRF), and EdDSA and BLS signatures.
## Overview
Libra uses the following cryptographic algorithms:
* SHA-3 as the main hash function. It is standardized in [FIPS 202](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf). It is based on [tiny_keccak](https://docs.rs/tiny-keccak/1.4.2/tiny_keccak/).
* HKDF: an HMAC-based extract-and-expand key derivation function (HKDF) conforming to [RFC 5869](https://tools.ietf.org/html/rfc5869). It is used to generate keys from a salt (optional), a seed, and application info (optional).
* traits.rs introduces new abstractions for the crypto API.
* Ed25519 performs signatures using a new API design based on [ed25519-dalek](https://docs.rs/ed25519-dalek/1.0.0-pre.1/ed25519_dalek/), with additional security checks (e.g., against malleability).
* BLS12381 performs signatures using a new API based on the [threshold_crypto](https://github.com/poanetwork/threshold_crypto) library. BLS signatures are currently undergoing a [standardization process](https://tools.ietf.org/html/draft-boneh-bls-signature-00).
* ECVRF implements a verifiable random function (VRF) over Curve25519 according to [draft-irtf-cfrg-vrf-04](https://tools.ietf.org/html/draft-irtf-cfrg-vrf-04).
* SLIP-0010 implements universal hierarchical key derivation for Ed25519 according to [SLIP-0010](https://github.com/satoshilabs/slips/blob/master/slip-0010.md).
* X25519 for performing key exchange: it secures communication between validators via the [Noise protocol framework](http://www.noiseprotocol.org/noise.html). It is built on the x25519-dalek library.
## How is this module organized?
```
crypto/src
├── hash.rs # Hash function (SHA-3)
├── hkdf.rs # HKDF implementation (HMAC-based Extract-and-Expand Key Derivation Function based on RFC 5869)
├── macros/ # Derivations for SilentDebug and SilentDisplay
├── utils.rs # Serialization utility functions
├── lib.rs
├── bls12381.rs # Bls12-381 implementation of the signing/verification API in traits.rs
├── ed25519.rs # Ed25519 implementation of the signing/verification API in traits.rs
├── slip0010.rs # SLIP-0010 universal hierarchical key derivation for Ed25519
├── x25519.rs # X25519 keys generation
├── test_utils.rs
├── traits.rs # New API design and the necessary abstractions
├── unit_tests/ # Tests
└── vrf/
├── ecvrf.rs # ECVRF implementation using curve25519 and SHA512
├── mod.rs
└── unit_tests # Tests
```
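The implementations listed above are in Rust, but the extract-and-expand structure of HKDF is compact enough to illustrate with the Python standard library alone. The sketch below is not the Libra code; it follows RFC 5869 directly and is checked against the RFC's first SHA-256 test vector.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """Extract step: PRK = HMAC-Hash(salt, IKM)."""
    return hmac.new(salt if salt else b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand step: T(i) = HMAC-Hash(PRK, T(i-1) | info | i), concatenated."""
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# RFC 5869, appendix A.1 (SHA-256) test vector.
ikm = b"\x0b" * 22
salt = bytes(range(13))
info = bytes(range(0xF0, 0xFA))
okm = hkdf_expand(hkdf_extract(salt, ikm), info, 42)
```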
---
UID: NF:shellapi.FindExecutableA
title: FindExecutableA function (shellapi.h)
description: Retrieves the name of and handle to the executable (.exe) file associated with a specific document file.
helpviewer_keywords: ["FindExecutable","FindExecutable function [Windows Shell]","FindExecutableA","FindExecutableW","_win32_FindExecutable","shell.FindExecutable","shellapi/FindExecutable","shellapi/FindExecutableA","shellapi/FindExecutableW"]
old-location: shell\FindExecutable.htm
tech.root: shell
ms.assetid: 969edbd9-164e-457f-ab0a-dc4d069bf16b
ms.date: 12/05/2018
ms.keywords: FindExecutable, FindExecutable function [Windows Shell], FindExecutableA, FindExecutableW, _win32_FindExecutable, shell.FindExecutable, shellapi/FindExecutable, shellapi/FindExecutableA, shellapi/FindExecutableW
req.header: shellapi.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows XP [desktop apps only]
req.target-min-winversvr: Windows 2000 Server [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi: FindExecutableW (Unicode) and FindExecutableA (ANSI)
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Shell32.lib
req.dll: Shell32.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- FindExecutableA
- shellapi/FindExecutableA
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- Shell32.dll
api_name:
- FindExecutable
- FindExecutableA
- FindExecutableW
---
# FindExecutableA function
## -description
Retrieves the name of and handle to the executable (.exe) file associated with a specific document file.
## -parameters
### -param lpFile [in]
Type: <b>LPCTSTR</b>
The address of a <b>null</b>-terminated string that specifies a file name. This file should be a document.
### -param lpDirectory [in, optional]
Type: <b>LPCTSTR</b>
The address of a <b>null</b>-terminated string that specifies the default directory. This value can be <b>NULL</b>.
### -param lpResult [out]
Type: <b>LPTSTR</b>
The address of a buffer that receives the file name of the associated executable file. This file name is a <b>null</b>-terminated string that specifies the executable file started when an "open" by association is run on the file specified in the <i>lpFile</i> parameter. Put simply, this is the application that is launched when the document file is directly double-clicked or when <b>Open</b> is chosen from the file's shortcut menu. This parameter must contain a valid non-<b>null</b> value and is assumed to be of length MAX_PATH. Responsibility for validating the value is left to the programmer.
## -returns
Type: <b>HINSTANCE</b>
Returns a value greater than 32 if successful, or a value less than or equal to 32 representing an error.
The following table lists possible error values.
<table>
<tr>
<th>Return code/value</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>SE_ERR_FNF</b></dt>
<dt>2</dt>
</dl>
</td>
<td width="60%">
The specified file was not found.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>SE_ERR_PNF</b></dt>
<dt>3</dt>
</dl>
</td>
<td width="60%">
The specified path is invalid.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>SE_ERR_ACCESSDENIED</b></dt>
<dt>5</dt>
</dl>
</td>
<td width="60%">
The specified file cannot be accessed.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>SE_ERR_OOM</b></dt>
<dt>8</dt>
</dl>
</td>
<td width="60%">
The system is out of memory or resources.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>SE_ERR_NOASSOC</b></dt>
<dt>31</dt>
</dl>
</td>
<td width="60%">
There is no association for the specified file type with an executable file.
</td>
</tr>
</table>
## -remarks
Use <b>FindExecutable</b> for documents. If you want to retrieve the path of an executable file, use the following:
```
AssocQueryString(ASSOCF_OPEN_BYEXENAME,
ASSOCSTR_EXECUTABLE,
pszExecutableName,
NULL,
pszPath,
pcchOut);
```
Here, <i>pszExecutableName</i> is a pointer to a <b>null</b>-terminated string that specifies the name of the executable file, <i>pszPath</i> is a pointer to the <b>null</b>-terminated string buffer that receives the path to the executable file, and <i>pcchOut</i> is a pointer to a <b>DWORD</b> that specifies the number of characters in the <i>pszPath</i> buffer. When the function returns, <i>pcchOut</i> is set to the number of characters actually placed in the buffer. See <a href="/windows/desktop/api/shlwapi/nf-shlwapi-assocquerystringa">AssocQueryString</a> for more information.
When <b>FindExecutable</b> returns, the <i>lpResult</i> parameter may contain the path to the Dynamic Data Exchange (DDE) server started if a server does not respond to a request to initiate a DDE conversation with the DDE client application.
> [!NOTE]
> The shellapi.h header defines FindExecutable as an alias which automatically selects the ANSI or Unicode version of this function based on the definition of the UNICODE preprocessor constant. Mixing usage of the encoding-neutral alias with code that not encoding-neutral can lead to mismatches that result in compilation or runtime errors. For more information, see [Conventions for Function Prototypes](/windows/win32/intl/conventions-for-function-prototypes).
## -see-also
<a href="/windows/desktop/api/shellapi/nf-shellapi-shellexecutea">ShellExecute</a>
# LONGFORMER-BASE-4096 fine-tuned on SQuAD v1
This is longformer-base-4096 model fine-tuned on SQuAD v1 dataset for question answering task.
The [Longformer](https://arxiv.org/abs/2004.05150) model was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan from AllenAI. As the paper explains:
> `Longformer` is a BERT-like model for long documents.
The pre-trained model can handle sequences of up to 4096 tokens.
## Model Training
This model was trained on google colab v100 GPU. You can find the fine-tuning colab here [](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing).
A few things to keep in mind while training Longformer for the QA task:
by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details, please refer to the paper. The `LongformerForQuestionAnswering` model automatically does that for you. To allow it to do that:
1. The input sequence must have three sep tokens, i.e., the sequence should be encoded like this:
` <s> question</s></s> context</s>`. If you encode the question and context as an input pair, then the tokenizer already takes care of that and you shouldn't need to worry about it.
2. `input_ids` should always be a batch of examples.
## Results
|Metric | # Value |
|-------------|---------|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
## Model in Action 🚀
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
The `LongformerForQuestionAnswering` isn't yet supported in `pipeline`. I'll update this card once the support has been added.
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
## DeepDetect Docker images
This repository contains the Dockerfiles for building the CPU and GPU images for deepdetect.
Also see https://hub.docker.com/u/beniz/starred/ for pre-built images
The docker images contain:
- a running `dede` server ready to be used, no install required
- `googlenet` and `resnet_50` pre-trained image classification models, in `/opt/models/`
This allows you to run the container and set up an image classification model based on deep (residual) nets in two short command-line calls.
### Getting and running official images
```
docker pull beniz/deepdetect_cpu
```
or
```
docker pull beniz/deepdetect_gpu
```
#### Running the CPU image
```
docker run -d -p 8080:8080 beniz/deepdetect_cpu
```
`dede` server is now listening on your port `8080`:
```
curl http://localhost:8080/info
{"status":{"code":200,"msg":"OK"},"head":{"method":"/info","version":"0.1","branch":"master","commit":"c8556f0b3e7d970bcd9861b910f9eae87cfd4b0c","services":[]}}
```
Here is how to do a simple image classification service and prediction test:
- service creation
```
curl -X PUT "http://localhost:8080/services/imageserv" -d "{\"mllib\":\"caffe\",\"description\":\"image classification service\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\"},\"mllib\":{\"nclasses\":1000}},\"model\":{\"repository\":\"/opt/models/ggnet/\"}}"
{"status":{"code":201,"msg":"Created"}}
```
- image classification
```
curl -X POST "http://localhost:8080/predict" -d "{\"service\":\"imageserv\",\"parameters\":{\"input\":{\"width\":224,\"height\":224},\"output\":{\"best\":3},\"mllib\":{\"gpu\":false}},\"data\":[\"http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg\"]}"
{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","time":852.0,"service":"imageserv"},"body":{"predictions":{"uri":"http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg","classes":[{"prob":0.2255125343799591,"cat":"n03868863 oxygen mask"},{"prob":0.20917612314224244,"cat":"n03127747 crash helmet"},{"last":true,"prob":0.07399296760559082,"cat":"n03379051 football helmet"}]}}}
```
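The same two calls can also be scripted. The payloads below mirror the curl examples exactly; the HTTP lines are left commented out because they assume a `dede` server actually listening on localhost:8080.

```python
import json

# PUT /services/imageserv — create the image classification service.
service_payload = {
    "mllib": "caffe",
    "description": "image classification service",
    "type": "supervised",
    "parameters": {"input": {"connector": "image"}, "mllib": {"nclasses": 1000}},
    "model": {"repository": "/opt/models/ggnet/"},
}

# POST /predict — classify one image URL with the top-3 classes.
predict_payload = {
    "service": "imageserv",
    "parameters": {
        "input": {"width": 224, "height": 224},
        "output": {"best": 3},
        "mllib": {"gpu": False},
    },
    "data": ["http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg"],
}
body = json.dumps(predict_payload)

# import requests
# requests.put("http://localhost:8080/services/imageserv", data=json.dumps(service_payload))
# requests.post("http://localhost:8080/predict", data=body)
```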
#### Running the GPU image
This requires [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) in order for the local GPUs to be made accessible by the container.
The following steps are required:
- install `nvidia-docker`: https://github.com/NVIDIA/nvidia-docker
- run with
```
nvidia-docker run -d -p 8080:8080 beniz/deepdetect_gpu
```
Notes:
- `nvidia-docker` requires docker >= 1.9
To test image classification on the GPU:
```
curl -X PUT "http://localhost:8080/services/imageserv" -d "{\"mllib\":\"caffe\",\"description\":\"image classification service\",\"type\":\"supervised\",\"parameters\":{\"input\":{\"connector\":\"image\"},\"mllib\":{\"nclasses\":1000}},\"model\":{\"repository\":\"/opt/models/ggnet/\"}}"
{"status":{"code":201,"msg":"Created"}}
```
and
```
curl -X POST "http://localhost:8080/predict" -d "{\"service\":\"imageserv\",\"parameters\":{\"input\":{\"width\":224,\"height\":224},\"output\":{\"best\":3},\"mllib\":{\"gpu\":true}},\"data\":[\"http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg\"]}"
```
Try the `POST` call twice: first time loads the net so it takes slightly below a second, then second call should yield a `time` around 100ms as reported in the output JSON.
#### Access to server logs
To look at server logs, use
```
docker logs -f <container name>
```
where `<container name>` can be obtained via `docker ps`
Example:
- start container and server:
```
> docker run -d -p 8080:8080 beniz/deepdetect_cpu
```
- look for container:
```
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9944734d5d6 beniz/deepdetect_cpu "/bin/sh -c './dede -" 17 seconds ago Up 16 seconds 0.0.0.0:8080->8080/tcp loving_shaw
```
- access server logs:
```
> docker logs -f loving_shaw
DeepDetect [ commit 4e2c9f4cbd55eeba3a93fae71d9d62377e91ffa5 ]
Running DeepDetect HTTP server on 0.0.0.0:8080
```
- share a volume with the image:
```
docker run -d -p 8080:8080 -v /path/to/volume:/mnt beniz/deepdetect_cpu
```
where `path/to/volume` is the path to your local volume that you'd like to attach to `/opt/deepdetect/`. This is useful for sharing / saving models, etc...
#### Building an image
Example goes with the CPU image:
```
cd cpu
docker build -t beniz/deepdetect_cpu --no-cache .
```
<properties
    pageTitle="How to set up your computer for Media Services development with .NET"
    description="Learn about the prerequisites for Media Services using the Media Services SDK for .NET. And learn how to create a Visual Studio application."
services="media-services"
documentationCenter=""
authors="juliako"
manager="erikre"
editor=""/>
<tags
ms.service="media-services"
ms.workload="media"
ms.tgt_pltfrm="na"
ms.devlang="dotnet"
ms.topic="article"
ms.date="10/24/2016"
ms.author="juliako"/>
#<a name="media-services-development-with-net"></a>Media Services development with .NET
[AZURE.INCLUDE [media-services-selector-setup](../../includes/media-services-selector-setup.md)]
This topic describes the steps for getting started with developing Media Services applications using .NET.
The **Azure Media Services .NET SDK** library enables you to program against Media Services using .NET. To make development with .NET even easier, the **Azure Media Services .NET SDK Extensions** library is provided. This library contains a set of extension methods and helper functions that simplify your .NET code. Both libraries are available through **NuGet** and **GitHub**.
##<a name="prerequisites"></a>Prerequisites
- A Media Services account in a new or existing Azure subscription. See the topic [Create a Media Services account](media-services-portal-create-account.md).
- Operating systems: Windows 10, Windows 7, Windows 2008 R2, or Windows 8.
- .NET Framework 4.5.
- Visual Studio 2015, Visual Studio 2013, Visual Studio 2012, or Visual Studio 2010 SP1 (Professional, Premium, Ultimate, or Express).
##<a name="create-and-configure-a-visual-studio-project"></a>Create and configure a Visual Studio project
This section shows you how to create a project in Visual Studio and set it up for Media Services development. In this case the project is a C# Windows console application, but the same setup steps apply to other types of projects you can create for Media Services applications (for example, a Windows Forms application or an ASP.NET web application).
This section shows how to use **NuGet** to add the Media Services .NET SDK and other dependent libraries.
Alternatively, you can get the latest Media Services .NET SDK bits from GitHub ([github.com/Azure/azure-sdk-for-media-services](https://github.com/Azure/azure-sdk-for-media-services) and [github.com/Azure/azure-sdk-for-media-services-extensions](https://github.com/Azure/azure-sdk-for-media-services-extensions)), build the solution, and add the references to the client project. Note that all the necessary dependencies are downloaded and extracted automatically.
1. Create a new C# console application in Visual Studio 2010 SP1 or later. Enter the **Name**, **Location**, and **Solution name**, and then click OK.
2. Build the solution.
2. Use **NuGet** to install and add the **Azure Media Services .NET SDK Extensions**. Installing this package also installs the **Media Services .NET SDK** and adds all other required dependencies.
Make sure you have the newest version of NuGet installed. For more information and installation instructions, see [NuGet](http://nuget.codeplex.com/).
2. In Solution Explorer, right-click the name of the project and select Manage NuGet Packages....
The Manage NuGet Packages dialog box appears.
3. In the Online gallery, search for Azure MediaServices Extensions, select Azure Media Services .NET SDK Extensions, and then click Install.
The project is modified, and references to the Media Services .NET SDK Extensions, the Media Services .NET SDK, and other dependent assemblies are added.
4. To promote a cleaner development environment, consider enabling NuGet Package Restore. For more information, see [NuGet Package Restore](http://docs.nuget.org/consume/package-restore).
3. Add a reference to the **System.Configuration** assembly. This assembly contains the System.Configuration.**ConfigurationManager** class, which is used to access configuration files (for example, App.config).
To add references using the Manage References dialog box, right-click the project name in Solution Explorer. Then select Add, then Reference.
The Manage References dialog box appears.
4. Under the .NET Framework group, find and select the System.Configuration assembly, and then press OK.
5. Open the App.config file (add the file to the project if it was not added by default) and add an *appSettings* section to the file.
Set the values for your Azure Media Services account name and account key, as shown in the following example.
To find the name and key values, go to the Azure portal and select your Media Services account. The Settings window appears on the right. In the Settings window, select Keys. Clicking the icon next to each text box copies the value to the system clipboard.
<configuration>
...
<appSettings>
<add key="MediaServicesAccountName" value="Media-Services-Account-Name" />
<add key="MediaServicesAccountKey" value="Media-Services-Account-Key" />
</appSettings>
</configuration>
6. Overwrite the existing **using** statements at the beginning of the Program.cs file with the following code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Configuration;
using System.Threading;
using System.IO;
using Microsoft.WindowsAzure.MediaServices.Client;
Now you are ready to start developing a Media Services application.
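As a first smoke test, the settings stored in App.config can be read with `ConfigurationManager` and used to connect to the account. (This is an illustrative sketch, not part of the original walkthrough: `CloudMediaContext` and `MediaServicesCredentials` come from the Media Services .NET SDK referenced above, and the exact constructor signatures can vary between SDK versions.)

```csharp
class Program
{
    // Read the credentials stored in App.config in the previous steps
    private static readonly string _accountName =
        ConfigurationManager.AppSettings["MediaServicesAccountName"];
    private static readonly string _accountKey =
        ConfigurationManager.AppSettings["MediaServicesAccountKey"];

    static void Main(string[] args)
    {
        // CloudMediaContext is the entry point to the Media Services API
        var context = new CloudMediaContext(
            new MediaServicesCredentials(_accountName, _accountKey));

        // List the names of the assets in the account as a quick check
        foreach (var asset in context.Assets)
        {
            Console.WriteLine(asset.Name);
        }
    }
}
```

If the account name and key are valid, running the console application prints the names of any existing assets (on a fresh account, nothing is printed).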
##<a name="media-services-learning-paths"></a>Media Services learning paths
[AZURE.INCLUDE [media-services-learning-paths-include](../../includes/media-services-learning-paths-include.md)]
##<a name="provide-feedback"></a>Provide feedback
[AZURE.INCLUDE [media-services-user-voice-include](../../includes/media-services-user-voice-include.md)]
Similarly, the reason why arithmetic is an a priori science and yet, as rationalists have always understood, applicable to reality now becomes discernible. Modern formalist philosophers of mathematics interpret arithmetic as the manipulation of arbitrarily defined signs according to arbitrarily stipulated transformation rules. This would make arithmetic nothing but play, and the continued, successful application of arithmetic in physics would then become an intellectual puzzle; indeed, formalists would have to explain it as a miraculous event. That it is no miracle, however, becomes apparent as soon as arithmetic is understood, to use here the terms of the constructivist mathematician Paul Lorenzen and his school, as an operative or constructive discipline. Arithmetic, and its character as an a priori and yet synthetic intellectual discipline, is rooted in our understanding of repetition, that is, the repetition of action. More precisely, it rests on our understanding of the meaning of “do this – and do this again, starting from the present result.” Moreover, arithmetic deals with real things: with constructed or identified units of something. It demonstrates the relations that hold between such units by virtue of the fact that they have been constructed according to rules of repetition. As Paul Lorenzen has shown in detail, not everything that passes as mathematics today can be so grounded, and such parts should then be recognized for what they are: useless symbolic games. But all of the mathematical tools that are actually employed in physics (that is, the tools of classical analysis) can be given constructive foundations. They are then not empty formalisms, but true propositions about reality. They apply to everything insofar as it consists of one or more units, and insofar as these units are constructed or identified as units by the procedure “repeat: construct or identify another unit by repeating the previous operation.”[^20]
Again, one may say that 2 and 2 is sometimes 4 but sometimes 2 or 5, and in observational reality (for lions together with lambs, or for rabbits) this may even be true.[^21] But in the reality of action, in identifying or constructing such units in repetitive operations, the truth that 2 and 2 is never anything but 4 could not possibly be undone.
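The operative reading can be made concrete in tally notation (an illustrative sketch in the spirit of Lorenzen's rules, not a quotation from the text): a first unit is constructed by the rule "put |", and further figures by "given a figure n, put n|", so that

```latex
2 = ||, \qquad 4 = ||||, \qquad
2 + 2 \;=\; \underbrace{||}_{2}\,\underbrace{||}_{2} \;=\; |||| \;=\; 4 .
```

An observational claim such as "2 rabbits and 2 rabbits yield 5 rabbits" concerns the behavior of rabbits, not the construction of units, and so does not touch the arithmetical proposition.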
[^20]: On the interpretation outlined in the text see White, *Reason and Philosophy*, pp. 427–31; on the constructive foundations of mathematics in particular, see Lorenzen, *Einführung in die operative Logik und Mathematik*; idem, *Methodisches Denken*, chaps. 6, 7; idem, *Normative Logic and Ethics*, chap. 4; on the constructive foundations of classical analysis see Paul Lorenzen, *Differential und Integral: Eine konstruktive Einführung in die klassische Analysis* (Frankfurt/M.: Akademische Verlagsgesellschaft, 1965); for a critique of mathematical formalism see Kambartel, *Erfahrung und Struktur*, chap. 6, esp. pp. 236–42; on the irrelevance of the celebrated Gödel theorem for constructive mathematics see Paul Lorenzen, *Metamathematik* (Mannheim: Bibliographisches Institut, 1962); also Christian Thiel, “Das Begründungsproblem der Mathematik und die Philosophie,” in Kambartel and Mittelstrass, eds., *Zum normativen Fundament der Wissenschaft*, esp. pp. 99–101. Kurt Gödel's proof (evidence in support of constructivism rather than a refutation of the possibility of a priori knowledge) only demonstrates that the original Hilbertian formalist program could not succeed, because in order to prove the consistency of certain axiomatic theories one must employ a meta-theory with means more powerful than those formalized in the theory itself. Interestingly, years before Gödel's proof of 1931, the difficulties of the formalist program had led the older Hilbert to recognize the need to return to a contentual interpretation of mathematics *à la* Kant, which would provide the basic axioms with a justification entirely independent of any formal proofs of consistency. See Kambartel, *Erfahrung und Struktur*, pp. 185–87.
[^21]: Examples of this sort are employed by Popper in order to “refute” the claim that the laws of arithmetic are laws of reality. See Karl Popper, *Conjectures and Refutations* (London: Routledge and Kegan Paul, 1969), p. 211.
Moreover, the old rationalist claim that Euclidean geometry is a priori and yet incorporates empirical knowledge about space also becomes justified in view of the praxeological constraints on knowledge. Since the discovery of non-Euclidean geometries, and in particular since Einstein's relativistic theory of gravitation, the prevailing position regarding geometry has again been empiricist and formalist. It conceives of geometry either as part of an empirical, a posteriori physics, or as an empirically meaningless formalism. Yet the view that geometry is either mere play, or forever subject to empirical testing, seems irreconcilable with the fact that Euclidean geometry is the foundation of engineering and construction, and that nobody in those fields ever thinks of its propositions as being true only hypothetically.[^22] Understanding knowledge as praxeologically constrained explains why the empiricist-formalist view is mistaken and why the empirical success of Euclidean geometry is no accident. Spatial knowledge, too, is implied in the meaning of action. Action is the employment of a physical body in space. Without acting there could be no knowledge of spatial relations, and no measurement. Measuring is relating something to a standard. Without standards there is no measurement, and there is no measurement which could ever falsify the standard. Evidently, the ultimate standard must be provided by the norms underlying the construction of bodily movements in space and the construction of measurement instruments by means of one's body, in accordance with the principles of spatial construction embodied in it. Euclidean geometry, as Paul Lorenzen has explained, is no more and no less than the reconstruction of the ideal norms underlying our construction of such homogeneous basic forms as points, lines, planes, and distances, which are incorporated or realized, in a more or less perfect but always perfectible way, even in our most primitive instruments of spatial measurement, such as a measuring rod. Naturally, these norms and their normative implications cannot be falsified by the result of any actual measurement.
On the contrary, their cognitive validity is substantiated by the fact that it is they that make physical measurements in space possible. Any actual measurement must already presuppose the validity of the norms leading to the construction of one’s measurement standards. It is in this sense that geometry is an a priori science and must simultaneously be regarded as an empirically meaningful discipline because it is not only the very precondition for any empirical spatial description, but it is also the precondition for any active orientation in space.[^23]
[^22]: See on this also Mises, *The Ultimate Foundation of Economic Science*, pp. 12–14.
[^23]: On the a prioristic character of Euclidean geometry see Lorenzen, *Methodisches Denken*, chaps. 8 and 9; idem, *Normative Logic and Ethics*, chap. 5; Hugo Dingler, *Die Grundlagen der Geometrie* (Stuttgart: Enke, 1933); on Euclidean geometry as a necessary presupposition of objective, intersubjectively communicable measurements and in particular of any empirical verification of non-Euclidean geometries (after all, the lenses of the telescopes which one uses to confirm Einstein’s theory regarding the non-Euclidean structure of physical space must themselves be constructed according to Euclidean principles) see Kambartel, *Erfahrung und Struktur*, pp. 132–33; Peter Janich, *Die Protophysik der Zeit* (Mannhein: Bibliographisches Institut, 1969), pp. 45–50; idem, “Eindeutigkeit, Konsistenz und methodische Ordnung,” in Kambartel and Mittelstrass, eds., *Zum normativen Fundament der Wissenschaft*.
::::Following the lead of Hugo Dingler, Paul Lorenzen and other members of the so-called Erlangen School have worked out a system of protophysics, which contains all a prioristic presuppositions of empirical physics, including, apart from geometry, also chronometry and hylometry (i.e., classical mechanics without gravitation, or rational mechanics).
::::> Geometry, chronometry and hylometry are a priori theories which make empirical measurements of space, time and material “possible.” They have to be established before physics in the modern sense of an empirical science, with hypothetical fields of forces, can begin. Therefore, I should like to call these disciplines by a common name: protophysics. (Lorenzen, *Normative Logic and Ethics*, p. 60)
In view of the recognition of the praxeological character of knowledge, these insights regarding the nature of logic, arithmetic and geometry become integrated and embedded into a system of epistemological dualism.[^24] The ultimate justification for this dualist position (the claim that there are two realms of intellectual inquiry that can be understood a priori as requiring categorically distinct methods of treatment and analysis), also lies in the praxeological nature of knowledge. It explains why we must differentiate between a realm of objects which is categorized causally and a realm that is categorized teleologically instead.
[^24]: On the fundamental nature of epistemological dualism see also Mises, *Theory and History*, pp. 1–2.
I have already briefly indicated during my discussion of praxeology that causality is a category of action. The idea of causality—that there are constant, time-invariantly operating causes which allow one to project past observations regarding the relation of events into the future—is something (as empiricism since Hume has noticed) which has no observational basis whatsoever. One cannot observe the connecting link between observations. Even if one could, such an observation would not prove it to be a time-invariant connection. Instead, the principle of causality must be understood as implied in our understanding of action as an interference with the observational world, made with the intent of diverting the natural course of events in order to produce a different, preferred state of affairs (of making things happen that otherwise would not happen), and thus presupposes the notion of events which are related to each other through time-invariantly operating causes. An actor might err with respect to his particular assumptions about which earlier interference produced which later result. But successful or not, any action, changed or unchanged in light of its previous success or failure, presupposes that there are constantly connected events as such, even if no particular cause for any particular event can ever be preknown to any actor. Without such an assumption it would be impossible to ever categorize two or more observational experiments as falsifying or confirming each other rather than interpreting them as logically incommensurable events. Only because the existence of time-invariantly operating causes *as such* is already assumed can one ever encounter particular instances of confirming or disconfirming observational evidence, or can there ever be an actor who can learn anything from past experience by classifying his actions as successful and confirming some previous knowledge or as unsuccessful and disconfirming it. 
It is simply by virtue of acting and distinguishing between successes and failures that the a priori validity of the principle of causality is established; even if one tried, one could not successfully refute its validity.[^25]
[^25]: On the a prioristic character of the category of causality see Mises, *Human Action*, chap. 5; Hoppe, *Kritik der kausalwissenschaftlichen Sozialforschung*; idem, “Is Research Based on Causal Scientific Principles Possible in the Social Sciences?” (infra chap. 7); on the causality principle as a necessary presupposition in particular also of the indeterminacy principle of quantum physics and the fundamental misconception involved in interpreting the Heisenberg-principle as invalidating the causality principle see Kambartel, *Erfahrung and Struktur*, pp. 138–40; also Hoppe, “In Defense of Extreme Rationalism,” footnote 36\. In fact, it is precisely the indisputable praxeological fact that separate measurement acts can only be performed sequentially which explains the very possibility of irreducibly probabilistic—rather than deterministic—predictions as they are characteristic of quantum physics; however, in order to perform any experiments in the field of quantum mechanics, and in particular to repeat two or more experiments and state this to be the case, the validity of the causality principle must evidently already be presupposed.
In so understanding causality as a necessary presupposition of action, it is also immediately implied that its range of applicability must then be delineated a priori from that of the category of teleology. Indeed, both categories are strictly exclusive and complementary. Action presupposes a causally structured observational reality, but the reality of action which we can understand as requiring such structure, is not itself causally structured. Instead, it is a reality that must be categorized teleologically, as purpose-directed, meaningful behavior. In fact, one can neither deny nor undo the view that there are two categorically different realms of phenomena, since such attempts would have to presuppose causally related events qua actions that take place within observational reality as well as the existence of intentionally rather than causally related phenomena in order to interpret such observational events as meaning to deny something. Neither a causal nor a teleological monism could be justified without running into an open contradiction: in physically stating either position and in claiming to say something meaningful in so doing, the case is in fact made for an indisputable complementarity of both a realm of causal *and* teleological phenomena.[^26]
[^26]: On the necessary complementarity of the categories of causality and teleology see Mises, *Human Action*, p. 25; idem, *The Ultimate Foundation of Economic Science*, pp. 6–8; Hoppe, *Kritik der kausalwissenschaftlichen Sozialforschung*; idem, “Is Research Based on Causal Scientific Principles Possible in the Social Sciences?” (infra chap. 7); also Georg Henrik von Wright, *Norm and Action* (London: Routledge and Kegan Paul, 1963); idem, *Explanation and Understanding* (Ithaca, N.Y.: Cornell University Press, 1971); K.O. Apel, *Die Erklären: Verstehen Kontroverse in transzendental-pragmatischer Sicht* (Frankfurt/M.: Suhrkamp, 1979).
Everything which is not an action must necessarily be categorized causally. There is nothing to be known a priori about this range of phenomena except that it is structured causally and that it is structured according to the categories of propositional logic, arithmetic and geometry.[^27] Everything else there is to know about this range of phenomena must be derived from contingent observations and thus represents a posteriori knowledge. In particular, all knowledge about two or more specific observational events being causally related or not is a posteriori knowledge. Obviously, the range of phenomena described in this way coincides (more or less) with what is usually considered to be the field of the empirical natural sciences.
[^27]: More precisely still, it is structured according to the categories of logic, arithmetic, and protophysics (including geometry). See note 23 above.
In contrast, everything that is an action must be categorized teleologically. This realm of phenomena is constrained by the laws of logic and arithmetic, too. But it is not constrained by the laws of geometry as incorporated in our instruments of measuring spatially extending objects because actions do not exist apart from subjective interpretations of observable things. Therefore, they must be identified by reflective understanding rather than spatial measurements. Nor are actions causally connected events, but events that are connected meaningfully within a categorical framework of means and ends.
One can not know a priori what the *specific* values, choices and costs of some actor are or will be. This would fall entirely into the province of empirical, a posteriori knowledge. In fact, which particular action an actor is going to undertake would depend on his knowledge regarding the observational reality and/or the reality of other actors’ actions. It would be manifestly impossible to conceive of such states of knowledge as predictable on the basis of time-invariantly operating causes. A knowing actor cannot predict his future knowledge before he has actually acquired it, and he demonstrates, simply by virtue of distinguishing between successful and unsuccessful predictions, that he must conceive of himself as capable of learning from unknown experiences in as yet unknown ways. Thus, knowledge regarding the particular course of actions is only a posteriori. Since such knowledge would have to include the actor’s own knowledge—as a necessary ingredient of every action whose every change can have an influence on a particular action being chosen—teleological knowledge must also necessarily be reconstructive or historical knowledge. It would only provide *ex post* explanations which would have no systematic bearing on the prediction of future actions because future states of knowledge could never be predicted on the basis of constantly operating empirical causes. Obviously, such a delineation of a branch of a posteriori and reconstructive science of action fits the usual description of such disciplines as history and sociology.[^28]
[^28]: On the logic of history and sociology as reconstructive disciplines see, in addition to the works of Mises mentioned at the outset of this chapter, Hoppe, *Kritik der kausalwissenschaftlichen Sozialforschung*, chap. 2.
What *is* known to be true a priori regarding the field of action and what would then have to constrain any historical or sociological explanation is this: For one thing, any such explanation, which essentially would have to reconstruct an actor’s knowledge, would invariably have to be a reconstruction in terms of knowledge of ends and means, of choices and costs, of profits and losses and so on. Second, since these are evidently the categories of praxeology as conceived of by Mises, any such explanation must also be constrained by the laws of praxeology. Since these laws are a priori laws, they must also operate as logical constraints on any future course of action. They are valid independent of any specific state of knowledge that an actor might have acquired, simply by virtue of the fact that whatever this state might be, it must be described in terms of action categories. And as referring to actions as such, the laws of praxeology must then be coextensive with all the predictive knowledge there can be in the field of the science of action. In fact, ignoring for the moment that the status of geometry as an a priori science is ultimately grounded in our understanding of action and in so far praxeology must be regarded as the more fundamental cognitive discipline, the peculiar role of praxeology proper within the entire system of epistemology can be understood as somewhat analogous to that of geometry. Praxeology is for the field of action what Euclidean geometry is for the field of observations (non-actions). As the geometry incorporated in our measuring instruments constrains the spatial structure of observational reality, so praxeology constrains the range of things that can possibly be experienced in the field of actions.[^29]
[^29]: On the categorical distinctiveness of praxeological theory and history (sociology) and the logical constraints that praxeology imposes on historical and sociological research as well as on social and economic predictions, see Mises, *Human Action*, pp. 51–59, 117–18; Hoppe, “In Defense of Extreme Rationalism”; idem, *Praxeology and Economic Science*.
---
permalink: /
title: ""
excerpt: ""
author_profile: true
redirect_from:
- /about/
- /about.html
---
<p align="center">
<img src="https://github.com/Davi1990/Davi1990.github.io/blob/master/images/Tanzania.jpg?raw=true" alt="Photo" style="width: 450px;"/>
</p>
# About Me
* I am a Post-Doctoral Research Fellow at the [Whole Brain Modelling Group](https://www.grifflab.com/) at the [Krembil Centre for Neuroinformatics - CAMH](https://www.camh.ca/en/science-and-research/institutes-and-centres/krembil-centre-for-neuroinformatics). [[Curriculum Vitae](https://davi1990.github.io/files/CV_Davide_Momi-merged.pdf)] [[Google Scholar](https://scholar.google.com/citations?user=I-BACCgAAAAJ&hl=en)]
* I received my Ph.D. from the [Department of Neuroscience, Imaging and Clinical Sciences](https://en.unich.it/ugov/organizationunit/17147) at [University "G. d'Annunzio" of Chieti](http://www.bbs.unich.it/) which was focused on integrating multimodal neuroimaging techniques with electrophysiological measures in order to optimize noninvasive brain stimulation intervention.
* Now my research is mainly focused on developing computational models of brain stimulation. I have experience in the following: neuroimaging (e.g. DTI, fMRI, ASL) analysis, TMS applications, machine learning model building, EEG data collection and analysis, quantitative structural MRI assessment (e.g. brain morphometry, cortical thickness etc.), cognitive tasks development.
* I received my Master’s Degree in "Neuroscience and Neuropsychological Rehabilitation" from the [University of Bologna](https://www.unibo.it/en/teaching/degree-programmes/programme/2014/0989).
# Recent News
* December 09, 2021. I gave a talk entitled ["Modelling large-scale brain network dynamics underlying the TMS-EEG evoked response"](https://davi1990.github.io/files/2021-12-09-_BSC.pdf) during the 4th International Brain Stimulation Conference 2021 in Charleston (SC) - USA
* November 20, 2021. The manuscript entitled "Phase-dependent local brain states determine the impact of image-guided TMS on motor network EEG synchronization" has been accepted for publication in The Journal of Physiology.
* June 14, 2021. The manuscript entitled "Perturbation of resting-state network nodes preferentially propagates to structurally rather than functionally connected regions" has been accepted for publication in Scientific Reports.
* April 26, 2021. The manuscript entitled "Overlapping and dissociable brain activations for fluid intelligence and executive functions" has been accepted for publication in Cognitive, Affective, & Behavioral Neuroscience.
* March 1, 2021. Start working as Post-Doctoral Research Fellow at the Whole Brain Modelling Group
* February 12, 2021. The manuscript entitled "Cortical Responses to Noninvasive Perturbations Enable Individual Brain Fingerprinting" has been accepted for publication in Brain Stimulation.
* December 18, 2020. The manuscript entitled "Network-level Macroscale Structural Connectivity Predicts Propagation of Transcranial Magnetic Stimulation" has been accepted for publication in NeuroImage.
* December, 2020. Finish PhD at the [Department of Neuroscience, Imaging and Clinical Sciences](https://www.dni.unich.it/dottorato/business-and-behavioural-sciences) at [University "G. d'Annunzio" of Chieti](http://www.bbs.unich.it/)
* October 29, 2020. The manuscript entitled "Functional connectivity changes and symptoms improvement after personalized, double-daily dosing, repetitive transcranial magnetic stimulation in obsessive-compulsive disorder: A pilot study" has been accepted for publication in Journal of Psychiatric Research
* October 1, 2020. Finish PhD visiting period at [Martinos Center for Biomedical Imaging](https://www.martinos.org/)
* July 13 - July 31 2020. I have been selected to attend the "Neuromatch Academy: An online school for Computational Neuroscience"
* May 29, 2020. The manuscript entitled "Long-lasting Connectivity Changes Induced by Intensive First-Person Shooter Gaming" has been accepted for publication in Brain Imaging and Behavior
* April 7, 2020. The manuscript entitled "Individualized perturbation of the human connectome reveals reproducible biomarkers of network dynamics relevant to cognition" has been accepted for publication in PNAS.
* October 30, 2019. The manuscript entitled "Cognitive Enhancement via Network-Targeted Cortico-cortical Associative Brain Stimulation" has been accepted for publication in Cerebral Cortex
* October 7, 2019. Starting visit PhD period at [Martinos Center for Biomedical Imaging](https://www.martinos.org/)
* September 16 - Semptember 27 2019. I attended the "Disruptive Summer School in Data Science & Machine Learning" at the University of Viterbo
* June 24 - June 28 2019. I won a Fellowship to attend the “Summer School of Interdisciplinary Research on Brain Network Dynamics” in Terzolas at the Department of Physics of the University of Trento - Italy
* March 25 - March 30 2019. I won a [FENS - Federation of European Neuroscience Societies](https://www.fens.org/) Travel Grant to attend the “International Interdisciplinary Computational Cognitive Science Spring School (IICCSSS)” at the Bernstein Center Freiburg, Germany
* January 25 - January 30, 2019. I attended the "European Workshop on Cognitive Neuropsychology" when I won the [EWCN Prize](https://sites.google.com/view/ewcn/ewcn-prize/ewcn-prize-2019)
* November 26 - December 7 2018. I attended the Winter School on the “Neurotechnology applications on aging-related disorders” at the Cuban Neuroscience Center (CNEURO), Havana, Cuba
* October 22 - October 26 2018. I attended the "Afni + Suma Training Workshop" at the National Institute of Health (NIH), Bethesda (MD)
* October 1 - October 4 2018. I attended the "FreeSurfer Tutorial and Workshop" at the Martinos Center for Biomedical Imaging, Boston (MA)
* May 18 - May 23 2018. I attended the "6th TMS-EEG Science Factory: TMS-EEG Summer School and Workshop" at the Aalto University, Espoo, Finland
* May 2 - May 4 2018. I attended the "Brainhack San Sebastien BCBL" at the Basque Center on Cognition, Brain and Language, Spain
* February 26 - February 28 2018. I attended the "20th Natbrainlab Neuroanatomy and Tractography workshop Natbrainlab" at King’s College in London, UK
* October 31 - November 1 2017. I attended the "Introduction to Transcranial Current Stimulation", Harvard Medical School, Boston (MA)
* October 24 - October 28 2017. I attended the "Intensive Course in Transcranial Magnetic Stimulation" at the Berenson-Allen Center for Noninvasive Brain Stimulation, Harvard Medical School, Boston (MA)
# Academic Services
* Member of the Psychologists Association of Umbria since November 2018
* Member of the “International Society for Intelligence Research” since July 2017
* Member of the “Italian Society of Psychophysiology” since November 2015
| 127.444444 | 421 | 0.798024 | eng_Latn | 0.895703 |
1c54648bc5d803add36eb00429423b72912dce2e | 4,320 | md | Markdown | README.md | amoghsa/ephemeral-gateway-skeleton-sandbox2 | 8835296f177ffeebf4d34ff661364beeea288f06 | [
"MIT"
] | 1 | 2019-04-22T19:58:15.000Z | 2019-04-22T19:58:15.000Z | README.md | amoghsa/ephemeral-gateway-skeleton-sandbox2 | 8835296f177ffeebf4d34ff661364beeea288f06 | [
"MIT"
] | null | null | null | README.md | amoghsa/ephemeral-gateway-skeleton-sandbox2 | 8835296f177ffeebf4d34ff661364beeea288f06 | [
"MIT"
] | 4 | 2019-09-24T19:27:37.000Z | 2020-03-10T17:53:32.000Z | # ephemeral-gateway-skeleton-repo
Use this repository to start creating your own CI/CD pipeline with gateway configuration.
# About
This is a skeleton repository that you can use as a starting point for your gateway projects.
In order to use this as a starting point for your projects, follow these steps:
1) Select the `Clone or download` dropdown in the top left, then click `Download ZIP`.
2) Create a local folder for your project and extract the contents of the downloaded zip into it.
3) Fill in details in the following files:
* `build.gradle`: On line [23](build.gradle#L23) replace `<project-folder>` with the path of the folder that your solution is located in on the Gateway. It must start with a `/`.
* `docker-compose.yml`: On line [16](docker-compose.yml#L16) replace `<project.name>` with the name of your project. If not explicitly set, it is usually the same as the name of the folder that your project is in (the one you created in step #2).
See more detail on using the gateway-developer-plugin here: [gateway-developer-plugin](https://github.com/ca-api-gateway/gateway-developer-plugin/wiki)
# Starting your project.
Put a valid gateway license in the `docker` folder. The license file should be called `license.xml`. For information on getting a license see the [License Section from the Gateway Container readme](https://hub.docker.com/r/caapim/gateway/).
## Exporting from your Gateway
If you connect to the running gateway with the CA API Gateway Policy Manager and make changes to the services and policies you can export those changes by running:
```./gradlew export```
This will export the changes to the various project folders. Note that your local edits will be overridden by changes from the gateway.
## Building a Solution
In order to package the solution into something that can be applied to the CA API Gateway run the following Gradle command:
```./gradlew build```
## Running the Solution
In order to run the solution you need to do the following:
1) Put a valid gateway license in the `docker` folder. The license file should be called `license.xml`. For information on getting a license see the [License Section from the Gateway Container readme](https://hub.docker.com/r/caapim/gateway/).
2) Make sure you have already built the solution by running `./gradlew build`
3) Start the Gateway Container by running: `docker-compose up --force-recreate`
After the container is up and running you can connect the CA API Gateway Policy Manager to it.
## Stopping Docker container
To stop the running Gateway Container, run the following command:
`docker-compose down`
## Adding a webhook
You can add a webhook in the GitHub repo (under Settings) pointing at Jenkins, so that the Jenkins job runs each time there is a commit on the branch:
`http://<jenkins-host>/github-webhook/`
You can also add the project URL in the Jenkins job, so that you can manually trigger the job from Jenkins directly.
## Usage
Modify the files below for a simple export from your existing gateway cluster:
1. `build.gradle`
   - `my.group.id`
   - `version`
   - `host.name`
   - `project.folder`
2. `settings.gradle`
   - `project.name`
3. `docker-compose.yml` (for local testing)
   - `project.name`
   - `version`
   - Add this under `volumes` (if needed): `- ./src/main/gateway/config/env.properties:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/env/env.properties`
4. `Jenkinsfile` (for K8s testing)
   - `git.repository`
   - `image.name`
5. `Dockerfile` (for K8s testing)
   - `project.name`
   - `version`
   - Add this below the `COPY` of the `.gw7` file (if needed): `COPY src/main/gateway/config/env.properties /opt/SecureSpan/Gateway/node/default/etc/bootstrap/env/env.properties`
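Putting the local-testing items above together, a `docker-compose.yml` might look like the following sketch. The service name, image name/tag, and published port are assumptions; the `env.properties` mapping is the one quoted above, and the in-container license path is also an assumption:

```yaml
version: "3"
services:
  gateway:
    # <project.name>:<version> from settings.gradle / build.gradle (illustrative values)
    image: my-gateway-project:1.0.0
    ports:
      - "8443:8443"
    volumes:
      # license.xml from the ./docker folder (container path is an assumption)
      - ./docker/license.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
      # optional env.properties mapping, as quoted in the list above
      - ./src/main/gateway/config/env.properties:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/env/env.properties
```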
## Versioning
If you have set up GitHub with a webhook to Jenkins, then each commit/merge to master will create artifacts using the Jenkins build job. If you do not want to create an artifact on each merge, you can remove the GitHub webhook and keep the git URL in the Jenkins job, so that you can run the job manually.
# Giving Back
## How You Can Contribute
Contributions are welcome and much appreciated. To learn more, see the [Contribution Guidelines][contributing].
## License
Copyright (c) 2018 CA. All rights reserved.
This software may be modified and distributed under the terms
of the MIT license. See the [LICENSE][license-link] file for details.
[license-link]: /LICENSE
[contributing]: /CONTRIBUTING.md
| 47.472527 | 283 | 0.768519 | eng_Latn | 0.996791 |
1c54bd2554579108b2c41198f326b8d791fdf726 | 2,362 | md | Markdown | docs/2014/database-engine/dev-guide/updategram-sample-applications-sqlxml-4-0.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/dev-guide/updategram-sample-applications-sqlxml-4-0.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/dev-guide/updategram-sample-applications-sqlxml-4-0.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Updategram Sample Applications (SQLXML 4.0) | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: database-engine
ms.topic: reference
helpviewer_keywords:
- sample applications [SQLXML]
- SQLXML, samples
- updategrams [SQLXML], samples
- examples [SQLXML], Updategram
ms.assetid: d2287e10-4007-4ba4-ad84-4e2b6adfede5
author: mashamsft
ms.author: mathoma
manager: craigg
ms.openlocfilehash: 0805ef503f7206ffeb1f7ccaf85fd09e808ac554
ms.sourcegitcommit: b87d36c46b39af8b929ad94ec707dee8800950f5
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/08/2020
ms.locfileid: "62780418"
---
# <a name="updategram-sample-applications-sqlxml-40"></a>Updategram Sample Applications (SQLXML 4.0)
This section provides examples of working with updategrams.
All the examples in this section use the AdventureWorks sample database in [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. All updates are applied to the tables in this AdventureWorks database.
## <a name="in-this-section"></a>In This Section
[Executing an Updategram by Using ADO (SQLXML 4.0)](../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/updategrams/executing-an-updategram-by-using-ado-sqlxml-4-0.md)
Provides a [!INCLUDE[msCoName](../../includes/msconame-md.md)] Visual Basic application that uses ADO to connect to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] and execute an updategram.
[Executing an Updategram by Using OLE DB (SQLXML 4.0)](../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/updategrams/executing-an-updategram-by-using-ole-db-sqlxml-4-0.md)
Provides a working sample that uses OLE DB to execute an updategram.
[Using an Updategram in a Sample ASP Application (SQLXML 4.0)](../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/updategrams/using-an-updategram-in-a-sample-asp-application-sqlxml-4-0.md)
Provides an Active Server Pages (ASP) application that updates customer information in the Person.Contact table in the AdventureWorks database.
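For orientation, a minimal updategram has this shape (a sketch only: the ContactID and Phone values are invented for illustration; Person.Contact is the table used by the ASP sample above):

```xml
<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <updg:before>
      <Person.Contact ContactID="1" Phone="398-555-0132"/>
    </updg:before>
    <updg:after>
      <Person.Contact ContactID="1" Phone="398-555-0199"/>
    </updg:after>
  </updg:sync>
</ROOT>
```

The `before` image identifies the existing row and the `after` image carries the new column values; executing the updategram applies the difference as an UPDATE.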
| 57.609756 | 291 | 0.781964 | deu_Latn | 0.463124 |
1c54f408fb9ad6d8268f884be2faaaf441e9d402 | 4,572 | md | Markdown | sdk-api-src/content/winbase/nf-winbase-setvolumelabela.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/winbase/nf-winbase-setvolumelabela.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/winbase/nf-winbase-setvolumelabela.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:winbase.SetVolumeLabelA
title: SetVolumeLabelA function (winbase.h)
description: Sets the label of a file system volume.
helpviewer_keywords: ["SetVolumeLabel","SetVolumeLabel function [Files]","SetVolumeLabelA","SetVolumeLabelW","_win32_setvolumelabel","base.setvolumelabel","fs.setvolumelabel","winbase/SetVolumeLabel","winbase/SetVolumeLabelA","winbase/SetVolumeLabelW"]
old-location: fs\setvolumelabel.htm
tech.root: fs
ms.assetid: 1851ed79-7a29-4731-8b67-75d6e9220705
ms.date: 12/05/2018
ms.keywords: SetVolumeLabel, SetVolumeLabel function [Files], SetVolumeLabelA, SetVolumeLabelW, _win32_setvolumelabel, base.setvolumelabel, fs.setvolumelabel, winbase/SetVolumeLabel, winbase/SetVolumeLabelA, winbase/SetVolumeLabelW
req.header: winbase.h
req.include-header: Windows.h
req.target-type: Windows
req.target-min-winverclnt: Windows XP [desktop apps \| UWP apps]
req.target-min-winversvr: Windows Server 2003 [desktop apps \| UWP apps]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi: SetVolumeLabelW (Unicode) and SetVolumeLabelA (ANSI)
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Kernel32.lib
req.dll: Kernel32.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- SetVolumeLabelA
- winbase/SetVolumeLabelA
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- Kernel32.dll
- API-MS-Win-Core-Kernel32-Legacy-l1-1-0.dll
- kernel32legacy.dll
- API-MS-Win-Core-Kernel32-Legacy-l1-1-1.dll
- API-MS-Win-Core-Kernel32-Legacy-l1-1-2.dll
- API-MS-Win-DownLevel-Kernel32-l2-1-0.dll
- API-MS-Win-Core-Kernel32-Legacy-L1-1-3.dll
- API-Ms-Win-Core-Kernel32-Legacy-Ansi-L1-1-0.dll
- API-MS-Win-Core-Kernel32-Legacy-L1-1-4.dll
- API-MS-Win-Core-Kernel32-Legacy-L1-1-5.dll
api_name:
- SetVolumeLabel
- SetVolumeLabelA
- SetVolumeLabelW
---
# SetVolumeLabelA function
## -description
Sets the label of a file system volume.
## -parameters
### -param lpRootPathName [in, optional]
A pointer to a string that contains the volume's drive letter (for example, X:\) or the path
of a mounted folder that is associated with the volume (for example, Y:\MountX\). The string must
end with a trailing backslash ('\'). If this parameter is <b>NULL</b>, the root of the
current directory is used.
### -param lpVolumeName [in, optional]
A pointer to a string that contains the new label for the volume. If this parameter is
<b>NULL</b>, the function deletes any existing label from the specified volume and does not
assign a new label.
## -returns
If the function succeeds, the return value is nonzero.
If the function fails, the return value is zero. To get extended error information, call
<a href="/windows/desktop/api/errhandlingapi/nf-errhandlingapi-getlasterror">GetLastError</a>.
## -remarks
The maximum volume label length is 32 characters.
<b>FAT filesystems: </b>The maximum volume label length is 11 characters.
A label is a user-friendly name that a user assigns to a volume to make it easier to recognize. A volume can
have a label, a drive letter, both, or neither. For more information, see
<a href="/windows/desktop/FileIO/naming-a-volume">Naming a Volume</a>.
In Windows 8 and Windows Server 2012, this function is supported by the following technologies.
<table>
<tr>
<th>Technology</th>
<th>Supported</th>
</tr>
<tr>
<td>
Server Message Block (SMB) 3.0 protocol
</td>
<td>
No
</td>
</tr>
<tr>
<td>
SMB 3.0 Transparent Failover (TFO)
</td>
<td>
No
</td>
</tr>
<tr>
<td>
SMB 3.0 with Scale-out File Shares (SO)
</td>
<td>
No
</td>
</tr>
<tr>
<td>
Cluster Shared Volume File System (CsvFS)
</td>
<td>
Yes
</td>
</tr>
<tr>
<td>
Resilient File System (ReFS)
</td>
<td>
Yes
</td>
</tr>
</table>
SMB does not support volume management functions.
> [!NOTE]
> The winbase.h header defines SetVolumeLabel as an alias which automatically selects the ANSI or Unicode version of this function based on the definition of the UNICODE preprocessor constant. Mixing usage of the encoding-neutral alias with code that is not encoding-neutral can lead to mismatches that result in compilation or runtime errors. For more information, see [Conventions for Function Prototypes](/windows/win32/intl/conventions-for-function-prototypes).
## -see-also
<a href="/windows/desktop/api/fileapi/nf-fileapi-getvolumeinformationa">GetVolumeInformation</a>
<a href="/windows/desktop/FileIO/volume-management-functions">Volume Management Functions</a> | 26.427746 | 462 | 0.746938 | eng_Latn | 0.605917 |
1c559e816df45742ce527edf36f5eca1305dd05b | 5,021 | md | Markdown | _metadane_globalne/3-4-2.md | PiotrWalaszczak/sdg-indicators-pl | 2518a2054a2920c08a1212c11f7eb0718c5ad6b6 | [
"CC0-1.0"
] | null | null | null | _metadane_globalne/3-4-2.md | PiotrWalaszczak/sdg-indicators-pl | 2518a2054a2920c08a1212c11f7eb0718c5ad6b6 | [
"CC0-1.0"
] | null | null | null | _metadane_globalne/3-4-2.md | PiotrWalaszczak/sdg-indicators-pl | 2518a2054a2920c08a1212c11f7eb0718c5ad6b6 | [
"CC0-1.0"
] | null | null | null | ---
translation_id: 3-4-2
pl_jednostka: >-
osoby
pl_title: >-
Współczynnik zgonów w wyniku samobójstw na 100 tys. ludności
pl_graph_title: >-
Współczynnik zgonów w wyniku samobójstw na 100 tys. ludności
pl_nazwa_wskaznika: >-
<b>3.4.2 Współczynnik zgonów w wyniku samobójstw na 100 tys. ludności</b>
pl_cel: >-
Cel 3. Dobre zdrowie i jakość życia
pl_zadanie: 3.4 Do 2030 roku obniżyć o 1/3 przedwczesną umieralność z powodu chorób niezakaźnych poprzez zapobieganie i leczenie oraz promowanie zdrowia psychicznego i dobrostanu.
pl_definicja: >-
Liczba zgonów z powodu samobójstw w przeliczeniu na 100 tys. ludności.
pl_jednostka_prezentacji: >-
osoby
pl_dostepne_wymiary: ogółem, płeć
pl_wyjasnienia_metodologiczne: >-
  <p><b>Zgon </b>– trwałe, nieodwracalne ustanie czynności narządów niezbędnych dla życia, konsekwencją czego jest ustanie czynności całego ustroju.</p>
<p><b>Samobójstwo </b>– rozmyślny akt pozbawienia się życia (wg Międzynarodowej Statystycznej Klasyfikacji Chorób i Problemów Zdrowotnych ICD-10 – jednostka chorobowa o symbolu z zakresu X60-X84, Y87.0).</p>
<p><b>Źródłem informacji o zgonach </b>jest wykorzystywany wtórnie przez statystykę indywidualny dokument "Karta zgonu" (Rozporządzenie Ministra Zdrowia w sprawie wzoru karty zgonu i sposobu jej wypełniania Dz. U. 2015 r., poz. 231).</p>
<p>Dane o zgonach opracowano w podziale terytorialnym - według miejsca zameldowania na pobyt stały osoby zmarłej.</p>
  <p>Przy opracowywaniu danych zgonów <b>według przyczyn </b>przyjmuje się wyjściową przyczynę zgonu. Za przyczynę wyjściową uważa się chorobę stanowiącą początek procesu chorobowego, który doprowadził do zgonu albo uraz czy zatrucie, w wyniku którego nastąpił zgon.</p>
<p>Dane dotyczące orzecznictwa o przyczynach zgonów podano zgodnie z Międzynarodową Statystyczną Klasyfikacją Chorób i Problemów Zdrowotnych (X Rewizja).</p>
<p><b>Ludność </b>opracowano na podstawie: </p>
<p> • bilansów ludności zamieszkałej na terenie gminy w oparciu o dane Narodowego Spisu Powszechnego Ludności i Mieszkań 2011 (dla danych od 2010 r.) dla lat wcześniejszych (2000-2009) w oparciu o dane Narodowego Spisu Powszechnego Ludności i Mieszkań 2002, </p>
<p> • rejestrów Ministerstwa Spraw Wewnętrznych i Administracji - migracje wewnętrzne i zagraniczne na pobyt stały (od 2006 r. dane są pobierane z rejestru PESEL - Powszechny Elektroniczny System Ewidencji Ludności), </p>
<p> • sprawozdań urzędów stanu cywilnego - urodzenia, zgony.</p>
pl_zrodlo_danych: Główny Urząd Statystyczny
pl_czestotliwosc_dostępnosc_danych: >-
Dane roczne od 2010 r.
en_jednostka: >-
persons
en_title: >-
Suicide mortality rate
en_graph_title: >-
Suicide mortality rate
en_nazwa_wskaznika: >-
3.4.2 Suicide mortality rate
en_cel: >-
Goal 3. Good health and well-being
en_zadanie: 3.4 By 2030, reduce by one third premature mortality from non-communicable diseases through prevention and treatment and promote mental health and well-being
en_definicja: >-
Suicide rate per 100 thous. population.
en_jednostka_prezentacji: >-
persons
en_dostepne_wymiary: total, sex
en_wyjasnienia_metodologiczne: >-
  Death - the permanent, irreversible cessation of the functions of the organs essential for life, the consequence of which is the cessation of all functions of the whole organism. Suicide - the act of deliberately killing oneself (according to the International Statistical Classification of Diseases and Related Health Problems ICD-10: disease entity symbols X60-X84). The source of data on deaths is the Ministry of Health document "Death certificate", which is the basic document for civil status acts and is in part secondarily utilized by national statistics (Regulation of the Minister of Health, Journal of Laws 2015, item 231). When compiling the data on deaths by cause, the initial cause of death is assumed. The initial cause is the disease which was at the beginning of the morbid process and which led to the death; it may also be the injury or poisoning which caused the death. Data relating to certification of the causes of death are given in accordance with the International Statistical Classification of Diseases and Related Health Problems (Revision X). Data on population were compiled on the basis of: the balances of the population residing in a gmina, based on the results of the 2011 Population and Housing Census (for data since 2010) and, for previous years (2003-2009), on the basis of the 2002 Population and Housing Census; the registers of the Ministry of Interior - internal and international migration of population for permanent residence (since 2006 the presented data come from the Common Electronic System of Population Register - PESEL); and documentation of Civil Status Offices regarding registered marriages, births and deaths.
en_zrodlo_danych: Statistics Poland
en_czestotliwosc_dostępnosc_danych: >-
  Annual data since 2010.
--- | 92.981481 | 1,654 | 0.779526 | pol_Latn | 0.99734 |
1c55a8e8505926a24d1aafb2036674b0ed28b349 | 9,424 | md | Markdown | windows-365/provide-localized-windows-experience.md | edroszcz/memdocs | 363af70d669b5ffbe6bd50c45abbbbe9cc71837a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-365/provide-localized-windows-experience.md | edroszcz/memdocs | 363af70d669b5ffbe6bd50c45abbbbe9cc71837a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-365/provide-localized-windows-experience.md | edroszcz/memdocs | 363af70d669b5ffbe6bd50c45abbbbe9cc71837a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
# required metadata
title: Provide users a localized Windows experience on their Cloud PC
titleSuffix:
description: Learn how to provide a localized Windows experience for your end-users.
keywords:
author: ErikjeMS
ms.author: erikje
manager: dougeby
ms.date: 09/03/2021
ms.topic: how-to
ms.service: cloudpc
ms.subservice:
ms.localizationpriority: high
ms.technology:
ms.assetid:
# optional metadata
#ROBOTS:
#audience:
ms.reviewer: chrimo
ms.suite: ems
search.appverid: MET150
#ms.tgt_pltfrm:
ms.custom: intune-azure; get-started
ms.collection: M365-identity-device-management
---
# Provide users a localized Windows experience
For users to be productive on their Windows 365 Cloud PC, it's important for Windows to use a display language that they're comfortable with. Users can always change the display language themselves through the Settings app in Windows. But the Windows experience is more welcoming if the user sees the right language immediately, starting when they first sign in.
To provide a localized Windows experience when users first sign in, there are two steps:
1. [Create a custom device image with the languages installed](#create-a-custom-image-with-the-languages-installed).
2. [Configure the default language using Group Policy](#configure-the-default-language-using-group-policy).
Cloud PCs provisioned from this image will be fully configured to work in any of the installed languages, without any user action. When the user signs in to the Cloud PC, Group Policy will evaluate the device and set the appropriate pre-installed language as the user's preferred language for Windows.
## Create a custom image with the languages installed
Creating a custom image with the languages installed is the best way to make sure that the desired languages are available on the Cloud PC when the user signs in.
Before starting the custom image process, check if your language is supported by the [Windows 365 Language Installer](https://www.powershellgallery.com/packages/Windows365LanguagesInstaller) script. If:
- The language you want to provide for your users is supported by the PowerShell script, follow the steps to [Add languages to Windows using a script and capture the image](#add-languages-to-windows-using-a-script-and-capture-the-image).
- The language you want to provide for your users isn't supported by the PowerShell script, follow the steps to [Add languages to Windows manually and capture the image](#add-languages-to-windows-manually-and-capture-the-image).
### Add languages to Windows using a script and capture the image
To add a language using the [Windows 365 Language Installer](https://www.powershellgallery.com/packages/Windows365LanguagesInstaller/1.0.0.0) script:
1. Sign in to the virtual machine you're customizing for use as the custom image.
2. Complete one of the **Installation Options** described for the [Windows 365 Language Installer](https://www.powershellgallery.com/packages/Windows365LanguagesInstaller/1.0.0.0) script.
3. Run the script and enter the number corresponding to the language you'd like to install on the custom image.
> [!NOTE]
> You can use the script to install as many languages as you'd like on the custom image. To do so, run the script one time for each language.
After you're done adding the desired languages and are ready to capture the image, follow the steps to [finish customizing your image](/azure/virtual-desktop/language-packs#finish-customizing-your-image).
### Add languages to Windows manually and capture the image
To manually install the desired languages to your Windows 10 Enterprise custom image, follow the steps in [Add language packs to a Windows 10 multi-session image](/azure/virtual-desktop/language-packs) up to and including [finish customizing your image](/azure/virtual-desktop/language-packs#finish-customizing-your-image).
> [!NOTE]
> Though these instructions are written specifically for Windows 10 Enterprise multi-session, these same steps apply to Windows 10 Enterprise.
### Upload the custom image
To upload the custom image to the Windows 365 service, after you've captured the image as an Azure managed image, follow the steps in [Add or delete device images](add-device-images.md).
## Configure the default language using Group Policy
Now that the languages are installed on the image that users will receive, you must create a Group Policy to apply the correct pre-installed language as the default for your users.
The following steps configure [Group Policy Preferences](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn581922(v=ws.11)) to set the PreferredUILanguages Registry value and the Windows Regional Options. These options are then [targeted by security group](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn581922(v=ws.11)#item-level-targeting) to sets of users. Each security group and Group Policy object configures a single language as the default for those users. To cater for users with different language default requirements, you can use a single image with multiple languages and different Group Policy objects targeted to different groups of users.
1. Create a security group in your Active Directory domain that will map a specific language to a specific set of users in that group.
2. Add all Cloud PC users who should receive that language to this new security group.
3. In Server Manager, open **Group Policy Management** and create a new Group Policy object linked to the Organization Unit (OU) or domain that will contain the Cloud PCs for those users.
4. Right-click the new Group Policy object, and select **Edit...**
5. Navigate to **User Configuration** > **Preferences** > **Windows Settings**, right-click **Registry**, and select **New** > **Registry Item**.
6. Enter the following details in the **General** tab. Here is an example that shows Spanish (Spain) with language code es-ES:
- Action: Replace
- Hive: HKEY_CURRENT_USER
- Key Path: Control Panel\Desktop
- Value name: PreferredUILanguages
- Value type: REG_SZ
- Value data: [Language code].
> [!Note]
> To find the language code for your desired language and region combination, see the [language pack list](/windows-hardware/manufacture/desktop/available-language-packs-for-windows#language-packs).
7. Switch to the **Common** tab and check the following three options:
- **Run in logged-on user's security context (user policy option)**
- **Apply once and do not reapply**
> [!Note]
> This setting makes sure that users can change language options themselves later.
- **Item-level targeting**
8. Select **Targeting...**, **New Item**, and **Security Group**.
9. Select **...** next to the Group, search for the new security group, select the new security group, and hit **OK**.
10. Select **User in group**, then select **OK** and **OK** to complete the new registry process.
11. In the "Group Policy Management Editor", navigate to **User Configuration** > **Preferences** > **Windows Settings**, right-click **Regional Options**, and select **New** > **Regional Options**.
12. Under **User Locale**, select the language and region combination that matches the registry key you created above.
13. After selecting your desired language and region combination from the dropdown, the dropdown menu may be underlined in red. This indicates that the selection isn't confirmed. Press the **F5** function key on your keyboard to confirm the selection, resulting in a green underlined dropdown menu.
Before hitting **F5**:

After hitting **F5**:

14. Switch to the **Common** tab and check the following three options:
- **Run in logged-on user's security context (user policy option).**
- **Apply once and do not reapply.**
> [!Note]
> This setting makes sure that users can change language options themselves later.
- **Item-level targeting.**
15. Select **Targeting...**, **New Item**, and **Security Group**.
16. Select **...** next to the Group, search for the new security group, select the new security group, and select **OK**.
17. Select **User in group**, then select **OK** and **OK** to complete the new registry process.
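The registry preference built in steps 5-10 ultimately writes a per-user value equivalent to the following `.reg` sketch (es-ES is just the example language code from step 6):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Control Panel\Desktop]
"PreferredUILanguages"="es-ES"
```

Because the preference is set to "Apply once and do not reapply", users remain free to change this value later through the Settings app.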
You can perform these steps for each language you need to provide as the default language for users. If your users have both Cloud PCs and physical devices, you may want to apply [Group Policy loopback](/troubleshoot/windows-server/group-policy/loopback-processing-of-group-policy) so these settings only affect users when they sign in to their Cloud PC.
> [!NOTE]
> Step 6 above uses the "Replace" command, setting the user's preferred language to just the one language defined in the registry item. If you create multiple Group Policy objects to assign different languages to users, make sure each user is only a member of a single security group that is being targeted.
## Next steps
[Create a provisioning policy](create-provisioning-policy.md)
| 71.393939 | 693 | 0.771965 | eng_Latn | 0.994193 |
1c55b64a543362f6497f228142c5e332c233452a | 841 | md | Markdown | docs/framework/wcf/diagnostics/etw/3559-serviceactivationstop.md | mattia-lunardi/docs.it-it | b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/etw/3559-serviceactivationstop.md | mattia-lunardi/docs.it-it | b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/etw/3559-serviceactivationstop.md | mattia-lunardi/docs.it-it | b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 3559 - ServiceActivationStop
ms.date: 03/30/2017
ms.assetid: 57aa18b4-6512-4f1a-a4e3-71f58a867ed2
ms.openlocfilehash: 5b442a334ff411e6bc8be0f0fd9ce38ddad3b497
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 05/04/2018
ms.locfileid: "33465130"
---
# <a name="3559---serviceactivationstop"></a>3559 - ServiceActivationStop
## <a name="properties"></a>Properties
|||
|-|-|
|ID|3559|
|Keywords|WebHost|
|Level|Information|
|Channel|Microsoft-Windows-Application Server-Applications/Analytic|
## <a name="description"></a>Description
This event is emitted when service activation stops.
## <a name="message"></a>Message
Service activation stop
## <a name="details"></a>Details
| 29 | 74 | 0.739596 | ita_Latn | 0.279377 |
1c55c94004e463c8cbda8656ada82631696de480 | 1,432 | md | Markdown | AlchemyInsights/add-update-vat-tax-id-legacy-wd-mca-cl.md | isabella232/OfficeDocs-AlchemyInsights-pr.id-ID | a378cb115ca9ee2ef20ad097a08471f925d505a9 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-05-19T19:06:50.000Z | 2021-03-06T00:35:09.000Z | AlchemyInsights/add-update-vat-tax-id-legacy-wd-mca-cl.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.id-ID | 95d1cef182a766160dba451d9c5027f04a6dbe06 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:33:41.000Z | 2022-02-09T07:00:30.000Z | AlchemyInsights/add-update-vat-tax-id-legacy-wd-mca-cl.md | isabella232/OfficeDocs-AlchemyInsights-pr.id-ID | a378cb115ca9ee2ef20ad097a08471f925d505a9 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:33:21.000Z | 2021-10-09T10:42:11.000Z | ---
title: Menambahkan atau memperbarui ID PPN/Pajak - Langkah-langkah yang disarankan WD + MCA CL _ Warisan
ms.author: v-smandalika
author: v-smandalika
manager: dansimp
ms.date: 12/09/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "9004166"
- "7291"
ms.openlocfilehash: 172453664d2e950634c46977b8c543a054afbf6dfbed1356b7b13416ecf80b22
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: id-ID
ms.lasthandoff: 08/05/2021
ms.locfileid: "53953692"
---
# <a name="add-or-update-vattax-id---legacy-wd--mca-cl---recommended-steps"></a>Add or update VAT/Tax ID - Legacy WD + MCA CL - Recommended steps
Your Tax ID is used for tax-exempt calculations and appears on your invoices.
1. Go to the [Cost Management + Billing](https://ms.portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/Overview) page.
2. Click Properties on the left side.
3. Click **Manage Tax IDs** from the **Tax ID** panel, then enter your new Tax ID.
4. Click **Update**.
If you don't see the **Tax ID** panel, it means that Tax IDs are not yet collected for your region, or that updating Tax IDs in the portal is not supported for your account.
**Recommended documents**
[Supported countries/regions and currencies](https://azure.microsoft.com/pricing/faq/)
| 37.684211 | 168 | 0.784218 | ind_Latn | 0.837363 |
1c55ef30275a5de12de6ac2b8ec57727cbe78efd | 40 | md | Markdown | README.md | aazhbd/TheRun | 539199ff4f50e9735a716abdbcca6c342c59980e | [
"MIT"
] | null | null | null | README.md | aazhbd/TheRun | 539199ff4f50e9735a716abdbcca6c342c59980e | [
"MIT"
] | null | null | null | README.md | aazhbd/TheRun | 539199ff4f50e9735a716abdbcca6c342c59980e | [
"MIT"
] | null | null | null | # TheRun
Just a multi-thread experiment
| 13.333333 | 30 | 0.8 | eng_Latn | 0.986068 |
1c570166cb0873a1f3387367d02bf0d9319259b3 | 1,150 | md | Markdown | README.md | Lmello1/bullbots | 87703fd3877a6ff7bdf18f9f20d5b1f49680de17 | [
"MIT"
] | 1 | 2019-12-06T06:35:22.000Z | 2019-12-06T06:35:22.000Z | README.md | Lmello1/bullbots | 87703fd3877a6ff7bdf18f9f20d5b1f49680de17 | [
"MIT"
] | null | null | null | README.md | Lmello1/bullbots | 87703fd3877a6ff7bdf18f9f20d5b1f49680de17 | [
"MIT"
] | null | null | null | # Bullbots FRC Tracker
This is our own version of The Blue Alliance for the Idaho regional. If you ask some of the team members, we will submit any appropriate data.
## Getting Started
You can navigate the website easily if you have a brain. I was told specifically to "let them figure it out on their own." Hopefully you can navigate using the many comment(s) that we left in the code.
### Prerequisites
idk what u need to install probably vscode, yalls are on your owns.
```
lol good luck
```
### Installing
A step by step series of examples that tell you to not ask me to make a readme
1 dont approach me
```
its simple
```
if you do, dont ask me to make a readme
```
please
```
just change whatever you want lol. i dont really care.
## Running the tests
probably just mess around in the code
## Built With
* php
* landons tears
* also his sleep schedule
## Contributing
just go to the add data and input the password that you set for it
## Authors
* devin
* landon
* aiden (me)
## License
we dont got a license lol
## Acknowledgments
* give
* us
* credit
* if
* you
* use
* this
* also the blue alliance
| 16.911765 | 202 | 0.722609 | eng_Latn | 0.999722 |
1c57adab0e90313d075009ce9ed706acef6d2b04 | 2,786 | md | Markdown | docs/relational-databases/system-tables/msmerge-replinfo-transact-sql.md | polocco/sql-docs.it-it | 054013d9cd6f2c81f53fc91a7eafc8043f12c380 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-tables/msmerge-replinfo-transact-sql.md | polocco/sql-docs.it-it | 054013d9cd6f2c81f53fc91a7eafc8043f12c380 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-tables/msmerge-replinfo-transact-sql.md | polocco/sql-docs.it-it | 054013d9cd6f2c81f53fc91a7eafc8043f12c380 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: MSmerge_replinfo (Transact-SQL)
title: MSmerge_replinfo (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 03/04/2017
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: replication
ms.topic: language-reference
f1_keywords:
- MSmerge_replinfo_TSQL
- MSmerge_replinfo
dev_langs:
- TSQL
helpviewer_keywords:
- MSmerge_replinfo system table
ms.assetid: b0924094-c0cc-49c1-869a-65be0d0465a0
author: markingmyname
ms.author: maghan
ms.openlocfilehash: ce4bee70ea5ef410512ce399e8d715fbc321e1a6
ms.sourcegitcommit: dd36d1cbe32cd5a65c6638e8f252b0bd8145e165
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/08/2020
ms.locfileid: "89545609"
---
# <a name="msmerge_replinfo-transact-sql"></a>MSmerge_replinfo (Transact-SQL)
[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]
The **MSmerge_replinfo** table contains one row for each subscription. This table tracks information about subscriptions. It is stored in the publication and subscription databases.

|Column name|Data type|Description|
|-----------------|---------------|-----------------|
|**repid**|**uniqueidentifier**|Unique ID of the replica.|
|**use_interactive_resolver**|**bit**|Specifies whether the interactive resolver is used during conflict reconciliation.<br /><br /> **0** = the interactive resolver is not used.<br /><br /> **1** = the interactive resolver is used.|
|**validation_level**|**int**|Type of validation to perform on the subscription. Possible values are:<br /><br /> **0** = no validation.<br /><br /> **1** = row-count validation only.<br /><br /> **2** = row-count and checksum validation.<br /><br /> **3** = row-count and binary checksum validation.|
|**resync_gen**|**bigint**|Generation number used to resynchronize the subscription. A value of **-1** indicates that the subscription is not marked for resynchronization.|
|**login_name**|**sysname**|Name of the user who created the subscription.|
|**hostname**|**sysname**|Value used by the parameterized row filter when generating the partition for the subscription.|
|**merge_jobid**|**binary(16)**|ID of the merge job for the subscription.|
|**sync_info**|**int**|Internal use only.|

## <a name="see-also"></a>See also
[Replication Tables (Transact-SQL)](../../relational-databases/system-tables/replication-tables-transact-sql.md)
[Replication Views (Transact-SQL)](../../relational-databases/system-views/replication-views-transact-sql.md)
| 56.857143 | 373 | 0.741565 | ita_Latn | 0.9292 |
1c57f2f187f8391693006ba328978c19707cda1d | 4,110 | md | Markdown | doc/jetty-config.md | nwolfe/trapperkeeper | e2fbe2822196a30bc5f6518abdbc9ab68118231e | [
"Apache-2.0"
] | null | null | null | doc/jetty-config.md | nwolfe/trapperkeeper | e2fbe2822196a30bc5f6518abdbc9ab68118231e | [
"Apache-2.0"
] | null | null | null | doc/jetty-config.md | nwolfe/trapperkeeper | e2fbe2822196a30bc5f6518abdbc9ab68118231e | [
"Apache-2.0"
] | null | null | null | ## Configuring The Jetty 7 Webserver
The `[jetty]` section in an `.ini` configuration file configures an embedded
Jetty HTTP server inside trapperkeeper.
### `host`
This sets the hostname to listen on for _unencrypted_ HTTP traffic. If not
supplied, we bind to `localhost`, which will reject connections from anywhere
but the server process itself. To listen on all available interfaces,
use `0.0.0.0`.
### `port`
This sets what port to use for _unencrypted_ HTTP traffic. If not supplied, we
won't listen for unencrypted traffic at all.
### `max-threads`
This sets the maximum number of threads assigned to responding to HTTP and HTTPS
requests, effectively changing how many concurrent requests can be made at one
time. Defaults to 50.
> **Note:** Due to how Jetty 7 behaves, this setting must be higher than the
number of CPU's on your system or it will stop processing any HTTP requests.
### `ssl-host`
This sets the hostname to listen on for _encrypted_ HTTPS traffic. If not
supplied, we bind to `localhost`. To listen on all available interfaces,
use `0.0.0.0`.
### `ssl-port`
This sets the port to use for _encrypted_ HTTPS traffic. If not supplied, we
won't listen for encrypted traffic at all.
### `ssl-cert`
This sets the path to the server certificate PEM file used by the web
service for HTTPS.
> **Note:** This setting overrides the alternate configuration settings
`keystore` and `key-password`.
### `ssl-key`
This sets the path to the private key PEM file that corresponds with the
`ssl-cert`, it used by the web service for HTTPS.
> **Note:** This setting overrides the alternate configuration settings
`keystore` and `key-password`.
### `ssl-ca-cert`
This sets the path to the CA certificate PEM file used for client
authentication. Authorized clients must be signed by the CA that that
corresponds to this certificate.
> **Note:** This setting overrides the alternate configuration settings
`truststore` and `trust-password`.
### `keystore`
This sets the path to a Java keystore file containing the key and certificate
to be used for HTTPS.
### `key-password`
This sets the passphrase to use for unlocking the keystore file.
### `truststore`
This describes the path to a Java keystore file containing the CA certificate(s)
for your infrastructure.
### `trust-password`
This sets the passphrase to use for unlocking the truststore file.
### `certificate-whitelist`
Optional. This describes the path to a file that contains a list of certificate
names, one per line. Incoming HTTPS requests will have their certificates
validated against this list of names and only those with an _exact_ matching
entry will be allowed through. (For a puppet master, this compares against the
value of the `certname` setting, rather than the `dns_alt_names` setting.)
If not supplied, trapperkeeper uses standard HTTPS without any additional
authorization. All HTTPS clients must still supply valid, verifiable SSL client
certificates.
### `cipher-suites`
Optional. A comma-separated list of cryptographic ciphers to allow for incoming
SSL connections. Valid names are listed in the
[official JDK cryptographic providers documentation](http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html#SupportedCipherSuites);
you'll need to use the all-caps cipher suite name.
If not supplied, trapperkeeper uses the default cipher suites for your local
system on JDK versions older than 1.7.0u6. On newer JDK versions, trapperkeeper
will use only non-DHE cipher suites.
### `ssl-protocols`
Optional. A comma-separated list of protocols to allow for incoming SSL
connections. Valid names are listed in the
[official JDK cryptographic protocol documentation](http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html#SunJSSEProvider);
you'll need to use the names with verbatim capitalization.
For example: `SSLv3, TLSv1, TLSv1.1, TLSv1.2`.
If not supplied, trapperkeeper uses the default SSL protocols for your local
system.
> **Note:** This setting is only effective when trapperkeeper is running with
Java version 1.7 or better.
| 35.128205 | 157 | 0.777129 | eng_Latn | 0.99509 |
1c580e4fb689ca8022acecbf02d1f540b3ef5946 | 3,434 | md | Markdown | Pathfinder-Core/README.md | rvighne/Pathfinder | dd349a525f147e620383bcb06332b562bc3866df | [
"MIT"
] | 248 | 2016-03-27T16:45:36.000Z | 2020-06-13T17:49:49.000Z | Pathfinder-Core/README.md | rvighne/Pathfinder | dd349a525f147e620383bcb06332b562bc3866df | [
"MIT"
] | 55 | 2016-04-01T12:44:21.000Z | 2019-03-08T14:06:15.000Z | Pathfinder-Core/README.md | rvighne/Pathfinder | dd349a525f147e620383bcb06332b562bc3866df | [
"MIT"
] | 103 | 2016-04-07T21:37:14.000Z | 2020-05-14T02:19:58.000Z | # Pathfinder C Core Library
This is the C code for the core of the 'Pathfinder' motion profiling library. This library can be used in any C application for quick and easy generation of
motion profiles and trajectories.
## Using the Library
Full examples are provided under `examples/`
### Includes
```c
#include <pathfinder.h>
```
### Creating some Waypoints
```c
int POINT_LENGTH = 3;
Waypoint points[POINT_LENGTH];
Waypoint p1 = { -4, -1, d2r(45) }; // Waypoint @ x=-4, y=-1, exit angle=45 degrees
Waypoint p2 = { -1, 2, 0 }; // Waypoint @ x=-1, y= 2, exit angle= 0 radians
Waypoint p3 = { 2, 4, 0 }; // Waypoint @ x= 2, y= 4, exit angle= 0 radians
points[0] = p1;
points[1] = p2;
points[2] = p3;
```
### Generating the Trajectory
```c
TrajectoryCandidate candidate;
// Prepare the Trajectory for Generation.
//
// Arguments:
// Fit Function: FIT_HERMITE_CUBIC or FIT_HERMITE_QUINTIC
// Sample Count: PATHFINDER_SAMPLES_HIGH (100 000)
// PATHFINDER_SAMPLES_LOW (10 000)
// PATHFINDER_SAMPLES_FAST (1 000)
// Time Step: 0.001 Seconds
// Max Velocity: 15 m/s
// Max Acceleration: 10 m/s/s
// Max Jerk: 60 m/s/s/s
pathfinder_prepare(points, POINT_LENGTH, FIT_HERMITE_CUBIC, PATHFINDER_SAMPLES_HIGH, 0.001, 15.0, 10.0, 60.0, &candidate);
int length = candidate.length;
// Array of Segments (the trajectory points) to store the trajectory in
Segment *trajectory = malloc(length * sizeof(Segment));
// Generate the trajectory
int result = pathfinder_generate(&candidate, trajectory);
if (result < 0) {
// An error occured
printf("Uh-Oh! Trajectory could not be generated!\n");
}
```
### Using the Segments
```c
int i;
for (i = 0; i < length; i++) {
Segment s = trajectory[i];
printf("Time Step: %f\n", s.dt);
printf("Coords: (%f, %f)\n", s.x, s.y);
printf("Position (Distance): %f\n", s.position);
printf("Velocity: %f\n", s.velocity);
printf("Acceleration: %f\n", s.acceleration);
printf("Jerk (Acceleration per Second): %f\n", s.jerk);
printf("Heading (radians): %f\n", s.heading);
}
```
Don't forget to free the `trajectory`!
## Modifying your Trajectory
### Tank Drive
```c
Segment leftTrajectory[length];
Segment rightTrajectory[length];
// The distance between the left and right sides of the wheelbase is 0.6m
double wheelbase_width = 0.6;
// Generate the Left and Right trajectories of the wheelbase using the
// originally generated trajectory
pathfinder_modify_tank(trajectory, length, leftTrajectory, rightTrajectory, wheelbase_width);
```
### Swerve Drive
```c
// Output Trajectories
Segment frontLeft[length];
Segment frontRight[length];
Segment backLeft[length];
Segment backRight[length];
// The distance between the left and right sides of the wheelbase is 0.6m
double wheelbase_width = 0.6;
// The distance between the front and back sides of the wheelbase is 0.5m
double wheelbase_depth = 0.5;
// The swerve mode to generate will be the 'default' mode, where the robot
// will constantly be facing forward and 'sliding' sideways to follow a
// curved path.
SWERVE_MODE mode = SWERVE_DEFAULT;
// Generate the trajectories of each of the 4 swerve wheels using the
// originally generated trajectory
pathfinder_modify_swerve(trajectory, length, frontLeft, frontRight,
backLeft, backRight, wheelbase_width, wheelbase_depth, mode);
``` | 30.936937 | 156 | 0.689866 | eng_Latn | 0.90073 |
1c5833e3e22cc479bbdaa4c7db4f52729aca263b | 61,863 | md | Markdown | README.md | jasnow/rdl | c42400e851448d20bc4fb38cdadf7ebe2416ab52 | [
"BSD-3-Clause"
] | 51 | 2019-06-12T04:06:02.000Z | 2022-03-31T23:34:22.000Z | README.md | jasnow/rdl | c42400e851448d20bc4fb38cdadf7ebe2416ab52 | [
"BSD-3-Clause"
] | 2 | 2019-10-30T22:08:20.000Z | 2022-02-10T15:27:49.000Z | README.md | jasnow/rdl | c42400e851448d20bc4fb38cdadf7ebe2416ab52 | [
"BSD-3-Clause"
] | 6 | 2019-09-08T15:30:46.000Z | 2022-03-30T03:15:30.000Z | [](https://badge.fury.io/rb/rdl) [](https://travis-ci.org/tupl-tufts/rdl)
# Table of Contents
* [Introduction](#introduction)
* [Using RDL](#using-rdl)
* [Supported versions of Ruby](#supported-versions-of-ruby)
* [Installing RDL](#installing-rdl)
* [Loading RDL](#loading-rdl)
* [Disabling RDL](#disabling-rdl)
* [Rails](#rails)
* [Preconditions and Postconditions](#preconditions-and-postconditions)
* [Type Annotations](#type-annotations)
* [RDL Types](#rdl-types)
* [Nominal Types](#nominal-types)
* [Nil Type](#nil-type)
* [Top Type (%any)](#top-type-any)
* [Dynamic Type (%dyn)](#dynamic-type-dyn)
* [Union Types](#union-types)
* [Intersection Types](#intersection-types)
* [Optional Argument Types](#optional-argument-types)
* [Variable Length Argument Types](#variable-length-argument-types)
* [Named Argument Types](#named-argument-types)
* [Dependent Types](#dependent-types)
* [Higher-order Types](#higher-order-types)
* [Class/Singleton Method Types](#classsingleton-method-types)
* [Structural Types](#structural-types)
* [Singleton Types](#singleton-types)
* [Self Type](#self-type)
* [Type Aliases](#type-aliases)
* [Generic Class Types](#generic-class-types)
* [Tuple Types](#tuple-types)
* [Finite Hash Types](#finite-hash-types)
* [Type Casts](#type-casts)
* [Bottom Type (%bot)](#bottom-type-bot)
* [Non-null Type](#non-null-type)
* [Constructor Type](#constructor-type)
* [Static Type Checking](#static-type-checking)
* [Types for Variables](#types-for-variables)
* [Tuples, Finite Hashes, and Subtyping](#tuples-finite-hashes-and-subtyping)
* [Other Features and Limitations](#other-features-and-limitations)
* [Assumptions](#assumptions)
* [Type-Level Computations](#type-level-computations)
* [RDL Method Reference](#rdl-method-reference)
* [Performance](#performance)
* [Queries](#queries)
* [Configuration](#configuration)
* [Bibliography](#bibliography)
* [Copyright](#copyright)
* [Contributors](#contributors)
* [TODO List](#todo-list)
# Introduction
RDL is a lightweight system for adding types, type checking, and contracts to Ruby. In RDL, *types* can be used to decorate methods:
```ruby
require 'rdl'
extend RDL::Annotate # add annotation methods to current scope
type '(Integer, Integer) -> String'
def m(x,y) ... end
```
This indicates that `m` returns a `String` if given two `Integer` arguments. When written as above, RDL enforces this type as a *contract* checked at run time: When `m` is called, RDL checks that `m` is given exactly two arguments and both are `Integers`, and that `m` returns an instance of `String`.
RDL can also *statically type check* method bodies against their signatures. For example:
```ruby
file.rb:
require 'rdl'
extend RDL::Annotate
type '(Integer) -> Integer', typecheck: :label
def id(x)
"forty-two"
end
RDL.do_typecheck :label
```
```
$ ruby file.rb
.../lib/rdl/typecheck.rb:158:in `error': (RDL::Typecheck::StaticTypeError)
.../file.rb:6:3: error: got type `String' where return type `Integer' expected
.../file.rb:6: "forty-two"
.../file.rb:6: ^~~~~~~~~~~
```
Passing `typecheck: sym` for some other symbol statically type checks the method body when `RDL.do_typecheck sym` is called. Passing `typecheck: :call` to `type` statically type checks the method body whenever it is called.
Note that RDL only type checks the bodies of methods with type signatures. Methods without type signatues, or code that is not in method bodies, is not checked.
The `type` method can also be called with the class and method to be annotated, and it can also be invoked as `RDL.type` in case `extend RDL::Annotate` would cause namespace issues:
```
type :A, :id, '(Integer) -> Integer', typecheck: :label # Add a type annotation for A#id.
RDL.type :A, :id, '(Integer) -> Integer', typecheck: :label # Note class and method name required when calling like this
```
RDL tries to follow the philosophy that you get what you pay for. Methods with type annotations can be checked dynamically or statically; methods without type annotations are unaffected by RDL. See the [performance](#performance) discussion for more detail.
RDL supports many more complex type annotations; see below for a complete discussion and examples.
RDL types are stored in memory at run time, so it's also possible for programs to query them. RDL includes lots of contracts and types for the core and standard libraries. Since those methods are generally trustworthy, RDL doesn't actually enforce the contracts (since that would add overhead), but they are available to search, query, and use during type checking. RDL includes a small script `rdl_query` to look up type information from the command line. Note you might need to put the argument in quotes depending on your shell.
```shell
$ rdl_query String#include? # print type for instance method of another class
$ rdl_query Pathname.glob # print type for singleton method of a class
$ rdl_query Array # print types for all methods of a class
$ rdl_query "(Integer) -> Integer" # print all methods that take an Integer and return an Integer
$ rdl_query "(.) -> Integer" # print all methods that take a single arg of any type
$ rdl_query "(..., Integer, ...) -> ." # print all methods that take an Integer as some argument
```
See below for more details of the query format. The `RDL.query` method performs the same function as long as the gem is loaded, so you can use this in `irb`.
```ruby
$ irb
> require 'rdl'
=> true
> require 'types/core'
=> true
> RDL.query '...' # as above
```
RDL also supports more general contracts, though these can only be enforced at run time and are not statically checked. These more general contracts take the form of *preconditions*, describing what a method assumes about its inputs, and *postconditions*, describing what a method guarantees about its outputs. For example:
```ruby
require 'rdl'
extend RDL::Annotate
pre { |x| x > 0 }
post { |r,x| r > 0 }
def sqrt(x)
# return the square root of x
end
```
Given this program, RDL intercepts the call to `sqrt` and passes its argument to the `pre` block, which checks that the argument is positive. Then when `sqrt` returns, RDL passes the return value (as `r`) and the initial argument (as `x`) to the `post` block, which checks that the return is positive. (Let's ignore complex numbers to keep things simple...) RDL contracts are enforced at method entry and exit. For example, if we call `sqrt(49)`, RDL first checks that `49 > 0`; then it passes `49` to `sqrt`, which (presumably) returns `7`; then RDL checks that `7 > 0`; and finally it returns `7`. The `pre` and `post` methods can also be called as `RDL.pre` and `RDL.post`, as long as they are also given class and method arguments, similarly to `type`. Note that pre- and postconditions can't be searched for using `RDL.query`.
Note: RDL is a research project from the [Tufts University Computer Science Department](https://www.cs.tufts.edu) and the [University of Maryland, College Park Computer Science Department](https://www.cs.umd.edu). If you are looking for an industrial strength Ruby type system, check out Stripe’s [Sorbet](https://sorbet.org) system.
# Using RDL
## Supported versions of Ruby
RDL currently supports Ruby 2.x except 2.1.1-2.1.6. RDL may or may not work with other versions.
## Installing RDL
`gem install rdl` should do it.
## Loading RDL
Use `require 'rdl'` to load the RDL library. If you want to access the annotation language, add `extend RDL::Annotate` as appropriate. If you want to use the core and standard library type signatures that come with RDL, follow it with `require 'types/core'`. Currently RDL has types for the following versions of Ruby:
* 2.x
## Disabling RDL
For performance reasons you probably don't want to use RDL in production code. To disable RDL, replace `require 'rdl'` with `require 'rdl_disable'`. This will cause all invocations of RDL methods to either be no-ops or to do the minimum necessary to preserve the program's semantics (e.g., if the RDL method returns `self`, then so does the `rdl_disable` method.)
## Rails
To add types to a Ruby on Rails application, add
```ruby
gem 'rdl'
```
to your `Gemfile` to get the latest stable version, or
```ruby
gem 'rdl', git: 'https://github.com/plum-umd/rdl'
```
to get the head version from github.
In development and test mode, you will now have access to `rdl`, `types/core` from RDL, and extra type annotations for Rails and some related gems. In production mode, RDL will be disabled (by loading `rdl_disable`).
**Warning:** Rails support is currently extremely limited, not well tested, and generally needs more work...please send bug reports/pull requests/etc and we will try to fix things.
## Preconditions and Postconditions
The `pre` method takes a block and adds that block as a precondition to a method. When it's time to check the precondition, the block will be called with the method's arguments. If the block returns `false` or `nil` the precondition is considered to have failed, and RDL will raise a `ContractError`. Otherwise, if the block returns a true value, then the method executes as usual. The block can also raise its own error if the contract fails.
The `pre` method can be called in several ways:
* `pre { block }` - Apply precondition to the next method to be defined
* `pre(mth) { block }` - Apply precondition to method `mth` of the current class, where `mth` is a `Symbol` or `String`
* `pre(cls, mth) { block }` - Apply precondition to method `mth` of class `cls`, where `cls` is a `Class`, `Symbol`, or `String`, and `mth` is a `Symbol` or `String`
The `post` method is similar, except its block is called with the return value of the method (in the first position) followed by all the method's arguments. For example, you probably noticed that for `sqrt` above the `post` block took the return value `r` and the method argument `x`.
(Note: RDL does *not* clone or dup the arguments at method entry. So, for example, if the method body has mutated fields stored inside those argument objects, the `post` block or any other check evaluated afterwards will see the mutated field values rather than the original values.)
The `post` method can be called in the same ways as `pre`.
Methods can have no contracts, `pre` by itself, `post` by itself, both, or multiple instances of either. If there are multiple contracts, RDL checks that *all* contracts are satisfied, in the order that the contracts were bound to the method.
Both `pre` and `post` accept an optional named argument `version` that takes a rubygems version requirement string or array of version requirement students. If the current Ruby version does not match the requirement, then the call to `pre` and `post` is ignored.
## Type Annotations
The `type` method adds a type contract to a method. It supports the same calling patterns as `pre` and `post`, except rather than a block, it takes a string argument describing the type. More specifically, `type` can be called as:
* `type 'typ'`
* `type m, 'typ'`
* `type cls, mth, 'typ'`
A type string generally has the form `(typ1, ..., typn) -> typ` indicating a method that takes `n` arguments of types `typ1` through `typn` and returns type `typ`. Below, to illustrate the various types RDL supports, we'll use examples from the core library type annotations.
The `type` method can be called with `wrap: false` so the type information is stored but the type is not enforced. For example, due to the way RDL is implemented, the method `String#=~` can't have a type or contract on it because then it won't set the correct `$1` etc variables:
```ruby
type :=~, '(Object) -> Integer or nil', wrap: false # Wrapping this messes up $1 etc
```
For consistency, `pre` and `post` can also be called with `wrap: false`, but this is generally not as useful.
The `type` method also accepts an optional `version` named argument.
# RDL Types
## Nominal Types
A nominal type is simply a class name, and it matches any object of that class or any subclass.
```ruby
type String, :insert, '(Integer, String) -> String'
```
## Nil Type
The nominal type `NilClass` can also be written as `nil`. The only object of this type is `nil`:
```ruby
type IO, :close, '() -> nil' # IO#close always returns nil
```
Currently, `nil` is treated as if it were an instance of any class.
```ruby
x = "foo"
x.insert(0, nil) # RDL does not report a type error
```
We chose this design based on prior experience with static type systems for Ruby, where not allowing this leads to a lot of false positive errors from the type system. However, we may change this in the future.
## Top Type (%any)
RDL includes a special "top" type `%any` that matches any object:
```ruby
type Object, :=~, '(%any) -> nil'
```
We call this the "top" type because it is the top of the subclassing hierarchy RDL uses. Note that `%any` is more general than `Object`, because not all classes inherit from `Object`, e.g., `BasicObject` does not.
## Dynamic Type (%dyn)
RDL has the dynamic type `%dyn` that is the subtype and supertype of any type.
```ruby
type Example, :method, '(%dyn) -> %dyn'
```
This is useful for typed portions of a Ruby program that interact with untyped portions. RDL allows setting a [configuration option](#configuration) `assume_dyn_type` to `true` so that any method that is missing a type will be assumed to take and return the dynamic type. By default, this option is set to `false`.
## Union Types
Many Ruby methods can take several different types of arguments or return different types of results. The union operator `or` can be used to indicate a position where multiple types are possible.
```ruby
type IO, :putc, '(Numeric or String) -> %any'
type String, :getbyte, '(Integer) -> Integer or nil'
```
Note that for `getbyte`, we could leave off the `nil`, but we include it to match the current documentation of this method.
## Intersection Types
Sometimes Ruby methods have several different type signatures. (In Java these would be called *overloaded* methods.) In RDL, such methods are assigned a set of type signatures:
```ruby
type String, :[], '(Integer) -> String or nil'
type String, :[], '(Integer, Integer) -> String or nil'
type String, :[], '(Range or Regexp) -> String or nil'
type String, :[], '(Regexp, Integer) -> String or nil'
type String, :[], '(Regexp, String) -> String or nil'
type String, :[], '(String) -> String or nil'
```
We say the method's type is the *intersection* of the types above.
When this method is called at run time, RDL checks that at least one type signature matches the call:
```ruby
"foo"[0] # matches first type
"foo"[0,2] # matches second type
"foo"[0..2] # matches third type
"foo"[0, "bar"] # error, doesn't match any type
# etc
```
Notice that union types in arguments could also be written as intersection types of methods, e.g., instead of the third type of `[]` above we could have equivalently written
```ruby
type String, :[], '(Range) -> String or nil'
type String, :[], '(Regexp) -> String or nil'
```
## Optional Argument Types
Optional arguments are denoted in RDL by putting `?` in front of the argument's type. For example:
```ruby
type String, :chomp, '(?String) -> String'
```
This is actually just a shorthand for an equivalent intersection type:
```ruby
type String, :chomp, '() -> String'
type String, :chomp, '(String) -> String'
```
but it helps make types more readable.
Like Ruby, RDL allows optional arguments to appear anywhere in a method's type signature.
## Variable Length Argument Types
In RDL, `*` is used to decorate an argument that may appear zero or more times. Currently in RDL this annotation may only appear on the rightmost argument. For example, `String#delete` takes one or more `String` arguments:
```ruby
type String, :delete, '(String, *String) -> String'
```
## Named Argument Types
RDL allows arguments to be named, for documentation purposes. Names are given after the argument's type, and they do not affect type contract checking in any way. For example:
```ruby
type Integer, :to_s, '(?Integer base) -> String'
```
Here we've named the first argument of `to_s` as `base` to give some extra hint as to its meaning.
## Dependent Types
RDL allows for refinement predicates to be attached to named arguments. The predicates on the arguments are checked when the method is called, and the predicates on the return is checked when then method returns. For instance:
```ruby
type '(Float x {{ x>=0 }}) -> Float y {{ y>=0 }}'
def sqrt(x)
# return the square root of x
end
```
Here, RDL will check that the `sqrt` method is called on an argument of type `Float` which is greater than or equal to 0, and it will check the same of the return value of the method. Note that, in effect, dependent type contracts can be used in place of pre and post contracts.
Dependencies can also exist across a method's arguments and return value:
```ruby
type '(Integer x {{ x>y }}, Integer y) -> Float z {{ z==(x+y) }}'
def m(x,y) ... end
```
Any arbitrary code can be placed between the double braces of a type refinement, and RDL will dynamically check that this predicate evaluates to true, or raise a type error if it evaluates to false.
Most pre- and postconditions can be translated into a dependent type by attaching the precondition to one of the arguments and the postcondition to the return. Note, however, that dependently typed positions must always have a name, even if the associated refinement doesn't refer to that name:
```ruby
type '(Integer x {{ $y > 0 }}) -> nil' # argument name must be present even though the refinement doesn't use it.
```
## Higher-order Types
RDL supports types for arguments or return values which are themselves `Proc` objects. Simply enclose the corresponding argument's type with braces to denote that it is a `Proc`. For example:
```ruby
type '(Integer, {(Integer) -> Integer}) -> Integer'
def m(x, y) ... end
```
The type annotation above states that the method m takes two arguments: one of type `Integer`, and another which is a `Proc` which itself takes an `Integer` and returns an `Integer`. A `Proc` may be the return value of a method as well:
```ruby
type '(Integer) -> {(Float) -> Float}'
def m(x) ... end
```
These higher-order types are checked by wrapping the corresponding `Proc` argument/return in a new `Proc` which checks that the type contract holds.
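The wrapping strategy can be sketched in plain Ruby (`wrap_proc` is a hypothetical helper; RDL's real wrapper also handles arity and richer types than simple class membership):

```ruby
# Wrap a Proc so that its argument and return are class-checked on every call.
def wrap_proc(blk, arg_class, ret_class)
  proc do |x|
    raise TypeError, 'argument violates contract' unless x.is_a?(arg_class)
    result = blk.call(x)
    raise TypeError, 'return violates contract' unless result.is_a?(ret_class)
    result
  end
end

double = wrap_proc(proc { |n| n * 2 }, Integer, Integer)
double.call(21) # => 42
```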
A type contract can be provided for a method block as well. The block's type should be included after the method argument types:
```ruby
type '(Integer, Float) {(Integer, String) -> String } -> Float'
def m(x,y,&blk) ... end
```
Note that this notation will work whether or not a method block is explicitly referenced in the parameters, i.e., whether or not `&blk` is included above. Finally, dependent types work across higher order contracts:
```ruby
type '(Integer x, Float y) -> {(Integer z {{ z>y }}) -> Integer}'
def m(x,y,&blk) ... end
```
The type contract above states that method `m` returns a `Proc` which takes an `Integer z` which must be greater than the argument `Float y`. Whenever this `Proc` is called, it will be checked that this contract holds.
## Class/Singleton Method Types
RDL method signatures can be used both for instance methods and for class methods (often called *singleton methods* in Ruby). To indicate a type signature applies to a singleton method, prefix the method name with `self.`:
```ruby
type File, 'self.dirname', '(String file) -> String dir'
```
(Notice also the use of a named return type, which we haven't seen before.)
## Structural Types
Some Ruby methods are intended to take any object that has certain methods. RDL uses *structural types* to denote such cases:
```ruby
type IO, :puts, '(*[to_s: () -> String]) -> nil'
```
Here `IO#puts` can take zero or more arguments, all of which must have a `to_s` method that takes no arguments and returns a `String`.
The actual checking that RDL does here varies depending on what type information is available. Suppose we call `puts(o)`. If `o` is an instance of a class that has a type signature `t` for `to_s`, then RDL will check that `t` is compatible with `() -> String`. On the other hand, if `o` is an instance of a class with no type signature for `to_s`, RDL only checks that `o` has a `to_s` method, but it doesn't check its argument or return types.
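The duck typing that structural types describe is just ordinary Ruby: any object with a suitable `to_s` can be passed to `puts` (`Point` here is a made-up class for illustration):

```ruby
class Point
  def initialize(x, y)
    @x = x
    @y = y
  end

  # Having this method is all the structural type [to_s: () -> String] requires.
  def to_s
    "(#{@x}, #{@y})"
  end
end

puts Point.new(1, 2) # prints (1, 2)
```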
## Singleton Types
Not to be confused with types for singleton methods, RDL includes *singleton types* that denote positions that always have one particular value; this typically happens only in return positions. For example, `Dir#mkdir` always returns the value 0:
```ruby
type Dir, 'self.mkdir', '(String, ?Integer) -> 0'
```
In RDL, any integer or floating point number denotes a singleton type. Arbitrary values can be turned into singleton types by wrapping them in `${.}`. For example, `Float#angle` always returns 0 or pi.
```ruby
type Float, :angle, '() -> 0 or ${Math::PI}'
```
RDL checks if a value matches a singleton type using `equal?`. As a consequence, singleton string types aren't currently possible.
Note that the type `nil` is actually implemented as a singleton type with the special behavior that `nil` is treated as a member of any class. However, while `nil` can in general be used anywhere any type is expected, it *cannot* be used where a different singleton type is expected. For example, `nil` could not be a return value of `Dir#mkdir` or `Float#angle`.
## Self Type
Consider a method that returns `self`:
```ruby
class A
def id
self
end
end
```
If that method might be inherited, we can't just give it a nominal type, because it will return a different object type in a subclass:
```ruby
class B < A
end
type A, :id, '() -> A'
A.new.id # okay, returns an A
B.new.id # type error, returns a B
```
To solve this problem, RDL includes a special type `self` for this situation:
```ruby
type A, :id, '() -> self'
A.new.id # okay, returns self
B.new.id # also okay, returns self
```
Thus, the type `self` means "any object of self's class."
## Type Aliases
RDL allows types to be aliased to make them faster to write down and more readable. All type aliases begin with `%`. RDL has one built-in alias, `%bool`, which is shorthand for `TrueClass or FalseClass`:
```ruby
type String, :==, '(%any) -> %bool'
```
Note it is not a bug that `==` is typed to allow any object. Though you would think that developers would generally only compare objects of the same class (since otherwise `==` almost always returns false), in practice a lot of code does compare objects of different classes.
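For instance, cross-class comparisons are legal Ruby and simply return `false` (with numeric classes being the notable case where `==` succeeds across classes):

```ruby
1 == 'one'  # => false, not an error
'one' == 1  # => false
1 == 1.0    # => true: the numeric classes compare by value across classes
```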
Method `type_alias(name, typ)` (part of `RDL::Annotate`) can be used to create a user-defined type alias, where `name` must begin with `%`:
```ruby
type_alias '%real', 'Integer or Float or Rational'
type_alias '%string', '[to_str: () -> String]'
type_alias '%path', '%string or Pathname'
```
Type aliases have to be created before they are used (so above, `%path` must be defined after `%string`).
## Generic Class Types
RDL supports *parametric polymorphism* for classes, a.k.a. *generics*. The `type_params` method (part of `RDL::Annotate`) names the type parameters of the class, and those parameters can then be used inside type signatures:
```ruby
class Array
type_params [:t], :all?
type :shift, '() -> t'
end
```
Here the first argument to `type_params` is a list of symbols or strings that name the type parameters. In this case there is one parameter, `t`, and it is the return type of `shift`. The `type_params` method accepts an optional first argument, the class whose type parameters to set (this defaults to `self`).
Generic types are applied to type arguments using `<...>` notation, e.g., `Array<Integer>` is an `Array` class where `t` is replaced by `Integer`. Thus, for example, if `o` is an `Array<Integer>`, then `o.shift` returns `Integer`. As another example, here is the type for the `[]` method of `Array`:
```ruby
type Array, :[], '(Range) -> Array<t>'
type Array, :[], '(Integer or Float) -> t'
type Array, :[], '(Integer, Integer) -> Array<t>'
```
Thus if `o` is again an `Array<Integer>`, then `o[0]` returns an `Integer` and `o[0..5]` returns an `Array<Integer>`.
In general it's impossible to assign generic types to objects without knowing the programmer's intention. For example, consider code as simple as `x = [1,2]`. Is it the programmer's intention that `x` is an `Array<Integer>`? `Array<Numeric>`? `Array<Object>`?
Thus, even though `Array` is declared to take type parameters, by default RDL treats array objects as having the *raw* type `Array`, which means the type parameters are ignored whenever they appear in types. For our example, this means a call such as `x.push("three")` would not be reported as an error (the type signature of `Array#push` is `'(?t) -> Array<t>'`).
To fully enforce generic types, RDL requires that the developer `RDL.instantiate!` an object with the desired type parameters:
```ruby
x = [1,2]
RDL.instantiate!(x, 'Integer')
x.push("three") # type error
```
Note that the instantiated type is associated with the object, not the variable:
```ruby
y = x
y.push("three") # also a type error
```
Calls to `RDL.instantiate!` may also come with a `check` flag. By default, `check` is set to false. When `check` is set to true, we ensure that the receiving object's contents are consistent with the given type *at the time of the call to* `RDL.instantiate!`. Currently this is enforced using the second parameter to `type_params`, which must name a method that behaves like `Array#all?`, i.e., it iterates through the contents, checking that a block argument is satisfied. As seen above, for `Array` we call `type_params [:t], :all?`. Then at the call `RDL.instantiate!(x, 'Integer', check: true)`, RDL will call `Array#all?` to iterate through the contents of `x` to check they have type `Integer`. A simple call to `RDL.instantiate!(x, 'Integer')`, on the other hand, will not check the types of the elements of `x`. The `check` flag thus leaves to the programmer this choice between dynamic type safety and performance.
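A rough picture of what the `check` flag does, assuming the iterator is `Array#all?` (`contents_match?` is a hypothetical helper, not RDL's actual code):

```ruby
# Iterate through the contents, checking each element against the class.
def contents_match?(arr, klass)
  arr.all? { |e| e.is_a?(klass) }
end

contents_match?([1, 2], Integer)   # => true:  instantiation would succeed
contents_match?([1, 'x'], Integer) # => false: instantiation would fail
```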
RDL also includes a `RDL.deinstantiate!` method to remove the type instantiation from an object:
```ruby
RDL.deinstantiate!(x)
x.push("three") # no longer a type error
```
Finally, `type_params` can optionally take a third argument that is an array of *variances*, which are either `:+` for covariance, `:-` for contravariance, or `:~` for invariance. If variances aren't listed, type parameters are assumed to be invariant, which is a safe default.
Variances are only used when RDL checks that one type is a subtype of another. This only happens in limited circumstances, e.g., arrays of arrays where all levels have instantiated types. So generally you don't need to worry much about the variance.
The rules for variances are standard. Let's assume `A` is a subclass of `B`. Also assume there is a class `C` that has one type parameter. Then:
* `C<A>` is a subtype of `C<A>` always
* `C<A>` is a subtype of `C<B>` if `C`'s type parameter is covariant
* `C<B>` is a subtype of `C<A>` if `C`'s type parameter is contravariant
## Tuple Types
A type such as `Array<Integer>` is useful for homogeneous arrays, where all elements have the same type. But Ruby programs often use heterogeneous arrays, e.g., `[1, "two"]`. The best generic type we can give this is `Array<Integer or String>`, but that's imprecise.
RDL includes special *tuple types* to handle this situation. Tuple types are written `[t1, ..., tn]`, denoting an `Array` of `n` elements of types `t1` through `tn`, in that order. For example, `[1, "two"]` has type `[Integer, String]`. As another example, here is the type of `Process#getrlimit`, which returns a two-element array of `Integers`:
```ruby
type Process, 'self.getrlimit', '(Symbol or String or Integer resource) -> [Integer, Integer] cur_max_limit'
```
## Finite Hash Types
Similarly to tuple types, RDL also supports *finite hash types* for heterogeneous hashes. Finite hash types are written `{k1 => v1, ..., kn => vn}` to indicate a `Hash` with `n` mappings, where each key type `ki` maps to value type `vi`. The `ki` may be strings, integers, floats, or constants denoted with `${.}`. If a key is a symbol, then the mapping should be written `ki: vi`. In the latter case, the `{}`'s can be left off:
```ruby
type MyClass, :foo, '(a: Integer, b: String) { () -> %any } -> %any'
```
Here `foo` takes a hash where key `:a` is mapped to an `Integer` and key `:b` is mapped to a `String`. Similarly, `{'a'=>Integer, 2=>String}` types a hash where keys `'a'` and `2` are mapped to `Integer` and `String`, respectively. Both syntaxes can be used to define hash types.
Value types can also be declared as optional, indicating that the key/value pair is an optional part of the hash:
```ruby
type MyClass, :foo, '(a: Integer, b: ?String) { () -> %any } -> %any'
```
With this type, `foo` takes a hash where key `:a` is mapped to an `Integer`, and furthermore the hash may or may not include a key `:b` that is mapped to a `String`.
RDL also allows a "rest" type in finite hashes (of course, they're not so finite if they use it!):
```ruby
type MyClass, :foo, '(a: Integer, b: String, **Float) -> %any'
```
In this method, `a` is an `Integer`, `b` is a `String`, and any number (zero or more) remaining keyword arguments can be passed where the values have type `Float`, e.g., a call `foo(a: 3, b: 'b', pi: 3.14)` is allowed.
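The plain-Ruby calling convention this type describes looks like the following (`foo` is the hypothetical method from the example; `**rest` collects the remaining keyword arguments):

```ruby
def foo(a:, b:, **rest)
  # rest is a Hash of the leftover keyword arguments
  [a, b, rest]
end

foo(a: 3, b: 'b', pi: 3.14) # => [3, "b", { pi: 3.14 }]
```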
## Type Casts
Sometimes RDL does not have precise information about an object's type (this is most useful during static type checking). For these cases, RDL supports type casts of the form `RDL.type_cast(o, t)`. This call returns a new object that delegates all methods to `o` but that will be treated by RDL as if it had type `t`. If `force: true` is passed to `RDL.type_cast`, RDL will perform the cast without checking whether `o` is actually a member of the given type. For example, `x = RDL.type_cast('a', 'nil', force: true)` will make RDL treat `x` as if it had type `nil`, even though it's a `String`.
Similarly, if an object's type is parameterized (see [Generic Types](#generic-class-types) above), RDL might not statically know the correct instantiation of the object's type parameters. In this case, we can use `RDL.instantiate!` to provide the proper type parameter bindings. For instance, if `a` has type `Array`, but we want RDL to know that `a`'s elements are all `Integer`s or `String`s, we can call `RDL.instantiate!(a, 'Integer or String')`.
## Bottom Type (%bot)
RDL also includes a special *bottom* type `%bot` that is a subtype of any type, including any class and any singleton types. In static type checking, the type `%bot` is given to so-called *void value expressions*, which are `return`, `break`, `next`, `redo`, and `retry` (notice that these expressions perform jumps rather than producing a value, hence they can be treated as having an arbitrary type). No Ruby objects have type `%bot`.
## Non-null Type
Types can be prefixed with `!` to indicate the associated value is not `nil`. For example:
`type :x=, '(!Integer) -> !Integer' # x's argument must not be nil`
**Warning:** This is simply *documentation* of non-nullness, and **is not checked** by the static type checker. The contract checker might or might not enforce non-nullness. (For those who are curious: RDL has this annotation because it seems useful for descriptive purposes. However, it's quite challenging to build a practical analysis that enforces non-nilness without reporting too many false positives.)
## Constructor Type
Type signatures can be added to constructors by giving a type signature for `initialize` (not for `new` or `self.new`). The return type for `initialize` must always be `self` or a [generic type](#generic-class-types) where the base is `self`:
```ruby
type :initialize, '(String, Integer) -> self'
```
# Static Type Checking
As mentioned in the introduction, calling `type` with `typecheck: sym` statically type checks the body of the annotated method against the given signature when `RDL.do_typecheck sym` is called. Note that at this call, all methods whose `type` call is labeled with `sym` will be statically checked.
Additionally, `type` can be called with `typecheck: :call`, which will delay checking the method's type until the method is called. Currently these checks are not cached, so expect a big performance hit for using this feature. Finally, `type` can be called with `typecheck: :now` to check the method immediately after it is defined. This is probably only useful for demos.
To perform type checking, RDL needs source code, which it gets by parsing the file containing the to-be-typechecked method. Hence, static type checking does not work in `irb` since RDL has no way of getting the source. Typechecking does work in `pry` (this feature has only limited testing) as long as typechecking is delayed until after the method is defined:
```ruby
[2] pry(main)> require 'rdl'
[3] pry(main)> require 'types/core'
[4] pry(main)> type '() -> Integer', typecheck: :later # note: typecheck: :now doesn't work in pry
[5] pry(main)> def f; 'haha'; end
[6] pry(main)> RDL.do_typecheck :later
RDL::Typecheck::StaticTypeError:
(string):2:3: error: got type `String' where return type `Integer' expected
(string):2: 'haha'
(string):2: ^~~~~~
from .../typecheck.rb:158:in `error'
```
RDL currently uses the [parser Gem](https://github.com/whitequark/parser) to parse Ruby source code. (And RDL uses the parser gem's amazing diagnostic output facility to print type error messages.)
Next we discuss some special features of RDL's type system and some of its limitations.
## Types for Variables
In a standard type system, local variables have one type throughout a method or function body. For example, in C and Java, declaring `int x` means `x` can only be used as an integer. However, in Ruby, variables need not be declared before they are used. Thus, by default, RDL treats local variables *flow-sensitively*, meaning at each assignment to a local variable, the variable's type is replaced by the type of the right hand side. For example:
```ruby
x = 3 # Here `x` is a `Integer`
x = "three" # Now `x` is a `String`
```
(Note this is a slight fib, since after the first line, `x` will actually have the singleton type `3`. But we'll ignore this just to keep the discussion a bit simpler, especially since `3` is a subtype of `Integer`.)
After conditionals, variables have the union of the types they have along both branches:
```ruby
if (some condition) then x = 3 else x = "three" end
# x has type `Integer or String`
```
RDL also provides a method `RDL.var_type` that can be used to force a local variable to have a single type through a method body, i.e., to treat it *flow-insensitively* like a standard type system:
```ruby
RDL.var_type :x, 'Integer'
x = 3 # okay
x = "three" # type error
```
The first argument to `RDL.var_type` is a symbol with the local variable name, and the second argument is a string containing the variable's type. Note that `RDL.var_type` is most useful at the beginning of a method or code block. Using it elsewhere may result in surprising error messages, since RDL requires variables with fixed types to have the same type along all paths. Method parameters are treated as if `RDL.var_type` was called on them at the beginning of the method, fixing them to their declared type. This design choice may be revisited in the future.
There is one subtlety for local variables and code blocks. Consider the following code:
```ruby
x = 1
m() { x = 'bar' }
# what is x's type here?
```
If `m` invokes the code block, `x` will be a `String` after the call. Otherwise `x` will be `1`. Since RDL can't tell whether the code block is ever called, it assigns `x` type `1 or String`. It's actually quite tricky to do very precise reasoning about code blocks. For example, `m` could (pathologically) store its block in a global variable and then only call it the second time `m` is invoked. To keep its reasoning simple, RDL treats any local variables captured (i.e., imported from an outer scope) by a code block flow-insensitively for the lifetime of the method. The type of any such local variable is the union of all types that are ever assigned to it.
RDL always treats instance, class, and global variables flow-insensitively, hence their types must be defined with `var_type`. In this case, `var_type` can optionally be accessed without the `RDL` prefix by using the annotation syntax:
```ruby
class A
extend RDL::Annotate
var_type :@f, 'Integer'
def m
@f = 3 # type safe
@f = "three" # type error, incompatible type in assignment
@g = 42 # type error, no var_type for @g
end
end
```
The `var_type` method may also be called as `var_type klass, :name, typ` to assign a type to an instance or class variable of class `klass`.
As a short-hand, RDL defines methods `attr_accessor_type`, `attr_reader_type`, and `attr_writer_type` (also part of `RDL::Annotate`) to behave like their corresponding non-`_type` analogs but assign types to the attributes. For example, `attr_accessor_type :f, 'Integer', :g, 'String'` is equivalent to:
```ruby
var_type :@f, 'Integer'
var_type :@g, 'String'
type :f, '() -> Integer'
type :f=, '(Integer) -> Integer'
type :g, '() -> String'
type :g=, '(String) -> String'
```
## Tuples, Finite Hashes, and Subtyping
When RDL encounters a literal array in the program, it assigns it a tuple type, which allows, among other things, precise handling of multiple assignment. For example:
```ruby
x = [1, 'foo'] # x has type [1, String]
a, b = x # a has type 1, b has type String
```
RDL also allows a tuple `[t1, ..., tn]` to be used where `Array<t1 or ... or tn>` is expected. This means both when a tuple is passed to an `Array` position, and when any method is invoked on the tuple (even if RDL could safely apply that method to the tuple; this may change in the future):
```ruby
var_type :@f, 'Array<Integer or String>'
@f = [1, 'foo'] # okay
@f.length # also okay
```
To maintain soundness, a tuple that is used as an `Array` is treated as if it were always an array. For example:
```ruby
x = [1, 'foo'] # at this point, x has type [1, String]
var_type :@f, '[1, String]'
@f = x # okay so far
var_type :@g, 'Array<Integer or String>'
@g = x # uh oh
```
When RDL encounters the assignment to `@g`, it retroactively changes `x` to have type `Array<Integer or String>`, which is incompatible with type `[1, String]` of `@f`, so the assignment to `@g` signals an error.
RDL uses the same approach for hashes: hash literals are treated as finite hashes. A finite hash `{k1=>v1, ..., kn=>vn}` can be used where `Hash<k1 or ... or kn, v1 or ... or vn>` is expected. And if a finite hash is used as a `Hash` (including invoking methods on the finite hash; this may change in the future), then it is retroactively converted to a `Hash`.
## Other Features and Limitations
* *Displaying types.* As an aid to debugging, the method `RDL.note_type e` will display the type of `e` during type checking. At run time, this method returns its argument. Note that in certain cases RDL may type check the same code repeatedly, in which case an expression's type could be printed multiple times.
* *Conditional guards and singletons.* If an `if` or `unless` guard has a singleton type, RDL will typecheck both branches but not include types from the unrealizable branch in the expression type. For example, `if true then 1 else 'two' end` has type `1`. RDL behaves similarly for `&&` and `||`. However, RDL does not implement this logic for `case`.
* *Case analysis by class.* If the guard of a `case` statement is a variable, and then `when` branches compare against classes, RDL refines the type of the guard to be those classes within the corresponding `when` branch. For example, in `case x when Integer ...(1)... when String ...(2)... end`, RDL will assume `x` is an `Integer` within `(1)` and a `String` within `(2)`.
* *Multiple Assignment and nil.* In Ruby, extra left-hand sides of multiple assignments are set to `nil`, e.g., `x, y = [1]` sets `x` to `1` and `y` to `nil`. However, RDL reports an error in this case; this may change in the future.
* *Block formal arguments.* Similarly, RDL reports an error if a block is called with the wrong number of arguments even though Ruby does not signal an error in this case.
* *Caching.* If `typecheck: :call` is specified on a method, Ruby will type check the method every time it is called. In the future, RDL will cache these checks.
* *Dependent Types.* RDL ignores refinements in checking code with dependent types. E.g., given an `Integer x {{ x > 0 }}`, RDL will simply treat `x` as an `Integer` and ignore the requirement that it be positive.
* *Unsupported Features.* There are several features of Ruby that are currently not handled by RDL. Here is a non-exhaustive list:
* `super` is not supported.
* `lambda` has special semantics for `return`; this is not supported. Stabby lambda is also not supported.
* Only simple `for` iteration variables are supported.
* Control flow for exceptions is not analyzed fully soundly; some things are not reported as possibly `nil` that could be.
* Only simple usage of constants is handled.
## Assumptions
RDL's static type checker makes some assumptions that should hold unless your Ruby code is doing something highly unusual. RDL assumes:
* `Class#===` is not redefined
* `Proc#call` is not redefined
* `Object#class` is not redefined
(More assumptions will be added here as they are added to RDL...)
# Type-Level Computations
RDL includes support for type-level computations: embedding Ruby code *within* a method's type signature. Using these type-level computations, we can write type signatures that can check more expressive properties, such as the correctness of column names and types in a database query. We have found that type-level computations significantly reduce the need for manually inserted type casts. Below we will cover the basics of using type-level computations. For a closer look, check out our [paper](https://www.cs.tufts.edu/~jfoster/papers/pldi19.pdf) on the same topic.
We use type-level computations only in type signatures for methods which themselves are *not* type checked. Thus, our primary focus is on applying them to library methods. We add a computation to a type by writing Ruby code, delimited by double backticks \`\`...\`\`, in place of a type within a method's signature. This Ruby code may refer to two special variables: `trec`, which names the RDL type of the receiver for a given method call, and `targs`, which names an array of RDL types of the arguments for a given method call. The code must itself evaluate to a type.
As an example, we will write a type-level computation for the `Integer#+` method. Normally, this method would have the simple RDL type `type '(Integer) -> Integer'` (for the case that it takes another `Integer`). Using type-level computations, we can write the following more precise type:
```ruby
type '(Integer) -> ``if trec.is_a?(RDL::Type::SingletonType) && targs[0].is_a?(RDL::Type::SingletonType) then RDL::Type::SingletonType.new(trec.val+targs[0].val) else RDL::Type::NominalType.new(Integer) end``'
```
Now, in place of a simple return type, we have written some Ruby code. This code checks if both the receiver (`trec`) and argument (`targs[0]`) have a singleton type. If so, in the first arm of the branch we compute a new singleton type containing the sum of the receiver and argument. Otherwise, we fall back on returning the nominal type `Integer`. With this signature, RDL can determine the expression `1+2` has the type `3`, rather than the less precise type `Integer`.
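A miniature, self-contained analog of this computation can be run without RDL; here `SingletonT` and `NominalT` are toy stand-ins for `RDL::Type::SingletonType` and `RDL::Type::NominalType`:

```ruby
SingletonT = Struct.new(:val)   # toy stand-in for RDL::Type::SingletonType
NominalT   = Struct.new(:klass) # toy stand-in for RDL::Type::NominalType

def plus_type(trec, targ)
  if trec.is_a?(SingletonT) && targ.is_a?(SingletonT)
    SingletonT.new(trec.val + targ.val)  # fold constants: 1 + 2 has type 3
  else
    NominalT.new(Integer)                # fall back to the nominal type
  end
end

plus_type(SingletonT.new(1), SingletonT.new(2)).val # => 3
```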
We also provide a way of binding variable names to argument types, to avoid using the variable `targs`. For example, for the type signature above, we can give the argument type the name `t`, which we then refer to in the type-level computation:
```ruby
type '(Integer t) -> ``if trec.is_a?(RDL::Type::SingletonType) && t.is_a?(RDL::Type::SingletonType) then RDL::Type::SingletonType.new(trec.val+t.val) else RDL::Type::NominalType.new(Integer) end``'
```
We have written type-level computations for methods from the Integer, Float, Array, Hash, and String core libraries, which you can find in the `/lib/types/core/` directory. We have also written them for a number of database query methods from both the [ActiveRecord](https://github.com/rails/rails/tree/master/activerecord) and [Sequel](https://github.com/jeremyevans/sequel) DSLs. They can be found in `comp_types.rb` file in the `/lib/types/rails/active_record` and `/lib/types/sequel` directories, respectively. These type-level computations for the database query methods depend on RDL pre-processing the schema of the database before type checking any user code. RDL provides helper methods to process the database schema:
```ruby
RDL.load_sequel_schema(DATABASE) # `DATABASE` is the Sequel Database object
RDL.load_rails_schema # Automatically triggered if RDL is loaded in a Rails environment. If you want to use ActiveRecord without Rails, you need to call this method
```
Because type-level computations are used for methods which themselves are not type checked, we include an optional configuration for inserting dynamic checks that ensure that these methods return values of the type expected by their type-level computation. Additionally, because type-level computations can refer to mutable state, we include a second configuration for re-running type-level computations at runtime to ensure that they evaluate to the same value they did at type checking time. Both of these configurations are by default turned off. See the [Configuration](#configuration) section for information on turning them on.
# RDL Method Reference
The following methods are available in `RDL::Annotate`.
* `pre [klass], [meth], &blk, wrap: true, version: nil` - add `blk` as a precondition contract on `klass#meth`. If `klass` is omitted, applies to `self`. If `meth` is also omitted, applies to next defined method. If `wrap` is true, wrap the method to actually check the precondition. If it's false, don't wrap the method (this is probably useless). If a `version` string or array of strings is specified (in rubygems format), only apply when the current Ruby version matches.
* `post [klass], [meth], &blk, wrap: true, version: nil` - same as `pre`, but add a postcondition.
* `type [klass], [meth], typ, wrap: true, typecheck: nil, version: nil` - same as `pre`, but add a type specification. If `typecheck` is `nil`, does no static type checking. If it's `:call`, will type check the method each time it's called. If it's `:now`, will type check the method after it's defined. If it's some other `symbol`, will type check the method when `RDL.do_typecheck symbol` is called.
* `var_type [klass], var, typ` - indicates that `typ` is the type for `var`, which may be a `:@field` or `:local_variable`.
* `attr_accessor_type :name1, typ1, ...` calls `attr_accessor :name1, ...` and creates type annotations for the given field and its getters and setters.
* `attr_reader_type`, `attr_type`, and `attr_writer_type` - analogous to `attr_accessor_type`
* `rdl_alias [klass], new_name, old_name` tells RDL that method `new_name` of `klass` is an alias for method `old_name` (of the same class), and therefore they should have the same contracts and types. This method is only needed when adding contracts and types to methods that have already been aliased; it's not needed if the method is aliased after the contract or type has been added. If the `klass` argument is omitted it's assumed to be `self`.
* `type_params [klass] [:param1, ...], :all, variance: nil, [&blk]` - indicates that `klass` should be treated as a generic type with parameters `:param1...`. The `:all` argument names a method of `klass` that iterates through a `klass` instance's contents. Alternatively, if a block is passed as an argument, that block is used as the iterator. The `variance` argument gives an array of variances of the parameters, `:+` for covariant, `:-` for contravariant, and `:~` for invariant. If `variance` is omitted, the parameters are assumed to be invariant.
The methods above can also be accessed as `rdl_method` after `extend RDL::RDLAnnotate`. They can also be accessed as `RDL.method`; when doing so, however, the `klass` and `meth` arguments are not optional and must be specified. The `RDL` module also includes several other useful methods:
* `RDL.at(sym, &blk)` invokes `blk.call(sym)` when `RDL.do_typecheck(sym)` is called. Useful when type annotations need to be generated at some later time, e.g., because not all classes are loaded.
* `RDL.deinstantiate!(var)` - remove `var`'s instantiation.
* `RDL.do_typecheck(sym)` statically type checks all methods whose type annotations include argument `typecheck: sym`.
* `RDL.instantiate!(var, typ1, ...)` - `var` must have a generic type. Instantiates the type of `var` with type parameters `typ1`, ...
* `RDL.note_type e` - evaluates `e` and returns it at run time. During static type checking, prints out the type of `e`.
* `RDL.nowrap [klass]`, if called at the top-level of a class, causes RDL to behave as if `wrap: false` were passed to all `type`, `pre`, and `post` calls in `klass`. This is mostly used for the core and standard libraries, which have trustworthy behavior, hence enforcing their types and contracts is not worth the overhead. If `klass` is omitted it's assumed to be `self`.
* `RDL.query` prints information about types; see below for details.
* `RDL.remove_type klass, meth` removes the type annotation for `meth` in `klass`. Fails if `meth` does not have a type annotation.
* `RDL.reset` resets all global state stored internally inside RDL back to their original settings. Note: this is really only for internal RDL testing, as it doesn't completely undo RDL's effect (e.g., wrapped methods will be broken by a reset).
* `RDL.type_alias '%name', typ` - indicates that if `%name` appears in a type annotations, it should be expanded to `typ`.
* `RDL.type_cast(e, typ, force: false)` - evaluate `e` and return it at run time. During dynamic contract checking, the returned object will have `typ` associated with it. If `force` is `false`, RDL will also check that `e`'s run-time type is compatible with `typ`; if `force` is `true`, `typ` will be applied regardless of `e`'s type. During static type checking, the type checker will treat the object returned by `type_cast` as having type `typ`.
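As a sketch of how a few of these entry points combine — hedged: this fragment assumes the `rdl` gem is loaded, and `b` and `opaque_value` are hypothetical stand-ins (not part of RDL) used purely for illustration:

```ruby
require 'rdl'

# Suppose b is an instance of a class previously declared generic via
# type_params. Pin down and later discard its type parameter:
RDL.instantiate!(b, 'Integer')   # b is now treated as SomeClass<Integer>
RDL.deinstantiate!(b)            # back to the uninstantiated generic

# Assert a type at one program point. With force: false the run-time
# type is checked against 'Integer'; with force: true it is trusted.
n = RDL.type_cast(opaque_value, 'Integer')
s = RDL.type_cast(opaque_value, 'String', force: true)

# Introduce '%id' as shorthand usable in later type annotations:
RDL.type_alias '%id', 'Integer or String'
```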
# Performance
RDL supports some tradeoffs between safety and performance. There are three main sources of overhead in using RDL:
* When a method is wrapped by RDL, invoking it is more expensive because it requires calling the wrapper which then calls the underlying method. This is the most significant run-time cost of using RDL, and it can be eliminated by adding `wrap: false` to an annotation. (But, this only makes sense for types, since those are the only annotations that can be statically checked.)
* When type checking is performed, RDL parses the program's source code and type checks method bodies. This source of overhead only happens once per method (unless `typecheck: :call` is used), so the overhead is probably not too bad (though we have not measured it).
* RDL adds an implementation of `method_added` and `singleton_method_added` to carry out some of its work, and those are called on every method addition. This source of overhead is again likely not too significant.
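To make the first source of overhead concrete, here is a plain-Ruby sketch (no RDL involved) of what a wrapped method roughly looks like: the original method is renamed and a checking wrapper is installed in its place, so every call pays for the extra hop plus the checks. The `Calculator` class and its contract are invented for illustration, not taken from RDL.

```ruby
class Calculator
  def double(x)
    x * 2
  end

  # What a contract checker's wrapping roughly amounts to: stash the
  # original under a private name, then install a checking wrapper.
  alias_method :__unwrapped_double, :double
  private :__unwrapped_double

  def double(x)
    raise TypeError, "argument must be an Integer" unless x.is_a?(Integer)
    ret = __unwrapped_double(x)
    raise TypeError, "return must be an Integer" unless ret.is_a?(Integer)
    ret
  end
end

calc = Calculator.new
calc.double(21)   # => 42, after paying for the wrapper's two checks
```

With `wrap: false`, no such wrapper is installed and calls go straight to the original method — the trade-off discussed in the rest of this section.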
For uses of `pre` and `post`, there's not a lot of choice: those contracts are enforced at run-time and will incur the costs of wrapped methods. However, note that any methods that are not annotated with `pre` or `post` will not incur the cost of wrapping. (Similarly, methods not annotated with `type` never incur any wrapping cost.)
For uses of `type`, there are more choices, which can be split into two main use cases. First, suppose there's a method `m` that we want a type for but don't want to type check (for example, it may come from some external library). So suppose we read the documentation and give `m` type `(Integer) -> Integer`. We now have to decide whether to wrap `m`. If we don't wrap `m`, then we incur no overhead on calls to `m`, but we are trusting the type. If we do wrap `m`, then on every call to it RDL will check that we call it with an `Integer` and it actually returns an `Integer`. So if we're not completely sure of `m`'s type, it might be useful to wrap it and then run test cases against it to see if the type annotation is ever violated. (For example, the RDL developers did this to test out many of the core library annotations in RDL.)
Second, suppose there's a method `m` that we do want to type check, and again `m` has type `(Integer) -> Integer`. Now RDL will use type checking to ensure that if `m` is given an `Integer` then it returns `Integer`. But now we again have to decide whether to wrap `m`. If we don't wrap `m`, then we get the most efficiency, since typechecking (assuming we do not do it at calls) will only happen once and calls will incur no overhead. On the other hand, it could be that some non-typechecked code calls `m` with something that's not an `Integer`, in which case `m` might do anything, including report a type error. (Notice the type checking of `m` assumed its input was an `Integer`, and it doesn't say anything about the case when its argument is not.) To protect against this case, we can wrap `m`. Then if a caller violates `m`'s type, we'll get an error in the caller code when it tries to call `m`.
(Side note: If typed methods are wrapped, then their type contracts are checked at run time for *all* callers, including ones that were statically type checked and hence couldn't call methods at incorrect types. A future version of RDL will fix this, but it will require some significant changes to RDL's implementation strategy.)
# Queries
As discussed above, RDL includes a small script, `rdl_query`, to look up type information. (Currently it does not support other pre- and postconditions.) The script takes a single argument, which should be a string. Note that when using the shell script, you may need to use quotes depending on your shell. Currently several queries are supported:
* Instance methods can be looked up as `Class#method`.
```shell
$ rdl_query String#include?
String#include?: (String) -> TrueClass or FalseClass
```
* Singleton (class) methods can be looked up as `Class.method`.
```shell
$ rdl_query Pathname.glob
Pathname.glob: (String p1, ?String p2) -> Array<Pathname>
```
* All methods of a class can be listed by passing the class name `Class`.
```shell
$ rdl_query Array
&: (Array<u>) -> Array<t>
*: (String) -> String
... and a lot more
```
* Methods can also be searched for by their type signature:
```shell
$ rdl_query "(Integer) -> Integer" # print all methods of type (Integer) -> Integer
BigDecimal.limit: (Integer) -> Integer
Dir#pos=: (Integer) -> Integer
... and a lot more
```
The type signature uses the standard RDL syntax, with two extensions: `.` can be used as a wildcard to match any type, and `...` can be used to match any sequence of arguments.
```shell
$ rdl_query "(.) -> ." # methods that take one argument and return anything
$ rdl_query "(Integer, .) -> ." # methods that take two arguments, the first of which is an Integer
$ rdl_query "(Integer, ...) -> ." # methods whose first argument is an Integer
$ rdl_query "(..., Integer) -> ." # methods whose last argument is an Integer
$ rdl_query "(..., Integer, ...) -> ." # methods that take an Integer somewhere
$ rdl_query "(Integer or .) -> ." # methods that take a single argument that is a union containing an Integer
$ rdl_query "(.?) -> ." # methods that take one, optional argument
```
Note that aside from `.` and `...`, the matching is exact. For example `(Integer) -> Integer` will not match a method of type `(Integer or String) -> Integer`.
# Configuration
To configure RDL, execute the following shortly after RDL is loaded:
```ruby
RDL.config { |config|
# use config to configure RDL here
}
```
RDL supports the following configuration options:
* `config.nowrap` - `Array<Class>` containing all classes whose methods should not be wrapped.
* `config.gather_stats` - currently disabled.
* `config.report` - if true, then when the program exits, RDL will print out a list of methods that were statically type checked, and methods that were annotated to be statically type checked but weren't.
* `config.guess_types` - List of classes (of type `Array<Symbol>`). For every method added to a listed class *after* this configuration option is set, RDL will record the types of its arguments and returns at run-time. Then when the program exits, RDL will print out a skeleton for each class with types for the monitored methods based on what RDL recorded at run-time and based on what Ruby knows about the methods' signatures. This is probably not going to produce the correct method types, but it might be a good starting place.
* `config.type_defaults` - Hash containing default options for `type`. Initially `{ wrap: true, typecheck: false }`.
* `config.pre_defaults` - Hash containing default options for `pre`. Initially `{ wrap: true }`.
* `config.post_defaults` - same as `pre_defaults`, but for `post`.
* `config.use_comp_types` - when true, RDL makes use of types with type-level computations. When false, RDL ignores such types. By default set to true.
* `config.check_comp_types` - when true, RDL inserts dynamic checks which ensure that methods with type-level computations will return the expected type. False by default.
* `config.rerun_comp_types` - when true, RDL inserts dynamic checks which rerun type-level computations at method call sites, ensuring that they evaluate to the same type they did at type checking time. False by default.
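Pulling a few of these together, a configuration block might look like the following sketch (the option names come from the list above; the specific values and the use of the `Set` class are illustrative only):

```ruby
RDL.config { |config|
  config.nowrap << Set                                      # never wrap Set's methods
  config.report = true                                      # summarize static checking at exit
  config.type_defaults = { wrap: false, typecheck: :call }  # new defaults for `type`
  config.use_comp_types = false                             # ignore type-level computations
}
```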
# Bibliography
Here are some research papers we have written exploring types, contracts, and Ruby.
* Milod Kazerounian, Sankha Narayan Guria, Niki Vazou, Jeffrey S. Foster, and David Van Horn.
[Type-Level Computations for Ruby Libraries](https://www.cs.tufts.edu/~jfoster/papers/pldi19.pdf).
In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), Phoenix, AZ, June 2019.
* Brianna M. Ren and Jeffrey S. Foster.
[Just-in-Time Static Type Checking for Dynamic Languages](http://www.cs.tufts.edu/~jfoster/papers/pldi16.pdf).
In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), Santa Barbara, CA, June 2016.
* T. Stephen Strickland, Brianna Ren, and Jeffrey S. Foster.
[Contracts for Domain-Specific Languages in Ruby](http://www.cs.tufts.edu/~jfoster/papers/dls12.pdf).
In Dynamic Languages Symposium (DLS), Tucson, AZ, October 2012.
* Brianna M. Ren, John Toman, T. Stephen Strickland, and Jeffrey S. Foster.
[The Ruby Type Checker](http://www.cs.tufts.edu/~jfoster/papers/oops13.pdf).
In the Object-Oriented Programming Languages and Systems (OOPS) Track at the ACM Symposium on Applied Computing (SAC), pages 1565–1572, Coimbra, Portugal, March 2013.
* Jong-hoon (David) An, Avik Chaudhuri, Jeffrey S. Foster, and Michael Hicks.
[Dynamic Inference of Static Types for Ruby](http://www.cs.tufts.edu/~jfoster/papers/popl11.pdf).
In ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), pages 459–472, Austin, TX, USA, January 2011.
* Jong-hoon (David) An.
[Dynamic Inference of Static Types for Ruby](http://www.cs.tufts.edu/~jfoster/papers/thesis-an.pdf).
MS thesis, University of Maryland, College Park, 2010.
* Michael Furr.
[Combining Static and Dynamic Typing in Ruby](https://www.cs.tufts.edu/~jfoster/papers/thesis-furr.pdf).
PhD thesis, University of Maryland, College Park, 2009.
* Michael Furr, Jong-hoon (David) An, Jeffrey S. Foster, and Michael Hicks.
[The Ruby Intermediate Language](http://www.cs.tufts.edu/~jfoster/papers/dls09-ril.pdf).
In Dynamic Languages Symposium (DLS), pages 89–98, Orlando, Florida, October 2009.
* Michael Furr, Jong-hoon (David) An, and Jeffrey S. Foster.
[Profile-Guided Static Typing for Dynamic Scripting Languages](http://www.cs.tufts.edu/~jfoster/papers/oopsla09.pdf).
In ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA), pages 283–300, Orlando, Florida, October 2009. Best student paper award.
* Michael Furr, Jong-hoon (David) An, Jeffrey S. Foster, and Michael Hicks.
[Static Type Inference for Ruby](http://www.cs.tufts.edu/~jfoster/papers/oops09.pdf).
In the Object-Oriented Programming Languages and Systems (OOPS) Track at the ACM Symposium on Applied Computing (SAC), pages 1859–1866, Honolulu, Hawaii, March 2009.
# Copyright
Copyright (c) 2014-2017, University of Maryland, College Park. All rights reserved.
# Contributing
Contributions are welcome! If you submit any pull requests, please
submit them against the `dev` branch. We keep the `master` branch so
it matches the latest gem version.
# Authors
* [Jeffrey S. Foster](http://www.cs.tufts.edu/~jfoster/)
* [Brianna M. Ren](https://www.cs.umd.edu/~bren/)
* [T. Stephen Strickland](https://www.cs.umd.edu/~sstrickl/)
* Alexander T. Yu
* Milod Kazerounian
* [Sankha Narayan Guria](https://sankhs.com/)
# Contributors
* [Joel Holdbrooks](https://github.com/noprompt)
* [Paul Tarjan](https://github.com/ptarjan)
# Discussion group
[RDL Users](https://groups.google.com/forum/#!forum/rdl-users)
Portainer.Docker-Compose
=========
Running docker-compose
```
docker-compose up -d
```
Language: [EN](README.md), [PL](README.PL.md)
License
-------
BSD
Author
------------------
**Maciej Rachuna**
##### System Administrator & DevOps Engineer
rachuna.maciej@gmail.com
---
title: 'Development'
weight: 20
---
## How to build
### Requirements
- devkitPro r18 (buildscripts 20060412)
- You need to use version r18. Do not use the latest release!
### Steps
Execute the following commands in Terminal.
```shell
git clone --depth 1 https://github.com/pqrs-org/Vulkanon.git
cd Vulkanon/vulkanon
make
```
# Installing, Building and Running Examples
## Installation
To install the deck.gl framework:
```bash
npm install deck.gl --save
```
or
```bash
yarn add deck.gl
```
The [deck.gl](https://www.npmjs.com/package/deck.gl) module includes all deck.gl features and their dependencies. If you want to selectively install a subset of the features, see the [Dependencies](#selectively-install-dependencies) section below.
## Running the Examples
The deck.gl repository contains an [examples folder](https://github.com/uber/deck.gl/tree/master/examples) with a selection of small, standalone examples that could be good starting points for your application.
You should be able to copy these folders to your preferred locations, and get them running simply as follows:
Clone the deck.gl repo, if you haven't already
```bash
git clone git@github.com:uber/deck.gl.git
```
For most consistent results, it is recommended that you check out the latest release branch (e.g. `7.0-release`) instead of `master` when running examples.
```bash
git checkout 7.0-release
```
Change directory to the example you are interested in, e.g.
```bash
cd deck.gl/examples/get-started/pure-js/basic
```
Then install dependencies using the installer of your choice:
```bash
npm install
# or
yarn
```
and then running using:
```bash
npm start
```
If the example uses a mapbox base map you need a [Mapbox access token](/docs/get-started/using-with-mapbox-gl.md)
```bash
export MapboxAccessToken={Your Token Here} && npm start
```
If you want to build the example against the latest deck.gl source code in the cloned repo (rather than the published version of deck.gl listed in the examples `package.json`)
```bash
npm run start-local
```
> The examples on the `master` branch are updated to use features from the latest, unreleased version of deck.gl. If some example doesn't work using `npm start` it can be worth trying `npm run start-local`.
> While all examples support `npm run start-local`, there are some caveats when running against local source. Most importantly, you must make sure to run `npm install` or `yarn` in the deck.gl root folder before running `npm run start-local` in an example folder.
## Selectively Install Dependencies
A family of NPM modules are published as part of the deck.gl framework. The following tree shows their scope and dependencies:
- `@deck.gl/core` - Core module that handles the WebGL rendering pipeline, data management, and user interaction
+ `@deck.gl/layers` - Primitive layers that are the building blocks of all visualizations
* `@deck.gl/aggregation-layers` - Advanced layers that aggregate data into alternative representations, e.g. heatmap, contour, hex bins, etc.
* `@deck.gl/geo-layers` - Additional layers that handle geospatial use cases and GIS formats.
* `@deck.gl/mesh-layers` - Additional layers that render 3D meshes and [scene graphs](https://en.wikipedia.org/wiki/Scene_graph).
+ `@deck.gl/json` - Declarative interface that supports specifying deck.gl layers and views using a JSON format.
+ `@deck.gl/mapbox` - An integration with the [Mapbox custom layer](/docs/api-reference/mapbox/overview.md) API.
+ `@deck.gl/react` - React wrapper of deck.gl.
+ `@deck.gl/test-utils` - Testing utilities.
For example, to render a `PointCloudLayer`, you may install:
```bash
yarn add @deck.gl/core @deck.gl/layers
```
To use the `HexagonLayer` with React, you need to install:
```bash
yarn add @deck.gl/core @deck.gl/layers @deck.gl/aggregation-layers @deck.gl/react
```
While installing submodules separately affords applications the maximum control over the dependencies that it pulls in, the submodule versions are expected to be synchronized manually in order to produce consistent results.
The `deck.gl` master module includes all submodules except for `@deck.gl/test-utils`. Most bundling solutions (Webpack, Rollup etc.) offer tree-shaking capabilities that exclude unused exports from a production build.
---
title: How to See Which Report Layouts are Used on Reports | Microsoft Docs
description: You can see which report layouts are used on Business Central reports from the **Report Layout Selection** window. From this window, you can change layout that is used on a report, create new custom layouts, or make variations of existing custom layouts.
services: project-madeira
documentationcenter: ''
author: SorenGP
ms.service: dynamics365-financials
ms.topic: article
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords:
ms.date: 07/01/2017
ms.author: sgroespe
---
# See Which Report Layouts are Used on Reports
You can see which report layouts are used on [!INCLUDE[d365fin](../../includes/d365fin_md.md)] reports from the **Report Layout Selection** window. From this window, you can change the layout that is used on a report, create new custom layouts, or make variations of existing custom layouts.
### To see which report layouts are set up on reports
1. Choose the  icon, enter **Report Layout Selection**, and then choose the related link.
The window lists all the reports that are available for the company that is set in the **Company** field at the top of the window.
The **Selected Layout** and **Report Description** fields indicate the layout that is currently used on the report.
2. To see the report layouts for another company, change the **Company** field.
## See Also
[Managing Report Layouts From the Microsoft Dynamics NAV Clients](../FullExperience/managing-report-layouts-from-the-microsoft-dynamics-nav-clients.md)
[About Report Layouts](../FullExperience/about-report-layouts.md)
[Create a Custom Report Layout](../FullExperience/how-to-create-a-custom-report-layout.md)
[Modify a Custom Report Layout](../FullExperience/how-to-modify-a-custom-report-layout.md)
## 2017-12-20
#### all
* [tipsy / github-profile-summary](https://github.com/tipsy/github-profile-summary):Tool for visualizing GitHub profiles
* [Chalarangelo / 30-seconds-of-code](https://github.com/Chalarangelo/30-seconds-of-code):Curated collection of useful Javascript snippets that you can understand in 30 seconds or less.
* [google / boardgame.io](https://github.com/google/boardgame.io):State management and more for turn based games.
* [Jam3 / math-as-code](https://github.com/Jam3/math-as-code):a cheat-sheet for mathematical notation in code form
* [parcel-bundler / parcel](https://github.com/parcel-bundler/parcel):📦 🚀 Blazing fast, zero configuration web application bundler
* [gka / schnack](https://github.com/gka/schnack):🗣️ Simple node app for Disqus-like drop-in commenting on static websites
* [TalkingData / inmap](https://github.com/TalkingData/inmap):Map visualization
* [gocolly / colly](https://github.com/gocolly/colly):Fast and Elegant Scraping Framework for Golang
* [jwasham / coding-interview-university](https://github.com/jwasham/coding-interview-university):A complete computer science study plan to become a software engineer.
* [tensorflow / tensorflow](https://github.com/tensorflow/tensorflow):Computation using data flow graphs for scalable machine learning
* [bulutyazilim / awesome-datascience](https://github.com/bulutyazilim/awesome-datascience):📝 An awesome Data Science repository to learn and apply for real world problems.
* [k88hudson / git-flight-rules](https://github.com/k88hudson/git-flight-rules):Flight rules for git
* [bradoyler / xmr-miner](https://github.com/bradoyler/xmr-miner):XMR mining app, built with Vue.js, D3 and CoinHive
* [tidwall / jj](https://github.com/tidwall/jj):JSON Stream Editor (command line utility)
* [bitcoin / bitcoin](https://github.com/bitcoin/bitcoin):Bitcoin Core integration/staging tree
* [facebook / Docusaurus](https://github.com/facebook/Docusaurus):Easy to maintain open source documentation websites.
* [avast-tl / retdec](https://github.com/avast-tl/retdec):RetDec is a retargetable machine-code decompiler based on LLVM.
* [Popmotion / popmotion](https://github.com/Popmotion/popmotion):The JavaScript motion engine. Create unique animations and interactions with tweens, physics and input tracking.
* [cirocosta / cr](https://github.com/cirocosta/cr):Runs your tasks at maximum concurrency
* [vuejs / vue](https://github.com/vuejs/vue):A progressive, incrementally-adoptable JavaScript framework for building UI on the web.
* [SoySauceLab / CollectionKit](https://github.com/SoySauceLab/CollectionKit):A modern Swift framework for building reusable data-driven collection components.
* [peewpw / Invoke-PSImage](https://github.com/peewpw/Invoke-PSImage):Embeds a PowerShell script in the pixels of a PNG file and generates a oneliner to execute
* [alibaba / beidou](https://github.com/alibaba/beidou):Isomorphic framework for server-rendered React apps
* [Blankj / awesome-java-leetcode](https://github.com/Blankj/awesome-java-leetcode):👑 LeetCode of algorithms with java solution(updating).
* [mehdidc / pomodoro](https://github.com/mehdidc/pomodoro):simple command line pomodoro app with visualization of statistics
#### unknown
* [Jam3 / math-as-code](https://github.com/Jam3/math-as-code):a cheat-sheet for mathematical notation in code form
* [jwasham / coding-interview-university](https://github.com/jwasham/coding-interview-university):A complete computer science study plan to become a software engineer.
* [bulutyazilim / awesome-datascience](https://github.com/bulutyazilim/awesome-datascience):📝 An awesome Data Science repository to learn and apply for real world problems.
* [k88hudson / git-flight-rules](https://github.com/k88hudson/git-flight-rules):Flight rules for git
* [haskellcamargo / pragmatic-functional-javascript](https://github.com/haskellcamargo/pragmatic-functional-javascript):The "Pragmatic Functional JavaScript" book
* [haskell-perf / checklist](https://github.com/haskell-perf/checklist):The Haskell performance checklist
* [gothinkster / realworld](https://github.com/gothinkster/realworld):"The mother of all demo apps" — Exemplary fullstack Medium.com clone powered by React, Angular, Node, Django, and many more 🏅
* [sindresorhus / awesome](https://github.com/sindresorhus/awesome):😎 Curated list of awesome lists
* [DDFE / DDFE-blog](https://github.com/DDFE/DDFE-blog):👏 welcome to DDFE's blog
* [getify / You-Dont-Know-JS](https://github.com/getify/You-Dont-Know-JS):A book series on JavaScript. @YDKJS on twitter.
* [github / gitignore](https://github.com/github/gitignore):A collection of useful .gitignore templates
* [vuejs / awesome-vue](https://github.com/vuejs/awesome-vue):A curated list of awesome things related to Vue.js
* [kelseyhightower / kubernetes-the-hard-way](https://github.com/kelseyhightower/kubernetes-the-hard-way):Bootstrap Kubernetes the hard way on Google Cloud Platform. No scripts.
* [appcypher / awesome-wasm-langs](https://github.com/appcypher/awesome-wasm-langs):😎 A curated list of languages that compile directly to or have their VMs in WebAssembly
* [ChristosChristofidis / awesome-deep-learning](https://github.com/ChristosChristofidis/awesome-deep-learning):A curated list of awesome Deep Learning tutorials, projects and communities.
* [hindupuravinash / nips2017](https://github.com/hindupuravinash/nips2017):A list of resources for all invited talks, tutorials, workshops and presentations at NIPS 2017
* [ashishb / android-security-awesome](https://github.com/ashishb/android-security-awesome):A collection of android security related resources
* [bnb / awesome-awesome-nodejs](https://github.com/bnb/awesome-awesome-nodejs):🐢 🚀 An Awesome list of Awesome lists related to Node.js.
* [ethereum / wiki](https://github.com/ethereum/wiki):The Ethereum Wiki -
* [h5bp / Front-end-Developer-Interview-Questions](https://github.com/h5bp/Front-end-Developer-Interview-Questions):A list of helpful front-end related questions you can use to interview potential candidates, test yourself or completely ignore.
* [enaqx / awesome-react](https://github.com/enaqx/awesome-react):A collection of awesome things regarding React ecosystem.
* [caesar0301 / awesome-public-datasets](https://github.com/caesar0301/awesome-public-datasets):A topic-centric list of high-quality open datasets in public domains. By everyone, for everyone!
* [ipfs / ipfs](https://github.com/ipfs/ipfs):IPFS - The Permanent Web
* [ArcherFMY / Paper_Reading_List](https://github.com/ArcherFMY/Paper_Reading_List):Recommended Papers. Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Learning (cs.LG)
* [xitu / gold-miner](https://github.com/xitu/gold-miner):🥇 The Juejin (Gold Miner) translation project — possibly the world's largest and best English-to-Chinese technical translation community, the translation platform that best understands readers and translators
#### python
* [mehdidc / pomodoro](https://github.com/mehdidc/pomodoro):simple command line pomodoro app with visualization of statistics
* [bloomberg / powerfulseal](https://github.com/bloomberg/powerfulseal):A powerful testing tool for Kubernetes clusters.
* [lium-lst / nmtpytorch](https://github.com/lium-lst/nmtpytorch):Neural Machine Translation Framework in PyTorch
* [tensorflow / models](https://github.com/tensorflow/models):Models and examples built with TensorFlow
* [ston3o / docker-hacklab](https://github.com/ston3o/docker-hacklab):My personal hacklab, create your own.
* [vinta / awesome-python](https://github.com/vinta/awesome-python):A curated list of awesome Python frameworks, libraries, software and resources
* [pytorch / pytorch](https://github.com/pytorch/pytorch):Tensors and Dynamic neural networks in Python with strong GPU acceleration
* [burningion / poor-mans-deep-learning-camera](https://github.com/burningion/poor-mans-deep-learning-camera):Build a thin client deep learning camera with the Raspberry Pi, Flask, and YOLO
* [thtrieu / darkflow](https://github.com/thtrieu/darkflow):Translate darknet to tensorflow. Load trained weights, retrain/fine-tune using tensorflow, export constant graph def to mobile devices
* [scikit-learn / scikit-learn](https://github.com/scikit-learn/scikit-learn):scikit-learn: machine learning in Python
* [joshua-wu / deepfakes_faceswap](https://github.com/joshua-wu/deepfakes_faceswap):from deekfakes' faceswap: https://www.reddit.com/user/deepfakes/
* [keras-team / keras](https://github.com/keras-team/keras):Deep Learning for humans
* [josephmisiti / awesome-machine-learning](https://github.com/josephmisiti/awesome-machine-learning):A curated list of awesome Machine Learning frameworks, libraries and software.
* [openai / gym](https://github.com/openai/gym):A toolkit for developing and comparing reinforcement learning algorithms.
* [django / django](https://github.com/django/django):The Web framework for perfectionists with deadlines.
* [Instagram / MonkeyType](https://github.com/Instagram/MonkeyType):A system for Python that generates static type annotations by collecting runtime types
* [vi3k6i5 / flashtext](https://github.com/vi3k6i5/flashtext):Extract Keywords from sentence or Replace keywords in sentences.
* [pypa / pipenv](https://github.com/pypa/pipenv):Python Development Workflow for Humans.
* [ageitgey / face_recognition](https://github.com/ageitgey/face_recognition):The world's simplest facial recognition api for Python and the command line
* [geek-ai / MAgent](https://github.com/geek-ai/MAgent):A Platform for Many-agent Reinforcement Learning
* [songrotek / Deep-Learning-Papers-Reading-Roadmap](https://github.com/songrotek/Deep-Learning-Papers-Reading-Roadmap):Deep Learning papers reading roadmap for anyone who are eager to learn this amazing tech!
* [orppra / ropa](https://github.com/orppra/ropa):ROP chain creation as easy as drinking water
* [facebookresearch / visdom](https://github.com/facebookresearch/visdom):A flexible tool for creating, organizing, and sharing visualizations of live, rich data. Supports Torch and Numpy.
* [XX-net / XX-Net](https://github.com/XX-net/XX-Net):a web proxy tool
* [scrapy / scrapy](https://github.com/scrapy/scrapy):Scrapy, a fast high-level web crawling & scraping framework for Python.
#### swift
* [SoySauceLab / CollectionKit](https://github.com/SoySauceLab/CollectionKit):A modern Swift framework for building reusable data-driven collection components.
* [roberthein / Ease](https://github.com/roberthein/Ease):Animate everything with Ease
* [xcodeswift / sake](https://github.com/xcodeswift/sake):A make-like build utility for Swift.
* [Ramotion / fluid-slider](https://github.com/Ramotion/fluid-slider):💧 A slider widget with a popup bubble displaying the precise value selected. Made by @Ramotion
* [uias / Tabman](https://github.com/uias/Tabman):™️ A powerful paging view controller with indicator bar.
* [shadowsocks / ShadowsocksX-NG](https://github.com/shadowsocks/ShadowsocksX-NG):Next Generation of ShadowsocksX
* [Alamofire / Alamofire](https://github.com/Alamofire/Alamofire):Elegant HTTP Networking in Swift
* [vsouza / awesome-ios](https://github.com/vsouza/awesome-ios):A curated list of awesome iOS ecosystem, including Objective-C and Swift Projects
* [airbnb / Lona](https://github.com/airbnb/Lona):A tool for defining design systems and using them to generate cross-platform UI code, Sketch files, images, and other artifacts.
* [lkzhao / Hero](https://github.com/lkzhao/Hero):Elegant transition library for iOS & tvOS
* [raywenderlich / swift-algorithm-club](https://github.com/raywenderlich/swift-algorithm-club):Algorithms and data structures in Swift, with explanations!
* [kizitonwose / CountryPickerView](https://github.com/kizitonwose/CountryPickerView):A simple, customizable view for efficiently collecting country information in iOS apps.
* [ninjaprox / NVActivityIndicatorView](https://github.com/ninjaprox/NVActivityIndicatorView):A collection of awesome loading animations
* [krzysztofzablocki / Sourcery](https://github.com/krzysztofzablocki/Sourcery):Meta-programming for Swift, stop writing boilerplate code.
* [ReactiveX / RxSwift](https://github.com/ReactiveX/RxSwift):Reactive Programming in Swift
* [Juanpe / SkeletonView](https://github.com/Juanpe/SkeletonView):An elegant way to show users that something is happening and also prepare them to which contents he is waiting
* [BalestraPatrick / Stryng](https://github.com/BalestraPatrick/Stryng):Swift strings taken to a whole new syntax level.
* [olucurious / Awesome-ARKit](https://github.com/olucurious/Awesome-ARKit):A curated list of awesome ARKit projects and resources. Feel free to contribute!
* [Minecodecraft / 50DaysOfSwift](https://github.com/Minecodecraft/50DaysOfSwift):One demo a day, keep coding for 50 days with Swift. The struggle of a growing iOS developer
* [krzyzanowskim / CryptoSwift](https://github.com/krzyzanowskim/CryptoSwift):CryptoSwift is a growing collection of standard and secure cryptographic algorithms implemented in Swift
* [danielgindi / Charts](https://github.com/danielgindi/Charts):Beautiful charts for iOS/tvOS/OSX! The Apple side of the crossplatform MPAndroidChart.
* [vapor / vapor](https://github.com/vapor/vapor):💧 A server-side Swift web framework.
* [matteocrippa / awesome-swift](https://github.com/matteocrippa/awesome-swift):A collaborative list of awesome Swift libraries and resources. Feel free to contribute!
* [lhc70000 / iina](https://github.com/lhc70000/iina):The modern video player for macOS.
* [xcodeswift / xcproj](https://github.com/xcodeswift/xcproj):Swift library for parsing Xcode projects
#### objective-c
* [zhenglibao / FlexLib](https://github.com/zhenglibao/FlexLib):An obj-c layout framework for iOS based on flexbox model, easy and powerful. You can write iOS ui even faster than web & android. Trash xib & storyboard, autolayout & masonry now. :)
* [weidian-inc / hera](https://github.com/weidian-inc/hera):A framework for running WeChat applet
* [madaoCN / FxxkBaseClass-MVVM-ReactiveObjc](https://github.com/madaoCN/FxxkBaseClass-MVVM-ReactiveObjc):iOS架构实践干货:AOP来Fxxk基类 + MVVM + ReactiveObjC + JLRoutes组件化,代码比较完善,可以直接拿来写项目,大家按需自取,能顺手给个Star那也是极好的
* [Brances / ZMBCY-iOS](https://github.com/Brances/ZMBCY-iOS):高仿二次元网易GACHA,所有接口均通过Charles抓取而来,图片资源通过 https://github.com/yuedong56/Assets.carTool 工具提取。
* [twitter / twitter-kit-ios](https://github.com/twitter/twitter-kit-ios):Twitter Kit is a native SDK to include Twitter content inside mobile apps.
* [PengfeiWang666 / HighlightedSearch](https://github.com/PengfeiWang666/HighlightedSearch):🔍 搜索关键字 🚀 🚀 高亮显示,支持汉字、全拼、简拼搜索,支持多音字搜索
* [leerme / LYPersonTool](https://github.com/leerme/LYPersonTool):
* [airbnb / lottie-ios](https://github.com/airbnb/lottie-ios):An iOS library to natively render After Effects vector animations
* [stackhou / YJCategories](https://github.com/stackhou/YJCategories):Objective-C 常用分类集合,支持Cocoapods
* [AFNetworking / AFNetworking](https://github.com/AFNetworking/AFNetworking):A delightful networking framework for iOS, macOS, watchOS, and tvOS.
* [marcuswestin / WebViewJavascriptBridge](https://github.com/marcuswestin/WebViewJavascriptBridge):An iOS/OSX bridge for sending messages between Obj-C and JavaScript in UIWebViews/WebViews
* [rs / SDWebImage](https://github.com/rs/SDWebImage):Asynchronous image downloader with cache support as a UIImageView category
* [SnapKit / Masonry](https://github.com/SnapKit/Masonry):Harness the power of AutoLayout NSLayoutConstraints with a simplified, chainable and expressive syntax. Supports iOS and OSX Auto Layout
* [banchichen / TZImagePickerController](https://github.com/banchichen/TZImagePickerController):一个支持多选、选原图和视频的图片选择器,同时有预览、裁剪功能,支持iOS6+。 A clone of UIImagePickerController, support picking multiple photos、original photo、video, also allow preview photo and video, support iOS6+
* [TextureGroup / Texture](https://github.com/TextureGroup/Texture):Smooth asynchronous user interfaces for iOS apps.
* [hackiftekhar / IQKeyboardManager](https://github.com/hackiftekhar/IQKeyboardManager):Codeless drop-in universal library allows to prevent issues of keyboard sliding up and cover UITextField/UITextView. Neither need to write any code nor any setup required and much more.
* [halfrost / Halfrost-Field](https://github.com/halfrost/Halfrost-Field):✍️ 这里是写博客的地方 —— Halfrost-Field 冰霜之地
* [react-community / react-native-maps](https://github.com/react-community/react-native-maps):React Native Mapview component for iOS + Android
* [ChangbaDevs / KTVHTTPCache](https://github.com/ChangbaDevs/KTVHTTPCache):A media cache framework from Changba iOS Team.
* [expo / expo](https://github.com/expo/expo):Expo iOS/Android Client
* [ko1o / PYSearch](https://github.com/ko1o/PYSearch):🔍 An elegant search controller which replaces the UISearchController for iOS (iPhone & iPad) .
* [ccgus / fmdb](https://github.com/ccgus/fmdb):A Cocoa / Objective-C wrapper around SQLite
* [CocoaLumberjack / CocoaLumberjack](https://github.com/CocoaLumberjack/CocoaLumberjack):A fast & simple, yet powerful & flexible logging framework for Mac and iOS
* [changsanjiang / SJVideoPlayer](https://github.com/changsanjiang/SJVideoPlayer):Video Player. 视频播放器. 支持调速&提供UI模型, 自由配置UI界面.
* [AloneMonkey / MonkeyDev](https://github.com/AloneMonkey/MonkeyDev):CaptainHook Tweak、Logos Tweak and Command-line Tool、Patch iOS Apps, Without Jailbreak.
#### javascript
* [Chalarangelo / 30-seconds-of-code](https://github.com/Chalarangelo/30-seconds-of-code):Curated collection of useful Javascript snippets that you can understand in 30 seconds or less.
* [google / boardgame.io](https://github.com/google/boardgame.io):State management and more for turn based games.
* [parcel-bundler / parcel](https://github.com/parcel-bundler/parcel):📦 🚀 Blazing fast, zero configuration web application bundler
* [TalkingData / inmap](https://github.com/TalkingData/inmap):Map visualization
* [gka / schnack](https://github.com/gka/schnack):🗣️ Simple node app for Disqus-like drop-in commenting on static websites
* [facebook / Docusaurus](https://github.com/facebook/Docusaurus):Easy to maintain open source documentation websites.
* [bradoyler / xmr-miner](https://github.com/bradoyler/xmr-miner):XMR mining app, built with Vue.js, D3 and CoinHive
* [Popmotion / popmotion](https://github.com/Popmotion/popmotion):The JavaScript motion engine. Create unique animations and interactions with tweens, physics and input tracking.
* [vuejs / vue](https://github.com/vuejs/vue):A progressive, incrementally-adoptable JavaScript framework for building UI on the web.
* [alibaba / beidou](https://github.com/alibaba/beidou):Isomorphic framework for server-rendered React apps
* [facebook / react](https://github.com/facebook/react):A declarative, efficient, and flexible JavaScript library for building user interfaces.
* [Molunerfinn / PicGo](https://github.com/Molunerfinn/PicGo):A simple & beautiful tool for pictures uploading built by electron-vue
* [nitin42 / react-perf-devtool](https://github.com/nitin42/react-perf-devtool):A Chrome developer tool extension to inspect performance of React components.
* [lukeed / sade](https://github.com/lukeed/sade):Smooth (CLI) Operator 🎶
* [thedaviddias / Front-End-Checklist](https://github.com/thedaviddias/Front-End-Checklist):🗂 The perfect Front-End Checklist for modern websites and meticulous developers
* [GoogleChrome / puppeteer](https://github.com/GoogleChrome/puppeteer):Headless Chrome Node API
* [dabbott / react-framer](https://github.com/dabbott/react-framer):Create Framer ( https://framer.com ) prototypes using React ( https://reactjs.org/ )
* [prettier / prettier](https://github.com/prettier/prettier):Prettier is an opinionated code formatter.
* [seek-oss / html-sketchapp-cli](https://github.com/seek-oss/html-sketchapp-cli):Quickly generate Sketch libraries from HTML documents and living style guides, powered by html-sketchapp
* [iotaledger / wallet](https://github.com/iotaledger/wallet):IOTA Wallet
* [bootstrap-vue / bootstrap-vue](https://github.com/bootstrap-vue/bootstrap-vue):Quickly integrate Bootstrap 4 Components with Vue.js.
* [storybooks / storybook](https://github.com/storybooks/storybook):Interactive development & testing environment for React, React-Native, Vue UI components
* [airbnb / react-sketchapp](https://github.com/airbnb/react-sketchapp):render React components to Sketch ⚛️ 💎
* [facebookincubator / create-react-app](https://github.com/facebookincubator/create-react-app):Create React apps with no build configuration.
* [facebook / react-native](https://github.com/facebook/react-native):A framework for building native apps with React.
#### go
* [gocolly / colly](https://github.com/gocolly/colly):Fast and Elegant Scraping Framework for Golang
* [tidwall / jj](https://github.com/tidwall/jj):JSON Stream Editor (command line utility)
* [cirocosta / cr](https://github.com/cirocosta/cr):Runs your tasks at maximum concurrency
* [Ne0nd0g / merlin](https://github.com/Ne0nd0g/merlin):Merlin is a cross-platform post-exploitation HTTP/2 Command & Control server and agent written in golang.
* [ethereum / go-ethereum](https://github.com/ethereum/go-ethereum):Official Go implementation of the Ethereum protocol
* [kubernetes / kubernetes](https://github.com/kubernetes/kubernetes):Production-Grade Container Scheduling and Management
* [openfaas / faas](https://github.com/openfaas/faas):OpenFaaS - Serverless Functions Made Simple for Docker & Kubernetes
* [mojocn / base64Captcha](https://github.com/mojocn/base64Captcha):Golang验证码support digits, numbers,alphabet, arithmetic, audio and digit-alphabet captcha.
* [golang / go](https://github.com/golang/go):The Go programming language
* [gohugoio / hugo](https://github.com/gohugoio/hugo):The world’s fastest framework for building websites.
* [istio / istio](https://github.com/istio/istio):An open platform to connect, manage, and secure microservices.
* [istio / fortio](https://github.com/istio/fortio):Fortio load testing library and command line tool and web UI in go (golang). Allows to specify a set query-per-second load and record latency histograms and other useful stats.
* [avelino / awesome-go](https://github.com/avelino/awesome-go):A curated list of awesome Go frameworks, libraries and software
* [dzonerzy / goWAPT](https://github.com/dzonerzy/goWAPT):Go Web Application Penetration Test
* [pingcap / tidb](https://github.com/pingcap/tidb):TiDB is a distributed HTAP database compatible with MySQL protocol
* [randomvariable / kms-cryptsetup](https://github.com/randomvariable/kms-cryptsetup):Encrypt your on-premise server disks and save the keys in the cloud securely
* [containous / traefik](https://github.com/containous/traefik):Træfik, a modern reverse proxy
* [yunabe / lgo](https://github.com/yunabe/lgo):Go (golang) REPL and Jupyter Notebook kernel
* [gin-gonic / gin](https://github.com/gin-gonic/gin):Gin is a HTTP web framework written in Go (Golang). It features a Martini-like API with much better performance -- up to 40 times faster. If you need smashing performance, get yourself some Gin.
* [prometheus / prometheus](https://github.com/prometheus/prometheus):The Prometheus monitoring system and time series database.
* [coreos / etcd](https://github.com/coreos/etcd):Distributed reliable key-value store for the most critical data of a distributed system
* [astaxie / build-web-application-with-golang](https://github.com/astaxie/build-web-application-with-golang):A golang ebook intro how to build a web with golang
* [kubernetes / minikube](https://github.com/kubernetes/minikube):Run Kubernetes locally
* [minio / minio](https://github.com/minio/minio):Minio is an open source object storage server compatible with Amazon S3 APIs
* [xwjdsh / nba-live](https://github.com/xwjdsh/nba-live):Watch NBA games in the terminal, the content in Chinese only.
#### java
* [Blankj / awesome-java-leetcode](https://github.com/Blankj/awesome-java-leetcode):👑 LeetCode of algorithms with java solution(updating).
* [kekingcn / file-online-preview](https://github.com/kekingcn/file-online-preview):使用spring boot打造文件文档在线预览项目解决方案,支持doc、docx、ppt、pptx、xls、xlsx、zip、rar、以及众多类文本如txt、html、xml、java、properties、mp3、mp4、sql、js、md、json、conf、ini、vue、php、py、bat、gitignore等文件在线预览
* [alibaba / dubbo](https://github.com/alibaba/dubbo):Dubbo is a high-performance, java based, open source RPC framework
* [spring-projects / spring-boot](https://github.com/spring-projects/spring-boot):Spring Boot
* [biezhi / blade](https://github.com/biezhi/blade):🚀 Lightning fast and elegant mvc framework for Java8
* [scwang90 / SmartRefreshLayout](https://github.com/scwang90/SmartRefreshLayout):🔥 下拉刷新、上拉加载、二级刷新、淘宝二楼、RefreshLayout、OverScroll,Android智能下拉刷新框架,支持越界回弹、越界拖动,具有极强的扩展性,集成了几十种炫酷的Header和 Footer。
* [crazyandcoder / citypicker](https://github.com/crazyandcoder/citypicker):citypicker城市选择器,支持仿iOS滚轮实现,一级或者三级列表展示方式。
* [Blankeer / MDWechat](https://github.com/Blankeer/MDWechat):一个能让微信 Material Design 化的 Xposed 模块
* [bumptech / glide](https://github.com/bumptech/glide):An image loading and caching library for Android focused on smooth scrolling
* [ReactiveX / RxJava](https://github.com/ReactiveX/RxJava):RxJava – Reactive Extensions for the JVM – a library for composing asynchronous and event-based programs using observable sequences for the Java VM.
* [yanzhenjie / Sofia](https://github.com/yanzhenjie/Sofia):SystemBar一体化,状态栏和导航栏均支持设置颜色、渐变色、图片、透明度、内容入侵。状态栏支持设置深色字体,以上特性兼容国产魅族、小米手机(包括7.0及以上)和其它标准模式的手机。
* [spring-projects / spring-framework](https://github.com/spring-projects/spring-framework):Spring Framework
* [apache / incubator-skywalking](https://github.com/apache/incubator-skywalking):A distributed tracing system, and APM ( Application Performance Monitoring )
* [CarGuo / GSYVideoPlayer](https://github.com/CarGuo/GSYVideoPlayer):视频播放器(IJKplayer),HTTPS支持,支持弹幕,支持滤镜、水印、gif截图,支持基本的拖动,声音、亮度调节,支持边播边缓存,支持视频本身自带rotation的旋转(90,270之类),重力旋转与手动旋转的同步支持,支持列表播放 ,直接添加控件为封面,列表全屏动画,视频加载速度,列表小窗口支持拖动,5.0的过场效果,调整比例,多分辨率切换,支持切换播放器,进度条小窗口预览,其他一些小动画效果,rtsp、concat、mpeg。简书:
* [googlesamples / android-architecture-components](https://github.com/googlesamples/android-architecture-components):Samples for Android Architecture Components.
* [shuzheng / zheng](https://github.com/shuzheng/zheng):基于Spring+SpringMVC+Mybatis分布式敏捷开发系统架构,提供整套公共微服务服务模块:集中权限管理(单点登录)、内容管理、支付中心、用户管理(支持第三方登录)、微信平台、存储系统、配置中心、日志分析、任务和通知等,支持服务治理、监控和追踪,努力为中小型企业打造全方位J2EE企业级开发解决方案。
* [iluwatar / java-design-patterns](https://github.com/iluwatar/java-design-patterns):Design patterns implemented in Java
* [PhilJay / MPAndroidChart](https://github.com/PhilJay/MPAndroidChart):A powerful Android chart view / graph view library, supporting line- bar- pie- radar- bubble- and candlestick charts as well as scaling, dragging and animations.
* [uber / NullAway](https://github.com/uber/NullAway):A tool to help eliminate NullPointerExceptions (NPEs) in your Java code with low build-time overhead
* [elastic / elasticsearch](https://github.com/elastic/elasticsearch):Open Source, Distributed, RESTful Search Engine
* [QMUI / QMUI_Android](https://github.com/QMUI/QMUI_Android):提高 Android UI 开发效率的 UI 库
* [JetBrains / kotlin](https://github.com/JetBrains/kotlin):The Kotlin Programming Language
* [Blankj / AndroidUtilCode](https://github.com/Blankj/AndroidUtilCode):🔥 Android developers should collect the following utils(updating).
* [apache / hadoop](https://github.com/apache/hadoop):Mirror of Apache Hadoop
* [Wechat-Group / weixin-java-tools](https://github.com/Wechat-Group/weixin-java-tools):微信支付、开放平台、小程序、企业号和公众号 Java SDK开发工具包
#### html
* [froala / design-blocks](https://github.com/froala/design-blocks):A set of 170+ Bootstrap based design blocks ready to be used to create clean modern websites.
* [carlos8f / zenbot](https://github.com/carlos8f/zenbot):Zenbot is a command-line cryptocurrency trading bot using Node.js and MongoDB.
* [jaywcjlove / awesome-mac](https://github.com/jaywcjlove/awesome-mac): This repo is a collection of awesome Mac applications and tools for developers and designers.
* [almasaeed2010 / AdminLTE](https://github.com/almasaeed2010/AdminLTE):AdminLTE - Free Premium Admin control Panel Theme Based On Bootstrap 3.x
* [google / styleguide](https://github.com/google/styleguide):Style guides for Google-originated open-source projects
* [mathiasbynens / proposal-number-fromstring](https://github.com/mathiasbynens/proposal-number-fromstring):{BigInt,Number.fromString}
* [ariya / phantomjs](https://github.com/ariya/phantomjs):Scriptable Headless WebKit
* [facebookresearch / fastText](https://github.com/facebookresearch/fastText):Library for fast text representation and classification.
* [TeamStuQ / skill-map](https://github.com/TeamStuQ/skill-map):程序员技能图谱
* [tc39 / ecma262](https://github.com/tc39/ecma262):Status, process, and documents for ECMA262
* [wesbos / JavaScript30](https://github.com/wesbos/JavaScript30):30 Day Vanilla JS Challenge
* [google / gson](https://github.com/google/gson):A Java serialization/deserialization library to convert Java Objects into JSON and back
* [SilenceHVK / articles](https://github.com/SilenceHVK/articles):📚 Github static blog post, experience the fun of using Issues.Welcome watch and star,static blog address ( 静态博客文章,体验一下使用 Issues 的乐趣,欢迎 watch 和 star,静态博客地址: http://hvkcoder.me )
* [clong / DetectionLab](https://github.com/clong/DetectionLab):Vagrant & Packer scripts to build a lab environment complete with security tooling and logging best practices
* [portainer / portainer](https://github.com/portainer/portainer):Simple management UI for Docker
* [mrholek / CoreUI-Free-Bootstrap-Admin-Template](https://github.com/mrholek/CoreUI-Free-Bootstrap-Admin-Template):CoreUI is free bootstrap admin template with Angular2, AngularJS, React.js & Vue.js support.
* [google / tacotron](https://github.com/google/tacotron):Audio samples from the paper "Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model".
* [cs231n / cs231n.github.io](https://github.com/cs231n/cs231n.github.io):Public facing notes page
* [apachecn / MachineLearning](https://github.com/apachecn/MachineLearning):Machine Learning in Action(机器学习实战)
* [chjj / marked](https://github.com/chjj/marked):A markdown parser and compiler. Built for speed.
* [infinum / rails-handbook](https://github.com/infinum/rails-handbook):Describing the development process used by the Infinum Rails team.
* [swagger-api / swagger-codegen](https://github.com/swagger-api/swagger-codegen):swagger-codegen contains a template-driven engine to generate documentation, API clients and server stubs in different languages by parsing your OpenAPI / Swagger definition.
* [octocat / Spoon-Knife](https://github.com/octocat/Spoon-Knife):This repo is for demonstration purposes only.
* [gdi2290 / awesome-angular](https://github.com/gdi2290/awesome-angular):📄 A curated list of awesome Angular resources by @TipeIO
* [tc39 / proposal-pipeline-operator](https://github.com/tc39/proposal-pipeline-operator):A proposal for adding the simple-but-useful pipeline operator to JavaScript.
#### kotlin
* [tipsy / github-profile-summary](https://github.com/tipsy/github-profile-summary):Tool for visualizing GitHub profiles
* [git-xuhao / KotlinMvp](https://github.com/git-xuhao/KotlinMvp):🔥 基于Kotlin+MVP+Retrofit+RxJava+Glide 等架构实现的短视频类的 APP 练手项目,UI 简约风格,代码详细注释;欢迎 Star or Fork~
* [heimashi / kotlin_tips](https://github.com/heimashi/kotlin_tips):用Kotlin去提高生产力:汇总Kotlin相对于Java的优势,以及怎么用Kotlin去简洁、务实、高效、安全开发的Tips
* [ximsfei / GodBlessYou](https://github.com/ximsfei/GodBlessYou):GBU is an Android library that tries to fix your app when it crashes. God bless your app no longer crashes.
* [bufferapp / android-clean-architecture-mvi-boilerplate](https://github.com/bufferapp/android-clean-architecture-mvi-boilerplate):A fork of our clean architecture boilerplate using the Model-View-Intent pattern
* [JetBrains / kotlin-native](https://github.com/JetBrains/kotlin-native):Kotlin/Native infrastructure
* [SchibstedSpain / Barista](https://github.com/SchibstedSpain/Barista):☕️ The guy who serves a great Espresso
* [po10cio / TimeLineView](https://github.com/po10cio/TimeLineView):A simple Timeline View that demonstrates the power of ConstraintLayout and RecyclerView. No drawing, just plug in and play.
* [google / flexbox-layout](https://github.com/google/flexbox-layout):Flexbox for Android
* [ansman / kotshi](https://github.com/ansman/kotshi):An annotations processor that generates Moshi adapters from immutable Kotlin data classes.
* [Kotlin / anko](https://github.com/Kotlin/anko):Pleasant Android application development
* [Gh0u1L5 / WechatMagician](https://github.com/Gh0u1L5/WechatMagician):WechatMagician is a Xposed module written in Kotlin, that can prevent Wechat from recalling messages, deleting moments or comments.
* [KotlinBy / awesome-kotlin](https://github.com/KotlinBy/awesome-kotlin):A curated list of awesome Kotlin related stuff Inspired by awesome-java.
* [uber / artist](https://github.com/uber/artist):An artist creates views. Artist is a Gradle plugin that codegens a base set of Android Views.
* [guoxiaoxing / phoenix](https://github.com/guoxiaoxing/phoenix):The one-stop solution for taking pictures / videos, picture / video selection, editing and compression on the Android platform.
* [dbacinski / Design-Patterns-In-Kotlin](https://github.com/dbacinski/Design-Patterns-In-Kotlin):Design Patterns implemented in Kotlin
* [JetradarMobile / android-snowfall](https://github.com/JetradarMobile/android-snowfall):Fully customizable implementation of "Snowfall View" on Android.
* [kategory / arrow](https://github.com/kategory/arrow):Functional Data Types & Abstractions for Kotlin
* [ssseasonnn / RxDownload](https://github.com/ssseasonnn/RxDownload):A multi-threaded download tool written with RxJava and Kotlin
* [shyiko / ktlint](https://github.com/shyiko/ktlint):An anti-bikeshedding Kotlin linter with built-in formatter
* [ktorio / ktor](https://github.com/ktorio/ktor):Framework for quickly creating connected applications in Kotlin with minimal effort
* [Kotlin / kotlinx.coroutines](https://github.com/Kotlin/kotlinx.coroutines):Libraries built upon Kotlin coroutines
* [Ekito / koin](https://github.com/Ekito/koin):KOIN - a concise and pragmatic dependency injection framework for Kotlin
* [kotlin-graphics / imgui](https://github.com/kotlin-graphics/imgui):Bloat-free Immediate Mode Graphical User interface for JVM with minimal dependencies
* [spekframework / spek](https://github.com/spekframework/spek):A specification framework for Kotlin
#### shell
* [robbyrussell / oh-my-zsh](https://github.com/robbyrussell/oh-my-zsh):A delightful community-driven (with 1,000+ contributors) framework for managing your zsh configuration. Includes 200+ optional plugins (rails, git, OSX, hub, capistrano, brew, ant, php, python, etc), over 140 themes to spice up your morning, and an auto-update tool so that makes it easy to keep up with the latest updates from the community.
* [rainglow / vscode](https://github.com/rainglow/vscode):100+ Color themes for Visual Studio Code.
* [gjmzj / kubeasz](https://github.com/gjmzj/kubeasz):使用Ansible脚本二进制方式安装K8S集群,介绍组件交互原理,方便直接,不受国内网络环境影响
* [creationix / nvm](https://github.com/creationix/nvm):Node Version Manager - Simple bash script to manage multiple active node.js versions
* [rainglow / jetbrains](https://github.com/rainglow/jetbrains):100+ color themes for JetBrains IDEs including PHPStorm, Webstorm and more.
* [rootsongjc / kubernetes-handbook](https://github.com/rootsongjc/kubernetes-handbook):Kubernetes中文指南/实践手册 https://jimmysong.io/kubernetes-handbook
* [pyenv / pyenv](https://github.com/pyenv/pyenv):Simple Python version management
* [rainglow / sublime](https://github.com/rainglow/sublime):100+ color themes for Sublime Text and Textmate.
* [laradock / laradock](https://github.com/laradock/laradock):Docker PHP development environment.
* [facebook / graphql](https://github.com/facebook/graphql):GraphQL is a query language and execution engine tied to any backend service.
* [fatihacet / turkcekaynaklar-com](https://github.com/fatihacet/turkcekaynaklar-com):Özenle seçilmiş Türkçe kaynaklar listesi En: Curated list of Turkish resources
* [dotnet / core](https://github.com/dotnet/core):Home repository for .NET Core
* [denysdovhan / spaceship-zsh-theme](https://github.com/denysdovhan/spaceship-zsh-theme):⭐️ 🚀 An “Oh My ZSH!” theme for Astronauts.
* [StreisandEffect / streisand](https://github.com/StreisandEffect/streisand):Streisand sets up a new server running L2TP/IPsec, OpenConnect, OpenSSH, OpenVPN, Shadowsocks, sslh, Stunnel, a Tor bridge, and WireGuard. It also generates custom instructions for all of these services. At the end of the run you are given an HTML file with instructions that can be shared with friends, family members, and fellow activists.
* [pi-hole / pi-hole](https://github.com/pi-hole/pi-hole):A black hole for Internet advertisements
* [deviantony / docker-elk](https://github.com/deviantony/docker-elk):The ELK stack powered by Docker and Compose.
* [nvie / gitflow](https://github.com/nvie/gitflow):Git extensions to provide high-level repository operations for Vincent Driessen's branching model.
* [fish-shell / fish-shell](https://github.com/fish-shell/fish-shell):The user-friendly command line shell.
* [drwetter / testssl.sh](https://github.com/drwetter/testssl.sh):Testing TLS/SSL encryption anywhere on any port
* [dokku / dokku](https://github.com/dokku/dokku):A docker-powered PaaS that helps you build and manage the lifecycle of applications
* [reinterpretcat / lfs](https://github.com/reinterpretcat/lfs):Docker configuration for building Linux From Scratch system
* [redox-os / redox](https://github.com/redox-os/redox):Redox: A Rust Operating System
* [hwdsl2 / setup-ipsec-vpn](https://github.com/hwdsl2/setup-ipsec-vpn):Scripts to build your own IPsec VPN server, with IPsec/L2TP and Cisco IPsec on Ubuntu, Debian and CentOS
* [flarum / flarum](https://github.com/flarum/flarum):Delightfully simple forum software.
* [kylemanna / docker-openvpn](https://github.com/kylemanna/docker-openvpn):🔒 OpenVPN server in a Docker container complete with an EasyRSA PKI CA
[Kubernetes](https://kubernetes.io/) is an open-source system for automating deployment, scaling, and management of containerized applications.
Apache Spark supports Kubernetes as a resource manager through [KubernetesClusterManager](KubernetesClusterManager.md) (and [KubernetesClusterSchedulerBackend](KubernetesClusterSchedulerBackend.md)), using **k8s://**-prefixed master URLs that point at [Kubernetes API servers](https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver).
Spark on Kubernetes uses `TaskSchedulerImpl` ([Apache Spark]({{ book.spark_core }}/scheduler/TaskSchedulerImpl/)) for task scheduling.
## Kubernetes GA
As per [SPARK-33005 Kubernetes GA Preparation](https://issues.apache.org/jira/browse/SPARK-33005), Spark on Kubernetes is fully supported and production ready! 🎉
## <span id="SPARK_EXECUTOR_INACTIVE_LABEL"> Inactive Executor Pods
Spark on Kubernetes defines the **spark-exec-inactive** label to mark executor pods as inactive once they have finished (successfully or not) while the [spark.kubernetes.executor.deleteOnTermination](configuration-properties.md#spark.kubernetes.executor.deleteOnTermination) configuration property is `false` (when `ExecutorPodsLifecycleManager` is requested to [handle executor pods snapshots](ExecutorPodsLifecycleManager.md#onNewSnapshots)).
This label is used to skip executor pods when `PollRunnable` is requested to [fetch status of all executor pods in a Spark application from Kubernetes API server](PollRunnable.md#run).
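For a quick check, such pods could be listed with a `kubectl` label selector. This is only a sketch: the label value `true` is an assumption here, so verify the exact value against your Spark version.

```shell
# List executor pods marked inactive by Spark (label value assumed).
kubectl get pods -l spark-exec-inactive=true

# Clean them up manually once inspected.
kubectl delete pods -l spark-exec-inactive=true
```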
## Cluster Deploy Mode
Spark on Kubernetes uses [KubernetesClientApplication](KubernetesClientApplication.md) in `cluster` deploy mode (as the `SparkApplication` ([Apache Spark]({{ book.spark_core }}/tools/SparkApplication/)) to run).
!!! note
Use `spark-submit --deploy-mode`, `spark.submit.deployMode` or `DEPLOY_MODE` environment variable to specify the deploy mode of a Spark application.
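As an illustrative sketch, a `cluster`-mode submission could look as follows (the API server address, container image, and application class are placeholders, not values prescribed by Spark):

```shell
./bin/spark-submit \
  --master k8s://https://kubernetes.example.com:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=registry.example.com/spark:latest \
  local:///opt/spark/examples/jars/spark-examples.jar
```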
## Volumes
Volumes and volume mounts are configured using `spark.kubernetes.[type].volumes.`-prefixed configuration properties with `type` being `driver` or `executor` (for the driver and executor pods, respectively).
The `KubernetesVolumeUtils` utility [extracts volume configuration](KubernetesVolumeUtils.md#parseVolumeSpecificConf) based on the volume type:
Volume Type | Configuration Property
-------------|------------------------
`emptyDir` | `[volumesPrefix].[volumeType].[volumeName].options.medium`<br>`[volumesPrefix].[volumeType].[volumeName].options.sizeLimit`
`hostPath` | `[volumesPrefix].[volumeType].[volumeName].options.path`
`persistentVolumeClaim` | `[volumesPrefix].[volumeType].[volumeName].options.claimName`
Executor volumes (`spark.kubernetes.executor.volumes.`-prefixed configuration properties) are parsed right when the `KubernetesConf` utility is used to create a [KubernetesDriverConf](KubernetesConf.md#createDriverConf) (and hence a driver pod). That makes valid executor volume configuration required as soon as driver volumes are defined.
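To make the naming scheme concrete, here is a minimal sketch that assembles a volume configuration key from its parts (the volume name `data` and the claim name `my-claim` are made-up examples):

```shell
# Build a persistentVolumeClaim volume property key for the executor pods.
volumes_prefix="spark.kubernetes.executor.volumes"
volume_type="persistentVolumeClaim"
volume_name="data"  # arbitrary, user-chosen volume name

key="${volumes_prefix}.${volume_type}.${volume_name}.options.claimName"
echo "${key}=my-claim"
```

The resulting key-value pair can then be passed to `spark-submit` via `--conf`.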
## Static File Resources
**File resources** are resources with the `file` URI scheme or no scheme at all (the latter are then treated as `file`-based).
In Spark applications, file resources can be the primary resource (an application jar, Python, or R file) as well as the files referenced by the `spark.jars` and `spark.files` configuration properties (or the `--jars` and `--files` options of `spark-submit`, respectively).
When deployed in `cluster` mode, Spark on Kubernetes uploads file resources of a Spark application to a Hadoop DFS-compatible file system defined by the required [spark.kubernetes.file.upload.path](configuration-properties.md#spark.kubernetes.file.upload.path) configuration property.
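A hedged example of such a submission follows; the `s3a://` bucket, paths, and class name are placeholders, and any Hadoop DFS-compatible URI works for the upload path:

```shell
./bin/spark-submit \
  --master k8s://https://kubernetes.example.com:6443 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --conf spark.kubernetes.file.upload.path=s3a://my-bucket/spark-uploads \
  --files /local/path/app.conf \
  /local/path/my-app.jar
```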
### Local URI Scheme
A special case of static file resources are **local resources**, i.e. resources with the `local` URI scheme. They are considered already available on every Spark node and are therefore not added to a Spark file server for distribution when `SparkContext` is requested to [add such a file]({{ book.spark_core }}/SparkContext/#static-files).
In Spark on Kubernetes, `local` resources are used for primary application resources that are already included in a container image.
```text
./bin/spark-submit \
--master k8s://$K8S_SERVER \
local:///opt/docker/lib/meetup.spark-docker-example-0.1.0.jar
```
## Executor Pods State Synchronization
Spark on Kubernetes uses [ExecutorPodsPollingSnapshotSource](ExecutorPodsPollingSnapshotSource.md) for [polling Kubernetes API server for executor pods of a Spark application](PollRunnable.md#run) every polling interval (based on [spark.kubernetes.executor.apiPollingInterval](configuration-properties.md#spark.kubernetes.executor.apiPollingInterval) configuration property).
`ExecutorPodsPollingSnapshotSource` is given an [ExecutorPodsSnapshotsStore](ExecutorPodsSnapshotsStore.md) that is requested to [replace a snapshot](ExecutorPodsSnapshotsStore.md#replaceSnapshot) regularly.
`ExecutorPodsSnapshotsStore` keeps track of executor pods state snapshots and allows [subscribers](ExecutorPodsSnapshotsStore.md#addSubscriber) to be regularly updated (e.g. [ExecutorPodsAllocator](ExecutorPodsAllocator.md) and [ExecutorPodsLifecycleManager](ExecutorPodsLifecycleManager.md)).
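As an illustration, the polling interval can be tuned in `spark-defaults.conf` (the value below is an arbitrary example, not a recommendation):

```text
# conf/spark-defaults.conf
spark.kubernetes.executor.apiPollingInterval  60s
```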
## Dynamic Allocation of Executors
Spark on Kubernetes supports **Dynamic Allocation of Executors** using [ExecutorPodsAllocator](ExecutorPodsAllocator.md).
!!! tip "The Internals of Apache Spark"
Learn more about [Dynamic Allocation of Executors]({{ book.spark_core }}/dynamic-allocation/) in [The Internals of Apache Spark]({{ book.spark_core }}).
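A minimal sketch of enabling it could look as follows (shuffle tracking stands in for the external shuffle service that is unavailable on Kubernetes; the executor bounds are illustrative):

```text
# conf/spark-defaults.conf
spark.dynamicAllocation.enabled                  true
spark.dynamicAllocation.shuffleTracking.enabled  true
spark.dynamicAllocation.minExecutors             1
spark.dynamicAllocation.maxExecutors             10
```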
## <span id="spark-internal"> Internal Resource Marker
Spark on Kubernetes uses the special name **spark-internal** in `cluster` deploy mode for internal application resources (that are supposed to be part of an image).
Given [renameMainAppResource](KubernetesUtils.md#renameMainAppResource), `DriverCommandFeatureStep` will re-write local `file`-scheme-based primary application resources to `spark-internal` special name when requested for the [base driver container](DriverCommandFeatureStep.md#baseDriverContainer) (for a `JavaMainAppResource` application).
### <span id="spark-internal-demo"> Demo
This demo is a follow-up to [Demo: Running Spark Application on minikube](demo/running-spark-application-on-minikube.md). Run it first.
Note `--deploy-mode cluster` and that the application jar is "locally resolvable" (i.e. it implicitly uses the `file:` scheme).
```text
./bin/spark-submit \
--master k8s://$K8S_SERVER \
--deploy-mode cluster \
--name spark-docker-example \
--class meetup.SparkApp \
--conf spark.kubernetes.container.image=spark-docker-example:0.1.0 \
--conf spark.kubernetes.context=minikube \
--conf spark.kubernetes.namespace=spark-demo \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.file.upload.path=/tmp/spark-k8s \
--verbose \
~/dev/meetups/spark-meetup/spark-docker-example/target/scala-2.12/spark-docker-example_2.12-0.1.0.jar
```
```text
$ kubectl get po -l spark-role=driver
NAME READY STATUS RESTARTS AGE
spark-docker-example-dfd7d076e7099718-driver 0/1 Error 0 7m25s
```
Note `spark-internal` in the output below.
```text
$ kubectl describe po spark-docker-example-dfd7d076e7099718-driver
...
Containers:
spark-kubernetes-driver:
...
Args:
driver
--properties-file
/opt/spark/conf/spark.properties
--class
meetup.SparkApp
spark-internal
...
```
## Resources
* [Official documentation]({{ spark.doc }}/running-on-kubernetes.html)
* [Spark on Kubernetes](https://levelup.gitconnected.com/spark-on-kubernetes-3d822969f85b) by Scott Haines
* (video) [Getting Started with Apache Spark on Kubernetes](https://www.youtube.com/watch?v=xo7BIkFWQP4) by Jean-Yves Stephan and Julien Dumazert
| 59.515152 | 435 | 0.775713 | eng_Latn | 0.666713 |
1c5b19722199ba05c63fe1f238a70a8d80969f9f | 30 | md | Markdown | README.md | HenvyLuk/SlideMenu | 6c7bbdb93f64f06329a8f9cc9181ac6282be31be | [
"Apache-2.0"
] | null | null | null | README.md | HenvyLuk/SlideMenu | 6c7bbdb93f64f06329a8f9cc9181ac6282be31be | [
"Apache-2.0"
] | null | null | null | README.md | HenvyLuk/SlideMenu | 6c7bbdb93f64f06329a8f9cc9181ac6282be31be | [
"Apache-2.0"
] | null | null | null | # SlideMenu
HenvyLukSlideMenu
| 10 | 17 | 0.866667 | ces_Latn | 0.661846 |
1c5bcd2a55c23256741745913351f30f6fe5c441 | 16,182 | md | Markdown | treebanks/gun_dooley/index.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 204 | 2015-01-20T16:36:39.000Z | 2022-03-28T00:49:51.000Z | treebanks/gun_dooley/index.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 654 | 2015-01-02T17:06:29.000Z | 2022-03-31T18:23:34.000Z | treebanks/gun_dooley/index.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 200 | 2015-01-16T22:07:02.000Z | 2022-03-25T11:35:28.000Z | ---
layout: base
title: 'UD_Mbya_Guarani-Dooley'
udver: '2'
---
<!-- This page is automatically generated from the README file and from
the data files in the latest release.
Please do not edit this page directly. -->
# UD Mbya Guarani Dooley
Language: [Mbya Guarani](/gun/index.html) (code: `gun`)<br/>
Family: Tupian, Tupi-Guarani
This treebank has been part of Universal Dependencies since the UD v2.4 release.
The following people have contributed to making this treebank part of UD: Guillaume Thomas.
Repository: [UD_Mbya_Guarani-Dooley](https://github.com/UniversalDependencies/UD_Mbya_Guarani-Dooley)<br />
Search this treebank on-line: [PML-TQ](https://lindat.mff.cuni.cz/services/pmltq/#!/treebank/udgun_dooley29)<br />
Download all treebanks: [UD 2.9](/#download)
License: CC BY-NC-SA 4.0. The underlying text is not included; the user must obtain it separately and then merge with the UD annotation using a script distributed with UD
Genre: fiction
Questions, comments?
General annotation questions (either Mbya Guarani-specific or cross-linguistic) can be raised in the [main UD issue tracker](https://github.com/UniversalDependencies/docs/issues).
You can report bugs in this treebank in the [treebank-specific issue tracker on Github](https://github.com/UniversalDependencies/UD_Mbya_Guarani-Dooley/issues).
If you want to collaborate, please contact [guillaume • thomas (æt) utoronto • ca].
Development of the treebank happens outside the UD repository.
If there are bugs, either the original data source or the conversion procedure must be fixed.
Do not submit pull requests against the UD repository.
| Annotation | Source |
|------------|--------|
| Lemmas | assigned by a program, not checked manually |
| UPOS | assigned by a program, with some manual corrections, but not a full manual verification |
| XPOS | annotated manually |
| Features | annotated manually in non-UD style, automatically converted to UD, with some manual corrections of the conversion |
| Relations | assigned by a program, with some manual corrections, but not a full manual verification |
## Description
UD Mbya_Guarani-Dooley is a corpus of narratives written in Mbyá Guaraní (Tupian) in Brazil, and collected by Robert Dooley. Due to copyright restrictions, the corpus that is distributed as part of UD only contains the annotation (tags, features, relations) while the FORM and LEMMA columns are empty.
UD Mbya_Guarani-Dooley is the UD annotation of a corpus of narratives written by two Mbyá Guaraní speakers, Nelson Florentino and Darci Pires de Lima, between 1976 and 1990 in Brazil. The corpus was compiled by Robert A. Dooley (SIL), and is archived at the Archive of the Indigenous Languages of Latin America:
* Dooley, Robert A. "Mbyá Guaraní Collection of Robert Dooley" The Archive of the Indigenous Languages of Latin America: www.ailla.utexas.org. Media: text. Access: 100% restricted. PID ailla:119734
The narratives in Dooley's collection were interlinearized in SIL FieldWorks Language Explorer (Black and Simons 2006), and manually annotated in UD in Arborator (Gerdes 2013). Features were converted automatically from the morphological glosses added in SIL FieldWorks Language Explorer.
Due to copyright restrictions, the corpus that is distributed as part of UD only contains the annotation (tags, features, relations), while the FORM and LEMMA columns are empty. A password protected version of UD Mbya_Guarani-Dooley with FORM and LEMMA fields is archived on the Archive of the Indigenous Languages of Latin America, in the following collection:
* Guillaume, Thomas and Dooley, Robert A. Dependency Treebank derived from the Mbyá Guaraní collection of Robert Dooley. Access: 100% restricted. PID ailla:119734
Consider using the development version of the corpus, which contains the latest improvements, while the official release is updated every 6 months:
* https://github.com/UniversalDependencies/UD_Mbya_Guarani-Dooley/tree/dev
## Acknowledgments
The development of the corpus was supported by a Connaught New Researcher Award to Guillaume Thomas at the University of Toronto. Several research assistants participated in the dependency annotation of the corpus:
* Gregory Antono, Laurestine Bradford, Vidhya Elango, Jean-François Juneau, Angelika Kiss, Barbara Peixoto, Darragh Winkelman.
## References
* Andrew Black and Gary Simons. 2006. The SIL FieldWorks Language Explorer Approach to Morphological Parsing. Computational Linguistics for Less studied Languages: Texas Linguistics Society, 10. SIL.
* Kim Gerdes, 2013. Collaborative dependency annotation. In Journal Proceedings of the second international conference on dependency linguistics (DepLing 2013), 88-97.
# Statistics of UD Mbya Guarani Dooley
## POS Tags
[ADJ](gun_dooley-pos-ADJ.html) – [ADP](gun_dooley-pos-ADP.html) – [ADV](gun_dooley-pos-ADV.html) – [AUX](gun_dooley-pos-AUX.html) – [CCONJ](gun_dooley-pos-CCONJ.html) – [DET](gun_dooley-pos-DET.html) – [INTJ](gun_dooley-pos-INTJ.html) – [NOUN](gun_dooley-pos-NOUN.html) – [NUM](gun_dooley-pos-NUM.html) – [PART](gun_dooley-pos-PART.html) – [PRON](gun_dooley-pos-PRON.html) – [PROPN](gun_dooley-pos-PROPN.html) – [PUNCT](gun_dooley-pos-PUNCT.html) – [SCONJ](gun_dooley-pos-SCONJ.html) – [VERB](gun_dooley-pos-VERB.html) – [X](gun_dooley-pos-X.html)
## Features
[Clusivity](gun_dooley-feat-Clusivity.html) – [Clusivity[obj]](gun_dooley-feat-Clusivity-obj.html) – [Clusivity[psor]](gun_dooley-feat-Clusivity-psor.html) – [Clusivity[subj]](gun_dooley-feat-Clusivity-subj.html) – [Mood](gun_dooley-feat-Mood.html) – [Number](gun_dooley-feat-Number.html) – [Number[psor]](gun_dooley-feat-Number-psor.html) – [NumType](gun_dooley-feat-NumType.html) – [Person](gun_dooley-feat-Person.html) – [Person[obj]](gun_dooley-feat-Person-obj.html) – [Person[subj]](gun_dooley-feat-Person-subj.html) – [Polarity](gun_dooley-feat-Polarity.html) – [PronType](gun_dooley-feat-PronType.html) – [Subcat](gun_dooley-feat-Subcat.html) – [VerbForm](gun_dooley-feat-VerbForm.html)
## Relations
[acl](gun_dooley-dep-acl.html) – [advcl](gun_dooley-dep-advcl.html) – [advmod](gun_dooley-dep-advmod.html) – [amod](gun_dooley-dep-amod.html) – [appos](gun_dooley-dep-appos.html) – [aux](gun_dooley-dep-aux.html) – [case](gun_dooley-dep-case.html) – [cc](gun_dooley-dep-cc.html) – [ccomp](gun_dooley-dep-ccomp.html) – [compound](gun_dooley-dep-compound.html) – [compound:svc](gun_dooley-dep-compound-svc.html) – [conj](gun_dooley-dep-conj.html) – [cop](gun_dooley-dep-cop.html) – [csubj](gun_dooley-dep-csubj.html) – [dep](gun_dooley-dep-dep.html) – [dep:mod](gun_dooley-dep-dep-mod.html) – [det](gun_dooley-dep-det.html) – [discourse](gun_dooley-dep-discourse.html) – [dislocated](gun_dooley-dep-dislocated.html) – [fixed](gun_dooley-dep-fixed.html) – [flat](gun_dooley-dep-flat.html) – [goeswith](gun_dooley-dep-goeswith.html) – [mark](gun_dooley-dep-mark.html) – [nmod](gun_dooley-dep-nmod.html) – [nsubj](gun_dooley-dep-nsubj.html) – [nummod](gun_dooley-dep-nummod.html) – [obj](gun_dooley-dep-obj.html) – [obl](gun_dooley-dep-obl.html) – [obl:sentcon](gun_dooley-dep-obl-sentcon.html) – [parataxis](gun_dooley-dep-parataxis.html) – [parataxis:rep](gun_dooley-dep-parataxis-rep.html) – [punct](gun_dooley-dep-punct.html) – [root](gun_dooley-dep-root.html) – [vocative](gun_dooley-dep-vocative.html)
<h2>Tokenization and Word Segmentation</h2>
<ul>
<li>This corpus contains 1046 sentences and 11771 tokens.</li>
</ul>
<ul>
<li>All tokens in this corpus are followed by a space.</li>
</ul>
<ul>
<li>This corpus does not contain words with spaces.</li>
</ul>
<ul>
<li>This corpus contains 29 types of words that contain both letters and punctuation. Examples: ha'e, he'i, oma'ẽ, tyke'y, va'ekue, Ha'e'ỹ, Vexa'i, aipoa'e, aipoe'i, aje'i'i, e'ỹ, ema'ẽ, ka'aguy, ka'i, mba'eta, nda'eveive, oguenoẽ'i, ojae'o, oma'ẽmba, oro'e, peva'e, porã-porãve, rai'i, rogue'i, ta'vy, ta'yxy, va'e, va'erã, xera'y</li>
</ul>
<ul>
</ul>
<h2>Morphology</h2>
<h3>Tags</h3>
<ul>
<li>This corpus uses 16 UPOS tags out of 17 possible: <a>ADJ</a>, <a>ADP</a>, <a>ADV</a>, <a>AUX</a>, <a>CCONJ</a>, <a>DET</a>, <a>INTJ</a>, <a>NOUN</a>, <a>NUM</a>, <a>PART</a>, <a>PRON</a>, <a>PROPN</a>, <a>PUNCT</a>, <a>SCONJ</a>, <a>VERB</a>, <a>X</a></li>
<li>This corpus does not use the following tags: SYM</li>
</ul>
<ul>
<li>This corpus contains 18 word types tagged as particles (PART): _, ae, anho, avei, e'ỹ, ete, je, jevy, ju, ke, kuery, ma, merami, rai'i, rei, ri, ta'vy, tema</li>
</ul>
<ul>
<li>This corpus contains 6 lemmas tagged as pronouns (PRON): _, e, ha'e, ha'ee'ỹ, peteĩ, xee</li>
</ul>
<ul>
<li>This corpus contains 3 lemmas tagged as determiners (DET): _, opa, peva'e</li>
</ul>
<ul>
<li>Out of the above, 1 lemmas occurred sometimes as PRON and sometimes as DET: _</li>
</ul>
<ul>
<li>This corpus contains 1 lemmas tagged as auxiliaries (AUX): _</li>
</ul>
<ul>
<li>Out of the above, 1 lemmas occurred sometimes as AUX and sometimes as VERB: _</li>
</ul>
<ul>
<li>There are 7 <a href="../feat/VerbForm.html">(de)verbal forms:</a></li>
</ul>
<ul>
<li>Fin
<ul>
<li>VERB: _, oma'ẽ, tereo, Ereru, aa, aity, ema'ẽ, ereo, huũ, ikuai</li>
</ul>
</li>
</ul>
<ul>
<li>Inf
<ul>
<li>VERB: _, he'i, aipoa'e, aipoe'i, nda'eveive, oro'e, porayvu, porã-porãve</li>
</ul>
</li>
</ul>
<ul>
<li>Part
<ul>
<li>VERB: _</li>
</ul>
</li>
</ul>
<ul>
<li>Post
<ul>
<li>VERB: _, jekuaa</li>
</ul>
</li>
</ul>
<ul>
<li>Prov
<ul>
<li>VERB: _</li>
</ul>
</li>
</ul>
<ul>
<li>Ser
<ul>
<li>VERB: _, ouvy, aikovy, ejupy</li>
</ul>
</li>
</ul>
<ul>
<li>Vnoun
<ul>
<li>VERB: _, xera'y</li>
</ul>
</li>
</ul>
<h3>Nominal Features</h3>
<ul>
<li><a>Number</a></li>
</ul>
<ul>
<li>Plur
<ul>
<li>NOUN: _</li>
<li>PRON: _</li>
</ul>
</li>
</ul>
<ul>
<li>Sing
<ul>
<li>PRON: _, xee, ndere</li>
</ul>
</li>
</ul>
<h3>Degree and Polarity</h3>
<ul>
<li><a>Polarity</a></li>
</ul>
<ul>
<li>Neg
<ul>
<li>ADV: _</li>
<li>PRON: Ha'e'ỹ, _</li>
<li>VERB-Fin: _, ndaikuaavei</li>
<li>VERB-Inf: _, nda'eveive</li>
<li>VERB-Post: _</li>
<li>VERB-Vnoun: _</li>
</ul>
</li>
</ul>
<h3>Verbal Features</h3>
<ul>
<li><a>Mood</a></li>
</ul>
<ul>
<li>Des
<ul>
<li>VERB-Fin: _, tereo</li>
</ul>
</li>
</ul>
<ul>
<li>Imp
<ul>
<li>VERB-Fin: _, ema'ẽ</li>
<li>VERB-Ser: _, ejupy</li>
</ul>
</li>
</ul>
<ul>
<li>Ind
<ul>
<li>VERB-Fin: _, oma'ẽ, Ereru, aa, aity, ereo, huũ, ikuai, ndaikuaavei, oguenoẽ'i</li>
<li>VERB-Inf: _, he'i, aipoa'e, aipoe'i, nda'eveive, oro'e, porayvu, porã-porãve</li>
<li>VERB-Ser: _, ouvy, aikovy</li>
<li>VERB-Vnoun: _, xera'y</li>
</ul>
</li>
</ul>
<h3>Pronouns, Determiners, Quantifiers</h3>
<ul>
<li><a>PronType</a></li>
</ul>
<ul>
<li>Add
<ul>
<li>PRON: _</li>
</ul>
</li>
</ul>
<ul>
<li>Dem
<ul>
<li>DET: _, peva'e</li>
<li>PRON: _</li>
</ul>
</li>
</ul>
<ul>
<li>Ind
<ul>
<li>DET: _</li>
<li>PRON: _, Peteĩ</li>
</ul>
</li>
</ul>
<ul>
<li>Int
<ul>
<li>PRON: _</li>
</ul>
</li>
</ul>
<ul>
<li>Neg
<ul>
<li>PRON: _</li>
</ul>
</li>
</ul>
<ul>
<li>Prs
<ul>
<li>PRON: _, ha'e, xee, Ha'e'ỹ, ndere</li>
</ul>
</li>
</ul>
<ul>
<li>Tot
<ul>
<li>DET: _, Opa</li>
<li>PRON: _</li>
</ul>
</li>
</ul>
<ul>
<li><a>NumType</a></li>
</ul>
<ul>
<li>Card
<ul>
<li>NUM: _, 5, peteĩ, três</li>
</ul>
</li>
</ul>
<ul>
<li><a>Person</a></li>
</ul>
<ul>
<li>1
<ul>
<li>PRON: _, xee</li>
<li>VERB-Ser: _, aikovy</li>
</ul>
</li>
</ul>
<ul>
<li>2
<ul>
<li>PRON: _, ndere</li>
<li>VERB-Ser: _, ejupy</li>
</ul>
</li>
</ul>
<ul>
<li>3
<ul>
<li>PRON: _, ha'e, Ha'e'ỹ</li>
<li>VERB-Ser: _, ouvy</li>
</ul>
</li>
</ul>
<ul>
<li><a>Number[psor]</a></li>
</ul>
<ul>
<li>Plur
<ul>
<li>NOUN: _</li>
</ul>
</li>
</ul>
<ul>
<li>Sing
<ul>
<li>NOUN: _, Neme, xeryvy</li>
</ul>
</li>
</ul>
<h3>Other Features</h3>
<ul>
<li><a>Clusivity</a>
<ul>
<li>Ex
<ul>
<li>PRON: _</li>
<li>VERB-Ser: _</li>
</ul>
</li>
<li>In
<ul>
<li>PRON: _</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul>
<li><a>Clusivity[obj]</a>
<ul>
<li>Ex
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
<li>In
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul>
<li><a>Clusivity[psor]</a>
<ul>
<li>Ex
<ul>
<li>NOUN: _</li>
</ul>
</li>
<li>In
<ul>
<li>NOUN: _</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul>
<li><a>Clusivity[subj]</a>
<ul>
<li>Ex
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
<li>In
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul>
<li><a>Person[obj]</a>
<ul>
<li>1
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
<li>2
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
<li>3
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul>
<li><a>Person[subj]</a>
<ul>
<li>1
<ul>
<li>VERB-Fin: _, aa, aity, ndaikuaavei</li>
<li>VERB-Vnoun: _, xera'y</li>
</ul>
</li>
<li>2
<ul>
<li>VERB-Fin: _, tereo, Ereru, ema'ẽ, ereo, peipotave</li>
<li>VERB-Vnoun: _</li>
</ul>
</li>
<li>3
<ul>
<li>VERB-Fin: _, oma'ẽ, huũ, ikuai, oguenoẽ'i, oguerovia, oi, ojae'o, ojapyxaka, oma'ẽmba</li>
<li>VERB-Vnoun: _</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul>
<li><a>Subcat</a>
<ul>
<li>Ditr
<ul>
<li>VERB-Fin: _</li>
</ul>
</li>
<li>Indir
<ul>
<li>VERB-Fin: _, oma'ẽ, ema'ẽ, oma'ẽmba</li>
</ul>
</li>
<li>Intr
<ul>
<li>VERB-Fin: _, tereo, aa, ereo, huũ, ikuai, oi, ojae'o, ojapyxaka, onhemondyi</li>
<li>VERB-Inf: _, nda'eveive, porayvu, porã-porãve</li>
<li>VERB-Vnoun: _, xera'y</li>
</ul>
</li>
<li>Tran
<ul>
<li>VERB-Fin: _, Ereru, aity, ndaikuaavei, oguenoẽ'i, oguerovia, omondoropa, peipotave</li>
<li>VERB-Inf: _, he'i, aipoa'e, aipoe'i, oro'e</li>
</ul>
</li>
</ul>
</li>
</ul>
<h2>Syntax</h2>
<h3>Auxiliary Verbs and Copula</h3>
<ul>
<li>This corpus uses 1 lemmas as copulas (<a>cop</a>). Examples: _.</li>
</ul>
<ul>
<li>This corpus uses 1 lemmas as auxiliaries (<a>aux</a>). Examples: _.</li>
</ul>
<h3>Core Arguments, Oblique Arguments and Adjuncts</h3>
Here we consider only relations between verbs (parent) and nouns or pronouns (child).
<ul>
<li><a>nsubj</a>
<ul>
<li>VERB-Fin--NOUN (306)</li>
<li>VERB-Fin--PRON (164)</li>
<li>VERB-Inf--NOUN (79)</li>
<li>VERB-Inf--PRON (17)</li>
<li>VERB-Vnoun--NOUN (2)</li>
<li>VERB-Vnoun--PRON (6)</li>
</ul>
</li>
</ul>
<ul>
<li><a>obj</a>
<ul>
<li>VERB-Fin--NOUN (235)</li>
<li>VERB-Fin--NOUN-ADP(_) (5)</li>
<li>VERB-Fin--PRON (19)</li>
<li>VERB-Fin--PRON-ADP(_) (2)</li>
<li>VERB-Inf--NOUN (3)</li>
<li>VERB-Inf--PRON (2)</li>
</ul>
</li>
</ul>
<ul>
<li><a>iobj</a>
<ul>
</ul>
</li>
</ul>
<h3>Relations Overview</h3>
<ul>
<li>This corpus uses 4 relation subtypes: <a>compound:svc</a>, <a>dep:mod</a>, <a>obl:sentcon</a>, <a>parataxis:rep</a></li>
<li>The following 7 relation types are not used in this corpus at all: <a>iobj</a>, <a>xcomp</a>, <a>expl</a>, <a>clf</a>, <a>list</a>, <a>orphan</a>, <a>reparandum</a></li>
</ul>
| 25.403454 | 1,301 | 0.608145 | yue_Hant | 0.249514 |
1c5c77a1cc452b825cf762a7054f0fdb6e5aa7bc | 628 | md | Markdown | habitican_curse/readme.md | rbavishi/Habitica-s-Curse | 76b5d711cdb13b5f6ef4ccc847463a9e2396a033 | [
"MIT"
] | 40 | 2015-08-27T10:55:41.000Z | 2021-08-16T18:16:56.000Z | habitican_curse/readme.md | rbavishi/Habitica-s-Curse | 76b5d711cdb13b5f6ef4ccc847463a9e2396a033 | [
"MIT"
] | 20 | 2015-09-10T19:01:59.000Z | 2021-07-14T03:49:25.000Z | habitican_curse/readme.md | rbavishi/Habitica-s-Curse | 76b5d711cdb13b5f6ef4ccc847463a9e2396a033 | [
"MIT"
] | 5 | 2015-11-16T13:51:21.000Z | 2021-09-01T13:17:39.000Z | # General Architecture
# Data Flow
## Initialization
When HC is initiated, a new instance of the request manager is created, which kicks off the first data pull. This pulls all user data and all task data (belonging to the user). User data is populated into the status bar and tasks are added to the trinity display at the top.
Tasks are iterated over and a new MenuItem is created for each task, depending on its type. Once all tasks have been instantiated as MenuItems, a new menu is created for each of the three types (habits, dailies, todos). These menus are the only storage for tasks in the code (this might be bad?)
| 78.5 | 291 | 0.773885 | eng_Latn | 0.999913 |
1c5c9abb660388336c7b6a6cf943350f8e435b93 | 1,941 | md | Markdown | src/content/en/tools/lighthouse/audits/deprecated-apis.md | kenminhthai/works | 764edcda1aeb1a171525d1ad3ebd1e6a9020f4d7 | [
"Apache-2.0"
] | null | null | null | src/content/en/tools/lighthouse/audits/deprecated-apis.md | kenminhthai/works | 764edcda1aeb1a171525d1ad3ebd1e6a9020f4d7 | [
"Apache-2.0"
] | null | null | null | src/content/en/tools/lighthouse/audits/deprecated-apis.md | kenminhthai/works | 764edcda1aeb1a171525d1ad3ebd1e6a9020f4d7 | [
"Apache-2.0"
] | null | null | null | project_path: /web/_project.yaml
book_path: /web/tools/_book.yaml
description: Reference documentation for the "Avoids Deprecated APIs" Lighthouse audit.
{# wf_updated_on: 2017-07-12 #}
{# wf_published_on: 2017-07-12 #}
# Avoids Deprecated APIs {: .page-title }
## Why the audit is important {: #why }
Deprecated APIs are scheduled to be removed from Chrome. Calling these APIs
after they're removed will cause errors on your site.
## How to pass the audit {: #how }
Lighthouse flags the deprecated APIs in your report. Go to [Chrome Platform
Status][CPS]{:.external} and expand the entries for the APIs that you're using
to learn more about why the APIs are deprecated, as well as how to replace
them.
[CPS]: https://www.chromestatus.com/features#deprecated
{% include "web/tools/lighthouse/audits/implementation-heading.html" %}
Lighthouse collects the deprecated API warnings that Chrome logs to the
DevTools Console.
[Audit source][src]{:.external}
[src]: https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/audits/deprecations
## Feedback {: #feedback }
{% framebox width="auto" height="auto" enable_widgets="true" %}
<script>
var label = 'Deprecated APIs / Helpful';
var url = 'https://github.com/google/webfundamentals/issues/new?title=[' +
label + ']';
var feedback = {
"category": "Lighthouse",
"choices": [
{
"button": {
"text": "This Doc Was Helpful"
},
"response": "Thanks for the feedback.",
"analytics": {
"label": label
}
},
{
"button": {
"text": "This Doc Was Not Helpful"
},
"response": 'Sorry to hear that. Please <a href="' + url +
'" target="_blank">open a GitHub issue</a> and tell us how to ' +
'make it better.',
"analytics": {
"label": label,
"value": 0
}
}
]
};
</script>
{% include "web/_shared/multichoice.html" %}
{% endframebox %}
| 28.130435 | 97 | 0.657908 | eng_Latn | 0.928855 |
1c5ca09f3caaafe96674eb84b3094ec9c840b4a8 | 790 | md | Markdown | openapi/thinkific/Module.md | ballerina-platform/openapi-connectors | c3641dddbc4e41686ae3d1b9bacf1621b639a019 | [
"Apache-2.0"
] | 4 | 2021-12-13T11:00:09.000Z | 2022-01-30T11:21:38.000Z | openapi/thinkific/Module.md | ballerina-platform/openapi-connectors | c3641dddbc4e41686ae3d1b9bacf1621b639a019 | [
"Apache-2.0"
] | 15 | 2021-11-29T08:42:18.000Z | 2022-02-26T02:43:17.000Z | openapi/thinkific/Module.md | ballerina-platform/openapi-connectors | c3641dddbc4e41686ae3d1b9bacf1621b639a019 | [
"Apache-2.0"
] | 2 | 2021-11-24T07:42:04.000Z | 2021-12-14T06:16:44.000Z | ## Overview
This is the generated connector for [Thinkific API v1](https://developers.thinkific.com/api/using-the-api) OpenAPI specification.
The Thinkific APIs allow developers to extend Thinkific's functionality in a variety of different ways by accessing site data.
## Prerequisites
Before using this connector in your Ballerina application, complete the following:
* Create a [Thinkific](https://www.thinkific.com/) account
* For private apps login to your Thinkific account and obtain API Key and Subdomain using the guide [here](https://developers.thinkific.com/api/api-key-auth/)
* For public apps create a Thinkific Partner Account and register an App to access the credentials. Follow the guide [here](https://developers.thinkific.com/api/authorization/) for more details.
| 49.375 | 195 | 0.792405 | eng_Latn | 0.956782 |
1c5d22498e7afe21f1141c9b9d67270dddba2243 | 139 | md | Markdown | _Upload/8 DiVe.md | YashBhartia00/yashbhartia00.github.io | ba5c46f330fc3a9e9b924b0d60ad6b765413e335 | [
"MIT"
] | null | null | null | _Upload/8 DiVe.md | YashBhartia00/yashbhartia00.github.io | ba5c46f330fc3a9e9b924b0d60ad6b765413e335 | [
"MIT"
] | null | null | null | _Upload/8 DiVe.md | YashBhartia00/yashbhartia00.github.io | ba5c46f330fc3a9e9b924b0d60ad6b765413e335 | [
"MIT"
] | null | null | null | ---
title: "DiVe"
excerpt: "<br/><img src='/images/500x300.png'>"
collection: projects
# redirect_to: https://github.com/YashBhartia00/
--- | 23.166667 | 48 | 0.690647 | yue_Hant | 0.102989 |
1c5d91e0df4c93b1d6c817fbdb2252adc9e86a37 | 3,518 | md | Markdown | wiki/translations/sv/Part_Boolean.md | dwhr-pi/FreeCAD-documentation | 0c889672d80e7969dcabe83f5ddf503e72a4f5bb | [
"CC0-1.0"
] | null | null | null | wiki/translations/sv/Part_Boolean.md | dwhr-pi/FreeCAD-documentation | 0c889672d80e7969dcabe83f5ddf503e72a4f5bb | [
"CC0-1.0"
] | null | null | null | wiki/translations/sv/Part_Boolean.md | dwhr-pi/FreeCAD-documentation | 0c889672d80e7969dcabe83f5ddf503e72a4f5bb | [
"CC0-1.0"
] | null | null | null | # Part Boolean/sv
---
- GuiCommand:/sv Name:Part_Booleans Name/sv:Booleans MenuLocation:Part → Booleans Workbenches:Del, Komplett SeeAlso:[Union](Part_Union/sv.md), [Common](Part_Common/sv.md) and [Cut](Part_Cut/sv.md)---
</div>
This command is an all-in-one boolean tool. It allows you to specify which operation to perform and which parameters to use via the dialog below. For quicker boolean operations, see also [Union](Part_Union/sv.md), [Common](Part_Common/sv.md) and [Cut](Part_Cut/sv.md).
**[<img src=images/Part_Boolean.svg style="width:16px"> [Part Boolean](Part_Boolean.md)**
is a generic all-in-one boolean tool. It allows you to specify the objects and operation to perform via a single dialog.
For quicker access to these operations, use **[<img src=images/Part_Cut.svg style="width:16px"> [Part Cut](Part_Cut.md)**, **[<img src=images/Part_Fuse.svg style="width:16px"> [Part Fuse](Part_Fuse.md)**, **[<img src=images/Part_Common.svg style="width:16px"> [Part Common](Part_Common.md)** and **[<img src=images/Part_Section.svg style="width:16px"> [Part Section](Part_Section.md)**.

*Dialog to select objects and perform boolean operations with them.*
## Usage
See the individual commands:
- **<img src="images/Part_Cut.svg" width=16px> [Part Cut](Part_Cut.md)
**
- **<img src="images/Part_Fuse.svg" width=16px> [Part Fuse](Part_Fuse.md)
**
- **<img src="images/Part_Common.svg" width=16px> [Part Common](Part_Common.md)
**
- **<img src="images/Part_Section.svg" width=16px> [Part Section](Part_Section.md)
**
<div class="mw-translate-fuzzy">
See also Part → [Refine Shape](Part_RefineShape/sv.md)
</div>
## Coplanar problems
The boolean operations are performed by the internal geometry kernel, [OpenCASCADE Technology](OpenCASCADE.md) (OCCT). This library sometimes has problems producing boolean results when the input objects share an edge or a face. To be sure the boolean operation is successful the recommendation is that the shapes intersect each other clearly; this means that in most cases, one shape should protrude or be larger in size than the other shape.
In cases of coplanarity, even if the first boolean operation succeeds, subsequent boolean operations may fail. In this case, the problem may not be in the last operation done, but in the older ones, that is, in the nested operations as indicated in the [tree view](Tree_view.md). To troubleshoot these issues, it is recommended to use the **[<img src=images/Part_CheckGeometry.svg style="width:16px"> [Part CheckGeometry](Part_CheckGeometry.md)** tool to inspect all objects for problems.
<img alt="" src=images/Part_Boolean_cut_coplanar_1.png style="width:500px;">
<img alt="" src=images/Part_Boolean_cut_coplanar_2.png style="width:500px;">
*Left: shapes that share a face, a boolean cut may produce incorrect results. Right: shapes that intersect each other clearly, the boolean cut will be successful in most cases.*
<img alt="" src=images/Part_Boolean_fusion_coplanar_1.png style="width:500px;">
<img alt="" src=images/Part_Boolean_fusion_coplanar_2.png style="width:500px;">
*Left: shapes that share a face, a boolean union may produce incorrect results. Right: shapes that intersect each other clearly, the boolean union will be successful in most cases.*
---
 [documentation index](../README.md) > [Part](Part_Workbench.md) > Part Boolean/sv
| 45.688312 | 488 | 0.748721 | eng_Latn | 0.868222 |
1c5d9da3ea000456919d6f528ba5ec406e914b52 | 1,366 | md | Markdown | 2020/11/21/2020-11-21 04:10.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/11/21/2020-11-21 04:10.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/11/21/2020-11-21 04:10.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020年11月21日04时数据
Status: 200
1.香港疫情急速恶化
微博热度:253160
2.上海新增2例本地确诊病例
微博热度:234561
3.单依纯好声音冠军
微博热度:196148
4.狼殿下
微博热度:153828
5.哈哈哈哈哈
微博热度:122009
6.演员因大雪错过比赛在车厢内表演
微博热度:120956
7.想给单依纯披件衣服
微博热度:104161
8.外交部回应印度破坏中巴经济走廊建设
微博热度:97982
9.王大陆演技
微博热度:93623
10.上海周浦医院
微博热度:71621
11.天津瞰海轩小区为何两度发现感染者
微博热度:70589
12.邓超黄渤跳舞太好笑了
微博热度:61346
13.全球最后一只白色长颈鹿
微博热度:59204
14.斑马森林最后一首竟然唱一滴泪的时间
微博热度:58217
15.拉姆曾晒离婚证说感觉自己安全了
微博热度:53918
16.美国疫情
微博热度:53862
17.青岛七部门约谈滴滴出行
微博热度:53671
18.小米将建年产能千万台手机的无人工厂
微博热度:53655
19.电视剧半生缘改名情深缘起
微博热度:53447
20.何洁再唱你一定要幸福
微博热度:53413
21.鹿鼎记
微博热度:53275
22.特朗普律师开记者会两颊抢镜
微博热度:53166
23.张碧晨雨中唱生命之河
微博热度:49952
24.潘虹淘汰
微博热度:49279
25.姜子牙
微博热度:43129
26.燕云台
微博热度:42947
27.TES无缘季后赛
微博热度:41604
28.微信支持发送大文件
微博热度:41316
29.美国数万老人在养老院非正常死亡
微博热度:41230
30.新西游记8
微博热度:38348
31.上海南汇中心医院
微博热度:38225
32.演员田蕤涉嫌猥亵被提起公诉
微博热度:33833
33.外交部回应加方称不后悔逮捕孟晚舟
微博热度:32534
34.奚梦瑶 我嫁的不是豪门是爱情
微博热度:32338
35.吴彦祖演技
微博热度:31159
36.狼仔坠崖
微博热度:31132
37.壁画中走出来的异域佳人
微博热度:28471
38.广州将实现无人驾驶出租车
微博热度:28309
39.孟美岐被私生调换航班座位
微博热度:27599
40.收集自己掉的头发
微博热度:26379
41.王海回应举报辛巴燕窝售假
微博热度:26176
42.伊能静双十一开箱vlog
微博热度:26004
43.井柏然晒白敬亭丑照
微博热度:22660
44.WINTER身材
微博热度:22531
45.伪造血衣造谣家长被判缓刑
微博热度:21943
46.选秀中的心脏狙击名场面
微博热度:20710
47.美军军舰暴发大规模新冠疫情
微博热度:20621
48.秋日栗栗排骨饭
微博热度:18761
49.刑讯逼供制造冤案3民警获刑
微博热度:18545
50.爱的厘米
微博热度:16693
| 6.696078 | 20 | 0.775988 | yue_Hant | 0.43159 |
1c5ea3bfd37fda03fbda85b6b80ec9e8e0efeb83 | 549 | md | Markdown | _posts/2020/2020-06-11-thunderbird-常用设置、常用插件.md | netmicrobe/witech | 3d6b22d1ca78f9f6984d8ec2838c1b1efc3e62bb | [
"MIT"
] | null | null | null | _posts/2020/2020-06-11-thunderbird-常用设置、常用插件.md | netmicrobe/witech | 3d6b22d1ca78f9f6984d8ec2838c1b1efc3e62bb | [
"MIT"
] | null | null | null | _posts/2020/2020-06-11-thunderbird-常用设置、常用插件.md | netmicrobe/witech | 3d6b22d1ca78f9f6984d8ec2838c1b1efc3e62bb | [
"MIT"
] | 1 | 2018-09-30T08:35:57.000Z | 2018-09-30T08:35:57.000Z | ---
layout: post
title: thunderbird - common settings and addons
categories: [cm, mail]
tags: [mail, thunderbird]
---
Addons
* CompactHeader
* Lightning
* provider-for-google-calendar
* QuickFolders
* ReplyWithHeader
* Send Later
* Send Later Button
### Syncing Thunderbird with Google Calendar
1. Install the `Lightning` and `Provider-for-google-calendar` add-ons, then restart Thunderbird
1. Open Calendar -\> right-click menu New Calendar... -\> On the Network -\> Google Calendar
1. In the "Locate your calendar" window enter your Google account, continue, and complete authentication in the Google web page that opens.
---
UID: NE:ntddrilapitypes.RILMSGDCSMSGCLASS
title: RILMSGDCSMSGCLASS
author: windows-driver-content
description: This topic supports the Windows driver infrastructure and is not intended to be used directly from your code.
old-location: netvista\rilmsgdcsmsgclass.htm
old-project: netvista
ms.assetid: 3190aa21-201a-40d1-b894-dd393e413826
ms.author: windowsdriverdev
ms.date: 4/25/2018
ms.keywords: RILMSGDCSMSGCLASS, RILMSGDCSMSGCLASS enumeration [Network Drivers Starting with Windows Vista], RIL_DCSMSGCLASS_1, RIL_DCSMSGCLASS_2, RIL_DCSMSGCLASS_3, RIL_DCSMSGCLASS_MAX, netvista.rilmsgdcsmsgclass, ntddrilapitypes/RILMSGDCSMSGCLASS, ntddrilapitypes/RIL_DCSMSGCLASS_1, ntddrilapitypes/RIL_DCSMSGCLASS_2, ntddrilapitypes/RIL_DCSMSGCLASS_3, ntddrilapitypes/RIL_DCSMSGCLASS_MAX
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: enum
req.header: ntddrilapitypes.h
req.include-header: Rilapitypes.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- ntddrilapitypes.h
api_name:
- RILMSGDCSMSGCLASS
product:
- Windows
targetos: Windows
req.typenames: RILMSGDCSMSGCLASS
---
# RILMSGDCSMSGCLASS enumeration
## -description
This topic supports the Windows driver infrastructure and is not intended to be used directly from your code.
## -enum-fields
### -field RIL_DCSMSGCLASS_0
### -field RIL_DCSMSGCLASS_1
### -field RIL_DCSMSGCLASS_2
### -field RIL_DCSMSGCLASS_3
### -field RIL_DCSMSGCLASS_MAX
---
title: Eseguire il debug di un'app Web di Azure Service Fabric Mesh in esecuzione nell'ambiente locale
description: In questa esercitazione viene eseguito il debug di un'applicazione Azure Service Fabric Mesh in esecuzione nel cluster locale.
author: georgewallace
ms.topic: tutorial
ms.date: 10/31/2018
ms.author: gwallace
ms.custom: mvc, devcenter
ms.openlocfilehash: 56cc8b4010dc17cf2b723a72898034de8d6a7175
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 10/09/2020
ms.locfileid: "91843295"
---
# <a name="tutorial-debug-a-service-fabric-mesh-application-running-in-your-local-development-cluster"></a>Tutorial: Debug a Service Fabric Mesh application running in your local development cluster

This tutorial is part two of a series. It shows you how to build and debug an Azure Service Fabric Mesh application in your local development cluster.

In this tutorial, you learn:
> [!div class="checklist"]
> * What happens when you create an Azure Service Fabric Mesh application
> * How to set a breakpoint to observe a service-to-service call

In this tutorial series, you learn how to:
> [!div class="checklist"]
> * [Create a Service Fabric Mesh app in Visual Studio](service-fabric-mesh-tutorial-create-dotnetcore.md)
> * Debug a Service Fabric Mesh app running in your local development cluster
> * [Deploy a Service Fabric Mesh app](service-fabric-mesh-tutorial-deploy-service-fabric-mesh-app.md)
> * [Upgrade a Service Fabric Mesh app](service-fabric-mesh-tutorial-upgrade.md)
> * [Clean up Service Fabric Mesh resources](service-fabric-mesh-tutorial-cleanup-resources.md)
[!INCLUDE [preview note](./includes/include-preview-note.md)]
## <a name="prerequisites"></a>Prerequisites

Before you begin this tutorial:

* If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
* Make sure that you've [set up your development environment](service-fabric-mesh-howto-setup-developer-environment-sdk.md), which includes installing the Service Fabric runtime, SDK, Docker, and Visual Studio 2017.

## <a name="download-the-to-do-sample-application"></a>Download the to-do sample application

If you did not create the to-do sample application in [part one of this tutorial series](service-fabric-mesh-tutorial-create-dotnetcore.md), you can download it. In a command window, run the following command to clone the sample app repository to your local machine.
```
git clone https://github.com/azure-samples/service-fabric-mesh
```
The application is in the `src\todolistapp` directory.
## <a name="build-and-debug-on-your-local-cluster"></a>Build and debug on your local cluster

As soon as the project loads, a Docker image is automatically built and deployed to the local cluster. This process may take a while. To monitor the progress in the Visual Studio **Output** pane, set the Output pane's **Show output from** drop-down to **Service Fabric Tools**.

Press **F5** to build and run the service locally. While the project is running and being debugged locally, Visual Studio:

* Makes sure that Docker for Windows is running and is set to use Windows as the container operating system.
* Downloads any missing Docker base images. This part may take some time.
* Builds (or rebuilds) the Docker image used to host your code project.
* Deploys and runs the container in the local Service Fabric development cluster.
* Runs the services and hits any breakpoints you have set.

When the local deployment is complete and the app is running in Visual Studio, a browser window opens with a default sample web page.
## <a name="debugging-tips"></a>Debugging tips

To make the first debugging run (F5) much faster, follow the instructions in [Optimize Visual Studio performance](service-fabric-mesh-howto-optimize-vs.md).

There is currently an issue where the call to `using (HttpResponseMessage response = client.GetAsync("").GetAwaiter().GetResult())` results in a failure to connect to the service. This can happen whenever the host's IP address changes. To work around the problem:

1. Remove the app from the local cluster (in Visual Studio, **Build** > **Clean Solution**).
2. From the Service Fabric Local Cluster Manager, select **Stop Local Cluster** and then **Start Local Cluster**.
3. Redeploy the app (in Visual Studio, **F5**).

If you see the error **No Service Fabric local cluster is running**, make sure that the Service Fabric Local Cluster Manager (LCM) is running. Right-click the LCM icon in the taskbar and then click **Start Local Cluster**. Once it has started, go back to Visual Studio and press **F5**.

If you get a **404** error when the app starts, the environment variables in **service.yaml** may be incorrect. Make sure that `ApiHostPort` and `ToDoServiceName` are set as described in [Create environment variables](./service-fabric-mesh-tutorial-create-dotnetcore.md#create-environment-variables).

If you get build errors in **service.yaml**, make sure that spaces, not tabs, are used to indent the lines. Also, for the time being, you must build the app using the English locale.
### <a name="debug-in-visual-studio"></a>Debug in Visual Studio

When you debug a Service Fabric Mesh application in Visual Studio, you are using a local Service Fabric development cluster. To see how the to-do items are retrieved from the back-end service, debug into the OnGet() method.

1. In the **WebFrontEnd** project, open **Pages** > **Index.cshtml** > **Index.cshtml.cs** and set a breakpoint in the **OnGet** method (line 17).
2. In the **ToDoService** project, open **TodoController.cs** and set a breakpoint in the **Get** method (line 15).
3. Go back to the browser and refresh the page. You hit the breakpoint in the web front end's `OnGet()` method. You can inspect the `backendUrl` variable to see how the environment variables defined in the **service.yaml** file are combined into the URL used to contact the back-end service.
4. Step over (F10) the `client.GetAsync(backendUrl).GetAwaiter().GetResult())` call to hit the `Get()` breakpoint in the controller. In this method you can see how the list of to-do items is retrieved from the in-memory list.
5. When you're done, press **Shift+F5** to stop debugging the project in Visual Studio.
## <a name="next-steps"></a>Next steps

In this part of the tutorial, you learned:

> [!div class="checklist"]
> * What happens when you create an Azure Service Fabric Mesh application
> * How to set a breakpoint to observe a service-to-service call

Advance to the next tutorial:
> [!div class="nextstepaction"]
> [Deploy a Service Fabric Mesh app](service-fabric-mesh-tutorial-deploy-service-fabric-mesh-app.md)
---
title: Metrics Store - Schema
category: feature
authors: sradco
feature_name: oVirt Metrics Store Schema
feature_modules: engine
feature_status: In Development
---
# oVirt Metrics - Schema
Please use the following for constructing the metrics visualizations in the UI tool.
General fields for metrics records:
- hostname: host FQDN
- collectd.interval: 10 (in seconds)
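As a hedged illustration of how these fields combine, the sketch below builds one flattened metrics record and a small helper that derives the metric value field name from `collectd.plugin` and `collectd.type`. All concrete values (hostnames, cluster name, numbers) are invented, and the helper covers only the simple case where the value field is `collectd.<plugin>.<type>`; types with data source suffixes (for example `ps_cputime.syst`) also append the dsname.

```python
# Illustrative example only: the field *names* follow the schema on this page,
# but every value below (hostname, FQDN, cluster, numbers) is made up.
sample_record = {
    "hostname": "host1.example.com",          # host FQDN (hypothetical)
    "collectd.interval": 10,                  # collection interval, seconds
    "collectd.plugin": "memory",
    "collectd.type": "memory",
    "collectd.type_instance": "used",
    "collectd.dstypes": "gauge",
    "ovirt.entity": "host",
    "ovirt.engine_fqdn.raw": "engine.example.com",
    "ovirt.cluster_name.raw": "Default",
    "collectd.memory.memory": 4_294_967_296,  # the metric value field
}

REQUIRED_GENERAL_FIELDS = ("hostname", "collectd.interval")

def missing_general_fields(record):
    """Return the general fields that a metrics record is missing."""
    return [f for f in REQUIRED_GENERAL_FIELDS if f not in record]

def metric_value_field(record):
    """Build the metric value field name, e.g. 'collectd.memory.memory'."""
    return "collectd.{}.{}".format(record["collectd.plugin"],
                                   record["collectd.type"])
```

For example, `metric_value_field(sample_record)` yields `collectd.memory.memory`, which is exactly the key holding the numeric value in the record above.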
## NFS Plugin
- collectd.plugin: nfs
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.nfs.nfs_procedure | nfs_procedure | NFS activities | fs_name + server or client (Example: v3client) | derive |
**NFS activities**
null / getattr / lookup / access / readlink / read / write / create / mkdir / symlink / mknod / rename / readdir / remove / link / fsstat / fsinfo / readdirplus / pathconf / rmdir / commit / compound / reserved / access / close / delegpurge / delegreturn / getattr / getfh / lock / lockt / locku / lookupp / open_downgrade / putfh / putpubfh / putrootfh / renew / restorefh / savefh / secinfo / setattr / setclientid / setcltid_confirm / verify / open / openattr / open_confirm / exchange_id / create_session / destroy_session / bind_conn_to_session / nverify / release_lockowner / backchannel_ctl / free_stateid / get_dir_delegation / getdeviceinfo / getdevicelist / layoutcommit / layoutget / layoutreturn / secinfo_no_name / sequence / set_ssv / test_stateid / want_delegation / destroy_clientid / reclaim_complete
## Processes Plugin
- collectd.plugin: processes
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.processes.ps_state | ps_state | running/ zombies/ stopped/ paging/ blocked/ sleeping | | gauge |
| collectd.processes.ps_disk_ops.read | ps_disk_ops | | process name | derive |
| collectd.processes.ps_disk_ops.write | ps_disk_ops | | process name | derive |
| collectd.processes.ps_vm | ps_vm | | process name | gauge |
| collectd.processes.ps_rss | ps_rss | | process name | gauge |
| collectd.processes.ps_data | ps_data | | process name | gauge |
| collectd.processes.ps_code | ps_code | | process name | gauge |
| collectd.processes.ps_stacksize | ps_stacksize | | process name | gauge |
| collectd.processes.ps_cputime.syst | ps_cputime | | process name | derive |
| collectd.processes.ps_cputime.user | ps_cputime | | process name | derive |
| collectd.processes.ps_count.processes | ps_count | | process name | gauge |
| collectd.processes.ps_count.threads | ps_count | | process name | gauge |
| collectd.processes.ps_pagefaults.majfltadd | ps_pagefaults | | process name | derive |
| collectd.processes.ps_pagefaults.minflt | ps_pagefaults | | process name | derive |
| collectd.processes.ps_disk_octets.write | ps_disk_octets | | process name | derive |
| collectd.processes.ps_disk_octets.read | ps_disk_octets | | process name | derive |
| collectd.processes.fork_rate | fork_rate | | | derive |
## Disk Plugin
- collectd.plugin: disk
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.disk.disk_ops.read | disk_ops | | disk name | derive |
| collectd.disk.disk_ops.write | disk_ops | | disk name | derive |
| collectd.disk.disk_merged.read | disk_merged | | disk name | derive |
| collectd.disk.disk_merged.write | disk_merged | | disk name | derive |
| collectd.disk.disk_time.read | disk_time | | disk name | derive |
| collectd.disk.disk_time.write | disk_time | | disk name | derive |
| collectd.disk.pending_operations | pending_operations | | disk name | gauge |
| collectd.disk.disk_io_time.io_time | disk_io_time | | disk name | derive |
| collectd.disk.disk_io_time.weighted_io_time | disk_io_time | | disk name | derive |
## Interface Plugin
- collectd.plugin: interface
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.interface.if_octets.rx | if_octets | | Network Name | derive |
| collectd.interface.if_octets.tx | if_octets | | Network Name | derive |
| collectd.interface.if_packets.rx | if_packets | | Network Name | derive |
| collectd.interface.if_packets.tx | if_packets | | Network Name | derive |
| collectd.interface.if_errors.rx | if_errors | | Network Name | derive |
| collectd.interface.if_errors.tx | if_errors | | Network Name | derive |
| collectd.interface.if_dropped.rx | if_dropped | | Network Name | derive |
| collectd.interface.if_dropped.tx | if_dropped | | Network Name | derive |
## CPU Plugin
- collectd.plugin: cpu
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.cpu.percent | percent | interrupt / user / wait / nice / softirq / system / idle / steal | cpu number | gauge |
## DF Plugin
- collectd.plugin: df
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.df.df_complex | df_complex | free / used / reserved | A mounted partition | gauge |
| collectd.df.percent_bytes | percent_bytes | free / used / reserved | A mounted partition | gauge |
## Entropy Plugin
- collectd.plugin: entropy
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.entropy.entropy | entropy | | | gauge |
## Memory Plugin
- collectd.plugin: memory
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.memory.memory | memory | used / cached / free / slab_unrecl / slab_recl / buffered | | gauge |
| collectd.memory.percent | percent | used / cached / free / slab_unrecl / slab_recl / buffered | | gauge |
## Swap Plugin
- collectd.plugin: swap
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.swap.swap | swap | used / free / cached | | gauge |
| collectd.swap.swap_io | swap_io | in / out | | derive |
## Load Plugin
- collectd.plugin: load
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.load.load.longterm | load | | | gauge |
| collectd.load.load.midterm | load | | | gauge |
| collectd.load.load.shortterm | load | | | gauge |
## Aggregation Plugin
- collectd.plugin: aggregation
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.aggregation.percent | percent | interrupt / user / wait / nice / softirq / system / idle / steal | cpu-average / cpu-sum | gauge |
## Statsd Plugin (VDSM host stats)
- collectd.plugin: statsd
- ovirt.entity: host
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.statsd.host_storage | host_storage | storage uuid | | gauge |
## Virt Plugin
- collectd.plugin: virt
- ovirt.entity: vm
- ovirt.engine_fqdn.raw: _FQDN of the engine_
- ovirt.cluster_name.raw: _Cluster name_
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.virt.memory | memory | rss / total / actual_balloon / available / unused / usable / last_update / major_fault / minor_fault / swap_in / swap_out | vm name | gauge |
| collectd.virt.disk_octets.read | disk_octets.read | disk name | vm name | gauge |
| collectd.virt.disk_octets.write | disk_octets.write | disk name | vm name | gauge |
| collectd.virt.disk_ops.read | disk_ops.read | disk name | vm name | gauge |
| collectd.virt.disk_ops.write | disk_ops.write | disk name | vm name | gauge |
| collectd.virt.if_dropped.rx | if_dropped.rx | network name | vm name | derive |
| collectd.virt.if_dropped.tx | if_dropped.tx | network name | vm name | derive |
| collectd.virt.if_errors.rx | if_errors.rx | network name | vm name | derive |
| collectd.virt.if_errors.tx | if_errors.tx | network name | vm name | derive |
| collectd.virt.if_octets.rx | if_octets.rx | network name | vm name | derive |
| collectd.virt.if_octets.tx | if_octets.tx | network name | vm name | derive |
| collectd.virt.if_packets.rx | if_packets.rx | network name | vm name | derive |
| collectd.virt.if_packets.tx | if_packets.tx | network name | vm name | derive |
| collectd.virt.virt_cpu_total | virt_cpu_total | cpu number | vm name | derive |
| collectd.virt.virt_vcpu | virt_vcpu | cpu number | vm name | derive |
| collectd.virt.percent | percent | virt_cpu_total | vm name | gauge |
| collectd.virt.ps_cputime.user | ps_cputime.user | | vm name | derive |
| collectd.virt.ps_cputime.syst | ps_cputime.syst | | vm name | derive |
| collectd.virt.total_requests | total_requests | flush-DISK | vm name | derive |
| collectd.virt.total_time_in_ms | total_time_in_ms | flush-DISK | vm name | derive |
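To show how the virt-plugin rows above might be consumed, here is a small, hypothetical sketch that groups `collectd.virt.memory` values by VM name (the VM name is carried in `collectd.plugin_instance` for this plugin). The records and their values are invented for illustration and do not come from a real cluster.

```python
# Hypothetical flattened records shaped like the virt-plugin rows above;
# VM names and values are invented for illustration.
records = [
    {"collectd.plugin": "virt", "collectd.plugin_instance": "vm01",
     "collectd.type_instance": "rss", "collectd.virt.memory": 1024.0},
    {"collectd.plugin": "virt", "collectd.plugin_instance": "vm01",
     "collectd.type_instance": "unused", "collectd.virt.memory": 512.0},
    {"collectd.plugin": "virt", "collectd.plugin_instance": "vm02",
     "collectd.type_instance": "rss", "collectd.virt.memory": 2048.0},
]

def memory_by_vm(records, type_instance="rss"):
    """Map VM name -> collectd.virt.memory value for one type_instance."""
    out = {}
    for r in records:
        if r.get("collectd.plugin") != "virt":
            continue  # ignore records from other plugins
        if r.get("collectd.type_instance") != type_instance:
            continue  # keep only the requested memory counter
        out[r["collectd.plugin_instance"]] = r["collectd.virt.memory"]
    return out
```

A visualization query in the UI tool would apply the same two filters (plugin and type_instance) and then group by `collectd.plugin_instance`.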
## Postgresql Plugin
- collectd.plugin: postgresql
- ovirt.entity: engine
| Metric value field name | collectd.type | collectd.type_instance | collectd.plugin_instance | collectd.dstypes |
|-------------------------|---------------|------------------------|--------------------------|------------------|
| collectd.postgresql.pg_numbackends | pg_numbackends | | db name | gauge |
| collectd.postgresql.pg_n_tup_g | pg_n_tup_g | dead / live | db name | gauge |
| collectd.postgresql.pg_n_tup_c | pg_n_tup_c | ins / upd / del / hot_upd | db name | derive |
| collectd.postgresql.pg_xact | pg_xact | num_deadlocks | db name | derive |
| collectd.postgresql.pg_db_size | pg_db_size | | db name | gauge |
| collectd.postgresql.pg_blks | pg_blks | heap_read / heap_hit / idx_hit / toast_read / toast_hit / tidx_read / idx_read | db name | derive |
---
layout: home
title: Home
landing-title: '- _ e-lu-sive _ -'
description: ' Tending to elude capture, perception, comprehension, or memory.'
image: assets/images/nasa.jpg
author: Unknown
show_tile: false
---
<div>
<!-- Three -->
<section id="two">
<div class="inner">
<header class="major">
</header>
<p>All notes and information on completed HTB machines.</p>
<ul class="actions">
<li><a href="all_posts.html" class="button next">GET SYSTEM</a></li>
</ul>
</div>
</section>
</div>