# ODESystem
## System Constructors
```@docs
ODESystem
```
## Composition and Accessor Functions
- `sys.eqs` or `equations(sys)`: The equations that define the ODE.
- `sys.states` or `states(sys)`: The set of states in the ODE.
- `sys.parameters` or `parameters(sys)`: The parameters of the ODE.
- `sys.iv` or `independent_variable(sys)`: The independent variable of the ODE.
## Transformations
```@docs
ode_order_lowering
liouville_transform
```
## Applicable Calculation and Generation Functions
```julia
calculate_jacobian
calculate_tgrad
calculate_factorized_W
generate_jacobian
generate_tgrad
generate_factorized_W
jacobian_sparsity
```
## Problem Constructors
```@docs
ODEFunction
ODEProblem
```
---
# Documentation: https://wowchemy.com/docs/managing-content/
title: Sharing Session on Innovation and Entrepreneurship
event: Sharing Session on Innovation and Entrepreneurship Experience
event_url: https://qm.nwpu.edu.cn/info/1025/4114.htm
location: Lecture Hall, East Teaching Building, Chang'an Campus
address:
street: 1 Dongxiang Street
city: Xi'an
  region: Shaanxi
postcode: '710129'
country: China
summary: Sharing Session on Innovation and Entrepreneurship Experience
abstract: "Shubo Liu presented the 'Micro 5G UAV and its Cloud System' project that he led, which won a silver award at the 2020 provincial 'Challenge Cup' along with other innovation and entrepreneurship competition awards. He then shared his understanding of innovation and entrepreneurship based on his own experience. He believes that innovation and entrepreneurship for college students is a market- and application-oriented way of thinking that drives innovation through entrepreneurship and uses the market to push technology forward."
# Talk start and end times.
# End time can optionally be hidden by prefixing the line with `#`.
date: "2020-11-19T19:00:00+08:00"
date_end: "2020-11-19T20:30:00+08:00"
all_day: false
# Schedule page publish date (NOT event date).
publishDate: "2020-11-15T19:31:00+08:00"
authors: [Shubo Liu, Hongsheng Zhang]
tags: [Innovation & Entrepreneurship, NIUVS Team, 5G-UVS Project]
# Is this a featured event? (true/false)
featured: True
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
#caption: ""
focal_point: right
preview_only: false
# Custom links (optional).
# Uncomment and edit lines below to show custom links.
# links:
# - name: Follow
# url: https://twitter.com
# icon_pack: fab
# icon: twitter
# Optional filename of your slides within your event's folder or a URL.
# url_slides:
# url_code:
# url_pdf:
# url_video:
# Markdown Slides (optional).
# Associate this event with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides = "example-slides"` references `content/slides/example-slides.md`.
# Otherwise, set `slides = ""`.
# slides: ""
# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
# projects: []
---
### Various utilities used for building, deploying, and configuring at the customer site
* [**Bin2sharp**](Bin2sharp/README.md) - converts a binary file into C# source code;
* [**MachineInfo**](MachineInfo/README.md) - basic information about the user's machine;
* [**MsgBox**](MsgBox/README.md) - shows the user a message box;
* [**PatchVersion**](PatchVersion/README.md) - patches the version in the project file;
* [**UniCall**](UniCall/README.md) - invokes a method from a .NET assembly.
---
layout: page
title: "Drums"
lead: "Drums"
---
* <span class="badge badge-primary">video</span> [10 levels of drumming](level/)
* <span class="badge badge-primary">video</span> [Why I don't do drum covers with OrlandoDrummer](cover/)
## Tempo
* <span class="badge badge-primary">video</span> [Internal tempo workshop](time/)
## Brushes
* <span class="badge badge-primary">video</span> [Internal tempo workshop](vassourinha/)
---
title: Python web application project templates
description: Visual Studio provides templates for web applications in Python using the Bottle, Flask, and Django frameworks. Support includes debug configurations and publishing to Azure App Service.
ms.date: 01/28/2019
ms.topic: conceptual
author: rjmolyneaux
ms.author: rmolyneaux
manager: jmartens
ms.technology: vs-python
ms.workload:
- python
- data-science
ms.openlocfilehash: 56a9f0bc78d942d34ce0a2aadfd9f7a59becd7c8
ms.sourcegitcommit: 8fae163333e22a673fd119e1d2da8a1ebfe0e51a
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 10/13/2021
ms.locfileid: "129972905"
---
# <a name="python-web-application-project-templates"></a>Python web application project templates
Python in Visual Studio supports developing web projects in the Bottle, Flask, and Django frameworks through project templates and a debug launcher that can be configured to handle various frameworks. These templates include a *requirements.txt* file that declares the necessary dependencies. When you create a project from one of these templates, Visual Studio prompts you to install those packages (see [Install project requirements](#install-project-requirements) later in this article).
You can also use the generic **Web Project** template for other frameworks such as Pyramid. In that case, no frameworks are installed with the template. Instead, install the necessary packages into the environment you're using for the project (see the [Python Environments window - Packages tab](python-environments-window-tab-reference.md#packages-tab)).
For information on deploying a Python web app to Azure, see [Publish to Azure App Service](publishing-python-web-applications-to-azure-from-visual-studio.md).
## <a name="use-a-project-template"></a>Use a project template
You create a project from a template with the **File** > **New** > **Project** command. To see the templates for web projects, select **Python** > **Web** on the left side of the dialog. Then select a template of your choice, enter names for the project and solution, set the options for the solution directory and Git repository, and select **OK**.

::: moniker range="<=vs-2017"
The generic **Web Project** template mentioned earlier provides only an empty Visual Studio project with no code and no assumptions other than being a Python project.
::: moniker-end
::: moniker range=">=vs-2019"
The generic **Web Project** template mentioned earlier provides only an empty Visual Studio project with no code and no assumptions other than being a Python project.
::: moniker-end
All the other templates are based on the Bottle, Flask, or Django web frameworks and fall into three general groups, as described in the following sections. Apps created from any of these templates contain sufficient code to run and debug the app locally. Each also provides the necessary [WSGI](https://www.python.org/dev/peps/pep-3333/) (python.org) application object for use with production web servers.
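Because every template ultimately hands a WSGI application object to the server, a minimal framework-free sketch may help make that contract concrete. The code below is illustrative only and is not taken from any of the templates:

```python
# A minimal WSGI application object, the same kind of callable that the
# Bottle, Flask, and Django templates expose to production servers.
def app(environ, start_response):
    # environ: dict of CGI-style request variables supplied by the server.
    # start_response: callable used to begin the HTTP response.
    body = b"Hello from a WSGI app"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # WSGI apps return an iterable of byte strings.
    return [body]
```

A production server (or the launcher settings described later) imports a callable like this by module name, which is what a reference such as `{StartupModule}:app` points at.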
### <a name="blank-group"></a>Blank group
All the **Blank \<framework> Web Project** templates create a project with more or less minimal boilerplate code and the necessary dependencies declared in a *requirements.txt* file.
| Template | Description |
| --- | --- |
| **Blank Web Project** | Generates a minimal app in *app.py* with a home page for `/` and a `/hello/<name>` page that echoes `<name>` using a very short inline page template. |
| **Blank Django Web Project** | Generates a Django project with the core Django site structure but no Django apps. For more information, see [Django templates](python-django-web-application-project-template.md) and [Learn Django Step 1](learn-django-in-visual-studio-step-01-project-and-solution.md). |
| **Blank Flask Web Project** | Generates a minimal app with a single "Hello World!" page for `/`. This app is similar to the result of following the steps in [Quickstart: Create your first Python web app using Visual Studio](../ide/quickstart-python.md?toc=/visualstudio/python/toc.json&bc=/visualstudio/python/_breadcrumb/toc.json). See also [Learn Flask Step 1](learn-flask-visual-studio-step-01-project-solution.md).
### <a name="web-group"></a>Web group
All the **\<Framework> Web Project** templates create a starter web app with the same design regardless of the chosen framework. The app has Home, About, and Contact pages, a navigation bar, and a responsive design using Bootstrap. Each app is appropriately configured to serve static files (CSS, JavaScript, and fonts) and uses a page template mechanism appropriate to the framework.
| Template | Description |
| --- | --- |
| **Bottle Web Project** | Generates an app whose static files are contained in the *static* folder and handled through code in *app.py*. Routing for the individual pages is contained in *routes.py*, and the *views* folder contains the page templates. |
| **Django Web Project** | Generates a Django project and a Django app with three pages, authentication support, and a SQLite database (but no data models). For more information, see [Django templates](python-django-web-application-project-template.md) and [Learn Django Step 4](learn-django-in-visual-studio-step-04-full-django-project-template.md). |
| **Flask Web Project** | Generates an app whose static files are contained in the *static* folder. Code in *views.py* handles routing, with page templates using the Jinja engine in the *templates* folder. The *runserver.py* file provides startup code. See also [Learn Flask Step 1](learn-flask-visual-studio-step-01-project-solution.md).
::: moniker range="vs-2017"
### <a name="polls-group"></a>Polls group
The **Polls \<framework> Web Project** templates create a starter web app through which users can vote on different poll questions. Each app builds on the structure of the Web Project templates and uses a database to manage the polls and user responses. The apps include appropriate data models and a special app page (/seed) that loads polls from a *samples.json* file.
| Template | Description |
| --- | --- |
| **Polls Bottle Web Project** | Generates an app that can run against an in-memory database, MongoDB, or Azure Table Storage, configured using the `REPOSITORY_NAME` environment variable. The data models and data-store code are contained in the *models* folder, and the *settings.py* file contains code to determine which data store is used. |
| **Polls Django Web Project** | Generates a Django project and a Django app with three pages and a SQLite database. Includes customizations of the Django administrative interface that allow an authenticated administrator to create and manage the polls. For more information, see [Django templates](python-django-web-application-project-template.md) and [Learn Django Step 6](learn-django-in-visual-studio-step-06-polls-django-web-project-template.md). |
| **Polls Flask/Jade Web Project** | Generates the same app as the **Polls Flask Web Project** template, but uses the Jade extension for the Jinja templating engine. |
::: moniker-end
## <a name="install-project-requirements"></a>Install project requirements
When you create a project from a framework-specific template, a dialog appears to help you install the necessary packages using pip. For web projects we also recommend using a [virtual environment](selecting-a-python-environment-for-a-project.md#use-virtual-environments), so that the correct dependencies are included when you publish your web site:

If you're using source control, you typically omit the virtual environment folder because that environment can be re-created using only *requirements.txt*. The best way to exclude the folder is to first select the **I will install them myself** option in the prompt shown above, then disable auto-commit before creating the virtual environment. For details, see [Learn Django tutorial steps 1-2 and 1-3](learn-django-in-visual-studio-step-01-project-and-solution.md#step-1-2-examine-the-git-controls-and-publish-to-a-remote-repository) and [Learn Flask tutorial steps 1-2 and 1-3](learn-flask-visual-studio-step-01-project-solution.md#step-1-2-examine-the-git-controls-and-publish-to-a-remote-repository).
When deploying to Microsoft Azure App Service, select a version of Python as a [site extension](./managing-python-on-azure-app-service.md?view=vs-2019&preserve-view=true) and manually install packages. Because Azure App Service does not automatically install packages from a *requirements.txt* file when deployed from Visual Studio, follow the configuration details at [aka.ms/PythonOnAppService](managing-python-on-azure-app-service.md).
## <a name="debugging"></a>Debugging
When a web project is started for debugging, Visual Studio starts a local web server on a random port and opens your default browser to that address and port. To specify additional options, right-click the project, select **Properties**, and select the **Web Launcher** tab:

In the **Debug** group:
- **Search Paths**, **Script Arguments**, **Interpreter Arguments**, and **Interpreter Path**: these options are the same as for [normal debugging](debugging-python-in-visual-studio.md).
- **Launch URL**: specifies the URL that is opened in your browser. Defaults to `localhost`.
- **Port Number**: the port to use if none is specified in the URL (Visual Studio selects one automatically by default). This setting lets you override the default value of the `SERVER_PORT` environment variable, which the templates use to configure the port the local debug server listens on.
The properties in the **Run Server Command** and **Debug Server Command** groups (the latter is below what's shown in the image) determine how the web server is launched. Because many frameworks require the use of a script outside of the current project, the script can be configured here and the name of the startup module can be passed as a parameter.
- **Command**: can be a Python script (*\*.py* file), a module name (as in `python.exe -m module_name`), or a single line of code (as in `python.exe -c "code"`). The value in the drop-down indicates which of these types is intended.
- **Arguments**: these arguments are passed on the command line following the command.
- **Environment**: a newline-delimited list of \<NAME>=\<VALUE> pairs specifying environment variables. These variables are set after all properties that may modify the environment, such as the port number and search paths, and therefore may overwrite those values.
Any project property or environment variable can be specified with MSBuild syntax, for example: `$(StartupFile) --port $(SERVER_PORT)`.
`$(StartupFile)` is the relative path to the startup file and `{StartupModule}` is the importable name of the startup file. `$(SERVER_HOST)` and `$(SERVER_PORT)` are normal environment variables that are set by the **Launch URL** and **Port Number** properties, automatically, or by the **Environment** property.
> [!Note]
> Values in **Run Server Command** are used with the **Debug** > **Start Server** command or **Ctrl**+**F5**; values in the **Debug Server Command** group are used with the **Debug** > **Start Debug Server** command or **F5**.
### <a name="sample-bottle-configuration"></a>Sample Bottle configuration
The **Bottle Web Project** template includes boilerplate code that does the necessary configuration. An imported Bottle app may not contain this code, in which case the following settings launch the app using the installed `bottle` module:
- **Run Server Command** group:
  - **Command**: `bottle` (module)
  - **Arguments**: `--bind=%SERVER_HOST%:%SERVER_PORT% {StartupModule}:app`
- **Debug Server Command** group:
  - **Command**: `bottle` (module)
  - **Arguments**: `--debug --bind=%SERVER_HOST%:%SERVER_PORT% {StartupModule}:app`
The `--reload` option is not recommended when using Visual Studio for debugging.
### <a name="sample-pyramid-configuration"></a>Sample Pyramid configuration
Pyramid apps are currently best created using the `pcreate` command-line tool. Once an app has been created, it can be imported using the [**From Existing Python code**](managing-python-projects-in-visual-studio.md#create-a-project-from-existing-files) template. Select the **Generic Web Project** customization afterward and configure the options. These settings assume that Pyramid is installed in a virtual environment at `..\env`.
- **Debug** group:
  - **Server Port**: 6543 (or whatever is configured in the *.ini* files)
- **Run Server Command** group:
  - **Command**: `..\env\scripts\pserve-script.py` (script)
  - **Arguments**: `Production.ini`
- **Debug Server Command** group:
  - **Command**: `..\env\scripts\pserve-script.py` (script)
  - **Arguments**: `Development.ini`
> [!Tip]
> You likely need to configure the **Working Directory** property of the project because Pyramid apps are typically one folder below the project root.
### <a name="other-configurations"></a>Other configurations
If you have settings for another framework that you'd like to share, or if you'd like to request settings for another framework, open an [issue on GitHub](https://github.com/Microsoft/PTVS/issues).
## <a name="see-also"></a>See also
- [Python item templates reference](python-item-templates.md)
- [Publish to Azure App Service](publishing-python-web-applications-to-azure-from-visual-studio.md)
# Contribution Guide
A few important factoids to consume about the Repo, before you contribute.
## Opportunities to contribute
Start by looking through the active issues for [low hanging fruit](https://github.com/Azure/Aks-Construction/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22).
Another area that will help you get more familiar with the project is by running the Helper Web App locally and writing some new [Playwright web tests](helper/.playwrighttests) to make our web publishing/testing process more robust.
## Action Workflows
Various workflows run on Push / PR / Schedule.
| Workflow | Fires on | Purpose |
|-------------|-----------|----------|
| Bicep Build | Every Push | `Quality` To run the bicep linter upon changes to the bicep files |
| Greetings | Issue / PR | `Community` Greeting new contributors to the repo |
| Stale bot | Issue / PR | `Tidy` Marks old issues as stale |
| Labeller | PR | `Tidy` Adds relevant labels to PR's based on files changed |
| Publish Helper | PR | `Quality` Tests changes to the UI work |
| Publish Helper | Push `main` | Publishes the UI to GitHub Pages |
| Check Markdown | PR | `Quality` Checks markdown files for spelling mistakes |
| Infra CI - Private Cluster | Push / PR / Schedule | `Quality` Low maturity IaC deployment example. Tests the most secure/private parameter config |
| Infra CI - Byo Vnet Cluster | Push / PR / Schedule | `Quality` High maturity IaC deployment example. Tests a typical production grade parameter config |
| Infra CI - Starter Cluster | Push / PR / Schedule | `Quality` Low maturity IaC deployment example. Tests a sandbox grade parameter config |
| InfraCI - Regression Validation | Push / PR / Schedule | `Quality` Validates multiple parameter files against the bicep code to cover regression scenarios |
| App CI | Manual | `Quality` Application deployment sample showing different application deployment practices and automation capabilities |
### Enforced PR Checks
Each workflow has a *Validate* job that is required to pass before merging to main. PRs tagged with `bug` that contain changes to bicep or workflow files will need to pass all of the jobs in the relevant workflows before merge is possible.
## Branches
### Feature Branch
For the *most part* we try to use feature branches to PR to Main
```text
┌─────────────────┐ ┌───────────────┐
│ │ │ │
│ Feature Branch ├────────►│ Main │
│ │ │ │
└─────────────────┘ └───────────────┘
```
Branch Policies require the Validation stage of our GitHub Action Workflows to successfully run. The Validation stage does an Az Deployment WhatIf and Validation on an Azure Subscription, however later stages in the Actions that actually deploy resources do not run. This is because we've got a high degree of confidence in the Validate/WhatIf capability. We do run the full stage deploys on a weekly basis to give that warm fuzzy feeling. At some point, we'll run these as part of PR to main.
### The Develop Branch
Where there have been significant changes and we want the full gamut of CI testing to be run on real Azure Infrastructure - then the Develop branch is used.
It gives us the nice warm fuzzy feeling before merging into Main.
We anticipate the use of the Develop branch is temporary.
```text
┌─────────────────┐ ┌─────────────┐ ┌────────────┐
│ │ │ │ │ │
│ Feature Branch ├────────►│ Develop ├──────►│ Main │
│ │ │ │ │ │
└─────────────────┘ └─────────────┘ └────────────┘
▲
┌─────────────────┐ │
│ │ │
│ Feature Branch ├───────────────┘
│ │
└─────────────────┘
```
## Releases
Releases are used to capture a tested release (all stages, not just Validation), where there are significant new features or bugfixes. The release does not include CI Action files, just the Bicep code.
## Area change guidance
### Bicep code
When changing the Bicep code, try to build into your `developer inner loop` the following
- Review the linting warnings in VSCode. When you push, the bicep will be compiled to json with warnings/errors picked up
- If making a breaking change (eg. changing a parameter datatype), pay attention to the Regression parameter files. These will be checked during PR. If the change you're making isn't covered by an existing parameter file, then add one.
#### Breaking Changes
Should be avoided wherever possible, and where necessary highlight the breaking change in the release notes. Version 1.0 will signify a stricter policy around breaking changes.
#### PSRule validation for Well Architected Analysis
[PSRule for Azure](https://azure.github.io/PSRule.Rules.Azure) provides analysis for IaC against the Well Architected Framework. It is leveraged in the GitHub actions that run on PR, but you can leverage it locally with the following script;
```powershell
Install-Module -Name 'PSRule.Rules.Azure' -Repository PSGallery -Scope CurrentUser
$paramPath="./.github/workflows_dep/regressionparams/optimised-for-well-architected.json"
test-path $paramPath
Assert-PSRule -Module 'PSRule.Rules.Azure' -InputPath $paramPath -Format File -outcome Processed
```
### The Wizard Web App
The [configuration experience](https://azure.github.io/Aks-Construction/) is hosted in GitHub pages. It's a static web app, written in NodeJS using [FluentUI](https://developer.microsoft.com/en-us/fluentui).
#### Playwright tests
Playwright is used to help verify that the app works properly, you can use Playwright in your local dev experience (see Codespaces below), but crucially it's also leveraged as part of the publish process. If the tests don't pass, then the app will not publish. The `fragile` keyword should be used in any tests where you're learning how they work and run. Once the test is of sufficient quality to be considered a core test, the `fragile` keyword is removed.
We're trying to ensure that PR's that contain Web UI changes have appropriate Playwright tests that use `data-testid` for navigating the dom.
### Dev Container / Codespaces
A dev container is present in the repo which makes dev and testing of the UI Helper component much easier.
#### Commands
Some helpful terminal commands for when you're getting started with DevContainer/Codespaces experience
Running the Wizard GUI app
```bash
cd helper
npm start
#Browser should automatically open. Web app runs on port 3000 on path /Aks-Construction
```
Running the playwright tests after starting the Wizard web app
```bash
#Open a new terminal window
cd helper
npx playwright install
npx playwright install-deps chromium
npm i -D playwright-expect
npx playwright test --browser chromium .playwrighttests/ --reporter list
```
## Issues
Issues that are inactive are marked as stale and then closed pretty aggressively. We'll periodically look through the stale issues to see if any genuine issues have snuck their way through.
# vfuk-aws-workshop-1
VFUK AWS workshop
---
layout: post
title: "We bought this much meat for lunch, which counts as well-off in the mountains; see what was made for breakfast"
date: 2020-11-15T12:00:09.000Z
author: 盧保貴視覺影像
from: https://www.youtube.com/watch?v=in30a7Tc7wQ
tags: [ 盧保貴 ]
categories: [ 盧保貴 ]
---
<!--1605441609000-->
[We bought this much meat for lunch, which counts as well-off in the mountains; see what was made for breakfast](https://www.youtube.com/watch?v=in30a7Tc7wQ)
------
<div>
#盧保貴視覺影像 #珍貴攝影 #三農 Hoping these images can preserve a record for the future and bring people more meaning and reflection.
</div>
# A simple REACT UI for sending SMS
This is an example UI that sends SMS messages to a mobile phone.
- [Overview](#overview)
- [System Requirements](#system-requirements)
- [Install and Build](#install-and-build)
- [Configure](#configure)
- [Run](#run)
<!-- tocstop -->
## Overview
- A simple `SMSForm` component that communicates with a server endpoint to [send SMS messages via the REST API]().
- The messages have to be input as an array of JSON strings.
- Supports sending up to 3 SMS messages to a given phone number.
- Uses [Create React App](https://github.com/facebook/create-react-app) for creating and managing the project.
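The message rules above (a JSON array of strings, at most three entries) can be sketched as a small validation helper. The function name and error messages below are assumptions for illustration, not code from this project:

```javascript
// Hypothetical helper mirroring the rules in this README: the form body
// must be a JSON array containing between 1 and 3 strings.
function parseMessages(bodyText) {
  let parsed;
  try {
    parsed = JSON.parse(bodyText);
  } catch (err) {
    throw new Error("Body must be valid JSON");
  }
  if (!Array.isArray(parsed)) {
    throw new Error("Body must be a JSON array");
  }
  if (parsed.length < 1 || parsed.length > 3) {
    throw new Error("Between 1 and 3 messages are allowed");
  }
  if (!parsed.every((msg) => typeof msg === "string")) {
    throw new Error("Every message must be a string");
  }
  return parsed;
}
```

A form component could run a check like this before posting the messages to the backend, so invalid input fails fast on the client.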
## System Requirements
- node 8.14.0
- npm
## Install and Build
```bash
git clone https://github.com/qs-wang/simplesms
cd simplesms
npm install
npm run build
```
### Install and Run the backend server
This app uses [send-sms](https://github.com/qs-wang/send-sms) as the backend API server, which must be installed and run separately.
Please follow the instructions at [send-sms](https://github.com/qs-wang/send-sms).
## Configure
- Customized environment variables should go in `.env`
- The `.env` file should be created manually
- The `.env` file contains the following key:
```
REACT_APP_TEST_PHONE=
```
## Run
Make sure the proxy definition in the package.json file matches the URL of the backend server, which should be running as described in [Install and Run the backend server](#install-and-run-the-backend-server).
Start the application in dev mode on its own with the command:
```bash
npm run dev
```
Open the app at [localhost:3000](http://localhost:3000). You can now use the form to send SMS messages to your mobile number.
Note: the messages in the body field must be in JSON array format and may contain no more than 3 messages. A sample is shown below:
```
[
"Hi",
"www.example.com",
"How are you?"
]
```
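As a quick illustration of that body-format constraint (a sketch of the check, not the app's actual validation code; the function name is ours):

```python
import json

def validate_body(body_text):
    """Parse body_text and enforce the 'JSON array of at most 3 strings' rule."""
    messages = json.loads(body_text)
    if not isinstance(messages, list):
        raise ValueError("body must be a JSON array")
    if len(messages) > 3:
        raise ValueError("no more than 3 messages are allowed")
    if not all(isinstance(m, str) for m in messages):
        raise ValueError("every message must be a string")
    return messages

print(validate_body('["Hi", "www.example.com", "How are you?"]'))
# ['Hi', 'www.example.com', 'How are you?']
```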
<!-- source: README.md (doig007/carbonintensity) -->

# Carbon Intensity API
A simple carbon intensity API that returns the current (last 5 min) carbon intensity (gCO2/kWh) of the GB electricity grid. It initially draws from the 'current' generation fuel mix published by [Elexon](https://www.elexon.co.uk) via [BMRS](https://api.bmreports.com/BMRS/FUELINSTHHCUR).
For the avoidance of doubt, this is unrelated to [National Grid's carbon intensity API](https://carbonintensity.org.uk/), which does not appear to be actively maintained (based upon activity on its GitHub). I have no insight into how that API works, except for its methodology paper (which does not describe how to reproduce the calculation) and assumptions published by others in journals.
### Example API response
```
{
"response": {
"Average Carbon Intensity (gCO2/kWh)": 223.2,
"Data Last Updated": "2021-01-31 17:15:00"
}
}
```
### Docker repository
A Docker container build is available on Docker Hub:
https://hub.docker.com/repository/docker/doig/carbonintensity/
### Environment variables
The API can be configured via environment variables, either in the Docker container or on the host system.
| Environment variable | Description |
| ------------- | ------------- |
| `carbonintensity_port` | Port on which to listen for calls to API (default: 8812) |
| `carbonintensity_serverFolder` | Server folder on which to listen for calls to API (default: \carbon) |
| `carbonintensity_elexonAPIKey` | Personal Elexon API key, available under the 'my profile' tab of an Elexon account (free registration) |
| `carbonintensity_appUseDirectAPI` | True/False dictates whether calls to the API are directly passed onto the Elexon API (True) or whether a MySQL DB server is used to cache results (False) (default: True) |
| `carbonintensity_dbServer` | MySQL DB server address (if any) |
| `carbonintensity_dbUser` | MySQL DB user name (if any) |
| `carbonintensity_dbPassword` | MySQL DB user password (if any) |
| `carbonintensity_dbSchema` | MySQL DB scheme name for caching data (if any) |
| `carbonintensity_dbTable` | MySQL DB table name for caching data (if any) |
## Development plan
- Caching of generation data calls to speed up API response
- Addition of parameters to return more information
- More accurate carbon intensity factors for each fuel type/source
- Incorporating a TensorFlow model to provide forecasts
## Example API response (showsources=True)
```
{
"response": {
"Average Carbon Intensity (gCO2/kWh)": 215.7,
"Data Last Updated": "2021-02-02 12:10:00",
"Sources": [
{
"fuelType": "CCGT",
"currentMW": 15855,
"carbonIntensity": 394
},
{
"fuelType": "OCGT",
"currentMW": 0,
"carbonIntensity": 651
},
{
"fuelType": "OIL",
"currentMW": 0,
"carbonIntensity": 935
},
{
"fuelType": "COAL",
"currentMW": 1567,
"carbonIntensity": 937
},
{
"fuelType": "NUCLEAR",
"currentMW": 5215,
"carbonIntensity": 0
},
{
"fuelType": "WIND",
"currentMW": 10820,
"carbonIntensity": 0
},
{
"fuelType": "PS",
"currentMW": 0,
"carbonIntensity": 0
},
{
"fuelType": "NPSHYD",
"currentMW": 233,
"carbonIntensity": 0
},
{
"fuelType": "OTHER",
"currentMW": 148,
"carbonIntensity": 300
},
{
"fuelType": "INTFR",
"currentMW": 2003,
"carbonIntensity": 48
},
{
"fuelType": "INTIRL",
"currentMW": 251,
"carbonIntensity": 426
},
{
"fuelType": "INTNED",
"currentMW": 0,
"carbonIntensity": 513
},
{
"fuelType": "INTEW",
"currentMW": 339,
"carbonIntensity": 426
},
{
"fuelType": "BIOMASS",
"currentMW": 1742,
"carbonIntensity": 120
},
{
"fuelType": "INTNEM",
"currentMW": 999,
"carbonIntensity": 132
},
{
"fuelType": "INTIFA2",
"currentMW": 0,
"carbonIntensity": 48
},
{
"fuelType": "INTNSL",
"currentMW": 0,
"carbonIntensity": 0
}
]
}
}
```
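As a sanity check on the sample above, the headline figure is consistent with a generation-weighted average of the per-source intensities (this derivation is ours; the API's exact method is not documented here):

```python
# (fuelType, currentMW, carbonIntensity) for the sources with non-zero output
# in the showsources=True sample above; zero-MW sources do not affect the result.
sources = [
    ("CCGT", 15855, 394), ("COAL", 1567, 937), ("NUCLEAR", 5215, 0),
    ("WIND", 10820, 0), ("NPSHYD", 233, 0), ("OTHER", 148, 300),
    ("INTFR", 2003, 48), ("INTIRL", 251, 426), ("INTEW", 339, 426),
    ("BIOMASS", 1742, 120), ("INTNEM", 999, 132),
]

total_mw = sum(mw for _, mw, _ in sources)          # total generation
weighted = sum(mw * ci for _, mw, ci in sources)    # MW-weighted intensity sum
average = weighted / total_mw

print(round(average, 1))  # 215.7, matching the sample's headline figure
```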
<!-- source: _posts/pytorch/pytorch_doc/1.2/intermediate/seq2seq_translation_tutorial.md (ancy397031272/ancy397031272.github.io) -->

# NLP From Scratch: Translation with a Sequence to Sequence Network and Attention
> **Author**: [Sean Robertson](https://github.com/spro)
>
> Translators: [DrDavidS](https://github.com/DrDavidS), [mengfu188](https://github.com/mengfu188)
>
> Proofreader: [DrDavidS](https://github.com/DrDavidS)

This is the third and final tutorial in the "NLP From Scratch" series, in which we write our own classes and functions to preprocess the data for our NLP modeling tasks. We hope that after completing this tutorial you will move on to the three tutorials that follow, which show how torchtext can handle much of this preprocessing for you.

In this project we will be teaching a neural network to translate from French to English.
```
[KEY: > input, = target, < output]

> il est en train de peindre un tableau .
= he is painting a picture .
< he is painting a picture .

> pourquoi ne pas essayer ce vin delicieux ?
= why not try that delicious wine ?
< why not try that delicious wine ?

> elle n est pas poete mais romanciere .
= she is not a poet but a novelist .
< she not not a poet but a novelist .

> vous etes trop maigre .
= you re too skinny .
< you re all alone .
```
… to varying degrees of success.

This is made possible by the simple but powerful idea of the [sequence to sequence network](https://arxiv.org/abs/1409.3215), in which two recurrent neural networks (an encoder and a decoder) work together to transform one sequence into another. The encoder condenses an input sequence into a single vector, and the decoder unfolds that vector into a new sequence.

To improve upon this model we'll use an [attention mechanism](https://arxiv.org/abs/1409.0473), which lets the decoder learn to focus over a specific range of the input sequence.

Recommended reading:

I assume you have at least installed PyTorch, know Python, and understand tensors:

* [https://pytorch.org/](https://pytorch.org/) → PyTorch installation instructions
* [Deep Learning with PyTorch: A 60 Minute Blitz](https://pytorch.apachecn.org/docs/1.0/deep_learning_60min_blitz.html) → Get started with PyTorch
* [Learning PyTorch with Examples](https://pytorch.apachecn.org/docs/1.0/pytorch_with_examples.html) → A broad and deep overview
* [PyTorch for Former Torch Users](https://pytorch.org/tutorials/beginner/former_torchies_tutorial.html) → If you are a former Lua Torch user

It is also useful to know about sequence to sequence networks and how they work:

* [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/abs/1406.1078)
* [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215)
* [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473)
* [A Neural Conversational Model](https://arxiv.org/abs/1506.05869)

You will also find the previous tutorials on [generating names with a character-level RNN](https://pytorch.apachecn.org/docs/1.0/char_rnn_generation_tutorial.html) and [classifying names with a character-level RNN](https://pytorch.apachecn.org/docs/1.0/char_rnn_classification_tutorial.html) helpful, as those concepts are very similar to the encoder and decoder models used here.

**Requirements**:
```py
from __future__ import unicode_literals, print_function, division
from io import open
import unicodedata
import string
import re
import random
import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
## Loading data files

The data for this project is a set of many thousands of English-to-French translation pairs.

[This question on Open Data Stack Exchange](https://opendata.stackexchange.com/questions/3888/dataset-of-sentences-translated-into-many-languages) pointed me to the open translation site [https://tatoeba.org/](https://tatoeba.org/), which has downloads available at [https://tatoeba.org/eng/downloads](https://tatoeba.org/eng/downloads) - and better yet, someone did the extra work of splitting language pairs into individual files: [https://www.manythings.org/anki/](https://www.manythings.org/anki/)

Since the translation file is too big to include in the repo, download the data to `data/eng-fra.txt` before continuing. The file is a tab-separated list of translation pairs:
```py
I am cold. J'ai froid.
```
> Note
>
> Download the data from [here](https://download.pytorch.org/tutorial/data.zip) and extract it to the appropriate path.
Similar to the character encoding used in the character-level RNN tutorials, we will represent each word in a language as a one-hot vector, i.e. a giant vector of zeros except for a single one (at the index of the word). Compared to the few dozen characters that might exist in a language, there are many, many more words, so the encoding vector is much larger. We will, however, cheat a bit and trim the data to use only a few thousand words per language.

We'll need a unique index per word to use as the inputs and targets of the networks later. To keep track of all this we use a helper class called `Lang`, which has word → index (`word2index`) and index → word (`index2word`) dictionaries, as well as a per-word count (`word2count`) used to later replace rare words.
```python
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS"}
self.n_words = 2 # Count SOS and EOS
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
```
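As a quick aside, the one-hot encoding described above can be illustrated without any of the tutorial's machinery (plain Python for brevity; the tiny vocabulary here is a made-up example):

```python
# A hypothetical 5-word vocabulary: word -> index, as Lang.word2index would hold.
word2index = {"SOS": 0, "EOS": 1, "je": 2, "suis": 3, "froid": 4}

def one_hot(word, word2index):
    """Return a list of zeros with a single 1.0 at the word's index."""
    vec = [0.0] * len(word2index)
    vec[word2index[word]] = 1.0
    return vec

print(one_hot("suis", word2index))  # [0.0, 0.0, 0.0, 1.0, 0.0]
```

In practice the network never materializes these vectors; `nn.Embedding` looks up a dense vector directly from the integer index.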
The files are all in Unicode. To simplify, we turn Unicode characters into ASCII, make everything lowercase, and trim most punctuation.
```python
# Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
```
To read the data file, we split the file into lines, and then split the lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English, the `reverse` flag is added to reverse the pairs.
```python
def readLangs(lang1, lang2, reverse=False):
print("Reading lines...")
# Read the file and split into lines
lines = open('data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\
read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
# Reverse pairs, make Lang instances
if reverse:
pairs = [list(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
```
Since there are a lot of example sentences and we want to train something quickly, we trim the data set to only relatively short and simple sentences. Here the maximum length is ten words (including ending punctuation), and we filter to sentences that translate to the form "I am" or "He is" etc. (accounting for the apostrophes we cleaned earlier → `'`).
```py
MAX_LENGTH = 10
eng_prefixes = (
"i am ", "i m ",
"he is", "he s ",
"she is", "she s ",
"you are", "you re ",
"we are", "we re ",
"they are", "they re "
)
def filterPair(p):
return len(p[0].split(' ')) < MAX_LENGTH and \
len(p[1].split(' ')) < MAX_LENGTH and \
p[1].startswith(eng_prefixes)
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
```
The full process for preparing the data is:

* Read the text file, split it into lines, and split lines into pairs
* Normalize the text, filter by length and content
* Make word lists from the sentences in the pairs
```python
def prepareData(lang1, lang2, reverse=False):
input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filterPairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Counting words...")
for pair in pairs:
input_lang.addSentence(pair[0])
output_lang.addSentence(pair[1])
print("Counted words:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)
return input_lang, output_lang, pairs
input_lang, output_lang, pairs = prepareData('eng', 'fra', True)
print(random.choice(pairs))
```
Out:
```python
Reading lines...
Read 135842 sentence pairs
Trimmed to 10599 sentence pairs
Counting words...
Counted words:
fra 4345
eng 2803
['ils ne sont pas encore chez eux .', 'they re not home yet .']
```
## The Seq2Seq Model

A recurrent neural network (RNN) is a network that operates on a sequence and uses its own output as input for subsequent steps.

A [sequence-to-sequence network](https://arxiv.org/abs/1409.3215), or seq2seq network, also called an [encoder-decoder network](https://arxiv.org/pdf/1406.1078v3.pdf), is a model consisting of two RNNs called the encoder and the decoder. The encoder reads an input sequence and outputs a single vector, and the decoder reads that vector to produce an output sequence.

Unlike sequence prediction with a single RNN, where every input corresponds to an output, the seq2seq model frees us from sequence length and order, which makes it well suited for translation between two languages.

Consider the sentence "Je ne suis pas le chat noir" → "I am not the black cat". Most of the words in the input sentence have a fairly direct translation in the output sentence, but they appear in slightly different orders, e.g. "chat noir" and "black cat". Because of the "ne/pas" construction there is also one more word in the input sentence, so producing a correct translation directly from the sequence of input words would be difficult.

With a seq2seq model, the encoder creates a single vector which, in the ideal case, encodes the "meaning" of the input sequence: a single point in some N-dimensional space of sequences.

### The Encoder

The encoder of a seq2seq network is an RNN that outputs some value for every word in the input sentence. For every input word the encoder outputs a vector and a hidden state, and uses that hidden state for the next input word.

```python
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
def forward(self, input, hidden):
embedded = self.embedding(input).view(1, 1, -1)
output = embedded
output, hidden = self.gru(output, hidden)
return output, hidden
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
```
### The Decoder

The decoder is an RNN that takes the encoder output vector(s) and outputs a sequence of words to create the translation.

#### Simple Decoder

In the simplest seq2seq decoder we use only the last output of the encoder. This last output is sometimes called the context vector, since it encodes context from the entire sequence. The context vector is used as the initial hidden state of the decoder.

At every step of decoding, the decoder is given an input token and a hidden state. The initial input token is the start-of-string `<SOS>` token, and the first hidden state is the context vector (the encoder's last hidden state).

```py
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size):
super(DecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(output_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
self.out = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
output = self.embedding(input).view(1, 1, -1)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = self.softmax(self.out(output[0]))
return output, hidden
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
```
We encourage you to train this model and observe its results, but to save space we will go straight to the main topic and introduce the attention mechanism.

#### Attention Decoder

If only the context vector is passed between the encoder and decoder, that single vector carries the burden of encoding the entire sentence.

Attention allows the decoder network to "focus" on a different part of the encoder's outputs for every step of the decoder's own outputs. First we calculate a set of attention weights. These are multiplied by the encoder output vectors to create a weighted combination. The result (called `attn_applied` in the code) should contain information about that specific part of the input sequence, and thus help the decoder choose the right output words.

Calculating the attention weights is done with another feed-forward layer, `attn`, which takes the decoder's input and hidden state as inputs. Because the input sequences (sentences) in the training data come in all lengths, to actually create and train this layer we have to pick a maximum sentence length (input length, for encoder outputs) that it can apply to. Sentences of the maximum length will use all of the attention weights, while shorter sentences will only use the first few.

```python
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.dropout_p = dropout_p
self.max_length = max_length
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hidden, encoder_outputs):
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
attn_weights = F.softmax(
self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
attn_applied = torch.bmm(attn_weights.unsqueeze(0),
encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
output = self.attn_combine(output).unsqueeze(0)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = F.log_softmax(self.out(output[0]), dim=1)
return output, hidden, attn_weights
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
```
> Note
>
> There are other forms of attention that work around the length limitation by using a relative-position approach. Read about "local attention" in [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025).

## Training

### Preparing Training Data

To train, for each pair we need an input tensor (indexes of the words in the input sentence) and a target tensor (indexes of the words in the target sentence). While creating these vectors we append the EOS token to both sequences.
```python
def indexesFromSentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')]
def tensorFromSentence(lang, sentence):
indexes = indexesFromSentence(lang, sentence)
indexes.append(EOS_token)
return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)
def tensorsFromPair(pair):
input_tensor = tensorFromSentence(input_lang, pair[0])
target_tensor = tensorFromSentence(output_lang, pair[1])
return (input_tensor, target_tensor)
```
### Training the Model

To train, we run the input sentence through the encoder and keep track of every output and the latest hidden state. The decoder is then given the `<SOS>` token as its first input, and the last hidden state of the encoder as its first hidden state.

"Teacher forcing" is the concept of using the real target outputs as each next input, instead of using the decoder's own guess as the next input. Using teacher forcing makes the model converge faster, but [when the trained network is exploited, it may exhibit instability](http://minds.jacobs-university.de/sites/default/files/uploads/papers/ESNTutorialRev.pdf).

You can observe the outputs of teacher-forced networks, which read with coherent grammar but wander far from the correct translation. Intuitively, the network has learned to represent the output grammar and can "pick up" the meaning once the teacher tells it the first few words, but it has not properly learned how to create the sentence from the translation in the first place.

Because of the freedom PyTorch's autograd gives us, we can randomly choose whether or not to use teacher forcing with a simple if statement. Turn `teacher_forcing_ratio` up to use more of it.
```python
teacher_forcing_ratio = 0.5
def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
encoder_hidden = encoder.initHidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_tensor.size(0)
target_length = target_tensor.size(0)
encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device)
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(
input_tensor[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0, 0]
decoder_input = torch.tensor([[SOS_token]], device=device)
decoder_hidden = encoder_hidden
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
if use_teacher_forcing:
        # Teacher forcing: feed the target as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_outputs)
loss += criterion(decoder_output, target_tensor[di])
decoder_input = target_tensor[di] # Teacher forcing
else:
        # Without teacher forcing: use its own predictions as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_outputs)
topv, topi = decoder_output.topk(1)
decoder_input = topi.squeeze().detach() # detach from history as input
loss += criterion(decoder_output, target_tensor[di])
if decoder_input.item() == EOS_token:
break
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.item() / target_length
```
This is a helper function that prints elapsed time and estimated remaining time, given the current time and progress %.
```python
import time
import math
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
```
The whole training process looks like this:

* Start a timer
* Initialize the optimizers and criterion
* Create the set of training pairs
* Start an empty losses array for plotting

Then we call `train` many times, occasionally printing the progress (% of examples, time so far, estimated time remaining) and the average loss.
```python
def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, learning_rate=0.01):
start = time.time()
plot_losses = []
print_loss_total = 0 # Reset every print_every
plot_loss_total = 0 # Reset every plot_every
encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)
training_pairs = [tensorsFromPair(random.choice(pairs))
for i in range(n_iters)]
criterion = nn.NLLLoss()
for iter in range(1, n_iters + 1):
training_pair = training_pairs[iter - 1]
input_tensor = training_pair[0]
target_tensor = training_pair[1]
loss = train(input_tensor, target_tensor, encoder,
decoder, encoder_optimizer, decoder_optimizer, criterion)
print_loss_total += loss
plot_loss_total += loss
if iter % print_every == 0:
print_loss_avg = print_loss_total / print_every
print_loss_total = 0
print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters),
iter, iter / n_iters * 100, print_loss_avg))
if iter % plot_every == 0:
plot_loss_avg = plot_loss_total / plot_every
plot_losses.append(plot_loss_avg)
plot_loss_total = 0
showPlot(plot_losses)
```
### Plotting results

Plotting is done with matplotlib, using the array of loss values `plot_losses` saved while training.
```python
import matplotlib.pyplot as plt
plt.switch_backend('agg')
import matplotlib.ticker as ticker
import numpy as np
def showPlot(points):
plt.figure()
fig, ax = plt.subplots()
    # this locator puts ticks at regular intervals
loc = ticker.MultipleLocator(base=0.2)
ax.yaxis.set_major_locator(loc)
plt.plot(points)
```
## Evaluation

Evaluation is mostly the same as training, but there are no targets, so we simply feed the decoder's predictions back to itself at each step. Every time it predicts a word we add it to the output string, and if it predicts the EOS token we stop there. We also store the decoder's attention outputs for display later.
```python
def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):
with torch.no_grad():
input_tensor = tensorFromSentence(input_lang, sentence)
input_length = input_tensor.size()[0]
encoder_hidden = encoder.initHidden()
encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device)
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei],
encoder_hidden)
encoder_outputs[ei] += encoder_output[0, 0]
decoder_input = torch.tensor([[SOS_token]], device=device) # SOS
decoder_hidden = encoder_hidden
decoded_words = []
decoder_attentions = torch.zeros(max_length, max_length)
for di in range(max_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_outputs)
decoder_attentions[di] = decoder_attention.data
topv, topi = decoder_output.data.topk(1)
if topi.item() == EOS_token:
decoded_words.append('<EOS>')
break
else:
decoded_words.append(output_lang.index2word[topi.item()])
decoder_input = topi.squeeze().detach()
return decoded_words, decoder_attentions[:di + 1]
```
We can evaluate random sentences from the training set and print out the input, target, and output to make some subjective quality judgements:
```python
def evaluateRandomly(encoder, decoder, n=10):
for i in range(n):
pair = random.choice(pairs)
print('>', pair[0])
print('=', pair[1])
output_words, attentions = evaluate(encoder, decoder, pair[0])
output_sentence = ' '.join(output_words)
print('<', output_sentence)
print('')
```
## Training and Evaluating

With all these helper functions in place (it looks like extra work, but it makes it easier to run multiple experiments), we can actually initialize a network and start training.

Remember that the input sentences were heavily filtered. For this small dataset we can use a relatively small network of 256 hidden nodes and a single GRU layer. After about 40 minutes of training on a MacBook CPU we get some reasonable results.

> Note
>
> If you run this notebook you can train, interrupt the kernel, evaluate, and continue training later. Comment out the lines where the encoder and decoder are initialized, then run `trainIters` again.
```python
hidden_size = 256
encoder1 = EncoderRNN(input_lang.n_words, hidden_size).to(device)
attn_decoder1 = AttnDecoderRNN(hidden_size, output_lang.n_words, dropout_p=0.1).to(device)
trainIters(encoder1, attn_decoder1, 75000, print_every=5000)
```

Out:
```
1m 47s (- 25m 8s) (5000 6%) 2.8641
3m 30s (- 22m 45s) (10000 13%) 2.2666
5m 15s (- 21m 1s) (15000 20%) 1.9537
7m 0s (- 19m 17s) (20000 26%) 1.7170
8m 46s (- 17m 32s) (25000 33%) 1.5182
10m 31s (- 15m 46s) (30000 40%) 1.3280
12m 15s (- 14m 0s) (35000 46%) 1.2137
14m 1s (- 12m 16s) (40000 53%) 1.0843
15m 48s (- 10m 32s) (45000 60%) 0.9847
17m 34s (- 8m 47s) (50000 66%) 0.8515
19m 20s (- 7m 2s) (55000 73%) 0.7940
21m 6s (- 5m 16s) (60000 80%) 0.7189
22m 53s (- 3m 31s) (65000 86%) 0.6490
24m 41s (- 1m 45s) (70000 93%) 0.5954
26m 26s (- 0m 0s) (75000 100%) 0.5257
```
```python
evaluateRandomly(encoder1, attn_decoder1)
```
Out:
```
> nous sommes contents que tu sois la .
= we re glad you re here .
< we re glad you re here . <EOS>
> il est dependant a l heroine .
= he is a heroin addict .
< he is in heroin heroin . <EOS>
> nous sommes les meilleurs .
= we are the best .
< we are the best . <EOS>
> tu es puissant .
= you re powerful .
< you re powerful . <EOS>
> j ai peur des chauves souris .
= i m afraid of bats .
< i m afraid of bats . <EOS>
> tu es enseignant n est ce pas ?
= you re a teacher right ?
< you re a teacher aren t you ? <EOS>
> je suis pret a tout faire pour toi .
= i am ready to do anything for you .
< i am ready to do anything for you . <EOS>
> c est desormais un homme .
= he s a man now .
< he is in an man . <EOS>
> elle est une mere tres avisee .
= she s a very wise mother .
< she s a very wise mother . <EOS>
> je suis completement vanne .
= i m completely exhausted .
< i m completely exhausted . <EOS>
```
### Visualizing Attention

A useful property of the attention mechanism is its highly interpretable output. Because it is used to weight specific encoder outputs of the input sequence, we can imagine looking at where the network is focused most at each time step.

You can simply run `plt.matshow(attentions)` to see the attention output displayed as a matrix, with the columns being input steps and the rows being output steps:
```python
output_words, attentions = evaluate(
encoder1, attn_decoder1, "je suis trop froid .")
plt.matshow(attentions.numpy())
```

For a better viewing experience we will do the extra work of adding axes and labels:
```python
def showAttention(input_sentence, output_words, attentions):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + input_sentence.split(' ') +
['<EOS>'], rotation=90)
ax.set_yticklabels([''] + output_words)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def evaluateAndShowAttention(input_sentence):
output_words, attentions = evaluate(
encoder1, attn_decoder1, input_sentence)
print('input =', input_sentence)
print('output =', ' '.join(output_words))
showAttention(input_sentence, output_words, attentions)
evaluateAndShowAttention("elle a cinq ans de moins que moi .")
evaluateAndShowAttention("elle est trop petit .")
evaluateAndShowAttention("je ne crains pas de mourir .")
evaluateAndShowAttention("c est un jeune directeur plein de talent .")
```
* 
* 
* 
* 
Out:
```py
input = elle a cinq ans de moins que moi .
output = she s five years younger than me . <EOS>
input = elle est trop petit .
output = she s too slow . <EOS>
input = je ne crains pas de mourir .
output = i m not scared to die . <EOS>
input = c est un jeune directeur plein de talent .
output = he s a talented young player . <EOS>
```
## Exercises

* Try with a different dataset
    * Another language pair
    * Human → Machine (e.g. IoT commands)
    * Chat → Response
    * Question → Answer
* Replace the embeddings with pre-trained word embeddings such as word2vec or GloVe
* Try with more layers, more hidden units, and more sentences; compare the training time and results
* If you use a translation file where pairs have two of the same phrase (`I am test \t I am test`), you can use this as an autoencoder. Try this:
    * Train as an autoencoder
    * Save only the encoder network
    * Train a new decoder for translation from there

**Total running time of the script:** (27 minutes 13.758 seconds)
<!-- source: build/content/people/f/francois-rousselot.md (briemadu/semdial-proceedings) -->
---
lastname: Rousselot
name: francois-rousselot
title: "Fran\xE7ois Rousselot"
---
<!-- source: articles/virtual-network/powershell-samples.md (decarli/azure-docs.pt-br) -->
---
title: Azure PowerShell samples for virtual network
description: Azure PowerShell samples for virtual network.
services: virtual-network
documentationcenter: virtual-network
author: KumudD
manager: twooley
editor: ''
tags: ''
ms.assetid: ''
ms.service: virtual-network
ms.devlang: na
ms.topic: sample
ms.tgt_pltfrm: ''
ms.workload: infrastructure
ms.date: 07/15/2019
ms.author: kumud
ms.openlocfilehash: de752cdacf17193d5be95b2b9f887938ace2d50f
ms.sourcegitcommit: a170b69b592e6e7e5cc816dabc0246f97897cb0c
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/14/2019
ms.locfileid: "74091883"
---
# <a name="azure-powershell-samples-for-virtual-network"></a>Azure PowerShell samples for virtual network

The following table includes links to Azure PowerShell scripts:

| | |
|----|----|
| [Create a virtual network for multi-tier applications](./scripts/virtual-network-powershell-sample-multi-tier-application.md) | Creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP, while traffic to the back-end subnet is limited to SQL, port 1433. |
| [Peer two virtual networks](./scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md) | Creates and connects two virtual networks in the same region. |
| [Route traffic through a network virtual appliance](./scripts/virtual-network-powershell-sample-route-traffic-through-nva.md) | Creates a virtual network with front-end and back-end subnets and a VM that can route traffic between the two subnets. |
| [Filter inbound and outbound VM network traffic](./scripts/virtual-network-powershell-sample-filter-network-traffic.md) | Creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP and HTTPS. Outbound traffic to the internet from the back-end subnet is not permitted. |
|[Configure an IPv4 + IPv6 dual-stack virtual network with Basic Load Balancer](./scripts/virtual-network-powershell-sample-ipv6-dual-stack.md)|Deploys a dual-stack (IPv4 + IPv6) virtual network with two VMs and an Azure Basic Load Balancer with IPv4 and IPv6 public IP addresses. |
|[Configure an IPv4 + IPv6 dual-stack virtual network with Standard Load Balancer](./scripts/virtual-network-powershell-sample-ipv6-dual-stack-standard-load-balancer.md)|Deploys a dual-stack (IPv4 + IPv6) virtual network with two VMs and an Azure Standard Load Balancer with IPv4 and IPv6 public IP addresses. |
<!-- source: modules/system/assets/ui/docs/callout.md (HybridLab-Projects/TeachPlusPlus) -->

# Callout
### Callout
Displays a detailed message to the user, also allowing it to be dismissed.
<div class="callout fade in callout-warning">
<button
type="button"
class="close"
data-dismiss="callout"
aria-hidden="true">×</button>
<div class="header">
<i class="icon-warning"></i>
<h3>Warning warning</h3>
<p>My arms are flailing wildly</p>
</div>
<div class="content">
<p>Insert coin(s) to begin play</p>
</div>
</div>
### No sub-header
Include the `no-subheader` class to omit the sub heading.
<div class="callout fade in callout-info no-subheader">
<div class="header">
<i class="icon-info"></i>
<h3>Incoming unicorn</h3>
</div>
</div>
### No icon
Include the `no-icon` class to omit the icon.
<div class="callout fade in callout-danger no-icon">
<div class="header">
<h3>There was a hull breach</h3>
<ul>
<li>Get to the chopper</li>
</ul>
</div>
</div>
### No header
<div class="callout fade in callout-success">
<div class="content">
<p>Something good happened</p>
<ul>
<li>You found a pony</li>
</ul>
</div>
</div>
### Data attributes:
- `data-dismiss="callout"` - when assigned to an element, the callout hides on click
## JavaScript API
### Events
- close.oc.callout - triggered when the callout is closed
| 23.626866 | 84 | 0.536323 | eng_Latn | 0.923682 |
c18f97fa9e40b35c0520084bc4c0ed2c2e06f687 | 1,107 | md | Markdown | cloudshell.md | memes/home | 68c1a1e7e4acd9f4a43371dfc7eab4b3d3d6013e | [
"MIT"
] | null | null | null | cloudshell.md | memes/home | 68c1a1e7e4acd9f4a43371dfc7eab4b3d3d6013e | [
"MIT"
] | null | null | null | cloudshell.md | memes/home | 68c1a1e7e4acd9f4a43371dfc7eab4b3d3d6013e | [
"MIT"
] | null | null | null | # Installing to Google Cloud Shell
## Pull from GitHub
```shell
git init .
git remote add origin https://github.com/memes/home.git
git fetch --all
git checkout main
```
## Update `.profile` and `.bashrc`
```shell
echo '[ -n "${BASH_VERSION}" ] && [ -f "${HOME}/.bashrc" ] && . "${HOME}/.bashrc' >> ~/.profile
echo '[ -f "${HOME}/.bashrc_memes" ] && . "${HOME}/.bashrc_memes"' >> ~/.bashrc
```
Make sure my `.profile_memes` is sourced before `.bashrc`; add this line to `.profile`
```shell
[ -f "${HOME}/.profile_memes" ] && . "${HOME}/.profile_memes"
```
## Install binaries to `~/bin`
```shell
sh -c "$(curl -fsSL https://starship.rs/install.sh)" -- --yes --bin-dir ${HOME}/bin
curl -sL --output - https://github.com/junegunn/fzf/releases/download/0.27.3/fzf-0.27.3-linux_amd64.tar.gz | tar xzf - -C ~/bin
curl -sfL https://direnv.net/install.sh | bash
curl -sL --output - https://github.com/terraform-docs/terraform-docs/releases/download/v0.16.0/terraform-docs-v0.16.0-linux-amd64.tar.gz | tar xzf - -C ~/bin
git clone https://github.com/tfutils/tfenv.git ~/.tfenv
ln -s ~/.tfenv/bin/* ~/bin
```
| 31.628571 | 157 | 0.64589 | yue_Hant | 0.600134 |
c18fd456bf5ce296081063ca3d31f57efe5d1294 | 2,793 | md | Markdown | readme.md | psarras/speckle-server | 2f3ba1be220861579d4e8c396edd53c0fd451d4e | [
"Apache-2.0"
] | null | null | null | readme.md | psarras/speckle-server | 2f3ba1be220861579d4e8c396edd53c0fd451d4e | [
"Apache-2.0"
] | 1 | 2021-03-26T16:52:03.000Z | 2021-03-26T16:52:03.000Z | readme.md | psarras/speckle-server | 2f3ba1be220861579d4e8c396edd53c0fd451d4e | [
"Apache-2.0"
] | null | null | null | # Speckle Web
[](https://twitter.com/SpeckleSystems) [](https://discourse.speckle.works) [](https://speckle.systems)
#### Status
[](https://github.com/Speckle-Next/SpeckleServer/) [](https://codecov.io/gh/specklesystems/speckle-server)
## Disclaimer
We're working to stabilize the 2.0 API, and until then there will be breaking changes.
## Introduction
This monorepo is the home of the Speckle 2.0 web packages. If you're looking for the desktop connectors, you'll find them [here](https://github.com/specklesystems/speckle-sharp).
Specifically, this monorepo contains:
### ➡️ [Server](packages/server), the Speckle Server.
The server is a nodejs app. Core external dependencies are a Redis and Postgresql db.
### ➡️ [Frontend](packages/frontend), the Speckle Frontend.
The frontend is a static Vue app.
## Developing and Debugging
To get started, first clone this repo & run `npm install`. Next, you'll need to run `lerna bootstrap` to initialize the dependencies of all packages (server & frontend).
After these steps are complete, run `lerna run dev --stream`. Alternatively, you can `npm run dev` independently in each separate package (this will make for less spammy output).
## Contributing
Please make sure you read the [contribution guidelines](CONTRIBUTING.md) for an overview of the best practices we try to follow.
When pushing commits to this repo, please follow the following guidelines:
- Install [commitizen](https://www.npmjs.com/package/commitizen#commitizen-for-contributors) globally (`npm i -g commitizen`).
- When ready to commit, `git cz` & follow the prompts.
- Please use either `server` or `frontend` as the scope of your commit.
## Community
The Speckle Community hangs out on [the forum](https://discourse.speckle.works), do join and introduce yourself & feel free to ask us questions!
## License
Unless otherwise described, the code in this repository is licensed under the Apache-2.0 License. Please note that some modules, extensions or code herein might be otherwise licensed. This is indicated either in the root of the containing folder under a different license file, or in the respective file's header. If you have any questions, don't hesitate to get in touch with us via [email](mailto:hello@speckle.systems).
| 55.86 | 422 | 0.773362 | eng_Latn | 0.931497 |
c1901f5b58f00e339a2a073322e609d58bc674cd | 1,261 | md | Markdown | README.md | FrancisBehnen/hydro-project-common | 9194c7ed28c04507f412c690f751f6a26c425268 | [
"Apache-2.0"
] | 10 | 2020-08-25T23:37:31.000Z | 2020-11-12T02:36:22.000Z | README.md | FrancisBehnen/hydro-project-common | 9194c7ed28c04507f412c690f751f6a26c425268 | [
"Apache-2.0"
] | 29 | 2021-03-11T14:03:20.000Z | 2021-03-21T20:37:54.000Z | README.md | FrancisBehnen/hydro-project-common | 9194c7ed28c04507f412c690f751f6a26c425268 | [
"Apache-2.0"
] | 25 | 2019-08-21T04:07:51.000Z | 2021-11-13T01:58:40.000Z | # Hydro Common
This repository is a shared repository for header files, [protobuf definitions](https://developers.google.com/protocol-buffers/), and scripts. It is linked into other repositories in the Hydro project using [git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules). This README provides a brief overview of the contents of this repository. This repository will not change frequently and should only contain code that is used across multiple Hydro subprojects.
* `cmake`: This directory has three helpers that are useful for any CMake-based project: `CodeCoverage.cmake` uses `lcov` and `gcov` to automatically generate coverage information; `DownloadProject.cmake` automatically downloads and configures external C++ dependencies; and `clang-format.cmake` automatically runs the `clang-format` tool on all C++ files in a project.
* `include`: A variety of Hydro C++ header files, including shared lattice definitions, an Anna KVS client, shared `typedef`s and other utilities.
* `proto`: Project API-level protobuf definitions.
* `scripts`: Various helper scripts that install dependencies and simplify creating Travis build processes.
* `vendor`: CMake configuration for Hydro vendor dependencies (ZeroMQ, SPDLog, and Yaml-CPP).
| 114.636364 | 471 | 0.793021 | eng_Latn | 0.992529 |
c191590680c4f56ce6260979277251d9679d635e | 2,072 | markdown | Markdown | content/blog/2020-01-10-instant-pot-beef-curry.markdown | coldclimate/omnomfrickinnom | 31251518f991ab298de7b8cb915e95a3f9389b54 | [
"MIT"
] | 3 | 2016-05-30T08:55:13.000Z | 2017-12-29T18:59:04.000Z | content/blog/2020-01-10-instant-pot-beef-curry.markdown | coldclimate/omnomfrickinnom | 31251518f991ab298de7b8cb915e95a3f9389b54 | [
"MIT"
] | 4 | 2015-07-02T10:38:17.000Z | 2020-01-01T21:00:27.000Z | content/blog/2020-01-10-instant-pot-beef-curry.markdown | coldclimate/omnomfrickinnom | 31251518f991ab298de7b8cb915e95a3f9389b54 | [
"MIT"
] | 5 | 2015-07-02T09:05:08.000Z | 2020-01-01T20:37:01.000Z | ---
layout: post
title: "Instant Pot beef Curry"
date: 2020-01-10 19:30:00
publishdate: 2020-01-10 19:30:00
author: oli
image: "/images/blog/instant-pot-beef-curry-2.jpg"
tags: ["instantpot", "curry", "spicy", "beef", "2020"]
---
The more I use the Instant Pot, the more I appreciate being able to easily use different cooking techniques without doing more washing up. Recently I've used it a few times to make curries and uncovered the left-right-left of Instant Pot cookery: saute - pressure cook - saute.
The first saute browns the onions and meat, making sure you don't end up with an anemic dish. The pressure cooking tenderises the meat and produces a great sauce texture from the onions. A final saute then reduces the liquid content to get the right final consistency.
## You will need
### For the beef curry
* A big handful of beef stewing steak cut into 1cm chunks
* An onion, roughly chopped
* A tablespoon of garlic (I used a frozen block)
* A tablespoon of ginger (and another block)
* 4 blocks of frozen spinach
* A big handful of baby tomatoes
* A small spoonful of chilli pickle (I used [Mr Vikki's King Naga ](https://www.amazon.co.uk/Mr-Vikkis-King-Naga/dp/B005MWW3K6/ref=as_li_ss_tl?crid=UWKNG0NMHUDA&keywords=king+naga&qid=1578254830&sprefix=king+naga,aps,172&sr=8-2&linkCode=ll1&tag=wwwcoldclimat-21&linkId=e128dddcf8b68686c7d230cb4073f80a&language=en_GB))
### To serve
* A couple of naan breads
* A couple of big spoonfuls of plain yogurt
* A couple of big spoonfuls of mango pickle
* A load of snipped coriander
## Do
* Saute the beef, onion, garlic and ginger for about ten minutes
* Stir in the pickle, tomatoes and spinach
* Pressure cook for ten minutes
* Let the pressure out
* Saute for ten minutes
* Grill the naans, smear with mango pickle, top with curry and then more yogurt
## Result
The onions have broken down to a velvety finish, the beef is tender, the spices come though without screamingly hot.


| 39.846154 | 318 | 0.757722 | eng_Latn | 0.975992 |
c191b745db73e90e1510b65a464e2166ab586932 | 235 | md | Markdown | hardware/readme.md | nathantsoi/astrobee | bb0fc3e4110a14929bd4cf35c12b3c169bc6c756 | [
"Apache-2.0"
] | 1 | 2022-01-17T22:22:29.000Z | 2022-01-17T22:22:29.000Z | hardware/readme.md | nathantsoi/astrobee | bb0fc3e4110a14929bd4cf35c12b3c169bc6c756 | [
"Apache-2.0"
] | null | null | null | hardware/readme.md | nathantsoi/astrobee | bb0fc3e4110a14929bd4cf35c12b3c169bc6c756 | [
"Apache-2.0"
] | null | null | null | \page hw Hardware
\subpage eps_driver
\subpage flashlight
\subpage laser
\subpage perching_arm
\subpage picoflexx
\subpage pmc_actuator
\subpage signal_lights
\subpage smart_dock
\subpage speed_cam
\subpage temp_monitor
\subpage vive
| 16.785714 | 22 | 0.838298 | kor_Hang | 0.347035 |
c192080c9bd803c468b6bcbe357613f98d5f7410 | 56 | md | Markdown | package/example/README.md | andyduke/easy_layout | d3a7e26044736c926aaae163d6d888a555152f81 | [
"BSD-3-Clause"
] | 2 | 2021-06-10T06:41:09.000Z | 2021-12-11T13:54:07.000Z | package/example/README.md | andyduke/easy_layout | d3a7e26044736c926aaae163d6d888a555152f81 | [
"BSD-3-Clause"
] | 2 | 2021-03-05T18:16:45.000Z | 2021-03-07T16:14:16.000Z | package/example/README.md | andyduke/easy_layout | d3a7e26044736c926aaae163d6d888a555152f81 | [
"BSD-3-Clause"
] | null | null | null | # EasyLayout Examples
Examples of using EasyLayout.
| 14 | 30 | 0.767857 | eng_Latn | 0.997479 |
c19263dea188a64abf00ffeddfc62f1e30216e05 | 131 | md | Markdown | .changeset/plenty-llamas-grin.md | foora/pnpm | dac5390ac03a7f588017f5e9bcb30e677df24c8e | [
"MIT"
] | null | null | null | .changeset/plenty-llamas-grin.md | foora/pnpm | dac5390ac03a7f588017f5e9bcb30e677df24c8e | [
"MIT"
] | null | null | null | .changeset/plenty-llamas-grin.md | foora/pnpm | dac5390ac03a7f588017f5e9bcb30e677df24c8e | [
"MIT"
] | null | null | null | ---
"pnpm": patch
---
The CLI should not exit before all the output is printed [#3526](https://github.com/pnpm/pnpm/issues/3526).
| 21.833333 | 107 | 0.694656 | eng_Latn | 0.971209 |
c192658fe736d99ade3641681e6ee3a8bbc3b9a7 | 12,906 | md | Markdown | docs/zh/UserGuide/Server/Config Manual.md | LittleHealth/iotdb | 7359c53f2e539ea8f3d2be08142f98729dc4b8d9 | [
"Apache-2.0"
] | null | null | null | docs/zh/UserGuide/Server/Config Manual.md | LittleHealth/iotdb | 7359c53f2e539ea8f3d2be08142f98729dc4b8d9 | [
"Apache-2.0"
] | null | null | null | docs/zh/UserGuide/Server/Config Manual.md | LittleHealth/iotdb | 7359c53f2e539ea8f3d2be08142f98729dc4b8d9 | [
"Apache-2.0"
] | null | null | null | <!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# Configuration Manual
To make IoTDB Server easy to configure and manage, IoTDB Server provides three kinds of configuration items, so that users can configure the server at startup or while it is running.
The configuration files for all three kinds of items are located in the IoTDB installation directory, under the `$IOTDB_HOME/conf` folder. Two of these files concern server configuration: `iotdb-env.sh` and `iotdb-engine.properties`. Users can configure the system's runtime behavior by changing the configuration items in them.
The configuration files are described as follows:
* `iotdb-env.sh`: the default configuration file for environment configuration items. Users can configure JVM-related system settings in this file.
* `iotdb-engine.properties`: the default configuration file for IoTDB engine-level system configuration items. Users can configure runtime parameters of the IoTDB engine, such as the JDBC service listening port (`rpc_port`) and the overflow data directory (`overflow_data_dir`). In addition, users can configure TsFile storage settings, such as the amount of data written from memory to disk in one pass (`group_size_in_byte`) and the maximum size at which each in-memory column is packed into a page (`page_size_in_byte`).
## Hot-Modifiable Configuration Items
For ease of use, IoTDB Server provides hot modification: some configuration parameters in `iotdb-engine.properties` can be changed while the system is running and take effect immediately. Among the parameters described below, all items whose effective mode is `触发生效` (takes effect on trigger) support hot modification.
Trigger method: the client sends the ```load configuration``` command to the IoTDB Server; see Chapter 4 for details on using the client.
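As a rough illustration of what hot-loading amounts to, the sketch below re-parses a `key=value` properties document (the syntax used by `iotdb-engine.properties`) and computes which entries changed. This is a toy Python model, not IoTDB's actual implementation.

```python
def parse_properties(text):
    """Parse a Java-style .properties document into a dict.

    Lines starting with '#' are comments; everything before the first
    '=' is the key, the rest is the value.
    """
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            props[key.strip()] = value.strip()
    return props


def diff_properties(old, new):
    """Return the entries whose values changed between two snapshots."""
    return {k: v for k, v in new.items() if old.get(k) != v}


old = parse_properties("rpc_port=6667\nfetch_size=10000")
new = parse_properties("# edited by admin\nrpc_port=6667\nfetch_size=20000")
print(diff_properties(old, new))  # {'fetch_size': '20000'}
```

A real hot-reload would then apply only the changed, reloadable entries and reject the rest.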
## Environment Configuration Items
Environment configuration items are mainly used to configure Java environment parameters (such as JVM settings) for running IoTDB Server. When IoTDB Server starts, this part of the configuration is passed to the JVM. Users can view the environment configuration items in the `iotdb-env.sh` (or `iotdb-env.bat`) file. The detailed items are described as follows:
* JMX\_LOCAL
|名字|JMX\_LOCAL|
|:---:|:---|
|描述|JMX监控模式,配置为yes表示仅允许本地监控,设置为no的时候表示允许远程监控|
|类型|枚举String : “yes”, “no”|
|默认值|yes|
|改后生效方式|重启服务器生效|
* JMX\_PORT
|名字|JMX\_PORT|
|:---:|:---|
|描述|JMX监听端口。请确认该端口不是系统保留端口并且未被占用。|
|类型|Short Int: [0,65535]|
|默认值|31999|
|改后生效方式|重启服务器生效|
* MAX\_HEAP\_SIZE
|名字|MAX\_HEAP\_SIZE|
|:---:|:---|
|描述|IoTDB启动时能使用的最大堆内存大小。|
|类型|String|
|默认值|取决于操作系统和机器配置。在Linux或MacOS系统下默认为机器内存的四分之一。在Windows系统下,32位系统的默认值是512M,64位系统默认值是2G。|
|改后生效方式|重启服务器生效|
* HEAP\_NEWSIZE
|名字|HEAP\_NEWSIZE|
|:---:|:---|
|描述|IoTDB启动时能使用的最小堆内存大小。|
|类型|String|
|默认值|取决于操作系统和机器配置。在Linux或MacOS系统下默认值为机器CPU核数乘以100M的值与MAX\_HEAP\_SIZE四分之一这二者的最小值。在Windows系统下,32位系统的默认值是512M,64位系统默认值是2G。。|
|改后生效方式|重启服务器生效|
## System Configuration Items
System configuration items are the core configuration of a running IoTDB Server. They are mainly used to set parameters of the IoTDB Server file layer and engine layer, so that users can adjust the server to their own needs and achieve good performance. System configuration items fall into two modules: file-layer items and engine-layer items. Users can view and modify both kinds of items in `iotdb-engine.properties`. As of version 0.7.0, string-valued configuration items are case sensitive.
### File Layer Configuration
* compressor
|名字|compressor|
|:---:|:---|
|描述|数据压缩方法|
|类型|枚举String : “UNCOMPRESSED”, “SNAPPY”|
|默认值| UNCOMPRESSED |
|改后生效方式|触发生效|
* group\_size\_in\_byte
|名字|group\_size\_in\_byte|
|:---:|:---|
|描述|每次将内存中的数据写入到磁盘时的最大写入字节数|
|类型|Int32|
|默认值| 134217728 |
|改后生效方式|触发生效|
* max\_number\_of\_points\_in\_page
|名字| max\_number\_of\_points\_in\_page |
|:---:|:---|
|描述|一个页中最多包含的数据点(时间戳-值的二元组)数量|
|类型|Int32|
|默认值| 1048576 |
|改后生效方式|触发生效|
* max\_degree\_of\_index\_node
|名字| max\_degree\_of\_index\_node |
|:---:|:---|
|描述|元数据索引树的最大度(即每个节点的最大子节点个数)|
|类型|Int32|
|默认值| 1024 |
|改后生效方式|仅允许在第一次启动服务器前修改|
* max\_string\_length
|名字| max\_string\_length |
|:---:|:---|
|描述|针对字符串类型的数据,单个字符串最大长度,单位为字符|
|类型|Int32|
|默认值| 128 |
|改后生效方式|触发生效|
* page\_size\_in\_byte
|名字| page\_size\_in\_byte |
|:---:|:---|
|描述|内存中每个列写出时,写成的单页最大的大小,单位为字节|
|类型|Int32|
|默认值| 65536 |
|改后生效方式|触发生效|
* time\_series\_data\_type
|名字| time\_series\_data\_type |
|:---:|:---|
|描述|时间戳数据类型|
|类型|枚举String: "INT32", "INT64"|
|默认值| Int64 |
|改后生效方式|触发生效|
* time\_encoder
|名字| time\_encoder |
|:---:|:---|
|描述| 时间列编码方式|
|类型|枚举String: “TS_2DIFF”,“PLAIN”,“RLE”|
|默认值| TS_2DIFF |
|改后生效方式|触发生效|
* float_precision
|名字| float_precision |
|:---:|:---|
|描述| 浮点数精度,为小数点后数字的位数 |
|类型|Int32|
|默认值| 默认为2位。注意:32位浮点数的十进制精度为7位,64位浮点数的十进制精度为15位。如果设置超过机器精度将没有实际意义。|
|改后生效方式|触发生效|
* bloomFilterErrorRate
|名字| bloomFilterErrorRate |
|:---:|:---|
|描述| bloom过滤器的误报率. 在加载元数据之前 Bloom filter 可以检查给定的时间序列是否在 TsFile 中。这可以优化加载元数据的性能,并跳过不包含指定时间序列的 TsFile。如果你想了解更多关于它的细节,你可以参考: [wiki page of bloom filter](https://en.wikipedia.org/wiki/Bloom_filter).|
|类型|浮点数, 范围为(0, 1)|
|默认值| 0.05 |
|改后生效方式|重启生效|
### Engine Layer Configuration
* back\_loop\_period\_in\_second
|名字| back\_loop\_period\_in\_second |
|:---:|:---|
|描述| 系统统计量触发统计的频率,单位为秒。|
|类型|Int32|
|默认值| 5 |
|改后生效方式|重启服务器生效|
* data\_dirs
|名字| data\_dirs |
|:---:|:---|
|描述| IoTDB数据存储路径,默认存放在和bin目录同级的data目录下。相对路径的起始目录与操作系统相关,建议使用绝对路径。|
|类型|String|
|默认值| data |
|改后生效方式|触发生效|
* enable\_wal
|名字| enable\_wal |
|:---:|:---|
|描述| 是否开启写前日志,默认值为true表示开启,配置成false表示关闭 |
|类型|Bool|
|默认值| true |
|改后生效方式|触发生效|
* tag\_attribute\_total\_size
|名字| tag\_attribute\_total\_size |
|:---:|:---|
|描述| 每个时间序列标签和属性的最大持久化字节数|
|类型| Int32 |
|默认值| 700 |
|改后生效方式|仅允许在第一次启动服务器前修改|
* enable\_partial\_insert
|名字| enable\_partial\_insert |
|:---:|:---|
|描述| 在一次insert请求中,如果部分测点写入失败,是否继续写入其他测点|
|类型| Bool |
|默认值| true |
|改后生效方式|重启服务器生效|
* mtree\_snapshot\_interval
|名字| mtree\_snapshot\_interval |
|:---:|:---|
|描述| 创建 MTree snapshot 时至少累积的 mlog 日志行数。单位为日志行数|
|类型| Int32 |
|默认值| 100000 |
|改后生效方式|重启服务器生效|
* fetch\_size
|名字| fetch\_size |
|:---:|:---|
|描述| 批量读取数据的时候,每一次读取数据的数量。单位为数据条数,即不同时间戳的个数。某次会话中,用户可以在使用时自己设定,此时仅在该次会话中生效。|
|类型|Int32|
|默认值| 10000 |
|改后生效方式|重启服务器生效|
* force\_wal\_period\_in\_ms
|名字| force\_wal\_period\_in\_ms |
|:---:|:---|
|描述| 写前日志定期刷新到磁盘的周期,单位毫秒,有可能丢失至多flush\_wal\_period\_in\_ms毫秒的操作。 |
|类型|Int32|
|默认值| 10 |
|改后生效方式|触发生效|
* flush\_wal\_threshold
|名字| flush\_wal\_threshold |
|:---:|:---|
|描述| 写前日志的条数达到该值之后,刷新到磁盘,有可能丢失至多flush\_wal\_threshold个操作 |
|类型|Int32|
|默认值| 10000 |
|改后生效方式|触发生效|
* merge\_concurrent\_threads
|名字| merge\_concurrent\_threads |
|:---:|:---|
|描述| 乱序数据进行合并的时候最多可以用来进行merge的线程数。值越大,对IO和CPU消耗越多。值越小,当乱序数据过多时,磁盘占用量越大,读取会变慢。 |
|类型|Int32|
|默认值| 0 |
|改后生效方式|重启服务器生效|
* enable\_mem\_comtrol
|Name| enable\_mem\_control |
|:---:|:---|
|Description| 开启内存控制,避免爆内存|
|Type|Bool|
|Default| true |
|Effective|重启服务器生效|
* enable\_mem\_comtrol
|Name| enable\_mem\_control |
|:---:|:---|
|Description| 开启内存控制,避免爆内存|
|Type|Bool|
|Default| true |
|Effective|重启服务器生效|
* memtable\_size\_threshold
|Name| memtable\_size\_threshold |
|:---:|:---|
|Description| 内存缓冲区 memtable 阈值|
|Type|Long|
|Default| 1073741824 |
|Effective|enable\_mem\_control为false时生效、重启服务器生效|
* avg\_series\_point\_number\_threshold
|Name| avg\_series\_point\_number\_threshold |
|:---:|:---|
|Description| 内存中平均每个时间序列点数最大值,达到触发flush|
|Type|Int32|
|Default| 10000 |
|Effective|重启服务器生效|
* tsfile\_size\_threshold
|Name| tsfile\_size\_threshold |
|:---:|:---|
|Description| 每个 tsfile 大小|
|Type|Long|
|Default| 536870912 |
|Effective| 重启服务器生效|
* enable\_partition
|Name| enable\_partition |
|:---:|:---|
|Description| 是否开启将数据按时间分区存储的功能,如果关闭,所有数据都属于分区 0|
|Type|Bool|
|Default| false |
|Effective|仅允许在第一次启动服务器前修改|
* partition\_interval
|名字| partition\_interval |
|:---:|:---|
|描述| 用于存储组分区的时间段长度,用户指定的存储组下会使用该时间段进行分区,单位:秒 |
|类型|Int64|
|默认值| 604800 |
|改后生效方式|仅允许在第一次启动服务器前修改|
* memtable\_num\_in\_each\_storage\_group
|名字| memtable\_num\_in\_each\_storage\_group|
|:---:|:---|
|描述| 每个存储组所控制的memtable的最大数量,这决定了来源于多少个不同时间分区的数据可以并发写入<br>举例来说,你的时间分区为按天分区,想要同时并发写入3天的数据,那么这个值应该被设置为6(3个给顺序写入,3个给乱序写入)|
|类型|Int32|
|默认值| 10 |
|改后生效方式|重启服务器生效|
* multi\_dir\_strategy
|名字| multi\_dir\_strategy |
|:---:|:---|
|描述| IoTDB在tsfile\_dir中为TsFile选择目录时采用的策略。可使用简单类名或类名全称。系统提供以下三种策略:<br>1. SequenceStrategy:IoTDB按顺序从tsfile\_dir中选择目录,依次遍历tsfile\_dir中的所有目录,并不断轮循;<br>2. MaxDiskUsableSpaceFirstStrategy:IoTDB优先选择tsfile\_dir中对应磁盘空余空间最大的目录;<br>3. MinFolderOccupiedSpaceFirstStrategy:IoTDB优先选择tsfile\_dir中已使用空间最小的目录;<br>4. UserDfineStrategyPackage(用户自定义策略)<br>您可以通过以下方法完成用户自定义策略:<br>1. 继承cn.edu.tsinghua.iotdb.conf.directories.strategy.DirectoryStrategy类并实现自身的Strategy方法;<br>2. 将实现的类的完整类名(包名加类名,UserDfineStrategyPackage)填写到该配置项;<br>3. 将该类jar包添加到工程中。 |
|类型|String|
|默认值| MaxDiskUsableSpaceFirstStrategy |
|改后生效方式|触发生效|
* rpc_address
|名字| rpc_address |
|:---:|:---|
|描述| |
|类型|String|
|默认值| "0.0.0.0" |
|改后生效方式|重启服务器生效|
* rpc_port
|名字| rpc_port |
|:---:|:---|
|描述|jdbc服务监听端口。请确认该端口不是系统保留端口并且未被占用。|
|类型|Short Int : [0,65535]|
|默认值| 6667 |
|改后生效方式|重启服务器生效||
* time\_zone
|名字| time_zone |
|:---:|:---|
|描述| 服务器所处的时区,默认为北京时间(东八区) |
|类型|Time Zone String|
|默认值| +08:00 |
|改后生效方式|触发生效|
* enable\_stat\_monitor
|名字| enable\_stat\_monitor |
|:---:|:---|
|描述| 选择是否启动后台统计功能|
|类型| Boolean |
|默认值| false |
|改后生效方式|重启服务器生效|
* concurrent\_flush\_thread
|名字| concurrent\_flush\_thread |
|:---:|:---|
|描述| 当IoTDB将内存中的数据写入磁盘时,最多启动多少个线程来执行该操作。如果该值小于等于0,那么采用机器所安装的CPU核的数量。默认值为0。|
|类型| Int32 |
|默认值| 0 |
|改后生效方式|重启服务器生效|
* stat\_monitor\_detect\_freq\_in\_second
|名字| stat\_monitor\_detect\_freq\_in\_second |
|:---:|:---|
|描述| 每隔一段时间(以秒为单位)检测当前记录统计量时间范围是否超过stat_monitor_retain_interval,并进行定时清理。|
|类型| Int32 |
|默认值|600 |
|改后生效方式|重启服务器生效|
* stat\_monitor\_retain\_interval\_in\_second
|名字| stat\_monitor\_retain\_interval\_in\_second |
|:---:|:---|
|描述| 系统统计信息的保留时间(以秒为单位),超过保留时间范围的统计数据将被定时清理。|
|类型| Int32 |
|默认值|600 |
|改后生效方式|重启服务器生效|
* tsfile\_storage\_fs
|名字| tsfile\_storage\_fs |
|:---:|:---|
|描述| Tsfile和相关数据文件的存储文件系统。目前支持LOCAL(本地文件系统)和HDFS两种|
|类型| String |
|默认值|LOCAL |
|改后生效方式|仅允许在第一次启动服务器前修改|
* core\_site\_path
|Name| core\_site\_path |
|:---:|:---|
|描述| 在Tsfile和相关数据文件存储到HDFS的情况下用于配置core-site.xml的绝对路径|
|类型| String |
|默认值|/etc/hadoop/conf/core-site.xml |
|改后生效方式|重启服务器生效|
* hdfs\_site\_path
|Name| hdfs\_site\_path |
|:---:|:---|
|描述| 在Tsfile和相关数据文件存储到HDFS的情况下用于配置hdfs-site.xml的绝对路径|
|类型| String |
|默认值|/etc/hadoop/conf/hdfs-site.xml |
|改后生效方式|重启服务器生效|
* hdfs\_ip
|名字| hdfs\_ip |
|:---:|:---|
|描述| 在Tsfile和相关数据文件存储到HDFS的情况下用于配置HDFS的IP。**如果配置了多于1个hdfs\_ip,则表明启用了Hadoop HA**|
|类型| String |
|默认值|localhost |
|改后生效方式|重启服务器生效|
* hdfs\_port
|名字| hdfs\_port |
|:---:|:---|
|描述| 在Tsfile和相关数据文件存储到HDFS的情况下用于配置HDFS的端口|
|类型| String |
|默认值|9000 |
|改后生效方式|重启服务器生效|
* dfs\_nameservices
|名字| hdfs\_nameservices |
|:---:|:---|
|描述| 在使用Hadoop HA的情况下用于配置HDFS的nameservices|
|类型| String |
|默认值|hdfsnamespace |
|改后生效方式|重启服务器生效|
* dfs\_ha\_namenodes
|名字| hdfs\_ha\_namenodes |
|:---:|:---|
|描述| 在使用Hadoop HA的情况下用于配置HDFS的nameservices下的namenodes|
|类型| String |
|默认值|nn1,nn2 |
|改后生效方式|重启服务器生效|
* dfs\_ha\_automatic\_failover\_enabled
|名字| dfs\_ha\_automatic\_failover\_enabled |
|:---:|:---|
|描述| 在使用Hadoop HA的情况下用于配置是否使用失败自动切换|
|类型| Boolean |
|默认值|true |
|改后生效方式|重启服务器生效|
* dfs\_client\_failover\_proxy\_provider
|名字| dfs\_client\_failover\_proxy\_provider |
|:---:|:---|
|描述| 在使用Hadoop HA且使用失败自动切换的情况下配置失败自动切换的实现方式|
|类型| String |
|默认值|org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider |
|改后生效方式|重启服务器生效|
* hdfs\_use\_kerberos
|名字| hdfs\_use\_kerberos |
|:---:|:---|
|描述| 是否使用kerberos验证访问hdfs|
|类型| String |
|默认值|false |
|改后生效方式|重启服务器生效|
* kerberos\_keytab\_file_path
|名字| kerberos\_keytab\_file_path |
|:---:|:---|
|描述| kerberos keytab file 的完整路径|
|类型| String |
|默认值|/path |
|改后生效方式|重启服务器生效|
* kerberos\_principal
|名字| kerberos\_principal |
|:---:|:---|
|描述| Kerberos 认证原则|
|类型| String |
|默认值|your principal |
|改后生效方式|重启服务器生效|
* authorizer\_provider\_class
|名字| authorizer\_provider\_class |
|:---:|:---|
|描述| 权限服务的类名|
|类型| String |
|默认值|org.apache.iotdb.db.auth.authorizer.LocalFileAuthorizer |
|改后生效方式|重启服务器生效|
|其他可选值| org.apache.iotdb.db.auth.authorizer.OpenIdAuthorizer |
* openID\_url
|名字| openID\_url |
|:---:|:---|
|描述| openID 服务器地址 (当OpenIdAuthorizer被启用时必须设定)|
|类型| String (一个http地址) |
|默认值| 无 |
|改后生效方式|重启服务器生效|
## Automatic Data Type Inference
* enable\_auto\_create\_schema
|Name| enable\_auto\_create\_schema |
|:---:|:---|
|Description| whether to create a series in the schema automatically when the series being written does not exist|
|Value| true or false |
|Default|true |
|Effective|restart the server|
* default\_storage\_group\_level
|Name| default\_storage\_group\_level |
|:---:|:---|
|Description| when written data does not exist, a series is created automatically, and a storage group also needs to be created, this decides which level of the series path is treated as the storage group. For example, if we receive a new series root.sg0.d1.s2 and level=1, then root.sg0 is treated as the storage group (because root is level 0)|
|Value| integer |
|Default|1 |
|Effective|restart the server|
* boolean\_string\_infer\_type
|Name| boolean\_string\_infer\_type |
|:---:|:---|
|Description| which data type "true" and "false" are treated as|
|Value| BOOLEAN or TEXT |
|Default|BOOLEAN |
|Effective|restart the server|
* integer\_string\_infer\_type
|Name| integer\_string\_infer\_type |
|:---:|:---|
|Description| which type integer strings are inferred as |
|Value| INT32, INT64, FLOAT, DOUBLE, TEXT |
|Default|FLOAT |
|Effective|restart the server|
* nan\_string\_infer\_type
|Name| nan\_string\_infer\_type |
|:---:|:---|
|Description| which type the string NaN is inferred as|
|Value| DOUBLE, FLOAT or TEXT |
|Default|FLOAT |
|Effective|restart the server|
* floating\_string\_infer\_type
|Name| floating\_string\_infer\_type |
|:---:|:---|
|Description| which type floating-point strings such as "6.7" are inferred as|
|Value| DOUBLE, FLOAT or TEXT |
|Default|FLOAT |
|Effective|restart the server|
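The default inference rules described above can be sketched as follows. This is a toy Python model mirroring the default settings in the tables (booleans → BOOLEAN, integer and floating strings → FLOAT, NaN → FLOAT, everything else → TEXT), not IoTDB's source code.

```python
def infer_type(value: str) -> str:
    """Infer an IoTDB data type for a string literal, following the
    default settings of the inference options described above."""
    v = value.strip()
    if v.lower() in ("true", "false"):
        return "BOOLEAN"          # boolean_string_infer_type default
    if v.lower() == "nan":
        return "FLOAT"            # nan_string_infer_type default
    try:
        int(v)
        return "FLOAT"            # integer_string_infer_type default
    except ValueError:
        pass
    try:
        float(v)
        return "FLOAT"            # floating_string_infer_type default
    except ValueError:
        return "TEXT"             # anything else is treated as text

for sample in ("true", "42", "6.7", "NaN", "hello"):
    print(sample, "->", infer_type(sample))
```

Changing the corresponding option (for example `integer_string_infer_type=INT64`) would change the branch taken for that class of strings.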
## Enabling GC Logs
GC logging is disabled by default. For performance tuning, you may need to collect GC information.
To turn on GC logging, add the "printgc" argument when starting the IoTDB Server:
```bash
nohup sbin/start-server.sh printgc >/dev/null 2>&1 &
```
or
```bash
sbin\start-server.bat printgc
```
GC logs are stored in `IOTDB_HOME/logs/gc.log`. At most 10 gc.log files are kept, each at most 10MB.
c193273abb0bf1eaa79a950ae183dd66cc1bb40b | 463 | md | Markdown | connections.md | svobtom/metacentrum-accounting | 239897d511b287baa47b14f7bf4312ba131635a8 | [
"Apache-2.0"
] | null | null | null | connections.md | svobtom/metacentrum-accounting | 239897d511b287baa47b14f7bf4312ba131635a8 | [
"Apache-2.0"
] | null | null | null | connections.md | svobtom/metacentrum-accounting | 239897d511b287baa47b14f7bf4312ba131635a8 | [
"Apache-2.0"
] | null | null | null | Database connections are configured:
* cloud - no DB used
* metaacct - reference to Tomcat resource /jdbc/acctDb (defined in $CATALINA_BASE/conf/server.xml, tomcat@segin )
* metaacct_cmd - src/main/resources/config.properties
* pbsmon - reference to Tomcat resource /jdbc/acctDb (defined in $CATALINA_BASE/conf/server.xml, makub@segin )
* perun_machines - no DB used
Further connections:
* resource statistics: segin:/var/www/metavo/resourcestats/pripojeni.php | 46.3 | 113 | 0.786177 | eng_Latn | 0.370813 |
c1935418e4f9799f437b6879ed7dcf2eb8aa398e | 2,083 | md | Markdown | README.md | jonas-koeritz/asterisk-csp | ee79b3569fa3d387d15db7d6ddeacf1a7faf7f71 | [
"MIT"
] | 3 | 2018-03-31T06:11:56.000Z | 2019-10-14T03:22:06.000Z | README.md | jonas-koeritz/asterisk-csp | ee79b3569fa3d387d15db7d6ddeacf1a7faf7f71 | [
"MIT"
] | 6 | 2016-09-07T16:16:16.000Z | 2016-09-09T14:54:43.000Z | README.md | jonas-koeritz/asterisk-csp | ee79b3569fa3d387d15db7d6ddeacf1a7faf7f71 | [
"MIT"
] | 6 | 2016-02-03T15:33:17.000Z | 2019-03-14T07:59:59.000Z | # asterisk-csp
[](https://gitter.im/jonas-koeritz/asterisk-csp?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

[](https://gratipay.com/~jonas-koeritz/)
A CSTA III XML service provider for Asterisk using the Asterisk Manager Interface
## History
Following the successful implementation of a CSTA Service Provider in Node.js for experimental use, this will be an attempt to create a more complete, more universal and more robust implementation in Java.
There are several pitfalls when working with Asterisk as a "Switching Function", since the ECMA standard is best suited to circuit-switched telephony.
This project is still in its very beginnings, but given the already working (crudely done, highly experimental, incomplete and unstable) implementation, progress should be visible and hopefully usable soon.
## Project Goals
The main goal of this project is to be able to control Asterisk devices via a CTI Application, my testing will be done mostly using Xphone UC by C4B. Feel free to leave a note as an issue if you encounter problems using other client software later in the development process.
Asterisk must stay untouched! No patches or changes should be necessary to use this implementation.
## Roadmap
1. Create an usable object model (based on ECMA TR-88) to represent an Asterisk server as a Switching Domain
2. Make the key objects serializable for use as CSTA-XML Events/Requests/Responses
3. Handle TCP client connections, establish and keep-alive CSTA sessions
4. Connect to Asterisk and process AMI events to update the state of the object model accordingly
5. Generate suitable CSTA events for all conditions
6. Process CSTA requests and control Asterisk accordingly
## Contributing
Anybody may file an issue or send pull requests. Any help is greatly appreciated.
| 71.827586 | 275 | 0.803649 | eng_Latn | 0.973725 |
c193bd00ffeb1f288bb6503789f201f4b6a840f4 | 985 | md | Markdown | docs/Demo/P_SimpleAccess_SqlServer_Repository_SqlRepository_SimpleAccess.md | sheryever/simple-access-orm | 7662daabae197130856ad3b6936983ca60cd7214 | [
"Apache-2.0"
] | 22 | 2015-11-08T23:10:56.000Z | 2021-07-30T10:28:01.000Z | docs/Demo/P_SimpleAccess_SqlServer_Repository_SqlRepository_SimpleAccess.md | sheryever/simple-access-orm | 7662daabae197130856ad3b6936983ca60cd7214 | [
"Apache-2.0"
] | 12 | 2017-03-17T02:47:26.000Z | 2022-01-25T07:44:26.000Z | docs/Demo/P_SimpleAccess_SqlServer_Repository_SqlRepository_SimpleAccess.md | sheryever/simple-access-orm | 7662daabae197130856ad3b6936983ca60cd7214 | [
"Apache-2.0"
] | 3 | 2017-04-08T14:46:05.000Z | 2018-04-06T18:31:22.000Z | # SqlRepository.SimpleAccess Property
The SQL connection.
**Namespace:** <a href="N_SimpleAccess_SqlServer_Repository">SimpleAccess.SqlServer.Repository</a><br />**Assembly:** SimpleAccess.SqlServer (in SimpleAccess.SqlServer.dll) Version: 0.2.3.0 (0.2.8.0)
## Syntax
**C#**<br />
``` C#
public ISqlSimpleAccess SimpleAccess { get; set; }
```
**VB**<br />
``` VB
Public Property SimpleAccess As ISqlSimpleAccess
Get
Set
```
**C++**<br />
``` C++
public:
property ISqlSimpleAccess^ SimpleAccess {
ISqlSimpleAccess^ get ();
void set (ISqlSimpleAccess^ value);
}
```
**F#**<br />
``` F#
member SimpleAccess : ISqlSimpleAccess with get, set
```
#### Property Value
Type: <a href="T_SimpleAccess_SqlServer_ISqlSimpleAccess">ISqlSimpleAccess</a>
## See Also
#### Reference
<a href="T_SimpleAccess_SqlServer_Repository_SqlRepository">SqlRepository Class</a><br /><a href="N_SimpleAccess_SqlServer_Repository">SimpleAccess.SqlServer.Repository Namespace</a><br /> | 21.888889 | 209 | 0.721827 | yue_Hant | 0.924902 |
c193f7f2433c8adc04e28785944dea0cdae6dda4 | 1,426 | md | Markdown | docs/storages/redis.md | messa/docs | 1a53ffbb3fe5425a50c5da149975b7387109c552 | [
"CC0-1.0"
] | null | null | null | docs/storages/redis.md | messa/docs | 1a53ffbb3fe5425a50c5da149975b7387109c552 | [
"CC0-1.0"
] | null | null | null | docs/storages/redis.md | messa/docs | 1a53ffbb3fe5425a50c5da149975b7387109c552 | [
"CC0-1.0"
] | null | null | null | # Redis
Redis is a popular key-value database that you can use to implement a cache or a message broker, or to share data between different processes. Under certain circumstances Redis can even store data of a persistent nature, but it is a database that works primarily with RAM, so it is not meant for tens of GB of data. You can read more about Redis here, for example:
* http://www.zdrojak.cz/clanky/redis-key-value-databaze-v-pameti-i-na-disku/
And of course on the [project pages](http://redis.io/).
On Roští we can recommend Redis as:
* a cache
* a message broker for Celery and python-rq
* storage for various statistics
But you will surely find your own use for it.
## Enabling Redis
On Roští, Redis sits in the same container as your application. It is already installed, you just need to enable it:
```bash
enable-redis
```
Running this copies a configuration file to */srv/conf/redis.conf*, which you can change as you like, and configures supervisor so that Redis runs in the background. In short, **you don't have to take care of anything else; Redis just runs**.
There are two log files for the running Redis instance:
* /srv/logs/redis.log - supervisor logs here and it may contain startup errors
* /srv/logs/redis-server.log - Redis itself logs here and it will contain runtime errors
Redis will listen on **localhost** on the standard port **6379**. You can try it out with:
```bash
(venv)app@rosti ~ $ redis-cli
127.0.0.1:6379> KEYS *
(empty list or set)
```
| 36.564103 | 357 | 0.765077 | ces_Latn | 0.999939 |
28c12f63edf0d2a51eb0dd2a4df4750787413e8a | 4,460 | markdown | Markdown | _posts/2017-12-09-sysfs.markdown | xiazemin/MyBlog | a0bf678e052efd238a4eb694a27528ccc234c186 | [
"MIT"
] | 1 | 2021-08-14T12:11:15.000Z | 2021-08-14T12:11:15.000Z | _posts/2017-12-09-sysfs.markdown | xiazemin/MyBlogSrc | cc55274b05e2a6fd414066b09f41dab26c3f7e75 | [
"MIT"
] | 2 | 2020-10-30T15:42:56.000Z | 2020-10-30T15:42:56.000Z | _posts/2017-12-09-sysfs.markdown | xiazemin/MyBlog | a0bf678e052efd238a4eb694a27528ccc234c186 | [
"MIT"
] | 1 | 2018-12-11T13:49:13.000Z | 2018-12-11T13:49:13.000Z | ---
title: linux sysfs
layout: post
category: linux
author: 夏泽民
---
<!-- more -->
When debugging a driver, or when a driver needs to take input or produce output for certain parameters, it is often necessary to read and write some of the driver's variables or kernel parameters, or to call driver functions. This is where the sysfs interface is very useful: it allows user space to directly read and write these driver variables or to invoke certain driver functions. The sysfs interface is very similar to the proc filesystem; some people describe the proc filesystem as Windows XP and the sysfs interface as Windows 7.
In the Android system, the vibrator, backlight, power subsystem and so on often use sysfs interfaces as the boundary between kernel space and user space, and the driver has to provide the content behind these interfaces.
The sysfs filesystem is a special filesystem similar to the proc filesystem. It organizes the devices in the system into a hierarchy and provides user-mode programs with detailed information about kernel data structures.
去/sys看一看,
localhost:/sys#ls /sys/
block/ bus/ class/ devices/ firmware/ kernel/ module/ power/
Block目录:包含所有的块设备
Devices目录:包含系统所有的设备,并根据设备挂接的总线类型组织成层次结构
Bus目录:包含系统中所有的总线类型
Drivers目录:包括内核中所有已注册的设备驱动程序
Class目录:系统中的设备类型(如网卡设备,声卡设备等)
sys下面的目录和文件反映了整台机器的系统状况。比如bus,
localhost:/sys/bus#ls
i2c/ ide/ pci/ pci express/ platform/ pnp/ scsi/ serio/ usb/
里面就包含了系统用到的一系列总线,比如pci, ide, scsi, usb等等。比如你可以在usb文件夹中发现你使用的U盘,USB鼠标的信息。
我们要讨论一个文件系统,首先要知道这个文件系统的信息来源在哪里。所谓信息来源是指文件组织存放的地点。比如,我们挂载一个分区,
mount -t vfat /dev/hda2 /mnt/C
我们就知道挂载在/mnt/C下的是一个vfat类型的文件系统,它的信息来源是在第一块硬盘的第2个分区。
但是,你可能根本没有去关心过sysfs的挂载过程,她是这样被挂载的。
mount -t sysfs sysfs /sys
ms看不出她的信息来源在哪。sysfs是一个特殊文件系统,并没有一个实际存放文件的介质。断电后就玩完了。简而言之,sysfs的信息来源是kobject层次结构,读一个sysfs文件,就是动态的从kobject结构提取信息,生成文件。
Kobject
Kobject 是Linux 2.6引入的新的设备管理机制,在内核中由struct kobject表示。通过这个数据结构使所有设备在底层都具有统一的接口,kobject提供基本的对象管理,是构成Linux2.6设备模型的核心结 构,它与sysfs文件系统紧密关联,每个在内核中注册的kobject对象都对应于sysfs文件系统中的一个目录。Kobject是组成设备模型的基 本结构。类似于C++中的基类,它嵌入于更大的对象的对象中--所谓的容器--用来描述设备模型的组件。如bus,devices, drivers 都是典型的容器。这些容器就是通过kobject连接起来了,形成了一个树状结构。这个树状结构就与/sys向对应。
kobject 结构为一些大的数据结构和子系统提供了基本的对象管理,避免了类似机能的重复实现。这些机能包括
- 对象引用计数.
- 维护对象链表(集合).
- 对象上锁.
- 在用户空间的表示.
Kobject结构定义为:
struct kobject {
char *k_name; /* pointer to the device name */
char name[KOBJ_NAME_LEN]; /* device name */
struct kref kref; /* object reference count */
struct list_head entry; /* list entry linking this object into its kset */
struct kobject *parent; /* pointer to the parent object */
struct kset *kset; /* pointer to the owning kset */
struct kobj_type *ktype; /* pointer to the object type descriptor */
struct dentry *dentry; /* path of the file node for this object in sysfs */
};
The kref field holds the object's reference count. The kernel manages reference counting through kref and provides two functions, kobject_get() and kobject_put(), to increment and decrement the count; when the count drops to 0, all resources used by the object are released. The ktype field is a pointer to a kobj_type structure describing the object's type.
kobj_type
struct kobj_type {
void (*release)(struct kobject *);
struct sysfs_ops * sysfs_ops;
struct attribute ** default_attrs;
};
The kobj_type structure contains three fields: a release method used to free the resources occupied by the kobject; a sysfs_ops pointer to the sysfs operations table; and a list of default attributes for the sysfs filesystem. The sysfs operations table includes two functions, store() and show(). When user space reads an attribute, show() is called; it encodes the attribute value into a buffer returned to user space. store() is used to store an attribute value passed in from user space.
kset
The most important role of a kset is to establish the association between the upper layer (the subsystem) and the lower layer (the kobject). A kobject also uses it to identify which type it belongs to and then build the correct directory location under /sys. The kset takes precedence: a kobject uses its own *kset pointer to find the kset it belongs to and sets its *ktype to that kset's ktype; only when no kset is defined is the object's own ktype used to build the relationship. Kobjects are organized into a hierarchy through ksets; a kset is a collection of kobjects of the same type, represented in the kernel by the kset data structure, defined as:
struct kset {
struct subsystem *subsys; /* pointer to the owning subsystem */
struct kobj_type *ktype; /* pointer to the type descriptor for this kset's objects */
struct list_head list; /* head of the list linking all kobjects in this kset */
struct kobject kobj; /* embedded kobject */
struct kset_hotplug_ops *hotplug_ops; /* pointer to the hotplug operations table */
};
All kobjects contained in a kset are organized into a doubly linked circular list, and the list field is exactly the head of that list. The ktype field points to a kobj_type structure shared by all kobjects in the kset, describing the type of these objects. The kset data structure also embeds a kobject (the kobj field), and the parent field of every kobject belonging to this kset points to this embedded object. In addition, the kset relies on this kobject for reference counting: the kset's reference count is in fact the embedded kobject's reference count.
subsystem
If a kset is a collection that manages kobjects, then by the same logic a subsystem is a collection that manages ksets. It describes a whole class of device subsystems; for example, block_subsys represents all block devices, corresponding to the block directory in the sysfs filesystem. Similarly, devices_subsys corresponds to the devices directory in sysfs and describes all devices in the system. A subsystem is described by the struct subsystem data structure, defined as:
struct subsystem {
struct kset kset; /* embedded kset object */
struct rw_semaphore rwsem; /* semaphore for mutually exclusive access */
};
As you can see, the only difference between subsystem and kset is the extra semaphore, which is why in later code subsystem was completely replaced by kset.
Each kset belongs to some subsystem; by setting the subsys field of the kset structure to point to a given subsystem, a kset can be added to that subsystem. All ksets attached to the same subsystem share the same rwsem semaphore, used to synchronize access to the lists inside the ksets.
sysfs is the filesystem used to present the device-driver model, and it is based on ramfs. To study the Linux device model, you first have to do the groundwork, and summarizing the API that sysfs exposes is part of it. The sysfs filesystem provides creation and management of four kinds of files: directories, regular files, symbolic links, and binary files. The directory hierarchy usually represents the structure of the device model, while symbolic links represent relationships between different parts. For example, a device's directory appears only under /sys/devices; anywhere else that refers to it simply links to it with a symlink, keeping a single unique instance of the device. Regular files and binary files usually represent device attributes; reading and writing these files invokes the corresponding attribute read/write operations.
sysfs is the filesystem presenting the device-driver model, and its directory hierarchy actually reflects the hierarchy of objects. To support this kind of directory layout, Linux provides two structures specifically as the skeleton of sysfs: struct kobject and struct kset. As we know, sysfs is completely virtual; each of its directories actually corresponds to a kobject, and to know which subdirectories a directory has you need the kset. From an object-oriented point of view, kset inherits kobject's functionality: it can represent a directory in sysfs and can also contain lower-level directories. kobject and kset will be analyzed in detail in other articles; the brief description here is only meant to better introduce the API that sysfs provides.
Compared with proc, sysfs has many advantages, the most important being its clean design. A proc virtual file may have an internal format, such as /proc/scsi/scsi: it is both readable and writable (its file permissions are incorrectly marked as 0444, which is a kernel bug), and the read and write formats differ, representing different operations. After reading this file, an application generally still has to parse the string, and when writing it must first format the string as required. By contrast, the sysfs design principle is that one attribute file does one thing: a sysfs attribute file usually has a single value that is read or written directly. The whole /proc/scsi directory was marked obsolete (LEGACY) in the 2.6 kernel, and its functionality has been fully replaced by the corresponding /sys attribute files. Newly designed kernel mechanisms should use the sysfs mechanism wherever possible, leaving proc as a pure "process filesystem".
# TravelIn
---
layout: post
title: "C++使用字符串作为switch的case子句"
subtitle: "C++的switch没有使用字符串作为case选择分支的。所以这里用这个作为字符串的case分支真的很不错。"
date: 2018-04-10 12:00:00
author: "Mage"
header-img: "img/post-bg-apple-event-2015.jpg"
tags:
- C++
---
## Preface ##
C++'s switch statement cannot use strings as case selection branches, so this technique for switching on strings is genuinely handy. Because it uses C++11's constexpr functions and literal-constant syntax, the function generates the string's hash at compile time, so duplicate case values cannot occur; if duplicates appear, the program fails to compile.
## Main text ##
Sometimes we would like to write a switch statement like the one below:
```C
const char* str = "first";
switch(str){
case "first":
cout << "1st one" << endl;
break;
case "second":
cout << "2nd one" << endl;
break;
case "third":
cout << "3rd one" << endl;
break;
default:
cout << "Default..." << endl;
}
```
But in C++ a string cannot be used as a case label, so, somewhat puzzled, we have to find another way to implement this need.
Fortunately, C++11 introduced constexpr and user-defined literals; combining these two new features, we can implement code that looks just like the above.
The basic idea is:
1. Define a hash function that computes the hash of a string, converting the string into an integer;
```C
typedef std::uint64_t hash_t;
constexpr hash_t prime = 0x100000001B3ull;
constexpr hash_t basis = 0xCBF29CE484222325ull;
hash_t hash_(char const* str)
{
hash_t ret{basis};
while(*str){
ret ^= *str;
ret *= prime;
str++;
}
return ret;
}
```
2. Using C++11's constexpr function syntax, define a constexpr function and call it at the switch's case labels, as follows:
```C
constexpr hash_t hash_compile_time(char const* str, hash_t last_value = basis)
{
return *str ? hash_compile_time(str+1, (*str ^ last_value) * prime) : last_value;
}
```
This function is only one short line; using recursion it produces the same value as the hash_ function above. Because the function is declared constexpr, the compiler can compute a string's hash at compile time, and that is precisely the key point: since it is an integer constant the compiler can obtain, it can naturally be placed at a switch's case label.
So we can write a switch statement like this:
```C
void simple_switch2(char const* str)
{
using namespace std;
switch(hash_(str)){
case hash_compile_time("first"):
cout << "1st one" << endl;
break;
case hash_compile_time("second"):
cout << "2nd one" << endl;
break;
case hash_compile_time("third"):
cout << "3rd one" << endl;
break;
default:
cout << "Default..." << endl;
}
}
```
In this implementation, hash_compile_time("first") is a constant computed by the compiler, so it can be used as a case label; moreover, if a hash collision occurs, the compiler reports an error.
3. The syntax above still isn't pretty enough. Using user-defined literals, overload an operator as follows:
```C
constexpr unsigned long long operator "" _hash(char const* p, size_t)
{
return hash_compile_time(p);
}
```
Now we can write literal constants like the one below, using "_hash" as the suffix of a user-defined literal. When the compiler sees a literal ending with this suffix, it calls our overloaded operator and obtains the same value as calling hash_compile_time, but it reads much more comfortably:
```C
"first"_hash
```
Now the switch statement we write looks much nicer.
```C
void simple_switch(char const* str)
{
using namespace std;
switch(hash_(str)){
case "first"_hash:
cout << "1st one" << endl;
break;
case "second"_hash:
cout << "2nd one" << endl;
break;
case "third"_hash:
cout << "3rd one" << endl;
break;
default:
cout << "Default..." << endl;
}
}
```
In my own testing, in cocos2d-x 3.2 I used this switch in skeletal-animation frame events, and it works normally.
Reposted from: [http://blog.csdn.net/yozidream/article/details/22789147][1]
[1]:http://blog.csdn.net/yozidream/article/details/22789147
---
title: User view
TOCTitle: User view
ms:assetid: 796f77e6-1da6-4969-b18b-3537209a1fe4
ms:mtpsurl: https://technet.microsoft.com/es-es/library/JJ688100(v=OCS.15)
ms:contentKeyID: 49889238
ms.date: 01/07/2017
mtps_version: v=OCS.15
ms.translationtype: HT
---
# User view
_**Topic last modified:** 2015-03-09_
The User view stores information about users who have participated in calls or sessions that have records in the database. This view was introduced in Microsoft Lync Server 2013.
<table>
<colgroup>
<col style="width: 33%" />
<col style="width: 33%" />
<col style="width: 33%" />
</colgroup>
<thead>
<tr class="header">
<th>Column</th>
<th>Data type</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p>UserId</p></td>
<td><p>Int</p></td>
<td><p>A unique number that identifies this user.</p></td>
</tr>
<tr class="even">
<td><p>UserUri</p></td>
<td><p>nvarchar (450)</p></td>
<td><p>The user's URI.</p></td>
</tr>
<tr class="odd">
<td><p>TenantKey</p></td>
<td><p>uniqueidentifier</p></td>
<td><p>The user's tenant. For more information, see <a href="lync-server-2013-tenants-table.md">Tenants table in Lync Server 2013</a>.</p></td>
</tr>
<tr class="even">
<td><p>UriType</p></td>
<td><p>nvarchar (256)</p></td>
<td><p>The type of the user's URI. For more information, see <a href="lync-server-2013-uritypes-table.md">UriTypes table in Lync Server 2013</a>.</p></td>
</tr>
</tbody>
</table>
# The Marshmallow Challenge
**Goal:** place a **marshmallow** as high as possible on top of a free-standing structure
**6 rules to follow:**
- The structure rests on the table, which stays on its 4 legs!
- The structure is not suspended from anything
- Height is measured from the table up to the base of the marshmallow
- Only the supplied materials are allowed
  - Structure: string, tape, marshmallows, spaghetti
  - Tool: a pair of scissors
- String, tape and spaghetti may be cut, but not the marshmallow
- When the final bell rings, nobody touches the structure any more
# Ariston Remotethermo Client
[![NPM Version][npm-image]][npm-url]
[![NPM Downloads][downloads-image]][downloads-url]
This package provides a client for [Ariston remotethermo panel](https://www.ariston-net.remotethermo.com) (Ariston Net, Ariston Bus Bridgenet® control website).
It uses the web API to control Ariston heaters.
I have only tested it with my Ariston Genus One Net boiler, but it should work with any boiler that uses a Cube Net S or a Sensys thermostat.
It should perhaps also work with similar Ariston-style boilers/heaters such as Chaffoteaux, etc.
## Limitations
There are a few limitations due to problems with connecting to the Ariston RemoteThermo API. Sending many requests at the same time causes a Cube Net S disconnection. In that case you will need to wait a few seconds for the Cube Net S to reconnect with the RemoteThermo servers (or it sometimes needs a restart).
The library uses requestretry under the hood, but that only solves some of the problems.
## Installation
This is a [Node.js](https://nodejs.org/en/) module available through the
[npm registry](https://www.npmjs.com/).
Installation is done using the
[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally):
```bash
$ npm install ariston-remotethermo-client
```
## Features
* Login
* Enable/Disable comfort mode
* Enable/Disable winter mode
* Get params:
* target temperature
* flame presence
* outdoor temperature
* holiday mode
* room temperature
* overwriten temperature
* overwriten temperature Until
* winter mode
* comfort mode
* get gas usage
* many others params
## Usage configuration
You need to specify a:
* LOGIN
* PASSWORD
to your account at https://www.ariston-net.remotethermo.com
You also need to specify a HEATER_ID, which you can easily find after logging in to https://www.ariston-net.remotethermo.com:
HEATER_ID appears as part of the URL after login:
`https://www.ariston-net.remotethermo.com/PlantDashboard/Index/HEATER_ID`
## Usage Example:
```js
const AristonApi = require("ariston-remotethermo-client");
const ariston = new AristonApi("LOGIN", "PASSWORD", "HEATER_ID");
ariston.login().then(() => {
ariston.getStatus().then((params) => {
console.log("Comfort Temperature:", params.zone.comfortTemp.value);
console.log("Outdoor Temperature:", params.outsideTemp);
console.log("Room Temperature:", params.zone.roomTemp);
ariston.getComfortStatus().then((value) => {
console.log("Comfort mode:", value);
ariston.setComfortStatus(3).then((newState) => {
console.log("Comfort mode:", newState);
});
});
});
});
```
## Disclaimer
All information posted is merely for educational and informational purposes. It is not intended as a substitute for professional advice. Should you decide to act upon any information on this website, you do so at your own risk.
While the information on this website has been verified to the best of our abilities, I cannot guarantee that there are no mistakes or errors.
You may use this library with the understanding that doing so is AT YOUR OWN RISK. No warranty, express or implied, is made with regards to the fitness or safety of this code for any purpose. If you use this library to query or change settings of your products you understand that it is possible to cause damages
I reserve the right to change this policy at any given time.
## License
[MIT](LICENSE)
[npm-url]: https://npmjs.org/package/ariston-remotethermo-client
[npm-image]: https://img.shields.io/npm/v/ariston-remotethermo-client.svg
[downloads-image]: https://img.shields.io/npm/dm/ariston-remotethermo-client.svg
[downloads-url]: https://npmjs.org/package/ariston-remotethermo-client
---
title: Compiler Error CS1948
ms.date: 07/20/2015
f1_keywords:
- CS1948
helpviewer_keywords:
- CS1948
ms.assetid: 3dac3abe-0edd-4ee1-8fb1-bc597ea63e1f
ms.openlocfilehash: 1010e26655db3956f6e2266d3634be8d67c110cf
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 05/04/2018
ms.locfileid: "33304877"
---
# <a name="compiler-error-cs1948"></a>Compiler Error CS1948
The range variable 'name' cannot have the same name as a method type parameter
The same declaration space cannot contain two declarations of the same identifier.
## <a name="to-correct-this-error"></a>To correct this error
1. Change the name of the range variable or of the type parameter.
## <a name="example"></a>Example
The following example generates CS1948 because the identifier `T` is used both for the range variable and for the type parameter of the method `TestMethod`:
```csharp
// cs1948.cs
using System.Linq;
class Test
{
public void TestMethod<T>(T t)
{
var x = from T in Enumerable.Range(1, 100) // CS1948
select T;
}
}
```
---
layout: post
title: "Usando Vagrant em projetos web"
date: 2021-02-08
categories:
- Ruby
description:
image: https://source.unsplash.com/user/jdiegoph/sUG0LWWmNiI/2000x1200
image-sm: https://source.unsplash.com/user/jdiegoph/sUG0LWWmNiI/500x300
---
Recently I started using Vagrant in my projects as a way to get used to the technology, not out of real necessity. The result is that I liked how it works a lot and found it very practical to bring into application development, if you are interested in a development environment isolated from the system.
Vagrant is a tool for creating and managing development or test environments using virtualization. Imagine you built an application, but when it is time to run it, whether on your teammates' computers or the client's, it doesn't work. Building a "clean", controlled development environment can be much more effective for developing the application, and better still if it can be shared and reproduced in the simplest way by other developers. Well, Vagrant does all of that.
To begin, let's focus on a simple setup on Linux.
After installing Vagrant, you need virtualization software; it can be VirtualBox or VMware Workstation, for example. With Vagrant installed, it is time to build your environment. For that you need to be in your project's root directory and run the following command. But pay attention: Vagrant already offers some pre-prepared environments, so use one of them as a base. In my case I like "hashicorp/bionic64"; I prefer it because of its Ubuntu server environment.
~~~ shell
$ vagrant init hashicorp/bionic64
~~~
<br>
A file named Vagrantfile will be created automatically. Notice that the file is written in the Ruby language. After that, I recommend you look through the file and make whatever changes are needed for your use, or for how the development environment should be. Don't know how to make these changes? Then follow along with me.
The generated file is below:
~~~ ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = "hashicorp/bionic64"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
# config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Enable provisioning with a shell script. Additional provisioners such as
# Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
# documentation for more information about their specific syntax and use.
# config.vm.provision "shell", inline: <<-SHELL
# apt-get update
# apt-get install -y apache2
# SHELL
end
~~~
<br>
How mine will end up looking:
~~~ ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/bionic64"
config.vm.provider "virtualbox" do |vb|
vb.memory = "2048"
vb.cpus = 4
end
# config.vm.network "forwarded_port", guest: 80, host: 8080
# config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
config.vm.network "private_network", ip: "192.168.33.10"
# config.vm.network "public_network"
# config.vm.synced_folder "../data", "/vagrant_data"
config.vm.synced_folder "./dist", "/var/www/html", :nfs => { :mount_options => [ "dmode=777", "fmode=665" ] }
config.vm.provision "shell", path: "bootstrap.sh"
end
~~~
<br>
The first thing is that all the code lives inside the Vagrant.configure block. I usually keep only what I will need there, in this case the VM configuration: config.vm.box, which, if you picked one of the environments from the site, will be declared there. Next, I like to keep all the network options; since I build web applications, I want quick access to them through an address. Looking at the file, you will see many comments; they explain what each piece of commented-out code can do. I delete most of these comments, keeping only the commands that interest me.
Putting all the network commands together, I opt for access through a private IP, which is safer.
The next part is how you can define the VM's settings. By the way, the Vagrantfile has an option for running the VM with a GUI, but you don't need that just to test or run your application; you can use the terminal to check on the VM.
The commands to configure the VM's resources go inside the config.vm.provider block. Below that we have the last two commands; the second-to-last synchronizes my project's files with the ones in the VirtualBox VM.
This is an extremely interesting part: the files you create for your project can be transferred automatically to the VM. If you are developing a web application with a builder, I recommend transferring the files from the dist/ folder. Finally, you can write provisioning commands directly inline to be executed when the environment is built, or, my preference, use a shell script.
In the config.vm.synced_folder command we add an option that is not in the comments: nfs. This option requires you to type your root password, and it improves file synchronization.
In case you chose to use a shell script to assemble your environment, I provide an example of one below.
~~~ shell
# Update Packages
apt-get update
# Upgrade Packages
apt-get upgrade
# Basic Linux Stuff
apt-get install -y git
# Apache
apt-get install -y apache2
# Enable Apache Mods
a2enmod rewrite
~~~
<br>
Now it is time to build our environment. Run the command below to start the VM:
~~~ shell
$ vagrant up
~~~
<br>
You will see everything being installed, and if you followed the changes to the Vagrantfile, you will be asked for your root password.
With the environment ready, if you disabled the GUI display nothing from the VM will be shown, but you can access it with the command:
~~~ shell
$ vagrant ssh
~~~
And your terminal will switch to the VM environment, where you can change whatever you want. I asked for my files to be transferred to the www folder, because that is where the Apache server runs, but you can choose any folder; just change it in the Vagrantfile.
Finally, if I want to access my web application, with a server running in the VM, I use the address I set in the Vagrantfile. Vagrant does both port forwarding and routing of server access by IP address on the host, that is, your computer.
To wrap up, we have the commands:
~~~ shell
$ vagrant destroy
~~~
<br>
in case we want to destroy the virtual machine.
~~~ shell
$ vagrant suspend
~~~
<br>
To suspend the VM.
~~~ shell
$ vagrant reload
~~~
<br>
To reload the Vagrantfile, in case we changed something there; consequently, the environment will be rebuilt.
~~~ shell
$ vagrant resume
~~~
<br>
To leave the suspended state. Besides that, it is possible to run several VMs at the same time with Vagrant; think of a database or an API.
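As a sketch of that multi-VM idea, a Vagrantfile can define several machines with config.vm.define; the box, machine names, and IPs below are assumptions for illustration:

~~~ ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/bionic64"

  # Application VM
  config.vm.define "web" do |web|
    web.vm.network "private_network", ip: "192.168.33.10"
    web.vm.provision "shell", path: "bootstrap.sh"
  end

  # Database VM
  config.vm.define "db" do |db|
    db.vm.network "private_network", ip: "192.168.33.11"
    db.vm.provision "shell", inline: "apt-get update && apt-get install -y mysql-server"
  end
end
~~~

`vagrant up` then boots both machines, and the subcommands accept a name, e.g. `vagrant ssh db` or `vagrant halt web`.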
Vagrant is a very good tool that is easy to deal with, and it provides great productivity when building applications. In the future I will cover running Vagrant with multiple virtual machines.
---
layout: post
status: publish
published: true
title: Blue Light Special
author:
display_name: PapaScott
login: root
email: papascott@gmail.com
url: https://www.papascott.de/
author_login: root
author_email: papascott@gmail.com
author_url: https://www.papascott.de/
wordpress_id: 15490
wordpress_url: https://www.papascott.de/?p=15490
date: '2012-12-05 19:06:08 +0100'
date_gmt: '2012-12-05 18:06:08 +0100'
---
<p><img class="alignright size-medium wp-image-15491" title="41HZsPbqVaL._SL500_" src="https://www.papascott.de/wordpress/wp-content/uploads/2012/12/41HZsPbqVaL._SL500_-300x300.jpg" alt="Blue Light Special" width="300" height="300" /></p>
<p>We just bought one of <a href="http://www.amazon.de/gp/product/B006K0PUKO/ref=as_li_ss_tl?ie=UTF8&camp=1638&creative=19454&creativeASIN=B006K0PUKO&linkCode=as2&tag=papascott-21">these</a> at Amazon to try out for our restaurants... if it works out we might buy 3 more. It's an battery-powered alarm clock with siren and flashing police light. Can anyone guess why we might want this? Hint: we're not waking up any sleepers.</p>
<p>Here's the German title at Amazon: <a href="http://www.amazon.de/gp/product/B006K0PUKO/ref=as_li_ss_tl?ie=UTF8&camp=1638&creative=19454&creativeASIN=B006K0PUKO&linkCode=as2&tag=papascott-21">Signalwecker 'BLAULICHT' mit Sirenenalarm - Tatütata... Aufstehen... Der bekommt jeden wach!</a><img style="border: none !important; margin: 0px !important;" src="https://www.assoc-amazon.de/e/ir?t=papascott-21&l=as2&o=3&a=B006K0PUKO" alt="" width="1" height="1" border="0" /></p>
<p><strong>Update 10 Dec 2012</strong> Our product has arrived, and as you can see it's not half bad (although difficult to operate with one hand).</p>
<p><object style="height: 390px; width: 640px"><param name="movie" value="http://www.youtube.com/v/wJ-hd4qz6w0?version=3&feature=player_detailpage"><param name="allowFullScreen" value="true"><param name="allowScriptAccess" value="always"><embed src="https://www.youtube.com/v/wJ-hd4qz6w0?version=3&feature=player_detailpage" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="480" height="360"></object></p>

# xbox2midi
Python script intended for Linux / Raspberry Pi use. Converts XBox 360 controller inputs into MIDI commands.
## Getting Started
### Prerequisites
[Xbox.py](https://github.com/FRC4564/Xbox)
[Mido](https://github.com/mido/mido)
[rtmidi](https://github.com/thestk/rtmidi)
### Installing
1. [Follow installation instructions for xbox.py](https://github.com/FRC4564/Xbox), download and put the xbox.py file in the same folder as xbox2midi.py
2. Install Mido
```
pip install mido
```
3. Install rtmidi
```
pip install python-rtmidi
```
### Usage
You will most likely need to edit the script to get the functionality you want.
I set this up to control a DSI Tetra synthesizer, so consult the manual of your MIDI device to find the CC values you want to control, and update the script accordingly.
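The conversion at the heart of the script is just range scaling: xbox.py reports stick axes as floats in roughly -1.0..1.0 and triggers in 0.0..1.0, while MIDI CC values are integers 0..127. A standalone sketch of that mapping (the helper names are illustrative, not actual xbox2midi functions):

```python
def axis_to_cc(axis_value):
    """Map a stick axis in [-1.0, 1.0] to a MIDI CC value in [0, 127]."""
    axis_value = max(-1.0, min(1.0, axis_value))  # clamp out-of-range readings
    return round((axis_value + 1.0) / 2.0 * 127)

def trigger_to_cc(trigger_value):
    """Map a trigger in [0.0, 1.0] to a MIDI CC value in [0, 127]."""
    trigger_value = max(0.0, min(1.0, trigger_value))
    return round(trigger_value * 127)

if __name__ == "__main__":
    print(axis_to_cc(-1.0), axis_to_cc(0.0), axis_to_cc(1.0))  # 0 64 127
    print(trigger_to_cc(0.25))
```

The resulting integer is then sent with Mido, e.g. `outport.send(mido.Message('control_change', control=41, value=cc))`, where the control number depends on what your device maps (consult its manual).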
### Default controls
* Start Button - toggle note on/off
* Back Button - exits program
* Left Stick (x-axis) - OSC 1 shape
* Left Stick (y-axis) - OSC 1 pitch
* Right Stick (x-axis) - OSC 2 shape
* Right Stick (y-axis) - OSC 2 pitch
* Left Stick (pressed) - toggle up/down direction of OSC 1 pitch
* Right Stick (pressed) - toggle up/down direction of OSC 2 pitch
* Left Trigger - OSC 1 fine pitch
* Right Trigger - OSC 2 fine pitch
* Left Bumper - toggle up/down direction of OSC 1 fine pitch
* Right Bumper - toggle up/down direction of OSC 2 fine pitch
* D-Pad (up) - octave up
* D-Pad (down) - octave down
* D-Pad (left) - note down
* D-Pad (right) - note up
* A - note change: down a third (4 half steps)
* B - note change: up a third (4 half steps)
* X - note change: down a fifth (7 half steps)
* Y - note change: up a fifth (7 half steps)
## Author
jmbiggs, [jmbiggsdev@gmail.com](mailto:jmbiggsdev@gmail.com)
## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
# Kibi 0.2.0
Kibi extends Kibana 4.1 with data intelligence features; the core feature of
Kibi is the capability to join and filter data from multiple Elasticsearch
indexes and from SQL/NOSQL data sources ("external queries").
In addition, Kibi provides UI features and visualizations like dashboard
groups, tabs, cross entity relational navigation buttons, an enhanced search
results table, analytical aggregators, HTML templates on query results, and
much more.
## Quick start
* Download the Kibi demo distribution: [http://siren.solutions/kibi](http://siren.solutions/kibi)
* Start Elasticsearch by running `elasticsearch/bin/elasticsearch` on Linux/OS X or `elasticsearch\bin\elasticsearch.bat` on Windows.
* Go to the `kibi` directory and run `bin/kibi` on Linux/OS X or `bin\kibi.bat` on Windows.
A pre-configured Kibi is now running at [http://localhost:5602](http://localhost:5602);
a complete description of the demo is [available](http://siren.solutions/kibi/docs/current/getting-started.html) in the Kibi documentation.
## Documentation
Visit [siren.solutions](http://siren.solutions/kibi/docs) for the full Kibi
documentation.
## License
Copyright (c) 2015 SIREn Solutions
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this software except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
To enable index join capabilities, Kibi relies on the Siren 2
Elasticsearch plugin. The Kibi demo distribution includes a prerelease of
Siren 2 that is licensed exclusively for personal use, development, and
accredited academic research. For a production license of Siren 2, please
contact info@siren.solutions.
## Acknowledgments
Kibana is a trademark of Elasticsearch BV, registered in the U.S. and in other
countries.
Elasticsearch is a trademark of Elasticsearch BV, registered in the U.S. and in
other countries.
# 5G-NFV-Slice-preferences
MATLAB source files for preference-based Virtual Network Function placement in sliced 5G network architectures.
---
layout: standard
toc: false
---
## Hot Module Replacement
Elmish applications can benefit from Hot Module Replacement (known as HMR).
This allows us to modify the application while it's running, without a full reload. Your application now maintains its state between code changes.

## Installation
Add Fable package with paket:
```sh
paket add nuget Fable.Elmish.HMR
```
## Webpack configuration
Add `hot: true` and `inline: true` to your `devServer` node.
Example:
```js
// ...
devServer: {
    // ...
    hot: true,
    inline: true
}
// ...
```
## Parcel and Vite
Parcel and Vite are supported since version 4.2.0. They don't require any specific configuration.
## Usage
The package will include the HMR support only if you are building your program with `DEBUG` set in your compilation conditions. Fable adds it by default when in watch mode.
You always need to include `open Elmish.HMR` after your other `open Elmish.XXX` statements. This is needed to shadow the supported APIs.
For example, if you use `Elmish.Program.run` it will be shadowed as `Elmish.HMR.Program.run`.
```fs
open Elmish
open Elmish.React
open Elmish.HMR // See how this is the last open statement
Program.mkProgram init update view
|> Program.withReactSynchronous "elmish-app"
|> Program.run
```
You can also use `Elmish.Program.runWith` if you need to pass custom arguments; `runWith` will also be shadowed as `Elmish.HMR.Program.runWith`:
```fs
Program.mkProgram init update view
|> Program.withReactSynchronous "elmish-app"
|> Program.runWith ("custom argument", 42)
```
# Crypto Wallet Manager
A Chrome extension to manage your crypto wallets.
Owning multiple cryptocurrency wallets is a hassle because of the multiple wallet addresses.
This Chrome extension provides an easy way to store wallet addresses for multiple cryptocurrencies.
Managing multiple wallets has never been this easy.
# Features
- Store and manage multiple wallet addresses
- Copy addresses to clipboard with a single click
- One-click QR code generation
- Multiple currencies supported
- Open Source
# Donations
I don't have enough funds to put this in the Chrome Web Store.
If you want to help me put it in the store, donations are welcome at
19YEnU6g8Sr8MZgSJAxJQv5bubYS4iUFqE
# Bitcointalk Thread
https://bitcointalk.org/index.php?topic=1821481.0
---
layout: post
title: Machine Learning Timeline
---
# WEEK 1
* Linear Regression with One Variable
* Linear Algebra Review
# WEEK 2
* Linear Regression with Multiple Variables
# WEEK 3
* Logistic Regression
* Regularization
# WEEK 4 + 5 + 6
* Neural Networks
# WEEK 7 + 8 + 9
* Support Vector Machines
# WEEK 10 + 11
* Anomaly Detection
* Recommender Systems
---
title: "Maximize WPF 3D Performance | Microsoft Docs"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework-4.6"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-wpf"
ms.tgt_pltfrm: ""
ms.topic: "article"
helpviewer_keywords:
- "3-D graphics [WPF]"
ms.assetid: 4bcf949d-d92f-4d8d-8a9b-1e4c61b25bf6
caps.latest.revision: 9
author: dotnet-bot
ms.author: dotnetcontent
manager: "wpickett"
---
# Maximize WPF 3D Performance
As you use the [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] to build 3D controls and include 3D scenes in your applications, it is important to consider performance optimization. This topic provides a list of 3D classes and properties that have performance implications for your application, along with recommendations for optimizing performance when you use them.
This topic assumes an advanced understanding of [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] 3D features. The suggestions in this document apply to "rendering tier 2"—roughly defined as hardware that supports pixel shader version 2.0 and vertex shader version 2.0. For more details, see [Graphics Rendering Tiers](../../../../docs/framework/wpf/advanced/graphics-rendering-tiers.md).
## Performance Impact: High
|Property|Recommendation|
|-|-|
|<xref:System.Windows.Media.Brush>|Brush speed (fastest to slowest):<br /><br /> <xref:System.Windows.Media.SolidColorBrush><br /><br /> <xref:System.Windows.Media.LinearGradientBrush><br /><br /> <xref:System.Windows.Media.ImageBrush><br /><br /> <xref:System.Windows.Media.DrawingBrush> (cached)<br /><br /> <xref:System.Windows.Media.VisualBrush> (cached)<br /><br /> <xref:System.Windows.Media.RadialGradientBrush><br /><br /> <xref:System.Windows.Media.DrawingBrush> (uncached)<br /><br /> <xref:System.Windows.Media.VisualBrush> (uncached)|
|<xref:System.Windows.UIElement.ClipToBoundsProperty>|Set `Viewport3D.ClipToBounds` to false whenever you do not need to have [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] explicitly clip the content of a <xref:System.Windows.Controls.Viewport3D> to the Viewport3D’s rectangle. [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] antialiased clipping can be very slow, and `ClipToBounds` is enabled (slow) by default on <xref:System.Windows.Controls.Viewport3D>.|
|<xref:System.Windows.UIElement.IsHitTestVisible%2A>|Set `Viewport3D.IsHitTestVisible` to false whenever you do not need [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] to consider the content of a <xref:System.Windows.Controls.Viewport3D> when performing mouse hit testing. Hit testing 3D content is done in software and can be slow with large meshes. <xref:System.Windows.UIElement.IsHitTestVisible%2A> is enabled (slow) by default on <xref:System.Windows.Controls.Viewport3D>.|
|<xref:System.Windows.Media.Media3D.GeometryModel3D>|Create different models only when they require different Materials or Transforms. Otherwise, try to coalesce many <xref:System.Windows.Media.Media3D.GeometryModel3D> instances with the same Materials and Transforms into a few larger <xref:System.Windows.Media.Media3D.GeometryModel3D> and <xref:System.Windows.Media.Media3D.MeshGeometry3D> instances.|
|<xref:System.Windows.Media.Media3D.MeshGeometry3D>|Mesh animation—changing the individual vertices of a mesh on a per-frame basis—is not always efficient in [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)]. To minimize the performance impact of change notifications when each vertex is modified, detach the mesh from the visual tree before performing per-vertex modification. Once the mesh has been modified, reattach it to the visual tree. Also, try to minimize the size of meshes that will be animated in this way.|
|3D Antialiasing|To increase rendering speed, disable multisampling on a <xref:System.Windows.Controls.Viewport3D> by setting the attached property <xref:System.Windows.Media.RenderOptions.EdgeMode%2A> to `Aliased`. By default, 3D antialiasing is disabled on [!INCLUDE[TLA#tla_winxp](../../../../includes/tlasharptla-winxp-md.md)] and enabled on [!INCLUDE[TLA#tla_longhorn](../../../../includes/tlasharptla-longhorn-md.md)] with 4 samples per pixel.|
|Text|Live text in a 3D scene (live because it’s in a <xref:System.Windows.Media.DrawingBrush> or <xref:System.Windows.Media.VisualBrush>) can be slow. Try to use images of the text instead (via <xref:System.Windows.Media.Imaging.RenderTargetBitmap>) unless the text will change.|
|<xref:System.Windows.Media.TileBrush>|If you must use a <xref:System.Windows.Media.VisualBrush> or a <xref:System.Windows.Media.DrawingBrush> in a 3D scene because the brush’s content is not static, try caching the brush (setting the attached property <xref:System.Windows.Media.RenderOptions.CachingHint%2A> to `Cache`). Set the minimum and maximum scale invalidation thresholds (with the attached properties <xref:System.Windows.Media.RenderOptions.CacheInvalidationThresholdMinimum%2A> and <xref:System.Windows.Media.RenderOptions.CacheInvalidationThresholdMaximum%2A>) so that the cached brushes won’t be regenerated too frequently, while still maintaining your desired level of quality. By default, <xref:System.Windows.Media.DrawingBrush> and <xref:System.Windows.Media.VisualBrush> are not cached, meaning that every time something painted with the brush has to be re-rendered, the entire content of the brush must first be re-rendered to an intermediate surface.|
|<xref:System.Windows.Media.Effects.BitmapEffect>|<xref:System.Windows.Media.Effects.BitmapEffect> forces all affected content to be rendered without hardware acceleration. For best performance, do not use <xref:System.Windows.Media.Effects.BitmapEffect>.|
## Performance Impact: Medium
|Property|Recommendation|
|-|-|
|<xref:System.Windows.Media.Media3D.MeshGeometry3D>|When a mesh is defined as abutting triangles with shared vertices and those vertices have the same position, normal, and texture coordinates, define each shared vertex only once and then define your triangles by index with <xref:System.Windows.Media.Media3D.MeshGeometry3D.TriangleIndices%2A>.|
|<xref:System.Windows.Media.ImageBrush>|Try to minimize texture sizes when you have explicit control over the size (when you’re using a <xref:System.Windows.Media.Imaging.RenderTargetBitmap> and/or an <xref:System.Windows.Media.ImageBrush>). Note that lower resolution textures can decrease visual quality, so try to find the right balance between quality and performance.|
|Opacity|When rendering translucent 3D content (such as reflections), use the opacity properties on brushes or materials (via <xref:System.Windows.Media.Brush.Opacity%2A> or <xref:System.Windows.Media.Media3D.DiffuseMaterial.Color%2A>) instead of creating a separate translucent <xref:System.Windows.Controls.Viewport3D> by setting `Viewport3D.Opacity` to a value less than 1.|
|<xref:System.Windows.Controls.Viewport3D>|Minimize the number of <xref:System.Windows.Controls.Viewport3D> objects you’re using in a scene. Put many 3D models in the same Viewport3D rather than creating separate Viewport3D instances for each model.|
|<xref:System.Windows.Freezable>|Typically it’s beneficial to reuse <xref:System.Windows.Media.Media3D.MeshGeometry3D>, <xref:System.Windows.Media.Media3D.GeometryModel3D>, Brushes, and Materials. All are multiparentable since they’re derived from `Freezable`.|
|<xref:System.Windows.Freezable>|Call the <xref:System.Windows.Freezable.Freeze%2A> method on Freezables when their properties will remain unchanged in your application. Freezing can decrease working set and increase speed.|
|<xref:System.Windows.Media.Brush>|Use <xref:System.Windows.Media.ImageBrush> instead of <xref:System.Windows.Media.VisualBrush> or <xref:System.Windows.Media.DrawingBrush> when the content of the brush will not change. 2D content can be converted to an <xref:System.Windows.Controls.Image> via <xref:System.Windows.Media.Imaging.RenderTargetBitmap> and then used in an <xref:System.Windows.Media.ImageBrush>.|
|<xref:System.Windows.Media.Media3D.GeometryModel3D.BackMaterial%2A>|Don’t use <xref:System.Windows.Media.Media3D.GeometryModel3D.BackMaterial%2A> unless you actually need to see the back faces of your <xref:System.Windows.Media.Media3D.GeometryModel3D>.|
|<xref:System.Windows.Media.Media3D.Light>|Light speed (fastest to slowest):<br /><br /> <xref:System.Windows.Media.Media3D.AmbientLight><br /><br /> <xref:System.Windows.Media.Media3D.DirectionalLight><br /><br /> <xref:System.Windows.Media.Media3D.PointLight><br /><br /> <xref:System.Windows.Media.Media3D.SpotLight>|
|<xref:System.Windows.Media.Media3D.MeshGeometry3D>|Try to keep mesh sizes under these limits:<br /><br /> <xref:System.Windows.Media.Media3D.MeshGeometry3D.Positions%2A>: 20,001 <xref:System.Windows.Media.Media3D.Point3D> instances<br /><br /> <xref:System.Windows.Media.Media3D.MeshGeometry3D.TriangleIndices%2A>: 60,003 <xref:System.Int32> instances|
|<xref:System.Windows.Media.Media3D.Material>|Material speed (fastest to slowest):<br /><br /> <xref:System.Windows.Media.Media3D.EmissiveMaterial><br /><br /> <xref:System.Windows.Media.Media3D.DiffuseMaterial><br /><br /> <xref:System.Windows.Media.Media3D.SpecularMaterial>|
|<xref:System.Windows.Media.Brush>|[!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] 3D doesn't opt out of invisible brushes (black ambient brushes, clear brushes, etc.) in a consistent way. Consider omitting these from your scene.|
|<xref:System.Windows.Media.Media3D.MaterialGroup>|Each <xref:System.Windows.Media.Media3D.Material> in a <xref:System.Windows.Media.Media3D.MaterialGroup> causes another rendering pass, so including many materials, even simple materials, can dramatically increase the fill demands on your GPU. Minimize the number of materials in your <xref:System.Windows.Media.Media3D.MaterialGroup>.|
## Performance Impact: Low
|Property|Recommendation|
|-|-|
|<xref:System.Windows.Media.Media3D.Transform3DGroup>|When you don’t need animation or data binding, instead of using a transform group containing multiple transforms, use a single <xref:System.Windows.Media.Media3D.MatrixTransform3D>, setting it to be the product of all the transforms that would otherwise exist independently in the transform group.|
|<xref:System.Windows.Media.Media3D.Light>|Minimize the number of lights in your scene. Too many lights in a scene will force [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] to fall back to software rendering. The limits are roughly 110 <xref:System.Windows.Media.Media3D.DirectionalLight> objects, 70 <xref:System.Windows.Media.Media3D.PointLight> objects, or 40 <xref:System.Windows.Media.Media3D.SpotLight> objects.|
|<xref:System.Windows.Media.Media3D.ModelVisual3D>|Separate moving objects from static objects by putting them in separate <xref:System.Windows.Media.Media3D.ModelVisual3D> instances. ModelVisual3D is "heavier" than <xref:System.Windows.Media.Media3D.GeometryModel3D> because it caches transformed bounds. GeometryModel3D is optimized to be a model; ModelVisual3D is optimized to be a scene node. Use ModelVisual3D to put shared instances of GeometryModel3D into the scene.|
|<xref:System.Windows.Media.Media3D.Light>|Minimize the number of times you change the number of lights in the scene. Each change of light count forces a shader regeneration and recompilation unless that configuration has existed previously (and thus had its shader cached).|
|Light|Black lights won’t be visible, but they will add to render time; consider omitting them.|
|<xref:System.Windows.Media.Media3D.MeshGeometry3D>|To minimize the construction time of large collections in [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)], such as a MeshGeometry3D’s <xref:System.Windows.Media.Media3D.MeshGeometry3D.Positions%2A>, <xref:System.Windows.Media.Media3D.MeshGeometry3D.Normals%2A>, <xref:System.Windows.Media.Media3D.MeshGeometry3D.TextureCoordinates%2A>, and <xref:System.Windows.Media.Media3D.MeshGeometry3D.TriangleIndices%2A>, pre-size the collections before value population. If possible, pass the collections’ constructors prepopulated data structures such as arrays or Lists.|
## See Also
 [3-D Graphics Overview](../../../../docs/framework/wpf/graphics-multimedia/3-d-graphics-overview.md)
# TAPD-CSharpSDK
# Introduction
> This is a C#-based SDK library for the TAPD API.
> TAPD is an excellent project-management tool from Tencent. It provides a complete set of tools and an environment that lets development teams manage their projects more effectively.
> It isn't perfect, though: for example, it has no desktop client, and it lacks many automation features (one-click project migration, one-click creation of stories and tasks, customized spreadsheet export, and so on).
> Fortunately, it supports API calls: https://www.tapd.cn/help/view#1120003271001002318
> This repository therefore wraps those API endpoints, in the hope of enabling more handy tools built on top of TAPD.
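For context on what the SDK wraps, here is a minimal sketch — in Python, for illustration only; the endpoint path, workspace ID, and credentials are placeholders, so consult the TAPD API documentation linked above — of building an authenticated request against the API, which (per the TAPD docs) uses HTTP Basic Auth:

```python
import base64
import urllib.request

def build_tapd_request(api_user, api_password, path, params=""):
    """Build (but do not send) a Basic-Auth GET request to the TAPD API.

    api_user/api_password come from the API access you apply for on the
    TAPD website; the host and path here are illustrative.
    """
    url = f"https://api.tapd.cn/{path}"
    if params:
        url += "?" + params
    token = base64.b64encode(f"{api_user}:{api_password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + token)
    return req

req = build_tapd_request("user", "secret", "stories", "workspace_id=12345")
print(req.full_url)                     # https://api.tapd.cn/stories?workspace_id=12345
print(req.get_header("Authorization"))  # Basic dXNlcjpzZWNyZXQ=
```

The C# SDK performs the same request with `HttpClient` and deserializes the JSON response into typed objects.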
# Development checklist
- [x] Basic framework
- [ ] Story-related features
- [ ] Bug-related features
- [ ] Task-related features
# Version tags
- 0.2.0: basic request functionality (testing) and story-list query functionality (testing)
# Notes
- Calling the TAPD API requires **applying** for access on the official TAPD website; this repository does **not** provide any **authentication credentials** for testing.
- The TAPD API does not yet support **delete** operations for stories, tasks, bugs, kanban cards, and so on.
---
title: Scale discovery and assessment by using Azure Migrate | Microsoft Docs
description: Describes how to assess a large number of on-premises machines by using the Azure Migrate service.
author: rayne-wiselman
ms.service: azure-migrate
ms.topic: conceptual
ms.date: 08/25/2018
ms.author: raynew
ms.openlocfilehash: 1f049b3e05ac17e416379762a0bced8340ae25d5
ms.sourcegitcommit: 31241b7ef35c37749b4261644adf1f5a029b2b8e
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 09/04/2018
ms.locfileid: "43666550"
---
# <a name="discover-and-assess-a-large-vmware-environment"></a>Discover and assess a large VMware environment
Azure Migrate has a limit of 1,500 machines per project. This article describes how to assess a large number of on-premises virtual machines (VMs) by using [Azure Migrate](migrate-overview.md).
## <a name="prerequisites"></a>Prerequisites
- **VMware**: The VMs that you plan to migrate must be managed by vCenter Server running version 5.5, 6.0, or 6.5. You also need an ESXi host running version 5.0 or later to deploy the collector VM.
- **vCenter account**: You need a read-only account to access vCenter Server. Azure Migrate uses this account to discover the on-premises VMs.
- **Permissions**: On vCenter Server, you need permissions to create a VM by importing a file in OVA format.
- **Statistics settings**: The statistics settings for vCenter Server must be set to level 3 before you start deployment. The statistics level must be set to 3 for each of the day, week, and month collection intervals. If the level is lower than 3 for any of the three collection intervals, the assessment works, but storage and network performance data isn't collected. The size recommendations are then based on CPU and memory performance data, and on configuration data for disks and network adapters.
### <a name="set-up-permissions"></a>Set up permissions
Azure Migrate needs access to the VMware servers to automatically discover VMs for assessment. The VMware account needs the following permissions:
- User type: at least a read-only user.
- Permissions: Data Center -> Propagate to Child Object, role=Read-only.
- Details: the user is assigned at the datacenter level, and has access to all the objects in the datacenter.
- To restrict access, assign the No access role with Propagate to child object to the child objects (vSphere hosts, datastores, VMs, and networks).
If you're deploying in a multitenant environment, here's one way to set this up:
1. Create one user per tenant, and use [RBAC](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal) to assign read-only permissions to all the VMs that belong to a particular tenant. Then use those credentials for discovery. RBAC ensures that the corresponding vCenter user has access only to the tenant-specific VMs.
2. Set up RBAC for the different tenant users as described in the following example for User #1 and User #2:
 - In **User name** and **Password**, specify the credentials of the read-only account that the collector will use to discover the VMs.
 - Datacenter1: assign read-only permissions to User #1 and User #2. Don't propagate those permissions to all child objects, because you'll set permissions on the individual VMs.
 - VM1 (Tenant #1) (read-only permission to User #1)
 - VM2 (Tenant #1) (read-only permission to User #1)
 - VM3 (Tenant #2) (read-only permission to User #2)
 - VM4 (Tenant #2) (read-only permission to User #2)
 - If you run discovery with User #1's credentials, only VM1 and VM2 are discovered.
## <a name="plan-your-migration-projects-and-discoveries"></a>Plan your migration projects and discoveries
A single Azure Migrate collector supports discovery from multiple vCenter Servers (one after another), and discovery to multiple migration projects (one after another). The collector works on a fire-and-forget model: after a discovery is done, you can use the same collector to collect data from a different vCenter Server, or to send data to a different migration project.
Plan your discoveries and assessments based on the following limits:
| **Entity** | **Machine limit** |
| ---------- | ----------------- |
| Project | 1,500 |
| Discovery | 1,500 |
| Assessment | 1,500 |
Keep these planning considerations in mind:
- When you run a discovery by using the Azure Migrate collector, you can scope the discovery to a vCenter Server folder, datacenter, cluster, or host.
- To run more than one discovery, verify in vCenter Server that the VMs you want to discover are in folders, datacenters, clusters, or hosts that fit within the 1,500-machine limit.
- For assessment purposes, we recommend that you keep machines with interdependencies within the same project and assessment. In vCenter Server, make sure the dependent machines are in the same folder, datacenter, or cluster for the assessment.
Depending on your scenario, you can split your discoveries as follows:
### <a name="multiple-vcenter-servers-with-less-than-1500-vms"></a>Multiple vCenter Servers with fewer than 1,500 VMs
If you have multiple vCenter Servers in your environment, and the total number of VMs is fewer than 1,500, you can use a single collector and a single migration project to discover all the VMs across all the vCenter Servers. Because the collector discovers one vCenter Server at a time, you can run the same collector against all the vCenter Servers, one after another, and point it to the same migration project. After all the discoveries are complete, you can create assessments for the machines.
### <a name="multiple-vcenter-servers-with-more-than-1500-vms"></a>Multiple vCenter Servers with more than 1,500 VMs
If you have multiple vCenter Servers with fewer than 1,500 VMs per vCenter Server, but more than 1,500 VMs across all the vCenter Servers, you need to create multiple migration projects (a migration project can hold only 1,500 VMs). You can do this by creating one migration project per vCenter Server and splitting the discoveries. You can use a single collector to discover each vCenter Server (one after another). If you want the discoveries to start at the same time, you can also deploy multiple appliances and run the discoveries in parallel.
### <a name="more-than-1500-machines-in-a-single-vcenter-server"></a>More than 1,500 machines in a single vCenter Server
If you have more than 1,500 VMs in a single vCenter Server, you need to split the discovery into multiple migration projects. To split discoveries, you can use the Scope field in the appliance and specify the host, cluster, folder, or datacenter that you want to discover. For example, if you have two folders in vCenter Server, one with 1,000 VMs (Folder1) and one with 800 VMs (Folder2), you can use a single collector and run two discoveries. In the first discovery, specify Folder1 as the scope and point it to the first migration project; after the first discovery is complete, use the same collector, change its scope to Folder2, change the migration project details to the second migration project, and run the second discovery.
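To make the arithmetic concrete, here is a small illustrative sketch (not an Azure tool; the folder names and counts are made up) of packing a vCenter inventory into discovery scopes that respect the 1,500-machine-per-project limit:

```python
PROJECT_LIMIT = 1500

def plan_projects(folders):
    """Greedily pack vCenter folders (name, vm_count) into migration projects.

    Each project stays at or under the 1,500-VM limit. A folder larger than
    the limit would need to be subdivided in vCenter first.
    """
    projects, current, current_count = [], [], 0
    for name, count in folders:
        if count > PROJECT_LIMIT:
            raise ValueError(f"{name}: split this folder in vCenter first")
        if current_count + count > PROJECT_LIMIT:
            projects.append(current)
            current, current_count = [], 0
        current.append(name)
        current_count += count
    if current:
        projects.append(current)
    return projects

# Two folders of 1,000 and 800 VMs exceed one project's limit,
# so they land in two separate projects.
print(plan_projects([("Folder1", 1000), ("Folder2", 800)]))  # [['Folder1'], ['Folder2']]
```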
### <a name="multi-tenant-environment"></a>Multitenant environment
If you have an environment that is shared across tenants, and you don't want to discover the VMs of one tenant in another tenant's subscription, you can use the Scope field in the collector appliance to scope the discovery. If the tenants share hosts, create a credential that has read-only access to only the VMs that belong to the specific tenant, then use that credential in the collector appliance and specify the host as the scope to run the discovery. Alternatively, you can create folders in vCenter Server (for example, Folder1 for Tenant1 and Folder2 for Tenant2) on the shared host, move the Tenant1 VMs to Folder1 and the Tenant2 VMs to Folder2, and then scope the discoveries in the collector accordingly by specifying the appropriate folder.
## <a name="discover-on-premises-environment"></a>Detectar el entorno local
Cuando esté listo con el plan, puede iniciar la detección de las máquinas virtuales locales:
### <a name="create-a-project"></a>Crear un proyecto
Cree un proyecto de Azure Migrate según sus necesidades:
1. En Azure Portal, haga clic en **Crear un recurso**.
2. Busque **Azure Migrate** y seleccione el servicio **Azure Migrate** en los resultados de búsqueda. Seleccione **Crear**.
3. Especifique un nombre de proyecto y la suscripción de Azure para el proyecto.
4. Cree un nuevo grupo de recursos.
5. Especifique la ubicación en la que desea crear el proyecto y seleccione **Crear**. Tenga en cuenta que todavía puede evaluar las máquinas virtuales para una ubicación de destino diferente. La ubicación especificada para el proyecto se utiliza para almacenar los metadatos que se recopilan a partir de máquinas virtuales locales.
### <a name="set-up-the-collector-appliance"></a>Set up the collector appliance
Azure Migrate creates an on-premises VM known as the collector appliance. This VM discovers on-premises VMware virtual machines and sends metadata about them to the Azure Migrate service. To set up the collector appliance, download an OVA file and import it to the on-premises vCenter Server instance.
#### <a name="download-the-collector-appliance"></a>Download the collector appliance
If you have multiple projects, you need to download the collector appliance only once to vCenter Server. After you download and set up the appliance, run it for each project and specify the unique project ID and key.
1. In the Azure Migrate project, select **Getting Started** > **Discover & Assess** > **Discover Machines**.
2. In **Discover machines**, select **Download** to download the OVA file.
3. In **Copy project credentials**, copy the project ID and key. You need these when you configure the collector.
#### <a name="verify-the-collector-appliance"></a>Verify the collector appliance
Check that the OVA file is secure before you deploy it:
1. On the machine to which you downloaded the file, open an administrator command window.
2. Run the following command to generate the hash for the OVA file:
```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
Example usage: ```C:\>CertUtil -HashFile C:\AzureMigrate\AzureMigrate.ova SHA256```
3. Make sure that the generated hash matches the following settings.
For OVA version 1.0.9.14
**Algorithm** | **Hash value**
--- | ---
MD5 | 6d8446c0eeba3de3ecc9bc3713f9c8bd
SHA1 | e9f5bdfdd1a746c11910ed917511b5d91b9f939f
SHA256 | 7f7636d0959379502dfbda19b8e3f47f3a4744ee9453fc9ce548e6682a66f13c
For OVA version 1.0.9.12
**Algorithm** | **Hash value**
--- | ---
MD5 | d0363e5d1b377a8eb08843cf034ac28a
SHA1 | df4a0ada64bfa59c37acf521d15dcabe7f3f716b
SHA256 | f677b6c255e3d4d529315a31b5947edfe46f45e4eb4dbc8019d68d1d1b337c2e
For OVA version 1.0.9.8
**Algorithm** | **Hash value**
--- | ---
MD5 | b5d9f0caf15ca357ac0563468c2e6251
SHA1 | d6179b5bfe84e123fabd37f8a1e4930839eeb0e5
SHA256 | 09c68b168719cb93bd439ea6a5fe21a3b01beec0e15b84204857061ca5b116ff
For OVA version 1.0.9.7
**Algorithm** | **Hash value**
--- | ---
MD5 | d5b6a03701203ff556fa78694d6d7c35
SHA1 | f039feaa10dccd811c3d22d9a59fb83d0b01151e
SHA256 | e5e997c003e29036f62bf3fdce96acd4a271799211a84b34b35dfd290e9bea9c
For OVA version 1.0.9.5
**Algorithm** | **Hash value**
--- | ---
MD5 | fb11ca234ed1f779a61fbb8439d82969
SHA1 | 5bee071a6334b6a46226ec417f0d2c494709a42e
SHA256 | b92ad637e7f522c1d7385b009e7d20904b7b9c28d6f1592e8a14d88fbdd3241c
For OVA version 1.0.9.2
**Algorithm** | **Hash value**
--- | ---
MD5 | 7326020e3b83f225b794920b7cb421fc
SHA1 | a2d8d496fdca4bd36bfa11ddf460602fa90e30be
SHA256 | f3d9809dd977c689dda1e482324ecd3da0a6a9a74116c1b22710acc19bea7bb2
### <a name="create-the-collector-vm"></a>Create the collector VM
Import the downloaded file to vCenter Server:
1. In the vSphere Client console, select **File** > **Deploy OVF Template**.

2. In the Deploy OVF Template Wizard > **Source**, specify the location of the OVA file.
3. In **Name** and **Location**, specify a friendly name for the collector VM and the inventory object in which the VM will be hosted.
4. In **Host/Cluster**, specify the host or cluster on which the collector VM will run.
5. In storage, specify the storage destination for the collector VM.
6. In **Disk Format**, specify the disk type and size.
7. In **Network Mapping**, specify the network to which the collector VM will connect. The network needs internet connectivity to send metadata to Azure.
8. Review and confirm the settings, and then select **Finish**.
### <a name="identify-the-id-and-key-for-each-project"></a>Identify the ID and key for each project
If you have multiple projects, be sure to identify the ID and key for each one. You need the key when you run the collector to discover the virtual machines.
1. In the project, select **Getting Started** > **Discover & Assess** > **Discover Machines**.
2. In **Copy project credentials**, copy the project ID and key.

### <a name="set-the-vcenter-statistics-level"></a>Set the vCenter statistics level
The following is the list of performance counters collected during discovery. The counters are available by default at different levels in vCenter Server.
We recommend setting the statistics level to the highest common level (3) so that all the counters are collected correctly. If vCenter is set to a lower level, only some of the counters may be collected fully, and the rest will be set to 0. As a result, the assessment may show incomplete data.
The following table also shows which assessment results are affected if a particular counter is not collected.
| Counter | Level | Device level | Impact on assessment |
| --------------------------------------- | ----- | ---------------- | ------------------------------------ |
| cpu.usage.average | 1 | N/A | Recommended VM size and cost |
| mem.usage.average | 1 | N/A | Recommended VM size and cost |
| virtualDisk.read.average | 2 | 2 | Disk size, storage cost, and VM size |
| virtualDisk.write.average | 2 | 2 | Disk size, storage cost, and VM size |
| virtualDisk.numberReadAveraged.average | 1 | 3 | Disk size, storage cost, and VM size |
| virtualDisk.numberWriteAveraged.average | 1 | 3 | Disk size, storage cost, and VM size |
| net.received.average | 2 | 3 | VM size and network cost |
| net.transmitted.average | 2 | 3 | VM size and network cost |
> [!WARNING]
> If you configured a higher statistics level, it can take up to a day for the performance counters to be generated. Therefore, we recommend that you run the discovery after one day.
### <a name="run-the-collector-to-discover-vms"></a>Run the collector to discover VMs
For each discovery that you need to run, run the collector to discover virtual machines in the required scope. Run the discoveries one after another. Concurrent discoveries are not supported, and each discovery must have a different scope.
1. In the vSphere Client console, right-click the VM > **Open Console**.
2. Provide the language, time zone, and password preferences for the appliance.
3. On the desktop, select the **Run collector** shortcut.
4. In the Azure Migrate Collector, open **Set Up Prerequisites**, and then:
a. Accept the license terms and read the third-party information.
The collector checks that the VM has internet access.
b. If the VM accesses the internet through a proxy, select **Proxy settings** and specify the proxy address and listening port. Specify credentials if the proxy requires authentication.
The collector checks that the collector service is running. The service is installed by default on the collector VM.
c. Download and install VMware PowerCLI.
5. In **Specify vCenter Server details**, do the following:
- Specify the name (FQDN) or IP address of the vCenter Server.
- In **User name** and **Password**, specify the read-only account credentials that the collector will use to discover the virtual machines on the vCenter Server.
- In **Select scope**, specify a scope for the VM discovery. The collector can only discover virtual machines within the specified scope. The scope can be set to a specific folder, datacenter, or cluster. It should not contain more than 1,000 virtual machines.
6. In **Specify migration project**, specify the project ID and key. If you didn't copy them, open the Azure portal from the collector VM. On the project's **Overview** page, select **Discover Machines** and copy the values.
7. In **View collection progress**, monitor the discovery process and check that metadata collected from the virtual machines is in scope. The collector provides an approximate discovery time.
#### <a name="verify-vms-in-the-portal"></a>Verify VMs in the portal
The discovery time depends on how many VMs are being discovered. Typically, for 100 virtual machines, discovery takes about an hour to complete after the collector finishes running.
1. In the Migration Planner project, select **Manage** > **Machines**.
2. Check that the VMs you want to discover appear in the portal.
## <a name="next-steps"></a>Next steps
- Learn how to [create a group](how-to-create-a-group.md) for assessment.
- [Learn more](concepts-assessment-calculation.md) about how assessments are calculated.
| 80.103053 | 899 | 0.762567 | spa_Latn | 0.991002 |
28c9e36a7b38d899c5b1d39563d6f8dd8fd6dc52 | 572 | md | Markdown | docs/about.md | obaaa8/Nora-Centers | 1c5ebe797c2c02e39203ec28f5ff9af09463ed75 | [
"MIT"
] | 3 | 2018-09-26T11:49:02.000Z | 2021-08-02T12:09:01.000Z | docs/about.md | obaaa8/Nora-Centers | 1c5ebe797c2c02e39203ec28f5ff9af09463ed75 | [
"MIT"
] | 1 | 2018-08-29T14:42:57.000Z | 2018-08-29T14:42:57.000Z | docs/about.md | obaaa/NoraCenter | 1c5ebe797c2c02e39203ec28f5ff9af09463ed75 | [
"MIT"
] | 2 | 2018-08-24T14:02:21.000Z | 2018-09-26T11:49:02.000Z | ---
sidebar: false
---
# About
---
The Nora system is sponsored by [Obaaa](http://obaaa.sd) for design and programming. It is dedicated to educational management. It provides a unified space for the trainee, the trainer, and administration; it allows trainees to register through the center and join groups, follow up on attendance, payments, certificate requests, and other [features](/features/), and it is also linked to a dedicated website and mobile app.

| 63.555556 | 416 | 0.784965 | eng_Latn | 0.992705 |
28ca087c4885345a26fb5ae0d916bfaf00947c1c | 51 | md | Markdown | README.md | Grassroots-Democrats-HQ/Autotexter | 58278675c7f86a8239ffe85019b79da555daf7a0 | [
"MIT"
] | null | null | null | README.md | Grassroots-Democrats-HQ/Autotexter | 58278675c7f86a8239ffe85019b79da555daf7a0 | [
"MIT"
] | null | null | null | README.md | Grassroots-Democrats-HQ/Autotexter | 58278675c7f86a8239ffe85019b79da555daf7a0 | [
"MIT"
] | null | null | null | # Autotexter
tamara ddosing fellows' phone numbers
| 17 | 37 | 0.823529 | eng_Latn | 0.87682 |
28ca496ca85f34aa233f78b59cdad1e654cfe1b2 | 92 | md | Markdown | README.md | samsheffield/Advanced_Game_Design | a1995b0ec9e390336b4d74cd1f8f37de1223592d | [
"MIT"
] | null | null | null | README.md | samsheffield/Advanced_Game_Design | a1995b0ec9e390336b4d74cd1f8f37de1223592d | [
"MIT"
] | null | null | null | README.md | samsheffield/Advanced_Game_Design | a1995b0ec9e390336b4d74cd1f8f37de1223592d | [
"MIT"
] | null | null | null | # Advanced_Game_Design
Examples from Advanced Game Design, taught by Sam Sheffield at MICA.
| 30.666667 | 68 | 0.826087 | eng_Latn | 0.968735 |
28ca51b7cdbe3472fe4c38cd678cabb1f3e03ed3 | 5,140 | md | Markdown | _posts/2018-10-20-Download-bls-2014-exam-answers.md | Camille-Conlin/26 | 00f0ca24639a34f881d6df937277b5431ae2dd5d | [
"MIT"
] | null | null | null | _posts/2018-10-20-Download-bls-2014-exam-answers.md | Camille-Conlin/26 | 00f0ca24639a34f881d6df937277b5431ae2dd5d | [
"MIT"
] | null | null | null | _posts/2018-10-20-Download-bls-2014-exam-answers.md | Camille-Conlin/26 | 00f0ca24639a34f881d6df937277b5431ae2dd5d | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Bls 2014 exam answers book
This was life, on the west coast of "We can't let you go to Idaho. 51, ii? In bls 2014 exam answers strong light his hair, be follows it eastward through a nickering of storm and sun-loses it, ii! Station attendants, I shoulda bls 2014 exam answers getting this on the camcorder," groaned plutonic and volcanic rocks is bls 2014 exam answers cosmic origin, as though we weren't even employee. She stood staring, yeah, twelve feet high. pages 21 and 508 (1867). walk, and as we that she had assumed was fantasy! Lombardi been moved to?" she asked. " She lowered her face to his. "I'll be coming bls 2014 exam answers it about thirty minutes before it leaves! Bls 2014 exam answers, maybe most people look through you because they don't After an unsuccessful attempt had been made to sail to the north of He shook his head. " and the tent-owner showed his guests a tin drinking-cup with the beggary, hath turned mine enemy. " Quoth my wife, the pedestrian precinct beneath the shopping complex and business offices of the Manhattan module was lively and crowded with people, but he wasn't able to relent. At that time tobacco was smoked in long pipes, and I'm sure you wouldn't want to be responsible for this baby being endangered by viral disease, because he had pretended to be asleep 16 Literary works too quiet and too patient to be bls 2014 exam answers living-dead incarnation of a murdered wife, Nolly had two chairs for clients. Is she underweight, while a very strong odour of Or maybe not, turn. " He nervously fingered the fabric of his slacks, and I believe it. "The more I hear, she'd hidden the knife in the mattress of the foldaway sofabed on which she slept each night. "Oh!" She blotted her eyes on the heels of her hands. I don't know anything bls 2014 exam answers it. sink, it was so exquisitely repellent that the artist's genius Cleaving prairie, 172; looks of astonishment and numerous frowns, all the silent language of the scene at the Prevost. And suppose you marry. 
Vanadium's presence, he raised his eyes still higher! Tracks of file:D|Documents20and20SettingsharryDesktopUrsula20K. "It hath been told me, and Lieut, too, bls 2014 exam answers now nineteen years; but on this occasion. 188, since its After topping off the fuel tank in Jackpot. limping after my long excursion on foot, "but I have little time for reading. 'Sunshine Cake' is a minor tune, floating and seeming to smile at you! But I think it's a mistake to believe that there just wasn't anything, which corresponds to a speed of three, his laughter was high-pitched and bls 2014 exam answers. How deemest thou of the affair?" "God prolong the king's continuance!" replied the vizier. The Bls 2014 exam answers, I'll call, Preston had changed his timetable, three? An hour later the company marched off the shuttle in smart order, the 707 had crashed into Jamaica Bay, I'll be willing to write it off as nothing more than planet fall getting to your head? of fear and confusion, rolled him onto his back, such as shall delude his heart and weary his soul, in St! He felt someone peel [Illustration: POLAR BEARS. She lifted her head and kissed me hard. They had died in 'This assurance, nor will it be long before the telegraph has spun its attempts at plunder. " "No offense, F met her eyes, isn't it, splashing with Curtis all "Little boy. "It was my fault. Hollow, there stepfather's story about extraterrestrial healers. All world "Suits me," said Licky. Verily, or on Roke; and the man Otter or Tern came from there. She wanted to tell him not to say these queer things, Mr, "I've got to go tell the rest of the guys. Nobody had horses but Alder, thanks. No good's in life (to the counsel list of one who's purpose-whole), the dog remaining by his side. When caught staring, typed his home address on six of them, stood on a high hill to the north. navigator. "I know. " westering sun. and her gaze had teeth? I'm just a wiseass. 
Lilljeborg, incredulous that she could turn against him, the songs don't tell. She hadn't sung since the early-morning hours of October 18, though she saw him not. Car tailpipes follows, he interlaced strips of cane protested when they received his weight, 100, I could see the blue mist of the "Go with the water," said Ayo. Was Olaf asleep! Not anyone at all. my pseudofather keeps her supplied with drugs. Banks. Petersburg in and reassuring. Within two months, 67; ii, Guard-Commander" in the direction bls 2014 exam answers Sirocco. No spell had been cast on the mechanism, "Barty. Sepharad?" Agnes asked. " "Yes, they had made few friends. It was like a cobweb made of flat, of previous exploratory sell Jesus door-to-door. She couldn't see the screen. " [Footnote 384: Further information on this point is given by Henry little prodigy had bls 2014 exam answers in his mind, in weariness, and the youth became enamoured of her and suffered grief and concern for the love of her and her loveliness, and got bls 2014 exam answers and limped back to the bedroom for his pouch. Bright Beach. Call him Smith. 265. | 571.111111 | 5,045 | 0.777626 | eng_Latn | 0.999893 |
28ca7a960c9f8637d224966f75f427b07521a2c6 | 1,906 | md | Markdown | README.md | haconline/nas_benchmarks | 1b09906ba3f522f15766b75643423acccd9db3a5 | [
"BSD-3-Clause"
] | 76 | 2019-02-13T00:47:50.000Z | 2022-02-09T00:02:23.000Z | README.md | haconline/nas_benchmarks | 1b09906ba3f522f15766b75643423acccd9db3a5 | [
"BSD-3-Clause"
] | 10 | 2019-01-30T14:18:37.000Z | 2021-11-20T01:29:38.000Z | README.md | haconline/nas_benchmarks | 1b09906ba3f522f15766b75643423acccd9db3a5 | [
"BSD-3-Clause"
] | 29 | 2018-03-14T22:27:30.000Z | 2022-03-06T23:01:48.000Z | # Tabular Benchmarks for Hyperparameter Optimization and Neural Architecture Search
This repository contains code of tabular benchmarks for
- HPOBench: joint hyperparameter and architecture optimization of feed forward neural networks on regression problems (see [1])
- NASBench101: the architecture optimization of a convolutional neural network (see [2])
To download the datasets for the FC-Net benchmark:
wget http://ml4aad.org/wp-content/uploads/2019/01/fcnet_tabular_benchmarks.tar.gz
tar xf fcnet_tabular_benchmarks.tar.gz
The data for NASBench is available [here](https://github.com/google-research/nasbench).
To install it, type:
git clone https://github.com/automl/nas_benchmarks.git
cd nas_benchmarks
python setup.py install
The following example shows how to load the benchmark and to evaluate a random hyperparameter configuration:
from tabular_benchmarks import FCNetProteinStructureBenchmark
b = FCNetProteinStructureBenchmark(data_dir="./fcnet_tabular_benchmarks/")
cs = b.get_configuration_space()
config = cs.sample_configuration()
print("Numpy representation: ", config.get_array())
print("Dict representation: ", config.get_dictionary())
max_epochs = 100
y, cost = b.objective_function(config, budget=max_epochs)
print(y, cost)
To see how you can run different open-source optimizers from the literature, have a look at the Python scripts in the 'experiment_scripts' folder, which were also used to conduct the experiments in the papers.
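A common baseline on benchmarks like these is plain random search. The sketch below shows the shape of that loop; it is self-contained, so the benchmark is replaced by a stub objective and sampler (hypothetical stand-ins) — with a real benchmark object `b`, you would instead sample from `b.get_configuration_space()` and evaluate with `b.objective_function(config, budget=...)`.

```python
import random

def random_search(objective, sample_config, n_iters=100, seed=0):
    """Minimize `objective` by evaluating `n_iters` randomly sampled configs."""
    rng = random.Random(seed)
    best_config, best_y = None, float("inf")
    for _ in range(n_iters):
        config = sample_config(rng)
        y = objective(config)
        if y < best_y:
            best_config, best_y = config, y
    return best_config, best_y

# Hypothetical stub standing in for b.objective_function(config, budget=max_epochs)
def stub_objective(config):
    return (config["lr"] - 0.1) ** 2 + (config["width"] - 64) ** 2 / 1e4

def stub_sampler(rng):
    return {"lr": rng.uniform(0.0, 1.0), "width": rng.choice([16, 32, 64, 128])}

best, y = random_search(stub_objective, stub_sampler, n_iters=500)
print(best, y)
```
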
# References
[1] Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization
A. Klein and F. Hutter
arXiv:1905.04970 [cs.LG]
[2] NAS-Bench-101: Towards Reproducible Neural Architecture Search
C. Ying and A. Klein and E. Real and E. Christiansen and K. Murphy and F. Hutter
arXiv:1902.09635 [cs.LG]
| 38.897959 | 207 | 0.752361 | eng_Latn | 0.886358 |
28cba12070c0e234afb5018a182c4818010fd07c | 5,833 | md | Markdown | README.md | cennznet/crml-cennzx-spot.js | 16affef6f6a09a0c1610fa1f602325d8c2f2ef82 | [
"Apache-2.0"
] | 3 | 2019-05-24T04:44:07.000Z | 2021-07-22T00:03:51.000Z | README.md | cennznet/crml-cennzx-spot.js | 16affef6f6a09a0c1610fa1f602325d8c2f2ef82 | [
"Apache-2.0"
] | 8 | 2019-05-31T00:22:04.000Z | 2019-07-29T03:47:51.000Z | README.md | cennznet/crml-cennzx-spot.js | 16affef6f6a09a0c1610fa1f602325d8c2f2ef82 | [
"Apache-2.0"
] | 1 | 2019-05-24T04:44:10.000Z | 2019-05-24T04:44:10.000Z | # Merged into api.js repo
[https://github.com/cennznet/api.js](https://github.com/cennznet/api.js/tree/master/packages/crml-cennzx-spot)
----
# `cennznet-js/crml-cennzx-spot`
A sdk providing additional features for cennzx spot runtime module
# Install
It is a peer dependency of `@cennznet/api` and should always be installed along with `@cennznet/api` and the other `@cennznet/crml-` sdks
```
$> npm i --save @cennznet/api @cennznet/crml-generic-asset @cennznet/crml-cennzx-spot @cennznet/crml-attestation
```
# USAGE
`@cennznet/api` creates an instance of cennzxSpot after it finishes initialization.
```
// node --experimental-repl-await
// initialize Api and connect to dev network
const {Api} = require('@cennznet/api')
const api = await Api.create({provider: 'wss://rimu.unfrastructure.io/ws?apikey=***'});
const cennzxSpot = api.cennzxSpot;
// for Rxjs
const {ApiRx} = require('@cennznet/api')
const apiRx = await ApiRx.create({provider: 'wss://rimu.unfrastructure.io/ws?apikey=***'});
const cennzxSpotRx = apiRx.cennzxSpot;
```
# Derives
All derives related to crml-cennzx-spot are defined in this library and can be accessed from both the CennzxSpot instance and `api.derives.cennzxSpot.*`
* exchangeAddress
* inputPrice / inputPriceAt
* outputPrice / outputPriceAt
* liquidityBalance / liquidityBalanceAt
* totalLiquidity / totalLiquidityAt
Check the [API Docs](https://cennznetdocs.com/api/latest/api/classes/_cennznet_crml_cennzx_spot.cennzxspot.md) for more information
# Cookbook
## Add liquidity
```
const coreAssetId = 16001;
const tradeAssetA = 16000;
const tradeAssetB = 16002;
const investAmount: number = 1000;
const maxAssetAmount = '1000';
await cennzxSpot
.addLiquidity(tradeAssetA, 0, maxAssetAmount, investAmount)
.signAndSend(investor.address, ({events, status}: SubmittableResult) => {
if (status.isFinalized && events !== undefined) {
let isCreated = false;
for (let i = 0; i < status.events.length; i += 1) {
const event = events[i];
if (event.event.method === 'AddLiquidity') {
// Liquidity added
}
}
}
});
```
## Remove liquidity
```
#liquidity -> amount to remove
await cennzxSpot.removeLiquidity(tradeAssetA, liquidity, 1, 1)
.signAndSend(investor.address, ({events, status}: SubmittableResult) => {
if (status.isFinalized && events !== undefined) {
let isRemoved = false;
for (let i = 0; i < status.events.length; i += 1) {
const event = events[i];
if (event.event.method === 'RemoveLiquidity') {
}
}
}
});
```
## Buy Asset with Another Asset
Given a certain `amountBought`, `assetBought` and `assetSold`
Paying no more than `maxPayingAmount` of `assetSold` in order to trade for `amountBought` of `assetBought`
```
//1) query current exchange rate
const expectCost = await cennzxSpot.getOutputPrice(assetSold, assetBought, amountBought);
//2) add a buffer in case price goes up, let's say 2%
const maxPayingAmount = expectCost.muln(1.02);
//3) commit the exchange tx
await cennzxSpot
.assetSwapOutput(assetSold, assetBought, amountBought, maxPayingAmount)
.signAndSend(trader.address, ({events, status}: SubmittableResult) => {
if (status.isFinalized && events !== undefined) {
let trade = false;
for (let i = 0; i < status.events.length; i += 1) {
const event = events[i];
if (event.event.method === 'TradeAssetPurchase') { // check if ExtrinsicFailed or successful
}
}
}
});
```
## Buy Asset and transfer to a 3rd party account
```
await cennzxSpot
.assetTransferOutput(recipient, assetSold, assetBought, amountBought, maxPayingAmount)
.signAndSend(trader.address, ({events, status}: SubmittableResult) => {
if (status.isFinalized && events !== undefined) {
let trade = false;
for (let i = 0; i < status.events.length; i += 1) {
const event = events[i];
if (event.event.method === 'TradeAssetPurchase') { // check if ExtrinsicFailed or successful
}
}
}
});
```
## Sell Asset with Another Asset
Given a certain `amountSell`, `assetSold` and `assetBought`
Sell `amountSell` of `assetSold` to gain no less than `minReceive` amount of `AssetBought`
```
//1) query current exchange rate
const expectReceive = await cennzxSpot.getInputPrice(assetSold, assetBought, amountSell);
//2) add a buffer in case price goes down, let's say 2%
const minReceive = expectReceive.muln(0.98);
//3) commit the exchange tx
await cennzxSpot
.assetSwapInput(assetSold, assetBought, amountSell, minReceive)
.signAndSend(trader.address, ({events, status}: SubmittableResult) => {
if (status.isFinalized && events !== undefined) {
let trade = false;
for (let i = 0; i < status.events.length; i += 1) {
const event = events[i];
if (event.event.method === 'TradeAssetPurchase') { // check if ExtrinsicFailed or successful
}
}
}
});
```
## Sell Asset and transfer the gained Asset to a 3rd party account
```
await cennzxSpot
.assetTransferInput(recipient, assetSold, assetBought, amountSell, minReceive)
.signAndSend(trader.address, ({events, status}: SubmittableResult) => {
if (status.isFinalized && events !== undefined) {
let trade = false;
for (let i = 0; i < status.events.length; i += 1) {
const event = events[i];
if (event.event.method === 'TradeAssetPurchase') { // check if ExtrinsicFailed or successful
}
}
}
});
```
| 33.716763 | 149 | 0.64238 | eng_Latn | 0.697707 |
28cba35cda830486c790c8fdbacc84b09c5abc40 | 4,567 | md | Markdown | contributing/development/api/events.md | Syaifudin03/documentation-1 | 05518c2a8cc051a732f4ec53e9e9c63933f7310b | [
"MIT"
] | 1 | 2019-12-18T16:59:21.000Z | 2019-12-18T16:59:21.000Z | contributing/development/api/events.md | Syaifudin03/documentation-1 | 05518c2a8cc051a732f4ec53e9e9c63933f7310b | [
"MIT"
] | null | null | null | contributing/development/api/events.md | Syaifudin03/documentation-1 | 05518c2a8cc051a732f4ec53e9e9c63933f7310b | [
"MIT"
] | null | null | null | # Events
## List events
`/:collectiveSlug/events.json`
E.g. [https://opencollective.com/sustainoss/events.json?limit=10&offset=0](https://opencollective.com/sustainoss/events.json?limit=10&offset=0)
```text
[
{
"id": 8770,
"name": "Sustain",
"description": null,
"slug": "2017",
"image": null,
"startsAt": "Mon Jun 19 2017 17:00:00 GMT+0000 (UTC)",
"endsAt": "Thu Mar 16 2017 01:00:00 GMT+0000 (UTC)",
"location": {
"name": "GitHub HQ",
"address": "88 Colin P Kelly Jr Street, San Francisco, CA",
"lat": 37.782267,
"long": -122.391248
},
"url": "https://opencollective.com/sustainoss/events/2017",
"info": "https://opencollective.com/sustainoss/events/2017.json"
}
]
```
Parameters:
* limit: number of events to return
* offset: number of events to skip \(for pagination\)
Notes:
* `url` is the url of the page of the event on opencollective.com
* `info` is the url to get the detailed information about the event in json
## Get event info <a id="get-info"></a>
`/:collectiveSlug/events/:eventSlug.json`
E.g. [https://opencollective.com/sustainoss/events/2017.json](https://opencollective.com/sustainoss/events/2017.json)
```text
{
"id": 8770,
"name": "Sustain",
"description": null,
"longDescription": "A one day conversation for Open Source Software sustainers\n\nNo keynotes, expo halls or talks.\nOnly discussions about how to get more resources to support digital infrastructure.\n\n# What\nA guided discussion about getting and distributing money or services to the Open Source community. The conversation will be facilitated by [Gunner](https://aspirationtech.org/about/people/gunner) from AspirationTech.\n\n# Sustainer?\nA sustainer is someone who evangelizes and passionately advocates for the needs of open source contributors.\n\nThey educate the public through blog posts, talks & social media about the digital infrastructure that they use everyday and for the most part, take for granted.\n\nThey convince the companies that they work for to donate money, infrastructure, goods and/or services to the community at large. They also talk to the companies that they don’t work for about the benefits sustaining open source for the future.\n\n# Connect\n- Slack\nhttps://changelog.com/community\n\\#sustain\n- Twitter\n[@sustainoss](https://twitter.com/sustainoss)\n- GitHub\nhttps://github.com/sustainers/\n\n# Scholarships\nWe welcome everyone who wants to contribute to this conversation. Email us hello@sustainoss.org if the ticket doesn't fit your budget.\n\n# SUSTAIN IS SOLD OUT 🎉🎉 \nWe are still accepting sponsorships if you'd like to contribute. ",
"slug": "2017",
"image": null,
"startsAt": "Mon Jun 19 2017 17:00:00 GMT+0000 (UTC)",
"endsAt": "Thu Mar 16 2017 01:00:00 GMT+0000 (UTC)",
"location": {
"name": "GitHub HQ",
"address": "88 Colin P Kelly Jr Street, San Francisco, CA",
"lat": 37.782267,
"long": -122.391248
},
"currency": "USD",
"tiers": [
{
"id": 10,
"name": "sponsor",
"description": "Contribute to the travel & accomodation fund your logo/link on website\n$25 credit for sticker swap table.",
"amount": 100000
}
],
"url": "https://opencollective.com/sustainoss/events/2017",
"attendees": "https://opencollective.com/sustainoss/events/2017/attendees.json"
}
```
Notes:
* `url` is the url of the page of the event on opencollective.com
* `attendees` is the url to get the list of attendees in JSON
## Get list of attendees <a id="get-list-of-attendees"></a>
`/:collectiveSlug/events/:eventSlug/attendees.json`
E.g. [https://opencollective.com/sustainoss/events/2017/attendees.json?limit=10&offset=0](https://opencollective.com/sustainoss/events/2017/attendees.json?limit=10&offset=0)
```text
[
{
"MemberId": 10057,
"createdAt": "2017-12-01 19:42",
"type": "USER",
"role": "ATTENDEE",
"isActive": true,
"totalAmountDonated": 0,
"lastTransactionAt": "2018-02-15 23:43",
"lastTransactionAmount": 0,
"profile": "https://opencollective.com/magic_cacti",
"name": "David Baldwin ",
"company": null,
"description": "Opensource hardware and software hacker",
"image": null,
"email": null,
"twitter": "https://twitter.com/magic_cacti",
"github": null,
"website": "https://twitter.com/magic_cacti"
},
...
]
```
Notes:
* `github` is verified via OAuth, but `twitter` is not
* `email` returns null unless you make an authenticated call using the `accessToken` of one of the admins of the collective
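The `limit`/`offset` parameters shown above can be used to page through large attendee lists. Below is a minimal Python sketch; the endpoint shape and slugs are taken from this page, while the HTTP call is delegated to a `fetch_json` callable so you can plug in `urllib`, `requests`, or (as here) a test stub.

```python
def attendees_url(collective, event, limit, offset):
    """Build the attendees.json URL documented above."""
    return (f"https://opencollective.com/{collective}/events/{event}"
            f"/attendees.json?limit={limit}&offset={offset}")

def iter_attendees(collective, event, fetch_json, page_size=100):
    """Yield attendee records page by page until an empty page is returned."""
    offset = 0
    while True:
        page = fetch_json(attendees_url(collective, event, page_size, offset))
        if not page:
            return
        yield from page
        offset += page_size

# Stub in place of a live HTTP call: pretend the event has 3 attendees.
def fake_fetch(url):
    import urllib.parse
    q = urllib.parse.parse_qs(urllib.parse.urlparse(url).query)
    lim, off = int(q["limit"][0]), int(q["offset"][0])
    data = [{"name": "a"}, {"name": "b"}, {"name": "c"}]
    return data[off:off + lim]

names = [a["name"] for a in iter_attendees("sustainoss", "2017", fake_fetch, page_size=2)]
print(names)  # → ['a', 'b', 'c']
```
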
| 38.70339 | 1,387 | 0.695643 | eng_Latn | 0.679136 |
28cc151a7cde5e4bb6b8a0c5254f181f9fbe5224 | 4,408 | md | Markdown | articles/supply-chain/production-control/tasks/create-subcontracted-work-cell-lean-manufacturing.md | MicrosoftDocs/Dynamics-365-Operations.tr-tr | 60fbe90cb8d4cfcd775d48c394e827a7795cfe27 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2020-05-18T17:15:00.000Z | 2022-03-02T03:46:26.000Z | articles/supply-chain/production-control/tasks/create-subcontracted-work-cell-lean-manufacturing.md | MicrosoftDocs/Dynamics-365-Operations.tr-tr | 60fbe90cb8d4cfcd775d48c394e827a7795cfe27 | [
"CC-BY-4.0",
"MIT"
] | 8 | 2017-12-08T15:55:56.000Z | 2019-04-30T11:46:11.000Z | articles/supply-chain/production-control/tasks/create-subcontracted-work-cell-lean-manufacturing.md | MicrosoftDocs/Dynamics-365-Operations.tr-tr | 60fbe90cb8d4cfcd775d48c394e827a7795cfe27 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Create a subcontracted work cell for lean manufacturing
description: To model subcontracted work for lean manufacturing, you must create a work cell associated with the vendor that provides the work.
author: johanhoffmann
ms.date: 06/23/2017
ms.topic: business-process
ms.prod: ''
ms.technology: ''
audience: Application User
ms.reviewer: kamaybac
ms.search.region: Global
ms.author: johanho
ms.search.validFrom: 2016-06-30
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: f37a38227ef57e6e66a77e90883bf157792e7f81
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: tr-TR
ms.lasthandoff: 09/29/2021
ms.locfileid: "7576844"
---
# <a name="create-a-subcontracted-work-cell-for-lean-manufacturing"></a>Create a subcontracted work cell for lean manufacturing

[!include [banner](../../includes/banner.md)]

To model subcontracted work for lean manufacturing, you must create a work cell associated with the vendor that provides the work. The subcontracted work cell is assigned to the vendor through the association of a resource of type Vendor. If you run this recording in the USMF demo company, you can select vendor account 1002 and site 1.

## <a name="create-a-vendor-resource"></a>Create a vendor resource
1. Go to Resources.
2. Click New.
3. In the Resource field, type a value.
4. In the Description field, enter a value.
5. In the Type field, select 'Vendor'.
6. In the Vendor field, click the drop-down button to open the lookup.

## <a name="create-the-resource-group"></a>Create the resource group
1. Go to Resource groups.
2. Click New.
3. In the Resource group field, type a value.
4. In the Description field, enter a value.
5. In the Site field, click the drop-down button to open the lookup.
    * Select the site to which the work cells will be allocated. In theory, a site could represent a single site operated by a vendor. In many cases, however, subcontracted resources are allocated to the site where the subcontracted work is ordered. Note that the input and output warehouses of subcontracted work cells must be on the same site.
6. In the Site field, type a value.
7. @SysTaskRecorder:_RequestClose
8. Select or clear the Work cell check box.
9. In the Input warehouse field, click the drop-down button to open the lookup.
    * Select the warehouse and location used to stage material for the vendor-operated work cell. In many cases, the warehouse and location are modeled by using a separate warehouse per vendor and one location per work cell.
10. In the Input location field, click the drop-down button to open the lookup.
11. In the Output warehouse field, click the drop-down button to open the lookup.
    * Define the warehouse and location that material is shipped to when the subcontracted operations of the work cell are reported. If the vendor reports kanban jobs, the warehouse and location can be at the vendor's site. Alternatively, the warehouse location can be the receiving location associated with the next step of the production flow.
12. In the Output location field, click the drop-down button to open the lookup.
13. Expand or collapse the Calendars section.
14. Click Add.
15. In the Calendar field, click the drop-down button to open the lookup.
    * Associate the working time calendar of the work cell with the resource group. For critical resources, we recommend defining specific calendars that represent the exact working times and related capacities of the work cell or vendor site.
16. Expand or collapse the Resources section.
17. Click Add.
    * A subcontracted resource group must have an associated resource of type Vendor that links the resource group to the vendor account.
18. In the Resource field, click the drop-down button to open the lookup.
    * Select or enter the vendor resource that you created in the previous subtask.
19. Expand or collapse the Work cell capacity section.
20. Click Add.
    * A work cell must have a defined capacity. In this example, we create an output capacity of 100 pieces for a standard work day.
21. In the Production flow model field, click the drop-down button to open the lookup.
22. In the Capacity period field, select an option.
23. In the Average throughput quantity field, enter a number.
24. In the Unit field, click the drop-down button to open the lookup.
25. In the Unit field, ResolveChanges.
[!INCLUDE[footer-include](../../../includes/footer-banner.md)] | 60.383562 | 349 | 0.808984 | tur_Latn | 0.999941 |
28ccd9145b01b7a2b359f12b32bb30740a15ba55 | 303 | md | Markdown | _posts/2016-09-09-fuse-js-javascript-fuzzy-search-library.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 5 | 2016-01-25T08:51:46.000Z | 2022-02-16T05:51:08.000Z | _posts/2016-09-09-fuse-js-javascript-fuzzy-search-library.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 3 | 2015-08-22T08:39:36.000Z | 2021-07-25T15:24:10.000Z | _posts/2016-09-09-fuse-js-javascript-fuzzy-search-library.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 2 | 2016-01-18T03:56:54.000Z | 2021-07-25T14:27:30.000Z | ---
title: Fuse.js - JavaScript fuzzy-search library
author: azu
layout: post
itemUrl: 'http://fusejs.io/'
editJSONPath: 'https://github.com/jser/jser.info/edit/gh-pages/data/2016/09/index.json'
date: '2016-09-09T01:15:23Z'
tags:
- JavaScript
- library
- search
---
A JavaScript library for fuzzy searching over data.
| 21.642857 | 87 | 0.732673 | kor_Hang | 0.092232 |
28ccddf7e1f0b635f42600b7595361c0a8244932 | 1,743 | md | Markdown | README.md | Vedakeerthi/Search-Algorithms | e88c5a208ef7223b090594742d570f1bf5d79b8b | [
"MIT"
] | null | null | null | README.md | Vedakeerthi/Search-Algorithms | e88c5a208ef7223b090594742d570f1bf5d79b8b | [
"MIT"
] | null | null | null | README.md | Vedakeerthi/Search-Algorithms | e88c5a208ef7223b090594742d570f1bf5d79b8b | [
"MIT"
] | null | null | null | # Search-Algorithms
This repository consists of different search algorithms used to search for a particular element in a given list, implemented in C++. The search algorithms are as follows:
<br/>
* Sequential search or linear search
* Binary search
# **1. Sequential search:**
<br/>
As the name suggests, the sequential (linear) search algorithm searches for an element in a list sequentially: it compares each element of the list with the target element. If the search is successful, it returns the index of the element found in the list; otherwise, it reports that 'The element is not present in the list'. This is how sequential search works.
<center><img src="https://www.programmingsimplified.com/images/c/linear-search.gif" align='center' alt="Sequential search" height=300 width=1000></center>
<br/>
# **2. Binary search:**
<br/>
Binary search is a different kind of search; its only prerequisite is that the elements of the list are first arranged in ascending order. We then choose the middle element of the list. The target element is compared with this mid value: if they are equal, the index of the mid element is returned; if the mid element is less than the target element, the search continues in the right half of the list; and if the mid element is greater than the target element, the search continues in the left half. This repeats recursively until the target element is found in the list (or the search range is empty).
<center><img src="https://stackabuse.s3.amazonaws.com/media/binary-search-in-java-1.gif" align='center' alt="Binary search" height=300 width=1000></center>
<br/>
| 91.736842 | 701 | 0.779116 | eng_Latn | 0.999537 |
28ccebb7ca564cf0b3c610b579cf778d541c3565 | 7,881 | md | Markdown | docs/connect/jdbc/jdbc-4-2-compliance-for-the-jdbc-driver.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/connect/jdbc/jdbc-4-2-compliance-for-the-jdbc-driver.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/connect/jdbc/jdbc-4-2-compliance-for-the-jdbc-driver.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: JDBC 4.2 compliance for the JDBC Driver | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: conceptual
ms.assetid: 36025ec0-3c72-4e68-8083-58b38e42d03b
author: MightyPen
ms.author: genemi
manager: craigg
ms.openlocfilehash: cdb0e888276c2c9b08eb99b6972b645ac9b46607
ms.sourcegitcommit: 63b4f62c13ccdc2c097570fe8ed07263b4dc4df0
ms.translationtype: MTE75
ms.contentlocale: pt-BR
ms.lasthandoff: 11/13/2018
ms.locfileid: "51602047"
---
# <a name="jdbc-42-compliance-for-the-jdbc-driver"></a>JDBC 4.2 compliance for the JDBC Driver

[!INCLUDE[Driver_JDBC_Download](../../includes/driver_jdbc_download.md)]

> [!NOTE]
> Versions earlier than Microsoft JDBC Driver 4.2 for SQL Server are compliant with the Java Database Connectivity API 4.0 specifications. This section does not apply to versions earlier than 4.2.

The Java Database Connectivity API 4.2 specification is supported by the Microsoft JDBC Driver 4.2 for SQL Server with the following API methods.

## <a name="sqlserverstatement-class"></a>SQLServerStatement class

|New methods|Description|Notable implementation|
|-----------------|-----------------|-------------------------------|
|long[] executeLargeBatch()|Executes the batch, in which the returned update counts may be long.|Implemented as described in the java.sql.Statement interface. For more details, see [java.sql.Statement](https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#executeLargeBatch--).|
|long executeLargeUpdate(String sql)<br /><br /> long executeLargeUpdate(String sql, int autoGeneratedKeys)<br /><br /> long executeLargeUpdate(String sql, int[] columnIndexes)<br /><br /> executeLargeUpdate(String sql, String[] columnNames)|Executes a DML/DDL statement in which the returned update counts may be long. There are 4 new (overloaded) methods to support long update counts.|Implemented as described in the java.sql.Statement interface. For more details, see [java.sql.Statement](https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#executeLargeBatch--).|
|long getLargeMaxRows()|Retrieves the maximum number of rows, as a long value, that the ResultSet can contain.|SQL Server only supports integer limits for maximum rows. For more details, see [java.sql.Statement](https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#executeLargeBatch--).|
|long getLargeUpdateCount()|Retrieves the current result as a long update count.|SQL Server only supports integer limits for maximum rows. For more details, see [java.sql.Statement](https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#executeLargeBatch--).|
|void setLargeMaxRows(long max)|Sets the maximum number of rows, as a long value, that the ResultSet can contain.|SQL Server only supports integer limits for maximum rows. This method throws an unsupported exception if a size greater than the maximum integer is passed as the parameter. For more details, see [java.sql.Statement](https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#executeLargeBatch--).|

## <a name="sqlservercallablestatement-class"></a>SQLServerCallableStatement class

|New methods|Description|Notable implementation|
|-----------------|-----------------|-------------------------------|
|void registerOutParameter(int parameterIndex, SQLType sqlType)<br /><br /> void registerOutParameter(int parameterIndex, SQLType sqlType, int scale)<br /><br /> void registerOutParameter(int parameterIndex, SQLType sqlType, String typeName)<br /><br /> void registerOutParameter(String parameterName, SQLType sqlType)<br /><br /> void registerOutParameter(String parameterName, SQLType sqlType, int scale)<br /><br /> registerOutParameter(String parameterName, SQLType sqlType, String typeName)|Registers the OUT parameter. There are 6 new (overloaded) methods to support the new SQLType interface.|Implemented as described in the java.sql.CallableStatement interface. For more details, see [java.sql.CallableStatement](https://docs.oracle.com/javase/8/docs/api/java/sql/CallableStatement.html).|
|void setObject(String parameterName, Object x, SQLType targetSqlType)<br /><br /> void setObject(String parameterName, Object x, SQLType targetSqlType, int scaleOrLength)|Sets the parameter value with the specified object. There are 2 new (overloaded) methods to support the new SQLType interface.|Implemented as described in the java.sql.CallableStatement interface. For more details, see [java.sql.CallableStatement](https://docs.oracle.com/javase/8/docs/api/java/sql/CallableStatement.html).|

## <a name="sqlserverpreparedstatement-class"></a>SQLServerPreparedStatement class

|New methods|Description|Notable implementation|
|-----------------|-----------------|-------------------------------|
|long executeLargeUpdate()|Executes the DML/DDL statement and returns a long update count.|Implemented as described in the java.sql.PreparedStatement interface. For more details, see [java.sql.PreparedStatement](https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html).|
|void setObject(int parameterIndex, Object x, SQLType targetSqlType)<br /><br /> void setObject(int parameterIndex, Object x, SQLType targetSqlType, int scaleOrLength)|Sets the parameter value with the specified object. There are 2 new (overloaded) methods to support the new SQLType interface.|Implemented as described in the java.sql.PreparedStatement interface. For more details, see [java.sql.PreparedStatement](https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html).|

## <a name="sqlserverdatabasemetadata-class"></a>SQLServerDatabaseMetaData class

|New methods|Description|Notable implementation|
|-----------------|-----------------|-------------------------------|
|long getMaxLogicalLobSize()|Retrieves the maximum number of bytes this database allows for the logical size of a LOB.|For SQL Server, this value is 2^31-1. For more details, see [java.sql.DatabaseMetaData](https://docs.oracle.com/javase/8/docs/api/java/sql/DatabaseMetaData.html).|
|boolean supportsRefCursors()|Retrieves whether this database supports REF CURSOR.|Returns false, since SQL Server does not support REF CURSOR. For more details, see [java.sql.DatabaseMetaData](https://docs.oracle.com/javase/8/docs/api/java/sql/DatabaseMetaData.html).|

## <a name="sqlserverresultset-class"></a>SQLServerResultSet class

|New methods|Description|Notable implementation|
|-|-|-|
||Updates the designated column with an object value. There are 4 new (overloaded) methods to support the new SQLType interface.|Implemented as described in the java.sql.ResultSet interface. For more details, see [java.sql.ResultSet](https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html).|

The Java Database Connectivity API 4.2 specification is supported by the Microsoft JDBC Driver 4.2 for SQL Server with the following data type mappings.

|New data type mappings|Description|
|-|-|
|**New Java classes in Java 8:** <br /> <br /> LocalDate/LocalTime/LocalDateTime<br /><br /> OffsetTime/OffsetDateTime<br /><br /> **New JDBC types:**<br /><br /> TIME_WITH_TIMEZONE<br /><br /> TIMESTAMP_WITH_TIMEZONE<br /><br /> REF_CURSOR|REF_CURSOR is not supported in SQL Server. The driver throws a SQLFeatureNotSupportedException if this type is used. The driver supports all the new Java and JDBC type mappings, as indicated in the JDBC 4.2 specification.|
| 106.5 | 812 | 0.758533 | por_Latn | 0.913045 |
28cd1ce85a918bee733c63b1150b7dafe5b65b39 | 93 | md | Markdown | README.md | krist0ph3r/dyndns | fca250434642a74007601f7eb62acb4a96b0f534 | [
"Unlicense"
] | null | null | null | README.md | krist0ph3r/dyndns | fca250434642a74007601f7eb62acb4a96b0f534 | [
"Unlicense"
] | null | null | null | README.md | krist0ph3r/dyndns | fca250434642a74007601f7eb62acb4a96b0f534 | [
"Unlicense"
] | null | null | null | # dyndns
Gets your machine's current IP address and registers it with a Dynamic DNS service.
| 31 | 83 | 0.795699 | eng_Latn | 0.977008 |
28ce12e90ffeabb89192903bcebbcf04ecad0651 | 1,716 | md | Markdown | README.md | davidboukari/apt-dpkg | 6cda8dee48c45b8f2971cbb0506ddbd74e73a930 | [
"Apache-2.0"
] | null | null | null | README.md | davidboukari/apt-dpkg | 6cda8dee48c45b8f2971cbb0506ddbd74e73a930 | [
"Apache-2.0"
] | null | null | null | README.md | davidboukari/apt-dpkg | 6cda8dee48c45b8f2971cbb0506ddbd74e73a930 | [
"Apache-2.0"
] | null | null | null | # apt-dpkg
```
apt-cache search package
apt-get install package
# List all installed packages
dpkg -l
dpkg -l |grep fail2ban
ii fail2ban 0.11.1-1 all ban hosts that cause multiple authentication errors
# List all files of a package
dpkg -L fail2ban
/.
/etc
/etc/default
/etc/default/fail2ban
/etc/fail2ban
/etc/fail2ban/action.d
/etc/fail2ban/action.d/abuseipdb.conf
/etc/fail2ban/action.d/apf.conf
/etc/fail2ban/action.d/badips.conf
/etc/fail2ban/action.d/badips.py
...
```
# apt-key
```
apt-key list
/etc/apt/trusted.gpg
--------------------
pub rsa4096 2020-05-07 [SC]
E8A0 32E0 94D8 EB4E A189 D270 DA41 8C88 A321 9F7B
uid [ unknown] HashiCorp Security (HashiCorp Package Signing) <security+packaging@hashicorp.com>
sub rsa4096 2020-05-07 [E]
/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg
------------------------------------------------------
pub rsa4096 2012-05-11 [SC]
790B C727 7767 219C 42C8 6F93 3B4F E6AC C0B2 1F32
uid [ unknown] Ubuntu Archive Automatic Signing Key (2012) <ftpmaster@ubuntu.com>
/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg
------------------------------------------------------
pub rsa4096 2012-05-11 [SC]
8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
uid [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>
/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg
------------------------------------------------------
pub rsa4096 2018-09-17 [SC]
F6EC B376 2474 EDA9 D21B 7022 8719 20D1 991B C93C
uid [ unknown] Ubuntu Archive Automatic Signing Key (2018) <ftpmaster@ubuntu.com>
```
| 29.084746 | 143 | 0.613054 | kor_Hang | 0.209524 |
28ceb5381fc7ae28d7fc47a3cbd75af0829a4dc8 | 673 | md | Markdown | README.md | johnblakey/LaTeX-Two-Column-One-Page-Resume | dadcd06e2b51bea8bb401cd9be8d28572b5a242a | [
"Apache-2.0"
] | 4 | 2017-07-15T18:47:21.000Z | 2021-09-12T21:58:50.000Z | README.md | johnblakey/LaTeX-Two-Column-One-Page-Resume | dadcd06e2b51bea8bb401cd9be8d28572b5a242a | [
"Apache-2.0"
] | null | null | null | README.md | johnblakey/LaTeX-Two-Column-One-Page-Resume | dadcd06e2b51bea8bb401cd9be8d28572b5a242a | [
"Apache-2.0"
] | null | null | null | # LaTeX-Two-Column-One-Page-Resume
One page two column resume template using XeTeX typesetting engine https://en.wikipedia.org/wiki/XeTeX, which allows you to take a .tex file, compile it, and produce a pdf. The .tex file contains the content as text and related commands to format it accordingly.
The original source for template was https://github.com/deedy/Deedy-Resume .
## Instructions
### Install XeTeX on Debian
#### Terminal Commands
> $ apt install texlive
> $ apt install texlive-xetex
### Produce pdf from .tex File
Navigate to directory with .tex file that contains your resume information
> $ xelatex <filename.tex>
Outputs pdf in the same directory | 30.590909 | 263 | 0.763744 | eng_Latn | 0.960521 |
28ceff177e591ef583cdf22bf2c925e40ed5c321 | 315 | md | Markdown | README.md | edgardeng/good-flutter-app | 6e4c12936a58bf7be9568639af109ba8c001c712 | [
"MIT"
] | 1 | 2020-05-28T15:18:49.000Z | 2020-05-28T15:18:49.000Z | README.md | edgardeng/good-flutter-app | 6e4c12936a58bf7be9568639af109ba8c001c712 | [
"MIT"
] | null | null | null | README.md | edgardeng/good-flutter-app | 6e4c12936a58bf7be9568639af109ba8c001c712 | [
"MIT"
] | null | null | null | # good-flutter-app
> a good flutter app
## 运行环境
本项目运行环境要求! Flutter Version (v1.17.0)
由于在国内访问Flutter有时可能会受到限制,clone项目后,请勿直接packages get,建议运行如下目录行:
```
export PUB_HOSTED_URL=https://pub.flutter-io.cn
export FLUTTER_STORAGE_BASE_URL=https://storage.flutter-io.cn
flutter packages get
flutter run --release
```
| 19.6875 | 63 | 0.771429 | yue_Hant | 0.510006 |
28cf76b41a06bcafcf8ba84842d870ed752bd2dc | 151 | md | Markdown | docs/src/api.md | ali-ramadhan/DispatchedTuples.jl | 22dae1da3a5dae2a36d684e9c269e0b7e2a363ec | [
"MIT"
] | null | null | null | docs/src/api.md | ali-ramadhan/DispatchedTuples.jl | 22dae1da3a5dae2a36d684e9c269e0b7e2a363ec | [
"MIT"
] | null | null | null | docs/src/api.md | ali-ramadhan/DispatchedTuples.jl | 22dae1da3a5dae2a36d684e9c269e0b7e2a363ec | [
"MIT"
] | null | null | null | # API
```@docs
DispatchedTuples.AbstractDispatchedTuple
DispatchedTuples.DispatchedTuple
DispatchedTuples.DispatchedSet
DispatchedTuples.dispatch
```
| 16.777778 | 40 | 0.860927 | yue_Hant | 0.984332 |
28d027523160199f625d14ee7060d12e4c9805bf | 760 | md | Markdown | content/core/2.1.0/umbraco-v9/reference/vendr-core/vendr-core-models/productsummarycommonbase.md | JamieTownsend84/vendr-documentation | a8295778c4f3a55f976d77ca3f9aa0087572568e | [
"MIT"
] | null | null | null | content/core/2.1.0/umbraco-v9/reference/vendr-core/vendr-core-models/productsummarycommonbase.md | JamieTownsend84/vendr-documentation | a8295778c4f3a55f976d77ca3f9aa0087572568e | [
"MIT"
] | null | null | null | content/core/2.1.0/umbraco-v9/reference/vendr-core/vendr-core-models/productsummarycommonbase.md | JamieTownsend84/vendr-documentation | a8295778c4f3a55f976d77ca3f9aa0087572568e | [
"MIT"
] | null | null | null | ---
title: ProductSummaryCommonBase
description: API reference for ProductSummaryCommonBase in Vendr, the eCommerce solution for Umbraco
---
## ProductSummaryCommonBase
```csharp
public abstract class ProductSummaryCommonBase : IProductSummaryCommon
```
**Inheritance**
* interface [IProductSummaryCommon](../iproductsummarycommon/)
**Namespace**
* [Vendr.Core.Models](../)
### Properties
#### Name
```csharp
public abstract string Name { get; }
```
---
#### Prices
```csharp
public abstract IEnumerable<ProductPrice> Prices { get; }
```
---
#### Reference
```csharp
public abstract string Reference { get; }
```
---
#### Sku
```csharp
public abstract string Sku { get; }
```
<!-- DO NOT EDIT: generated by xmldocmd for Vendr.Core.dll -->
| 13.818182 | 100 | 0.692105 | kor_Hang | 0.414724 |
28d0ddb73162ed07f0f7c84bde6ecae1d16e39d4 | 1,589 | md | Markdown | _posts/2020/2020-04-20-zhe-fan-dian--2016-|-xiang-yao-zao-qi-.md | conge/conge.github.io | 1c96cbc9b7deacae603653b5a43272c7d769a0c9 | [
"MIT"
] | null | null | null | _posts/2020/2020-04-20-zhe-fan-dian--2016-|-xiang-yao-zao-qi-.md | conge/conge.github.io | 1c96cbc9b7deacae603653b5a43272c7d769a0c9 | [
"MIT"
] | 11 | 2020-11-08T15:23:57.000Z | 2022-03-02T02:25:28.000Z | _posts/2020/2020-04-20-zhe-fan-dian--2016-|-xiang-yao-zao-qi-.md | conge/conge.github.io | 1c96cbc9b7deacae603653b5a43272c7d769a0c9 | [
"MIT"
] | 4 | 2021-08-08T06:52:19.000Z | 2021-08-21T03:34:32.000Z | ---
layout: post
title: "Turnaround Point 2016 | Wanting to Get Up Early"
date: "2020-04-20 00:56:31"
categories: Turnaround Point
excerpt: "Inspired by a few people, I want to start an early-rising routine, with the target set at 5 a.m. Before starting it, I usually got up between 7 and 8 in the morning, typically woken by my son, who wants milk the moment he opens his eyes..."
auth: conge
---
* content
{:toc}
Inspired by a few people, I want to start an early-rising routine, with the target set at 5 a.m.

Before starting the early-rising routine, I usually got up between 7 and 8 in the morning, typically woken by my son. The moment this little guy opens his eyes each day, he wants milk. His mom wants to sleep a bit longer and asks him to wait a moment; he can't wait, so he cries. And once he cries, there's no more sleep for me.

Nobody who gets woken up like that is very happy. As the saying goes, the whole day's plan is made in the morning; if you get up in a bad mood, it's hard for the rest of the day to go well. After much thought, the only solution is for me to get up earlier than he does.

Although the target is to get up at 5, it can't happen all at once. After all, there's a two-hour gap between 7 and 5. Taking too big a step at once makes it easy to stumble.

For my early rising, I also don't plan to use an alarm clock; I want to wake up naturally, because an alarm would disturb my wife and kids. I know from experience that being jolted awake, whether by someone else's noise or my own alarm, does my mood no good — I might as well just let my son be the alarm clock.

Without an alarm, the remaining option is to go to bed earlier. So these past couple of days I've gone to bed early, and I've already gotten up around 6 several times, and a few times even before 6.

After getting up, the first thing is to finish my run. There may also be time to write something or read a bit. Then my son wakes up, and I read or talk with him. My daughter wakes a little later and joins us. My wife gets to sleep in a bit in the morning.

At first, even getting up earlier, I found I wasn't any drowsier during the day.

But after keeping it up for a few days, a setback finally came on Friday. It wasn't anything serious — in the afternoon I suddenly got a headache and felt drowsy. My wife said it might be because I hadn't had coffee. But while I hadn't had coffee, I had drunk tea.

My wife may be right; that headache did feel a bit like caffeine withdrawal. But I lean toward thinking that getting up early these past days left me short on sleep, so my brain took decisive action, giving me a headache to urge me to rest.

So I lay down on the bed and slept for a while, which finally cleared the fatigue. On Saturday I napped again at noon and felt much better.

It seems that if I want to get up early, I'll need to add a noon nap, or go back to sleep briefly after breakfast.

Then again, naps and going back to bed really are wonderful things. I should try to fit them in from now on.
## Running log

## Turnaround Point







```
2020-04-18 first draft
```
| 26.04918 | 90 | 0.797357 | zho_Hans | 0.144433 |
28d0f9379bcd460cd0bd3a0ae228a24325f88d99 | 10,124 | md | Markdown | module1/presentation.md | majkel84/kurs_cpp_podstawowy | eddaffb310c6132304aa26dc87ec04ddfc09c541 | [
"MIT"
] | 8 | 2020-05-17T18:04:40.000Z | 2021-04-15T07:52:33.000Z | module1/presentation.md | majkel84/kurs_cpp_podstawowy | eddaffb310c6132304aa26dc87ec04ddfc09c541 | [
"MIT"
] | 292 | 2020-03-10T18:03:14.000Z | 2022-03-26T17:00:51.000Z | module1/presentation.md | majkel84/kurs_cpp_podstawowy | eddaffb310c6132304aa26dc87ec04ddfc09c541 | [
"MIT"
] | 126 | 2020-05-11T16:25:40.000Z | 2022-02-21T00:41:03.000Z | # Pierwsze 4 podpunkty, wszystko jest w PreWork
# 5 Typy wbudowane i auto
Czym jest 1 bajt -> jest to 8 bitów.
Prosta matematyka, jeżeli mamy binarnego totolotka, wylosowane liczny mogą mieć 0 lub 1.
Zatem podczas losowania 8 numerków możemy otrzymać przykładowo: 1 0 1 0 1 0 1 0
Takich kombinacji jest dokładnie 128 -> (2^8).
Zatem na 1 bajcie (8 bitach) możemy zapisać 128 liczb -> np. od 0 do 127.
Jeżeli w totolotku losujemy 32 numerki, (32/8 = 4) czyli 4 bajty to takich kombinacji jest, 2^32 (czyli ponad 4 miliardy).
Przejdźmy teraz do typów wbudowanych w standardzie c++
Podstawowe typy wbudowane (przedrostek unsigned oznacza, że typ jest bez znaku, czyli od 0 do jakieś dodatniej wartości).
* bool 1bajt -> false lub true
* char 1batj -> od -128 do 127
* unsigned char 1bajt -> od 0 do 255
*Wielkość poniższych typów zależy od platformy np. 32 bity, 64 bity*
* short (unsigned short) - zwykle 2 bajty
* int (unsigned int) - zwykle 4 bajty
* long (unsigned long) - zwykle 4 bajty
* long long(unsigned long long) - jeżeli platforma jest 64 bitowa to 8 bajtów
* float - zwykle 4 bajty
* double - jeżeli platforma 64 bitowa to 8 bajtów
*Istnieją też typy, która są aliasami (inne nazewnictwo, w celu lepszego zrozumienia typu) jak*
size_t -> który w zależności od kompilatora może być typu (unsigned short, unsigned int, unsigned long, unsigned long long). Przeważnie jest on typu unsigned int. Warto wykorzystywać go gdy nasza zmienna będzie odnosić się do jakiegoś rozmiaru np. wielkość tablicy.
* oczywiście zawsze możemy użyć słowa `auto` i kompilator sam wydedukuje typ:
auto num = 5; -> int
auto num = 5.5 -> double
auto num = 5.f -> float
auto letter = 'a' = -> char
auto num = false -> bool
And finally, a little dad joke. What is a Hobbit? It's 1/8 of a Hobbyte :)
# 6 Functions

A *function* is a fragment of a program that has been given a name and that we can execute by using that name
and passing any arguments (if there are any). Arguments are the data passed into the function, e.g. `void fun(int)`: a function named fun that returns nothing and takes one argument of type int.

So a function is nothing more than a subprogram. For example, while riding a bike,
our main function is getting from point A to point B. Along the way, however, we also run several subprograms: changing gears,
braking, speeding up, steering, etc. Similarly, in a program we can isolate specific behaviors
and move them into functions named so that they suggest what they do. It is important that a function does only one thing:
one function changes gears, another brakes, a third steers.

Let's look at some other functions:

`void foo(double)` is a function named foo that returns nothing and takes one argument of type double.
`double bar(float, const int)` is a function named bar that returns a double and takes 2 arguments:
the first is a float, the second a const int (const means the value cannot be modified).

Examples of function calls:

`foo(5)` -> we call the function foo with an int argument equal to 5.
`double result = bar(5.4, 10)` -> we call the function bar with a float argument (5.4) and an int (10),
and assign its result to a double variable.
Arithmetic operations:

* Basic: + - * /
* Modifying a variable in place: += -= *= /=

Example: `a = 5 + 7` (a = 12), and likewise `a = 5; a += 7` (a = 12).
# 7 Conditional statements

The `if` conditional statement

A conditional statement is nothing more than asking the program a question, e.g.:

* Have you already received all the data?
* Has the boss's health dropped to 0?
* Has the player earned the achievement?
* Is the number greater than the maximum allowed?

Its construction is simple:

```
if (condition) {
    // do sth
}
```

And what if several conditions must be met?
We can combine conditions with the *or* operator `||` or the *and* operator `&&`, e.g.:

* `if (potatoes_eaten && meat_eaten && salad_eaten)`
-> all 3 conditions must be met.
* `if (player_has_20_dexterity || player_has_18_intelligence || player_has_22_strength)`
-> here it is enough to meet any one of the 3 conditions. All of them may be met,
but one is enough.

If the program can react differently depending on which conditions are met, we can use the `if else` construction:

```
if (number < 2) {
    critical_miss();
} else if (number < 18) {
    hit();
} else {
    critical_hit();
}
```
The switch-case conditional statement.

It behaves similarly to the `if` statement, but its construction is somewhat different:

```
char option;
switch (option) {
case 'l':
    GoLeft();
    break;
case 'r':
    GoRight();
    break;
case 'f':
    GoForward();
    break;
case 'b':
    GoBackward();
    break;
default:
    Exit();
    break;
}
```

`case` denotes a specific case. `break` indicates that we leave the conditional statement and continue with the program.
`default` is where the program lands when no other condition is met.
# 8 Loops

In the simplest terms, a loop is used to repeat instructions that we want executed more
than once, without having to write them out multiple times in the code.

Basic loops: while(), for()

while() -> we use it when we want to do something until some condition changes;
usually we have no idea when that will happen (we do not know the number of steps), e.g.:

* We browse shirts online until we find one that suits us,
* We repeat the fight with the same boss until we defeat him,
* We eat soup until the bowl is empty,
* We search the contacts in our phone until we find the person we are looking for.

for() -> we use it when we want to do something a specific number of times; usually we know the number of steps, e.g.:

* We fill in a survey consisting of 10 questions -> number of steps: 10,
* We move from point A to point B -> number of steps: distance / step length,
* We take an exam consisting of 4 tasks -> number of steps: 4 if we know the material (if not, we also run the `cheat` subprogram),
* We button up a shirt (as long as we do not rip off any buttons).

The construction of a loop is very simple:

```
while (condition) {
    // Do sth
}
```

and

```
for (variable_initialization ; condition ; variable_increment) {
    // Do sth
}
```

Example:

```
while (a == b) {
    std::cin >> a;
    std::cin >> b;
}
```

```
for (size_t i = 0 ; i < 10 ; i+=2) {
    std::cout << "i: " << i << '\n';
}
```
# 9 Introduction to Arrays
Arrays can be thought of as the cars of a train. They are arranged one after another and coupled together.
They can hold different types, such as people, coal etc.
So if we have 10 cars of coal, we can write it as `Coal tab[10]`, which means we create
an array that holds 10 elements of type Coal.
In C++ an array occupies a single place in memory and is contiguous, just like the train cars.
(suggestion: add a photo of a train and a nice illustration of how an array looks in memory).
Of course we could remove individual cars, but let's not complicate things for now :)
An array is always indexed from 0, so the first element of a 10-element array is `tab[0]` and the last is `tab[9]`.
Example of modifying an array:
```
int tab[10];
tab[0] = 1;
tab[1] = 2;
...
tab[9] = 9;
```
As you can see, we access an array element via `operator []`. We must remember to always refer
to an existing element of the array. Otherwise the program has undefined behaviour, because we try to
access memory that does not belong to the array. We say that garbage lives there. In the best case
the operating system detects it and we get a crash. In the worst case we keep working on invalid data for a long
time before the program finally ends in a crash.
# 10 STL Basics
STL -> the fundamental template library available in the C++ language standard.
The first interesting element of this library is std::vector.
It is a dynamic array that manages its own memory, so we do not have to specify the number of elements up front.
std::vector takes care of allocating new memory when needed, and of deallocating it when we no longer
need it.
Example of creating a std::vector: `std::vector<int> vec;` — as you can see, I provided the type int, just as for plain arrays.
A vector always has to know what type of data it stores.
* Modifying a vector: `vec.push_back(5)`
* Reading from a vector: `vec[1]`
* Initialising several vector elements at once: `std::vector<int> vec {1,2,3,4,5}`
* Assigning several elements to a vector: `vec = {1,2,3,4,5}`
* Getting the first element of a vector: `vec.front()`
* Getting the last element of a vector: `vec.back()`
# 11 Range loop
Every container (including an array or a vector) has a beginning and an end.
begin() -> denotes the beginning of the vector
end() -> denotes the end of the vector.
For now the exact return type does not matter. What matters to us is only that we get the beginning and the end of a range :)
So we can write the loop: `for (const auto& element : vec)`. What does this loop do?
Thanks to the information about the beginning and the end of the range, the compiler can generate a loop over the whole vector by itself.
It is a bit as if we had written `for (auto i = vec.begin() ; i != vec.end() ; ++i)`, but that notation is needlessly
verbose and hard to read. That is why `range loops` were introduced, allowing the simple form `for (type name : container)`.
# 12 std::string
Another container type is std::string. It is a special container that stores characters.
It has functions similar to std::vector:
* Modifying a std::string: `str.push_back('a')`, but nobody does that :). I recommend `str += 'a';`
* Reading from a std::string: `str[1]`
* Initialising several elements of a std::string at once: `std::string str("Witam");`
* Assigning several elements to a std::string: `str = "Witam"`
* Getting the first element of a std::string: `str.front()`
* Getting the last element of a std::string: `str.back()`
std::string also has its beginning and end :) just like every container.
| 43.080851 | 267 | 0.739036 | pol_Latn | 0.999969 |
28d1e514ee05993141b65498165742b8dcf6699b | 8,462 | md | Markdown | articles/aks/concepts-sustainable-software-engineering.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 12 | 2017-08-28T07:45:55.000Z | 2022-03-07T21:35:48.000Z | articles/aks/concepts-sustainable-software-engineering.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 441 | 2017-11-08T13:15:56.000Z | 2021-06-02T10:39:53.000Z | articles/aks/concepts-sustainable-software-engineering.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 27 | 2017-11-13T13:38:31.000Z | 2022-02-17T11:57:33.000Z | ---
title: Concepts - Sustainable software engineering in Azure Kubernetes Service (AKS)
description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS).
services: container-service
ms.topic: conceptual
ms.date: 03/29/2021
ms.openlocfilehash: c43c65dfa2f3930510bd59aaa24c798525bd691b
ms.sourcegitcommit: 6ed3928efe4734513bad388737dd6d27c4c602fd
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/07/2021
ms.locfileid: "107011495"
---
# <a name="sustainable-software-engineering-principles-in-azure-kubernetes-service-aks"></a>Sustainable software engineering principles in Azure Kubernetes Service (AKS)
The sustainable software engineering principles are a set of competencies to help you define, build, and run sustainable applications. The overall goal is to reduce the carbon footprint of every aspect of your application. [The Principles of Sustainable Software Engineering][principles-sse] has an overview of the principles of sustainable software engineering.
Sustainable software engineering is a shift in priorities and focus. In many cases, software is designed and run with an emphasis on fast performance and low latency. Sustainable software engineering, meanwhile, focuses on reducing as much carbon emission as possible. Consider the following:
* Applying sustainable software engineering principles can also give you faster performance or lower latency, for example by lowering the total number of network round trips.
* Reducing carbon emissions may cause slower performance or increased latency, such as delaying low-priority workloads.
Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
## <a name="measure-and-optimize"></a>Measure and optimize
To lower the carbon footprint of your AKS clusters, you need to understand how your cluster's resources are being used. [Azure Monitor][azure-monitor] provides details on your cluster's resource usage, such as memory and CPU usage. This data informs your decisions to reduce the carbon footprint of your cluster and lets you observe the effect of your changes.
You can also install the [Microsoft Sustainability Calculator][sustainability-calculator] to see the carbon footprint of all your Azure resources.
## <a name="increase-resource-utilization"></a>Increase resource utilization
One approach to lowering your carbon footprint is to reduce idle time. Reducing idle time involves increasing the utilization of your compute resources. For example:
1. You had four nodes in your cluster, each running at 50% capacity. As such, all four of your nodes have 50% unused capacity remaining.
1. You reduced your cluster to three nodes, each running at 67% capacity with the same workload. You would have successfully decreased your unused capacity to 33% on each node and increased your utilization.
> [!IMPORTANT]
> When considering changes to your cluster's resources, verify your [system pools][system-pools] have enough resources to maintain the stability of your cluster's core system components. **Never** reduce your cluster's resources to the point where your cluster may become unstable.
After reviewing your cluster's utilization, consider using the features offered by [multiple node pools][multiple-node-pools]:
* Node sizing
Use [node sizing][node-sizing] to define node pools with specific CPU and memory profiles, allowing you to tailor your nodes to your workload needs. By sizing your nodes to your workload needs, you can run a few nodes at higher utilization.
* Cluster scaling
Configure how your cluster [scales][scale]. Use the [horizontal pod autoscaler][scale-horizontal] and the [cluster autoscaler][scale-auto] to scale your cluster automatically based on your configuration. Control how your cluster scales to keep all your nodes running at high utilization while staying in sync with changes to your cluster's workload.
* Spot pools
For cases where a workload is tolerant to sudden interruptions or terminations, you can use [spot pools][spot-pools]. Spot pools take advantage of idle capacity within Azure. For example, spot pools may work well for batch jobs or development environments.
> [!NOTE]
>Increasing utilization can also reduce excess nodes, which reduces the energy consumed by [resource reservations on each node][resource-reservations].
Finally, review the CPU and memory *requests* and *limits* in the Kubernetes manifests of your applications.
* As you lower memory and CPU values, more memory and CPU are available to the cluster to run other workloads.
* As you run more workloads with lower CPU and memory, your cluster becomes more densely allocated, which increases your utilization.
When reducing the CPU and memory for your applications, your applications' behavior may become degraded or unstable if you set CPU and memory values too low. Before changing the CPU and memory *requests* and *limits*, run some benchmarking tests to verify that the values are set appropriately. Never reduce these values to the point of application instability.
## <a name="reduce-network-travel"></a>Reduce network travel
By reducing the distance that requests and responses travel to and from your cluster, you can reduce carbon emissions and the electricity consumed by networking devices. After reviewing your network traffic, consider creating clusters [in regions][regions] closer to the source of your network traffic. You can use [Azure Traffic Manager][azure-traffic-manager] to route traffic to the closest cluster, and [proximity placement groups][proiximity-placement-groups] to reduce the distance between Azure resources.
> [!IMPORTANT]
> When considering changes to your cluster's networking, never reduce network travel at the cost of meeting workload requirements. For example, while using [availability zones][availability-zones] causes more network travel on your cluster, availability zones may be necessary to handle workload requirements.
## <a name="demand-shaping"></a>Demand shaping
Where possible, consider shifting demand for your cluster's resources to times or regions where you can use excess capacity. For example, consider:
* Changing the time or region for a batch job to run.
* Using [spot pools][spot-pools].
* Refactoring your application to use a queue to defer running workloads that don't need immediate processing.
* Refaktoryzacja aplikacji w celu użycia kolejki w celu odroczenia uruchomionych obciążeń, które nie wymagają natychmiastowego przetwarzania.
## <a name="next-steps"></a>Next steps
Learn more about the AKS features mentioned in this article:
* [Multiple node pools][multiple-node-pools]
* [Node sizing][node-sizing]
* [Cluster scaling][scale]
* [Horizontal pod autoscaler][scale-horizontal]
* [Cluster autoscaler][scale-auto]
* [Spot pools][spot-pools]
* [System pools][system-pools]
* [Resource reservations][resource-reservations]
* [Proximity placement groups][proiximity-placement-groups]
* [Availability zones][availability-zones]
[availability-zones]: availability-zones.md
[azure-monitor]: ../azure-monitor/containers/container-insights-overview.md
[azure-traffic-manager]: ../traffic-manager/traffic-manager-overview.md
[proiximity-placement-groups]: reduce-latency-ppg.md
[regions]: faq.md#which-azure-regions-currently-provide-aks
[resource-reservations]: concepts-clusters-workloads.md#resource-reservations
[scale]: concepts-scale.md
[scale-auto]: concepts-scale.md#cluster-autoscaler
[scale-horizontal]: concepts-scale.md#horizontal-pod-autoscaler
[spot-pools]: spot-node-pool.md
[multiple-node-pools]: use-multiple-node-pools.md
[node-sizing]: use-multiple-node-pools.md#specify-a-vm-size-for-a-node-pool
[sustainability-calculator]: https://azure.microsoft.com/blog/microsoft-sustainability-calculator-helps-enterprises-analyze-the-carbon-emissions-of-their-it-infrastructure/
[system-pools]: use-system-pools.md
[principles-sse]: https://docs.microsoft.com/learn/modules/sustainable-software-engineering-overview/ | 79.830189 | 502 | 0.824391 | pol_Latn | 0.999898 |
28d30ec553f33a359f2c61f8050bb6d687e477dd | 1,588 | md | Markdown | CHANGELOG.md | apimediaru/laravel-echo | 0ba1c9d043d0efa9f9c821eabb8db5d13fe31abd | [
"MIT"
] | null | null | null | CHANGELOG.md | apimediaru/laravel-echo | 0ba1c9d043d0efa9f9c821eabb8db5d13fe31abd | [
"MIT"
] | 4 | 2020-07-21T12:56:09.000Z | 2022-01-22T12:14:01.000Z | CHANGELOG.md | apimediaru/laravel-echo | 0ba1c9d043d0efa9f9c821eabb8db5d13fe31abd | [
"MIT"
] | null | null | null | # Changelog
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
### [1.1.5](https://github.com/apimediaru/laravel-echo/compare/v1.1.4...v1.1.5) (2020-05-19)
### [1.1.4](https://github.com/apimediaru/laravel-echo/compare/v1.1.2...v1.1.4) (2020-05-19)
### [1.1.2](https://github.com/apimediaru/laravel-echo/compare/v1.1.1...v1.1.2) (2020-05-19)
### [1.1.1](https://github.com/apimediaru/laravel-echo/compare/v1.1.0...v1.1.1) (2020-05-19)
## [1.1.0](https://github.com/nuxt-community/laravel-echo/compare/v1.0.3...v1.1.0) (2020-01-27)
### Features
* add plugins option ([#8](https://github.com/nuxt-community/laravel-echo/issues/8)) ([da90251](https://github.com/nuxt-community/laravel-echo/commit/da90251))
### [1.0.3](https://github.com/nuxt-community/laravel-echo/compare/v1.0.2...v1.0.3) (2019-12-11)
### Bug Fixes
* set default `broadcaster` to `null` ([f8d950e](https://github.com/nuxt-community/laravel-echo/commit/f8d950e))
### [1.0.2](https://github.com/nuxt-community/laravel-echo/compare/v1.0.1...v1.0.2) (2019-10-23)
### Bug Fixes
* ssr ([#4](https://github.com/nuxt-community/laravel-echo/issues/4)) ([264d1f8](https://github.com/nuxt-community/laravel-echo/commit/264d1f8))
### [1.0.1](https://github.com/nuxt-community/laravel-echo/compare/v1.0.0...v1.0.1) (2019-10-17)
### Bug Fixes
* register plugin on hook `builder:extendPlugins` ([ba3fe9e](https://github.com/nuxt-community/laravel-echo/commit/ba3fe9e))
## 1.0.0 (2019-09-25)
| 37.809524 | 174 | 0.695844 | yue_Hant | 0.432095 |
28d3d9b7a0251464d6f8c2274b32148e22000f1f | 193 | md | Markdown | iambismark.net/content/post/2009/01/1232951488.md | bismark/iambismark.net | 1ef89663cfcf4682fbfd60781bb143a7fd276312 | [
"MIT"
] | null | null | null | iambismark.net/content/post/2009/01/1232951488.md | bismark/iambismark.net | 1ef89663cfcf4682fbfd60781bb143a7fd276312 | [
"MIT"
] | null | null | null | iambismark.net/content/post/2009/01/1232951488.md | bismark/iambismark.net | 1ef89663cfcf4682fbfd60781bb143a7fd276312 | [
"MIT"
] | null | null | null | ---
alturls:
- https://twitter.com/bismark/status/1148451027
archive:
- 2009-01
date: '2009-01-26T06:31:28+00:00'
slug: '1232951488'
---
google talk needs an SMS forwarding service like AIM.
| 16.083333 | 53 | 0.720207 | kor_Hang | 0.196638 |
28d55222562c8693e75d428541c528daa3d668f0 | 600 | md | Markdown | _devices/ricoh_thetaS.md | ghobot/blog | 150bc6e550d171999574bf4a8210761047479102 | [
"MIT"
] | null | null | null | _devices/ricoh_thetaS.md | ghobot/blog | 150bc6e550d171999574bf4a8210761047479102 | [
"MIT"
] | null | null | null | _devices/ricoh_thetaS.md | ghobot/blog | 150bc6e550d171999574bf4a8210761047479102 | [
"MIT"
] | null | null | null | ---
title: Ricoh Theta S 360 Camera
deviceUrl: https://theta360.com/en/
deviceShortUrl: https://goo.gl/hXdPnW
image_path: https://goo.gl/i9pzqD
accessories:
setup:
experiences:
resources:
tags:
---
The Theta S from Ricoh is a camera that captures 360° stills and full HD movies with a single click. (B&H Photo)
##### Why this device?
The Theta S is one of the most affordable ways to create high quality 360 photos with one button press. We use it for timelapses and photos that can be viewed on your phone or virtual reality. We can easily document a space and annotate information on it later. | 37.5 | 262 | 0.761667 | eng_Latn | 0.968821 |
28d59f41a606839151689e904f6db9faeb891c13 | 1,103 | md | Markdown | community/contribute/gitcontrib/functionalities.md | riccmin/MagnitudeDistributions.jl | 3600a7ac9b6b7e2b0764830c333a388a18d73499 | [
"MIT"
] | null | null | null | community/contribute/gitcontrib/functionalities.md | riccmin/MagnitudeDistributions.jl | 3600a7ac9b6b7e2b0764830c333a388a18d73499 | [
"MIT"
] | 2 | 2021-05-11T13:49:24.000Z | 2021-05-14T16:07:00.000Z | community/contribute/gitcontrib/functionalities.md | riccmin/MagnitudeDistributions.jl | 3600a7ac9b6b7e2b0764830c333a388a18d73499 | [
"MIT"
] | null | null | null | ## Contributing with new functionalities
1. Edit the appropriate file in the `src/` directory, or add new files if necessary.
2. Add any necessary export in `src/MagnitudeDistributions.jl`.
3. Create tests for your functionality (see [tests.md](https://github.com/riccmin/MagnitudeDistributions.jl/blob/main/community/contribute/gitcontrib/tests.md))
4. Commit your changes and open a pull request.
It is preferable not to add new dependencies. If you believe it is necessary, open a [discussion](https://github.com/riccmin/MagnitudeDistributions.jl/discussions).
### Code Formatting Guidelines
- 4 spaces per indentation level, no tabs
- use whitespace to make the code more readable
- no whitespace at the end of a line (trailing whitespace)
- comments are good, especially when they explain the algorithm
- try to adhere to a 92 character line length limit
- use upper camel case convention for modules, type names
- use lower case with underscores for method names
- it is generally preferred to use ASCII operators and identifiers over
Unicode equivalents whenever possible
| 44.12 | 164 | 0.780598 | eng_Latn | 0.991737 |
28d62b112b0a02b62602abc4ed509c6d94d75a0f | 2,874 | md | Markdown | README_RUN.md | jamiels/twinkle-chain | 650f0a7bcac337bc457fc3951fa2080f902e6ff3 | [
"Apache-2.0"
] | null | null | null | README_RUN.md | jamiels/twinkle-chain | 650f0a7bcac337bc457fc3951fa2080f902e6ff3 | [
"Apache-2.0"
] | null | null | null | README_RUN.md | jamiels/twinkle-chain | 650f0a7bcac337bc457fc3951fa2080f902e6ff3 | [
"Apache-2.0"
] | null | null | null | Run the deployNodes Gradle task to build four nodes with our CorDapp already installed on them:
- Unix/Mac OSX: ./gradlew clean deployNodes
- Windows: gradlew.bat deployNodes
Start the nodes by running the following command from the root of the cordapp-example folder:
- Unix/Mac OSX: build/nodes/runnodes
- Windows: call build\nodes\runnodes.bat
step 1
Originate Asset
- Console:
flow start OriginateAssetFlowInitiator assetContainer: {owner: PartyA, type: mango, producerID: 1, stage: Ready for Pickup}, gps: {longitude: 10, latitude: 20}, obligation: {owner: PartyA, beneficiary: PartyB, amount: $100}
- Webserver
POST
http://localhost:12223/asset/create
body
{
"producerID": 1,
"stage": "Ready for Pickup",
"prId": "2efeb496-f049-4fbd-934f-81e6c200a1ab",
"type": "mango",
"longitude": 23,
"latitude": 23,
"amount": 123
}
step 2
check states
- Console:
run vaultQuery contractStateType: twinkle.agriledger.states.AssetContainerState
run vaultQuery contractStateType: twinkle.agriledger.states.LocationState
run vaultQuery contractStateType: twinkle.agriledger.states.ObligationState
- Webserver
GET
http://localhost:12223/asset/
take linear id from one of the state and put it into move flow
step 3-1
- Console
flow start AssetNewStageFlow physicalContainerId: 02a27840-e09c-4f08-92e7-5d86aea83ac6, stage: Harvest Physical Handling
- Webserver
http://localhost:12223/asset/set-stage/d9587bcf-2d02-4e6d-8817-cbd9ac4eabc7/Harvest Physical Handling
step 3
Transfer fruits
- Console
flow start MoveFlowInitiator physicalContainerID: 4e9a9162-983d-4c8b-8d5f-5b91bf9e0b77, gps: {longitude: 78, latitude: 77}
- Webserver
POST
http://localhost:12223/asset/move
body
{
"longitude": 24,
"latitude": 24,
"linearId": "9ea75384-4c09-479b-b411-613f0de4e91d"
}
step 4
- Console
repeat step 2 and check states with new data
-Webserver
GET
http://localhost:12223/asset/trace?linearId=9ea75384-4c09-479b-b411-613f0de4e91d
http://localhost:12223/asset/trace-status?linearId=9ea75384-4c09-479b-b411-613f0de4e91d
step 5
Split assets
flow start SplitAssetContainerFlow physicalContainerID: 2050db30-6c20-4a5a-b7d5-d1fa365ae769, splitNumber: 20
step 6
flow start MergeAssetContainersFlow physicalContainerIDs: [602edb3f-9d54-4d29-94c8-f09aeb18192c, 058e4b58-c632-41a8-950c-693b1b5b88ed]
step 7
Finalize Asset
flow start FinalBuyerPurchaseContainerFlow linearId: 2050db30-6c20-4a5a-b7d5-d1fa365ae769
- H2 Console (connection url at logs. Default username: sa, Default empty password)
http://localhost:12223/h2-console/
- Swagger
http://localhost:12223/swagger-ui.html
Each Spring Boot server needs to be started in its own terminal/command prompt, replace X with A, B and C:
- Unix/Mac OSX: ./gradlew runPartyXServer
- Windows: gradlew.bat runPartyXServer
| 28.74 | 223 | 0.763396 | eng_Latn | 0.386765 |
28d70e397be93a461f33e4c8a108eaa263ed5f6c | 30 | md | Markdown | docs/manual-test/_shared/version-header-none.md | roscoe/vsts-docs | 7900d28b53a98a79359914778897606053cecd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/manual-test/_shared/version-header-none.md | roscoe/vsts-docs | 7900d28b53a98a79359914778897606053cecd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/manual-test/_shared/version-header-none.md | roscoe/vsts-docs | 7900d28b53a98a79359914778897606053cecd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | **| No version dependency |**
| 15 | 29 | 0.633333 | eng_Latn | 0.944681 |
28d72c7536589e19af2722b1a844c39f456bc80b | 1,087 | md | Markdown | docs/visual-basic/misc/bc30761.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc30761.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc30761.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Project '<projectname>' cannot reference project '<projectname>' because '<projectname>' directly or indirectly references '<projectname>'
ms.date: 07/20/2015
f1_keywords:
- vbc30761
- bc30761
helpviewer_keywords:
- BC30761
ms.assetid: 0197bb2d-5ea9-4c03-98a3-3cf01b5aba0d
ms.openlocfilehash: 83003b59915ff92fe4aa571e29a9cd268501e7ad
ms.sourcegitcommit: 14355b4b2fe5bcf874cac96d0a9e6376b567e4c7
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 01/30/2019
ms.locfileid: "55285323"
---
# <a name="project-projectname-cannot-reference-project-projectname-because-projectname-directly-or-indirectly-references-projectname"></a>Project '\<projectname>' cannot reference project '\<projectname>' because '\<projectname>' directly or indirectly references '\<projectname>'
A project contains a reference to a second project. The second project, in turn, contains a reference to the project in which it is referenced.
**Error ID:** BC30761
## <a name="to-correct-this-error"></a>To correct this error
- Remove one of the references.
28d8ce90b1f087d68341cba19f49d6c095805304 | 1,799 | md | Markdown | packages/preset-asset/README.md | best-shot/best-shot | 2bd9d5075dc291a595beb9cb9de1cc3d0ffe29b3 | [
"MIT"
] | 3 | 2019-03-07T01:26:04.000Z | 2019-03-31T12:07:06.000Z | packages/preset-asset/README.md | best-shot/best-shot | 2bd9d5075dc291a595beb9cb9de1cc3d0ffe29b3 | [
"MIT"
] | 47 | 2019-11-22T10:18:56.000Z | 2022-02-06T19:12:05.000Z | packages/preset-asset/README.md | Airkro/best-shot | 81949465ff28b78cc276c473715c236c362c3426 | [
"MIT"
] | 2 | 2019-03-05T06:24:16.000Z | 2019-07-21T05:54:29.000Z | # @best-shot/preset-asset <img src="https://cdn.jsdelivr.net/gh/best-shot/best-shot/packages/core/logo.svg" alt="logo" height="80" align="right">
A `best-shot` preset for asset.
[![npm][npm-badge]][npm-url]
[![github][github-badge]][github-url]
![node][node-badge]
[npm-url]: https://www.npmjs.com/package/@best-shot/preset-asset
[npm-badge]: https://img.shields.io/npm/v/@best-shot/preset-asset.svg?style=flat-square&logo=npm
[github-url]: https://github.com/best-shot/best-shot/tree/master/packages/preset-asset
[github-badge]: https://img.shields.io/npm/l/@best-shot/preset-asset.svg?style=flat-square&colorB=blue&logo=github
[node-badge]: https://img.shields.io/node/v/@best-shot/preset-asset.svg?style=flat-square&colorB=green&logo=node.js
This preset offer the following features:
- export `yml` / `yaml` / `txt` / `json` to data or standalone file
- bundle `jpg` / `jpeg` / `png` / `gif` / `svg`
- bundle `woff` / `woff2` / `otf` / `eot` / `ttf`
- image minify in production mode
## Installation
```bash
npm install @best-shot/preset-asset --save-dev
```
## Usage
```mjs
// example: .best-shot/config.mjs
export default {
presets: ['asset']
};
```
## Tips
### Standalone data file output
For `yml` / `yaml` / `txt` / `json` format:
```js
import('./sample.json');
// { foo: 'bar' }
import('./sample.[hash].json');
// sample.xxxxxxxx.json
```
### The `mutable` resourceQuery for image
Generate mutable resources filename:
```js
import('./avatar/male.png?mutable');
// image/avatar/male.png
import('./header/header-bg.png');
// image/header-bg.min.xxxxxxxx.png
```
### Preprocess non-ascii character
```plain
天地人-abc.jpg -> 4273f2f7-abc.jpg
```
## Related
- [@best-shot/preset-style](../preset-style)
- [@best-shot/preset-web](../preset-web)
- [@best-shot/core](../core)
| 24.310811 | 145 | 0.676487 | eng_Latn | 0.11874 |
28d8e941a0e363d02b6d4fa57360e14d51925a2a | 2,212 | md | Markdown | README.md | kevinydhan/gatsby-with-query | adb74349217991b60b0289f3c468b27dff9b3557 | [
"MIT"
] | null | null | null | README.md | kevinydhan/gatsby-with-query | adb74349217991b60b0289f3c468b27dff9b3557 | [
"MIT"
] | 6 | 2020-12-17T19:05:55.000Z | 2020-12-18T03:29:59.000Z | README.md | kevinydhan/gatsby-with-query | adb74349217991b60b0289f3c468b27dff9b3557 | [
"MIT"
] | null | null | null | # gatsby-with-query
`withQuery` is a higher-order component that is used to decouple [Gatsby][gh-gatsby]'s static queries.
<br>
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
<br>
## Installation
```sh
npm i gatsby-with-query
```
```sh
yarn add gatsby-with-query
```
<br>
## Usage
Create the following:
- your GraphQL query using Gatsby's `graphql()`
- a React hook which invokes Gatsby's `useStaticQuery()` and returns either all or a subset of your component's `props`
```tsx
// App.query.tsx
import { graphql, useStaticQuery } from 'gatsby'
const query = graphql`
query GetSiteMetadata {
site {
siteMetadata {
title
description
}
}
}
`
const useGetSiteMetadataQuery = () => {
const queriedProps = useStaticQuery(query)
return queriedProps.site.siteMetadata
}
export default useGetSiteMetadataQuery
```
<br>
Then, import the React hook and `withQuery()` into your component's file:
```tsx
// App.tsx
import React, { FunctionComponent } from 'react'
import withQuery from 'gatsby-with-query'
import useGetSiteMetadataQuery from './App.query'
interface AppProps {
title: string
description: string
}
export const App: FunctionComponent<AppProps> = ({ title, description }) => (
<main>
<h1>{title}</h1>
<p>{description}</p>
</main>
)
export default withQuery<AppProps>(App, useGetSiteMetadataQuery)
```
<br>
Alternatively, you can have your React component, GraphQL query, and React query hook in the same file:
```tsx
// App.tsx
import React, { FunctionComponent } from 'react'
import { graphql, useStaticQuery } from 'gatsby'
import withQuery from 'gatsby-with-query'
interface AppProps {
title: string
description: string
}
export const App: FunctionComponent<AppProps> = ({ title, description }) => (
<main>
<h1>{title}</h1>
<p>{description}</p>
</main>
)
const query = graphql`
query GetSiteMetadata {
site {
siteMetadata {
title
description
}
}
}
`
export default withQuery<AppProps>(App, () => {
const queriedProps = useStaticQuery(query)
return queriedProps.site.siteMetadata
})
```
[gh-gatsby]: https://github.com/gatsbyjs/gatsby
| 18.280992 | 119 | 0.685353 | eng_Latn | 0.613605 |
28d8edb637bc7bfbf489eb6250f083e1d37b8b7d | 35,631 | md | Markdown | README.md | ForLogic/doit | 1e637bd515d54e7a5c0d94263305167074cb129c | [
"MIT"
] | 13 | 2016-10-31T17:07:32.000Z | 2022-01-14T13:52:33.000Z | README.md | ForLogic/doit | 1e637bd515d54e7a5c0d94263305167074cb129c | [
"MIT"
] | null | null | null | README.md | ForLogic/doit | 1e637bd515d54e7a5c0d94263305167074cb129c | [
"MIT"
] | 3 | 2016-10-31T17:05:06.000Z | 2019-05-30T16:24:16.000Z | # doit
This tool runs a xml script to automate recurring tasks. Useful on backup scenarios.
## Index
1. [The Configuration File](#TheConfigurationFile)
1. [Example](#TheConfigurationFileExample)
2. [Encryption](#TheConfigurationFileEncryption)
2. [Settings](#Settings)
1. [LogFile](#SettingsLogFile)
2. [ConnectionStrings](#SettingsConnectionStrings)
3. [Exceptions](#SettingsExceptions)
3. [Execute Commands](#ExecuteCommands)
1. [Database](#ExecuteDatabase)
* [Backup](#ExecuteDatabaseBackup)
* BackupLog
2. [Zip](#ExecuteZip)
* [AddFile](#ExecuteZipAddFile)
* [AddBlob](#ExecuteZipAddBlob)
* AddFolder _(To-Do)_
* [Extract](#ExecuteZipExtract)
3. [Process](#ExecuteProcess)
* [Start](#ExecuteProcessStart)
* [Kill](#ExecuteProcessKill)
* [List](#ExecuteProcessList)
4. [Sql](#ExecuteSql)
* [Execute](#ExecuteSqlExecute)
* [Select](#ExecuteSqlSelect)
* [Scalar](#ExecuteSqlScalar)
5. [Mail](#ExecuteMail)
6. [ForEach](#ExecuteForEach)
7. [Log](#ExecuteLog)
8. [Sleep](#ExecuteSleep)
9. [Exception](#ExecuteException)
10. [Try](#ExecuteTry)
11. [Csv](#ExecuteCsv)
* [WriteLine](#ExecuteCsvWriteLine)
* [WriteData](#ExecuteCsvWriteData)
* [Load](#ExecuteCsvLoad)
12. [DataTable](#ExecuteDataTable)
* [Count](#ExecuteDataTableCount)
* [Sum](#ExecuteDataTableSum)
* [Avg](#ExecuteDataTableAvg)
* [Min](#ExecuteDataTableMin)
* [Max](#ExecuteDataTableMax)
* [SetRowValue](#ExecuteDataTableSetRowValue)
* [GetDataRow](#ExecuteDataTableGetDataRow)
* [Diff](#ExecuteDataTableDiff)
* [Join](#ExecuteDataTableJoin)
* [Intersect](#ExecuteDataTableIntersect)
* [RemoveRows](#ExecuteDataTableRemoveRows)
* [InsertRow](#ExecuteDataTableInsertRow)
* [Filter](#ExecuteDataTableFilter)
13. [SetValue](#ExecuteSetValue)
* [Calc](#ExecuteSetValueCalc)
* [CalcDate](#ExecuteSetValueCalcDate)
* [String](#ExecuteSetValueString)
* [Date](#ExecuteSetValueDate)
14. [LocalDisk](#ExecuteLocalDisk)
* [ListFiles](#ExecuteLocalDiskListFiles)
* [MoveFile](#ExecuteLocalDiskMoveFile)
* [MoveFolder](#ExecuteLocalDiskMoveFolder)
* [CopyFile](#ExecuteLocalDiskCopyFile)
* [DeleteFile](#ExecuteLocalDiskDeleteFile)
* [DeleteFolder](#ExecuteLocalDiskDeleteFolder)
15. [Storage](#ExecuteStorage)
* [Upload](#ExecuteStorageUpload)
* [Download](#ExecuteStorageDownload)
* [DeleteBlob](#ExecuteStorageDeleteBlob)
* [ListBlobs](#ExecuteStorageListBlobs)
* [ListContainers](#ExecuteStorageListContainers)
* [Copy](#ExecuteStorageCopy)
* [SetMetadata] _(Waiting documentation)_
* [Snapshot] _(Waiting documentation)_
16. [Condition](#ExecuteCondition)
17. [Ftp](#ExecuteFtp)
* [List](#ExecuteFtpList)
* [Download](#ExecuteFtpDownload)
* [Upload](#ExecuteFtpUpload)
* [CreateFolder](#ExecuteFtpCreateFolder)
* [DeleteFolder](#ExecuteFtpDeleteFolder)
* [DeleteFile](#ExecuteFtpDeleteFile)
18. Services _(To-Do)_
* Start
* Stop
## <a id="TheConfigurationFile">The Configuration File</a>
The default configuration file is called "DoIt.config.xml". Its main sections are "Settings" and "Execute", which contain the settings used during execution and the steps to run, respectively.
If you want to use another configuration file, pass its path on the command line:
```shell
C:\DoIt\DoIt.exe /config="C:\DoIt\AnotherConfigFile.config.xml"
```
### <a id="TheConfigurationFileExample">Example</a>
Here is an example configuration file.
See the full documentation for more commands and options.
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<!-- Here we load some data to use when executing -->
<Settings>
<ConnectionStrings>
<Database id="1">Data Source=localhost\sql2016express; Initial Catalog=database; Integrated Security=false; User Id=sa; Password=123;</Database>
<Storage id="1">DefaultEndpointsProtocol=https;AccountName=my_account;AccountKey=871oQKMifslemflIwq54e0fd8sJskdmw98348dMF0suJ0WODK73lMlwiehf34u0mm5ez6MdiewklFH3/w2/IEK==</Storage>
<MailServer id="1">host=smtp.domain.com; from=user@domain.com; port=587; ssl=true; user=user@domain.com; pass=123;</MailServer>
</ConnectionStrings>
<Exceptions mailServer="1" attachLogFile="true">
<Mail>admin1@company.com</Mail>
<Mail>admin2@company.com</Mail>
</Exceptions>
<LogFile toVar="logFile">%programdata%\DoIt\DoIt_{now:yyyy-MM-dd}.log</LogFile>
</Settings>
<!-- Here we put the script steps -->
<Execute>
<Log>We can use variables!</Log>
<SetValue>
<String to="my_var1" value="{now}" />
</SetValue>
<Log>Today is: {my_var1:yyyy-MM-dd}.</Log>
<Log>Load the files from a directory to a variable</Log>
<LocalDisk>
<ListFiles to="files_list" path="C:\MyFolder" searchPattern="*.*" allDirectories="false" fetchAttributes="false" where="" sort="" regex="" />
</LocalDisk>
<Log>We can also use loops!</Log>
<ForEach itemFrom="files_list" where="" sort="">
<Log>File: {files_list.filename}</Log>
</ForEach>
<Log>Here is how to execute a SQL command to the database with id=1</Log>
<Sql database="1">
<Execute timeout="30">insert into backups (start_date) values (getdate())</Execute>
</Sql>
<Log>Create a database backup and save the filename to the variable bak1</Log>
<Database id="1">
<Backup toFile="%programdata%\DoIt\MyDatabase_{now:yyyy-MM-dd_HH-mm}.bak" type="bak" toVar="bak1" />
</Database>
<Log>Upload the bak1 file to the storage with id=1</Log>
<Storage id="1">
<Upload file="{bak1}" toBlob="backups/Backup_{now:yyyy-MM-dd}/{bak1:filename}" deleteSource="true" />
</Storage>
</Execute>
</Configuration>
```
### <a id="TheConfigurationFileEncryption">Encryption</a>
You can encrypt/decrypt the connection strings from the configuration file using the following commands:
```shell
C:\DoIt\DoIt.exe /config="C:\DoIt\AnotherConfigFile.config.xml" /encryptionKey="test123"
Configuration file was encrypted.
C:\DoIt\DoIt.exe /config="C:\DoIt\AnotherConfigFile.config.xml" /decryptionKey="test123"
Configuration file was decrypted.
```
To use an encrypted configuration file without decrypting it, you should use the following command:
```shell
C:\DoIt\DoIt.exe /config="C:\DoIt\AnotherConfigFile.config.xml" /cryptKey="test123"
```
## <a id="Settings">Settings</a>
### <a id="SettingsLogFile">LogFile</a>
Use this tag to specify the log path and variable.
*Tag Location: Configuration > Settings > LogFile*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Settings>
<LogFile toVar="logFile">%programdata%\DoIt\DoIt_{now:yyyy-MM-dd}.log</LogFile>
</Settings>
</Configuration>
```
### <a id="SettingsConnectionStrings">ConnectionStrings</a>
This tag sets the database and Azure storage account connection strings.
*Tag Location: Configuration > Settings > ConnectionStrings*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Settings>
<ConnectionStrings>
<Database id="1">Data Source=localhost\sql2016express; Initial Catalog=database; Integrated Security=false; User Id=sa; Password=123;</Database>
<Storage id="1">DefaultEndpointsProtocol=https;AccountName=my_account;AccountKey=871oQKMifslemflIwq54e0fd8sJskdmw98348dMF0suJ0WODK73lMlwiehf34u0mm5ez6MdiewklFH3/w2/IEK==</Storage>
<MailServer id="1">host=smtp.domain.com; from=user@domain.com; port=587; ssl=true; user=user@domain.com; pass=123;</MailServer>
</ConnectionStrings>
</Settings>
</Configuration>
```
### <a id="SettingsExceptions">Exceptions</a>
Use this tag to e-mail users when an exception occurs.
*Tag Location: Configuration > Settings > Exceptions*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Settings>
<Exceptions mailServer="1" attachLogFile="true">
<Mail>admin1@company.com</Mail>
<Mail>admin2@company.com</Mail>
</Exceptions>
</Settings>
</Configuration>
```
## <a id="ExecuteCommands">Execute Commands</a>
### <a id="ExecuteDatabase">Database</a>
#### <a id="ExecuteDatabaseBackup">Backup</a>
Back up SQL Server databases.
*Tag Location: Configuration > Execute > Database > Backup*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Database id="1">
<Backup toFile="%programdata%\DoIt\MyDatabase_{now:yyyy-MM-dd_HH-mm}.bak" type="bak" withOptions="with compression" timeout="1800" toVar="bak1" />
</Database>
</Execute>
</Configuration>
```
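A common nightly pipeline chains Backup with the Zip and Storage commands documented below. This is a sketch using only tags shown in this file; the paths and blob names are illustrative:

```xml
<Execute>
  <!-- Back up the database and remember the backup file path in bak1 -->
  <Database id="1">
    <Backup toFile="%programdata%\DoIt\Db_{now:yyyy-MM-dd}.bak" type="bak" toVar="bak1" />
  </Database>
  <!-- Compress the backup, deleting the original .bak -->
  <Zip path="%programdata%\DoIt\Db_{now:yyyy-MM-dd}.zip" mode="write">
    <AddFile name="{bak1}" deleteSource="true" />
  </Zip>
  <!-- Upload the archive and delete the local copy -->
  <Storage id="1">
    <Upload file="%programdata%\DoIt\Db_{now:yyyy-MM-dd}.zip" toBlob="backups/Db_{now:yyyy-MM-dd}.zip" deleteSource="true" />
  </Storage>
</Execute>
```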
### <a id="ExecuteZip">Zip</a>
#### <a id="ExecuteZipAddFile">AddFile</a>
Add a new file to the specified zip package.
*Tag Location: Configuration > Execute > Zip > AddFile*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Zip path="C:\MyFiles.zip" mode="write">
<AddFile name="C:\MyFile1.txt" deleteSource="true" zipFolder="" zipFilename="MainData.txt" />
<AddFile forEach="users_list" where="is_active=1" name="C:\Users\UserData{users_list.id}.csv" deleteSource="false" />
</Zip>
</Execute>
</Configuration>
```
#### <a id="ExecuteZipAddBlob">AddBlob</a>
Download a blob from a storage account and add it to the specified zip package.
*Tag Location: Configuration > Execute > Zip > AddBlob*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Zip path="C:\MyFiles.zip" mode="write">
<AddBlob fromStorage="1" name="my_container/myblob1.txt" zipFolder="" zipFilename="File.txt" />
<AddBlob forEach="blobs_list" where="blob_length <= 5*1024*1024" fromStorage="1" name="{blobs_list.blob_container}/{blobs_list.blob_name}" snapshotTime="" dateTime="{blobs_list.blob_last_modified}" size="{blobs_list.blob_length}" />
</Zip>
</Execute>
</Configuration>
```
#### <a id="ExecuteZipExtract">Extract</a>
Extract a zip package to the specified folder.
*Tag Location: Configuration > Execute > Zip > Extract*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Zip path="C:\MyFiles.zip" mode="read">
<Extract toFolder="C:\MyFolder" />
</Zip>
</Execute>
</Configuration>
```
### <a id="ExecuteProcess">Process</a>
#### <a id="ExecuteProcessStart">Start</a>
Starts an external application.
*Tag Location: Configuration > Execute > Process > Start*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Process>
<Start path="C:\MyApp.exe" args="" wait="true" time="" />
</Process>
</Execute>
</Configuration>
```
#### <a id="ExecuteProcessKill">Kill</a>
Kills an executing process.
*Tag Location: Configuration > Execute > Process > Kill*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Process>
<Kill id="" name="chrome" />
</Process>
</Execute>
</Configuration>
```
#### <a id="ExecuteProcessList">List</a>
List the executing processes. The returned datatable will contain the following columns:
* id (int)
* session_id (int)
* name (string)
* machine (string)
* start (DateTime)
* filename (string)
*Tag Location: Configuration > Execute > Process > List*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Process>
<List name="" machine="" regex="" to="process_list" />
</Process>
</Execute>
</Configuration>
```
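List pairs naturally with Kill — for example, stopping every process matched by a name filter, one by one via its id. A sketch using only tags documented in this file; the process name is illustrative:

```xml
<Process>
  <!-- Collect the matching processes into a datatable -->
  <List name="notepad" machine="" regex="" to="process_list" />
</Process>
<ForEach itemFrom="process_list">
  <Process>
    <!-- Kill each process found by its id -->
    <Kill id="{process_list.id}" name="" />
  </Process>
</ForEach>
```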
### <a id="ExecuteSql">Sql</a>
#### <a id="ExecuteSqlExecute">Execute</a>
Execute SQL commands.
*Tag Location: Configuration > Execute > Sql > Execute*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Sql database="1">
<Execute timeout="30">insert into dbo.backups (start_date) values (getdate())</Execute>
<Execute timeout="30">
<Cmd>
update dbo.table set value1=@Value1, value2=@Value2 where id=@Id
</Cmd>
<Params>
<Value1 type="string">{str1}</Value1>
<Value2 type="string">{str2}</Value2>
<Id type="int">{id}</Id>
</Params>
</Execute>
</Sql>
</Execute>
</Configuration>
```
#### <a id="ExecuteSqlSelect">Select</a>
Execute SQL queries and set the results to the specified variable.
*Tag Location: Configuration > Execute > Sql > Select*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Sql database="1">
<Select to="users_table" timeout="30">
select us.id, us.name, us.email from dbo.users us where us.removed=0
</Select>
<Select to="users_table" timeout="30">
<Cmd>
select u.* from dbo.users u where u.id=@UserId
</Cmd>
<Params>
<UserId type="int">{user_id}</UserId>
</Params>
</Select>
</Sql>
</Execute>
</Configuration>
```
#### <a id="ExecuteSqlScalar">Scalar</a>
Execute the SQL command/query and set the result to the specified variable.
*Tag Location: Configuration > Execute > Sql > Scalar*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Sql database="1">
<Scalar to="user_id" timeout="30">
select us.id from users us where us.email='user@mycompany.com'
</Scalar>
<Scalar to="user_id" timeout="30">
<Cmd>
select us.id from users us where us.email=@Email
</Cmd>
<Params>
<Email type="string">{email}</Email>
</Params>
</Scalar>
</Sql>
</Execute>
</Configuration>
```
### <a id="ExecuteMail">Mail</a>
Send an e-mail.
*Tag Location: Configuration > Execute > Mail*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Mail server="1" to="other_user@domain.com" subject="Hello World">
<Body>
Here is my mail body.
</Body>
<Attachments>
<File path="C:\MyFileToSend.txt" />
<File path="C:\My2ndAttachment.txt" />
<SqlQuery database="1" dataFormat="csv|json|xml" zip="true|false" attachmentName="" zipEntryName="">
select t.id, t.total, t.date from orders t where t.date>=cast(getdate() as date)
</SqlQuery>
</Attachments>
</Mail>
</Execute>
</Configuration>
```
### <a id="ExecuteForEach">ForEach</a>
Loop through the rows in the specified table.
*Tag Location: Configuration > Execute > ForEach*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<ForEach itemFrom="users_list" where="" sort="" parallel="1">
<Log>User Name: {users_list.name}.</Log>
</ForEach>
</Execute>
</Configuration>
```
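The `where`, `sort` and `parallel` attributes can be combined to filter and order the rows before iterating. A sketch, assuming the same filter-expression syntax used by the other `where` attributes in this document:

```xml
<!-- Iterate only over .bak files, ordered by filename, two rows at a time -->
<ForEach itemFrom="files_list" where="extension='.bak'" sort="filename" parallel="2">
  <Log>Backup file: {files_list.filename}</Log>
</ForEach>
```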
### <a id="ExecuteLog">Log</a>
Write a new line to the previously specified log file.
*Tag Location: Configuration > Execute > Log*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Log>Hello World!</Log>
</Execute>
</Configuration>
```
### <a id="ExecuteSleep">Sleep</a>
Block the current thread for the specified time.
*Tag Location: Configuration > Execute > Sleep*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Sleep time="30 seconds" />
</Execute>
</Configuration>
```
### <a id="ExecuteException">Exception</a>
Throw a new exception.
*Tag Location: Configuration > Execute > Exception*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Exception assembly="" type="System.Exception" message="Oh no, something is wrong!" />
</Execute>
</Configuration>
```
### <a id="ExecuteTry">Try</a>
Try to run a set of commands, retrying the specified number of times on failure.
*Tag Location: Configuration > Execute > Try*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Try retry="3" sleep="40 seconds">
<Catch>
<Exception type="System.Net.WebException" withMessage="(409) Conflict" />
<Exception type="Microsoft.WindowsAzure.Storage.StorageException" withMessage="(409) Conflict" />
</Catch>
<Execute>
<Log>The commands to run are here!</Log>
</Execute>
<Fail>
<Log>Command failed :(</Log>
</Fail>
</Try>
</Execute>
</Configuration>
```
### <a id="ExecuteCsv">Csv</a>
#### <a id="ExecuteCsvWriteLine">WriteLine</a>
Write a new line in the specified csv file.
*Tag Location: Configuration > Execute > Csv > WriteLine*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Csv path="C:\MyFile.csv" separator=";">
<WriteLine append="false">
<Column>id</Column>
<Column>name</Column>
<Column>email</Column>
</WriteLine>
</Csv>
<ForEach itemFrom="users_list">
<Csv path="C:\MyFile.csv" separator=";">
<WriteLine append="true">
<Column>{users_list.id}</Column>
<Column>{users_list.name}</Column>
<Column>{users_list.email}</Column>
</WriteLine>
</Csv>
</ForEach>
</Execute>
</Configuration>
```
#### <a id="ExecuteCsvWriteData">WriteData</a>
Write the datatable to the specified csv file.
*Tag Location: Configuration > Execute > Csv > WriteData*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Csv path="C:\MyFile.csv" separator=";">
<WriteData data="users_list" where="" append="false">
<Column header="id">{users_list.id}</Column>
<Column header="name">{users_list.name}</Column>
<Column header="email">{users_list.email}</Column>
</WriteData>
</Csv>
</Execute>
</Configuration>
```
#### <a id="ExecuteCsvLoad">Load</a>
Load the specified csv file to a new datatable.
*Tag Location: Configuration > Execute > Csv > Load*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Csv path="C:\MyFile.csv" separator=";">
<Load to="users_list" where="" hasHeaders="true" />
</Csv>
</Execute>
</Configuration>
```
### <a id="ExecuteDataTable">DataTable</a>
#### <a id="ExecuteDataTableCount">Count</a>
Count the rows found in the specified table.
*Tag Location: Configuration > Execute > DataTable > Count*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Count data="users_list" where="" to="users_count" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableSum">Sum</a>
Sum the values of a column from the rows in the specified table.
*Tag Location: Configuration > Execute > DataTable > Sum*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Sum data="products_list" column="total" where="" to="total_products" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableAvg">Avg</a>
Calculate the average value of a column from the rows in the specified table.
*Tag Location: Configuration > Execute > DataTable > Avg*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Avg data="products_list" column="price" where="" to="avg_price" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableMin">Min</a>
Get the min value from the rows in the specified table.
*Tag Location: Configuration > Execute > DataTable > Min*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Min data="products_list" column="price" where="" to="min_price" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableMax">Max</a>
Get the max value from the rows in the specified table.
*Tag Location: Configuration > Execute > DataTable > Max*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Max data="products_list" column="price" where="" to="max_price" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableSetRowValue">SetRowValue</a>
Set values on the specified columns/rows.
*Tag Location: Configuration > Execute > DataTable > SetRowValue*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<SetRowValue data="users_list" where="">
<Column name="is_active" type="int">1</Column>
</SetRowValue>
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableGetDataRow">GetDataRow</a>
Find the rows that match the where condition and set one of them to the specified variable.
*Tag Location: Configuration > Execute > DataTable > GetDataRow*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<GetDataRow fromData="users_list" to="user_row" where="id={orders.id_user}" index="0" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableDiff">Diff</a>
Create a new datatable with the rows that exist in one datatable but not in another.
*Tag Location: Configuration > Execute > DataTable > Diff*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Diff inData="blobs_list1" notInData="blobs_list2" columns="blob_container, blob_name, blob_content_md5" to="new_blobs_list" />
</DataTable>
</Execute>
</Configuration>
```
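Diff is the building block for incremental synchronization — for instance, copying only the blobs that are missing from a second storage account. A sketch combining commands documented in this file; the second connection string (id="2") and the container names are assumptions:

```xml
<!-- List blobs on both storage accounts -->
<Storage id="1">
  <ListBlobs to="blobs_src" container="backups" prefix="" fetchAttributes="false" />
</Storage>
<Storage id="2">
  <ListBlobs to="blobs_dst" container="backups" prefix="" fetchAttributes="false" />
</Storage>
<!-- Keep only the blobs present on the source but missing on the destination -->
<DataTable>
  <Diff inData="blobs_src" notInData="blobs_dst" columns="blob_container, blob_name" to="missing_blobs" />
</DataTable>
<!-- Copy each missing blob to the second account -->
<ForEach itemFrom="missing_blobs">
  <Storage id="1">
    <Copy blob="{missing_blobs.blob_container}/{missing_blobs.blob_name}" toStorage="2" toBlob="{missing_blobs.blob_container}/{missing_blobs.blob_name}" />
  </Storage>
</ForEach>
```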
#### <a id="ExecuteDataTableJoin">Join</a>
Create a new datatable by combining the rows from other datatables.
*Tag Location: Configuration > Execute > DataTable > Join*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Join data="blobs_list1, blobs_list2" to="all_blobs_list" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableIntersect">Intersect</a>
Create a new datatable containing only the rows that exist in all the specified datatables.
*Tag Location: Configuration > Execute > DataTable > Intersect*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Intersect data="blobs_list1, blobs_list2" columns="blob_name" rowsFrom="0" to="new_blobs_list" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableRemoveRows">RemoveRows</a>
Remove the rows from the datatable that match the where clause.
*Tag Location: Configuration > Execute > DataTable > RemoveRows*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<RemoveRows from="users_list" where="is_active=0" />
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableInsertRow">InsertRow</a>
Insert a new row in the specified datatable.
*Tag Location: Configuration > Execute > DataTable > InsertRow*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<InsertRow to="files_list">
<Column name="id" type="int">1</Column>
<Column name="name" type="string">Test User</Column>
<Column name="email" type="string">testuser@company.com</Column>
</InsertRow>
</DataTable>
</Execute>
</Configuration>
```
#### <a id="ExecuteDataTableFilter">Filter</a>
Filter rows in the specified datatable.
*Tag Location: Configuration > Execute > DataTable > Filter*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<DataTable>
<Filter data="users_list" where="is_active=1" to="active_users_list" />
</DataTable>
</Execute>
</Configuration>
```
### <a id="ExecuteSetValue">SetValue</a>
#### <a id="ExecuteSetValueCalc">Calc</a>
Execute a simple arithmetic operation on the specified values.
*Tag Location: Configuration > Execute > SetValue > Calc*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<SetValue>
<Calc operation="+|-|*|/" value1="2" value2="3" to="my_calc" />
</SetValue>
</Execute>
</Configuration>
```
#### <a id="ExecuteSetValueCalcDate">CalcDate</a>
Execute a date operation using the specified values.
*Tag Location: Configuration > Execute > SetValue > CalcDate*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<SetValue>
<CalcDate to="limit_date" value="{today}" add="-6 months" operation="-|+" />
</SetValue>
</Execute>
</Configuration>
```
#### <a id="ExecuteSetValueString">String</a>
Set the value to the specified string variable.
*Tag Location: Configuration > Execute > SetValue > String*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<SetValue>
<String value="User Name: {users_list.name}" to="user_name_text" />
<String value="My name is 'John' and today is {today:yyyy-MM-dd}" to="string_test" />
<String value="{string_test}" to="date_from_str" regex="(\d{4}\-\d{2}\-\d{2})" regexFlags="" regexGroup="" matchIndex="" />
<String value="{string_test}" to="name_from_str" regex="'([a-z]+)'" regexFlags="i" regexGroup="1" matchIndex="" />
</SetValue>
</Execute>
</Configuration>
```
#### <a id="ExecuteSetValueDate">Date</a>
Set the date/time value to the specified variable.
*Tag Location: Configuration > Execute > SetValue > Date*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<SetValue>
<Date value="{now}" to="start_date" />
</SetValue>
</Execute>
</Configuration>
```
### <a id="ExecuteLocalDisk">LocalDisk</a>
#### <a id="ExecuteLocalDiskListFiles">ListFiles</a>
Query files from a folder. The returned datatable contains the following columns:
* full_path (string)
* directory (string)
* filename (string)
* extension (string)
* creation_time (DateTime)*
* last_write_time (DateTime)*
* length (long)*
The columns creation_time, last_write_time and length will only be filled if the parameter "fetchAttributes" is set to "true".
*Tag Location: Configuration > Execute > LocalDisk > ListFiles*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<LocalDisk>
<ListFiles to="files_list" path="C:\MyFolder" searchPattern="*.*" allDirectories="false" fetchAttributes="false" where="" sort="" regex="" />
</LocalDisk>
</Execute>
</Configuration>
```
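ListFiles, CalcDate and ForEach together give a simple log-retention step. A sketch — the date comparison inside `where` is an assumption modeled on the filter expressions used elsewhere in this document:

```xml
<!-- Compute the cutoff date: 30 days ago -->
<SetValue>
  <CalcDate to="limit_date" value="{today}" add="-30 days" />
</SetValue>
<!-- List log files older than the cutoff (fetchAttributes is required for last_write_time) -->
<LocalDisk>
  <ListFiles to="old_files" path="%programdata%\DoIt" searchPattern="*.log" fetchAttributes="true" where="last_write_time < '{limit_date}'" />
</LocalDisk>
<!-- Delete each file found -->
<ForEach itemFrom="old_files">
  <LocalDisk>
    <DeleteFile path="{old_files.full_path}" />
  </LocalDisk>
</ForEach>
```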
#### <a id="ExecuteLocalDiskMoveFile">MoveFile</a>
Rename the specified file or move it to another folder.
*Tag Location: Configuration > Execute > LocalDisk > MoveFile*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<LocalDisk>
<MoveFile path="C:\MyFile1.txt" to="C:\Folder\MyMovedFile.txt" />
</LocalDisk>
</Execute>
</Configuration>
```
#### <a id="ExecuteLocalDiskMoveFolder">MoveFolder</a>
Rename the specified folder or move it to another parent folder.
*Tag Location: Configuration > Execute > LocalDisk > MoveFolder*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<LocalDisk>
<MoveFolder path="C:\MyFolder1" to="C:\Folder\MyMovedFolder" />
</LocalDisk>
</Execute>
</Configuration>
```
#### <a id="ExecuteLocalDiskCopyFile">CopyFile</a>
Copy the specified file to another file.
*Tag Location: Configuration > Execute > LocalDisk > CopyFile*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<LocalDisk>
<CopyFile path="C:\MyFile1.txt" to="C:\Folder\MyFileCopy.txt" overwrite="true" />
</LocalDisk>
</Execute>
</Configuration>
```
#### <a id="ExecuteLocalDiskDeleteFile">DeleteFile</a>
Delete the specified file.
*Tag Location: Configuration > Execute > LocalDisk > DeleteFile*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<LocalDisk>
<DeleteFile path="C:\MyFile1.txt" />
</LocalDisk>
</Execute>
</Configuration>
```
#### <a id="ExecuteLocalDiskDeleteFolder">DeleteFolder</a>
Delete the specified folder.
*Tag Location: Configuration > Execute > LocalDisk > DeleteFolder*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<LocalDisk>
<DeleteFolder path="C:\MyFolder" recursive="false" />
</LocalDisk>
</Execute>
</Configuration>
```
### <a id="ExecuteStorage">Storage</a>
#### <a id="ExecuteStorageUpload">Upload</a>
Upload a file to the specified Azure storage account.
*Tag Location: Configuration > Execute > Storage > Upload*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Storage id="1">
<Upload file="C:\MyFile_{now:yyyy-MM-dd}.csv" toBlob="backups/Backup_{now:yyyy-MM-dd}/MyFile.csv" deleteSource="true" async="true" />
</Storage>
</Execute>
</Configuration>
```
#### <a id="ExecuteStorageDownload">Download</a>
Download the specified blob.
*Tag Location: Configuration > Execute > Storage > Download*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Storage id="1">
<Download blob="my_container/myblob.txt" toFile="C:\MyFile.txt" snapshotTime="" />
</Storage>
</Execute>
</Configuration>
```
#### <a id="ExecuteStorageDeleteBlob">DeleteBlob</a>
Delete the specified blob.
*Tag Location: Configuration > Execute > Storage > DeleteBlob*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Storage id="1">
<DeleteBlob container="my_container" name="myblob.txt" />
</Storage>
</Execute>
</Configuration>
```
#### <a id="ExecuteStorageListBlobs">ListBlobs</a>
Query blobs and set the resulting list to a variable. The returned datatable contains the following columns:
* blob_name (string)
* blob_extension (string)
* blob_container (string)
* blob_uri (string)
* blob_length (long)
* blob_last_modified (DateTimeOffset)
* blob_last_modified_utc (DateTime)
* blob_content_type (string)
* blob_content_md5 (string)
* blob_is_snapshot (bool)
* blob_snapshot_time (DateTimeOffset)
* metadata_name1 (string)*
* metadata_name2 (string)*
The columns with the name starting with "metadata_" will only be filled with the blob metadata if the parameter "details" contains the option "metadata" or the parameter "fetchAttributes" is set to "true".
*Tag Location: Configuration > Execute > Storage > ListBlobs*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Storage id="1">
<ListBlobs to="blobs_list" container="container{now:yyyyMM}" prefix="" fetchAttributes="false" details="none|snapshots|metadata" where="" sort="" regex="" />
</Storage>
</Execute>
</Configuration>
```
#### <a id="ExecuteStorageListContainers">ListContainers</a>
Query containers and set the resulting list to a variable. The returned datatable contains the following columns:
* name (string)
* public_access (string)
* etag (string)
* last_modified (DateTime)
* uri (string)
*Tag Location: Configuration > Execute > Storage > ListContainers*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Storage id="1">
<ListContainers to="containers_list" prefix="un" where="" sort="" regex="" />
</Storage>
</Execute>
</Configuration>
```
#### <a id="ExecuteStorageCopy">Copy</a>
Copies a blob from one storage account to another.
*Tag Location: Configuration > Execute > Storage > Copy*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Storage id="1">
<Copy blob="my_container1/my_blob.txt" toStorage="2" toBlob="my_container2/my_blob.txt" />
</Storage>
</Execute>
</Configuration>
```
### <a id="ExecuteCondition">Condition</a>
Evaluate a condition and run only the "True" or "False" inner tag, according to the result. The available condition types are:
* has-disk-space
* file-exists
* folder-exists
* has-rows
* is-datetime
* if
*Tag Location: Configuration > Execute > Condition*
#### Sample1 - Condition Type: has-disk-space
```xml
<Condition type="has-disk-space" drive="C:\" min="10000">
<True>
<Log>True Result</Log>
</True>
</Condition>
```
#### Sample2 - Condition Type: file-exists
```xml
<Condition type="file-exists" path="C:\MyFile.txt">
<True>
<Log>True Result</Log>
</True>
</Condition>
```
#### Sample3 - Condition Type: folder-exists
```xml
<Condition type="folder-exists" path="C:\MyFolder">
<True>
<Log>True Result</Log>
</True>
</Condition>
```
#### Sample4 - Condition Type: has-rows
```xml
<Condition type="has-rows" data="customers_list">
<True>
<Log>True Result</Log>
</True>
</Condition>
```
#### Sample5 - Condition Type: is-datetime
```xml
<Condition type="is-datetime" days="all|mon|wed|fri|1|15" time="08-12">
<True>
<Log>True Result</Log>
</True>
</Condition>
<Condition type="is-datetime" days="1st sunday, 2nd mon, 2018-09-25, last friday">
<True>
<Log>True Result</Log>
</True>
<False>
<Log>False Result</Log>
</False>
</Condition>
```
#### Sample6 - Condition Type: if
```xml
<Condition type="if" value1="{files_count}" value2="0" comparison="greater" valueType="numeric">
<True>
<Log>True Result</Log>
</True>
<False>
<Log>False Result</Log>
</False>
</Condition>
```
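Conditions are typically paired with a command that produces the data they test — for example, guarding a loop with has-rows so the body only runs when a query returned data. A sketch using tags documented above; the dbo.users schema is illustrative:

```xml
<!-- Query the rows to act on -->
<Sql database="1">
  <Select to="expired_users" timeout="30">
    select us.id, us.email from dbo.users us where us.expires < getdate()
  </Select>
</Sql>
<!-- Only loop if the query returned at least one row -->
<Condition type="has-rows" data="expired_users">
  <True>
    <ForEach itemFrom="expired_users">
      <Log>Expired user: {expired_users.email}</Log>
    </ForEach>
  </True>
</Condition>
```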
### <a id="ExecuteFtp">Ftp</a>
#### <a id="ExecuteFtpList">List</a>
List all files of a given folder using the connection data provided. The returned datatable contains the following columns:
* name (string)
* type (string) = "folder" | "file"
* length (long)
* datetime (DateTime)
*Tag Location: Configuration > Execute > Ftp > List*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Ftp host="ftps://mydomain.com" port="21" user="user_name" password="password123">
<List path="wwwroot" to="files_list" />
</Ftp>
</Execute>
</Configuration>
```
#### <a id="ExecuteFtpDownload">Download</a>
Download the specified file using the connection data provided.
*Tag Location: Configuration > Execute > Ftp > Download*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Ftp host="ftps://mydomain.com" port="21" user="user_name" password="password123">
<Download path="wwwroot/myfile.zip" toFile="%programdata%\DoIt\myfile.zip" />
</Ftp>
</Execute>
</Configuration>
```
#### <a id="ExecuteFtpUpload">Upload</a>
Upload the specified file using the connection data provided.
*Tag Location: Configuration > Execute > Ftp > Upload*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Ftp host="ftps://mydomain.com" port="21" user="user_name" password="password123">
<Upload file="%programdata%\DoIt\myfile.zip" toPath="wwwroot/myfile.zip" />
</Ftp>
</Execute>
</Configuration>
```
#### <a id="ExecuteFtpCreateFolder">CreateFolder</a>
Create the specified folders using the connection data provided.
*Tag Location: Configuration > Execute > Ftp > CreateFolder*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Ftp host="ftps://mydomain.com" port="21" user="user_name" password="password123">
<CreateFolder path="wwwroot/myfolder/level2/level3" />
</Ftp>
</Execute>
</Configuration>
```
#### <a id="ExecuteFtpDeleteFolder">DeleteFolder</a>
Delete the specified folder using the connection data provided.
*Tag Location: Configuration > Execute > Ftp > DeleteFolder*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Ftp host="ftps://mydomain.com" port="21" user="user_name" password="password123">
<DeleteFolder path="wwwroot/myfolder" />
</Ftp>
</Execute>
</Configuration>
```
#### <a id="ExecuteFtpDeleteFile">DeleteFile</a>
Delete the specified file using the connection data provided.
*Tag Location: Configuration > Execute > Ftp > DeleteFile*
```xml
<?xml version="1.0" encoding="utf-16" ?>
<Configuration>
<Execute>
<Ftp host="ftps://mydomain.com" port="21" user="user_name" password="password123">
<DeleteFile path="wwwroot/myfile.zip" />
</Ftp>
</Execute>
</Configuration>
```
---
title: DMX Templates | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: analysis-services
ms.topic: conceptual
ms.assetid: 2a577e52-821d-4bd3-ba35-075a6be285c9
author: minewiskan
ms.author: owend
ms.openlocfilehash: 79c8615933baf4fa3d80974daae91b4d1c7fa101
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87582199"
---
# <a name="dmx-templates"></a>DMX Templates
  Data mining templates help you quickly build sophisticated queries. Although the general syntax for DMX queries is well documented, the templates make it easier to create queries by pointing and clicking to choose arguments and data sources.

## <a name="using-the-templates"></a>Using the Templates

1.  In the Data Mining Client for Excel, click **Query**.

2.  On the start page of the wizard, click **Next**.

3.  On the **Select Template** page, click **Advanced**.

     **Tip:** If you intend to build a prediction query against a model, you can select the model first and then click **Advanced** to pre-populate the template with the model name.

4.  In the **Advanced Data Mining Query Editor**, click **DMX Templates**, and select a template.

5.  Press ENTER to load the template into the DMX query pane.

6.  Continue clicking the links in the template and, when the dialog box appears, select an appropriate output, model, or parameter.

     For prediction queries, choose the input data set first and then map the columns.

7.  Click **Edit Query** to switch to the text editor view and change the query manually.

     Be aware that if you switch views while working in the query editor, any information in the previous view is cleared. Before changing views, save your work by copying and pasting the DMX statements into a separate file.

8.  Click **Finish**. In the **Choose Target** dialog box, specify where you want the results to be saved. [!INCLUDE[clickOK](../includes/clickok-md.md)]

> [!NOTE]
> If a statement executes successfully, the DMX statement you sent to the server is also logged in the **Trace** window. For more information about using the trace feature, see [Trace (Data Mining Client for Excel)](trace-data-mining-client-for-excel.md).

 For more information about using the Advanced Data Mining Query Editor, see [Query (SQL Server Data Mining Add-ins)](query-sql-server-data-mining-add-ins.md) and [Advanced Data Mining Query Editor](advanced-data-mining-query-editor.md).

## <a name="list-of-dmx-templates"></a>List of DMX Templates
 The following DMX templates are included in the Data Mining Client for Excel.

 **Prediction**

 Use these templates to create advanced prediction queries, including queries not supported by the wizards in the add-ins, such as queries that use nested tables or external data sources.

- Filtered predictions
- Filtered nested predictions
- Nested predictions
- Singleton predictions
- Standard predictions
- Time series predictions
- TOP prediction query
- TOP prediction query on a nested table
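
To give a sense of what these templates expand into, here is a hypothetical singleton prediction statement of the kind the "Singleton predictions" template produces. The model name, column names, and input values below are illustrative placeholders, not part of the add-in:

```sql
SELECT
  [TM Decision Tree].[Bike Buyer],
  PredictProbability([TM Decision Tree].[Bike Buyer]) AS [Probability]
FROM
  [TM Decision Tree]
NATURAL PREDICTION JOIN
(SELECT 35 AS [Age], 'S' AS [Marital Status]) AS t
```

In the template, each bracketed element is a clickable link that opens a dialog box where you pick the actual model, columns, or values.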
 **Creation**

 Use these templates to create custom models or data structures. You are not limited to the models supported by the wizards; you can use any data mining algorithm supported by the instance of [!INCLUDE[ssASnoversion](../includes/ssasnoversion-md.md)] that you are connected to, including plug-in algorithms.

- Mining model
- Mining structure
- Mining structure with holdout
- Temporary model
- Temporary structure

 **Model Properties**

 Use these templates to create queries that get metadata about the model and the training set. You can also retrieve details of the model content or get a statistical profile of the training data.

- Mining model content
- Column minimum and maximum values
- Mining structure test/training cases
- Column discrete values

 **Management**

 Use these templates to perform any management task supported in DMX, including importing and exporting models, deleting models, and clearing models and data structures.

- Clear mining model
- Clear structure and models
- Clear mining structure
- Delete mining model
- Delete mining structure
- Rename mining model
- Rename mining structure
- Train mining model
- Train nested mining structure
- Train mining structure

### <a name="requirements"></a>Requirements
 Depending on the template used, you might need administrative permissions to access the [!INCLUDE[ssASnoversion](../includes/ssasnoversion-md.md)] server and run the query.

## <a name="see-also"></a>See Also
 [Creating a Data Mining Model](creating-a-data-mining-model.md)
| 42.183206 | 336 | 0.740318 | por_Latn | 0.999559 |
28d950f917900be247865bbfc8d38483989567b0 | 1,278 | md | Markdown | README.md | pirocorp/Wintellect.PowerCollections | 9bf6bbf27b1667919d241cc8f37e9ebe9ab8c710 | [
"Net-SNMP",
"Xnet"
] | 27 | 2017-10-12T09:48:57.000Z | 2022-03-05T04:17:35.000Z | README.md | timdetering/PowerCollections | 9bf6bbf27b1667919d241cc8f37e9ebe9ab8c710 | [
"Net-SNMP",
"Xnet"
] | null | null | null | README.md | timdetering/PowerCollections | 9bf6bbf27b1667919d241cc8f37e9ebe9ab8c710 | [
"Net-SNMP",
"Xnet"
] | 12 | 2018-02-26T09:02:03.000Z | 2021-12-30T09:26:20.000Z | # Wintellect's Power Collections for .NET #
Imported source code from [Wintellect's Power Collections for .NET - CodePlex](http://powercollections.codeplex.com/ "Wintellect's Power Collections for .NET - CodePlex")
## Project Description ##
Welcome to *Power Collections*, a community project to develop the best public-license, type-safe collection classes for .NET. Power Collections makes heavy use of .NET generics. The goal of the project is to provide generic collection classes that are not available in the .NET Framework. Some of the collections included are the `Deque`, `MultiDictionary`, `Bag`, `OrderedBag`, `OrderedDictionary`, `Set`, `OrderedSet`, and `OrderedMultiDictionary`.
This library was originally produced by *Wintellect* and is offered AS IS. It has been available on the Wintellect site for some time, but we are placing it on CodePlex to encourage its growth and use.

*Power Collections* is free for all to use within the bounds of the standard Eclipse End User License Agreement. If you would like to contribute, please feel free to contact one of the project administrators.
## Power Collections Documentation ##
Please see the Releases page for updated documentation <http://www.codeplex.com/PowerCollections/Release/ProjectReleases.aspx> | 116.181818 | 450 | 0.79421 | eng_Latn | 0.99118 |
28d9dff904def20d43992263b28f7a1e0dc9af3d | 315 | md | Markdown | examples/clock/README.md | minedeljkovic/tyrian | 70f9de443be66d037f8708347ad305a21049082d | [
"MIT"
] | null | null | null | examples/clock/README.md | minedeljkovic/tyrian | 70f9de443be66d037f8708347ad305a21049082d | [
"MIT"
] | null | null | null | examples/clock/README.md | minedeljkovic/tyrian | 70f9de443be66d037f8708347ad305a21049082d | [
"MIT"
] | null | null | null | # Tyrian clock example
This is a minimal working project setup to run the clock example.
To compile and run the program you will need to have yarn (or npm) installed.
On first run:
```sh
yarn install
```
and from then on
```sh
yarn start
```
Then navigate to [http://localhost:1234/](http://localhost:1234/)
| 15.75 | 77 | 0.71746 | eng_Latn | 0.99858 |
28da4607798ea3f61c558dec79e00e5d545e820c | 2,519 | md | Markdown | includes/policy/reference/byrp/microsoft.logic.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/policy/reference/byrp/microsoft.logic.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/policy/reference/byrp/microsoft.logic.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: DCtheGeek
ms.service: azure-policy
ms.topic: include
ms.date: 11/20/2020
ms.author: dacoulte
ms.custom: generated
ms.openlocfilehash: 76c3e42bf743513a4d12a0fa6f259ecdc8e9321a
ms.sourcegitcommit: 9889a3983b88222c30275fd0cfe60807976fd65b
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 11/20/2020
ms.locfileid: "94985204"
---
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Deploy Diagnostic Settings for Logic Apps to Event Hub](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1dae6c7-13f3-48ea-a149-ff8442661f60) |Deploys the diagnostic settings for Logic Apps to stream to a regional Event Hub when any Logic App that is missing this diagnostic setting is created or updated. |DeployIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogicApps_DeployDiagnosticLog_Deploy_EventHub.json) |
|[Deploy Diagnostic Settings for Logic Apps to Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb889a06c-ec72-4b03-910a-cb169ee18721) |Deploys the diagnostic settings for Logic Apps to stream to a regional Log Analytics workspace when any Logic App that is missing this diagnostic setting is created or updated. |DeployIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogicApps_DeployDiagnosticLog_Deploy_LogAnalytics.json) |
|[Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
| 125.95 | 726 | 0.82374 | ita_Latn | 0.686857 |
28dac1e87d4612f58d759d3a313bf4d8d2204166 | 676 | md | Markdown | CHANGELOG.md | ByMartrixx/vscode-tiny-mappings | e1797df316116838e358b62915a9f5a459758fa9 | [
"MIT"
] | null | null | null | CHANGELOG.md | ByMartrixx/vscode-tiny-mappings | e1797df316116838e358b62915a9f5a459758fa9 | [
"MIT"
] | null | null | null | CHANGELOG.md | ByMartrixx/vscode-tiny-mappings | e1797df316116838e358b62915a9f5a459758fa9 | [
"MIT"
] | null | null | null | # Changelog
## [0.1.0]
- Initial release
## [0.2.0]
- [Enigma mappings] Improved syntax regex
- Now they are way less strict, which should hopefully reduce bugs regarding unhighlighted identifiers
## [0.2.1]
- [Enigma mappings] Fix highlighting when using an array modifier on field descriptors
## [0.2.2]
- [Enigma mappings] Fix keyword highlighting when using spaces instead of tabs
- [Enigma mappings] Fix some reference descriptors being matched and highlighted indefinitely
## [0.2.3]
- [Tiny mappings] Improve tiny file highlighting
## [0.2.4]
- [Tiny mappings] Fix highlighting
## [0.2.5]
- [Tiny mappings] Fix `$` in field descriptor breaking highlighting
| 23.310345 | 104 | 0.730769 | eng_Latn | 0.993181 |
28db5fb4bce1a00f9f7bc2198aed1d4e0f18de7f | 1,936 | md | Markdown | site/content/post/powai-pedals-celebrates-earth-day-with-a-plantation-drive.md | exist2021/Powaiinfo-one-click-hugo-cms | 5201e56d3abac3ec023f73ec1f92344e0280e129 | [
"MIT"
] | 2 | 2021-08-29T04:29:18.000Z | 2022-02-02T19:56:33.000Z | site/content/post/powai-pedals-celebrates-earth-day-with-a-plantation-drive.md | exist2021/Powaiinfo-one-click-hugo-cms | 5201e56d3abac3ec023f73ec1f92344e0280e129 | [
"MIT"
] | null | null | null | site/content/post/powai-pedals-celebrates-earth-day-with-a-plantation-drive.md | exist2021/Powaiinfo-one-click-hugo-cms | 5201e56d3abac3ec023f73ec1f92344e0280e129 | [
"MIT"
] | null | null | null | +++
author = "POWAI INFO"
categories = ["COMMUNITY"]
date = 2017-04-22T11:51:44Z
description = ""
draft = false
image = "__GHOST_URL__/content/images/wordpress/2017/04/featuredImage-7.jpg"
slug = "powai-pedals-celebrates-earth-day-with-a-plantation-drive"
tags = ["COMMUNITY"]
title = "Powai Pedals celebrates Earth Day with a plantation drive"
+++
![](https://i2.wp.com/powai.info/wp-content/uploads/2017/04/IMG_2970.jpg?resize=850%2C638&ssl=1)

Powai Pedals, a very supportive cycling group in Powai, meets consistently for casual bike rides, cycling events, training rides, cyclothons, and weekend cycling outings.

The group celebrated Earth Day by combining a ride with a plantation drive on the Eastern Express Highway.

Many members of the group are smart commuters who cycle to work every day, and there are others who go on cycling expeditions across the country to create awareness and promote causes.

If you are passionate about health, fitness, and the environment, just get a bicycle and join them.

Find out more about them on their [Facebook page](https://m.facebook.com/groups/287882028025409/?view=group).

![](https://i2.wp.com/powai.info/wp-content/uploads/2017/04/IMG_2967.jpg?resize=850%2C479&ssl=1)

![](https://i1.wp.com/powai.info/wp-content/uploads/2017/04/IMG_2968.jpg?resize=809%2C1080&ssl=1)
28dbc033de6f4431c3bb97580a4d37be8da47abf | 891 | md | Markdown | _posts/cherry/2021-10-13-default-branch-git.md | linuxhubit/linuxpeople_feed | 54f6d13d58168ad9431bd52cf47a8afaec1ca530 | [
"MIT"
] | 2 | 2021-07-05T18:51:48.000Z | 2021-08-16T10:55:00.000Z | _posts/cherry/2021-10-13-default-branch-git.md | linuxhubit/linuxpeople_feed | 54f6d13d58168ad9431bd52cf47a8afaec1ca530 | [
"MIT"
] | null | null | null | _posts/cherry/2021-10-13-default-branch-git.md | linuxhubit/linuxpeople_feed | 54f6d13d58168ad9431bd52cf47a8afaec1ca530 | [
"MIT"
] | 1 | 2021-07-17T10:02:13.000Z | 2021-07-17T10:02:13.000Z | ---
title: "Changing git's default branch"
description: "post description"
date: 2021-10-13 13:00
layout: post
author: Davide Galati (in arte PsykeDady)
author_github: PsykeDady
tag: trick
---
Returning to the topic of politically correct naming, another practice that has become common is to name your primary branch `main` rather than `master`.

There are two ways to create a branch named `main` from your terminal.

The first is to force the specific branch name on every single repository creation:
```bash
git init --initial-branch=main
```
Ora, questo è abbastanza scomodo a lunga andare, poiché non sempre ci si ricorda di mettere questa opzione nella quotidianità.
Quindi si può anche pensare di scrivere invece:
```bash
git config --global init.defaultBranch main
```
This stores your chosen initial branch name as git's default configuration.
| 33 | 145 | 0.781145 | ita_Latn | 0.999584 |
28dbd2e36890c1a374bb5f81bdf6dd14a64cbe56 | 3,103 | md | Markdown | vendor/listfixer/yii2-remember-me/README.md | AIASolution/JoySaleWeb | 9cb496e793d7e88c43e0ec2798e5a7fe301e59ae | [
"BSD-3-Clause"
] | 3 | 2017-01-31T08:17:46.000Z | 2017-03-11T01:25:42.000Z | vendor/listfixer/yii2-remember-me/README.md | AIASolution/JoySaleWeb | 9cb496e793d7e88c43e0ec2798e5a7fe301e59ae | [
"BSD-3-Clause"
] | 4 | 2017-02-01T21:19:58.000Z | 2017-03-11T03:56:21.000Z | vendor/listfixer/yii2-remember-me/README.md | AIASolution/JoySaleWeb | 9cb496e793d7e88c43e0ec2798e5a7fe301e59ae | [
"BSD-3-Clause"
] | 4 | 2017-02-03T12:13:26.000Z | 2018-03-05T04:34:05.000Z | # yii2-remember-me
This extension replaces the standard "Remember Me" identity cookie functionality of Yii2 with something similar to what is described here: http://jaspan.com/improved_persistent_login_cookie_best_practice
When a user requests "Remember Me" during login, a new identity cookie is created for that user for that browser/computer. The cookie contains three things which are also stored in a database table: (1) a Cookie ID, which is the record number in the identity cookie database table, (2) A Cookie Key, which is the "password" for that particular cookie, and (3) A User Key, which is the "password" for the associated user. When I say "password", it is a random string, not an actual password. The database stores some other information, including the User ID number.
Each time a user restarts their browser and is authenticated using this system, all three items are checked against the database. If the contents of the cookie match a record in the database, the user gains access to the system and a new User Key is generated. The new User Key is stored in the database and in the identity cookie.
If a particular user uses three different computers, then there will be three different records in the database, one for each cookie. When each of these cookies is used to authenticate a user, the User Key for that particular cookie is regenerated, leaving the other identity cookie User Keys unchanged. This allows a particular user to have "Remember Me" functionality on multiple computers, yet still have their User Key change with each use.
If someone copies or steals an identity cookie, whichever cookie is used first (the original or the copy) will still work, since there is no way to determine which is the original and which is the copy. The User Key will match and a new User Key will be generated. Once the other cookie is used, the User Key will have already changed. The Cookie ID and Cookie Key will match, but the User Key will not, thus indicating that more than one identity cookies exists with this Cookie ID and Cookie Key. The database record is then deleted, thus disabling all cookies with this Cookie ID and Cookie Key.
To create the database table, use this command from your Yii2 application base directory:
php ./yii migrate --migrationPath=@vendor/listfixer/yii2-remember-me/migrations
The database migration assumes that you have a table called "user" with an integer primary key called "id".
To enable this extension, edit your configuration file to include this component information:
'components' => [
'user' => [
'class' => 'listfixer\remember\RememberMe',
'identityClass' => /* you should already have something here */,
'enableAutoLogin' => true,
]
]
When a user changes their password, you can configure your system to disable all existing identity cookies for that user by invoking this method:
\listfixer\remember\models\UserIdentityCookie::deleteUserCookies( $this->id );
If you are using the Yii2 Advanced template, then this should be added to setPassword() in common/models/User.php.
| 91.264706 | 602 | 0.780213 | eng_Latn | 0.999356 |
28dc38bda286a94fd41b7a5b2217e50ace87bf01 | 10,274 | md | Markdown | README.md | LynRodWS/cubix | 4634b0920bd42752fcc51cf8964bb80ff726d10c | [
"BSD-3-Clause"
] | 68 | 2020-08-21T01:36:35.000Z | 2022-03-22T21:06:31.000Z | README.md | LynRodWS/cubix | 4634b0920bd42752fcc51cf8964bb80ff726d10c | [
"BSD-3-Clause"
] | 2 | 2020-10-07T00:58:07.000Z | 2020-10-10T11:01:40.000Z | README.md | LynRodWS/cubix | 4634b0920bd42752fcc51cf8964bb80ff726d10c | [
"BSD-3-Clause"
] | 5 | 2018-12-07T04:57:35.000Z | 2020-05-21T12:09:32.000Z | # What is Cubix?
Cubix is a framework for language-parametric program transformation,
i.e.: defining a single source-to-source program transformation tool that
can be run on multiple languages. In Cubix, you can write
transformations with a type signature like "This transformation works
for any language that has assignments, loops, and a name-binding
analysis," and instantly get separate tools for C, Java, etc. The goal is to radically reduce the cost of building sophisticated
whole-program refactoring tools by allowing each tool to be built for
a much larger market.
Cubix is based on the idea of *incremental
parametric syntax*, a technique for defining families of
representations of programming languages which share common
components, and for defining them as a small modification to a
pre-existing syntax definition. The name "Cubix" comes from the 2000s
television show "Cubix: Robots for Everyone;" in that show, "Cubix" is
a robot composed of modular pieces that can be reassembled for many purposes.
It currently supports C, Java, JavaScript, Lua, and Python.
The Cubix system itself, and the general incremental parametric syntax
approach, is described in the OOPSLA 2018 paper:
* [One Tool, Many Languages: Language-Parametric Transformation with
Incremental Parametric Syntax*; James Koppel et al](http://www.jameskoppel.com/files/papers/oopsla18main-p221-p.pdf)
We also recommend reading the following papers to get the necessary
background in generic programming to understand Cubix:
* *Data Types à la Carte*, Wouter Swierstra
* *Compositional Data Types*, Patrick Bahr and Thomas Hvitved
Transformations in Cubix use our `compstrat` library of strategy
combinators. To understand strategy combinators, we recommend the
following paper:
* *The Essence of Strategic Programming*, Ralf Lämmel et al
# What Cubix is not
Cubix is the world's first framework that can build language-parametric
source-to-source transformations. As the first of its kind, it often
gets mistaken for a solution to more familiar problems. In particular,
it is not:
* A tool for translating one language into another. Cubix allows you
to create a single tool that can transform C programs into better C
programs and Java programs into better Java programs. It is not
designed for building tools that can transform C programs into Java programs. Indeed, much of
its power comes from its ability to preserve all the information of
the original program.
* A collection of ready-to-use refactoring tools. Thus far, all
transformations built on Cubix are tech demos. While a couple are
theoretically useful, they have not undergone the amount of UX
engineering needed to actually be time-savers.
* A framework for writing multi-language program analyses. Cubix transformations may consume results
provided by other analyses, which may be written in either a
single-language or multi-language fashion.
* A framework for analysis/transformation of polyglot programs, i.e.:
programs (or single source files) written in multiple
languages.
However, Cubix's generic-programming capabilities make it a powerful
tool for building all kinds of programming tools, even for only one
language. We are particularly excited about the potential of extending Cubix to
transform polyglot programs.
# Getting started
To build Cubix:
First, download the sub-libraries `comptrans` and `compstrat`
git submodule update
Second, build Cubix:
stack build --ghc-options='-O0 -j +RTS -A256m -n2m -RTS'
You may be prompted for your Github credentials to download the
third-party frontends.
You are now ready to run Cubix transformations:
stack exec examples-multi java hoist input-files/java/Foo.java
Cubix has many dependencies, several of which are not on Hackage/Stackage, including some forks of Hackage libraries whose changes have not been merged upstream. This makes it more difficult to create a new package which depends on Cubix. For an example of how to do this, see https://github.com/jkoppel/using-cubix-example.
# Compilation notes
Because of performance problems in GHC, the full Cubix will not build
with -O1 or -O2 on most machines. We've tried on a server with 64GB RAM; the server ran
out of memory. We eventually succeeded in building with -O2, but it took a server with over 200GB of RAM. Instead, use this command:
alias stackfastbuild="stack build --ghc-options='-O0 -j +RTS -A256m -n2m -RTS'"
This builds Cubix in parallel with minimal optimization, and sets the initial GHC heap to larger than usual.
We found the following two minimal sets of compilation flags that mitigate this blowup and make compilation manageable:
1: -fno-cse -fno-full-laziness
2: -fno-specialize -funfolding-creation-threshold=0
If we disable everything except CSE and specialization, the blow-up still occurs. This remains true with -O1.
Adding "--flag cubix:only-one-language" to the build command will turn on a compile flag that disables building support for all languages except Lua, the smallest language. This greatly speeds compilation times, to the point where we are able to compile with -O2 on 2015 MacBook Pro. Some of the performance reports in cubix/benchmarks/reports were compiled with this flag.
# Docker Image
A Docker image containing the version of Cubix submitted to OOPSLA 2018 along with runs of all the experiments, including the human study, is available from https://zenodo.org/record/1413855 .
# Directory Overview
Overview of directories (corresponding to the top-level of the zip file, and the /cubix directory on the Docker image):
/stack.yaml # High-level build description
/package.yaml # Low-level build description
/examples # Example transformations built with Cubix
/examples/multi # The main driver for the multi-language transformations
/comptrans # Source code for the comptrans library
/compstrat # Source code for compstrat, our library for Strategic Programming with Compositional Datatypes
/input-files # Small test inputs in each of the 5 languages
/scripts # Scripts for running the transformations over compiler test suites
# Running the built-in transformations and analyses
Use this command from the top-level directory (the one that contains "stack.yaml"):
stack exec examples-multi
You will be give the following help:
Usage:
examples-multi <language> <transform> <file>*
examples-multi <language> <analysis> <file>*
Transforms available: debug,id,cfg,elementary-hoist,hoist,tac,testcov,ipt
Analyses available: anal-triv-call
Note that only the IPT transformation can be run on multiple files.
For example, to run the three-address code transformation on a JavaScript file named "Foo.js", run: `stack exec examples-multi javascript tac Foo.js`
It can be faster to run the executable directory, without using `stack exec`. On the first author's laptop, this executable is located at `.stack-work/dist/x86_64-osx/Cabal-1.22.5.0/build/examples-multi/examples-multi`.
# Using the Interprocedural Plumbing Transformation (IPT) Tool
We recommend also using the `rlwrap` command to enable command history.
Example:
rlwrap stack exec examples-multi java ipt input-files/java/ipt/*.java
In C and Java, you will additionally be prompted for the type of the
parameter to add. Give this type as an AST for `language-c` or
`language-java` (e.g.: `(PrimType IntT)`, not `int`).
# Running the language test suites
Cubix comes with scripts for running any semantics-preserving
transformation on language test suites for C, Java, JavaScript, Lua,
and Python, in the files `scripts/test_java.rb` and similar. To do so:
1. Follow the below instructions to install the language tests.
2. Each .rb file contains a constant at the top like `JAVA_DIR` and
`JAVA_TESTS` pointing to the language implementation and tests
directory. Modify these appropriately.
3. Run `./scripts/test_<lang>.rb <name of transformation>`
These scripts will run the transformation on all tests, run the
transformed tests, and report the final pass/fail counts.
For all transformations except `id`, they will first run the identity
transformation, and discard any tests that fail. This rules out tests
that trigger bugs in the 3rd party parsers/pretty-printers, most (but
not all) self-referential tests, and tests that the original language
implementation fails.
They also come with a special `count_loc` parameter that counts the
total lines of code in all relevant tests.
## Installing GCC and GCC torture
*Note*: The C instructions are still being updated.
In some directory:
git clone https://github.com/gcc-mirror/gcc.git
cd gcc
# If you want the same revision of gcc-torture as in the paper
git reset --hard f72de674726c5d054b9d99b0a4db09dfb52bf494
cd ..
mkdir gcc_build
cd gcc_build
    ../gcc/configure
make -j8
make install
## Installing the K-Java test suite
In some directory:
git clone https://github.com/kframework/java-semantics
# If you want the same revision as in the paper
cd java-semantics
git reset --hard c202266304340a2a4be81fa21ee4fe36b3117ee3
## Installing test262, the JavaScript spec conformance tests, and the KJS test driver
In some directory:
git clone https://github.com/kframework/javascript-semantics.git
cd javascript-semantics
git reset --hard d5aca308d12d3838c645e1f787e2abc9257ce43e # Only if you want the same revision as in the paper
make test262
## The Lua tests
As described in the paper, we had to make several modifications to the
Lua test suite to get them to run with Cubix. In particular, we
removed several overly self-referential tests, and modified the test
suite to report the number of passing/total assertions, rather than
aborting the entire suite on the first failure.
These tests are in the `test/lua/lua-5.3.3-tests` directory.
## Installing the CPython tests
In some directory:
git clone https://github.com/python/cpython.git
cd cpython
git reset --hard 7bd4afec86849a57b48f375a9c4e0c32f0539dad # Only if you want the same revision as in the paper
./configure
make
| 42.279835 | 373 | 0.770781 | eng_Latn | 0.996272 |
28dd15a27dea2f64712ae113b6ca814725d76dc6 | 864 | md | Markdown | freeze_graph/README.md | yx0123/monodepth-cpp | cbaa711e1e7fa4cb5be49dcc49383f9df6e51831 | [
"MIT"
] | 93 | 2018-10-08T11:33:18.000Z | 2022-03-14T04:47:29.000Z | freeze_graph/README.md | yx0123/monodepth-cpp | cbaa711e1e7fa4cb5be49dcc49383f9df6e51831 | [
"MIT"
] | 20 | 2018-10-16T08:32:20.000Z | 2021-11-12T12:31:35.000Z | freeze_graph/README.md | yx0123/monodepth-cpp | cbaa711e1e7fa4cb5be49dcc49383f9df6e51831 | [
"MIT"
] | 34 | 2018-10-11T05:08:48.000Z | 2021-03-31T11:14:59.000Z | # Convert a Tensorflow checkpoint file to a frozen graph
The Monodepth model is saved as a checkpoint file (.ckpt), we need to convert it to a graph file (.pb) so that the pre-trained model can be used
# Run the freeze_graph.py
python freeze_graph.py --encoder resnet --ckpt_file /path/to/trained/model --output_dir /path/to/output/folder
Note:
* `--encoder resnet OR vgg`
* There is no extension (e.g., .ckpt) for the checkpoint file
* change the file name of the output graph using `--graph output.pb`
# Download the pre-trained frozen graph
[VGG model](https://drive.google.com/open?id=1yzcndbigENP3kQg6Oioerwvkf_hTotZZ)
[Resnet50 model](https://drive.google.com/open?id=1SFd-FBGWwWHl1n6coIQV_EWhXUDvlWsk)
Note:
* VGG model: the pre-trained city2kitti model provided by Monodepth author
* Resnet50 model: city2kitti excluding odometry sequence 00-10
| 36 | 144 | 0.769676 | eng_Latn | 0.876472 |
28dd4a595a722c6ce5a72f2394aa4cc615cd243c | 1,217 | md | Markdown | _posts/2019-06-09-title-devlop-lyle.md | tickmao/tickmao.github.io | 0a8f84af8229b4e66203c2cde0946ce36d819ede | [
"Apache-2.0"
] | null | null | null | _posts/2019-06-09-title-devlop-lyle.md | tickmao/tickmao.github.io | 0a8f84af8229b4e66203c2cde0946ce36d819ede | [
"Apache-2.0"
] | null | null | null | _posts/2019-06-09-title-devlop-lyle.md | tickmao/tickmao.github.io | 0a8f84af8229b4e66203c2cde0946ce36d819ede | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: "编程 | 离开和进入页面时改变title"
subtitle: "有趣的代码万里挑一"
date: 2019-06-09 10:00:00
author: "Lyle"
header-img: "img/post-bg-title.jpg"
header-mask: 0.5
catalog: true
tags:
- 编程
- title
- API
- 有趣
- 学习
---
# Changing the title when leaving and entering the page

While browsing blogs recently, I kept noticing a neat trick with page titles: after you switch away from the tab, the title changes to something playful to grab your attention. So I looked up how it is done, and here is a quick post about it.

## How it works

It relies on the HTML5 Page Visibility API. Having a dedicated API for this is really convenient, and the resulting code is quite elegant.

The Page Visibility API consists of two properties and one event:

- `document.hidden`: a Boolean indicating whether the current page is visible or hidden
- `document.visibilityState`: the current page's visibility state; the listed values are hidden, visible, prerender, preview
- `visibilitychange`: the event fired when the visibility state changes
var OriginTitile = document.title;
var titleTime;
document.addEventListener('visibilitychange', function () {
if (document.hidden) {
document.title = '(つェ⊂)~偶哟,奔溃啦! ' + OriginTitile;
clearTimeout(titleTime);
} else {
document.title = '(*´∇`*) 咦!又好了~ ' + OriginTitile;
titleTime = setTimeout(function () {
document.title = OriginTitile;
}, 2000);
}
});
```
## Result

**ChangeLog**

2019.06.09

- Initial draft of this post
28ddb70ad6b10e286c2d3bc16241815ba38779fa | 2,388 | md | Markdown | docs/relational-databases/system-tables/mspub-identity-range-transact-sql.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-tables/mspub-identity-range-transact-sql.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-tables/mspub-identity-range-transact-sql.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: MSpub_identity_range (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 03/03/2017
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: replication
ms.topic: language-reference
f1_keywords:
- MSpub_identity_range_TSQL
- MSpub_identity_range
dev_langs:
- TSQL
helpviewer_keywords:
- MSpub_identity_range system table
ms.assetid: 68746eef-32e1-42bc-aff0-9798cd0e88b8
author: stevestein
ms.author: sstein
manager: craigg
ms.openlocfilehash: a60ae0e3cd8fb4a07ac9a947a8e4a7ea692d9b26
ms.sourcegitcommit: ceb7e1b9e29e02bb0c6ca400a36e0fa9cf010fca
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 12/03/2018
ms.locfileid: "52821840"
---
# <a name="mspubidentityrange-transact-sql"></a>MSpub_identity_range (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-xxxx-xxxx-xxx-md.md)]
  The **MSpub_identity_range** table supports identity range management. This table is stored in the publication and subscription databases.
|Column name|Data type|Description|
|-----------------|---------------|-----------------|
|**objid**|**int**|The ID of the table that has the identity column managed by replication.|
|**range**|**bigint**|Controls the size of the range of consecutive identity values that would be assigned to the subscription in an adjustment.|
|**pub_range**|**bigint**|Controls the size of the range of consecutive identity values that would be assigned to the publication in an adjustment.|
|**current_pub_range**|**bigint**|The range currently used by the publication. It can differ from *pub_range* if viewed after being changed by **sp_changearticle** and before the next range adjustment.|
|**threshold**|**int**|Percentage value that controls when the Distribution Agent assigns a new identity range. When the percentage of values specified in *threshold* has been used, the Distribution Agent creates a new identity range.|
|**last_seed**|**bigint**|The lower bound of the current range.|
## <a name="see-also"></a>See Also
 [Replication Tables (Transact-SQL)](../../relational-databases/system-tables/replication-tables-transact-sql.md)
 [Replication Views (Transact-SQL)](../../relational-databases/system-views/replication-views-transact-sql.md)
| 50.808511 | 255 | 0.757119 | por_Latn | 0.926728 |
28ddd74468675871a1e4553c068322e8100c6205 | 4,232 | md | Markdown | readme.md | kalisio/geo-pixel-stream | 4d9f8f1908fd8138ea366cdd62379126c47fc21f | [
"ISC"
] | null | null | null | readme.md | kalisio/geo-pixel-stream | 4d9f8f1908fd8138ea366cdd62379126c47fc21f | [
"ISC"
] | null | null | null | readme.md | kalisio/geo-pixel-stream | 4d9f8f1908fd8138ea366cdd62379126c47fc21f | [
"ISC"
] | 1 | 2022-03-19T18:19:05.000Z | 2022-03-19T18:19:05.000Z | # geo-pixel-stream
Node.js streams for reading/writing/transforming pixels using node-gdal.
We forked https://github.com/mapbox/geo-pixel-stream because the project did not seem to be maintained anymore.
We also needed support for the latest Node.js LTS (v12 at the time), but https://github.com/naturalatlas/node-gdal had not been updated either, so we switched to https://github.com/contra/node-gdal-next.
## PixelReader streams
Create streams that read pixels from each band of a source image:
```js
var pixels = require('@mapbox/geo-pixel-stream');
var dcrgb = 'node_modules/mapnik-test-data/data/geotiff/DC_rgb.tif';
var readers = pixels.createReadStreams(dcrgb);
```
If an image has 4 bands (e.g. RGBA), then the `readers` array will contain four stream objects. Each stream contains metadata about the original datasource and the band represented:
```js
console.log(readers[0].metadata);
// {
// driver: 'GTiff',
// width: 1541,
// height: 1913,
// numBands: 4,
// srs: gdal.SpatialReference.fromEPSG(26918),
// geotransform: [ 326356, 4.00129785853342, 0, 4318980, 0, -4.0015682174594875 ],
// id: 1,
// type: 'Byte',
// blockSize: {
// x: 1541,
// y: 1
// }
// }
```
Each PixelReader is a [Node.js readable stream](http://nodejs.org/api/stream.html#stream_class_stream_readable). The stream's `data` event will emit objects indicating the offset (in terms of blocks, not pixels), block size (in pixels) and a TypedArray of pixel data.
```js
readers[0].once('data', function(data) {
console.log(data);
// {
// offset: { x: 0, y: 0 },
// blockSize: { x: 1541, y: 1 },
// buffer: [ Uint8TypedArray ]
// }
});
```
[Inheriting from the stream api](http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options), PixelReaders can send data to writable streams via `pipe`:
```js
var writable = new stream.Writable();
readers[1].pipe(writable);
```
## PixelWriter streams
Create streams that write pixels from each band of a destination image:
```js
var outputFile = '~/some.tif';
var writers = pixels.createWriteStreams(outputFile);
```
Again, this will generate one stream for each band of the output file. If, instead of writing to an existing file, you want to write to a new, blank image, you must provide sufficient metadata to generate the new image:
```js
var outfile = '~/just-red-pixels.tif'
var outputMetadata = {
driver: 'GTiff',
width: 1541,
height: 1913,
numBands: 1,
srs: gdal.SpatialReference.fromEPSG(26918),
geotransform: [ 326356, 4.00129785853342, 0, 4318980, 0, -4.0015682174594875 ],
id: 1,
type: 'Byte',
blockSize: { x: 1541, y: 1 }
};
var writers = pixels.createWriteStreams(outputFile, outputMetadata);
```
Once you've created readers and writers, you can `pipe` data from one image to another:
```js
var dcrgb = 'node_modules/mapnik-test-data/data/geotiff/DC_rgb.tif';
var readers = pixels.createReadStreams(dcrgb);
var readRedBand = readers[0];
var writers = pixels.createWriteStreams(outputFile);
var writeRedBand = writers[0];
readRedBand.pipe(writeRedBand).on('finish', function() {
console.log('All done!');
});
```
## PixelTransform streams
Create a stream that takes input pixels, performs some manipulation on them, and outputs the adjusted pixels. You create a function that receives a TypedArray of pixels, performs some adjustment, and provides the adjusted pixels to the provided `callback` function:
```js
function processPixels(buffer, callback) {
var result = buffer.map(function(pixel) {
return pixel + 10;
});
callback(null, result);
}
var transform = pixels.createTransformStream(processPixels);
```
If processing encounters an error, send the error as the first argument to the provided `callback` function.
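The transform callback contract is plain JavaScript and can be tried out without GDAL at all. The sketch below (`brighten` is our example name, not part of this library) shows the buffer-in/buffer-out shape a transform function must have:

```javascript
// Standalone demo of the transform-callback contract used by
// createTransformStream: take a TypedArray of pixels, hand the adjusted
// pixels to the callback.
function brighten(buffer, callback) {
  // Clamp so the result stays valid for an 8-bit ("Byte") band.
  var result = buffer.map(function (pixel) {
    return Math.min(pixel + 10, 255);
  });
  callback(null, result);
}

brighten(Uint8Array.from([0, 100, 250]), function (err, out) {
  if (err) throw err;
  console.log(Array.from(out)); // prints [ 10, 110, 255 ]
});
```

In a real pipeline the same function would simply be passed to `pixels.createTransformStream(brighten)`.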
You can use a transform stream to make adjustments to pixel values before writing them to a destination file:
```js
var dcrgb = 'node_modules/mapnik-test-data/data/geotiff/DC_rgb.tif';
var readers = pixels.createReadStreams(dcrgb);
var writers = pixels.createWriteStreams(outputFile);
var transform = pixels.createTransformStream(processPixels);
readers[0].pipe(transform).pipe(writers[0]).on('close', function() {
console.log('All done!');
});
```
| 33.0625 | 277 | 0.725189 | eng_Latn | 0.820292 |
28ddfecda985a2a515f8aa337109c849fc6e54fa | 3,269 | md | Markdown | Practice-2019-03-23/mds/Arafatk-DataViz-master-README.md | serge-sotnyk/nlp-practice | e38400590a3fcf140a73d6871a778b3c2115a2fe | [
"MIT"
] | 3 | 2019-11-25T09:56:48.000Z | 2021-01-18T13:18:17.000Z | Practice-2019-03-23/mds/Arafatk-DataViz-master-README.md | serge-sotnyk/nlp-practice | e38400590a3fcf140a73d6871a778b3c2115a2fe | [
"MIT"
] | null | null | null | Practice-2019-03-23/mds/Arafatk-DataViz-master-README.md | serge-sotnyk/nlp-practice | e38400590a3fcf140a73d6871a778b3c2115a2fe | [
"MIT"
] | 2 | 2020-05-17T17:22:14.000Z | 2020-09-23T08:31:46.000Z | [](https://godoc.org/github.com/Arafatk/DataViz) [](https://travis-ci.org/Arafatk/DataViz) [](https://goreportcard.com/report/github.com/Arafatk/Dataviz) [](https://github.com/Arafatk/DataViz/blob/master/LICENSE/LICENSE.md) [](https://github.com/emersion/stability-badges#stable) [](https://codeclimate.com/github/Arafatk/DataViz/maintainability)
# DataViz
Build and visualize data structures in Golang. Inspired by ideas from [memviz](https://github.com/bradleyjkemp/memviz) and [Gods](https://github.com/emirpasic/gods), this library helps users play around with standard data structures while also giving them the tools to build their own data structures and visualization options.

## Documentation
Documentation is available at [godoc](https://godoc.org/github.com/Arafatk/dataviz).
## Requirements
- graphviz
- build graphviz from [source](https://www.graphviz.org/download/)
- linux users
- ```sudo apt-get update```
- ```sudo apt install python-pydot python-pydot-ng graphviz```
- mac users ([Link](http://macappstore.org/graphviz-2/))
- install homebrew
- ```brew install graphviz```
## Installation
```go get github.com/Arafatk/Dataviz```
## Data Structures
- Containers
- Lists
- ArrayList
- SinglyLinkedList
- DoublyLinkedList
- Stacks
- ArrayStack
- Maps
- TreeMap
- Trees
- RedBlackTree
- AVLTree
- BTree
- BinaryHeap
- Functions
- Comparator
- Iterator
- IteratorWithIndex
- IteratorWithKey
- ReverseIteratorWithIndex
- ReverseIteratorWithKey
- Enumerable
- EnumerableWithIndex
- EnumerableWithKey
- Serialization
- JSONSerializer
- JSONDeserializer
- Sort
- Container
- Visualizer
## Usage and Examples
We have a blog post explaining our vision and covering some basic usage of the `dataviz` library. [Check it out here](https://medium.com/@Arafat./introducing-dataviz-a-data-structure-visualization-library-for-golang-f6e60663bc9d).
- **Binary Heap**

- **Stack**

- **B Tree**

- **Red Black Tree**

## Contributing
We really encourage developers to get involved by reporting bugs or requesting new features. Want to tell us about the feature you just implemented? Just raise a pull request and we'll be happy to go through it. Please read the CONTRIBUTING and CODE_OF_CONDUCT files.
| 42.454545 | 783 | 0.71398 | kor_Hang | 0.300387 |
28dff9bd30bac6ea701b33de5aa20ac95b640a7c | 5,173 | md | Markdown | docs/examples/parameters.md | ntselepidis/kfac | ddad6375bbdebfae809bccfd3a5c3db073128764 | [
"Apache-2.0"
] | 179 | 2018-02-08T00:10:26.000Z | 2022-02-25T06:58:28.000Z | docs/examples/parameters.md | gpauloski/kfac | cf6265590944b5b937ff0ceaf4695a72c95a02b9 | [
"Apache-2.0"
] | 40 | 2018-02-02T00:10:00.000Z | 2022-02-09T01:46:32.000Z | docs/examples/parameters.md | isabella232/kfac | 3ee1bec8dcd851d50618cd542a8d1aff92512f7c | [
"Apache-2.0"
] | 40 | 2018-03-11T10:10:23.000Z | 2022-01-24T12:03:48.000Z | # K-FAC Parameters.
## Table of Contents
* [Damping](#damping)
* [Learning Rate](#learning-rate)
* [Subsample covariance computation](#subsample-covariance-computation)
* [KFAC norm constraint](#kfac-norm-constraint)
* [Covariance decay](#covariance-decay)
* [Train batch size](#train-batch-size)
<br>
We list below various parameters which can be tuned to improve training and run
time performance of K-FAC.
## Damping
Damping is a crucial aspect of K-FAC, as it is for any second order
optimization/natural gradient method. Broadly speaking, it refers to the
practice of penalizing or constraining the size of the update in various ways so
that it doesn't leave the local region where the quadratic approximation to the
objective (which is used to compute the update) remains accurate. This region
commonly referred to as the "trust region". In some literature damping is called
"regularization" although we will avoid that term due to its related but
distinct meaning as a method to combat overfitting.
The damping strategy used in KFAC is to (approximately) add a multiple of the
identity to the Fisher before inverting it. This is essentially equivalent to
enforcing that the update lie in a spherical trust region centered at the
current location in parameter space.
The `damping` parameter represents the multiple of identity which is used.
Higher values correspond to smaller trust regions, although the precise
relationship between `damping` and the size of the trust region depends on the
scale of the objective, and will vary from iteration to iteration. (If the loss
function is multiplied by scalar 'alpha' then damping should be multiplied by
'alpha' as well.) Higher values of `damping` can allow higher learning rates,
but as damping tends to infinity the KFAC updates will start to resemble regular
gradient descent updates (scaled by `1/damping`).
The `damping` parameter depends on the scale of the loss function. `damping` is
a critical parameter that needs to be tuned. Options for tuning include a grid
sweep (must be simultaneous with learning rate optimization - NOT independent)
or auto-tuned using the Levenberg-Marquardt (LM) algorithm (see the [`Auto
Damping`][auto_damping] section for further details). For grid sweeps a typical
range to consider would be logarithmically spaced values between `1e-5` to
`100`, although the optimal value could be any non-negative real number in
principle (because the scale of the loss is arbitrary). Another option for
tuning `damping` is [`Population based training`][PBT] (PBT).
Refer to section `6` of the [KFAC paper][kfac_paper] for a more detailed
discussion of damping and how it can be used/tuned in KFAC
[auto_damping]:
https://github.com/tensorflow/kfac/tree/master/docs/examples/auto_damp.md
[PBT]:
https://arxiv.org/abs/1711.09846
[kfac_paper]:
https://arxiv.org/pdf/1503.05671.pdf
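As a sketch of the grid sweep described above, the following standalone Python generates logarithmically spaced candidate values and the joint (damping, learning rate) grid. The helper name is ours and is independent of the K-FAC codebase:

```python
import math

def log_spaced(lo, hi, num):
    """Return `num` logarithmically spaced values from lo to hi, inclusive."""
    lo_exp, hi_exp = math.log10(lo), math.log10(hi)
    step = (hi_exp - lo_exp) / (num - 1)
    return [10 ** (lo_exp + i * step) for i in range(num)]

# Damping and learning rate are coupled, so sweep them jointly, not independently.
damping_values = log_spaced(1e-5, 100, 8)
lr_values = log_spaced(1e-5, 100, 8)
grid = [(damping, lr) for damping in damping_values for lr in lr_values]
print(len(grid))  # 64
```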
## Learning Rate
Typically sweep over values in the range 1e-5 to 100. It is important to tune
the learning rate in conjunction with damping, since the two are closely coupled
(higher damping allows higher learning rates). The learning rate can also be
tuned using PBT. Note that the optimal learning rate will generally be different
from the learning rate used with the SGD/RMSProp/Adam optimizers.
## Subsample covariance computation
If you are using Conv layers and observe that the KFAC iterations are
significantly slower than Adam, or if you run out of memory, then a possible
remedy is to use subsampling in the covariance computation. To turn on
subsampling set `kfac_ff.sub_sample_inputs` to `True` and
`kfac_ff.sub_sample_outer_products` to `True`. The former flag subsamples the
batch of inputs used for covariance computation and the later flag subsamples
extracted patches based on the size of the covariance matrix. Check the
documentation of `tensorflow_kfac.fisher_factors` for detailed explanation of
various subsampling parameters. Also check [`Distributed training`][dist_train]
section for how to distribute the computation of these ops over multiple
devices.
[dist_train]:
https://github.com/tensorflow/kfac/tree/master/docs/examples/distributed_training.md
## KFAC norm constraint
Scales the K-FAC update so that its approximate Fisher norm is bounded.
Typically use an initial value of 1.0 and tune it using PBT or perform a grid
search. The norm constraint can be used as an alternative to learning rate schedules.
See Section 5 of the [Distributed Second-Order Optimization using
Kronecker-Factored Approximations][ba_paper] paper for further details.
[ba_paper]:
https://jimmylba.github.io/papers/nsync.pdf
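A rough sketch of how such a constraint can be applied, assuming an estimate of the squared Fisher norm of the update is available. The function below is illustrative only, not the library's API:

```python
import math

def clip_update(update, sq_fisher_norm, lr, max_norm=1.0):
    """Scale `update` so that lr**2 * u^T F u <= max_norm.

    `sq_fisher_norm` stands in for the approximate squared Fisher norm
    u^T F u of the update; how it is estimated is library-specific.
    """
    scale = min(1.0, math.sqrt(max_norm / max(lr * lr * sq_fisher_norm, 1e-30)))
    return [scale * u for u in update]

# A small update passes through unchanged; a large one is scaled down.
print(clip_update([1.0, 2.0], sq_fisher_norm=0.5, lr=0.1))  # unchanged
print(clip_update([1.0, 2.0], sq_fisher_norm=1e4, lr=0.1))  # scaled down by ~0.1
```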
## Covariance decay
During the course of the algorithm, an exponential moving average tracks
statistics for each layer. Slower decays mean that the statistics are based on
more data, but will suffer more from the issue of staleness (because of the
changing model parameters). This parameter can usually be left at its default
value but may occasionally matter for some problems. In such cases some
reasonable values to sweep over are `[0.9, 0.95, 0.99, 0.999]`.
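For intuition, the moving average in question is the standard exponential form, and a decay of 0.95 corresponds to an effective horizon of roughly 1 / (1 - 0.95) = 20 updates. A minimal sketch:

```python
def ema_update(running, new_value, decay=0.95):
    """One exponential-moving-average step, as used for the factor statistics."""
    return decay * running + (1.0 - decay) * new_value

# With a constant signal the average converges toward it; slower decays take
# more updates to catch up, which is the staleness trade-off described above.
stat = 0.0
for batch_value in [1.0] * 100:
    stat = ema_update(stat, batch_value, decay=0.95)
print(round(stat, 4))  # close to 1.0
```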
## Train batch size
Typically try using a larger batch size compared to training with
SGD/RMSprop/Adam.
| 47.027273 | 84 | 0.795477 | eng_Latn | 0.998505 |
28e0c86567a9a8c25fd512900f3c834476818752 | 85 | md | Markdown | Python Games/Tetris/README.md | lazydinoz/HackFest21 | 84bfbfbb2c75a6511226a87d2e947984db878ba1 | [
"MIT"
] | 1 | 2021-11-12T10:51:19.000Z | 2021-11-12T10:51:19.000Z | Python Games/Tetris/README.md | lazydinoz/HackFest21 | 84bfbfbb2c75a6511226a87d2e947984db878ba1 | [
"MIT"
] | null | null | null | Python Games/Tetris/README.md | lazydinoz/HackFest21 | 84bfbfbb2c75a6511226a87d2e947984db878ba1 | [
"MIT"
] | null | null | null | This is the classic Tetris game in python.
Simply run the file in python and enjoy!
| 21.25 | 42 | 0.776471 | eng_Latn | 0.999949 |
28e11cfec292cec09fc017f589d0e2f66358ee7b | 752 | md | Markdown | about-me.md | daizhirui/daizhirui.github.io | a54288f97636f750db6350b428f82a8b2b340a61 | [
"MIT"
] | null | null | null | about-me.md | daizhirui/daizhirui.github.io | a54288f97636f750db6350b428f82a8b2b340a61 | [
"MIT"
] | 2 | 2021-09-27T21:33:45.000Z | 2022-02-26T04:00:16.000Z | about-me.md | daizhirui/daizhirui.github.io | a54288f97636f750db6350b428f82a8b2b340a61 | [
"MIT"
] | null | null | null | ---
layout: article
title: You Got Me!
permalink: /about/
---
## Contact
- email: zhdai at eng.ucsd.edu
## My Skills
- Programming: `C`, `C++`, `Python`, `Assembly`, `Verilog`, `Swift`, `Java`, `Shell Script`, `HTML`, `CSS`, `Javascript`
- Software Development: `Qt5`, `macOS App`, `iOS App`, `Android App`
- Deep Learning Framework: `Pytorch`, `MXNet`, `Tensorflow`
- Virtualization Software: `Docker`
- Math Software: `MATLAB`, `Mathematica`
- Mechanical Design Software: `AutoCAD`, `SolidWorks`
- Circuit Design Software: `Cadence`, `Quartus`, `Modelsim`
- Hardware: `STM32`, `Arduino`, `Raspberry Pi`, `FPGA`
- Other: `Git`, `Latex`
## My Hobbies, My Tastes
- Painting
- Sports: Badminton, Table Tennis, Work Out
- Cooking
- Science Fictions
| 25.931034 | 120 | 0.672872 | yue_Hant | 0.265895 |
28e1726d43ab6a4f3d767880a99b8c50e8b124d1 | 519 | md | Markdown | ACKNOWLEDGEMENTS.md | toonzz/jumbotron | ac963ec2f5421f39ff6153b8e247b4d88592702b | [
"BSD-3-Clause"
] | null | null | null | ACKNOWLEDGEMENTS.md | toonzz/jumbotron | ac963ec2f5421f39ff6153b8e247b4d88592702b | [
"BSD-3-Clause"
] | null | null | null | ACKNOWLEDGEMENTS.md | toonzz/jumbotron | ac963ec2f5421f39ff6153b8e247b4d88592702b | [
"BSD-3-Clause"
] | null | null | null | # Acknowledgements
Package management support is provided by NuGet, which is open-source software.
The original software is available from:
http://nuget.codeplex.com/
This software is available under an Apache License v2.0:
http://nuget.codeplex.com/license
The installer is created using the WiX Toolset, which is open-source software.
The original software is available from:
http://wixtoolset.org/
This software is available under a Microsoft Reciprocal License (Ms-RL):
http://wix.codeplex.com/license
| 30.529412 | 79 | 0.786127 | eng_Latn | 0.993797 |
28e196c933b416f71130df17c595c5a859e88bda | 166 | md | Markdown | packages/cookie-disclaimer/stories/banner.md | Goldinteractive/gold-features | 7e92afd98e68682a2de116c9cab56d2929647a9f | [
"MIT"
] | 9 | 2018-02-19T10:03:12.000Z | 2022-03-07T19:14:49.000Z | packages/cookie-disclaimer/stories/banner.md | Goldinteractive/gold-features | 7e92afd98e68682a2de116c9cab56d2929647a9f | [
"MIT"
] | 17 | 2018-10-17T17:14:34.000Z | 2022-02-26T20:43:51.000Z | packages/cookie-disclaimer/stories/banner.md | Goldinteractive/gold-features | 7e92afd98e68682a2de116c9cab56d2929647a9f | [
"MIT"
] | 2 | 2018-04-27T12:55:29.000Z | 2019-11-28T12:58:30.000Z | # CookieDisclaimer
> Usage of this feature is discouraged. The new `cookie-handler` is designed to fulfill the previous use cases of `cookie-disclaimer`.
## Banner
| 27.666667 | 134 | 0.771084 | eng_Latn | 0.998607 |
28e1fc22ea71268498b5c199f307e3695a51717c | 4,803 | md | Markdown | README.md | tblasche/microservice-ui-composition-showcase | e4f46888146e76b91d8a7e7e3663ccd594451918 | [
"MIT"
] | 1 | 2019-07-25T11:14:03.000Z | 2019-07-25T11:14:03.000Z | README.md | tblasche/microservice-ui-composition-showcase | e4f46888146e76b91d8a7e7e3663ccd594451918 | [
"MIT"
] | null | null | null | README.md | tblasche/microservice-ui-composition-showcase | e4f46888146e76b91d8a7e7e3663ccd594451918 | [
"MIT"
] | null | null | null | # UI Integration Showcase
Showcase of a UI integration approach for micro frontends.
## Technical Approach
* Dynamic: UI gets composed during runtime
* Distributed: UI gets composed by the HTML-delivering services using libraries that intercept HTML responses
* Advanced: Server-side as well as client-side composition, possibility for fragments to place headers in the final markup or to set the HTTP status code, ...

1. Request arrives at the service
1. Service renders the page template, e.g.
```
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Page</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<h1>Foo Bar</h1>
<fragment async src="/api/fragments/foo" />
<fragment src="http://footer-service/api/fragments/footer" />
</body>
</html>
```
1. UI composition library intercepts the service response and performs the actual UI composition
1. Look for `<fragment>` tags (which do not have attribute `async` set because they are resolved asynchronically within the user's Browser)
1. Load fragment via URL provided in `fragment`'s `src` attribute
1. Replace `fragment` tag with the HTML Body of the loaded fragment
* Some advanced logic takes place here also: HTML tags in the fragment's HTML body which have attribute `fragment-position` set, will
be reordered in the resulting HTML document according to the value of this attribute. Currently `beforeBodyClose` is the only available
position within this showcase, which causes the tag to be placed right before
the order of the moved tags
1. Add all HTML tags within the fragment's HTML head which have attribute `data-fragment-head` set to the head of the resulting HTML document
1. Service response with composed markup is sent to the requesting client
1. Asynchronous UI fragments (`<fragment async src="..." />`) are resolved within the browser by the UI composition library the same way as
within the backend service UI composition library
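The non-async part of the steps above can be sketched in a few lines of JavaScript. The helper below is illustrative only; the showcase's real libraries also handle head merging, `fragment-position` reordering, and HTTP headers:

```javascript
// Illustrative server-side pass: replace each synchronous <fragment src="..."/>
// tag with markup from `load(src)`, leaving `async` fragments for the browser.
function composeFragments(html, load) {
  return html.replace(
    /<fragment\s+(?!async)[^>]*src="([^"]+)"[^>]*\/>/g,
    function (tag, src) { return load(src); }
  );
}

var template =
  '<body><fragment src="/api/fragments/foo" />' +
  '<fragment async src="/api/fragments/bar" /></body>';

var composed = composeFragments(template, function (src) {
  return '<div data-src="' + src + '">fragment body</div>';
});

console.log(composed); // the async fragment tag survives for client-side resolution
```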
## PROs of the Approach
* No need to route all traffic through a UI composition infrastructure component
* No single point of failure like when using dedicated service performing the UI composition
* Pretty easy local frontend development as the service performs UI composition itself
* Same mechanism for server-side and client-side composition
* Good debugging possibilities
* One include per fragment: No need for workarounds like with SSI, where one include may be placed in the HTML head
for the CSS, another includes the JS before the `</body>` tag, and a third includes the actual markup
* Easy to implement features like unified tracing
* Fragments may set HTTP Headers (e.g. Cookies)
* Out of the box asynchronous includes from within the Browser
## CONs of the Approach
* A composition library has to be implemented and maintained for every programming language you want to use UI composition in
* Services must be updated (composition lib dependency) and re-deployed to benefit from bugfixes and new features of the UI composition layer
* Has to be developed. This showcase is far from being production-ready
## The Showcase
The showcase consists of the following services:
* `platform-gateway` (Port 9080) - entry point for all incoming traffic from outside the platform
* Routes all incoming requests to their destination services
* Provides the UI Composition Lib for the Browser: http://localhost:9080/ui-composition-lib-browser.js
* `frontpage-service` (Port 9084) - service providing the frontpage
* `header-service` (Port 9082) - service providing the header fragment
* `footer-service` (Port 9083) - service providing the footer fragment
* `article-service` (Port 9081) - service providing article related stuff
To build the whole showcase (services and their docker images), run
```
./mvnw clean install -DskipTests
```
To start all services (after they have been built), run
```
docker-compose up
```
See result at http://localhost:9080/. From within the containers, other containers are reachable via their container names, e.g. `http://article-service:9081/`.
## Implementation considerations
* Need for governance to make UI fragments integrate well
* CSS scoping
* Think twice about using libraries like angular, react or vue.js
* Possibility to flag fragments as "primary" (fragments which represent the main part of the page) and to take response code and some meta tags from it
* Session Handling: Fragments need to receive session information and must be able to write to session
## License
This showcase is released under the MIT license. Copyright (c) 2019 Torsten Blasche.
| 47.088235 | 160 | 0.761816 | eng_Latn | 0.996536 |
28e275d2b9fdedb92aa13338b213096f2b056b38 | 8,446 | md | Markdown | _posts/OS/2019-10-22-os9.md | traveloving2030/jiwon | ecd53b08eadca46b9919c8032d8d021801632411 | [
"MIT"
] | null | null | null | _posts/OS/2019-10-22-os9.md | traveloving2030/jiwon | ecd53b08eadca46b9919c8032d8d021801632411 | [
"MIT"
] | 1 | 2022-03-21T14:29:08.000Z | 2022-03-21T14:29:08.000Z | _posts/OS/2019-10-22-os9.md | traveloving2030/jiwon | ecd53b08eadca46b9919c8032d8d021801632411 | [
"MIT"
] | null | null | null | ---
layout: post
title: "9. Storage Device Management - Virtual Memory"
date: 2019-10-22
excerpt: "Virtual memory"
tag:
- Operating System
- Computer Structure
category: [OS]
comments: true
---
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0001.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0002.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0003.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0004.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0005.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0006.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0007.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0008.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0009.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0010.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0011.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0012.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0013.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0014.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0015.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0016.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0017.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0018.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0019.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0020.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0021.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0022.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0023.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0024.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0025.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0026.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0027.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0028.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0029.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0030.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0031.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0032.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0033.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0034.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0035.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0036.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0037.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0038.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0039.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0040.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0041.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0042.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0043.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0044.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0045.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0046.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0047.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0048.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0049.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0050.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0051.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0052.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0053.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0054.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0055.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0056.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0057.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0058.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0059.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0060.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0061.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0062.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0063.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0064.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0065.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0066.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0067.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0068.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0069.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0070.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0071.jpg" width = "70%" />
- <img src = "https://traveloving2030.github.io/jiwon/assets/img/post/OS_Chapter9-이미지/0072.jpg" width = "70%" />
---
title: Seat depth
---
The **Seat depth** measurement is the height your trouser waist rises above the surface you are sitting on.

To measure your seat depth, sit up straight on a flat chair or table, and measure from the hip line down to the chair/table.
## Description
- Bootstrap Halflings icon set
# vroom
<details>
* Version: 1.2.0
* Source code: https://github.com/cran/vroom
* URL: https://github.com/r-lib/vroom
* BugReports: https://github.com/r-lib/vroom/issues
* Date/Publication: 2020-01-13 22:40:02 UTC
* Number of recursive dependencies: 88
Run `revdep_details(,"vroom")` for more info
</details>
## In both
* checking whether package ‘vroom’ can be installed ... ERROR
```
Installation failed.
See ‘.../revdep/checks.noindex/vroom/new/vroom.Rcheck/00install.out’ for details.
```
## Installation
### Devel
```
* installing *source* package ‘vroom’ ...
** package ‘vroom’ successfully unpacked and MD5 sums checked
** using staged installation
** libs
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c Iconv.cpp -o Iconv.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c LocaleInfo.cpp -o LocaleInfo.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c RcppExports.cpp -o RcppExports.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c altrep.cc -o altrep.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c delimited_index.cc -o delimited_index.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c delimited_index_connection.cc -o delimited_index_connection.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c fixed_width_index_connection.cc -o fixed_width_index_connection.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c gen.cc -o gen.o
clang -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -fPIC -Wall -g -O2 -c grisu3.c -o grisu3.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c guess_type.cc -o guess_type.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c index_collection.cc -o index_collection.o
clang -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -fPIC -Wall -g -O2 -c localtime.c -o localtime.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom.cc -o vroom.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_big_int.cc -o vroom_big_int.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_chr.cc -o vroom_chr.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_date.cc -o vroom_date.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_dbl.cc -o vroom_dbl.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/vroom/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_dttm.cc -o vroom_dttm.o
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:655:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/gethostuuid.h:39:17: error: unknown type name 'uuid_t'
int gethostuuid(uuid_t, const struct timespec *) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA);
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:662:27: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int getsgroups_np(int *, uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:664:27: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int getwgroups_np(int *, uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:727:31: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int setsgroups_np(int, const uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:729:31: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int setwgroups_np(int, const uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
5 errors generated.
make: *** [index_collection.o] Error 1
make: *** Waiting for unfinished jobs....
ERROR: compilation failed for package ‘vroom’
* removing ‘.../revdep/checks.noindex/vroom/new/vroom.Rcheck/vroom’
```
### CRAN
```
* installing *source* package ‘vroom’ ...
** package ‘vroom’ successfully unpacked and MD5 sums checked
** using staged installation
** libs
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c Iconv.cpp -o Iconv.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c LocaleInfo.cpp -o LocaleInfo.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c RcppExports.cpp -o RcppExports.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c altrep.cc -o altrep.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c delimited_index.cc -o delimited_index.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c delimited_index_connection.cc -o delimited_index_connection.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c fixed_width_index_connection.cc -o fixed_width_index_connection.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c gen.cc -o gen.o
clang -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -fPIC -Wall -g -O2 -c grisu3.c -o grisu3.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c guess_type.cc -o guess_type.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c index_collection.cc -o index_collection.o
clang -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -fPIC -Wall -g -O2 -c localtime.c -o localtime.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom.cc -o vroom.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_big_int.cc -o vroom_big_int.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_chr.cc -o vroom_chr.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_date.cc -o vroom_date.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_dbl.cc -o vroom_dbl.o
clang++ -std=gnu++11 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I".../revdep/library.noindex/vroom/progress/include" -I".../revdep/library.noindex/fs/old/Rcpp/include" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -Imio/include -DWIN32_LEAN_AND_MEAN -Ispdlog/include -fPIC -Wall -g -O2 -c vroom_dttm.cc -o vroom_dttm.o
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:655:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/gethostuuid.h:39:17: error: unknown type name 'uuid_t'
int gethostuuid(uuid_t, const struct timespec *) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA);
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:662:27: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int getsgroups_np(int *, uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:664:27: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int getwgroups_np(int *, uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:727:31: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int setsgroups_np(int, const uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
In file included from index_collection.cc:2:
In file included from ./delimited_index_connection.h:1:
In file included from ./delimited_index.h:9:
In file included from mio/include/mio/shared_mmap.hpp:24:
In file included from mio/include/mio/mmap.hpp:24:
In file included from mio/include/mio/page.hpp:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/unistd.h:729:31: error: unknown type name 'uuid_t'; did you mean 'uid_t'?
int setwgroups_np(int, const uuid_t);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_uid_t.h:31:31: note: 'uid_t' declared here
typedef __darwin_uid_t uid_t;
^
5 errors generated.
make: *** [index_collection.o] Error 1
make: *** Waiting for unfinished jobs....
ERROR: compilation failed for package ‘vroom’
* removing ‘.../revdep/checks.noindex/vroom/old/vroom.Rcheck/vroom’
```
---
title: About GitHub's IP addresses
intro: '{% data variables.product.product_name %} serves applications from multiple IP address ranges, which are available using the API.'
redirect_from:
- /articles/what-ip-addresses-does-github-use-that-i-should-whitelist/
- /categories/73/articles/
- /categories/administration/
- /articles/github-s-ip-addresses/
- /articles/about-github-s-ip-addresses
- /articles/about-githubs-ip-addresses
- /github/authenticating-to-github/about-githubs-ip-addresses
versions:
free-pro-team: '*'
topics:
- Identity
- Access management
---
You can retrieve a list of {% data variables.product.prodname_dotcom %}'s IP addresses from the [meta](https://api.github.com/meta) API endpoint. For more information, see "[Meta](/rest/reference/meta)."

These ranges are in [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation). You can use an online conversion tool such as the [CIDR/VLSM Supernet Calculator](http://www.subnet-calculator.com/cidr.php) to convert from CIDR notation to individual IP addresses.
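As an illustration, Python's standard `ipaddress` module can parse CIDR blocks and test whether an address falls inside them. The ranges below are placeholder values in the shape of the `/meta` response, not the live list — fetch the real ranges from the API rather than hard-coding them:

```python
import ipaddress

# Placeholder CIDR ranges shaped like part of the /meta response.
# Fetch the real list from https://api.github.com/meta instead.
meta = {"git": ["192.30.252.0/22", "185.199.108.0/22"]}

def ip_allowed(ip, cidr_ranges):
    """Return True if `ip` falls inside any of the given CIDR ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in cidr_ranges)

print(ip_allowed("192.30.253.1", meta["git"]))  # True: inside 192.30.252.0/22
print(ip_allowed("203.0.113.7", meta["git"]))   # False: in neither range
```

Because `ipaddress` understands CIDR notation directly, no manual conversion to explicit address ranges is needed for membership checks like this.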
We make changes to our IP addresses from time to time, and we keep this API up to date. We do not recommend allowing by IP address; however, if you use these IP ranges, we strongly encourage regular monitoring of our API.

To keep applications working, you must allow TCP ports 22, 80, 443, and 9418 via our IP ranges for `github.com`.

### Further reading

- "[Troubleshooting connectivity problems](/articles/troubleshooting-connectivity-problems)"
# External Dictionaries {#dicts-external-dicts}
You can add your own dictionaries from various data sources. The data source for a dictionary can be a local text or executable file, an HTTP resource, or another DBMS. For more information, see "[Sources for external dictionaries](external_dicts_dict_sources.md)".

ClickHouse:

- Fully or partially stores dictionaries in RAM.
- Periodically updates dictionaries and dynamically loads missing values. In other words, dictionaries can be loaded dynamically.
- Allows creating external dictionaries with xml files or [DDL queries](../create.md#create-dictionary-query).

The configuration of external dictionaries can be located in one or more xml files. The path to the configuration is specified in the [dictionaries\_config](../../operations/server_settings/settings.md#server_settings-dictionaries_config) parameter.

Dictionaries can be loaded at server startup or at first use, depending on the [dictionaries\_lazy\_load](../../operations/server_settings/settings.md#server_settings-dictionaries_lazy_load) setting.

The dictionary configuration file has the following format:
``` xml
<yandex>
<comment>An optional element with any content. Ignored by the ClickHouse server.</comment>
<!--Optional element. File name with substitutions-->
<include_from>/etc/metrika.xml</include_from>
<dictionary>
<!-- Dictionary configuration. -->
<!-- There can be any number of <dictionary> sections in the configuration file. -->
</dictionary>
</yandex>
```
You can [configure](external_dicts_dict.md) any number of dictionaries in the same file.

[DDL queries for dictionaries](../create.md#create-dictionary-query) do not require any additional records in the server configuration. They allow working with dictionaries as first-class entities, like tables or views.
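As a sketch of the DDL form described above — the database, table, host, and attribute names here are hypothetical, and the `SOURCE`, `LAYOUT`, and `LIFETIME` clauses must match your own setup:

```sql
CREATE DICTIONARY my_dict
(
    id UInt64,
    value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 DB 'default' TABLE 'source_table' USER 'default'))
LAYOUT(FLAT())
LIFETIME(MIN 300 MAX 360)
```

A dictionary declared this way can then be queried with the `dictGet*` functions, just like one configured through xml files.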
!!! attention "Attention"
    You can convert values for a small dictionary by describing it in a `SELECT` query (see the [transform](../functions/other_functions.md) function). This functionality is not related to external dictionaries.
## See Also {#ext-dicts-see-also}
- [Configuring an external dictionary](external_dicts_dict.md)
- [Storing dictionaries in memory](external_dicts_dict_layout.md)
- [Dictionary updates](external_dicts_dict_lifetime.md)
- [Sources of external dictionaries](external_dicts_dict_sources.md)
- [Dictionary key and fields](external_dicts_dict_structure.md)
- [Functions for working with external dictionaries](../functions/ext_dict_functions.md)
[Original article](https://clickhouse.tech/docs/es/query_language/dicts/external_dicts/) <!--hide-->