# somalia v0.1.0
* First release of `somalia` dataset package.
# Settings Plugin for Xamarin And Windows
Create and access settings from shared code across all of your apps!
## Documentation
Get started by reading through the [Settings Plugin documentation](https://jamesmontemagno.github.io/SettingsPlugin/).
Looking to store credentials and sensitive information? Use Xamarin.Essential's [Secure Storage](https://docs.microsoft.com/xamarin/essentials/secure-storage?WT.mc_id=friends-0000-jamont)
## NuGet
* [Xam.Plugins.Settings](http://www.nuget.org/packages/Xam.Plugins.Settings) [](https://www.nuget.org/packages/Xam.Plugins.Settings)
### The Future: [Xamarin.Essentials](https://docs.microsoft.com/xamarin/essentials/index?WT.mc_id=friends-0000-jamont)
I have been working on Plugins for Xamarin for a long time now. Through the years I have always wanted to create a single, optimized, and official package from the Xamarin team at Microsoft that could easily be consumed by any application. The time is now with [Xamarin.Essentials](https://docs.microsoft.com/xamarin/essentials/index?WT.mc_id=friends-0000-jamont), which offers over 50 cross-platform native APIs in a single optimized package. I worked on this new library with an amazing team of developers and I highly highly highly recommend you check it out.
I will continue to work on and maintain my Plugins, but I do recommend you check out Xamarin.Essentials to see if it is a great fit for your app as it has been for all of mine!
### Xamarin.Essentials Migration
This plugin and Xamarin.Essentials store information in the same exact location :). This means you can seamlessly swap out this plugin for Xamarin.Essentials and not lose any data. Check out my blog for more info: https://montemagno.com/upgrading-from-plugins-to-xamarin-essentials/
## Build:
* 
* CI NuGet Feed: http://myget.org/F/xamarin-plugins
**Platform Support**
|Platform|Version|
| ------------------- | :-----------: |
|Xamarin.iOS|iOS 7+|
|Xamarin.Android|API 15+|
|Windows 10 UWP|10+|
|Xamarin.Mac|All|
|Xamarin.tvOS|All|
|Xamarin.watchOS|All|
|.NET|4.5+|
|.NET Core|2.0+|
|Tizen|4.0+|
#### Settings Plugin or Xamarin.Forms App.Properties
I get this question a lot, so here it is from a recent issue opened up. This plugin saves specific properties directly to each platform's native settings APIs (NSUserDefaults, SharedPreferences, etc). This ensures the fastest, most secure, and most reliable creation and editing of settings per application. Additionally, it works with **any Xamarin application**, not just Xamarin.Forms.
App.Current.Properties actually serializes and deserializes items to disk as you can see in the [implementation](https://github.com/xamarin/Xamarin.Forms/blob/e6d5186c8acbf37b877c7ca3c77a378352a3743d/Xamarin.Forms.Platform.iOS/Deserializer.cs).
To me that isn't as reliable as saving direct to the native platforms settings.
# Contribution
Thank you for your interest in contributing to the Settings plugin! In this section we'll outline what you need to know about contributing and how to get started.
### Bug Fixes
Please browse open issues; if you're looking to fix something, it's possible that someone already reported it. Additionally, you can select any `up-for-grabs` items.
### Pull requests
Please fill out the pull request template when you send one.
Run tests to make sure your changes don't break any unit tests. Follow these instructions to run tests -
**iOS**
- Navigate to _tests/Plugin.Settings.NUnitTest.iOS_
- Execute `make run-simulator-tests`
**Android**
Execute `./build.sh --target RunDroidTests` from the project root
## License
The MIT License (MIT) see [License file](LICENSE)
### Want To Support This Project?
All I have ever asked is to be active by submitting bugs, features, and sending those pull requests down! Want to go further? Make sure to subscribe to my weekly development podcast [Merge Conflict](http://mergeconflict.fm), where I talk all about awesome Xamarin goodies and you can optionally support the show by becoming a [supporter on Patreon](https://www.patreon.com/mergeconflictfm).
# mio-circle
Let's make it easy to read and write Circle models.
# Simple boilerplate for small PHP web development.
Based on the great [HTML5 Boilerplate](https://html5boilerplate.com/) and intended for small PHP apps. It uses small custom routing (it needs URL rewriting to be activated on the server) and has no classes or autoloading. Public assets are in the webroot folder. Page files (views) are in /app/pages. If you need a new URL, simply add a new file here named after the desired URL. Each section after the first slash will be treated as a parameter and passed to the page file.
## Gulp
Includes a gulpfile for handling resource files and putting concatenated and minified files under the webroot folder. If you plan not to use gulp, you must replace main.min.css and main.min.js with your own files.
### Included frontend libraries via bower:
- Bootstrap
- Fontawesome
- jQuery
- jQuery validation
Install libraries with:
```
npm install
gulp bower
```
The default `bower` task will watch for changes in the resources folders.
### Missing features (also known as TO DO!)
- Javascript and server side simple validation.
- Database management (basic CRUD).
- ...

<img src="http://www.timescale.com/img/timescale-wordmark-blue.svg" alt="Timescale" width="300"/>
## What is TimescaleDB?
TimescaleDB is an open-source database designed to make SQL scalable
for time-series data. For more information, see
the [Timescale website](https://www.timescale.com).
## How to use this image
This image is based on the
official
[Postgres docker image](https://store.docker.com/images/postgres) so
the documentation for that image also applies here, including the
environment variables one can set, extensibility, etc.
### Starting a TimescaleDB instance
```
$ docker run -d --name some-timescaledb -p 5432:5432 timescale/timescaledb
```
Then connect with an app or the `psql` client:
```
$ docker run -it --net=host --rm timescale/timescaledb psql -h localhost -U postgres
```
You can also connect your app via port `5432` on the host machine.
If you are running your docker image for the first time, you can also set an environment variable, `TIMESCALEDB_TELEMETRY`, to set the level of [telemetry](https://docs.timescale.com/using-timescaledb/telemetry) in the Timescale docker instance. For example, to turn off telemetry, run:
```
$ docker run -d --name some-timescaledb -p 5432:5432 --env TIMESCALEDB_TELEMETRY=off timescale/timescaledb
```
Note that if the cluster has previously been initialized, you should not use this environment variable to set the level of telemetry. Instead, follow the [instructions](https://docs.timescale.com/using-timescaledb/telemetry) in our docs to disable telemetry once a cluster is running.
# Get-DomainControllerObject
## SYNOPSIS
Gets the domain controller object if the node is a domain controller.
## SYNTAX
```
Get-DomainControllerObject [-DomainName] <String> [[-ComputerName] <String>] [[-Credential] <PSCredential>]
[<CommonParameters>]
```
## DESCRIPTION
The Get-DomainControllerObject function is used to get the domain controller object if the node is a domain
controller, otherwise it returns $null.
## EXAMPLES
### EXAMPLE 1
```
Get-DomainControllerObject -DomainName contoso.com
```
## PARAMETERS
### -ComputerName
Specifies the name of the node to return the domain controller object for.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: False
Position: 2
Default value: $env:COMPUTERNAME
Accept pipeline input: False
Accept wildcard characters: False
```
### -Credential
Specifies the credentials to use when accessing the domain, or use the current user if not specified.
```yaml
Type: System.Management.Automation.PSCredential
Parameter Sets: (All)
Aliases:
Required: False
Position: 3
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -DomainName
Specifies the name of the domain that should contain the domain controller.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: True
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### None
## OUTPUTS
### Microsoft.ActiveDirectory.Management.ADDomainController
## NOTES
Throws an exception of Microsoft.ActiveDirectory.Management.ADServerDownException if the domain cannot be
contacted.
## RELATED LINKS
| 23.988372 | 316 | 0.743093 | eng_Latn | 0.595452 |
e911e8a5bc57ece25e9e9b2ee6ea43c6752b1d40 | 393 | md | Markdown | README.md | nathaliabruno/serasse-extension | d0c19d80a3d5d0e6b9999a5811aaf267c520f1b9 | [
"MIT"
] | null | null | null | README.md | nathaliabruno/serasse-extension | d0c19d80a3d5d0e6b9999a5811aaf267c520f1b9 | [
"MIT"
] | null | null | null | README.md | nathaliabruno/serasse-extension | d0c19d80a3d5d0e6b9999a5811aaf267c520f1b9 | [
"MIT"
] | null | null | null | # SERASSE??
## A funny extension created from a joke with my co-workers, where we repeat a popular Brazilian expression incorrectly.
- Work in progress
-------------
Boilerplate: [chrome-extension-webpack-boilerplate](https://github.com/samuelsimoes/chrome-extension-webpack-boilerplate) by Samuel Simões ~ [@samuelsimoes](https://twitter.com/samuelsimoes) ~ [Blog](http://blog.samuelsimoes.com/)
# jekyll-blog
Attempt at setting up a Jekyll blog following a tutorial.
http://quovixi.github.io/jekyll-blog/
---
title: Use the New Instance Wizard | Microsoft Docs
ms.custom: ''
ms.date: 03/01/2017
ms.prod: sql
ms.prod_service: integration-services
ms.reviewer: ''
ms.technology: integration-services
ms.topic: conceptual
ms.assetid: dfc09f71-7037-4cd5-a3cd-c79f8c714e22
author: chugugrace
ms.author: chugu
ms.openlocfilehash: d964639bbcf7679c71191d4b3a8e2a455a4635e8
ms.sourcegitcommit: 58158eda0aa0d7f87f9d958ae349a14c0ba8a209
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 03/30/2020
ms.locfileid: "71298557"
---
# <a name="use-the-new-instance-wizard"></a>Use the New Instance Wizard
[!INCLUDE[ssis-appliesto](../../includes/ssis-appliesto-ssvrpluslinux-asdb-asdw-xxx.md)]
The New Instance Wizard lets you create a new instance for a CDC service. The Create an Oracle CDC Instance wizard is displayed from the CDC Designer Console. In the New Instance Wizard you can perform the following tasks.
- [Create the SQL Server change database](../../integration-services/change-data-capture/create-the-sql-server-change-database.md)
- [Connect to an Oracle source database](../../integration-services/change-data-capture/connect-to-an-oracle-source-database.md)
- [Connect to Oracle](../../integration-services/change-data-capture/connect-to-oracle.md)
- [Select Oracle tables and columns](../../integration-services/change-data-capture/select-oracle-tables-and-columns.md)
- [Select Oracle tables for capturing changes](../../integration-services/change-data-capture/select-oracle-tables-for-capturing-changes.md)
- [Make changes to the tables selected for capturing changes](../../integration-services/change-data-capture/make-changes-to-the-tables-selected-for-capturing-changes.md)
- [Generate and run the supplemental logging script](../../integration-services/change-data-capture/generate-and-run-the-supplemental-logging-script.md)
- [Generate mirror tables and CDC capture instances](../../integration-services/change-data-capture/generate-mirror-tables-and-cdc-capture-instances.md)
- [Finish](../../integration-services/change-data-capture/finish.md)
## <a name="see-also"></a>See Also
[How to create the SQL Server change database instance](../../integration-services/change-data-capture/how-to-create-the-sql-server-change-database-instance.md)
# Karthik's BLOG
<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="hub.load" />
<meta itemprop="path" content="Stable" />
</div>
# hub.load
``` python
hub.load(
handle,
tags=None
)
```
Loads a module from a handle.
Currently this method is fully supported only with Tensorflow 2.x and with
modules created by calling tensorflow.saved_model.save(). The method works in
both eager and graph modes.
Depending on the type of handle used, the call may involve downloading a
Tensorflow Hub module to a local cache location specified by the
TFHUB_CACHE_DIR environment variable. If a copy of the module is already
present in the TFHUB_CACHE_DIR, the download step is skipped.
Currently, three types of module handles are supported:
1) Smart URL resolvers such as tfhub.dev, e.g.:
https://tfhub.dev/google/nnlm-en-dim128/1.
2) A directory on a file system supported by Tensorflow containing module
files. This may include a local directory (e.g. /usr/local/mymodule) or a
Google Cloud Storage bucket (gs://mymodule).
3) A URL pointing to a TGZ archive of a module, e.g.
https://example.com/mymodule.tar.gz.
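The three handle types can be distinguished mechanically. The helper below is purely illustrative (it is not part of the tensorflow_hub API) and simply mirrors the resolution rules listed above:

```python
def classify_handle(handle):
    """Classify a module handle per the three supported types (illustrative only)."""
    if handle.startswith(("http://", "https://")):
        if handle.endswith((".tar.gz", ".tgz")):
            return "tgz_archive"   # type 3: URL pointing to a TGZ archive
        return "smart_url"         # type 1: smart URL resolver such as tfhub.dev
    return "filesystem"            # type 2: local directory or gs:// bucket
```

Note that a `gs://` bucket falls under the file-system case, since Google Cloud Storage paths are handled by TensorFlow's file system layer rather than by HTTP download.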
#### Args:
* <b>`handle`</b>: (string) the Module handle to resolve.
* <b>`tags`</b>: A set of strings specifying the graph variant to use, if loading from
a v1 module.
#### Returns:
A trackable object (see tf.saved_model.load() documentation for details).
#### Raises:
* <b>`NotImplementedError`</b>: If the code is running against an incompatible (1.x)
version of TF.
### 2018.11.02
[Tool] VS Code JavaScript Snippets can greatly simplify typing, for example import from. If you want to see more, search GitHub for VS Code Snippets: <https://github.com/xabikos/VS Code-javascript>
[Library] Webpack two-level caching to speed up startup time: <https://github.com/mzgoddard/hard-source-webpack-plugin>
[Library] Display charts in the command line. The use case I can think of: when building multiple projects, you can see at a glance which project takes longer to build. <https://github.com/chunqiuyiyu/ervy>
# RobocopyPS Release History
## 0.2.7 - 2021-10-17
### Added
* Added parameters ExcludeDirectory and ExcludeFileName to Get-RoboItem
### Changed
* Fixed problem with parameters ExcludeDirectory and ExcludeFileName
## 0.2.6 - 2021-09-02
### Added
* Added new cmdlets Get-RoboItem, Remove-RoboItem,Copy-RoboItem,Move-RoboItem.
### Changed
* Removed all ParameterSetName in Invoke-Robocopy, including IncludeSubDirectories (/s) and IncludeEmptySubDirectories (/e), as they are not mutually exclusive.
* Changed Pester Tests to match Version 5 of Pester.
* Changed help file for Get-Help.
* Changed how we validate source directories.
## 0.2.5 - 2021-08-12
### Added
* Added parameters so the module is in phase with native Robocopy, tested on Windows 10 21H1
### Changed
* Removed some of the forced parameters we use (example /v is not used if -Verbose is not specified)
* Changed some tests to be compatible with Pester version 5
* Changed documentation
### Removed
* Removed some tests
## 0.2.2 - 2019-07-18
### Fixed
* Fixed problem with parameters ExcludeFileName and ExcludeDirectory where you could not specify multiple files/directories.
### Added
* Added functionality to Exclude/IncludeAttribute and Remove/AddAttribute
## 0.2.0 - 2019-07-16
### Changed
* A rewrite was done to handle error codes better and more precisely. The function names were also changed: Start-Robocopy became Invoke-Robocopy, and the internal logic handling output from Robocopy.exe was extracted to Invoke-RobocopyParser.
* All other functions was removed during this release so they can be re-worked to follow the new standard.
## 0.1.0 - 2019-05-30
### Fixed
* No fix as this is the first release
### Added
* Added function Start-Robocopy
* Added function Remove-RoboDirectory
### Changed
* No change as this is the first release
---
title: "Blog posting as a beginner programmer"
date: 2022-01-12 13:45
categories: "Writing"
published: false
---
# Overview
I want to organize the information I needed to create my first blog and set up a tech blog, the information I'll need to run the blog going forward, and my thoughts on blogging. (To be polished further over time.)
<details>
<summary>Table of contents</summary>
<div markdown="1">
<ol >
<li>Why blog?</li>
<li>Things to consider when creating a blog</li>
<li>Things to consider when running a blog</li>
<li>Things to consider when posting</li>
<li>Things to consider when writing</li>
</ol>
</div>
</details>
<br>
# Why do I run a blog?
I think there are many different reasons to run a blog. Some people blog to communicate, some to share knowledge, and some to make money. So why do I want to run a blog?
<br>
<br>
### First: Keeping a record
The biggest reason I wanted to run a blog is to keep a record. Recording means leaving something behind; by recording, I can revisit moments that were precious to me and preserve the feelings of those times. Another reason is that I want to leave behind and organize the knowledge I have studied, and record my growth as I improve on the problems I found difficult and the mistakes I made.
<br>
### Second: Communication
---
id: 489
title: 55 HR questions
date: 2005-12-19T19:48:00+02:00
author: alexrb
layout: post
guid: http://alexrb.name/?p=489
permalink: /2005/12/55-voprosov-hr/
lj_itemid:
- "483"
lj_permalink:
- http://alexrb-aka-ral.livejournal.com/123855.html
post_views_count:
- "4"
categories:
- Lost-and-found
tags:
- work
---
http://www.techinterviews.com/?p=230
In English
With answers
Maybe it will be useful to someone
e91686e9bf2a70a617f717d3e94931fa2d6665d2 | 2,477 | md | Markdown | README.md | FerramONG/ferramong-pay | 1bdd744fa79f4538e34f79b8007480b6b3a9df4b | [
"0BSD"
] | null | null | null | README.md | FerramONG/ferramong-pay | 1bdd744fa79f4538e34f79b8007480b6b3a9df4b | [
"0BSD"
] | null | null | null | README.md | FerramONG/ferramong-pay | 1bdd744fa79f4538e34f79b8007480b6b3a9df4b | [
"0BSD"
] | null | null | null | <p align='center'>
<img width="250px" src='https://raw.githubusercontent.com/FerramONG/ferramong-pay/master/docs/img/logo/logo.png?raw=true' />
</p>
<h1 align='center'>FerramONG - Pay</h1>
<p align='center'>Service responsible for managing users' purchases on the platform.</p>
<p align="center">
<a href="https://github.com/FerramONG/ferramong-pay/actions/workflows/windows.yml"><img src="https://github.com/FerramONG/ferramong-pay/actions/workflows/windows.yml/badge.svg" alt=""></a>
<a href="https://github.com/FerramONG/ferramong-pay/actions/workflows/macos.yml"><img src="https://github.com/FerramONG/ferramong-pay/actions/workflows/macos.yml/badge.svg" alt=""></a>
<a href="https://github.com/FerramONG/ferramong-pay/actions/workflows/ubuntu.yml"><img src="https://github.com/FerramONG/ferramong-pay/actions/workflows/ubuntu.yml/badge.svg" alt=""></a>
<a href="http://java.oracle.com"><img src="https://img.shields.io/badge/java-12+-D0008F.svg" alt="Java compatibility"></a>
<a href="https://github.com/FerramONG/ferramong-pay/blob/master/LICENSE"><img src="https://img.shields.io/badge/License-BSD0-919191.svg" alt="License"></a>
<a href="https://github.com/FerramONG/ferramong-pay/releases"><img src="https://img.shields.io/github/v/release/FerramONG/ferramong-pay" alt="Release"></a>
</p>
<hr />
## ❇ Introduction
This source code is a webservice that is used to manage users' purchases on the platform.
## ❓ How to use
See the OpenAPI doc [here](https://ferramong-pay.herokuapp.com/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config).
## ⚠ Warnings
The Heroku hosting service may take a while (~1 min) to start the application, so the website may load with some delay.
## 🚩 Changelog
Details about each version are documented in the [releases section](https://github.com/FerramONG/ferramong-pay/releases).
## 🗺 Project structure
#### FerramONG architecture

#### Pay class diagram

## 📁 Files
### /
| Name |Type|Description|
|----------------|-------------------------------|-----------------------------|
|dist |`Directory`|Released versions|
|docs |`Directory`|Documentation files|
|src |`Directory`| Source files |
|test |`Directory`| Test files |
| 56.295455 | 189 | 0.716996 | yue_Hant | 0.540929 |
e9176b6e181e679e81903a14078bf81ede404d0d | 25,910 | md | Markdown | articles/cost-management-billing/costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md | kalleantero/azure-docs | 50585080657eabcecdb7fa01536ca5f5aecdc600 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-11-22T02:45:17.000Z | 2022-03-22T07:08:33.000Z | articles/cost-management-billing/costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md | kalleantero/azure-docs | 50585080657eabcecdb7fa01536ca5f5aecdc600 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cost-management-billing/costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md | kalleantero/azure-docs | 50585080657eabcecdb7fa01536ca5f5aecdc600 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-09T16:27:39.000Z | 2021-01-09T16:27:39.000Z | ---
title: Migrate from Enterprise Reporting to Azure Resource Manager APIs
description: This article helps you understand the differences between the Reporting APIs and the Azure Resource Manager APIs, what to expect when you migrate to the Azure Resource Manager APIs, and the new capabilities that are available with the new Azure Resource Manager APIs.
author: bandersmsft
ms.reviewer: adwise
ms.service: cost-management-billing
ms.subservice: common
ms.topic: reference
ms.date: 11/19/2020
ms.author: banders
---
# Migrate from Enterprise Reporting to Azure Resource Manager APIs
This article helps developers who have built custom solutions using the [Azure Reporting APIs for Enterprise Customers](../manage/enterprise-api.md) to migrate onto the Azure Resource Manager APIs for Cost Management. Service Principal support for the newer Azure Resource Manager APIs is now generally available. Azure Resource Manager APIs are in active development. Consider migrating to them instead of using the older Azure Reporting APIs for Enterprise Customers. The older APIs are being deprecated. This article helps you understand the differences between the Reporting APIs and the Azure Resource Manager APIs, what to expect when you migrate to the Azure Resource Manager APIs, and the new capabilities that are available with the new Azure Resource Manager APIs.
## API differences
The following information describes the differences between the older Reporting APIs for Enterprise Customers and the newer Azure Resource Manager APIs.
| **Use** | **Enterprise Agreement APIs** | **Azure Resource Manager APIs** |
| --- | --- | --- |
| Authentication | API Key provisioned in the Enterprise Agreement (EA) portal | Azure Active Directory (Azure AD) Authentication using User tokens or Service Principals. Service Principals take the place of API Keys. |
| Scopes and Permissions | All requests are at the Enrollment scope. The API Key permission assignments will determine whether data for the entire Enrollment, a Department, or a specific Account is returned. No user authentication. | Users or Service Principals are assigned access to the Enrollment, Department, or Account scope. |
| URI Endpoint | https://consumption.azure.com | https://management.azure.com |
| Development Status | In maintenance mode. On the path to deprecation. | Actively being developed |
| Available APIs | Limited to what is available currently | Equivalent APIs are available to replace each EA API. <p> Additional [Cost Management APIs](/rest/api/cost-management/) are also available to you, including: <p> <ul><li>Budgets</li><li>Alerts</li><li>Exports</li></ul> |
## Migration checklist
- Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).
- Determine which EA APIs you use and see which Azure Resource Manager APIs to move to at [EA API mapping to new Azure Resource Manager APIs](#ea-api-mapping-to-new-azure-resource-manager-apis).
- Configure service authorization and authentication for the Azure Resource Manager APIs
- If you're not already using Azure Resource Manager APIs, [register your client app with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad). Registration creates a service principal for you to use to call the APIs.
- Assign the service principal access to the scopes needed, as outlined below.
- Update any programming code to use [Azure AD authentication](/rest/api/azure/#create-the-request) with your Service Principal.
- Test the APIs and then update any programming code to replace EA API calls with Azure Resource Manager API calls.
- Update error handling to use new error codes. Some considerations include:
  - Azure Resource Manager APIs have a timeout period of 60 seconds.
  - Azure Resource Manager APIs have rate limiting in place. This results in a 429 throttling error if rates are exceeded. Build your solutions so that you don't place too many API calls in a short time period.
- Review the other Cost Management APIs available through Azure Resource Manager and assess for use later. For more information, see [Use additional Cost Management APIs](#use-additional-cost-management-apis).
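Two of the checklist items above, the 60-second timeout and 429 throttling, usually translate into a small retry helper in client code. A minimal Python sketch follows; the function names are illustrative and not part of any Azure SDK:

```python
import random

def should_retry(status_code, attempt, max_attempts=5):
    """Retry only throttling (429) and transient server errors (5xx)."""
    if attempt >= max_attempts:
        return False
    return status_code == 429 or 500 <= status_code < 600

def next_delay(attempt, retry_after=None, cap=60.0):
    """Seconds to wait before retrying a throttled ARM call.

    Honors a Retry-After value when the service supplies one, and
    otherwise falls back to capped exponential backoff with jitter.
    """
    if retry_after is not None:
        return float(retry_after)
    base = min(cap, 2.0 ** attempt)
    return base / 2 + random.uniform(0, base / 2)

print(next_delay(0, retry_after=10))  # 10.0
print(should_retry(429, attempt=1))   # True
print(should_retry(404, attempt=1))   # False
```

Wrapping every ARM request in this kind of policy keeps the throttling guidance in one place instead of scattering sleeps through the code.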
## Assign Service Principal access to Azure Resource Manager APIs
After you create a Service Principal to programmatically call the Azure Resource Manager APIs, you need to assign it the proper permissions to authorize against and execute requests in Azure Resource Manager. There are two permission frameworks for different scenarios.
### Azure Billing Hierarchy Access
To assign Service Principal permissions to your Enterprise Billing Account, Departments, or Enrollment Account scopes, use [Billing Permissions](/rest/api/billing/2019-10-01-preview/billingpermissions), [Billing Role Definitions](/rest/api/billing/2019-10-01-preview/billingroledefinitions), and [Billing Role Assignments](/rest/api/billing/2019-10-01-preview/billingroleassignments) APIs.
- Use the Billing Permissions APIs to identify the permissions that a Service Principal already has on a given scope, like a Billing Account or Department.
- Use the Billing Role Definitions APIs to enumerate the available roles that can be assigned to your Service Principal.
- Only Read-Only EA Admin and Read-Only Department Admin roles can be assigned to Service Principals at this time.
- Use the Billing Role Assignments APIs to assign a role to your Service Principal.
The following example shows how to call the Role Assignments API to grant a Service Principal access to your billing account. We recommend using [Postman](https://postman.com) to do these one-time permission configurations.
```json
POST https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountName}/createBillingRoleAssignment?api-version=2019-10-01-preview
```
#### Request Body
```json
{
"principalId": "00000000-0000-0000-0000-000000000000",
"billingRoleDefinitionId": "/providers/Microsoft.Billing/billingAccounts/{billingAccountName}/providers/Microsoft.Billing/billingRoleDefinition/10000000-aaaa-bbbb-cccc-100000000000"
}
```
### Azure role-based access control
New Service Principal support extends to Azure-specific scopes, like management groups, subscriptions, and resource groups. You can assign Service Principal permissions to these scopes directly [in the Azure portal](../../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application) or by using [Azure PowerShell](../../active-directory/develop/howto-authenticate-service-principal-powershell.md#assign-the-application-to-a-role).
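As a rough illustration of the pieces above, the client-credentials token exchange and a scoped request URL can be assembled as follows. This is a sketch only: the tenant and app registration values are placeholders, and the HTTP send itself is omitted.

```python
from urllib.parse import urlencode

ARM = "https://management.azure.com"

def token_request(tenant_id, client_id, client_secret):
    """Form POST for an ARM access token via client credentials.

    POST the returned body to the returned URL, then read
    `access_token` out of the JSON response.
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": ARM + "/",
    })
    return url, body

def balances_url(billing_account_id, api_version="2019-10-01"):
    """Consumption balances URL for an EA billing account (enrollment)."""
    return (f"{ARM}/providers/Microsoft.Billing/"
            f"billingAccounts/{billing_account_id}"
            f"/providers/Microsoft.Consumption/"
            f"balances?api-version={api_version}")

# Placeholders only; substitute your own tenant and app registration.
url, form = token_request("<tenant-guid>", "<client-id>", "<client-secret>")
print(balances_url("123456"))
```

The bearer token returned by the login endpoint then goes in the `Authorization` header of requests to URLs like the one printed above.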
## EA API mapping to new Azure Resource Manager APIs
Use the table below to identify the EA APIs that you currently use and the replacement Azure Resource Manager API to use instead.
| **Scenario** | **EA APIs** | **Azure Resource Manager APIs** |
| --- | --- | --- |
| Balance Summary | [/balancesummary](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) |[Microsoft.Consumption/balances](/rest/api/consumption/balances/getbybillingaccount) |
| Price Sheet | [/pricesheet](/rest/api/billing/enterprise/billing-enterprise-api-pricesheet) | [Microsoft.Consumption/pricesheets/default](/rest/api/consumption/pricesheet) – use for negotiated prices <p> [Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices) – use for retail prices |
| Reserved Instance Details | [/reservationdetails](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-usage) | [Microsoft.CostManagement/generateReservationDetailsReport](/rest/api/cost-management/generatereservationdetailsreport) |
| Reserved Instance Summary | [/reservationsummaries](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-usage) | [Microsoft.Consumption/reservationSummaries](/rest/api/consumption/reservationssummaries/list#reservationsummariesdailywithbillingaccountid) |
| Reserved Instance Recommendations | [/SharedReservationRecommendations](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-recommendation)<p>[/SingleReservationRecommendations](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-recommendation) | [Microsoft.Consumption/reservationRecommendations](/rest/api/consumption/reservationrecommendations/list) |
| Reserved Instance Charges | [/reservationcharges](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-charges) | [Microsoft.Consumption/reservationTransactions](/rest/api/consumption/reservationtransactions/list) |
## Migration details by API
The following sections show old API request examples with new replacement API examples.
### Balance Summary API
Use the following request URIs when calling the new Balance Summary API. Your enrollment number should be used as the `billingAccountId`.
#### Supported requests
[Get for Enrollment](/rest/api/consumption/balances/getbybillingaccount)
```json
https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.Consumption/balances?api-version=2019-10-01
```
### Response body changes
_Old response body_:
```json
{
"id": "enrollments/100/billingperiods/201507/balancesummaries",
"billingPeriodId": 201507,
"currencyCode": "USD",
"beginningBalance": 0,
"endingBalance": 1.1,
"newPurchases": 1,
"adjustments": 1.1,
"utilized": 1.1,
"serviceOverage": 1,
"chargesBilledSeparately": 1,
"totalOverage": 1,
"totalUsage": 1.1,
"azureMarketplaceServiceCharges": 1,
"newPurchasesDetails": [
{
"name": "",
"value": 1
}
],
"adjustmentDetails": [
{
"name": "Promo Credit",
"value": 1.1
},
{
"name": "SIE Credit",
"value": 1.0
}
]
}
```
_New response body_:
The same data is now available in the `properties` field of the new API response. There might be minor changes to the spelling of some field names.
```json
{
"id": "/providers/Microsoft.Billing/billingAccounts/123456/providers/Microsoft.Billing/billingPeriods/201702/providers/Microsoft.Consumption/balances/balanceId1",
"name": "balanceId1",
"type": "Microsoft.Consumption/balances",
"properties": {
"currency": "USD ",
"beginningBalance": 3396469.19,
"endingBalance": 2922371.02,
"newPurchases": 0,
"adjustments": 0,
"utilized": 474098.17,
"serviceOverage": 0,
"chargesBilledSeparately": 0,
"totalOverage": 0,
"totalUsage": 474098.17,
"azureMarketplaceServiceCharges": 609.82,
"billingFrequency": "Month",
"priceHidden": false,
"newPurchasesDetails": [
{
"name": "Promo Purchase",
"value": 1
}
],
"adjustmentDetails": [
{
"name": "Promo Credit",
"value": 1.1
},
{
"name": "SIE Credit",
"value": 1
}
]
}
}
```
### Price Sheet
Use the following request URIs when calling the new Price Sheet API.
#### Supported requests
You can call the API using the following scopes:
- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`
- Subscription: `subscriptions/{subscriptionId}`
[_Get for current Billing Period_](/rest/api/consumption/pricesheet/get)
```json
https://management.azure.com/{scope}/providers/Microsoft.Consumption/pricesheets/default?api-version=2019-10-01
```
[_Get for specified Billing Period_](/rest/api/consumption/pricesheet/getbybillingperiod)
```json
https://management.azure.com/{scope}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}/providers/Microsoft.Consumption/pricesheets/default?api-version=2019-10-01
```
#### Response body changes
_Old response_:
```json
[
{
"id": "enrollments/57354989/billingperiods/201601/products/343/pricesheets",
"billingPeriodId": "201704",
"meterId": "dc210ecb-97e8-4522-8134-2385494233c0",
"meterName": "A1 VM",
"unitOfMeasure": "100 Hours",
"includedQuantity": 0,
"partNumber": "N7H-00015",
"unitPrice": 0.00,
"currencyCode": "USD"
},
{
"id": "enrollments/57354989/billingperiods/201601/products/2884/pricesheets",
"billingPeriodId": "201404",
"meterId": "dc210ecb-97e8-4522-8134-5385494233c0",
"meterName": "Locally Redundant Storage Premium Storage - Snapshots - AU East",
"unitOfMeasure": "100 GB",
"includedQuantity": 0,
"partNumber": "N9H-00402",
"unitPrice": 0.00,
"currencyCode": "USD"
},
...
]
```
_New response_:
Old data is now in the `pricesheets` field of the new API response. Meter details information is also provided.
```json
{
"id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Billing/billingPeriods/201702/providers/Microsoft.Consumption/pricesheets/default",
"name": "default",
"type": "Microsoft.Consumption/pricesheets",
"properties": {
"nextLink": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.consumption/pricesheets/default?api-version=2018-01-31&$skiptoken=AQAAAA%3D%3D&$expand=properties/pricesheets/meterDetails",
"pricesheets": [
{
"billingPeriodId": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Billing/billingPeriods/201702",
"meterId": "00000000-0000-0000-0000-000000000000",
"unitOfMeasure": "100 Hours",
"includedQuantity": 100,
"partNumber": "XX-11110",
"unitPrice": 0.00000,
"currencyCode": "EUR",
"offerId": "OfferId 1",
"meterDetails": {
"meterName": "Data Transfer Out (GB)",
"meterCategory": "Networking",
"unit": "GB",
"meterLocation": "Zone 2",
"totalIncludedQuantity": 0,
"pretaxStandardRate": 0.000
}
}
]
}
}
```
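Large price sheets are paged through `properties.nextLink`, so client code should loop until the link is absent. Below is a sketch of the accumulation logic, with an in-memory stand-in for the HTTP fetch:

```python
def next_link(response):
    """Continuation URL from a price sheet response, if any."""
    return (response.get("properties") or {}).get("nextLink")

def collect_pricesheet(fetch, first_url):
    """Follow nextLink pages, concatenating the pricesheets arrays.

    `fetch` is any callable mapping a URL to a parsed JSON dict,
    e.g. a thin wrapper over your HTTP client.
    """
    rows, url = [], first_url
    while url:
        page = fetch(url)
        rows.extend(page["properties"].get("pricesheets", []))
        url = next_link(page)
    return rows

# In-memory stand-in for two pages of HTTP responses:
pages = {
    "page1": {"properties": {"pricesheets": [{"partNumber": "N7H-00015"}],
                             "nextLink": "page2"}},
    "page2": {"properties": {"pricesheets": [{"partNumber": "N9H-00402"}],
                             "nextLink": None}},
}
print([r["partNumber"] for r in collect_pricesheet(pages.get, "page1")])
# ['N7H-00015', 'N9H-00402']
```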
### Reserved instance usage details
Microsoft isn't actively working on the synchronous Reservation Details APIs. We recommend that you move to the newer asynchronous, Service Principal-supported API call pattern as part of the migration. Asynchronous requests handle large amounts of data better and reduce timeout errors.
#### Supported requests
Use the following request URIs when calling the new Asynchronous Reservation Details API. Your enrollment number should be used as the `billingAccountId`. You can call the API with the following scopes:
- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`
#### Sample request to generate a reservation details report
```json
POST
https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.CostManagement/generateReservationDetailsReport?startDate={startDate}&endDate={endDate}&api-version=2019-11-01
```
#### Sample request to poll report generation status
```json
GET
https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.CostManagement/reservationDetailsOperationResults/{operationId}?api-version=2019-11-01
```
#### Sample poll response
```json
{
"status": "Completed",
"properties": {
"reportUrl": "https://storage.blob.core.windows.net/details/20200911/00000000-0000-0000-0000-000000000000?sv=2016-05-31&sr=b&sig=jep8HT2aphfUkyERRZa5LRfd9RPzjXbzB%2F9TNiQ",
"validUntil": "2020-09-12T02:56:55.5021869Z"
}
}
```
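The generate-then-poll flow shown above lends itself to a couple of small helpers: URL builders plus a completion check. A sketch follows; the actual HTTP calls are left to your client of choice:

```python
ARM = "https://management.azure.com"
SCOPE = "providers/Microsoft.Billing/billingAccounts"

def submit_url(billing_account_id, start_date, end_date,
               api_version="2019-11-01"):
    """POST here to start generating the reservation details report."""
    return (f"{ARM}/{SCOPE}/{billing_account_id}"
            f"/providers/Microsoft.CostManagement"
            f"/generateReservationDetailsReport"
            f"?startDate={start_date}&endDate={end_date}"
            f"&api-version={api_version}")

def poll_url(billing_account_id, operation_id, api_version="2019-11-01"):
    """GET here until the response reports status 'Completed'."""
    return (f"{ARM}/{SCOPE}/{billing_account_id}"
            f"/providers/Microsoft.CostManagement"
            f"/reservationDetailsOperationResults/{operation_id}"
            f"?api-version={api_version}")

def report_ready(poll_response):
    """Return the CSV download URL once generation has completed."""
    if poll_response.get("status") != "Completed":
        return None
    return poll_response["properties"]["reportUrl"]

sample = {"status": "Completed",
          "properties": {"reportUrl": "https://example.invalid/report.csv",
                         "validUntil": "2020-09-12T02:56:55Z"}}
print(report_ready(sample))  # https://example.invalid/report.csv
```

Download the file at the returned URL before `validUntil` expires; the storage link is time-limited.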
#### Response body changes
The response of the older synchronous Reservation Details API is shown below.
_Old response_:
```json
{
"reservationOrderId": "00000000-0000-0000-0000-000000000000",
"reservationId": "00000000-0000-0000-0000-000000000000",
"usageDate": "2018-02-01T00:00:00",
"skuName": "Standard_F2s",
"instanceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resourvegroup1/providers/microsoft.compute/virtualmachines/VM1",
"totalReservedQuantity": 18.000000000000000,
"reservedHours": 432.000000000000000,
"usedHours": 400.000000000000000
}
```
_New response_:
The new API creates a CSV file for you. See the following file fields.
| **Old Property** | **New Property** | **Notes** |
| --- | --- | --- |
| | InstanceFlexibilityGroup | New property for instance flexibility. |
| | InstanceFlexibilityRatio | New property for instance flexibility. |
| instanceId | InstanceName | |
| | Kind | It's a new property. Value is `None`, `Reservation`, or `IncludedQuantity`. |
| reservationId | ReservationId | |
| reservationOrderId | ReservationOrderId | |
| reservedHours | ReservedHours | |
| skuName | SkuName | |
| totalReservedQuantity | TotalReservedQuantity | |
| usageDate | UsageDate | |
| usedHours | UsedHours | |
### Reserved Instance Usage Summary
Use the following request URIs to call the new Reservation Summaries API.
#### Supported requests
Call the API with the following scopes:
- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`
[_Get Reservation Summary Daily_](/rest/api/consumption/reservationssummaries/list#reservationsummariesdailywithbillingaccountid)
```json
https://management.azure.com/{scope}/providers/Microsoft.Consumption/reservationSummaries?grain=daily&$filter=properties/usageDate ge 2017-10-01 AND properties/usageDate le 2017-11-20&api-version=2019-10-01
```
[_Get Reservation Summary Monthly_](/rest/api/consumption/reservationssummaries/list#reservationsummariesmonthlywithbillingaccountid)
```json
https://management.azure.com/{scope}/providers/Microsoft.Consumption/reservationSummaries?grain=monthly&$filter=properties/usageDate ge 2017-10-01 AND properties/usageDate le 2017-11-20&api-version=2019-10-01
```
#### Response body changes
_Old response_:
```json
[
{
"reservationOrderId": "00000000-0000-0000-0000-000000000000",
"reservationId": "00000000-0000-0000-0000-000000000000",
"skuName": "Standard_F1s",
"reservedHours": 24,
"usageDate": "2018-05-01T00:00:00",
"usedHours": 23,
"minUtilizationPercentage": 0,
"avgUtilizationPercentage": 95.83,
"maxUtilizationPercentage": 100
}
]
```
_New response_:
```json
{
"value": [
{
"id": "/providers/Microsoft.Billing/billingAccounts/12345/providers/Microsoft.Consumption/reservationSummaries/reservationSummaries_Id1",
"name": "reservationSummaries_Id1",
"type": "Microsoft.Consumption/reservationSummaries",
"tags": null,
"properties": {
"reservationOrderId": "00000000-0000-0000-0000-000000000000",
"reservationId": "00000000-0000-0000-0000-000000000000",
"skuName": "Standard_B1s",
"reservedHours": 720,
"usageDate": "2018-09-01T00:00:00-07:00",
"usedHours": 0,
"minUtilizationPercentage": 0,
"avgUtilizationPercentage": 0,
"maxUtilizationPercentage": 0
}
}
]
}
```
### Reserved instance recommendations
Use the following request URIs to call the new Reservation Recommendations API.
#### Supported requests
Call the API with the following scopes:
- Enrollment: `providers/Microsoft.Billing/billingAccounts/{billingAccountId}`
- Subscription: `subscriptions/{subscriptionId}`
- Resource Groups: `subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}`
[_Get Recommendations_](/rest/api/consumption/reservationrecommendations/list)
Both the shared and the single scope recommendations are available through this API. You can also filter on the scope as an optional API parameter.
```json
https://management.azure.com/providers/Microsoft.Billing/billingAccounts/123456/providers/Microsoft.Consumption/reservationRecommendations?api-version=2019-10-01
```
#### Response body changes
Recommendations for Shared and Single scopes are combined into one API.
_Old response_:
```json
[{
"subscriptionId": "1111111-1111-1111-1111-111111111111",
"lookBackPeriod": "Last7Days",
"meterId": "2e3c2132-1398-43d2-ad45-1d77f6574933",
"skuName": "Standard_DS1_v2",
"term": "P1Y",
"region": "westus",
"costWithNoRI": 186.27634908960002,
"recommendedQuantity": 9,
"totalCostWithRI": 143.12931642978083,
"netSavings": 43.147032659819189,
"firstUsageDate": "2018-02-19T00:00:00"
}
]
```
_New response_:
```json
{
"value": [
{
"id": "billingAccount/123456/providers/Microsoft.Consumption/reservationRecommendations/00000000-0000-0000-0000-000000000000",
"name": "00000000-0000-0000-0000-000000000000",
"type": "Microsoft.Consumption/reservationRecommendations",
"location": "westus",
"sku": "Standard_DS1_v2",
"kind": "legacy",
"properties": {
"meterId": "00000000-0000-0000-0000-000000000000",
"term": "P1Y",
"costWithNoReservedInstances": 12.0785105,
"recommendedQuantity": 1,
"totalCostWithReservedInstances": 11.4899644807748,
"netSavings": 0.588546019225182,
"firstUsageDate": "2019-07-07T00:00:00-07:00",
"scope": "Shared",
"lookBackPeriod": "Last7Days",
"instanceFlexibilityRatio": 1,
"instanceFlexibilityGroup": "DSv2 Series",
"normalizedSize": "Standard_DS1_v2",
"recommendedQuantityNormalized": 1,
"skuProperties": [
{
"name": "Cores",
"value": "1"
},
{
"name": "Ram",
"value": "1"
}
]
}
},
]
}
```
### Reserved instance charges
Use the following request URIs to call the new Reserved Instance Charges API.
#### Supported requests
[_Get Reservation Charges by Date Range_](/rest/api/consumption/reservationtransactions/list)
```json
https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.Consumption/reservationTransactions?$filter=properties/eventDate+ge+2020-05-20+AND+properties/eventDate+le+2020-05-30&api-version=2019-10-01
```
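The `$filter` expression in the request above is easy to mistype, so it can help to generate it from its parts. A small sketch (the `+` encoding matches the sample request):

```python
def event_date_filter(start, end):
    """OData filter for reservation transactions between two dates."""
    return (f"properties/eventDate ge {start} "
            f"AND properties/eventDate le {end}")

def transactions_url(billing_account_id, start, end,
                     api_version="2019-10-01"):
    flt = event_date_filter(start, end).replace(" ", "+")
    return ("https://management.azure.com/providers/Microsoft.Billing"
            f"/billingAccounts/{billing_account_id}"
            "/providers/Microsoft.Consumption/reservationTransactions"
            f"?$filter={flt}&api-version={api_version}")

print(transactions_url("123456", "2020-05-20", "2020-05-30"))
```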
#### Response body changes
_Old response_:
```json
[
{
"purchasingEnrollment": "string",
"armSkuName": "Standard_F1s",
"term": "P1Y",
"region": "eastus",
"PurchasingsubscriptionGuid": "00000000-0000-0000-0000-000000000000",
"PurchasingsubscriptionName": "string",
"accountName": "string",
"accountOwnerEmail": "string",
"departmentName": "string",
"costCenter": "",
"currentEnrollment": "string",
"eventDate": "string",
"reservationOrderId": "00000000-0000-0000-0000-000000000000",
"description": "Standard_F1s eastus 1 Year",
"eventType": "Purchase",
"quantity": int,
"amount": double,
"currency": "string",
"reservationOrderName": "string"
}
]
```
_New response_:
```json
{
"value": [
{
"id": "/billingAccounts/123456/providers/Microsoft.Consumption/reservationtransactions/201909091919",
"name": "201909091919",
"type": "Microsoft.Consumption/reservationTransactions",
"tags": {},
"properties": {
"eventDate": "2019-09-09T19:19:04Z",
"reservationOrderId": "00000000-0000-0000-0000-000000000000",
"description": "Standard_DS1_v2 westus 1 Year",
"eventType": "Cancel",
"quantity": 1,
"amount": -21,
"currency": "USD",
"reservationOrderName": "Transaction-DS1_v2",
"purchasingEnrollment": "123456",
"armSkuName": "Standard_DS1_v2",
"term": "P1Y",
"region": "westus",
"purchasingSubscriptionGuid": "11111111-1111-1111-1111-11111111111",
"purchasingSubscriptionName": "Infrastructure Subscription",
"accountName": "Microsoft Infrastructure",
"accountOwnerEmail": "admin@microsoft.com",
"departmentName": "Unassigned",
"costCenter": "",
"currentEnrollment": "123456",
"billingFrequency": "recurring"
}
},
]
}
```
## Use additional Cost Management APIs
After you've migrated to Azure Resource Manager APIs for your existing reporting scenarios, you can use many other APIs, too. The APIs are also available through Azure Resource Manager and can be automated using Service Principal-based authentication. Here's a quick summary of the new capabilities that you can use.
- [Budgets](/rest/api/consumption/budgets/createorupdate) - Use to set thresholds to proactively monitor your costs, alert relevant stakeholders, and automate actions in response to threshold breaches.
- [Alerts](/rest/api/cost-management/alerts) - Use to view alert information including, but not limited to, budget alerts, invoice alerts, credit alerts, and quota alerts.
- [Exports](/rest/api/cost-management/exports) - Use to schedule recurring data export of your charges to an Azure Storage account of your choice. It's the recommended solution for customers with a large Azure presence who want to analyze their data and use it in their own internal systems.
## Next steps
- Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).
- If needed, determine which EA APIs you use and see which Azure Resource Manager APIs to move to at [EA API mapping to new Azure Resource Manager APIs](#ea-api-mapping-to-new-azure-resource-manager-apis).
- If you're not already using Azure Resource Manager APIs, [register your client app with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad).
- If needed, update any of your programming code to use [Azure AD authentication](/rest/api/azure/#create-the-request) with your Service Principal. | 42.47541 | 774 | 0.718641 | eng_Latn | 0.627252 |
e917757d4247bc14c2d017adb0b27972167119a1 | 594 | md | Markdown | README.md | xt0fer/COBOL-Lab2 | ef67b2623dc152a91df651fbb257a12eafa6f574 | [
"MIT"
] | null | null | null | README.md | xt0fer/COBOL-Lab2 | ef67b2623dc152a91df651fbb257a12eafa6f574 | [
"MIT"
] | null | null | null | README.md | xt0fer/COBOL-Lab2 | ef67b2623dc152a91df651fbb257a12eafa6f574 | [
"MIT"
] | null | null | null | # COBOL-Lab2
1) Modify your 'ask-name' program from Lab1 such that only the users Alice and Bob are greeted with their names
2) Write a program that asks the user for a number n and prints the sum of the numbers 1 to n
3) Guessing game (Too Large/ Too Small)
Write a guessing game where the user has to guess a secret number. After every guess the program tells the user whether their number was too large or too small. At the end the number of tries needed should be printed. It counts only as one try if they input the same number multiple times consecutively.
Submit a pull request.

---

<!-- README.md (rogerjdeangelis/utl_using_sas_zip_qnd_unzip_engines, MIT license) -->

# utl_using_sas_zip_and_unzip_engines
Using SAS zip and unzip engines. Keywords: sas sql join merge big data analytics macros oracle teradata mysql sas communities stackoverflow statistics artificial intelligence AI Python R Java Javascript WPS Matlab SPSS Scala Perl C C# Excel MS Access JSON graphics maps NLP natural language processing machine learning igraph DOSUBL DOW loop stackoverflow SAS community.
Using SAS zip and unzip engines
Not sure I understand the problem
github
https://github.com/rogerjdeangelis/utl_using_sas_zip_and_unzip_engines
see
https://goo.gl/snSxAp
https://communities.sas.com/t5/General-SAS-Programming/Export-zip-files-using-ods/m-p/429058
WPS had the following ERROR
ERROR: ZIP is not a valid access method
INPUT (csv file)
================
d:\csv\class.csv
NAME,SEX,AGE,HEIGHT,WEIGHT
Alfred,M,14,69,112.5
Alice,F,13,56.5,84
Barbara,F,13,65.3,98
Carol,F,14,62.8,102.5
Henry,M,14,63.5,102.5
James,M,12,57.3,83
Jane,F,12,59.8,84.5
....
PROCESS (all the code)
======================
ZIP the csv file
filename foo ZIP 'd:\zip\class.zip';
data _null_;
infile "d:\csv\class.csv";
input;
file foo(class);
put _infile_;
run;quit;
UNZIP the csv file
filename foo ZIP 'd:\zip\class.zip';
data _null_;
infile foo(class);
input;
put _infile_;
run;quit;
OUTPUT
======
ZIP: d:\zip\class.zip
Files in the ZIP file
MEMNAME
class
N = 1
UNZIP d:\zip\class.zip
This will appear in log
NAME,SEX,AGE,HEIGHT,WEIGHT
Alfred,M,14,69,112.5
Alice,F,13,56.5,84
Barbara,F,13,65.3,98
Carol,F,14,62.8,102.5
Henry,M,14,63.5,102.5
James,M,12,57.3,83
Jane,F,12,59.8,84.5
* _ _ _
_ __ ___ __ _| | _____ __| | __ _| |_ __ _
| '_ ` _ \ / _` | |/ / _ \ / _` |/ _` | __/ _` |
| | | | | | (_| | < __/ | (_| | (_| | || (_| |
|_| |_| |_|\__,_|_|\_\___| \__,_|\__,_|\__\__,_|
;
dm "dexport sashelp.class 'd:\csv\class.csv' replace";
or type on Classic editor command line
dexport sashelp.class 'd:\csv\class.csv' replace
* _ _ _
___ ___ | |_ _| |_(_) ___ _ __
/ __|/ _ \| | | | | __| |/ _ \| '_ \
\__ \ (_) | | |_| | |_| | (_) | | | |
|___/\___/|_|\__,_|\__|_|\___/|_| |_|
;
ZIP the csv file
filename foo ZIP 'd:\zip\class.zip';
data _null_;
infile "d:\csv\class.csv";
input;
file foo(class);
put _infile_;
run;quit;
UNZIP the csv file
filename foo ZIP 'd:\zip\class.zip';
data _null_;
infile foo(class);
input;
put _infile_;
run;quit;
* _ _ _
___(_)_ __ ___ ___ _ __ | |_ ___ _ __ | |_ ___
|_ / | '_ \ / __/ _ \| '_ \| __/ _ \ '_ \| __/ __|
/ /| | |_) | | (_| (_) | | | | || __/ | | | |_\__ \
/___|_| .__/ \___\___/|_| |_|\__\___|_| |_|\__|___/
|_|
;
filename inzip zip "d:\zip\class.zip";
/* Read the "members" (files) from the ZIP file */
data contents(keep=memname);
length memname $200;
fid=dopen("inzip");
if fid=0 then
stop;
memcount=dnum(fid);
do i=1 to memcount;
memname=dread(fid,i);
output;
end;
rc=dclose(fid);
run;
/* create a report of the ZIP contents */
title "Files in the ZIP file";
proc print data=contents noobs N;
run;

---

<!-- airbyte-integrations/connectors/source-plaid/README.md (rajatariya21/airbyte, MIT license) -->

# Plaid Source
This is the repository for the Plaid source connector, written in JavaScript.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.io/integrations/sources/javascript-template).
## Local development
### Prerequisites
**To iterate on this connector, make sure to complete this prerequisites section.**
#### Build & Activate Virtual Environment
First, build the module by running the following from the `airbyte` project root directory:
```
./gradlew :airbyte-integrations:connectors:source-plaid:build
```
This will generate a virtualenv for this module in `source-plaid/.venv`. Make sure this venv is active in your
development environment of choice. To activate the venv from the terminal, run:
```
cd airbyte-integrations/connectors/source-plaid # cd into the connector directory
source .venv/bin/activate
```
If you are in an IDE, follow your IDE's instructions to activate the virtualenv.
#### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.io/integrations/sources/javascript-template)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_javascript_template/spec.json` file.
See `sample_files/sample_config.json` for a sample config file.
**If you are an Airbyte core member**, copy the credentials in RPass under the secret name `source-plaid-integration-test-config`
and place them into `secrets/config.json`.
### Locally running the connector
```
npm install
node source.js spec
node source.js check --config secrets/config.json
node source.js discover --config secrets/config.json
node source.js read --config secrets/config.json --catalog sample_files/configured_catalog.json
```
### Unit Tests (wip)
To run unit tests locally, from the connector directory run:
```
npm test
```
### Locally running the connector docker image
```
# in airbyte root directory
./gradlew :airbyte-integrations:connectors:source-plaid:airbyteDocker
docker run --rm airbyte/source-plaid:dev spec
docker run --rm -v $(pwd)/airbyte-integrations/connectors/source-plaid/secrets:/secrets airbyte/source-plaid:dev check --config /secrets/config.json
docker run --rm -v $(pwd)/airbyte-integrations/connectors/source-plaid/secrets:/secrets airbyte/source-plaid:dev discover --config /secrets/config.json
docker run --rm -v $(pwd)/airbyte-integrations/connectors/source-plaid/secrets:/secrets -v $(pwd)/airbyte-integrations/connectors/source-plaid/sample_files:/sample_files airbyte/source-plaid:dev read --config /secrets/config.json --catalog /sample_files/fullrefresh_configured_catalog.json
```
### Integration Tests
1. From the airbyte project root, run `./gradlew :airbyte-integrations:connectors:source-plaid:integrationTest` to run the standard integration test suite.
1. To run additional integration tests, place your integration tests in a new directory `integration_tests` and run them with `node test (wip)`.
## Dependency Management
All of your dependencies should go in `package.json`.

---

<!-- Compiled/Readme.md (SDXorg/ANTLR-collection, MIT license) -->

# Parsers and Lexers
This folder holds the ANTLR4 output from the model grammars, compiled with each of the various ANTLR bindings. We don't modify these directly, but extend them by constructing visitor/listener classes.

---

<!-- docs/src/man/backends.md (Gadfly.jl documentation, MIT license) -->

```@meta
Author = "Daniel C. Jones, Tamas Nagy"
```
# Backends
Gadfly supports creating SVG images out of the box through the native Julian
renderer in [Compose.jl](https://github.com/GiovineItalia/Compose.jl). The
PNG, PDF, PS, and PGF formats, however, require Julia's bindings to
[cairo](https://www.cairographics.org/) and
[fontconfig](https://www.freedesktop.org/wiki/Software/fontconfig/), which can
be installed with
```julia
Pkg.add("Cairo")
Pkg.add("Fontconfig")
```
## Rendering to a file
In addition to the `draw` interface presented in the [Tutorial](@ref Tutorial):
```julia
p = plot(...)
draw(SVG("foo.svg", 6inch, 4inch), p)
```
one can more succintly use Julia's function chaining syntax:
```julia
p |> SVG("foo.svg", 6inch, 4inch)
```
If you plan on drawing many figures of the same size, consider
setting it as the default:
```julia
set_default_plot_size(6inch, 4inch)
p1 |> SVG("foo1.svg")
p2 |> SVG("foo2.svg")
p3 |> SVG("foo3.svg")
```
## Choosing a backend
Drawing to different backends is easy. Simply swap `SVG` for one
of `SVGJS`, `PNG`, `PDF`, `PS`, or `PGF`:
```julia
# e.g.
p |> PDF("foo.pdf")
```
## Interactive SVGs
The `SVGJS` backend writes SVG with embedded javascript. There are a couple
subtleties with using the output from this backend.
Drawing to the backend works like any other
```julia
draw(SVGJS("foo.svg", 6inch, 6inch), p)
```
If included with an `<img>` tag, the output will display as a static SVG image
though.
```html
<img src="foo.svg"/>
```
For the [interactive](@ref Interactivity) javascript features to be enabled, it
either needs to be included inline in the HTML page, or included with an object
tag.
```html
<object data="foo.svg" type="image/svg+xml"></object>
```
For the latter, a `div` element must be placed, and the `draw` function
must be passed the id of this element, so it knows where in the
document to place the plot.
## IJulia
The [IJulia](https://github.com/JuliaLang/IJulia.jl) project adds Julia support
to [Jupyter](https://jupyter.org/). This includes a browser based notebook
that can inline graphics and plots. Gadfly works out of the box with IJulia,
with or without drawing explicity to a backend.
Without an explicit call to `draw` (i.e. just calling `plot` without a trailing
semicolon), the SVGJS backend is used with the default plot size, which can be
changed as described above.
| 24.428571 | 79 | 0.718463 | eng_Latn | 0.988638 |
e91c72790a0415deb0febd645a1910c3f8958601 | 2,674 | md | Markdown | sdk-api-src/content/snmp/nf-snmp-snmputilasnanyfree.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/snmp/nf-snmp-snmputilasnanyfree.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/snmp/nf-snmp-snmputilasnanyfree.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:snmp.SnmpUtilAsnAnyFree
title: SnmpUtilAsnAnyFree function (snmp.h)
description: The SnmpUtilAsnAnyFree function frees the memory allocated for the specified AsnAny structure. This function is an element of the SNMP Utility API.
helpviewer_keywords: ["SnmpUtilAsnAnyFree","SnmpUtilAsnAnyFree function [SNMP]","_snmp_snmputilasnanyfree","snmp.snmputilasnanyfree","snmp/SnmpUtilAsnAnyFree"]
old-location: snmp\snmputilasnanyfree.htm
tech.root: SNMP
ms.assetid: b18c3722-398e-4659-ab1c-edd09d5c220d
ms.date: 12/05/2018
ms.keywords: SnmpUtilAsnAnyFree, SnmpUtilAsnAnyFree function [SNMP], _snmp_snmputilasnanyfree, snmp.snmputilasnanyfree, snmp/SnmpUtilAsnAnyFree
req.header: snmp.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows 2000 Professional [desktop apps only]
req.target-min-winversvr: Windows 2000 Server [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Snmpapi.lib
req.dll: Snmpapi.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- SnmpUtilAsnAnyFree
- snmp/SnmpUtilAsnAnyFree
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- Snmpapi.dll
api_name:
- SnmpUtilAsnAnyFree
---
# SnmpUtilAsnAnyFree function
## -description
<p class="CCE_Message">[SNMP is available for use in the operating systems specified in the Requirements section. It may be altered or unavailable in subsequent versions. Instead, use <a href="/windows/desktop/WinRM/portal">Windows Remote Management</a>, which is the Microsoft implementation of WS-Man.]
The
<b>SnmpUtilAsnAnyFree</b> function frees the memory allocated for the specified
<a href="/windows/desktop/api/snmp/ns-snmp-asnany">AsnAny</a> structure. This function is an element of the SNMP Utility API.
## -parameters
### -param pAny [in]
Pointer to an
<a href="/windows/desktop/api/snmp/ns-snmp-asnany">AsnAny</a> structure whose memory should be freed.
## -returns
This function does not return a value.
## -remarks
Call the
<b>SnmpUtilAsnAnyFree</b> function to free the memory that the
<a href="/windows/desktop/api/snmp/nf-snmp-snmputilasnanycpy">SnmpUtilAsnAnyCpy</a> function allocates.
## -see-also
<a href="/windows/desktop/api/snmp/ns-snmp-asnany">AsnAny</a>
<a href="/windows/desktop/SNMP/snmp-functions">SNMP Functions</a>
<a href="/windows/desktop/SNMP/simple-network-management-protocol-snmp-">Simple Network Management Protocol (SNMP) Overview</a>
<a href="/windows/desktop/api/snmp/nf-snmp-snmputilasnanycpy">SnmpUtilAsnAnyCpy</a> | 29.711111 | 304 | 0.777861 | eng_Latn | 0.493286 |
e91cd6fd4af3ca680a390f937fdca882b433c07d | 3,572 | md | Markdown | docs/relational-databases/system-stored-procedures/sp-changesubscriptiondtsinfo-transact-sql.md | CeciAc/sql-docs.fr-fr | 0488ed00d9a3c5c0a3b1601a143c0a43692ca758 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-stored-procedures/sp-changesubscriptiondtsinfo-transact-sql.md | CeciAc/sql-docs.fr-fr | 0488ed00d9a3c5c0a3b1601a143c0a43692ca758 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-stored-procedures/sp-changesubscriptiondtsinfo-transact-sql.md | CeciAc/sql-docs.fr-fr | 0488ed00d9a3c5c0a3b1601a143c0a43692ca758 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-03-04T05:50:54.000Z | 2020-03-04T05:50:54.000Z | ---
title: sp_changesubscriptiondtsinfo (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 03/04/2017
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: replication
ms.topic: language-reference
f1_keywords:
- sp_changesubscriptiondtsinfo
- sp_changesubscriptiondtsinfo_TSQL
helpviewer_keywords:
- sp_changesubscriptiondtsinfo
ms.assetid: 64fc085f-f81b-493b-b59a-ee6192d9736d
author: stevestein
ms.author: sstein
ms.openlocfilehash: a091df0cbbeb2883ff9905d7c5b3718d50efa86b
ms.sourcegitcommit: 728a4fa5a3022c237b68b31724fce441c4e4d0ab
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/03/2019
ms.locfileid: "68762555"
---
# <a name="spchangesubscriptiondtsinfo-transact-sql"></a>sp_changesubscriptiondtsinfo (Transact-SQL)
[!INCLUDE[appliesto-ss-asdbmi-xxxx-xxx-md](../../includes/appliesto-ss-asdbmi-xxxx-xxx-md.md)]
Modifie les propriétés de package DTS (Data Transformation Services) d'un abonnement. Cette procédure stockée est exécutée sur la base de données d'abonnement de l'Abonné.
 [Conventions de la syntaxe Transact-SQL](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Syntaxe
```
sp_changesubscriptiondtsinfo [ [ @job_id = ] job_id ]
[ , [ @dts_package_name= ] 'dts_package_name' ]
[ , [ @dts_package_password= ] 'dts_package_password' ]
[ , [ @dts_package_location= ] 'dts_package_location' ]
```
## <a name="arguments"></a>Arguments
`[ @job_id = ] job_id`ID du travail de l’Agent de distribution pour l’abonnement par émission de type push. *job_id* est de type **varbinary (16)** , sans valeur par défaut. Pour Rechercher l’ID de tâche de distribution, exécutez **sp_helpsubscription** ou **sp_helppullsubscription**.
`[ @dts_package_name = ] 'dts_package_name'`Spécifie le nom du package DTS. *dts_package_name* est de **type sysname**, avec NULL comme valeur par défaut. Par exemple, pour spécifier un package nommé **DTSPub_Package**, vous devez spécifier `@dts_package_name = N'DTSPub_Package'`.
`[ @dts_package_password = ] 'dts_package_password'`Spécifie le mot de passe du package. *dts_package_password* est de **type sysname** , avec NULL comme valeur par défaut, qui spécifie que la propriété de mot de passe doit rester inchangée.
> [!NOTE]
> Un package DTS doit avoir un mot de passe.
`[ @dts_package_location = ] 'dts_package_location'`Spécifie l’emplacement du package. *dts_package_location* est de type **nvarchar (12)** , avec NULL comme valeur par défaut, qui spécifie que l’emplacement du package doit rester inchangé. L’emplacement du package peut être remplacé par **Distributor** ou Subscriber.
## <a name="return-code-values"></a>Valeurs des codes de retour
**0** (succès) ou **1** (échec)
## <a name="remarks"></a>Notes
**sp_changesubscriptiondtsinfo** est utilisé pour la réplication d’instantané et la réplication transactionnelle qui sont des abonnements par émission de type push uniquement.
## <a name="permissions"></a>Autorisations
Seuls les membres du rôle serveur fixe **sysadmin** , du rôle de base de données fixe **db_owner** ou du créateur de l’abonnement peuvent exécuter **sp_changesubscriptiondtsinfo**.
## <a name="see-also"></a>Voir aussi
[Procédures stockées système (Transact-SQL)](../../relational-databases/system-stored-procedures/system-stored-procedures-transact-sql.md)
| 53.313433 | 321 | 0.74776 | fra_Latn | 0.683099 |
e91da93f81a256b7a1f1c869cf1ab5f1a43bcd7c | 10,077 | md | Markdown | docs/design-methodology/Principles.md | AhmedAtCloudBricks/AlwaysOn | 95ad60ac002a7d6869dbd8fd573735a0a71115c1 | [
"MIT"
] | null | null | null | docs/design-methodology/Principles.md | AhmedAtCloudBricks/AlwaysOn | 95ad60ac002a7d6869dbd8fd573735a0a71115c1 | [
"MIT"
] | null | null | null | docs/design-methodology/Principles.md | AhmedAtCloudBricks/AlwaysOn | 95ad60ac002a7d6869dbd8fd573735a0a71115c1 | [
"MIT"
] | null | null | null | # Design principles
The AlwaysOn architectural framework presented within this repository is underpinned by 5 key design principles which serve as a compass for subsequent design decisions across technical domains and the critical design areas. Readers are strongly advised to familiarize themselves with these principles to better understand their impact and the trade-offs associated with non-adherence.
1. **Maximum Reliability** - Fundamental pursuit of the most reliable solution, ensuring trade-offs are properly understood.
1. **Sustainable Performance and Scalability** - Design for scalability across the end-to-end solution without performance bottlenecks.
1. **Operations by Design** - Engineered to last with robust and assertive operational management.
1. **Cloud-Native Design** - Focus on using native platforms services to minimize operational burdens, while mitigating known gaps.
1. **Always Secure** - Design for end-to-end security to maintain application stability and ensure availability.
[](./Principles.md)
## Maximum reliability
- **Design for failure** - Failure is impossible to avoid in a highly distributed multi-tenant cloud environment like Azure. By anticipating failures and cascading or correlated impact, from individual components to entire Azure regions, a solution can be designed and developed in a resilient manner.
- **Observe application health** - Before issues impacting application reliability can be mitigated, they must first be detected. By monitoring the operation of an application relative to a known healthy state it becomes possible to detect or even predict reliability issues, allowing for swift remedial action to be taken.
- **Drive automation** - One of the leading causes of application downtime is human error, whether that be due to the deployment of insufficiently tested software or misconfiguration. To minimize the possibility and impact of human errors, it is vital to strive for automation in all aspects of a cloud solution to improve reliability; automated testing, deployment, and management.
- **Design for self-healing** - Self healing describes a system's ability to deal with failures automatically through pre-defined remediation protocols connected to failure modes within the solution. It is an advanced concept that requires a high level of system maturity with monitoring and automation, but should be an aspiration from inception to maximize reliability.
## Sustainable performance and scalability
- **Design for scale-out** - Scale-out is a concept that focuses on a system's ability to respond to demand through horizontal growth. This means that as traffic grows, more resource units are added in parallel instead of increasing the size of the existing resources. A systems ability to handle expected and unexpected traffic increases through scale-units is essential to overall performance and reliability by further reducing the impact of a single resource failure.
- **Model capacity** - The system's expected performance under various load profiles should be modeled through load and performance tests. This capacity model enables planning of resource scale levels for a given load profile, and additionally exposes how system components perform in relation to each other, therefore enabling system-wide capacity allocation planning.
- **Test and experiment often** - Testing should be performed for each major change as well as on a regular basis. Such testing should be performed in testing and staging environments, but it can also be beneficial to run a subset of tests against the production environment. Ongoing testing validates existing thresholds, targets and assumptions and will help to quickly identify risks to resiliency and availability.
- **Baseline performance and identify bottlenecks** - Performance testing with detailed telemetry from every system component allows for the identification of bottlenecks within the system, including components which need to be scaled in relation to other components, and this information should be incorporated into the capacity model.
- **Use containerized or serverless architecture** - Using managed compute services and containerized architectures significantly reduces the ongoing administrative and operational overhead of designing, operating, and scaling applications by shifting infrastructure deployment and maintenance to the managed service provider.
## Operations by design
- **Loosely coupled components** - Loose coupling enables independent and on-demand testing, deployments, and updates to components of the application while minimizing inter-team dependencies for support, services, resources, or approvals.
- **Optimize build and release process** - Fully automated build and release processes reduce the friction and increase the velocity of deploying updates, bringing repeatability and consistency across environments. Automation shortens the feedback loop from developers pushing changes to getting automated near instantaneous insights on code quality, test coverage, security, and performance, which increases developer productivity and team velocity.
- **Understand operational health** - Full diagnostic instrumentation of all components and resources enables ongoing observability of logs, metrics and traces, and enables health modeling to quantify application health in the context to availability and performance requirements.
- **Rehearse recovery and practice failure** - Business Continuity (BC) and Disaster Recovery (DR) planning and practice drills are essential and should be conducted periodically, since learnings from drills can iteratively improve plans and procedures to maximize resiliency in the event of unplanned downtime.
- **Embrace continuous operational improvement** - Prioritize routine improvement of the system and user experience, leveraging a health model to understand and measure operational efficiency with feedback mechanisms to enable application teams to understand and address gaps in an iterative manner.
## Cloud native design
- **Azure-native managed services** - Azure-native managed services are prioritized due to their lower administrative and operational overhead as well as tight integration with consistent configuration and instrumentation across the application stack.
- **Roadmap alignment** - Incorporate upcoming new and improved Azure service capabilities as they become Generally Available (GA) to stay close to the leading edge of Azure.
- **Embrace preview capabilities and mitigate known gaps** - While Generally Available (GA) services are prioritized for supportability, Azure service previews are actively explored for rapid incorporation, providing technical and actionable feedback to Azure product groups to address gaps.
- **Landing Zone alignment** - Deployable within an [Azure Landing Zone](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/landing-zone/) and aligned to the Azure Landing Zone design methodology, but also fully functional and deployable in a bare environment outside of a Landing Zone.
## Always secure
- **Monitor the security of the entire solution and plan incident responses** - Correlate security and audit events to model application health and identify active threats. Establish automated and manual procedures to respond to incidents leveraging Security Information and Event Management (SIEM) tooling for tracking.
- **Model and test against potential threats** - Ensure appropriate resource hardening and establish procedures to identify and mitigate known threats, using penetration testing to verify threat mitigation, as well as static code analysis and code scanning.
- **Identify and protect endpoints** - Monitor and protect the network integrity of internal and external endpoints through security capabilities and appliances, such as firewalls or web application firewalls. Use industry standard approaches to protect against common attack vectors like Distributed Denial-Of-Service (DDoS) attacks such as SlowLoris.
- **Protect against code level vulnerabilities** - Identify and mitigate code-level vulnerabilities, such as cross-site scripting or SQL injection, and incorporate security patching into operational lifecycles for all parts of the codebase, including dependencies.
- **Automate and use least privilege** - Drive automation to minimize the need for human interaction and implement least privilege across both the application and control plane to protect against data exfiltration and malicious actor scenarios.
- **Classify and encrypt data** - Classify data according to risk and apply industry standard encryption at rest and in transit, ensuring keys and certificates are stored securely and managed properly.
# Additional project principles
- **Production ready artifacts**: Every AlwaysOn technical artifact will be ready for use in production environments with all end-to-end operational aspects considered.
- **Rooted in 'customer truth'** - All technical decisions will be guided by the experience customers have on the platform and the feedback they share.
- **Azure roadmap alignment** - The AlwaysOn architecture will have its own roadmap that is aligned with Azure product roadmaps.
---
|Previous Page|Next Page|
|--|--|
|[How to use the AlwaysOn Design Guidelines](./README.md)|[AlwaysOn Design Areas](./Design-Areas.md)
---
|Design Methodology|
|--|
|[How to use the AlwaysOn Design Methodology](./README.md)
|[AlwaysOn Design Principles](./Principles.md)
|[AlwaysOn Design Areas](./Design-Areas.md)
|[Application Design](./App-Design.md)
|[Application Platform](./App-Platform.md)
|[Data Platform](./Data-Platform.md)
|[Health Modeling and Observability](./Health-Modeling.md)
|[Deployment and Testing](./Deployment-Testing.md)
|[Networking and Connectivity](./Networking.md)
|[Security](./Security.md)
|[Operational Procedures](./Operational-Procedures.md)
---
[AlwaysOn | Documentation Inventory](/docs/README.md)
| 96.894231 | 471 | 0.810856 | eng_Latn | 0.998311 |
e91eebc56f86c93bada5cf64fcfef73ded7eefc3 | 1,877 | md | Markdown | contents/designers/alessandro-dallafina.md | tommasongr/com-archive | 3631e76a350de199036b19ba689db457e11de17f | [
"MIT"
] | null | null | null | contents/designers/alessandro-dallafina.md | tommasongr/com-archive | 3631e76a350de199036b19ba689db457e11de17f | [
"MIT"
] | null | null | null | contents/designers/alessandro-dallafina.md | tommasongr/com-archive | 3631e76a350de199036b19ba689db457e11de17f | [
"MIT"
] | null | null | null | ---
name: Alessandro Dallafina
job:
company: Google
type: Azienda (interno)
jobFields:
- UI/UX design
based:
city: Zurigo
country: Svizzera
awards: true
social:
behance: 'https://www.behance.net/AlessandroDallafina'
dribbble: 'https://dribbble.com/MrSlash'
github: 'https://github.com/AlessandroDallafina'
instagram: ''
linkedin: 'https://linkedin.com/in/dallafina'
medium: ''
podcast: ''
twitter: ''
vimeo: ''
youtube: ''
contents:
projects: true
extras: false
date: 2020-01-16T00:03:00.000Z
img: ../../static/assets/designer-alessandro-dallafina.png
---
Appassionato di design, tecnologia e media digitali. Ha pubblicato diversi progetti in libri, giornali e riviste di fama mondiale. Da anni lavora per compagnie internazionali come 500px.com dove ha guidato la user experience per il Marketplace, aiutando milioni di utenti ad acquistare fotografie digitali di altissima qualità. Ha anche lavorato per Getty Images, curando l'advertising (stampa e digitale) per tutto il mercato italiano. Ha acquisito grandissima esperienza come Information Designer lavorando per Density design, uno dei laboratori piu all'avanguardia in Europa per quanto riguarda la Data Visualization e le infografiche.
Senior Interaction Designer a **Google** (2019 - oggi)
Senior Product Designer a **Wattpad** (2018)
Senior Product Designer a **Shopify** (2017 - 2018)
Senior UX Designer a **Flipp** (2016 - 2017)
Designer a **500px** (2013 - 2016)
Visual & Web Designer a **Getty Images** (2012 - 2013)
Visual & Interaction Designer a **Glossom** (2011 - 2012)<br><br>
MA in **Design della Comunicazione** (Politecnico di Milano, 2008 - 2011)
CFP **Bauer** (2007 - 2008)
BA in **Comunicazione Digitale** (Università degli Studi di Milano, 2004 - 2007)<br><br>
[alessandrodallafina.com](http://www.dallafina.com)
| 43.651163 | 638 | 0.727757 | ita_Latn | 0.854018 |
e91f3e809f0e46ee47a7e70b488a419b12caa8b4 | 271 | md | Markdown | grafana/timely-app/dist/README.md | jonathan-stein/timely | 4944cbb2a9c3075ac9c7f46290053ecbb65f14c5 | [
"Apache-2.0"
] | null | null | null | grafana/timely-app/dist/README.md | jonathan-stein/timely | 4944cbb2a9c3075ac9c7f46290053ecbb65f14c5 | [
"Apache-2.0"
] | 5 | 2021-10-15T08:55:19.000Z | 2021-12-18T18:38:11.000Z | grafana/timely-app/dist/README.md | jonathan-stein/timely | 4944cbb2a9c3075ac9c7f46290053ecbb65f14c5 | [
"Apache-2.0"
] | null | null | null | # Timely Data Source Plugin
Timely is a time series database application that provides secure access to time
series data. Timely is written in Java, uses
[Apache Accumulo](https://accumulo.apache.org/) for backend storage, and has
a configurable length memory cache.
| 38.714286 | 81 | 0.789668 | eng_Latn | 0.990077 |
e9200825d204ae289d93b13d17f49c15d082911b | 8,457 | md | Markdown | 1-the-static-web/learning-materials/CSS_102.md | taylordotson/ux-developer-milestones | 2223f0fb3f19bb5661025e2b4d0ccdcc1e523fe7 | [
"Apache-2.0"
] | 167 | 2016-01-13T21:19:48.000Z | 2021-08-21T03:33:53.000Z | 1-the-static-web/learning-materials/CSS_102.md | taylordotson/ux-developer-milestones | 2223f0fb3f19bb5661025e2b4d0ccdcc1e523fe7 | [
"Apache-2.0"
] | 103 | 2016-01-08T19:55:12.000Z | 2017-12-05T20:51:49.000Z | 1-the-static-web/learning-materials/CSS_102.md | sjkimball/ux-developer-milestones | 5b9d7b6cbbf5328657c5b69ff224c37bc4603fb4 | [
"Apache-2.0"
] | 147 | 2016-01-12T15:44:34.000Z | 2020-06-25T22:08:41.000Z | # CSS 102
## The Box Model
Every element is a box; with padding and margin.
<img src="https://css-tricks.com/wp-content/csstricks-uploads/firebox.png">
## Display properties
There are quite a few values for the display property, but the most common are `block`, `inline`, `inline-block`, and `none`.
### block
A value of `block` will make an element behave like an block-level element.
```
display: block;
```
### inline
A value of `inline` will make an element behave like an inline-level element.
```
display: inline;
```
### inline-block
A value of `inline-block` will make an element behave like an inline-level element, but will allow you to style it with block level properties.
```
display: inline;
```
### **Margin & Padding on Inline-Level Elements**
Inline-level elements are affected a bit differently than `block` and `inline-block` elements when it comes to margins and padding. Margins only work horizontally—left and right—on inline-level elements. Padding works on all four sides of inline-level elements; however, the vertical padding—the top and bottom—may bleed into the lines above and below an element.
Margins and padding work like normal for block and inline-block elements.
## Flex Box Layout
A relatively new CSS feature is the [Flexible Box Layout module](http://css-tricks.com/snippets/css/a-guide-to-flexbox/).
```html
<div class="flex-container">
<header class="flex-item">Header</header>
<section class="flex-item sidebar__left">Sidebar</section>
<section class="flex-item main">Main</section>
<section class="flex-item sidebar__right">This is some longer sidebar content that shows how every cell in this row will grow to be the same size as this one.</section>
<footer class="flex-item">Footer</footer>
</div>
```
```css
.flex-container {
padding: 0;
margin: 0;
list-style: none;
height: 200px;
display: flex;
flex-flow: row wrap;
}
.flex-container > header,
.flex-container > footer {
flex: 1 100%;
}
.sidebar__left {
flex-grow: 2;
}
.sidebar__right {
flex-grow: 1;
}
.main {
flex-grow: 6;
}
.flex-item {
background: tomato;
padding: 10px;
width: 100px;
border: 3px solid rgba(0,0,0,.2);
color: white;
font-weight: bold;
font-size: 2em;
text-align: center;
}
```
---
## CSS Pseudo Class Selectors
Pseudo-classes are powerful CSS mechanisms that let you select multiple elements based off of their order of appearance in the HTML, rather than which classes are applied to them.
* first
* last
* nth-of-type
* nth-child
* before
* after
You can also use them to select an element based on how a user has interacted with it.
* hover
* active
* visited
*
### nth-child
Using the `nth-child` [pseudo class](https://developer.mozilla.org/en-US/docs/Web/CSS/:nth-child) in CSS is a newer addition that let's you select either a specific item in a list of child DOM elements, or a general class of child DOM elements.
Using the formula `an+b` where `a` and `b` are integer values.
1. `1n+0` to select all elements
1. `2n+0` to select even elements (2, 4, 6, ...)
1. `3n+0` to select every third element (3, 6, 9, ...)
1. `2n+1` to select odd elements (1, 3, 5, ...)
There are also shortcut selectors
1. `odd` is the same as `2n+1`
1. `even` is the same as `2n+0`
```html
<article>
<section> A </section>
<section> B </section>
<section> C </section>
<section> D </section>
</article>
```
The following selector will put a border around the even sections
```css
article section:nth-child(2n+0) {
border: 1px solid black;
}
```
### nth-of-type
This pseudo selector will only count the specific element type that you specify, instead of against all children. This is useful when the children of an element aren't all the same type.
```html
<article>
<section> A </section>
<section> B </section>
<aside> Aside </aside>
<section> C </section>
<section> D </section>
<section> E </section>
</article>
```
The following code will put a border around sections 1 and 5 because 3 isn't a section.
```css
article section:nth-child(2n+1) {
border: 1px solid black;
}
```
However, the following code will make sections 1, 3 and 5 because it's only counting the section elements and ignores the aside element.
```css
article section:nth-of-type(2n+1) {
color: blue;
}
```
### first-of-type
When you only want to select the first child element that of a type.
```html
<article>
<section> A </section>
<section> B </section>
<aside> Aside </aside>
<section> C </section>
<section> D </section>
<section> E </section>
</article>
```
The following code will highlight the first section, and the aside.
```css
article :first-of-type {
background-color: lime;
}
```
The following code will highlight the **only** first section because we made the selector more specific.
```css
article section:first-of-type {
background-color: lime;
}
```
### first-child, last-child
This will only select the first sibling child of a shared parent
```html
<article>
<section> A </section>
<section> B </section>
<aside> Aside </aside>
<section> C </section>
<section> D </section>
<section> E </section>
</article>
```
The following code will highlight the first section, and the aside.
```css
section {
background-color: orange;
}
article section:first-child {
background-color: lime;
}
```
### not()
Show a standard `<ul>` as a comma separated list.
```
ul > li {
list-style-type: none;
display: inline;
}
ul > li:not(:last-child)::after {
content: ",";
}
```
## Positioning (fixed, relative, absolute)
### Absolute
Absolute positioning is the easiest to understand. You start with the CSS position property:
```
position: absolute;
```
This tells the browser that whatever is going to be positioned should be removed from the normal flow of the document and will be placed in an exact location on the page. It is also taken out of the normal flow of the document - it won't affect how the elements before it or after it in the HTML are positioned on the Web page.
If you want to position an element 10ems from the top of the document window, you would use the "top" offset to position it there with absolute positioning:
```
position: absolute; top: 10em;
```
This element will then always display 10em from the top of the page - no matter what else displays there in normal flow.
Absolutely positioned boxes that have no width set on them behave a bit strangely. Their width is only as wide as it needs to be to hold the content. So if the box contains a single word, the box is only as wide as that word renders. If it grows to two words, it'll grow that wide.
### Relative (static)
These two values are basically the same. The difference is that you'll never explicitly state `static` as the value of the position because that is the default value.
Relative positioning uses the same four positioning properties as absolute positioning. But instead of basing the position of the element upon the browser view port, it starts from where the element would be if it were still in the normal flow.
For example, if you have three paragraphs on your Web page, and the third has a position: relative style placed on it, it's position will be offset based on it's current location - not from the original sides of the view port.
```html
<p>Paragraph 1.</p>
<p>Paragraph 2.</p>
<p style="position: relative;left: 2em;">Paragraph 3.</p>
```
In the above example, the third paragraph will be positioned 3em from the left side of the container element, but will still be below the first two paragraphs. It would remain in the normal flow of the document, and just be offset slightly.
### Fixed
Fixed positioning is a lot like absolute positioning. The position of the element is calculated in the same way as the absolute model - from the sides of the view port. But fixed elements are then fixed in that location, like a watermark. Everything else on the page will then scroll past that element.
```
position: fixed;
```
### Advanced pseudo-classes
`:checked` for selected any checkboxes in a form that have been checked by the user.
### HTML5 form type validation classes
```
<form>
<div>
<label>Enter a URL:</label>
<input type="url" />
</div>
<div>
<label>Enter an email address:</label>
<input type="email" required/>
</div>
</form>
```
```css
input:valid {
background-color: #ddffdd;
}
input:invalid {
background-color: #ffdddd;
}
```
| 25.705167 | 363 | 0.713847 | eng_Latn | 0.996635 |
e920ef35caec8d55d1e140503a37b7aa91718f09 | 4,063 | md | Markdown | CHANGELOG.md | no-day/docs-ts | 489d0ccc812175deea9770e9a7f269cdca1733af | [
"MIT"
] | null | null | null | CHANGELOG.md | no-day/docs-ts | 489d0ccc812175deea9770e9a7f269cdca1733af | [
"MIT"
] | null | null | null | CHANGELOG.md | no-day/docs-ts | 489d0ccc812175deea9770e9a7f269cdca1733af | [
"MIT"
] | null | null | null | # Changelog
> **Tags:**
>
> - [New Feature]
> - [Bug Fix]
> - [Breaking Change]
> - [Documentation]
> - [Internal]
> - [Polish]
> - [Experimental]
**Note**: Gaps between patch versions are faulty/broken releases.
**Note**: A feature tagged as Experimental is in a high state of flux, you're at risk of it changing without notice.
# 0.6.5
- **Polish**
- allow double quotes in `@example` project imports, #31 (@thought2)
# 0.6.4
- **New Feature**
- add `projectHomepage` configuration property, closes #26 (@IMax153)
# 0.6.3
- **Polish**
- fix modules not respecting config settings #24 (@IMax153)
- move `prettier` to `peerDependencies`, closes #22 (@gcanti)
# 0.6.2
- **Breaking Change**
- refactor `Markdown` module (@IMax153)
- add `Markdown` constructors (@IMax153)
- add tagged union of `Printable` types (@IMax153)
- add `fold` destructor for `Markdown` (@IMax153)
- add `Semigroup`, `Monoid`, and `Show` instances for `Markdown` (@IMax153)
- add `printModule` helper function (@IMax153)
- update `Parser` module (@IMax153)
- add `ParserEnv` which extends `Environment` (@IMax153)
- add `Ast` interface (@IMax153)
- update `Core` module (@IMax153)
- add `Program` and `Environment` types (@IMax153)
- update `Capabilities` interface (@IMax153)
- remove `Eff`, `MonadFileSystem`, and `MonadLog` types (@IMax153)
- remove `MonadFileSystem` and `MonadLog` instances (@IMax153)
- rename `domain` module to `Module` (@IMax153)
- rename all constructors to match their respective types (@IMax153)
- **New Feature**
- add `Config` module (@IMax153)
- support configuration through `docs-ts.json` file (@IMax153)
- add `Config`, `ConfigBuilder` and `Settings` types (@IMax153)
- add `build` constructor `ConfigBuilder` (@IMax153)
- add `resolveSettings` destructor for creating `Settings` from a `ConfigBuilder` (@IMax153)
- add combinators for manipulating a `ConfigBuilder` (@IMax153)
- add `FileSystem` module (@IMax153)
- add `FileSystem` instance (@IMax153)
- add `File` constructor (@IMax153)
- add `exists`, `readFile`, `remove`, `search`, and `writeFile` helper functions (@IMax153)
- add `Logger` module (@IMax153)
- add `LogEntry`, `LogLevel`, and `Logger` types (@IMax153)
- add `showEntry` and `Logger` instances (@IMax153)
- add `debug`, `error`, and `info` helper functions (@IMax153)
- Add `Example` module (@IMax153)
- add `run` helper function (@IMax153)
# 0.5.3
- **Polish**
- add support for TypeScript `4.x`, closes #19 (@gcanti)
# 0.5.2
- **Polish**
- use ts-node.cmd on windows, #15 (@mattiamanzati)
# 0.5.1
- **Bug Fix**
- should not return ignore function declarations (@gcanti)
- should not return internal function declarations (@gcanti)
- should output the class name when there's an error in a property (@gcanti)
# 0.5.0
- **Breaking Change**
- total refactoring (@gcanti)
# 0.4.0
- **Breaking Change**
- the signature snippets are not valid TS (@gcanti)
- add support for class properties (@gcanti)
# 0.3.5
- **Polish**
- support any path in `src` in the examples, #12 (@gillchristian)
# 0.3.4
- **Polish**
- remove `code` from headers (@gcanti)
# 0.3.3
- **Polish**
- remove useless postfix (@gcanti)
# 0.3.1
- **Bug Fix**
- add support for default type parameters (@gcanti)
# 0.3.0
- **Breaking Change**
- modules now can/must be documented as usual (@gcanti)
- required `@since` tag
- no more `@file` tags (descriptione can be specified as usual)
# 0.2.1
- **Internal**
- run `npm audit fix` (@gcanti)
# 0.2.0
- **Breaking Change**
- replace `ts-simple-ast` with `ts-morph` (@gcanti)
- make `@since` tag mandatory (@gcanti)
- **New Feature**
- add support for `ExportDeclaration`s (@gcanti)
# 0.1.0
upgrade to `fp-ts@2.0.0-rc.7` (@gcanti)
- **Bug Fix**
- fix static methods heading (@gcanti)
# 0.0.3
upgrade to `fp-ts@1.18.x` (@gcanti)
# 0.0.2
- **Bug Fix**
- fix Windows Path Handling (@rzeigler)
# 0.0.1
Initial release
| 25.71519 | 116 | 0.652227 | eng_Latn | 0.662619 |
e921965658e92a401be6751dec6fadd904277db8 | 4,835 | md | Markdown | content/post/124/notes.md | AverageMarcus/devopsish.com | befb0ca77bca556055df6dc8d25c82dc2a2ff2be | [
"MIT"
] | 18 | 2018-03-04T04:11:56.000Z | 2022-02-20T06:34:57.000Z | content/post/124/notes.md | AverageMarcus/devopsish.com | befb0ca77bca556055df6dc8d25c82dc2a2ff2be | [
"MIT"
] | 31 | 2020-08-30T02:14:54.000Z | 2022-02-25T17:14:54.000Z | content/post/124/notes.md | AverageMarcus/devopsish.com | befb0ca77bca556055df6dc8d25c82dc2a2ff2be | [
"MIT"
] | 2 | 2021-06-26T08:48:45.000Z | 2022-02-20T09:10:19.000Z | +++
author = "Chris Short"
categories = ["Notes"]
date = 2019-04-21T07:00:00Z
description = "Notes from DevOps'ish 124"
draft = false
url = "124/notes"
title = "DevOps'ish 124 Notes"
+++
{{< notes-note >}}
## Notes
[Google bankrupting Apple privacy promises by handing data to police - Business Insider](https://www.businessinsider.com/google-bankrupting-apple-privacy-promises-by-handing-data-to-police-2019-4)
[Chaos Engineering Traps – Nora Jones – Medium](https://medium.com/@njones_18523/chaos-engineering-traps-e3486c526059)
[Linux Load Averages: Solving the Mystery](http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html)
[chroot shenanigans 2: Running a full desktop environment on an Amazon Kindle](https://neonsea.uk/blog/2019/04/14/chroot-shenanigans-2.html)
[Google Fellow Eric Brewer offers insight into Anthos and open-source strategy - SiliconANGLE](https://siliconangle.com/2019/04/12/google-fellow-offers-insight-into-anthos-future-strategy-with-open-source-googlenext19/)
[Kafka-Streams: a road to autoscaling via Kubernetes](https://medium.com/xebia-france/kafka-streams-a-road-to-autoscaling-via-kubernetes-417f2597439)
[Astronomer slams sexists trying to tear down black hole researcher's rep • The Register](https://www.theregister.co.uk/2019/04/12/astronomer_schools_sexists/)
[Hashicorp Vault in Kubernetes with HA, TLS enabled and consul-free | kubernetes_examples](https://lucascollino.github.io/kubernetes_examples/vault/)
[Alibaba founder defends overtime work culture as 'huge blessing' - Reuters](https://www.reuters.com/article/us-china-tech-labour-idUSKCN1RO1BC)
[Kubernetes Ingress Past, Present, and Future | Scott Cranton — Solo.io Customer Success](https://scott.cranton.com/ingress_and_beyond.html)
[Kubernetes Serverless with Knative | Live Training](https://learning.oreilly.com/live-training/courses/kubernetes-serverless-with-knative/0636920258827/)
[How Feature Flagging Transforms Teams and Supports DevOps | LaunchDarkly Blog](https://launchdarkly.com/blog/how-feature-flagging-transforms-teams-and-supports-devops/)
[Jerry Hargrove | Amazon ElastiCache](https://www.awsgeek.com/posts/Amazon-ElastiCache_WA/)
[The Future of Cloud Providers in Kubernetes - Kubernetes](https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/)
[Pod Priority and Preemption in Kubernetes - Kubernetes](https://kubernetes.io/blog/2019/04/16/pod-priority-and-preemption-in-kubernetes/)
[Time protection: the missing OS abstraction | the morning paper](https://blog.acolyer.org/2019/04/15/time-protection-the-missing-os-abstraction/)
[Monitoring container vitality and availability with Podman - Red Hat Developer Blog](https://developers.redhat.com/blog/2019/04/18/monitoring-container-vitality-and-availability-with-podman/)
[Inter-process communication in Linux: Shared files and shared memory | Opensource.com](https://opensource.com/article/19/4/interprocess-communication-linux-storage)
[Introduction to Kubernetes: From container to containers - Red Hat Developer Blog](https://developers.redhat.com/blog/2019/04/16/introduction-to-kubernetes-from-container-to-containers/)
[Jerry Hargrove | Periodic Table of Amazon Web Services](https://www.awsgeek.com/pages/AWS-Periodic-Table.html)
[AP Exclusive: Mysterious operative haunted Kaspersky critics](https://apnews.com/a3144f4ef5ab4588af7aba789e9892ed)
[Quick note: FOMO vs. legacy tech – Kaimar Karu – Medium](https://medium.com/@kaimarkaru/quick-note-fomo-vs-legacy-tech-18a993e6948e)
[Tinder's move to Kubernetes – Tinder Engineering – Medium](https://medium.com/@tinder.engineering/tinders-move-to-kubernetes-cda2a6372f44)
[How the Boeing 737 Max Disaster Looks to a Software Developer - IEEE Spectrum](https://spectrum.ieee.org/aerospace/aviation/how-the-boeing-737-max-disaster-looks-to-a-software-developer)
[4 years of coding in San Francisco, lessons learned - A geek with a hat](https://swizec.com/blog/4-years-of-coding-in-san-francisco-lessons-learned/swizec/9026)
[The curious case of Spamhaus, a port scanning scandal, and an apparent U-turn • The Register](https://www.theregister.co.uk/2019/04/16/spamhaus_port_scans/)
['Fork and Commoditize' — GitLab CEO on the Threat of the Hyper-Clouds - The New Stack](https://thenewstack.io/fork-and-commoditize-gitlab-ceo-critiques-the-new-open-source-approach-by-amazon-web-services/)
[How open source can survive the cloud | VentureBeat](https://venturebeat.com/2019/04/14/how-open-source-can-survive-the-cloud/)
[Comparing Kubernetes Service Mesh Tools](https://caylent.com/comparing-kubernetes-service-mesh-tools/)
[Will Google Cloud lose its futurist flair to gain enterprise cloud share? - SiliconANGLE](https://siliconangle.com/2019/04/10/will-google-cloud-lose-futurist-flare-gain-enterprise-cloud-share-googlenext19/)
| 64.466667 | 219 | 0.79152 | kor_Hang | 0.232576 |
e9219f8b8818621b5d84cb5ee8f3bcf90db833c7 | 396 | md | Markdown | CONTRIBUTING.md | chrisgilmerproj/brewing | fd8251e5bf34c20342034187fb30d9fffc723aa8 | [
"MIT"
] | 21 | 2016-08-16T17:34:17.000Z | 2021-05-09T13:44:48.000Z | CONTRIBUTING.md | chrisgilmerproj/brewing | fd8251e5bf34c20342034187fb30d9fffc723aa8 | [
"MIT"
] | 16 | 2016-11-16T17:07:12.000Z | 2020-12-16T18:11:49.000Z | CONTRIBUTING.md | chrisgilmerproj/brewing | fd8251e5bf34c20342034187fb30d9fffc723aa8 | [
"MIT"
] | 4 | 2016-08-16T16:40:49.000Z | 2019-10-31T01:53:00.000Z | # Contributing to Brew Day
As an open source project, Brew Day welcomes contributions of many forms.
Examples of contributions include:
* Code patches
* Documentation improvements
* Bug reports and patch reviews
## Contribution guidelines
- Please open a GitHub ticket to explain what you would like to improve/change.
- Pull requests should include test coverage for all new lines of code.
| 26.4 | 79 | 0.792929 | eng_Latn | 0.998821 |
e922a9351a23c38b2f76ccb687aed910c5ee56a7 | 232 | md | Markdown | images/singly_linked_list.md | edab/DSA_Quick_Reference | 827a7d3331d9224e8bb21feb9151a89fc637a649 | [
"MIT"
] | 3 | 2021-02-15T15:59:51.000Z | 2021-05-02T16:52:17.000Z | images/singly_linked_list.md | edab/DSA_Quick_Reference | 827a7d3331d9224e8bb21feb9151a89fc637a649 | [
"MIT"
] | null | null | null | images/singly_linked_list.md | edab/DSA_Quick_Reference | 827a7d3331d9224e8bb21feb9151a89fc637a649 | [
"MIT"
] | 1 | 2021-06-28T08:50:42.000Z | 2021-06-28T08:50:42.000Z | ```mermaid
graph LR
Head-->id2
id2-->id4
id4-->id6
id6-->None
subgraph one
id1[value]
id2[next]
end
subgraph two
id3[value]
id4[next]
end
subgraph three
id5[value]
id6[next]
end
```
| 11.6 | 16 | 0.556034 | eng_Latn | 0.555415 |
e92343af3a99e5c64ff3b81d85e08471a0452e98 | 40 | md | Markdown | README.md | KollatzThomas/right2left | eeeda76231b07ddfb1c7d606b9621a1362573dcf | [
"CC-BY-4.0"
] | null | null | null | README.md | KollatzThomas/right2left | eeeda76231b07ddfb1c7d606b9621a1362573dcf | [
"CC-BY-4.0"
] | null | null | null | README.md | KollatzThomas/right2left | eeeda76231b07ddfb1c7d606b9621a1362573dcf | [
"CC-BY-4.0"
] | null | null | null | # right2left
right2left support and XML
| 13.333333 | 26 | 0.825 | eng_Latn | 0.999717 |
e9236917a3155d8437d5567e0e634fce99c2b74a | 2,386 | md | Markdown | README.md | Ken-Utsunomiya/SnackTrack-Server | 62ec8489f68999e51d56f2f15ee89407971f69b6 | [
"MIT"
] | null | null | null | README.md | Ken-Utsunomiya/SnackTrack-Server | 62ec8489f68999e51d56f2f15ee89407971f69b6 | [
"MIT"
] | null | null | null | README.md | Ken-Utsunomiya/SnackTrack-Server | 62ec8489f68999e51d56f2f15ee89407971f69b6 | [
"MIT"
] | 1 | 2021-08-12T02:30:57.000Z | 2021-08-12T02:30:57.000Z | # SnackTrack-Server
### :gear: Development Workflow
#### Getting Started
* Make sure you have [Node.js](https://nodejs.org/en/) and [yarn](https://classic.yarnpkg.com/en/docs/install/) installed
* Clone the SnackTrack-Server repo to your local machine
* `yarn install` to get all of the dependency packages
* Spawn a local node server with `yarn run start`
* For developmenet(nodemon), run server with `yarn run dev`
* `yarn test` to launch test.
#### Making Changes
1. Create a new feature branch off of the `dev` branch and name it with the number of the Jira ticket you'll be working on (e.g. `SNAK-101`).
2. Make changes and commit your changes to your feature branch with [Conventional Commit Messages](https://gist.github.com/qoomon/5dfcdf8eec66a051ecd85625518cfd13)
3. Once you're satisfied that your changes address the ticket, open a new pull request for your feature branch with the corresponding Jira ticket number and title as the PR title (e.g. SNAK-61: Implement POST/api/v1/payments)
4. Fill out [PR template](https://github.com/CPSC319-Galvanize/SnackTrack-Server/blob/dev/.github/pull_request_template.md) when you post a PR
5. Resolve all merge conflicts as needed.
6. Assign two other BE team members to review your PR and be open and available to address feedback.
7. Comment the PR link in the Jira ticket.
8. After approval from both team members, confirm the PR and merge your feature branch into the `dev` branch.
9. Confirm that your changes are reflected in the `dev` branch, and then delete your feature branch.
At the end of every sprint (tentatively), we'll do a code freeze and merge the `dev` branch into `main`.
#### Little Things to Note
1. Follow [Conventional Commit Messages](https://gist.github.com/qoomon/5dfcdf8eec66a051ecd85625518cfd13)
2. Use ticket number as branch name. (Ex. SNAK-61)
3. Use ticket number + title as a PR title (Ex. SNAK-61-Implement POST/api/v1/payments)
4. Fill out [PR template](https://github.com/CPSC319-Galvanize/SnackTrack-Server/blob/dev/.github/pull_request_template.md) when you post a PR
#### Branches
| Branch | Description |
|--------|-------------|
| `main` | anything & everything |
| `dev` | experimental development branch |
| `TICKET-NUMBER` | feature, user story, bugs, fixes (e.g. `SNAK-50`) |
#### Reference
More information for [Sequelize](https://sequelize.org/master/index.html)
| 55.488372 | 226 | 0.748533 | eng_Latn | 0.965491 |
e923d5739be8ba30a3ee90832bf26f3defe2bd92 | 605 | md | Markdown | _posts/2018-10-14-opintotekopalkinto.md | niitapa/uudempiAS | 28f5ec0eaffd182d41f2b1b3daa1c89f9763e201 | [
"MIT"
] | null | null | null | _posts/2018-10-14-opintotekopalkinto.md | niitapa/uudempiAS | 28f5ec0eaffd182d41f2b1b3daa1c89f9763e201 | [
"MIT"
] | null | null | null | _posts/2018-10-14-opintotekopalkinto.md | niitapa/uudempiAS | 28f5ec0eaffd182d41f2b1b3daa1c89f9763e201 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Opintotekopalkinto"
date: 2018-10-14 20:50:00 +0300
language: fin
author: Opintomestari
categories: tiedotteet
---
On tullut aika ehdottaa Vuoden opintotekopalkinnon saajaa. Palkinto myönnetään henkilölle, joka on tehnyt hyvää työtä kurssien parantamiseksi ja opintomahdollisuuksien edistämiseksi.
Onko mielessäsi henkilö, jonka ansiosta opiskelusi ovat sujuneet erityisen sulavasti?
Ehdota allaolevalla lomakkeella palkinnon saajaa ja kerro miten hän on mielestäsi parantanut sekä sinun että opiskelukaveriesi opiskelumahdollisuuksia.
<https://goo.gl/forms/kQbvoP7WU3dTbVPQ2> | 40.333333 | 183 | 0.834711 | fin_Latn | 0.999964 |
e9244f824edceadb4d0df7e1d793282801e0ae26 | 2,372 | md | Markdown | docs/standard/threading/how-to-listen-for-cancellation-requests-by-polling.md | Ming77/docs.zh-cn | dd4fb6e9f79320627d19c760922cb66f60162607 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/threading/how-to-listen-for-cancellation-requests-by-polling.md | Ming77/docs.zh-cn | dd4fb6e9f79320627d19c760922cb66f60162607 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/threading/how-to-listen-for-cancellation-requests-by-polling.md | Ming77/docs.zh-cn | dd4fb6e9f79320627d19c760922cb66f60162607 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "如何:通过轮询侦听取消请求"
ms.custom:
ms.date: 03/30/2017
ms.prod: .net
ms.reviewer:
ms.suite:
ms.technology: dotnet-standard
ms.tgt_pltfrm:
ms.topic: article
dev_langs:
- csharp
- vb
helpviewer_keywords: cancellation, how to poll for requests
ms.assetid: c7f2f022-d08e-4e00-b4eb-ae84844cb1bc
caps.latest.revision: "12"
author: rpetrusha
ms.author: ronpet
manager: wpickett
ms.openlocfilehash: 3f0e05e3f66d591a28d7e84d358934959764dab6
ms.sourcegitcommit: bd1ef61f4bb794b25383d3d72e71041a5ced172e
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/18/2017
---
# <a name="how-to-listen-for-cancellation-requests-by-polling"></a>如何:通过轮询侦听取消请求
下面的示例演示一个用户代码可以按固定的间隔,以查看从调用线程是否已请求取消轮询取消标记的方法。 此示例使用<xref:System.Threading.Tasks.Task?displayProperty=nameWithType>类型,但相同的模式适用于异步操作直接通过创建<xref:System.Threading.ThreadPool?displayProperty=nameWithType>类型或<xref:System.Threading.Thread?displayProperty=nameWithType>类型。
## <a name="example"></a>示例
轮询需要某种类型的循环或递归代码,可以定期读取的布尔值<xref:System.Threading.CancellationToken.IsCancellationRequested%2A>属性。 如果你使用<xref:System.Threading.Tasks.Task?displayProperty=nameWithType>类型,并等待任务完成调用线程上,你可以使用<xref:System.Threading.CancellationToken.ThrowIfCancellationRequested%2A>方法来检查该属性,并引发异常。 通过使用此方法,你确保正确的异常引发到请求的响应中。 如果你使用<xref:System.Threading.Tasks.Task>,然后调用此方法优于手动引发<xref:System.OperationCanceledException>。 如果不需要引发异常,则仅可以检查该属性,并从该方法返回,如果属性是`true`。
[!code-csharp[Cancellation#11](../../../samples/snippets/csharp/VS_Snippets_Misc/cancellation/cs/cancellationex11.cs#11)]
[!code-vb[Cancellation#11](../../../samples/snippets/visualbasic/VS_Snippets_Misc/cancellation/vb/cancellationex11.vb#11)]
调用<xref:System.Threading.CancellationToken.ThrowIfCancellationRequested%2A>非常快,不会引入的很大的开销,在循环中。
如果你调用<xref:System.Threading.CancellationToken.ThrowIfCancellationRequested%2A>,只需显式检查<xref:System.Threading.CancellationToken.IsCancellationRequested%2A>属性,如果你有其他工作要做以响应取消除了引发异常。 在此示例中,你可以看到,代码实际访问该属性两次: 一次中的显式访问权限和再次在<xref:System.Threading.CancellationToken.ThrowIfCancellationRequested%2A>方法。 但是,由于读取 act<xref:System.Threading.CancellationToken.IsCancellationRequested%2A>属性包括只有一个可变读取每次访问的指令,两次访问并不重要从性能角度。 它是调用方法,而不是手动引发仍优于<xref:System.OperationCanceledException>。
## <a name="see-also"></a>另请参阅
[托管线程中的取消](../../../docs/standard/threading/cancellation-in-managed-threads.md)
| 57.853659 | 470 | 0.815767 | yue_Hant | 0.849066 |
e924aa2513996d8af1f827387d924796d3eba417 | 86 | md | Markdown | README.md | anonymouse212-hub/Facebook | a5f7cf01359fa9c8cbd09733fde9304389aaa63a | [
"Apache-2.0"
] | null | null | null | README.md | anonymouse212-hub/Facebook | a5f7cf01359fa9c8cbd09733fde9304389aaa63a | [
"Apache-2.0"
] | null | null | null | README.md | anonymouse212-hub/Facebook | a5f7cf01359fa9c8cbd09733fde9304389aaa63a | [
"Apache-2.0"
] | null | null | null | Facebook 1,000 Likes Hub 2021
WELCOME
Phone Number, Email address
Password
Login
| 7.818182 | 29 | 0.77907 | yue_Hant | 0.477089 |
e924bb05a8d32d878966de2ba2958b5a2ed736b3 | 3,141 | md | Markdown | README.md | fzymgc-house/PowerDNS-Admin | c8d992f1c8ce3bc2286fde1aa33eac02c5c1d39d | [
"MIT"
] | 1 | 2021-11-18T07:59:28.000Z | 2021-11-18T07:59:28.000Z | README.md | fzymgc-house/PowerDNS-Admin | c8d992f1c8ce3bc2286fde1aa33eac02c5c1d39d | [
"MIT"
] | null | null | null | README.md | fzymgc-house/PowerDNS-Admin | c8d992f1c8ce3bc2286fde1aa33eac02c5c1d39d | [
"MIT"
] | null | null | null | # PowerDNS-Admin
A PowerDNS web interface with advanced features.
[](https://lgtm.com/projects/g/ngoduykhanh/PowerDNS-Admin/context:python)
[](https://lgtm.com/projects/g/ngoduykhanh/PowerDNS-Admin/context:javascript)
#### Features:
- Multiple domain management
- Domain template
- User management
- User access management based on domain
- User activity logging
- Support Local DB / SAML / LDAP / Active Directory user authentication
- Support Google / Github / Azure / OpenID OAuth
- Support Two-factor authentication (TOTP)
- Dashboard and pdns service statistics
- DynDNS 2 protocol support
- Edit IPv6 PTRs using IPv6 addresses directly (no more editing of literal addresses!)
- Limited API for manipulating zones and records
- Full IDN/Punycode support
## Running PowerDNS-Admin
There are several ways to run PowerDNS-Admin. The easiest way is to use Docker.
If you are looking to install and run PowerDNS-Admin directly onto your system check out the [Wiki](https://github.com/ngoduykhanh/PowerDNS-Admin/wiki#installation-guides) for ways to do that.
### Docker
This are two options to run PowerDNS-Admin using Docker.
To get started as quickly as possible try option 1. If you want to make modifications to the configuration option 2 may be cleaner.
#### Option 1: From Docker Hub
The easiest is to just run the latest Docker image from Docker Hub:
```
$ docker run -d \
-e SECRET_KEY='a-very-secret-key' \
-v pda-data:/data \
-p 9191:80 \
ngoduykhanh/powerdns-admin:latest
```
This creates a volume called `pda-data` to persist the SQLite database with the configuration.
#### Option 2: Using docker-compose
1. Update the configuration
Edit the `docker-compose.yml` file to update the database connection string in `SQLALCHEMY_DATABASE_URI`.
Other environment variables are mentioned in the [legal_envvars](https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/configs/docker_config.py#L5-L46).
To use the Docker secrets feature it is possible to append `_FILE` to the environment variables and point to a file with the values stored in it.
Make sure to set the environment variable `SECRET_KEY` to a long random string (https://flask.palletsprojects.com/en/1.1.x/config/#SECRET_KEY)
2. Start docker container
```
$ docker-compose up
```
You can then access PowerDNS-Admin by pointing your browser to http://localhost:9191.
## Screenshots

## LICENSE
MIT. See [LICENSE](https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/LICENSE)
## Support
If you like the project and want to support it, you can *buy me a coffee* ☕
<a href="https://www.buymeacoffee.com/khanhngo" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a>
| 48.323077 | 208 | 0.765361 | eng_Latn | 0.762847 |
e924d19803cedf7cd662c2b25a209e24564a8cc5 | 14,661 | md | Markdown | _posts/oligotyping/2015-02-12-new_insights_into_microbial_ecology.md | USDA-ARS-GBRU/web | 0208dce628314342dc3a0b901dbc5a2fac4dbfd1 | [
"MIT"
] | null | null | null | _posts/oligotyping/2015-02-12-new_insights_into_microbial_ecology.md | USDA-ARS-GBRU/web | 0208dce628314342dc3a0b901dbc5a2fac4dbfd1 | [
"MIT"
] | null | null | null | _posts/oligotyping/2015-02-12-new_insights_into_microbial_ecology.md | USDA-ARS-GBRU/web | 0208dce628314342dc3a0b901dbc5a2fac4dbfd1 | [
"MIT"
] | null | null | null | ---
layout: post
authors: [meren]
title: New insights into microbial ecology through subtle nucleotide variation
excerpt: "A summary of the oligotyping special topic in Frontiers In Microbiology"
date: 2015-02-12 10:17:05
tags: [pubs, frontiers]
categories: [oligotyping]
comments: true
---
{% include _toc.html %}
>I am very pleased to announce that Frontiers in Microbiology is now hosting a research topic on oligotyping, which is open for submissions!
This is what [I had said]({% post_url oligotyping/2013-12-12-oligotyping-frontiers %}) on these pages about a year ago. Today, the research topic [New insights into microbial ecology through subtle nucleotide variation](http://journal.frontiersin.org/ResearchTopic/2427) is almost concluded, and contains 8 publications. The common theme among all these publications is that they use [oligotyping]({{ site.url }}/software/oligotyping/).
I thought this would be a good time to offer a glimpse of what has been published in this collection so far.
## Gut microbiomes of cheetahs and jackals
<figure>
<a href="http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00526/full"><img src="{{ site.url }}/images/oligotyping/menke_et_al.png"></a>
</figure>
[In their study](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00526/full) Menke *et al*. compare the gut microbiomes of two sympatric mammalian carnivores, cheetah and black-backed jackal, by sequencing the V4 region of the 16S rRNA gene. They amplified the material for sequencing from fecal samples of **free-ranging** animals, too!
Being sympatric, and having somewhat similar diets, these animals seem to have similar gut microbiomes in the big picture, yet their microbiomes are different enough at lower levels of taxonomy for them to separate from each other on an ordination. The authors show that even genera that seem to be shared among the two species are in fact composed of different [oligotypes]({{ site.url }}/software/oligotyping/faq.html#what-is-an-oligotype). For instance *Blautia* is pretty abundant microbial genus in both cheetah and jackal group, yet *Blautia* oligotypes differ dramatically between the two (cheetah samples are on the left) (Figure 6 from Menke *et al*.):
<figure>
<a href="{{ site.url }}/images/oligotyping/menke_et_al_blautia.png"><img src="{{ site.url }}/images/oligotyping/menke_et_al_blautia.png"></a>
</figure>
The figure shows that *Blautia* group is more abundant in cheetah samples in general, which is how it contributes to the separation of cheetah samples from jackal samples on an ordination. But with oligotypes, beyond separation, it is possible to recover specific microbial markers to distinguish each group from the other as it seems some *Blautia* organisms exclusively occur in sampls coming from one species or the other. I am especially interested in *Blautia* because this genus seems to be a great marker for different host species, as we had shown in our [2014 ISMEJ publication](http://www.nature.com/ismej/journal/v9/n1/full/ismej201497a.html), and previously shown by Sandra *et al.* in [an Environmental Microbiology paper](http://onlinelibrary.wiley.com/doi/10.1111/1462-2920.12092/abstract), again, using oligotyping.
But *Blautia* is not the only genus with oligotypes that distribute differently between the two species. Here is another example, *Slackia*:
<figure>
<a href="{{ site.url }}/images/oligotyping/menke_et_al_slackia.png"><img src="{{ site.url }}/images/oligotyping/menke_et_al_slackia.png"></a>
</figure>
You can read more of this study here: [doi:10.3389/fmicb.2014.00526](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00526/full)
## *Arcobacter* in sewage
<figure>
<a href="http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00525/full"><img src="{{ site.url }}/images/oligotyping/fisher_et_al.png"></a>
</figure>
To better understand the ecological factors that affect the survival and growth of *Arcobacter* spp. in sewer infrastructure, [Fisher *et al*. dissect the *Arcobacter* group](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00525/full) (which comprise 5% to 11% of sewage bacterial community) into a more precisely-defined taxonomic units by oligotyping reads coming from V4V5 region of the gene.
The study contains sequencing data from 12 sewage treatment centers in the US that were sampled three times: August 2012, January 2013, and April 2013 (there is also one additional sample from Spain). The stability of sewage systems is just very impressive. The first figure in the study shows that although the composition of *Arcobacter* oligotypes can differ from one station to the other, they do not differ as much across the three time points within one station (even when their abuncance in the the overall sample they are found change quite drastically):
<figure>
<a href="{{ site.url }}/images/oligotyping/fisher_et_al_fig_1.png"><img src="{{ site.url }}/images/oligotyping/fisher_et_al_fig_1.png"></a>
</figure>
The study has an extensive discussion on factors that affect the growth of *Arcobacter* in sewage, and the temperature, as usual, is one key player:
<figure>
<a href="{{ site.url }}/images/oligotyping/fisher_et_al_fig_3.png"><img src="{{ site.url }}/images/oligotyping/fisher_et_al_fig_3.png"></a>
</figure>
You can read more of this study here: [doi: 10.3389/fmicb.2014.00525](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00525/full)
## Dynamics of tongue microbial communities
<figure>
<a href="http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00568/full"><img src="{{ site.url }}/images/oligotyping/mark_welch_et_al.png"></a>
</figure>
In this study, which I was a part of, Mark Welch *et al*. re-analyzes the infamous microbiome time series data published by [Caporaso *et al*.](http://www.ncbi.nlm.nih.gov/pubmed/21624126). We had recently [re-analyzed and published](http://www.pnas.org/content/111/28/E2875.full) the oral samples collected by the HMP project. The HMP data were mostly representing a cross-sectional sampling of a large group of individuals. Cross-sectional sampling is great as they cover a great number of different individuals, but the lack of multiple samples from each individual leaves many other questions '*unanswerable*'. For instance one of the interesting findings we had in our previous study was this (from Fig 4 in [Eren *et al*](http://www.pnas.org/content/111/28/E2875.full).):
<figure>
<a href="{{ site.url }}/images/oligotyping/eren_et_al_neisseria.png"><img src="{{ site.url }}/images/oligotyping/eren_et_al_neisseria.png"></a>
</figure>
In this figure each bar represents one individual, and colors show the distribution of different *Neisseria* oligotypes in a person's tongue. What is interesting about this figure is that each individual is dominated by one of the *Neisseria* oligotypes (almost all of which are 99% identical to each other at the sequenced region and would have been binned together by conventional methods). So, what is going on here? Are these oligotypes represent functionally identical organisms? Do clusters of colors identify different stable community states in the oral cavity? Are some of those samples that seem to have multiple colors examples of transient community states? If we had looked at one of those individuals for a long period of time, would have we seen a sudden switch in the *Neisseria* population from one state to another? And most importantly, what governs these patterns? Neutral effects? The host immune system? Diet? Smoking or drinking habits? The rest of the microbiome? All of the above? Or none?
The dataset published in Caporaso *et al*.'s study, which contains 396 time points from only two individuals of course was a natural follow-up to our previous analysis of the HMP in an attempt to answer some of these questions. So. Jessica Mark Welch and Daniel Utter did the analysis, and here is the distribution of *Neisseria* oligotypes identified in these two individuals:
{: .notice}
I wish colors were identical to the previous study, but sequencing different regions of the 16S rRNA gene makes it very very hard to connect organisms to each other *confidently*.
<figure>
<a href="{{ site.url }}/images/oligotyping/mark_welch_et_al_fig_2.png"><img src="{{ site.url }}/images/oligotyping/mark_welch_et_al_fig_2.png"></a>
</figure>
There are a number of interesting things going on in this figure. For instance, at the very beginning of the sampling period, each individual is composed of one the *Neisseria* oligotype shown in green. Then they diverge into their own type, blue for female and cyan for male, and they stay like that! As a note, blue and cyan are one nucleotide apart from each other, so this pattern would also have been lost if conventional OTU clustering had been used to analyze the dataset. Here is a quote from the paper talking about this figure:
>These dynamics display two main characteristics which, taken together, may be termed a phase transition. The major behavior is one of stability. For most of the time, the oligotype distribution within an individual was essentially invariant, irrespective of whether the dominant oligotype in the individual was [Cyan] or [Blue]. The second property was of abrupt transition to an alternate oligotype. The time series data showed several instances in which a community initially dominated by one oligotype became transiently mixed and then transitioned to a state where one oligotype was dominant. These properties suggest that the evenly mixed populations of Neisseria on the tongue found in some individuals in the HMP data [shown in the previous figure] are transient states. Occasional replacement of the dominant oligotype argues against strong founder effects and priority effects for this taxon in the tongue microbiota. Throughout these transitions the fourth oligotype, [Purple], did not participate in the apparently competitive or exclusionary dynamics of types [Cyan] and [Blue], but persisted in relatively stable proportion in the community, likely demonstrating a subdivision of functional/ecological roles even among these very closely related taxa.
You can read the rest of the study here: [doi: 10.3389/fmicb.2014.00568](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00568/full)
## An R package for the entropy decomposition
<figure>
<a href="http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00601/full"><img src="{{ site.url }}/images/oligotyping/ramette_et_al.png"></a>
</figure>
One of the great surprises of this collection was to see Alban Ramette and Pier Luigi Buttigieg's R implementation of oligotyping and [Minimum Entropy Decomposition]({% post_url med/2014-11-04-med %}). The GitHub repository for 'otu2ot' is located here: [https://github.com/aramette/otu2ot](https://github.com/aramette/otu2ot).
Ramette *et al*.'s R library makes it much easier for R users to start using the approach on their datasets. The R library not only almost completely matches the functionality of a [full oligotyping installation]({% post_url oligotyping/2014-08-16-installing-the-oligotyping-pipeline %}), but it also comes with two novel features: Broken stick model, to identify which oligotypes are more abundant than expected by chance, and a one-pass procedure, to rapidly assess the amount of microdiversity present in a group of sequences after only one round of entropy calculation.
The oligotyping pipeline comes all sorts of bells and whistles in an attempt to improve the user experience. The R implementation, on the other hand, is more likely to be used by statisticians and developers to test and improve the approach. Broken stick model is a great example to that, and opens up a great path to better and statistically sound noise filtering on this type of data.
You can read the study here: [doi: 10.3389/fmicb.2014.00601](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00601/full)
## And others
There are four other publications in the collection, one of which is still in press as of today:
- [**Oligotyping reveals community level habitat selection within the genus Vibrio**](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00563)<br>
Victor T. Schmidt, Julie Reveillaud, Erik Zettler, Tracy J. Mincer, Leslie Murphy and Linda A. Amaral-Zettler.
- "[**Phaeocystis antarctica blooms strongly influence bacterial community structures in the Amundsen Sea polynya**](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00646)"<br>Tom O. Delmont, Katherine M. Hammar, Hugh W. Ducklow, Patricia L. Yager and Anton F. Post.
- "[**Biogeographic patterns of bacterial microdiversity in Arctic deep-sea sediments (HAUSGARTEN, Fram Strait)**](http://journal.frontiersin.org/Journal/10.3389/fmicb.2014.00660)"<br>Pier Luigi Buttigieg and Alban Ramette.
- "[**Oligotyping reveals stronger relationship of organic soil bacterial community structure with N-amendments and soil chemistry in comparison to that of mineral soil at Harvard Forest, MA, USA**](http://journal.frontiersin.org/Journal/10.3389/fmicb.2015.00049)"<br>Swathi Anuradha Turlapati, Rakesh Minocha, Stephanie Long, Jordan Ramsdell and Subhash C Minocha.
## Final Words
I am very thankful for everyone who contributed to this collection.
The diversity of environments studied in these publications, and the rate of the recovery of ecologically meaningful findings show that there is a great potential for highly resolved depictions of microbiomes.
[Oligotyping]({{ site.url }}/software/oligotyping/) and [minimum entropy decomposition]({{ site.url }}/software/med/) provided a framework for researchers to explore these dimensions of their datasets at levels of single-nucleotide resolution. I am sure there will be many similar approaches going forward, and a lot of research will take place to make these results more accurate, and to utilize them in searching for answers to outstanding questions in microbial ecology.
It is just about getting the mindset of microbial ecology out of the local minimum it stuck called "3%". The rest will come very quickly.
---
This has been a great experience on many levels, and I am very thankful to [Frontiers in Systems Microbiology](http://www.frontiersin.org/systems_microbiology) and the great team behind it for this opportunity.
I think research topics in Frontiers provide a great framework to create coherent collections of work that aim to contribute to a particular, well-defined issue.
If you think you have a study that would fit well into this collection, please send me or `microbiology.researchtopics at frontiersin dot org` an e-mail.
| 99.060811 | 1,265 | 0.787463 | eng_Latn | 0.994627 |
e926db0a55d09d9fb8ab484e2b60b553ea658be0 | 570 | md | Markdown | public/_page/contact.md | andideve/andideve.xyz | 2e2bfdfedf9618a828fcdb72636037d8e06532c7 | [
"MIT"
] | 2 | 2021-12-14T23:03:42.000Z | 2021-12-15T05:11:58.000Z | public/_page/contact.md | andideve/andideve.xyz | 2e2bfdfedf9618a828fcdb72636037d8e06532c7 | [
"MIT"
] | null | null | null | public/_page/contact.md | andideve/andideve.xyz | 2e2bfdfedf9618a828fcdb72636037d8e06532c7 | [
"MIT"
] | null | null | null | ---
title: 'Get in touch'
---
## Social media, etc.
- [**GitHub**](https://github.com/andideve) – Programming stuff and side projects, old and new.
- [**Twitter**](https://twitter.com/andideve) – Mostly just retweets now.
## Linktree
[See my Linktree here.](/linktree)
## Email
Due to the increasing amount of spam that I get nowadays, you will have to solve this simple puzzle to get my email address:
```text
`${(030042883939176).toString(36)}@${(6600471.519808).toString(25)}`
```
([You can also verify my Keybase PGP key here.](https://keybase.io/andideve))
| 24.782609 | 124 | 0.691228 | eng_Latn | 0.9024 |
e92776fff5734ae2848282ab2f0ac60531d079a1 | 5,620 | md | Markdown | Docker/README.md | LYRA-Block-Lattice/Lyra-Core | 352881e6215af4fae58b29d2fbd02f5554fd9930 | [
"MIT"
] | 17 | 2020-08-03T00:26:38.000Z | 2021-12-22T04:02:25.000Z | Docker/README.md | LYRA-Block-Lattice/Lyra-Core | 352881e6215af4fae58b29d2fbd02f5554fd9930 | [
"MIT"
] | 16 | 2020-11-05T01:47:12.000Z | 2022-02-15T07:27:23.000Z | Docker/README.md | LYRA-Block-Lattice/Lyra-Core | 352881e6215af4fae58b29d2fbd02f5554fd9930 | [
"MIT"
] | 5 | 2020-06-26T02:47:15.000Z | 2021-11-12T18:03:36.000Z | <img src="lyradocker.png"/>
- [Pre-requisites](#pre-requisites)
- [dotenv file specification](#dotenv-file-specification)
- [Setup Docker](#setup-docker)
- [Setup Lyra Node Daemon Container](#setup-lyra-node-daemon-container)
- [Upgrade Lyra container](#upgrade-lyra-container)
- [Migrate from legacy Lyra node to Docker](#migrate-from-legacy-lyra-node-to-docker)
- [Build your own docker image](#build-your-own-docker-image)
# Pre-requisites
* Ubuntu 20.04 LTS X86_64
* Debian 10 X86_64
# dotenv file specification
```
#certificate used by Lyra API. it should be cert.pfx or so.
HTTPS_CERT_NAME=cert
HTTPS_CERT_PASSWORD=P@ssW0rd
MONGO_ROOT_NAME=root
MONGO_ROOT_PASSWORD=StrongP@ssW0rd
LYRA_DB_NAME=lyra
LYRA_DB_USER=dbuser
LYRA_DB_PASSWORD=alongpassword
# which network
LYRA_NETWORK=mainnet
# Normal for normal staking node, App for app mode.
LYRA_NODE_MODE=Normal
# the staking wallet. auto create if not exists ~/.lyra/mainnet/wallets
LYRA_POS_WALLET_NAME=poswallet
LYRA_POS_WALLET_PASSWORD=VeryStrongP@ssW0rd
# testnet ports: 4503 & 4504
LYRA_P2P_PORT=5503
LYRA_API_PORT=5504
```
# Setup Docker
* Setup Docker and Docker Compose
* Ubuntu 20.04 X86_64 specifed. Other OS please follow Docker official documents https://docs.docker.com/engine/install/
```
# install prerequisities
sudo apt-get update
sudo apt-get -y install -y apt-transport-https ca-certificates curl gnupg lsb-release software-properties-common
# install docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get -y install -y docker-ce docker-ce-cli containerd.io docker-compose
# make docker runs as normal user
sudo usermod -aG docker $USER
newgrp docker
```
# Setup Lyra Node Daemon Container
```
# create a self-signed certificate
mkdir ~/.lyra
mkdir ~/.lyra/https
# store your own https certification in ~/.lyra/https. or generate self-signed certificate by openssl as bellow:
# run openssl commands separately.
cd ~/.lyra/https
openssl req -x509 -days 3650 -newkey rsa:2048 -keyout cert.pem -out cert.pem
openssl pkcs12 -export -in cert.pem -inkey cert.pem -out cert.pfx
# get docker compose config
cd ~/
mkdir ~/.lyra/db
git clone https://github.com/LYRA-Block-Lattice/Lyra-Core
cd Lyra-Core/Docker
# review .env.*-example and change it
# change the HTTPS_CERT_PASSWORD to yours .pfx file
cp .env.mainnet-example .env
vi .env
# setup docker containers
docker-compose --env-file .env up -d
# or setup docker with database restoring and save a lot time!
#docker-compose --env-file .env up --no-start
#docker start docker_mongo_1
#cat dbrestore-mainnet.sh | docker exec -i docker_mongo_1 bash
#docker start docker_noded_1
# check if the daemon runs well
docker ps
docker logs docker_noded_1 # or other names
# Done!
# Your staking wallet is located ~/.lyra/mainnet/wallets
# on rare condition you may need to reset docker and redo
# docker stop $(docker ps -a -q)
# docker rm $(docker ps -a -q)
# docker volume prune
# docker rmi $(docker images -a -q)
# docker system prune -a
# docker-compose down -v
# rm -rf ~/.lyra/db/*
```
# Upgrade Lyra container
```
cd ~/Lyra-Core/Docker
# !!! don't do this if you have other containers!
docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q) && docker rmi $(docker images -a -q) && docker-compose down -v
cd ~/Lyra-Core
git pull
cd Docker
docker-compose --env-file .env up -d
```
# Hosting Dual Network
After normal setup above, you may want host both testnet and mainnet nodes in the same docker for Lyra network.
```
# create a self-signed certificate if not done already
mkdir ~/.lyra
mkdir ~/.lyra/https
cd ~/.lyra/https
openssl req -x509 -days 3650 -newkey rsa:2048 -keyout cert.pem -out cert.pem
openssl pkcs12 -export -in cert.pem -inkey cert.pem -out cert.pfx
# clone lyra project if not done already
cd ~/
mkdir ~/.lyra/db
git clone https://github.com/LYRA-Block-Lattice/Lyra-Core
# create docker containers for dualnet
cd Lyra-Core/Docker
cp .env.dualnet-example .env-dualnet
vi .env-dualnet
docker-compose --env-file .env-dualnet -f docker-compose-dualnet.yml up --no-start
docker start docker_mongo_1
cat dbrestore.sh | docker exec -i docker_mongo_1 bash
docker start docker_testnet_1
docker start docker_mainnet_1
# done!
# upgrade dualnet laterly
cd ~/Lyra-Core/Docker
docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q) && docker rmi $(docker images -a -q)
cd ~/Lyra-Core
git pull
cd Docker
docker-compose --env-file .env-dualnet -f docker-compose-dualnet.yml up -d
```
# Migrate from legacy Lyra node to Docker
* keep legacy Lyra node untouched, setup a complete new Docker node and let it do database sync.
* wait for the database sync done. (monitor by Nebula https://nebula.lyra.live/showbb)
* stop legacy Lyra node.
* stop and destroy docker containers, buy leave the mongodb there
```
cd Lyra-Core/Docker
docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q) && docker rmi $(docker images -a -q) && docker-compose down -v
```
* copy poswallet.lyrawallet from legacy node to docker node's new location: ~/.lyra/mainnet/wallets
* modify dotenv file, change the wallet's password, and recreate the containers
```
cd Lyra-Core/Docker
docker-compose --env-file .env up -d
```
# Build your own docker image
```
~/Lyra-Core/Core/Lyra.Node2/Dockerfile
~/Lyra-Core/Client/Lyra.Client.CLI/Dockfile
```
| 29.578947 | 126 | 0.75 | eng_Latn | 0.338343 |
e9278eb41cc888bbe88981e28578d3920e0ad03b | 477 | md | Markdown | README.md | stackedsax/Parallel-File-Sharing | 0bb9c38c356901a5d490ae9988c80d45982a06a0 | [
"MIT"
] | null | null | null | README.md | stackedsax/Parallel-File-Sharing | 0bb9c38c356901a5d490ae9988c80d45982a06a0 | [
"MIT"
] | null | null | null | README.md | stackedsax/Parallel-File-Sharing | 0bb9c38c356901a5d490ae9988c80d45982a06a0 | [
"MIT"
] | null | null | null | # Parallel-File-Sharing

Parallel upload and download of file
From the Server this platform allows parallel upload and download.
Epoll is used to allow Asynchronous IO and parallel uploads & downloads.
Principles of Design :
1. Client regsiteds to start/ initiate the sharing.
2. Client establishes connectivity to speicif client after regsitration with other clients.
3. The register gets updated everytime a new client registers or drops.
| 34.071429 | 92 | 0.800839 | eng_Latn | 0.990601 |
e927b9d2fffc574c6f0d7f5a8b1a13a74916ad11 | 526 | md | Markdown | DevCenter/Java/CommonTasks/remote-desktop.md | pablissima/azure-content | 75ceff54eb17131a78791dfa89c02a7f5250e41c | [
"CC-BY-3.0"
] | 1 | 2019-04-22T16:45:22.000Z | 2019-04-22T16:45:22.000Z | DevCenter/Java/CommonTasks/remote-desktop.md | pablissima/azure-content | 75ceff54eb17131a78791dfa89c02a7f5250e41c | [
"CC-BY-3.0"
] | 1 | 2018-05-30T19:40:41.000Z | 2018-05-30T19:40:41.000Z | DevCenter/Java/CommonTasks/remote-desktop.md | pablissima/azure-content | 75ceff54eb17131a78791dfa89c02a7f5250e41c | [
"CC-BY-3.0"
] | null | null | null | <properties linkid="dev-net-commons-tasks-remote-desktop" urlDisplayName="Remote Desktop" headerExpose="" pageTitle="Enable Remote Desktop - Java - Develop" metaKeywords="Azure Java remote access, Azure Java remote connection, Azure Java VM access, Azure Java virtual machine access" footerExpose="" metaDescription="Learn how to enable remote-desktop access for the virtual machines hosting your Windows Azure Java application. " umbracoNaviHide="0" disqusComments="1" />
<div chunk="../../Shared/Chunks/remote-desktop.md" /> | 263 | 472 | 0.790875 | eng_Latn | 0.244997 |
e928584c782a4ea0cf0ee1c9b2b33cdd47e64217 | 2,499 | md | Markdown | docs/framework/wcf/diagnostics/tracing/index.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/tracing/index.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/tracing/index.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Śledzenie
ms.date: 03/30/2017
ms.assetid: 2649eae2-dbf8-421c-9cfb-cfa9e01de87f
ms.openlocfilehash: 3520d2aca07f988c45d65d5d8113d05292a37638
ms.sourcegitcommit: 2701302a99cafbe0d86d53d540eb0fa7e9b46b36
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/28/2019
ms.locfileid: "64664947"
---
# <a name="tracing"></a>Śledzenie
Windows Communication Foundation (WCF) udostępnia instrumentacji aplikacji i danych diagnostycznych dla błędów monitorowania i analizy. Aby dowiedzieć się, jak aplikacja zachowuje się lub dlaczego błędów, można użyć śledzenia zamiast debugera. Między składnikami w celu udostępnienia środowiska end-to-end, można skorelować błędów i przetwarzania.
Usługi WCF wyświetla następujące dane śledzenia diagnostycznego:
- Śledzenie procesu punktów kontrolnych, dotyczące wszystkich składników aplikacji, takich jak wywołania operacji kodu: wyjątki, ostrzeżenia i inne zdarzenia przetwarzania."
- Zdarzenia błędu Windows działa funkcja śledzenia.
## <a name="in-this-section"></a>W tej sekcji
[Konfigurowanie śledzenia](../../../../../docs/framework/wcf/diagnostics/tracing/configuring-tracing.md)
W tym temacie opisano, jak skonfigurować śledzenie, na różnych poziomach w zależności od określonych wymagań.
[Kompleksowe śledzenie](../../../../../docs/framework/wcf/diagnostics/tracing/end-to-end-tracing.md)
W tej sekcji opisano, jak można użyć śledzenie działań i Propagacja dla korelacji end-to-end, ułatwiające debugowanie.
[Rozwiązywanie problemów z aplikacją za pomocą śledzenia](../../../../../docs/framework/wcf/diagnostics/tracing/using-tracing-to-troubleshoot-your-application.md)
W tej sekcji opisano, jak można użyć z funkcji śledzenia podczas debugowania aplikacji.
[Problemy dotyczące zabezpieczeń i przydatne porady na temat śledzenia](../../../../../docs/framework/wcf/diagnostics/tracing/security-concerns-and-useful-tips-for-tracing.md)
W tym temacie opisano, jak możesz chronić poufne informacje przed przypadkowym, a także przydatne porady, korzystając z hostem sieci Web.
[Informacje o śladach](../../../../../docs/framework/wcf/diagnostics/tracing/traces-reference.md)
Ten temat zawiera listę wszystkich danych śledzenia generowane przez architekturę WCF.
## <a name="see-also"></a>Zobacz także
- [Narzędzie do przeglądania danych śledzenia usług (SvcTraceViewer.exe)](../../../../../docs/framework/wcf/service-trace-viewer-tool-svctraceviewer-exe.md)
| 55.533333 | 349 | 0.771108 | pol_Latn | 0.99969 |
e9294497f861fa864bf1916ca291cb4af93bab74 | 37,673 | md | Markdown | articles/service-fabric/service-fabric-report-health.md | marcduiker/azure-docs.nl-nl | 747ce1fb22d13d1e7c351e367c87810dd9eafa08 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-report-health.md | marcduiker/azure-docs.nl-nl | 747ce1fb22d13d1e7c351e367c87810dd9eafa08 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-report-health.md | marcduiker/azure-docs.nl-nl | 747ce1fb22d13d1e7c351e367c87810dd9eafa08 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Aangepaste Service Fabric-statusrapporten toevoegen | Microsoft Docs
description: Beschrijft hoe u aangepaste statusrapporten verzenden naar Azure Service Fabric health entiteiten. Geeft aanbevelingen voor het ontwerpen en implementeren van statusrapporten kwaliteit.
services: service-fabric
documentationcenter: .net
author: oanapl
manager: timlt
editor:
ms.assetid: 0a00a7d2-510e-47d0-8aa8-24c851ea847f
ms.service: service-fabric
ms.devlang: dotnet
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 07/19/2017
ms.author: oanapl
ms.openlocfilehash: ed10eef347d4d93012078456b3a145589e66d30e
ms.sourcegitcommit: 6699c77dcbd5f8a1a2f21fba3d0a0005ac9ed6b7
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 10/11/2017
---
# <a name="add-custom-service-fabric-health-reports"></a>Aangepaste statusrapporten van Service Fabric toevoegen
Azure Service Fabric introduceert een [statusmodel](service-fabric-health-introduction.md) ontworpen om te markeren slecht cluster en de voorwaarden van toepassing op specifieke entiteiten. Maakt gebruik van het statusmodel **health rapporteurs** (onderdelen van het systeem en watchdogs). Het doel is snel en gemakkelijk diagnose en herstel. Schrijvers van de service moeten om na te denken over status vooraf. Een voorwaarde die van invloed kan status moet worden gerapporteerd, vooral als deze vlag problemen dicht bij de hoofdmap kan helpen. Statusgegevens bespaart tijd en moeite op onderzoek naar en foutopsporing. Het nut vooral duidelijk is wanneer de service actief op de gewenste schaal in de cloud is (particulier of Azure).
De Service Fabric-rapporteurs monitor geïdentificeerd voorwaarden van belang. Ze een rapport over deze voorwaarden op basis van hun lokale weergave. De [health store](service-fabric-health-introduction.md#health-store) aggregeert statusgegevens verzonden door alle rapporteurs om te bepalen of entiteiten globaal in orde zijn. Het model is bedoeld om uitgebreide, flexibel en eenvoudig te gebruiken. De kwaliteit van de statusrapporten over bepaalt de nauwkeurigheid van de health-weergave van het cluster. Fout-positieven waarin ten onrechte slecht problemen weergegeven kunnen nadelig upgrades of andere services die gebruikmaken van statusgegevens. Voorbeelden van dergelijke services zijn reparatie en waarschuwen mechanismen. Voorzichtig is daarom nodig is voor rapporten die voorwaarden van belang zijn in de beste manier vastleggen.
Voor het ontwerpen en implementeren van de gezondheid van rapportage, watchdogs en onderdelen van het systeem moeten:
* Definieer de voorwaarde dat ze geïnteresseerd bent in de manier waarop die deze wordt bewaakt en de impact op de cluster- of -functionaliteit. Op de rapporteigenschap gezondheid en status op basis van deze informatie kunt bepalen.
* Bepaal de [entiteit](service-fabric-health-introduction.md#health-entities-and-hierarchy) die het rapport is van toepassing op.
* Bepalen waar de rapportage is gedaan, van de service of via een interne of externe watchdog.
* Definieer een bron die is gebruikt om de Rapportagefout te identificeren.
* Kies een reporting strategie periodiek of op overgangen. De aanbevolen manier is periodiek, omdat deze eenvoudiger code vereist en minder gevoelig voor fouten is.
* Bepalen hoe lang het rapport voor slechte voorwaarden moet blijven in de health-store en hoe deze moet worden gewist. Met deze informatie kunt bepalen levensduur van het rapport en verwijderen op vervaldatum gedrag.
Zoals gezegd, rapportage kunt u doen vanaf:
* De bewaakte Service Fabric-service-replica.
* Interne watchdogs geïmplementeerd als een Service Fabric-service (bijvoorbeeld een Service Fabric staatloze service dat wordt bewaakt voorwaarden en rapporten). De watchdogs kunnen worden geïmplementeerd een alle knooppunten of kunnen worden wachtrijen op de bewaakte service.
* Interne watchdogs die worden uitgevoerd op de Service Fabric-knooppunten maar *niet* geïmplementeerd als een Service Fabric-services.
* Externe watchdogs die waaruit de bron-test *buiten* het Service Fabric-cluster (bijvoorbeeld bewaking service zoals Gomez).
> [!NOTE]
> Het cluster is buiten het vak gevuld met statusrapporten dat is verzonden door de onderdelen van het systeem. Meer informatie op [rapporten system health gebruiken voor het oplossen van](service-fabric-understand-and-troubleshoot-with-system-health-reports.md). De Gebruikersrapporten moeten worden verzonden op [de statusentiteiten](service-fabric-health-introduction.md#health-entities-and-hierarchy) die al zijn gemaakt door het systeem.
>
>
Eenmaal de status reporting ontwerp is uitgeschakeld, statusrapporten kunnen worden verzonden eenvoudig. U kunt [FabricClient](https://docs.microsoft.com/dotnet/api/system.fabric.fabricclient) voor de gezondheid van het rapport als het cluster niet [beveiligde](service-fabric-cluster-security.md) of als de fabric-client beheerdersbevoegdheden heeft. Rapportage kan worden gedaan via de API door met [FabricClient.HealthManager.ReportHealth](https://docs.microsoft.com/dotnet/api/system.fabric.fabricclient.healthclient.reporthealth), via PowerShell of via REST. Configuratie knoppen batch-rapporten voor betere prestaties.
> [!NOTE]
> De status van het rapport is synchrone en vertegenwoordigt de validatie werkt alleen op de client. Het feit dat het rapport wordt geaccepteerd door de health-client of de `Partition` of `CodePackageActivationContext` objecten betekent niet dat wordt toegepast in het archief. Het is asynchroon worden verzonden en mogelijk met andere rapporten batch verwerkt. De verwerking op de server mogelijk nog steeds: het volgnummer is mogelijk verlopen, de entiteit waarop het rapport moet worden toegepast is verwijderd, enzovoort.
>
>
## <a name="health-client"></a>Health-client
De health-rapporten worden verzonden naar de health store via een health-client, die zich in de fabric-client. De health-client kan worden geconfigureerd met de volgende instellingen:
* **HealthReportSendInterval**: de vertraging tussen het moment dat het rapport is toegevoegd aan de client en de tijd dat deze wordt verzonden naar de health store. Gebruikt voor batch-rapporten in een enkel bericht, in plaats van een bericht verzenden voor elk rapport. De batchverwerking verbetert de prestaties. Standaardwaarde: 30 seconden.
* **HealthReportRetrySendInterval**: het interval waarmee de client health opnieuw samengevoegde status verzendt rapporteert aan de health store. Standaardwaarde: 30 seconden.
* **HealthOperationTimeout**: de time-outperiode voor een rapportbericht verzonden naar de health store. Als er is een time-out opgetreden voor een bericht, de client health opnieuw probeert deze totdat de health store bevestigt dat het rapport is verwerkt. Standaardwaarde: twee minuten.
> [!NOTE]
> Wanneer u de rapporten in batch worden opgenomen, de fabric-client moet worden behouden voor ten minste de HealthReportSendInterval om ervoor te zorgen dat ze worden verzonden. Als het bericht verloren gegaan is of de health store niet veroorzaakt door tijdelijke fouten toepassen, de fabric-client moet worden behouden meer hieraan een kans om opnieuw te proberen.
>
>
De buffer op de client neemt de uniekheid van de rapporten in aanmerking. Als een bepaalde slechte Rapportagefout 100 rapporten per seconde op dezelfde eigenschap van dezelfde entiteit rapporteren is, kunt u voor de rapporten, worden vervangen door de laatste versie. Maximaal één zo'n rapport bestaat in de wachtrij van de client. Als batchverwerking is geconfigureerd, is het aantal rapporten die worden verzonden naar de health store slechts één per interval verzenden. Dit is de laatste toegevoegde lijst dat overeenkomt met de meest actuele status van de entiteit.
Geef configuratieparameters wanneer `FabricClient` wordt gemaakt door het doorgeven van [FabricClientSettings](https://docs.microsoft.com/dotnet/api/system.fabric.fabricclientsettings) met de gewenste waarden vermeldingen die betrekking hebben op status.
Het volgende voorbeeld maakt een fabric-client en geeft aan dat de rapporten moeten worden verzonden wanneer ze worden toegevoegd. Nieuwe pogingen gebeuren voor time-outs en fouten die kunnen opnieuw worden geprobeerd, elke 40 seconden.
```csharp
var clientSettings = new FabricClientSettings()
{
HealthOperationTimeout = TimeSpan.FromSeconds(120),
HealthReportSendInterval = TimeSpan.FromSeconds(0),
HealthReportRetrySendInterval = TimeSpan.FromSeconds(40),
};
var fabricClient = new FabricClient(clientSettings);
```
We raden de fabric-standaardclient instellingen, die ingesteld `HealthReportSendInterval` 30 seconden. Deze instelling zorgt ervoor dat de optimale prestaties als gevolg van batchverwerking. Gebruik voor kritieke rapporten die zo snel mogelijk moeten worden verzonden, `HealthReportSendOptions` met direct `true` in [FabricClient.HealthClient.ReportHealth](https://docs.microsoft.com/dotnet/api/system.fabric.fabricclient.healthclient.reporthealth) API. Onmiddellijke rapporten overslaan en het batchen interval. Gebruik deze vlag zorgvuldig; We willen profiteren van de health-client batchverwerking indien mogelijk. Onmiddellijke verzenden is ook nuttig bij het afsluiten van de fabric-client (bijvoorbeeld het proces heeft bepaald ongeldige status en moet worden afgesloten om te voorkomen dat bijwerkingen). Hiermee zorgt u ervoor een best-effort verzenden van de totale rapporten. Wanneer een rapport met onmiddellijke vlag wordt toegevoegd, batches de client health de samengevoegde rapporten sinds de laatste verzenden.
Dezelfde parameters kunnen worden opgegeven wanneer een verbinding met een cluster wordt gemaakt via PowerShell. Het volgende voorbeeld wordt een verbinding met een lokaal cluster:
```powershell
PS C:\> Connect-ServiceFabricCluster -HealthOperationTimeoutInSec 120 -HealthReportSendIntervalInSec 0 -HealthReportRetrySendIntervalInSec 40
True
ConnectionEndpoint :
FabricClientSettings : {
ClientFriendlyName : PowerShell-1944858a-4c6d-465f-89c7-9021c12ac0bb
PartitionLocationCacheLimit : 100000
PartitionLocationCacheBucketCount : 1024
ServiceChangePollInterval : 00:02:00
ConnectionInitializationTimeout : 00:00:02
KeepAliveInterval : 00:00:20
HealthOperationTimeout : 00:02:00
HealthReportSendInterval : 00:00:00
HealthReportRetrySendInterval : 00:00:40
NotificationGatewayConnectionTimeout : 00:00:00
NotificationCacheUpdateTimeout : 00:00:00
}
GatewayInformation : {
NodeAddress : localhost:19000
NodeId : 1880ec88a3187766a6da323399721f53
NodeInstanceId : 130729063464981219
NodeName : Node.1
}
```
Op dezelfde manier API, rapporten kunnen worden verzonden met behulp van `-Immediate` switch moet onmiddellijk verzonden, ongeacht de `HealthReportSendInterval` waarde.
Voor REST, worden de rapporten worden verzonden naar de Service Fabric-gateway een interne fabric-client is. Deze client wordt standaard geconfigureerd voor het verzenden van rapporten batch verwerkt elke 30 seconden. U kunt het interval batch wijzigen met de configuratie-instelling van het cluster `HttpGatewayHealthReportSendInterval` op `HttpGateway`. Zoals gezegd, een betere optie is voor het verzenden van de rapporten met `Immediate` true.
> [!NOTE]
> Om ervoor te zorgen dat niet-gemachtigde services health tegen de entiteiten in het cluster kunnen niet rapporteren, de server voor het accepteren van aanvragen van beveiligde clients alleen te configureren. De `FabricClient` gebruikt voor het melden van de beveiliging is ingeschakeld kunnen communiceren met het cluster (bijvoorbeeld met Kerberos of certificaten verificatie). Lees meer over [beveiliging cluster](service-fabric-cluster-security.md).
>
>
## <a name="report-from-within-low-privilege-services"></a>Rapport van binnen services met beperkte bevoegdheden
Als de Service Fabric-services geen beheerderstoegang tot het cluster, kunt u health rapporteren op entiteiten van de huidige context via `Partition` of `CodePackageActivationContext`.
* Gebruik voor stateless services [IStatelessServicePartition.ReportInstanceHealth](https://docs.microsoft.com/dotnet/api/system.fabric.istatelessservicepartition.reportinstancehealth) voor het rapporteren van de huidige service-exemplaar.
* Gebruik voor stateful services [IStatefulServicePartition.ReportReplicaHealth](https://docs.microsoft.com/dotnet/api/system.fabric.istatefulservicepartition.reportreplicahealth) voor het rapporteren van de huidige replica.
* Gebruik [IServicePartition.ReportPartitionHealth](https://docs.microsoft.com/dotnet/api/system.fabric.iservicepartition.reportpartitionhealth) voor het rapporteren van de huidige partitie entiteit.
* Gebruik [CodePackageActivationContext.ReportApplicationHealth](https://docs.microsoft.com/dotnet/api/system.fabric.codepackageactivationcontext.reportapplicationhealth) voor het rapporteren van de huidige toepassing.
* Gebruik [CodePackageActivationContext.ReportDeployedApplicationHealth](https://docs.microsoft.com/dotnet/api/system.fabric.codepackageactivationcontext.reportdeployedapplicationhealth) voor het rapporteren van de huidige toepassing geïmplementeerd op het huidige knooppunt.
* Gebruik [CodePackageActivationContext.ReportDeployedServicePackageHealth](https://docs.microsoft.com/dotnet/api/system.fabric.codepackageactivationcontext.reportdeployedservicepackagehealth) voor het rapporteren van een servicepakket voor de toepassing geïmplementeerd op het huidige knooppunt.
> [!NOTE]
> Internally, the `Partition` and the `CodePackageActivationContext` hold a health client configured with default settings. As explained for the [health client](service-fabric-report-health.md#health-client), reports are batched and sent on a timer. The objects should be kept alive to give the reports a chance to be sent.
>
>
You can specify `HealthReportSendOptions` when sending reports through the `Partition` and `CodePackageActivationContext` health APIs. If you have critical reports that should be sent as soon as possible, use `HealthReportSendOptions` with `Immediate` set to `true`. Immediate reports bypass the batching interval of the internal health client. As mentioned before, use this flag with care; we want to take advantage of the health client batching whenever possible.
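As a minimal sketch (the source ID, property, and condition check are hypothetical, and the code is assumed to run inside a stateful service that kept the `IStatefulServicePartition` it received at open time), sending a critical report immediately could look like:

```csharp
// Sketch only: DetectCriticalFailure() is a hypothetical condition check,
// and "MyServiceWatchdog"/"CriticalCheck" are illustrative names.
using System.Fabric;
using System.Fabric.Health;

public void ReportCriticalIssue(IStatefulServicePartition partition)
{
    if (DetectCriticalFailure())
    {
        var healthInfo = new HealthInformation(
            sourceId: "MyServiceWatchdog",
            property: "CriticalCheck",
            healthState: HealthState.Error);

        // Immediate = true bypasses the internal health client's batching
        // interval so this report is sent as soon as possible.
        var sendOptions = new HealthReportSendOptions { Immediate = true };
        partition.ReportReplicaHealth(healthInfo, sendOptions);
    }
}
```

Reserve `Immediate` for reports that truly can't wait; routine periodic reports should let the health client batch them.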
## <a name="design-health-reporting"></a>Design health reporting
The first step in generating high-quality reports is identifying the conditions that can affect the health of the service. Any condition that can flag problems in the service or cluster when it happens--or even better, before a problem happens--can potentially save billions of dollars. The benefits include less downtime, fewer night hours spent investigating and repairing issues, and higher customer satisfaction.
Once the conditions are identified, watchdog writers need to figure out the best way to monitor them, to balance usefulness and overhead. For example, consider a service that does complex calculations that use some temporary files on a share. A watchdog could monitor the share to ensure that enough space is available. It could listen for file or directory change notifications. It could report a warning if an upfront threshold is reached, and report an error if the share is full. On a warning, a repair system could start cleaning up older files on the share. On an error, a repair system could move the service replica to another node. Note how the condition states are described in terms of health: the state of the condition can be considered healthy (ok) or unhealthy (warning or error).
Once the monitoring details are set, a watchdog writer needs to figure out how to implement the watchdog. If the conditions can be determined from within the service, the watchdog can be part of the monitored service itself. For example, the service code can check the share usage, and then report every time it tries to write a file. The advantage of this approach is that reporting is simple. Care must be taken to prevent watchdog bugs from affecting the service functionality.
Reporting from within the monitored service is not always an option. A watchdog inside the service might not be able to detect the conditions. It might not have the logic or data to make the determination. The overhead of monitoring the conditions may be too high. The conditions also may not be specific to a service, but instead affect interactions between services. Another option is to have watchdogs in the cluster as separate processes. The watchdogs monitor the conditions and report, without affecting the main services in any way. For example, these watchdogs could be implemented as stateless services in the same application, deployed on all nodes or on the same nodes as the service.
Sometimes, a watchdog running in the cluster isn't an option either. If the monitored condition is the availability or functionality of the service as users see it, it's best to have the watchdogs in the same place as the user clients. There, they can test the operations in the same way users call them. For example, you can have a watchdog that lives outside the cluster, sends requests to the service, and checks the latency and correctness of the result. (For a calculator service, for example, does 2+2 return 4 in a reasonable amount of time?)
Once the watchdog details have been finalized, you need to decide on a source ID that uniquely identifies it. If multiple watchdogs of the same type live in the cluster, they must either report on different entities or, if they report on the same entity, use a different source ID or property. This way, their reports can coexist. The property of the health report should capture the monitored condition. (For the example above, the property could be **ShareSize**.) If multiple reports apply to the same condition, the property should contain some dynamic information that allows reports to coexist. For example, if multiple shares need to be monitored, the property name could be **ShareSize-sharename**.
> [!NOTE]
> Do *not* use the health store to keep status information. Only health-related information should be reported as health, because this information affects the health evaluation of an entity. The health store is not designed as a general-purpose store. It uses health evaluation logic to aggregate all the data into the health state. Sending information unrelated to health (like reporting status with a health state of OK) doesn't affect the aggregated health state, but it can negatively affect the performance of the health store.
>
>
The next decision point is which entity to report on. Most of the time, the condition clearly identifies the entity. Choose the entity with the best possible granularity. If a condition affects all replicas in a partition, report on the partition, not on the service. There are corner cases where more thought is needed, though. If the condition affects an entity, such as a replica, but the desire is to have the condition flagged for longer than the lifetime of the replica, then it should be reported on the partition. Otherwise, when the replica is deleted, the health store cleans up all its reports. Watchdog writers must think about the lifetimes of the entity and the report. It must be clear when a report should be cleaned up from the store (for example, when an error reported on an entity no longer applies).
Let's look at an example that puts together the points described above. Consider a Service Fabric application composed of a master stateful persistent service and secondary stateless services deployed on all nodes (one secondary service type for each type of task). The master has a processing queue that contains commands to be executed by the secondaries. The secondaries execute the incoming requests and send back acknowledgement signals. One condition that could be monitored is the length of the master processing queue. If the master queue length reaches a threshold, a warning is reported. The warning indicates that the secondaries can't handle the load. If the queue reaches the maximum allowed length and commands are dropped, an error is reported, because the service can't recover. The reports can be on the property **QueueStatus**. The watchdog lives inside the service, and the report is sent periodically on the master primary replica. The time to live is two minutes, and the report is sent periodically every 30 seconds. If the primary goes down, the report is cleaned up automatically from the store. If the service replica is up, but it is deadlocked or having other issues, the report expires in the health store. In this case, the entity is evaluated at error.
Another condition that can be monitored is task execution time. The master distributes tasks to the secondaries based on the task type. Depending on the design, the master could poll the secondaries for task status. It could also wait for the secondaries to send back acknowledgement signals when they are done. In the second case, extra care must be taken to detect situations where secondaries die or messages are lost. One possibility is for the master to send a ping request to the same secondary, which sends back its status. If no status is received, the master considers it a failure and reschedules the task. This behavior assumes that the tasks are idempotent.
The monitored condition can be translated as a warning if the task is not done in a certain time (**t1**, for example 10 minutes). If the task is not completed in time (**t2**, for example 20 minutes), the monitored condition can be translated as an error. This reporting can be done in multiple ways:
* The master primary replica reports on itself periodically. You can have one property for all pending tasks in the queue. If at least one task takes longer than expected, the report status on the property **PendingTasks** is a warning or error, as appropriate. If there are no pending tasks or all tasks have started execution, the report status is OK. The tasks are persistent. If the primary goes down, the newly promoted primary can continue to report properly.
* Another watchdog process (in the cloud or external) checks the tasks (from outside, based on the desired task result) to see whether they are completed. If they do not respect the thresholds, a report is sent on the master service. A report is also sent on each task, with the task identifier included in the property, like **PendingTask+taskId**. Reports should be sent only on unhealthy states. Set the time to live to a few minutes, and mark the reports to be removed when they expire to ensure cleanup.
* The secondary that runs a task reports when it takes longer than expected to execute it. It reports on the service instance on the property **PendingTasks**. The report pinpoints the service instance that has issues, but it doesn't capture the situation where the instance dies. The reports are cleaned up then. It could instead report on the secondary service. If the secondary completes the task, the service instance clears the report from the store. The report doesn't capture the situation where the acknowledgement message is lost and the task is not done from the master's point of view.
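For the first option (the master primary reporting on itself), a minimal sketch of the periodic self-report could look like the following. The source ID, thresholds, and the `GetOldestPendingTaskAge()` helper over the persistent task queue are all illustrative assumptions:

```csharp
// Sketch only: assumes this runs on the master primary replica, which holds
// the IStatefulServicePartition it received at open time.
using System;
using System.Fabric;
using System.Fabric.Health;

public void ReportPendingTasks(IStatefulServicePartition partition)
{
    TimeSpan oldest = GetOldestPendingTaskAge(); // hypothetical queue helper

    HealthState state = HealthState.Ok;
    if (oldest > TimeSpan.FromMinutes(20))       // t2: error threshold
        state = HealthState.Error;
    else if (oldest > TimeSpan.FromMinutes(10))  // t1: warning threshold
        state = HealthState.Warning;

    var healthInfo = new HealthInformation("MasterWatchdog", "PendingTasks", state)
    {
        // Sent every 30 seconds with a 2-minute TTL: if the reporter stops,
        // the report expires and the replica is evaluated at error.
        TimeToLive = TimeSpan.FromMinutes(2),
        RemoveWhenExpired = false,
        Description = $"Oldest pending task age: {oldest}."
    };

    // Report on the current replica, so the report is cleaned up
    // automatically if the primary goes down.
    partition.ReportReplicaHealth(healthInfo);
}
```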
However the reporting is done in the cases described above, the reports are captured in application health when health is evaluated.
## <a name="report-periodically-vs-on-transition"></a>Report periodically vs. on transition
Using the health reporting model, watchdogs can send reports periodically or on transitions. The recommended way for watchdog reporting is periodically, because the code is much simpler and less prone to errors. The watchdogs must strive to be as simple as possible, to avoid bugs that trigger incorrect reports. Incorrect *unhealthy* reports affect health evaluations and health-based scenarios, including upgrades. Incorrect *healthy* reports hide issues in the cluster.
For periodic reporting, the watchdog can be implemented with a timer. On a timer callback, the watchdog checks the state and sends a report based on the current state. There is no need to see which report was sent previously or to make any optimizations in terms of messaging. The health client has batching logic to help with performance. While the health client is kept alive, it retries internally until the report is acknowledged by the health store or the watchdog generates a newer report with the same entity, property, and source.
Reporting on transitions requires careful handling of state. The watchdog monitors some conditions and reports only when the conditions change. The upside of this approach is that fewer reports are needed. The downside is that the watchdog logic is complex. The watchdog must maintain the conditions or the reports, so that they can be inspected to determine state changes. On failover, care must be taken with reports that were added but not yet sent to the health store. The sequence number must be ever-increasing. If it's not, the reports are rejected as stale. In the rare cases where data loss occurs, synchronization may be needed between the state of the reporter and the state of the health store.
Reporting on transitions makes sense for services reporting on themselves, through `Partition` or `CodePackageActivationContext`. When the local object (replica or deployed service package / deployed application) is removed, all its reports are also removed. This automatic cleanup relaxes the need for synchronization between the reporter and the health store. If the report is for the parent partition or the parent application, care must be taken on failover to avoid stale reports in the health store. Logic must be added to maintain the correct state and to clear the report from the store when it's no longer needed.
## <a name="implement-health-reporting"></a>Implement health reporting
Once the entity and report details are clear, sending health reports can be done through the API, PowerShell, or REST.
### <a name="api"></a>API
To report through the API, you need to create a health report specific to the entity type you want to report on, and give the report to a health client. Alternatively, you can create a health information object and pass it to the appropriate reporting methods on `Partition` or `CodePackageActivationContext` to report on the current entities.
The following example shows periodic reporting from a watchdog within the cluster. The watchdog checks whether an external resource can be accessed from a node. The resource is needed by a service manifest within the application. If the resource is unavailable, the other services within the application can still function properly. Therefore, the report is sent on the deployed service package entity every 30 seconds.
```csharp
private static Uri ApplicationName = new Uri("fabric:/WordCount");
private static string ServiceManifestName = "WordCount.Service";
private static string NodeName = FabricRuntime.GetNodeContext().NodeName;
private static Timer ReportTimer = new Timer(new TimerCallback(SendReport), null, 30 * 1000, 30 * 1000);
private static FabricClient Client = new FabricClient(new FabricClientSettings() { HealthReportSendInterval = TimeSpan.FromSeconds(0) });
public static void SendReport(object obj)
{
// Test whether the resource can be accessed from the node
    HealthState healthState = TestConnectivityToExternalResource();
// Send report on deployed service package, as the connectivity is needed by the specific service manifest
// and can be different on different nodes
var deployedServicePackageHealthReport = new DeployedServicePackageHealthReport(
ApplicationName,
ServiceManifestName,
NodeName,
new HealthInformation("ExternalSourceWatcher", "Connectivity", healthState));
// TODO: handle exception. Code omitted for snippet brevity.
// Possible exceptions: FabricException with error codes
// FabricHealthStaleReport (non-retryable, the report is already queued on the health client),
// FabricHealthMaxReportsReached (retryable; user should retry with exponential delay until the report is accepted).
Client.HealthManager.ReportHealth(deployedServicePackageHealthReport);
}
```
### <a name="powershell"></a>PowerShell
Send health reports with **Send-ServiceFabric*EntityType*HealthReport**.
The following example shows periodic reporting on CPU values on a node. The reports should be sent every 30 seconds, and they have a time to live of two minutes. If they expire, the reporter has issues, so the node is evaluated at error. When the CPU is above a threshold, the report has a health state of warning. When the CPU remains above the threshold for more than the configured time, it's reported as an error. Otherwise, the reporter sends a health state of OK.
```powershell
PS C:\> Send-ServiceFabricNodeHealthReport -NodeName Node.1 -HealthState Warning -SourceId PowershellWatcher -HealthProperty CPU -Description "CPU is above 80% threshold" -TimeToLiveSec 120
PS C:\> Get-ServiceFabricNodeHealth -NodeName Node.1
NodeName : Node.1
AggregatedHealthState : Warning
UnhealthyEvaluations :
Unhealthy event: SourceId='PowershellWatcher', Property='CPU', HealthState='Warning', ConsiderWarningAsError=false.
HealthEvents :
SourceId : System.FM
Property : State
HealthState : Ok
SequenceNumber : 5
SentAt : 4/21/2015 8:01:17 AM
ReceivedAt : 4/21/2015 8:02:12 AM
TTL : Infinite
Description : Fabric node is up.
RemoveWhenExpired : False
IsExpired : False
Transitions : ->Ok = 4/21/2015 8:02:12 AM
SourceId : PowershellWatcher
Property : CPU
HealthState : Warning
SequenceNumber : 130741236814913394
SentAt : 4/21/2015 9:01:21 PM
ReceivedAt : 4/21/2015 9:01:21 PM
TTL : 00:02:00
Description : CPU is above 80% threshold
RemoveWhenExpired : False
IsExpired : False
Transitions : ->Warning = 4/21/2015 9:01:21 PM
```
The following example reports a transient warning on a replica. It first gets the partition ID, and then the replica ID for the service it's interested in. It then sends a report from **PowershellWatcher** on the property **ResourceDependency**. The report is of interest for only two minutes, and it's automatically removed from the store after that.
```powershell
PS C:\> $partitionId = (Get-ServiceFabricPartition -ServiceName fabric:/WordCount/WordCount.Service).PartitionId
PS C:\> $replicaId = (Get-ServiceFabricReplica -PartitionId $partitionId | where {$_.ReplicaRole -eq "Primary"}).ReplicaId
PS C:\> Send-ServiceFabricReplicaHealthReport -PartitionId $partitionId -ReplicaId $replicaId -HealthState Warning -SourceId PowershellWatcher -HealthProperty ResourceDependency -Description "The external resource that the primary is using has been rebooted at 4/21/2015 9:01:21 PM. Expect processing delays for a few minutes." -TimeToLiveSec 120 -RemoveWhenExpired
PS C:\> Get-ServiceFabricReplicaHealth -PartitionId $partitionId -ReplicaOrInstanceId $replicaId
PartitionId : 8f82daff-eb68-4fd9-b631-7a37629e08c0
ReplicaId : 130740415594605869
AggregatedHealthState : Warning
UnhealthyEvaluations :
Unhealthy event: SourceId='PowershellWatcher', Property='ResourceDependency', HealthState='Warning', ConsiderWarningAsError=false.
HealthEvents :
SourceId : System.RA
Property : State
HealthState : Ok
SequenceNumber : 130740768777734943
SentAt : 4/21/2015 8:01:17 AM
ReceivedAt : 4/21/2015 8:02:12 AM
TTL : Infinite
Description : Replica has been created.
RemoveWhenExpired : False
IsExpired : False
Transitions : ->Ok = 4/21/2015 8:02:12 AM
SourceId : PowershellWatcher
Property : ResourceDependency
HealthState : Warning
SequenceNumber : 130741243777723555
SentAt : 4/21/2015 9:12:57 PM
ReceivedAt : 4/21/2015 9:12:57 PM
TTL : 00:02:00
Description : The external resource that the primary is using has been rebooted at 4/21/2015 9:01:21 PM. Expect processing delays for a few minutes.
RemoveWhenExpired : True
IsExpired : False
Transitions : ->Warning = 4/21/2015 9:12:32 PM
```
### <a name="rest"></a>REST
Send health reports with REST by using POST requests that go to the desired entity and have the health report description in the body. For example, see how to send REST [cluster health reports](https://docs.microsoft.com/rest/api/servicefabric/report-the-health-of-a-cluster) or [service health reports](https://docs.microsoft.com/rest/api/servicefabric/report-the-health-of-a-service). All entities are supported.
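As a rough sketch (the cluster endpoint, node name, API version, and report values here are placeholder assumptions; check the REST reference for the exact request shape, and note that a secured cluster also requires certificate or token authentication on the HTTP handler), posting a node health report could look like:

```csharp
// Sketch only: endpoint, api-version, and body values are assumptions.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static async Task PostNodeHealthReportAsync()
{
    using (var client = new HttpClient())
    {
        // POST to the node entity's ReportHealth endpoint.
        string uri = "http://localhost:19080/Nodes/Node.1/$/ReportHealth?api-version=1.0";

        // The body carries the health report description.
        string body = @"{
            ""SourceId"": ""PowershellWatcher"",
            ""Property"": ""CPU"",
            ""HealthState"": ""Warning"",
            ""Description"": ""CPU is above 80% threshold""
        }";

        HttpResponseMessage response = await client.PostAsync(
            uri, new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```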
## <a name="next-steps"></a>Next steps
Based on the health data, service writers and cluster/application administrators can think of ways to consume the information. For example, they can set up alerts based on health state to catch severe issues before they cause outages. Administrators can also set up repair systems to fix issues automatically.
[Introduction to Service Fabric health monitoring](service-fabric-health-introduction.md)
[View Service Fabric health reports](service-fabric-view-entities-aggregated-health.md)
[How to report and check service health](service-fabric-diagnostics-how-to-report-and-check-service-health.md)
[Use system health reports to troubleshoot](service-fabric-understand-and-troubleshoot-with-system-health-reports.md)
[Monitor and diagnose services locally](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md)
[Service Fabric application upgrade](service-fabric-application-upgrade.md)
| 118.468553 | 1,473 | 0.77172 | nld_Latn | 0.999755 |
e9296021772e8caf9700c8c7ed4f1cc948b340b8 | 1,036 | md | Markdown | pages/apps/tworooks.md | Sammcb/Sammcb.github.io | 28aa1148ab8759375997e0fb4c1faff57a80ade7 | [
"MIT"
] | null | null | null | pages/apps/tworooks.md | Sammcb/Sammcb.github.io | 28aa1148ab8759375997e0fb4c1faff57a80ade7 | [
"MIT"
] | null | null | null | pages/apps/tworooks.md | Sammcb/Sammcb.github.io | 28aa1148ab8759375997e0fb4c1faff57a80ade7 | [
"MIT"
] | null | null | null | ---
layout: app
permalink: /tworooks
title: Two Rooks
css: /assets/styles/app.css
---
Two Rooks is a simple, colorful chess app where you can create your own themes! Currently, only local (two players on the same device) games are supported.
{:.device-image}
{:.center-column}
{:.device-image}
{:.center-column}
[{% include app-store-icon.html %}](https://apps.apple.com/us/app/two-rooks/id1555601585)
{:.center-column}
For information on how to file a bug report or feature request, check out the Two Rooks [discussions](https://github.com/Sammcb/TwoRooks/discussions/1). With the new 2.0.0 update, there is no longer a need ot suggest themes so that discussion has been deleted. If you would like to add the old themes to the app, check out this [discussion](https://github.com/Sammcb/TwoRooks/discussions/5) for the color codes and emojis.
## [Privacy Policy 🔗](#privacy){:.privacy-link}
Two Rooks never collects or stores any personal data.
{:#privacy}
| 43.166667 | 422 | 0.745174 | eng_Latn | 0.967654 |
e9299e7f7f6a1758304983d4dc0db55168cefd1b | 4,772 | md | Markdown | _posts/2015-04-25-a-few-hours-with-docker.md | zhouyiqi91/zhouyiqi91.github.io | 43f23b0b7a2cf965baa89066a535831d14a87e16 | [
"MIT"
] | 4 | 2017-03-27T03:24:23.000Z | 2021-01-28T06:01:52.000Z | _posts/2015-04-25-a-few-hours-with-docker.md | zhouyiqi91/zhouyiqi91.github.io | 43f23b0b7a2cf965baa89066a535831d14a87e16 | [
"MIT"
] | null | null | null | _posts/2015-04-25-a-few-hours-with-docker.md | zhouyiqi91/zhouyiqi91.github.io | 43f23b0b7a2cf965baa89066a535831d14a87e16 | [
"MIT"
] | 6 | 2016-06-29T07:55:43.000Z | 2021-09-02T08:19:27.000Z | ---
layout: post
title: "A few hours with docker"
description: ""
category:
tags: []
---
{% include JB/setup %}
### Installing docker on Mac
With all the buzz around [docker][docker], I finally decided to give it try.
I first asked Broad sysadmins if there are machines set up for testing docker
applications. They declined my request for security concerns and suggested
[Kitematic][kitematic] for my MacBook. This means I can hardly run sequence
analyses for human. Anyway, I followed their suggestion. Kitematic turns out
to be easy to install. It found my pre-installed [VirtualBox][vb], put
a new Linux VM in it, launched a docker server inside the VM and provided a
`/usr/local/bin/docker` on my laptop that talks to the server. When I opened a
terminal from Kitematic (hot key: command-shift-T), I have a fully functional
`docker` command. You can in principle launch `docker` from other terminals,
but you need to export the right environmental variables.
### Trying prebuilt images
I ran the [busybox image][busybox] successfully. I then tried ngseasy as it is
supposed to be easily installed with `make all`. When I did that, it started to
download a 600MB image. I frowned - my laptop does not have much disk space -
but decided to wait. After this one, it started to download another 500MB
image. I killed `make all` and deleted temporary files and the virtual machine.
A 1.1GB pipeline seems too much for my small experiment (and I don't know if it
keeps downloading more).
### Building my own image
Can I build a small image if I only want to install BWA in it? I asked myself.
I then googled around and found [this post][tinyimage]. It is still too complex
for my purpose, but does give the answer: I can. With more google searches, I
learned how to build a tiny image: to use statically linked binaries. I have put
up relevant files in [lh3/bwa-docker][bd] at github. Briefly, to build and use
it locally:
git clone https://github.com/lh3/bwa-docker.git
cd bwa-docker
docker build -t mybwa .
docker run -v `pwd`:/tmp -w /tmp mybwa index MT.fa
cat test.fq | docker run -iv `pwd`:/tmp -w /tmp mybwa mem MT.fa - > test.sam
This creates test.sam in the `bwa-docker` directory. Yes, docker naturally
reads from stdin and writes to stdout, though perhaps there are more efficient
ways to pipe between docker containers.
With files on github, I can also add [my image][bwa-dh] to [Docker Hub][dh] by
allowing Docker Hub to access my github account. You can access the image with:
docker pull lh3lh3/bwa
docker run -v `pwd`:/tmp -w /tmp lh3lh3/bwa index MT.fa
Is the above the typical approach to creating images? Definitely not. This way,
docker is no better than statically linked binaries. If you look at other
Dockerfiles (the file used to automatically build a docker image), you will see
the typical approach is to compile executables inside the docker VM. Images
created this way depend on "fat" base images. You have to download a base image
of hundreds of MB in size in the first place. If you have two tools built upon
different fat base images, you probably need to have both bases (is that
correct?).
### Preliminary thoughts
Docker is a bless to complex systems such as the old Apache+MySQL+PHP combo,
but is a curse to simple command line tools. For simple tools, it adds multiple
complications (security, kernel version, Dockerfile, large package,
inter-process communication, etc) with little benefit.
Bioinformatics tools are not rocket science. They are supposed to be simple. If
they are not simple, we should encourage better practices rather than live with
the problems and resort to docker. I am particularly against dockerizing
easy-to-compile tools such as velvet and bwa or well packaged tools such as
spades. Another large fraction of tools in C/C++ can be compiled to statically
linked binaries or shipped with necessary dynamic libraries (see salifish).
While not ideal, these are still better solutions than docker. Docker will be
needed for some tools with complex dependencies, but I predict most of such
tools will be abandoned by users unless they are substantially better than
other competitors, which rarely happens in practice.
PS: the only benefit of dockerizing simple tools is that we can acquire a tool
with `docker pull user/tool`, but that is really the benefit of a centralized
repository which we are lacking in our field.
[docker]: https://www.docker.com
[kitematic]: https://kitematic.com
[vb]: https://www.virtualbox.org
[busybox]: https://registry.hub.docker.com/_/busybox/
[tinyimage]: http://blog.xebia.com/2014/07/04/create-the-smallest-possible-docker-container/
[bd]: https://github.com/lh3/bwa-docker
[dh]: https://hub.docker.com
[bwa-dh]: https://registry.hub.docker.com/u/lh3lh3/bwa/
| 48.20202 | 92 | 0.771584 | eng_Latn | 0.997633 |
e92a788d112ccad06094dba3dd7d6757b52190b4 | 1,466 | md | Markdown | solution/question259.md | ftvision/LeetCode | 49a60cca90dcd2f14b3628b048a89178375ccdc1 | [
"MIT"
] | 2 | 2016-11-22T07:54:17.000Z | 2018-01-26T21:30:26.000Z | solution/question259.md | ftvision/LeetCode | 49a60cca90dcd2f14b3628b048a89178375ccdc1 | [
"MIT"
] | null | null | null | solution/question259.md | ftvision/LeetCode | 49a60cca90dcd2f14b3628b048a89178375ccdc1 | [
"MIT"
] | null | null | null | ---
title: Question 259
category: [Algorithm]
tags: [Sorting]
---

## Algorithm
- 很重要的一点是:这个题目实际上就是只要找出三个小标不一样的数,使得他们的和小于target,具体的下标其实并不重要,因为反正一个tuple里面可以重新定义`i,j,k`
- $O(n^3)$的方法就是直接枚举`i,j,k`
- $O(n^2\log(n))$的方法就是先sort整个数组,这样的话,枚举`i, j`,然后根据不等式得到`upper bound = target - i - j`,然后可以二分查找这个upper bound在哪里,然后所有小于upper bound的且不是`i,j`的都可以作为`k`的选择,这样直接算就可以了
- $O(n^2)$的方法:
- 首先还是sort整个数组
- 然后枚举`i`,时间是$O(n)$
- 然后假设`j = i + 1`,这个时候可以找到最大的`k`使得`nums[i] + nums[j] + nums[k] < target`,而且`k-j`这么多个数都可以作为`k`的选择
- 这个时候,我们只需要移动`j`,当`j`增大的时候,这个最大的`k`的下标只能维持不动,或者减小,也就是说是单调非增的。所以`j`和`k`最终会在某个地方相遇,这个过程中,每一次更新`j`都可以算一次最大`k`与`j`之间的距离,从而计算出有多少组解在。
- 整个扫`j`的时间是$O(n)$
## Comment
- 都已经提示了可不可以用$O(n^2)$,还是可以想出来的。
## Code
```python
class Solution(object):
def threeSumSmaller(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: int
"""
nums = sorted(nums)
n = len(nums)
total = 0
for i in range(n):
k = i + 1
j = k + 1
while k < n and j < n and nums[i] + nums[k] + nums[j] < target:
j = j + 1
j = j - 1
while k < n and j < n and k < j:
while k < j and nums[i] + nums[k] + nums[j] >= target:
j = j - 1
if k < j:
total += j - k
k = k + 1
return total
```
| 28.192308 | 159 | 0.538881 | yue_Hant | 0.355756 |
e92ab26b5f617b976f0fa2b00fdd28b182187f07 | 1,849 | md | Markdown | _posts/12/2021-04-06-friz.md | chito365/uk | d1f92a520c24cba921e111aa73b75fd3fbc9deb8 | [
"MIT"
] | null | null | null | _posts/12/2021-04-06-friz.md | chito365/uk | d1f92a520c24cba921e111aa73b75fd3fbc9deb8 | [
"MIT"
] | null | null | null | _posts/12/2021-04-06-friz.md | chito365/uk | d1f92a520c24cba921e111aa73b75fd3fbc9deb8 | [
"MIT"
] | null | null | null | ---
id: 5794
title: Friz
date: 2021-04-06T21:48:51+00:00
author: Laima
layout: post
guid: https://ukdataservers.com/friz/
permalink: /04/06/friz/
---
* some text
{: toc}
## Who is Friz
Polish social media star named Karol Wiśniewski who is famous for his Friz YouTube channel. He has gained more than 1.2 billion views for his challenges, pranks, and tags.
## Prior to Popularity
He began his YouTube channel in October 2010.
## Random data
He has amassed more than 3.8 million subscribers to his YouTube channel. He has also earned more than 2.6 million followers to his frizoluszek Instagram account.
## Family & Everyday Life of Friz
He was born and raised in Poland. He has dated YouTuber Weronika Sowa.
## People Related With Friz
He and Lord Kruszwil are both Polish YouTube stars known for their challenges.
| 21.011364 | 173 | 0.338021 | eng_Latn | 0.999556 |
e92b4f882db192c522e8cbadc2aa8ae78213ac26 | 256 | md | Markdown | README.md | lachlankrautz/word | 4bbfbb7da84ecb039f371a775fd048de71832677 | [
"MIT"
] | null | null | null | README.md | lachlankrautz/word | 4bbfbb7da84ecb039f371a775fd048de71832677 | [
"MIT"
] | null | null | null | README.md | lachlankrautz/word | 4bbfbb7da84ecb039f371a775fd048de71832677 | [
"MIT"
] | null | null | null | word
=================
### Automate some word processing
### Usage
```shell
target/word [FILE_PATH] <paragraphs|tables>
```
### Install
```shell
$ git clone git@github.com:lachlankrautz/word
$ mvn package
```
### Dependencies
- git
- maven
- java
| 12.190476 | 47 | 0.617188 | kor_Hang | 0.358135 |
e92bddf7bd905a73f63a46214e7bcb165791bf45 | 105 | md | Markdown | README.md | rdrews-dev/Teaching | ca9ca2c522576a13a1422094c6f1430b007b67e7 | [
"MIT"
] | null | null | null | README.md | rdrews-dev/Teaching | ca9ca2c522576a13a1422094c6f1430b007b67e7 | [
"MIT"
] | null | null | null | README.md | rdrews-dev/Teaching | ca9ca2c522576a13a1422094c6f1430b007b67e7 | [
"MIT"
] | null | null | null | This is a repository in which I collect some workflows under development and examples used for teaching.
| 52.5 | 104 | 0.828571 | eng_Latn | 1.000009 |
e92ce4eb37d18341483a3e5a81c92f5295101771 | 1,204 | md | Markdown | AlchemyInsights/configure-and-validate-microsoft-defender-antivirus-network-connections.md | isabella232/OfficeDocs-AlchemyInsights-pr.hu-HU | 308f0ab87b566ec302a8ddeadc3a529ab28bdaf0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-19T19:06:44.000Z | 2020-05-19T19:06:44.000Z | AlchemyInsights/configure-and-validate-microsoft-defender-antivirus-network-connections.md | isabella232/OfficeDocs-AlchemyInsights-pr.hu-HU | 308f0ab87b566ec302a8ddeadc3a529ab28bdaf0 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:28:58.000Z | 2022-02-09T06:52:58.000Z | AlchemyInsights/configure-and-validate-microsoft-defender-antivirus-network-connections.md | isabella232/OfficeDocs-AlchemyInsights-pr.hu-HU | 308f0ab87b566ec302a8ddeadc3a529ab28bdaf0 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-10-11T19:13:19.000Z | 2021-10-09T10:43:09.000Z | ---
title: Configure and validate Microsoft Defender Antivirus network connections
ms.author: v-smandalika
author: v-smandalika
manager: dansimp
ms.date: 02/25/2021
audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "6035"
- "9001464"
ms.openlocfilehash: 42fb806913356babf4fc9d06274e8db7cbdcadae
ms.sourcegitcommit: 6741a997fff871d263f92d3ff7fb61e7755956a9
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 03/04/2021
ms.locfileid: "50481940"
---
# <a name="configure-and-validate-microsoft-defender-antivirus-network-connections"></a>Configure and validate Microsoft Defender Antivirus network connections
To make sure Microsoft Defender Antivirus stays updated, you must configure your network to allow connections between your endpoints and certain Microsoft servers. For more information, see [Configure and validate Microsoft Defender Antivirus network connections](https://docs.microsoft.com/windows/security/threat-protection/microsoft-defender-antivirus/configure-network-connections-microsoft-defender-antivirus).
| 46.307692 | 444 | 0.843854 | hun_Latn | 0.974195 |
e92d930e01f153e9412d3be6dff2c9e633d7ce5f | 429 | md | Markdown | _posts/2018-09-02-New-Website-Active.md | FoamFlingers/foamflingers.github.io | 76aa86e028498e9b5bcbd7edfef9ffb3b9be9305 | [
"MIT"
] | null | null | null | _posts/2018-09-02-New-Website-Active.md | FoamFlingers/foamflingers.github.io | 76aa86e028498e9b5bcbd7edfef9ffb3b9be9305 | [
"MIT"
] | null | null | null | _posts/2018-09-02-New-Website-Active.md | FoamFlingers/foamflingers.github.io | 76aa86e028498e9b5bcbd7edfef9ffb3b9be9305 | [
"MIT"
] | null | null | null | ---
title: New Website Active
layout: post
author: foamflingersbucks
permalink: /new-website-active/
source-id: 117VOKVDaqfHAy3nOyuQTTX0GLRaT0zOf1ypZZrPPlJM
published: true
---
Finally, after much preparation, the new website is ready! From now on, all posts, information, photos and videos will be on the new site. This site is now discontinued.
You can find the new site at:
https://sites.google.com/view/foamflingersbucks
| 28.6 | 170 | 0.792541 | eng_Latn | 0.893099 |
e92dc1e96a2d5814b3bc948addecf097bbbf5c70 | 10,152 | md | Markdown | _posts/2018-05-14-refugees-tagging-wake-words-mycroft-workaround-partnership.md | CarstenAgerskov/docs-rewrite | 49dd41ca841ca1ce6ca9ccf8e46ef9cb570b4f7e | [
"Apache-2.0"
] | null | null | null | _posts/2018-05-14-refugees-tagging-wake-words-mycroft-workaround-partnership.md | CarstenAgerskov/docs-rewrite | 49dd41ca841ca1ce6ca9ccf8e46ef9cb570b4f7e | [
"Apache-2.0"
] | null | null | null | _posts/2018-05-14-refugees-tagging-wake-words-mycroft-workaround-partnership.md | CarstenAgerskov/docs-rewrite | 49dd41ca841ca1ce6ca9ccf8e46ef9cb570b4f7e | [
"Apache-2.0"
] | null | null | null | ---
ID: 37845
post_title: >
Using Precise to Help Refugees | Mycroft
Partners with WorkAround
author: Eric Jurgeson
post_excerpt: ""
layout: post
permalink: >
http://mycroft.ai/blog/refugees-tagging-wake-words-mycroft-workaround-partnership/
published: true
post_date: 2018-05-14 17:00:15
---
[vc_row type="in_container" full_screen_row_position="middle" scene_position="center" text_color="dark" text_align="left" overlay_strength="0.3" shape_divider_position="bottom"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" column_border_radius="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_column_text]
<h4><span style="font-weight: 400;">Mycroft is partnering with </span><a href="http://workaroundonline.com/index.html" target="_blank" rel="noopener"><span style="font-weight: 400;">WorkAround</span></a><span style="font-weight: 400;"> -- a microwork platform that provides employment and a living wage to refugees and displaced persons -- for the tagging of Precise wake word samples.</span></h4>
<span style="font-weight: 400;">When I arrived at the <a href="https://masschallenge.org/" target="_blank" rel="noopener">MassChallenge</a> Boston accelerator in June of 2017, I quickly got to know a number of the companies in our batch. There was paddle boarding with </span><a href="https://www.finnest.co/" target="_blank" rel="noopener"><span style="font-weight: 400;">Finnest</span></a><span style="font-weight: 400;"> on the Charles River, </span><a href="https://www.necn.com/multimedia/MassChallenge-Mycroft-AI-Veripad-CareAline_NECN-441788123.html" target="_blank" rel="noopener"><span style="font-weight: 400;">being on TV</span></a><span style="font-weight: 400;"> with </span><a href="https://carealine.com/" target="_blank" rel="noopener"><span style="font-weight: 400;">CareAline</span></a><span style="font-weight: 400;"> and </span><a href="http://www.veripad.co/" target="_blank" rel="noopener"><span style="font-weight: 400;">Veripad</span></a><span style="font-weight: 400;">, and hiking with the teams of </span><a href="http://www.clevot.com/" target="_blank" rel="noopener"><span style="font-weight: 400;">Clevot</span></a><span style="font-weight: 400;">, </span><a href="https://etiquettebride.us//" target="_blank" rel="noopener"><span style="font-weight: 400;">Etiquette</span></a><span style="font-weight: 400;">, </span><a href="https://www.tot-em.com/en/" target="_blank" rel="noopener"><span style="font-weight: 400;">Tot-em</span></a><span style="font-weight: 400;">, and </span><a href="https://www.cloudboost.io/" target="_blank" rel="noopener"><span style="font-weight: 400;">Cloudboost</span></a><span style="font-weight: 400;">. Spending a summer in Boston with some top startups is hard to beat and I did my best to introduce myself around the MassChallenge space and get to know the other companies and their founders.</span>
<span style="font-weight: 400;">Thanks to that, in September when Mycroft was looking at the tens of thousands of wake word samples we were collecting from our Opted-In community members, and considering using Mechanical Turk to accelerate their tagging, I was able to raise my hand and say “We can do better than MTurk.” That was because I had taken a Lyft once from the MassChallenge building with Wafaa Arbash of WorkAround, who introduced me to her co-founder Jennie Kelly.</span>
<span style="font-weight: 400;">WorkAround is an online microwork platform that provides work opportunities to the 65 million displaced people in the world. Many of these are well-educated people with a smartphone or computer and internet access, but who may not be able to work in their trained professions while displaced from their homelands. Barely a year old, WorkAround has already provided work to over 250 displaced people in 7 different countries, their “WorkArounders.” </span><span style="font-weight: 400;">Director of Operations and Finance Jennie Kelly notes,</span>
<blockquote><span style="font-weight: 400;">“</span><i><span style="font-weight: 400;">economic opportunities for these skilled people provide stability for their families, helps integrate them into their new communities and reduces the risk of returning to conflict zones.”</span></i></blockquote>
[/vc_column_text][/vc_column][/vc_row][vc_row type="in_container" full_screen_row_position="middle" scene_position="center" text_color="dark" text_align="left" overlay_strength="0.3" shape_divider_position="bottom"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" column_border_radius="none" width="5/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_column_text]WorkArounders also exercise other benefits of microwork like setting one’s own schedule, working from anywhere with an internet connection, and choosing which tasks to take and which to pass on. One WorkArounder, Rasheed Dadoush says,
<blockquote><span style="font-weight: 400;">“</span><em><span style="font-weight: 400;">I would like to thank WorkAround for the opportunity to work online, at a time when we need to have a glimmer of hope and some positive feelings.</span></em><span style="font-weight: 400;">”</span></blockquote>
[/vc_column_text][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" column_border_radius="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][image_with_animation image_url="37864" alignment="right" animation="None" border_radius="none" box_shadow="none" max_width="100%"][/vc_column][/vc_row][vc_row type="in_container" full_screen_row_position="middle" scene_position="center" text_color="dark" text_align="left" overlay_strength="0.3" shape_divider_position="bottom"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" column_border_radius="none" width="1/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][image_with_animation image_url="37882" alignment="center" animation="None" img_link_target="_blank" border_radius="none" box_shadow="none" max_width="100%" img_link="http://workaroundonline.com/index.html"][/vc_column][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" column_border_radius="none" width="5/6" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_column_text]<span style="font-weight: 400;">Why go with WorkAround over MTurk or another platform? For one, it’s always great to work with a team you have a personal connection with. Additionally, WorkAround makes it simple to post a batch of work and get back quality results quickly. They handle all of the logistics of onboarding the workers and ensuring they are paid a living wage for their time and effort. 
We work closely with WorkAround to ensure the quality of the work is in line with the output of the Mycroft community (which it is). And given the turnaround of our first batch of tagging we sent to WorkAround, we could see the process scale fantastically when we need to develop a custom wake word for a customer or deploy something quickly for the community.</span>[/vc_column_text][/vc_column][/vc_row][vc_row type="in_container" full_screen_row_position="middle" scene_position="center" text_color="dark" text_align="left" overlay_strength="0.3" shape_divider_position="bottom"][vc_column column_padding="no-extra-padding" column_padding_position="all" background_color_opacity="1" background_hover_color_opacity="1" column_shadow="none" column_border_radius="none" width="1/1" tablet_text_alignment="default" phone_text_alignment="default" column_border_width="none" column_border_style="solid"][vc_column_text]<span style="font-weight: 400;">With the recent migration of Precise tagging into home.mycroft.ai, we finally have adequate tracking of that process to offer it out to WorkAround, and to tie it to community engagement. For example, we can offer 5,000 utterances to WorkAround for every 20,000 tagged by the community, adding a 25% boost to the efforts of the community and providing economic stability to refugees around the world.</span>
<span style="font-weight: 400;">There are a couple ways for the community to get involved. First and foremost is to Opt-In to the Mycroft Open Dataset so your wake words get added to the Precise tagger. Do this at home.mycroft.ai under </span><a href="https://home.mycroft.ai/#/setting/basic" target="_blank" rel="noopener"><span style="font-weight: 400;">Settings</span></a><span style="font-weight: 400;">. Next is to tag some wake words yourself at home.mycroft.ai under the </span><a href="https://home.mycroft.ai/#/precise" target="_blank" rel="noopener"><span style="font-weight: 400;">Tagging</span></a><span style="font-weight: 400;"> tab. Doing both, or even one of these, will ensure we continue to get better at spotting wake words while doing some worldly good in the process.</span>
<span style="font-weight: 400;">Would your organization like to expand your corporate social responsibility, see results in tagging, translation, data entry, or transcription, all while knowing that you’re helping refugees and displaced persons gain economic stability? WorkAround might be the perfect fit. You can get in touch with Jennie directly to discuss your project at <a href="mailto:jkelly@workaroundonline.com" target="_blank" rel="noopener">jkelly@workaroundonline.com</a></span>[/vc_column_text][/vc_column][/vc_row] | 362.571429 | 3,302 | 0.783885 | eng_Latn | 0.942385 |
e92dfb36c7e8409ef00ce8cccf79880c3986a636 | 3,083 | md | Markdown | README.md | aaronplasek/19th_C._novel_scraper | b68c73720d4458524ea4924a1670af4ea9fb0fb8 | [
"MIT"
] | null | null | null | README.md | aaronplasek/19th_C._novel_scraper | b68c73720d4458524ea4924a1670af4ea9fb0fb8 | [
"MIT"
] | 1 | 2015-10-12T14:58:44.000Z | 2015-10-13T22:44:53.000Z | README.md | aaronplasek/xml_scraper | b68c73720d4458524ea4924a1670af4ea9fb0fb8 | [
"MIT"
] | 1 | 2016-09-15T12:30:37.000Z | 2016-09-15T12:30:37.000Z | *TEI/XML Epigraph Scraper*
================
This script pulls XML-tagged text and metadata from all the XML files in a directory and outputs this information into a csv file (to be viewed in Excel, OpenOffice, or your preferred spreedsheet of choice). Presently the script grabs (1) author name, (2) novel title, (3) publication date, (4) publication location, (5) epigraph text, (6) epigraph attribution, (7) author birth and death years, and (8) file creation attribution from XML files in Early American Fiction and [Wright American Fiction](https://github.com/iulibdcs/tei_text) collections.
The script also scrapes the *number* of "quote" and "epigraph" tags in each XML file. This is done because it has been discovered that some files have epigraphs that have not been correctly tagged as such. Examining the number and placement of "quote" and "epigraph" tags, in combination with other methods, can be used to guide systematic checking of novels for epigraphs in cases where they have not been properly labeled.
If you want to use this scraper to examine different XML-encoded corpora, it will be necessary to make minor changes to the code. Please feel free to fork to your heart's satisfaction.
*Usage*
=============
Just place the script in the directory with your XML texts to be scraped, and then run script in the terminal by typing
`python3 xml_scraper.py`.
The csv files generated will be placed in the same directory containing xml_scraper.py.
*Testing*
==========
This script was tested on a 2012 Macbook Pro and a 2013 iMac (both using OS 10.9.x) using python 3.4. (Please note that this script will not work with python 2.x.) You will need Beautiful Soup 4.
*Code Provenance*
=============
The first version of this code was written in a weekend in November 2013 by Aaron Plasek in collaboration with the [NYU Digital Experiments Working Group](http://nyudigitalexperiments.com/) for the (now defunct) Epigraph Project. (This initial version was also the first program Aaron wrote in python, and the present version of the code bears many of the scars from that initial effort.) This initial version only collected novel author names and novel epigraphs. During this time Jonathan Reeve also wrote an [epigraph scraper](https://github.com/DigitalExperiments/epi-project) for the Epigraph Project that uses XPath exclusively.
Working in conversation with Collin Jennings and Robby Koehler from 2013-2015, Aaron added functionality to collect more information about novels being examined. During the NYU Spring 2015 semester Chancy Zhang, in collaboration with Colling Jennings and Aaron Plasek, also forked a [version of this scraper](https://github.com/yangchen506) that uses SQLite instead of python lists.
During the 2015 European Summer School in the Digital Humanities at Leipzig, the two functions "count tags" and "count nested tags" (used to facilitate checking of novels for epigraphs in cases where the epigraphs have been mislabeled or unlabeled) were collaboratively written by Ariane Pinche, Ana Migowski, Mark Moll, and Aaron Plasek.
| 114.185185 | 636 | 0.782355 | eng_Latn | 0.998836 |
e92e768f2e8631f7f0058eca175d24ceecf1996b | 2,280 | md | Markdown | src/pl/2018-03/08/04.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/pl/2018-03/08/04.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/pl/2018-03/08/04.md | OsArts/Bible-study | cfcefde42e21795e217d192a8b7a703ebb7a6c01 | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Dyskusja
date: 21/08/2018
---
`Przeczytaj Dz 15,7-11. Jaki był wkład Piotra w dyskusję podczas zjazdu Kościoła w Jerozolimie?`
Łukasz oczywiście nie zamierzał relacjonować obrad zjazdu w całości. Interesujące byłoby na przykład usłyszeć argumenty judaizujących (zob. Dz 15,5), jak również odpowiedzi Pawła i Barnaby (zob. Dz 15,12). Fakt, że zapisane zostały tylko przemówienia Piotra i Jakuba, wskazuje na znaczenie tych postaci wśród apostołów i przywódców ówczesnego Kościoła.
W swojej mowie Piotr zwrócił się do apostołów i starszych Kościoła, przypominając im o swoim doświadczeniu sprzed lat w domu Korneliusza. W gruncie rzeczy jego argumentacja była podobna do tej, jaką posłużył się wcześniej w rozmowie ze współwyznawcami w Jerozolimie (zob. Dz 11,4-17). Sam Bóg okazał swoją akceptację nawrócenia Korneliusza (mimo że był on nieobrzezanym poganinem), udzielając jemu i jego domownikom dar Ducha Świętego, którego udzielił apostołom w dniu Pięćdziesiątnicy.
W swojej opatrzności Bóg posłużył się właśnie Piotrem, by przekonać współwyznawców w Judei, że nie czyni różnicy między Żydami a poganami w kwestii zbawienia. Choć nie praktykowali obrzędów oczyszczenia określonych przepisami starego przymierza, chrześcijanie wywodzący się z pogaństwa nie powinni byli dłużej uchodzić za nieczystych, jako że sam Bóg oczyścił ich serca. Ostatnie słowa Piotra brzmiały tak, jakby wypowiedział je apostoł Paweł:— „Wierzymy przecież, że zbawieni będziemy przez łaskę Pana Jezusa, tak samo jak i oni” (Dz 15,11).
`Przeczytaj Dz 15,13-21. Jakie rozwiązanie problemu pogan zaproponował Jakub?`
Przemówienie Jakuba sugeruje, że posiadał on niemały autorytet (por. Dz 12,17; 21,18; Ga 2,9.12). Niezależnie od tego, co Jakub rozumiał przez odbudowanie przybytku Dawida, co w proroctwie Amosa jest kojarzone z przywróceniem władzy rodowi Dawida (zob. Am 9,11-12), najważniejszym celem Jakuba było wykazanie, że Bóg zadbał o przyłączenie nawróconych z pogaństwa do Kościoła i w ten sposób w pewnym sensie zrekonstruował pojęcie ludu Bożego, a zatem nawróceni poganie mogą być wcieleni do Izraela.
Ze względu na to Jakub zasugerował, by nie nakładać na nawróconych pogan większych wymagań ponad to, czego normalnie wymagano od obcokrajowców, którzy mieszkali w Izraelu. | 126.666667 | 542 | 0.813596 | pol_Latn | 1.000006 |
e92ecb9c1507cc61dd5dc1ebc230b13d07b35365 | 1,494 | md | Markdown | content/post/05-its-slides/index.md | aladinoster/academic-kickstart | ae880fc81e7f2352998ac0405150d4aadd167f00 | [
"MIT"
] | null | null | null | content/post/05-its-slides/index.md | aladinoster/academic-kickstart | ae880fc81e7f2352998ac0405150d4aadd167f00 | [
"MIT"
] | null | null | null | content/post/05-its-slides/index.md | aladinoster/academic-kickstart | ae880fc81e7f2352998ac0405150d4aadd167f00 | [
"MIT"
] | 1 | 2020-09-16T03:10:07.000Z | 2020-09-16T03:10:07.000Z | +++
title = "Slides on ITS - 2019/2020"
subtitle = "Materials of the course on Inteligent Transporation Systems"
summary = "Check here the content about the course and slides presented during the first session"
date = 2019-12-03T00:00:00Z
draft = false
# Authors. Comma separated list, e.g. `["Bob Smith", "David Jones"]`.
authors = ["admin"]
# Tags and categories
# For example, use `tags = []` for no tags, or the form `tags = ["A Tag", "Another Tag"]` for one or more tags.
tags = ["Intelligent Transportation Systems","Vehicle Platooning"]
categories = ["Courses","English"]
# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["deep-learning"]` references
# `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects = []
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
[image]
# Caption (optional)
caption = "Image credit: [**Volvo group**](https://www.volvogroup.com/en-en/news/2018/feb/truck-platooning-on-european-roads.html)"
# Focal point (optional)
# Options: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight
focal_point = ""
+++
Today it's the first day for the course on intelligent transportation systems. Please find slides [here](http://bit.ly/ITS2019-Control).
For more content check [here]({{< ref "/courses/its2/_index.md" >}}) | 40.378378 | 137 | 0.707497 | eng_Latn | 0.920011 |
e92f0a2b7cd4ed2cc36bc9300cb32a2e1a5e9b8d | 5,371 | md | Markdown | articles/cognitive-services/Ink-Recognizer/language-support.md | tsunami416604/azure-docs.hu-hu | aeba852f59e773e1c58a4392d035334681ab7058 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Ink-Recognizer/language-support.md | tsunami416604/azure-docs.hu-hu | aeba852f59e773e1c58a4392d035334681ab7058 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Ink-Recognizer/language-support.md | tsunami416604/azure-docs.hu-hu | aeba852f59e773e1c58a4392d035334681ab7058 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Nyelvi és területi támogatás a tinta-felismerő API-hoz
titleSuffix: Azure Cognitive Services
description: A tinta-felismerő API által támogatott természetes nyelvek listája.
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: ink-recognizer
ms.topic: conceptual
ms.date: 08/24/2020
ms.author: aahi
ms.openlocfilehash: b4acd431656eb008702f62dc1ecf12bda62dae17
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 10/09/2020
ms.locfileid: "89051083"
---
# <a name="language-and-region-support-for-the-ink-recognizer-api"></a>Nyelvi és területi támogatás a tinta-felismerő API-hoz
[!INCLUDE [ink-recognizer-deprecation](includes/deprecation-note.md)]
Ez a cikk ismerteti, hogy a tinta-felismerő API Milyen nyelveket támogat. Az API-k az alábbi nyelveken írt Digitális tinta-tartalmakat tudják értelmezni és feldolgozni.
## <a name="supported-languages"></a>Támogatott nyelvek
| Nyelv | Nyelvkód |
|:-------------------------------------------|:---------------:|
| búr | `af-ZA` |
| albán | `sq-AL` |
| Baszk | `eu-ES` |
| Bosnyák (latin betűs) | `bs-Latn-BA` |
| Katalán | `ca-ES` |
| kínai (egyszerűsített, Kína) | `zh-CN` |
| kínai (hagyományos, Tajvan) | `zh-TW` |
| Horvát (Horvátország) | `hr-HR` |
| Cseh | `cs-CZ` |
| Dán | `da-DK` |
| Holland (Belgium) | `nl-BE` |
| Holland (Hollandia) | `nl-NL` |
| Angol (Ausztrália) | `en-AU` |
| Angol (Kanada) | `en-CA` |
| angol (Egyesült Királyság) | `en-GB` |
| Angol (India) | `en-IN` |
| angol (Egyesült Államok) | `en-US` |
| Finn | `fi-FI` |
| Francia (Franciaország) | `fr-FR` |
| Gallego | `gl-ES` |
| Német (Svájc) | `de-CH` |
| Német (Németország) | `de-DE` |
| Görög | `el-GR` |
| Hindi | `hi-IN` |
| Indonéz | `id-ID` |
| Ír | `ga-IE` |
| Olasz (Olaszország) | `it-IT` |
| Japán | `ja-JP` |
| Kinyarvanda | `rw-RW` |
| Kiswahili (Kenya) | `sw-KE` |
| Koreai | `ko-KR` |
| Luxemburgi | `lb-LU` |
| Maláj (Brunei Darussalam) | `ms-BN` |
| Maláj (Malajzia) | `ms-MY` |
| maori | `mi-NZ` |
| Norvég (bokmal) | `nb-NO` |
| norvég (nynorsk) | `nn-NO` |
| Lengyel | `pl-PL` |
| Portugál (Brazília) | `pt-BR` |
| Portugál (Portugália) | `pt-PT` |
| Réto | `rm-CH` |
| Román | `ro-RO` |
| Orosz | `ru-RU` |
| Skót gael | `gd-GB` |
| Sesotho sa szoto | `nso-ZA` |
| Szerb (cirill betűs, Bosznia-Hercegovina) | `sr-Cyrl-BA` |
| Szerb (cirill betűs, Montenegró) | `sr-Cyrl-ME` |
| Szerb (cirill, Szerbia) | `sr-Cyrl-RS` |
| Szerb (latin betűs, Bosznia-Hercegovina) | `sr-Latn-BA` |
| Szerb (latin betűs, Montenegró) | `sr-Latn-ME` |
| Szerb (latin, Szerbia) | `sr-Latn-RS` |
| Setswana (Dél-Afrika) | `tn-ZA` |
| Szlovák | `sk-SK` |
| Szlovén | `sl-SI` |
| Spanyol (Argentína) | `es-AR` |
| Spanyol (Spanyolország) | `es-ES` |
| Spanyol (Mexikó) | `es-MX` |
| Svéd (Svédország) | `sv-SE` |
| Török | `tr-TR` |
| walesi | `cy-GB` |
| Wolof | `wo-SN` |
| xhosza | `xh-ZA` |
| zulu | `zu-ZA` |
## <a name="see-also"></a>Lásd még
* [Mi az Ink Recognizer API?](overview.md)
* [Digitális tollvonások küldése a tinta-felismerő API-nak](concepts/send-ink-data.md) | 55.371134 | 168 | 0.373301 | hun_Latn | 0.989657 |
e92f1ccf657f60bf886dcf114cb35e15f3ec1f86 | 3,143 | md | Markdown | scripts/release_management/README.md | n-hutton/cosmos-consensus | 5cfe6c1bbc77150f9151683c25b4649170bf8045 | [
"Apache-2.0"
] | 162 | 2021-04-12T09:47:25.000Z | 2022-03-31T15:02:19.000Z | scripts/release_management/README.md | n-hutton/cosmos-consensus | 5cfe6c1bbc77150f9151683c25b4649170bf8045 | [
"Apache-2.0"
] | 423 | 2021-04-21T05:46:11.000Z | 2022-03-31T11:18:55.000Z | scripts/release_management/README.md | n-hutton/cosmos-consensus | 5cfe6c1bbc77150f9151683c25b4649170bf8045 | [
"Apache-2.0"
] | 53 | 2021-04-12T15:42:45.000Z | 2022-03-29T08:51:50.000Z | # Release management scripts
## Overview
The scripts in this folder are used for release management in CircleCI. Although the scripts are fully configurable using input parameters,
the default settings were modified to accommodate CircleCI execution.
# Build scripts
These scripts help during the build process. They prepare the release files.
## bump-semver.py
Bumps the semantic version of the input `--version`. Versions are expected in vMAJOR.MINOR.PATCH format or vMAJOR.MINOR format.
In vMAJOR.MINOR format, the result will be patch version 0 of that version, for example `v1.2 -> v1.2.0`.
In vMAJOR.MINOR.PATCH format, the result will be a bumped PATCH version, for example `v1.2.3 -> v1.2.4`.
If the PATCH number contains letters, it is considered a development version, in which case, the result is the non-development version of that number.
The patch number will not be bumped, only the "-dev" or similar additional text will be removed. For example: `v1.2.6-rc1 -> v1.2.6`.
## zip-file.py
Specialized ZIP command for release management. Special features:
1. Uses Python ZIP libaries, so the `zip` command does not need to be installed.
1. Can only zip one file.
1. Optionally gets file version, Go OS and architecture.
1. By default all inputs and output is formatted exactly how CircleCI needs it.
By default, the command will try to ZIP the file at `build/tendermint_${GOOS}_${GOARCH}`.
This can be changed with the `--file` input parameter.
By default, the command will output the ZIP file to `build/tendermint_${CIRCLE_TAG}_${GOOS}_${GOARCH}.zip`.
This can be changed with the `--destination` (folder), `--version`, `--goos` and `--goarch` input parameters respectively.
## sha-files.py
Specialized `shasum` command for release management. Special features:
1. Reads all ZIP files in the given folder.
1. By default all inputs and output is formatted exactly how CircleCI needs it.
By default, the command will look up all ZIP files in the `build/` folder.
By default, the command will output results into the `build/SHA256SUMS` file.
# GitHub management
Uploading build results to GitHub requires at least these steps:
1. Create a new release on GitHub with content
2. Upload all binaries to the release
3. Publish the release
The below scripts help with these steps.
## github-draft.py
Creates a GitHub release and fills the content with the CHANGELOG.md link. The version number can be changed by the `--version` parameter.
By default, the command will use the tendermint/tendermint organization/repo, which can be changed using the `--org` and `--repo` parameters.
By default, the command will get the version number from the `${CIRCLE_TAG}` variable.
Returns the GitHub release ID.
## github-upload.py
Upload a file to a GitHub release. The release is defined by the mandatory `--id` (release ID) input parameter.
By default, the command will upload the file `/tmp/workspace/tendermint_${CIRCLE_TAG}_${GOOS}_${GOARCH}.zip`. This can be changed by the `--file` input parameter.
## github-publish.py
Publish a GitHub release. The release is defined by the mandatory `--id` (release ID) input parameter.
| 47.621212 | 162 | 0.76742 | eng_Latn | 0.99714 |
e92f47c764a8e3a32053cb1f3383cc86306a798d | 3,932 | md | Markdown | _posts/2020-01-13-isolationcmos.md | gyulab/gyulab.github.io | 53c37ba99608389fbbc37b291e3f50f831f1cbb7 | [
"MIT"
] | null | null | null | _posts/2020-01-13-isolationcmos.md | gyulab/gyulab.github.io | 53c37ba99608389fbbc37b291e3f50f831f1cbb7 | [
"MIT"
] | null | null | null | _posts/2020-01-13-isolationcmos.md | gyulab/gyulab.github.io | 53c37ba99608389fbbc37b291e3f50f831f1cbb7 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Isolation Technique"
date: 2020-01-13T14:25:52-05:00
author: Gyujun Jeong
categories: Academics
---
{:.profile}
CMOS란 Complementary MOS의 약어로, n-channel MOS와 p-channel MOS라는 서로 다른 두 종류의 MOS 소자를 하나의 wafer 안에 놓은 MOS 소자이다. 이렇게 두 종류의 소자를 하나의 기판 위에 놓게 되면, 누설 전류나 Latch-up과 같은 다양한 문제들이 발생한다. 오늘은 이 소자들을 분리시키는 isolation technique에 대하여 알아보도록 하자.
<br>
First, let us see why isolation is needed. The figure above marks the leakage current paths; isolation is required to block this leakage. The second figure shows that as device dimensions shrink, the space available for isolation shrinks with them, which is why a variety of isolation technologies are needed.
<br>
Various technologies for isolation are introduced here; the further down the list you go, the more recent the technology. Now let us look at each technique in turn.
<br>
Diffusion isolation uses a reverse-biased diode. It was used in bipolar transistors, and it is still used today in the form of well isolation.
Oxide isolation was used in the early days of MOS development. Because of its various drawbacks, however, it gave way to the LOCOS technique and trench isolation, introduced next.
LOCOS stands for LOCal Oxidation of Silicon. Because this post is compiled from older material, it is described there as a present-day method; in reality, LOCOS is no longer used in today's nanoscale processes because of side effects such as the bird's beak, described below.
The "trench" in trench isolation is the same trench as in "trench coat", that is, a ditch. Although the process is complex, it raises packing density dramatically, making it one of the most widely used techniques today.
Let us now look at the LOCOS process introduced above in a little more detail.
The LOCOS process follows the sequence shown in the figure above; in the final step, you can observe the oxide bending upward as the field oxide is grown.
Because this bending resembles a bird's beak, it is called the bird's beak effect. It reduces the device's active area, and so acts as a limiting factor for LOCOS.
To overcome the bird's beak effect, a variety of techniques were used, such as SWAMI, SPOT, OSELO, and FUROX.
Now let us look at trench isolation.
Let us look at the applications of trench isolation. STI stands for Shallow Trench Isolation; moderate trenches are used to isolate bipolar devices or to prevent latch-up in CMOS; and deep trenches are used to form the trench capacitors of DRAM. The categories are divided at depths of 1 µm and 3 µm.
<br>
This is the STI (Shallow Trench Isolation) process.
When etching the trench, a rounded trench is preferred. Making a rounded trench requires advanced dry-etching techniques and a high-temperature thermal oxidation step. The high-temperature oxidation reduces stress, which in turn reduces leakage current.
Using STI together with SOI (Silicon On Insulator), which we saw in an earlier post, makes even more effective isolation possible. This technique is used in processes at 0.13 µm and below.
<br>
| 66.644068 | 253 | 0.71999 | kor_Hang | 0.99995 |
e9302e68ee93f1efb605b980ac551b980daa3dde | 2,491 | md | Markdown | docs/using-chatterbox/troubleshooting/getting-more-support.md | HelloChatterbox/dev_documentation | 5c55cb4d7293dc19194bffb05959977ce6fdc7ed | [
"Apache-2.0"
] | null | null | null | docs/using-chatterbox/troubleshooting/getting-more-support.md | HelloChatterbox/dev_documentation | 5c55cb4d7293dc19194bffb05959977ce6fdc7ed | [
"Apache-2.0"
] | null | null | null | docs/using-chatterbox/troubleshooting/getting-more-support.md | HelloChatterbox/dev_documentation | 5c55cb4d7293dc19194bffb05959977ce6fdc7ed | [
"Apache-2.0"
] | null | null | null | ---
description: I've tried everything! Where to get further support.
---
# Getting more support
## Does an answer already exist?
Before asking a new question, have you:
* Searched our documentation?
* Searched the [Community Forum](https://community.chatterbox.ai/c/Help-with-Chatterbox-related-issues)?
## Asking for help
### What you need
Help us to help you. When you ask for support, please provide as much detail as possible.
The following information is very useful for anyone trying to help:
* What type of Chatterbox device you're using, such as a Mark 1, Picroft, KDE Plasmoid, regular Linux desktop
* What steps you took leading up to the issue
* What happened, compared to what you expected to happen
* Information from your [Chatterbox logs](log-files.md)
* Information generated from the [Support Skill](support-skill.md)
### Where to ask
#### Community Forums
The best place to post most support questions is the [Community Forums](https://community.chatterbox.ai/c/Help-with-Chatterbox-related-issues). This enables the many experienced members of our Community to assist you in a timely manner. Once a solution is found, it also means that others who may face the same problem in the future can benefit from your experience.
[Check out the Support Category of the Community Forums.](https://community.chatterbox.ai/c/Help-with-Chatterbox-related-issues)
#### Community Chat
The Community also has a real-time Chat service. This is useful to have direct discussion about your issue. Please be mindful that we are a global Community and someone may not respond immediately to your message. Be sure to provide enough detail so that when someone does come online they can understand what is happening.
[Join the Troubleshooting channel in Chat.](https://chat.chatterbox.ai/community/channels/troubleshooting)
#### GitHub Issues
If you have discovered a technical issue and know which Chatterbox component it relates to, you can log it as an issue on [GitHub](https://github.com/ChatterboxAI). If you are unsure, you can always ask in the [Community Chat](https://chat.chatterbox.ai/community/channels/troubleshooting).
#### Chatterbox Team
Where needed, the Chatterbox Team are happy to help. You can email queries to support@chatterbox.ai and a team member will respond when available.
Please note that we are a small team and our primary focus is on improving Chatterbox for everyone's benefit. It may take time for us to respond to an email.
| 47.903846 | 366 | 0.780409 | eng_Latn | 0.998551 |
e931934574e2098fc4ffb401db0f18765a2f0dfb | 15,162 | md | Markdown | README.md | cwbriones/metrics-portal | 01003972b1d23df84955a37a356434fedd568168 | [
"PostgreSQL",
"Apache-2.0"
] | 2 | 2015-12-29T05:37:15.000Z | 2020-03-18T07:04:23.000Z | README.md | cwbriones/metrics-portal | 01003972b1d23df84955a37a356434fedd568168 | [
"PostgreSQL",
"Apache-2.0"
] | 205 | 2015-10-19T17:24:38.000Z | 2022-01-17T04:36:29.000Z | README.md | cwbriones/metrics-portal | 01003972b1d23df84955a37a356434fedd568168 | [
"PostgreSQL",
"Apache-2.0"
] | 12 | 2015-12-07T08:10:39.000Z | 2020-05-28T23:08:14.000Z | Metrics Portal
==============
<a href="https://raw.githubusercontent.com/ArpNetworking/metrics-portal/master/LICENSE">
<img src="https://img.shields.io/hexpm/l/plug.svg"
alt="License: Apache 2">
</a>
<a href="https://travis-ci.com/ArpNetworking/metrics-portal">
<img src="https://travis-ci.com/ArpNetworking/metrics-portal.svg?branch=master"
alt="Travis Build">
</a>
<a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.arpnetworking.metrics%22%20a%3A%22metrics-portal%22">
<img src="https://img.shields.io/maven-central/v/com.arpnetworking.metrics/metrics-portal.svg"
alt="Maven Artifact">
</a>
<a href="https://hub.docker.com/r/arpnetworking/metrics-portal">
<img src="https://img.shields.io/docker/pulls/arpnetworking/metrics-portal.svg" alt="Docker">
</a>
Provides a web interface for managing the Inscope Metrics stack. This includes viewing telemetry (aka streaming
statistics) from one or more hosts running [Metrics Aggregator Daemon](https://github.com/ArpNetworking/metrics-aggregator-daemon).
The web interface also provides a feature for browsing hosts reporting metrics, viewing and editing alerts, scheduling
and delivering reports and performing roll-ups in KairosDb.
Setup
-----
### Installing
#### Source
Clone the repository and build the source. The build output is the archive `target/metrics-portal-${VERSION}-bin.tgz`,
where `${VERSION}` is the current build version. To install, copy the artifact into an appropriate target directory
on your Metrics Portal host(s). For example:
metrics-portal> ./jdk-wrapper.sh ./mvnw package -Pno-docker
metrics-portal> scp -r target/metrics-portal-${VERSION}-bin.tgz my-host.example.com:/opt/metrics-portal/
#### Tar.gz
Additionally, each Metrics Portal release publishes a `tar.gz` package of its build artifacts, which may be obtained from GitHub releases. To install,
download the archive and extract it. Replace `${VERSION}` with the release version of Metrics Portal you are installing.
For example, if your Metrics Portal host(s) have Internet access you can install directly:
    > ssh my-host.example.com 'curl -L https://github.com/ArpNetworking/metrics-portal/releases/download/v${VERSION}/metrics-portal-${VERSION}-bin.tgz | tar -xz -C /var/tmp/metrics-portal/'
Otherwise, you will need to download locally and distribute it before installing. For example:
> curl -L https://github.com/ArpNetworking/metrics-portal/releases/download/v${VERSION}/metrics-portal-${VERSION}-bin.tgz -o /var/tmp/metrics-portal.tgz
> scp /var/tmp/metrics-portal.tgz my-host.example.com:/var/tmp/
    > ssh my-host.example.com 'tar -xzf /var/tmp/metrics-portal.tgz -C /opt/metrics-portal/'
#### RPM
Alternatively, each release of Metrics Portal also creates an RPM, which is available on GitHub releases. To install,
download the RPM and install it. For example, if your Metrics Portal host(s) have Internet access you can install
directly:
    > ssh my-host.example.com 'sudo rpm -i https://github.com/ArpNetworking/metrics-portal/releases/download/v${VERSION}/metrics-portal-${VERSION}-1.noarch.rpm'
Otherwise, you will need to download the RPM locally and distribute it before installing. For example:
> curl -L https://github.com/ArpNetworking/metrics-portal/releases/download/v${VERSION}/metrics-portal-${VERSION}-1.noarch.rpm -o /var/tmp/metrics-portal.rpm
> scp /var/tmp/metrics-portal.rpm my-host.example.com:/var/tmp/
    > ssh my-host.example.com 'rpm -i /var/tmp/metrics-portal.rpm'
Please note that if your organization has its own authorized package repository you will need to work with your system
administrators to install the Metrics Portal RPM into your package repository for installation on your Metrics Portal
host(s).
#### Docker
Furthermore, if you use Docker each release of Metrics Portal also publishes a [Docker image](https://hub.docker.com/r/arpnetworking/metrics-portal/)
that you can either install directly or extend.
If you install the image directly, you will likely need to mount either a local directory or a data volume with your
organization-specific configuration.
If you extend the image you can embed your configuration file directly in your Docker image.
Regardless, you can override the provided configuration by first importing
`portal.application.conf` in your configuration file like this:
include required("portal.application.conf")
Next set the `METRICS_PORTAL_CONFIG` environment variable to `-Dconfig.file="your_file_path"` like this:
docker run ... -e 'METRICS_PORTAL_CONFIG=-Dconfig.file="/opt/metrics-portal/config/custom.conf"' ...
In addition to `METRICS_PORTAL_CONFIG`, you can specify:
* `LOGBACK_CONFIG` - Location of Logback configuration XML; default is `-Dlogger.file=/opt/metrics-portal/config/logback.xml`
* `JVM_XMS` - Java initial memory allocation; default is `64m`
* `JVM_XMX` - Java maximum memory allocation; default is `1024m`
* `JAVA_OPTS` - Additional Java arguments; many arguments are passed by [default](https://github.com/ArpNetworking/metrics-portal/blob/master/main/docker/Dockerfile).
### Execution
#### Non-Docker
Regardless of your installation method, in the installation's `bin` sub-directory there is a script to start the Metrics
Portal: `metrics-portal`. This script should be executed on system start with appropriate parameters. In general:
/opt/metrics_portal/bin/metrics-portal <JVM ARGS> -- <APP ARGS>
For example:
/opt/metrics_portal/bin/metrics-portal -Xms512m -- /opt/metrics-portal
Arguments before the `--` are interpreted by the JVM while arguments after `--` are passed to Metrics Portal.
##### Reporting
If you have reporting enabled (`reports.enabled = true` in `portal.application.conf`), and you want to render web-based reports, you will need to have Chrome or Chromium installed alongside Metrics Portal, and set the `chromePath` configuration for those renderers to point to the appropriate executable file.
#### Docker
If you installed Metrics Portal using a Docker image then execution is very simple. In general:
docker run -p 8080:8080 <DOCKER ARGS> arpnetworking/metrics-portal
For example:
docker run -p 8080:8080 -e 'JAVA_OPTS=-Xms512m' arpnetworking/metrics-portal
The section above on Docker installation covers how to pass arguments in more detail.
### Configuration
Aside from the JVM command line arguments, you may provide two additional configuration files.
#### Logback
The first is the [LogBack](http://logback.qos.ch/) configuration file. To use a custom logging configuration simply
pass the following argument to the JVM:
-Dlogger.file=/opt/metrics-portal/custom-logger.xml
Where `/opt/metrics_portal/custom-logger.xml` is the path to your logging configuration file. Please refer to
[LogBack](http://logback.qos.ch/) documentation for more information on how to author a configuration file.
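As a starting point, a minimal Logback configuration that logs to standard out might look like the following sketch. The appender name and pattern here are illustrative choices, not Metrics Portal defaults:

```xml
<configuration>
  <!-- Illustrative console appender; adjust the pattern to taste. -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```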
Installation via RPM or Docker will use the [production file logging configuration file](main/config/logback.xml) by
default. However, other installation methods will use the [debugging logging configuration file](conf/logback.xml) by
default and users are *strongly* recommended to override this behavior.
Metrics Portal ships with a second [production console logging configuration file](main/config/logback-console.xml)
which outputs to standard out instead of to a rotated and gzipped file.
#### Application
The second configuration file is for the application. To use a custom configuration simply pass the following argument to
the JVM:
-Dconfig.file=/opt/metrics_portal/custom.conf
Where `/opt/metrics_portal/custom.conf` is the path to your application configuration file.
Installation via RPM or Docker will use the included [default application configuration file](conf/portal.application.conf).
This configuration documents and demonstrates many of the configuration options available.
To use the default application configuration file for non-RPM and non-Docker installations use a command like this:
/opt/metrics_portal/bin/metrics-portal -Dconfig.resource=conf/portal.application.conf -- /opt/metrics-portal
Metrics Portal ships with two additional application configuration files: [`postgresql.application.conf`](conf/postgresql.application.conf)
for using [Postgresql](https://www.postgresql.org) as the data store, and [`cassandra.application.conf`](conf/cassandra.application.conf)
for using [Cassandra](http://cassandra.apache.org/) as the data store. You can specify one of these by passing one of
the following arguments to the JVM:
For Postgresql:
-Dconfig.resource=conf/postgresql.application.conf
For Cassandra:
-Dconfig.resource=conf/cassandra.application.conf
Both of these configuration files derive from the base configuration file, and it is recommended that you use one of these
as your base configuration. Additionally, both support overrides for locating the specific data store instance. Please
refer to these files when configuring your Metrics Portal instance.
Finally, while it is possible to use the provided configuration files directly, it is *strongly* recommended that you author
a custom application configuration that inherits from the default application configuration file and provides any desired
configuration as overrides. Please refer to the [Play Framework](https://www.playframework.com/documentation/2.6.x/ProductionConfiguration)
documentation for more information on how to author a configuration file.
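For instance, a custom configuration following this pattern might look like the sketch below. Only the `include` line comes from this document; the overridden keys and values are illustrative assumptions for your own deployment:

```hocon
# custom.conf -- illustrative only; replace the override keys with the ones
# your deployment actually needs.
include required("portal.application.conf")

# Hypothetical overrides:
http.port = 8080
db.metrics_portal.url = "jdbc:postgresql://db.example.com:5432/portal"
```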
### Extension
The Metrics Portal project intentionally uses a custom default application configuration and custom default routes
specification. This allows projects extending the Metrics Portal to supplement functionality more easily with the
standard default application configuration and routes. To use these files as extensions rather than replacements you
should make the following changes.
First, add dependencies on the Metrics Portal code and assets in __conf/Build.scala__:
"com.arpnetworking.metrics" %% "metrics-portal" % "VERSION"
Second, your extending project's application configuration should include one of the custom default configuration in __conf/application.conf__:
Base:
include "portal.application.conf"
Postgresql:
include "postgresql.application.conf"
Cassandra:
include "cassandra.application.conf"
Third, your extending project's application configuration should restore the default router in __conf/application.conf__:
application.router = null
Finally, your extending project's routes specification should include the custom default routes in __conf/routes__:
-> / portal.Routes
### Building
Prerequisites:
* [Docker](http://www.docker.com/) (for [Mac](https://docs.docker.com/docker-for-mac/))
* [Node](https://nodejs.org/en/download/)
Building:
metrics-portal> ./jdk-wrapper.sh ./mvnw verify
Building without Docker (will disable integration tests):
metrics-portal> ./jdk-wrapper.sh ./mvnw -Pno-docker verify
To control which verification targets (e.g. Checkstyle, Findbugs, Coverage, etc.) are run please refer to the
[parent-pom](https://github.com/ArpNetworking/arpnetworking-parent-pom) for parameters (e.g. `-DskipAllVerification=true`).
When launching Metrics Portal via Play (e.g. `play2:run`) there is limited support for automatic recompiling and
reloading of assets (e.g. HTML, Typescript, etc.).
To run the server on port 8080 and its dependencies launched via Docker:
metrics-portal> ./jdk-wrapper.sh ./mvnw docker:start
To stop the server and its dependencies run; this is recommended in place of `docker kill` as it will also remove the
container and avoids name conflicts on restart:
metrics-portal> ./jdk-wrapper.sh ./mvnw docker:stop
To run the server on port 8080 _without_ dependencies via Play; you need to configure/provide/launch dependencies manually (see below):
metrics-portal> ./jdk-wrapper.sh ./mvnw play2:run -Dconfig.resource=postgresql.application.conf -Dpostgres.port=6432
To debug on port 9002 with the server on port 8080 and its dependencies launched via Docker:
metrics-portal> ./jdk-wrapper.sh ./mvnw -Ddebug=true docker:start
To debug on port 9002 with the server on port 8080 via Play; you need to configure/provide/launch dependencies manually (see below):
metrics-portal> MAVEN_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=9002" ./jdk-wrapper.sh ./mvnw play2:run -Dconfig.resource=postgresql.application.conf -Dpostgres.port=6432
To launch dependencies only via Docker:
metrics-portal> ./jdk-wrapper.sh ./mvnw docker:start -PdependenciesOnly
To execute unit performance tests:
metrics-portal> ./jdk-wrapper.sh ./mvnw -PperformanceTest test
^ TODO(ville): This is not yet implemented.
To execute integration performance tests:
metrics-portal> ./jdk-wrapper.sh ./mvnw -PperformanceTest verify
^ TODO(ville): This is not yet implemented.
To use the local version as a dependency in your project you must first install it locally:
metrics-portal> ./jdk-wrapper.sh ./mvnw install
### Testing
* Unit tests (`test/java/**/*Test.java`) may be run or debugged directly from your IDE.
* Integration tests may be run or debugged directly from your IDE provided an instance of Metrics Portal and its
dependencies are running locally on the default ports.
* To debug Metrics Portal while executing an integration test against it simply launch Metrics Portal for debug,
then attach your IDE and finally run/debug the integration test from your IDE.
* To run tests in your IDE that rely on Ebean classes, you must first run `./jdk-wrapper.sh ./mvnw process-classes` on
  the command line to enhance the Ebean classes.
### Debugging
(See also the list of debug flags in [the Building section](#building).)
* _Debugging Chrome-based reports._ With the default options in `portal.application.conf`, Chrome offers a remote debugger on port 48928, which you can access by visiting <chrome://inspect> in another Chrome instance and adding `localhost:48928` under "Discover network targets".
### Releasing
If you have write access to this repository, you should be able to cut a release by running `git checkout master && git pull && mvn release:prepare` and accepting the default version names it proposes.
### IntelliJ
The project can be imported normally using "File / New / Project From Existing Sources..." using the Maven aspect.
However, you will first need to mark the `target/twirl/main` directory as a generated source directory. Next, to reflect
changes to the templates within IntelliJ you will need to generate them from the command line using `./jdk-wrapper.sh ./mvnw compile`
(do so now). Finally, under "Module Settings", then under "Platform Settings" / "Global Libraries", you need to click "+", choose
"Scala SDK" and choose "Maven 2.11.12" and click "OK" and "OK" again. This should enable discovery of the generated code and
its compilation using `scalac` for use in the IDE (e.g. for running tests).
License
-------
Published under Apache Software License 2.0, see LICENSE
© Groupon Inc., 2014
| 48.440895 | 309 | 0.774106 | eng_Latn | 0.975196 |
e93250d0c3ccaa1a9a261acc12fd0d12c3d3a726 | 306 | md | Markdown | CHANGELOG.md | RobLui/casper | 0216db3b0b0c1e3a02e8aed9a6a067bf273de685 | [
"MIT"
] | null | null | null | CHANGELOG.md | RobLui/casper | 0216db3b0b0c1e3a02e8aed9a6a067bf273de685 | [
"MIT"
] | null | null | null | CHANGELOG.md | RobLui/casper | 0216db3b0b0c1e3a02e8aed9a6a067bf273de685 | [
"MIT"
] | null | null | null | # 0.5
## 12-11-2018
- added default.twig.html
- fixed a bug where theme wouldn't save posts
- fixed title display on tab
# 0.4
## 19-10-2018
- changed google font from Muuli to Nunito
- changed repo name in order to install theme successfully
# v0.2
## 03-10-2018
1. [*] Initial version : Spooky 👻
| 15.3 | 58 | 0.689542 | eng_Latn | 0.96875 |
e934479af94ef6d9e7a11298c86b48e1952e83e5 | 1,260 | md | Markdown | _rsk/public-nodes.md | rsksmart/rsksmart.github.io | b21f45d6c5275bec6b774609f63da4c6bba0a0ab | [
"MIT"
] | 3 | 2020-04-02T14:41:23.000Z | 2020-04-26T09:17:00.000Z | _rsk/public-nodes.md | rsksmart/rsksmart.github.io | b21f45d6c5275bec6b774609f63da4c6bba0a0ab | [
"MIT"
] | 80 | 2019-11-11T03:05:13.000Z | 2020-06-10T03:45:54.000Z | _rsk/public-nodes.md | rsksmart/rsksmart.github.io | b21f45d6c5275bec6b774609f63da4c6bba0a0ab | [
"MIT"
] | 17 | 2019-11-19T15:38:37.000Z | 2020-05-12T19:05:53.000Z | ---
layout: rsk
title: Using RSK Public Nodes (Mainnet & Testnet) provided by IOVLabs
tags: rsk, networks, versions, rpc, mainnet, testnet, cUrl
description: "RSK Nodes: Public nodes (Mainnet, Testnet), Versioning, RPC Methods, and cUrl example"
collection_order: 2200
---
## Public Nodes
IOVLabs currently provides two public nodes that you can use
for testing purposes, and you will find that information below.
Alternatively, follow the [installation instructions](/rsk/node/install/),
to run your own RSK node.
This is highly recommended for production environments,
and in accordance with the bitcoiners' maxim: **Don't trust. Verify.**
### Testnet
```
https://public-node.testnet.rsk.co
```
### Mainnet
```
https://public-node.rsk.co
```
## Supported RPC methods
List of more supported RPC methods for each module can be found in the [JSON-RPC documentation](/rsk/node/architecture/json-rpc/).
> **Note**: request headers must include `"Content-Type: application/json"`
## Example using `cURL`
Here's an example request using `cURL` to get the Mainnet block number:
```shell
curl https://public-node.rsk.co \
-X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```
| 26.808511 | 130 | 0.724603 | eng_Latn | 0.918273 |
e9345f0ad9b1623070bff4b334c9ee940e93f18f | 808 | md | Markdown | README.md | meteergen/Corridor-Detection | cf446a38c0d71dd85e93842606115567a4216b2f | [
"Apache-2.0"
] | 2 | 2022-03-18T14:27:17.000Z | 2022-03-18T15:05:03.000Z | README.md | meteergen/Corridor-Detection | cf446a38c0d71dd85e93842606115567a4216b2f | [
"Apache-2.0"
] | 4 | 2020-12-10T20:31:54.000Z | 2022-03-18T15:07:19.000Z | README.md | meteergen/Corridor-Detection | cf446a38c0d71dd85e93842606115567a4216b2f | [
"Apache-2.0"
] | null | null | null | # Corridor-Detection
A QGIS plugin for corridor ordering on network data with Dijktra's Algorithm.
___
### Installation
**1)** Clone or download the zip of the repository
**2)** Extract or move the files under your qgis plugin folder ( most of the time under %appdata% )
**3)** Open compile.bat folder as a .txt file and replace the values with correct version of your QGIS and correct folder of your QGIS installation.
**4)** Go to Manage and Install plugin menu on QGIS interface.
**5)** Find the Corridor-Detection plugin under All or Installed plugins.
**6)** Load the plugin
(do not forget check 'Show also experimental plugins' in, Manage and Install Plugins --> Setting)
Copyright © 2020 Metehan Ergen/ Atahan Çelebi/ Berkay İbiş/ Dr. Berk Anbaroğlu - Hacettepe University.
| 40.4 | 151 | 0.735149 | eng_Latn | 0.913083 |
e934f8b8e9586eb5d95feec20109c700a8d201d7 | 2,130 | md | Markdown | _posts/2020-7-23-fedramp-announces-document-and-template-updates.md | anishaagrawalgsa/fedramp-gov | 123b02f7ec1c0244e7e0b463b412e6293b617937 | [
"CC0-1.0"
] | null | null | null | _posts/2020-7-23-fedramp-announces-document-and-template-updates.md | anishaagrawalgsa/fedramp-gov | 123b02f7ec1c0244e7e0b463b412e6293b617937 | [
"CC0-1.0"
] | null | null | null | _posts/2020-7-23-fedramp-announces-document-and-template-updates.md | anishaagrawalgsa/fedramp-gov | 123b02f7ec1c0244e7e0b463b412e6293b617937 | [
"CC0-1.0"
] | null | null | null | ---
title: FedRAMP Announces Document and Template Updates
permalink: /fedramp-announces-document-and-template-updates/
body-class: page-blog
image: /assets/img/blog-images/FRblog_Doc-Updates.png
author: FedRAMP
layout: blog-page
---
FedRAMP released updates to the <a href="{{site.baseurl}}/assets/resources/templates/SSP-A12-FedRAMP-Laws-and-Regulations-Template.xlsx">System Security Plan (SSP) Attachment 12 template</a>, the <a href="{{site.baseurl}}/assets/resources/documents/FedRAMP_Master_Acronym_and_Glossary.pdf">FedRAMP Master Acronym and Glossary document</a>, and the <a href="{{site.baseurl}}/assets/resources/templates/FedRAMP-Initial-Authorization-Package-Checklist.xls">FedRAMP Initial Authorization Package Checklist template</a>.
The <a href="{{site.baseurl}}/assets/resources/templates/SSP-A12-FedRAMP-Laws-and-Regulations-Template.xlsx">SSP Attachment 12 - FedRAMP Laws and Regulations template</a> was updated to include the latest publications, policies information, and relevant links. This is a required attachment to the SSP template and should be used, or updated, by CSPs undergoing the initial authorization process and submitted as part of their SSP package.
The <a href="{{site.baseurl}}/assets/resources/documents/FedRAMP_Master_Acronym_and_Glossary.pdf">FedRAMP Master Acronym and Glossary document</a> was updated to include a more comprehensive listing of acronyms / terms found in FedRAMP documentation.
The <a href="{{site.baseurl}}/assets/resources/templates/FedRAMP-Initial-Authorization-Package-Checklist.xls">FedRAMP Initial Authorization Package Checklist template</a> was updated to remove attachments that are now embedded in the SSP template and to clarify instructions. CSPs are required to complete and submit the checklist when uploading the authorization package to the FedRAMP Repository.
FedRAMP will continue to make ongoing updates to documents and templates and will communicate the changes once they’re released. If you have any questions, feedback, or suggestions for documentation updates, please reach out to <a href="mailto:info@fedramp.gov">info@fedramp.gov</a>.
| 112.105263 | 515 | 0.810798 | eng_Latn | 0.871867 |
e9353848293d52d85174fa337687d684666237b1 | 144 | md | Markdown | book/ListComprehensions.md | EMQ-YangM/documentation | 40f60733452416ed14fd4d3d07907d0c71b7a966 | [
"CC0-1.0"
] | 49 | 2020-06-11T12:33:27.000Z | 2020-08-11T13:43:38.000Z | book/ListComprehensions.md | EMQ-YangM/documentation | 40f60733452416ed14fd4d3d07907d0c71b7a966 | [
"CC0-1.0"
] | 10 | 2020-06-11T17:18:26.000Z | 2020-09-28T07:46:40.000Z | book/ListComprehensions.md | EMQ-YangM/documentation | 40f60733452416ed14fd4d3d07907d0c71b7a966 | [
"CC0-1.0"
] | 29 | 2020-06-08T01:57:26.000Z | 2020-09-27T23:43:00.000Z | ## List Comprehensions
### Overview
[x+2 | x <- [1..10]]
### Generators
### Dependent Generators
### Guards
### Expression
### Exercises
| 9 | 24 | 0.597222 | kor_Hang | 0.207492 |
e935f482a419f4621c55c05fb030f24ad16e984b | 608 | md | Markdown | ios9/SegueCatalog/README.md | susairajs/ios-samples | 6c3869949046a6f9c05d5aaa477dd335715e58ac | [
"MIT"
] | 4 | 2018-12-25T15:01:01.000Z | 2021-10-05T02:34:27.000Z | ios9/SegueCatalog/README.md | taimoor-janjua/monotouch-samples | f17be08989a85429e3e0cb4c48b37279833951d8 | [
"Apache-2.0"
] | null | null | null | ios9/SegueCatalog/README.md | taimoor-janjua/monotouch-samples | f17be08989a85429e3e0cb4c48b37279833951d8 | [
"Apache-2.0"
] | 5 | 2016-10-26T02:27:47.000Z | 2018-03-06T13:33:06.000Z | SegueCatalog
============
Sample demonstrates how to combine `UIStoryboardSegue` subclasses with `Transition Delegates` and `Adaptivity`, and how to use unwind segues
Build Requirements
------------------
Building this sample requires Xcode 7.0 and iOS 9.0 SDK
Refs
----
[Original sample](https://developer.apple.com/library/prerelease/ios/samplecode/SegueCatalog/Introduction/Intro.html)
Target
------
This sample runnable on iPhoneSimulator/iPadSimulator iPhone/iPad
Copyright
---------
Xamarin port changes are released under the MIT license
Author
------
Ported to Xamarin.iOS by Rustam Zaitov
| 21.714286 | 140 | 0.740132 | eng_Latn | 0.763785 |
e9366d42193ab64bf2308802d57f220d598cef53 | 2,315 | md | Markdown | CONTRIBUTING.md | Picorims/wav2bar | 72904a185f196fc62fda1de4060457a70984a6ab | [
"MIT"
] | 17 | 2021-02-21T17:17:49.000Z | 2022-01-18T04:02:53.000Z | CONTRIBUTING.md | Picorims/wav2bar | 72904a185f196fc62fda1de4060457a70984a6ab | [
"MIT"
] | 12 | 2021-04-17T21:08:10.000Z | 2022-01-23T16:47:38.000Z | CONTRIBUTING.md | Picorims/wav2bar | 72904a185f196fc62fda1de4060457a70984a6ab | [
"MIT"
] | 2 | 2021-10-03T12:56:13.000Z | 2022-01-17T18:51:03.000Z | # Contributing
First, thanks for your interest in contributing!
If you are willing to partake into big contributions, please contact me first at picorims.contact@gmail.com so we can discuss the topic and I can setup an appropriate branch workflow.
## What to be aware of
Make sure to check out [docs/DEVELOPMENT_GUIDELINES](./docs/DEVELOPMENT_GUIDELINES.md) for more information on the project itself.
Personal note (Picorims): I work on this project on my free time, while pursuing a computer science degree. That means that my availability is function to my life, work load and health. So please be patient, I may not be immediately available, but I will answer you when possible :).
## Types of contributions
### Filling an issue
When filling an issue, make sure it has not already been reported. If it is security related, mail it to picorims.contact@gmail.com instead.
### Suggesting a new feature
Make sure the suggestion do not already have a dedicated issue opened. Describe it precisely to make sure it is understood and scopped well enough.
### Fixing an issue, adding a feature, modifying the codebase
First, thanks for your interest! Here is a step by step guide for you:
- Add an issue corresponding to your contribution (which also opens discussion before diving too fast straight into code!)
- Fork the repository;
- Create a new branch;
- Work on your contribution;
- If you touched code, make sure the features you modified still work as expected (try to break it if it could be possible, to fix edge cases)
- commit and push your branch to your own repository;
- open a pull request, detailing your contribution, the motivation behind it and a brief explanation of how it works if it is code related;
- Let it be reviewed, and if changes are needed, they will be detailed to you.
### Good first issues
If you are new to contributing, here are some easy contributions:
- opening detailed bug reports;
- issues labelled as good first issues if any is available;
- documentation detailing;
- typos in documentation, README, assets files, etc.
### Contributions that doesn't help
Make sure to check the following list before contributing:
- Fixes about forgotten spaces or else alone are not worth it.
- A too vague issue, or an issue with no reproducing steps is unlikely to be fixed. | 49.255319 | 283 | 0.779266 | eng_Latn | 0.999656 |
e93728340f96ce58367d06e069e6b313c6df2478 | 678 | md | Markdown | docs/user-guide/rules/index.md | slheavner/wist | 44671e98f80dabdaf82c7594e55f1d3aeaf5f225 | [
"Apache-2.0"
] | 38 | 2017-12-18T19:31:12.000Z | 2022-01-06T02:09:41.000Z | docs/user-guide/rules/index.md | slheavner/wist | 44671e98f80dabdaf82c7594e55f1d3aeaf5f225 | [
"Apache-2.0"
] | 56 | 2017-12-21T23:11:16.000Z | 2022-02-26T04:18:28.000Z | docs/user-guide/rules/index.md | slheavner/wist | 44671e98f80dabdaf82c7594e55f1d3aeaf5f225 | [
"Apache-2.0"
] | 7 | 2018-03-09T00:59:14.000Z | 2021-12-27T20:55:18.000Z |
---
title: List of available rules
layout: doc
edit_link: https://github.com/willowtreeapps/wist/edit/master/docs/user-guide/rules/index.md
sidebar: "user-guide"
grouping: "rules"
---
# Rules
Configurable rules available in Wist.
{% for category in site.data.rules.categories %}
<table class="table table-striped table-sm table-responsive">
<tr>
<th>Name</th>
<th>Description</th>
<th>Since</th>
</tr>
<tbody>
{% assign rules = category.rules | sort: 'name' %}
{% for rule in rules %}
<tr>
<td markdown="1">[{{rule.name}}]({{rule.name}})
</td>
<td markdown="1">{{rule.description}}
</td>
<td markdown="1">{{rule.since}}
</td>
</tr>
{% endfor %}
</tbody>
</table>
{% endfor %}
| 19.941176 | 92 | 0.668142 | eng_Latn | 0.486584 |
e9378358d143ba215d27db6e8507a7aa0eaac9fd | 13,495 | md | Markdown | subscriptions/signing-in.md | adrianodaddiego/visualstudio-docs.it-it | b2651996706dc5cb353807f8448efba9f24df130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | subscriptions/signing-in.md | adrianodaddiego/visualstudio-docs.it-it | b2651996706dc5cb353807f8448efba9f24df130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | subscriptions/signing-in.md | adrianodaddiego/visualstudio-docs.it-it | b2651996706dc5cb353807f8448efba9f24df130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Signing in to your Visual Studio subscription | Microsoft Docs
author: evanwindom
ms.author: lank
manager: lank
ms.date: 05/14/2019
ms.topic: conceptual
description: How to sign in to your Visual Studio subscription
searchscope: VS Subscription
ms.openlocfilehash: d010a908d28fd6f7be86cee27fa86f0ac24471d6
ms.sourcegitcommit: 283f2dbce044a18e9f6ac6398f6fc78e074ec1ed
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 05/16/2019
ms.locfileid: "65805289"
---
# <a name="signing-in-to-your-visual-studio-subscription"></a>Signing in to your Visual Studio subscription
The steps for signing in to your Visual Studio subscription depend on the type of account you use. For example, you might use a Microsoft account (MSA) or an email address provided by your employer or school. As of January 2019, you can also use your GitHub account to sign in to some subscriptions.
This article covers four different sign-in methods. Use the links on the right to jump to any of these sections:
1. Signing in with your Microsoft account
2. Signing in with your work or school account
3. Using your Microsoft account to sign in to a work or school account
4. Signing in with your GitHub account
## <a name="signing-in-with-your-microsoft-account-msa"></a>Signing in with your Microsoft account (MSA)
1. Visit [https://my.visualstudio.com](https://my.visualstudio.com?wt.mc_id=o~msft~docs).
2. Enter the email address you provided when your Visual Studio subscription was set up or purchased.
   > [!NOTE]
   > This address is also identified in the subscriber welcome email you received when you purchased your subscription or signed up for Visual Studio Dev Essentials. If you have trouble locating the welcome email, check your junk mail folders.
3. Enter your password.
4. Click **Sign in**.
5. At this point, you should see the "Benefits" page.
### <a name="for-visual-studio-dev-essentials-users"></a>For Visual Studio Dev Essentials users:
When you sign in to your Visual Studio Dev Essentials subscription for the first time, you'll see a welcome dialog. Click **Confirm** to accept the program's terms and conditions.
## <a name="signing-in-with-your-work-or-school-account"></a>Signing in with your work or school account
1. Visit [https://my.visualstudio.com](https://my.visualstudio.com?wt.mc_id=o~msft~docs).
2. Enter the email address to which your new Visual Studio subscription was assigned.
   > [!NOTE]
   > This address is also identified in the subscriber welcome email you received. If you have trouble locating the welcome email, check your junk mail folders.
3. Choose **Continue**.
4. You'll be redirected to your organization's sign-in page.
5. Enter your password.
6. Click **Sign in**.
7. At this point, you should see the "Benefits" page.
You can now see which type of subscription you're using, shown in the blue bar at the top of the portal.
You can also see the currently selected subscription at the top right, below your user name. It reads "Viewing:", followed by the subscription. If you have multiple subscriptions, you can click the drop-down arrow and select the subscription you want to use.
## <a name="using-your-microsoft-account-to-sign-in-to-a-work-or-school-account"></a>Using your Microsoft account to sign in to a work or school account
1. Go to [https://my.visualstudio.com](https://my.visualstudio.com?wt.mc_id=o~msft~docs).
2. Enter the email address to which your new Visual Studio subscription was assigned.
   > [!NOTE]
   > This address is also identified in the subscriber welcome letter. If you did not receive the welcome letter, check your junk mail folders.
3. Choose **Continue**.
4. You'll be redirected to a decision page.
   - Select **Work or school account** if your subscription is associated with a work or school account tied to an Azure Active Directory (AAD) tenant.
   - Select **Personal** if your subscription is associated with a work email address but was also converted into a personal Microsoft account.
     > [!NOTE]
     > This will be true for many subscribers who have used Visual Studio (formerly MSDN) subscriptions in the past.
   - If one path fails, try the other. Subscription admins may have changed the subscription.
5. Enter your password.
6. Click **Sign in**.
7. At this point, you should see the "Benefits" page.
## <a name="signing-in-with-your-github-account"></a>Signing in with your GitHub account
GitHub identity support lets you use an existing GitHub account as the credentials for a new or existing Microsoft account, by linking your GitHub account with your Microsoft account.
When you sign in with GitHub, Microsoft checks whether an email address associated with your GitHub account matches a personal or work Microsoft account. If the address matches your work account, you'll be prompted to sign in to that account. If it matches a personal account, the GitHub account is added as a sign-in method for that personal account.
Once your GitHub account and Microsoft account credentials are linked, you can use this single sign-on anywhere a personal Microsoft account sign-in is allowed, such as Azure sites, Office apps, and Xbox. These accounts can also be used for Azure Active Directory guest sign-ins with a Microsoft account, provided the email address matches the one in the invitation.
> [!NOTE]
> Linking a GitHub identity to a Microsoft account does not give Microsoft any access to your code. When apps such as Azure DevOps and Visual Studio request access to your code repositories, you will be prompted for specific consent to that access.
### <a name="frequently-asked-questions"></a>Frequently asked questions
The following frequently asked questions cover the use of your GitHub account credentials to sign in to Visual Studio subscriptions.
#### <a name="q-i-forgot-my-github-password--how-can-i-access-my-account-now"></a>Q: I forgot my GitHub password. How can I access my account now?
A: You can recover your GitHub account at [Reset password](https://github.com/password_reset). Alternatively, you can recover the Microsoft account linked to GitHub by entering your GitHub account's email address at [Recover your account](https://account.live.com/password/reset).
#### <a name="q-i-deleted-my-github-account--how-can-i-access-my-microsoft-account-msa-now"></a>Q: I deleted my GitHub account. How can I access my Microsoft account (MSA) now?
A: If there are no other credentials on the MSA (such as a password, the Authenticator app, or a security key), you can recover your Microsoft account using the email address linked to that account. To get started, go to [Recover your account](https://account.live.com/password/reset). You'll need to add a password to the account, which you can then use for subsequent sign-ins.
#### <a name="q-theres-no-sign-in-with-github-option-on-the-sign-in-page--how-can-i-use-my-github-credentials-to-sign-in"></a>Q: There's no "Sign in with GitHub" option on the sign-in page. How can I use my GitHub credentials to sign in?
A: Type the email address of the GitHub account you selected when creating the GitHub-linked Microsoft account. You'll be taken to GitHub to sign in. Alternatively, if the page has a "Sign-in options" link, use the **Sign in with GitHub** button shown after you click that link.
#### <a name="q-i-cant-sign-in-to-some-of-my-apps-and-products-with-github--why"></a>Q: I can't sign in to some of my apps and products with GitHub. Why?
A: Not all Microsoft products support signing in to GitHub.com from their sign-in page (for example, the Xbox console). When you type the email address of your linked GitHub account, Microsoft sends a code to that address to verify your identity. You're still signing in to the same account, just with a different sign-in method.
#### <a name="q--ive-added-a-password-to-the-microsoft-account-i-have-linked-to-my-github-account--will-that-cause-a-problem"></a>Q: I've added a password to the Microsoft account I have linked to my GitHub account. Will that cause a problem?
A: Not at all. This doesn't change your GitHub password; it's simply another way to sign in to your Microsoft account. Whenever you sign in using your email address, you can choose whether to sign in with your Microsoft account password or to go to GitHub to sign in. If you need to add a password, we recommend that it be different from your GitHub account password.
#### <a name="q-i-want-to-add-the-authenticator-app-to-the-account-i-created-using-github--can-i-do-that"></a>Q: I want to add the Authenticator app to the account I created using GitHub. Can I do that?
A: Yes. Just download the app and sign in using your email address. When you sign in with your email address, you'll be asked to choose either the [Authenticator app](https://go.microsoft.com/fwlink/?linkid=2090219) or GitHub as your credentials.
#### <a name="q-ive-enabled-two-factor-authentication-on-both-my-github-and-microsoft-accounts-msa-but-when-i-sign-in-to-my-msa-im-still-asked-to-authenticate-twice--why"></a>Q: I've enabled two-factor authentication on both my GitHub and Microsoft accounts (MSA), but when I sign in to my MSA I'm still asked to authenticate twice. Why?
A: Because of security restrictions, Microsoft treats a GitHub sign-in as single-factor verification, even though it is actually a two-step verification. Therefore, you have to authenticate again for your Microsoft account.
#### <a name="q--how-can-i-tell-if-my-microsoft-account-and-github-accounts-are-linked"></a>Q: How can I tell if my Microsoft account and GitHub accounts are linked?
A: Whenever you sign in using your account alias (email address, phone number, Skype name), you'll see all the sign-in methods for your account. If GitHub isn't among those methods, it hasn't been set up yet.
#### <a name="q--how-can-i-unlink-my-microsoft-and-github-accounts"></a>Q: How can I unlink my Microsoft and GitHub accounts?
A: Go to the [Security tab](https://account.microsoft.com/security) at account.microsoft.com and click **More security options** to unlink your GitHub account. When it is unlinked, the GitHub account is removed as a sign-in method, and access to any GitHub repositories in Visual Studio is removed as well. Other Microsoft products may have requested access to your GitHub account separately, so removing access here won't remove it in every product. Go to the [application permissions](https://github.com/settings/applications) page of your GitHub profile to revoke consent for the apps listed there.
#### <a name="q--i-try-to-use-my-github-account-to-sign-in-but-im-prompted-that-i-already-have-a-microsoft-identity-that-i-should-use-instead--whats-happening"></a>Q: I try to use my GitHub account to sign in, but I'm prompted that I already have a Microsoft identity that I should use instead. What's happening?
A: If you have an Azure Active Directory email address on your GitHub account, you already have a Microsoft identity that can sign in to Azure and run continuous integration pipelines from your GitHub code. Using that account ensures that your Azure resources and build pipelines stay within your organization's boundaries. If you're working on personal projects, however, we recommend putting a personal email address on your GitHub account so that you can always access the account itself. Once you've done that, try signing in again and choose **Use a different email address** when asked to sign in to your work or school account. This lets you create a new Microsoft account with that personal email address.
| 108.830645 | 890 | 0.796073 | ita_Latn | 0.999279 |
e937b742abe6181dc2965ffd9bcbc8d8acaf6519 | 7,998 | md | Markdown | week13.md | karlbenedict/GEOG485-585 | 5874bf25b767508e3586972ad86f5194c5db51ed | [
"MIT"
] | 2 | 2017-08-05T16:24:05.000Z | 2020-01-03T09:11:44.000Z | week13.md | karlbenedict/GEOG485-585 | 5874bf25b767508e3586972ad86f5194c5db51ed | [
"MIT"
] | 1 | 2018-08-24T19:02:06.000Z | 2018-08-24T19:02:06.000Z | week13.md | karlbenedict/GEOG485-585 | 5874bf25b767508e3586972ad86f5194c5db51ed | [
"MIT"
] | 5 | 2017-04-08T16:58:00.000Z | 2018-11-06T11:46:06.000Z |
---
title: Week 13 - Platforms and GeoServer Introduction
...
<!---------------------------------------------------------------------------->
<!-- Week 13 ----------------------------------------------------------------->
<!---------------------------------------------------------------------------->
# Introduction # {#week13}
Thus far we have concentrated on the client side of geospatial services oriented architectures: developing web interfaces based upon the Google Maps API and the OpenLayers JavaScript framework, and accessing data published using the OGC WMS, WFS, and WCS standards in desktop applications. Starting this week we begin our work on the server side - working with the GeoServer server platform to publish data through the OGC WMS, WFS, and WCS service standards. This work will demonstrate the ease with which you can share data using these standards, facilitating client use such as what we have seen in our website and desktop application work.
*Expected Outcomes*
By the end of this class, students should be able to:
* Place files within the server file system for integration into the GeoServer platform
* Create a GeoServer _Workspace_, _Store_, and _Layer_ based upon those data
* Test those layers using the _Layer Preview_ tools integrated into GeoServer
*Key Concepts*
By the end of this class, students should understand:
* The components of a map server platform and their relationship to each other
* The role of a geospatial server within a geospatial services oriented architecture
* The information required about data to successfully configure it for publication within GeoServer
* The stepwise process through which a dataset may be published using GeoServer
# Reference Materials # {#week13-reference}
* Safari Books Online [*Fundamentals of Linux: Learn important command-line tools and utilities with real-world examples.*](https://www.safaribooksonline.com/library/view/fundamentals-of-linux/9781788293945/) - particularly:
* [Chapter 2: Getting to Know the Command Line](https://www.safaribooksonline.com/library/view/fundamentals-of-linux/9781788293945/video2_1.html)
* [Chapter 3: It's All About the Files](https://www.safaribooksonline.com/library/view/fundamentals-of-linux/9781788293945/video3_1.html)
* GeoServer [Online Documentation](http://docs.geoserver.org/stable/en/user/index.html): sections [Introduction](http://docs.geoserver.org/stable/en/user/introduction/index.html), [Getting Started](http://docs.geoserver.org/stable/en/user/gettingstarted/index.html), and [Web Administration Interface](http://docs.geoserver.org/stable/en/user/webadmin/index.html)
# Weekly Milestone - Linux Basics and GeoServer Data Import# {#week13-milestone}
## Working on the Class Server
For the GeoServer portion of our work, you will be working on a Linux server that has been created for the class. While we won't be doing a lot of Linux work, some basic familiarity with moving around, copying files, and working with files is needed. The class server is running Ubuntu Linux which is a broadly deployed, well supported operating system and computing platform that has excellent support for many Open Source geospatial applications, including those that we will be using in this class.
The first set of exercises relate to learning some basics about working with the Linux Operating system, applicable on just about any Linux server including the class server.
Review (but don't worry about memorizing) the following materials (in addition to watching the video tutorial sections listed above):
[Webmonkey "Unix Guide"](http://www.webmonkey.com/2010/02/unix-guide/)
[Linux Command Line Cheatsheet](http://www.cheatography.com/davechild/cheat-sheets/linux-command-line/)
QUESTION 1
: What command would you use to list the contents of a directory on a linux system?
QUESTION 2
: What command would you use to read the "manual page" for a specific command?
Log into the class Linux server - [`internetmapping.net:8080/geoserver/web`](http://internetmapping.net:8080/geoserver/web). *This is different from the address referenced in the videos linked below.* The rest of the process is the same as demonstrated in the videos. Your username and password for both the class Linux server and GeoServer have been sent to you via email.
*Windows*: Open PuTTY on your computer and connect using the SSH protocol (see video demonstration)
[Link to the YouTube video demonstration for Windows](http://youtu.be/GdO_n89mey8)
*Mac*: Open the Terminal Application and connect using SSH (see video)
[Link to the YouTube video demonstration for Mac OS X](http://youtu.be/Gu_ij6HxTWo)
Start a session on the class Linux server, which is located at the hostname `internetmapping.net` (use the class server username and password you received through email to open the connection). **NOTE: the class server is accessed through a non-standard network port. Enter the port number `23` in the connection dialog boxes where there is an option to specify the port. When using the SSH command [i.e. on the Mac] include the port number in the connection command.**
For example:
ssh -p 23 user001@internetmapping.net
After logging in you are in your `home directory` - the directory that is linked to your account on the system, and the directory that you are taken to when you type the `cd` command without any additional arguments.
## Adding data to GeoServer ##
To add data to GeoServer you must have a file location on the server where data files are stored and accessible by the GeoServer.
Task
: Change into your home directory using the `cd` command without any additional arguments.
Task
: Copy the folder of sample data files located at `/opt/geoserver/data_dir/general/user000/GeoserverSampleData` by executing the following command from *inside your home directory*.
    cp -r ../user000/GeoserverSampleData .
(Make sure to include the final `.`.)
This will place a copy of the folder of data files in your home directory. Rename (using the Linux `mv` command) each of the copied files and directories (and their contents) to prepend your initials (replacing mine) at the beginning of each file and directory name. For example, rename `kb_m_3510659_ne_13_1_20110523.tif` as `xy_m_3510659_ne_13_1_20110523.tif`. This will help avoid some issues later in our work with layers based on source files that share the same name. **You might find this a faster task using the WinSCP [Windows] or CyberDuck [Mac] utilities instead of the command line.**
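The renaming can also be done in bulk with a small shell loop. This is a sketch against dummy files created in a scratch directory: substitute your own initials for `xy`, run the loop inside your home directory, and rename the `GeoserverSampleData` directory itself with one more `mv` if desired.

```sh
set -e
# Demo setup: dummy files standing in for the real sample data
demo=$(mktemp -d) && cd "$demo"
mkdir GeoserverSampleData
touch GeoserverSampleData/kb_m_3510659_ne_13_1_20110523.tif \
      GeoserverSampleData/kb_roads.shp
# Replace the "kb_" prefix with your own initials (here "xy_")
for f in GeoserverSampleData/kb_*; do
    base=$(basename "$f")
    mv "$f" "GeoserverSampleData/xy_${base#kb_}"
done
ls GeoserverSampleData   # -> xy_m_3510659_ne_13_1_20110523.tif  xy_roads.shp
```

The `${base#kb_}` parameter expansion strips the leading `kb_` prefix, so the loop works no matter how many files share that prefix.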
Task
: Log into GeoServer on the class server ([http://internetmapping.net:8080/geoserver/web/](http://internetmapping.net:8080/geoserver/web/)) using the username and password provided by email.
Create a new _workspace_ based on your net ID. For example, `ws_<your netid>`.
Create a new _store_ for each of the datasets added to your home directory above (**4 .tif files and 3 shape files**). Assign the new store to the workspace that you created above. When specifying the `Connection Parameters` for pointing to the file, you can browse to the location in the server's file system by using the `browse...` link next to the URL field under the `Connection Parameters` section of the store creation page. All of the home directories are in the `general` folder under the `data_dir` in the file browser.
for example
file:general/user000/GeoserverSampleData/kb_m_3510659_ne_13_1_20110523.tif
\
Create a new _layer_ for each of the _stores_ added above. Here are some things to keep in mind:
You may need to designate the SRS for a layer if it can't be read directly from the dataset. You specify the _designated_ SRS using the standard EPSG:XXXX format.
The EPSG code for `GCS_North_American_1983` is EPSG:4269
Question 3
: Preview each of your added layers, using the _Layer Preview_ tool and the _Open Layers_ option to display the data. Include screen grabs of the previews in your write-up.
| 65.557377 | 648 | 0.766317 | eng_Latn | 0.994203 |
e937c47084994c6e98683b11182b112221abf5fe | 1,904 | md | Markdown | endpoints/favorites/POST_favorites_id.md | TeenQuotes/api-documentation | 22add396050162834c280760448df5d001a7209e | [
"FSFAP"
] | 1 | 2017-05-09T10:58:29.000Z | 2017-05-09T10:58:29.000Z | endpoints/favorites/POST_favorites_id.md | TeenQuotes/api-documentation | 22add396050162834c280760448df5d001a7209e | [
"FSFAP"
] | null | null | null | endpoints/favorites/POST_favorites_id.md | TeenQuotes/api-documentation | 22add396050162834c280760448df5d001a7209e | [
"FSFAP"
] | null | null | null |
# FavoriteQuotes Resources
POST favorites/:id
## Description
Add a quote in the user's favorites.
## Requires authentication
* A valid access token must be provided in **access_token** parameter.
The `access_token` should be sent using an HTTP header like so:
Authorization: Bearer access_token
An example call with CURL:
curl --header "Authorization: Bearer ZllAle9NZ11FkMyX5xm0evswWOTinrr5I26uLcGB" --data "" https://api.teen-quotes.com/v1/favorites/42
## Return format
A JSON object containing keys of the new FavoriteQuote object in the following format:
- **id** - The ID of the new FavoriteQuote
- **quote_id** - The ID of the Quote added to the user's favorites
- **user_id** - The ID of the user currently adding the quote
- **created_at** - Describes the date when the resource was created
- **updated_at** - Describes the date when the resource was last updated
## Errors
All known errors cause the resource to return HTTP error code header together with a JSON array containing at least `status` and `error` keys describing the source of error.
- **400 Bad request** — When the **status** key has one of the following values: `quote_not_found`, `quote_already_favorited`, `quote_not_published`.
### `error` messages
The `error` messages are the following:
- If `status` is `quote_not_found`: `The quote #:id was not found.`
- If `status` is `quote_already_favorited`: `The quote #:id was already favorited.`
- If `status` is `quote_not_published`: `The quote #:id is not published.`
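Scripted clients can branch on the machine-readable `status` key instead of the human-readable `error` message. Below is a minimal shell sketch that uses a canned 400 response body so it runs offline; a real client would capture the body from the `curl` call shown earlier.

```sh
# Canned 400 response body; in practice: response=$(curl -s -H "Authorization: Bearer ..." --data "" <url>)
response='{"status":"quote_already_favorited","error":"The quote #750 was already favorited."}'
# Crude JSON field extraction (prefer jq in real scripts)
status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
case "$status" in
    quote_not_found)         echo "No such quote." ;;
    quote_already_favorited) echo "Already in favorites; nothing to do." ;;
    quote_not_published)     echo "Quote is not published yet." ;;
    "")                      echo "Success: quote favorited." ;;
esac   # -> Already in favorites; nothing to do.
```

Branching on `status` keeps the client stable even if the wording of the `error` messages changes.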
## Example
**Request**
POST https://api.teen-quotes.com/v1/favorites/750
### Success
With an HTTP code 201.
``` json
{
"user_id":42,
"quote_id":750,
"updated_at":"2014-05-24 14:14:38",
"created_at":"2014-05-24 14:14:38",
"id":2005
}
```
### Error
For an error with HTTP code 400:
``` json
{
"status":"quote_already_favorited",
"error":"The quote #750 was already favorited"
}
```
| 30.222222 | 173 | 0.717962 | eng_Latn | 0.958116 |
e938049a9291dc3c7c45e4e7e1eb2f98e966d1d0 | 7,693 | md | Markdown | articles/active-directory/b2b/google-federation.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-06-06T00:12:00.000Z | 2019-06-06T00:12:00.000Z | articles/active-directory/b2b/google-federation.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/b2b/google-federation.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Add Google as an identity provider for B2B - Azure Active Directory | Microsoft Docs
description: Federate with Google to enable guest users to sign in to your Azure AD apps with their own Gmail account
services: active-directory
ms.service: active-directory
ms.subservice: B2B
ms.topic: conceptual
ms.date: 12/17/2018
ms.author: mimart
author: msmimart
manager: celestedg
ms.reviewer: mal
ms.custom: "it-pro, seo-update-azuread-jan"
ms.collection: M365-identity-device-management
---
# Add Google as an identity provider for B2B guest users
By setting up federation with Google, you can allow invited users to sign in to your shared apps and resources with their own Google accounts, without having to create Microsoft Accounts (MSAs) or Azure AD accounts.
> [!NOTE]
> Your Google guest users must sign in using a link that includes the tenant context (for example, `https://myapps.microsoft.com/?tenantid=<tenant id>` or `https://portal.azure.com/<tenant id>`, or in the case of a verified domain, `https://myapps.microsoft.com/<verified domain>.onmicrosoft.com`). Direct links to applications and resources also work as long as they include the tenant context. Guest users are currently unable to sign in using endpoints that have no tenant context. For example, using `https://myapps.microsoft.com`, `https://portal.azure.com`, or the Teams common endpoint will result in an error.
## What is the experience for the Google user?
When you send an invitation to a Google Gmail user, the guest user should access your shared apps or resources using a link that includes the tenant context. Their experience varies depending on whether they're already signed in to Google:
- If the guest user is not signed in to Google, they're prompted to sign in to Google.
- If the guest user is already signed in to Google, they'll be prompted to choose the account they want to use. They must choose the account you used to invite them.
If the guest user sees a "header too long" error, they can try clearing their cookies, or they can open a private or incognito window and try signing in again.

## Step 1: Configure a Google developer project
First, create a new project in the Google Developers Console to obtain a client ID and a client secret that you can later add to Azure AD.
1. Go to the Google APIs at https://console.developers.google.com, and sign in with your Google account. We recommend that you use a shared team Google account.
2. Create a new project: On the Dashboard, select **Create Project**, and then select **Create**. On the New Project page, enter a **Project Name**, and then select **Create**.

3. Make sure your new project is selected in the project menu. Then open the menu in the upper left and select **APIs & Services** > **Credentials**.

4. Choose the **OAuth consent screen** tab and enter an **Application name**. (Leave the other settings.)

5. Scroll to the **Authorized domains** section and enter microsoftonline.com.

6. Select **Save**.
7. Choose the **Credentials** tab. In the **Create credentials** menu, choose **OAuth client ID**.

8. Under **Application type**, choose **Web application**, and then under **Authorized redirect URIs**, enter the following URIs:
- `https://login.microsoftonline.com`
- `https://login.microsoftonline.com/te/<directory id>/oauth2/authresp` <br>(where `<directory id>` is your directory ID)
> [!NOTE]
> To find your directory ID, go to https://portal.azure.com, and under **Azure Active Directory**, choose **Properties** and copy the **Directory ID**.

9. Select **Create**. Copy the client ID and client secret, which you'll use when you add the identity provider in the Azure AD portal.

## Step 2: Configure Google federation in Azure AD
Now you'll set the Google client ID and client secret, either by entering them in the Azure AD portal or by using PowerShell. Be sure to test your Google federation configuration by inviting yourself using a Gmail address and trying to redeem the invitation with your invited Google account.
#### To configure Google federation in the Azure AD portal
1. Go to the [Azure portal](https://portal.azure.com). In the left pane, select **Azure Active Directory**.
2. Select **Organizational Relationships**.
3. Select **Identity providers**, and then click the **Google** button.
4. Enter a name. Then enter the client ID and client secret you obtained earlier. Select **Save**.

#### To configure Google federation by using PowerShell
1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)).
2. Run the following command:
`Connect-AzureAD`.
3. At the sign-in prompt, sign in with the managed Global Administrator account.
4. Run the following command:
`New-AzureADMSIdentityProvider -Type Google -Name Google -ClientId [Client ID] -ClientSecret [Client secret]`
> [!NOTE]
> Use the client ID and client secret from the app you created in "Step 1: Configure a Google developer project." For more information, see the [New-AzureADMSIdentityProvider](https://docs.microsoft.com/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview) article.
## How do I remove Google federation?
You can delete your Google federation setup. If you do so, Google guest users who have already redeemed their invitation will not be able to sign in, but you can give them access to your resources again by deleting them from the directory and re-inviting them.
### To delete Google federation in the Azure AD portal:
1. Go to the [Azure portal](https://portal.azure.com). In the left pane, select **Azure Active Directory**.
2. Select **Organizational Relationships**.
3. Select **Identity providers**.
4. On the **Google** line, select the context menu (**...**) and then select **Delete**.

5. Select **Yes** to confirm deletion.
### To delete Google federation by using PowerShell:
1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)).
2. Run `Connect-AzureAD`.
3. At the sign-in prompt, sign in with the managed Global Administrator account.
4. Run the following command:
`Remove-AzureADMSIdentityProvider -Id Google-OAUTH`
> [!NOTE]
> For more information, see [Remove-AzureADMSIdentityProvider](https://docs.microsoft.com/powershell/module/azuread/Remove-AzureADMSIdentityProvider?view=azureadps-2.0-preview).
# legendary-mercenary
Legendary Mercenary
---
id: 4499
full_name: "user381/sing-reci"
images:
- "user381-sing-reci-test"
- "user381-sing-reci-latest"
- "user381-sing-reci-latest"
- "user381-sing-reci-obsolete"
- "user381-sing-reci-101-cudnn7-runtime-ubuntu1804"
- "user381-sing-reci-110-cudnn8-runtime-ubuntu1804-rc"
- "user381-sing-reci-110-cudnn8-runtime-ubuntu1804-rc"
- "user381-sing-reci-92-cudnn7-runtime-ubuntu1804"
- "user381-sing-reci-110-cudnn8-runtime-ubuntu1804-rc"
- "user381-sing-reci-latest-gpu-jupyter"
- "user381-sing-reci-9.2-cudnn7-runtime-ubuntu18.04"
---
## Logging and Metrics in user Workflow code
A hard requirement for our logging and metrics solution for user code is that we do not enforce a specific tool or convention and instead let our users use their existing tools.<br/>
For Activities, this is not an issue because users can run anything in an Activity. In Workflows, on the other hand, only deterministic code can run, and metrics and logs should usually be dropped when a Workflow is replaying. Additionally, the fact that the Node SDK runs Workflows in v8 isolates means that there's no way to do IO or anything else that interacts with the outside world without explicit support from the SDK.
### IO in Workflow code
There are a few options for doing IO inside a Workflow's isolate:
1. Load a binary (c++) module into the isolate, the module is not isolated the way JS code is
2. Inject a function reference from the main NodeJS isolate into a Workflow isolate
3. Accumulate data in the isolate (e.g when collecting commands to complete an activation) and pass it into a function in the main NodeJS isolate
(1) is automatically ruled out because users' existing tools are probably written in JS and depend on the NodeJS environment.<br/>
(2) has some overhead (cross thread communication and possibly blocking the isolate thread) - from the [isolated-vm](https://github.com/laverdet/isolated-vm#api-documentation) documentation:
> Calling the synchronous functions will block your thread while the method runs and eventually returns a value. The asynchronous functions will return a Promise while the work runs in a separate thread pool.<br/>
> ... <br/>
> Additionally, some methods will provide an "ignored" version which runs asynchronously but returns no promise. This can be a good option when the calling isolate would ignore the promise anyway, since the ignored versions can skip an extra thread synchronization. Just be careful because this swallows any thrown exceptions which might make problems hard to track down.
(2) is limited to either synchronous or ignored functions, as async functions conflict with the way we run isolated Workflow code: we assume that there are never any outstanding promises at the end of a Workflow activation.
(3) raises the following reliability concerns:
- When the isolate runs out of memory, but in that case logs will probably not work anyway.
- When Workflow code gets stuck, e.g. goes into an infinite loop, but we can mitigate this problem by enforcing an execution time limit in the isolate; in case execution times out we will still be able to extract the logs.
(3) has the lowest overhead but also means there's an inherent delay between log and metric generation and processing; the delay should be relatively minimal assuming Workflow code is not CPU bound, since activations are typically short.<br/>
(3) is limited to either ignored or asynchronous functions since it does **not** block the isolate.
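As a concrete illustration of option (3), the host-side buffer can be sketched in a few lines (the names below are illustrative, not the SDK's actual internals):

```js
// Sketch of option (3): log calls made inside the isolate are buffered as
// plain data and only handed to the host logger when the activation ends.
class ActivationLogBuffer {
  constructor() {
    this.entries = [];
  }

  // Called from inside the isolate: cheap, synchronous, no IO.
  log(level, message) {
    this.entries.push({ level, message, ts: Date.now() });
  }

  // Called by the host at the end of an activation.
  flush(hostLogger, { isReplaying }) {
    const flushed = this.entries;
    this.entries = [];
    if (isReplaying) return []; // drop logs generated during replay
    for (const e of flushed) hostLogger(e);
    return flushed;
  }
}
```

Note how the replay check lives on the host side, so the isolate never needs to know whether it is replaying.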
### Proposed solution
Allow users to expose their logging and metrics interfaces to Workflow code as external dependencies.
We'll support techniques (2) and (3) stated above depending on the type of injected function.
**Extra care should be taken when using values returned from external dependencies because doing so can easily break deterministic Workflow execution.**
### Definitions
#### `WorkflowInfo`
Logging and metrics require the Workflow's execution context information to have any meaning.
We define the `WorkflowInfo` interface which is accessible in Workflow code via `Context.info` and passed into users' "external dependency" functions.
`@temporalio/workflow`
```ts
export interface WorkflowInfo {
workflowId: string;
runId: string;
filename: string;
namespace: string;
taskQueue: string;
isReplaying: boolean;
}
```
#### `ApplyMode`
We define an `ApplyMode` enum for specifying how a dependency function is executed.
`@temporalio/workflow`
```ts
/**
* Controls how an external dependency function is executed.
* - `ASYNC*` variants run at the end of an activation and do **not** block the isolate.
* - `SYNC*` variants run during Workflow activation and block the isolate,
* they're passed into the isolate using an {@link https://github.com/laverdet/isolated-vm#referenceapplyreceiver-arguments-options-promise | isolated-vm Reference}
* The Worker will log if an error occurs in one of ignored variants.
*/
export enum ApplyMode {
/**
* Injected function will be called at the end of an activation.
* Isolate enqueues function to be called during activation and registers a callback to await its completion.
* Use if exposing an async function to the isolate for which the result should be returned to the isolate.
*/
ASYNC = 'async',
/**
* Injected function will be called at the end of an activation.
* Isolate enqueues function to be called during activation and does not register a callback to await its completion.
* This is the safest async `ApplyMode` because it can not break Workflow core determinism.
* Can only be used when the injected function returns void and the implementation returns void or Promise<void>.
*/
ASYNC_IGNORED = 'asyncIgnored',
/**
* Injected function is called synchronously, implementation must be a synchronous function.
* Injection is done using an `isolated-vm` reference, function called with `applySync`.
*/
SYNC = 'applySync',
/**
* Injected function is called synchronously, implementation must return a promise.
* Injection is done using an `isolated-vm` reference, function called with `applySyncPromise`.
* This is the safest sync `ApplyMode` because it can not break Workflow core determinism.
*/
SYNC_PROMISE = 'applySyncPromise',
/**
* Injected function is called in the background not blocking the isolate.
* Implementation can be either synchronous or asynchronous.
* Injection is done using an `isolated-vm` reference, function called with `applyIgnored`.
*/
SYNC_IGNORED = 'applyIgnored',
}
```
#### `InjectedDependencyFunction<F>`
We define an `InjectedDependencyFunction<F>` interface that takes a `DependencyFunction` (any function) and turns it
into a type safe specification consisting of the function implementation type and configuration.
> NOTE: The actual definition of this interface is much more complex because it constrains which apply modes can be used depending on the interface and implementation.
```ts
/** Any function can be a dependency function (as long as it uses transferrable arguments and return type) */
export type DependencyFunction = (...args: any[]) => any;
export interface InjectedDependencyFunction<F extends DependencyFunction> {
/**
* Type of the implementation function for dependency `F`.
*/
fn(info: WorkflowInfo, ...args: Parameters<F>): ReturnType<F>;
/**
* Whether or not a dependency's functions will be called during Workflow replay
* @default false
*/
callDuringReplay?: boolean;
/**
* Defines how a dependency's functions are called from the Workflow isolate
* @default IGNORED
*/
applyMode: ApplyMode;
/**
* By default function arguments are copied on invocation.
* That can be customized per isolated-vm docs with these options.
* Only applicable to `SYNC_*` apply modes.
*/
arguments?: 'copy' | 'reference';
}
```
### Logger injection example
`interfaces/logger.ts`
```ts
/** Simplest logger interface for the sake of this example */
export interface Logger {
info(message: string): void;
error(message: string): void;
}
```
`interfaces/index.ts`
```ts
import { Dependencies } from '@temporalio/workflow';
import { Logger } from './logger';
export interface MyDependencies extends Dependencies {
logger: Logger;
}
```
Use dependencies from a Workflow.
`workflows/logger-deps-demo.ts`
```ts
import { Context } from '@temporalio/workflow';
import { MyDependencies } from '../interfaces';
const { logger } = Context.dependencies<MyDependencies>();
// NOTE: dependencies may not be called at the top level because they require the Workflow to be initialized.
// You may reference them as demonstrated above.
// If called here an `IllegalStateError` will be thrown.
export function main(): void {
logger.info('hey ho');
logger.error('lets go');
}
```
Register dependencies for injection into the Worker's isolate context.
Each time a Workflow is initialized it gets a reference to the injected dependencies.
`worker/index.ts`
```ts
import { WorkflowInfo, ApplyMode } from '@temporalio/workflow';
import { MyDependencies } from '../interfaces';
await worker.create<{ dependencies: MyDependencies /* optional for type checking */ }>({
workDir: __dirname,
dependencies: {
logger: {
// Your logger implementation goes here.
// NOTE: your implementation methods receive WorkflowInfo as an extra first argument.
info: {
fn(info: WorkflowInfo, message: string) {
console.log(info, message);
},
applyMode: ApplyMode.SYNC,
arguments: 'copy',
},
error: {
fn(info: WorkflowInfo, message: string) {
console.error(info, message);
},
applyMode: ApplyMode.ASYNC_IGNORED,
// Not really practical to have only error called during replay.
// We put it here just for the sake of the example.
callDuringReplay: true,
},
},
},
},
});
```
### Other considered solutions
1. Provide our own logger implementation
- Users cannot use their own tools
1. Expose the Worker's logger to Workflows and Activities
- Does not cover metrics
- Users cannot use their own tools
1. Let users build their own logger over the injected `console.log`
- Requires Workflow `console.log` output to be customizable
- Separate solution for Activities and Workflows
- Users cannot use their own tools
### A note on `console.log`
Currently we inject `console.log` into the Workflow isolate for convenience. This has proven to be quite handy but the produced logs are missing important context like the workflowId / runId that generated them.
We should modify `console.log`'s output to include the relevant context by default and allow overriding `console.log` via the external dependencies mechanism.
- `console.log` messages should be dropped during replays (by default).
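A rough sketch of what a context-aware, replay-dropping `console.log` wrapper could look like (the factory name and the injection mechanism are hypothetical):

```js
// Builds a console-like object bound to a Workflow's info. Messages are
// prefixed with workflowId/runId and dropped during replay by default.
function makeWorkflowConsole(info, sink) {
  return {
    log(...args) {
      if (info.isReplaying) return; // drop during replay by default
      sink(`[${info.workflowId}/${info.runId}]`, ...args);
    },
  };
}
```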
### Dependencies
In order to implement this solution we need to add an `isReplay` flag to each activation in the core⇔lang interface.
### Alternative interface - NOT chosen
#### `Context.dependency` + `Worker.inject`
`workflows/logger-demo.ts`
```ts
import { Context } from '@temporalio/workflow';
import { Logger } from '../interfaces/logger';
const log = Context.dependency<Logger>('logger');
export async function main(): Promise<void> {
log.info('hey ho');
log.error('lets go');
}
```
`worker/index.ts`
```ts
import { WorkflowInfo } from '@temporalio/workflow';
import { Logger } from '../interfaces/logger';
// ...
worker.inject<Logger /* optional for type checking */>(
'logger',
{
/* Your logger implementation goes here */
info(info: WorkflowInfo, message: string) {
console.log(info, message);
},
error(info: WorkflowInfo, message: string) {
console.error(info, message);
},
},
{ callDuringReplay: true /* default is false */ }
);
```
#### Comparison
| | Alternative | Single `Dependencies` interface |
| -------------------- | ---------------------------------------------------------------- | ------------------------------- |
| Missing dependencies | ❌ Hard to enforce all dependencies are injected into the worker | ✅ Enforced by type checker |
| Dependency naming | ❌ Prone to typos | ✅ Enforced by type checker |
Single `Dependencies` interface was chosen for improved type safety.
| 40.121212 | 399 | 0.721635 | eng_Latn | 0.993772 |
e938a9582041d5e6c58bcf0550db45e2b963d00a | 1,480 | md | Markdown | _posts/P5JS/2020-09-30-Mathologer.md | Andremartiny/AndreMartiny.github.io | 54d6ebadb735bc865ee152a59d6ee964a0cf9c0c | [
"MIT"
] | null | null | null | _posts/P5JS/2020-09-30-Mathologer.md | Andremartiny/AndreMartiny.github.io | 54d6ebadb735bc865ee152a59d6ee964a0cf9c0c | [
"MIT"
] | null | null | null | _posts/P5JS/2020-09-30-Mathologer.md | Andremartiny/AndreMartiny.github.io | 54d6ebadb735bc865ee152a59d6ee964a0cf9c0c | [
"MIT"
] | null | null | null | ---
layout: post
title: "Modulær multiplikasjon"
mathjax: true
hidden: true
permalink: /animasjoner/P5JS/modular_multiplikasjon/
---
Under er en animasjon inspirert av en <a href="https://www.youtube.com/watch?v=qhbuKbxJsk8&ab_channel=Mathologer" target="_blank" >YouTube video </a> av Burkard Polster.
<iframe src="https://editor.p5js.org/AndreMartiny/embed/8TJxwkhib" width="600" height="600" frameBorder="0"></iframe>
<details>
<summary>Se koden her</summary>
<p>
{% highlight js linenos%}
let faktor = 0.0;
let antall = 150;
let radius;
let sentrum;
let lengde = 13;
let nevner = 20;
function setup() {
createCanvas(600, 600);
radius = width / 2 - 10;
sentrum = [width / 2, height / 2];
textSize(32);
}
function draw() {
colorMode(HSB);
background(0);
noFill();
stroke(255);
strokeWeight(6);
circle(width / 2, height / 2, radius * 2);
for (let i = 0; i < antall; i++) {
stroke(i * 255 / antall, 100, 60)
line(sentrum[0] + radius * cos(i * 2 * PI / antall),
sentrum[1] + radius * sin(i * 2 * PI / antall),
(sentrum[0] + radius * cos(i * 2 * PI / antall)) * (lengde / nevner) +
(sentrum[0] + radius * cos((faktor * i * 2 * PI / antall) % antall)) * (nevner - lengde) / nevner,
(sentrum[1] + radius * sin(i * 2 * PI / antall)) * (lengde / nevner) +
(sentrum[1] + radius * sin((faktor * i * 2 * PI / antall) % antall)) * (nevner - lengde) / nevner);
}
faktor += 0.02
}
{% endhighlight %}
</p>
</details>
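Stripped of the p5 drawing code, the classic times-table rule behind animations like this one connects point `i` on the circle toward point `(factor * i) mod n` (the sketch above applies the modulus to the angle instead, which produces a closely related pattern):

```js
// Endpoints of the chord drawn for point i under the times-table mapping
// i -> (factor * i) mod n, on a circle of the given radius.
function chordEndpoints(i, factor, n, radius = 1) {
  const a = (2 * Math.PI * i) / n;
  const b = (2 * Math.PI * ((factor * i) % n)) / n;
  return {
    from: [radius * Math.cos(a), radius * Math.sin(a)],
    to: [radius * Math.cos(b), radius * Math.sin(b)],
  };
}
```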
# appMobileFramework
A prototype framework for building apps with Cordova.
WARNING! This is a really old version; the new version is Spike Framework: www.spikeframework.com
---
title: ISymUnmanagedENCUpdate::UpdateSymbolStore2 Method
ms.date: 03/30/2017
api_name:
- ISymUnmanagedENCUpdate.UpdateSymbolStore2
api_location:
- diasymreader.dll
api_type:
- COM
f1_keywords:
- ISymUnmanagedENCUpdate::UpdateSymbolStore2
helpviewer_keywords:
- ISymUnmanagedENCUpdate::UpdateSymbolStore2 method [.NET Framework debugging]
- UpdateSymbolStore2 method [.NET Framework debugging]
ms.assetid: 35588317-6184-485c-ab41-4b15fc1765d9
topic_type:
- apiref
author: mairaw
ms.author: mairaw
ms.openlocfilehash: 82f2f335299cfd3041dcecc7d176cb77ce54ae96
ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/08/2019
ms.locfileid: "59172132"
---
# <a name="isymunmanagedencupdateupdatesymbolstore2-method"></a>ISymUnmanagedENCUpdate::UpdateSymbolStore2 Method
Allows a compiler to omit functions that have not been modified from the program database (PDB) stream, provided the line information meets the requirements. The correct line information can be determined with the old line information for the PDB and a delta for all lines in the function.
## <a name="syntax"></a>Syntax
```
HRESULT UpdateSymbolStore2(
[in] IStream *pIStream,
[in] SYMLINEDELTA* pDeltaLines,
[in] ULONG cDeltaLines);
```
## <a name="parameters"></a>Parameters
 `pIStream`
 [in] A pointer to an [IStream](/windows/desktop/api/objidl/nn-objidl-istream) that contains the line information.
 `pDeltaLines`
 [in] A pointer to a [SYMLINEDELTA](../../../../docs/framework/unmanaged-api/diagnostics/symlinedelta-structure.md) structure that contains the lines that have changed.
 `cDeltaLines`
 [in] A `ULONG` that represents the number of lines that have changed.
## <a name="return-value"></a>Return Value
 S_OK if the method succeeds; otherwise, E_FAIL or another error code.
## <a name="requirements"></a>Requirements
 **Header:** CorSym.idl, CorSym.h
## <a name="see-also"></a>See also
- [ISymUnmanagedENCUpdate Interface](../../../../docs/framework/unmanaged-api/diagnostics/isymunmanagedencupdate-interface.md)
---
layout: annotation_by_tag
tag: formatting
---
# revision-ten/mailchimp
## Installation
#### Install via composer
Run `composer req revision-ten/mailchimp`.
### Add the Bundle
Add the bundle to your AppKernel (Symfony 3.4.\*) or your Bundles.php (Symfony 4.\*).
Symfony 3.4.\* /app/AppKernel.php:
```PHP
new \RevisionTen\Mailchimp\MailchimpBundle(),
```
Symfony 4.\* /config/bundles.php:
```PHP
RevisionTen\Mailchimp\MailchimpBundle::class => ['all' => true],
```
### Configuration
Configure the bundle:
```YAML
# Mailchimp example config.
mailchimp:
    api_key: 'XXXXXXXXXXXXXXXXXXXXXXX-us5' # Your Mailchimp API key.
campaigns:
dailyNewsletterCampagin:
            list_id: '123456' # ID of your newsletter list.
```
### Usage
Use the MailchimpService to subscribe users.
Symfony 3.4.\* example:
```PHP
$mailchimpService = $this->container->get(MailchimpService::class);
$subscribed = $mailchimpService->subscribe('dailyNewsletterCampagin', 'visitor.email@domain.tld', 'My Website', [
'FNAME' => 'John',
'LNAME' => 'Doe',
]);
```
Or unsubscribe users:
```PHP
$mailchimpService = $this->container->get(MailchimpService::class);
$unsubscribed = $mailchimpService->unsubscribe('dailyNewsletterCampagin', 'visitor.email@domain.tld');
```
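On Symfony 4.\* with autowiring enabled, the same service can also be constructor-injected instead of fetched from the container. This is a sketch (the class name is invented; it assumes the bundle registers `MailchimpService` as an autowirable service):

```PHP
class NewsletterSignupController
{
    private $mailchimp;

    public function __construct(MailchimpService $mailchimp)
    {
        $this->mailchimp = $mailchimp;
    }

    public function signup(string $email): bool
    {
        // Same subscribe() call as above, using the injected service.
        return $this->mailchimp->subscribe('dailyNewsletterCampagin', $email, 'My Website', [
            'FNAME' => 'John',
            'LNAME' => 'Doe',
        ]);
    }
}
```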
# Weather App In React Native
### This is A Simple Weather App Made Using React Native
### Installing
> Clone This Repo
> Run npm install
> Complete The TODO Steps Below
> Run The App
## TODO
* Go To https://openweathermap.org/api To Get An API KEY
* Replace This Line ( App.js - Line : 42 )
```javascript
fetch('https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=***********************')
```
* With ( Replace The Stars With Your API KEY )
```javascript
fetch('http://api.openweathermap.org/data/2.5/weather?q='+this.state.city+'&appid=***')
```
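If you want to keep the URL construction in one place, a small helper like this can be used (the function name is just an example):

```javascript
// Builds the OpenWeatherMap request URL for a city and API key.
function weatherUrl(city, apiKey) {
  return (
    "http://api.openweathermap.org/data/2.5/weather?q=" +
    encodeURIComponent(city) +
    "&appid=" +
    apiKey
  );
}
```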
## Built With
* React Native
* React-Native-Vector-Icons
* OpenWeatherMap
## Tutorial To Get API KEY
[OpenWeatherMap API KEY - Belgin Android](https://www.youtube.com/watch?v=23WXD9_gdoY&t=45s)
## Sample Preview
<img src="https://user-images.githubusercontent.com/61349423/95949981-6401bb00-0e11-11eb-93ce-6bdc7960f11e.gif" width="250" height="500">
## Authors
* **Belgin Android** - *All Works* - [Belgin Android](https://github.com/Belgin-Android)
## Issues ?
* Contact Me At [Instagram](https://www.instagram.com/letonations/)
## Acknowledgments
* Hat tip to anyone whose code was used
* Inspiration
* etc
# SmartChange - JunctionX Seoul 2021
> AI-powered web application able to track changes in urban landscape
🥉 Third-place winner in the **[SI Analytics](https://si-analytics.ai/eng/)** track of the **[JunctionX Seoul 2021](https://junctionx-seoul-2021.oopy.io/)** Hackathon (May 21-23, 2021)

### Team
- 🇷🇺 **Nikita Rusetskii** (Irkutsk National Research Technical University, Russia) <a target="_blank" href="https://www.linkedin.com/in/xtenzq/" target="_blank"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5.svg?&style=flat-badge&logo=linkedin&logoColor=white" /></a> <a target="_blank" href="https://github.com/xtenzQ" target="_blank"><img alt="GitHub" src="https://img.shields.io/badge/GitHub-181717.svg?&style=flat-badge&logo=github&logoColor=white" /></a>
- 🇷🇺 **Konstantin Shusterzon** (Melentiev Energy Systems Institute, Russia) <a target="_blank" href="https://www.linkedin.com/in/konstantin-shusterzon-a9aa02181/" target="_blank"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5.svg?&style=flat-badge&logo=linkedin&logoColor=white" /></a> <a target="_blank" href="https://github.com/Exterminant" target="_blank"><img alt="GitHub" src="https://img.shields.io/badge/GitHub-181717.svg?&style=flat-badge&logo=github&logoColor=white" /></a>
- 🇷🇺 **Lily Grunwald** (Novosibirsk State University, Russia)
- 🇰🇷 **Bison Lim** (Inha University, South Korea)
- 🇰🇷 **Junyong Lee** (Inha University, South Korea)
### Technologies
- Vue.js (frontend)
- Flask (backend)
- PyTorch (machine learning library)
- Unet++ (neural network)
### Structure
```Python
.
├── app
│ └── unet # UNET++ files
...
├── src # Vue.js frontend
└── app.py # Flask backend
...
```
### Demo

Check the [presentation](https://docs.google.com/presentation/d/e/2PACX-1vQblQ-zYomu3_cA2DgpTf8T95ekNDYvFl-_1eSlZwlufQGqlIUAByPfBlGKA0XYTljTGVOzCoKzH4m2/pub?start=false&loop=false&delayms=3000)
### How to use
0. Install the CUDA Toolkit and cuDNN.
I personally use [CUDA 10.0](https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exenetwork) and [cuDNN 7.6.4](https://developer.nvidia.com/rdp/cudnn-archive).
During CUDA Toolkit installation I recommend choosing `Custom installation` and disabling all components except CUDA (also disable `Visual Studio Integration` in the CUDA component tree first) to avoid overwriting newer drivers with older ones (since we're going to install an older version of CUDA) and to minimize possible problems (especially with `Visual Studio Integration`).
Don't forget to add cuDNN to the `%PATH%` variable.
1. Set up your environment. I recommend using Anaconda for this, since we're doing some machine learning:
```bash
# create conda env with Python 3.7
$ conda create -n junctionx python=3.7
# activate it
$ conda activate junctionx
# install all dependencies
$ pip install -r requirements.txt
```
2. Install Node.JS modules:
```bash
$ npm install
```
3. Run backend:
```bash
$ python app.py
```
It usually runs on `http://127.0.0.1:5000/` (basically only needed for the API)
4. Run frontend (you need second terminal):
```bash
$ npm run serve
```
It usually runs on `http://127.0.0.1:8080/` (open it in your browser)
5. Upload images to the neural net and get results
5.1. Go to the `Upload` page
5.2. Upload two `650x650` images: one before and one after
5.3. Click the `Upload` button
5.4. Get the result and download it by clicking the `Download` button
### FAQ
> **What Python do you use for this project?**
Python 3.7 (since we're using PyTorch)
> **How to set up conda for IntelliJ IDEA?**
`File` -> `Settings` -> `Project` -> `Project Interpreter` -> `Add` -> Pick a new conda environment or use an existing one
> **Why recognition is so inaccurate?**
We didn't have much time during hackathon, so we trained it only on 22 images.
---
title: ListView and the Activity Lifecycle
ms.prod: xamarin
ms.assetid: 40840D03-6074-30A2-74DA-3664703E3367
ms.technology: xamarin-android
author: mgmclemore
ms.author: mamcle
ms.date: 02/06/2018
ms.openlocfilehash: 6e15fb8796ae6a616c5eae44059caae3d9478aef
ms.sourcegitcommit: 945df041e2180cb20af08b83cc703ecd1aedc6b0
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/04/2018
ms.locfileid: "30764220"
---
# <a name="listview-and-the-activity-lifecycle"></a>ListView and the Activity Lifecycle
Activities go through certain states as the application runs, such as starting, running, being paused, and being stopped. For more information and specific guidelines on handling state transitions, see the [Activity Lifecycle tutorial](~/android/app-fundamentals/activity-lifecycle/index.md).
It is important to understand the activity lifecycle and to place your `ListView` code in the correct locations.
All of the examples in this document perform "setup" tasks in the Activity's `OnCreate` method and (when required) perform "teardown" in `OnDestroy`. The examples generally use small data sets that do not change, so frequently reloading the data is unnecessary.
However, if your data changes frequently or uses a lot of memory, it may be appropriate to use other lifecycle methods to populate and refresh your `ListView`. For example, if the underlying data is constantly changing (or may be affected by updates from other activities), then creating the adapter in `OnStart` or `OnResume` ensures that the latest data is shown each time the activity is displayed.
If the adapter uses resources such as memory or a managed cursor, remember to release those resources in the method complementary to the one where they were instantiated (for example, objects created in `OnStart` can be released in `OnStop`).
## <a name="configuration-changes"></a>Configuration Changes
It is important to keep in mind that configuration changes (especially screen rotation and keyboard visibility) can cause the current activity to be destroyed and re-created (unless you specify otherwise with the `ConfigurationChanges` attribute). This means that under normal conditions, rotating a device causes a `ListView` and `Adapter` to be re-created, and (unless you have written code in `OnPause` and `OnResume`) the scroll position and row selection states will be lost.
The following attribute would prevent an activity from being destroyed and re-created as a result of configuration changes:
```csharp
[Activity(ConfigurationChanges="keyboardHidden|orientation")]
```
Die Aktivität sollte dann überschreiben `OnConfigurationChanged` auf diese Änderungen entsprechend reagieren. Weitere Informationen zum Behandeln von konfigurationsänderungen finden Sie in der Dokumentation.
| 79.3 | 551 | 0.819672 | deu_Latn | 0.997768 |
e93b144a75ea03882df2c84f9f2df23b29ea61a6 | 3,384 | md | Markdown | README.md | squeek502/d2itemreader | 026bfad9bed9ae2e85d0ebf5d420cb2ab3b45cb7 | [
"Unlicense"
] | 27 | 2018-06-23T06:48:24.000Z | 2022-02-07T03:21:39.000Z | README.md | squeek502/d2itemreader | 026bfad9bed9ae2e85d0ebf5d420cb2ab3b45cb7 | [
"Unlicense"
] | 8 | 2019-01-21T23:58:11.000Z | 2021-09-25T05:22:11.000Z | README.md | squeek502/d2itemreader | 026bfad9bed9ae2e85d0ebf5d420cb2ab3b45cb7 | [
"Unlicense"
] | 1 | 2021-10-01T23:51:06.000Z | 2021-10-01T23:51:06.000Z | d2itemreader
============
[](https://github.com/squeek502/d2itemreader/actions/workflows/ci.yml)
[](https://ci.appveyor.com/project/squeek502/d2itemreader/branch/master)
**work in progress, everything is subject to change**
d2itemreader is a C library for parsing Diablo II character/stash files (`.d2s`, `.d2x`, and `.sss`) and retrieving data about the items contained inside them. It also tries to avoid any assumptions about the game version or game data, so that it can work with modded files (provided the library is initialized with the relevant modded .txt files on startup).
## Usage
Most API functions in d2itemreader.h work in the following way:
- There is a `<struct>_parse` function that takes a pointer to a struct and returns a `d2err` enum.
+ If the function returns `D2ERR_OK`, then the function succeeded and the struct will need to be cleaned up using the corresponding `_destroy` function.
+ If the function returns anything other than `D2ERR_OK`, then the `_destroy` function *does not* need to be called; any allocated memory is cleaned up by the `_parse` or `_init` function before it returns an error.
+ The `out_bytesRead` parameter will always be set regardless of the result of the `_parse` function. On failure, it will contain the number of bytes read before the error occured.
On program startup, you will need to initialize a `d2gamedata` struct with the data from some of Diablo II's `.txt` files found in its `.mpq` archives. For convenience, `d2itemreader` bundles the relevant data from the latest `.txt` files (1.14d), which can be loaded by calling:
```c
d2gamedata gameData;
d2err err = d2gamedata_init_default(&gameData);
```
If the `d2gamedata_init` function returns `D2ERR_OK`, the following function should be called on shutdown (or when you're done using the d2itemreader library):
```c
d2gamedata_destroy(&gameData);
```
After the `d2gamedata_init` function is called, you can parse files like so:
```c
const char *filename = "path/to/file";
// determine the filetype if it is not known in advance
enum d2filetype filetype = d2filetype_of_file(filename);
if (filetype != D2FILETYPE_D2_CHARACTER)
{
fprintf(stderr, "File is not a d2 character file: %s\n", filename);
return;
}
size_t bytesRead;
d2char character;
d2err err = d2char_parse_file(filename, &character, &gameData, &bytesRead);
if (err != D2ERR_OK)
{
fprintf(stderr, "Failed to parse %s: %s at byte 0x%zx\n", filename, d2err_str(err), bytesRead);
// don't need to call d2char_destroy, the memory is cleaned up when _parse returns an error
}
else
{
// do something with the character data
int numUniques = 0;
for (int i=0; i<character.items.count; i++)
{
d2item* item = &character.items.items[i];
if (item->rarity == D2RARITY_UNIQUE)
{
numUniques++;
}
}
printf("Number of unique items in %s: %d", filename, numUniques);
// clean up the memory allocated when parsing the character file
d2char_destroy(&character);
}
```
## Bindings
- Lua: [lua-d2itemreader](https://github.com/squeek502/lua-d2itemreader)
## Acknowledgements
- [nokka/d2s](https://github.com/nokka/d2s) - much of the d2s parsing of d2itemreader is ported from `nokka/d2s`
| 40.771084 | 359 | 0.748227 | eng_Latn | 0.976175 |
e93b231b4787327a6667e5ab11fb5142db2f9b93 | 2,755 | md | Markdown | _posts/2014-08-26-.md | pkok/pkok.github.io | 54f0178936b47299bb5bfe8060fdf7693ddf7a41 | [
"MIT"
] | null | null | null | _posts/2014-08-26-.md | pkok/pkok.github.io | 54f0178936b47299bb5bfe8060fdf7693ddf7a41 | [
"MIT"
] | 1 | 2020-07-19T10:16:50.000Z | 2020-07-19T10:16:50.000Z | _posts/2014-08-26-.md | pkok/pkok.github.io | 54f0178936b47299bb5bfe8060fdf7693ddf7a41 | [
"MIT"
] | null | null | null | ---
layout: post
categories:
- thesis
---
I've tried to fix the previous problem (not fixed), but I don't understand what is going wrong.
I recorded a second data set with a normal chess board. Then I attempted to create a ROS bag of it. `kalibr_bagcreater` raised some `ValueError`s, because filenames were too short (recorded too shortly after clock start). When I finally got a new bag, it still gave the same error:
```bash
$ ./kalibr_calibrate_cameras --bag ../../camera_calibration.bag --topics /cam0/image_raw /cam1/image_raw --models pinhole-radtan pinhole-radtan --target ../../config/calibration_target/checkerboard.yaml
importing libraries
Initializing cam0:
Camera model: pinhole-radtan
Dataset: ../../camera_calibration.bag
Topic: /cam0/image_raw
Number of images: 146
Extracting calibration target corners
Process Process-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/home/pkok/thesis/calibration_kits/kalibr/src/aslam_offline_calibration/kalibr/python/kalibr_common/TargetExtractor.py", line 22, in multicoreExtractionWrapper
success, obs = detector.findTargetNoTransformation(stamp, np.array(image))
TypeError: Conversion is only valid for arrays with 1 or 2 dimensions. Argument has 3 dimensions
[FATAL] [1409057113.069846]: No corners could be extracted for camera /cam0/image_raw! Check the calibration target configuration and dataset.
Traceback (most recent call last):
File "./kalibr_calibrate_cameras", line 5, in <module>
exec(fh.read())
File "<string>", line 444, in <module>
File "<string>", line 182, in main
File "/home/pkok/thesis/calibration_kits/kalibr/src/aslam_offline_calibration/kalibr/python/kalibr_camera_calibration/CameraCalibrator.py", line 56, in initGeometryFromObservations
success = self.geometry.initializeIntrinsics(observations)
RuntimeError: [Exception] /home/pkok/thesis/calibration_kits/kalibr/src/aslam_cv/aslam_cameras/include/aslam/cameras/implementation/PinholeProjection.hpp:713: initializeIntrinsics() assert(observations.size() != 0) failed: Need min. one observation
```
Which is strange, because the board is visible on the photos. Problem is still not fixed. Argh!
| 67.195122 | 284 | 0.671506 | eng_Latn | 0.741398 |
e93bb275c64e32b022d3ffd4e5fdee3ea86926fc | 661 | md | Markdown | README.md | randyviandaputra/quran-offline | 0138c0141145b1dbd1216d1cddbb5bbb7a40a836 | [
"MIT"
] | 1 | 2019-09-06T02:58:06.000Z | 2019-09-06T02:58:06.000Z | README.md | maulayyacyber/quran-offline | 0138c0141145b1dbd1216d1cddbb5bbb7a40a836 | [
"MIT"
] | null | null | null | README.md | maulayyacyber/quran-offline | 0138c0141145b1dbd1216d1cddbb5bbb7a40a836 | [
"MIT"
] | 1 | 2020-01-08T05:04:42.000Z | 2020-01-08T05:04:42.000Z | # quran-offline
📖 Read Qur'an Anywhere, Directly from Your Browser, No Need Installing Apps Anymore
[](https://travis-ci.org/mazipan/quran-offline)
## Live Website
[https://quran-offline.netlify.com/](https://quran-offline.netlify.com/)
## Build Setup
``` bash
# install dependencies
$ yarn install
# serve with hot reload at localhost:3000
$ yarn run dev
# generate static project
$ yarn run generate
```
## Credit
Thanks for the awesome repo [quran-json](https://github.com/rioastamal/quran-json) by [@rioastamal](https://github.com/rioastamal)
----
Copyright © 2018 by Irfan Maulana
| 20.65625 | 126 | 0.732224 | eng_Latn | 0.434479 |
e93cfd8b82ed0c3899a2fb4e65c1c8907bbdfd69 | 21,095 | markdown | Markdown | content/pages/examples/django/django-code-examples.markdown | kwhinnery/fullstackpython.com | 694e16f32c9dcdb7c9127cbb6cbf67fc351377b9 | [
"MIT"
] | null | null | null | content/pages/examples/django/django-code-examples.markdown | kwhinnery/fullstackpython.com | 694e16f32c9dcdb7c9127cbb6cbf67fc351377b9 | [
"MIT"
] | null | null | null | content/pages/examples/django/django-code-examples.markdown | kwhinnery/fullstackpython.com | 694e16f32c9dcdb7c9127cbb6cbf67fc351377b9 | [
"MIT"
] | null | null | null | title: Django Code Examples
category: page
slug: django-code-examples
sortorder: 50000
toc: False
sidebartitle: Django Code Examples
meta: Python code examples that show how to use the Django web application framework for many different situations.
[Django](/django.html) is a Python [web framework](/web-frameworks.html).
<a href="http://www.djangoproject.com/" style="border: none;"><img src="/img/logos/django.png" width="100%" alt="Official Django logo. Trademark Django Software Foundation." class="shot" style="margin-top:20px"></a>
## Django Example Projects
Part of Django's widespread adoption comes from its broad ecosystem of
open source code libraries and example projects.
It's good to familiarize yourself with the following projects to
learn what is available to you beyond the extensive
"[batteries-included](https://www.quora.com/Why-does-Django-tout-itself-as-a-batteries-included-web-framework-when-you-have-to-manually-write-regexes-to-do-URL-routing)"
code base.
These projects, ordered alphabetically, are also helpful as example
code for how to build your own applications.
### AuditLog
[Auditlog](https://github.com/jjkester/django-auditlog)
([project documentation](https://django-auditlog.readthedocs.io/en/latest/))
is a [Django](/django.html) app that logs changes to Python objects,
similar to the Django admin's logs but with more details and
output formats. Auditlog's source code is provided as open source under the
[MIT license](https://github.com/jjkester/django-auditlog/blob/master/LICENSE).
Example code found in the AuditLog project:
* [django.apps.config AppConfig](/django-apps-config-appconfig-examples.html)
* [django.contrib.admin.filters SimpleListFilter](/django-contrib-admin-filters-simplelistfilter-examples.html)
* [django.contrib.admin.sites.register](/django-contrib-admin-sites-register-examples.html)
* [django.db.models DateField](/django-db-models-datefield-examples.html)
* [django.db.models DateTimeField](/django-db-models-datetimefield-examples.html)
* [django.db.models IntegerField](/django-db-models-integerfield-examples.html)
* [django.utils.html format_html](/django-utils-html-format-html-examples.html)
### dccnsys
[dccnsys](https://github.com/dccnconf/dccnsys) is a conference registration
system built with [Django](/django.html). The code is open source under the
[MIT license](https://github.com/dccnconf/dccnsys/blob/master/LICENSE).
dccnsys is shown on the following code example pages:
* [django.apps.config AppConfig](/django-apps-config-appconfig-examples.html)
* [django.contrib.auth get_user_model](/django-contrib-auth-get-user-model-examples.html)
* [django.contrib.auth.decorators login_required](/django-contrib-auth-decorators-login-required-examples.html)
* [django.db.models DateField](/django-db-models-datefield-examples.html)
* [django.db.models IntegerField](/django-db-models-integerfield-examples.html)
* [django.http HttpResponseForbidden](/django-http-httpresponseforbidden-examples.html)
* [django.urls.path](/django-urls-path-examples.html)
### django-allauth
[django-allauth](https://github.com/pennersr/django-allauth)
([project website](https://www.intenct.nl/projects/django-allauth/)) is a
[Django](/django.html) library for easily adding local and social authentication
flows to Django projects. It is open source under the
[MIT License](https://github.com/pennersr/django-allauth/blob/master/LICENSE).
Code used for examples from the django-allauth project:
* [django.apps.config AppConfig](/django-apps-config-appconfig-examples.html)
* [django.conf.urls.url](/django-conf-urls-url-examples.html)
* [django.contrib.admin.sites.register](/django-contrib-admin-sites-register-examples.html)
* [django.forms](/django-forms-examples.html)
### django-angular
[django-angular](https://github.com/jrief/django-angular)
([project examples website](https://django-angular.awesto.com/classic_form/))
is a library with helper code to make it easier to use
[Angular](/angular.html) as the front-end to [Django](/django.html) projects.
The code for django-angular is
[open source under the MIT license](https://github.com/jrief/django-angular/blob/master/LICENSE.txt).
Code from django-angular is shown on:
* [django.conf.urls url](/django-conf-urls-url-examples.html)
* [django.conf settings](/django-conf-settings-examples.html)
* [django.http HttpResponseBadRequest](/django-http-httpresponsebadrequest-examples.html)
* [django.http HttpResponseForbidden](/django-http-httpresponseforbidden-examples.html)
* [django.http HttpResponsePermanentRedirect](/django-http-responses-httpresponsepermanentredirect-examples.html)
* [django.utils.html format_html](/django-utils-html-format-html-examples.html)
* [django.urls.exceptions NoReverseMatch](/django-urls-exceptions-noreversematch-examples.html)
### django-axes
[django-axes](https://github.com/jazzband/django-axes/)
([project documentation](https://django-axes.readthedocs.io/en/latest/)
and
[PyPI package information](https://pypi.org/project/django-axes/))
is a code library for [Django](/django.html) projects to track failed
login attempts against a web application. The goal of the project is
to make it easier for you to stop people and scripts from hacking your
Django-powered website.
The code for django-axes is
[open source under the MIT license](https://github.com/jazzband/django-axes/blob/master/LICENSE)
and maintained by the group of developers known as
[Jazzband](https://jazzband.co/).
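As an illustration of how a library like this is typically enabled, here is a minimal `settings.py` fragment. The app, backend, middleware, and setting names below are taken from the django-axes documentation as I understand it, and exact names vary between versions, so treat this as a hedged sketch rather than definitive wiring:

```python
# settings.py fragment (sketch): wiring django-axes into a project
INSTALLED_APPS = [
    "django.contrib.auth",
    "axes",  # the django-axes app
]
AUTHENTICATION_BACKENDS = [
    "axes.backends.AxesBackend",  # listed first so axes sees login failures
    "django.contrib.auth.backends.ModelBackend",
]
MIDDLEWARE = [
    "axes.middleware.AxesMiddleware",
]
AXES_FAILURE_LIMIT = 5  # lock out after five failed attempts
AXES_COOLOFF_TIME = 1   # hours before a locked-out client may try again
```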
### django-cors-headers
[django-cors-headers](https://github.com/ottoyiu/django-cors-headers) is
an
[open source](https://github.com/ottoyiu/django-cors-headers/blob/master/LICENSE)
library for enabling
[Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS)
handling in your [Django](/django.html) web applications and appropriately
dealing with HTTP headers for CORS requests.
Code examples from the django-cors-headers project:
* [django.conf settings](/django-conf-settings-examples.html)
* [django.dispatch Signal](/django-dispatch-dispatcher-signal-examples.html)
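A minimal sketch of the wiring described in the django-cors-headers README (the app label, middleware path, and `CORS_ALLOWED_ORIGINS` name are assumptions based on recent versions; older releases used `CORS_ORIGIN_WHITELIST`):

```python
# settings.py fragment (sketch): wiring django-cors-headers into a project
INSTALLED_APPS = [
    "corsheaders",
]
MIDDLEWARE = [
    # CorsMiddleware should sit as high as possible, before any
    # middleware that can generate responses of its own
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
]
CORS_ALLOWED_ORIGINS = [
    "https://example.com",  # hypothetical front-end origin
]
```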
### django-cms
[django-cms](https://github.com/divio/django-cms)
([project website](https://www.django-cms.org/en/)) is a Python-based
content management system (CMS) [library](https://pypi.org/project/django-cms/)
for use with Django web apps that is open sourced under the
[BSD 3-Clause "New" License](https://github.com/divio/django-cms/blob/develop/LICENSE).
Example code from django-cms:
* [django.conf.urls url](/django-conf-urls-url-examples.html)
* [django.contrib.admin.sites.register](/django-contrib-admin-sites-register-examples.html)
* [django.db OperationalError](/django-db-operationalerror-examples.html)
* [django.db.models Model](/django-db-models-model-examples.html)
* [django.http HttpResponseBadRequest](/django-http-httpresponsebadrequest-examples.html)
* [django.http HttpResponseForbidden](/django-http-httpresponseforbidden-examples.html)
* [django.template.response TemplateResponse](/django-template-response-templateresponse-examples.html)
* [django.utils timezone](/django-utils-timezone-examples.html)
### django-debug-toolbar
[django-debug-toolbar](https://github.com/jazzband/django-debug-toolbar)
([project documentation](https://github.com/jazzband/django-debug-toolbar)
and [PyPI page](https://pypi.org/project/django-debug-toolbar/))
grants a developer detailed request-response cycle information while
developing a [Django](/django.html) web application.
The code for django-debug-toolbar is
[open source](https://github.com/jazzband/django-debug-toolbar/blob/master/LICENSE)
and maintained by the developer community group known as
[Jazzband](https://jazzband.co/).
### django-easy-timezones
[django-easy-timezones](https://github.com/Miserlou/django-easy-timezones)
([project website](https://www.gun.io/blog/django-easy-timezones))
is a Django
[middleware](https://docs.djangoproject.com/en/stable/topics/http/middleware/)
[code library](https://pypi.org/project/django-easy-timezones/)
to simplify handling time data in your applications using
users' geolocation data.
Useful example code found within django-easy-timezones:
* [django.conf settings](/django-conf-settings-examples.html)
* [django.dispatch Signal](/django-dispatch-dispatcher-signal-examples.html)
* [django.utils.timezone](/django-utils-timezone-examples.html)
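A sketch of the setup its README describes — install the middleware and point it at a GeoIP database (the middleware path and setting names are assumptions from the project docs, and the database path is hypothetical):

```python
# settings.py fragment (sketch): wiring django-easy-timezones
MIDDLEWARE = [
    "easy_timezones.middleware.EasyTimezoneMiddleware",
]
TIME_ZONE = "UTC"
USE_TZ = True  # store datetimes in UTC; render them in each user's zone
GEOIP_DATABASE = "/path/to/GeoLiteCity.dat"  # hypothetical path
```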
### django-extensions
[django-extensions](https://github.com/django-extensions/django-extensions)
([project documentation](https://django-extensions.readthedocs.io/en/latest/)
and [PyPI page](https://pypi.org/project/django-extensions/))
is a [Django](/django.html) project that adds a bunch of additional
useful commands to the `manage.py` interface. This
[GoDjango video](https://www.youtube.com/watch?v=1F6G3ONhr4k) provides a
quick overview of what you get when you install it into your Python
environment.
The django-extensions project is open sourced under the
[MIT license](https://github.com/django-extensions/django-extensions/blob/master/LICENSE).
### django-filer
[django-filer](https://github.com/divio/django-filer)
([project documentation](https://django-filer.readthedocs.io/en/latest/))
is a file management library for uploading and organizing files and images
in Django's admin interface. The project's code is available under the
[BSD 3-Clause "New" or "Revised" open source license](https://github.com/divio/django-filer/blob/develop/LICENSE.txt).
Code from django-filer can be found on these pages:
* [django.conf settings](/django-conf-settings-examples.html)
* [django.contrib.admin](/django-contrib-admin-examples.html)
* [django.contrib.admin.sites.register](/django-contrib-admin-sites-register-examples.html)
* [django.core.management.base BaseCommand](/django-core-management-base-basecommand-examples.html)
* [django.http HttpResponseBadRequest](/django-http-httpresponsebadrequest-examples.html)
### django-floppyforms
[django-floppyforms](https://github.com/jazzband/django-floppyforms)
([project documentation](https://django-floppyforms.readthedocs.io/en/latest/)
and
[PyPI page](https://pypi.org/project/django-floppyforms/))
is a [Django](/django.html) code library for better control
over rendering HTML forms in your [templates](/template-engines.html).
The django-floppyforms code is provided as
[open source](https://github.com/jazzband/django-floppyforms/blob/master/LICENSE)
and maintained by the collaborative developer community group
[Jazzband](https://jazzband.co/).
Code from django-floppyforms is used as examples for the following parts of
Django:
* [django.db.models DateField](/django-db-models-datefield-examples.html)
### django-haystack
[django-haystack](https://github.com/django-haystack/django-haystack)
([project website](http://haystacksearch.org/) and
[PyPI page](https://pypi.org/project/django-haystack/))
is a search abstraction layer that separates the Python search code
in a [Django](/django.html) web application from the search engine
implementation that it runs on, such as
[Apache Solr](http://lucene.apache.org/solr/),
[Elasticsearch](https://www.elastic.co/)
or [Whoosh](https://whoosh.readthedocs.io/en/latest/intro.html).
The django-haystack project is open source under the
[BSD license](https://github.com/django-haystack/django-haystack/blob/master/LICENSE).
### django-jet
[django-jet](https://github.com/geex-arts/django-jet)
([project documentation](https://jet.readthedocs.io/en/latest/),
[PyPI project page](https://pypi.org/project/django-jet/) and
[more information](http://jet.geex-arts.com/))
is a fancy [Django](/django.html) Admin panel replacement.
The django-jet project is open source under the
[GNU Affero General Public License v3.0](https://github.com/geex-arts/django-jet/blob/dev/LICENSE).
### django-jsonfield
[django-jsonfield](https://github.com/dmkoch/django-jsonfield)
([jsonfield on PyPi](https://pypi.org/project/jsonfield/)) is a
[Django](/django.html) code library that makes it easier to store validated
JSON in a [Django object-relational mapper (ORM)](/django-orm.html) database
model.
The django-jsonfield project is open source under the
[MIT license](https://github.com/dmkoch/django-jsonfield/blob/master/LICENSE).
### django-model-utils
[django-model-utils](https://github.com/jazzband/django-model-utils)
([project documentation](https://django-model-utils.readthedocs.io/en/latest/)
and
[PyPI package information](https://pypi.org/project/django-model-utils/))
provides useful mixins and utilities for working with
[Django ORM](/django-orm.html) models in your projects.
The django-model-utils project is open sourced under the
[BSD 3-Clause "New" or "Revised" License](https://github.com/jazzband/django-model-utils/blob/master/LICENSE.txt)
and it is maintained by the developer community group
[Jazzband](https://jazzband.co/).
### django-mongonaut
[django-mongonaut](https://github.com/jazzband/django-mongonaut)
([project documentation](https://django-mongonaut.readthedocs.io/en/latest/)
and
[PyPI package information](https://pypi.org/project/django-mongonaut/))
provides an introspective interface for working with
[MongoDB](/mongodb.html) via mongoengine. The project has its own new code
to map MongoDB to the [Django](/django.html) Admin interface.
django-mongonaut's highlighted features include:
* Automatic introspection of mongoengine documents
* The ability to constrain who sees what and what they can do
* Full control for adding, editing and deleting documents
The django-mongonaut project is open sourced under the
[MIT License](https://github.com/jazzband/django-mongonaut/blob/master/LICENSE.txt)
and it is maintained by the developer community group
[Jazzband](https://jazzband.co/).
### django-oauth-toolkit
[django-oauth-toolkit](https://github.com/jazzband/django-oauth-toolkit)
([project website](http://dot.evonove.it/)
and
[PyPI package information](https://pypi.org/project/django-oauth-toolkit/1.2.0/))
is a code library for adding and handling [OAuth2](https://oauth.net/)
flows within your [Django](/django.html) web application and
[API](/application-programming-interfaces.html).
The django-oauth-toolkit project is open sourced under the
[FreeBSD license](https://github.com/jazzband/django-oauth-toolkit/blob/master/LICENSE)
and it is maintained by the developer community group
[Jazzband](https://jazzband.co/).
Code examples provided by django-oauth-toolkit:
* [django.http HttpResponseForbidden](/django-http-httpresponseforbidden-examples.html)
### django-oscar
[django-oscar](https://github.com/django-oscar/django-oscar/)
([project website](http://oscarcommerce.com/))
is a framework for building e-commerce sites on top of
[Django](/django.html). The code for the project is available open
source under a
[custom license written by Tangent Communications PLC](https://github.com/django-oscar/django-oscar/blob/master/LICENSE).
Further code examples from django-oscar:
* [django.contrib.admin](/django-contrib-admin-examples.html)
* [django.contrib.auth.decorators login_required](/django-contrib-auth-decorators-login-required-examples.html)
### django-pipeline
[django-pipeline](https://github.com/jazzband/django-pipeline)
([project documentation](https://django-pipeline.readthedocs.io/en/latest/)
and
[PyPI package information](https://pypi.org/project/django-pipeline/))
is a code library for handling and compressing
[static content assets](/static-content.html) when handling requests in
[Django](/django.html) web applications.
The django-pipeline project is open sourced under the
[MIT License](https://github.com/jazzband/django-pipeline/blob/master/LICENSE)
and it is maintained by the developer community group
[Jazzband](https://jazzband.co/).
### django-push-notifications
[django-push-notifications](https://github.com/jazzband/django-push-notifications)
is a [Django](/django.html) app for storing and interacting with
push notification services such as
[Google's Firebase Cloud Messaging](https://firebase.google.com/docs/cloud-messaging/)
and
[Apple Notifications](https://developer.apple.com/notifications/).
The django-push-notifications project's source code is available
open source under the
[MIT license](https://github.com/jazzband/django-push-notifications/blob/master/LICENSE).
* [django.db.models Model](/django-db-models-model-examples.html)
* [django.db.models BooleanField](/django-db-models-booleanfield-examples.html)
* [django.db.models CharField](/django-db-models-charfield-examples.html)
* [django.db.models DateTimeField](/django-db-models-datetimefield-examples.html)
### django-smithy
[django-smithy](https://github.com/jamiecounsell/django-smithy) is
a [Django](/django.html) code library that allows users to send
HTTP requests from the Django admin user interface. The code for
the project is open source under the
[MIT license](https://github.com/jamiecounsell/django-smithy/blob/master/LICENSE).
Code examples from django-smithy are shown on the following pages:
* [django.utils timezone](/django-utils-timezone-examples.html)
* [django.db.models CharField](/django-db-models-charfield-examples.html)
* [django.db.models TextField](/django-db-models-textfield-examples.html)
### django-taggit
[django-taggit](https://github.com/jazzband/django-taggit)
([project documentation](https://django-taggit.readthedocs.io/) and
[PyPI page](https://pypi.org/project/django-taggit/)) provides a way
to create, store, manage and use tags in a [Django](/django.html) project.
The code for django-taggit is
[open source](https://github.com/jazzband/django-taggit/blob/master/LICENSE)
and maintained by the collaborative developer community group
[Jazzband](https://jazzband.co/).
### drf-action-serializer
[drf-action-serializer](https://github.com/gregschmit/drf-action-serializer)
([PyPI page](https://pypi.org/project/drf-action-serializer/))
is an extension for [Django REST Framework](/django-rest-framework-drf.html)
that makes it easier to configure specific serializers to use based on the
client's request action. For example, a list view should have one serializer
whereas the detail view would have a different serializer.
The project is open source under the
[MIT license](https://github.com/gregschmit/drf-action-serializer/blob/master/LICENSE).
There are code examples from the drf-action-serializer project on the
following pages:
* [django.urls.path](/django-urls-path-examples.html)
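The action-to-serializer dispatch that drf-action-serializer automates can be sketched framework-free. In DRF this logic would normally live in a viewset's `get_serializer_class()`; the serializer class names here are hypothetical:

```python
class ListPersonSerializer:
    """Trimmed representation for list views (hypothetical)."""

class DetailPersonSerializer:
    """Full representation for detail views (hypothetical)."""

# map each request action to the serializer that should handle it
ACTION_SERIALIZERS = {
    "list": ListPersonSerializer,
    "retrieve": DetailPersonSerializer,
}

def get_serializer_class(action, default=DetailPersonSerializer):
    # fall back to the default when no per-action override is registered
    return ACTION_SERIALIZERS.get(action, default)

print(get_serializer_class("list").__name__)  # ListPersonSerializer
```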
### gadget-board
[gadget-board](https://github.com/mik4el/gadget-board) is a
[Django](/django.html),
[Django REST Framework (DRF)](/django-rest-framework-drf.html) and
[Angular](/angular.html) web application that is open source under the
[Apache2 license](https://github.com/mik4el/gadget-board/blob/master/LICENSE).
Additional example code found within gadget-board:
* [django.apps.config AppConfig](/django-apps-config-appconfig-examples.html)
* [django.conf.urls url](/django-conf-urls-url-examples.html)
* [django.contrib admin](/django-contrib-admin-examples.html)
* [django.contrib.auth.hashers make_password](/django-contrib-auth-hashers-make-password-examples.html)
### jazzband
[jazzband](https://github.com/jazzband/website) is a
[Django](/django.html)-based web application that runs a website with
information on many Django projects such as
[django-debug-toolbar](https://github.com/jazzband/django-debug-toolbar)
and [django-taggit](https://github.com/jazzband/django-taggit).
The project's code is provided as open source under the
[MIT license](https://github.com/jazzband/website/blob/master/LICENSE).
### register
[register](https://github.com/ORGAN-IZE/register) is a [Django](/django.html),
[Bootstrap](/bootstrap-css.html), [PostgreSQL](/postgresql.html) project that is
open source under the
[GNU General Public License v3.0](https://github.com/ORGAN-IZE/register/blob/master/LICENSE).
This web application makes it easier for people to register as organ donors.
You can see the application live at
[https://register.organize.org/](https://register.organize.org/).
Useful example code from register can be found on:
* [django.conf.urls url](/django-conf-urls-url-examples.html)
### wagtail
[wagtail](https://github.com/wagtail/wagtail)
([project website](https://wagtail.io/)) is a fantastic
[Django](/django.html)-based CMS with code that is open source
under the
[BSD 3-Clause "New" or "Revised" License](https://github.com/wagtail/wagtail/blob/master/LICENSE).
Example code from wagtail shown on these pages:
* [django.conf.urls url](/django-conf-urls-url-examples.html)
* [django.contrib.admin.sites.register](/django-contrib-admin-sites-register-examples.html)
* [django.db.models DateField](/django-db-models-datefield-examples.html)
* [django.db.models IntegerField](/django-db-models-integerfield-examples.html)
* [django.http HttpResponseNotModified](/django-http-httpresponsenotmodified-examples.html)
* [django.http Http404](/django-http-http404-examples.html)
* [django.template.response TemplateResponse](/django-template-response-templateresponse-examples.html)
| 45.858696 | 215 | 0.780849 | eng_Latn | 0.446848 |
e93d02aef444de14d6729dc80b12cdbca7f1d64e | 1,554 | md | Markdown | docs/extensibility/debugger/reference/idebugproperty2-getreference.md | ManuSquall/visualstudio-docs.fr-fr | 87f0072eb292673de4a102be704162619838365f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-08-15T11:25:55.000Z | 2021-08-15T11:25:55.000Z | docs/extensibility/debugger/reference/idebugproperty2-getreference.md | ManuSquall/visualstudio-docs.fr-fr | 87f0072eb292673de4a102be704162619838365f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugproperty2-getreference.md | ManuSquall/visualstudio-docs.fr-fr | 87f0072eb292673de4a102be704162619838365f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Returns a reference to the value of the property.
title: 'IDebugProperty2::GetReference | Microsoft Docs'
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- IDebugProperty2::GetReference
helpviewer_keywords:
- IDebugProperty2::GetReference method
ms.assetid: 2fa97d9b-c3d7-478e-ba5a-a933f40a0103
author: leslierichardson95
ms.author: lerich
manager: jmartens
ms.workload:
- vssdk
dev_langs:
- CPP
- CSharp
ms.openlocfilehash: cc8a922ad29b7f6b3ecff57ee5df7ad0e7dded1d
ms.sourcegitcommit: f2916d8fd296b92cc402597d1d1eecda4f6cccbf
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 03/25/2021
ms.locfileid: "105064759"
---
# <a name="idebugproperty2getreference"></a>IDebugProperty2::GetReference
Retourne une référence à la valeur de la propriété.
## <a name="syntax"></a>Syntaxe
```cpp
HRESULT GetReference(
IDebugReference2** ppReference
);
```
```csharp
int GetReference(
out IDebugReference2 ppReference
);
```
## <a name="parameters"></a>Paramètres
`ppRererence`\
à Retourne un objet [IDebugReference2](../../../extensibility/debugger/reference/idebugreference2.md) représentant une référence à la valeur de la propriété.
## <a name="return-value"></a>Valeur renvoyée
En cas de réussite, retourne `S_OK` ; sinon, retourne un code d’erreur, en général `E_NOTIMPL` ou `E_GETREFERENCE_NO_REFERENCE` .
## <a name="see-also"></a>Voir aussi
- [IDebugProperty2](../../../extensibility/debugger/reference/idebugproperty2.md)
- [IDebugReference2](../../../extensibility/debugger/reference/idebugreference2.md)
Aspell Dictionaries
===================
This project contains sources for some special dictionaries for [GNU
Aspell](http://aspell.net/).
The "ru-computer" dictionary contains Computer Science-related words
in Russian. For convenience, it also contains some useful English words,
such as C++ keywords and common abbreviations. It is not complete; words
are added whenever Aspell misses them in texts.
Building and Installing
-----------------------
To build these dictionaries, GNU Aspell must be installed. The Makefile
performs all required tasks.
To install, copy the files "ru-computer.dat" and "ru-computer.rws" to the
dictionary directory on the system, and update the file "ru.multi"
by adding the line:
add ru-computer.rws
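The install steps above can be scripted. The snippet below appends the `add` line idempotently; the `DICT_DIR` location is an assumption (it defaults to the current directory here so the snippet can be tried safely — find the real directory with `aspell config dict-dir`):

```shell
# Locate the Aspell dictionary directory (defaults to the current
# directory so the snippet can be exercised without root access).
DICT_DIR="${DICT_DIR:-.}"

# Copy the built dictionary files into place (uncomment on a real system):
# cp ru-computer.dat ru-computer.rws "$DICT_DIR"

# Append the "add" line to ru.multi only if it is not already present.
grep -qxF 'add ru-computer.rws' "$DICT_DIR/ru.multi" 2>/dev/null \
  || echo 'add ru-computer.rws' >> "$DICT_DIR/ru.multi"
```

Running the snippet twice leaves `ru.multi` with a single `add ru-computer.rws` line, so it is safe to re-run after rebuilding the dictionary.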
### [CVE-2020-11233](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11233)



### Description
Time-of-check time-of-use race condition While processing partition entries due to newly created buffer was read again from mmc without validation in Snapdragon Auto, Snapdragon Connectivity, Snapdragon Consumer IOT, Snapdragon Industrial IOT, Snapdragon Mobile, Snapdragon Voice & Music, Snapdragon Wearables
### POC
#### Reference
- https://www.qualcomm.com/company/product-security/bulletins/january-2021-bulletin
#### Github
- https://github.com/TinyNiko/android_bulletin_notes
# Neo Splendor
A functioning fork of [splendor, the markdown theme](https://markdowncss.github.io), that is possibly subject to continuous future alterations.
Feel free to fork and use.
# lfa_trabalhopratico2_20212
Practical Assignment 2 for the Formal Languages and Automata (LFA) course.
Members:
* João Vítor Silva Ferreira
Algorithm:
* CYK
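As a reference for the algorithm itself, a minimal CYK membership test over a grammar in Chomsky Normal Form can be sketched in Python as follows (an illustrative sketch, not the repository's actual implementation):

```python
def cyk(word, start, rules):
    """Return True if `word` is derivable from `start` under CNF `rules`.

    `rules` maps each non-terminal to a set of bodies: a 1-tuple holding
    a terminal symbol, or a 2-tuple of non-terminals (Chomsky Normal Form).
    """
    n = len(word)
    if n == 0:
        return False  # the empty word is not handled in this sketch
    # table[length][i] = set of non-terminals deriving word[i:i + length]
    table = [[set() for _ in range(n)] for _ in range(n + 1)]
    for i, symbol in enumerate(word):
        for head, bodies in rules.items():
            if (symbol,) in bodies:
                table[1][i].add(head)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for head, bodies in rules.items():
                    for body in bodies:
                        if (len(body) == 2
                                and body[0] in table[split][i]
                                and body[1] in table[length - split][i + split]):
                            table[length][i].add(head)
    return start in table[n][0]

# Example: CNF grammar for { a^n b^n : n >= 1 }
rules = {
    "S": {("A", "X"), ("A", "B")},
    "X": {("S", "B")},
    "A": {("a",)},
    "B": {("b",)},
}
print(cyk("aabb", "S", rules))  # True
print(cyk("abb", "S", rules))   # False
```

The `.cyk` grammar files used by this project are parsed into an equivalent rule table before running the same membership test.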
How to download and run?
Clone the repository.
```
git clone https://github.com/jjoaovitor7-unit/lfa_trabalhopratico2_20212.git
cd lfa_trabalhopratico2_20212/
```
<br />
(Terminal)
Create a `.cyk` file in `/files/` with the grammar in Chomsky Normal Form (CNF).
Change the file path in `main.py`:
`file = File(f"{path__current}{os.sep}files{os.sep}<file>.cyk")`
`python3 main.py`
Then type the word to be checked.
<br />
(Web)
Install the dependency *(flask)*.
`pip3 install flask`
Run the server.
`python3 server.py`
Upload the file and then type the word to be checked.
---
title: Visual Studio Test Agent workload and component IDs
titleSuffix: ''
description: Use Visual Studio workload and component IDs to run automated and load tests remotely
keywords: ''
author: ornellaalt
ms.author: ornella
manager: jillfra
ms.date: 08/05/2020
ms.topic: reference
helpviewer_keywords:
- workload ID, Visual Studio
- component ID, Visual Studio
- install Visual Studio, administrator guide
ms.assetid: 55aea29b-1066-4e5a-aa99-fc87d4efb6d5
ms.prod: visual-studio-windows
ms.technology: vs-installation
open_to_public_contributors: false
ms.openlocfilehash: afce5c9587a5b1a54e688197b0d8010920693d2d
ms.sourcegitcommit: 6cfffa72af599a9d667249caaaa411bb28ea69fd
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/02/2020
ms.locfileid: "87805680"
---
# <a name="visual-studio-test-agent-component-directory"></a>Visual Studio Test Agent component directory
[!INCLUDE[workloads-components-universal-header_md](includes/workloads-components-universal-header_md.md)]
::: moniker range="vs-2017"
[!INCLUDE[workloads-components-header-2017_md](includes/workloads-components-header-2017_md.md)]
[!include[Visual Studio Test Agent 2017](includes/vs-2017/workload-component-id-vs-test-agent.md)]
::: moniker-end
::: moniker range=">= vs-2019"
[!INCLUDE[workloads-components-header-2019_md](includes/workloads-components-header-2019_md.md)]
[!include[Visual Studio Test Agent 2019](includes/vs-2019/workload-component-id-vs-test-agent.md)]
::: moniker-end
[!INCLUDE[install_get_support_md](includes/install_get_support_md.md)]
## <a name="see-also"></a>See also
* [Visual Studio workload and component IDs](workload-and-component-ids.md)
* [Visual Studio administrator guide](visual-studio-administrator-guide.md)
* [Use command-line parameters to install Visual Studio](use-command-line-parameters-to-install-visual-studio.md)
* [Command-line parameter examples](command-line-parameter-examples.md)
* [Create an offline installation of Visual Studio](create-an-offline-installation-of-visual-studio.md)
---
title: Infrastructure and connectivity to SAP HANA on Azure (Large Instances) | Microsoft Docs
description: Configure the connectivity infrastructure required to use SAP HANA on Azure (Large Instances).
services: virtual-machines-linux
documentationcenter: ''
author: RicksterCDN
manager: gwallace
editor: ''
ms.service: virtual-machines-linux
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure
ms.date: 07/12/2019
ms.author: juergent
ms.custom: H1Hack27Feb2017
ms.openlocfilehash: 4fa0fe072fe98d565ad9d6f947540b7e1b039732
ms.sourcegitcommit: 44e85b95baf7dfb9e92fb38f03c2a1bc31765415
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 08/28/2019
ms.locfileid: "70101166"
---
# <a name="sap-hana-large-instances-deployment"></a>SAP HANA (Large Instances) deployment
This article assumes that you have completed the purchase of SAP HANA on Azure (Large Instances) from Microsoft. Before you read this article, see the background information on [common terms for HANA Large Instances](hana-know-terms.md) and [HANA Large Instances SKUs](hana-available-skus.md).
To deploy HANA Large Instance units, Microsoft needs the following information:
- Customer name.
- Business contact information (including email address and phone number).
- Technical contact information (including email address and phone number).
- Technical networking contact information (including email address and phone number).
- Azure deployment region (for example, West US, Australia East, or North Europe).
- SAP HANA on Azure (Large Instances) SKU (configuration).
- For each Azure deployment region:
  - A /29 IP address range for the ER-P2P connections that connect Azure virtual networks to HANA Large Instances.
  - A /24 CIDR block used for the HANA Large Instances server IP pool.
  - Optional: when you use [ExpressRoute Global Reach](https://docs.microsoft.com/azure/expressroute/expressroute-global-reach) to enable direct routing from on-premises to HANA Large Instance units, or routing between HANA Large Instance units in different Azure regions, you need to reserve another IP address range. That specific range must not overlap with any of the other IP address ranges defined earlier.
  - The IP address range values used in the address space attribute of every Azure virtual network that connects to the HANA Large Instances.
- Data for each HANA Large Instances system:
  - Desired host name, preferably with a fully qualified domain name.
  - Desired IP address for the HANA Large Instance unit out of the server IP pool address range. The first 30 IP addresses in the server IP pool address range are reserved for internal use within HANA Large Instances.
  - SAP HANA SID name for the SAP HANA instance (required to create the necessary SAP HANA-related disk volumes). Microsoft needs the HANA SID to create the permissions for sidadm on the NFS volumes. These volumes attach to the HANA Large Instance unit. The HANA SID is also used as one of the name components of the disk volumes that get mounted. If you want to run more than one HANA instance on the unit, list multiple HANA SIDs; each one gets a separate set of volumes assigned.
  - In the Linux operating system, the sidadm user has a group ID. This ID is required to create the necessary SAP HANA-related disk volumes. The SAP HANA installation usually creates the sapsys group with a group ID of 1001, and the sidadm user is part of that group.
  - In the Linux operating system, the sidadm user has a user ID. This ID is required to create the necessary SAP HANA-related disk volumes. If you run multiple HANA instances on the unit, list all the sidadm users.
- The Azure subscription ID of the Azure subscription to which the SAP HANA Large Instances will be directly connected. This subscription ID references the Azure subscription that is charged for the HANA Large Instance unit or units.
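Because the ER-P2P range, the server IP pool, and the virtual network address spaces must not overlap, it can help to sanity-check a proposed address plan before submitting it. The sketch below uses Python's standard `ipaddress` module; the example ranges are made up and should be replaced with your own plan:

```python
import ipaddress

# Hypothetical example ranges -- replace with your own address plan.
ranges = {
    "ER-P2P /29": ipaddress.ip_network("10.0.1.0/29"),
    "HANA Large Instances server IP pool /24": ipaddress.ip_network("10.1.0.0/24"),
    "Azure virtual network address space": ipaddress.ip_network("10.2.0.0/16"),
}

def overlapping_pairs(named_networks):
    """Return every pair of named networks whose address ranges overlap."""
    items = list(named_networks.items())
    return [
        (a_name, b_name)
        for i, (a_name, a_net) in enumerate(items)
        for b_name, b_net in items[i + 1:]
        if a_net.overlaps(b_net)
    ]

conflicts = overlapping_pairs(ranges)
if conflicts:
    for a, b in conflicts:
        print(f"Overlap: {a} <-> {b}")
else:
    print("No overlaps - address plan is consistent.")
```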
After you provide this information, Microsoft provisions SAP HANA on Azure (Large Instances). Microsoft sends you the information you need to link your Azure virtual networks to the HANA Large Instances, and you also get access to the HANA Large Instance units.
Use the following sequence to connect to the HANA Large Instances after Microsoft has deployed them:
1. [Connecting Azure VMs to HANA Large Instances](hana-connect-azure-vm-large-instances.md)
2. [Connecting a virtual network to HANA Large Instances ExpressRoute](hana-connect-vnet-express-route.md)
3. [Additional network requirements (optional)](hana-additional-network-requirements.md)
# Data Ethics Club discusses ['Living in the Hidden Realms of AI: The Workers Perspective'](https://news.techworkerscoalition.org/2021/03/09/issue-5/)
```{admonition} What's this?
This is a summary of Wednesday 26th May's Data Ethics Club discussion, where we spoke and wrote about the article ['Living in the Hidden Realms of AI: The Workers Perspective'](https://news.techworkerscoalition.org/2021/03/09/issue-5/) by Sherry Stanley.
The summary was written by Huw Day, who tried to synthesise everyone's contributions to this document and the discussion. "We" = "someone at Data Ethics Club".
Nina Di Cara and Natalie Thurlby helped with a final edit.
```
## Mechanical Turks and the Sweatshops of Machine Learning
We discussed an article by Sherry Stanley - a "Turker": someone who annotates data using Amazon
Mechanical Turk (often used to label data for machine learning). Stanley discusses problems with the back end,
their pay, and working conditions, showing the human side of data that we often think of as very
technical.
People need flexible work, but since the work is remote, contract-based and often time sensitive, much of the power is left in the
hands of those contracting the work, not the Turkers. If a Turker won't do a task, there are three more
who will take their place. This supply and demand imbalance leads to a weak negotiating position for the
Turkers. Stanley talks about having notifications on her phone to wake her in the middle of the night if a
valuable contract comes up.
Turks might complete work but get rejected by the contractors, denying them pay in the process with
limited power to protest. The workers often don't know why their work was rejected and technically their
hard work could still be used, making the system open for abuse.
This current system leads to exploited turk workers producing rushed data annotations for low levels of
compensation - so how do we go about changing this?
## How far should our ethical responsibility as data scientists reach?
Researchers tend to see Turkers as service providers rather than research participants. This viewpoint
allows a degree of detachment; for instance, data annotators may not be seen as human research participants by research ethics committees. So how do we ensure responsibility throughout the
research chain?
There is a lot of "variety" in ethical behaviour between different researchers. Any mechanism that
relies on good faith is simply not enough. There needs to be a clear contract between requester and
Turker. Perhaps a good solution to this is including something in grant proposals that guarantees the workers
are treated ethically as part of the ethical framework in the proposal.
Outsourcing tech work can be compared to other contract work - wage paid per individual job, precarious. Turkers' rights are in some ways analogous with modern day slavery. The UK has requirements on
corporations to look throughout their supply chain to see if people at any point are being exploited in the production of clothing.
Why should the production of data be any different?
An institution wide approach would be more powerful than leaving decisions to individual researchers. There could be procedures in place (especially at public institutions like universities) to
ensure that reliable and ethically collected/annotated data is used. Major researchers may advocate for
ethical data integration but then use low-pay crowd workers for their own work. Accountability
throughout the system is required, and we cannot rely on individuals for that.
One issue that was brought up was that if regulations are put in place in countries like the US and UK, work might get outsourced to countries with less strict standards.
Whilst we would not want to deny work to Turkers in developing countries, it would be
important to enforce accountability throughout the supply chain.
## How should we recognise the contribution of annotators?
Simply acknowledging clickworkers is a good start. Unfortunately the distributed nature of crowdwork
makes it difficult to credit workers as well as ensure accountability if things go wrong (e.g. bad
annotations).
Some Turk surveys include personal information (e.g. mental health information) so scope for anonymity should be present. But as a general rule, credit (which comes hand in hand with accountability) for work
done is vital.
We had a discussion about where to draw the line in crediting clickworkers. In an academic context, do you include them as a co-author/author? This might not be a hard rule, but some sort of industry standard may be appropriate.
An important step would be to ask actual clickworkers how they would like to be credited!
## Change and the power to make it
There are strong parallels between Turk workers and recent advances in rights for 'gig economy' workers. Flexible and accessible opportunities to earn an income are important, but we do not want people to be exploited for them.
Any steps we as a society take need to involve closing loopholes in the law that let companies like Uber
and Amazon get away with paying contractors less than minimum wage. This is a broader issue in the so-
called "gig economy".
Even more broadly though, exploitation is most rife in environments where the workers, whoever they are, don't have better options. This naturally led our discussion to Universal Basic Income.
Universal Basic Income (UBI) is the idea that every adult in a specific area would receive a standard,
unconditional payment at regular intervals. If working was not a necesity, everyone who worked would
want to make it worth their while. UBI is being trialled in
[Wales](https://www.bbc.co.uk/news/uk-wales-politics-57120354) at the moment, so perhaps more research
into this area will aid our understanding in this area. In the past "researchers found the scheme left
those happier and less stressed, but did not aid them in finding work."
Probably a more realistic action for now is to support data annotators in the
formation of unions, and back their calls as an industry for better rights and benefits. We rely on annotated data for so many parts of data science, and we can't allow the people who make it to be invisible.
---
## Related Links
- [Turkers Dynamo guidelines for how annotators should be treated](https://blog.turkopticon.info/?page_id=121)
- [Google's alphabet union case study](https://the-turing-way.netlify.app/ethical-research/activism/activism-case-study-google.html)
- [Datasheets for datasets](https://arxiv.org/abs/1803.09010)
---
## Attendees
Note: this is not a full list of attendees, only those who felt comfortable sharing their names.
__Name, Job title, Affiliation, Links to find you__
- Natalie Thurlby, Data Scientist, University of Bristol, [NatalieThurlby](https://github.com/NatalieThurlby/), [@StatalieT](https://twitter.com/StatalieT)
- Nina Di Cara, PhD Student, University of Bristol, [ninadicara](https://github.com/ninadicara/), [@ninadicara](https://twitter.com/ninadicara)
- Huw Day, Maths PhDoer, [@disco_huw](https://twitter.com/disco_huw)
- Roman Shkunov, Maths/CS undergrad, University of Bristol
- Paul Lee, investor, @pclee27, senseoffairness.blog
- Ola Michalec, Postdoc (a social scientist in computer science school) @Ola_Michalec
- Sergio A. Araujo-Estrada, PostDoc, Aerospace Engineering, UoB
- James Cussens, Lecturer in CS Dept, UoB, https://jcussens.github.io/
- Robin Dasler, software product manager on hiatus, [daslerr](https://github.com/daslerr)
- Henry Addison, Interactive AI PhD student, UoB, [henryaddison](https://github.com/henryaddison)
- Arianna Manzini, Research Associate, Centre for Ethics in Medicine, UoB
- Vanessa Hanschke, PhD student, Interactive AI, UoB
The idea is to demonstrate routines calling each other and conditionally returning values.
In pseudo-code it would be like:
```c
int bar(void); /* forward declaration: foo calls bar before its definition */

int foo(int x) {
    if (x) {
        return 42;
    } else {
        return bar();
    }
}

int bar(void) {
    return foo(1);
}

int main(void) {
    return foo(0);
}
```
---
title: Create Azure portal user interface definition elements | Microsoft Docs
description: Describes the elements to use when constructing UI definitions for the Azure portal.
services: managed-applications
documentationcenter: na
author: tfitzmac
ms.service: managed-applications
ms.devlang: na
ms.topic: reference
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 09/19/2018
ms.author: tomfitz
ms.openlocfilehash: 41a583a77f85bb1524112fa20d9098e18bc4f431
ms.sourcegitcommit: 41ca82b5f95d2e07b0c7f9025b912daf0ab21909
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 06/13/2019
ms.locfileid: "60587938"
---
# <a name="createuidefinition-elements"></a>CreateUiDefinition elements
This article describes the schema and properties for all supported elements of a CreateUiDefinition.
## <a name="schema"></a>Schema
The schema for most elements is as follows:
```json
{
"name": "element1",
"type": "Microsoft.Common.TextBox",
"label": "Some text box",
"defaultValue": "my value",
"toolTip": "Provide a descriptive name.",
"constraints": {},
"options": {},
"visible": true
}
```
| Property | Required | Description |
| -------- | -------- | ----------- |
| name | Yes | An internal identifier to reference a specific instance of an element. The most common usage of the element name is in `outputs`, where the output values of the specified elements are mapped to the parameters of the template. You can also use it to bind the output value of an element to the `defaultValue` of another element. |
| type | Yes | The UI control to render for the element. For a list of supported types, see [Elements](#elements). |
| label | Yes | The display text of the element. Some element types contain multiple labels, so the value could be an object containing multiple strings. |
| defaultValue | No | The default value of the element. Some element types support complex default values, so the value could be an object. |
| toolTip | No | The text to display in the tool tip of the element. Similar to `label`, some elements support multiple tool tip strings. Inline links can be embedded using Markdown syntax. |
| constraints | No | One or more properties that are used to customize the validation behavior of the element. The supported properties for constraints differ by element type. Some element types don't support customization of the validation behavior, and thus have no constraints property. |
| options | No | Additional properties that customize the behavior of the element. Similar to `constraints`, the supported properties vary by element type. |
| visible | No | Indicates whether the element is displayed. If `true`, the element and applicable child elements are displayed. The default value is `true`. Use [logical functions](create-uidefinition-functions.md#logical-functions) to dynamically control this property's value. |
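To make the schema concrete, here is a sketch of a `Microsoft.Common.TextBox` element that combines several of these properties (the element names, the regex, and the `basics('deployStorage')` reference are illustrative assumptions, not values from this article):

```json
{
    "name": "storagePrefix",
    "type": "Microsoft.Common.TextBox",
    "label": "Storage name prefix",
    "defaultValue": "storage",
    "toolTip": "Use only lowercase letters and numbers.",
    "constraints": {
        "required": true,
        "regex": "^[a-z0-9]{3,11}$",
        "validationMessage": "Only lowercase letters and numbers are allowed, and the value must be 3-11 characters long."
    },
    "visible": "[equals(basics('deployStorage'), true)]"
}
```

Here `constraints` drives client-side validation, while `visible` uses a logical function so the text box only appears when another element enables it.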
## <a name="elements"></a>Elements
The documentation for each element contains a UI sample of the element, its schema, remarks on the element's behavior (typically concerning validation and supported customization), and its output.
- [Microsoft.Common.DropDown](microsoft-common-dropdown.md)
- [Microsoft.Common.FileUpload](microsoft-common-fileupload.md)
- [Microsoft.Common.InfoBox](microsoft-common-infobox.md)
- [Microsoft.Common.OptionsGroup](microsoft-common-optionsgroup.md)
- [Microsoft.Common.PasswordBox](microsoft-common-passwordbox.md)
- [Microsoft.Common.Section](microsoft-common-section.md)
- [Microsoft.Common.TextBlock](microsoft-common-textblock.md)
- [Microsoft.Common.TextBox](microsoft-common-textbox.md)
- [Microsoft.Compute.CredentialsCombo](microsoft-compute-credentialscombo.md)
- [Microsoft.Compute.SizeSelector](microsoft-compute-sizeselector.md)
- [Microsoft.Compute.UserNameTextBox](microsoft-compute-usernametextbox.md)
- [Microsoft.Network.PublicIpAddressCombo](microsoft-network-publicipaddresscombo.md)
- [Microsoft.Network.VirtualNetworkCombo](microsoft-network-virtualnetworkcombo.md)
- [Microsoft.Storage.MultiStorageAccountCombo](microsoft-storage-multistorageaccountcombo.md)
- [Microsoft.Storage.StorageAccountSelector](microsoft-storage-storageaccountselector.md)
## <a name="next-steps"></a>Next steps
For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
# DataImport
An ASP.NET Core API for importing data.