<!-- Source: docs/1.22/core/v1/persistentVolumeStatus.md (jsonnet-libs/k8s-libsonnet @ f8efa81cf15257bd151b97e31599e20b2ba5311b, Apache-2.0) -->
---
permalink: /1.22/core/v1/persistentVolumeStatus/
---

# core.v1.persistentVolumeStatus

"PersistentVolumeStatus is the current status of a persistent volume."

## Index

* [`fn withMessage(message)`](#fn-withmessage)
* [`fn withPhase(phase)`](#fn-withphase)
* [`fn withReason(reason)`](#fn-withreason)

## Fields

### fn withMessage

```ts
withMessage(message)
```

"A human-readable message indicating details about why the volume is in this state."

### fn withPhase

```ts
withPhase(phase)
```

"Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase"

### fn withReason

```ts
withReason(reason)
```

"Reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI."
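The builder functions documented above each set one field on the underlying Kubernetes `PersistentVolumeStatus` object. As a language-neutral sketch of that behavior (written in Python since this page only documents signatures; the helper names and field values below are hypothetical mirrors, not part of the library):

```python
# Hypothetical mirrors of the with* builders above. Only the field names
# (message, phase, reason) come from the PersistentVolumeStatus schema
# documented on this page; everything else is illustrative.

def with_message(status, message):
    # mirrors fn withMessage(message)
    return {**status, "message": message}

def with_phase(status, phase):
    # mirrors fn withPhase(phase)
    return {**status, "phase": phase}

def with_reason(status, reason):
    # mirrors fn withReason(reason)
    return {**status, "reason": reason}

# Builders compose by merging, so they can be chained in any order.
status = with_reason(
    with_phase(with_message({}, "volume bound to claim default/my-claim"), "Bound"),
    "Bound",
)
print(status)
```

In the jsonnet library itself the equivalent composition is done with the `+` object-merge operator rather than function nesting.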
<!-- Source: installer/README.md (mwallard/PowerToys @ cbef4bd603d55956e1e970fa49619ebc688303b3, MIT) -->
# PowerToys installer instructions (Delete this)

## MSI installer instructions

1. Install the [WiX Toolset Visual Studio 2019 Extension](https://marketplace.visualstudio.com/items?itemName=RobMensching.WiXToolset).
2. Install the [WiX Toolset build tools](https://wixtoolset.org/releases/) on the development machine.
3. Open `powertoys.sln`, select the "Release" and "x64" configurations, and build the `PowerToysSetup` project.
4. The resulting installer is built to `PowerToysSetup\bin\Release\PowerToysSetup.msi`.

## MSIX installer instructions

### One-time tasks

#### Create and install the self-signed certificate

For the first-time installation, you'll need to generate a self-signed certificate. The script below generates a certificate and adds it to your [TRCA store](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/trusted-root-certification-authorities-certificate-store).

1. Open `Developer PowerShell for VS` as an admin.
2. Navigate to your repo's `installer\MSIX` folder.
3. Run `.\generate_self_sign_cert.ps1`.

**Note:** if you delete the folder, you will have to regenerate the key.

#### Elevate `Developer PowerShell for VS` permissions due to unsigned file

Because `reinstall_msix.ps1` is unsigned, you'll need to elevate your prompt:

1. Open `Developer PowerShell for VS` as an admin.
2. Run `Set-ExecutionPolicy -ExecutionPolicy Unrestricted`.

#### Allow Sideloaded apps

To install the MSIX package without using the Microsoft Store, sideloading apps needs to be enabled. This can be done by enabling `Developer Options > Sideload apps` or `Developer Options > Developer mode`.

### Building the MSIX package

1. Make sure you've built the `Release` configuration of `powertoys.sln`.
2. Open `Developer PowerShell for VS`.
3. Navigate to your repo's `installer\MSIX` folder.
4. Run `.\reinstall_msix.ps1` from the devenv PowerShell.

### What reinstall_msix.ps1 does

`reinstall_msix.ps1` removes the current PowerToys installation, restarts explorer.exe (to update the PowerRename and ImageResizer shell extensions), builds the `PowerToys-x64.msix` package, signs it with PowerToys_TemporaryKey.pfx, and finally installs it.

## Cleanup - Removing all .msi/.msix PowerToys installations

```ps
$name='PowerToys'
Get-AppxPackage -Name $name | select -ExpandProperty "PackageFullName" | Remove-AppxPackage
gwmi win32_product -filter "Name = '$name'" -namespace root/cimv2 | foreach {
  if ($_.uninstall().returnvalue -eq 0) { write-host "Successfully uninstalled $name" }
  else { write-warning "Failed to uninstall $name." }
}
```
<!-- Source: README.md (clayne/syntax-check-perl @ 8fe82eb746a1b81a705b71df883d157f8d5144d6, Artistic-1.0) -->
# Perl syntax checker

[![](https://github.com/skaji/syntax-check-perl/workflows/test/badge.svg)](https://github.com/skaji/syntax-check-perl/actions)

This is a Perl syntax checker, especially for [ale](https://github.com/dense-analysis/ale).

## Integrate with vim-plug and ale

Here is how to integrate with [vim-plug](https://github.com/junegunn/vim-plug) and [ale](https://github.com/dense-analysis/ale).

```vim
call plug#begin('~/.vim/plugged')
Plug 'dense-analysis/ale'
Plug 'skaji/syntax-check-perl'
call plug#end()

let g:ale_linters = { 'perl': ['syntax-check'] }
```

## Configuration

If you write Perl a lot, then I assume you have your own favorite way to check Perl code. You can set a config file for `syntax-check`:

```vim
let g:ale_perl_syntax_check_config = expand('~/.vim/your-config.pl')

" there is also my favorite, and you can use it :)
let g:ale_perl_syntax_check_config = g:plug_home . '/syntax-check-perl/config/relax.pl'

" add arbitrary perl executable names. defaults to "perl"
let g:ale_perl_syntax_check_executable = 'my-perl'
```

The config files are written in Perl, so you can do whatever you want. :) See [default.pl](config/default.pl).

### Adding libs to @INC

By default we try to add `lib` (or `blib` if appropriate), `t/lib`, `xt/lib` and `local/lib/perl5` to `@INC` when attempting to compile your code. Depending on how you work, this may not be what you want. The good news is that you can manage this via the Perl config file. See also [default.pl](config/default.pl) for more detailed information on how to do this.

## Security

You should be aware that we use the `-c` flag to see if `perl` code compiles. This does not execute all of the code in a file, but it does run `BEGIN` and `CHECK` blocks. See `perl --help` and [StackOverflow](https://stackoverflow.com/a/12908487/406224).

## Debugging

You can use `:ALEInfo` in `vim` to troubleshoot `Ale` plugins. Scroll to the bottom of the `:ALEInfo` output to find any errors which may have been produced by this plugin.

## Author

Shoichi Kaji

## License

The same as perl
<!-- Source: articles/cloudfoundry/how-cloud-foundry-integrates-with-azure.md (Microsoft/azure-docs.ru-ru @ 980849d8505e40e8b260cb5a35b56e22d55fc9d3, CC-BY-4.0 / MIT) -->
---
title: How Cloud Foundry integrates with Azure | Microsoft Docs
description: This article describes how Cloud Foundry can use Azure services for enterprise-grade operations.
services: virtual-machines-linux
documentationcenter: ''
author: ningk
tags: Cloud-Foundry
ms.assetid: 00c76c49-3738-494b-b70d-344d8efc0853
ms.service: virtual-machines
ms.topic: conceptual
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure-services
ms.date: 05/11/2018
ms.author: ningk
ms.openlocfilehash: ff2a6618b60ff2cfa5faa74c905e140466a14359
ms.sourcegitcommit: 910a1a38711966cb171050db245fc3b22abc8c5f
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 03/20/2021
ms.locfileid: "102563325"
---

# <a name="integrate-cloud-foundry-with-azure"></a>Integrate Cloud Foundry with Azure

[Cloud Foundry](https://docs.cloudfoundry.org/) is a PaaS platform that runs on top of the IaaS platforms of cloud providers. It offers a consistent application deployment experience across cloud providers. It can also integrate with various Azure services for enterprise-grade high availability, scalability, and cost savings.

There are [six Cloud Foundry subsystems](https://docs.cloudfoundry.org/concepts/architecture/) that can be scaled flexibly: routing, authentication, application lifecycle management, service management, messaging, and monitoring. For each subsystem, Cloud Foundry can be configured to use the corresponding Azure service.

![Cloud Foundry integration with Azure architecture](media/CFOnAzureEcosystem-colored.png)

## <a name="1-high-availability-and-scalability"></a>1. High availability and scalability

### <a name="managed-disk"></a>Managed disk

BOSH uses the Azure CPI (Cloud Provider Interface) to create and delete disks. By default, unmanaged disks are used: the customer has to create storage accounts manually and configure them in the CF manifest files, because of the limit on the number of disks per storage account. Now you can use [managed disks](https://azure.microsoft.com/services/managed-disks/), which provide managed, secure, and reliable disk storage for VMs. The customer no longer has to deal with storage accounts for scale and high availability; Azure arranges the disks automatically. Whether it's a new or an existing deployment, the Azure CPI creates or migrates managed disks during CF deployment. This is supported in PCF 1.11. For details, see the open-source Cloud Foundry [managed disk guidance](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/managed-disks).

### <a name="availability-zone-"></a>Availability zone *

Cloud Foundry is an application platform born in the cloud, with [four levels of high availability](https://docs.pivotal.io/pivotalcf/2-1/concepts/high-availability.html). The first three levels of software failures can be handled by the CF system itself, while platform fault tolerance is provided by the cloud providers. Key CF components should be protected by the cloud provider's platform high-availability solution; these include the GoRouters, the Diego Brains, and the CF database and service tiles. By default, [Azure availability sets](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/deploy-cloudfoundry-with-availability-sets) are used for fault tolerance between clusters within a datacenter.

The release of [Azure Availability Zones](../availability-zones/az-overview.md) improves on this, with fault tolerance implemented as low-latency redundancy across datacenters. Azure Availability Zones achieve high availability by placing a set of VMs into two or more datacenters, with each set of VMs redundant to the others. If one zone goes down, the other sets remain available and isolated from the failure.

> [!NOTE]
> Azure Availability Zones aren't offered in all regions yet; see the latest [announcement of the supported region list](../availability-zones/az-overview.md). Also see the [Azure Availability Zones guidance for open-source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/availability-zone).

## <a name="2-network-routing"></a>2. Network routing

By default, the Azure basic load balancer is used for inbound requests to the CF API and apps, forwarding them to the GoRouters. CF components such as Diego Brain, MySQL, and ERT can also use the load balancer to balance traffic for high availability. Azure also provides a set of fully managed load-balancing solutions. If you're looking for TLS/SSL termination ("SSL offload") or application-layer processing of HTTP/HTTPS requests, consider Application Gateway. For layer-4 load balancing with high availability and scalability, consider the Standard Load Balancer.

### <a name="azure-application-gateway-"></a>Azure Application Gateway *

[Azure Application Gateway](../application-gateway/overview.md) offers various layer-7 load-balancing capabilities, including SSL offload, end-to-end TLS, a web application firewall, cookie-based session affinity, and more. You can [set up Application Gateway in open-source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/application-gateway). For PCF, see the [PCF 2.1 release notes](https://docs.pivotal.io/pivotalcf/2-1/pcf-release-notes/opsmanager-rn.html#azure-application-gateway) for a POC test.

### <a name="azure-standard-load-balancer-"></a>Azure Standard Load Balancer *

Azure Load Balancer is a layer-4 load balancer, used to distribute traffic among service instances in a load-balanced set. The Standard version includes [additional capabilities](../load-balancer/load-balancer-overview.md) beyond the Basic version, for example:

1. The backend pool size is raised from 100 to 1,000 VMs.
2. Endpoints now support multiple availability sets instead of a single one.
3. More features, such as HA ports, richer monitoring data, and so on.

If you're moving to Azure Availability Zones, the Standard Load Balancer is required. For a new deployment, we recommend starting with the Azure Standard Load Balancer.

## <a name="3-authentication"></a>3. Authentication

[Cloud Foundry User Account and Authentication](https://docs.cloudfoundry.org/concepts/architecture/uaa.html) (UAA) is the centralized identity management service for the CF platform and its components. [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) is Microsoft's cloud-based directory and identity management service. By default, CF uses UAA for authentication. UAA can also use Azure AD as an external user store, so Azure AD users can access Cloud Foundry with their LDAP identity, without a Cloud Foundry account. See how to [configure Azure AD for UAA in PCF](https://docs.pivotal.io/p-identity/1-6/azure/index.html).

## <a name="4-data-storage-for-cloud-foundry-runtime-system"></a>4. Data storage for the Cloud Foundry runtime system

Thanks to its extensibility, Cloud Foundry can use Azure Blob storage or the Azure MySQL/PostgreSQL services as the application runtime system stores.

### <a name="azure-blobstore-for-cloud-foundry-cloud-controller-blobstore"></a>Azure blobstore for the Cloud Foundry Cloud Controller blobstore

The Cloud Controller blobstore is a critical data store for buildpacks, droplets, packages, and resource pools. By default, an NFS server is used for the Cloud Controller blobstore. To avoid a single point of failure, use Azure Blob storage as an external store. For details, see the [Cloud Foundry documentation](https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html) and the [options described for Pivotal Cloud Foundry](https://docs.pivotal.io/pivotalcf/2-0/customizing/azure.html).

### <a name="mysqlpostgresql-as-cloud-foundry-elastic-run-time-database-"></a>MySQL/PostgreSQL as the Cloud Foundry elastic runtime database *

CF Elastic Runtime requires two major system databases:

#### <a name="ccdb"></a>CCDB

The Cloud Controller database. The Cloud Controller provides REST API endpoints for clients to access the system. CCDB stores tables for orgs, spaces, services, user roles, and other Cloud Controller components.

#### <a name="uaadb"></a>UAADB

The database for User Account and Authentication. It stores user authentication data, such as encrypted user names and passwords.

By default, a local system database (MySQL) can be used. For high availability and scale, take advantage of the Azure managed MySQL or PostgreSQL services. See how to [enable Azure MySQL/PostgreSQL for CCDB, UAADB, and other system databases in open-source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/configure-cf-external-databases-using-azure-mysql-postgres-service).

## <a name="5-open-service-broker"></a>5. Open Service Broker

The Azure service broker provides a unified interface for applications to manage access to Azure services. The new [Open Service Broker for Azure](https://github.com/Azure/open-service-broker-azure) project provides a single, simple way to deliver services to applications across Cloud Foundry, OpenShift, and Kubernetes. For deployment in PCF, see the [Azure Open Service Broker for PCF tile](https://pivotal.io/platform/services-marketplace/data-management/microsoft-azure).

## <a name="6-metrics-and-logging"></a>6. Metrics and logging

The Azure Log Analytics nozzle is a Cloud Foundry component that forwards metrics from the [Cloud Foundry loggregator firehose](https://docs.cloudfoundry.org/loggregator/architecture.html) to [Azure Monitor logs](https://azure.microsoft.com/services/log-analytics/). With the nozzle, you can collect, view, and analyze the health and performance metrics of your CF system across multiple deployments. Click [here](./cloudfoundry-oms-nozzle.md) to learn how to deploy the Azure Log Analytics nozzle to open-source and Pivotal Cloud Foundry environments, and then access the data from the Azure Monitor logs console.

> [!NOTE]
> As of PCF 2.0, BOSH health metrics for VMs are forwarded to the loggregator firehose by default and integrated into the Azure Monitor logs console.

[!INCLUDE [azure-monitor-log-analytics-rebrand](../../includes/azure-monitor-log-analytics-rebrand.md)]

## <a name="7-cost-saving"></a>7. Cost saving

### <a name="cost-saving-for-devtest-environments"></a>Cost saving for dev/test environments

#### <a name="b-series-"></a>B-series *

Previously, the F-series and D-series VMs were often recommended for Pivotal Cloud Foundry production use. Today, the new burstable [B-series](https://azure.microsoft.com/blog/introducing-b-series-our-new-burstable-vm-size/) offers new options. B-series VMs are ideal for workloads that don't need continuous full CPU performance, such as web servers, small databases, and dev/test environments. These workloads typically need burstable performance, which costs $0.012/hour for a B1 versus $0.05/hour for an F1. See more on [VM sizes](../virtual-machines/sizes-general.md) and [pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/).

#### <a name="managed-standard-disk"></a>Managed standard disk

Previously, premium disks were recommended for reliable performance in production. With [managed disks](https://azure.microsoft.com/services/managed-disks/), standard storage now provides similar reliability, with different performance tiers.
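As a rough sketch of the cost arithmetic discussed in this section (the hourly rates are the examples quoted here; actual prices vary by region and over time, and the 730 hours/month figure is an assumed average):

```python
HOURS_PER_MONTH = 730  # assumed average hours in a month

# Example pay-as-you-go rates quoted in this article
f1_hourly = 0.05    # F1
b1_hourly = 0.012   # burstable B1

f1_monthly = f1_hourly * HOURS_PER_MONTH
b1_monthly = b1_hourly * HOURS_PER_MONTH

# Azure reservations: 45-65% discount versus on-demand billing
reserved_low = f1_monthly * (1 - 0.65)   # best case: 65% off
reserved_high = f1_monthly * (1 - 0.45)  # worst case: 45% off

print(f"F1 on demand: ${f1_monthly:.2f}/month")
print(f"B1 burstable: ${b1_monthly:.2f}/month")
print(f"F1 reserved:  ${reserved_low:.2f} to ${reserved_high:.2f}/month")
```

The point of the sketch is the relative magnitudes: a burstable size or a reservation each cuts the monthly bill by well over a third for continuously running CF environments.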
For performance-insensitive workloads, such as dev/test or non-critical environments, managed standard disks offer a lower-cost alternative.

### <a name="cost-saving-in-general"></a>Cost saving in general

#### <a name="significant-vm-cost-saving-with-azure-reservations"></a>Significant VM cost saving with Azure reservations

Today, all CF VMs are billed on demand, even though the environments typically run continuously. You can now reserve VM capacity for one or three years and get a 45-65% discount. The discounts are applied in the billing system, with no change to your environment. For details, see [Azure Reserved VM Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/).

#### <a name="managed-premium-disk-with-smaller-sizes"></a>Managed premium disk with smaller sizes

Managed disks support smaller sizes, such as P4 (32 GB) and P6 (64 GB), for both the premium and standard tiers. For small workloads, you can reduce costs by migrating from standard premium disks to managed premium disks.

#### <a name="use-azure-first-party-services"></a>Use Azure first-party services

Running Azure native services brings lower long-term administration cost, on top of the high availability and reliability discussed above. Pivotal has launched the [Small Footprint ERT](https://docs.pivotal.io/pivotalcf/2-0/customizing/small-footprint.html) for PCF customers: the components are co-located into just four VMs, running up to 2,500 application instances. A trial version is now available on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pivotal.pivotal-cloud-foundry).

## <a name="next-steps"></a>Next steps

Azure integration features come to [open-source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/) first, before they're available in Pivotal Cloud Foundry. Components marked with an asterisk (*) aren't yet available in PCF. Cloud Foundry integration with Azure Stack isn't covered in this document. For the latest status of the PCF components marked with an asterisk (*), or for help with Cloud Foundry integration with Azure Stack, contact your Pivotal and Microsoft account managers.
<!-- Source: worker/README.md (julius220/example-voting-app @ 98ecc57c7c9299917406aa853ba124be31c2546d, Apache-2.0) -->
## Worker Java App

* Build Status

[![Build Status](http://3.96.55.120:8080/buildStatus/icon?job=instavote%2Fworker-build)](http://3.96.55.120:8080/job/instavote/job/worker-build/) [![Build Status](http://3.96.55.120:8080/buildStatus/icon?job=instavote%2Fworker-test)](http://3.96.55.120:8080/job/instavote/job/worker-test&subject=UnitTest)
<!-- Source: articles/application-gateway/application-gateway-components.md (tsunami416604/azure-docs.hu-hu @ aeba852f59e773e1c58a4392d035334681ab7058, CC-BY-4.0 / MIT) -->
---
title: Application Gateway components
description: This article provides information about the various components of Application Gateway
services: application-gateway
author: surajmb
ms.service: application-gateway
ms.topic: conceptual
ms.date: 08/21/2020
ms.author: surmb
ms.openlocfilehash: ebd06b0b78ee511dce535ff4220df03087fb6906
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 10/09/2020
ms.locfileid: "88723316"
---

# <a name="application-gateway-components"></a>Application Gateway components

Application Gateway serves as the single point of contact for clients. It distributes incoming application traffic across multiple backend pools, which include Azure VMs, virtual machine scale sets, Azure App Service, and on-premises/external servers. To distribute traffic, Application Gateway uses several components, which are described in this article.

![Components used in an application gateway](./media/application-gateway-components/application-gateway-components.png)

## <a name="frontend-ip-addresses"></a>Frontend IP addresses

The frontend IP address is the IP address associated with an application gateway. You can configure an application gateway with a public IP address, a private IP address, or both. An application gateway supports one public or one private IP address. The virtual network and the public IP address must be in the same location as the application gateway. After creation, a frontend IP address is associated with a listener.

### <a name="static-versus-dynamic-public-ip-address"></a>Static versus dynamic public IP address

The Azure Application Gateway v2 SKU can be configured to support either both a static internal IP address and a static public IP address, or only a static public IP address. It can't be configured to support only a static internal IP address.

The v1 SKU can be configured to support a static or dynamic internal IP address and a dynamic public IP address. The dynamic IP address of an application gateway doesn't change on a running gateway; it can change only when you stop or start the gateway. It doesn't change on system failures, updates, Azure host updates, and so on.

The DNS name associated with an application gateway doesn't change over the gateway's lifecycle. As a result, you should use a CNAME alias and point it to the DNS address of the application gateway.

## <a name="listeners"></a>Listeners

A listener is a logical entity that checks for incoming connection requests. A listener accepts a request if the protocol, port, hostname, and IP address associated with the request match the elements associated with the listener's configuration. Before you use an application gateway, you must add at least one listener. There can be multiple listeners attached to an application gateway, and they can be used for the same protocol.

After a listener detects incoming requests from clients, the application gateway routes these requests to the members of the backend pool configured in the rule. Listeners support the following ports and protocols.

### <a name="ports"></a>Ports

A port is where the listener listens for the client request. You can configure ports ranging from 1 to 65502 for the v1 SKU and 1 to 65199 for the v2 SKU.

### <a name="protocols"></a>Protocols

Application Gateway supports four protocols: HTTP, HTTPS, HTTP/2, and WebSocket:

> [!NOTE]
> HTTP/2 protocol support is available to clients that connect to Application Gateway listeners only. Communication to backend server pools is always over HTTP/1.1. By default, HTTP/2 support is disabled. You can choose to enable it.

- Specify between the HTTP and HTTPS protocols in the listener configuration.
- Support for the [WebSocket and HTTP/2 protocols](features.md#websocket-and-http2-traffic) is provided natively, and [WebSocket support](application-gateway-websocket.md) is enabled by default. There's no user-configurable setting for WebSocket support alone. Use WebSockets with both HTTP and HTTPS listeners.

Use an HTTPS listener for TLS termination. An HTTPS listener offloads the encryption and decryption work to the application gateway, so your web servers aren't burdened by that overhead.

### <a name="custom-error-pages"></a>Custom error pages

Application Gateway lets you create custom error pages instead of displaying default error messages. You can use your own branding and a custom layout on the error pages. Application Gateway displays a custom error page when a request can't reach the backend. For more information, see [Custom errors for an application gateway](custom-error.md).

### <a name="types-of-listeners"></a>Types of listeners

There are two types of listeners:

- **Basic.** This type of listener listens to a single domain site, where it has a single DNS mapping to the IP address of the application gateway. This listener configuration is required when you host a single site behind an application gateway.
- **Multi-site.** This listener configuration is required when you want to configure routing based on host name or domain name for more than one web application on the same application gateway. It lets you configure a more efficient topology for your deployments, because you can add up to 100+ websites to one application gateway. Each website can be directed to its own backend pool. For example, three domains (contoso.com, fabrikam.com, and adatum.com) point to the IP address of the application gateway. You'd create three [multi-site listeners](multiple-site-overview.md) and configure each listener for the respective port and protocol setting.
A helyettesítő karakterrel ellátott gazdaneveket többhelyes figyelőben és figyelőként legfeljebb 5 gazdanévben is meghatározhatja. További információ: [helyettesítő karakterek nevei a figyelőben (előzetes verzió)](multiple-site-overview.md#wildcard-host-names-in-listener-preview). A többhelyes figyelő konfigurálásával kapcsolatos további információkért lásd: [többhelyes üzemeltetés Application Gateway a Azure Portal használatával](create-multiple-sites-portal.md). A figyelő létrehozása után társítsa azt egy kérelem-útválasztási szabállyal. Ez a szabály határozza meg, hogy a figyelőre érkező kérés hogyan legyen átirányítva a háttérbe. A kérelem útválasztási szabálya tartalmazza az átirányítani kívánt háttér-készletet is, valamint azt a HTTP-beállítást, amelyben a háttér-port, a protokoll stb. szerepel. ## <a name="request-routing-rules"></a>Kérelemirányítási szabályok A kérelem-útválasztási szabály az Application Gateway egyik fő összetevője, mert meghatározza, hogyan irányíthatja át a forgalmat a figyelőn. A szabály köti a figyelőt, a háttér-kiszolgáló készletet és a háttérbeli HTTP-beállításokat. Amikor egy figyelő fogad egy kérelmet, a kérések útválasztási szabálya továbbítja a kérést a háttérnek, vagy máshová irányítja át. Ha a kérést továbbítják a háttérbe, a kérelem útválasztási szabálya határozza meg, hogy melyik háttér-kiszolgáló készletet szeretné továbbítani. A kérelem útválasztási szabálya azt is meghatározza, hogy a kérésben szereplő fejléceket át kell-e írni. Egy figyelő egyetlen szabályhoz is csatolható. A kérések útválasztási szabályainak két típusa létezik: - **Alapszintű**. A társított figyelőn (például blog.contoso.com/*) lévő összes kérelem a társított backend-készletbe van továbbítva a kapcsolódó HTTP-beállítás használatával. - **Elérésiút-alapú**. Ez az útválasztási szabály lehetővé teszi, hogy a kérelemben szereplő URL-cím alapján a társított figyelőn keresztül átirányítsa a kéréseket egy adott háttér-készletre. 
If the path of a request's URL matches the path pattern in a path-based rule, the rule routes that request. The path pattern is applied only to the URL path, not to its query parameters. If the URL path of a request on a listener doesn't match any of the path-based rules, the request is routed to the default backend pool and HTTP settings. For more information, see [URL-based routing](url-route-overview.md).

### <a name="redirection-support"></a>Redirection support

The request routing rule also allows you to redirect traffic on the application gateway. This is a generic redirection mechanism, so you can redirect to and from any port you define using rules. You can choose another listener as the redirection target (which enables automatic HTTP to HTTPS redirection) or an external site. You can also choose whether the redirection is temporary or permanent, and whether to append the URI path and query string to the redirected URL. For more information, see [Redirect traffic on your application gateway](redirect-overview.md).

### <a name="rewrite-http-headers-and-url"></a>Rewrite HTTP headers and URL

By using rewrite rules, you can add, remove, or update HTTP(S) request and response headers, as well as URL path and query string parameters, as the request and response packets move between the client and backend pools through the application gateway. The headers and URL parameters can be set to static values or to other headers and server variables. This enables important use cases, such as extracting client IP addresses, removing sensitive information about the backend, adding more security, and so on. For more information, see [Rewrite HTTP headers and URL on your application gateway](rewrite-http-headers-url.md).
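The automatic HTTP-to-HTTPS redirection mentioned above can be sketched with the Azure CLI. Every name below (resource group, gateway, and the two listeners) is a hypothetical placeholder, and the commands assume an HTTP listener and an HTTPS listener already exist on the gateway:

```shell
# Hypothetical names: MyResourceGroup, MyAppGateway, httpListener,
# httpsListener, httpToHttps, redirectRule.

# A permanent redirect that targets the HTTPS listener and preserves
# the original URI path and query string.
az network application-gateway redirect-config create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name httpToHttps \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true

# Bind the redirect to the HTTP listener through a basic rule, so all
# plain-HTTP requests are redirected instead of forwarded to a backend.
az network application-gateway rule create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name redirectRule \
  --rule-type Basic \
  --http-listener httpListener \
  --redirect-config httpToHttps
```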
## <a name="http-settings"></a>HTTP settings

The application gateway routes traffic to the backend servers (as specified in the request routing rule that includes the HTTP settings) by using the port number, protocol, and other settings detailed in this component. The port and protocol used in the HTTP settings determine whether the traffic between the application gateway and the backend servers is encrypted (providing end-to-end TLS) or unencrypted.

This component is also used to:

- Determine whether a user session should be kept on the same server, by using [cookie-based session affinity](features.md#session-affinity).
- Gracefully remove backend pool members, by using [connection draining](features.md#connection-draining).
- Associate a custom probe to monitor backend health, set the request timeout, override the host name and path in the request, and specify App Service backend settings with one-click ease.

## <a name="backend-pools"></a>Backend pools

A backend pool routes requests to the backend servers. Backend pools can contain:

- Network interfaces (NICs)
- Virtual machine scale sets
- Public IP addresses
- Internal IP addresses
- FQDNs
- Multitenant backends (such as App Service)

Application gateway backend pool members aren't tied to an availability set. An application gateway can communicate with instances outside of its own virtual network. As a result, the members of a backend pool can be across clusters, across datacenters, or outside Azure, as long as there's IP connectivity. If you use internal IPs as backend pool members, you must use [virtual network peering](../virtual-network/virtual-network-peering-overview.md) or a [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
Virtual network peering is supported and is beneficial for load-balancing traffic in other virtual networks. An application gateway can also communicate with on-premises servers when they're connected over Azure ExpressRoute or VPN tunnels, provided that traffic is allowed.

You can create different backend pools for different types of requests. For example, create one backend pool for general requests, and another backend pool for requests to your application's microservices.

## <a name="health-probes"></a>Health probes

By default, an application gateway monitors the health of all resources in its backend pool and automatically removes unhealthy ones. It then keeps monitoring the unhealthy instances and adds them back to the healthy backend pool once they become available and respond to health probes.

In addition to using the default health probe monitoring, you can also customize the health probe to suit your application's requirements. Custom probes give you more granular control over the health monitoring. When you use custom probes, you can configure a custom host name, URL path, and probe interval; how many failed responses to accept before marking a backend pool instance as unhealthy; custom status codes and a response body match; and so on. We recommend that you configure custom probes to monitor the health of each backend pool. For more information, see [Monitor the health of your application gateway](../application-gateway/application-gateway-probe-overview.md).

## <a name="next-steps"></a>Next steps

Create an application gateway:

* [In the Azure portal](quick-create-portal.md)
* [By using Azure PowerShell](quick-create-powershell.md)
* [By using the Azure CLI](quick-create-cli.md)
---
layout: post
title: "Exploring IMAX VR: virtual-reality gaming in the heart of Siam"
author: nppi3enz
categories: [ lifestyle ]
image: assets/images/28695317_1585694268144602_1586124231_o-1024x576.jpg
---

Come experience virtual reality at IMAX VR on the 5th floor of Siam Paragon.

First, what is VR?
----------------------------

VR stands for Virtual Reality, which translates literally to a simulated world. A VR device projects images onto a screen close to your eyes, so you see the picture in 3D and can look around the scene a full 360 degrees (for sound, though, you still need headphones). Devices range from a Google Cardboard to an Oculus Rift, and, of course, the IMAX VR machines.

So how is VR different from AR?
------------------------------

You've probably heard both VR and AR mentioned a lot. AR stands for Augmented Reality, which takes the real scene in front of you and processes it so that virtual objects can be overlaid on the image accurately. For example, when you point your camera at a room, the system detects the floor and can drop a ball into the picture; that's AR. VR, by contrast, doesn't process the real world at all; everything you see is rendered. Popular uses of AR include AR Emoji on the iPhone X and the game Pokémon Go.

IMAX VR ≠ IMAX
---------------

First of all, the IMAX VR screen is not as big as an IMAX theater screen, and it really has nothing to do with IMAX movies; it's simply a technology that IMAX bought in for people to play with.

How is IMAX VR different from a regular VR headset?
---------------------------------------

If you've ever tried a Google Cardboard or a typical VR headset, you'll know the problem of seeing black borders at the edge of the image: phone screens aren't wide enough, so the corners of your eyes still catch the dark edges. IMAX VR, however, gives you a field of view of up to 210 degrees!

That said, let's get to know IMAX VR a little better and see what it can do.

{% youtube "https://www.youtube.com/watch?v=VP5aptNm4Gc" %}

Ready? Let's explore IMAX VR
----------------------------------------

You can drop by Siam Paragon, 5th floor, the same floor as the Paragon Cineplex movie theater. When you walk in, you'll see this:

![]({{ site.baseurl }}/assets/images/28695317_1585694268144602_1586124231_o-1024x576.jpg)
{: style="text-align: center;"}

Which games are there right now, and how much does IMAX VR cost?
--------------------------------------------------------

Each game calls for different things: some are single-player, some require standing, some are seated, and some need two players. You can ask about each game at the counter. There are currently 7 games, with others rotated in over time so players don't get bored:

* John Wick Chronicles
* Justice League
* Raw Data
* Eagle Flight (recommended for multiplayer)
* Space Flight
* Life of Us (recommended for multiplayer)
* Deadwood Mansion

Prices are as follows. (Deadwood Mansion is a zombie-shooting game that uses the most floor space and supports up to 4 players at once, so it's priced differently from the rest.)

Time to review IMAX VR
---------------------

When you arrive, the staff at the kiosk will ask which game you'd like to play and how many players there are. After you pay, you'll get a slip like the one below to hand over in the next area, called a **Pod**.

![]({{ site.baseurl }}/assets/images/28537869_1585694244811271_1113312350_n.jpg)
{: style="text-align: center;"}
<center>The slip, with a stub for the staff to tear off</center>

In this example, John Wick Chronicles is at Pod 1. When your turn comes up, head to the staff at Pod 1; they'll fit you with the VR headset and headphones, and once you're ready, off you go on your adventure.

How the games felt
---------------------

Let's take them one at a time. I played 3 games. I really want to play Deadwood Mansion, but I haven't had the chance yet; I'll round up a full team first and come back for it.

<center>*The Deadwood Mansion trailer. Don't miss this one if you come as a group!*</center>
{% youtube "https://www.youtube.com/watch?v=pa6kXBw9ve8" %}

### John Wick Chronicles

![]({{ site.baseurl }}/assets/images/JW_ExperiencePoster_2160x3840_V02-169x300.jpg)
{: style="text-align: center;"}

In this game you play as John Wick, out for revenge, and you have to take down every last thug. You get the VR headset, headphones, and a gun for the shooting (a magic gun, apparently, with unlimited ammo, haha). You stand for the whole game, so my legs were a bit sore afterward and I worked up quite a sweat, but you don't have to walk anywhere (you'd bump into the walls; the play space is small). The visuals are game graphics, not as detailed or realistic as a PS4, but if you like shooters, don't miss this one. And if you're wondering whether the game has anything to do with the John Wick movies: no! It's just a shooter for fun. If you came hoping for bonus story content, there definitely isn't any, haha.

### Space Flight

![]({{ site.baseurl }}/assets/images/SpaceFlight_IMAXVR_Experience_Poster_2160x3840_V01-169x300.jpg)
{: style="text-align: center;"}

A change of pace: sit down and take in the view of space (finally, one where you get to sit~). This isn't a game but a VR experience that carries you out beyond Earth and lets you look back down at our planet. It is genuinely beautiful. If traveling to space is a lifelong dream of yours, this one is perfect. Once I've saved up the 30 million, I'll go sit in the real thing. ;____;b

### Justice League

![]({{ site.baseurl }}/assets/images/JLVR_IMAXVR_Experience_Poster_2160x3840_3_PORT-169x300.jpg)
{: style="text-align: center;"}

A game you play seated, with controllers (it's basically an HTC Vive), working through mini-games. There are 5 of them, one per hero. Today I played:

* Batman: drive a fancy car and destroy the enemies. Steer left and the seat swivels along with you. Very realistic.
* Superman: fly and destroy enemies, and mind you don't hit the rocks. The seat shakes hard, haha.

In short, IMAX VR is good fun, well suited to anyone who wants to play games where you're really inside the world, and it's even more fun with friends along. One warning: don't play too many games back to back, or you may get dizzy. Take it easy; one game a day is fine. Oh, and if you use AIS or have an M Gen card, don't forget to bring it for an extra discount. There are still plenty of games I haven't explored; whenever I go back, I'll update this post.

> Note: This article is a consumer review. The reviewer purchased the products or paid for the services personally; no sponsor provided free products or services, and the reviewer received no compensation of any kind for writing this review.
---
anchor: "Contact"
header: "How to contact me"
subheader: "If you are interested, please get in touch by email."
#telephone: 03-0000-0001
email: kizak.aiit[at]gmail.com
---
---
title: Manage access to Azure resources for external users using RBAC | Microsoft Docs
description: Learn how to manage access to Azure resources for users external to an organization by using role-based access control (RBAC).
services: active-directory
documentationcenter: ''
author: rolyon
manager: mtillman
editor: ''
ms.assetid: ''
ms.service: role-based-access-control
ms.devlang: ''
ms.topic: conceptual
ms.tgt_pltfrm: ''
ms.workload: identity
ms.date: 03/20/2018
ms.author: rolyon
ms.reviewer: skwan
ms.custom: it-pro
ms.openlocfilehash: d919453816436366c00dde506210a2ed38cc69b7
ms.sourcegitcommit: d4dfbc34a1f03488e1b7bc5e711a11b72c717ada
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 06/13/2019
ms.locfileid: "65952215"
---
# <a name="manage-access-to-azure-resources-for-external-users-using-rbac"></a>Manage access to Azure resources for external users using RBAC

Role-based access control (RBAC) enables better security management for large organizations, and for SMBs working with external collaborators, vendors, or freelancers who need access to specific resources in the environment, but not necessarily to the entire infrastructure or to any billing-related scopes. RBAC offers the flexibility of owning an Azure subscription managed by the administrator account (the Service Administrator role at the subscription level) and inviting multiple users to work under the same subscription, without granting all of them administrative rights.

> [!NOTE]
> Office 365 subscriptions, or Azure Active Directory licenses (for example, access to the Azure Active Directory admin center provisioned from Microsoft 365), are not eligible for using RBAC.
## <a name="assign-rbac-roles-at-the-subscription-scope"></a>Assign RBAC roles at the subscription scope

There are two common examples of when RBAC is used (though it's not limited to these):

* Inviting external users from outside the organization (users who aren't part of the admin user's Azure Active Directory tenant) to manage certain resources or the whole subscription
* Working with users inside the organization (they're part of the user's Azure Active Directory tenant) who belong to different teams or groups and need granular access either to the whole subscription, or to certain resource groups or resource scopes in the environment

## <a name="grant-access-at-a-subscription-level-for-a-user-outside-of-azure-active-directory"></a>Grant access at a subscription level for a user outside of Azure Active Directory

RBAC roles can be granted only by **owners** of the subscription. Therefore, the admin must be signed in as a user who was pre-assigned this role or who created the Azure subscription.

After signing in, the administrator selects "Subscriptions" in the Azure portal and chooses the desired one.

![Subscriptions blade in the Azure portal](./media/role-assignments-external-users/0.png)

If the admin user purchased the Azure subscription, by default that user shows up as the **Account Administrator**, this being the subscription role. For more information about Azure subscription roles, see [Add or change Azure subscription administrators](../billing/billing-add-change-azure-subscription-administrator.md).

In this example, the user "alflanigan@outlook.com" is the **Owner** of the "Free Trial" subscription in the AAD tenant "Default Tenant Azure". Because this user created the Azure subscription with the Microsoft account "Outlook" (Microsoft accounts include Outlook, Live, and so on), the default domain name for all users added in this tenant is **"@alflaniganoutlook.onmicrosoft.com"**.
By design, the syntax of the new domain is formed by joining the username and the domain name of the user who created the tenant, and adding the **".onmicrosoft.com"** extension. Additionally, users can sign in with custom domains added in the tenant after the new tenant validates them. For more information about validating a custom domain in an Azure Active Directory tenant, see [Add a custom domain name to your directory](../active-directory/fundamentals/add-custom-domain.md). In this example, the "Default Tenant Azure" directory contains only users whose domain name is "@alflaniganoutlook.onmicrosoft.com".

After selecting the subscription, the admin user must click **Access control (IAM)**, and then **Add**.

![Access control (IAM) feature in the Azure portal](./media/role-assignments-external-users/1.png)

![Adding a new user with the Access control (IAM) feature in the Azure portal](./media/role-assignments-external-users/2.png)

The next step is to choose the role to assign and the user the RBAC role will be assigned to. In the **Role** drop-down menu, the admin user sees only the built-in RBAC roles that are available in Azure. For detailed descriptions of each role and its assignable scopes, see [Built-in roles for Azure resources](built-in-roles.md).

The admin user then needs to add the email address of the external user. The expected behavior is for the external user not to show up in the existing tenant. After the external user has been invited, they show up under **Subscriptions > Access control (IAM)** together with all the current users that are assigned an RBAC role at the subscription scope.

![Adding permissions for a new RBAC role](./media/role-assignments-external-users/3.png)

![List of RBAC roles at the subscription level](./media/role-assignments-external-users/4.png)

The user "chessercarlton@gmail.com" has been invited as an **Owner** of the "Free Trial" subscription.
After the invitation is sent, the external user receives an email confirmation containing an activation link.

![Email invitation for an RBAC role](./media/role-assignments-external-users/5.png)

Being external to the organization, the new user doesn't have any existing attributes in the "Default Tenant Azure" directory. They're created after the external user gives consent to being recorded in the directory associated with the subscription and to being assigned a role.

![Email invitation message for an RBAC role](./media/role-assignments-external-users/6.png)

From now on, the external user shows up in the Azure Active Directory tenant, which can be seen in the Azure portal.

![Users blade of Azure Active Directory in the Azure portal](./media/role-assignments-external-users/7.png)

In the **Users** view, external users can be recognized in the Azure portal by a different icon type. However, granting an external user **Owner** or **Contributor** access at the **subscription** scope doesn't give them access to the admin user's directory, unless a **Global administrator** allows it. In the user properties, under **User type**, two common parameters can be identified: **Member** and **Guest**. A member is a user registered in the directory, while a guest is a user invited to the directory from an external source. For more information, see [How do Azure Active Directory admins add B2B collaboration users](../active-directory/active-directory-b2b-admin-add-users.md).

> [!NOTE]
> Make sure that, after entering the credentials in the portal, the external user selects the correct directory to sign in to. The same user can have access to multiple directories, and can select one of them by clicking the username in the upper-right corner of the Azure portal and then choosing the appropriate directory from the drop-down list.

While a guest in the directory, the external user can manage all the resources of the Azure subscription but can't access the directory itself.
![Restricted access to Azure Active Directory in the Azure portal](./media/role-assignments-external-users/9.png)

Azure Active Directory and an Azure subscription don't have a parent-child relationship the way other Azure resources (for example, virtual machines, virtual networks, web apps, and storage) have with an Azure subscription. All of the latter are created in, managed from, and billed under an Azure subscription, while an Azure subscription is used to manage access to an Azure directory. For more information, see [How an Azure subscription is related to Azure AD](../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).

Of all the built-in RBAC roles, **Owner** and **Contributor** offer full management access to all the resources in the environment, the difference being that a Contributor can't create and delete new RBAC roles. Built-in roles like **Virtual Machine Contributor** offer full management access only to the resources indicated by their name, regardless of the **resource group** they're created in.

Assigning the built-in RBAC role of **Virtual Machine Contributor** at the subscription level means that the user assigned the role:

* Can view all virtual machines, regardless of their deployment date and the resource groups they're part of
* Has full management access to the virtual machines in the subscription
* Can't view any other resource types in the subscription
* Can't make any changes in terms of billing

## <a name="assign-a-built-in-rbac-role-to-an-external-user"></a>Assign a built-in RBAC role to an external user

For a different test scenario, the external user "alflanigan@gmail.com" has been added as a **Virtual Machine Contributor**.

![Virtual Machine Contributor built-in role](./media/role-assignments-external-users/11.png)

The normal behavior for an external user with this built-in role is to be able to see and manage only the virtual machines, plus the adjacent Resource Manager resources needed when deploying them.
By design, these limited roles offer access only to the corresponding resources created in the Azure portal.

![Overview of the Virtual Machine Contributor role in the Azure portal](./media/role-assignments-external-users/12.png)

## <a name="grant-access-at-a-subscription-level-for-a-user-in-the-same-directory"></a>Grant access at a subscription level for a user in the same directory

The process flow is identical to adding an external user, both from the perspective of the admin granting the RBAC role and from that of the user being granted access to the role. The difference here is that the invited user won't receive any email invitations, as all the resource scopes within the subscription are available on the dashboard after signing in.

## <a name="assign-rbac-roles-at-the-resource-group-scope"></a>Assign RBAC roles at the resource group scope

Assigning an RBAC role at a **resource group** scope is an identical process to assigning the role at the subscription level, for both types of users: external or internal (part of the same directory). The users who are assigned the RBAC role see in their environment only the resource group they've been assigned access to, under the **Resource groups** icon in the Azure portal.

## <a name="assign-rbac-roles-at-the-resource-scope"></a>Assign RBAC roles at the resource scope

Assigning an RBAC role at a resource scope in Azure is a similar process to assigning the role at the subscription level or the resource group level, following the same workflow for both scenarios. Again, the users who are assigned the RBAC role can see only the items they've been granted access to, either on the **All resources** tab or directly on their dashboard.

A key aspect of RBAC at both the resource group scope and the resource scope is that users must make sure they sign in to the correct directory.
![Directory sign-in in the Azure portal](./media/role-assignments-external-users/13.png)

## <a name="assign-rbac-roles-for-an-azure-active-directory-group"></a>Assign RBAC roles for an Azure Active Directory group

Managing all the scenarios using RBAC at the three different scopes in Azure offers the advantage of deploying and managing various resources as an assigned user, without the need to manage personal subscriptions. Regardless of whether the RBAC role is assigned at a subscription, resource group, or resource scope, all the resources created by the assigned users are billed under the Azure subscription they have access to. This way, the users who have administrative permissions for the entire Azure subscription have a complete billing overview of the consumption, regardless of who is managing the resources.

For larger organizations, RBAC roles can be applied in the same manner to Azure Active Directory groups, considering that the admin user may want to grant granular access to teams or entire departments rather than to each user individually, which is extremely time- and management-effective in this scenario. To illustrate this example, the **Contributor** role has been added at the subscription level to one of the groups in the tenant.

![Add an RBAC role for AAD groups](./media/role-assignments-external-users/14.png)

These groups are security groups provisioned and managed only within Azure Active Directory.
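A role assignment like the group assignment described above can also be made from the command line. As a sketch (the object ID and subscription ID below are placeholder values, not taken from the article):

```shell
# Placeholder IDs: replace with the AAD group's object ID and your
# subscription ID. Granting Contributor at the subscription scope.
az role assignment create \
  --assignee "22222222-2222-2222-2222-222222222222" \
  --role "Contributor" \
  --scope "/subscriptions/11111111-1111-1111-1111-111111111111"
```

The `--scope` argument is what selects the level of the assignment: a subscription path as above, or a longer path ending in a resource group or an individual resource for the narrower scopes discussed earlier.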
# Problem Definition

## Description

Many of today's large scale concurrent systems rely on cloud computing platforms such as AWS or Google Cloud. Describe how would you design a service using a common cloud computing platform such as AWS, to scale incrementally from supporting hundreds of users to millions of users. For the purpose of the interview, assume the system generally receives user requests, does some processing, stores user data, and returns the results.

#####################
This editor is synced in real time with your peer. Use it to share thoughts and resources, such as:

- Features scope
- API design
- Pseudo code for specific components
- Data model/schema
- Back-of-the-envelope calculations
- Reference links
- Link to whiteboard or diagram such as https://sketchboard.me/new

Good luck!
#####################

1. Assumptions
   1. Store data
   1. Read/write data - read/write queries.
   1. Design for 100s of users first.
   1. Re-design for millions of users.
   1. No UI
   1. Data has some patterns - non-blob data.
      1. Some data is metadata
      1. Some data is raw binary data
2. High-level abstractions
   1. API (read/write)
   1. Web server - host the EP(s)
   1. Storage
3. Design for 100s of users
   1. Storage solution
      1. SQL BE store for metadata
      1. Blob storage (Azure) or S3 (AWS)
   1. API hosting
      1. Web server will also host BL for API - read/write from BE storage.
   1. Testing for determining issues with increasing scale
      1. Need load testing.
      1. Need chaos testing - break the platform hosting the system - use platform (Azure/AWS) APIs to break items.
   1. Scale issues
      1. Web server connection limit
      1. Calls go all the way to BE storage - no fast in-memory caching.
   1. Issues with reliability
      1. What happens when a server goes down.
      1. What happens when BE storage is non-responsive.
      1. What happens when indeterminate transient problems happen - mostly network issues.
4. Calculations
   1. BE Storage - 10 GB.
   1. In-memory - 25% of BE storage.
5. Design for 1000s of users (high-level)
   1. Need more web servers (system gateways).
      1. Add LB before web server layer.
   1. Separate out BL into application servers.
      1. Add LB before application server layer.
   1. Add in-memory caching layer in app servers for storage.
      1. In-memory store will hold "hot" data.
      1. BE storage will hold "cold" data.
      1. "hot" - data that is being actively accessed.
      1. When data is no longer "hot", flush to BE storage - LRU.
   1. No scaling at BE storage.
6. Design for millions of users
   1. Need more web servers (system gateways).
      1. Add LB before web server layer.
   1. Separate out BL into application servers.
      1. Add LB before application server layer.
   1. Add in-memory cache servers.
   1. Partitioning
      1. Both in-memory cache and BE storage
      1. Hash by UserId
      1. Implement range partitioning or consistent hashing at application server layer.
      1. Some logic in the in-memory caching servers to also correctly direct to the right BE storage partition.
   1. Async patterns using message brokers.
      1. API gateway that receives requests and sends them to the MB.
      1. Message broker like Kafka or Event Hubs that receives all incoming requests.
      1. Queue/message processor services that take the MB message and send it to BE services.
      1. When a message has been processed from the queue it will notify another service (ACK service) with the request ID of the processed message. Catch - this is easy to do using web sockets or some other protocol; with HTTP we will have to use long polling.
      1. Does not provide a data consistency guarantee!
      1. 1st ACK on request being queued. 2nd ACK on data in request making it to BE store.
   1. AutoScaling
      1. Use cloud provider solution - auto-scaling based on CPU, network I/O, memory, etc.
      1. Use some consensus algorithm platform like Azure SF, Apache Z.

## Notes

1. [Pramp - The Complete System Design Interviewer Guide](https://medium.com/@pramp/the-complete-system-design-interviewer-guide-e5d273724db8)
1. [GitHub - Donne Martin system-design-primer](https://github.com/donnemartin/system-design-primer/tree/master/solutions/system_design/scaling_aws#design-a-system-that-scales-to-millions-of-users-on-aws)
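The partitioning idea in the outline (hash by UserId, with consistent hashing at the application-server layer) can be sketched as a toy in Python. The shard names and user IDs below are made up for illustration; the point the sketch demonstrates is that adding a shard moves only a minority of keys, unlike plain modulo hashing:

```python
import bisect
import hashlib


def _hash(key: str) -> int:
    """Map a key to a point on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class ConsistentHashRing:
    """Toy consistent-hash ring: each storage partition owns several
    virtual points on the ring; a user ID maps to the first partition
    point clockwise from its own hash."""

    def __init__(self, partitions, vnodes=100):
        self._points = sorted(
            (_hash(f"{p}#{i}"), p) for p in partitions for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._points]

    def partition_for(self, user_id: str) -> str:
        idx = bisect.bisect(self._keys, _hash(user_id)) % len(self._keys)
        return self._points[idx][1]


ring = ConsistentHashRing(["shard-a", "shard-b", "shard-c"])
before = {f"user-{n}": ring.partition_for(f"user-{n}") for n in range(1000)}

# Growing the ring by one shard should remap only a fraction of the
# keys (roughly 1/4 here), which is the benefit over modulo hashing.
grown = ConsistentHashRing(["shard-a", "shard-b", "shard-c", "shard-d"])
moved = sum(1 for uid, p in before.items() if grown.partition_for(uid) != p)
print(moved)  # a minority of the 1000 keys move
```

The same `partition_for` lookup would be used by both the in-memory cache servers and the application servers, so a request for a given UserId is routed to the cache node and the BE storage partition consistently.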
---
title: Should sport and politics ever mix?
tags:
- Short
- politics
- sport
---

I really like the style the BBC have started adopting for some of their articles: splitting up the article with clear sections and also integrating video. Very slick. And for the record, yes, I think they should mix, as long as it doesn't impact the sport.
+++
title = "Manage a Project"
date = 2018-04-28T12:07:15+02:00
weight = 10
+++

Kubermatic organizes your clusters into projects. Projects allow you to share clusters with other users and make it easier to collaborate.

### Step 1 – Create a Project

After you log in to Kubermatic, you will be greeted with the list of all projects you created or have been given access to. When first using Kubermatic, the list will be empty and you will need to create your first project.

![Empty project list](/img/kubermatic/v2.14/getting_started/manage_projects/projects-empty.png)

Click on the button above the table or the link below the table to create your first project. Give it a descriptive name and click "Save".

![Add project dialog](/img/kubermatic/v2.14/getting_started/manage_projects/projects-add.png)

After a short moment, your project will be ready.

### Step 2 – Select Your Current Project

To manage clusters, you need to select the project in which you would like to work. This can be achieved by either clicking the project in the project list or by using the dropdown in the top-left corner (it is visible only when you are already in one of the projects).

![Project list](/img/kubermatic/v2.14/getting_started/manage_projects/projects-list.png)

After selecting the current project, the menu items for managing clusters, members and service accounts become active.

### Step 3 – Create a Cluster

Refer to the [cluster documentation](../create_cluster) for more information on how to create and manage clusters.

### Step 4 – Manage Project Members

After selecting a project as in step 2, click on "Members" in the menu on the left to see the list of active members. If you are the owner, you can add and remove members in your project.

To add a member, just like adding a project or a cluster, use the button above the member list. Add the e-mail address *of an existing user* and define their role in the project.

![Add member dialog](/img/kubermatic/v2.14/getting_started/manage_projects/projects-member.png)

* **Editors** have write access to your project and can manage clusters, nodes and SSH keys.
* **Viewers** have read-only access and can only view the existing resources.

You can change the role for a user or remove them altogether at any time. After adding a user to a project, the project will immediately show up in the user's project list.

### Step 5 – Manage Project Service Accounts

After selecting a project as in step 2, click on "Service Accounts" in the menu on the left to see the list of active service accounts. If you are the owner of the project, you can add and remove service accounts and manage the tokens that belong to them.

To add a service account, just like adding a project or a cluster, use the button above the service account list. Add the name and select the group to assign.

![Add service account dialog](/img/kubermatic/v2.14/getting_started/manage_projects/projects-sa.png)

* **Editors** have write access to your project and can manage clusters, nodes and SSH keys.
* **Viewers** have read-only access and can only view the existing resources.

You can change the role for a service account or remove them altogether at any time.
# Pancake: an amendable piano synthesizer

## How to Run

1. Clone the repo
2. In the repo directory, make an empty build directory
3. Enter the build directory and run `cmake ..`
4. Run `make`
5. Run `./src/frontend/synthesizer-test [device_prefix] [midi_device] [sample_directory]`

If you're working on the snr-piano machine:

- device_prefix: Scarlett
- midi_device: /dev/midi2
- sample_directory: /usr/local/share/slender/samples/

## File Overview

- `synthesizer-test`: Entry point to the program. It runs an event loop which reads in new midi data, initiates the processing of midi data into audio, and sends the generated audio to the playback device.
- `midi_processor`: Class that reads data in from the midi device into a buffer. Midi data consists of event type, event note, and event velocity, where an "event" is something like the press of a key, the release of a key, a change in position of a pedal, etc. `midi_processor` provides functions to access the oldest unprocessed event.
- `synthesizer`: Class that handles the conversion of midi events into audio data.
- `note_repository`: Class that manages a list of `NoteFiles`.
- `note_files`: Class that holds all of the WAV files for a single note. (Each note has a low velocity, medium velocity, and high velocity audio file, as well as a release audio file.)

[![Compiles](https://github.com/stanford-stagecast/pancake/workflows/Compile/badge.svg?event=push)](https://github.com/stanford-stagecast/pancake/actions)
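The (event type, note, velocity) structure described for `midi_processor` follows the standard MIDI channel-message layout, which can be illustrated with a small sketch. This is not Pancake's C++ code — just a hedged Python illustration of the byte format; real MIDI streams also carry running status and system messages, which this ignores.

```python
def parse_midi_event(msg: bytes):
    """Split a 3-byte MIDI channel message into its parts.

    The status byte's high nibble is the event kind (0x90 note-on,
    0x80 note-off, ...) and its low nibble is the channel; the next
    two bytes are the note number and velocity.
    """
    status, note, velocity = msg[0], msg[1], msg[2]
    kinds = {0x80: "note_off", 0x90: "note_on", 0xB0: "control_change"}
    event_type = kinds.get(status & 0xF0, hex(status & 0xF0))
    channel = status & 0x0F
    return event_type, channel, note, velocity

# Middle C (note 60) pressed on channel 0 at velocity 100:
event = parse_midi_event(bytes([0x90, 60, 100]))
```

Note that by MIDI convention a note-on with velocity 0 is also treated as a note release, which a real processor has to handle.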
# PyWebPerf

Py Web Frameworks performance

```
cd docker
make
```
---
title: rodovia
categories: [writings]
comments: true
---

It was in a white (or maybe yellow) Opala that I made my first crossing into the unknown. From Indaiatuba to Aparecida de Goiânia is more or less twelve hours. Five children, two adults and a dog, Fofa - an old lady who came to pass away a few years later, poisoned by a neighbor - or at least that was the theory among the kids on the street. That neighbor lived in a really scary house on our street: an endless thicket, loofah plants everywhere, closed off by a gate already consumed by moss and creeping vines.

I liked our car adventures. I always ended up distracted from the reasons we were fleeing. But I liked that traveling energy. Packing the car, getting ready, and watching the engine gain ground in that endless darkness. Sometimes we slept on the road, stopping at gas stations, sleeping sometimes on the floor, sometimes on a mattress. It depended on the hurry.

Sometimes my father would turn off the headlights on the highway and it was just us and the nothing. Funny that that dark didn't scare me - I even felt more comfort in it than in the light. In the light I sometimes noticed my father almost asleep at the wheel, the car drifting from one side to the other in a state of semi-consciousness. Sometimes mom would wake up startled and slap his arm to wake him. That scared me more than the dark. But sometimes it seemed the Opala knew where to go.

People say that when you're a trucker, the engine is an extension of your body, from the steering wheel to the motor. Truckers are a crazy breed. How can Brazil have such a poor logistics infrastructure? The guys cut across the country on tires. Enormous, extremely heavy trucks. Sometimes we also traveled by truck, but it was far too frightening. Some of the older truck models have air brakes. When you go down a hill you no longer need the engine - you coast in neutral. Except that sometimes the brakes give out. It's sheer desperation, foot pumping, begging the brakes to wake up.

Only my father never despaired. But I never took my eyes off the road, surely thinking I might see death coming. Now I wonder whether it was on the road that I discovered the fear of dying, or whether it was already there. At the same time, it was on the road that I felt most alive. A euphoria I only felt when leaving: watching the trees go by, the landscape changing, the road catching fire.

In Goiás the sun is so strong that sometimes you can even fry an egg on the asphalt. So hot that there are places where the water springs out hot. They built Caldas Novas and Rio Quente to sell as tourism the water that is so hot it cooks an egg - and also the tourists who come, in the illusion of cooling off from the infernal summer heat, to a pool of hot water; water that cooks an egg cooks you too. Pure desperation.

I always liked the cerrado, the plain where you see the whole horizon and think it's infinite. The small trees, dry and stunted. The cerrado's trees are giants too, but underground: the roots go deep until they reach the water table, forming a subterranean forest. The cerrado is a life somewhat parched and twisted. The fruits are rich, exotic, full of seeds and pits. Its flowering is made of little birds and fire, generating life through fertility and death, through rain and drought.

The sky is of a vastness without measure. The sun, when it goes, seems to fill the whole sky. A deep orange that blends and fades, leaving slowly, sinking into the horizon until the very last bit, as if the day doesn't want to go. It wants to keep burning and casting rays over the field. It wants to pause time, stretch its heat for days on end. At some point it gives in and goes away, only to come back the next day.
# Machine Learning Nanodegree Capstone Project

Note: Currently converting what was a Jupyter-notebook-based project to a script-based project. The work is in progress.

In this project a classifier was developed for predicting mortality for ICU patients given data from the first 24 hours of ICU admission. Patient data was collected from MIMIC-III (Medical Information Mart for Intensive Care III), a large, freely available database comprising de-identified health-related data associated with patients who stayed in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012. Data was pre-processed and features were selected. These features were used to train and test a number of candidate machine learning classifiers. Classifiers were optimized across their respective parameter spaces as well as across input feature set and training/testing data set sizes. A classifier was selected from the group which provided the best performance with regard to predicting patient mortality.

## Getting Started

Clone the repo and install in a local directory using `pip install -e .`

Data was queried from the MIMIC database in 3 groups: chart events, lab events and patient demographics. Queries were saved as CSV files CHART_EVENTS_FIRST24.csv, LAB_EVENTS_FIRST24.csv and PTNT_DEMOG_FIRST24.csv. These groups of data were pre-processed separately in iPython notebooks with corresponding names: CHARTEVENTS_FIRST24.ipynb, LABEVENTS_FIRST24.ipynb and PATIENT_DEMOGRAPHICS_FIRST24.ipynb. Pre-processing steps also included feature selection, which used chi2 scores and corresponding p-values to select the features with the highest correlation with the outcomes. Selected features and corresponding feature selection scores were exported from each notebook and saved in the /features folder.
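The chi2 scoring used for feature selection can be sketched in pure Python for the 2x2 (binary feature vs. binary outcome) case. The notebooks themselves presumably use a library routine such as scikit-learn's `SelectKBest` with `chi2`; this is only a hedged illustration of the statistic being ranked on, with toy data.

```python
def chi2_score(feature, outcome):
    """Chi-squared statistic for a binary feature vs. a binary outcome.

    Higher scores mean the feature's distribution differs more from what
    independence from the outcome would predict.
    """
    n = len(feature)
    # 2x2 contingency table of observed counts.
    obs = [[0, 0], [0, 0]]
    for f, o in zip(feature, outcome):
        obs[f][o] += 1
    row = [sum(obs[i]) for i in range(2)]
    col = [obs[0][j] + obs[1][j] for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            if expected:
                stat += (obs[i][j] - expected) ** 2 / expected
    return stat

# A feature perfectly aligned with the outcome scores high...
high = chi2_score([1, 1, 1, 0, 0, 0], [1, 1, 1, 0, 0, 0])
# ...while a constant, uninformative feature scores zero.
low = chi2_score([1, 1, 1, 1, 1, 1], [1, 1, 1, 0, 0, 0])
```

Ranking features by this score (or by the corresponding p-value) and keeping the top k is exactly the selection scheme the pre-processing notebooks describe.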
A fourth iPython notebook, ICU_MORTALITY_FIRST24.ipynb, imports the selected features and scores, recombines and ranks the combined features, and uses the top 20 to train, test and optimize candidate classifiers.

### Prerequisites

Code was written in Python 2.7 installed using Anaconda.

### Pre-Processing

**Note: The CHART_EVENTS_FIRST24.csv file is too large to store on GitHub so is currently unavailable. CHARTEVENTS_FIRST24.ipynb will not run properly, but the previously generated output files containing the chart_events features are in the repository and can be imported at the next stage.**

To generate the results from the raw .csv files, first run all code in the following three iPython notebooks:

* CHARTEVENTS_FIRST24.ipynb
* LABEVENTS_FIRST24.ipynb
* PATIENT_DEMOGRAPHICS_FIRST24.ipynb

These notebooks will generate the selected features and corresponding scores. The order in which they are run is not important.

### Training, Testing and Optimizing Classifiers

To complete feature selection and to train, test and optimize classifiers, run all code in:

* ICU_MORTALITY_FIRST24.ipynb

**Note: While the code block that optimizes the rest of the candidate classifiers can be run in a reasonable amount of time, the block that optimizes the SVC classifier takes a VERY long time. Optimized classifiers, optimized parameters and classifier scores were exported using pickle.dump to Optimized_Classifiers.txt. Code for reading in the optimized classifier info can be found, commented out, below the optimization blocks. If one were interested in saving time, one might skip the optimization code and simply upload the optimized classifier data.**

The output files from the pre-processing stages are included in the repository, so one could begin directly with the ICU_MORTALITY_FIRST24.ipynb file.

## Authors

* **Rob Beetel** - *Initial work* - [RJBeetel3](https://github.com/RJBeetel3)

## License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.

## Acknowledgments

Many thanks to the people who made and maintain the MIMIC-III database, without which none of this would have been possible. Very powerful things can be done with this type of data.

Citations:

MIMIC-III, a freely accessible critical care database. Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. Scientific Data (2016). DOI: 10.1038/sdata.2016.35. Available at: http://www.nature.com/articles/sdata201635

Pollard, T. J. & Johnson, A. E. W. The MIMIC-III Clinical Database http://dx.doi.org/10.13026/C2XW26 (2016).

Also special thanks to the people at YerevaNN Research Labs, whose project I looked to for guidance on feature selection/mining.

https://github.com/YerevaNN
http://yerevann.com/
---
title: Resolving cost prices for estimates and actuals
description: This article explains how cost prices are resolved for estimates and actuals.
author: rumant
ms.date: 04/09/2021
ms.topic: article
ms.reviewer: johnmichalak
ms.author: rumant
ms.openlocfilehash: af17712f0aef4fe3e6e758edd976cc377e90631d
ms.sourcegitcommit: 6cfc50d89528df977a8f6a55c1ad39d99800d9b4
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 06/03/2022
ms.locfileid: "8919967"
---

# <a name="resolving-cost-prices-for-estimates-and-actuals"></a>Resolving cost prices for estimates and actuals

_**Applies to:** Project Operations for scenarios based on stocked/non-stocked resources_

To resolve the cost prices and the cost price list for estimates and actuals, the system uses the information in the **Date**, **Currency**, and **Contracting unit** fields of the related project. After the cost price list has been resolved, the application resolves the cost rate.

## <a name="resolving-cost-rates-on-actual-and-estimate-lines-for-time"></a>Resolving cost rates on actual and estimate lines for time

Estimate lines for time refer to the quote and contract line details for time and the resource assignments on a project. After a cost price list has been resolved, the system uses the **Role**, **Resourcing company**, and **Resourcing unit** fields on the estimate line for time to match them against the role price lines in the price list. This match assumes that you are using the out-of-the-box pricing dimensions for labor cost.

If you have configured the system to match on fields other than **Role**, **Resourcing company**, and **Resourcing unit**, a different combination is used to retrieve a matching role price line.

If the application finds a role price line that has a cost rate for the combination of **Role**, **Resourcing company**, and **Resourcing unit**, that cost rate is used as the default. If the application can't exactly match the combination of the **Role**, **Resourcing company**, and **Resourcing unit** values, it retrieves role price lines that have a matching role value but null values for **Resourcing unit** and **Resourcing company**. After a role price record with a matching role value is found, the cost rate from that record is used as the default.

> [!NOTE]
> If you configure a different prioritization of **Role**, **Resourcing company**, and **Resourcing unit**, or if you have other dimensions with a higher priority, this behavior changes accordingly. The system retrieves role price records with values that match each of the pricing dimension values in order of their priority, where lines contain null values for the dimensions that come last in the priority order.

## <a name="resolving-cost-rates-on-actual-and-estimate-lines-for-expense"></a>Resolving cost rates on actual and estimate lines for expense

Estimate lines for expenses refer to the quote and contract line details for expenses and the expense estimate lines on a project.

After a cost price list has been resolved, the system uses a combination of the **Category** and **Unit** fields on the estimate line for an expense to match **Category price** lines in the resolved price list. If the system finds a category price line that has a cost rate for the combination of **Category** and **Unit**, that cost rate is used as the default. If the system can't match the **Category** and **Unit** values, or if it finds a matching category price line but the pricing method isn't **Price per unit**, the cost rate defaults to zero (0).

## <a name="resolving-cost-rates-on-actual-and-estimate-lines-for-material"></a>Resolving cost rates on actual and estimate lines for material

Estimate lines for material refer to the quote and contract line details for materials and the material estimate lines for a project.

After a cost price list has been resolved, the system uses a combination of the **Product** and **Unit** fields on the estimate line for a material estimate to match **Price list item** lines in the resolved price list. If the system finds a product price line that has a cost rate for the **Product** and **Unit** field combination, that cost rate is used as the default. If the system can't find a match for the **Product** and **Unit** values, the unit cost defaults to zero. This default also occurs when there is a matching price list item line, but the pricing method is based on a standard cost or current cost that isn't defined on the product.

[!INCLUDE[footer-include](../includes/footer-banner.md)]
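The exact-match-then-null-fallback resolution described for role price lines can be sketched as follows. This is a simplified illustration with hypothetical data, not the actual Project Operations implementation.

```python
def resolve_cost_rate(role_price_lines, role, company, unit):
    """Pick a cost rate: exact match on all dimensions first, then fall
    back to a line matching the role with company/unit left null (None)."""
    for line in role_price_lines:
        if (line["role"], line["company"], line["unit"]) == (role, company, unit):
            return line["rate"]  # exact match on all pricing dimensions
    for line in role_price_lines:
        if line["role"] == role and line["company"] is None and line["unit"] is None:
            return line["rate"]  # fallback: role-only price line
    return 0.0  # no match: default the rate to zero

price_lines = [
    {"role": "Developer", "company": "Contoso US", "unit": "US", "rate": 85.0},
    {"role": "Developer", "company": None, "unit": None, "rate": 70.0},
]

exact = resolve_cost_rate(price_lines, "Developer", "Contoso US", "US")
fallback = resolve_cost_rate(price_lines, "Developer", "Contoso IN", "IN")
```

A custom dimension prioritization would change the order in which the fallback loops relax dimensions, but the shape of the lookup stays the same.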
ZFS Prune Snapshots Changes
===========================

Not Yet Released
----------------

- Add zfs binary check ([#8](https://github.com/bahamas10/zfs-prune-snapshots/pull/8))

`v1.1.0`
--------

- Support suffix matching (-s) (e7aa72160f8)

`v1.0.1`
--------

- Allow passing DESTDIR (6de152a168)

`v1.0.0`
--------

- Initial Release
# Laravel.io Community Portal

[![CircleCI](https://circleci.com/gh/laravelio/portal/tree/master.svg?style=svg)](https://circleci.com/gh/laravelio/portal/tree/master) [![StyleCI](https://styleci.io/repos/12895187/shield?branch=master)](https://styleci.io/repos/12895187) [![Laravel Version](https://shield.with.social/cc/github/laravelio/portal/master.svg?style=flat-square)](https://packagist.org/packages/laravel/framework)

This is the repository for the [Laravel.io](http://laravel.io) community portal. The code is entirely open source and licensed under [the MIT license](license.md). We welcome your contributions but we encourage you to read [the contributing guide](contributing.md) before creating an issue or sending in a pull request. Read the installation guide below to get started with setting up the app on your machine. We hope to see your contribution soon!

## Table of Contents

- [Requirements](#requirements)
- [Installation](#installation)
- [Maintainers](#maintainers)
- [Contributing](#contributing)
- [Code of Conduct](#code-of-conduct)
- [Security Vulnerabilities](#security-vulnerabilities)
- [License](#license)

## Requirements

The following tools are required in order to start the installation.

- [VirtualBox](https://www.virtualbox.org/)
- [Vagrant](https://www.vagrantup.com/)
- [Composer](https://getcomposer.org/download/)
- PHP >= 7.1

## Installation

> Note that you're free to adjust the `~/Sites/laravelio` location to any directory you want on your machine.

1. Clone this repository: `git clone git@github.com:laravelio/laravel-io.git ~/Sites/laravelio`
2. Run `composer install && homestead make --no-after`
3. Run `vagrant up`
4. SSH into your Vagrant box, go to `/home/vagrant/code` and run `composer setup`
5. Add `192.168.10.10 laravelio.test` to your computer's `/etc/hosts` file
6. Set up a working e-mail driver like [Mailtrap](https://mailtrap.io/)
7. (optional) Set up GitHub authentication (see below)

You can now visit the app in your browser by visiting [http://laravelio.test](http://laravelio.test). If you seeded the database, you can log into a test account with `johndoe` & `password`.

### GitHub Authentication (optional)

To get GitHub authentication to work locally, you'll need to [register a new OAuth application on GitHub](https://github.com/settings/applications/new). Use `http://laravelio.test` for the homepage url and `http://laravelio.test/auth/github` for the callback url. When you've created the app, fill in the ID and secret in your `.env` file in the env variables below. You should now be able to authenticate with GitHub.

```
GITHUB_ID=
GITHUB_SECRET=
GITHUB_URL=http://laravelio.test/auth/github
```

## Maintainers

The Laravel.io portal is currently maintained by [Dries Vints](https://github.com/driesvints). If you have any questions please don't hesitate to create an issue on this repo.

## Contributing

Please read [the contributing guide](contributing.md) before creating an issue or sending in a pull request.

## Code of Conduct

Please read our [Code of Conduct](code_of_conduct.md) before contributing or engaging in discussions.

## Security Vulnerabilities

If you discover a security vulnerability within Laravel.io, please send an email immediately to Dries Vints at [dries.vints@gmail.com](mailto:dries.vints@gmail.com). **Do not create an issue for the vulnerability.**

## License

The MIT License. Please see [the license file](license.md) for more information.
---
TOCTitle: 'CommandBehaviorBase(T) Class'
Title: 'CommandBehaviorBase(T) Class (Microsoft.Practices.Prism.Interactivity)'
ms:assetid: 'T:Microsoft.Practices.Prism.Interactivity.CommandBehaviorBase\`1'
ms:mtpsurl: 'commandbehaviorbase-t-class-mspp-interactivity.md'
---

# CommandBehaviorBase&lt;T&gt; Class

Base behavior to handle connecting a [Control](http://msdn.microsoft.com/en-us/library/ms609826) to a Command.

**Namespace:** [Microsoft.Practices.Prism.Interactivity](/patterns-practices/reference/mspp-interactivity-namespace)

**Assembly:** Microsoft.Practices.Prism.Interactivity (in Microsoft.Practices.Prism.Interactivity.dll)

**Version:** 5.0.0.0 (5.0.0.0)

## Syntax

```C#
public class CommandBehaviorBase<T>
where T : UIElement
```

### Type Parameters

*T*
The target object must derive from Control.

## Remarks

CommandBehaviorBase can be used to provide new behaviors for commands.

## Inheritance Hierarchy

[System.Object](http://msdn.microsoft.com/en-us/library/e5kfa45b)
Microsoft.Practices.Prism.Interactivity.CommandBehaviorBase&lt;T&gt;

## See Also

[CommandBehaviorBase&lt;T&gt; Members](/patterns-practices/reference/commandbehaviorbase-t-members-mspp-interactivity)

[Microsoft.Practices.Prism.Interactivity Namespace](/patterns-practices/reference/mspp-interactivity-namespace)

# CommandBehaviorBase(Of T) Class

Base behavior to handle connecting a [Control](http://msdn.microsoft.com/en-us/library/ms609826) to a Command.

**Namespace:** [Microsoft.Practices.Prism.Interactivity](/patterns-practices/reference/mspp-interactivity-namespace)

**Assembly:** Microsoft.Practices.Prism.Interactivity (in Microsoft.Practices.Prism.Interactivity.dll)

**Version:** 5.0.0.0 (5.0.0.0)

## Syntax

```VB
'Declaration
Public Class CommandBehaviorBase(Of T As UIElement)
```

### Type Parameters

*T*
The target object must derive from Control.

## Remarks

CommandBehaviorBase can be used to provide new behaviors for commands.

## Inheritance Hierarchy

[System.Object](http://msdn.microsoft.com/en-us/library/e5kfa45b)
Microsoft.Practices.Prism.Interactivity.CommandBehaviorBase(Of T)

## See Also

[CommandBehaviorBase(Of T) Members](/patterns-practices/reference/commandbehaviorbase-t-members-mspp-interactivity)

[Microsoft.Practices.Prism.Interactivity Namespace](/patterns-practices/reference/mspp-interactivity-namespace)
# ESP_8_BIT Color Composite Video Out Library ## Purpose The composite video generation code from [ESP_8_BIT](https://github.com/rossumur/esp_8_bit) extracted and packaged into a standalone Arduino library so everyone can write Arduino sketches that output a color composite video signal. NTSC and PAL are both supported. __Huge thanks to Peter Barrett / rossumur for ESP_8_BIT, without which this library would not have been possible.__ For more behind-the-scenes information on how this library came to be, see [the development diary](https://newscrewdriver.com/tag/esp_8_bit/) which has all the details anyone would ever want plus even more that nobody ever asked for. ## Hardware requirement * ESP32 (tested on ESP32 DevKitC) * Composite video connector to ESP32 GPIO25 video signal pin. * Display device with composite video input port. (Usually an old-school tube TV.) ## Arduino requirement * [Adafruit GFX Library](https://learn.adafruit.com/adafruit-gfx-graphics-library) available from Arduino IDE Library Manager. * [Espressif Arduino Core for ESP32](https://github.com/espressif/arduino-esp32), follow installation directions at that link. ## Installation This library can now be installed from within the Arduino desktop IDE via the Library Manager. Listed as "ESP_8_BIT Color Composite Video Library" It can also be installed from this GitHub repository if desired: 1. Download into a folder named "ESP_8_BIT_composite" under your Arduino IDE's `libraries` folder. 2. Restart Arduino IDE. ## Classes 1. `ESP_8_BIT_GFX` offers high-level drawing commands via the [Adafruit GFX API](https://learn.adafruit.com/adafruit-gfx-graphics-library). Easy to use, but not the highest performance. 2. `ESP_8_BIT_composite` exposes low-level frame buffer for those who prefer to manipulate bytes directly. Maximum performance, but not very easy to use. ## Examples 1. 
`GFX_HelloWorld` draws animated rectangles and text, both in changing colors, using the Adafruit GFX API exposed by `ESP_8_BIT_GFX`. 2. `RGB332_Colors` draws all 256 available colors directly to frame buffer allocated by `ESP_8_BIT_composite`. Draws once, no updates. 3. `RGB332_PulseB` draws 64 blocks of colors (8x8) representing different combinations of red (vertical axis) and green (horizontal axis). Uses the frame buffer of `ESP_8_BIT_composite` directly. Every second, the entire screen is redrawn with one of four possible values of blue in a pulsing cycle. 4. `GFX_Screen_Fillers` demonstrates several of the common ways to put graphics on screen. Includes the following APIS: `fillRect`, `fillCircle`, `drawFastVLine`, and `drawFastHLine`. 5. `AnimatedGIF` demonstrates how to use this video out library with the [AnimatedGIF decoder library](https://github.com/bitbank2/AnimatedGIF) by Larry Bank. Art used in this example is [Cat and Galactic Squid](https://twitter.com/MLE_Online/status/1393660363191717888) by Emily Velasco ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)) 6. `GFX_RotatedText` demonstrates support for Adafruit_GFX::setRotation() by rendering text in one of four orientations and one of three text sizes. ## Screen Size * Inherited from ESP_8_BIT, the addressible screen size is __256 pixels wide and 240 pixels tall__. This means valid X values of 0 to 255 inclusive, and valid Y values of 0 to 239 inclusive. * When displayed on a standard analog TV with 4:3 aspect ratio, these pixels are not square. So `drawCircle()` will look like a squat wide oval on screen and not a perfect circle. This is inherent to the system and not considered a bug. * When displayed on a standard analog TV, the visible image will be slightly cropped due to [overscan](https://en.wikipedia.org/wiki/Overscan). This is inherent to analog televisions and not considered a bug. 
* The developer-friendly `ESP_8_BIT_GFX` class checks for valid coordinates and will only draw within the valid range. So if X is too large (say, 300), `drawPixel()` will ignore the command and silently do nothing. * The raw `ESP_8_BIT_composite` class offers maximum performance, but with great power comes great responsibility. The caller is responsible for making sure X and Y stay within bounds when manipulating frame buffer bytes via `getFrameBufferLines()[Y][X]`. Any bug that uses an out-of-range array index may garble the image, or trigger a memory access violation and cause your ESP32 to reset, or other general memory corruption nastiness __including the potential for security vulnerabilities.__ ## 8-Bit Color Inherited from ESP_8_BIT is a fixed 8-bit color palette in [RGB332 format](https://en.wikipedia.org/wiki/List_of_monochrome_and_RGB_color_formats#8-bit_RGB_(also_known_as_3-3-2_bit_RGB)). The underlying composite video out code always works with this set of colors. (See [Examples](https://github.com/Roger-random/ESP_8_BIT_composite#examples).) * The developer-friendly `ESP_8_BIT_GFX` class constructor can be initialized in either 8-bit (native) or 16-bit (compatibility) color mode. * Adafruit GFX was written for 16-bit color in [RGB565 format](https://learn.adafruit.com/adafruit-gfx-graphics-library/coordinate-system-and-units). `ESP_8_BIT_GFX` in 16-bit mode is compatible with existing Adafruit GFX code by automatically downconverting color while drawing. The resulting colors will be approximate, but they should closely resemble the original. Using RGB332 color values while in this mode will result in wrong colors on screen due to interpretation as RGB565 colors. * In 8-bit mode, color values given to GFX APIs will be treated as native 8-bit RGB332 values. This is faster because it skips the color conversion process. Using RGB565 color values while in this mode will result in wrong colors on screen due to the higher 8 bits being ignored. 
* The raw `ESP_8_BIT_composite` class always works in 8-bit RGB332 color. Sample colors in 8-bit RGB332 format: |Name|RGB332 (binary)|RGB332 (hexadecimal)| |----:|:---:|:-----| |Black |0b00000000|0x00| |Blue |0b00000011|0x03| |Green |0b00011100|0x1C| |Cyan |0b00011111|0x1F| |Red |0b11100000|0xE0| |Magenta|0b11100011|0xE3| |Yellow |0b11111100|0xFC| |White |0b11111111|0xFF| ## 8-bit RGB332 Color Picker Utility [CLICK HERE](https://roger-random.github.io/RGB332_color_wheel_three.js/) for an interactive color picker web app. It shows all 256 possible 8-bit RGB332 colors in either a [HSV (hue/saturation/value)](https://en.wikipedia.org/wiki/HSL_and_HSV) color cylinder or a [RGB (red/green/blue)](https://en.wikipedia.org/wiki/RGB_color_space) color cube. ## Questions? Please [post to discussions](https://github.com/Roger-random/ESP_8_BIT_composite/discussions) and see if anyone knows the answer. Note there's no guarantee of an answer. ## Bugs? Please [open an issue](https://github.com/Roger-random/ESP_8_BIT_composite/issues) to see if it can be fixed. Note there's no guarantee of support. ## Tip jar Just like its predecessor ESP_8_BIT, this project is shared freely with the world. Under the MIT license, you don't owe me anything. But if you want to toss a few coins my way, you can do so by using my Amazon Associates link to buy your [ESP32 development boards](https://amzn.to/3dMdIDQ) or [composite video cables](https://amzn.to/33K9qXP). You'll pay the same price, but I get a small percentage. As an Amazon Associate I earn from qualifying purchases.
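The packing behind the RGB332 sample-color table above keeps the top 3 bits of red, the top 3 bits of green, and the top 2 bits of blue from an ordinary 8-bit-per-channel color. A quick sketch of the scheme (this helper is not part of the library, just an illustration):

```python
def rgb888_to_rgb332(r: int, g: int, b: int) -> int:
    """Pack 8-bit-per-channel RGB into a single RGB332 byte."""
    return (r & 0xE0) | ((g & 0xE0) >> 3) | ((b & 0xC0) >> 6)

# Sample colors from the table above:
assert rgb888_to_rgb332(0, 0, 0) == 0x00        # Black
assert rgb888_to_rgb332(255, 0, 0) == 0xE0      # Red
assert rgb888_to_rgb332(0, 255, 255) == 0x1F    # Cyan
assert rgb888_to_rgb332(255, 255, 255) == 0xFF  # White
```

This also shows why RGB332 colors are only approximate: the low bits of each channel are simply discarded.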
46.626582
128
0.78743
eng_Latn
0.979504
1c9ea94988392556067986a81bf3b79dfb73c790
1,109
md
Markdown
content/en/tracing/connect_logs_and_traces/_index.md
fhsgoncalves/documentation
27288118be53b7811751bf32ff3b78fa5a24e6bd
[ "BSD-3-Clause" ]
1
2020-04-09T01:40:33.000Z
2020-04-09T01:40:33.000Z
content/en/tracing/connect_logs_and_traces/_index.md
fhsgoncalves/documentation
27288118be53b7811751bf32ff3b78fa5a24e6bd
[ "BSD-3-Clause" ]
null
null
null
content/en/tracing/connect_logs_and_traces/_index.md
fhsgoncalves/documentation
27288118be53b7811751bf32ff3b78fa5a24e6bd
[ "BSD-3-Clause" ]
null
null
null
--- title: Connect Logs and Traces kind: documentation description: 'Connect your logs and traces to correlate them in Datadog.' aliases: - /tracing/advanced/connect_logs_and_traces/ --- {{< img src="tracing/connect_logs_and_traces/trace_id_injection.png" alt="Logs in Traces" style="width:100%;">}} The correlation between Datadog APM and Datadog Log Management is improved by automatically adding `dd.trace_id` and `dd.span_id` attributes in your logs with the Tracing Libraries. This can then be used in the platform to show you the exact logs correlated to the observed [trace][1]. Before correlating traces with logs, ensure your logs are either sent as JSON, or [parsed by the proper language level log processor][2]. Your language level logs _must_ be turned into Datadog attributes in order for traces and logs correlation to work. Select your language below to learn how to automatically or manually connect your logs to your traces: {{< partial name="apm/apm-connect-logs-and-traces.html" >}} [1]: /tracing/visualization/#trace [2]: /agent/logs/#enabling-log-collection-from-integrations
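As a plain illustration of what the correlation described above looks like (this is not the Datadog tracer API; only the `dd.trace_id`/`dd.span_id` attribute names follow the convention in this page), a correlated JSON log line simply carries the active trace and span IDs as extra attributes:

```python
import json

def correlate(log_record: dict, trace_id: int, span_id: int) -> str:
    """Tag a structured log record with the active trace/span IDs
    and serialize it as a JSON log line."""
    log_record["dd.trace_id"] = str(trace_id)
    log_record["dd.span_id"] = str(span_id)
    return json.dumps(log_record)

line = correlate({"message": "charge failed", "status": "error"}, 123, 456)
# The platform can now join this log line to the matching trace.
assert json.loads(line)["dd.trace_id"] == "123"
```

In practice the tracing libraries inject these attributes for you; the point is only that the IDs end up as parseable attributes on each log line.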
52.809524
285
0.779982
eng_Latn
0.990638
1c9f43d143a97e6cb29fc02d47335f097bd6d8dd
840
md
Markdown
_pages/index.md
dandele/digital-garden-812
cda9aefc84c66f18a86212cef44b7fa0d006beeb
[ "MIT" ]
null
null
null
_pages/index.md
dandele/digital-garden-812
cda9aefc84c66f18a86212cef44b7fa0d006beeb
[ "MIT" ]
null
null
null
_pages/index.md
dandele/digital-garden-812
cda9aefc84c66f18a86212cef44b7fa0d006beeb
[ "MIT" ]
null
null
null
--- layout: page title: Home id: home permalink: / --- # Hi 👋 <p style="padding: 3em 1em; background: #f5f7ff; border-radius: 4px;"> I'm [[Daniele]] and this is my [[digital garden]]. <br>From this page you can set off to explore my mind, digitally. <br>Word of advice: try not to get lost! </p> This digital garden template is free, open-source, and [available on GitHub here](https://github.com/maximevaillancourt/digital-garden-jekyll-template). The easiest way to get started is to read this [step-by-step guide explaining how to set this up from scratch](https://maximevaillancourt.com/blog/setting-up-your-own-digital-garden-with-jekyll). If you need any help, my [DMs are open on Twitter (@vaillancourtmax)](https://twitter.com/vaillancourtmax). 👋 <style> .wrapper { max-width: 46em; } </style>
33.6
305
0.734524
eng_Latn
0.556354
17ccdf1feac84d1a9e73a905462a46a99591d9b3
478
md
Markdown
README.md
daboehme/c-util
afbff5f864cac5a67245c0818489e2c237d901a8
[ "BSD-2-Clause" ]
null
null
null
README.md
daboehme/c-util
afbff5f864cac5a67245c0818489e2c237d901a8
[ "BSD-2-Clause" ]
null
null
null
README.md
daboehme/c-util
afbff5f864cac5a67245c0818489e2c237d901a8
[ "BSD-2-Clause" ]
null
null
null
c-util: C utility functions ========================================== This is a collection of small general-purpose C utility functions. Written in 2016 by David Boehme. Released under a simplified BSD license. See `LICENCE` file for details. Contents ------------------------------------------ String utilities: `flatten`, `split`, `find_first_of`, and `strnlen`. Integer variable-length de/encoding: `vlenc_u64`, `vldec_u64`. Unit conversion and formatting: `unitfmt`
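The variable-length integer functions presumably follow the common base-128 "varint" technique: 7 payload bits per byte, with the high bit as a continuation flag. A Python sketch of that scheme for illustration (the actual C signatures of `vlenc_u64`/`vldec_u64` in this library may well differ):

```python
def vlenc_u64(value: int) -> bytes:
    """Encode an unsigned integer, 7 bits per byte, MSB = continuation."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        out.append(byte | (0x80 if value else 0))
        if not value:
            return bytes(out)

def vldec_u64(buf: bytes) -> int:
    """Decode a variable-length unsigned integer."""
    value, shift = 0, 0
    for byte in buf:
        value |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return value

assert vlenc_u64(300) == b"\xac\x02"
assert vldec_u64(vlenc_u64(300)) == 300
assert len(vlenc_u64(127)) == 1 and len(vlenc_u64(128)) == 2
```

Small values cost a single byte while 64-bit values still fit in at most ten, which is the usual reason for using this encoding.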
25.157895
69
0.631799
eng_Latn
0.879936
17cd5e349c7fca90c2048961153807660323bfaa
2,717
md
Markdown
cslvln_spring-tx-mooc_readme.md
lua-study/0
161010a7530d62864e917d1d3253e1fa0a8413f9
[ "Apache-2.0" ]
3
2021-06-08T07:57:41.000Z
2022-02-03T18:50:19.000Z
cslvln_spring-tx-mooc_readme.md
lua-study/0
161010a7530d62864e917d1d3253e1fa0a8413f9
[ "Apache-2.0" ]
null
null
null
cslvln_spring-tx-mooc_readme.md
lua-study/0
161010a7530d62864e917d1d3253e1fa0a8413f9
[ "Apache-2.0" ]
5
2021-03-11T07:42:05.000Z
2021-09-08T05:43:56.000Z
# spring-tx-mooc

## Demo1
- Demo 1 shows the programmatic transaction management approach: a `TransactionTemplate` wraps the DAO-layer operations and is used to manage the transaction.
- With this approach, transactions are managed explicitly in code.
- Because it scatters so much transaction-related code through the application, it is rarely used.

## Demo2
- Demo 2 shows a declarative approach based on `org.springframework.transaction.interceptor.TransactionProxyFactoryBean`.
- The Service that needs transaction management is wrapped in a proxy, and the proxied bean is the one that gets injected.
- Because every Service needs its own proxy configuration in the config file, it is rarely used.

## Demo3
- Demo 3 shows the declarative approach that manages Service transactions through AOP.
- It requires the least configuration code and is frequently used.
- Its drawback is that it constrains the naming of the Service methods.
- A wildcard configuration example:

```
```

## Demo4
- Demo 4 shows the declarative approach that manages transactions by placing the `@Transactional` annotation on the Service.
- It requires the least configuration, although it needs many transaction declarations, and is frequently used.
- Its drawback is that it cannot manage transactions in fine detail for every method of every class, and so many annotations become hard to maintain.

### These examples use a MySQL database; other databases work the same way

```
SQL script:

CREATE TABLE `account` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(20) NOT NULL,
  `money` double DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;

INSERT INTO `account` VALUES ('1', 'aaa', '1000');
INSERT INTO `account` VALUES ('2', 'bbb', '1000');
INSERT INTO `account` VALUES ('3', 'ccc', '1000');

Note on the SQL script: in MySQL, InnoDB supports transactions; engines like MyISAM do not.
```

Propagation behaviors solve the problem of service-layer methods calling one another (7 kinds in 3 groups):

1. PROPAGATION_REQUIRED — support the current transaction; if none exists, create a new one
2. PROPAGATION_SUPPORTS — support the current transaction; if none exists, run without a transaction
3. PROPAGATION_MANDATORY — support the current transaction; if none exists, throw an exception

1. PROPAGATION_REQUIRES_NEW — if a transaction exists, suspend it and create a new one
2. PROPAGATION_NOT_SUPPORTED — run non-transactionally; if a transaction exists, suspend it
3. PROPAGATION_NEVER — run non-transactionally; if a transaction exists, throw an exception

1. PROPAGATION_NESTED — if a current transaction exists, execute as a nested transaction

Spring's transaction isolation levels:

- DEFAULT
- READ_UNCOMMITTED
- READ_COMMITTED
- REPEATABLE_READ
- SERIALIZABLE

The isolation level is the degree to which one transaction is isolated from other transactions. Clearly, the higher the isolation, the worse the concurrency and performance; the lower the isolation, the better the concurrency and performance.

The ANSI/ISO SQL-92 standard defines 4 transaction isolation levels:

1. Serializable (SERIALIZABLE) — the highest isolation level. All transactions in the system execute one after another, so conflicts between transactions can never occur.
2. Repeatable read (REPEATABLE_READ) — data records that the current transaction has read may not be modified by other transactions.
3. Read committed (READ_COMMITTED) — other transactions may modify the data records the current transaction has read, and once such a transaction commits, the current transaction sees the modified data.
4. Read uncommitted (READ_UNCOMMITTED) — other transactions may modify the data records the current transaction has read, and the current transaction can see the modified data even before those transactions commit. In other words, dirty reads are allowed.
5. Default (DEFAULT) — use the database's default transaction isolation level.

Depending on the transaction isolation level, running the same database query can give surprising results. Here is a summary of these anomalies:
1. Dirty read — reading data that another transaction has not yet committed.
2. Non-repeatable read — data records the current transaction has already read are modified or deleted by another transaction.
3. Phantom read — another transaction inserts new rows; querying with the same conditions before and after that insert, the current transaction gets a different number of rows.

| Isolation level | Dirty read | Non-repeatable read | Phantom read |
|---|---|---|---|
| Serializable | N | N | N |
| Repeatable read | N | N | Y |
| Read committed | N | Y | Y |
| Read uncommitted | Y | Y | Y |

# Friendly links

[Tencent QQ group quick search](http://u.720life.cn/s/8cf73f7c) [Free software development forum](http://u.720life.cn/s/bbb01dc0)
24.7
95
0.689363
yue_Hant
0.983805
17cd98294c9148a7b95da054f0e53343361e47c5
10,638
md
Markdown
README.md
MikoMagni/Alfred-for-Trello
14854c30864d7ebfeda4db5384428d0a8d472356
[ "MIT" ]
138
2015-01-28T09:15:57.000Z
2018-04-03T03:34:50.000Z
README.md
MikoMagni/Trello-Workflow-for-Alfred
14854c30864d7ebfeda4db5384428d0a8d472356
[ "MIT" ]
10
2018-04-24T01:31:34.000Z
2022-01-15T12:15:07.000Z
README.md
MikoMagni/Alfred-for-Trello
14854c30864d7ebfeda4db5384428d0a8d472356
[ "MIT" ]
7
2015-02-03T19:23:30.000Z
2017-12-11T13:22:43.000Z
# Trello Workflow for Alfred App v.1.6.2 ### Create cards in Trello using Alfred App [https://www.alfredapp.com/](https://www.alfredapp.com/) ### [Download Trello WorkFlow 1.6.2](https://github.com/MikoMagni/Trello-Workflow-for-Alfred/releases/tag/1.6.2) #### Tested and working with Alfred 4.0.x ## Install 1. Double click on the "**Trello Workflow for Alfred v.1.6.2**" workflow that you have just downloaded. More info: [https://www.alfredapp.com/help/workflows/](https://www.alfredapp.com/help/workflows/) Note: if you have version 1.5 installed, remove it before installing the new version. ## Setup 1. **Generate your Trello Developer API Key**<br> Use the keyword "**get trello key**" to generate your Trello Developer API Key.<br> More information: [https://developers.trello.com/docs/api-introduction](https://developers.trello.com/docs/api-introduction). **Note:** Make sure you are logged in to Trello in your default browser before generating your API Key. ![](https://user-images.githubusercontent.com/2527231/39163817-68b8092a-47bb-11e8-939e-62cdfff3ed3b.png) 2. Copy your **API Key** 3. **Authorize Trello Workflow** Use the keyword "**get trello token**" plus your "**API Key**" to authorize the Trello Workflow to use your Trello account. Example: **get trello token 00000000000000000000** More information: [https://developers.trello.com/docs/api-introduction](https://developers.trello.com/docs/api-introduction)<br> ![](https://user-images.githubusercontent.com/2527231/39280550-2a5bdb0e-493f-11e8-96de-81a64ce5cf17.png) 4. **Allow** Trello Workflow to use your account ![](https://user-images.githubusercontent.com/2527231/39164571-364a56b0-47bf-11e8-92f3-3c2a08fd04e9.png) 5. Copy your **Token** 6. **Your Trello board id** Choose the Trello board that you wish to use with Trello Workflow and copy the **board id**. You can get the board id by simply going to your board and adding .json at the end of the URL. 
**Example:** Go to the Trello Development Roadmap Board [https://trello.com/b/nC8QJJoZ/trello-development-roadmap](https://trello.com/b/nC8QJJoZ/trello-development-roadmap). To view the board id, add .json at the end of the URL [https://trello.com/b/nC8QJJoZ/trello-development-roadmap.json](https://trello.com/b/nC8QJJoZ/trello-development-roadmap.json). You should now see the full JSON >{"id":"**4d5ea62fd76aa1136000000c**","name":"Trello Development Roadmap","desc":"","descData" The board id in the example is: **4d5ea62fd76aa1136000000c** 7. Open the Trello Workflow for Alfred in the Alfred app. Use the keyword **Alfred** to show Alfred Preferences. Navigate to Workflows and select Trello Workflow for Alfred v1.6 from the side column. ![](https://user-images.githubusercontent.com/2527231/39165421-86508e96-47c3-11e8-8f90-f06bc0a6727f.png) 8. Double click on the **/bin/bash** script and enter your **API Key**, your **Token** and your **board id** here: > key='**{YourAPIKey}**' > token='**{YourPersonalToken}**' > boardid='**{YourBoardId}**' Make sure that each preference in the bash file is within single quotes: > key='00000000000' > token='0000000000000000000000000000000' > boardid='0000000' Click **Save** ![](https://user-images.githubusercontent.com/2527231/39165568-388c8448-47c4-11e8-9864-fc32d2eaf9ad.png) ## Usage 1. General usage **trello** **{field}** separate fields using **;** You can choose to have spaces or not between fields. For example **{field1}; {field2}** and **{field1};{field2}** will work. 
Available fields: **{Card Title}; {Card Description}; {Labels}; {Due Date}; {List Name}; {Card Position}** ![](https://user-images.githubusercontent.com/2527231/39163922-f2d0f252-47bb-11e8-9bba-4b537528bd27.png) ## Basic Usage **Card Title** ``` trello make dinner reservation ``` will create a card on your board on the first list with the title "make dinner reservation" ![sdsd](https://user-images.githubusercontent.com/2527231/39225051-73d684a2-4889-11e8-9273-bfb21a1abc7d.png) ![sdsd](https://user-images.githubusercontent.com/2527231/39226873-50f9dbb8-4894-11e8-8864-0ce57a8a385d.png) **Card Description** ``` trello make dinner reservation; table for 10 people at around 7:30pm ``` will create a card on your board on the first list with the title "make dinner reservation" and description "table for 10 people at around 7:30pm" ![](https://user-images.githubusercontent.com/2527231/39225192-166199c8-488a-11e8-8f38-015befdc412c.png) ![](https://user-images.githubusercontent.com/2527231/39226879-638a9a1a-4894-11e8-8449-2ae9f97af35e.png) **Labels** ``` trello make dinner reservation; table for 10 people at around 7:30pm; blue ``` will create a card on your board on the first list with the title "make dinner reservation" and description "table for 10 people at around 7:30pm" with a "blue" label **Available Labels** - **all** (will add green, yellow, orange, red, purple and blue) - **green** - **yellow** - **orange** - **red** - **purple** - **blue** You can add more than one label by comma separating them. ``` trello make dinner reservation; table for 10 people at around 7:30pm; blue,red,yellow ``` Please note: Make sure not to have spaces between comma separated labels. Custom labels are not supported. 
If you find a way let me know :) ![](https://user-images.githubusercontent.com/2527231/39225416-6a59e25a-488b-11e8-8d0e-e7b6c2f3fe81.png) ![](https://user-images.githubusercontent.com/2527231/39226897-84c4c976-4894-11e8-84d2-c11daa3e37b4.png) ![](https://user-images.githubusercontent.com/2527231/39226930-b52e97ae-4894-11e8-92e3-37052eac9794.png) **Due Date** ``` trello make dinner reservation; table for 10 people at around 7:30pm; blue; 04/26/2018 ``` will create a card on your board on the first list with the title "make dinner reservation" and description "table for 10 people at around 7:30pm" with a "blue" label. The due date will be set as 04/26/2018 ![](https://user-images.githubusercontent.com/2527231/39225889-2a305bf2-488e-11e8-82e2-4da85e9db1ab.png) ![](https://user-images.githubusercontent.com/2527231/39226946-e22d4368-4894-11e8-8071-f7b78742d768.png) **List Name** ``` trello make dinner reservation; table for 10 people at around 7:30pm; blue; 04/26/2018; Today ``` will create a card on your board on the list **Today** with the title "make dinner reservation" and description "table for 10 people at around 7:30pm" with a "blue" label. The due date will be set as 04/26/2018. Please note: **List names are case sensitive.** today will not work if your list is named Today. The example will only work if you have a list named Today, otherwise the card will be created on your first list. ![](https://user-images.githubusercontent.com/2527231/39226075-44f065e4-488f-11e8-900c-b2474b7d06e4.png) ![](https://user-images.githubusercontent.com/2527231/39226084-57dd980c-488f-11e8-93a5-ebdf397fdffa.png) **Card Position** ``` trello make dinner reservation; table for 10 people at around 7:30pm; blue; 04/26/2018; Today; top ``` will create a card on your board on the list **Today** with the title "make dinner reservation" and description "table for 10 people at around 7:30pm" with a "blue" label. The due date will be set as 04/26/2018. 
Note: If you don't specify a card position, your new card will automatically be placed at the end of the list. **Available options (case sensitive)** - **top** - **bottom** ![](https://user-images.githubusercontent.com/2527231/39225889-2a305bf2-488e-11e8-82e2-4da85e9db1ab.png) **bottom** ![](https://user-images.githubusercontent.com/2527231/39226984-280ab73a-4895-11e8-9892-541213eec1e4.png) **top** ![](https://user-images.githubusercontent.com/2527231/39226985-283cb2bc-4895-11e8-9939-1a4bcdb8525d.png) **URL Attachment** ``` trello make dinner reservation; table for 10 people at around 7:30pm; blue; 12/24/2019; Today; top; https://myfavoriterestaurant.com ``` will create a card on your board on the list **Today** with the title "make dinner reservation" and description "table for 10 people at around 7:30pm" with a "blue" label. The due date will be set as 12/24/2019. The URL `https://myfavoriterestaurant.com` is added as an attachment. ![](https://user-images.githubusercontent.com/7596032/71087131-68196980-2169-11ea-85b1-5d1658e00db3.png) ![](https://user-images.githubusercontent.com/7596032/71087140-7071a480-2169-11ea-92a3-ce553316bcf5.png) ![](https://user-images.githubusercontent.com/7596032/71087148-749dc200-2169-11ea-9031-329183f793c9.png) ## Advanced Usage You can skip any of the available fields by simply adding **;**   **{Card Title}; {Card Description}; {Labels}; {Due Date}; {List Name}; {Card Position}; {URL Attachment}** For example if I wanted to post a card with Title, Label and a Due date i would use this syntax **{Card Title}; ; {Labels}; {Due Date}** ``` trello Clean my car; ; red; 04/29/2018 ``` ![](https://user-images.githubusercontent.com/2527231/39227249-c4188d86-4896-11e8-9697-eff35768b368.png) ![](https://user-images.githubusercontent.com/2527231/39227247-bef258aa-4896-11e8-8f2b-52e8e7d73504.png) Or a card with title only but on a different list **{Card Title}; ; ; ; {List Name}** ``` trello Clean my car; ; ; ; Upcoming ``` 
![](https://user-images.githubusercontent.com/2527231/39227342-4c118abc-4897-11e8-9486-4f008fd8fbd5.png) ![](https://user-images.githubusercontent.com/2527231/39227333-3f9a37a2-4897-11e8-81ae-7089f22153a6.png) ## Environment Variables by @gamell Given that some might always want to create cards on the same list, with the same label, the same due date, or the same position _by default_, I added the ability to set those defaults via the environment variables `trello.list_name`, `trello.label`, `trello.due` and `trello.position`. One can conveniently add or edit those environment variables without programming knowledge through the Alfred Workflow editor, by clicking on the `[x]` button on the top right (see screenshot below). *Note:* If you don't set the variable, the workflow will behave as it did before. ![](https://user-images.githubusercontent.com/2460215/44072791-96f57f66-9f45-11e8-9dbc-399744c5c34c.png) ![](https://user-images.githubusercontent.com/2460215/44072799-9d4107b4-9f45-11e8-9444-4d71f7f8135f.png) ## FAQ Coming soon ## License [MIT](https://github.com/MikoMagni/Trello-Workflow-for-Alfred/blob/master/MIT%20License) © Miko Magni
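For context on what the workflow's bash script ultimately does: it calls Trello's public REST API (`POST https://api.trello.com/1/cards`). A rough Python sketch of assembling the request parameters — the key/token/list values here are placeholders, not real credentials:

```python
def card_params(key, token, list_id, title, desc="", due=None, pos=None):
    """Build query parameters for Trello's POST /1/cards endpoint."""
    params = {"key": key, "token": token, "idList": list_id, "name": title}
    if desc:
        params["desc"] = desc
    if due:
        params["due"] = due          # e.g. "04/26/2018"
    if pos in ("top", "bottom"):
        params["pos"] = pos
    return params

p = card_params("YOUR_KEY", "YOUR_TOKEN", "YOUR_LIST_ID",
                "make dinner reservation",
                desc="table for 10 people at around 7:30pm", pos="top")
assert p["pos"] == "top"
# requests.post("https://api.trello.com/1/cards", params=p) would create the card.
```

Optional fields are simply omitted from the request, which mirrors how the workflow lets you skip fields with `;`.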
39.4
392
0.731904
eng_Latn
0.770133
17ce32e5e7d7ddfe32587328b3b32ef183be77d3
995
md
Markdown
README.md
davoodmood/simple-image-size
76576330d12cdf578bad2ae6d64560a9e576827d
[ "MIT" ]
null
null
null
README.md
davoodmood/simple-image-size
76576330d12cdf578bad2ae6d64560a9e576827d
[ "MIT" ]
null
null
null
README.md
davoodmood/simple-image-size
76576330d12cdf578bad2ae6d64560a9e576827d
[ "MIT" ]
null
null
null
# Simple Image Size Package User Guide Amazing! This package is what I use to verify an image's width and height in one very simple step. > You require `getImageSize()` and simply pass in your image file. > I will create a demo page to place here soon ... [Demo](https://portfolio.recash.tech/#) ## Install Add the package using npm. To install, use: ```bash npm install simple-image-size # or yarn add simple-image-size ``` ## Usage First import the package with `import getImageSize from 'simple-image-size'` or `const getImageSize = require('simple-image-size')`. This is an async operation, so you can use async/await or then/catch to get a promise that resolves to an object with the image's width and height. You can use it by passing the image file as the only parameter to `getImageSize(file)`. ### Tested images [`valid IAB images`](https://#) are tested for `jpg`, `png`, and `gif`. ## Stay in touch I recommend contacting me on Twitter: [twitter](https://twitter.com/davoodhakimi).
30.151515
213
0.735678
eng_Latn
0.983253
17ce5a330ded5efb3c2d0d2f7ee8bb1c6d8f0d35
2,421
md
Markdown
knowledge-base/how-to-use-reportsource-objects-with-reportbook.md
nelci592/reporting-docs
5cc96ef6a8d11054eb40871893de3e6db6cf634f
[ "Apache-2.0" ]
3
2018-01-16T10:43:30.000Z
2020-08-27T19:29:57.000Z
knowledge-base/how-to-use-reportsource-objects-with-reportbook.md
nelci592/reporting-docs
5cc96ef6a8d11054eb40871893de3e6db6cf634f
[ "Apache-2.0" ]
15
2019-11-06T18:13:01.000Z
2022-03-31T13:54:36.000Z
knowledge-base/how-to-use-reportsource-objects-with-reportbook.md
nelci592/reporting-docs
5cc96ef6a8d11054eb40871893de3e6db6cf634f
[ "Apache-2.0" ]
17
2018-01-23T11:25:59.000Z
2021-11-30T07:45:28.000Z
--- title: Adding reports to a ReportBook displays a warning in Visual Studio description: Correctly providing ReportSource objects to the ReportBook before and after R1 2017. type: troubleshooting page_title: Visual Studio shows a warning when adding reports to a ReportBook slug: how-to-use-reportsource-objects-with-reportbook res_type: kb --- ## Environment <table> <tr> <td>Product</td> <td>Progress® Telerik® Reporting</td> </tr> <tr> <td>Versions</td> <td>R1 2017 and newer</td> </tr> <tr> <td>Report Item</td> <td>Report Book</td> </tr> </table> ## Description As of the **R1 2017** release, adding reports to the [ReportBook.Reports](../p-telerik-reporting-reportbook-reports) collection is **obsolete** - [API Breaking Changes](../upgrade-path-2017-r1#api-breaking-changes). To allow integration with the Standalone Designer, [ReportBook](../designing-reports-general-explanation) was updated to use [ReportSource](../report-sources) objects for adding the reports. The updated approach is to add the necessary **ReportSources** to the [**ReportBook.ReportSources**](../p-telerik-reporting-reportbook-reportsources) collection. ## Solution ### Solution for adding reports to a ReportBook *before* the R1 2017 release. 
````C# Telerik.Reporting.ReportBook reportBook = new ReportBook(); Telerik.Reporting.TypeReportSource typeReportSource = new TypeReportSource(); typeReportSource.TypeName = typeof(Report1).AssemblyQualifiedName; reportBook.ReportSources.Add(typeReportSource); ```` ````VB Dim reportBook As Telerik.Reporting.ReportBook = New ReportBook() Dim typeReportSource As Telerik.Reporting.TypeReportSource = New TypeReportSource() typeReportSource.TypeName = GetType(Report1).AssemblyQualifiedName reportBook.ReportSources.Add(typeReportSource) ```` ## See Also [How to migrate your project to utilize the new ReportSource objects](./how-to-migrate-your-project-to-utilize-the-new-reportsource-objects) [Report Sources](../report-sources)
35.086957
217
0.754234
eng_Latn
0.384721
17ce5e34fac532bf37c97095c01630849d0ece3e
30
md
Markdown
libs/core/README.md
aeos42/pxt-pi
11afc9a2074d62cba5a074572b4ba27315365f87
[ "MIT" ]
51
2019-05-15T03:04:05.000Z
2022-01-30T07:09:46.000Z
libs/core/README.md
aeos42/pxt-pi
11afc9a2074d62cba5a074572b4ba27315365f87
[ "MIT" ]
7
2017-02-23T23:35:08.000Z
2018-05-16T17:41:05.000Z
libs/core/README.md
aeos42/pxt-pi
11afc9a2074d62cba5a074572b4ba27315365f87
[ "MIT" ]
90
2019-05-14T20:32:44.000Z
2022-02-28T17:49:48.000Z
# basic Add your docs here...
10
21
0.666667
eng_Latn
0.972483
17cf87315d457e3f2d1b80c543632ee175411e61
1,628
md
Markdown
README.md
Ahren-Li/GoogleDeviceRegistration
2a5c3ff03574c6b3a48097d1b55701be001ffa32
[ "Apache-2.0" ]
1
2019-01-21T07:11:41.000Z
2019-01-21T07:11:41.000Z
README.md
Ahren-Li/GoogleDeviceRegistration
2a5c3ff03574c6b3a48097d1b55701be001ffa32
[ "Apache-2.0" ]
1
2020-09-14T08:11:09.000Z
2020-09-14T08:11:09.000Z
README.md
Ahren-Li/GoogleDeviceRegistration
2a5c3ff03574c6b3a48097d1b55701be001ffa32
[ "Apache-2.0" ]
2
2021-01-15T02:58:00.000Z
2021-11-24T14:06:50.000Z
# GoogleDeviceRegistration Automatically register your GSF ID with Google. Recently Google has restricted Google account login on uncertified devices, but Google provides a way to register our own devices: [Google Device Registration](https://www.google.com/android/uncertified/) ### Problem ![error|690x414](https://www.lili.kim/2019/01/04/android/Google%20Device%20Registration/error.png) ### Step 1 Open "GoogleDeviceRegistration" ![1|690x414](https://www.lili.kim/2019/01/04/android/Google%20Device%20Registration/1.png) ### Step 2 Check that the `GSF ID` is valid, not "null" ![2|690x414](https://www.lili.kim/2019/01/04/android/Google%20Device%20Registration/2.png) ### Step 3 Click the `GO` button. The first time, you need to sign in to your Google account. ![3|690x414](https://www.lili.kim/2019/01/04/android/Google%20Device%20Registration/3.png) After a successful login, the app will automatically jump to the device registration page and complete the rest of the steps. ![4|690x414](https://www.lili.kim/2019/01/04/android/Google%20Device%20Registration/4.png) After successful registration, you need to wait 5-10 minutes and reboot your device. ![5|690x414](https://www.lili.kim/2019/01/04/android/Google%20Device%20Registration/5.png) ### Attention After rebooting your device, you may still get an error message; don't worry about it, just try to start `google play` several times. ### GSF ID shows "null" - First check that your network can connect to Google. - Click the `GO` button, and click `GOOGLE PLAY` to generate a `GSF ID` ![null2|690x414](https://www.lili.kim/2019/01/04/android/Google%20Device%20Registration/null2.png)
45.222222
137
0.765971
eng_Latn
0.37652
17d0449a3c117a7510058054e6b7576167a149a3
268
md
Markdown
com.unity.netcode.adapter.utp/CHANGELOG.md
apilola/com.unity.netcode.gameobjects
3bcac1d3036a258bc3dfaa704f40d3639f7c50e9
[ "MIT" ]
null
null
null
com.unity.netcode.adapter.utp/CHANGELOG.md
apilola/com.unity.netcode.gameobjects
3bcac1d3036a258bc3dfaa704f40d3639f7c50e9
[ "MIT" ]
null
null
null
com.unity.netcode.adapter.utp/CHANGELOG.md
apilola/com.unity.netcode.gameobjects
3bcac1d3036a258bc3dfaa704f40d3639f7c50e9
[ "MIT" ]
null
null
null
# Changelog All notable changes to this package will be documented in this file. The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/). ## [0.0.1-preview.1] - 2020-12-20 This is the first release of Unity Transport for Netcode for GameObjects.
44.666667
147
0.757463
eng_Latn
0.992582
17d04be0444db955c8410b8330991dbb1edb2fc4
3,679
md
Markdown
packages/vital-elements/doc/VitalElements.md
Asemirski/Bodiless-JS
e9dc1e55ed6f9d19dfa7d86375efd2b5ce5ca612
[ "Apache-2.0" ]
null
null
null
packages/vital-elements/doc/VitalElements.md
Asemirski/Bodiless-JS
e9dc1e55ed6f9d19dfa7d86375efd2b5ce5ca612
[ "Apache-2.0" ]
1
2022-02-22T09:51:48.000Z
2022-02-22T09:51:48.000Z
packages/vital-elements/doc/VitalElements.md
Asemirski/Bodiless-JS
e9dc1e55ed6f9d19dfa7d86375efd2b5ce5ca612
[ "Apache-2.0" ]
null
null
null
# Vital Elements Vital Elements is composed of [element tokens](/Design/DesignSystem#element-tokens) to implement an opinionated Vital Design System. It consists of the following types of component element tokens, and they all live in their associated token folders: * Color * Font Size * Text Decoration * Typography ## Content Editor Details There is no interaction by the Content Editor with the Vital element tokens, only with tokens once they've been composed into components. ## Site Builder Details ### Usage of Vital Element Tokens As Is The Site Builder can use any of the Vital element tokens in the `vital-elements` collection. #### Usage Import the required Element tokens from `@bodiless/vital-elements`. If a single token is being used directly from a specific Element token: ```js import { vitalColor } from '@bodiless/vital-elements'; const Foo = { Header1: vitalColor.TextPrimaryBodyCopy, //... }; ``` If combining multiple tokens, you can put them within `as()` or `flowHoc()`: ```js import { vitalColor, vitalTextDecoration } from '@bodiless/vital-elements'; const Foo = { BoldBody: as( vitalTextDecoration.Bold, vitalColor.TextPrimaryBodyCopy, ), //... }; ``` ### Using Vital Element Tokens, but Customizing for Site-Specific Typography The Site Builder may need to override a specific token, or a specific set of tokens, and the following is a how-to guide to apply the [best methodology](./SiteTypography) for doing so. ### Helper Utilities The package also includes some helper tokens that are very useful in token composition: * `asVitalTokenSpec` : Creates a token definition utility for a clean component, and will allow tokens to be assigned to any of the slots within your clean component. * Usage: ```jsx const asLayoutToken = asVitalTokenSpec<LayoutComponents>(); ``` * `asMetaToken` : Creates a token which applies the given metadata. 
* Usage: ```jsx TBD ``` * Explanation: TBD * `asElementToken` : Creates an element level token where only the `_` design key is allowed. * Usage: ```jsx const Link = asElementToken({ Core: { _: vitalFontSize.Base, }, Theme: { _: as( vitalTextDecoration.Bold, vitalTextDecoration.Underline, vitalColor.TextPrimaryInteractive, ), }, Meta: meta, }); ``` * The above example creates an element token that combines classes in the core and theme domains, as well as assigns the associated metadata for the token. * `asFluidToken` : Creates a token for a component with a fluid design (one in which any design key is allowed). * Usage: ```jsx TBD ``` * Explanation: TBD * `asTokenGroup` : Creates a group of element tokens with shared meta. * Usage: ```jsx default asTokenGroup(meta)({ Base: 'text-m-base lg:text-base', XXXL: 'text-m-3xl lg:text-3xl', XXL: 'text-m-2xl lg:text-2xl', XL: 'text-m-xl lg:text-xl', L: 'text-m-lg lg:text-lg', XS: 'text-m-xs lg:text-xs', }); ``` * The above example will apply the same meta to all element tokens. ### Shadowing Vital Element Tokens For more information on shadowing Vital Element tokens, read [Shadow](./Shadow.md). ## Architectural Details When adding new Element tokens to the `vital-elements` package: * Add to existing Element if it fits the associated component token, or create a new component token with applicable name. If creating a new component token: * Create a static version of the component. * Add relevant metadata. * Remember to export all.
# my dotfiles
# column-file-browser

### About

Google Apps Script code to view your Google Drive files in a columnar layout

### Live demo

http://goo.gl/9wUv5H

### How to run the app

* Create a Google Apps Script project (https://script.google.com)
* Create two files, `Code.gs` and `Explorer.html`
* Copy the respective code from the repository into your Apps Script project
* Publish the app (Go to Publish --> Deploy as web app...)
* Select `Execute the app as:` --> `User accessing the web app` (Very very important)
* Select `Who has access to the app:` --> `Anyone` (Very very important)
* Deploy and test :)
## Goldmund-Client Development Notes

Following are notes regarding the `goldmund-client` package.

### Why I Elected to Use an External API

As you may know, Next.js version 9 saw the introduction of several new features which deprecated custom servers and introduced myriad utilities for integrating API routes into a Next.js project. Next.js v9's Routes API is incredible; however, it is very nascent. I have elected to maintain my API as a standard RESTful CRUD endpoint. I have done this for a few reasons:

- The Next.js Routes API does not support the level of granularity I need in my middlewares
  * I cannot perform logging as I would like
  * I cannot set sessions without inelegant logic (to say nothing of auth issues)
- Building my API into the Next server would effectively marry this project to Next.js; development would be contingent on Zeit's priorities
- My services still maintain same-origin given they will be containerized (no need for explicit service coupling)
- Authentication and authorization are both difficult to properly implement when dealing with Next.js; auth will be handled by the API
- ~~The rendered browser can simply store the token after authenticating with the API itself (as opposed to auth with a Next-routed API, which can be a detriment to security efforts)~~ (see the Goldmund.sh Admin CLI)

Finally, as someone who admires a great deal the ethos of the giants upon whose shoulders we stand (i.e., our UNIX forefathers), I'd prefer each service *do one thing, and do it well*. I have not enumerated this among my afore-cited reasons for using a decoupled API, as it is ultimately a matter of personal preference and not of performance. Next's internal API feature is blazing fast, but it simply is not extensible enough for my needs right now.
### Activating the Data Layer at Run-time

*Essentially deprecated, but I am maintaining this portion of the docs nonetheless; only applicable in development with Docker Compose, as the current Skaffold setup fixes this issue.*<sup>*</sup>

A curious consequence of my architecture is the sensitive run-time configuration required to activate the data layers. As one can readily see, `goldmund-client` is somewhat a misnomer, given it refers to a hybridized server-side-rendered/static-site-generated frontend. In order for the frontend service to operate in its proper context, `goldmund-api` must be *actively serving data* at `goldmund-client`'s build-time. This is a quandary given my environment is fully automated; I must find a way to enforce a chronology at run-time.

As it stands, this has been accomplished by utilizing a shell script (`wait-for-it.sh`) which polls for the `goldmund-api` service. The `goldmund-client` build will not occur until this script has successfully exited; ergo, the data layer acquires insurance. Simply specifying the Nginx routing service's contingency on `goldmund-client` delays Docker Engine's execution of its run-time command; thus:

1. `goldmund-api` build init
2. `goldmund-client` build init
3. `wait-for-it.sh` execute, `goldmund-client` build freeze
4. `goldmund-api` run
5. `goldmund-api` ack, `wait-for-it.sh` exit, `goldmund-client` build commence
6. `goldmund-client` run (build SSG content + export)
7. `goldmund-server` build, run --> serve SSG content from step 6

<sup>*</sup>~~We'll see how this configuration changes relative to Kubernetes deployment, an imminent step in development at this moment.~~

### Handling Isomorphic Requests in a Containerized Environment

I'll get around to this one...
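The polling step that `wait-for-it.sh` performs can be sketched in a few lines of shell. This is a minimal illustration of the pattern, not the actual script from this repo; the host, port, and timeout values are placeholders.

```shell
#!/bin/bash
# Minimal sketch of the wait-for-it pattern: poll a TCP host:port until it
# accepts connections (i.e., the data layer is up), or give up after a timeout.
# Defaults are hypothetical; the real wait-for-it.sh is more featureful.
wait_for() {
  host="$1"; port="$2"; timeout="${3:-30}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # bash's /dev/tcp pseudo-device attempts a TCP connect; the subshell
    # closes the descriptor for us on exit.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# e.g., freeze the goldmund-client build until the API answers
# (service name and port are illustrative):
# wait_for goldmund-api 5000 60 && npm run build
```

Gating the build command on the function's exit status is what enforces the chronology: step 3 blocks until step 4 completes.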
---
title: "Twizy Braking"
excerpt: "Achieving automated braking"
sidebar:
  - title: "Twizy Braking"
    nav: "Autonomous Twizy"
  - title: "The build"
    nav: "The build"
toc: true
toc_label: "Braking"
---

# Autonomous Twizy

## Letting the computer drive: Computer-controlled braking for a Renault Twizy

Within otiv.ai, advanced driver assistance and monitoring for trams is being developed. In order to demonstrate the technology to investors, a platform is required to run our tests on. In this section we will explore the steps, items, and tools necessary to perform this conversion. Feel free to contact me via my [Github](https://github.com/Nachtraven) or [Linkedin](https://www.linkedin.com/in/sean-nachtrab-14175715a/) if you require help or want me to do these steps for you.

![An afternoon of fiddling](/assets/images/working_on_twizy.jpg)

The show (or in this case, Twizy) stopping result:

![The final result](/assets/images/braking.gif)

The main elements are an electronic braking system, detailed under Twizy Braking, and a series of 3D-printed parts that hold the camera and radar to the vehicle (photo below). Additionally, an aluminium frame is built in the space occupied by the rear seats in order to hold a battery and computer used for testing.

![Twizy with accessories](/assets/images/niels_twizy_testing.jpg)

## The build

We aim to implement a system of computer-controlled brakes. In order to achieve this we utilised [OSCC's github](https://github.com/PolySync/oscc/wiki/Hardware-Brake-%28Petrol%29) as a starting point for our work.

![Prius Brake Module](/assets/images/brake_module_prius_small.jpg)

A 2004-2009 Prius braking module is used. The diagram below allows us to identify the main paths for brake fluid and the necessary connections.

![Pinout diagram](/assets/images/brake_pinout.png)

Our goal is to have the front brakes lock up fully, ignoring the rear and leaving them mechanically connected to the main brake pedal and handbrake as from the factory.
We begin by disassembling the front end:

![Twizy front end fully](/assets/images/font_fully_dissasembled.jpg)

![Twizy braking module positionning](/assets/images/module_positionning.jpg)

Mounting the module and routing the brake lines was not easy, but it ended up solidly in place, with fittings purchased at a Toyota dealership. Bleeding the brakes was necessary. A new loom was wired, and to control it, relays were used.

![Relays](/assets/images/relays.jpg)

Not very pretty looking, but we're prototyping here. For control, 10 A relays are utilised. For the SMC1/SMC2 safety solenoids, the SLAFL/SLRFL fill ports, and the SLRFL/SLRFR empty ports, the ground path is interrupted (note: the positive for the SMC1/2 solenoids comes from BSC1/BSC2). 3 ohms of resistance are added in order to lower the current consumption.

Using Arduino [Standard Firmata](https://www.arduino.cc/en/reference/firmata) allows us to write code in Python to control the Arduino outputs; thanks to the relays, the computer and Arduino are totally isolated from the 12 V of the braking system.

After all this it is time to put the Twizy back together and have a real-world test! Some cutting of the Twizy plastics on the front end was necessary, but once back together it is completely invisible from the outside.

The total time from beginning to end was about two weeks, or 60-ish hours, including the delays caused by difficulty finding the right brake lines, having to order a second pump in order to acquire a fill port, creating the wiring looms, and experimenting with different ways of controlling the solenoids and pump.
---
title: "Config File"
---

## Config File

`scope.yml` is the sole library configuration file in AppScope. The contents of the now-eliminated `scope_protocol.yml` configuration file reside in the `protocol` section of `scope.yml`.

### scope.yml Config File

Below are the default contents of `scope.yml`:

```
#
# AppScope Runtime Configuration
#
# The AppScope library (`libscope.so`) starts with default configs that are
# mimicked here in this file; meaning, run with no config, or with the stock
# version of this config, and the results are the same.
#
# After loading defaults, the library looks for a config in the following
# places in the order shown. The first readable file found is used and the rest
# are ignored. Entries in the config file override the defaults.
#
#   1. $SCOPE_CONF_PATH
#   2. $SCOPE_HOME/conf/scope.yml
#   3. $SCOPE_HOME/scope.yml
#   4. /etc/scope/scope.yml
#   5. $HOME/conf/scope.yml
#   6. $HOME/scope.yml
#   7. ./conf/scope.yml
#   8. ./scope.yml
#
# Next, SCOPE_* environment variables are used to override corresponding
# entries in the configs. Details are provided below for each setting and
# the corresponding environment variable names.
#
# Finally, if the `cribl > enable` config is true at this point, either from
# the config file or the $SCOPE_CRIBL/$SCOPE_CRIBL_CLOUD environment variable,
# the library forces the following:
#
#   - `metric > transport` is redirected to the `cribl` backend
#   - `metric > enable` is set to true
#   - `metric > format` is set to ndjson
#   - `event > transport` is redirected to the `cribl` backend
#   - `event > enable` is set to true
#   - `event > watch[]` with `name: http` is disabled
#   - `libscope > log > level` is set to warn
#   - `libscope > configevent` is set to true
#
# Use the `scope extract` command to get a copy of the default `scope.yml`.
#
# Use the command below to get a stripped down version of this config.
# # egrep -v '^ *#.*$' scope.yml | sed '/^$/d' >scope-minimal.yml # # Settings for metrics # metric: # Enable the metrics backend # Type: boolean # Values: true, false # Default: true # Override: $SCOPE_METRIC_ENABLE # # enable: true # Settings for the format of metric data format: # Metric format type # Type: string # Values: statsd, ndjson # Default: statsd # Override: $SCOPE_METRIC_FORMAT # # When the `cribl` backend is enabled, this is forced to ndjson. # type: statsd # Prefix for statsd metrics; ignored if type isn't statsd # Type: string # Values: (and string) # Default: (none) # Override: $SCOPE_STATSD_PREFIX # statsdprefix: # Maximum length of formatted statsd metrics; ignored unless type is statsd # Type: integer # Values: (greater than zero) # Default: 512 # Override: $SCOPE_STATSD_MAXLEN # statsdmaxlen: 512 # Metric verbosity level # Type: integer # Values: 0-9 # Default: 4 # Override: $SCOPE_METRIC_VERBOSITY # # This setting controls two different aspects of the metrics generated by # the library: tag cardinality and aggregation. Lower values reduce the # verbosity of metric data produced, while higher values increase it. # # Metrics have at a minimum name, value, and type properties. Optional tags # can be added to provide additional detail on the measurement. The library # adds expanded Statsd tags depending on the value of this setting as # described below. These affect the cardinality of the metrics data. # # 0 none # 1 adds data and unit # 2 adds class and proto # 3 adds op # 4 adds pid, host, proc, and http_status # 5 adds domain and file # 6 adds localip, remoteip, localp, port, and remotep # 7 adds fd and args # 8 adds duration, numops, req_per_sec, req, resp, and protocol # # The library counts various events and generates metrics for them # periodically. The verbosity config disables this metric aggregation for # groups of events. 
When disabled, events that would normally have been # summarized in an aggregate metric are instead sent as individual metrics # with a count of 1 and additional details from the event added, e.g., # operation, filename, process, error code, etc. # # 0-4 full metric aggregation # 5 disable error metric aggregation # 6 disable filesystem open/close and DNS metric aggregation # 7 disable filesystem stat and network connect metric aggregation # 8 disable filesystem seek metric aggregation # 9 disable filesystem read/write and network send/recv metric aggregation # verbosity : 4 # Backend connection for metrics # # When the `cribl` backend is enabled, these settings are ignored and metrics # are instead sent to the `cribl` backend. # transport: # Set $SCOPE_METRIC_DEST to override the type, host, port, and path configs # below. The environment variable should be set to a URL. # # file:///tmp/output.log send to a file; note the triple slash # file://stdout send to standard out # file://stderr send to standard error # udp://host:port send to a network server (UDP protocol) # tcp://host:port send to a network server (TCP protocol) # unix://@abstractname send to a unix domain server w/abstract addr # unix:///var/run/mysock send to a unix domain server w/filesystem addr # edge send to cribl edge (over unix domain) # # Note: tls:// is not an option here. For TLS/SSL, use tcp://host:port and # set the $SCOPE_METRIC_TLS_* variables. # Connection type # Type: string # Values: udp, tcp, unix, file, and edge # Default: udp # Override: the protocol token in the $SCOPE_METRIC_DEST URL # type: udp # Connection host/address # Type: string # Values: (hostname or IP address) # Default: 127.0.0.1 # Override: the host token in the $SCOPE_METRIC_DEST URL # host: 127.0.0.1 # Connection port # Type: integer or string # Values: IP port number or service name # Default: 8125 # Override: the port token in the $SCOPE_METRIC_DEST URL # # The default 8125 is for normal statsd services. 
# port: 8125 # File path / unix domain socket path # Type: string # Values: (directory path, or socket path) # Default: (none) # Override: the path token in the $SCOPE_METRIC_DEST URL # # Applies when connection type is file or unix. # #path: '' # File buffering # Type: string # Values: line, full # Default: line # # Only applies when connection type is file # # Set this to line if there's a chance that multiple scoped processes will # be writing to the same file. This prevents interleaving of lines and # scrambling of the log file. Setting this to full may improve performance # in single-writer scenarios. # #buffer: line # TLS connection settings tls: # Enable TLS for the metrics backend # Type: boolean # Values: true, false # Default: false # Override: $SCOPE_METRIC_TLS_ENABLE # # Only applies when the connection type is tcp. # enable: false # Validate the TLS server certificate # Type: boolean # Values: true, false # Default: false # Override: $SCOPE_METRIC_TLS_VALIDATE_SERVER # # Set to false, works like the `curl -k` option. When set to true, the # connection will fail if the server certificate cannot be validated. # # Only applies if the connection type is tcp and TLS is enabled. # validateserver: true # CA Certificate Path # Type: string # Values: (file path) # Default: (none) # Override: $SCOPE_METRIC_TLS_CA_CERT_PATH # # Leave this blank when validateserver is set to true and the local # OS-provided trusted CA certificates are used to validate the server's # certificate. To use a PEM certificate file instead, specify its # full path; useful with self-signed certificates. # # Only applies if the connection type is tcp and TLS is enabled. # cacertpath: '' # Settings for events # event: # Enable the events backend # Type: boolean # Values: true, false # Default: true # Override: $SCOPE_EVENT_ENABLE # # enable: true # Tags can be applied to events as with metrics. Settings are in # the `metric > tags` section. See the notes there for details. 
# Settings for the format of event data format: # Metric format type # Type: string # Values: ndjson # Default: ndjson # Override: $SCOPE_EVENT_FORMAT # type: ndjson # Event rate limiter # Type: integer # Values: 0+ # Default: 10000 # Override: $SCOPE_EVENT_MAXEPS # # Set this to 0 to disable the limiter. # maxeventpersec: 10000 # Enable enhanced filesystem event data # Type: boolean # Values: true, false # Default: true # Override: $SCOPE_ENHANCE_FS # # When set to true, `event > watch[*] > type=fs` is enabled. We add uid, # gid, and mode to open events. # enhancefs: true # The `event > watch[*]` array contains objects that enable different # categories of events. Their type property specifies the category. The # rest are filters, so only matching events are generated. Comment out an # array entry to disable the category. watch: # The file category includes writes to files. It's intended primarily for # monitoring log files but is capable of generating events to writes to any # file. The name and value properties are regular expressions applied to # the filename and written data, respectively. Events will be generated when # both match. # # Set $SCOPE_EVENT_LOGFILE to true or false to enable or disable this # category. The regular expressions can be set with # $SCOPE_EVENT_LOGFILE_NAME and $SCOPE_EVENT_LOGFILE_VALUE. # - type: file name: (\/logs?\/)|(\.log$)|(\.log[.\d]) value: .* # The console category includes writes to standard out and error and is # intended for monitoring console output, especially in containerized # environments where logging to files isn't commonly done. The name and # value properties are regular expressions applied to the filename and # written data, respectively. Events will be generated when both match. # # Set $SCOPE_EVENT_CONSOLE to true or false to enable or disable this # category. The regular expressions can be set with # $SCOPE_EVENT_CONSOLE_NAME and $SCOPE_EVENT_CONSOLE_VALUE. 
# - type: console name: (stdout)|(stderr) value: .* # The net category includes network operations like listen, connect, close, # send, recv, etc. The name, field, and value properties are regular # expressions applied to the corresponding event properties. Events will be # generated when all match. # # Set $SCOPE_EVENT_NET to true or false to enable or disable this # category. The regular expressions can be set with # $SCOPE_EVENT_NET_NAME, $SCOPE_EVENT_NET_FIELD, and $SCOPE_EVENT_NET_VALUE. # - type: net name: .* field: .* value: .* # The fs category includes filesystem operations like open, close, stat, # read, write, etc. The name, field, and value properties are regular # expressions applied to the corresponding event properties. Events will be # generated when all match. # # Set $SCOPE_EVENT_FS to true or false to enable or disable this # category. The regular expressions can be set with # $SCOPE_EVENT_FS_NAME, $SCOPE_EVENT_FS_FIELD, and $SCOPE_EVENT_FS_VALUE. # - type: fs name: .* field: .* value: .* # The dns category includes DNS request and response events. The name, field, # and value properties are regular expressions applied to the corresponding # event properties. Events will be generated when all match. # # Set $SCOPE_EVENT_DNS to true or false to enable or disable this # category. The regular expressions can be set with # $SCOPE_EVENT_DNS_NAME, $SCOPE_EVENT_DNS_FIELD, and $SCOPE_EVENT_DNS_VALUE. # - type: dns name: .* field: .* value: .* # The http category includes HTTP request and response events. It currently # only supports HTTP/1.x, not HTTP/2. The name, field, value, and headers # properties are regular expressions applied to the corresponding event # properties. Events will be generated when all match. # # Set $SCOPE_EVENT_HTTP to true or false to enable or disable this # category. The regular expressions can be set with $SCOPE_EVENT_HTTP_NAME, # $SCOPE_EVENT_HTTP_FIELD, $SCOPE_EVENT_HTTP_VALUE, and # $SCOPE_EVENT_HTTP_HEADER. 
# # When the `cribl` backend is enabled, this is disabled. # - type: http name: .* field: .* value: .* headers: .* # yes, this should be singular but it's not. # The metric category is very seldom used. It includes events for # operations that are included in the metric aggregation described earlier # in `metric > verbosity`. It essentially enables events the same way # that setting verbosity to 9 generates raw metrics. This is only ever used # as a last resort when tracking down a problem and should rarely, if ever, # be enabled. Fraught with peril! # # The name, field, and value properties are all regular expressions. Only # matching events will be generated. # # Warning: Enabling this may interfere with proper metric aggregation. # # Set $SCOPE_EVENT_METRIC to true or false to enable or disable this # category. The regular expressions can be set with # $SCOPE_EVENT_METRIC_NAME, $SCOPE_EVENT_METRIC_FIELD, and # $SCOPE_EVENT_METRIC_VALUE. # #- type: metric # name: .* # field: .* # value: .* # Backend connection for events # # When the `cribl` backend is enabled, these settings are ignored and events # are instead sent to the `cribl` backend. # transport: # Set $SCOPE_EVENT_DEST to override the type, host, port, and path configs # below. The environment variable should be set to a URL. # # file:///tmp/output.log send to a file; note the triple slash # file://stdout send to standard out # file://stderr send to standard error # udp://host:port send to a network server (UDP protocol) # tcp://host:port send to a network server (TCP protocol) # unix://@abstractname send to a unix domain server w/abstract addr # unix:///var/run/mysock send to a unix domain server w/filesystem addr # edge send to cribl edge (over unix domain) # # Note: tls:// is not an option here. For TLS/SSL, use tcp://host:port and # set the $SCOPE_EVENT_TLS_* variables. 
# Connection type # Type: string # Values: udp, tcp, unix, file, and edge # Default: tcp # Override: the protocol token in the $SCOPE_EVENT_DEST URL # type: tcp # Connection host/address # Type: string # Values: (hostname or IP address) # Default: 127.0.0.1 # Override: the host token in the $SCOPE_EVENT_DEST URL # host: 127.0.0.1 # Connection port # Type: integer or string # Values: IP port number or service name # Default: 9109 # Override: the port token in the $SCOPE_EVENT_DEST URL # port: 9109 # File path / unix domain socket path # Type: string # Values: (directory path, or socket path) # Default: (none) # Override: the path token in the $SCOPE_EVENT_DEST URL # # Applies when connection type is file or unix. # #path: '' # File buffering # Type: string # Values: line, full # Default: line # # Only applies when connection type is file. # # Set this to line if there's a chance that multiple scoped processes will # be writing to the same file. This prevents interleaving of lines and # scrambling of the log file. Setting this to full may improve performance # in single-writer scenarios. # #buffer: line # TLS connection settings tls: # Enable TLS for the events backend # Type: boolean # Values: true, false # Default: false # Override: $SCOPE_EVENT_TLS_ENABLE # # Only applies when the connection type is tcp. # enable: false # Validate the TLS server certificate # Type: boolean # Values: true, false # Default: false # Override: $SCOPE_EVENT_TLS_VALIDATE_SERVER # # Set to false, works like the `curl -k` option. When set to true, the # connection will fail if the server certificate cannot be validated. # # Only applies if the connection type is tcp and TLS is enabled. # validateserver: true # CA Certificate Path # Type: string # Values: (file path) # Default: (none) # Override: $SCOPE_EVENT_TLS_CA_CERT_PATH # # Leave this blank when validateserver is set to true and the local # OS-provided trusted CA certificates are used to validate the server's # certificate. 
# To use a PEM certificate file instead, specify its
# full path; useful with self-signed certificates.
#
# Only applies if the connection type is tcp and TLS is enabled.
#
cacertpath: ''

# Settings for payloads
#
payload:

  # Enable payload capture
  # Type: boolean
  # Values: true, false
  # Default: false
  # Override: $SCOPE_PAYLOAD_ENABLE
  #
  # This can produce large amounts of data from I/O-intensive programs and
  # should be considered carefully before being enabled.
  #
  # See `protocol` for a way to enable this for specific protocols instead of
  # all traffic.
  #
  enable: false

  # Directory for payload files
  # Type: string
  # Values: (directory path)
  # Default: /tmp
  # Override: $SCOPE_PAYLOAD_DIR
  #
  # Consider using a performant filesystem to reduce I/O performance impacts.
  #
  dir: '/tmp'

# Settings for the library
#
libscope:

  # Enable the config-event message on the event or `cribl` backend
  # Type: boolean
  # Values: true, false
  # Default: true
  # Override: $SCOPE_CONFIG_EVENT
  #
  # The config-event message is the first one sent on the connection and
  # contains details identifying the scoped program and the runtime configs.
  # It's more commonly referred to as the process-start message.
  #
  configevent: true

  # Metric summary interval
  # Type: integer
  # Values: 1+ seconds
  # Default: 10
  # Override: $SCOPE_SUMMARY_PERIOD
  #
  # See also `metric > verbosity`.
  #
  summaryperiod: 10

  # Command directory
  # Type: string
  # Values: (directory path)
  # Default: /tmp
  # Override: $SCOPE_CMD_DIR
  #
  # The library looks here periodically (see `libscope > summaryperiod`) for a
  # file named scope.{pid} matching the current process. If found, it's loaded
  # and deleted. The file should contain environment variables, one per line.
  #
  #   SCOPE_METRIC_VERBOSITY=9
  #   SCOPE_EVENT_HTTP=false
  #
  # The given variables are applied to the running config just like startup.
# commanddir : '/tmp' # Logging settings for the library # log: # Set logging verbosity # Type: string # Values: debug, info, warning, error, or none # Default: warning # Override: $SCOPE_LOG_LEVEL # # When the `cribl` backend is enabled, this is forced to warning. # level: warning # Backend connection for logs # transport: # Set $SCOPE_LOG_DEST to override the type, host, port, and path configs # below. The environment variable should be set to a URL. # # file:///tmp/output.log send to a file; note the triple slash # file://stdout send to standard out # file://stderr send to standard error # udp://host:port send to a network server (UDP protocol) # tcp://host:port send to a network server (TCP protocol) # unix://@abstractname send to a unix domain server w/abstract addr # unix:///var/run/mysock send to a unix domain server w/filesystem addr # edge send to cribl edge (over unix domain) # # Note: tls:// is not an option here. For TLS/SSL, use tcp://host:port and # set the $SCOPE_LOG_TLS_* variables. # Connection type # Type: string # Values: udp, tcp, unix, file, and edge # Default: file # Override: the protocol token in the $SCOPE_LOG_DEST URL # type: file # Connection host/address # Type: string # Values: (hostname or IP address) # Default: (none) # Override: the host token in the $SCOPE_LOG_DEST URL # #host: # Connection port # Type: integer or string # Values: IP port number or service name # Default: (none) # Override: the port token in the $SCOPE_LOG_DEST URL # #port: # File path / unix domain socket path # Type: string # Values: (directory path, or socket path) # Default: '/tmp/scope.log' # Override: the path token in the $SCOPE_LOG_DEST URL # # Applies when connection type is file or unix. # path: '/tmp/scope.log' # File buffering # Type: string # Values: line, full # Default: line # # Only applies when connection type is file. # # Set this to line if there's a chance that multiple scoped processes will # be writing to the same file. 
This prevents interleaving of lines and # scrambling of the log file. Setting this to full may improve performance # in single-writer scenarios. # buffer: line # Settings for the `cribl` backend # cribl: # Enable the `cribl` backend # Type: boolean # Values: true, false # Default: true # Override: $SCOPE_CRIBL_ENABLE # enable: true # Authentication token # Type: string # Values: (any) # Default: (none) # Override: $SCOPE_CRIBL_AUTHTOKEN # # If set, the value is added as a top-level authToken property in the initial # config-event (header) sent to Cribl when the library connects. # #authtoken: # Backend connection for cribl # transport: # Set $SCOPE_CRIBL to override the type, host, port and socket path configs below. # The environment variable should be set to a URL. # # tcp://host:port send to a TCP server # unix://@abstractname send to a unix domain server w/abstract addr # unix:///var/run/mysock send to a unix domain server w/filesystem addr # edge send to cribl edge (over unix domain) # # Note: tls:// is not an option here. For TLS/SSL, use tcp://host:port and # set the $SCOPE_CRIBL_TLS_* variables. # # Note: file:// is not supported here. # # Alternatively, set $SCOPE_CRIBL_CLOUD to the same URL and the library # sets $SCOPE_CRIBL_TLS_ENABLE=true, $SCOPE_CRIBL_TLS_VALIDATE_SERVER=true, # and $SCOPE_CRIBL_TLS_CA_CERT_PATH="" for you. # Connection type # Type: string # Values: tcp, unix, and edge # Default: edge # Override: the protocol token in the $SCOPE_CRIBL or $SCOPE_CRIBL_CLOUD URL # type: edge # Connection host/address # Type: string # Values: (hostname or IP address) # Default: 127.0.0.1 # Override: the host token in the $SCOPE_CRIBL or $SCOPE_CRIBL_CLOUD URL # # Only applies when the connection type is tcp. 
# host: 127.0.0.1 # Connection port # Type: integer or string # Values: IP port number or service name # Default: 10090 # Override: the port token in the $SCOPE_CRIBL or $SCOPE_CRIBL_CLOUD URL # # Defaults to 10090, which is the TCP port on the AppScope Source # in LogStream. If you are using the cloud version, 10090 is the TLS port # on the client-facing load balancer which is proxied to the cloud instance's # TCP:10090 port, without TLS. # # Use 10091 here if you need to connect to Cribl Cloud without TLS and # are not making any changes in the AppScope Source. # # Only applies when the connection type is tcp. # port: 10090 # Unix domain socket path # Type: string # Values: socket path # Default: (none) # Override: the socket_path token in the $SCOPE_CRIBL or $SCOPE_CRIBL_CLOUD URL # # Only applies when the connection type is unix. # #path: '' # TLS connection settings tls: # Enable TLS for the metrics backend # Type: boolean # Values: true, false # Default: false # Override: $SCOPE_CRIBL_TLS_ENABLE or use $SCOPE_CRIBL_CLOUD # # Only applies when the connection type is tcp. # enable: false # Validate the TLS server certificate # Type: boolean # Values: true, false # Default: false # Override: $SCOPE_CRIBL_TLS_VALIDATE_SERVER # # Set to false, works like the `curl -k` option. When set to true, the # connection will fail if the server certificate cannot be validated. # # Only applies if the connection type is tcp and TLS is enabled. # validateserver: true # CA Certificate Path # Type: string # Values: (file path) # Default: (none) # Override: $SCOPE_CRIBL_TLS_CA_CERT_PATH # # Leave this blank when validateserver is set to true and the local # OS-provided trusted CA certificates are used to validate the server's # certificate. To use a PEM certificate file instead, specify its # full path; useful with self-signed certificates. # # Only applies if the connection type is tcp and TLS is enabled. 
      #
      cacertpath: ''

# Tags for events and metrics
#
tags:

  # `key: value` entries here become fields in generated events and metrics.
  #
  # Simple $EXAMPLE variables in the value will be replaced with the
  # corresponding environment variable values. The regex looks for dollar signs
  # followed by one or more alphanumeric or underscore characters. If the
  # corresponding environment variable is not set, the variable is left in the
  # value.
  #
  # Tags can also be added with environment variables prefixed with SCOPE_TAG_.
  # For example, SCOPE_TAG_service=eg is equivalent to the "service" example
  # below. The value of the environment variable may contain other variables
  # as described above too; i.e. SCOPE_TAG_user=\$USER.
  #
  #user: $USER
  #service: eg

# Protocol detection and handling
#
protocol:

  # Entries in this list define protocols that AppScope should detect in network
  # payloads and how to handle matches. The first packet seen on a channel is
  # checked against the regular expression in each entry in the order they
  # appear in this file. When one matches, later entries are skipped.
  #
  # Entries have the following properties:
  #
  #   name     String protocol name used in protocol-detect events and payload
  #            headers sent to LogStream (required)
  #   regex    The regular expression to use (required)
  #   binary   Boolean indicating whether the regex should be applied to a
  #            hex-string version of the payload instead of the binary payload
  #            (default: false)
  #   len      The number of bytes to convert to hex when `binary` is true
  #            (default: 256)
  #   detect   Boolean indicating whether protocol-detect events should be
  #            generated (default: true)
  #   payload  Boolean indicating whether payload-processing should be enabled
  #            for matching streams (default: false)
  #
  # When payloads are enabled globally (`payload > enable`), the payload
  # options here are ignored.
  #
  # Warning: The `name` value is currently inserted into the JSON header for
  # payloads sent to LogStream so it cannot contain double quotes or
  # back-slashes without breaking the JSON. It needs to be kept fairly short
  # too so the header doesn't exceed the 1k limit. If this becomes a problem,
  # we'll consider adding logging and validation.

  # Example for the plain-text Redis protocol using the default detect and
  # payload settings
  #
  #- name: Redis
  #  regex: "^[*]\\d+|^[+]\\w+|^[$]\\d+"

  # Example for the MongoDB protocol showing how to detect a binary protocol
  #
  #- name: Mongo
  #  regex: "^240100000000000000000000d407"
  #  binary: true
  #  len: 14

  # AppScope uses an internally defined protocol detector for HTTP like the
  # example below automatically when the LogStream backend is enabled.
  #
  # Uncomment this and adjust as needed to override the defaults or to enable
  # HTTP detection when not using LogStream.
  #
  #- name: HTTP
  #  regex: " HTTP\\/1\\.[0-2]|PRI \\* HTTP\\/2\\.0\r\n\r\nSM\r\n\r\n"
  #  detect: true
  #  payload: true

  # AppScope uses another internally defined protocol detector for TLS like the
  # example below by default.
  #
  # Uncomment this entry to override the regex details or to set detect to
  # false. The payload setting here is never used. AppScope never sends
  # encrypted payloads to disk and only sends payloads to LogStream during TLS
  # negotiation.
  #
  #- name: TLS
  #  regex: "^16030[0-3].{4}0[12]"
  #  binary: true
  #  len: 6

# Custom configs
#
custom:

  # Entries here represent overrides of the settings defined above for scoped
  # processes that match a set of filters. Each has a name and `filter` and
  # `config` entries as shown below.
  #
  #   name:
  #     filter:
  #       ...
  #     config:
  #       ...
  #
  # Entries under `filter` are used to match aspects of a scoped process. There
  # must be at least one of them and all of them must match for the filter to
  # succeed. The following filters are supported.
  #
  #   procname: string
  #
  #     Matches if the given string value matches the basename of the scoped
  #     process.
  #
  #   arg: string
  #
  #     Matches if the given string value appears as a substring anywhere in
  #     the scoped process's full command line, including any options and
  #     arguments.
  #
  #   hostname: string
  #
  #     Matches if the given string value matches the hostname of the machine
  #     where the scoped process is running.
  #
  #   username: string
  #
  #     Matches if the given string value matches the username for the scoped
  #     process's UID.
  #
  #   env: string
  #
  #     The string value is the name of an environment variable alone (i.e.
  #     "FOO") or with a value (i.e. "FOO=bar"). The filter matches if the
  #     environment variable is set and, in the latter case, the value matches.
  #
  #   ancestor: string
  #
  #     Matches if the given string matches the basename of the scoped process's
  #     parent, parent's parent, etc.
  #
  # The `config` section specifies the settings that should be overridden when
  # the filter matches. Entries under `config` use the same schema as the
  # top-level entries (without `custom`).

  # Increase metric verbosity for processes owned by the "eg" user and running
  # on the "eg1" host.
  #
  #example:
  #  filter:
  #    username: eg
  #    hostname: eg1
  #  config:
  #    metric:
  #      format:
  #        verbosity: 7
  #    tags:
  #      service: eg

  # Enable the Cribl/Logstream destination for Nginx processes. Both this entry
  # and the `example` entry above would apply if both filters match so the
  # service tag here would override the one above.
  #
  #nginx:
  #  filter:
  #    procname: nginx
  #  config:
  #    tags:
  #      service: nginx
  #    cribl:
  #      enable: true
  #      transport:
  #        type: tcp
  #        host: in.my-instance.logstream.cribl.cloud
  #        port: 10090
  #        tls:
  #          enable: true

# EOF
```
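Two of the behaviors described in this configuration can be sketched outside of AppScope. The snippet below is a rough model of my reading of the comments above, not AppScope's actual implementation: tag values expand `$NAME` references (dollar sign plus alphanumerics/underscores) from the environment and leave the token untouched when the variable is unset, and `binary` protocol entries match their regex against a hex-string rendering of the first `len` payload bytes, using the TLS entry's regex as the example.

```python
import binascii
import os
import re

# One reading of the tag-value expansion rule: replace $NAME with the
# environment variable's value, or leave the token as-is when unset.
_VAR = re.compile(r"\$([A-Za-z0-9_]+)")

def expand_tag(value: str) -> str:
    return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)

# Binary protocol detection: hex-encode the first `hexlen` payload bytes and
# apply the regex to that string (regex taken from the TLS entry above).
TLS_REGEX = re.compile(r"^16030[0-3].{4}0[12]")

def looks_like_tls(payload: bytes, hexlen: int = 6) -> bool:
    return TLS_REGEX.match(binascii.hexlify(payload[:hexlen]).decode()) is not None
```

For instance, with the TLS entry's `len: 6`, the first six bytes of a TLS ClientHello record hex-encode to a string that this regex accepts, while a plain-text HTTP request does not match.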

Source: wdk-ddi-src/content/ntifs/nf-ntifs-psreferenceprimarytoken.md (sathishcg/windows-driver-docs-ddi; CC-BY-4.0, MIT)

---
UID: NF:ntifs.PsReferencePrimaryToken
title: PsReferencePrimaryToken function
author: windows-driver-content
description: The PsReferencePrimaryToken routine increments the reference count of the primary token for the specified process.
old-location: ifsk\psreferenceprimarytoken.htm
tech.root: ifsk
ms.assetid: 8ff1add9-4b9e-42dd-b3e2-53d891788d43
ms.author: windowsdriverdev
ms.date: 4/16/2018
ms.keywords: PsReferencePrimaryToken, PsReferencePrimaryToken routine [Installable File System Drivers], ifsk.psreferenceprimarytoken, ntifs/PsReferencePrimaryToken, psref_021aea60-1707-4817-9169-95a3dc79adb6.xml
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: function
req.header: ntifs.h
req.include-header: FltKernel.h, Ntifs.h
req.target-type: Universal
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: NtosKrnl.lib
req.dll: NtosKrnl.exe
req.irql: PASSIVE_LEVEL
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- NtosKrnl.exe
api_name:
- PsReferencePrimaryToken
product:
- Windows
targetos: Windows
req.typenames:
---

# PsReferencePrimaryToken function

## -description

The <b>PsReferencePrimaryToken</b> routine increments the reference count of the primary token for the specified process.

## -parameters

### -param Process [in, out]

Pointer to the process whose primary token's reference count is to be incremented.

## -returns

<b>PsReferencePrimaryToken</b> returns a pointer to the primary token for the given process.

## -remarks

This routine is available starting with Microsoft Windows 2000.

<b>PsReferencePrimaryToken</b> increments the reference count of the returned primary token.
Thus for every successful call to <b>PsReferencePrimaryToken</b>, the primary token's reference count must be decremented by calling one of the following functions:

<ul>
<li>
<b>ObDereferenceObject</b>, for Windows 2000
</li>
<li>
<b>PsDereferencePrimaryToken</b>, for Microsoft Windows XP and later.
</li>
</ul>

For more information about security and access control, see the documentation on these topics in the Microsoft Windows SDK.

## -see-also

<a href="https://msdn.microsoft.com/library/windows/hardware/ff557724">ObDereferenceObject</a>

<a href="https://msdn.microsoft.com/library/windows/hardware/ff551896">PsDereferencePrimaryToken</a>

<a href="https://msdn.microsoft.com/library/windows/hardware/ff551929">PsReferenceImpersonationToken</a>

<a href="https://msdn.microsoft.com/library/windows/hardware/ff556690">SeQueryInformationToken</a>
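The call discipline described in the Remarks section can be illustrated with a short kernel-mode sketch. This fragment is not part of the original reference page; the function name and the elided token usage are placeholders, and it assumes Windows XP or later (so <b>PsDereferencePrimaryToken</b> releases the reference) running at PASSIVE_LEVEL.

```cpp
#include <ntifs.h>

VOID ExamineProcessToken(PEPROCESS Process)
{
    // Returns the primary token with its reference count incremented.
    PACCESS_TOKEN token = PsReferencePrimaryToken(Process);

    //
    // Use the token here, for example with SeQueryInformationToken.
    //

    // Every successful PsReferencePrimaryToken call must be balanced with
    // PsDereferencePrimaryToken (Windows XP and later).
    PsDereferencePrimaryToken(token);
}
```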

Source: CHANGES.md (christopher-hesse/gym3; MIT)

# Changelog

## 0.3.3

* Fix memory leak in `ViewerWrapper` thanks to @jyotishp for reporting/debugging this.
* Fix rendering settings, thanks @jyotishp and @mtrazzi!
* Add seed argument to `vectorize_gym` and `FromGymEnv` to seed before reset

## 0.3.2

* Fix bug in `render()` function in `FromGymEnv`

## 0.3.1

* Add missing font file

## 0.3.0

* Initial release

Source: articles/cloud-services/cloud-services-dotnet-get-started.md (rbirksteiner/azure-content-nlnl; CC-BY-3.0)

<properties
	pageTitle="Get started with Azure Cloud Services and ASP.NET | Microsoft Azure"
	description="Learn how to create a multi-tier app using ASP.NET MVC and Azure. The app runs in a cloud service, with a web role and a worker role. It uses Entity Framework, SQL Database, and Azure Storage queues and blobs."
	services="cloud-services, storage"
	documentationCenter=".net"
	authors="Thraka"
	manager="timlt"
	editor=""/>

<tags
	ms.service="cloud-services"
	ms.workload="tbd"
	ms.tgt_pltfrm="na"
	ms.devlang="dotnet"
	ms.topic="hero-article"
	ms.date="03/21/2016"
	ms.author="adegeo"/>

# Get started with Azure Cloud Services and ASP.NET

> [AZURE.SELECTOR]
- [Node.js](cloud-services-nodejs-develop-deploy-app.md)
- [.NET](cloud-services-dotnet-get-started.md)

## Overview

This tutorial shows how to create a multi-tier .NET application with an ASP.NET MVC front end and deploy it to an [Azure cloud service](cloud-services-choose-me.md). The application uses [Azure SQL Database](http://msdn.microsoft.com/library/azure/ee336279), the [Azure Blob service](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/unstructured-blob-storage), and the [Azure Queue service](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern).

You can [download the Visual Studio project](http://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4) from the MSDN Code Gallery.

The tutorial shows you how to build and run the application locally, how to deploy it to Azure and run it in the cloud, and finally how to build it from scratch. You can optionally start by building it from scratch and then do the test and deploy steps afterwards.

## Contoso Ads application

The application is an advertising bulletin board.
Users create an ad by entering text and uploading an image. They can see a list of ads with thumbnail images, and when they select an ad to see the details, they can view the image at full size.

![Ad list](./media/cloud-services-dotnet-get-started/list.png)

The application uses the [queue-centric work pattern](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern) to off-load the work of creating thumbnails (a CPU-intensive operation) to a back-end process.

## Alternative architecture: Websites and WebJobs

This tutorial shows how to run both the front end and the back end in an Azure cloud service. An alternative is to run the front end in an [Azure website](/services/web-sites/) and use the [WebJobs](http://go.microsoft.com/fwlink/?LinkId=390226) feature (currently in preview) for the back end. For a tutorial that uses WebJobs, see [Get Started with the Azure WebJobs SDK](../app-service-web/websites-dotnet-webjobs-sdk-get-started.md). For information about how to choose the services that best fit your scenario, see [Azure Websites, Cloud Services, and Virtual Machines comparison](../app-service-web/choose-web-site-cloud-service-vm.md).

## What you'll learn

* How to get your machine ready for Azure development by installing the Azure SDK.
* How to create a Visual Studio cloud service project with an ASP.NET MVC web role and a worker role.
* How to test the cloud service project locally, using the Azure storage emulator.
* How to publish the cloud project to an Azure cloud service and test it using an Azure storage account.
* How to upload files and store them in the Azure Blob service.
* How to use the Azure Queue service for communication between tiers.
## Prerequisites

The tutorial assumes that you are familiar with the [basic concepts of Azure cloud services](cloud-services-choose-me.md), including the *web role* and *worker role* concepts. It also assumes that you know how to work with [ASP.NET MVC](http://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started) or [Web Forms](http://www.asp.net/web-forms/tutorials/aspnet-45/getting-started-with-aspnet-45-web-forms/introduction-and-overview) projects in Visual Studio. The sample application uses MVC, but most of the tutorial also applies to Web Forms.

You can run the app locally without an Azure subscription, but you'll need one to deploy the application to the cloud. If you don't have an account, you can [activate your MSDN subscriber benefits](/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A55E3C668) or [sign up for a free trial](/pricing/free-trial/?WT.mc_id=A55E3C668).

The tutorial instructions work with either of the following products:

* Visual Studio 2013
* Visual Studio 2015

If you don't have one of these, Visual Studio 2015 will be installed automatically when you install the Azure SDK.

## Application architecture

The app stores ads in a SQL database, using Entity Framework Code First to create the tables and access the data. For each ad, the database stores two URLs: one for the full-size image and one for the thumbnail.

![Ad table](./media/cloud-services-dotnet-get-started/adtable.png)

When a user uploads an image, the front end (running in a web role) stores the image in an [Azure blob](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/unstructured-blob-storage). The ad data is stored in the database, together with a URL that points to the blob.
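The blob upload just described can be sketched with the Azure Storage client library (Microsoft.WindowsAzure.Storage) in C#. This is an illustrative sketch rather than the sample's actual controller code; the container name, method shape, and content type are assumptions:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ImageStorage
{
    // Store an uploaded image as a blob and return the blob URL,
    // which the front end then saves in the ad's database row.
    public static async Task<string> UploadImageAsync(
        CloudStorageAccount account, Stream image, string blobName)
    {
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("images");
        await container.CreateIfNotExistsAsync();

        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        blob.Properties.ContentType = "image/jpeg";
        await blob.UploadFromStreamAsync(image);

        return blob.Uri.ToString();
    }
}
```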
At the same time, the front end writes a message to an Azure queue. A back-end process running in a worker role periodically polls the queue for new messages. When a new message arrives, the worker role creates a thumbnail for that image and updates the thumbnail URL database field for that ad. The following diagram shows how the parts of the application interact.

![Contoso Ads architecture](./media/cloud-services-dotnet-get-started/apparchitecture.png)

[AZURE.INCLUDE [install-sdk](../../includes/install-sdk-2015-2013.md)]

## Download and run the completed solution

1. Download and unzip the [completed solution](http://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4).

2. Start Visual Studio.

3. From the **File** menu, choose **Open Project**, navigate to where you downloaded the solution, and then open the solution file.

3. Press CTRL+SHIFT+B to build the solution.

	By default, Visual Studio automatically restores the NuGet package content, which was not included in the *.zip* file. If the packages don't restore, install them manually by going to the **Manage NuGet Packages for Solution** dialog and clicking the **Restore** button at the top right.

3. In **Solution Explorer**, make sure that **ContosoAdsCloudService** is selected as the startup project.

2. If you're using Visual Studio 2015, change the SQL Server connection string in the application *Web.config* file of the ContosoAdsWeb project and in the *ServiceConfiguration.Local.cscfg* file of the ContosoAdsCloudService project. In each case, change "(localdb)\v11.0" to "(localdb)\MSSQLLocalDB".

1. Press CTRL+F5 to run the application.

	When you run a cloud service project locally, Visual Studio automatically invokes the Azure *compute emulator* and the Azure *storage emulator*.
	The compute emulator uses your computer's resources to simulate the web role and worker role environments. The storage emulator uses a [SQL Server Express LocalDB](http://msdn.microsoft.com/library/hh510202.aspx) database to simulate Azure cloud storage.

	The first time you run a cloud service project, it takes a minute or so for the emulators to start up. When emulator startup is finished, the default browser opens to the application home page.

	![Contoso Ads home page](./media/cloud-services-dotnet-get-started/home.png)

2. Click **Create an Ad**.

2. Enter some test data and select a *.jpg* image to upload, and then click **Create**.

	![Create page](./media/cloud-services-dotnet-get-started/create.png)

	The app goes to the Index page, but it doesn't show a thumbnail for the new ad because that processing hasn't happened yet.

3. Wait a moment and then refresh the Index page to see the thumbnail.

	![Index page](./media/cloud-services-dotnet-get-started/list.png)

4. Click **Details** for your ad to see the full-size image.

	![Details page](./media/cloud-services-dotnet-get-started/details.png)

You've been running the application entirely on your local computer, with no connection to the cloud. The storage emulator stores the queue and blob data in a SQL Server Express LocalDB database, and the application stores the ad data in another LocalDB database. Entity Framework Code First created the ad database automatically the first time the web app tried to access it.

In the following section you'll configure the solution to use Azure cloud resources for queues, blobs, and the application database when it runs in the cloud. If you wanted to continue to run locally while using cloud storage and database resources, you could do that.
All you would have to do is set connection strings accordingly, as explained later.

## Deploy the application to Azure

You'll complete the following steps to run the application in the cloud:

* Create an Azure cloud service.
* Create an Azure SQL database.
* Create an Azure storage account.
* Configure the solution to use your Azure SQL database when it runs in Azure.
* Configure the solution to use your Azure storage account when it runs in Azure.
* Deploy the project to your Azure cloud service.

### Create an Azure cloud service

An Azure cloud service is the environment the application will run in.

1. In your browser, open the [Azure classic portal](http://manage.windowsazure.com).

2. Click **New > Compute > Cloud Service > Quick Create**.

4. In the URL input box, enter a URL prefix. This URL has to be unique; you'll get an error message if the prefix you choose is already in use by someone else.

5. Choose the region in which you want to deploy the application. This field specifies which datacenter your cloud service will be hosted in. For a production application, you'd choose the region closest to your customers. For this tutorial, choose the region closest to you.

6. Click **Create Cloud Service**.

	The following image shows a cloud service created with the URL contosoads.cloudapp.net.

	![New cloud service](./media/cloud-services-dotnet-get-started/newcs.png)

### Create an Azure SQL database

When the app runs in the cloud, it will use a cloud-based database.

1. In the [Azure classic portal](http://manage.windowsazure.com), click **New > Data Services > SQL Database > Quick Create**.

1. In the **Database name** box, enter *contosoads*.

1. In the **Server** drop-down list, choose **New SQL database server**. Alternatively, if your subscription already has a server, you can select that server from the drop-down list.

1. 
Choose the same **Region** that you chose for the cloud service.

	When the cloud service and database are in different datacenters (different regions), latency increases and you'll be charged for bandwidth outside the datacenter. Bandwidth within a datacenter is free.

1. Enter an administrator **Login name** and **Password**. If you selected **New SQL database server**, you aren't entering an existing name and password here; you're entering a new name and password that you're defining now, for use later when you access the database. If you selected a server created previously, you'll be prompted for the password of the existing administrative user account.

1. Click **Create SQL Database**.

	![New SQL database](./media/cloud-services-dotnet-get-started/newdb.png)

1. After Azure creates the database, click the **SQL databases** tab in the left pane of the portal, and then click the name of the new database.

2. Click the **Dashboard** tab.

3. Click **Manage allowed IP addresses**.

4. Under **Allowed services**, change **Azure services** to **Yes**.

5. Click **Save**.

### Create an Azure storage account

An Azure storage account provides resources for storing queue and blob data in the cloud.

In a real-world application, you would typically create separate accounts for application data versus logging data, and separate accounts for test data versus production data. For this tutorial, you'll use just one account.

1. In the [Azure classic portal](http://manage.windowsazure.com), click **New > Data Services > Storage > Quick Create**.

4. In the **URL** box, enter a URL prefix. This prefix, plus the text shown under the box, becomes the unique URL for your storage account. If the prefix you enter is already in use by someone else, you'll have to choose a different prefix.

5. 
Set the **Region** drop-down list to the same region that you chose for the cloud service.

	When the cloud service and storage account are in different datacenters (different regions), latency increases and you'll be charged for bandwidth outside the datacenter. Bandwidth within a datacenter is free.

	Azure affinity groups provide a mechanism for minimizing the distance between resources in a datacenter, which can reduce latency. This tutorial doesn't use affinity groups. For more information, see [How to Create an Affinity Group in Azure](http://msdn.microsoft.com/library/jj156209.aspx).

6. Set the **Replication** drop-down list to **Locally redundant**.

	When geo-replication is enabled for a storage account, the stored content is replicated to a secondary datacenter to enable failover to that location in case of a major disaster in the primary location. Geo-replication can incur additional costs, and for test and development accounts you generally don't want to pay for it. For more information, see [Create, manage, or delete a storage account](../storage/storage-create-storage-account.md#replication-options).

5. Click **Create Storage Account**.

	![New storage account](./media/cloud-services-dotnet-get-started/newstorage.png)

	The image shows a storage account with the URL `contosoads.core.windows.net`.

### Configure the solution to use your Azure SQL database when it runs in Azure

The web project and the worker role project each have their own database connection string, and each needs to point to the Azure SQL database when the app runs in Azure.

You'll use a [Web.config transform](http://www.asp.net/mvc/tutorials/deployment/visual-studio-web-deployment/web-config-transformations) for the web role and a cloud service environment setting for the worker role.
>[AZURE.NOTE] In this section and the following section, you store credentials in project files. [Don't store sensitive data in public source code repositories](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/source-control#secrets).

1. In the ContosoAdsWeb project, open the *Web.Release.config* transform file for the application *Web.config* file, delete the comment block that contains the `<connectionStrings>` element, and paste the following code in its place.

	```xml
	<connectionStrings>
	  <add name="ContosoAdsContext" connectionString="{connectionstring}"
	    providerName="System.Data.SqlClient" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
	</connectionStrings>
	```

	Leave the file open so you can edit it.

2. In the left pane of the [Azure classic portal](http://manage.windowsazure.com), click **SQL databases**, then click the database you created for this tutorial, click the **Dashboard** tab, and then click **Show connection strings**.

	![Show connection strings](./media/cloud-services-dotnet-get-started/showcs.png)

	The portal displays connection strings, with a placeholder for the password.

	![Connection strings](./media/cloud-services-dotnet-get-started/connstrings.png)

4. In the *Web.Release.config* transform file, delete `{connectionstring}` and paste in its place the ADO.NET connection string from the Azure classic portal.

5. In the connection string that you pasted into the *Web.Release.config* transform file, replace `{your_password_here}` with the password you created for the new SQL database.

7. Save the file.

6. Select and copy the connection string (without the surrounding quotation marks) for use in the next steps, which configure the worker role project.

5. 
In **Solution Explorer**, under **Roles** in the cloud service project, right-click **ContosoAdsWorker**, and then click **Properties**.

	![Role properties](./media/cloud-services-dotnet-get-started/rolepropertiesworker.png)

6. Click the **Settings** tab.

7. Change **Service Configuration** to **Cloud**.

7. Select the **Value** field for the `ContosoAdsDbConnectionString` setting, and then paste the connection string that you copied in the previous section of the tutorial.

	![Database connection string for worker role](./media/cloud-services-dotnet-get-started/workerdbcs.png)

7. Save your changes.

### Configure the solution to use your Azure storage account when it runs in Azure

Azure storage account connection strings for the web role project and the worker role project are stored in environment settings in the cloud service project. Each project has one set of settings to use when the application runs locally and a second set for running it in the cloud. You'll update the cloud environment settings for both the web role and the worker role project.

1. In **Solution Explorer**, right-click **ContosoAdsWeb** (under **Roles** in the **ContosoAdsCloudService** project), and then click **Properties**.

	![Role properties](./media/cloud-services-dotnet-get-started/roleproperties.png)

2. Click the **Settings** tab. In the **Service Configuration** drop-down list, choose **Cloud**.

	![Cloud configuration](./media/cloud-services-dotnet-get-started/sccloud.png)

3. Select the **StorageConnectionString** entry. You'll now see an ellipsis button (**...**) at the right end of the line. Click the ellipsis button to open the **Create Storage Account Connection String** dialog box.

	![Open Create Storage Connection String dialog](./media/cloud-services-dotnet-get-started/opencscreate.png)

4. 
In the **Create Storage Connection String** dialog box, click **Your subscription**, choose the storage account you created earlier, and then click **OK**. If you're not already signed in, you'll be prompted for your Azure account credentials.

	![Create storage connection string](./media/cloud-services-dotnet-get-started/createstoragecs.png)

5. Save your changes.

6. Follow the same procedure you used for the `StorageConnectionString` connection string to set the `Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString` connection string. This connection string is used for logging.

7. Follow the procedure you used for the **ContosoAdsWeb** role to set both connection strings for the **ContosoAdsWorker** role. Don't forget to set **Service Configuration** to **Cloud**.

The role environment settings that you configured by using the Visual Studio UI are stored in the following files in the ContosoAdsCloudService project:

* *ServiceDefinition.csdef*: defines the setting names.
* *ServiceConfiguration.Cloud.cscfg*: provides values for when the app runs in the cloud.
* *ServiceConfiguration.Local.cscfg*: provides values for when the app runs locally.

For example, ServiceDefinition.csdef includes the following definitions.

```xml
<ConfigurationSettings>
  <Setting name="StorageConnectionString" />
  <Setting name="ContosoAdsDbConnectionString" />
</ConfigurationSettings>
```

The *ServiceConfiguration.Cloud.cscfg* file contains the values that you entered in Visual Studio for those settings.
```xml
<Role name="ContosoAdsWorker">
  <Instances count="1" />
  <ConfigurationSettings>
    <Setting name="StorageConnectionString" value="{yourconnectionstring}" />
    <Setting name="ContosoAdsDbConnectionString" value="{yourconnectionstring}" />
    <!-- other settings not shown -->
  </ConfigurationSettings>
  <!-- other settings not shown -->
</Role>
```

The `<Instances>` setting specifies the number of virtual machines that Azure will run the worker role code on. The [Next steps](#next-steps) section includes links to more information about scaling out a cloud service.

### Deploy the project to Azure

1. In **Solution Explorer**, right-click the **ContosoAdsCloudService** cloud project, and then select **Publish**.

	![Publish menu](./media/cloud-services-dotnet-get-started/pubmenu.png)

2. In the **Sign in** step of the **Publish Azure Application** wizard, click **Next**.

	![Sign in step](./media/cloud-services-dotnet-get-started/pubsignin.png)

3. In the **Settings** step of the wizard, click **Next**.

	![Settings step](./media/cloud-services-dotnet-get-started/pubsettings.png)

	The default settings on the **Advanced** tab are fine for this tutorial. For information about the Advanced tab, see [Publish Azure Application Wizard](http://msdn.microsoft.com/library/hh535756.aspx).

4. In the **Summary** step, click **Publish**.

	![Summary step](./media/cloud-services-dotnet-get-started/pubsummary.png)

	The **Azure Activity Log** window opens in Visual Studio.

5. Click the right-arrow icon to expand the deployment details. The deployment can take 5 minutes or longer.

	![Azure Activity Log window](./media/cloud-services-dotnet-get-started/waal.png)

6. When the deployment status is complete, click the **Web app URL** to start the application.

7. 
You can now test the app by creating, viewing, and editing ads, just as you did when the application ran locally.

>[AZURE.NOTE] When you're finished testing, delete or stop the cloud service. Even if you're not using the cloud service, it accrues charges because virtual machine resources are reserved for it. And if you leave it running, anyone who finds your URL can create and view ads. In the [Azure classic portal](http://manage.windowsazure.com), go to the **Dashboard** tab for your cloud service, and then click the **Delete** button at the bottom of the page. If you just want to temporarily prevent others from accessing the site, click **Stop** instead. In that case, charges continue to accrue. You can follow a similar procedure to delete the SQL database and the storage account when you no longer need them.

## Create the application from scratch

If you haven't already downloaded the [completed application](http://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4), do so now. You'll copy files from the downloaded project into the new project.

Creating the Contoso Ads application involves the following steps:

* Create a cloud service solution in Visual Studio.
* Update and add NuGet packages.
* Set project references.
* Configure connection strings.
* Add code files.

After the solution is created, you'll review the code that is unique to cloud service projects and to Azure blobs and queues.

### Create a cloud service solution in Visual Studio

1. In Visual Studio, choose **New Project** from the **File** menu.

2. In the left pane of the **New Project** dialog box, expand **Visual C#**, choose the **Cloud** templates, and then choose the **Azure Cloud Service** template.

3. Name the project and solution ContosoAdsCloudService, and then click **OK**.
![New project](./media/cloud-services-dotnet-get-started/newproject.png)

4. In the **New Azure Cloud Service** dialog box, add a web role and a worker role. Name the web role ContosoAdsWeb, and name the worker role ContosoAdsWorker. (You can use the pencil icon in the right pane to change the role names.)

	![New cloud service project](./media/cloud-services-dotnet-get-started/newcsproj.png)

5. When you see the **New ASP.NET Project** dialog box for the web role, choose the MVC template, and then click **Change Authentication**.

	![Change authentication](./media/cloud-services-dotnet-get-started/chgauth.png)

6. In the **Change Authentication** dialog box, choose **No Authentication**, and then click **OK**.

	![No authentication](./media/cloud-services-dotnet-get-started/noauth.png)

7. In the **New ASP.NET Project** dialog box, click **OK**.

8. In **Solution Explorer**, right-click the solution (not one of the projects), and choose **Add - New Project**.

9. In the **Add New Project** dialog box, choose **Windows** (under **Visual C#** in the left pane), and then click the **Class Library** template.

10. Name the project *ContosoAdsCommon*, and then click **OK**.

	You need to reference the Entity Framework context and the data model from both the web role project and the worker role project. As an alternative, you could define the Entity Framework-related classes in the web role project and reference that project from the worker role project. With that alternative approach, however, your worker role project would carry a reference to web assemblies that it doesn't need.

### Update and add NuGet packages

1. Open the **Manage NuGet Packages** dialog box for the solution.

2. At the top of the window, select **Updates**.

3. Look for the *WindowsAzure.Storage* package, and if it's in the list, select it.
Then select the web and worker projects to update, and click **Update**.

	The storage client library is updated more frequently than Visual Studio project templates, so you'll often find that the version in a newly created project needs to be updated.

4. At the top of the window, select **Browse**.

5. Look for the *EntityFramework* NuGet package, and install it in all three projects.

6. Look for the *Microsoft.WindowsAzure.ConfigurationManager* NuGet package, and install it in the worker role project.

### Set project references

1. In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project. Right-click the ContosoAdsWeb project, and then click **References** - **Add References**. In the **Reference Manager** dialog box, select **Solution – Projects** in the left pane, select **ContosoAdsCommon**, and then click **OK**.

2. In the ContosoAdsWorker project, set a reference to the ContosoAdsCommon project.

	ContosoAdsCommon contains the Entity Framework context class and the data model, which are used by both the front end and the back end.

3. In the ContosoAdsWorker project, set a reference to `System.Drawing`. This assembly is used by the back end to convert images to thumbnails.

### Configure connection strings

In this section, you configure Azure Storage and SQL connection strings for local testing. The deployment instructions earlier in this guide explain how to set up the connection strings for when the app runs in the cloud.

1. In the ContosoAdsWeb project, open the Web.config file, and add the following `connectionStrings` element after the `configSections` element.
```xml
<connectionStrings>
    <add name="ContosoAdsContext" connectionString="Data Source=(localdb)\v11.0; Initial Catalog=ContosoAds; Integrated Security=True; MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />
</connectionStrings>
```

	If you're using Visual Studio 2015, replace "v11.0" with "MSSQLLocalDB".

2. Save your changes.

3. In the ContosoAdsCloudService project, right-click ContosoAdsWeb (under **Roles**), and then click **Properties**.

	![Role properties](./media/cloud-services-dotnet-get-started/roleproperties.png)

4. In the **ContosoAdsWeb [Role]** properties window, click the **Settings** tab, and then click **Add Setting**. Leave **Service Configuration** set to **All Configurations**.

5. Add a new setting named *StorageConnectionString*. Set **Type** to *ConnectionString*, and set **Value** to *UseDevelopmentStorage=true*.

	![New connection string](./media/cloud-services-dotnet-get-started/scall.png)

6. Save your changes.

7. Follow the same procedure to add a storage connection string in the ContosoAdsWorker role properties.

8. Still in the **ContosoAdsWorker [Role]** properties window, add another connection string:

	* Name: ContosoAdsDbConnectionString
	* Type: String
	* Value: paste the same connection string that you used for the web role project. (The following example is for Visual Studio 2013. If you're using Visual Studio 2015 and copy this example, don't forget to change the data source.)

	```
	Data Source=(localdb)\v11.0; Initial Catalog=ContosoAds; Integrated Security=True; MultipleActiveResultSets=True;
	```

### Add code files

In this section, you copy code files from the downloaded solution into the new solution. The following sections show and explain key parts of this code.
To add files to a project or a folder, right-click the project or folder, and then click **Add** - **Existing Item**. Select the files that you want, and then click **Add**. If you're asked whether you want to replace existing files, click **Yes**.

1. In the ContosoAdsCommon project, delete the *Class1.cs* file, and add the *Ad.cs* and *ContosoAdscontext.cs* files from the downloaded project in its place.

2. In the ContosoAdsWeb project, add the following files from the downloaded project:

	- *Global.asax.cs*
	- In the *Views\Shared* folder: *\_Layout.cshtml*
	- In the *Views\Home* folder: *Index.cshtml*
	- In the *Controllers* folder: *AdController.cs*
	- In the *Views\Ad* folder (create the folder first): five *.cshtml* files

3. In the ContosoAdsWorker project, add *WorkerRole.cs* from the downloaded project.

You can now build and run the application as instructed earlier in the tutorial, and the app will use local database and storage emulator resources.

The following sections explain the code related to working with the Azure environment, blobs, and queues. This tutorial doesn't explain how to create MVC controllers and views by using scaffolding, how to write Entity Framework code that works with SQL Server databases, or the basics of asynchronous programming in ASP.NET 4.5. For information about these topics, see the following resources:

* [Getting started with MVC 5](http://www.asp.net/mvc/tutorials/mvc-5/introduction/getting-started)
* [Getting started with EF 6 and MVC 5](http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc)
* [Introduction to async programming in .NET 4.5](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/web-development-best-practices#async)
### ContosoAdsCommon - Ad.cs

The Ad.cs file defines an enum for ad categories and a POCO entity class for ad information.

```csharp
public enum Category
{
    Cars,
    [Display(Name="Real Estate")]
    RealEstate,
    [Display(Name = "Free Stuff")]
    FreeStuff
}

public class Ad
{
    public int AdId { get; set; }

    [StringLength(100)]
    public string Title { get; set; }

    public int Price { get; set; }

    [StringLength(1000)]
    [DataType(DataType.MultilineText)]
    public string Description { get; set; }

    [StringLength(1000)]
    [DisplayName("Full-size Image")]
    public string ImageURL { get; set; }

    [StringLength(1000)]
    [DisplayName("Thumbnail")]
    public string ThumbnailURL { get; set; }

    [DataType(DataType.Date)]
    [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)]
    public DateTime PostedDate { get; set; }

    public Category? Category { get; set; }

    [StringLength(12)]
    public string Phone { get; set; }
}
```

### ContosoAdsCommon - ContosoAdsContext.cs

The ContosoAdsContext class specifies that the Ad class is used in a DbSet collection, which Entity Framework stores in a SQL database.

```csharp
public class ContosoAdsContext : DbContext
{
    public ContosoAdsContext() : base("name=ContosoAdsContext")
    {
    }

    public ContosoAdsContext(string connString) : base(connString)
    {
    }

    public System.Data.Entity.DbSet<Ad> Ads { get; set; }
}
```

The class has two constructors. The first is used by the web project and specifies the name of a connection string that is stored in the Web.config file. The second constructor lets you pass in the actual connection string itself. This is needed by the worker role project, because it doesn't have a Web.config file. You saw earlier where this connection string is stored, and you'll see later how the code retrieves the connection string when it instantiates the DbContext class.
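To make the difference between the two constructors concrete, the following hypothetical snippet sketches how each role might instantiate the context. The `CloudConfigurationManager` call mirrors the worker role code shown later; the variable names are illustrative only.

```csharp
// Web role: the parameterless constructor resolves the "ContosoAdsContext"
// connection string by name from Web.config.
var webDb = new ContosoAdsContext();

// Worker role: there is no Web.config, so the raw connection string is read
// from the .cscfg file and passed to the second constructor.
var connString = CloudConfigurationManager.GetSetting("ContosoAdsDbConnectionString");
var workerDb = new ContosoAdsContext(connString);
```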
### ContosoAdsWeb - Global.asax.cs

Code that is called from the `Application_Start` method creates an *images* blob container and an *images* queue if they don't already exist. This ensures that whenever you start using a new storage account, or start using the storage emulator on a new computer, the required blob container and queue are created automatically.

The code gets access to the storage account by using the storage connection string from the *.cscfg* file.

```csharp
var storageAccount = CloudStorageAccount.Parse
    (RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
```

Then it gets a reference to the *images* blob container, creates the container if it doesn't already exist, and sets access permissions on the new container. By default, new containers allow only clients with storage account credentials to access blobs. The website needs the blobs to be public so that it can display images by using URLs that point to the image blobs.

```csharp
var blobClient = storageAccount.CreateCloudBlobClient();
var imagesBlobContainer = blobClient.GetContainerReference("images");
if (imagesBlobContainer.CreateIfNotExists())
{
    imagesBlobContainer.SetPermissions(
        new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
}
```

Similar code gets a reference to the *images* queue and creates a new queue. In this case, no permissions change is needed.

```csharp
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
var imagesQueue = queueClient.GetQueueReference("images");
imagesQueue.CreateIfNotExists();
```

### ContosoAdsWeb - \_Layout.cshtml

The *_Layout.cshtml* file sets the app name in the header and footer, and creates an "Ads" menu entry.
### ContosoAdsWeb - Views\Home\Index.cshtml

The *Views\Home\Index.cshtml* file displays category links on the home page. The links pass the integer value of the `Category` enum in a query string variable to the Ads Index page.

```razor
<li>@Html.ActionLink("Cars", "Index", "Ad", new { category = (int)Category.Cars }, null)</li>
<li>@Html.ActionLink("Real estate", "Index", "Ad", new { category = (int)Category.RealEstate }, null)</li>
<li>@Html.ActionLink("Free stuff", "Index", "Ad", new { category = (int)Category.FreeStuff }, null)</li>
<li>@Html.ActionLink("All", "Index", "Ad", null, null)</li>
```

### ContosoAdsWeb - AdController.cs

In the *AdController.cs* file, the constructor calls the `InitializeStorage` method to create Azure Storage Client Library objects that provide an API for working with blobs and queues.

The code then gets a reference to the *images* blob container, as you saw earlier in *Global.asax.cs*. While doing that, it sets a default [retry policy](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/transient-fault-handling) appropriate for a web app. The default exponential back-off retry policy could leave the web app hanging for longer than a minute on repeated retries after a transient fault. The retry policy specified here waits 3 seconds after each try, for a maximum of 3 tries.

```csharp
var blobClient = storageAccount.CreateCloudBlobClient();
blobClient.DefaultRequestOptions.RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(3), 3);
imagesBlobContainer = blobClient.GetContainerReference("images");
```

Similar code gets a reference to the *images* queue.
```csharp
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
queueClient.DefaultRequestOptions.RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(3), 3);
imagesQueue = queueClient.GetQueueReference("images");
```

Most of the controller code is typical for working with an Entity Framework data model by using a DbContext class. An exception is the HttpPost `Create` method, which uploads a file and saves it in blob storage. The model binder provides an [HttpPostedFileBase](http://msdn.microsoft.com/library/system.web.httppostedfilebase.aspx) object to the method.

```csharp
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Create(
    [Bind(Include = "Title,Price,Description,Category,Phone")] Ad ad,
    HttpPostedFileBase imageFile)
```

If the user selected a file to upload, the code uploads the file, saves it in a blob, and updates the Ad database record with a URL that points to the blob.

```csharp
if (imageFile != null && imageFile.ContentLength != 0)
{
    blob = await UploadAndSaveBlobAsync(imageFile);
    ad.ImageURL = blob.Uri.ToString();
}
```

The code that does the upload is in the `UploadAndSaveBlobAsync` method. It creates a GUID name for the blob, uploads and saves the file, and returns a reference to the saved blob.
```csharp
private async Task<CloudBlockBlob> UploadAndSaveBlobAsync(HttpPostedFileBase imageFile)
{
    string blobName = Guid.NewGuid().ToString() + Path.GetExtension(imageFile.FileName);
    CloudBlockBlob imageBlob = imagesBlobContainer.GetBlockBlobReference(blobName);
    using (var fileStream = imageFile.InputStream)
    {
        await imageBlob.UploadFromStreamAsync(fileStream);
    }
    return imageBlob;
}
```

After the HttpPost `Create` method uploads a blob and updates the database, it creates a queue message to inform the back-end process that an image is ready for conversion to a thumbnail.

```csharp
string queueMessageString = ad.AdId.ToString();
var queueMessage = new CloudQueueMessage(queueMessageString);
await queue.AddMessageAsync(queueMessage);
```

The code for the HttpPost `Edit` method is similar, except that if the user selects a new image file, any existing blobs must be deleted first.

```csharp
if (imageFile != null && imageFile.ContentLength != 0)
{
    await DeleteAdBlobsAsync(ad);
    imageBlob = await UploadAndSaveBlobAsync(imageFile);
    ad.ImageURL = imageBlob.Uri.ToString();
}
```

The following example shows the code that deletes blobs when you delete an ad.

```csharp
private async Task DeleteAdBlobsAsync(Ad ad)
{
    if (!string.IsNullOrWhiteSpace(ad.ImageURL))
    {
        Uri blobUri = new Uri(ad.ImageURL);
        await DeleteAdBlobAsync(blobUri);
    }
    if (!string.IsNullOrWhiteSpace(ad.ThumbnailURL))
    {
        Uri blobUri = new Uri(ad.ThumbnailURL);
        await DeleteAdBlobAsync(blobUri);
    }
}

private static async Task DeleteAdBlobAsync(Uri blobUri)
{
    string blobName = blobUri.Segments[blobUri.Segments.Length - 1];
    CloudBlockBlob blobToDelete = imagesBlobContainer.GetBlockBlobReference(blobName);
    await blobToDelete.DeleteAsync();
}
```

### ContosoAdsWeb - Views\Ad\Index.cshtml and Details.cshtml

The *Index.cshtml* file displays thumbnails along with the other ad data.
```razor
<img src="@Html.Raw(item.ThumbnailURL)" />
```

The *Details.cshtml* file displays the full-size image.

```razor
<img src="@Html.Raw(Model.ImageURL)" />
```

### ContosoAdsWeb - Views\Ad\Create.cshtml and Edit.cshtml

The *Create.cshtml* and *Edit.cshtml* files specify form encoding that enables the controller to get the `HttpPostedFileBase` object.

```razor
@using (Html.BeginForm("Create", "Ad", FormMethod.Post, new { enctype = "multipart/form-data" }))
```

An `<input>` element tells the browser to provide a file selection dialog box.

```razor
<input type="file" name="imageFile" accept="image/*" class="form-control fileupload" />
```

### ContosoAdsWorker - WorkerRole.cs - OnStart method

The Azure worker role environment calls the `OnStart` method in the `WorkerRole` class when the worker role starts, and it calls the `Run` method when the `OnStart` method finishes.

The `OnStart` method gets the database connection string from the *.cscfg* file and passes it to the Entity Framework DbContext class. The SQLClient provider is used by default, so the provider doesn't have to be specified.

```csharp
var dbConnString = CloudConfigurationManager.GetSetting("ContosoAdsDbConnectionString");
db = new ContosoAdsContext(dbConnString);
```

After that, the method gets a reference to the storage account and creates the blob container and queue if they don't already exist. The code for that is similar to what you already saw in the web role's `Application_Start` method.

### ContosoAdsWorker - WorkerRole.cs - Run method

The `Run` method is called when the `OnStart` method finishes its initialization work. The method runs an infinite loop that watches the queue for new messages and processes them when they arrive.
```csharp
public override void Run()
{
    CloudQueueMessage msg = null;
    while (true)
    {
        try
        {
            msg = this.imagesQueue.GetMessage();
            if (msg != null)
            {
                ProcessQueueMessage(msg);
            }
            else
            {
                System.Threading.Thread.Sleep(1000);
            }
        }
        catch (StorageException e)
        {
            if (msg != null && msg.DequeueCount > 5)
            {
                this.imagesQueue.DeleteMessage(msg);
            }
            System.Threading.Thread.Sleep(5000);
        }
    }
}
```

If no queue message is found after an iteration of the loop, the process sleeps for a second. This prevents the worker role from incurring excessive CPU time and storage transaction costs. The Microsoft Customer Advisory Team tells a story about a developer who forgot to include this, deployed the app to production, and went on vacation. When he got back and discovered his mistake, it had cost more than the entire vacation.

Sometimes the content of a queue message causes a processing error. This is called a *poison message*. If you only logged an error and restarted the loop, you could endlessly try to process that message. Therefore, the catch block includes an if statement that checks how many times the app has tried to process the current message. If it has been more than 5 times, the message is deleted from the queue.

`ProcessQueueMessage` is called when a queue message is found.
```csharp
private void ProcessQueueMessage(CloudQueueMessage msg)
{
    var adId = int.Parse(msg.AsString);
    Ad ad = db.Ads.Find(adId);
    if (ad == null)
    {
        throw new Exception(String.Format("AdId {0} not found, can't create thumbnail", adId.ToString()));
    }

    CloudBlockBlob inputBlob = this.imagesBlobContainer.GetBlockBlobReference(ad.ImageURL);

    string thumbnailName = Path.GetFileNameWithoutExtension(inputBlob.Name) + "thumb.jpg";
    CloudBlockBlob outputBlob = this.imagesBlobContainer.GetBlockBlobReference(thumbnailName);

    using (Stream input = inputBlob.OpenRead())
    using (Stream output = outputBlob.OpenWrite())
    {
        ConvertImageToThumbnailJPG(input, output);
        outputBlob.Properties.ContentType = "image/jpeg";
    }

    ad.ThumbnailURL = outputBlob.Uri.ToString();
    db.SaveChanges();

    this.imagesQueue.DeleteMessage(msg);
}
```

This code reads the database to get the image URL, converts the image to a thumbnail, saves the thumbnail in a blob, updates the database with the thumbnail blob URL, and deletes the message from the queue.

>[AZURE.NOTE] For simplicity, the code in the `ConvertImageToThumbnailJPG` method uses classes in the System.Drawing namespace. However, the classes in this namespace were designed for use with Windows Forms. They are not supported for use in a Windows or ASP.NET service. For more information about image-processing options, see [Dynamic Image Generation](http://www.hanselman.com/blog/BackToBasicsDynamicImageGenerationASPNETControllersRoutingIHttpHandlersAndRunAllManagedModulesForAllRequests.aspx) and [Deep Inside Image Resizing](http://www.hanselminutes.com/313/deep-inside-image-resizing-and-scaling-with-aspnet-and-iis-with-imageresizingnet-author-na).

## Troubleshooting

In case something doesn't work while you're following the instructions in this tutorial, here are some common errors and how to resolve them.
### ServiceRuntime.RoleEnvironmentException

The `RoleEnvironment` object is provided by Azure when you run an application in Azure, or when you run it locally by using the Azure compute emulator. If you get this error when you're running locally, make sure that the ContosoAdsCloudService project is set as the startup project. This ensures that the project runs by using the Azure compute emulator.

One of the things the application uses the Azure RoleEnvironment for is to get the connection string values that are stored in the *.cscfg* files, so another cause of this exception can be a missing connection string. Make sure that you created the StorageConnectionString setting for both the Cloud and the Local configurations in the ContosoAdsWeb project, and that you created both connection strings for both configurations in the ContosoAdsWorker project. If you do a **Find All** search for StorageConnectionString in the entire solution, you should find it 9 times in 6 files.

### Cannot override to port xxx. New port below minimum allowed value 8080 for protocol http

Try changing the port number that the web project uses. Right-click the ContosoAdsWeb project, and then click **Properties**. Click the **Web** tab, and then change the port number in the **Project Url** setting.

For an alternative that might resolve the problem, see the next section.

### Other errors when running locally

By default, new cloud service projects use the Azure compute emulator express edition to simulate the Azure environment. This is a lightweight version of the full compute emulator, and under some conditions the full emulator works when the express edition doesn't.
To change the project to use the full emulator, right-click the ContosoAdsCloudService project, and then click **Properties**. In the **Properties** window, click the **Web** tab, and then select the **Use Full Emulator** radio button.

To run the application by using the full emulator, you must open Visual Studio with administrator privileges.

## Next steps

The Contoso Ads application has intentionally been kept simple for this getting-started tutorial. For example, it doesn't implement [dependency injection](http://www.asp.net/mvc/tutorials/hands-on-labs/aspnet-mvc-4-dependency-injection) or the [repository and unit of work patterns](http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/advanced-entity-framework-scenarios-for-an-mvc-web-application#repo), it doesn't [use an interface for logging](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/monitoring-and-telemetry#log), it doesn't use [EF Code First Migrations](http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) to manage data model changes or [EF Connection Resiliency](http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/connection-resiliency-and-command-interception-with-the-entity-framework-in-an-asp-net-mvc-application) to manage transient network errors, and so forth.

Here are some cloud service sample applications that use more code, listed in order of increasing complexity:

* [PhluffyFotos](http://code.msdn.microsoft.com/PhluffyFotos-Sample-7ecffd31). Similar in concept to Contoso Ads, but with more features and more code.
* [Azure Cloud Service Multi-Tier Application with Tables, Queues, and Blobs](http://code.msdn.microsoft.com/windowsazure/Windows-Azure-Multi-Tier-eadceb36). Introduces Azure Storage tables as well as blobs and queues. This application is based on an older version of the Azure SDK for .NET, so some changes are needed to use it with the current version.
* [Cloud Service Fundamentals in Microsoft Azure](http://code.msdn.microsoft.com/Cloud-Service-Fundamentals-4ca72649). A comprehensive sample demonstrating a broad range of best practices, produced by the Microsoft Patterns and Practices group.

For general information about developing for the cloud, see [Building Real-World Cloud Apps with Azure](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/introduction).

For a video introduction to Azure Storage best practices and patterns, see [Microsoft Azure Storage – What's New, Best Practices and Patterns](http://channel9.msdn.com/Events/Build/2014/3-628).

For more information, see the following resources:

* [Azure Cloud Services Part 1: Introduction](http://justazure.com/microsoft-azure-cloud-services-part-1-introduction/)
* [How to manage Cloud Services](cloud-services-how-to-manage.md)
* [Azure Storage](/documentation/services/storage/)
---
sidebarDepth: 2
---

# Troubleshooting

What if it does not work as expected:

1. Make sure you have used the same **region** for all resources.
2. Make sure to have applied all intermediate stages as mentioned.

If problems still persist, please join the newly created [Gitter Room](https://gitter.im/airship-modules/community), the #Airship channel on [Sweetops Slack](http://sweetops.slack.com), or create an issue [here](https://github.com/blinkist/terraform-aws-airship-ecs-service/issues)!

::: tip
If you see a problem with the documentation, please create an issue in the repo of this [website](http://github.com/doingcloudright/airship.tf/issues)!
:::
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

<!-- PROJECT LOGO -->
<br />
<p align="center">
  <a href="https://github.com/axelallain/pvpchampions">
    <h3 align="center">Project 12 (PUG Organizer, formerly called PVP Champions)</h3>
  </a>
  <p align="center">
    Community event management for Classic WoW
    <br />
    <br />
  </p>
</p>

<!-- TABLE OF CONTENTS -->
## Table of Contents

1. [Prerequisites](#prerequisites)
2. [Configuration](#configuration)
    * [Connecting to your database](#connecting-to-your-database)
    * [Generating the .war file with your configuration](#generating-the-war-file-with-your-configuration)
3. [Deployment](#deployment)
    * [Database](#database)
    * [Apache Tomcat](#apache-tomcat)
4. [More information](#more-information)
5. [Contributing](#contributing)
6. [License](#license)

<!-- PREREQUISITES -->
## Prerequisites

First of all, note that the project's code name is "pvpchampions", the application's original name. The final project is now called PUG Organizer.

In this tutorial, we will see how to configure and deploy the application with the following tools:

* JDK 8
* PostgreSQL 11
* Apache Maven 3.6.3
* Apache Tomcat 9

To get started, clone the project to an easily accessible directory on your computer. In this example, we will use the desktop. Open a terminal and enter the following commands:

```sh
cd Desktop
```

```sh
git clone https://github.com/axelallain/pvpchampions.git
```

<!-- CONFIGURATION -->
## Configuration

### Connecting to your database

To connect the application to your database, we need to edit the "jdbc.properties" configuration file. Open the cloned project folder and follow this path to reach the configuration file:

```sh
src/main/resources/jdbc.properties
```

Open "jdbc.properties" in your favorite text editor.
Il ne vous reste plus qu'à modifier les propriétés suivantes pour connecter votre base de données PostgreSQL. ```properties jdbc.url=jdbc:postgresql://localhost:5432/pugorganizer jdbc.username=postgres jdbc.password=password ``` ### Génération du fichier .war avec votre configuration Une fois la configuration terminée, générons le fichier .war qui sera déployé sur notre serveur d'application. Pour cela, ouvrez un terminal et rendez-vous à la racine du projet. Si le dossier du projet est sur le bureau : ```sh cd Desktop/pvpchampions ``` Pour plus de configuration concernant le fichier .war, vous pouvez éditer le fichier pom.xml situé à la racine du projet. Vous pouvez désormais utiliser dans l'ordre les commandes Maven suivantes pour générer proprement le fichier .war : ```sh mvn clean ``` ```sh mvn package ``` Si le build se passe correctement, votre fichier .war sera accessible dans le dossier target situé à la racine du projet : ```sh pvpchampions/target/pugorganizer-1.0.0.war ``` <!-- DÉPLOIEMENT --> ## Déploiement Votre fichier .war est prêt ? Alors passons à l'étape finale de ce tutoriel : le déploiement ! ### Base de données Le script SQL est disponible à la racine du projet : Les tables ET les données de démo : ```sh pugorganizer.backup.sql ``` Dans cet exemple d'exécution du script, nous allons utiliser l'utilisateur par défault de PostgreSQL "postgres". Accédons d'abord au script présent à la racine du projet via un terminal : ```sh cd Desktop/pvpchampions ``` Pour exécuter le script "pugorganizer.backup.sql", entrez la commande suivante dans votre terminal : ```sh psql -U postgres -f pugorganizer.backup.sql ``` La base de données est prête ! ### Apache Tomcat L'étape finale de l'étape finale ! Nous allons déployer le fichier .war sur un serveur d'application Tomcat 9. Si vous ne l'avez pas encore fait, vous pouvez télécharger Tomcat depuis ce lien : ```sh https://tomcat.apache.org/download-90.cgi ``` Vous pouvez extraire Tomcat sur votre bureau. 
Ouvrez le dossier dézippé et rendez-vous dans le dossier "webapps" : ```sh tomcat/webapps ``` Il suffit de glisser le fichier .war dans ce dossier "webapps" et.. c'est tout ! Procédons au démarrage de Tomcat. Pour cela rendez-vous dans le dossier "bin" depuis un terminal : ```sh cd Desktop/tomcat/bin ``` Pour lancer le serveur sous un environnement UNIX, entrez les commandes suivantes : Rendre le script exécutable : ```sh chmod +x startup.sh ``` Démarrage du serveur : ```sh sh startup.sh ``` Pour lancer le serveur sous un environnement MS-DOS, entrez la commande suivante : ```sh startup.bat ``` L'application est déployée ! L'URL pour y accéder varie selon le nom de votre fichier .war : Si votre fichier .war s'appelle "pugorganizer.war", l'URL sera le suivant : ```sh localhost:8080/pugorganizer ``` <!-- PLUS D'INFORMATIONS --> ## Plus d'informations Pour plus d'informations un PDF est disponible à la racine du projet. ## Contribuer au projet N'hésitez pas à envoyer vos pull requests pour contribuer au projet ! ## Licence [APACHE 2.0](https://github.com/axelallain/pvpchampions/blob/master/LICENSE)
27.094241
122
0.741643
fra_Latn
0.969289
17d7845b81682ca6d3989cf856eeacc06705423d
2,072
md
Markdown
README.md
fu4303/chart-my-stars
818b288c3bc550cc8fef078ff8e5cd856bf6b99a
[ "MIT" ]
null
null
null
README.md
fu4303/chart-my-stars
818b288c3bc550cc8fef078ff8e5cd856bf6b99a
[ "MIT" ]
null
null
null
README.md
fu4303/chart-my-stars
818b288c3bc550cc8fef078ff8e5cd856bf6b99a
[ "MIT" ]
null
null
null
> [star-history chrome extension](https://chrome.google.com/webstore/detail/iijibbcdddbhokfepbblglfgdglnccfn)

# Star History

The missing star history graph of GitHub repos

## [As a website](https://star-history.t9t.io)

![](https://raw.githubusercontent.com/timqian/images/master/star-history.gif)

## [As an extension](https://chrome.google.com/webstore/detail/star-history/iijibbcdddbhokfepbblglfgdglnccfn)

![](https://raw.githubusercontent.com/timqian/images/master/star-history-extension.gif)

> Note: You can [load the `./extension` folder into Chrome](https://superuser.com/a/247654) to install the extension too.

## Access Token

Star History uses the GitHub API to retrieve repository metadata. When you exceed the [rate limit of unauthenticated requests](https://developer.github.com/v3/#rate-limiting), Star History needs your personal access token to lift the limit. If you don't already have one, [create one](https://github.com/settings/tokens/new) and add it to Star History (no scope to your personal data is needed).

## Develop

### Website

```bash
npm run startWebsite
```

### Extension

```bash
npm run buildExtension
# load the extension folder as unpacked extension into chrome to view it
```

## Build and Deploy

### Website

```bash
# deploy to star-history.t9t.io
npm run deployWebsite
```

### Extension

```bash
npm run buildExtension
# zip extension folder and publish to chrome web store
```

## Updates

- 2019-8-28: use [chart.xkcd](https://github.com/timqian/chart.xkcd) to plot the graph
- 2019-3-06: Add personal access token; update style; mono repo
- 2016-6-30: Alert to notie
- 2016-6-28: Add clear btn
- 2016-6-28: Better view for "many star" repos (use current star number as the last point on the graph)
- 2016-6-26: **Store repo info into url hash**
- 2016-6-26: **multiple kinds of input styles (eg: github.com/timqian/star-history, ...)**
- 2016-6-26: Better view for less star repos #28
- 2016-6-14: **Toggle search by hit enter** #26, prevent crash while searching for a non-existing repo
- 2016-5-26: Update mobile view
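The access-token flow described above boils down to attaching an `Authorization` header to GitHub API requests. Here is a minimal sketch of how such a request could be built; `buildRepoRequest` is an illustrative helper under assumed names, not star-history's actual code:

```javascript
// Build an authenticated GitHub API request descriptor for a repository.
// Passing a personal access token lifts the unauthenticated rate limit.
function buildRepoRequest(repo, token) {
  const headers = { Accept: 'application/vnd.github.v3+json' };
  if (token) {
    // GitHub accepts personal access tokens via the Authorization header.
    headers.Authorization = `token ${token}`;
  }
  return { url: `https://api.github.com/repos/${repo}`, headers };
}

const req = buildRepoRequest('timqian/star-history', 'MY_TOKEN');
console.log(req.url);
```

Feeding the resulting descriptor to `fetch` (or any HTTP client) then retrieves the repository metadata, including its star count.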
27.626667
236
0.7389
eng_Latn
0.565277
17d7b86ad8d88a79f3d38e77e150406c698de3c6
6,339
md
Markdown
desktop-src/TermServ/imstscadvancedsettings-interface.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
2
2022-03-18T02:46:08.000Z
2022-03-18T03:19:15.000Z
desktop-src/TermServ/imstscadvancedsettings-interface.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/TermServ/imstscadvancedsettings-interface.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: IMsTscAdvancedSettings interface
description: Includes methods to retrieve and set properties that enable bitmap caching, compression, and printer and clipboard redirection.
ms.assetid: 3385e843-be05-4801-8d59-6395d95686b1
ms.tgt_platform: multiple
keywords:
- IMsTscAdvancedSettings interface Remote Desktop Services
- IMsTscAdvancedSettings interface Remote Desktop Services , described
topic_type:
- apiref
api_name:
- IMsTscAdvancedSettings
api_location:
- MsTscAx.dll
api_type:
- COM
ms.topic: reference
ms.date: 05/31/2018
---

# IMsTscAdvancedSettings interface

Includes methods to retrieve and set properties that enable bitmap caching, compression, and printer and clipboard redirection. You can also specify names of virtual channel client DLLs.

You obtain an instance of this interface by using the [**IMsTscAx::AdvancedSettings**](imstscax-advancedsettings.md) property.

## Members

The **IMsTscAdvancedSettings** interface inherits from the [**IDispatch**](/windows/win32/api/oaidl/nn-oaidl-idispatch) interface. **IMsTscAdvancedSettings** also has these types of members:

- [Properties](#properties)

### Properties

The **IMsTscAdvancedSettings** interface has these properties.

<table>
<colgroup>
<col style="width: 33%" />
<col style="width: 33%" />
<col style="width: 33%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">Property</th>
<th style="text-align: left;">Access type</th>
<th style="text-align: left;">Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><a href="imstscadvancedsettings-allowbackgroundinput.md"><strong>allowBackgroundInput</strong></a><br/></td>
<td style="text-align: left;">Read/write<br/></td>
<td style="text-align: left;">Specifies whether background input mode is enabled.<br/></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="imstscadvancedsettings-bitmapperistence.md"><strong>BitmapPeristence</strong></a><br/></td>
<td style="text-align: left;">Read/write<br/></td>
<td style="text-align: left;">Specifies whether bitmap caching is enabled.<br/>
<blockquote>
[!Note]<br />
The spelling error in the name of the property is in the released version of the control.
</blockquote>
<br/></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="imstscadvancedsettings-compress.md"><strong>Compress</strong></a><br/></td>
<td style="text-align: left;">Read/write<br/></td>
<td style="text-align: left;">Specifies whether compression is enabled.<br/></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="imstscadvancedsettings-containerhandledfullscreen.md"><strong>ContainerHandledFullScreen</strong></a><br/></td>
<td style="text-align: left;">Read/write<br/></td>
<td style="text-align: left;">Specifies whether the container-handled full-screen mode is enabled.<br/></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="imstscadvancedsettings-disablerdpdr.md"><strong>DisableRdpdr</strong></a><br/></td>
<td style="text-align: left;">Read/write<br/></td>
<td style="text-align: left;">Specifies whether printer and clipboard redirection is enabled.<br/></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="imstscadvancedsettings-iconfile.md"><strong>IconFile</strong></a><br/></td>
<td style="text-align: left;">Write-only<br/></td>
<td style="text-align: left;">Specifies the name of the file containing icon data that will be accessed when displaying the client in full-screen mode.<br/></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="imstscadvancedsettings-iconindex.md"><strong>IconIndex</strong></a><br/></td>
<td style="text-align: left;">Write-only<br/></td>
<td style="text-align: left;">Specifies the index of the icon within the current icon file.<br/></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="imstscadvancedsettings-keyboardlayoutstr.md"><strong>KeyBoardLayoutStr</strong></a><br/></td>
<td style="text-align: left;">Write-only<br/></td>
<td style="text-align: left;">Specifies the name of the active input locale identifier (formerly called the keyboard layout) to use for the connection.<br/></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="imstscadvancedsettings-plugindlls.md"><strong>PluginDlls</strong></a><br/></td>
<td style="text-align: left;">Write-only<br/></td>
<td style="text-align: left;">Specifies the names of virtual channel client DLLs to be loaded.<br/></td>
</tr>
</tbody>
</table>

## Remarks

This interface has been extended by the following interfaces, with each new interface inheriting all the methods and properties of the previous interfaces:

- [**IMsRdpClientAdvancedSettings**](imsrdpclientadvancedsettings-interface.md)
- [**IMsRdpClientAdvancedSettings2**](imsrdpclientadvancedsettings2.md)
- [**IMsRdpClientAdvancedSettings3**](imsrdpclientadvancedsettings3.md)
- [**IMsRdpClientAdvancedSettings4**](imsrdpclientadvancedsettings4.md)
- [**IMsRdpClientAdvancedSettings5**](imsrdpclientadvancedsettings5.md)
- [**IMsRdpClientAdvancedSettings6**](imsrdpclientadvancedsettings6.md)
- [**IMsRdpClientAdvancedSettings7**](imsrdpclientadvancedsettings7.md)

For more information about Remote Desktop Web Connection, see [Requirements for Remote Desktop Web Connection](requirements-for-remote-desktop-web-connection.md).

## Requirements

| Requirement | Value |
|-------------------------------|--------------------------------------------------------------------------------------|
| Minimum supported client | Windows Vista |
| Minimum supported server | Windows Server 2008 |
| Type library | MsTscAx.dll |
| DLL | MsTscAx.dll |
| IID | IID\_IMsTscAdvancedSettings is defined as 809945cc-4b3b-4a92-a6b0-dbf9b5f2ef2d |

## See also

<dl>
<dt>

[**IDispatch**](/windows/win32/api/oaidl/nn-oaidl-idispatch)
</dt>
<dt>

[Remote Desktop Web Connection Reference](remote-desktop-web-connection-reference.md)
</dt>
</dl>
42.26
190
0.685124
eng_Latn
0.588796
17d8c4d834ee79dd14eaeb86b5af7cec3f22ea23
4,389
md
Markdown
web/articles/node-base/bak.md
xiaoqiang-zhao/my-cellar
66373338d0dc0d61422df05ecf5c24c4ea3f8ce6
[ "MIT" ]
12
2018-08-14T02:52:52.000Z
2021-06-26T11:47:03.000Z
web/articles/node-base/bak.md
xiaoqiang-zhao/my-cellar
66373338d0dc0d61422df05ecf5c24c4ea3f8ce6
[ "MIT" ]
3
2019-04-15T15:01:41.000Z
2019-04-15T15:13:47.000Z
web/articles/node-base/bak.md
xiaoqiang-zhao/my-cellar
66373338d0dc0d61422df05ecf5c24c4ea3f8ce6
[ "MIT" ]
null
null
null
# Node.js Essentials

> Developing Node.js programs requires a few must-know techniques. This page collects the basics I rely on, along with problems I have run into and their solutions.

## Overview

## File operations

## Version management

Some tools have hard requirements on the Node version, so switching between tools also means switching between Node versions. Here is a Node version manager module called n (quite a short name...), dedicated to managing Node.js versions.

First install the n module:

    npm install -g n

List the Node versions installed locally:

    n
    // use the up/down keys to switch versions, press Enter to confirm

Upgrade Node.js to the latest stable release (versions with an even minor number are stable):

    n stable

List all available Node versions:

    n ls

Install a specific version:

    n 6.2.0

[Why the Node.js version manager n may not take effect, and how to fix it](http://www.jb51.net/article/98153.htm)

TODO: how it works, the Linux part...

## Debugging

### inspector

For web development, use the `inspector` plugin to debug in Chrome. On Windows 7, install it from the command line with `npm install -g node-inspector`. On macOS the same command may hit permission problems; if installation fails, try `sudo npm install -g node-inspector` and enter your login password. While typing the password only a cursor blinks, but your input is being received — type it in one go and press Enter.

Start debugging with `node-debug <your entry file>`; for example, this site's `server` can be debugged with `node-debug server`.

**Advanced usage**

The above is the simplest way to use `inspector`. There is a more flexible approach that touches on how `inspector` works: you can choose the debug port and start the program manually (which opens the door to doing other things first, as used in the "combining both techniques" section below). The steps and explanation:

    // create a service
    node-inspector --web-port=8888 &

This starts a web service on port 8888, with the browser on one end and a WebSocket listening to the V8 engine on the other. Although you will see a hint like `Visit http://127.0.0.1:8888/debug?ws=127.0.0.1:8888&port=5858 to start debugging`, the V8 debug port is not determined yet — the following command sets the V8 engine's debug service port.

    // set V8's debug port and start V8 with the program loaded
    node --debug-brk=5858 server
    // once you see "Debugger listening on port 5858", the listener is up

5858 is V8's debug port and can be chosen freely.

    // open the following URL in Chrome
    http://127.0.0.1:8888/debug?ws=127.0.0.1:8888&port=5858

The browser and the engine stay in sync over the WebSocket, which is what makes debugging possible. Note that the `port` parameter in the browser URL is the V8 debug port you set above, not necessarily the suggested 5858. To repeat: 8888 is the port served to the browser, and the service on 8888 listens to port 5858.

Due to various flaky factors, ports often fail to be released. If the steps above do not work as expected, try a different port. To inspect port status, use the commands below:

    // open a new terminal window
    Mac: cmd + n
    Win7: Win + R / cmd

    // clear the terminal window
    Mac: cmd + k
    Win7: cls

    // check whether a port is alive
    Mac: lsof -i:8899
    Win7: netstat -ano|findstr "8899"

    // check port usage
    Mac: lsof -i -P
    Win7: netstat -ano

Git: [https://github.com/node-inspector/node-inspector](https://github.com/node-inspector/node-inspector)

### hotnode

The inconvenience of the approach above is that source changes are not reflected in the debug window. Small edits can be made directly in the debugger window, but you still have to port them back to the source, and to see new code in the debugger you have to stop and restart the Node service by hand. hotnode is an open-source tool that hot-restarts Node: it watches for file changes, automatically restarts the Node service when something changes, and prints a log. Installing and running it is simple:

    // install
    npm install -g hotnode

    // hot-start the app
    hotnode app

Git: [https://github.com/saschagehlich/hotnode](https://github.com/saschagehlich/hotnode)

Similar hot-restart tools include `node-dev`, `supervisor`, and `nodemon`.

### Combining both techniques

With the preparation above, the last step follows naturally: replace `node --debug-brk=5858 server` in the inspector workflow with `hotnode --debug-brk=5858 server`, and you are done.

### IDE

The above works without an IDE, but after using VS Code I found most of it unnecessary: configure a VS Code launch entry, press F5, and it starts — breakpoints, live variable inspection, and the other debugging features are all well supported.

## Paths

When `require` loads a module by relative path, the reference is the directory of the currently executing file, while the `fs` file-system APIs resolve relative paths against the directory the program was started from. For example, the start path of `node cellar/server` is the directory containing the cellar folder, while for `node server` it is the directory containing server, i.e. cellar.

Taking `node cellar/server` as an example: use the global `__filename`/`__dirname` to get the folder of the current file, then walk up a configured number of levels to obtain the absolute project root to use as the reference path. Also remember to replace Windows backslashes with forward slashes. The key code:

    // root path, config.rootLevel
    rootPath = __dirname.replace(/\\/g, '/').split('/').slice(0, -1 * config.rootLevel).join('/');

    // then prepend the root path and the config values are ready to use
    config.webRootPath = rootPath + config.webRootPath;

Another trick to normalize the start path is to configure scripts in package.json and run them via `npm run xxx`; that way the script's working directory is always the project root.

## Async queue

Below is a routing-queue scheme: each handler calls the next one on success, and the last one handles the 404.

    // routing queue (route priority is configured here)
    var routList = [
        routStaticFile,
        routUserSettingPath,
        routAutoPath,
        notFound
    ];

    /* queue approach to async programming */
    routList.shift()(request, response, routList);

## Process guardians

Since an uncaught error anywhere in Node exits the process, production environments need the process restarted after it exits. Available tools: `pm2`, `forever`.

pm2 reading: [http://www.douban.com/note/314200231/](http://www.douban.com/note/314200231/) and http://www.jianshu.com/p/fdc12d82b661

forever reading: [http://blog.fens.me/nodejs-server-forever/](http://blog.fens.me/nodejs-server-forever/)

### pm2

```shell
# start guarding a process
pm2 start server --name my-server-name

# show the status of all processes
# on Linux you cannot see processes started by other users, not even as root
pm2 list

# stop the given process
pm2 stop 0

# remove from the list; the process is terminated as well
pm2 delete api

# restart a process
pm2 restart api
```

[Load balancing with pm2](https://html-js.site/2018/04/08/pm2%E5%AE%9E%E7%8E%B0%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/)

## Child processes

```js
// using a git commit as an example
const { spawn } = require('child_process');

const options = {
    cwd: '/Users/username/code/project-name',
    shell: true
};

// stage the files to commit
const gitCommand = spawn('git add src/a.js src/b.js', options);
```

## References and further reading

[Node.js core primer (part 1)](https://juejin.im/post/5ac0e1fc6fb9a028b411345f)

[Node.js core primer (part 2)](https://juejin.im/post/5ac22344f265da238155ccde)

[Node.js child processes: everything you need to know](https://juejin.im/entry/595dc35b51882568d00a97ab)
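The routing-queue scheme above can be reduced to a runnable sketch. The handler names (`routStatic`, `routPage`) are illustrative stand-ins for the real route handlers, and a plain array stands in for the response object:

```javascript
// Each handler either handles the request or hands off to the rest of the queue,
// exactly as in routList.shift()(request, response, routList) above.
function routStatic(req, res, rest) {
  if (req.url.startsWith('/static/')) return res.push('static:' + req.url);
  rest.shift()(req, res, rest);
}

function routPage(req, res, rest) {
  if (req.url === '/') return res.push('page:home');
  rest.shift()(req, res, rest);
}

// Last entry in the queue: nothing matched, so answer 404.
function notFound(req, res) {
  res.push('404:' + req.url);
}

function dispatch(url) {
  const res = [];                                  // stand-in for a real response
  const routList = [routStatic, routPage, notFound]; // route priority order
  routList.shift()({ url }, res, routList);
  return res[0];
}

console.log(dispatch('/static/a.css')); // → static:/static/a.css
console.log(dispatch('/missing'));      // → 404:/missing
```

The key design point is that the queue itself is passed along, so each handler decides asynchronously whether to respond or defer — no central dispatcher needed.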
22.507692
190
0.730463
yue_Hant
0.736355
17d984dc1ca36610ae98a2afff3d7fa2e5671d3f
22,467
md
Markdown
docs/atl/reference/composite-control-global-functions.md
yecril71pl/cpp-docs.pl-pl
599c99edee44b11ede6956ecf2362be3bf25d2f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/atl/reference/composite-control-global-functions.md
yecril71pl/cpp-docs.pl-pl
599c99edee44b11ede6956ecf2362be3bf25d2f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/atl/reference/composite-control-global-functions.md
yecril71pl/cpp-docs.pl-pl
599c99edee44b11ede6956ecf2362be3bf25d2f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Composite control global functions
ms.date: 11/04/2016
f1_keywords:
- atlhost/ATL::AtlAxDialogBox
- atlhost/ATL::AtlAxCreateDialog
- atlhost/ATL::AtlAxCreateControl
- atlhost/ATL::AtlAxCreateControlEx
- atlhost/ATL::AtlAxCreateControlLic
- atlhost/ATL::AtlAxCreateControlLicEx
- atlhost/ATL::AtlAxAttachControl
- atlhost/ATL::AtlAxGetHost
- atlhost/ATL::AtlAxGetControl
- atlhost/ATL::AtlSetChildSite
- atlhost/ATL::AtlAxWinInit
- atlhost/ATL::AtlAxWinTerm
- atlhost/ATL::AtlGetObjectSourceInterface
helpviewer_keywords:
- composite controls, global functions
ms.assetid: 536884cd-e863-4c7a-ab0a-604dc60a0bbe
---
# <a name="composite-control-global-functions"></a>Composite control global functions

These functions provide support for creating dialog boxes, and for creating, hosting, and licensing ActiveX controls.

> [!IMPORTANT]
> The functions listed in the following table cannot be used in applications that execute in the Windows Runtime.

|Function|Description|
|-|-|
|[AtlAxDialogBox](#atlaxdialogbox)|Creates a modal dialog box from a user-supplied dialog template. The resulting dialog box can contain ActiveX controls.|
|[AtlAxCreateDialog](#atlaxcreatedialog)|Creates a modeless dialog box from a user-supplied dialog template. The resulting dialog box can contain ActiveX controls.|
|[AtlAxCreateControl](#atlaxcreatecontrol)|Creates an ActiveX control, initializes it, and hosts it in the specified window.|
|[AtlAxCreateControlEx](#atlaxcreatecontrolex)|Creates an ActiveX control, initializes it, hosts it in the specified window, and retrieves an interface pointer (or pointers) from the control.|
|[AtlAxCreateControlLic](#atlaxcreatecontrollic)|Creates a licensed ActiveX control, initializes it, and hosts it in the specified window.|
|[AtlAxCreateControlLicEx](#atlaxcreatecontrollicex)|Creates a licensed ActiveX control, initializes it, hosts it in the specified window, and retrieves an interface pointer (or pointers) from the control.|
|[AtlAxAttachControl](#atlaxattachcontrol)|Attaches a previously created control to the specified window.|
|[AtlAxGetHost](#atlaxgethost)|Obtains a direct interface pointer to the container for a specified window (if any), given its handle.|
|[AtlAxGetControl](#atlaxgetcontrol)|Obtains a direct interface pointer to the control contained inside a specified window (if any), given its handle.|
|[AtlSetChildSite](#atlsetchildsite)|Initializes the `IUnknown` of the child site.|
|[AtlAxWinInit](#atlaxwininit)|Initializes the hosting code for AxWin objects.|
|[AtlAxWinTerm](#atlaxwinterm)|Uninitializes the hosting code for AxWin objects.|
|[AtlGetObjectSourceInterface](#atlgetobjectsourceinterface)|Returns information about the default source interface of an object.|

## <a name="requirements"></a>Requirements

**Header:** atlhost.h

## <a name="atlaxdialogbox"></a><a name="atlaxdialogbox"></a> AtlAxDialogBox

Creates a modal dialog box from a user-supplied dialog template.

```
ATLAPI_(int) AtlAxDialogBox(
    HINSTANCE hInstance,
    LPCWSTR lpTemplateName,
    HWND hWndParent,
    DLGPROC lpDialogProc,
    LPARAM dwInitParam);
```

### <a name="parameters"></a>Parameters

*hInstance*<br/>
[in] Identifies an instance of the module whose executable file contains the dialog box template.

*lpTemplateName*<br/>
[in] Identifies the dialog box template. This parameter is either a pointer to a null-terminated character string that specifies the name of the dialog box template, or an integer value that specifies the resource identifier of the dialog box template. If the parameter specifies a resource identifier, its high-order word must be zero and its low-order word must contain the identifier. You can use the [MAKEINTRESOURCE](/windows/win32/api/winuser/nf-winuser-makeintresourcew) macro to create this value.

*hWndParent*<br/>
[in] Identifies the window that owns the dialog box.

*lpDialogProc*<br/>
[in] Points to the dialog box procedure. For more information about the dialog box procedure, see [DialogProc](/windows/win32/api/winuser/nc-winuser-dlgproc).

*dwInitParam*<br/>
[in] Specifies the value to pass to the dialog box in the *lParam* parameter of the WM_INITDIALOG message.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

### <a name="remarks"></a>Remarks

To use `AtlAxDialogBox` with a dialog template containing an ActiveX control, specify a valid CLSID, APPID, or URL string as the *text* field of the **CONTROL** section of the dialog resource, along with "AtlAxWin80" as the *class name* field of the same section. The following demonstrates what a valid **CONTROL** section might look like:

```
CONTROL "{04FE35E9-ADBC-4f1d-83FE-8FA4D1F71C7F}", IDC_TEST,
    "AtlAxWin80", WS_GROUP | WS_TABSTOP, 0, 0, 100, 100
```

For more information on editing resource scripts, see [How to: Create Resources](../../windows/how-to-create-a-resource-script-file.md). For details on control resource-definition statements, see [Common Control Parameters](/windows/win32/menurc/common-control-parameters) under Windows SDK: SDK Tools.

For more information about dialog boxes in general, see [DialogBox](/windows/win32/api/winuser/nf-winuser-dialogboxw) and [CreateDialogParam](/windows/win32/api/winuser/nf-winuser-createdialogparamw) in the Windows SDK.

## <a name="atlaxcreatedialog"></a><a name="atlaxcreatedialog"></a> AtlAxCreateDialog

Creates a modeless dialog box from a user-supplied dialog template.

```
ATLAPI_(HWND) AtlAxCreateDialog(
    HINSTANCE hInstance,
    LPCWSTR lpTemplateName,
    HWND hWndParent,
    DLGPROC lpDialogProc,
    LPARAM dwInitParam);
```

### <a name="parameters"></a>Parameters

*hInstance*<br/>
[in] Identifies an instance of the module whose executable file contains the dialog box template.

*lpTemplateName*<br/>
[in] Identifies the dialog box template. This parameter is either a pointer to a null-terminated character string that specifies the name of the dialog box template, or an integer value that specifies the resource identifier of the dialog box template. If the parameter specifies a resource identifier, its high-order word must be zero and its low-order word must contain the identifier. You can use the [MAKEINTRESOURCE](/windows/win32/api/winuser/nf-winuser-makeintresourcew) macro to create this value.

*hWndParent*<br/>
[in] Identifies the window that owns the dialog box.

*lpDialogProc*<br/>
[in] Points to the dialog box procedure. For more information about the dialog box procedure, see [DialogProc](/windows/win32/api/winuser/nc-winuser-dlgproc).

*dwInitParam*<br/>
[in] Specifies the value to pass to the dialog box in the *lParam* parameter of the WM_INITDIALOG message.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

### <a name="remarks"></a>Remarks

The resulting dialog box can contain ActiveX controls.

See [CreateDialog](/windows/win32/api/winuser/nf-winuser-createdialogw) and [CreateDialogParam](/windows/win32/api/winuser/nf-winuser-createdialogparamw) in the Windows SDK.

## <a name="atlaxcreatecontrol"></a><a name="atlaxcreatecontrol"></a> AtlAxCreateControl

Creates an ActiveX control, initializes it, and hosts it in the specified window.

```
ATLAPI AtlAxCreateControl(
    LPCOLESTR lpszName,
    HWND hWnd,
    IStream* pStream,
    IUnknown** ppUnkContainer);
```

### <a name="parameters"></a>Parameters

*lpszName*<br/>
A pointer to a string to be passed to the control. Must be formatted in one of the following ways:

- A ProgID such as `"MSCAL.Calendar.7"`

- A CLSID such as `"{8E27C92B-1264-101C-8A2F-040224009C02}"`

- A URL such as `"<https://www.microsoft.com>"`

- A reference to an Active document such as `"file://\\\Documents\MyDoc.doc"`

- A fragment of HTML such as `"MSHTML:\<HTML>\<BODY>This is a line of text\</BODY>\</HTML>"`

> [!NOTE]
> `"MSHTML:"` must precede the HTML fragment so that it is designated as being an MSHTML stream.

*hWnd*<br/>
[in] A handle to the window that the control will be attached to.

*pStream*<br/>
[in] A pointer to a stream that is used to initialize the properties of the control. It can be NULL.

*ppUnkContainer*<br/>
[out] The address of a pointer that will receive the `IUnknown` of the container. It can be NULL.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

### <a name="remarks"></a>Remarks

This global function gives you the same result as calling [AtlAxCreateControlEx](#atlaxcreatecontrolex)(*lpszName*, *hWnd*, *pStream*, NULL, NULL, NULL, NULL);.

To create a licensed ActiveX control, see [AtlAxCreateControlLic](#atlaxcreatecontrollic).

## <a name="atlaxcreatecontrolex"></a><a name="atlaxcreatecontrolex"></a> AtlAxCreateControlEx

Creates an ActiveX control, initializes it, and hosts it in the specified window. An interface pointer and event sink for the new control can also be created.

```
ATLAPI AtlAxCreateControlEx(
    LPCOLESTR lpszName,
    HWND hWnd,
    IStream* pStream,
    IUnknown** ppUnkContainer,
    IUnknown** ppUnkControl,
    REFIID iidSink = IID_NULL,
    IUnknown* punkSink = NULL);
```

### <a name="parameters"></a>Parameters

*lpszName*<br/>
A pointer to a string to be passed to the control. Must be formatted in one of the following ways:

- A ProgID such as `"MSCAL.Calendar.7"`

- A CLSID such as `"{8E27C92B-1264-101C-8A2F-040224009C02}"`

- A URL such as `"<https://www.microsoft.com>"`

- A reference to an Active document such as `"file://\\\Documents\MyDoc.doc"`

- A fragment of HTML such as `"MSHTML:\<HTML>\<BODY>This is a line of text\</BODY>\</HTML>"`

> [!NOTE]
> `"MSHTML:"` must precede the HTML fragment so that it is designated as being an MSHTML stream.

*hWnd*<br/>
[in] A handle to the window that the control will be attached to.

*pStream*<br/>
[in] A pointer to a stream that is used to initialize the properties of the control. It can be NULL.

*ppUnkContainer*<br/>
[out] The address of a pointer that will receive the `IUnknown` of the container. It can be NULL.

*ppUnkControl*<br/>
[out] The address of a pointer that will receive the `IUnknown` of the created control. It can be NULL.

*iidSink*<br/>
The interface identifier of an outgoing interface on the contained object.

*punkSink*<br/>
A pointer to the `IUnknown` interface of the sink object to be connected to the connection point specified by *iidSink* on the contained object after the contained object has been successfully created.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

### <a name="remarks"></a>Remarks

`AtlAxCreateControlEx` is similar to [AtlAxCreateControl](#atlaxcreatecontrol) but also allows you to receive an interface pointer to the newly created control and set up an event sink to receive events fired by the control.

To create a licensed ActiveX control, see [AtlAxCreateControlLicEx](#atlaxcreatecontrollicex).

## <a name="atlaxcreatecontrollic"></a><a name="atlaxcreatecontrollic"></a> AtlAxCreateControlLic

Creates a licensed ActiveX control, initializes it, and hosts it in the specified window.

```
ATLAPI AtlAxCreateControlLic(
    LPCOLESTR lpszName,
    HWND hWnd,
    IStream* pStream,
    IUnknown** ppUnkContainer,
    BSTR bstrLic = NULL);
```

### <a name="parameters"></a>Parameters

*lpszName*<br/>
A pointer to a string to be passed to the control. Must be formatted in one of the following ways:

- A ProgID such as `"MSCAL.Calendar.7"`

- A CLSID such as `"{8E27C92B-1264-101C-8A2F-040224009C02}"`

- A URL such as `"<https://www.microsoft.com>"`

- A reference to an Active document such as `"file://\\\Documents\MyDoc.doc"`

- A fragment of HTML such as `"MSHTML:\<HTML>\<BODY>This is a line of text\</BODY>\</HTML>"`

> [!NOTE]
> `"MSHTML:"` must precede the HTML fragment so that it is designated as being an MSHTML stream.

*hWnd*<br/>
A handle to the window that the control will be attached to.

*pStream*<br/>
A pointer to a stream that is used to initialize the properties of the control. It can be NULL.

*ppUnkContainer*<br/>
The address of a pointer that will receive the `IUnknown` of the container. It can be NULL.

*bstrLic*<br/>
A BSTR containing the license for the control.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

### <a name="example"></a>Example

See [Hosting ActiveX Controls Using ATL AXHost](../../atl/atl-control-containment-faq.md#hosting-activex-controls-using-atl-axhost) for a sample that uses `AtlAxCreateControlLic`.

## <a name="atlaxcreatecontrollicex"></a><a name="atlaxcreatecontrollicex"></a> AtlAxCreateControlLicEx

Creates a licensed ActiveX control, initializes it, and hosts it in the specified window. An interface pointer and event sink for the new control can also be created.

```
ATLAPI AtlAxCreateControlLicEx(
    LPCOLESTR lpszName,
    HWND hWnd,
    IStream* pStream,
    IUnknown** ppUnkContainer,
    IUnknown** ppUnkControl,
    REFIID iidSink = IID_NULL,
    IUnknown* punkSink = NULL,
    BSTR bstrLic = NULL);
```

### <a name="parameters"></a>Parameters

*lpszName*<br/>
A pointer to a string to be passed to the control. Must be formatted in one of the following ways:

- A ProgID such as `"MSCAL.Calendar.7"`

- A CLSID such as `"{8E27C92B-1264-101C-8A2F-040224009C02}"`

- A URL such as `"<https://www.microsoft.com>"`

- A reference to an Active document such as `"file://\\\Documents\MyDoc.doc"`

- A fragment of HTML such as `"MSHTML:\<HTML>\<BODY>This is a line of text\</BODY>\</HTML>"`

> [!NOTE]
> `"MSHTML:"` must precede the HTML fragment so that it is designated as being an MSHTML stream.

*hWnd*<br/>
A handle to the window that the control will be attached to.

*pStream*<br/>
A pointer to a stream that is used to initialize the properties of the control. It can be NULL.

*ppUnkContainer*<br/>
The address of a pointer that will receive the `IUnknown` of the container. It can be NULL.

*ppUnkControl*<br/>
[out] The address of a pointer that will receive the `IUnknown` of the created control. It can be NULL.

*iidSink*<br/>
The interface identifier of an outgoing interface on the contained object.

*punkSink*<br/>
A pointer to the `IUnknown` interface of the sink object to be connected to the connection point specified by *iidSink* on the contained object after the contained object has been successfully created.

*bstrLic*<br/>
A BSTR containing the license for the control.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

### <a name="remarks"></a>Remarks

`AtlAxCreateControlLicEx` is similar to [AtlAxCreateControlLic](#atlaxcreatecontrollic) but also allows you to receive an interface pointer to the newly created control and set up an event sink to receive events fired by the control.

### <a name="example"></a>Example

See [Hosting ActiveX Controls Using ATL AXHost](../../atl/atl-control-containment-faq.md#hosting-activex-controls-using-atl-axhost) for a sample that uses `AtlAxCreateControlLicEx`.

## <a name="atlaxattachcontrol"></a><a name="atlaxattachcontrol"></a> AtlAxAttachControl

Attaches a previously created control to the specified window.

```
ATLAPI AtlAxAttachControl(
    IUnknown* pControl,
    HWND hWnd,
    IUnknown** ppUnkContainer);
```

### <a name="parameters"></a>Parameters

*pControl*<br/>
[in] A pointer to the `IUnknown` of the control.

*hWnd*<br/>
[in] A handle to the window that will host the control.

*ppUnkContainer*<br/>
[out] A pointer to a pointer to the `IUnknown` of the container object.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

### <a name="remarks"></a>Remarks

To create and attach a control at the same time, use [AtlAxCreateControlEx](#atlaxcreatecontrolex) and [AtlAxCreateControl](#atlaxcreatecontrol).

> [!NOTE]
> The control object being attached must be correctly initialized before calling `AtlAxAttachControl`.

## <a name="atlaxgethost"></a><a name="atlaxgethost"></a> AtlAxGetHost

Obtains a direct interface pointer to the container for a specified window (if any), given its handle.

```
ATLAPI AtlAxGetHost(HWND h, IUnknown** pp);
```

### <a name="parameters"></a>Parameters

*h*<br/>
[in] A handle to the window that is hosting the control.

*pp*<br/>
[out] The `IUnknown` of the container of the control.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

## <a name="atlaxgetcontrol"></a><a name="atlaxgetcontrol"></a> AtlAxGetControl

Obtains a direct interface pointer to the control contained inside a specified window, given its handle.

```
ATLAPI AtlAxGetControl(HWND h, IUnknown** pp);
```

### <a name="parameters"></a>Parameters

*h*<br/>
[in] A handle to the window that is hosting the control.

*pp*<br/>
[out] The `IUnknown` of the control being hosted.

### <a name="return-value"></a>Return value

One of the standard HRESULT values.

## <a name="atlsetchildsite"></a><a name="atlsetchildsite"></a> AtlSetChildSite

Call this function to set the site of the child object to the `IUnknown` of the parent object.

```
HRESULT AtlSetChildSite(IUnknown* punkChild, IUnknown* punkParent);
```

### <a name="parameters"></a>Parameters

*punkChild*<br/>
[in] A pointer to the `IUnknown` interface of the child.

*punkParent*<br/>
[in] A pointer to the `IUnknown` interface of the parent.

### <a name="return-value"></a>Return value

A standard HRESULT value.

## <a name="atlaxwininit"></a><a name="atlaxwininit"></a> AtlAxWinInit

This function initializes ATL's control hosting code by registering the **"AtlAxWin80"** and **"AtlAxWinLic80"** window classes plus a couple of custom window messages.

```
ATLAPI_(BOOL) AtlAxWinInit();
```

### <a name="return-value"></a>Return value

Nonzero if the initialization of the control hosting code was successful; otherwise FALSE.

### <a name="remarks"></a>Remarks

This function must be called before using the ATL control hosting API.
Po wywołaniu tej funkcji Klasa okna **"AtlAxWin"** może być używana w wywołaniach [do](/windows/win32/api/winuser/nf-winuser-createwindoww) lub [elementu CreateWindowEx](/windows/win32/api/winuser/nf-winuser-createwindowexw), zgodnie z opisem w Windows SDK. ## <a name="atlaxwinterm"></a><a name="atlaxwinterm"></a> AtlAxWinTerm Ta funkcja umożliwia odinicjowanie kodu hostingu formantu ATL przez Wyrejestrowanie klas okien **"AtlAxWin80"** i **"AtlAxWinLic80"** . ``` inline BOOL AtlAxWinTerm(); ``` ### <a name="return-value"></a>Wartość zwracana Zawsze zwraca wartość TRUE. ### <a name="remarks"></a>Uwagi Ta funkcja po prostu wywołuje [UnregisterClass](/windows/win32/api/winuser/nf-winuser-unregisterclassw) zgodnie z opisem w Windows SDK. Wywołaj tę funkcję, aby wyczyścić po usunięciu wszystkich istniejących okien hosta, jeśli wywołano [AtlAxWinInit](#atlaxwininit) , i nie musisz już tworzyć okien hosta. Jeśli ta funkcja nie zostanie wywołana, Klasa Window zostanie wyrejestrowana automatycznie po zakończeniu procesu. ## <a name="atlgetobjectsourceinterface"></a><a name="atlgetobjectsourceinterface"></a> AtlGetObjectSourceInterface Wywołaj tę funkcję, aby pobrać informacje o domyślnym interfejsie źródła obiektu. ``` ATLAPI AtlGetObjectSourceInterface( IUnknown* punkObj, GUID* plibid, IID* piid, unsigned short* pdwMajor, unsigned short* pdwMinor); ``` ### <a name="parameters"></a>Parametry *punkObj*<br/> podczas Wskaźnik do obiektu, dla którego ma zostać zwrócona informacja. *plibid*<br/> określoną Wskaźnik do identyfikatora LIBID biblioteki typów zawierającej definicję interfejsu źródłowego. *piid*<br/> określoną Wskaźnik do identyfikatora interfejsu domyślnego interfejsu źródłowego obiektu. *pdwMajor*<br/> określoną Wskaźnik do głównego numeru wersji biblioteki typów zawierającej definicję interfejsu źródłowego. *pdwMinor*<br/> określoną Wskaźnik do pomocniczego numeru wersji biblioteki typów zawierającej definicję interfejsu źródłowego. 
### <a name="return-value"></a>Wartość zwracana Standardowa wartość HRESULT. ### <a name="remarks"></a>Uwagi `AtlGetObjectSourceInterface` może podać identyfikator interfejsu domyślnego interfejsu źródłowego oraz identyfikatora LIBID i główne i pomocnicze numery wersji biblioteki typów opisującej ten interfejs. > [!NOTE] > Aby ta funkcja pomyślnie pobiera żądane informacje, obiekt reprezentowany przez *punkObj* musi implementować `IDispatch` (i zwracać informacje o typie za pomocą elementu), a `IDispatch::GetTypeInfo` także musi implementować albo `IProvideClassInfo2` `IPersist` . Informacje o typie dla interfejsu źródłowego muszą znajdować się w tej samej bibliotece typów co informacje o typie `IDispatch` . ### <a name="example"></a>Przykład W poniższym przykładzie pokazano, jak można zdefiniować klasę ujścia zdarzeń, `CEasySink` która zmniejsza liczbę argumentów szablonu, które można przekazać do `IDispEventImpl` systemu operacyjnego. `EasyAdvise` i `EasyUnadvise` Użyj, `AtlGetObjectSourceInterface` Aby zainicjować członków [IDispEventImpl](../../atl/reference/idispeventimpl-class.md) przed wywołaniem [DispEventAdvise](idispeventsimpleimpl-class.md#dispeventadvise) lub [DispEventUnadvise](idispeventsimpleimpl-class.md#dispeventunadvise). [!code-cpp[NVC_ATL_Windowing#93](../../atl/codesnippet/cpp/composite-control-global-functions_1.h)] ## <a name="see-also"></a>Zobacz też [Funkcje](../../atl/reference/atl-functions.md)<br/> [Makra kontroli złożonej](../../atl/reference/composite-control-macros.md)
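Taken together, a typical use of the hosting API is: initialize with `AtlAxWinInit`, create or attach a control, query it as needed, and clean up with `AtlAxWinTerm`. The sketch below illustrates that flow. It is a Windows-only illustration, not a complete application: the ProgID, the parent window handle, and the assumption that COM is already initialized are all placeholders for your own setup.

```cpp
// Sketch: hosting an ActiveX control in an existing Win32 window with the
// ATL control hosting API. Windows-only; requires the ATL headers/libs and
// an already-initialized COM apartment (e.g., via CoInitialize).
#include <atlbase.h>
#include <atlwin.h>
#include <atlhost.h>

HRESULT HostCalendarControl(HWND hwndParent)
{
    // Register the "AtlAxWin" window classes; required before any other
    // hosting call.
    if (!AtlAxWinInit())
        return E_FAIL;

    CComPtr<IUnknown> spUnkContainer;
    CComPtr<IUnknown> spUnkControl;

    // Create the control by ProgID and attach it to hwndParent in one step.
    HRESULT hr = AtlAxCreateControlEx(
        L"MSCAL.Calendar.7",   // illustrative ProgID (could be CLSID, URL, HTML)
        hwndParent,            // window that hosts the control
        nullptr,               // no property stream
        &spUnkContainer,       // receives the container's IUnknown
        &spUnkControl);        // receives the control's IUnknown
    if (FAILED(hr))
        return hr;

    // Later, the hosted control can be re-obtained from the handle alone.
    CComPtr<IUnknown> spAgain;
    hr = AtlAxGetControl(hwndParent, &spAgain);
    return hr;
}
```

`AtlAxWinTerm` would then be called once all host windows are destroyed; if it's skipped, the window class is unregistered automatically at process exit, as noted above.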
---
title: '&lt;sectionGroup&gt; element for &lt;configSections&gt;'
ms.date: 05/01/2017
f1_keywords:
- http://schemas.microsoft.com/.NetConfiguration/v2.0#configuration/configSections/sectionGroup
helpviewer_keywords:
- sectionGroup Element
- <sectionGroup> Element
ms.assetid: 6c27f9e2-809c-4bc9-aca9-72f90360e7a3
author: guardrex
ms.author: mairaw
---
# <a name="sectiongroup-element-for-configsections"></a>\<sectionGroup> element for \<configSections>

Defines a namespace for configuration sections.

[**\<configuration>**](~/docs/framework/configure-apps/file-schema/configuration-element.md)
&nbsp;&nbsp;[**\<configSections>**](~/docs/framework/configure-apps/file-schema/configsections-element-for-configuration.md)
&nbsp;&nbsp;&nbsp;&nbsp;**\<sectionGroup>**

## <a name="syntax"></a>Syntax

```xml
<sectionGroup name="section group name">
  <!-- Configuration sections -->
</sectionGroup>
```

## <a name="attribute"></a>Attribute

|           | Description |
| --------- | ----------- |
| **name**  | Required attribute.<br><br>Specifies the name of the section group you are defining. |

## <a name="parent-element"></a>Parent element

|     | Description |
| --- | ----------- |
| [**\<configSections>** element](~/docs/framework/configure-apps/file-schema/configsections-element-for-configuration.md) | Contains configuration section and namespace declarations. |

## <a name="child-elements"></a>Child elements

|     | Description |
| --- | ----------- |
| [**\<section>**](~/docs/framework/configure-apps/file-schema/section-element.md) | Contains a configuration section declaration. |

## <a name="remarks"></a>Remarks

Declaring a section group creates a containing tag for configuration sections and ensures that there are no naming conflicts with configuration sections defined by someone else.

You can nest **\<sectionGroup>** elements within one another.

## <a name="example"></a>Example

The following example shows how to declare a section group and declare sections within it:

```xml
<configuration>
  <configSections>
    <sectionGroup name="mySectionGroup">
      <section name="mySection" type="System.Configuration.NameValueSectionHandler,System" />
    </sectionGroup>
  </configSections>
  <mySectionGroup>
    <mySection>
      <add key="key1" value="value1" />
    </mySection>
  </mySectionGroup>
</configuration>
```

## <a name="configuration-file"></a>Configuration file

This element can be used in the application configuration file, the machine configuration file (*Machine.config*), and *Web.config* files that are not at the application directory level.

## <a name="see-also"></a>See also

[Configuration file schema for the .NET Framework](~/docs/framework/configure-apps/file-schema/index.md)
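Since **\<sectionGroup>** elements can be nested, a grouped hierarchy can be declared and then mirrored in the body of the configuration file. The fragment below is a hypothetical sketch of such nesting — the group and section names are invented for illustration:

```xml
<configuration>
  <configSections>
    <sectionGroup name="outerGroup">
      <sectionGroup name="innerGroup">
        <section name="mySection"
                 type="System.Configuration.NameValueSectionHandler,System" />
      </sectionGroup>
    </sectionGroup>
  </configSections>
  <!-- The element hierarchy in the body mirrors the declared nesting. -->
  <outerGroup>
    <innerGroup>
      <mySection>
        <add key="key1" value="value1" />
      </mySection>
    </innerGroup>
  </outerGroup>
</configuration>
```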
---
title: Authenticating Business Central Users with Azure Active Directory
description: Get an overview about using Azure AD authentication in Business Central.
ms.custom: na
ms.date: 03/23/2022
ms.reviewer: na
ms.suite: na
ms.tgt_pltfrm: na
ms.topic: conceptual
ms.service: "dynamics365-business-central"
author: jswymer
---
# Authenticating [!INCLUDE[prod_short](../developer/includes/prod_short.md)] Users with Azure Active Directory

Azure Active Directory \(Azure AD\) is a cloud service that provides identity and access capabilities for applications. The applications can be cloud-based, like on Microsoft Azure and Microsoft 365, or installed on-premises, like [!INCLUDE[prod_short](../developer/includes/prod_short.md)]. This article describes the tasks involved in setting up Azure AD authentication for authenticating [!INCLUDE[prod_short](../developer/includes/prod_short.md)] users.

## Azure AD and [!INCLUDE[prod_short](../developer/includes/prod_short.md)]

With Azure AD authentication, you store user accounts and credentials in an Azure AD tenant. You then associate [!INCLUDE[prod_short](../developer/includes/prod_short.md)] user accounts with the Azure AD tenant user account. Once in place, users access [!INCLUDE[prod_short](../developer/includes/prod_short.md)] by using their Azure AD account. Azure AD authentication enables [!INCLUDE[prod_short](../developer/includes/prod_short.md)] to integrate with various applications and services through a single sign-on experience.
It's the required authentication method for some features offered by [!INCLUDE[prod_short](../developer/includes/prod_short.md)], such as:

- Excel add-in
- Excel financial reports
- Outlook add-in
- Cover sheets for contact management
- Power BI reports and charts
- Power Automate Management
- Service-to-Service authentication with Automation APIs

## Moving from WS-Federation to OpenID Connect

[!INCLUDE[2022_releasewave1](../includes/2022_releasewave1.md)]

Starting with 2022 release wave 1 (version 20), Business Central supports the OpenID Connect (OIDC) protocol for Azure AD authentication. In previous releases, Azure AD authentication in Business Central used WS-Federation (Web Services Federation Language). [OpenID Connect](https://openid.net/connect/) is a modern protocol that's built on OAuth 2.0 and has a standard authentication library. For more information about OpenID Connect, see [Microsoft identity platform and OpenID Connect protocol](/azure/active-directory/develop/v2-protocols-oidc).

With the introduction of OpenID Connect, WS-Federation support in Business Central has been deprecated. It will be removed in a later release. Until it's removed, you can continue to use Azure AD authentication with WS-Federation, but we recommend using OpenID Connect. For the complete setup of Azure AD with OpenID Connect, see [Configure Azure AD Authentication with OpenID Connect](authenticating-users-with-azure-ad-openid-connect.md).

> [!NOTE]
> [!INCLUDE[prod_short](../developer/includes/prod_short.md)] version 19 and earlier still only support WS-Federation. If you're setting up one of these versions, see [Configure Azure AD Authentication with WS-Federation](authenticating-users-with-azure-active-directory.md).

### Switch an existing configuration from WS-Federation to OpenID Connect

The complete setup for OpenID Connect isn't much different from the setup for WS-Federation.
The following steps outline the modifications you have to make to an existing deployment to go from WS-Federation to OpenID Connect.

1. In Azure Active Directory, enable ID tokens on the registered application for Business Central authentication. You make this change from the [Azure portal](https://portal.azure.com).
2. In [!INCLUDE[prod_short](../developer/includes/prod_short.md)]:

   1. Configure the [!INCLUDE[server](../developer/includes/server.md)] instance to include the `ValidAudiences` parameter set to the application ID assigned to the registered application in Azure AD.

      ```powershell
      Set-NAVServerConfiguration -ServerInstance <BC server instance name> -KeyName ValidAudiences -KeyValue "<application ID>"
      ```

   2. Configure the [!INCLUDE[webserver](../developer/includes/webserver.md)] to include the `AadApplicationId` and `AadAuthorityUri` parameters. Set `AadApplicationId` to the application ID assigned to the registered application in Azure AD. Set `AadAuthorityUri` to `"https://login.microsoftonline.com/<Azure_AD_Tenant_ID>"`.

      ```powershell
      Set-NAVWebServerConfiguration -KeyName AadApplicationId -KeyValue "<Azure_AD_Application_ID>"
      Set-NAVWebServerConfiguration -KeyName AadAuthorityUri -KeyValue "https://login.microsoftonline.com/<Azure_AD_Tenant_ID>"
      ```

For the complete setup with more details, see [Configure Azure AD Authentication with OpenID Connect](authenticating-users-with-azure-ad-openid-connect.md).

### Configure legacy WS-Federation in version 20

Whether you're setting up a new version 20 deployment or upgrading from version 19 or earlier, you can still set up Azure AD authentication to use WS-Federation for now. The full setup is the same as in earlier versions, except the [!INCLUDE[webserver](../developer/includes/webserver.md)] now includes a setting named `UseLegacyAcsAuthentication` that you set to `true`.
For example, using the [!INCLUDE[adminshell](../developer/includes/adminshell.md)], you run the following command:

```powershell
Set-NAVWebServerConfiguration -KeyName UseLegacyAcsAuthentication -KeyValue "true"
```

For the complete setup, see [Configure Azure AD Authentication with WS-Federation](authenticating-users-with-azure-active-directory.md).

## See Also

[Authentication and Credential Types](Users-Credential-Types.md)
[Troubleshooting: SAML2 token errors with Azure Active Directory/Office 365 Authentication](troubleshooting-SAML2-token-not-valid-because-validity-period-ended.md)
[Migrating to Multitenancy](../deployment/migrating-to-multitenancy.md)
# dog-breed-recognition
---
title: Set up a lab account with Azure Lab Services | Microsoft Docs
description: Learn how to create a lab account with Azure Lab Services, add a lab creator, and specify the Marketplace images to be used by labs in the lab account.
services: devtest-lab, lab-services, virtual-machines
documentationcenter: na
author: spelluru
manager: ''
editor: ''
ms.service: lab-services
ms.workload: na
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: tutorial
ms.custom: mvc
ms.date: 02/10/2020
ms.author: spelluru
---
# <a name="tutorial-set-up-a-lab-account-with-azure-lab-services"></a>Tutorial: Set up a lab account with Azure Lab Services

In Azure Lab Services, a lab account serves as the central account in which your organization's labs are managed. In your lab account, give permission to others to create labs, and set policies that apply to all labs under the lab account. In this tutorial, you learn how to create a lab account.

In this tutorial, you do the following actions:

> [!div class="checklist"]
> * Create a lab account
> * Add a user to the Lab Creator role

If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.

## <a name="create-a-lab-account"></a>Create a lab account

The following steps show how to use the Azure portal to create a lab account with Azure Lab Services.

1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select **All Services** on the left menu. Select **DevOps** from **Categories**. Then, select **Lab Services**.

    If you select the star (`*`) next to **Lab Services**, it's added to the **FAVORITES** section on the left menu. From then on, you select **Lab Services** under **FAVORITES**.

    ![All Services -> Lab Services](../media/tutorial-setup-lab-account/select-lab-accounts-service.png)
3. On the **Lab Services** page, select **Add** on the toolbar or select the **Create lab account** button on the page.

    ![Select Add on the Lab Accounts page](../media/tutorial-setup-lab-account/add-lab-account-button.png)
4. On the **Basics** tab of the **Create lab account** page, do the following actions:
    1. For **Lab account name**, enter a name.
    2. Select the **Azure subscription** in which you want to create the lab account.
    3. For **Resource group**, select an existing resource group, or select **Create new** and enter a name for the resource group.
    4. For **Location**, select a location/region in which you want to create the lab account.

        ![Lab account - Basics page](../media/tutorial-setup-lab-account/lab-account-basics-page.png)
    5. Select **Review + create**.
    6. Review the summary, and select **Create**.

        ![Review + create -> Create](../media/tutorial-setup-lab-account/create-button.png)
5. When the deployment is complete, expand **Next steps**, and select **Go to resource**.

    ![Go to lab account page](../media/tutorial-setup-lab-account/go-to-lab-account.png)
6. Confirm that you see the **Lab Account** page.

    ![Lab account page](../media/tutorial-setup-lab-account/lab-account-page.png)

## <a name="add-a-user-to-the-lab-creator-role"></a>Add a user to the Lab Creator role

To set up a classroom lab in a lab account, the user must be a member of the **Lab Creator** role in the lab account. To provide educators the permission to create labs for their classes, add them to the **Lab Creator** role:

> [!NOTE]
> The account you used to create the lab account is automatically added to this role. If you plan to use the same user account to create a classroom lab in this tutorial, skip this step.

1. On the **Lab Account** page, select **Access control (IAM)**, select **+ Add** on the toolbar, and then select **Add role assignment** on the toolbar.

    ![Access Control -> Add role assignment button](../media/tutorial-setup-lab-account/add-role-assignment-button.png)
1. On the **Add role assignment** page, select **Lab Creator** for **Role**, select the user you want to add to the Lab Creators role, and select **Save**.

    ![Add lab creator](../media/tutorial-setup-lab-account/add-lab-creator.png)

## <a name="next-steps"></a>Next steps

In this tutorial, you created a lab account. To learn how to create a classroom lab as an educator, advance to the next tutorial:

> [!div class="nextstepaction"]
> [Set up a classroom lab](tutorial-setup-classroom-lab.md)
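For scripted setups, the portal steps above can be approximated from the command line. The sketch below uses general-purpose Azure CLI commands (`az group create`, `az resource create`, `az role assignment create`); the resource group name, account name, region, and user principal name are placeholders, and the `Microsoft.LabServices/labaccounts` resource type should be verified against your subscription's available API versions before relying on this.

```shell
#!/bin/sh
# Sketch only: create a resource group, a lab account resource, and a
# "Lab Creator" role assignment. All names below are illustrative.
az group create --name MyLabResourceGroup --location westus2

# Create the lab account via the generic resource command
# (resource type assumed; confirm for your API version).
az resource create \
  --resource-group MyLabResourceGroup \
  --resource-type "Microsoft.LabServices/labaccounts" \
  --name MyLabAccount \
  --location westus2 \
  --properties '{}'

# Grant an educator the Lab Creator role, scoped to the resource group.
az role assignment create \
  --assignee educator@contoso.com \
  --role "Lab Creator" \
  --resource-group MyLabResourceGroup
```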
---
layout: post
title: "5 EPs You Missed From the First Half of 2017"
date: 2017-07-19
tags: [music, hip-hop]
download: false
share: true
category: post
---

There's something to be said for the trend toward the increasing democratization of music. In the past two decades, we've gone from a world where popular music is inescapable (and overwhelmingly sponsored by unsavory business tactics by record labels, such as payola) to one where you pretty much only end up hearing the latest One Direction or Taylor Swift if you're actively looking for it. On the other hand, this fragmentation has allowed the individual's music listening habits to retreat further into algorithmic echo chambers created by Apple Music and Spotify.

Luckily, there's still a number of great music publications out there that keep you in the loop on the latest in music (shoutout to Noisey, Pigeons&Planes, Stereogum, The Needle Drop, NPR). I figured I'd do my part by writing short reviews of some EPs I've enjoyed so far this year that I haven't seen any mainstream discussion about. Disclaimer: my taste in music is mostly hip-hop and electronic music, so take that as you will.

## Zack Villere - Little Big World

Zack Villere (formerly known as Froyo Ma, I think) is a geeky white teenager from Louisiana who looks like the kid from Napoleon Dynamite. And his music is awesome.

This is the sort of album I can recommend to just about everybody. It's a bit rough around the edges, and it's not the most polished thing you've ever heard, but it comes with an overwhelming amount of charm. If I had to describe the ethos of the record, it'd be something like: "Michael Cera makes a Weezer album in the style of J Dilla." The whole thing has a very amateur-ish, lo-fi sort of aesthetic, and I mean that in the best possible way. The instrumentals mostly consist of bouncy lo-fi synths over jazzy chord progressions, ornamented with Zack's double-tracked smooth baritone, which sometimes reaches into a not-quite-there falsetto.
The record sounds like it was recorded by one dude with a surprisingly soulful voice in his room, which it honestly probably was. There's also some sort of narrative to the album, which involves (from what I can tell) Zack and his friends finding an alien in some sort of wacky Goonies-esque adventure. I don't know what that's all about, but it's a lot of fun. There are also some quirky snippets of conversations throughout the record ("All right, yo, do you guys want a Popsicle?" "Definitely, you got the mango ones?" "I got you").

<div class="video-container">
<iframe src="https://www.youtube.com/embed/923uTY2q71I" frameborder="0" allowfullscreen></iframe>
</div>

It has a certain earnestness to it which really shines in the lyrics. Some of my favorite awkward teen gems include: "I'll drive you to my parents' house / And then we started making out / While we're listening to Cherry Bomb / Ordered pizza then we take it out" and "Tomorrow you want to get coffee / But I don't even drink coffee / I'm down though / I'll just drink water". These lines are delivered with a sense of honesty and just the right amount of irony and self-awareness to make the Zack Villere character likeable, and surprisingly cool.

The standout track of the album to me is _Cool_, a song about, well, wanting to be cool, and that's about all you need to know. It's a nostalgic feel-good summer jam that sums up the whole record pretty well. You can't help but bob your head to it. The whole project is just about a half hour long, and the smooth soulful vocals and warm instrumentals make for great background music. Check it out if you've got a minute.

## Kero One & Azure - Kero One & Azure

I always have to shout out Asian rappers, so of course I had to plug this record, which I have seen literally zero discussion about anywhere on the Internet. Now, I can understand why there isn't a lot of hype around this album.
A pretty by-the-numbers jazz-rap album by two Asian dudes in 2017 was never going to get a lot of traction in the first place. But, even though it's not perfect, it is a solid record with a good grasp of the hip-hop fundamentals, and it deserves to be commended for that.

Asians have been making contributions within the acid jazz niche for years now with artists like Freddie Joachim, Nujabes, DJ Okawari, and yes, Kero One. Aside from that, I truly believe we're on the cusp of a "second wave" of Asians in hip-hop, with artists like Rich Chigga, Joji, Tokimonsta, and Keith Ape making waves. I think viral singles like _It G Ma_ and _Dat Stick_ proved that the process of cultural exchange in hip-hop has already begun, and I can't wait to see the whole thing play out.

But let's get back to the record. In context, both the artists on this record were essentially ahead of their time (...or will have been, in ten years; I'm really making a lot of predictions here). Korean rapper/producer Kero One has been spitting solid bars over jazzy beats for over a decade now, and Azure is a member of the hip-hop collective HBK Gang. Both are pretty much veterans of the industry at this point, and they pull off some pretty effortless flows over some smooth and polished production on this project. Jazz samples plus bars is, to me, one of the most timeless recipes in all of modern music, and I will never get enough of it.

The duo wear their influences on their sleeve: A Tribe Called Quest, Digable Planets, Pete Rock & CL Smooth, Slum Village, you know, the classics. Kero One in particular has a flow strongly reminiscent of Del Tha Funky Homo Sapien and a lot of '80s hip-hop. The two (but particularly Kero) are hip-hop scholars, and they know their stuff. On the opener _Jazzhop_, for example, bars directly reference Common (who is sampled in the hook), Jay-Z, Tribe, Ice Cube, and even Kanye's interview on Sway's radio show. And that's just the opener.
<div class="video-container">
<iframe src="https://www.youtube.com/embed/tb_R2DfylH8" frameborder="0" allowfullscreen></iframe>
</div>

Unfortunately this means at times their music borders on pastiche. For example, the mid-2000s Southern rap influenced _Winning_, where Azure does his best Big Boi impression (it's not very good). But for the most part, the rapper/producer combo adds their own unique flavor to the mix. One of the highlights on this album is _Light It Up_, which starts with a spacey pitched-up soul sample a la early Kanye and incorporates a stellar beat switch in the second verse. There's some great wordplay on here ("Choppin beats like a vegan / Codeine with my flow scheme"; "Marvelous rhythms, I'm sharp as a prism / Mama say I'm always stoned, but I'm carvin' my vision") and both rappers have an impeccable flow throughout.

Another standout is _Momma Said_ which, despite featuring virtual instrument trumpets that sound like they came straight out of Garage Band, will probably put you in a good mood by the end of the song. Azure comes through with a refreshingly candid verse about the struggles of being a broke 20-something serial partier coping with a difficult breakup, while Kero One paints a vivid picture of his hardworking Korean immigrant mother who inspired him to persevere and break through as an Asian rapper. Overall, it's a great concept for a song, and executed really well.

These two have interesting and unique perspectives which complement each other perfectly, and they sound confident on a selection of solid classic hip-hop beats. If you're a hip-hop fan, especially a fan of '90s stuff, you'll probably like this. Why isn't this record more popular?

## Lapalux - Ruinism

This one just barely makes the cut, as it was released almost exactly halfway through the year on June 30th. I saved this for third because while the first two albums on this list are pretty accessible, this one is...not at all. But hear me out.
_Ruinism_ is the latest album to come out of the Brainfeeder music label, founded by revolutionary electronic/psychedelic/jazz-influenced music producer Flying Lotus. It's in general difficult to describe the music Brainfeeder puts out, because while it spans a huge range of genres and styles, it maintains a singular identity. The label is essentially a playground for pushing the boundaries of hip-hop and electronic music, often extending the work of Dilla, Madlib, and FlyLo himself (whose distinctive style has by now influenced hundreds of aspiring producers). If you want to get a good idea of what that sounds like, you really just need to listen to Flying Lotus. Actually, you should do that anyway.

This new record by Lapalux, though, is weird, even for Brainfeeder. Each track contorts and twists in on itself, refusing to adhere to any structure at all. I'm not sure I could even call half of the tracks on here "songs" in the traditional sense. They're more like soundscapes, something like audio paintings. This all sounds very abstract and high-concept, I know, but trust me that it makes sense if you're in the right mood to listen to it. My suggestion? Listen to it alone, preferably late at night, ideally intoxicated or sleep-deprived in some way.

One of Lapalux's primary influences has always been Foley, or the practice of creating field sound effects for filmmaking, and it really comes through in this record. There are so many sounds on here you've likely never made the conscious effort to listen to before, and the textural diversity of the project on its own is something to be commended for. Lapalux combines eerie detuned synths, glitchy drums, distorted voice clips, real instruments (_Data Demon_ features a lengthy oboe solo for some reason), and rattling hi-hats to create something which definitely sounds like nothing you've ever heard before.
<div class="video-container">
<iframe src="https://www.youtube.com/embed/fqzRsuiaYuo" frameborder="0" allowfullscreen></iframe>
</div>

On the first couple of listens, most of these tracks sound almost like a horror movie soundtrack, and they're a bit overwhelming. But the more you listen to them, the more you realize two things all these tracks have in common.

The first is an incredible attention to detail. It would be hard for someone to create a track like _Rotted Arp_, for example, without an incredible sense of focus. The harshly out-of-tune synths which arrive at about two and a half minutes into the track wail over each other in a cacophony that is somehow more than the sum of its parts, as you're eventually able to make out a melancholic melody rising out of the din.

The second trait most of these tracks share is a sense of narrative. What I mean by this is that each track goes through a cohesive emotional arc with a distinct beginning, middle, and end, between which motifs and sounds evolve and shift. My personal favorite track is _Data Demon_, which begins with an operatic female voice singing an aria in concert with a vibrato-heavy violin as a chorus of strings swells around them. Lurking underneath throughout the intro is an eerie bass, which crescendos further into distortion with every cycle. Then, suddenly, the strings vanish (the voice does not), and the melancholic oboe duet begins in near silence, supplemented by uneasy reverb-soaked synth arpeggios which get louder and more intrusive throughout. A few seconds after the oboes drop out, you are assaulted by just about the most aggressive beat drop you'll ever hear in your life, with a glitchy bass relentlessly pounding in tandem with the kick drum. The track ends with the bass and drums getting faster and faster until they break off abruptly. And that's just one track!

Now, I know this kind of music isn't for everybody.
I know there are a lot of skeptics out there who will just hear noise and not music. But if you dig a little bit deeper and listen closely to a couple of tracks, I would be shocked if you didn't feel at least something, even if you couldn't explain it properly. Something you perhaps don't usually feel while listening to music. And isn't that what art is all about?

## Buddy x Kaytranada - Ocean & Montana

Let me be unequivocally clear here: if Kaytranada were not on this project, I would not have liked it nearly as much. This isn't to say that Compton rapper Buddy isn't good, because he's really pretty okay, and he does reasonably well over these beats; however, at the end of the day, he's not really the main appeal here. What I'm getting at is that Kaytranada is one of the best producers working today. Period. The man has got the art of mixing bass and drums down to a science (especially bass, good lord) and you really owe it to yourself to check out his work. His 2016 album _99.9%_ is a great place to start.

Kaytranada's signature style is immediately identifiable. The first major thing you'll notice is the heavy, almost disorienting sidechaining; the second is the exceptionally fat kick drum; and the third is the absolutely huge wall of bass which makes just as much of a statement when it is present as when it is not. And that's Kaytranada's music in a nutshell. On tracks like the opener _Find Me_, that's pretty much all there is (along with some other Kaytra signatures, like chime arpeggios and sharp hi-hats), and it still works, which is a testament to the strength of the beats. On closer _Love or Something_, the bass has such overwhelming presence it could honestly be credited as a feature artist.

<div class="video-container">
<iframe src="https://www.youtube.com/embed/NWTNbDkalAQ" frameborder="0" allowfullscreen></iframe>
</div>

Now, like I said earlier, Buddy doesn't do a bad job at all.
For example, the skeletal opener _Find Me_, on which Buddy is singing more than rapping, is an ode to urban loneliness and substance addiction which meshes perfectly with the dark, minimal instrumental. Another standout is smoking anthem _A Lite_, on which Buddy pulls off an incredibly smooth and laid-back flow. On _Guillotine_, Buddy pulls off a virtuosic fast flow over what sounds almost like an OutKast beat (although Kaytra makes sure to throw in a weird sample just to switch things up). Overall, it's a decent project. I can't wait to hear more from both of these guys, but for now it's nice to have some Kaytranada to tide the fans over until his next release.

## Jaeden Camstra - Kids' Menu

Has anyone else noticed this trend of 24/7 lo-fi hip-hop streams on YouTube? I know a bunch of people that put these types of "radio stations" on to study or just to chill out. What's interesting is that no one ever really cares about which songs, or even which artists, are being played -- for the most part, they're interchangeable. What's more important is the mood that the beats create, the atmosphere that they cultivate as a whole.

Often these beats have a "vintage" aesthetic, even though this music is produced mostly by people in their teens or early 20s. As a result, most of the references go no further back than pop culture from the late '90s and early '00s. There are a lot of references to anime, cartoons, and video games. You know, stuff we liked as kids.

This is a relatively recent development, and one that I'd been waiting a long time to happen. My generation has finally become old enough to be nostalgic for the '90s and '00s, in the same way that the hip-hop of the early '90s was itself influenced by funk and soul music. I understand that the turn of the millennium was a very particular cultural moment, and a huge chunk of the population won't relate to this stuff, but I can't help but like it a lot.
<div class="video-container">
<iframe src="https://www.youtube.com/embed/DHkdeAC5il8" frameborder="0" allowfullscreen></iframe>
</div>

Enter: Kids' Menu by Jaeden Camstra, possibly as emblematic a record of the current lo-fi hip-hop zeitgeist as we're ever going to get. The record seamlessly transitions between 20 tracks of around a minute each and encompasses all the typical trappings of the subgenre. There are vinyl crackles, heavy side-chaining, exaggeratedly swung Dilla-style drums, jazz piano and nylon guitar samples, samples from Japanese music, and lots and lots of cultural references.

Now I'm not claiming that this album is the best in the genre. But it definitely knows what kind of record it's trying to be, and it occupies its lane spectacularly. In its less than half an hour of runtime, it references and samples (here we go): Nas, Super Mario, the Game Boy Advance startup jingle, Yoshi's Island, Coca-Cola, Dragon Ball Z, SpongeBob, Family Guy, the Nintendo Wii, Kanye West, the Wu-Tang Clan, Adele's 19, and Hey Arnold. I don't know if this describes your childhood and adolescence, but it sums up mine pretty well. Even the track titles make you nostalgic: Saturday morning cartoons, watching infomercials late at night, waiting in your mom's car at the stoplight, and of course, ordering chicken tenders off the kids' menu. This is the childhood manifesto of a first-generation kid with a genuine love for anime, video games, hip-hop, and American pop culture.

Now I haven't really talked about the music itself that much, because I think that's almost missing the appeal of this kind of record. But I will say that the track _wii_ has an incredibly smooth jazz sample with an infectious lead synth melody (if anyone knows what it is, please please let me know). By now, you should already know if you need this EP in your life. If you don't get the appeal, I'm not sure I can explain it any more than that.
The YouTube video above actually has the entire album, so check it out.
# Grocer - A tool for managing ingredients in a cookbook

Even the best chef needs the right ingredients. Grocer helps you procure the freshest and best-tasting ingredients to make a spectacular meal. Your servers will be more delicious than ever. Sorry, it doesn't support organic or non-GMO at this time.

### TL;DR

1. Download the package and `cd` into the package dir.
2. `sudo python setup.py install`
3. `cd` to the repo and run `grocer_test` (the foodcritic path defaults to /opt/chef/embedded; use `-f /opt/chefdk/bin/foodcritic` on OS X).

### Overview

Grocer manages the components of a Chef cookbook, such as testing for style, syntax, or even functionality (unit tests). Nobody wants to cook with syntax errors or a gnarly regression! `grocer_test` is designed for use cases such as a pre-commit hook on a developer workstation:

```
grocer_test
2015-06-01 14:17:41,537 - grocer - INFO - Starting test process
2015-06-01 14:17:41,537 - grocer - INFO - Running Foodcritic
2015-06-01 14:17:42,442 - grocer - INFO - Running Ruby Syntax Checks
2015-06-01 14:17:42,442 - grocer - INFO - Test process complete!
```

It can also resolve dependencies through Berkshelf and upload assets to a Chef server for use in a CI/CD pipeline.
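As a sketch of the pre-commit-hook use case (the hook body below is illustrative, not shipped with grocer; adjust the `-f` path for your foodcritic install):

```shell
#!/bin/sh
# Save as .git/hooks/pre-commit in the cookbook repo and mark it
# executable (chmod +x). A non-zero exit from grocer_test aborts the
# commit. The -f value is an assumption for an OS X ChefDK install.
if command -v grocer_test >/dev/null 2>&1; then
    exec grocer_test -f /opt/chefdk/bin/foodcritic
else
    echo "grocer_test not found on PATH; skipping cookbook checks." >&2
fi
```

Because the hook `exec`s `grocer_test`, the tool's exit status becomes the hook's, so a failing check blocks the commit.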
Notice it will first run the same tests as in `grocer_test`:

```
grocer_upload
2015-06-01 13:38:33,561 - grocer - INFO - Starting upload process
2015-06-01 13:38:33,561 - grocer - INFO - Running Foodcritic
2015-06-01 13:38:35,100 - grocer - INFO - Running Ruby Syntax Checks
2015-06-01 13:38:35,100 - grocer - INFO - Testing syntax for file ./metadata.rb
2015-06-01 13:38:35,198 - grocer - INFO - Testing syntax for file ./test/integration/default/default.rb
2015-06-01 13:38:35,296 - grocer - INFO - Testing syntax for file ./test/integration/default/spec_helper.rb
2015-06-01 13:38:35,400 - grocer - INFO - Testing syntax for file ./recipes/default.rb
2015-06-01 13:38:35,493 - grocer - INFO - Testing syntax for file ./attributes/default.rb
2015-06-01 13:38:35,590 - grocer - INFO - Running Berks Install
2015-06-01 13:38:47,453 - grocer - INFO - Running Berks Update
2015-06-01 13:38:59,260 - grocer - INFO - Running Berks Upload
```

### Options

There are a few options you might want to set when running either grocer tool. The defaults point at the locations of the testing binaries on the build hosts, so on a developer workstation you'll likely need to point them somewhere else. You can use the `-h` flag to see all the options:

```
optional arguments:
  -h, --help            show this help message and exit
  -p PATH, --path PATH  The path to the repo. Default is CWD
  -f FOODCRITIC_BIN, --foodcritic_bin FOODCRITIC_BIN
                        The path to the foodcritic binary
  -r RUBY_BIN, --ruby_bin RUBY_BIN
                        The path to the ruby binary
  -l LOG_LEVEL, --log_level LOG_LEVEL
```

### Test Actions

* foodcritic - Runs foodcritic against the specified path, e.g. `foodcritic .` for CWD
* ruby syntax - Runs the Ruby interpreter's built-in syntax check by invoking the `-c` flag. First scans the files and directories in the specified path to find Ruby files (ending in .rb), then loops through each one and tests it.
* chefspec (unit tests) - coming soon!
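To illustrate the ruby syntax action described above, here is a rough Python sketch (this is not grocer's actual source; the function names `find_ruby_files` and `check_ruby_syntax` are made up for illustration):

```python
import os
import subprocess

def find_ruby_files(path):
    """Recursively collect every .rb file under path."""
    matches = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            if name.endswith(".rb"):
                matches.append(os.path.join(root, name))
    return sorted(matches)

def check_ruby_syntax(path, ruby_bin="ruby"):
    """Run `ruby -c` on each .rb file; return the files that fail to parse."""
    failures = []
    for rb_file in find_ruby_files(path):
        result = subprocess.run([ruby_bin, "-c", rb_file], capture_output=True)
        if result.returncode != 0:
            failures.append(rb_file)
    return failures
```

This mirrors the two-phase behavior the action describes: scan for `.rb` files first, then invoke the interpreter's `-c` check on each one.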
### Installation

The tool's build job in Jenkins will produce an RPM package for installing on RHEL-based systems. For OS X systems, you can use Python setuptools to install. You'll need some prerequisites if you don't already have them:

```
sudo pip install argparse
```

If you don't have foodcritic or chefspec, the Chef Development Kit is a great way to get them: https://downloads.chef.io/chef-dk/. Then be sure to pass the correct path to the binaries when running it (e.g. `-f /opt/chefdk/bin/foodcritic`).

```
cd grocer
python setup.py install
```

### Future Enhancements

* Support style tests (rubocop, ruby tailor)
* Validate ERB syntax
* Validate JSON syntax
* Allow for selection of specific tests only on the command line
* Support unit test frameworks for Chef such as chefspec

### Source

It's written in Python, and the code can be found in the repo called 'grocer'. Pull requests and patches welcome!
# XDM Visualization

## Git Repo Branch: master

### Standard XDM Schemas

[uberschemas.health_and_life_sciences.profile-generated-health_and_life_sciences](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.health_and_life_sciences.profile-generated-health_and_life_sciences.html)<br/> [uberschemas.health_and_life_sciences.experienceevent-generated-health_and_life_sciences](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.health_and_life_sciences.experienceevent-generated-health_and_life_sciences.html)<br/> [uberschemas.media_and_entertainment.profile-generated-media_and_entertainment](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.media_and_entertainment.profile-generated-media_and_entertainment.html)<br/> [uberschemas.media_and_entertainment.experienceevent-generated-media_and_entertainment](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.media_and_entertainment.experienceevent-generated-media_and_entertainment.html)<br/> [uberschemas.product-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.product-generated.html)<br/> [uberschemas.opportunity-contact-role-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.opportunity-contact-role-generated.html)<br/> [uberschemas.opportunity-person-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.opportunity-person-generated.html)<br/> [uberschemas.campaign-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.campaign-generated.html)<br/> [uberschemas.opportunity-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.opportunity-generated.html)<br/> [uberschemas.financial_services.experienceevent-generated-financial_services](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.financial_services.experienceevent-generated-financial_services.html)<br/>
[uberschemas.financial_services.profile-generated-financial_services](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.financial_services.profile-generated-financial_services.html)<br/> [uberschemas.profile-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.profile-generated.html)<br/> [uberschemas.education.profile-generated-education](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.education.profile-generated-education.html)<br/> [uberschemas.education.experienceevent-generated-education](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.education.experienceevent-generated-education.html)<br/> [uberschemas.marketing-list-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.marketing-list-generated.html)<br/> [uberschemas.travel_and_hospitality.profile-generated-travel_and_hospitality](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.travel_and_hospitality.profile-generated-travel_and_hospitality.html)<br/> [uberschemas.travel_and_hospitality.experienceevent-generated-travel_and_hospitality](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.travel_and_hospitality.experienceevent-generated-travel_and_hospitality.html)<br/> [uberschemas.segmentdefinition-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.segmentdefinition-generated.html)<br/> [uberschemas.retail.experienceevent-generated-retail](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.retail.experienceevent-generated-retail.html)<br/> [uberschemas.retail.profile-generated-retail](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.retail.profile-generated-retail.html)<br/> [uberschemas.graphs-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.graphs-generated.html)<br/> 
[uberschemas.high_tech.experienceevent-generated-high_tech](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.high_tech.experienceevent-generated-high_tech.html)<br/> [uberschemas.high_tech.profile-generated-high_tech](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.high_tech.profile-generated-high_tech.html)<br/> [uberschemas.automotive.profile-generated-automotive](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.automotive.profile-generated-automotive.html)<br/> [uberschemas.automotive.experienceevent-generated-automotive](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.automotive.experienceevent-generated-automotive.html)<br/> [uberschemas.experienceevent-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.experienceevent-generated.html)<br/> [uberschemas.manufacturing.profile-generated-manufacturing](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.manufacturing.profile-generated-manufacturing.html)<br/> [uberschemas.manufacturing.experienceevent-generated-manufacturing](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.manufacturing.experienceevent-generated-manufacturing.html)<br/> [uberschemas.telecom.experienceevent-generated-telecom](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.telecom.experienceevent-generated-telecom.html)<br/> [uberschemas.telecom.profile-generated-telecom](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.telecom.profile-generated-telecom.html)<br/> [uberschemas.account-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.account-generated.html)<br/> [uberschemas.marketing-list-member-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.marketing-list-member-generated.html)<br/> 
[uberschemas.account-person-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.account-person-generated.html)<br/> [uberschemas.campaign-member-generated](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.campaign-member-generated.html)<br/> [uberschemas.public_sector.experienceevent-generated-public_sector](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.public_sector.experienceevent-generated-public_sector.html)<br/> [uberschemas.public_sector.profile-generated-public_sector](http://opensource.adobe.com/xdmVisualization/prod/master/uberschemas.public_sector.profile-generated-public_sector.html)<br/>

### Standard Core Components

#### behaviors

[behaviors.time-series](http://opensource.adobe.com/xdmVisualization/prod/master/behaviors.time-series.html)<br/> [behaviors.record](http://opensource.adobe.com/xdmVisualization/prod/master/behaviors.record.html)<br/>

#### classes

[classes.experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/classes.experienceevent.html)<br/> [classes.profile](http://opensource.adobe.com/xdmVisualization/prod/master/classes.profile.html)<br/> [classes.vehicle-product](http://opensource.adobe.com/xdmVisualization/prod/master/classes.vehicle-product.html)<br/> [classes.graphs](http://opensource.adobe.com/xdmVisualization/prod/master/classes.graphs.html)<br/> [classes.aircraft](http://opensource.adobe.com/xdmVisualization/prod/master/classes.aircraft.html)<br/> [classes.loan](http://opensource.adobe.com/xdmVisualization/prod/master/classes.loan.html)<br/> [classes.product](http://opensource.adobe.com/xdmVisualization/prod/master/classes.product.html)<br/> [classes.campaign](http://opensource.adobe.com/xdmVisualization/prod/master/classes.campaign.html)<br/> [classes.b2b.account](http://opensource.adobe.com/xdmVisualization/prod/master/classes.b2b.account.html)<br/>
[classes.b2b.account-person](http://opensource.adobe.com/xdmVisualization/prod/master/classes.b2b.account-person.html)<br/> [classes.b2b.marketing-list-member](http://opensource.adobe.com/xdmVisualization/prod/master/classes.b2b.marketing-list-member.html)<br/> [classes.b2b.opportunity](http://opensource.adobe.com/xdmVisualization/prod/master/classes.b2b.opportunity.html)<br/> [classes.b2b.opportunity-contact-role](http://opensource.adobe.com/xdmVisualization/prod/master/classes.b2b.opportunity-contact-role.html)<br/> [classes.b2b.marketing-list](http://opensource.adobe.com/xdmVisualization/prod/master/classes.b2b.marketing-list.html)<br/> [classes.b2b.opportunity-person](http://opensource.adobe.com/xdmVisualization/prod/master/classes.b2b.opportunity-person.html)<br/> [classes.lodging-product](http://opensource.adobe.com/xdmVisualization/prod/master/classes.lodging-product.html)<br/> [classes.campaign-member](http://opensource.adobe.com/xdmVisualization/prod/master/classes.campaign-member.html)<br/> [classes.segmentdefinition](http://opensource.adobe.com/xdmVisualization/prod/master/classes.segmentdefinition.html)<br/> [classes.promotion](http://opensource.adobe.com/xdmVisualization/prod/master/classes.promotion.html)<br/> [classes.summary_metrics](http://opensource.adobe.com/xdmVisualization/prod/master/classes.summary_metrics.html)<br/> [classes.restaurant](http://opensource.adobe.com/xdmVisualization/prod/master/classes.restaurant.html)<br/> [classes.fsi.atm](http://opensource.adobe.com/xdmVisualization/prod/master/classes.fsi.atm.html)<br/> [classes.fsi.policy](http://opensource.adobe.com/xdmVisualization/prod/master/classes.fsi.policy.html)<br/> [classes.fsi.branch](http://opensource.adobe.com/xdmVisualization/prod/master/classes.fsi.branch.html)<br/>

#### datatypes

[datatypes.device](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.device.html)<br/>
[datatypes.shipping](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.shipping.html)<br/> [datatypes.identityitem](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.identityitem.html)<br/> [datatypes.currency](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.currency.html)<br/> [datatypes.environment](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.environment.html)<br/> [datatypes.demographic.emailaddress](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.demographic.emailaddress.html)<br/> [datatypes.demographic.geo](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.demographic.geo.html)<br/> [datatypes.demographic.place](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.demographic.place.html)<br/> [datatypes.demographic.phonenumber](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.demographic.phonenumber.html)<br/> [datatypes.demographic.geounit](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.demographic.geounit.html)<br/> [datatypes.demographic.address](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.demographic.address.html)<br/> [datatypes.enduserids](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.enduserids.html)<br/> [datatypes.person.person](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.person.person.html)<br/> [datatypes.person.person-name](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.person.person-name.html)<br/> [datatypes.webinfo](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.webinfo.html)<br/> [datatypes.poi-detail](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.poi-detail.html)<br/> [datatypes.cart](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.cart.html)<br/> 
[datatypes.optinout-additional-details](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.optinout-additional-details.html)<br/> [datatypes.product](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.product.html)<br/> [datatypes.pushnotificationtoken](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.pushnotificationtoken.html)<br/> [datatypes.b2b.account-organization](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.b2b.account-organization.html)<br/> [datatypes.b2b.organization](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.b2b.organization.html)<br/> [datatypes.b2b.b2b-source](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.b2b.b2b-source.html)<br/> [datatypes.b2b.orgunit](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.b2b.orgunit.html)<br/> [datatypes.namespace](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.namespace.html)<br/> [datatypes.search](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.search.html)<br/> [datatypes.consent.consent-field](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.consent.consent-field.html)<br/> [datatypes.consent.marketing-field-subscription](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.consent.marketing-field-subscription.html)<br/> [datatypes.consent.personalization-field](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.consent.personalization-field.html)<br/> [datatypes.consent.consentstring](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.consent.consentstring.html)<br/> [datatypes.consent.consent-preferences](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.consent.consent-preferences.html)<br/> [datatypes.consent.marketing-field-basic](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.consent.marketing-field-basic.html)<br/> 
[datatypes.browserdetails](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.browserdetails.html)<br/> [datatypes.identity](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.identity.html)<br/> [datatypes.media](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.media.html)<br/> [datatypes.segmentidentity](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.segmentidentity.html)<br/> [datatypes.sitesearch](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.sitesearch.html)<br/> [datatypes.marketing.directmarketing-address](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.directmarketing-address.html)<br/> [datatypes.marketing.marketing](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.marketing.html)<br/> [datatypes.marketing.directmarketing-phonenumber](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.directmarketing-phonenumber.html)<br/> [datatypes.marketing.advertising-break](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.advertising-break.html)<br/> [datatypes.marketing.advertising](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.advertising.html)<br/> [datatypes.marketing.direct-marketing](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.direct-marketing.html)<br/> [datatypes.marketing.commerce](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.commerce.html)<br/> [datatypes.marketing.directmarketing-emailaddress](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.marketing.directmarketing-emailaddress.html)<br/> [datatypes.external.id3.audio](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.id3.audio.html)<br/> 
[datatypes.external.schema.geoshape](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.schema.geoshape.html)<br/> [datatypes.external.schema.geocircle](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.schema.geocircle.html)<br/> [datatypes.external.schema.geocoordinates](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.schema.geocoordinates.html)<br/> [datatypes.external.iptc.season](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.iptc.season.html)<br/> [datatypes.external.iptc.series](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.iptc.series.html)<br/> [datatypes.external.iptc.creator](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.iptc.creator.html)<br/> [datatypes.external.iptc.rating](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.iptc.rating.html)<br/> [datatypes.external.iptc.episode](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.external.iptc.episode.html)<br/> [datatypes.profilestitch](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.profilestitch.html)<br/> [datatypes.placecontext](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.placecontext.html)<br/> [datatypes.auditing.auditable](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.auditing.auditable.html)<br/> [datatypes.auditing.external-source-system-audit](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.auditing.external-source-system-audit.html)<br/> [datatypes.productlistitem](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.productlistitem.html)<br/> [datatypes.data.metricdefinition](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.metricdefinition.html)<br/> 
[datatypes.data.paymentitem](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.paymentitem.html)<br/> [datatypes.data.measure](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.measure.html)<br/> [datatypes.data.pageviews](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.pageviews.html)<br/> [datatypes.data.record-timeseries-events](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.record-timeseries-events.html)<br/> [datatypes.data.datasource](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.datasource.html)<br/> [datatypes.data.order](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.order.html)<br/> [datatypes.data.opens](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.opens.html)<br/> [datatypes.data.cart-abandons](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.data.cart-abandons.html)<br/> [datatypes.industry-verticals.comparisons](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.comparisons.html)<br/> [datatypes.industry-verticals.claim](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.claim.html)<br/> [datatypes.industry-verticals.implementationdetails](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.implementationdetails.html)<br/> [datatypes.industry-verticals.tool-usage](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.tool-usage.html)<br/> [datatypes.industry-verticals.impressions](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.impressions.html)<br/> [datatypes.industry-verticals.telecom-subscription](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.telecom-subscription.html)<br/> 
[datatypes.industry-verticals.form-applications](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.form-applications.html)<br/> [datatypes.industry-verticals.transaction](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.transaction.html)<br/> [datatypes.industry-verticals.file-transfer](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.file-transfer.html)<br/> [datatypes.industry-verticals.subscription](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.subscription.html)<br/> [datatypes.industry-verticals.selfservice](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.selfservice.html)<br/> [datatypes.industry-verticals.internal-site-search](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.internal-site-search.html)<br/> [datatypes.industry-verticals.financial-account](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.industry-verticals.financial-account.html)<br/> [datatypes.segmentmembershipitem](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.segmentmembershipitem.html)<br/> [datatypes.application](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.application.html)<br/> [datatypes.segmentmembership](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.segmentmembership.html)<br/> [datatypes.profilestitchidentity](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.profilestitchidentity.html)<br/> [datatypes.channels.channel](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.channels.channel.html)<br/> [datatypes.channels.application](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.channels.application.html)<br/> [datatypes.channels.phone](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.channels.phone.html)<br/> 
[datatypes.deprecated.linkclicks](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.linkclicks.html)<br/> [datatypes.deprecated.product-list-adds](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.product-list-adds.html)<br/> [datatypes.deprecated.product-list-reopens](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.product-list-reopens.html)<br/> [datatypes.deprecated.product-list-opens](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.product-list-opens.html)<br/> [datatypes.deprecated.user-complaints](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.user-complaints.html)<br/> [datatypes.deprecated.checkouts](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.checkouts.html)<br/> [datatypes.deprecated.poi-exits](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.poi-exits.html)<br/> [datatypes.deprecated.product-list-views](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.product-list-views.html)<br/> [datatypes.deprecated.unsubscriptions](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.unsubscriptions.html)<br/> [datatypes.deprecated.save-for-laters](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.save-for-laters.html)<br/> [datatypes.deprecated.bounces](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.bounces.html)<br/> [datatypes.deprecated.not-sent](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.not-sent.html)<br/> [datatypes.deprecated.product-views](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.product-views.html)<br/> [datatypes.deprecated.product-list-removals](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.product-list-removals.html)<br/> 
[datatypes.deprecated.impressions](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.impressions.html)<br/> [datatypes.deprecated.mirror-pages](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.mirror-pages.html)<br/> [datatypes.deprecated.non-deliverables](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.non-deliverables.html)<br/> [datatypes.deprecated.purchases](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.purchases.html)<br/> [datatypes.deprecated.sends](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.sends.html)<br/> [datatypes.deprecated.poi-entries](http://opensource.adobe.com/xdmVisualization/prod/master/datatypes.deprecated.poi-entries.html)<br/>

#### fieldgroups

[fieldgroups.opportunity.opportunity-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.opportunity.opportunity-details.html)<br/> [fieldgroups.segment-definition.segmentdefinition-expression](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.segment-definition.segmentdefinition-expression.html)<br/> [fieldgroups.shared.external-source-system-audit-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.shared.external-source-system-audit-details.html)<br/> [fieldgroups.shared.identitymap](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.shared.identitymap.html)<br/> [fieldgroups.shared.person-identifier](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.shared.person-identifier.html)<br/> [fieldgroups.shared.record-status](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.shared.record-status.html)<br/> [fieldgroups.product.product-category](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.product.product-category.html)<br/>
[fieldgroups.product.product-catalog](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.product.product-catalog.html)<br/> [fieldgroups.product.product-identifiers](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.product.product-identifiers.html)<br/> [fieldgroups.product.product-catalog-category](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.product.product-catalog-category.html)<br/> [fieldgroups.product.product-measurement](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.product.product-measurement.html)<br/> [fieldgroups.profile.b2b-person-components](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.b2b-person-components.html)<br/> [fieldgroups.profile.profile-personal-tax-profile-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-personal-tax-profile-details.html)<br/> [fieldgroups.profile.profile-travel-preferences](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-travel-preferences.html)<br/> [fieldgroups.profile.profile-segmentation](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-segmentation.html)<br/> [fieldgroups.profile.profile-test-profile](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-test-profile.html)<br/> [fieldgroups.profile.profile-work-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-work-details.html)<br/> [fieldgroups.profile.profile-personal-finance-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-personal-finance-details.html)<br/> [fieldgroups.profile.profile-consents](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-consents.html)<br/> 
[fieldgroups.profile.profile-preferences-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-preferences-details.html)<br/> [fieldgroups.profile.profile-directmarketing](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-directmarketing.html)<br/> [fieldgroups.profile.profile-loyalty-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-loyalty-details.html)<br/> [fieldgroups.profile.profile-personal-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-personal-details.html)<br/> [fieldgroups.profile.profile-phones](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-phones.html)<br/> [fieldgroups.profile.profile-user-account-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-user-account-details.html)<br/> [fieldgroups.profile.profile-push-notification-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-push-notification-details.html)<br/> [fieldgroups.profile.profile-subscriptions](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-subscriptions.html)<br/> [fieldgroups.profile.b2b-person-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.b2b-person-details.html)<br/> [fieldgroups.profile.profile-telecom-subscription](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-telecom-subscription.html)<br/> [fieldgroups.profile.profile-consentResults](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-consentResults.html)<br/> [fieldgroups.profile.profile-privacy](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-privacy.html)<br/> 
[fieldgroups.profile.profile-other-work-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-other-work-details.html)<br/> [fieldgroups.profile.profile-owning-entities](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-owning-entities.html)<br/> [fieldgroups.profile.profile-person-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.profile.profile-person-details.html)<br/> [fieldgroups.experience-event.experienceevent-implementation-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-implementation-details.html)<br/> [fieldgroups.experience-event.experienceevent-inappmessage-tracking](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-inappmessage-tracking.html)<br/> [fieldgroups.experience-event.experienceevent-segmentmembership](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-segmentmembership.html)<br/> [fieldgroups.experience-event.experienceevent-directmarketing](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-directmarketing.html)<br/> [fieldgroups.experience-event.experienceevent-profile-stitch](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-profile-stitch.html)<br/> [fieldgroups.experience-event.experienceevent-consumer](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-consumer.html)<br/> [fieldgroups.experience-event.experienceevent-commerce](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-commerce.html)<br/> 
[fieldgroups.experience-event.experienceevent-marketing](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-marketing.html)<br/> [fieldgroups.experience-event.experienceevent-technical-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-technical-details.html)<br/> [fieldgroups.experience-event.experienceevent-support-site-search](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-support-site-search.html)<br/> [fieldgroups.experience-event.experienceevent-social-network-usage-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-social-network-usage-details.html)<br/> [fieldgroups.experience-event.experienceevent-knowledge-base-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-knowledge-base-details.html)<br/> [fieldgroups.experience-event.experienceevent-site-search](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-site-search.html)<br/> [fieldgroups.experience-event.experienceevent-enduserids](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-enduserids.html)<br/> [fieldgroups.experience-event.experienceevent-service-payment-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-service-payment-details.html)<br/> [fieldgroups.experience-event.experienceevent-stitching](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-stitching.html)<br/> [fieldgroups.experience-event.experienceevent-pushtracking](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-pushtracking.html)<br/> 
[fieldgroups.experience-event.experienceevent-offer-impression-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-offer-impression-details.html)<br/> [fieldgroups.experience-event.experienceevent-file-upload-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-file-upload-details.html)<br/> [fieldgroups.experience-event.experienceevent-channel](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-channel.html)<br/> [fieldgroups.experience-event.experienceevent-web](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-web.html)<br/> [fieldgroups.experience-event.experienceevent-privacy](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-privacy.html)<br/> [fieldgroups.experience-event.experienceevent-search](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-search.html)<br/> [fieldgroups.experience-event.experienceevent-file-download-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-file-download-details.html)<br/> [fieldgroups.experience-event.experienceevent-advertising](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-advertising.html)<br/> [fieldgroups.experience-event.experienceevent-environment-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-environment-details.html)<br/> [fieldgroups.experience-event.experienceevent-media](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-media.html)<br/> 
[fieldgroups.experience-event.experienceevent-survey-response-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-survey-response-details.html)<br/> [fieldgroups.experience-event.experienceevent-application](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-application.html)<br/> [fieldgroups.experience-event.events.scorechanged](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.scorechanged.html)<br/> [fieldgroups.experience-event.events.linkclicks](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.linkclicks.html)<br/> [fieldgroups.experience-event.events.convert-lead](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.convert-lead.html)<br/> [fieldgroups.experience-event.events.emailsent](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.emailsent.html)<br/> [fieldgroups.experience-event.events.change-campaign-stream](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.change-campaign-stream.html)<br/> [fieldgroups.experience-event.events.add-to-list](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.add-to-list.html)<br/> [fieldgroups.experience-event.events.change-campaign-cadence](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.change-campaign-cadence.html)<br/> [fieldgroups.experience-event.events.opportunityupdated](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.opportunityupdated.html)<br/> [fieldgroups.experience-event.events.interesting-moment](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.interesting-moment.html)<br/> 
[fieldgroups.experience-event.events.formfilledout](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.formfilledout.html)<br/> [fieldgroups.experience-event.events.visit-webpage](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.visit-webpage.html)<br/> [fieldgroups.experience-event.events.callwebhook](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.callwebhook.html)<br/> [fieldgroups.experience-event.events.emailbounced](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.emailbounced.html)<br/> [fieldgroups.experience-event.events.revenueStageChanged](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.revenueStageChanged.html)<br/> [fieldgroups.experience-event.events.emailunsubscribed](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.emailunsubscribed.html)<br/> [fieldgroups.experience-event.events.new-lead](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.new-lead.html)<br/> [fieldgroups.experience-event.events.add-to-campaign](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.add-to-campaign.html)<br/> [fieldgroups.experience-event.events.remove-from-opportunity](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.remove-from-opportunity.html)<br/> [fieldgroups.experience-event.events.emailbouncedsoft](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.emailbouncedsoft.html)<br/> [fieldgroups.experience-event.events.remove-from-list](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.remove-from-list.html)<br/> 
[fieldgroups.experience-event.events.add-to-opportunity](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.add-to-opportunity.html)<br/> [fieldgroups.experience-event.events.merge-leads](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.merge-leads.html)<br/> [fieldgroups.experience-event.events.statusincampaignprogressionchanged](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.statusincampaignprogressionchanged.html)<br/> [fieldgroups.experience-event.events.emailopened](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.emailopened.html)<br/> [fieldgroups.experience-event.events.emailclicked](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.emailclicked.html)<br/> [fieldgroups.experience-event.events.emaildelivered](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.events.emaildelivered.html)<br/> [fieldgroups.experience-event.experienceevent-card-actions](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-card-actions.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-contact-request-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-contact-request-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-device-trade-in-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-device-trade-in-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-lodging-reservation](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-lodging-reservation.html)<br/> 
[fieldgroups.experience-event.industry-verticals.experienceevent-upsell-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-upsell-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-flight-reservation](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-flight-reservation.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-bill-pay-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-bill-pay-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-dining-reservation](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-dining-reservation.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-card-application-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-card-application-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-loan-application-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-loan-application-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-claim-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-claim-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-insurance-claim-process](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-insurance-claim-process.html)<br/> 
[fieldgroups.experience-event.industry-verticals.experienceevent-credit-limit-increase-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-credit-limit-increase-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-upgrade-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-upgrade-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-reservation-search](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-reservation-search.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-warranty-claim-process](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-warranty-claim-process.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-vehicle-reservation](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-vehicle-reservation.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-reservation-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-reservation-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-alert-impressions](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-alert-impressions.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-deposit-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-deposit-details.html)<br/> 
[fieldgroups.experience-event.industry-verticals.experienceevent-prescription-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-prescription-details.html)<br/> [fieldgroups.experience-event.industry-verticals.experienceevent-balance-transfers](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.industry-verticals.experienceevent-balance-transfers.html)<br/> [fieldgroups.experience-event.experienceevent-user-login-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-user-login-details.html)<br/> [fieldgroups.experience-event.experienceevent-quote-request-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.experience-event.experienceevent-quote-request-details.html)<br/> [fieldgroups.opportunity-contact-role.opportunity-contact-role-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.opportunity-contact-role.opportunity-contact-role-details.html)<br/> [fieldgroups.campaign.campaign-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.campaign.campaign-details.html)<br/> [fieldgroups.campaign-member.campaign-member-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.campaign-member.campaign-member-details.html)<br/> [fieldgroups.graphs.graph](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.graphs.graph.html)<br/> [fieldgroups.graphs.graph-edge](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.graphs.graph-edge.html)<br/> [fieldgroups.graphs.graph-node](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.graphs.graph-node.html)<br/> [fieldgroups.account.account-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.account.account-details.html)<br/> 
[fieldgroups.account.related-accounts](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.account.related-accounts.html)<br/> [fieldgroups.account-person.account-person-details](http://opensource.adobe.com/xdmVisualization/prod/master/fieldgroups.account-person.account-person-details.html)<br/>

### Extension Components

[adobe.experience.profile-edgeregion](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.profile-edgeregion.html)<br/> [adobe.experience.target-experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target-experienceevent.html)<br/> [adobe.experience.adcloud-experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud-experienceevent.html)<br/> [adobe.experience.offer-management.proposition-response-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.offer-management.proposition-response-detail.html)<br/> [adobe.experience.offer-management.offer-activity-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.offer-management.offer-activity-detail.html)<br/> [adobe.experience.offer-management.offer-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.offer-management.offer-detail.html)<br/> [adobe.experience.target.experienceevent-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.experienceevent-all.html)<br/> [adobe.experience.target.activity.preview](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.activity.preview.html)<br/> [adobe.experience.target.activity.activityevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.activity.activityevent.html)<br/> [adobe.experience.target.activity.activityevent.segmentevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.activity.activityevent.segmentevent.html)<br/>
[adobe.experience.target.activity.activityevent.optionevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.activity.activityevent.optionevent.html)<br/> [adobe.experience.target.activity.activityevent.context](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.activity.activityevent.context.html)<br/> [adobe.experience.target.experienceevent-shared](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.experienceevent-shared.html)<br/> [adobe.experience.target.activity](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.target.activity.html)<br/> [adobe.experience.adcloud.experienceevent-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.experienceevent-all.html)<br/> [adobe.experience.adcloud.adcloudsegment](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.adcloudsegment.html)<br/> [adobe.experience.adcloud.searchadvertising.account](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchadvertising.account.html)<br/> [adobe.experience.adcloud.searchadvertising.aggregateperformancebykeyword](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchadvertising.aggregateperformancebykeyword.html)<br/> [adobe.experience.adcloud.searchadvertising.aggregateperformancebyad](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchadvertising.aggregateperformancebyad.html)<br/> [adobe.experience.adcloud.searchadvertising.adgroup](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchadvertising.adgroup.html)<br/> [adobe.experience.adcloud.searchadvertising.portfolio](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchadvertising.portfolio.html)<br/> 
[adobe.experience.adcloud.searchadvertising.campaign](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchadvertising.campaign.html)<br/> [adobe.experience.adcloud.searchadvertising.aggregateperformancebyadbykeyword](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchadvertising.aggregateperformancebyadbykeyword.html)<br/> [adobe.experience.adcloud.profile-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.profile-all.html)<br/> [adobe.experience.adcloud.partnerdata](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.partnerdata.html)<br/> [adobe.experience.adcloud.creative](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.creative.html)<br/> [adobe.experience.adcloud.attributedconversionmodel](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.attributedconversionmodel.html)<br/> [adobe.experience.adcloud.segment](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.segment.html)<br/> [adobe.experience.adcloud.advertisement](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.advertisement.html)<br/> [adobe.experience.adcloud.fees](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.fees.html)<br/> [adobe.experience.adcloud.campaign](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.campaign.html)<br/> [adobe.experience.adcloud.creative-event](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.creative-event.html)<br/> [adobe.experience.adcloud.inventory](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.inventory.html)<br/> [adobe.experience.adcloud.conversiondetails](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.conversiondetails.html)<br/> 
[adobe.experience.adcloud.stitch](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.stitch.html)<br/> [adobe.experience.adcloud.addeliverydetails](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.addeliverydetails.html)<br/> [adobe.experience.adcloud.searchads.account](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.account.html)<br/> [adobe.experience.adcloud.searchads.aggregateperformancebykeyword](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.aggregateperformancebykeyword.html)<br/> [adobe.experience.adcloud.searchads.aggregateperformancebyad](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.aggregateperformancebyad.html)<br/> [adobe.experience.adcloud.searchads.adgroup](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.adgroup.html)<br/> [adobe.experience.adcloud.searchads.portfolio](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.portfolio.html)<br/> [adobe.experience.adcloud.searchads.campaign](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.campaign.html)<br/> [adobe.experience.adcloud.searchads.transactionproperties](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.transactionproperties.html)<br/> [adobe.experience.adcloud.searchads.platform](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.platform.html)<br/> [adobe.experience.adcloud.searchads.aggregateperformancebyadbykeyword](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.searchads.aggregateperformancebyadbykeyword.html)<br/> 
[adobe.experience.adcloud.syncedremarketingaudience](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.syncedremarketingaudience.html)<br/> [adobe.experience.adcloud.dsp.account](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.account.html)<br/> [adobe.experience.adcloud.dsp.placement](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.placement.html)<br/> [adobe.experience.adcloud.dsp.promotedvideo](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.promotedvideo.html)<br/> [adobe.experience.adcloud.dsp.advertisement](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.advertisement.html)<br/> [adobe.experience.adcloud.dsp.campaign](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.campaign.html)<br/> [adobe.experience.adcloud.dsp.site](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.site.html)<br/> [adobe.experience.adcloud.dsp.advertiser](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.advertiser.html)<br/> [adobe.experience.adcloud.dsp.package](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.dsp.package.html)<br/> [adobe.experience.adcloud.productdetails](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud.productdetails.html)<br/> [adobe.experience.consumer-experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.consumer-experienceevent.html)<br/> [adobe.experience.audiencemanager.experienceevent-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.audiencemanager.experienceevent-all.html)<br/> 
[adobe.experience.audiencemanager.segmentdefinition](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.audiencemanager.segmentdefinition.html)<br/> [adobe.experience.audiencemanager.segmentfolder](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.audiencemanager.segmentfolder.html)<br/> [adobe.experience.adcloud-profile](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.adcloud-profile.html)<br/> [adobe.experience.edge-autofilled-environment-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.edge-autofilled-environment-details.html)<br/> [adobe.experience.implementations-ext](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.implementations-ext.html)<br/> [adobe.experience.aam-experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.aam-experienceevent.html)<br/> [adobe.experience.aep-web-sdk-experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.aep-web-sdk-experienceevent.html)<br/> [adobe.experience.workfront.workobject](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.workfront.workobject.html)<br/> [adobe.experience.workfront.portfolio](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.workfront.portfolio.html)<br/> [adobe.experience.workfront.changeevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.workfront.changeevent.html)<br/> [adobe.experience.workfront.project](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.workfront.project.html)<br/> [adobe.experience.workfront.opTask](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.workfront.opTask.html)<br/> [adobe.experience.workfront.program](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.workfront.program.html)<br/> 
[adobe.experience.workfront.task](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.workfront.task.html)<br/> [adobe.experience.analytics-experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics-experienceevent.html)<br/> [adobe.experience.intelligentServices.profile-journeyai-engagementscores](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.intelligentServices.profile-journeyai-engagementscores.html)<br/> [adobe.experience.intelligentServices.profile-journeyai-sendtimeoptimization](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.intelligentServices.profile-journeyai-sendtimeoptimization.html)<br/> [adobe.experience.decisioning.profile-constraint-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.profile-constraint-details.html)<br/> [adobe.experience.decisioning.tag](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.tag.html)<br/> [adobe.experience.decisioning.criterion-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.criterion-details.html)<br/> [adobe.experience.decisioning.activity-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.activity-detail.html)<br/> [adobe.experience.decisioning.lifecycle-status](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.lifecycle-status.html)<br/> [adobe.experience.decisioning.proposition](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.proposition.html)<br/> [adobe.experience.decisioning.proposition-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.proposition-details.html)<br/> 
[adobe.experience.decisioning.experienceevent-proposition-interaction](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.experienceevent-proposition-interaction.html)<br/> [adobe.experience.decisioning.option-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.option-detail.html)<br/> [adobe.experience.decisioning.calendar-constraint-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.calendar-constraint-details.html)<br/> [adobe.experience.decisioning.calendar-constraints](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.calendar-constraints.html)<br/> [adobe.experience.decisioning.placement](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.placement.html)<br/> [adobe.experience.decisioning.tags](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.tags.html)<br/> [adobe.experience.decisioning.decisionevent-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.decisionevent-all.html)<br/> [adobe.experience.decisioning.frequency-capping-constraints](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.frequency-capping-constraints.html)<br/> [adobe.experience.decisioning.decision-scope](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.decision-scope.html)<br/> [adobe.experience.decisioning.scope-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.scope-details.html)<br/> [adobe.experience.decisioning.placement-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.placement-detail.html)<br/> [adobe.experience.decisioning.contents](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.contents.html)<br/> 
[adobe.experience.decisioning.personalized-content-option](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.personalized-content-option.html)<br/> [adobe.experience.decisioning.proposition-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.proposition-detail.html)<br/> [adobe.experience.decisioning.fallback-content-option](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.fallback-content-option.html)<br/> [adobe.experience.decisioning.proposition-metric-profile](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.proposition-metric-profile.html)<br/> [adobe.experience.decisioning.option-selection-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.option-selection-details.html)<br/> [adobe.experience.decisioning.ranking-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.ranking-details.html)<br/> [adobe.experience.decisioning.content-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.content-details.html)<br/> [adobe.experience.decisioning.proposition-interaction-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.proposition-interaction-detail.html)<br/> [adobe.experience.decisioning.interaction-measurement-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.interaction-measurement-details.html)<br/> [adobe.experience.decisioning.ranking](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.ranking.html)<br/> [adobe.experience.decisioning.decisionevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.decisionevent.html)<br/> 
[adobe.experience.decisioning.activity](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.activity.html)<br/> [adobe.experience.decisioning.strategy-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.strategy-details.html)<br/> [adobe.experience.decisioning.filter](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.filter.html)<br/> [adobe.experience.decisioning.criteria](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.criteria.html)<br/> [adobe.experience.decisioning.profile-constraints](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.profile-constraints.html)<br/> [adobe.experience.decisioning.option](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.option.html)<br/> [adobe.experience.decisioning.content-component-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.content-component-details.html)<br/> [adobe.experience.decisioning.proposition-metric-total](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.decisioning.proposition-metric-total.html)<br/> [adobe.experience.aep-mobile-lifecycle-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.aep-mobile-lifecycle-details.html)<br/> [adobe.experience.profile.profile-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.profile.profile-all.html)<br/> [adobe.experience.profile.experienceevent-shared](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.profile.experienceevent-shared.html)<br/> [adobe.experience.implementations](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.implementations.html)<br/> 
[adobe.experience.campaign.experienceevent-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-all.html)<br/> [adobe.experience.campaign.profile-snapshot](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.profile-snapshot.html)<br/> [adobe.experience.campaign.profile-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.profile-all.html)<br/> [adobe.experience.campaign.experienceevent-profile-push-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-push-details.html)<br/> [adobe.experience.campaign.notificationsubscriptiontarget](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.notificationsubscriptiontarget.html)<br/> [adobe.experience.campaign.mutationevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.mutationevent.html)<br/> [adobe.experience.campaign.experienceevent-campaign-delivery-log](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-campaign-delivery-log.html)<br/> [adobe.experience.campaign.experienceevent-profile-owning-entities](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-owning-entities.html)<br/> [adobe.experience.campaign.offer-response-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.offer-response-detail.html)<br/> [adobe.experience.campaign.feedbackevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.feedbackevent.html)<br/> [adobe.experience.campaign.journeyaifatigue](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.journeyaifatigue.html)<br/> 
[adobe.experience.campaign.experienceevent-profile-subscriptions](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-subscriptions.html)<br/> [adobe.experience.campaign.experienceevent-campaign-tracking-log](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-campaign-tracking-log.html)<br/> [adobe.experience.campaign.offer-proposition-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.offer-proposition-detail.html)<br/> [adobe.experience.campaign.experienceevent-profile-preferences-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-preferences-details.html)<br/> [adobe.experience.campaign.journeyaiscores](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.journeyaiscores.html)<br/> [adobe.experience.campaign.offer-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.offer-detail.html)<br/> [adobe.experience.campaign.experienceevent-profile-personal-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-personal-details.html)<br/> [adobe.experience.campaign.notificationunsubscriptiondetails](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.notificationunsubscriptiondetails.html)<br/> [adobe.experience.campaign.orchestration.orchestrationdetails](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.orchestration.orchestrationdetails.html)<br/> [adobe.experience.campaign.orchestration.reportingeventmetrics](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.orchestration.reportingeventmetrics.html)<br/> 
[adobe.experience.campaign.orchestration.experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.orchestration.experienceevent.html)<br/> [adobe.experience.campaign.orchestration.reportingevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.orchestration.reportingevent.html)<br/> [adobe.experience.campaign.orchestration.reportingexternalevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.orchestration.reportingexternalevent.html)<br/> [adobe.experience.campaign.orchestration.eventid](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.orchestration.eventid.html)<br/> [adobe.experience.campaign.experienceevent-profile-work-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-work-details.html)<br/> [adobe.experience.campaign.experienceevent-profile-test-profile](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-test-profile.html)<br/> [adobe.experience.campaign.address](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.address.html)<br/> [adobe.experience.campaign.notificationsubscription](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.notificationsubscription.html)<br/> [adobe.experience.campaign.experienceevent-profile-segmentation](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign.experienceevent-profile-segmentation.html)<br/> [adobe.experience.campaign-experienceevent](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.campaign-experienceevent.html)<br/> [adobe.experience.customerJourneyManagement.offers](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.offers.html)<br/> 
[adobe.experience.customerJourneyManagement.secondary-recipient-detail](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.secondary-recipient-detail.html)<br/> [adobe.experience.customerJourneyManagement.message-delivery-feedback](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.message-delivery-feedback.html)<br/> [adobe.experience.customerJourneyManagement.profile-counters-v2](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.profile-counters-v2.html)<br/> [adobe.experience.customerJourneyManagement.messageprofile](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.messageprofile.html)<br/> [adobe.experience.customerJourneyManagement.message-interaction](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.message-interaction.html)<br/> [adobe.experience.customerJourneyManagement.processing-flow-timeline](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.processing-flow-timeline.html)<br/> [adobe.experience.customerJourneyManagement.messageexecution](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.customerJourneyManagement.messageexecution.html)<br/> [adobe.experience.mobile-lifecycle-details-test](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.mobile-lifecycle-details-test.html)<br/> [adobe.experience.experienceevent-edgeregion](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.experienceevent-edgeregion.html)<br/> [adobe.experience.journeyOrchestration.journeyOrchestrationIdentity](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.journeyOrchestrationIdentity.html)<br/> 
[adobe.experience.journeyOrchestration.journeyOrchestrationDebugInfo](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.journeyOrchestrationDebugInfo.html)<br/> [adobe.experience.journeyOrchestration.stepEvents.journeyStepEventClass](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.stepEvents.journeyStepEventClass.html)<br/> [adobe.experience.journeyOrchestration.stepEvents.journeyStepEventDataFetchFieldsMixin](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.stepEvents.journeyStepEventDataFetchFieldsMixin.html)<br/> [adobe.experience.journeyOrchestration.stepEvents.journeyStepEventIdentityFieldsMixin](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.stepEvents.journeyStepEventIdentityFieldsMixin.html)<br/> [adobe.experience.journeyOrchestration.stepEvents.journeyClass](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.stepEvents.journeyClass.html)<br/> [adobe.experience.journeyOrchestration.stepEvents.journeyStepEventCommonFieldsMixin](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.stepEvents.journeyStepEventCommonFieldsMixin.html)<br/> [adobe.experience.journeyOrchestration.stepEvents.journeyStepEventActionExecutionFieldsMixin](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.stepEvents.journeyStepEventActionExecutionFieldsMixin.html)<br/> [adobe.experience.journeyOrchestration.stepEvents.journeyStepEventJourneyFieldsMixin](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.stepEvents.journeyStepEventJourneyFieldsMixin.html)<br/> 
[adobe.experience.journeyOrchestration.journeyOrchestrationServiceEventsStateMachine](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.journeyOrchestrationServiceEventsStateMachine.html)<br/> [adobe.experience.journeyOrchestration.journeyOrchestrationClassification](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.journeyOrchestrationClassification.html)<br/> [adobe.experience.journeyOrchestration.journeyOrchestrationJourney](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.journeyOrchestrationJourney.html)<br/> [adobe.experience.journeyOrchestration.journeyOrchestrationServiceEventsDispatcher](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.journeyOrchestrationServiceEventsDispatcher.html)<br/> [adobe.experience.journeyOrchestration.journeyOrchestrationServiceEventsSegmentExportJob](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.journeyOrchestration.journeyOrchestrationServiceEventsSegmentExportJob.html)<br/> [adobe.experience.analytics.keyvalue](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.keyvalue.html)<br/> [adobe.experience.analytics.experienceevent-all](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.experienceevent-all.html)<br/> [adobe.experience.analytics.events](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.events.html)<br/> [adobe.experience.analytics.keyedlist](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.keyedlist.html)<br/> [adobe.experience.analytics.evars](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.evars.html)<br/> 
[adobe.experience.analytics.listdetails](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.listdetails.html)<br/> [adobe.experience.analytics.commerce](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.commerce.html)<br/> [adobe.experience.analytics.productlistitem](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.experience.analytics.productlistitem.html)<br/> [adobe.b2b.bizible.bizible-account-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.b2b.bizible.bizible-account-details.html)<br/> [adobe.b2b.bizible.bizible-opportunity-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.b2b.bizible.bizible-opportunity-details.html)<br/> [adobe.b2b.bizible.bizible-person-details](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.b2b.bizible.bizible-person-details.html)<br/> [adobe.b2b.marketo.marketo-web-url](http://opensource.adobe.com/xdmVisualization/prod/master/adobe.b2b.marketo.marketo-web-url.html)<br/> [airship.airship-event](http://opensource.adobe.com/xdmVisualization/prod/master/airship.airship-event.html)<br/> [facebook.facebook-conversion-event](http://opensource.adobe.com/xdmVisualization/prod/master/facebook.facebook-conversion-event.html)<br/>
159.556468
257
0.84374
yue_Hant
0.319287
17dc366c431b6453ead89afd4b4bdf6ccd6c3509
945
md
Markdown
README.md
thainetizen/policybin
d9a126f802e2bbcb3a3e3efc5ae8ca5da6a65dfd
[ "CC0-1.0" ]
6
2020-03-22T08:04:31.000Z
2020-03-24T16:17:55.000Z
README.md
thainetizen/policybin
d9a126f802e2bbcb3a3e3efc5ae8ca5da6a65dfd
[ "CC0-1.0" ]
28
2020-03-21T20:06:59.000Z
2020-04-14T11:49:50.000Z
README.md
thainetizen/policybin
d9a126f802e2bbcb3a3e3efc5ae8ca5da6a65dfd
[ "CC0-1.0" ]
null
null
null
# Policybin

Collecting interesting policies in one place, so that people can find them and look into them more easily. Initially focused on 4 topics:

- Policies and relief measures related to COVID-19
- Policies and measures on personal data and privacy
- Policies and measures on expression and the media
- Policies and measures on workers' self-organization

## How to submit an interesting policy or measure

- Post the policy on the [Issues](https://github.com/thainetizen/policybin/issues) page by clicking the green "New Issue" button on the right of the screen
- Write a short summary of what the policy or measure is, who issued it, who it targets, when it takes effect and until when, etc.
- Add reference links, which may be news coverage, a government press release, or the announcement or law itself
- Pick Labels (to the right of the text box) for the relevant categories, e.g. a `มาตรการจากรัฐ` (government measure) that `ช่วยฟรีแลนซ์` (helps freelancers) during `covid19`

## What happens after collecting

- Once there is enough material, it should be possible to compile summaries from the various sources into an overview of who is doing what on each topic
49.736842
134
0.739683
tha_Thai
0.999931
17dc48144a9a680521e213be0e58a68d261715ea
1,750
md
Markdown
apps/prolb/readme.md
mengruts/azurehpc
cd5fd9f13e39979d870ebfc3baeeb4ce1463cf9c
[ "MIT" ]
76
2019-07-22T20:31:46.000Z
2022-03-27T19:48:15.000Z
apps/prolb/readme.md
husiana/azurehpc
1d7a3cb7c0d31f6b18ac0c8153484c92d3e3fddc
[ "MIT" ]
147
2019-07-31T16:11:20.000Z
2021-12-07T15:43:44.000Z
apps/prolb/readme.md
husiana/azurehpc
1d7a3cb7c0d31f6b18ac0c8153484c92d3e3fddc
[ "MIT" ]
55
2019-07-22T21:59:44.000Z
2021-12-20T13:46:24.000Z
## Install and run PROLB

> Note: This version has been tested on HC44rs and HB60rs. When running on other SKUs, please update the `prolb.sh` script to adapt the memory per core to be used.

## Prerequisites

The cluster is built with the desired configuration for networking, storage, compute, etc. You can see the tutorial or examples folder in this repo for how to set this up.

Dependencies for the binary version:

* HPCX with C++ bindings
* OpenMPI shared library symbolic links (see `runtime_prolb.sh`)

## Installation

First, upload the install packages and cases to your favourite blob storage account.

> NOTE: Provide `INSTALL_TAR`, `TAR_SAS_URL`, `LICENSE_PORT_IP` and `APP_VERSION` as parameters to **$azhpc_dir/apps/prolb/install_prolb.sh**.

Then copy the apps directory to the cluster. The `azhpc-scp` command can be used to do this:

```
azhpc-scp -r $azhpc_dir/apps hpcuser@headnode:.
```

> Alternatively, you can check out the **azurehpc** repository, but you will need to update the paths according to where you put it.

Then run the installer:

```
azhpc-run -u hpcuser apps/prolb/install_prolb.sh
```

> Note: This will install into `/apps`.

Finally, if `runtime_prolb.sh` is not part of your compute node installation, run that script on all compute nodes as follows:

```
azhpc-run -n compute -u hpcuser apps/prolb/runtime_prolb.sh
```

## Connect to the headnode

```
azhpc-connect -u hpcuser headnode
```

## Running

Copy the case file to **/data/prolb**.

Now, for example on 8 HC44 nodes, you can run as follows:

```
CASE=mycasename
case_dir=/data/prolb/working
mkdir -p $case_dir
qsub -f -k oe -j oe -l select=8:ncpus=44:mpiprocs=44,place=scatter:excl -N prolb $azhpc_dir/apps/prolb/prolb $CASE $case_dir [version]
```
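As a sanity check on the PBS `select` statement in the `qsub` line: each chunk requests one HC44rs node with 44 cores and 44 MPI ranks per node, so 8 chunks launch 352 ranks in total. A minimal shell sketch of that arithmetic (illustrative only, not part of PROLB itself; the node/rank counts are the example values from above):

```shell
#!/bin/sh
# Rank-count sketch for the PBS resource request: select=8:ncpus=44:mpiprocs=44
NODES=8        # number of HC44rs nodes requested (the "select" count)
MPIPROCS=44    # MPI ranks per node (one per physical core on HC44rs)
TOTAL_RANKS=$((NODES * MPIPROCS))
echo "select=${NODES}:ncpus=${MPIPROCS}:mpiprocs=${MPIPROCS} -> ${TOTAL_RANKS} MPI ranks"
# prints: select=8:ncpus=44:mpiprocs=44 -> 352 MPI ranks
```

To run on a different node count or SKU, adjust `NODES` and `MPIPROCS` (and the `select` line) accordingly; `place=scatter:excl` keeps one chunk per node with exclusive access.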
28.225806
166
0.744571
eng_Latn
0.984614
17dc4c44924311d4c27de0f69d50495c369f0209
34
md
Markdown
README.md
arafatpk/arafatpk.github.io
611c2714abaf491bf67d72482dcff147831879d0
[ "Apache-2.0" ]
1
2018-03-11T17:36:21.000Z
2018-03-11T17:36:21.000Z
README.md
arafatpk/arafatpk.github.io
611c2714abaf491bf67d72482dcff147831879d0
[ "Apache-2.0" ]
null
null
null
README.md
arafatpk/arafatpk.github.io
611c2714abaf491bf67d72482dcff147831879d0
[ "Apache-2.0" ]
null
null
null
# arafatpk.github.io

Testing Site
11.333333
20
0.794118
nob_Latn
0.088349
17dcda605910c4fe33b7856cb13b078c3443e166
297
md
Markdown
README.md
protosam/flow
e91c511ec39204ef09a8a8662fde47839794b1e7
[ "Apache-2.0" ]
3
2022-01-11T01:41:48.000Z
2022-01-11T06:55:12.000Z
README.md
protosam/flow
e91c511ec39204ef09a8a8662fde47839794b1e7
[ "Apache-2.0" ]
4
2022-01-10T21:57:18.000Z
2022-01-10T21:59:00.000Z
README.md
protosam/flow
e91c511ec39204ef09a8a8662fde47839794b1e7
[ "Apache-2.0" ]
null
null
null
# flow

A Directed Acyclic Graph (DAG) inspired task-queuing API. The examples directory contains samples.

## Status

This project's API will not have any method changes. The only changes will be additional functionality and bug fixes.

## Contributing

PRs are welcome and must be Apache 2.0 licensed.
24.75
61
0.784512
eng_Latn
0.998083
17dd76507eb7ec2072a5265d995125881054ad9e
1,556
md
Markdown
schemas/readme.md
bburns/cppagent
c1891c631465ebc9b63a4b3c627727ca3da14ee8
[ "Apache-2.0" ]
null
null
null
schemas/readme.md
bburns/cppagent
c1891c631465ebc9b63a4b3c627727ca3da14ee8
[ "Apache-2.0" ]
null
null
null
schemas/readme.md
bburns/cppagent
c1891c631465ebc9b63a4b3c627727ca3da14ee8
[ "Apache-2.0" ]
null
null
null
MTConnect Schema Files Versions 1.0 - 2.0
===

Files are named with respect to the section of the standard to which they apply:

```
MTConnect<Part>_<Version>[_<XSD Schema Version>].xsd
```

The files included in this directory are as follows:

* Version 1.0
* Version 1.1
* Version 1.2
* Version 1.3 (with XSD 1.0 compatible files)
* Version 1.4 (with XSD 1.0 compatible files)
* Version 1.5 (with XSD 1.0 compatible files)
* Version 1.6 (with XSD 1.0 compatible files)
* Version 1.7 (with XSD 1.0 compatible files)
* Version 1.8 (with XSD 1.0 compatible files)
* Version 2.0 (with XSD 1.0 compatible files)

The schemas are replicated to http://schemas.mtconnect.org

Microsoft XML and many legacy XML parsers are not current with the XML Schema 1.1 standard accepted in 2012 by the W3C. The MTConnect standard takes advantage of the latest advances in extensibility to add additional properties in a regulated manner, using the xs:any tag and specifying that the tags must come from another namespace. We are also using schema versioning from XML Schema 1.1 and will be creating new schemas that use these new features as we move into 1.4 and beyond.

There are many XML parsers that now correctly handle XML Schema 1.1, namely RaptorXML from Altova and Xerces from Apache. If you must use Microsoft XML with validation turned on, then you must use the ...1.3_1.0.xsd files in this directory. The 1.3_1.0.xsd files will support the MSXML parser and validation, but will not support the advanced extensibility. If you want both, talk with Microsoft about updating their parser.
55.571429
385
0.766067
eng_Latn
0.996409
17df07820f5414168a2db77cc417b3f9bfd200fd
131
md
Markdown
.github/ISSUE_TEMPLATE/SECURITY.md
workflow-actions/.github
c0ebce5b8ea9c7276139564a8dfabf62d2c16a1b
[ "MIT" ]
8
2021-04-04T13:59:37.000Z
2022-03-01T20:18:53.000Z
.github/ISSUE_TEMPLATE/SECURITY.md
workflow-actions/.github
c0ebce5b8ea9c7276139564a8dfabf62d2c16a1b
[ "MIT" ]
105
2021-05-04T06:03:35.000Z
2022-03-31T20:12:52.000Z
.github/ISSUE_TEMPLATE/SECURITY.md
workflow-actions/.github
c0ebce5b8ea9c7276139564a8dfabf62d2c16a1b
[ "MIT" ]
10
2021-04-04T13:59:42.000Z
2022-02-21T08:10:05.000Z
# Reporting a Vulnerability

If you discover a potential security issue in this project, we ask that you [notify us](../issues/new).
32.75
101
0.770992
eng_Latn
0.998175
17e020d0503e0b79939af0a097ae12c8423eaeb8
4,450
md
Markdown
README.md
gtrailway/Cleaning-App
5925075cc5ad5d366987b6926bc20e893e0e229f
[ "MIT" ]
1
2021-11-08T14:18:44.000Z
2021-11-08T14:18:44.000Z
README.md
gtrailway/Cleaning-App
5925075cc5ad5d366987b6926bc20e893e0e229f
[ "MIT" ]
null
null
null
README.md
gtrailway/Cleaning-App
5925075cc5ad5d366987b6926bc20e893e0e229f
[ "MIT" ]
null
null
null
# Cleaning-App <table style="border-collapse: collapse; width: 100%; height: 898px;" border="1"> <tbody> <tr style="height: 55px;"> <td style="width: 100%; height: 55px;"><img src="https://www.complaintsdepartment.co.uk/image/1251/600/govia-thameslink-railway.jpg" alt="https://www.complaintsdepartment.co.uk/image/1251/600/govia-thameslink-railway.jpg" width="127" height="71" /></td> </tr> <tr style="height: 375px;"> <td style="width: 100%; height: 375px;"> <p><strong>Train Cleaning Input App:</strong></p> <p>An app for cleaning staff to input cleaning dates and times against set locations or trains. Based off a simple SharePoint list which is connected through the Power App data source.</p> <p> INSTALL THE .zip FILE ON POWER APPS - all guidance is within the Canvas app itself follow the simple steps to connect to a data course :)</p> <p>This then by Power Automate picked up and creates an Item in a separate SharePoint List which holds all historical records.</p> <p><strong>Key Features:</strong></p> <ul> <li><strong>Cleaning Input:</strong> Easily create new records with a click of a button as the Time and Date is automatically recorded.</li> <li><strong>Historical Records:</strong> View all historical clean records and search by train type.</li> <li><strong>Clean Types:</strong> Choose between 2 different clean types either Berth Clean or Viricide application.</li> </ul> <p><strong>Systems Used:</strong></p> <ul> <li><strong>Power Apps</strong></li> <li><strong>Power Automate</strong></li> <li><strong>SharePoint</strong></li> </ul> <p><strong>Connectors Used:</strong></p> <ul> <li><strong>Power Apps &amp; SharePoint</strong></li> </ul> </td> </tr> <tr> <td style="width: 100%;"> <p><strong>Cleaning Records App:</strong></p> <p>An app for staff to view cleaning dates and times against set locations or trains. 
It is based on a simple SharePoint list which is connected through the Power Apps data source.</p> <p>Recognising the user login, it records who input the clean; this is stored in SharePoint along with the timestamp.</p> <p>Power Automate then picks this up and creates an item in a separate SharePoint list which holds all historical records.</p> <p><strong>Key Features:</strong></p> <ul> <li><strong>Historical Records:</strong> View all historical clean records and search by train type.</li> <li><strong>Clean Types:</strong> Choose between two clean types: Berth Clean or Viricide application.</li> </ul> <p><strong>Systems Used:</strong></p> <ul> <li><strong>Power Apps</strong></li> <li><strong>Power Automate</strong></li> <li><strong>SharePoint</strong></li> </ul> <p><strong>Connectors Used:</strong></p> <ul> <li><strong>Power Apps &amp; SharePoint</strong></li> </ul> </td> </tr> <tr style="height: 46px;"> <td style="width: 100%; height: 46px;"> <p><strong>Legal Notice</strong></p> </td> </tr> <tr style="height: 422px;"> <td style="width: 100%; height: 422px;"> <p>This app template is provided under the&nbsp;<a href="https://github.com/gtrailway/Cleaning-App/blob/main/LICENSE">MIT License</a>&nbsp;terms. In addition to these terms, by using this app template you agree to the following:</p> <ul> <li> <p>You, not Govia Thameslink Railway, will license the use of your app to users or organizations.</p> </li> <li> <p>This app template is not intended to substitute your own regulatory due diligence or make you or your app compliant with respect to any applicable regulations, including but not limited to privacy, healthcare, employment, or financial regulations.</p> </li> <li> <p>You are responsible for complying with all applicable privacy and security regulations including those related to use, collection and handling of any personal data by your app. 
This includes complying with all internal privacy and security policies of your organization if your app is developed to be sideloaded internally within your organization. Where applicable, you may be responsible for data-related incidents or data subject requests for data collected through your app.</p> </li> <li> <p>Any trademarks or registered trademarks of Govia Thameslink Railway in the United Kingdom and/or other countries and logos included in this repository are the property of Govia Thameslink Railway, and the license for this project does not grant you rights to use any Govia Thameslink Railway names, logos or trademarks outside of this repository.</p> </li> </ul> <p>&nbsp;</p> </td> </tr> </tbody> </table>
54.938272
485
0.74427
eng_Latn
0.987447
17e06113bc0d40d9de73502aa69b7fb79f8ed3fd
3,148
md
Markdown
doc/distributed-system/zookeeper/zookeeper-get-started.md
yankj12/blog
53569384524cebd1f4ec9fc5a71c636f3ca1b23d
[ "MIT" ]
null
null
null
doc/distributed-system/zookeeper/zookeeper-get-started.md
yankj12/blog
53569384524cebd1f4ec9fc5a71c636f3ca1b23d
[ "MIT" ]
73
2017-12-11T15:59:34.000Z
2020-12-17T02:12:46.000Z
doc/distributed-system/zookeeper/zookeeper-get-started.md
yankj12/blog
53569384524cebd1f4ec9fc5a71c636f3ca1b23d
[ "MIT" ]
2
2019-01-02T02:15:03.000Z
2019-07-16T09:07:58.000Z
# Zookeeper Get Started This article is based on the [ZooKeeper Getting Started Guide](http://zookeeper.apache.org/doc/r3.4.14/zookeeperStarted.html) ## Single-node ZooKeeper ### Starting a single node Extract the archive and edit the configuration file `conf/zoo.cfg` ```zoo.cfg tickTime=2000 dataDir=/var/lib/zookeeper clientPort=2181 ``` Start the server. On Linux: ```shell bin/zkServer.sh start ``` On Windows: ```CMD bin>zkServer.cmd ``` ### Connecting a client and basic commands ```Shell bin/zkCli.sh -server 127.0.0.1:2181 ``` Once connected, the console shows the following ```console Connecting to localhost:2181 log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper). log4j:WARN Please initialize the log4j system properly. Welcome to ZooKeeper! JLine support is enabled [zkshell: 0] ``` Common commands: `ls` lists the children of a node ```Shell [zkshell: 8] ls / [zookeeper] ``` `create /zk_test my_data` creates a znode and associates the string "my_data" with it ```Shell [zkshell: 9] create /zk_test my_data Created /zk_test ``` Use the `ls /` command to view the directory contents again ```Shell [zkshell: 11] ls / [zookeeper, zk_test] ``` Note that the zk_test node has been created. Verify the data associated with the znode using the `get` command ```Shell [zkshell: 12] get /zk_test my_data cZxid = 5 ctime = Fri Jun 05 13:57:06 PDT 2009 mZxid = 5 mtime = Fri Jun 05 13:57:06 PDT 2009 pZxid = 5 cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0 dataLength = 7 numChildren = 0 ``` Modify the data associated with the znode using the `set` command ```Shell [zkshell: 14] set /zk_test junk cZxid = 5 ctime = Fri Jun 05 13:57:06 PDT 2009 mZxid = 6 mtime = Fri Jun 05 14:01:52 PDT 2009 pZxid = 5 cversion = 0 dataVersion = 1 aclVersion = 0 ephemeralOwner = 0 dataLength = 4 numChildren = 0 [zkshell: 15] get /zk_test junk cZxid = 5 ctime = Fri Jun 05 13:57:06 PDT 2009 mZxid = 6 mtime = Fri Jun 05 14:01:52 PDT 2009 pZxid = 5 cversion = 0 dataVersion = 1 aclVersion = 0 ephemeralOwner = 0 dataLength = 4 numChildren = 0 ``` `delete` removes a node ```Shell [zkshell: 16] delete /zk_test [zkshell: 17] ls / [zookeeper] [zkshell: 18] ``` To learn and explore more, see the [Programmer's Guide](http://zookeeper.apache.org/doc/r3.4.14/zookeeperProgrammers.html). 
## Common errors ### ZooKeeper startup error: Invalid arguments, exiting abnormally Solution: On Windows, ZooKeeper is started with the zkServer.cmd script in bin. Entering the command ```CMD D:\WORK\Project-Test\zookeeper-3.4.11\bin>zkServer.cmd start ``` fails to start the server, producing the following error ```Java 2018-01-29 19:37:33,793 [myid:] - ERROR [main:ZooKeeperServerMain@57] - Invalid arguments, exiting abnormally java.lang.NumberFormatException: For input string: "D:\WORK\Project-Test\zookeeper-3.4.11\bin\..\conf\zoo.cfg" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:580) at java.lang.Integer.parseInt(Integer.java:615) at org.apache.zookeeper.server.ServerConfig.parse(ServerConfig.java:61) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86) at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:55) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:119) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81) ``` Cause: incorrect usage. The start argument is for the Linux .sh script; on Windows, just run the script name ```CMD D:\WORK\Project-Test\zookeeper-3.4.11\bin>zkServer.cmd ```
19.312883
110
0.736023
yue_Hant
0.219614
17e0ea8d4247243a2d350ed2f367f5e649eacac2
4,478
md
Markdown
en/docs/Learn/APISecurity/OAuth2/saving-access-tokens-in-separate-tables.md
HiranyaKavishani/docs-apim
c1854f5255e8a464b19d6ed784f5ca814ffaa193
[ "Apache-2.0" ]
null
null
null
en/docs/Learn/APISecurity/OAuth2/saving-access-tokens-in-separate-tables.md
HiranyaKavishani/docs-apim
c1854f5255e8a464b19d6ed784f5ca814ffaa193
[ "Apache-2.0" ]
null
null
null
en/docs/Learn/APISecurity/OAuth2/saving-access-tokens-in-separate-tables.md
HiranyaKavishani/docs-apim
c1854f5255e8a464b19d6ed784f5ca814ffaa193
[ "Apache-2.0" ]
null
null
null
# Saving Access Tokens in Separate Tables !!! warning This feature has been deprecated as it is redundant. Although it was introduced as a security measure, a compromise in the database would result in a compromise in all its tables. You can configure the API Manager instances to store access tokens in different tables according to their user store domains. This is referred to as **user token partitioning** and it ensures better security when there are multiple user stores configured in the system. To configure user stores other than the default one, see Configuring Secondary User Stores. The following topics explain how to enable user token partitioning: - [Enabling assertions](#SavingAccessTokensinSeparateTables-EnablingassertionsEnableAssertions) - [Storing keys in different tables](#SavingAccessTokensinSeparateTables-Storingkeysindifferenttables) #### Enabling assertions You use assertions to embed parameters into tokens and generate a strong access token. You can also use these parameters later for other processing. At the moment, the API Manager only supports UserName as an assertion. By default, assertions are set to `false` in the `<APIM_HOME>/repository/conf/identity/identity.xml` file. To enable it, set the `<UserName>` element to `true`. You can add a user name to an access token when generating the key, and verify it by decoding the retrieved access token with Base64. **&lt;APIM\_HOME&gt;/repository/conf/identity/identity.xml** ``` xml <EnableAssertions> <UserName>true</UserName> </EnableAssertions> ``` #### Storing keys in different tables 1. If the `<UserName>` assertion is enabled, set the `<EnableAccessTokenPartitioning>` element in the `<APIM_HOME>/repository/conf/identity/identity.xml` file to `true`. It determines whether you want to store the keys in different tables or not. ``` xml <EnableAccessTokenPartitioning>true</EnableAccessTokenPartitioning> ``` 2. Set the user store domain names and mappings to new table names. 
For example, - if userId = foo.com/admin where 'foo.com' is the user store domain name, then a 'mapping:domain' combo can be defined as 'A:foo.com' - 'A' is the mapping for the table that stores tokens relevant to users coming from the 'foo.com' user store In this case, the actual table name is `IDN_OAUTH2_ACCESS_TOKEN_A` . We use a mapping simply to prevent any issues caused by lengthy table names when lengthy domain names are used. You must manually create the tables you are going to use to store the access tokens in each user store (i.e., manually create the tables `IDN_OAUTH2_ACCESS_TOKEN_A` and `IDN_OAUTH2_ACCESS_TOKEN_B` according to the following defined domain mapping). This table structure is similar to the `IDN_OAUTH2_ACCESS_TOKEN` table defined in the api-manager dbscript, which is inside the `<APIM_HOME>/dbscripts/apimgt` directory. You can provide multiple mappings separated by commas as follows. Note that the domain names need to be specified in upper case. ``` html/xml <AccessTokenPartitioningDomains>A:FOO.COM, B:BAR.COM</AccessTokenPartitioningDomains> ``` 3. According to the information given above, change the `<OAuth>` element in the `<APIM_HOME>/repository/conf/identity/identity.xml` file as shown in the following example: **&lt;APIM\_HOME&gt;/repository/conf/identity/identity.xml** ``` xml <!-- Assertions can be used to embed parameters into access token.--> <EnableAssertions> <UserName>true</UserName> </EnableAssertions> <!-- This should be set to true when using multiple user stores and keys should saved into different tables according to the user store. By default all the application keys are saved in to the same table. UserName Assertion should be 'true' to use this.--> <AccessTokenPartitioning> <EnableAccessTokenPartitioning>true</EnableAccessTokenPartitioning> <!-- user store domain names and mappings to new table names. 
e.g.: if you provide 'A:foo.com', foo.com should be the user store domain name and 'A' represents the relevant mapping of the token-storing table, i.e. tokens relevant to the users coming from the foo.com user store will be added to a table called IDN_OAUTH2_ACCESS_TOKEN_A. --> <AccessTokenPartitioningDomains>A:foo.com, B:bar.com</AccessTokenPartitioningDomains> </AccessTokenPartitioning> ```
65.852941
603
0.751675
eng_Latn
0.987353
17e1d77e2a7ffd0b9f99b5c6a369f95699605199
1,551
md
Markdown
docs/framework/unmanaged-api/metadata/imetadataimport-getuserstring-method.md
michha/docs
08f75b6ed8a9e6634235db708a21da4be57dc58f
[ "CC-BY-4.0", "MIT" ]
2
2021-04-08T08:02:39.000Z
2021-04-11T08:27:32.000Z
docs/framework/unmanaged-api/metadata/imetadataimport-getuserstring-method.md
michha/docs
08f75b6ed8a9e6634235db708a21da4be57dc58f
[ "CC-BY-4.0", "MIT" ]
548
2018-04-25T17:43:35.000Z
2022-03-09T02:06:35.000Z
docs/framework/unmanaged-api/metadata/imetadataimport-getuserstring-method.md
michha/docs
08f75b6ed8a9e6634235db708a21da4be57dc58f
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "IMetaDataImport::GetUserString Method" ms.date: "03/30/2017" api_name: - "IMetaDataImport.GetUserString" api_location: - "mscoree.dll" api_type: - "COM" f1_keywords: - "IMetaDataImport::GetUserString" helpviewer_keywords: - "IMetaDataImport::GetUserString method [.NET Framework metadata]" - "GetUserString method, IMetaDataImport interface [.NET Framework metadata]" ms.assetid: 0fd3bb47-58b5-4083-b241-b9719df7a285 topic_type: - "apiref" --- # IMetaDataImport::GetUserString Method Gets the literal string represented by the specified metadata token. ## Syntax ```cpp HRESULT GetUserString ( [in] mdString stk, [out] LPWSTR szString, [in] ULONG cchString, [out] ULONG *pchString ); ``` ## Parameters `stk` [in] The String token to return the associated string for. `szString` [out] A copy of the requested string. `cchString` [in] The maximum size in wide characters of the requested `szString`. `pchString` [out] The size in wide characters of the returned `szString`. ## Requirements **Platforms:** See [System Requirements](../../get-started/system-requirements.md). **Header:** Cor.h **Library:** Included as a resource in MsCorEE.dll **.NET Framework Versions:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)] ## See also - [IMetaDataImport Interface](imetadataimport-interface.md) - [IMetaDataImport2 Interface](imetadataimport2-interface.md)
25.016129
111
0.680851
kor_Hang
0.306153
17e2c384824c4ee826a6e434a0a6d3fe942b84f6
1,087
md
Markdown
_posts/1922-02-27-GwasPostGwas.md
Shicheng-Guo/Shicheng-Guo.Github.io
0a269e2ec783cc5846d4c542774adde1468182ca
[ "MIT" ]
null
null
null
_posts/1922-02-27-GwasPostGwas.md
Shicheng-Guo/Shicheng-Guo.Github.io
0a269e2ec783cc5846d4c542774adde1468182ca
[ "MIT" ]
null
null
null
_posts/1922-02-27-GwasPostGwas.md
Shicheng-Guo/Shicheng-Guo.Github.io
0a269e2ec783cc5846d4c542774adde1468182ca
[ "MIT" ]
null
null
null
--- layout: post title: "Automatic GWAS and Post-GWAS Analysis Pipeline" author: Shicheng Guo date: 1922-02-28 categories: bioinformatics tags: Genetics Genomics GWAS PostGWAS --- Here, I summarize published works on automatic GWAS and post-GWAS analysis pipelines: * 2020: The Open Targets post-GWAS analysis pipeline: https://academic.oup.com/bioinformatics/advance-article/doi/10.1093/bioinformatics/btaa020/5701644 * 2019: [Odyssey](https://github.com/Orion1618/Odyssey.git): a [semi-automated pipeline for phasing, imputation, and analysis of genome-wide genetic data](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2964-5) * 2018: A tutorial on conducting genome‐wide association studies: [Quality control and statistical analysis](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6001694/) * 2018: GWAS pipeline for H3Africa: [https://github.com/h3abionet/h3agwas](https://github.com/h3abionet/h3agwas) * 2017: [Semi-Automated Quantitative](https://github.com/ini-bdds/saqt-gwas) Trait Genome-Wide Association Studies http://loni.usc.edu/research/software
60.388889
234
0.791168
kor_Hang
0.277592
17e34aa3ec9f8d2ea02f017b7b609f180f9a81ed
677
md
Markdown
README.md
shane1027/WishyWash
7967cf41bd8ee36c8f24c362a2eea1a502495d94
[ "MIT" ]
1
2018-07-28T15:41:11.000Z
2018-07-28T15:41:11.000Z
README.md
shane1027/WishyWash
7967cf41bd8ee36c8f24c362a2eea1a502495d94
[ "MIT" ]
null
null
null
README.md
shane1027/WishyWash
7967cf41bd8ee36c8f24c362a2eea1a502495d94
[ "MIT" ]
null
null
null
# WishyWash Observe serial communications among a network of washers and dryers. Utilize silent time on the low-speed UART to join in on the conversation! Talk dirty to devices designed to clean. Using a logic analyzer and some intellectual logic, I tapped into the laundry network in my dorm. After collecting and studying information and probing signal responses with a PIC24 board running code I developed, I was able to decipher a useful command set for laundry machine control. This command set includes the ability to check which machines are currently washing / drying, enable or disable them, and spoof a washer / dryer's start signal to start the load. Fun stuff!!
135.4
477
0.800591
eng_Latn
0.999768
17e4950979cb00a5ff74b7d1908737a539b361de
274
md
Markdown
docs/core/install/includes/linux-not-supported-debian.md
proudust/docs.ja-jp
d8197f8681ef890994bcf45958e42f597a3dfc7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/core/install/includes/linux-not-supported-debian.md
proudust/docs.ja-jp
d8197f8681ef890994bcf45958e42f597a3dfc7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/core/install/includes/linux-not-supported-debian.md
proudust/docs.ja-jp
d8197f8681ef890994bcf45958e42f597a3dfc7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- ms.openlocfilehash: ee47c77f0f641a94d067942662fa76ade87cca80 ms.sourcegitcommit: cdb295dd1db589ce5169ac9ff096f01fd0c2da9d ms.translationtype: HT ms.contentlocale: ja-JP ms.lasthandoff: 06/09/2020 ms.locfileid: "84602832" --- ❌ Note that this version of Debian is no longer supported.
24.909091
60
0.839416
yue_Hant
0.118078
17e5830f4d39d795c0c8dbc439e73a2fed90658a
428
md
Markdown
Zoom/IM Groups/ZM Add IM Directory Group Members/Readme.md
MithunKulkarniCTS/MithunKulkarniCTS
a9c264209d25495dea7ae919b8ad7080449f5db2
[ "MIT" ]
4
2019-12-07T03:27:41.000Z
2021-03-11T02:30:40.000Z
Zoom/IM Groups/ZM Add IM Directory Group Members/Readme.md
MithunKulkarniCTS/MithunKulkarniCTS
a9c264209d25495dea7ae919b8ad7080449f5db2
[ "MIT" ]
2
2021-08-17T03:06:52.000Z
2021-09-17T13:18:19.000Z
Zoom/IM Groups/ZM Add IM Directory Group Members/Readme.md
MithunKulkarniCTS/MithunKulkarniCTS
a9c264209d25495dea7ae919b8ad7080449f5db2
[ "MIT" ]
37
2019-08-19T19:41:16.000Z
2022-02-03T08:43:59.000Z
# Zoom ## Add IM Directory Group Members Add members to an [IM directory group](https://support.zoom.us/hc/en-us/articles/203749815-IM-Management) under an account. **Scope:** `imgroup:write:admin` **Method:** POST **OperationID:** imGroupMembersCreate **Endpoint:** /im/groups/{groupId}/members **Usage:** members[] [{ "id": "%id%", "email": "%email%" }]
28.533333
127
0.658879
eng_Latn
0.277647
17e5cd47b0ebc5268c7568952c4dc990300e582a
3,523
md
Markdown
pages/statistical_consulting.md
socsig/socsig.github.io
d9ff2ca6f6fb2fa1a3e394ec5001f7f4ea080785
[ "CC0-1.0" ]
null
null
null
pages/statistical_consulting.md
socsig/socsig.github.io
d9ff2ca6f6fb2fa1a3e394ec5001f7f4ea080785
[ "CC0-1.0" ]
null
null
null
pages/statistical_consulting.md
socsig/socsig.github.io
d9ff2ca6f6fb2fa1a3e394ec5001f7f4ea080785
[ "CC0-1.0" ]
null
null
null
--- layout: page title: Statistical and analytical consulting description: Our statistical and analytical consulting services --- Our scientists are experts in a wide variety of statistical methodologies, analytical methodologies, and subject matter areas. We are big proponents of Bayesian modeling; expert opinion matters in every analysis. At the beginning of any relationship, we work closely with you to incorporate your hard-won and often subjective expertise into our models. In other words, we're data-driven, but we know that experience is invaluable. For more information, please contact us at <consulting@sociotechnicalsignals.com>. ### Statistical methodology - Multivariate / cross-sectional data analysis with both long (many observations) and wide (many predictors) tables - Time series analysis and prediction, including automated model selection and dynamic prediction - Design of (distributed) computer experiments and A/B testing - Nonparametric analysis, e.g. distribution finding and fitting, error prediction, nonparametric hypothesis testing - Building and fitting large graph-based models (Bayesian DAGs) of multivariate or time series data ### Machine learning We strongly believe in the validity of Occam's razor: when two models have similar predictive accuracy, we accept the simpler explanation. With this in mind, we begin any analysis with the simplest models, moving toward models of increasing complexity only when simpler models have unacceptably low predictive power. All of our machine learning is done using Bayesian methods. When we use complicated models, you can rest assured that we will always provide error estimates and detailed summaries of model criticism; we will *never* present you with a black-box model and say "it just works, but we don't know why!" 
Our services include: - Traditional supervised learning methodologies (GLM, support vector machines, decision trees and ensembling methods) - Density estimation and nonparametric modeling for question-answering and what-if scenarios - Unsupervised algorithms, e.g., clustering - Supervised deep learning methodologies for categorical or real-valued data, including image and time-series classification and prediction problems - Unsupervised and self-supervised deep learning for dimensionality reduction and synthetic data generation ### Analytical methodology Our scientists have a strong mathematical background and are well-versed in many analytical methods; we can help you develop software to solve high-dimensional optimization or control problems, for example. ### Mechanistic and agent-based modeling We separate ourselves from the competition by asking and answering *why* things happen and not just settling for prediction. If you understand the mechanism generating observed phenomena, you are better positioned to prepare for situations for which there is no data, situations that haven't happened yet but may be very costly if not prepared for. One of our specialties is creating mechanistic models of complicated systems that can be interrogated to understand observed behaviors and predict future scenarios. Examples of such models that we have built include: - Real-options valuation model of a possible acquisition target for a federal contractor - Financial market microstructure-based model to understand tail risk and anomalous volatility - Game-theoretic [election interference model](https://arxiv.org/pdf/1908.02793.pdf) to find optimal strategies for countering election meddling
70.46
348
0.819756
eng_Latn
0.9989
17e618c12af22da7c03aebd01f28aa463f851271
3,691
md
Markdown
README.md
cristianvasquez/R-Tree-RDF-with-Spark-test
aee9a36f1b0ccb9df4d968eb45461fde3af1c6ad
[ "Apache-2.0" ]
2
2020-03-30T03:55:50.000Z
2021-03-31T01:01:44.000Z
README.md
cristianvasquez/R-Tree-RDF-with-Spark-test
aee9a36f1b0ccb9df4d968eb45461fde3af1c6ad
[ "Apache-2.0" ]
null
null
null
README.md
cristianvasquez/R-Tree-RDF-with-Spark-test
aee9a36f1b0ccb9df4d968eb45461fde3af1c6ad
[ "Apache-2.0" ]
null
null
null
Simba: Spatial In-Memory Big data Analytics =========================================== **Simba is now shipped as a standalone package outside Spark. Current version works with Spark 2.1.x. If you find any issues, please make a ticket in the issue tracking system.** Simba is a distributed in-memory spatial analytics engine based on Apache Spark. It extends the Spark SQL engine across the system stack to support rich spatial queries and analytics through both SQL and the DataFrame API. Besides, Simba introduces native indexing support over RDDs in order to develop efficient spatial operators. It also extends Spark SQL's query optimizer with spatial-aware and cost-based optimizations to make the best use of existing indexes and statistics. Simba is open sourced under Apache License 2.0. Currently, it is developed based on Spark 1.6.0. For recent updates and further information, please refer to [Simba's homepage](http://www.cs.utah.edu/~dongx/simba). Features -------------- + Expressive **SQL and DataFrame query interface** fully *compatible with original Spark SQL operators*. (SQL mode is currently not supported in the standalone version.) + Native distributed **indexing** support over RDDs. + Efficient **spatial operators**: *high-throughput* & *low-latency*. - Box range query: `IN RANGE` - Circle range query: `IN CIRCLERANGE` - *k* nearest neighbor query: `IN KNN` - Distance join: `DISTANCE JOIN` - kNN join: `KNN JOIN` + Modified Zeppelin: **interactive visualization** for Simba. + Spatial-aware **optimizations**: *logical* & *cost-based*. + Native thread-pool for multi-threading. + **Geometric objects** support (developing) + **Spatio-Temporal** and **spatio-textual** data analysis (developing) **Notes:** *We are still cleaning source codes for some of our features, which will be released to the master and develop branch later.* Developer Notes --------------- 1. 
Fork this repo (or create your own branch if you are a member of Simba's main development team) to start your development; **DO NOT** push your draft version to the master branch. 2. You can build your own application in the `org.apache.spark.examples` package for testing or debugging. 3. If you want to merge your feature branch to the main develop branch, please create a pull request from your local branch to the develop branch (**not the master branch**). 4. Use an IDE to debug this project. If you use IntelliJ IDEA, the [INSTALL](./INSTALL.md) file describes how to import the whole project into IntelliJ IDEA. Branch Information ------------------ `standalone` branches are kept for maintaining the Simba standalone package, which aims at building Simba packages standing outside the Spark SQL core. Currently, the `master` and `develop` branches are built on top of Spark 2.1.x. The `master` branch provides the latest stable version, while the `develop` branch is the main development branch where new features are merged before they are ready for release. For legacy reasons, we also keep branches which archive old versions of Simba, developed on former Spark versions, in the branches named `simba-spark-x.x`. Note that we will only integrate the latest features into the `master` and `develop` branches. Please make sure you check out the correct branch before you start using it. Contributors ------------ - Dong Xie: dongx [at] cs [dot] utah [dot] edu - Gefei Li: oizz01 [at] sjtu [dot] edu [dot] cn - Liang Zhou: nichozl [at] sjtu [dot] edu [dot] cn - Zhongpu Chen: chenzhongpu [at] sjtu [dot] edu [dot] cn - Feifei Li: lifeifei [at] cs [dot] utah [dot] edu - Bin Yao: yaobin [at] cs [dot] sjtu [dot] edu [dot] cn - Minyi Guo: guo-my [at] cs [dot] sjtu [dot] edu [dot] cn
75.326531
503
0.738824
eng_Latn
0.990605
17e6da6ad8cb7c4eccc38e8132f4a78b0310fd0c
296
md
Markdown
reading/_posts/2012-2-24-tv-is-broken.md
andrewpbrett/andrewpbrett.github.com
14dd79264030f808e1f4e875387c741bd0732168
[ "MIT" ]
null
null
null
reading/_posts/2012-2-24-tv-is-broken.md
andrewpbrett/andrewpbrett.github.com
14dd79264030f808e1f4e875387c741bd0732168
[ "MIT" ]
3
2017-12-06T20:09:47.000Z
2017-12-06T20:10:18.000Z
reading/_posts/2012-2-24-tv-is-broken.md
andrewpbrett/andybrett.com
14dd79264030f808e1f4e875387c741bd0732168
[ "MIT" ]
null
null
null
--- title: "Minimal Mac | TV Is Broken" external_link: http://minimalmac.com/post/18189678921/tv-is-broken --- >She just does not understand why one would want to watch anything this way. It's boring and frustrating. [See also: "I finally cracked it"][1] [1]: http://andybrett.com/bookmarks/68
29.6
105
0.733108
eng_Latn
0.911359
17e715f42363286ee85b4ddffeb31ccdd239b72b
107
md
Markdown
readme.md
bawaaaaah/PonyLogManager
edf467992e5083ac65b23d1ed578ad36f4229bdb
[ "MIT" ]
null
null
null
readme.md
bawaaaaah/PonyLogManager
edf467992e5083ac65b23d1ed578ad36f4229bdb
[ "MIT" ]
null
null
null
readme.md
bawaaaaah/PonyLogManager
edf467992e5083ac65b23d1ed578ad36f4229bdb
[ "MIT" ]
null
null
null
It's a hackable and thread-safe C# log manager for your project. Use it, enjoy it, hack it, improve it =)
26.75
64
0.728972
eng_Latn
0.998167
17e849226a26be09ab76b04b137a305d6148eceb
1,067
md
Markdown
README.md
adm244/cdev
b30c56bb7af94ef1c5ea4ae09ff8544a79e13936
[ "Unlicense" ]
null
null
null
README.md
adm244/cdev
b30c56bb7af94ef1c5ea4ae09ff8544a79e13936
[ "Unlicense" ]
null
null
null
README.md
adm244/cdev
b30c56bb7af94ef1c5ea4ae09ff8544a79e13936
[ "Unlicense" ]
null
null
null
**cdev** is a small batch file that aims to help manage C\C++ (and much more) projects. Since this is just a bunch of Windows batch files, absolutely everything can be customized to suit your specific needs: text editor, compiler, the path to your projects folder, pre/post-build steps, you name it. The files located here are just a place to start from. Feel free to use, copy, modify, share and do whatever you want with these files. It's **Public Domain**. **Configuration:** You need to modify 2 files: `cdev.bat` and `tools/build.bat`. Just replace the placeholders in the `[customize those variables]` sections and it should work. The `files` folder contains files that will be copied into the `code` folder of your project. The `tools` folder contains files that will be copied into the `tools` folder of your project. These tools can be accessed through cmd once you've loaded or created a project. Also, you might want to put the folder containing cdev.bat in the user's or system's PATH variable so you can access cdev directly from the console, just like other cmd commands such as `cd`. **Have fun!**
48.5
162
0.764761
eng_Latn
0.999564
17e882e920cfee4cab695e5598894ee4d3814541
837
md
Markdown
README.md
Db-Lau/Reinforcement-Learning-Course
a048f4df06bf98d1c50e025cd87ff5aa88a016cc
[ "MIT" ]
null
null
null
README.md
Db-Lau/Reinforcement-Learning-Course
a048f4df06bf98d1c50e025cd87ff5aa88a016cc
[ "MIT" ]
null
null
null
README.md
Db-Lau/Reinforcement-Learning-Course
a048f4df06bf98d1c50e025cd87ff5aa88a016cc
[ "MIT" ]
null
null
null
[:house: Home page](https://github.com/Db-Lau/Reinforcement-Learning-Course) # Reinforcement-Learning-Course This repository contains code I have been developing during the course of [Reinforcement Learning from Örebro University](https://www.oru.se/utbildning/kurser/kurs/reinforcement-learning-del-1-dt707a) in Autumn 2021. It includes code for several groups of reinforcement learning algorithms (Dynamic Programming, Monte Carlo methods and Temporal-Difference learning), written based on the pseudocode from [Sutton's book](http://www.incompleteideas.net/book/the-book-2nd.html). They are currently applicable to the Cliff Walking environment in OpenAI Gym, and will be further modified to be applicable to different environments. - TODO: REFACTORING CODE - TODO: MODIFY CODES TO BE APPLICABLE TO ALL ENVIRONMENTS
64.384615
269
0.807646
eng_Latn
0.931917
17e9202f32f095a3346e2493864a32e05ea5b64b
1,712
md
Markdown
README.md
yumemayu/initto
01eea03889dd3f6473fcc9099cd1970b9c1217ee
[ "MIT" ]
null
null
null
README.md
yumemayu/initto
01eea03889dd3f6473fcc9099cd1970b9c1217ee
[ "MIT" ]
null
null
null
README.md
yumemayu/initto
01eea03889dd3f6473fcc9099cd1970b9c1217ee
[ "MIT" ]
null
null
null
# Tech Blog based on Eleventy A blog kit built with [11ty (Eleventy)](https://11ty.io), based on [hylia-forestry](https://github.com/DirtyF/hylia-forestry). ## Prepare your own environment ### Installation Run this when installing in your environment for the first time. Not needed if it is already set up ```bash $ npm ci ``` ### Serve on local Run this to start the site in your environment ```bash $ npm start ``` ### Build a production version of the site The production build runs automatically, so you don't usually have to run it yourself. ```bash $ npm run production ``` ## How to write an article ### Article in English Create a new md file in `src/posts` e.g.: `sample.md` ### Article in Japanese Create a new md file in `src/ja/post` e.g.: `sample.md` ### Add an article file You can also add a file by following the arrow in the image below and write an article there ![readme01](https://user-images.githubusercontent.com/4590559/107755069-ad352d80-6d65-11eb-9ea6-add6e39c5b42.png) ### Contents The contents of an article file are as follows ``` --- title: <title> author: <author name> date: <publish date> socialImage: 'https://initto.devprotocol.xyz/images/ogp.png' level: BEGINNER | EXPERIENCED | 初級 | 中級以上 tags: - <tags...> - <tags...> - <tags...> --- <contents> ``` The text enclosed in `---` is the article's meta information. Write the title, date, tags, etc. of the article there. If you set the date to a future value, the article will not be displayed (or listed) until that date arrives. If you want to change the OGP image, change the value of socialImage. Write the body of the article below the meta information, in Markdown format
24.811594
130
0.727804
eng_Latn
0.992717
17e97df011b407949d7742b2099302066d118111
1,443
md
Markdown
CONTRIBUTING.md
mackelab/pop_spike
8ebb12dec79e55ea7efc877e5b95bf95e8680a50
[ "BSD-2-Clause" ]
16
2017-08-15T12:13:11.000Z
2021-05-07T02:25:20.000Z
CONTRIBUTING.md
mackelab/pop_spike
8ebb12dec79e55ea7efc877e5b95bf95e8680a50
[ "BSD-2-Clause" ]
1
2017-08-16T21:05:27.000Z
2017-08-17T09:44:12.000Z
CONTRIBUTING.md
mackelab/pop_spike
8ebb12dec79e55ea7efc877e5b95bf95e8680a50
[ "BSD-2-Clause" ]
5
2018-02-26T20:23:45.000Z
2020-01-15T13:11:43.000Z
## How to contribute to CorBinian

First off, thanks for taking the time to contribute!

#### **Did you find a bug?**

* See if the bug was not already reported under [Issues](https://github.com/mackelab/CorBinian/issues).
* If there is no open issue addressing the problem, [open a new one](https://github.com/mackelab/CorBinian/issues/new). Please include a title and clear description, as much relevant information as possible, and a code sample or an executable test case demonstrating the unexpected behavior.

#### **Did you write a patch that fixes a bug?**

* Open a new GitHub pull request with the patch.
* Ensure the PR description clearly describes the problem and solution. Please include the relevant issue number if applicable.
* Please note that we will generally not accept changes that are only cosmetic in nature and do not add anything substantial in terms of functionality or stability.

#### **Do you have questions about the source code?**

* Feel free to ask any question related to the usage of our code. The maintainers of this repository are [Marcel Nonnemacher](https://github.com/mnonnenm) and [Jakob Macke](https://github.com/jahma).

#### **Do you intend to add a new feature or change an existing one?**

* Suggest your change to our [team](https://www.mackelab.org/contact/) and feel free to start writing code.
* Open an issue on GitHub once you have collected positive feedback about the change.
45.09375
173
0.756757
eng_Latn
0.998551
17ea07b622310958aa8590ee7fc7d57384b9151a
17,369
md
Markdown
articles/finance/general-ledger/one-voucher.md
MicrosoftDocs/Dynamics-365-Operations.lt-lt
4edb9b5897a1f4cc990027074662bd1e8d56ea17
[ "CC-BY-4.0", "MIT" ]
3
2020-05-18T17:14:28.000Z
2022-01-30T03:33:06.000Z
articles/finance/general-ledger/one-voucher.md
MicrosoftDocs/Dynamics-365-Operations.lt-lt
4edb9b5897a1f4cc990027074662bd1e8d56ea17
[ "CC-BY-4.0", "MIT" ]
6
2017-12-12T12:37:43.000Z
2019-04-30T11:46:17.000Z
articles/finance/general-ledger/one-voucher.md
MicrosoftDocs/Dynamics-365-Operations.lt-lt
4edb9b5897a1f4cc990027074662bd1e8d56ea17
[ "CC-BY-4.0", "MIT" ]
2
2018-02-28T23:29:31.000Z
2019-10-12T18:18:06.000Z
---
title: One voucher
description: With the One voucher capability in the financial journals (general journal, fixed asset journal, vendor payment journal, and so on), you can enter multiple subledger transactions in the context of a single voucher.
author: kweekley
ms.date: 11/05/2018
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: LedgerJournalSetup, LedgerParameters, AssetProposalDepreciation
audience: Application User
ms.reviewer: roschlom
ms.custom: 14091
ms.assetid: c64eed1d-df17-448e-8bb6-d94d63b14607
ms.search.region: Global
ms.author: kweekley
ms.search.validFrom: 2018-03-16
ms.dyn365.ops.version: 8.0.2
ms.openlocfilehash: 978d0dc28f86860335a782bd2ddaa141ed639f5e
ms.sourcegitcommit: b9c2798aa994e1526d1c50726f807e6335885e1a
ms.translationtype: HT
ms.contentlocale: lt-LT
ms.lasthandoff: 08/13/2021
ms.locfileid: "7344063"
---
# <a name="one-voucher"></a>One voucher

[!include [banner](../includes/banner.md)]
[!include [preview banner](../includes/preview-banner.md)]

## <a name="what-is-one-voucher"></a>What is One voucher?

Existing functionality in the financial journals (general journal, fixed asset journal, vendor payment journal, and so on) lets you enter multiple subledger transactions (customer, vendor, fixed asset, project, and bank) in the context of a single voucher. Microsoft refers to this capability as *One voucher*. A single voucher can be created in one of the following ways:

- Set up the journal name (**General ledger** \> **Journal setup** \> **Journal names**) so that the **New voucher** field is set to **One voucher number only**. From then on, every line that is added to the journal is included in the same voucher. As a result, the same voucher can be entered as a multi-line voucher, as an account/offset account on the same line, or as a combination.

    [![Same line.](./media/same-line.png)](./media/same-line.png)

> [!IMPORTANT]
> The definition of One voucher does **not** include cases where journal names are set to **One voucher number only**, but the user then enters a voucher that includes only ledger account types. In this topic, "one voucher" means that a single voucher contains more than one vendor, customer, bank, fixed asset, or project.

- Enter a multi-line voucher where no offset account is specified.

    [![Multi-line voucher.](./media/Multi-line.png)](./media/Multi-line.png)

- Enter a voucher where the account and offset account specify a subledger account type, such as **Vendor**/**Vendor**, **Customer**/**Customer**, **Vendor**/**Customer**, or **Bank**/**Bank**.

    [![Subledger voucher.](./media/subledger.png)](./media/subledger.png)

## <a name="issues-with-one-voucher"></a>Issues with One voucher

Issues arise when One voucher is used for settlement, tax calculation, transaction reversal, reconciliation of a subledger to the general ledger, financial reporting, and more. (For more information about the issues that can occur during settlement, for example, see [Single voucher with multiple customer or vendor records](../accounts-payable/single-voucher-multiple-customer-vendor-records.md).) For these processes and reports to work correctly, transaction details must be specified. Although some scenarios might work correctly, depending on your organization's setup, issues often occur when multiple transactions are entered in a single voucher.

For example, you post the following multi-line voucher.

[![Example of a multi-line voucher.](./media/example.png)](./media/example.png)

You then generate the **Expenses by vendor** report in the **Financial insights** workspace. In this report, expense account balances are grouped by vendor group and then by vendor. When the report is generated, the system can't determine which vendor groups/vendors incurred the expenses of 250.00. Because transaction details are missing, the system assumes that all 250.00 of the expenses were incurred by the first vendor in the voucher. Therefore, the 250.00 of expenses that is included in the balance of main account 600120 appears under that vendor group/vendor. However, it's very likely that the first vendor in the voucher is incorrect. As a result, the report might be wrong.

[![Expenses by vendor report.](./media/expenses.png)](./media/expenses.png)

## <a name="the-future-of-one-voucher"></a>The future of One voucher

Because of the issues that can occur when One voucher is used, the functionality will eventually be deprecated. However, because there are functional gaps that depend on this functionality, the deprecation won't occur all at once. The following timeline will be used:

- **Spring 2018 release** – The functionality was turned off by default via the **Allow multiple transactions within one voucher** parameter on the **General** tab of the **General ledger parameters** page. However, you can turn it back on if your organization has a scenario that falls into one of the functional gaps listed later in this topic.

    - If your business scenario doesn't require One voucher, we recommend that you leave the functionality turned off. If you use it even though an alternative solution exists, Microsoft won't fix "bugs" in the areas that are listed later in this topic.
    - We recommend that you stop using One voucher for integrations, unless you require functionality for one of the documented functional gaps.

- **Later releases** – Several business requirements can be met only by using One voucher. Microsoft must make sure that all the identified business requirements can still be met in the system after the functionality is deprecated. Therefore, new functionality will likely have to be added to fill the functional gaps. Microsoft can't provide a specific solution, because every functional gap differs and must be evaluated against the business requirements. Some functional gaps will likely be replaced with functionality that helps meet the specific business requirements. However, other gaps might be filled by continuing to allow journal entry that uses One voucher, but improving the system so that it tracks more information.

After all the remaining functional gaps have been filled, Microsoft will announce that the functionality is deprecated. However, the deprecation won't take effect until at least one year after the announcement. Although Microsoft can't provide an exact estimate of when the One voucher functionality will be deprecated, the deprecation will most likely be at least two years out. Microsoft's policy is to allow at least 12 months between the announcement of a deprecation and the actual deprecation, so that customers and independent software vendors (ISVs) have time to react to the change. For example, an organization might have to update its business processes, entities, and integrations. The deprecation of One voucher is a significant change that will be communicated widely. As part of this communication, Microsoft will update this topic, publish a new blog post on the Microsoft Dynamics 365 Finance blog, update the "Removed or deprecated features" topic, announce the change at relevant Microsoft conferences, and so on.

## <a name="why-use-one-voucher"></a>Why use One voucher?

Based on conversations with customers, Microsoft has compiled the following list of scenarios where customers use One voucher, or reasons why they use it. Some of these business requirements can be met only by using One voucher. However, many alternative scenarios can meet the same business requirements.

### <a name="scenarios-that-require-one-voucher"></a>Scenarios that require One voucher

The following scenarios can be completed only by using One voucher. If your organization uses any of these scenarios, you must turn on the option that allows multiple transactions to be entered in a single voucher. You can do this by changing the **Allow multiple transactions within one voucher** parameter on the **General ledger parameters** page. In later releases, the functional gaps will be filled through other features.

> [!NOTE]
> For each of the following scenarios, the **Allow multiple transactions within one voucher** field must be set to Yes on the **General** FastTab of the **General ledger parameters** page.

### <a name="post-vendor-or-customer-payments-in-summary-form-to-a-bank-account"></a>Post vendor or customer payments in summary form to a bank account

**Scenario:** An organization submits a list of vendors and amounts to its bank, and the bank uses this list to pay the amounts to the vendors on the organization's behalf. In the bank account, the bank records the sum of the payments as a single withdrawal. Summarized vendor payments are supported only through One voucher. Each vendor is entered on a separate line, so that detailed information can be maintained in the Accounts payable subledger. However, the total of all the payment amounts is summarized into a single bank amount line. Therefore, in the bank subledger, the withdrawal appears as a single summarized amount.

**Scenario:** An organization deposits customer payments, or a bank deposits customer payments on the organization's behalf, and the deposit appears in the bank account as a lump sum. Summarized customer payments are usually supported through deposit functionality. However, if you use a "bridging" method of payment, this scenario is supported only through One voucher. Customer payments are entered in the same way that was described for summarized vendor payments.

### <a name="mechanism-to-group-transactions-from-a-business-event"></a>Mechanism to group transactions from a business event

**Scenario:** An organization has a single business event that triggers multiple transactions. However, the accounting department wants to view the accounting entries together, to provide better control. If the organization must view the accounting entries of a business event together, it must use One voucher.

### <a name="country-specific-features"></a>Country-specific features

**Scenario:** The Single administrative document (SAD) functionality for Poland currently requires One voucher. Until a grouping option becomes available for this functionality, you must continue to use One voucher. There might be additional country-specific features that require One voucher.

### <a name="customer-prepayment-payment-journal-that-has-taxes-on-multiple-lines"></a>Customer prepayment payment journal that has taxes on multiple "lines"

In this scenario, the customers in the single voucher are the same customer, because the transaction mimics the lines of the customer's order. The prepayment must be entered in a single voucher, because tax calculation must occur on the "lines" of the single payment that the customer made.

### <a name="customer-reimbursement"></a>Customer reimbursement

**Scenario:** A customer prepays an order, and the order lines have different taxes that must be recorded with the prepayment. The customer prepayment is a single transaction that mimics the order lines, so that the appropriate tax can be recorded with each amount line. If the periodic reimbursement task is run in the Accounts receivable module, a transaction is created to move the balance from the customer to the vendor. In this scenario, One voucher is required so that the reimbursement can be paid to the customer.

### <a name="fixed-asset-maintenance-catch-up-depreciation-split-asset-calculate-depreciation-on-disposal"></a>Fixed asset maintenance: catch-up depreciation, split asset, calculate depreciation on disposal

As of version 10.0.21, the fixed asset transactions that are used for catch-up depreciation, splitting an asset, and calculating depreciation on disposal are created by using different voucher numbers.

### <a name="bills-of-exchange-and-promissory-notes"></a>Bills of exchange and promissory notes

For bills of exchange and promissory notes, One voucher is required, because the transactions move the customer or vendor balance from one Accounts receivable/Accounts payable ledger account to another, based on the status of the payment.

## <a name="scenarios-that-dont-require-one-voucher"></a>Scenarios that don't require One voucher

The following scenarios can be completed in other ways, without using One voucher.

### <a name="post-customer-payments-in-summary-form-to-the-bank-account"></a>Post customer payments in summary form to the bank account

An organization deposits customer payments, or a bank deposits customer payments on the organization's behalf, and the deposit appears in the bank account as a lump sum. When the method of payment doesn't use a bridging option, summarized customer payments are supported through deposit functionality.

### <a name="netting"></a>Netting

In netting, vendor and customer balances are offset against each other, because the vendor and the customer are the same party. This approach reduces the cash flows that the organization and the customer/vendor party exchange. Netting can be done by entering the increase and decrease in separate vouchers and then posting an offset to a ledger clearing account.

### <a name="post-in-summary-to-the-general-ledger"></a>Post in summary to the general ledger

Organizations often want to post to their general ledgers in summary form, to reduce the volume of data. However, these organizations usually still require that transaction details be maintained. When posting is done in summary form by using One voucher, transaction details aren't known and can't be maintained.

- Because transaction details currently can't be maintained, we recommend that an organization **not** use One voucher when it posts in summary form.
- After One voucher is removed, the Source document and Accounting frameworks can be implemented in the journals. These frameworks will then maintain transaction details and support summarization into the general ledger.

### <a name="settle-multiple-unposted-payments-to-the-same-invoice"></a>Settle multiple unposted payments to the same invoice

This scenario is typically used by organizations where customers that pay for purchases can use multiple payment types. In this scenario, the organization must be able to record multiple unposted payments and settle them against an invoice that is created for the customer. New functionality that was added in Microsoft Dynamics 365 for Operations version 1611 (November 2016) lets multiple unposted payments be settled against a single invoice. It's no longer necessary to enter multiple customer payments in a single voucher.

### <a name="import-bank-statement-transactions"></a>Import bank statement transactions

Banks often make and receive payments on an organization's behalf, and these transactions are recorded in Finance by using a file that is received from the bank. Organizations often want these transactions to be grouped in the file by using the bank statement number. Because the bank statement shows the details of every transaction, summarization isn't required in the bank subledger. Transactions can be grouped by using other journal fields, such as the journal batch number or the document number.

### <a name="transfer-balances"></a>Transfer balances

An organization might have to transfer a balance from one vendor to another vendor, because an error occurred, or because another vendor assumed the liability. These types of transfers are also done for account types such as **Customer** and **Bank** accounts. The transfer of a balance from one account (vendor, customer, bank, and so on) to another account can be done by using separate vouchers, and an offset can be posted to a ledger clearing account.

### <a name="enter-beginning-balances"></a>Enter beginning balances

Organizations often enter beginning balances for subledger accounts (vendors, customers, fixed assets, and so on) as a single-voucher transaction. The beginning balances for each subledger account can be entered as separate vouchers, and an offset can be posted to a ledger clearing account.

### <a name="correct-the-accounting-entry-of-a-posted-customer-or-vendor-document"></a>Correct the accounting entry of a posted customer or vendor document

An organization might have to correct the Accounts receivable or Accounts payable ledger account on the accounting entry of a posted invoice, but the invoice can't be canceled or corrected through another mechanism. If the Accounts receivable or Accounts payable ledger account must be corrected, the correction must be made directly to the ledger account. It can't be posted through the vendor or customer. This approach requires that corrections be made during "downtime," so that manual entry to the ledger account can briefly be allowed.

### <a name="the-system-allows-it"></a>"The system allows it"

Organizations often use One voucher just because the system allows them to use it, without understanding the consequences.

[!INCLUDE[footer-include](../../includes/footer-banner.md)]
94.912568
749
0.818585
lit_Latn
1.000007
17eb1c3abf90ea6b3750d0d261b18c8161a69958
6,482
md
Markdown
site/content/posts/new-dotnet-docker-env-with-node-and-mariadb/index.md
ahmed-habbachi/website
b795ad8e9cedf5e709d21537b7e0ec7a7cbce4d0
[ "MIT" ]
null
null
null
site/content/posts/new-dotnet-docker-env-with-node-and-mariadb/index.md
ahmed-habbachi/website
b795ad8e9cedf5e709d21537b7e0ec7a7cbce4d0
[ "MIT" ]
null
null
null
site/content/posts/new-dotnet-docker-env-with-node-and-mariadb/index.md
ahmed-habbachi/website
b795ad8e9cedf5e709d21537b7e0ec7a7cbce4d0
[ "MIT" ]
null
null
null
---
layout: '[post]'
path: '/new-dotnet-docker-env-with-node-and-mariadb'
title: New dotnet docker environment with node and mariadb
date: 2018-09-21 17:43:38
category: Tools
tags: [Docker, Mariadb, Node, .Net Core, ASP.Net Core]
featuredImage: ./docker-banner.png
published: true
---

In this post I'll describe the steps I took to make my ASP.NET Core app work under a Docker Compose orchestration. My project is an implementation of **IdentityServer4**.

I am going to use three Docker images, two for runtime (meaning two that run the application) and one for build:

1. The runtime images:
   1. mariadb
   2. aspnetcore runtime 2.1.4
2. The build image:
   * node
   * dotnet sdk 2.1.104

<!-- more -->

## The build image

The build environment that I need for my project is an image that contains:

* dotnet sdk 2.1.x
* nodejs

1. Let's start by pulling a node image (the tag I used is a matter of taste):

```cmd
docker pull node:8.12.0-alpine
```

This time we cannot use bash inside our container, because it is an Alpine-based image; we have to use sh:

```cmd
docker run --name nodedotnetsdk -it df48b68da02a /bin/sh
```

*PS: df48b68da02a is the ID of the node image I had already downloaded; use your own image ID or name. To get the ID of an image, run `docker image ls` and look for your image. `-it image_name /bin/sh` runs the sh shell in the container and attaches STDIN/STDOUT to it.*

2. Next we install the dotnet sdk dependencies in the container:

```shell
apk add --no-cache ca-certificates krb5-libs libgcc libintl libssl1.0 libstdc++ tzdata userspace-rcu zlib
apk -X https://dl-cdn.alpinelinux.org/alpine/edge/main add --no-cache lttng-ust

# Configure the Kestrel web server to bind to port 80 when present
export ASPNETCORE_URLS=http://+:80
# Enable detection of running in a container
export DOTNET_RUNNING_IN_CONTAINER=true
# Install ICU libraries and disable invariant mode (see https://github.com/dotnet/announcements/issues/20)
apk add --no-cache icu-libs
export DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```

Now, with the following commands, we download and install the dotnet sdk in the container:

```shell
export DOTNET_SDK_VERSION=2.1.402

apk add --no-cache --virtual .build-deps openssl \
    && wget -O dotnet.tar.gz https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$DOTNET_SDK_VERSION/dotnet-sdk-$DOTNET_SDK_VERSION-linux-musl-x64.tar.gz \
    && dotnet_sha512='88309e5ddc1527f8ad19418bc1a628ed36fa5b21318a51252590ffa861e97bd4f628731bdde6cd481a1519d508c94960310e403b6cdc0e94c1781b405952ea3a' \
    && echo "$dotnet_sha512  dotnet.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -C /usr/share/dotnet -xzf dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet \
    && rm dotnet.tar.gz \
    && apk del .build-deps

export DOTNET_USE_POLLING_FILE_WATCHER=true
export NUGET_XMLDOC_MODE=skip
dotnet help
```

Now type `exit` to leave the shell, and then commit the container as a new image:

```shell
docker commit -a "**author name**" -m "add dotnet sdk 2.1.402 to a nodejs image" nodedotnetsdk somerepo/acontainer_name
```

So now we are done with the build image. This image will be used only (yes, only) to build the project and pass the files to the runtime that runs the app, so let's get the necessary runtime images.

## Runtime docker images

The runtime environment consists of two running docker containers:

* aspnetcore runtime 2.1.x
* mariadb

Therefore I need to get the necessary images:

1. First, let's get the mariadb image from [Docker Hub](https://hub.docker.com):

```shell
docker pull mariadb
```

2. Second, let's get the dotnet runtime image:

```shell
docker pull microsoft/dotnet:2.1-aspnetcore-runtime
```

We don't need to change the images themselves; we will apply settings to the containers as follows.

## Dockerfile and docker-compose

The Dockerfile tells Docker the steps it needs to follow to build our container. So, first things first, create a new file called `Dockerfile` without any extension and paste the step commands into it, like the following (this is just an example project):

```docker
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 5000

FROM habbachi/nodedotnetsdk AS build
WORKDIR /src
COPY Auerswald.IdentityServer.Web/Auerswald.IdentityServer.Web.csproj Auerswald.IdentityServer.Web/
COPY Auerswald.IdentityServer.Data/Auerswald.IdentityServer.Data.csproj Auerswald.IdentityServer.Data/
COPY Auerswald.IdentityServer.Public/Auerswald.IdentityServer.Public.csproj Auerswald.IdentityServer.Public/
RUN dotnet restore Auerswald.IdentityServer.Web/Auerswald.IdentityServer.Web.csproj
COPY . .
WORKDIR /src/Auerswald.IdentityServer.Web
RUN npm run default
RUN dotnet build Auerswald.IdentityServer.Web.csproj -c Release -o /app

FROM build AS publish
RUN dotnet publish Auerswald.IdentityServer.Web.csproj -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Auerswald.IdentityServer.Web.dll"]
```

*PS: change the image names as needed; for me, the build image name is habbachi/nodedotnetsdk.*

This Dockerfile builds the project, copies the files to the runtime container, and launches our application. What is missing here is the mariadb container. Since we are talking about running multiple containers, a working environment, we need docker-compose. Let's create a new file called `docker-compose.yml`:

```yml
version: '3.4'

services:
  db:
    container_name: "identityserver_db"
    image: "mariadb"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_DATABASE: "identityserverdb"
    networks:
      marianet:
        aliases:
          - db
  auerswald.identityserver.web:
    container_name: "identityserver"
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    depends_on:
      - db
    networks:
      - webnet
      - marianet

networks:
  webnet:
  marianet:
```
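One companion file worth adding (it is not part of the original post, and the entries are a sketch to adjust to your project layout): because the `COPY . .` step in the Dockerfile sends the entire build context to the daemon, a `.dockerignore` next to the Dockerfile keeps build artifacts and dependencies out of the context:

```
# .dockerignore -- hypothetical example, tune to your project
**/bin/
**/obj/
**/node_modules/
.git/
docker-compose.yml
```

With the `Dockerfile` and `docker-compose.yml` in place, `docker-compose up --build` builds the image and starts both containers.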
36.011111
198
0.69315
eng_Latn
0.882947
17eb29456a9eceb835b06ed2a20870849f9c357c
2,690
md
Markdown
readme.md
pinkmatter/farearth-events-testing
e49ad48afe750b3328706d33347122fceb9cbe90
[ "Apache-2.0" ]
null
null
null
readme.md
pinkmatter/farearth-events-testing
e49ad48afe750b3328706d33347122fceb9cbe90
[ "Apache-2.0" ]
null
null
null
readme.md
pinkmatter/farearth-events-testing
e49ad48afe750b3328706d33347122fceb9cbe90
[ "Apache-2.0" ]
null
null
null
# FarEarth Real-time events reference implementation

* Serves to show how to integrate against the FarEarth real-time events sub-system.
* A functional system requires both the reference service and an active FarEarth catalogue.
* The FarEarth catalogue sends real-time events to the real-time reference implementation, which saves the received event data to disk.

## Building from sources

* Requires Apache Maven and at least Java 8.

```
git clone https://github.com/pinkmatter/farearth-events-testing.git
cd src/farearth-events-testing
mvn install
```

## Configuration

* The `application.yml` configuration file exposes the following properties:
  * `server.port`: The port where the local reference implementation will listen for HTTP connections.
  * `kmz-output-directory`: The local path where received KMZ files will be saved.
  * `geojson-output-directory`: The local path where received GeoJSON files will be saved.
  * `catalogue-url`: The URL of the FarEarth catalogue.
  * `catalogue-username`: The credentials required to access the FarEarth catalogue.
  * `catalogue-password`: The credentials required to access the FarEarth catalogue.

## Directory locations

* The output directories default to the current directory (`logs`, `output-kmz` and `output-geojson`).
* Example output data is also included in the `example-data` folder, showing how the output of the reference implementation service will typically look.

## Execution

* Example start-up scripts are included for Windows (`run.bat`) and Linux (`run.sh`).

## Local HTTP end-points and event filtering

* The default end-points that the FarEarth catalogue targets are `/geoJsonEndPoint` and `/kmzEndPoint`.
* However, these can be changed; the new values need to be supplied to Pinkmatter.
* The `/geoJsonEndPoint` accepts `application/json` formatted HTTP POST requests, while the `/kmzEndPoint` accepts `multipart/form-data` content as a full KMZ file.
* Events can also be filtered via geo-fencing, and different areas can be configured to target different end-points accordingly (configured on the FarEarth catalogue).

## Service authentication

* To communicate with the FarEarth catalogue, the following authentication procedure needs to be followed:
  * Build an HTTP POST request with content type `application/x-www-form-urlencoded` and body key/value pairs `username` and `password`.
  * Post the request to the catalogue end-point at `/catalogue/login`.
  * A successful log-in will yield a `Set-Cookie` header response with a cookie named `JSESSIONID`.
  * Attach the `JSESSIONID` cookie to any subsequent requests made to the FarEarth catalogue.
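The authentication steps above can be sketched in Python using only the standard library (the host name and credentials below are placeholders, not part of the reference implementation):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical catalogue address -- substitute your FarEarth catalogue URL.
CATALOGUE_URL = "https://catalogue.example.com"

def build_login_request(username: str, password: str) -> Request:
    """Build the form-encoded POST described above for /catalogue/login."""
    body = urlencode({"username": username, "password": password}).encode("ascii")
    return Request(
        CATALOGUE_URL + "/catalogue/login",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# Sending the request (e.g. with urllib.request.urlopen) yields a response whose
# Set-Cookie header carries JSESSIONID; that cookie must then accompany every
# later catalogue call, e.g.:  Cookie: JSESSIONID=<value>
req = build_login_request("user", "secret")
```
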
48.035714
167
0.765056
eng_Latn
0.991232
17eb4703f7d8b6e8907650aa6ff2aa2a0a038103
14,481
md
Markdown
articles/storage/common/customer-managed-keys-overview.md
changeworld/azure-docs.it-
34f70ff6964ec4f6f1a08527526e214fdefbe12a
[ "CC-BY-4.0", "MIT" ]
1
2017-06-06T22:50:05.000Z
2017-06-06T22:50:05.000Z
articles/storage/common/customer-managed-keys-overview.md
changeworld/azure-docs.it-
34f70ff6964ec4f6f1a08527526e214fdefbe12a
[ "CC-BY-4.0", "MIT" ]
41
2016-11-21T14:37:50.000Z
2017-06-14T20:46:01.000Z
articles/storage/common/customer-managed-keys-overview.md
changeworld/azure-docs.it-
34f70ff6964ec4f6f1a08527526e214fdefbe12a
[ "CC-BY-4.0", "MIT" ]
7
2016-11-16T18:13:16.000Z
2017-06-26T10:37:55.000Z
--- title: Chiavi gestite dal cliente per la crittografia dell'account titleSuffix: Azure Storage description: È possibile usare la propria chiave di crittografia per proteggere i dati nell'account di archiviazione. Quando si specifica una chiave gestita dal cliente, tale chiave viene usata per proteggere e controllare l'accesso alla chiave che crittografa i dati. Le chiavi gestite dal cliente offrono maggiore flessibilità per gestire i controlli di accesso. services: storage author: tamram ms.service: storage ms.date: 03/30/2021 ms.topic: conceptual ms.author: tamram ms.reviewer: ozgun ms.subservice: common ms.openlocfilehash: 07f8faf503bdea6be8263afa6240594956b61391 ms.sourcegitcommit: 73fb48074c4c91c3511d5bcdffd6e40854fb46e5 ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 03/31/2021 ms.locfileid: "106059446" --- # <a name="customer-managed-keys-for-azure-storage-encryption"></a>Chiavi gestite dal cliente per la crittografia di archiviazione di Azure È possibile usare la propria chiave di crittografia per proteggere i dati nell'account di archiviazione. Quando si specifica una chiave gestita dal cliente, tale chiave viene usata per proteggere e controllare l'accesso alla chiave che crittografa i dati. Le chiavi gestite dal cliente offrono maggiore flessibilità per gestire i controlli di accesso. Per archiviare le chiavi gestite dal cliente, è necessario usare uno dei seguenti archivi chiavi di Azure: - [Azure Key Vault](../../key-vault/general/overview.md) - [Modulo di protezione hardware (HSM) gestito Azure Key Vault (anteprima)](../../key-vault/managed-hsm/overview.md) È possibile creare chiavi personalizzate e archiviarle nell'insieme di credenziali delle chiavi o nel modulo di protezione hardware gestito oppure è possibile usare le API Azure Key Vault per generare chiavi. 
The storage account and the key vault or managed HSM must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.

> [!IMPORTANT]
> Encryption with customer-managed keys stored in Azure Key Vault Managed HSM is currently in **preview**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration.

## <a name="about-customer-managed-keys"></a>About customer-managed keys

The following diagram shows how Azure Storage uses Azure Active Directory and a key vault or managed HSM to make requests using the customer-managed key:

![Diagram showing how customer-managed keys work in Azure Storage](media/customer-managed-keys-overview/encryption-customer-managed-keys-diagram.png)

The following list explains the numbered steps in the diagram:

1. An Azure Key Vault admin grants permissions to encryption keys to the managed identity that is associated with the storage account.
2. An Azure Storage admin configures encryption with a customer-managed key for the storage account.
3. Azure Storage uses the managed identity that is associated with the storage account to authenticate access to Azure Key Vault via Azure Active Directory.
4. Azure Storage wraps the account encryption key with the customer key in Azure Key Vault.
5. For read/write operations, Azure Storage sends requests to Azure Key Vault to unwrap the account encryption key to perform encryption and decryption operations.

## <a name="customer-managed-keys-for-queues-and-tables"></a>Customer-managed keys for queues and tables

Data stored in Queue and Table storage is not automatically protected by a customer-managed key when customer-managed keys are enabled for the storage account. You can optionally configure these services to be included in this protection at the time that you create the storage account. For more information about how to create a storage account that supports customer-managed keys for queues and tables, see [Create an account that supports customer-managed keys for tables and queues](account-encryption-key-create.md).

Data in Blob storage and Azure Files is always protected by customer-managed keys when customer-managed keys are configured for the storage account.

## <a name="enable-customer-managed-keys-for-a-storage-account"></a>Enable customer-managed keys for a storage account

When you configure a customer-managed key, Azure Storage wraps the root data encryption key for the account with the customer-managed key in the associated key vault or managed HSM. Enabling customer-managed keys does not impact performance, and takes effect immediately.
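As a rough sketch of what enabling this looks like from the Azure CLI, the snippet below builds an `az storage account update` command with the encryption parameters that command accepts. The resource names are hypothetical placeholders, and the script only prints the command (dry run) so it can be inspected before running; the linked how-to articles remain the authoritative procedure.

```shell
# Sketch: enable customer-managed keys on an existing storage account.
# All names below are placeholders, not real resources.
DRY_RUN=1
RG="my-resource-group"
ACCOUNT="mystorageaccount"
VAULT_URI="https://my-vault.vault.azure.net"
KEY_NAME="my-key"

# Omitting --encryption-key-version opts the account in to automatic
# key-version updates (Azure Storage checks daily for a new version).
CMD="az storage account update \
  --resource-group $RG \
  --name $ACCOUNT \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault $VAULT_URI \
  --encryption-key-name $KEY_NAME"

if [ "$DRY_RUN" = "1" ]; then
  echo "$CMD"
else
  eval "$CMD"
fi
```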
When you enable or disable customer-managed keys, or when you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account does not need to be re-encrypted.

Customer-managed keys can be enabled only on existing storage accounts. The key vault or managed HSM must be configured to grant permissions to the managed identity that is associated with the storage account. The managed identity is available only after the storage account is created.

You can switch between customer-managed keys and Microsoft-managed keys at any time. For more information about Microsoft-managed keys, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management).

To learn how to configure Azure Storage encryption with customer-managed keys in a key vault, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md). To configure customer-managed keys in a managed HSM, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM (preview)](customer-managed-keys-configure-key-vault-hsm.md).

> [!IMPORTANT]
> Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned to your storage account behind the scenes. If you subsequently move the subscription, resource group, or storage account from one Azure AD directory to another, the managed identity associated with the storage account is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).

Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072, and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).

Using a key vault or managed HSM has associated costs. For more information, see [Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/).

## <a name="update-the-key-version"></a>Update the key version

When you configure encryption with customer-managed keys, you have two options for updating the key version:

- **Automatically update the key version:** To automatically update a customer-managed key when a new version is available, omit the key version when you enable encryption with customer-managed keys for the storage account. If the key version is omitted, then Azure Storage checks the key vault or managed HSM daily for a new version of the customer-managed key. Azure Storage automatically uses the latest version of the key.
- **Manually update the key version:** To use a specific version of a key for Azure Storage encryption, specify that key version when you enable encryption with customer-managed keys for the storage account. If you specify the key version, then Azure Storage uses that version for encryption until you manually update the key version.

When the key version is explicitly specified, then you must manually update the storage account to use the new key version URI when a new version is created. To learn how to update the storage account to use a new version of the key, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md) or [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM (preview)](customer-managed-keys-configure-key-vault-hsm.md).

When you update the key version, the protection of the root encryption key changes, but the data in your Azure Storage account is not re-encrypted. There is no further action required from the user.

> [!NOTE]
> To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance policies. You can rotate your key manually or create a function to rotate it on a schedule.

## <a name="revoke-access-to-customer-managed-keys"></a>Revoke access to customer-managed keys

You can revoke the storage account's access to the customer-managed key at any time.
After access to customer-managed keys is revoked, or after the key has been disabled or deleted, clients cannot call operations that read from or write to a blob or its metadata. Attempts to call any of the following operations will fail with error code 403 (Forbidden) for all users:

- [List Blobs](/rest/api/storageservices/list-blobs), when called with the `include=metadata` parameter on the request URI
- [Get Blob](/rest/api/storageservices/get-blob)
- [Get Blob Properties](/rest/api/storageservices/get-blob-properties)
- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata)
- [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata)
- [Snapshot Blob](/rest/api/storageservices/snapshot-blob), when called with the `x-ms-meta-name` request header
- [Copy Blob](/rest/api/storageservices/copy-blob)
- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url)
- [Set Blob Tier](/rest/api/storageservices/set-blob-tier)
- [Put Block](/rest/api/storageservices/put-block)
- [Put Block From URL](/rest/api/storageservices/put-block-from-url)
- [Append Block](/rest/api/storageservices/append-block)
- [Append Block From URL](/rest/api/storageservices/append-block-from-url)
- [Put Blob](/rest/api/storageservices/put-blob)
- [Put Page](/rest/api/storageservices/put-page)
- [Put Page From URL](/rest/api/storageservices/put-page-from-url)
- [Incremental Copy Blob](/rest/api/storageservices/incremental-copy-blob)

To call these operations again, restore access to the customer-managed key.

All data operations that are not listed in this section may proceed after customer-managed keys are revoked, or after a key is disabled or deleted.
To revoke access to customer-managed keys, use [PowerShell](./customer-managed-keys-configure-key-vault.md#revoke-customer-managed-keys) or [Azure CLI](./customer-managed-keys-configure-key-vault.md#revoke-customer-managed-keys).

## <a name="customer-managed-keys-for-azure-managed-disks"></a>Customer-managed keys for Azure managed disks

Customer-managed keys are also available for managing encryption of Azure managed disks. Customer-managed keys behave differently for managed disks than for Azure Storage resources. For more information, see [Server-side encryption of Azure managed disks](../../virtual-machines/disk-encryption.md) for Windows or [Server-side encryption of Azure managed disks](../../virtual-machines/disk-encryption.md) for Linux.

## <a name="next-steps"></a>Next steps

- [Azure Storage encryption for data at rest](storage-service-encryption.md)
- [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md)
- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM (preview)](customer-managed-keys-configure-key-vault-hsm.md)
112.255814
992
0.816104
ita_Latn
0.998972
17ec1c86e4f2b26616ac69437c46762dddaf2555
3,905
md
Markdown
READMEs/URF_commands/README.md
erinyoung/UPHL
dbd917d2393e7d18f22f9e21089cfa5be01349b0
[ "MIT" ]
6
2019-02-25T08:28:12.000Z
2022-03-19T03:06:14.000Z
READMEs/URF_commands/README.md
Ikkik/UPHL
dbd917d2393e7d18f22f9e21089cfa5be01349b0
[ "MIT" ]
null
null
null
READMEs/URF_commands/README.md
Ikkik/UPHL
dbd917d2393e7d18f22f9e21089cfa5be01349b0
[ "MIT" ]
3
2019-06-14T17:36:59.000Z
2019-11-13T15:18:48.000Z
# The UPHL-Reference-Free pipeline takes paired-end fastq files to contigs for microbial WGS.

Below is a list of the commands used in UPHL's Reference-Free workflow, for those who wish to use the same commands as UPHL but with their own workflow manager. There are also custom scripts to format the results as [MultiQC](https://github.com/ewels/MultiQC) custom content. As time permits, we hope to add to MultiQC's supported tools and remove our personal custom scripts.

- [seqyclean](https://github.com/ibest/seqyclean)

```
seqyclean -minlen 25 -qual -c /Adapters_plus_PhiX_174.fasta -1 Sequencing_reads/Raw/sample_1.fastq -2 Sequencing_reads/Raw/sample_2.fastq -o Sequencing_reads/QCed/sample_clean
```

- [shovill](https://github.com/tseemann/shovill)

```
shovill --cpu 1 --ram $RAM --outdir shovill_result/sample --R1 Sequencing_reads/QCed/sample_clean_PE1.fastq --R2 Sequencing_reads/QCed/sample_clean_PE2.fastq
```

- [prokka](https://github.com/tseemann/prokka)

```
prokka --cpu 1 --compliant --centre --URF --mincontiglen 500 --outdir Prokka/sample --locustag locus_tag --prefix sample --genus ${mash_result[0]} --species ${mash_result[1]} --force shovill_result/sample/contigs.fa
```

- [fastqc](https://github.com/s-andrews/FastQC)

```
fastqc --outdir fastqc --threads 1 Sequencing_reads/*/*.fastq*
```

- [cg-pipeline](https://github.com/lskatz/CG-Pipeline)

```
run_assembly_shuffleReads.pl -gz Sequencing_reads/QCed/sample_clean_PE1.fastq Sequencing_reads/QCed/sample_clean_PE2.fastq > Sequencing_reads/shuffled/sample_clean_shuffled.fastq.gz
run_assembly_readMetrics.pl Sequencing_reads/shuffled/sample_clean_shuffled.fastq.gz --fast --numcpus 1 -e $genome_length
```

- [quast](https://github.com/ablab/quast)

```
quast.py ALL_assembled/sample_contigs.fa --output-dir quast/sample --threads 1
```

- [multiqc](https://github.com/ewels/MultiQC)

```
multiqc -f --outdir logs --cl_config "prokka_fn_snames: True" .
```

- [mash](https://github.com/marbl/Mash)

```
cat Sequencing_reads/QCed/sample_clean_PE1.fastq Sequencing_reads/QCed/sample_clean_PE2.fastq | mash sketch -m 2 -o mash/sample -
mash dist -p 1 -v 0 /db/RefSeqSketchesDefaults.msh mash/sample.msh | sort -gk3 > mash/sample_mashdist.txt
```

- [seqsero](https://github.com/denglab/SeqSero)

```
SeqSero.py -m 2 -d SeqSero/sample -i Sequencing_reads/QCed/sample_clean_PE1.fastq Sequencing_reads/QCed/sample_clean_PE2.fastq
```

- [abricate](https://github.com/tseemann/abricate)

```
abricate --db serotypefinder --threads 1 shovill_result/sample/contigs.fa > abricate_results/serotypefinder/serotypefinder.sample.out.tab
abricate --summary abricate_results*/serotypefinder/serotypefinder*tab
abricate --db ncbi --threads 1 shovill_result/sample/contigs.fa > abricate_results/ncbi/ncbi.sample.out.tab
abricate --summary abricate_results*/ncbi/ncbi*tab
abricate --db vfdb --threads 1 shovill_result/sample/contigs.fa > abricate_results/vfdb/vfdb.sample.out.tab
abricate --summary abricate_results*/vfdb/vfdb*tab > abricate_results/vfdb/vfdb.summary.txt
```

- [blastn](https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE_TYPE=BlastDocs&DOC_TYPE=Download)

```
blastn -query shovill_result/sample/contigs.fa -out blast/sample.tsv -num_threads 1 -db /blast/blastdb/nt -outfmt '6 qseqid staxids bitscore std' -max_target_seqs 10 -max_hsps 1 -evalue 1e-25
```

- [bwa](http://bio-bwa.sourceforge.net/)

```
bwa index shovill_result/sample/contigs.fa
bwa mem -t 1 shovill_result/sample/contigs.fa Sequencing_reads/QCed/sample_clean_PE1.fastq Sequencing_reads/QCed/sample_clean_PE2.fastq | samtools sort -o bwa/sample.sorted.bam
```

- [blobtools](https://blobtools.readme.io/docs)

```
blobtools create -o blobtools/sample -i shovill_result/sample/contigs.fa -b bwa/sample.sorted.bam -t blast/sample.tsv
blobtools view -i blobtools/sample.blobDB.json -o blobtools/
blobtools plot -i blobtools/sample.blobDB.json -o blobtools/ -r species --format png
```
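The prokka command references `${mash_result[0]}` (genus) and `${mash_result[1]}` (species) without defining them. One plausible way to derive them from the sorted mash output is sketched below; the reference-name format is an assumption (a made-up top hit stands in for the first line of `mash/sample_mashdist.txt`), not the actual naming convention of `RefSeqSketchesDefaults.msh`.

```shell
# Real usage would be:
#   head -n 1 mash/sample_mashdist.txt | cut -f 1 | ...
# Here a fabricated top hit stands in for the file (tab-separated mash dist
# output: reference, query, distance, p-value, shared-hashes).
mash_result=($(printf 'refseq-NZ-123-PRJNA-Salmonella_enterica_plasmid.fna\t-\t0.001\t0\t998/1000\n' \
  | cut -f 1 \
  | sed 's/.*-//; s/\.fna$//' \
  | tr '_' ' '))

echo "genus=${mash_result[0]} species=${mash_result[1]}"
```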
58.283582
379
0.776697
eng_Latn
0.355745
17ec42fed17df0602b386fe7dc4db96d6fa943c0
5,096
md
Markdown
README.md
jschiarizzi/palettable
da78289ac008619b4d3b30593d1b3184f27162bb
[ "MIT" ]
1
2016-09-07T19:03:33.000Z
2016-09-07T19:03:33.000Z
README.md
JosiahRooney/palettable
fe0b32146321a4cb9ac02729bfada1a425e7071f
[ "MIT" ]
null
null
null
README.md
JosiahRooney/palettable
fe0b32146321a4cb9ac02729bfada1a425e7071f
[ "MIT" ]
null
null
null
[![Build Status](https://travis-ci.org/alecortega/palettable.svg?branch=master)](https://travis-ci.org/alecortega/palettable)

# <img src='http://i.imgur.com/580vPI2.png' height='50'> Create color palettes using the knowledge of millions of designers.

**Full Website: https://palettable.io**

**Fun fact:** Palettable has 2,000 daily unique pageviews, 90% of which are from Japan!

![alt tag](http://i63.tinypic.com/16iikx4.png)

Palettable is split up into two separate deployables: a web client and a backend server.

## How to run the application:

Navigate to the client directory and run `yarn`. Navigate to the server directory and run `yarn`. Navigate to the root directory and run `yarn start`. This will spin up both the client and the server on the same process.

Run tests with `yarn test` in either sub-directory.

## Client

### Tech Used: React, Redux, Redux-Observable, Sass

**Why was this stack chosen?**

When a user likes or dislikes a color, a call to Palettable's backend is fired if there are no cached colors left that the user has not already liked or disliked. If a user is using the tool fairly quickly, this results in a high number of asynchronous calls that all depend on one another and have side effects on client state when they resolve. On top of that, these calls may or may not be fired at all if there are still suggested colors in the cache that the user has not yet seen.

Observables lend themselves well to solving this exact problem by using streams to handle the asynchronous calls, and they allow a developer to write the outcome of those streams in a very declarative way. Redux-Observable injects the current Redux state tree into each Observable function so that we can easily dispatch new events based on the previous state tree.

**Other stacks that were considered:**

Apollo-Client and GraphQL: Although apollo-client implements Observables under the hood to handle HTTP requests, graph architecture lends itself better to structured data. In Palettable most of the state needed to power the app is on the client. While apollo-client can store client-side data and is great for continuous asynchronous calls, it was difficult to implement side effects as a result of those calls, and it wasn't the best fit for this use case.

Redux and Redux-Thunk: While the current implementation still does use Redux to store client-side state, Redux-Thunk wasn't the greatest fit due to its imperative style. Handling asynchronous calls and their side effects turned into deeply nested Promises and became very difficult to test and reason about.

### Data Flow:

![alt tag](http://i64.tinypic.com/2z9bb07.png)

When a user likes or dislikes a color, the action is sent through the redux-observable middleware and the current cache is checked. If there are still suggested colors cached that the user has not disliked or liked, then the color is either changed or a new one is added. Otherwise, all disliked and liked colors are sent to the `/api/palette` endpoint and a new palette is fetched. Once the cache is updated with new colors, the color is either changed or a new one is added.

### Redux state tree in action:

![](https://user-images.githubusercontent.com/6596787/44816030-11bc9c00-abaf-11e8-99a7-c0f5d2bede61.gif)

## Server

### Tech Used: Express

**Why was this stack chosen?**

Node is a pretty lightweight server choice and can be spun up fairly easily. We needed a backend that could send a different response based on the result of another controller, plus the ability to dynamically render a `.png` file. By using Express' built-in middleware architecture we could cleanly write fallbacks, and we can build images using an API that's very similar to the front-end canvas API.
### Data Flow:

![](https://user-images.githubusercontent.com/6596787/44816092-3d3f8680-abaf-11e8-9245-82c049864ebc.png)

Palettable gives the user the ability to create a palette with _any_ color, but our suggestions are powered by the ColourLovers API, so there isn't a human-generated palette for every hex code imaginable. To get around this, we search the API using several different methods.

**1. Search by exact hex code**

First, we check if there is a human-generated palette containing the exact hex code we're searching for. We flatten all the palettes returned from the API, and if there are 5 colors that have not been previously liked or disliked then we return those back to the client; otherwise we try another method.

**2. Search by exact search term**

If there isn't a palette that matches the exact hex code we want, then we employ a bit of witchcraft. We transform the hex code into a string that describes it. For instance, the hex code of `#0000FF` may be transformed into the string `"cobalt blue"`, and we query the API with that search term. This allows us to query for a palette that resembles the one we're looking for while still giving the user the ability to create a palette with any color they wish to use.

**3. Search for random palette**

If we have exhausted all our options, then we return a random palette back to the client.
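The hex-to-search-term transformation described in step 2 could be implemented as a nearest-neighbor lookup against a table of named colors in RGB space. The sketch below is purely illustrative: the name table is a tiny stand-in, not Palettable's actual lookup data, and `hexToSearchTerm` is a hypothetical helper name.

```javascript
// Tiny illustrative stand-in for a named-color table (name -> [r, g, b]).
const NAMED_COLORS = {
  'cobalt blue': [0, 71, 171],
  'crimson': [220, 20, 60],
  'forest green': [34, 139, 34],
  'goldenrod': [218, 165, 32],
};

// Parse "#RRGGBB" into an [r, g, b] triple.
function hexToRgb(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

// Squared Euclidean distance is enough for ranking candidates.
function distance(a, b) {
  return a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);
}

// Return the name of the closest entry in the table.
function hexToSearchTerm(hex) {
  const rgb = hexToRgb(hex);
  let best = null;
  let bestDist = Infinity;
  for (const [name, namedRgb] of Object.entries(NAMED_COLORS)) {
    const d = distance(rgb, namedRgb);
    if (d < bestDist) {
      bestDist = d;
      best = name;
    }
  }
  return best;
}

console.log(hexToSearchTerm('#0000FF')); // → "cobalt blue" (nearest stand-in entry)
```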
63.7
482
0.781397
eng_Latn
0.998962
17ed6545aa5158a4505d94c22ccb1c649fa38e33
1,072
md
Markdown
docs/framework/wcf/diagnostics/tracing/system-servicemodel-warnservicehealthenablednobaseaddress.md
jorgearimany/docs.es-es
49946e3dab59d68100683a45a4543a1fb338e882
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-warnservicehealthenablednobaseaddress.md
jorgearimany/docs.es-es
49946e3dab59d68100683a45a4543a1fb338e882
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-warnservicehealthenablednobaseaddress.md
jorgearimany/docs.es-es
49946e3dab59d68100683a45a4543a1fb338e882
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: System.ServiceModel.WarnServiceHealthEnabledNoBaseAddress
ms.date: 10/30/2018
ms.openlocfilehash: ec275f545d3dd09a6a80ac4be5ebfd53891f155c
ms.sourcegitcommit: 0be8a279af6d8a43e03141e349d3efd5d35f8767
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 04/18/2019
ms.locfileid: "59140470"
---
# <a name="systemservicemodelwarnservicehealthenablednobaseaddress"></a>System.ServiceModel.WarnServiceHealthEnabledNoBaseAddress

System.ServiceModel.WarnServiceHealthEnabledNoBaseAddress

## <a name="description"></a>Description

The ServiceHealthBehavior health page is enabled at a relative address and cannot be created because there is no base address.

## <a name="see-also"></a>See also

- [Tracing](../../../../../docs/framework/wcf/diagnostics/tracing/index.md)
- [Using Tracing to Troubleshoot Your Application](../../../../../docs/framework/wcf/diagnostics/tracing/using-tracing-to-troubleshoot-your-application.md)
- [Administration and Diagnostics](../../../../../docs/framework/wcf/diagnostics/index.md)
48.727273
171
0.787313
spa_Latn
0.344735
17edb18134c76709007282d17a1acde21f9f1522
11,722
md
Markdown
README.md
reactor/reactive-stream-extensions
9db7f7829460ea9678fbc4746e4e1215584fdf23
[ "Apache-2.0" ]
1
2015-12-23T14:27:17.000Z
2015-12-23T14:27:17.000Z
README.md
spring-attic/reactive-streams-commons
9db7f7829460ea9678fbc4746e4e1215584fdf23
[ "Apache-2.0" ]
1
2015-12-23T00:18:07.000Z
2015-12-23T00:18:07.000Z
README.md
reactor/reactive-stream-extensions
9db7f7829460ea9678fbc4746e4e1215584fdf23
[ "Apache-2.0" ]
1
2022-03-01T12:48:31.000Z
2022-03-01T12:48:31.000Z
# reactive-streams-commons is no longer actively maintained by VMware, Inc.

# reactive-streams-commons

A joint research effort for building highly optimized Reactive-Streams compliant operators. Current implementors include [RxJava2](https://github.com/ReactiveX/RxJava) and [Reactor](https://github.com/reactor/reactor-core).

Java 8 required.

<a href='https://travis-ci.org/reactor/reactive-streams-commons/builds'><img src='https://travis-ci.org/reactor/reactive-streams-commons.svg?branch=master'></a>

## Maven

```
repositories {
    maven { url 'https://repo.spring.io/libs-snapshot' }
}

dependencies {
    compile 'io.projectreactor:reactive-streams-commons:0.6.0.BUILD-SNAPSHOT'
}
```

[Snapshot](https://repo.spring.io/libs-snapshot/io/projectreactor/reactive-streams-commons/) directory.

## Operator-fusion documentation

- [Operator fusion, 1/2](https://akarnokd.blogspot.hu/2016/03/operator-fusion-part-1.html)
- [Operator fusion, 2/2](https://akarnokd.blogspot.hu/2016/04/operator-fusion-part-2-final.html)
- [Fusion Matrix](https://rawgit.com/reactor/reactive-streams-commons/master/fusion-matrix.html)

## Supported datasources

I.e., converts non-reactive data sources into `Publisher`s.

- `PublisherAmb` : relays signals of that source Publisher which responds first with any signal
- `PublisherArray` : emits the elements of an array
- `PublisherCallable` : emits a single value returned by a `Callable`
- `PublisherCompletableFuture` : emits a single value produced by a `CompletableFuture`
- `PublisherConcatArray` : concatenates an array of `Publisher`s
- `PublisherConcatIterable` : concatenates an `Iterable` sequence of `Publisher`s
- `PublisherDefer` : calls a `Supplier` to create the actual `Publisher` the `Subscriber` will be subscribed to
- `PublisherEmpty` : does not emit any value and calls `onComplete`; use `instance()` to get its singleton instance with the proper type parameter
- `PublisherError` : emits a constant or generated Throwable exception
- `PublisherFuture` : awaits and emits a single value emitted by a `Future`
- `PublisherGenerate` : generates signals one-by-one via a function
- `PublisherInterval` : periodically emits an ever-increasing sequence of long values
- `PublisherIterable` : emits the elements of an `Iterable`
- `PublisherJust` : emits a single value
- `PublisherNever` : doesn't emit any signal other than `onSubscribe`; use `instance()` to get its singleton instance with the proper type parameter
- `PublisherRange` : emits a range of integer values
- `PublisherStream` : emits the elements of a `Stream`
- `PublisherTimer` : emits a single 0L after a specified amount of time
- `PublisherUsing` : creates a resource, streams values in a Publisher derived from the resource and releases the resource when the sequence completes or the Subscriber cancels
- `PublisherZip` : repeatedly takes one item from all source Publishers and runs it through a function to produce the output item

## Supported transformations

- `ConnectablePublisherAutoConnect` : given a ConnectablePublisher, it connects to it once the given amount of subscribers subscribed
- `ConnectablePublisherRefCount` : given a ConnectablePublisher, it connects to it once the given amount of subscribers subscribed to it and disconnects once all subscribers cancelled
- `ConnectablePublisherPublish` : allows dispatching events from a single source to multiple subscribers similar to a Processor, but the connection can be manually established or stopped
- `PublisherAccumulate` : accumulates the source values with an accumulator function and returns the intermediate results of this function application
- `PublisherAggregate` : aggregates the source values with an aggregator function and emits the last result
- `PublisherAll` : emits a single true if all values of the source sequence match the predicate
- `PublisherAny` : emits a single true if any value of the source sequence matches the predicate
- `PublisherAwaitOnSubscribe` : makes sure onSubscribe can't trigger the onNext events until it returns
- `PublisherBuffer` : buffers a certain number of subsequent elements and emits the buffers
- `PublisherBufferBoundary` : buffers elements into continuous, non-overlapping lists where another Publisher signals the start/end of the buffer regions
- `PublisherBufferBoundaryAndSize` : buffers elements into continuous, non-overlapping lists where each buffer is emitted when it becomes full or another Publisher signals the boundary of the buffer regions
- `PublisherBufferStartEnd` : buffers elements into possibly overlapping buffers whose boundaries are determined by a start Publisher's element and a signal of a derived Publisher
- `PublisherCollect` : collects the values into a container and emits it when the source completes
- `PublisherCombineLatest` : combines the latest values of many sources through a function
- `PublisherConcatMap` : maps each upstream value into a Publisher and concatenates them into one sequence of items
- `PublisherCount` : counts the number of elements the source sequence emits
- `PublisherDistinct` : filters out elements that have been seen previously according to a custom collection
- `PublisherDistinctUntilChanged` : filters out subsequent and repeated elements
- `PublisherDefaultIfEmpty` : emits a single value if the source is empty
- `PublisherDelaySubscription` : delays the subscription to the main source until the other source signals a value or completes
- `PublisherDetach` : detaches both the child Subscriber and the Subscription on termination or cancellation
- `PublisherDrop` : runs the source in unbounded mode and drops values if the downstream doesn't request fast enough
- `PublisherElementAt` : emits the element at the specified index location
- `PublisherFilter` : filters out values which don't pass a predicate
- `PublisherFlatMap` : maps a sequence of values each into a Publisher and flattens them back into a single sequence, interleaving events from the various inner Publishers
- `PublisherFlattenIterable` : concatenates values from Iterable sequences generated via a mapper function
- `PublisherGroupBy` : groups source elements into their own Publisher sequences via a key function
- `PublisherIgnoreElements` : ignores values and passes only the terminal signals along
- `PublisherIsEmpty` : returns a single true if the source sequence is empty
- `PublisherLatest` : runs the source in unbounded mode and emits the latest value if the downstream doesn't request fast enough
- `PublisherLift` : maps the downstream Subscriber into an upstream Subscriber, which allows implementing custom operators via lambdas
- `PublisherMap` : maps values to other values via a function
- `PublisherPeek` : peeks into the lifecycle and signals of a stream
- `PublisherReduce` : aggregates the source values with the help of an accumulator function and emits the final accumulated value
- `PublisherRepeat` : repeatedly streams the source sequence a fixed or unlimited number of times
- `PublisherRepeatPredicate` : repeatedly streams the source if a predicate returns true
- `PublisherRepeatWhen` : repeats a source when a companion sequence signals an item in response to the main's completion signal
- `PublisherResume` : if the source fails, the stream is resumed by another Publisher returned by a function for the failure exception
- `PublisherRetry` : retries a failed source sequence a fixed or unlimited number of times
- `PublisherRetryPredicate` : retries if a predicate function returns true for the exception
- `PublisherRetryWhen` : retries a source when a companion sequence signals an item in response to the main's error signal
- `PublisherSample` : samples the main source whenever the other Publisher signals a value
- `PublisherScan` : aggregates the source values with the help of an accumulator function and emits the intermediate results
- `PublisherSingle` : expects the source to emit only a single item
- `PublisherSkip` : skips a specified amount of values
- `PublisherSkipLast` : skips the last N elements
- `PublisherSkipUntil` : skips values until another sequence signals a value or completes
- `PublisherSkipWhile` : skips values while the predicate returns true
- `PublisherStreamCollector` : collects the values from the source sequence into a `java.util.stream.Collector` instance; see the `Collectors` utility class in Java 8+
- `PublisherSwitchIfEmpty` : continues with another sequence if the first sequence turns out to be empty
- `PublisherSwitchMap` : switches to and streams a Publisher generated via a function whenever the upstream signals a value
- `PublisherTake` : takes a specified amount of values and completes
- `PublisherTakeLast` : emits only the last N values the source emitted before its completion
- `PublisherTakeWhile` : relays values while a predicate returns true for the values (checked before each value)
- `PublisherTakeUntil` : relays values until another Publisher signals
- `PublisherTakeUntilPredicate` : relays values until a predicate returns true (checked after each value)
- `PublisherThrottleFirst` : takes a value from upstream, then uses the duration provided by a generated Publisher to skip other values until that other Publisher signals
- `PublisherThrottleTimeout` : emits the last value from upstream only if there were no newer values emitted during the time window provided by a publisher for that particular last value
- `PublisherTimeout` : uses per-item `Publisher`s that, when they fire, mean the timeout for that particular item unless a new item arrives in the meantime
- `PublisherWindow` : splits the source sequence into possibly overlapping windows of a given size
- `PublisherWindowBatch` : batches the source sequence into continuous, non-overlapping windows where the length of the windows is determined by a fresh boundary Publisher or a maximum number of elements in that window
- `PublisherWindowBoundary` : splits the source sequence into continuous, non-overlapping windows where the window boundary is signalled by another Publisher
- `PublisherWindowBoundaryAndSize` : splits the source sequence into continuous, non-overlapping windows where the window boundary is signalled by another Publisher or if a window received a specified amount of values
- `PublisherWindowStartEnd` : splits the source sequence into potentially overlapping windows controlled by a start Publisher and a derived end Publisher for each start value
- `PublisherWithLatestFrom` : combines values from a master source with the latest values of another Publisher via a function
- `PublisherZip` : repeatedly takes one item from all source Publishers and runs it through a function to produce the output item
- `PublisherZipIterable` : pairwise combines a sequence of values with elements from an iterable

## Supported extractions

I.e., these allow leaving the reactive-streams world.

- `BlockingIterable` : an iterable that consumes a Publisher in a blocking fashion
- `BlockingFuture` : can return a future that consumes the source entirely and returns the very last value
- `BlockingStream` : allows creating sequential and parallel `java.util.stream.Stream` flows out of a source Publisher
- `PublisherBase.blockingFirst` : returns the very first value of the source, blocking if necessary; returns null for an empty sequence
- `PublisherBase.blockingLast` : returns the very last value of the source, blocking if necessary; returns null for an empty sequence
- `PublisherBase.peekLast` : returns the last value of a synchronous source or likely null for other or empty sequences
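The distinction drawn above between `PublisherScan` and `PublisherReduce` (every intermediate accumulation vs. only the final value) is independent of the reactive machinery; a minimal pull-based sketch of the same semantics in Python (an illustration only, not part of this Java library):

```python
from itertools import accumulate
from functools import reduce

values = [1, 2, 3, 4]

# PublisherScan-like: emit every intermediate accumulated value.
scanned = list(accumulate(values, lambda acc, x: acc + x))
print(scanned)  # [1, 3, 6, 10]

# PublisherReduce-like: emit only the final accumulated value.
reduced = reduce(lambda acc, x: acc + x, values)
print(reduced)  # 10
```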
---
name: Add term - gene related syndrome
about: New term suggestion for a term that is defined by a gene, such as NAA10-related syndrome
title: "[NTR/gene]"
labels: New term request
assignees: nicolevasilevsky

---

**Preferred gene-related syndrome label**
For example: NAA10-related syndrome

**Synonyms**

**Parent term (use [OLS](https://www.ebi.ac.uk/ols/ontologies/mondo), or your favorite ontology browser)**

**Definition**
Please write the definition in the format: Any [parent class] in which the cause of the disease is a mutation in the [gene name] gene.
For example: Any congenital myopathy in which the cause of the disease is a mutation in the TPM3 gene.

**Definition source** (Please give PubMed ID, if applicable, in format PMID:#######)

**Children terms (if applicable)**
Should any existing terms be re-classified as children underneath this new proposed term?

**Your nano-attribution (ORCID) or URL for a working group**
If you don't have an ORCID, you can sign up for one [here](https://orcid.org/)
# assign

## Design Background

Collaborating on a large repository requires assigning PRs or issues to specific collaborators to follow up on, but without write access, you can't assign them directly through the GitHub page.

[assign](https://github.com/kubernetes/test-infra/tree/master/prow/plugins/assign) provides a command that allows the bot to assign collaborators and request reviewers.

## Design

The plugin was designed and developed by the Kubernetes community and provides two commands:

- `/[un]assign @someone hi-rustin`: assign or un-assign the Issue/PR to someone and hi-rustin.
- `/[un]cc @someone hi-rustin`: request or un-request someone and hi-rustin to review the PR.

Note: If you do not specify a GitHub account after the command, it defaults to yourself.

## Parameter Configuration

No configuration

## Reference documentations

- [command help](https://prow.tidb.io/plugins?repo=ti-community-infra%2Ftichi)
- [code](https://github.com/kubernetes/test-infra/tree/master/prow/plugins/assign)

## Q&A

### Why do you support usernames that do not start with `@`?

> https://github.com/ti-community-infra/tichi/issues/426

When a username starts with `@`, GitHub automatically sends an email to the corresponding user. Another notification email will be sent by the bot when the user has been assigned or requested to review. To reduce the number of unnecessary emails, `assign` allows usernames that do not start with `@`.
# Edgar Bautista Cabrera

Software developer with more than 20 years of experience. I have worked on the implementation of solutions for different lines of business (telecommunications, government, banking). Most of the projects were based on Java and JavaScript.

I'm familiar with the frameworks and libraries below:

**Java**
* Spring
* Spring Batch
* Spring Boot
* Hibernate
* Wicket

**JavaScript**
* React
* Webpack
* Babel
* Mocha
* Chai

## Projects

In this section I will add some of my projects.
## Introduction

[![crates.io](https://img.shields.io/crates/v/ndarray-tensorflow.svg)](https://crates.io/crates/ndarray-tensorflow)
[![docs.rs](https://docs.rs/ndarray-tensorflow/badge.svg)](https://docs.rs/ndarray-tensorflow/)
[![Travis CI](https://img.shields.io/travis/danieldk/ndarray-tensorflow.svg)](https://travis-ci.org/danieldk/ndarray-tensorflow)

This crate provides a wrapper for the [`Tensor`](https://tensorflow.github.io/rust/tensorflow/struct.Tensor.html) type of the [`tensorflow` crate](https://crates.io/crates/tensorflow) that can create [`ArrayView`](https://docs.rs/ndarray/0.12.1/ndarray/type.ArrayView.html) and [`ArrayViewMut`](https://docs.rs/ndarray/0.12.1/ndarray/type.ArrayViewMut.html) instances. This makes it possible to use tensors through the [`ndarray`](https://crates.io/crates/ndarray) API.
---
layout: post
title: Time Limits In RL
categories: [Research Papers, Reinforcement Learning]
mathjax: true
---

## Time Limits In Reinforcement Learning

[[Paper]](https://arxiv.org/abs/1712.00378)

### Metadata

Paper was published at ICML 2019.

### Motivation

Practically all infinite-horizon and finite-horizon tasks in RL are dealt with as 'fixed' time horizon tasks. What are the implications of this?

### Paper Summary

The goal of an agent in RL is to maximize the discounted sum of future rewards. In the case of finite-horizon tasks, the following form is valid by assuming that $R_t = 0 \ \ \forall \ \ t>T$ where $T$ is the length of the horizon.

$$
G_t = R_{t+1} + \gamma R_{t+2}+... = \sum_{k=1}^{\infty} \gamma^{k-1}R_{t+k}
$$

However, often the maximum length of an episode is fixed (e.g. Atari games are only played till 128 steps maximum in the PPO2 implementation of baselines) to a certain number for ease of training. In such cases, we can rewrite the return in a computationally feasible form as below:

$$
G_{t:T} = R_{t+1} + \gamma R_{t+2} + ... + \gamma^{T-t-1} R_{T} = \sum_{k=1}^{T-t} \gamma^{k-1} R_{t+k}
$$

Now the authors note that the task of the RL agent may be to either

1. Maximize its reward over the fixed time period $[0,T]$, i.e. a **time limited task.**
2. Maximize its reward over an indefinite time period $[0, \infty]$, i.e. a **time unlimited task.**

Authors note that for time limited tasks, **the Markov state must contain the time index or, in other words, a stationary policy does not exist for time limited tasks.** Hence, the only solution is to learn policies that are dependent on time. So, the time left, that is $T-t$, is provided to the agent as input after normalizing it in the range $[-1,1]$. Authors give two examples which elaborate on this point.

- Further, they show that OpenAI Gym environments (walker, hopper, reacher, swimmer etc.), though philosophically time unlimited, are actually time limited (Gym's TimeLimit wrapper is included by default in all environments). Here, they show that using time-aware PPO helps achieve superior performance and even allows training with $\gamma = 1$. <u>Results of time-aware PPO with $\gamma=1$ are quite impressive and much better than standard PPO.</u>
- The authors note that terminations due to a time limit (i.e. in our Atari game example, termination due to reaching 128 steps) are fundamentally different from environment termination. However, because they are treated alike, this results in state aliasing (between the actual terminal state and states that become terminal due to the time limit) and sub-optimal policies.
- They hence propose 'partial bootstrapping' for episodes which are terminated artificially. That is, if an episode is terminated artificially (that is, the environment has not provided a 'done' signal but the episode is deemed terminated because it reached the max_time_steps limit), they propose bootstrapping the return. That is, temporal difference targets for the value function should be
  - $r$ at environment termination.
  - $r + \gamma V_{\pi}(s')$ for all other states, including artificial termination.
- They show that with this kind of bootstrapping, agents trained with a small time limit are able to perform well for significantly larger time limits at evaluation time. For example, hopper trained with the bootstrapped return and time limit $T=200$ often reaches a time limit of $T = 10^6$ at test time.

### What to read?

See this [poster](https://fabiopardo.github.io/posters/time_limits_in_rl.pdf). Read the discussion section and experiments (2.4 onwards).
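The partial-bootstrapping rule can be written as a small helper; this is an illustrative sketch, not the paper's code, and the `timed_out` flag is an assumption about how the wrapper reports artificial termination:

```python
def td_target(reward, next_value, env_done, timed_out, gamma=0.99):
    """TD target with partial bootstrapping for time-limit terminations.

    env_done  -- the environment itself signalled a true terminal state
    timed_out -- the episode was cut off artificially by the step limit
    """
    if env_done and not timed_out:
        # True environment termination: no future reward to bootstrap.
        return reward
    # Ordinary step OR artificial (time-limit) termination: bootstrap
    # from the estimated value of the next state.
    return reward + gamma * next_value


print(td_target(1.0, 5.0, env_done=True, timed_out=False))             # 1.0
print(td_target(1.0, 5.0, env_done=False, timed_out=True, gamma=0.5))  # 3.5
```

Note that a timed-out state is treated exactly like a non-terminal step, which is what removes the aliasing between true and artificial terminals.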
# How to Proofread Translations

Our proofreading team is responsible for ensuring that translations accurately reflect the source text. Proofreaders guarantee the high quality of our translated content. All our translations are done by hand, by real people. Proofreading ensures that all of our materials, such as our curriculum, are translated consistently and correctly.

To begin proofreading, visit [our translation platform](https://translate.freecodecamp.org) and sign up. Select "Go to console" in the top navigation bar to switch from the public view to the workspace view.

## Select a File

You should see a list of projects you have been granted access to. Select the project you would like to proofread, then select the language.

![Image - Proofreading File Tree](https://contribute.freecodecamp.org/images/crowdin/proof-file-tree.png)

You should now see the list of available files. Choose your file by selecting the `Proofread` button on the right of that file, then choosing `Proofreading` from the menu that appears.

> [!NOTE] If you are in this workspace view but would rather [translate a file](how-to-translate-files.md) instead of proofreading, you may select `Crowdsourcing` from that menu instead.

## Proofread Translations

![Image - Proofreading View](https://contribute.freecodecamp.org/images/crowdin/proofread.png)

<!--Add proofread/crowdsource button to the image-->

Here you will see the list of strings in the selected file, with their related translations. The translation displayed here is the one that has received the highest score (between upvotes and downvotes) from the translation community.

While you can view _all_ proposed translations for a given string, the community scores (determined by the upvotes and downvotes) should be taken into consideration when choosing which translation to approve. The community can review proposed translations and recommend which one is the most accurate and clear.

1. This is the original string (in English).
2. This is the matching translated string. The most popular translation proposal, based on upvotes and downvotes, will be displayed here.
3. Clicking this checkmark button will approve that translation.
4. Crowdin will display the status of each string. `Done` means a translation has been approved and will be downloaded on our next Crowdin pull. `Todo` means the string has not been proofread. `Hidden` means the string is locked and _should not be translated_. `Comment` means the string has a related comment.
5. Translations can be selected with the checkboxes and approved here in one bulk action.
6. You can view the community proposed translations, their popularity scores, and Crowdin suggested translations here.
7. This button shows/hides the right-hand side display pane, where you can view translations, comments, translation memory, and glossary terms.
8. Crowdin displays error messages here from the quality assurance checks. In other words, if something does not seem correct in the translation, Crowdin will notify you. These translations should be approved with care.

No additional actions are required once a file has been proofread.

> [!NOTE] Approving a string in the proofreading view will mark it as complete and it will be downloaded in our next pull from Crowdin to GitHub.

## Becoming a proofreader

If you have any questions, or are interested in becoming a proofreader, feel free to reach out to us in our [contributors chat room](https://discord.gg/PRyKn3Vbay). We will typically grant you proofreading access if you have been contributing to freeCodeCamp for a while.

Our staff team and community moderators teams are always looking for kind volunteers like you who help us make high quality translations available to the world.

> [!NOTE] Crowdin will allow you to approve your translations. In general, it is best to allow another proofreader to review your proposed translations as extra safety to ensure there are no errors.

## Creating a channel on Chat for a world language

For the most part we encourage you to use the [contributors chat](https://discord.gg/PRyKn3Vbay) room for all correspondence. However, if the team of volunteer translators grows for a certain language, we can consider creating an additional break-out channel for the language.

If you are already a proofreader and are interested in having a dedicated channel on our chat servers for a specific language, [fill out this form](https://forms.gle/XU5CyutrYCgDYaVZA).
310
0.804671
eng_Latn
0.798306
17f1767409918466916296605cc9ad2192596864
1,603
md
Markdown
content/docs/cli/cordova/prepare/index.md
vveerrgg/ionic-site
ce40b5aef5a456e0e20e88a0553f616f2d019966
[ "Apache-2.0" ]
6
2019-09-09T10:04:07.000Z
2021-05-19T12:14:05.000Z
content/docs/cli/cordova/prepare/index.md
miaoZai/ionic-site
f5f6245c3c9ee1454f9a606bcd7b999258082029
[ "Apache-2.0" ]
4
2018-02-07T03:49:45.000Z
2018-03-10T12:30:50.000Z
content/docs/cli/cordova/prepare/index.md
miaoZai/ionic-site
f5f6245c3c9ee1454f9a606bcd7b999258082029
[ "Apache-2.0" ]
251
2019-06-19T09:42:12.000Z
2022-03-30T04:42:04.000Z
---
layout: fluid/cli_docs_base
category: cli
id: cli-cordova-prepare
page_name: ionic cordova prepare
command_name: ionic cordova prepare
title: ionic cordova prepare - Ionic CLI Documentation
header_sub_title: Ionic CLI
---

{% comment %}
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
DO NOT MODIFY THIS FILE DIRECTLY -- IT IS GENERATED FROM THE CLI REPO
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
{% endcomment %}

# `$ ionic cordova prepare`

Copies assets to Cordova platforms, preparing them for native builds

## Synopsis

```bash
$ ionic cordova prepare [<platform>]
```

## Details

`ionic cordova prepare` will do the following:

- Copy the **www/** directory into your Cordova platforms.
- Transform **config.xml** into platform-specific manifest files.
- Copy icons and splash screens from **resources/** into your Cordova platforms.
- Copy plugin files into specified platforms.

You may wish to use `ionic cordova prepare` if you run your project with Android Studio or Xcode.

Input | Description
----- | ----------
`platform` | The platform you would like to prepare (`android`, `ios`)

Option | Description
------ | ----------
`--no-build` | Do not invoke an Ionic build
`--prod` | Build the application for production
`--aot` | Perform ahead-of-time compilation for this build
`--minifyjs` | Minify JS for this build
`--minifycss` | Minify CSS for this build
`--optimizejs` | Perform JS optimizations for this build

## Examples

```bash
$ ionic cordova prepare
$ ionic cordova prepare ios
$ ionic cordova prepare android
```
---
name: "🐛 Bug report"
about: "Create a report to help me improve CoCart: JavaScript Library"
title: "ISBAT ..."
labels: "priority:low"
assignees: ''

---

<!-- Hi there! This form is for reporting bugs and issues specific to the CoCart: JavaScript Library. This is not a support portal. -->

<!-- Please be as descriptive as possible; issues lacking the below details, or for any other reason than to report a bug, may be closed without action. -->

## Prerequisites

<!-- Mark completed items with an [x] -->

- [ ] I have searched for similar issues in both open and closed tickets and cannot find a duplicate.
- [ ] I have attempted to find the simplest possible steps to reproduce the issue.
- [ ] I have included a failing test as a pull request (Optional)
- [ ] I have installed the requirements to run this library.

## Steps to reproduce the issue

<!-- I need to be able to reproduce the bug in order to fix it so please be descriptive! -->

1.
2.
3.

## Expected/actual behaviour

When I follow those steps, I see...

I was expecting to see...

## Isolating the problem

<!-- Mark completed items with an [x] -->

- [ ] This bug happens when only the WooCommerce and CoCart plugins are active.
- [ ] This bug happens with the latest release of WooCommerce active.
- [ ] This bug happens only when I authenticate as a customer.
- [ ] This bug happens only when I authenticate as administrator.
- [ ] I can reproduce this bug consistently using the steps above.

## WordPress Environment

<details>

```
Go to "WooCommerce > System Status" then copy and paste the system status report here.
```

</details>
# About Carts

The Wix Stores Carts APIs provide your app with access to the site's customers' cart data.

<blockquote class='important'>
 <p>
 <strong>Important:</strong><br/>
 This Carts API will be deprecated in the second half of 2020, when a completely new version will be deployed.
 </p>
</blockquote>

A cart is created the first time a visitor adds something to their cart. A cart is considered abandoned once it has been idle for 1 hour. The Abandoned Cart Webhook is triggered only for carts of site members who are logged in to the site, or for visitors who entered their email address in the checkout process before abandoning their cart.

## Use Case - Abandoned Cart Email Reminder

1. Listen to the [Cart Abandoned](https://dev.wix.com/api/wix-stores#carts.abandoned-carts.cartabandonedevent) and the [Cart Recovered](https://dev.wix.com/api/wix-stores#carts.abandoned-carts.cartrecoveredevent) webhooks.

   <blockquote class='note'>
    <p>
    <strong>Note:</strong><br/>
    To ensure up-to-date information, you can also call the <a href="https://dev.wix.com/api/wix-stores#carts.abandoned-carts.get-abandoned-cart">Get Abandoned Carts</a> endpoint at regular intervals.
    </p>
   </blockquote>

2. When a cart status is changed to Abandoned (an hour after the last cart activity), call the [Get Cart Checkout URL](https://dev.wix.com/api/wix-stores#carts.carts.get-cart-checkout-url) and/or [Get Cart](https://dev.wix.com/api/wix-stores#carts.carts.get-cart) endpoints, as necessary.

3. Enter the relevant cart information into the email to send to the customer reminding them to complete their order.

   <blockquote class='note'>
    <p>
    <strong>Note: </strong><br/>
    If you receive a Cart Recovered webhook before the email is sent, cancel the email - the customer has already completed their order.
    </p>
   </blockquote>

## Testing

Wix site owners can view all abandoned carts in their Wix Dashboard under Stores Orders > Abandoned Carts.
[![GitHub issues](https://img.shields.io/github/issues/whatwedo/docker-base-images.svg)](https://github.com/whatwedo/docker-base-images/issues)
[![build status](https://dev.whatwedo.ch/whatwedo/docker-base-images/badges/v2.3/pipeline.svg)](https://dev.whatwedo.ch/whatwedo/docker-base-images/commits/v2.3)

# whatwedo - Docker Base Images

We at [whatwedo](https://whatwedo.ch/) are building and deploying all applications using Docker containers. For this reason we built some basic Docker images. They are available on [Dockerhub](https://hub.docker.com/u/whatwedo/). You can use them easily in your own projects.

## Images

| Name | Description |
|---|---|
| [whatwedo/base](https://github.com/whatwedo/docker-base-images/tree/v2.3/images/base) | Base image with health check and init system |
| [whatwedo/nginx](https://github.com/whatwedo/docker-base-images/tree/v2.3/images/nginx) | nginx web server |
| [whatwedo/nginx-php](https://github.com/whatwedo/docker-base-images/tree/v2.3/images/nginx-php) | nginx web server and PHP-FPM |
| [whatwedo/php](https://github.com/whatwedo/docker-base-images/tree/v2.3/images/php) | PHP interpreter |
| [whatwedo/symfony5](https://github.com/whatwedo/docker-base-images/tree/v2.3/images/symfony5) | Symfony image based on nginx and PHP-FPM |
| [whatwedo/yarn](https://github.com/whatwedo/docker-base-images/tree/v2.3/images/yarn) | yarn package manager |

## Versions

| Tag | State | OS | PHP Version |
|---|---|---|---|
| `v2.3`, `v2.3-[BUILD-DATE]` | Stable | Alpine 3.12 | PHP 8.0 |
| `v2.2`, `v2.2-[BUILD-DATE]` | Security fixes only | Alpine 3.11 | PHP 7.4 |

## Directory/File Layout

The following table shows the directory layout of this repository:

| Folder | Description |
|---|---|
| `images` | `Dockerfile`, additional files and READMEs for the different images |
| `shared` | Files which are used by several images |
| `build_order` | File defining the order used while building all images |
| `build.sh` | Script for building and/or pushing all or single images |

## Bugs and Issues

If you have any problems with this image, feel free to open a new issue in our issue tracker https://github.com/whatwedo/docker-base-images/issues.

## License

This image is licensed under the MIT License. The full license text is available under https://github.com/whatwedo/docker-base-images/blob/v2.3/LICENSE.

## Further information

There are a number of images we are using at [whatwedo](https://whatwedo.ch/). Feel free to use them. More information about the other images is available in [our Github repo](https://github.com/whatwedo/docker-base-images).
# **Buttons as Inputs**

This demonstrates how to use the GPIO as inputs with buttons on an ATmega328P.

[![Buttons as Inputs 🔴 ATmega328P Programming #5 AVR microcontroller with Atmel Studio](https://img.youtube.com/vi/AKv3cxVH9y0/0.jpg)](https://www.youtube.com/watch?v=AKv3cxVH9y0 "Buttons as Inputs 🔴 ATmega328P Programming #5 AVR microcontroller with Atmel Studio")

☕Coffee Funds☕

Shekels: https://www.paypal.me/bindertronics9/5

Patreon: https://www.patreon.com/BinderTronics

Bitcoin: 19nohZzWXxVuZ9tZvw8Pvhajt5khG5mspW

Ethereum: 0x5fe29789CDaE8c73C9791bEe36c7ad5db8511D39
# Pokemon Go GIS Tools

Some scripts to help organize Pokemon Go Points of Interest.

## Some useful regex for munging data

- `^(".*"),(-?[0-9\.]+),(-?[0-9\.]+)$` => `$1\t$2\t$3`
- `^(.*)\.$` => `"$1"`
- `^"(.*)"\t(-?[0-9\.]+)\t(-?[0-9\.]+)\t.*$` => `{ name: "$1", lat: $2, lng: $3 },`
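These substitutions work in any PCRE-style engine; a quick sketch of the first pair (quoted-name CSV to tab-separated fields) in Python, where the sample row is made up for illustration:

```python
import re

# First regex/replacement pair from the list above:
# ("name",lat,lng) -> name<TAB>lat<TAB>lng
pattern = re.compile(r'^(".*"),(-?[0-9\.]+),(-?[0-9\.]+)$')

line = '"Central Park Fountain",40.773,-73.971'  # hypothetical POI row
tsv = pattern.sub(r'\1\t\2\t\3', line)
print(tsv)
```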
# ElasticSearch

The purpose of this repository is to demonstrate my command of how to manage, support and develop solutions using ElasticSearch. Each folder in this repository represents solutions that I've created in a particular language.

If you have any questions and/or comments on how I can improve my code base, you can email me at brandonmichaelhunter@gmail.com.
# Character Encoding Cleaner

## Introduction

This script is for fixing illegal encodings in text files. It's designed to clean up corrupt character sequences in UTF-8 files. The most common cause of such corruption is opening a UTF-8 encoded file as though it were ISO-8859-1, and then saving it as UTF-8. This double-encodes the UTF-8 byte sequences.

This script makes no attempt to intelligently reverse such double encoding. Rather it detects and displays sequences of non-ascii characters (0x80-0xFF) in context, and allows the user to enter mappings for each of these in a mappings file. Any byte sequence which is a known target of a mapping is allowed to remain in the output file.

## Required gems

* gem install colorize

## Usage

Imagine you have a file with corrupted encodings called `badchars.csv`. Invoke the script like this:

```
$ ./clean_encoding.rb badchars.csv fixed.csv
```

This tells the script to read `badchars.csv`, apply any known mappings (read from `mappings.txt`) and output the result to `fixed.csv`.

If an unknown sequence of non-ascii characters is detected, it will be displayed, highlighted in red, with a bit of context. The `mappings.txt` file will be updated with the new mapping and 'TODO':

```
\xC3\x83\xC6\x92\xC3\x82\xE2\x80\x9A\xC3\x83\xE2\x80\x9A\xC3\x82\xC2\xA3:TODO
```

Simply edit the file to indicate the desired replacement:

```
\xC3\x83\xC6\x92\xC3\x82\xE2\x80\x9A\xC3\x83\xE2\x80\x9A\xC3\x82\xC2\xA3:£
```

The `mappings.txt` file should be UTF-8 encoded, so that the replacements can be displayed and edited correctly.
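The double-encoding failure mode described in the introduction is easy to reproduce; a hedged Python illustration (the Ruby script itself deliberately does not attempt this automatic reversal):

```python
# Reproduce the corruption: "£" saved as UTF-8, re-opened as if it
# were ISO-8859-1, then saved as UTF-8 again.
original = "£"
double_encoded = original.encode("utf-8").decode("iso-8859-1").encode("utf-8")
print(double_encoded)  # b'\xc2\xa3' has become b'\xc3\x82\xc2\xa3'

# When the damage really is exactly one round of mis-decoding,
# the inverse transformation recovers the original text:
repaired = double_encoded.decode("utf-8").encode("iso-8859-1").decode("utf-8")
print(repaired)  # £
```

In practice files are often corrupted unevenly (or more than once, as the PMID-style mapping example above suggests), which is why the script relies on explicit user-supplied mappings instead.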
# Header-Only-GL-Helpers

A collection of header files that can ease OpenGL programming.

Filename | Language | Needs OpenGL | Description
---------------------|----------|--------------|-----------------------------------------------------------------
teapot.h | C/C++ | Yes | The basic file that is used in all the demos. It can display the teapot mesh and a lot of other meshes
dynamic_resolution.h | C/C++ | Yes | Implements dynamic resolution and the first shadow mapping pass
im_matrix_stack.h | C/C++ | No | Implements a matrix stack and some other helper methods
sdf.h | C++ | Yes | Signed distance fonts to display text on screen
minimath.h | C/C++ | No | Just a collection of all the math of the other files (no example available)
character.h (WIP) | C/C++ | Recommended | Animated character implementation

# Demos

The following demos are available: test_teapot.c, test_shadows.c, test_matrix_stack.c, test_character_standalone.c, test_character.c and test_sdf.cpp. Command-lines to compile them on Linux, Windows and Emscripten are present at the top of the files.

### Dependencies (demos only)

* glut (or freeglut)
* glew (Windows only)

# Screenshots

### Click on images for WebGL demos

<a href="https://flix01.github.io/emscripten/Header-Only-GL-Helpers/test_shadows.html" target="_blank"><img src="./Screenshots/test_shadows.jpg"></a>
<a href="https://flix01.github.io/emscripten/Header-Only-GL-Helpers/test_shadows.html" target="_blank"><img src="./Screenshots/test_shadows_dr.jpg"></a>
<a href="https://flix01.github.io/emscripten/Header-Only-GL-Helpers/test_character.html" target="_blank"><img src="./Screenshots/test_character_standalone.jpg"></a>
<a href="https://flix01.github.io/emscripten/Header-Only-GL-Helpers/test_sdf.html" target="_blank"><img src="./Screenshots/test_sdf.gif"></a>
Source: src/mps/algorithms.md from emsflatiron/tensornetwork.org (Apache-2.0 license)
# Algorithms for Matrix Product States / Tensor Trains

A wide variety of efficient algorithms have been developed for [[MPS/TT tensor networks|mps]].

## Elementary MPS/TT Algorithms

- [[Retrieving a Single MPS/TT Component|mps/index#component]]
- [[Inner Product of Two MPS/TT|mps/index#innerprod]]
- [[Compression of MPS/TT|mps/index#compression]] (Using Density Matrix Algorithm)

## Summing MPS/TT networks

The following are algorithms for summing two or more MPS/TT networks and approximating the result by a single MPS/TT.

- [[Density Matrix Algorithm|mps/algorithms/dm_sum]] (coming soon)
- [[Direct Algorithm|mps/algorithms/sum_direct]] (coming soon)

## Multiplying a MPS/TT by an MPO

The following are algorithms for multiplying a given MPS/TT tensor network by an MPO tensor network, resulting in a new MPS/TT that approximates the result.

- [[Density Matrix Algorithm|mps/algorithms/dm_mpo]] (coming soon)
- [[Fitting Algorithm|mps/algorithms/zip_up]] (coming soon)
- [[Zip-Up Algorithm|mps/algorithms/zip_up]] (coming soon)

## Time Evolution Algorithms

One reason MPS are very useful in quantum physics applications is that they can be efficiently evolved in real or imaginary time. This capability is useful for studying quantum dynamics and thermalization, and for directly simulating finite-temperature systems.

- [[Trotter Gate Time Evolution|mps/algorithms/trotter_tevol]] (coming soon)
- [[Time-Step Targeting Method|mps/algorithms/targeting_tevol]] (coming soon)
- [[Time-Dependent Variational Principle (TDVP)|mps/algorithms/tdvp_tevol]] (coming soon)
- [[MPO Time Evolution|mps/algorithms/mpo_tevol]] (coming soon)
- [[Krylov Time Evolution|mps/algorithms/krylov_tevol]] (coming soon)

## Solving Linear Equations

The following algorithms involve solving equations such as $A x = \lambda x$ or $A x = b$ where $x$ is a tensor in MPS/TT form.

- [[DMRG &mdash; Density Matrix Renormalization Group|mps/algorithms/dmrg]]. Adaptive algorithm for finding eigenvectors in MPS form. (coming soon)
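To make the elementary inner-product algorithm listed above concrete, here is a minimal NumPy sketch (illustration only, not code from this page) of the left-to-right transfer-matrix contraction. It assumes each site tensor is indexed as (left bond, physical, right bond) with boundary bond dimension 1; those conventions are an assumption of this sketch, not something fixed by the page.

```python
import numpy as np

def mps_inner(A, B):
    """Inner product <A|B> of two MPS/TT with matching physical dimensions.

    Each site tensor is assumed to carry indices (left bond, physical,
    right bond), with boundary bond dimensions equal to 1.
    """
    env = np.ones((1, 1))  # environment tensor carrying the two open left bonds
    for a, b in zip(A, B):
        # Absorb one site into the environment: contract with conj(a) and b,
        # summing over the shared physical index p.
        env = np.einsum('ij,ipk,jpl->kl', env, a.conj(), b)
    return env[0, 0]
```

Because the network is contracted one site at a time, the cost scales polynomially in the bond dimension (roughly O(n d D^3) for n sites, physical dimension d, and bond dimension D) rather than exponentially in n.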
Source: results/crinacle/usound/Empire Ears Phantom X/README.md from Banbeucmas/AutoEq (MIT license)
# Empire Ears Phantom X

See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.

### Parametric EQs

In case of using parametric equalizer, apply preamp of **-7.1dB** and build filters manually with these parameters. The first 5 filters can be used independently. When using independent subset of filters, apply preamp of **-7.0dB**.

| Type    | Fc      | Q    | Gain    |
|:--------|:--------|:-----|:--------|
| Peaking | 16 Hz   | 0.88 | 3.2 dB  |
| Peaking | 189 Hz  | 0.42 | -5.1 dB |
| Peaking | 875 Hz  | 2.37 | 1.9 dB  |
| Peaking | 3565 Hz | 1.34 | 4.0 dB  |
| Peaking | 5805 Hz | 3.13 | 5.2 dB  |
| Peaking | 1531 Hz | 3.49 | -0.9 dB |
| Peaking | 2507 Hz | 4.68 | 0.9 dB  |
| Peaking | 6623 Hz | 4.29 | 1.2 dB  |
| Peaking | 6667 Hz | 3.59 | 0.9 dB  |
| Peaking | 7490 Hz | 2.15 | -1.9 dB |

### Fixed Band EQs

In case of using fixed band (also called graphic) equalizer, apply preamp of **-6.3dB** (if available) and set gains manually with these parameters.

| Type    | Fc       | Q    | Gain    |
|:--------|:---------|:-----|:--------|
| Peaking | 31 Hz    | 1.41 | 1.2 dB  |
| Peaking | 62 Hz    | 1.41 | -1.5 dB |
| Peaking | 125 Hz   | 1.41 | -4.0 dB |
| Peaking | 250 Hz   | 1.41 | -4.3 dB |
| Peaking | 500 Hz   | 1.41 | -1.7 dB |
| Peaking | 1000 Hz  | 1.41 | 1.0 dB  |
| Peaking | 2000 Hz  | 1.41 | -0.4 dB |
| Peaking | 4000 Hz  | 1.41 | 5.7 dB  |
| Peaking | 8000 Hz  | 1.41 | 0.7 dB  |
| Peaking | 16000 Hz | 1.41 | -0.3 dB |

### Graphs

![](https://raw.githubusercontent.com/jaakkopasanen/AutoEq/master/results/crinacle/usound/Empire%20Ears%20Phantom%20X/Empire%20Ears%20Phantom%20X.png)
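Each row in the parametric table is a second-order peaking filter defined by center frequency Fc, quality factor Q, and gain. As an illustration only (AutoEq does not prescribe this particular implementation), one common way to realize such a filter is with the RBJ Audio-EQ-Cookbook biquad formulas; the 48 kHz sample rate below is an assumed value.

```python
import math

def peaking_biquad(fc, q, gain_db, fs=48000):
    """Biquad (b, a) coefficients for an RBJ peaking EQ, normalized so a[0] == 1."""
    amp = 10.0 ** (gain_db / 40.0)        # amplitude factor; peak gain at fc is amp**2
    w0 = 2.0 * math.pi * fc / fs          # center frequency in radians/sample
    alpha = math.sin(w0) / (2.0 * q)      # bandwidth parameter derived from Q
    a0 = 1.0 + alpha / amp
    b = [(1.0 + alpha * amp) / a0, -2.0 * math.cos(w0) / a0, (1.0 - alpha * amp) / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha / amp) / a0]
    return b, a

# First parametric row from the table above: 16 Hz, Q 0.88, +3.2 dB
b, a = peaking_biquad(16, 0.88, 3.2)
```

Feeding each (b, a) pair into any standard biquad/SOS filtering routine reproduces one band; cascading all ten rows, together with the stated preamp, gives the full correction.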