# ImgPen image editor SDK
> Image editor SDK for your websites and web applications. Both **client** and **server** side modules.
An easy-to-install image editor which covers all the standard required features like cropping, resizing, filters and other manipulations. You can adapt photos and images to your website content before publishing them, or edit existing pictures.
It can be perfectly integrated with common CMSs (WordPress, Drupal, Joomla, etc.), popular client frameworks (React, Angular, Vue, etc.), server frameworks (Laravel, Symfony, Yii, RoR, Django, etc.), and any other code using its API.
The great advantage of ImgPen is its set of tools for full-stack application integration.
ImgPen contains both a client script (JS/TypeScript) and a server-side [file uploader](https://npmjs.com/package/@edsdk/file-uploader-server) in **PHP**, **Node** and **Java** for saving images on your server. It also has a microservice feature for those who would like to use the uploader separately or who use a different language on the server side.
Deploy and run your own demo in 1 minute using the [ImgPen example](https://github.com/edsdk/imgpen-example) repository.
## Install
With [npm](https://npmjs.com/) installed, run
```
$ npm install @edsdk/imgpen
```
## Usage
```js
var ImgPen = require('@edsdk/imgpen');
var img = document.querySelector("#img");
ImgPen.editImage({
urlImage: img.src,
urlUploader: 'http://mywebsite/uploader',
urlFiles: 'http://mywebsite/files/',
onSave: function(url) {
img.src = url;
}
})
```
This code will open the ImgPen image editor on the specified image and upload the resulting image file when (and if) the user clicks the save button. The `onSave` callback will update your image with the new picture.
You need to have the [file uploader](https://npmjs.com/package/@edsdk/file-uploader-server) installed. Under the free license, it comes as an Express service module that handles the specified URL and uploads files into a defined directory. For commercial license users, PHP and Java backends are provided as well.
## API
```js
function editImage({
urlImage: string,
urlUploader: string,
urlFiles: string,
dirDestination?: string,
onSave: function(url: string)
});
```
- `urlImage` - URL of image you want to edit (be sure CORS is enabled for external resources)
- `urlUploader` - URL where your [file uploader](https://npmjs.com/package/@edsdk/file-uploader-server) is installed
- `urlFiles` - URL prefix for generating URL to the new image
- `dirDestination` - optional subdirectory where to place a new saved image, from the root of server directory of uploader
- `onSave` - callback with generated URL of saved image as parameter
See the [ImgPen example](https://github.com/edsdk/imgpen-example) repository for real code with comments.
#### Preloading
To avoid network delays you can preload ImgPen at any moment (e.g. when your page has loaded):
```js
function preload(callback?: function());
```
After this call, all subsequent `editImage` calls will be faster. If you do not use `preload`, the first call to `editImage` can be slower.
You can also pass a `callback` function if you want to execute some code right after the ImgPen libraries have been preloaded.
## Acknowledgments
ImgPen was inspired by Aviary (now defunct) and Adobe Creative SDK for Web (a service that has been closed).
So ImgPen is a good migration alternative for Adobe Creative SDK for Web users.
## License
Dual licensing:
1. Trial EdSDK license
- All features
- NOT for commercial usage (except trial purposes on dev server)
- [Server side](https://npmjs.com/package/@edsdk/file-uploader-server) in TypeScript/JavaScript only.
2. Commercial EdSDK license
- All features
- Commercial usage is allowed
- No "powered by" link is required
- A commercial license for File Uploader is included for free
- File Uploader backends for Node (JavaScript, TypeScript), Java, PHP, ASP.NET
- OEM usage in applications is an option
- [Purchase a license](https://imgpen.com) in order to use it

DbTransmogrifier
================
**trans·mog·ri·fy** */transˈmägrəˌfī/*
Verb: Transform, esp. in a surprising or magical manner.
Synonyms: transform - alter - change - transmute - metamorphose
Description
-----------
The DbTransmogrifier provides simple, convention-based database migrations for .NET. The following RDBMS's are supported:
* Microsoft SQL Server
* Microsoft SQL Server Express
* PostgreSql
It would be fairly trivial to extend it to support Oracle, SQL CE, Firebird, MySql or any other RDBMS (like MS Access).
DbTransmogrifier is licensed under a BSD license.
DbTransmogrifier is also available as a [NuGet package](http://nuget.org/packages/DbTransmogrifier/).
Discovering Migrations
----------------------
DbTransmogrifier currently supports one simple convention for discovering migrations. It looks for classes that implement an interface called "IMigration" decorated with a MigrationAttribute. See [Defining Migrations in Your Assembly](#defining-migrations-in-your-assembly) for more details.
You do not have to reference the DbTransmogrifier assembly from your project in order to process migrations. Currently DbTransmogrifier comes with a single, simple convention for discovering and applying your migrations. Simply place the DbTransmogrifier executable in the same directory as the assembly which contains your migrations.
Defining Migrations in Your Assembly
------------------------------------
### Migration Interface and Attribute
DbTransmogrifier is designed so you do not have to take a dependency on the DbTransmogrifier assembly in your own migration definition assembly. DbTransmogrifier will scan your assembly to discover migrations that match particular signatures. The simplest way to wire things up is to create a `Migration.cs` file in assembly where you will build your migrations and add the following code to the file:
```
public interface IMigration
{
IEnumerable<string> Up();
IEnumerable<string> Down();
}
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = true)]
public class MigrationAttribute : Attribute
{
public MigrationAttribute(int version)
{
Version = version;
}
public int Version { get; private set; }
}
public abstract class Migration : IMigration
{
public abstract IEnumerable<string> Up();
public virtual IEnumerable<string> Down()
{
yield break;
}
}
```
DbTransmogrifier looks in your assembly for any classes that implement the `IMigration` interface and are decorated with the `MigrationAttribute`.
See the *SampleMigrations* project for a detailed example.
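As a concrete illustration, a migration written against the interface and base class shown above might look like the following. This is a hypothetical example: the version number, table name, and SQL are invented, not taken from the *SampleMigrations* project.

```
// Hypothetical example migration; version, table name and SQL are invented.
[Migration(1)]
public class CreateCustomersTable : Migration
{
    // Statements executed when migrating up to version 1.
    public override IEnumerable<string> Up()
    {
        yield return @"CREATE TABLE Customers (
                           Id INT IDENTITY(1,1) PRIMARY KEY,
                           Name NVARCHAR(200) NOT NULL
                       )";
    }

    // Statements executed when migrating back down below version 1.
    public override IEnumerable<string> Down()
    {
        yield return "DROP TABLE Customers";
    }
}
```

Each string yielded from `Up` or `Down` is a SQL statement for DbTransmogrifier to execute in order.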
Command line options
------------------------------------
DbTransmogrifier supports the following command line options:
### Database level commands
* ```--init``` :: Creates the target database if it does not exist. Creates the "SchemaVersion" table if it does not exist.
* ```--tear-down``` :: Deletes all database Tables, Constraints, Views, Functions and Stored Procedures. Resets "SchemaVersion" table to version 0 (zero). Basically restores the database to its initialized state.
* ```--drop``` :: Drops the database. No warnings, no redo, no cancel. Be careful! You've been warned.
### Migration commands
* ```--current-version``` :: Displays the current schema version number.
* ```--up-to-latest``` :: Applies all up migrations after the current version up to the maximum version available.
* ```--up-to={version}``` :: Applies all up migrations after the current version up to the version specified.
* ```--down-to={version}``` :: Applies all down migrations from the current version down to the version specified.
### Other commands
* ```--help``` :: Displays command line help. Basically just a dump of available command line options to help jog your memory if you forget them.
Advanced Options
----------------
DbTransmogrifier allows for the injection of ```IDbConnection``` and ```IDbTransaction``` so you can create migrations that query your database. You can choose between constructor or setter injection. See the *SampleMigrations* project for example implementations.
This functionality is implemented by the ```DefaultMigrationBuilder```. If you want to create your own custom implementation of ```IMigrationBuilder``` you'll have to do your own dependency injection. See the next section for configuration options.
Configuration
-------------
### Default Configuration
By default, DbTransmogrifier will use your app.config file to load up the following settings:
* Database Provider
* Master Connection String
* Target Connection String
Example:
<appSettings>
<add key="ProviderInvariantName" value="System.Data.SqlClient"/>
</appSettings>
<connectionStrings>
<clear/>
<add name="Target" connectionString="Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=YOUR_TARGET_DATABASE_NAME;Data Source=.\SQLEXPRESS"/>
<add name="Master" connectionString="Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=.\SQLEXPRESS"/>
</connectionStrings>
### Overriding the Default Configuration
In order to override the defaults, you'll need to reference the DbTransmogrifier assembly in a project of your own creation (for example, a Console app). The ```MigrationConfigurer``` class provides the following extensibility points for you to provide your own configuration options:
* ```ProviderNameSource```: A function that returns the name of the database provider you will be using (```System.Data.SqlClient``` is the default).
* ```MasterConnectionStringSource```: A function that returns a valid connection to your database server. This connection should *not* reference your target database (the value from app.config is the default).
* ```TargetConnectionStringSource```: A function that returns a valid connection to your target database (the value from app.config is the default).
* ```MigrationSourceFactory```: A function that returns an implementation of ```IMigrationSource``` (```DefaultMigrationTypeSource``` is the default).
* ```MigrationBuilderFactory```: A function that processes the results returned from your ```IMigrationTypeSource``` and can build each migration, injecting any necessary dependencies into them (```DefaultMigrationBuilder``` is the default).
Example:
class Program
{
static void Main(string[] args)
{
MigrationConfigurer.ProviderNameSource = () => "System.Data.SqlClient";
MigrationConfigurer.MasterConnectionStringSource = () => "Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=.";
MigrationConfigurer.TargetConnectionStringSource = () => "Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=YOUR_TARGET_DATABASE_NAME;Data Source=.";
var transmogrifier = MigrationConfigurer
.Configure()
.WithDefaultMigrationBuilderFactory()
.WithDefaultMigrationSourceFactory()
.BuildTransmogrifier();
var processor = new Processor(transmogrifier, args);
processor.Process();
}
}
Possible Plans for the Future
-----------------------------
* Support other RDBMS's (Oracle, SQL CE, Firebird, MySql, etc.)
* Allow for alternative migration discovery conventions (file system based migrations, alternative assembly scanning options, etc.)
* Add support for dependency injection so migrations can have their dependencies supplied to them by a container

# Enumeration: LIMIT_STATE
## Index
### Enumeration members
* [AT_LOWER_LIMIT](limit_state.md#at_lower_limit)
* [AT_UPPER_LIMIT](limit_state.md#at_upper_limit)
* [EQUAL_LIMITS](limit_state.md#equal_limits)
* [INACTIVE_LIMIT](limit_state.md#inactive_limit)
## Enumeration members
### AT_LOWER_LIMIT
• **AT_LOWER_LIMIT**:
*Defined in [joint/index.d.ts:8](https://github.com/shakiba/planck.js/blob/038d425/lib/joint/index.d.ts#L8)*
___
### AT_UPPER_LIMIT
• **AT_UPPER_LIMIT**:
*Defined in [joint/index.d.ts:9](https://github.com/shakiba/planck.js/blob/038d425/lib/joint/index.d.ts#L9)*
___
### EQUAL_LIMITS
• **EQUAL_LIMITS**:
*Defined in [joint/index.d.ts:10](https://github.com/shakiba/planck.js/blob/038d425/lib/joint/index.d.ts#L10)*
___
### INACTIVE_LIMIT
• **INACTIVE_LIMIT**:
*Defined in [joint/index.d.ts:7](https://github.com/shakiba/planck.js/blob/038d425/lib/joint/index.d.ts#L7)*

# Yandex.Cloud documentation
Welcome to the yandex-cloud/docs repository. Here you can suggest additions and corrections to the Yandex.Cloud [documentation](https://cloud.yandex.ru/docs), or make them yourself and receive a grant under the content program.
## Suggest changes
Open an Issue and describe your comments and suggestions there. We will get back to you with an answer. If you need help working with the cloud, contact technical support.
## Take part in the content program
**The Yandex Database documentation is currently being actively reworked. PRs to it take longer than the stated time frames to review and are merged manually.**
The [Yandex.Cloud content program](https://cloud.yandex.ru/content-program) lets you write documentation yourself and receive a grant on your billing account for it. There are two ways to take part in the content program:
1. If you have noticed a typo or have a small fix, make a PR. We review such changes quickly, and they usually do not require discussion.
1. If you want to write a large text, a step-by-step guide, or make major substantive changes, open an Issue and tell us what you want to write about. Do not bring a large PR right away: let's discuss your ideas first and decide together how to proceed.
We accept changes to all documents except the API and CLI references. Pay attention to the [list of important topics](guides/needs-contributing.md): we award increased grants for these.
So that we know where to credit the grant, be sure to include the ID of your billing account in the Issue or PR.
## About the documentation
The documentation is built with [Yandex Flavored Markdown](https://github.com/yandex-cloud/yfm-docs). [More about the syntax](guides/yfm-syntax-ru.md).
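For example, beyond standard Markdown, YFM adds block extensions such as notes. This is an illustrative sketch based on the yfm-docs documentation; see the syntax guide linked above for the full feature set:

```
{% note info %}

This text is rendered as a highlighted note block.

{% endnote %}
```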
## How to suggest changes
To suggest changes, you must read the Yandex Contributor License Agreement and confirm your acceptance of its terms. Detailed information on how to do this, along with links to the text of the Agreement, is provided in the [CONTRIBUTING.md](CONTRIBUTING.md) file.
If you have noticed a typo or an error in the documentation, or want to expand a section, create a pull request (PR) with your changes via GitHub.
## How to build the documentation locally
Before creating a pull request, it is convenient to build the documentation locally and review it live. The [yfm-docs](https://github.com/yandex-cloud/yfm-docs) tool is used for this.
1. Install **yfm-docs**:
`npm i @doc-tools/docs -g`
To update the version of **yfm-docs**, use the same command.
1. Build the documentation:
`yfm -i docs -o docs-gen`, where `docs` is the directory with the source texts and `docs-gen` is the directory that will contain the generated documentation.
## Licenses
© YANDEX LLC, 2018. Licensed under Creative Commons Attribution 4.0 International Public License. See [LICENSE](LICENSE) file for more details.

---
title: "Adversarial Examples for Cost-Sensitive Classifiers"
collection: publications
permalink: /publication/2019-adversarial
date: 2019-01-01
venue: 'arXiv'
paperurl: #'/files/pdf/research/Agreement Strength.pdf'
link: 'https://arxiv.org/abs/1910.02095'
code: #'https://doi.org/10.7910/DVN/VUY8UI'
github: #'https://github.com/RANDCorporation/dgm'
citation: 'Hartnett, Gavin S., Andrew J. Lohn, and Alexander P. Sedlack. "Adversarial Examples for Cost-Sensitive Classifiers." arXiv preprint arXiv:1910.02095 (2019).'
---

+++
categories = ["Updates"]
date = "2015-06-03T00:31:39+01:00"
description = ""
draft = false
pageimage = ""
aliases = "/get_iplayer-v2-93-released"
title = "get_iplayer v2.93 released"
+++
<figure style="float:right;width:300px;"><img src="/img/2015/06/phoenix.png" alt="This looks like a Phoenix rising from the flames...right?" /><figcaption>This looks like a Phoenix rising from the flames...right?</figcaption></figure>
Thanks to the hard work of get_iplayer maintainer dinkypumkin, get_iplayer has been patched and updated to v2.93 which restores much of the functionality recently nuked by the BBC's removal of listings feeds.
<del>**EDIT** - Shortly after this release it's been noted that there are issues with live streaming and a further update is expected overnight 03-Jun-2015.</del>
EDIT 2 - get_iplayer v2.94 released - release notes as v2.93.
### Release Notes
It's important you take the time to read the release notes. Find them at the link below:
[get_iplayer v2.93 release notes](https://github.com/get-iplayer/get_iplayer/wiki/release293)
### Headline changes
Dinkypumpkin has made great efforts over the past 6 months to minimise the effect of the ongoing removal of listings feeds by the BBC, but this time we aren't spared as we were before.
The biggest change in this release is the time now taken to refresh the TV cache, along with the removal of the ability to search via category. Please take the time to read the [release notes](https://github.com/get-iplayer/get_iplayer/wiki/release293) for a full breakdown of the changes and what to expect thanks to the latest actions by Aunty.
### A note to visually impaired and hearing impaired users
Thanks to the removal of the category listings you will no longer be able to use get_iplayer to search for Signed or Audio Described programmes.
This is an ugly outcome for all affected.
<!--more-->

# Release 1.0.0
## New features and improvements
## Bug fixes and other changes
- Initial release

---
title: Admin portal changes for expired Visual Studio subscription agreements | Microsoft Docs
author: evanwindom
ms.author: cabuschl
manager: cabuschl
ms.assetid: f38092ba-051c-4e58-97f5-4255dbe873ba
ms.date: 10/08/2021
ms.topic: conceptual
description: Learn what happens for admins if the agreement expires
ms.openlocfilehash: fc7513e09323d82af531ec58684909c09cd6b636
ms.sourcegitcommit: 5f1e0171626e13bb2c5a6825e28dde48061208a4
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/09/2021
ms.locfileid: "129704732"
---
# <a name="admin-portal-changes-for-expired-agreements"></a>Admin portal changes for expired agreements
If the agreement used to purchase your Visual Studio subscriptions expires, the agreement and the subscriptions assigned within it remain available for a limited time. This period may not be the same for all agreements, and more detailed information about its length will be provided in the communications you receive via email and through the management portal. Depending on your company's plans, you may need to take action to assist subscribers or to prevent the loss of important information.
## <a name="expiration-timeline"></a>Expiration timeline
The timeline for agreement expiration consists of three phases:
- [Prior to expiration](#prior-to-expiration)
- [Expired](#expired)
- [Disabled](#disabled)
### <a name="prior-to-expiration"></a>Prior to expiration
Approximately 120 days before your agreement expires, we will begin sending notifications to admins and super admins, including the steps you may need to take depending on whether your company plans to renew its agreement.
### <a name="expired"></a>Expired
When your agreement reaches its expiration date, admins and subscribers will still have access for a limited time. This is done to give both admins and subscribers an opportunity to preserve important data in the event that your company chooses not to renew the agreement or decides to purchase a new one. During this period, admins will continue to receive notifications, along with links to specific information to help preserve details such as subscriber lists for later use. Subscribers will also begin receiving these notifications, with guidance on preserving information such as any assets they may have created in their existing Azure subscriptions.
During this phase, both admins and subscribers will continue to have access to their respective portals. Admins will still be able to carry out the full range of subscription management tasks. Subscribers will continue to have unrestricted access to their subscription benefits.
> [!IMPORTANT]
> While admins and subscribers will continue to have access to their respective resources, it is important to act quickly to preserve important data before access to that information is lost.
### <a name="disabled"></a>Disabled
When your agreement reaches the end of the expired period:
- Admins and super admins will lose access to the expired agreement in the [management portal](https://manage.visualstudio.com). They will not be able to make any changes to the subscriptions within the agreement. (Access to other active agreements in the management portal will not be affected. The [Get help](https://manage.visualstudio.com/gethelp) page will also remain available.)
- Subscribers will lose access to the expired subscription in the [subscriber portal](https://my.visualstudio.com). If they have other subscriptions assigned to them as part of another agreement, those subscriptions are not affected. Thirty days after a Visual Studio subscription is disabled, any Azure subscriptions attached to that Visual Studio subscription are also removed, so it is critical for subscribers to move their Azure assets to another active subscription if they wish to keep them. Azure has its own notification process to guide subscribers in this situation.
## <a name="preserving-your-information"></a>Preserving your information
As an admin, there is certain information you may want to retain if your agreement expires or if you purchase a new agreement.
- Maximum usage. Understanding how many subscriptions you assigned over the life of your agreement can help your organization determine whether it purchased the right number of subscriptions for its needs. You can [view your usage and export a report from within the management portal](maximum-usage.md).
- Your subscriber list. [Exporting the list of subscribers](exporting-subscriptions.md) on your current agreement can help you quickly move those subscriptions to a new agreement.
## <a name="assisting-subscribers"></a>Assisting subscribers
Subscribers may contact you when they begin receiving notifications that their subscriptions are expiring. Some of the answers to their questions will depend on your company's plans. If your company plans to renew its agreement or purchase a new one, you can help subscribers understand where your company is in the process. If your company does not plan to renew, you can help guide them through the process of saving their important information. It can be helpful to understand how individual subscribers are affected when an agreement expires. For more information, check out the [subscription expiration](subscription-expiration.md) article.
## <a name="moving-to-a-new-agreement"></a>Moving to a new agreement
If your company purchases a new agreement, you can [move subscribers to the new agreement](migrate-subscriptions.md) rather than recreating them there.
## <a name="next-steps"></a>Next steps
- Learn how [individual subscribers](subscription-expiration.md) are affected by expired agreements.
- Learn how to [export your subscriber list](exporting-subscriptions.md).
- Learn how to [move subscriptions to a new agreement](migrate-subscriptions.md).
- Learn how to [add subscribers using Azure Active Directory groups](assign-license-bulk.md#use-azure-active-directory-groups-to-assign-subscriptions).

# syseng_throughputs #
SysEng-approved LSST throughput curves
The latest m5 depths are available in the notebooks, such as in [notebooks/Overview Paper.ipynb](./notebooks/Overview%20Paper.ipynb).
This repository provides the ultimate source of the throughput curves in the repository [lsst/throughputs](https://github.com/lsst/throughputs).
The [components](./components) directory contains the response curves
for each individual component of the camera and telescope. In each
directory, there is also a `*_Losses` directory that contains the
time-averaged ten-year losses due to contamination or condensation on
the surfaces of the component. In some directories, there is also a
`*_Coatings` directory, which contains information on coatings applied
to the surface, such as the Broad Band Anti-Reflection coatings on the
lenses.
These component curves are maintained and updated by the LSST system
engineering team.
Python utilities to read and combine these various
curves appropriately are maintained in this repository, in the
[python](./python/lsst/syseng/throughputs) directory. In particular, note the utilities
provided in [bandpassUtils.py](./python/lsst/syseng/throughputs/bandpassUtils.py). At this
time, we expect most users to use the throughputs repository instead
of this repository directly - the curves in the throughputs repository
are constructed from these curves, and can be traced through the git
SHA1 and release tags.
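As a rough illustration of what combining component curves means here (multiplying per-component transmissions sampled on a shared wavelength grid), consider the sketch below. It is not the actual `bandpassUtils` API, and the component names and numbers are invented:

```python
import numpy as np

def combine_throughputs(wavelen, *components):
    """Multiply individual component transmission curves, each sampled on
    the common wavelength grid `wavelen`, into one system throughput curve."""
    combined = np.ones_like(wavelen, dtype=float)
    for component in components:
        combined *= component
    return combined

# Illustrative only: three fake components on a coarse wavelength grid (nm).
wavelen = np.linspace(300.0, 1100.0, 5)
mirror = np.array([0.85, 0.90, 0.92, 0.91, 0.88])    # reflectivity
lens = np.array([0.95, 0.97, 0.98, 0.97, 0.96])      # transmission
detector = np.array([0.20, 0.80, 0.90, 0.85, 0.30])  # quantum efficiency

system = combine_throughputs(wavelen, mirror, lens, detector)
print(np.round(system, 4))
```

In the real utilities, each component would first be read from its data file and resampled onto a common wavelength grid before the multiplication, with losses and coatings entering as additional multiplicative curves.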
# Release 1.7 #
The M2 reflectivity was updated based on witness sample measurements from M2 coating run in July 2019. The PR for this update is https://github.com/lsst-pst/syseng_throughputs/pull/12. The notebooks showing what has changed and the updated m5 calculations are found in the "documentation" subdirectory.
# Release 1.6 #
The mirror reflectivity was updated based on measurements from coating samples from June 2019. The PR for this update is https://github.com/lsst-pst/syseng_throughputs/pull/11. The notebooks showing what has changed and the updated m5 calculations are found in the "documentation" subdirectory.
# Release 1.5 #
This is a minor update for throughputs (the lens2 glass and BBAR coating curves
have been extended in their wavelength information, but the curves themselves
are the same as previously). However it is a major update for documentation
and process information, as reflected in the "documentation" subdirectory.
# Release 1.4 #
The primary update here is in the lens2 response curves. The BBAR coating
has been updated.
Other minor updates include bug fixes in the python code in sedUtils.py,
updating of the jupyter notebooks, and the addition of notebooks evaluating
the effect of the mixed vendor detector focal plane and recreating the
inputs for the LSST Overview Paper.
# Release 1.3: #
The primary update here is in the detector response curves.
The QE response curves here are the result of measurements of multiple
chips provided by each vendor, ITL and E2V. The measurements have been
averaged across multiple CCDs; the default (single) 'generic' curve remains
the minimum QE response at each wavelength between both vendors.
These curves were provided by Steve Ritz in December, 2017.
Other minor updates include additional python code to allow scaling
of the FWHM at different airmasses and wavelengths (according to
details provided in Document-18208 and Document-20160), and a jupyter
notebook which can provide latex-formatted content of Table 2 from the
overview paper.
# Release 1.2: #
This is primarily an update to the python code in the repository, using
corrected and updated readnoise values (which results in corresponding
changes to m5, particularly in the u band).
# As of release 1.1: #
## Camera Components ##
* _Detector_: There are two separate detector response and loss curves,
corresponding to the expected response (QE response + AR coatings)
of the CCDs provided by each of the two vendors under
consideration. For most purposes (including the detector curve reported in
the throughputs repository), we use a 'generic' detector
response that is generated by combining both of these throughput
curves using the *minimum* QE response at each wavelength.
The response curves from each vendor correspond to a response
measured in LSST labs, using vendor-provided prototypes. The loss
curves provided for each vendor represent a simulated effect of
contamination buildup over time; the loss curves are identical for
both vendors and are the average expected values over ten
years. Note that some values in the 'contamination' loss file for
the detectors are > 1; this is because the contamination is
primarily a thin film of water, which at some wavelengths can
enhance the performance of the AR coating on the detector -- this is
only true for the detector.
* _Lenses_: There are three separate lenses in the camera, each with an
identical base `*_Glass.dat` curve that represents the fused silica
throughput of the lens itself. This throughput curve must be smoothed using the
Savitzky-Golay smoothing function. The fused silica lens transmission curves are
based on vendor-provided expected transmission curves. The silica base of the lens must
also be combined with the BroadBand AntiReflective (BBAR) coatings
response in the `*_Coatings` directory. There are two coatings; one
for each side of the lens. The BBAR coating response is based on vendor-provided
models, consistent with LSST requested coating requirements. There are small differences between the
glass components used for each lens; there are also small
differences in the BBAR coatings, including a difference from one side of
the lens to the other. In each lens, there are also several files in
the `*_Losses` directory, representing the time-averaged condensation and
contamination losses for each surface of each lens. The losses are based on
models developed by Andy Rasmussen at SLAC. These vary
depending on the direction the lens is facing and the location of
the lens in the camera. The final response curves for all lenses are
similar in shape; however, lens3 has a slightly higher overall
throughput due to slightly lower losses (only by 1-2%).
* _Filters_: For each filter, a goal throughput envelope has been
provided. This is the goal throughput envelope provided to the
filter vendors; tolerances on this envelope have also been
provided. Note that this is not the expected performance for an
as-manufactured filter, which would likely include some out-of-band throughput leaks
(within a specified limit), and represents a change compared to
previously provided throughput curves (which represented one simulation of
an expected as-provided filter set). In the `*_Losses` directory,
there are also ten-year-average simulated
contamination and condensation losses for each surface of the
filters, based on models developed by Andy Rasmussen.
## Telescope Components ##
* _Mirrors_: Each mirror has a reflectivity curve, which should be
coupled with the respective losses curve found in the relevant
`*_Losses` directory. The reflectivity of mirror1 (primary mirror) and mirror3
(tertiary) is based on using a protected aluminum surface; the
reflectivity of mirror2 (secondary) is based on using a protected
silver surface. These mirror reflectivities are based on lab measurements
of pristine witness samples. The losses represent the ten-year average,
based on performance degradation measurements from historical telescope performance,
modified for the expected LSST maintenance schedule.
Currently mirror cleanings are scheduled yearly, with resurfacing every
two years.
## Site Properties ##
* _Atmosphere_: The atmosphere throughput is modeled by using MODTRAN to
produce a 'standard US Atmosphere', which does not include aerosols.
To better represent the expected atmospheric transmission on site, aerosols
have been added to the resulting throughput curves, using the python
script [addAerosols.py](./python/addAerosols.py). The atmospheric
transmission curves are in the [siteProperties](./siteProperties)
directory, with an X=1.2 and X=1.0 atmosphere, with and without
aerosols. To represent 'typical' throughput, the X=1.2, with aerosols
[atmosphere](./siteProperties/pachonModtranAtm_12_aerosol.dat) curve
should be used. To represent zenith, optimum throughputs, the X=1.0,
with aerosols [atmosphere](./siteProperties/atmos_10_aerosol.dat)
curve should be used.
* _Dark sky_: The expected dark sky, zenith, background spectrum can
be found in [darksky.dat](./siteProperties/darksky.dat). This is
used to calculate expected zenith, dark-sky limiting magnitude
values. The dark sky SED is based on data from UVES and Gemini Near-IR,
combined with ESO sky data from Ferdinand Patat, modified slightly at
the red and blue ends to match observed dark sky broadband skybrightness
values reported by SDSS.
---
id: 5900f40c1000cf542c50ff1e
challengeType: 5
title: 'Problem 159: Digital root sums of factorisations'
videoUrl: ''
localeTitle: 'Problema 159: Sumas de raíz digitales de factorizaciones'
---
## Description
<section id="description"> A composite number can be factored in many different ways. For instance, not including multiplication by one, 24 can be factored in 7 distinct ways: <p> 24 = 2x2x2x3 <br> 24 = 2x3x4 <br> 24 = 2x2x6 <br> 24 = 4x6 <br> 24 = 3x8 <br> 24 = 2x12 <br> 24 = 24 </p><p> Recall that the digital root of a number, in base 10, is found by adding together the digits of that number, and repeating that process until a number less than 10 is reached. Thus the digital root of 467 is 8. We shall call the sum of the digital roots of the individual factors of our number a Digital Root Sum (DRS). The chart below demonstrates all of the DRS values for 24. </p><table><tr><th>Factorisation</th><th>Digital Root Sum</th></tr><tr><td>2x2x2x3</td><td>9</td></tr><tr><td>2x3x4</td><td>9</td></tr><tr><td>2x2x6</td><td>10</td></tr><tr><td>4x6</td><td>10</td></tr><tr><td>3x8</td><td>11</td></tr><tr><td>2x12</td><td>5</td></tr><tr><td>24</td><td>6</td></tr></table><p> The maximum Digital Root Sum of 24 is 11. The function mdrs(n) gives the maximum Digital Root Sum of n, so mdrs(24) = 11. Find ∑mdrs(n) for 1 < n < 1,000,000. </p></section>
## Instructions
<section id="instructions">
</section>
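As a reference for the description above, a plain-JavaScript digital-root helper might look like the sketch below. This is only an illustration and is not part of the official challenge files.

```javascript
// Digital root: repeatedly sum the decimal digits until a single digit remains.
// For positive integers this equals the classic shortcut 1 + (n - 1) % 9.
function digitalRoot(n) {
  while (n >= 10) {
    let sum = 0;
    while (n > 0) {
      sum += n % 10;
      n = Math.floor(n / 10);
    }
    n = sum;
  }
  return n;
}

console.log(digitalRoot(467)); // 4 + 6 + 7 = 17, then 1 + 7 = 8
```

A full solution would additionally enumerate the factorisations of each n and maximise the sum of the factors' digital roots.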
## Tests
<section id='tests'>
```yml
tests:
  - text: <code>euler159()</code> should return 14489159.
testString: 'assert.strictEqual(euler159(), 14489159, "<code>euler159()</code> should return 14489159.");'
```
</section>
## Challenge Seed
<section id='challengeSeed'>
<div id='js-seed'>
```js
function euler159() {
// Good luck!
return true;
}
euler159();
```
</div>
</section>
## Solution
<section id='solution'>
```js
// solution required
```
</section>
# TavoCalendar
Display a calendar and pick dates (single dates or ranges).
## Setup
**HTML**
```html
<div class="calendar"></div>
```
**JS**
```js
const options = {
date: '2019-12-21'
}
const my_calendar = new TavoCalendar('.calendar', options);
```
**Available options:**
* `format` (*optional*) -- defaults to `YYYY-MM-DD`
* `locale` (*optional*) -- display of weekday names, month (defaults `en`)
* `date` (*optional*) -- calendar focus date (defaults to absolute date),
* `date_start` (*optional*) -- range end date (defaults to `null`),
* `date_end` (*optional*) -- range start date (defaults to `null`),
* `selected` (*optional*) -- mark dates selected (defaults to `[]`)
* `highlight` (*optional*) -- special dates (defaults to `[]`)
* `blacklist` (*optional*) -- disable dates (defaults to `[]`)
* `range_select` (*optional*) -- range select mode (defaults to `false`)
* `multi_select` (*optional*) -- multiple date selection mode (defaults to `false`)
* `future_select` (*optional*) -- disable selecting days after `date` (defaults to `true`)
* `past_select` (*optional*) -- disable selecting days before `date` (defaults to `false`)
* `frozen` (*optional*) -- disable all interactions (defaults to `false`)
* `highligh_sunday` (*optional*) -- highlight sundays (defaults to `true`)
**Selecting**
* `getSelected()` -- returns an array of selected dates (in multi-select mode) or a single date
* `addSelected(date)` -- add date to selected
* `clearSelected()` -- clear all selected dates
* `getStartDate()` -- range start
* `setStartDate(date)` -- set range start
* `getEndDate()` -- range end
* `setEndDate(date)` -- set range end
* `getRange()` -- returns a range object `{ start: '2012-12-10', end: '2012-12-15' }`
* `clearRange()` -- clear the range
* `setRange(date1, date2)` -- set the full range
* `clear()` -- clear all selections
**Moving calendar window**
* `getFocusYear()` -- calendar focus year
* `setFocusYear(year)` -- set calendar focus year
* `getFocusMonth()` -- calendar focus month
* `setFocusMonth(month)` -- set calendar focus month
* `nextMonth()` -- move to next month
* `prevMonth()` -- move to prev month
**Other**
* `sync(other_calendar)` -- sync two or more calendars `calendarA.sync(calendarB)`
**Events**
* `calendar-change` -- fired when month changes
* `calendar-range` -- fired on day change
* `calendar-select` -- fired on day change
* `calendar-reset` -- fired on range reset
## Example
Select a range of future dates.
```js
const options = {
range_select: true
}
const calendar_el = document.querySelector('.calendar');
const my_calendar = new TavoCalendar(calendar_el, options)
calendar_el.addEventListener('calendar-range', (ev) => {
const range = my_calendar.getRange();
console.log('Start', range.start)
console.log('End', range.end)
});
```
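A second sketch, built only from the option and method names listed above (assumed behavior, not verified against the library): multi-select with a second, read-only calendar kept in sync.

```javascript
const picker = new TavoCalendar('.calendar', { multi_select: true });
const preview = new TavoCalendar('.calendar-preview', { frozen: true });

picker.sync(preview); // keep both calendars on the same focus month

document.querySelector('.calendar').addEventListener('calendar-select', () => {
  console.log('Selected dates:', picker.getSelected());
});
```

Because `preview` is `frozen`, it only mirrors navigation and never accepts clicks of its own.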
## Depends on:
* [moment](https://github.com/moment/moment/)
# docker-wp-sage
# Hurricanes, storms, and tornadoes: definitions, formation mechanisms, harmful factors, and consequences
## Definitions
### Hurricane
`Hurricane` – a wind of great destructive force and considerable duration. In its destructive power it is comparable to an earthquake.
By convention, a wind is classed as hurricane-force when its speed is at least 30 m/s (120 km/h).
Its defining feature is the rectilinear ("beam of light") movement of the air masses.
### Storm
`Storm` – a form of linear air-mass movement that is weaker than a hurricane (wind speed 70-115 km/h).
### Tornado
`Tornado` – an ascending vortex of rapidly rotating air that takes the form of a column of earth (or water) up to hundreds of metres in diameter, with a vertical (sometimes curved) axis of rotation.
Inside the column there is rarefaction (reduced pressure), which causes everything in the tornado's path (earth, sand, water, etc.) to be sucked in.
A tornado originates in a thunderstorm cloud and then extends toward the surface of land or sea as a dark sleeve or "trunk". In its upper part the tornado has a funnel-shaped widening
that merges with the clouds. When a tornado descends to the surface of land or water, its lower part also widens, resembling an upturned funnel. A tornado can reach a height of 800-1500 m.
The vortex usually rotates counterclockwise while simultaneously rising in a spiral, drawing in everything it meets on its way. Inside the flow the speed can reach 200 km/h.
A tornado usually forms in the warm sector of a cyclone, most often ahead of a cold front. Its formation is associated with particularly strong instability in the vertical temperature
distribution of the atmospheric air (atmospheric stratification).
???+ info "Diagram of tornado formation"
    
## Consequences
Destruction of buildings, structures, transport links, communication lines, and power transmission lines.
Soil erosion and weathering; disruption of agricultural activity.
## Protective measures
When inside a building:
1. Close the windows and doors.
2. If possible, remove from balconies any objects that would be dangerous if they fell (glass, sharp objects, etc.).
When in open terrain:
1. Move away from potentially dangerous places as quickly as possible (in a city: advertising billboards, power lines, trees).
2. If possible, take shelter in solid buildings or structures.
3. If that is not possible, take shelter in natural depressions in the ground (pits, ditches, ravines): lie on the bottom and press yourself firmly against the ground.
- When a tornado occurs, the most favourable shelter is an underground structure: the basement of a house, a cellar, etc.
# FAUExplore
Das neue Rep
| 7.5 | 13 | 0.666667 | deu_Latn | 0.980484 |
c12a352a014e162bffa2ed4e479ba262ad5bc969 | 1,128 | md | Markdown | DEVELOPER.md | pcliu0822/battlesnake | b9b888f17f9c3a0cdd4f9b45e50367d70f72a3e6 | [
"MIT"
] | 1 | 2021-06-15T15:28:49.000Z | 2021-06-15T15:28:49.000Z | DEVELOPER.md | gogog22510/battlesnake | b9b888f17f9c3a0cdd4f9b45e50367d70f72a3e6 | [
"MIT"
] | 6 | 2021-06-15T21:59:00.000Z | 2021-06-16T17:28:44.000Z | DEVELOPER.md | gogog22510/battlesnake | b9b888f17f9c3a0cdd4f9b45e50367d70f72a3e6 | [
"MIT"
] | 2 | 2021-06-15T15:23:53.000Z | 2021-06-15T15:25:16.000Z | # Setup develope environment
Create conda environmnt
```shell
conda create -n battlesnake python=3.8.3
```
Activate conda environment
```shell
conda activate battlesnake
```
Install dependency using pip
```shell
python -m pip install -r requirements.txt
```
When you are finish, deactivate conda environment
```shell
conda deactivate
```
# Running Your Battlesnake Locally
Eventually you might want to run your Battlesnake server locally for faster testing and debugging. You can do this by installing [Python 3.8](https://www.python.org/downloads/) and running:
```shell
python server.py
```
**Note:** You cannot create games on [play.battlesnake.com](https://play.battlesnake.com) using a locally running Battlesnake unless you install and use a port forwarding tool like [ngrok](https://ngrok.com/).
Testing your api with curl
```shell
curl -X POST -H "Accept: application/json" -H "Content-Type: application/json" -d @test-data.json http://0.0.0.0:8080/start
```
Switch learning mode in runtime
```shell
curl -X GET -H "Accept: application/json" -H "Content-Type: application/json" http://0.0.0.0:8080/switch
```
| 26.232558 | 209 | 0.746454 | eng_Latn | 0.91873 |
c12a39630bd1b4c22122a33b34458d69c21b83f6 | 535 | md | Markdown | Unity/Assets/VCFrame/EngineCode/ThirdParty/com.alelievr.NodeGraphProcessor/docs/docfx/api/index.md | malering/ET | ed8dcd974b2433298e641e21b6e817478359cf74 | [
"MIT"
] | 2 | 2018-09-19T04:41:00.000Z | 2018-09-21T01:00:18.000Z | Unity/Assets/VCFrame/EngineCode/ThirdParty/com.alelievr.NodeGraphProcessor/docs/docfx/api/index.md | malering/ET | ed8dcd974b2433298e641e21b6e817478359cf74 | [
"MIT"
] | null | null | null | Unity/Assets/VCFrame/EngineCode/ThirdParty/com.alelievr.NodeGraphProcessor/docs/docfx/api/index.md | malering/ET | ed8dcd974b2433298e641e21b6e817478359cf74 | [
"MIT"
] | null | null | null | # Most useful classes
## Nodes
- [Base Node](GraphProcessor.BaseNode.yml)
- [Base Node View](GraphProcessor.BaseNodeView.yml)
## Windows
- [Base Graph Window](GraphProcessor.BaseGraphWindow.yml)
- [Toolbar View](GraphProcessor.ToolbarView.yml)
- [Pinned Element](GraphProcessor.PinnedElement.yml)
- [Pinned Element View](GraphProcessor.PinnedElementView.yml)
## Graph
- [Base Graph](GraphProcessor.BaseGraph.yml)
- [Base Graph Processor](GraphProcessor.BaseGraphProcessor.yml)
- [Processor View](GraphProcessor.ProcessorView.yml)
| 28.157895 | 63 | 0.783178 | yue_Hant | 0.507115 |
c12a53c42615cd7a2f216ff773f3761bc35cf5aa | 12,804 | md | Markdown | docs/surveys/2020-jupyterlab-survey.md | jweill-aws/team-compass | def7653e79b8880ad60c4a3e3a960a3624272c46 | [
"BSD-3-Clause"
] | 37 | 2019-04-24T17:37:19.000Z | 2022-02-27T01:27:45.000Z | docs/surveys/2020-jupyterlab-survey.md | jweill-aws/team-compass | def7653e79b8880ad60c4a3e3a960a3624272c46 | [
"BSD-3-Clause"
] | 139 | 2019-04-24T16:32:51.000Z | 2022-03-31T18:13:16.000Z | docs/surveys/2020-jupyterlab-survey.md | jweill-aws/team-compass | def7653e79b8880ad60c4a3e3a960a3624272c46 | [
"BSD-3-Clause"
] | 20 | 2019-07-12T22:08:27.000Z | 2022-03-11T23:33:47.000Z | # JupyterLab
- Even if you don't use Jupyter, you can still take this survey. Just indicate this fact in the first question
and carry on as best as you can.
- Thank you. Your participation guides Jupyter's roadmap toward your real-life use cases by quantifiably
helping us prioritize the functionality that is important to our userbase.
- So that you know what to expect, it's comprised of 20 questions spread across the sections below.
As a fair heads up, Question 7 is the biggest one, but it provides critical information.
- Usage patterns.
- Data
- Visualization.
- Scale.
- Collaboration.
- The aggregate survey data itself will be openly shared with the Jupyter community when polling closes in mid-December.
If you opt to provide your email address for a user interview, it will not be used for Jupyter's promotional purposes and
it will not be shared with a 3rd party.
### Usage Patterns
1. How frequently do you use Jupyter?
- Daily - heavy usage; 3+ hours per day.
- Daily - moderate usage; less than 3 hours per day.
- Weekly.
- Monthly.
- I no longer use Jupyter.
- I have never used Jupyter.
2. How long have you been using Jupyter?
- 2+ years.
- 1-2 years.
- 6-12 months.
- Less than 6 months (welcome =]).
- I don't use Jupyter.
3. What languages do you use in Jupyter? (pick up to 4)
- C (and derivatives)
- Go
- Groovy
- Java
- JavaScript
- Julia
- NodeJS
- Perl
- PHP
- Python
- R
- Ruby
- Rust
- Scala
- Spark SQL
- SQL
- TypeScript
- ❗I wrap/ use bindings for other languages.
- ❗My preferred language is not supported in Jupyter.
- Other (please specify)
4. What are your primary job roles when you are using Jupyter? (pick up to 2)
- Backend engineer.
- Business analyst.
- Data engineer.
- Data scientist.
- Database Admin (DBA).
- DevOps.
- Financial modeler/ analyst.
- Front end/ web development.
- Infrastructure engineer/ cloud architect.
- Scientist/ researcher.
- Student.
- Sysadmin.
- Teacher/ lecturer.
- Tutor/ teaching assistant.
- Other (please specify)
5. What are your go-to tools for performing data science, scientific computing,
and machine learning on your laptop/ desktop (non-cloud) for data science? (pick up to 3)
- Atom.
- Emacs.
- IPython.
- Jupyter Notebook - Classic.
- JupyterLab.
- nteract.
- PyCharm.
- RStudio.
- Spyder.
- Sublime Text.
- Vim.
- VS Code.
- Zeppelin.
- Other (please specify).
6. How do you run and/ or access Jupyter? (pick up to 4)
- 🖥️ Run directly on local machine (e.g. laptop, desktop).
- Through a Python virtual environment (e.g. conda, virtualenv).
- Through Docker.
- HPC or on-premise server.
- Cloud server (e.g. AWS EC2).
- JupyterHub.
- BinderHub / MyBinder.
- Cloud service - AWS (e.g. EMR, SageMaker).
- Cloud service - Azure (e.g. Notebooks, ML Studio).
- Cloud service - Databricks.
- Cloud service - Google (e.g. AI Platform, Dataproc).
- Cloud service - IBM (e.g. Watson Studio).
- Google Colab.
- CoCalc.
- Mobile device (e.g. phone, tablet). Comments welcome.
- ❓Don’t know how, I just go to a URL.
- Other (please specify).
7. What tasks do you need to perform and what tools do you use to accomplish them?
- Writing and running tests for software.
- Writing a software package.
- Creating content (e.g. blogs, books, education materials).
- Cleaning and preparing data.
- Run pipelines, workflows, or ETL (extract, transform, load) jobs.
- Developing extensions/ plugins to solve my problems.
- Writing software documentation.
- Finding extensions/ plugins to solve my problems.
- Building a machine learning or statistical model.
- Documenting research (reports, scientific papers)
- Visualize data in charts, plots, or dashboards.
- Other major use cases (please specify).
For each of the items above, provide additional information related to:
- How frequently do you perform this task?
- Never.
- Every few months.
- Monthly.
- Weekly.
- Daily.
- To what degree does Jupyter meet your expectations for this?
- Does not apply.
- No.
- Neutral.
- Yes.
- To what degree do alternative tools meet your expectations for this?
- Does not apply.
- No.
- Neutral.
- Yes.
### Data
8. What data sources are you primarily working with in your role? (pick up to 3)
- 🖥️ My local file system (e.g. files and folder on local machine).
- File system (e.g. HPC, EBS/EFS, JupyterHub volumes).
- Cloud object storage (e.g. buckets, S3, Blob, GS).
- SQL (e.g. PostgreSQL, MySQL).
- SQL - embedded (e.g. SQLite).
- NoSQL - columnar store (e.g. Parquet, Arrow, HDFS, BigQuery).
- NoSQL - document store (e.g. MongoDB, Elasticsearch, DynamoDB).
- Graph database (e.g. Neo4j, TigerGraph).
- Time Series (e.g. InfluxDB).
- Pub/ sub (e.g. Apache Kafka, Druid).
- Key value (e.g. Redis, MemcacheDB).
- Google Sheets.
- ❗Industry or field specific APIs.
- Streaming.
- Other (please specify).
9. What data formats are you mostly working with? (pick up to 3)
- Tabular (e.g. csv, spreadsheet, SQL tables, Parquet).
- Images.
- Tensors (e.g. manually handling PyTorch, Tensorflow inputs).
- Nested (e.g. JSON, NoSQL document).
- Hierarchical Data Format (e.g. HDF5 or similar).
- Time series.
- Text.
- Audio.
- Video.
- 3D/ CAD.
- Graph (e.g. nodes, edges).
- Spatial/ geographic (e.g. coordinates, GIS).
- Game/ reinforcement simulation.
- ❗Industry-specific file formats.
- Other (please specify)
10. Do you experience these **problems with data** in Jupyter? (rate from scale of 0-4)
- No grid view for manipulating/ filtering dataframes and arrays.
- Can’t see a list of my current variables.
- Plaintext or environment variable management of database passwords/ keys/ secrets.
- Lost data during failure or restart of kernel/ server.
- Data is too big to fit into memory on my machine/ server.
- Poor MVC/ ORM integrations (e.g. Django, Flask).
- Managing database/ source connections and secrets.
- Other (please specify)
For each of the items above, specify:
- Not a problem for me.
- Trivial.
- Minor.
- Major.
- Critical.
- N/A - skip, don't know.
11. What type of analysis are you running? (pick up to 4)
- ❗I am not performing ML/statistical tasks.
- Regression; predict a numeric output.
- Classification; predict a categorical output.
- Generative/ auto-encode; create new data based on existing data.
- Reinforcement learning; actions that maximize a reward.
- Dimensionality reduction (e.g. PCA, K-Nearest Neighbors).
- Feature engineering (e.g. importance, extraction, selection, permutation).
- Natural language processing (NLP).
- Graph data science.
- Outlier detection.
- Other (please specify)
### Visualization
12. What tools does your team use to create dashboards tools? (pick up to 3)
- Dash-Plotly.
- Google Data Studio.
- Grafana
- Kibana.
- Klipfolio.
- Looker.
- R Shiny.
- Spotfire.
- Tableau.
- Voila.
- ❗I don't create dashboards.
- ❗I write my own in HTML & JS.
- Other (please specify).
13. Do you experience these problems with visualization in Jupyter?
- No built-in UI for creating charts.
- Can't publish my charts as web-based dashboards.
- Poor/ buggy support for my plotting tool.
- Difficulty displaying high-dimensional data (e.g. arrays of arrays of arrays, too many rows/ columns to fit on screen).
- Lacking templating support (Jinja2)
For each of the items above, specify:
- Not a problem for me.
- Trivial.
- Minor.
- Major.
- Critical.
- N/A - skip, don't know.
### Scale
14. How do you scale and schedule your workloads? (pick up to 4)
- 🖥️ They run just fine on my local machine.
- ❓I need to scale, but don't know how.
- Server - on premise HPC/ data center.
- Server - cloud (e.g. AWS EC2).
- Cloud ML/ AI (e.g. AWS SageMaker, IBM Wastson Studio).
- Cluster - Spark and/ Hadoop.
- Cluster - Dask.
- Cluster - Kubernetes (or similar e.g. Mesos, Swarm, Slurm).
- Cluster - Jupyter Enterprise Gateway.
- Jupyter BinderHub.
- Quantum (e.g. D-Wave).
- Horovod.
- Kubeflow.
- Elyra.
- Snakemake.
- Papermill.
- CWL, Nextflow, and/ or WDL.
- Apache Airflow.
- Prefect.
- Cloud pipelines (e.g. AWS Batch).
- Cloud queries (e.g. AWS Presto, AWS Athena).
- Other (please specify).
15. Do you experience these problems with scale in Jupyter?
- Figuring out how to schedule batch execution of notebook-based jobs.
- Don't have the budget for a more scalable environment/ cloud services.
- Haven’t divided longer notebooks into multiple, modular notebooks.
- Not persisting the outputs of a notebook.
- Machine learning training jobs take too long.
- Can't call code/ modules from other notebooks.
- Difficulty managing Spark dependencies (Java).
For each of the items above, specify:
- Not a problem for me.
- Trivial.
- Minor.
- Major.
- Critical.
- N/A - skip, don't know.
### Collaboration
16. When it comes to working on notebooks in a team setting, with how many other people are you collaborating?
- 0
- 10
- 20
- 30
- 40
- 50+
17. What is your reason for sharing a notebook with someone else? (pick up to 3)
- ❗I am not working with other people.
- Share knowledge.
- Feedback about my writing.
- Feedback about my code.
- Formal code review.
- Integrate my code/ data with their downstream or upstream processes.
- Edit/ contribute some of their own code.
- Edit/ contribute some of their own writing.
- Teach/ tutor them.
- Peer programming.
- Deploy my code/ model/ pipeline/ dashboard.
- Other (please specify)
18. What is the nature of your collaboration?
- Describe the collaboration:
- How long have you been working together?
- I am not collaborating.
- 2+ years.
- 1-2 years.
- 6-12 months.
- Less than 6 months.
- How frequently do you work together?
- I am not collaborating.
- 2+ times per week.
- Weekly.
- A few times a month.
- Monthly.
- Less than monthly.
- How do you divide the work?
- I am not collaborating.
- We work on different projects.
- We work on the same project, but different parts.
- We work on the same part of the same project together.
- Comments about collaboration:
19. Do you have challenges with collaboration in Jupyter?
- Don't know what dependencies (versions of language, packages, extensions) a notebook uses.
- Don't know/ have the data a notebook is supposed to use.
- Poor support for our version control (git) system.
- No built-in way to publish my notebook to a shared location.
- Not being able to comment on notebooks.
- No "track changes;" can't figure out what changed between notebook checkpoints/ versions.
For each of the items above, specify:
- Not a problem for me.
- Trivial.
- Minor.
- Major.
- Critical.
- N/A - skip, don't know.
20. Do you have challenges with the notebook UI?
- No progress bar for running long notebooks.
- No marketplace for Extensions (e.g. 5 star ratings, browsable categories).
- No global search.
- Can't collapse sections of a notebook hierarchically.
- Poor autocompletion (e.g. LSP, show methods/ attributes).
- No modes for editing other Jupyter documents (MyST, Jupyter Book).
- Can't see hidden (.) files in file browser.
- Don't know which cell failed in long notebook.
For each of the items above, specify:
- Not a problem for me.
- Trivial.
- Minor.
- Major.
- Critical.
- N/A - skip, don't know.
### You did it - thank you!
21. Open feedback for problems/ pain points you didn't get to share.
22. Optional - Are you interested in giving qualitative feedback on JupyterLab, JupyterHub, and the JupyterLab developer experience? If we have permission to contact you for follow-up questions, please leave your email address below.
| 34.419355 | 233 | 0.646517 | eng_Latn | 0.982276 |
c12a8a51e37361141f08ffd487a162d6d31d2f76 | 7,654 | md | Markdown | docs/client-side-with-jquery.md | Cryshot/DevExtreme.AspNet.Data | a271f72b9c2efc47ea96023d37be7d3215cedd8b | [
"MIT"
] | null | null | null | docs/client-side-with-jquery.md | Cryshot/DevExtreme.AspNet.Data | a271f72b9c2efc47ea96023d37be7d3215cedd8b | [
"MIT"
] | null | null | null | docs/client-side-with-jquery.md | Cryshot/DevExtreme.AspNet.Data | a271f72b9c2efc47ea96023d37be7d3215cedd8b | [
"MIT"
] | null | null | null | # Client Side with jQuery
## Installation
The `dx.aspnet.data.js` script is the client-side part. You can install it in one of the following ways.
* Use [npm](https://www.npmjs.com/package/devextreme-aspnet-data).
1. Run the following command in the command line:
```
npm install devextreme-aspnet-data
```
2. Link the `dx.aspnet.data.js` script on your page:
```html
<script src="/node_modules/devextreme-aspnet-data/js/dx.aspnet.data.js"></script>
```
* Use [unpkg](https://unpkg.com/).
Link the `dx.aspnet.data.js` script on your page in the following way:
```html
<script src="https://unpkg.com/devextreme-aspnet-data/js/dx.aspnet.data.js"></script>
```
* Use [bower](https://libraries.io/bower/devextreme-aspnet-data).
**NOTE: Since Bower is deprecated, we do not recommend this approach.**
1. Run the following command in the command line:
```
bower install devextreme-aspnet-data
```
... or add `devextreme-aspnet-data` to the *bower.json* file's `dependencies` section.
```
"dependencies": {
...
"devextreme-aspnet-data": "^2"
}
```
2. Link the `dx.aspnet.data.js` script on your page:
```html
<script src="/bower_components/devextreme-aspnet-data/js/dx.aspnet.data.js"></script>
```
#### See Also
- [Install DevExtreme Using npm](https://js.devexpress.com/Documentation/Guide/Getting_Started/Installation/npm_Package/)
- [Install DevExtreme Using Bower](https://js.devexpress.com/Documentation/Guide/Getting_Started/Installation/Bower_Package/)
## API Reference
The client-side API consists of the `DevExpress.data.AspNet.createStore` method, which returns a [`CustomStore`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/) instance. This instance is configured to access a controller.
### Configuration
When you call the `DevExpress.data.AspNet.createStore` method, pass an object with the properties described below.
- `cacheRawData` - refer to [`CustomStore.cacheRawData`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#cacheRawData).
- `deleteMethod` - the HTTP method for delete requests; `"DELETE"` by default.
- `deleteUrl` - the URL used to delete data.
- `errorHandler` - refer to [`CustomStore.errorHandler`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#errorHandler).
- `insertMethod` - the HTTP method for insert requests; `"POST"` by default.
- `insertUrl` - the URL used to insert data.
- `key` - refer to [`CustomStore.key`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#key).
- `loadMethod` - the HTTP method for load requests; `"GET"` by default.
- `loadMode` - refer to [`CustomStore.loadMode`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#loadMode).
- `loadParams` - additional parameters that should be passed to `loadUrl`.
- `loadUrl` - the URL used to load data.
- `onAjaxError` - a function to be called when an AJAX request fails.
```js
onAjaxError: (e: { xhr, error }) => void
```
The `e` object has the following properties:
Property | Type | Description
-- | -- | --
`xhr` | [`jqXHR`](http://api.jquery.com/jQuery.ajax/#jqXHR) for jQuery; [`XMLHttpRequest`](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) otherwise | The request object.
`error` | `string` or [`Error`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error) | The error object. You can assign a custom error message or a JavaScript `Error` object.
- `onBeforeSend` - a function that customizes the request before it is sent.
```js
onBeforeSend: (operation, ajaxSettings) => void
```
Parameter | Type | Description
--- | -- | ----
`operation` | `string` | The operation to be performed by the request: `"load"`, `"update"`, `"insert"`, or `"delete"`.
`ajaxSettings` | `object` | Request settings. Refer to [`jQuery.ajax()`](http://api.jquery.com/jquery.ajax/).
- `onInserted` - refer to [`CustomStore.onInserted`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onInserted).
- `onInserting` - refer to [`CustomStore.onInserting`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onInserting).
- `onLoaded` - refer to [`CustomStore.onLoaded`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onLoaded).
- `onLoading` - refer to [`CustomStore.onLoading`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onLoading).
- `onModified` - refer to [`CustomStore.onModified`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onModified).
- `onModifying` - refer to [`CustomStore.onModifying`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onModifying).
- `onPush` - refer to [`CustomStore.onPush`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onPush).
- `onRemoved` - refer to [`CustomStore.onRemoved`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onRemoved).
- `onRemoving` - refer to [`CustomStore.onRemoving`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onRemoving).
- `onUpdated` - refer to [`CustomStore.onUpdated`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onUpdated).
- `onUpdating` - refer to [`CustomStore.onUpdating`](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Configuration/#onUpdating).
- `updateMethod` - the HTTP method for update requests; `"PUT"` by default.
- `updateUrl` - the URL used to update data.
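Put together, these options form a plain configuration object that is passed to `DevExpress.data.AspNet.createStore`. A minimal hypothetical sketch (the URLs, the `region` parameter, and the `Authorization` header below are placeholder assumptions, not part of the API):

```javascript
// Hypothetical options object for DevExpress.data.AspNet.createStore().
// The URLs, the "region" parameter, and the bearer token are placeholders.
var storeOptions = {
    key: "OrderID",
    loadUrl: "https://example.com/api/orders/get",
    insertUrl: "https://example.com/api/orders/insert",
    updateUrl: "https://example.com/api/orders/update",
    deleteUrl: "https://example.com/api/orders/delete",
    loadParams: { region: "West" }, // extra parameters sent with load requests

    // Customize every request before it is sent, e.g. to attach a token.
    onBeforeSend: function(operation, ajaxSettings) {
        ajaxSettings.headers = ajaxSettings.headers || {};
        ajaxSettings.headers["Authorization"] = "Bearer <token>";
    }
};

// In an application this object is passed to the factory method:
// var store = DevExpress.data.AspNet.createStore(storeOptions);
```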
### Methods and Events
Refer to the `CustomStore` [methods](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Methods/) and [events](https://js.devexpress.com/DevExtreme/ApiReference/Data_Layer/CustomStore/Events/) for a list of available methods and events.
### Example
You can find a jQuery example [here](https://github.com/DevExpress/DevExtreme.AspNet.Data/blob/master/net/Sample/Views/Home/Index.cshtml).
[DevExtreme-based ASP.NET Core](https://docs.devexpress.com/AspNetCore/400263) and [DevExtreme ASP.NET MVC 5](https://docs.devexpress.com/DevExtremeAspNetMvc/400943/) controls call the `DevExpress.data.AspNet.createStore` method internally. To configure its parameters, use the `DataSource()` method's lambda expression.
```Razor
@(Html.DevExtreme().DataGrid()
.DataSource(ds => ds.WebApi()
.Controller("NorthwindContext")
.Key("OrderID")
.LoadAction("GetAllOrders")
.InsertAction("InsertOrder")
.UpdateAction("UpdateOrder")
.DeleteAction("RemoveOrder")
)
)
```
## See Also
- [DataGrid and Web API example](https://github.com/DevExpress/devextreme-examples/tree/17_2/datagrid-webapi)
- [PivotGrid and Web API example](https://github.com/DevExpress/devextreme-examples/tree/17_2/pivotgrid-webapi)
- [DataGrid in an MVC 5 App example](https://github.com/DevExpress/devextreme-examples/tree/17_2/datagrid-mvc5)
- [Custom Data Sources](https://js.devexpress.com/Documentation/Guide/Data_Binding/Specify_a_Data_Source/Custom_Data_Sources/)
- [PivotGrid - Use CustomStore](https://js.devexpress.com/Documentation/Guide/Widgets/PivotGrid/Use_CustomStore/)
# Person re-identification
This repository contains training and inference code for person re-identification
neural networks. The networks are based on [OSNet](https://arxiv.org/abs/1905.00953)
architecture provided by [torchreid](https://github.com/KaiyangZhou/deep-person-reid.git)
project. The code supports conversion to ONNX format and inference of OpenVINO models.
## Setup
### Prerequisites
* Ubuntu 16.04
* Python 3.5.2
* PyTorch 1.3 or higher
* OpenVINO 2019 R4 (or later) with Python API
### Installation
1. Create and activate a virtual Python environment
```bash
cd $(git rev-parse --show-toplevel)/pytorch_toolkit/person_reidentification
bash init_venv.sh
. venv/bin/activate
```
### Datasets
The networks were trained on the following datasets:
* Market-1501
* MSMT17v2
For training, you need to set up a root directory for the datasets.
Structure of the root directory:
```
root
├── market1501
│ └── Market-1501-v15.09.15
│ ├── bounding_box_test
│ ├── bounding_box_train
│ └── query
│
└── msmt17
└── MSMT17_V2
├── mask_test_v2
├── mask_train_v2
├── list_gallery.txt
├── list_query.txt
├── list_train.txt
└── list_val.txt
```
### Configuration files
The training and inference script uses a configuration file.
[default_config.py](config/default_config.py) contains the default parameters.
This file also describes each parameter.
Parameters that you wish to change must be set in your own configuration file.
Example: [person-reidentification-retail-0200.yaml](config/person-reidentification-retail-0200.yaml)
## Training
To start training, create or choose a configuration file and use the `main.py` script.
An example:
```bash
python main.py \
--root /path/to/datasets/directory/root \
--config config/person-reidentification-retail-0200.yaml
```
## Test
To test your network, set the `test.evaluate` parameter to `True` in the configuration file
and run the same command as for training.
To visualize results, set the `test.visrank` parameter to `True` (it works only when
`test.evaluate` is `True`).
To visualize activation maps, set the `test.visactmap` parameter to `True`.
### Pretrained models
You can download pretrained models in PyTorch format corresponding to the provided configs from fileshare as well:
- [person-reidentification-retail-0103](https://download.01.org/opencv/openvino_training_extensions/models/person_reidentification/person-reidentification-retail-0103.pt)
- [person-reidentification-retail-0107](https://download.01.org/opencv/openvino_training_extensions/models/person_reidentification/person-reidentification-retail-0107.pt)
- [person-reidentification-retail-0200](https://download.01.org/opencv/openvino_training_extensions/models/person_reidentification/person-reidentification-retail-0200.pt)
### Test OpenVINO reidentification models
OpenVINO models are represented by \*.xml and \*.bin files (IR format).
To use such a model, just set the following parameters in the config file:
```yaml
model:
openvino:
name: /path/to/model/in/IR/format.xml
cpu_extension: /path/to/cpu/extension/lib.so
```
\*.xml and \*.bin files should be saved in the same directory.
## Converting a PyTorch model to the OpenVINO format
The conversion is done in two stages: first, convert the PyTorch model to the ONNX format; second, convert the obtained ONNX model to the IR format.
To convert a trained model from PyTorch to the ONNX format, use the following command:
```bash
python convert_to_onnx.py \
--config /path/to/config/file.yaml \
--output-name /path/to/output/model \
--verbose
```
The output model name is given a `.onnx` extension automatically.
By default the output model path is `model.onnx`. Be careful with the
`load_weights` parameter in the config file. The `verbose` argument is
optional and switches on detailed output in the conversion function.
After the ONNX model is obtained, one can convert it to IR.
This produces model `model.xml` and weights `model.bin` in single-precision floating-point format (FP32):
```bash
python <OpenVINO_INSTALL_DIR>/deployment_tools/model_optimizer/mo.py --input_model model.onnx \
--mean_values '[123.675, 116.28 , 103.53]' \
--scale_values '[58.395, 57.12 , 57.375]' \
--reverse_input_channels
```
## OpenVINO demo
OpenVINO provides a multi-camera multi-person tracking demo that can use these models as person re-identification networks. See the details in the [demo](https://github.com/opencv/open_model_zoo/tree/develop/demos/python_demos/multi_camera_multi_person_tracking).
# Slot: decreases folding of
holds between two molecular entities where the action or effect of one decreases the rate or quality of folding of the other
URI: [biolink:decreases_folding_of](https://w3id.org/biolink/vocab/decreases_folding_of)
## Domain and Range
[MolecularEntity](MolecularEntity.md) → <sub>0..\*</sub> [MolecularEntity](MolecularEntity.md)
## Parents
* is_a: [affects folding of](affects_folding_of.md)
## Children
## Used by
## Other properties
| | | |
| --- | --- | --- |
| **In Subsets:** | | translator_minimal |
| **Exact Mappings:** | | CTD:decreases_folding_of |
### SeparationFactor
----
K-sparse Conditional-Variational Autoencoder implementation.
Variational Autoencoders (VAEs) map data 'x' into a latent dimension 'z'. The dimension of 'z' is typically much smaller than that of 'x'. As a VAE learns the distribution of x, it is believed that the underlying features of the distribution can be implicitly learned. The hope with VAEs is that, with proper training, the latent space will come to reflect the underlying features of the incoming data. Unfortunately, this is not exactly the case.
There is significant work being done on modifying VAEs in such a way as to improve how well these features are mapped onto the latent space 'z'. This mechanism is known as 'disentanglement'. A perfectly disentangled latent representation is one where modification of a single latent dimension will modify a single factor generated by the VAE [1]. Unfortunately, this definition is vague and has been a topic of extensive debate [2].
Current methods for improving unsupervised disentanglement have focused primarily on using alternative likelihood functions [3][4].
Another avenue of investigation involves supervised disentanglement, which focuses on modifying the likelihood function of VAEs by implementing an inductive bias on the prior. This bias comes in the form of an additional condition on the likelihood function (p(x|z,c)). Specifically, by adding a classifier to the encoder, one can add 'structure' to the posterior distribution [5].
To further improve this structure, I have written a custom layer for TensorFlow which can be added between a classifier and a latent space to boost the separation between different latent variables. I originally named this 'separation factor', but I later found the idea to be known as K-Sparse [6].
To show the implementation, I have created a notebook with the basic use of my 'separation factor'. The VAE is trained on MNIST digits. You will see that, as you traverse the latent dimension as a single one-hot array, you can specify how a given reconstruction is changed into another digit.
### Example Implementation
----
<a href = "https://github.com/pluu2/SeparationFactor/blob/master/Conditional_VAEs_with_K_Sparse.ipynb"> Basic Implementation</a>
Below is a step-wise change of the image of a '1' transforming into a '4'. The original dataset did not contain any of the resulting images except the first one. The transformation was learned by the network, with clean separation.



To do:
[x] Basic Implementation on MNIST
[ ] Demonstrate Quantitative Disentanglement.
References:
----
[1] https://towardsdatascience.com/disentanglement-with-variational-autoencoder-a-review-653a891b69bd
[2] https://arxiv.org/abs/1811.12359
[3] https://arxiv.org/abs/1606.04934
[4] https://arxiv.org/pdf/1611.05013.pdf
[5] Sohn, K., Lee, H., Yan, X. (2015). Learning Structured Output Representation using Deep Conditional Generative Models. Advances in Neural Information Processing Systems.
[6] Makhzani, A., Frey, B. (2013). k-Sparse Autoencoders. arXiv:1312.5663.
[![npm status][npm-img]][npm-url]
[![Build status][travis-img]][travis-url]
[![Coverage status][coverage-img]][coverage-url]
[![Apache 2.0 licensed][license-img]][license-url]
[npm-img]: https://img.shields.io/npm/v/@esri/solution-common.svg?style=round-square&color=blue
[npm-url]: https://www.npmjs.com/package/@esri/solution-common
[travis-img]: https://img.shields.io/travis/com/Esri/solution.js/develop.svg
[travis-url]: https://app.travis-ci.com/github/Esri/solution.js
[coverage-img]: https://coveralls.io/repos/github/Esri/solution.js/badge.svg
[coverage-url]: https://coveralls.io/github/Esri/solution.js
[license-img]: https://img.shields.io/badge/license-Apache%202.0-blue.svg
[license-url]: #license
## Solution.js
> TypeScript wrappers running in Node.js and modern browsers for transferring ArcGIS Online items from one organization to another. [Video introduction](https://youtu.be/esmUmIf3hcI) from the 2020 Developer Summit.
### Table of Contents
- [API Overview](#api-overview)
- [Instructions](#instructions)
- [Frequently Asked Questions](#frequently-asked-questions)
- [Guides](#guides)
- [Issues](#issues)
- [Versioning](#versioning)
- [Contributing](#contributing)
- [License](#license)
---
### API Overview
#### Common terms
An ArcGIS Online (AGO) `item` is transformed into a `template` that contains all of its defining information. If the item depends on other items, those items are also transformed into templates.
A `Solution Item` can contain either
* a list of Item Templates
* a list of references to deployed items
When it contains Item Templates, it can be used for organizing and distributing Solutions, e.g., for displaying in a gallery of Solutions.
When a Solution is deployed into an organization, a new Solution is created that contains references to the items deployed into the organization; it serves as a table of contents for the deployment.
#### Packages
The API is divided into packages to make it easier to use just the parts that you want:
* `common`, which contains common helper functions for the other packages
* `creator`, which contains functions for transforming items into templates
* `deployer`, which contains functions for deploying item templates into items in a destination organization
* `feature-layer`, which contains functions for Feature Service items
* `file`, which contains functions for items that contain files
* `form`, which contains functions for form items
* `group`, which contains functions for Groups
* `hub-types`, which contains functions supporting ArcGIS Hub Sites and Initiatives
* `simple-types`, which contains functions for the simpler item types Dashboard, Form, Web Map, Web Mapping Application, and Workforce Project
* `storymap`, which contains functions for Storymap items
* `velocity`, which contains functions to support ArcGIS Velocity items
* `viewer`, which contains functions to support displaying Solution items
* `web-experience`, which contains functions for Experience Builder items
#### Additional information
The API documentation is published at https://esri.github.io/solution.js/
#### Supported ArcGIS Online Item Types
Currently, the ArcGIS Online item types that can be converted into a template are:
* **App types:** Dashboard, Form, Hub Initiative, Hub Page, Hub Site Application, Insights Model, Notebook, Oriented Imagery Catalog, QuickCapture Project, Site Application, Site Page, StoryMap, Web Experience, Web Mapping Application, Workforce Project
* **Map types:** Web Map, Web Scene
* **Layer types:** Big Data Analytic, Feature Collection, Feature Service, Feed, Map Service, Real Time Analytic
* **File types:** 360 VR Experience, AppBuilder Extension, AppBuilder Widget Package, Application Configuration, ArcGIS Pro Add In, ArcGIS Pro Configuration, ArcPad Package, Basemap Package, CAD Drawing, CityEngine Web Scene, Code Attachment, Code Sample, Color Set, Compact Tile Package, CSV Collection, CSV, Deep Learning Package, Desktop Add In, Desktop Application Template, Desktop Style, Document Link, Explorer Add In, Explorer Layer, Explorer Map, Feature Collection Template, File Geodatabase, GeoJson, GeoPackage, Geoprocessing Package, Geoprocessing Sample, Globe Document, Image Collection, Image, iWork Keynote, iWork Numbers, iWork Pages, KML Collection, Layer Package, Layer Template, Layer, Layout, Locator Package, Map Document, Map Package, Map Template, Microsoft Excel, Microsoft Powerpoint, Microsoft Word, Mobile Basemap Package, Mobile Map Package, Mobile Scene Package, Native Application, Native Application Installer, Native Application Template, netCDF, Operation View, Operations Dashboard Add In, Operations Dashboard Extension, PDF, Pro Layer Package, Pro Layer, Pro Map Package, Pro Map, Pro Report, Project Package, Project Template, Published Map, Raster function template, Report Template, Rule Package, Scene Document, Scene Package, Service Definition, Shapefile, Statistical Data Collection, Style, Survey123 Add In, Symbol Set, Task File, Tile Package, Toolbox Package, Vector Tile Package, Viewer Configuration, Visio Document, Window Mobile Package, Windows Mobile Package, Windows Viewer Add In, Windows Viewer Configuration, Workflow Manager Package
*The `implemented-types` demo generates its list from the source code.*
### Instructions
#### Setup
The following steps will build the repository:
1. `npm install`
1. `pushd demos`
1. `npm run build`
1. `popd`
1. `npm run test:chrome:ci`
These steps are in the file `build.bat` for Windows computers.
#### npm commands
For a list of all available commands run `npm run`.
These commands are:
* building
* `npm run build` creates symlinks among packages and creates node, umd, and esm outputs for each package
* `npm run clean` runs `clean:src` and `clean:dist` _(requires bash console)_
* `npm run clean:src` deletes `.d.ts`, `.js`, and `.js.map` files
* `npm run clean:dist` deletes `.rpt2_cache` and `dist` folders
* `npm run lint` lints the TypeScript files
* `npm run lint:fix` lints the TypeScript files and fixes
* `npm run prettify` beautifies TypeScript files
* testing
* `npm run test` lints, then runs `test:chrome` tests to confirm that the API is functioning as expected
* `npm run test:browsers` runs karma in the Chrome, Firefox, and Chromium Edge browsers
* `npm run test:chrome` runs karma in the Chrome browser
* `npm run test:chrome:ci` runs karma in the ChromeHeadlessCI browser
* `npm run test:chrome:debug` runs karma in the Chrome browser and leaves the browser open for debugging tests
* `npm run test:edge` runs karma in the Edge (Chromium) browser
* `npm run test:firefox` runs karma in the Firefox browser
* `npm run test:ci` lints, then runs `test:chrome:ci`, `test:firefox`, and `coveralls` from a bash window
* `npm run test:ci:win` lints, then runs `test:chrome:ci`, `test:firefox`, and `coveralls:win` from a Windows window
* `npm run test:all` runs `test:chrome` and `test:edge` and `test:firefox`
* `npm run coveralls` updates code coverage info from a bash window
* `npm run coveralls:win` updates code coverage info from a Windows window
* publishing doc
* `npm run docs:build` builds the documentation ___(note that this script creates a `docs` folder, deleting any existing one)___
* `npm run docs:deploy` pushes the documentation to the repository's gh-pages
* publishing code
* `npm run release:prepare1` fetch, compile, and test _(requires bash shell)_
* `npm run release:prepare2` bundles packages and asks you for the new version number _(use arrow keys to put cursor on line_ above _desired version)_ _(requires Windows shell)_
* `npm run release:review` shows summary of git changes
* `npm run release:publish-git` publishes a version to GitHub _(requires bash shell)_
* `npm run release:publish-npm` publishes a version to npm _(requires Windows shell)_
* lifecycle
* postinstall runs `bootstrap`
* bootstrap
* precommit
### Frequently Asked Questions
* [Is this a supported Esri product?](https://github.com/Esri/solution.js/blob/master/guides/FAQ.md#is-this-a-supported-esri-product)
* [What browsers are supported?](https://github.com/Esri/solution.js/blob/master/guides/FAQ.md#what-browsers-are-supported)
* [What is the development workflow?](https://github.com/Esri/solution.js/blob/master/guides/FAQ.md#what-is-the-development-workflow)
### Guides
* [Package overview](https://github.com/Esri/solution.js/blob/master/guides/package-overview.md)
* [Deploying with the repository](https://github.com/Esri/solution.js/blob/master/guides/deployment.md)
* [Authentication in Browser-based Apps](https://github.com/Esri/solution.js/blob/master/guides/browser-authentication.md)
* [Publishing to npmjs](https://github.com/Esri/solution.js/blob/master/guides/Publishing%20to%20npmjs.md)
### Issues
Found a bug or want to request a new feature? Please take a look at [previously logged issues](https://github.com/Esri/solution.js/issues);
if you don't see your concern, please let us know by [submitting an issue](https://github.com/Esri/solution.js/issues/new).
### Versioning
For transparency into the release cycle and in striving to maintain backward compatibility, @esri/solution.js is maintained under Semantic Versioning guidelines and will adhere to these rules whenever possible. For more information on SemVer, please visit <http://semver.org/>.
## Contributing
Esri welcomes contributions from anyone and everyone. Please see our [guidelines for contributing](https://github.com/Esri/solution.js/blob/master/CONTRIBUTING.md).
### License
Copyright © 2018-2022 Esri
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
> http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
A copy of the license is available in the repository's [LICENSE](https://github.com/Esri/solution.js/blob/master/LICENSE) file.
__[Third-Party Licenses](https://github.com/Esri/solution.js/blob/master/Third-Party%20Licenses.md)__

---
layout: post
title: "Nissan Leaf Exploit"
date: 2016-03-01 9:33:27
tags:
- security
- internet of things
---
Yet another automaker skimping on digital security. This time it's
Nissan, whose Leaf has a web app that allows you to check battery status,
but also manage the climate control remotely, yet has absolutely zero
authentication, allowing anyone with your VIN to freely roast or freeze
you out.
Here's an [excellent writeup][blogpost] by [Troy Hunt][about] detailing
how the exploit was found, tested and eventually exposed after Nissan
dragged their feet in addressing the issue once it was brought to their
attention.
[blogpost]: http://www.troyhunt.com/2016/02/controlling-vehicle-features-of-nissan.html
[about]: http://www.troyhunt.com/p/about.html
# GeocodePreferences
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**returnAllCandidateInfo** | **Boolean** | |
**fallbackToGeographic** | **String** | |
**fallbackToPostal** | **String** | |
**maxReturnedCandidates** | **String** | |
**distance** | **String** | |
**streetOffset** | **String** | |
**cornerOffset** | **String** | |
**matchMode** | **String** | | [optional]
**clientLocale** | **String** | | [optional]
**clientCoordSysName** | **String** | | [optional]
**distanceUnits** | **String** | | [optional]
**streetOffsetUnits** | **String** | | [optional]
**cornerOffsetUnits** | **String** | | [optional]
**mustMatchFields** | [**FieldsMatching**](FieldsMatching.md) | | [optional]
**returnFieldsDescriptor** | [**ReturnFieldsDescriptor**](ReturnFieldsDescriptor.md) | | [optional]
**outputRecordType** | **String** | | [optional]
**customPreferences** | **Map<String, Object>** | | [optional]
**preferredDictionaryOrders** | **List<String>** | | [optional]
**outputCasing** | **String** | | [optional]
**latLongOffset** | **String** | | [optional]
**squeeze** | **String** | | [optional]
**returnLatLongFields** | **String** | | [optional]
**useGeoTaxAuxiliaryFile** | **String** | | [optional]
**latLongFormat** | **String** | | [optional]
**defaultBufferWidth** | **String** | | [optional]
**returnCensusFields** | **String** | | [optional]
---
title: "Solo Day"
tags:
- Bootcamp
- HA
- Review
---
After finishing the four-week Pre course, we moved into Solo Day, a one-week period set aside for review.
During this week we reviewed everything we had learned so far and took the Hiring Assessment, which uses that material to check where we currently stand.
The Hiring Assessment (HA) is a way of gauging readiness before moving on to the Immersive course; failing it doesn't mean being dropped from the next course. Rather, it should be thought of as a tool for diagnosing your current state.
The HA ran on Tuesday: we had a day to work on 7 problems, made a first submission by 10 a.m. the next morning, could keep working on any unsolved problems, and the final deadline was 10 a.m. the day after that.
To prepare for Tuesday's test, I spent Monday going back over the Coplit problems and sprints we had completed, reviewing what we had learned.
Looking over it all, I thought,
> "Whoa, I did this much in a single month? That's pretty impressive!"
and felt rather pleased with myself.
Then came the long-awaited HA.
Seven problems appeared before my eyes.
Problem 1: return an object holding the count of each word in the input string.
I started solving. It mostly came out as expected, but why was a count for '' showing up even after removing the whitespace?
Deleting the '' key at the end solved it.
Problem 2: return the sum of all the digits of the input number.
This one was on the easier side; I only had to handle the negative case.
Problem 3: multiply the digits of the input number, repeating until the result is a single digit, and return that number.
Similar to problem 2. I solved it by calling the function recursively until the result became a single digit.
Problem 4: transform information given as an array into HTML elements and display it.
It was a problem about implementing unfamiliar HTML with JS.
I wrote code that creates tags, appends them, and fills in the content.
But when I ran it, nothing was displayed.
After struggling for two hours, it turned out the order of the code was the problem. That is where I learned how much code order matters.
Problem 5: from information given as an array, print people's names in order of age.
This was the second-hardest problem.
At first I couldn't even get started. Since it was an array of arrays rather than an object, ideas for sorting and output didn't come to me all at once.
After some thought, I first extracted and sorted the ages, then printed each name from the array whose age matched the sorted list, and it passed.
Problem 6: write a closure-style function that outputs the Fibonacci sequence in order.
Being Fibonacci, I expected it to be simple, but it was trickier than I thought.
When solving it the usual way I start with 0 and 1 already laid down, so it's no problem, but here returning 0 and 1 also counted as calls, and accounting for that took some thought.
I solved it by first putting 0 and 1 into an array and then building a closure-style function that fetches the values on each call.
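The closure-based Fibonacci described above can be sketched in JavaScript like this (a reconstruction from the description; the actual HA solution isn't shown):

```javascript
function fibGenerator() {
  const seq = [0, 1]; // seed the array with 0 and 1 first
  let idx = 0;
  return function next() {
    if (idx >= seq.length) {
      seq.push(seq[seq.length - 1] + seq[seq.length - 2]);
    }
    return seq[idx++]; // the calls that return 0 and 1 are counted too
  };
}

const next = fibGenerator();
console.log(next(), next(), next(), next(), next()); // 0 1 1 2 3
```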
And the grand finale, problem 7: given an array of objects and an id, use recursion to return the object holding that id.
The structure was clear in my head: if the id we're looking for matches the object's id, return it; otherwise move on. And if the object has children, recurse into them and search the same way.
But it didn't pass. Debugging it again and again, I could see that even while recursing it kept hitting the return statement and escaping.
Finding the cause but still being unable to fix the problem, I fell into despair.
Thinking about the problem while resting, thinking about it while eating, thinking about it before falling asleep... on repeat.
It was finally solved the next afternoon.
Adding a clause to the condition, "recurse only if there are children," fixed it.
I don't think I understand it precisely yet, but my take is this:
before adding the condition it hit the return and escaped whether or not there were children, whereas after adding it the recursion only runs when children exist, so when there are none it simply keeps going, and I think that is why.
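That fix can be sketched in JavaScript as follows (a reconstruction; the object shape with `id` and an optional `children` array is assumed from the problem statement):

```javascript
function findById(nodes, id) {
  for (const node of nodes) {
    if (node.id === id) return node;
    if (node.children) { // recurse only when children exist
      const found = findById(node.children, id);
      if (found) return found; // otherwise keep scanning the siblings
    }
  }
  return null; // nothing matched in this subtree
}
```

Without the `if (node.children)` guard (and without checking the recursive result before returning), the loop returns early and never reaches the later siblings.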
I solved every problem in the HA. Even just 7 problems took me more than a day.
My skills still seem lacking. I feel I should study more in the remaining time and raise my level.
Fighting for the Immersive course starting next week!
| 36.836066 | 133 | 0.692924 | kor_Hang | 1.00001 |
c12f25d2943ea92a12616ce860a560918cd4ff41 | 877 | md | Markdown | README.md | Hearsayer/PopupView | 8fe417e646b20fd707660dcec05b217ab37f705c | [
"MIT"
] | 1 | 2017-12-25T02:20:04.000Z | 2017-12-25T02:20:04.000Z | README.md | Hearsayer/PopupView | 8fe417e646b20fd707660dcec05b217ab37f705c | [
"MIT"
] | null | null | null | README.md | Hearsayer/PopupView | 8fe417e646b20fd707660dcec05b217ab37f705c | [
"MIT"
] | null | null | null | # PopupView
Elegant pop-view in Swift

## How to use
1. PopupAlertView
```swift
let item1 = PopupItem(title: "OK", type: .destruct) {
    print("Tapped the 'OK' button")
}
let item2 = PopupItem(title: "Cancel", type: .normal) {
    print("Tapped the 'Cancel' button")
}
let popView = PopupAlertView(title: "Title", message: "Message text; quite a lot of content can go here", items: [item1, item2])
popView.show()
```
2. PopupSheetView
```swift
let item1 = PopupItem(title: "First button", type: .destruct) {
    print("Tapped the 'First button'")
}
let item2 = PopupItem(title: "Second button", type: .normal) {
    print("Tapped the 'Second button'")
}
let sheetView = PopupSheetView(message: "message", items: [item1, item2])
sheetView.show()
```
3. PopupView
```swift
let popupView = PopupView()
popupView.popupView.addSubview(CustomView.show())
popupView.popType = .sheet
popupView.show()
```
| 22.487179 | 95 | 0.689852 | kor_Hang | 0.212904 |
c12f6680b7bab7b48d83d4f7c45d91d18f6d02bc | 21,487 | md | Markdown | src/analysis_map_visualization_cataluna_20200514.md | gentok/covid19spain | 9b732a050900eaede7893d8ea5b581cc13ad2743 | [
"MIT"
] | null | null | null | src/analysis_map_visualization_cataluna_20200514.md | gentok/covid19spain | 9b732a050900eaede7893d8ea5b581cc13ad2743 | [
"MIT"
] | null | null | null | src/analysis_map_visualization_cataluna_20200514.md | gentok/covid19spain | 9b732a050900eaede7893d8ea5b581cc13ad2743 | [
"MIT"
] | null | null | null | Visualize COVID-19 PCR Test Data
================
Gento Kato
May 15, 2020
# Preparation
``` r
## Clear Workspace
rm(list=ls())
## Set Working Directory (Automatically to Project Home) ##
library(rprojroot)
if (rstudioapi::isAvailable()==TRUE) {
setwd(dirname(rstudioapi::getActiveDocumentContext()$path));
}
projdir <- find_root(has_file("thisishome.txt"))
cat(paste("Working Directory Set to:\n",projdir))
```
## Working Directory Set to:
## /home/gentok/GoogleDrive/Projects/Coronavirus_Project/Coronavirus_spain
``` r
setwd(projdir)
## Plots ##
require(sf)
require(rmapshaper)
require(ggplot2)
require(dplyr)
# Date_analy <- Date_analy_simple <- "latest"
Date_analy <- "2020-05-14"
Date_analy_simple <- gsub("-","", Date_analy)
Date_analy
```
## [1] "2020-05-14"
``` r
## Import Relevant Data
granddt <- readRDS(paste0(projdir,"/data/granddt_",Date_analy_simple,".rds"))
shapedt <- readRDS(paste0(projdir,"/data/shapefile/shape_cataluna_rev.rds")) %>%
filter(mundesc != "(altres municipis)")
print(object.size(shapedt), units="Mb")
```
## 23.7 Mb
``` r
shapedt_simple <- ms_simplify(shapedt) %>% st_as_sf()
print(object.size(shapedt_simple), units="Mb")
```
## 2.3 Mb
``` r
# Merge All Data
granddt <- granddt %>%
filter(Data==max(Data)) %>%
inner_join(shapedt_simple, by = c("mundesc"="mundesc"))
target <- granddt
```
# Map Plots
``` r
## Limits COVID-19 Data to Those with >=10 Tests
## Positive Rate
target$posrate <- ifelse(granddt$test<10, NA, granddt$posrate)
plabs <- paste0(sprintf("%.1f", quantile(target$posrate, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(posrate)), color="white", size=0.1) +
scale_fill_viridis_c(name="Positive rate (%)", option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title=paste0("COVID-19 Positive Rate in Catalonia \n(As of ", max(granddt$Data), ")"),
caption="Note: Municipalities with less than 10 reported PCR tests are greyed out.") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_posrate_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_posrate_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_posrate_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_posrate_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
```
``` r
## Positive Cases / 10k Population
target$pos_100k <- ifelse(granddt$test<10, NA, granddt$pos_100k)
plabs <- paste0(sprintf("%.0f", quantile(target$pos_100k, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(pos_100k)), color="white", size=0.1) +
scale_fill_viridis_c(name="Positive cases/\n10,000 population", option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title=paste0("COVID-19 Positive Cases/10,000 Population in Catalonia \n(As of ", max(granddt$Data), ")"),
caption="Note: Municipalities with less than 10 reported PCR tests are greyed out.") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_pos_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_pos_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_pos_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_pos_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
```
``` r
## Tested Cases / 10k Population
target$test_100k <- ifelse(granddt$test<10, NA, granddt$test_100k)
plabs <- paste0(sprintf("%.0f", quantile(target$test_100k, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(test_100k)), color="white", size=0.1) +
scale_fill_viridis_c(name="Tested cases/\n10,000 population", option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title=paste0("COVID-19 Tested Cases/10,000 Population in Catalonia \n(As of ", max(granddt$Data), ")"),
caption="Note: Municipalities with less than 10 reported PCR tests are greyed out.") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_test_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_test_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_test_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_test_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Registered Unemployment Rate
plabs <- paste0(sprintf("%.1f", quantile(target$regunemprate, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(regunemprate)), color="white", size=0.1) +
scale_fill_viridis_c(name="Registered unemployment in \npop. of age 15 to 65 (%)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Unemployment Rate in Catalonia (2019 Average)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_regunemprate_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_regunemprate_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_regunemprate_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_regunemprate_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Alternative Unemployment Rate
plabs <- paste0(sprintf("%.1f", quantile(target$unemprate, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(unemprate)), color="white", size=0.1) +
scale_fill_viridis_c(name="Unemployment in \nactive population (%)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Unemployment Rate in Catalonia (2011 Alternative Measurement)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_unemprate_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_unemprate_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_unemprate_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_unemprate_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Taxable Base Income
plabs <- paste0(sprintf("%.0f", quantile(target$taxbaseincome, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(taxbaseincome)), color="white", size=0.1) +
scale_fill_viridis_c(name="Av. taxable base income \n(in 10k euro)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Average Taxable Base Income in Catalonia (2017)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_taxbaseincome_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_taxbaseincome_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_taxbaseincome_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_taxbaseincome_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Proportion Living In Small Housing
plabs <- paste0(sprintf("%.1f", quantile(target$prop_smallhouse, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(prop_smallhouse)), color="white", size=0.1) +
scale_fill_viridis_c(name="Residence 90 sq. m \nor smaller (%)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,0.99), labels = plabs,
na.value = "grey80") +
labs(title="Proportion of Residents Living in Small Housing \nin Catalonia (2011 Census)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_smallhouse_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_smallhouse_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_smallhouse_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_smallhouse_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Proportion of Immigrants out of EU
plabs <- paste0(sprintf("%.1f", quantile(target$prop_immig_noEU, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(prop_immig_noEU)), color="white", size=0.1) +
scale_fill_viridis_c(name="Immigrants from \nout of EU (%)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Immigrants from Out of EU in Catalonia (2018)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_immig_noEU_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_immig_noEU_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_immig_noEU_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_immig_noEU_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Proportion of Service Industry Workers
plabs <- paste0(sprintf("%.1f", quantile(target$prop_service, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(prop_service)), color="white", size=0.1) +
scale_fill_viridis_c(name="Working in \nservice industry (%)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,0.96), labels = plabs,
na.value = "grey80") +
labs(title="Proportion of Service Industry Workers in Catalonia (2018)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_service_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_service_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_service_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_service_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Crude Death Rate (2018)
plabs <- paste0(sprintf("%.1f", quantile(target$deathrate, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(deathrate)), color="white", size=0.1) +
scale_fill_viridis_c(name="Crude death rate (2018)\n(death/1000 pop.)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Crude Death Rates in Catalonia (2018)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_deathrate_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_deathrate_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_deathrate_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_deathrate_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Proportion of 65+ Residents
plabs <- paste0(sprintf("%.1f", quantile(target$poppr_65plus, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(poppr_65plus)), color="white", size=0.1) +
scale_fill_viridis_c(name="Proportion of residents\nwith age 65+ (%)", option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Proportion of Elderly Residents in Catalonia (2019)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_poppr_65plus_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_poppr_65plus_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_poppr_65plus_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_poppr_65plus_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Proportion of University Educated
plabs <- paste0(sprintf("%.1f", quantile(target$prop_univ, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(prop_univ)), color="white", size=0.1) +
scale_fill_viridis_c(name="Attended university in \nage 16+ pop. (%)",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Proportion of University Educated in Catalonia (2011 Census)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_univ_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_univ_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_univ_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_univ_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Population
plabs <- paste0(sprintf("%.0f", quantile(target$pop, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(pop)), color="white", size=0.1) +
scale_fill_viridis_c(name="Population", option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Population in Catalonia (2019)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_pop_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_pop_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_pop_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_pop_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Population Density
plabs <- paste0(sprintf("%.0f", quantile(target$popdens, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(popdens)), color="white", size=0.1) +
scale_fill_viridis_c(name="Population density \n(per sq. km)", option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,1), labels = plabs,
na.value = "grey80") +
labs(title="Population Density in Catalonia (2019)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_popdens_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_popdens_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_popdens_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_popdens_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
## Average Number in House Hold
plabs <- paste0(sprintf("%.1f", quantile(target$mean_numhh_census, probs=c(0,0.25,0.5,0.75,1), na.rm=T)),
c(" (min)"," (25%tile)"," (median)"," (75%tile)"," (max)"))
p <- ggplot(target) +
geom_sf(aes(geometry=geometry, fill=percent_rank(mean_numhh_census)), color="white", size=0.1) +
scale_fill_viridis_c(name="Av. number of members \nin a household",
option="A", direction=-1,
breaks = c(0,0.25,0.5,0.75,0.99), labels = plabs,
na.value = "grey80") +
labs(title="Average Household Size in Catalonia (2011 Census)") +
theme_void() + theme(plot.title = element_text(hjust=0.5))
```
``` r
p
```
``` r
ggsave(paste0(projdir,"/out/map_numhh_",Date_analy_simple,".png"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/map_numhh_",Date_analy_simple,".pdf"), p, width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_numhh_wotitle_",Date_analy_simple,".png"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
ggsave(paste0(projdir,"/out/ForArticle/map_numhh_wotitle_",Date_analy_simple,".pdf"),
p + labs(title = NULL, caption = NULL), width=8, height=6)
```
| 44.764583 | 112 | 0.648439 | eng_Latn | 0.203202 |
c1302e87d2b81250f545eb23254479f21026fd1f | 846 | md | Markdown | dart/README.md | ajsecord/material-color-utilities | 08b3fbb245c738f1e72c57f3670002a5688a6f59 | [
"Apache-2.0"
] | 1 | 2021-12-03T20:45:20.000Z | 2021-12-03T20:45:20.000Z | dart/README.md | ajsecord/material-color-utilities | 08b3fbb245c738f1e72c57f3670002a5688a6f59 | [
"Apache-2.0"
] | null | null | null | dart/README.md | ajsecord/material-color-utilities | 08b3fbb245c738f1e72c57f3670002a5688a6f59 | [
"Apache-2.0"
] | null | null | null | # material_color_utilities
[](https://pub.dev/packages/material_color_utilities)
Algorithms and utilities that power the Material Design 3 (M3) color system,
including choosing theme colors from images and creating tones of colors; all in
a new color space.
See the main
[README](https://github.com/material-foundation/material-color-utilities#readme)
for more information.
## Getting started
`dart pub add material_color_utilities` or `flutter pub add
material_color_utilities`
```dart
import 'package:material_color_utilities/material_color_utilities.dart';
```
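A minimal usage sketch, assuming the package's `CorePalette` surface (check the package's API docs for the exact names):

```dart
import 'package:material_color_utilities/material_color_utilities.dart';

void main() {
  // Derive tonal palettes from a single seed color (an ARGB integer).
  final palette = CorePalette.of(0xff4285f4);

  // Pick specific tones (0 = black ... 100 = white) from the primary palette.
  final primary40 = palette.primary.get(40);
  final primary90 = palette.primary.get(90);
  print('tone 40: $primary40, tone 90: $primary90');
}
```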
## Contributing
This repo is not accepting external contributions, but feature requests and bug
reports are welcome on
[Github](https://github.com/material-foundation/material-color-utilities/issues).
| 31.333333 | 126 | 0.806147 | eng_Latn | 0.903301 |
c130665464ad00152dfd83fc316a06b51f1af74b | 2,156 | md | Markdown | README.md | AdeBC/GSSR | 895eab0d09ae3c8b48d96d3094e3d1de4b3187a5 | [
"MIT"
] | 5 | 2021-03-28T05:42:09.000Z | 2021-06-25T08:00:46.000Z | README.md | AdeBC/GSSR | 895eab0d09ae3c8b48d96d3094e3d1de4b3187a5 | [
"MIT"
] | null | null | null | README.md | AdeBC/GSSR | 895eab0d09ae3c8b48d96d3094e3d1de4b3187a5 | [
"MIT"
] | null | null | null | # GSSR
~~Gene Splice Site Recognition by WAM, Bayesian Network and SVM approaches~~
A Computing Data Science Perspective on Gene Splice Site Identification.
## Abstract
As molecular biology and information technology advance, machine learning techniques find a wide range of applications in bioinformatics. In this work, three models (WAM, BN, SVM) are evaluated comprehensively on the problem of gene splicing donor site identification, using both a balanced and an unbalanced dataset. A set of comprehensive performance metrics is also introduced in the experiments, and a correction has been made to the Bayesian network to detect splice signals precisely. The results show that SVM handles unbalanced data well while WAM and BN do not. The fitness of the metrics is also tested, showing that auPRC is more sensitive to unbalanced data and is therefore more applicable in many situations.
## Report
[A Computing Data Science Perspective on Gene Splice Site Identification](https://github.com/AdeBC/GSSR/blob/master/Report/A%20Computing%20Data%20Science%20Perspective%20on%20Gene%20Splice%20Site%20Identification.pdf)
## Requirements
see [requirements.txt](requirements.txt)
## Reproducibility
The data, code and running process are retained in the report and [Report.ipynb](Source/Report.ipynb)
## Models
Weighted Array Model, Bayesian network, and Support vector machine.
See [Models.ipynb](Source/Models.ipynb) for implementation details.
## Maintainer
| Name | Email | Organization |
| --------- | ----------------------------------------------- | ------------------------------------------------------------ |
| Hui Chong | chonghui@hust.edu.cn<br />huichong.me@gmail.com | Research Assistant, School of Life Science and Technology, Huazhong University of Science & Technology |
| 51.333333 | 747 | 0.724954 | eng_Latn | 0.860318 |
c1307e4838911f952f35fab8a1de302d146f941e | 2,050 | md | Markdown | README.md | masskaneko/bugospots | 681b465ec8d1399801e2ba7e49c9d98ed342ee8c | [
"MIT"
] | null | null | null | README.md | masskaneko/bugospots | 681b465ec8d1399801e2ba7e49c9d98ed342ee8c | [
"MIT"
] | 9 | 2020-12-14T13:31:53.000Z | 2021-02-22T14:39:21.000Z | README.md | masskaneko/bugospots | 681b465ec8d1399801e2ba7e49c9d98ed342ee8c | [
"MIT"
] | null | null | null | # Bugospots
Bugospots is a Go implementation of the reference [igrigorik/bugspots](https://github.com/igrigorik/bugspots) - a bug prediction tool.
The bug prediction algorithm is described in Chapter V of the paper - [Does Bug Prediction Support Human Developers? Findings from a Google Case Study](https://research.google/pubs/pub41145/).
## Note
Bugospots is still buggy and unstable, and some of its behavior does not match the reference implementation.
## Building and Dependencies
Bugospots uses
* [go-git/go-git](https://github.com/go-git/go-git)
* [cheggaaa/pb](https://github.com/cheggaaa/pb)
```
$ go get github.com/go-git/go-git
$ go get github.com/cheggaaa/pb
$ go build -o bugospots bugospots.go
```
## Usage
```
$ bugospots -path <A path to the target Git repository>
```
|option|default|description|
|----|----|----|
|-regexp|(?i)(^\| )(fi(x\|xed\|xes)\|clos(e\|es\|ed))|A regular expression matching bug-fix commit messages|
|-o|./bugospots.csv|Path of the CSV file holding the full result|
You will get the top 10 hotspot scores with their file paths relative to the target repository, plus the full result in a CSV file.
A higher score indicates a higher likelihood of containing bugs.
The following shows sample output.
```
2020/12/31 19:43:33 oldest bug fix: 2015-01-01 00:00:01 +0900 JST
2020/12/31 19:43:33 latest bug fix: 2020-12-29 13:24:35 +0900 JST
2020/12/31 19:43:33 current: 2020-12-29 14:00:00.1542176 +0900 JST
2020/12/31 19:43:33 bug fixes: 987
2020/12/31 19:43:33 Calculating bug prediction score for bug fix commits:
987 / 987 [=================================================================] 100.00% 18s
2020/12/31 19:43:51 Hotspots(top 10):
1.0000000123456789,edit/and/crash.c
0.9123456789123456,want/to/throw/away.mk
0.8012345678901234,poopy.py
0.7654321098765432,time/spoiler.js
0.6060606060606060,stinker.cpp
0.5353535353535353,massive_logic.h
0.4321431431243124,SingletonLover.java
0.3939393939393939,cannot/read.yml
0.2102102102102102,god.go
0.1000000123456789,shutup.sh
```
## License
[MIT](LICENSE)
| 36.607143 | 191 | 0.704878 | eng_Latn | 0.423271 |
c130e58eacc1738c7cb8cc8a7a4ce26f27e298e1 | 701 | md | Markdown | api/Outlook.OutlookBarShortcut.Name.md | kibitzerCZ/VBA-Docs | 046664c5f09c17707e8ee92fd1505ddd0f6c9a91 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-03-09T13:24:12.000Z | 2020-03-09T16:19:11.000Z | api/Outlook.OutlookBarShortcut.Name.md | kibitzerCZ/VBA-Docs | 046664c5f09c17707e8ee92fd1505ddd0f6c9a91 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Outlook.OutlookBarShortcut.Name.md | kibitzerCZ/VBA-Docs | 046664c5f09c17707e8ee92fd1505ddd0f6c9a91 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-28T06:51:45.000Z | 2019-11-28T06:51:45.000Z | ---
title: OutlookBarShortcut.Name property (Outlook)
keywords: vbaol11.chm342
f1_keywords:
- vbaol11.chm342
ms.prod: outlook
api_name:
- Outlook.OutlookBarShortcut.Name
ms.assetid: 403a1755-ca83-b6e6-db95-55dc12d05ec5
ms.date: 06/08/2017
localization_priority: Normal
---
# OutlookBarShortcut.Name property (Outlook)
Returns or sets a **String** value that represents the display name for the object. Read/write.
## Syntax
_expression_.**Name**
_expression_ A variable that represents an [OutlookBarShortcut](Outlook.OutlookBarShortcut.md) object.
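## Example

The following illustrative snippet (not from the original page) displays the name of the first shortcut in the first group of the Outlook Bar:

```vb
Sub ShowFirstShortcutName()
    Dim olPane As Outlook.OutlookBarPane
    Dim olShortcut As Outlook.OutlookBarShortcut

    ' The "OutlookBar" pane exposes the Shortcuts groups.
    Set olPane = Application.ActiveExplorer.Panes.Item("OutlookBar")
    Set olShortcut = olPane.Contents.Groups.Item(1).Shortcuts.Item(1)

    MsgBox "First shortcut is named: " & olShortcut.Name
End Sub
```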
## See also
[OutlookBarShortcut Object](Outlook.OutlookBarShortcut.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 21.90625 | 102 | 0.787447 | eng_Latn | 0.452403 |
c131543ca49ef774170b26eddf331234be40e9fb | 111 | md | Markdown | readme.md | KorySchneider/mintab | 0674e64f967f5caa0656d93e750df2d620bbe5c4 | [
"MIT"
] | 12 | 2017-07-16T01:55:46.000Z | 2020-03-05T15:30:51.000Z | readme.md | KorySchneider/mintab | 0674e64f967f5caa0656d93e750df2d620bbe5c4 | [
"MIT"
] | 6 | 2017-06-21T03:42:52.000Z | 2018-05-16T19:58:07.000Z | readme.md | KorySchneider/mintab | 0674e64f967f5caa0656d93e750df2d620bbe5c4 | [
"MIT"
] | 14 | 2017-09-01T09:39:19.000Z | 2019-08-30T05:26:50.000Z | *This project is no longer being maintained/updated. See [tab](https://github.com/koryschneider/tab) instead.*
| 55.5 | 110 | 0.774775 | eng_Latn | 0.821238 |
c131636960e10fe8febdd0c50405798fec1d19ce | 2,947 | md | Markdown | README.md | creode/get-address-io | 44e23574ecf6c3cca709b74aa3819c59e0c9a53c | [
"MIT"
] | null | null | null | README.md | creode/get-address-io | 44e23574ecf6c3cca709b74aa3819c59e0c9a53c | [
"MIT"
] | null | null | null | README.md | creode/get-address-io | 44e23574ecf6c3cca709b74aa3819c59e0c9a53c | [
"MIT"
] | null | null | null | # Get Address IO plugin for Craft CMS 3.x
Integrates Craft CMS with the getaddress IO service for autocompletion of address' in the UK.

## Requirements
This plugin requires Craft CMS 3.5 or later. This is due to the usage of Crafts Template Roots functionality.
## Installation
To install the plugin, follow these instructions.
1. Open your terminal and go to your Craft project:
cd /path/to/project
2. Then tell Composer to load the plugin:
composer require creode/get-address-io
3. In the Control Panel, go to Settings → Plugins and click the “Install” button for Get Address IO.
## Get Address IO Overview
This plugin aims to provide a basic input and select box for communicating with the https://getaddress.io API. The goal is to keep this plugin as lightweight as possible and offer the bear miniumum functionality so that it can be adapted and styled easily.
## Configuring Get Address IO
On the Plugins page, click `Settings` for the "Get Address IO" plugin and set your API key. I would suggest setting this as an environment variable; however, there is also the option to provide it directly.
## Using Get Address IO
You have two templates that can be used in your site's `templates` folder:
- `{% include '_get-address-io/_autocomplete.twig' %}` - Offers address autocompletion for a single field. This by default is tied into the Select2 library.
- `{% include '_get-address-io/_postcode-lookup.twig' %}` - Offers the ability to look up a postcode and display a select list of addresses matching that postcode.
Each of the templates above can be overwritten by using the same path within your template folder. This gives you fine-grained control over how the fields should be structured. When doing so, I'd suggest removing the existing asset bundle and going with your own JavaScript to ensure the functionality still works.
### JavaScript Events
In order to keep this plugin as customisable as possible, we fire off our own document events within JavaScript so that they can be responded to within your own code. These are as follows:
- get-address-io-postcode-lookup: { detail: { selectBox: `<dom element>`, addresses: `<array of addresses>` } }
- get-address-io-autocomplete: { detail: { addresses: `<array of addresses>` } }
See the following example showing how this event can be listened to:
VanillaJS
```
document.addEventListener('get-address-io-postcode-lookup', function(e) {
var eventData = e.detail;
// Use eventData.addresses to populate your own fields.
});
```
jQuery
```
jQuery(document).on('get-address-io-postcode-lookup', function(e) {
var eventData = e.detail;
// Use eventData.addresses to populate your own fields.
});
```
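For quick experimentation outside a browser (for example in a unit test), the same event contract can be sketched with the `EventTarget` class that ships with modern Node.js. The event name and `detail` shape below mirror the documentation above; the sample addresses are made-up placeholders, not real API output:

```javascript
// Stand-in for `document`, so the contract can be exercised without a browser.
// In the real plugin, the event is dispatched on `document` by the bundled JS.
const target = new EventTarget();

let received = null;
target.addEventListener('get-address-io-postcode-lookup', function (e) {
  // e.detail.addresses is the array of addresses for the looked-up postcode.
  received = e.detail.addresses;
});

// Simulate what the plugin does after a successful postcode lookup.
const event = new Event('get-address-io-postcode-lookup');
event.detail = {
  selectBox: null, // would be the <select> DOM element in the browser
  addresses: ['1 High Street', '2 High Street'], // illustrative placeholders
};
target.dispatchEvent(event);

console.log(received); // → ['1 High Street', '2 High Street']
```

In the browser you attach the listener to `document` exactly as in the snippets above; this sketch only swaps `document` for a plain `EventTarget`.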
## Get Address IO Roadmap
Some things to do, and ideas for potential features:
* Release it
* Implement more features from the API
Brought to you by [Creode](https://creode.co.uk)
| 38.272727 | 311 | 0.754326 | eng_Latn | 0.99421 |
c131a7e64ec91770d7fa45ff39a1552a65623c68 | 198 | md | Markdown | Packs/ArcherRSA/ReleaseNotes/1_1_22.md | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 799 | 2016-08-02T06:43:14.000Z | 2022-03-31T11:10:11.000Z | Packs/ArcherRSA/ReleaseNotes/1_1_22.md | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 9,317 | 2016-08-07T19:00:51.000Z | 2022-03-31T21:56:04.000Z | Packs/ArcherRSA/ReleaseNotes/1_1_22.md | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
] | 1,297 | 2016-08-04T13:59:00.000Z | 2022-03-31T23:43:06.000Z |
#### Integrations
##### RSA Archer v2
- Updated the Docker image to: *demisto/python3:3.9.6.24019*.
- Fixed an issue where the *Advanced: API Endpoint* parameter was not added properly to the URL.
| 33 | 96 | 0.722222 | eng_Latn | 0.982076 |
c1322a4aba6da4c5a1366e7997134bcbe2860195 | 471 | md | Markdown | README.md | antmicro/TermSharp | 5d61baf3fad63f072349eae7dda1e326ae0cc19e | [
"Apache-2.0"
] | 7 | 2019-06-04T15:40:04.000Z | 2022-02-10T00:37:05.000Z | README.md | antmicro/TermSharp | 5d61baf3fad63f072349eae7dda1e326ae0cc19e | [
"Apache-2.0"
] | 3 | 2021-09-27T16:29:21.000Z | 2022-02-11T08:58:08.000Z | README.md | antmicro/TermSharp | 5d61baf3fad63f072349eae7dda1e326ae0cc19e | [
"Apache-2.0"
] | 4 | 2021-05-24T09:24:54.000Z | 2021-11-26T17:17:50.000Z | # Termsharp
Copyright (c) 2016-2021 [Antmicro](https://antmicro.com)
Termsharp is a feature-rich VT100 terminal emulator widget, written in C#.
It is developed as part of the [Renode](https://www.renode.io) emulation framework.
Termsharp relies on the [XWT UI toolkit](https://github.com/mono/xwt).
Termsharp supports a fairly complete VT100 command set.
It also implements [iTerm2 protocol for handling inline images](https://iterm2.com/documentation-images.html).
| 36.230769 | 110 | 0.774947 | eng_Latn | 0.787644 |
c13280e564f1e22c2347fca7b0443077d40fa902 | 1,489 | md | Markdown | README.md | booink/shrb | 3c4b2d95f8933f070623865cd70f9c215afa7c22 | [
"MIT"
] | null | null | null | README.md | booink/shrb | 3c4b2d95f8933f070623865cd70f9c215afa7c22 | [
"MIT"
] | null | null | null | README.md | booink/shrb | 3c4b2d95f8933f070623865cd70f9c215afa7c22 | [
"MIT"
] | null | null | null | # Shrb
Rubyで書かれたシェルです。
Bashの記法を出来るだけ再現するのが目標です。
Shell by Ruby.
The goal is to reproduce the notation of Bash as much as possible.
## 機能 / Features
- [x] コマンド実行 / Command execution
- [x] パイプ / Pipes
- [x] 環境変数 / Environment variables
- [x] 論理演算 / Logical operators
- [x] コマンドのグループ化 / Command grouping
- [ ] サブシェル / Subshells
- [ ] 変数展開 / Variable expansion
- [x] デーモン化 / Daemonization
  - [x] 長い文字列をパイプすると標準入力で受け取れない / Known issue: long strings piped in are not received on standard input
- [ ] ダブルクォート内の変数展開 / Variable expansion inside double quotes
- [ ] ダラー$後の変数展開 / Variable expansion after a dollar sign ($)
- [ ] 環境変数とインライン環境変数 / Environment variables and inline environment variables
- [ ] リダイレクト / Redirection
  - [x] output
  - [x] appending output
  - [x] duplicating output
  - [x] input
  - [x] duplicating input
  - [ ] here document
  - [ ] open for reading and writing
  - [ ] for
  - [ ] while read
- [ ] ブレース展開 / Brace expansion
- [ ] プロセス置換 / Process substitution
## インストール / Installation
$ git clone https://github.com/booink/shrb
<!--
$ gem install shrb
-->
## 使い方 / Usage
```sh
./exe/shrb
```
## コントリビュート / Contributing
バグ報告やプルリクエストは大歓迎です。このプロジェクトは安全で協力的なコラボレーションの場となることを目的としており、コントリビュータは [Contributor Covenant](http://contributor-covenant.org) をよく読んで守っていただけることを望んでいます。
Bug reports and pull requests are welcome. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
## ライセンス / License
The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
## Code of Conduct
Everyone interacting in the Shrb project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/booink/shrb/blob/master/CODE_OF_CONDUCT.md).
| 24.016129 | 236 | 0.709872 | eng_Latn | 0.815846 |
c132f76094ef1c1fab029790395334f6de0c5762 | 837 | md | Markdown | README.md | manifoldco/go-jwt | 90ff03a1a3b3d5dc45e845588137b78e043439a8 | [
"BSD-3-Clause"
] | 3 | 2019-03-07T05:36:17.000Z | 2020-02-12T20:36:40.000Z | README.md | manifoldco/go-jwt | 90ff03a1a3b3d5dc45e845588137b78e043439a8 | [
"BSD-3-Clause"
] | 4 | 2017-04-10T15:19:10.000Z | 2019-04-11T17:34:47.000Z | README.md | manifoldco/go-jwt | 90ff03a1a3b3d5dc45e845588137b78e043439a8 | [
"BSD-3-Clause"
] | null | null | null | # go-jwt
Convenience wrapper for JWT creation
[Code of Conduct](./.github/CONDUCT.md) |
[Contribution Guidelines](./.github/CONTRIBUTING.md)
[](https://github.com/manifoldco/go-jwt/releases)
[](https://godoc.org/github.com/manifoldco/go-jwt)
[](https://travis-ci.org/manifoldco/go-jwt)
[](https://goreportcard.com/report/github.com/manifoldco/go-jwt)
[](./LICENSE.md)
## Description
This package is used by [grafton](https://github.com/manifoldco/grafton) to
generate JWTs.
| 46.5 | 142 | 0.751493 | yue_Hant | 0.549834 |
c1334829bffb1e78d62c93a8adaf76ed755006af | 4,008 | md | Markdown | chapter_computational-performance/multiple-gpus-gluon.md | ZZJwni/gluon-tutorials-zh | bce6fa6314d4f4fd824a8e1e938d8b2f02b0d15c | [
"Apache-2.0"
] | null | null | null | chapter_computational-performance/multiple-gpus-gluon.md | ZZJwni/gluon-tutorials-zh | bce6fa6314d4f4fd824a8e1e938d8b2f02b0d15c | [
"Apache-2.0"
] | null | null | null | chapter_computational-performance/multiple-gpus-gluon.md | ZZJwni/gluon-tutorials-zh | bce6fa6314d4f4fd824a8e1e938d8b2f02b0d15c | [
"Apache-2.0"
] | 1 | 2019-09-09T21:22:22.000Z | 2019-09-09T21:22:22.000Z | # Multi-GPU Computation with Gluon
In Gluon, we can conveniently perform multi-GPU computation using data parallelism. For example, we do not need to implement the helper functions that synchronize data across GPUs, as introduced in the [“Multi-GPU Computation”](multiple-gpus.md) section, ourselves. First, import the packages or modules required for the experiment in this section. As in the previous section, running the programs in this section requires at least two GPUs.
```{.python .input n=1}
import sys
sys.path.insert(0, '..')
import gluonbook as gb
import mxnet as mx
from mxnet import autograd, gluon, init, nd
from mxnet.gluon import loss as gloss, nn, utils as gutils
import time
```
## Initializing Model Parameters on Multiple GPUs
We use ResNet-18 as the example model for this section. Since the input images in this section are of the original size (not enlarged), the model construction here differs slightly from the ResNet-18 construction in the [“Residual Networks (ResNet)”](../chapter_convolutional-neural-networks/resnet.md) section: the model here uses a smaller convolution kernel, stride, and padding at the beginning, and removes the max pooling layer.
```{.python .input n=2}
# This function is saved in the gluonbook package for later use.
def resnet18(num_classes):
def resnet_block(num_channels, num_residuals, first_block=False):
blk = nn.Sequential()
for i in range(num_residuals):
if i == 0 and not first_block:
blk.add(gb.Residual(
num_channels, use_1x1conv=True, strides=2))
else:
blk.add(gb.Residual(num_channels))
return blk
net = nn.Sequential()
    # A smaller convolution kernel, stride, and padding are used here, and the max pooling layer is removed.
net.add(nn.Conv2D(64, kernel_size=3, strides=1, padding=1),
nn.BatchNorm(), nn.Activation('relu'))
net.add(resnet_block(64, 2, first_block=True),
resnet_block(128, 2),
resnet_block(256, 2),
resnet_block(512, 2))
net.add(nn.GlobalAvgPool2D(), nn.Dense(num_classes))
return net
net = resnet18(10)
```
Previously, we introduced how to use the `ctx` parameter of the `initialize` function to initialize model parameters on a CPU or a single GPU. In fact, `ctx` can accept a list of CPUs/GPUs, so that the initialized model parameters are copied to all of the CPUs/GPUs in `ctx`.
```{.python .input n=3}
ctx = [mx.gpu(0), mx.gpu(1)]
net.initialize(init=init.Normal(sigma=0.01), ctx=ctx)
```
Gluon provides the `split_and_load` function implemented in the previous section. It divides a mini-batch of data samples and copies them to the individual CPUs/GPUs. Then, the model computation takes place on the same CPU/GPU as the input data.
```{.python .input n=4}
x = nd.random.uniform(shape=(4, 1, 28, 28))
gpu_x = gutils.split_and_load(x, ctx)
net(gpu_x[0]), net(gpu_x[1])
```
Recall the deferred initialization described in the [“Deferred Initialization of Model Parameters”](../chapter_deep-learning-computation/deferred-init.md) section. Now we can access the initialized model parameter values through `data`. Note that by default `weight.data()` returns the parameter values on the CPU. Since we specified 2 GPUs to initialize the model parameters, we need to specify a GPU to access the values. As we can see, the same parameter has the same values on different GPUs.
```{.python .input n=5}
weight = net[0].params.get('weight')
try:
weight.data()
except:
print('not initialized on', mx.cpu())
weight.data(ctx[0])[0], weight.data(ctx[1])[0]
```
## Training the Model on Multiple GPUs
When we use multiple GPUs to train the model, `gluon.Trainer` automatically performs data parallelism, for example dividing mini-batches of data samples and copying them to the individual GPUs, and summing the gradients over the GPUs before broadcasting the result to all of them. This makes it easy to implement the training function.
```{.python .input n=7}
def train(num_gpus, batch_size, lr):
train_iter, test_iter = gb.load_data_fashion_mnist(batch_size)
ctx = [mx.gpu(i) for i in range(num_gpus)]
print('running on:', ctx)
net.initialize(init=init.Normal(sigma=0.01), ctx=ctx, force_reinit=True)
trainer = gluon.Trainer(
net.collect_params(), 'sgd', {'learning_rate': lr})
loss = gloss.SoftmaxCrossEntropyLoss()
for epoch in range(4):
start = time.time()
for X, y in train_iter:
gpu_Xs = gutils.split_and_load(X, ctx)
gpu_ys = gutils.split_and_load(y, ctx)
with autograd.record():
ls = [loss(net(gpu_X), gpu_y)
for gpu_X, gpu_y in zip(gpu_Xs, gpu_ys)]
for l in ls:
l.backward()
trainer.step(batch_size)
nd.waitall()
train_time = time.time() - start
test_acc = gb.evaluate_accuracy(test_iter, net, ctx[0])
print('epoch %d, training time: %.1f sec, test_acc %.2f' % (
epoch, train_time, test_acc))
```
First, train on a single GPU.
```{.python .input}
train(num_gpus=1, batch_size=256, lr=0.1)
```
Then try 2 GPUs. Compared with LeNet, which was used in the previous section, ResNet-18 is computationally more complex, so its parallelization works better.
```{.python .input n=10}
train(num_gpus=2, batch_size=512, lr=0.2)
```
## Summary
* In Gluon, we can conveniently perform multi-GPU computation, such as initializing model parameters and training the model on multiple GPUs.
## Exercises
* This section uses ResNet-18. Try different numbers of epochs, batch sizes, and learning rates. If conditions permit, use more GPUs for the computation.
* Sometimes, different CPUs/GPUs have different compute power, for example when using both CPUs and GPUs, or GPUs of different models. What should be done in this case?
## Scan the QR Code to Access the [Forum](https://discuss.gluon.ai/t/topic/1885)

| 31.559055 | 207 | 0.672655 | yue_Hant | 0.163064 |
c133593329c72839017a32fa5df65ed6fbd7c3c6 | 265 | md | Markdown | aspnetcore/fundamentals/error-handling/samples/2.x/ErrorHandlingSample/README.md | terrajobst/AspNetCore.Docs.fr-fr | 50f7735d5e1dc017c993737f4baf114b6b3ab9b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/fundamentals/error-handling/samples/2.x/ErrorHandlingSample/README.md | terrajobst/AspNetCore.Docs.fr-fr | 50f7735d5e1dc017c993737f4baf114b6b3ab9b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/fundamentals/error-handling/samples/2.x/ErrorHandlingSample/README.md | terrajobst/AspNetCore.Docs.fr-fr | 50f7735d5e1dc017c993737f4baf114b6b3ab9b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | # <a name="error-handling-sample-application"></a>Error handling sample application
Cet exemple d’application illustre les scénarios décrits dans [Gérer les erreurs dans ASP.NET Core](https://docs.microsoft.com/aspnet/core/fundamentals/error-handling).
| 66.25 | 168 | 0.8 | fra_Latn | 0.695094 |
c13361ea7a3e147deb20d361832ee78e4ef45ad1 | 898 | md | Markdown | docs/error-messages/compiler-warnings/compiler-warning-level-3-c4073.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-warnings/compiler-warning-level-3-c4073.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-warnings/compiler-warning-level-3-c4073.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:53:26.000Z | 2020-05-28T15:53:26.000Z | ---
title: Compiler Warning (Level 3) C4073
ms.date: 11/04/2016
f1_keywords:
- C4073
helpviewer_keywords:
- C4073
ms.assetid: 50081a6e-6acd-45ff-8484-9b1ea926cc5c
ms.openlocfilehash: 80b43f5fc5af23d84fe43727b75d041e39405ade
ms.sourcegitcommit: 857fa6b530224fa6c18675138043aba9aa0619fb
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/24/2020
ms.locfileid: "80199107"
---
# <a name="compiler-warning-level-3-c4073"></a>Compiler Warning (Level 3) C4073
Initializers are put in the library initialization area.
Only third-party library developers should use the library initialization area, which is specified by [#pragma init_seg](../../preprocessor/init-seg.md). The following sample generates C4073:
```cpp
// C4073.cpp
// compile with: /W3
#pragma init_seg(lib) // C4073
// try this line to resolve the warning
// #pragma init_seg(user)
int main() {
}
```
| 27.212121 | 188 | 0.777283 | ces_Latn | 0.942346 |
c133f24c16494c441a398a7ea93d7e423b40c679 | 216 | md | Markdown | README.md | Yelzoridy/DOM-ScoreKeeper | 1f95bec30055a0aecc10d7024835a6c1d54c778a | [
"MIT"
] | null | null | null | README.md | Yelzoridy/DOM-ScoreKeeper | 1f95bec30055a0aecc10d7024835a6c1d54c778a | [
"MIT"
] | null | null | null | README.md | Yelzoridy/DOM-ScoreKeeper | 1f95bec30055a0aecc10d7024835a6c1d54c778a | [
"MIT"
] | null | null | null | # DOM-Manipulation
This project is inspired by Colt's scorekeeper project.
I was introduced to a new css framework (bulma)and I thought it would be a good chance to add my little touch ( dynamic game selection)
| 27 | 136 | 0.768519 | eng_Latn | 0.99908 |
c1341f8d88bf33ea04c83a94b3548ac0ecf53517 | 1,177 | md | Markdown | README.md | JustPretender/botan-rs | bdf1de579913cb0b8a07024e8a4015a3719195ee | [
"MIT"
] | 15 | 2018-07-26T21:30:42.000Z | 2022-03-28T04:16:22.000Z | README.md | JustPretender/botan-rs | bdf1de579913cb0b8a07024e8a4015a3719195ee | [
"MIT"
] | 22 | 2018-07-31T13:52:28.000Z | 2021-03-26T15:20:32.000Z | README.md | JustPretender/botan-rs | bdf1de579913cb0b8a07024e8a4015a3719195ee | [
"MIT"
] | 7 | 2019-01-06T20:44:25.000Z | 2022-03-27T18:52:39.000Z | # botan-rs
[](https://github.com/randombit/botan-rs/actions)
[](https://crates.io/crates/botan)
[](https://docs.rs/botan)
This crate wraps the C API exposed by the [Botan](https://botan.randombit.net/)
cryptography library. The current version requires Botan 2.8.0 or higher
and Rust 1.43.0 or higher.
The following features are supported:
* `no-std`: Enable a no-std build. (Still uses `alloc`, requires nightly)
* `vendored`: Build a copy of the C++ library directly, without
relying on a system installed version.
* `botan3`: Link against (the currently unreleased) Botan 3.x rather
than the default Botan 2.x
Currently the crate exposes ciphers, hashes, MACs, KDFs, password based key
derivation (PBKDF2, Scrypt, Argon2, etc), bcrypt password hashes, random number
generators, X.509 certificates, format preserving encryption, HOTP/TOTP, NIST
key wrapping, multiprecision integers, and the usual public key algorithms (RSA,
ECDSA, ECDH, DH, ...)
PRs and comments/issues happily accepted.
| 45.269231 | 126 | 0.753611 | eng_Latn | 0.84851 |
c1347e070c23c5975a50bbf272e2b3fb100694d1 | 2,738 | md | Markdown | app/data/roo/uk/articles/ghana/wholly-obtained-verbatim.md | mattlavis-transform/ottp2 | 8cb17016770ac305ee4acaf7396aa1a623af6c0d | [
"MIT"
] | 1 | 2022-03-28T12:24:00.000Z | 2022-03-28T12:24:00.000Z | app/data/roo/uk/articles/ghana/wholly-obtained-verbatim.md | mattlavis-transform/ottp2 | 8cb17016770ac305ee4acaf7396aa1a623af6c0d | [
"MIT"
] | null | null | null | app/data/roo/uk/articles/ghana/wholly-obtained-verbatim.md | mattlavis-transform/ottp2 | 8cb17016770ac305ee4acaf7396aa1a623af6c0d | [
"MIT"
] | null | null | null | ## Wholly obtained products
1. The following shall be considered as wholly obtained in Ghana or the UK:
1. live animals born and raised there;
2. mineral products extracted from its soil or from its seabed or ocean floor;
3. vegetable products harvested there;
4. products from live animals raised there;
5.
1. products obtained by hunting or fishing conducted there;
2. products of aquaculture, including mariculture, where the animals are raised there from eggs, spawning, larvae or fry;
6. products of sea fishing and other products taken from the sea outside the territorial waters of the UK or of Ghana by their vessels;
7. products made aboard their factory ships exclusively from products referred to in point (f);
8. used articles fit only for the recovery of raw materials;
9. waste and scrap resulting from manufacturing operations conducted there;
10. products extracted from marine soil or subsoil outside their territorial waters provided that they have sole rights to work that soil or subsoil;
11. goods produced exclusively from the products specified in points (a) to (j).
2. The terms "their vessels" and "their factory ships" in points (f) and (g) of paragraph 1 of this Article shall apply only to vessels and factory ships:
1. which are registered or recorded in the UK or Ghana; and
2. which fly the flag of the UK or Ghana; and
3. which meet one of the following conditions:
1. they are at least 50 % owned by nationals of the UK, a Member State of the European Union and/or of Ghana; or
2. they are owned by companies:
- which have their head office and their main place of business in one of the UK, a Member State of the European Union or in Ghana, and
- which are at least 50 % owned by one or more of the UK, one or more Member States of the European Union and/or Ghana or by public entities or nationals of one or more of these States.
3. Notwithstanding the provisions of paragraph 2 of this Article, upon request of Ghana, vessels chartered or leased by Ghana shall be treated as "their vessels" to undertake fisheries activities in its exclusive economic zone provided that an offer has been made beforehand to the economic operators of the UK and that the implementing arrangements established beforehand by the Committee are adhered to. The Committee shall ensure that the conditions laid down in this paragraph are respected.
4. The conditions referred to in paragraph 2 of this Article may be met in Ghana and the States that come under different agreements with which cumulation is applicable. In these cases, the products shall be considered to originate from the Flag State.
{{ Article 3 }}
| 54.76 | 495 | 0.7626 | eng_Latn | 0.999991 |
c1353db9113f0ee664c4f090f83f7eb827f2ad23 | 1,514 | md | Markdown | README.md | Saadat123456/Math-Magicians | 753c03b7f670ced298a7e169b347565926e47fab | [
"MIT"
] | 2 | 2022-03-14T22:39:23.000Z | 2022-03-25T17:04:14.000Z | README.md | omar25ahmed/Math-Magicians-1 | 753c03b7f670ced298a7e169b347565926e47fab | [
"MIT"
] | null | null | null | README.md | omar25ahmed/Math-Magicians-1 | 753c03b7f670ced298a7e169b347565926e47fab | [
"MIT"
] | 1 | 2022-03-25T01:29:51.000Z | 2022-03-25T01:29:51.000Z | # Math Magicians

## Additional description about the project and its features:
This is a simple React-based calculator project.
## Live Here At
[Heroku](https://magicians-math-calculator1.herokuapp.com/quote)
[Netlify](https://623d05514fd103144245d8ad--keen-flan-ec8171.netlify.app/)
## Built With
- React.js
- HTML and CSS
- JavaScript
- [HTML & CSS3 & JavaScript Linters](https://github.com/microverseinc/linters-config/tree/master/html-css-js)
- Github and GithubFlow
- Webpack
## App Screenshot

## Getting Started
**To create a Calculator from this Repository feel free to contact me.**
## How to run in your local machine
- Copy the URL: git@github.com:Saadat123456/Math-Magicians.git
- In your terminal, go to the directory you want to clone the repository.
- Use the command: git clone git@github.com:Saadat123456/Math-Magicians.git
- Run npm install in the terminal to install node modules
- Execute npm run build in terminal to build the development files
- To start a server run npm start and the server would be started on port 8080
## Authors
👤 **Saadat Ali**
- GitHub: [@Saadat123456](https://github.com/Saadat123456)
## 🤝 Contributing
Contributions, issues, and feature requests are welcome!
Feel free to check the [issues page](../../issues/).
## Show your support
Give a ⭐️ if you like this project!
## Acknowledgments
- Microverse Team
## 📝 License
This project is [MIT](./LICENSE) licensed.
| 26.103448 | 109 | 0.745707 | eng_Latn | 0.87777 |
c13564c6c797edf3da75f329e5f747a6313388bb | 1,527 | markdown | Markdown | README.markdown | thezerobit/cl-future | eb4f3c6c0f165232cd67760b3c06c0546cdec48c | [
"BSD-3-Clause"
] | 2 | 2019-03-18T06:20:59.000Z | 2021-01-02T11:44:16.000Z | README.markdown | thezerobit/cl-future | eb4f3c6c0f165232cd67760b3c06c0546cdec48c | [
"BSD-3-Clause"
] | null | null | null | README.markdown | thezerobit/cl-future | eb4f3c6c0f165232cd67760b3c06c0546cdec48c | [
"BSD-3-Clause"
] | null | null | null | # CL-FUTURE
DEPRECATED in favor of
[GREEN-THREADS](https://github.com/deliciousrobots/green-threads). You
might also be looking for a different library named
[CL-FUTURE](https://github.com/jpalmucci/cl-future).
A trivial future / lightweight thread library built on cl-cont
delimited continuations.
## Usage
```common-lisp
;; example 1: futures
(defparameter *f1* (make-future))
(defparameter *f2* nil)
(register-action
(lambda ()
(with-call/cc
(let ((f2 (make-future)))
(princ "Hello, ")
(setf *f2* f2)
(wait-for *f1*)
(wait-for f2)
(princ "World.")
nil))))
(register-action
(lambda ()
(with-call/cc
(wait-for *f1*)
(princ "... "))))
(register-action (lambda () (complete-future *f2* 'someval)))
(register-action (lambda () (complete-future *f1* 'someval)))
(run-actions)
;; output: Hello, ... World.
;; example 2: cooperative multitasking
(register-action
(lambda ()
(with-call/cc
(dolist (letter (list "a" "b" "c" "d"))
(princ letter)
(yield)))))
(register-action
(lambda ()
(with-call/cc
(dolist (number (list "1" "2" "3" "4"))
(princ number)
(yield)))))
(run-actions)
;; output: a1b2c3d4
```
## Installation
Stick this repo in ~/quicklisp/local-projects and load with
(ql:quickload :cl-future).
## Author
* Stephen A. Goss (steveth45@gmail.com)
## Copyright
Copyright (c) 2012 Stephen A. Goss (steveth45@gmail.com)
# License
Licensed under the Modified BSD License.
| 19.329114 | 70 | 0.6241 | eng_Latn | 0.62089 |
c135a674035172ece07195b53de06dba3dcf617a | 461 | md | Markdown | README.md | mokinvillain/drain | efa66170ed36c76e0c65b41fa9e994c1619a5346 | [
"CC-BY-3.0"
] | null | null | null | README.md | mokinvillain/drain | efa66170ed36c76e0c65b41fa9e994c1619a5346 | [
"CC-BY-3.0"
] | null | null | null | README.md | mokinvillain/drain | efa66170ed36c76e0c65b41fa9e994c1619a5346 | [
"CC-BY-3.0"
] | null | null | null | # drain
## Hello
### everyone
#### what shall we do

[](https://www.youtube.com/watch?v=arA0n1jeNb0)
| 65.857143 | 262 | 0.843818 | yue_Hant | 0.199784 |
c13672597798c0d214b2d4088c4aa3dd4296f674 | 663 | md | Markdown | jphp-gui-game-ext/api-docs/classes/php/game/UXGameBackground.md | broelik/jphp-gui-ext | 351a293bdd1661f32965408d622d89449a24384c | [
"Apache-2.0"
] | 4 | 2019-01-16T14:48:50.000Z | 2020-09-29T17:25:13.000Z | jphp-gui-game-ext/api-docs/classes/php/game/UXGameBackground.md | broelik/jphp-gui-ext | 351a293bdd1661f32965408d622d89449a24384c | [
"Apache-2.0"
] | 1 | 2019-12-18T13:31:53.000Z | 2020-01-02T14:48:37.000Z | jphp-gui-game-ext/api-docs/classes/php/game/UXGameBackground.md | broelik/jphp-gui-ext | 351a293bdd1661f32965408d622d89449a24384c | [
"Apache-2.0"
] | 4 | 2019-02-06T11:18:21.000Z | 2021-02-15T18:59:55.000Z | # UXGameBackground
- **class** `UXGameBackground` (`php\game\UXGameBackground`) **extends** `UXCanvas` (`php\gui\UXCanvas`)
- **package** `game`
- **source** `php/game/UXGameBackground.php`
**Description**
Class UXGameBackground
---
#### Properties
- `->`[`image`](#prop-image) : `UXImage` - _The image._
- `->`[`velocity`](#prop-velocity) : `array` - _Linear velocity [x, y]._
- `->`[`viewPosition`](#prop-viewposition) : `array` - _View position [x, y]._
- `->`[`autoSize`](#prop-autosize) : `bool` - _Auto size._
- `->`[`flipX`](#prop-flipx) : `bool` - _Flip the image along X._
- `->`[`flipY`](#prop-flipy) : `bool` - _Flip the image along Y._ | 33.15 | 104 | 0.634992 | yue_Hant | 0.133059
c13719de31bb0708a8682ae616fff2b9485f682c | 260 | md | Markdown | README.md | godzillalad/node-red-web-audio | b2c0d43ce4c260b8854167ff6e89d052f1c5d8a3 | [
"MIT"
] | null | null | null | README.md | godzillalad/node-red-web-audio | b2c0d43ce4c260b8854167ff6e89d052f1c5d8a3 | [
"MIT"
] | null | null | null | README.md | godzillalad/node-red-web-audio | b2c0d43ce4c260b8854167ff6e89d052f1c5d8a3 | [
"MIT"
] | 1 | 2016-04-08T22:16:31.000Z | 2016-04-08T22:16:31.000Z | # node-red-contrib-play-audio
Node to play audio from a raw audio buffer. Works well together with the [Watson Text to Speech node](https://github.com/node-red/node-red-bluemix-nodes/tree/master/watson)
## Requirements
Browser needs to support Web Audio API. | 43.333333 | 172 | 0.780769 | eng_Latn | 0.886102 |
c1376f272ae31d3d507d486b6c8edafe387d5715 | 2,301 | md | Markdown | README.md | Kong/swagger-ui-kong-theme | 180e210072016239e7def698dda73e8a67b2e27a | [
"Apache-2.0"
] | 12 | 2020-04-18T17:41:41.000Z | 2021-11-15T09:46:27.000Z | README.md | Kong/swagger-ui-kong-theme | 180e210072016239e7def698dda73e8a67b2e27a | [
"Apache-2.0"
] | 15 | 2019-10-29T17:04:29.000Z | 2022-02-26T01:45:21.000Z | README.md | Kong/swagger-ui-kong-theme | 180e210072016239e7def698dda73e8a67b2e27a | [
"Apache-2.0"
] | 11 | 2019-11-18T04:33:19.000Z | 2021-09-17T06:23:49.000Z | This repo is a plugin for Swagger-UI that loads a custom 2/3 column theme, and adds code snippets with react-apiembed
This repo is still under development and changes are coming
## Known Issues
Swagger-UI does not support React 16.
If you have React 16 loaded/bundled ANYWHERE on the same page as Swagger UI, you will not be able to fill in required parameters in Try It Out.
https://github.com/swagger-api/swagger-ui/issues/4745
Workaround: Use yarn not npm
make sure you have
```
"resolutions": {
"react": "15.6.2",
"react-dom": "15.6.2"
}
```
in your package json
## How to load
```
yarn add swagger-ui-kong-theme
```
From where you are loading your Swagger-Ui
```js
import { SwaggerUIKongTheme } from 'swagger-ui-kong-theme'
```
As part of the options include ```SwaggerUIKongTheme``` in the plugins array and ```'KongLayout'``` as your Layout
for example:
```js
const swaggerUIOptions = {
spec: swaggerSpec, // Define data to be used
dom_id: '#ui-wrapper', // Determine what element to load swagger ui
docExpansion: 'list',
deepLinking: true, // Enables dynamic deep linking for tags and operations this is needed for sidebar
filter: true,
presets: [
SwaggerUIBundle.presets.apis,
SwaggerUIStandalonePreset
],
plugins: [
SwaggerUIKongTheme,
SwaggerUIBundle.plugins.DownloadUrl
],
layout: 'KongLayout',
theme:
{
"swaggerAbsoluteTop": "0px", // the top most container is set absolute at this distance from top. (default 0)
"hasSidebar": true, // enables sidebar (default off)
"languages" : [ // sets languages for sidebar (default bash, javascript, python, ruby)
{
"prismLanguage":"bash",
"target":"shell",
"client":"curl"
},{
"prismLanguage":"ruby",
"target":"ruby"
}]
}
}
const ui = SwaggerUIBundle(swaggerUIOptions)
```
## How to develop
run to install required packages
``` yarn ```
run to build
``` npm run build ```
## How to use Demo
follow dev steps above then:
``` cd demo```
``` yarn ```
``` yarn start ```
## Contributing
For problems directly related to this plugin, add an issue on GitHub.
For other issues, see Swagger UI
https://github.com/swagger-api/swagger-ui
| 25.853933 | 134 | 0.659713 | eng_Latn | 0.950021 |
c137c93a3d36b03ba99c1f0a03f632ef9ed03b0a | 2,687 | md | Markdown | content/user-operations/management/node.md | lxm/rainbond-docs | ef88a8b9de6d868ab32deb960bbaba4d56bda2c2 | [
"CC-BY-4.0"
] | null | null | null | content/user-operations/management/node.md | lxm/rainbond-docs | ef88a8b9de6d868ab32deb960bbaba4d56bda2c2 | [
"CC-BY-4.0"
] | null | null | null | content/user-operations/management/node.md | lxm/rainbond-docs | ef88a8b9de6d868ab32deb960bbaba4d56bda2c2 | [
"CC-BY-4.0"
] | null | null | null | ---
title: Node Management (Add, Delete, Reset)
date: 2019-03-11T12:50:54+08:00
draft: false
weight: 1302
description: "Node management: adding, deleting, and resetting nodes"
hidden: true
---
#### Adding nodes
{{% notice note %}}
1. When installing a node, do not use the previously downloaded grctl binary (i.e. ./grctl); use the grctl command directly.
2. Management nodes do not support batch scale-out; they can only be added one at a time.
3. The recommended number of management nodes is an odd number (1, 3, 5, 7); two nodes cannot guarantee high availability.
4. The installation can be performed as root.
{{% /notice %}}
```bash
# Add a management node
grctl node add --host <managexx> --iip <management node private IP> -p <root password> --role manage
## Method 2: assumes passwordless SSH (key trust) login is already configured
grctl node add --host <managexx> --iip <management node private IP> --key /root/.ssh/id_rsa.pub --role manage
# Add a gateway node
grctl node add --host <gatewayxx> --iip <gateway node private IP> -p <root password> --role gateway
## Method 2: assumes passwordless SSH (key trust) login is already configured
grctl node add --host <gatewayxx> --iip <gateway node private IP> --key /root/.ssh/id_rsa.pub --role gateway
# Add a compute node
grctl node add --host <computexx> --iip <compute node private IP> -p <root password> --role compute
## Method 2: assumes passwordless SSH (key trust) login is already configured
grctl node add --host <computexx> --iip <compute node private IP> --key /root/.ssh/id_rsa.pub --role compute
# Install the node; the node UID can be obtained with grctl node list
grctl node install <new node uid>
# Confirm the compute node is in the health state
grctl node up <new node uid>
```
#### Deleting a compute node
- 1. Currently, deleting a compute node only removes it from the cluster; the services running on that node are not stopped.
```
grctl node down <UUID of the compute node to delete>
grctl node delete <UUID of the compute node to delete>
```
- 2. Reset the compute node (it must first be deleted from the cluster)
```
# Operate with caution; by default this deletes data
ssh <the deleted compute node>
grctl reset
```
#### Deleting a management node
With multiple management nodes, pay attention to the etcd service.
1. First remove the member to be deleted from the etcd cluster: `etcdctl member remove <member id>`
2. Stop the management node services: `grclis stop`
3. Unmount the /grdata storage: `umount /grdata`
4. Reset the node: `grctl reset`
5. With multiple management nodes, you need to manually clean up the deleted node's data in etcd: `ETCDCTL_API=3 etcdctl get /rainbond/endpoint --prefix`; for details, see [removing redundant data](https://t.goodrain.com/t/topic/834/2)
{{% notice info %}}
With a single management node and multiple compute nodes, do not perform this operation; otherwise the compute nodes will become unavailable.
{{% /notice %}}
#### Resetting nodes
{{% notice warning %}}
When resetting to a compute node, be careful not to delete the data under the grdata directory.
{{% /notice %}}
##### Resetting a compute node
```bash
systemctl stop node
systemctl disable node
systemctl stop kubelet
systemctl disable kubelet
dps | grep goodrain.me | grep -v 'k8s' | awk '{print $NF}' | xargs -I {} systemctl disable {}
dps | grep goodrain.me | grep -v 'k8s' | awk '{print $NF}' | xargs -I {} systemctl stop {}
cclear
rm -rf /root/.kube/config
rm -rf /root/.rbd/grctl.yaml
rm -rf /tmp/*
rm -rf /usr/local/bin/grctl
rm -rf /usr/local/bin/node
# Delete images
docker images -q | xargs docker rmi -f
```
##### Resetting a management node
```bash
systemctl stop node
systemctl disable node
systemctl stop kubelet
systemctl disable kubelet
grclis stop
dps | grep goodrain.me | grep -v 'k8s' | awk '{print $NF}' | xargs -I {} systemctl disable {}
dps | grep goodrain.me | grep -v 'k8s' | awk '{print $NF}' | xargs -I {} systemctl stop {}
cclear
rm -rf /root/.kube/config
rm -rf /root/.rbd/grctl.yaml
rm -rf /tmp/*
rm -rf /usr/local/bin/grctl
rm -rf /usr/local/bin/node
rm -rf /opt/rainbond
rm -rf /grdata
rm -rf /grlocaldata
```

# ProM extended
## Description
ProM 6 framework distribution with extra custom plugin functionality.
Created for testing and running the
[TPM plug-in](https://github.com/kostmetallist/transitive-performance-miner).
## Original sources
Initial code and libraries taken from [TU/e](https://www.tue.nl/en/) `Framework`
[repository](https://svn.win.tue.nl/trac/prom/browser/Framework/trunk).
## Support
For any questions and issues please refer to
[this](https://github.com/kostmetallist/prom-extended/issues) section.
# @aomao/plugin-tasklist
Task list plugin.
## Installation
```bash
$ yarn add @aomao/plugin-tasklist
```
Add the plugin to the engine:
```ts
import Engine, { EngineInterface } from '@aomao/engine';
import Tasklist, { CheckboxComponent } from '@aomao/plugin-tasklist';
new Engine(..., { plugins: [Tasklist], cards: [CheckboxComponent] })
```
## Options
### Hotkey
The default hotkey is `mod+shift+9`.
```ts
// Hotkey type
hotkey?: string | Array<string>; // defaults to mod+shift+9
// Usage
new Engine(..., {
    config: {
        "tasklist": {
            // Change the hotkey
            hotkey: "<custom hotkey>"
        }
    }
})
```
## Commands
You can pass `{ checked: true }` to mark the item as checked; the parameter is optional.
```ts
// Use command to execute the plugin, passing in the required parameters
engine.command.execute('tasklist', { checked: boolean });
// Use command to query the current state; returns false or the name of the
// current list plugin: tasklist, orderedlist, unorderedlist
engine.command.queryState('tasklist');
```
<h2 align=center> Slipdev</h2>
<h2> Getting Started: </h2>
Follow these steps:
<br>
### Go to the example folder
```sh
cd examples/esp-wroom32/slipdev
```
### Compile your code and flash it
```sh
make flash term
```
### Check your interface configuration
```sh
ifconfig
```
---
title: L2DBForm.xaml.cs source code
ms.date: 10/22/2019
ms.topic: sample
ms.openlocfilehash: 882699a76eab3c291cd92c298287bc5d28fb08e1
ms.sourcegitcommit: 82f94a44ad5c64a399df2a03fa842db308185a76
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 10/25/2019
ms.locfileid: "72921155"
---
# <a name="l2dbformxamlcs-source-code"></a>L2DBForm.xaml.cs source code
This page contains the content and a description of the C# source code in the *L2DBForm.xaml.cs* file. The L2XDBForm partial class contained in this file can be divided into the following three logical sections: the data members, and the `OnRemove` and `OnAddBook` button-click event handlers.
## <a name="data-members"></a>Data members
Two private data members are used to map this class to the window resources used in *L2DBForm.xaml*.
- The `mybooks` namespace variable is initialized with `"http://www.mybooks.com"`.
- The `bookList` member is initialized in the constructor to the CDATA string in *L2DBForm.xaml* with the following line:
```csharp
bookList = (XElement)((ObjectDataProvider)Resources["LoadedBooks"]).Data;
```
## <a name="onaddbook-event-handler"></a>OnAddBook event handler
This method contains the following three statements:
- The first statement, a conditional, is used for input validation.
- The second statement creates a new <xref:System.Xml.Linq.XElement> from the string values the user entered in the **Add New Book** section of the UI.
- The last statement adds this new book element to the data provider in *L2DBForm.xaml*. Dynamic data binding then automatically updates the UI with the new element; no additional user-supplied code is required.
## <a name="onremove-event-handler"></a>OnRemove event handler
The `OnRemove` handler is more complicated than the `OnAddBook` handler for two reasons. First, the raw XML contains preserved whitespace, so the matching newlines must be removed along with the book entry. Second, as a convenience, the selection that was on the deleted item is reset to the previous item in the list.
The main work of removing the selected book element, however, is done with just two statements:
- First, the book element associated with the currently selected item in the list box is retrieved:
```csharp
XElement selBook = (XElement)lbBooks.SelectedItem;
```
- Then this element is deleted from the data provider:
```csharp
selBook.Remove();
```
Here too, dynamic data binding ensures that the program's UI is updated automatically.
## <a name="example"></a>Example
### <a name="code"></a>Code
```csharp
using System;
using System.Linq;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Input;
using System.Xml;
using System.Xml.Linq;
namespace LinqToXmlDataBinding {
/// <summary>
/// Interaction logic for L2XDBForm.xaml
/// </summary>
public partial class L2XDBForm : System.Windows.Window
{
XNamespace mybooks = "http://www.mybooks.com";
XElement bookList;
public L2XDBForm()
{
InitializeComponent();
bookList = (XElement)((ObjectDataProvider)Resources["LoadedBooks"]).Data;
}
void OnRemoveBook(object sender, EventArgs e)
{
int index = lbBooks.SelectedIndex;
if (index < 0) return;
XElement selBook = (XElement)lbBooks.SelectedItem;
//Get next node before removing element.
XNode nextNode = selBook.NextNode;
selBook.Remove();
//Remove any matching newline node.
if (nextNode != null && nextNode.ToString().Trim().Equals(""))
{ nextNode.Remove(); }
//Set selected item.
if (lbBooks.Items.Count > 0)
{ lbBooks.SelectedItem = lbBooks.Items[index > 0 ? index - 1 : 0]; }
}
void OnAddBook(object sender, EventArgs e)
{
if (String.IsNullOrEmpty(tbAddID.Text) ||
String.IsNullOrEmpty(tbAddValue.Text))
{
MessageBox.Show("Please supply both a Book ID and a Value!", "Entry Error!");
return;
}
XElement newBook = new XElement(
mybooks + "book",
new XAttribute("id", tbAddID.Text),
tbAddValue.Text);
bookList.Add(" ", newBook, "\r\n");
}
}
}
```
### <a name="comments"></a>Comments
For the XAML source associated with these handlers, see [L2DBForm.xaml source code](l2dbform-xaml-source-code.md).
## <a name="see-also"></a>See also
- [Walkthrough: LinqToXmlDataBinding sample](linq-to-xml-data-binding-sample.md)
- [L2DBForm.xaml source code](l2dbform-xaml-source-code.md)
# encuesta_repartidores
A survey of delivery workers (encuesta de repartidores).
# angular-10-facebook-login-example
Angular 10 - Facebook Login Example
Tutorial and demo available at https://jasonwatmore.com/post/2020/09/21/angular-10-facebook-login-tutorial-example

# tinymail
A tiny SMTP server.

# Developer Reference
This documentation is intended for use by developers and advanced users looking to build on Downlink or customise it for more advanced scenarios.
It's recommended to read the [Developer's Guide](./developers.md) first for a bird's-eye view of how Downlink works. Pick a topic from the menu on the left to see the documentation for more specific scenarios such as building your own storage backend, or extending Downlink with a pre-built plugin.
Finally, you can see full source documentation, built directly from the code, in the [Source Reference](../../api/index.md).

**Current version: [shuosc/shu-scheduling-helper](https://github.com/shuosc/shu-scheduling-helper); this repository is no longer maintained.**
---
title: 'Personal Resilience'
subtitle: ''
summary: What does personal resilience mean to me?
authors:
- admin
tags:
- productivity
- resilience
- meditations
categories:
- Productivity
date: "2020-07-12T00:00:00Z"
lastmod: "2020-07-12T00:00:00Z"
featured: true
draft: false
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal point options: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight
image:
caption: ''
focal_point: ""
preview_only: true
# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
---
Those on the Babcock Graduate Scheme have been given the task of asking managers and mentors for opinions on the topic of personal resilience.
The brief is to ask the following questions:
1. What does ‘personal resilience at work’ mean to you?
2. Why is it so important at work?
3. What do you feel are the most important characteristics/skills you needs to deal with setbacks etc.?
4. How have you continued to focus on your own resilience during lockdown and what challenges has this presented?
5. What advice and/or tips can you offer?
For me this is a really interesting topic, and as such I had a long and detailed answer that I wanted to share more widely.
Here's how I answered those questions:
1. To me, personal resilience at work means always striving to be aware of the objective facts in any situation and not being led by emotion: striving to **see things as they are**, and to make decisions from a position of calm.
2. It’s important at work (and in life) because we have to deal with people at work, and people are individual, emotional, fickle and what they say and do is not within our control. If we don’t have resilience at work it will make dealing with people a mental challenge, and can be a source of personal stress and anguish. I like the Marcus Aurelius quote that we can expect to meet rude everyday:
> When you wake up in the morning, tell yourself: the people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous and surly.
After setting that tone, anything else is a positive!
We cannot control other people or external events but we can control completely how we react or respond.
> Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom
3. I think it’s important to stay calm even under pressure or in emotional situations (**equanimity**). It’s also important to have personal techniques to deal with challenging or stressful situations. See my answer to question 5 for the techniques I use. The other thing that comes to mind is having a **growth mindset**; that is, seeing all challenges as opportunities, and seeing failure as necessary to learn and improve. I also hold that having a **positive mental attitude** is key.
> I am an optimist. It does not seem too much use being anything else.
4. During lockdown specifically I have kept up my daily practices for calmness. I have also sought to maintain a separation between work time and home time. I’ve also found that doing telephone calls while walking (mostly with my one year old on my back!) has been helpful - a dose of light exercise and fresh air. I’ve also maintained a list of all the positives from COVID, things like no more commuting, no more alarm clocks, more family time, and so on. Being intentional in making a list of pros has been really good for me to remain positive. In terms of challenges, I’ve been looking after my one year old daughter three days a week on the days my wife works (she’s a doctor so can’t work from home) which has meant being open with my line manager and team about these constraints and being flexible in still supporting project delivery. This has meant working in the evenings and weekends but has been achievable.
5. For tips and advice, I can just explain what it is that I do and why it’s helped then let you make your own mind up on it. The principle being one of self-experimentation - try it, if you notice an improvement great, if not move on to the next thing. The actionable things that I do to promote resilience are:
- **Breathing exercises**: if I’m about to make a difficult phone call, or stand up in front of people to present or anything else that I find uncomfortable, then I do a set of deep breathing exercises beforehand. There’s a few different techniques I use like Box Breathing, 4-7-8 breathing, and the Wim Hof Method by a bonkers Dutchman who holds several world records for things like submersion in ice that he attributes to his breathing techniques.
- **Meditation**: this is a well-documented way to unwind, be more present, and practice not thinking (emptying your mind of all the millions of thoughts that pop in to existence every second of the day). It’s a practice, and it’s as simple as sitting or lying and simply focusing on breathing. If you find yourself thinking other thoughts (which you will), you acknowledge the thought, then let it go and return your awareness and focus back to your breathing. It’s surprisingly hard, and 5 minutes is a great little exercise. In terms of resilience, the reason why meditating is beneficial is because you come to realise that thoughts and feelings just pop into existence unconsciously, and by meditating and observing this you can be more aware and detach your true consciousness from your unconscious thoughts and feelings. That’s to say that when you next get stressed, or worried, or angry, actually there’s now a more rational part of you that understands those emotions to just be fleeting and unconscious, and if you watch them and wait, those emotions will pass. Using the mindset of being an observer or scientist where the observer part of your brain watches the emotional part of your brain is a useful trick (”I will acknowledge my feelings arising” is a common mantra to repeat).
- **Cold therapy** (I didn’t say it would be easy 😃): cold is a stressor, and being able to stay calm, relaxed, and present under stress is exactly what resilience is to me. By having a cold shower every morning and ice baths every weekend I train myself to accept discomfort. I’m forced to focus on breathing, and frankly a cold shower won’t kill you so if you can cope with a 10 minute cold shower then a challenging work situation will be a stroll in the sunshine 😃. Also, the cold is like a quick trip to being present - what may take a while with meditation takes seconds in the cold to just be present, deep breathing and focused. Cold therapy is also excellent for the immune and cardiovascular systems.
- **Reading**: I find it beneficial to read productivity, leadership and personal development books. I read to be a better person (engineer, father, husband) rather than reading for the sake of it. Four of my recent favourite books are:
- Atomic Habits
- Essentialism
- Extreme Ownership ([read my review](https://nickjstevens.com/post/2019/extreme-ownership-book-review/))
- The Obstacle is the Way
- **Stoicism**: linked to reading, but more of a practice, is reading about Stoicism. Stoicism is an ancient school of philosophy, which sounds woolly and useless but it is actually more specifically the practical application of being a good (virtuous) human. Philosophy in fact means “love of wisdom” so that Stoicism could be paraphrased as **practical wisdom**. The neat thing for me is that the struggles and reflections in the ancient writings of the Stoics, for example the Roman Emperor Marcus Aurelius, are so applicable now it’s incredible - timeless wisdom if ever there was. A lot of the Stoic writings are in short letters, so easy to digest and read when you can. The practical tips across the ages are embodied in quotes like those below, and reading them gives me strength and makes me grateful for what I have:
> You have power over your mind - not outside events. Realize this, and you will find strength.
> Waste no more time arguing about what a good man should be. Be one.
> Very little is needed to make a happy life; it is all within yourself, in your way of thinking.
> When you arise in the morning, think of what a precious privilege it is to be alive - to breathe, to think, to enjoy, to love.
> Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present.
> It is not death that a man should fear, but he should fear never beginning to live.
> What is defeat? Nothing but education; nothing but the first steps to something better.
- **Exercise**: exercise, for me running and walking, is a great way to stay grounded, burn off frustration, and keep sane.
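As an aside for the technically minded, the box-breathing pattern mentioned above is completely regular: four phases of equal length, repeated. A tiny Python sketch makes the cadence concrete (the four-second count and round numbers are just my defaults; any equal count works):

```python
import time

PHASES = ["inhale", "hold", "exhale", "hold"]

def box_breathing(rounds=4, seconds=4):
    """Yield (phase, seconds) pairs: four equal phases per round."""
    for _ in range(rounds):
        for phase in PHASES:
            yield phase, seconds

def run(rounds=4, seconds=4, sleep=time.sleep):
    """Print each phase and wait it out; pass sleep=lambda s: None for a dry run."""
    for phase, duration in box_breathing(rounds, seconds):
        print(f"{phase} for {duration}s")
        sleep(duration)

if __name__ == "__main__":
    run(rounds=1, sleep=lambda s: None)  # dry run of a single round
```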
I also just wanted to reflect on a recent book I read called **Antifragile**. In the book the author defines fragility as something that is sensitive to shocks, and explains that people typically think of resilience (or robustness) as the opposite of being fragile. However he says that whilst resilience is about being insensitive or indifferent to shocks, there is an even better way, and that is being **anti**fragile. Antifragility goes beyond simply being robust, and instead is being something that gets better with shocks. The book is mainly talking about systems (like banking, natural systems), not people, but I like the sentiment that we should go beyond being simply resilient. It’s the difference between being an unemotional rock, which is resilient, and being an antifragile plant that grows, adapts, and gets stronger with shocks (to a point). I like to think that I can take challenges and stresses to get better and improve (antifragile), as opposed to simply bearing the burden (resilience). Another way to put it is that the passing of time harms things that are fragile, yet antifragile things improve with time. Resilient things are not affected by time. In closing, depriving a system of stressors doesn't always lead to good - small amounts of stress are better than no stress at all. In this way look to welcome challenges and stresses at work and in life, but with the mindset to use these experiences for the better. [Amor Fati](https://dailystoic.com/amor-fati/) (a love of fate) is a similar concept to this.
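To make that three-way distinction concrete, here is a toy numeric sketch (entirely illustrative, with arbitrary numbers): fragile things lose from every shock, resilient things ignore them, and antifragile things gain from shocks up to a point.

```python
def fragile(value, shock):
    """Fragile: every shock does damage."""
    return value - shock

def resilient(value, shock):
    """Resilient (robust): indifferent to shocks."""
    return value

def antifragile(value, shock, cap=10):
    """Antifragile: gains from shocks, but only up to a point."""
    return value + min(shock, cap)

if __name__ == "__main__":
    shocks = [1, 5, 20]
    for respond in (fragile, resilient, antifragile):
        value = 100
        for shock in shocks:
            value = respond(value, shock)
        print(f"{respond.__name__}: {value}")
        # fragile: 74, resilient: 100, antifragile: 116
```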
# Contributing
This repository uses GitHub pull requests for code review.
See the [Joyent Engineering
Guidelines](https://github.com/joyent/eng/blob/master/docs/index.md) for general
best practices expected in this repository.
Contributions should be "make prepush" clean. The "prepush" target runs the
"check" target, which will check for linting and style errors.
If you're changing something non-trivial or user-facing, you may want to submit
an issue first.
# Maix-Amigo-Help
Docs, snippets, and code for the Sipeed Maix Amigo, a MicroPython-enabled dev widget.
## The Sipeed Maix Amigo
The Amigo is a pretty nice piece of hardware, feature packed:
https://www.seeedstudio.com/Sipeed-Maix-Amigo-p-4689.html
However, the software and documentation side is currently lagging behind, and we were pretty disappointed when we got our Amigo.
No code sample would work, default settings are not correct, and the docs are nonexistent (at best) or misleading where available.
The Amigo has huge potential, and we wanted to help others avoid the pain we felt when getting our first one.
This repo is here to compile our tries, failures, and successes.
# The firmware
## Do not use the maixhub service
This service is supposed to build a firmware for you.
It uses older versions than the factory defaults, and does not include what you ask for. For instance, I asked for lvgl builds and got no lvgl.
Build the firmware yourself. If you have a Linux machine at hand, this is fast and trouble-free.
The official doc can be followed: https://github.com/sipeed/MaixPy/blob/master/build.md
Use the recommended kflashgui tool to flash the firmware: https://maixpy.sipeed.com/en/get_started/upgrade_firmware.html
# MaixPy IDE
It's good, especially the integrated framebuffer view.
https://maixpy.sipeed.com/en/get_started/maixpyide.html
# Other useful tools
rshell is good.
WIP
# LEDS
- 1 Mini RGB Side LED
- 1 Large White Rear LED
See https://github.com/AngainorDev/Maix-Amigo-Help/tree/main/LED
# Cameras
- Front: GC0328
- Rear: OV7740
In a nutshell:
- Lower the sensor frequency to 5000000 instead of the default 20000000, or you'll get insane colors, bars on the image, and unusable frames
- Do not use `sensor.RGB565` as told in the official doc. The Amigo uses YUV: use `sensor.YUV422` or `sensor.GRAYSCALE`.
- Add `sensor.set_vflip(1)` and `sensor.set_hmirror(1)`
- Do not compile the firmware with double buffering; you won't have enough RAM to use it
- We'll detail custom firmware builds to get more RAM and really use the images instead of getting OOM errors everywhere.
To be detailed:
- How to use the Amigo front cam
- How to use windowing and POI
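Putting the camera notes above together, a minimal MaixPy capture loop for the Amigo might look like the sketch below. It only runs on the device, and the `freq` keyword on `sensor.reset` plus the exact front/rear camera selection vary between MaixPy releases, so treat it as a starting point rather than a recipe:

```python
import sensor
import lcd

lcd.init()
sensor.reset(freq=5000000)           # 5 MHz, not the 20 MHz default, to avoid color bars
sensor.set_pixformat(sensor.YUV422)  # the Amigo wants YUV, not RGB565
sensor.set_framesize(sensor.QVGA)
sensor.set_vflip(1)                  # flip/mirror so the image is the right way round
sensor.set_hmirror(1)
sensor.run(1)

while True:
    lcd.display(sensor.snapshot())
```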
# Screen
- ILI9486
- 480x320 px
# Touchscreen
- FT6x36
- i2c address 0x38
- scl=24, sda=27
- freq = 1000000
MaixUI has a pure-Python touchscreen driver that is semi-working.
See https://github.com/sipeed/MaixUI/blob/5646aa3899d126e579e99d38bb8020857cd3abe3/driver/touch.py and https://github.com/sipeed/MaixPy_scripts/issues/79 for a better alternative.
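The MaixUI driver linked above talks to the FT6x36 over I2C, but the coordinate decoding itself is plain bit-twiddling and can be sanity-checked off-device. The sketch below assumes the standard FT6x36 register layout (TD_STATUS at 0x02, then P1_XH/XL/YH/YL); verify it against the datasheet before trusting it:

```python
def decode_ft6x36(buf):
    """Decode a 5-byte read starting at register 0x02 into (touches, x, y).

    buf[0]: TD_STATUS, touch count in the low nibble
    buf[1]: P1_XH, event flag in bits 7:6, X[11:8] in bits 3:0
    buf[2]: P1_XL, X[7:0]
    buf[3]: P1_YH, touch ID in bits 7:4, Y[11:8] in bits 3:0
    buf[4]: P1_YL, Y[7:0]
    """
    touches = buf[0] & 0x0F
    x = ((buf[1] & 0x0F) << 8) | buf[2]
    y = ((buf[3] & 0x0F) << 8) | buf[4]
    return touches, x, y

# On the Amigo itself, the same 5 bytes would come from something like:
# i2c.readfrom_mem(0x38, 0x02, 5)
```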
# LVGL
The stock demos with the touchscreen are not working; a fix is to be published here.
See https://github.com/AngainorDev/Maix-Amigo-Help/tree/main/LVGL/
# /flash
- No subdirectory support
- 3MB spiffs in default firmware
# SD Card
- Only FAT32 is supported
# Tip when building your firmware
Raise the heap from 8000 to F0000 (not sure if we can go even higher?)
This gives more RAM when using LVGL or many picture functions; otherwise everything crashes with OOM.
# Contributions
Are welcome!
# Python utilities for Manubot: Manuscripts, open and automated
[](https://manubot.github.io/manubot/)
[](https://pypi.org/project/manubot/)
[](https://github.com/psf/black)
[](https://github.com/manubot/manubot/actions)
[](https://travis-ci.com/manubot/manubot)
[](https://ci.appveyor.com/project/manubot/manubot/branch/main)
[Manubot](https://manubot.org/ "Manubot homepage") is a workflow and set of tools for the next generation of scholarly publishing.
This repository contains a Python package with several Manubot-related utilities, as described in the [usage section](#usage) below.
Package documentation is available at <https://manubot.github.io/manubot> (auto-generated from the Python source code).
The `manubot cite` command-line interface retrieves and formats bibliographic metadata for user-supplied persistent identifiers like DOIs or PubMed IDs.
The `manubot process` command-line interface prepares scholarly manuscripts for Pandoc consumption.
The `manubot process` command is used by Manubot manuscripts, which are based off the [Rootstock template](https://github.com/manubot/rootstock), to automate several aspects of manuscript generation.
See Rootstock's [manuscript usage guide](https://github.com/manubot/rootstock/blob/main/USAGE.md) for more information.
**Note:**
If you want to experience Manubot by editing an existing manuscript, see <https://github.com/manubot/try-manubot>.
If you want to create a new manuscript, see <https://github.com/manubot/rootstock>.
To cite the Manubot project or for more information on its design and history, see:
> **Open collaborative writing with Manubot**<br>
Daniel S. Himmelstein, Vincent Rubinetti, David R. Slochower, Dongbo Hu, Venkat S. Malladi, Casey S. Greene, Anthony Gitter<br>
*PLOS Computational Biology* (2019-06-24) <https://doi.org/c7np><br>
DOI: [10.1371/journal.pcbi.1007128](https://doi.org/10.1371/journal.pcbi.1007128) · PMID: [31233491](https://www.ncbi.nlm.nih.gov/pubmed/31233491) · PMCID: [PMC6611653](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6611653)
The Manubot version of this manuscript is available at <https://greenelab.github.io/meta-review/>.
## Installation
If you are using the `manubot` Python package as part of a manuscript repository, installation of this package is handled through Rootstock's [environment specification](https://github.com/manubot/rootstock/blob/main/build/environment.yml).
For other use cases, this package can be installed via `pip`.
Install the latest release version [from PyPI](https://pypi.org/project/manubot/):
```sh
pip install --upgrade manubot
```
Or install from the source code on [GitHub](https://github.com/manubot/manubot), using the version specified by a commit hash:
```sh
COMMIT=d2160151e52750895571079a6e257beb6e0b1278
pip install --upgrade git+https://github.com/manubot/manubot@$COMMIT
```
The `--upgrade` argument ensures `pip` updates an existing `manubot` installation if present.
Some functions in this package require [Pandoc](https://pandoc.org/),
which must be [installed](https://pandoc.org/installing.html) separately on the system.
The pandoc-manubot-cite filter depends on Pandoc as well as panflute (a Python package).
Users must install a [compatible version of panflute](https://github.com/sergiocorreia/panflute#supported-pandoc-versions) based on their Pandoc version.
For example, on a system with Pandoc 2.9,
install the appropriate panflute like `pip install panflute==1.12.5`.
## Usage
Installing the python package creates the `manubot` command line program.
Here is the usage information as per `manubot --help`:
<!-- test codeblock contains output of `manubot --help` -->
```
usage: manubot [-h] [--version] {process,cite,webpage} ...
Manubot: the manuscript bot for scholarly writing
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
subcommands:
All operations are done through subcommands:
{process,cite,webpage}
process process manuscript content
cite citekey to CSL JSON command line utility
webpage deploy Manubot outputs to a webpage directory tree
```
Note that all operations are done through these subcommands.
### Process
The `manubot process` program is the primary interface to using Manubot.
There are two required arguments: `--content-directory` and `--output-directory`, which specify the respective paths to the content and output directories.
The content directory stores the manuscript source files.
Files generated by Manubot are saved to the output directory.
One common setup is to create a directory for a manuscript that contains both the `content` and `output` directory.
Under this setup, you can run the Manubot using:
```sh
manubot process \
--skip-citations \
--content-directory=content \
--output-directory=output
```
See `manubot process --help` for documentation of all command line arguments:
<!-- test codeblock contains output of `manubot process --help` -->
```
usage: manubot process [-h] --content-directory CONTENT_DIRECTORY
--output-directory OUTPUT_DIRECTORY
[--template-variables-path TEMPLATE_VARIABLES_PATH]
--skip-citations [--cache-directory CACHE_DIRECTORY]
[--clear-requests-cache]
[--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
Process manuscript content to create outputs for Pandoc consumption. Performs
bibliographic processing and templating.
optional arguments:
-h, --help show this help message and exit
--content-directory CONTENT_DIRECTORY
Directory where manuscript content files are located.
--output-directory OUTPUT_DIRECTORY
Directory to output files generated by this script.
--template-variables-path TEMPLATE_VARIABLES_PATH
Path or URL of a file containing template variables
for jinja2. Serialization format is inferred from the
file extension, with support for JSON, YAML, and TOML.
If the format cannot be detected, the parser assumes
JSON. Specify this argument multiple times to read
multiple files. Variables can be applied to a
namespace (i.e. stored under a dictionary key) like
`--template-variables-path=namespace=path_or_url`.
Namespaces must match the regex `[a-zA-
Z_][a-zA-Z0-9_]*`.
--skip-citations Skip citation and reference processing. Support for
citation and reference processing has been moved from
`manubot process` to the pandoc-manubot-cite filter.
Therefore this argument is now required. If citation-
tags.tsv is found in content, these tags will be
inserted in the markdown output using the reference-
link syntax for citekey aliases. Appends
content/manual-references*.* paths to Pandoc's
metadata.bibliography field.
--cache-directory CACHE_DIRECTORY
Custom cache directory. If not specified, caches to
output-directory.
--clear-requests-cache
--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
Set the logging level for stderr logging
```
#### Manual references
Manubot has the ability to rely on user-provided reference metadata rather than generating it.
`manubot process` searches the content directory for files containing manually-provided reference metadata that match the glob `manual-references*.*`.
These files are stored in the Pandoc metadata `bibliography` field, such that they can be loaded by `pandoc-manubot-cite`.
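As a sketch of what such a file holds (the entry below is hypothetical, for illustration only), a `manual-references.json` is a list of CSL JSON items whose `id` fields carry the citation keys used in the manuscript:

```python
import json

# Hypothetical CSL JSON item; the "id" is the citation key the manuscript uses.
manual_reference = {
    "id": "raw:my-report",
    "type": "report",
    "title": "An example manually referenced report",
    "author": [{"family": "Doe", "given": "Jane"}],
    "issued": {"date-parts": [[2020, 1, 15]]},
}

# manual-references*.* files hold a list of such items.
with open("manual-references.json", "w") as handle:
    json.dump([manual_reference], handle, indent=2)
```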
### Cite
`manubot cite` is a command line utility to produce bibliographic metadata for citation keys.
The utility either outputs metadata as [CSL JSON items](http://citeproc-js.readthedocs.io/en/latest/csl-json/markup.html#items) or produces formatted references when a rendered format is requested.
Citation keys should be in the format `prefix:accession`.
For example, the following example generates Markdown-formatted references for four persistent identifiers:
```shell
manubot cite --format=markdown \
doi:10.1098/rsif.2017.0387 pubmed:29424689 pmc:PMC5640425 arxiv:1806.05726
```
The following [terminal recording](https://asciinema.org/a/205085?speed=2) demonstrates the main features of `manubot cite` (for a slightly outdated version):

Additional usage information is available from `manubot cite --help`:
<!-- test codeblock contains output of `manubot cite --help` -->
```
usage: manubot cite [-h] [--output OUTPUT]
[--format {csljson,cslyaml,plain,markdown,docx,html,jats} | --yml | --txt | --md]
[--csl CSL] [--bibliography BIBLIOGRAPHY]
[--no-infer-prefix] [--allow-invalid-csl-data]
[--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
citekeys [citekeys ...]
Generate bibliographic metadata in CSL JSON format for one or more citation
keys. Optionally, render metadata into formatted references using Pandoc. Text
outputs are UTF-8 encoded.
positional arguments:
citekeys One or more (space separated) citation keys to
generate bibliographic metadata for.
optional arguments:
-h, --help show this help message and exit
--output OUTPUT Specify a file to write output, otherwise default to
stdout.
--format {csljson,cslyaml,plain,markdown,docx,html,jats}
Format to use for output file. csljson and cslyaml
output the CSL data. All other choices render the
references using Pandoc. If not specified, attempt to
infer this from the --output filename extension.
Otherwise, default to csljson.
--yml Short for --format=cslyaml.
--txt Short for --format=plain.
--md Short for --format=markdown.
--csl CSL URL or path with CSL XML style used to style
references (i.e. Pandoc's --csl option). Defaults to
Manubot's style.
--bibliography BIBLIOGRAPHY
File to read manual reference metadata. Specify
multiple times to load multiple files. Similar to
pandoc --bibliography.
--no-infer-prefix Do not attempt to infer the prefix for citekeys
without a known prefix.
--allow-invalid-csl-data
Allow CSL Items that do not conform to the JSON
Schema. Skips CSL pruning.
--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
Set the logging level for stderr logging
```
### Pandoc filter
This package creates the `pandoc-manubot-cite` Pandoc filter,
providing access to Manubot's cite-by-ID functionality from within a Pandoc workflow.
Options are set via Pandoc metadata fields [listed in the docs](https://manubot.github.io/manubot/reference/manubot/pandoc/cite_filter/).
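As a sketch of where the filter sits in a Pandoc pipeline (file names hypothetical; requires Pandoc and Manubot on `PATH`), the filter is passed through Pandoc's `--filter` option before citations are rendered:

```python
import subprocess

# Hypothetical manuscript.md in the working directory.
cmd = [
    "pandoc",
    "--filter=pandoc-manubot-cite",  # resolve citekeys such as doi:... into CSL metadata
    "--citeproc",                    # then render the references (Pandoc >= 2.11)
    "manuscript.md",
    "--output=manuscript.html",
]
# subprocess.run(cmd, check=True)  # uncomment to actually run the conversion
print(" ".join(cmd))
```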
<!-- test codeblock contains output of `pandoc-manubot-cite --help` -->
```
usage: pandoc-manubot-cite [-h] [--input [INPUT]] [--output [OUTPUT]]
target_format
Pandoc filter for citation by persistent identifier. Filters are command-line
programs that read and write a JSON-encoded abstract syntax tree for Pandoc.
Unless you are debugging, run this filter as part of a pandoc command by
specifying --filter=pandoc-manubot-cite.
positional arguments:
target_format output format of the pandoc command, as per Pandoc's --to
option
optional arguments:
-h, --help show this help message and exit
--input [INPUT] path read JSON input (defaults to stdin)
--output [OUTPUT] path to write JSON output (defaults to stdout)
```
Other Pandoc filters exist that do something similar:
[`pandoc-url2cite`](https://github.com/phiresky/pandoc-url2cite), [pandoc-url2cite-hs](https://github.com/Aver1y/pandoc-url2cite-hs), &
[`pwcite`](https://github.com/wikicite/wcite#filter-pwcite).
Currently, `pandoc-manubot-cite` supports the most types of persistent identifiers.
We're interested in creating as much compatibility as possible between these filters and their syntaxes.
#### Manual references
Manual references are loaded from the `references` and `bibliography` Pandoc metadata fields.
If a manual reference filename ends with `.json` or `.yaml`, it's assumed to contain CSL Data (i.e. Citation Style Language JSON).
Otherwise, the format is inferred from the extension and converted to CSL JSON using the `pandoc-citeproc --bib2json` [utility](https://github.com/jgm/pandoc-citeproc/blob/master/man/pandoc-citeproc.1.md#convert-mode).
The standard citation key for manual references is inferred from the CSL JSON `id` or `note` field.
When a citation key lacks a prefix such as `doi:`, `url:`, or `raw:`, the `raw:` prefix is automatically added.
If multiple manual reference files load metadata for the same standard citation `id`, precedence is assigned according to descending filename order.
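The prefix handling described above can be sketched like this (the known-prefix set here is an illustrative subset, not Manubot's actual list):

```python
KNOWN_PREFIXES = {"doi", "pubmed", "pmc", "arxiv", "url", "raw"}  # illustrative subset

def standardize_citekey(citekey: str) -> str:
    """Prepend raw: when a key has no recognized prefix."""
    prefix, sep, _accession = citekey.partition(":")
    if sep and prefix.lower() in KNOWN_PREFIXES:
        return citekey
    return "raw:" + citekey

print(standardize_citekey("my-figure-key"))               # -> raw:my-figure-key
print(standardize_citekey("doi:10.1098/rsif.2017.0387"))  # -> doi:10.1098/rsif.2017.0387
```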
### Webpage
The `manubot webpage` command populates a `webpage` directory with Manubot output files.
<!-- test codeblock contains output of `manubot webpage --help` -->
```
usage: manubot webpage [-h] [--checkout [CHECKOUT]] [--version VERSION]
[--timestamp] [--no-ots-cache | --ots-cache OTS_CACHE]
[--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
Update the webpage directory tree with Manubot output files. This command
should be run from the root directory of a Manubot manuscript that follows the
Rootstock layout, containing `output` and `webpage` directories. HTML and PDF
outputs are copied to the webpage directory, which is structured as static
source files for website hosting.
optional arguments:
-h, --help show this help message and exit
--checkout [CHECKOUT]
branch to checkout /v directory contents from. For
example, --checkout=upstream/gh-pages. --checkout is
equivalent to --checkout=gh-pages. If --checkout is
ommitted, no checkout is performed.
--version VERSION Used to create webpage/v/{version} directory.
Generally a commit hash, tag, or 'local'. When
omitted, version defaults to the commit hash on CI
builds and 'local' elsewhere.
--timestamp timestamp versioned manuscripts in webpage/v using
OpenTimestamps. Specify this flag to create timestamps
for the current HTML and PDF outputs and upgrade any
timestamps from past manuscript versions.
--no-ots-cache disable the timestamp cache.
--ots-cache OTS_CACHE
location for the timestamp cache (default:
ci/cache/ots).
--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
Set the logging level for stderr logging
```
## Development
### Environment
Create a development environment using:
```shell
conda create --name manubot-dev --channel conda-forge \
python=3.8 pandoc=2.8
conda activate manubot-dev # assumes conda >= 4.4
pip install --editable ".[webpage,dev]"
```
### Commands
Below are some common commands used for development.
They assume the working directory is set to the repository's root,
and the conda environment is activated.
```shell
# run the test suite
pytest
# install pre-commit git hooks (once per local clone).
# The pre-commit checks declared in .pre-commit-config.yaml will now
# run on changed files during git commits.
pre-commit install
# run the pre-commit checks (required to pass CI)
pre-commit run --all-files
# commit despite failing pre-commit checks (will fail CI)
git commit --no-verify
# regenerate the README codeblocks for --help messages
python manubot/tests/test_readme.py
# generate the docs
portray as_html --overwrite --output_dir=docs
# process the example testing manuscript
manubot process \
--content-directory=manubot/process/tests/manuscripts/example/content \
--output-directory=manubot/process/tests/manuscripts/example/output \
--skip-citations \
--log-level=INFO
```
### Release instructions
[](https://pypi.org/project/manubot/)
This section is only relevant for project maintainers.
Travis CI deployments are used to upload releases to [PyPI](https://pypi.org/project/manubot).
To create a new release, bump the `__version__` in [`manubot/__init__.py`](manubot/__init__.py).
Then, set the `TAG` and `OLD_TAG` environment variables:
```shell
TAG=v$(python setup.py --version)
# fetch tags from the upstream remote
# (assumes upstream is the manubot organization remote)
git fetch --tags upstream main
# get previous release tag, can hardcode like OLD_TAG=v0.3.1
OLD_TAG=$(git describe --tags --abbrev=0)
```
The following commands can help draft release notes:
```shell
# check out a branch for a pull request as needed
git checkout -b "release-$TAG"
# create release notes file if it doesn't exist
touch "release-notes/$TAG.md"
# commit list since previous tag
echo $'\n\nCommits\n-------\n' >> "release-notes/$TAG.md"
git log --oneline --decorate=no $OLD_TAG..HEAD >> "release-notes/$TAG.md"
# commit authors since previous tag
echo $'\n\nCode authors\n------------\n' >> "release-notes/$TAG.md"
git log $OLD_TAG..HEAD --format='%aN <%aE>' | sort --unique >> "release-notes/$TAG.md"
```
After a commit with the above updates is part of `upstream:main`,
for example after a PR is merged,
use the [GitHub interface](https://github.com/manubot/manubot/releases/new) to create a release with the new "Tag version".
Monitor [GitHub Actions](https://github.com/manubot/manubot/actions?query=workflow%3ARelease) and [PyPI](https://pypi.org/project/manubot/#history) for successful deployment of the release.
## Goals & Acknowledgments
Our goal is to create scholarly infrastructure that encourages open science and assists reproducibility.
Accordingly, we hope for the Manubot software and philosophy to be adopted widely, by both academic and commercial entities.
As such, Manubot is free/libre and open source software (see [`LICENSE.md`](LICENSE.md)).
We would like to thank the contributors and funders whose support makes this project possible.
Specifically, Manubot development has been financially supported by:
- the **Alfred P. Sloan Foundation** in [Grant G-2018-11163](https://sloan.org/grant-detail/8501) to [**@dhimmel**](https://github.com/dhimmel).
- the **Gordon & Betty Moore Foundation** ([**@DDD-Moore**](https://github.com/DDD-Moore)) in [Grant GBMF4552](https://www.moore.org/grant-detail?grantId=GBMF4552) to [**@cgreene**](https://github.com/cgreene).
---
id: 110
title: Spostare le icone della finestra barra titolo
author: Stefano Marzorati
layout: post
guid: http://ubbunti.wordpress.com/2011/03/05/spostare-le-icone-della-finestra-barra-titolo
permalink: /spostare-le-icone-della-finestra-barra-titolo/
blogger_blog:
- ubbunti.blogspot.com
- ubbunti.blogspot.com
- ubbunti.blogspot.com
blogger_author:
- m@il_of_d@y
- m@il_of_d@y
- m@il_of_d@y
blogger_e466c7156e8bac77e64f63e8bad92c92_permalink:
- 7785006476131777898
- 7785006476131777898
- 7785006476131777898
categories:
- Linux
---
Press **ALT+F2** and type `gconf-editor` in the window that opens.
At this point, in the window that appears, choose **Apps -> Metacity -> General** on the left, then select the `button_layout` entry on the right; its value should be a string like:
> `close,minimize,maximize:menu`
Simply change it to:
> `menu:minimize,maximize,close`
In practice, the colon represents the window **title**.
---
title: "HIMYM - S03E06 - I'm Not That Guy"
layout: "post_episode"
gdriveid: "0B6D_WdeSr-7ZXzQzSEt6YW5nRkU"
season: "S03"
permalink: "episode/S03E06.html"
episodeid: "S03E06"
---
# eslintme
[](https://npmjs.org/package/eslintme)
[](https://travis-ci.org/ruyadorno/eslintme)
> The fastest way to eslint a single file
## About
This is a convenience script around [eslint_d](https://github.com/mantoni/eslint_d.js) to run it at [maximum speed](https://github.com/mantoni/eslint_d.js#moar-speed) using **netcat**.
**eslint_d** is an amazing tool that keeps a local server running **eslint** to cut linting time for a single file, so that we can get instant linting in our preferred editor.
## Install
```
$ npm install -g eslintme
```
## Usage
To start the server and lint a file, just run:
```sh
$ eslintme file.js
```
## Editor Integration
- __Vim__: Install the [syntastic](https://github.com/scrooloose/syntastic) plugin, then make sure this is in your `.vimrc`:
```vim
let g:syntastic_javascript_checkers = ['eslint']
let g:syntastic_javascript_eslint_generic = 1
let g:syntastic_javascript_eslint_exec = 'eslintme'
```
## Support
Please note that this is a very platform-specific convenience wrapper around **eslint_d**; it only supports Unix platforms where **netcat** is available. On any other system, please stick with the regular [eslint_d](https://github.com/mantoni/eslint_d.js).
## License
MIT © [Ruy Adorno](http://ruyadorno.com)
# Booking Application for Doctors
## Introduction
This is a booking application for doctors. It stores the records in a database, and users are able to add, edit, or delete records (username, reason of visit, start time, and end time).
### Framework
I started this with Zend Framework 2 and had to add touches of Zend Framework 3, because some of ZF2's libraries were migrated and won't work with today's implementation of the Zend Framework Skeleton. So you might see touches of ZF2 and ZF3 in there.
### Database
SQLite
## Set up instructions:
The easiest way to set this up is through Composer. We will be using PHP's built-in web server as opposed to Apache, and SQLite for our data storage.
1. Make sure you have Composer. Download or update it from https://getcomposer.org/
2. Clone the project and navigate to it. `git clone git@github.com:nuttyclub/booking-application-ZF.git`
3. Run `composer update`, then run `composer install`.
4. Set up the database. `$ sqlite data/zftutorial.db < data/schema.sql` or `$ sqlite3 data/zftutorial.db < data/schema.sql`
5. Because we have a script in our composer.json file, you can just run `composer serve` and it should start up `localhost:8080`. If that doesn't work, you can start the PHP server manually by running `$ php -S 0.0.0.0:8080 -t public public/index.php`
<a href="https://www.twilio.com">
<img src="https://static0.twilio.com/marketing/bundles/marketing/img/logos/wordmark-red.svg" alt="Twilio" width="250" />
</a>
# IVR Call Recording and Agent Conference.
IVRs (interactive voice response) are automated phone systems that can facilitate communication between callers and businesses. In this tutorial you will learn how to screen and send callers to voicemail if an agent is busy.
[Read the full tutorial here](https://www.twilio.com/docs/tutorials/walkthrough/ivr-screening/python/django)!
[](https://github.com/TwilioDevEd/ivr-recording-django/actions/workflows/build_test.yml)
## Local Development
1. Clone this repository and `cd` into its directory:
```bash
git clone git@github.com:TwilioDevEd/ivr-recording-django.git
cd ivr-recording-django
```
1. The file `ivr/fixtures/agents.json` contains the agents' phone numbers. Replace any of these phone numbers with yours.
When the application asks you to select an agent, choose the one you just modified and it will then call your phone.
1. Create a local virtual environment and activate it:
```bash
python -m venv venv && source venv/bin/activate
```
1. Install dependencies:
```bash
pip install -r requirements.txt
```
1. Set environment variables:
```bash
cp .env.example .env
```
Then add a value to `SECRET_KEY`.
   Note: the `DEBUG` variable is `False` by default. Feel free to set it to `True` if needed.
1. Set up database and run migrations:
```bash
python manage.py migrate
```
1. Load initial agents' data:
```bash
python manage.py loaddata ivr/fixtures/agents.json
```
1. Make sure the tests succeed:
```bash
python manage.py test
```
1. Run the application:
```bash
python manage.py runserver
```
1. Check it out at [http://localhost:8000/ivr](http://localhost:8000/ivr).
   You can go to the [agents page](http://localhost:8000/ivr/agents) to see and listen to the saved recordings.
1. Expose the application to the wider Internet using [ngrok](https://ngrok.com/)
To let our Twilio Phone number use the callback endpoint we exposed, our development server will need to be publicly accessible. [We recommend using ngrok to solve this problem](https://www.twilio.com/blog/2015/09/6-awesome-reasons-to-use-ngrok-when-testing-webhooks.html).
```bash
ngrok http 8000
```
1. Provision a number under the [Manage Numbers page](https://www.twilio.com/user/account/phone-numbers/incoming) on your account. Set the voice URL for the number to `http://<your-ngrok-subdomain>.ngrok.io/ivr/welcome`.
That's it!
## Meta
* No warranty expressed or implied. Software is as is. Diggity.
* [MIT License](http://www.opensource.org/licenses/mit-license.html)
* Lovingly crafted by Twilio Developer Education.
# books
Sample Django App.

This app will allow you to:

- Add books
- Edit their information
- Search for a book
- Delete books
# Lookup Tables Terraform Policies
This module is a reusable component that generates IAM Policies and Policy Attachments to attach to
Lambda functions.
This module is not meant to be used on its own; it is exclusively used by the other tf_lookup_tables_*
modules as reusable code.
---
layout: post
title: Code alignment options in llvm.
categories: [compilers]
---
**Contents:**
* TOC
{:toc}
------
**Subscribe to my [mailing list](https://mailchi.mp/4eb73720aafe/easyperf), support me on [Patreon](https://www.patreon.com/dendibakh) or by PayPal [donation](https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=TBM3NW8TKTT34¤cy_code=USD&source=url).**
------
In my [previous post]({{ site.url }}/blog/2018/01/18/Code_alignment_issues) I discussed code alignment issues that could arise when you are benchmarking your code. [Simon](https://twitter.com/TartanLlama) mentioned the code alignment option '-align-all-nofallthru-blocks' in the comments. Its description alone doesn't make it clear what the option does, so I decided to give some clear examples.
In the latest llvm (as of 25.01.2018) there are 3 machine-independent options for controlling code alignment:
```
-align-all-blocks=<uint>
Force the alignment of all blocks in the function.
-align-all-functions=<uint>
Force the alignment of all functions.
-align-all-nofallthru-blocks=<uint>
Force the alignment of all blocks that have no fall-through
predecessors (i.e. don't add nops that are executed).
```
Let's take an example like this:
```cpp
int foo();
int bar();
void func(int* a)
{
for (int i = 0; i < 32; ++i)
a[i] += 1;
if (a[0] == 1)
a[0] += foo();
else
a[0] += bar();
}
```
For this code compiled with `-O2 -march=skylake -fno-unroll-loops` clang will produce this assembly:
```asm
0000000000000040 <_Z4funcPi>:
40: push rbx
41: mov rbx,rdi
44: mov rax,0xffffffffffffff80
4b: vpcmpeqd ymm0,ymm0,ymm0
4f: nop
50: vmovdqu ymm1,YMMWORD PTR [rbx+rax*1+0x80]
59: vpsubd ymm1,ymm1,ymm0
5d: vmovdqu YMMWORD PTR [rbx+rax*1+0x80],ymm1
66: add rax,0x20
6a: jne 50 <_Z4funcPi+0x10>
6c: cmp DWORD PTR [rbx],0x1
6f: jne 7b <_Z4funcPi+0x3b>
71: vzeroupper
74: call 79 <_Z4funcPi+0x39>
79: jmp 83 <_Z4funcPi+0x43>
7b: vzeroupper
7e: call 83 <_Z4funcPi+0x43>
83: add DWORD PTR [rbx],eax
85: pop rbx
86: ret
```
> *Note that the loop is already aligned on a 16B boundary.*
And here is the visualization (created with [this](https://github.com/radare/radare2) tool) for this assembly, where we can see the basic blocks (BB):
{: .center-image }
Below I will show the effect of the different code alignment options. All the code and scripts that I used can be found [here](https://github.com/dendibakh/dendibakh.github.io/tree/master/_posts/code/CodeAlignmentOptions).
### align-all-functions
This option will align all your functions on the boundary specified in the parameter. For example, `-mllvm -align-all-functions=5` will align all functions on a 32B boundary (`2^5=32`).
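The exponent-to-boundary arithmetic is easy to check by hand; a tiny helper (addresses taken from the listing above) computes how many padding bytes an alignment request implies:

```python
def padding_to_align(addr: int, exponent: int) -> int:
    """Bytes of padding needed to reach the next 2**exponent boundary."""
    boundary = 1 << exponent  # e.g. exponent 5 -> 32-byte boundary
    return (-addr) % boundary

print(padding_to_align(0x40, 5))  # -> 0: 0x40 is already 32B-aligned
print(padding_to_align(0x40, 7))  # -> 64: aligning to 128B would insert padding
```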
In our case (don't look at the offsets in the visual representation) the function is already aligned on a 64B boundary, so the only difference appears if we specify `-mllvm -align-all-functions=7`:
{: .center-image }
### align-all-blocks
Apply this option carefully, because it can cause a lot of nops to be added to the assembly. Adding `-mllvm -align-all-blocks=5` yields this diff:
{: .center-image }
> Note that this option does not align the function beginning, but rather its first basic block.
I will not show what happens if I specify 6 or 7, because it won't fit on a screen.
### align-all-nofallthru-blocks
This option, as opposed to blindly aligning all blocks, does it in a smarter way. The description looks complicated, but in fact it's really simple. The algorithm goes like this: for each BB, check whether the previous BB can reach the current BB by falling through. If it can, we don't align the current block, because that would insert NOPs into the executed path (as opposed to `-align-all-blocks`). If the previous BB can't reach the current BB by falling through, then the only way to reach the current BB is by jumping into it, and the previous block ends with an unconditional branch - so we can safely insert nops between the previous and current BB, knowing that those NOPs will never be executed.
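A minimal model of that decision (the data structures are hypothetical — the real pass works on LLVM's MachineBasicBlocks):

```python
class Block:
    def __init__(self, name: str, ends_with_unconditional_jump: bool):
        self.name = name
        self.ends_with_unconditional_jump = ends_with_unconditional_jump

def safe_to_align(layout):
    """Names of blocks whose layout predecessor cannot fall through into them,
    so any nop padding inserted before them is never executed."""
    return [
        cur.name
        for prev, cur in zip(layout, layout[1:])
        if prev.ends_with_unconditional_jump
    ]

# Mirroring func() above: the BB calling bar() is reachable only via the `jne`
# branch, because the preceding BB ends with the unconditional `jmp 83`.
layout = [
    Block("loop", False),           # falls through into the compare
    Block("then_call_foo", True),   # ends with `jmp` over the else block
    Block("else_call_bar", False),  # entered only by a branch -> safe to align
    Block("merge_add", False),      # fallen into from else_call_bar
]
print(safe_to_align(layout))  # -> ['else_call_bar']
```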
In our function there is only one such BB (that has a call to `bar()`). Here is the diff for `-mllvm -align-all-nofallthru-blocks=5`:
{: .center-image }
Again, all the code and scripts that I used can be found [here](https://github.com/dendibakh/dendibakh.github.io/tree/master/_posts/code/CodeAlignmentOptions), so feel free to play with different options.
### Conclusion
By now I hope it's clear what those code alignment options mean, but I encourage you to use them with care. The safest one, IMHO, is `-align-all-nofallthru-blocks`; however, it also doesn't come for free - it increases the binary size.
Project Dependency Management
================
* Tools used: gvp + gpm + gpm-git
---------------------------------
* [gvp](https://github.com/pote/gvp) is used to set GOPATH
```bash
$ git clone https://github.com/pote/gvp.git && cd gvp
$ git checkout v0.2.0 # You can ignore this part if you want to install HEAD.
$ ./configure
$ make install
```
* [gpm](https://github.com/pote/gpm)
```bash
$ git clone https://github.com/pote/gpm.git && cd gpm
$ git checkout v1.3.1 # You can ignore this part if you want to install HEAD.
$ ./configure
$ make install
```
* [gpm-git](https://github.com/technosophos/gpm-git)
```sh
$ git clone https://github.com/technosophos/gpm-git.git
$ git checkout v1.0.1
$ make install
```
* Fetch the project and build it (using goproject/mining as an example)
-----------------------------------------
```sh
$ mkdir work && cd ~/work # work is the working directory
$ git clone git@git.n.xiaomi.com:sunxiguang/goproject.git && cd goproject # clone the goproject repo into the work directory
$ . gvp # set $GOPATH (check with go env); it is now "/home/sxg/work/goproject/.godeps:/home/sxg/work/goproject"
$ cd src/mining # enter the sub-project directory
$ gpm-git # read Godeps-Git in the current directory and download the dependencies
$ go install # build the executable (it is placed in $GOBIN)
$ cp $GOBIN/mining ~/work/goproject/bin/ # copy the executable from $GOBIN to the bin directory
$ cp -r conf/ ~/work/goproject/bin/ # copy the config files to the bin directory
```
---
title: "Prijava"
bg: white
color: red
icon-color: red
fa-icon: sign-in
---
<div class="row">
<div class="column" style="float:center">
<!--
<h2><strong>Prijava sudjelovanja</strong></h2>
<p>
Prijave su otvorene do <strong><del>31. kolovoza</del> 10. rujna 2021.</strong>
<br>
Broj sudionika je ograničen u skladu s epidemološkom situacijom, a povratnu informaciju o potvrdi sudjelovanja prijavljeni će dobiti najkasnije 16. rujna 2021.
<br>
Sudionici s prijavljenim sažetkom imaju prednost pri sudjelovanju.
</p>
<form action="https://docs.google.com/forms/d/e/1FAIpQLSfHIaSfy5bDh9uSR1bpvk9fRYNFm5ArXACm_WymdsHmLEcypA/formResponse" method="post">
< !-- --- Prijava za simpozij --- --
<label><h5>e-mail:</h5></label>
<input type="email" placeholder="username@server.domain" name = "entry.837880758" required>
<label><h5>Ime:</h5></label>
<input type="text" placeholder="Vladimir" name = "entry.307023296" required>
<label><h5>Prezime:</h5></label>
<input type="text" placeholder="Prelog" name="entry.1090855792">
<label><h5>JMBAG:</h5></label>
<input type="text" placeholder="0006028877" name="entry.1325632219">
<label><h5>Naziv studija:</h5></label>
<input type="text" placeholder="Farmaceutske znanosti" name="entry.292043444">
<label><h5>Puni naziv fakulteta:</h5></label>
<input type="text" placeholder="Farmaceutsko-biokemijski fakultet" name="entry.1875020613">
<label><h5>Puni naziv sveučilišta:</h5></label>
<input type="text" placeholder="Sveučilište u Zagrebu" name="entry.973503126">
< !-- Add extra space --
<br> <br>
Klikom na tipku "Predaj" prihvaćaš
<a href="gdpr.html">izjavu o prikupljanju i korištenju osobnih podataka</a>
<br>
<button class='full-width' type="submit">Predaj</button>
</form>
</div>
< --
<div class="column" style="float:center">
<h2><strong>Prijava sažetka</strong></h2>
<p>
Sažetci se prijavljuju putem Google Forms u nastavku.
<br>
Autori se navode u formatu <em>"Ime Prezime[redni broj institucije]"</em> i međusobno su odvojeni točkama.
<br>
Institucije se navode u formatu <em>"[redni broj isntitucije] Institucija, grad, država"</em> i međusobno su odvojene točkama.
<br>
Sažetak mora biti napisan na hrvatskom jeziku i može sadržavati maksimalno 2500 znakova (uključujući i razmake).
<br>
Prijave sažetka su otvorene do <strong><del>31. kolovoza</del> 10. rujna 2021.</strong>
</p>
< !--
<form action="https://docs.google.com/forms/d/e/1FAIpQLSfswwbD3Xq-Yk5Clz-UTQd2HtnOzRCNLU_F3Q_C_5xDNOF4Ew/formResponse" method="post">
</!-- --- Prijava za simpozij --- --/>
<label><h5>e-mail:</h5></label>
<input type="text" placeholder="username@server.domain" name = "entry.1605552058" required>
<label><h5>Naslov:</h5></label>
<input type="text" placeholder="Specifikacija molekularne kiralnosti" name = "entry.278637510" required>
<label><h5>Autori:</h5></label>
<textarea rows='1' placeholder="Robert Cahn[1]. Christopher Ingold[2]. Vladimir Prelog[3]." name="entry.1273326281" required></textarea>
<label><h5>Izlagač:</h5></label>
<input type="text" placeholder="Vladimir Prelog" name="entry.498472378" required>
<label><h5>Institucije:</h5></label>
<textarea rows='5' placeholder="[1]The Chemical Society, Burlington House, Piccadily, London, United Kingdom. [2]University College, Gower Street, London, United Kngdom. [3]Eidgenossische Technische Hochschule, Zurich, Switzerland." name="entry.600010528" required></textarea>
<label><h5>Sažetak:</h5></label>
<textarea rows='20' maxlength='2700' placeholder="Topološka analiza kiralnih molekularnih modela pružila je okvir općeg sustava za specifikaciju njihove kiralnosti. Primjena ovog sustava, primijenjena u i prije 1956. godine, na organsko-kemijske konfiguracije, općenito je zadržana, ali je redefinirana s obzirom na određene tipove struktura, uglavnom u svjetlu iskustva stečenog od 1956. na Beilstein institutu i drugdje. Sustav je sada proširen tako da se, s jedne strane, bavi organsko-kemijskim konformacijama, a s druge s anorgansko-kemijskim konfiguracijama do oktaedarske strukture. Razmatraju se pitanja koja nastaju u vezi s prijenosom kiralnih specifikacija s modela na naziv, posebno na simbiozu u nomenklaturi izraza općeg sustava i sustava ograničenog opsega." name="entry.1446810103" required></textarea>
</div>!-- Add extra space --/>
<br> <br>
Klikom na tipku "Predaj" prihvaćaš
<a href="gdpr.html">izjavu o prikupljanju i korištenju osobnih podataka</a>
<br>
<button class='full-width' type="submit">Predaj</button>
</form>
--
</div>
--
</div>
Molimo sve prijavljene da popune anketu o [EU digitalnoj COVID potvrdi](https://forms.gle/u15WRuSUqMwTGrPN6).
-->
<h4><strong>Registrations are closed</strong></h4>
| 40.074627 | 828 | 0.647858 | hrv_Latn | 0.722488 |
c1445f89c25fa9928ac5a8cf1f17a1e23518288f | 772 | md | Markdown | _posts/2021-04-26-210426-TIL.md | churry75/churry75.github.io | 6d468f115ca31a65cb7ed0cdd0ffbf279f74f525 | [
"MIT"
] | null | null | null | _posts/2021-04-26-210426-TIL.md | churry75/churry75.github.io | 6d468f115ca31a65cb7ed0cdd0ffbf279f74f525 | [
"MIT"
] | null | null | null | _posts/2021-04-26-210426-TIL.md | churry75/churry75.github.io | 6d468f115ca31a65cb7ed0cdd0ffbf279f74f525 | [
"MIT"
] | null | null | null | ---
title: "2021.04.26 TIL"
excerpt: "Machine Learning"
toc: true
toc_sticky: true
categories:
- TIL
tags:
- TIL
- Machine Learning
last_modified_at: 2021.04.26-15:42:58
---
# 2021-04-26 TIL📓
## Today's tasks
- [x] Study Hands-On Machine Learning
## Chapter 1: Machine Learning Overview
> This page is based on **[Hands-On Machine Learning, 2nd Edition (Hanbit Media)]**.
1. Definition of machine learning
- Programming computers so that they can **learn** from data
- "Learning" means getting better at some task according to a given **performance measure**
2. Types of supervised learning
- Regression
- Classification
- etc...
3. Types of unsupervised learning
- Clustering
- Visualization
- Dimensionality reduction
- Association rule learning
- etc...
4. Purpose of the test set
- Used to estimate the generalization error the model will make on new samples before it is deployed to production
5. Purpose of the validation set
- Used to compare models
- Pick the best model and tune its hyperparameters
6. Problems caused by tuning hyperparameters with the test set
- Risk of overfitting
- The generalization error estimate can be misleading
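A plain-Python sketch of the train/validation/test split described above (the helper name and ratios are illustrative, not from the book):

```python
import random

# Illustrative helper: split samples into train (for fitting),
# validation (for model selection / hyperparameter tuning), and
# test (for the final, one-shot estimate of generalization error).
def three_way_split(samples, val_ratio=0.2, test_ratio=0.2, seed=42):
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    n_val = int(len(shuffled) * val_ratio)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

train, val, test = three_way_split(range(100))
# len(train) == 60, len(val) == 20, len(test) == 20
```

Tuning should only ever look at `val`; `test` is touched once, at the end.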
## Long-term plan
- Build a collection of **code snippets** for coding-test prep
| 14.846154 | 54 | 0.568653 | kor_Hang | 1.00001 |
c144c68bf2526ad300198ecd8fb1037f24be2a3b | 1,853 | md | Markdown | articles/cognitive-services/Translator/prevent-translation.md | EINSTEINPRACIANO/azure-docs.pt-br | 93bbbf115ab76d31e6bc8919a338700294966913 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-05-02T14:26:54.000Z | 2019-05-02T14:26:54.000Z | articles/cognitive-services/Translator/prevent-translation.md | jhomarolo/azure-docs.pt-br | d11ab7fab56d90666ea619c6b12754b7761aca97 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Translator/prevent-translation.md | jhomarolo/azure-docs.pt-br | d11ab7fab56d90666ea619c6b12754b7761aca97 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Evitar a tradução de conteúdo – API de Tradução de Texto
titlesuffix: Azure Cognitive Services
description: Evitar a tradução de conteúdo com a API de Tradução de Texto.
services: cognitive-services
author: v-pawal
manager: nitinme
ms.service: cognitive-services
ms.subservice: translator-text
ms.topic: conceptual
ms.date: 02/21/2019
ms.author: v-jansko
ms.openlocfilehash: a9590a9a38859818e0b609d64fc12e30afd2e09e
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/23/2019
ms.locfileid: "60507004"
---
# <a name="how-to-prevent-translation-of-content-with-the-translator-text-api"></a>How to prevent translation of content with the Translator Text API
The Translator Text API lets you mark content so that it isn't translated. For example, you may want to mark code, a brand name, or a word/phrase that doesn't need to be translated.
## <a name="methods-for-preventing-translation"></a>Methods for preventing translation
1. Escape to a Twitter tag @somethingtopassthrough or #somethingtopassthrough. Un-escape the content after translation.
2. Tag your content with `notranslate`.
Example:
```html
<div class="notranslate">This will not be translated.</div>
<div>This will be translated. </div>
```
3. Use the [dynamic dictionary](dynamic-dictionary.md) to prescribe a specific translation.
4. Don't pass the string to the Translator Text API for translation.
5. Custom Translator: use a [Custom Translator dictionary](custom-translator/what-is-dictionary.md) to prescribe the translation of a phrase with 100% probability.
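To sketch option 3, the dynamic dictionary is expressed inline in the text you submit; the markup below follows the syntax described on the linked dynamic dictionary page (the phrase and its prescribed translation here are made-up examples):

```html
It is a <mstrans:dictionary translation="wordomatic">word-o-matic</mstrans:dictionary>.
```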
## <a name="next-steps"></a>Next steps
> [!div class="nextstepaction"]
> [Prevent translation in your Translator API call](reference/v3-0-translate.md)
| 40.282609 | 201 | 0.779817 | por_Latn | 0.989661 |
c1458eba85639d8381d7748452d6e63c8466b032 | 1,585 | md | Markdown | includes/germany-closure-info.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 66 | 2017-07-09T03:34:12.000Z | 2022-03-05T21:27:20.000Z | includes/germany-closure-info.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 671 | 2017-06-29T16:36:35.000Z | 2021-12-03T16:34:03.000Z | includes/germany-closure-info.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 171 | 2017-07-25T06:26:46.000Z | 2022-03-23T09:07:10.000Z | ---
author: gitralf
ms.author: ralfwi
ms.date: 10/16/2020
ms.service: germany
ms.topic: include
ms.openlocfilehash: 3e3b67afa4095402d1f9171409f091b926b4db8e
ms.sourcegitcommit: d6a739ff99b2ba9f7705993cf23d4c668235719f
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/24/2020
ms.locfileid: "117029011"
---
> [!IMPORTANT]
> Since [August 2018](https://news.microsoft.com/europe/2018/08/31/microsoft-to-deliver-cloud-services-from-new-datacentres-in-germany-in-2019-to-meet-evolving-customer-needs/), we have not accepted new customers or deployed any new features and services into the original Microsoft Cloud Germany locations.
>
> Based on evolving customer needs, we recently [launched](https://azure.microsoft.com/blog/microsoft-azure-available-from-new-cloud-regions-in-germany/) two new datacenter regions in Germany, offering customer data residency, full connectivity to Microsoft's global cloud network, and market-competitive pricing.
>
> Additionally, on September 30, 2020 we announced that Microsoft Cloud Germany would close on October 29, 2021. You can find more details here: [https://www.microsoft.com/cloud-platform/germany-cloud-regions](https://www.microsoft.com/cloud-platform/germany-cloud-regions).
>
> [Migrate](../articles/germany/germany-migration-main.md) today to take advantage of the broad set of capabilities, enterprise-grade security, and comprehensive features available in the new German datacenter regions.
| 72.045455 | 388 | 0.808202 | spa_Latn | 0.930199 |
c1460ceaef6c5b4b9e66a753022bd85f70254a3a | 4,989 | md | Markdown | lit-ceramic-sdk/notes.md | cryptoKevinL/LitChat | 7c9dfa7bce309d236cd3764169c9ae418ca2d58c | [
"Apache-2.0",
"MIT"
] | 3 | 2022-02-01T14:51:18.000Z | 2022-03-25T01:27:50.000Z | lit-ceramic-sdk/notes.md | cryptoKevinL/LitChat | 7c9dfa7bce309d236cd3764169c9ae418ca2d58c | [
"Apache-2.0",
"MIT"
] | 4 | 2022-01-18T00:12:24.000Z | 2022-02-04T02:02:28.000Z | lit-ceramic-sdk/notes.md | cryptoKevinL/LitChat | 7c9dfa7bce309d236cd3764169c9ae418ca2d58c | [
"Apache-2.0",
"MIT"
] | 1 | 2022-02-03T10:30:27.000Z | 2022-02-03T10:30:27.000Z | # General Notes From Building with Ceramic
### Node
For Ceramic you're required to run a client, which can either be backed by your own node (using their "core" client implementation) or use their JS HTTP client. Core requires setting up an IPFS node and dag-jose apparently, so for now, if I need to do this (and it looks like I do), I'm going with JS HTTP.
# Glossary
I found it useful to define some things in short form; the full glossary can be [found here](https://developers.ceramic.network/learn/glossary/). Some phrases are direct copies, most aren't.
## DID
### DID -
Decentralized Identifier, used to identify an entity, person, etc. via an agreed-upon URI scheme. They're used for authentication by most StreamTypes.
### DID Methods -
Implementation of the DID specification. DID methods must specify:
1. A name for the method in the form of a string
2. A description of where the DID document is stored (or how it is statically generated)
3. A DID resolver which: -->
- CAN return a DID document,
- GIVEN a URI
- WHICH conforms to that particular DID method.
There's an official method registry for DID methods with over 40 of them (maintained by the W3C), so that's cool.
For reference, a DID URI looks like this:
`did:<method-name>:<method-specific-identifier>`
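A quick sketch of pulling that URI apart (the helper below is hypothetical, not part of any DID library; the identifier is the W3C example DID):

```typescript
// Hypothetical helper: split a DID URI into its method name and
// method-specific identifier, per the shape shown above.
function parseDid(uri: string): { method: string; id: string } {
  const match = /^did:([a-z0-9]+):(.+)$/.exec(uri);
  if (!match) {
    throw new Error(`not a DID URI: ${uri}`);
  }
  return { method: match[1], id: match[2] };
}

const parsed = parseDid("did:example:123456789abcdefghi");
// parsed.method === "example", parsed.id === "123456789abcdefghi"
```

A DID resolver for a given method would take the `id` part and produce the DID document.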
### DID Document -
Contains metadata about a DID. Contains cryptographic key data for auth, at a minimum.
### DID Resolver -
Must be imported upon installation of JS HTTP or Core Client
### DID Providers -
DID providers are software libraries that expose a json-rpc interface which allows for the creation and usage of a DID that conforms to a particular DID method.
Usually a DID provider is constructed using a seed that the user controls. When using Ceramic with streams that require DIDs for authentication, applications either need to integrate a DID provider library, which leaves the responsibility of key management up to the application, or a DID wallet, which is a more user-friendly experience.
### DID Wallets -
DID wallets are software libraries or end-user applications that wrap DID providers with additional capabilities
3ID is the most popular wallet.
## Ceramic
### StreamType -
The processing logic used by the particular stream.
### Tile Document -
A StreamType that stores a JSON document, providing similar functionality as a NoSQL document store.
Used as a database replacement for identity metadata (profiles, social graphs, reputation scores, linked social accounts), user-generated content (blog posts, social media, etc.), indexes of other StreamIDs to form collections and user tables (IDX), DID documents, verifiable claims, and more.
### CAIP-10 Link -
A StreamType that stores a cryptographically verifiable proof that links a blockchain address to a DID.
# Useful Links
Lifecycle of a Stream, and more, very useful: https://github.com/ceramicnetwork/ceramic/blob/master/SPECIFICATION.md
# Completely Random Notes
Keeping streams isn't universally required, so I think this may suffer the same problem as torrents, in that if something is considered forbidden content a govt authority can censor it. I think that's probably by design but I found it interesting.
Arweave isn't implemented yet. They denote it as "archiving" (clever distinction, Arweave is forever if the price is right) as opposed to what they are currently doing with IPFS/Filecoin, which is more like pay-as-you-go storage.
##### Of interest: data withholding attacks..
They act like this isn't a big deal, but if I get someone's private keys and do this to them unknowningly I control their stream history. Imagine losing years worth of data due poor key management!
Reference:
One suggested attack on this conflict resolution system is a data withholding attack. In this scenario a user creates a stream, makes two conflicting updates and anchors one of them earlier than the other, but only publishes the data of the update that was anchored later. Now subsequent updates to the stream will be made on top of the second, published update. Every observer will accept these updates as valid since they have not seen the first update. However if the user later publishes the data of the earlier update, the stream will fork back to this update and all of the other updates made to the stream will be invalidated.
This is essentially a double spend attack which is the problem that blockchains solve. However since identities have only one owner, the user, this is less of a problem. In this case, a "double spend" would cause the user to lose all history and associations that have accrued on their identity, which they are naturally disincentivized to do.
In the case of organizational identities this is more of a problem, e.g. if an old admin of the org wants to cause trouble. This can be solved by introducing "heavy anchors" which rely more heavily on some on-chain mechanism. For example, a smart contract or a DAO that controls the identity.
| 54.824176 | 641 | 0.779916 | eng_Latn | 0.999241 |
c146193cc427ab5c386292bb2a7ab54ff055fa6c | 2,470 | md | Markdown | README.md | astrellon/simple-signals | ec3ebdbab4c16836b59cfb98c418ca496ffd3ec5 | [
"MIT"
] | null | null | null | README.md | astrellon/simple-signals | ec3ebdbab4c16836b59cfb98c418ca496ffd3ec5 | [
"MIT"
] | null | null | null | README.md | astrellon/simple-signals | ec3ebdbab4c16836b59cfb98c418ca496ffd3ec5 | [
"MIT"
] | null | null | null | # Simple Signals

A Typescript based simple signal framework.
For when all you need is a simple way to trigger and listen for type-safe events.
A longer description would be that signals are for handling events but each instance of a `Signal` is for representing a single event.
## Install
To get from npm:
```sh
npm install --save simple-signals
```
Alternatively you can download the code and build it yourself with
```sh
npm run build
```
And in the `dist` folder will be the Javascript and Typescript typings file.
Also the whole thing is one Typescript file so it's pretty easy to manually add it to your own source code.
## Features
- Small file size (about 0.3kb before compression)
- Type safe events
- Very simple API, only 3 methods.
- No dependencies
## Example
Signals are a way to represent a single event with type safety.
```typescript
import Signal from "simple-signals";
const newName = new Signal<string>();
const remove1 = newName.add((name) =>
{
console.log('New name 1:', name);
});
const remove2 = newName.add((name) =>
{
console.log('New name 2:', name);
});
console.log(newName.length()) // 2
newName.trigger('Foo');
// Console prints
// New name 1: Foo
// New name 2: Foo
remove1();
console.log(newName.length()) // 1
newName.trigger('Bar');
// Console prints
// New name 2: Bar
```
# API
## Types
```typescript
// Listener for when a signal is dispatched.
// Can return false to stop the next listener from triggering.
export type SignalListener<T> = (value: T) => void | boolean;
// A function used to remove a listener from the signal.
export type RemoveListener = () => void;
```
## Signal
The main class that keeps track of added signal listeners.
### length
`returns: number` The number of active listeners.
### add
`listener: SignalListener<T>` A function that will be called when the signal is triggered. The function can optionally return a boolean (false) to prevent further listeners from triggering.
`returns: RemoveListener` A function to remove the listener from the signal.
Add a listener to the signal, returns a function for removing the listener.
The order of added listeners is maintained.
### trigger
`value: T` The value to pass to each of the triggered listeners.
Triggers the listeners on the signal with the value given.
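Listener order matters here because of the early-stop rule from `SignalListener`: returning `false` stops the remaining listeners. The class below is a minimal model of that documented behavior (a sketch for illustration, not the library's actual source):

```typescript
type Listener<T> = (value: T) => void | boolean;

// Minimal model of the documented trigger semantics.
class MiniSignal<T> {
    private listeners: Listener<T>[] = [];

    public add(listener: Listener<T>): () => void {
        this.listeners.push(listener);
        return () => {
            const i = this.listeners.indexOf(listener);
            if (i >= 0) this.listeners.splice(i, 1);
        };
    }

    public trigger(value: T): void {
        for (const listener of this.listeners) {
            // A listener returning exactly false stops propagation.
            if (listener(value) === false) break;
        }
    }
}

const seen: string[] = [];
const sig = new MiniSignal<string>();
sig.add((name) => { seen.push(`first: ${name}`); return false; });
sig.add((name) => { seen.push(`second: ${name}`); });
sig.trigger("Foo");
// seen === ["first: Foo"]: the second listener never ran
```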
## License
MIT
## Author
Alan Lawrey 2020 | 25.729167 | 196 | 0.734413 | eng_Latn | 0.989788 |
c1464e7a82bc9b2827384e13d29a1f4de7c294ed | 724 | md | Markdown | README.md | creachadair/graphsheets | 790f47487c61b6411ef35ee2ee17878dcec3bc21 | [
"MIT"
] | null | null | null | README.md | creachadair/graphsheets | 790f47487c61b6411ef35ee2ee17878dcec3bc21 | [
"MIT"
] | null | null | null | README.md | creachadair/graphsheets | 790f47487c61b6411ef35ee2ee17878dcec3bc21 | [
"MIT"
] | null | null | null | # Grid and Graph Paper Programs
This repository contains small PostScript programs to print various kinds of
graph, hex, and other gridded paper.
| File | Description |
| --------------------- | ------------------------------------------- |
| graph-paper.ps | Square ruled paper |
| hex-graph-paper.ps | Hexagon and square ruled paper |
| hex-paper.ps | Hexagon ruled paper |
| knotwork-worksheet.ps | Square ruled paper with diagonal guidelines |
| return-labels.ps | Avery 5167/8167 address labels |
| tablature-sheet.ps | Six-string guitar tablature worksheet |
| 51.714286 | 76 | 0.519337 | eng_Latn | 0.97887 |
c14668905f6a948b85e72c5e35e660dc32a277f9 | 40 | md | Markdown | README.md | tjololo/helm_charts | e4ba19e037945f30040e165b91e0a0ab344043c0 | [
"Apache-2.0"
] | null | null | null | README.md | tjololo/helm_charts | e4ba19e037945f30040e165b91e0a0ab344043c0 | [
"Apache-2.0"
] | null | null | null | README.md | tjololo/helm_charts | e4ba19e037945f30040e165b91e0a0ab344043c0 | [
"Apache-2.0"
] | null | null | null | # helm_charts
Some personal helm charts
| 13.333333 | 25 | 0.825 | eng_Latn | 0.98931 |
c1467004208c9babe0076c05a30b085a9b5998fd | 517 | md | Markdown | README.md | wo1fsea/lens | adb78f09e37d59d6979dd816d9f902f00cf20a39 | [
"MIT"
] | null | null | null | README.md | wo1fsea/lens | adb78f09e37d59d6979dd816d9f902f00cf20a39 | [
"MIT"
] | null | null | null | README.md | wo1fsea/lens | adb78f09e37d59d6979dd816d9f902f00cf20a39 | [
"MIT"
] | null | null | null | # lens
A Path Tracing Example with GUI
## Build
1. Install [vcpkg](https://github.com/microsoft/vcpkg)
```
> git clone https://github.com/Microsoft/vcpkg.git
> cd vcpkg
PS> .\bootstrap-vcpkg.bat
Linux:~/$ ./bootstrap-vcpkg.sh
```
2. Use Vcpkg to Install Dependencies
```
PS> .\vcpkg install SDL2
Linux:~/$ ./vcpkg install SDL2
```
3. Use CMake to Build
```
> cd lens
> mkdir ./build
> cd ./build
> cmake -DCMAKE_TOOLCHAIN_FILE="[your_vcpkg_root]/scripts/buildsystems/vcpkg.cmake" ../
> cmake --build ./
``` | 17.233333 | 88 | 0.676983 | kor_Hang | 0.365262 |
c146e70ecee41a94d4ba39040b958f02a5127f2a | 22 | md | Markdown | README.md | kayjeee/c-schools | 1409f36fd02878f80c65f333193bcf88645035b4 | [
"Artistic-2.0"
] | null | null | null | README.md | kayjeee/c-schools | 1409f36fd02878f80c65f333193bcf88645035b4 | [
"Artistic-2.0"
] | null | null | null | README.md | kayjeee/c-schools | 1409f36fd02878f80c65f333193bcf88645035b4 | [
"Artistic-2.0"
] | null | null | null | # c-schools
jus a app
| 7.333333 | 11 | 0.681818 | eng_Latn | 0.94916 |
c14735d33b5438fa776ce73ae94d3d5c1bfd2813 | 810 | md | Markdown | _posts/2018-04-13-Veltins.md | AnxiousAccountant/chirpy-test | 1b613d6cb48b0408a639b0f985c2c1c816973e4f | [
"MIT"
] | null | null | null | _posts/2018-04-13-Veltins.md | AnxiousAccountant/chirpy-test | 1b613d6cb48b0408a639b0f985c2c1c816973e4f | [
"MIT"
] | null | null | null | _posts/2018-04-13-Veltins.md | AnxiousAccountant/chirpy-test | 1b613d6cb48b0408a639b0f985c2c1c816973e4f | [
"MIT"
] | 1 | 2021-12-12T07:18:47.000Z | 2021-12-12T07:18:47.000Z | ---
title: Veltins Alkoholfrei
date: 2018-04-13 12:00:00 +0000
firsttaste: 13 Apr 2018
categories: [Non-Alcoholic Beers]
tags: [beers] # TAG names should always be lowercase
image:
src: /assets/img/beers/Veltins.JPG
height: 240
width: 240
alt: Veltins
country: Germany
abv: 0.0%
ingredients: Water, Barley Malt, fermentation carbon dioxide, hops
released: Germany 1998, UK - Unknown
rating: 7/10
website: https://www.veltins.de/sortiment/veltins-alkoholfrei/
paragraph1: After a disappointing few new German beers over the past couple of years I was not holding out much hope.
paragraph2: But I was pleasantly surprised, as this one was nice and light with a good level of bitterness. The only disappointment was that I only had 2 of these.
paragraph3:
---
{% include beertable.md %}
| 35.217391 | 176 | 0.760494 | eng_Latn | 0.990698 |
c14a09f62f4b16bfc1e393b7f4a1b746c19b18f4 | 955 | md | Markdown | docs/reference/serial.md | dcbriccetti/pxt-adafruit | 7c9bd3e4fd1a4c38873891b45ab466e30c58a7ef | [
"MIT"
] | 46 | 2019-05-21T10:59:34.000Z | 2022-03-02T16:43:22.000Z | docs/reference/serial.md | dcbriccetti/pxt-adafruit | 7c9bd3e4fd1a4c38873891b45ab466e30c58a7ef | [
"MIT"
] | 954 | 2019-05-08T17:32:09.000Z | 2022-02-23T20:16:59.000Z | docs/reference/serial.md | dcbriccetti/pxt-adafruit | 7c9bd3e4fd1a4c38873891b45ab466e30c58a7ef | [
"MIT"
] | 52 | 2019-05-20T19:25:18.000Z | 2022-02-02T21:02:58.000Z | # Serial
Reading and writing data over a serial connection.
## ~hint
The @boardname@ can read data from and write data to another computer or device with a serial connection using USB. To use serial communication between your board and MakeCode, you need the [MakeCode for Adafruit](https://www.microsoft.com/store/apps/9pgzhwsk0pgd) app for Windows 10 and a USB cable.
You can also write data to an output log with the [console](/reference/console) functions without having to use a serial connection.
## ~
```cards
serial.writeLine("");
serial.writeNumber(0);
serial.writeValue("x", 0);
serial.writeString("");
```
## Advanced
```cards
serial.writeBuffer(null);
```
## See Also
[write line](/reference/serial/write-line),
[write string](/reference/serial/write-string),
[write number](/reference/serial/write-number),
[write value](/reference/serial/write-value),
[write buffer](/reference/serial/write-buffer),
[console](/reference/console)
| 28.088235 | 300 | 0.748691 | eng_Latn | 0.987419 |
c14a30f976da0d28f8b5b6130b32dc16bae86eba | 903 | md | Markdown | CHANGES.md | linkerd/linkerd-smi | f53437792545a632ecb5eee6145c03561247b698 | [
"Apache-2.0"
] | 9 | 2021-07-31T05:35:56.000Z | 2022-01-26T09:38:06.000Z | CHANGES.md | linkerd/linkerd-smi | f53437792545a632ecb5eee6145c03561247b698 | [
"Apache-2.0"
] | 9 | 2021-05-25T15:02:07.000Z | 2021-11-26T05:37:19.000Z | CHANGES.md | linkerd/linkerd-smi | f53437792545a632ecb5eee6145c03561247b698 | [
"Apache-2.0"
] | 1 | 2021-07-13T05:53:10.000Z | 2021-07-13T05:53:10.000Z | # Changes
## v0.2.0
This release adds the `TrafficSplit` (`v1alpha1` and `v1alpha2`) CRD into the
extension. This also includes improvements around the non-default namespace
creation in helm, along with the controller to emit events while processing SMI
resources.
This version is compatible with Linkerd starting from `edge-21.12.2`,
to prevent race conditions during the CRD install.
## v0.1.0
linkerd-smi 0.1.0 is the first public release of the SMI extension
for Linkerd. This extension follows [Linkerd's extension model](https://github.com/linkerd/linkerd2/blob/main/EXTENSIONS.md),
and ships with both a CLI and a Helm Chart.
The `smi-adaptor` is the main component of this extension. It is a Kubernetes
controller that listens for `TrafficSplit` objects and converts them into
a new corresponding `ServiceProfile` object, or updates the existing one
if it already exists.
| 39.26087 | 129 | 0.78959 | eng_Latn | 0.997552 |
c14aa2ad9ca094926fc0c5f198b0ca427f8519ed | 48 | md | Markdown | wiki/table/test.md | SeolHa314/KouWiki | 755d999679849b01a9703ec13b498d27998c9603 | [
"MIT"
] | 1 | 2021-06-22T20:16:17.000Z | 2021-06-22T20:16:17.000Z | wiki/table/test.md | SeolHa314/KouWiki | 755d999679849b01a9703ec13b498d27998c9603 | [
"MIT"
] | null | null | null | wiki/table/test.md | SeolHa314/KouWiki | 755d999679849b01a9703ec13b498d27998c9603 | [
"MIT"
] | null | null | null | | 야가미 코우 |
| ---------- |
| 캐릭터 소개 |
**sdfja**
| 8 | 14 | 0.3125 | kor_Hang | 0.983011 |
c14abc1c8d53345b951b8facc35a0df7b2b5b63b | 516 | md | Markdown | modules/fastqc/readme.md | jianhong/universalModule | 032d77865ec17fcf055f5db61a00dd5c76478e05 | [
"MIT"
] | null | null | null | modules/fastqc/readme.md | jianhong/universalModule | 032d77865ec17fcf055f5db61a00dd5c76478e05 | [
"MIT"
] | null | null | null | modules/fastqc/readme.md | jianhong/universalModule | 032d77865ec17fcf055f5db61a00dd5c76478e05 | [
"MIT"
] | null | null | null | # FASTQC module
Run fastqc for fastq files.
## params.options
- publish_dir, publish directory, default 'fastqc'
- publish_mode, publish mode, default 'copy'
- publish_enabled, publish enabled, default true
- args, arguments for fastqc, default '--quiet'
## labels
- process_medium
## input
- tuple val(meta), path(reads); meta must contain keys id and single_end
## output
- tuple val(meta), path("*.html"), emit: html
- tuple val(meta), path("*.zip"), emit: zip
- path "fastqc.version.txt", emit: version
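A hedged sketch of wiring this module into a DSL2 workflow (the include path, option values, and file names below are assumptions about your pipeline layout, not part of this module):

```nextflow
// Hypothetical usage; adjust the include path to where this module lives.
include { FASTQC } from './modules/fastqc/main' addParams( options: [ args: '--quiet' ] )

workflow {
    // meta must carry the documented keys: id and single_end
    reads = Channel.of( [ [ id: 'sample1', single_end: true ], file('sample1.fastq.gz') ] )
    FASTQC( reads )
    FASTQC.out.html.view()
}
```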
| 20.64 | 72 | 0.709302 | eng_Latn | 0.594128 |
c14af51a334d95bcbf2c0df52051f81bd111b963 | 1,017 | md | Markdown | docs/documentation/programmatic_scan.md | lgov/soda-sql | f5756b9abdd621fe48c92da4285da2795d210fc4 | [
"Apache-2.0"
] | null | null | null | docs/documentation/programmatic_scan.md | lgov/soda-sql | f5756b9abdd621fe48c92da4285da2795d210fc4 | [
"Apache-2.0"
] | null | null | null | docs/documentation/programmatic_scan.md | lgov/soda-sql | f5756b9abdd621fe48c92da4285da2795d210fc4 | [
"Apache-2.0"
] | null | null | null | ---
layout: default
title: Programmatic scan
parent: Documentation
nav_order: 10
---
# Programmatic scan
Here's how to run scans using Python:
Programmatic scan execution based on default dir structure:
```python
from sodasql.scan.scan_builder import ScanBuilder

scan_builder = ScanBuilder()
scan_builder.scan_yml_file = 'tables/my_table.yml'
# scan_builder will automatically find the warehouse.yml in the parent and same directory as the scan YAML file
# scan_builder.warehouse_yml_file = '../warehouse.yml'
scan = scan_builder.build()
scan_result = scan.execute()
if scan_result.has_failures():
    print('Scan has test failures, stop the pipeline')
```
Programmatic scan execution using dicts:
```python
from sodasql.scan.scan_builder import ScanBuilder

scan_builder = ScanBuilder()
scan_builder.warehouse_yml_dict = {
    'name': 'my_warehouse_name',
    'connection': {
        'type': 'snowflake',
        ...
    }
}
scan_builder.scan_yml_dict = {
    ...
}
scan = scan_builder.build()
scan_result = scan.execute()
if scan_result.has_failures():
    print('Scan has test failures, stop the pipeline')
```
| 24.214286 | 111 | 0.729597 | eng_Latn | 0.819064 |
c14bf04f59e56da61ee6e0abff6de240ba2342c0 | 179 | md | Markdown | bin/README.md | NYULibraries/pds-custom | 856ff1b825c1c9270a5c755a965fa3d8e3dd6087 | [
"MIT"
] | null | null | null | bin/README.md | NYULibraries/pds-custom | 856ff1b825c1c9270a5c755a965fa3d8e3dd6087 | [
"MIT"
] | 10 | 2015-03-02T20:37:30.000Z | 2018-12-20T13:19:16.000Z | bin/README.md | NYULibraries/pds-custom | 856ff1b825c1c9270a5c755a965fa3d8e3dd6087 | [
"MIT"
] | null | null | null | # Custom Scripts
The perl scripts in this directory are symbolically linked in the
`pdsroot/service_proc` directory. They provide the glue between PDS
and the custom NYU modules.
| 35.8 | 67 | 0.815642 | eng_Latn | 0.997274 |
c14c14815dbd42fe0f3c7648e0e71542d1154486 | 7,275 | md | Markdown | plugins/README.md | apla/hoxy | a46406655b822777c5d6bdf690ae8377a727c144 | [
"MIT"
] | null | null | null | plugins/README.md | apla/hoxy | a46406655b822777c5d6bdf690ae8377a727c144 | [
"MIT"
] | null | null | null | plugins/README.md | apla/hoxy | a46406655b822777c5d6bdf690ae8377a727c144 | [
"MIT"
] | 1 | 2022-01-30T08:46:29.000Z | 2022-01-30T08:46:29.000Z | Plugins
=======
Plugins are a way of arbitrarily extending hoxy's capabilities.
Plugins are invoked from the rules file. See `README.md` in the rules dir. If a plugin is invoked in the rules as `@foo()`, that corresponds to a file in this dir called `foo.js`.
List of OOTB Plugins
========================
* `@allow-origin()` - allows cross-origin resource sharing
* `@banner(textToShow,styleOverrides)` - display a banner on html pages. textToShow shows in banner, styleOverrides is a css style attribute type string that override the default styling of the banner.
* `@css-beautify()` - reformat css code
* `@empty-text()` - send an empty text response
* `@external-redirect(url)` - send an http redirect
* `@expiry(days, hours, mins, secs)` - send expiry headers
* `@html-beautify()` - reformat html code. [credit](http://github.com/einars/js-beautify)
* `@internal-redirect(url)` - silent redirect
* `@js-beautify()` - reformat javascript code. [credit](http://github.com/einars/js-beautify)
* `@jquery()` - DOM-manipulate a page using jQuery before sending it along to the client. [credit](http://jquery.com/), [credit](https://github.com/tmpvar/jsdom)
* `@send-404()` - sends a 404 response
* `@throttle(ms, chunkSize)` - throttle back the transfer speed
* `@unconditional()` - suppress http conditional get headers
* `@wait(ms)` - introduce latency
For more detailed usage info for a given plugin, peruse the JS files in this dir and look at the usage info in the comments.
Plugin Authoring Guide
======================
A plugin file has the form:
```js
/**
This plugin increases the temperature of the response by one kelvin.
usage: @foo()
*/

// this is a plugin file
exports.run = function(api) {
    // use the api
    // and/or do anything node.js can do
    api.notify(); // and done
};
```
WARNING: after it's done executing, a plugin *must* call `api.notify()`, otherwise hoxy will hang indefinitely. This scheme allows plugins to use asynchronous logic and still be executed in order.
Plugin API Documentation
------------------------
Here are the methods exposed by the plugin `api` object shown above.
### api.arg(index)
Gets any args passed to the plugin. For example, if the plugin is invoked as `@foo('bar',2000)` then:
    var firstArg = api.arg(0);
    // firstArg === "bar"
    var secondArg = api.arg(1);
    // secondArg === 2000
### api.getRequestInfo()
Gets a dictionary object containing all the information hoxy will be using to make the request to the server. If the plugin is executing in the response phase, this information is purely historical and modifying it has no effect. (There are exceptions, see note about cumulative effects below.) Otherwise, modifying the properties of this object will affect the request hoxy makes to the server.
* `requestInfo.method` - HTTP method to be used.
* `requestInfo.headers` - Header dictionary.
* `requestInfo.url` - URL to be used. (must be root-relative)
* `requestInfo.hostname` - Host to make the request to.
* `requestInfo.port` - Port to make the request on.
* `requestInfo.body` - An array of buffers containing binary data. For string manipulation against the body, it's recommended to use `api.getRequestBody()` and `api.setRequestBody(string)`.
* `requestInfo.throttle` - Integer number of milliseconds to wait between writing each binary buffer in the body array out to the server.
### api.getResponseInfo()
Gets a dictionary object containing all the information hoxy will be using to return the response to the client. If the plugin is executing in the request phase, this method will return `undefined`. (There are exceptions, see note about cumulative effects below.) Otherwise, modifying the properties of this object will affect the response hoxy returns to the browser.
* `responseInfo.headers` - header dictionary.
* `responseInfo.statusCode` - status code integer.
* `responseInfo.body` - An array of buffers objects containing binary data. For string manipulation against the body, it's recommended to use `api.getResponseBody()` and `api.setResponseBody(string)`.
* `responseInfo.throttle` - Integer number of milliseconds to wait between writing each binary buffer in the body array back to the client.
### api.setRequestInfo(newInfo)
If the plugin is executing in the response phase, calling this method has no effect. (There are exceptions, see note about cumulative effects below.) Otherwise, `newInfo` replaces the existing info hoxy will use to make the request to the server. The properties of `newInfo` *must* correspond to the ones listed in the section above for `api.getRequestInfo()`.
### api.setResponseInfo(newInfo)
If the plugin is executing in the request phase, calling this method prevents hoxy from making a request to the server. Otherwise, `newInfo` overwrites the response info hoxy has already received from the server. In either case, `newInfo` will be used to return a response to the client. Its properties *must* correspond to the ones listed in the section above for `api.getResponseInfo()`.
### api.notify()
Understanding why this method is called is *critical*. By default, hoxy will hang indefinitely on the execution of each plugin until this method is called, regardless of when and how the plugin returns or errors out. One way or another, therefore, this method must be called for each plugin execution.
Furthermore, all modifications by your plugin to the request/response *must* happen before `api.notify()` is called. If modifications happen after it's called, the behavior is undefined.
### api.setResponseBody(string)
Sets the entire response body to the given string. This is a convenience method to abstract away the annoyances of operating directly on the `responseInfo.body` buffer array. (See `api.getResponseInfo()`) Note: this method should not be used on binary response bodies.
### api.setRequestBody(string)
Sets the entire request body to the given string. This is a convenience method to abstract away the annoyances of operating directly on the `requestInfo.body` buffer array. (See `api.getRequestInfo()`) Note: this method should not be used on binary request bodies.
### api.getResponseBody()
Gets the entire response body as a string. This is a convenience method to abstract away the annoyances of operating directly on the `responseInfo.body` buffer array. (See `api.getResponseInfo()`) Note: this method should not be used on binary response bodies.
### api.getRequestBody()
Gets the entire request body as a string. This is a convenience method to abstract away the annoyances of operating directly on the `requestInfo.body` buffer array. (See `api.getRequestInfo()`) Note: this method should not be used on binary request bodies.
A Note About Cumulative Effects
-------------------------------
The effects of native actions and plugins are cumulative. If one rule sets a request header, then that change will be visible to conditions, actions and plugins of subsequent rules. For example, altering request info during the response phase may affect subsequent response-phase rules whose conditionals involve request info.
For instance, the second rule below will never execute:
    request: $origin.clear()
    response: if $origin not empty, @allow-origin()
# Netronome Flow Processor (NFP) Kernel Drivers
These drivers support Netronome's line of Flow Processor devices,
including the NFP4000 and NFP6000 model lines.
The repository builds the `nfp.ko` module which can be used to expose
networking devices (netdevs) and/or user space access to the device
via a character device.
The VF driver for NFP4000 and NFP6000 is available in upstream Linux
kernel since `4.5` release. The PF driver was added in Linux `4.11`.
This repository contains the same driver as upstream with necessary
compatibility code to make the latest version of the code build for
older kernels. We currently support kernels back to version `3.8`;
support for older versions can be added if necessary.
Compared to upstream drivers this repository contains:
- non-PCI transport support to enable building the driver for the
on-chip control processor;
- support for netdev-based communication with the on-chip control
processor;
- optional low-level user space ABI for accessing card internals.
For more information, please visit: http://www.netronome.com or
http://open-nfp.org/.
If questions arise or an issue is identified related to the released
driver code, please contact either your local Netronome contact or
email us at: oss-drivers@netronome.com
# Building and Installing
Building and installing for the currently running kernel:
    $ make
    $ sudo make install
To clean up use the `clean` target:
    $ make clean
To override the kernel version to build for set `KVER`:
    $ make KVER=<version>
    $ sudo make KVER=<version> install
The `Makefile` searches a number of standard locations for the configured
kernel sources. To override the detected location, set `KSRC`:
    $ make KSRC=<location of kernel build>
## Additional targets:
| Command | Action |
| --------------- | ------------------------------------------------- |
| make noisy | Verbose build with printing executed commands |
| make coccicheck | Runs Coccinelle/coccicheck (requires `coccinelle`) |
| make sparse | Runs `sparse`, a tool for static code analysis |
| make nfp_net | Build the driver limited to netdev operation |
# Acquiring Firmware
The NFP4000 and NFP6000 devices require application specific firmware
to function. Firmware files contain card type (`AMDA-*` string), media
config etc. They should be placed in `/lib/firmware/netronome` directory.
Firmware for basic NIC operation is available in the upstream
`linux-firmware.git` repository, and if your distribution kernel is `4.11`
or newer you will most likely have it on your system already. For
more application specific firmware files please contact
support@netronome.com.
## Dealing with multiple projects
NFP hardware is fully programmable; therefore, there can be different
firmware images targeting different applications. We recommend placing
actual firmware files in application-named subdirectories in
`/lib/firmware/netronome` and linking the desired files, e.g.:
```
$ tree /lib/firmware/netronome/
/lib/firmware/netronome/
├── bpf
│ ├── nic_AMDA0081-0001_1x40.nffw
│ └── nic_AMDA0081-0001_4x10.nffw
├── flower
│ ├── nic_AMDA0081-0001_1x40.nffw
│ └── nic_AMDA0081-0001_4x10.nffw
├── nic
│ ├── nic_AMDA0081-0001_1x40.nffw
│ └── nic_AMDA0081-0001_4x10.nffw
├── nic_AMDA0081-0001_1x40.nffw -> bpf/nic_AMDA0081-0001_1x40.nffw
└── nic_AMDA0081-0001_4x10.nffw -> bpf/nic_AMDA0081-0001_4x10.nffw
3 directories, 8 files
```
You may need to use hard links instead of symbolic links on distributions
which use the old `mkinitrd` command instead of `dracut` (e.g. Ubuntu).
After changing firmware files you may need to regenerate the initramfs
image. Initramfs contains drivers and firmware files your system may
need to boot. Refer to the documentation of your distribution to find
out how to update initramfs. A good indication of a stale initramfs
is the system loading the wrong driver or firmware on boot, even though
everything works correctly when the driver is reloaded manually later.
## Selecting firmware per device
Most commonly all cards on the system use the same type of firmware.
If you want to load a specific firmware image for a specific card, you
can use either the PCI bus address or the serial number. The driver will print
which files it's looking for when it recognizes an NFP device:
```
nfp: Looking for firmware file in order of priority:
nfp: netronome/serial-00-12-34-aa-bb-cc-10-ff.nffw: not found
nfp: netronome/pci-0000:02:00.0.nffw: not found
nfp: netronome/nic_AMDA0081-0001_1x40.nffw: found, loading...
```
In this case if a file (or link) called *serial-00-12-34-aa-bb-cc-10-ff.nffw*
or *pci-0000:02:00.0.nffw* is present in `/lib/firmware/netronome` this
firmware file will take precedence over `nic_AMDA*` files.
Note that `serial-*` and `pci-*` files are **not** automatically included
in initramfs, you will have to refer to documentation of appropriate tools
to find out how to include them.
# Troubleshooting
If you're running the driver with user space access enabled you will be
able to use all Netronome's proprietary `nfp-*` tools. This section only
covers standard debugging interfaces based on kernel infrastructure and
which are always available.
## Probing output
The most basic set of information is printed when the driver probes a device.
This includes versions of various hardware and firmware components.
## Netdev information
`ethtool -i <ifcname>` provides the user with a basic set of application FW and
flash FW versions. Note that the driver version for a driver built in-tree will
be equal to the kernel version string, while for an out-of-tree driver it will
either contain the git hash if built inside a git repository or the contents
of the `.revision` file. In both cases an out-of-tree driver build will have
`(o-o-t)` appended to distinguish it from in-tree builds.
## DebugFS
The `nfp_net` directory contains information about queue state for all netdevs
using the driver. It can be used to inspect contents of memory rings and
position of driver and hardware pointers for RX, TX and XDP rings.
## PCI BAR access
`ethtool -d <ifcname>` can be used to dump the PCI netdev memory.
## NSP logs access
`ethtool -w <ifcname> data <outfile>` dumps the logs of the Service Processor.
# Operation modes
The nfp.ko module provides drivers for both PFs and VFs. VFs can only
be used as netdevs. In case of PF one can select whether to load the
driver in netdev mode which will create networking interfaces or only
expose low-level API to the user space and run health monitoring,
diagnostics and control device from user space.
NOTE: if you're using Netronome-provided driver packages some
of the defaults mentioned in this document may have been changed
in the `/etc/modprobe.d/netronome.conf` file.
## PF netdev mode
In this mode the module provides a Linux network device interface on
the NFP's physical function. It requires appropriate FW image to
be either pre-loaded or available in `/lib/firmware/netronome/` to
work. This is the only mode of operation for the upstream driver.
Developers should use this mode if firmware is exposing vNICs on the
PCI PF device.
By default (i.e. not `make nfp_net` build) low-level user space access
ABIs of non-netdev mode will not be exposed, but can be re-enabled with
appropriate module parameters (`nfp_dev_cpp`).
## PF non-netdev mode
This mode is used by the out-of-tree Netronome SDN products for health
monitoring, loading firmware, and diagnostics. It is enabled by setting
`nfp_pf_netdev` module parameter to `0`. Driver in this mode will not
expose any netdevs of the PCI PF.
Developers should use this mode if firmware is only exposing vNICs on
the PCI VF devices.
This mode provides a low-level user space interface into the NFP
(`/dev/nfp-cpp-X` file), which is used by development and debugging tools.
It does not require NFP firmware to be loaded at device probe time.
## VF driver
The nfp.ko module contains a driver used to provide NIC-style access to Virtual
Functions of the device when operating in PCI SR-IOV mode.
*For example, if a physical NFP6000 device was running Netronome SDN,
and had assigned a rule matching `'all 172.16.0.0/24 received'` to VF 5,
then the NFP6000's SR-IOV device `#5` would use this driver to provide a
NIC style interface to the flows that match that rule.*
## nfp6000 quirks
NFP4000/NFP6000 chips need a minor PCI quirk to avoid the system crashing
after particular types of PCI config space accesses from user space.
If you're using the NFP on an old kernel you may see this message in
the logs:
```
Error: this kernel does not have quirk_nfp6000
Please contact support@netronome.com for more information
```
The suggested solution is to update your kernel. The fix is present in
upstream Linux `4.5`, but major distributions have backported it to
older kernels, too. If updating the kernel is not an option and you
are certain user space will not trigger the types of accesses which
may fault - you can attempt using the `ignore_quirks` parameter,
although this is not guaranteed to work on systems requiring the fix.
## Module parameters
NOTE: `modinfo nfp.ko` is the authoritative documentation,
this is only presented here as a reference.
| Parameter | Default | Comment |
| --------------- | ------- | ----------------------------------------------- |
| ignore_quirks | false | Ignore the check for NFP6000 PCI quirks |
| nfp_pf_netdev | true | PF driver in [Netdev mode](#pf-netdev-mode) |
| nfp_fallback | true | In netdev mode stay bound even if netdevs failed|
| nfp_dev_cpp | true | Enable NFP CPP user space /dev interface |
| fw_load_required | false | Fail init if no suitable FW is present |
| nfp_net_vnic | false | vNIC net devices [1] |
| nfp_net_vnic_pollinterval | 10 | Polling interval for Rx/Tx queues (in ms) |
| nfp_net_vnic_debug | false | Enable debug printk messages |
| nfp_reset | false | Reset the NFP on init [2] |
| nfp_reset_on_exit | false | Reset the NFP on exit |
| hwinfo_debug | false | Enable to log hwinfo contents on load |
| hwinfo_wait | 10 | Wait N sec for board.state match, -1 = forever |
| nfp6000_explicit_bars | 4 | Number of explicit BARs. (range: 1..4) |
| nfp6000_debug | false | Enable debugging for the NFP6000 PCIe |
| nfp6000_firmware | (none) | NFP6000 firmware to load from /lib/firmware/ |
NOTES:
1. The vNIC net device creates a pseudo-NIC for NFP ARM Linux systems.
2. Reset on init will be performed anyway if firmware file is specified.
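As an illustration only (the values are examples, not recommendations), a `/etc/modprobe.d/netronome.conf` overriding two of the defaults above could look like:

```
# /etc/modprobe.d/netronome.conf -- example override, not a recommendation
# run the PF driver in non-netdev mode and keep the user space /dev interface
options nfp nfp_pf_netdev=0 nfp_dev_cpp=1
```

Changes here take effect the next time the module is loaded; regenerate the initramfs if the module is loaded at boot.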
---
title: "DMSCHEMA_MINING_SERVICES Rowset | Microsoft Docs"
ms.custom: ""
ms.date: "03/14/2017"
ms.prod: "sql-server-2016"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "analysis-services"
- "analysis-services/data-mining"
ms.tgt_pltfrm: ""
ms.topic: "reference"
apiname:
- "DMSCHEMA_MINING_SERVICES"
apitype: "NA"
applies_to:
- "SQL Server 2016 Preview"
helpviewer_keywords:
- "DMSCHEMA_MINING_SERVICES rowset"
ms.assetid: 4a672f2f-d637-4def-a572-c18556f83d34
caps.latest.revision: 35
author: "Minewiskan"
ms.author: "owend"
manager: "erikre"
ms.workload: "Inactive"
---
# DMSCHEMA_MINING_SERVICES Rowset
Provides a description of each data mining algorithm that the provider supports.
## Rowset Columns
The **DMSCHEMA_MINING_SERVICES** rowset contains the following columns.
|Column name|Type indicator|Description|
|-----------------|--------------------|-----------------|
|**SERVICE_NAME**|**DBTYPE_WSTR**|The name of the algorithm. This column is provider-specific.|
|**SERVICE_TYPE_ID**|**DBTYPE_UI4**|This column contains a bitmap that describes the mining service. [!INCLUDE[msCoName](../../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] populates this column with one of the following values:<br /><br /> **DM_SERVICETYPE_CLASSIFICATION** (**1**)<br /><br /> **DM_SERVICETYPE_CLUSTERING** (**2**)|
|**SERVICE_DISPLAY_NAME**|**DBTYPE_WSTR**|A localizable display name for the algorithm.|
|**SERVICE_GUID**|**DBTYPE_GUID**|The GUID for the algorithm.|
|**DESCRIPTION**|**DBTYPE_WSTR**|A user-friendly description of the algorithm.|
|**PREDICTION_LIMIT**|**DBTYPE_UI4**|The maximum number of predictions the model and algorithm can provide.|
|**SUPPORTED_DISTRIBUTION_FLAGS**|**DBTYPE_WSTR**|A comma-delimited list of flags that describe the statistical distributions supported by the algorithm. This column contains one or more of the following values:<br /><br /> "**NORMAL**"<br /><br /> "**LOG NORMAL**"<br /><br /> "**UNIFORM**"|
|**SUPPORTED_INPUT_CONTENT_TYPES**|**DBTYPE_WSTR**|A comma-delimited list of flags that describe the input content types that are supported by the algorithm. This column contains one or more of the following values:<br /><br /> "**KEY**"<br /><br /> "**DISCRETE**"<br /><br /> "**CONTINUOUS**"<br /><br /> "**DISCRETIZED**"<br /><br /> "**ORDERED**"<br /><br /> "KEY **SEQUENCE**"<br /><br /> "**CYCLICAL**"<br /><br /> "**PROBABILITY**"<br /><br /> "**VARIANCE**"<br /><br /> "**STDEV**"<br /><br /> "**SUPPORT**"<br /><br /> "**PROBABILITY VARIANCE**"<br /><br /> "**PROBABILITY STDEV**"<br /><br /> "**KEY TIME**"|
|**SUPPORTED_PREDICTION_CONTENT_TYPES**|**DBTYPE_WSTR**|A comma-delimited list of flags that describe the prediction content types that are supported by the algorithm. This column contains one or more of the following values:<br /><br /> "**KEY**"<br /><br /> "**DISCRETE**"<br /><br /> "**CONTINUOUS**"<br /><br /> "**DISCRETIZED**"<br /><br /> "**ORDERED**"<br /><br /> "KEY **SEQUENCE** "<br /><br /> "**CYCLICAL**"<br /><br /> "**PROBABILITY**"<br /><br /> "**VARIANCE**"<br /><br /> "**STDEV**"<br /><br /> "**SUPPORT**"<br /><br /> "**PROBABILITY VARIANCE**"<br /><br /> "**PROBABILITY STDEV**"<br /><br /> "KEY TIME"|
|**SUPPORTED_MODELING_FLAGS**|**DBTYPE_WSTR**|A comma-delimited list of the modeling flags that are supported by the algorithm. This column contains one or more of the following values:<br /><br /> "**MODEL_EXISTENCE_ONLY**"<br /><br /> "**REGRESSOR**"<br /><br /> <br /><br /> Note that provider-specific flags can also be defined.|
|**SUPPORTED_SOURCE_QUERY**|**DBTYPE_WSTR**|This column is supported for backward compatibility.|
|**TRAINING_COMPLEXITY**|**DBTYPE_I4**|The length of time that training is expected to take:<br /><br /> **DM_TRAINING_COMPLEXITY_LOW** indicates that the running time is relatively short, and it is proportional to input.<br /><br /> **DM_TRAINING_COMPLEXITY_MEDIUM** indicates that the running time may be long, but it is generally proportional to input.<br /><br /> **DM_TRAINING_COMPLEXITY_HIGH** indicates that the running time is long and it may grow exponentially in relationship to the number of training cases.|
|**PREDICTION_COMPLEXITY**|**DBTYPE_I4**|The length of time that prediction is expected to take:<br /><br /> **DM_PREDICTION_COMPLEXITY_LOW** indicates that the running time is relatively short, and it is proportional to input.<br /><br /> **DM_PREDICTION_COMPLEXITY_MEDIUM** indicates that the running time may be long, but it is generally proportional to input.<br /><br /> **DM_PREDICTION_COMPLEXITY_HIGH** indicates that the running time is long and it may grow exponentially in relationship to the number of training cases.|
|**EXPECTED_QUALITY**|**DBTYPE_I4**|The expected quality of the model produced with this algorithm:<br /><br /> **DM_EXPECTED_QUALITY_LOW**<br /><br /> **DM_EXPECTED_QUALITY_MEDIUM**<br /><br /> **DM_EXPECTED_QUALITY_HIGH**|
|**SCALING**|**DBTYPE_I4**|The scalability of the algorithm:<br /><br /> **DM_SCALING_LOW**<br /><br /> **DM_SCALING_MEDIUM**<br /><br /> **DM_SCALING_HIGH**|
|**ALLOW_INCREMENTAL_INSERT**|**DBTYPE_BOOL**|A Boolean that indicates whether the algorithm supports incremental training, i.e., updating the discovered patterns based on new factual data, rather than fully re-discovering the patterns.|
|**ALLOW_PMML_INITIALIZATION**|**DBTYPE_BOOL**|A Boolean that indicates whether mining models can be created based on a PMML 2.1 string.<br /><br /> If **TRUE**, the mining algorithm supports initialization from PMML 2.1 content.|
|**CONTROL**|**DBTYPE_I4**|The support given by the service if training is interrupted:<br /><br /> **DM_CONTROL_NONE** indicates that the algorithm cannot be canceled after it starts to train the model.<br /><br /> **DM_CONTROL_CANCEL** indicates that the algorithm can be canceled after it starts to train the model, but must be restarted to resume training.<br /><br /> **DM_CONTROL_SUSPENDRESUME** indicates that the algorithm can be canceled and resumed at any time, but results are not available until training is complete.<br /><br /> **DM_CONTROL_SUSPENDWITHRESULT** indicates that the algorithm can be canceled and resumed at any time, and any incremental results can be obtained.|
|**ALLOW_DUPLICATE_KEY**|**DBTYPE_BOOL**|A Boolean that indicates whether cases can contain duplicate keys.<br /><br /> If **VARIANT_TRUE**, cases are allowed to contain duplicate keys.|
|**VIEWER_TYPE**|**DBTYPE_WSTR**|The recommended viewer for this model.|
|**HELP_FILE**|**DBTYPE_WSTR**|(Optional) The name of the file that contains the documentation for this service.|
|**HELP_CONTEXT**|**DBTYPE_I4**|(Optional) The Help context ID for this service.|
|**MSOLAP_SUPPORTS_ANALYSIS_SERVICES_DDL**|**DBTYPE_WSTR**|The version of DDL supported. 0 indicates no DDL support.|
|**MSOLAP_SUPPORTS_OLAP_MINING_MODELS**|**DBTYPE_BOOL**|A Boolean that indicates whether OLAP mining models can be created.<br /><br /> If **TRUE**, OLAP mining models can be created. Requires **MSOLAP_SUPPORTS_ANALYSIS_SERVICES_DDL** to be non-zero.|
|**MSOLAP_SUPPORTS_DATA_MINING_DIMENSIONS**|**DBTYPE_BOOL**|A Boolean that indicates whether data mining dimensions can be created.<br /><br /> If **TRUE**, data mining dimensions can be created.|
|**MSOLAP_SUPPORTS_DRILLTHROUGH**|**DBTYPE_BOOL**|A Boolean that indicates whether the service supports drillthrough capabilities.<br /><br /> If **TRUE**, the service supports drill-through capabilities.|
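If you want to inspect this rowset on a live Analysis Services instance, one convenient option is a DMX query against the `$SYSTEM` schema (run from an Analysis Services query window):

```sql
SELECT *
FROM $SYSTEM.DMSCHEMA_MINING_SERVICES
```

The same information is also available through the OLE DB schema rowset interfaces described above.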
## Restriction Columns
The **DMSCHEMA_MINING_SERVICES** rowset can be restricted on the columns listed in the following table.
|Column name|Type indicator|Restriction State|
|-----------------|--------------------|-----------------------|
|**SERVICE_NAME**|**DBTYPE_WSTR**|Optional.|
|**SERVICE_TYPE_ID**|**DBTYPE_UI4**|Optional.|
## See Also
[Data Mining Schema Rowsets](../../../analysis-services/schema-rowsets/data-mining/data-mining-schema-rowsets.md)
---
title: "Gestion d'une passerelle IP de messagerie unifiée: Exchange 2013 Help"
TOCTitle: Gestion d'une passerelle IP de messagerie unifiée
ms:assetid: 387e540f-8c59-42d2-a423-99fcf97e00aa
ms:mtpsurl: https://technet.microsoft.com/fr-fr/library/Aa997283(v=EXCHG.150)
ms:contentKeyID: 50477925
ms.date: 04/24/2018
mtps_version: v=EXCHG.150
f1_keywords:
- Microsoft.Exchange.Management.SnapIn.Esm.Servers.UnifiedMessaging.UMIPGatewayGeneralPropertyPageControl
ms.translationtype: HT
---
# Manage a UM IP gateway

_**Applies to:** Exchange Online, Exchange Server 2013, Exchange Server 2016_

_**Topic Last Modified:** 2013-02-21_

After you create a UM IP gateway, you can view or configure a variety of settings. For example, you can configure the IP address or fully qualified domain name (FQDN), configure outgoing call settings, and enable or disable the message waiting indicator.

For other management tasks related to UM IP gateways, see [UM IP gateway procedures](um-ip-gateway-procedures-exchange-2013-help.md).
## What do you need to know before you begin?

- Estimated time to complete: 5 minutes.

- You need to be assigned permissions before you can perform this procedure. To see what permissions you need, see the "UM IP gateways" entry in the [Unified Messaging permissions](unified-messaging-permissions-exchange-2013-help.md) topic.

- Before you perform these procedures, confirm that a UM dial plan has been created. For detailed steps, see [Create a UM dial plan](create-a-um-dial-plan-exchange-2013-help.md).

- Before you perform these procedures, confirm that a UM IP gateway has been created. For detailed steps, see [Create a UM IP gateway](create-a-um-ip-gateway-exchange-2013-help.md).

- For information about keyboard shortcuts that may apply to the procedures in this topic, see [Keyboard shortcuts in the Exchange admin center](keyboard-shortcuts-in-the-exchange-admin-center-exchange-online-protection-help.md).
> [!TIP]
> Having problems? Ask for help in the Exchange forums. Visit the forums at <a href="https://go.microsoft.com/fwlink/p/?linkid=60612">Exchange Server</a>, <a href="https://go.microsoft.com/fwlink/p/?linkid=267542">Exchange Online</a>, or <a href="https://go.microsoft.com/fwlink/p/?linkid=285351">Exchange Online Protection</a>.
## What do you want to do?

## Use the EAC to view or configure UM IP gateway properties

1. In the EAC, navigate to **Unified Messaging** \> **UM IP Gateways**. In the list view, select the UM IP gateway that you want to manage, and then click **Edit**.

2. On the **UM IP Gateway** page, you can view and configure the settings of the UM IP gateway, including the following:
   - **Status** This read-only field shows the status of the UM IP gateway.

   - **Name** Use this text box to specify a unique name for the UM IP gateway. This is the display name that appears in the EAC. If you need to change the display name of the UM IP gateway after it's been created, you must first delete the existing UM IP gateway and then create another one with the appropriate name. The UM IP gateway name is required, but it's used for display purposes only. Because your organization may use multiple UM IP gateways, we recommend that you use meaningful names for them. The maximum length of a UM IP gateway name is 64 characters, and it can include spaces.

   - **Address** You can configure a UM IP gateway with either an IP address or a fully qualified domain name (FQDN). Use this box to specify the IP address or FQDN configured on the VoIP gateway, SIP-enabled PBX, IP PBX, or session border controller (SBC).

     You can enter alphabetical and numeric characters in this box. IPv4 addresses, IPv6 addresses, and FQDNs are supported. If you use an FQDN, you must also make sure that you've correctly configured a DNS host record for the VoIP gateway so that the host name resolves correctly to an IP address. Also, if you use an FQDN instead of an IP address and the DNS configuration of the UM IP gateway changes, you must disable and then enable the UM IP gateway to make sure that its configuration information is updated correctly.

     If you want to use mutual Transport Layer Security (TLS) between a UM IP gateway and a dial plan that's operating in SIP Secured or Secured mode, you must configure the UM IP gateway with an FQDN. You must also configure it to listen on port 5061 and verify that any IP gateways or IP PBXs have also been configured to listen for mutual TLS requests on port 5061. To configure the UM IP gateway, run the following command: `Set-UMIPGateway -identity MyUMIPGateway -Port 5061`.

   - **Allow outgoing calls through this UM IP gateway** Select this check box to allow the UM IP gateway to accept and process outgoing calls. This setting doesn't affect call transfers or incoming calls from a VoIP gateway.

     This setting is enabled by default when the UM IP gateway is created. If you disable this setting, users associated with the dial plan won't be able to place outgoing calls through the VoIP gateway, IP PBX, or SBC defined in the **Address** field.

   - **Allow message waiting indicator** Select this check box to allow voice mail notifications to be sent to users for calls received by the UM IP gateway. This setting allows the UM IP gateway to receive and send SIP NOTIFY messages for users. It is enabled by default and allows message waiting notifications to be sent to users.

     A message waiting indicator can be any mechanism that indicates the existence of a new or unheard message. The indication that a new voice message has arrived can be found in the Inbox in client applications such as Outlook and Outlook Web App. It can also be a text (SMS) message sent to a registered mobile phone, an outbound call placed from an Exchange server to a preconfigured number, or a lighted lamp on a user's desk phone.
## Use the Shell to configure UM IP gateway properties

This example modifies the IP address of a UM IP gateway named `MyUMIPGateway`.
    Set-UMIPGateway -Identity MyUMIPGateway -Address 10.10.10.1
This example prevents the UM IP gateway named `MyUMIPGateway` from accepting incoming calls and prevents outgoing calls.
    Set-UMIPGateway -Identity MyUMIPGateway -Address voipgateway.contoso.com -Status 2 -OutcallsAllowed $false
This example enables the UM IP gateway to function as a VoIP gateway simulator. You can use it with the **Test-UMConnectivity** cmdlet.
    Set-UMIPGateway -Identity MyUMIPGateway -Simulator $true
> [!IMPORTANT]
> There is a period of latency before any changes you make to the configuration of a UM IP gateway are replicated to all the Exchange servers in the same UM dial plan as that UM IP gateway.
This example prevents the UM IP gateway named `MyUMIPGateway` from accepting incoming calls and also prevents outgoing calls, sets an IPv6 address, and allows the UM IP gateway to use both IPv4 and IPv6 addresses.
    Set-UMIPGateway -Identity MyUMIPGateway -Address fe80::39bd:88f7:6969:d223%11 -IPAddressFamily Any -Status Disabled -OutcallsAllowed $false
## Use the Shell to view UM IP gateway properties

This example displays a formatted list of all the UM IP gateways in the Active Directory forest.
    Get-UMIPGateway | Format-List
This example displays the properties of a UM IP gateway named `MyUMIPGateway`.
    Get-UMIPGateway -Identity MyUMIPGateway
This example displays all the UM IP gateways, including VoIP gateway simulators, in the Active Directory forest.
    Get-UMIPGateway -IncludeSimulator $true
| 96 | 799 | 0.796074 | fra_Latn | 0.990356 |
c14da989f699d25facc59003a1556c68e881bef8 | 1,404 | md | Markdown | 2020/08/02/2020-08-02 18:20.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/08/02/2020-08-02 18:20.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/08/02/2020-08-02 18:20.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | Data snapshot: 18:00, August 2, 2020
Status: 200
1. Tang Yan gave birth to a daughter
Weibo heat: 3546760
2. Peking University responds to the left-behind girl who chose its archaeology major
Weibo heat: 2026480
3. Dilraba in a high-slit long dress
Weibo heat: 2010459
4. "You Are My Glory" officially announced
Weibo heat: 1654495
5. The "青簪行" trailer uses the actors' original voices
Weibo heat: 1346854
6. Mother calls the police on her own 7-year-old daughter for taking toys from a mall
Weibo heat: 1162190
7. Yi Jianlian injured
Weibo heat: 1149487
8. Former Air Force commander General Wang Hai passes away
Weibo heat: 1085607
9. Protests against epidemic-control measures break out in Berlin, Germany
Weibo heat: 1034348
10. "Legend of Fei" (有翡) releases its "破雪推云" trailer
Weibo heat: 986502
11. Urumqi: treatment fully free for confirmed, suspected, and asymptomatic cases
Weibo heat: 960978
12. TV plot lines even a dog couldn't sit through
Weibo heat: 869663
13. Yang Mi in a sparkling diamond gown
Weibo heat: 722427
14. "The Long Ballad" trailer
Weibo heat: 670366
15. H7N7 bird flu outbreak in Australia
Weibo heat: 667254
16. "Blossoms" officially announces Hu Ge
Weibo heat: 661998
17. The cast of "The Three-Body Problem"
Weibo heat: 654165
18. Fan theories about the finale of "Nothing But Thirty"
Weibo heat: 652416
19. US forces to be permanently stationed in Poland
Weibo heat: 642836
20. Tencent Video annual launch event
Weibo heat: 642484
21. Kris Wu in a white shirt and strap vest
Weibo heat: 623510
22. "The Eight Hundred" gets a release date
Weibo heat: 535669
23. Melbourne, Australia to impose a curfew
Weibo heat: 481893
24. Bitcoin
Weibo heat: 459959
25. What it feels like to get too deep into a role
Weibo heat: 452406
26. Recent photos of Adele
Weibo heat: 413693
27. Baby-naming geniuses
Weibo heat: 321084
28. Suspected food poisoning at a rice-noodle-roll shop in Huilai
Weibo heat: 219340
29. "囍" is stunning
Weibo heat: 211622
30. Hukou Waterfall on the Yellow River sees its largest flow since the flood season began
Weibo heat: 185543
31. The most interesting place names you've ever seen
Weibo heat: 175864
32. Feeling for Gu Jia
Weibo heat: 172788
33. The cast of the drama adaptation of "遇龙"
Weibo heat: 156304
34. Iran says it arrested the head of a US-backed terrorist group
Weibo heat: 151762
35. Search-and-rescue footage of the female college student missing in Hoh Xil
Weibo heat: 149311
36. The last flower-and-bird market in downtown Shanghai closes
Weibo heat: 141347
37. Follow-up on the driver who risked his life to drive a burning truck away
Weibo heat: 126045
38. The rap show 说唱听我的
Weibo heat: 123695
39. Bai Jugang responds to his band's elimination
Weibo heat: 117747
40. Dalian young man breaks out in a heat rash after re-entering the quarantine zone
Weibo heat: 108046
41. Guangdong customs cracks a ¥860 million excavator-smuggling case
Weibo heat: 97703
42. V5 ends JDG's eleven-game winning streak
Weibo heat: 97438
43. Singapore faces its worst dengue outbreak on record
Weibo heat: 97407
44. Fan Chengcheng's ideal type is Ni Ni
Weibo heat: 97404
45. Aerospace-helmet-style COVID face shields
Weibo heat: 94017
46. Fenbi mock exam
Weibo heat: 89653
47. "Interstellar" re-release
Weibo heat: 89426
48. "Shining Nikki" issues a statement
Weibo heat: 85596
49. The support lineup for Justin Huang's (黄明昊) album
Weibo heat: 84894
50. Microsoft pauses negotiations to acquire TikTok's US business
Weibo heat: 78107
c14e967a99cdc59ab4df8f8775291f31a5d453b7 | 4,596 | md | Markdown | articles/cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 12 | 2017-08-28T07:45:55.000Z | 2022-03-07T21:35:48.000Z | articles/cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 441 | 2017-11-08T13:15:56.000Z | 2021-06-02T10:39:53.000Z | articles/cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 27 | 2017-11-13T13:38:31.000Z | 2022-02-17T11:57:33.000Z | ---
title: Omówienie stosowania rabatu za rezerwację do usługi Azure Cache for Redis | Microsoft Docs
description: Dowiedz się, w jaki sposób rabat za rezerwację jest stosowany do usługi Azure Cache for Redis.
author: yegu-ms
manager: maiye
ms.service: cache
ms.topic: conceptual
ms.date: 01/22/2020
ms.author: yegu
ms.openlocfilehash: 5f9e0a18db0920acd35ebd7b133ed3fe5d0eaee9
ms.sourcegitcommit: 9eda79ea41c60d58a4ceab63d424d6866b38b82d
ms.translationtype: HT
ms.contentlocale: pl-PL
ms.lasthandoff: 11/30/2020
ms.locfileid: "96352953"
---
# <a name="how-the-reservation-discount-is-applied-to-azure-cache-for-redis"></a>How the reservation discount is applied to Azure Cache for Redis

After you buy Azure Cache for Redis reserved capacity, the reservation discount is automatically applied to cache instances that match the attributes and quantity of the reservation. A reservation covers only the compute costs of Azure Cache for Redis; storage and networking are charged at normal rates. Reserved capacity is only available for [Premium tier](../../azure-cache-for-redis/quickstart-create-redis.md) caches.
## <a name="how-reservation-discount-is-applied"></a>How the reservation discount is applied

The reservation discount is applied on a "*use it or lose it*" basis. So if you don't have matching resources in a given hour, you lose the reservation quantity for that hour. Unused reserved hours can't be carried forward.

When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resource is found in the specified scope, the reserved hours are lost.
## <a name="discount-applied-to-azure-cache-for-redis"></a>Discount applied to Azure Cache for Redis

The Azure Cache for Redis reserved capacity discount is applied to caches on an hourly basis. The reservation you buy is matched to the compute usage emitted by the running caches. For caches that don't run the full hour, the reservation is automatically applied to other caches that match the reservation attributes. The discount can apply to caches that are running concurrently. If you don't have caches that run for the full hour and match the reservation attributes, you don't receive the full benefit of the reservation discount for that hour.

The following examples show how the Azure Cache for Redis reserved capacity discount applies, depending on the number of caches you bought and how long they run.
* **Example 1**: You buy Azure Cache for Redis reserved capacity for a 6-GB cache. If you run a 13-GB cache that matches the rest of the reservation attributes, you're charged the pay-as-you-go rate for 7 GB of Azure Cache for Redis compute usage and receive the reservation discount for one hour of 6-GB cache compute usage.

For the remaining examples, assume the Azure Cache for Redis reserved capacity you bought is for a 26-GB cache and the rest of the reservation attributes match the running caches.

* **Example 2**: You run two 13-GB caches for an hour. The 26-GB reservation discount is applied to the compute usage of both caches.
* **Example 3**: You run one 26-GB cache from 1 PM to 1:30 PM. You run another 26-GB cache from 1:30 PM to 2 PM. Both caches are covered by the reservation discount.
* **Example 4**: You run one 26-GB cache from 1 PM to 1:45 PM. You run another 26-GB cache from 1:30 PM to 2 PM. You're charged the pay-as-you-go rate for the 15-minute period in which both caches run concurrently. The reservation discount applies to the compute usage for the rest of the time.
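As a rough numerical sketch of the per-hour accounting behind such examples (an illustration only, not Azure's actual billing code; usage is measured in GB-hours within a single hour, and the function name is ours):

```python
def split_hourly_usage(reserved_gb, usage_gb_hours):
    """Split one hour of cache compute usage into the part covered by a
    reservation (use-it-or-lose-it, per hour) and the pay-as-you-go rest.

    `usage_gb_hours` lists the GB-hours emitted by each running cache
    during that hour (a 13-GB cache running the full hour emits 13).
    """
    total = sum(usage_gb_hours)
    covered = min(reserved_gb, total)   # the reservation covers up to its size
    overage = total - covered           # billed at normal pay-as-you-go rates
    return covered, overage

# A 6-GB reservation against one 13-GB cache
print(split_hourly_usage(6, [13]))                    # (6, 7)

# A 26-GB reservation, two 26-GB caches overlapping for 15 minutes
print(split_hourly_usage(26, [26 * 0.75, 26 * 0.5]))  # (26, 6.5)
```

The second call mirrors the overlap scenario: 26 GB-hours are covered by the reservation, and the 15-minute overlap (6.5 GB-hours) bills at the pay-as-you-go rate.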
To learn how to view the application of your Azure reservations in billing usage reports, see [Understand Azure reservation usage](./understand-reserved-instance-usage-ea.md).

## <a name="need-help-contact-us"></a>Need help? Contact us

If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
c14efb0102c8e25fdd2f4eecc6c31b5b6608ba39 | 5,301 | md | Markdown | SUMMARY.md | alexandriatwo/transformers | 8c04b75b754cf4c0eb4f3e330bebd757e392c478 | [
"Apache-2.0"
] | null | null | null | SUMMARY.md | alexandriatwo/transformers | 8c04b75b754cf4c0eb4f3e330bebd757e392c478 | [
"Apache-2.0"
] | null | null | null | SUMMARY.md | alexandriatwo/transformers | 8c04b75b754cf4c0eb4f3e330bebd757e392c478 | [
"Apache-2.0"
] | null | null | null | # Table of contents
* [README](README.md)
* [🤗 Transformers Notebooks](notebooks/README.md)
* [Examples](notebooks/examples/README.md)
* [Research projects](notebooks/examples/research_projects/README.md)
* [MM-IMDb](notebooks/examples/research_projects/mm-imdb.md)
* [Plug and Play Language Models: a Simple Approach to Controlled Text Generation](notebooks/examples/research_projects/pplm.md)
* [Adversarial evaluation of model performances](notebooks/examples/research_projects/adversarial.md)
* [Fine-tuning Wav2Vec2](notebooks/examples/research_projects/wav2vec2/README.md)
* [FINE\_TUNE\_XLSR\_WAV2VEC2](notebooks/examples/research_projects/wav2vec2/fine_tune_xlsr_wav2vec2.md)
* [LXMERT DEMO](notebooks/examples/research_projects/lxmert.md)
* [Movement Pruning: Adaptive Sparsity by Fine-Tuning](notebooks/examples/research_projects/movement-pruning.md)
* [README](notebooks/examples/research_projects/rag.md)
* [Whole Word Mask Language Model](notebooks/examples/research_projects/mlm_wwm.md)
* [Long Form Question Answering](notebooks/examples/research_projects/longform-qa.md)
* [Distil\*](notebooks/examples/research_projects/distillation.md)
* [README](notebooks/examples/research_projects/seq2seq-distillation/README.md)
* [precomputed\_pseudo\_labels](notebooks/examples/research_projects/seq2seq-distillation/precomputed_pseudo_labels.md)
* [Patience-based Early Exit](notebooks/examples/research_projects/bert-loses-patience.md)
* [DeeBERT: Early Exiting for \*BERT](notebooks/examples/research_projects/deebert.md)
* [Text Summarization with Pretrained Encoders](notebooks/examples/research_projects/bertabs.md)
* [Performer fine-tuning](notebooks/examples/research_projects/performer.md)
* [Zero-shot classifier distillation](notebooks/examples/research_projects/zero-shot-distillation.md)
* [Multiple Choice](notebooks/examples/multiple-choice.md)
* [Legacy examples](notebooks/examples/legacy/README.md)
* [Token classification](notebooks/examples/legacy/token-classification.md)
* [README](notebooks/examples/legacy/seq2seq/README.md)
* [romanian\_postprocessing](notebooks/examples/legacy/seq2seq/romanian_postprocessing.md)
* [Text classification examples](notebooks/examples/text-classification.md)
* [Token classification](notebooks/examples/token-classification.md)
* [README](notebooks/examples/question-answering.md)
* [Sequence to Sequence Training and Evaluation](notebooks/examples/seq2seq.md)
* [Language model training](notebooks/examples/language-modeling.md)
* [🤗 Benchmark results](notebooks/examples/benchmarking.md)
* [Language generation](notebooks/examples/text-generation.md)
* [scripts](notebooks/scripts/README.md)
* [README](notebooks/scripts/tatoeba.md)
* [How to contribute to transformers?](notebooks/contributing.md)
* [Contributor Covenant Code of Conduct](notebooks/code_of_conduct.md)
* [tests](notebooks/tests/README.md)
* [Testing new Hugging Face Deep Learning Container.](notebooks/tests/sagemaker.md)
* [How To Request Support](notebooks/issues.md)
* [.github](notebooks/.github/README.md)
* [ISSUE\_TEMPLATE](notebooks/.github/issue_template/README.md)
* [bug-report](notebooks/.github/issue_template/bug-report.md)
* [🌟 New model addition](notebooks/.github/issue_template/new-model-addition.md)
* [📚 Migration](notebooks/.github/issue_template/migration.md)
* [question-help](notebooks/.github/issue_template/question-help.md)
* [🖥 Benchmarking transformers](notebooks/.github/issue_template/new-benchmark.md)
* [🚀 Feature request](notebooks/.github/issue_template/feature-request.md)
* [What does this PR do?](notebooks/.github/pull_request_template.md)
* [🔥 Model cards now live inside each huggingface.co model repo 🔥](notebooks/model_cards/README.md)
* [google](notebooks/model_cards/google/README.md)
* [TAPAS base model](notebooks/model_cards/google/tapas-base.md)
* [Generating the documentation](notebooks/docs/README.md)
* [source](notebooks/docs/source/README.md)
* [Installation](notebooks/docs/source/installation.md)
* [notebooks](notebooks/docs/source/notebooks.md)
* [Run training on Amazon SageMaker](notebooks/docs/source/sagemaker.md)
* [Migrating from previous packages](notebooks/docs/source/migration.md)
* [contributing](notebooks/docs/source/contributing.md)
* [Community](notebooks/docs/source/community.md)
* [examples](notebooks/docs/source/examples.md)
* [templates](notebooks/templates/README.md)
* [Using cookiecutter to generate models](notebooks/templates/adding_a_new_model/README.md)
* [README](notebooks/templates/adding_a_new_model/open_model_proposals/README.md)
* [How to add BigBird to 🤗 Transformers?](notebooks/templates/adding_a_new_model/open_model_proposals/add_big_bird.md)
* [ADD\_NEW\_MODEL\_PROPOSAL\_TEMPLATE](notebooks/templates/adding_a_new_model/add_new_model_proposal_template.md)
* [How to add a new example script in 🤗 Transformers](notebooks/templates/adding_a_new_example_script.md)
* [Untitled](untitled.md)
* [Git Repo](https://github.com/alexandriatwo)
| 71.635135 | 134 | 0.757404 | yue_Hant | 0.509978 |
c14f064d73d485dcd738349ca0474bbdd6a5b465 | 743 | md | Markdown | modules/css/css-fonts/index.md | byui-cit/learning-modules | 51608ae32374f80c268ea0d6523f570674728ebf | [
"MIT"
] | 1 | 2021-10-19T15:53:32.000Z | 2021-10-19T15:53:32.000Z | modules/css/css-fonts/index.md | byui-cit/learning-modules | 51608ae32374f80c268ea0d6523f570674728ebf | [
"MIT"
] | null | null | null | modules/css/css-fonts/index.md | byui-cit/learning-modules | 51608ae32374f80c268ea0d6523f570674728ebf | [
"MIT"
] | 1 | 2021-11-01T22:22:00.000Z | 2021-11-01T22:22:00.000Z | ---
title: CSS - Fonts
description: Using fonts with CSS
date: 2021-11-17
order: 2
tags:
- css
- fonts
layout: layouts/post.njk
---
## Description
Fonts are used to style the text on your web page. The browser assigns a default font style to all the text on your page, but if you want to use a different font, this learning module will show you how to do that.
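For example, a page can load a custom web font and fall back to system fonts if the download fails (the font name and file path below are placeholders, not assets provided by this module):

```css
/* Load a custom web font; the name and path are hypothetical */
@font-face {
  font-family: "MyHeadingFont";
  src: url("fonts/my-heading-font.woff2") format("woff2");
}

/* Use it with a fallback stack in case the download fails */
h1 {
  font-family: "MyHeadingFont", Georgia, serif;
  font-size: 2rem;
}
```

The fallback stack matters: if the custom font fails to load, the browser moves on to the next family in the list.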
## Where this knowledge is utilized
- WDD 130
- WDD 230
- WDD 330
- WDD 430
- CSE 340
## Prepare
- [MDN Web fonts](https://developer.mozilla.org/en-US/docs/Learn/CSS/Styling_text/Web_fonts)
- [MDN font properties](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Fonts)
- [Overview of using Fonts](prepare1)
## Ponder
- [Practice with Fonts](ponder1/)
| 21.228571 | 225 | 0.722746 | eng_Latn | 0.823915 |
c14f162626bf4002188d8fee016e2c9301f51ca3 | 767 | md | Markdown | _posts/2017-11-10-about-product-manager.md | sherazhang/sherazhang.github.io | 04c3765ee7cd2bd64599ee6f30755fb1c5ba8fb5 | [
"Apache-2.0"
] | null | null | null | _posts/2017-11-10-about-product-manager.md | sherazhang/sherazhang.github.io | 04c3765ee7cd2bd64599ee6f30755fb1c5ba8fb5 | [
"Apache-2.0"
] | null | null | null | _posts/2017-11-10-about-product-manager.md | sherazhang/sherazhang.github.io | 04c3765ee7cd2bd64599ee6f30755fb1c5ba8fb5 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: On honing the core skills of a product manager
date: 2017-11-10
categories: blog
tags: [product]
description: Every inch of progress brings its own inch of joy.
---
### How to do it

* App Store
    * The newest and hottest apps are right on the front page
    * Browse the top 20 apps in each category; you don't need to download them all
* Startups
    * Twenty angel-round and Series A companies, and not only domestic ones
    * Search the news about them
* Listed companies
    * Watch the stock price on Xueqiu; a Baidu search also surfaces some of their real-time news
    * Read the financial reports

### Which sites to read every day

* New, trend-oriented things
    * 36Kr, TMTPost (钛媒体), Huxiu (虎嗅), etc.
    * TechCrunch, Fast Company, Engadget
* Big-company news
    * TechWeb, Xueqiu, Weibo
* Financial media
    * Ttchinese
* Communities
    * 开智 (OpenMindClub), some BAT alumni groups, the Weibo and WeChat accounts of well-known product managers, etc.

### Subject knowledge and how to apply it

* Logic
* Written expression
* Verbal expression
* Technology 1: basic computer knowledge
* Technology 2: the technology needed to walk through the full implementation of your own product
* Statistics: the ability to sketch prototype mockups in PS
* Data analysis: working with Excel
* Finance: budgeting
* Industry knowledge: learn on the job, attend industry conferences, and so on

### Product = information architecture + demand market

* A competitive information architecture is, to some degree, a crafted work of art
    * Draw out the product's skeleton with a mind map
    * Construct and deconstruct its information
    * Craftsmanship, down to caring about one small button
    * Copy that is humorous and self-aware
* Pick the demand market accurately, then get to know it inside out
    * Don't copy; know who your target users are and what they really need
    * "Knowing it inside out" means coverage, and coverage must come with metrics
    * Within a specific window of time, use your advantages to capture that market

*Source: shared by 陆易斯 in the 开智 community*
| 13.696429 | 35 | 0.687093 | yue_Hant | 0.755007 |
c14fb0a99eae2cf7c42b7ecb5e9a3e56c4a30443 | 6,071 | md | Markdown | README.md | zimond/tiny-skia | 682d24b277cb348fd1255c95f9c8884ff16cc0fd | [
"BSD-3-Clause"
] | null | null | null | README.md | zimond/tiny-skia | 682d24b277cb348fd1255c95f9c8884ff16cc0fd | [
"BSD-3-Clause"
] | null | null | null | README.md | zimond/tiny-skia | 682d24b277cb348fd1255c95f9c8884ff16cc0fd | [
"BSD-3-Clause"
] | 1 | 2022-01-23T16:08:33.000Z | 2022-01-23T16:08:33.000Z | # tiny-skia

[](https://crates.io/crates/tiny-skia)
[](https://docs.rs/tiny-skia)
[](https://www.rust-lang.org)
`tiny-skia` is a tiny [Skia] subset ported to Rust.
The goal is to provide an absolute minimal, CPU only, 2D rendering library for the Rust ecosystem,
with a focus on a rendering quality, speed and binary size.
And while `tiny-skia` is definitely tiny, it supports all the common 2D operations
like: filling and stroking a shape with a solid color, gradient or pattern;
stroke dashing; clipping; image blending; PNG load/save.
The main missing feature is text rendering
(see [#1](https://github.com/RazrFalcon/tiny-skia/issues/1)).
**Note:** this is not a Skia replacement and never will be. It's more of a research project.
## Motivation
The main motivation behind this library is to have a small, high-quality 2D rendering
library that can be used by [resvg]. And the choice is rather limited.
You basically have to choose between [cairo], Qt and Skia. And all of them are
relatively bloated, hard to compile and distribute. Not to mention that none of them
are written in Rust.
But if we ignore those problems and focus only on quality and speed alone,
Skia is by far the best one.
However, the main problem with Skia is that it's huge. Really huge.
It supports CPU and GPU rendering, multiple input and output formats (including SVG and PDF),
various filters, color spaces, color types and text rendering.
It consists of 370 KLOC without dependencies (around 7 MLOC with dependencies)
and requires around 4-8 GiB of disk space to be built from sources.
And the final binary is 3-8 MiB big, depending on enabled features.
Not to mention that it requires `clang` and no other compiler
and uses an obscure build system (`gn`) which still uses Python2.
`tiny-skia` tries to be small, simple and easy to build.
Currently, it has around 14 KLOC, compiles in less than 5s on a modern CPU
and adds around 200KiB to your binary.
## Performance
Currently, `tiny-skia` is 20-100% slower than Skia.
Which is still faster than [cairo] and [raqote] in many cases.
See benchmark results [here](https://razrfalcon.github.io/tiny-skia/x86_64.html).
The heart of Skia's CPU rendering is
[SkRasterPipeline](https://github.com/google/skia/blob/master/src/opts/SkRasterPipeline_opts.h).
And this is an extremely optimized piece of code.
But to be a bit pedantic, it's not really C++ code. It relies on clang's
non-standard vector extensions, which means that it works only with clang.
You can actually build it with gcc/msvc, but it will simply ignore all the optimizations
and become 15-30 *times* slower! Which makes it kinda useless.
Skia also supports ARM NEON instructions, which are unavailable in stable Rust at the moment.
Therefore a fallback scalar implementation will be used instead on ARM and other non-x86 targets.
So if you're targeting ARM, you better stick with Skia.
Also note that neither Skia nor `tiny-skia` supports dynamic CPU detection,
so by enabling newer instructions you're making the resulting binary non-portable.
Essentially, you will get a decent performance on x86 targets by default.
But if you are looking for an even better performance, you should compile your application
with `RUSTFLAGS="-Ctarget-cpu=haswell"` env variables to enable AVX instructions.
You can find more information in [benches/README.md](./benches/README.md).
## Rendering quality
Unless there is a bug, `tiny-skia` must produce exactly the same results as Skia.
## Safety
The library does not rely on unsafe code, and all pixel access is checked.
It does have a single `unsafe`
to mark a type as [bytemuck::Pod](https://docs.rs/bytemuck/1.4.1/bytemuck/trait.Pod.html),
but this is perfectly safe. And all the dangerous casts are handled by `bytemuck`.
## Out of scope
Skia is a huge library, and we support only a tiny part of it.
And more importantly, there are many features we do not plan to support at all.
- GPU rendering.
- Text rendering (maybe someday).
- PDF generation.
- Non-RGBA8888 images.
- Non-PNG image formats.
- Advanced Bézier path operations.
- Conic path segments.
- Path effects (except dashing).
- Any kind of resource caching.
- ICC profiles.
## Notable changes
Despite being a port, we still have a lot of changes even in the supported subset.
- No global alpha.<br/>
Unlike Skia, only `Pattern` is allowed to have opacity.
  In all other cases you should adjust color opacity manually.
- No bilinear + mipmap down-scaling support.
- `tiny-skia` uses just a simple alpha mask for clipping, while Skia has a very complicated,
but way faster algorithm.
## Notes about the port
`tiny-skia` should be viewed as a Rust 2D rendering library that uses Skia algorithms internally.
We have a completely different public API. The internals are also extremely simplified.
But all the core logic and math is borrowed from Skia. Hence the name.
As for the porting process itself, Skia uses goto, inheritance, virtual methods, linked lists,
const generics and template specialization a lot, and all of these features are unavailable in Rust.
There is also a lot of pointer magic, implicit mutation and caching.
Therefore we have to compromise or even rewrite some parts from scratch.
## Alternatives
Right now, the only pure Rust alternative is [raqote].
- It doesn't support high-quality antialiasing (hairline stroking in particular).
- It's very slow (see [benchmarks](./benches/README.md)).
- There are some rendering issues (like gradient transparency).
- Raqote has a very rudimentary text rendering support, while tiny-skia has none.
## License
The same as used by [Skia]: [New BSD License](./LICENSE)
[Skia]: https://skia.org/
[cairo]: https://www.cairographics.org/
[raqote]: https://github.com/jrmuizel/raqote
[resvg]: https://github.com/RazrFalcon/resvg
| 44.313869 | 100 | 0.768407 | eng_Latn | 0.994546 |
c150297cea50e16aac57b6c69e940d34f922ab7b | 2,223 | md | Markdown | README.md | IEEE-NITK/Multi-Agent-Reinforcement-Learning | 9d2240dd289cd022ef1c8fbad62d6d54220ef80a | [
"MIT"
] | 12 | 2019-04-09T14:20:25.000Z | 2022-03-31T22:28:30.000Z | README.md | IEEE-NITK/Multi-Agent-Reinforcement-Learning | 9d2240dd289cd022ef1c8fbad62d6d54220ef80a | [
"MIT"
] | 1 | 2019-01-05T16:45:28.000Z | 2019-01-05T16:45:28.000Z | README.md | IEEE-NITK/Multi-Agent-Reinforcement-Learning | 9d2240dd289cd022ef1c8fbad62d6d54220ef80a | [
"MIT"
] | 6 | 2018-10-16T08:30:03.000Z | 2021-03-31T08:52:48.000Z | # Multi-Agent Reinforcement Learning
The aim of this project is to explore Reinforcement Learning approaches to Multi-Agent System problems. Multi-agent systems pose some key challenges that are not present in single-agent problems. These challenges can be grouped into 4 categories ([Reference](https://arxiv.org/abs/1810.05587)):
* Emergent Behavior
* Learning Communication
* Learning Cooperation
* Agent Modelling
We focus on the problem of learning communication and cooperation in multi agent systems.
We also have a [blog](https://marl-ieee-nitk.github.io) with articles on several of the concepts involved in the project.
## Implementations
### Differentiable Inter Agent Learning
Run and experiment with the implementation in your browser: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/MJ10/2c0d1972f3dd1edcc3cd17c636aac8d2/dial.ipynb)
[Foerster et al., 2016](https://arxiv.org/abs/1605.06676)
This is one of the seminal works in applying Deep Reinforcement Learning for learning communication in cooperative multi-agent environments. The paper proposes two learning approaches, Reinforced Inter Agent Learning (RIAL) and Differentiable Inter Agent Learning (DIAL). We implement the DIAL approach on the Switch Riddle environment.
The implementation in this repo is structured as follows:
* [`env/switch_riddle.py`](https://github.com/IEEE-NITK/Multi-Agent-Reinforcement-Learning/blob/master/env/switch_riddle.py): Contains the implementation of the Switch Riddle environment.
* [`agent.py`](https://github.com/IEEE-NITK/Multi-Agent-Reinforcement-Learning/blob/master/agent.py): Contains the implementation of the CNet model, Discretize/Regularise Unit and the Agent itself.
* [`arena`](https://github.com/IEEE-NITK/Multi-Agent-Reinforcement-Learning/blob/master/arena.py): Contains the code for training the algorithm on the environment.
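To make the communication-learning idea concrete, here is a tiny standalone sketch of DIAL's discretise/regularise unit (DRU); this is not the code from `agent.py`, and the noise scale here is an arbitrary choice:

```python
import math
import random

def dru(message, sigma=2.0, training=True):
    """Discretise/Regularise Unit (DRU) from DIAL.

    During centralised training, the real-valued message is perturbed with
    Gaussian noise and squashed by a sigmoid, so gradients can flow across
    the communication channel between agents. During decentralised
    execution, the message is hard-thresholded to a 1-bit signal.
    """
    if training:
        noisy = message + random.gauss(0.0, sigma)
        return 1.0 / (1.0 + math.exp(-noisy))  # differentiable, in (0, 1)
    return 1.0 if message > 0.0 else 0.0       # discrete, 1 bit

# Execution-time messages are binary; training-time messages stay soft.
print(dru(4.2, training=False))   # 1.0
print(dru(-1.3, training=False))  # 0.0
```

The noise is what pushes the network to emit messages far from the sigmoid's midpoint, so the learned protocol survives discretisation at execution time.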
## Requirements
* [PyTorch](https://pytorch.org/)
## Team
* Moksh Jain
* Mahir Jain
* Madhuparna Bhowmik
* Akash Nair
Mentor: Ambareesh Prakash
## License
This repository is licensed under the [MIT License](https://github.com/IEEE-NITK/Multi-Agent-Reinforcement-Learning/blob/master/LICENSE.md). | 58.5 | 336 | 0.796221 | eng_Latn | 0.883937 |