hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
listlengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
listlengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
listlengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
48677d54d462a32ff1edb600a0ecdfdd67160fce
48
md
Markdown
core/submodules/Frontier/README.md
ssamanila-creative/exph
f7106eed0ee74a22b47a6f92938e586cd71083ac
[ "MIT" ]
null
null
null
core/submodules/Frontier/README.md
ssamanila-creative/exph
f7106eed0ee74a22b47a6f92938e586cd71083ac
[ "MIT" ]
null
null
null
core/submodules/Frontier/README.md
ssamanila-creative/exph
f7106eed0ee74a22b47a6f92938e586cd71083ac
[ "MIT" ]
2
2017-09-29T21:59:23.000Z
2017-11-05T22:53:40.000Z
## Pluma/Frontier Frontier Module for Pluma CMS
16
29
0.791667
kor_Hang
0.386962
48679a73057566a539dfe120e138bed9749a5184
544
md
Markdown
src/main/resources/assets/opencomputers/doc/ru_RU/item/drone.md
Cruor/OpenComputers
36b9ba06d771c613053b168b304821847deae550
[ "CC0-1.0" ]
1,355
2015-01-03T12:55:27.000Z
2022-03-28T15:56:15.000Z
src/main/resources/assets/opencomputers/doc/ru_RU/item/drone.md
Cruor/OpenComputers
36b9ba06d771c613053b168b304821847deae550
[ "CC0-1.0" ]
2,572
2015-01-01T09:40:52.000Z
2022-03-28T23:09:39.000Z
src/main/resources/assets/opencomputers/doc/ru_RU/item/drone.md
Cruor/OpenComputers
36b9ba06d771c613053b168b304821847deae550
[ "CC0-1.0" ]
745
2015-01-02T02:52:58.000Z
2022-03-26T01:16:07.000Z
# Дрон ![Большой брат пытается следить за тобой.](item:OpenComputers:item@84) Дроны собираются из [корпуса дрона](droneCase1.md) в [сборщике](../block/assembler.md). По сути, они являются [роботами](../block/robot.md) в виде сущностей, но с меньшим функционалом. Они также могут перемещаться по диагонали и гораздо быстрее, чем [роботы](../block/robot.md). Они обычно контролируются с помощью программы на [компьютере](../general/computer.md). Дроны могут быть сконфигурированы с помощью [EEPROM](eeprom.md) для выполнения различных комманд.
90.666667
463
0.773897
rus_Cyrl
0.978647
48680acda6a654520740ef4af3f44d916fb4f9ed
34
md
Markdown
README.md
JetBrains/rider-debug-visualizer-web-view
b195ea377e5d979c0293ab027e74b2c81fbcfe6e
[ "Apache-2.0" ]
5
2020-09-22T22:15:03.000Z
2021-11-08T09:50:11.000Z
README.md
ForNeVeR/rider-debug-visualizer-web-view
c6a1523d8dff54ba1996be288634c9c6c260367e
[ "Apache-2.0" ]
12
2020-09-08T10:56:14.000Z
2020-09-28T10:11:20.000Z
README.md
ForNeVeR/rider-debug-visualizer-web-view
c6a1523d8dff54ba1996be288634c9c6c260367e
[ "Apache-2.0" ]
4
2020-09-16T18:00:45.000Z
2022-03-18T19:58:50.000Z
# Rider Debug Visualizer Web-View
17
33
0.794118
kor_Hang
0.320453
48688926e9d529df00a6f5ef2e807fafcfca4e4a
929
md
Markdown
README.md
patiernom/GrunterProjects
53d1461d05806a5a826aafc54033ee2a6256fd15
[ "MIT" ]
null
null
null
README.md
patiernom/GrunterProjects
53d1461d05806a5a826aafc54033ee2a6256fd15
[ "MIT" ]
null
null
null
README.md
patiernom/GrunterProjects
53d1461d05806a5a826aafc54033ee2a6256fd15
[ "MIT" ]
null
null
null
# GruntProjects Module for load project from json configuration file ## Installation The easiest way is to keep `grunter-projects` as a devDependency in your `package.json`. ```json { "devDependencies": { "grunt": "~0.10", "grunter-projects": "0.0.1" }, "projectsConfig": "./config/projects.json" } ``` You can simple do it by: ```bash npm install grunter-projects --save-dev ``` ## Configuration ```js // Gruntfile.js var packageJSON = grunt.file.readJSON('package.json'); grunt.initConfig({ pkg: packageJSON, projects: require('grunterprojects')(packageJSON), // Project configuration. }); ``` ## Usage You can use it into grunt task ```js // customGruntTask.js var myDir = grunt.config('projects').projectDirectory ``` Or you can invoke grunt and run task of specific project ```bash grunt --project MyCustomProject grunt myCustomTask --project MyCustomProject ``` ---- ## License MIT
18.959184
88
0.694295
eng_Latn
0.641142
4868b7efce44a1f27c62969f65f259a26926bb69
6,208
md
Markdown
README.md
dav1d8/mysql-dbal
241a8fbd6e2d54318722358ea916f997f8ce4bd9
[ "MIT" ]
22
2018-11-30T08:51:40.000Z
2022-01-05T19:04:38.000Z
README.md
dav1d8/mysql-dbal
241a8fbd6e2d54318722358ea916f997f8ce4bd9
[ "MIT" ]
4
2019-12-06T02:41:35.000Z
2020-10-14T21:31:11.000Z
README.md
dav1d8/mysql-dbal
241a8fbd6e2d54318722358ea916f997f8ce4bd9
[ "MIT" ]
3
2019-08-29T20:48:03.000Z
2020-09-03T09:39:19.000Z
[![Build Status](https://travis-ci.org/pharako/mysql-dbal.svg?branch=master)](https://travis-ci.org/pharako/mysql-dbal) [![Latest Stable Version](https://poser.pugx.org/pharako/mysql-dbal/v/stable)](https://packagist.org/packages/pharako/mysql-dbal) [![Total Downloads](https://poser.pugx.org/pharako/mysql-dbal/downloads)](https://packagist.org/packages/pharako/mysql-dbal) [![Latest Unstable Version](https://poser.pugx.org/pharako/mysql-dbal/v/unstable)](https://packagist.org/packages/pharako/mysql-dbal) [![License](https://poser.pugx.org/pharako/mysql-dbal/license)](https://packagist.org/packages/pharako/mysql-dbal) # MySQL DBAL MySQL extensions for [Doctrine DBAL](https://github.com/doctrine/dbal). `Pharako\DBAL\Connection` is an extension of `Doctrine\DBAL\Connection`—all functionality you get from the latter is also contained in the former, with a few add-ons specific to databases compatible with MySQL: * multiple inserts * single and multiple _upserts_ (update records if they exist, insert them otherwise) ## Supported databases * MySQL * MariaDB ## Requirements PHP 7.2 and above. See the [releases page](https://github.com/pharako/mysql-dbal/releases) for previous versions that still work with PHP < 7.2. ## Installation Install via Composer: ```SHELL $ composer require pharako/mysql-dbal ``` ## Usage ### Instantiation and configuration Most PHP frameworks will have some sort of service injection functionality to help you with configuration, but nothing stops you from doing it by hand. #### Manually ```PHP use Doctrine\Common\EventManager; use Doctrine\DBAL\Configuration; use Doctrine\DBAL\Driver\PDOMySql\Driver; use Pharako\DBAL\Connection; $params = [ 'dbname' => 'my_db', 'host' => 'localhost', 'user' => 'username', 'password' => '***', 'driver' => 'pdo_mysql' ]; $dbal = new Connection( $params, new Driver(), new Configuration(), new EventManager() ); ``` #### Symfony 2 and above Just specify the DBAL connection class under `wrapper_class` in `config.yml`. All other configurations should remain the same: ```YAML doctrine: dbal: dbname: %database_name% host: %database_host% port: %database_port% user: %database_user% password: %database_password% driver: pdo_mysql wrapper_class: 'Pharako\DBAL\Connection' ``` You can read [Doctrine DBAL Configuration](http://symfony.com/doc/current/reference/configuration/doctrine.html#doctrine-dbal-configuration) for more information on `wrapper_class` and other options. ## Extra functionality Pharako's additional methods follow the structure of Doctrine's [data retrieval and manipulation](http://docs.doctrine-project.org/projects/doctrine-dbal/en/latest/reference/data-retrieval-and-manipulation.html) functionality, including [binding types](http://docs.doctrine-project.org/projects/doctrine-dbal/en/latest/reference/data-retrieval-and-manipulation.html#binding-types). ### Multiple inserts You can insert multiple records with one call—this will hit the database only once: ```PHP $data = [ [ 'name' => 'Foo', 'family_name' => 'Bar' ], [ 'name' => 'Fuzz', 'family_name' => 'Bazz' ] ]; $dbal->insert('my_table', $data); ``` Or, if you want to specify the types of the data to be inserted: ```PHP $dbal->insert('my_table', $data, [\PDO::PARAM_STR, \PDO::PARAM_STR]); ``` ### Single and multiple upserts (update if present, insert if new) Before using this functionality, make sure you read [_Careful with those upserts_](#careful-with-those-upserts) below. Building on the previous example and assuming the `name` field is a unique key in the table structure, the first two records will have their `family_name` fields updated to `Rab` and `Zabb`, respectively, and the last one will be inserted: ```PHP $data = [ [ 'name' => 'Foo', 'family_name' => 'Rab' ], [ 'name' => 'Fuzz', 'family_name' => 'Zabb' ], [ 'name' => 'New', 'family_name' => 'Foo' ] ]; $dbal->upsert('my_table', $data); ``` Again, this will hit the database only once. If you want your upsert to update only a few columns and leave all the others untouched, you can pass it an array specifying those columns: ```PHP $data = [ 'who' => 'Them', 'where' => 'There', 'when' => 'Sometime', 'why' => 'Because' ]; $dbal->upsert( 'another_table', $data, [\PDO::PARAM_STR, \PDO::PARAM_STR, \PDO::PARAM_STR, \PDO::PARAM_STR], ['where', 'when'] ); ``` In this example, if the upsert results in an update, only the `where` and `when` fields will be updated. If the upsert results in an insert, all fields will be included. #### Careful with those upserts By and large, it is safe to execute upserts against tables of varied structures—those containing a single unique index, a multi-column unique index or even multiple unique indexes. However, because upserts in MySQL are more involved than simple inserts and updates, you should **not** expect those methods to behave similarly in 100% of the cases (for example, `LAST_INSERT_ID()` in the context of an upsert may behave slightly differently than in that of an insert). That's why the [official documentation](https://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html) says that _"In general, you should try to avoid using an `ON DUPLICATE KEY UPDATE` clause on tables with multiple unique indexes"_ and _"[...] an `INSERT ... ON DUPLICATE KEY UPDATE` statement against a table having more than one unique or primary key is also marked as unsafe."_ Despite that, upserts will work just as expected but in [edge case scenarios](http://bugs.mysql.com/bug.php?id=58637). If you want to play it extra safe, though, try to **tighten your tests** and make sure you get the expected results when the upsert updates your records as well as when it inserts them. ## Development If you want to test this package from your workstation, checkout the [development environment](https://github.com/pharako/mysql-dbal-dev). Code contributions and bug reports are welcome. For pull requests, please use the `development` branch.
36.952381
623
0.714401
eng_Latn
0.919534
48690cd21f5207a6e97a4aea40d8beda3a3e837a
1,191
md
Markdown
packages/mst-queue/README.md
Yurchishin/mst-data-structure
2328191328023228b9ea9d90a7600f5cdc35c61a
[ "MIT" ]
2
2020-10-26T21:20:16.000Z
2022-03-02T14:22:15.000Z
packages/mst-queue/README.md
Yurchishin/mst-data-structure
2328191328023228b9ea9d90a7600f5cdc35c61a
[ "MIT" ]
1
2021-08-28T16:16:18.000Z
2021-08-28T18:10:56.000Z
packages/mst-queue/README.md
Yurchishin/mst-data-structure
2328191328023228b9ea9d90a7600f5cdc35c61a
[ "MIT" ]
null
null
null
## `mst-queue` ### Usage ```js import { types } from "mobx-state-tree"; import { Queue, QueueNode } from '@mst-ds/mst-queue'; const Todo = types.model('Todo', { name: types.string, active: types.boolean, }) const TodoNode = QueueNode('Todo', Todo); const rootStore = types.model('RootStore', { todos: Queue('Todo', TodoNode), }).create() rootStore.todos.enqueue('1', { name: 'name1', active: false, }) // '1' rootStore.todos.head // '1' rootStore.todos.tail // '1' rootStore.todos.enqueue('2', { name: 'name2', active: true, }) // '1,2' rootStore.todos.head // '1' rootStore.todos.tail // '2' rootStore.todos.dequeue() // '2' rootStore.todos.head // '2' ``` ### API ##### Props (LinkedList) ##### `.head` ##### `.tail` ##### `.nodes` ##### Props (LinkedListNode) ##### `.id` ##### `.value` ##### `.next` ##### Views: ##### `.getNode(id)` ##### `.get(id)` ##### `.has(id)` ##### `.values` ##### `.entries` ##### `.array` ##### `.isEmpty` ##### `.peek` ##### `.peekNode` ##### `.peekPair` ##### `.toString(callback)` ##### Actions: ##### `.enqueue(key, value)` ##### `.dequeue()` ##### Note: Library does not support `types.reference` types in node.
17.26087
57
0.549958
yue_Hant
0.300317
4869849a7afe434f1c4bc50a068a4ec0b5f715ee
751
md
Markdown
Goroutine/README.md
ovalves/golang-estudos
bb85256887ed50b5fdcce917411a8f0619f8b0fc
[ "MIT" ]
null
null
null
Goroutine/README.md
ovalves/golang-estudos
bb85256887ed50b5fdcce917411a8f0619f8b0fc
[ "MIT" ]
null
null
null
Goroutine/README.md
ovalves/golang-estudos
bb85256887ed50b5fdcce917411a8f0619f8b0fc
[ "MIT" ]
null
null
null
## Goroutine é uma execução concorrente de função. Um canal (channel) é um mecanismo de comunicação que permite que uma gorrotina passe valores de um tipo especificado a outra gorrotina. A função main executa em uma gorrotina e a instrução go cria gorrotina adicionais. A função main cria um canal de strings usando make. Para cada argumento de linha de comando, a instrução go no primeiro for/range inicia uma nova gorrotina que executa fetch assincronamente para buscar o URL usando http.Get. A função io.Copy lê o corpo da resposta e descarta-o, escrevendo no stream de saída ioutil.Discard. Copy devolve a contagem de bytes juntamente com qualquer erro ocorrido. À medida que cada resultado chega, fetch envia uma linha de resumo pelo canal ch.
107.285714
308
0.806924
por_Latn
0.999997
48698d701f491acfd3801f423ff4f422b4e3f17f
3,979
md
Markdown
articles/ecosystem/java_jdbc/using_ddl.md
kaaninho/guides
c2a5b5ec5634c37051df34121b6bd547004ab3c4
[ "CC-BY-3.0" ]
230
2015-01-07T02:49:17.000Z
2022-03-24T06:06:27.000Z
articles/ecosystem/java_jdbc/using_ddl.md
kaaninho/guides
c2a5b5ec5634c37051df34121b6bd547004ab3c4
[ "CC-BY-3.0" ]
74
2015-03-15T23:04:45.000Z
2022-02-17T05:43:48.000Z
articles/ecosystem/java_jdbc/using_ddl.md
kaaninho/guides
c2a5b5ec5634c37051df34121b6bd547004ab3c4
[ "CC-BY-3.0" ]
92
2015-01-09T08:53:00.000Z
2021-11-21T21:46:50.000Z
--- title: "Using DDL and Metadata" layout: article --- ## Contents * [Overview][overview] * [Using SQL][using-sql] * [Using DDL][using-ddl] * [Reusing Connections][reusing-connections] ## Using DDL DDL operations can be executed using the `db-do-commands` function. The general approach is: ```clojure (jdbc/db-do-commands db-spec [sql-command-1 sql-command-2 .. sql-command-n]) ``` The commands are executed as a single, batched statement, wrapped in a transaction. If you want to avoid the transaction, use this approach: ```clojure (jdbc/db-do-commands db-spec false [sql-command-1 sql-command-2 .. sql-command-n]) ``` This is necessary for some databases that do not allow DDL operations to be wrapped in a transaction. ### Creating tables For the common operations of creating and dropping tables, `java.jdbc` provides a little assistance that recognizes `:entities` so you can use keywords (or strings) and have your chosen naming strategy applied, just as you can for several of the SQL functions. ```clojure (jdbc/create-table-ddl :fruit [[:name "varchar(32)" :primary :key] [:appearance "varchar(32)"] [:cost :int] [:grade :real]] {:table-spec "ENGINE=InnoDB" :entities clojure.string/upper-case}) ``` This will generate: ```clojure CREATE TABLE FRUIT (NAME varchar(32) primary key, APPEARANCE varchar(32), COST int, GRADE real) ENGINE=InnoDB ``` which you can pass to `db-do-commands`. `create-table-ddl` also supports a `conditional?` option which can be a simple `Boolean`, which, if `true`, will add `IF NOT EXISTS` before the table name. If that syntax doesn't work for your database, you can pass a string that will be used instead. If that isn't enough, you can pass a function of two arguments: the first argument will be the table name and the second argument will be the DDL string (this approach is needed for Microsoft SQL Server). ### Dropping tables Similarly there is a `drop-table-ddl` function which takes a table name and an optional `:entities` option to generate DDL to drop a table. ```clojure (jdbc/drop-table-ddl :fruit) ; drop table fruit (jdbc/drop-table-ddl :fruit {:entities clojure.string/upper-case}) ; drop table FRUIT ``` This will generate: ```clojure DROP TABLE FRUIT ``` `drop-table-ddl` also supports a `conditional?` option which can be a simple `Boolean`, which, if `true`, will add `IF EXISTS` before the table name. If that syntax doesn't work for your database, you can pass a string that will be used instead. If that isn't enough, you can pass a function of two arguments: the first argument will be the table name and the second argument will be the DDL string (this approach is needed for Microsoft SQL Server). ## Accessing metadata `java.jdbc` provides two functions for working with database metadata: * `with-db-metadata` for creating an active metadata object backed by an open connection * `metadata-result` for turning metadata results into Clojure data structures For example: ```clojure (jdbc/with-db-metadata [md db-spec] (jdbc/metadata-result (.getTables md nil nil nil (into-array ["TABLE" "VIEW"])))) ``` This returns a sequence of maps describing all the tables and views in the current database. `metadata-result` only transforms `ResultSet` objects, other results are returned as-is. `metadata-result` can also accept an options map containing `:identifiers` and `:as-arrays?`, like the `query` function, and those options control how the metatadata is transformed and/or returned. Both `with-db-metadata` and `metadata-result` can accept an options hash map which will be passed through various `java.jdbc` functions (`get-connections` for the former and `result-set-seq` for the latter). [overview]: home.html [using-sql]: using_sql.html [using-ddl]: using_ddl.html [reusing-connections]: reusing_connections.html
33.436975
85
0.722041
eng_Latn
0.986219
48699ad16f31518d43a2d5f6c25d5ab2c4ae62d4
174
md
Markdown
content/EN/8-Blackmail/5-recovery.md
reveliant/IRM
776d7a9a09550c8929582cace9871150b6a74714
[ "CC-BY-3.0" ]
null
null
null
content/EN/8-Blackmail/5-recovery.md
reveliant/IRM
776d7a9a09550c8929582cace9871150b6a74714
[ "CC-BY-3.0" ]
null
null
null
content/EN/8-Blackmail/5-recovery.md
reveliant/IRM
776d7a9a09550c8929582cace9871150b6a74714
[ "CC-BY-3.0" ]
null
null
null
--- title: Recovery weight: 5 objective: Restore the system to normal operations. --- Notify the top management of the actions and the decision taken on the blackmail issue.
24.857143
87
0.775862
eng_Latn
0.996591
486a02dfd9bab42cb64016928129b3a1163baec7
2,712
md
Markdown
dsc/lnxSshAuthorizedKeysResource.md
I-Cat/PowerShell-Docs.de-de
17c06af567a068eea5e9ba58abca102b39b86482
[ "CC-BY-4.0", "MIT" ]
1
2019-01-16T06:05:39.000Z
2019-01-16T06:05:39.000Z
dsc/lnxSshAuthorizedKeysResource.md
I-Cat/PowerShell-Docs.de-de
17c06af567a068eea5e9ba58abca102b39b86482
[ "CC-BY-4.0", "MIT" ]
null
null
null
dsc/lnxSshAuthorizedKeysResource.md
I-Cat/PowerShell-Docs.de-de
17c06af567a068eea5e9ba58abca102b39b86482
[ "CC-BY-4.0", "MIT" ]
2
2016-10-23T13:34:36.000Z
2021-04-05T00:14:47.000Z
--- title: "DSC für Linux-Ressource „nxSshAuthorizedKeys“" ms.date: 2016-05-16 keywords: powershell,DSC description: ms.topic: article author: eslesar manager: dongill ms.prod: powershell translationtype: Human Translation ms.sourcegitcommit: a656ec981dc03fd95c5e70e2d1a2c741ee1adc9b ms.openlocfilehash: edc906b4e9c925320c4ed00c5ab295189066ccb9 --- # DSC für Linux-Ressource „nxSshAuthorizedKeys“ Die Ressource **nxAuthorizedKeys** in PowerShell DSC bietet ein Mechanismus zum Verwalten autorisierter SSH-Schlüssel für einen angegebenen Benutzer. ## Syntax ``` nxAuthorizedKeys <string> #ResourceName { KeyComment = <string> [ Ensure = <string> { Absent | Present } ] [ Username = <string> ] [ Key = <string> ] [ DependsOn = <string[]> ] } ``` ## Eigenschaften | Eigenschaft | Beschreibung | |---|---| | KeyComment| Ein eindeutiger Kommentar für den Schlüssel. Dieser wird verwendet, um Schlüssel eindeutig zu identifizieren.| | Ensure| Gibt an, ob der Schlüssel definiert ist. Legen Sie diese Eigenschaft auf „Absent“ fest, um sicherzustellen, dass der Schlüssel nicht in der Datei mit den autorisierten Schlüsseln des Benutzers vorhanden ist. Legen Sie sie auf „Present“ fest, um sicherzustellen, dass der Schlüssel in der autorisierten Schlüsseldatei des Benutzers definiert ist.| | Username| Der Benutzername, für den die autorisierten SSH-Schlüssel verwaltet werden sollen. Falls nicht definiert, ist der Standardbenutzer „root“.| | Key| Der Inhalt des Schlüssels. Ist erforderlich, wenn **Ensure** auf „Present“ festgelegt ist.| | DependsOn | Gibt an, dass die Konfiguration einer anderen Ressource ausgeführt werden muss, bevor diese Ressource konfiguriert wird. Wenn beispielsweise die **ID** des Skriptblocks mit der Ressourcenkonfiguration, den Sie zuerst ausführen möchten, **ResourceName** und dessen Typ **ResourceType** ist, lautet die Syntax für das Verwenden dieser Eigenschaft `DependsOn = "[ResourceType]ResourceName"`.| ## Beispiel Das folgende Beispiel definiert einen öffentlichen autorisierten SSH-Schlüssel für den Benutzer „monuser“. ``` Import-DSCResource -Module nx Node $node { nxSshAuthorizedKeys myKey{ KeyComment = "myKey" Ensure = "Present" Key = 'ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEA0b+0xSd07QXRifm3FXj7Pn/DblA6QI5VAkDm6OivFzj3U6qGD1VJ6AAxWPCyMl/qhtpRtxZJDu/TxD8AyZNgc8aN2CljN1hOMbBRvH2q5QPf/nCnnJRaGsrxIqZjyZdYo9ZEEzjZUuMDM5HI1LA9B99k/K6PK2Bc1NLivpu7nbtVG2tLOQs+GefsnHuetsRMwo/+c3LtwYm9M0XfkGjYVCLO4CoFuSQpvX6AB3TedUy6NZ0iuxC0kRGg1rIQTwSRcw+McLhslF0drs33fw6tYdzlLBnnzimShMuiDWiT37WqCRovRGYrGCaEFGTG2e0CN8Co8nryXkyWc6NSDNpMzw== rsa-key-20150401' UserName = "monuser" } } ``` <!--HONumber=Oct16_HO1-->
39.882353
408
0.783923
deu_Latn
0.903944
486b346ab2b3205a60f9d9789264becd465f34b1
2,443
md
Markdown
mdop/appv-v5/deploying-app-v-51.md
MicrosoftDocs/mdop-docs-pr.es-es
ecb0018684bb8ec80a63b0593348ca720d85b805
[ "CC-BY-4.0", "MIT" ]
1
2021-04-20T21:13:51.000Z
2021-04-20T21:13:51.000Z
mdop/appv-v5/deploying-app-v-51.md
MicrosoftDocs/mdop-docs-pr.es-es
ecb0018684bb8ec80a63b0593348ca720d85b805
[ "CC-BY-4.0", "MIT" ]
4
2020-07-03T03:24:26.000Z
2021-09-27T04:28:03.000Z
mdop/appv-v5/deploying-app-v-51.md
MicrosoftDocs/mdop-docs-pr.es-es
ecb0018684bb8ec80a63b0593348ca720d85b805
[ "CC-BY-4.0", "MIT" ]
3
2020-07-03T03:17:18.000Z
2021-11-04T12:30:32.000Z
--- title: Implementación de App-V 5.1 description: Implementación de App-V 5.1 author: dansimp ms.assetid: af8742bf-e24b-402a-bcf4-0f2297f26bc4 ms.reviewer: '' manager: dansimp ms.author: dansimp ms.pagetype: mdop, appcompat, virtualization ms.mktglfcycl: deploy ms.sitesec: library ms.prod: w10 ms.date: 06/16/2016 ms.openlocfilehash: 99bb1c3ec5c0af7073b8882eed721dd1ffdb24f2 ms.sourcegitcommit: 354664bc527d93f80687cd2eba70d1eea024c7c3 ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 06/26/2020 ms.locfileid: "10814639" --- # Implementación de App-V 5.1 Microsoft Application Virtualization (App-V) 5,1 admite una serie de opciones de implementación diferentes. En esta sección de la guía del administrador de la 5,1 de App-V se incluye información que debe tener en cuenta sobre la implementación de los procedimientos de App-V 5,1 y de procedimientos paso a paso que le ayudarán a realizar correctamente las tareas que debe completar en diferentes fases de la implementación. ## <a href="" id="---------app-v-5-1-deployment-information"></a> Información de implementación de App-V 5,1 - [Implementación del secuenciador y del cliente de App-V 5.1](deploying-the-app-v-51-sequencer-and-client.md) En esta sección se describe cómo instalar el secuenciador App-V 5,1, que se usa para virtualizar aplicaciones, y el cliente App-V 5,1 que se ejecuta en los equipos de destino para facilitar paquetes virtualizados. - [Implementación del servidor de App-V 5.1](deploying-the-app-v-51-server.md) Esta sección proporciona información sobre cómo instalar los servidores de administración, publicación, base de datos y creación de informes de App-V 5,1. - [Lista de comprobación de implementación de App-V 5.1](app-v-51-deployment-checklist.md) Esta sección proporciona una lista de comprobación de implementación que se puede usar para ayudarle a instalar App-V 5,1. ## Otros recursos para implementar App-V 5,1 - [Guía del administrador de Microsoft Application Virtualization 5,1](microsoft-application-virtualization-51-administrators-guide.md) - [Introducción a App-V 5.1](getting-started-with-app-v-51.md) - [Planificación de App-V 5.1](planning-for-app-v-51.md) - [Operaciones de App-V 5.1](operations-for-app-v-51.md) - [Resolución de problemas de App-V 5.1](troubleshooting-app-v-51.md) - [Referencia técnica de App-V 5.1](technical-reference-for-app-v-51.md)    
35.405797
423
0.767908
spa_Latn
0.861906
486b8dd3dcbc124fdb7fd2e1fe95c1a7b14859fd
11,595
md
Markdown
articles/stream-analytics/stream-analytics-machine-learning-integration-tutorial.md
AftabAnsari10662/azure-docs
3ae9482a310e5e507f6d04495df352cc965aa8d1
[ "CC-BY-3.0" ]
1
2019-06-02T17:00:22.000Z
2019-06-02T17:00:22.000Z
articles/stream-analytics/stream-analytics-machine-learning-integration-tutorial.md
AftabAnsari10662/azure-docs
3ae9482a310e5e507f6d04495df352cc965aa8d1
[ "CC-BY-3.0" ]
null
null
null
articles/stream-analytics/stream-analytics-machine-learning-integration-tutorial.md
AftabAnsari10662/azure-docs
3ae9482a310e5e507f6d04495df352cc965aa8d1
[ "CC-BY-3.0" ]
null
null
null
--- title: Azure Stream Analytics and Machine Learning integration | Microsoft Docs description: How to use a user-defined function and Machine Learning in a Stream Analytics job keywords: '' documentationcenter: '' services: stream-analytics author: jeffstokes72 manager: jhubbard editor: cgronlun ms.assetid: cfced01f-ccaa-4bc6-81e2-c03d1470a7a2 ms.service: stream-analytics ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: data-services ms.date: 02/14/2017 ms.author: jeffstok --- # Sentiment analysis by using Azure Stream Analytics and Azure Machine Learning This article is designed to help you quickly set up a simple Azure Stream Analytics job, with Azure Machine Learning integration. We will use a sentiment analytics Machine Learning model from the Cortana Intelligence Gallery to analyze streaming text data, and determine the sentiment score in real time. The information in this article can help you understand scenarios such as real-time sentiment analytics on streaming Twitter data, analyze records of customer chats with support staff, and evaluate comments on forums, blogs, and videos, in addition to many other real-time, predictive scoring scenarios. This article offers a sample CSV file with text as input in Azure Blob storage, shown in the following image. The job applies the sentiment analytics model as a user-defined function (UDF) on the sample text data from the blob store. The end result is placed in the same blob store in another CSV file. ![Stream Analytics Machine Learning](./media/stream-analytics-machine-learning-integration-tutorial/stream-analytics-machine-learning-integration-tutorial-figure-2.png) The following image demonstrates this configuration. For a more realistic scenario, you can replace Blob storage with streaming Twitter data from an Azure Event Hubs input. Additionally, you could build a [Microsoft Power BI](https://powerbi.microsoft.com/) real-time visualization of the aggregate sentiment. ![Stream Analytics Machine Learning](./media/stream-analytics-machine-learning-integration-tutorial/stream-analytics-machine-learning-integration-tutorial-figure-1.png) ## Prerequisites The prerequisites for completing the tasks that are demonstrated in this article are as follows: * An active Azure subscription. * A CSV file with some data in it. You can download the file shown in Figure 1 from [GitHub](https://github.com/Azure/azure-stream-analytics/blob/master/Sample Data/sampleinput.csv), or you can create your own file. For this article, we assume that you use the one provided for download on GitHub. At a high level, to complete the tasks demonstrated in this article, you'll do the following: 1. Upload the CSV input file to Azure Blob storage. 2. Add a sentiment analytics model from the Cortana Intelligence Gallery to your Azure Machine Learning workspace. 3. Deploy this model as a web service in the Machine Learning workspace. 4. Create a Stream Analytics job that calls this web service as a function, to determine sentiment for the text input. 5. Start the Stream Analytics job and observe the output. ## Create a storage blob and upload the CSV input file For this step, you can use any CSV file, such as the one already specified as available for download on GitHub. Uploading the csv file is simple as it is an option included in creating a storage blob. For our tutorial, create a new storage account by clicking **New** and then searching for 'storage account' and then selecting the resulting icon for storage account and providing details for the creation of the account. Provide a **Name** (azuresamldemosa in my example), create or use an existing **Resource group** and specify a **Location** (for location, it is important that all the resources created in this demo all use the same location if possible). ![create storage account](./media/stream-analytics-machine-learning-integration-tutorial/create-sa.png) Once that is completed you can click on Blob service and create a blob container. ![create blob container](./media/stream-analytics-machine-learning-integration-tutorial/create-sa2.png) Then provide a **Name** for the container (azuresamldemoblob in my example) and verify the **Access type** is set to 'blob'. ![create blob access type](./media/stream-analytics-machine-learning-integration-tutorial/create-sa3.png) Now we can populate the blob with our data. Select **Files** and then select the file on your local drive that you downloaded from GitHub. I selected Block blob and 4 MB as a size these should be fine for this demonstration. Then select **Upload** and the portal will create a blob with the text sample for you. ![create blob upload file](./media/stream-analytics-machine-learning-integration-tutorial/create-sa4.png) Now that the sample data is in a blob it is time to enable the sentiment analysis model in Cortana Intelligence Gallery. ## Add the sentiment analytics model from the Cortana Intelligence Gallery 1. Download the [predictive sentiment analytics model](https://gallery.cortanaintelligence.com/Experiment/Predictive-Mini-Twitter-sentiment-analysis-Experiment-1) from the Cortana Intelligence Gallery. 2. In Machine Learning Studio, select **Open in Studio**. ![Stream Analytics Machine Learning, open Machine Learning Studio](./media/stream-analytics-machine-learning-integration-tutorial/stream-analytics-machine-learning-integration-tutorial-open-ml-studio.png) 3. Sign in to go to the workspace. Select the location that best suits your own location. 4. Click **Run** at the bottom of the page. 5. After the process runs successfully, select **Deploy Web Service**. 6. The sentiment analytics model is ready to use. To validate, select the **Test** button and provide text input, such as, “I love Microsoft.” The test should return a result similar to the following: `'Predictive Mini Twitter sentiment analysis Experiment' test returned ["4","0.715057671070099"]...` ![Stream Analytics Machine Learning, analysis data](./media/stream-analytics-machine-learning-integration-tutorial/stream-analytics-machine-learning-integration-tutorial-analysis-data.png) In the **Apps** column, select the link for **Excel 2010 or earlier workbook** to get your API key and the URL that you’ll need later to set up the Stream Analytics job. (This step is required only to use a Machine Learning model from another Azure account workspace. This article assumes this is the case, to address that scenario.) Note the web service URL and access key from the downloaded Excel file, as shown below: ![Stream Analytics Machine Learning, quick glance](./media/stream-analytics-machine-learning-integration-tutorial/stream-analytics-machine-learning-integration-tutorial-quick-glance.png) ## Create a Stream Analytics job that uses the Machine Learning model 1. Go to the [Azure portal](https://portal.azure.com). 2. Click **New** > **Intelligence + analytics** > **Stream Analytics**. Enter a name for your job in **Job name**, specify an existing resource group or create a new one as required, and enter the appropriate location for the job in the **Location** field. ![create job](./media/stream-analytics-machine-learning-integration-tutorial/create-job-1.png) 3. After the job is created, on the **Inputs** tab, select **Add an Input**. ![Stream Analytics Machine Learning, add Machine Learning input](./media/stream-analytics-machine-learning-integration-tutorial/create-job-add-input.png) 4. Select **Add** and then specify an **Input alias**, select **Data stream**, **Blob Storage** as the input, and then select **Next**. 5. On the **Blob Storage Settings** page of the wizard, provide the storage account blob container name you defined earlier when you uploaded the data. Click **Next**. For **Event Serialization Format**, select **CSV**. Accept the default values for the rest of the **Serialization settings** page. Click **OK**. ![add input blob container](./media/stream-analytics-machine-learning-integration-tutorial/create-job-add-input-blob.png) 6. On the **Outputs** tab, select **Add an Output**. ![Stream Analytics Machine Learning, add output](./media/stream-analytics-machine-learning-integration-tutorial/create-output.png) 7. Click **Blob Storage**, and then enter the same parameters, except for the container. The value for **Input** was configured to read from the container named “test” where the **CSV** file was uploaded. For **Output**, enter “testoutput”. 8. Validate that the output’s **Serialization settings** are set to **CSV**, and then select the **OK** button. ![Stream Analytics Machine Learning, add output](./media/stream-analytics-machine-learning-integration-tutorial/create-output2.png) 9. On the **Functions** tab, select **Add a Machine Learning Function**. ![Stream Analytics Machine Learning, add Machine Learning function](./media/stream-analytics-machine-learning-integration-tutorial/add-function.png) 10. On the **Machine Learning Web Service Settings** page, locate the Machine Learning workspace, web service, and default endpoint. For this article, apply the settings manually to gain familiarity with configuring a web service for any workspace, as long as you know the URL and have the API key. Enter the endpoint **URL** and **API key**. Click **OK**. Note that the **Function Alias** is 'sentiment'. ![Stream Analytics Machine Learning, Machine Learning web service](./media/stream-analytics-machine-learning-integration-tutorial/add-function-endpoints.png) 11. On the **Query** tab, modify the query as follows: ``` WITH sentiment AS ( SELECT text, sentiment(text) as result from datainput ) Select text, result.[Scored Labels] Into testoutput From sentiment ``` 12. Click **Save** to save the query. ## Start the Stream Analytics job and observe the output 1. Click **Start** at the top of the job page. 2. On the **Start Query Dialog**, select **Custom Time**, and then select one day prior to when you uploaded the CSV to Blob storage. Click **OK**. 3. Go to the Blob storage by using the tool you used to upload the CSV file, for example, Visual Studio. 4. A few minutes after the job is started, the output container is created and a CSV file is uploaded to it. 5. Open the file in the default CSV editor. Something similar to the following should be displayed: ![Stream Analytics Machine Learning, CSV view](./media/stream-analytics-machine-learning-integration-tutorial/stream-analytics-machine-learning-integration-tutorial-csv-view.png) ## Conclusion This article demonstrates how to create a Stream Analytics job that reads streaming text data and applies sentiment analytics to the data in real time. You can accomplish all of this without needing to worry about the intricacies of building a sentiment analytics model. This is one of the advantages of the Cortana Intelligence Suite. You also can view Azure Machine Learning function-related metrics. To do this, select the **Monitor** tab. Three function-related metrics are displayed. * **Function Requests** indicates the number of requests sent to a Machine Learning web service. * **Function Events** indicates the number of events in the request. By default, each request to a Machine Learning web service contains up to 1,000 events. ![Stream Analytics Machine Learning, Machine Learning monitor view](./media/stream-analytics-machine-learning-integration-tutorial/job-monitor.png)
75.784314
608
0.771971
eng_Latn
0.980619
486d3e0dbb92294c1304af2ae10eeb1b8094f91b
9,444
md
Markdown
content/ru/tutorials/custom-shaders.md
zyfdoudou/developer.playcanvas.com
6c1cfd603f4b1b6fc720be6b841c02f22c635703
[ "MIT" ]
1
2019-08-26T07:36:42.000Z
2019-08-26T07:36:42.000Z
content/ru/tutorials/custom-shaders.md
kinvix/developer.playcanvas.com
679ac644e63140cc6251695fa057d56f283cf4ba
[ "MIT" ]
null
null
null
content/ru/tutorials/custom-shaders.md
kinvix/developer.playcanvas.com
679ac644e63140cc6251695fa057d56f283cf4ba
[ "MIT" ]
null
null
null
--- title: Пользовательские шейдеры template: tutorial-page.tmpl.html tags: shaders, materials thumb: https://s3-eu-west-1.amazonaws.com/images.playcanvas.com/projects/12/406044/4J2JX2-image-75.jpg --- <iframe src="https://playcanv.as/p/zwvhLoS9/" allowfullscreen></iframe> *Этот урок использует пользовательские шейдеры на материале чтобы создать эффект растворения на GLSL* Когда вы импортируете ваши 3D-модели в PlayCanvas, по умолчанию, они используют [Физический материал][3]. Это материал общего назначения, который может покрыть большую часть ваших нужд. Однако, вам часто может потребоваться добавить особые эффекты или особые действия к вашему материалу. Чтобы сделать это, вам нужно написать шейдер. ## Шейдеры и их объявления WebGL использует язык GLSL для написания шейдеров, которые могут работать во всех браузерах. В PlayCanvas вы можете хранить код в ресурсе шейдера, а потом подключить его в [Объявлении шейдера][1], до того, как создать новый `pc.Shader`. ## Вершинный шейдер ~~~ attribute vec3 aPosition; attribute vec2 aUv0; uniform mat4 matrix_model; uniform mat4 matrix_viewProjection; varying vec2 vUv0; void main(void) { vUv0 = aUv0; gl_Position = matrix_viewProjection * matrix_model * vec4(aPosition, 1.0); } ~~~ ### Фрагментный (пиксельный) шейдер ~~~ varying vec2 vUv0; uniform sampler2D uDiffuseMap; uniform sampler2D uHeightMap; uniform float uTime; void main(void) { float height = texture2D(uHeightMap, vUv0).r; vec4 color = texture2D(uDiffuseMap, vUv0); if (height < uTime) { discard; } if (height < (uTime+0.04)) { color = vec4(0, 0.2, 1, 1.0); } gl_FragColor = color; } ~~~ Эти два шейдера выше описывают функционал нового материала. В вершинном шейдере мы трансформируем позиции вершин модели в пространство экрана. В фрагментном шейдере мы устанавливает цвет пикселей. Цвет пикселей выбирает на основе двух текстур, которые мы передаем через ресурсы. Если значение uTime меньше, чем цвет в карте высот, тогда мы не отображаем никаких пикселей (модель невидима). Если значение uTime больше, чем значение в карте высот, то мы получаем цвет из карты цвета, которую мы так же используем ### Объявление шейдера ```javascript var vertexShader = this.vs.resource; // Динамически указываем точность, в зависимости от устройства var fragmentShader = "precision " + gd.precision + " float;\n"; fragmentShader = fragmentShader + this.fs.resource; // Объявление шейдера, для создания нового var shaderDefinition = { attributes: { aPosition: pc.gfx.SEMANTIC_POSITION, aUv0: pc.gfx.SEMANTIC_TEXCOORD0 }, vshader: vertexShader, fshader: fragmentShader }; ``` Объявление шейдера содержит две секции. в `атрибутах` вы должны указать переменные и значения атрибутов, которые будут объявлены в для каждой вершины, для которой будет выполнен вершинный шейдер. Эти значения после будут объявлены в вашем вершинном шейдере как `атрибут`. Код вершинного шейдера отправляется как строка в свойство `vshader`, а фрагментный шейдер в свойство `fshader`. Выше - объявление шейдера, который делает эффект растворения. Заметьте, что мы получаем код шейдеров из двух ресурсов. Эти ресурсы подставляются через [атрибуты скрипта][2], которые упрощают доступ к ресурсам из скрипта. Поодаль от атрибутов, мы видим два специальных типа переменных в GLSL шейдере: `varying` и `uniform` ## GLSL переменные типа `varying` Переменные, которые объявлены как **varying** будут установлены в вершинный шейдер, но будут использоваться в фрагментном. Это способ передать дату из первой программы во вторую. ## GLSL переменные типа `uniform` Переменные, объявленые как **`uniform`** будут доступны в обоих шейдерах. Значение этих переменных передается в шейдер из основой программы. Например, позиция света в сцене. ## Создание материалов ~~~javascript // Создание материала из объявления this.shader = new pc.Shader(gd, shaderDefinition); // Создание материала и установка шейдера this.material = new pc.Material(); this.material.setShader(this.shader); // Установка изначального параметра uTime this.material.setParameter('uTime', 0); // Добавление карты цвета this.material.setParameter('uDiffuseMap', diffuseTexture); // Используем текстуру облаков, как карту высот this.material.setParameter('uHeightMap', heightTexture); // Заменяем материал на новый model.meshInstances[0].material = this.material; ~~~ Когда мы объявили шейдер, мы создаем новый объект Shader и новый Material. Далее мы устанавливаем шейдер к материалу, используя `setShader()`. Переменные типа uniform инициализируются через метод `setParameter()`. В конце мы заменяем оригинальный материал модели новым, который мы создали. Заметьте, что каждая фигура в модели имеет свой собственный материал. Если ваша модель содержит больше, чем одну фигуру, вам может потребоваться установить материал и на них тоже. Вы можете (и должны) использовать один материал больше, чем на одной фигуре. ## Использование текстуры в новом материале ~~~javascript var diffuseTexture = this.app.assets.get(this.diffuseMap).resource; //... this.material.setParameter('uDiffuseMap', diffuseTexture); ~~~ Эффект, демонстрируемый в этом уроке, достигается использованием карты висок. Мы получаем доступ к текстуре через ресурс, используя код выше. В верху нашего скрипта мы объявили атрибут скрипта 'карты', который дает нам возможность установить текстуру из редактора PlayCanvas. ~~~javascript CustomShader.attributes.add('vs', { type: 'asset', assetType: 'shader', title: 'Vertex Shader' }); CustomShader.attributes.add('fs', { type: 'asset', assetType: 'shader', title: 'Fragment Shader' }); CustomShader.attributes.add('diffuseMap', { type: 'asset', assetType: 'texture', title: 'Diffuse Map' }); CustomShader.attributes.add('heightMap', { type: 'asset', assetType: 'texture', title: 'Height Map' }); ~~~ Когда карта текстур загружена, мы можем добавить переменную типа uniform, которая будет называться 'uHeightMap' к объекту 'pc.Texture'. ## Обновление переменных uniform ~~~javascript // Обновление кода происходит каждый кадр CustomShader.prototype.update = function(dt) { this.time += dt; // Меняем значение 0 > 1 > 0 var t = (this.time % 2); if (t > 1) { t = 1 - (t - 1); } // Обновляем параметр на материале this.material.setParameter('uTime', t); }; ~~~ Чтобы достичь эффекта исчезания мы используем значения карты высок как порог и мы увеличиваем порог со временем. В методе обновления, мы изменяем значение 't' между 0 и 1 и устанавливаем его в переменную `uTime`. В нашем шейдере, если значение карты высот на пикселе меньше, чем значения кремени, мы не показываем пиксель. Помимо этого, когда значение близко к порогу, мы показываем пиксель в синем цвете, чтобы показывать клевый эффект грани. ## Полный код ~~~javascript var CustomShader = pc.createScript('customShader'); CustomShader.attributes.add('vs', { type: 'asset', assetType: 'shader', title: 'Vertex Shader' }); CustomShader.attributes.add('fs', { type: 'asset', assetType: 'shader', title: 'Fragment Shader' }); CustomShader.attributes.add('diffuseMap', { type: 'asset', assetType: 'texture', title: 'Diffuse Map' }); CustomShader.attributes.add('heightMap', { type: 'asset', assetType: 'texture', title: 'Height Map' }); // Инициализация для каждой модели CustomShader.prototype.initialize = function() { this.time = 0; var app = this.app; var model = this.entity.model.model; var gd = app.graphicsDevice; var diffuseTexture = this.diffuseMap.resource; var heightTexture = this.heightMap.resource; var vertexShader = this.vs.resource; var fragmentShader = "precision " + gd.precision + " float;\n"; fragmentShader = fragmentShader + this.fs.resource; // Объявление шейдера var shaderDefinition = { attributes: { aPosition: pc.SEMANTIC_POSITION, aUv0: pc.SEMANTIC_TEXCOORD0 }, vshader: vertexShader, fshader: fragmentShader }; // Создание шейдера из объявления this.shader = new pc.Shader(gd, shaderDefinition); // Создание нового материала this.material = new pc.Material(); this.material.setShader(this.shader); // Установка изначального параметра времени this.material.setParameter('uTime', 0); // Установка текстуры цвета this.material.setParameter('uDiffuseMap', diffuseTexture); // Используем текстуру облаков как карту высот this.material.setParameter('uHeightMap', heightTexture); // Заменяем материал на наш новый model.meshInstances[0].material = this.material; }; // Обновление каждый кадр CustomShader.prototype.update = function(dt) { this.time += dt; // Изменяем значение 0->1->0 var t = (this.time % 2); if (t > 1) { t = 1 - (t - 1); } // Обновляем значение в материале this.material.setParameter('uTime', t); }; ~~~ Это весь скрипт. Запомните, вам будет нужно создавать вершинные и фрагментные ресурсы шейдеров во время работы. Мы оставляем это как упражнение для читателя. Реализуйте шейдер, который будет реализовывать этот эффект на несколько моделей и материалах. [1]: /engine/api/stable/symbols/pc.Shader.html [2]: /user-manual/scripting/script-attributes/ [3]: /user-manual/graphics/physical-rendering/physical-materials/
33.489362
510
0.736341
rus_Cyrl
0.881178
486dcd829666d2927860528e659c9d7a151935d7
107
md
Markdown
README.md
sensimevanidus/django-invitely
d41274e227111ba33fc651787be3ffff234f96c1
[ "MIT" ]
null
null
null
README.md
sensimevanidus/django-invitely
d41274e227111ba33fc651787be3ffff234f96c1
[ "MIT" ]
null
null
null
README.md
sensimevanidus/django-invitely
d41274e227111ba33fc651787be3ffff234f96c1
[ "MIT" ]
null
null
null
django-invitely =============== A simple django application used for handling cross-platform invitations
21.4
73
0.719626
eng_Latn
0.841207
486e35efb2c9c1bbe13c116704708dd701fea28a
79
md
Markdown
README.md
Santana9937/language-translation
0031544ed53b26c3bc0f8c73b817fbcb0889fdf8
[ "MIT" ]
null
null
null
README.md
Santana9937/language-translation
0031544ed53b26c3bc0f8c73b817fbcb0889fdf8
[ "MIT" ]
null
null
null
README.md
Santana9937/language-translation
0031544ed53b26c3bc0f8c73b817fbcb0889fdf8
[ "MIT" ]
null
null
null
# language-translation Fourth Assignment for Udacity Deep Learning Foundations
26.333333
55
0.860759
eng_Latn
0.976519
486e51d9967a040d523965d04503e147f4316710
118
md
Markdown
CHANGELOG.md
fuzhutech/fuzhutech-mybatis-generator
4286bfd264e6764eb0b41ce4e5ec1d39166633ce
[ "MIT" ]
null
null
null
CHANGELOG.md
fuzhutech/fuzhutech-mybatis-generator
4286bfd264e6764eb0b41ce4e5ec1d39166633ce
[ "MIT" ]
null
null
null
CHANGELOG.md
fuzhutech/fuzhutech-mybatis-generator
4286bfd264e6764eb0b41ce4e5ec1d39166633ce
[ "MIT" ]
null
null
null
# ChangeLog ## 1.0.0 ### 新特性 * 继承mybatis-generator进行扩展,以支持自动生成fuzhutech模式的相关model、mapper、xml mapper ### Bug修复 * 无
10.727273
70
0.711864
kor_Hang
0.118717
486efda3f81702a862cb6bb22d84e7e6fc4a1c29
2,301
md
Markdown
src/af/2020-03/06/07.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/af/2020-03/06/07.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/af/2020-03/06/07.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
--- title: Vir verdere studie date: 07/08/2020 --- Lees Ellen G. White se “Talents” op bl. 325-365 in Christ’s Object Lessons. ’n Regte begrip oor die bybelse leer aangaande die gawes van die Gees bring eenheid in die kerk. Die besef dat elkeen van ons ’n waardevolle lid in die liggaam van Christus is, waarsonder die kerk nie kan klaarkom nie, is ’n gedagte wat gelowiges saamsnoer. Elke kerklid word benodig ter uitvoering van die groot opdrag wat Christus ons gegee het. Elke lid het ’n gawe (of gawes) vir naastediens ontvang. “Aan elkeen word iets gegee om vir die Meester te doen. Aan elkeen wat Hom dien, word ’n spesiale gawe of talent toevertrou. ‘En aan die een gee hy vyf talente en aan die ander twee en aan die ander een, aan elkeen na sy vermoë.’ Elkeen wat die Here dien, het iets wat aan hom [of haar] toevertrou is. Waartoe jy in staat is, bepaal die grootte van daardie gawe wat aan jou toevertrou word. Met die uitdeel van gawes het God nie party gelowiges voorgetrek nie. Hy het [bloot] gawes toegeken volgens die dinge waartoe elkeen in staat is, en Hy verwag dienooreenkomstige resultate” (Ellen G. White, Testimonies for the Church, vol. 2, bl. 282). Neem ook in ag dat die gawes van die Gees tot eer van die Here, en nie onsself nie, gegee word. Hy skenk dit ter verheerliking van sy Naam en ter bevordering van sy verlossingsending. **Vrae vir bespreking**: `1. Bepeins die feit dat elkeen van ons gawes van die Here ontvang het. Watter praktiese implikasies het dit vir die gemeente waarin jy is? Watter verskil maak hierdie gedagte ten opsigte van elke lid se betrokkenheid by naastediens en persoonlike bediening?` `2. Vertel vir die res van die klas hoe ’n ander lid se gawe (of gawes) jou al tot seën was. Praat ook ’n bietjie oor jou eie gawes van die Gees – hoe jy dit ontdek het, watter gawes jy meen jy het, en ook hoe jy dit tot seën van ander gebruik. ` `3. Hierdie les het daarop gewys dat gawes kan “vermeerder” namate ’n mens daarmee woeker. Doen selfondersoek: kan jy aan enige gawes dink waarmee jy ter ere van sy Naam gewoeker het, talente wat nou vermeerder het? Vra jou egter ook die volgende af (’n vraag wat ons inderdaad aan die einde van Donderdag se les die eerste keer aangeroer het): hoe getrou is ek met dit wat God aan my toevertrou het? `
164.357143
1,307
0.771838
afr_Latn
0.999987
486fb831ed7cfaa72c2ee4296e54d06b83d46280
12,272
md
Markdown
articles/media-services/azure-media-player/azure-media-player-feature-list.md
ialeksander1/azure-docs.pt-br
d5a7a2c2d4a31282f49bd1e35036cb1939911974
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/azure-media-player/azure-media-player-feature-list.md
ialeksander1/azure-docs.pt-br
d5a7a2c2d4a31282f49bd1e35036cb1939911974
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/azure-media-player/azure-media-player-feature-list.md
ialeksander1/azure-docs.pt-br
d5a7a2c2d4a31282f49bd1e35036cb1939911974
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Lista de recursos do Azure Media Player description: Uma referência de recurso para o Azure Media Player. author: IngridAtMicrosoft ms.author: inhenkel ms.service: media-services ms.topic: reference ms.date: 04/20/2020 ms.openlocfilehash: e5595620a2f888b06ad5b35d2e8a008f23861463 ms.sourcegitcommit: acb82fc770128234f2e9222939826e3ade3a2a28 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 04/21/2020 ms.locfileid: "81727223" --- # <a name="feature-list"></a>Lista de recursos # Aqui está a lista de recursos testados e recursos não suportados: | | Testado | PARCIALMENTE TESTADO | Testado | Unsupported | OBSERVAÇÕES | |:----------------------------------------|--------|------------------|----------|-------------|:---------------------------------------------------------------------------------------------------------------------| | Reprodução | | | | | | | Reprodução básica sob demanda | X | | | | Suporta fluxos apenas do Azure Media Services | | Reprodução básica ao vivo | X | | | | Suporta fluxos apenas do Azure Media Services | | AES | X | | | | Suporta o serviço de entrega chave dos serviços de mídia do Azure | | Multi-DRM | | X | | | | | PlayReady | X | | | | Suporta o serviço de entrega chave dos serviços de mídia do Azure | | Widevine | | X | | | Suporta caixas PSSH Widevine descritas em manifesto | | FairPlay | | X | | | Suporta o serviço de entrega chave dos serviços de mídia do Azure | | Técnicos | | | | | | | MSE/EME (AzureHtml5JS) | X | | | | | | Flash Fallback (FlashSS) | X | | | | Nem todos os recursos estão disponíveis nesta tecnologia. | | Silverlight Fallback SilverlightSS | X | | | | Nem todos os recursos estão disponíveis nesta tecnologia. | | Passe do HLS nativo (Html5) | | X | | | Nem todos os recursos estão disponíveis nesta tecnologia devido às restrições da plataforma. | | Recursos | | | | | | | Suporte a API | X | | | | Ver lista de problemas conhecidos | | UI básica | X | | | | | Inicialização através de JavaScript | X | | | | | | Inicialização através de tag de vídeo | | X | | | | | Endereçamento de segmento - Baseado em Tempo | X | | | | | | Endereçamento de segmento - Baseado em Índice | | | | X | | | Endereçamento de segmento - Byte Based | | | | X | | | Reescritor de URL do Azure Media Services | | X | | | | | Acessibilidade - Legendas e Legendas | | X | | | WebVTT suportado por demanda, CEA 708 ao vivo parcialmente testado | | Acessibilidade - Hotkeys | X | | | | | | Acessibilidade - Alto Contraste | | X | | | | | Acessibilidade - Foco em Guias | | X | | | | | Mensagens de erro | | X | | | Mensagens de erro são inconsistentes entre os técnicos | | Acionamento de Eventos | X | | | | | | Diagnósticos | | X | | | As informações de diagnóstico estão disponíveis apenas na tecnologia AzureHtml5JS e estão parcialmente disponíveis na tecnologia SilverlightSS. | | Ordem técnica personalizável | | X | | | | | Heurística - Básico | X | | | | | | Heurística - Personalização | | | X | | A personalização só está disponível com a tecnologia AzureHtml5JS. | | Descontinuidades | X | | | | | | Selecione Bitrate | X | | | | Esta API só está disponível nos técnicos AzureHtml5JS e FlashSS. | | Fluxo multi-áudio | | X | | | O switch de áudio programático é suportado em tecnologias AzureHtml5JS e FlashSS, e está disponível através da seleção de IU no AzureHtml5JS, FlashSS e html5 nativo (no Safari). A maioria das plataformas exige os mesmos dados privados de codec para alternar fluxos de áudio (mesmo codec, canal, taxa de amostragem, etc.). | | Localização da UI | | X | | | | | Reprodução em várias instâncias | | | | X | Este cenário pode funcionar para alguns técnicos, mas atualmente não é suportado e não testado. Você também pode fazer com que isso funcione usando iframes | | Suporte a anúncios | | x | | | O AMP suporta a inserção de anúncios lineares pré-médio e pós-roll de servidores de anúncios compatíveis com VAST para VOD na tecnologia AzureHtml5JS | | Análise | | X | | | O AMP fornece a capacidade de ouvir análises e eventos de diagnóstico para enviar para um backend do Analytics de sua escolha. Todos os eventos e propriedades não estão disponíveis entre os técnicos devido às limitações da plataforma. | | Skins personalizadas | | | X | | Gire os controles de configuração para falsos no AMP e usando seu próprio HTML e CSS. | | Procurar limpeza de barras | | | | X | | | Trick-Play | | | | X | | | Somente áudio | | | | X | Pode funcionar em alguns técnicos para O Streaming Adaptativo, mas atualmente não é suportado e não funciona no AzureHtml5JS. A reprodução progressiva do MP3 pode funcionar com a tecnologia HTML5 se a plataforma o suportar. | | Somente vídeo | | | | X | Pode funcionar em alguns técnicos para O Streaming Adaptativo, mas atualmente não é suportado e não funciona no AzureHtml5JS. | | Apresentação de vários períodos | | | | X | | Múltiplos ângulos de câmera | | | | X | | | Velocidade de reprodução | | X | | | A velocidade de reprodução é suportada na maioria dos cenários, exceto no caso do celular devido a um bug parcial no Chrome | ## <a name="next-steps"></a>Próximas etapas ## - [Azure Media Player Quickstart](azure-media-player-quickstart.md)
175.314286
428
0.289113
por_Latn
0.998425
486ffebb4b88bd49251cd07f455a2d0549a9c5f3
2,565
md
Markdown
_docs/osm/building-relation.md
osmpersia/osmpersia.github.io
e74a198d588d33f0b3e87793916aeba8fcaf475f
[ "MIT" ]
1
2018-11-23T11:10:20.000Z
2018-11-23T11:10:20.000Z
_docs/osm/building-relation.md
osmpersia/osmpersia.github.io
e74a198d588d33f0b3e87793916aeba8fcaf475f
[ "MIT" ]
null
null
null
_docs/osm/building-relation.md
osmpersia/osmpersia.github.io
e74a198d588d33f0b3e87793916aeba8fcaf475f
[ "MIT" ]
null
null
null
--- title: ریلیشن building permalink: /docs/building-relation/ --- ### نویسنده [@kiaraSh_Q](https://t.me/kiaraSh_Q) مطمئناً پیش اومده که خواسته باشین کاربری هر طبقه از یک ساختمون چند طبقه رو مشخص کنین... اما چجوری؟! در ادامه به شما خواهیم گفت... فرض کنین یک ساختمون ۲ طبقه داریم که یک زیرزمین هم داره و ارتفاع هر طبقه‌ش ۳ متره، و زیرزمین سرویس بهداشتی، طبقه اول سوپرمارکت و طبقه دومش مسکونی هست... ابتدا سطح مقطع اون ساختمون رو میکشیم و بعد برای هر طبقه یه ریلیشن مالتی پلیگون (type=multipolygon) جداگانه می‌سازیم (یعنی سه تا ریلیشن) و تگهای زیر رو بهشون میدیم زیرزمین ``` building=toilets building:part=yes level=-1 amenity=toilets ``` طبقه اول ``` building=supermarket building:part=yes level=0 height=3 shop=supermarket ``` طبقه دوم ``` building=house building:part=yes level=1 height=6 min_height=3 ``` در هر سه ریلیشن مرز ساختمون نقش outer خواهد داشت. (یک آبجکت، در بی نهایت ریلیشن میتونه عضو باشه) حالا اگه ساختمون شما n طبقه هم داشته باشه کافیه که همین کار رو برای هر طبقش انجام بدین... در نهایت میشه کُل این ریلیشن‌ها (child relation) رو در یک ریلیشن type=building جمع کرد و به هر کدومشون نقش part داد (role=part). تگهایی که مربوط به کُل ساختمان میشن رو باید به ریلیشن مادر (parent relation) داد... تگهای مربوط به کُل ساختمان ``` building=yes building:levels:2 building:levels:underground=1 height=6 ``` مشخصات دیگه ای از جمله رنگ و شکل سقف و... رو میشه با تگهای مربوطه‌شون مشخص کرد. با توجه به اینکه تعداد زیادی از سایت‌ها و اَپلیکیشن‌ها #3D رو ساپورت نمیکنن و محیطشون #2D هست ما یک نقش دیگه هم در ریلیشن بیلدینگمون نیاز داریم، تا در این موارد ساختمان دو بعدی و فقط دید از بالاش نمایش داده بشه... و خطوط اضافی دیگه‌ی ساختمان که در نمایش 2D الزامی ندارن نمایش داده نشن. اسم این نقشمون #outline هست و همونطور که از اسمش پیداست به مرز خارجی کُل ساختمون گفته میشه. در مثال خیلی ساده‌ی بالا (چون شکل ساختمان ساده‌ترین شکل ممکن هست) ما فقط یه مرز داریم که برای کُل ریلیشن‌ها از جمله ریلیشن مربوط به outline استفاده میشه. برای این منظور (نقش outline) در مثال بالا (و موارد مشابه دیگه) ما یک ریلیشن مالتی پلیگون دیگه، با تگهای برابر با ریلیشن مادر، یعنی همون ریلیشن building اصلی، به اضافه تگ building:part=no ایجاد میکنیم که نهایتاً این ریلیشن، در ریلیشن building ما نقش outline خواهد گرفت. برای ریلیشن بیلدینگ میتونین لینکهای زیر رو یه نگاه بندازین... [مجتمع مسکونی امام محمد باقر](https://www.openstreetmap.org/#map=19/30.26961/57.03619) [مجتمع مسکونی هواپیمایی ماهان](https://www.openstreetmap.org/#map=19/30.26974/56.99552) برای اشکال پیچیده‌تر هم میتونین برج میلاد و برج آزادی رو ببینین... برقرار باشید و خدا نگهدار 😊🌸
29.482759
286
0.747758
pes_Arab
0.967781
487025cf90d1b85815831f19ae82fc48e8ed9eb1
3,211
md
Markdown
README.md
BetaWile/betaGuard2
091343363a51dfee1a91e37b4bce0a8aeac5c448
[ "Apache-2.0" ]
38
2021-07-27T01:21:28.000Z
2021-12-17T18:33:26.000Z
README.md
BetaWile/betaGuard2
091343363a51dfee1a91e37b4bce0a8aeac5c448
[ "Apache-2.0" ]
null
null
null
README.md
BetaWile/betaGuard2
091343363a51dfee1a91e37b4bce0a8aeac5c448
[ "Apache-2.0" ]
1
2021-12-17T18:33:15.000Z
2021-12-17T18:33:15.000Z
# Discord Guard Bot - [Discord Guard Bot](https://github.com/beT4w/betaGuard) - [Kurulum](#kurulum) - [İçerikler](#İçerikler) - [İletişim](#İletişim) - [FAQ](#faq) <div align="center"> <a href="https://github.com/BetaWile"> <img src="https://betaaa.has-a-hot.mom/55orRHk8J.gif"> </a> </div> # Kurulum * İlk olarak bilgisayarına [Node JS](https://nodejs.org/en/) indir. * Bu projeyi zip halinde indir. * Herhangi bir klasöre zipi çıkart. * Daha sonra `beta`/`Settings`/`Config.json` dosyalardaki bilgileri doldur. * Sonra klasörün içerisinde bir `powershell` ya da `cmd` penceresi aç. * ```npm install``` yazarak tüm modülleri kur. * Kurulum bittikten sonra ```node beta.js``` yaz ve botu başlat. ## Botun İntentlerini Açmayı Unutma! * [Açmak İçin Tıkla](https://discord.com/developers/applications) <img src="https://cdn.discordapp.com/attachments/818953120452575322/851116463166849054/3P4KKB.png"/> ***Tadaaa 🎉. Artık guard botun hazır. Dilediğin gibi kullanabilirsin.*** # Neden Yayınlandı? Kısaca neden böyle bir şey için uğraştığımı anlatayım. Hem kendimi geliştirmek daha iyi bilgilere ulaşmak hatalar alıp onları nasıl düzeltebileceğimi bulmak tecrübe kazanmak için hemde Türkiyede bu kadar iyi detaylı, özenli bir altyapının olmadığını fark edip bundan sizinde yaralanmanızı istedim. ## Config.json Bilgi ```json { "Bot": { "token": "TOKEN", "owner": "SAHİP_ID", "server": "SUNUCU_ID", "activity": "DURUM" }, "Guard": { "safezone": ["GÜVENLİ_KİŞİ", "LER"], "safebots": ["GÜVELİ BOT", "LAR"], "jail": "CEZALI_ROL_ID", "booster": "BOOSTER_ROL_ID", "log": "LOG_KANAL_ID" } } ``` ## Panel.json Bilgi * True = Açık * False = Kapalı ```json { "RoleProtectiions": true, "ChannelProtections": true, "GuildProtections": true, "EmoteProtections": true } ``` # İçerikler ## • Guard { - [x] Rol Oluşturma Koruma - [x] Rol Silme Koruma - [x] Rol Güncelleme Koruma - [x] Kanal Oluşturma Koruma - [x] Kanal Silme Koruma - [x] Kanal Güncelleme Koruma - [x] Sunucu Güncelleme Koruma - [x] Üye Güncellenme Koruma - [x] Emoji Güncellenme Koruma - [x] Emoji Silme Koruma - [x] Emoji Oluşturma Koruma - [x] Ban Koruma - [x] Kick Koruma - [x] Ban Açma Koruma - [x] Bot Koruma - [x] Güvenli Fonksiyonu ## }; # İletişim * [Discord Profilim](https://discord.com/users/852615172673503262) * [Discord Sunucum](https://discord.gg/58UAMVJTSH) # FAQ Sıkça sorulan sorulara buradan ulaşabilirsin. **Q:** Altyapıyı geliştirilmeye devam edilecek mi?<br /> **A:** Eğer bir şeyler eklersem dolaylı yoldan burayada ekleyeceğim. **Q:** İstek herhangi bir şey ekliyor musun?<br /> **A:** Eğer istediğin şey hoşuma giderse ve yapmaktan zevk alacaksam eklerim. **Q:** Altyapı tamamen sanamı ait?<br /> **A:** Evet, tamamen bana aittir. **Q:** Hatalarla ilgileniyor musun?<br /> **A:** Proje içindeki hatalarla ilgileniyorum. Eğer bir hata ile karşılaşırsanız lütfen Discorddan benimle iletişim kurun. ## NOT: Botta liasns bulunmaktadır. Bu botun dosyalarının benden habersiz paylaşılması/satılması durumunda gerekli işlemler yapılacaktır!
28.415929
298
0.683899
tur_Latn
0.998515
487190ec22bdd387e87e3a5fa914de3bd282086d
952
md
Markdown
includes/storage-table-cosmos-db-tip-include.md
eduarandilla/azure-docs.es-es
2d47e242f1f915183fb6a2852199649dbae474a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/storage-table-cosmos-db-tip-include.md
eduarandilla/azure-docs.es-es
2d47e242f1f915183fb6a2852199649dbae474a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/storage-table-cosmos-db-tip-include.md
eduarandilla/azure-docs.es-es
2d47e242f1f915183fb6a2852199649dbae474a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- author: tamram ms.service: storage ms.topic: include ms.date: 10/26/2018 ms.author: tamram ms.openlocfilehash: 47d50c3f27742b7f82589bf4d423809a115d7483 ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 07/02/2020 ms.locfileid: "82611793" --- > [!TIP] > El contenido de este artículo se aplica al servicio original Azure Table Storage. Sin embargo, ahora hay una oferta prémium para el almacenamiento de tablas: Table API de Azure Cosmos DB. Esta API ofrece tablas optimizadas para el rendimiento, distribución global e índices secundarios automáticos. Hay varias [diferencias en las características entre Table API en Azure Cosmos DB y Azure Table Storage](../articles/cosmos-db/table-api-faq.md#table-api-vs-table-storage). Para más información al respecto y para probar esta oferta, consulte [Introducción a Table API de Azure Cosmos DB](https://aka.ms/premiumtables). >
56
620
0.804622
spa_Latn
0.848818
4871fd88093899bbc924747b250bab12662c6470
9,057
md
Markdown
articles/azure-diagnostics-troubleshooting.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-diagnostics-troubleshooting.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-diagnostics-troubleshooting.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties pageTitle="Rozwiązywanie problemów z diagnostyki Azure" description="Rozwiązywanie problemów z podczas przy użyciu diagnostyki Azure usług w chmurze Azure, maszyn wirtualnych i " services="multiple" documentationCenter=".net" authors="rboucher" manager="jwhit" editor=""/> <tags ms.service="multiple" ms.workload="na" ms.tgt_pltfrm="na" ms.devlang="dotnet" ms.topic="article" ms.date="10/04/2016" ms.author="robb"/> # <a name="azure-diagnostics-troubleshooting"></a>Narzędzia diagnostyczne Azure, rozwiązywanie problemów Informacje dotyczące używania narzędzia diagnostyczne Azure do rozwiązywania problemów. Aby uzyskać więcej informacji na diagnostyki Azure zobacz [Omówienie diagnostyki Azure](azure-diagnostics.md#cloud-services). ## <a name="azure-diagnostics-is-not-starting"></a>Diagnostyka Azure nie jest uruchamiana. Narzędzia diagnostyczne składa się z dwóch części: wtyczki agenta gościa i monitorowania agenta. Możesz sprawdzić pliki dziennika **DiagnosticsPluginLauncher.log** i **DiagnosticsPlugin.log** informacji na temat Dlaczego diagnostyki nie można uruchomić. W roli usługi w chmurze pliki dziennika dla wtyczki agenta gości znajdują się w: ``` C:\Logs\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\1.6.3.0\ ``` W Azure maszyny wirtualnej, pliki dziennika dla wtyczki agenta gości znajdują się w: ``` C:\Packages\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.6.3.0\Logs\ ``` Ostatni wiersz pliki dziennika będzie zawierać kod zakończenia. ``` DiagnosticsPluginLauncher.exe Information: 0 : [4/16/2016 6:24:15 AM] DiagnosticPlugin exited with code 0 ``` Dodatek zwraca poniższe kody zakończenia: Kod wyjścia|Opis ---|--- 0|Zakończyło się pomyślnie. -1|Błąd ogólny. -2|Nie można załadować pliku rcf.<p>Jest to błąd wewnętrzny, który powinien wystąpić tylko, jeśli niepoprawnie, uruchom dodatek agenta gościa ręcznie jest wywoływana na maszyn wirtualnych. -3|Nie można załadować pliku konfiguracji diagnostyki.<p><p>Rozwiązanie: Jest to wynikiem pliku konfiguracji nie przechodzące sprawdzania poprawności schematu. Rozwiązanie jest zapewnienie pliku konfiguracji, który jest zgodny ze schematem. -4|Inne wystąpienie kontrolna diagnostyki już korzysta z katalogu zasobów lokalnych.<p><p>Rozwiązanie: Określ inną wartość dla **LocalResourceDirectory**. -6|Uruchom dodatek agenta gościa próbował Uruchamianie diagnostyki z nieprawidłowego wiersza polecenia.<p><p>Jest to błąd wewnętrzny, który powinien wystąpić tylko, jeśli niepoprawnie, uruchom dodatek agenta gościa ręcznie jest wywoływana na maszyn wirtualnych. -10|Dodatek Narzędzia diagnostyczne zostało zakończone z powodu nieobsługiwanego wyjątku. -11|Agent gościa nie może utworzyć odpowiedzialnego za uruchamianie i monitorowanie agenta monitorowania procesu.<p><p>Rozwiązanie: Sprawdź, czy wystarczające zasoby systemowe są dostępne dla uruchamianie nowych procesów.<p> -101|Nieprawidłowe argumenty podczas połączenia dodatek Narzędzia diagnostyczne.<p><p>Jest to błąd wewnętrzny, który powinien wystąpić tylko, jeśli niepoprawnie, uruchom dodatek agenta gościa ręcznie jest wywoływana na maszyn wirtualnych. -102|Proces wtyczka nie może się zainicjować.<p><p>Rozwiązanie: Sprawdź, czy wystarczające zasoby systemowe są dostępne dla uruchamianie nowych procesów. -103|Proces wtyczka nie może się zainicjować. W szczególności jest nie można utworzyć obiektu rejestratora.<p><p>Rozwiązanie: Sprawdź, czy wystarczające zasoby systemowe są dostępne dla uruchamianie nowych procesów. -104|Nie można załadować pliku rcf dostarczonego przez agenta gościa.<p><p>Jest to błąd wewnętrzny, który powinien wystąpić tylko, jeśli niepoprawnie, uruchom dodatek agenta gościa ręcznie jest wywoływana na maszyn wirtualnych. -105|Dodatek diagnostyki nie można otworzyć pliku konfiguracji diagnostyki.<p><p>Jest to błąd wewnętrzny, który należy tylko wystąpić, jeśli niepoprawnie, dodatek diagnostyki ręcznie jest wywoływana na maszyn wirtualnych. -106|Nie można odczytać pliku konfiguracji diagnostyki.<p><p>Rozwiązanie: Jest to wynikiem pliku konfiguracji nie przechodzące sprawdzania poprawności schematu. Dlatego rozwiązanie jest zapewnienie pliku konfiguracji, który jest zgodny ze schematem. Można znaleźć plik XML, który został dostarczony do rozszerzenia diagnostyki w folderze *%SystemDrive%\WindowsAzure\Config* na maszyn wirtualnych. Otwórz odpowiedni plik XML oraz wyszukiwania **Microsoft.Azure.Diagnostics**, a następnie w polu **xmlCfg** . Dane są kodowane, więc musisz [dekodowanie go](http://www.bing.com/search?q=base64+decoder) wyświetlić plik XML, który został załadowany przez diagnostyki base64.<p> -107|Przejście katalogu zasobów do monitorowania agenta jest nieprawidłowy.<p><p>Jest to błąd wewnętrzny, który należy tylko wystąpić, jeśli niepoprawnie, jednostka kontrolna ręcznie jest wywoływana na maszyn wirtualnych.</p> -108 |Nie można przekonwertować pliku konfiguracji diagnostyki do monitorowania pliku konfiguracji agenta.<p><p>Jest to błąd wewnętrzny, który należy tylko wystąpić, jeśli dodatek diagnostyki ręcznie wywoływana jest nieprawidłowy plik konfiguracji. -110|Błąd konfiguracji diagnostyki ogólne.<p><p>Jest to błąd wewnętrzny, który należy tylko wystąpić, jeśli dodatek diagnostyki ręcznie wywoływana jest nieprawidłowy plik konfiguracji. -111|Nie można uruchomić agenta monitorowania.<p><p>Rozwiązanie: Sprawdź, czy zasoby systemowe wystarczające są dostępne. -112|Błąd ogólny ## <a name="diagnostics-data-is-not-logged-to-azure-storage"></a>Narzędzia diagnostyczne dane znajdują się nie jest zalogowany do magazynu platformy Azure Diagnostyka Azure są przechowywane wszystkie dane w magazynie Azure. Najczęstsze przyczyny brakujące dane zdarzenie jest nieprawidłowe zdefiniowanej przechowywania informacji o koncie. Rozwiązanie: Popraw pliku konfiguracji narzędzia diagnostyczne i ponownie zainstalować narzędzia diagnostyczne. Jeśli problem będzie nadal występował, po ponownym zainstalowaniu rozszerzenia diagnostyki, a następnie może być konieczne debugowanie dalszych przeglądając monitorowania jakiekolwiek błędy agenta. Zanim dane zdarzenie zostało przesłane do swojego konta miejsca do magazynowania znajduje się w LocalResourceDirectory. Dla roli usługi w chmurze LocalResourceDirectory jest: C:\Resources\Directory\<CloudServiceDeploymentID>.<RoleName>.DiagnosticStore\WAD<DiagnosticsMajorandMinorVersion>\Tables W przypadku maszyn wirtualnych LocalResourceDirectory jest: C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\<DiagnosticsVersion>\WAD<DiagnosticsMajorandMinorVersion>\Tables Jeśli nie ma żadnych plików w folderze LocalResourceDirectory, nie można uruchomić jest jednostka kontrolna. Jest to zazwyczaj spowodowane przez plik nieprawidłowa konfiguracja zdarzenie, które powinny być zgłaszane w CommandExecution.log. Agent monitorowania pomyślnie zbiera dane zdarzenie zobaczysz pliki .tsf dla każdego zdarzenia zdefiniowane w pliku konfiguracji. Agent monitorowania dzienniki jej błędów w pliku MaEventTable.tsf. Za pomocą aplikacji tabel2csv Aby przekonwertować plik .tsf pliku values(.csv) oddzielone przecinkami, aby sprawdzić zawartość tego pliku: Z uprawnieniami usługi w chmurze: %SystemDrive%\Packages\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\<DiagnosticsVersion>\Monitor\x64\table2csv maeventtable.tsf *% Dysk_systemowy %* z uprawnieniami usługi w chmurze jest zazwyczaj D: Na komputerze wirtualnej: C:\Packages\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\<DiagnosticsVersion>\Monitor\x64\table2csv maeventtable.tsf Powyższe polecenia generuje dziennika plik *maeventtable.csv*, który można otworzyć, a inspekcja komunikaty o błędach. ## <a name="diagnostics-data-tables-not-found"></a>Narzędzia diagnostyczne danych nie można odnaleźć tabel Tabele w magazynie Azure przechowywania danych diagnostyki Azure są nazywane za pomocą kodu poniżej: if (String.IsNullOrEmpty(eventDestination)) { if (e == "DefaultEvents") tableName = "WADDefault" + MD5(provider); else tableName = "WADEvent" + MD5(provider) + eventId; } else tableName = "WAD" + eventDestination; Oto przykład: <EtwEventSourceProviderConfiguration provider=”prov1”> <Event id=”1” /> <Event id=”2” eventDestination=”dest1” /> <DefaultEvents /> </EtwEventSourceProviderConfiguration> <EtwEventSourceProviderConfiguration provider=”prov2”> <DefaultEvents eventDestination=”dest2” /> </EtwEventSourceProviderConfiguration> Który wygeneruje 4 tabel: Zdarzenia|Nazwa tabeli ---|--- Dostawca = "prov1" &lt;identyfikator zdarzenia = "1"-&gt;|WADEvent + MD5("prov1") + "1" Dostawca = "prov1" &lt;identyfikator zdarzenia = "2" eventDestination = "dest1"-&gt;|WADdest1 Dostawca = "prov1" &lt;DefaultEvents-&gt;|WADDefault+MD5("prov1") Dostawca = "prov2" &lt;DefaultEvents eventDestination = "dest2"-&gt;|WADdest2
67.589552
672
0.796511
pol_Latn
0.999824
487222321c89f034137cf5437e6d6440f5385331
8,076
md
Markdown
README.md
kanroyalhigh/twelvedata-ruby
e561aa7032aad956daf65cd2f7152cf4a7bd76cd
[ "MIT" ]
1
2021-07-24T02:54:59.000Z
2021-07-24T02:54:59.000Z
README.md
kanroyalhigh/twelvedata_ruby
e561aa7032aad956daf65cd2f7152cf4a7bd76cd
[ "MIT" ]
null
null
null
README.md
kanroyalhigh/twelvedata_ruby
e561aa7032aad956daf65cd2f7152cf4a7bd76cd
[ "MIT" ]
null
null
null
# TwelvedataRuby [![Gem Version](https://badge.fury.io/rb/twelvedata_ruby.svg)](https://badge.fury.io/rb/twelvedata_ruby) TwelvedataRuby is a Ruby library that exposes some convenient ways to access Twelve Data API to get information on stock, forex, crypto, and other financial market data. In order to do so, a free API key is required which can be easily requested [here](https://twelvedata.com/pricing). Visit their [API's full documentation](https://twelvedata.com/docs) ## Installation Add this line to your application's Gemfile: ```ruby gem 'twelvedata_ruby' ``` And then execute: $ bundle install Or install it yourself as: $ gem install twelvedata_ruby ## Usage The preferred way to include the Twelve Data API key in the request payload is to assign it to an ENVIRONMENT variable which your Ruby application can fetch if none was explicitly assigned. The default ENVIRONMENTt variable name is `TWELVEDATA_API_KEY` but you can configure this to any other value using the `#apikey_env_var_name=` setter method. To get hold of the singleton `TwelvedataRuby::Client.instance`, you can directly used that inherited instance method from the mixed in `Singleton` module or thru the gem's module helper class method: ```ruby require "twelvedata_ruby" client = TwelvedataRuby.client ``` By not passing anything to the options method parameters, the `client` instance attributes will have default values. Though you can still set different values to the attributes through their helper setter methods: ```ruby client.apikey = "twelvedata-apikey" client.apikey_env_var_name = "the_environment_variable_name" # the helper getter method will upcase the value client.connect_timeout = 300 # can also accept "300" ``` or simply set them all at once: ```ruby require "twelvedata_ruby" client = TwelvedataRuby.client(apikey: "twelvedata-apikey", connect_timeout: 300) # or client = TwelvedataRuby.client(apikey_env_var_name: "the_environment_variable_name", connect_timeout: 300) ``` The default values though are sufficient already. Getting any type of financial data then from the API, simply invoke any valid endpoint name to the client instance. For example, to fetch some data for `GOOG` stock symbol using quote, timeseries, price, and etd API endpoints: ```ruby # 1. response content-type will be :csv client.quote(symbol: "GOOG", format: :csv) # 2. assigns custom attachment name client.timeseries(symbol: "GOOG", interval: "1hour", format: :csv, filename: "google_timeseries_1hour.csv") # 3. the content-type format will be :json client.price(symbol: "GOOG") # 4. the passed apikey is the used in the request payload client.etd(symbol: "GOOG", apikey: "overrides-whatever-is-the-current-apikey") # 5. an example of invocation which the API will respond with 401 error code client.etd(symbol: "GOOG", apikey: "invalid-api-key") # 6. still exactly the same object with client TwelvedataRuby.client.api_usage # 7. an invalid request wherein the required query parameter :interval is missing TwelvedataRuby.client.timeseries(symbol: "GOOG") # 8. an invalid request because it contains an invalid parameter client.price(symbol: "GOOG", invalid_parameter: "value") # 9. invoking a non-existing API endpoint will cause a NoMethodError exception client.price(symbol: "GOOG", invalid_parameter: "value") ``` All of the invocations possible return instance value is one of the following: - `TwelvedataRuby::Response` instance object which `#error` instance getter method can return a nil or kind of `TwelvedataRuby::ResponseError` instance if the API, or the API web server responded with some errors. #5 is an example which the API response will have status error with 401 code. `TwelvedataRuby::Response` resolves this into `TwelvedataRuby::UnauthorizedResponseError` instance. - `TwelvedataRuby::ResponseError` instance object itself when some error occurred that's not coming from the API - a Hash instance which has an `:errors` key that contains instances of kind `TwelvedataRuby::EndpointError`. This is an invalid request scenario which the #7, #8, and #9 examples. No actual API request was sent in this scenario. On first invocation of a valid endpoint name, a `TwelvedataRuby::Client` instance method of the same name is dynamically defined. So in effect, ideally, there can be a one-to-one mapping of all the API endpoints with their respective parameters constraints. Please visit their excellent API documentation to know more of the endpoint details here https://twelvedata.com/doc. Or if you're in a hurry, you can list the endpoints definitions: ```ruby TwelvedataRuby::Endpoint.definitions ``` Another way of fetching data from API endpoints is by building a valid `TwelvedataRuby::Request` instance, then invoke `#fetch` on this instance. The possible return values are the same with the above examples. ```ruby quote_req = TwelvedataRuby::Request.new(:quote, symbol: "IBM") quote_resp = quote_req.fetch timeseries_req = TwelvedataRuby::Request.new(:quote, symbol: "IBM", interval: "1hour", format: :csv) timeseries_resp = timeseries_req.fetch etd_req = TwelvedataRuby::Request.new(:etd, symbol: "GOOG") etd_resp = etd_req.fetch # or just simply chain price_resp = TwelvedataRuby::Request.new(:price, symbol: "GOOG").fetch ``` An advantage of building a valid request instance first then invoke the `#fetch` on it is you actually have an option to not send the request one by one BUT rather send them to the API server all at once simultaneously (might be in parallel). Like so taking the above examples' request instance objects, send them all simultaneously ```ruby # returns a 3 element array of Response objects resp_objects_array = TwelvedataRuby::Client.request(quote_req, timeseries_req, etd_req) ``` Be caution that the above example, depending on the number request objects sent and how large the responses, hitting the daily limit is likely possible. But then again if you have several api keys you might be able to supply each request object with its own apikey. :) The data from a successful API request can be access from `Response#parsed_body`. If request format is `:json` then it will be a `Hash` instance ```ruby TwelvedataRuby.client.quote(symbol: "GOOG").parsed_body => {:symbol=>"GOOG", :name=>"Alphabet Inc", :exchange=>"NASDAQ", :currency=>"USD", :datetime=>"2021-07-15", :open=>"2650.00000", :high=>"2651.89990", :low=>"2611.95996", :close=>"2625.33008", :volume=>"828300", :previous_close=>"2641.64990", :change=>"-16.31982", :percent_change=>"-0.61779", :average_volume=>"850344", :fifty_two_week=>{:low=>"1406.55005", :high=>"2659.91992", :low_change=>"1218.78003", :high_change=>"-34.58984", :low_change_percent=>"86.65031", :high_change_percent=>"-1.30041", :range=>"1406.550049 - 2659.919922"}} ``` Likewise, if the API request format is `:csv` then `Response#parsed_body` will be `CSV#Table` instance ```ruby TwelvedataRuby.client.quote(symbol: "GOOG", format: :csv).parsed_body => #<CSV::Table mode:col_or_row row_count:2> ``` ## Documentation You can browse the source code [documentation](https://kanroyalhigh.github.io/twelvedata_ruby/doc/) ## Contributing Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/twelvedata_ruby. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/[USERNAME]/twelvedata_ruby/blob/master/CODE_OF_CONDUCT.md). ## License The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT). ## Code of Conduct Everyone interacting in the TwelvedataRuby project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/[USERNAME]/twelvedata_ruby/blob/master/CODE_OF_CONDUCT.md). # Notice This is not an offical Twelve Data ruby library and the author of this gem is not affiliated with Twelve Data in any way, shape or form. Twelve Data APIs and data are Copyright © 2020 Twelve Data Pte. Ltd
51.43949
439
0.772288
eng_Latn
0.975602
48728ce0509e3929b298ff6b0093478366fbe1a0
709
md
Markdown
docs/components/my-account.md
alxy/oc-mall-plugin
cac854a0eb70c6458abfdc1564201bffd51d0598
[ "MIT" ]
1
2019-10-07T07:47:40.000Z
2019-10-07T07:47:40.000Z
docs/components/my-account.md
alxy/oc-mall-plugin
cac854a0eb70c6458abfdc1564201bffd51d0598
[ "MIT" ]
null
null
null
docs/components/my-account.md
alxy/oc-mall-plugin
cac854a0eb70c6458abfdc1564201bffd51d0598
[ "MIT" ]
1
2020-05-07T06:42:12.000Z
2020-05-07T06:42:12.000Z
# MyAccount The `MyAccount` component displays an overview of the user's account. It pulls together the [OrdersList](./orders-list.md), [CustomerProfile](./customer-profile.md) and [AddressList]( ./address-list.md) components. By changing the `page` property a different child component is displayed. ## Properties ### `page` (string) The currently active page. This property should be populated via URL parameters. Possible values are `orders`, `profile` and `addresses`. ## Example implementations ### Display the my account page ```ini title = "Account" url = "/account/:page?" [session] security = "user" redirect = "login" [myAccount] page = "{{ :page }}" == {% component 'myAccount' %} ```
21.484848
113
0.71086
eng_Latn
0.962933
48731477ac4fab80827088c165530d98206e7222
197
md
Markdown
README.md
jayalfredprufrock/winx
e1912bc6b57ed753179f698f5984dd42f86387e3
[ "MIT" ]
1
2019-06-20T02:56:59.000Z
2019-06-20T02:56:59.000Z
README.md
jayalfredprufrock/winx
e1912bc6b57ed753179f698f5984dd42f86387e3
[ "MIT" ]
null
null
null
README.md
jayalfredprufrock/winx
e1912bc6b57ed753179f698f5984dd42f86387e3
[ "MIT" ]
null
null
null
winx ======================= An app framework built on top of CI 3.0 and Bootstrap 3, installable via composer. ##Installation composer create-project jayalfredprufrock/winx project-name
24.625
83
0.680203
eng_Latn
0.733114
4873ee0c54262f99f7a2d0121b9ee1bc92ed76ec
6,298
md
Markdown
_posts/2019-06-10-Download-indian-english-phonology-a-case-study-of-malayalee-english.md
Luanna-Lynde/28
1649d0fcde5c5a34b3079f46e73d5983a1bfce8c
[ "MIT" ]
null
null
null
_posts/2019-06-10-Download-indian-english-phonology-a-case-study-of-malayalee-english.md
Luanna-Lynde/28
1649d0fcde5c5a34b3079f46e73d5983a1bfce8c
[ "MIT" ]
null
null
null
_posts/2019-06-10-Download-indian-english-phonology-a-case-study-of-malayalee-english.md
Luanna-Lynde/28
1649d0fcde5c5a34b3079f46e73d5983a1bfce8c
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Indian english phonology a case study of malayalee english book If he could find her, indian english phonology a case study of malayalee english fantasy novel featuring Vikings in a longboat. 14, stymied by the challenge, windows offering escape and clean air, Dory. persistently through his thoughts that he wonders why it has such great indian english phonology a case study of malayalee english. It turned end over end, with what wouldst thou buy it?" "With the half of my kingdom," answered the Khalif; and Ibn es Semmak said, ii, iii, huh?" There was a little noise. That had always been his word for evil doings, of course, till he plucked it up by the roots and cast it to the ground, saying. 5 ort, Mom wasn't born to be a Las Vegas showgirl, we have to trust her instincts, he would hear her shrill accusations, and disappoint not my expectation concerning thee!" And he ceased not to press him till he consented to him; whereat Aboulhusn rejoiced and going on before him. "Who's been after you?" them -- were swallowed by each successive tunnel of this journey whose destination I did not In Cain's bedroom, was too public to suit his purposes, so God (extolled be His perfection and exalted be He!) might show forth her innocence before witnesses, their backs to the highway. "How clever you indian english phonology a case study of malayalee english he said. "Thank You. "Maurice was a philatelist! "Well, commodious, his voice hard and harsh, the port-wine birthmark, and had consisted of a lobby and a room with an perished in the neighbourhood of Cape Schelagskoj. " Quoth I, and an unseasonable warmth confirmed the coming catastrophe, unable to perform the one miracle she needed, speaking with the special knowledge of the once-dead, "They're seducing all of us, which all "You're sure. Fear clotted in Junior's veins, sang the following verses: JAAFER BEN YEHYA AND ABDULMEILIK BEN SALIH THE ABBASIDE, now so cool. There come to him a sort of robbers and seek to repent and proffer two boys [by way of peace-offering], Paul can't show his face outside, and the grey beach led him only to the feet of sheer He could shoot Tammy Bean after he killed Bartholomew, little He is about to move from petty crimes to the commission of a major felony, but along with them we found several skins of Eskimo. This worry is ridiculous, I highly recommend Culture of Death: The Assault on Medical Ethics in America by "Oh, by repeatedly Cuteness wasn't the quality Tom had in mind. " opened the bottle. " indian english phonology a case study of malayalee english without complying with the full terms of this agreement. They were too small to have been anything but canaries or parakeets. "Look here. We often joked like that. The distance from East Cape was 120', after the fullest fashion, bein' as I didn't know it? Instead of responding to the physician's request, and he had even less of a stomach for blood in real life, talking about differential transducers and inductive compensators. indian english phonology a case study of malayalee english can't crack you. We can spot each other a mile away! After some ten seconds he began feeling uncomfortable. bedroom, standing at "Five thousand, Corporal Swyley paid no heed as he stood by Fuller and Batesman, buried it again; after which he returned to his lodging and gave the idiot to eat and drink, and went down to the dogs and the horses and the cattle. The cat's dreams came into his mind, a white; a grey hen was setting her clutch in the henhouse. 2 0. If you don't mock a bastard like Cain, as though she were great dangers from a heavy sea at the river mouth, they still and further weakness among us. " species, and that The young women often strike one as very pretty if one can rid "When and where did we begin to go too far. "One hundred and four. Anyway, terrified. He was but a wall or two away, and in the end it was agreed: Clearance would be given for the civilians and a token military unit to begin moving down to Franklin, which permitted me to of whales'[26] hides? This proved easier didn't appear to trouble Barty much otherwise. give them to him. Perhaps he wanted to spite them. Then: "When we left, watching, and the old man obeyed his commandment, (140) and for the wind When was a rider found, sharp yet at the same time mild; a young couple passed; the girl turned to the man; her design, west of Samory, when we were parted from the vessel which had accompanied the _Vega_ "I'm Klonk, "I'm an easily confused layman. Teelroy had done barely enough maintenance to spare swelling the gutters with a poisonous flood? through her mind, her nasal cartilage rotted away by cocaine. thousands of spectators? He walked back to the Prosser residence, nodding her greeting, don't yell. "Ah," said the Patterner. He wanted her to be remembered, Barty reached up for his mother, smashed into an oil-tank truck, "The Lady Zubeideh saith thus and thus, because the sea was less covered by ice there. " since have been given to the masteries: finding, he would be reluctant to damage the property of another in this fashion, the more he came to understand how tenaciously and ferociously they would defend their freedom to express that dedication, she giggled, or is used the restroom only a short while ago, she could not play with overheating vehicles, But with. psychiatric ward. They saw me the moment I left the dust cloud. 2, the Immanent Grove. It is singular The care of pregnant beasts and women, the eastern sky was shadowy above the sea, and most if not all glittering droplets settled indian english phonology a case study of malayalee english my skin; they effervesced and evaporated, and he absolutely must obtain indian english phonology a case study of malayalee english for himself As the diameter of the tower shrank, arthritis and fallen arches, Agnes carried two suitcases out of the back door. Ridiculous sentence constructions. Bronson had thought of it as medicine, as though it were a fallen behemoth from the "The Company is in the King's employ, she thought, the woman cried out again. Moreover, I've got a flunky's job in a granary, high above the tower, Junior couldn't tell whether or not their voices were one and the same.
699.777778
6,166
0.788346
eng_Latn
0.999947
48745950033838dd8a8f96f25b9831e4cf16c50f
1,682
md
Markdown
_posts/2020-06-01-vim-grep.md
MichaelIsrael/hello-world
3bfcbca6b84642365ad52949d7590ac7521a8fed
[ "MIT" ]
null
null
null
_posts/2020-06-01-vim-grep.md
MichaelIsrael/hello-world
3bfcbca6b84642365ad52949d7590ac7521a8fed
[ "MIT" ]
null
null
null
_posts/2020-06-01-vim-grep.md
MichaelIsrael/hello-world
3bfcbca6b84642365ad52949d7590ac7521a8fed
[ "MIT" ]
null
null
null
--- title: Automating the usage of Vimgrep author: Michael Israel date: 2020-06-01 12:41:00 +0100 categories: [Programming, Vim] tags: [programming, vim] --- Since I tend to use grep a lot at work, I decided to take a look at Vim's grep-bindings. Be it the internal grep (:vimgrep) or the external (:grep), I found the process to be tedious and full of repititions to first have to run it (possibly even by search for a spicific text from the current buffer), then open the errors or locations window to see the result, so I decided to automate the whole process using a map (well ... four maps): ```vim nnoremap <M-g>w :lvimgrep /\v<C-r><C-w>/j **<CR>:lopen 10<CR> nnoremap <M-g>e :lvimgrep /\v<<C-r><C-w>>/j **<CR>:lopen 10<CR> vnoremap <M-g>w y:lvimgrep /\v<C-R>=escape(@",'/\')<CR>/j **<CR>:lopen 10<CR> vnoremap <M-g>e y:lvimgrep /\v<<C-R>=escape(@",'/\')<CR>>/j **<CR>:lopen 10<CR> ``` Here, there are basically two new shortcuts or mappings ```Alt+g - w``` and ```Alt+g - e```. And these two then twice; once for the normal mode and another for visual mode. ```Alt+g``` stands for **<ins>g</ins>**rep, ```w``` for **<ins>w</ins>**ord and ```e``` for (word) **<ins>e</ins>**nd. When in normal word, the word under the cursor is being searched for. While in visual mode, the selected test is copied and used as the regular expression. 'w' and 'e' differ in that the first searches for the word as is (without adding boundaries) while the latter adds the word boundary symbols around the expression. What I still have on my mind, is automate the process from the command line as well, so that vim is started directly with the search results opened in the locations window.
67.28
438
0.701546
eng_Latn
0.997283
48756e45be4b0740b2dfd8633c6affc7bfed0d13
782
md
Markdown
_posts/2017-10-19-2017.10.19.md
lixiache/lixiache.github.io
a10130f592ed8f6dc3c054fd3a51ed86f20d3ee4
[ "Apache-2.0" ]
null
null
null
_posts/2017-10-19-2017.10.19.md
lixiache/lixiache.github.io
a10130f592ed8f6dc3c054fd3a51ed86f20d3ee4
[ "Apache-2.0" ]
null
null
null
_posts/2017-10-19-2017.10.19.md
lixiache/lixiache.github.io
a10130f592ed8f6dc3c054fd3a51ed86f20d3ee4
[ "Apache-2.0" ]
null
null
null
--- layout: post title: 2017.10.19 date: 2017-10-19 13:09:14 updated: 2017-10-19 13:09:14 author: Dongwen typora-root-url: .. --- 昨晚洗了个澡,好好休息了一下。灵异第六感看了一半,哪里有于昕讲的那么好看…… 早上7:30自然醒来,神清气爽。收到了土耳其器官捐赠海报竞赛组委会的邮件。发给我一个电子参展证明。还有一个参展海报作品集电子版。超级好看!好用心的组委会呀,从网站设计到比赛流程组织,每一点都让人感觉十分正式和贴心。 早上微雨,打伞步行去工作室。弄到中午,总算把红线坐标重新标注好了。 中午去东艺把作品集打了一份。明天去取。去吃昆山面馆。傍晚约了和于昕打球。给他些梨。 中午睡觉真是越睡越困。梦到然然家开网吧。但是却玩不了她最爱玩的游戏。梦到我坐在一群抽着烟的虎形大汉中间。我发现了一个神奇的USB接口,大家都惊叹半天。 下午把红线坐标炸开就比例不对的问题解决了。傍晚去本科生工作室通知他们明天答辩带电脑。然后和于昕球场大战三百回合,斗不过特了……唉这身体…… 回宿舍休整一下。于昕给我带了孙雪洁孙雪洁家里果园的六个烟台苹果,我给他八个莱阳梨。还给他一本书。不想去工作室画图了。唉~明天还得去听本科生答辩,要早起去点名不想去!~给铭澍桌上放了一个大梨一个苹果。然后洗澡回来路上遇见铭澍,他说谢谢水果。我突然感觉自己很傻。如果做一步退让,两个水果就能解决的问题,怎么成了我天大的难题!所谓的心胸宽广,指的不是容纳自己本来就喜欢的人吧,当然是能够忍辱负重,敢于和自己不喜欢的人合作喽。 ![](/img/in-post/p46065144.jpg) ![](/img/in-post/p46065342.jpg)
31.28
242
0.824808
yue_Hant
0.293842
487584796648bfb4feaffeee075aafbf8f1e0adb
22
md
Markdown
README.md
codereacher/Example-NET-Core-API
0845193ece1fddefa8c24ca2fe11c586a798557c
[ "CC0-1.0" ]
null
null
null
README.md
codereacher/Example-NET-Core-API
0845193ece1fddefa8c24ca2fe11c586a798557c
[ "CC0-1.0" ]
null
null
null
README.md
codereacher/Example-NET-Core-API
0845193ece1fddefa8c24ca2fe11c586a798557c
[ "CC0-1.0" ]
null
null
null
# Example-NET-Core-API
22
22
0.772727
eng_Latn
0.380304
48758e8b1b3ba4d08e63523353722b857d6a820a
63,459
md
Markdown
_transcripts/SRCCON2018-restoring-reader-privacy.md
simonw/srccon
2fc87a798564f7a0426c75bf049a52512c10d256
[ "MIT" ]
null
null
null
_transcripts/SRCCON2018-restoring-reader-privacy.md
simonw/srccon
2fc87a798564f7a0426c75bf049a52512c10d256
[ "MIT" ]
null
null
null
_transcripts/SRCCON2018-restoring-reader-privacy.md
simonw/srccon
2fc87a798564f7a0426c75bf049a52512c10d256
[ "MIT" ]
null
null
null
--- --- # Session Transcript:<br>Restoring our reader's privacy in a time of none ### Session facilitator(s): Michael Donohoe, Matt Dennewitz ### Day & Time: Friday, 11:45am-1pm ### Room: Johnson MICHAEL: So, do we need microphones or are we okay? AUDIENCE: I can hear you personally. AUDIENCE: You're okay. MATT: In the back. MICHAEL: Okay, we'll use mics. AUDIENCE: That's even worse. MICHAEL: All right. MATT: Hi. I'm Matt. We need one more moment. MICHAEL: Go for it. MATT: All right. Hi, everybody. MICHAEL: Just use this. MATT: I'm Matt, this is Michael, we're here to talk about restoring privacies for our users. And we are here to talk about tech and secure our user's data. AUDIENCE: I have a question, what is the trustee's privacy security stance. Is this sponsored. MICHAEL: It's so you know the slides are safe and these are not actively collecting your information while you are present at this talk. AUDIENCE: I was not given a consent. MICHAEL: We're not in Europe anymore. [ Laughter ] AUDIENCE: I was told there would be cookies. MICHAEL: I was told there would be M & Ms. But while we're waiting, we're going to pivot to a different talk that is much more interesting. So, starting over a little bit. Hi, I'm Michael. This is Matt. AUDIENCE: Is that mic on? I don't think that mic is on. MICHAEL: Okay. Okay. Part of this is well, so we're going to cover a couple different themes here. One of the main takeaways is this really isn't about your user privacy for the most part. This is going to get into the heart of the fact that a lot of the people here actually have an impact that can reach either thousands or tens of thousands or millions of people in terms of like what you produce and have the content that kind of goes through your fingers. We want to actually show you that you can actually have a disproportionate impact on user privacy. MATT: Tell them how we're going to do it. MICHAEL: How are we going to do it, Matt? MATT: We're going through the slides and a couple different websites and then some to show how generally tracking is done around the web. we built an audit tool that takes a blacklist and pairs can with domains. And then we use White House to audit several different sites that Michael has identified. Spoiler. To pull out, he's using what tracking tech, how many domains are being called up. Introduce yourself? MICHAEL: I did. We got to do it twice now. But okay. So, a lot of you actually know a lot about this and you probably know a lot of these things. But do you know like, say, let's get some kind of basics down. You know how your browser is leaking information about you. Do people want to either talk or put up your hands in ways you know information that sorry, can you give me examples of information that your browser is telling about you every time you visit a website? All right. One in the back. AUDIENCE: So, a user builds their operating system. The browser headers include your whether or not you have flash installed, others you have installed. Whether or not you're allowing certain content types. And leak whether or not you're using Java script. And that can leak fonts that you have on your computer. Should I keep going. MICHAEL: You just gave away all of the answers. AUDIENCE: Like with patterns. AUDIENCE: What language you speak. MICHAEL: Yep. Time zones. And these things in themselves can be pieced together to actually provide like a larger profile. AUDIENCE: Third party cookies. MICHAEL: Third party cookies. Yes. Everywhere you have been before. We're going dig into that a little bit as well. Matt, did leave anything out? MATT: More esoteric. Like if you clicked a link. MICHAEL: I think they protect against that. The CSS thing. When you visited sites by default the link color changed. Sites loaded a whole host of links out of view and then the detects the color to see where you actually visited so that way it could get the top 500 of Lexus sites and quickly assess where you have been before and use that to kind of profile and track you. So, yeah. I don't think we collectively missed anything that was on this list. And this doesn't get into like the personalization stuff. So, if you are on Facebook and you have an account there. Obviously that information is kind of persisting. MATT: Next question. What are some of the ways in which your browser history is recorded as you move around the web? MICHAEL: Any takers? AUDIENCE: So, if you get an email that has a pixel, you will be tracked across the email and then any site that you visit after that also has that. So, like, your emails aren't even safe at that point. MICHAEL: Yep. MATT: What else? AUDIENCE: Search history. It can be related to after you search for something. MICHAEL: Yep. AUDIENCE: Third party ad trackers are maintaining their own record of everywhere you visited with the pixels on the sites. MICHAEL: And if you try to get rid of them, they try to resubstitute it, e tags to cookies to local storage. Super cookies. They still a thing? MATT: Yeah, they are. MICHAEL: Okay. MATT: Yeah. Social buttons, commenting services. MICHAEL: If you're commenting, discuss for commenting on your site. Those things are tracking everyone that records all the fun stuff. It's kind of interesting. Like, you know, a total disclaim or non disclaimer. Like where I work we are looking at Spot IM for commenting. If you go to the website and they're quite up front. Like they talk about they reach 500 1 billion people per month. And from a marketing perspective and from the data perspective what that reach is. But when you are installing it, you're really thinking about your user experience and commenting and moderation. But not, you know, you can't forget that by having this scattered across so many different domains that as people travel they are going to end up kind of being recorded. Profiled. MATT: From the moment you get online, you are being tracked to some degree. MICHAEL: How many people use shared CDNs for a website? JQuery hosted? And these services are in many respects free. They are doing it as partly public service to actually have from a web page performance that everyone is using the same version of JQuery on the CDN that's cashed into the browser. But every time that gets loaded, they know and see where the users are. Exposing your user base to their CDN. A lot of CDNs will just use that information just for the purpose of being CDNs. Not to pick on Cloud Flare, we use that and a couple others. We'll use that information as a way of tracking people across the web. I want to get at we know there's lots of ways that tracking is happening. And we're realistic about that. But there are things under the hood that aren't the obvious ways that people are being tracked. we're going to explore a few ways of how that happens and also kind of a few ways to mitigate that. Okay, so the idea here is impact. Again, how many people are developers here? All right. How many people would be more let's say editorial content side? Is it fair to say in your work you deal with a lot of embedded content? Your custom JavaScript might be an article? This is exactly the audience. The notion in terms of impact that I wanted to instill. Single journalist. You create an article. Let's say it's tagged. These things happen. You do that naturally for many reasons. You have a taxonomy you may follow or key words that are just editorial process where you would do that. You would have embedded sorry, you have embedded player, an embedded widget. It's a third party tool. You use it a lot. You love it. And you reach about 10,000 let's say unique page views in the first month that the article runs. Maybe you have much further reach, maybe you have much lower. I just kind of made up the numbers. Our experience has been that a lot of embedded content will actually load some other trackers. Typically they load analytics which is totally fair. They want to know what reach they have, where they're at. But a lot of them load additional things. Some of them may be benign. But in our experience, not everything has been. Again, we're going through a few examples, but let's say it's loading three different kind of tracking pixels. That's 30,000 requests in the lifetime of this one month of an article that is kind of your user's information is being recorded somewhere in ways you probably don't know. In ways that probably wouldn't be clear from their privacy policy. Likewise, if you were a developer, you could actually do things much further. So, if you are building things out on the template level, there are ways that you could like if you just go with the default thing of how many people are familiar with WordPress short codes, for example? Okay. I'm totally not picking on WordPress at all. Just using this as an example. If you let's say drop the YouTube URL into your document, a lot of WordPress templates will just take that, use it as a short code and transform it into the YouTube embed. You're not copying and pasting that markup. By default YouTube is collecting information once that embedded player is on the page. As the page loads, the embedded video is there. Whether you use it or not. Its very presence and the requests it makes back to the Google mothership basically are dropping cookies or dropping other kinds of things we're collecting. There are small ways which, again, I keep saying it's coming up, where making minor changes could actually have an impact by dropping let's say 30 million requests over the course of a year's worth of site traffic down to zero for some of your users. MATT: And I want to say that every time you're not tracking something, you're not just tracking the user that's there. You're tracking specific actions that can be tracked back later. But with the baseline, that's the best. MICHAEL: So, we actually are going to show you a couple of these. Actually you know what? The Tableau one might be the most interesting. How many people here use Tableau? Okay. Do either of you want to stand up and explain what Tableau is really good for. AUDIENCE: Digitalizing as a front on top of data and Linux. MICHAEL: It's fun inter graphic stuff. They also do a lot of data and how you present that. But if you were to open up the network tab and we'll do an exercise on this as well and see what loads in this one particular example. Maybe if you could actually increase the no, the size of this. MATT: You want this up? MICHAEL: Yep. And I love the network tab. The network tab tells you so much. I don't know how long people have been in development for, but before there was nothing like this. You had to do like Firefogg in a bookmark mode to actually get anything. This is all the usual things you would expect. Images and icons. That's JQuery. I love JQuery. It has analytics.js. MATT: Looks like jtn. AUDIENCE: They're using Google analytics which is great. But there's other things that are happening. Oh. Mentioning stuff with Twitter. MATT: More Google analytics. MICHAEL: Keep scrolling. I kind of know what I'm looking for. We could also filter here. But basically did I miss it? MATT: What are you looking for? MICHAEL: Yes, there's a Facebook pixel. There is also did you spot the other two? There's a Twitter pixel. There's also LinkedIn. This is a data visualization. You have LinkedIn, Facebook, Twitter. And also if you look at some of the domains, I don't know again. Could you filter for let's say LinkedIn? L i n k? Oh, you know what? It's not loading this time. This demo. AUDIENCE: This is LinkedIn. Yeah. MICHAEL: Okay. Sorry. The big text right above me. [ Laughter ] So and you see all this stuff that it is sending. Some of this could be really benign. But you probably didn't know all these things were loading on your page. And Tableau is just one example. Again, I'm not calling out any provider. I am certainly guilty of things myself. There's the NPR play, for example, drops the number of cookies and loads a number of different things. Again, this could be very innocent like there is an ads component to the podcast player, for example. It's playing, I didn't get a pre roll on it. Again, that does happen. And so, on the most innocent level, they want to know more about the audience to monetize this later so they would have the segments built up. By virtue of embedding this on your page, and as a New Yorker, you would be embedding these and we are exposing "The New Yorker" audience via NPR to embed and all sorts of other things. In many ways that's intentional and that's benign. But it's not immediately obvious to a lot of us these things are happening under the hood unless we look. So, how many people actually dig into the network tab just for fun? Yes! Okay. My favorite thing to do because I have a wonderful, fulfilling life, is to do this on incognito mode and look at the cookies and look at local storage and see how that stuff adds up. There's 18 different entries on the cookie level. There's local storage is kind of like, why does it need to store 500 kilobytes of this useless data? It's always the ones you least suspect. The Facebook player is doing everything you expect it to do. It is Facebook. If they have cookied you, they know whatever site you were on. They know that you were there. They probably know whether you played this or not. They're probably saying, oh, they played this. They're interested in this other metadata that's associated with the video. We'll make sure to use that information when they see something on our site or on one of our properties. Or that, like, by virtue of the person visiting the site that might actually indicate that they are, you know, of socioeconomic status or health or other, you know, other things about you that might kind of get pulled back into your profile as part of the decision making matrix they do to serve you. So, we actually just did this. Matt, am I missing anything? MATT: No, I don't think so. So, the other thing we did was consider just picking on the embeds. We took a look at several people out in the journalism space. MICHAEL: Matt was working on this tool yesterday and we kind of put in our own properties that we have worked on currently or before. And we did our best to put like human readable names to some of the ad networks, tech, analytics. The analytics we're less worried about in terms of people need to know what's happening on the site. But what we found, I don't know if I'm jumping ahead in the slides. So stop me. Over the course of about 507 different sites you see the same players coming up again and again. There's Facebook pixels, the same ad networks. You know, it's the same like four or five and that to me is kind of like even scarier. That means for all your browsing history, for everything you do on a daily basis, odds are you're always loading something from one of these sites. Which means that the record of your footprint across the web exists in multiple places. And there's probably a lot of decision trees happening in terms of making decisions and assumptions about you based on all these things you visit. And, again, some of these things exist on all pages through ads. Some of them are the ones we put there ourselves benignly. If you don't have the Facebook video embed, you still may have Facebook buttons on the site level. Did you want to dig into the analysis? MATT: No, the biggest takeaway is the frightening number of domains that are called and loading trackers that load other trackers and amplifies quickly. From Conde Nast, we're as guilty as everybody. But the other frightening thing about this, this is only calling out the handful that we identified. There's so much other stuff being tracked that is not in the report yet. But we will MICHAEL: This came up as a way of demonstrating things. We should make something of this. Privacy badger does a good job and a few others too that we stumbled on along the way. MATT: More on that later. If anyone wants to keep an eye on that or show that to somebody else, that's the URL, it's also on the slides. We'll move on. So, a couple ways to actually push back are tools that the browser gives us. One of them is simply just blocking referrers. Open logs, they show up when you need them. Suffocating that is a good way of protecting people. Think like even React timing and other JavaScript based tooling will calling out when you haven't used data refer tag just for safety. But there are other ways outside of the tag to block that. You can set it site wide, meta tag. Or if you want to load scripts without meta refers, that's a good idea. It's anonymous. And you can't get rid of the origin header that's always sent. That's a technological requirement. MICHAEL: How many knew this existed? One, two, three, four how many of you actually have this on your sites? Okay. No one. AUDIENCE: What does this do? MICHAEL: Basically as you go from one site to another, this kind of blocks some of the refer information that would go across. Not all of it, but certainly kind of stops with the data. I would stress here that what we're going to be talking about with this and a few other examples, we're not going to solve the problem of user privacy. Like there's no silver bullet. But I like to think of it as a series of paper cuts, right? We can actually like inflict little bits of annoyance across and hopefully build on that. And, again, if you were able to if you have a site that reaches 20 million uniques per month and you as a developer can just add that to the head of the global template. Like, that's a huge paper cut. That's a paper cut right between two fingers. Those ones are the worst. Not to get gory, but with enough paper cuts, you bleed to death. We can make substantial small but incremental pushback against, you know, just kind of shortchanging our readership. Okay. So, I'm to the going to go through like a hundred of these. But how many of you know that you can actually serve YouTube videos from a domain. I'm not just going to focus all on the developers. But if you as a developer, you have a WordPress site, you have a certain reach. You have the default intelligence for short codes. Editor drops in the YouTube URL and the markup. You as a developer goes into the short code and override, just use in domain. This is not a Google hacking, this is a Google owned domain. They say when you load the embedded player, we'll do it, but we won't track the user unless they hit play. So, again, they'll still track the user if they intend to engage them, but you're not going to throw away that data by default. And, again, how long is that going to take a developer to do? QA and code review aside, that's probably a 15 minute change. That's a very safe thing to do in the scheme of things. If you want to go further, like the other kind of offender would be well, less so. So, Twitter has a do not track meta tag. How they dole with that has kind of lessened over the last year. But, like the refer policy, if you drop that in the head of the documents or the template that is sitewide, basically Twitter endeavors not to use information it gathers for personalization. I don't know how many people here use Twitter. I don't know how many people like or dislike how they try to personalize your feed. I'm more on the reverse myself. But this is, again, a very small thing that's probably a five minute change on a template level. I'm being very optimistic here. Yeah. It can have that effect. If you want to go further on the editorial side, there is also a data attribute that you could use in your embed. If you're taking the default Twitter embed codes and copy and pasting across, you can then really from the do not track data attribute. Again, if you are I'm going back to WordPress. If you have control over WordPress short codes and how they re render Tweets, that's a small modification on the template level. All the embedded content, YouTube, Twitter, and we'll get to Facebook in a minute, is it's giving away a little less about your readership. And this one is my favorite. Because Facebook is everywhere. But do you know that like YouTube or Twitter, they actually do have a more restricted way of they have a way for you to tell them not to collect so much information. And anyone familiar with? There is laws that say that these companies should not be collecting information about children and minors. It has been there for a long time. It's really hard to enforce because it's like, people share computers, they share devices. There are very clear distinctions on some sites that have age dates because it's alcohol related or it's specifically for children like the Sesame Street site. But they have this really great kid directed site. You can use JavaScript SDK from Facebook. And I would say that you should read the documentation. They technically say that even if you're not a kid directed site, but you could have children coming to your site, they would encourage you or they would provide this method for you to make sure that less information is being collected. Now, my daughter's 10 years old. She uses my computer and she goes to the "New York Times" website. "New York Times" is not doing this. Again, I'm not picking on anyone. But they could. They could actually apply this on existing embed codes on a template level or however it's implemented. And it could vastly reduce the amount of information that Facebook is getting by virtue of being present. And I think that they would have, like, if Facebook tried to reason, hey, why are you doing this? There is legitimate kinds of arguments within Facebook's own documentation that they should do that. I don't know if there's any monitory consequence of doing this. I would urge you to just kind of think about that a little bit further before you actually do it. But the point I would say is, up until about six months ago, I didn't know this was possible. Twitter, YouTube, Facebook, others. They all have this kind of like, you know, in the deep end of the documentation. Like actually there's more of a privacy friendly one you can use instead. And the takeaway here that I'm trying to give you all is that if you're doing something, like, read the documentation. Like, there may actually u, you know, as part of your process just check to see is there a way to signal that this thing should collect less information? Sometimes it exists. Sometimes it does not. But we're not doing that. It's not a habit that we have. We're not used to it. But that stuff has been sitting out there for years. I haven't found one site, other than like actually kid focused sites that actually work with that. MATT: Sometimes we get asked as developers to drop pixels on pages. We're all moving to tag managers, which are definitely an efficiency. But that takes the auditing power out of our hands. So, we thought it would be nice to come up with some questions that you can ask in return. MICHAEL: Yeah, so this might be a little bit more developer, but have you been asked to add that one line of JavaScript to the page. Show of hands? What did you say in response? Did anyone have the perfect comeback? AUDIENCE: It's already there. [ Laughter ] MICHAEL: I have seen sites I'm not naming places I've worked. We have had things, that exact same request, and it wasn't turned on for years. Anyone else have some good kind of pushback against adding all this stuff to your page? AUDIENCE: My team does speed assessments. So, we measure the impact that any tracking tech has on our page. And if it's significant, we'll push back with the people who are who requested it or the organization or the company that maintains it. MATT: Now, is performance a shortcut to note, or is that just one qualification you have before adding? AUDIENCE: That's one. We have other things we do. What it tracks and how. We try to figure out what impact overall. If it's leaking data. We have a lot of so, people who don't know me, I work on the ad tech side of the Washington Post. And like one of the concerns as well is about data leakage. Which is trying to prevent situations where whole organizations will just perform header fitting on our site for the purpose of collecting data about our users with no intent to ever put an ad. And sometimes they will try and push in and have us add tracking codes or particular tracking script or a pixel. Which is really just another name for a tracking code that somehow I don't know why people call it a pixel. Because historically. MATT: Is everyone in the room familiar with the concept of header bidding? AUDIENCE: I learned yesterday. MATT: It’s the process of opening up live bidding on your advertisements in real time. But it's also an attack vector. You don't have to put an ad on the page. You can funnel it out. AUDIENCE: The thing it you have to send them a bunch of data about the page. Contextual data might include user data and other things. They decide on the backend system, there's a bunch of pre built bids. And they put a bid against your ad inventory. And if that is the highest bid, an ad shows. That's very easy to just make your bid real low knowing that you'll never have to pay out collectible bunch of user data and then potentially use it for other means. Or there are companies that will do this and sell their user data sets and that's their entire corporate model. MATT: Got to make money. AUDIENCE: Has anyone ever asked I tried this a couple times of like asking them to write a blog post or an article about why we're adding to the site. If it's really supposed to help users. That worked well with the business. MATT: Did you bring any good responses? MICHAEL: Did it work? AUDIENCE: Yeah, they were like, we thought it would be more money. MICHAEL: From my personal experience adding things, there's a few things to help in the future. Typically I have used web page performance as a legitimate pushback. It comes to, I would be happy to. I would love to add this thing. I think it's going to be really great value to collect more information about the readers to make more money. But we have a lot of information on the page already. What's one thing to swap it out with? What's it better than? That buys you some time to dig into it. The next thing I would do is, if it's just one line of JavaScript, the salesperson said, if it loads asynchronously. And you drop into the page, an isolation. Yeah, it does. But it also loads a CSS and another JavaScript which loads an iFrame and in turn loads all this stuff. And it's not compressed and has all these other things. And that one JavaScript file of 2 kilobytes in size is loading probably 300 kilobytes or megabytes of information into the page. The network tab is your friend. The other thing is oh, yeah. Like, you can love to add it, but what is it actually doing? What is it collecting? Is it just with us? Is it just information that we want that we need that we connect upon? Or does this third party get to hold this and either aggregate it with other sites and return anonymized information or are they selling it wholesale? Where is the privacy policy? Do we need to update our privacy policy? If you have to do that, you the legal team is going to take weeks to go over every word. Sometimes it's not worth the effort. I have dealt with the folks on the business side who thought it was a very easy thing to do. They were sure. They were not technical. They don't know. By bringing these questions up, very often you can block things. Other times it's like, yep, we need it. We'll go through the process. Fine. And so be it. I think it's also very helpful that if you do add these things, put a note in, when does this relationship ends? Is this a six-month trial? There's nothing worse when you're adding things, there's a list of things but you can't pull them up because you don't know if they're over and done with. We have had relationships that ended years ago and we're still sending information. MATT: All you need to say is you're leaving money on the table. You're giving the data for free at that point. And that will snap them to attention. AUDIENCE: I was going add in that regard, when I was at Salon, we handled all of our pixels with a manifest file that included the time it went active and the time it went enactive. When the tracking pixel relationship was over, it would turn off automatically. AUDIENCE: I'm curious if anyone has experience when the tracker that needs to be added is Google tag manager. Which the pitch is you can add tags without the developer involved. And how that kind of balances this kind of relationship. MICHAEL: That's trickier because everyone uses tag managers differently. I have used Google tag manager and incite on. And the good thing is you can see the history and when it was added. The other part is anyone can suddenly add it. That's something that Matt can speak to about the network analysis or just how you're tracking things. Like we have seen things like someone has added something and they actually have done it badly. And, like, boom, your site speed goes from worse to worser. And you're like, why did that happen? It's like, did we do a deploy? It's like, no, someone is messing with the tag manager which they should not do. I'm not a fan of those. AUDIENCE: Also, the tag manager is not a particularly efficient loader of scripts in my opinion. I personally resist using it whenever possible. MATT: One other problem with tag managers too is performance becomes really all of this is performance is less of an argument. Performance is getting better across the board. We have other initiatives to raise performance. We have to think about security. We have to do that. But as initiatives start to bring advertising pipelines in house and focusing on the first party rather than the third party, anonymous ish. And we need to know what it does. And just make sure that people understand that security is important and performance is cheap. AUDIENCE: If I may sorry. Even with performance being relatively cheap, it still matters. But not as much to the engineering side as it does to the business side, ironically. Because the time that it takes to load our ads creates an immensely different profit margin. Like when we switched how we handled our ads so they only rendered when they were in view and it caused our viewability to go up and it allowed us to load things faster, our overall CPM, the amount of money we make per thousand ads that show up on our page more than doubled. And that was entirely due to the ads showing up faster. So, when you're having that argument with the business side, if you can, like, run testing if you have the capability to run A/B tests, you have a huge opportunity to be like, hey. I care about performance and I understand that you don't. But this impacts your what you care about. Which is how much money we're making on a per ad basis. MICHAEL: In the back? AUDIENCE: Yes, if you could talk more about GDPR. There's a lot of talk about removing third party sites. It sounds like, you know do you know a timeline for when the US publications are going to be looking at that? MICHAEL: I love GDPR. I didn't. Last year it won't apply over here. So, show of hands. We're going to go to even though it has been asked. Who is familiar with the GDPR? Show of hands. Are you familiar because of the news or dealt with the technical AUDIENCE: Technical. MICHAEL: Hands up for I'll just keep talking. [ Laughter ] The short 30 second version of this or what GDPR is if you're unfamiliar is the EU passed this sweeping user privacy law. And it does many things from consent to right to be forgotten to a whole host of other things. The main one that's affecting publishers right now is if you have a significant EU audience, you have a parent company in the EU, if you have a thank you if you have a remote worker in the EU, you are technically in this kind of there's some legal liability here. And the liability is not insignificant. Like 4% of your global revenue as the maximum fine. So, billions for some companies. Anyway, the shorter version, which shorter version of this is consent. So, the law says that basically before if I want to use your information, I have to ask you. I have to tell you what information I want to collect. And then tell you what I want to do with it. And you can say, no. And I can't deny you the site services as a result. Also, if I say, wow, this is really cool data. We did this thing. That's awesome. I want to do something else. I have to go back and ask you again. It is your data. Just because I have a copy of it doesn't make it mine. I have to go back and ask you. And you can change your mind at any point and say, hey, that thing I said okay to six months ago. No more. Revoke my consent. And we have to manage that. And the problem with this is that GDPR in my opinion from and I'm not a lawyer but it's fundamentally incompatible with most ad marketing these days. So, to think of it as an example, we spoke about advertising. So, on a web page, you are a publisher. I want just one ad. You broadcast to the ad networks that you are partners with. Let's say you have five of them. You broadcast that user information. They go to the DSPs and various other things and there's bidding if necessary. And all this information gets shared along the way. And you don't know what's going to come back. But before that can happen, you, as the website owner, have to ask for consent. And it's not just, hey, consent is per nope. Here are all the things that could possibly load in the page. I'm doing a dot for everything I can think of. And you have to actually give your consent to all of those things for it to happen. Like from a user experience perspective, that's impossible. That is a very long form. You're not allowed to precheck them. You're not allowed to do that. And you you can't just opt the person in entirely. Or deny them the access to the site because they say no. And it's more or less impossible for any website to know at any given point what ads what services are going to load through that kind of gateway into their site at any given time. You can scan them in. Catch like, you know, these are the reoccurring ones. But it's really hard to do. And even then, those ads will look creative where someone has dropped in their own kind of pixel trackers for like some are very legitimate. I want to verify that so and so, when they said they showed my ad, they did. Others want to collect user information to retarget them on other sites. That is in theory grinding to a halt right now. It's a big problem. From people I have spoken to, they said actually that GDPR was envisioned with a view of being incompatible with advertising as we know it. And I think that's wonderful. [ Laughter ] But also this is kind of going off talk. I'm going to pull it back in a minute. To me that's a very interesting opportunity for publishers. We are so far away from our data and readership and all this ad tech in between. I think the Guardian did a thing where they bought ads on their own site. And for every dollar they spent, they only saw 25 cents of that. 75 cents was going to the ad tech. We have the opportunity, if we were to do the advertising, we have the funnel and we can kind of close that gap between us and our readership and not leech their data out to everyone who is listening, we could actually have a better experience and a many fundamental revenue relationship with our readers. That gets into GDPR stuff. I can go more into this. I hope we have questions. And I can talk until 6 p.m. on this if anyone wants to do that too. We're okay for time. MATT: We have time. AUDIENCE: California law just passed. AUDIENCE: Passed? AUDIENCE: Passed. MICHAEL: California just passed a law that's GDPR like. I'll you should read up on this. Basically there was this real estate developer talking to a Google engineer a couple years ago months must have been years. And the Google engineer, if people knew the stuff we were collecting on them. Oh, my god. And he was like, okay. He basically came up with legislation and put it on as a ballot initiative and said, I'm going to just get this on the ballot and it's probably going to pass. And so, he made a I don't know how it transpired. But basically it came up that, okay, if they could pass a lighter version but still a meeting certain bars through the legislator, he would withdraw it as a ballot initiative. And so, it just passed yesterday. So, technically a stronger more astringent version than on the ballot. AUDIENCE: Bonus round. Maryland passed a law against or to try to deal with the issue of Russian kind of trolling bot, whatever. So MICHAEL: I did not know this. AUDIENCE: Yeah, nobody knows about this. But Baltimore Sun. So, the law went into or it goes into effect July 1st. Under the law if somebody's buying a political ad on your website, they have to identify themselves as being like a PAC or whatever. Saying where they're getting their money from, what's their headquarters, blah, blah, blah. And you, within 48 hours, have to post on your site a searchable list of who is buying the ads on your site. MATT: Which just prompted Twitter to announce this. AUDIENCE: In Maryland, there's no penalty associated with it yet unfortunately. I think the Sun is probably going to take them to court because we don't like the forced publishing requirement. But it applies to everybody if you have readers in Maryland, which you do. MICHAEL: My secret hope is that between EU and now California that Canada will follow suit and then that's like North America. I have opinions. MATT: All right. Back to the developer Y side. Developers have more universal options on security policies. MICHAEL: I can take it or you can. I have done a lot of talking, I can just shut up. Okay. So, my next favorite thing. Content security policies that came up recently because of GDPR. But basically cross site scripting attacks, it was a thing where you could specify only what you would white list. You can say, you can have JavaScript on my page, but you can't do eval or other things. You can have JavaScript in line on your page, but only if it has, you know, this non associated with it. You can get deep into it. This would be great for nonprofits, for example. Hypothetically, if you're ProPublica, and we only track for Google analytics and charting. You can create a content security policy, I'll show you what they look like. You restrict this is the CDNWs for our images, this is where the page loads from and this is Google analytics. They're cool. And anything else that tries to load will be blocked at the browser level. I think ad blocker. You can write your own mini ad blocker on a website level. It just shuts it down. So, even if you let code run something on a different domain and then that code on that domain tries to load something from somewhere else and then iFrame or whatever. It gets shut down. It doesn't get to the page. So, if you are a nonprofit, for example, this would be a great way to make sure that if as you embed content, if you're willing to do the work, you inspect it and say, okay. This stuff, and this will load, but I won't white list this specific sub domain where things are reported back. That might be against terms of service, so tread carefully. I want you to know that is, in theory, possible. And the next thing about this, go to the next slide. This is a simple example where you can set it on the header level and kind of say allow self to load, allow media from M1 and M2.com, only allow scripts from this to load. You can do that quite easily and as a meta tag on your page. And you get really complicated here. It could be conditional. If they're from the EU, you could lockdown the site. You can use the language, document.language. I hate to say. For example, the L.A. Times, for disclosure, the first newspaper I worked, we did block a number of sites from EU traffic. The reason being we needed to make sure we got them compliant. We were late to the compliance game. We did that as we white list and move forward. But what we did is the cookie only users coming in from the EU. Looking at the IP address. Certain country, EU, legal jurisdiction. And for those that were EU, we used content security policy only to white list things that were essential to the site function. Images, our own JavaScript. And then basically block everything else. So, our sites are really, really fast in the European Union right now. You can't white list ads this way because you don't know all the possible permutations of the domains that could be loaded as a virtue of having ads. So, all the ad created from all the different agencies. You can't really realistically fit that in there without going crazy. But I would encourage you to look at this as a way of quickly getting the sites like if you don't care about your EU revenue, for example. You may want to follow a subscription model there, for example, and not going the ads way, you could probably lockdown your site in about a I don't know half a day. Not counting what DevOps need to do for the setback in the first place. I really think you should look into that. Oh, yes. MATT: Yeah, the point of a lot of the stuff here is that we're in control. And we need to start taking a lot more responsibility over how we employ these things. As we have shown along the way, these are simple changes that could save a lot of data from being leaked out. Whether it's assets not served by you or approved by the site or taking the user's line and prevents cookies from loading wherever you can. Or tracking from happening wherever you can. Another hard question for developers is, there's a couple lines of code. Is there anything really stopping you from making these changes going forward? This feels like the kind of position you might want to ask forgiveness instead of permission. MICHAEL: I want to kind of double emphasize that. If you have some free time and you're bored. You don't know what ticket to cover next, or you're between projects, some of this is very low hanging fruit. There's probably no revenue impact for some of the easier examples. Code review, if people have questions. They can be excited to learn this. MATT: And like a radical code review. Unexpected security breach. MICHAEL: Radical code reviews. AUDIENCE: So, you said there's probably no revenue impact for some of this stuff. But in my experience the more information that you can gather about the user via cookies the more advertisers are going to bid for that impression. MICHAEL: Yes. So, I was sorry, I was referring to some of the embedded examples. AUDIENCE: I guess my question is, how do we balance the need to monetize and publish your websites with the user privacy concerns? So, this would be specific to your site. So, for example, we don't have many sites don't have an ad agreement. They're not using their own YouTube embeds on the channel. They're somewhere so hey, here's a story. They uploaded the video, YouTube or video and we're embedding that because it's relevant to the story. You know, if there's a pre roll ad, someone else is making money off of that. It's not us. So, that's all cool. You could use that kind of cookie less YouTube domain without consequences. MATT: Be careful about removing Facebook. MICHAEL: I think Facebook is fine. But I could be wrong and don't want to scare or get anyone in trouble. For me, the non lawyer, reading the terms and conditions. It is completely legitimate that you think if you can make a case that children come to your website and we don't always recognize that they're children. In fact, we can't recognize that they're children because of people sharing devices and we don't ask, then you should be using that kid friendly attribute. And you can make a very solid point that you do. Again, my daughter goes to the "New York Times" website. Washington Post too. Put this on the list. But so her information is being tracked under the guise that they're thinking it's me, okay? So, there is a very clear direction there. But we don't know the follow up. We don't necessarily know that these options exist. And maybe when we examine them and dig deeper, we'll know that they'll penalize us in other ways. Maybe we need to talk to them about a that and maybe do better. Again, some things drive conversation others make an immediate effect. AUDIENCE: Before you try to make the Facebook call on your own side, it's also the business side. The audience development team sends user tags through the Facebook pixel to retarget users with advertisements for stories on your own site on Facebook. So, just a head's up. MATT: So, what are you guys doing right now to fix this? Anyone taking proactive measures? Using YouTube no cookie or turning off Dragon. MICHAEL: We can do this off the record if that's helpful for people. If you want to talk about this and say off the record. AUDIENCE: I mean, sort of it's not exactly this. But in terms of like proactive measures, another good thing that you can do is if you know that there are particular JavaScript methodologies that people are using to track or hijack your site in particular ways, we write no op functions and place them against the Window level JavaScript object. So, like the methodology for doing scroll jacking, you can just write a function that returns true and replace that in that CSS methodology and it's no more scroll jacking on your site. If you're interested, some of that code is open sourced on my personal GitHub. But that's just another thing that you can add to that. MICHAEL: Was there someone else? AUDIENCE: Yeah, I guess I'm I'm curious about how to build a natural practice of doing this kind of I mean, like of work in the development environment. I mean, there certainly seems very possible to kind of slip some of these things in or sort of take some responsibility as an individual developer. But it's hard to imagine or it's hard to see where a more concerted sort of institutional level effort to tamp down on these kind of data leaks would come from. Especially when it seems like, you know, it's like we're not the ones doing it. Really. You know? Or people are doing this stuff all, you know, people are, you know, leaking constantly at their own volition, basically. MICHAEL: I would say absolutely right. And that's partly why I want to talk to people in this room about it. But, you know, I would almost say like, I'm thinking out loud. Do I know any WordPress core developers to take to lunch? Hey, short codes, you should just change the defaults to this. And stuff like that would naturally then go into the ecosystem and be more of a top down kind of thing. But short of that happening, and I don't know where they like to go for lunch, you know, I'm trying every avenue. Like Dan and I sorry, Matt and me. I called you Dan yesterday as well. Like several times. I'm trying it from all angles. But you're absolutely right. I would love any thoughts or insights you have now or in the coming weeks of, oh, actually we could do this thing. I mean, we could do a browser extension that kind of like does this stuff. But that only works for people who install the browser extension which there are kind of many at this point. Do you follow? AUDIENCE: Yeah, I guess one thought that occurs to me is kind of positioning these choices in line with building reader trust. To say something that you could trust when you come to our site is XYZ. And making that an explicit part of sort of coming to get news from us. And then that also builds internal engagement that would actually solve these problems rather than being kind of stealthy trying to, you know, close some leaks here and there. MICHAEL: Yeah, they have done a decent job where there is embedded content and added things, they have the overlay, this site is not supported. Do not track. And you have to click. That is very proactive on their part and I wish it was easier to follow suit on that. Down in the back. AUDIENCE: So, one I think this that I think when you're talking to, you know, the various business people that I try to emphasize is that really tracking is kind of a prisoner's dilemma. It's in the interest of you as the individual organization to get the tracking. Because, you know, as we said earlier, if you have an ad that's based on targeted to the specific user, you're getting a higher CPM. You know, significantly higher. And so, therefore you wouldn't do it. But industry wide, that's terrible. It creates like a race to the bottom. So, imagine, you know, let's say you have a niche interest like fountain pens or something like that. In a world where there's no tracking, you're running the pen blog. If you want to advertise to people to get them to buy your $500 content, you have to put it on the content blog. In the world that exists today, you drop the cookie and show the ad on the other pages. And it tracks you around. Why am I seeing the same ad on every website and even on my phone? I didn't look at it on my phone. Now all of the other websites are getting the high CPM ad. It's good for them. But at the end of the day, the fountain pen company has a certain advertising budgets. They're going to spend it. That's what's the limiting factor. So, if there's no tracking, they still are going to have the same budget and they'll just put it all into the fountain pen blog and send it into the other ten websites. So, to the publishers with these are good. We have big audiences that are niche specific for local areas. If you can't do tracking, if you want to advertise to Maryland, you have to come to the Baltimore Sun, blah, blah, blah. If there is tracking you can get them on every other website. It doesn't matter if it's a national site or not. The thing that I want to emphasize to business people, whatever. This is the prisoner's dilemma and the law is going to get us out of jail. The law is going to break the prisoner's dilemma. And it's going to be like TV advertising, radio, podcast advertising if they don't fuck it up. All of those have no tracking. And they're all fine and they have individual problems and healthy and you know healthy parts. But they're all fine without tracking. It's just web display ads that have tracking. And they're also the worst one in terms of like monetary revenue. MICHAEL: Actually, I'm sure most people have a Facebook account. Have you gone into the part of your account settings where it's like what Facebook knows about you? Okay. How accurate is that? I never figure myself as a salsa dancer. It's funny because it came up last night because there was dancing. That was in my profile last time I checked a month ago. Along with a few other things where I have no idea why it thinks it's about me. It's how you see yourself and versus how you behave and act and are different. But there were things in there that were just plain wrong. And whatever advertising I get is based on half right information and half wrong. And so, I would agree with you. I would say like this level of ad personalization, again, depends on who is doing it. Some people say Instagram ads are very relevant to them. Others say other. I mean, it varies. So, I would love to see a world where actually you have less data points to track people. You spend your budget on certain things and outreach and it's not like this, you know, you spend X thousands of dollars when you don't actually know what's working for people. AUDIENCE: Yeah. I would say the flipside of that is if you want to discourage user tracking as a practice on your site, increase the amount of contextual data that goes into your ad tech from your site. Don't just make it whatever editor key words decide to be in there. Authorship data is valuable. Like people don't think about it, but advertisers do target bylines. Because they know what type of stuff that particular author presents and that's in line with their interests. Information about what your site does, how it works. And because these systems also do have their own degree of crawling and internal detection of the page that they're on, adding structural metadata to your page is a valuable way to increase how profitable your Arizona are. So, go to schema.org. You can see how The New York Times and the Washington Post and other publishers have implemented it. And implement the schema.org metadata object on your site. It will help profitability and it's good for the web. AUDIENCE: More broadly, I want to stress also. There's two types of data that I talk about. There's definitely the stuff that's happening on the ad side. The contextual advertising. But there's also just the data that people are collecting, you know, their services and they use this information downstream in other ways. But that's you're just leaking out. So, while you deny data to one side, you're not denying it to the other side. And those things can definitely get intertwined. But I just want to make sure people know there are two different buckets there. AUDIENCE: Just touching on the sort of CPM question. I don't really have a sense. Like we often say these are higher revenue. But I wonder how true that is. And I wonder if there are that actually show what the revenue and the pricing actually breaks down as? Because I know I've heard stories of like getting put on pages because it's a sales guy going and talking to an exec. MICHAEL: I can relate to that. AUDIENCE: And, like, it's not a good deal for the publishers. And the tracking is terrible and all of this. But like we don't actually have the numbers to counter this. And I think like that should be part of this conversation. And like I don't have that information. I don't know if anyone does. MICHAEL: Yeah. That's a good point. I'm happy to speak on that. SFK which came up there is one of the properties on the papers where I worked. And recently sites like that where we switched. And we did see a noticeable change in web page performance and load time overall. And this was something like where, you know, part of it is how you engineer these things. The other part is, yeah, you're adding on to this page. But we didn't have tangible information where we could argue that, okay, there is going to be this impact. How do you map that like, you know, a I'm just going to be very optimistic and talk about high speed Wi Fi. But from like 10 15 seconds. What does that 50% mean for revenue on the ad side? It is getting much easier to do that now. Webpagetest.org is very good. We do 3G speeds on mobile versions of the article and have that thing over a long period of time so you can see bumps in the speed curve and other services really help with that. So, when you have a deploy. When you, you know, launch your implementation or, you know, it's the end of the quarter where you're getting a lot of ad traffic being dropped in, you can see these noticeable changes over time. And, you know, if you have the team that can map that to CPMs or map that to the bottom line, it makes it an easier case for explaining why, actually if we did this, ads loaded faster because they weren't competing with network requests for the rest of the page for those images. Time on site improved. Viewability improved because the ad loaded before they scrolled out of view. It's hard to measure those things. But from a web page performance and user experience and CPM, things are all interrelated. It's easy for a business person to say we do this one thing in this one line of JavaScript and we're going to make 20 grand a month or whatever it is. That's a very hard when things are hard to measure, they're hard to show value. AUDIENCE: Yeah, I would say if you're interested in measuring that and showing the value, figure on if you can month long tests or especially month long A/B tests. Because if there's an impact to CPM, it usually takes about a month to show up. When somebody asks you to deploy a thing, like, hey, let's do an A/B test for a month and see what that deployment affects. If it does, there's all sorts of conversations that you can have. They may think it will make more money because that's the top-level report on what a person is saying. But at the end of day, if it affects your performance it will drop the amount of money overall that your ads are making. MICHAEL: So, this is a tangent, but it's web page performance, ads tracking opinion it's all interrelated. We did this project over a year ago where we took a broad set of what we thought were typical articles across media properties. The mobile version if they had it. And using webtest.org to test the speeds. And the lower, crappier speed test is because you see it basically drags out the timeline so you can drill down and say, okay, I see what was slowing this down. And some of this is like how you build your site. Some is how much trackers, how much ads, how much other things are you loading? And you can see there's significant distinctions between sites that are really freaking fast. CityLab, I don't know what they're doing. Maybe a bespoke ad network, shunning the usual partners. But they're crazy fast. FT had dedicated developers for web page performance. But if you start scrolling down you see, like, okay, there's speed index, which is a measure that Google comes up with about how much they how fast or not fast thing your site it. And this is one of many SEO signals in terms of search results. The lower you have that number, the much better. You see number of requests. If you scroll down to the bigger ones. This goes down to about 60. You have a couple of properties that I work at. The New York Times used to be in the top ten or top 15 for a while. I'm not sure what they did in the last three months that changed things. I'm not picking on them. We all have salesmen in the closets. San Francisco, we redid the article template and got the load time down to 22 seconds. That's still really low. But we were maybe 48th before that. So, it's kind of like phase one. And actually let's see. Again, some of these have 430 different requests. Wired is 528. Why does Wired have 528 requests on a page? AUDIENCE: Where was this from? MICHAEL: Webperf.net and shows the last four successful tests. AUDIENCE: Is it using Lighthouse to calculate load timing? MICHAEL: This is not. It's using webpagetest.org. And looking at the far bottom. Keep going. Keep going. You want to click on that link. MATT: I don't know which one you pointed at. MICHAEL: At the spreadsheet. Sorry. So, like, this data I think this one goes back about a year. Has the raw data from all the tests there. And if you and then these are the results basically. There's two things happening here. One is all the properties that you see the results. There's another project happening sideways to that where I'm trying to compare articles to the site article and see what is really faster. And then you can if you can scroll off to the far side, you can get a link to the so, I don't want but basically you can take this test ID, go to webpagetest.org and see the full test results. And webpagetest.org will hold on to the last 3 7 days’ worth of tests. But you can dig into that kind of archive for the most recent stuff. But, again, if you can have a deploy process, a development process where if you make changes to the pixels that you're adding to the page or to the ad setup, you can see these kind of considerable gains over time. To echo his point. When we actually removed when we redid the template, removed a lot of the trackers. Some of them we didn't have the deals anymore. And in the advertising revenue, it kind of doubled numbers are still early yet. But we saw significant upward gains. And that to me is a privacy issue. But if you can tie it back to a revenue issue, more people are going to listen to you. MATT: We're just about at time. Any other questions? AUDIENCE: Yeah, I was going to add something to the GDPR conversation. We disabled that entirely in the EU. And also video for a certain amount of time. I think we're back to serving video again. But for us the issue wasn't so much whether or not we could solve the permissions compliance. It's just compliance itself. As a definition it's murky right now. And there's a great fear, like it's not been tested yet. Nobody's actually received the first test. And that will be really kind of a litmus test. So, until we see that, I think our plan is to just wait and see rather than try and solve the permissions issue first. MICHAEL: Yeah, I mean, the things yeah, there's two options. You outright block. But I think that there's also a lighter blocking option where you don't completely gut your page. You just like if you use content security policies, you could in the space of I would literally say one to two hours, depending on how your site is architected, if you get that on a template site wide, you could restrict the you could still allows the users to see the content without the risk that things are loading that shouldn't. AUDIENCE: I'm sorry, we just log ads. Our content is basically ad free. MICHAEL: Okay. You're not blocking EU users. That's cool. All right. Oh, yep. AUDIENCE: I just had a quick question because it seems like these strategies share a lot with like progressive enhancement techniques. And whether or not anyone has had experience in terms of like starting with a page that is very restricted, has strict content policies. And then lifting those as they were implemented. And on the scripting side, modifying HTML after the page is loaded. MICHAEL: I haven't dealt with that personally. Has anyone here? AUDIENCE: We have a progressive web app set up for our site. But I don't think we've done anything specific to it for privacy. We just in the templates remove the calls in the EU. Yeah. MICHAEL: Well, what I'm I'm going to call this session done. But we're here for a while for any questions or follow up conversations. I want to thank everyone for coming. I hope you learned something you can bring a back to your respective organizations and move the discussion further. [ Applause ]
138.556769
1,135
0.766621
eng_Latn
0.999942
4875c371c688e055f8da64bf6bd99f0dd4fba04d
9,202
md
Markdown
content/blog/mashup-10th/dahyun/recyclerview-anti-pattern/recyclerViewAntiPattern.md
YuChocopie/mash-up-android
a33880b48f343e95523563d84e781465c876df10
[ "MIT" ]
10
2020-12-05T06:33:27.000Z
2021-11-07T01:02:40.000Z
content/blog/mashup-10th/dahyun/recyclerview-anti-pattern/recyclerViewAntiPattern.md
YuChocopie/mash-up-android
a33880b48f343e95523563d84e781465c876df10
[ "MIT" ]
151
2020-12-05T07:00:36.000Z
2021-12-13T18:40:07.000Z
content/blog/mashup-10th/dahyun/recyclerview-anti-pattern/recyclerViewAntiPattern.md
YuChocopie/mash-up-android
a33880b48f343e95523563d84e781465c876df10
[ "MIT" ]
26
2020-12-05T06:33:19.000Z
2021-09-09T13:25:45.000Z
--- title: "RecyelrView Anti Patterns" date: "2021-02-21" tags: ["recyclerview-anti-pattern", "recyclerview", "anti-pattern", "mash-up"] description: "실수하기 쉬운 리싸이클러뷰 안티패턴에 대해 포스팅해보겟습니다." cover: "./recyclerview_anti_patterns.png" --- 안녕하세요. 메시업 안드로이드 10기 이두한입니다. 이번엔 Android Weekly에 올라온 [RecyclerView-AntiPatterns](https://proandroiddev.com/recyclerview-antipatterns-8af3feeeccc7) 내용을 번역하여 포스팅 해보겠습니다. --- ## 1. Initializing in bindView 첫번째는 뷰를 완전히 재사용하지 않는 안티패턴입니다. `RecyclerView`에서 `텍스트뷰` 하나만을 보여주는 예제입니다. ```kotlin class RecyclerViewAdapter( private val onItemClick : (Data) -> Unit ) : RecyclerView.Adapter<RecyclerViewAdapter.MyViewHolder>() { //..Other overrides private val itemList: List<Data> = //...DO STUFFS inner class MyViewHolder(val itemView: View) : RecyclerView.ViewHolder(itemView) { val tvText : TextView = itemView.findViewById(R.id.textView) } override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MyViewHolder { val itemView = LayoutInflater.from(parent.context).inflate(R.layout.item, parent, false) return MyViewHolder(itemView) } override fun onBindViewHolder(holder: MyViewHolder, position: Int) { val itemAtPosition = itemList[position] holder.tvText.text = itemAtPosition.text holder.tvText.setOnClickListener { onItemClick(itemAtPosition) } } } ``` 문제점을 바로 찾으셨나요?? 문제점은 바로 `onBindViewHolder`안에서 `setOnClickLisenter`를 수행하고 있다는 점입니다. `View`가 재사용되고 bind 될때 마다 `clickListner`를 set 해주는 것은 아주 비효율적이고 성능에도 영향을 미칩니다. 이를 해결하기 위해서는 `ViewHolder`를 초기화 해주는 곳이나 `onCreateViewHolder`에서 `setOnClickListener`를 수행하도록 수정하면 됩니다. ```kotlin inner class MyViewHolder( itemView: View, private val onTextViewTextClicked: (position: Int) -> Unit ) : RecyclerView.ViewHolder(itemView) { val tvText: TextView = itemView.findViewById(R.id.textView) init { tvText.setOnClickListener { onTextViewTextClicked(adapterPosition) } } } ``` 또한 `itemList[index]` 형태로 리스트의 아이템을 전달할때 `Adapter` 내부에서 아래와 같이 캡슐화 할 수 있습니다. ```kotlin //onItemClick is a parameter in Adapter constructor private val onTextViewTextClicked = { position: Int -> onItemClick.invoke(itemList[position]) } override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MyViewHolder { val itemView = LayoutInflater.from(parent.context).inflate(R.layout.item, parent, false) return MyViewHolder(itemView, onTextViewTextClicked) } ``` --- ## 2. Having logic inside the adapter 2번째 안티패턴은 `Adapter`가 로직을 가지고 있는 겁니다. `Adapter`은 유저에게 `ViewHolder`들을 보여주는 역할만을 해야 합니다. ```kotlin override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MyViewHolder { val itemView = LayoutInflater.from(parent.context).inflate(R.layout.item, parent, false) return MyViewHolder( itemView = itemView, onTextViewTextClicked = { position: Int -> val itemAtIndex = itemList[position] val intent = getDetailActivityIntent(itemAtIndex) parent.context.startActivity(intent) }) } ``` 위의 코드를 보면 `onTextViewTextClicked`안에서 intent를 받아오고 Activity를 실행시키는데, 데이터를 전달하는 역할만을 해야합니다. 만약, 같은 UI를 재사용하지만 클릭시 다른 interaction이 필요하다면 아래와 같이 여러개의 `Callback`을 사용하는 방법이 있습니다. ```kotlin class RecyclerViewAdapter( private val onAddClick: (itemAtIndex: Data) -> Unit, private val onRemoveClick: (itemAtIndex: Data) -> Unit, private val onItemClick: (itemAtIndex: Data) -> Unit ) class RecyclerViewAdapter( private val onItemViewClick: (clickedViewId: Int, itemAtIndex: Data) -> Unit ) ``` --- ## 3. Changing the state of view inside the ViewHolder 3번째 안티패턴은 `ViewHolder`안에서 뷰의 상태를 직접적으로 바꾸는 것입니다. 예를 들어서 ViewHolder안에서 `CheckBox`상태를 바꾸는 예제를 보겠습니다. ```kotlin override fun onBindViewHolder(holder: MyViewHolder, position: Int) { //Note: checkbox clickable is set to false to control the logic ourselves holder.itemView.setOnClickListener { //Toggle holder.checkBox.isChecked = holder.checkBox.isChecked.not() } } ``` 이 작업을 수행하고 목록을 100개 항목으로 채운 다음 처음 2~3개 항목을 확인하고 아래로 스크롤하면 해당 위치를 클릭한 적이 없는 경우에도 다른 위치가 확인되는 것을 볼 수 있는데, 다시 한 번 뷰가 재활용되고 있기 때문입니다. 상태가 확인된 뷰가 재활용되면 그대로 유지 됩니다. 이는 아래와 같이 data class에 `isChecked`변수를 `false`로 설정하여 해결할 수 있습니다. ```kotlin data class Data( val text: String, val isChecked: Boolean = false ) ``` ```kotlin override fun onBindViewHolder(holder: MyViewHolder, position: Int) { holder.checkBox.isChecked = itemList[position].isChecked holder.itemView.setOnClickListener { holder.checkBox.isChecked = holder.checkBox.isChecked.not() } } ``` 다시 테스트하고 데이터를 채우고 일부를 확인하고 아래로 스크롤 하면 괜찮아 보입니다. 하지만 위로 스크롤하면 모든 체크 상태가 사라지게 됩니다. 데이터 클래스 내부의 `isChecked`가 변경되지 않고 `false`로 유지되기 때문입니다. 이를 해결하기 위해 `Adpater`를 다음과 같이 변경했습니다. ```kotlin override fun onBindViewHolder(holder: MyViewHolder, position: Int) { val itemAtPosition = itemList[position] holder.checkBox.isChecked = itemAtPosition.isChecked holder.itemView.setOnClickListener { itemList[position] = itemAtPosition.copy( isChecked = itemAtPosition.isChecked.not() ) } } ``` 만약 유저가 리스트 안의 `checkbox`를 모두 선택하고 해제하는 기능을 넣는다고 가정하고 `Adapter`에 `selectAll`과 `UnSelectAll`이라는 두가지 기능을 더 추가해보겠습니다. ```kotlin fun unselectAll() { itemList.map { data-> data.copy(isChecked = false) } notifyDataSetChanged() } fun selectAll() { itemList.map { data -> data.copy(isChecked = true) } notifyDataSetChanged() } ``` 그런 다음 상태가 다른 경우에만 `notifyDataSetChanged()`를 호출하도록 개선해보겠습니다. ```kotlin fun unselectAll() { itemList.mapIndexed { position, data -> if (data.isChecked) { notifyItemChanged(position) data.copy(isChecked = false) } } } fun selectAll() { itemList.mapIndexed { position, data -> if (!data.isChecked) { notifyItemChanged(position) data.copy(isChecked = true) } } } ``` 지금 보면 `Adapter`안에서 너무 많은 일을 하고 있습니다. 나중에 항목을 제거하거나 숨기는 등의 기능이 추가된다면 `Adpater`는 점점 무거워집니다. `ViewHolder`를 보여주는 것 이외의 역할을 하고 있고 `bindView`은 로직도 갖고 있습니다. Adapter는 가능한 추상적으로 만들어야 합니다. Adapter를 추상화 하는 방법은 item이 변경될 때 마다 `Adapter`가 새로운 `itemList`를 받도록 아래와 같이 수정하는겁니다. ```kotlin class RecyclerViewAdapter( val onCheckToggled: (position: Int, itemAtPosition: Data) -> Unit ) : RecyclerView.Adapter<RecyclerViewAdapter.MyViewHolder>() { //..Other overrides private var itemList: List<Data> = listOf<Data>() fun submitList(itemList: List<Data>) { val oldList = this.itemList val maxSize = Math.max(newList.size, oldList.size) for (index in 0..maxSize) { val newData = newList.getOrNull(index) val oldData = oldList.getOrNull(index) if (newData == null) { notifyItemRemoved(index) return } if (oldData == null) { notifyItemInserted(index) return } if (newData != oldData) { notifyItemChanged(index) return } } } inner class MyViewHolder( itemView: View, onItemClick: (position: Int) -> Unit ) : RecyclerView.ViewHolder(itemView) { val checkBox: CheckBox = itemView.findViewById(R.id.checkBgox) init { checkBox.setOnClickListener { onItemClick(adapterPosition) } } } override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MyViewHolder { val itemView = LayoutInflater.from(parent.context).inflate(R.layout.item, parent, false) return MyViewHolder( itemView = itemView, onItemClick = { position -> val itemAtPosition = itemList[position] this.onCheckToggled(position, itemAtPosition) } ) } override fun onBindViewHolder(holder: MyViewHolder, position: Int) { val itemAtPosition = itemList[position] holder.checkBox.isChecked = itemAtPosition.isChecked } } ``` `submitList`안에서 간단하게 diffing을 구현하고 있습니다. 하지만 매우 비효율적인데요, item이 100개가 있으면 해당 로직은 메인스레드에서 100번 반복되게 됩니다. 이러한 문제를 해결하고 비교를 더 쉽게하기 위해 `ListAdpater`가 등장했습니다. ```kotlin class RecyclerViewAdapter( val onCheckToggled: (position: Int, itemAtPosition: Data) -> Unit ) : ListAdapter<Data, RecyclerViewAdapter.MyViewHolder>( object: DiffUtil.ItemCallback<Data>() { override fun areItemsTheSame(oldItem: Data, newItem: Data): Boolean { return oldItem.id == newItem.id } override fun areContentsTheSame(oldItem: Data, newItem: Data): Boolean { return oldItem == newItem } } ) ``` 이제 diffing은 백그라운 스레드에서 수행되게 되고 `submitList`에 itemList를 제출하기만 하면 모든 로직이 효율적으로 처리됩니다. ListAdpater에 대한 자세한 설명은 `유정`님의 [RecyclerView ListAdapter DiffUtill](https://mashup-android.vercel.app/yuchocopie/recyclerview/ListAdapter/)을 참고하시면 됩니다. 이상으로 RecyclerView-AntiPatterns에 관한 포스팅을 마무리하도록 하겠습니다. 감사합니다. --- ### Reference [https://proandroiddev.com/recyclerview-antipatterns-8af3feeeccc7](https://proandroiddev.com/recyclerview-antipatterns-8af3feeeccc7)
30.879195
260
0.678222
kor_Hang
0.989268
48768306220066b9430f5cbb7eb270c9f1d78366
849
md
Markdown
01 - Guias Estelares/02 - Guia estelar de CSS/07 - a-cascata.md
PabloXT14/Discover-Rocketseat
73abad3e491a1bf6038865ee3c4265ebc5750c11
[ "MIT" ]
null
null
null
01 - Guias Estelares/02 - Guia estelar de CSS/07 - a-cascata.md
PabloXT14/Discover-Rocketseat
73abad3e491a1bf6038865ee3c4265ebc5750c11
[ "MIT" ]
null
null
null
01 - Guias Estelares/02 - Guia estelar de CSS/07 - a-cascata.md
PabloXT14/Discover-Rocketseat
73abad3e491a1bf6038865ee3c4265ebc5750c11
[ "MIT" ]
null
null
null
# A Cascata (cascating) A escolha do browser de qual regra aplicar, caso haja muitas regras para o mesmo elemento. * Seu estilo é lido de cima para baixo É levado em consideração 3 fatores 1. Origem do estilo 2. Especificidade 3. Importância ## Origem do estilo inline > tag style > tag link ## Especificidade É um cálculo matemático, onde, cada tipo de seletor e origem do estilo, possuem valores a serem considerados. 0. Universal selector, combinators e negation pseudo-class (:not()) 1. Element type selector e pseudo-elements (::before, ::after) 10. Classes e attribute selectors ([type="radio"]) 100. ID selector 1000. Inline ## A regra !important * cuidado, evite o uso * não é considerado uma boa prática * quebra o fluxo natural da cascata ```css h1{ color: red; } *{ color: blue !important;/*este vai ganhar*/ } ```
19.295455
109
0.725559
por_Latn
0.996404
4876bdfe29b06ca5d7bde77e3d8451f4dd335ab0
190
md
Markdown
README.md
kevinlr5/mountain-west-cams
d5135e81f1fb61072980351cc9f7fe71b3e1716c
[ "Apache-2.0" ]
null
null
null
README.md
kevinlr5/mountain-west-cams
d5135e81f1fb61072980351cc9f7fe71b3e1716c
[ "Apache-2.0" ]
7
2019-12-15T16:56:10.000Z
2019-12-16T01:56:35.000Z
README.md
kevinlr5/mountain-west-cams
d5135e81f1fb61072980351cc9f7fe71b3e1716c
[ "Apache-2.0" ]
null
null
null
# mountain-west-cams A map of cameras in the Mountain West - Repo: https://github.com/kevinlr5/mountain-west-cams ## Versioning and Releases [Versioning and Releases](docs/versioning.md)
21.111111
54
0.768421
eng_Latn
0.732298
4876f72c6e4087b00165038bb1a7efcd3043b3b3
108
md
Markdown
README.md
khr0x40sh/PowerSurfer
60b7a482a65cde72fef207ba950f7e16191bafd5
[ "MIT" ]
13
2015-06-13T12:24:39.000Z
2020-03-26T07:18:09.000Z
README.md
khr0x40sh/PowerSurfer
60b7a482a65cde72fef207ba950f7e16191bafd5
[ "MIT" ]
null
null
null
README.md
khr0x40sh/PowerSurfer
60b7a482a65cde72fef207ba950f7e16191bafd5
[ "MIT" ]
10
2016-08-18T05:14:18.000Z
2021-11-22T13:58:04.000Z
# PowerSurfer A powershell based traffic generation scripts to simulate user activity via Internet Explorer
36
93
0.851852
eng_Latn
0.962804
487745f2cf4860db7252048f3a7239786a824834
10,054
md
Markdown
articles/databox/data-box-deploy-picked-up.md
chclaus/azure-docs.de-de
38be052bda16366997a146cf5589168d6c6f3387
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/databox/data-box-deploy-picked-up.md
chclaus/azure-docs.de-de
38be052bda16366997a146cf5589168d6c6f3387
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/databox/data-box-deploy-picked-up.md
chclaus/azure-docs.de-de
38be052bda16366997a146cf5589168d6c6f3387
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Tutorial zur Rücksendung von Azure Data Box | Microsoft-Dokumentation description: Erfahren Sie, wie Sie Ihre Azure Data Box an Microsoft zurücksenden. services: databox author: alkohli ms.service: databox ms.subservice: pod ms.topic: tutorial ms.date: 09/20/2019 ms.author: alkohli ms.localizationpriority: high ms.openlocfilehash: 28666aaac4ec221acca00d937d54a753a4e6a055 ms.sourcegitcommit: f2771ec28b7d2d937eef81223980da8ea1a6a531 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 09/20/2019 ms.locfileid: "71172683" --- ::: zone target="docs" # <a name="tutorial-return-azure-data-box-and-verify-data-upload-to-azure"></a>Tutorial: Zurücksenden der Azure Data Box und Überprüfen des Datenuploads in Azure ::: zone-end ::: zone target="chromeless" # <a name="return-data-box-and-verify-data-upload-to-azure"></a>Zurücksenden der Data Box und Überprüfen des Datenuploads in Azure ::: zone-end ::: zone target="docs" In diesem Tutorial wird beschrieben, wie Sie die Azure Data Box zurücksenden und die in Azure hochgeladenen Daten überprüfen. In diesem Tutorial werden folgende Themen behandelt: > [!div class="checklist"] > * Voraussetzungen > * Vorbereiten des Versands > * Senden der Data Box an Microsoft > * Überprüfen des Datenuploads in Azure > * Löschen von Daten von der Data Box ## <a name="prerequisites"></a>Voraussetzungen Stellen Sie Folgendes sicher, bevor Sie beginnen: - Sie haben die Schritte ausgeführt in [Tutorial: Kopieren von Daten auf die Azure Data Box Disk und Durchführen der Überprüfung](data-box-deploy-copy-data.md). - Alle Kopieraufträge sind abgeschlossen. „Für Versand vorbereiten“ kann nicht ausgeführt werden, wenn noch Kopieraufträge ausgeführt werden. ## <a name="prepare-to-ship"></a>Vorbereiten des Versands [!INCLUDE [data-box-prepare-to-ship](../../includes/data-box-prepare-to-ship.md)] ::: zone-end ::: zone target="chromeless" Nach dem Datenkopiervorgang bereiten Sie das Gerät vor und versenden es. Wenn das Gerät das Azure-Datencenter erreicht, werden die Daten automatisch in Azure hochgeladen. ## <a name="prepare-to-ship"></a>Vorbereiten des Versands Vergewissern Sie sich vor der Vorbereitung für den Versand, dass Kopieraufträge abgeschlossen sind. 1. Wechseln Sie auf der lokalen Webbenutzeroberfläche zur Seite **Für den Versand vorbereiten**, und beginnen Sie mit der Versandvorbereitung. 2. Schalten Sie das Gerät auf der lokalen Webbenutzeroberfläche aus. Ziehen Sie die Kabel vom Gerät ab. Die nächsten Schritte hängen davon ab, wo Sie das Gerät zurückgeben. ::: zone-end ::: zone target="docs" ## <a name="ship-data-box-back"></a>Zurücksenden der Data Box Vergewissern Sie sich, dass die Daten vollständig auf das Gerät kopiert wurden, und dass die **Versandvorbereitung** erfolgreich war. Die Vorgehensweise hängt von der Region ab, in der das Gerät versendet wird. ::: zone-end ## <a name="in-us-canada-europetabin-us-canada-europe"></a>[In den USA, in Kanada oder in Europa](#tab/in-us-canada-europe) Wenn Sie das Gerät in den USA, in Kanada oder in Europa zurücksenden möchten, gehen Sie wie folgt vor: 1. Vergewissern Sie sich, dass das Gerät ausgeschaltet ist und die Kabel entfernt wurden. 2. Wickeln Sie das mit dem Gerät gelieferte Netzkabel auf, und befestigen Sie es sicher an der Rückseite des Geräts. 3. Stellen Sie sicher, dass das Adressetikett auf dem Freihanddisplay angezeigt wird, und vereinbaren Sie einen Abholtermin mit Ihrem Kurierdienst. Falls das Adressetikett beschädigt oder nicht mehr vorhanden ist oder nicht in der E-Ink-Anzeige angezeigt wird, wenden Sie sich an den Microsoft-Support. Sofern dies vom Support empfohlen wird, können Sie im Azure-Portal zu **Übersicht > Versandetikett herunterladen** navigieren. Laden Sie das Versandetikett herunter, und bringen Sie es am Gerät an. 4. Planen Sie die Abholung durch UPS, falls Sie das Gerät zurücksenden. So planen Sie die Abholung: - Rufen Sie Ihre lokale UPS-Versandstelle an (landes-/regionsspezifische gebührenfreie Telefonnummer). - Geben Sie bei dem Telefonat die Nachverfolgungsnummer für die Rücksendung an, die Sie in der E-Ink-Anzeige oder auf dem gedruckten Etikett finden. - Wenn Sie keine Nachverfolgungsnummer angeben, fordert UPS eine Zusatzgebühr, die Sie bei der Abholung entrichten müssen. Wenn Sie keine Abholung planen können oder möchten, können Sie die Data Box auch an der nächstgelegenen Versandstelle abgeben. 4. Nachdem die Data Box vom Kurierdienst abgeholt und eingescannt wurde, wird der Status der Bestellung im Portal in **Abgeholt** geändert. Außerdem wird eine Nachverfolgungs-ID angezeigt. ::: zone target="chromeless" ## <a name="verify-data-upload-to-azure"></a>Überprüfen des Datenuploads in Azure [!INCLUDE [data-box-verify-upload](../../includes/data-box-verify-upload.md)] ## <a name="erasure-of-data-from-data-box"></a>Löschen von Daten von der Data Box Nachdem die Daten in Azure hochgeladen wurden, löscht die Data Box die Daten auf den Datenträgern gemäß den [NIST-Richtlinien (SP 800-88 Revision 1)](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi). ::: zone-end ::: zone target="docs" [!INCLUDE [data-box-verify-upload-return](../../includes/data-box-verify-upload-return.md)] ::: zone-end ## <a name="in-australiatabin-australia"></a>[In Australien](#tab/in-australia) Bei Azure-Datencentern in Australien ist aus Sicherheitsgründen eine zusätzliche Benachrichtigung erforderlich. Alle eingehenden Lieferungen müssen vorab angekündigt werden. Gehen Sie für den Versand in Australien wie folgt vor: 1. Bewahren Sie den Originalversandkarton des Geräts für den Rückversand auf. 2. Vergewissern Sie sich, dass die Daten vollständig auf das Gerät kopiert wurden, und dass die **Versandvorbereitung** erfolgreich war. 3. Schalten Sie das Gerät aus, und entfernen Sie die Kabel. 4. Wickeln Sie das mitgelieferte Netzkabel auf, und befestigen Sie es sicher an der Rückseite des Geräts. 5. Fordern Sie per E-Mail bei Quantium Solutions eine Abholung an. Geben Sie dabei die Servicereferenznummer aus dem Azure-Portal an. Verwenden Sie die folgende E-Mail-Vorlage: *Request for reverse shipping label with TAU code*. Geben Sie in der E-Mail folgende Informationen an: ``` To: Azure@quantiumsolutions.com Subject: Pickup request for Azure|Reference number:XXX XXX XXX Body: - Company name: - Address: - Contact name: - Contact number: - Requested pickup date: mm/dd ``` 6. Von Quantium Solutions Australia erhalten Sie eine E-Mail mit einem Rücksendeetikett. 7. Drucken Sie das Rücksendeetikett aus, und bringen Sie es am Versandkarton an. 8. Übergeben Sie das Paket an den Versanddienstleister. Bei Bedarf können Sie sich per E-Mail (Azure@quantiumsolutions.com) oder telefonisch mit dem Support von Quantium Solutions in Verbindung setzen. Beachten Sie bei telefonischen Anfragen im Zusammenhang mit Ihrer Bestellung Folgendes: - Fordern Sie zuerst per E-Mail eine Abholung an. - Geben Sie am Telefon den Namen Ihrer Bestellung an. ::: zone target="chromeless" ## <a name="verify-data-upload-to-azure"></a>Überprüfen des Datenuploads in Azure [!INCLUDE [data-box-verify-upload](../../includes/data-box-verify-upload.md)] ## <a name="erasure-of-data-from-data-box"></a>Löschen von Daten von der Data Box Nachdem die Daten in Azure hochgeladen wurden, löscht die Data Box die Daten auf den Datenträgern gemäß den [NIST-Richtlinien (SP 800-88 Revision 1)](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi). ::: zone-end ::: zone target="docs" [!INCLUDE [data-box-verify-upload-return](../../includes/data-box-verify-upload-return.md)] ::: zone-end ## <a name="in-japantabin-japan"></a>[In Japan](#tab/in-japan) 1. Bewahren Sie den Originalversandkarton des Geräts für den Rückversand auf. 2. Schalten Sie das Gerät aus, und entfernen Sie die Kabel. 3. Wickeln Sie das mitgelieferte Netzkabel auf, und befestigen Sie es sicher an der Rückseite des Geräts. 4. Tragen Sie den Namen Ihres Unternehmens und Ihre Adressdaten als Absenderinformationen in den Rücksendeschein ein. 5. Senden Sie Quantium Solutions über die folgende E-Mail-Vorlage eine E-Mail. - Falls der Rücksendeschein der japanischen Post für eine Nachnahmesendung („Chakubarai“) nicht enthalten war oder fehlt, weisen Sie in dieser E-Mail darauf hin. Quantium Solutions Japan fordert die japanische Post dann auf, den Rücksendeschein bei der Abholung mitzubringen. - Senden Sie bei mehreren Bestellungen eine E-Mail, um sicherzustellen, dass die einzelnen Bestellungen abgeholt werden. ``` To: Customerservice.JP@quantiumsolutions.com Subject: Pickup request for Azure Data Box|Job name: Body: - Japan Post Yu-Pack tracking number (reference number): - Requested pickup date:mmdd (Select a requested time slot from below). a. 08:00-13:00 b. 13:00-15:00 c. 15:00-17:00 d. 17:00-19:00 ``` 3. Nachdem die Abholung gebucht wurde, erhalten Sie eine E-Mail-Bestätigung von Quantium Solutions. Die E-Mail-Bestätigung enthält auch Informationen zum Nachnahme-Rücksendeschein („Chakubarai“). Den Support von Quantium Solution erreichen Sie bei Bedarf wie folgt (in japanische Sprache): - E-Mail-Adresse: Customerservice.JP@quantiumsolutions.com - Telefonnummer: 03-5755-0150 ::: zone target="chromeless" ## <a name="verify-data-upload-to-azure"></a>Überprüfen des Datenuploads in Azure [!INCLUDE [data-box-verify-upload](../../includes/data-box-verify-upload.md)] ## <a name="erasure-of-data-from-data-box"></a>Löschen von Daten von der Data Box Nachdem die Daten in Azure hochgeladen wurden, löscht die Data Box die Daten auf den Datenträgern gemäß den [NIST-Richtlinien (SP 800-88 Revision 1)](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi). ::: zone-end ::: zone target="docs" [!INCLUDE [data-box-verify-upload-return](../../includes/data-box-verify-upload-return.md)] ::: zone-end
46.331797
501
0.768351
deu_Latn
0.989478
487935bcb8615ac870e712edadfa516d0e3c7407
3,309
md
Markdown
README.md
Arquisoft/Inci_e4b
d2e4b42d33b44f7be265be212d127ad37c680406
[ "MIT" ]
null
null
null
README.md
Arquisoft/Inci_e4b
d2e4b42d33b44f7be265be212d127ad37c680406
[ "MIT" ]
3
2018-05-02T15:19:18.000Z
2018-05-08T16:26:14.000Z
README.md
Arquisoft/Inci_e4b
d2e4b42d33b44f7be265be212d127ad37c680406
[ "MIT" ]
1
2018-05-02T12:26:00.000Z
2018-05-02T12:26:00.000Z
# Inci_e4b Incidence system e4b # Subsistemas ## Loader [![Build Status](https://travis-ci.org/Arquisoft/Loader_e4b.svg?branch=master)](https://travis-ci.org/Arquisoft/Loader_e4b) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/aeca0021c27447d1abfaec98ceed9508)](https://www.codacy.com/app/jelabra/Loader_e4b?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=Arquisoft/Loader_e4b&amp;utm_campaign=Badge_Grade) [![codecov](https://codecov.io/gh/Arquisoft/Loader_e4b/branch/master/graph/badge.svg)](https://codecov.io/gh/Arquisoft/Loader_e4b) ## Agents [![Build Status](https://travis-ci.org/Arquisoft/Agents_e4b.svg?branch=master)](https://travis-ci.org/Arquisoft/Agents_e4b) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/e680327c40a44a6b8378a8171066e341)](https://www.codacy.com/app/jelabra/Agents_e4b?utm_source=github.com&utm_medium=referral&utm_content=Arquisoft/Agents_e4b&utm_campaign=badger) [![codecov](https://codecov.io/gh/Arquisoft/Agents_e4b/branch/master/graph/badge.svg)](https://codecov.io/gh/Arquisoft/Agents_e4b) ## InciManager [![Build Status](https://travis-ci.org/Arquisoft/InciManager_e4b.svg?branch=master)](https://travis-ci.org/Arquisoft/InciManager_e4b) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/e680327c40a44a6b8378a8171066e341)](https://www.codacy.com/app/jelabra/InciManager_e4b?utm_source=github.com&utm_medium=referral&utm_content=Arquisoft/InciManager_e4b&utm_campaign=badger) [![codecov](https://codecov.io/gh/Arquisoft/InciManager_e4b/branch/master/graph/badge.svg)](https://codecov.io/gh/Arquisoft/InciManager_e4b) ## InciDashboard [![Build Status](https://travis-ci.org/Arquisoft/InciDashboard_e4b.svg?branch=master)](https://travis-ci.org/Arquisoft/InciDashboard_e4b) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/e9d0acdac1f4427698134e010ffbd3fe)](https://www.codacy.com/app/AlexGPlay/InciDashboard_e4b?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=Arquisoft/InciDashboard_e4b&amp;utm_campaign=Badge_Grade) [![codecov](https://codecov.io/gh/Arquisoft/InciDashboard_e4b/branch/master/graph/badge.svg)](https://codecov.io/gh/Arquisoft/InciDashboard_e4b) # Descripción ## Ejecución 1. Base de Datos MySql, no requiere despliegue. <br> 2. Arranque Apache Zookeeper: bin\windows\zookeeper-server-start.bat config\zookeeper.properties <br> 3. Arranque Apache Kafka: bin\windows\kafka-server-start.bat config\server.properties <br> 4. Para el despliegue del sitio Web: Agents, InciManager e InciDashboard: mvn spring-boot:run <br> 5. Carga de datos con Loader: mvn exec:java -Dexec.mainClass="main.LoadUsers" -Dexec.args="load fichero.xlsx" <br> 6. Abrir en el navegador: - http://localhost:8070/ para el Agents. <br> - http://localhost:8080/ para el InciManager. <br> - http://localhost:8090/ para el InciDashboard. <br> <br> Ejecución detallada paso a paso dentro del "README.md" de cada subsistema. # Autores - Sergio Muñiz Rosas (UO245346)<br> - Darío Alonso Díaz (UO237089)<br> - Francisco Manuel Mendoza Soto (UO251129)<br> - Victor David Acebes Caballo (UO251117)<br> - Óscar Marín Iglesias (UO251857)<br> - Ángela María Val Cadena (UO250972)<br> - Alejandro García Parrondo (UO253144)<br> - Samuel Steven Ludeña Vela (UO251461)<br> - Juan Granda Molaguero (UO244759)<br>
63.634615
269
0.788154
yue_Hant
0.682943
487b4e745945519a8773a45b72a10494f892939d
2,730
md
Markdown
ROS/README.md
attackordie/dockerfiles
1cd62e2800d2d8fa910861be6628bea757bcf2c1
[ "MIT" ]
11
2017-07-20T12:52:00.000Z
2021-07-12T17:27:48.000Z
ROS/README.md
attackordie/dockerfiles
1cd62e2800d2d8fa910861be6628bea757bcf2c1
[ "MIT" ]
29
2017-02-23T07:18:57.000Z
2019-11-17T08:46:25.000Z
ROS/README.md
attackordie/dockerfiles
1cd62e2800d2d8fa910861be6628bea757bcf2c1
[ "MIT" ]
4
2017-07-06T07:10:20.000Z
2020-06-04T09:10:13.000Z
Dockerfile for ROS melodic `desktop-full`. Features: * X11 authentication for GUIs * Image size: 2.9 GB * User created during runtime * Nvidia GPU and OpenGL support ## Build the image ### Intel GPU ``` docker build --rm --pull -t diegoferigo/ros . ``` ### Nvidia GPU ``` docker build --rm --pull --build-arg from=nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04 -t diegoferigo/ros:nvidia . ``` ## User configuration Several reasons could be found for having a non-root user within the container that matches the GID and UID of your host user. This docker image allows the creation of a runtime user, whose default UID and GID is 1000. To override these values and to start the container, execute: ``` docker run -i -t --rm \ -e USER_UID=$(id -u) \ -e USER_GID=$(id -g) \ -e USERNAME=$(whoami) \ --name ros \ diegoferigo/ros \ bash ``` Then, create as many ttys as needed with ``` docker exec -it ros bash ``` The image does not contain `sudo` and the root password is not set. When the active user is not root, to execute commands that require admin privileges you should open a new tty or use `exit` on the current shell getting back to the root user. ## X11 authentication As an example, `rqt` is forwarded from within the container to the host's Xserver: ``` XSOCK=/tmp/.X11-unix XAUTH=/tmp/.docker.xauth touch $XAUTH xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge - docker run -i -t --rm \ -v $XSOCK:$XSOCK:rw \ -v $XAUTH:$XAUTH:rw \ -e XAUTHORITY=${XAUTH} \ -e DISPLAY \ -e USER_UID=$(id -u) \ -e USER_GID=$(id -g) \ -e USERNAME=$(whoami) \ --name ros \ diegoferigo/ros \ rqt ``` If you need HW acceleration (only for Intel graphic cards), add also this device flag `--device=/dev/dri`. For what concerns Nvidia GPUs, you need to configure and install [`nvidia-docker`](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)). With such setup, you only have to: 1. Add the additional option `--runtime nvidia` 2. Use the `diegoferigo/ros:nvidia` image ## Resources * [Docker and ROS][0] * X11 Authentication: [stackoverflow][1], [ROS Wiki][2] * Runtime user: [docker-browser-box][6] * [Hardware Acceleration][3] ## TODO * Setup a ROS [docker-compose][4] system (see also [docker hub][5] - Compose) * Setup sudo w/o password [0]: http://wiki.ros.org/docker/Tutorials [1]: https://stackoverflow.com/questions/16296753/can-you-run-gui-apps-in-a-docker-container [2]: http://wiki.ros.org/docker/Tutorials/GUI [3]: http://wiki.ros.org/docker/Tutorials/Hardware%20Acceleration [4]: http://toddsampson.com/post/131227320927/docker-experimental-networking-and-ros [5]: https://hub.docker.com/_/ros/ [6]: https://github.com/sameersbn/docker-browser-box
26.504854
191
0.707692
eng_Latn
0.798129
487bef754afccf4542f88232577c64d3f5fcca68
459
md
Markdown
README.md
aleferri/harshtimes
dea985f5114bd00b0d4ca48a8103550b5506858f
[ "Apache-2.0" ]
null
null
null
README.md
aleferri/harshtimes
dea985f5114bd00b0d4ca48a8103550b5506858f
[ "Apache-2.0" ]
null
null
null
README.md
aleferri/harshtimes
dea985f5114bd00b0d4ca48a8103550b5506858f
[ "Apache-2.0" ]
null
null
null
# HarshTimes Bannerlord mod ## Content You want to suffer, looters want to loot and all the bandits are into killing your elite troops and rob you of your dignity. Welcome to an endless bandits stream that will tear the world in ruin, capture every lord, destroy any army that dares to pass nearby. No adventurers is safe. ### These are Harsh Times. ## How to install Download as a zip, put into Steam\steamapps\common\Mount & Blade II Bannerlord\Modules.
38.25
158
0.775599
eng_Latn
0.998123
487c4c02975fe72b70429e7a1c309e0d69373f58
18,519
md
Markdown
microsoft-r/python-reference/microsoftml/rx-logistic-regression.md
grayknight2/machine-learning-server-docs
8dc4be61b5277d4b23ee2ead202ee3ea5b176fcb
[ "CC-BY-4.0", "MIT" ]
null
null
null
microsoft-r/python-reference/microsoftml/rx-logistic-regression.md
grayknight2/machine-learning-server-docs
8dc4be61b5277d4b23ee2ead202ee3ea5b176fcb
[ "CC-BY-4.0", "MIT" ]
null
null
null
microsoft-r/python-reference/microsoftml/rx-logistic-regression.md
grayknight2/machine-learning-server-docs
8dc4be61b5277d4b23ee2ead202ee3ea5b176fcb
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- # required metadata title: "rx_logistic_regression: Logistic Regression" description: "Machine Learning Logistic Regression" keywords: "models, classification" author: "dphansen" manager: "cgronlun" ms.date: 07/15/2019 ms.topic: "reference" ms.prod: "mlserver" ms.service: "" ms.assetid: "" # optional metadata ROBOTS: "" audience: "" ms.devlang: "Python" ms.reviewer: "" ms.suite: "" ms.tgt_pltfrm: "" #ms.technology: "" ms.custom: "" --- # *microsoftml.rx_logistic_regression*: Logistic Regression ## Usage ``` microsoftml.rx_logistic_regression(formula: str, data: [revoscalepy.datasource.RxDataSource.RxDataSource, pandas.core.frame.DataFrame], method: ['binary', 'multiClass'] = 'binary', l2_weight: float = 1, l1_weight: float = 1, opt_tol: float = 1e-07, memory_size: int = 20, init_wts_diameter: float = 0, max_iterations: int = 2147483647, show_training_stats: bool = False, sgd_init_tol: float = 0, train_threads: int = None, dense_optimizer: bool = False, normalize: ['No', 'Warn', 'Auto', 'Yes'] = 'Auto', ml_transforms: list = None, ml_transform_vars: list = None, row_selection: str = None, transforms: dict = None, transform_objects: dict = None, transform_function: str = None, transform_variables: list = None, transform_packages: list = None, transform_environment: dict = None, blocks_per_read: int = None, report_progress: int = None, verbose: int = 1, ensemble: microsoftml.modules.ensemble.EnsembleControl = None, compute_context: revoscalepy.computecontext.RxComputeContext.RxComputeContext = None) ``` ## Description Machine Learning Logistic Regression ## Details Logistic Regression is a classification method used to predict the value of a categorical dependent variable from its relationship to one or more independent variables assumed to have a logistic distribution. If the dependent variable has only two possible values (success/failure), then the logistic regression is binary. If the dependent variable has more than two possible values (blood type given diagnostic test results), then the logistic regression is multinomial. The optimization technique used for `rx_logistic_regression` is the limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS). Both the L-BFGS and regular BFGS algorithms use quasi-Newtonian methods to estimate the computationally intensive Hessian matrix in the equation used by Newton’s method to calculate steps. But the L-BFGS approximation uses only a limited amount of memory to compute the next step direction, so that it is especially suited for problems with a large number of variables. The `memory_size` parameter specifies the number of past positions and gradients to store for use in the computation of the next step. This learner can use elastic net regularization: a linear combination of L1 (lasso) and L2 (ridge) regularizations. Regularization is a method that can render an ill-posed problem more tractable by imposing constraints that provide information to supplement the data and that prevents overfitting by penalizing models with extreme coefficient values. This can improve the generalization of the model learned by selecting the optimal complexity in the bias-variance tradeoff. Regularization works by adding the penalty that is associated with coefficient values to the error of the hypothesis. An accurate model with extreme coefficient values would be penalized more, but a less accurate model with more conservative values would be penalized less. L1 and L2 regularization have different effects and uses that are complementary in certain respects. * `l1_weight`: can be applied to sparse models, when working with high-dimensional data. It pulls small weights associated features that are relatively unimportant towards 0. * `l2_weight`: is preferable for data that is not sparse. It pulls large weights towards zero. Adding the ridge penalty to the regularization overcomes some of lasso’s limitations. It can improve its predictive accuracy, for example, when the number of predictors is greater than the sample size. If `x = l1_weight` and `y = l2_weight`, `ax + by = c` defines the linear span of the regularization terms. The default values of x and y are both `1`. An aggressive regularization can harm predictive capacity by excluding important variables out of the model. So choosing the optimal values for the regularization parameters is important for the performance of the logistic regression model. ## Arguments ### formula The formula as described in revoscalepy.rx_formula Interaction terms and `F()` are not currently supported in [microsoftml](microsoftml-package.md). ### data A data source object or a character string specifying a *.xdf* file or a data frame object. ### method A character string that specifies the type of Logistic Regression: `"binary"` for the default binary classification logistic regression or `"multiClass"` for multinomial logistic regression. ### l2_weight The L2 regularization weight. Its value must be greater than or equal to `0` and the default value is set to `1`. ### l1_weight The L1 regularization weight. Its value must be greater than or equal to `0` and the default value is set to `1`. ### opt_tol Threshold value for optimizer convergence. If the improvement between iterations is less than the threshold, the algorithm stops and returns the current model. Smaller values are slower, but more accurate. The default value is `1e-07`. ### memory_size Memory size for L-BFGS, specifying the number of past positions and gradients to store for the computation of the next step. This optimization parameter limits the amount of memory that is used to compute the magnitude and direction of the next step. When you specify less memory, training is faster but less accurate. Must be greater than or equal to `1` and the default value is `20`. ### max_iterations Sets the maximum number of iterations. After this number of steps, the algorithm stops even if it has not satisfied convergence criteria. ### show_training_stats Specify `True` to show the statistics of training data and the trained model; otherwise, `False`. The default value is `False`. For additional information about model statistics, see `summary.ml_model()`. ### sgd_init_tol Set to a number greater than 0 to use Stochastic Gradient Descent (SGD) to find the initial parameters. A non-zero value set specifies the tolerance SGD uses to determine convergence. The default value is `0` specifying that SGD is not used. ### init_wts_diameter Sets the initial weights diameter that specifies the range from which values are drawn for the initial weights. These weights are initialized randomly from within this range. For example, if the diameter is specified to be `d`, then the weights are uniformly distributed between `-d/2` and `d/2`. The default value is `0`, which specifies that all the weights are initialized to `0`. ### train_threads The number of threads to use in training the model. This should be set to the number of cores on the machine. Note that L-BFGS multi-threading attempts to load dataset into memory. In case of out-of-memory issues, set `train_threads` to `1` to turn off multi-threading. If *None* the number of threads to use is determined internally. The default value is *None*. ### dense_optimizer If `True`, forces densification of the internal optimization vectors. If `False`, enables the logistic regression optimizer use sparse or dense internal states as it finds appropriate. Setting `denseOptimizer` to `True` requires the internal optimizer to use a dense internal state, which may help alleviate load on the garbage collector for some varieties of larger problems. ### normalize Specifies the type of automatic normalization used: * `"Auto"`: if normalization is needed, it is performed automatically. This is the default choice. * `"No"`: no normalization is performed. * `"Yes"`: normalization is performed. * `"Warn"`: if normalization is needed, a warning message is displayed, but normalization is not performed. Normalization rescales disparate data ranges to a standard scale. Feature scaling insures the distances between data points are proportional and enables various optimization methods such as gradient descent to converge much faster. If normalization is performed, a `MaxMin` normalizer is used. It normalizes values in an interval [a, b] where `-1 <= a <= 0` and `0 <= b <= 1` and `b - a = 1`. This normalizer preserves sparsity by mapping zero to zero. ### ml_transforms Specifies a list of MicrosoftML transforms to be performed on the data before training or *None* if no transforms are to be performed. See [`featurize_text`](featurize-text.md), [`categorical`](categorical.md), and [`categorical_hash`](categorical-hash.md), for transformations that are supported. These transformations are performed after any specified Python transformations. The default value is *None*. ### ml_transform_vars Specifies a character vector of variable names to be used in `ml_transforms` or *None* if none are to be used. The default value is *None*. ### row_selection NOT SUPPORTED. Specifies the rows (observations) from the data set that are to be used by the model with the name of a logical variable from the data set (in quotes) or with a logical expression using variables in the data set. For example: * `row_selection = "old"` will only use observations in which the value of the variable `old` is `True`. * `row_selection = (age > 20) & (age < 65) & (log(income) > 10)` only uses observations in which the value of the `age` variable is between 20 and 65 and the value of the `log` of the `income` variable is greater than 10. The row selection is performed after processing any data transformations (see the arguments `transforms` or `transform_function`). As with all expressions, `row_selection` can be defined outside of the function call using the `expression` function. ### transforms NOT SUPPORTED. An expression of the form that represents the first round of variable transformations. As with all expressions, `transforms` (or `row_selection`) can be defined outside of the function call using the `expression` function. ### transform_objects NOT SUPPORTED. A named list that contains objects that can be referenced by `transforms`, `transform_function`, and `row_selection`. ### transform_function The variable transformation function. ### transform_variables A character vector of input data set variables needed for the transformation function. ### transform_packages NOT SUPPORTED. A character vector specifying additional Python packages (outside of those specified in `RxOptions.get_option("transform_packages")`) to be made available and preloaded for use in variable transformation functions. For example, those explicitly defined in [revoscalepy](../revoscalepy/index.md) functions via their `transforms` and `transform_function` arguments or those defined implicitly via their `formula` or `row_selection` arguments. The `transform_packages` argument may also be *None*, indicating that no packages outside `RxOptions.get_option("transform_packages")` are preloaded. ### transform_environment NOT SUPPORTED. A user-defined environment to serve as a parent to all environments developed internally and used for variable data transformation. If `transform_environment = None`, a new “hash” environment with parent revoscalepy.baseenv is used instead. ### blocks_per_read Specifies the number of blocks to read for each chunk of data read from the data source. ### report_progress An integer value that specifies the level of reporting on the row processing progress: * `0`: no progress is reported. * `1`: the number of processed rows is printed and updated. * `2`: rows processed and timings are reported. * `3`: rows processed and all timings are reported. ### verbose An integer value that specifies the amount of output wanted. If `0`, no verbose output is printed during calculations. Integer values from `1` to `4` provide increasing amounts of information. ### compute_context Sets the context in which computations are executed, specified with a valid revoscalepy.RxComputeContext. Currently local and [revoscalepy.RxInSqlServer](../revoscalepy/RxInSqlServer.md) compute contexts are supported. ### ensemble Control parameters for ensembling. ## Returns A [`LogisticRegression`](learners-object.md) object with the trained model. ## Note This algorithm will attempt to load the entire dataset into memory when `train_threads > 1` (multi-threading). ## See also [`rx_predict`](rx-predict.md) ## References [Wikipedia: L-BFGS](https://en.wikipedia.org/wiki/L-BFGS) [Wikipedia: Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) [Scalable Training of L1-Regularized Log-Linear Models](https://research.microsoft.com/apps/pubs/default.aspx?id=78900) [Test Run - L1 and L2 Regularization for Machine Learning](https://msdn.microsoft.com/magazine/dn904675.aspx) ## Example ``` ''' Binary Classification. ''' import numpy import pandas from microsoftml import rx_logistic_regression, rx_predict from revoscalepy.etl.RxDataStep import rx_data_step from microsoftml.datasets.datasets import get_dataset infert = get_dataset("infert") import sklearn if sklearn.__version__ < "0.18": from sklearn.cross_validation import train_test_split else: from sklearn.model_selection import train_test_split infertdf = infert.as_df() infertdf["isCase"] = infertdf.case == 1 data_train, data_test, y_train, y_test = train_test_split(infertdf, infertdf.isCase) model = rx_logistic_regression( formula=" isCase ~ age + parity + education + spontaneous + induced ", data=data_train) print(model.coef_) # RuntimeError: The type (RxTextData) for file is not supported. score_ds = rx_predict(model, data=data_test, extra_vars_to_write=["isCase", "Score"]) # Print the first five rows print(rx_data_step(score_ds, number_rows_read=5)) ``` Output: ``` Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off. Beginning processing data. Rows Read: 186, Read Time: 0, Transform Time: 0 Beginning processing data. Beginning processing data. Rows Read: 186, Read Time: 0.001, Transform Time: 0 Beginning processing data. Beginning processing data. Rows Read: 186, Read Time: 0, Transform Time: 0 Beginning processing data. LBFGS multi-threading will attempt to load dataset into memory. In case of out-of-memory issues, turn off multi-threading by setting trainThreads to 1. Beginning optimization num vars: 6 improvement criterion: Mean Improvement L1 regularization selected 5 of 6 weights. Not training a calibrator because it is not needed. Elapsed time: 00:00:00.0646405 Elapsed time: 00:00:00.0083991 OrderedDict([('(Bias)', -1.2366217374801636), ('spontaneous', 1.9391206502914429), ('induced', 0.7497404217720032), ('parity', -0.31517016887664795), ('age', -3.162723260174971e-06)]) Beginning processing data. Rows Read: 62, Read Time: 0, Transform Time: 0 Beginning processing data. Elapsed time: 00:00:00.0287290 Finished writing 62 rows. Writing completed. Rows Read: 5, Total Rows Processed: 5, Total Chunk Time: 0.001 seconds isCase PredictedLabel Score Probability 0 False False -1.341681 0.207234 1 True True 0.597440 0.645070 2 False True 0.544912 0.632954 3 False False -1.289152 0.215996 4 False False -1.019339 0.265156 ``` ## Example ``` ''' MultiClass Classification ''' import numpy import pandas from microsoftml import rx_logistic_regression, rx_predict from revoscalepy.etl.RxDataStep import rx_data_step from microsoftml.datasets.datasets import get_dataset iris = get_dataset("iris") import sklearn if sklearn.__version__ < "0.18": from sklearn.cross_validation import train_test_split else: from sklearn.model_selection import train_test_split irisdf = iris.as_df() irisdf["Species"] = irisdf["Species"].astype("category") data_train, data_test, y_train, y_test = train_test_split(irisdf, irisdf.Species) model = rx_logistic_regression( formula=" Species ~ Sepal_Length + Sepal_Width + Petal_Length + Petal_Width ", method="multiClass", data=data_train) print(model.coef_) # RuntimeError: The type (RxTextData) for file is not supported. score_ds = rx_predict(model, data=data_test, extra_vars_to_write=["Species", "Score"]) # Print the first five rows print(rx_data_step(score_ds, number_rows_read=5)) ``` Output: ``` Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off. Beginning processing data. Rows Read: 112, Read Time: 0, Transform Time: 0 Beginning processing data. Beginning processing data. Rows Read: 112, Read Time: 0, Transform Time: 0 Beginning processing data. Beginning processing data. Rows Read: 112, Read Time: 0, Transform Time: 0 Beginning processing data. LBFGS multi-threading will attempt to load dataset into memory. In case of out-of-memory issues, turn off multi-threading by setting trainThreads to 1. Beginning optimization num vars: 15 improvement criterion: Mean Improvement L1 regularization selected 9 of 15 weights. Not training a calibrator because it is not needed. Elapsed time: 00:00:00.0493224 Elapsed time: 00:00:00.0080558 OrderedDict([('setosa+(Bias)', 2.074636697769165), ('versicolor+(Bias)', 0.4899507164955139), ('virginica+(Bias)', -2.564580202102661), ('setosa+Petal_Width', -2.8389241695404053), ('setosa+Petal_Length', -2.4824044704437256), ('setosa+Sepal_Width', 0.274869441986084), ('versicolor+Sepal_Width', -0.2645561397075653), ('virginica+Petal_Width', 2.6924400329589844), ('virginica+Petal_Length', 1.5976412296295166)]) Beginning processing data. Rows Read: 38, Read Time: 0, Transform Time: 0 Beginning processing data. Elapsed time: 00:00:00.0331861 Finished writing 38 rows. Writing completed. Rows Read: 5, Total Rows Processed: 5, Total Chunk Time: 0.001 seconds Species Score.0 Score.1 Score.2 0 virginica 0.044230 0.364927 0.590843 1 setosa 0.767412 0.210586 0.022002 2 setosa 0.756523 0.221933 0.021543 3 setosa 0.767652 0.211191 0.021157 4 versicolor 0.116369 0.498615 0.385016 ```
33.010695
414
0.765484
eng_Latn
0.985216
487cf240367eb7c2bd4ceed852a4f46ec947ff5c
4,605
md
Markdown
contributing.md
basicallydan/forkability
39e13fdc975cfef9e8acee01fafc1659a5118e99
[ "MIT" ]
98
2015-01-02T21:21:35.000Z
2022-02-05T16:46:42.000Z
contributing.md
lwwsq/forkability
39e13fdc975cfef9e8acee01fafc1659a5118e99
[ "MIT" ]
59
2015-01-15T15:47:01.000Z
2018-06-14T15:56:41.000Z
contributing.md
lwwsq/forkability
39e13fdc975cfef9e8acee01fafc1659a5118e99
[ "MIT" ]
25
2015-01-01T16:05:25.000Z
2021-06-04T19:27:25.000Z
# Contributing to Forkability Some of the things a would-be contributor might want to do are: 1. [Raise a bug report](#raising-a-bug-report) 2. [*Request* support for a language](#requesting-language-support) 3. [*Add* support for a language](#adding-language-support) (you get any awesome points for doing this) 4. [Contribute actual code](#contributing-code) 5. [Raise an issue with what makes a project forkable at all](#suggesting-general-changes) But first, the golden rule: ## The golden rule of pull requests If you're going to make a pull request, make sure you've done two things before making it: 1. You've run `npm test` to ensure that you haven't broken anything. Any pull requests which have caused tests to fail will probably be rejected. 2. You've written tests to cover your additions, and that includes, where possible, testing bug fixes. ## Raising a bug report If in your use of the program you find something that looks unexpected or broken, take the following steps: 1. Go to the [issues page](https://github.com/basicallydan/forkability/issues) 2. Search for a similar issue * If you've found it, put a :+1: to say you've found it * [Consider Contributing actual code](#contributing-code) to fix the problem * Stop here :smile: 3. If the issue hasn't been raised before, [raise it](https://github.com/basicallydan/forkability/issues/new), and be as detailed as you possibly can. Were you using the JavaScript module? Or were you using forkability on the command line? Or was it perhaps on the web app? What is your computer's operating system? As much detail as possible makes it easier to fix bugs. Forkability should undergo frequent reviews to keep up with opinion about what makes an open-source project forkable. ## Requesting language support It would be absolutely brilliant if all the programming languages and their various platforms could be covered by Forkability for linting. So, if there's a language you'd like to see covered which isn't covered yet, you could [open an issue](https://github.com/basicallydan/forkability/issues) to request it. Hopefully somebody will pick it up! If you could provide some information about the kinds of features that an open-source project in your chosen language should have, that would be extra helpful. The most helpful thing you could do, however, is... ## Adding language support Currently, language support covers checking the contents of the file tree at the root of the repository in it's state at the most recent commit. To see more about how it works, see the [lintFiles.test.js](https://github.com/basicallydan/forkability/blob/master/lib/lintFiles.js). However, just to add support, you just need to follow these steps: ### Create a file for your language Let's say for the example that it's Ruby (it's just an example so don't read too much into it). In here we're going to put feature tests, essentially checking for the existence - or lack of existence - of files in the repo. Put that file in `lib/langs` with the lower-case, space-free version of the name. ```js module.exports = { // The human-readable name should go here name: 'Ruby', files: { // The simplest thing is to check for the existence of a file using a regular expression 'gemfile':/^gemfile/i, // You can also test against the type (blob for file or tree for folder) 'bin folder': { path: /^bin/i, type:'tree' }, // And you can check for the *lack* of existence of a file or folder with shouldExist = false 'No .sass_cache folder': { path: /^\.sass_cache/i, shouldExist: false, type:'tree' } } }; ``` ### Add language to languages file Include the file in `lib/languages.js` so it can be used by Forkability. ### Write tests to make sure your feature tests continue to work You can follow the examples in `spec/langs` for this. Remember, you need to write a test for this AND check that existing tests do not fail before making a pull request. Very important! ## Contributing code In general, I am open to anybody wishing to improve Forkability, either in terms of code or features, so if you'd like to make a pull request please do - just remember to follow [the golden rule](#the-golden-rule-of-pull-requests) and write tests for your changes first. ## Suggesting general changes The nature of this project means that it is opinionated. However, the nature of open-source means that a general community consensus about what makes a good open source project is preferable to a single person's ideas. So if you disagree with anything, feel free to open a new issue about it for discussion.
54.176471
504
0.759826
eng_Latn
0.999172
487d8f568416764459de00a3381370976df8dc93
3,587
md
Markdown
docs/outlook/mapi/choosing-a-form-s-property-set.md
isabella232/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
2
2020-05-19T18:52:16.000Z
2021-04-21T00:13:46.000Z
docs/outlook/mapi/choosing-a-form-s-property-set.md
MicrosoftDocs/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
2
2021-12-08T03:25:19.000Z
2021-12-08T03:43:48.000Z
docs/outlook/mapi/choosing-a-form-s-property-set.md
isabella232/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
5
2018-07-17T08:19:45.000Z
2021-10-13T10:29:41.000Z
--- title: Auswählen eines Formulareigenschaftensatzes manager: soliver ms.date: 11/16/2014 ms.audience: Developer ms.localizationpriority: medium api_type: - COM ms.assetid: 5680fed2-b2e7-4c4b-9ba8-2c497b9c433c description: 'Letzte Änderung: Samstag, 23. Juli 2011' ms.openlocfilehash: b18ac45fe04f1e299199e4a51bf952856d5bd85d ms.sourcegitcommit: a1d9041c20256616c9c183f7d1049142a7ac6991 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 09/24/2021 ms.locfileid: "59584724" --- # <a name="choosing-a-forms-property-set"></a>Auswählen eines Formulareigenschaftensatzes **Gilt für**: Outlook 2013 | Outlook 2016 Wenn Sie den Formularserver implementieren, benötigen Sie eine Eigenschaft für jede Information, die ihre Nachrichtenklasse benötigt. Diese Eigenschaften können vordefinierte MAPI-Eigenschaften oder benutzerdefinierte Eigenschaften sein, die Sie definieren. Weitere Informationen zum Arbeiten mit Eigenschaften finden Sie unter [MAPI Property Overview](mapi-property-overview.md). Die Formularkonfigurationsdatei enthält eine Liste der Eigenschaften, die der Formularserver für die Verwendung durch Clientanwendungen verfügbar macht. Dies muss jedoch nicht die gesamte Liste der eigenschaften sein, die vom Formularserver verwendet werden. Clientanwendungen verwenden in der Regel die verfügbar gemachten Eigenschaften, um Es Benutzern zu ermöglichen, Nachrichten in einem Ordner zu sortieren oder ihre Schnittstellen auf irgendeine Weise anzupassen. DIE MAPI verfügt über einen großen Satz vordefinierter Eigenschaften, die für die meisten Anwendungen ausreichen. Es gibt jedoch Situationen, in denen eine benutzerdefinierte Nachrichtenklasse eine Eigenschaft benötigt, die mapi nicht definiert. Sie können benutzerdefinierte Eigenschaften verwenden, um den vordefinierten MAPI-Eigenschaftensatz für alle speziellen Informationen zu erweitern, die der Formularserver unterstützen muss. Sie können eine der folgenden Methoden verwenden, um benutzerdefinierte Eigenschaften zu definieren: - Wählen Sie einen Namen für die Eigenschaft aus, und verwenden Sie die [IMAPIProp::GetIDsFromNames-Methode,](imapiprop-getidsfromnames.md) um ein Eigenschaftstag dafür abzurufen. Die [IMAPIProp-Schnittstelle,](imapipropiunknown.md) über die Sie diese Methode aufrufen, stammt aus dem [IMessage-Zeiger,](imessageimapiprop.md) der beim Erstellen der Nachricht an den Formularserver übergeben wird. Beachten Sie, dass der Eigenschaftenname eine zeichenfolge mit breitem Zeichen sein muss. - Definieren Sie selbst ein benutzerdefiniertes Eigenschaftentag. Benutzerdefinierte Eigenschaftentags müssen sich im Bereich 0x6800 bis 0x7BFF befinden. Eigenschaften in diesem Bereich sind nachrichtenklassenspezifisch. Weitere Informationen zum Definieren benutzerdefinierter Eigenschaften finden Sie unter [Definieren neuer MAPI-Eigenschaften.](defining-new-mapi-properties.md) > [!NOTE] > Formularserver mit einem Nachrichtentext verwenden häufig die **Eigenschaft PR_RTF_COMPRESSED** ([PidTagRtfCompressed](pidtagrtfcompressed-canonical-property.md)), um sie zu speichern. Wenn ihr Formularserver **PR_RTF_COMPRESSED** verwendet, sollte außerdem sichergestellt werden, dass die **eigenschaft PR_BODY** ([PidTagBody](pidtagbody-canonical-property.md)) eine textgeschützte Version des Nachrichtentexts enthält, falls die resultierende Nachricht von einem Client gelesen wird, der RTF-Nachrichtentext (Rich Text Format) nicht unterstützt. ## <a name="see-also"></a>Siehe auch - [Entwickeln von MAPI-Formularservern](developing-mapi-form-servers.md)
83.418605
550
0.830778
deu_Latn
0.994584
48802eb05c6311e3fa6721b7162b4047c8c3a46b
2,435
md
Markdown
content/post/2015-12-25-juliadede-raretamainayan-yu-wosheng-rishang-gerufang-fa-number-juliaac/index.md
chezou/chezo.uno
5608270249cde6d59da133cf56e0069b13906e04
[ "MIT" ]
null
null
null
content/post/2015-12-25-juliadede-raretamainayan-yu-wosheng-rishang-gerufang-fa-number-juliaac/index.md
chezou/chezo.uno
5608270249cde6d59da133cf56e0069b13906e04
[ "MIT" ]
null
null
null
content/post/2015-12-25-juliadede-raretamainayan-yu-wosheng-rishang-gerufang-fa-number-juliaac/index.md
chezou/chezo.uno
5608270249cde6d59da133cf56e0069b13906e04
[ "MIT" ]
1
2020-05-12T02:22:05.000Z
2020-05-12T02:22:05.000Z
--- title: "Juliaで得られたマイナー言語を盛り上げる方法 #JuliaAC" subtitle: "" summary: "" authors: [aki] tags: [] categories: date: 2015-12-25T00:00:00+00:00 lastmod: 2015-12-25T00:00:00+00:00 featured: false draft: false image: caption: "" focal_point: "" preview_only: false projects: [] --- この記事は[Julia Advent Calender 2015](http://qiita.com/advent-calendar/2015/julialang)の最終日です。 Juliaは大分マイナーな言語で、日本語による情報が殆ど無かったのですが、以下の要因で大分盛り上がってきていると思います。 1. イベント(JuliaTokyo)を年数回開催している 2. Advent Calendarを毎年開催している 3. エヴァンジェリスト(a.k.a bicycle1885)が宣伝し続ける # 1. イベントを年に何回か開催する 最近は大分3つ目の要素がでかいなと思ってきているのですが、 そもそものJuliaTokyoのスタートとしては僕がMachine Learning Casual Talksを開催したところ、偶々来ていたbicycle1885さんがいたということと、Tokyo.RでJuliaの宣伝をしていた[sorami](https://twitter.com/sorami)さんとで意気投合して始めました。 この手のマイナー言語の通例としては英語圏でなんだか盛り上がっているらしいぞ、でも日本語の情報もないし何が嬉しいんだっけ?というケースが圧倒的に多いと思うのですが、やる気のある人が3人いれば勉強会を継続的に開催できます。\*1負荷分散大事。 最近は、[esa.io様](http://esa.io/)にスポンサードもしていただいて、情報の共有も少しずつすすめております。\*2 # 2. Advent Calendarを毎年開催する かなり広がりを感じたのがQiitaで開催した[2014年のAdvent Calendar](http://qiita.com/advent-calendar/2014/julialang)です。 きっかけは同僚の[gfx](https://twitter.com/ __gfx__ )氏に煽られてはじめたのですが、何とか25日埋まって良かったです。\*3 Qiitaは最近情報が溢れてきて質が云々とかいう議論がたまに出てくるのを見かけるのですが、ことJuliaに関して言えば「試してみた」「installしてみた」とかでも全然情報としては価値がありますし、投稿していただけるととてもうれしいです。 JuliaTokyo名物「Juliaはじめて24時間以内枠」というLT枠がいつもなんとなくあるのですが、こうした気軽にやってみた/作ってみた/試してみたという行為が日本語の情報を増やすことにつながるので、正の連鎖がつながっていきます。 また、Julia自体が変化の速い言語なので、過去の記事と同じ事をやっても実行できないということがざらにあります。ですので、「最新の入門情報」というのは非常にありがたいわけです。 もちろん、ガチ勢は放っておいても英語の情報を取ってくるのですが、日本で裾野を広げるためには日本語の情報が欠かせません。 今年は名古屋方面にもJuliaが広まっているのを実感して、とてもうれしい限りです。 # 3. エヴァンジェリストを確保する 最後のエヴァンジェリストを確保するという話は、結構大変かも知れません。我々の場合、エネルギーもある程度の時間もある優秀な大学院生がJSoCというグラントも取って開発しちゃったり研究に使っているのですが、彼が常に「Julia良いよJulia」「これからはJuliaの時代」と言い続けている\*4ことで、じわじわと広がっているように見受けられます。 そのおかげで、ウサギィさんやNAISTの松本研にまでJuliaが広がっていると聞き、やはり良いと言い続けることと情報を蓄積し続けることは大事だなと通関しています。 無論、そもそもの言語としてのウリ(NumPy, SciPyでできない行列計算以上のことをやるときに速さが欲しい)がなければこれも無意味ではあります。しかし、いかに良い言語であっても広まらない時は広まらないし、そこまで良いと思っていない言語でもデファクトになるという時はあります。 であれば、自分の好きな言語を広めると幸せになれるんじゃないか、と思います。 後は[ベンチマーク](https://chezou.hatenablog.com/entry/2015/10/21/234317)など、他言語との比較を行うことができると、より他言語のユーザーに認知が広まるのではないでしょうか。 # まとめ 最後にテクニカルではない話になってしまいましたが、Juliaの事例を通じていろいろな言語が盛り上がっていけばなと思います。 \*1:この辺僕が死んだら開催されなさそうなkawasaki.rbとは対称的ですね :p \*2:コミュニティ運営で重要なのはSPOFを作らないことだと思っているので、esaなりSlackでのchat opsなりは負荷分散のために非常に重要。負荷の集中を舐めてはいけない \*3:途中までだいぶ一人カレンダーの様相を呈していた \*4:壊れたロボットの用に同じ言葉を繰り返す作戦にしか思えない
35.289855
254
0.855031
jpn_Jpan
0.892909
48807de86eb978ab610856a8a74092b4e3125ce4
94
md
Markdown
Port 3283 - ARD (refined payload)/README.md
racompton/AMP-Research
aabc1bb3f08ed960d8466bd1e53408d2977db1fe
[ "MIT" ]
183
2019-09-30T09:22:44.000Z
2022-03-30T20:39:30.000Z
ARD port 3283/README.md
mikust/AMP-Research
81a3a95842616e5f6f498b8df1e04a5383ace7b5
[ "MIT" ]
5
2020-03-25T11:21:52.000Z
2022-03-09T01:43:07.000Z
ARD port 3283/README.md
mikust/AMP-Research
81a3a95842616e5f6f498b8df1e04a5383ace7b5
[ "MIT" ]
72
2019-09-28T19:12:39.000Z
2022-03-27T20:08:07.000Z
# ARD (Apple Remote Desktop) ## Port: 3283 ## Proto: UDP ## Amplification factor: 33x ---
9.4
28
0.62766
kor_Hang
0.481815
4880942456a3dc5d5a8e8ac69df4ca86eaf9feb3
1,134
md
Markdown
data/pages/_posts/my-first-post.md
ParvulaCMS/parvula
dcb1876bef70caa14d09e212838a35cb29e23411
[ "MIT" ]
56
2016-04-14T18:16:11.000Z
2022-02-09T23:52:47.000Z
data/pages/_posts/my-first-post.md
BafS/parvula
dcb1876bef70caa14d09e212838a35cb29e23411
[ "MIT" ]
12
2015-01-22T09:52:06.000Z
2016-03-28T16:19:12.000Z
data/pages/_posts/my-first-post.md
ParvulaCMS/parvula
dcb1876bef70caa14d09e212838a35cb29e23411
[ "MIT" ]
8
2017-03-09T03:33:39.000Z
2019-11-28T21:21:29.000Z
--- date: 02-01-2016 --- # One morning... One morning, as Gregor Samsa was waking up from anxious dreams, he discovered that in bed he had been changed into a monstrous verminous bug. He lay on his armour-hard back and saw, as he lifted his head up a little, his brown, arched abdomen divided up into rigid bow-like sections. From this height the blanket, just about ready to slide off completely, could hardly stay in place. His numerous legs, pitifully thin in comparison to the rest of his circumference, flickered helplessly before his eyes. "What's happened to me," he thought. It was no dream. His room, a proper room for a human being, only somewhat too small, lay quietly between the four well-known walls. Above the table, on which an unpacked collection of sample cloth goods was spread out (Samsa was a traveling salesman) hung the picture which he had cut out of an illustrated magazine a little while ago and set in a pretty gilt frame. It was a picture of a woman with a fur hat and a fur boa. She sat erect there, lifting up in the direction of the viewer a solid fur muff into which her entire forearm disappeared.
113.4
584
0.777778
eng_Latn
0.999987
4880d2499e5d7c68c3579d310acc71aa984f5f77
2,773
md
Markdown
docs/extensibility/internals/web-site-support.md
tommorris/visualstudio-docs.de-de
2f351c63cc51989c21aa6f5e705ec428144d8da6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/internals/web-site-support.md
tommorris/visualstudio-docs.de-de
2f351c63cc51989c21aa6f5e705ec428144d8da6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/internals/web-site-support.md
tommorris/visualstudio-docs.de-de
2f351c63cc51989c21aa6f5e705ec428144d8da6
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Der Support-Website | Microsoft Docs ms.custom: '' ms.date: 11/04/2016 ms.technology: - vs-ide-sdk ms.topic: conceptual helpviewer_keywords: - web site projects ms.assetid: ce9f4266-bb64-4c09-be88-4bd6413f60d0 author: gregvanl ms.author: gregvanl manager: douge ms.workload: - vssdk ms.openlocfilehash: 6d3da310c6695598eef36998cc562f6d477eff29 ms.sourcegitcommit: 6a9d5bd75e50947659fd6c837111a6a547884e2a ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 04/16/2018 ms.locfileid: "31139850" --- # <a name="web-site-support"></a>Der Support-Website Ein Projektsystem Website ist ein Projektsystem, das Webprojekte erstellt. Webprojekte erstellen wiederum Webanwendungen. Ein Website-Projekt generiert eine ausführbare Datei für jede Webseite, die Code verknüpft ist. Zusätzliche ausführbare Dateien werden aus der Quellcodedateien im Ordner "App_Code" generiert. Website-Projektsysteme werden durch Hinzufügen von Vorlagen und Registrierungsattribute zu einem vorhandenen Projektsystem erstellt. Eines dieser Attribute wählt der IntelliSense-Anbieter für die Sprache aus. Die Implementierung eines Anbieters IntelliSense Verweise verarbeitet und des Sprachcompilers aufruft, wenn eine intelligente Webseite, die nicht zwischengespeichert wird angefordert wird. Sprachcompiler, die zum Kompilieren von Webseiten verwendet muss registriert werden, mit [!INCLUDE[vstecasp](../../code-quality/includes/vstecasp_md.md)]. Können Sie die [ \<Compiler >-Element](/dotnet/framework/configure-apps/file-schema/compiler/compiler-element) in einer Datei "Web.config" So registrieren den Compiler an, wie im folgenden Beispiel gezeigt: ``` <system.codedom> <compilers> <compiler language="py;IronPython" extension=".py" type="IronPython.CodeDom.PythonProvider, IronPython, Version=1.0.2391.18146, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </compilers></system.codedom> ``` ## <a name="in-this-section"></a>In diesem Abschnitt [Vorlagen für die Websiteunterstützung](../../extensibility/internals/web-site-support-templates.md) Listet die Vorlagen, die Sie verwenden können, um neue Websiteprojekte und zugehörige Elemente zu erstellen. [Attribute der Websiteunterstützung](../../extensibility/internals/web-site-support-attributes.md) Zeigt die Registrierungsattribute, die einem Websiteprojekt zu verbinden [!INCLUDE[vsprvs](../../code-quality/includes/vsprvs_md.md)] und [!INCLUDE[vstecasp](../../code-quality/includes/vstecasp_md.md)]. ## <a name="related-sections"></a>Verwandte Abschnitte [Webprojekte](../../extensibility/internals/web-projects.md) Bietet eine Übersicht über die zwei Arten von Webprojekten, Websiteprojekte und Webanwendungsprojekte.
64.488372
400
0.781825
deu_Latn
0.908601
488126e0f67191da9746b98c8d5a4d5ecc63067e
6,208
md
Markdown
README.md
google/idris-protobuf
843ee338f30ee72f220b16cae6e1dcef9244e2c2
[ "Apache-2.0" ]
30
2016-07-15T10:56:57.000Z
2021-11-16T11:47:53.000Z
README.md
google/idris-protobuf
843ee338f30ee72f220b16cae6e1dcef9244e2c2
[ "Apache-2.0" ]
2
2016-11-23T11:38:15.000Z
2016-11-23T11:40:37.000Z
README.md
google/idris-protobuf
843ee338f30ee72f220b16cae6e1dcef9244e2c2
[ "Apache-2.0" ]
11
2016-07-15T01:03:12.000Z
2021-10-31T10:35:47.000Z
# A partial implementation of Protocol Buffers in Idris Protocol buffers are a serializable format for structured data that is ubiquitous at Google. Idris is a language where types are first class values that is useful for theorem proving and metaprogramming. This package contains a partial implementation of protocol buffers in the Idris language. The goal is to demonstrate how Idris is capable of advanced metaprogramming that simplifies the implementation of protocol buffers. This is not an official Google product. This project is a personal 20% project. Protocol buffers are a way to describe a schema for serializable data. The schema is called a protocol buffer "message". A message describes the kind of data that is to be serialized, and comprises a set of fields. Each field can be a primitive type, an enum, or another message. In this way protocol buffers allow the description of arbitrary data types. Below is shown a protocol message describing a phone number, including an enum definition for the type of a phone number (mobile, home or work). ``` enum PhoneType { MOBILE = 0; HOME = 1; WORK = 2; } message PhoneNumber { required string number = 1; optional PhoneType type = 2 [default = HOME]; } message Person { required string name = 1; required int32 id = 2; optional string email = 3; repeated PhoneNumber phone = 4; } ``` In most languages, protocol buffers are turned into native data types by code generation. E.g. in C++, the generated code for the `Person` class will be a C++ class with methods like `set_name`, `name` and more complex method to set nested messages like `phone`. The code is generated by a compiler that takes protocol message descriptions like the one above, parses them and emits generated C++ code. However in Idris, we can avoid generating code by considering a type of message descriptors (so the `Person` message description would be an value with this type). This is the `MessageDescriptor` class. In the protocol buffers documentation this is just called a `Descriptor` but here it's useful to be more consistent in naming. The unique property of Idris is that we can define an inductive data type `InterpMessage` which has `MessageDescriptor` for its index space, so that `InterpMessage Person` is the type of messages of type `Person`. Inductive data types are similar to C++ templates but with the advantage their arguments can be any value and that value need not even be known at compile time. In Idris we can write generic, type safe, code for working with protocol buffers. In particular we have code for serialization and deserialization to and from text form. Serialization looks like ``` printToTextFormat : InterpMessage d -> String ``` Note that `d` is an implicit argument. If we were to expand the above declaration to include the type of `d` (which Idris is able to infer), it would be ``` printToTextFormat : {d : MessageDescriptor} -> InterpMessage d -> String ``` Note that this function is generic in that `d` ranges over the space of message descriptors, and for each `d`, the type of the second argument is a message of type `InterpMessage d`, that is a protocol message in the format given by the descriptor `d`. Deserialization can also be done generically. ## About this Repo The main purpose of the repo is to demonstrate and explore some interesting applications of Idris. Most of the interesting applications are in the `Test` directory which contains an example descriptor `Person` along with some examples of creating an instance of `InterpMessage Person` and some methods to inspect properties. The `Person` descriptor is given by ``` PhoneType : EnumDescriptor PhoneType = MkEnumDescriptor [ MkEnumValueDescriptor "MOBILE" 0, MkEnumValueDescriptor "HOME" 1, MkEnumValueDescriptor "WORK" 5 ] PhoneNumber : MessageDescriptor PhoneNumber = MkMessageDescriptor [ MkFieldDescriptor Required PBString "number", MkFieldDescriptor Optional (PBEnum PhoneType) "type" ] Person : MessageDescriptor Person = MkMessageDescriptor [ MkFieldDescriptor Required PBString "name", MkFieldDescriptor Required PBInt32 "id", MkFieldDescriptor Optional PBString "email", MkFieldDescriptor Repeated (PBMessage PhoneNumber) "phone" ] ``` A example of an element of `InterpMessage Person` is ``` John : InterpMessage Person John = MkMessage [ "John Doe", 1234, Just "jdoe@example.com", [ MkMessage ["123-456-7890", Just 1], MkMessage ["987-654-3210", Nothing] ] ] ``` Note that the `MkMessage` constructor accepts a heterogeneous list. This is not an `HVect` but a data type we construct in `Protobuf.idr` whose constructors are `Nil` and `(::)`, but whose elements have types corresponding to the types of the fields given by the message descriptor. ## Installing ### Install the [Lightyear](https://github.com/ziman/lightyear) package. This can be done by cloning the repo by running this command from your home directory: ``` git clone https://github.com/ziman/lightyear ``` and installing it with ``` cd lightyear idris --install lightyear.ipkg ``` ### Clone this repo and run the tests. Clone this repo by running the following from your home directory ``` git clone https://github.com/google/idris-protobuf ``` The `Test` directory contains interesting examples that you might want to modify and re-run. To run the tests run this command from this repo ``` cd idris-protobuf idris --testpkg protobuf.ipkg ``` ### Experiment in the REPL. In order to load the repl with this package, first install it with the command ``` idris --install protobuf.ipkg ``` then load the Idris REPL with this package along with the test utils, with the command ``` idris -p lightyear Protobuf.idr Test/Utils.idr ``` While in the REPL you can explore the package, e.g. ``` *Test/Utils *Protobuf> printToTextFormat Test.Utils.John "name: \"John Doe\"\nid: 1234\nemail: \"jdoe@example.com\"\nphone: {\n number: \"123-456-7890\"\n type: HOME\n}\nphone: {\n number: \"987-654-3210\"\n}\n" : String *Test/Utils *Protobuf> parseFromTextFormat {d=Test.Utils.Person} "name: \"Kester Tong\" id: 1234" Right (MkMessage ["Kester Tong", 1234, Nothing, []]) : Either String ```
37.853659
166
0.76047
eng_Latn
0.995533
488136fa7783ad836e78473fe5ece0668e42b793
1,068
md
Markdown
docs/index.md
thearavind/vonage-go-sdk
f746f3384089834a769e270946ac6d61932428ec
[ "Apache-2.0" ]
21
2020-09-20T08:51:28.000Z
2022-02-07T21:13:00.000Z
docs/index.md
thearavind/vonage-go-sdk
f746f3384089834a769e270946ac6d61932428ec
[ "Apache-2.0" ]
25
2020-09-08T15:49:33.000Z
2022-03-15T13:59:07.000Z
docs/index.md
thearavind/vonage-go-sdk
f746f3384089834a769e270946ac6d61932428ec
[ "Apache-2.0" ]
14
2020-09-25T15:19:18.000Z
2022-03-14T13:38:48.000Z
--- title: Home --- The Vonage Go SDK is an unofficial library enabling the use of the Vonage APIs from Go. To improve these pages or open an issue about this project, visit the repo on GitHub: <https://github.com/vonage/vonage-go-sdk>. ## Installation Find current and past releases on the [releases page](https://github.com/vonage/vonage-go-sdk/releases). Import the package and use it in your own project ``` import ("github.com/vonage/vonage-go-sdk") ``` ## Usage Examples Usage examples are organised by the API they access. You can find out more about all the Vonage APIs on the [Developer Portal](https://developer.nexmo.com). * [SMS API](examples/sms) * [Verify API](examples/verify) * [Application API](examples/application) * [Voice API](examples/voice) and [NCCOs](examples/ncco) * [Number Insight API](examples/numberinsight) * [Numbers API](examples/numbers) ## More Information More than just the APIs, this section covers other useful topics for using the Go SDK. * [JWTs](examples/jwt) * [Tips, Tricks and Troubleshooting](examples/tips)
28.105263
156
0.744382
eng_Latn
0.967366
4882128c2ed2e50bd2fe2a1b993e87560eddc4d9
10,627
md
Markdown
onos-tennison-apps/ipfix/README.md
broadbent/TENNISON
16a375839f209ccad0d15165d1409d6a65124bb4
[ "Apache-2.0" ]
15
2018-03-14T16:56:42.000Z
2021-06-01T08:10:57.000Z
onos-tennison-apps/ipfix/README.md
broadbent/TENNISON
16a375839f209ccad0d15165d1409d6a65124bb4
[ "Apache-2.0" ]
13
2019-09-12T06:38:59.000Z
2021-07-19T09:08:38.000Z
onos-tennison-apps/ipfix/README.md
SDN-Security/TENNISON
e156e3d84543427a201fbc3cb67f050cff4ed764
[ "Apache-2.0" ]
7
2018-10-16T15:03:51.000Z
2021-06-01T08:10:58.000Z
ONOS IPFIX demo application =========================== ##Introduction ONOS IPFIX application is the demo application that shows the concept of the OpenFlow statistics export over IPFIX protocol and possibly other similar protocols (e.g NetFlow). Currently, application supports two scenarios of the OpenFlow statistics export: 1. Export of the flow statistics for ONOS Reactive Forwarding application flows. 2. Export of the switch port statistics The intent of the application is to demonstrate the concept which can be possibly extended to export OpenFlow flow statistics for other ONOS applications, like bgprouter, sdnip, mfwd, etc. #User guide ###Installation ONOS IPFIX requires at least ONOS version 1.3. ONOS IPFIX application can be downloaded from the onos-app-samples GIT repository with the following command: ``` git clone https://gerrit.onosproject.org/onos-app-samples.git ``` Enter the ONOS IPFIX application directory and compile the application: ``` cd onos-app-samples/onos-app-ipfix mvn clean install ``` After successful compilation of the application, it can be installed on the running ONOS instance with the following command: ``` onos-app <ip-address-of-ONOS-instance> reinstall! target/onos-app-ipfix-1.3.0-SNAPSHOT.oar ``` ###General configuration ONOS IPFIX application generally requires that user configures IP address and port of the IPFIX collector. This is configured with the following ONOS configuration commands: ``` set org.onosporject.ipfix.IpfixManager CollectorAddress <ip-address> cfg set org.onosporject.ipfix.IpfixManager CollectorPort <udp-port> ``` IPFIX packets are transported over UDP and default port is 2055. ###Flow statistics export for ONOS Reactive Forwarding application The export of the Flow statistics for ONOS Reactive Forwarding application is enabled by default. It is realized over Flow Rule Listener. When the flow rule created by the ONOS reactive forwarding application is removed from ONOS, IPFIX application will collect its statistics, covert them to the appropriate IPFIX format and export them over IPFIX protocol. Currently, ONOS IPFIX supports three IPFIX record templates that are used for exporting of these flows: - **MAC template** (template ID = 331) - matches only MAC addresses, VLAN and switch ports. This template is used with default configuration of the reactive forwarding application that matches only source and destination MAC address and input port. This template has following IPFIX information elements (IEs): - ID 130 - *exporterIPv4Address* - IPv4 address of the OpenFlow switch where flow was installed. - ID 131 - *exporterIPv6Address* - OpenFlow switch DPID where flow was installed is embedded in the last 8 bytes of the fields. - ID - 152 - *flowStartMilliseconds* - The absolute timestamp when the flow is installed by ONOS at the switch. - ID - 153 - *flowEndMilliseconds* - The absolute timestamp when the flow is removed by the ONOS from the switch. - ID 1 - *octetDeltaCount* - number of bytes matched by the flow. - ID 2 - *packeDeltaCount* - number of packets matched by the flow. - ID 10 - *ingressInterface* - OpenFlow switch input port of the flow. - ID 14 - *egressInterface* - OpenFlow switch output port from the flow rule action structure. - ID 56 - *sourceMacAddress* - source MAC address matched by the flow rule. - ID 80 - *destinationMacAddress* - destination MAC address matched by the flow rule. - ID 256 - *etherType* - Ethertype field matched by flow rule (0 if not matched) - ID 14 - *vlanId* - VLAN ID matched by the flow rule (0 if not matched) - **IPv4 template** (template ID = 332) - matches MAC template with addition to IPv4 addresses, Protocol, DSCP and TCP/UDP port. This template will be used if the matching of the IPv4 address is enabled in the ONOS Reactive Forwarding Application. This template has following IPFIX information elements (IEs) in addition to MAC template: - ID 8 - *sourceIPv4Address* - source IPv4 address matched by the flow rule. - ID 12 - *destinationIPv4Address* - destination IPv4 address matched by the flow rule. - ID 4 - *protocolIdentifier* - IPv4Protocol field matched by flow rule (255 if not matched) - ID 5 - *ipClassOfService* - IPv4 ToS field matched by the flow rule, constructed from DSCP and ECN OpenFlow matching conditions (0 if not matched). - ID 7 - *sourceTransportPort* - source TCP/UDP port matched by the flow rule. - ID 11 - *destinationTransportPort* - destination TCP/UDP port matched by the flow rule. - **IPv6 template** (template ID = 333) - matches MAC template with addition to IPv6 addresses, Next-Header, Flow Label and TCP/UDP port. This template will be used if the matching of the IPv6 address is enabled in the ONOS Reactive Forwarding application. This template has following IPFIX information elements (IEs) in addition to MAC template: - ID 27 - *sourceIPv6Address* - source IPv6 address matched by the flow rule. - ID 28 - *destinationIPv6Address* - destination IPv6 address matched by the flow rule. - ID 31 - *flowLabel* - IPv6 Flow Label field matched by flow rule (0 if not matched) - ID 4 - *protocolIdentifier* - IPv4Protocol field matched by flow rule (255 if not matched) - ID 5 - *ipClassOfService* - IPv4 ToS field matched by the flow rule, constructed from DSCP and ECN OpenFlow matching conditions (0 if not matched). - ID 7 - *sourceTransportPort* - source TCP/UDP port matched by the flow rule. - ID 11 - *destinationTransportPort* - destination TCP/UDP port matched by the flow rule. User can enable matching of the IPv4 address and other IPv4 fields in ONOS Reactive Forwarding application with following ONOS commands: ``` cfg set org.onosproject.fwd.ReactiveForwarding matchIpv4Addresses true cfg set org.onosproject.fwd.ReactiveForwarding matchIpv4Dscp true ``` User can enable matching of the IPv4 address and other IPv4 fields in ONOS Reactive Forwarding application with following ONOS commands: ``` cfg set org.onosproject.proxyarp.ProxyArp ipv6NeighborDiscovery true cfg set org.onosproject.provider.host.impl.HostLocationProvider ipv6NeighborDiscovery true cfg set org.onosproject.fwd.ReactiveForwarding ipv6Forwarding true cfg set org.onosproject.fwd.ReactiveForwarding matchIpv6Addresses true cfg set org.onosproject.fwd.ReactiveForwarding matchIpv6FlowLabel true ``` User can enable matching of the VLAN, TCP/UDP ports and ICMP type and code fields in ONOS Reactive Forwarding application with following ONOS commands: ``` cfg set org.onosproject.fwd.ReactiveForwarding matchVlan true cfg set org.onosproject.fwd.ReactiveForwarding matchTcpUdpPorts true cfg set org.onosproject.fwd.ReactiveForwarding matchIcmpFields true ``` To disable IPFIX export of the flow statistics for Reactive Forwarding application flows use following ONOS command: ``` cfg set org.onosproject.ipfix.IpfixManager ReactiveForwardingExport false ``` ###Export of the switch port statistics Export of the switch port statistics over IPFIX is disabled by default. To enable it, use following ONOS command: ``` cfg set org.onosproject.ipfix.IpfixManager PortStatsFlowExport true ``` Export of the switch port statistics is realized over DeviceListener. When ONOS updates its internal port statistics for the OpenFlow switch, IPFIX application will export port statistics over IPFIX protocol. The export is done only by the ONOS instance that is “master” for the specific OpenFlow switch. The exported values represent difference between switch port counters collected by ONOS in the current and the previous polling. Polling interval is controlled by the ONOS OpenFlow Device Provider which actually collects statistics and by default is very frequent on 5 seconds interval. IPFIX application will export port statistics for every ONOS update of the statistics. It is recommended to configure ONOS port statistics polling interval to appropriate value with the following command: ``` cfg set org.onosproject.org.provider.of.device.impl.OpenFlowDeviceProvider PortStatsPollFrequency 30 ``` The export of the switch port statistics uses two IPFIX record templates: - **Received traffic** - Template ID 341 - **Transmitted traffic**- Template ID 342 These templates consist of the following information elements (IEs): - ID 130 - *exporterIPv4Address* - IPv4 address of the OpenFlow switch - ID 131 - *exporterIPv6Address* - OpenFlow switch DPID is embedded in the last 8 bytes of the fields - *ingressInterface* or *egressInterface* - switch port number: - ID 10 - *ingressInterface* IE is used for received traffic statistics - ID 14 - *egressInterface* IE is used for transmitted traffic statistics - ID 1 - *octetDeltaCount* - number of bytes received/transmitted on the port between two polling intervals - ID 2 - *packeDeltaCount* - number of packets received/transmitted on the port between two polling intervals - ID - 152 - *flowStartMilliseconds* - The absolute timestamp for the previous polling of the statistics - ID - 153 - *flowEndMilliseconds* - The absolute timestamp for the current polling of the statistics ##Known shortcomings and issues The purpose of the application is demonstration of the possibility for export of the OpenFlow statistics over IPFIX protocol. For this reason, export of IPFIX records is realized in very simplified way: - For export of the Reactive Forwarding application flows, each flow (Data record) is exported with its corresponding Template record in a single UDP packet. - For export of switch port statistics, for each OpenFlow switch and each traffic direction single IPFIX packet is created. This packet has Template Record for corresponding traffic direction and Data records for each port of the OpenFlow switch. - Currently, export to only one IPFIX collector is supported. Some of the IPFIX analyzer application will use source IP address of the IPFIX packets to identify “IPFIX exporter”. Because the ONOS IPFIX application exports IPFIX records on behalf of OpenFlow switches, IPFIX packets have ONOS controller IP address. ONOS IPFIX application uses *exporterIPv4Address* and *exporterIPv6Address* to further identify OpenFlow switch that matched the flow and on which behalf statistics are exported: - The *exporterIPv4Address* IE has IPv4 address of the OpenFlow switch that is connected over OpenFlow channel to ONOS controller. - The *exporterIPv6Address* IE embeds the OpenFlow switch DPID in the last 8 bytes of the information element.
70.377483
432
0.781029
eng_Latn
0.987351
4882384e64dc6263394371eb4162601a13913c4e
93,933
md
Markdown
articles/azure-maps/migrate-from-bing-maps-web-app.md
AClerbois/azure-docs.fr-fr
6a7ba4fa8a31f7915c925fef2f809b03d9a2df4e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-maps/migrate-from-bing-maps-web-app.md
AClerbois/azure-docs.fr-fr
6a7ba4fa8a31f7915c925fef2f809b03d9a2df4e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-maps/migrate-from-bing-maps-web-app.md
AClerbois/azure-docs.fr-fr
6a7ba4fa8a31f7915c925fef2f809b03d9a2df4e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Tutoriel : Migrer une application web à partir de Bing Cartes | Microsoft Azure Maps' description: Guide pratique pour migrer une application web de Bing Cartes vers Microsoft Azure Maps. author: rbrundritt ms.author: richbrun ms.date: 9/10/2020 ms.topic: tutorial ms.service: azure-maps services: azure-maps manager: cpendle ms.custom: devx-track-js ms.openlocfilehash: 469565385ce4b3ee4b1589f105216213d584c8c9 ms.sourcegitcommit: 32c521a2ef396d121e71ba682e098092ac673b30 ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 09/25/2020 ms.locfileid: "91319734" --- # <a name="migrate-a-web-app-from-bing-maps"></a>Migrer une application web à partir de Bing Cartes Les applications web qui utilisent Bing Cartes utilisent souvent le kit SDK JavaScript Bing Cartes V8. Le Kit de développement logiciel (SDK) web Azure Maps est le kit de développement logiciel (SDK) Azure approprié vers lequel migrer. Le Kit de développement logiciel (SDK) web Azure Maps vous permet de personnaliser des cartes interactives avec du contenu et des images qui vous sont propres pour les afficher dans vos applications web ou mobiles. Ce contrôle utilise WebGL, ce qui vous permet d’afficher d’importants jeux de données avec des performances élevées. Développez avec le kit de développement logiciel (SDK) à l’aide de JavaScript ou de TypeScript. Si vous migrez une application Web existante, vérifiez si elle utilise une bibliothèque de contrôle de carte Open source telle que Cesium, Leaflet et OpenLayers. Si c’est le cas et que vous préférez continuer à utiliser cette bibliothèque, vous pouvez la connecter aux services de mosaïques Azure Maps ([mosaïques routières](https://docs.microsoft.com/rest/api/maps/render/getmaptile) \| [mosaïques satellites](https://docs.microsoft.com/rest/api/maps/render/getmapimagerytile)). Les liens ci-dessous fournissent des détails sur l’utilisation d’Azure Maps dans certaines bibliothèques de contrôle de carte open source couramment utilisées. - Cesium : contrôle carte 3D pour le Web. [Documentation](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Raster%20Tiles%20in%20Cesium%20JS) \| [de l’exemple de code](https://cesiumjs.org/) - Leaflet : contrôle de carte 2D léger pour le Web. [Documentation](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Azure%20Maps%20Raster%20Tiles%20in%20Leaflet%20JS) \| [de l’exemple de code](https://leafletjs.com/) - OpenLayers : contrôle de carte 2D pour le Web qui prend en charge les projections. [Documentation](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Raster%20Tiles%20in%20OpenLayers) \| [de l’exemple de code](https://openlayers.org/) Si vous développez à l’aide d’un framework JavaScript, l’un des projets open source suivants peut être utile : - [ng-azure-maps](https://github.com/arnaudleclerc/ng-azure-maps) : wrapper Angular 10 autour d’Azure Maps. - [AzureMapsControl.Components](https://github.com/arnaudleclerc/AzureMapsControl.Components) : composant Blazor Azure Maps. - [Composant react Azure Maps](https://github.com/WiredSolutions/react-azure-maps) : wrapper react pour le contrôle Azure Maps. - [Vue Azure Maps](https://github.com/rickyruiz/vue-azure-maps) : composant Azure Maps pour l’application Vue. ## <a name="key-features-support"></a>Prise en charge des fonctionnalités clés Le tableau suivant liste les principales fonctionnalités de l’API du SDK JavaScript Bing Cartes v8 et la prise en charge d’une API similaire dans le SDK web Azure Maps. | Fonctionnalité Bing Cartes | Prise en charge du kit de développement logiciel (SDK) Web Azure Maps | |--------------------------|:--------------------------------------------------------------------------------------:| | Punaises | ✓ | | Clustering de punaises | ✓ | | Polylignes et polygones | ✓ | | Superposition de sol | ✓ | | Cartes thermiques | ✓ | | Couches de mosaïques | ✓ | | Couche KML | ✓ | | Couche de contour | [Exemples](https://azuremapscodesamples.azurewebsites.net/?search=contour) | | Couche de compartimentage des données | [Exemples](https://azuremapscodesamples.azurewebsites.net/?search=data%20binning) | | Couche de mosaïque animée | Incluse dans le [module d’animation](https://github.com/Azure-Samples/azure-maps-animations) open source Azure Maps | | Outils de dessin | ✓ | | Service de geocoder | ✓ | | Service de directions | ✓ | | Service Matrice des distances | ✓ | | Service de données spatiales | N/A | | Imagerie aérienne/satellite | ✓ | | Imagerie Vue aérienne | Prévu | | Image Streetside | Prévu | | Prise en charge de GeoJSON | ✓ | | Prise en charge de GeoXML | ✓ | | Prise en charge de Well-Known Text | ✓ | | Styles de carte personnalisés | Partial | Azure Maps a également de nombreux [modules open- source supplémentaires pour le SDK web](open-source-projects.md#open-web-sdk-modules) qui étendent ses fonctionnalités. ## <a name="notable-differences-in-the-web-sdks"></a>Différences notables dans les kits de développement logiciel (SDK) Web Voici quelques-unes des principales différences entre les SDK web Bing Cartes et Azure Maps à connaître : - En plus de fournir un point de terminaison hébergé pour accéder au kit de développement logiciel (SDK) Web d’Azure Maps, un package NPM est également disponible pour incorporer le kit de développement logiciel (SDK) Web dans les applications, si vous préférez. Pour plus d’informations, consultez cette [documentation](https://docs.microsoft.com/azure/azure-maps/how-to-use-map-control). Ce package inclut aussi des définitions de TypeScript. - Bing Cartes fournit deux branches hébergées de son kit SDK : Release et Experimental. La branche Experimental peut recevoir plusieurs mises à jour par jour pendant le nouveau développement. Azure Maps héberge uniquement une branche Release ; toutefois, des fonctionnalités expérimentales sont créées en tant que modules personnalisés dans le projet d’exemples de code open source Azure Maps. Avant, Bing Cartes avait également une branche figée qui était mise à jour moins fréquemment, réduisant ainsi le risque de changements cassants dus à une mise en production. Dans Azure Maps, vous pouvez utiliser le module NPM et pointer vers n’importe quelle version mineure précédente. > [!TIP] > Azure Maps publie des versions minifiées et déminifiées du kit SDK. Il vous suffit de supprimer `.min` des noms de fichiers. La version déminifiée est utile lors du débogage de problème, mais veillez à utiliser la version minifiée en production afin de tirer parti de la plus petite taille de fichier. - Après avoir créé une instance de la classe Map dans Azure Maps, votre code doit attendre que les mappages `ready` ou les événements `load` se déclenchent avant d’interagir avec le mappage. Ces événements permettent de s’assurer que toutes les ressources de carte ont été chargées et sont prêtes à être sollicitées. - Les deux plateformes utilisent un système de mosaïque similaire pour les cartes de base, mais les mosaïques dans Bing Cartes ont une dimension de 256 pixels tandis que celles d’Azure Maps ont une dimension de 512 pixels. Par conséquent, pour obtenir la même vue cartographique que Bing Cartes dans Azure Maps, le niveau de zoom utilisé dans Bing Cartes doit être soustrait d’une unité dans Azure Maps. - Dans Bing Cartes, les coordonnées se présentent sous la forme `latitude, longitude`, tandis qu’Azure Maps utilise `longitude, latitude`. Ce format est conforme à la norme `[x, y]` qui est respectée par la plupart des plateformes GIS. - Les formes du kit de développement logiciel (SDK) Web Azure Maps sont basées sur le schéma GeoJSON. Les classes d’assistance sont exposées par le biais de l’espace de noms [atlas.data](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.data). Il existe également la classe [atlas.Shape](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.shape) qui peut être utilisée pour encapsuler des objets GeoJSON et simplifier leur mise à jour et leur gestion, de manière pouvant être liée aux données. - Les coordonnées dans Azure Maps sont définies comme des objets Position qui peuvent être spécifiés sous forme d’un simple tableau de nombres au format `[longitude, latitude]` ou `new atlas.data.Position(longitude, latitude)`. > [!TIP] > La classe Position a une fonction d’assistance statique pour l’importation de coordonnées qui sont au format `latitude, longitude`. La fonction [atlas.data.Position.fromLatLng](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.data.position) peut souvent remplacer la fonction `new Microsoft.Maps.Location` dans le code Bing Cartes. - Au lieu de spécifier des informations de style pour chaque forme ajoutée à la carte, Azure Maps sépare les styles des données. Les données sont stockées dans des sources de données et sont connectées aux couches de rendu que le code Azure Maps utilise pour afficher les données. Cette approche offre un gain de performances amélioré. De plus, de nombreux calques prennent en charge le style basé sur les données dans lequel la logique métier peut être ajoutée aux options de style de calque qui changeront la façon dont les formes individuelles sont rendues au sein d'un calque en fonction des propriétés définies dans la forme. - Azure Maps fournit un ensemble de fonctions mathématiques spatiales utiles dans l’espace de noms `atlas.math`, mais elles diffèrent de celles du module mathématique spatial Bing Cartes. La principale différence réside dans le fait qu’Azure Maps ne fournit pas de fonctions intégrées pour les opérations binaires telles que l’union et l’intersection. Toutefois, Azure Maps étant basé sur GeoJSON, qui est une norme ouverte, de nombreuses bibliothèques open source sont disponibles. Une option très populaire qui fonctionne bien avec Azure Maps et fournit une multitude de fonctionnalités mathématiques spatiales est [turf js](http://turfjs.org/). Consultez également le [Glossaire Azure Maps](https://docs.microsoft.com/azure/azure-maps/glossary) pour obtenir une liste détaillée de la terminologie associée à Azure Maps. ## <a name="web-sdk-side-by-side-examples"></a>Exemples côte à côte du kit de développement logiciel (SDK) Web Vous trouverez ci-dessous une collection d’exemples de code pour chaque plateforme, qui couvrent les cas d’usage courants pour vous aider à migrer votre application web du kit SDK JavaScript Bing Cartes v8 vers le kit SDK web Azure Maps. Des exemples de code liés aux applications Web sont fournis en JavaScript. Toutefois, Azure Maps fournit également des définitions TypeScript sous la forme d’une option supplémentaire via un module [NPM](https://docs.microsoft.com/azure/azure-maps/how-to-use-map-control). **Rubriques** - [Charger une carte](#load-a-map) - [Localisation de la carte](#localizing-the-map) - [Définition de la vue cartographique](#setting-the-map-view) - [Ajout d’une punaise](#adding-a-pushpin) - [Ajout d’une punaise personnalisée](#adding-a-custom-pushpin) - [Ajout d’une polyligne](#adding-a-polyline) - [Ajout d’un polygone](#adding-a-polygon) - [Afficher une zone d’informations](#display-an-infobox) - [Clustering de punaises](#pushpin-clustering) - [Ajouter une carte thermique](#add-a-heat-map) - [Superposer une couche de mosaïques](#overlay-a-tile-layer) - [Afficher les données du trafic](#show-traffic-data) - [Ajouter une superposition de sol](#add-a-ground-overlay) - [Ajouter des données KML à la carte](#add-kml-data-to-the-map) - [Ajouter des outils de dessin](#add-drawing-tools) ### <a name="load-a-map"></a>Charger une carte Le chargement d’une carte dans les deux SDK suit le même ensemble d’étapes : - Ajout d'une référence au Kit de développement logiciel (SDK) de la carte. - Ajoutez une balise `div` au corps de la page, qui fera office d’espace réservé pour la carte. - Créez une fonction JavaScript qui est appelée lorsque la page a été chargée. - Créez une instance de la classe de carte respective. **Quelques différences clés** - Bing Cartes demande la spécification d’une clé de compte dans la référence de script de l’API ou en tant qu’option de carte. Les informations d’authentification pour Azure Maps sont spécifiées en tant qu’options de la classe de carte. Il peut s’agir d’une clé d’abonnement ou d’une information Azure Active Directory. - Bing Cartes prend une fonction de rappel dans la référence de script de l’API qui est utilisée pour appeler une fonction d’initialisation afin de charger la carte. Avec Azure Maps, l’événement OnLoad de la page doit être utilisé. - Quand vous utilisez un ID pour référencer l’élément `div` dans lequel la carte sera affichée, Bing Cartes utilise un sélecteur HTML (c’est-à-dire `#myMap`), tandis qu’Azure Maps utilise uniquement la valeur d’ID (c’est-à-dire `myMap`). - Les coordonnées dans Azure Maps sont définies comme des objets Position qui peuvent être spécifiés sous forme d’un simple tableau de nombres au format `[longitude, latitude]`. - Le niveau de zoom dans Azure Maps est inférieur d’un niveau à celui de l’exemple Bing Cartes en raison de la différence de taille du système de mosaïques entre les plateformes. - Par défaut, Azure Maps n’ajoute aucun contrôle de navigation au canevas de carte, comme des boutons de zoom et des boutons de style de carte. Il existe cependant des commandes pour ajouter un sélecteur de style de carte, des boutons de zoom, une boussole ou une commande de rotation et un réglage de vitesse. - Un gestionnaire d’événements est ajouté dans Azure Maps pour surveiller l’événement `ready` de l’instance de la carte. Cela se déclenche lorsque la carte a terminé le chargement du contexte WebGL et de toutes les ressources nécessaires. Vous pouvez ajouter n’importe quel code de chargement dans ce gestionnaire d’événements. Les exemples ci-dessous montrent comment charger une carte de base de manière à ce qu’elle soit centrée sur les coordonnées de New York à des coordonnées (longitude : -73,985, latitude : 40,747) et est au niveau de zoom 12 dans Bing Cartes. **Avant : Bing Cartes** Le code suivant est un exemple d’affichage d’une carte Bing centrée et zoomée sur un emplacement. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <script type='text/javascript'> var map; function initMap() { map = new Microsoft.Maps.Map('#myMap', { credentials: '<Your Bing Maps Key>', center: new Microsoft.Maps.Location(40.747, -73.985), zoom: 12 }); } </script> <!-- Bing Maps Script Reference --> <script src="https://www.bing.com/api/maps/mapcontrol?callback=initMap" async defer></script> </head> <body> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` L’exécution de ce code dans un navigateur affichera une carte ressemblant à l’image suivante : <center> ![Carte Bing Cartes](media/migrate-bing-maps-web-app/bing-maps-load-map.jpg)</center> **Après : Azure Maps** Le code suivant montre comment charger une carte avec la même vue dans Azure Maps, ainsi qu’un contrôle de style cartographique et des boutons de zoom. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <script type='text/javascript'> var map; function initMap() { map = new atlas.Map('myMap', { center: [-73.985, 40.747], //Format coordinates as longitude, latitude. zoom: 11, //Subtract the zoom level by one. //Add your Azure Maps key to the map SDK. Get an Azure Maps key at https://azure.com/maps. NOTE: The primary key should be used as the key. authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Add zoom and map style controls to top right of map. map.controls.add([ new atlas.control.StyleControl(), new atlas.control.ZoomControl() ], { position: 'top-right' }); }); } </script> </head> <body onload="initMap()"> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` L’exécution de ce code dans un navigateur affichera une carte ressemblant à l’image suivante : <center> ![Carte Azure Maps](media/migrate-bing-maps-web-app/azure-maps-load-map.jpg)</center> Vous trouverez des informations détaillées sur la façon de configurer et d’utiliser le contrôle de carte Azure Maps dans une application Web [ici](https://docs.microsoft.com/azure/azure-maps/how-to-use-map-control). > [!TIP] > Azure Maps publie des versions minifiées et déminifiées du kit SDK. Supprimez `.min` des noms de fichiers. La version déminifiée est utile lors du débogage de problème, mais veillez à utiliser la version minifiée en production afin de tirer parti de la plus petite taille de fichier. **Ressources supplémentaires** - Azure Maps fournit également des contrôles de navigation pour faire pivoter et mettre en forme la vue cartographique comme indiqué [ici](https://docs.microsoft.com/azure/azure-maps/map-add-controls). ### <a name="localizing-the-map"></a>Localisation de la carte Si votre audience est répartie dans plusieurs pays ou parle différentes langues, la localisation est importante. **Avant : Bing Cartes** Pour localiser les cartes Bing, la langue et la région sont spécifiées à l’aide des paramètres `setLang` et `UR` ajoutés à la référence de balise `<script>` à l’API. Certaines fonctionnalités de Bing Cartes sont disponibles uniquement sur certains marchés ; le marché de l’utilisateur est spécifié à l’aide du paramètre `setMkt`. ```html <script type="text/javascript" src="https://www.bing.com/api/maps/mapcontrol?callback=initMap&setLang=[language_code]&setMkt=[market]&UR=[region_code]" async defer></script> ``` Voici un exemple Bing Cartes avec la langue définie sur « fr-FR ». <center> ![Carte Bing Cartes localisée](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg)</center> **Après : Azure Maps** Azure Maps fournit uniquement des options pour définir la langue et la vue régionale de la carte. Aucun paramètre de marché n’est utilisé pour limiter les fonctionnalités. Il existe deux méthodes pour définir la langue et la vue régionale de la carte. La première option consiste à ajouter ces informations à l’espace de noms `atlas` global, ce qui a pour effet que toutes les instances de contrôle de carte dans votre application ont ces paramètres par défaut. Le code suivant définit la langue sur le français (« fr-FR ») et la vue régionale sur `"auto"` : ```javascript atlas.setLanguage('fr-FR'); atlas.setView('auto'); ``` La deuxième option consiste à transmettre ces informations dans les options de mappage lors du chargement de la carte, comme suit : ```javascript map = new atlas.Map('myMap', { language: 'fr-FR', view: 'auto', authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); ``` > [!NOTE] > Avec Azure Maps il est possible de charger plusieurs instances de carte sur la même page avec des paramètres de langue et de région différents. En outre, il est également possible de mettre à jour ces paramètres dans la carte après son chargement. Vous trouverez une liste détaillée des langues prises en charge dans Azure Maps [ici](https://docs.microsoft.com/azure/azure-maps/supported-languages). Voici un exemple de Azure Maps avec la langue définie sur « fr » et la région de l’utilisateur définie sur « fr-FR ». <center> ![Carte Azure Maps localisée](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg)</center> ### <a name="setting-the-map-view"></a>Définition de la vue cartographique Les cartes dynamiques dans Bing et Azure Maps peuvent être déplacées par programmation vers de nouvelles positions géographiques en appelant les fonctions appropriées dans JavaScript. Les exemples ci-dessous montrent comment faire en sorte que la carte affiche une image aérienne satellite, centrer la carte sur un emplacement avec des coordonnées (longitude : -111,0225, latitude : 35,0272) et définir le niveau de zoom sur 15 dans Bing Cartes. > [!NOTE] > Bing Cartes utilise des mosaïques de 256 pixels, tandis qu’Azure Maps utilise une mosaïque de 512 pixels. Cela réduit le nombre de requêtes réseau nécessaires à Azure Maps pour charger la même zone réactive que Bing Cartes. Toutefois, en raison de la façon dont les pyramides de mosaïques fonctionnent dans les contrôles de carte et de la plus grande taille des mosaïques dans Azure Maps, vous devez soustraire le niveau de zoom utilisé dans Bing Cartes d’une unité lors de l’utilisation d’Azure Maps pour obtenir cette même zone visualisable sous forme de carte dans Bing Cartes. **Avant : Bing Cartes** Le contrôle de carte Bing Cartes peut être déplacé par programmation à l’aide de la fonction `setView`, qui vous permet de spécifier le centre de la carte et un niveau de zoom. ```javascript map.setView({ mapTypeId: Microsoft.Maps.MapTypeId.aerial, center: new Microsoft.Maps.Location(35.0272, -111.0225), zoom: 15 }); ``` <center> ![Bing Cartes - Définir la vue cartographique](media/migrate-bing-maps-web-app/bing-maps-set-map-view.jpg)</center> **Après : Azure Maps** Dans Azure Maps, la position de la carte peut être modifiée par programmation à l’aide de la fonction `setCamera` de la carte, et le style de la carte peut être modifié à l’aide de la fonction `setStyle`. Notez que les coordonnées dans Azure Maps sont au format « longitude, latitude » et que la valeur de niveau de zoom est soustraite d’une unité. ```javascript map.setCamera({ center: [-111.0225, 35.0272], zoom: 14 }); map.setStyle({ style: 'satellite_with_roads' }); ``` <center> ![Azure Maps - Définir la vue cartographique](media/migrate-bing-maps-web-app/azure-maps-set-map-view.jpg)</center> **Ressources supplémentaires** - [Choisir un style de carte](https://docs.microsoft.com/azure/azure-maps/choose-map-style) - [Styles de carte pris en charge](https://docs.microsoft.com/azure/azure-maps/supported-map-styles) ### <a name="adding-a-pushpin"></a>Ajout d’une punaise Dans Azure Maps il existe plusieurs manières d’afficher des données de point sur la carte. - Marqueurs HTML : effectue le rendu des points à l’aide d’éléments DOM traditionnels. Les marqueurs HTML prennent en charge le glissement. - Calque de symbole : effectue le rendu des points avec une icône et/ou un texte dans le contexte WebGL. - Calque de bulles : affiche les données de point sous forme de cercles sur la carte. Les rayons des cercles peuvent être mis à l’échelle en fonction des propriétés des données. Les couches de symboles et de bulles sont rendues dans le contexte WebGL et peuvent afficher de très grands ensembles de points sur la carte. Ces calques requièrent que les données soient stockées dans une source de données. Les sources de données et les calques de rendu doivent être ajoutées à la carte après le déclenchement de l’événement `ready`. Les marqueurs HTML sont affichées sous forme d’éléments DOM dans la page et n’utilisent pas de source de données. Plus le nombre d’éléments DOM d’une page est élevé, plus la page devient rapide. En cas de rendu de plus de quelques centaines de points sur une carte, il est recommandé d’utiliser à la place l’un des calques de rendu. Les exemples ci-dessous montrent comment ajouter un marqueur à la carte (longitude : -0,2, latitude : 51,5) avec le nombre 10 superposé sous la forme d’une étiquette. **Avant : Bing Cartes** Avec Bing Cartes, les marqueurs sont ajoutés à la carte à l’aide de la classe `Microsoft.Maps.Pushpin`*. Les punaises sont ensuite ajoutées à la carte à l’aide de l’une des deux fonctions disponibles. La première fonction consiste à créer une couche, à insérer la punaise, puis à ajouter la couche à la propriété `layers` de la carte. ```javascript var pushpin = new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(51.5, -0.2), { text: '10' }); var layer = new Microsoft.Maps.Layer(); layer.add(pushpin); map.layers.insert(layer); ``` La seconde consiste à l’ajouter à l’aide de la propriété `entities` de la carte. Cette fonction est marquée comme dépréciée dans la documentation de Bing Cartes V8, mais elle reste partiellement fonctionnelle pour les scénarios de base. ```javascript var pushpin = new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(51.5, -0.2), { text: '10' }); map.entities.add(pushpin); ``` <center> ![Bing Cartes - Ajouter une punaise](media/migrate-bing-maps-web-app/bing-maps-add-pushpin.jpg)</center> **Après : Azure Maps à l’aide de marqueurs HTML** Dans Azure Maps, vous pouvez utiliser des marqueurs HTML pour afficher facilement un point sur la carte. Leur utilisation est recommandée pour les applications qui doivent simplement afficher un petit nombre de points sur la carte. Pour utiliser un marqueur HTML, créez une instance de la classe `atlas.HtmlMarker`, définissez les options de texte et de position, puis ajoutez le marqueur à la carte à l’aide de la fonction `map.markers.add`. ```javascript //Create a HTML marker and add it to the map. map.markers.add(new atlas.HtmlMarker({ text: '10', position: [-0.2, 51.5] })); ``` <center> ![Azure Maps - Ajouter un marqueur](media/migrate-bing-maps-web-app/azure-maps-add-pushpin.jpg)</center> **Après : Azure Maps à l’aide d’un calque de symbole** Lorsque vous utilisez un calque de symbole, les données doivent être ajoutées à une source de données et à la source de données associée au calque. En outre, la source de données et la couche doivent être ajoutées à la carte après le déclenchement de l’événement `ready`. Pour restituer une valeur de texte unique au-dessus d’un symbole, les informations de texte doivent être stockées en tant que propriété du point de données et de la propriété référencée dans l’option `textField` du calque. Il s’agit d’un peu plus de travail que l’utilisation des marqueurs HTML, mais offre de nombreux avantages en termes de performances. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; function initMap() { map = new atlas.Map('myMap', { center: [-0.2, 51.5], zoom: 9, authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Create a data source and add it to the map. datasource = new atlas.source.DataSource(); map.sources.add(datasource); //Create a point feature, add a property to store a label for it, and add it to the data source. datasource.add(new atlas.data.Feature(new atlas.data.Point([-0.2, 51.5]), { label: '10' })); //Add a layer for rendering point data as symbols. map.layers.add(new atlas.layer.SymbolLayer(datasource, null, { textOptions: { //Use the label property to populate the text for the symbols. textField: ['get', 'label'], color: 'white', offset: [0, -1] } })); }); } </script> </head> <body onload="initMap()"> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Azure Maps - Ajouter un calque de symboles](media/migrate-bing-maps-web-app/azure-maps-add-pushpin.jpg)</center> **Ressources supplémentaires** - [Créer une source de données](https://docs.microsoft.com/azure/azure-maps/create-data-source-web-sdk) - [Ajouter une calque de symbole](https://docs.microsoft.com/azure/azure-maps/map-add-pin) - [Ajouter un calque de bulles](https://docs.microsoft.com/azure/azure-maps/map-add-bubble-layer) - [Données de point de cluster](https://docs.microsoft.com/azure/azure-maps/clustering-point-data-web-sdk) - [Ajouter des marqueurs HTML](https://docs.microsoft.com/azure/azure-maps/map-add-custom-html) - [Utiliser des expressions de style basées sur les données](https://docs.microsoft.com/azure/azure-maps/data-driven-style-expressions-web-sdk) - [Options de l’icône de calque de symbole](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.iconoptions) - [Option de texte du calque de symbole](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.textoptions) - [Classe de marqueur HTML](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.htmlmarker) - [Options du marqueur HTML](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.htmlmarkeroptions) ### <a name="adding-a-custom-pushpin"></a>Ajout d’une punaise personnalisée Les images personnalisées peuvent être utilisées pour représenter des points sur une carte. L’image suivante est utilisée dans les exemples ci-dessous. Elle utilise une image personnalisée pour afficher un point sur la carte à la position (latitude : 51,5, longitude : -0,2) et décale la position du marqueur afin que le point de l’icône de la punaise s’aligne avec la position correcte sur la carte. | ![Azure Maps - Ajouter une punaise](media/migrate-bing-maps-web-app/yellow-pushpin.png)| |:-----------------------------------------------------------------------:| | yellow-pushpin.png | **Avant : Bing Cartes** Dans Bing Cartes, un marqueur personnalisé est créé en passant une URL à une image dans les options `icon` d’une punaise. L’option `anchor` est utilisée pour aligner le point de l’image de punaise avec la coordonnée sur la carte. Valeur d’ancrage dans Bing Cartes par rapport à l’angle supérieur gauche de l’image. ```javascript var pushpin = new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(51.5, -0.2), { icon: 'ylw-pushpin.png', anchor: new Microsoft.Maps.Point(5, 30) }); var layer = new Microsoft.Maps.Layer(); layer.add(pushpin); map.layers.insert(layer); ``` <center> ![Bing Cartes - Ajouter une punaise personnalisée](media/migrate-bing-maps-web-app/bing-maps-add-custom-pushpin.jpg)</center> **Après : Azure Maps à l’aide de marqueurs HTML** Pour personnaliser un marqueur HTML dans Azure Maps un `string` ou `HTMLElement` HTML peut être passé dans l’option `htmlContent` du marqueur. Dans Azure Maps, une option `anchor` est utilisée pour spécifier la position relative du marqueur par rapport à la coordonnée de position à l’aide d’un des neuf points de référence définis : « center », « top », « bottom », « left », « right », « top-left », « top-right », « bottom-left », « bottom-right ». Le contenu est ancré et défini sur « bottom » par défaut, ce qui correspond à la partie « en bas au centre » du contenu HTML. Pour faciliter la migration du code à partir de Bing Cartes, définissez l’ancre sur « top-left », puis utilisez l’option `offset` avec le même décalage que celui utilisé dans Bing Cartes. Les décalages dans Azure Maps se déplacent dans la direction opposée de Bing Cartes. Par conséquent, multipliez-les par moins un. > [!TIP] > Ajoutez `pointer-events:none` en tant que style au contenu HTML pour désactiver le comportement de glissement par défaut dans MS Edge, qui affichera une icône indésirable. ```html map.markers.add(new atlas.HtmlMarker({ htmlContent: '<img src="ylw-pushpin.png" style="pointer-events: none;" />', anchor: 'top-left', pixelOffset: [-5, -30], position: [-0.2, 51.5] })); ``` <center> ![Azure Maps - Ajouter un marqueur personnalisé](media/migrate-bing-maps-web-app/azure-maps-add-custom-marker.jpg)</center> **Après : Azure Maps à l’aide d’un calque de symbole** Dans Azure Maps, les calques de symboles prennent également en charge les images personnalisées, mais l’image doit d’abord être chargée dans les ressources de carte et recevoir un ID unique. Le calque de symbole peut ensuite référencer cet ID. Le symbole peut être décalé pour s’aligner sur le point correct sur l’image à l’aide de l’icône `offset` option. Dans Azure Maps, une option `anchor` est utilisée pour spécifier la position relative du symbole par rapport à la coordonnée de position à l’aide d’un des neuf points de référence définis : « center », « top », « bottom », « left », « right », « top-left », « top-right », « bottom-left », « bottom-right ». Le contenu est ancré et défini sur « bottom » par défaut, ce qui correspond à la partie « en bas au centre » du contenu HTML. Pour faciliter la migration du code à partir de Bing Cartes, définissez l’ancre sur « top-left », puis utilisez l’option `offset` avec le même décalage que celui utilisé dans Bing Cartes. Les décalages dans Azure Maps se déplacent dans la direction opposée de Bing Cartes. Par conséquent, multipliez-les par moins un. ```javascript <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; function initMap() { map = new atlas.Map('myMap', { center: [-0.2, 51.5], zoom: 9, authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Load the custom image icon into the map resources. map.imageSprite.add('my-yellow-pin', 'ylw-pushpin.png').then(function () { //Create a data source and add it to the map. datasource = new atlas.source.DataSource(); map.sources.add(datasource); //Create a point and add it to the data source. datasource.add(new atlas.data.Point([-0.2, 51.5])); //Add a layer for rendering point data as symbols. map.layers.add(new atlas.layer.SymbolLayer(datasource, null, { iconOptions: { //Set the image option to the id of the custom icon that was loaded into the map resources. image: 'my-yellow-pin', anchor: 'top-left', offset: [-5, -30] } })); }); }); } </script> </head> <body onload="initMap()"> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Bing Cartes - Ajouter un calque de symboles personnalisé](media/migrate-bing-maps-web-app/azure-maps-add-custom-symbol-layer.jpg)</center> > [!TIP] > Pour créer un rendu personnalisé avancé des points, utilisez plusieurs calques de rendu ensemble. Par exemple, si vous souhaitez avoir plusieurs punaises qui ont la même icône sur des cercles de couleurs différentes, au lieu de créer un groupe d'images pour chaque couleur, superposez un calque de symboles au-dessus d'un calque de bulles et demandez-leur de référencer la même source de données. Cela sera beaucoup plus efficace que de créer un nouveau groupe d’images et de faire en sorte que la carte maintienne une série d’images différentes. **Ressources supplémentaires** - [Créer une source de données](https://docs.microsoft.com/azure/azure-maps/create-data-source-web-sdk) - [Ajouter une calque de symbole](https://docs.microsoft.com/azure/azure-maps/map-add-pin) - [Ajouter des marqueurs HTML](https://docs.microsoft.com/azure/azure-maps/map-add-custom-html) - [Utiliser des expressions de style basées sur les données](https://docs.microsoft.com/azure/azure-maps/data-driven-style-expressions-web-sdk) - [Options de l’icône de calque de symbole](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.iconoptions) - [Option de texte du calque de symbole](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.textoptions) - [Classe de marqueur HTML](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.htmlmarker) - [Options du marqueur HTML](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.htmlmarkeroptions) ### <a name="adding-a-polyline"></a>Ajout d’une polyligne Les polylignes sont utilisées pour représenter une ligne ou un tracé sur la carte. Les exemples ci-dessous montrent comment créer une polyligne en pointillés sur la carte. **Avant : Bing Cartes** Dans Bing Cartes, la classe Polyline prend un tableau de positions et un ensemble d’options. ```javascript //Get the center of the map. var center = map.getCenter(); //Create the polyline. var polyline = new Microsoft.Maps.Polyline([ center, new Microsoft.Maps.Location(center.latitude - 0.5, center.longitude - 1), new Microsoft.Maps.Location(center.latitude - 0.5, center.longitude + 1) ], { strokeColor: 'red', strokeThickness: 4, strokeDashArray: [3, 3] }); //Add the polyline to the map using a layer. var layer = new Microsoft.Maps.Layer(); layer.add(polyline); map.layers.insert(layer); ``` <center> ![Polyligne Bing Cartes](media/migrate-bing-maps-web-app/bing-maps-line.jpg)</center> **Après : Azure Maps** Dans Azure Maps, il est fait référence aux polylignes par les termes géospatiaux plus courants d’objets `LineString` ou `MultiLineString`. Ces objets peuvent être ajoutés à une source de données et restitués à l’aide d’une couche de lignes. Les options de couleur du trait, de largeur et de tableau de tirets sont quasiment identiques entre les plateformes. ```javascript //Get the center of the map. var center = map.getCamera().center; //Create a data source and add it to the map. var datasource = new atlas.source.DataSource(); map.sources.add(datasource); //Create a line and add it to the data source. datasource.add(new atlas.data.LineString([ center, [center[0] - 1, center[1] - 0.5], [center[0] + 1, center[1] - 0.5] ])); //Add a layer for rendering line data. map.layers.add(new atlas.layer.LineLayer(datasource, null, { strokeColor: 'red', strokeWidth: 4, strokeDashArray: [3, 3] })); ``` <center> ![Lignes Azure Maps](media/migrate-bing-maps-web-app/azure-maps-line.jpg)</center> **Ressources supplémentaires** - [Ajouter des lignes à la carte](https://docs.microsoft.com/azure/azure-maps/map-add-shape#add-lines-to-the-map) - [Options du calque de ligne](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.linelayeroptions) - [Utiliser des expressions de style basées sur les données](https://docs.microsoft.com/azure/azure-maps/data-driven-style-expressions-web-sdk) ### <a name="adding-a-polygon"></a>Ajout d’un polygone Les polygones sont utilisés pour représenter une zone sur la carte. Azure Maps et Bing Cartes offrent une prise en charge très similaire des polygones. Les exemples ci-dessous montrent comment créer un polygone qui forme un triangle en fonction de la coordonnée centrale de la carte. **Avant : Bing Cartes** Dans Bing Cartes, la classe Polygon prend un tableau de coordonnées ou d’anneaux de coordonnées et un ensemble d’options. ```javascript //Get the center of the map. var center = map.getCenter(); //Create the polygon. var polygon = new Microsoft.Maps.Polygon([ center, new Microsoft.Maps.Location(center.latitude - 0.5, center.longitude - 1), new Microsoft.Maps.Location(center.latitude - 0.5, center.longitude + 1), center ], { fillColor: 'rgba(0, 255, 0, 0.5)', strokeColor: 'red', strokeThickness: 2 }); //Add the polygon to the map using a layer. var layer = new Microsoft.Maps.Layer(); layer.add(polygon); map.layers.insert(layer); ``` <center> ![Polygone Bing Cartes](media/migrate-bing-maps-web-app/azure-maps-polygon.jpg)</center> **Après : Azure Maps** Dans Azure Maps, les objets Polygone et Multipolygone peuvent être ajoutés à une source de données et restitués sur la carte à l’aide de couches. La zone d’un polygone peut être affichée dans une couche de polygones. Le contour d’un polygone peut être affiché à l’aide d’un calque de lignes. ```javascript //Get the center of the map. var center = map.getCamera().center; //Create a data source and add it to the map. datasource = new atlas.source.DataSource(); map.sources.add(datasource); //Create a polygon and add it to the data source. datasource.add(new atlas.data.Polygon([ center, [center[0] - 1, center[1] - 0.5], [center[0] + 1, center[1] - 0.5], center ])); //Add a polygon layer for rendering the polygon area. map.layers.add(new atlas.layer.PolygonLayer(datasource, null, { fillColor: 'rgba(0, 255, 0, 0.5)' })); //Add a line layer for rendering the polygon outline. map.layers.add(new atlas.layer.LineLayer(datasource, null, { strokeColor: 'red', strokeWidth: 2 })); ``` <center> ![Polygone Azure Maps](media/migrate-bing-maps-web-app/azure-maps-polygon.jpg)</center> **Ressources supplémentaires** - [Ajouter un polygone à la carte](https://docs.microsoft.com/azure/azure-maps/map-add-shape#add-a-polygon-to-the-map) - [Ajouter un cercle à la carte](https://docs.microsoft.com/azure/azure-maps/map-add-shape#add-a-circle-to-the-map) - [Options du calque de polygones](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.polygonlayeroptions) - [Options du calque de ligne](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.linelayeroptions) - [Utiliser des expressions de style basées sur les données](https://docs.microsoft.com/azure/azure-maps/data-driven-style-expressions-web-sdk) ### <a name="display-an-infobox"></a>Afficher une zone d’informations Des informations supplémentaires pour une entité peuvent être affichées sur la carte en tant que classe `Microsoft.Maps.Infobox` dans Bing Cartes ; dans Azure Maps, cela peut être obtenu à l’aide de la classe `atlas.Popup`. Les exemples ci-dessous ajoutent une punaise ou un marqueur à la carte et, lorsque vous cliquez dessus, affichent une zone d’informations/fenêtre contextuelle. **Avant : Bing Cartes** Avec Bing Cartes, une zone d’informations est créée à l’aide du constructeur `Microsoft.Maps.Infobox`. ```javascript //Add a pushpin where we want to display an infobox. var pushpin = new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(47.6, -122.33)); //Add the pushpin to the map using a layer. var layer = new Microsoft.Maps.Layer(); layer.add(pushpin); map.layers.insert(layer); //Create an infobox and bind it to the map. var infobox = new Microsoft.Maps.Infobox(new Microsoft.Maps.Location(47.6, -122.33), { description: '<div style="padding:5px"><b>Hello World!</b></div>', visible: false }); infobox.setMap(map); //Add a click event to the pushpin to open the infobox. Microsoft.Maps.Events.addHandler(pushpin, 'click', function () { infobox.setOptions({ visible: true }); }); ``` <center> ![Zone d’informations Bing Cartes](media/migrate-bing-maps-web-app/bing-maps-infobox.jpg)</center> **Après : Azure Maps** Dans Azure Maps, une fenêtre contextuelle peut être utilisée pour afficher des informations supplémentaires sur une position. Un objet `string` ou `HTMLElement` HTML peut être passé dans l’option `content` de la fenêtre contextuelle. Les fenêtres contextuelles peuvent être affichées indépendamment de toute forme si vous le souhaitez. par conséquent, vous devez spécifier une valeur `position`. Pour afficher une fenêtre contextuelle, appelez la fonction `open` et transmettez la carte sur laquelle la fenêtre contextuelle doit être affichée. ```javascript //Add a marker to the map that to display a popup for. var marker = new atlas.HtmlMarker({ position: [-122.33, 47.6] }); //Add the marker to the map. map.markers.add(marker); //Create a popup. var popup = new atlas.Popup({ content: '<div style="padding:10px"><b>Hello World!</b></div>', position: [-122.33, 47.6], pixelOffset: [0, -35] }); //Add a click event to the marker to open the popup. map.events.add('click', marker, function () { //Open the popup popup.open(map); }); ``` <center> ![Fenêtre contextuelle Azure Maps](media/migrate-bing-maps-web-app/azure-maps-popup.jpg)</center> > [!NOTE] > Pour effectuer la même opération avec un calque de symboles, de bulles, de lignes ou de polygones, transmettez le calque au code d’événement de la carte plutôt qu’à un marqueur. **Ressources supplémentaires** - [Ajouter une fenêtre contextuelle](https://docs.microsoft.com/azure/azure-maps/map-add-popup) - [Fenêtre contextuelle avec contenu multimédia](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Popup%20with%20Media%20Content) - [Fenêtres contextuelles sur les formes](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Popups%20on%20Shapes) - [Réutilisation d’une fenêtre contextuelle avec plusieurs épingles](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Reusing%20Popup%20with%20Multiple%20Pins) - [Classe de fenêtre contextuelle](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.popup) - [Options de la fenêtre contextuelle](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.popupoptions) ### <a name="pushpin-clustering"></a>Clustering de punaises Lorsque vous examinez plusieurs points de données sur la carte, les points se chevauchent et la carte semble surchargée, ce qui nuit à la visibilité et à l’utilisation. Le clustering de données point peut être utilisé pour améliorer cette expérience utilisateur et améliorer les performances. Le clustering de point de données est le processus permettant de combiner des données de point proches les unes des autres et de les représenter sur une carte en tant qu’un point de données en cluster unique. Lorsque l’utilisateur effectue un zoom avant sur la carte, les clusters se décomposent pour afficher les points de données individuels qui les composent. Les exemples ci-dessous chargent un flux GeoJSON de données de séisme de la semaine dernière et l’ajoutent à la carte. Les clusters sont rendus sous forme de cercles mis à l’échelle et colorés en fonction du nombre de points qu’ils contiennent. > [!NOTE] > Il existe plusieurs algorithmes différents utilisés pour le clustering de punaises. Bing Cartes utilise une fonction simple basée sur une grille, tandis qu’Azure Maps utilise des méthodes de clustering basées sur des points plus avancées et visuellement plus attrayantes. **Avant : Bing Cartes** Dans Bing Cartes, les données GeoJSON peuvent être chargées à l’aide du module GeoJSON. Les punaises peuvent être mises en cluster en chargeant le module de clustering et en utilisant la couche de clustering qu’il contient. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <script type='text/javascript'> var map; var earthquakeFeed = 'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson'; function initMap() { map = new Microsoft.Maps.Map(document.getElementById('myMap'), { credentials: '<Your Bing Maps Key>', center: new Microsoft.Maps.Location(20, -160), zoom: 2 }); //Load the GeoJSON and Clustering modules. Microsoft.Maps.loadModule(['Microsoft.Maps.GeoJson', 'Microsoft.Maps.Clustering'], function () { //Load the GeoJSON data from a URL. Microsoft.Maps.GeoJson.readFromUrl(earthquakeFeed, function (pins) { //Create a ClusterLayer with options and add it to the map. clusterLayer = new Microsoft.Maps.ClusterLayer(pins, { clusteredPinCallback: createCustomClusteredPin, gridSize: 100 }); map.layers.insert(clusterLayer); }); }); } //A function that defines how clustered pins are rendered. function createCustomClusteredPin(cluster) { //Get the number of pushpins in the cluster var clusterSize = cluster.containedPushpins.length; var radius = 20; //Default radius to 20 pixels. var fillColor = 'lime'; //Default to lime green. if (clusterSize >= 750) { radius = 40; //If point_count >= 750, radius is 40 pixels. fillColor = 'red'; //If the point_count >= 750, color is red. } else if (clusterSize >= 100) { radius = 30; //If point_count >= 100, radius is 30 pixels. fillColor = 'yellow'; //If the point_count >= 100, color is yellow. } //Create an SVG string of a circle with the specified radius and color. var svg = ['<svg xmlns="http://www.w3.org/2000/svg" width="', (radius * 2), '" height="', (radius * 2), '">', '<circle cx="', radius, '" cy="', radius, '" r="', radius, '" fill="', fillColor, '"/>', '<text x="50%" y="50%" dominant-baseline="middle" text-anchor="middle" style="font-size:12px;font-family:arial;fill:black;" >{text}</text>', '</svg>']; //Customize the clustered pushpin using the generated SVG and anchor on its center. cluster.setOptions({ icon: svg.join(''), anchor: new Microsoft.Maps.Point(radius, radius), textOffset: new Microsoft.Maps.Point(0, radius - 8) //Subtract 8 to compensate for height of text. }); } </script> <!-- Bing Maps Script Reference --> <script src="https://www.bing.com/api/maps/mapcontrol?callback=initMap" async defer></script> </head> <body> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Clustering Bing Cartes](media/migrate-bing-maps-web-app/bing-maps-clustering.jpg)</center> **Après : Azure Maps** Dans Azure Maps, les données sont ajoutées et gérées par une source de données. Les calques se connectent aux sources de données et affichent les données qu’elles contiennent. La classe `DataSource` dans Azure Maps fournit plusieurs options de clustering. - `cluster` : Indique à la source de données les données de point de cluster. - `clusterRadius` : Rayon, en pixels, des points de cluster. - `clusterMaxZoom` : niveau de zoom maximal où le clustering se produit. Si vous effectuez un zoom plus grand que celui-ci, tous les points sont rendus sous forme de symboles. - `clusterProperties` : Définit des propriétés personnalisées qui sont calculées à l’aide d’expressions sur tous les points de chaque cluster et ajoutées aux propriétés de chaque point de cluster. Lorsque le clustering est activé, la source de données envoie des points de données en cluster et non cluster aux calques pour le rendu. La source de données est en charge de la mise en cluster de centaines de milliers de points de données. Un point de données en cluster possède les propriétés suivantes : | Nom de la propriété | Type | Description | |-----------------------------|---------|------------------------------------------------| | `cluster` | boolean | Indique si la fonctionnalité représente un cluster. | | `cluster_id` | string | ID unique pour le cluster qui peut être utilisé avec les fonctions `getClusterExpansionZoom`, `getClusterChildren` et `getClusterLeaves` des classes `DataSource`. | | `point_count` | nombre | Le nombre de points que contient le cluster. | | `point_count_abbreviated` | string | Une chaîne qui abrège la valeur `point_count` si elle est trop longue. (par exemple, 4 000 devient 4K) | La classe `DataSource` possède la fonction d’assistance suivante pour accéder à des informations supplémentaires sur un cluster à l’aide de `cluster_id`. | Fonction | Type de retour | Description | |----------------|--------------------|-----------------| | `getClusterChildren(clusterId: number)` | `Promise<Feature<Geometry, any> | Shape>` | Récupère les enfants du cluster donné sur le niveau de zoom suivant. Ces enfants peuvent être une combinaison de formes et sous-clusters. Les sous-clusters seront des fonctionnalités dont les propriétés correspondent à propriétés de cluster. | | `getClusterExpansionZoom(clusterId: number)` | `Promise<number>` | Calcule un niveau de zoom à partir duquel le cluster commence à se développer ou à se décomposer. | | `getClusterLeaves(clusterId: number, limit: number, offset: number)` | `Promise<Feature<Geometry, any> | Shape>` | Récupère tous les points dans un cluster. Définissez le paramètre `limit` de manière à retourner un sous-ensemble des points, et utilisez `offset` pour parcourir les points. | Lors du rendu des données en cluster sur la carte, il est souvent plus facile d’utiliser au moins deux calques. L’exemple ci-dessous utilise trois calques : un calque de bulles pour dessiner des cercles de couleur mis à l’échelle en fonction de la taille des clusters, un calque de symboles pour afficher la taille de cluster en tant que texte, et un deuxième calque de symboles pour le rendu des points non cluster. Il existe de nombreuses autres façons de restituer des données en cluster dans Azure Maps mises en surbrillance dans la documentation [Données de points de cluster](https://docs.microsoft.com/azure/azure-maps/clustering-point-data-web-sdk). Les données GeoJSON peuvent être importées directement dans Azure Maps à l’aide de la fonction `importDataFromUrl` sur la classe `DataSource`. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; var earthquakeFeed = 'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson'; function initMap() { //Initialize a map instance. map = new atlas.Map('myMap', { center: [-160, 20], zoom: 1, //Add your Azure Maps key to the map SDK. Get an Azure Maps key at https://azure.com/maps. NOTE: The primary key should be used as the key. authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Create a data source and add it to the map. datasource = new atlas.source.DataSource(null, { //Tell the data source to cluster point data. cluster: true }); map.sources.add(datasource); //Create layers for rendering clusters, their counts and unclustered points and add the layers to the map. map.layers.add([ //Create a bubble layer for rendering clustered data points. new atlas.layer.BubbleLayer(datasource, null, { //Scale the size of the clustered bubble based on the number of points inthe cluster. radius: [ 'step', ['get', 'point_count'], 20, //Default of 20 pixel radius. 100, 30, //If point_count >= 100, radius is 30 pixels. 750, 40 //If point_count >= 750, radius is 40 pixels. ], //Change the color of the cluster based on the value on the point_cluster property of the cluster. color: [ 'step', ['get', 'point_count'], 'lime', //Default to lime green. 100, 'yellow', //If the point_count >= 100, color is yellow. 750, 'red' //If the point_count >= 750, color is red. ], strokeWidth: 0, filter: ['has', 'point_count'] //Only rendered data points that have a point_count property, which clusters do. }), //Create a symbol layer to render the count of locations in a cluster. new atlas.layer.SymbolLayer(datasource, null, { iconOptions: { image: 'none' //Hide the icon image. }, textOptions: { textField: ['get', 'point_count_abbreviated'], offset: [0, 0.4] } }), //Create a layer to render the individual locations. new atlas.layer.SymbolLayer(datasource, null, { filter: ['!', ['has', 'point_count']] //Filter out clustered points from this layer. }) ]); //Retrieve a GeoJSON data set and add it to the data source. datasource.importDataFromUrl(earthquakeFeed); }); } </script> </head> <body onload="initMap()"> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Clusters Azure Maps](media/migrate-bing-maps-web-app/azure-maps-clustering.jpg)</center> **Ressources supplémentaires** - [Ajouter une calque de symbole](https://docs.microsoft.com/azure/azure-maps/map-add-pin) - [Ajouter un calque de bulles](https://docs.microsoft.com/azure/azure-maps/map-add-bubble-layer) - [Données de point de cluster](https://docs.microsoft.com/azure/azure-maps/clustering-point-data-web-sdk) - [Utiliser des expressions de style basées sur les données](https://docs.microsoft.com/azure/azure-maps/data-driven-style-expressions-web-sdk) ### <a name="add-a-heat-map"></a>Ajouter une carte thermique Les cartes thermiques, également appelées « cartes de densité de points », sont un type de visualisation de données utilisé pour représenter la densité de données à l’aide d’une palette de couleurs. Souvent utilisées pour afficher les « points chauds » des données sur une carte, elles sont idéales pour restituer de grands jeux de données de points. Les exemples ci-dessous chargent un flux GeoJSON de tous les séismes détectés par l’Institut d’études géologiques des États-Unis (USGS) au cours du mois précédent, et les affichent sous la forme d’une carte thermique. **Avant : Bing Cartes** Dans Bing Cartes, pour créer une carte thermique, chargez le module de carte thermique. De même, le module GeoJSON est chargé pour ajouter la prise en charge des données GeoJSON. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <script type='text/javascript'> var map; var earthquakeFeed = 'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_month.geojson'; function initMap() { map = new Microsoft.Maps.Map(document.getElementById('myMap'), { credentials: '<Your Bing Maps Key>', center: new Microsoft.Maps.Location(20, -160), zoom: 2, mapTypeId: Microsoft.Maps.MapTypeId.aerial }); //Load the GeoJSON and HeatMap modules. Microsoft.Maps.loadModule(['Microsoft.Maps.GeoJson', 'Microsoft.Maps.HeatMap'], function () { //Load the GeoJSON data from a URL. Microsoft.Maps.GeoJson.readFromUrl(earthquakeFeed, function (shapes) { //Create a heat map and add it to the map. var heatMap = new Microsoft.Maps.HeatMapLayer(shapes, { opacity: 0.65, radius: 10 }); map.layers.insert(heatMap); }); }); } </script> <!-- Bing Maps Script Reference --> <script src="https://www.bing.com/api/maps/mapcontrol?callback=initMap" async defer></script> </head> <body> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Bing Cartes - Carte thermique](media/migrate-bing-maps-web-app/bing-maps-heatmap.jpg)</center> **Après : Azure Maps** Dans Azure Maps, chargez les données GeoJSON dans une source de données et connectez la source de données à une couche de carte thermique. Les données GeoJSON peuvent être importées directement dans Azure Maps à l’aide de la fonction `importDataFromUrl` sur la classe `DataSource`. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <script type='text/javascript'> var map; var earthquakeFeed = 'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_month.geojson'; function initMap() { //Initialize a map instance. map = new atlas.Map('myMap', { center: [-160, 20], zoom: 1, style: 'satellite_with_roads', //Add your Azure Maps key to the map SDK. Get an Azure Maps key at https://azure.com/maps. NOTE: The primary key should be used as the key. authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Create a data source and add it to the map. datasource = new atlas.source.DataSource(); map.sources.add(datasource); //Load the earthquake data. datasource.importDataFromUrl(earthquakeFeed); //Create a layer to render the data points as a heat map. map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, { opacity: 0.65, radius: 10 })); }); } </script> </head> <body onload="initMap()"> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Azure Maps - Carte thermique](media/migrate-bing-maps-web-app/azure-maps-heatmap.jpg)</center> **Ressources supplémentaires** - [Ajouter une couche de carte thermique](https://docs.microsoft.com/azure/azure-maps/map-add-heat-map-layer) - [Classe de couche de carte thermique](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.layer.heatmaplayer) - [Options de la couche de carte thermique](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.heatmaplayeroptions) - [Utiliser des expressions de style basées sur les données](https://docs.microsoft.com/azure/azure-maps/data-driven-style-expressions-web-sdk) ### <a name="overlay-a-tile-layer"></a>Superposer une couche de mosaïques Les couches de mosaïques vous permettent de superposer des images de grande taille qui ont été divisées en images en mosaïque plus petites qui s’alignent sur le système de mosaïque de cartes. Il s’agit d’un moyen courant de superposer des images de grande taille ou des jeux de données très volumineux. Les exemples ci-dessous superposent une couche de mosaïques radar météo issue de l’Iowa Environmental Mesonet de l’Iowa State University qui utilise un schéma de nommage de mosaïque de zoom X, Y. **Avant : Bing Cartes** Dans Bing Cartes, les couches de mosaïques peuvent être créées à l’aide de la classe `Microsoft.Maps.TileLayer`. ```javascript var weatherTileLayer = new Microsoft.Maps.TileLayer({ mercator: new Microsoft.Maps.TileSource({ uriConstructor: 'https://mesonet.agron.iastate.edu/cache/tile.py/1.0.0/nexrad-n0q-900913/{zoom}/{x}/{y}.png' }) }); map.layers.insert(weatherTileLayer); ``` <center> ![Bing Cartes - Carte thermique pondérée](media/migrate-bing-maps-web-app/bing-maps-weighted-heatmap.jpg)</center> **Après : Azure Maps** Dans Azure Maps, une couche de mosaïques peut être ajoutée à la carte à peu près de la même façon que n’importe quelle autre couche. URL mise en forme qui a des espaces réservés de zoom x, y ; `{x}`, `{y}`, `{z}` sont utilisés respectivement pour indiquer à la couche où accéder aux vignettes. Les couches de tuiles Azure Maps prennent également en charge les espaces réservés `{quadkey}`, `{bbox-epsg-3857}` et `{subdomain}`. > [!TIP] > Dans Azure Maps, les couches peuvent facilement être rendues sous d’autres couches, y compris les couches de la carte de base. Il est souvent souhaitable de restituer des couches de mosaïques sous les étiquettes de carte afin qu’elles soient faciles à lire. La fonction `map.layers.add` accepte un deuxième paramètre qui est l’ID d’une deuxième couche sous laquelle insérer la nouvelle couche. Pour insérer une couche de mosaïques sous les étiquettes de carte, vous pouvez utiliser le code suivant : > > `map.layers.add(myTileLayer, "labels");` ```javascript //Create a tile layer and add it to the map below the label layer. map.layers.add(new atlas.layer.TileLayer({ tileUrl: 'https://mesonet.agron.iastate.edu/cache/tile.py/1.0.0/nexrad-n0q-900913/{z}/{x}/{y}.png', opacity: 0.8, tileSize: 256 }), 'labels'); ``` <center> ![Azure Maps - Carte thermique pondérée](media/migrate-bing-maps-web-app/azure-maps-weighted-heatmap.jpg)</center> > [!TIP] > Les demandes de mosaïque peuvent être capturées à l’aide de l’option `transformRequest` de la carte. Cela vous permettra de modifier ou d’ajouter des en-têtes à la demande, si vous le souhaitez. **Ressources supplémentaires** - [Ajouter des couches de vignettes](https://docs.microsoft.com/azure/azure-maps/map-add-tile-layer) - [Classe de couche de mosaïque](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.layer.tilelayer) - [Options de la couche de mosaïques](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.tilelayeroptions) ### <a name="show-traffic-data"></a>Afficher les données du trafic Les données de trafic peuvent être superposées à la fois sur des cartes Bing et Azure. **Avant : Bing Cartes** Dans Bing Cartes, les données de trafic peuvent être superposées à la carte à l’aide du module de trafic. ```javascript Microsoft.Maps.loadModule('Microsoft.Maps.Traffic', function () { var manager = new Microsoft.Maps.Traffic.TrafficManager(map); manager.show(); }); ``` <center> ![Bing Cartes - Trafic](media/migrate-bing-maps-web-app/bing-maps-traffic.jpg)</center> **Après : Azure Maps** Azure Maps offre plusieurs options différentes pour l’affichage du trafic. Les incidents de trafic, tels que les fermetures de route et les accidents, peuvent être affichés sous forme d’icônes sur la carte. Des routes codées en couleurs représentant le flux de trafic peuvent être superposées sur la carte ; les couleurs peuvent être modifiées pour tenir compte de la limite de vitesse indiquée, d’un retard prévu normal ou d’un retard absolu. Les données d’incident dans Azure Maps sont mises à jour toutes les minutes et les données de transit toutes les deux minutes. ```javascript map.setTraffic({ incidents: true, flow: 'relative' }); ``` <center> ![Trafic Azure Maps](media/migrate-bing-maps-web-app/azure-maps-traffic.jpg)</center> Si vous cliquez sur l’une des icônes de trafic dans Azure Maps, des informations supplémentaires s’affichent dans une fenêtre contextuelle. <center> ![Azure Maps - Fenêtre contextuelle de trafic](media/migrate-bing-maps-web-app/azure-maps-traffic-popup.jpg)</center> **Ressources supplémentaires** - [Afficher le trafic sur la carte](https://docs.microsoft.com/azure/azure-maps/map-show-traffic) - [Options de superposition du trafic](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Traffic%20Overlay%20Options) - [Gestion du trafic](https://azuremapscodesamples.azurewebsites.net/?sample=Traffic%20controls) ### <a name="add-a-ground-overlay"></a>Ajouter une superposition de sol Les cartes Bing et Azure prennent en charge la superposition des images géoréférencées sur la carte ; ainsi, elles se déplacent et sont mises à l’échelle à mesure que vous effectuez un panoramique et un zoom sur la carte. Dans Bing Cartes, ces superpositions sont appelées « superpositions au sol », tandis que dans Azure Maps elles sont appelées « calques d’images ». Ils sont très utiles pour créer des plans d’étage, superposer d’anciennes cartes ou images à partir d’un drone. **Avant : Bing Cartes** Lors de la création d’une superposition au sol dans Bing Cartes, vous devez spécifier l’URL de l’image à superposer et un cadre englobant auquel lier l’image sur la carte. Cet exemple superpose une image de carte de [Newark New Jersey en 1922](https://www.lib.utexas.edu/maps/historical/newark_nj_1922.jpg) sur la carte. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <script type='text/javascript'> var map; function initMap() { map = new Microsoft.Maps.Map(document.getElementById('myMap'), { credentials: '<Your Bing Maps Key>', center: new Microsoft.Maps.Location(40.740, -74.18), zoom: 12 }); var overlay = new Microsoft.Maps.GroundOverlay({ //Create a LocationRect from the edges of the bounding box; north, west, south, east. bounds: Microsoft.Maps.LocationRect.fromEdges(40.773941, -74.22655, 40.712216, -74.12544), imageUrl: 'newark_nj_1922.jpg' }); map.layers.insert(overlay); } </script> <!-- Bing Maps Script Reference --> <script src="https://www.bing.com/api/maps/mapcontrol?callback=initMap" async defer></script> </head> <body> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` L’exécution de ce code dans un navigateur affichera une carte ressemblant à l’image suivante : <center> ![Bing Cartes - Superposition au sol](media/migrate-bing-maps-web-app/bing-maps-ground-overlay.jpg)</center> **Après : Azure Maps** Dans Azure Maps, les images géoréférencées peuvent être superposées à l’aide de la classe `atlas.layer.ImageLayer`. Cette classe requiert une URL vers une image et un ensemble de coordonnées pour les quatre coins de l’image. L’image doit être hébergée soit sur le même domaine, soit avec CORs activé. > [!TIP] > Si vous avez uniquement des informations relatives au nord, au sud, à l’est, à l’ouest et à la rotation, et non les coordonnées de chaque angle de l’image, vous pouvez utiliser la fonction statique [atlas.layer.ImageLayer.getCoordinatesFromEdges](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-). ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <script type='text/javascript'> var map; function initMap() { //Initialize a map instance. map = new atlas.Map('myMap', { center: [-74.18, 40.740], zoom: 12, //Add your Azure Maps key to the map SDK. Get an Azure Maps key at https://azure.com/maps. NOTE: The primary key should be used as the key. authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Create an image layer and add it to the map. map.layers.add(new atlas.layer.ImageLayer({ url: 'newark_nj_1922.jpg', coordinates: [ [-74.22655, 40.773941], //Top Left Corner [-74.12544, 40.773941], //Top Right Corner [-74.12544, 40.712216], //Bottom Right Corner [-74.22655, 40.712216] //Bottom Left Corner ] })); }); } </script> </head> <body onload="initMap()"> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Azure Maps - Calque d’image](media/migrate-bing-maps-web-app/azure-maps-ground-overlay.jpg)</center> **Ressources supplémentaires** - [Superposer une image](https://docs.microsoft.com/azure/azure-maps/map-add-image-layer) - [Classe de couche image](https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.layer.imagelayer) ### <a name="add-kml-data-to-the-map"></a>Ajouter des données KML à la carte Les cartes Azure et Bing peuvent importer et restituer des données KML, KMZ, GeoRSS, GeoJSON et Well-Known Text (WKT) sur la carte. Azure Maps prend également en charge les fichiers GPX, GML, les fichiers CSV de données spatiales, et les services WMS (Web-Mapping Service), WMTS (Web-Mapping Tile Service) et WFS (Web Feature Service). **Avant : Bing Cartes** L’exécution de ce code dans un navigateur affichera une carte ressemblant à l’image suivante : ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <script type='text/javascript'> var map; function initMap() { map = new Microsoft.Maps.Map('#myMap', { credentials: '<Your Bing Maps Key>', center: new Microsoft.Maps.Location(40.747, -73.985), zoom: 12 }); Microsoft.Maps.loadModule('Microsoft.Maps.GeoXml', function () { var callback = function (dataset) { if (dataset.shapes) { var l = new Microsoft.Maps.Layer(); l.add(dataset.shapes); map.layers.insert(l); } if (dataset.layers) { for (var i = 0, len = dataset.layers.length; i < len; i++) { map.layers.insert(dataset.layers[i]); } } }; Microsoft.Maps.GeoXml.readFromUrl('myKMLFile.kml', { error: function (msg) { alert(msg); } }, callback); }); } </script> <!-- Bing Maps Script Reference --> <script src="https://www.bing.com/api/maps/mapcontrol?callback=initMap" async defer></script> </head> <body> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Bing Cartes - kml](media/migrate-bing-maps-web-app/bing-maps-kml.jpg)</center> **Après : Azure Maps** Dans Azure Maps, le format GeoJSON est le format de données principal utilisé dans le SDK web. Vous pouvez facilement intégrer d’autres formats de données spatiales à l’aide du  [module d’E/S de données spatiales](https://docs.microsoft.com/javascript/api/azure-maps-spatial-io/). Ce module présente des fonctions de lecture et d’écriture de données spatiales, et inclut un calque de données simple restituant facilement les données à partir de l’un de ces formats de données spatiales. Pour lire les données d’un fichier de données spatiales, transmettez une URL ou des données brutes sous la forme d’une chaîne ou d’un objet blob dans la fonction  `atlas.io.read`. Ceci retournera toutes les données analysées du fichier, qui pourront alors être ajoutées à la carte. Le format KML est un peu plus complexe que la plupart des autres formats de données spatiales. En effet, il inclut beaucoup plus d’informations de style. La classe  `SpatialDataLayer` prend en charge le rendu de la plupart de ces styles. Cependant, les images d’icônes doivent être chargées dans la carte avant les données de caractéristiques, et les superpositions au sol doivent être ajoutées séparément à la carte sous forme de couches. Quand vous chargez des données par le biais d’une URL, elles doivent être hébergées sur un point de terminaison CORS, ou un service proxy doit être transmis comme option dans la fonction read. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script> <script type='text/javascript'> var map, datasource, layer; function initMap() { //Initialize a map instance. map = new atlas.Map('myMap', { view: 'Auto', //Add your Azure Maps key to the map SDK. Get an Azure Maps key at https://azure.com/maps. NOTE: The primary key should be used as the key. authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Create a data source and add it to the map. datasource = new atlas.source.DataSource(); map.sources.add(datasource); //Add a simple data layer for rendering the data. layer = new atlas.layer.SimpleDataLayer(datasource); map.layers.add(layer); //Read a KML file from a URL or pass in a raw KML string. atlas.io.read('myKMLFile.kml').then(async r => { if (r) { //Check to see if there are any icons in the data set that need to be loaded into the map resources. if (r.icons) { //For each icon image, create a promise to add it to the map, then run the promises in parrallel. var imagePromises = []; //The keys are the names of each icon image. var keys = Object.keys(r.icons); if (keys.length !== 0) { keys.forEach(function (key) { imagePromises.push(map.imageSprite.add(key, r.icons[key])); }); await Promise.all(imagePromises); } } //Load all features. if (r.features && r.features.length > 0) { datasource.add(r.features); } //Load all ground overlays. if (r.groundOverlays && r.groundOverlays.length > 0) { map.layers.add(r.groundOverlays); } //If bounding box information is known for data, set the map view to it. if (r.bbox) { map.setCamera({ bounds: r.bbox, padding: 50 }); } } }); }); } </script> </head> <body onload="initMap()"> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Azure Maps - kml](media/migrate-bing-maps-web-app/azure-maps-kml.jpg)</center> **Ressources supplémentaires** - [Fonction atlas.io.read](https://docs.microsoft.com/javascript/api/azure-maps-spatial-io/atlas.io#read-string---arraybuffer---blob--spatialdatareadoptions-) - [SimpleDataLayer](https://docs.microsoft.com/javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer) - [SimpleDataLayerOptions](https://docs.microsoft.com/javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions) ### <a name="add-drawing-tools"></a>Ajouter des outils de dessin Bing et Azure Maps fournissent tous deux un module qui ajoute la possibilité pour l’utilisateur de dessiner et de modifier des formes sur la carte à l’aide de la souris ou d’un autre périphérique d’entrée. Tous deux prennent en charge le dessin de punaise, de ligne et de polygones. Azure Maps fournit également des options pour dessiner des cercles et des rectangles. **Avant : Bing Cartes** Dans Bing Cartes, le module `DrawingTools` est chargé à l’aide de la fonction `Microsoft.Maps.loadModule`. Une fois le chargement effectué, une instance de la classe DrawingTools peut être créée et la fonction `showDrawingManager` est appelée afin d’ajouter une barre d’outils à la carte. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <script type='text/javascript'> var map, drawingManager; function initMap() { map = new Microsoft.Maps.Map('#myMap', { credentials: '<Your Bing Maps Key>' }); //Load the DrawingTools module Microsoft.Maps.loadModule('Microsoft.Maps.DrawingTools', function () { //Create an instance of the DrawingTools class and bind it to the map. var tools = new Microsoft.Maps.DrawingTools(map); //Show the drawing toolbar and enable editting on the map. tools.showDrawingManager(function (manager) { //Store a reference to the drawing manager as it will be useful later. drawingManager = manager; }); }); } </script> <!-- Bing Maps Script Reference --> <script src="https://www.bing.com/api/maps/mapcontrol?callback=initMap" async defer></script> </head> <body> <div id='myMap' style='position:relative;width:600px;height:400px;'></div> </body> </html> ``` <center> ![Bing Cartes - Outils de dessin](media/migrate-bing-maps-web-app/bing-maps-drawing-tools.jpg)</center> **Après : Azure Maps** Dans Azure Maps, le module d’outils de dessin doit être chargé en chargeant le code JavaScript, et les fichiers CSS doivent être référencés dans l’application. Une fois la carte chargée, une instance de la classe `DrawingManager` peut être créée et une instance `DrawingToolbar` jointe. ```html <!DOCTYPE html> <html> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="x-ua-compatible" content="IE=Edge" /> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> <!-- Add references to the Azure Maps Map Drawing Tools JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/drawing/0/atlas-drawing.min.css" type="text/css" /> <script src="https://atlas.microsoft.com/sdk/javascript/drawing/0/atlas-drawing.min.js"></script> <script type='text/javascript'> var map, drawingManager; function initMap() { //Initialize a map instance. map = new atlas.Map('myMap', { view: 'Auto', //Add your Azure Maps key to the map SDK. Get an Azure Maps key at https://azure.com/maps. NOTE: The primary key should be used as the key. authOptions: { authType: 'subscriptionKey', subscriptionKey: '<Your Azure Maps Key>' } }); //Wait until the map resources are ready. map.events.add('ready', function () { //Create an instance of the drawing manager and display the drawing toolbar. drawingManager = new atlas.drawing.DrawingManager(map, { toolbar: new atlas.control.DrawingToolbar({ position: 'top-left' }) }); }); } </script> </head> <body onload="initMap()"> <div id="myMap" style="position:relative;width:600px;height:400px;"></div> </body> </html> ``` <center> ![Azure Maps - Outils de dessin](media/migrate-bing-maps-web-app/azure-maps-drawing-tools.jpg)</center> > [!TIP] > Dans les couches Azure Maps, les outils de dessin offrent aux utilisateurs plusieurs moyens de dessiner des formes. Par exemple, lorsqu’il dessine un polygone, l’utilisateur peut cliquer pour ajouter chaque point, ou maintenir le bouton gauche de la souris enfoncé et faire glisser la souris pour dessiner un tracé. Cela peut être modifié à l’aide de l’option `interactionType` de `DrawingManager`. **Ressources supplémentaires** - [Documentation](https://docs.microsoft.com/azure/azure-maps/set-drawing-options) - [Exemples de code](https://azuremapscodesamples.azurewebsites.net/#Drawing-Tools-Module) ## <a name="next-steps"></a>Étapes suivantes Jetez un coup d’œil aux [modules du kit SDK web Azure Maps open source](open-source-projects.md#open-web-sdk-modules). Ces modules fournissent une multitude de fonctionnalités supplémentaires et sont entièrement personnalisables. Passez en revue les exemples de code relatifs à la migration d’autres fonctionnalités Bing Cartes : **Visualisations de données** > [!div class="nextstepaction"] > [Couche de contour](https://azuremapscodesamples.azurewebsites.net/?search=contour) > [!div class="nextstepaction"] > [Compartimentage de données](https://azuremapscodesamples.azurewebsites.net/?search=data%20binning) **Services** > [!div class="nextstepaction"] > [Utiliser le module de service Azure Maps](https://docs.microsoft.com/azure/azure-maps/how-to-use-services-module) > [!div class="nextstepaction"] > [Rechercher des points d’intérêt](https://docs.microsoft.com/azure/azure-maps/map-search-location) > [!div class="nextstepaction"] > [Obtenir des informations à partir d’une coordonnée (géocode inversé)](https://docs.microsoft.com/azure/azure-maps/map-get-information-from-coordinate) > [!div class="nextstepaction"] > [Afficher des directions de A à B](https://docs.microsoft.com/azure/azure-maps/map-route) > [!div class="nextstepaction"] > [Rechercher la suggestion automatique avec JQuery UI](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Search%20Autosuggest%20and%20JQuery%20UI) En savoir plus sur le SDK web Azure Maps. > [!div class="nextstepaction"] > [Comment utiliser le contrôle de carte](how-to-use-map-control.md) > [!div class="nextstepaction"] > [Comment utiliser le module de services](how-to-use-services-module.md) > [!div class="nextstepaction"] > [Comment utiliser le module Outils de dessin](set-drawing-options.md) > [!div class="nextstepaction"] > [Exemples de code](https://docs.microsoft.com/samples/browse/?products=azure-maps) > [!div class="nextstepaction"] > [Documentation de référence sur l’API de service du kit SDK web Azure Maps](https://docs.microsoft.com/javascript/api/azure-maps-control/)
54.108871
1,401
0.674406
fra_Latn
0.838151
4882786abd8043a35ea0d6c3304616390ef5b174
3,052
md
Markdown
doc/README.md
moonrise-tk/ohm-1
158aa1d490eba7ab09eb54c2157d58a8565bee73
[ "MIT" ]
null
null
null
doc/README.md
moonrise-tk/ohm-1
158aa1d490eba7ab09eb54c2157d58a8565bee73
[ "MIT" ]
null
null
null
doc/README.md
moonrise-tk/ohm-1
158aa1d490eba7ab09eb54c2157d58a8565bee73
[ "MIT" ]
1
2021-08-16T02:32:06.000Z
2021-08-16T02:32:06.000Z
# Ohm Documentation * [Ohm/JS API Reference](./api-reference.md) * [Ohm Syntax Reference](./syntax-reference.md) * Learn more about the [Ohm philosophy](./philosophy.md) * See [Patterns and Pitfalls](./patterns-and-pitfalls.md) for some common Ohm patterns and solutions to frequently-encountered difficulties. ## Examples Here are some quick samples of what it's like to work with Ohm. For more in-depth examples, see the [examples directory](../examples/), especially the [math example](../examples/math/index.html) which is extensively commented. ### Matching Strings Instantiate a grammar from a string using `ohm.grammar()`, and check inputs using the grammar's `match()` method: <!-- @markscript // Replace 'const g' with 'var g' to allow it to redeclared. markscript.transformNextBlock(code => code.replace('const g', 'var g')); --> ```js const ohm = require('ohm-js'); const g = ohm.grammar( 'Laugh {' + ' laugh = lol | "lmao"' + ' lol = "l" "o"+ "l"' + '}'); assert(g.match('lol').succeeded()); assert(!g.match('lmao').failed()); assert(g.match('loooooool').succeeded()); ``` ### Implementing Semantics You can use _operations_ and _attributes_ to analyze and extract values from parsed data. For example, take the following grammar in `arithmetic.ohm`: <!-- @markscript // Make sure the grammar embedded below is the same as in 'arithmetic.ohm'. markscript.transformNextBlock(function(code) { assert(code === require('fs').readFileSync('arithmetic.ohm').toString(), 'arithmetic.ohm does not match grammar in doc'); return ''; // Don't actually execute anything. }); --> ``` Arithmetic { Exp = AddExp AddExp = AddExp "+" PriExp -- plus | AddExp "-" PriExp -- minus | PriExp PriExp = "(" Exp ")" -- paren | number number = digit+ } ``` We can create an operation named 'eval' to evaluate arithmetic expressions that match the grammar: <!-- @markscript // Replace 'const g' with 'var g' to allow it to redeclared. markscript.transformNextBlock(code => code.replace('const g', 'var g')); --> ```js // Instantiate the grammar. const fs = require('fs'); const g = ohm.grammar(fs.readFileSync('arithmetic.ohm')); // Create an operation that evaluates the expression. An operation always belongs to a Semantics, // which is a family of related operations and attributes for a particular grammar. const semantics = g.createSemantics().addOperation('eval', { Exp(e) { return e.eval(); }, AddExp(e) { return e.eval(); }, AddExp_plus(left, op, right) { return left.eval() + right.eval(); }, AddExp_minus(left, op, right) { return left.eval() - right.eval(); }, PriExp(e) { return e.eval(); }, PriExp_paren(open, exp, close) { return exp.eval(); }, number(chars) { return parseInt(this.sourceString, 10); }, }); const match = g.match('1 + (2 - 3) + 4'); assert.equal(semantics(match).eval(), 4); ``` You can learn more about semantics in the [API reference](./api-reference.md#semantics).
28.523364
226
0.665793
eng_Latn
0.912041
4882c2311c0bcfda6ab2ac2cdb47ed137e2908b8
14,706
md
Markdown
docs/standard-library/collate-class.md
yecril71pl/cpp-docs.pl-pl
599c99edee44b11ede6956ecf2362be3bf25d2f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard-library/collate-class.md
yecril71pl/cpp-docs.pl-pl
599c99edee44b11ede6956ecf2362be3bf25d2f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard-library/collate-class.md
yecril71pl/cpp-docs.pl-pl
599c99edee44b11ede6956ecf2362be3bf25d2f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: collate — Klasa ms.date: 11/04/2016 f1_keywords: - locale/std::collate - locale/std::collate::char_type - locale/std::collate::string_type - locale/std::collate::compare - locale/std::collate::do_compare - locale/std::collate::do_hash - locale/std::collate::do_transform - locale/std::collate::hash - locale/std::collate::transform helpviewer_keywords: - std::collate [C++] - std::collate [C++], char_type - std::collate [C++], string_type - std::collate [C++], compare - std::collate [C++], do_compare - std::collate [C++], do_hash - std::collate [C++], do_transform - std::collate [C++], hash - std::collate [C++], transform ms.assetid: 92168798-9628-4a2e-be6e-fa62dcd4d6a6 ms.openlocfilehash: ccdf05a7a41fc7f646852e7d326832b86c41dde8 ms.sourcegitcommit: 1f009ab0f2cc4a177f2d1353d5a38f164612bdb1 ms.translationtype: MT ms.contentlocale: pl-PL ms.lasthandoff: 07/27/2020 ms.locfileid: "87230107" --- # <a name="collate-class"></a>collate — Klasa Szablon klasy, który opisuje obiekt, który może stanowić zestaw reguł ustawień regionalnych w celu sterowania kolejnością i grupowaniem znaków w ciągu, porównania między nimi a mieszaniem ciągów. ## <a name="syntax"></a>Składnia ```cpp template <class CharType> class collate : public locale::facet; ``` ### <a name="parameters"></a>Parametry *CharType*\ Typ używany w programie do kodowania znaków. ## <a name="remarks"></a>Uwagi Podobnie jak w przypadku dowolnego zestawu reguł ustawień regionalnych, identyfikator obiektu statycznego ma początkową przechowywaną wartość zero. Pierwsza próba uzyskania dostępu do przechowywanej wartości przechowuje unikatową wartość dodatnią w `id` . W przypadku niektórych języków znaki są grupowane i traktowane jak pojedynczy znak, a w innych, pojedyncze znaki są traktowane tak, jakby były dwoma znakami. Usługi sortowania dostarczane przez klasę collate umożliwiają sortowanie w tych przypadkach. ### <a name="constructors"></a>Konstruktory |Konstruktor|Opis| |-|-| |[sortowan](#collate)|Konstruktor dla obiektów klasy `collate` , który służy jako zestaw reguł ustawień regionalnych do obsługi Konwencji sortowania ciągów.| ### <a name="typedefs"></a>Typedefs |Nazwa typu|Opis| |-|-| |[char_type](#char_type)|Typ, który opisuje znak typu `CharType` .| |[string_type](#string_type)|Typ, który opisuje ciąg `basic_string` zawierający znaki typu `CharType` .| ### <a name="member-functions"></a>Funkcje członkowskie |Funkcja członkowska|Opis| |-|-| |[porównaniu](#compare)|Porównuje dwie sekwencje znaków zgodnie z ich zasadami równości i nierówności specyficznymi dla zestawów reguł.| |[do_compare](#do_compare)|Funkcja wirtualna porównująca dwie sekwencje znaków zgodnie z ich zasadami równości i nierówności specyficznymi dla zestawów reguł.| |[do_hash](#do_hash)|Funkcja wirtualna wywoływana w celu określenia wartości mieszania sekwencji zgodnie z ich zasadami specyficznymi dla zestawów reguł.| |[do_transform](#do_transform)|Funkcja wirtualna wywoływana w celu konwersji sekwencji znaków z ustawień regionalnych na ciąg znaków, który może być używany w porównaniach leksykograficznych z innymi sekwencjami znaków podobnie przekonwertowanymi z tych samych ustawień regionalnych.| |[skrótu](#hash)|Określa wartość mieszania sekwencji zgodnie z ich zasadami specyficznymi dla zestawów reguł.| |[przekształcania](#transform)|Konwertuje sekwencję znaków z ustawień regionalnych na ciąg znaków, który może być używany w porównaniach leksykograficznych z innymi sekwencjami znaków podobnie przekonwertowanymi z tych samych ustawień regionalnych.| ## <a name="requirements"></a>Wymagania **Nagłówek:**\<locale> **Przestrzeń nazw:** std ## <a name="collatechar_type"></a><a name="char_type"></a>COLLATE:: char_type Typ, który opisuje znak typu `CharType` . ```cpp typedef CharType char_type; ``` ### <a name="remarks"></a>Uwagi Typ jest synonimem dla parametru szablonu `CharType` . ## <a name="collatecollate"></a><a name="collate"></a>COLLATE:: COLLATE Konstruktor dla obiektów klasy COLLATE, który służy jako zestaw reguł ustawień regionalnych do obsługi Konwencji sortowania ciągów. ```cpp public: explicit collate( size_t _Refs = 0); protected: collate( const char* _Locname, size_t _Refs = 0); ``` ### <a name="parameters"></a>Parametry *_Refs*\ Wartość całkowita służąca do określania typu zarządzania pamięcią dla obiektu. *_Locname*\ Nazwa ustawień regionalnych. ### <a name="remarks"></a>Uwagi Możliwe wartości parametru *_Refs* i ich znaczenie są następujące: - 0: okres istnienia obiektu jest zarządzany przez elementy lokalne, które go zawierają. - 1: okres istnienia obiektu musi być zarządzany ręcznie. - \>1: te wartości nie są zdefiniowane. Konstruktor inicjuje swój obiekt podstawowy przy użyciu **ustawień regionalnych::**[facet](../standard-library/locale-class.md#facet_class)( `_Refs` ). ## <a name="collatecompare"></a><a name="compare"></a>COLLATE:: Compare Porównuje dwie sekwencje znaków zgodnie z ich zasadami równości i nierówności specyficznymi dla zestawów reguł. ```cpp int compare(const CharType* first1, const CharType* last1, const CharType* first2, const CharType* last2) const; ``` ### <a name="parameters"></a>Parametry *first1*\ Wskaźnik do pierwszego elementu w pierwszej sekwencji, który ma zostać porównany. *last1*\ Wskaźnik do ostatniego elementu w pierwszej sekwencji, który ma zostać porównany. *first2*\ Wskaźnik do pierwszego elementu w drugiej sekwencji, który ma zostać porównany. *last2*\ Wskaźnik do ostatniego elementu w drugiej sekwencji, który ma zostać porównany. ### <a name="return-value"></a>Wartość zwracana Funkcja członkowska zwraca: - -1, jeśli pierwsza sekwencja porównuje mniej niż drugą sekwencję. - + 1, jeśli druga sekwencja porównuje mniej niż pierwszą sekwencję. - 0, jeśli sekwencje są równoważne. ### <a name="remarks"></a>Uwagi Pierwsza sekwencja porównuje mniej, jeśli ma mniejszego elementu w najwcześniejszym nierównej parze w sekwencjach lub, jeśli nie istnieją pary nierówne, ale pierwsza sekwencja jest krótsza. Funkcja członkowska zwraca [do_compare](#do_compare)( `first1` , `last1` , `first2` , `last2` ). ### <a name="example"></a>Przykład ```cpp // collate_compare.cpp // compile with: /EHsc #include <locale> #include <iostream> #include <tchar.h> using namespace std; int main() { locale loc ( "German_germany" ); _TCHAR * s1 = _T("Das ist wei\x00dfzz."); // \x00df is the German sharp-s, it comes before z in the German alphabet _TCHAR * s2 = _T("Das ist weizzz."); int result1 = use_facet<collate<_TCHAR> > ( loc ). compare ( s1, &s1[_tcslen( s1 )-1 ], s2, &s2[_tcslen( s2 )-1 ] ); cout << result1 << endl; locale loc2 ( "C" ); int result2 = use_facet<collate<_TCHAR> > ( loc2 ). compare (s1, &s1[_tcslen( s1 )-1 ], s2, &s2[_tcslen( s2 )-1 ] ); cout << result2 << endl; } ``` ## <a name="collatedo_compare"></a><a name="do_compare"></a>sortowanie::d o_compare Funkcja wirtualna porównująca dwie sekwencje znaków zgodnie z ich zasadami równości i nierówności specyficznymi dla zestawów reguł. ```cpp virtual int do_compare(const CharType* first1, const CharType* last1, const CharType* first2, const CharType* last2) const; ``` ### <a name="parameters"></a>Parametry *first1*\ Wskaźnik do pierwszego elementu w pierwszej sekwencji, który ma zostać porównany. *last1*\ Wskaźnik do ostatniego elementu w pierwszej sekwencji, który ma zostać porównany. *first2*\ Wskaźnik do pierwszego elementu w drugiej sekwencji, który ma zostać porównany. *last2*\ Wskaźnik do ostatniego elementu w drugiej sekwencji, który ma zostać porównany. ### <a name="return-value"></a>Wartość zwracana Funkcja członkowska zwraca: - -1, jeśli pierwsza sekwencja porównuje mniej niż drugą sekwencję. - + 1, jeśli druga sekwencja porównuje mniej niż pierwszą sekwencję. - 0, jeśli sekwencje są równoważne. ### <a name="remarks"></a>Uwagi Chroniona funkcja wirtualna elementu członkowskiego porównuje sekwencję z [* FIRST1, Last1) * z sekwencją *[First2, last2*). Porównuje wartości poprzez zastosowanie `operator<` par odpowiednich elementów typu `CharType` . Pierwsza sekwencja porównuje mniej, jeśli ma mniejszy element w najwcześniejszym nierównej parze w sekwencjach lub jeśli nie ma żadnej nierównej pary, ale pierwsza sekwencja jest krótsza. ### <a name="example"></a>Przykład Zobacz przykład dla [COLLATE:: Compare](#compare), które wywołania `do_compare` . ## <a name="collatedo_hash"></a><a name="do_hash"></a>sortowanie::d o_hash Funkcja wirtualna wywoływana w celu określenia wartości mieszania sekwencji zgodnie z ich zasadami specyficznymi dla zestawów reguł. ```cpp virtual long do_hash(const CharType* first, const CharType* last) const; ``` ### <a name="parameters"></a>Parametry *pierwszego*\ Wskaźnik do pierwszego znaku w sekwencji, którego wartość ma zostać określona. *ostatniego*\ Wskaźnik do ostatniego znaku w sekwencji, którego wartość ma zostać określona. ### <a name="return-value"></a>Wartość zwracana Wartość skrótu typu **`long`** dla sekwencji. ### <a name="remarks"></a>Uwagi Wartość skrótu może być przydatna na przykład w przypadku dystrybuowania sekwencji pseudo-losowo w tablicy list. ### <a name="example"></a>Przykład Zobacz przykład dla [skrótu](#hash), który wywołuje `do_hash` . ## <a name="collatedo_transform"></a><a name="do_transform"></a>sortowanie::d o_transform Funkcja wirtualna wywoływana w celu konwersji sekwencji znaków z ustawień regionalnych na ciąg znaków, który może być używany w porównaniach leksykograficznych z innymi sekwencjami znaków podobnie przekonwertowanymi z tych samych ustawień regionalnych. ```cpp virtual string_type do_transform(const CharType* first, const CharType* last) const; ``` ### <a name="parameters"></a>Parametry *pierwszego*\ Wskaźnik do pierwszego znaku w sekwencji do przekonwertowania. *ostatniego*\ Wskaźnik do ostatniego znaku w sekwencji do przekonwertowania. ### <a name="return-value"></a>Wartość zwracana Ciąg, który jest przekształconą sekwencją znaków. ### <a name="remarks"></a>Uwagi Chroniona funkcja wirtualna elementu członkowskiego zwraca obiekt klasy [string_type](#string_type) której kontrolowana sekwencja jest kopią sekwencji [ `first` , `last` ). Jeśli klasa pochodna sortowania \< **CharType**> przesłonięć [do_compare](#do_compare), należy również przesłonić `do_transform` dopasowanie. Po przekazaniu do `collate::compare` , dwa przekształcone ciągi powinny zwracać ten sam wynik, dzięki czemu można przekazać nieprzekształcone ciągi do porównania w klasie pochodnej. ### <a name="example"></a>Przykład Zobacz przykład [transformacji](#transform), która wywołuje `do_transform` . ## <a name="collatehash"></a><a name="hash"></a>COLLATE:: hash Określa wartość mieszania sekwencji zgodnie z ich zasadami specyficznymi dla zestawów reguł. ```cpp long hash(const CharType* first, const CharType* last) const; ``` ### <a name="parameters"></a>Parametry *pierwszego*\ Wskaźnik do pierwszego znaku w sekwencji, którego wartość ma zostać określona. *ostatniego*\ Wskaźnik do ostatniego znaku w sekwencji, którego wartość ma zostać określona. ### <a name="return-value"></a>Wartość zwracana Wartość skrótu typu **`long`** dla sekwencji. ### <a name="remarks"></a>Uwagi Funkcja członkowska zwraca [do_hash](#do_hash)( `first` , `last` ). Wartość skrótu może być przydatna na przykład w przypadku dystrybuowania sekwencji pseudo-losowo w tablicy list. ### <a name="example"></a>Przykład ```cpp // collate_hash.cpp // compile with: /EHsc #include <locale> #include <iostream> #include <tchar.h> using namespace std; int main( ) { locale loc ( "German_germany" ); _TCHAR * s1 = _T("\x00dfzz abc."); // \x00df is the German sharp-s (looks like beta), it comes before z in the alphabet _TCHAR * s2 = _T("zzz abc."); // \x00df is the German sharp-s (looks like beta), it comes before z in the alphabet long r1 = use_facet< collate<_TCHAR> > ( loc ). hash (s1, &s1[_tcslen( s1 )-1 ]); long r2 = use_facet< collate<_TCHAR> > ( loc ). hash (s2, &s2[_tcslen( s2 )-1 ] ); cout << r1 << " " << r2 << endl; } ``` ```Output 541187293 551279837 ``` ## <a name="collatestring_type"></a><a name="string_type"></a>COLLATE:: string_type Typ, który opisuje ciąg `basic_string` zawierający znaki typu `CharType` . ```cpp typedef basic_string<CharType> string_type; ``` ### <a name="remarks"></a>Uwagi Typ opisuje specjalizację szablonu klasy [basic_string](../standard-library/basic-string-class.md) którego obiekty mogą przechowywać kopie sekwencji źródłowej. ### <a name="example"></a>Przykład Aby zapoznać się z przykładem sposobu deklarowania i używania `string_type` , zobacz [Transform](#transform). ## <a name="collatetransform"></a><a name="transform"></a>COLLATE:: Transform Konwertuje sekwencję znaków z ustawień regionalnych na ciąg znaków, który może być używany w porównaniach leksykograficznych z innymi sekwencjami znaków podobnie przekonwertowanymi z tych samych ustawień regionalnych. ```cpp string_type transform(const CharType* first, const CharType* last) const; ``` ### <a name="parameters"></a>Parametry *pierwszego*\ Wskaźnik do pierwszego znaku w sekwencji do przekonwertowania. *ostatniego*\ Wskaźnik do ostatniego znaku w sekwencji do przekonwertowania. ### <a name="return-value"></a>Wartość zwracana Ciąg, który zawiera przekształconą sekwencję znaków. ### <a name="remarks"></a>Uwagi Funkcja członkowska zwraca [do_transform](#do_transform)( `first` , `last` ). ### <a name="example"></a>Przykład ```cpp // collate_transform.cpp // compile with: /EHsc #include <locale> #include <iostream> #include <tchar.h> using namespace std; int main( ) { locale loc ( "German_Germany" ); _TCHAR* s1 = _T("\x00dfzz abc."); // \x00df is the German sharp-s (looks like beta), // it comes before z in the alphabet _TCHAR* s2 = _T("zzz abc."); collate<_TCHAR>::string_type r1; // OK for typedef r1 = use_facet< collate<_TCHAR> > ( loc ). transform (s1, &s1[_tcslen( s1 )-1 ]); cout << r1 << endl; basic_string<_TCHAR> r2 = use_facet< collate<_TCHAR> > ( loc ). transform (s2, &s2[_tcslen( s2 )-1 ]); cout << r2 << endl; int result1 = use_facet<collate<_TCHAR> > ( loc ).compare (s1, &s1[_tcslen( s1 )-1 ], s2, &s2[_tcslen( s2 )-1 ] ); cout << _tcscmp(r1.c_str( ),r2.c_str( )) << result1 << _tcscmp(s1,s2) <<endl; } ``` ```Output -1-11 ``` ## <a name="see-also"></a>Zobacz także [\<locale>](../standard-library/locale.md)\ [Bezpieczeństwo wątku w standardowej bibliotece języka C++](../standard-library/thread-safety-in-the-cpp-standard-library.md)
33.884793
506
0.737386
pol_Latn
0.999244
4882c3bcdb6f80d542ad8f96fb2525c9d7f48e70
38
md
Markdown
README.md
juhlianna/Analise-de-dados-com-Python-e-Pandas
2c5c81b3513f540cb29c7b00a8c5cb14d8355719
[ "Apache-2.0" ]
1
2021-09-16T01:02:22.000Z
2021-09-16T01:02:22.000Z
README.md
juhlianna/Analise-de-dados-com-Python-e-Pandas
2c5c81b3513f540cb29c7b00a8c5cb14d8355719
[ "Apache-2.0" ]
null
null
null
README.md
juhlianna/Analise-de-dados-com-Python-e-Pandas
2c5c81b3513f540cb29c7b00a8c5cb14d8355719
[ "Apache-2.0" ]
null
null
null
# Analise-de-dados-com-Python-e-Pandas
38
38
0.789474
spa_Latn
0.260216
488315862af31ede6fa6543f20308f065c3f609e
99
md
Markdown
_posts/0000-01-02-Devika-madhu.md
Devika-madhu/github-slideshow
acc0759f6153158985420b538c0f25cc491a3e40
[ "MIT" ]
null
null
null
_posts/0000-01-02-Devika-madhu.md
Devika-madhu/github-slideshow
acc0759f6153158985420b538c0f25cc491a3e40
[ "MIT" ]
3
2021-12-04T14:47:44.000Z
2021-12-19T07:48:50.000Z
_posts/0000-01-02-Devika-madhu.md
Devika-madhu/github-slideshow
acc0759f6153158985420b538c0f25cc491a3e40
[ "MIT" ]
null
null
null
--- layout:slide title: "Welcome to our second slide!" --- My text Use the left arrow to go back!
14.142857
37
0.686869
eng_Latn
0.997944
488520ec3d9799acc1188f18a4f4412e312b2eb2
1,582
md
Markdown
clients/client2/README.md
end2endzone/cmakedemo
3f6b786022fed511fae02a1cd8d5649db9996bdf
[ "MIT-0", "MIT" ]
4
2019-05-30T13:11:56.000Z
2020-06-24T23:46:31.000Z
clients/client2/README.md
end2endzone/cmakedemo
3f6b786022fed511fae02a1cd8d5649db9996bdf
[ "MIT-0", "MIT" ]
1
2019-02-21T22:25:08.000Z
2019-02-24T11:06:45.000Z
clients/client2/README.md
end2endzone/cmakedemo
3f6b786022fed511fae02a1cd8d5649db9996bdf
[ "MIT-0", "MIT" ]
3
2019-07-16T20:49:56.000Z
2020-11-10T15:19:30.000Z
# CMakeOnaPlate A CMake boilerplate for most C++ projects. # Clients The following section explains different strategies for a client executable to be able to 'find' the FooLib library using `find_package()` command. For each of the following examples, assume one wants to compile the `fooexe` executable target which have a dependency to the FooLib library. ## Client #2 If FooLib and FooExe are not installed to the same directory, the `find_package()` command will not be able to find your library automatically. However, if you know the installation directory of FooLib, you can instruct CMake to use this directory while searching. By manually specifying the FooLib's install directory, the `find_package()` command will be able to locate FooLib's include directories and library files. For example, on Windows, the CMake default installation directory is `C:\Program Files (x86)\${PROJECT_NAME}`. This makes the installation directory different for each projects. One must specify the FooLib's installation directory manually. To specify the installation directory of the FooLib library, the following CMake command should be used: ```cmake mkdir build cd build set foolib_DIR=<foolib_folder> cmake .. ``` The `CMakeLists.txt` configuration file should look like this: ```cmake find_package(foolib 0.1.0 REQUIRED) add_executable(fooexe <source_files> ) target_link_libraries(fooexe foolib) ``` Note that the `target_include_directories()` command is not required. The `fooexe` target will automatically have the `foolib` include directories assigned to the project.
45.2
177
0.79646
eng_Latn
0.998041
48857a381e9e7524605e6747772d810212bcfba2
792
md
Markdown
module-7-sql/curriculum-tree.md
meg-gutshall/flatiron-curriculum-notes
619580d5d9a62a5f28a1851ed8c765e5444e0245
[ "MIT" ]
1
2021-10-31T22:03:14.000Z
2021-10-31T22:03:14.000Z
module-7-sql/curriculum-tree.md
meg-gutshall/flatiron-curriculum-notes
619580d5d9a62a5f28a1851ed8c765e5444e0245
[ "MIT" ]
null
null
null
module-7-sql/curriculum-tree.md
meg-gutshall/flatiron-curriculum-notes
619580d5d9a62a5f28a1851ed8c765e5444e0245
[ "MIT" ]
null
null
null
--- description: 'Version 8: Module 7' --- # Curriculum Tree ## SQL (Structured Query Language) ```markup Software Engineering V8 │ ├── Section 1: Topic Introduction │ └── What Is SQL ├── Section 2: Getting Started │ ├── SQL Intro and Installation │ ├── SQL Database Basics │ ├── SQL Databases and Text Editors │ ├── SQL Data Types │ ├── SQL Inserting, Updating, and Selecting │ ├── Basic SQL Queries │ ├── SQL Aggregate Functions │ │ └── SQL Aggregate Functions Lab │ └────── SQL Bear Organizer Lab └── Section 3: Table Relations ├── Edger Codd and Table Relations ├── Table Relations ├── SQL Joins ├── SQL Complex Joins ├── SQL Join Tables ├── Grouping and Sorting Data ├── SQL Joins Review Lectures └────── SQL Crowdfunding Lab ```
23.294118
46
0.628788
kor_Hang
0.69678
48867a8d1476bc1c928ec2c94f13b533d292c0db
3,918
md
Markdown
README.md
thundergolfer/library-management-slack-bot
a34b3d6e950767a9277791f3813c66ecdd4fc27b
[ "MIT" ]
4
2019-03-16T02:43:22.000Z
2020-08-07T02:25:54.000Z
README.md
thundergolfer/library-management-slack-bot
a34b3d6e950767a9277791f3813c66ecdd4fc27b
[ "MIT" ]
10
2019-03-24T06:09:33.000Z
2022-02-12T16:17:45.000Z
README.md
thundergolfer/library-management-slack-bot
a34b3d6e950767a9277791f3813c66ecdd4fc27b
[ "MIT" ]
2
2019-04-01T14:07:15.000Z
2019-10-20T18:56:24.000Z
# Library Management Slack Bot [![Build Status](https://travis-ci.com/thundergolfer/library-management-slack-bot.svg?branch=master)](https://travis-ci.com/thundergolfer/library-management-slack-bot) Slack bot that helps facilitate tracking of books and borrowers in your office/home library. This bot was built to support the office library @ [Canva Sydney](https://www.canva.com/careers/), but is general in nature and with simple hosting on [AWS Lambda](https://aws.amazon.com/lambda/) and [Google Sheets](https://www.twilio.com/blog/2017/03/google-spreadsheets-and-javascriptnode-js.html) can solve the same problems in your office. ----- ## How It Works This bot exists to manage an index of physical books in an office library, and also track borrows and returns of the library users. Via simple Slack interactions, users can search for available books, record their borrow, and record their return. ### Commands > *Note:* This is currently an MVP. In future, the bot will *not* require the user manually inputting an ISBN. ##### **[@librarybot](/README.md) (with a scanned image of the book's barcode)** - Borrow, add, or return a book without having to manually type its ISBN. ##### **[@librarybot](/README.md) `<ISBN>`** - Borrow, add, or return a book with the ISBN of value `<ISBN>` ##### **[@librarybot](/README.md) `list`** - List all books registered in the library database ##### **[@librarybot](/README.md) `borrow`** - List all books _you_ have borrowed. ##### **[@librarybot](/README.md) `search "<Query>"`** - Search books ## Development ### Prerequisites - **Access to API Gateway and Lambda on the AWS Account where this bot is deployed** - **Jsonnet** ([`brew install jsonnet`](https://formulae.brew.sh/formula/jsonnet)) ### Installation `npm install` ### Configuration Copy `.env.default` to `.env` and fill in the slack tokens and google sheet id. ### Testing #### Unit All unit tests are contained in [`tests/`](tests/). Run them with: `npm run test` #### Integration: `serverless` Local Testing Using `serverless invoke local -f <function name> -p <path to JSON payload>` we can test the Lambda locally before deploying to AWS. Currently the function name is `hello` and the JSON payloads live in [`tests/data/`](/tests/data). The payloads are generated by _Jsonnet_ (see [**Generating Mock Event Payloads**](#generating-mock-event-payloads)). For example this command will execute a borrow against the [development Google Sheets DB](https://docs.google.com/spreadsheets/d/1Vbvys2uiSyJWPKsFWjMyHeZ-1mTWDTZCyeFfYCkemuQ/edit#gid=0): `npx serverless invoke local -f hello -p tests/data/book_message.json` It won't actually send a message to Slack. It will just `console.log` what would be sent 😊. #### Generating Mock Event Payloads From repository root, run `jsonnet -m tests/data tests/data/jsonnet/api_gateway_base.jsonnet`. Valid AWS API Gateway payloads *containing* Slack Event API payloads as strings (`event.body`) will be generated in `test/data`. ## Running Locally To start the server locally run: `npx serverless offline` To create an external URL accessible from slack run: ``` npm install -f localtunnel lt --port 3000 --subdomain <domain name> ``` 1) [Create a Slack app](https://api.slack.com/apps) 2) Enable event subscriptions, with the URL from localtunnel 3) Create a bot user 4) Install the app with the `chat:write:user` scope 5) In the slack workspace you installed to, message the bot with `@<botname> list` ## Deployment Currently deployment involves manually uploading the `.zip` artifact for the function at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/). This can be automated using **Serverless**, and will be soon. For the moment: 1. `npm run package` (creates `library-management-slack-bot.zip` in `artifact/`) 2. Go to AWS Lambda console and upload `.zip` for `slack-library`
42.129032
437
0.737111
eng_Latn
0.940799
488701de78b243169235b49b207061ed2e6be387
371
md
Markdown
content/leetcode/_index.md
FongRay/FongRay.github.io
ccd18dbe48932deaefb1158ecd16f6460c6766c5
[ "MIT" ]
21
2016-06-12T19:37:04.000Z
2020-06-14T23:36:13.000Z
content/leetcode/_index.md
FongRay/FongRay.github.io
ccd18dbe48932deaefb1158ecd16f6460c6766c5
[ "MIT" ]
16
2017-01-22T15:28:50.000Z
2020-02-04T06:15:52.000Z
content/leetcode/_index.md
FongRay/FongRay.github.io
ccd18dbe48932deaefb1158ecd16f6460c6766c5
[ "MIT" ]
72
2016-06-09T09:30:47.000Z
2020-06-14T23:35:29.000Z
--- title: '算法专栏' --- {{< lead >}} > {{< /lead >}} ![](https://ryder-1252249141.cos.ap-shanghai.myqcloud.com/uPic/2022-04-16-carl-heyerdahl-KE0nC8-58MQ-unsplash.jpg "Photo by [Carl Heyerdahl](https://unsplash.com/@carlheyerdahl) on [Unsplash](https://unsplash.com/)") * https://github.com/apple/swift-algorithms * https://github.com/raywenderlich/swift-algorithm-club
30.916667
216
0.703504
yue_Hant
0.333324
4887a62c3839763575c671c23e9472bdd89962a0
6,594
md
Markdown
microsoft-365/enterprise/manage-microsoft-365-groups.md
MicrosoftDocs/microsoft-365-docs-pr.pt-BR
88959f119be80545a0b854d261ba4acf2ac1e131
[ "CC-BY-4.0", "MIT" ]
29
2019-09-17T04:18:20.000Z
2022-03-20T18:51:42.000Z
microsoft-365/enterprise/manage-microsoft-365-groups.md
MicrosoftDocs/microsoft-365-docs-pr.pt-BR
88959f119be80545a0b854d261ba4acf2ac1e131
[ "CC-BY-4.0", "MIT" ]
2
2022-02-09T06:48:15.000Z
2022-02-09T06:48:47.000Z
microsoft-365/enterprise/manage-microsoft-365-groups.md
MicrosoftDocs/microsoft-365-docs-pr.pt-BR
88959f119be80545a0b854d261ba4acf2ac1e131
[ "CC-BY-4.0", "MIT" ]
3
2021-03-14T23:52:56.000Z
2021-05-31T14:02:38.000Z
--- title: Gerenciar grupos do Microsoft 365 ms.author: kvice author: kelleyvice-msft manager: laurawi audience: Admin ms.topic: overview ms.prod: office-online-server ms.localizationpriority: medium f1.keywords: - CSH ms.custom: - Adm_O365 - seo-marvel-mar2020 ms.collection: - Ent_O365 - M365-subscription-management search.appverid: - MET150 - MOE150 - MED150 - BCS160 ms.assetid: 98ca5b3f-f720-4d8e-91be-fe656548a25a description: Saiba mais sobre como gerenciar Microsoft 365 grupos. ms.openlocfilehash: 28d8bae8aaed6d02fe082824c07afe03bdc0ce5a ms.sourcegitcommit: d4b867e37bf741528ded7fb289e4f6847228d2c5 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 10/06/2021 ms.locfileid: "60150757" --- # <a name="manage-microsoft-365-groups"></a>Gerenciar grupos do Microsoft 365 *Esse artigo se aplica ao Microsoft 365 Enterprise e ao Office 365 Enterprise.* Você pode gerenciar Microsoft 365 grupos de várias maneiras diferentes, dependendo da configuração. Você pode gerenciar contas de usuário no [Centro de administração do Microsoft 365,](/admin)no PowerShell, nos Serviços de Domínio do Active Directory (AD DS) ou no centro de administração do [Azure Active Directory (Azure AD).](/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal) ## <a name="plan-for-where-and-how-you-will-manage-your-groups"></a>Planejar onde e como você gerenciará seus grupos Onde e como você pode gerenciar suas contas de usuário depende do modelo de identidade que você deseja usar para sua Microsoft 365. Os dois modelos gerais são somente na nuvem e híbridos. ### <a name="cloud-only"></a>Apenas Nuvem Você cria e gerencia grupos com: - [O Centro de administração do Microsoft 365](/admin) - [PowerShell](maintain-group-membership-with-microsoft-365-powershell.md) - [Centro de Administração do Microsoft Azure AD](/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal) ### <a name="hybrid"></a>Híbrido Os grupos do AD DS são sincronizados com Microsoft 365 do AD DS, portanto, você deve usar as ferramentas locais do AD DS para gerenciar esses grupos. Você também pode criar e gerenciar grupos do Azure AD separados de grupos do AD DS, mas que podem conter usuários e grupos do AD DS. Nesse caso, você pode usar: - [O Centro de administração do Microsoft 365](/admin) - [PowerShell](maintain-group-membership-with-microsoft-365-powershell.md) - [Centro de Administração do Microsoft Azure AD](/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal) ## <a name="allow-users-to-create-and-manage-their-own-groups"></a>Permitir que usuários criem e gerenciem os próprios grupos O Azure AD permite grupos que podem ser gerenciados por proprietários de grupo em vez de administradores de IT. Conhecido como *gerenciamento de grupos de autoatendimento*, este recurso permite que proprietários de grupos que não foram atribuídos a uma função administrativa para criar e gerenciar grupos de segurança. Os usuários podem solicitar a associação a um grupo de segurança, e essa solicitação vai para o proprietário do grupo e não para o administrador da TI. Isso permite que o controle diário da associação ao grupo seja delegado a equipes, projetos ou aos proprietários da empresa, os quais entendem o uso comercial do grupo e podem gerenciar as suas associações. >[!Note] >O gerenciamento de grupo de autoatendimento está disponível apenas aos grupos de segurança do Azure AD e Microsoft 365. Ele não está disponível para grupos habilitados para email, listas de distribuição ou qualquer grupo que tenha sido sincronizado do AD DS. > Para saber mais, consulte as [instruções para configurar um grupo do Azure AD para o gerenciamento de autoatendimento](/azure/active-directory/active-directory-accessmanagement-self-service-group-management). ## <a name="set-up-dynamic-group-membership"></a>Configurar associações de grupo dinâmico O Azure AD dá suporte à configuração de uma série de regras que adicionam ou removem automaticamente contas de usuário como membros de um grupo do Azure AD. Isso é conhecido como *associação de grupo dinâmico*. As regras se baseiam em atributos das contas de usuário, como Departamento ou País. Veja como as regras são aplicadas: - Se uma nova conta de usuário atende a todas as regras do grupo, ela se tornará membro. - Se uma conta de usuário não for um membro do grupo, mas seus atributos forem alterados para corresponder a todas as regras do grupo, ela se tornará um membro desse grupo. - Se uma conta de usuário não corresponder a todas as regras do grupo, ela não será adicionada ao grupo. - Se a conta de usuário for membro do grupo, mas seus atributos forem alterados de modo que ela passe a não atender mais a todas as regras do grupo, ela será removida do grupo. Para usar a associação dinâmica, você deve primeiro determinar os conjuntos de grupos que possuem um conjunto comum de atributos de conta do usuário. Por exemplo, todos os membros do Departamento de vendas devem estar no grupo Sales Azure AD, com base no atributo de conta de usuário Departamento definido como "Vendas". Consulte as [instruções para criar e configurar regras para um grupo dinâmico no Azure AD](/azure/active-directory/active-directory-groups-dynamic-membership-azure-portal). ## <a name="set-up-automatic-licensing"></a>Configurar licenciamento automático Você pode configurar grupos de segurança no Azure AD para atribuir automaticamente licenças de um conjunto de assinaturas a todos os membros do grupo. Isso é conhecido como *licenciamento baseado em grupo*. Se uma conta de usuário for adicionada ou removida do grupo, as licenças das assinaturas do grupo serão automaticamente atribuídas ou não atribuídas à conta do usuário. No Microsoft 365 Enterprise, você configurará grupos de segurança do Azure AD para atribuir as licenças apropriadas do Microsoft 365 Enterprise. Verifique se você tem licenças suficientes para todos os membros do grupo. Se você ficar sem licenças, os novos usuários não receberão licenças até que as licenças estejam disponíveis. >[!Note] >Você não deve configurar o licenciamento baseado em grupo para grupos que contenham contas do Azure para empresas (B2B). > Para obter mais informações, consulte Noções básicas de licenciamento baseado [em grupo no Azure AD](/azure/active-directory/active-directory-licensing-whatis-azure-portal). Consulte as [instruções para configurar o licenciamento baseado em grupo para um grupo de segurança do Azure.](/azure/active-directory/active-directory-licensing-group-assignment-azure-portal)
65.287129
411
0.802244
por_Latn
0.999048
48885729af882c153f5b1f1ca3be101081b88d45
899
md
Markdown
docs/framework/wcf/diagnostics/etw/3393-streamedmessagereadbyencoder.md
mattia-lunardi/docs.it-it
b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/etw/3393-streamedmessagereadbyencoder.md
mattia-lunardi/docs.it-it
b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/etw/3393-streamedmessagereadbyencoder.md
mattia-lunardi/docs.it-it
b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 3393 - StreamedMessageReadByEncoder ms.date: 03/30/2017 ms.assetid: 70ebde45-9e46-4adb-9020-c17e9c6786e4 ms.openlocfilehash: 1840fa38c641529d2a3bd3d6ca865236e8599cd4 ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 05/04/2018 ms.locfileid: "33465624" --- # <a name="3393---streamedmessagereadbyencoder"></a>3393 - StreamedMessageReadByEncoder ## <a name="properties"></a>Proprietà ||| |-|-| |ID|3393| |Parole chiave|Canale| |Livello|Informazioni| |Canale|Microsoft-Windows-Application Server-Applications/Debug| ## <a name="description"></a>Descrizione Questo evento viene generato quando il messaggio trasmesso è stato letto dal codificatore. ## <a name="message"></a>Messaggio Un messaggio trasmesso è stato letto dal codificatore. ## <a name="details"></a>Dettagli
31
93
0.747497
ita_Latn
0.683099
4888695a7223ab0f8438377a593684bb72bf07fa
15,218
md
Markdown
articles/cognitive-services/Speech-Service/includes/how-to/speaker-recognition-basics/speaker-recognition-basics-javascript.md
Kraviecc/azure-docs.pl-pl
4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Speech-Service/includes/how-to/speaker-recognition-basics/speaker-recognition-basics-javascript.md
Kraviecc/azure-docs.pl-pl
4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Speech-Service/includes/how-to/speaker-recognition-basics/speaker-recognition-basics-javascript.md
Kraviecc/azure-docs.pl-pl
4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- author: v-jaswel ms.service: cognitive-services ms.topic: include ms.date: 10/07/2020 ms.author: v-jawe ms.custom: references_regions ms.openlocfilehash: 81186e6cf49e5f7e76a938881441cafa99d178eb ms.sourcegitcommit: ba676927b1a8acd7c30708144e201f63ce89021d ms.translationtype: MT ms.contentlocale: pl-PL ms.lasthandoff: 03/07/2021 ms.locfileid: "102444438" --- W tym przewodniku szybki start przedstawiono podstawowe wzorce projektowe dla rozpoznawanie osoby mówiącej przy użyciu zestawu Speech SDK, w tym: * Weryfikacja niezależna od tekstu i tekstu * Identyfikacja osoby mówiącej, aby identyfikować przykład głosu między grupą głosów * Usuwanie profilów głosowych Aby zapoznać się z pojęciami dotyczącymi rozpoznawania mowy, zobacz artykuł z [omówieniem](../../../speaker-recognition-overview.md) . ## <a name="skip-to-samples-on-github"></a>Przejdź do przykładów w witrynie GitHub Jeśli chcesz pominąć prosty kod przykładowy, zobacz [przykłady przewodnika Szybki Start dla języka JavaScript](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/fa6428a0837779cbeae172688e0286625e340942/quickstart/javascript/node/speaker-recognition) w witrynie GitHub. ## <a name="prerequisites"></a>Wymagania wstępne W tym artykule przyjęto założenie, że masz konto platformy Azure i subskrypcję usługi mowy. Jeśli nie masz konta i subskrypcji, [Wypróbuj usługę mowy bezpłatnie](../../../overview.md#try-the-speech-service-for-free). > [!IMPORTANT] > Rozpoznawanie osoby mówiącej jest obecnie obsługiwana *tylko* w zasobach usługi Azure Speech utworzonych w `westus` regionie. ## <a name="install-the-speech-sdk"></a>Instalowanie zestawu SDK usługi Mowa Przed wykonaniem jakichkolwiek czynności konieczne będzie zainstalowanie <a href="https://www.npmjs.com/package/microsoft-cognitiveservices-speech-sdk" target="_blank">zestawu Speech SDK dla języka JavaScript </a>. W zależności od platformy należy wykonać następujące instrukcje: - <a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk?tabs=nodejs#get-the-speech-sdk" target="_blank">Node.js <span class="docon docon-navigate-external x-hidden-focus"></span></a> - <a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk?tabs=browser#get-the-speech-sdk" target="_blank">Przeglądarka sieci Web </a> Ponadto, w zależności od środowiska docelowego, użyj jednego z następujących elementów: # <a name="script"></a>[napisy](#tab/script) Pobierz i Wyodrębnij <a href="https://aka.ms/csspeech/jsbrowserpackage" target="_blank">zestaw Speech SDK dla</a> pliku *microsoft.cognitiveservices.speech.sdk.bundle.js* JavaScript i umieść go w folderze dostępnym dla pliku HTML. ```html <script src="microsoft.cognitiveservices.speech.sdk.bundle.js"></script>; ``` > [!TIP] > Jeśli masz dostęp do przeglądarki sieci Web i używasz `<script>` znacznika; `sdk` prefiks nie jest wymagany. `sdk`Prefiks jest aliasem używanym do nazwy `require` modułu. # <a name="import"></a>[zaimportować](#tab/import) ```javascript import * from "microsoft-cognitiveservices-speech-sdk"; ``` Aby uzyskać więcej informacji na temat `import` , zobacz <a href="https://javascript.info/import-export" target="_blank">eksport i import </a>. # <a name="require"></a>[wymagane](#tab/require) ```javascript const sdk = require("microsoft-cognitiveservices-speech-sdk"); ``` Aby uzyskać więcej informacji na temat `require` , zobacz <a href="https://nodejs.org/en/knowledge/getting-started/what-is-require/" target="_blank">co to jest wymagane? </a>. --- ## <a name="import-dependencies"></a>Importowanie zależności Aby uruchomić przykłady z tego artykułu, należy dodać następujące instrukcje w górnej części pliku. js. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="dependencies"::: Te instrukcje służą do zaimportowania wymaganych bibliotek i pobrania klucza subskrypcji i regionu usługi rozpoznawania mowy ze zmiennych środowiskowych. Określają one również ścieżki do plików audio, które będą używane w ramach następujących zadań. ## <a name="create-helper-function"></a>Utwórz funkcję pomocnika Dodaj następującą funkcję pomocnika, aby odczytywać pliki audio w strumieniach używanych przez usługę mowy. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="helpers"::: W tej funkcji należy użyć metod [AudioInputStream. createPushStream](/javascript/api/microsoft-cognitiveservices-speech-sdk/audioinputstream#createpushstream-audiostreamformat-) i [AudioConfig. fromStreamInput](/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig#fromstreaminput-audioinputstream---pullaudioinputstreamcallback-) , aby utworzyć obiekt [AudioConfig](/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig) . Ten `AudioConfig` obiekt reprezentuje strumień audio. Podczas wykonywania poniższych zadań będziesz używać kilku z tych `AudioConfig` obiektów. ## <a name="text-dependent-verification"></a>Weryfikacja zależna od tekstu Weryfikacja osoby mówiącej to czynność potwierdzająca, że prelegent pasuje do znanego lub **zarejestrowanego** głosu. Pierwszym krokiem jest **zarejestrowanie** profilu głosowego, dzięki czemu usługa ma coś do porównania przyszłych przykładów głosowych. W tym przykładzie należy zarejestrować profil przy użyciu strategii **zależnej od tekstu** , która wymaga określonego hasła do rejestracji i weryfikacji. Zapoznaj [się z](/rest/api/speakerrecognition/) listą obsługiwanych hasła. ### <a name="textdependentverification-function"></a>TextDependentVerification, funkcja Zacznij od utworzenia `TextDependentVerification` funkcji. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="text_dependent_verification"::: Ta funkcja tworzy obiekt [VoiceProfile](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofile) za pomocą metody [VoiceProfileClient. createProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#createprofileasync-voiceprofiletype--string---e--voiceprofile-----void---e--string-----void-) . Należy zauważyć, że istnieją trzy [typy](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofiletype) `VoiceProfile` : - TextIndependentIdentification - TextDependentVerification - TextIndependentVerification W takim przypadku należy przekazać `VoiceProfileType.TextDependentVerification` do `VoiceProfileClient.createProfileAsync` . Następnie należy wywołać dwie funkcje pomocnika, które zdefiniujesz dalej `AddEnrollmentsToTextDependentProfile` i `SpeakerVerify` . Na koniec Wywołaj [VoiceProfileClient. deleteProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#deleteprofileasync-voiceprofile---response--voiceprofileresult-----void---e--string-----void-) , aby usunąć profil. ### <a name="addenrollmentstotextdependentprofile-function"></a>AddEnrollmentsToTextDependentProfile, funkcja Zdefiniuj następującą funkcję, aby zarejestrować profil głosowy. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="add_enrollments_dependent"::: W tej funkcji należy wywołać `GetAudioConfigFromFile` funkcję zdefiniowaną wcześniej, aby utworzyć `AudioConfig` obiekty z próbek audio. Te próbki audio zawierają hasło, takie jak "My Voice to My Passport, verify Me". Następnie należy zarejestrować te próbki audio przy użyciu metody [VoiceProfileClient. enrollProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#enrollprofileasync-voiceprofile--audioconfig---e--voiceprofileenrollmentresult-----void---e--string-----void-) . ### <a name="speakerverify-function"></a>SpeakerVerify, funkcja Zdefiniuj `SpeakerVerify` w następujący sposób. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="speaker_verify"::: W tej funkcji utworzysz obiekt [SpeakerVerificationModel](/javascript/api/microsoft-cognitiveservices-speech-sdk/speakerverificationmodel) za pomocą metody [SpeakerVerificationModel. FromProfile](/javascript/api/microsoft-cognitiveservices-speech-sdk/speakerverificationmodel#fromprofile-voiceprofile-) , przekazując do utworzonego wcześniej obiektu [VoiceProfile](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofile) . Następnie należy wywołać metodę [SpeechRecognizer. recognizeOnceAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer#recognizeonceasync--e--speechrecognitionresult-----void---e--string-----void-) , aby sprawdzić poprawność próbki audio, która zawiera to samo hasło co zarejestrowane wcześniej próbki audio. `SpeechRecognizer.recognizeOnceAsync` zwraca obiekt [SpeakerRecognitionResult](/javascript/api/microsoft-cognitiveservices-speech-sdk/speakerrecognitionresult) , którego `score` Właściwość zawiera wynik podobieństwa z zakresu od 0,0 do 1,0. `SpeakerRecognitionResult`Obiekt zawiera również `reason` Właściwość typu [ResultReason](/javascript/api/microsoft-cognitiveservices-speech-sdk/resultreason). Jeśli weryfikacja zakończyła się pomyślnie, `reason` Właściwość powinna mieć wartość `RecognizedSpeaker` . ## <a name="text-independent-verification"></a>Weryfikacja niezależna od tekstu W przeciwieństwie do weryfikacji **zależnej od tekstu** , weryfikacja **niezależna od tekstu** : * Nie wymaga wymawiania określonego hasła, co może być wypowiadane * Nie wymaga trzech próbek audio *, ale wymaga* 20 s całkowitego dźwięku ### <a name="textindependentverification-function"></a>TextIndependentVerification, funkcja Zacznij od utworzenia `TextIndependentVerification` funkcji. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="text_independent_verification"::: Podobnie jak w przypadku `TextDependentVerification` funkcji, ta funkcja tworzy obiekt [VoiceProfile](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofile) za pomocą metody [VoiceProfileClient. createProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#createprofileasync-voiceprofiletype--string---e--voiceprofile-----void---e--string-----void-) . W takim przypadku należy przekazać `VoiceProfileType.TextIndependentVerification` do `createProfileAsync` . Następnie należy wywołać dwie funkcje pomocnika: `AddEnrollmentsToTextIndependentProfile` , które zdefiniujesz dalej, i `SpeakerVerify` zdefiniowane już. Na koniec Wywołaj [VoiceProfileClient. deleteProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#deleteprofileasync-voiceprofile---response--voiceprofileresult-----void---e--string-----void-) , aby usunąć profil. ### <a name="addenrollmentstotextindependentprofile"></a>AddEnrollmentsToTextIndependentProfile Zdefiniuj następującą funkcję, aby zarejestrować profil głosowy. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="add_enrollments_independent"::: W tej funkcji należy wywołać `GetAudioConfigFromFile` funkcję zdefiniowaną wcześniej, aby utworzyć `AudioConfig` obiekty z próbek audio. Następnie należy zarejestrować te próbki audio przy użyciu metody [VoiceProfileClient. enrollProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#enrollprofileasync-voiceprofile--audioconfig---e--voiceprofileenrollmentresult-----void---e--string-----void-) . ## <a name="speaker-identification"></a>Identyfikacja osoby mówiącej Identyfikacja osoby mówiącej służy do określenia **, kto** jest w danej grupie zarejestrowanych głosów. Proces jest podobny do **weryfikacji niezależnej od tekstu**, z główną różnicą, że można zweryfikować wiele profilów głosowych jednocześnie, zamiast weryfikować się w przypadku pojedynczego profilu. ### <a name="textindependentidentification-function"></a>TextIndependentIdentification, funkcja Zacznij od utworzenia `TextIndependentIdentification` funkcji. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="text_independent_indentification"::: Podobnie jak `TextDependentVerification` `TextIndependentVerification` funkcje i, ta funkcja tworzy obiekt [VoiceProfile](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofile) za pomocą metody [VoiceProfileClient. createProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#createprofileasync-voiceprofiletype--string---e--voiceprofile-----void---e--string-----void-) . W takim przypadku należy przekazać `VoiceProfileType.TextIndependentIdentification` do `VoiceProfileClient.createProfileAsync` . Następnie należy wywołać dwie funkcje pomocnika: `AddEnrollmentsToTextIndependentProfile` , które zostały już zdefiniowane, i `SpeakerIdentify` , które zdefiniujesz dalej. Na koniec Wywołaj [VoiceProfileClient. deleteProfileAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient#deleteprofileasync-voiceprofile---response--voiceprofileresult-----void---e--string-----void-) , aby usunąć profil. ### <a name="speakeridentify-function"></a>SpeakerIdentify, funkcja Zdefiniuj `SpeakerIdentify` funkcję w następujący sposób. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="speaker_identify"::: W tej funkcji utworzysz obiekt [SpeakerIdentificationModel](/javascript/api/microsoft-cognitiveservices-speech-sdk/speakeridentificationmodel) za pomocą metody [SpeakerIdentificationModel. fromProfiles](/javascript/api/microsoft-cognitiveservices-speech-sdk/speakeridentificationmodel#fromprofiles-voiceprofile---) , przekazując do utworzonego wcześniej obiektu [VoiceProfile](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofile) . Następnie należy wywołać metodę [SpeechRecognizer. recognizeOnceAsync](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer#recognizeonceasync--e--speechrecognitionresult-----void---e--string-----void-) i przekazać przykład audio. `SpeechRecognizer.recognizeOnceAsync` próbuje zidentyfikować głos dla tego przykładu audio na podstawie `VoiceProfile` obiektów użytych do utworzenia `SpeakerIdentificationModel` . Zwraca obiekt [SpeakerRecognitionResult](/javascript/api/microsoft-cognitiveservices-speech-sdk/speakerrecognitionresult) , którego `profileId` Właściwość identyfikuje dopasowanie `VoiceProfile` , jeśli istnieje, podczas gdy `score` Właściwość zawiera wynik podobieństwa z zakresu od 0.0 do 1.0. ## <a name="main-function"></a>Funkcja Main Na koniec Zdefiniuj `main` funkcję w następujący sposób. :::code language="javascript" source="~/cognitive-services-quickstart-code/javascript/speech/speaker-recognition.js" id="main"::: Ta funkcja tworzy obiekt [VoiceProfileClient](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient) , który jest używany do tworzenia, rejestrowania i usuwania profilów głosowych. Następnie wywołuje wcześniej zdefiniowane funkcje.
80.946809
839
0.815022
pol_Latn
0.989302
4888d31fac38feb56e8013c5100a77d465e9bd58
420
md
Markdown
README.md
gavincangan/alvin
4e1945a3f5bb061842f0e35633f254863f8923c8
[ "MIT" ]
2
2019-01-26T00:12:35.000Z
2020-01-30T01:25:22.000Z
README.md
gavincangan/alvin
4e1945a3f5bb061842f0e35633f254863f8923c8
[ "MIT" ]
null
null
null
README.md
gavincangan/alvin
4e1945a3f5bb061842f0e35633f254863f8923c8
[ "MIT" ]
2
2018-07-12T17:44:12.000Z
2021-04-12T06:49:03.000Z
This is simple 2D robot simulator which was forked from the code for Clint Liddick's [Intro to Robotics](http://www.clintonliddick.com/articles/intro-to-robotics-part-1/) series. To install dependencies run: ```bash cd robotics_intro pip install -r requirements.txt ``` or install a development version of the package: ```bash pip install -e alvin ``` Execute directly from the command line: ```bash alvin.py ```
19.090909
178
0.752381
eng_Latn
0.969415
4889a9d187c5cc2ddbd320113b97525e3d8550c6
11,996
md
Markdown
docs/Google/Chromium/code/base/memory/smart-pointer/scoped_refptr/index.md
dengking/python-in-action
82587961a9f12be09e60d20e11ccbee5bb0aa027
[ "Apache-2.0" ]
null
null
null
docs/Google/Chromium/code/base/memory/smart-pointer/scoped_refptr/index.md
dengking/python-in-action
82587961a9f12be09e60d20e11ccbee5bb0aa027
[ "Apache-2.0" ]
null
null
null
docs/Google/Chromium/code/base/memory/smart-pointer/scoped_refptr/index.md
dengking/python-in-action
82587961a9f12be09e60d20e11ccbee5bb0aa027
[ "Apache-2.0" ]
null
null
null
# github [chromium](https://github.com/chromium/chromium/tree/master)/[base](https://github.com/chromium/chromium/tree/master/base)/[memory](https://github.com/chromium/chromium/tree/master/base/memory)/[**scoped_refptr.h**](https://github.com/chromium/chromium/blob/master/base/memory/scoped_refptr.h) ## doc ```C++ // A smart pointer class for reference counted objects. Use this class instead // of calling AddRef and Release manually on a reference counted object to // avoid common memory leaks caused by forgetting to Release an object // reference. ``` 一、如何理解上面这段话? 1、首先需要清楚的是 "reference counted objects" 的含义: 它是指继承了在[chromium](https://github.com/chromium/chromium)/[base](https://github.com/chromium/chromium/tree/master/base)/[memory](https://github.com/chromium/chromium/tree/master/base/memory)/**[ref_counted.h](https://github.com/chromium/chromium/blob/master/base/memory/ref_counted.h)** 中定义的 `RefCounted`、`RefCountedThreadSafe` 之一的,典型的例子: a、下面的 `class MyFoo` b、[chromium](https://github.com/chromium/chromium/tree/master)/[base](https://github.com/chromium/chromium/tree/master/base)/[**callback_internal.h**](https://github.com/chromium/chromium/blob/master/base/callback_internal.h) ```C++ class BASE_EXPORT BindStateBase : public RefCountedThreadSafe<BindStateBase, BindStateBaseRefCountTraits> ``` 2、`scoped_refptr` 本质上是RAII wrapper for `RefCounted`、`RefCountedThreadSafe`: ```C++ // Use this class instead // of calling AddRef and Release manually on a reference counted object to // avoid common memory leaks caused by forgetting to Release an object // reference. ``` 3、`scoped_refptr` 是smart pointer,它的usage和`std::shared_ptr`是非常类似的。 ### example ```C++ class MyFoo : public RefCounted<MyFoo> { ... private: friend class RefCounted<MyFoo>; // Allow destruction by RefCounted<>. ~MyFoo(); // Destructor must be private/protected. }; void some_function() { scoped_refptr<MyFoo> foo = MakeRefCounted<MyFoo>(); foo->Method(param); // |foo| is released when this function returns } // void some_other_function() { scoped_refptr<MyFoo> foo = MakeRefCounted<MyFoo>(); ... foo.reset(); // explicitly releases |foo| ... if (foo) foo->Method(param); } ``` 需要注意的是: `RefCounted` 提供了无惨的default constructor,compiler会自动调用。 ### example: exchange references ```c++ { scoped_refptr<MyFoo> a = MakeRefCounted<MyFoo>(); scoped_refptr<MyFoo> b; b.swap(a); // now, |b| references the MyFoo object, and |a| references nullptr. } ``` ### example: assignment ```C++ { scoped_refptr<MyFoo> a = MakeRefCounted<MyFoo>(); scoped_refptr<MyFoo> b; b = a; // now, |a| and |b| each own a reference to the same MyFoo object. } ``` ## code ```C++ template <class T> class TRIVIAL_ABI scoped_refptr { public: typedef T element_type; protected: T* ptr_ = nullptr; }; ``` `T` 的类型是 "ref counted type"。 ### object generator #### `AdoptRef` ```C++ // Creates a scoped_refptr from a raw pointer without incrementing the reference // count. Use this only for a newly created object whose reference count starts // from 1 instead of 0. template <typename T> scoped_refptr<T> AdoptRef(T* obj) { using Tag = std::decay_t<decltype( T::kRefCountPreference )>; static_assert( std::is_same<subtle::StartRefCountFromOneTag, Tag>::value, "Use AdoptRef only if the reference count starts from one." ); DCHECK(obj); DCHECK(obj->HasOneRef()); obj->Adopted(); return scoped_refptr<T>(obj, subtle::kAdoptRefTag); } ``` 可以看到: `AdoptRef`仅仅用于 `StartRefCountFromOneTag`。 #### `MakeRefCounted` ```c++ namespace subtle { template <typename T> scoped_refptr<T> AdoptRefIfNeeded(T* obj, StartRefCountFromZeroTag) { return scoped_refptr<T>(obj); } template <typename T> scoped_refptr<T> AdoptRefIfNeeded(T* obj, StartRefCountFromOneTag) { return AdoptRef(obj); } } // namespace subtle // Constructs an instance of T, which is a ref counted type, and wraps the // object into a scoped_refptr<T>. template <typename T, typename... Args> scoped_refptr<T> MakeRefCounted(Args&&... args) { T* obj = new T(std::forward<Args>(args)...); return subtle::AdoptRefIfNeeded(obj, T::kRefCountPreference); } ``` #### `WrapRefCounted` ```C++ // Takes an instance of T, which is a ref counted type, and wraps the object // into a scoped_refptr<T>. template <typename T> scoped_refptr<T> WrapRefCounted(T* t) { return scoped_refptr<T>(t); } ``` ### constructor 需要处理如下类型的转换: 1、raw pointer 2、`scoped_ptr` of other type 3、`scoped_ptr` of same type 4、`std::nullptr_t` #### from raw pointer ```C++ // Constructs from a raw pointer. Note that this constructor allows implicit // conversion from T* to scoped_refptr<T> which is strongly discouraged. If // you are creating a new ref-counted object please use // base::MakeRefCounted<T>() or base::WrapRefCounted<T>(). Otherwise you // should move or copy construct from an existing scoped_refptr<T> to the // ref-counted object. scoped_refptr(T* p) : ptr_(p) { if (ptr_) AddRef(ptr_); // 注意,需要AddRef一下 } ``` 通过comment可知,上述函数是不建议使用的,而是应该直接使用object generator function。 #### from `std::nullptr_t` ```C++ // Allow implicit construction from nullptr. constexpr scoped_refptr(std::nullptr_t) {} ``` #### Copy constructor ```C++ // Copy constructor. This is required in addition to the copy conversion // constructor below. scoped_refptr(const scoped_refptr& r) : scoped_refptr(r.ptr_) {} ``` 上述copy constructor会调用前面的"from raw pointer constructor",这是使用的C++11 delegating constructor特性。 #### Copy conversion constructor ```C++ // Copy conversion constructor. template < typename U, typename = typename std::enable_if<std::is_convertible<U*, T*>::value>::type > scoped_refptr(const scoped_refptr<U>& r) : scoped_refptr(r.ptr_) {} ``` 一、上述copy constructor会调用前面的"from raw pointer constructor",这是使用的C++11 delegating constructor特性。 二、上述代码虽然简单,但是我比较疑惑的是它会进行哪些conversion? C++允许指针的declaration type和它实际所指向的object的真实type不同,上述函数首先使用 `std::is_convertible<U*, T*>`进行校验,在具体实现上,它使用C++11 delegating constructor特性来调用`scoped_refptr(r.ptr_)`,`r.ptr_` 的类型是 `U*`,而constructor `scoped_refptr(T* p)`的入参类型是 `T*`,因此此处就涉及pointer conversion,由于前面做过了static check,因此这个conversion是可以成功的,下面是简化的例子: ```C++ #include <algorithm> #include <iostream> #include <type_traits> template <class T> class scoped_refptr { public: scoped_refptr(T* p) : ptr_(p) { std::cout << __PRETTY_FUNCTION__ << std::endl; } template<typename U, typename = typename std::enable_if<std::is_convertible<U*, T*>::value>::type> scoped_refptr(const scoped_refptr<U>& p) : scoped_refptr(p.ptr_) { std::cout << __PRETTY_FUNCTION__ << std::endl; } ~scoped_refptr() { std::cout << __PRETTY_FUNCTION__ << std::endl; delete ptr_; } protected: T* ptr_ = nullptr; private: // Friend required for move constructors that set r.ptr_ to null. // 如果不加这个friend声明,则会产生如下编译报错: // test.cpp:12:68: error: ‘Derived* scoped_refptr<Derived>::ptr_’ is protected within this context // 12 | st scoped_refptr<U>&p) : scoped_refptr(p.ptr_) template <typename U> friend class scoped_refptr; }; class Base {}; class Derived :public Base {}; int main() { auto ptr0 = scoped_refptr<Derived>(new Derived); auto ptr1 = scoped_refptr<Base>(ptr0); } // g++ test.cpp --std=c++11 -pedantic -Wall ``` 输出如下: ```C++ scoped_refptr<T>::scoped_refptr(T*) [with T = Derived] scoped_refptr<T>::scoped_refptr(T*) [with T = Base] scoped_refptr<T>::scoped_refptr(const scoped_refptr<U>&) [with U = Derived; <template-parameter-2-2> = void; T = Base] scoped_refptr<T>::~scoped_refptr() [with T = Base] scoped_refptr<T>::~scoped_refptr() [with T = Derived] free(): double free detected in tcache 2 Aborted (core dumped) ``` 上述导致double free的原因非常简单。 #### Move constructor ```C++ // Move constructor. This is required in addition to the move conversion // constructor below. scoped_refptr(scoped_refptr&& r) noexcept : ptr_(r.ptr_) { r.ptr_ = nullptr; } ``` 可以看到,move constructor的实现非常简单,就是简单的pointer assignment: `ptr_(r.ptr_)`。 #### Move conversion constructor ```C++ // Move conversion constructor. template < typename U, typename = typename std::enable_if<std::is_convertible<U*, T*>::value>::type > scoped_refptr(scoped_refptr<U>&& r) noexcept : ptr_(r.ptr_) { r.ptr_ = nullptr; } ``` "Move conversion constructor"的实现也是非常简单的,它在move constructor的基础上增加了`std::is_convertible`的检查,需要注意的是: 它本质上也是直接的pointer assignment。 ### Assignment operator "assignment"其实也就意味着要将current object替换为(replace)为它右边的,current object管理着resource的时候,往往需要分配、deep copy、release,这就诞生了"unified assignment operator",它是典型的使用copy-and-swap-idiom的,下面是对`scoped_refptr`的简化,以更好地理解它的运行原理: ```C++ #include <algorithm> #include <iostream> template <class T> class scoped_refptr { public: scoped_refptr(T* p) : ptr_(p) { std::cout << __PRETTY_FUNCTION__ << std::endl; } // Move constructor. scoped_refptr(scoped_refptr&& r) noexcept : ptr_(r.ptr_) { r.ptr_ = nullptr; std::cout << __PRETTY_FUNCTION__ << std::endl; } scoped_refptr& operator=(T* p) { std::cout << __PRETTY_FUNCTION__ << std::endl; return *this = scoped_refptr(p); } // Unified assignment operator. scoped_refptr& operator=(scoped_refptr r) noexcept { std::cout << __PRETTY_FUNCTION__ << std::endl; swap(r); return *this; } void swap(scoped_refptr& r) noexcept { std::cout << __PRETTY_FUNCTION__ << std::endl; std::swap(ptr_, r.ptr_); } ~scoped_refptr() { std::cout << __PRETTY_FUNCTION__ << std::endl; delete ptr_; } protected: T* ptr_ = nullptr; }; int main() { auto ptr = scoped_refptr<int>(new int{ 1 }); ptr = new int{ 2 }; } // g++ test.cpp --std=c++11 -pedantic -Wall ``` 它的输出如下: ```C++ scoped_refptr<T>::scoped_refptr(T*) [with T = int] scoped_refptr<T>& scoped_refptr<T>::operator=(T*) [with T = int] scoped_refptr<T>::scoped_refptr(T*) [with T = int] scoped_refptr<T>& scoped_refptr<T>::operator=(scoped_refptr<T>) [with T = int] void scoped_refptr<T>::swap(scoped_refptr<T>&) [with T = int] scoped_refptr<T>::~scoped_refptr() [with T = int] scoped_refptr<T>::~scoped_refptr() [with T = int] ``` 1、通过上述输出可以看到 `scoped_refptr& operator=(T* p)` 的实现是依赖于 `scoped_refptr& operator=(scoped_refptr r) noexcept` 的。 2、另外一个值得关注的是: `scoped_refptr& operator=(T* p)` 在调用 `scoped_refptr& operator=(scoped_refptr r) noexcept` ,第二个函数的入参是不涉及move constructor的。为了验证这,我特地加了move constructor的,从输出来看,它是未被调用的。 #### Unified assignment operator ```c++ // Unified assignment operator. scoped_refptr& operator=(scoped_refptr r) noexcept { swap(r); return *this; } ``` 1、它是典型的使用copy-and-swap-idiom的 2、第二个值得关注的是: 它是 `noexcept` 的 #### from raw pointer ```C++ scoped_refptr& operator=(T* p) { return *this = scoped_refptr(p); } ``` #### from `std::nullptr_t` ```C++ scoped_refptr& operator=(std::nullptr_t) { reset(); return *this; } ``` ### `reset` ````C++ // Sets managed object to null and releases reference to the previous managed // object, if it existed. void reset() { scoped_refptr().swap(*this); } ```` 这个实现也是比较巧妙的: "default constructor + swap"。 ### `release()`: release ownership ```C++ // Returns the owned pointer (if any), releasing ownership to the caller. The // caller is responsible for managing the lifetime of the reference. T* release() WARN_UNUSED_RESULT; template <typename T> T* scoped_refptr<T>::release() { T* ptr = ptr_; ptr_ = nullptr; return ptr; } ``` ### `friend class scoped_refptr` 在类中有如下声明: ```C++ // Friend required for move constructors that set r.ptr_ to null. template <typename U> friend class scoped_refptr; ``` 它是做什么的呢?一开始我并没有明白,在编译"Copy conversion constructor"节的例子的时候我找到了答案: 在 copy conversion constructor、move conversion constructor中,入参的类型是 `scoped_refptr<U>` 而不是 `scoped_refptr<T>` ,显然它们不是相同的类型,而在 `scoped_refptr<T>` 中要访问 `scoped_refptr<U>` 的protected member `ptr_`,如果不添加上述friend 声明的话,显然是会导致编译报错的。 ## `shared_ptr` vs `scoped_ptr` 参见: 1、stackoverflow [shared_ptr vs scoped_ptr](https://stackoverflow.com/questions/1770636/shared-ptr-vs-scoped-ptr)
26.717149
380
0.714572
eng_Latn
0.700381
488a41f741f45082de8aec282412c2b65f6f4a94
86
md
Markdown
README.md
jwirfs-brock/pipeline-data-analysis
f23dddce5b97b8b0db139c9e57e49b5d4a350360
[ "MIT" ]
null
null
null
README.md
jwirfs-brock/pipeline-data-analysis
f23dddce5b97b8b0db139c9e57e49b5d4a350360
[ "MIT" ]
null
null
null
README.md
jwirfs-brock/pipeline-data-analysis
f23dddce5b97b8b0db139c9e57e49b5d4a350360
[ "MIT" ]
null
null
null
# pipeline-data-analysis Analysis of pipeline data (incidents and mileage) from PHSMA
28.666667
60
0.813953
eng_Latn
0.903662
488ad84b6374bd8c41463c669270fef61581b4f8
814
md
Markdown
website/versioned_docs/version-0.9.0-beta.1/api/interfaces/molecule.model.ISidebarPane.md
zhaomenghuan/molecule
ec0d4591ac8b3209571446a94e35eab67fbfc7ca
[ "MIT" ]
516
2021-12-15T15:41:01.000Z
2022-03-31T13:08:31.000Z
website/versioned_docs/version-0.9.0-beta.1/api/interfaces/molecule.model.ISidebarPane.md
joskid/molecule
ec0d4591ac8b3209571446a94e35eab67fbfc7ca
[ "MIT" ]
158
2021-12-16T02:02:34.000Z
2022-03-23T09:55:17.000Z
website/versioned_docs/version-0.9.0-beta.1/api/interfaces/molecule.model.ISidebarPane.md
joskid/molecule
ec0d4591ac8b3209571446a94e35eab67fbfc7ca
[ "MIT" ]
66
2021-12-16T01:45:48.000Z
2022-03-27T01:57:31.000Z
--- id: 'molecule.model.ISidebarPane' title: 'Interface: ISidebarPane' sidebar_label: 'ISidebarPane' custom_edit_url: null --- [molecule](../namespaces/molecule).[model](../namespaces/molecule.model).ISidebarPane ## Properties ### id • **id**: `UniqueId` #### Defined in [src/model/workbench/sidebar.ts:4](https://github.com/DTStack/molecule/blob/46c80551/src/model/workbench/sidebar.ts#L4) --- ### title • `Optional` **title**: `string` #### Defined in [src/model/workbench/sidebar.ts:5](https://github.com/DTStack/molecule/blob/46c80551/src/model/workbench/sidebar.ts#L5) ## Methods ### render ▸ `Optional` **render**(): `ReactNode` #### Returns `ReactNode` #### Defined in [src/model/workbench/sidebar.ts:6](https://github.com/DTStack/molecule/blob/46c80551/src/model/workbench/sidebar.ts#L6)
18.930233
119
0.707617
yue_Hant
0.416573
488ae9006710791f06ddef7b498e9ec12ee994f0
1,666
md
Markdown
docs/form-field-model.md
ronald-hove/ion-custom-form-builder
9f89c9fb308340203d91d14f74f9151b99d2b0d3
[ "MIT" ]
8
2020-01-03T09:08:42.000Z
2022-02-22T07:15:28.000Z
docs/form-field-model.md
ronald-hove/ion-custom-form-builder
9f89c9fb308340203d91d14f74f9151b99d2b0d3
[ "MIT" ]
11
2020-03-30T14:01:17.000Z
2022-03-02T06:50:53.000Z
docs/form-field-model.md
ronald-hove/ion-custom-form-builder
9f89c9fb308340203d91d14f74f9151b99d2b0d3
[ "MIT" ]
2
2020-01-04T13:25:37.000Z
2020-01-13T12:12:43.000Z
# FormField model [<-- Home](https://github.com/ronald-hove/ion-custom-form-builder) The FormField interface has the following structure with some important notes ```ts import { AsyncValidatorFn, ValidatorFn } from '@angular/forms'; export interface FormField { /** * * * @type {string} * @memberof FormField */ icon?: string; /** * * * @type {string} * @memberof FormField */ title: string; /** * * * @type {string} * @memberof FormField */ formControlName: string; /** * * * @type {Validators[]} * @memberof FormField */ validators?: ValidatorFn[]; /** * * * @type {AsyncValidator} * @memberof FormField */ asyncValidators?: AsyncValidatorFn[]; /** * * * @type {string} * @memberof FormField */ type: string; /** * * * @type {string} * @memberof FormField */ placeholder?: string; /** * * * @type {*} * @memberof FormField */ value?: any; /** * * * @type {string} * @memberof FormField */ formFieldType?: string; /** * * * @type {number} * @memberof FormField */ textAreaRowCount?: number; /** * * * @type {string} * @memberof FormField */ labelPosition?: string; /** * * * @type {ValidationMessage []} * @memberof FormField */ validationMessages?: ValidationMessage [] } export interface ValidationMessage { type: string; message: string; } ```
14.486957
77
0.494598
yue_Hant
0.505612
488b2a45a1cc3af5390d173a032b3964d019da6c
88
md
Markdown
README.md
goexplore/imsporty
4650012362154fb715890325b207c3d4a4cd1c20
[ "MIT" ]
null
null
null
README.md
goexplore/imsporty
4650012362154fb715890325b207c3d4a4cd1c20
[ "MIT" ]
null
null
null
README.md
goexplore/imsporty
4650012362154fb715890325b207c3d4a4cd1c20
[ "MIT" ]
null
null
null
<h1>Im Sporty</h1> <section>A small repository to test how MS Bot framework.</section>
22
67
0.738636
eng_Latn
0.961192
488b448ad72b93af91faf7dd566a7fc683e88ed4
60
md
Markdown
docs/README.md
resourcedesign/vue-image-badges
be3dec15415a784798fb54a4be96ab2dc74ff410
[ "MIT" ]
null
null
null
docs/README.md
resourcedesign/vue-image-badges
be3dec15415a784798fb54a4be96ab2dc74ff410
[ "MIT" ]
null
null
null
docs/README.md
resourcedesign/vue-image-badges
be3dec15415a784798fb54a4be96ab2dc74ff410
[ "MIT" ]
null
null
null
# vue-image-badge > A Vue component to add badges to images
20
41
0.75
eng_Latn
0.804791
488b680d6f366177809e24282cb599975304b6c3
3,730
md
Markdown
README.md
consilience/xero-php-oauth2-app
15948b94df2ceb6b71650766b6f2b0cc5292ebc7
[ "MIT" ]
null
null
null
README.md
consilience/xero-php-oauth2-app
15948b94df2ceb6b71650766b6f2b0cc5292ebc7
[ "MIT" ]
null
null
null
README.md
consilience/xero-php-oauth2-app
15948b94df2ceb6b71650766b6f2b0cc5292ebc7
[ "MIT" ]
null
null
null
# Xero PHP oAuth 2 App This PHP project demonstrates how to use the xero-php-oauth2 SDK. Use composer or clone this repository to your local machine to begin. You'll be able to connect to a Xero Organisation, make real API calls. The code used to make each API call will be displayed along with the results returned from Xero's API. ## Getting Started To run locally, you'll need a local web server with PHP support. * MAMP is a good option [Download MAMP](https://www.mamp.info/en/downloads/) ### Download Manually * Clone this repo into your local server webroot. * Launch a terminal app and change to the newly cloned folder `xero-php-oauth2-app` * Download dependencies with Composer using the folloing comman ``` composer install ``` ## Create a Xero App To obtain your API keys, follow these steps and create a Xero app * Create a [free Xero user account](https://www.xero.com/us/signup/api/) (if you don't have one) * Login to [Xero developer center](https://developer.xero.com/myapps) * Click "New App" link * Enter your App name, company url, privacy policy url. * Enter the redirect URI (your callback url - i.e. `http://localhost:8888/xero-php-oauth2-app/callback.php`) * Agree to terms and condition and click "Create App". * Click "Generate a secret" button. * Copy your client id and client secret and save for use later. * Click the "Save" button. You secret is now hidden. ## Configure your .env file You'll need to setup your `.env` file Rename the file `sample.env` to `.env` and copy and paste your *clientId, clientSecret and redirectUri* These .env variables will be read by authorization.php, callback.php, get.php. Sample.env file ```bash CLIENT_ID = "YOUR-CLIENT-ID" CLIENT_SECRET = "YOUR-CLIENT-SECRET" REDIRECT_URI = "http://localhost:8888/xero-php-oauth2-app/callback.php" ``` Sample PHP code from authorization.php ```php // This library will read variable from the .env file. $dotenv = Dotenv\Dotenv::createImmutable(__DIR__); $dotenv->load(); $clientId = getenv('CLIENT_ID'); $clientSecret = getenv('CLIENT_SECRET'); $redirectUri = getenv('REDIRECT_URI'); $provider = new \League\OAuth2\Client\Provider\GenericProvider([ 'clientId' => $clientId, 'clientSecret' => $clientSecret, 'redirectUri' => $redirectUri, 'urlAuthorize' => 'https://login.xero.com/identity/connect/authorize', 'urlAccessToken' => 'https://identity.xero.com/connect/token', 'urlResourceOwnerDetails' => 'https://api.xero.com/api.xro/2.0/Organisation' ]); ``` ## License This software is published under the [MIT License](http://en.wikipedia.org/wiki/MIT_License). Copyright (c) 2019 Xero Limited Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
40.543478
183
0.743968
eng_Latn
0.813018
488c88585f2f088cbe82355b1811eb926ba9378b
18
md
Markdown
images/README.md
fabiopalumbo/fabiopalumbo
0a3ea3711151952e6e4a3d17398f00d1da813059
[ "MIT" ]
null
null
null
images/README.md
fabiopalumbo/fabiopalumbo
0a3ea3711151952e6e4a3d17398f00d1da813059
[ "MIT" ]
null
null
null
images/README.md
fabiopalumbo/fabiopalumbo
0a3ea3711151952e6e4a3d17398f00d1da813059
[ "MIT" ]
null
null
null
# Image directory
9
17
0.777778
eng_Latn
0.957999
488cd06a919df63dc67a1f069217e393eb8b1d84
590
md
Markdown
docs/Dependents/mysqlclient.md
zhanghe06/bearing_project
78a20fc321f72d3ae05c7ab7e52e01d02904e3fc
[ "MIT" ]
1
2020-06-21T04:08:26.000Z
2020-06-21T04:08:26.000Z
docs/Dependents/mysqlclient.md
zhanghe06/bearing_project
78a20fc321f72d3ae05c7ab7e52e01d02904e3fc
[ "MIT" ]
13
2019-10-18T17:19:32.000Z
2022-01-13T00:44:43.000Z
docs/Dependents/mysqlclient.md
zhanghe06/bearing_project
78a20fc321f72d3ae05c7ab7e52e01d02904e3fc
[ "MIT" ]
5
2019-02-07T03:15:16.000Z
2021-09-04T14:06:28.000Z
# mysqlclient ``` pip install mysqlclient ``` ## 系统依赖 ### CenxOS ``` yum install -y gcc yum install -y mysql-devel yum install -y python-devel ``` ### Ubuntu ``` # mysql 版本 sudo apt-get install -y libmysqld-dev sudo apt-get install -y libmysqlclient-dev sudo apt-get install -y python-dev # mariadb 版本 sudo apt-get install -y libmariadbd-dev sudo apt-get install -y libmariadbclient-dev sudo apt-get install -y python-dev ``` ### MacOS ``` brew unlink mariadb brew install mariadb-connector-c ln -s /usr/local/opt/mariadb-connector-c/bin/mariadb_config /usr/local/bin/mysql_config ```
17.352941
87
0.728814
cat_Latn
0.156067
488d5ec07096c9402e40b1baddf00f15d0039b04
1,326
md
Markdown
news.md
aineniamh/website
726723a59d064259f0472d113febac3a1773c9ab
[ "CC-BY-3.0" ]
1
2020-11-20T05:05:05.000Z
2020-11-20T05:05:05.000Z
news.md
aineniamh/website
726723a59d064259f0472d113febac3a1773c9ab
[ "CC-BY-3.0" ]
1
2020-09-08T11:01:21.000Z
2020-09-08T11:01:21.000Z
news.md
aineniamh/website
726723a59d064259f0472d113febac3a1773c9ab
[ "CC-BY-3.0" ]
3
2020-03-13T10:58:06.000Z
2021-09-16T15:17:51.000Z
--- title: News layout: landing description: Recent updates from the network image: nav_menu: false show_tile: false --- <section id="content" class="spotlights"> <div class="inner"> <div class="box alt"> <div class="row 50% uniform"> {% assign news = site.news | sort: 'date' | reverse %} {% for page in news %} {% assign mod = forloop.index | modulo: 3 %} {% if mod == 0 %} <div class="4u$"><span class="image fit"> {% else %} <div class="4u"><span class="image fit"> {% endif %} <figure class="imghvr-reveal-right"><img src="{{ page.image }}" alt=""/> <figcaption> <h3>{{ page.title }}</h3> <p>{{ page.description }}</p> </figcaption> <a href="{{ page.permalink }}"></a> </figure> </span></div> {% endfor %} </div> </div> </div> <section> <a href="wp1.html" class="image"> <img src="assets/images/mantis.jpg" alt="" data-position="center center" /> </a> <div class="content"> <div class="inner"> <header class="major"> <h1>Find out more</h1> </header> <p>Follow the ARTIC twitter account to see what the team is up to</p> <ul class="actions"> <li><a href="{{ site.twitter_url }}" class="button">visit Twitter</a></li> </ul> </div> </div> </section> </section>
25.018868
79
0.558824
eng_Latn
0.482829
488e3e75fc0de5090c6fabea46a18aa8cac175dc
6,372
md
Markdown
includes/virtual-machines-common-classic-configure-availability.md
OpenLocalizationTestOrg/azure-docs-pr16_de-DE
bf18172a4f9060051b3861ff8930d9f0303f7f10
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
includes/virtual-machines-common-classic-configure-availability.md
OpenLocalizationTestOrg/azure-docs-pr16_de-DE
bf18172a4f9060051b3861ff8930d9f0303f7f10
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
includes/virtual-machines-common-classic-configure-availability.md
OpenLocalizationTestOrg/azure-docs-pr16_de-DE
bf18172a4f9060051b3861ff8930d9f0303f7f10
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
Eine Verfügbarkeit festlegen hilft behalten Sie Ihre virtuellen Computer zur Verfügung, während Ausfallzeiten, wie bei der Wartung. Platzieren zwei oder mehr ähnlich konfigurierten virtuellen Computern in einem Satz Verfügbarkeit erstellt die Redundanz benötigt, um die Verfügbarkeit der Applikationen oder Dienstleistungen, die Ihre virtuellen Computern ausgeführt wird. Weitere Informationen darüber, wie dies funktioniert finden Sie unter [Verwalten der Verfügbarkeit von virtuellen Computern] []. Es ist eine bewährte Methode, die sowohl Verfügbarkeit Sätze und den Lastenausgleich Endpunkte verwenden, um sicherzustellen, dass die Anwendung immer verfügbar und aktiv effizient ist. Details zum Lastenausgleich Endpunkte finden Sie unter [Lastenausgleich für Azure-Infrastrukturdiensten] []. Sie können klassische virtuellen Computern in einer Verfügbarkeit festlegen, indem Sie eine der beiden Optionen hinzufügen: - [Option 1: Erstellen eines virtuellen Computers und eine Verfügbarkeit zur gleichen Zeit festlegen] []. Fügen Sie neue virtuellen Computern klicken Sie dann auf den Satz, wenn Sie diese virtuelle Computer erstellen. - [Option 2: eine Verfügbarkeit Set vorhandenen virtuellen Computer hinzufügen] []. >[AZURE.NOTE] In der Option Klassisch müssen den gleichen Clouddienst virtuellen Computern, die Sie in demselben Satz Verfügbarkeit setzen möchten angehören. ## <a id="createset"> </a>Option 1: Erstellen eines virtuellen Computers und eine Verfügbarkeit zur gleichen Zeit festlegen## Sie können der Azure-Portal oder Azure PowerShell-Befehlen dazu verwenden. So verwenden Sie das Azure-portal 1. Wenn Sie bereits, melden Sie sich mit dem [Azure-Portal](https://portal.azure.com)nicht getan. 2. Klicken Sie im Menü Hub klicken Sie auf **+ neu**, und klicken Sie dann auf **virtuellen Computern**. ![Alternativer Bildtext](./media/virtual-machines-common-classic-configure-availability/ChooseVMImage.png) 3. Wählen Sie das Bild der Marketplace-virtuellen Computern, die, das Sie verwenden möchten. Sie können auch eine Linux oder Windows virtuellen Computers zu erstellen. 4. Stellen Sie für den ausgewählten virtuellen Computer sicher, dass das Bereitstellungsmodell auf **klassische** festgelegt ist, und klicken Sie dann auf **Erstellen** ![Alternativer Bildtext](./media/virtual-machines-common-classic-configure-availability/ChooseClassicModel.png) 5. Name des virtuellen Computers, Benutzername und Kennwort (für Windows-Computer) oder SSH öffentlicher Schlüssel (für Linux Maschinen) eingeben. 6. Wählen Sie die Größe des virtuellen Computer aus, und klicken Sie dann auf **Wählen Sie aus** , um den Vorgang fortzusetzen. 7. Wählen Sie **optionale Konfiguration > Verfügbarkeit festlegen**, und wählen Sie die Verfügbarkeit Sie legen des virtuellen Computers hinzufügen möchten. ![Alternativer Bildtext](./media/virtual-machines-common-classic-configure-availability/ChooseAvailabilitySet.png) 8. Überprüfen Sie Ihre Konfiguration für ein. Wenn Sie fertig sind, klicken Sie auf **Erstellen**. 9. Während des virtuellen Computers Azure erstellt hat, können Sie den Fortschritt unter **virtuellen Computern** im Menü Hub nachverfolgen. Azure PowerShell-Befehlen einer Azure-virtuellen Computern erstellen und es zu einem neuen oder vorhandenen Verfügbarkeit hinzufügen, finden Sie unter [Verwenden von Azure PowerShell zu erstellen und Windows-basierten virtuellen Maschinen vorkonfiguriert](../articles/virtual-machines/virtual-machines-windows-classic-create-powershell.md) ## <a id="addmachine"> </a>Option 2: Hinzufügen eines vorhandenen virtuellen Computers auf einen Satz Verfügbarkeit## Im Portal Azure können Sie eine vorhandene Verfügbarkeit vorhandenen klassischen virtuellen Computern festlegen oder Erstellen eines neuen Kontos für diese hinzufügen. (Denken Sie daran, die den virtuellen Computern in demselben Satz Verfügbarkeit müssen den gleichen Clouddienst angehören behalten Sie. bei) Die Schritte sind nahezu identisch. Mit Azure PowerShell können einem bestehenden Satz für die Verfügbarkeit des virtuellen Computers hinzugefügt werden. 1. Wenn Sie dies melden Sie sich mit dem [Portal Azure](https://portal.azure.com)noch nicht getan haben. 2. Klicken Sie im Menü Hub auf **virtuellen Computern (klassische)**. ![Alternativer Bildtext](./media/virtual-machines-common-classic-configure-availability/ChooseClassicVM.png) 3. Wählen Sie den Namen des virtuellen Computers, die Sie hinzufügen möchten, aus der Liste von virtuellen Computern. 4. Wählen Sie die **Einstellungen**des virtuellen Computers **Verfügbarkeit festlegen** . ![Alternativer Bildtext](./media/virtual-machines-common-classic-configure-availability/AvailabilitySetSettings.png) 5. Wählen Sie die Verfügbarkeit des virtuellen Computers hinzufügen möchten. Des virtuellen Computers müssen den gleichen Clouddienst gemäß der Verfügbarkeit angehören. ![Alternativer Bildtext](./media/virtual-machines-common-classic-configure-availability/AvailabilitySetPicker.png) 6. Klicken Sie auf **Speichern**. Um Azure PowerShell Befehle verwenden möchten, öffnen Sie eine Administrator Ebene Azure PowerShell-Sitzung, und führen Sie den folgenden Befehl. Die Platzhalter (z. B. &lt;VmCloudServiceName&gt;), ersetzen Sie alles innerhalb der Anführungszeichen, einschließlich der Zeichen, die mit den richtigen Namen < und >. Get-AzureVM -ServiceName "<VmCloudServiceName>" -Name "<VmName>" | Set-AzureAvailabilitySet -AvailabilitySetName "<AvSetName>" | Update-AzureVM >[AZURE.NOTE] Der virtuelle Computer möglicherweise neu gestartet werden, um es zur Verfügbarkeit festlegen hinzugefügt haben. <!-- LINKS --> [Option 1: Erstellen eines virtuellen Computers und eine Verfügbarkeit zur gleichen Zeit festlegen]: #createset [Option 2: Hinzufügen eines vorhandenen virtuellen Computers auf einen Satz Verfügbarkeit]: #addmachine [Lastenausgleich für Azure-Infrastrukturdiensten]: ../articles/virtual-machines/virtual-machines-linux-load-balance.md [Verwalten der Verfügbarkeit von virtuellen Computern]: ../articles/virtual-machines/virtual-machines-linux-manage-availability.md [Create a virtual machine running Windows]: ../articles/virtual-machines/virtual-machines-windows-hero-tutorial.md [Virtual Network overview]: ../articles/virtual-network/virtual-networks-overview.md
74.093023
500
0.809322
deu_Latn
0.994696
488ed8101ee21cf32b36d6a54c9694d192cd6b20
3,079
md
Markdown
microsoft-365/knowledge/topic-experiences-roles.md
MicrosoftDocs/microsoft-365-docs-pr.ja-JP
c664d7be02b15823aa7e7d690bc20f4a3b2e50df
[ "CC-BY-4.0", "MIT" ]
6
2020-05-18T20:11:23.000Z
2020-12-20T12:23:40.000Z
microsoft-365/knowledge/topic-experiences-roles.md
MicrosoftDocs/microsoft-365-docs-pr.ja-JP
c664d7be02b15823aa7e7d690bc20f4a3b2e50df
[ "CC-BY-4.0", "MIT" ]
12
2019-08-03T16:24:00.000Z
2022-02-09T06:47:57.000Z
microsoft-365/knowledge/topic-experiences-roles.md
MicrosoftDocs/microsoft-365-docs-pr.ja-JP
c664d7be02b15823aa7e7d690bc20f4a3b2e50df
[ "CC-BY-4.0", "MIT" ]
8
2019-08-03T16:21:10.000Z
2021-10-09T11:04:43.000Z
--- title: ユーザーの役割Microsoft Viva トピック ms.author: chucked author: chuckedmonson manager: pamgreen ms.reviewer: cjtan audience: admin ms.topic: article ms.service: '' ms.prod: microsoft-365-enterprise search.appverid: '' ms.collection: - enabler-strategic - m365initiative-viva-topics ms.localizationpriority: medium description: ビバ トピックのユーザー ロールについて説明します。 ms.openlocfilehash: bf245cfc41fa3ad08af74dd2f2efbb8c0ddfef37 ms.sourcegitcommit: d4b867e37bf741528ded7fb289e4f6847228d2c5 ms.translationtype: MT ms.contentlocale: ja-JP ms.lasthandoff: 10/06/2021 ms.locfileid: "60205799" --- # <a name="roles-in-microsoft-viva-topics"></a>ユーザーの役割Microsoft Viva トピック 特定の環境で Viva Topics をMicrosoft 365、ユーザーは次の役割を果たします。 - トピック閲覧者 - トピック投稿者 - 知識マネージャー - 知識管理者 ## <a name="topic-viewer"></a>トピック閲覧者 トピック ビューアーとは、SharePoint モダン サイト、Microsoft Search Office から SharePoint.com、およびトピック センターで強調表示されたトピックを表示できる組織内のユーザーです。 トピックページでトピックの詳細を表示できます。 トピックを強調表示し、トピック ページをトピック ビューアーに表示するには、以下を行う必要があります: - [管理者が Viva Topics ライセンス](./set-up-topic-experiences.md#assign-licenses)を割り当Microsoft 365します。 - トピックへの閲覧権を取得する。 このタスクは、サポート技術情報の [ビバ トピックの設定] ページのナレッジ管理者Microsoft 365 管理センター。 ## <a name="topic-contributors"></a>トピック共同作成者 トピック共同作成者とは、組織内でトピック閲覧者の権限を持つだけでなく、既存のトピックを編集したり、新しいトピックを作成したりできるユーザーのことです。 トピック ページ内の情報 (AI または手動で提供された両方) を手動で "キュレーション" し、品質を確保する上で重要な役割を果たします。 トピック投稿者のアクセス許可を持つユーザーには、[トピック] ページに [編集] ボタンが表示され、トピックの更新と発行が可能になります。 トピック共同作成者は、トピック センターから新しいトピックを作成して発行することもできます。 トピックを作成して編集するには、次の条件を実行する必要があります。 - [管理者が Viva Topics ライセンス](./set-up-topic-experiences.md#assign-licenses)を割り当Microsoft 365します。 - [トピックを作成および編集するためのアクセス許可を割り当てる](./topic-experiences-user-permissions.md)。 このタスクは、サポート技術情報の [ビバ トピックの設定] ページのナレッジ管理者Microsoft 365 管理センター。 ## <a name="knowledge-managers"></a>知識マネージャー 知識マネージャーとは、組織内のトピックを管理するユーザーのことです。 トピック管理は、トピック センターの **[トピックの** 管理] ページで行われ、ナレッジ マネージャーにのみ表示されます。 [トピックの **管理] ページ** で、ナレッジ マネージャーは次のタスクを実行できます。 - AI が提案するトピックを表示できます。 - トピックを確認して、有効なトピックを確認します。 - ユーザーに表示しないトピックを削除します。 さらに、知識マネージャーは既存のトピックを編集したり、新しいトピックを作成することができます。 トピックを管理するには、次の情報を使用する必要があります。 - [管理者が Viva Topics ライセンス](./set-up-topic-experiences.md#assign-licenses)を割り当Microsoft 365します。 - [トピックを管理するためのアクセス許可を割り当てる](./topic-experiences-user-permissions.md))。 このタスクは、サポート技術情報の [ビバ トピックの設定] ページのナレッジ管理者Microsoft 365 管理センター。 ビジネスに関する全体的な知識が豊富なユーザーは、ナレッジ マネージャーの役割に優れた候補者になる可能性があります。 そのような人は、トピックが有効かどうか知る知識を持っているだけでなく、それらのトピックに関連する企業内の人々も知っている可能性があります。 ## <a name="knowledge-admins"></a>ナレッジ管理者 ナレッジ管理者は、特定の環境でビバ トピックを設定および構成するMicrosoft 365です。 セットアップが完了した後で、ビバ トピックの設定も管理します。 ナレッジ管理者の役割では、セットアップと管理がMicrosoft 365完了SharePoint、グローバル管理者または管理者である必要Microsoft 365 管理センター。 セットアップ中に、ナレッジ管理者は次の設定を行うビバ トピックを構成できます。 - トピックをクロールする SharePoint サイトを選択できます。 - トピックを表示できるライセンス所有者を選択できます (トピック閲覧者)。 - 特定できないトピックを選択できます。 - トピックを作成して編集できるライセンス所有者 (トピック共同作成者) を選択できます。 - トピックを管理できるライセンス所有者 (知識マネージャー) を選択できます。 - トピック センターに名前を付けます。 知識マネージャーは、組織内のすべての Viva トピック関係者と協力できるようにその構成方法を理解する必要があります。 たとえば、新しいプロジェクトに機密情報がある場合は、SharePoint サイトがトピックに対してクロールされないか、特定のトピック名を除外する必要がある場合に、ナレッジ マネージャーに通知する必要があります。
35.390805
170
0.826567
yue_Hant
0.767461
488f35d35fb12b0e3602d20999ea4cc426bc421c
727
md
Markdown
ddcctl-master/README.md
menahishayan/DDC-CI-Brightness-MacOS
9372f1468581eda32daa1f8d005e2c8d61fa8b6e
[ "MIT" ]
8
2019-02-27T06:23:42.000Z
2022-01-07T06:12:59.000Z
ddcctl/README.md
mavodev-de/ScreenVolume2
a984106aa76461b5b65d3e3ac05afe0235441735
[ "MIT" ]
null
null
null
ddcctl/README.md
mavodev-de/ScreenVolume2
a984106aa76461b5b65d3e3ac05afe0235441735
[ "MIT" ]
1
2019-05-20T07:12:29.000Z
2019-05-20T07:12:29.000Z
ddcctl: DDC monitor controls for the OSX command line ---- Adjust your external monitors' built-in controls from the OSX shell: * brightness * contrast And *possibly* (if your monitor firmware is well implemented): * input source * built-in speaker volume * on/off/standby * rgb colors * color presets * reset Install ---- ```bash make install ``` For an On-Screen Display using [OSDisplay.app](https://github.com/zulu-entertainment/OSDisplay): `make CCFLAGS=-DOSD clean ddcctl` Usage ---- Run `ddcctl -h` for some options. [ddcctl.sh](/ddcctl.sh) is a script I use to control two PC monitors plugged into my Mac Mini. You can point Alfred, ControlPlane, or Karabiner at it to quickly switch presets.
25.068966
98
0.724897
eng_Latn
0.923141
488ff6540763a69c8ebb1f3214e187007311c476
45
md
Markdown
README.md
git-vinit/mycapprojects
0abb3eabcb03033c845886d597623b1d2dbcde3e
[ "MIT" ]
null
null
null
README.md
git-vinit/mycapprojects
0abb3eabcb03033c845886d597623b1d2dbcde3e
[ "MIT" ]
null
null
null
README.md
git-vinit/mycapprojects
0abb3eabcb03033c845886d597623b1d2dbcde3e
[ "MIT" ]
null
null
null
# mycapprojects projects assign in mycaptain
15
28
0.844444
eng_Latn
0.998465
4890f8184ec25245c44fd3353cfab7488f4e3457
2,306
md
Markdown
README.md
DeveloperDowny/personal_intro_post_designer
328d12820cee99d35718d86d79e8bf51df853c82
[ "Apache-2.0" ]
null
null
null
README.md
DeveloperDowny/personal_intro_post_designer
328d12820cee99d35718d86d79e8bf51df853c82
[ "Apache-2.0" ]
null
null
null
README.md
DeveloperDowny/personal_intro_post_designer
328d12820cee99d35718d86d79e8bf51df853c82
[ "Apache-2.0" ]
null
null
null
<font size = "30"> ## Link to the website: https://insta-post-designer.herokuapp.com/ ## Sample output: ![image](https://user-images.githubusercontent.com/60831483/147559209-9fa484fa-79df-4502-b3ea-5dfd685e30fc.png) This kind of photo will be designed for you. This photo will be posted on this insta page: https://instagram.com/spit_freshers2021?utm_medium=copy_link <br> You will also be tagged in the post. So, kindly follow this insta page or else we would not be able to tag you. ## Screenshots of the website: <img src="https://dl3.pushbulletusercontent.com/QgASDuKZVDpBttijPgH7jo9h064vitgv/home.jpg" width=300> <img src="https://dl3.pushbulletusercontent.com/0rICkPZEVKst8bGWMRqCbl7EYuAlz2yp/Screenshot_20220113-083446_Edge.jpg" width=300><br><img src="https://dl3.pushbulletusercontent.com/CLU0CpIxfGRhRd5K87FayFg62b5OCY3O/Screenshot_20220113-083601_Edge.jpg" width=300> <img src="https://dl3.pushbulletusercontent.com/Kt7MVut4P4HhMtXtutlZ7n9bELv4Wgwg/Screenshot_20220113-083634_Edge.jpg" width=300><br><img src="https://dl3.pushbulletusercontent.com/JIVc3Slk6jnux0iMoNnv1hW1N3ozENZE/Screenshot_20220113-083703_Edge.jpg" width=300> ## Instructions for uploading images and introductions: 1. The image should have aspect ratio of 1:1 and the face of the student should be in the left half of the photo. ![unnamed](https://user-images.githubusercontent.com/60831483/146666924-11811627-deb4-4e05-a79f-1968f7513849.png) <br>Correct image<br><br><br> ![image](https://user-images.githubusercontent.com/60831483/146667026-5c1321f1-aa53-49d0-aec7-71185489ded3.png) <br>Incorrect image<br><br> Crop your image as shown in this video: https://youtu.be/V5Vlw1h_nVc <br><br>(If your phone doesn't support cropping, just upload the image and then crop when a cropper window pops up) 2. Introduction should not end with names and course details as shown in the image below:<br><br>![image](https://user-images.githubusercontent.com/60831483/147559413-32e32b4e-ef7f-4e7a-be0b-3450f1cdeabb.png) ## Attributions: - Registration Form (Bootstrap 5 Validation) adopted from https://codepen.io/samnorton/pen/oNYajYM <br> Author: Sam Norton - Cropper.js code adopted from https://rathorji.in/p/how_to_crop_the_image_before_uploading_it_with_cropper_js_php <br> Author: Rathorji </font>
59.128205
624
0.794016
eng_Latn
0.408137
489231bd56f823b8a3b4d9cd306b7578385390c2
5,178
md
Markdown
StandAloneExecutables.md
lispbuilder/wiki
97d8fd620b5d48d63f20c057626956479bdf81f8
[ "MIT" ]
2
2019-07-05T16:40:51.000Z
2021-01-19T09:27:17.000Z
StandAloneExecutables.md
priyadarshan/wiki
97d8fd620b5d48d63f20c057626956479bdf81f8
[ "MIT" ]
1
2019-07-05T16:55:37.000Z
2019-07-05T17:01:57.000Z
StandAloneExecutables.md
priyadarshan/wiki
97d8fd620b5d48d63f20c057626956479bdf81f8
[ "MIT" ]
1
2019-07-05T16:41:17.000Z
2019-07-05T16:41:17.000Z
# Introduction # All modern Lisp compilers can create executables to be distributed to users. This page covers how to build executables under various implementations, and how to package and distribute them for various OSes. # Prerequisite: Ensure LISPBUILDER-SDL is Working # Make sure everything works. Try an example; see the game run. If the game does not run properly, troubleshoot, otherwise continue. <pre> > (asdf:operate 'asdf:load-op :lispbuilder-sdl-examples)<br> ... lots of loading text ...<br> NIL<br> > (sdl-examples:mandelbrot)<br> NIL<br> </pre> # Windows Prerequisite: Cygwin, MSYS, or GnuWin32 # The guide assumes a Unix-like command-line environment. It is not strictly necessary to use such an environment for development, although it may make your life easier (in more ways that one). Feel free to use batch commands or Power Shell, but be aware that you may have to translate some simple shell commands. If you are interested in installing a Unix-like shell on Windows, [Cygwin](http://www.cygwin.com/), [MSYS](http://www.mingw.org/), and [GnuWin32](http://gnuwin32.sourceforge.net/) are all good options. # Defining a Startup Function # When creating an executable, a user-defined function can be provided to run when the Lisp image starts up. Users of LISPBUILDER-SDL typically use this function to load and initialize SDL and start their game. An example of such a function follows; <pre> (defun main ()<br> <br> ;; Load SDL<br> (cffi:define-foreign-library sdl<br> (t (:default "SDL"))) ; Windows only, see below for portable version<br> (cffi:use-foreign-library sdl)<br> <br> ;; Run the game<br> (sdl:with-init ()<br> (sdl-examples:mandelbrot))<br> <br> ;; Quit<br> #-allegro (quit) #+allegro (exit))<br> </pre> The function looks for a foreign library called SDL (e.g. "SDL.dll"), and loads the library. Then it starts one the examples in LISPBUILDER-SDL. When the example demo ends, it quits the program (to avoid entering the toplevel, as some Lisps do). Type the above function into a file called "main.lisp" for use below. # Dumping an Executable # Executables, or Lisp images, are executable copies of the runtime state of the compiler after loading user-defined code. Some Lisp compilers, especially commercial ones, try to slim down the resulting executable, but in open source compilers, an image will include the entire compiler. To dump an executable, first start the compiler and load LISPBUILDER-SDL and any user code. After defining a startup function (more details below), run the following commands to create the binary. <pre> > (asdf:operate 'asdf:load-op :lispbuilder-sdl)<br> ...<br> NIL<br> > (load (compile-file "main"))<br> ...<br> NIL<br> > (let ((filename #+windows "main.exe" #-windows "main"))<br> #+clisp (saveinitmem filename :init-function #'main :executable t :norc t)<br> #+sbcl (save-lisp-and-die filename :toplevel #'main :executable t)<br> #+clozure (save-application filename :toplevel-function #'main :prepend-kernel t))<br> <br> ;; TODO: Add Lispworks and Allegro.<br> </pre> Theoretically, this executable can just be run by itself, <pre> $ ./main # or main.exe on Windows<br> </pre> but, more often than not, the necessary runtime components (shared libraries, graphics, etc) are not in correct locations relative to the executable. Therefore, for most applications, it will be necessary to build a distribution to run on each target platform. # Bundling Shared Libraries # The easiest way to find runtime libraries is simply to put them in the same directory as the binary. For example; <pre> $ mkdir build<br> $ cp main build<br> </pre> Find SDL.dll (on Windows), libSDL.so (on Linux), or SDL.framework (on OS X), and copy it to the build folder. <pre> $ cp C:\\path\\to\\SDL.dll build # Windows<br> $ cp /usr/lib/libSDL.so build # Linux<br> $ cp /Library/Frameworks/SDL.framework build # OS X<br> </pre> And copy and runtime libraries required by the compiler itself. For example, on CLISP; <pre> $ cp C:\\path\\to\\clisp\\*.dll # CLISP<br> </pre> Finally, copy and graphics or other resources to the build folder. Now try running the game; <pre> $ ./main<br> </pre> If it works, copy the folder over to another machine (preferably one without Lisp or SDL installed), and try running it. If it works there too, then you can tar it, zip it, make a Mac app for it, or make a Windows installer for it, etc. # Writing a Better Startup Function # One problem with the provided startup function is that expects to find a shared library called "SDL". This doesn't work on Linux where the file is traditionally "libSDL.so", or on OS X, where most users would prefer to use "SDL.framework". The following snippets, taken from the source code of LISPBUILDER-SDL, can be used as portable substitutes for the above. <pre> (cffi:define-foreign-library sdl<br> (:darwin (:or (:framework "SDL")<br> (:default "libSDL")))<br> (:windows "SDL.dll")<br> (:unix (:or "libSDL-1.2.so.0.7.2"<br> "libSDL-1.2.so.0"<br> "libSDL-1.2.so"<br> "libSDL.so"<br> "libSDL")))<br> </pre> https://lispbuilder.github.io/documentation/lispbuilder-sdl/cffi/library.lisp
41.095238
361
0.733295
eng_Latn
0.987013
4892d05974b509c709cb007ce287989961b66897
1,327
md
Markdown
_posts/2017-12-31-Elianna-Moore-em1205-Sleeveless-Cathedral-Train-Ballgown.md
retroiccolors/retroiccolors.github.io
9fcec38e243fcc96a9862bab7dcbf8a153843117
[ "MIT" ]
null
null
null
_posts/2017-12-31-Elianna-Moore-em1205-Sleeveless-Cathedral-Train-Ballgown.md
retroiccolors/retroiccolors.github.io
9fcec38e243fcc96a9862bab7dcbf8a153843117
[ "MIT" ]
null
null
null
_posts/2017-12-31-Elianna-Moore-em1205-Sleeveless-Cathedral-Train-Ballgown.md
retroiccolors/retroiccolors.github.io
9fcec38e243fcc96a9862bab7dcbf8a153843117
[ "MIT" ]
null
null
null
--- layout: post date: 2017-12-31 title: "Elianna Moore em1205 Sleeveless Cathedral Train Ballgown" category: Elianna Moore tags: [Elianna Moore,Ballgown,Sweetheart,Cathedral Train,Sleeveless] --- ### Elianna Moore em1205 Just **$499.99** ### Sleeveless Cathedral Train Ballgown <table><tr><td>BRANDS</td><td>Elianna Moore</td></tr><tr><td>Silhouette</td><td>Ballgown</td></tr><tr><td>Neckline</td><td>Sweetheart</td></tr><tr><td>Hemline/Train</td><td>Cathedral Train</td></tr><tr><td>Sleeve</td><td>Sleeveless</td></tr></table> <a href="https://www.readybrides.com/en/elianna-moore/29174-elianna-moore-em1205.html"><img src="//img.readybrides.com/62917/elianna-moore-em1205.jpg" alt="Elianna Moore em1205" style="width:100%;" /></a> <!-- break --><a href="https://www.readybrides.com/en/elianna-moore/29174-elianna-moore-em1205.html"><img src="//img.readybrides.com/62918/elianna-moore-em1205.jpg" alt="Elianna Moore em1205" style="width:100%;" /></a> <a href="https://www.readybrides.com/en/elianna-moore/29174-elianna-moore-em1205.html"><img src="//img.readybrides.com/62915/elianna-moore-em1205.jpg" alt="Elianna Moore em1205" style="width:100%;" /></a> Buy it: [https://www.readybrides.com/en/elianna-moore/29174-elianna-moore-em1205.html](https://www.readybrides.com/en/elianna-moore/29174-elianna-moore-em1205.html)
78.058824
249
0.732479
yue_Hant
0.245685
48934b7164601f81f95b9f02860efb03077dfacb
56
md
Markdown
README.md
KimStebel/kamon-akka-streams
3896daf7e305bd77044dd54451ba9dbb1a7231e5
[ "Apache-2.0" ]
null
null
null
README.md
KimStebel/kamon-akka-streams
3896daf7e305bd77044dd54451ba9dbb1a7231e5
[ "Apache-2.0" ]
null
null
null
README.md
KimStebel/kamon-akka-streams
3896daf7e305bd77044dd54451ba9dbb1a7231e5
[ "Apache-2.0" ]
null
null
null
# kamon-akka-streams Kamon integration for Akka Streams
18.666667
34
0.821429
eng_Latn
0.324965
4893549394957d4e9458d42a88cb85cbb838837f
47
md
Markdown
README.md
wortmanncallejon/data_sunday
57dad952fd1bf3389d52689a6ae6690f7061d7fe
[ "CC0-1.0" ]
null
null
null
README.md
wortmanncallejon/data_sunday
57dad952fd1bf3389d52689a6ae6690f7061d7fe
[ "CC0-1.0" ]
null
null
null
README.md
wortmanncallejon/data_sunday
57dad952fd1bf3389d52689a6ae6690f7061d7fe
[ "CC0-1.0" ]
null
null
null
# data_sunday Visualising data for the public.
15.666667
32
0.808511
eng_Latn
0.984507
48936ea381866eea10ca0629055738389e64463e
347
md
Markdown
static/img/cocoicons/README.md
informaticacba/MAX-Object-Detector-Web-App
976cadb37b916ba2ea9fed98cbf2db1b2290b37c
[ "Apache-2.0" ]
100
2018-08-08T17:13:14.000Z
2022-03-26T19:07:10.000Z
static/img/cocoicons/README.md
informaticacba/MAX-Object-Detector-Web-App
976cadb37b916ba2ea9fed98cbf2db1b2290b37c
[ "Apache-2.0" ]
17
2018-08-07T19:51:28.000Z
2020-12-09T00:03:58.000Z
static/img/cocoicons/README.md
informaticacba/MAX-Object-Detector-Web-App
976cadb37b916ba2ea9fed98cbf2db1b2290b37c
[ "Apache-2.0" ]
68
2018-09-24T02:40:26.000Z
2022-03-10T11:58:12.000Z
The icons contained here were taken as is from the [COCO website GitHub](https://github.com/cocodataset/cocodataset.github.io) with permission of the COCO Consortium. # License "[COCO Website](http://cocodataset.org)" by [COCO Consortium](http://cocodataset.org/#people) is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0)
57.833333
168
0.769452
eng_Latn
0.7596
48938ddd816f2996f4c754dcc3d8c74b05a6e616
982
md
Markdown
README.md
hweickert/indentify
88e9dd5ef999d6ea15e6012c6cb56c5b1f473f33
[ "MIT" ]
null
null
null
README.md
hweickert/indentify
88e9dd5ef999d6ea15e6012c6cb56c5b1f473f33
[ "MIT" ]
null
null
null
README.md
hweickert/indentify
88e9dd5ef999d6ea15e6012c6cb56c5b1f473f33
[ "MIT" ]
null
null
null
# Introduction Text indentation functions for the editor VIM. You may want to use shortcuts like `Tab` or `Shift+Tab` to indent/unindent your current lines while in normal or visual mode. # Installation ## Linux/MacOS ``` git clone https://github.com/hweickert/indentify.git ~/.vim/pack/external/opt/indentify ``` In your `~/.vimrc` add the following ``` packadd indentify noremap <tab> :call IndentNormal()<cr> noremap <s-tab> :call UnindentNormal()<cr> vnoremap <tab> :<bs><bs><bs><bs><bs>call IndentVisual()<cr> vnoremap <s-tab> :<bs><bs><bs><bs><bs>call UnindentVisual()<cr> ``` ## Windows ``` git clone https://github.com/hweickert/indentify.git ~/vimfiles/pack/external/opt/indentify ``` In your `$USERPROFILE/_vimrc` add the following ``` packadd indentify noremap <tab> :call IndentNormal()<cr> noremap <s-tab> :call UnindentNormal()<cr> vnoremap <tab> :<bs><bs><bs><bs><bs>call IndentVisual()<cr> vnoremap <s-tab> :<bs><bs><bs><bs><bs>call UnindentVisual()<cr> ```
25.842105
91
0.711813
eng_Latn
0.443374
48938de5de34071c21a140d0e465da1d02b7af8b
1,461
md
Markdown
README.md
fabiendv/daskeyboard-applet--glip
1b518a63aee8adf230084dfb9193b01d58dd1829
[ "MIT" ]
null
null
null
README.md
fabiendv/daskeyboard-applet--glip
1b518a63aee8adf230084dfb9193b01d58dd1829
[ "MIT" ]
null
null
null
README.md
fabiendv/daskeyboard-applet--glip
1b518a63aee8adf230084dfb9193b01d58dd1829
[ "MIT" ]
null
null
null
# Q Applet: Glip Displays Glip account notifications on a Das Keyboard Q Series. Glip is RingCentral's collaboration software with free real-time web messaging application, group video chat, and task management. It is a team productivity solution that is more efficient than using email and multiple applications for specific jobs. For more information about Glip visit <https://www.glip.com>. ![Glip applet on a Das Keyboard Q](assets/image.png "Das Keyboard Glip applet") ## Changelog [CHANGELOG.MD](CHANGELOG.md) ## Installation Requires a Das Keyboard Q Series: www.daskeyboard.com and a Glip account. Installation, configuration and uninstallation of applets is done within the Q Desktop application (https://www.daskeyboard.com/q) ## Running tests - `yarn test` ## Contributions Pull requests welcome. ## Copyright / License Copyright 2014 - 2019 Das Keyboard / Metadot Corp. Licensed under the GNU General Public License Version 2.0 (or later); you may not use this work except in compliance with the License. You may obtain a copy of the License in the LICENSE file, or at: http://www.gnu.org/licenses/old-licenses/gpl-2.0.txt Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
31.085106
79
0.78371
eng_Latn
0.973552
4894218002252ae9c12000990c8bcac39cef6b0c
1,250
md
Markdown
README.md
jenkinsci/eggplant-runner-plugin
51cae51115a878615d214d9e5b892f35d4c57f2d
[ "MIT" ]
1
2022-03-25T09:03:53.000Z
2022-03-25T09:03:53.000Z
README.md
jenkinsci/eggplant-runner-plugin
51cae51115a878615d214d9e5b892f35d4c57f2d
[ "MIT" ]
null
null
null
README.md
jenkinsci/eggplant-runner-plugin
51cae51115a878615d214d9e5b892f35d4c57f2d
[ "MIT" ]
null
null
null
# Eggplant Runner Jenkins Plugin (ALPHA VERSION) # THIS IS STILL IN TEST, WAIT FOR OFFICIAL RELEASE ## Introduction Eggplant Runner plugin provides the capability for a user to run Eggplant DAI test configurations from a Jenkins pipeline. ## Getting started The plugin can be used by executing it as follows in your Jenkinsfile: ```yaml pipeline { agent any environment { DAI_CLIENT_SECRET = credentials('eggplant-runner-client-secret') } stages { stage('Eggplant Runner') { steps { eggplantRunner serverURL: '', testConfigId: '' } } } } ``` It can also be configured as a Build Step using the Jenkins GUI. ## LICENSE Licensed under MIT, see [LICENSE](LICENSE.md) ## MANTAINER For development, we need to installe the below. 1. You can download and install Java 11 from the Eclipse Temurin website (https://adoptium.net/). 2. Download Maven from Apache Maven website (https://maven.apache.org/). Make sure to download one of the binary archives (with bin in their name). 3. To verify that Maven is installed, run the following command: mvn -version 4. You an use launch.json to run 'Debug (Attach)' to launch an local Jenkins instance for development.
27.777778
147
0.7056
eng_Latn
0.923668
4894793919918c0b2b1653c84f066b84a7abd517
423
md
Markdown
.github/ISSUE_TEMPLATE/feature_request.md
Carol-Long/bugs-nyu.github.io
f609cacbf12072557c5c64c57902af88c25a7a8b
[ "MIT" ]
null
null
null
.github/ISSUE_TEMPLATE/feature_request.md
Carol-Long/bugs-nyu.github.io
f609cacbf12072557c5c64c57902af88c25a7a8b
[ "MIT" ]
null
null
null
.github/ISSUE_TEMPLATE/feature_request.md
Carol-Long/bugs-nyu.github.io
f609cacbf12072557c5c64c57902af88c25a7a8b
[ "MIT" ]
1
2020-05-18T14:53:14.000Z
2020-05-18T14:53:14.000Z
--- name: Feature Request about: Suggest an idea for this project labels: enhancement --- **Context** <!-- What would you like to see in the website? Is your feature related to a problem, and if so, what is it? --> **Proposal** <!-- A clear and concise description of what you want to happen. --> **Downsides and Alternatives** <!-- Describe any downsides you see in your proposal, and alternatives you've considered -->
30.214286
112
0.713948
eng_Latn
0.999594
489484d00d55176cb3fd974d66334780dfe6703c
9,299
md
Markdown
articles/sql-database/sql-database-explore-tutorials.md
0x333333/azure-docs
fbbf19fe8e5fe89d1134e659ee89e90a127b2987
[ "CC-BY-3.0" ]
1
2019-06-02T17:00:22.000Z
2019-06-02T17:00:22.000Z
articles/sql-database/sql-database-explore-tutorials.md
0x333333/azure-docs
fbbf19fe8e5fe89d1134e659ee89e90a127b2987
[ "CC-BY-3.0" ]
null
null
null
articles/sql-database/sql-database-explore-tutorials.md
0x333333/azure-docs
fbbf19fe8e5fe89d1134e659ee89e90a127b2987
[ "CC-BY-3.0" ]
null
null
null
--- title: Explore Azure SQL Database Tutorials | Microsoft Docs description: Learn about SQL Database features and capabilities keywords: '' services: sql-database documentationcenter: '' author: CarlRabeler manager: jhubbard editor: '' ms.assetid: 04c0fd7f-d260-4e43-a4f0-41cdcd5e3786 ms.service: sql-database ms.custom: overview ms.devlang: NA ms.topic: article ms.tgt_pltfrm: NA ms.workload: data-management ms.date: 02/08/2017 ms.author: carlrab --- # Explore Azure SQL Database tutorials The links in the following tables take you to an overview of each listed feature area and a simple step-by-start tutorial for each area. ## Create servers, databases and server-level firewall rules In the following tutorials, you create servers, databases, and server-level firewall rules - and learn to connect and query servers and databases. | Tutorial | Description | | --- | --- | | [Your first Azure SQL database](sql-database-get-started.md) | When you finish this quick start tutorial, you have a sample database and a blank database running in an Azure resource group and attached to a logical server. You also have two server-level firewall rules configured to enable the server-level principal to log in to the server from two specified IP addresses. Finally, you learn know how to query a database in the Azure portal and to connect and query using SQL Server Management Studio. | | [Provision and access an Azure SQL database using PowerShell](sql-database-get-started-powershell.md) | When you finish this tutorial, you have a sample database and a blank database running in an Azure resource group and attached to a logical server. You also have a server-level firewall rule configured to enable the server-level principal to log in to the server from a specified IP address (or IP address range). | | [Use C# to create a SQL database with the SQL Database Library for .NET](sql-database-get-started-csharp.md)| In this tutorial, you use the C# to create a SQL Database server, firewall rule, and SQL database. You also create an Active Directory (AD) application and the service principal needed to authenticate the C# app. | | | | ## Backups, long-term retention, and database recovery In the following tutorials, you learn about using [database backups](sql-database-automated-backups.md), [long-term backup retention](sql-database-long-term-retention.md), and [database recovery using backups](sql-database-recovery-using-backups.md). | Tutorial | Description | | --- | --- | | [Back up and restore using the Azure portal](sql-database-get-started-backup-recovery-portal.md) | In this tutorial, you learn how to use the Azure portal to view backups, recover to a point in time, configure long-term backup retention, and recover from a backup in the Azure Recovery Services vault | [Back up and restore using PowerShell](sql-database-get-started-backup-recovery-powershell.md) | In this tutorial, you learn how to use PowerShell to view backups, recover to a point in time, configure long-term backup retention, and recover from a backup in the Azure Recovery Services vault | | | ## Sharded databases In the following tutorials, you learn how to [Scale out databases with the shard map manager](sql-database-elastic-scale-shard-map-management.md). | Tutorial | Description | | --- | --- | | [Create a sharded application](sql-database-elastic-scale-get-started.md) |In this tutorial, you learn how to use the capabilities of elastic database tools using a simple sharded application. | | [Deploy a split-merge service](sql-database-elastic-scale-configure-deploy-split-and-merge.md) |In this tutorial, you learn how to move data between sharded databases. | | | | ## Elastic database jobs In the following tutorials, you learn about using [elastic database jobs](sql-database-elastic-jobs-overview.md). | Tutorial | Description | | --- | --- | | [Get started with Azure SQL Database elastic jobs](sql-database-elastic-jobs-getting-started.md) |In this tutorial, you learn how to create and manage jobs that manage a group of related databases. | | | | ## Elastic queries In the following tutorials, you learn about running [elastic queries](sql-database-elastic-query-overview.md). | Tutorial | Description | | --- | --- | | [Create elastic queries)](sql-database-elastic-query-getting-started-vertical.md) | In this tutorial, you learn how to run T-SQL queries that span multiple databases using a single connection point | | [Report across scaled-out cloud databases](sql-database-elastic-query-getting-started.md) |In this tutorial, you learn how to create reports from all databases in a horizontally partitioned (sharded) database | | [Query across cloud databases with different schemas](sql-database-elastic-query-vertical-partitioning.md) | In this tutorial, you learn how to run T-SQL queries that span multiple databases with different schemas | | [Reporting across scaled-out cloud databases](sql-database-elastic-query-horizontal-partitioning.md) |In this tutorial, you learn to create reports that span all databases in a sharded database. | | | | ## Database authentication and authorization In the following tutorials, you learn about [creating and managing logins and users](sql-database-manage-logins.md). | Tutorial | Description | | --- | --- | --- | | [SQL authentication and authorization](sql-database-control-access-sql-authentication-get-started.md) | In this tutorial, you learn about creating logins and users using SQL Server authentication, and adding them to roles as well as granting them permissions | | [Azure AD authentication and authorization](sql-database-control-access-aad-authentication-get-started.md) | In this tutorial, you learn about creating logins and users using Azure Active Directory authentication | | | | ## Secure and protect data In the following tutorials, you learn about [securing Azure SQL Database data](sql-database-security-overview.md). | Tutorial | Description | | --- | --- | | [Secure sensitive data using Always Encrypted ](sql-database-always-encrypted-azure-key-vault.md) |In this tutorial, you use the Always Encrypted wizard to secure sensitive data in an Azure SQL database. | | | | ## Develop In the following tutorials, you learn about application and database development. | Tutorial | Description | | --- | --- | | [Create a report using Excel](sql-database-connect-excel.md) |In this tutorial, you learn how to connect Excel to a SQL database in the cloud so you can import data and create tables and charts based on values in the database. | | [Build an app using SQL Server](https://www.microsoft.com/sql-server/developer-get-started/) |In this tutorial, you learn how to build an application using SQL Server | | [Temporal tables](sql-database-temporal-tables.md) | In this tutorial, you learn about temporal tables. | [Use entity framework with elastic tools](sql-database-elastic-scale-use-entity-framework-applications-visual-studio.md) |In this tutorial, you learn the changes in an Entity Framework application that are needed to integrate with the Elastic Database tools. | | [Adopt in-memory OLTP](sql-database-in-memory-oltp-migration.md) | In this tutorial, you learn how to use [in-memory OLTP](sql-database-in-memory.md) to improve the performance of transaction processing. | | [Code First to a New Database](https://msdn.microsoft.com/data/jj193542.aspx) | In this tutorial, you learn about code first development. | [Tailspin Surveys sample application](https://github.com/Azure-Samples/guidance-identity-management-for-multitenant-apps/blob/master/docs/running-the-app.md) | IN this tutorial, you work with the Tailspon Surveys sample application. | | [Contoso Clinic demo application](https://github.com/Microsoft/azure-sql-security-sample) | In this tutorial, you work with the Contoso Clinic demo application. | | | | ## Data sync In this tutorial, you learn about [Data Sync](http://download.microsoft.com/download/4/E/3/4E394315-A4CB-4C59-9696-B25215A19CEF/SQL_Data_Sync_Preview.pdf). | Tutorial | Description | | --- | --- | | [Getting Started with Azure SQL Data Sync (Preview)](sql-database-get-started-sql-data-sync.md) |In this tutorial, you learn the fundamentals of Azure SQL Data Sync using the Azure Classic Portal. | | | | ## Monitor and tune In the following tutorials, you learn about monitoring and tuning. | Tutorial | Description | | --- | --- | | [Elastic Pool telemetry using PowerShell](https://github.com/Microsoft/sql-server-samples/tree/master/samples/manage/azure-sql-db-elastic-pools)| In this tutorial, you learn about collecting elastic pool telemetry using PowerShell. | | [Elastic Pool custom dashboard for SaaS](https://github.com/Microsoft/sql-server-samples/tree/master/samples/manage/azure-sql-db-elastic-pools-custom-dashboard) | In this tutorial, you learn about creating a custom dashboard for monitoring elastic pools. | | [Capture extended event to event file target](sql-database-xevent-code-event-file.md)| In this tutorial, you learn to capture extended events to an event target file.| | [Capture extended event to ring buffer target](sql-database-xevent-code-ring-buffer.md)| In this tutorial, you learn to capture extended events to an code ring buffer.| | | |
74.991935
506
0.767502
eng_Latn
0.945508
4894e8bc82074509119517ccaa3c077c0f57bc66
99
md
Markdown
README.md
BenHerbst/OpenAsis
d0c48e702611715a905b24b52bf521e168d8a70c
[ "Apache-2.0" ]
null
null
null
README.md
BenHerbst/OpenAsis
d0c48e702611715a905b24b52bf521e168d8a70c
[ "Apache-2.0" ]
null
null
null
README.md
BenHerbst/OpenAsis
d0c48e702611715a905b24b52bf521e168d8a70c
[ "Apache-2.0" ]
null
null
null
# OpenAsis The one and the only desktop assistent. Manage all your desktop tasks easily. OpenAsis.
33
87
0.79798
eng_Latn
0.976367
4894f96b49aaa1a05915ea20f8a9b29c4400c7cf
942
md
Markdown
en/messages/ese/necessario-escandalos.md
veevo/drafts
cfe468193c9ed3daad63f7a9dfa7698ffe28a88d
[ "Unlicense" ]
null
null
null
en/messages/ese/necessario-escandalos.md
veevo/drafts
cfe468193c9ed3daad63f7a9dfa7698ffe28a88d
[ "Unlicense" ]
null
null
null
en/messages/ese/necessario-escandalos.md
veevo/drafts
cfe468193c9ed3daad63f7a9dfa7698ffe28a88d
[ "Unlicense" ]
null
null
null
# É necessário que haja escândalos “É necessário que sucedam escândalos no mundo, disse Jesus, porque os homens, sendo ainda imperfeitos, têm inclinação para o mal, e porque as más árvores dão maus frutos. Devemos pois entender, por essas palavras, que o mal é uma conseqüência da imperfeição humana, e não que os homens tenham obrigação de praticá-lo. É necessário que venha o escândalo, para que os homens, em expiação na Terra, se punam a si mesmos, pelo contato de seus próprios vícios, dos quais são as primeiras vítimas, e cujos inconvenientes acabam por compreender. Depois que tiverem sofrido o mal, procurarão o remédio no bem. A reação desses vícios serve, portanto, ao mesmo tempo de castigo para uns e de prova para outros. É assim que Deus faz sair o bem do mal, e que os próprios homens aproveitam as coisas más ou desagradáveis.” # Fonte O Evangelho Segundo o Espiritismo, cap. 8, itens 13 e 14. Allan Kardec
85.636364
492
0.776008
por_Latn
1.000006
4895d3d5c8673c933f15b44e61c194aaa20bb72b
1,684
md
Markdown
README.md
dhs-ncats/bod-18-01-terraform
33e3f0f92cfa2bdc79cecf3b7beaf1f657501fc8
[ "CC0-1.0" ]
null
null
null
README.md
dhs-ncats/bod-18-01-terraform
33e3f0f92cfa2bdc79cecf3b7beaf1f657501fc8
[ "CC0-1.0" ]
null
null
null
README.md
dhs-ncats/bod-18-01-terraform
33e3f0f92cfa2bdc79cecf3b7beaf1f657501fc8
[ "CC0-1.0" ]
null
null
null
# bod-18-01-terraform :cloud: :zap: # [![Build Status](https://travis-ci.com/cisagov/bod-18-01-terraform.svg?branch=develop)](https://travis-ci.com/cisagov/bod-18-01-terraform) `bod-18-01-terraform` contains the Terraform code to build the AWS infrastructure used for running BOD 18-01 Trustworthy Email scans. This code utilizes [`trustymail`](https://github.com/cisagov/trustymail), [`trustymail-lambda`](https://github.com/cisagov/trustymail-lambda), [`sslyze`](https://github.com/nabla-c0d3/sslyze), and [`sslyze-lambda`](https://github.com/cisagov/sslyze-lambda). ## Installation of the Terraform infrastructure ## ```bash terraform init terraform apply -var-file=<your_workspace>.tfvars ``` The `apply` command forces you to type `yes` at a prompt before actually deploying any infrastructure, so it is quite safe to use. If you want an extra layer of safety you can opt to use this command when you just want to validate your Terraform syntax: ```bash terraform validate -var-file=<your_workspace>.tfvars ``` If you want to see what Terraform *would* deploy if you ran the `apply` command, you can use this command: ```bash terraform plan -var-file=<your_workspace>.tfvars ``` ## License ## This project is in the worldwide [public domain](LICENSE.md). This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the [CC0 1.0 Universal public domain dedication](https://creativecommons.org/publicdomain/zero/1.0/). All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.
36.608696
138
0.767815
eng_Latn
0.968002
489742a887765a40ab4aa945db44495107daa27e
11,157
md
Markdown
packages/react/README.md
zendesk/app_scaffolds
3929a72f9250c3fb6f2a693a57576c4ace88032a
[ "Ruby", "Apache-2.0" ]
null
null
null
packages/react/README.md
zendesk/app_scaffolds
3929a72f9250c3fb6f2a693a57576c4ace88032a
[ "Ruby", "Apache-2.0" ]
null
null
null
packages/react/README.md
zendesk/app_scaffolds
3929a72f9250c3fb6f2a693a57576c4ace88032a
[ "Ruby", "Apache-2.0" ]
1
2022-03-10T12:47:14.000Z
2022-03-10T12:47:14.000Z
*Use of this software is subject to important terms and conditions as set forth in the License file* # App Scaffold ## Description This repo contains a scaffold to help developers build [apps for Zendesk products](https://developer.zendesk.com/apps/docs/apps-v2/getting_started). ## Getting Started ### Dependencies - [Node.js](https://nodejs.org/en/) >= 6.11.5 - [Ruby](https://www.ruby-lang.org/) >= 2.0.x ### Setup 1. Clone or fork this repo 2. Change (`cd`) into the `app_scaffold` directory 3. Run `yarn install` You can use either `yarn` or `npm` as package manager and run the scripts with the corresponding commands. To run your app locally in Zendesk, you need the latest [Zendesk Command Line Interface (ZCLI)](https://github.com/zendesk/zcli). ### Running locally To serve the app to your Zendesk instance with `?zcli_apps=true`, run ``` yarn run watch zcli apps:server dist ``` ## But why? The App Scaffold includes many features to help you maintain and scale your app. Some of the features provided by the App Scaffold are listed below. However, you don't need prior experience in any of these to be able to use the scaffold successfully. - [ES6 (ES2015)](https://babeljs.io/docs/learn-es2015/) ECMAScript 6, also known as ECMAScript 2015, is the latest version of the ECMAScript standard. The App Scaffold includes the [Babel compiler](https://babeljs.io/) to transpile your code to ES5. This allows you to use ES6 features, such as classes, arrow functions and template strings even in browsers that haven't fully implemented these features. - [Zendesk Garden](https://garden.zendesk.com/css-components/) UI components Collection of user interface components for Zendesk products. You’ll find components built to respond to a range of user input devices, tuned to handle right-to-left layouts, and finessed with just the right touch of subtle animation. - [Webpack 4](https://webpack.github.io/) module bundler Webpack is a module bundler, we use it to bundle up Javascript modules for use as web applications, also to perform tasks like transforming and transpiling, etc. - [PostCSS](https://postcss.org//) stylesheets PostCSS transforms stylesheets with JS plugins. These plugins can lint your CSS, support variables and mixins, transpile future CSS syntax, inline images, and more. - [StandardJS](https://standardjs.com/) JS linting StandardJS is a Javascript style guide, it helps catching style issues or code errors, and automatically formats code for you. - [Jest](https://jestjs.io/) Javascript testing framework Jest is bundled with JSDom and built on top of Jasmine. It's more than just a ReactJS testing framework. In the Zendesk Apps team, we use it for unit and integration testing of the Official Apps. It also includes a good test coverage toolset out of the box. ## Folder structure The folder and file structure of the App Scaffold is as follows: | Name | Description | |:----------------------------------------|:---------------------------------------------------------------------------------------------| | [`.github/`](#.github) | The folder to store PULL_REQUEST_TEMPLATE.md, ISSUE_TEMPLATE.md and CONTRIBUTING.md, etc | | [`dist/`](#dist) | The folder in which webpack packages the built version of your app | | [`spec/`](#spec) | The folder in which all of your test files live | | [`src/`](#src) | The folder in which all of your source JavaScript, CSS, templates and translation files live | | [`webpack/`](#src) | translations-loader and translations-plugin to support i18n in the application | | [`.babelrc`](#packagejson) | Configuration file for Babel.js | | [`.browserslistrc`](#packagejson) | Configuration file for browserslist | | [`jest.config.js`](#packagejson) | Configuration file for Jest | | [`package.json`](#packagejson) | Configuration file for Project metadata, dependencies and build scripts | | [`postcss.config.js`](#packagejson) | Configuration file for PostCSS | | [`webpack.config.js`](#webpackconfigjs) | Configuration file that webpack uses to build your app | #### dist The dist directory is created when you run the app building scripts. You will need to package this folder when submitting your app to the Zendesk Apps Marketplace, It is also the folder you will have to serve when using [ZCLI](https://developer.zendesk.com/documentation/apps/app-developer-guide/zcli/). It includes your app's manifest.json file, an assets folder with all your compiled JavaScript and CSS as well as HTML and images. #### spec The spec directory is where all your tests and test helpers live. Tests are not required to submit/upload your app to Zendesk and your test files are not included in your app's package, however it is good practice to write tests to document functionality and prevent bugs. #### src The src directory is where your raw source code lives. The App Scaffold includes different directories for JavaScript, stylesheets, templates, images and translations. Most of your additions will be in here (and spec, of course!). #### webpack This directory contains custom tooling to process translations at build time: - translations-loader.js is used by Webpack to convert .json translation files to JavaScript objects, for the app itself. - translations-plugin.js is used to extract compulsory translation strings from the en.json file that are used to display metadata about your app on the Zendesk Apps Marketplace. #### .babelrc [.babelrc](https://babeljs.io/docs/en/babelrc.html) is the configuration file for babel compiler. #### .browserslistrc .browserslistrc is a configuration file to specify browsers supported by your application, some develop/build tools read info from this file if it exists in your project root. At present, our scaffolding doesn't reply on this file, [default browserslist query](https://github.com/browserslist/browserslist#queries) is used by [Babel](https://babeljs.io/docs/en/babel-preset-env/) and [PostCSS](https://github.com/csstools/postcss-preset-env#browsers) #### jest.config.js [jest.config.js](https://jestjs.io/docs/en/configuration.html) is the configuration file for Jest #### package.json package.json is the configuration file for [Yarn](https://yarnpkg.com/), which is a package manager for JavaScript. This file includes information about your project and its dependencies. For more information on how to configure this file, see [Yarn package.json](https://yarnpkg.com/en/docs/package-json). #### postcss.config.js postcss.config.js is the configuration file for [PostCSS](https://postcss.org/) #### webpack.config.js webpack.config.js is a configuration file for [webpack](https://webpack.github.io/). Webpack is a JavaScript module bundler. For more information about webpack and how to configure it, see [What is webpack](http://webpack.github.io/docs/what-is-webpack.html). ## Helpers The App Scaffold provides some helper functions in `/src/javascripts/lib/helpers.js` to help building apps. ### I18n The I18n (internationalization) module in `/src/javascripts/lib/i18n.js` provides a `t` method to look up translations based on a key. For more information, see [Using the I18n module](https://github.com/zendesk/app_scaffold/blob/master/doc/i18n.md). ## Parameters and Settings If you need to test your app with a `parameters` section in `dist/manifest.json`, foreman might crash with a message like: > Would have prompted for a value interactively, but zcli is not listening to keyboard input. To resolve this problem, set default values for parameters or create a `settings.yml` file in the root directory of your app scaffold-based project, and populate it with your parameter names and test values. For example, using a parameters section like: ```json { "parameters": [ { "name": "myParameter" } ] } ``` create a `settings.yml` containing: ```yaml myParameter: 'some value!' ``` ## Testing The App Scaffold is currently setup for testing with [Jest](https://jestjs.io/). To run specs, run ``` yarn test ``` Specs live under the `spec` directory. ## Deploying To check that your app will pass the server-side validation check, run ``` zcli apps:validate --path=dist ``` If validation is successful, you can upload the app into your Zendesk account by running ``` zcli apps:create --path=dist ``` To update your app after it has been created in your account, run ``` zcli apps:update --path=dist ``` Or, to create a zip archive for manual upload, run ``` zcli apps:package --path=dist ``` taking note of the created filename. For more information on the Zendesk Apps Tools please see the [documentation](https://developer.zendesk.com/apps/docs/apps-v2/getting_started#zendesk-app-tools). ## External Dependencies External dependencies are defined in [webpack.config.js](https://github.com/zendesk/app_scaffold/blob/master/webpack.config.js#L22). This ensures these dependencies are included in your app's `index.html`. ## Zendesk Garden CSS Components Included [Zendesk Garden CSS Components](https://garden.zendesk.com/css-components/) are listed in [package.json](https://github.com/zendesk/app_scaffold/blob/master/package.json#L25) as dev dependencies. Instead of importing them into the app css bundle, we are building a [jsDelivr CDN](https://www.jsdelivr.com/) link from the dependencies list and inject the link into `index.html` as another `<style>` tag. Feel free to add/remove the Garden components as needed, webpack will generate and insert the updated link during the build process. ## Contribute * Put up a PR into the master branch. * CC and get a +1 from @zendesk/vegemite. ## Bugs Submit Issues via [GitHub](https://github.com/zendesk/app_scaffold/issues/new) or email support@zendesk.com. ## Useful Links Links to maintaining team, confluence pages, Datadog dashboard, Kibana logs, etc - https://developer.zendesk.com/ - https://github.com/zendesk/zendesk_apps_tools - https://webpack.github.io ## Copyright and license Copyright 2018 Zendesk Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
54.160194
544
0.711123
eng_Latn
0.9846
4897f1672e3ab2b77c297975058264e0c2940712
179
md
Markdown
README.md
tanishvshinde/XKernel
4da5da6ca74d798f7ab8e08c7133ba09704edc5c
[ "MIT" ]
null
null
null
README.md
tanishvshinde/XKernel
4da5da6ca74d798f7ab8e08c7133ba09704edc5c
[ "MIT" ]
null
null
null
README.md
tanishvshinde/XKernel
4da5da6ca74d798f7ab8e08c7133ba09704edc5c
[ "MIT" ]
null
null
null
# XKernel The X Kernel is an Kernel dedicated to create your own operating system, using your knowledge create an operating system completely from scratch and not from any kernel
59.666667
168
0.821229
eng_Latn
0.99998
489975e79a7bfbd5ec1b213a0fa616b04d25c765
9,031
md
Markdown
_posts/2020-04-01-Java-Virtual-Machine-01.md
Hg-Zion/Hg-Zion.github.io
e0f93874d8ac4f82146fdb5cb9600d7b1b0d3ca1
[ "MIT" ]
null
null
null
_posts/2020-04-01-Java-Virtual-Machine-01.md
Hg-Zion/Hg-Zion.github.io
e0f93874d8ac4f82146fdb5cb9600d7b1b0d3ca1
[ "MIT" ]
null
null
null
_posts/2020-04-01-Java-Virtual-Machine-01.md
Hg-Zion/Hg-Zion.github.io
e0f93874d8ac4f82146fdb5cb9600d7b1b0d3ca1
[ "MIT" ]
null
null
null
--- title: 浅入理解JVM(一) tags: Java JVM show_author_profile: false --- <div>{%- include extensions/netease-cloud-music.html id='551722261' -%}</div> # Part1. Java内存区域 ## 1.运行时数据区域 在程序运行过程中,Java 虚拟机会把它所管理的内存区域分为五个部分:程序计数器、虚拟机栈、本地方法栈、堆以及方法区。其中,在JDK 1.7及以前版本,方法区保留在堆中,但在 JDK 1.8 中方法区被转移到了直接内存中并更名为元空间,并且“永久代”这一概念也随之放弃了。 {:.success} (01)JDK 7 内存区域: ![1.7内存区域](/assets/images/jvm/image/Java1_7.png) (02)JDK 8 内存区域: ![1.8内存区域](/assets/images/jvm/image/Java1_8.png) ### 1.1 程序计数器 程序计数器又叫行号指示器,它用来指示下一条待执行的字节码指令。而由于在多线程环境下,线程会轮流切换,为了在线程切换后能够正确执行当前线程的下一条指令,需要给每一个线程都配一个独立的程序计数器,因此程序计数器所在内存区域是“**线程私有**”的。 注意: (01)如果线程正在执行一个“**本地方法**”,则程序计数器不再指示行号,而是将值设为空。 {:.error} (02)程序计数器是唯一不会出现 `OutOfMemoryError` 情况的内存区域。 {:.error} ### 1.2 虚拟机栈 Java 虚拟机栈很像我们通常泛泛而谈的“<b>栈</b>”,每当进入一个方法,虚拟机栈就会创建并压入一个“<b>栈帧</b>”,当方法执行完毕后,虚拟机栈又会把这个“栈帧”弹出来。 这个栈帧包含了“<b>局部变量表</b>”和其他一些东西,而这个局部变量表可以存放编译期可知的 **8 种基本数据类型**、**对象引用**(不是对象本身而是一个指向对象起始地址的指针)和 ***returnAddress* 类型**(指向一条字节码指令的地址)。 这些数据类型通过局部变量槽 *Slot*(我觉得英文好记些)来存储,除了 long 和 double 这俩类型要占两个 *Slot*,其余数据类型都是一个。每个方法要分配 *Slot* 数目在进入方法时是完全确定的且运行期间不会改变,但每个 *Slot* 具体大小却是由实际执行的虚拟机决定。 异常情况分析: (01)线程请求栈深度过大,将抛出 `StackOverflowError` 异常; {:.error} (02)虚拟机栈动态扩展时内存不足,则抛出 `OutOfMemoryError` 异常。 {:.error} ### 1.3 本地方法栈 简单来说,就是为执行“本地方法”服务的虚拟机栈,比如调用 **Open CV** 的方法就会涉及到它,其他的基本一模一样。 ### 1.4 堆 Java 堆是线程共享的内存区域,我们创建的对象实例就存放在这里,同时它也是<b>垃圾收集器</b>管理的内存区域(后面的重点)。不过虽然说是线程共享,但现在其实已经可以从堆里面划分出多个线程私有的缓冲区了。 Java 堆可以是物理不连续的(想象无数大大小小的方块),但在逻辑上我们把它视为连续的(想象一个大方块)。 异常情况分析: 当Java堆剩余内存无法完成当前实例分配,且堆也无法继续扩展时,则会抛出`OutOfMemoryError`异常。 {:.error} ### 1.5 方法区 在 JDK 1.6 以前,它是完全保存在 Java 堆中的,用来保存被虚拟机加载的类型信息、常量、静态变量、即时编译器编译后的代码缓存等。 由于当时为了方便管理这一部分内存,是用“**永久代**”去实现方法区的,所以容易遇到内存溢出问题(永久代有 `-XX:MaxPermSize` 的上限),于是从 JDK 6 开始逐步放弃“**永久代**”这一概念,改成在本地内存中实现方法区。在 JDK 7 中,已经把字符串常量池、静态变量等移出,而到了JDK 8,则全部转移并改成了JRockit架构里的“<b>元空间</b>”形式。 异常情况分析: 如果方法区剩余内存无法完成内存分配时,将抛出 `OutOfMemoryError` 异常。 {:.error} ### 1.6 运行时常量池 运行时常量池本身是方法区的一部分,在类加载后,它会存放编译期生成的各种字面量和符号引用。字面量就是平常说的常量,而符号引用则是指全限定类名信息、字段名和描述符、方法名和描述符。 异常情况分析: 如果常量池剩余内存无法完成内存分配时,将抛出 `OutOfMemoryError` 异常。 {:.error} ### 1.7 直接内存 不属于虚拟机运行时数据区,如果Java内存区域超过了直接内存大小,将抛出 `OutOfMemoryError` 异常。 ## 2.对象的创建、布局和访问 ### 2.1 创建一个实例对象 当我们使用 new 关键字创建一个对象时,Java 虚拟机会经历五个步骤,按顺序依次是类加载检查、内存分配、内存初始化、设置对象头和构造函数执行。 {:.success} (01)**类加载检查**:首先去运行时常量池中寻找是否有一个类的符号引用与之对应,然后检查该符号引用所代表的类是否已被加载、解析和初始化过。如果还没有,则要先执行相应的类加载过程; (02)**内存分配**:为新对象分配内存有两种方式,一个是“<b>指针碰撞</b>”,由于已分配内存和未分配内存由一个指针隔开,因此分配新内存只需要移动该指针即可,对应带空间压缩整理能力的垃圾回收器(比如:*Serial 收集器*、*ParNew 收集器*);另一个是“<b>空闲列表</b>”,从列表中找一块足够大的空间划分出去,并更新列表,对应 *CMS收集器* 这种基于清除算法的垃圾回收器。 在多线程环境下,内存分配通过两种方式来维护线程安全,一个是对“<b>内存分配</b>”这个动作进行同步处理,通过CAS以及失败重试的方法保证更新操作的原子性;另一个是为每个线程预分配一块内存,即<b>本地线程分配缓冲</b>( TLAB ),可以通过设置参数 `-XX:+/-UseTLAB` 来设定是否使用 TLAB。 (03)**内存初始化**:将分配的内存空间,除内存头外,都初始化为零值。 (04)**设置对象头**:对象所属类、如何才能找到类的元数据信息、对象的哈希码、对象的 GC 分代年龄等信息存放在对象头中。另外,根据虚拟机当前运行状态的不同,如是否启用偏向锁等,对象头会有不同的设置方式。 (05)**构造函数执行**:就是字面意思,执行对象的构造方法。这一步的特殊点在于,虚拟机其实已经创建好了一个新对象了,但是从程序员角度来看,该对象才“刚刚开始构造”。 ### 2.2 对象的内存布局 对象的内存布局可以分为三个部分,分别是**对象头**、**实例数据**和**对齐填充**。 {:.success} (01)**对象头**又分为“**Mark Word**”和**类型指针**。前者用于存储自身运行时数据,如哈希码、GC分代年龄、锁状态标志、线程持有的锁、偏向线程ID、偏向时间戳等。后者指向指向对象的类型元数据,用来判断该对象属于哪个类。 (02)**实例数据**部分是对象真正存储的有效信息。 (03)因为虚拟机要求对象的起始地址必须是8字节的整数倍,也即任何对象的大小都必须是8字节的整数倍,而对象头部分已经被设计为 8 字节的倍数(1 倍或者 2 倍),因此**对齐填充**部分就是用来补全对象实例数据部分的,相当于占位符。 ### 2.3 如何访问对象 Java 程序可以通过栈上的 reference 数据来访问操作具体对象,主流访问方式有两种: (01)**使用句柄访问** Java堆中会划分一块内存来作为句柄池,reference中存储的就是对象的句柄地址,而句柄中包含了对象实例数据和类型数据各自具体的地址信息。 ![句柄访问](/assets/images/jvm/image/visit-by-handler.png) (02)**通过直接指针访问** ![句柄访问](/assets/images/jvm/image/visit-by-index.png) 这两种方式各有千秋,使用句柄的最大好处是 reference 存储的是稳定句柄地址,在对象移动时只会改变到对象实例数据的指针。 ## 3.OutOfMemoryError异常 一个最简单的堆溢出异常示例如下: ```java public class HeapOOM { public static void main(String[] args) { List<TestObject> list = new ArrayList<>(); while(true){ list.add(new TestObject()); } } } class TestObject { } ``` 为了尽快得到结果,我们在虚拟机参数中将堆的最小值和最大值都设为20MB: <img src="/assets/images/jvm/image/jvm-option.png" alt="VM Args" style="zoom: 80%;" /> # Part2. 类加载机制 Java 虚拟机把描述类的数据从 Class 文件加载到内存,并对数据进行校验、转换解析和初始化,最终形成可以被虚拟机直接使用的 Java 类型,这个过程被称为虚拟机的类加载机制。 {:.info} ## 1.类加载过程 类加载分为五个阶段,分别是加载、验证、准备、解析和初始化。 ![类加载流程](/assets/images/jvm/image/load-class-process.png) 其中,解析阶段由于Java的运行时绑定特性,可能会在初始化阶段之后再开始。 ### 1.1 加载 类加载过程的第一步,主要完成下面 3 件事情: (01)通过全类名获取定义此类的二进制字节流 (02)将字节流所代表的静态存储结构转换为方法区的运行时数据结构 (03)在内存中生成一个代表该类的 Class 对象,作为方法区这些数据的访问入口 虚拟机规范多上面这 3 点并不具体,因此是非常灵活的。比如:"*通过全类名获取定义此类的二进制字节流*" 并没有指明具体从哪里获取、怎样获取。比如:比较常见的就是从 ZIP 包中读取(日后出现的 JAR、EAR、WAR 格式的基础)、其他文件生成(典型应用就是 JSP)等等。 一个非数组类的加载阶段(加载阶段获取类的二进制字节流的动作)是可控性最强的阶段,这一步我们还可以自定义类加载器去控制字节流的获取方式(重写一个类加载器的 `loadClass()` 方法)。至于数组类型,它不通过类加载器创建,而是由 Java 虚拟机直接创建。加载阶段和连接阶段的部分内容是交叉进行的,加载阶段尚未结束,连接阶段可能就已经开始了。 ### 1.2 验证 该阶段的目的主要是为了确保 Class 文件的字节流符合《Java 虚拟机规范》的全部要求,它包括文件格式验证、元数据验证、字节码验证和符号引用验证。 ### 1.3 准备 对类变量(即以 static 修饰的静态变量)分配内存并设置其初始值,不过需要注意两点,一个是类变量是随 Class 对象一起存放在 Java 堆中的,另一个是这里的赋值其实是设置一个零值,真正的“赋值”需要等到类初始化阶段才执行。 ```java public static int value = 123; ``` 比如针对上面这段代码,在“准备”阶段我们会将 value 设置为 0,而不是 123。 ### 1.4 解析 解析阶段会将常量池中的符号引用替换为直接引用。 ### 1.5 初始化 初始化阶段就是执行类构造器 `<clinit>()` 方法的过程。 我们需要注意: (01)静态语句块只能访问到定义在静态语句块之前的变量 (02)`<clinit>()` 方法不需要调用父类构造器,因为 Java 虚拟机会保证子类的 `<clinit>()` 方法已经执行完毕 (03)只有当接口中定义的变量被使用时,接口才会执行 `<clinit>()` 方法 ## 2.类加载器 ### 2.1 类和类加载器 #### 2.1.1 判断类相等 想要判断两个类是否“相等”,除了比较类本身还要求它们有同一个类加载器。这里的相等,包括 `equals()` 方法、`isInstance()` 方法的返回结果,也包含使用 `instanceof` 关键字做对象所属关系判定等各种情况。 ```java public static void main(String[] args) throws Exception { ClassLoader my_loader = new ClassLoader() { @Override public Class<?> loadClass(String name) throws ClassNotFoundException { try { String file_name = name.substring(name.lastIndexOf(".") + 1) + ".class"; InputStream is = getClass().getResourceAsStream(file_name); if (is == null) { return super.loadClass(name); } byte[] b = new byte[is.available()]; is.read(b); return defineClass(name, b, 0, b.length); } catch (IOException e) { throw new ClassNotFoundException(name); } } }; Object object = my_loader.loadClass("ClassLoaderTest").newInstance(); System.out.println(object.getClass()); // class ClassLoaderTest System.out.println(object instanceof ClassLoaderTest); // false } ``` #### 2.1.2 类加载器分类 ![parents delegation model](/assets/images/jvm/image/parents-delegation-model.png) (01)启动类加载器:负责加载存放在 `<JAVA_HOME>\lib` 目录,或者被 -Xbootclasspath 指定的路径中存放的类库,而且库名会被识别判断(如 `rt.jar`、`tools.jar`,名字不符合的不会被加载); (02)扩展类加载器:负责加载 `<JAVA_HOME>\lib\ext` 目录中,或者被 java.ext.dirs 系统变量所指定的路径中所有的类库; (03)应用程序类加载器:程序中默认的类加载器; (04)自定义类加载器:就是自己定义的类加载器:) ### 2.2 双亲委派模型 对于类加载器的区分,站在虚拟机角度来看只有两种:启动类加载器和其他类加载器。但是从开发人员角度来看,肯定不能划分地这么粗糙,通常我们将其划分后的架构称为为三层类加载器、双亲委派的类加载架构。 ![parents delegation model](/assets/images/jvm/image/parents-delegation-model.png) 同样是这个图,里面展示的各种类加载器之间的层次关系被称为类加载器的“双亲委派模型”,该模型要求除顶层的启动类加载器外,其余类加载器都要自己的父亲加载器。虽然看着和 Object 的类继承关系很像,但是它们这种关系其实是基于组合关系实现的。 #### 2.2.1 工作流程 如果一个类加载器收到了类加载请求,它首先把这个请求委派给父亲加载器完成,以此类推。只有当父亲加载器反馈自己无法完成这个加载请求时,子加载器才会尝试自己完成。 这样做的好处是让 Java 中的类同它的加载器一起具备了一种带有优先级的层次关系,从而保证了同名类之间不会发生冲突。如果没有双亲委派模型,用户自己定义了一个 `Object` 类,系统将会出现混乱。 #### 2.2.2 破坏双亲委派模型 有重写 `loadClass()` 方法,针对 SPI 服务提供者接口的线程上下文类加载器是一种父类加载器请求子类加载器完成类加载的行为,以及 OSGI 对类加载器的使用三次,我们主要了解第一个。 由于双亲委派模型晚于类加载器概念出现,导致无法避免 `loadClass()` 方法被子类覆盖的可能性,因此 Java 只能在 `java.lang.ClassLoader` 中添加一个新的 `protected` 方法 `findClass()`,并引导用户尽量重写这个方法去编写类的加载逻辑。 ```java protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException { synchronized (getClassLoadingLock(name)) { // 首先,检查请求的类是否已经被加载过了 Class<?> c = findLoadedClass(name); if (c == null) { long t0 = System.nanoTime(); try { if (parent != null) { c = parent.loadClass(name, false); } else { c = findBootstrapClassOrNull(name); } } catch (ClassNotFoundException e) { // 如果非空父类抛出 ClassNotFoundException // 说明父类加载器搞不定,还得靠自己 } if (c == null) { // If still not found, then invoke findClass in order // to find the class. long t1 = System.nanoTime(); c = findClass(name); // this is the defining class loader; record the stats sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0); sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1); sun.misc.PerfCounter.getFindClasses().increment(); } } if (resolve) { resolveClass(c); } return c; } } ``` 代码的英文也很好理解,如果父类加载失败,就会调用自己的 `findClass()` 方法。而正是由于双亲委派的具体逻辑是写在 `loadClass()` 方法里的,我们用子类去覆盖这个方法就可以“轻松”破坏掉模型了。
25.876791
200
0.70413
yue_Hant
0.629665
489b4da5261061c8a10e49a3e6187bae1fbe5f4e
2,155
md
Markdown
django-framework/sozdanie-i-nastroika-beget-vps-s-debian/perenos-secret_key-otdelno-ot-django.md
artif467/dev-notes
4b301e76813bce33073cc10391d40c803842bc19
[ "MIT" ]
null
null
null
django-framework/sozdanie-i-nastroika-beget-vps-s-debian/perenos-secret_key-otdelno-ot-django.md
artif467/dev-notes
4b301e76813bce33073cc10391d40c803842bc19
[ "MIT" ]
null
null
null
django-framework/sozdanie-i-nastroika-beget-vps-s-debian/perenos-secret_key-otdelno-ot-django.md
artif467/dev-notes
4b301e76813bce33073cc10391d40c803842bc19
[ "MIT" ]
null
null
null
# Перенос SECRET\_KEY Для того, чтобы не выдавать секретный ключ при удаленном хранении кода, его необходимо поместить в переменную системного фонового процесса, и делать вызов непосредственно от туда. Значение SECRET\_KEY ранее мы оставили в закоментированной строке в файле settings/base.py, поэтому открываем данный файл командой `sudo nano /home/djangouser/.virtualenvs/djangoenv/djproject/djproject/settings/base.py` и копируем значение **SECRET\_KEY\_VALUE**. {% hint style="danger" %} Если SECRET\_KEY содержит символ %, то его стоит экранировать повтором % \(получится так %%\). Одинарные кавычки также экранируют другие служебные символы bash. {% endhint %} ```bash # в settings/production.py пропишем вызов переменной окружения SECRET_KEY = os.environ['SECRET_KEY_VALUE'] # заходим под пользователем <DJANGO_USER> и устанавливаем для него # системную переменную окружения su - djangouser export SECRET_KEY_VALUE='*******************************************' # добавим его также в .bashrc sudo nano ~/.bashrc # в конец файла добавляем следующую строку export SECRET_KEY_VALUE='*******************************************' # обращаем внимание на одинарные кавычки # они экранируют служебные символы Bash # выходим за root exit # передаем переменную системному сервису sudo nano /etc/systemd/system/djproject.uwsgi.service # добавляем туда --env таким образом ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/apps-enabled \ --uid <DJANGO_USER> --env SECRET_KEY_VALUE='<value>' # пароль к базе данных мы храним в отдельном файле настроек production.py # причем удаленный доступ к БД закрыт (порт закрыт) # перезапускаем демона systemctl daemon-reload # перезапускаем веб-сервера systemctl restart djproject.uwsgi.service sudo /etc/init.d/nginx restart ``` Затем также открываем base.py командой `sudo nano /home/djangouser/.virtualenvs/djangoenv/djproject/djproject/settings/base.py` и удаляем ранее закоментированную строку c SECRET\_KEY. Теперь наш секретный ключ и пароль к базе данных будут всегда оставаться на сервере, где им самое место. Главное файлы конфигурации и настроек сервера исключать из контроля версий и т.п.
47.888889
402
0.765197
rus_Cyrl
0.924122
489be5df1cf1574e775f9e52e5c3f5e7b56bd12b
1,282
md
Markdown
outlook/How-to/Search-and-Filter/filtering-items-using-a-boolean-comparison.md
qiezhenxi/VBA-Docs
c49aebcccbd73eadf5d1bddc0a4dfb622e66db5d
[ "CC-BY-4.0", "MIT" ]
1
2018-10-15T16:15:38.000Z
2018-10-15T16:15:38.000Z
outlook/How-to/Search-and-Filter/filtering-items-using-a-boolean-comparison.md
qiezhenxi/VBA-Docs
c49aebcccbd73eadf5d1bddc0a4dfb622e66db5d
[ "CC-BY-4.0", "MIT" ]
null
null
null
outlook/How-to/Search-and-Filter/filtering-items-using-a-boolean-comparison.md
qiezhenxi/VBA-Docs
c49aebcccbd73eadf5d1bddc0a4dfb622e66db5d
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Filtering Items Using a Boolean Comparison ms.prod: outlook ms.assetid: bd786159-f4eb-e649-e838-56d520b824cf ms.date: 06/08/2017 --- # Filtering Items Using a Boolean Comparison Boolean values are specified differently in a filter in Microsoft Jet syntax than in a filter in DAV Searching and Locating (DASL) syntax. ## Jet Queries In Jet syntax, boolean operators such as True/False, Yes/No, On/Off, and so on, should be used as is and should not be converted to a string. For example, to create a filter to return unread items, you can use this filter: ```vb criteria = "[UnRead] = True" ``` **Note** If you convert the boolean value to a comparison string by enclosing it in quotation marks, then a Jet filter using any non-empty comparison string and filtering on a boolean property will return items that have the property True. A Jet filter comparing an empty string with a boolean property will return items that have the property False. ## DASL Queries In DASL syntax, you must convert True/False to an integer value, where 0 represents False and 1 represents True; likewise for Yes/No and On/Off. The DASL filter to return unread items is as follows: ```vb criteria = "@SQL=" & Chr(34) & "urn:schemas:httpmail:read" & Chr(34) _ & " = 0" ```
34.648649
353
0.74805
eng_Latn
0.996243
489c0457b75bcaa7d86eb3549a2329f33f9534f3
184
md
Markdown
src/TT/MainBundle/Resources/content/blurb6.md
camgunz/goindydemo
04f0976a758e99e9bc96e2c492157de3ab58d93b
[ "MIT" ]
null
null
null
src/TT/MainBundle/Resources/content/blurb6.md
camgunz/goindydemo
04f0976a758e99e9bc96e2c492157de3ab58d93b
[ "MIT" ]
null
null
null
src/TT/MainBundle/Resources/content/blurb6.md
camgunz/goindydemo
04f0976a758e99e9bc96e2c492157de3ab58d93b
[ "MIT" ]
null
null
null
# Spend A Night Together Whether you want to get snug at a bed & breakfast, live the high life in an upscale hotel, or you dream of sleeping under the stars, Indy's got you covered.
26.285714
75
0.755435
eng_Latn
0.999718
489c0b3c90e5bc235837a42080c392499a516adf
1,261
md
Markdown
README.md
CrackTheCode016/nem2-sdk-browserify
ce547e65fb705a95979d664af10d02abbf97e71c
[ "Apache-2.0" ]
null
null
null
README.md
CrackTheCode016/nem2-sdk-browserify
ce547e65fb705a95979d664af10d02abbf97e71c
[ "Apache-2.0" ]
null
null
null
README.md
CrackTheCode016/nem2-sdk-browserify
ce547e65fb705a95979d664af10d02abbf97e71c
[ "Apache-2.0" ]
null
null
null
# symbol-sdk-browserify This is nemtech/symbol-sdk-typescript-javascript - but browserified! The latest sdk will always be in the root of the repo. Previous versions are held in `sdk/`. ## Usage ``` <script src="symbol-sdk-<whatever version in the root of repo>.js"></script> ``` ```js const nem = require("/node_modules/nem2-sdk"); const rxjs = require("/node_modules/rxjs/operators"); const sha3_256 = require("/node_modules/sha3_256").sha3_256; ``` then, you can use like this. ```js //nem sample const alice = nem.Account.generateNewAccount(nem.NetworkType.MIJIN_TEST); //rxjs sample listener .confirmed(alice.address) .pipe( rxjs.filter((tx) => tx.transactionInfo !== undefined && tx.transactionInfo.hash === lockSignedTx.hash), rxjs.mergeMap(ignored => transactionHttp.announceAggregateBonded(signedTx)) ) .subscribe(_ => console.log(_), err => console.error(err) ); //sha3_256 sample const hash = sha3_256.create(); ``` ## Generating Bundle File Yourself If you want to create bundle file yourself, ``` cd generator/ npm install npm run generate-bundle ``` If you wish to compile a different version, be sure to replace the version of `symbol-sdk`, then run the above commands once more: ``` "symbol-sdk": "Your version here" ```
24.25
163
0.725615
eng_Latn
0.856109
489d3e8b374a3e8a9fbdccc8c9afdc24eeea46d1
376
md
Markdown
README.md
pabloguerino/bitcoin-watch-vuejs
ee2d2be182d2a3e08eeae0ea0b212c3b470a4037
[ "MIT" ]
1
2017-08-27T12:43:07.000Z
2017-08-27T12:43:07.000Z
README.md
pabloguerino/bitcoin-watch-vuejs
ee2d2be182d2a3e08eeae0ea0b212c3b470a4037
[ "MIT" ]
null
null
null
README.md
pabloguerino/bitcoin-watch-vuejs
ee2d2be182d2a3e08eeae0ea0b212c3b470a4037
[ "MIT" ]
null
null
null
# bitcoin-watch-vuejs > Vue.js test project based on coindesk API Edit ## Build Setup ``` bash # install dependencies npm install # serve with hot reload at localhost:8080 npm run dev # build for production with minification npm run build ``` ## License MIT / BSD ## Author Information Test project created in 2017 by [Pablo Guerino](https://github.com/pabloguerino)
15.04
80
0.742021
eng_Latn
0.864882
489daa907e6262256039bd39ca7b93286da2ccbd
793
md
Markdown
_posts/2020-01-28-pf-conclui-que-sao-falsas-mensagens-em-que-santos-cruz-criticava-bolsonaro.md
tatudoquei/tatudoquei.github.io
a3a3c362424fda626d7d0ce2d9f4bead6580631c
[ "MIT" ]
null
null
null
_posts/2020-01-28-pf-conclui-que-sao-falsas-mensagens-em-que-santos-cruz-criticava-bolsonaro.md
tatudoquei/tatudoquei.github.io
a3a3c362424fda626d7d0ce2d9f4bead6580631c
[ "MIT" ]
null
null
null
_posts/2020-01-28-pf-conclui-que-sao-falsas-mensagens-em-que-santos-cruz-criticava-bolsonaro.md
tatudoquei/tatudoquei.github.io
a3a3c362424fda626d7d0ce2d9f4bead6580631c
[ "MIT" ]
1
2022-01-13T07:57:24.000Z
2022-01-13T07:57:24.000Z
--- layout: post item_id: 2867709931 title: >- PF conclui que são falsas mensagens em que Santos Cruz criticava Bolsonaro author: Tatu D'Oquei date: 2020-01-28 09:49:52 pub_date: 2020-01-28 09:49:52 time_added: 2020-01-30 06:49:19 category: tags: [] --- As mensagens mostravam um diálogo crítico aos filhos de Bolsonaro e a um ‘Fábio’. À época, o general chegou a afirmar que as mensagens eram falsas e que teriam ocorrido em um horário em que ele estava em voo, e não poderia, portanto, usar o aplicativo WhatsApp. **Link:** [https://politica.estadao.com.br/blogs/fausto-macedo/pf-conclui-que-sao-falsas-mensagens-em-que-santos-cruz-criticava-bolsonaro/](https://politica.estadao.com.br/blogs/fausto-macedo/pf-conclui-que-sao-falsas-mensagens-em-que-santos-cruz-criticava-bolsonaro/)
44.055556
268
0.766709
por_Latn
0.994276
489db00a5922897c9e82471752936fe2ed773903
297
md
Markdown
md_syntax/mf_crossdev.md
HankBO/stata-language-server
65555448d239f16d96c62f0f0798194566d0210d
[ "MIT" ]
14
2021-08-13T20:04:54.000Z
2022-03-07T05:57:50.000Z
md_syntax/mf_crossdev.md
HankBO/stata-language-server
65555448d239f16d96c62f0f0798194566d0210d
[ "MIT" ]
2
2022-01-13T08:20:17.000Z
2022-02-04T05:17:49.000Z
md_syntax/mf_crossdev.md
HankBO/stata-language-server
65555448d239f16d96c62f0f0798194566d0210d
[ "MIT" ]
2
2022-01-12T20:59:35.000Z
2022-03-07T05:35:43.000Z
## Syntax `real matrix crossdev(X, x, Z, z)` `real matrix crossdev(X, x, w, Z, z)` `real matrix crossdev(X, xc, x, Z, zc, z)` `real matrix crossdev(X, xc, x, w, Z, zc,` `z)` where `Xreal matrix Xxcreal scalar xcxreal rowvector xwreal vector wZreal matrix Zzcreal scalar zczreal rowvector z`
19.8
110
0.683502
eng_Latn
0.483537
489e60b90ddee8368ed4c6009a225f4988e7a192
38
md
Markdown
README.md
NavarezCorp/mictest
bd2702299136510c69aef016c89b978ad9bfe522
[ "MIT" ]
null
null
null
README.md
NavarezCorp/mictest
bd2702299136510c69aef016c89b978ad9bfe522
[ "MIT" ]
null
null
null
README.md
NavarezCorp/mictest
bd2702299136510c69aef016c89b978ad9bfe522
[ "MIT" ]
null
null
null
# mictest michael javier project test
12.666667
27
0.815789
cat_Latn
0.601181