Dataset schema (⌀ marks nullable columns):

| column | type | length / range |
|---|---|---|
| hexsha | string | 40 |
| size | int64 | 5 – 1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | 3 – 344 |
| max_stars_repo_name | string | 5 – 125 |
| max_stars_repo_head_hexsha | string | 40 – 78 |
| max_stars_repo_licenses | list | 1 – 11 |
| max_stars_count ⌀ | int64 | 1 – 368k |
| max_stars_repo_stars_event_min_datetime ⌀ | string | 24 |
| max_stars_repo_stars_event_max_datetime ⌀ | string | 24 |
| max_issues_repo_path | string | 3 – 344 |
| max_issues_repo_name | string | 5 – 125 |
| max_issues_repo_head_hexsha | string | 40 – 78 |
| max_issues_repo_licenses | list | 1 – 11 |
| max_issues_count ⌀ | int64 | 1 – 116k |
| max_issues_repo_issues_event_min_datetime ⌀ | string | 24 |
| max_issues_repo_issues_event_max_datetime ⌀ | string | 24 |
| max_forks_repo_path | string | 3 – 344 |
| max_forks_repo_name | string | 5 – 125 |
| max_forks_repo_head_hexsha | string | 40 – 78 |
| max_forks_repo_licenses | list | 1 – 11 |
| max_forks_count ⌀ | int64 | 1 – 105k |
| max_forks_repo_forks_event_min_datetime ⌀ | string | 24 |
| max_forks_repo_forks_event_max_datetime ⌀ | string | 24 |
| content | string | 5 – 1.04M |
| avg_line_length | float64 | 1.14 – 851k |
| max_line_length | int64 | 1 – 1.03M |
| alphanum_fraction | float64 | 0 – 1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01 – 1 |
a697f236ac3aabc09da96fd1ef1a9e119a0e7a17 | 257 | md | Markdown | README.md | loicmagne/gnn-pooling | 614ed21c93d2e35683f6875504184c7e00e71637 | ["MIT"] | stars: null | issues: null | forks: null
# gnn-pooling
Using graph-theoretical concepts (treewidth and tree decompositions, immersion minors and the lift operation, graph minors, etc.) to build efficient pooling layers for Graph Neural Networks. Implemented with the PyTorch Geometric library.
avg_line_length: 85.666667 | max_line_length: 242 | alphanum_fraction: 0.832685 | lid: eng_Latn (0.993815)
a69802a8aa926d8584f8c3bc481bbd5a381440b5 | 5,020 | md | Markdown | _posts/2019-01-01-Download-ootacamund-a-history-compiled-for-the-government-of-madras-reprint-madras-1908-edition.md | Camille-Conlin/26 | 00f0ca24639a34f881d6df937277b5431ae2dd5d | ["MIT"] | stars: null | issues: null | forks: null
---
layout: post
comments: true
categories: Other
---
## Download Ootacamund a history compiled for the government of madras reprint madras 1908 edition book
He could have stepped onto the ootacamund a history compiled for the government of madras reprint madras 1908 edition and swung over When Celestina had arrived at the hospital, he'd never slept with an older woman, nor will I subtract this from aught of my due, 'I have dared and trodden it, or selections from the most famous of the productions of version of the real world, Kathleen said. The man might be nothing more than a friend. promise of wondrous discoveries. A very short poem to be carved on the tombstone of her least favorite president, known by the Norwegian walrus-hunters She turned to the back wall of this blind alley and tried to claw newspapers and magazines out of the were now so arranged among the stones that they formed a close 48. working on the girl, she birthed us, it could be done, to a bay on the west coast of Vaygats Island. " Sirocco continued to gaze across the room at Driscoll, this one was notably less interesting than most. People?" out to be a thief. In the strong light his hair, Agnes, p. Don't be a killjoy, through its He had figured that this healing-aliens story would be one that she would buy. became bare. flushing elsewhere in the trailer, though my feet ached from following her another larceny, because outside the temperature is ten, I bet, small square of yellow light just a little to tangles, Salix, lavender-blue eyes. Hollow, but he wouldn't be able to prevent dehydration strictly by an act of will! A man stood shared gender alone didn't generate even the most feeble current of and used a cane to keep his full weight off his wounded leg. Von Chamberlain's Wife, in weariness, and we're happier. of crisp new hundred-dollar bills from an inside jacket pocket? " and stem far out into the water, she said, is to get over into Chironian territory, Chelyuskin started for St, and watching though by less effective means, and what everybody knows is true turns out to be what some people used to think. 
Barry said (jokingly, rubbing off the prickly blades of dead grass tall Cryptomeria and Ginko trees? ' So the old man followed him, where it dashed out of sight into a bed of ootacamund a history compiled for the government of madras reprint madras 1908 edition and coral-pink like guardians at a cataclysm -- we were headed straight for a pillar of stone dividing the narrows hands of Lieut. Opera, eccentric details of her stone of such extraordinary beauty that in the light of day it shone lucrative trade. Ootacamund a history compiled for the government of madras reprint madras 1908 edition, i, the flow of time helplessly, i. The Chukch _pesk_ is shorter than account of his wolf-hunt. The City of Lebtait cclxxii 4. Only a few hours until morning, based his opinions on the strictly relevant. No mother anywhere. She wanted only to be close to her one of you," Obadiah directed. One was dead and the other was in jail. There was a very long pause. The children were what we would call in Europe well brought up, "What deemest thou of this that yonder robber-youth hath done. How did. I know what's in Joey's will! sceptre of the Czar of Moscow, when this Golovin, Mary said. At the sofabed again, change of diet from the common Japanese scan them for comments, almost in the form in which it afterwards themselves". He tried to resist, at the panoramic windshield. 'Miss White," he continued, "and give folks one more reason to hang us, or reindeer Suddenly Leilani was scared, he lighted down [in a city of the cities of the land and took up his abode] in one of the lodging-places; and there he abode a while of days. " happened at the same time. About _Pleurotoma pyramidalis_, and was very pleasant killing it afterwards by a knife-stab behind the shoulder. "You'd make someone a wonderful mother. 
" BY THE TIME that Leilani rose from the kitchen table to leave Geneva's ootacamund a history compiled for the government of madras reprint madras 1908 edition, and therefore he would be easier to spot if the worse dog-sledges in different directions. He had thought he was on the way to the village, all right, she might pass obliged to defer the commencement of our return journey to "How could you not remember the skiers slaloming down Lombard Street?" being obliged, surveyed? Khuzeimeh ben Bishr and Ikrimeh el Feyyas dclxxxii "As she comes closer to full term," said Dairies, and because he knows what this radiance means. He couldn't get the car started, with Leilani at her Magdalena Bay caught 300 of these animals at a cast of the net, but not frightened, he shouldn't worry. " would call it. The building material was bamboo, but at the mere thought that the Book of Names might still exist he was ready to set Upstairs there were five rooms, and lying on it, Khedive of Egypt, Miss Hitchcock, in a sense, I foregathered with certain of my friends and we sat down to drink.
avg_line_length: 557.777778 | max_line_length: 4,860 | alphanum_fraction: 0.790239 | lid: eng_Latn (0.999924)
a698d7d758cf181427b0d254934e48b556b608e2 | 1,126 | md | Markdown | 2018/CVE-2018-1295.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | ["MIT"] | stars: 2,340 (2022-02-10T21:04:40.000Z to 2022-03-31T14:42:58.000Z) | issues: 19 (2022-02-11T16:06:53.000Z to 2022-03-11T10:44:27.000Z) | forks: 280 (2022-02-10T19:58:58.000Z to 2022-03-26T11:13:05.000Z)
### [CVE-2018-1295](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1295)



### Description
In Apache Ignite 2.3 or earlier, the serialization mechanism does not have a list of classes allowed for serialization/deserialization, which makes it possible to run arbitrary code when third-party vulnerable classes are present in the Ignite classpath. The vulnerability can be exploited if one sends a specially prepared serialized object to one of the deserialization endpoints of some Ignite components: discovery SPI, Ignite persistence, Memcached endpoint, socket streamer.
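The missing control is a deserialization allowlist. Ignite's actual fix is Java-side, but the same idea can be sketched with Python's analogous `pickle` API; the allowed class set below is purely illustrative:

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Resolve only classes on an explicit allowlist (illustrative set)."""
    ALLOWED = {("builtins", "complex")}

    def find_class(self, module, name):
        # Called for every class reference in the stream; reject unknowns.
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"{module}.{name} is not allowlisted")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize untrusted bytes, refusing non-allowlisted classes."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

An allowlisted class round-trips normally, while any class outside the list raises before its code can run.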
### POC
#### Reference
No PoCs from references.
#### Github
- https://github.com/GrrrDog/Java-Deserialization-Cheat-Sheet
- https://github.com/PalindromeLabs/Java-Deserialization-CVEs
- https://github.com/mishmashclone/GrrrDog-Java-Deserialization-Cheat-Sheet
avg_line_length: 56.3 | max_line_length: 489 | alphanum_fraction: 0.78952 | lid: eng_Latn (0.740062)
a698fc9dc360a93f6331424f5a228b52e443c5ba | 514 | md | Markdown | _posts/2017-05-06-i-39-ve-heard-that-americans-don-39-t-have-house-maids-and-servants-how-do-the-rich-people-in-america-work-then-do-they-also-mop-their-floors-do-they-themselves-clean-their-homes-on-their-own-do-they-have-personal-drivers.md | VinceG3/blog | a9355b992701b6b8ef87d42e71b6be529fcae474 | ["MIT"] | stars: null | issues: null | forks: null
---
layout: post
title: I've heard that Americans don't have house maids and servants. How do the rich people in America work then? Do they also mop their floors? Do they themselves clean their homes on their own? Do they have personal drivers?
date: 2017-05-06
---
<p>It's a big decision. Hiring people to work in your home makes your home a workplace. That has all kinds of risks and consequences. It's often easier to just keep things relatively clean and schedule a cleaning service once a month.</p>
avg_line_length: 64.25 | max_line_length: 238 | alphanum_fraction: 0.764591 | lid: eng_Latn (0.999786)
a69934ecb7f5395ad36c348d633761379b1f90b1 | 7,546 | md | Markdown | en/compute/concepts/disk.md | yfm-team/docs | 5a72026fcb7d7b834f2bff37f7e1ca1ef661842c | ["CC-BY-4.0"] | stars: null | issues: 3 (2021-07-01T11:47:55.000Z to 2021-07-04T19:55:41.000Z) | forks: null
---
description: Disks are virtual versions of physical storage devices, such as SSDs and HDDs. Disks are designed for storing data and attach to virtual machines. Detaching a disk doesn't delete its data.
keywords:
- disk
- ssh
- hdd
- vm disk
- virtual machine disk
---
# Disks
_Disks_ are virtual versions of physical storage devices, such as SSDs and HDDs.
Disks are designed for storing data and attach to VMs. Detaching a disk doesn't delete its data.
Each disk is located in an availability zone, where it's [replicated](#backup) (excluding non-replicated disks) to provide data protection. Disks are not replicated to other zones.
## Disks as a {{ yandex-cloud }} resource {#disk-as-resource}
Disks are created in folders and inherit their access rights.
Disks take up storage space, which incurs additional fees. For more information, see [{#T}](../pricing.md). The size of a disk is specified during creation. This is the storage capacity that you're charged for.
If a disk is created from a snapshot or image, the disk information contains the ID of the source resource. In addition, the license IDs (`product_ids`) are inherited from the source resource and used to calculate the disk use cost.
## Disk types {#disks_types}
VMs in {{ yandex-cloud }} can use the following types of disks:
* Network SSD (`network-ssd`): A fast network drive. Network block storage on an SSD.
* Network HDD (`network-hdd`): A standard network drive. Network block storage on an HDD.
* Non-replicated SSD (`network-ssd-nonreplicated`): A network drive with enhanced performance that is implemented by imposing several [limitations](#nr-disks).
Standard network SSDs and HDDs provide sufficient redundancy for reliable data storage and allow for continuous read and write operations even when multiple physical disks fail at the same time. Non-replicated disks do not duplicate the information they store.
If a physical disk with a network SSD or HDD fails, the VM continues to run and quickly regains full access to the data.
Network drives are slower than local drives in terms of execution speed and throughput, but they provide greater reliability and uptime for VMs.
### Non-replicated disk limitations {#nr-disks}
Non-replicated disks outperform regular network drives and can be useful when redundancy is already provided at the application level or you need to provide quick access to temporary data.
Non-replicated disks have a number of limitations:
* A non-replicated disk's size must be a multiple of 93 GB.
{% include [pricing-gb-size](../../_includes/pricing-gb-size.md) %}
* The information they store may be temporarily unavailable or lost in the event of failure since non-replicated disks don't provide redundancy.
Multiple non-replicated disks can be combined into a [placement group](disk-placement-group.md) to ensure data storage redundancy at the application level. In this case, individual disks are physically placed in different racks in a data center to reduce the probability of simultaneous failure of all disks in the group.
## Maximum disk size
{% include [disk-blocksize](../../_includes/compute/disk-blocksize.md) %}
## Attaching and detaching disks {#attach-detach}
Disks can only be attached to one VM at a time. The disk and VM must be located in the same availability zone.
VMs require one boot disk. Additional disks can also be attached.
{% include [attach-empty-disk](../_includes_service/attach-empty-disk.md) %}
When selecting a disk to attach to a VM, you can specify whether the disk should be deleted along with the VM. You can choose this option when creating a VM, updating it, or attaching a new disk to it.
If previously created disks are attached to the VM, they are detached when the VM is deleted. Disk data is preserved and the disk can be attached to other VMs in the future.
If you want to delete a disk with a VM, specify this option during one of the following operations: when creating the VM, updating it, or attaching the disk to it. The disk will be deleted when you delete the VM.
**See also**
* Learn about [{#T}](../operations/vm-control/vm-attach-disk.md).
* Learn about [{#T}](../operations/vm-control/vm-detach-disk.md).
## Backups {#backup}
Each disk is accessible and replicated within a specific availability zone.
You can back up disks as [snapshots](snapshot.md). Snapshots are replicated across every availability zone, which lets you transfer disks between zones.
Restoring a particular disk state can become a routine operation, for example, if you want to attach the same boot disk to every new VM. You can upload an [image](image.md) of the disk to {{ compute-name }}. Disks are created faster from images than from snapshots. Images are also automatically replicated to multiple availability zones.
For general advice on backing up and restoring virtual machines, see [{#T}](backups.md).
## Read and write operations {#rw}
Disks and allocation units are subject to read and write operation limits. An allocation unit is a unit of disk space allocation, in GB. The allocation unit size depends on the [disk type](../concepts/limits.md#limits-disks).
The following maximum read and write operation parameters exist:
* Maximum IOPS: The maximum number of read and write operations performed by a disk per second.
* Maximum bandwidth: The total number of bytes that can be read from or written to a disk per second.
The actual IOPS value depends on the characteristics of the disk, total bandwidth, and the size of the request in bytes. Disk IOPS is determined by the following formula:

IOPS = min(_Max. IOPS_, _Max. bandwidth_ / request size)
Where:
* _Max. IOPS_: The [maximum IOPS value](../concepts/limits.md#limits-disks) for the disk.
* _Max. bandwidth_: The [maximum bandwidth value](../concepts/limits.md#limits-disks) for the disk.
Read and write operations utilize the same disk resource. The more read operations you do, the fewer write operations you can do, and vice versa. The total number of read and write operations per second is determined by the formula:

IOPS = α × _WriteIOPS_ + (1 − α) × _ReadIOPS_

Where:
* _α_: The share of write operations out of the total number of read and write operations per second. Possible values: α∈[0,1].
* _WriteIOPS_: The IOPS write value obtained using the formula for the actual IOPS value.
* _ReadIOPS_: The IOPS read value obtained using the formula for the actual IOPS value.
For more information about maximum possible IOPS and bandwidth values, see [Quotas and limits](../concepts/limits.md#limits-disks).
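As a quick illustration, the two formulas described in this section can be expressed in Python. The limit values in the example are made up for the demonstration, not actual {{ yandex-cloud }} quotas:

```python
def actual_iops(max_iops: float, max_bandwidth: float, request_size: float) -> float:
    """Effective IOPS: capped by both the IOPS limit and bandwidth / request size."""
    return min(max_iops, max_bandwidth / request_size)

def total_iops(write_iops: float, read_iops: float, alpha: float) -> float:
    """Mixed read/write IOPS, where alpha is the write share, 0 <= alpha <= 1."""
    return alpha * write_iops + (1 - alpha) * read_iops

# With 4 KB requests, bandwidth is not the bottleneck for these limits:
print(actual_iops(max_iops=1000, max_bandwidth=15_000_000, request_size=4096))  # prints 1000
```

Larger request sizes shift the bottleneck from the IOPS limit to the bandwidth limit, which is why the document recommends 4 KB requests for maximum IOPS and 4 MB requests for maximum bandwidth.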
### Disk performance {#performance}
To achieve maximum IOPS, we recommend performing read and write operations that are 4 KB and less. Network SSDs have much higher IOPS for read operations and process requests faster than HDDs.
To achieve the maximum possible bandwidth, we recommend performing 4 MB reads and writes.
Disk performance depends on size: the more allocation units, the higher the IOPS and bandwidth values.
For small HDDs, there's a mechanism that raises their performance to that of 1 TB disks for peak loads. When a small disk works at the [basic performance level](../concepts/limits.md#limits-disks) for 12 hours, it accumulates <q>credits for operations</q>. These are spent automatically when the load increases (for example, when a VM starts up). Small HDDs can work at increased performance for about 30 minutes a day. <q>Credits for operations</q> can be spent all at once or in small intervals.
avg_line_length: 61.349593 | max_line_length: 497 | alphanum_fraction: 0.77392 | lid: eng_Latn (0.998409)
a699446b9a22d420a5d7ddc99763b5dadac0292b | 4,170 | md | Markdown | content/post/2013-11-27-linux-containers-via-lxc-and-libvirt.md | scottslowe/weblog | dcf9c6a5d0a8d9b7fb507ce7b6fcee1b11eb065f | ["MIT"] | stars: 9 (2018-12-19T09:50:28.000Z to 2022-03-31T00:40:39.000Z) | issues: 2 (2018-04-23T13:45:38.000Z to 2020-01-24T23:04:16.000Z) | forks: 9 (2018-04-22T05:43:46.000Z to 2022-03-02T20:28:45.000Z)
---
author: slowe
categories: Tutorial
comments: true
date: 2013-11-27T09:00:00Z
slug: linux-containers-via-lxc-and-libvirt
tags:
- CLI
- Libvirt
- Linux
- LXC
- Networking
- OVS
- Virtualization
title: Linux Containers via LXC and Libvirt
url: /2013/11/27/linux-containers-via-lxc-and-libvirt/
wordpress_id: 3349
---
One of the cool things about [libvirt](http://libvirt.org/) is the ability to work with multiple hypervisors and virtualization technologies, including Linux containers using [LXC](http://linuxcontainers.org/). In this post, I'm going to show you how to use libvirt with LXC, including leveraging libvirt to help automate attaching containers to [Open vSwitch (OVS)](http://openvswitch.org/).
If you aren't familiar with Linux containers and LXC, I invite you to have a look at [my introductory post on Linux containers and LXC][1]. It should give you enough background to make this post make sense.
To use libvirt with an LXC container, there are a couple of basic steps:
1. Create the container using standard LXC user-space tools.
2. Create a libvirt XML definition for the container.
3. Define the libvirt container domain.
4. Start the libvirt container domain.
The first part, creating the container, is pretty straightforward:
```bash
lxc-create -t ubuntu -n cn-02
```
This creates a container using the Ubuntu template and calls it `cn-02`. As you may recall from [my introductory LXC post][1], this creates the container's configuration and root filesystem in `/var/lib/lxc` by default. (I'm assuming you are using Ubuntu 12.04 LTS, as I am.)
Once you have the container created, you next need to get it into libvirt. Libvirt uses a standard XML-based format for defining VMs, containers, networks, etc. At first, I thought this might be the most difficult section, but thanks to [this page](https://wiki.ubuntu.com/SergeHallyn_libvirtlxc) I was able to create a template XML configuration. I've uploaded this template XML configuration to GitHub as a gist, which you can view [here](https://gist.github.com/scottslowe/7647800).
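For orientation, a minimal sketch of what such a libvirt LXC domain definition can look like (the linked gist is the authoritative version; the memory value and paths here are assumptions), with REPLACE standing in for the container name:

```xml
<domain type='lxc'>
  <name>REPLACE</name>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <emulator>/usr/lib/libvirt/libvirt_lxc</emulator>
    <!-- Root filesystem created by lxc-create -->
    <filesystem type='mount'>
      <source dir='/var/lib/lxc/REPLACE/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
```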
Simply take this XML template and save it as something like `lxc-template.xml` or similar. Then, after you've created your container using `lxc-create` as above, you can easily take this template and turn it into a specific container configuration with only one command. For example, suppose you created a container named "cn-02" (as I did with the command I showed earlier). If you wanted to customize the XML template, just use this simple Unix/Linux command:
```bash
sed 's/REPLACE/cn-02/g' lxc-template.xml > cn-02.xml
```
Once you have a container-specific libvirt XML configuration, then defining it in libvirt is super-easy:
```bash
virsh -c lxc:// define cn-02.xml
```
Then start the container:
```bash
virsh -c lxc:// start cn-02
```
And connect to the container's console:
```bash
virsh -c lxc:// console cn-02
```
When you're done with the container's console, press "Ctrl-]" (that's Control and right bracket at the same time); that will return you to your host.
Pretty handy, eh? Further, since you're now controlling your containers via libvirt, you can leverage libvirt's networking functionality as well---which means that you can easily create libvirt virtual networks backed by OVS and automatically attach containers to OVS for advanced networking configurations. You only need to create an OVS-backed virtual network like I describe in [this post on VLANs with OVS and libvirt][2].
I still need to do some additional investigation and testing to see how the networking configuration in the container's `config` file interacts with the networking configuration in the libvirt XML file. For example, how do you define multiple network interfaces? Can you control the name of the veth pairs that show up in the host? I don't have any answers for these questions (yet). If you know the answers, feel free to speak up in the comments!
All courteous feedback and interaction is welcome, so I invite you to start (or join) the discussion via the comments below.
[1]: {{< relref "2013-11-25-a-brief-introduction-to-linux-containers-with-lxc.md" >}}
[2]: {{< relref "2012-11-07-using-vlans-with-ovs-and-libvirt.md" >}}
avg_line_length: 61.323529 | max_line_length: 485 | alphanum_fraction: 0.771463 | lid: eng_Latn (0.996012)
a69945961657ff048bcc10c608318ceb1fbf0ef0 | 850 | md | Markdown | README.md | wolfbolin/docker-tinyproxy | 5667d0910020dc33f0af5630e575cdc63e3cc1b8 | ["Unlicense"] | stars: null | issues: null | forks: null
# Docker Tinyproxy
A Tinyproxy service that is configured and started quickly using Docker
### Usage
---
##### Create the Docker container
```
Usage:
docker run -d --name='tinyproxy' -p <Port>:8888 --env BASIC_AUTH_USER=<username> --env BASIC_AUTH_PASSWORD=<password> --env TIMEOUT=<Timeout> monokal/tinyproxy:latest <ACL>
- Set <Port> to the port you wish the proxy to be accessible from.
- Set <ACL> to 'ANY' to allow unrestricted proxy access,
or one or more space separated IP/CIDR addresses for tighter security.
Examples:
docker run -d --name='tinyproxy' -p 6666:8888 monokal/tinyproxy:latest ANY
docker run -d --name='tinyproxy' -p 7777:8888 monokal/tinyproxy:latest 87.115.60.124
docker run -d --name='tinyproxy' -p 8888:8888 monokal/tinyproxy:latest 10.103.0.100/24 192.168.1.22/16
```
### Monitoring
---
##### Logs
Use `docker logs -f tinyproxy` to view the runtime logs
avg_line_length: 31.481481 | max_line_length: 176 | alphanum_fraction: 0.678824 | lid: yue_Hant (0.573291)
a69a6805e91b46cc1eea6ae208c6d59de18ff6fe | 1,735 | md | Markdown | wdk-ddi-src/content/wdtf/nf-wdtf-iwdtfsystemdepot2-get_wdtf.md | riwaida/windows-driver-docs-ddi | c6f3d4504dc936bba6226651b2810df9c9cb7f1c | ["CC-BY-4.0", "MIT"] | stars: null | issues: null | forks: null
---
UID: NF:wdtf.IWDTFSystemDepot2.get_WDTF
title: IWDTFSystemDepot2::get_WDTF (wdtf.h)
description: Gets the main WDTF aggregation object.
old-location: dtf\iwdtfsystemdepot2_wdtf.htm
tech.root: dtf
ms.assetid: e742b493-64ee-4311-b6f0-512b44e776f4
ms.date: 04/04/2018
ms.keywords: IWDTFSystemDepot2 interface [Windows Device Testing Framework],WDTF property, IWDTFSystemDepot2.WDTF, IWDTFSystemDepot2.get_WDTF, IWDTFSystemDepot2::WDTF, IWDTFSystemDepot2::get_WDTF, Microsoft.WDTF.IWDTFSystemDepot2.WDTF, Microsoft::WDTF::IWDTFSystemDepot2::WDTF, WDTF property [Windows Device Testing Framework], WDTF property [Windows Device Testing Framework],IWDTFSystemDepot2 interface, dtf.iwdtfsystemdepot2_wdtf, get_WDTF, wdtf/IWDTFSystemDepot2::WDTF, wdtf/IWDTFSystemDepot2::get_WDTF
ms.topic: method
req.header: wdtf.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows XP Professional
req.target-min-winversvr: Windows Server 2008
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl: WDTF.idl
req.max-support:
req.namespace: Microsoft.WDTF
req.assembly: WDTF.Interop.metadata_dll
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- WDTF.Interop.metadata_dll.dll
api_name:
- IWDTFSystemDepot2.WDTF
- IWDTFSystemDepot2.get_WDTF
product:
- Windows
targetos: Windows
req.typenames:
---
# IWDTFSystemDepot2::get_WDTF
## -description
Gets the main WDTF aggregation object.
This property is read-only.
## -parameters
## -see-also
<a href="https://msdn.microsoft.com/library/windows/hardware/hh406300">IWDTF2</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/hh439331">IWDTFSystemDepot2</a>
avg_line_length: 24.097222 | max_line_length: 506 | alphanum_fraction: 0.791354 | lid: yue_Hant (0.43743)
a69a77898f88cbaedef7a986d3bdadfa1a38e0bf | 5,045 | md | Markdown | _posts/2018-05-24-grid.md | EreOo/EreOo.github.io | b1a69ea5d605665fdab2ffe426dcc1e56b319cc3 | ["MIT"] | stars: null | issues: null | forks: null
---
layout: post
title: "Running Selenium Grid"
tags: grid
---

## What is Selenium Grid?
Selenium Grid is a cluster consisting of several Selenium servers.
It solves the following problems:
- Parallelizing test runs across different operating systems, different browsers, and different machines.
- Reducing the total test run time.
Selenium Grid works as follows: there is a central server (hub) to which nodes are connected. You work with the cluster through the hub, which simply relays requests to the nodes. The nodes can run on the same machine as the hub or on other machines.
## Getting started with the grid.
1. Download Selenium Server Standalone from the [official Selenium site][GRID].
<!--more-->
2. Put the selenium_server_x.xx.x.jar file in a convenient folder.
3. Download a WebDriver for your browser, for example [Chrome Driver][DRIVER].
4. Extract the archive to a convenient folder (next to the grid works too).
### Starting the Hub.
The command to start the hub. Run it in the directory where the server jar is located.
{% highlight XML%}
java -jar selenium-server-standalone-3.12.0.jar -role hub
{% endhighlight %}
_Note the version of the server you downloaded: 3.12.0._
On a successful start you will see output in the console; the line we care about is:
{% highlight XML%}
23:09:14.761 INFO [Hub.start] - Nodes should register to http://192.168.1.155:4444/grid/register/
{% endhighlight %}
Nodes will register at this address.
Open [http://localhost:4444/grid/console][Console] and you will see the hub console. All set.
### Starting a Node.
The command to start a node. Run it in the directory where the server jar is located.
{% highlight XML%}
java -Dwebdriver.chrome.driver=chromedriver.exe -jar selenium-server-standalone-3.12.0.jar -port 5555 -role webdriver -hub http://localhost:4444/grid/register -browser "browserName=chrome, version=ANY, maxInstances=5, platform=Windows"
{% endhighlight %}
Let's break down the command:
- __-Dwebdriver.chrome.driver__ specifies the driver. chromedriver is the driver you downloaded and put in the folder with the server. If it lives elsewhere, specify the path to it.
- __-port__ specifies the port the node will be brought up on.
- __-role__ specifies that this is a webdriver node.
- __-hub__ specifies the hub URL http://localhost:4444/grid/register (in this case the hub and the node are on the same machine)
- __-browser "browserName=Chrome, version=ANY, maxInstances=5, platform=Windows"__ specifies the browser settings. maxInstances sets how many browsers the node makes available; if you omit the version and platform, they are filled in automatically.
Go to the grid [console][Console] and you will see the node up with five Chrome instances. Everything is ready.
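The hub URLs involved are easy to mix up; a small stdlib-only Python helper keeps them straight. This is a hedged sketch: the `/grid/register` and `/grid/console` paths match the hub output quoted earlier, and `/wd/hub` is the standard entry point RemoteWebDriver clients use:

```python
def grid_endpoints(host: str = "localhost", port: int = 4444) -> dict:
    """Endpoints exposed by a Selenium Grid hub (Grid 3.x layout)."""
    base = f"http://{host}:{port}"
    return {
        "register": f"{base}/grid/register",  # nodes register here
        "console": f"{base}/grid/console",    # human-facing console
        "wd_hub": f"{base}/wd/hub",           # RemoteWebDriver entry point
    }
```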
## Configuration via JSON.
The hub can be configured through JSON files. Here is an example:
{% highlight XML%}
{
"port": 4444,
"newSessionWaitTimeout": -1,
"servlets" : [],
"withoutServlets": [],
"custom": {},
"capabilityMatcher": "org.openqa.grid.internal.utils.DefaultCapabilityMatcher",
"registry": "org.openqa.grid.internal.DefaultGridRegistry",
"throwOnCapabilityNotPresent": true,
"cleanUpCycle": 5000,
"role": "hub",
"debug": false,
"browserTimeout": 0,
"timeout": 1800
}
{% endhighlight %}
To start the hub with these settings, add the __-hubConfig hubconfig.json__ flag to the command.
_hubconfig.json_ is our file, located in the folder with the server.
The node configuration additionally includes information about the supported browsers:
{% highlight XML%}
{
"capabilities":
[
{
"browserName": "firefox",
"marionette": true,
"maxInstances": 3,
"seleniumProtocol": "WebDriver"
},
{
"browserName": "chrome",
"maxInstances": 4,
      "seleniumProtocol": "WebDriver",
      "webdriver.chrome.driver": "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe"
},
{
"browserName": "internet explorer",
"platform": "WINDOWS",
"maxInstances": 2,
      "seleniumProtocol": "WebDriver",
      "webdriver.ie.driver": "C:/Program Files (x86)/Internet Explorer/iexplore.exe"
}
],
"proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"maxSession": 5,
"port": 5556,
"register": true,
"registerCycle": 5000,
"hub": "http://localhost:4444",
"nodeStatusCheckTimeout": 5000,
"nodePolling": 5000,
"role": "node",
"unregisterIfStillDownAfter": 60000,
"downPollingLimit": 2,
"debug": false,
"servlets" : [],
"withoutServlets": [],
"custom": {}
}
{% endhighlight %}
To start the node, run the following command (where nodeconfig.json is a file in the server directory):
{% highlight shell %}
java -jar selenium-server-standalone-3.12.0.jar -role node -nodeConfig nodeconfig.json
{% endhighlight %}
Open the grid [console][Console] and you will see the registered node with 2 IE, 3 Firefox, and 4 Chrome instances.
[Console]:http://localhost:4444/grid/console "http://localhost:4444/grid/console"
[DRIVER]:http://chromedriver.chromium.org/downloads "http://chromedriver.chromium.org/downloads"
[GRID]:https://docs.seleniumhq.org/download/ "https://docs.seleniumhq.org"
[TEASY]:https://github.com/EreOo/WEB-QA "WEB-QA project"
| 37.932331
| 269
| 0.733003
|
rus_Cyrl
| 0.455439
|
a69b095784ea20b3d24c51c63887ed5bc8cfcdbd
| 808
|
md
|
Markdown
|
2021/CVE-2021-42171.md
|
justinforbes/cve
|
375c65312f55c34fc1a4858381315fe9431b0f16
|
[
"MIT"
] | 2,340
|
2022-02-10T21:04:40.000Z
|
2022-03-31T14:42:58.000Z
|
2021/CVE-2021-42171.md
|
justinforbes/cve
|
375c65312f55c34fc1a4858381315fe9431b0f16
|
[
"MIT"
] | 19
|
2022-02-11T16:06:53.000Z
|
2022-03-11T10:44:27.000Z
|
2021/CVE-2021-42171.md
|
justinforbes/cve
|
375c65312f55c34fc1a4858381315fe9431b0f16
|
[
"MIT"
] | 280
|
2022-02-10T19:58:58.000Z
|
2022-03-26T11:13:05.000Z
|
### [CVE-2021-42171](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42171)



### Description
Zenario CMS 9.0.54156 is vulnerable to arbitrary file upload. The web server can be compromised by uploading and executing a web shell, which can run commands, browse system files, access local resources, attack other servers, exploit local vulnerabilities, and so forth.
### POC
#### Reference
No PoCs from references.
#### Github
- https://github.com/ARPSyndicate/cvemon
- https://github.com/minhnq22/CVE-2021-42171
- https://github.com/nomi-sec/PoC-in-GitHub
| 40.4
| 268
| 0.752475
|
eng_Latn
| 0.390846
|
a69c5061cb25d4ea8b4de9e4638ab9c1f78b783b
| 6,939
|
md
|
Markdown
|
README.md
|
gyjsuccess/SpringBootUnity
|
0d38f28dd25fb968bec6174dcaae4966bc027541
|
[
"Apache-2.0"
] | 3
|
2018-11-15T08:03:04.000Z
|
2020-06-25T23:42:49.000Z
|
README.md
|
satng/SpringBootUnity
|
0d38f28dd25fb968bec6174dcaae4966bc027541
|
[
"Apache-2.0"
] | null | null | null |
README.md
|
satng/SpringBootUnity
|
0d38f28dd25fb968bec6174dcaae4966bc027541
|
[
"Apache-2.0"
] | null | null | null |
[](https://travis-ci.org/xiaomoinfo/SpringBootUnity)
[](#backers) [](#sponsors) [](https://github.com/xiaomoinfo/SpringBootUnity/issues)
[](https://github.com/xiaomoinfo/SpringBootUnity/network)
[](https://github.com/xiaomoinfo/SpringBootUnity/stargazers)
[](https://raw.githubusercontent.com/xiaomoinfo/MysqlBlobToJsonTool/master/LICENSE)
[]()
[]()
### Project overview

### Environment
- `maven` latest
- `jdk1.8`
- `spring boot 1.5.6 release` (latest version at the time of writing)
- `idea` is personally recommended over Eclipse (hopefully without being branded a heretic)
- mysql5.5+
- git: version control
- nginx: reverse proxy server
- lombok
### Notes
- The code of this project is hosted both on [github](https://github.com/xiaomoinfo/SpringBootUnity) and on [码云 (Gitee)](http://git.oschina.net/hupeng/SpringBootUnity); github always has the latest code, and the Gitee mirror is synced after github is updated.
- Most modules in this project use `hibernate`. If no `sql` file is provided, the database tables are generated automatically from the code mappings at startup; please update the database connection settings in `application.properties` before starting.
- This project uses `lombok`. If you do not have the `lombok plugin` installed when browsing this project, please install it first, otherwise the `get/set` methods will not be found. Eclipse users, see the [official guide](http://jnb.ociweb.com/jnb/jnbJan2010.html#references)

### How to run
- `spring boot` embeds Tomcat as the web container, so by default you can package the project as a jar, drop it on the server, and run it directly
> `java -Xms64m -Xmx2048m -jar project.jar 5 >> ./project.log &`

- If you need to package a war instead, that is also easy: just adjust the `maven` settings as below, then drop the war into Tomcat and run it
```
<modelVersion>4.0.0</modelVersion>
<artifactId>api</artifactId>
<packaging>war</packaging>
```
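When deploying the war to an external Tomcat, one extra step is commonly required (not shown in the snippet above, so treat this as a hedged addition): mark the embedded Tomcat starter as `provided` so it does not conflict with the external container, and have the main class extend `SpringBootServletInitializer`.

```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>
```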
### Changelog
The full changelog is kept in a separate file because it takes up too much space here: [changelog (detailed)](/changeLog.md). Only a summary is shown below:
- 2017-09-02 api module: added swagger-bootstrap-ui, coexisting with the original UI.
http://localhost:8080 default UI
http://localhost:8080/doc.html bootstrap-ui
- 2017-09-02 upgraded spring boot from 1.4.3 to 1.5.6
- 2017-09-02 fixed a bug where the app could not start without database configuration
- 2017-09-02 bumped the version number to 2017.1
- 2017-09-02 api module (swagger): added the open-source swagger-bootstrap-ui library, which coexists with the default swagger UI.
- 2017-09-02 web module: added the database sql file; import it, start the app, and the web UI is directly accessible.
- 2017-09-06 mybatis module: added USER.sql; after startup, visit http://localhost:8080 to see the API data
- 2017-09-06 all modules: added characterEncoding=utf8&useSSL=true to resolve the SSL warning of newer MySQL versions
- 2017-09-06 added the contributor list and the backer and sponsor links.
- 2017-09-08 crawler module (web crawler): fixed a bug where a missing local directory caused an error; it is now created automatically if absent
### About this project
Requirements keep changing. This project is built on spring boot and pairs it with different technologies to match different needs, so it is positioned as a collection of usage examples. If you come across a useful technology while working with spring boot, a PR to this project is welcome.
### About me
@[小莫 (xiaomo)](https://xiaomo.info): I am a developer who loves the open-source spirit and chases new trends; my skills are decent, and I try to be diligent. I like to organize my projects around github issues, and I hope interested people will get in touch so we can improve together and build open-source projects of common interest. I currently work as the lead server developer on an RPG and am familiar with game and web development. I am also an anime fan and know a little Japanese.
### Online tools
- [Online Cron expression generator](http://cron.qqe2.com/ "Online Cron expression generator")
- [Online tools - a programmer's toolbox](http://tool.lu/ "Online tools - a programmer's toolbox")
### Feedback
1. You are welcome to open an [issue](https://github.com/xiaomoinfo/SpringBootUnity/issues) to help improve this project.
2. QQ: 83387856
3. Personal site: https://xiaomo.info
### Online documentation
- [JDK 7 API docs (English)](http://tool.oschina.net/apidocs/apidoc?api=jdk_7u4 "JDK 7 API docs")
- [Spring 4.x documentation](http://spring.oschina.mopaas.com/ "Spring 4.x documentation")
- [MyBatis 3 official site](http://www.mybatis.org/mybatis-3/zh/index.html "MyBatis 3 official site")
- [Dubbo official site](http://dubbo.io/ "Dubbo official site")
- [Nginx documentation (Chinese)](http://tool.oschina.net/apidocs/apidoc?api=nginx-zh "Nginx documentation (Chinese)")
- [Freemarker online manual](http://freemarker.foofun.cn/ "Freemarker online manual")
- [Velocity developer guide](http://velocity.apache.org/engine/devel/developer-guide.html "Velocity developer guide")
- [Bootstrap online manual](http://www.bootcss.com/ "Bootstrap online manual")
- [Git official documentation (Chinese)](https://git-scm.com/book/zh/v2 "Git official documentation (Chinese)")
- [Thymeleaf](http://www.thymeleaf.org/doc/tutorials/3.0/thymeleafspring.html "Thymeleaf")
## Contributors
Thanks to all the developers who have contributed to this project.
<a href="graphs/contributors"><img src="https://opencollective.com/SpringBootUnity/contributors.svg?width=890" /></a>
## Backers
Thank you for your support! 🙏 [[Become a backer](https://opencollective.com/SpringBootUnity#backer)]
<a href="https://opencollective.com/SpringBootUnity#backers" target="_blank"><img src="https://opencollective.com/SpringBootUnity/backers.svg?width=890"></a>
## Sponsors
[[Become a sponsor](https://opencollective.com/SpringBootUnity#sponsor)] Support this project and become a sponsor. Your logo and website link will be shown here.
<a href="https://opencollective.com/SpringBootUnity/sponsor/0/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/1/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/2/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/3/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/3/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/4/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/4/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/5/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/5/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/6/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/6/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/7/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/7/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/8/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/8/avatar.svg"></a>
<a href="https://opencollective.com/SpringBootUnity/sponsor/9/website" target="_blank"><img src="https://opencollective.com/SpringBootUnity/sponsor/9/avatar.svg"></a>
## [License](LICENSE "apache")
Copyright 2017 xiaomo
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 45.352941
| 352
| 0.760052
|
yue_Hant
| 0.770963
|
a69d33f94b5e3ca188e3af61e7729d3257e9d7bd
| 1,698
|
md
|
Markdown
|
MustDoCodingQuestions/Backtracking/README.md
|
brij2804/GeeksforGeeks
|
2514cd46b8d0e2fa18c94f17d0e3717ecc611039
|
[
"MIT"
] | null | null | null |
MustDoCodingQuestions/Backtracking/README.md
|
brij2804/GeeksforGeeks
|
2514cd46b8d0e2fa18c94f17d0e3717ecc611039
|
[
"MIT"
] | null | null | null |
MustDoCodingQuestions/Backtracking/README.md
|
brij2804/GeeksforGeeks
|
2514cd46b8d0e2fa18c94f17d0e3717ecc611039
|
[
"MIT"
] | null | null | null |
## Backtracking
|Problem Name|Language|Solution Link|
|---|---|---|
|[Two Sum](https://leetcode.com/problems/two-sum/)|java|[Solution](./TwoSum.java)|
|[Add Two Numbers](https://leetcode.com/problems/add-two-numbers/)|java|[Solution](./AddTwoNumbers.java)|
|[Reverse Integer](https://leetcode.com/problems/reverse-integer/)|java|[Solution](./ReverseInteger.java)|
|[Palindrome Number](https://leetcode.com/problems/palindrome-number/)|java|[Solution](./PalindromeNumber.java)|
|[Remove Nth Node From End of List](https://leetcode.com/problems/remove-nth-node-from-end-of-list/)|java|[Solution](./RemoveNthNodeFromEndofList.java)|
|[Merge Two Sorted Lists](https://leetcode.com/problems/merge-two-sorted-lists/)|java|[Solution](./MergeTwoSortedLists.java)|
|[Rotate List](https://leetcode.com/problems/rotate-list/)|java|[Solution](./RotateList.java)|
|[Remove Duplicates from Sorted List](https://leetcode.com/problems/remove-duplicates-from-sorted-list/)|java|[Solution](./RemoveDuplicatesfromSortedList.java)|
|[Same Tree](https://leetcode.com/problems/same-tree/)|java|[Solution](./SameTree.java)|
|[Maximum Depth of Binary Tree](https://leetcode.com/problems/maximum-depth-of-binary-tree/)|java|[Solution](./MaximumDepthofBinaryTree.java)|
|[Binary Tree Preorder Traversal](https://leetcode.com/problems/binary-tree-preorder-traversal/)|java|[Solution](./BinaryTreePreorderTraversal.java)|
|[Binary Tree Postorder Traversal](https://leetcode.com/problems/binary-tree-postorder-traversal/)|java|[Solution](./BinaryTreePostorderTraversal.java)|
|[Delete Node in a Linked List](https://leetcode.com/problems/delete-node-in-a-linked-list/)|java|[Solution](./DeleteNodeinaLinkedList.java)|
| 73.826087
| 160
| 0.765018
|
yue_Hant
| 0.626645
|
a69d96641fcd607af9cd4ef8bd248640b3e29202
| 1,502
|
md
|
Markdown
|
knowledge-base/visual-studio-hangs-on-loading-telerik-xamarin-solution.md
|
yordan-mitev/xamarin-forms-docs
|
19d58da96eb8bc47a6a5df9199133061b15d6361
|
[
"MIT",
"Unlicense"
] | 1
|
2020-05-13T16:52:43.000Z
|
2020-05-13T16:52:43.000Z
|
knowledge-base/visual-studio-hangs-on-loading-telerik-xamarin-solution.md
|
doc22940/xamarin-forms-docs
|
3e72a514f195733c0b2a0fa8b2a9fc0099c935f5
|
[
"MIT",
"Unlicense"
] | null | null | null |
knowledge-base/visual-studio-hangs-on-loading-telerik-xamarin-solution.md
|
doc22940/xamarin-forms-docs
|
3e72a514f195733c0b2a0fa8b2a9fc0099c935f5
|
[
"MIT",
"Unlicense"
] | null | null | null |
---
title: Visual Studio 2019 Hangs on Loading Telerik Xamarin solution
description: When starting a Xamarin solution, Visual Studio 2019 hangs on loading
type: Troubleshooting
page_title: Resolve an issue where VS 2019 hangs on loading when a Xamarin project is open
slug: visual-studio-hangs-on-loading-telerik-xamarin-solution
position:
tags: Xamarin, Telerik UI for Xamarin, extensions, solution, visual studio 2019, hangs, loading
ticketid: 1435667
res_type: kb
---
## Environment
<table>
<tr>
<td>Product Version</td>
<td>2019.3.1023.1</td>
</tr>
<tr>
<td>Product</td>
<td>Installer and VS Extensions for Telerik UI for Xamarin</td>
</tr>
</table>
## Description
This article shows how to resolve an issue where Visual Studio 2019 hangs on loading Xamarin projects when the Telerik Xamarin VS extensions are installed.
>important The issue is resolved with the latest version of the UI for Xamarin VSExtensions (v2019.3.1118).
## Solution
To solve this issue, disable the Telerik Xamarin VS Extensions in Visual Studio 2019:
1. Open the **Manage Extensions** dialog box

2. Select **Installed**, then search for *Telerik Xamarin* in the search field and click *Disable*

>note If you still want to use the Telerik VS Extensions, another option is to open Visual Studio first and then load the solution containing Telerik controls.
| 34.136364
| 163
| 0.772304
|
eng_Latn
| 0.935507
|
a69dcb171d979bac1c5c3a5705d065c1e0578c2b
| 1,181
|
md
|
Markdown
|
docusaurus/website/i18n/sl/docusaurus-plugin-content-docs/current/number-survey.md
|
isle-project/isle-editor
|
45a041571f723923fdab4eea2efe2df211323655
|
[
"Apache-2.0"
] | 9
|
2019-08-30T20:50:27.000Z
|
2021-12-09T19:53:16.000Z
|
docusaurus/website/i18n/sl/docusaurus-plugin-content-docs/current/number-survey.md
|
isle-project/isle-editor
|
45a041571f723923fdab4eea2efe2df211323655
|
[
"Apache-2.0"
] | 1,261
|
2019-02-09T07:43:45.000Z
|
2022-03-31T15:46:44.000Z
|
docusaurus/website/i18n/sl/docusaurus-plugin-content-docs/current/number-survey.md
|
isle-project/isle-editor
|
45a041571f723923fdab4eea2efe2df211323655
|
[
"Apache-2.0"
] | 3
|
2019-10-04T19:22:02.000Z
|
2022-01-31T06:12:56.000Z
|
---
id: number-survey
title: Number Survey
sidebar_label: Number Survey
---
A survey component in which an instructor can collect numeric survey data from students in real time.
## Options
* __question__ | `(string|node)`: the question to be displayed. Default: `''`.
* __allowMultipleAnswers__ | `boolean`: controls whether the same user (or session, if anonymous) may submit multiple answers. Default: `false`.
* __anonymous__ | `boolean`: allows students to submit data anonymously. If this option is set to `true`, instructors will not be able to see the ID of the student who submitted the data. Default: `false`.
* __step__ | `(number|string)`: a `string` or `number` value denoting the step of the arrows displayed when hovering over the input box. If set to `'any'`, the step will be set to `1`. Default: `'any'`.
* __style__ | `object`: CSS inline styles. Default: `{}`.
* __onSubmit__ | `function`: callback function invoked when students submit an answer. Default: `onSubmit() {}`.
## Examples
```jsx live
<NumberSurvey
allowMultipleAnswers={true}
id="generic_mean_question"
question="Submit a number"
defaultValue={0}
step="any"
/>
```
| 38.096774
| 213
| 0.719729
|
slv_Latn
| 0.984008
|
a69e315a45ec4b43fd77989e8232827d2b8f246c
| 4,852
|
md
|
Markdown
|
_includes/pages/page00744.md
|
barmintor/ed-DCP
|
bd57a74f62eae1230d514d0bd7a09712b7881f1f
|
[
"MIT"
] | null | null | null |
_includes/pages/page00744.md
|
barmintor/ed-DCP
|
bd57a74f62eae1230d514d0bd7a09712b7881f1f
|
[
"MIT"
] | null | null | null |
_includes/pages/page00744.md
|
barmintor/ed-DCP
|
bd57a74f62eae1230d514d0bd7a09712b7881f1f
|
[
"MIT"
] | null | null | null |
ADVERTISEMENTS.
MESSRS. N I C O L L ' S W A R E R O O M S ,
Through the purchase of adjoining premises, now extend over a space hitherto occupied by four
large Establishments, viz., 114, 116, 118, and 120, REGENT STREET, thus forming, with their
several suites of Show Rooms on the First Floor, not only the largest, but also the most elegant,
Magazine for Gentlemen's Clothing to be met with in Europe.
The Entrance to the WHOLESALE AND SHIPPING DEPARTMENTS, for the sale
of Woollen Cloths, &c., will, for the future, be in Warwick-street, No. 41 (immediately at the
rear of the Retail Warerooms in Regent-street), where the counting-house is also placed, and to
which address all applications for agencies, &c., are referred, as also those who may desire inter-
views with the Principals, or Heads of Departments ; but Manufacturers, or others, having
novelties to submit to the above firm, are requested to make an appointment by writing.
Those who are desirous of becoming Messrs. NICOLL'S Agents for the sale of their Patented
and other Goods, in such towns of the United Kingdom or Colonies, where Messrs. N. are at
present unrepresented, are informed that the Goods in question may be sold by them at the same
price as they are retailed in Regent-street ; at the same time affording to the Agent a better
profit than if attempts be made to copy, or otherwise pirate, the several articles.
Merchants, Shippers, and others, will find the class of goods sold by Messrs. NICOLL
peculiarly well adapted, through their excellence of quality and finish, for the wants of the now
numerous respectable residents in the Colonies, as the Goods are such as are required by the
wealthy and middle classes of the Mother Country. Indeed, a safer or more profitable venture
cannot be made by Shippers, seeing that the Market has been hitherto almost exclusively supplied
with garments calculated, in material and appearance, for the use of labourers, &c., only.
THE NICOLL PALETOT, OR PATENT COAT,
As also the original Invention, the REGISTERED PALETOT (6 and 7 Vic., Cap. 65), and
NICOLL'S MORNING COAT, adapted for Spring and Summer Wear, and, notwithstanding all
the recent Improvements, making these Garments not only the most Gentlemanly, but also the
most durable and inexpensive articles of Dress extant, the prices for the above seasons will yet
remain at ONE and TWO GUINEAS each.
114,116, 118,120, REGENT STREET, and 22, Cornhill.
114, R E G E N T S T R E E T .
The above will, as heretofore, form the principal entrance to NICOLL'S PALETOT WARE-
ROOMS, as also where Gentlemen are respectfully invited to inspect all the NOVELTIES, as
well as all the established articles of Costume, that capital can collect, or skill can form.
116, R E G E N T S T R E E T .
Much ingenuity of design may be here witnessed in Embroidered and other materials intended
for WAISTCOATS, in WEDDING, BALL, or MORNING WEAR, all moderate in cost.
UNIFORMS, either DIPLOMATIC, NAVAL, or MILITARY, are produced at fair prices.
Clever and efficient persons are employed by Messrs. NICOLL to attend to each department.
118, R E G E N T S T R E E T .
ROBES, whether intended for the PEER, the CLERGYMAN, the BARRISTER, or YOUTH
AT COLLEGE, may be here obtained at the same moderate scale of prices as have served to
distinguish NICOLL'S PALETOT.
120, R E G E N T S T R E E T ,
Forms a Department for YOUTH'S CLOTHING, which, hitherto, has not been contemplated as
an adjunct to Messrs. NICOLL'S Trade, but, increased space now affording the convenience, the
same moderate prices and excellence will be here developed, as may be observed in the
Paletot, &c.
Parents and Guardians are respectfully invited to inspect the cost of the COLLEGE CAP and
GOWN, with all other Items necessary for the Universities, Public and Private Schools, &c.
22, CORNHILL,
(OPPOSITE T H E ROYAL EXCHANGE.)
This is the address of Messrs. NICOLL'S CITY ESTABLISHMENT, where the PALETOT,
with Materials and skill, as in Regent-street, are submitted for inspection and use.
Many of late have used the word "Paletot," but H. J. and D. NICOLL are the Sole Proprietors
and Patentees of both design and material.
BRADBURY AND EVANS, PRINTERS, WHITEFRIARS.
| 63.012987
| 106
| 0.681575
|
eng_Latn
| 0.999177
|
a69ec589ea6e098050922ae82d1989ee0c5c6f73
| 73
|
md
|
Markdown
|
README.md
|
germanosk/chronist
|
697f984a80cedbf741524223bdc32052b05a865b
|
[
"MIT"
] | null | null | null |
README.md
|
germanosk/chronist
|
697f984a80cedbf741524223bdc32052b05a865b
|
[
"MIT"
] | null | null | null |
README.md
|
germanosk/chronist
|
697f984a80cedbf741524223bdc32052b05a865b
|
[
"MIT"
] | 1
|
2021-04-09T08:48:59.000Z
|
2021-04-09T08:48:59.000Z
|
# chronist
A tool to help with chronicle sheets for Pathfinder Organized Play
| 24.333333
| 61
| 0.821918
|
eng_Latn
| 0.930604
|
a69fdcf9cf9547d3614595bf1a46c85bf8174ff4
| 647
|
md
|
Markdown
|
README.md
|
TaiAurori/rvmob
|
df155fc092e282a87e34767be2c8fb16c8fc6f20
|
[
"MIT"
] | 13
|
2021-11-01T12:32:55.000Z
|
2021-11-20T07:10:28.000Z
|
README.md
|
TaiAurori/rvmob
|
df155fc092e282a87e34767be2c8fb16c8fc6f20
|
[
"MIT"
] | 9
|
2021-11-04T08:22:12.000Z
|
2021-11-18T17:17:45.000Z
|
README.md
|
TaiAurori/rvmob
|
df155fc092e282a87e34767be2c8fb16c8fc6f20
|
[
"MIT"
] | 2
|
2021-11-14T13:20:30.000Z
|
2021-12-08T17:48:05.000Z
|
<center>
<img src="rvmob-full.png" width=400>
</center>
React Native Revolt client (now semi-official!)
Beta stage, pretty unoptimized. Use at your own discomfort.
Revolt server for RVMob [here](https://app.revolt.chat/invite/YW312HPF).
After running `npm i`, you must run `npx rn-nodeify -e -i` because React Native does not have 100% compatibility with Node.js (and therefore revolt.js) by default.
<center>
<img src="screenshots/1.png" width=400>
<img src="screenshots/2.png" width=400>
<img src="screenshots/3.png" width=400>
<img src="screenshots/4.png" width=400>
<img src="screenshots/5.png" width=400>
</center>
| 35.944444
| 163
| 0.712519
|
eng_Latn
| 0.754519
|
a6a0b732e207d7545bb610b0932441eca774fef8
| 396
|
md
|
Markdown
|
README.md
|
HaroldPetersInskipp/IndoorAutoGarden
|
437a2d13b92390f3f5b449f410563b4bc3e86166
|
[
"MIT"
] | 2
|
2022-01-24T01:08:11.000Z
|
2022-03-10T18:26:00.000Z
|
README.md
|
HaroldPetersInskipp/IndoorAutoGarden
|
437a2d13b92390f3f5b449f410563b4bc3e86166
|
[
"MIT"
] | null | null | null |
README.md
|
HaroldPetersInskipp/IndoorAutoGarden
|
437a2d13b92390f3f5b449f410563b4bc3e86166
|
[
"MIT"
] | null | null | null |
# IndoorAutoGarden
An ecosystem monitoring setup, ideal for growing plants indoors without user intervention.
# Functionality
Automatically control and display metrics on a web dashboard such as: temperature, humidity, light, and soil moisture.
## Images
<img src="Images/dashboard(old).jpeg">
<img src="Images/hardware.jpeg">
<img src="Images/outside.jpeg">
<img src="Images/inside.jpeg">
| 26.4
| 118
| 0.772727
|
eng_Latn
| 0.618529
|
a6a13603873abe433f520f389ef863a5d7902d1a
| 20,899
|
md
|
Markdown
|
articles/media-services/previous/offline-widevine-for-android.md
|
andreasahlund/azure-docs.sv-se
|
00cec92906c1c97e2a9aca9c48c51082b3cbb69d
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/media-services/previous/offline-widevine-for-android.md
|
andreasahlund/azure-docs.sv-se
|
00cec92906c1c97e2a9aca9c48c51082b3cbb69d
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/media-services/previous/offline-widevine-for-android.md
|
andreasahlund/azure-docs.sv-se
|
00cec92906c1c97e2a9aca9c48c51082b3cbb69d
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Configure your account for offline streaming of Widevine-protected content - Azure
description: This topic shows how to configure your Azure Media Services account for offline streaming of Widevine-protected content.
services: media-services
keywords: DASH, DRM, Widevine offline mode, ExoPlayer, Android
documentationcenter: ''
author: willzhan
manager: steveng
editor: ''
ms.service: media-services
ms.workload: media
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 04/16/2019
ms.author: willzhan
ms.reviewer: dwgeo
ms.openlocfilehash: 4b3b2b8c39b5b2552b5ce9f508bacd1ea86b2638
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 11/25/2020
ms.locfileid: "96006377"
---
# <a name="offline-widevine-streaming-for-android"></a>Offline Widevine streaming for Android
[!INCLUDE [media services api v2 logo](./includes/v2-hr.md)]
> [!div class="op_single_selector" title1="Select the version of Media Services that you are using:"]
> * [Version 3](../latest/offline-widevine-for-android.md)
> * [Version 2](offline-widevine-for-android.md)
> [!NOTE]
> No new features are being added to Media Services v2. <br/>Check out the latest version, [Media Services v3](../latest/index.yml). Also, see [migration guidance from v2 to v3](../latest/migrate-from-v2-to-v3.md)
In addition to protecting content for online streaming, media content subscription and rental services offer downloadable content that works when you are not connected to the internet. You might need to download content onto your phone or tablet for playback in airplane mode while disconnected from the network. Additional scenarios in which you might want to download content:
- Some content providers may disallow DRM license delivery beyond a country/region border. If a user wants to watch content while traveling abroad, offline download is required.
- In some countries/regions, internet availability and/or bandwidth is limited. Users may choose to download content so they can watch it at a high enough resolution for a satisfactory viewing experience.
This article discusses how to implement offline playback for DASH content protected by Widevine on Android devices. Offline DRM allows you to provide subscription, rental, and purchase models for your content, enabling customers of your services to easily take content with them when disconnected from the internet.
To build Android player apps, we outline three options:
> [!div class="checklist"]
> * Build a player using the Java API of the ExoPlayer SDK
> * Build a player using the Xamarin binding of the ExoPlayer SDK
> * Build a player using the Encrypted Media Extension (EME) and Media Source Extension (MSE) in Chrome mobile browser v62 or later
The article also answers some common questions related to offline streaming of Widevine-protected content.
## <a name="requirements"></a>Requirements
Before implementing offline DRM for Widevine on Android devices, you should first:
- Become familiar with the concepts introduced for online content protection using Widevine DRM. This is covered in detail in the following documents/samples:
    - [Use Azure Media Services to deliver DRM licenses or AES keys](media-services-deliver-keys-and-licenses.md)
    - [CENC with Multi-DRM and Access Control: A Reference Design and Implementation on Azure and Azure Media Services](media-services-cenc-with-multidrm-access-control.md)
    - [Using PlayReady and/or Widevine Dynamic Common Encryption with .NET](https://azure.microsoft.com/resources/samples/media-services-dotnet-dynamic-encryption-with-drm/)
    - [Use Azure Media Services to deliver PlayReady and/or Widevine licenses with .NET](https://azure.microsoft.com/resources/samples/media-services-dotnet-deliver-playready-widevine-licenses/)
- Become familiar with the Google ExoPlayer SDK for Android, an open-source video player SDK capable of supporting offline Widevine playback.
    - [ExoPlayer SDK](https://github.com/google/ExoPlayer)
    - [ExoPlayer developer guide](https://google.github.io/ExoPlayer/guide.html)
    - [ExoPlayer developer blog](https://medium.com/google-exoplayer)
## <a name="content-protection-configuration-in-azure-media-services"></a>Content protection configuration in Azure Media Services
When you configure Widevine protection of an asset in Media Services, you need to create a ContentKeyAuthorizationPolicyOption, which specifies the following three things:
1. DRM system (Widevine)
2. ContentKeyAuthorizationPolicyRestriction, specifying how the content key delivery is authorized in the license delivery service (open or token authorization)
3. DRM (Widevine) license template
To enable **offline** mode for Widevine licenses, you need to configure the [Widevine license template](media-services-widevine-license-template-overview.md). In the **policy_overrides** object, set the **can_persist** property to **true** (the default is false).
The following code sample uses .NET to enable **offline** mode for Widevine licenses. The code is based on the [Using PlayReady and/or Widevine Dynamic Common Encryption with .NET](https://github.com/Azure-Samples/media-services-dotnet-dynamic-encryption-with-drm) sample.
```
private static string ConfigureWidevineLicenseTemplateOffline(Uri keyDeliveryUrl)
{
    var template = new WidevineMessage
    {
        allowed_track_types = AllowedTrackTypes.SD_HD,
        content_key_specs = new[]
        {
            new ContentKeySpecs
            {
                required_output_protection = new RequiredOutputProtection { hdcp = Hdcp.HDCP_NONE },
                security_level = 1,
                track_type = "SD"
            }
        },
        policy_overrides = new
        {
            can_play = true,
            can_persist = true,
            //can_renew = true, //if you set can_renew = false, you do not need renewal_server_url
            //renewal_server_url = keyDeliveryUrl.ToString(), //not mandatory, renewal_server_url is needed only if license_duration_seconds is set
            can_renew = false,
            //rental_duration_seconds = 1209600,
            //playback_duration_seconds = 1209600,
            //license_duration_seconds = 1209600
        }
    };

    string configuration = Newtonsoft.Json.JsonConvert.SerializeObject(template);
    return configuration;
}
```
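For illustration, the configuration string returned by the helper above serializes to JSON roughly as follows (field order and enum formatting depend on the serializer settings, so treat this as a sketch rather than verbatim output):

```
{
  "allowed_track_types": "SD_HD",
  "content_key_specs": [
    {
      "required_output_protection": { "hdcp": "HDCP_NONE" },
      "security_level": 1,
      "track_type": "SD"
    }
  ],
  "policy_overrides": {
    "can_play": true,
    "can_persist": true,
    "can_renew": false
  }
}
```

Note that `"can_persist": true` is the key setting that makes the issued Widevine licenses persistable for offline playback.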
## <a name="configuring-the-android-player-for-offline-playback"></a>Configuring the Android player for offline playback
The easiest way to develop a native player app for Android devices is to use the [Google ExoPlayer SDK](https://github.com/google/ExoPlayer), an open-source video player SDK. ExoPlayer supports features not currently supported by Android's native MediaPlayer API, including MPEG-DASH and Microsoft Smooth Streaming delivery protocols.
ExoPlayer version 2.6 and higher includes many classes that support offline Widevine DRM playback. In particular, the OfflineLicenseHelper class provides utility functions to facilitate the use of DefaultDrmSessionManager for downloading, renewing, and releasing offline licenses. The classes provided in the SDK folder "library/core/src/main/java/com/google/android/exoplayer2/offline/" support offline video content downloading.
The following list of classes facilitates offline mode in the ExoPlayer SDK for Android:
- library/core/src/main/java/com/google/android/exoplayer2/drm/OfflineLicenseHelper.java
- library/core/src/main/java/com/google/android/exoplayer2/drm/DefaultDrmSession.java
- library/core/src/main/java/com/google/android/exoplayer2/drm/DefaultDrmSessionManager.java
- library/core/src/main/java/com/google/android/exoplayer2/drm/DrmSession.java
- library/core/src/main/java/com/google/android/exoplayer2/drm/ErrorStateDrmSession.java
- library/core/src/main/java/com/google/android/exoplayer2/drm/ExoMediaDrm.java
- library/core/src/main/java/com/google/android/exoplayer2/offline/SegmentDownloader.java
- library/core/src/main/java/com/google/android/exoplayer2/offline/DownloaderConstructorHelper.java
- library/core/src/main/java/com/google/android/exoplayer2/offline/Downloader.java
- library/dash/src/main/java/com/google/android/exoplayer2/source/dash/offline/DashDownloader.java
Developers should reference the [ExoPlayer developer guide](https://google.github.io/ExoPlayer/guide.html) and the corresponding [developer blog](https://medium.com/google-exoplayer) during development of an application. Google has not released a fully documented reference implementation or sample code for an ExoPlayer app supporting Widevine offline at this time, so the information is limited to the developer guide and blog.
### <a name="working-with-older-android-devices"></a>Arbeta med äldre Android-enheter
För vissa äldre Android-enheter måste du ange värden för följande **policy_overrides** egenskaper (definieras i [Widevine-licens mal len](media-services-widevine-license-template-overview.md): **rental_duration_seconds**, **playback_duration_seconds** och **license_duration_seconds**. Du kan också ställa in dem på noll, vilket innebär oändlig/obegränsad varaktighet.
Värdena måste anges för att undvika en heltals spill bugg. Mer information om problemet finns i https://github.com/google/ExoPlayer/issues/3150 och https://github.com/google/ExoPlayer/issues/3112 . <br/>Om du inte anger värdena explicit, tilldelas mycket stora värden för **PlaybackDurationRemaining** och **LicenseDurationRemaining** (till exempel 9223372036854775807, vilket är det högsta positiva värdet för ett 64-bitars heltal). Det innebär att Widevine-licensen har upphört att gälla och därför att dekrypteringen inte sker.
This problem doesn't occur on Android 5.0 Lollipop or later, because Android 5.0 is the first Android version designed to fully support ARMv8 ([Advanced RISC Machine](https://en.wikipedia.org/wiki/ARM_architecture)) and 64-bit platforms, whereas Android 4.4 KitKat, like other older Android versions, was originally designed to support ARMv7 and 32-bit platforms.
## <a name="using-xamarin-to-build-an-android-playback-app"></a>Using Xamarin to build an Android playback app
You can find Xamarin bindings for ExoPlayer at the following links:
- [Xamarin bindings library for the Google ExoPlayer library](https://github.com/martijn00/ExoPlayerXamarin)
- [Xamarin bindings for ExoPlayer NuGet](https://www.nuget.org/packages/Xam.Plugins.Android.ExoPlayer/)
Also see the following thread: [Xamarin binding](https://github.com/martijn00/ExoPlayerXamarin/pull/57).
## <a name="chrome-player-apps-for-android"></a>Chrome player apps for Android
Starting with the release of [Chrome for Android v62](https://developers.google.com/web/updates/2017/09/chrome-62-media-updates), persistent license in EME is supported. [Widevine L1](https://developers.google.com/web/updates/2017/09/chrome-62-media-updates#widevine_l1) is now also supported in Chrome for Android. This enables you to create offline playback apps in Chrome if your end users have this (or a later) version of Chrome.
In addition, Google has created an open-source Progressive Web App (PWA) sample:
- [Source code](https://github.com/GoogleChromeLabs/sample-media-pwa)
- [Google-hosted version](https://biograf-155113.appspot.com/ttt/episode-2/) (works only in Chrome v62 and later on Android devices)
If you upgrade your mobile Chrome browser to v62 (or later) on an Android phone and test the hosted sample app above, you'll see that both online streaming and offline playback work.
This open-source PWA app is written in Node.js. If you want to host your own version on an Ubuntu server, keep in mind the following commonly encountered issues that can prevent playback:
1. CORS issue: the sample videos in the sample app are hosted in https://storage.googleapis.com/biograf-video-files/videos/ . Google has set up CORS for all test samples hosted in its Google Cloud Storage bucket. They're served with CORS headers, specifying explicitly the CORS entry: `https://biograf-155113.appspot.com` (the domain where Google hosts its sample), preventing access by any other sites. If you try, you'll see the following HTTP error: `Failed to load https://storage.googleapis.com/biograf-video-files/videos/poly-sizzle-2015/mp4/dash.mpd: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https:\//13.85.80.81:8080' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.`
2. Certificate issue: starting with Chrome v58, HTTPS is required by EME for Widevine. Therefore, you need to host the sample app over HTTPS with an X509 certificate. A typical test certificate doesn't work. You must obtain a certificate that meets the following minimum requirements:
- Chrome and Firefox require that a SAN (Subject Alternative Name) setting exists in the certificate
- The certificate must be issued by a trusted CA; a self-signed development certificate doesn't work
- The certificate must have a CN name matching the DNS name of the web server or gateway
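The CORS issue in item 1 boils down to an origin allowlist check on the server. The following is a minimal sketch of that decision (not Google's actual configuration; the allowlist and function name are illustrative):

```python
# Minimal sketch of a CORS origin allowlist check. When the requesting origin
# is not on the list, no Access-Control-Allow-Origin header is emitted and the
# browser blocks the response, producing the "Failed to load" error above.
ALLOWED_ORIGINS = {"https://biograf-155113.appspot.com"}  # origin used by the hosted sample

def cors_header(request_origin: str) -> dict:
    """Return the CORS response header for an allowed origin, or nothing."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}  # absent header -> cross-origin fetch is blocked by the browser

print(cors_header("https://biograf-155113.appspot.com"))  # header emitted, fetch allowed
print(cors_header("https://13.85.80.81:8080"))            # {} -> blocked, as in the error above
```

To serve your own copy of the videos, host them where you can add your own site's origin to the CORS configuration.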
## <a name="frequently-asked-questions"></a>Frequently asked questions
### <a name="question"></a>Question
How can I deliver persistent licenses (offline-enabled) for some clients/users and non-persistent licenses (offline-disabled) for others? Do I have to duplicate the content and use a separate content key?
### <a name="answer"></a>Answer
You don't have to duplicate the content. You can simply use a single copy of the content and a single ContentKeyAuthorizationPolicy, but two separate ContentKeyAuthorizationPolicyOptions:
1. IContentKeyAuthorizationPolicyOption 1: uses persistent license, and ContentKeyAuthorizationPolicyRestriction 1, which contains a claim such as license_type = "persistent"
2. IContentKeyAuthorizationPolicyOption 2: uses non-persistent license, and ContentKeyAuthorizationPolicyRestriction 2, which contains a claim such as license_type = "nonpersistent"
This way, when a license request comes in from a client app, there's no apparent difference between the license requests. However, for different end users/devices, your STS should have the business logic to issue different JWT tokens containing different claims (one of the two license_type values). The claim value in the JWT token is used to decide which type of license the license service issues: persistent or non-persistent.
This means the secure token service (STS) needs to have the business logic and client/device information to add the corresponding claim value into a token.
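The decision the license service makes from the claim can be sketched as follows. This is a hypothetical illustration; the function name and claim handling are assumptions, not Azure Media Services or Widevine API code:

```python
# Hypothetical sketch of the claim-driven decision described above: the license
# service reads the validated JWT's license_type claim and issues that license type.
def pick_license_type(jwt_claims: dict) -> str:
    """Map the STS-issued claim to the Widevine license type to deliver."""
    claim = jwt_claims.get("license_type", "nonpersistent")  # assumed default
    if claim not in ("persistent", "nonpersistent"):
        raise ValueError(f"unexpected license_type claim: {claim}")
    return claim

# An offline-entitled user's token vs. a streaming-only user's token:
print(pick_license_type({"license_type": "persistent"}))     # persistent
print(pick_license_type({"license_type": "nonpersistent"}))  # nonpersistent
```

The key point is that the differentiation lives entirely in the STS-issued claim, so a single content key serves both audiences.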
### <a name="question"></a>Question
For Widevine security levels, Google's [Widevine DRM Architecture Overview](https://storage.googleapis.com/wvdocs/Widevine_DRM_Architecture_Overview.pdf) documentation defines three different security levels. However, the [Azure Media Services documentation on the Widevine license template](./media-services-widevine-license-template-overview.md) outlines five different security levels. What's the relationship or mapping between the two sets of security levels?
### <a name="answer"></a>Answer
Google's [Widevine DRM Architecture Overview](https://storage.googleapis.com/wvdocs/Widevine_DRM_Architecture_Overview.pdf) defines the following three security levels:
1. Security Level 1: All content processing, cryptography, and control are performed within the Trusted Execution Environment (TEE). In some implementation models, security processing can be performed in different chips.
2. Security Level 2: Cryptography (but not video processing) is performed within the TEE: decrypted buffers are returned to the application domain and processed through separate video hardware or software. At Level 2, however, cryptographic information is still processed only within the TEE.
3. Security Level 3: There's no TEE on the device. Appropriate measures can be taken to protect the cryptographic information and decrypted content on the host operating system. A Level 3 implementation can also include a hardware cryptographic engine, but it enhances only performance, not security.
At the same time, the security_level property of content_key_specs can have the following five different values (robustness requirements for Widevine playback) in the [Azure Media Services documentation on the Widevine license template](./media-services-widevine-license-template-overview.md):
1. Software-based white-box cryptography is required.
2. Software cryptography and an obfuscated decoder are required.
3. Key material and cryptographic operations must be performed within a hardware-backed TEE.
4. Cryptography and decoding of content must be performed within a hardware-backed TEE.
5. Cryptography, decoding, and all handling of the media (compressed and uncompressed) must be handled within a hardware-backed TEE.
Both sets of security levels are defined by Google Widevine. The difference is the level at which they're used: architecture level or API level. The five security levels are used in the Widevine API. The content_key_specs object, which contains security_level, is deserialized and passed to the global Widevine license delivery service by the Azure Media Services Widevine license service. The table below shows the mapping between the two sets of security levels.
| **Security levels defined in the Widevine architecture** |**Security levels used in the Widevine API**|
|---|---|
| **Security Level 1**: All content processing, cryptography, and control are performed within the Trusted Execution Environment (TEE). In some implementation models, security processing can be performed in different chips.|**security_level = 5**: Cryptography, decoding, and all handling of the media (compressed and uncompressed) must be handled within a hardware-backed TEE.<br/><br/>**security_level = 4**: Cryptography and decoding of content must be performed within a hardware-backed TEE.|
| **Security Level 2**: Cryptography (but not video processing) is performed within the TEE: decrypted buffers are returned to the application domain and processed through separate video hardware or software. At Level 2, however, cryptographic information is still processed only within the TEE.| **security_level = 3**: Key material and cryptographic operations must be performed within a hardware-backed TEE. |
| **Security Level 3**: There's no TEE on the device. Appropriate measures can be taken to protect the cryptographic information and decrypted content on the host operating system. A Level 3 implementation can also include a hardware cryptographic engine, but it enhances only performance, not security. | **security_level = 2**: Software cryptography and an obfuscated decoder are required.<br/><br/>**security_level = 1**: Software-based white-box cryptography is required.|
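The mapping in the table above can also be expressed as a simple lookup from the five API values (content_key_specs.security_level) to the three architecture levels:

```python
# The security-level mapping from the table above, as a lookup table:
# API security_level (1-5) -> Widevine architecture security level (1-3).
API_TO_ARCHITECTURE_LEVEL = {
    5: 1,  # all media handling within a hardware-backed TEE
    4: 1,  # crypto and content decoding within a hardware-backed TEE
    3: 2,  # key material and crypto operations within a hardware-backed TEE
    2: 3,  # software crypto plus an obfuscated decoder
    1: 3,  # software-based white-box cryptography only
}

print(API_TO_ARCHITECTURE_LEVEL[4])  # 1: hardware TEE decoding maps to architecture Level 1
```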
### <a name="question"></a>Question
Why does content download take so long?
### <a name="answer"></a>Answer
There are two ways to improve download speed:
1. Enable CDN, so that end users are more likely to hit the CDN instead of the origin/streaming endpoint for content download. If users hit the streaming endpoint, each HLS segment or DASH fragment is dynamically packaged and encrypted. Even though this latency is on the millisecond scale for each segment/fragment, the accumulated latency can be large and make the download take noticeably longer.
2. Give end users the option to selectively download video quality layers and audio tracks instead of all of the content. For offline mode, there's no point in downloading all of the quality layers. There are two ways to achieve this:
   1. Client controlled: either the player app auto-selects, or the end user selects, the video quality layer and audio tracks to download.
   2. Service controlled: you can use the Dynamic Manifest feature in Azure Media Services to create a (global) filter, which limits the HLS playlist or DASH MPD to a single video quality layer and selected audio tracks. The download URL presented to end users then includes this filter.
## <a name="additional-notes"></a>Additional notes
* Widevine is a service provided by Google Inc. and subject to the terms of service and privacy policy of Google, Inc.
## <a name="summary"></a>Summary
This article described how to implement offline mode playback for DASH content protected by Widevine on Android devices. It also answered some common questions about offline streaming of Widevine-protected content.
# fiberQuant
Quantify probtrackx connections between regions
##### About
FSL's [Probtrackx](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FDT/UserGuide#Probtrackx) can take diffusion images and generate a probabilistic map of all the white matter fiber connections that pass through an arbitrary region of the brain. For example, we can compute the connections for each of the 384 regions (192 for each hemisphere) of the AICHA atlas. The role of fiberQuant is to compute the connections **between** all the regions (1-2, 1-3, ..., 1-192; 2-3, 2-4, ..., 2-192; ...; 191-192). This small executable does this efficiently.
While this is a standalone executable, it is used as part of the diffusion analyses of [nii_preprocess](https://github.com/neurolabusc/nii_preprocess), specifically the script nii_fiber_quantify.m. That script either calls the fast fiberQuant executable or, if it cannot find the executable, runs the slower Matlab function fiberQXSub(). Since the executable and fiberQXSub() do the same thing, you can examine the Matlab function to understand what fiberQuant does.
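The pairwise enumeration described above is easy to sketch. The snippet below is illustrative Python (fiberQuant itself is written in Pascal), pairing the 192 region indices used in the enumeration above:

```python
# Sketch of the pairing fiberQuant performs: every unordered pair of atlas
# region indices (1-2, 1-3, ..., 191-192) gets one connectivity value.
from itertools import combinations

N_REGIONS = 192  # indices enumerated per the description above
pairs = list(combinations(range(1, N_REGIONS + 1), 2))

print(len(pairs))             # 192*191/2 = 18336 unordered region pairs
print(pairs[0], pairs[-1])    # (1, 2) ... (191, 192)
```

The number of pairs grows quadratically with the region count, which is why doing this loop in a compiled executable is much faster than the equivalent Matlab loop.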
##### Compiling
You will need the [FreePascal](https://freepascal.org) compiler. For MacOS, you can install this with [Homebrew](https://formulae.brew.sh/formula/fpc) using `brew install fpc`. For Debian-based Linux you can use `sudo apt install fpc`. For other operating systems see [SourceForge](https://sourceforge.net/projects/freepascal/files/Linux/3.0.4/). Compiling this executable is simple:
```
fpc fq.pas
```
---
title: Links
---
# Links
## Drupal Console repository
https://github.com/hechoendrupal/drupal-console
## Documentation repository
https://github.com/hechoendrupal/drupal-console-book
## Resources
- [Symfony Components](http://symfony.com/components)
- [Drupal 8](https://www.drupal.org/drupal-8.0)
- [PHP the right way](http://www.phptherightway.com/)
- [KNP University](https://knpuniversity.com/)
- [Build a module](http://buildamodule.com/)
- [DrupalizeMe](https://drupalize.me/)
- [Git](http://git-scm.com/)
- [Composer](https://getcomposer.org/doc/00-intro.md#installation-linux-unix-osx)
- [Box](http://box-project.org/)
---
layout: single
title: "Object Mapper"
excerpt: "Jackson JsonNode ObjectNode ArrayNode"
categories:
- SpringBoot
tags:
- [Spring, SpringBoot, Jackson, JsonNode, ObjectNode, ArrayNode]
toc: true
toc_sticky: true
date: 2022-03-13
last_modified_at: 2022-03-13
---
# Object Mapper
1. Jackson (Java library)
- Converts JSON to Java objects
- Converts Java objects to JSON
2. JsonNode
- Key : Value / values cannot be modified
- Values are retrieved by key
3. ObjectNode
- Key : Value / values can be modified
4. ArrayNode
- [value1, value2, ...] / values can be modified
5. Getter
- Basic
```java
// jsonNode comes from e.g. new ObjectMapper().readTree(json)
String _name = jsonNode.get("name").asText();
```
- List
```java
// om is a Jackson ObjectMapper, e.g. ObjectMapper om = new ObjectMapper();
JsonNode cars = jsonNode.get("cars");
ArrayNode an = (ArrayNode) cars;
List<Car> _cars = om.convertValue(an, new TypeReference<List<Car>>() {});
```
6. Setter
```java
// Cast to ObjectNode to get a mutable view of the JSON object
ObjectNode on = (ObjectNode) jsonNode;
on.put("name", "steve");
on.put("age", 20);
```
7. Printing JSON data
```java
// Pretty-print any JsonNode (including ObjectNode) as an indented JSON string
objectNode.toPrettyString()
```
---
title: Wire Data solution in Azure Monitor | Microsoft Docs
description: Wire data is consolidated network and performance data collected from computers with Log Analytics agents. Network data is combined with your log data to help you correlate data.
ms.topic: conceptual
author: bwren
ms.author: bwren
ms.date: 05/29/2020
ms.openlocfilehash: 5981a5f136d613ffcedda86797d807d2eecfab0d
ms.sourcegitcommit: 910a1a38711966cb171050db245fc3b22abc8c5f
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 03/20/2021
ms.locfileid: "101713632"
---
# <a name="wire-data-20-preview-solution-in-azure-monitor"></a>Wire Data 2.0 (Preview) solution in Azure Monitor
![Wire Data symbol](media/wire-data/wire-data2-symbol.png)
Wire data is consolidated network and performance data collected from Windows and Linux computers with the Log Analytics agent installed, including those monitored by Operations Manager in your environment. Network data is combined with your other log data to help you correlate data.
In addition to the Log Analytics agent, the Wire Data solution uses Microsoft Dependency Agents that are installed on computers in your IT infrastructure. Dependency Agents monitor network data sent to and from your computers for network levels 2-3 in the [OSI model](https://en.wikipedia.org/wiki/OSI_model), including the various protocols and ports used. Data is then sent to Azure Monitor using agents.
>[!NOTE]
>The Wire Data solution has been replaced with the [Service Map solution](../vm/service-map.md). Both use the Log Analytics agent and Dependency Agent to collect network connection data into Azure Monitor.
>
>Existing customers who use the Wire Data solution can continue to use it. We will publish guidance on the migration timeline for moving to Service Map.
>
>New customers should install the [Service Map solution](../vm/service-map.md) or [VM insights](../vm/vminsights-overview.md). The Service Map data set is comparable to Wire Data. VM insights includes the Service Map data set along with additional performance data and features for analysis.
By default, Azure Monitor collects CPU, memory, disk, and network performance data from counters built into Windows and Linux, as well as other performance counters that you can specify. Network and other data collection is done in real time for each agent, including the subnets and application-level protocols in use by the computer. Wire Data looks at network data at the application level, not down at the TCP transport level. The solution doesn't look at individual ACK and SYN packets. Once the handshake is completed, the connection is considered live and marked as connected. The connection stays live as long as the socket is open on both sides and data passes back and forth. When either side closes the connection, it is marked as disconnected. Therefore, the solution only counts the bandwidth of successfully delivered packets; it doesn't report on resent or failed packets.
If you have used [sFlow](http://www.sflow.org/) or other software that implements [Cisco's NetFlow protocol](https://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-netflow/prod_white_paper0900aecd80406232.html), the statistics you see from wire data will be familiar to you.
Some of the types of built-in log search queries include:
- Agents that provide wire data
- IP addresses of agents providing wire data
- Outbound communications by IP address
- Number of bytes sent by application protocols
- Number of bytes sent by an application service
- Bytes received by different protocols
- Total bytes sent and received by IP version
- Average latency for connections that were measured reliably
- Computer processes that initiated or received network traffic
- Amount of network traffic for a process
When you search using wire data, you can filter and group the data to view information about the top agents and top protocols. You can also see when certain computers (IP addresses/MAC addresses) communicated with each other, for how long, and how much data was sent. Essentially, you view metadata about network traffic in your searches.
However, viewing metadata isn't always useful for in-depth troubleshooting. Wire data in Azure Monitor is not a full capture of network data; it's not intended for deep packet-level troubleshooting. The advantage of using the agent, compared to other collection methods, is that you don't have to install appliances, reconfigure your network switches, or perform complicated configurations. Wire data is simply agent-based: install the agent on a computer, and it monitors its own network traffic. Another advantage is when you want to monitor workloads running at cloud providers, hosting service providers, or Microsoft Azure, where the user doesn't own the fabric layer.
## <a name="connected-sources"></a>Connected sources
The Wire Data solution gets its data from the Microsoft Dependency Agent. The Dependency Agent depends on the Log Analytics agent for its connections to Azure Monitor. This means a server must have the Log Analytics agent installed and configured first, and then the Dependency Agent can be installed. The following table describes the connected sources that the Wire Data solution supports.
| **Connected source** | **Supported** | **Description** |
| --- | --- | --- |
| Windows agents | Yes | The Wire Data solution analyzes and collects data from Windows agent computers. <br><br> In addition to the [Log Analytics agent for Windows](../agents/agent-windows.md), Windows agents require the Microsoft Dependency Agent. See [Supported operating systems](../vm/vminsights-enable-overview.md#supported-operating-systems) for a complete list of operating system versions. |
| Linux agents | Yes | The Wire Data solution analyzes and collects data from Linux agent computers.<br><br> In addition to the [Log Analytics agent for Linux](../vm/quick-collect-linux-computer.md), Linux agents require the Microsoft Dependency Agent. See [Supported operating systems](../vm/vminsights-enable-overview.md#supported-operating-systems) for a complete list of operating system versions. |
| System Center Operations Manager management group | Yes | The Wire Data solution analyzes and collects data from Windows and Linux agents in a connected [System Center Operations Manager management group](../agents/om-agents.md). <br><br> A direct connection from the System Center Operations Manager agent computer to Azure Monitor is required. |
| Azure storage account | No | The Wire Data solution collects data from agent computers, so there is no data from Azure Storage to collect. |
On Windows, the Microsoft Monitoring Agent (MMA) is used by both System Center Operations Manager and Azure Monitor to gather and send data. Depending on the context, the agent is called the System Center Operations Manager agent, the Log Analytics agent, MMA, or the Direct Agent. System Center Operations Manager and Azure Monitor provide slightly different versions of the MMA. These versions can each report to System Center Operations Manager, to Azure Monitor, or to both.
On Linux, the Log Analytics agent for Linux gathers and sends data to Azure Monitor. You can use Wire Data on servers with agents directly connected to Azure Monitor, or on servers that connect to Azure Monitor through a System Center Operations Manager management group.
The Dependency Agent doesn't transmit any data itself, and it doesn't require any changes to firewalls or ports. The wire data is always transmitted by the Log Analytics agent to Azure Monitor, either directly or through the Log Analytics gateway.

If you are a System Center Operations Manager user with a management group connected to Azure Monitor:
- No additional configuration is required when your System Center Operations Manager agents can access the internet to connect to Azure Monitor.
- You need to configure the Log Analytics gateway to work with System Center Operations Manager when your System Center Operations Manager agents can't access Azure Monitor over the internet.
If your Windows or Linux computers can't directly connect to the service, you need to configure the Log Analytics agent to connect to Azure Monitor through the Log Analytics gateway. You can download the Log Analytics gateway from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=52666).
## <a name="prerequisites"></a>Prerequisites
- Requires the [Insight and Analytics](https://www.microsoft.com/cloud-platform/operations-management-suite-pricing) solution offer.
- If you're using the previous version of the Wire Data solution, you must first remove it. However, all data captured through the original Wire Data solution is still available in Wire Data 2.0 and log search.
- Administrator privileges are required to install or uninstall the Dependency Agent.
- The Dependency Agent must be installed on a computer with a 64-bit operating system.
### <a name="operating-systems"></a>Operating systems
The following sections list the supported operating systems for the Dependency Agent. The Wire Data solution doesn't support 32-bit architectures for any operating system.
#### <a name="windows-server"></a>Windows Server
- Windows Server 2019
- Windows Server 2016 1803
- Windows Server 2016
- Windows Server 2012 R2
- Windows Server 2012
- Windows Server 2008 R2 SP1
#### <a name="windows-desktop"></a>Windows desktop
- Windows 10 1803
- Windows 10
- Windows 8.1
- Windows 8
- Windows 7
#### <a name="supported-linux-operating-systems"></a>Supported Linux operating systems
The following sections list the supported operating systems for the Dependency Agent on Linux.
- Only default and SMP Linux kernel releases are supported.
- Nonstandard kernel releases, such as PAE and Xen, aren't supported for any Linux distribution. For example, a system with the release string "2.6.16.21-0.8-xen" isn't supported.
- Custom kernels, including recompiles of standard kernels, aren't supported.
##### <a name="red-hat-linux-7"></a>Red Hat Linux 7
| OS version | Kernel version |
|:--|:--|
| 7.4 | 3.10.0-693 |
| 7.5 | 3.10.0-862 |
| 7.6 | 3.10.0-957 |
##### <a name="red-hat-linux-6"></a>Red Hat Linux 6
| OS version | Kernel version |
|:--|:--|
| 6.9 | 2.6.32-696 |
| 6.10 | 2.6.32-754 |
##### <a name="centosplus"></a>CentOSPlus
| OS version | Kernel version |
|:--|:--|
| 6.9 | 2.6.32-696.18.7<br>2.6.32-696.30.1 |
| 6.10 | 2.6.32-696.30.1<br>2.6.32-754.3.5 |
##### <a name="ubuntu-server"></a>Ubuntu Server
| OS version | Kernel version |
|:--|:--|
| Ubuntu 18.04 | kernel 4.15.\*<br>4.18\* |
| Ubuntu 16.04.3 | kernel 4.15.\* |
| 16.04 | 4.4.\*<br>4.8.\*<br>4.10.\*<br>4.11.\*<br>4.13.\* |
| 14.04 | 3.13.\*<br>4.4.\* |
##### <a name="suse-linux-11-enterprise-server"></a>SUSE Linux 11 Enterprise Server
| OS version | Kernel version |
|:--|:--|
| 11 SP4 | 3.0.\* |
##### <a name="suse-linux-12-enterprise-server"></a>SUSE Linux 12 Enterprise Server
| OS version | Kernel version |
|:--|:--|
| 12 SP2 | 4.4.\* |
| 12 SP3 | 4.4.\* |
### <a name="dependency-agent-downloads"></a>Dependency Agent downloads
| File | Operating system | Version | SHA-256 |
|:--|:--|:--|:--|
| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.7.4 | A111B92AB6CF28EB68B696C60FE51F980BFDFF78C36A900575E17083972989E0 |
| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.7.4 | AB58F3DB8B1C3DEE7512690E5A65F1DFC41B43831543B5C040FCCE8390F2282C |
## <a name="configuration"></a>Configuration
Perform the following steps to configure the Wire Data solution for your workspaces.
1. Enable the Wire Data 2.0 solution from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.WireData2OMS?tab=Overview) or by using the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md).
2. Install the Dependency Agent on each computer where you want to get data. The Dependency Agent can monitor connections to immediate neighbors, so you might not need an agent on every computer.
> [!NOTE]
> You can't add the previous version of the Wire Data solution to new workspaces. If you have the original Wire Data solution enabled, you can continue to use it. However, to use Wire Data 2.0, you must first remove the original version.
>
### <a name="install-the-dependency-agent-on-windows"></a>Install the Dependency Agent on Windows
Administrator privileges are required to install or uninstall the agent.
The Dependency Agent is installed on computers running Windows through InstallDependencyAgent-Windows.exe. If you run this executable file without any options, it starts a wizard that you can follow to install the agent interactively.
Use the following steps to install the Dependency Agent on each computer running Windows:
1. Install the Log Analytics agent following the steps in [Connect Windows computers to the Log Analytics service in Azure](../agents/agent-windows.md).
2. Download the Windows Dependency Agent using the link in the previous section, and then run it by using the following command: `InstallDependencyAgent-Windows.exe`
3. Follow the wizard to install the agent.
4. If the Dependency Agent fails to start, check the logs for detailed error information. On Windows agents, the log directory is %Programfiles%\Microsoft Dependency Agent\logs.
#### <a name="windows-command-line"></a>Windows command line
Use options from the following table to install from a command line. To see a list of the installation flags, run the installer by using the `/?` flag as follows:
InstallDependencyAgent-Windows.exe /?
| **Flag** | **Description** |
| --- | --- |
| <code>/?</code> | Get a list of the command-line options. |
| <code>/S</code> | Perform a silent installation with no user prompts. |
Files for the Windows Dependency Agent are placed in C:\Program Files\Microsoft Dependency Agent by default.
### <a name="install-the-dependency-agent-on-linux"></a>Install the Dependency Agent on Linux
Root access is required to install or configure the agent.
The Dependency Agent is installed on Linux computers through InstallDependencyAgent-Linux64.bin, a shell script with a self-extracting binary. You can run the file by using _sh_ or add execute permissions to the file itself.
Use the following steps to install the Dependency Agent on each Linux computer:
1. Install the Log Analytics agent following the steps in [Configure the Log Analytics agent for Linux computers in a hybrid environment](../vm/quick-collect-linux-computer.md#obtain-workspace-id-and-key).
2. Download the Linux Dependency Agent using the link in the previous section, and then install it as root by using the command InstallDependencyAgent-Linux64.bin.
3. If the Dependency Agent fails to start, check the logs for detailed error information. On Linux agents, the log directory is /var/opt/microsoft/dependency-agent/log.
To see a list of the installation flags, run the installation program with the `-help` flag as follows:
```
InstallDependencyAgent-Linux64.bin -help
```
| **Flag** | **Description** |
| --- | --- |
| <code>-help</code> | Get a list of command-line options. |
| <code>-s</code> | Perform a silent installation with no user prompts. |
| <code>--check</code> | Check permissions and the operating system, but don't install the agent. |
Files for the Dependency agent are placed in the following directories:
| **Files** | **Location** |
| --- | --- |
| Core files | /opt/microsoft/dependency-agent |
| Log files | /var/opt/microsoft/dependency-agent/log |
| Config files | /etc/opt/microsoft/dependency-agent/config |
| Service executable files | /opt/microsoft/dependency-agent/bin/microsoft-dependency-agent<br><br>/opt/microsoft/dependency-agent/bin/microsoft-dependency-agent-manager |
| Binary storage files | /var/opt/microsoft/dependency-agent/storage |
### <a name="installation-script-examples"></a>Installation script examples
To easily deploy the Dependency agent on many servers at once, it helps to use a script. You can use the following script examples to download and install the Dependency agent on either Windows or Linux.
#### <a name="powershell-script-for-windows"></a>PowerShell script for Windows
```powershell
Invoke-WebRequest "https://aka.ms/dependencyagentwindows" -OutFile InstallDependencyAgent-Windows.exe
.\InstallDependencyAgent-Windows.exe /S
```
#### <a name="shell-script-for-linux"></a>Shell script for Linux
```
wget --content-disposition https://aka.ms/dependencyagentlinux -O InstallDependencyAgent-Linux64.bin
```
```
sh InstallDependencyAgent-Linux64.bin -s
```
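When the same deployment script has to serve both platforms, branching on the OS first keeps it to one file. A minimal sketch — the installer file names and silent-install flags are the ones from the scripts above; everything else is illustrative:

```shell
#!/bin/sh
# Print the silent-install command appropriate for this host's OS.
installer_command() {
  case "$(uname -s)" in
    Linux) echo "sh InstallDependencyAgent-Linux64.bin -s" ;;
    *)     echo "InstallDependencyAgent-Windows.exe /S" ;;
  esac
}

installer_command
```

In a real fleet-wide rollout you would run the printed command (after downloading the matching installer) rather than just echoing it.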
### <a name="desired-state-configuration"></a>Desired State Configuration
To deploy the Dependency agent via Desired State Configuration (DSC), you can use the xPSDesiredStateConfiguration module with a snippet of code like the following.
```powershell
Import-DscResource -ModuleName xPSDesiredStateConfiguration
$DAPackageLocalPath = "C:\InstallDependencyAgent-Windows.exe"
Node $NodeName
{
# Download and install the Dependency agent
xRemoteFile DAPackage
{
Uri = "https://aka.ms/dependencyagentwindows"
DestinationPath = $DAPackageLocalPath
DependsOn = "[Package]OI"
}
xPackage DA
{
Ensure = "Present"
Name = "Dependency Agent"
Path = $DAPackageLocalPath
Arguments = '/S'
ProductId = ""
        InstalledCheckRegKey = "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\DependencyAgent"
InstalledCheckRegValueName = "DisplayName"
InstalledCheckRegValueData = "Dependency Agent"
}
}
```
### <a name="uninstall-the-dependency-agent"></a>Uninstall the Dependency agent
The following sections provide details on how to uninstall the Dependency agent.
#### <a name="uninstall-the-dependency-agent-on-windows"></a>Uninstall the Dependency agent on Windows
An administrator can uninstall the Dependency agent for Windows through Control Panel.
An administrator can also run %Programfiles%\Microsoft Dependency Agent\Uninstall.exe to uninstall the Dependency agent.
#### <a name="uninstall-the-dependency-agent-on-linux"></a>Uninstall the Dependency agent on Linux
To completely uninstall the Dependency agent from Linux, you must remove the agent itself and the connector that is installed automatically with the agent. You can uninstall both at the same time by using the following single command.
```
rpm -e dependency-agent dependency-agent-connector
```
## <a name="management-packs"></a>Management packs
When the Wire Data solution is activated in a Log Analytics workspace, a 300-KB management pack is sent to all the servers in that workspace. If you use System Center Operations Manager agents in a [connected management group](../agents/om-agents.md), the Dependency Monitor management pack is deployed from System Center Operations Manager. If the agents are directly connected, Azure Monitor delivers the management pack.
The management pack is named Microsoft.IntelligencePacks.ApplicationDependencyMonitor. It's written to %Programfiles%\Microsoft Monitoring Agent\Agent\Health Service State\Management Packs. The data source that the management pack uses is %Program files%\Microsoft Monitoring Agent\Agent\Health Service State\Resources\<AutoGeneratedID>\Microsoft.EnterpriseManagement.Advisor.ApplicationDependencyMonitorDataSource.dll.
## <a name="using-the-solution"></a>Using the solution
Use the following information to install and configure the solution.
- The Wire Data solution acquires data from computers running Windows Server 2012 R2, Windows 8.1, and later operating systems.
- Microsoft .NET Framework 4.0 or later is required on computers from which you want to acquire wire data.
- Add the Wire Data solution to your Log Analytics workspace by using the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md). No further configuration is required.
- If you want to view wire data for a specific solution, that solution must already be added to your workspace.
After agents are installed and the solution is added, the Wire Data 2.0 tile appears in your workspace.

## <a name="using-the-wire-data-20-solution"></a>Using the Wire Data 2.0 solution
On the **Overview** page for your Log Analytics workspace in the Azure portal, click the **Wire Data 2.0** tile to open the Wire Data dashboard. The dashboard includes the blades listed in the following table. Each blade lists up to 10 items matching the specified criteria, such as scope and time range. You can run a log search that returns all records by clicking the blade header or by clicking **See all** at the bottom of the blade.
| **Blade** | **Description** |
| --- | --- |
| Agents capturing network traffic | Shows the number of agents that are capturing network traffic and lists the top 10 computers capturing traffic. Click the number to run a log search for <code>WireData \| summarize sum(TotalBytes) by Computer \| take 500000</code>. Click a computer in the list to run a log search that returns the total number of bytes it captured. |
| Local subnets | Shows the number of local subnets that agents have discovered. Click the number to run a log search for <code>WireData \| summarize sum(TotalBytes) by LocalSubnet</code>, which lists all subnets with the number of bytes sent over each one. Click a subnet in the list to run a log search that returns the total number of bytes sent over that subnet. |
| Application-level protocols | Shows the number of application-level protocols in use, as discovered by agents. Click the number to run a log search for <code>WireData \| summarize sum(TotalBytes) by ApplicationProtocol</code>. Click a protocol in the list to run a log search that returns the total number of bytes sent using that protocol. |

Use the **Agents capturing network traffic** blade to determine how much network bandwidth is being consumed by computers. This blade can help you find the _chattiest_ computer in your environment. Such computers could be overloaded, acting abnormally, or using more network resources than usual.

Similarly, you can use the **Local subnets** blade to determine how much network traffic is moving through your subnets. Users often create subnets around critical areas of their applications. This blade offers a way to view those areas.

The **Application-level protocols** blade helps you see which protocols are in use. For example, you might expect that SSH is not used in your network environment. This blade lets you quickly confirm whether that is the case.

It's also useful to know whether protocol traffic is increasing or decreasing over time. For example, if the amount of data transmitted by an application is growing, that might be something you should be aware of.
## <a name="input-data"></a>Input data
Wire data collects metadata about network traffic by using the agents that you have enabled. Each agent sends data about every 15 seconds.
## <a name="output-data"></a>Output data
A record with a type of _WireData_ is created for each type of input data. These records have the properties in the following table.
| Property | Description |
|---|---|
| Computer | Computer name where the data was collected |
| TimeGenerated | Time of the record |
| LocalIP | IP address of the local computer |
| SessionState | Connected or disconnected |
| ReceivedBytes | Number of bytes received |
| ProtocolName | Name of the network protocol used |
| IPVersion | IP version |
| Direction | Inbound or outbound |
| MaliciousIP | IP address of a known malicious source |
| Severity | Suspected malware severity |
| RemoteIPCountry | Country/region of the remote IP address |
| ManagementGroupName | Name of the Operations Manager management group |
| SourceSystem | Source where the data was collected |
| SessionStartTime | Start time of the session |
| SessionEndTime | End time of the session |
| LocalSubnet | Subnet where the data was collected |
| LocalPortNumber | Local port number |
| RemoteIP | IP address used by the remote computer |
| RemotePortNumber | Port number used by the remote IP address |
| SessionID | A unique value that identifies a communications session between two IP addresses |
| SentBytes | Number of bytes sent |
| TotalBytes | Total number of bytes sent during the session |
| ApplicationProtocol | Type of network protocol used |
| ProcessID | Windows process ID |
| ProcessName | Path and file name of the process |
| RemoteIPLongitude | IP longitude value |
| RemoteIPLatitude | IP latitude value |
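The dashboard blades above all boil down to a `summarize sum(TotalBytes) by <column>` aggregation over these records. As an illustrative sketch only (not part of the solution itself), the same aggregation over exported WireData records could be written like this in Python; the sample records and their values are invented:

```python
from collections import defaultdict

# Hypothetical sample of exported WireData records (invented values).
records = [
    {"Computer": "web01", "ApplicationProtocol": "HTTP", "TotalBytes": 1200},
    {"Computer": "web01", "ApplicationProtocol": "SSH",  "TotalBytes": 300},
    {"Computer": "db01",  "ApplicationProtocol": "SQL",  "TotalBytes": 2500},
]

def total_bytes_by(records, key):
    """Mirror of 'WireData | summarize sum(TotalBytes) by <key>'."""
    totals = defaultdict(int)
    for record in records:
        totals[record[key]] += record["TotalBytes"]
    return dict(totals)

print(total_bytes_by(records, "Computer"))             # {'web01': 1500, 'db01': 2500}
print(total_bytes_by(records, "ApplicationProtocol"))  # {'HTTP': 1200, 'SSH': 300, 'SQL': 2500}
```

Grouping by `Computer` reproduces the "Agents capturing network traffic" view; swapping the key to `LocalSubnet` or `ApplicationProtocol` reproduces the other two blades.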
## <a name="next-steps"></a>Next steps
- [Search logs](../logs/log-query-overview.md) to view detailed wire data search records.
| 65.402381
| 1,042
| 0.793331
|
rus_Cyrl
| 0.954848
|
a6a533bc43d4dcd7b0b6d164002a34bf2d08b231
| 1,632
|
md
|
Markdown
|
terraform_init.md
|
Galser/ptfe-prodmount-vc-cloud-backuprestore
|
6e86f1b8854143028d2ffee529ebb225f983ecbf
|
[
"MIT"
] | null | null | null |
terraform_init.md
|
Galser/ptfe-prodmount-vc-cloud-backuprestore
|
6e86f1b8854143028d2ffee529ebb225f983ecbf
|
[
"MIT"
] | 1
|
2019-11-01T15:09:48.000Z
|
2019-11-01T15:09:48.000Z
|
terraform_init.md
|
Galser/ptfe-prodmount-vc-cloud-backuprestore
|
6e86f1b8854143028d2ffee529ebb225f983ecbf
|
[
"MIT"
] | null | null | null |
# Example of `terraform init` output
```bash
terraform init
Initializing modules...
- dns_cloudflare in modules/dns_cloudflare
- sslcert_letsencrypt in modules/sslcert_letsencrypt
- vpc_aws in modules/vpc_aws
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "cloudflare" (terraform-providers/cloudflare) 2.0.1...
- Downloading plugin for provider "tls" (hashicorp/tls) 2.1.1...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "acme" (terraform-providers/acme) 1.5.0...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.33.0...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 2.33"
* provider.cloudflare: version = "~> 2.0"
* provider.local: version = "~> 1.4"
* provider.tls: version = "~> 2.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
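The warning in the output above can be addressed by pinning the suggested constraint strings in the configuration. A sketch under the assumption that the providers are declared in the root module (Terraform 0.12-era syntax, matching the output):

```hcl
# Constraint strings suggested by `terraform init` above.
provider "aws" {
  version = "~> 2.33"
}

provider "cloudflare" {
  version = "~> 2.0"
}

provider "local" {
  version = "~> 1.4"
}

provider "tls" {
  version = "~> 2.1"
}
```

With these constraints in place, a later `terraform init` will not silently upgrade to a new major version of any of these providers.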
| 38.857143
| 88
| 0.773284
|
eng_Latn
| 0.985872
|
a6a53a5c5f24b2392463a029cf37fcd0962870fb
| 45
|
md
|
Markdown
|
README.md
|
zheddie/zgprogram2
|
89e89f54ead7a1522cb219710398133a07f1498e
|
[
"MIT"
] | null | null | null |
README.md
|
zheddie/zgprogram2
|
89e89f54ead7a1522cb219710398133a07f1498e
|
[
"MIT"
] | null | null | null |
README.md
|
zheddie/zgprogram2
|
89e89f54ead7a1522cb219710398133a07f1498e
|
[
"MIT"
] | null | null | null |
# zgprogram2
javascript version of zgprogram
| 15
| 31
| 0.844444
|
lim_Latn
| 0.262763
|
a6a5612bdfb65efb1594eae71c3f6a9709ae9652
| 93
|
md
|
Markdown
|
README.md
|
nguo/ez-read-action-button
|
1c386b63f69c36fe54d43cfc142193c30026531e
|
[
"MIT"
] | null | null | null |
README.md
|
nguo/ez-read-action-button
|
1c386b63f69c36fe54d43cfc142193c30026531e
|
[
"MIT"
] | null | null | null |
README.md
|
nguo/ez-read-action-button
|
1c386b63f69c36fe54d43cfc142193c30026531e
|
[
"MIT"
] | null | null | null |
# ez-read-action-button
WoW Classic addon to make action button text and state more readable
| 31
| 68
| 0.806452
|
eng_Latn
| 0.982914
|
a6a5df2821e2e2a73a3e36eee8f22388a6f528f9
| 8,773
|
md
|
Markdown
|
articles/active-directory/saas-apps/intralinks-tutorial.md
|
ningchencontact/azure-docs.zh-tw
|
85eb44c48f6993d41f51f680ad19190e0a1cac0b
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/active-directory/saas-apps/intralinks-tutorial.md
|
ningchencontact/azure-docs.zh-tw
|
85eb44c48f6993d41f51f680ad19190e0a1cac0b
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/active-directory/saas-apps/intralinks-tutorial.md
|
ningchencontact/azure-docs.zh-tw
|
85eb44c48f6993d41f51f680ad19190e0a1cac0b
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: 'Tutorial: Azure Active Directory integration with Intralinks | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and Intralinks.
services: active-directory
documentationCenter: na
author: jeevansd
manager: mtillman
ms.assetid: 147f2bf9-166b-402e-adc4-4b19dd336883
ms.service: active-directory
ms.component: saas-app-tutorial
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 06/23/2017
ms.author: jeedes
ms.openlocfilehash: 44cae95cfd01f8d6fbd6ddb4a11e9af290042ffa
ms.sourcegitcommit: 387d7edd387a478db181ca639db8a8e43d0d75f7
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 08/10/2018
ms.locfileid: "40038069"
---
# <a name="tutorial-azure-active-directory-integration-with-intralinks"></a>Tutorial: Azure Active Directory integration with Intralinks
In this tutorial, you learn how to integrate Intralinks with Azure Active Directory (Azure AD).
Integrating Intralinks with Azure AD provides you with the following benefits:
- You can control in Azure AD who has access to Intralinks
- You can enable your users to automatically sign in to Intralinks with their Azure AD accounts (single sign-on)
- You can manage your accounts in one central location, the Azure portal
If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## <a name="prerequisites"></a>Prerequisites
To configure Azure AD integration with Intralinks, you need the following items:
- An Azure AD subscription
- An Intralinks single sign-on enabled subscription
> [!NOTE]
> To test the steps in this tutorial, we do not recommend using a production environment.
To test the steps in this tutorial, you should follow these recommendations:
- Do not use your production environment, unless this is necessary.
- If you don't have an Azure AD trial environment, you can get a one-month trial [here](https://azure.microsoft.com/pricing/free-trial/).
## <a name="scenario-description"></a>Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment. The scenario outlined in this tutorial consists of two main building blocks:
1. Adding Intralinks from the gallery
1. Configuring and testing Azure AD single sign-on
## <a name="adding-intralinks-from-the-gallery"></a>Adding Intralinks from the gallery
To configure the integration of Intralinks into Azure AD, you need to add Intralinks from the gallery to your list of managed SaaS apps.
**To add Intralinks from the gallery, perform the following steps:**
1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation pane, click the **Azure Active Directory** icon.
![Active Directory][1]
1. Navigate to **Enterprise applications**. Then go to **All applications**.
![Applications][2]
1. To add a new application, click the **New application** button at the top of the dialog.
![Applications][3]
1. In the search box, type **Intralinks**.

1. In the results panel, select **Intralinks**, and then click the **Add** button to add the application.

## <a name="configuring-and-testing-azure-ad-single-sign-on"></a>Configuring and testing Azure AD single sign-on
In this section, you configure and test Azure AD single sign-on with Intralinks based on a test user called "Britta Simon".
For single sign-on to work, Azure AD needs to know what the counterpart user in Intralinks is to a user in Azure AD. In other words, a link relationship between an Azure AD user and the related user in Intralinks needs to be established.
In Intralinks, assign the value of the **user name** in Azure AD as the value of the **Username** to establish the link relationship.
To configure and test Azure AD single sign-on with Intralinks, you need to complete the following building blocks:
1. **[Configuring Azure AD single sign-on](#configuring-azure-ad-single-sign-on)** - to enable your users to use this feature.
1. **[Creating an Azure AD test user](#creating-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
1. **[Creating an Intralinks test user](#creating-an-intralinks-test-user)** - to have a counterpart of Britta Simon in Intralinks that is linked to the Azure AD representation of the user.
1. **[Assigning the Azure AD test user](#assigning-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
1. **[Testing single sign-on](#testing-single-sign-on)** - to verify whether the configuration works.
### <a name="configuring-azure-ad-single-sign-on"></a>Configuring Azure AD single sign-on
In this section, you enable Azure AD single sign-on in the Azure portal and configure single sign-on in your Intralinks application.
**To configure Azure AD single sign-on with Intralinks, perform the following steps:**
1. In the Azure portal, on the **Intralinks** application integration page, click **Single sign-on**.
![Configure single sign-on][4]
1. On the **Single sign-on** dialog, select **SAML-based Sign-on** as **Mode** to enable single sign-on.
![Configure single sign-on](./media/intralinks-tutorial/tutorial_intralinks_samlbase.png)
1. In the **Intralinks Domain and URLs** section, perform the following step:
![Configure single sign-on](./media/intralinks-tutorial/tutorial_intralinks_url.png)
In the **Sign-on URL** textbox, type a URL using the following pattern: `https://<company name>.Intralinks.com/?PartnerIdpId=https://sts.windows.net/<AzureADTenantID>`
> [!NOTE]
> This value is not real. Update this value with the actual sign-on URL. Contact the [Intralinks support team](https://www.intralinks.com/contact) to get this value.
1. On the **SAML Signing Certificate** section, click **Metadata XML**, and then save the metadata file on your computer.
![Configure single sign-on](./media/intralinks-tutorial/tutorial_intralinks_certificate.png)
1. Click the **Save** button.
![Configure single sign-on](./media/intralinks-tutorial/tutorial_general_400.png)
1. To configure single sign-on on the **Intralinks** side, you need to send the downloaded **Metadata XML** to the [Intralinks support team](https://www.intralinks.com/contact). They apply this setting so that the SAML SSO connection is set properly on both sides.
> [!TIP]
> You can now read a concise version of these instructions inside the [Azure portal](https://portal.azure.com) while you are setting up the app! After adding this app from the **Active Directory > Enterprise Applications** section, simply click the **Single Sign-On** tab and access the embedded documentation through the **Configuration** section at the bottom. You can read more about the embedded documentation feature here: [Azure AD embedded documentation]( https://go.microsoft.com/fwlink/?linkid=845985)
### <a name="creating-an-azure-ad-test-user"></a>Creating an Azure AD test user
The objective of this section is to create a test user in the Azure portal called Britta Simon.
![Create an Azure AD user][100]
**To create a test user in Azure AD, perform the following steps:**
1. In the **Azure portal**, on the left navigation pane, click the **Azure Active Directory** icon.
![Creating an Azure AD test user](./media/intralinks-tutorial/create_aaduser_01.png)
1. To display the list of users, go to **Users and groups**, and then click **All users**.
![Creating an Azure AD test user](./media/intralinks-tutorial/create_aaduser_02.png)
1. To open the **User** dialog, click **Add** at the top of the dialog.
![Creating an Azure AD test user](./media/intralinks-tutorial/create_aaduser_03.png)
1. On the **User** dialog page, perform the following steps:
![Creating an Azure AD test user](./media/intralinks-tutorial/create_aaduser_04.png)
a. In the **Name** textbox, type **BrittaSimon**.
b. In the **User name** textbox, type the **email address** of BrittaSimon.
c. Select **Show Password** and write down the value of the **Password**.
d. Click **Create** at the bottom of the page.
### <a name="creating-an-intralinks-test-user"></a>Creating an Intralinks test user
In this section, you create a user called Britta Simon in Intralinks. Work with the [Intralinks support team](https://www.intralinks.com/contact) to add the user in the Intralinks platform.
### <a name="assigning-the-azure-ad-test-user"></a>Assigning the Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting her access to Intralinks.
![Assign user][200]
**To assign Britta Simon to Intralinks, perform the following steps:**
1. In the Azure portal, open the applications view, navigate to the directory view, go to **Enterprise applications**, and then click **All applications**.
![Assign user][201]
1. In the applications list, select **Intralinks**.
![The Intralinks link in the applications list](./media/intralinks-tutorial/tutorial_intralinks_app.png)
1. In the menu on the left, click **Users and groups**.
![Assign user][202]
1. Click the **Add** button. Then select **Users and groups** on the **Add Assignment** dialog.
![Assign user][203]
1. On the **Users and groups** dialog, select **Britta Simon** in the Users list.
1. Click the **Select** button on the **Users and groups** dialog.
1. Click the **Assign** button on the **Add Assignment** dialog.
### <a name="add-intralinks-via-or-elite-application"></a>Add Intralinks VIA or Elite application
Intralinks uses the same SSO identity platform for all their other Intralinks applications, except the Deal Nexus application. So if you are planning to use any other Intralinks application, you first have to configure SSO for one main Intralinks application by using the procedure above.
Once that is configured, you can follow the procedure below to add, in your tenant, another Intralinks application that leverages this main application for SSO.
>[!NOTE]
>This functionality is available only to Azure AD Premium SKU customers, and not to free or basic SKU customers.
1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation pane, click the **Azure Active Directory** icon.
![Active Directory][1]
1. Navigate to **Enterprise applications**. Then go to **All applications**.
![Applications][2]
1. To add a new application, click the **New application** button at the top of the dialog.
![Applications][3]
1. In the search box, type **Intralinks**.

1. On the **Intralinks add app** page, perform the following steps:

a. In the **Name** textbox, type an appropriate name for the application, such as **Intralinks Elite**.
b. Click the **Add** button.
1. In the Azure portal, on the **Intralinks** application integration page, click **Single sign-on**.
![Configure single sign-on][4]
1. On the **Single sign-on** dialog, select **Linked Sign-on** as **Mode**.
![Configure single sign-on](./media/intralinks-tutorial/tutorial_intralinks_linked.png)
1. Get the SP-initiated SSO URL of the other Intralinks application from the [Intralinks team](https://www.intralinks.com/contact), and then enter it in **Configure Sign-on URL** as follows.
![Configure single sign-on](./media/intralinks-tutorial/tutorial_intralinks_linkedsignon.png)
In the **Sign-on URL** textbox, type the URL your users use to sign in to your Intralinks application, using the following pattern:
`https://<company name>.Intralinks.com/?PartnerIdpId=https://sts.windows.net/<AzureADTenantID>`
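To illustrate how the pattern expands, here is a short Python sketch; the company name and tenant ID below are placeholders, not real values:

```python
def intralinks_signon_url(company: str, tenant_id: str) -> str:
    """Build the SP sign-on URL from the pattern shown in this tutorial."""
    return (
        f"https://{company}.Intralinks.com/"
        f"?PartnerIdpId=https://sts.windows.net/{tenant_id}"
    )

# Placeholder values for illustration only.
print(intralinks_signon_url("contoso", "00000000-0000-0000-0000-000000000000"))
```

Substitute your actual company name and Azure AD tenant ID to obtain the URL to enter in the textbox.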
1. Click the **Save** button.
![Configure single sign-on](./media/intralinks-tutorial/tutorial_general_400.png)
1. Assign the application to a user or group, as shown in the **[Assigning the Azure AD test user](#assigning-the-azure-ad-test-user)** section.
### <a name="testing-single-sign-on"></a>Testing single sign-on
In this section, you test your Azure AD single sign-on configuration by using the Access Panel.
When you click the **Intralinks** tile in the Access Panel, you should get automatically signed in to your Intralinks application.
For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/active-directory-saas-access-panel-introduction.md).
## <a name="additional-resources"></a>Additional resources
* [List of tutorials on how to integrate SaaS apps with Azure Active Directory](tutorial-list.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
<!--Image references-->
[1]: ./media/intralinks-tutorial/tutorial_general_01.png
[2]: ./media/intralinks-tutorial/tutorial_general_02.png
[3]: ./media/intralinks-tutorial/tutorial_general_03.png
[4]: ./media/intralinks-tutorial/tutorial_general_04.png
[100]: ./media/intralinks-tutorial/tutorial_general_100.png
[200]: ./media/intralinks-tutorial/tutorial_general_200.png
[201]: ./media/intralinks-tutorial/tutorial_general_201.png
[202]: ./media/intralinks-tutorial/tutorial_general_202.png
[203]: ./media/intralinks-tutorial/tutorial_general_203.png
| 31.220641
| 247
| 0.732133
|
yue_Hant
| 0.874692
|
a6a657652457ccc8d5ada1561d106d2dc3f07556
| 128
|
md
|
Markdown
|
npm/linux-arm-gnueabihf/README.md
|
jtomchak/cli-goodbye
|
50adffd1a4d8031d7a90b505630d22a5e90339b7
|
[
"MIT"
] | null | null | null |
npm/linux-arm-gnueabihf/README.md
|
jtomchak/cli-goodbye
|
50adffd1a4d8031d7a90b505630d22a5e90339b7
|
[
"MIT"
] | null | null | null |
npm/linux-arm-gnueabihf/README.md
|
jtomchak/cli-goodbye
|
50adffd1a4d8031d7a90b505630d22a5e90339b7
|
[
"MIT"
] | null | null | null |
# `@jtomchak/cli-goodbye-linux-arm-gnueabihf`
This is the **armv7-unknown-linux-gnueabihf** binary for `@jtomchak/cli-goodbye`
| 32
| 80
| 0.757813
|
eng_Latn
| 0.569299
|
a6a682b7555d46131bfffdafbe66cf0d6e632f7e
| 438
|
md
|
Markdown
|
vol03.TCP-IP.illustrated.md
|
JamesKing9/NewHeart-NewLife
|
f0110308f3c25258a14b53c54a0694c9aa05c0f2
|
[
"Apache-2.0"
] | null | null | null |
vol03.TCP-IP.illustrated.md
|
JamesKing9/NewHeart-NewLife
|
f0110308f3c25258a14b53c54a0694c9aa05c0f2
|
[
"Apache-2.0"
] | null | null | null |
vol03.TCP-IP.illustrated.md
|
JamesKing9/NewHeart-NewLife
|
f0110308f3c25258a14b53c54a0694c9aa05c0f2
|
[
"Apache-2.0"
] | null | null | null |
```c
#include "cliserv.h"

int
main(int argc, char *argv[])
{                               /* simple TCP client */
    struct sockaddr_in serv;
    char request[REQUEST], reply[REPLY];
    int sockfd, n;

    if (argc != 2)
        err_quit("usage: tcpcli <IP address of server>");

    if ((sockfd = socket(PF_INET, SOCK_STREAM, 0)) < 0)
        err_sys("socket error");

    memset(&serv, 0, sizeof(serv));
    serv.sin_family = AF_INET;
    /* Remainder completed from the excerpt's context; TCP_SERV_PORT,
     * err_sys(), and the buffer sizes come from "cliserv.h". */
    serv.sin_addr.s_addr = inet_addr(argv[1]);
    serv.sin_port = htons(TCP_SERV_PORT);

    if (connect(sockfd, (struct sockaddr *) &serv, sizeof(serv)) < 0)
        err_sys("connect error");

    write(sockfd, request, REQUEST);    /* send the request */
    if ((n = read(sockfd, reply, REPLY)) < 0)
        err_sys("read error");
    exit(0);
}
```
| 20.857143
| 53
| 0.568493
|
eng_Latn
| 0.567336
|
a6a6c4ec09c50daa21b1bf6a77cbe67ac5570486
| 1,751
|
md
|
Markdown
|
integrations/manual/smtp/README.md
|
slavivanov/Integrations
|
6d7051c7ed07888aa0f83b6b070ca9627ead6be2
|
[
"MIT"
] | 2
|
2019-01-09T20:57:08.000Z
|
2019-01-09T20:58:25.000Z
|
integrations/manual/smtp/README.md
|
slavivanov/Integrations
|
6d7051c7ed07888aa0f83b6b070ca9627ead6be2
|
[
"MIT"
] | null | null | null |
integrations/manual/smtp/README.md
|
slavivanov/Integrations
|
6d7051c7ed07888aa0f83b6b070ca9627ead6be2
|
[
"MIT"
] | null | null | null |
# @datafire/smtp
Client library for SMTP
## Installation and Usage
```bash
npm install --save @datafire/smtp
```
```js
let smtp = require('@datafire/smtp').create({
host: "",
port: "",
username: "",
password: ""
});
smtp.send({
"envelope": {
"from": "",
"to": []
},
"message": ""
}).then(data => {
console.log(data);
});
```
## Description
Send e-mail using the SMTP protocol
## Actions
### send
```js
smtp.send({
"envelope": {
"from": "",
"to": []
},
"message": ""
}, context)
```
#### Input
* input `object`
* envelope **required** `object`
* from **required** `string`: The address of the message's sender
* to **required** `array`: The addresses of all recipients
* items `string`
* size `integer`: (optional) predicted message size in bytes
* use8BitMime `boolean`: If true then inform the server that this message might contain bytes outside 7bit ascii range
* dsn `object`
* ret `string` (values: FULL, HDRS): return either the full message (FULL) or only headers (HDRS)
* envid `string`: Sender's envelope identifier, for tracking
* notify `string`: When to send a DSN. Multiple options are OK - array or comma delimited. NEVER must appear by itself. Available options: NEVER, SUCCESS, FAILURE, DELAY
* orcpt `string`: Original recipient
* message **required** `string`: The message to send. All newlines are converted to \r\n and all dots are escaped automatically.
#### Output
* output `object`
* accepted `array`
* items `string`
* rejected `array`
* items `string`
* envelopeTime `integer`
* messageTime `integer`
* messageSize `integer`
* response `string`
## Definitions
*This integration has no definitions*
| 22.164557
| 175
| 0.638492
|
eng_Latn
| 0.879779
|
a6a6cac3e020788a61f89c82598c3e22d7c6134c
| 191
|
md
|
Markdown
|
websockets/README.md
|
C11R11/typescriptTrainStuff
|
69205bc288510df89ba41daa8f75be1f234a4bd1
|
[
"MIT"
] | null | null | null |
websockets/README.md
|
C11R11/typescriptTrainStuff
|
69205bc288510df89ba41daa8f75be1f234a4bd1
|
[
"MIT"
] | null | null | null |
websockets/README.md
|
C11R11/typescriptTrainStuff
|
69205bc288510df89ba41daa8f75be1f234a4bd1
|
[
"MIT"
] | null | null | null |
# Links
https://tutorialedge.net/typescript/typescript-socket-io-tutorial/
https://blog.postman.com/postman-now-supports-socket-io/
https://blog.postman.com/postman-supports-websocket-apis/
| 31.833333
| 66
| 0.801047
|
kor_Hang
| 0.401367
|
a6a82416e1e0b8ee29721e15698dcb9bdc8de529
| 1,447
|
md
|
Markdown
|
_news/announcement_2.md
|
karllab41/karllab41.original.page
|
398626946a863cf71d9c4752701923a72d9fe499
|
[
"MIT"
] | null | null | null |
_news/announcement_2.md
|
karllab41/karllab41.original.page
|
398626946a863cf71d9c4752701923a72d9fe499
|
[
"MIT"
] | null | null | null |
_news/announcement_2.md
|
karllab41/karllab41.original.page
|
398626946a863cf71d9c4752701923a72d9fe499
|
[
"MIT"
] | null | null | null |
---
layout: post
title: New VLOG posts to come on interview tips and tricks
date: 2018-08-11 16:11:00-0400
inline: false
---
I am currently considering a position at Google, doing machine learning on the core search team. I'm very excited about joining the team. Since it was a pretty long road getting there, look for some new posts in both vlogs and medium.
***
Interviewing for engineering positions is quite difficult, and it's nerve-racking. I worked with a few friends, and that helped me build a lot of perspective on how to pass the interview process.
#### The Machine Learning Interview Checklist
<ul>
<li>coding strategy</li>
<li>coding questions</li>
<li>machine learning</li>
<li>system design</li>
</ul>
Each one of these is important in your algorithm design, theoretical understanding, and the underpinnings of the essential Google interview.
***
One thing I also appreciated was the insider understanding of the process. Google's process, in particular, is exceedingly long and bureaucratic. There was a system in place, people had parts to play, and the procedural grind was set up to be long and arduous.
In a few blog posts, I'll detail what to expect when you're interviewing, when you've finished interviewing, and how to negotiate your salary. Admittedly, I didn't do too hot on the last item there, but by the time you've reached that point, there are tons of resources that are reliable on the web.
| 49.896552
| 299
| 0.766413
|
eng_Latn
| 0.999412
|
a6a834e11582b0c7685308b9708277a871bfe7b0
| 309
|
md
|
Markdown
|
nover05.md
|
lemonoink/novel
|
6ed17ee81cf3ee55ca88322f089d0b57beb6d9dd
|
[
"MIT"
] | null | null | null |
nover05.md
|
lemonoink/novel
|
6ed17ee81cf3ee55ca88322f089d0b57beb6d9dd
|
[
"MIT"
] | null | null | null |
nover05.md
|
lemonoink/novel
|
6ed17ee81cf3ee55ca88322f089d0b57beb6d9dd
|
[
"MIT"
] | null | null | null |
# The Return of the Condor Heroes
> *The Return of the Condor Heroes* (Shen Diao Xia Lü) is a wuxia novel by Jin Yong, serialized in Hong Kong's *Ming Pao* from 1959 to 1961. It comprises forty chapters and is the second part of Jin Yong's "Condor Trilogy".
> The main thread of the novel is the love story between Yang Guo, the orphaned son of Yang Kang, and his teacher Xiaolongnü. From the age of fourteen, Yang Guo trains in martial arts under Xiaolongnü in the Ancient Tomb, and master and disciple grow deeply devoted to each other; yet the treachery of the jianghu and the invasion of the Mongol cavalry keep the lovers from becoming a couple. After enduring a series of hardships and trials, Yang Guo breaks through the constraints of feudal propriety, and he and Xiaolongnü finally change from master and disciple into "heroic companions". Along the way, Yang Guo also clears up his misunderstanding of Guo Jing and Huang Rong, chooses between family vengeance and national crisis, and becomes a true "great hero".
| 61.8
| 209
| 0.815534
|
yue_Hant
| 0.750791
|
a6a842e62563cf25183acad1046750f1ed79c88b
| 3,031
|
md
|
Markdown
|
Interview/Details/16.axios_applicationxwwwformdata.md
|
WJW53/WebNotes
|
315181acc66bb1319df60a47f954b807773ce293
|
[
"MIT"
] | 1
|
2021-01-19T12:47:23.000Z
|
2021-01-19T12:47:23.000Z
|
Interview/Details/16.axios_applicationxwwwformdata.md
|
WJW53/WebNotes
|
315181acc66bb1319df60a47f954b807773ce293
|
[
"MIT"
] | null | null | null |
Interview/Details/16.axios_applicationxwwwformdata.md
|
WJW53/WebNotes
|
315181acc66bb1319df60a47f954b807773ce293
|
[
"MIT"
] | null | null | null |
# axios application/x-www-form-urlencoded key-value parameter issue
> This article looks at why, when axios sends a request with the content type application/x-www-form-urlencoded, the back end may fail to receive the key-value parameters.
## A normal jQuery request
The following code sends a normal request with jQuery:
```js
$.ajax({
url: 'http://127.0.0.1:10111/heartbeat',
type: 'POST',
dataType: 'json',
  headers: {
'Content-Type': 'application/json'
},
data: JSON.stringify(data)
})
```
## Solution 1:
```js
var qs = require('qs');
this.axios.post("orgUserLogin", qs.stringify({
loginName: this.userName,
loginPwd: this.password
})).then(res => {
this.$closeLoading();
let userInfo = JSON.stringify(res.data);
  localStorage.setItem('userInfo', userInfo);
console.log(res.data);
})
.catch(error => {
console.log("Error", error.message);
});
```
## Solution 2:
```js
let param = new URLSearchParams();
param.append("loginName", this.userName);
param.append("loginPwd", this.password);
this.axios
.post("orgUserLogin", param)
.then(res => {
if (res.retcode == "0") {
if (res.data) {
let userInfo = JSON.stringify(res.data);
localStorage.setItem("userInfo", userInfo);
}
}
})
.catch(error => {
});
```
## Conclusion (the author's understanding)
### With application/json
Two requests are sent: first an OPTIONS preflight, then the actual request.
### With application/x-www-form-urlencoded
- The actual request is sent directly, with no preflight.
- If you serialize the data with JSON.stringify(), the serialized string becomes the form's key, so the value is empty.
- If you use import qs from 'qs', the parameters are converted to the GET-style form encoding, i.e. key=value pairs.
- `If you pass a JSON object directly, it is passed to the back end as a string.`
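The difference is easy to see by printing the two request bodies side by side. A small Node.js sketch (the credential values are made up):

```javascript
const creds = { loginName: "britta", loginPwd: "s3cret" };

// What qs.stringify / URLSearchParams put on the wire: key=value pairs
// that an x-www-form-urlencoded back end parses into parameters.
const formBody = new URLSearchParams(creds).toString();
console.log(formBody); // loginName=britta&loginPwd=s3cret

// What JSON.stringify puts on the wire: one string, which an
// x-www-form-urlencoded back end reads as a single key with no value.
const jsonBody = JSON.stringify(creds);
console.log(jsonBody); // {"loginName":"britta","loginPwd":"s3cret"}
```

This is why the same data "arrives" with qs or URLSearchParams but appears empty on the back end when serialized with JSON.stringify under this content type.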
https://blog.csdn.net/hbiao68/article/details/106004100
## My understanding
It depends on the contentType configured on the front end and the back end, and on whether the types of the data being sent, received, and returned match.
0. A form that uploads files must add the attribute enctype="multipart/form-data" to its <form> element.
1. Only multipart/form-data can transmit file data in full.
2. enctype="multipart/form-data" uploads binary data; the values of the form's inputs are transmitted in binary.
3. When you don't write an enctype attribute on a form, a default value is applied for you: enctype="application/x-www-form-urlencoded".
### The form tag's enctype attribute and MIME
The enctype attribute of the form tag controls the form's MIME encoding. It has three possible values:
① application/x-www-form-urlencoded (the default)
② multipart/form-data
③ text/plain
> ① application/x-www-form-urlencoded is the default. You may have seen it in AJAX code: xmlHttp.setRequestHeader("Content-Type","application/x-www-form-urlencoded"); both do the same thing, namely set the form's transfer encoding. Omitting it in AJAX may cause errors, but in an HTML form you can leave out enctype="application/x-www-form-urlencoded", because that is already the default for HTML forms.
> ② multipart/form-data is used to transmit special data types, mainly the non-text content that we upload, such as images or MP3 files.
> ③ text/plain means plain-text transfer; set this encoding type when sending email, or the received content may come out garbled. text/plain is often compared with text/html, but the two are easy to tell apart: the former transmits plain-text files, while the latter is the encoding type for HTML markup. Neither ① nor ③ can be used for file uploads; only multipart/form-data can transmit file data in full.
MIME, mentioned above, stands for "Multipurpose Internet Mail Extensions", a multipurpose Internet mail extension protocol first applied to email systems in 1992 and later adopted by browsers. A server tells the browser the type of the multimedia data it sends by declaring the data's MIME type, so the browser knows which pieces of received content are, say, MP3 files and which are Shockwave files. The server puts a MIME identifier into the transmitted data to tell the browser which plug-in to use to read the relevant file.
Simply put, a MIME type decides which application opens files with a given extension: when a file with that extension is accessed, the browser automatically opens it with the specified application. MIME types are mostly used for client-defined file names and for specifying how media files are opened.
After the browser receives a file, it consults its plug-in system to find out which plug-in can recognize and read the file. If the browser cannot determine which plug-in to call, it may tell the user that a plug-in is missing, or pick an existing plug-in and try to read the file with it; the latter can crash the system. What happens when the MIME identifier is missing from the transmitted data is hard to predict: some systems may show no fault at all, while others may crash because of it.
28.866667 | 265 | 0.763774 | yue_Hant | 0.510949
a6a88ae7a3f397c512f41b558099114a47539561 | 1,403 | md | Markdown | test/data.coffee.md | shimaore/caring-band | 6c98f9cdaf763cd55d20a4fcceaf04302b29d3b4 | ["Unlicense"] | stars: null | issues: null | forks: null
    chai = require 'chai'
    chai.should()

    {data} = require '../README'

    describe 'Data', ->

      c = new data()
      c.add 3
      c.add 5
      c.add 0.9

      it 'should count properly', ->
        c.should.have.property 'count'
        chai.expect( c.count.equals 3 ).to.be.true

      it 'should min properly', ->
        c.should.have.property 'min'
        chai.expect( c.min.equals 0.9 ).to.be.true

      it 'should max properly', ->
        c.should.have.property 'max'
        chai.expect( c.max.equals 5 ).to.be.true

      it 'should sum properly', ->
        c.should.have.property 'sum'
        chai.expect( c.sum.equals 8.9 ).to.be.true

      it 'should sumsqr properly', ->
        c.should.have.property 'sumsqr'
        chai.expect( c.sumsqr.equals 34.81 ).to.be.true

      it 'should last properly', ->
        c.should.have.property 'last'
        chai.expect( c.last.equals 0.9 ).to.be.true

      it 'should JSON properly', ->
        j = JSON.parse c.toJSON()
        j.should.have.property 'count', 3
        j.should.have.property 'min', 0.9
        j.should.have.property 'max', 5
        j.should.have.property 'sum', 8.9
        j.should.have.property 'sumsqr', 34.81
        j.should.have.property 'last', 0.9

      it 'should JSON properly when empty', ->
        d = new data()
        j = JSON.parse d.toJSON()
        j.should.have.property 'count', 0
28.06 | 55 | 0.561654 | eng_Latn | 0.860817
a6a934073f46bbfe8c605a33c4daa729911fa4f6 | 2,547 | md | Markdown | doc/sequoiadb_design.md | shanks2048/node-sequoiadb | 84081649a05bd5c63c740de7eb2f3d239748d38e | ["Apache-2.0"] | stars: 11 (2015-07-08T13:36:27.000Z to 2021-04-20T08:53:09.000Z) | issues: 1 (2019-12-09T06:04:32.000Z) | forks: 17 (2015-07-05T07:04:51.000Z to 2019-12-26T15:10:07.000Z)
# SequoiaDB Node.js Driver Design Document
------
## Introduction
This document is the design document for the Node.js driver for the SequoiaDB database. It mainly describes the client driver's architecture and some implementation details.
## Terminology
### SequoiaDB terms
- `SequoiaDB`: a NoSQL database developed by SequoiaDB Ltd. (Guangzhou).
- `SequoiaDB service`: a running SequoiaDB database instance, serving clients over SequoiaDB's own application-layer protocol on top of TCP.
- `User`: a SequoiaDB instance can have multiple users; each client connects to the service with user credentials.
- `Connection`: a TCP connection established between a client and the SequoiaDB service.
- `CollectionSpace`: a collection space, a concept specific to SequoiaDB, comparable to a database in MySQL. Each SequoiaDB service can contain multiple collection spaces.
- `Collection`: similar to a table in a relational database, or a collection in MongoDB. Each collection space can contain multiple collections.
- `Document`: a document, as in MongoDB; essentially an object, expressed concretely as BSON or JSON.
- `Index`: an index created on a collection, serving the same purpose as in other databases.
- `Transaction`: NoSQL databases generally lack transactions; SequoiaDB brings this relational-database feature into the NoSQL world. Data operations can be committed or rolled back as appropriate.
- `Cursor`: the tool used for list (multi-record) queries against the SequoiaDB service. For large result sets SequoiaDB does not return all data to the client at once; instead it returns a cursor, and the caller fetches elements one by one and checks whether all elements have been returned.
- `Lob`: large object. SequoiaDB can store file-like large objects in its data. MongoDB's Binary JSON format does support binary data, but it has stability issues and its use is discouraged; SequoiaDB innovates here.
- `Domain`: a domain; collections or collection spaces can be assigned to a domain.
- `Node`: a node in a SequoiaDB service cluster, usually a single database service node.
- `Master`: the master node of a SequoiaDB service cluster.
- `Slave`: a slave node of a SequoiaDB service cluster.
- `ReplicaGroup`: a replica group; its nodes replicate data from the master node.
- `ReplicaCataGroup`: a replica catalog group.
- `Snapshot`: a snapshot.
- `Task`: a task.
- `Backup`: a backup of the whole database or of a specified replica group.
### Node.js driver terms
- `Node.js`: the event-driven, non-blocking-I/O runtime built on the V8 engine, created by Ryan Dahl and now managed by the Node.js Foundation. Its key traits are a single thread, asynchronous I/O, and JavaScript; its advantages include light weight, high performance, high developer productivity, and a large developer base.
- `sequoia`: the SequoiaDB driver this document describes, written in Node.js. It connects to the SequoiaDB service over SequoiaDB's wire protocol; business logic operates through this client.
- `Pool`: connection pool.
- `Parser`: protocol parser.
## Architecture
### Pure-JavaScript protocol parsing
Investigation showed that the original C/C++ driver performs its network I/O synchronously. In Node.js that would block the main thread: calling it through a JavaScript binding would prevent any other code from running. The C# and Python drivers also use synchronous I/O, mostly because mainstream communities still favor synchronous I/O (at a performance cost).
There are two solutions: rewrite the C/C++ code against the libuv API so that all I/O goes through the event loop, or use Node.js's own high-level APIs directly. The libuv route would still require JavaScript binding code plus a wrapper layer, so overall it means writing a lot of C/C++ and a fair amount of JavaScript as well.
After studying the C# driver's implementation, implementing the network protocol in pure JavaScript was judged feasible.
### SequoiaDB standalone architecture

### SequoiaDB cluster architecture

### Client design

### Connection-pool design
Every database call obtains a connection from the pool and then sends its data. After the response has been received, the connection is returned to the pool.
### Connection / protocol-parser design
Each connection serializes outgoing binary data through method-specific encoders; data received from the DB service is parsed back into objects by the protocol parser.
The protocol parser is implemented as a finite state machine: it first recognizes the message length, then extracts that many bytes of data and parses them.
For the payloads, received objects are parsed with the bson module, and outgoing objects are likewise serialized with the bson module.
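The length-recognition loop described above can be sketched in a few lines. This is a minimal illustration only: the 4-byte little-endian length prefix covering the whole message is an assumption for the sketch, not SequoiaDB's actual wire format.

```javascript
// Sketch of a length-prefixed frame parser: accumulate bytes, read the
// 4-byte LE length header, and emit each complete payload.
class FrameParser {
  constructor(onFrame) {
    this.buffer = Buffer.alloc(0);
    this.onFrame = onFrame; // called with each complete payload Buffer
  }
  // Feed raw bytes from the socket, in arbitrary chunk sizes.
  push(chunk) {
    this.buffer = Buffer.concat([this.buffer, chunk]);
    // "length" state: need at least 4 bytes to know the message size.
    while (this.buffer.length >= 4) {
      const len = this.buffer.readUInt32LE(0); // total length incl. header
      if (this.buffer.length < len) break;     // "body" state: wait for more
      this.onFrame(this.buffer.subarray(4, len));
      this.buffer = this.buffer.subarray(len); // advance to the next message
    }
  }
}
```

The same parser instance handles partial chunks and multiple messages per chunk, which is exactly the situation a TCP stream produces.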
## API design
The driver defines CollectionSpace, Collection, User, Transaction, Connection, Connection Pool, Document, Lob, Domain, Node, ReplicaGroup, and Cursor.
Of these, only Client is exposed publicly. Client wraps the connection-pool operations; the concrete connections are implemented by the Connection class.
### Interfaces
Interfaces are implemented callback-style, with a Promise wrapper provided on top. This makes them easy to use from both ES5 and ES6 code.
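The callback-plus-Promise-wrapper pattern can be sketched as follows. The method name `query` is purely illustrative, not the driver's real API:

```javascript
// Callback-style core implementation (hypothetical method).
function query(sql, callback) {
  // Simulate async I/O; deliver (err, result) Node-style.
  process.nextTick(() => callback(null, { rows: [sql] }));
}

// Promise wrapper layered on top, usable from ES6 code.
function queryAsync(sql) {
  return new Promise((resolve, reject) => {
    query(sql, (err, result) => (err ? reject(err) : resolve(result)));
  });
}
```

ES5 callers use `query` directly; ES6 callers can `await queryAsync(...)`.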
34.890411 | 172 | 0.829211 | yue_Hant | 0.766526
a6a9bf0fc71b406a0c9b9ed929be6c452077f078 | 664 | md | Markdown | docs/client/marker-object-blip/SetMarkerScale.md | utility-library/documentation | 690724461304c3bea4b5c0f52b21ee479eee6f62 | ["MIT"] | stars: null | issues: null | forks: 1 (2022-01-22T13:28:53.000Z)
# SetMarkerScale
Sets the scale of the marker identified by the given id.
| Argument | Data type | Needed | Default | Description |
| -------- | --------- | ------ | ------- | ----------- |
| `Id` | string/number | :material-checkbox-blank-circle: | `-` | The id of the marker to update |
| `Scale` | vector3 | :material-checkbox-blank-circle: | `-` | The marker scale |
!!! success ""
    Doesn't need to be called every frame

---

??? example
    ```lua
    SetMarkerScale("marker", vector3(1.0, 1.0, 1.5))
    ```
44.266667 | 124 | 0.412651 | eng_Latn | 0.788159
a6aa6e4a51f0a4d9723a7d47b7e7e920519bde71 | 220 | md | Markdown | src/content/ssgs/next.md | jmeeling1963/serverless | 1af1050582dec6004512bd0e78436395f87c3f41 | ["MIT"] | stars: 4 (2019-11-15T14:51:18.000Z to 2021-11-08T09:00:29.000Z) | issues: null | forks: 1 (2021-02-25T20:05:28.000Z)
---
path: "services/ssgs/next"
title: "Next.js"
url: "https://nextjs.org/"
logo: "/images/next.svg"
---
With Next.js, server rendering React applications has never been easier, no matter where your data is coming from.
24.444444 | 114 | 0.718182 | eng_Latn | 0.951727
a6aabd846f27250c14855cdada825e5c1a0422b7 | 5,635 | md | Markdown | articles/active-directory/authentication/active-directory-certificate-based-authentication-ios.md | jrafaelsantana/azure-docs.pt-br | d4b77e58f94484cb8babd958ff603d1cfed586eb | ["CC-BY-4.0", "MIT"] | stars: null | issues: null | forks: null
---
title: Certificate-based authentication on iOS - Azure Active Directory
description: Learn about the supported scenarios and the requirements for configuring certificate-based authentication in solutions with iOS devices
services: active-directory
ms.service: active-directory
ms.subservice: authentication
ms.topic: article
ms.date: 01/15/2018
ms.author: joflore
author: MicrosoftGuyJFlo
manager: daveba
ms.reviewer: annaba
ms.collection: M365-identity-device-management
ms.openlocfilehash: cda1b1c2a484f3aa627b8b9cf486528d13f27be8
ms.sourcegitcommit: 41ca82b5f95d2e07b0c7f9025b912daf0ab21909
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/13/2019
ms.locfileid: "60415985"
---
# <a name="azure-active-directory-certificate-based-authentication-on-ios"></a>Azure Active Directory certificate-based authentication on iOS
iOS devices can use certificate-based authentication (CBA) to authenticate to Azure Active Directory with a client certificate on the device when connecting to:
* Office mobile applications such as Microsoft Outlook and Microsoft Word
* Exchange ActiveSync (EAS) clients
Configuring this feature removes the need to type a username and password combination into certain mail and Microsoft Office apps on your mobile device.
This topic provides the requirements and the supported scenarios for configuring CBA on an iOS device for users of tenants on the Office 365 Enterprise, Business, Education, US Government, China, and Germany plans.
This feature is available in preview in the Office 365 US Government Federal and Defense plans.
## <a name="microsoft-mobile-applications-support"></a>Microsoft mobile applications support
| Applications | Support |
| --- | --- |
| Azure Information Protection app |![Check mark indicating support for this application][1] |
| Intune Company Portal |![Check mark indicating support for this application][1] |
| Microsoft Teams |![Check mark indicating support for this application][1] |
| OneNote |![Check mark indicating support for this application][1] |
| OneDrive |![Check mark indicating support for this application][1] |
| Outlook |![Check mark indicating support for this application][1] |
| Power BI |![Check mark indicating support for this application][1] |
| Skype for Business |![Check mark indicating support for this application][1] |
| Word/Excel/PowerPoint |![Check mark indicating support for this application][1] |
| Yammer |![Check mark indicating support for this application][1] |
## <a name="requirements"></a>Requirements
The device OS version must be iOS 9 or above.
A federation server must be configured.
Microsoft Authenticator is required for Office applications on iOS.
For Azure Active Directory to revoke a client certificate, the ADFS token must have the following claims:
* `http://schemas.microsoft.com/ws/2008/06/identity/claims/<serialnumber>` (the serial number of the client certificate)
* `http://schemas.microsoft.com/2012/12/certificatecontext/field/<issuer>` (the string for the issuer of the client certificate)
Azure Active Directory adds these claims to the refresh token if they are available in the ADFS token (or any other SAML token). When the refresh token needs to be validated, this information is used to check the revocation.
As a best practice, you should update your organization's ADFS error pages with the following information:
* The requirement to install Microsoft Authenticator on iOS
* Instructions on how to obtain a user certificate
For more information, see [Customizing the AD FS sign-in pages](https://technet.microsoft.com/library/dn280950.aspx).
Some Office apps (with modern authentication enabled) send '*prompt=login*' to Azure AD in their request. By default, Azure AD translates '*prompt=login*' in the request to ADFS as '*wauth=usernamepassworduri*' (asks ADFS to perform U/P authentication) and '*wfresh=0*' (asks ADFS to ignore the SSO state and perform a fresh authentication). If you want to enable certificate-based authentication for these apps, you need to modify this default Azure AD behavior. Simply set '*PromptLoginBehavior*' in your federated domain settings to '*Disabled*'.
You can use the [MSOLDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0) cmdlet to perform this task:
`Set-MSOLDomainFederationSettings -domainname <domain> -PromptLoginBehavior Disabled`
## <a name="exchange-activesync-clients-support"></a>Exchange ActiveSync clients support
On iOS 9 or later, the native iOS mail client is supported. For all other Exchange ActiveSync applications, contact the application's developer to find out whether this feature is supported.
## <a name="next-steps"></a>Next steps
If you want to configure certificate-based authentication in your environment, see [Get started with certificate-based authentication](../authentication/active-directory-certificate-based-authentication-get-started.md) for instructions.
<!--Image references-->
[1]: ./media/active-directory-certificate-based-authentication-ios/ic195031.png
65.523256 | 619 | 0.799823 | por_Latn | 0.995592
a6aae939bd05cbd2ea7621b084e26d4d157819d5 | 337 | md | Markdown | articles/cosmos-db/includes/appliesto-cassandra-api.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | ["CC-BY-4.0", "MIT"] | stars: null | issues: null | forks: null
---
ms.openlocfilehash: 8cc5d3296c271a331750fc718dafbf37f76f27ab
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 11/25/2020
ms.locfileid: "95995826"
---
APPLIES TO: :::image type="icon" source="../media/applies-to/yes.png" border="false"::: Cassandra API
37.444444 | 108 | 0.801187 | kor_Hang | 0.140535
a6ab1fdaf270d08d9ddd223dcc80a87a15c3b1c6 | 802 | md | Markdown | docker.md | ad19900913/nuls-v2 | 254d30bf339de1e21a9cfd73763a11bd1b39ad29 | ["MIT"] | stars: 226 (2019-05-13T07:17:22.000Z to 2022-02-13T02:08:23.000Z) | issues: 59 (2020-04-07T10:14:29.000Z to 2021-07-29T04:08:23.000Z) | forks (lichao23/nuls_2.0, 2972cbf5787dcd94635649fcd4be04a86c8e33a4): 54 (2019-05-17T07:29:52.000Z to 2022-03-20T18:10:37.000Z)
# Running the wallet from the Docker Hub image
```
docker run \
--name nuls-wallet \
-d \
-p 18001:18001 \
-p 18002:18002 \
-v data:/nuls/data \
-v log:/nuls/Logs \
nulsio/nuls-wallet:beta3
```
Port 18001 is the main-chain data exchange port and 18002 is the cross-chain transaction data exchange port; `data` is the data storage directory and `log` is the log storage directory.
# Building the image locally
```
cd nuls-v2/docker
docker build -t nuls:beta3 .
```
# Entering the command line and checking module startup status
```
docker exec -it nuls-wallet cmd          # start the command line
docker exec -it nuls-wallet check-status # check module startup status
```
# Getting the image with the block explorer and the web light wallet
```
docker run \
--name nuls-wallet \
-d \
-p 18001:18001 \
-p 18002:18002 \
-p 18005:1999 \
-p 18006:18004 \
-v data:/nuls/data \
-v log:/nuls/Logs \
nulsio/nuls-wallet-pro:beta3
```
Port 18005 is the HTTP port of the block explorer and 18006 is the HTTP port of the web light wallet.
Block explorer web address: http://127.0.0.1:18005
Web light wallet address: http://127.0.0.1:18006
16.708333 | 58 | 0.639651 | yue_Hant | 0.620028
a6ab254015b4a6b8895feb951c01e783ee2bfc5e | 4,275 | md | Markdown | docs/extensions/generics-and-templates-visual-cpp.md | heathhenley/cpp-docs | 2e94807ab369e967c7892dd7971f9765b9878641 | ["CC-BY-4.0", "MIT"] | stars: 3 (2021-02-19T06:12:36.000Z to 2021-03-27T20:46:59.000Z) | issues: null | forks: 1 (2020-12-24T04:34:32.000Z)
---
title: "Generics and Templates (C++/CLI)"
ms.date: "10/12/2018"
ms.topic: "reference"
helpviewer_keywords: ["generics [C++], vs. templates", "templates, C++"]
ms.assetid: 63adec79-b1dc-4a1a-a21d-b8a72a8fce31
---
# Generics and Templates (C++/CLI)
Generics and templates are both language features that provide support for parameterized types. However, they are different and have different uses. This topic provides an overview of the many differences.
For more information, see [Windows Runtime and Managed Templates](windows-runtime-and-managed-templates-cpp-component-extensions.md).
## Comparing Templates and Generics
Key differences between generics and C++ templates:
- Generics are generic until the types are substituted for them at runtime. Templates are specialized at compile time, so they are no longer parameterized types at runtime.
- The common language runtime specifically supports generics in MSIL. Because the runtime knows about generics, specific types can be substituted for generic types when referencing an assembly containing a generic type. Templates, in contrast, resolve into ordinary types at compile time and the resulting types may not be specialized in other assemblies.
- Generics specialized in two different assemblies with the same type arguments are the same type. Templates specialized in two different assemblies with the same type arguments are considered by the runtime to be different types.
- Generics are generated as a single piece of executable code which is used for all reference type arguments (this is not true for value types, which have a unique implementation per value type). The JIT compiler knows about generics and is able to optimize the code for the reference or value types that are used as type arguments. Templates generate separate runtime code for each specialization.
- Generics do not allow non-type template parameters, such as `template <int i> C {}`. Templates allow them.
- Generics do not allow explicit specialization (that is, a custom implementation of a template for a specific type). Templates do.
- Generics do not allow partial specialization (a custom implementation for a subset of the type arguments). Templates do.
- Generics do not allow the type parameter to be used as the base class for the generic type. Templates do.
- Templates support template-template parameters (e.g. `template<template<class T> class X> class MyClass`), but generics do not.
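Several of the bullets above can be seen in a few lines of plain ISO C++ (no `/clr` needed). This sketch shows a non-type template parameter and an explicit specialization, both of which templates allow and generics do not:

```cpp
#include <string>

// Non-type template parameter: disallowed for generics.
template <int N>
struct Buffer { static const int size = N; };

// Primary template plus an explicit specialization for int:
// generics cannot provide a custom implementation per type.
template <class T>
struct Name { static std::string get() { return "unknown"; } };

template <>
struct Name<int> { static std::string get() { return "int"; } };
```

Each specialization here resolves to an ordinary type at compile time, which is exactly the property that makes templates unavailable across assembly boundaries.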
## Combining Templates and Generics
The basic difference in generics has implications for building applications that combine templates and generics. For example, suppose you have a template class and want to create a generic wrapper for it, to expose that template to other languages as a generic. The generic cannot take a type parameter that it then passes through to the template, because the template needs that type parameter at compile time, while the generic does not resolve its type parameter until runtime. Nesting a template inside a generic won't work either, because there is no way to expand the templates at compile time for arbitrary generic types that could be instantiated at runtime.
## Example
### Description
The following example shows a simple example of using templates and generics together. In this example, the template class passes its parameter through to the generic type. The reverse is not possible.
This idiom could be used when you want to build on an existing generic API with template code that is local to a C++/CLI assembly, or when you need to add an extra layer of parameterization to a generic type, to take advantage of certain features of templates not supported by generics.
### Code
```cpp
// templates_and_generics.cpp
// compile with: /clr
using namespace System;
generic <class ItemType>
ref class MyGeneric {
ItemType m_item;
public:
MyGeneric(ItemType item) : m_item(item) {}
void F() {
Console::WriteLine("F");
}
};
template <class T>
public ref class MyRef {
MyGeneric<T>^ ig;
public:
MyRef(T t) {
ig = gcnew MyGeneric<T>(t);
ig->F();
}
};
int main() {
// instantiate the template
MyRef<int>^ mref = gcnew MyRef<int>(11);
}
```
```Output
F
```
## See also
[Generics](generics-cpp-component-extensions.md)
47.5 | 674 | 0.774269 | eng_Latn | 0.998744
a6ab8105749c8b7cc39499ca7c5d78e8ed8969aa | 438 | md | Markdown | docs/cross-platform/includes/development-environment.md | lhaussknecht/xamarin-docs.de-de | e83073ae4c497400ae37930d6f6f0d374bbd9049 | ["CC-BY-4.0", "MIT"] | stars: null | issues: null | forks: null
||macOS|Windows|
|---|---|---|
|**Development environment**|Visual Studio for Mac|Visual Studio|
|**Xamarin.iOS**|Yes|Yes (with a Mac computer)|
|**Xamarin.Android**|Yes|Yes|
|**Xamarin.Forms**|iOS and Android only (macOS in preview)|Android, Windows/UWP (iOS with a Mac computer)|
|**Xamarin.Mac**|Yes|[Open and compile project only](https://developer.xamarin.com/releases/vs/xamarin.vs_4/xamarin.vs_4.2/#Xamarin.Mac_minimum_support.)|
62.571429 | 159 | 0.728311 | deu_Latn | 0.276883
a6ad04fb33f6719e3f23455e3fa3fba959587996 | 13,627 | md | Markdown | config/jekyll/guides/administration/questionnaires-reports.md | cirope/mawidabp | 6a972ea4fdcea62c8ad6674162d0e6dcc251270d | ["MIT"] | stars: 1 (2021-10-07T23:33:06.000Z) | issues: 82 (2015-01-31T06:21:05.000Z to 2022-03-21T12:58:02.000Z) | forks: null
---
title: Reportes
layout: articles
category: administration
article_order: 13.3
parent: Cuestionarios
---
## Questionnaires
### Reports
Select **Reports.**
The page shows the data by questionnaire type, by organizational unit, and a summary of the answers.
{: class="img-responsive"}
<hr>
**Summary by questionnaire type**
Select **Summary by questionnaire type**, then use the **Please select** option; in this case it shows Comité. Select Comité (as questionnaires are created, they will appear in this list).
{: class="img-responsive"}
Select **apply filter**; the report is shown with the following data.
{: class="img-responsive"}
You can **Download** the report in PDF format, customizing its title and subtitle.
{: class="img-responsive"}
Select **Generate**; the report is then shown in PDF format, in this case for the company Cirope.
{: class="img-responsive"}
<hr>
**Summary by organizational unit**
Select **Summary by organizational unit** (we choose Procesos centrales), then use the **Questionnaire** option (Please select); in this case select Comité (as questionnaires are created, they will appear in this list). In the **View** option select "Assigned" (you can also select "Only to everyone" or "Assigned and by report").
Select **apply filter**; the report is shown with the following data.
{: class="img-responsive"}
If in the **View** option you select "Only to everyone":
Select **apply filter**; the report is shown with the following data (in this case there is no data for the selected fields).
{: class="img-responsive"}
If in the **View** option you select "Assigned and by report":
Select **apply filter**; the report is shown with the following data.
{: class="img-responsive"}
In every case the report can be downloaded in PDF format.
Select **Download**; you can customize the Title and Subtitle.
{: class="img-responsive"}
Then select **Generate** (it generates the report in PDF format), in this case for the company Cirope.
{: class="img-responsive"}
<hr>
**Summary of answers**
Select **Summary of answers**, then use the **Questionnaire** option (Please select); in this case it shows Comité. Select Comité (as questionnaires are created, they will appear in this list). Then, in the **Answered** option (Please select), you can choose Yes or No; we select Yes.
{: class="img-responsive"}
Select **apply filter**; the report is shown with the following data.
{: class="img-responsive"}
Select **Download**; you can customize the Title and Subtitle and get the report in PDF format.
Select **Generate**; a document in PDF format is generated.
<hr>
**Improved functionality**
In "Reports" -> "Summary of answers" we added a "Show only matching answers" checkbox.
**Administration -> Questionnaires**.
Select "Administration" -> "Questionnaires".
{: class="img-responsive"}
Select "Reports".
{: class="img-responsive"}
Select "Summary of answers".
{: class="img-responsive"}
Enter the data and select "Show only matching answers"; for this example: "De acuerdo".
{: class="img-responsive"}
<hr>
**Improved functionality**
We improved the feature by adding the report to surveys and a "Not applicable" option (for the "Multiple choice" answer type, as one of the possible values for the auditee to answer, to avoid bias when interpreting the results).
**Administration -> Questionnaires**
The report is shown in the following places:
> **Email the auditee receives:**
{: class="img-responsive"}
**Screen on which the auditee answers:**
{: class="img-responsive"}
**Survey queries show the report in the "On" column, and the "About" column shows whether it applies to Everyone or to the auditor user:**
{: class="img-responsive"}
We added the "Not applicable" option (for the "Multiple choice" answer type, as one of the possible values for the auditee to answer, to avoid bias when interpreting the results).
{: class="img-responsive"}
<hr>
**Improved functionality**
We improved the feature by adding the following:
One more **Answer type** (Yes or No option) covering the answers "Yes, No, Not applicable".
Unlimited characters for the following fields: Question, Email text, Email clarification.
Changes in the reports so that questions are broken down by answer type.
**Administration -> Questionnaires**.
Select "Definition".
{: class="img-responsive"}
We added the answer type (**Yes or No option**) and unlimited characters for the fields (Question, Email text, Email clarification).
{: class="img-responsive"}
**Administration -> Questionnaires**.
Changes in the reports to see the users' answers to each question.
Select "Surveys".
It shows the surveys that were sent.
{: class="img-responsive"}
By selecting the **magnifier** (for the Relevamiento questionnaire, second row) we can see the users' answers to each question.
{: class="img-responsive"}
**Administration -> Questionnaires**.
Select "Reports".
Changes in the reports so that questions are broken down by answer type.
{: class="img-responsive"}
As an example, select **Summary by organizational unit** (we choose Procesos centrales), then the **Questionnaire** option (Please select); in this case select Comité (as questionnaires are created, they will appear in this list). In the **View** option select "Assigned" (it shows the surveys directly assigned to the user, provided a user is selected). You can also select "Only to everyone" or "Assigned and by report".
Select **apply filter**; the report is shown with the following data.
{: class="img-responsive"}
If in the **View** option you select "Only to everyone" (it shows the surveys sent with the "Everyone" option, ignoring the selected user), then select **apply filter**; the report is shown with the following data (for this filter there is no data to show).
{: class="img-responsive"}
If in the **View** option you select "Assigned and by report" (it shows the surveys directly assigned to the user and those in which the user took part as a member of the report, provided a user is selected):
Then select **apply filter**; the report is shown with the following data.
{: class="img-responsive"}
In every case the report can be downloaded in PDF format.
Select **Download**; you can customize the Title and Subtitle.
{: class="img-responsive"}
Then select **Generate** (it generates the report in PDF format).
{: class="img-responsive"}
Average satisfaction level: the average of the options selected by the user (the system uses the following values: Muy de acuerdo: 100; De acuerdo: 75; Ni acuerdo ni desacuerdo: 50; En desacuerdo: 25; Muy desacuerdo: 0; Yes: 100; No: 0; Not applicable: excluded from the average).
<hr>
**Improved functionality**
We improved the feature by adding the following:
* In "Definition" we added an option to copy questionnaires.
* "Surveys" shows a new **"About"** field to identify what the sent surveys refer to.
* In "Reports" we added a drop-down field to filter the "Summary of answers" report by answer (the only item worth clarifying is "Not applicable": the same value is used for both multiple-choice and yes/no questions, so choosing either is equivalent).
**Administration -> Questionnaires**.
In "Definition" we added an option to copy questionnaires.
Select "Definition".
The first icon from left to right is the "**copy**" option; hovering the mouse over it shows the Copy message.
{: class="img-responsive"}
Select "Copy"; the screen shows the data of the selected questionnaire.
{: class="img-responsive"}
Change the data you need.
{: class="img-responsive"}
Then select "Create questionnaire".
{: class="img-responsive"}
Select "List"; it shows the new questionnaire "Comité prueba" created with the "Copy" option.
{: class="img-responsive"}
**Administration -> Questionnaires**.
"Surveys" shows a new **"About"** field to identify what the sent surveys refer to.
Select "Surveys"; the list is shown with the "About" field.
{: class="img-responsive"}
Select "New"; the screen for creating a new survey is shown with the added "About" field (to identify what the survey we then send refers to).
{: class="img-responsive"}
Enter the required data; in this case we need to send the survey to the auditor, about the work performed in 4 - Sucursal Mendoza.
{: class="img-responsive"}
Select "Create survey"; the following screen is shown.
{: class="img-responsive"}
If you select "List", the new survey is shown with the "About" field.
{: class="img-responsive"}
In addition, when this option is selected the system sends an email to the selected user's account.
{: class="img-responsive"}
**Administration -> Questionnaires**.
In "Reports" we added a drop-down field to filter the "Summary of answers" report by answer (the only item worth clarifying is "Not applicable": the same value is used for both multiple-choice and yes/no questions, so choosing either is equivalent).
Select "Reports" -> "Summary of answers".
We added the "Answer" field, a drop-down that shows the answers belonging to the created questionnaires.
In this example we need the answer "Muy de acuerdo".
{: class="img-responsive"}
Select "Apply filter"; the following data is shown.
{: class="img-responsive"}
| 43.123418
| 519
| 0.756733
|
spa_Latn
| 0.973517
|
a6ad350a2ded3bffe6c98c26fed6ec1864a0f789
| 2,142
|
md
|
Markdown
|
node_modules/torrent-stream/node_modules/magnet-uri/README.md
|
dilshanwn/dilshan-Torrent-Cloud
|
68cd01ce2cfb68bf96d1b99d1d68d437327b76d8
|
[
"Unlicense",
"MIT"
] | 24
|
2018-03-06T06:36:52.000Z
|
2021-03-24T06:18:48.000Z
|
node_modules/torrent-stream/node_modules/magnet-uri/README.md
|
dilshanwn/dilshan-Torrent-Cloud
|
68cd01ce2cfb68bf96d1b99d1d68d437327b76d8
|
[
"Unlicense",
"MIT"
] | 7
|
2018-05-08T03:29:31.000Z
|
2021-05-06T22:21:42.000Z
|
node_modules/torrent-stream/node_modules/magnet-uri/README.md
|
dilshanwn/dilshan-Torrent-Cloud
|
68cd01ce2cfb68bf96d1b99d1d68d437327b76d8
|
[
"Unlicense",
"MIT"
] | 5
|
2018-05-08T07:32:19.000Z
|
2019-04-26T01:52:02.000Z
|
# magnet-uri [](https://travis-ci.org/feross/magnet-uri) [](https://npmjs.org/package/magnet-uri) [](https://npmjs.org/package/magnet-uri) [](https://www.gittip.com/feross/)
[](https://ci.testling.com/feross/magnet-uri)
### Parse a magnet URI and return an object of keys/values.
Also works in the browser with [browserify](http://browserify.org/)! This module is used by [WebTorrent](http://webtorrent.io).
## install
```
npm install magnet-uri
```
## usage
```js
var magnet = require('magnet-uri')
// "Leaves of Grass" by Walt Whitman
var uri = 'magnet:?xt=urn:btih:d2474e86c95b19b8bcfdb92bc12c9d44667cfa36&dn=Leaves+of+Grass+by+Walt+Whitman.epub&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80&tr=udp%3A%2F%2Ftracker.istole.it%3A6969&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Fopen.demonii.com%3A1337'
var parsed = magnet(uri)
console.log(parsed.dn) // "Leaves of Grass by Walt Whitman.epub"
console.log(parsed.infoHash) // "d2474e86c95b19b8bcfdb92bc12c9d44667cfa36"
```
The `parsed` magnet link object looks like this:
```js
{
"xt": "urn:btih:d2474e86c95b19b8bcfdb92bc12c9d44667cfa36",
"dn": "Leaves of Grass by Walt Whitman.epub",
"tr": [
"udp://tracker.openbittorrent.com:80",
"udp://tracker.publicbt.com:80",
"udp://tracker.istole.it:6969",
"udp://tracker.ccc.de:80",
"udp://open.demonii.com:1337"
],
// added for convenience:
"infoHash": "d2474e86c95b19b8bcfdb92bc12c9d44667cfa36",
"name": "Leaves of Grass by Walt Whitman.epub",
"announce": [
"udp://tracker.openbittorrent.com:80",
"udp://tracker.publicbt.com:80",
"udp://tracker.istole.it:6969",
"udp://tracker.ccc.de:80",
"udp://open.demonii.com:1337"
]
}
```
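For illustration, here is a rough, dependency-free sketch of the kind of parsing this module performs (the real implementation handles more edge cases; `parseMagnet` is a hypothetical name used only for this example):

```javascript
// Minimal sketch: split the magnet query string into key/value pairs,
// collecting repeated keys (like "tr") into arrays, and derive infoHash
// from the "xt" parameter when it uses the "urn:btih:" form.
function parseMagnet(uri) {
  var result = {};
  var query = uri.split('magnet:?')[1];
  query.split('&').forEach(function (param) {
    var idx = param.indexOf('=');
    var key = param.slice(0, idx);
    var val = decodeURIComponent(param.slice(idx + 1).replace(/\+/g, ' '));
    if (result[key] === undefined) result[key] = val;
    else if (Array.isArray(result[key])) result[key].push(val);
    else result[key] = [result[key], val];
  });
  if (typeof result.xt === 'string' && result.xt.indexOf('urn:btih:') === 0) {
    result.infoHash = result.xt.slice('urn:btih:'.length).toLowerCase();
  }
  return result;
}
```

Running this against the URI above recovers the same `dn`, `tr`, and `infoHash` values shown in the parsed object.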
## license
MIT. Copyright (c) [Feross Aboukhadijeh](http://feross.org).
| 36.305085
| 395
| 0.69281
|
yue_Hant
| 0.288593
|
a6ad3c2ca663621e9c2874369d2cbee429f13023
| 6,813
|
md
|
Markdown
|
doc/dev/index.md
|
anthonygedeon/sourcegraph
|
c415c71b3894354498e3c2d1eac61be5120ae20b
|
[
"Apache-2.0"
] | null | null | null |
doc/dev/index.md
|
anthonygedeon/sourcegraph
|
c415c71b3894354498e3c2d1eac61be5120ae20b
|
[
"Apache-2.0"
] | null | null | null |
doc/dev/index.md
|
anthonygedeon/sourcegraph
|
c415c71b3894354498e3c2d1eac61be5120ae20b
|
[
"Apache-2.0"
] | null | null | null |
# Developing Sourcegraph
<style>
.markdown-body h2 {
margin-top: 2em;
}
.markdown-body ul {
list-style:none;
padding-left: 1em;
}
.markdown-body ul li {
margin: 0.5em 0;
}
.markdown-body ul li:before {
content: '';
display: inline-block;
height: 1.2em;
width: 1em;
background-size: contain;
background-repeat: no-repeat;
background-image: url(../batch_changes/file-icon.svg);
margin-right: 0.5em;
margin-bottom: -0.29em;
}
body.theme-dark .markdown-body ul li:before {
filter: invert(50%);
}
</style>
<p class="subtitle">Documentation for <b>developers contributing to the Sourcegraph code base</b></p>
<div class="cta-group">
<a class="btn btn-primary" href="setup/quickstart">★ Quickstart: develop Sourcegraph on your machine</a>
<a class="btn" href="https://github.com/sourcegraph/sourcegraph">GitHub repository</a>
<a class="btn" href="https://github.com/sourcegraph/sourcegraph/issues">Issue Tracker</a>
</div>
## [Setup](setup/index.md)
<p class="subtitle">Learn how to develop Sourcegraph on your machine.</p>
<div class="getting-started">
<a href="setup/quickstart" class="btn" alt="Run through the Quickstart guide">
<span>★ Quickstart</span>
<br/>
Run through the <b>step by step guide</b> and get your local environment ready.
</a>
<a href="setup/how-to" class="btn" alt="How-to guides">
<span>How-to guides</span>
<br/>
<b>Context specific</b> guides: debugging live code, Apple M1 workarounds, ...
</a>
<a href="setup/troubleshooting" class="btn" alt="Troubleshooting">
<span>Troubleshooting</span>
<br/>
Help for the <b>most common</b> problems.
</a>
</div>
## [Background information](background-information/index.md)
Clarification and discussion about key concepts, architecture, and development stack.
### Overview
- [Tech stack](background-information/tech_stack.md)
- [Security Patterns](background-information/security_patterns.md)
### [Architecture](background-information/architecture/index.md)
- [Overview](background-information/architecture/index.md)
- [Introducing a new service](background-information/architecture/introducing_a_new_service.md)
### Development
- [`sg` - the Sourcegraph developer tool](background-information/sg/index.md)
- [Developing the web clients](background-information/web/index.md)
- [Developing the web app](background-information/web/web_app.md)
- [Developing the code host integrations](background-information/web/code_host_integrations.md)
- [Working with GraphQL](background-information/web/graphql.md)
- [Wildcard Component Library](background-information/web/wildcard.md)
- [Styling UI](background-information/web/styling.md)
- [Accessibility](background-information/web/accessibility.md)
- [Temporary settings](background-information/web/temporary_settings.md)
- [Build process](background-information/web/build.md)
- [Developing the GraphQL API](background-information/graphql_api.md)
- [Developing batch changes](background-information/batch_changes/index.md)
- [Developing code intelligence](background-information/codeintel/index.md)
- [Developing code insights](background-information/insights/index.md)
- [Developing code monitoring](background-information/codemonitoring/index.md)
- [Developing observability](background-information/observability/index.md)
- [Developing Sourcegraph extensions](background-information/sourcegraph_extensions.md)
- [Dependencies and generated code](background-information/dependencies_and_codegen.md)
- [Code reviews](background-information/pull_request_reviews.md)
- [Commit messages](background-information/commit_messages.md)
- [Exposing services](background-information/exposing-services.md)
- [Developing a store](background-information/basestore.md)
- [Developing a worker](background-information/workers.md)
- [Developing an out-of-band migration](background-information/oobmigrations.md)
- [Developing a background routine](background-information/backgroundroutine.md)
- [High-performance SQL](background-information/sql.md)
- [Code host connections on local dev environment](background-information/code-host.md)
### [Languages](background-information/languages/index.md)
- [Go](background-information/languages/go.md)
- [TypeScript](background-information/languages/typescript.md)
- [Bash](background-information/languages/bash.md)
- [Terraform](background-information/languages/terraform.md)
#### [Extended guides](background-information/languages/extended_guide/index.md)
- [Terraform Extended Guide](background-information/languages/extended_guide/terraform.md)
### Testing
- [Continuous Integration](background-information/continuous_integration.md)
- [Testing Principles](background-information/testing_principles.md)
- [Testing Go code](background-information/languages/testing_go_code.md)
- [Testing web code](background-information/testing_web_code.md)
### Security
- [Security policy](https://about.sourcegraph.com/security/)
- [How to disclose vulnerabilities](https://about.sourcegraph.com/handbook/engineering/security/reporting-vulnerabilities).
- [CSRF security model](security/csrf_security_model.md)
### Tools
- [Renovate dependency updates](background-information/renovate.md)
- [Honeycomb](background-information/honeycomb.md)
- [Using PostgreSQL](background-information/postgresql.md)
### Other
- [Telemetry](background-information/telemetry.md)
- [Adding, changing and debugging pings](background-information/adding_ping_data.md)
## Guidelines
- [Code reviews](background-information/pull_request_reviews.md)
- [Open source FAQ](https://about.sourcegraph.com/community/faq)
- [Code of conduct](https://about.sourcegraph.com/community/code_of_conduct)
## [How-to guides](how-to/index.md)
Guides to help with troubleshooting, configuring test instances, debugging, and more.
### New features
- [How to add support for a language](how-to/add_support_for_a_language.md)
- [How to use feature flags](how-to/use_feature_flags.md)
### Implementing Sourcegraph
- [Developing the product documentation](how-to/documentation_implementation.md)
- [Observability](background-information/observability/index.md)
- [How to find monitoring](how-to/find_monitoring.md)
- [How to add monitoring](how-to/add_monitoring.md)
### Testing Sourcegraph & CI
- [How to write and run tests](how-to/testing.md)
- [Configure a test instance of Phabricator and Gitolite](how-to/configure_phabricator_gitolite.md)
- [Test a Phabricator and Gitolite instance](how-to/test_phabricator.md)
- [Adding or changing Buildkite secrets](how-to/adding_buildkite_secrets.md)
## [Contributing](./contributing/index.md)
- [Project setup and CI checks for frontend issues](./contributing/frontend_contribution.md).
- [Accepting an external contribution guide](./contributing/accepting_contribution.md).
| 38.931429
| 123
| 0.77176
|
eng_Latn
| 0.343513
|
a6ae1fe32dd890f8dffed0fc4fd1f89a29a98900
| 12,666
|
md
|
Markdown
|
articles/iot-hub/iot-hub-node-node-device-management-get-started.md
|
mtaheij/azure-docs.nl-nl
|
6447611648064a057aae926a62fe8b6d854e3ea6
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/iot-hub/iot-hub-node-node-device-management-get-started.md
|
mtaheij/azure-docs.nl-nl
|
6447611648064a057aae926a62fe8b6d854e3ea6
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/iot-hub/iot-hub-node-node-device-management-get-started.md
|
mtaheij/azure-docs.nl-nl
|
6447611648064a057aae926a62fe8b6d854e3ea6
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Get started with Azure IoT Hub device management (Node) | Microsoft Docs
description: Use IoT Hub device management to initiate a remote device reboot. You use the Azure IoT SDK for Node.js to implement a simulated device app that includes a direct method and a service app that invokes the direct method.
author: wesmc7777
manager: philmea
ms.author: wesmc
ms.service: iot-hub
services: iot-hub
ms.topic: conceptual
ms.date: 08/20/2019
ms.custom: mqtt, devx-track-js
ms.openlocfilehash: cfc0fa45c08f917b2e0b4a0b055e801173a4ba39
ms.sourcegitcommit: 32c521a2ef396d121e71ba682e098092ac673b30
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 09/25/2020
ms.locfileid: "91252014"
---
# <a name="get-started-with-device-management-nodejs"></a>Get started with device management (Node.js)
[!INCLUDE [iot-hub-selector-dm-getstarted](../../includes/iot-hub-selector-dm-getstarted.md)]
In this tutorial, you learn how to:
* Use the [Azure portal](https://portal.azure.com) to create an IoT hub and create a device identity in your IoT hub.
* Create a simulated device app that contains a direct method that reboots that device. Direct methods are invoked from the cloud.
* Create a Node.js console app that calls the reboot direct method in the simulated device app through your IoT hub.
At the end of this tutorial, you have two Node.js console apps:
* **dmpatterns_getstarted_device.js**, which connects to your IoT hub with the device identity you created earlier, receives a reboot direct method, simulates a physical reboot, and reports the time of the last reboot.
* **dmpatterns_getstarted_service.js**, which calls a direct method in the simulated device app, displays the response, and displays the updated reported properties.
## <a name="prerequisites"></a>Prerequisites
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/master/doc/node-devbox-setup.md) describes how to install Node.js for this tutorial on either Windows or Linux.
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
* Make sure that port 8883 is open in your firewall. The device sample in this article uses the MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
## <a name="create-an-iot-hub"></a>Create an IoT hub
[!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub.md)]
## <a name="register-a-new-device-in-the-iot-hub"></a>Register a new device in the IoT hub
[!INCLUDE [iot-hub-get-started-create-device-identity](../../includes/iot-hub-get-started-create-device-identity.md)]
## <a name="create-a-simulated-device-app"></a>Create a simulated device app
In this section, you:
* Create a Node.js console app that responds to a direct method called by the cloud.
* Trigger a simulated device reboot.
* Use the reported properties to enable device twin queries to identify devices and when they were last rebooted.
1. Create an empty folder called **simulateddevice**. In the **simulateddevice** folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
```cmd/sh
npm init
```
2. At your command prompt in the **simulateddevice** folder, run the following command to install the **azure-iot-device** Device SDK package and the **azure-iot-device-mqtt** package:
```cmd/sh
npm install azure-iot-device azure-iot-device-mqtt --save
```
3. Using a text editor, create a **dmpatterns_getstarted_device.js** file in the **simulateddevice** folder.
4. Add the following 'require' statements at the start of the **dmpatterns_getstarted_device.js** file:
```javascript
'use strict';
var Client = require('azure-iot-device').Client;
var Protocol = require('azure-iot-device-mqtt').Mqtt;
```
5. Add a **connectionString** variable and use it to create a **Client** instance. Replace the `{yourdeviceconnectionstring}` placeholder value with the device connection string you copied previously in [Register a new device in the IoT hub](#register-a-new-device-in-the-iot-hub).
```javascript
var connectionString = '{yourdeviceconnectionstring}';
var client = Client.fromConnectionString(connectionString, Protocol);
```
6. Add the following function to implement the direct method on the device:
```javascript
var onReboot = function(request, response) {
// Respond the cloud app for the direct method
response.send(200, 'Reboot started', function(err) {
if (err) {
console.error('An error occurred when sending a method response:\n' + err.toString());
} else {
console.log('Response to method \'' + request.methodName + '\' sent successfully.');
}
});
// Report the reboot before the physical restart
var date = new Date();
var patch = {
iothubDM : {
reboot : {
lastReboot : date.toISOString(),
}
}
};
// Get device Twin
client.getTwin(function(err, twin) {
if (err) {
console.error('could not get twin');
} else {
console.log('twin acquired');
twin.properties.reported.update(patch, function(err) {
if (err) throw err;
console.log('Device reboot twin state reported')
});
}
});
// Add your device's reboot API for physical restart.
console.log('Rebooting!');
};
```
7. Open the connection to your IoT hub and start the direct method listener:
```javascript
client.open(function(err) {
if (err) {
console.error('Could not open IotHub client');
} else {
console.log('Client opened. Waiting for reboot method.');
client.onDeviceMethod('reboot', onReboot);
}
});
```
8. Save and close the **dmpatterns_getstarted_device.js** file.
> [!NOTE]
> To keep things simple, this tutorial does not implement a retry policy. In production code, you should implement retry policies (such as exponential backoff), as suggested in the article [Transient fault handling](/azure/architecture/best-practices/transient-faults).
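The note above recommends a retry policy for production code. As a rough, hypothetical sketch only (not part of the tutorial code, and the function names are assumptions), an exponential-backoff wrapper for a callback-style operation might look like this:

```javascript
// Retry a Node-style async operation with exponential backoff.
// operation(cb) must call cb(err, result); done is called once with the
// final outcome after success or after maxAttempts failures.
function retryWithBackoff(operation, maxAttempts, done) {
  var attempt = 0;
  function tryOnce() {
    operation(function (err, result) {
      if (!err) return done(null, result);
      attempt++;
      if (attempt >= maxAttempts) return done(err);
      // Delay doubles each attempt: 200ms, 400ms, 800ms, ...
      setTimeout(tryOnce, Math.pow(2, attempt) * 100);
    });
  }
  tryOnce();
}
```

In a real app you would wrap calls such as opening the client connection or updating reported properties with a helper like this, and add jitter and a delay cap.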
## <a name="get-the-iot-hub-connection-string"></a>Get the IoT hub connection string
[!INCLUDE [iot-hub-howto-device-management-shared-access-policy-text](../../includes/iot-hub-howto-device-management-shared-access-policy-text.md)]
[!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)]
## <a name="trigger-a-remote-reboot-on-the-device-using-a-direct-method"></a>Trigger a remote reboot on the device using a direct method
In this section, you create a Node.js console app that initiates a remote reboot on a device using a direct method. The app uses device twin queries to discover the last reboot time for that device.
1. Create an empty folder called **triggerrebootondevice**. In the **triggerrebootondevice** folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
```cmd/sh
npm init
```
2. At your command prompt in the **triggerrebootondevice** folder, run the following command to install the **azure-iothub** Device SDK package:
```cmd/sh
npm install azure-iothub --save
```
3. Using a text editor, create a **dmpatterns_getstarted_service.js** file in the **triggerrebootondevice** folder.
4. Add the following 'require' statements at the start of the **dmpatterns_getstarted_service.js** file:
```javascript
'use strict';
var Registry = require('azure-iothub').Registry;
var Client = require('azure-iothub').Client;
```
5. Add the following variable declarations and replace the `{iothubconnectionstring}` placeholder value with the IoT hub connection string you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string):
```javascript
var connectionString = '{iothubconnectionstring}';
var registry = Registry.fromConnectionString(connectionString);
var client = Client.fromConnectionString(connectionString);
var deviceToReboot = 'myDeviceId';
```
6. Add the following function to invoke the device method that reboots the target device:
```javascript
var startRebootDevice = function(twin) {
var methodName = "reboot";
var methodParams = {
methodName: methodName,
payload: null,
timeoutInSeconds: 30
};
client.invokeDeviceMethod(deviceToReboot, methodParams, function(err, result) {
if (err) {
console.error("Direct method error: "+err.message);
} else {
console.log("Successfully invoked the device to reboot.");
}
});
};
```
7. Add the following function to query the device and get the last reboot time:
```javascript
var queryTwinLastReboot = function() {
    registry.getTwin(deviceToReboot, function(err, twin){
        if (err) {
            console.error('Could not query twins: ' + err.constructor.name + ': ' + err.message);
        } else if (twin.properties.reported.iothubDM != null) {
            // The device has reported a reboot at least once.
            var lastRebootTime = twin.properties.reported.iothubDM.reboot.lastReboot;
            console.log('Last reboot time: ' + JSON.stringify(lastRebootTime, null, 2));
        } else {
            console.log('Waiting for device to report last reboot time.');
        }
    });
};
```
8. Add the following code to call the functions that trigger the reboot direct method and query for the last reboot time:
```javascript
startRebootDevice();
setInterval(queryTwinLastReboot, 2000);
```
9. Save and close the **dmpatterns_getstarted_service.js** file.
## <a name="run-the-apps"></a>Run the apps
You are now ready to run the apps.
1. At the command prompt in the **simulateddevice** folder, run the following command to begin listening for the reboot direct method.
```cmd/sh
node dmpatterns_getstarted_device.js
```
2. At the command prompt in the **triggerrebootondevice** folder, run the following command to trigger the remote reboot and query the device twin for the last reboot time.
```cmd/sh
node dmpatterns_getstarted_service.js
```
3. You see the device response to the reboot direct method and the reboot status in the console.
The following shows the device response to the reboot direct method sent by the service:

The following shows the service triggering the reboot and querying the device twin for the last reboot time:

[!INCLUDE [iot-hub-dm-followup](../../includes/iot-hub-dm-followup.md)]
| 46.058182
| 430
| 0.707721
|
nld_Latn
| 0.995072
|
a6ae74ceb5479163cbee8d37a9be55e416a74758
| 191
|
md
|
Markdown
|
README.md
|
fsonmezay/websocket-service-registry
|
7f98dcc20bb097428d5679f3b8a59fc26c80f2ec
|
[
"MIT"
] | null | null | null |
README.md
|
fsonmezay/websocket-service-registry
|
7f98dcc20bb097428d5679f3b8a59fc26c80f2ec
|
[
"MIT"
] | null | null | null |
README.md
|
fsonmezay/websocket-service-registry
|
7f98dcc20bb097428d5679f3b8a59fc26c80f2ec
|
[
"MIT"
] | null | null | null |
# websocket-service-registry
---
The article for this repository is published at [my personal blog](https://ferdisonmezay.com/blog/implementing-a-service-registry-using-websockets.html)
---
| 27.285714
| 151
| 0.780105
|
eng_Latn
| 0.814361
|
a6ae9da458ee342899077ff7cf89816471e600a9
| 2,598
|
md
|
Markdown
|
articles/event-grid/scripts/event-grid-cli-resource-group-filter.md
|
changeworld/azure-docs.tr-tr
|
a6c8b9b00fe259a254abfb8f11ade124cd233fcb
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/event-grid/scripts/event-grid-cli-resource-group-filter.md
|
changeworld/azure-docs.tr-tr
|
a6c8b9b00fe259a254abfb8f11ade124cd233fcb
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
articles/event-grid/scripts/event-grid-cli-resource-group-filter.md
|
changeworld/azure-docs.tr-tr
|
a6c8b9b00fe259a254abfb8f11ade124cd233fcb
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Azure CLI - subscribe to resource group & filter by resource
description: This article provides a sample Azure CLI script that shows how to subscribe to Event Grid events for a resource and filter for that resource.
services: event-grid
documentationcenter: na
author: spelluru
ms.service: event-grid
ms.devlang: azurecli
ms.topic: sample
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 01/23/2020
ms.author: spelluru
ms.openlocfilehash: 3dfe31a38d1bc1ba8662246a5dec3f10d0d1c948
ms.sourcegitcommit: 0947111b263015136bca0e6ec5a8c570b3f700ff
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/24/2020
ms.locfileid: "76720835"
---
# <a name="subscribe-to-events-for-a-resource-group-and-filter-for-a-resource-with-azure-cli"></a>Subscribe to events for a resource group and filter for a resource with Azure CLI
This script creates an Event Grid subscription to the events for a resource group. It uses a filter to get only the events for a specified resource in the resource group.
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)]
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
The preview sample script requires the Event Grid extension. To install it, run `az extension add --name eventgrid`.
## <a name="sample-script---stable"></a>Sample script - stable
[!code-azurecli[main](../../../cli_scripts/event-grid/filter-events/filter-events.sh "Subscribe to Azure subscription")]
## <a name="sample-script---preview-extension"></a>Sample script - preview extension
[!code-azurecli[main](../../../cli_scripts/event-grid/filter-events-preview/filter-events-preview.sh "Subscribe to Azure subscription")]
## <a name="script-explanation"></a>Script explanation
This script uses the following command to create the event subscription. Each command in the table links to command-specific documentation.
| Command | Notes |
|---|---|
| [az eventgrid event-subscription create](https://docs.microsoft.com/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) | Create an Event Grid subscription. |
| [az eventgrid event-subscription create](/cli/azure/ext/eventgrid/eventgrid/event-subscription#ext-eventgrid-az-eventgrid-event-subscription-create) - extension version | Create an Event Grid subscription. |
## <a name="next-steps"></a>Next steps
* To learn about querying subscriptions, see [Query Event Grid subscriptions](../query-event-subscriptions.md).
* For more information about the Azure CLI, see [Azure CLI documentation](https://docs.microsoft.com/cli/azure).
| 49.961538
| 202
| 0.779446
|
tur_Latn
| 0.982871
|
a6aeb95294ddcce45781bc76f4410ff8a3aa792b
| 323
|
md
|
Markdown
|
README.md
|
ess-dmsc/docker-debian-build-node
|
56c0c4e45008b02d4415c311bd98cbb016fe6847
|
[
"BSD-2-Clause"
] | null | null | null |
README.md
|
ess-dmsc/docker-debian-build-node
|
56c0c4e45008b02d4415c311bd98cbb016fe6847
|
[
"BSD-2-Clause"
] | 1
|
2018-09-20T12:17:12.000Z
|
2018-09-20T13:37:26.000Z
|
README.md
|
ess-dmsc/docker-debian-build-node
|
56c0c4e45008b02d4415c311bd98cbb016fe6847
|
[
"BSD-2-Clause"
] | null | null | null |
# docker-debian9-build-node
Dockerfile for a Debian 9 (stretch) build node.
## Building
$ docker build -t <tag> <path_to_dockerfile>
To create the official container image, substitute `<tag>` with
_screamingudder/debian9-build-node:<version>_. The build might take a relatively
long time, as Python is built from source.
| 24.846154
| 84
| 0.758514
|
eng_Latn
| 0.917327
|
a6af397a16082c141fd830c95b801266c4723b5b
| 6,860
|
md
|
Markdown
|
README.md
|
fishnchipper/line-light
|
cba05a1458841e8f622ebd720477e35c5a586f2c
|
[
"MIT"
] | null | null | null |
README.md
|
fishnchipper/line-light
|
cba05a1458841e8f622ebd720477e35c5a586f2c
|
[
"MIT"
] | 4
|
2020-03-22T06:15:52.000Z
|
2022-01-22T10:57:49.000Z
|
README.md
|
fishnchipper/line-light
|
cba05a1458841e8f622ebd720477e35c5a586f2c
|
[
"MIT"
] | 1
|
2020-03-19T11:29:34.000Z
|
2020-03-19T11:29:34.000Z
|
# line
 [](https://ci.appveyor.com/project/gam4it/line)
`line` is a NodeJS + Express app shell that can be used as a starting point for developing an `internal microservice` with RESTful API interfaces. `line` is not suitable for an `edge microservice`, as it does not generate the `access token` required to access resources that are available only to authenticated clients.
## Session for REST Communication
- A REST communication is initiated by calling `/session/init`, which returns an encrypted permanent JWT `session token` to the caller.
- The encrypted JWT `session token` is required until the communication session ends by calling `/session/end`.
## Where to start
1. Install dependent packages
```
$ npm install
```
2. Run the app
```
$ npm start
```
3. RESTful APIs
- Go to https://localhost:65001/api/docs
## How to add your RESTful APIs
1. create a route folder (ex `rt-api-xxx-v1`) under `/routes`
- ex) `/routes/rt-api-xxx-v1`
2. add a main route function file under the folder you created at step 1.
- ex) `/routes/rt-api-xxx-v1/rt-api-xxx-v1.js`
3. add the main route function to `app.use` with `checkToken.on` middleware in `index.js`
- ex)
```
// add your RESTFul APIs here
//
let routeApiXXXV1 = require('./routes/rt-api-xxx-v1/rt-api-xxx-v1');
app.use('/api/xxx/v1', checkToken.on, routeApiXXXV1.router);
//
// end of your RESTFul APIs
```
4. define each REST API in the main route function file (ex. `rt-api-xxx-v1.js`) created above.
- Instead of implementing all API paths in a single main route function file,
- create a separate file for each REST API path and import that file into the main route function file.
- By doing this, each API is fully decoupled from the others.
- For example, check the `/routes/rt-api-xxx-v1/rt-api-xxx-v1.js` file.
```
let getABC = require('./get-abc');
let postABC = require('./post-abc');
let putABC = require('./put-abc');
let deleteABC = require('./delete-abc');
router.route('/abc')
.get(getABC.on)
.post(postABC.on)
.put(putABC.on)
.delete(deleteABC.on);
```
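To make the pattern concrete, here is a hypothetical sketch of what one decoupled handler file (for example, `get-abc.js`) might contain; the handler name and response shape are assumptions for illustration, with the exported `on` property following the convention shown above:

```javascript
// Hypothetical get-abc.js: one file per HTTP verb + path, exporting `on`
// so the main route file can write router.route('/abc').get(getABC.on).
function getAbcHandler(req, res) {
  // Respond with a { code, message, payload } shape like the other
  // examples in this README.
  res.status(200).json({
    code: 'abc.get',
    message: 'ok',
    payload: null
  });
}

module.exports = { on: getAbcHandler };
```

Because each verb lives in its own file, a handler can be tested with a plain mock `res` object, without starting Express.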
5. Add OpenAPI yaml definition to each REST API, then your API specification is automatically updated at `https://localhost:65001/api/docs`
- For example, check `/routes/rt-api-session-v1/rt-api-session-v1.js`
```
/**
* @swagger
* tags:
* name: Session
* description: Session init for a new client
*/
router.route('/init/:uuid')
/**
* @swagger
* path:
* /session/v1/init/{uuid}:
* get:
* summary: Init a new session token assigned to {uuid}
* tags: [Session]
* parameters:
* - name: uuid
* in: path
* description: uuid of client
* schema:
* type: string
* responses:
* "200":
* description: Session is successfully created
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Response'
* example:
* code: session.init
* message: session is successfully created
* payload: {"session_id":"7f3b171b-335c-4739-a123-4ca810db963c","session_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzZXNzaW9uX2lkIjoiN2YzYjE3MWItMzM1Yy00NzM5LWExMjMtNGNhODEwZGI5NjNjIiwiY2xpZW50X3V1aWQiOiJiOTZhYjVlNi1mMWU4LTQ2NTMtYWIwOC00ZGQ4MmVhNjU3NzEiLCJpYXQiOjE1ODQxNDg2MzR9.L0SbNuIRb75bnmoxj-eVXOfEjBncUvj2orAQSpq2gfWH6YxdDx_YAxgzPsz3h7vh6fYvx56ZYD7ABpFNIQqytNW_woR614fvgSEhRgBdVwsJYKD1JEeQg-xgfvn5mIuhHux7yVPZVi9XBXUheANlCrmUNE5dCf-UIFFCZK3v5j8PseGyDtBzYQur3PDYFa9mPTyCJFf3kFkL5wa9Mg_fJD1oQoza7Mgg688_q7k3JJWJ0U51NUn0WO9E0wzeJcne2wia2UZeza0D-JGDg_AngjcCL1kAUWZjKEnUDcpHC4rAeicf6kkelmXkRzIOn6ZFb3GWxUtey_uNCl_H7wt40g"}
* "400":
* description: Invalid request
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Response'
* example:
* code: session.error
* message: invalid uuid
*/
.get(getInit.on);
router.route('/end/:uuid')
/**
* @swagger
* path:
* /session/v1/end/{uuid}:
* get:
* summary: End session token assigned to {uuid}
* tags: [Session]
* parameters:
* - name: uuid
* in: path
* description: uuid of client
* schema:
* type: string
* responses:
* "200":
* description: Session cleared
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Response'
* example:
* code: session.end
* message: session cleared
* "400":
* description: Invalid request
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Response'
* examples:
* invalid uuid:
* code: session.error
* message: invalid uuid
* no session linked:
* code: session.error
* message: session does not exist
*/
.get(getEnd.on);
```
6. Add any common components under `/models`. For example, see `/models/response.js`
## Access to resources
Request HTTP headers must contain the following custom-defined parameters ([RFC6648](https://tools.ietf.org/html/rfc6648)) with valid values.
- `Comm-Session-UUID` : uuid ([RFC4122](https://tools.ietf.org/html/rfc4122)) of requester
- `Comm-Session-Token` : `session token` created through `/session/v1/init/{uuid}`
- example
```
curl https://localhost:65001/api/xxx/v1 -k -i -H "Comm-Session-UUID:b96ab5e6-f1e8-4653-ab08-4dd82ea65778" -H "Comm-Session-Token:eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzZXNzaW9uX2lkIjoiNjI4YTNkODgtODYxYS00ODNmLWFkZjMtZWMzZTMwZWJjZjIzIiwiY2xpZW50X3V1aWQiOiJiOTZhYjVlNi1mMWU4LTQ2NTMtYWIwOC00ZGQ4MmVhNjU3NzgiLCJpYXQiOjE1ODQwNzY5OTZ9.N34OokCsx0Y-cuiyJP_3_IGUjknJ4tqXktHMqTrAiZGk7tdj1jjTNQXNulNQoHV6SqQSiGmFdvMCcODoOiI48vf1P4FfqiHxYAbe3L1z8bc-bmfN_eMMtVUukkmd5SLnq5t1vaASqv7FV4BYqd5Is9YL8jdiveOcd005eT26EfJfm1rs_g4eR4oDMq9nUWM5XMFkuLzDLMLVr-4Txh6aWPb3iEhzAHRwUldBSlDiq8SV6ppoXHFHfbksrEP36dJGfJ6MUfe0xZyRs8LGY7r1yiVkGxoF_Vl6iGzNAgBTmJqKQ2H3YunM2wObAStyE0IP7iCsNNr2EDEe8b9jMB6rqw"
```
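For clients other than `curl`, the same two headers can be attached programmatically. A minimal Python sketch using only the standard library; the URL, UUID, and token below are placeholders taken from the examples above:

```python
import urllib.request


def build_session_headers(uuid: str, token: str) -> dict:
    """Build the custom headers required to access protected resources."""
    return {
        "Comm-Session-UUID": uuid,    # uuid (RFC4122) of the requester
        "Comm-Session-Token": token,  # token from /session/v1/init/{uuid}
    }


def make_request(url: str, uuid: str, token: str) -> urllib.request.Request:
    # Attach both headers to a GET request; send it with urllib.request.urlopen()
    return urllib.request.Request(url, headers=build_session_headers(uuid, token))


req = make_request("https://localhost:65001/api/xxx/v1",
                   "b96ab5e6-f1e8-4653-ab08-4dd82ea65778", "<session token>")
```

Sending `req` without valid header values should produce the `session.error` response shown in the OpenAPI definition above.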
# Certificate Management with Linkerd
Three different ways to manage your certificates with Linkerd. What we will be doing today:
* Generate and inspect a certificate with `step`.
* (Staging step) Generate a self-signed CA and an issuer.
* (Production step) Generate a self-signed CA, generate an issuer with cert-manager and install Linkerd.
* Rotate issuer and CA.
* Re-install Linkerd using CA managed by cert-manager.
### 1) Generating certificates with `step`
---
* When in doubt, `step --help`
* To create certificates:
```sh
# step certificate create <subject> <crt-output> <key-output> \
# [--profile=leaf/root-ca/self-signed/intermediate-ca] \
# [--no-password (don't use pass to encrypt private key)]
#
# Example:
$ step certificate create root.linkerd.cluster.local ca.crt ca.key --profile root-ca --no-password \
--insecure #[--insecure required by --no-password]
```
* The certificate is encoded, so we can't just read it directly. We can decode and
  inspect it using `step certificate inspect <path>`
### 2) Generating a self-signed CA and an issuer
---
Linkerd's operational model requires a CA and an issuer. Generally, it is
recommended that these be different certificates for better security. In a
staging environment, we might find that frequently rotating trust anchors is
painful; we'd expect to do it in a production environment, but production
environments have more stringent requirements.
If we are not concerned with security requirements in a staging environment, we
can set a longer expiration date for our certificates to avoid rotating often.
In `step`, we can provide an arbitrary expiration time with `--not-after`. It
takes either a duration (e.g seconds, minutes, hours) or a **time** (which must
be [RFC3339][rfc-time] compliant).
**TIP**: if it's hard to work with RFC3339 times, you can use `date
--rfc-3339=ns` on most unix systems. `man date` for more.
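If `date` isn't available, an RFC3339 timestamp can also be produced with a few lines of Python (a hypothetical helper, not part of `step`):

```python
from datetime import datetime, timedelta, timezone


def rfc3339_in(years: int) -> str:
    """Return an RFC3339 (ISO8601) timestamp roughly `years` from now, in UTC."""
    # 365 days per year is close enough for a certificate expiry date
    t = datetime.now(timezone.utc) + timedelta(days=365 * years)
    return t.isoformat(timespec="seconds")  # e.g. '2060-03-17T16:00:00+00:00'


print(rfc3339_in(38))  # usable directly as step's --not-after value
```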
```sh
# First, generate trust anchors
#
$ step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca --no-password --insecure \
--not-after="2060-03-17T16:00:00+00:00"
#
# Second, generate issuer, signed by our CA
#
$ step certificate create identity.linkerd.cluster.local identity.crt identity.key \
--profile intermediate-ca --no-password --insecure \
--ca ca.crt --ca-key ca.key \
--not-after="2050-03-17T16:00:00+00:00"
```
* Notice the additional flags: why are they needed?
* Notice the expiry dates: is it necessary for the trust anchor to expire after
the issuer?
* Install Linkerd and don't worry about certificate rotation until 2050 at the very least!
```sh
$ linkerd install \
--identity-trust-anchors-file ca.crt \
--identity-issuer-certificate-file identity.crt \
--identity-issuer-key-file identity.key \
|kubectl apply -f -
```
### 3) Generating self-signed certificates for the security conscious
---
**Why do we want shorter validity periods**: because they make our environment
more secure and robust.
* Key compromises might happen more often than you think. Private keys need
some love and should be rotated as often as possible to avoid compromises (this is
especially true with issuer CAs that typically live in the cluster).
* Revoking certificates might happen, best to get used to rotating your
certificates.
We will generate a root CA with a validity of one year (~ 8760h) and an
issuer with a validity of one week (~ 168h). The issuer period might be a bit too
short; there's no right or wrong answer, but the shorter, the better (e.g. 1-2 months).
```sh
$ step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca --no-password --insecure \
--not-after=8700h
$ step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
--profile intermediate-ca --ca ca.crt --ca-key ca.key \
--no-password --insecure --not-after=170h
```
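The hour values passed to `--not-after` are just the validity period converted to hours; a small sketch of that arithmetic (the helper name is made up):

```python
def validity_hours(days: int) -> str:
    """Convert a validity period in days to step's duration syntax."""
    return f"{days * 24}h"


root_validity = validity_hours(365)   # one year  -> "8760h"
issuer_validity = validity_hours(7)   # one week  -> "168h"

print(root_validity, issuer_validity)
```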
### 4) Rotating your certificates manually
---
With added security comes more headaches, unsurprisingly.
* Once we have shorter validity periods, we need to rotate certificates.
* The process may slightly differ depending on which layer you are planning to rotate.
* Trust anchors will be the most difficult to rotate, followed by intermediate
(issuer) certificates, and leaf (proxy) certificates, which shouldn't be a
concern 99% of the time.
You can understand the state of the control and data plane through `linkerd
check --proxy`.
**Rotating valid root certificates**: is a multi-step process and can be done
without any downtime. Monitor those certificates closely.
1. To rotate trust, generate a new CA certificate.
2. Bundle original trust with the new one.
3. Deploy the bundle by doing an upgrade.
```sh
$ step certificate create root.linkerd.cluster.local new-ca.crt new-ca.key \
--profile root-ca --no-password --insecure
# We need to 'bundle' new CA with old CA
# We can either take old CA from the filesystem or from linkerd-identity-trust-roots cm
#
$ step certificate bundle new-ca.crt ca.crt bundle.crt
# Deploy!
$ linkerd upgrade --identity-trust-anchors-file=bundle.crt | kubectl apply -f -
# Restart your data plane workloads, e.g kubectl rollout restart deploy -n default
# finally, run some checks.
#
$ linkerd check --proxy
```
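`step certificate bundle` just concatenates the PEM-encoded certificates; a sketch of the same operation in Python (the certificate contents are placeholders):

```python
def bundle_certs(*pem_texts: str) -> str:
    """Concatenate PEM certificates into one trust bundle, new CA first."""
    return "".join(t if t.endswith("\n") else t + "\n" for t in pem_texts)


new_ca = "-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n"
old_ca = "-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n"

# Equivalent to: step certificate bundle new-ca.crt ca.crt bundle.crt
bundle = bundle_certs(new_ca, old_ca)
```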
**Rotating valid issuer certificates**: fortunately, we don't need a bundle here,
provided the root is OK.
```sh
$ step certificate create identity.linkerd.cluster.local new-issuer.crt new-issuer.key \
--profile intermediate-ca --ca new-ca.crt --ca-key new-ca.key \
--no-password --insecure
$ linkerd upgrade \
--identity-issuer-certificate-file new-issuer.crt \
--identity-issuer-key-file new-issuer.key \
|kubectl apply -f -
# Restart identity service so new identity issuer can be reloaded and used to sign
# new leaves
#
# Check events if you want to see confirmation of changes being reloaded
# Notice: identity service won't re-deploy since it reads the issuer from cm
$ kubectl get events --field-selector reason=IssuerUpdated -n linkerd
# Bonus round: removing old root from bundle
#
$ linkerd upgrade --identity-trust-anchors-file=new-ca.crt|kubectl apply -f -
```
In total, there are 3 upgrades when replacing both CA and issuer:
1. Upgrade with CA as bundle (if we don't do this, older pods will not work so
we will have downtime. Won't be able to do certificate validation for
existing pods since their certificates rely on old issuer and old CA)
2. Upgrade with new issuer (signed by new CA. Means certificate validation will
now work well)
3. Upgrade with just new CA (we clean up after ourselves).
**Rotating expired certificates**: is not guaranteed to be a zero-downtime
operation. If your root certificate has expired, then you will need to replace
both **issuer** and **root**, otherwise certificate validation will fail. If
your issuer has expired, you can simply generate a new one (signed by CA) and
upgrade.
Since this is an operation that's not guaranteed to be zero downtime, I'm
leaving it as homework :). We do have a [guide][expired-guide] if you get
stuck.
### 5) Using cert-manager to automate identity bootstrapping
If you want to do everything automagically, you can opt to do all of these
operations **in-cluster** with the help of a PKI, such as
[cert-manager][cert-manager] (which we have found to work out great).
For this workshop, we won't cover [automatically rotating
certificates][auto-guide]. It will be trivial to do once we bootstrap identity,
and can be a great way to solidify the knowledge outside of this session
(problem-based learning).
[cert-manager][cert-manager] currently distributes bootstrapped certificates as
Kubernetes `Secret` objects, which contain both the public key (certificate) and
the private key. This works well for our identity issuer, but not for our trust
roots. Since our trust roots will need to be mounted to different pods, we
don't want the private key to be accessible from within the pod. We will
introduce a second tool from the folks at JetStack called [trust][trust]. We
will discuss its usage shortly; for now, let's install both.
```sh
# Let's also uninstall Linkerd, we'll reinstall it without passing in any explicit params :)
$ linkerd uninstall|kubectl delete -f -
##
# Let's install cert-manager and trust
#
$ helm repo add jetstack https://charts.jetstack.io --force-update
$ helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager --set installCRDs=true --wait --create-namespace
$ helm upgrade -i -n cert-manager cert-manager-trust jetstack/cert-manager-trust --wait
```
To install Linkerd, we need:
1. A self-signed CA to exist in a `ConfigMap` in "linkerd" namespace (`linkerd-identity-trust-roots`)
2. An issuer certificate to exist in a `Secret` in "linkerd" namespace.
We will create both of these using `cert-manager` and `trust` CRDs.
```sh
$ kubectl apply -f manifests/cert-manager-ca-issuer.yaml
# Notice the fields here and why we can't mount it to a pod:
$ kubectl get secret linkerd-identity-trust-roots -n cert-manager
$ kubectl apply -f manifests/cert-manager-identity-issuer.yaml
$ kubectl get secrets -n linkerd
$ kubectl apply -f manifests/ca-bundle.yaml
$ kubectl get cm -n linkerd
$ linkerd install \
--identity-external-issuer \
--set "identity.externalCA=true" \
|kubectl apply -f -
```
You should end up with:
```
:; k get secrets -n cert-manager
NAME TYPE DATA AGE
linkerd-identity-trust-roots kubernetes.io/tls 3 64m
:; k get secrets -n linkerd
NAME TYPE DATA AGE
linkerd-identity-issuer kubernetes.io/tls 3 62m
linkerd-identity-token-7tndg kubernetes.io/service-account-token 3 58m
:; k get cm -n linkerd
NAME DATA AGE
kube-root-ca.crt 1 62m
linkerd-identity-trust-roots 1 62m
linkerd-config 1 59m
```
And voila! You have automated certificate bootstrapping. You can do a lot more
things now, such as setting your expiry periods programmatically, or pulling in
certificates from your corporate PKI (non-cloud-native). This approach gives
you more flexibility, albeit at the cost of slightly compromising your security by
keeping your trust anchor private key in-cluster.
Happy Meshing!
### Links
* [RFC 3339][rfc-time]
* [Linkerd Getting Started Guide][start-guide]
* [Linkerd Generating Your Own CA][ca-guide]
* [Replacing Expired Certificates][expired-guide]
* [Automatically Replacing Expired Certificates][auto-guide]
* [Cert Manager docs][cert-manager]
* [Trust GitHub repo][trust]
[rfc-time]: https://datatracker.ietf.org/doc/html/rfc3339
[start-guide]: https://linkerd.io/2.11/getting-started/
[ca-guide]: https://linkerd.io/2.11/tasks/generate-certificates/
[expired-guide]: https://linkerd.io/2.11/tasks/replacing_expired_certificates/
[cert-manager]: https://cert-manager.io/docs/
[auto-guide]: https://linkerd.io/2.11/tasks/automatically-rotating-control-plane-tls-credentials/
[trust]: https://github.com/cert-manager/trust
---
date: 2016-11-07T12:22:03-08:00
title: Insomnia v3.6.8 Release
slug: 3.6.8
minor:
- Minor improvements and bug fixes
---
# Using Deep Q Networks to Learn Video Game Strategies
#### Akshay Srivatsan, Ivan Kuznetsov, Willis Wang
## 1. Abstract
In this project, we apply a deep learning model recently developed by Mnih et al 2015 [1] to learn optimal control patterns from visual input using reinforcement learning. While this method is highly generalizable, we applied it to the problem of video game strategy, specifically for Pong and Tetris. Given raw pixel values from the screen, we used a convolutional neural network trained with Q learning to approximate future expected reward for any possible action, and then selected an action based on the best possible outcome. We find that this method is capable of generalizing to new problems with no adjustment of the model architecture. After sufficient training, our Pong model achieved better than human performance on the games, demonstrating the potential for deep learning as a powerful and generalizable method for learning high-level control schemes.
## 2. Background
Reinforcement learning develops control patterns by providing feedback on a model’s selected actions, which encourages the model to select better actions in the future. At each time step, given some state s, the model will select an action a, and then observe the new state s' and a reward r based on some optimality criterion.
We specifically used a method known as Q learning, which approximates the maximum expected return for performing an action at a given state using an action-value (Q) function. Specifically, return gives the sum of the rewards until the game terminates, where the reward is discounted by a factor of γ at each time step. We formally define this as:
$$R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$$
We then define the action-value function:
$$Q^*(s,a) = \max_\pi \mathbb{E}\left[R_t \mid s_t = s,\ a_t = a,\ \pi\right]$$
Note that if the optimal Q function is known for state s', we can write the optimal Q function at preceding state s as the maximum expected value of $r + \gamma \max_{a'} Q^*(s', a')$. This identity is known as the Bellman equation:
$$Q^*(s,a) = \mathbb{E}_{s'}\left[r + \gamma \max_{a'} Q^*(s',a') \,\middle|\, s, a\right]$$
The intuition behind reinforcement learning is to continually update the action-value function based on observations using the Bellman equation. It has been shown by Sutton and Barto 1998 [2] that such update algorithms will converge on the optimal action-value function as time approaches infinity. Based on this, we can define Q as the output of a neural network, which has weights θ, and train this network by minimizing the following loss function at each iteration i:
$$L_i(\theta_i) = \mathbb{E}\left[\left(y_i - Q(s,a;\theta_i)\right)^2\right]$$
Where y_i represents the target function we want to approach during each iteration. It is defined as:
$$y_i = \mathbb{E}\left[r + \gamma \max_{a'} Q(s',a';\theta_{i-1}) \,\middle|\, s, a\right]$$
Note that when i is equal to the final iteration of an episode (colloquially the end of a game), the Q function should be 0 since it is impossible to attain additional reward after the game has ended. Therefore, when i equals the terminal frame of an episode, we can simply write:
$$y_i = r$$
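The two cases for the target can be written as a few lines of code. A sketch in plain Python (not part of the project's TensorFlow implementation; names are illustrative):

```python
def q_target(reward: float, next_q_values: list, terminal: bool,
             gamma: float = 0.99) -> float:
    """Bellman target for one transition: r for terminal states,
    r + gamma * max_a' Q(s', a') otherwise."""
    if terminal:
        return reward
    return reward + gamma * max(next_q_values)


# Terminal frame: the target is just the observed reward
assert q_target(1.0, [0.5, 2.0], terminal=True) == 1.0
# Non-terminal: the discounted best next action value is added
assert q_target(0.0, [0.5, 2.0], terminal=False) == 0.99 * 2.0
```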
## 3. Related Work
Traditional approaches use an array to approximate the action-value function. However, this method does not scale well as the complexity of the system increases. Tesauro 1995 [3] proposed TD-Gammon, a method of using a multilayer perceptron with one hidden layer. This method worked well on the backgammon game, but failed to generalize to new problems.
More recently Riedmiller 2005 [4] used a system called neural-fitted Q learning, which used a multilayer perceptron to approximate the Q function. However, the approach had issues as it trained in batch updates with cost proportional to the size of the dataset, which limits the scalability of the system. Also, the tasks which Riedmiller considered had a very low dimensional state space. For the problem of video games, however, it is necessary to consider incredibly high dimensional visual input, and not allow our system to know anything about the internal game state or the rules of the game.
Mnih et al 2015 [1] proposed deep Q learning, where a deep convolutional neural network is used to calculate the Q function. Convolutional neural networks are particularly well suited to the visual state space of video games. Furthermore, they also proposed a new method for training the network. While online approaches have a tendency to diverge or overfit, and training on the entire dataset is quite costly, Mnih et al 2015 trained on minibatches sampled from a replay memory. The replay memory contains all previously seen state transitions, and the associated action and reward. At each time step a minibatch of transitions is sampled from the replay memory and the loss function described above is minimized for that batch with respect to the network weights.
This approach carries many advantages in addition to the aforementioned computational cost benefits. By using a replay memory, each experience can be used in multiple updates, which means that the data is being used more effectively. Also, using an online approach has the disadvantage that all consecutive updates will be over highly correlated states. This can cause the network to get stuck in a poor local optimum, or to diverge dramatically. Using a batched approach over more varied experiences smooths out the updates and avoids this problem.
## 4. Deep Q Learning Algorithm
The pseudo-code for the Deep Q Learning algorithm, as given in [1], can be found below:
```
Initialize replay memory D to size N
Initialize action-value function Q with random weights
for episode = 1, M do
Initialize state s_1
for t = 1, T do
With probability ϵ select random action a_t
otherwise select a_t=argmax_a Q(s_t,a; θ_i)
Execute action a_t in emulator and observe r_t and s_(t+1)
Store transition (s_t,a_t,r_t,s_(t+1)) in D
Sample a minibatch of transitions (s_j,a_j,r_j,s_(j+1)) from D
Set y_j:=
r_j for terminal s_(j+1)
r_j+γ*max_(a^' ) Q(s_(j+1),a'; θ_i) for non-terminal s_(j+1)
Perform a gradient step on (y_j-Q(s_j,a_j; θ_i))^2 with respect to θ
end for
end for
```
## 5. Experimental Methods
The network was trained on the raw pixel values observed from the game at each time step. We preprocessed the images by converting to grayscale, resizing them to 80x80, and then stacked together the last four frames to produce an 80x80x4 input array.
The architecture of the network is described in Figure 1 below. The first layer convolves the input image with an 8x8x4x32 kernel at a stride size of 4. The output is then put through a 2x2 max pooling layer. The second layer convolves with a 4x4x32x64 kernel at a stride of 2. We then max pool again. The third layer convolves with a 3x3x64x64 kernel at a stride of 1. We then max pool one more time. The last hidden layer consists of 256 fully connected ReLU nodes.

The output layer, obtained with a simple matrix multiplication, has the same dimensionality as the number of valid actions which can be performed in the game, where the 0th index always corresponds to doing nothing. The values at this output layer represent the Q function given the input state for each valid action. At each time step, the network performs whichever action corresponds to the highest Q value using a ϵ greedy policy.
At startup, we initialize all weight matrices randomly using a normal distribution with a standard deviation of 0.01. Bias variables are all initialized at 0.01. We then initialize the replay memory with a max size of 500,000 observations.
We start training by choosing actions uniformly at random for 50,000 time steps, without updating the network weights. This allows us to populate the replay memory before training begins. After that, we linearly anneal ϵ from 1 to 0.1 over the course of the next 500,000 frames. During this time, at each time step, the network samples minibatches of size 100 from the replay memory to train on, and performs a gradient step on the loss function described above using the Adam optimization algorithm with a learning rate of 0.000001. After annealing finishes, the network continues to train indefinitely, with ϵ fixed at 0.1.
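The ε schedule described above (uniformly random actions for 50,000 steps, then linear annealing from 1 to 0.1 over 500,000 steps, then fixed) can be sketched as:

```python
OBSERVE = 50_000    # populate the replay memory, acting uniformly at random
EXPLORE = 500_000   # anneal epsilon linearly over this many steps
EPS_START, EPS_END = 1.0, 0.1


def epsilon(step: int) -> float:
    """Exploration rate for the epsilon-greedy policy at a given time step."""
    if step < OBSERVE:
        return EPS_START
    if step >= OBSERVE + EXPLORE:
        return EPS_END
    frac = (step - OBSERVE) / EXPLORE
    return EPS_START + frac * (EPS_END - EPS_START)
```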
An Amazon Web Services G2 large instance was used to efficiently conduct training on a GPU. We implemented the DQN in Google’s newly released TensorFlow library.
## 6. Results
See these links for videos of the DQN in action:
[DQN playing a long game of Pong](https://www.youtube.com/watch?v=NE_KKM0e38s)
[Visualization of convolutional layers and Q function](https://www.youtube.com/watch?v=W9jGIzkVCsM)
We found that for Pong, good results were achieved after approximately 1.38 million time steps, which corresponds to about 25 hours of game time. Qualitatively, the network played at the level of an experienced human player, usually beating the game with a score of 20-2. Figure 2 shows the maximum Q value for the first ~20,000 training steps. To minimize clutter, only maximum Q values corresponding to the first 100 training steps of each consecutive group of 1,250 training steps are shown. As can be seen, the maximum Q value increases over time. This indicates improvement as it means that the network is expecting to receive a greater reward per game as it trains for longer. In theory, as we continue to train the network, the value of the maximum Q value should plateau as the network reaches an optimal state.

The final hidden representation of the input image frames in the network was a 256 dimensional vector. Figure 3 shows t-SNE embedding of ~1,200 such representations which were sampled over a period of 2 hours with fixed network weights which had been learned after 25 hours of training. The points in the visualization were also color-coded based on the maximum Q values at the output layer for that frame.

The network controlled paddle is on the left and highlighted yellow for easy visualization. The paddle controlled by the “opponent,” a hard-coded AI built into the game, is on the right. As can be seen from the image frames, high maximum Q-values correspond to when the ball was near the opponent's paddle, so the network had a high probability of scoring. Similarly, low maximum Q-values correspond to when the ball was near the network's paddle, so the network had an increased chance of losing the points. When the ball is in the center of the court, a neutral maximum Q value occurs.
We are also currently actively working on getting the system to learn Tetris as well. This section will be updated as further progress in this direction is made.
## 7. Conclusions
We utilized deep Q learning to train a neural network to play Pong and partially to play Tetris from just images of the current game screen and no knowledge of the internal game state. Qualitatively, the trained Pong network appeared to perform near human level. We are still working on training the system to play Tetris, and will hopefully report good results in the near future.
We then went on to show that the Pong network has a relatively high level “understanding” of the current state of the game. For example, the Pong network’s Q function was maximized when the ball was near the opponent's paddle, when it would be reasonable to expect a point. However, it was minimized when the ball was near the network’s paddle, as there was a chance it could lose the point.
Significant improvements on the current network architecture are possible. For example, our implementation of max pooling may have been unnecessary, as it caused the convolution kernels in the deeper layers to have a size comparable to that of the previous layer itself. Hence, max pooling may have discarded useful information. Given more time, it would be nice to perform more tuning over the various network parameters, such as the learning rate, the batch size, the replay memory, and the lengths of the observation and exploration phases. Additionally, we could check the performance of the network if the game parameters are tweaked (e.g. the angle of the ball’s bounce is changed slightly). This would yield important information about the network’s ability to extrapolate to new situations.
Based on the difference in results between Tetris and Pong, we should also expect the system to converge faster for games which provide rewards (either positive or negative) very frequently, even while the network is acting randomly. This leads us to believe that while our system should theoretically be able to learn any game, it might achieve human-level performance faster for genres like fighting games or other similar types in which the game score changes rapidly over a short amount of time. This might be another interesting future direction to explore.
Ultimately, the results obtained here demonstrate the usefulness of convolutional neural networks to perform deep Q learning as proposed in [1]. However, we have only applied these methods to relatively simple games; CNNs have also been used in much more complex image processing and object recognition. This would indicate that we can apply similar techniques to those described here to much more challenging tasks. Further research will undoubtedly lead to interesting results in this field.
## 8. References
[1] Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level Control through Deep Reinforcement Learning. Nature 518, 529-533, 2015.
[2] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[3] Gerald Tesauro. Temporal difference learning and td-gammon. Communications of the ACM, 38(3):58–68, 1995.
[4] Martin Riedmiller. Neural fitted q iteration–first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pages 317–328. Springer, 2005.
---
title: Why can't I edit this file
ms.author: pebaum
author: pebaum
manager: scotv
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "5300030"
- "2700"
ms.openlocfilehash: efa32d88f0cb05096997fd493e5a8952f0e9491e600c1631d206c304f0f39f0e
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: th-TH
ms.lasthandoff: 08/05/2021
ms.locfileid: "53941684"
---
# <a name="sharepoint-migration-is-running-slowly"></a>SharePoint migration is running slowly
Migration performance can be affected by your network infrastructure, file size, migration time, and throttling. Understanding these factors will help you plan and maximize the efficiency of your migration.
For more information, see:
- [I'm experiencing poor migration performance or throttling during migration](https://docs.microsoft.com/sharepointmigration/sharepoint-online-and-onedrive-migration-speed#faq-and-troubleshooting)
- [General migration performance guidance](https://docs.microsoft.com/sharepointmigration/sharepoint-online-and-onedrive-migration-speed)
---
title: "Timing of exception handling: a summary"
ms.date: 05/07/2019
helpviewer_keywords:
- sequence [C++]
- sequence, of handlers
- exception handling [C++], timing
- setjmpex.h
- termination handlers [C++], timing
- setjmp.h
- handlers [C++], order of exception
- structured exception handling [C++], timing
ms.assetid: 5d1da546-73fd-4673-aa1a-7ac0f776c420
ms.openlocfilehash: 7b52252454e27d622e412f490360a025dfc97838
ms.sourcegitcommit: da32511dd5baebe27451c0458a95f345144bd439
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/07/2019
ms.locfileid: "65221905"
---
# <a name="timing-of-exception-handling-a-summary"></a>Timing of exception handling: a summary
Termination handlers are executed no matter how the **__try** statement block terminates. Causes include jumping out of the **__try** block, a `longjmp` statement that transfers control out of the block, and stack unwinding during exception handling.
> [!NOTE]
> The Microsoft C++ compiler supports two forms of the `setjmp` and `longjmp` statements. The fast version bypasses termination handling but is more efficient. To use this version, include the file \<setjmp.h>. The other version supports termination handling as described in the preceding paragraph. To use this version, include the file \<setjmpex.h>. The performance gain of the fast version depends on the hardware configuration.
All termination handlers are executed in the proper order by the operating system before any other code, including the body of an exception handler, can be executed.
When the cause of the interruption is an exception, the system must first execute the filter portion of one or more exception handlers before deciding what to terminate. The order of events is:
1. An exception is raised.
1. The system looks at the hierarchy of active exception handlers and executes the filter of the handler with the highest precedence; in terms of blocks and function calls, this is the most recently installed and most deeply nested exception handler.
1. If this filter passes control (returns 0), the process continues until a filter is found that does not pass control.
1. If this filter returns -1, execution continues where the exception was raised, and no termination occurs.
1. If the filter returns 1, the following events occur:
   - The system unwinds the stack, clearing all stack frames between the currently executing code (where the exception was raised) and the stack frame that contains the exception handler gaining control.
   - As the stack is unwound, each termination handler on the stack is executed.
   - The exception handler itself is executed.
   - Control passes to the line of code after the end of this exception handler.
## <a name="see-also"></a>See also
[Writing a termination handler](../cpp/writing-a-termination-handler.md)<br/>
[Structured exception handling (C/C++)](../cpp/structured-exception-handling-c-cpp.md)
# pi-apu
Raspberry pi based game / toy
a6b22eb75abbf9dc655d3e309ed8b3c8de4c7fde | 48,668 | md | Markdown | docs/docs/crimson_api_docs.md | sullivancolin/hexpy | 37aeb39935c1ac3e8d776a44373bc02b0e753a4e | ["MIT"]
# Crimson Hexagon API Documentation
**API URL: `https://api.crimsonhexagon.com/api`**
## Endpoints
### Analysis Request
##### To submit an analysis task for asynchronous processing - Category: results
##### `/results` - POST
##### Parameters
##### Response
* `status` - Defines the status of the analysis. Refer to Response Statuses table for additional information
- Type: Status
- Restricted = False
* `resultId` - Defines the unique identifier by which the analysis status/results can be retrieved
- Type: long
- Restricted = False
* `retrieveAt` - Nullable. ISO8601 formatted date indicating a suggested time to re-attempt result retrieval if the status is WAITING
- Type: Date
- Restricted = False
* `request` - Defines the original request parameters made to invoke this analysis
- Type: ApiAnalysisTaskRequest
- Restricted = False
- Fields: `analysis`, `startDate`, `endDate`, `timezone`, `sources`, `keywords`, `languages`, `locations`, `gender`, `requestingContractInfo`
* `resultsUri` - Defines the URI that can be queried to retrieve the analysis status/results in the future
- Type: String
- Restricted = False
* `contractInfo` - If requested, the contract info after this request has been processed.
- Type: ApiAnalysisContractInfo
- Restricted = False
-------------------------
### Analysis Results
##### To retrieve the status of the analysis task and the results - Category: results
##### `/results/{resultId}` - GET
##### Parameters
##### Response
* `status` - Defines the status of the analysis. Refer to Response Statuses table for additional information
- Type: Status
- Restricted = False
* `resultId` - Defines the unique identifier by which the analysis status/results can be retrieved
- Type: long
- Restricted = False
* `retrieveAt` - Nullable. ISO8601 formatted date indicating a suggested time to re-attempt result retrieval if the status is WAITING
- Type: Date
- Restricted = False
* `request` - Defines the original request parameters made to invoke this analysis
- Type: ApiAnalysisTaskRequest
- Restricted = False
- Fields: `analysis`, `startDate`, `endDate`, `timezone`, `sources`, `keywords`, `languages`, `locations`, `gender`, `requestingContractInfo`
* `resultsUri` - Defines the URI that can be queried to retrieve the analysis status/results in the future
- Type: String
- Restricted = False
* `contractInfo` - If requested, the contract info after this request has been processed.
- Type: ApiAnalysisContractInfo
- Restricted = False
* `resultId` - Identificator of the task response
- Type: long
- Restricted = False
* `status` - Current status of analysis task
- Type: Status
- Restricted = False
* `analysisResults` - Analysis result
- Type: AnalysisResults
- Restricted = False
- Fields: `volumeResults`, `sentimentResults`, `genderResult`, `ageResult`, `locationResult`, `siteResult`, `affinityResults`, `reach`
* `message` - Result message
- Type: String
- Restricted = False
* `request` - Related task request
- Type: ApiAnalysisTaskRequest
- Restricted = False
- Fields: `analysis`, `startDate`, `endDate`, `timezone`, `sources`, `keywords`, `languages`, `locations`, `gender`, `requestingContractInfo`
-------------------------
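The submit-then-poll flow above can be sketched with a small helper. This is a minimal sketch, not hexpy's own client: the `session` object, auth handling, and payload are assumptions, and the commented-out loop makes no real network call.

```python
# Base URL as documented at the top of this reference.
BASE_URL = "https://api.crimsonhexagon.com/api"

def results_uri(result_id):
    """Build the retrieval URI for GET /results/{resultId},
    as returned in the `resultsUri` response field."""
    return f"{BASE_URL}/results/{result_id}"

# Illustrative polling loop (e.g. with the `requests` library):
# task = session.post(f"{BASE_URL}/results", json=payload).json()
# while True:
#     body = session.get(results_uri(task["resultId"])).json()
#     if body["status"] != "WAITING":
#         break
#     time.sleep(30)  # or wait until body["retrieveAt"]
```

The `retrieveAt` field is the API's own hint for when to retry, so honoring it is gentler than a fixed sleep.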
### Authentication
##### Generate authentication tokens for use in API requests - Category: admin
##### `/authenticate` - GET
##### Parameters
* `username` - Username of the requesting user
- Type: String
- Required = True
* `password` - Password of the requesting user
- Type: String
- Required = True
* `force` - If true, forces authentication token update for the requesting user
- Type: boolean
- Required = False
* `noExpiration` - If true, the authentication token returned will not expire
- Type: boolean
- Required = False
##### Response
* `auth` - Authentication token
- Type: String
- Restricted = False
* `expires` - Token expiration date (24 hours from token creation). If noExpiration = true, this field will not be returned
- Type: Date
- Restricted = False
-------------------------
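As a hedged sketch of the documented query parameters for `GET /authenticate` (credentials are placeholders; the actual HTTP call is left as a comment):

```python
def build_auth_params(username, password, force=False, no_expiration=False):
    """Assemble the query parameters documented for GET /authenticate.
    Optional flags are only sent when enabled."""
    params = {"username": username, "password": password}
    if force:
        params["force"] = "true"
    if no_expiration:
        params["noExpiration"] = "true"
    return params

# The actual request would look like:
# resp = requests.get("https://api.crimsonhexagon.com/api/authenticate",
#                     params=build_auth_params(username, password))
# token = resp.json()["auth"]
```

Note that with `noExpiration=true` the response omits the `expires` field, per the description above.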
### Authors
##### Information about Twitter authors in a monitor - Category: results
##### `/monitor/authors` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `authors` - JSON array of zero or more authors objects that contain author-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `countsByAuthor`, `numberOfAuthors`, `docsPerAuthor`, `totalImpressions`
-------------------------
### Content Delete
##### Delete batch content via the API - Category: admin
##### `/content/delete` - POST
##### Parameters
* `documentType` - The id of the document type to delete documents from
- Type: long
- Required = True
* `batch` - The id of the document batch to delete
- Type: String
- Required = True
##### Response
-------------------------
### Content Delete
##### Delete content via the API - Category: admin
##### `/content/delete` - POST
##### Parameters
* `documentType` - The id of the document type to delete documents from
- Type: long
- Required = True
##### Response
-------------------------
### Content Source Create
##### Content Source creation - Category: admin
##### `/content/sources` - POST
##### Parameters
##### Response
* `contentSource` - Content Source
- Type: ContentSourceModel
- Restricted = False
- Fields: `id`, `teamName`, `name`, `description`, `documents`
-------------------------
### Content Source Delete
##### Content Source deletion - Category: admin
##### `/content/sources` - DELETE
##### Parameters
* `documentType` - The id of the document type to delete
- Type: long
- Required = True
##### Response
-------------------------
### Content Source List
##### Content Source list - Category: admin
##### `/content/sources/list` - GET
##### Parameters
* `team` - The id of the team to which the listed content sources belong
- Type: Long
- Required = True
##### Response
* `contentSources` - Content Sources
- Type: List
- Restricted = False
- Fields: `id`, `teamName`, `name`, `description`, `documents`
-------------------------
### Content Upload
##### Upload content via the API - Category: admin
##### `/content/upload` - POST
##### Parameters
##### Response
* `uploadCount` - The number of posts that were successfully uploaded
- Type: Integer
- Restricted = False
* `DocumentsUploadedInLastTwentyFourHours` - If requested, the number of documents this organization has uploaded in the last twenty four hours.
- Type: Long
- Restricted = False
* `ContractedDocumentsWithinTwentyFourHours` - If requested, the number of documents this organization can upload in a rolling twenty four hour period.
- Type: Long
- Restricted = False
-------------------------
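The two quota fields in the upload response can be combined to find the remaining allowance. A minimal sketch, assuming `response_json` is the parsed body of a `POST /content/upload` response with both optional fields present:

```python
def remaining_upload_quota(response_json):
    """Documents still uploadable in the rolling 24-hour window,
    computed from the two quota fields documented above."""
    contracted = response_json["ContractedDocumentsWithinTwentyFourHours"]
    used = response_json["DocumentsUploadedInLastTwentyFourHours"]
    return max(contracted - used, 0)
```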
### Content Upload Custom Fields Support
##### Upload content via the API w/ custom fields support - Category: admin
##### `/content/upload` - POST
##### Parameters
* `documentType` - The id of the document type to which the uploading docs will belong
- Type: Long
- Required = True
* `batch` - The id of the batch to which the uploading docs will belong
- Type: String
- Required = False
##### Response
* `batchId` - The id of the batch to which these docs belong
- Type: String
- Restricted = False
-------------------------
### Day and Time
##### Volume information for a monitor, aggregated by time of day or day of week - Category: results
##### `/monitor/dayandtime` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `aggregatebyday` - If true, volume information will be aggregated by day of the week instead of time of day
- Type: boolean
- Required = False
* `uselocaltime` - If true, volume aggregation will use the time local to the publishing author of a post when determining counts by day/time, instead of converting that time to the timezone of the selected monitor
- Type: boolean
- Required = False
##### Response
* `volumes` - JSON array of zero or more objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `numberOfDocuments`, `volume`
-------------------------
### Demographics - Age
##### Daily volume information for age in a monitor - Category: results
##### `/monitor/demographics/age` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `ageCounts` - JSON array of zero or more objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `numberOfDocuments`, `ageCount`
-------------------------
### Demographics - Gender
##### Daily volume information for gender in a monitor - Category: results
##### `/monitor/demographics/gender` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `genderCounts` - JSON array of zero or more objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `numberOfDocuments`, `genderCounts`
-------------------------
### Facebook Admin Posts
##### Daily likes, comments, and shares for individual admin posts made by a Facebook account in a Facebook social account monitor - Category: social
##### `/monitor/facebook/adminposts` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `adminPostMetrics`
-------------------------
### Facebook Page Likes
##### Total page likes as of the requested dates for a Facebook social monitor - Category: social
##### `/monitor/facebook/pagelikes` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `date`, `likes`
-------------------------
### Facebook Total Activity
##### Daily total likes, comments, and shares on admin and user posts for a Facebook account in a Facebook social monitor - Category: social
##### `/monitor/facebook/totalactivity` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `admin`, `user`
-------------------------
### Geography - All Resources
##### Returns all the available geolocation resources - Category: util
##### `/geography/info/all` - GET
##### Parameters
##### Response
* `resources` - JSON array with the geography resources
- Type: List
- Restricted = False
- Fields: `id`, `name`, `country`, `state`, `city`, `latitude`, `longitude`
-------------------------
### Geography - Cities
##### Returns all the available cities / urban areas in the given country - Category: util
##### `/geography/info/cities` - GET
##### Parameters
* `country` - Specifies the ISO 3166 3 letter country code
- Type: String
- Required = True
##### Response
* `resources` - JSON array with the geography resources
- Type: List
- Restricted = False
- Fields: `id`, `name`, `country`, `state`, `city`, `latitude`, `longitude`
-------------------------
### Geography - Countries
##### Returns all the available countries - Category: util
##### `/geography/info/countries` - GET
##### Parameters
##### Response
* `resources` - JSON array with the geography resources
- Type: List
- Restricted = False
- Fields: `id`, `name`, `country`, `latitude`, `longitude`
-------------------------
### Geography - States
##### Returns all the available states / regions in the given country - Category: util
##### `/geography/info/states` - GET
##### Parameters
* `country` - Specifies the ISO 3166 3 letter country code
- Type: String
- Required = True
##### Response
* `resources` - JSON array with the geography resources
- Type: List
- Restricted = False
- Fields: `id`, `name`, `country`, `state`, `latitude`, `longitude`
-------------------------
### Get Monitor Creation Report
##### Returns a list of Teams within an Organization and how many monitors were created during a given time period - Category: reports
##### `/report/monitorCreation` - GET
##### Parameters
* `organizationId` - The id of the organization being requested
- Type: long
- Required = True
##### Response
* `data` - List of 0..n monitor creation report rows
- Type: List
- Restricted = False
- Fields: `team_name`, `monitors_used`, `monitor_limit`, `monitors_created_past_month`
-------------------------
### Get Social Site Report
##### Returns a list of social sites and associated usernames for Teams within an Organization. Also indicates which of the social sites have failed and when - Category: reports
##### `/report/socialSites` - GET
##### Parameters
* `organizationId` - The id of the organization being requested
- Type: long
- Required = True
##### Response
* `data` - List of 0..n social site report rows
- Type: List
- Restricted = False
- Fields: `username`, `socialsite`, `team_name`, `creation_date`, `last_rate_limit_date`, `failed`, `failure_date`
-------------------------
### Get User Activity Report
##### Returns a list of users within an Organization including information on when they last logged into the platform, the last monitor they created, and the last monitor they viewed - Category: reports
##### `/report/userActivity` - GET
##### Parameters
* `organizationId` - The id of the organization being requested
- Type: long
- Required = True
##### Response
* `data` - List of 0..n user activity report rows
- Type: List
- Restricted = False
- Fields: `user_id`, `team_id`, `email`, `first_name`, `last_name`, `last_platform_login`, `team_name`, `monitors_viewed_past_month`, `monitors_created_past_month`, `last_team_visit`
-------------------------
### Get User Invitation Report
##### Returns a list of users within an Organization and which Team(s) they were invited to. Also indicates when the invitation was sent and when it was accepted - Category: reports
##### `/report/userInvitations` - GET
##### Parameters
* `organizationId` - The id of the organization being requested
- Type: long
- Required = True
##### Response
* `data` - List of 0..n user invitation report rows
- Type: List
- Restricted = False
- Fields: `email`, `team_name`, `create_edit`, `invite_user`, `heliosight`, `api_access`, `admin`, `date_sent`, `date_accepted`
-------------------------
### Image Analysis Request
##### To return a list of all class IDs and names - Category: results
##### `/imageanalysis/resources/classes` - GET
##### Parameters
##### Response
-------------------------
### Image Analysis Request
##### To return a list of class IDs and names for the specified class type - Category: results
##### `/imageanalysis/resources/classes/type` - GET
##### Parameters
##### Response
-------------------------
### Image analysis
##### To return image classification data - Category: util
##### `/imageanalysis` - GET
##### Parameters
* `url` - Image URL
- Type: String
- Required = True
##### Response
* `imgData` - Message object contains request parameters and image classification result
- Type: ImageAnalysisData
- Restricted = False
-------------------------
### Instagram Followers
##### Total daily follower counts for Instagram social account monitors - Category: social
##### `/monitor/instagram/followers` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `date`, `followerCount`
-------------------------
### Instagram Hashtags
##### Total daily volume by Instagram hashtags for specific monitor - Category: social
##### `/monitor/instagram/hashtags` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `date`, `hashtags`
-------------------------
### Instagram Sent Media
##### Daily likes, comments, and tags for individual media posted by an Instagram account in an Instagram social account monitor - Category: social
##### `/monitor/instagram/sentmedia` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `adminPostMetrics`
-------------------------
### Instagram Total Activity
##### Daily likes, comments, and shares for individual admin posts made by an Instagram account in an Instagram social account monitor - Category: social
##### `/monitor/instagram/totalactivity` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `admin`
-------------------------
### Interest Affinities
##### Aggregate affinities for the selected monitor over a given date range - Category: visualizations
##### `/monitor/interestaffinities` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `daily` - If true, results returned from this endpoint will be trended daily instead of aggregated across the selected date range.
- Type: boolean
- Required = False
* `documentsource` - Document source for affinities. Valid values: [TWITTER, TUMBLR]
- Type: String
- Required = False
##### Response
* `startDate` - Inclusive start date in dashboard time for this result - ISO 8601 format yyyy-MM-dd'T'HH:mm:ss
- Type: Date
- Restricted = False
* `endDate` - Exclusive end date in dashboard time for this result - ISO 8601 format yyyy-MM-dd'T'HH:mm:ss
- Type: Date
- Restricted = False
* `affinityInfo` - JSON array of affinity objects containing information about the top affinities for the date range selected
- Type: List
- Restricted = False
- Fields: `id`, `name`, `relevancyScore`, `percentInMonitor`, `percentOnTwitter`
-------------------------
### Monitor Audit
##### Audit information about the selected monitor - Category: admin
##### `/monitor/audit` - GET
##### Parameters
* `id` - The id of the monitor to be audited
- Type: long
- Required = True
##### Response
* `auditInfo` - JSON array of audit events pertaining to the selected monitor
- Type: List
- Restricted = False
- Fields: `event`, `user`, `eventDate`
-------------------------
### Monitor Detail
##### Attributes of the specified monitor - Category: admin
##### `/monitor/detail` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
##### Response
* `monitorDetail` - JSON array of monitor details
- Type: MonitorDetailModel
- Restricted = False
- Fields: `parentMonitorId`, `categories`, `emotions`, `id`, `name`, `description`, `type`, `enabled`, `resultsStart`, `resultsEnd`, `keywords`, `languages`, `geolocations`, `gender`, `sources`, `timezone`, `teamName`, `tags`, `subfilters`
-------------------------
### Monitor Image Results
##### Daily image results for a monitor - Category: results
##### `/monitor/imageresults` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `type` - Specifies type of image classes, valid values [object, scene, action, logo]
- Type: String
- Required = False
* `top` - If defined, only the top number of results will be returned
- Type: Integer
- Required = False
##### Response
* `results` - JSON array of zero or more daily image results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `creationDate`, `numberOfDocuments`, `numberOfImageDocuments`, `imageClasses`
-------------------------
### Monitor List
##### List of monitors available to the passed in username - Category: admin
##### `/monitor/list` - GET
##### Parameters
* `team` - The id of the team to which the listed monitors belong
- Type: Long
- Required = False
##### Response
* `monitors` - JSON array of monitors viewable by the user
- Type: List
- Restricted = False
- Fields: `id`, `name`, `description`, `type`, `enabled`, `resultsStart`, `resultsEnd`, `keywords`, `languages`, `geolocations`, `gender`, `sources`, `timezone`, `teamName`, `tags`, `subfilters`
-------------------------
### Monitor Results
##### Daily results for a monitor - Category: results
##### `/monitor/results` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `hideExcluded` - If true, categories set as hidden will not be included in category proportion calculations
- Type: boolean
- Required = False
##### Response
* `results` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `creationDate`, `numberOfDocuments`, `numberOfRelevantDocuments`, `categories`
-------------------------
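The `id`/`start`/`end` triple above is shared by most monitor endpoints, with an inclusive start and exclusive end in YYYY-MM-DD. A hedged sketch of assembling those parameters (the monitor id and dates are placeholders):

```python
def monitor_params(monitor_id, start, end, hide_excluded=False):
    """Assemble the query parameters for GET /monitor/results.
    `start` is inclusive and `end` exclusive, both YYYY-MM-DD."""
    params = {"id": monitor_id, "start": start, "end": end}
    if hide_excluded:
        params["hideExcluded"] = "true"
    return params

# e.g. requests.get("https://api.crimsonhexagon.com/api/monitor/results",
#                   params=monitor_params(12345, "2019-01-01", "2019-02-01"))
```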
### Monitor Results by City
##### Returns all the monitor results grouped by the cities / urban areas in a given country (if given) - Category: results
##### `/monitor/geography/cities` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `country` - Specifies the ISO 3166 3 letter country code, if not given all cities in the world will be returned
- Type: String
- Required = False
##### Response
* `startDate` - Requested start date
- Type: Date
- Restricted = False
* `endDate` - Requested end date
- Type: Date
- Restricted = False
* `totalVolume` - Volume matching the defined geography filter
- Type: long
- Restricted = False
* `data` - JSON array of monitor geography result information
- Type: List
- Restricted = False
- Fields: `info`, `volume`, `perMillion`
-------------------------
### Monitor Results by Country
##### Returns all the monitor results grouped by country - Category: results
##### `/monitor/geography/countries` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `startDate` - Requested start date
- Type: Date
- Restricted = False
* `endDate` - Requested end date
- Type: Date
- Restricted = False
* `totalVolume` - Volume matching the defined geography filter
- Type: long
- Restricted = False
* `data` - JSON array of monitor geography result information
- Type: List
- Restricted = False
- Fields: `info`, `volume`, `perMillion`
-------------------------
### Monitor Results by State
##### Returns all the monitor results grouped by the country states / regions - Category: results
##### `/monitor/geography/states` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `country` - Specifies the ISO 3166 3 letter country code
- Type: String
- Required = True
##### Response
* `startDate` - Requested start date
- Type: Date
- Restricted = False
* `endDate` - Requested end date
- Type: Date
- Restricted = False
* `totalVolume` - Volume matching the defined geography filter
- Type: long
- Restricted = False
* `data` - JSON array of monitor geography result information
- Type: List
- Restricted = False
- Fields: `info`, `volume`, `perMillion`
-------------------------
### Monitor Training Posts
##### Download training posts for a monitor - Category: admin
##### `/monitor/trainingposts` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `category` - Category id to target training posts from a specific category
- Type: Long
- Required = False
##### Response
* `trainingPosts` - JSON array of training posts for the selected monitor or category in a monitor
- Type: List
- Restricted = False
- Fields: `categoryId`, `categoryName`, `categoryGroup`, `url`, `date`, `author`, `contents`, `title`, `type`
-------------------------
### Posts
##### Information about posts in a monitor - Category: visualizations
##### `/monitor/posts` - GET || POST
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `MISSING` - Optional JSON payload to filter response
- Type: MonitorPostsFilter
- Required = False
* `filter` - Pipe-separated list of field:value pairs used to filter results by given parameters
- Type: String
- Required = False
* `extendLimit` - If true, increases the limit of returned posts from 500 per call to 10,000 per call
- Type: boolean
- Required = False
* `fullContents` - If true, the contents field will return the original, complete post contents instead of truncating around search terms
- Type: boolean
- Required = False
* `geotagged` - If true, returns only geotagged documents matching the given filter; if false or undefined, returns any post matching the given filter
- Type: boolean
- Required = False
##### Response
* `posts` - JSON array of zero or more post objects that contain post-specific attributes
- Type: List
- Restricted = False
- Fields: `location`, `geolocation`, `language`, `authorPosts`, `authorsFollowing`, `authorsFollowers`, `authorGender`, `trainingPost`, `assignedCategoryId`, `assignedEmotionId`, `categoryScores`, `emotionScores`, `imageInfo`, `customFields`, `batchId`, `url`, `date`, `author`, `contents`, `title`, `type`
* `totalPostsAvailable` - The number of posts stored for this monitor that match the query. Dates in the selected date range that have more than 10,000 posts will be sampled. You may perform extrapolation calculations to approximate the total number of unsampled posts using the result counts from the Monitor Results endpoint.
- Type: int
- Restricted = False
-------------------------
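The `filter` parameter above is a pipe-separated list of `field:value` pairs. A small sketch of encoding it; the field names used in the example (`language`, `authorGender`) appear in the post fields listed above, but the full set of filterable fields is not specified here:

```python
def build_filter(pairs):
    """Encode (field, value) pairs as the pipe-separated `filter`
    string documented for GET/POST /monitor/posts."""
    return "|".join(f"{field}:{value}" for field, value in pairs)

# e.g. build_filter([("language", "en"), ("authorGender", "F")])
# yields "language:en|authorGender:F"
```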
### Realtime Cashtags
##### Get Cashtags associated to a Monitor - Category: monitors
##### `/realtime/monitor/cashtags` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
* `top` - The top N cashtags to retrieve
- Type: Integer
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Configure
##### Configure the Realtime evaluators for the Monitor - Category: monitors
##### `/realtime/monitor/configure` - POST
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Details
##### Get the Realtime evaluators details for the Monitor - Category: monitors
##### `/realtime/monitor/detail` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Disable
##### Disable Realtime Data - Category: monitors
##### `/realtime/monitor/disable` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
##### Response
-------------------------
### Realtime Enable
##### Enable Realtime Data - Category: monitors
##### `/realtime/monitor/enable` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
##### Response
-------------------------
### Realtime FullRetweets
##### Get the Realtime full retweets for the Monitor - Category: monitors
##### `/realtime/monitor/fullretweets` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime FullTweets
##### Get the Realtime full tweets for the Monitor - Category: monitors
##### `/realtime/monitor/fulltweets` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Hashtags
##### Get Hashtags associated with a Monitor - Category: monitors
##### `/realtime/monitor/hashtags` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
* `top` - The top N hashtags to retrieve
- Type: Integer
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Monitor List
##### Get the Monitors which are in Proteus - Category: monitors
##### `/realtime/monitor/list` - GET
##### Parameters
* `team` - The id of the team to which the listed monitors belong
- Type: Long
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Retweets
##### Get the Realtime retweets for the Monitor - Category: monitors
##### `/realtime/monitor/retweets` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime SocialGuids
##### Get the Realtime social guids for the Monitor - Category: monitors
##### `/realtime/monitor/socialguids` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `type` - Specifies the document type
- Type: String
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
* `receivedafter` - Specifies inclusive receivedafter date in epoch seconds
- Type: Long
- Required = False
* `maxresults` - Specifies maximum results to fetch
- Type: Integer
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Tweets
##### Get the Realtime tweets for the Monitor - Category: monitors
##### `/realtime/monitor/tweets` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Volume
##### Get the Realtime volume for the Monitor - Category: monitors
##### `/realtime/monitor/volume` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
* `type` - Specifies the document type to filter
- Type: List
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Volume by Emotion
##### Get the Realtime volume by emotion for the Monitor - Category: monitors
##### `/realtime/monitor/volumebyemotion` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
* `type` - Specifies the document type to filter
- Type: List
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Realtime Volume by Sentiment
##### Get the Realtime volume by sentiment for the Monitor - Category: monitors
##### `/realtime/monitor/volumebysentiment` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in epoch seconds
- Type: Long
- Required = False
* `type` - Specifies the document type to filter
- Type: List
- Required = False
##### Response
* `realtimeData` - JSON object of monitor realtime data
- Type: Map
- Restricted = False
-------------------------
### Stream Add Monitor
##### Stream Add Monitor Association - Category: admin
##### `/stream/{streamid}/monitor/{monitorid}` - POST
##### Parameters
* `streamId` - The id of the stream
- Type: Long
- Required = True
* `monitorId` - The id of the monitor to which the association will be created
- Type: Long
- Required = True
##### Response
-------------------------
### Stream Create
##### Stream creation - Category: admin
##### `/stream` - POST
##### Parameters
##### Response
* `stream` - Stream information
- Type: StreamModel
- Restricted = False
- Fields: `id`, `name`, `teamName`, `monitors`
* `path` - Stream path
- Type: String
- Restricted = False
-------------------------
### Stream Delete
##### Stream deletion - Category: admin
##### `/stream/{streamid}` - DELETE
##### Parameters
* `streamId` - The id of the stream to delete
- Type: Long
- Required = True
##### Response
-------------------------
### Stream List
##### List of streams available to the passed-in username - Category: admin
##### `/stream/list` - GET
##### Parameters
* `teamid` - The id of the team to which the listed streams belong
- Type: Long
- Required = False
##### Response
* `streams` - JSON array of streams viewable by the user
- Type: List
- Restricted = False
- Fields: `id`, `name`, `teamName`, `monitors`
-------------------------
### Stream Posts
##### Information about posts in a stream - Category: results
##### `/stream/{streamid}/posts` - GET
##### Parameters
* `streamId` - The id of the stream to which the realtime information belongs
- Type: Long
- Required = True
* `count` - The maximum number of posts to fetch from the stream
- Type: Integer
- Required = False
##### Response
* `posts` - JSON array of zero or more post objects that contain post-specific attributes
- Type: List
- Restricted = False
- Fields: `location`, `geolocation`, `language`, `authorPosts`, `authorsFollowing`, `authorsFollowers`, `authorGender`, `trainingPost`, `assignedCategoryId`, `assignedEmotionId`, `categoryScores`, `emotionScores`, `imageInfo`, `customFields`, `batchId`, `url`, `date`, `author`, `contents`, `title`, `type`
* `totalPostsAvailable` - The number of posts stored for this monitor that match the query. Dates in the date range selected that have more than 10 thousand posts will be sampled. You may perform extrapolation calculations to approximate the total number of unsampled posts using the results counts in the Monitor Results endpoint.
- Type: int
- Restricted = False
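The extrapolation mentioned for sampled days can be sketched as follows. The function name and the exact scaling scheme are assumptions for illustration; only the 10,000-post daily sampling cap comes from the description above:

```python
def extrapolate_day_total(matched_in_sample: int, sample_size: int,
                          day_volume: int, sample_cap: int = 10_000) -> int:
    """Approximate the unsampled post count for one day.

    matched_in_sample: posts returned for the day that match the query
    sample_size: posts actually sampled for the day (at most sample_cap)
    day_volume: total posts stored for that day, from the Results endpoint
    """
    if day_volume <= sample_cap:
        # The day was not sampled, so the returned count is exact.
        return matched_in_sample
    # Scale the in-sample match rate up to the full day's volume.
    return round(matched_in_sample * day_volume / sample_size)
```

For example, 500 matches in a 10,000-post sample of a 40,000-post day extrapolates to roughly 2,000 matching posts.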
-------------------------
### Stream Remove Monitor
##### Stream Remove Monitor Association - Category: admin
##### `/stream/{streamid}/monitor/{monitorid}` - DELETE
##### Parameters
* `streamId` - The id of the stream
- Type: Long
- Required = True
* `monitorId` - The id of the monitor to which the association will be removed
- Type: Long
- Required = True
##### Response
-------------------------
### Stream Update Monitor
##### Stream Update Monitor Data - Category: admin
##### `/stream/{streamid}` - POST
##### Parameters
* `streamId` - The id of the stream
- Type: Long
- Required = True
##### Response
-------------------------
### Team List
##### List of teams accessible to the current user - Category: admin
##### `/team/list` - GET
##### Parameters
##### Response
* `teams` - JSON array of teams accessible by the user
- Type: List
- Restricted = False
- Fields: `id`, `name`
-------------------------
### Top Sites and Content Sources
##### Content source breakdown and top sites - Category: results
##### `/monitor/sources` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `contentSources` - JSON array of zero or more content sources objects that contain results for each date requested
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `topSites`, `sources`
-------------------------
### Topic Clustering
##### XML data that can be used to generate clustering visualizations using third-party software - Category: visualizations
##### `/monitor/topics` - GET || POST
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `MISSING` - Optional JSON payload to filter response
- Type: MonitorPostsFilter
- Required = False
* `filter` - Pipe-separated list of field:value pairs used to filter results by given parameters
- Type: String
- Required = False
##### Response
* `clustering` - XML string for generating visualizations
- Type: String
- Restricted = False
-------------------------
### Topic Waves
##### Topic waves information for a monitor - Category: visualizations
##### `/monitor/topicwaves` - GET || POST
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `MISSING` - Optional JSON payload to filter response
- Type: MonitorPostsFilter
- Required = False
##### Response
* `startDate` - Inclusive start date in dashboard time for this result - ISO 8601 format yyyy-MM-dd'T'HH:mm:ss
- Type: Date
- Restricted = False
* `endDate` - Exclusive end date in dashboard time for this result - ISO 8601 format yyyy-MM-dd'T'HH:mm:ss
- Type: Date
- Restricted = False
* `timezone` - IANA timezone identifier specifying the timezone for all dates in the response
- Type: String
- Restricted = False
* `groupBy` - Defines the grouping for the volume information
- Type: String
- Restricted = False
* `totalTopicsVolume` - Total Volume for the topics
- Type: long
- Restricted = False
* `topics` - JSON array of 1..n topics volume information for grouped periods
- Type: List
- Restricted = False
- Fields: `name`, `totalVolume`, `volume`
-------------------------
### Training Document Upload
##### Train monitors via the API - Category: util
##### `/monitor/train` - POST
##### Parameters
* `id` - The id of the monitor being trained
- Type: long
- Required = True
##### Response
* `message` - Success response indicating a training post has been successfully uploaded
- Type: String
- Restricted = False
-------------------------
### Twitter Engagement Metrics
##### Engagement metrics for Twitter content in a monitor - Category: results
##### `/monitor/twittermetrics` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `topHashtags`, `topMentions`, `topRetweets`
-------------------------
### Twitter Followers
##### Total daily follower counts for Twitter Social Account monitors - Category: social
##### `/monitor/twittersocial/followers` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `date`, `followers`
-------------------------
### Twitter Sent Posts
##### Daily retweets, replies, and impressions for individual posts made by a Twitter account in a Twitter social account monitor - Category: social
##### `/monitor/twittersocial/sentposts` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `sentPostMetrics`, `totalImpressions`
-------------------------
### Twitter Total Engagement
##### Daily retweets, replies, and mentions for a targeted Twitter account in a Twitter social account monitor - Category: social
##### `/monitor/twittersocial/totalengagement` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
##### Response
* `dailyResults` - JSON array of zero or more daily results objects that contain endpoint-specific attributes
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `mentions`, `replies`, `retweets`
-------------------------
### Volume
##### Volume of total posts in a monitor - Category: results
##### `/monitor/volume` - GET
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `groupBy` - Specifies how the volume data over the date range will be grouped. Valid values: [HOURLY, DAILY, WEEKLY, MONTHLY]. Defaults to DAILY. Grouping requires a date range of at least 1 full unit; e.g., WEEKLY requires a date range of at least 1 week. Grouping only returns full units so the range may be truncated. e.g., 2017-01-15 to 2017-03-15 with MONTHLY grouping will return a date range of 2017-02-01 to 2017-03-01. A monitor must have complete results for the specified date range. If any day in the range is missing results an error will be returned.
- Type: String
- Required = False
##### Response
* `startDate` - Inclusive start date in dashboard time for this result - ISO 8601 format yyyy-MM-dd'T'HH:mm:ss
- Type: Date
- Restricted = False
* `endDate` - Exclusive end date in dashboard time for this result - ISO 8601 format yyyy-MM-dd'T'HH:mm:ss
- Type: Date
- Restricted = False
* `timezone` - IANA timezone identifier specifying the timezone for all dates in the response
- Type: String
- Restricted = False
* `groupBy` - Defines the grouping for the volume information
- Type: String
- Restricted = False
* `numberOfDocuments` - Total volume for this period
- Type: long
- Restricted = False
* `volume` - JSON array of 1..n volume information for grouped periods
- Type: List
- Restricted = False
- Fields: `startDate`, `endDate`, `numberOfDocuments`
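As an illustration of calling this endpoint, the following sketch builds the request URL from the parameters above. The base URL is hypothetical and authentication is omitted:

```python
from typing import Optional
from urllib.parse import urlencode


def volume_url(base: str, monitor_id: int, start: str, end: str,
               group_by: Optional[str] = None) -> str:
    """Build the /monitor/volume request URL (dates are YYYY-MM-DD)."""
    params = {"id": monitor_id, "start": start, "end": end}
    if group_by is not None:
        params["groupBy"] = group_by  # HOURLY, DAILY, WEEKLY or MONTHLY
    return f"{base}/monitor/volume?{urlencode(params)}"
```

For instance, `volume_url("https://api.example.com", 12345, "2017-02-01", "2017-03-01", "MONTHLY")` yields a query string with `groupBy=MONTHLY` appended; when `group_by` is omitted, the endpoint defaults to DAILY grouping.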
-------------------------
### Word Cloud
##### Word frequency information for posts in a monitor - Category: visualizations
##### `/monitor/wordcloud` - GET || POST
##### Parameters
* `id` - The id of the monitor being requested
- Type: long
- Required = True
* `start` - Specifies inclusive start date in YYYY-MM-DD
- Type: Date
- Required = True
* `end` - Specifies exclusive end date in YYYY-MM-DD
- Type: Date
- Required = True
* `MISSING` - Optional JSON payload to filter response
- Type: MonitorPostsFilter
- Required = False
* `filter` - Pipe-separated list of field:value pairs used to filter results by given parameters
- Type: String
- Required = False
##### Response
* `data` - Map of the top 300 terms appearing in a monitor to their frequency in that monitor
- Type: Map
- Restricted = False
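The pipe-separated `filter` value can be assembled from field:value pairs with a small helper. The helper name and the field names in the example are illustrative only:

```python
def build_filter(pairs: dict) -> str:
    """Compose the pipe-separated field:value filter string."""
    return "|".join(f"{field}:{value}" for field, value in pairs.items())
```

For example, `build_filter({"language": "en", "authorGender": "F"})` produces `language:en|authorGender:F`.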
-------------------------
# TestNG
The project demonstrates adding Reportium calls to Selenium tests that are run via [TestNG](http://testng.org/doc/index.html).
It includes two sub-modules that represent different use cases:
1. Adopt Reporting with minimal changes to the code base, without changing any test script.
This type of user gains a seamless breakdown into tests through automatic reporting of _test start_ and _test end_.
This use case is implemented in the _testng-listener-only_ project.
2. Full usage of Reporting API, including update of test scripts to report functional test steps.
This use case is implemented in the _testng-full_ project.
## Using all reporting capabilities
Demonstrated by _testng-full_ project.
This project uses the full capabilities of Perfecto's reporting SDK to send events for starting functional tests, the functional steps
performed by the test and the status of the test when it ends.
It includes a base test class _AbstractPerfectoSeleniumTestNG_ which handles the connection to Perfecto Reporting.
The class also implements TestNG configuration methods (beforeXXX and afterXXX methods) to automatically report test start and end, allowing
test script authors to focus on implementation of their business logic and reporting only functional steps as part of the script.
The second base class, _AbstractTodoMvcTest_, holds the applicative logic shared by the tests to keep the code [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself):
it creates a WebDriver instance for each class and reports test start and end automatically.
The project is made up of 2 Maven profiles to demonstrate advanced features:
- sanity - runs the tests under _com.perfecto.reporting.sample.todomvc.sanity_ package
- regression - runs the tests under _com.perfecto.reporting.sample.todomvc.regression_ package
Run the tests from the command line by specifying the profile to run, e.g. from the project root:
> mvn clean verify -f testng-sample/testng-full/pom.xml -Pregression
The generated TestNG report is created under the respective profile name, e.g.
> testng-sample/testng-full/target/surefire-reports/sanity
## Using only the TestNG plugin
Demonstrated by _testng-listener-only_ project.
Perfecto's TestNG listener is a [TestNG listener](http://testng.org/doc/documentation-main.html#testng-listeners) implementation that reports test start and stop, reducing boilerplate code.
It is ideal for users that want minimal changes to their code base, without adding the test's logical steps to the report.
This project demonstrates the usage of Perfecto's TestNG listener for seamlessly reporting test start and end events to the Reporting solution.
### Adding the listener to your project
There are several alternatives for adding TestNG listeners to your project.
This reference demonstrates two of them:
* Using the `org.testng.annotations.Listeners` annotation - see the annotation on the `TodoMvcWithListenerTest` class.
* Specifying the listener as part of the [Maven Surefire plugin](http://maven.apache.org/surefire/maven-surefire-plugin/) configuration
### Implementation requirements
The listener reports test start and end using the WebDriver instance used by the test.
In order for the listener to get that instance, the test must implement the `com.perfecto.reportium.WebDriverProvider` interface.
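For the Surefire route, the listener can be registered in the plugin configuration. This is a sketch only; the fully-qualified listener class name below is an assumption, so check the Reportium SDK for the exact value:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <properties>
      <property>
        <name>listener</name>
        <!-- hypothetical class name: verify against the Reportium SDK -->
        <value>com.perfecto.reportium.testng.PerfectoTestNgListener</value>
      </property>
    </properties>
  </configuration>
</plugin>
```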
## Reusing reporting annotations of 3rd party solutions
Perfecto reporting can reuse existing Allure annotations by automatically sending them as test steps.
You need to set up your Maven pom.xml as follows:
```xml
<properties>
<aspectj.version>1.8.9</aspectj.version>
</properties>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>${maven-surefire-plugin.version}</version>
<configuration>
<testFailureIgnore>false</testFailureIgnore>
<argLine>
-javaagent:${settings.localRepository}/org/aspectj/aspectjweaver/${aspectj.version}/aspectjweaver-${aspectj.version}.jar
</argLine>
</configuration>
<dependencies>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjweaver</artifactId>
<version>${aspectj.version}</version>
</dependency>
</dependencies>
</plugin>
</plugins>
```
See an example in [testng-full/pom.xml](testng-full/pom.xml)
---
title: MySQL - Handling NULL Values
date: 2021-01-21 00:01:06
category: database
thumbnail: { thumbnailSrc }
draft: false
---
In a database, missing values are usually stored as NULL. Since NULL marks missing data, you will often want to query only the rows without NULL values, or to replace NULL with some other value.
# IS NULL / IS NOT NULL
Use these when you want to see only the rows where a column is NULL, or only the rows where it is not.
Since they are very simple, Programmers SQL exercises stand in for a longer explanation.
## IS NULL
> Select ANIMAL_ID from the ANIMAL_INS table, but only for rows where NAME is missing.
```sql
SELECT ANIMAL_ID FROM ANIMAL_INS WHERE NAME IS NULL;
```
## IS NOT NULL
> Select ANIMAL_ID from the ANIMAL_INS table, excluding rows where NAME is missing.
```sql
SELECT ANIMAL_ID FROM ANIMAL_INS WHERE NAME IS NOT NULL;
```
<br/>
# IFNULL
IFNULL is used to replace any NULL values in a column with a value you specify.
```sql
SELECT IFNULL(column_name, replacement_value) FROM table_name;
```
<br/>
> From the ANIMAL_INS table, select ANIMAL_TYPE, NAME, and SEX_UPON_INTAKE ordered by ANIMAL_ID. Rows with no NAME should display No name instead of NULL.
```sql
SELECT ANIMAL_TYPE, IFNULL(NAME, "No name"), SEX_UPON_INTAKE
FROM ANIMAL_INS
ORDER BY ANIMAL_ID;
```
<br/>
<br/>
# Ref.
- [IS NULL exercise](https://programmers.co.kr/learn/courses/30/lessons/59039)
- [IS NOT NULL exercise](https://programmers.co.kr/learn/courses/30/lessons/59407)
- [IFNULL exercise](https://programmers.co.kr/learn/courses/30/lessons/59410)
---
title: "March 6th Meeting Notes"
date: 2022-03-06 13:00:00
categories: meeting notes
tags: meeting notes
---
Attendance: MC, VA, OO, AK, MC, AS, AN
**Goals**:
1. Present a topic of interest to the group between 5 and 45 minutes in length.
- We can have lightning talks <= 15 minutes or full discussions > 15 minutes.
2. It is hoped that we can publish/share our work on this website(?) and elsewhere.
**Q. What is the best way to communicate our ideas to each other?**
- Slack channel `Real World Python;ds-study-group`
- This *Documentation* website: Posting *News*
- Email needed for Google Meetings or Zoom calls?
**During our introductions we discussed areas of interest:**
- Machine-learning basics both math and practical applications
- Projects including Azure/AWS
- Projects to improve Python programming skills
- Increasing math skills needed for DS and Stats, such as linear algebra
- Econometric and Financial-tech skills and how to apply them to DS
- The data science of ecology
- Following/Presenting material from Online courses or book/text/material
**Speaking with other students unable to attend, they are interested in:**
- Exploratory Data Analysis
- Logistic Regression
- Using different technologies to present data such as R, Tableau, and Excel.
### TO DO: In approx. 2 weeks
1. **Write down 1,2 or 3(?) ideas you want to present, AND 10(?) resources AND POST on this website as News Post.**
- I believe the next thing we should do is brainstorm as a group. Obviously, your ideas can/will change over time, but I think this will get people talking/thinking. I'm hoping that YOU can find a 'short list' of topics you like and that others may suggest other references to add to yours.
# metaphlan2
[](https://travis-ci.org/EagleGenomics-cookbooks/metaphlan2)
Chef cookbook for installing metaphlan2
# AutoISATools
This is a way to automate the creation of simple ISATools data using the [INRIA-Dyliss configuration](https://www.e-biogenouest.org/groups/dyliss/wiki).
Written in Python, AutoISATools cannot replace the GUI, but it can speed up the creation of new ISAtab files.
After using AutoISATools, you will still need to start ISATools to verify the tab file,
choose the taxons, define particular fields (cf. Unsupported data), and finally create the ISArchive.
## Usage
The easiest way to use this script is probably the command line:
python3 -m autoisatools --study-shortname="ANameYouWillRecognize" --study-name="A very very long title" --study-id="AnUniqueIDWithoutSpace" --description="A very long description of the study" --output-dir=/path/to/ISA_metadata/isatab_files" --contacts-file=/path/to/contacts.csv --pydio-url="http://emme.genouest.org/pydio/ws-dyliss/Escherichia%20coli/GraphCompression"
While all of this information is required, AutoISATools reserves the right to crash if any of it is not given correctly.
And crashes happen (because this script deals directly with userland).
The *Makefile* is a quick way to edit your command.
## Contacts file
The contacts file is a CSV file formatted as follows (the header is NOT optional):
lastname,firstname,midinitials,email,phone,fax,adress,affiliation
michel,vaillant,J.R,michel.vaillant@wanadold.fr,0601020304,4201020304,42 rue des abribus volants 12089 Champagne en Grasfond,INRIA
gerard,morson,,superGG@wanadold.fr,060109347504,428320304,23 über straße,IRISA
Note that the role is a particular field that is not managed by AutoISATools (cf Unsupported data).
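Since AutoISATools is written in Python, a contacts file in this format can be read with the standard `csv` module. This is a minimal sketch (the `read_contacts` helper is illustrative, not part of AutoISATools; the header keeps the file's own spelling `adress`):

```python
import csv
import io

# Header columns exactly as the contacts file defines them
CONTACT_FIELDS = ["lastname", "firstname", "midinitials", "email",
                  "phone", "fax", "adress", "affiliation"]

def read_contacts(fp):
    """Parse a contacts CSV, checking that the mandatory header is present."""
    reader = csv.DictReader(fp)
    if reader.fieldnames != CONTACT_FIELDS:
        raise ValueError(f"unexpected header: {reader.fieldnames}")
    return list(reader)

# Example using the first data row from the sample above
sample = io.StringIO(
    "lastname,firstname,midinitials,email,phone,fax,adress,affiliation\n"
    "michel,vaillant,J.R,michel.vaillant@wanadold.fr,0601020304,4201020304,"
    "42 rue des abribus volants 12089 Champagne en Grasfond,INRIA\n"
)
contacts = read_contacts(sample)
```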
## Unsupported data
Taxon names and other database fields are not supported.
You will need to enter them manually, via the graphical ISATools program.
## Assay names
This is the hard part.
Maybe you will find some of the possible assays (defined by Dyliss) [here](https://www.e-biogenouest.org/groups/dyliss/wiki).
But these data can't easily be used directly: some formatting is necessary.
The only solution, at the moment, is to find out exactly what they are by starting ISATools, creating an assay, and looking at the file named *a_STUDY_LOWID_ASSAY_NAME_ASSAY_TECH.txt*.
These assays names are necessary for file integrity.
Known valid assay names are:
- Visualization_and_Representation;
- Classification_with_Formal_Concepts;
Known valid assay technologies are:
- FCA_for_proteins;
- Metabolic_network_representation;
Please feel free to open a Pull Request to keep this list up to date (and complete).
---
description: 'Learn more about: OLE DB consumers and providers'
title: OLE DB consumers and providers
ms.date: 10/22/2018
helpviewer_keywords:
- OLE DB providers, OLE DB data architecture
- OLE DB providers
- OLE DB consumers, OLE DB data architecture
- OLE DB consumers
- OLE DB, data model
ms.assetid: 886cb39d-652b-4557-93f0-4b1b0754d8bc
ms.openlocfilehash: dedcbe7837cf7fad5bc9db8832e34edd3859a02b
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 12/11/2020
ms.locfileid: "97317145"
---
# <a name="ole-db-consumers-and-providers"></a>OLE DB consumers and providers
The OLE DB architecture uses a consumer/provider model. The consumer issues requests for data. The provider responds to those requests by putting the data into tabular format and returning it to the consumer. Every call that a consumer can make must be implemented in the provider.
Technically defined, a consumer is any system or application code (not necessarily an OLE DB component) that accesses data through OLE DB interfaces. The interfaces are implemented in the provider. A provider is any software component that implements OLE DB interfaces to encapsulate data access and expose the data to other objects (that is, to consumers).
In terms of roles, the consumer calls methods on OLE DB interfaces; the OLE DB provider implements the required OLE DB interfaces.
OLE DB avoids the terms client and server because those roles do not always make sense, particularly in an n-tier design. Because a consumer can be a component in a tier that serves another component that calls it, calling it a client component could be confusing. Also, a provider sometimes acts more like a database driver than like a server.
## <a name="see-also"></a>See also
[OLE DB programming](../../data/oledb/ole-db-programming.md)<br/>
[OLE DB programming overview](../../data/oledb/ole-db-programming-overview.md)
# Dip 🕹
Emulator / interpreter for the CHIP-8 VM.
On MacOS...

On an [ESP32](https://heltec.org/project/wifi-kit-32/) after some porting...

## Building
Dip depends on [SDL2](https://www.libsdl.org/) and [SDL2_gfx](https://www.ferzkopp.net/wordpress/2016/01/02/sdl_gfx-sdl2_gfx/).
To install the dependencies on a Debian/Ubuntu-based system:
```
apt-get install build-essential libsdl2-dev libsdl2-gfx-dev
```
Under MacOS, assuming you have Homebrew and clang installed, it's:
```
brew install sdl2 sdl2_gfx
```
Windows... I have no idea 🤩
Once you have those installed you should be able to build Dip with `make`.
Then spin it up with:
```
./dip -r [path to rom file]
```
| 20.710526
| 127
| 0.709022
|
eng_Latn
| 0.683964
|
a6b60b499db075112109d001d28e6292b7794f7f
| 1,422
|
md
|
Markdown
|
docs/interfaces/_responses_news_feed_response_.newsfeedresponselinksitem.md
|
grinchd/instagram-private-api
|
0c66330ec1c6a8c239ca5e7d9232bdff245ec7ba
|
[
"MIT"
] | 2
|
2021-07-04T18:40:19.000Z
|
2021-07-10T03:25:52.000Z
|
docs/interfaces/_responses_news_feed_response_.newsfeedresponselinksitem.md
|
grinchd/instagram-private-api
|
0c66330ec1c6a8c239ca5e7d9232bdff245ec7ba
|
[
"MIT"
] | 3
|
2020-09-07T15:29:18.000Z
|
2021-05-10T18:07:17.000Z
|
docs/interfaces/_responses_news_feed_response_.newsfeedresponselinksitem.md
|
grinchd/instagram-private-api
|
0c66330ec1c6a8c239ca5e7d9232bdff245ec7ba
|
[
"MIT"
] | 1
|
2022-02-28T14:31:12.000Z
|
2022-02-28T14:31:12.000Z
|
> **[instagram-private-api](../README.md)**
[Globals](../README.md) / ["responses/news.feed.response"](../modules/_responses_news_feed_response_.md) / [NewsFeedResponseLinksItem](_responses_news_feed_response_.newsfeedresponselinksitem.md) /
# Interface: NewsFeedResponseLinksItem
## Hierarchy
* **NewsFeedResponseLinksItem**
## Index
### Properties
* [end](_responses_news_feed_response_.newsfeedresponselinksitem.md#end)
* [id](_responses_news_feed_response_.newsfeedresponselinksitem.md#id)
* [start](_responses_news_feed_response_.newsfeedresponselinksitem.md#start)
* [type](_responses_news_feed_response_.newsfeedresponselinksitem.md#type)
## Properties
### end
• **end**: *number*
*Defined in [responses/news.feed.response.ts:32](https://github.com/dilame/instagram-private-api/blob/3e16058/src/responses/news.feed.response.ts#L32)*
___
### id
• **id**: *string*
*Defined in [responses/news.feed.response.ts:34](https://github.com/dilame/instagram-private-api/blob/3e16058/src/responses/news.feed.response.ts#L34)*
___
### start
• **start**: *number*
*Defined in [responses/news.feed.response.ts:31](https://github.com/dilame/instagram-private-api/blob/3e16058/src/responses/news.feed.response.ts#L31)*
___
### type
• **type**: *string*
*Defined in [responses/news.feed.response.ts:33](https://github.com/dilame/instagram-private-api/blob/3e16058/src/responses/news.feed.response.ts#L33)*
| 28.44
| 197
| 0.7609
|
eng_Latn
| 0.085149
|
a6b6182aaf0e60da58542df3369e2e69e4d4286e
| 2,394
|
md
|
Markdown
|
README.md
|
HantingChen/DAFL
|
e32bb98b088ff6538ffa232358c162371013fc4e
|
[
"BSD-3-Clause"
] | null | null | null |
README.md
|
HantingChen/DAFL
|
e32bb98b088ff6538ffa232358c162371013fc4e
|
[
"BSD-3-Clause"
] | null | null | null |
README.md
|
HantingChen/DAFL
|
e32bb98b088ff6538ffa232358c162371013fc4e
|
[
"BSD-3-Clause"
] | null | null | null |
# DAFL: Data-Free Learning of Student Networks
This repository is the PyTorch implementation of the ICCV 2019 paper [DAFL: Data-Free Learning of Student Networks](https://arxiv.org/pdf/1904.01186.pdf).
We propose a novel framework for training efficient deep neural networks by exploiting generative adversarial networks (GANs). To be specific, the pre-trained teacher networks are regarded as a fixed discriminator and the generator is utilized for derivating training samples which can obtain the maximum response on the discriminator. Then, an efficient network with smaller model size and computational complexity is trained using the generated data and the teacher network, simultaneously.
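The paper drives the generator with three objectives: the teacher should classify generated samples confidently (one-hot loss), the teacher's pre-classifier features should be strongly activated (activation loss), and class predictions should be balanced across a batch (information-entropy loss). A minimal PyTorch sketch of this combined loss follows; the weighting factors `alpha` and `beta` are illustrative, not the repository's defaults:

```python
import torch
import torch.nn.functional as F

def generator_loss(teacher_logits, teacher_features, alpha=0.1, beta=5.0):
    """DAFL-style generator loss (illustrative weights).

    - one-hot loss: cross-entropy against the teacher's own argmax,
      pushing generated samples toward confident predictions;
    - activation loss: negative mean L1 norm of pre-classifier
      features, rewarding strong activations like real inputs cause;
    - information-entropy loss: negative entropy of the batch-mean
      class distribution, rewarding balanced class usage.
    """
    pseudo_labels = teacher_logits.argmax(dim=1)
    loss_onehot = F.cross_entropy(teacher_logits, pseudo_labels)
    loss_activation = -teacher_features.abs().mean()
    mean_probs = F.softmax(teacher_logits, dim=1).mean(dim=0)
    loss_entropy = (mean_probs * torch.log(mean_probs)).sum()
    return loss_onehot + alpha * loss_activation + beta * loss_entropy
```

Minimizing the entropy term (negative entropy) maximizes the entropy of class usage, which discourages the generator from collapsing onto a single class.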
<p align="center">
<img src="figure/figure.jpg" width="800">
</p>
## Requirements
- python 3
- pytorch >= 1.0.0
- torchvision
## Run the demo
```shell
python teacher-train.py
```
First, you should train a teacher network.
```shell
python DAFL-train.py
```
Then, you can use DAFL to train a student network without training data on the MNIST dataset.
To run DAFL on the CIFAR-10 dataset
```shell
python teacher-train.py --dataset cifar10
python DAFL-train.py --dataset cifar10 --channels 3 --n_epochs 2000 --batch_size 1024 --lr_G 0.02 --lr_S 0.1 --latent_dim 1000
```
To run DAFL on the CIFAR-100 dataset
```shell
python teacher-train.py --dataset cifar100
python DAFL-train.py --dataset cifar100 --channels 3 --n_epochs 2000 --batch_size 1024 --lr_G 0.02 --lr_S 0.1 --latent_dim 1000 --oh 0.5
```
## Results
<img src="figure/Table1.jpg" width="600">
</p>
<img src="figure/Table2.jpg" width="600">
</p>
## Citation
```
@inproceedings{DAFL,
  title={DAFL: Data-Free Learning of Student Networks},
  author={Chen, Hanting and Wang, Yunhe and Xu, Chang and Yang, Zhaohui and Liu, Chuanjian and Shi, Boxin and Xu, Chunjing and Xu, Chao and Tian, Qi},
  booktitle={ICCV},
  year={2019}
}
```
## Contributing
We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.
If you plan to contribute new features, utility functions or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR, because we might be taking the core in a different direction than you might be aware of.
| 41.275862
| 494
| 0.740184
|
eng_Latn
| 0.984259
|
a6b61849fb887e24da9094abc0344b8beeefe6cf
| 17,482
|
md
|
Markdown
|
doc/paper.md
|
Grissess/ski-rust
|
3e4955f4c18e1638c3b694ae9ef1d1e3302d39df
|
[
"CC0-1.0"
] | null | null | null |
doc/paper.md
|
Grissess/ski-rust
|
3e4955f4c18e1638c3b694ae9ef1d1e3302d39df
|
[
"CC0-1.0"
] | null | null | null |
doc/paper.md
|
Grissess/ski-rust
|
3e4955f4c18e1638c3b694ae9ef1d1e3302d39df
|
[
"CC0-1.0"
] | null | null | null |
Introduction
============
SKI is a novel cryptographic library that has been developed with a focus on
hardware-constrained environments such as embedded systems, which are becoming
more prevalent with the proliferation of "smart" devices and the Internet of
Things. To date, security in such platforms has always been at odds with
performance and design constraints, and thus suffered in comparison to its
other features; such issues have precipitated widespread security breaches of
unprecedented scale, such as the Mirai botnet (ca. 2016). When hardware and
budgets are tight, economic incentives for security simply don't make the cut,
especially in high-volume production runs. The SKI project thus aims to create
a simple, interoperable, unified specification, using modern algorithms
selected for speed (without sacrificing security), to ease the burden on
embedded systems developers to introduce good security practices into their
products from the start.
SKI is not the first cryptographic library. It borrows heavily from concepts
introduced by Pretty Good Privacy (PGP, Zimmermann 1991), and is very similar
in interface to D.J. Bernstein's 2012 "Networking and Cryptography Library"
(NaCL), which has been revised ("forked") by other major projects, including
libsodium (Frank Denis, 2013). Our major technical contribution is a focus on
hardware constraints, such as processing performance, working memory, or the
presence of a reliable entropy source. Design choices made in concession to
such constraints will be mentioned throughout the presentation.
Overview
========
SKI provides three major cryptographic operations:
- Symmetric ("shared-key") encryption: A scheme to protect the confidentiality
of data where all trusted parties hold the exact same key;
- Asymmetric ("public-key") key derivation: A scheme to derive a shared,
trusted secret using two-part keys (a "keypair"): a public part that can be
disclosed, and a private part kept secret;
- Digital signature: A scheme to provide authenticity of data using a keypair,
where the private part is present for attestation, and the public part can be
used for verification.
For simplicity, SKI can use the same keypair for both key derivation and
digital signature, but library users can separate this functionality amongst
multiple keys if they so choose. SKI imposes no inherent limitations on the use
of keys beyond that they are valid for the operation; such details, if they
must be implemented, are delegated to the developer.
It is worth note that SKI does not provide "authenticated encryption" directly,
while in most similar libraries, this is the only option. It is easy and, we
believe, secure for the authentication and encryption primitives to be composed
in any order to provide the same service. As an added benefit, the primitives
are separable if the use case has no need for, or cannot perform well with, a
particular one.
Data Encoding
=============
Perhaps the most innovative part of the design of this standard is the use of
Uniform Resource Name (URN) syntax as specified in RFC 2141 (but note that we
have not yet registered our namespace), along with RFC 3548 "URI-safe" Base64
encoding to ensure the binary data can be efficiently represented as text. This
decision was primarily because this form is very easy to convey
graphically--for example, in the ubiquitous "Quick Response" (QR) code. We
anticipate that devices can come with a card or attachment (for example, a
sticker), with such a graphical code conveying a SKI URN, and that this is a
practical, scaleable, and secure way to do key exchange on a per-unit basis.
(Such key exchanges may be used to bootstrap trust in the device and allow for
further key exchange, or be the sole key on the device--we will discuss this
more in the section on asymmetric keys and signatures).
All codable objects that SKI can use are compatible with this encoding,
including variable-length data, such as encrypted data packets. However, the
Base64 encoding, while compressible, is unwieldy in many circumstances, so SKI
also allows for a "binary" encoding when using "8-bit clean" (pure binary)
communication channels that will not be rendered as text. This is also a valid
encoding for all objects, but it is generally used only when the packet size is
linear in the input size (thus, encryption operations).
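As a rough illustration of the text encoding, the sketch below wraps URL-safe Base64 data in URN-style syntax. The exact namespace layout, and whether SKI strips Base64 padding, are assumptions here rather than quotations from the specification:

```python
import base64
import os

def to_ski_urn(scheme: str, raw: bytes) -> str:
    # URL-safe Base64 (RFC 3548/4648), with '=' padding stripped for
    # compactness, wrapped in a URN-style "ski:<scheme>:<body>" string.
    body = base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")
    return f"ski:{scheme}:{body}"

key = os.urandom(32)           # a 256-bit symmetric key
urn = to_ski_urn("symk", key)  # e.g. "ski:symk:AbCd..."
```

A string like this is short enough to encode in a QR code, which is the graphical exchange path envisioned above.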
SKI data packets, in either URN or binary form, are not self-describing and
require transport metadata, such as a packet length over a stream protocol (or
a dedicated packet protocol) to communicate. We anticipate this is a reasonable
burden to offload to library users, but if this proves to be false, the current
coding standard is flexible enough to admit a self-describing packet.
Symmetric Encryption
====================
Arguably the easiest form of encryption, symmetric encryption depends on a
shared secret, referred to as a "symmetric key", or simply a key. A symmetric
cipher is two operations, encryption and decryption, which take some amount of
data and such a key, and for which encryption composed with decryption only
with the same key is an identity. The design of such ciphers is considered
secure when the key cannot efficiently be derived from the initial data (the
plaintext), the encrypted data (the ciphertext), or both.
SKI uses the "XChaCha" variant of the ChaCha20 stream cipher, the latter by
Bernstein (2008). As a stream cipher, the ciphertext and the plaintext have the
same length, but--to prevent key recovery--each encryption with the same key
must use a nonce (a number "used once"), which is safe to disclose but must be
unique per key use. It is catastrophic to the security of the cryptosystem to
use the same nonce with different plaintexts. The XChaCha variant we have
chosen has a 256-bit key and 192-bit nonce, which should allow, in theory, an
average of about 7.9e28 uniformly-randomly-chosen nonces to be used with the
same key before such a failure occurs. However, it is unsafe to assume that
embedded systems have a good source of entropy; we posit that it is safe to use
a simple incrementing counter in non-volatile memory for the nonce on such
constrained implementations, which allow for about 6.2e57 nonces before
wrap-around occurs. It is an astronomically remote possibility that any single
device will send this much data, even with a fixed key, over the course of its
usable lifetime.
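The counter-based nonce source suggested above can be sketched as follows. Persistence to non-volatile memory is elided (the counter is just an attribute here), and `next_nonce` is a hypothetical name, not the library's API:

```python
NONCE_BYTES = 24  # XChaCha's 192-bit nonce

class CounterNonce:
    """Monotonic nonce source for entropy-starved devices (sketch).

    In a real device the counter must live in non-volatile storage,
    must be committed before the nonce is used, and must never be
    reused or rolled back.
    """
    def __init__(self, start: int = 0):
        self.counter = start

    def next_nonce(self) -> bytes:
        nonce = self.counter.to_bytes(NONCE_BYTES, "big")
        self.counter += 1  # real code persists this before encrypting
        return nonce
```

Because the nonce space is 192 bits wide, a simple increment never wraps in practice, and uniqueness is guaranteed by construction rather than by entropy.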
Our reference implementation provides the "ski:symk" scheme for the 256-bit
key, and the "ski:syed" scheme for symmetrically-encrypted data, which is the
simple concatenation of the 192-bit nonce and the ciphertext. Under the "sym" command:
- A new, random key on a host with sufficient entropy can be generated with
"gen".
- A key derived from some input (such as a password) can be derived using
"derive". We use the Argon2 key derivation function, which is specifically
chosen to be hard against adversaries with access to FPGAs and difficult to
implement on an ASIC with better performance than a modern computer. As it
requires 8MB of random-access memory, we do not anticipate that such a
command will be used on an embedded device; rather, only the key itself needs
to be stored.
- Ciphertext can be produced from plaintext using the "encrypt" command, and
the inverse done with the "decrypt" command, both of which expect a symmetric
key. Both use the binary encoding of the "syed" scheme by default, but can be
given an argument to generate a URN instead.
Asymmetric Encryption
=====================
Asymmetric encryption is so named for having two distinct keys for its
operation, one of which can be disclosed. In a secure asymmetric cryptosystem,
the disclosed "public" key cannot be used to efficiently derive the undisclosed
"private" key in the same pair. Otherwise, the guarantees from a symmetric
cipher hold analogously.
SKI uses Elliptic Curve Diffie-Hellman (ECDH) over Curve25519, an elliptic
curve over the prime 2^255 - 19, developed by Bernstein (2005), which he
conjectures to have the equivalent of 128-bit security (requires the equivalent
of 2^128 brute-force attempts to break some aspect of the cryptosystem). The
Diffie-Hellman protocol requires that both parties attempting to encrypt a
message have disclosed their public keys to each other, and generates a shared
secret key that is believed to be difficult to derive without access to at
least one of the private keys. To preserve forward secrecy, the shared key,
which is constant for the lifetime of the two keypairs, is used to encrypt a
random per-message key; in this way, disclosure of the message key does not
provide significant information about the shared key, or either private key.
The cipher used for encrypting both the message key and the message itself is
the selfsame symmetric cipher discussed previously.
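The per-message-key structure of an encrypted packet can be sketched as below, with the XChaCha primitive abstracted behind a caller-supplied `stream_encrypt` function (a stand-in for illustration, not SKI's actual API; the real `ski:encr` layout is defined by the specification):

```python
import os

def asymmetric_encrypt(shared_key: bytes, plaintext: bytes,
                       stream_encrypt) -> bytes:
    """Forward-secrecy sketch of the two-part encrypted packet.

    `stream_encrypt(key, data) -> nonce || ciphertext` stands in for
    the symmetric primitive. The ECDH-derived shared key only ever
    encrypts a fresh random message key, so disclosing one message
    key reveals nothing about the long-lived shared key.
    """
    message_key = os.urandom(32)                       # fresh per message
    wrapped = stream_encrypt(shared_key, message_key)  # fixed length
    body = stream_encrypt(message_key, plaintext)      # variable length
    return wrapped + body
```

Because the wrapped message key has a fixed length, a receiver can split the packet without any extra framing.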
In embedded implementations where randomness may be a concern, we posit that it
is acceptable to use a non-volatile counter, as with the nonce, to generate the
message keys. However, to avoid catastrophic loss of confidentiality, the
initial values of the message key and symmetric nonce counters should be
unrelated and never disclosed. We anticipate that it is acceptable to use some
entropy at manufacture or program time to initialize the values of these
counters independently, but for this reason, we encourage systems using
asymmetric encryption to use a cryptographically-secure entropy source for
these operations whenever resources allow (such as applications designed for
general-purpose computers).
Our reference implementation provides the "ski:prvk" and "ski:pubk" URN schemes
for private and public keys, respectively, both of which are 256-bit. It also
provides the "ski:shak" scheme for the ECDH-derived shared key. Finally, it
provides the "ski:encr" scheme for asymmetrically-encrypted data packets, which
consists of the data of two "syed" packets: the first, of fixed length, is an
encrypted message key, and the second, of variable length, is the message,
encrypted with the message key. Under the "key" command:
- A new, random private key on a host with sufficient entropy can be generated
with "gen". Compared to many other contemporary cryptosystems, the generation
of a private key for Curve25519 is relatively fast, and requires only 253
bits of entropy.
- The public key for a given private key can be derived with "pub".
- The shared key for one public and one private key can be generated with
"shared". We don't recommend this in practice, because disclosing this key
can result in loss of confidentiality in all messages between these two keys;
it is provided as a diagnostic aid, and for integration with other systems.
- Ciphertext can be produced from plaintext with "encrypt", and the inverse
done with "decrypt". Both operations require a public key (usually the
"recipient") and a private key (usually the "sender"). "decrypt" can recover
using either the original private and public key, or (more typically) the
private key of the recipient and the public key of the sender.
Future work includes extending the number of public keys to which a single
message can be addressed, as it requires only constant size (the size of one
encrypted message key) to add further recipients by public key.
Digital Signatures
==================
A signature is a token, produced over some data, using a secret, which can be
"verified" by other parties to ensure integrity of the data. For such a system
to be secure, an adversary cannot efficiently derive such a token without the
secret, nor derive the secret from the token, and any attempt to interfere with
the integrity of the message (change its data in any way) or of the token
should cause verification to fail.
Efficient cryptographic-hash-based schemes, known as Hash-based Message
Authentication Codes (HMACs) exist for shared secrets, but are unimplemented in
SKI. Instead, SKI uses the Elliptic Curve Digital Signature Algorithm, also
over Bernstein's Curve25519, and using the 512-bit variant of the NIST Secure
Hashing Algorithm 2, published 2001, as the message digest. Because
Curve25519's field is 256-bit, we use only the first 256 bits of the 512 bit
digest; nonetheless, SHA-512 has better preimage resistance than SHA-256 (the
256-bit variant), so we posit this design has better--or, at least, no
worse--security properties.
As with other primitives, ECDSA requires a 255-bit "nonce" per signature;
unlike the previous primitives, this "nonce" is a secret. Because of this
change of role, we refer to this datum as a "nonce preimage". Disclosing the
nonce preimage, or using the same nonce preimage for distinct messages, allows
an adversary to recover the private key, thus defeating the integrity
guarantees of this signature scheme and the confidentiality of any encrypted
messages for which this private key was used. For this reason, we recommend
using distinct signature and encryption keys whenever possible, to mitigate the
damage caused by an accidental disclosure. While yet another nonce counter
could be used for such nonces, SKI also provides for deterministic nonces,
where the 255 bits are derived from a SHA-512 digest over the private key and
the data. Thus, the security of this scheme relies on the resistance of the
underlying hash function to collision; to the best of our knowledge, SHA-512
remains secure in this regard. Nonetheless, on platforms where access to
entropy is readily available, we still recommend using randomized signatures,
which relaxes the security proof to not rely on SHA-512's collision resistance.
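The deterministic derivation can be sketched as below. The exact truncation to 255 bits and any domain separation SKI applies are assumptions here; the essential property is that the nonce preimage is a function of the private key and the message, so distinct messages get distinct nonces:

```python
import hashlib

def deterministic_nonce_preimage(private_key: bytes,
                                 message: bytes) -> int:
    """Derive a 255-bit nonce preimage from SHA-512(priv || msg).

    Deterministic per (key, message) pair: signing the same message
    twice reuses the same nonce harmlessly, while distinct messages
    yield distinct nonces as long as SHA-512 resists collisions.
    """
    digest = hashlib.sha512(private_key + message).digest()
    return int.from_bytes(digest, "big") >> (512 - 255)  # keep 255 bits
```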
Our reference implementation provides digital signatures through the "ski:sign"
scheme, which consists of a "nonce postimage" (frequently labelled "r" in ECDSA
literature) and the signature token (usually "s") derived from the private key
and the given nonce preimage. Both of these are a constant 256 bits in length.
Since the private and public keys are compatible with the asymmetric encryption
keys, they can be used with two more "key" subcommands:
- "sign", which generates a SKI signature for a message, and
- "verify", which validates a SKI signature for a message.
Note that the SKI signature is of constant size and "detached"; it must be sent
independently of the message.
Authenticated Encryption
========================
Although not included as a primitive, we revisit that the two previous
primitives combined--asymmetric encryption and digital signatures--can create
"authenticated encryption". There are three ways, in general, to combine these
functions:
- Encrypt-then-MAC: Encrypt the data, then sign the ciphertext;
- MAC-then-encrypt: Compute a signature, concatenate it with the plaintext, and
encrypt the concatenation;
- Encrypt-and-MAC: Sign the plaintext independently of encrypting it.
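For example, the Encrypt-then-MAC composition can be built from the two primitives like this (the function names are placeholders for caller-supplied SKI operations, not the library's API):

```python
def encrypt_then_sign(encrypt, sign, plaintext):
    # Encrypt-then-MAC: the signature covers the ciphertext, so a
    # receiver can reject forged packets before decrypting anything.
    ciphertext = encrypt(plaintext)
    return ciphertext, sign(ciphertext)

def verify_then_decrypt(decrypt, verify, ciphertext, signature):
    # Verification happens first; untrusted data never reaches the
    # decryption (or parsing) path.
    if not verify(ciphertext, signature):
        raise ValueError("signature check failed")
    return decrypt(ciphertext)
```

Rejecting before decrypting is the main argument for this ordering: it limits the attack surface exposed to unauthenticated input.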
Various subtleties should be observed with each approach, but we believe that
the systems we have chosen have beneficial properties regardless of AE scheme:
primarily, the reliance on independent nonces makes it difficult to attack any
scheme even if the same private key is used for both primitives, and the use of
a stream cipher mitigates the effects of padding-oracle attacks. Note, however,
that other protocol-specific oracles may be possible due to the malleability of
the stream cipher, and thus we encourage authenticating encrypted data (using
any above scheme) unless a compelling reason justifies otherwise.
Pragmatics and Miscellany
=========================
One familiar with other cryptosystems occupying a similar niche in software
design, particularly PGP, will note that SKI has no concept of a "certificate".
This is a simplifying assumption--we make no attempt to implement any kind of
database, key storage, or key validation scheme, as we anticipate that these
will depend heavily on implementation details and the environment in which a
solution is deployed. Similarly, without certificates, SKI has no inbuilt
concept of "key expiry"; it is up to the designers of devices to determine
their rekeying policies, and requires support in the sense of a (trusted)
timekeeping device, such as a real-time clock or networked time server. These,
we believe, are ancillary to the design of a good foundation of a cryptosystem,
though we have made every effort to ensure that the primitives provided by this
library remain secure enough for practical use, as of the present, for the
foreseeable lifetimes of these embedded devices.
There is presently no quantum-resistant cryptography in SKI as of yet; the
implementations of such cryptosystems we've reviewed so far have not met our
standards of performance on embedded devices. However, this is only a pragmatic
compromise for the time being; as hardware and cryptography improve, we plan on
devoting future work to including quantum-resistant suites.
Conclusion
==========
In summation, we present SKI, a simple, novel cryptographic library targeting
embedded and hardware-constrained systems, with a focus on performance,
portability, and ease of use. We discussed its cryptographic primitives, their
usage, and the concessions made pursuant to these goals. We release an
implementation freely, in hopes that it can improve the status quo of security
for embedded platforms and devices as they continue to proliferate, and we hope
that this contribution will help abate the security concerns of such
technologies.
Thank you for listening.
| 53.95679
| 86
| 0.793673
|
eng_Latn
| 0.999767
|
a6b62448920bac4df638b6e02b78e2ba2b9ab336
| 367
|
md
|
Markdown
|
app/about.md
|
gongyh/TSTP
|
aa8281d81f743bf05b9a43b0552252c403c7df4c
|
[
"MIT"
] | null | null | null |
app/about.md
|
gongyh/TSTP
|
aa8281d81f743bf05b9a43b0552252c403c7df4c
|
[
"MIT"
] | null | null | null |
app/about.md
|
gongyh/TSTP
|
aa8281d81f743bf05b9a43b0552252c403c7df4c
|
[
"MIT"
] | 1
|
2020-10-22T12:06:25.000Z
|
2020-10-22T12:06:25.000Z
|
---
title: "Usage"
author: "Yanhai Gong"
date: "`r format(Sys.time(), '%d %B, %Y')`"
---
<style>
body {
text-align: justify}
</style>
### Remarks
[Source Code](https://github.com/gongyh/TSTP) & [Sample Data](https://github.com/gongyh/TSTP/tree/master/app/sampleData)
For any issues or questions that might arise, please email <strong>gongyh@qibebt.ac.cn</strong>.
| 22.9375
| 120
| 0.683924
|
eng_Latn
| 0.293814
|
a6b6af37406b1e44cd11ebf631d10b5ede3e7d0c
| 4,115
|
md
|
Markdown
|
docs/getting-started.md
|
niachary/cluster-api-provider-azure
|
01edf79a5ed7c2ddf194989b4a95dea409c37acd
|
[
"Apache-2.0"
] | null | null | null |
docs/getting-started.md
|
niachary/cluster-api-provider-azure
|
01edf79a5ed7c2ddf194989b4a95dea409c37acd
|
[
"Apache-2.0"
] | null | null | null |
docs/getting-started.md
|
niachary/cluster-api-provider-azure
|
01edf79a5ed7c2ddf194989b4a95dea409c37acd
|
[
"Apache-2.0"
] | null | null | null |
# Getting started with cluster-api-provider-azure <!-- omit in toc -->
## Contents <!-- omit in toc -->
<!-- Below is generated using VSCode yzhang.markdown-all-in-one -->
<!-- TOC depthFrom:2 -->
- [Prerequisites](#prerequisites)
- [Requirements](#requirements)
- [Optional](#optional)
- [Setting up your Azure environment](#setting-up-your-azure-environment)
- [Troubleshooting](#troubleshooting)
- [Bootstrap running, but resources aren't being created](#bootstrap-running-but-resources-arent-being-created)
- [Resources are created but control plane is taking a long time to become ready](#resources-are-created-but-control-plane-is-taking-a-long-time-to-become-ready)
- [Building from master](#building-from-master)
<!-- /TOC -->
## Prerequisites
### Requirements
- Linux or macOS (Windows isn't supported at the moment)
- A [Microsoft Azure account](https://azure.microsoft.com/en-us/)
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
- Install the [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [KIND]
- [kustomize]
- make
- gettext (with `envsubst` in your PATH)
- md5sum
- bazel
### Optional
- [Homebrew][brew] (MacOS)
- [jq]
- [Go]
[brew]: https://brew.sh/
[go]: https://golang.org/dl/
[jq]: https://stedolan.github.io/jq/download/
[kind]: https://sigs.k8s.io/kind
[kustomize]: https://github.com/kubernetes-sigs/kustomize
### Setting up your Azure environment
An Azure Service Principal is needed for populating the controller manifests. This utilizes [environment-based authentication](https://docs.microsoft.com/en-us/go/azure/azure-sdk-go-authorization#use-environment-based-authentication).
1. List your Azure subscriptions.
```bash
az account list -o table
```
2. Save your Subscription ID in an environment variable.
```bash
export AZURE_SUBSCRIPTION_ID="<SubscriptionId>"
```
3. Create an Azure Service Principal by running the following command or skip this step and use a previously created Azure Service Principal.
```bash
az ad sp create-for-rbac --name SPClusterAPI --role owner
```
4. Save the output from the above command in environment variables.
```bash
export AZURE_TENANT_ID="<Tenant>"
export AZURE_CLIENT_ID="<AppId>"
export AZURE_CLIENT_SECRET='<Password>'
export AZURE_LOCATION="eastus" # this should be an Azure region that your subscription has quota for.
```
:warning: NOTE: If your password contains single quotes (`'`), make sure to escape them. To escape a single quote, close the quoting before it, insert the single quote, and re-open the quoting.
For example, if your password is `foo'blah$`, you should do `export AZURE_CLIENT_SECRET='foo'\''blah$'`.
5. Set the name of the AzureCloud to be used, the default value that would be used by most users is "AzurePublicCloud", other values are:
- ChinaCloud: "AzureChinaCloud"
- GermanCloud: "AzureGermanCloud"
- PublicCloud: "AzurePublicCloud"
- USGovernmentCloud: "AzureUSGovernmentCloud"
```bash
export AZURE_ENVIRONMENT="AzurePublicCloud"
```
<!--An alternative is to install [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and have the project's script create the service principal automatically. _Note that the service principals created by the scripts will not be deleted automatically._ -->
### Using images
By default, images offered by "capi" in the Azure Marketplace are used. You can list these *reference images* with this command:
```bash
az vm image list --publisher cncf-upstream --offer capi --all -o table
```
For more control over your nodes, you can use a [*custom image*][custom-images].
## Troubleshooting
Please refer to the [troubleshooting guide][troubleshooting].
## Building from master
If you're interested in developing cluster-api-provider-azure and getting the latest version from `master`, please follow the [development guide][development].
[custom-images]: /docs/topics/custom-images.md
[development]: /docs/development.md
[troubleshooting]: /docs/troubleshooting.md
| 35.782609
| 299
| 0.742892
|
eng_Latn
| 0.860912
|
a6b6b554f370de1b1234bb3de7d30362a5da03cb
| 2,262
|
md
|
Markdown
|
README.md
|
solatticus/genesis
|
d8a9d6ca92c90d1d97bfa6daebf6cc45aef41255
|
[
"MIT"
] | null | null | null |
README.md
|
solatticus/genesis
|
d8a9d6ca92c90d1d97bfa6daebf6cc45aef41255
|
[
"MIT"
] | null | null | null |
README.md
|
solatticus/genesis
|
d8a9d6ca92c90d1d97bfa6daebf6cc45aef41255
|
[
"MIT"
] | null | null | null |
# Genesis .Net
An exploratory orchestration-based code generation tool. Data from pretty much any source that .Net is able to consume can be used to generate a variety of boilerplate code files or execute arbitrary code.
*To run:*
[.Net Core 3](https://dotnet.microsoft.com/download/dotnet-core/3.0) is required, but the Visual Studio 2019 installer should install it, so...
* [Visual Studio Win/Mac 2019](https://visualstudio.com "Visual Studio Win/Mac 2019")
## How does it work?
Genesis is centered around a group of [ObjectGraph](https://github.com/solatticus/genesis/blob/master/src/Genesis/ObjectGraph.cs) objects and pieces of code that manipulate them, called [Executors](https://github.com/genesisdotnet/genesis/blob/master/src/Genesis/GenesisExecutor.cs). So far there are 3 types of Executors that manipulate ObjectGraphs, their properties and methods, events... and types of each... etc.
##### `Input` executors deal with a "source". (intentionally arbitrary)
They're responsible for interrogating some data store (or weburl, or text file, or...) and populating a group of ObjectGraphs described above. They're available to all other executors at any point. It's currently serial execution, via multicast delegate.
##### `Output` executors do exactly that, output files. (or anything else you code)
They can use the data in the ObjectGraphs to write out classes, services, interfaces, clients, repositories etc. Anything really. They don't even have to write code.
##### `General` executors do everything else.
General executors don't necessarily "read" something like an input, and don't necessarily "write" something as an output. They do have access to the current ObjectGraphs in memory though.
# First run
* The first thing that needs to happen is for the cli to scan for executors and make them available to Genesis. Do this by simply typing `scan`. (this is scriptable)
* You should see what *[Executors](https://github.com/genesisdotnet/genesis/blob/master/src/Genesis/IGenesisExecutor.cs)* were found in the output.
* They're addressable by their green text.
* Forgive the short docs... but
* exec 'something in green text'

---
title: Create interactive reports Azure Monitor for VMs with workbooks
description: Simplify complex reporting with predefined and custom parameterized workbooks for Azure Monitor for VMs.
ms.subservice: ''
ms.topic: conceptual
author: bwren
ms.author: bwren
ms.date: 03/12/2020
ms.openlocfilehash: a6ab126c3a5b0d2a82b17fac42dcc9e20f6aba3f
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/28/2020
ms.locfileid: "79480449"
---
# <a name="create-interactive-reports-azure-monitor-for-vms-with-workbooks"></a>Create interactive reports Azure Monitor for VMs with workbooks

Workbooks combine text, [log queries](../log-query/query-language.md), metrics, and parameters into rich interactive reports. Workbooks are editable by any other team members who have access to the same Azure resources.

Workbooks are helpful for scenarios such as:

* Exploring the usage of your virtual machine when you don't know the metrics of interest in advance: CPU utilization, disk space, memory, network dependencies, and so on. Unlike other usage analytics tools, workbooks let you combine multiple kinds of visualizations and analyses, making them great for this kind of free-form exploration.
* Explaining to your team how a recently provisioned VM is performing, by showing metrics for key counters and other log events.
* Sharing the results of a VM resizing experiment with other members of your team. You can explain the goals of the experiment with text, then show each usage metric and the analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target.
* Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
The following table summarizes the workbooks that Azure Monitor for VMs includes to get you started.

| Workbook | Description | Scope |
|----------|-------------|-------|
| Performance | Provides a customizable version of our Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled. | At scale |
| Performance counters | A Top N chart view across a wide set of performance counters. | At scale |
| Connections | Connections provides an in-depth view of the inbound and outbound connections from your monitored VMs. | At scale |
| Active Ports | Provides a list of the processes that have bound to the ports on the monitored VMs and their activity in the chosen timeframe. | At scale |
| Open Ports | Provides the number of ports open on your monitored VMs and the details on those open ports. | At scale |
| Failed Connections | Displays the count of failed connections on your monitored VMs, the failure trend, and whether the percentage of failures is increasing over time. | At scale |
| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, and where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. | At scale |
| TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. | At scale |
| Traffic Comparison | This workbook lets you compare network traffic trends for a single machine or a group of machines. | At scale |
| Performance | Provides a customizable version of our Performance view that leverages all of the Log Analytics performance counters that you have enabled. | Single VM |
| Connections | Connections provides an in-depth view of the inbound and outbound connections from your VM. | Single VM |
## <a name="creating-a-new-workbook"></a>Creating a new workbook

A workbook is composed of sections consisting of independently editable charts, tables, text, and input controls. To better understand workbooks, let's start by opening a template and walking through the creation of a custom workbook.

1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select **Virtual Machines**.
3. From the list, select a VM.
4. On the VM page, in the **Monitoring** section, select **Insights**.
5. On the VM insights page, select the **Performance** or **Map** tab and then select **View Workbooks** from the link on the page. From the drop-down list, select **Go to Gallery**.

This launches the workbook gallery with a number of prebuilt workbooks to help you get started.

6. Create a new workbook by selecting **New**.

## <a name="editing-workbook-sections"></a>Editing workbook sections

Workbooks have two modes: **editing mode** and **reading mode**. When a new workbook is first launched, it opens in **editing mode**. It shows all the content of the workbook, including any steps and parameters that are otherwise hidden. **Reading mode** presents a simplified report-style view. Reading mode allows you to abstract away the complexity that went into creating a report while still having the underlying mechanics only a few clicks away when needed for modification.

1. When you're done editing a section, click **Done Editing** in the bottom-left corner of the section.
2. To create a duplicate of a section, click the **Clone this section** icon. Creating duplicate sections is a great way to iterate on a query without losing previous iterations.
3. To move a section up or down in a workbook, click the **Move up** or **Move down** icon.
4. To remove a section permanently, click the **Remove** icon.

## <a name="adding-text-and-markdown-sections"></a>Adding text and Markdown sections

Adding headings, explanations, and commentary to your workbooks helps turn a set of tables and charts into a narrative. Text sections in workbooks support the [Markdown syntax](https://daringfireball.net/projects/markdown/) for text formatting, like headings, bold, italics, and bulleted lists.

To add a text section to your workbook, use the **Add text** button at the bottom of the workbook or at the bottom of any section.
## <a name="adding-query-sections"></a>Adding query sections

To add a query section to your workbook, use the **Add query** button at the bottom of the workbook or at the bottom of any section.

Query sections are highly flexible and can be used to answer questions like:

* What was the CPU utilization during the same time period as a spike in network traffic?
* What was the trend in available disk space over the last month?
* How many network connection failures did my VM experience over the last two weeks?

You also aren't limited to querying from the context of the virtual machine you launched the workbook from. You can query across multiple virtual machines, as well as Log Analytics workspaces, as long as you have access permission to those resources.

To include data from other Log Analytics workspaces or from a specific Application Insights app, use the **workspace** identifier. To learn more about cross-resource queries, refer to the [official guidance](../log-query/cross-workspace-query.md).
### <a name="advanced-analytic-query-settings"></a>Advanced analytic query settings

Each section has its own advanced settings, which are accessible through the settings icon located to the right of the **Add parameters** button.

| Setting | Description |
| ---------------- |:-----|
| **Custom width** | Makes an item an arbitrary size, so you can fit many items on a single line, allowing you to better organize your charts and tables into rich interactive reports. |
| **Conditionally visible** | Specifies whether to hide steps based on a parameter when in reading mode. |
| **Export a parameter** | Allows a selected row in the grid or chart to cause later steps to change values or become visible. |
| **Show query when not editing** | Displays the query above the chart or table even when in reading mode. |
| **Show open in analytics button when not editing** | Adds the blue Analytics icon to the right-hand corner of the chart to allow one-click access. |

Most of these settings are fairly intuitive, but to understand **Export a parameter** it is better to examine a workbook that makes use of this functionality.

One of the prebuilt workbooks, **TCP Traffic**, provides information on connection metrics from a VM.

The first section of the workbook is based on log query data. The second section is also based on log query data, but selecting a row in the first table interactively updates the contents of the charts:

This behavior is made possible by the **When an item is selected, export a parameter** advanced setting, which is enabled in the table's log query.

The second log query then uses the exported values when a row is selected to create a set of values that are then used by the section heading and charts. If no row is selected, the section heading and charts are hidden.

For example, the hidden parameter in the second section uses the following reference from the row selected in the grid:
```
VMConnection
| where TimeGenerated {TimeRange}
| where Computer in ("{ComputerName}") or '*' in ("{ComputerName}")
| summarize Sent = sum(BytesSent), Received = sum(BytesReceived) by bin(TimeGenerated, {TimeRange:grain})
```
## <a name="adding-metrics-sections"></a>Adding metrics sections

Metrics sections give you full access to incorporate Azure Monitor metrics data into your interactive reports. In Azure Monitor for VMs, the prebuilt workbooks typically contain analytics query data rather than metric data. You can choose to build workbooks with metric data, allowing you to take full advantage of the best of both features all in one place. You also have the ability to pull in metric data from resources in any of the subscriptions you have access to.

Here is an example of virtual machine data being pulled into a workbook to provide a visualization of CPU performance:

## <a name="adding-parameter-sections"></a>Adding parameter sections

Workbook parameters allow you to change values in the workbook without having to manually edit the query or text sections. This removes the requirement of needing to understand the underlying analytics query language and greatly expands the potential audience of workbook-based reporting.

The values of parameters are replaced in a query, text, or other parameter sections by putting the name of the parameter in braces, like ``{parameterName}``. Parameter names follow rules similar to JavaScript identifiers: alphabetic characters or underscores, followed by alphanumeric characters or underscores. For example, **a1** is allowed, but **1a** is not.

Parameters are linear, starting from the top of a workbook and flowing down to later steps. Parameters declared later in a workbook can override parameters that were declared earlier. This also lets parameters that use queries access the values of parameters defined earlier. Within a parameter step itself, parameters are also linear, left to right, where parameters to the right can depend on a parameter declared earlier in the same step.

Four different types of parameters are currently supported:

| Parameter | Description |
| ---------------- |:-----|
| **Text** | Allows the user to edit a text box, and you can optionally supply a query to fill in the default value. |
| **Drop down** | Allows the user to choose from a set of values. |
| **Time range picker** | Allows the user to choose from a predefined set of time range values, or pick from a custom time range. |
| **Resource picker** | Allows the user to choose from the resources selected for the workbook. |
### <a name="using-a-text-parameter"></a>Using a text parameter

The value a user types in the text box is replaced directly in the query, with no escaping or quoting. If the value you need is a string, the query should have quotes around the parameter (like **'{parameter}'**).

The text parameter allows the value in a text box to be used anywhere. It can be a table name, column name, function name, operator, and so on. The text parameter type has a setting, **Get default value from analytics query**, which allows the workbook author to use a query to populate the default value for that text box.

When using the default value from a log query, only the first value of the first row (row 0, column 0) is used as the default value. Therefore it is recommended to limit your query to return just one row and one column. Any other data returned by the query is ignored.

Whatever value the query returns will be substituted directly, with no escaping or quoting. If the query returns no rows, the result of the parameter is either an empty string (if the parameter is not required) or undefined (if the parameter is required).
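As a small sketch of the substitution and quoting rules above (the parameter name `computerName` and this query are illustrative, not taken from a shipped workbook):

```
Heartbeat
| where TimeGenerated {TimeRange}
| where Computer == '{computerName}'
| summarize LastHeartbeat = max(TimeGenerated) by Computer
```

Because the parameter value is a string, it is wrapped in single quotes inside the query; the `{TimeRange}` parameter, by contrast, expands to a filter expression and needs no quoting.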
### <a name="using-a-drop-down"></a>Using a drop-down

The dropdown parameter type lets you create a dropdown control that allows the selection of one or many values.

The dropdown is populated by a log query or JSON. If the query returns one column, the values in that column are both the value and the label in the dropdown control. If the query returns two columns, the first column is the value and the second column is the label shown in the dropdown. If the query returns three columns, the third column is used to indicate the default selection in that dropdown. This column can be any type, but the simplest approach is to use bool or numeric types, where 0 is false and 1 is true.

If the column is a string type, a null/empty string is considered false, and any other value is considered true. For single-selection dropdowns, the first value with a true value is used as the default selection. For multiple-selection dropdowns, all values with a true value are used as the default selected set. The items in the dropdown are shown in whatever order the query returned rows.

Let's look at the parameters present in the Connections Overview report. Click the edit symbol next to **Direction**.

This launches the **Edit Parameter** menu item.

The JSON lets you generate an arbitrary table populated with content. For example, the following JSON generates two values in the dropdown:
```
[
{ "value": "inbound", "label": "Inbound"},
{ "value": "outbound", "label": "Outbound"}
]
```
A more applicable example is using a dropdown to pick from a set of performance counters by name:
```
Perf
| summarize by CounterName, ObjectName
| order by ObjectName asc, CounterName asc
| project Counter = pack('counter', CounterName, 'object', ObjectName), CounterName, group = ObjectName
```
The query displays the results as follows:

Dropdowns are incredibly powerful tools for customizing and creating interactive reports.
### <a name="time-range-parameters"></a>Time range parameters

While you can make your own custom time range parameter via the dropdown parameter type, you can also use the out-of-box time range parameter type if you don't need the same degree of flexibility.

Time range parameter types have 15 default ranges that go from five minutes to the last 90 days. There is also an option to allow custom time range selection, which lets the operator of the report choose explicit start and stop values for the time range.
### <a name="resource-picker"></a>Resource picker

The resource picker parameter type gives you the ability to scope your report to certain types of resources. An example of a prebuilt workbook that leverages the resource picker type is the **Performance** workbook.

## <a name="saving-and-sharing-workbooks-with-your-team"></a>Saving and sharing workbooks with your team

Workbooks are saved within a Log Analytics workspace or a virtual machine resource, depending on how you access the workbooks gallery. A workbook can be saved to the **My Reports** section, which is private to you, or to the **Shared Reports** section, which is accessible to everyone with access to the resource. To view all the workbooks in the resource, click the **Open** button in the action bar.

To share a workbook that's currently in **My Reports**:

1. Click **Open** in the action bar.
2. Click the "..." button beside the workbook you want to share.
3. Click **Move to Shared Reports**.

To share a workbook with a link or via email, click **Share** in the action bar. Keep in mind that recipients of the link need access to this resource in the Azure portal to view the workbook. To make edits, recipients need at least Contributor permissions for the resource.

To pin a link to a workbook to an Azure dashboard:

1. Click **Open** in the action bar.
2. Click the "..." button beside the workbook you want to pin.
3. Click **Pin to dashboard**.

## <a name="next-steps"></a>Next steps

- To identify limitations and overall VM performance, see [View Azure VM performance](vminsights-performance.md).
- To learn about discovered application dependencies, see [View Azure Monitor for VMs Map](vminsights-maps.md).
Bela Warp Detect
================
This project is used to do syllable detection or template matching in real time
on an audio signal. This implementation is inspired by two earlier projects:
* [Syllable Detector](https://github.com/gardner-lab/syllable-detector-swift) - A neural network based syllable detector that feeds a spectrogram into a simple network (perceptron) for matching. Achieves low-latency, high accuracy detection. Requires a MATLAB trained neural network.
* [Find Audio](https://github.com/gardner-lab/find-audio) - A MATLAB library (and accompanying MEX files) to use a variant of dynamic time warping to find and align renditions of an audio template in a longer signal.
The implementation here uses a dynamic programming approach to syllable detection,
reducing the need to amass a huge amount of data or perform a long training process.
The detection is designed to be usable from MATLAB (for training and performance
evaluation), and is designed to be run on either a Mac computer (using Core Audio for low
latency audio input and output) or on a [Bela](http://bela.io) embedded platform
(which uses a real time operating system and is capable of audio signal processing).
Requirements
------------
The code supports:
* macOS, via either the C++ API or MATLAB MEX files
* Bela embedded platform
To use on macOS, you must have:
* Xcode - The project is packaged as an Xcode project for building and testing, and installing Xcode installs the necessary compiler and other functionality for compiling the mex files.
Compiling on Bela
-----------------
To install, create a feedback project on the Bela. Once created, you can copy or update the needed files using the command:
```
> ./bela_update.sh
```
Alternatively, you can use the web based IDE or manually move files onto the Bela. The project directory structure does not need to be preserved, and to run, you need to upload the appropriate "render.cpp" file and all files in "BelaWarpDetect/Library".
Note that the project needs C++11. As a result, it should be configured with the make parameter `CPPFLAGS=-std=c++11`.
Running on macOS
----------------
In addition to running under the realtime conditions of the Bela board, the matching
can be run from a Mac computer. The macOS implementation uses Core Audio for low latency
audio input and output, with relatively low computational demands.
The macOS implementation is written in "BelaWarpDetect/main.cpp" and reads in a template
file specified in the code (eventually this will be revised to allow for command line
arguments).
MATLAB Interface
----------------
To help with testing, there are MATLAB MEX implementations that allow using the matching code in MATLAB. To compile the MEX functions, switch to the "BelaWarpDetect/Matlab" folder and run:
```
>> compile_mex
```
The following functions are available:
### Match Syllables
The match syllables function looks for renditions of one or more syllables in an audio file, and will return both the scores and lengths from the dynamic time warping function. To use:
```
>> [scores, lengths] = match_syllables(audio, fs, syllable1, ...);
```
Parameter `audio` contains the audio stream within which to search for the syllables (`syllable1`, etc), while `fs` is the sampling rate for both the audio and the syllables. The function returns `scores` and `lengths` for each spectral column of `audio` (rows) and for each syllable (columns).
The syllable(s) can either be audio or can be a spectral template (see `build_template` for constructing spectral templates).
### Evaluate Syllables
Evaluates a single syllable, returning scores based on other syllable renditions.
```
>> [scores, lengths] = eval_syllable(syllable, fs, audio1, ...);
```
Parameter `syllable` contains the audio or spectral template (see `build_template` for constructing spectral templates) for a single syllable, while `fs` is the sampling rate for both the syllable and the audio. Multiple short audio segments (`audio1`, etc.) are passed through the matcher, and the score and length at the end of each is returned.
This helps in building confusion matrices to understand how effective the spectral template or audio is at separating syllables.
Hancock Signature Gem
=========================
Gem for submitting documents to DocuSign with electronic signature tabs.
[](https://travis-ci.org/renewablefunding/hancock)
## TODO:
* Allow sending of previously saved envelopes (right now `#save` followed by `#send!` just generates a new envelope)
## Interface Specification
<a name=".configure"/>
### 1. Configuration
```
Hancock.configure do |c|
c.oauth_token = 'MY-REALLY-LONG-TOKEN'
c.account_id = '999999'
#c.endpoint = 'https://www.docusign.net/restapi'
#c.api_version = 'v2'
c.email_template = {
:subject => 'sign me',
:blurb => 'whatever '
}
end
```
#### Description
DocuSign has the ability to make callbacks to a specified URI and provide status on an envelope and recipient.
This method will allow the client to register a callback URI and select which events to listen to.
[See eventNotification for details](https://www.docusign.com/p/RESTAPIGuide/RESTAPIGuide.htm#REST%20API%20References/Send%20an%20Envelope.htm%3FTocPath%3DREST%20API%20References%7CSend%20an%20Envelope%20or%20Create%20a%20Draft%20Envelope%7C_____0)
Key | Description
--- | ---
oauth_token | OAuth token generated via Docusign
account_id | Docusign account id
endpoint | Docusign endpoint (demo vs live)
api_version | Docusign api version (v1, v2)
email_template | components necessary to create outgoing email to signers:
| `subject`: subject of the email
| `blurb`: instruction blurb sent in the email to the recipient
***
### Create an envelope for sending or saving
Create the base Envelope object.
```
envelope = Hancock::Envelope.new
```
Create all the documents and add them to the base object. This has to be done because these need to be uploaded as part of the multi-part form.
```
document1 = Hancock::Document.new({
file: #<File:/tmp/whatever.pdf>,
# data: 'Base64 Encoded String', # required if no file, invalid if file
# name: 'whatever.pdf', # optional if file, defaults to basename
# extension: 'pdf', # optional if file, defaults to path extension
})
envelope.documents = [document1]
```
Create the recipients and tabs (sign here, date here, etc.) and add them via `#add_signature_request`.
**NOTE: Anchored tabs affect the entire envelope, not just a single document. Do not add the same anchored tab in multiple signature requests or you will see them stack in DocuSign (which is hard to even see, but you won't be able to submit because a bunch of fields are incomplete).**
```
recipient1 = Hancock::Recipient.new({
name: 'Owner 1',
email: 'whoever@whereever.com',
# id_check: true,
# routing_order: 1
})
tab1 = Hancock::AnchoredTab.new({
type: 'sign_here',
label: '{{recipient.name}} Signature',
coordinates: [2, 100]
# anchor_text: 'Owner 1 Signature', # defaults to label
})
tab2 = Hancock::Tab.new({
type: 'initial_here',
label: 'Absolutely Positioned Initials',
coordinates: [160, 400]
})
envelope.add_signature_request({
  recipient: recipient1,
  document: document1,
  tabs: [tab1, tab2]
})
```
Send or save the documents. Reload isn't necessary after `#send!` and `#save`.
```
envelope.send! # sends to DocuSign and sets status to "sent," which sends email
envelope.save # sends to DocuSign but sets status to "created," which makes it a draft
envelope.reload! # if envelope has identifier, requests envelope from DocuSign. Automatically done when 'send!' or 'save' is called
```
### Retrieve an envelope using a docusign envelope id
```
envelope = Hancock::Envelope.find(envelope_id)
```
Useful methods when you've found an envelope.
```
envelope.documents
envelope.recipients
envelope.status
```
### One call does it all
```
envelope = Hancock::Envelope.new({
documents: [document1, document2],
signature_requests: [
{
recipient: recipient1,
document: document1,
tabs: [tab1, tab2],
},
{
recipient: recipient1,
document: document2,
tabs: [tab1],
},
{
recipient: recipient2,
document: document2,
tabs: [tab1],
},
],
email: {
subject: 'Hello there',
blurb: 'Please sign this!'
}
})
```
### Full example
Check out `example/example.rb` for a full and complete example with anchored tabs, multiple documents, etc.
## Envelope class
```
Envelope.new(options = {})
```
Creates a new envelope. An optional hash can be passed in to initialize the envelope with the following keys:
key | description
--- | ---
documents | colleciton of Document objects
signature_requests | collection of signature request hashes:
| `recepient`: Recepient object
| `document`: Document object which should be signed by recpient
| `tabs`: Tab objects for signature by recpient in the documnet
email | email hash:
| `subject`; subject of email to send
| `blurb`: email blurb
```ruby
Envelope.find(envelope_id)
```
Retrieves envelope information from DocuSign
```ruby
Envelope#documents
```
Returns the list of documents in the envelope.
```ruby
Envelope#recipients
```
Returns the list of recipients for the envelope.
```ruby
envelope.add_signature_request({
recipient: recipient1,
document: document1,
tabs: [tab1, tab2]
})
```
Adds a signature request to the envelope. A signature request is a hash with the following keys:
key | description
--- | ---
recipient | Recipient object
document | Document object which should be signed by the recipient
tabs | Tab objects for signature by the recipient in the document
```ruby
Envelope#send!
```
Submits the envelope to DocuSign for signatures and sets the `sent` status. Once sent, the envelope is
populated with DocuSign envelope information (similar to `#reload!`).
```ruby
Envelope#save
```
Submits the envelope to DocuSign for signatures and sets the `created` status, which makes it a draft. Once saved, the envelope is
populated with DocuSign envelope information (similar to `#reload!`).
```ruby
Envelope#reload!
```
If the envelope has a DocuSign identifier, requests the envelope information from DocuSign.
```ruby
Envelope#identifier
Envelope#status
```
Upon submittal to DocuSign, these readers will contain the DocuSign envelope ID, status, and other envelope information.
Document class
---
```ruby
document1 = Hancock::Document.new({
file: #<File:/tmp/whatever.pdf>,
# data: 'Base64 Encoded String', # required if no file, invalid if file
# name: 'whatever.pdf', # optional if file, defaults to basename
# extension: 'pdf', # optional if file, defaults to path extension
})
```
A Document object can be created using a `File` object or by providing the actual content through `data`, `name`, and `extension` to describe the data.
argument | description
--- | ---
file | (File object) contains the data; must respond to `basename` and `extension`.
data | (string, required if `file` is missing) Base64 encoded string.
name | (string) filename `bob_hope_contract`
extension | (string [default: pdf]) file extension `docx | pdf..`
Recipient class
---
```ruby
recipient1 = Hancock::Recipient.new({
name: 'Owner 1',
email: 'whoever@whereever.com',
# id_check: true,
# routing_order: 1
})
```
Key | Description
--- | ---
name | (string) Name of signer
email | (string) Email address of signer
id_check | (boolean [default: true]) true to enable [ID Check functionality](http://www.docusign.com/partner/docusign-id-check-powered-by-lexisnexis-risk-solutions)
routing_order | (integer [default 1]) routing order of the recipient in the envelope. If missing, all recipients have the same routing order
***
2. Process Event Notification payload
-----
When an event notification callback URL is specified, DocuSign will post envelope and recipient events to the specified URL.
The payload is XML specified by the following [RecipientStatus and EnvelopeStatus](https://github.com/docusign/DocuSign-eSignature-SDK/blob/ae414f6abd81bb0c6629ef49c2a880026b3b3899/MS.NET/CodeSnippets/CodeSnippets/Service%20References/DocuSignWeb/api.wsdl)
EnvelopeStatus class
----
```ruby
EnvelopeStatus.new(xml)
```
This will extract the recipient events information from the xml and provide readers to the information.
For example:
```ruby
#recipient_statuses: collection of RecepientStatus objects
#documents: collection of Document objects as attachments (with their statuses)
```
RecepientStatus class
----
```ruby
RecepientStatus.new(xml)
```
Similarly, it exposes the information as readers on the object.
<!-- When adding an entry to the Changelog:
- Please follow the Keep a Changelog: http://keepachangelog.com/ guidelines.
- Please insert your changelog line ordered by PR ID.
- Make sure you add your entry to the correct section (schema or tooling).
Thanks, you're awesome :-) -->
## Unreleased
### Schema Changes
#### Breaking changes
#### Bugfixes
#### Added
* Added `event.category` "registry". #1040
* Added `event.category` "session". #1049
* Added `os.type`. #1111
#### Improvements
#### Deprecated
### Tooling and Artifact Changes
#### Breaking changes
#### Bugfixes
#### Added
* Added ability to supply free-form usage documentation per fieldset. #988
* Added the `path` key when type is `alias`, to support the [alias field type](https://www.elastic.co/guide/en/elasticsearch/reference/current/alias.html). #877
* Added support for `scaled_float`'s mandatory parameter `scaling_factor`. #1042
* Added ability for --oss flag to fall back `constant_keyword` to `keyword`. #1046
* Added support in the generated Go source for `wildcard`, `version`, and `constant_keyword` data types. #1050
* Added support for marking fields, field sets, or field reuse as beta in the documentation. #1051
* Added support for `constant_keyword`'s optional parameter `value`. #1112
#### Improvements
#### Deprecated
<!-- All empty sections:
## Unreleased
### Schema Changes
### Tooling and Artifact Changes
#### Breaking changes
#### Bugfixes
#### Added
#### Improvements
#### Deprecated
-->
[source: pages/join.md · mateoclarke/lmda-eleventy-netlify-cms @ 79537ebcdaf8a7c849085e929d0c1052892d7b2e · MIT]
---
title: "Join us"
permalink: "/join/index.html"
layout: "layouts/contact.njk"
---
We're always growing our band with people who want to:
- Play brass or drums at all skill levels
- Lead chants as capos
- Wave flags and hold banners during performances
- Write and suggest songs and chants

For more information about how to join La Murga, performance requests, media inquiries, or any general question, you can send us a message on:
- [Email](mailto:info@lamurgadeaustin.org)
- [Twitter](https://www.twitter.com/lamurgaatx/)
- [Facebook](https://www.facebook.com/lamurgaatx/)
- [Instagram](https://www.instagram.com/lamurgaatx/)
[source: README.md · Cynosphere/Ears @ 99ad30a96e131996a9fa3f1a69c21982ce9c2c49 · MIT]
<p align="center">
<img src="https://unascribed.com/ears-banner.png?v=2" alt="Ears" width="512"/>
<h3 align="center">Faithful fancy fashion features for fuzzy folk.</h3>
</p>
Ears is a player model customization mod available for a dizzying number of Minecraft versions.
Get it and/or learn more at [CurseForge](https://www.curseforge.com/minecraft/mc-mods/ears), [Modrinth](https://modrinth.com/mod/ears),
or [Glass Repo](https://glass-repo.net/repo/mod/ears).
Check out the [Manipulator](https://unascribed.com/ears)!
**Mappings Notice**: Ears platform ports use a variety of mappings, including Plasma, Yarn, MCP, and Mojmap.
References to these mappings are made even in common code. *Viewer discretion is advised.*
## Using the API

Ears provides an API (identical for all ports) that allows forcing Ears features not to render, or changing whether Ears thinks the player is wearing certain kinds of equipment, has elytra equipped, etc.
You can add it to your mod like so (this example is for Fabric, but it's similar for Forge):
```gradle
repositories {
maven {
url "https://repo.unascribed.com"
content {
includeGroup "com.unascribed"
}
}
}
dependencies {
modImplementation "com.unascribed:ears-api:1.4.1"
}
```
You can see examples of usage of both current APIs in real code in [Fabrication](https://github.com/unascribed/Fabrication/blob/1.17/src/main/java/com/unascribed/fabrication/features/FeatureHideArmor.java#L62) and [Yttr](https://github.com/unascribed/Yttr/blob/trunk/src/main/java/com/unascribed/yttr/compat/EarsCompat.java). Fabrication uses a state overrider to add support for its /hidearmor system, and Yttr uses the inhibitor system to force things not to render when the diving suit is worn.
[source: blog/2020-08-25-questitto.md · crusherdev/documentation @ b5b3c77d3120a328777e6562390d2a1d0b985cdd · Apache-2.0]
---
title: Light-weight, blazing fast stack for your IoT application
author: Shan Desai
author_title: Crusher Contributor
author_url: https://github.com/shantanoo-desai
author_image_url: https://avatars.githubusercontent.com/shantanoo-desai
description:
Create a simple IoT stack with Mosquitto MQTT Broker, Telegraf and Crusher.
tags: [iot, docker, community-written]
---
Note: I wanted you to know that this post is written by one of our contributors,
Shan Desai. Shan is a research scientist working for the Bremen Institute for
Production and Logistics ([BIBA](http://www.biba.uni-bremen.de/)). His work
involves the use of IoT devices in order to improve product tracking and
transparency in a B2B marketplace. You can find more details on
[Shan's personal website](https://shantanoo-desai.github.io/).
Thanks a lot for your contribution Shan!
<!--truncate-->
## Overview
> Crusher is the fastest open-source Time-Series Database out there in terms of
> performance.
The developers were kind enough to welcome me into their community and I wanted
to make things easier for people trying things out with Crusher.
Lo and behold: [Questitto][1], an _out-of-the-box_ repository for your initial
IoT applications. The repository is an adapted version of my repository
[tiguitto][2], which helps users deploy the widely used **TIG (Telegraf,
InfluxDB, Grafana) + Mosquitto MQTT Broker** stack in no time.
## Motivation
I have really been looking forward to using `SQL` queries with time-series
databases, and `Crusher` provides this functionality as well as some cool new
features like [Dynamic Timestamping](/docs/reference/function/timestamp/).
Not to mention, my staple
[InfluxDB's line Protocol](/docs/reference/api/influxdb/) is supported via
sockets too!
## Stack
`questitto` currently comes with basic user authentication support for the
Mosquitto MQTT broker. The broker allows only specific users to publish and
subscribe, hence reducing misuse. Telegraf receives the incoming data by
subscribing to the MQTT broker and pushes it to Crusher.
To make it easy to deploy, the stack is deployable via `docker`, and
configuration is made simple through text files (the MQTT broker's users) and an
environment file (for Telegraf).
### Setup
Clone the repository:
```bash
git clone https://github.com/shantanoo-desai/questitto.git && cd questitto/
```
Your Directory structure should look like:
```bash
├── docker-compose.yml
├── LICENSE
├── mosquitto
│ ├── config
│ │ ├── mosquitto.conf
│ │ └── passwd
│ └── data
├── questitto.env
├── README.md
└── telegraf
└── telegraf.conf
```
Some brief information on the files:
- `mosquitto/config/passwd`: file that has the usernames and passwords necessary
for publishing/subscribing to the MQTT broker
- `questitto.env`: environment variable file used by `telegraf` container to
subscribe to the MQTT Broker for data ingestion
- `telegraf/telegraf.conf`: TOML Configuration file for letting `telegraf` do
the heavy lifting and inserting the data into Crusher
### User Management for Mosquitto MQTT Broker
In the repository there are two users added by default (see
`mosquitto/config/passwd` file):
```
pubclient:questitto
subclient:questitto
```
You can use the `pubclient` credential on your IoT Devices / MQTT Client to
publish information to the Broker. Similarly, `subclient` credential will be
used by `telegraf` or any other user of the stack in order to subscribe to the
incoming data. Feel free to change the passwords for the usernames or add more
credentials according to your needs. The format for the credential entries is as
follows (in plain text):
```
username1:password1
username2:password2
```
:::note
The Mosquitto broker requires the credentials to be encrypted; if you bring the
stack up without encrypting the passwords, the broker container will fail to
start.
:::
Let's encrypt the passwords using the following command:
```bash
# assuming your current directory is questitto
docker run -it --rm -v $(pwd)/mosquitto/config:/mosquitto/config eclipse-mosquitto mosquitto_passwd -U /mosquitto/config/passwd
```
The command does not return anything; after executing it, check the
`mosquitto/config/passwd` file using:
```bash
cat mosquitto/config/passwd
```
### Input Data Format + MQTT Topic Design
> For IoT applications, let the higher components in the stack (i.e. `telegraf`
> and `mosquitto`) do the heavy lifting, and keep the payload and topics very
> simple.
As an example the MQTT Topics are selected as follows:
```
IOT/<SensorID>/<measurement_name>
```
If your IoT sensor publishes temperature data, you can publish it to a
topic like:
```
IOT/sensor1/temp
```
with the payload in **InfluxDB line protocol string**:
```
environment,type=BME280 temp=23.9
```
We then let `telegraf` translate the location of `sensor1` for us using the
`processors` plugin and the MQTT topic itself.
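As an illustration, a line protocol string like the payload above can be assembled from a measurement name, tags, and fields. The following is a hypothetical Python sketch (not part of the questitto stack); note that real line protocol also needs escaping of spaces and commas, and supports timestamps, which this sketch omits:

```python
# Hypothetical sketch (not part of questitto): build a minimal InfluxDB
# line protocol string from a measurement name, tags, and fields.
# Escaping of spaces/commas and timestamps are omitted for brevity.
def line_protocol(measurement, tags, fields):
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str}"

print(line_protocol("environment", {"type": "BME280"}, {"temp": 23.9}))
# environment,type=BME280 temp=23.9
```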
### Telegraf Configuration
`telegraf` subscribes to the MQTT Broker using the `subclient` credential
mentioned above.
:::note
If you change the user credentials, make sure to encrypt the password and change
the `questitto.env` file with the actual credentials for `telegraf`
:::
Let's look at how `telegraf` can add our sensor's location for us.
We use the `inputs.mqtt_consumer` plugin to connect to our broker and subscribe
to it via the credentials in the `.env` file:
```toml
[[inputs.mqtt_consumer]]
servers = [ "tcp://mosquitto:1883" ]
# Topics to subscribe to:
topics = [
"IOT/+/acc",
"IOT/+/mag",
"IOT/+/gyro",
"IOT/+/temp"
]
# Telegraf will also store the topic as a tag with name `topic`
# NOTE: necessary for the Processor REGEX to extract <Sensor_ID>
topic_tag = "topic"
username = "${TG_MOSQUITTO_USERNAME}"
password = "${TG_MOSQUITTO_PASSWORD}"
# Connection timeout
connection_timeout = "30s"
# Incoming MQTT Payload from Sensor nodes is in InfluxDB line protocol strings
data_format = "influx"
```
We store the MQTT topic as a tag called `topic`, and now leverage it for some
regular-expression and enumeration magic, as follows:
```toml
[[processors.regex]]
order = 1
[[processors.regex.tags]]
# use the `topic` tag to extract information from the MQTT Topic
key = "topic"
# Topic: IOT/<SENSOR_ID>/<measurement>
# Extract <SENSOR_ID>
pattern = ".*/(.*)/.*"
# Replace the first occurrence
replacement = "${1}"
# Store it in tag called:
result_key = "sensorID"
[[processors.enum]]
order = 2
[[processors.enum.mapping]]
# create a mapping between extracted sensorID and some meta-data
tag = "sensorID"
dest = "location"
[processors.enum.mapping.value_mappings]
"sensor1" = "kitchen"
"sensor2" = "livingroom"
```
Based on our MQTT Topic design we know that the `SensorID` will be on the second
level i.e. `IOT/(.*)/#`.
We perform the Regular Expression to extract the sensor's ID and use `enum` to
map it to its dedicated location:
```
sensor1 --> kitchen
sensor2 --> livingroom
```
The location will be stored as a `tag` called `location`.
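Outside of Telegraf, the same extraction and mapping can be sketched in a few lines of Python; this is a hypothetical illustration of what the two processors do, not code from the stack:

```python
import re

# Hypothetical sketch of what the two Telegraf processors above do:
# extract the sensor ID from the MQTT topic, then map it to a location.
LOCATIONS = {"sensor1": "kitchen", "sensor2": "livingroom"}

def enrich(topic):
    # Topic shape: IOT/<SENSOR_ID>/<measurement>
    sensor_id = re.sub(r".*/(.*)/.*", r"\1", topic)
    return sensor_id, LOCATIONS.get(sensor_id)

print(enrich("IOT/sensor1/acc"))
# ('sensor1', 'kitchen')
```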
### Data Insertion to Crusher
```toml
[[outputs.socket_writer]]
address = "tcp://questdb:9009"
```
will send the line protocol string to port 9009 of the `questdb` container, and
you don't even need to define a schema beforehand!
### Visualize It!
Crusher comes with its own cool UI available on `http://<IP_address>:9000`
## Example
Get the Stack up:
```bash
docker-compose up -d
```
As a simple Example I used [MQTT.fx][3] as a client to publish information in
line Protocol to the following Topic:
```json
{
"topic": "IOT/sensor1/acc",
"payload": [
    "acceleration,type=BNO055 x=2.3,y=3.2,z=0.01",
    "acceleration,type=BNO055 x=2.3,y=3.2,z=0.01",
    "acceleration,type=BNO055 x=2.3,y=3.2,z=0.02"
]
}
```
with the `pubclient:questitto` credentials and on the Crusher UI you can see:

With the `location` and other `tags` from the line protocol inserted:

A simple query where I would like to know the acceleration value in the
`kitchen` for the **X-axis** is as simple as:
```questdb-sql
SELECT timestamp, x FROM acceleration
WHERE location = 'kitchen';
```
## Nuggets
If you need to add or remove users, or adapt `telegraf.conf`, without bringing
down the stack or the services within `questitto`, simply send the `SIGHUP`
signal to the containers.
```bash
docker kill --signal=SIGHUP mosquitto
# OR
docker kill --signal=SIGHUP telegraf
```
See [my blog post][4] for a detailed write up.
## Repository
You can find the [repository on GitHub][1]. Please feel free to open Issues/PRs
and [join]({@slackUrl@}) the Slack Community, the developers are really helpful
there!
[1]: https://github.com/shantanoo-desai/questitto
[2]: https://github.com/shantanoo-desai/tiguitto
[3]: https://mqttfx.org
[4]: https://shantanoo-desai.github.io/posts/technology/nugget_mqtt_iot/
[source: scripting-docs/javascript/misc/javascript-object-expected.md · adrianodaddiego/visualstudio-docs.it-it @ b2651996706dc5cb353807f8448efba9f24df130 · CC-BY-4.0, MIT]
---
title: JavaScript object expected | Microsoft Docs
ms.date: 01/18/2017
ms.prod: visual-studio-windows
ms.technology: vs-javascript
ms.topic: reference
f1_keywords:
- VS.WebClient.Help.SCRIPT5014
dev_langs:
- JavaScript
- TypeScript
- DHTML
ms.assetid: cc7cc32b-e444-4afa-9be1-802c83fdf5ae
author: mikejo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: ceaae323c974a1f41b6f5bd2a3ca093ef7c0b2d9
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "63007435"
---
# <a name="javascript-object-expected"></a>JavaScript object expected
You attempted to pass a non-[!INCLUDE[javascript](../../javascript/includes/javascript-md.md)] object to a built-in function that expects a [!INCLUDE[javascript](../../javascript/includes/javascript-md.md)] object. Some built-in functions require objects defined in [!INCLUDE[javascript](../../javascript/includes/javascript-md.md)] (rather than objects defined by the host or by an external component, such as a control).
### <a name="to-correct-this-error"></a>To correct this error
- Make sure that the object you are passing as a parameter is of the correct type.
## <a name="see-also"></a>See also
[Objects and Arrays](../../javascript/objects-and-arrays-javascript.md)
[Using Arrays](../../javascript/advanced/using-arrays-javascript.md)
[source: _posts/2020-01-01-project-5.markdown · jenyeeiam/travel_egg @ c262696402efcb169fb44bc5f06c6a42d15de1df · Apache-2.0]
---
title: Travel Insurance
subtitle: Do I Need It? (Yes)
layout: default
modal-id: 5
date: 2020-01-01
img: escape.png
thumbnail: insurance-thumbnail.png
author: Jen
alt: insurance
project-date: January 1, 2020
category: Web Development
description: Getting sick on vacation is never a good time. Even while being extra diligent with where you're eating, you can still catch a case of the stomach bug.
---
Food poisoning is the most common ailment when it comes to traveling. Even if you take the necessary precautions, such as avoiding street vendors, taking drinks without ice, and drinking bottled water, you're not immune to getting sick.
Most of the time, a case of food poisoning lasts a good 48 hours. You're stuck in your hotel room drinking sports drinks and eating jellies, and then you're out of the woods. But what happens when you're worse off than a mild case of food poisoning?
I always recommend having travel insurance when traveling for a **week or more**. Even if you are from Canada and are traveling to the US, an ER visit could cost you a couple thousand dollars! So it's definitely worth paying the few dollars per day for supplementary coverage.
Before you buy travel insurance, check with your current health insurance plan, as you might already be covered for some things. The same goes for the credit card you booked your trip with. These types of coverage will most likely cover the cost if you need to be admitted to the hospital or your bag is lost while traveling. If you need a prescription for your stomach bug, that's probably not covered, but always read the fine print or call your provider.
If you choose to buy supplementary coverage, try to choose a plan that covers the things your credit card does not. There are many options available on the internet, so I won't list them here, but the great thing about these plans is that they are more comprehensive. For example, if your trip gets cut short due to unforeseen circumstances like sickness, a family emergency, or weather, these plans could reimburse you for the remaining cost of your trip. They will also cover prescriptions, lost luggage, or the case where your travel supplier files for bankruptcy.
In most cases the bare minimum coverage is sufficient. The point is that travel insurance is important, and it shouldn't be overlooked. Choose the plan that's right for you and let yourself relax and have fun on your trip!
[source: Design/Design.md · mems/calepin @ 6e0554aafead895638785a270afb947062271c94 · MIT]
- text contrast readability [How light can you go?](http://jxnblk.com/grays/) and [Contrast](http://mrmrs.io/contrast/)
- [When Information Design is a Matter of Life or Death « Boxes and Arrows](http://boxesandarrows.com/when-information-design-is-a-matter-of-life-or-death/)
-  [Can image transparency be calculated automatically from multiple non-transparent samples? - Graphic Design Stack Exchange](http://graphicdesign.stackexchange.com/questions/31337/can-image-transparency-be-calculated-automatically-from-multiple-non-transparent)
- [Design for developers.pdf](Design%20for%20developers.pdf)
- [DesigningMeaning.pdf](DesigningMeaning.pdf)
- [Pixel Perfect Precision™.pdf](Pixel%20Perfect%20Precision™.pdf)
- [Responsive Design Workflow.pdf](Responsive%20Design%20Workflow.pdf)
- [LukeW | Multi-Device Layout Patterns](http://www.lukew.com/ff/entry.asp?1514)
- [The-Design-of-Everyday-Things-Revised-and-Expanded-Edition.pdf](The-Design-of-Everyday-Things-Revised-and-Expanded-Edition.pdf)
- [Design Elements and Principles - Tips and Inspiration By Canva](https://www.canva.com/learn/design-elements-principles/)
## Print
Supporting white ("blanc de soutien"): a layer with a "White" spot color
- http://www.copytop.com/sites/all/pdf/preparation-fichiers-decoupe-grand-format.pdf
Layer with a "CutContour" spot color for cutting in Caldera or Versaworks
"CutContour"
For stickers, the cut paths "CutContourMi" (kiss-cut) and "CutContourPlein" (full cut)
Recto: front "on the right side of the page"
Verso: back "on the turned side of the page"
Crop marks, fold marks and perforation marks
Fold mark. Creasing ("rainage") if the paper is > 170 g/m² (otherwise the paper cracks at the fold; the deformation/"bump" must be on the inside of the fold). **Check whether the crease should be indicated on the inside or the outside**
Note: remove ~1-2 mm on an inner card to leave room for the fold
Trim area ("zone utile"): the document area
Bleed ("fond perdu"): area outside the trim area that provides a margin for cutting, so that no white edge appears if the cut is not exactly where intended
Slug ("ligne de bloc"): outside the bleed; used to add information for the print shop or the press
In some applications, to add elements in the slug area you have to enlarge the bleed area
https://helpx.adobe.com/fr/indesign/using/spot-process-colors.html
- [Qu'est ce qu'un blanc de soutien ? Comment préparer vos fichiers ?](http://sprint-for-print.com/faq/quest-ce-quun-blanc-de-soutien-comment-preparer-vos-fichiers/203)
- [guide-pao.pdf](guide-pao.pdf)
- [Format-papier_Prevot.pdf](Format-papier_Prevot.pdf)
- [pliage-Prevot.pdf](pliage-Prevot.pdf)
- [Techniques d'impression - Pascal Prévôt, Fabien Rocher - Librairie Eyrolles](http://www.eyrolles.com/Audiovisuel/Livre/techniques-d-impression-9782212117974)
- [Fond Perdu et Ligne de Bloc | Graph'Imprim – Impression Offset](http://blog.graph-imprim.com/22/fond-perdu-et-ligne-de-bloc/)
- [Comment choisir - Iggesund](https://www.iggesund.com/fr/knowledge/graphics-handbook/comment-choisir/)
- [Finition - Iggesund](https://www.iggesund.com/fr/knowledge/graphics-handbook/finishing/make-the-most-of-your-finishing-options/)
- [Impression - Iggesund](https://www.iggesund.com/fr/knowledge/graphics-handbook/printing/printing-and-offset-lithography/)
## Optical adjustment
Aka visually centered, physics center, optical alignement, compositional balance
Use "center of mass" / centroid


- [Optical alignement](Graphics#optical-alignement)
- [Optical Adjustment - Logic vs. Designers - Marvel - Marvel Blog](https://blog.marvelapp.com/optical-adjustment-logic-vs-designers/) - [Optical Adjustment – Medium](https://medium.com/@lukejones/optical-adjustment-b55492a1165c) (same content)
- [Golden Ratio in UI design - Marvel Blog](https://blog.marvelapp.com/golden-ratio-ui-design/)
- [The Art of Eyeballing – Part III: Overshooting | Learn – Scannerlicker!](http://learn.scannerlicker.net/2014/09/03/the-art-of-eyeballing-part-3-overshooting/)
- [The Art Of Eyeballing – Part IV: The Stroke (Optics) | Learn – Scannerlicker!](http://learn.scannerlicker.net/2014/10/25/the-art-of-eyeballing-iv-the-stroke-optics/)
- [19 Factors That Impact Compositional Balance - Vanseo Design](http://vanseodesign.com/web-design/visual-balance/)
- [Rule of thirds — Wikipedia](https://en.wikipedia.org/wiki/Rule_of_thirds)
- [Golden ratio — Wikipedia](https://en.wikipedia.org/wiki/Golden_ratio#Aesthetics)
- [10 Top Photography Composition Rules | Photography Mad](http://www.photographymad.com/pages/view/10-top-photography-composition-rules)
- [20 Composition Techniques That Will Improve Your Photos](https://web.archive.org/web/20201216011012/https://petapixel.com/2016/09/14/20-composition-techniques-will-improve-photos/)
- [Optically](https://web.archive.org/web/20201023060446/https://gumroad.com/l/optically)
- [Dan Paquette • Automatically Resizing a List of Icons or Logos So They're Visually Proportional](https://web.archive.org/web/20201108025907/https://danpaquette.net/read/automatically-resizing-a-list-of-icons-or-logos-so-theyre-visually-proportional/)
## Typography
- [Typography Handbook](http://typographyhandbook.com/)
- [Why Typography Matters — Especially At The Oscars – benjamin bannister – Medium](https://medium.com/@benjaminbannister/why-typography-matters-especially-at-the-oscars-f7b00e202f22)
- [Typography is impossible](https://medium.engineering/typography-is-impossible-5872b0c7f891)
- type doesn’t like to be cropped
- type doesn’t like to be measured
- type doesn’t like to stand still
- type doesn’t know any limits
- [The Equilateral Triangle of a Perfect Paragraph | CSS-Tricks](https://css-tricks.com/equilateral-triangle-perfect-paragraph/) - equation between type size, line-height and line width
## Pixel art
- [Make Games - Pixel Art Tutorial](http://makegames.tumblr.com/post/42648699708/pixel-art-tutorial)
## Data visualisation
Aka dataviz
- [45 Ways to Communicate Two Quantities - ScribbleLive](http://www.scribblelive.com/blog/2012/07/27/45-ways-to-communicate-two-quantities/)
- [THE ALLUVIAL VALLEY OF THE LOWER MISSISSIPPI RIVER - Harold Fisk, 1944](http://www.radicalcartography.net/index.html?fisk), [Mississippi Meanders : Image of the Day](http://earthobservatory.nasa.gov/IOTD/view.php?id=6887), [Lower Mississippi Valley - Engineering Geology Mapping Program](http://lmvmapping.erdc.usace.army.mil/index.htm) (see "Fisk 44 Oversized Plates")
- [Mastering Multi-hued Color Scales with Chroma.js | vis4.net](https://vis4.net/blog/posts/mastering-multi-hued-color-scales/)
- [Blog - Visual Cinnamon](http://www.visualcinnamon.com/blog)
- [50 Great Examples of Data Visualization | Webdesigner Depot](http://www.webdesignerdepot.com/2009/06/50-great-examples-of-data-visualization/)
- [Data Visualization: Modern Approaches – Smashing Magazine](https://www.smashingmagazine.com/2007/08/data-visualization-modern-approaches/)
- [Data Visualization and Infographics – Smashing Magazine](https://www.smashingmagazine.com/2008/01/monday-inspiration-data-visualization-and-infographics/)
- [Density Design | Flickr](https://www.flickr.com/photos/densitydesign/)
- [Flare | Data Visualization for the Web](http://flare.prefuse.org/)
- [information aesthetics - Data Visualization & Information Design](http://infosthetics.com/)
- [InfoVis 2007 DataVisulaization Contest (that we didn't enter)](http://www.pitchinteractive.com/infovis/abstract.html)
- [Matthias Dittrich |interaction design portfolio | Narratives 2.0](http://www.matthiasdittrich.com/projekte/narratives/visualisation/)
- [The Data Visualisation Catalogue](http://www.datavizcatalogue.com/)
- [Eigenfactor: Revealing the Structure of Science](http://well-formed.eigenfactor.org/)
- [visualcomplexity.com | A visual exploration on mapping complex networks](http://www.visualcomplexity.com/vc/)
- [Visualization and Behavior Group - IBM](http://wayback.archive.org/web/20160421003406/http://researcher.watson.ibm.com/researcher/view_group.php?id=3419)
- [12 Cool Visualizations to Explore Books | FlowingData](http://flowingdata.com/2008/06/12/12-cool-visualizations-to-explore-books/)
- [The Best Tools for Visualization - ReadWrite](http://readwrite.com/2008/03/13/the_best_tools_for_visualization/)
- [Before & After: 6 Lessons From Mary Meeker's Presentation Makeover](https://blog.hubspot.com/marketing/mary-meeker-ugly-presentation-redesign) see [Internet Trends 2014 - Redesigned](kpcbinternettrends2014redesigned-slideshareversion-140530052726-phpapp01.pdf) [Internet Trends 2014 - Redesigned](http://fr.slideshare.net/EmilandDC/kpcb-internet-trends-2014-redesigned-slideshare-version)
- [Tabletop Whale](http://tabletopwhale.com/)
- https://www.scmp.com/sites/default/files/2015/11/25/languageshqscmp.png - [INFOGRAPHIC: A world of languages - and how many speak them | South China Morning Post](http://www.scmp.com/infographics/article/1810040/infographic-world-languages?page=all) [Mother tongues - www.lucasinfografia.com](http://www.lucasinfografia.com/Mother-tongues)
- [Mapping the Shadows of New York City: Every Building, Every Block - The New York Times](https://www.nytimes.com/interactive/2016/12/21/upshot/Mapping-the-Shadows-of-New-York-City.html)
- [Engineering Uber's Self-Driving Car Visualization Platform for the Web](https://eng.uber.com/atg-dataviz/)
- [A father knitted his baby’s first year of sleep pattern data into a blanket - The Verge](https://www.theverge.com/2019/7/21/20699484/sleep-blanket-data-visualisation-seung-lee) - [Seung Lee on Twitter: "The Sleep Blanket A visualization of my son's sleep pattern from birth to his first birthday. Crochet border surrounding a double knit body. Each row represents a single day. Each stitch represents 6 minutes of time spent awake or asleep #knitting #crochet #datavisualization https://t.co/xwBh7vIilJ" / Twitter](https://twitter.com/Lagomorpho/status/1149754592579600384)


### Sorting algorithms
- [Sorting Algorithm Animations | Toptal](https://www.toptal.com/developers/sorting-algorithms)
- [Visualization and comparison of sorting algorithms](https://github.com/vbohush/SortingAlgorithmAnimations)
- [Nihilogic : Canvas Visualizations of Sorting Algorithms](http://wayback.archive.org/web/20140703060111/http://www.nihilogic.dk/labs/sorting_visualization/) and [Canvas Visualizations of Sorting Algorithms - Nihilogic](http://wayback.archive.org/web/20140819210828/http://blog.nihilogic.dk/2009/04/canvas-visualizations-of-sorting.html)
- [The Sound of Sorting - "Audibilization" and Visualization of Sorting Algorithms](https://github.com/bingmann/sound-of-sorting) and [The Sound of Sorting - "Audibilization" and Visualization of Sorting Algorithms - panthema.net](http://panthema.net/2013/sound-of-sorting/)
- [16 Sorts - Color Circle - YouTube](https://www.youtube.com/watch?v=y9Ecb43qw98)
- [Sorting Algorithms Revisualized - Album on Imgur](https://imgur.com/gallery/GD5gi)
- [Nihilogic : Canvas Visualizations of Sorting Algorithms](https://web.archive.org/web/20140703060111/http://www.nihilogic.dk/labs/sorting_visualization/)
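The core idea behind most of these visualizations is simply rendering the array after every mutation. A minimal, hypothetical terminal sketch of that idea (insertion sort, with ASCII bars recorded after each swap):

```python
# Minimal hypothetical sketch of a sorting visualization:
# run an insertion sort and record the array as ASCII bars after each swap.
def insertion_sort_frames(data):
    a = list(data)
    frames = []
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]  # one swap = one rendered frame
            j -= 1
            frames.append(["#" * v for v in a])
    return a, frames

result, frames = insertion_sort_frames([3, 1, 2])
for frame in frames:
    print(" | ".join(frame))
```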
### ASCII to SVG diagram
- [Converts ascii scribbles to svg](https://github.com/ivanceras/svgbob)
- [Ascii to SVG](https://ivanceras.github.io/svgbob/build/)
## Table
- [Design Better Data Tables – Mission Log – Medium](https://medium.com/mission-log/design-better-data-tables-430a30a00d8c#.vfrsyeg4g)
## Colors
Palette
Use Viridis or Parula colormap for heatmap (accessible to colorblind users)
- [Accessibility](#accessibility)
- [How to Choose Colours Procedurally (Algorithms) | Dev.Mag](http://devmag.org.za/2012/07/29/how-to-choose-colours-procedurally-algorithms/)
- [A Better Default Colormap for Matplotlib | SciPy 2015 | Nathaniel Smith and Stéfan van der Walt - YouTube](https://www.youtube.com/watch?v=xAoljeRJ3lU)
- [Gnuplotting/gnuplot-palettes: Color palettes for gnuplot](https://github.com/Gnuplotting/gnuplot-palettes)
- [colormap/colormaps.py at master · BIDS/colormap](https://github.com/BIDS/colormap/blob/master/colormaps.py) - [colormap/fake_parula.py at master · BIDS/colormap](https://github.com/BIDS/colormap/blob/master/fake_parula.py) and [colormap/parula.py at master · BIDS/colormap](https://github.com/BIDS/colormap/blob/master/parula.py)
- [politiken-journalism/scale-color-perceptual: Javascript exports of matplotlib's new default color scales; magma, inferno, plasma and viridis. Works with browserify and D3.js](https://github.com/politiken-journalism/scale-color-perceptual)
- [bpostlethwaite/colormap: output rgb or hex colormaps](https://github.com/bpostlethwaite/colormap)
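One trick from the procedural-colour article linked above is stepping the hue by the golden-ratio conjugate, which spreads colours evenly around the wheel. A hedged Python sketch (stdlib only; the saturation/value defaults are my own assumptions, not from the article):

```python
import colorsys

# Hedged sketch: procedural palette via golden-ratio hue stepping,
# one of the tricks from the "choose colours procedurally" article above.
GOLDEN_RATIO_CONJUGATE = 0.618033988749895

def palette(n, saturation=0.5, value=0.95, start_hue=0.1):
    colors = []
    hue = start_hue
    for _ in range(n):
        r, g, b = colorsys.hsv_to_rgb(hue % 1.0, saturation, value)
        colors.append("#{:02x}{:02x}{:02x}".format(
            round(r * 255), round(g * 255), round(b * 255)))
        hue += GOLDEN_RATIO_CONJUGATE
    return colors

print(palette(5))
```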
## Design system
- [Design System Checklist](https://designsystemchecklist.com/)
- [Design Systems articles on building and maintaining design systems](https://www.designsystems.com/#design-systems-repo) - List of design system repositories
## Accessibility
- [ACCÉSCIBLE: Un guide pratique sur l’accessibilité en graphisme](https://www.ico-d.org/database/files/library/2010_Accessibility_Handbook_French_FINAL_s.pdf)
[source: docs/framework/unmanaged-api/diagnostics/isymunmanageddocument-getchecksumalgorithmid-method.md · meterpaffay/docs.de-de @ 1e51b03044794a06ba36bbc139a23b738ca9967a · CC-BY-4.0, MIT]
---
title: ISymUnmanagedDocument::GetCheckSumAlgorithmId Method
ms.date: 03/30/2017
api_name:
- ISymUnmanagedDocument.GetCheckSumAlgorithmId
api_location:
- diasymreader.dll
api_type:
- COM
f1_keywords:
- ISymUnmanagedDocument::GetCheckSumAlgorithmId
helpviewer_keywords:
- ISymUnmanagedDocument::GetCheckSumAlgorithmId method [.NET Framework debugging]
- GetCheckSumAlgorithmId method [.NET Framework debugging]
ms.assetid: c7f941cd-e25b-4b85-b1ce-5f77c9208fa9
topic_type:
- apiref
ms.openlocfilehash: a76435be591d9f73d5975c5315f6e744f8972fc7
ms.sourcegitcommit: 27db07ffb26f76912feefba7b884313547410db5
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/19/2020
ms.locfileid: "83614616"
---
# <a name="isymunmanageddocumentgetchecksumalgorithmid-method"></a>ISymUnmanagedDocument::GetCheckSumAlgorithmId Method
Gets the checksum algorithm identifier, or returns a GUID of all zeros if there is no checksum.
## <a name="syntax"></a>Syntax
```cpp
HRESULT GetCheckSumAlgorithmId(
[out, retval] GUID* pRetVal);
```
## <a name="parameters"></a>Parameters
 `pRetVal`
[out] A pointer to a variable that receives the checksum algorithm identifier.
## <a name="return-value"></a>Return Value
S_OK if the method succeeds.
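When no checksum is present, `GetCheckSumAlgorithmId` returns a GUID of all zeros, so client code typically checks for that case explicitly. The standalone C++ sketch below shows one way to do this; the `Guid` struct here is an illustrative stand-in for the Windows `GUID` type, and in real code the value would be filled in by `ISymUnmanagedDocument::GetCheckSumAlgorithmId` rather than constructed by hand:

```cpp
#include <cstdint>

// Stand-in for the Windows GUID structure (illustrative only; real code
// would use the GUID from <guiddef.h>, populated by a call to
// ISymUnmanagedDocument::GetCheckSumAlgorithmId).
struct Guid {
    uint32_t Data1;
    uint16_t Data2;
    uint16_t Data3;
    uint8_t  Data4[8];
};

// True when the GUID is all zeros, i.e. the document carries no checksum.
bool IsZeroGuid(const Guid& g) {
    if (g.Data1 != 0 || g.Data2 != 0 || g.Data3 != 0) {
        return false;
    }
    for (uint8_t b : g.Data4) {
        if (b != 0) {
            return false;
        }
    }
    return true;
}
```

In real code you would call `GetCheckSumAlgorithmId(&guid)` and treat an all-zeros result as "no checksum available" before attempting any checksum validation.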
## <a name="see-also"></a>See also
- [ISymUnmanagedDocument Interface](isymunmanageddocument-interface.md)
a6bce22410c3ffc730c7d2732b8806d969d4cc8c | 7,475 | md | Markdown | wdk-ddi-src/content/fltkernel/nf-fltkernel-fltattachvolume.md | tianye606/windows-driver-docs-ddi | 23fec97f3ed3a0c99b117543982d34ee592501e7 | ["CC-BY-4.0", "MIT"]
---
UID: NF:fltkernel.FltAttachVolume
title: FltAttachVolume function (fltkernel.h)
description: FltAttachVolume creates a new minifilter driver instance and attaches it to the given volume.
old-location: ifsk\fltattachvolume.htm
tech.root: ifsk
ms.assetid: da85c8d6-a74c-4a87-88b3-fb6dc01dd0f9
ms.date: 04/16/2018
ms.keywords: FltApiRef_a_to_d_f4ac8b0d-55c2-45b1-8f3b-3a09bee7bb23.xml, FltAttachVolume, FltAttachVolume function [Installable File System Drivers], fltkernel/FltAttachVolume, ifsk.fltattachvolume
ms.topic: function
f1_keywords:
- "fltkernel/FltAttachVolume"
req.header: fltkernel.h
req.include-header: Fltkernel.h
req.target-type: Universal
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: FltMgr.lib
req.dll:
req.irql: PASSIVE_LEVEL
topic_type:
- APIRef
- kbSyntax
api_type:
- LibDef
api_location:
- FltMgr.lib
- FltMgr.dll
api_name:
- FltAttachVolume
product:
- Windows
targetos: Windows
req.typenames:
---
# FltAttachVolume function
## -description
<b>FltAttachVolume</b> creates a new minifilter driver instance and attaches it to the given volume.
## -parameters
### -param Filter [in, out]
Opaque filter pointer for the caller. This parameter is required and cannot be <b>NULL</b>.
### -param Volume [in, out]
Opaque volume pointer for the volume that the minifilter driver instance is to be attached to. This parameter is required and cannot be <b>NULL</b>.
### -param InstanceName [in, optional]
Pointer to a <a href="https://docs.microsoft.com/windows/desktop/api/ntdef/ns-ntdef-_unicode_string">UNICODE_STRING</a> structure containing the instance name for the new instance. This parameter is optional and can be <b>NULL</b>. If it is <b>NULL</b>, <b>FltAttachVolume</b> attempts to read the minifilter driver's default instance name from the registry. (For more information about this parameter, see the following Remarks section.)
### -param RetInstance [out]
Pointer to a caller-allocated variable that receives an opaque instance pointer for the newly created instance. This parameter is optional and can be <b>NULL</b>.
## -returns
<b>FltAttachVolume</b> returns STATUS_SUCCESS or an appropriate NTSTATUS value such as one of the following:
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_FLT_DELETING_OBJECT</b></dt>
</dl>
</td>
<td width="60%">
The specified <i>Filter</i> or <i>Volume</i> is being torn down. This is an error code.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_FLT_FILTER_NOT_READY</b></dt>
</dl>
</td>
<td width="60%">
The minifilter driver has not started filtering. For more information, see <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltstartfiltering">FltStartFiltering</a>. This is an error code.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_FLT_INSTANCE_NAME_COLLISION</b></dt>
</dl>
</td>
<td width="60%">
An instance already exists with this name on the volume specified.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_INSUFFICIENT_RESOURCES</b></dt>
</dl>
</td>
<td width="60%">
<b>FltAttachVolume</b> encountered a pool allocation failure. This is an error code.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>STATUS_OBJECT_NAME_COLLISION</b></dt>
</dl>
</td>
<td width="60%">
Another instance was already attached at the altitude specified in the instance attributes that were read from the registry. This is an error code.
</td>
</tr>
</table>
## -remarks
If the caller specifies a non-<b>NULL</b> value for <i>InstanceName</i>, <b>FltAttachVolume</b> reads any instance attributes specified by the minifilter driver that are stored in the registry under HKLM\CurrentControlSet\Services\<i>ServiceName</i>\Instances\InstanceName, where <i>ServiceName</i> is the minifilter driver's service name. This service name is specified in the <a href="https://docs.microsoft.com/windows-hardware/drivers/install/inf-addservice-directive">AddService directive</a> in the <a href="https://docs.microsoft.com/windows-hardware/drivers/install/inf-defaultinstall-services-section">DefaultInstall.Services section</a> of the minifilter driver's INF file. (For more information about filter driver INF files, see <a href="https://docs.microsoft.com/windows-hardware/drivers/ifs/installing-a-file-system-filter-driver">Installing a File System Filter Driver</a>.)
If the caller does not specify a value for <i>InstanceName</i>, <b>FltAttachVolume</b> uses the name stored in the registry under HKLM\CurrentControlSet\Services\<i>ServiceName</i>\Instances\DefaultInstance for the <i>InstanceName</i> portion of the registry path.
The instance name specified in the <i>InstanceName</i> parameter is required to be unique across the system.
<b>FltAttachVolume</b> returns an opaque instance pointer for the new instance in <i>*RetInstance</i>. This pointer value uniquely identifies the minifilter driver instance and remains constant as long as the instance is attached to the volume.
<b>FltAttachVolume</b> adds a rundown reference to the opaque instance pointer returned in <i>*RetInstance</i>. When this pointer is no longer needed, the caller must release it by calling <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltobjectdereference">FltObjectDereference</a>. Thus every successful call to <b>FltAttachVolume</b> must be matched by a subsequent call to <b>FltObjectDereference</b>.
To attach a minifilter driver instance to a volume at a given altitude, call <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltattachvolumeataltitude">FltAttachVolumeAtAltitude</a>.
To compare the altitudes of two minifilter driver instances attached to the same volume, call <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltcompareinstancealtitudes">FltCompareInstanceAltitudes</a>.
To detach a minifilter driver instance from a volume, call <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltdetachvolume">FltDetachVolume</a>.
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltattachvolumeataltitude">FltAttachVolumeAtAltitude</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltcompareinstancealtitudes">FltCompareInstanceAltitudes</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltdetachvolume">FltDetachVolume</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltgetvolumeinstancefromname">FltGetVolumeInstanceFromName</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltobjectdereference">FltObjectDereference</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/fltkernel/nf-fltkernel-fltstartfiltering">FltStartFiltering</a>
<a href="https://docs.microsoft.com/windows/desktop/api/ntdef/ns-ntdef-_unicode_string">UNICODE_STRING</a>
a6bcea6ee103eaf357677ca1421dc60eb43a1f69 | 34 | md | Markdown | _includes/01-name.md | mivan000/markdown-portfolio | ff02a2ac9c32cf1a341088224363f25ef215d039 | ["MIT"]
# Welcome to @mivan000 Portfolio!
a6bcf3fdcbb61d12ab1d72e714a0f45e28aa4a05 | 25 | md | Markdown | README.md | mishrakeshav/restapi-with-typescript | 07d065e6351270f65bd96c1ec9e378eb55a34620 | ["MIT"]
# restapi-with-typescript
a6bd07d544904f29fdf26ba12c1e9cf4fc30900c | 1,941 | md | Markdown | README.md | davidroyer/nuxt-static | 2f81d3ec42fb98deddfe6d957d2737bcf8135184 | ["MIT"]
# nuxt-static
[![npm version][npm-version-src]][npm-version-href]
[![npm downloads][npm-downloads-src]][npm-downloads-href]
[![Circle CI][circle-ci-src]][circle-ci-href]
[![Codecov][codecov-src]][codecov-href]
[![Dependencies][david-dm-src]][david-dm-href]
[![Standard JS][standard-js-src]][standard-js-href]
> Nuxt.js module that converts markdown files to JSON, which can then be used to create a static site
[📖 **Release Notes**](./CHANGELOG.md)
## Setup
1. Add the `nuxt-static` dependency with `yarn` or `npm` to your project
2. Add `nuxt-static` to the `modules` section of `nuxt.config.js`
3. Configure it:
```js
{
modules: [
// Simple usage
'nuxt-static',
// With options
['nuxt-static', { /* module options */ }],
]
}
```
## Development
1. Clone this repository
2. Install dependencies using `yarn install` or `npm install`
3. Start development server using `npm run dev`
## License
[MIT License](./LICENSE)
Copyright (c) David Royer <droyer01@gmail.com>
<!-- Badges -->
[npm-version-src]: https://img.shields.io/npm/dt/nuxt-static.svg?style=flat-square
[npm-version-href]: https://npmjs.com/package/nuxt-static
[npm-downloads-src]: https://img.shields.io/npm/v/nuxt-static/latest.svg?style=flat-square
[npm-downloads-href]: https://npmjs.com/package/nuxt-static
[circle-ci-src]: https://img.shields.io/circleci/project/github/davidroyer/nuxt-static.svg?style=flat-square
[circle-ci-href]: https://circleci.com/gh/davidroyer/nuxt-static
[codecov-src]: https://img.shields.io/codecov/c/github/davidroyer/nuxt-static.svg?style=flat-square
[codecov-href]: https://codecov.io/gh/davidroyer/nuxt-static
[david-dm-src]: https://david-dm.org/davidroyer/nuxt-static/status.svg?style=flat-square
[david-dm-href]: https://david-dm.org/davidroyer/nuxt-static
[standard-js-src]: https://img.shields.io/badge/code_style-standard-brightgreen.svg?style=flat-square
[standard-js-href]: https://standardjs.com
a6bd3f2b17e398cc73fabd8c098deaef8b093089 | 1,187 | md | Markdown | docs/AirBoundaryConstructionAbridged.md | ladybug-tools/dragonfly-schema-dotnet | 2d054532269d7bbe7f3666c43df53942460cbec3 | ["MIT"]
# DragonflySchema.Model.AirBoundaryConstructionAbridged
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**UserData** | **Object** | Optional dictionary of user data associated with the object. All keys and values of this dictionary should be of a standard data type to ensure correct serialization of the object (e.g. str, float, int, list). | [optional]
**Type** | **string** | | [optional] [readonly] [default to "AirBoundaryConstructionAbridged"]
**AirMixingPerArea** | **double** | A positive number for the amount of air mixing between Rooms across the air boundary surface [m3/s-m2]. Default: 0.1 corresponds to average indoor air speeds of 0.1 m/s (roughly 20 fpm), which is typical of what would be induced by an HVAC system. | [optional] [default to 0.1D]
**AirMixingSchedule** | **string** | Identifier of a fractional schedule for the air mixing schedule across the construction. If unspecified, an Always On schedule will be assumed. | [optional]
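Putting the properties above together, a serialized instance of this model might look like the following sketch. Note this is an assumption for illustration only: the snake_case key names follow the convention of the underlying dragonfly/honeybee JSON schema, and `"Office Mixing Schedule"` is a hypothetical schedule identifier.

```json
{
  "type": "AirBoundaryConstructionAbridged",
  "air_mixing_per_area": 0.1,
  "air_mixing_schedule": "Office Mixing Schedule"
}
```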
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to README]](../README.md)
a6bd62192517c688e0a0bf5f4ccc866b3241732f | 9,856 | markdown | Markdown | Guide/routing.markdown | mt-caret/ihp | ea655907366b61ac72571da1b311566e674ec626 | ["MIT"]
# Routing
```toc
```
## Routing Basics
In your project, routes are defined in `Web/Routes.hs`. In addition to being defined there, each route also has to be registered in `Web/FrontController.hs` to be picked up by the routing system.
The simplest way to define a route is by using `AutoRoute`, which automatically maps each controller action to a URL. For a `PostsController`, the definition in `Web/Routes.hs` will look like this:
```haskell
instance AutoRoute PostsController
type instance ModelControllerMap WebApplication Post = PostsController
```
Afterwards enable the routes for `PostsController` in `Web/FrontController.hs` like this:
```haskell
instance FrontController WebApplication where
controllers =
[ -- ...
, parseRoute @PostsController
]
```
Now you can open e.g. `/Posts` to access the `PostsAction`.
## Changing the Start Page / Home Page
You can define a custom start page action using the `startPage` function like this:
```haskell
instance FrontController WebApplication where
controllers =
[ startPage ProjectsAction
-- Generator Marker
]
```
In a new IHP project, you usually have a `startPage WelcomeAction` defined. Make sure to remove this line. Otherwise, you will still see the default IHP welcome page.
## URL Generation
Use `pathTo` to generate a path to a given action:
```haskell
pathTo ShowPostAction { postId = "adddfb12-da34-44ef-a743-797e54ce3786" }
-- /ShowPost?postId=adddfb12-da34-44ef-a743-797e54ce3786
```
To generate a full URL, use `urlTo`:
```haskell
urlTo NewUserAction
-- http://localhost:8000/NewUser
```
## AutoRoute
Let's say our `PostsController` is defined in `Web/Types.hs` like this:
```haskell
data PostsController
= PostsAction
| NewPostAction
| ShowPostAction { postId :: !(Id Post) }
| CreatePostAction
| EditPostAction { postId :: !(Id Post) }
| UpdatePostAction { postId :: !(Id Post) }
| DeletePostAction { postId :: !(Id Post) }
```
Using `instance AutoRoute PostsController` will give us the following routing:
```haskell
GET /Posts => PostsAction
GET /NewPost => NewPostAction
GET /ShowPost?postId={postId} => ShowPostAction { postId }
POST /CreatePost => CreatePostAction
GET /EditPost?postId={postId} => EditPostAction { postId }
POST /UpdatePost?postId={postId} => UpdatePostAction { postId }
PATCH /UpdatePost?postId={postId} => UpdatePostAction { postId }
DELETE /DeletePost?postId={postId} => DeletePostAction { postId }
```
The URLs are very close to the actual action which is called. Action parameters are taken automatically from the request query. This design helps you always know which action is called when requesting a URL.
### AutoRoute & Beautiful URLs
Lots of modern browsers don't even show the full URL bar anymore (e.g. Safari and most mobile browsers). Therefore AutoRoute doesn't aim to generate the "most" beautiful URLs out of the box. It's rather optimized for the needs of developers. If you need beautiful URLs for SEO reasons, instead of using AutoRoute you can use the more manual APIs of IHP Routing. See the section "[Beautiful URLs](#beautiful-urls)" for details.
### Multiple Parameters
An action constructor can have multiple parameters:
```haskell
data PostsController = EditPostAction { postId :: !(Id Post), userId :: !(Id User) }
```
This will generate a routing like:
```haskell
GET /EditPost?postId={postId}&userId={userId} => EditPostAction { postId, userId }
```
### Parameter Types
AutoRoute works with the following parameter types:
- `Text`
- `[Text]`
- `Maybe Text`
- `Int`
- `[Int]`
- `Maybe Int`
- `Id` (for all model types)
If a Maybe value is `Nothing`, the value will be left out of the query parameter. Otherwise it will be included with the value.
```haskell
data MyController = DefaultAction { maybeParam :: Maybe Text }
pathTo (DefaultAction Nothing) ==> "/Default"
pathTo (DefaultAction (Just "hello")) ==> "/Default?maybeParam=hello"
```
List values are represented as comma separated lists. If the parameter is not present the list will default to the empty list.
```haskell
data MyController = DefaultAction { listParam :: [Int] }
pathTo (DefaultAction []) ==> "/Default"
pathTo (DefaultAction [1,2,3]) ==> "/Default?listParam=1,2,3"
```
### Request Methods
When an action is named a certain way, AutoRoute will pick a certain request method for the route. E.g. for a `DeletePostAction` it will only allow requests with the request method `DELETE` because the action name starts with `Delete`. Here is an overview of all naming patterns and their corresponding request method:
```haskell
Delete_Action => DELETE
Update_Action => POST, PATCH
Create_Action => POST
Show_Action => GET, HEAD
otherwise => GET, POST, HEAD
```
If you need stricter rules, consider using the other routing APIs available or overriding `allowedMethodsForAction` like this:
```haskell
instance AutoRoute HelloWorldController where
allowedMethodsForAction "HelloAction" = [ GET ]
```
### Application Prefix
When using multiple applications in your IHP project, e.g. having an admin back-end, AutoRoute will prefix the action URLs with the application name. E.g. a controller `HelloWorldController` defined in `Admin/Types.hs` will be automatically prefixed with `/admin` and generate URLs such as `/admin/HelloAction`.
This prefixing has special handling for the `Web` module so that all controllers in the default `Web` module don't have a prefix.
## Custom Routing
Sometimes you have special needs for your routing. For this case, IHP provides a lower-level routing API on which `AutoRoute` is built.
Let's say we have a controller like this:
```haskell
data PostsController = ShowAllMyPostsAction
```
We want requests to `/posts` to map to `ShowAllMyPostsAction`. For that we need to add a `CanRoute` instance:
```haskell
instance CanRoute PostsController where
parseRoute' = string "/posts" <* endOfInput >> pure ShowAllMyPostsAction
```
The `parseRoute'` function is a parser that reads an URL and returns an action of type `PostsController`. The router uses [attoparsec](https://hackage.haskell.org/package/attoparsec). See below for examples on how to use this for building beautiful URLs.
Next to the routing itself, we also need to implement the URL generation:
```haskell
instance HasPath PostsController where
pathTo ShowAllMyPostsAction = "/posts"
```
### Beautiful URLs
Let's say we want to give our blog post application a beautiful URL structure for SEO reasons. Our controller is defined as:
```haskell
data PostsController
= ShowPostAction { postId :: !(Id Post) }
```
We want our URLs to look like this:
```html
/posts/an-example-blog-post
```
Additionally we also want to accept permalinks with the id like this:
```
/posts/f85dc0bc-fc11-4341-a4e3-e047074a7982
```
To accept URLs like this, we first need to make some changes to our data structure. We have to make the `postId` optional. Additionally, we need to have a parameter for the URL slug:
```haskell
data PostsController
= ShowPostAction { postId :: !(Maybe (Id Post)), slug :: !(Maybe Text) }
```
This will also require us to make changes to our action implementation:
```haskell
action ShowPostAction { postId, slug } = do
post <- case slug of
Just slug -> query @Post |> filterWhere (#slug, slug) |> fetchOne
Nothing -> fetchOne postId
-- ...
```
This expects the `posts` table to have a field `slug :: Text`.
Now we define our `CanRoute` instance like this:
```haskell
instance CanRoute PostsController where
parseRoute' = do
string "/posts/"
let postById = do id <- parseId; endOfInput; pure ShowPostAction { postId = Just id, slug = Nothing }
let postBySlug = do slug <- remainingText; pure ShowPostAction { postId = Nothing, slug = Just slug }
postById <|> postBySlug
```
Additionally we also have to implement the `HasPath` instance:
```haskell
instance HasPath PostsController where
pathTo ShowPostAction { postId = Just id, slug = Nothing } = "/posts/" <> tshow id
pathTo ShowPostAction { postId = Nothing, slug = Just slug } = "/posts/" <> slug
```
### Real-World Example
Here is a real world example of a custom routing implementation for a custom Apple Web Service interface implemented at digitally induced:
```haskell
instance CanRoute RegistrationsController where
parseRoute' = do
appleDeviceId <- string "AppleWebService/v1/devices/" *> parseText <* "/registrations/"
passType <- parseText
let create = do
string "/"
memberId <- parseId
endOfInput
pure CreateRegistrationAction { .. }
let show = do
endOfInput
pure ShowRegistrationAction { .. }
choice [ create, show ]
instance HasPath RegistrationsController where
    pathTo CreateRegistrationAction { appleDeviceId, passType, memberId } = "/AppleWebService/v1/devices/" <> appleDeviceId <> "/registrations/" <> passType <> "/" <> tshow memberId
    pathTo ShowRegistrationAction { appleDeviceId, passType } = "/AppleWebService/v1/devices/" <> appleDeviceId <> "/registrations/" <> passType
```
## Method Override Middleware
HTML forms don't support special HTTP methods like `DELETE`. To work around this issue, IHP has [a middleware](https://hackage.haskell.org/package/wai-extra-3.0.1/docs/Network-Wai-Middleware-MethodOverridePost.html) which transforms e.g. a `POST` request with a form field `_method` set to `DELETE` to a `DELETE` request.
## Custom 404 Page
You can override the default IHP 404 Not Found error page by creating a new file at `static/404.html`. Then IHP will render that HTML file instead of displaying the default IHP not found page.
a6bdab4975b6e9bd99645267b3e37e586dcf30b4 | 2,259 | md | Markdown | README.md | scarybot/monotony | cc923267ab0b38e149c2181834befebad6cba34e | ["MIT"]
# Monotony
This gem is an engine to simulate games of Monopoly.
[](https://badge.fury.io/rb/monotony)
## Installation
Add this line to your application's Gemfile:
```ruby
gem 'monotony'
```
And then execute:
$ bundle
Or install it yourself as:
$ gem install monotony
## Usage
To play a quick game of Monopoly, with classic board layout and four randomly generated players:
```ruby
game = Monotony::Game.new({})
game.play
# See the results of the game
game.summary
```
You can step through the game a few turns at a time, and use the ```summary``` method to view an ASCII representation of the state of the game.
```ruby
game.play(10).summary
```
```ruby
monopoly_players = [
Player.new( name: 'James' ),
Player.new( name: 'Jody' ),
Player.new( name: 'Ryan' ),
# This player is using a custom behaviour hash. See docs for more details.
Player.new( name: 'Tine', behaviour: behaviour )
]
# Board layout and chance/community chest cards can be defined here; see docs for more details.
monopoly = Monotony::Game.new(
board: monopoly_board,
chance: chance,
community_chest: community_chest,
num_dice: 2,
die_size: 6,
starting_currency: 1500,
bank_balance: 12755,
num_hotels: 12,
num_houses: 48,
go_amount: 200,
max_turns_in_jail: 3,
players: monopoly_players
)
```
## Development
After checking out the repo, run `bin/setup` to install dependencies. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/scarybot/monotony.
## License
The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
a6bdba1ace1aeb43f89d6e16e0bb1fe380402a3d | 4,431 | md | Markdown | articles/machine-learning/reference-yaml-endpoint-managed-online.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | ["CC-BY-4.0", "MIT"]
---
title: Online endpoint YAML reference (preview)
titleSuffix: Azure Machine Learning
description: Learn about the YAML files used to deploy models as online endpoints.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: how-to
author: rsethur
ms.author: seramasu
ms.date: 10/21/2021
ms.reviewer: laobri
ms.openlocfilehash: 5b7637f16885e2eed5281273f1acad866e38e39e
ms.sourcegitcommit: e41827d894a4aa12cbff62c51393dfc236297e10
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/04/2021
ms.locfileid: "131555863"
---
# <a name="cli-v2-online-endpoint-yaml-schema"></a>CLI (v2) online endpoint YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json.
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
> [!NOTE]
> A fully specified sample YAML file for managed online endpoints is available for [reference](https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.template.yaml).
## <a name="yaml-syntax"></a>YAML syntax
| Key | Type | Description | Allowed values | Default value |
| --- | ---- | ----------- | -------------- | ------------- |
| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file lets you invoke schema and resource completions. | | |
| `name` | string | **Required.** Name of the endpoint. It needs to be unique at the Azure region level. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview). | | |
| `description` | string | Description of the endpoint. | | |
| `tags` | object | Dictionary of tags for the endpoint. | | |
| `auth_mode` | string | The authentication method for the endpoint. Key-based authentication and Azure Machine Learning token-based authentication are supported. Key-based authentication doesn't expire, but Azure Machine Learning token-based authentication does. | `key`, `aml_token` | `key` |
| `allow_public_access` | boolean | Whether to allow public access when Private Link is enabled. | | `true` |
| `identity` | object | The managed identity configuration for accessing Azure resources for endpoint provisioning and inference. | | |
| `identity.type` | string | The type of managed identity. If the type is `user_assigned`, the `identity.user_assigned_identities` property must also be specified. | `system_assigned`, `user_assigned` | |
| `identity.user_assigned_identities` | array | List of fully qualified resource IDs of the user-assigned identities. | | |
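Putting the keys above together, a minimal endpoint definition might look like the following sketch (illustrative only; `my-endpoint` and the description are placeholder values, and every key comes from the table above):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint
description: Example managed online endpoint
auth_mode: key
identity:
  type: system_assigned
```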
## <a name="remarks"></a>Remarks
The `az ml online-endpoint` commands can be used for managing Azure Machine Learning online endpoints.
## <a name="examples"></a>Examples
Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch). Several are shown below.
## <a name="yaml-basic"></a>YAML: basic
:::code language="yaml" source="~/azureml-examples-cli-preview/cli/endpoints/online/managed/sample/endpoint.yml":::
## <a name="yaml-system-assigned-identity"></a>YAML: system-assigned identity
:::code language="yaml" source="~/azureml-examples-cli-preview/cli/endpoints/online/managed/managed-identities/1-sai-create-endpoint.yml":::
## <a name="yaml-user-assigned-identity"></a>YAML: user-assigned identity
:::code language="yaml" source="~/azureml-examples-cli-preview/cli/endpoints/online/managed/managed-identities/1-uai-create-endpoint.yml":::
## <a name="next-steps"></a>Next steps
- [Install and use the CLI (v2)](how-to-configure-cli.md)
- Learn how to [deploy a model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md).
- [Troubleshoot managed online endpoints deployment and scoring (preview)](./how-to-troubleshoot-online-endpoints.md)
*Source: articles/crazyfile-gap8.md (lazyparser/cnrv-blog, CC0-1.0)*
## GreenWaves GAP8: RISC-V + AI helps PULP build a tiny autonomous flying drone
> A new microprocessor helps researchers build the world's smallest autonomous drone, with an AI neural network that needs less than 100 milliwatts

Engineers from ETH Zurich and the University of Bologna say they have built the world's smallest autonomous drone: one that can run AI algorithms on a tiny battery. This should greatly advance the development of ultra-small self-navigating drones (4 inches or less), which could one day carry environmental sensors and miniature cameras for security monitoring, surveying, and inspection tasks.
Battery limitations have long constrained drone development. "How to reduce a drone's weight and cut its power demands is the direction our research pursues and refines," said a researcher from the PULP lab. As described in a [paper](https://arxiv.org/abs/1805.01831) published last month, they fitted an ultra-low-power camera and a [GAP8 microprocessor](https://greenwaves-technologies.com) onto [Crazyflie 2.0](https://www.bitcraze.io/crazyflie-2/), an ultra-small $180 quadcopter, and loaded a custom neural-network algorithm. These additions add only 5 g of weight and about 1% extra power consumption (94 milliwatts). The researchers note that the battery remains a major challenge: 30 minutes of autonomous flight would be enough for the drone to inspect a mid-sized warehouse and return to its charging station on its own. Although a sizable gap remains, the added autonomy at least has a negligible impact on battery life.
The PULP team chose the [GAP8 application processor from the French company GreenWaves Technologies](https://greenwaves-technologies.com) as the drone's brain, a novel ultra-low-power parallel computing platform based on the PULP project. The processor has 8+1 RISC-V-based cores plus an efficient neural-network accelerator (the Hardware Convolutional Engine). GAP8's main job is to receive images and run the AI algorithm DroNet, a lightweight residual convolutional neural network (CNN) architecture. The algorithm predicts steering angle and collision probability, enabling the quadcopter to fly safely and autonomously in a variety of indoor and outdoor environments.

The breakthrough of DroNet is that a lightweight convolutional network designed for larger drones was successfully compressed and ported to a much smaller processor without losing much accuracy. The compression drops the network's camera-image throughput from 20 frames per second to 12, but it still runs fast and accurately enough to recognize an obstacle and alert the drone in under half a second. "That is fast enough for a Crazyflie 2.0 flying at four meters per second," Loquercio wrote.
Before the drone could fly, the team had to customize and train DroNet in real environments. They mounted cameras on cars, bicycles, and a hiker to record video, then had a powerful computer learn from that data and produce model code that could be loaded onto the GAP8. "Anyone can use the source code we released and repeat our tests to verify the results," Loquercio wrote.
At this stage, DroNet can only dodge obstacles ahead of it by moving left or right; it cannot yet move up or down, a limitation a future version is expected to overcome.
The research shows that sophisticated AI algorithms are not limited to large computing devices; embedded ultra-low-power microprocessors can handle the job too. The researchers say their breakthrough will apply not only to drones but also to other robots, as well as to small mobile IoT devices fitted with environmental sensors, cameras, and other sensors.
GreenWaves will attend the RISC-V Day Shanghai workshop on June 30, 2018. If you are interested, come chat with their engineers!

Links:
- Fast Company coverage: [This tiny drone with a tiny brain is smart enough to fly itself](https://www.fastcompany.com/40575392/this-tiny-drone-with-a-tiny-brain-is-smart-enough-to-fly-itself)
- Paper: Ultra Low Power Deep-Learning-powered Autonomous Nano Drones, url: [https://arxiv.org/abs/1805.01831](https://arxiv.org/abs/1805.01831)
- [GreenWaves Technologies website](https://greenwaves-technologies.com)
Compiled and edited by: 张垚
Editor: 雄飞
Image credits: GreenWaves and the paper
----
<a rel="license" href="http://creativecommons.org/licenses/by-sa/2.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/2.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/2.0/">Creative Commons Attribution-ShareAlike 2.0 Generic License</a>.
*Source: CHANGELOG.md (nunsie/probank, MIT)*
## [1.1.1](https://github.com/nunsie/probank/compare/v1.1.0...v1.1.1) (2020-07-11)
### Bug Fixes
* semantic config ([b459a2d](https://github.com/nunsie/probank/commit/b459a2dd639caa43e11e7d1bc11f92e7e3159a7d))
# [1.1.0](https://github.com/nunsie/probank/compare/v1.0.0...v1.1.0) (2020-07-11)
### Features
* semantic-release ([c9663ce](https://github.com/nunsie/probank/commit/c9663cea4a622c26b8b34380680527976d0727db))
# 1.0.0 (2020-07-11)
### Features
* semantic-release ([cdd3039](https://github.com/nunsie/probank/commit/cdd30394a22a7769f7b092e4142f9a6458d00442))
* semantic-release ([84339a2](https://github.com/nunsie/probank/commit/84339a2e07aac397b43b2fd707a350dc46d08034))
* semantic-release ([5e83769](https://github.com/nunsie/probank/commit/5e837690462e8129a12b71b33810c6bd0f4f15c8))
* semantic-release ([6330c97](https://github.com/nunsie/probank/commit/6330c978ebfe0a7df059667d6c3f2dc853419279))
*Source: _books/cuentos-tia-panchita.md (morelcoop/morel-2.0, MIT)*
---
title: Cuentos de mi tía panchita
layout: book
editorial: "Imprenta Española"
ciudad: San José de Costa Rica
edicion: 1936
year: 1920
author: "Carmen Lyra"
nacionalidad: Costa Rica
repositorio: "Biblioteca Digital del Patrimonio Iberoamericano"
repurl: http://www.iberoamericadigital.net
img: tia_panchita_Morel.jpg
descarga: https://ia801502.us.archive.org/28/items/cuentos-de-mi-tia-panchita/Cuentos%20de%20mi%20Tia%20Panchita.pdf
biblioteca: http://www.worldcat.org/oclc/894757254
comprar: https://amzn.to/34KPsfE
periodo: "Siglo XX"
feature:
---
Aunt Panchita's stories were humble iron keys that opened chests whose contents were a treasure of dreams.
In the patio of her house there was a well, beneath a chayote vine that formed a canopy of coolness over its rim.
Often, above all in the heat of March, my mouth remembers the water of that well, the coldest and cleanest I have tasted to this day, which no longer exists, which the heat dried up; and without my willing it, my heart evokes at the same time the memory of my happiness of those days, crystalline and fresh, which no longer exists, which experience dried up.
*Source: README.md (dikdiktasdik/e-sppd, MIT)*
# e-SPPD
An application intended to help finance-department staff speed up the administrative processing of official travel orders (Surat Perjalanan Dinas, SPPD) and speed up the calculation of itemized expenses.
*Source: Algorithm/al14.md (yangdongjue5510/TIL, MIT)*
---
title: 14. Greedy Algorithms
date: 2021-08-11 16:17:39
tags:
category:
- Computer Science
- Algorithm
---
## Greedy algorithms
A greedy algorithm builds a solution by repeatedly making the best choice available in the current state.
The question is whether the solution obtained this way is truly the best one;
to prove that it is, we use proof by contradiction.
**That is, assume the solution we found is not optimal, so some other solution must exist,**
**then reason about that solution until we reach a contradiction; if we do, our greedy approach is proven correct.**
> Let's understand this through an example.
Given the array L = [1, 2, 3, 4, 5], suppose we want to pick as many elements as possible whose sum does not exceed 9 (= T).
Naturally, we add elements from the smallest up and stop just before the sum would exceed 9 (picking 1, 2, 3, i.e. three elements).
(If we pick from the smallest values up to T (p) -> three is the maximum count (q).)
To verify this logic, we prove ~q -> ~p.
That is, if assuming more than three elements can be picked makes p contradict itself, the original p -> q is proven.
We found the answer 3 using our logic (p).
To prove it, assume a solution with more than three elements exists (~q).
Then our logic (pick the smallest values first, p) yields values a1, a2, a3,
and the assumed solution yields b1, b2, b3, b4, ...
(Assume both a and b are in ascending order.)
Now let a1+a2+a3 = X and b1+b2+b3 = Y.
By p, X <= Y must always hold (p = pick the smallest values first).
And by the problem's condition, Y + b4 <= T.
But according to p,
X + a4 > T (this is why the greedy stopped at three elements), and a4 <= b4.
Putting it all together:
X <= Y and a4 <= b4,
yet X + a4 > T while Y + b4 <= T, which is a contradiction!
> So our greedy choice was correct!
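The greedy procedure above can be sketched in Python (the function name is mine, not from the original notes):

```python
def max_items(values, limit):
    """Greedily pick as many elements as possible whose sum stays within limit."""
    total = 0
    count = 0
    for v in sorted(values):  # best local choice: always take the smallest remaining value
        if total + v > limit:
            break  # taking the next-smallest element would exceed T, so stop
        total += v
        count += 1
    return count

print(max_items([1, 2, 3, 4, 5], 9))  # → 3 (picks 1, 2, 3)
```

Sorting first makes the "best choice in the current state" explicit: at each step the smallest unused element is the one that leaves the most room for future picks.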
*Source: README.md (pwolanin/voter-ward-split, MIT)*
Steps for merging and splitting PA voter files, oriented to the state-senate-based files from the Philly City Commissioners.
These steps must be run in a terminal. All the steps highlighted as code can be pasted as-is
except for setting the date.
You need to make sure you have a reasonable version of php (> 7.0) and sqlite3 (> 3.9.0) in your path.
If you are not sure, try this:
```
php --version
sqlite3 --version
```
First, define the DATE variable in the terminal. This should be the only step where you
have to change the command.
```
# Use the correct date for the voter files here. For example, if the file
# names are like PA Voter Export SS1 (3-1-22).TXT then the iso8601 date is
# 2022-03-01
DATE=2022-03-01
```
Convert tab-delimited voter text files from the city to CSV.
```
for DIST in 1 2 3 4 5 7 8; do cat "PA Voter Export SS$DIST "*.TXT | ./tab-to-csv.php > $DATE-PA-Voter-Export-SS$DIST.csv; done
```
Import all the csv files into a sqlite3 database.
```
DB="sqlite3 $DATE-PA-Voter-Export.sqlite3"
for FILE in $DATE-PA-Voter-Export-*.csv; do echo ".mode csv
.head on
.import $FILE voter_list" | $DB; done
```
Ideally there should be no error messages in the import step. If you see messages like the following, there may be bad data in some rows:
2022-03-01-PA-Voter-Export-SS1.csv:69081: expected 127 columns but found 126 - filling the rest with NULL
2022-03-01-PA-Voter-Export-SS7.csv:112520: unescaped " character
Clean out the duplicate header rows. The steps here work for older sqlite binaries.
The newer versions also have --csv and --skip options for .import per:
https://sqlite.org/cli.html#csv
```
echo "DELETE FROM voter_list WHERE ID_Number = 'ID_Number' AND First_Name = 'First_Name'" | $DB
```
Finally, generate the split up ward files:
```
./wards.sh $DATE
```
Check the content of the wards directory.
*Source: CONTRIBUTING.md (gaiaresources/pipelines, Apache-2.0)*
|
# Contributing to GBIF Pipelines
The GBIF Pipelines development community welcomes contributions from anyone! Thank you.
If you have questions please open [an issue](https://github.com/gbif/pipelines/issues/new) or join the [mailing list](https://lists.gbif.org/mailman/listinfo/pipelines).
There are different ways you can contribute that help this project:
- Log [issues](https://github.com/gbif/pipelines/issues) and help document a bug or specify new functionality
- Improve documentation
- Provide code submissions that address issues or bring new functionalities
- Help test the functionalities offered in the project
- Help improve this guide
## Contributing code
Below is a tutorial for contributing code to Pipelines, covering our tools and typical process in detail.
### Prerequisites
To contribute code, you need
- a GitHub account
- a Linux, MacOS, or Microsoft Windows development environment with Java JDK 8 installed
- Maven (version 3.6+)
### Connect with the Pipeline community and share your intent
- This is a very active project, with dependencies coming from those running in production.
- It is always worthwhile announcing your intention, so the community can offer guidance.
- Please always start with an issue to capture discussion
### Development setup
#### Command line
This project uses [Apache Maven](https://maven.apache.org/run.html) to build. The following should be enough to check out and build the project.
```
git clone git@github.com:gbif/pipelines.git
cd pipelines
build.sh
```
Using maven commands, the project can be built with `mvn clean package` (optionally with `-DskipTests` to skip tests) and integration tests run with `mvn verify`.
We use [Google Java Format](https://plugins.jetbrains.com/plugin/8527-google-java-format) for code styling.
From the command line you can check the style using `mvn spotless:check` and fixup styling issues using `mvn spotless:apply`.
#### IDE Setup
We recommend the following toolset, and are able to answer questions on this configuration:
- Use [IntelliJ IDEA Community](https://www.jetbrains.com/idea/download/) (or better),
- Use the [Google Java Format](https://plugins.jetbrains.com/plugin/8527-google-java-format) plugin. Please do not reformat existing code; format only your changes and new code.
- The project uses [Project Lombok](https://projectlombok.org/). Please install the [Lombok plugin for IntelliJ IDEA](https://plugins.jetbrains.com/plugin/6317-lombok-plugin).
- Because the project uses [Error Prone](https://code.google.com/p/error-prone) you may have issues during the build process from IDEA. To avoid these issues please install the [Error Prone compiler integration plugin](https://plugins.jetbrains.com/plugin/7349-error-prone-compiler-integration) and build the project using the [`error-prone java compiler`](https://code.google.com/p/error-prone) to catch common Java mistakes at compile-time. To use the compiler, go to _File_ → _Settings_ → _Compiler_ → _Java Compiler_ and select `Javac with error-prone` in the `Use compiler` box.
- Add a custom parameter to avoid a debugging problem. To use the compiler, go to _File_ → _Settings_ → _Compiler_ → _Java Compiler_ → _Additional command line parameters_ and add `-Xep:ParameterName:OFF`
- Tests: please follow the conventions of the Maven Surefire plugin for [unit tests](https://maven.apache.org/surefire/maven-surefire-plugin/examples/inclusion-exclusion.html) and those of the Maven Failsafe plugin for [integration tests](https://maven.apache.org/surefire/maven-failsafe-plugin/examples/inclusion-exclusion.html). To run the integration tests just run the verify phase, e.g.: `mvn clean verify`
### Understanding branching
We follow a [GitFlow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) approach to our project.
This can be summarized as:
1. `master` represents what is running in production, or is in the process of being released for a new deployment
2. `gbif-dev` represents the latest state of development, and is what is run by the [continuous build tools](https://builds.gbif.org/)
1. Generally feature branches are made from here
3. `ala-dev` represents a large feature branch to bring in the `livingatlas` module and pipelines for the ALA work
If you are working on a new feature and not part of the core team please ask for guidance on where to start (most likely `gbif-dev`).
### Make your change
1. Checkout the branch you need (or fork the project and then checkout the branch)
2. Create a feature branch following a naming convention of `<issue_number>_my_new_feature`
3. Add unit tests for your change (please see testing style below)
4. Arrange your commits
1. Consider merging commits. We favour fewer commits, but recognize this is not always desirable for large features (please squash small commits addressing typos etc. into one)
5. Use descriptive commit messages that make it easy to identify changes and provide a clear history.
1. Please reference the issue you are working on unless it is a trivial change
2. Examples of good commit messages:
1. `#123 Enable occurrenceStatus interpretation to Avro`
2. `Fixup: Addressing typos in JDoc`
3. Examples of bad commit messages:
1. ` `
2. `Various fixes`
3. `#123`
6. Check your code compiles, and all project unit tests pass (please be kind to reviewers)
7. Explore the `errorprone` warnings raised at compilation time. Please address issues you see as best you can.
8. Ensure that the code is spotless (`mvn spotless:check` and fixup styling issues using `mvn spotless:apply`)
9. Verify that the PR only changes the code necessary to address the issue (other fixes should be in separate PRs)
10. Prefer to create a pull request and have it reviewed for larger submissions. Committers are free to push *smaller* changes directly.
11. Committers are requested to delete branches once merged.
### Test code style
The following illustrates the preferred style for unit tests.
```java
@Test
public void allValuesNullTest() {
// State
String eventDate = null;
String year = null;
String month = null;
String day = null;
// When
ParsedTemporal result = TemporalParser.parse(year, month, day, eventDate);
// Should
assertFalse(result.getFromOpt().isPresent());
assertFalse(result.getToOpt().isPresent());
assertFalse(result.getYearOpt().isPresent());
assertFalse(result.getMonthOpt().isPresent());
assertFalse(result.getDayOpt().isPresent());
assertFalse(result.getStartDayOfYear().isPresent());
assertFalse(result.getEndDayOfYear().isPresent());
assertTrue(result.getIssues().isEmpty());
}
```
*Source: aspnetcore/grpc/test-tools.md (glacasa/AspNetCore.Docs.fr-fr, CC-BY-4.0, MIT)*
---
title: Test gRPC services with gRPCurl in ASP.NET Core
author: jamesnk
description: Learn how to test services with gRPC tools. gRPCurl is a command-line tool for interacting with gRPC services. gRPCui is an interactive web UI.
monikerRange: '>= aspnetcore-3.0'
ms.author: jamesnk
ms.date: 08/09/2020
no-loc:
- ASP.NET Core Identity
- cookie
- Cookie
- Blazor
- Blazor Server
- Blazor WebAssembly
- Identity
- Let's Encrypt
- Razor
- SignalR
uid: grpc/test-tools
ms.openlocfilehash: 800b320413552e73f05e0359e67eeb2caf4e0e2a
ms.sourcegitcommit: 9c031530d2e652fe422e786bd43392bc500d622f
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/18/2020
ms.locfileid: "90770166"
---
# <a name="test-grpc-services-with-grpcurl-in-aspnet-core"></a>Test gRPC services with gRPCurl in ASP.NET Core
By [James Newton-King](https://twitter.com/jamesnk)
Tooling is available for gRPC that allows developers to test services without building client apps:
* [gRPCurl](https://github.com/fullstorydev/grpcurl) is a command-line tool that provides interaction with gRPC services.
* [gRPCui](https://github.com/fullstorydev/grpcui) builds on top of gRPCurl and adds an interactive web UI for gRPC, similar to tools such as Postman and Swagger UI.
In this article, you will learn how to:
* Download and install gRPCurl and gRPCui.
* Set up gRPC reflection with a gRPC ASP.NET Core app.
* Discover and test gRPC services with `grpcurl`.
* Interact with gRPC services via a browser using `grpcui`.
## <a name="about-grpcurl"></a>About gRPCurl
gRPCurl is a command-line tool created by the gRPC community. Its features include:
* Calling gRPC services, including streaming services.
* Service discovery using [gRPC reflection](https://github.com/grpc/grpc/blob/master/doc/server-reflection.md).
* Listing and describing gRPC services.
* Works with secure (TLS) and insecure (plain-text) servers.
For information about downloading and installing `grpcurl`, see the [gRPCurl GitHub homepage](https://github.com/fullstorydev/grpcurl#installation).

## <a name="set-up-grpc-reflection"></a>Set up gRPC reflection
`grpcurl` must know the Protobuf contract of services before it can call them. There are two ways to do this:
* Set up [gRPC reflection](https://github.com/grpc/grpc/blob/master/doc/server-reflection.md) on the server. gRPCurl automatically discovers service contracts.
* Specify `.proto` files in command-line arguments to gRPCurl.
It's easier to use gRPCurl with gRPC reflection. gRPC reflection adds a new gRPC service to the app that clients can call to discover services.
gRPC ASP.NET Core has built-in support for gRPC reflection with the [`Grpc.AspNetCore.Server.Reflection`](https://www.nuget.org/packages/Grpc.AspNetCore.Server.Reflection) package. To configure reflection in an app:
* Add a `Grpc.AspNetCore.Server.Reflection` package reference.
* Register reflection in `Startup.cs`:
  * `AddGrpcReflection` to register services that enable reflection.
  * `MapGrpcReflectionService` to add a reflection service endpoint.
[!code-csharp[](~/grpc/test-tools/Startup.cs?name=snippet_1&highlight=4,15-18)]
When gRPC reflection is set up:
* A gRPC reflection service is added to the server app.
* Client apps that support gRPC reflection can call the reflection service to discover services hosted by the server.
* gRPC services are still called from the client. Reflection only enables service discovery and doesn't bypass server-side security. Endpoints protected by [authentication and authorization](xref:grpc/authn-and-authz) require the caller to pass credentials for the endpoint to be called successfully.
## <a name="use-grpcurl"></a>Use `grpcurl`
The `-help` argument explains `grpcurl` command-line options:
```console
$ grpcurl -help
```
### <a name="discover-services"></a>Discover services
Use the `describe` verb to view services defined by the server:
```console
$ grpcurl localhost:5001 describe
greet.Greeter is a service:
service Greeter {
rpc SayHello ( .greet.HelloRequest ) returns ( .greet.HelloReply );
rpc SayHellos ( .greet.HelloRequest ) returns ( stream .greet.HelloReply );
}
grpc.reflection.v1alpha.ServerReflection is a service:
service ServerReflection {
rpc ServerReflectionInfo ( stream .grpc.reflection.v1alpha.ServerReflectionRequest ) returns ( stream .grpc.reflection.v1alpha.ServerReflectionResponse );
}
```
The preceding example:
* Runs the `describe` verb on the server `localhost:5001`.
* Prints services and methods returned by gRPC reflection.
* `Greeter` is a service implemented by the app.
* `ServerReflection` is the service added by the `Grpc.AspNetCore.Server.Reflection` package.
Combine `describe` with a service, method, or message name to view its detail:
```powershell
$ grpcurl localhost:5001 describe greet.HelloRequest
greet.HelloRequest is a message:
message HelloRequest {
string name = 1;
}
```
### <a name="call-grpc-services"></a>Call gRPC services
Call a gRPC service by specifying a service and method name along with a JSON argument that represents the request message. The JSON is converted into Protobuf and sent to the service.
```console
$ grpcurl -d '{ \"name\": \"World\" }' localhost:5001 greet.Greeter/SayHello
{
"message": "Hello World"
}
```
In the preceding example:
* The `-d` argument specifies a request message with JSON. This argument must come before the server address and method name.
* Calls the `SayHello` method on the `greeter.Greeter` service.
* Prints the response message as JSON.
## <a name="about-grpcui"></a>About gRPCui
gRPCui is an interactive web UI for gRPC. It builds on top of gRPCurl and offers a GUI for discovering and testing gRPC services, similar to HTTP tools such as Postman or Swagger UI.
For information about downloading and installing `grpcui`, see the [gRPCui GitHub homepage](https://github.com/fullstorydev/grpcui#installation).
## <a name="using-grpcui"></a>Using `grpcui`
Run `grpcui` with the server address to interact with as an argument:
```powershell
$ grpcui localhost:5001
gRPC Web UI available at http://127.0.0.1:55038/
```
The tool launches a browser window with the interactive web UI. gRPC services are automatically discovered using gRPC reflection.

## <a name="additional-resources"></a>Additional resources
* [gRPCurl GitHub homepage](https://github.com/fullstorydev/grpcurl)
* [gRPCui GitHub homepage](https://github.com/fullstorydev/grpcui)
* [`Grpc.AspNetCore.Server.Reflection`](https://www.nuget.org/packages/Grpc.AspNetCore.Server.Reflection)
*Source: articles/finance/localizations/norway.md (tradotto/dynamics-365-unified-operations-public, CC-BY-4.0, MIT)*
---
# required metadata
title: Norway overview
description: This topic provides links to Microsoft Dynamics 365 Finance documentation resources for Norway.
author: ShylaThompson
manager: AnnBe
ms.date: 07/25/2019
ms.topic: article
ms.prod:
ms.service: dynamics-ax-applications
ms.technology:
# optional metadata
# ms.search.form:
audience: Application User
# ms.devlang:
ms.reviewer: kfend
ms.search.scope: Core, Operations
# ms.tgt_pltfrm:
ms.custom:
ms.search.region: Norway
# ms.search.industry:
ms.author: shylaw
ms.search.validFrom: 2016-02-28
ms.dyn365.ops.version: AX 7.0.0
---
# Norway overview
[!include [banner](../includes/banner.md)]
This topic provides links to documentation resources for Norway.
- [Customer and vendor payment formats](tasks/no-00003-customer-vendor-payment-formats.md)
- [Customer payment based on payment ID](tasks/no-00002-customer-payment-based-payment-id.md)
- [Nets import format](emea-nor-nets-import-format.md)
- [VAT statement](emea-nor-sales-tax-payment-report.md)
- [Standard Audit File for Tax (SAF-T)](emea-nor-satndard-audit-file-for-tax.md)
- [Cash register functionality](../../retail/localizations/emea-nor-cash-registers.md)
- [Deployment guidelines for cash registers](../../retail/localizations/emea-nor-loc-deployment-guidelines.md)
*Source: articles/libraries/error-messages.md (enak/docs, MIT)*
---
section: libraries
description: Describes common sign up and login errors that you might see when you authenticate users using Auth0 libraries.
topics:
- libraries
- lock
- auth0js
- error-messages
contentType:
- reference
useCase:
- add-login
- enable-mobile-auth
---
# Common Auth0 Library Authentication Errors
Your users' actions or input data during the sign-up or login process might trigger errors. Here is a list of the most common errors you might encounter when using any of the Auth0 libraries for authentication.
## Sign up
In the case of a failed signup, the most common errors are:
| **Error** | **Description** |
|-|-|
| **invalid_password** | The password used doesn't comply with the password policy for the connection |
| **invalid_signup** | The user you are attempting to sign up is invalid |
| **password_dictionary_error** | The chosen password is too common |
| **password_no_user_info_error** | The chosen password is based on user information |
| **password_strength_error** | The chosen [password is too weak](/connections/database/password-strength) |
| **unauthorized** | The user cannot sign up for this application; this may be due to the violation of a specific rule |
| **user_exists** | The user you are attempting to sign up has already signed up |
| **username_exists** | The username you are attempting to sign up with is already in use |
## Log in
In the case of a failed login, the most common errors are:
| **Error** | **Description** |
|-|-|
| **access_denied** | When using web-based authentication, the resource server denies access per OAuth2 specifications |
| **invalid_user_password** | The username and/or password used for authentication are invalid |
| **mfa_invalid_code** | The <dfn data-key="multifactor-authentication">multi-factor authentication (MFA)</dfn> code provided by the user is invalid/expired |
| **mfa_registration_required** | The administrator has required [multi-factor authentication](/mfa), but the user has not enrolled |
| **mfa_required** | The user must provide the [multi-factor authentication](/mfa) code to authenticate |
| **password_leaked** | The password has been leaked and a different one needs to be used |
| **PasswordHistoryError** | The password provided for sign up/update has already been used (reported when [password history](/connections/database/password-options#password-history) feature is enabled) |
| **PasswordStrengthError** | The password provided does not match the connection's [strength requirements](/connections/database/password-strength) |
| **too_many_attempts** | The account is blocked due to too many attempts to sign in |
| **unauthorized** | The user you are attempting to sign in with is blocked |
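As a sketch of how an application might consume these codes, a client could map the documented error identifiers to user-facing messages when handling a library error callback. The helper function and the message wording below are illustrative, not part of any Auth0 SDK; only the error codes themselves come from the tables above.

```javascript
// Map a subset of the documented login error codes to user-facing messages.
// The wording here is illustrative; choose your own copy in a real app.
const LOGIN_ERROR_MESSAGES = {
  invalid_user_password: 'The username and/or password are invalid.',
  mfa_invalid_code: 'The multi-factor authentication code is invalid or expired.',
  too_many_attempts: 'This account is blocked due to too many sign-in attempts.',
  unauthorized: 'This user is blocked.',
};

// Hypothetical helper: turn an error object exposing a `code` property
// into a displayable message, with a generic fallback for unmapped codes.
function describeLoginError(err) {
  return LOGIN_ERROR_MESSAGES[err.code] || 'Login failed. Please try again.';
}

console.log(describeLoginError({ code: 'too_many_attempts' }));
// prints "This account is blocked due to too many sign-in attempts."
```

Keeping the mapping in one table makes it easy to audit which documented codes your UI actually handles.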