hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4b552473f0388db1868f65147e3e6cbf6eaefab9 | 868 | md | Markdown | _ess/pos/NOUN.md | myedibleenso/docs | 4306449b6863a7055a7289a94de010dbcfc11f65 | [
"Apache-2.0"
] | null | null | null | _ess/pos/NOUN.md | myedibleenso/docs | 4306449b6863a7055a7289a94de010dbcfc11f65 | [
"Apache-2.0"
] | null | null | null | _ess/pos/NOUN.md | myedibleenso/docs | 4306449b6863a7055a7289a94de010dbcfc11f65 | [
"Apache-2.0"
] | null | null | null | ---
layout: postag
title: 'NOUN'
shortdef: 'noun'
udver: '2'
---
### Definition
Nouns are a part of speech typically denoting a person, place, thing, animal or idea.
### Examples
- _nengyugh-_ "grandmother (base)",
- _nengyuqa_ "my grandmother"
- _nengyunllu_ "your grandmother"
If a word is further analyzed into morphemes, noun-yielding derivational morphemes are tagged as `NOUN`. These include noun-elaborating suffixes (N→N) that attach to nominal roots and yield nominal bases, and nominalizing suffixes (V→N) that attach to verbal roots and yield nominal bases.
### Examples
- _-ghllag(N→N)_ "big N" (as in _sikigllaget_ "big squirrels")
- _-ngugh(V→N)_ "one that is V" (as in _kavilnguq_ "one that is red")
The `NOUN` tag is intended for common nouns only. See [PRON]() for pronouns.
<!-- Interlanguage links updated Pá kvě 14 11:08:21 CEST 2021 -->
| 28 | 288 | 0.726959 | eng_Latn | 0.990266 |
4b5542ad36c6a17f83ba88651d9901e342decf7c | 8,658 | md | Markdown | _posts/2020-04-19-are-events-bad.md | dimosr/dimosr7.github.io | c2c1f5ac55ccae7123fdefa818640bef2de5dfa7 | [
"MIT"
] | null | null | null | _posts/2020-04-19-are-events-bad.md | dimosr/dimosr7.github.io | c2c1f5ac55ccae7123fdefa818640bef2de5dfa7 | [
"MIT"
] | 1 | 2021-10-01T16:54:15.000Z | 2021-10-01T16:54:15.000Z | _posts/2020-04-19-are-events-bad.md | dimosr/dimosr7.github.io | c2c1f5ac55ccae7123fdefa818640bef2de5dfa7 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Are events really a bad idea after all?"
date: 2020-04-19
excerpt: "An overview of event-based systems, their benefits and pitfalls"
header-img: "assets/img/posts/domino_2.jpg"
tags: [event-based, event-driven, architectures, systems, software]
---

This week I decided to start investing some time to read all those papers that had slowly accumulated in my bookmarks. The first of them was on the topic of event-based systems[^paper] and why they might not be as beneficial as previous claims argued they were. [Fuzz](https://twitter.com/dazraf/status/1251445548403625991) asked me what my view on this is, but Twitter's character limit is not a very good fit for this sort of thing. So, I thought I'd share my thoughts in a blog post. What I am trying to say is this blog post was driven by that event (pun clearly intended).
This has been a rather controversial topic in the past. It was interesting to see that one of the authors, Eric Brewer[^brewer], was also the author of another paper that is commonly cited as an argument for the superiority of event-based systems. That paper is known as [SEDA](https://en.wikipedia.org/wiki/Staged_event-driven_architecture), which stands for staged event-driven architecture, and it performs a comparative analysis between the classical thread-per-request web server and a web server where request processing is decomposed into multiple stages that exchange events via in-memory queues. So, it is interesting to know that this paper comes from someone who has been on the other side of the fence in the past.
The argument of the paper essentially boils down to two basic parts. In the first part, the authors explain why threads are not necessarily worse than events in terms of performance. Then, they proceed by enumerating some of the drawbacks of event-based models, thus claiming that they are actually worse as a programming paradigm. Let's review each part separately.
The fact that thread-based applications used to be very inefficient when compared to event-based, asynchronous applications is not due to some inherent limitation of the threading model, but mostly due to the underlying implementations. In order to prove this, the authors repeated the benchmark from the SEDA paper with an optimised version of the [GNU Portable threads](https://www.gnu.org/software/pth/) package, which was able to scale up to 100,000 threads while performing similarly to the SEDA architecture.
<br/>

<br/>
So, the difference in performance comes from the fact that event-based models tend to make use of cooperative scheduling, while thread-based models tend to make use of preemptive scheduling. However, the software industry is catching up and there is now support for cooperative scheduling provided under synchronous-looking APIs. These usually take the form of lightweight threads, aka fibers, or coroutines[^libraries].
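As a concrete illustration of such a synchronous-looking API over cooperative scheduling, here is a minimal Python `asyncio` sketch (the request/response shape here is invented purely for illustration):

```python
import asyncio

async def handle_request(request_id: int) -> str:
    # Reads like blocking, sequential code, but every await is a
    # cooperative yield point back to the event loop's scheduler.
    await asyncio.sleep(0.01)  # stand-in for some I/O, e.g. a DB call
    return f"response-{request_id}"

async def main() -> None:
    # Many such coroutines can be multiplexed onto a single OS thread,
    # much like lightweight threads (fibers) would be.
    responses = await asyncio.gather(*(handle_request(i) for i in range(5)))
    print(responses)

asyncio.run(main())
```

The same shape is what fiber libraries like Quasar or runtimes like project Loom aim to provide: linear-looking code with cooperative scheduling underneath.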
With performance out of the way, the question then is whether there is any reason to prefer event-based models. This is where the second part of the paper's argument starts. Interestingly, the authors reference a paper from back in 1978 about the duality of these two models, which argues that the two models are dual and that a solution to a problem in one of them can be structurally translated to the other. This duality is summarised in a table that pairs each concept in one model with its equivalent in the other.

As a result, a solution to a problem can be modelled in these two ways, and the paper argues that modelling the solution using events can cause several problems. All of these problems stem from a fundamental difference between the two models in how state is managed. In thread-based models, state is managed implicitly through the use of the stack. In event-based models, the same state needs to be managed explicitly by the application. This has the following consequences:
* **Control flow**: thread-based models tend to have a more linear structure that is easier to reason about, while event-based models introduce a lot of indirection that can lead to what is known as [callback hell](http://callbackhell.com). The linear structure of thread-based models enables powerful paradigms, such as structured concurrency[^structured_concurrency]. Event-based programs usually need to implement the state management at the application level, which is referred to as stack ripping. This essentially boils down to storing the necessary state on message delivery, so that it can be used by the associated event handler when the time comes. This introduces additional development effort and complexity at the application level.
* **Troubleshooting & Debugging**: thread-based models are also a lot easier to troubleshoot and debug. An exception with a full stack trace can be invaluable while trying to understand what really happened in a system and track interactions between components. Similarly, having the ability to easily jump between components in the stack while in a debugger session reduces development effort significantly.
* **Error handling**: Error handling is also easier in a thread-based model, since exceptions tend to propagate vertically and allow callers to perform the necessary error handling and cleanup. In an event-based model, this becomes a lot harder, since the decoupling between event producers and consumers can sever this connection, or it might require additional machinery (e.g. response message queues or dead letter queues). The paper references a system the authors were working on that frequently suffered from memory leaks for this reason.
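To make the control-flow and error-handling contrast concrete, here is a minimal Python sketch; the function names and the fake "I/O" are invented for illustration. The event/callback version has to carry its continuation and error path explicitly between handlers (stack ripping), while the thread-style version keeps state on the stack and gets linear control flow and exception propagation for free:

```python
# Event/callback style: the continuation and error path are explicit.
def fetch_user_cb(user_id, on_done, on_error):
    try:
        user = {"id": user_id, "name": "alice"}  # pretend I/O happened here
        on_done(user)
    except Exception as exc:
        on_error(exc)

def load_profile_cb(user_id, on_done, on_error):
    def got_user(user):
        # The "rest of the computation" is ripped out of the stack
        # and stored in this nested callback.
        on_done({"profile_for": user["name"]})
    fetch_user_cb(user_id, got_user, on_error)

# Thread style: state lives on the stack, and errors propagate as
# exceptions with a full stack trace attached.
def fetch_user(user_id):
    return {"id": user_id, "name": "alice"}  # pretend I/O happened here

def load_profile(user_id):
    user = fetch_user(user_id)
    return {"profile_for": user["name"]}

print(load_profile(1))  # → {'profile_for': 'alice'}
```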
These points form a pretty strong argument that reflects to some extent my experience from having worked with systems from both of these worlds. There might be some domains that are a good fit for an event-based model, but force-fitting this model into an application in order to achieve some secondary properties (e.g. performance) usually doesn't end well.
However, this paper focuses on the merits of an event-based model in the micro context of a single application. I think there is a lot more to be said when viewing this model in the macro context of multiple applications.
One example is the concept of event sourcing that has become a commonly used pattern nowadays. This takes focus away from state and brings events to the front. State is now a second-class citizen, a materialised view of all the events that have happened so far. This paradigm enables many different functions for an application, such as natural auditing, temporal querying and data integrity[^event_logs]. Of course, this doesn't mean it doesn't come with trade-offs as everything in software.
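As a minimal sketch of the idea (the bank-account domain here is invented for illustration): state is just a fold over the event log, which is also what makes auditing and temporal queries fall out naturally:

```python
from functools import reduce

# The event log is the source of truth; state is a derived view of it.
events = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
    {"type": "deposited", "amount": 5},
]

def apply_event(balance: int, event: dict) -> int:
    if event["type"] == "deposited":
        return balance + event["amount"]
    if event["type"] == "withdrawn":
        return balance - event["amount"]
    return balance  # unknown events are ignored

# Current state is a left fold over the full history...
balance = reduce(apply_event, events, 0)  # 75

# ...and replaying a prefix of the log answers temporal queries:
# "what was the balance after the first two events?"
balance_after_two = reduce(apply_event, events[:2], 0)  # 70
```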
So, are events bad after all?
As every software engineer is fond of saying, it depends.
<br/>
-----------------------------------------
<br/>
[^paper]: The paper is "Why Events Are A Bad Idea(for high-concurrency servers)", available [here](https://www.usenix.org/legacy/events/hotos03/tech/full_papers/vonbehren/vonbehren.pdf).
[^brewer]: If the name sounds familiar, it's because he is also the brain behind the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), which is a foundational finding in the field of distributed systems.
[^libraries]: [Quasar](https://github.com/puniverse/quasar) is a commonly known fiber library for the JVM, while project [Loom](https://openjdk.java.net/projects/loom/) is the native equivalent for the JVM. Coroutines have now been implemented in several languages, such as [Python](https://docs.python.org/3/library/asyncio-task.html), [Go](http://www.golangpatterns.info/concurrency/coroutines), [Kotlin](https://kotlinlang.org/docs/reference/coroutines-overview.html) etc. You can find several benchmarks that prove the same thing; that these have similar performance & scalability to asynchronous, event-based libraries.
[^structured_concurrency]: If you haven't heard of the concept of structured concurrency before, I think [this post](https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/) explains the idea very nicely.
[^event_logs]: This is one of the reasons why event logs started seeing a lot of adoption recently. This [article](https://www.confluent.io/blog/using-logs-to-build-a-solid-data-infrastructure-or-why-dual-writes-are-a-bad-idea) explains how an event log can help ensure data integrity in a distributed system architecture. | 149.275862 | 742 | 0.796258 | eng_Latn | 0.999635 |
4b55f8f9682c665dec89cc1e5d56536f8f818187 | 1,263 | md | Markdown | workshop-exercises/Workshop/AdvancedTopics/PLContainer/README.md | Pivotal-Data-Engineering/greenplum-5-introduction-workshop | b3ec90d9e57ce9c0ca6a59525b6cf7814a0bb1f6 | [
"Apache-2.0"
] | 1 | 2020-04-02T23:58:56.000Z | 2020-04-02T23:58:56.000Z | workshop-exercises/Workshop/AdvancedTopics/PLContainer/README.md | Pivotal-Data-Engineering/greenplum-5-introduction-workshop | b3ec90d9e57ce9c0ca6a59525b6cf7814a0bb1f6 | [
"Apache-2.0"
] | null | null | null | workshop-exercises/Workshop/AdvancedTopics/PLContainer/README.md | Pivotal-Data-Engineering/greenplum-5-introduction-workshop | b3ec90d9e57ce9c0ca6a59525b6cf7814a0bb1f6 | [
"Apache-2.0"
] | null | null | null | First run a simple Python program that executes the OS command `whoami`.
Then try to create a user defined function in python as gpuser. It fails. Why?
Then create it as gpadmin and GRANT EXECUTE to gpuser.
The function takes as its argument the text of a Linux OS command.
Run the function with the argument `whoami`. Who are you? Why?
Then create a similar function in PL/Container. This can be done as gpuser. Why?
Run the function with the argument `whoami`. Who are you? Why?
Run the PLPython function with the argument `cat /etc/system-release`. What is the OS release?
Run the PL/Container function with the argument `cat /etc/system-release`.
Are the releases the same? Why?
When you ran the PLPythonU version of the function, `whoami` said you were gpadmin.
Try it with the containerized version. Who are you now?
Now for some analytic work in Python and its open source mathematical libraries.
Let's compute three measures of centrality in data: the arithmetic mean, the geometric mean, and the harmonic mean.
They have some interesting properties you can explore on your own.
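As a plain-Python illustration of what those three functions compute (stdlib only, outside the database — the workshop's actual functions use NumPy inside PL/Container):

```python
import math

values = [2.0, 8.0]

arithmetic = sum(values) / len(values)               # (2 + 8) / 2     = 5.0
geometric = math.prod(values) ** (1 / len(values))   # sqrt(2 * 8)     = 4.0
harmonic = len(values) / sum(1 / v for v in values)  # 2 / (1/2 + 1/8) = 3.2
# One of the interesting properties: harmonic <= geometric <= arithmetic.
```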
First, let's create some data with numpy_setup.sql
Then create the functions in PL/Container with numpy_define_means.sql
Then execute the functions with numpy_calculate_means.
| 54.913043 | 99 | 0.78464 | eng_Latn | 0.998926 |
4b5762d8e63777d4b042ad4fa7e08f21e4fd6a01 | 25 | md | Markdown | README.md | RRRenJ/RRRCategory | b9616cae8c818f7516ff4e6234acb91d1e3631e5 | [
"MIT"
] | null | null | null | README.md | RRRenJ/RRRCategory | b9616cae8c818f7516ff4e6234acb91d1e3631e5 | [
"MIT"
] | null | null | null | README.md | RRRenJ/RRRCategory | b9616cae8c818f7516ff4e6234acb91d1e3631e5 | [
"MIT"
] | null | null | null | # RRRCategory
个人category
| 8.333333 | 13 | 0.84 | azb_Arab | 0.59634 |
4b57ce58cf68302d064af7d09b5f70bb649412a6 | 321 | md | Markdown | _components/button-group/guidance/variants/default/when-to-use.md | huyenltnguyen/uswds-site | c0f7418324c85e49413e7ddd071336f44a00b583 | [
"CC0-1.0"
] | 102 | 2018-01-28T04:59:38.000Z | 2022-03-13T18:18:37.000Z | _components/button-group/guidance/variants/default/when-to-use.md | huyenltnguyen/uswds-site | c0f7418324c85e49413e7ddd071336f44a00b583 | [
"CC0-1.0"
] | 553 | 2018-01-23T18:05:13.000Z | 2022-03-30T19:59:24.000Z | _components/button-group/guidance/variants/default/when-to-use.md | huyenltnguyen/uswds-site | c0f7418324c85e49413e7ddd071336f44a00b583 | [
"CC0-1.0"
] | 92 | 2018-01-24T13:24:05.000Z | 2022-03-23T15:27:56.000Z | - **Actions have a contextual relationship.** For example, the default button group can be used when a form has both a primary and alternative action.
- **Stepping through linear content.** Buttons in a button group can be used for directional navigation and actions (e.g., "Back," "Next," "Continue," "Skip," "Cancel."). | 160.5 | 170 | 0.741433 | eng_Latn | 0.998211 |
4b58842f732619aa19eef5d4d77fb0f6ec625f59 | 30 | md | Markdown | README.md | liftSpindle/garble | f5bcec2374fc5b8abfcedb18072f2628716c0d11 | [
"MIT"
] | null | null | null | README.md | liftSpindle/garble | f5bcec2374fc5b8abfcedb18072f2628716c0d11 | [
"MIT"
] | null | null | null | README.md | liftSpindle/garble | f5bcec2374fc5b8abfcedb18072f2628716c0d11 | [
"MIT"
] | null | null | null | # garble
garble garble garble
| 10 | 20 | 0.8 | eng_Latn | 0.714722 |
4b59270c03d5237ad2c3592e1c3416167f9ebebb | 1,511 | md | Markdown | docs/content/about/_index.md | Devanshops/website | 7d2930cc745216746866ba9e8878675cdf7a2f8b | [
"Apache-2.0"
] | null | null | null | docs/content/about/_index.md | Devanshops/website | 7d2930cc745216746866ba9e8878675cdf7a2f8b | [
"Apache-2.0"
] | null | null | null | docs/content/about/_index.md | Devanshops/website | 7d2930cc745216746866ba9e8878675cdf7a2f8b | [
"Apache-2.0"
] | null | null | null | +++
title = "About Choria"
pre = "<b>1. </b>"
menu = "about"
weight = 10
+++
## Overview
Choria is an ongoing project to create a new Orchestration System with roots in The Marionette Collective (mcollective). It's part trivially installable mcollective and part brand-new development of turn-key end-user features, new content, and modernization.
Using Choria, an Open Source Puppet user can be up and running with a scalable, clustered, secure orchestration system within 30 minutes. Out of the box it's known to support 50 000 nodes on a single compute node while being secure by default, easy to maintain and production ready.
The Choria Project safeguards the investment users have made in The Marionette Collective through a compatibility framework capable of running mcollective agents.
Review the [Key Concepts](../concepts) section to discover what you can do with Choria, its architecture, and a detailed overview of the project.
If you just wish to get your hands dirty and experience Choria first-hand please see our [Vagrant Demo](https://github.com/choria-io/vagrant-demo).
## Status
This system is production ready but under active development. At various fronts we are working to replace reliance on Puppet Agent and legacy MCollective, the project lives on [GitHub](https://github.com/choria-io).
Extensive performance testing has been done that showed the system to be stable at over 100 000 nodes. Getting to 50 000 nodes is easily achievable using a single middleware compute node.
| 60.44 | 282 | 0.78822 | eng_Latn | 0.997272 |
4b5934bb533505b903f66b1b463bfeb19be92ac7 | 3,797 | md | Markdown | README.md | tiborvass/linuxkit-kubernetes | fb971dc95f3103d22206fa34004e5e8fd935cce2 | [
"Apache-2.0"
] | null | null | null | README.md | tiborvass/linuxkit-kubernetes | fb971dc95f3103d22206fa34004e5e8fd935cce2 | [
"Apache-2.0"
] | null | null | null | README.md | tiborvass/linuxkit-kubernetes | fb971dc95f3103d22206fa34004e5e8fd935cce2 | [
"Apache-2.0"
] | null | null | null | # Kubernetes and LinuxKit
[](https://circleci.com/gh/linuxkit/kubernetes)
This project aims to demonstrate how one can create minimal and immutable Kubernetes OS images with LinuxKit.
## Build requirements
To build images and to rebuild the individual packages you will need the [LinuxKit tool](https://github.com/linuxkit/linuxkit/tree/master/src/cmd/linuxkit)
If you already have `go` installed you can use `go get -u github.com/linuxkit/linuxkit/src/cmd/linuxkit` to install the tool.
On MacOS there is a `brew tap` available. Detailed instructions are at [linuxkit/homebrew-linuxkit](https://github.com/linuxkit/homebrew-linuxkit); the short summary is:
```
brew tap linuxkit/linuxkit
brew install --HEAD linuxkit
```
Build requirements from source:
- GNU `make`
- Docker
- optionally `qemu`
## Building OS images
To build the default OS images:
```
make all
```
By default this will build images using Docker Engine for execution. To instead use cri-containerd use:
```
make all KUBE_RUNTIME=cri-containerd
```
## Booting and initialising OS images
Boot Kubernetes master OS image using `hyperkit` on macOS: or `qemu` on Linux:
```
./boot.sh
```
or, to automatically initialise the cluster upon boot with no additional options
```
KUBE_MASTER_AUTOINIT="" ./boot.sh
```
Get IP address of the master:
```
ip addr show dev eth0
```
Login to the kubelet container:
```
./ssh_into_kubelet.sh <master-ip>
```
Manually initialise master with `kubeadm` if booted without `KUBE_MASTER_AUTOINIT`:
```
kubeadm-init.sh
```
Once `kubeadm` exits, make sure to copy the `kubeadm join` arguments,
and try `kubectl get nodes` from within the master.
If you just want to run a single node cluster with jobs running on the master, you can use:
```
kubectl taint nodes --all node-role.kubernetes.io/master- --kubeconfig /etc/kubernetes/admin.conf
```
To boot a node use:
```
./boot.sh <n> [<join_args> ...]
```
More specifically, to start 3 nodes use 3 separate shells and run this:
```
shell1> ./boot.sh 1 --token bb38c6.117e66eabbbce07d 192.168.65.22:6443
shell2> ./boot.sh 2 --token bb38c6.117e66eabbbce07d 192.168.65.22:6443
shell3> ./boot.sh 3 --token bb38c6.117e66eabbbce07d 192.168.65.22:6443
```
## Platform specific information
### MacOS
The above instructions should work as is.
### Linux
By default `linuxkit run` uses user-mode networking, which does not
support access from the host. To work around this you can use port
forwarding, e.g.
KUBE_RUN_ARGS="-publish 2222:22" ./boot.sh
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 2222 root@localhost
However, you will not be able to run worker nodes, since individual
instances cannot see each other.
Enabling networking between instances unfortunately requires `root`
privileges to configure a bridge and to set up the bridge-mode
privileged helper.
See http://wiki.qemu.org/Features/HelperNetworking for details; in
brief you will need:
- To set up and configure a bridge (including e.g. DHCP etc.) on the
  host. (You can reuse a bridge created by e.g. `virt-manager`.)
- To set the `qemu-bridge-helper` setuid root. The location differs by
distro, it could be `/usr/lib/qemu/qemu-bridge-helper` or
`/usr/local/libexec/qemu-bridge-helper` or elsewhere. You need to
`chmod u+s «PATH»`.
- List the bridge created in the first step in `/etc/qemu/bridge.conf`
with a line like `allow br0` (if your bridge is called `br0`).
- Set `KUBE_NETWORKING=bridge,«name»` e.g.
KUBE_NETWORKING="bridge,br0" ./boot.sh
KUBE_NETWORKING="bridge,br0" ./boot.sh 1 «options»
## Configuration
The `boot.sh` script has various configuration variables at the top
which can be overridden via the environment e.g.
KUBE_VCPUS=4 ./boot.sh
| 29.897638 | 168 | 0.747696 | eng_Latn | 0.954576 |
4b59470863b8c350c21610c594d982f3ce274db4 | 46 | md | Markdown | README.md | php-de/Community-Rules | f93e30dffd535ce62f719db0ea88abe374d60018 | [
"CC0-1.0"
] | null | null | null | README.md | php-de/Community-Rules | f93e30dffd535ce62f719db0ea88abe374d60018 | [
"CC0-1.0"
] | null | null | null | README.md | php-de/Community-Rules | f93e30dffd535ce62f719db0ea88abe374d60018 | [
"CC0-1.0"
] | null | null | null | # Community-Rules
Community Rules Development
| 15.333333 | 27 | 0.847826 | yue_Hant | 0.657534 |
4b59e8225d4bccb38b0a7a34a08dd61609b4ffef | 425 | md | Markdown | _includes/02-image.md | jme-mx/markdown-portfolio | 8674c36b02d7b7350029364dfe04ee365d8459de | [
"MIT"
] | null | null | null | _includes/02-image.md | jme-mx/markdown-portfolio | 8674c36b02d7b7350029364dfe04ee365d8459de | [
"MIT"
] | 6 | 2021-04-20T14:38:41.000Z | 2021-04-20T15:14:43.000Z | _includes/02-image.md | jme-mx/markdown-portfolio | 8674c36b02d7b7350029364dfe04ee365d8459de | [
"MIT"
] | null | null | null | Replace this with an image, like your photo. Ensure you include some alt-text.

| 141.666667 | 345 | 0.844706 | eng_Latn | 0.148937 |
4b5a20359d783b7345f113693210a9cacd650e5e | 9,781 | md | Markdown | website/docs/reference/transforms/aws_ec2_metadata.md | XOSplicer/vector | f04d9452471147c082d8262e103cdb33fb846e26 | [
"Apache-2.0"
] | null | null | null | website/docs/reference/transforms/aws_ec2_metadata.md | XOSplicer/vector | f04d9452471147c082d8262e103cdb33fb846e26 | [
"Apache-2.0"
] | null | null | null | website/docs/reference/transforms/aws_ec2_metadata.md | XOSplicer/vector | f04d9452471147c082d8262e103cdb33fb846e26 | [
"Apache-2.0"
] | null | null | null | ---
last_modified_on: "2020-04-01"
component_title: "AWS EC2 Metadata"
description: "The Vector `aws_ec2_metadata` transform accepts and outputs `log` events allowing you to enrich logs with AWS EC2 instance metadata."
event_types: ["log"]
function_category: "enrich"
issues_url: https://github.com/timberio/vector/issues?q=is%3Aopen+is%3Aissue+label%3A%22transform%3A+aws_ec2_metadata%22
sidebar_label: "aws_ec2_metadata|[\"log\"]"
source_url: https://github.com/timberio/vector/tree/master/src/transforms/aws_ec2_metadata.rs
status: "beta"
title: "AWS EC2 Metadata Transform"
---
import Alert from '@site/src/components/Alert';
import Fields from '@site/src/components/Fields';
import Field from '@site/src/components/Field';
The Vector `aws_ec2_metadata` transform
accepts and [outputs `log` events](#output) allowing you to enrich logs with
AWS EC2 instance metadata.
<!--
THIS FILE IS AUTOGENERATED!
To make changes please edit the template located at:
website/docs/reference/transforms/aws_ec2_metadata.md.erb
-->
## Requirements
<Alert icon={false} type="danger" classNames="list--warnings">
* [AWS IMDS v2][urls.aws_ec2_instance_metadata] is required. This is available by default on EC2.
* Running this transform within Docker on EC2 requires 2 network hops. Users must raise this limit. See the ["AWS IMDS v2" section][docs.transforms.aws_ec2_metadata#aws-imds-v2] for more info.
</Alert>
## Configuration
```toml title="vector.toml"
[transforms.my_transform_id]
type = "aws_ec2_metadata" # required
inputs = ["my-source-id"] # required
fields = ["instance-id", "local-hostname", "local-ipv4", "public-hostname", "public-ipv4", "ami-id", "availability-zone", "vpc-id", "subnet-id", "region"] # optional, default
host = "http://169.254.169.254" # optional, default
namespace = "" # optional, default
refresh_interval_secs = 10 # optional, default
```
<Fields filters={true}>
<Field
common={true}
defaultValue={["instance-id","local-hostname","local-ipv4","public-hostname","public-ipv4","ami-id","availability-zone","vpc-id","subnet-id","region"]}
enumValues={null}
examples={[["instance-id","local-hostname","local-ipv4","public-hostname","public-ipv4","ami-id","availability-zone","vpc-id","subnet-id","region"]]}
groups={[]}
name={"fields"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"[string]"}
unit={null}
>
### fields
A list of fields to include in each event.
</Field>
<Field
common={true}
defaultValue={"http://169.254.169.254"}
enumValues={null}
examples={["http://169.254.169.254"]}
groups={[]}
name={"host"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### host
Override the default EC2 Metadata host.
</Field>
<Field
common={true}
defaultValue={""}
enumValues={null}
examples={[""]}
groups={[]}
name={"namespace"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### namespace
Prepend a namespace to each field's key.
</Field>
<Field
common={true}
defaultValue={10}
enumValues={null}
examples={[10]}
groups={[]}
name={"refresh_interval_secs"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"int"}
unit={null}
>
### refresh_interval_secs
The interval in seconds at which the EC2 Metadata api will be called.
</Field>
</Fields>
## Output
The `aws_ec2_metadata` transform accepts and [outputs `log` events](#output) allowing you to enrich logs with AWS EC2 instance metadata.
For example:
```javascript
{
"ami-id": "ami-00068cd7555f543d5",
"availability-zone": "54.234.246.107",
"instance-id": "i-096fba6d03d36d262",
"local-hostname": "ip-172-31-93-227.ec2.internal",
"local-ipv4": "172.31.93.227",
"public-hostname": "ec2-54-234-246-107.compute-1.amazonaws.com",
"public-ipv4": "54.234.246.107",
"region": "us-east-1",
"role-name": "some_iam_role",
"subnet-id": "subnet-9d6713b9",
"vpc-id": "vpc-a51da4dc"
}
```
More detail on the output schema is below.
<Fields filters={true}>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["ami-00068cd7555f543d5"]}
groups={[]}
name={"ami-id"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### ami-id
The `ami-id` that the current EC2 instance is using.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["54.234.246.107"]}
groups={[]}
name={"availability-zone"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### availability-zone
The `availability-zone` that the current EC2 instance is running in.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["i-096fba6d03d36d262"]}
groups={[]}
name={"instance-id"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### instance-id
The `instance-id` of the current EC2 instance.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["ip-172-31-93-227.ec2.internal"]}
groups={[]}
name={"local-hostname"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### local-hostname
The `local-hostname` of the current EC2 instance.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["172.31.93.227"]}
groups={[]}
name={"local-ipv4"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### local-ipv4
The `local-ipv4` of the current EC2 instance.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["ec2-54-234-246-107.compute-1.amazonaws.com"]}
groups={[]}
name={"public-hostname"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### public-hostname
The `public-hostname` of the current EC2 instance.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["54.234.246.107"]}
groups={[]}
name={"public-ipv4"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### public-ipv4
The `public-ipv4` of the current EC2 instance.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["us-east-1"]}
groups={[]}
name={"region"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### region
The [`region`](#region) that the current EC2 instance is running in.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["some_iam_role"]}
groups={[]}
name={"role-name"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### role-name
The `role-name` that the current EC2 instance is using.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["subnet-9d6713b9"]}
groups={[]}
name={"subnet-id"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### subnet-id
The `subnet-id` of the current EC2 instance's default network interface.
</Field>
<Field
common={false}
defaultValue={null}
enumValues={null}
examples={["vpc-a51da4dc"]}
groups={[]}
name={"vpc-id"}
path={null}
relevantWhen={null}
required={false}
templateable={false}
type={"string"}
unit={null}
>
### vpc-id
The `vpc-id` of the current EC2 instance's default network interface.
</Field>
</Fields>
## How It Works
### AWS IMDS v2
v2 of the [AWS IMDS service][urls.aws_ec2_instance_metadata] addresses [a number
of very serious security issues][urls.aws_imds_v1_security_problems] with v1.
As part of tightening security, Amazon limited the number of network hops allowed
to communicate with this service to 1. Unfortunately, when running Vector within
Docker this introduces an additional hop. Therefore, you _must_ configure your
AWS instances to allow for 2 hops:
```bash
aws ec2 modify-instance-metadata-options --instance-id <ID> --http-endpoint enabled --http-put-response-hop-limit 2
```
If you do not raise this limit the `aws_ec2_metadata` transform will not work.
### Complex Processing
If you encounter limitations with the `aws_ec2_metadata`
transform then we recommend using a [runtime transform][urls.vector_programmable_transforms].
These transforms are designed for complex processing and give you the power of
a full programming runtime.
### Environment Variables
Environment variables are supported through all of Vector's configuration.
Simply add `${MY_ENV_VAR}` in your Vector configuration file and the variable
will be replaced before being evaluated.
You can learn more in the
[Environment Variables][docs.configuration#environment-variables] section.
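Conceptually, the substitution behaves like a simple pattern replacement over the configuration text. The sketch below is plain Python, not Vector's actual implementation, and the variable name is only an example:

```python
import os
import re

def substitute_env_vars(config_text):
    # Replace each ${NAME} with the value of the NAME environment variable.
    # Unset variables become empty strings in this sketch; Vector performs
    # the real substitution internally when it loads the configuration file.
    return re.sub(
        r"\$\{(\w+)\}",
        lambda match: os.environ.get(match.group(1), ""),
        config_text,
    )

os.environ["MY_ENV_VAR"] = "instance-metadata"
print(substitute_env_vars('namespace = "${MY_ENV_VAR}"'))
# → namespace = "instance-metadata"
```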
[docs.configuration#environment-variables]: /docs/setup/configuration/#environment-variables
[docs.transforms.aws_ec2_metadata#aws-imds-v2]: /docs/reference/transforms/aws_ec2_metadata/#aws-imds-v2
[urls.aws_ec2_instance_metadata]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
[urls.aws_imds_v1_security_problems]: https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/
[urls.vector_programmable_transforms]: https://vector.dev/components?functions%5B%5D=program
# Overview
This plugin reads a specification of routes from standard input (later I
will parametrize the input with a command line option) and checks
whether it is satisfied or not. A route is specified with the
following grammar:
```
route = point, sep, {point}, sep, point, eol;
point = hex | str;
sep = " " | ";" | "," | " ";
eol = "\r" | "\n" | "\n\r" | "\r\n";
hex = ?conventional hexadecimal number?;
str = ?anything that doesn't start with 0?;
```
Less formally, a route is a sequence of points, where the first point is
the source of the route, the last is the destination point, and whatever is in
between is a checkpoint. A point can be an address, or a string denoting
a symbol name.
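As a rough illustration (not part of the plugin; the exact separator handling is an assumption based on the grammar above), splitting a route line into its parts could look like:

```python
import re

def parse_route(line):
    # Split a route line into (source, checkpoints, destination).
    # Separators are assumed to be spaces, tabs, semicolons, or commas.
    points = [p for p in re.split(r"[ \t;,]+", line.strip()) if p]
    if len(points) < 2:
        raise ValueError("a route needs at least a source and a destination")
    return points[0], points[1:-1], points[-1]

print(parse_route("a c b"))      # → ('a', ['c'], 'b')
print(parse_route("0x400527,main"))  # → ('0x400527', [], 'main')
```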
The route is valid if it exists, i.e., there is a path from the source
point to the destination point (which implies that both points should
also exist), and for each checkpoint there exists a path that passes
through it. These can be different paths, i.e., it is not guaranteed that
there exists a single path that visits all specified checkpoints.
# Usage
## Interactive
For interactive use it is suggested to use `rlwrap` or a similar utility:
```sh
rlwrap bap exe -lcheckpaths
```
Here is the example of interaction:
```sh
$ rlwrap bap paths -lcheckpath
> a c b
> a f b
[FAIL]: main@0x40052F:64 violates a -> f -> b
> a happens b
[FAIL]: no such checkpoints: happens
>
```
No output means that the path is valid. If a path is invalid, it will be
printed. Possible outputs:
```
1. [FAIL]: <callsite> violates <src> -> <missed-check-point> -> <dst>
2. [FAIL]: no such checkpoints: <point>, [<point>]
```
The second variant is printed when a requested checkpoint wasn't found
in the file at all, but paths may exist. In that case it is not
guaranteed that the path from [src] to [dst] exists. So this should be fixed.
## Batch mode
Just redirect a file with route specifications to `bap`, e.g.,
```sh
$ bap paths -lcheckpath < routes
```
# Compatibility
Requires BAP 0.9.6 with #191 changeset applied.
---
author: PatrickFarley
ms.service: cognitive-services
ms.subservice: forms-recognizer
ms.topic: include
ms.date: 11/14/2019
ms.author: pafarley
ms.openlocfilehash: 6a6b0d9740d19270f8daa3608bc125edd0fbec37
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/25/2020
ms.locfileid: "96005082"
---
## <a name="analyze-forms-for-key-value-pairs-and-tables"></a>Analyze forms for key-value pairs and tables
Next, you will use the newly trained model to analyze a document and extract key-value pairs and tables from it. Call the **[Analyze Form](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm)** API by running the following code in a new Python script. Before you run the script, make the following changes:
1. Replace `<file path>` with the file path of your form (for example, C:\temp\file.pdf). It can also be the URL of a remote file. For this quickstart, you can use the files in the **Test** folder of the [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*).
1. Replace `<model_id>` with the model ID you received in the previous section.
1. Replace `<endpoint>` with the endpoint that you obtained with your Form Recognizer subscription key. You can find it on the **Overview** tab of your Form Recognizer resource.
1. Replace `<file type>` with the file type. Supported types: `application/pdf`, `image/jpeg`, `image/png`, `image/tiff`.
1. Replace `<subscription key>` with your subscription key.
# <a name="v20"></a>[v2.0](#tab/v2-0)
```python
########### Python Form Recognizer Async Analyze #############
import json
import time
from requests import get, post
# Endpoint URL
endpoint = r"<endpoint>"
apim_key = "<subscription key>"
model_id = "<model_id>"
post_url = endpoint + "/formrecognizer/v2.0/custom/models/%s/analyze" % model_id
source = r"<file path>"
params = {
"includeTextDetails": True
}
headers = {
# Request headers
'Content-Type': '<file type>',
'Ocp-Apim-Subscription-Key': apim_key,
}
with open(source, "rb") as f:
data_bytes = f.read()
try:
resp = post(url = post_url, data = data_bytes, headers = headers, params = params)
if resp.status_code != 202:
print("POST analyze failed:\n%s" % json.dumps(resp.json()))
quit()
print("POST analyze succeeded:\n%s" % resp.headers)
get_url = resp.headers["operation-location"]
except Exception as e:
print("POST analyze failed:\n%s" % str(e))
quit()
```
# <a name="v21-preview"></a>[v2.1 (preview)](#tab/v2-1)
```python
########### Python Form Recognizer Async Analyze #############
import json
import time
from requests import get, post
# Endpoint URL
endpoint = r"<endpoint>"
apim_key = "<subscription key>"
model_id = "<model_id>"
post_url = endpoint + "/formrecognizer/v2.1-preview.2/custom/models/%s/analyze" % model_id
source = r"<file path>"
params = {
"includeTextDetails": True
}
headers = {
# Request headers
'Content-Type': '<file type>',
'Ocp-Apim-Subscription-Key': apim_key,
}
with open(source, "rb") as f:
data_bytes = f.read()
try:
resp = post(url = post_url, data = data_bytes, headers = headers, params = params)
if resp.status_code != 202:
print("POST analyze failed:\n%s" % json.dumps(resp.json()))
quit()
print("POST analyze succeeded:\n%s" % resp.headers)
get_url = resp.headers["operation-location"]
except Exception as e:
print("POST analyze failed:\n%s" % str(e))
quit()
```
---
1. Save the code in a file with a .py extension. For example, *form-recognizer-analyze.py*.
1. Open a command prompt window.
1. At the prompt, use the `python` command to run the sample. For example: `python form-recognizer-analyze.py`.
When you call the **Analyze Form** API, you receive a `202 (Success)` response with an **Operation-Location** header. The value of this header is an ID that you will use to track the results of the analyze operation. The script above prints the value of this header to the console.
## <a name="get-the-analyze-results"></a>Get the analyze results
Add the following code to the bottom of your Python script. This code uses the ID value from the previous call in a new API call to retrieve the analysis results. The **Analyze Form** operation is asynchronous, so this script calls the API at regular intervals until the results are available. We recommend an interval of one second or more.
```python
n_tries = 15
n_try = 0
wait_sec = 5
max_wait_sec = 60
while n_try < n_tries:
try:
resp = get(url = get_url, headers = {"Ocp-Apim-Subscription-Key": apim_key})
resp_json = resp.json()
if resp.status_code != 200:
print("GET analyze results failed:\n%s" % json.dumps(resp_json))
quit()
status = resp_json["status"]
if status == "succeeded":
print("Analysis succeeded:\n%s" % json.dumps(resp_json))
quit()
if status == "failed":
print("Analysis failed:\n%s" % json.dumps(resp_json))
quit()
# Analysis still running. Wait and retry.
time.sleep(wait_sec)
n_try += 1
wait_sec = min(2*wait_sec, max_wait_sec)
except Exception as e:
msg = "GET analyze results failed:\n%s" % str(e)
print(msg)
quit()
print("Analyze operation did not complete within the allocated time.")
```
# Wgyt Website Contributing guidelines
## rule 1
Follow the code of conduct; see code-of-conduct.md.
## rule 2
Please make your commit messages useful; don't just make it "fix". (I'm guilty of this myself. - @wgyt)
## rule 3
Solve bugs and issues here; general discussion goes in Discussions.
# upload-and-download
Example of uploading and downloading a file using Streamlit
# gtpd-crawler
A crawler for GTPD crime logs.
## Installation
You need:
* Python (2 or 3)
* MongoDB
### Python Packages
You also need a few Python packages:
pip install -r requirements.txt
## Run the crawler
In a separate terminal window, run `mongod` to start MongoDB on your local computer.
Then run:
./scripts/run.sh or scripts\run.bat
# Importing and Exporting Data
Data can be imported to your local MongoDB by running:
./scripts/import_mongo.sh or scripts\import_mongo.bat
Data can be saved to the repository with:
./scripts/export_mongo.sh or scripts\export_mongo.bat
# ClassDescription
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Active** | **bool** | When `true`, indicates that the business can assign this class description to new class schedules.<br /> When `false`, indicates that the business cannot assign this class description to new class schedules. | [optional]
**Description** | **string** | The long version of the class description. | [optional]
**Id** | **int** | The class description's ID. | [optional]
**ImageURL** | **string** | The class description's image URL, if any. If it does not exist, nothing is returned. | [optional]
**LastUpdated** | [**\DateTime**](\DateTime.md) | The date this class description was last modified. | [optional]
**Level** | [**\Nlocascio\Mindbody\Model\Level**](Level.md) | The level information about this class. | [optional]
**Name** | **string** | The name of this class description. | [optional]
**Notes** | **string** | Any notes about the class description. | [optional]
**Prereq** | **string** | Any prerequisites for the class. | [optional]
**Program** | [**\Nlocascio\Mindbody\Model\Program**](Program.md) | Contains information about the class description's program. | [optional]
**SessionType** | [**\Nlocascio\Mindbody\Model\SessionType**](SessionType.md) | Contains information about the class description's session type. | [optional]
**Category** | **string** | The category of this class description. | [optional]
**CategoryId** | **int** | The category ID of this class description. | [optional]
**Subcategory** | **string** | The subcategory of this class description. | [optional]
**SubcategoryId** | **int** | The subcategory ID of this class description. | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
---
layout: project_single
title: "Hand-painted Teacup and Saucer...Work Of Art..."
slug: "hand-painted-teacup-and-saucerwork-of-art"
parent: "floral-china-tea-set"
---
Hand-painted Teacup and Saucer...Work Of Art...
---
title: 'Weekend random thoughts'
date: 2021-01-1
permalink: /posts/2021-01-1-test/
tags:
- Testing
- debug
---
<p style="font-family: Garamond; font-size:14pt; font-style:normal">
Weekend random thoughts on practices/views I wish we emphasized more as a community. Sharing in case someone benefits.
<br/>
1) Passion is critical in life as it makes the journey fun, but realize that your passion is also limited to the things you have been exposed to. So it is ok for it to evolve with time. Explore many things early in your career & let your passion drive the journey
<br/>
2) Exposure is key & plays a huge part in your succeeding at things. If you are the first in your family to do something, it will be harder for you. You might not have the right contact or info at the right time. Your journey is unique, so work on that. Don't be disheartened by someone else's success
<br/>
3) Normalize failures & encourage growth from them. Realize that the faster you fail at something, the quicker you will learn to master that situation. Let's strive to develop an environment that allows people to become vulnerable & face their fears
<br/>
4) There will always be people who are better than you & that's ok. Know that it is a process, & have a larger vision. At each moment, take the next step in the direction that gets you closer to realizing it & enjoy the journey
<br/>
5) Schools should shift their focus from training us to be better test-takers to creating an environment that encourages us to be curious & collaborative
---
name: Seal Software
logo: seal-software.png
website: https://seal-software.com
location:
- Gothenburg
positions: []
---
## Vocabulary
There are two repositories: the regular file system tree itself, and Xcache (at least one instance of it). Gaia only reads from the regular file system tree (from root folders that are specified by the user) and stores data in Xcache.
The data stored in Xcache are called **Aion points**. They are defined as JSON objects. Aion points are represented inside the program as **Aeson Values**. We also have the notion of a **Gaia projection** of Aeson Values, by which we extract from an Aeson Value the data it contains in a more user-friendly way. A **FilePath** is a complete FS path to a file. A **FolderPath** is a complete FS path to a directory. The term used to refer to both FilePaths and FolderPaths is **LocationPath**.
# sbt-diff-project
### Make CI for sbt multi-project FASTER.
Shows the projects, and the projects that depend on them, that are affected by a git diff.
[](https://circleci.com/gh/opt-tech/sbt-diff-project)
## Use case
Imagine that your sbt multi-project is defined as below.
```scala
lazy val root = project in file(".")
lazy val core = project
lazy val web = project.dependsOn(core)
lazy val batch = project.dependsOn(core)
lazy val secondBatch = project.dependsOn(batch)
```
Then, suppose the following:
- You write unit tests to all projects.
- You added this multi-project build to Git version control and commit to `master` branch.
- `master` branch passed the tests.
If you modified a file in the `batch` project, you only need to test the `batch` and `secondBatch` projects.
This can be achieved as follows (assuming the changes are committed to the `feature/batch` branch):
```bash
$ cd /path/to/sbt-project
$ sbt --error 'set showSuccess := false' 'git-diff-all master feature/batch' # suppress sbt debug log
batch
secondBatch
```
Voila: the `git-diff-all` command prints `batch` (which contains the modified file) and `secondBatch` (which depends on `batch`).
Now, you can test `batch` and `secondBatch` via pipe or redirection.
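For example, a sketch of such a pipeline (the project names below stand in for whatever `git-diff-all` actually printed; in real use, replace the `printf` with the sbt invocation shown above):

```shell
# Turn each printed project into a "<project>/test" sbt command.
# Real use:
#   sbt --error 'set showSuccess := false' 'git-diff-all master feature/batch'
printf 'batch\nsecondBatch\n' \
  | awk '{printf "%s/test ", $1}'
# → batch/test secondBatch/test
# ...then hand the result to a single sbt invocation, e.g. `| xargs -r sbt`
```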
## Usage
### Installation
Add the plugin in project/plugins.sbt:
```scala
addSbtPlugin("jp.ne.opt" % "sbt-diff-project" % "0.2.1")
```
### `git-diff-all` command
The `git-diff-all` sbt command runs `git diff --name-only` internally.
You can use this command as follows.
```bash
$ sbt git-diff-all
$ sbt 'git-diff-all 752a93 b2f98f'
$ sbt 'git-diff-all master feature/foo'
$ sbt --error 'set showSuccess := false' git-diff-all # suppress sbt debug log
```
You can use `printGitDiffToFile` to configure the output destination (stdout by default):
```bash
$ sbt
> set printGitDiffToFile := Some(file("/path/to/diff.txt"))
> git-diff-all
```
### Configurations
- `gitDiffSeparator` : Specifies the separator string used when printing projects. `\n` by default.
- `printGitDiffByBaseDirectory` : Print the base directory instead of the project ID. `false` by default.
- `printGitDiffByAbsolutePath` : Print absolute path instead of related path. (only affects when `printGitDiffByBaseDirectory` is true) `false` by default.
- `printGitDiffToFile` : Print to a file instead of stdout. `None` (print to stdout) by default.
- `excludeRootProject` : This plugin derives the project diff from each project's base directory. Since the root project's base directory path is a prefix of every project's, excluding the root project from the diff is reasonable. `true` by default.
- `patternsAffectAllProjects` : For some special files, you would want to force testing all projects if the file is modified. (e.g. `.travis.yml`, `circle.yml`, `build.sbt`, ...) `Seq(""".+\.sbt$""", """.+project/[^/]+\.scala""")` by default.
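For instance, if changes to CI configuration files should also re-test everything, the default patterns can be extended in `build.sbt` (a sketch only; the file names are illustrative, and this assumes the setting holds a `Seq[String]` of regexes as described above):

```scala
// Re-test all projects when CI configuration changes, in addition to the defaults.
patternsAffectAllProjects ++= Seq("""^\.travis\.yml$""", """^circle\.yml$""")
```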
## License
Published under the MIT License.
# Selenium Grid Node - Chrome
Selenium Node configured to run Google Chrome.
This one runs 10 instances of Chrome and allows up to 10 concurrent sessions.
## Dockerfile
[`selenium/node-chrome` Dockerfile](https://github.com/hvdb/docker-selenium/blob/master/NodeChrome/Dockerfile)
## How to use this image
First, you will need a Selenium Grid Hub that the Node will connect to.
You need to set the max-session setting to the number you want.
The default is only 5.
This means that only 5 WebDriver sessions can be started, i.e., only 5 tests at once.
So if you are running a big test suite (or using this Chrome node) you need to increase this number.
```
$ docker run -d -P --name selenium-hub hvdb/docker-selenium-hub
```
Once the hub is up and running, you will want to launch nodes that can run tests. You can run as many nodes as you wish.
```
$ docker run -d --link selenium-hub:hub hvdb/docker-node-chrome-10
```
## What is Selenium?
_Selenium automates browsers._ That's it! What you do with that power is entirely up to you. Primarily, it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should!) also be automated as well.
Selenium has the support of some of the largest browser vendors who have taken (or are taking) steps to make Selenium a native part of their browser. It is also the core technology in countless other browser automation tools, APIs and frameworks.
See the Selenium [site](http://docs.seleniumhq.org/) for documentation on usage within your test code.
## License
View [license information](https://code.google.com/p/selenium/source/browse/COPYING) for the software contained in this image.
# jubilant-potato
long term
Paperclip-Cloudfiles
=================================
Paperclip-Cloudfiles is intended as an easy file attachment library for ActiveRecord, with all attachments served from Rackspace's Cloud Files.
Some features include:
* Files aren't saved to their final locations on disk, nor are they deleted if set to nil, until
ActiveRecord::Base#save is called.
* Validations are managed based on size and
presence, if required.
* It can transform its assigned image into thumbnails if
needed, by installing ImageMagick (which,
for most modern Unix-based systems, is as easy as installing the right
packages).
* Attached files are either saved to the filesystem, or to Rackspace Cloudfiles, and referenced in the
browser by an easily understandable specification, which has sensible and
useful defaults.
Note: The Thoughtbot guys have indicated that they
don't want to pull any code into the official Paperclip mainline that they don't
personally use on projects, so until they discover the joy of Cloud Files, this
fork is available on RubyGems.org at http://rubygems.org/gems/paperclip-cloudfiles
The complete [RDoc](http://rdoc.info/github/minter/paperclip) is online.
Changes in this repo
------------
Added the ability to refresh images of classes with namespaces. For example:
rake paperclip:refresh CLASS='User::Asset'
Requirements
------------
ImageMagick must be installed and Paperclip must have access to it. Run `which convert` (one of the ImageMagick
utilities). It might return `/usr/local/bin/convert` or `/usr/bin/convert`.
Add the returned line to `config/environments/development.rb` and to `config/environments/production.rb`:
Paperclip.options[:command_path] = "/usr/local/bin/"
Installation
------------
Include the gem in your Gemfile (Rails 3 or Rails 2.x with Bundler):
gem 'cloudfiles'
gem 'cocaine' # a dependency that paperclip didn't pick up yet
gem 'paperclip-cloudfiles', :require => 'paperclip'
(Rails 2 only) In your environment.rb:
config.gem "paperclip-cloudfiles", :lib => 'paperclip'
This is because the gem name and the library name don't match.
Quick Start
-----------
Create `config/rackspace_cloudfiles.yml`
DEFAULTS: &DEFAULTS
username: yourusernamehere
api_key: yourapikeyhere
development:
<<: *DEFAULTS
container: dev_avatars
test:
<<: *DEFAULTS
container: test_avatars
production:
<<: *DEFAULTS
container: avatars
Declare that your model has an attachment with the has_attached_file method, and give it a name.
In your model:
```ruby
class User < ActiveRecord::Base
# More information about the has_attached_file options are available in the
# documentation of Paperclip::ClassMethods.
has_attached_file :avatar,
:styles => { :medium => "300x300>", :thumb => "100x100>" },
:storage => :cloud_files,
:cloudfiles_credentials => "#{Rails.root}/config/rackspace_cloudfiles.yml"
# Validation Methods:
validates_attachment_presence :avatar
validates_attachment_content_type :avatar, :content_type => ['image/jpeg', 'image/png']
validates_attachment_size :avatar, :in => 1..1.megabyte
end
```
Paperclip will wrap up to four attributes (all prefixed with that attachment's name,
so you can have multiple attachments per model if you wish) and give them a
friendly front end.
In your migrations:
```ruby
class AddAvatarColumnsToUser < ActiveRecord::Migration
def self.up
add_column :users, :avatar_file_name, :string
add_column :users, :avatar_content_type, :string
add_column :users, :avatar_file_size, :integer
add_column :users, :avatar_updated_at, :datetime
end
def self.down
remove_column :users, :avatar_file_name
remove_column :users, :avatar_content_type
remove_column :users, :avatar_file_size
remove_column :users, :avatar_updated_at
end
end
```
In your edit and new views:
<% form_for :user, @user, :url => user_path, :html => { :multipart => true } do |form| %>
<%= form.file_field :avatar %>
<% end %>
In your controller:
def create
@user = User.create( params[:user] )
end
In your show view:
<%= image_tag @user.avatar.url %>
<%= image_tag @user.avatar.url(:medium) %>
<%= image_tag @user.avatar.url(:thumb) %>
Storage
-------
The files that are assigned as attachments are, by default, placed in the
directory specified by the :path option to has_attached_file. By default, this
location is ":rails_root/public/system/:attachment/:id/:style/:filename". This
location was chosen because on standard Capistrano deployments, the
public/system directory is symlinked to the app's shared directory, meaning it
will survive between deployments. For example, using that :path, you may have a
file at
/data/myapp/releases/20081229172410/public/system/avatars/13/small/my_pic.png
_NOTE: This is a change from previous versions of Paperclip, but is overall a
safer choice for the default file store._
You may also choose to store your files using Rackspace's Cloud Files service. You can find more information about Cloud Files storage at the description for Paperclip::Storage::CloudFile
Note:
Files on the local filesystem (and in the Rails app's public directory), and on Rackspace Cloudfiles, will be available to the internet at large. For the filesystem, if you require access control, it's
possible to place your files in a different location. You will need to change
both the :path and :url options in order to make sure the files are unavailable
to the public. Both :path and :url allow the same set of interpolated
variables.
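For example, a non-public location might be sketched like this (the path and URL are illustrative; you would serve the file through a controller action that checks permissions):

```ruby
has_attached_file :report,
  :path => ":rails_root/private/:attachment/:id/:style/:filename",
  :url  => "/reports/:id"   # route this URL to a permission-checking controller action
```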
Post Processing
---------------
Paperclip supports an extensible selection of post-processors. When you define
a set of styles for an attachment, by default it is expected that those
"styles" are actually "thumbnails". However, you can do much more than just
thumbnail images. By defining a subclass of Paperclip::Processor, you can
perform any processing you want on the files that are attached. Any file in
your Rails app's lib/paperclip_processors directory is automatically loaded by
paperclip, allowing you to easily define custom processors. You can specify a
processor with the :processors option to has_attached_file:
    has_attached_file :scan, :styles => { :text => { :quality => :better } },
                      :processors => [:ocr]
This would load the hypothetical class Paperclip::Ocr, which would have the
hash "{ :quality => :better }" passed to it along with the uploaded file. For
more information about defining processors, see Paperclip::Processor.
The default processor is Paperclip::Thumbnail. For backwards compatibility
reasons, you can pass a single geometry string or an array containing a
geometry and a format, which the file will be converted to, like so:
    has_attached_file :avatar, :styles => { :thumb => ["32x32#", :png] }
This will convert the "thumb" style to a 32x32 square in png format, regardless
of what was uploaded. If the format is not specified, it is kept the same (i.e.
jpgs will remain jpgs).
Multiple processors can be specified, and they will be invoked in the order
they are defined in the :processors array. Each successive processor will
be given the result of the previous processor's execution. All processors will
receive the same parameters, which are what you define in the :styles hash.
For example, assuming we had this definition:
    has_attached_file :scan, :styles => { :text => { :quality => :better } },
                      :processors => [:rotator, :ocr]
then both the :rotator processor and the :ocr processor would receive the
options "{ :quality => :better }". This parameter may not mean anything to one
or more of the processors, and they are expected to ignore it.
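The chaining behavior can be sketched in plain Ruby (an illustration of the contract only -- these lambdas are stand-ins, not Paperclip's real Rotator or Ocr processors):

```ruby
# Each processor receives the previous processor's output plus the shared
# style options, mirroring how Paperclip chains :processors => [:rotator, :ocr].
rotator = lambda { |file, options| "#{file}+rotated" }
ocr     = lambda { |file, options| "#{file}+ocr(#{options[:quality]})" }

options = { :quality => :better }
result  = [rotator, ocr].reduce("scan.png") { |file, processor| processor.call(file, options) }
# result == "scan.png+rotated+ocr(better)"
```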
_NOTE: Because processors operate by turning the original attachment into the
styles, no processors will be run if there are no styles defined._
Events
------
Before and after the Post Processing step, Paperclip calls back to the model
with a few callbacks, allowing the model to change or cancel the processing
step. The callbacks are `before_post_process` and `after_post_process` (which
are called before and after the processing of each attachment), and the
attachment-specific `before_<attachment>_post_process` and
`after_<attachment>_post_process`. The callbacks are intended to be as close to
normal ActiveRecord callbacks as possible, so if you return false (specifically
- returning nil is not the same) in a before_ filter, the post processing step
will halt. Returning false in an after_ filter will not halt anything, but you
can access the model and the attachment if necessary.
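Since the halt-on-false rule is easy to get wrong, here is a minimal self-contained sketch of that semantic (a simplified stand-in, not ActiveRecord's actual callback machinery):

```ruby
# A before_ callback halts the chain only when it returns false;
# returning nil does NOT halt, exactly as described above.
class MiniCallbackChain
  def initialize
    @before = []
  end

  def before_post_process(&block)
    @before << block
  end

  def run
    @before.each { |callback| return :halted if callback.call == false }
    :processed
  end
end

halted = MiniCallbackChain.new
halted.before_post_process { false }     # false => processing halts

continued = MiniCallbackChain.new
continued.before_post_process { nil }    # nil => processing continues
```

Here `halted.run` returns `:halted`, while `continued.run` returns `:processed`.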
_NOTE: Post processing will not even *start* if the attachment is not valid
according to the validations. Your callbacks and processors will *only* be
called with valid attachments._
Testing
-------
Paperclip provides rspec-compatible matchers for testing attachments. See the
documentation on [Paperclip::Shoulda::Matchers](http://rubydoc.info/gems/paperclip/Paperclip/Shoulda/Matchers)
for more information.
Contributing
------------
If you'd like to contribute a feature or bugfix: Thanks! To make sure your
fix/feature has a high chance of being included, please read the following
guidelines:
1. Ask on the [mailing list](http://groups.google.com/group/paperclip-plugin), or
post a new [GitHub Issue](http://github.com/thoughtbot/paperclip/issues).
2. Make sure there are tests! We will not accept any patch that is not tested.
It's a rare time when explicit tests aren't needed. If you have questions
about writing tests for paperclip, please ask the mailing list.
Credits
-------

Paperclip is maintained and funded by [thoughtbot, inc](http://thoughtbot.com/community)
Thank you to all [the contributors](https://github.com/thoughtbot/paperclip/contributors)!
The names and logos for thoughtbot are trademarks of thoughtbot, inc.
License
-------
Paperclip is Copyright © 2008-2011 thoughtbot. It is free software, and may be redistributed under the terms specified in the MIT-LICENSE file.
| 37.683824 | 201 | 0.741951 | eng_Latn | 0.995217 |
4b612d54a0c377a2be6b6f325eabbad2255b5254 | 477 | md | Markdown | src/components/settings-row/documentation/notes.md | ivangonzalezsp/carbon | 5ce8b11fbee20db9a1716e566e20b0eadee50e4f | [
"Apache-2.0"
] | 1 | 2019-09-26T06:35:08.000Z | 2019-09-26T06:35:08.000Z | src/components/settings-row/documentation/notes.md | ivangonzalezsp/carbon | 5ce8b11fbee20db9a1716e566e20b0eadee50e4f | [
"Apache-2.0"
] | null | null | null | src/components/settings-row/documentation/notes.md | ivangonzalezsp/carbon | 5ce8b11fbee20db9a1716e566e20b0eadee50e4f | [
"Apache-2.0"
] | 13 | 2020-05-05T06:26:37.000Z | 2020-05-08T05:43:07.000Z | # Designer Notes
- Useful to create a series of rows with a heading, explanatory text, and UI controls in each row.
- A good example is a settings page, or step-by-step wizard.
# Related Components
- Need an overall container? [Try AppWrapper](/components/app-wrapper "Try App Wrapper").
- Need a container for your primary navigation? [Try Navigation Bar](/components/navigation-bar "Try Navigation Bar").
- Laying out a page in columns? [Try Row](/components/row "Try Row"). | 59.625 | 118 | 0.754717 | eng_Latn | 0.991891 |
4b614e24a47e35702f45e4b1cd27d19345410494 | 1,664 | md | Markdown | README.md | marier-nico/event-processor | 5ec3040613a506de8427c66da6108a9457174922 | [
"MIT"
] | 3 | 2021-11-01T10:51:51.000Z | 2021-12-20T23:38:33.000Z | README.md | marier-nico/event-processor | 5ec3040613a506de8427c66da6108a9457174922 | [
"MIT"
] | 29 | 2021-03-20T23:27:51.000Z | 2021-12-20T10:09:26.000Z | README.md | marier-nico/event-processor | 5ec3040613a506de8427c66da6108a9457174922 | [
"MIT"
] | 2 | 2021-11-29T23:10:50.000Z | 2022-01-20T19:13:55.000Z | # Process Events In Style




This library aims to simplify the common pattern of event processing. It simplifies the process of filtering,
dispatching and pre-processing events as well as injecting dependencies in event processors.
The only requirement is that your events are regular Python dictionaries.
Take a look at the following examples to get an overview of the features available! Of course, you can mix and combine
them in any way you like to create more complex scenarios.
```python
from event_processor import EventProcessor, Event
from event_processor.filters import Eq
event_processor = EventProcessor()
@event_processor.processor(Eq("service.type", "service_a"))
def process_service_a(event: Event):
return event["service"]["status"] == "up"
@event_processor.processor(Eq("service.type", "service_b"))
def process_service_b(event: Event):
    return event["service"]["authorized"]
service_a_event = {
"service": {
"type": "service_a",
"status": "down"
}
}
service_b_event = {
"service": {
"type": "service_b",
"authorized": False
}
}
event_processor.invoke(service_a_event) # False
event_processor.invoke(service_b_event) # False
```
# Documentation
Find the full documentation on [Read the Docs](https://event-processor.readthedocs.io/).
| 32.627451 | 118 | 0.748798 | eng_Latn | 0.808571 |
4b61a862e14ef37c8088e59080499b0a60738051 | 1,342 | md | Markdown | CHANGELOG.md | sithhell/docker-machine-driver-packet | d13377ce4a0f979aecc7607031a5ca3efd483275 | [
"BSD-3-Clause"
] | 1 | 2019-08-06T18:08:03.000Z | 2019-08-06T18:08:03.000Z | CHANGELOG.md | sithhell/docker-machine-driver-packet | d13377ce4a0f979aecc7607031a5ca3efd483275 | [
"BSD-3-Clause"
] | null | null | null | CHANGELOG.md | sithhell/docker-machine-driver-packet | d13377ce4a0f979aecc7607031a5ca3efd483275 | [
"BSD-3-Clause"
] | null | null | null | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
This project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [0.1.5] - 2017-11-07
### Added
- Ability to pass plan as either `baremetal_T` or `typeT`
- Verify plan is valid
### Fixed
- Build against latest packngo api break
## [0.1.4] - 2017-07-21
### Added
- Expanded the list of valid OperatingSystems
- Add RancherOS ssh username
- Minor tweaks to Driver structure to be consistent with upstream machine drivers
- Ability to pass in userdata
### Changed
- Default os is now `ubuntu_16_04` instead of `ubuntu_14_04`
- Default plan is now `baremetal_0` instead of `baremetal_1`
## [0.1.3] - 2016-12-30
### Fixed
- Build against latest packngo api break
### Changed
- Update minimum supported version of docker-machine to v0.8.2+
## [0.1.2] - 2016-03-03
### Changed
- 404 responses of a DELETE call are no longer treated as an error
## [0.1.1] - 2016-03-03
### Changed
- Update minimum supported version of docker-machine to v0.5.5+
## [0.1.0] - 2015-12-05
### Fixed
- Local storage of generated ssh keys
## [0.0.2] - 2015-11-19
Nothing done, NOP release.
## [0.0.1] - 2015-11-19
### Added
- Initial release, has basic device creation working
| 25.807692 | 82 | 0.707899 | eng_Latn | 0.968283 |
4b621f91f6f7a2517ea0df0a7add102e4cd2e6e1 | 1,415 | md | Markdown | README.md | opatiny/segments-manipulator | bfd4f8900d8f955f0554fa797ba85906ed186609 | [
"MIT"
] | null | null | null | README.md | opatiny/segments-manipulator | bfd4f8900d8f955f0554fa797ba85906ed186609 | [
"MIT"
] | null | null | null | README.md | opatiny/segments-manipulator | bfd4f8900d8f955f0554fa797ba85906ed186609 | [
"MIT"
] | null | null | null | # segments-manipulator
[![NPM version][npm-image]][npm-url]
[![build status][travis-image]][travis-url]
[![Test coverage][codecov-image]][codecov-url]
[![David deps][david-image]][david-url]
[![npm download][download-image]][download-url]
This module allows you to draw segments and manipulate them (rotate, translate, scale, ...).
## Installation
`$ npm install segments-manipulator`
## [API Documentation](https://cheminfo-js.github.io/segments-manipulator/)
## Example
```js
const segmentsManipulator = require('segments-manipulator');
```
## License
[MIT](./LICENSE)
[npm-image]: https://img.shields.io/npm/v/segments-manipulator.svg?style=flat-square
[npm-url]: https://www.npmjs.com/package/segments-manipulator
[travis-image]: https://img.shields.io/travis/cheminfo-js/segments-manipulator/master.svg?style=flat-square
[travis-url]: https://travis-ci.org/cheminfo-js/segments-manipulator
[codecov-image]: https://img.shields.io/codecov/c/github/cheminfo-js/segments-manipulator.svg?style=flat-square
[codecov-url]: https://codecov.io/gh/cheminfo-js/segments-manipulator
[david-image]: https://img.shields.io/david/cheminfo-js/segments-manipulator.svg?style=flat-square
[david-url]: https://david-dm.org/cheminfo-js/segments-manipulator
[download-image]: https://img.shields.io/npm/dm/segments-manipulator.svg?style=flat-square
[download-url]: https://www.npmjs.com/package/segments-manipulator
| 37.236842 | 111 | 0.751943 | kor_Hang | 0.108636 |
4b628dbdfbbb0663c639d606dc090a7f04be76ea | 972 | md | Markdown | git.md | nirajkvinit/notes | 886b1d42d791c43475f56f36462b83abb546c3e2 | [
"MIT"
] | null | null | null | git.md | nirajkvinit/notes | 886b1d42d791c43475f56f36462b83abb546c3e2 | [
"MIT"
] | null | null | null | git.md | nirajkvinit/notes | 886b1d42d791c43475f56f36462b83abb546c3e2 | [
"MIT"
] | null | null | null | Incase of large number of untracked files in a repository, ensure that file mode change is switched off. To do that issue the following command.
git config core.filemode false
Unstaged changes left after `git reset --hard`:
https://stackoverflow.com/questions/11383094/unstaged-changes-left-after-git-reset-hard
Git Add Remote URL
`git remote add origin https://github.com/user/repo.git`
Git Change Remote URL
`git remote set-url origin https://github.com/USERNAME/REPOSITORY.git`
# ref
https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/
# Some useful commands
To clear the working directory:
`git checkout -- .`
Undo the last commit, keeping its changes:
`git reset --soft HEAD~1`
Hard reset (discard all uncommitted changes):
`git reset --hard HEAD`
# Git still shows files as modified after adding to .gitignore
https://stackoverflow.com/questions/9750606/git-still-shows-files-as-modified-after-adding-to-gitignore
To stop this, you have to run: `git rm -r --cached .idea/`
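The full workflow can be run end to end; the sketch below (assuming `git` is installed) builds a throwaway repository in a temp directory so nothing in your real project is touched:

```shell
# Stop tracking an already-committed directory (.idea/) without deleting it.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "demo@example.com"   # local identity for the demo commits
git config user.name  "Demo"

mkdir .idea
echo 'junk' > .idea/workspace.xml
git add .
git commit -qm "accidentally track .idea"

echo ".idea/" >> .gitignore        # ignore it from now on
git rm -r -q --cached .idea/       # drop it from the index only, keep it on disk
git add .gitignore
git commit -qm "stop tracking .idea"

git ls-files                       # workspace.xml is no longer tracked
```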
| 33.517241 | 144 | 0.772634 | eng_Latn | 0.767878 |
4b6365e0073d679b22f4c91a8fd35afb6e3898dc | 2,108 | md | Markdown | browsers/edge/includes/configure-password-manager-include.md | Mattlk13/windows-itpro-docs | f6ef35e6138728c0d1de3c3e3960ce8c63916687 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-08-06T15:30:17.000Z | 2019-08-06T15:30:17.000Z | browsers/edge/includes/configure-password-manager-include.md | paulvill76/windows-itpro-docs | 7be012371ad86fda90f5562c381fcf028046adb1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | browsers/edge/includes/configure-password-manager-include.md | paulvill76/windows-itpro-docs | 7be012371ad86fda90f5562c381fcf028046adb1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: eavena
ms.author: eravena
ms.date: 10/02/2018
ms.reviewer:
manager: dansimp
ms.prod: edge
ms.topic: include
---
<!-- ## Configure Password Manager -->
>*Supported versions: Microsoft Edge on Windows 10*<br>
>*Default setting: Enabled (Allowed/users can change the setting)*
[!INCLUDE [configure-password-manager-shortdesc](../shortdesc/configure-password-manager-shortdesc.md)]
### Supported values
| Group Policy | MDM | Registry | Description | Most restricted |
|--------------------------|:-----:|:--------:|--------------------------------------------------------|:------------------------------------------------:|
| Not configured | Blank | Blank | Users can choose to save and manage passwords locally. | |
| Disabled | 0 | no | Not allowed. |  |
| Enabled<br>**(default)** | 1 | yes | Allowed. | |
---
To verify the not allowed (disabled) setting:
1. Click or tap **More** (…) and select **Settings** > **View Advanced settings**.
2. Verify that the **Save Password** setting is toggled off and greyed out.
### ADMX info and settings
#### ADMX info
- **GP English name:** Configure Password Manager
- **GP name:** AllowPasswordManager
- **GP path:** Windows Components/Microsoft Edge
- **GP ADMX file name:** MicrosoftEdge.admx
#### MDM settings
- **MDM name:** Browser/[AllowPasswordManager](https://docs.microsoft.com/windows/client-management/mdm/policy-csp-browser#browser-allowpasswordmanager)
- **Supported devices:** Desktop and Mobile
- **URI full path:** ./Vendor/MSFT/Policy/Config/Browser/AllowPasswordManager
- **Data type:** Integer
#### Registry settings
- **Path:** HKLM\Software\Policies\Microsoft\MicrosoftEdge\Main
- **Value name:** FormSuggest Passwords
- **Value type:** REG_SZ
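Put together as a .reg sketch (illustrative only -- this writes the "not allowed" value from the table above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\MicrosoftEdge\Main]
"FormSuggest Passwords"="no"
```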
<hr>
| 42.16 | 155 | 0.550285 | eng_Latn | 0.613008 |
4b6489319b70675203f8c49a04681da64cc402c7 | 13,939 | md | Markdown | articles/virtual-network/virtual-network-peering-overview.md | krimog/azure-docs.fr-fr | f9e0062239eb8e7107ea45ad1a8e07f6c905031e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-network/virtual-network-peering-overview.md | krimog/azure-docs.fr-fr | f9e0062239eb8e7107ea45ad1a8e07f6c905031e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-network/virtual-network-peering-overview.md | krimog/azure-docs.fr-fr | f9e0062239eb8e7107ea45ad1a8e07f6c905031e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Peering de réseaux virtuels Azure
titlesuffix: Azure Virtual Network
description: En savoir plus sur le peering de réseaux virtuels dans Azure.
services: virtual-network
documentationcenter: na
author: anavinahar
ms.service: virtual-network
ms.devlang: na
ms.topic: conceptual
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 10/07/2019
ms.author: anavin
ms.openlocfilehash: 728d32ddb63658d24e932e8eeef4a3f50371ccc3
ms.sourcegitcommit: b4665f444dcafccd74415fb6cc3d3b65746a1a31
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/11/2019
ms.locfileid: "72265052"
---
# <a name="virtual-network-peering"></a>Virtual network peering
Virtual network peering enables you to seamlessly connect Azure [virtual networks](virtual-networks-overview.md). Once peered, the virtual networks appear as one for connectivity purposes. Traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network, through *private* IP addresses only. Azure supports:
* VNet peering: connecting virtual networks within the same Azure region
* Global VNet peering: connecting virtual networks across Azure regions
The benefits of using virtual network peering, whether local or global, include:
* Network traffic between peered virtual networks is private. Traffic between the virtual networks is kept on the Microsoft backbone network. No encryption, public Internet, or gateways are required for the virtual networks to communicate.
* A low-latency, high-bandwidth connection between resources in different virtual networks.
* The ability for resources in one virtual network to communicate with resources in a different virtual network, once the virtual networks are peered.
* The ability to transfer data across Azure subscriptions, deployment models, and Azure regions.
* The ability to peer two virtual networks created through the Azure Resource Manager deployment model, or to peer a virtual network created through Resource Manager with a virtual network created through the classic deployment model. To learn more about Azure deployment models, see [Azure Resource Manager vs. classic deployment: understand deployment models and the state of your resources](../azure-resource-manager/resource-manager-deployment-model.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
* No downtime to resources in either virtual network when creating the peering, or after the peering is created.
## <a name="connectivity"></a>Connectivity
Once virtual networks are peered, resources in either virtual network can directly connect with resources in the peered virtual network.
Network latency between virtual machines in peered virtual networks in the same region is the same as within a single virtual network. Network throughput is based on the bandwidth allowed for the virtual machine, proportionate to its size. No additional bandwidth restriction is applied within the peering.
Traffic between virtual machines in peered virtual networks is routed directly through the Microsoft backbone infrastructure, not through a gateway or over the public Internet.
Network security groups can be applied in either virtual network to block access to other virtual networks or subnets, if desired.
When configuring virtual network peering, you can either open or close the network security group rules between the virtual networks. If you open full connectivity between peered virtual networks (the default option), you can then apply network security groups to specific subnets or virtual machines to block or deny certain access. To learn more about network security groups, see [Filter network traffic with network security groups](security-overview.md).
## <a name="service-chaining"></a>Service chaining
You can configure user-defined routes that point to virtual machines in peered virtual networks as the *next hop* IP address, or to virtual network gateways, to enable service chaining. Service chaining lets you direct traffic from one virtual network to a virtual appliance, or a virtual network gateway, in a peered virtual network, through the routes you define.
You can also deploy hub-and-spoke networks, where the hub virtual network can host infrastructure components such as a network virtual appliance or a VPN gateway. All the spoke virtual networks can then peer with the hub virtual network. Traffic can flow through network virtual appliances or VPN gateways in the hub virtual network.
Virtual network peering enables the next hop in a user-defined route to be the IP address of a virtual machine in the peered virtual network, or a VPN gateway. You can't, however, route between virtual networks with a user-defined route that specifies an ExpressRoute gateway as the next hop type. To learn more about user-defined routes, see [User-defined routes overview](virtual-networks-udr-overview.md#user-defined). To learn how to create a hub-and-spoke network topology, see [Implement a hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?toc=%2fazure%2fvirtual-network%2ftoc.json).
## <a name="gateways-and-on-premises-connectivity"></a>Gateways and on-premises connectivity
Each virtual network, whether or not it is peered with another virtual network, can have its own gateway and use it to connect to an on-premises network. You can also configure [virtual network-to-virtual network connections](../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json) by using gateways, even though the virtual networks are peered.
When both options for virtual network interconnectivity are configured, the traffic between the virtual networks flows through the peering configuration (that is, through the Azure backbone).
When virtual networks are peered, you can also configure the gateway in the peered virtual network as a transit point to an on-premises network. In this case, the virtual network that uses a remote gateway can't have its own gateway. A virtual network can have only one gateway, which can be either local or remote (in the peered virtual network), as shown in the following picture:
![virtual network peering transit](./media/virtual-networks-peering-overview/figure04.png)
Gateway transit is supported for both virtual network peering and global virtual network peering. Gateway transit between virtual networks created through different deployment models (Resource Manager and classic) is supported only if the gateway is in the virtual network created through Resource Manager. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
When virtual networks that share a single Azure ExpressRoute connection are peered, the traffic between them goes through the peering relationship (that is, through the Azure backbone network). You can still use local gateways in each virtual network to connect to the on-premises circuit. Alternatively, you can use a shared gateway and configure transit for on-premises connectivity.
## <a name="troubleshoot"></a>Troubleshoot
To confirm that a virtual network peering exists, you can [check effective routes](diagnose-network-routing-problem.md) for a network interface in any subnet of a virtual network. If a virtual network peering exists, all subnets within the virtual network have routes with next hop type *VNet peering*, for each address space in each peered virtual network.
You can also troubleshoot connectivity to a virtual machine in a peered virtual network using Network Watcher's [connectivity check](../network-watcher/network-watcher-connectivity-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json). The connectivity check lets you see how traffic is routed from a source virtual machine's network interface to a destination virtual machine's network interface.
You can also try the [Troubleshooter for virtual network peering issues](https://support.microsoft.com/help/4486956/troubleshooter-for-virtual-network-peering-issues).
## <a name="requirements-and-constraints"></a>Requirements and constraints
The following constraints apply only when virtual networks are globally peered:
- Resources in one virtual network can't communicate with the front-end IP address of a Basic internal load balancer in a globally peered virtual network. Support for Basic load balancers exists only within the same region. Support for Standard load balancers exists for both virtual network peering and global virtual network peering. Services that use a Basic load balancer and therefore don't work over global virtual network peering are documented [here](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).
For more information on requirements and constraints, see the corresponding section of [Create, change, or delete a virtual network peering](virtual-network-manage-peering.md#requirements-and-constraints). To learn about the limits on the number of peerings you can create for a virtual network, see [Azure subscription and service limits, quotas, and constraints](../azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
## <a name="permissions"></a>Permissions
To learn about the permissions required to create a virtual network peering, see [Create, change, or delete a virtual network peering](virtual-network-manage-peering.md#permissions).
## <a name="pricing"></a>Pricing
A nominal charge applies for ingress and egress traffic that uses a virtual network peering connection. For more information on the pricing of VNet peering and global VNet peering, see the [pricing page](https://azure.microsoft.com/pricing/details/virtual-network).
Gateway transit is a peering property that lets a virtual network use a VPN/ExpressRoute gateway in a peered virtual network for cross-premises or virtual network-to-virtual network connectivity. Traffic to the gateway (ingress or egress) in the peered virtual network incurs virtual network peering charges. For more information, see the pages on [VPN gateway charges](https://azure.microsoft.com/pricing/details/vpn-gateway/), ExpressRoute gateway charges, and [virtual network peering charges](https://azure.microsoft.com/pricing/details/virtual-network).
>[!NOTE]
> A previous version of this document stated that virtual network peering charges did not apply to gateway transit. It has been updated to reflect the correct pricing shown on the pricing page.
## <a name="next-steps"></a>Next steps
* A peering is established between virtual networks created through the same, or different, deployment models that exist in the same, or different, subscriptions. Complete a tutorial for one of the following scenarios:
|Azure deployment model | Subscription |
|--------- |---------|
|Both Resource Manager |[Same](tutorial-connect-virtual-networks-portal.md)|
| |[Different](create-peering-different-subscriptions.md)|
|One Resource Manager, one classic |[Same](create-peering-different-deployment-models.md)|
| |[Different](create-peering-different-deployment-models-subscriptions.md)|
* Learn how to create a [hub-and-spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?toc=%2fazure%2fvirtual-network%2ftoc.json).
* Learn about all [virtual network peering settings and how to change them](virtual-network-manage-peering.md).
* Get answers to common questions about VNet peering and global VNet peering in the [virtual network peering FAQ](virtual-networks-faq.md#vnet-peering).
| 124.455357 | 897 | 0.805868 | fra_Latn | 0.987642 |
4b65144640ea8c9984cd33d34a8c1bc6699be11a | 1,657 | md | Markdown | README.md | FuweiChin/mdpreview-for-chrome | 6c6323ceef44fa70fb461c4077d8f8f5e63e9510 | [
"MIT"
] | null | null | null | README.md | FuweiChin/mdpreview-for-chrome | 6c6323ceef44fa70fb461c4077d8f8f5e63e9510 | [
"MIT"
] | null | null | null | README.md | FuweiChin/mdpreview-for-chrome | 6c6323ceef44fa70fb461c4077d8f8f5e63e9510 | [
"MIT"
] | null | null | null | # MDPreview
MDPreview is a browser extension to preview Markdown files locally or on the web. It is forked from [GitHub Flavored Markdown](https://chrome.google.com/webstore/detail/github-flavored-markdown/faelggnmhofdamhdegcdhhemfokkfngk).
## Features
Compared to GitHub Flavored Markdown v0.0.6, MDPreview for Chrome 0.1 made the following changes:
+ Added a Windows registry file md.reg to tell what the MIME type of `.md` file is.
+ Dropped type support for `.mdown` and `.markdown`; added type support for `text/vnd.daringfireball.markdown`
+ Don't watch the opened local file by default.
p.s. Users will be able to set whether or not to watch the opened local file in a future release.
+ Added support for previewing content at physical A4 width (21 cm).
p.s. You need to customize the relative DPI ratio for your monitor in page.css. In my case it's `1.4709583133562356`, a.k.a. `Math.sqrt(1920*1920+1080*1080)/15.6/96`.
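The ratio is simply the screen diagonal in pixels, divided by the diagonal in inches, divided by the 96-dpi CSS baseline -- for the 15.6-inch 1920x1080 display used above:

```javascript
// Derivation of the DPI ratio quoted above (15.6-inch 1920x1080 panel assumed).
const diagonalPx = Math.sqrt(1920 * 1920 + 1080 * 1080); // diagonal in pixels
const ratio = diagonalPx / 15.6 / 96;                    // pixels-per-inch vs. 96 dpi
console.log(ratio); // 1.4709583133562356
```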
## Usage
### Windows
1. To preview markdown files locally, you need to merge md.reg into the registry.
### Chrome
1. Open Chrome, navigate to [chrome://extensions/](chrome://extensions/), enable developer mode and drop this folder into the Chrome Extensions page.
2. Open a `.md` file with Chrome locally, or visit any web `.md` resource with one of MIME types `text/plain` `text/markdown` `text/x-markdown` `text/vnd.daringfireball.markdown`.
### Firefox
1. Open Firefox, navigate to [about:debugging](about:debugging), click "Load temporary add-ons" and select manifest.json.
2. Open a `.md` file with Firefox locally, or visit any web `.md` resource with one of MIME types `text/plain` `text/markdown` `text/x-markdown` `text/vnd.daringfireball.markdown`.
| 66.28 | 226 | 0.757393 | eng_Latn | 0.931621 |
4b65334f46b51e60e137a6735e00b9bb3f43d884 | 1,029 | md | Markdown | appyters/DrugCentral_Drugmonizome_ETL/README.md | shui02/appyter-catalog | dfa15946d151daeb7d7b1bc9af9e48428474f012 | [
"CC0-1.0"
] | null | null | null | appyters/DrugCentral_Drugmonizome_ETL/README.md | shui02/appyter-catalog | dfa15946d151daeb7d7b1bc9af9e48428474f012 | [
"CC0-1.0"
] | 11 | 2020-04-15T22:47:17.000Z | 2020-05-28T16:34:16.000Z | appyters/DrugCentral_Drugmonizome_ETL/README.md | shui02/appyter-catalog | dfa15946d151daeb7d7b1bc9af9e48428474f012 | [
"CC0-1.0"
] | 1 | 2020-05-14T20:25:32.000Z | 2020-05-14T20:25:32.000Z | # Drugmonizome ETL: DrugCentral
[DrugCentral](http://drugcentral.org/) is a drug information resource that provides information on active ingredients of chemical entities and their pharmacological action.
This appyter takes data from DrugCentral and outputs files that are usable for Drugmonizome. Chemical-protein interaction data is processed in order to construct a binary matrix with small molecules as rows and genes as column attributes. From this matrix, drug and attribute set libraries are constructed.
Additionally, drug and attribute similarity matrices, which store the Jaccard distance between any two drugs or attributes, are created.
The downloadable file will have the following outputs:
* Edge list of drug-attribute associations
* Binary matrix of drug-attribute associations
* Drug set library: matches each attribute with a set of associated small molecules
* Attribute set library: matches each small molecule with a set of associated attributes
* Drug similarity matrix
* Attribute similarity matrix
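As a sketch of how such a similarity matrix can be derived from the binary matrix (the toy drug-gene data below is hypothetical, not taken from DrugCentral), the Jaccard distance between two drugs is one minus the ratio of shared attributes to the union of their attributes:

```python
import numpy as np

def jaccard_distance_matrix(binary_matrix):
    """Pairwise Jaccard distance between the rows of a 0/1 matrix."""
    m = np.asarray(binary_matrix, dtype=bool)
    # Broadcast to compare every pair of rows at once.
    intersection = (m[:, None, :] & m[None, :, :]).sum(axis=2)
    union = (m[:, None, :] | m[None, :, :]).sum(axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        similarity = np.where(union > 0, intersection / union, 0.0)
    return 1.0 - similarity

# Toy drug-gene binary matrix: 3 drugs x 4 genes (hypothetical data)
drugs = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 0, 1, 1]])
print(jaccard_distance_matrix(drugs))
```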
---
layout: post
title: "March 4, 1945"
date: 2020-03-04
categories: Full-scale War of Resistance (1937-1945)
---
<meta name="referrer" content="no-referrer" />
- March 4, 1945 dispatch: The Japanese anti-war diplomat Shiomi Seisaku gave a broadcast address on Chongqing International Radio, saying that the United Nations' aim in the war against Japan was only to overthrow the militarists, not to harm the Japanese people, and urging the Japanese people not to be deceived by the militarists' vicious propaganda and to resist this hopeless war as soon as possible.
- March 4, 1945 dispatch: This day being Boy Scouts Day, the General Association of the Scouts of China issued a circular to school scout troops in provinces and cities nationwide, designating the month from this day until April 4 as Scout Activity Month, with wisdom, benevolence, and courage as the theme of each ten-day period's activities.
- #Jin-Sui Military Region Spring Offensive# March 4, 1945 dispatch: One company of puppet troops at Gedong defected en masse, and the remaining puppet troops and puppet-organization personnel gradually fled and scattered.
- #War of Resistance Equipment# Canadian-made 9x19mm pistol ammunition: From 1941 to 1945, Canada supplied China with ammunition of this caliber, a derivative of the German Luger P-08 pistol cartridge. Bullet diameter 9.01 mm, case mouth diameter 9.65 mm, case base diameter 9.93 mm, rim diameter 9.96 mm, case length 19.15 mm. Only at the end of the war, when China received Canadian-made Browning Hi-Power pistols and Sten submachine guns, did this ammunition come into use. <br/><img src="https://wx2.sinaimg.cn/large/aca367d8ly1gchnzkijhtj20a00gaq5y.jpg" />
# Apollo Cameras
You can integrate three types of cameras with Apollo. For more information, see their respective installation guides. If you are currently using an ASU, you can integrate any of the cameras below; otherwise, only the Leopard camera can be used.
1. [Leopard Imaging Inc's Camera - LI-USB30-AZ023WDRB](Leopard_Camera_LI-USB30-AZ023WDR__Installation_Guide_cn.md)
2. [Truly Camera](Truly_Argus_Camera_Installation_Guide_cn.md)
3. [Wissen Camera](Wissen_Camera_Installation_Guide_cn.md)
---
title: Compiler Error C2390 | Microsoft Docs
ms.custom:
ms.date: 11/04/2016
ms.reviewer:
ms.suite:
ms.technology:
- devlang-cpp
ms.tgt_pltfrm:
ms.topic: error-reference
f1_keywords:
- C2390
dev_langs:
- C++
helpviewer_keywords:
- C2390
ms.assetid: 06b749ee-d072-4db1-b229-715f2c0728b5
caps.latest.revision: 9
author: corob-msft
ms.author: corob
manager: ghogen
translation.priority.ht:
- cs-cz
- de-de
- es-es
- fr-fr
- it-it
- ja-jp
- ko-kr
- pl-pl
- pt-br
- ru-ru
- tr-tr
- zh-cn
- zh-tw
translationtype: Human Translation
ms.sourcegitcommit: 3168772cbb7e8127523bc2fc2da5cc9b4f59beb8
ms.openlocfilehash: 8334477aef71e8f698bb70a48218c4b4b6c5ce0b
---
# Compiler Error C2390
'identifier' : incorrect storage class 'specifier'
The storage class is not valid for the global-scope identifier. The default storage class is used in place of the invalid class.
Possible resolutions:
- If the identifier is a function, declare it with `extern` storage.
- If the identifier is a formal parameter or local variable, declare it with auto storage.
- If the identifier is a global variable, declare it with no storage class (auto storage).
## Example
The following sample generates C2390:
```
// C2390.cpp
register int i; // C2390
int main() {
register int j; // OK
}
```
<!--HONumber=Jan17_HO2-->
# cp-library-cpp
[](https://github.com/suisen-cp/cp-library-cpp/actions) [](https://suisen-cp.github.io/cp-library-cpp/)
## Overview
A C++ library for competitive programming. Some parts assume the use of the [AtCoder Library (ACL)](https://github.com/atcoder/ac-library).
## Bug reports
Reports to [@_su1sen](https://twitter.com/_su1sen) are appreciated.
## License
This library is released under [CC0](https://creativecommons.org/publicdomain/zero/1.0/legalcode).
# Issue
### Short description
__Brief description of what happened__
### Environment
- [ ] Operating system and version:
- [ ] Java version:
- [ ] Red5 version:
### Expected behavior
__Put as much detail here as possible__
### Actual behavior
__Put as much detail here as possible__
### Steps to reproduce
1.
2.
3.
### Logs
__Place logs on [pastebin](http://pastebin.com/) or elsewhere and put links here__
# myfirstproject
Building up a Python package.
### [CVE-2006-0899](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2006-0899)



### Description
Directory traversal vulnerability in index.php in 4Images 1.7.1 and earlier allows remote attackers to read and include arbitrary files via ".." (dot dot) sequences in the template parameter.
### POC
#### Reference
- https://www.exploit-db.com/exploits/1533
#### Github
No PoCs found on GitHub currently.
# Pull requests
When contributing, make pull requests against the *feature branch* named after the next release; don't commit to the master branch.
If you want to contribute, feel free to request new features; check the TODO first.
# Help
Check the [flowchart](picts/flow_chart) to get a better understanding of the driver internals. Pictures starting with **one-**, e.g. [one-deploy.png](picts/flow_chart/one-deploy.png), give an overview of the corresponding action scripts. The action scripts are written in Python. The script [lxd_common.py](src/remotes/vmm/lxd/lxd_common.py) contains many functions used by the action scripts, e.g. [deploy.py](src/remotes/vmm/lxd/deploy.py), which is executed when starting a VM, thus reducing duplicated code. If you want to add code that could be used by several action scripts, add it there.
# License and copyright
By default, any contribution to this project is made under the Apache
2.0 license.
The author of a change remains the copyright holder of their code
(no copyright assignment).
I wasn't born here, but I only have memories of here. I don't know why I know I wasn't born here. I guess it is just because the place isn't alien enough. If I was born here, it would be my home. This place, this sandy place, this place is not my home. Regardless of what you may think, regardless of how long I have been here, this place is not my home. I do not know enough of this to call it my home. I wonder if there is some base amount of information that is necessary to call some place a home, and since I am only ever exposed to a very small, minimal amount of information. Always the same. Enough to get used to it, but not enough to know anything about where I am.
I have never realized that I have gotten used to it. But I have. I wonder when that happened. How long it took. In truth, I have been here seconds. Maybe minutes if I am lucky, but I feel as if I have lived my whole entire life here. The Sands of Madness.
That's where I am, that's where I will be, forever. Alone, without a soul to keep mine company. The sands may be alive but they are far from having a soul. The planet does not have a soul; the planet is the sands.
# phpboot-dal
Data access layer library for the lamgor666/phpboot framework.
# TOMOWALLET
## Requirements
- NodeJS
- MongoDB
- Redis
## Config
```
cp config/default.json config/local.json
```
- Update the `local.json` file to match your environment
- Update the mnemonic
- Update the MongoDB configuration:
  - For Docker:
    ```
    "db": {
        "uri": "mongodb://mongodb:27017/wallets"
    },
    ```
  - For localhost:
    ```
    "db": {
        "uri": "mongodb://localhost:27017/wallets"
    },
    ```
## Install
```
npm install
```
## Run
- Start MongoDB
- Start Redis
- Run `npm run build && npm run dev`
# LYRE Contract Language (LCL)
LYRE Contract Language for use in the LYRE CHAIN Project
---
title: "Homework 2 2021/1"
description: "Web apps and forensics"
date: 2021-04-11T09:19:42+01:00
draft: false
weight: 60
---
## General guidelines
* You have **3 weeks** to work on and complete this assignment from its release date. Check U-Cursos for the most up-to-date due date.
* This assignment is done in **groups**, with the group formed at the start of the course.
* Each group member must be "in charge" of one of the assigned problems. You must state which problem each member is in charge of.
* Within a group, **the problems can be discussed freely while working on the assignment**. However, **problems must not be discussed between members of different groups, except in situations guided by the teaching staff during class time** (for example, office hours in recitations or lectures).
* To connect to the course server (server.cc5325.xor.cl) you must be connected to the CEC VPN.
## Submission
* Each student must individually submit the following files for every problem in the assignment:
    * A Markdown file explaining how they solved the problem (writeup), detailing every step taken so that replaying the steps leads to the same solution. Its format must be identical to the _Writeup format_ attached at the end of this section.
    * Code files, if they were needed to solve it.
    * Links to all tools used.
* At the beginning of the aforementioned Markdown file, write your name, your group, and whether or not you were the person in charge of solving that problem.
* Remember that you must also submit a solution with code and writeup for all problems you were not in charge of.
[Writeup format](./writeup.txt)
[What it looks like when exported in Joplin](./writeup.pdf)
## Resources for enumeration and brute force
So that everyone has the same enumeration and brute-force lists, we provide these files.
Brute force and enumeration may or may not be needed to solve the problems.
* [Web enumeration](./web.txt)
* [Usernames](./users.txt)
* [Passwords](./passwords.txt)
## Problems
### P1: Monitoring Panel
As the teaching staff, we developed this platform to monitor the disk space used on our server.
Since we are experts on the subject, we know this system is _unhackable_.
**Note:** Remember to be connected to the CEC VPN.
[Link to the platform](http://server.cc5325.xor.cl:4657/)
### P2: Hacker Job
A hacker group wants to take control of this site, and you are in charge of doing it.
The goal is to gain the privileges of one of the administrators.
Fortunately, they seem to be quite active on the portal.
**Note:** Remember to be connected to the CEC VPN.
[Site](http://server.cc5325.xor.cl:5768/)
### P3: Such Ransomware, Much Encrypted, Wow!
I was creating problem 3 of the assignment when I was hit by ransomware that encrypted my important files (among them, the document where I keep the flag for the assignment :frowning: )
I'm attaching an image of my disk so you can take a look. If you manage to recover my very important file, please send it to me decrypted, but without reading it, so I can use the flag in problem 3.
Thank you very much!
[Image of my disk (careful, it's about 150 MB)](https://users.dcc.uchile.cl/~eriveros/p3.zip)
purpose is for server configuration info
Things you add here:
1. port information
2. server connections
3. environment variables
   a. passwords for all connected items
   b. production port / hosting information
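A minimal sketch of what such a config module could look like (the filename, keys, and values here are assumptions for illustration, not taken from the actual project):

```javascript
// config/config.js -- hypothetical example of a server config module
const env = process.env.NODE_ENV || 'development';

const config = {
    development: {
        port: 3000,
        dbConnection: 'mongodb://localhost:27017/cat-shelter',
    },
    production: {
        // Production port/hosting info and secrets come from
        // environment variables, never hard-coded in the repo.
        port: process.env.PORT || 80,
        dbConnection: process.env.DB_CONNECTION,
    },
};

module.exports = config[env];
```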
---
title: Create a solution architecture
titleSuffix: Azure Boards
description: Investigate alternative architectural strategies to create good architecture
ms.technology: devops-agile
ms.assetid: 77707311-8835-4bc8-9b28-17534d7a7d9c
ms.topic: conceptual
ms.author: kaelli
author: KathrynEE
monikerRange: '>= tfs-2013'
ms.date: 01/20/2017
---
# Create a solution architecture
[!INCLUDE [temp](../../../includes/version-all.md)]
Part of creating a good architecture is investigating alternative architectural strategies. Alternative strategies have different benefits that are based on platform selection, technologies that are used, and code reuse. Each strategy is designed and proofs of concept are built to further investigate the costs and benefits of each strategy. The strategies are assessed against product and quality requirements, and ultimately a strategy is chosen to be used to implement the product. Finally, security and performance are architectural concerns for which work must be done over the entire product.
## <a name="CreateAlternative"></a> Create Alternative Architecture Partitioning Designs
The problem is analyzed, and different approaches are considered. A group of requirements are selected that represent key business and technological challenges. Examine the characteristics of these challenges, such as integration of legacy systems, and predict future needs based on current needs, reusability of code, and maintenance costs.
### Create an Application Diagram
Using the domain model and requirements as input, create an application diagram that represents the core logical elements of the system. This will later be partitioned into system diagrams. Alternative partitioning schemes will be considered and evaluated.
One way to represent an application diagram is as a Unified Modeling Language (UML) use case diagram. This type of diagram can show the major subsystems and their dependencies. In addition, you can place use cases in each subsystem to show which subsystem manages each user scenario.
### Establish Evaluation Criteria
Determine which criteria to use to identify requirements and scenarios that represent significant architectural challenges. Consult the existing enterprise architecture documents for criteria. Review any business requirements, technical requirements, and enterprise standards that must be applied to new applications. Capture additional criteria that are known to be architecturally significant, such as integration with legacy systems, reusability of code, reusing existing vendor libraries and platforms, and controlling maintenance costs. Capture additional criteria that represent risks and cost when implementing a technical solution.
### Select a Candidate Group of Requirements
Evaluate each quality of service requirement and product requirement against the evaluation criteria. If a requirement represents an architectural challenge, consider it a candidate for modeling. For example, a requirement that the new product must support older customer databases meets the criteria of integrating with legacy systems. Such a requirement is a candidate for modeling how the integration would work.
### Select a Candidate Group of Scenarios
Evaluate each scenario against the evaluation criteria. If a scenario represents an architectural challenge, consider it a candidate for modeling. For example, a scenario in which the user downloads a client update meets the criteria that concerns maintenance costs. Such a scenario is a candidate for modeling how best to handle client updates.
### Reduce the Candidate Group
Review the candidate scenarios and requirements. Remove scenarios and requirements that duplicate the evaluation criteria or are better represented by other scenarios and requirements. Trim the candidate group to a core group that represents the key architectural challenges, risks, and costs of the new application. Keep the scenarios and requirements that best represent the evaluation criteria, that present the most risk, and that present the most potential cost when architecting a technical solution. Keep the scenarios and the requirements that are the most comprehensive or key parts of the application.
### Create Partitioning Criteria
Using the requirements as motivation, analyze established architectural patterns (such as façade or model-view-controller), and identify potential candidates for implementation. Identify candidate patterns through their motivation, and consider their design tradeoffs with regard to coupling, cohesion, extensibility, adaptability, and flexibility. Select a set of candidates for implementation as alternatives for assessment.
<a name="Design"></a>
## Design System Architecture and Deployment
The system architecture defines the groupings and configurations of elements that are identified in the application diagram. System diagrams are created that capture the system architecture for each possible architecture approach. Deployment diagrams show the deployment steps that are based on dependencies and core functionality. An infrastructure architect creates a logical datacenter diagram that describes the logical structure of the datacenter where the application will be deployed. The deployment diagrams are validated against the logical datacenter diagram to ensure that the systems can be deployed.
### Create a System Model
The architect and the lead developer create system diagrams from the application diagram. Through system diagrams, you can design reusable application systems as units of deployment by composing them from elements on the application diagram. You can also design larger and more complex systems that contain other systems so that you can use them in distributed system scenarios and abstract the details of applications in those systems. Check in each new diagram file to version control.
You can represent system diagrams in Visual Studio in the following ways:
- Use case diagrams. The main user scenarios are represented as use cases, and the major components of the system are shown as subsystems. Each use case can be placed inside the subsystem that deals with it. For more information, see [UML Use Case Diagrams: Guidelines](https://msdn.microsoft.com/library/dd409432.aspx).
- UML component diagrams. These diagrams let you show communications channels between the components, in addition to dependencies. You might also want to create class diagrams to describe the types that are visible at the interfaces to the components, and you can create sequence diagrams to show their interactions. For more information, see [UML Component Diagrams: Guidelines](https://msdn.microsoft.com/library/dd409432.aspx), [UML Class Diagrams: Guidelines](https://msdn.microsoft.com/library/dd409432.aspx), and [UML Sequence Diagrams: Guidelines](https://msdn.microsoft.com/library/dd409432.aspx).
- [Layer diagrams](https://msdn.microsoft.com/library/dd418995). A layer diagram describes the block structure of the application. It shows only components and the dependencies between them. It has the benefit that, after the code is written, you can validate the code and the dependencies against the diagram. For more information, see [Layer Diagrams: Guidelines](https://msdn.microsoft.com/library/dd418995).
For each subsystem, you can create a package that describes its types and behavior in more detail. For more information, see [Define packages and namespaces](https://msdn.microsoft.com/library/dd465144).
## <a name="CreateProofs"></a> Create Proofs of Concept
Significant risks to the project can be mitigated by creating an architectural proof of concept. It is important to address risk as early as possible in the project so that key strategic and architectural decisions can be made while it is still easy to modify fundamental pieces of the architecture. Creating early proofs of concept reduces overall project risk and unknowns. Lower project risk and fewer unknowns make planning and estimating in later iterations more accurate. Proofs of concept can be temporary and discarded after the issues have been addressed, or they can be built as the foundation of the core architecture.
### Examine Risk
Understand the elements that lead to the identification of the risk or architectural decisions. Examine related scenarios and quality of service requirements. Check for any target environment implications.
### Plan the Approach
Determine the form of the proof of concept that is needed. Use the application and system diagrams to help plan. Solve only the architectural problem that is identified by the risk. Look for the simplest resolution.
### Build and Run the Proofs of Concept
Build the proof of concept. You can implement the proof of concept from the application diagram. Maintain focus on the problem to be solved. Deploy the proof of concept to a physical environment that is congruent to the logical datacenter diagram. The physical environment should match the settings of the logical datacenter diagram as closely as possible. Test the proof of concept against the high-risk issues.
## <a name="Assess"></a> Assess Alternatives
The Lightweight Architecture Alternative Analysis Method (LAAAM) is used to help decide between different architectural strategies for building an application. The LAAAM typically takes one day to complete. Start by building a utility tree that describes key quality and functional drivers of the application that are based on requirements. Each driver is written as a scenario that takes the form of a statement that is written as context, stimulus, and response. Use an assessment matrix to evaluate how well each strategy addresses each scenario.
### Create a Utility Tree
Examine quality of service requirements and product requirements to determine the key drivers of quality and function in the application. Construct a utility tree that represents the overall quality of the application. The root node in the tree is labeled Utility. Subsequent nodes are typically labeled in standard quality terms such as modifiability, availability, and security. The tree should represent the hierarchical nature of the qualities and provide a basis for prioritization. Each level in the tree is a further refinement of the qualities. Ultimately, these qualities become scenarios.
### Construct an Assessment Matrix
For each leaf in the utility tree, write a scenario. The scenario is in the form of context, stimulus, and response (for example, "Under typical operation, perform a database transaction in fewer than 100 milliseconds").
Create a spreadsheet or a table, and enter each scenario as a row in this assessment matrix. Enter each architectural strategy as a column. At each intersection of strategies and scenarios, enter a rating on a scale between 1 and 4.
The rating should take into account the following factors:
- **Development cost** Is this solution easy or difficult to implement? What is its impact on other areas?
- **Operational cost** At run time, will this solution work easily, or will it adversely affect usability, performance, and so on?
- **Risk** Is this solution certain to address the scenario well, or are there unknown costs? Could this solution have an adverse impact on the team's ability to accommodate future enhancements in the requirements?
If a proof of concept has been built for a strategy, use information from that proof of concept to help determine the values.
At the bottom of the table, sum the values from the scenarios. Use these figures as an input to the discussion that leads to decisions on the alternative architectures.
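As a sketch, summing the matrix can look like this (the scenarios, strategies, and ratings are hypothetical):

```python
# Hypothetical LAAAM assessment matrix: ratings (1-4) of how well each
# architectural strategy addresses each scenario.
ratings = {
    "Perform a database transaction in fewer than 100 ms": {"Strategy A": 3, "Strategy B": 2},
    "Integrate with the legacy customer database":         {"Strategy A": 2, "Strategy B": 4},
    "Client downloads and applies an update":              {"Strategy A": 4, "Strategy B": 4},
}

# Sum each strategy's column to get the figures discussed in the review.
totals = {}
for scenario_scores in ratings.values():
    for strategy, score in scenario_scores.items():
        totals[strategy] = totals.get(strategy, 0) + score

print(totals)  # {'Strategy A': 9, 'Strategy B': 10}
```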
Upload the completed assessment matrix to the project portal.
## <a name="Select"></a> Select the Architecture
After the assessment matrix is created, a review meeting is held to determine which architecture to use in the next iteration. The assessment matrix and information that is discovered from creating the proofs of concept is used to help make a decision. After the architecture is selected, diagrams for the architecture are checked in as the reference solution, and a justification document is created that captures the reasons behind the selection.
### Prepare for Review
The architect and the lead developer identify the appropriate reviewers for reviewing the proposed architectures and circulate documentation for the architectures to each participant.
### Review System Architecture and Deployment Architecture
During the review meeting, the system diagrams, the deployment report, and the logical datacenter diagram are reviewed. The goal is to choose an architecture to implement in the next iteration.
Consider the assessment matrix rankings for each architecture to help evaluate the suitability of each architecture. Consider any information that is discovered from the proofs of concept such as cost or complexity that is involved with implementing the different architectures. If the logical datacenter diagram represents an existing datacenter that cannot be modified, do not review it. If a datacenter is being created, review the diagram for deployment considerations. Select the architecture to be used. Review the architectural concept against the scenarios to validate that the solution meets the customer needs and is complete.
### Create a Reference Solution
Create a justification document that captures the decisions of the meeting. Upload it to the project portal. For the selected architecture, check in any application, system, or logical datacenter diagrams as the reference solution to use for implementing features in the next iteration. Communicate to the entire team and any dependent teams the decision on what architecture is selected for the next iteration.
## <a name="Develop"></a> Develop a Performance Model
Performance modeling is used to identify and address potential performance issues in the application. A performance model is developed from a quality of service requirement, which is then broken into development tasks. Each development task is assigned a performance budget for implementation.
Identify the scenarios that are linked to the performance quality of service requirement. Map the development tasks to the scenarios. From the quality of service requirements list, determine the workload for the application. Using the workload estimates and the quality of service requirements list, identify the performance objectives for each key scenario. These include objectives such as response time, throughput, and resource use. Identify the performance-related resources that have been budgeted to meet the performance objectives. Some examples of performance-related resources are execution time and network bandwidth. Determine the maximum allowable allocation of each resource.
Spread the budgeted resources across the processing steps for each scenario. When not sure of how to allocate the budget, make best guesses, or divide the resources evenly among the steps. Budgeting is refined during validation. Attach or write the allocation on the appropriate development task.
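The even-split starting point described above can be illustrated with a short, hypothetical sketch; the step names and numbers are made up for illustration and are not part of the process guidance:

```python
def allocate_budget(total_ms, steps):
    """Divide a scenario's response-time budget evenly across its steps."""
    share = total_ms / len(steps)
    return {step: share for step in steps}

def over_budget(allocation, measurements):
    """Return the steps whose measured time exceeds their allocation."""
    return [step for step, ms in measurements.items() if ms > allocation[step]]

allocation = allocate_budget(200.0, ["validate input", "query", "render"])
print(allocation)  # each step starts with one third of the 200 ms budget
print(over_budget(allocation, {"validate input": 20.0, "query": 120.0, "render": 40.0}))
# → ['query']
```

As the validation step describes, the even split is only a first guess; refine the per-step allocations as measurements arrive.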
Find budget allocations that pose a risk to meeting performance objectives. Consider tradeoffs that help meet performance objectives such as design and deployment alternatives. Reevaluate quality of service requirements if necessary.
Identify the scenarios that do not meet budget allocations. Measure the performance of the scenarios. Use prototyping in early iterations if code is not available. Repeat the budgeting, evaluation, and validation steps as necessary by using data that is acquired during validation.
## Develop a Threat Model
For more information, see the following page on the Microsoft Web site: [Security Developer Center](https://go.microsoft.com/fwlink/?LinkId=158810).
# TLSFAllocator
A TLSF (Two-Level Segregated Fit) allocator, written as a tutorial implementation.
---
title: IAssemblyName::GetVersion Method
ms.date: 03/30/2017
api_name:
- IAssemblyName.GetVersion
api_location:
- fusion.dll
api_type:
- COM
f1_keywords:
- IAssemblyName::GetVersion
helpviewer_keywords:
- IAssemblyName::GetVersion method [.NET Framework fusion]
- GetVersion method, IAssemblyName interface [.NET Framework fusion]
ms.assetid: 42230928-2c33-41fd-9519-d96efef6c7af
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 58919936bdc62d52437f429146f04c66d49294b2
ms.sourcegitcommit: d2e1dfa7ef2d4e9ffae3d431cf6a4ffd9c8d378f
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 09/07/2019
ms.locfileid: "70796582"
---
# <a name="iassemblynamegetversion-method"></a>IAssemblyName::GetVersion Method
Gets the version information for the assembly referenced by this [IAssemblyName](iassemblyname-interface.md) object.

## <a name="syntax"></a>Syntax

```cpp
HRESULT GetVersion (
    [out] LPDWORD pdwVersionHi,
    [out] LPDWORD pdwVersionLow
);
```

## <a name="parameters"></a>Parameters
`pdwVersionHi`
[out] The high 32 bits of the version.

`pdwVersionLow`
[out] The low 32 bits of the version.
## <a name="requirements"></a>Requirements
**Platforms:** See [System Requirements](../../get-started/system-requirements.md).

**Header:** Fusion.h

**.NET Framework Versions:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]

## <a name="see-also"></a>See also
- [IAssemblyName Interface](iassemblyname-interface.md)
---
title: Create a Database Master Key | Microsoft Docs
ms.custom: ''
ms.date: 09/12/2019
ms.prod: sql-server-2014
ms.reviewer: carlrab
ms.technology: security
ms.topic: conceptual
helpviewer_keywords:
- database master key [SQL Server], creating
ms.assetid: 8cb24263-e97d-4e4d-9429-6cf494a4d5eb
author: jaszymas
ms.author: jaszymas
manager: craigg
ms.openlocfilehash: 86f74710e99079d0acd28db09bcf1e4ba7c57865
ms.sourcegitcommit: b87d36c46b39af8b929ad94ec707dee8800950f5
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 02/08/2020
ms.locfileid: "74957245"
---
# <a name="create-a-database-master-key"></a>Create a Database Master Key
This topic describes how to create a database master key in the `master` database in [!INCLUDE[ssCurrent](../../../includes/sscurrent-md.md)] by using [!INCLUDE[tsql](../../../includes/tsql-md.md)].

**In This Topic**

- **Before you begin:**
    [Security](#Security)

- [To create a database master key, using Transact-SQL](#TsqlProcedure)

## <a name="BeforeYouBegin"></a> Before You Begin

### <a name="Security"></a> Security

#### <a name="Permissions"></a> Permissions
Requires CONTROL permission on the database.

## <a name="TsqlProcedure"></a> Using Transact-SQL

### <a name="to-create-a-database-master-key"></a>To create a database master key

1. Choose a password for encrypting the copy of the master key that will be stored in the database.

2. In **Object Explorer**, connect to an instance of [!INCLUDE[ssDE](../../../includes/ssde-md.md)].

3. Expand **System Databases**, right-click `master`, and then select **New Query**.

4. Copy and paste the following example into the query window, and then select **Execute**.

```sql
-- Creates the master key.
-- The key is encrypted using the password "23987hxJ#KL95234nl0zBe."
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '23987hxJ#KL95234nl0zBe';
```

For more information, see [CREATE MASTER KEY (Transact-SQL)](/sql/t-sql/statements/create-master-key-transact-sql).
# Face Detection
Fast and reliable face detection with [RetinaFace](https://arxiv.org/abs/1905.00641).
This repo provides an out-of-the-box RetinaFace detector.
### Requirements
- Python 3.5+ (it may work with other versions too)
- Linux, Windows or macOS
- PyTorch (>=1.0)
While not required, for optimal performance it is highly recommended to run the code using a CUDA-enabled GPU.
## Install
The easiest way to install it is using pip:
```bash
pip install git+https://github.com/gchochla/face-detection.git@master
```
## Usage
##### Detect face and five landmarks on single image
```python
from skimage import io
from face_detection import RetinaFace
detector = RetinaFace()
img = io.imread('examples/obama.jpg')
faces = detector(img)
box, landmarks, score = faces[0]
```
##### Running on CPU/GPU
In order to specify the device (GPU or CPU) on which the code will run, one can explicitly pass the device id.
```python
from face_detection import RetinaFace
# 0 means using GPU with id 0 for inference
# default -1: means using cpu for inference
detector = RetinaFace(gpu_id=0)
```
| | GPU(GTX 1080TI,batch size=1) | GPU(GTX 1080TI,batch size=750) | CPU(Intel(R) Core(TM) i7-7800X CPU @ 3.50GHz) |
| ---- | ---------------------------- | ------------------------------- | --------------------------------------------- |
| FPS | 44.02405810720893 | 96.64058005582535 | 15.452635835550483 |
| SPF | 0.022714852809906007 | 0.010347620010375976 | 0.0647138786315918 |
##### Batch input for faster detection
All input images must be of the same size.
**With CUDA, the detector processes batch input faster than the same number of single inputs.**
```python
from skimage import io
from face_detection import RetinaFace
detector = RetinaFace()
img = io.imread('examples/obama.jpg')
all_faces = detector([img,img]) # return faces list of all images
box, landmarks, score = all_faces[0][0]
```

## Reference
- Network and pretrained model are from [biubug6/Pytorch_Retinaface](https://github.com/biubug6/Pytorch_Retinaface)
```
@inproceedings{deng2019retinaface,
title={RetinaFace: Single-stage Dense Face Localisation in the Wild},
author={Deng, Jiankang and Guo, Jia and Yuxiang, Zhou and Jinke Yu and Irene Kotsia and Zafeiriou, Stefanos},
booktitle={arxiv},
  year={2019}
}
```
# nickmccrea.com
This site is built with [Jekyll](https://jekyllrb.com/), to be hosted on [GitHub Pages](https://pages.github.com/).
GitHub Pages sites are hosted in GitHub repositories following one of the specific repository-and-branch naming patterns described [here](https://help.github.com/articles/user-organization-and-project-pages/). In the case of nickmccrea.com, the site is hosted in a repository with the naming pattern `USERNAME.github.io`.
I use a separate, dedicated deployment repository. To deploy this way, create a new repository in GitHub named `USERNAME.github.io`. In your development environment, add the deployment repository as a remote repository for your project:
```
git remote add deploy git@github.com:USERNAME/USERNAME.github.io.git
```
Then, to deploy, simply push to this deployment repository:
```
git push deploy master
```
If the build is successful, the site will be available at `http://USERNAME.github.io`. To learn about using custom domain names with GitHub Pages, start [here](https://help.github.com/articles/using-a-custom-domain-with-github-pages/).
---
title: Create a SharePoint solution package by using MSBuild tasks
description: Learn how to create, clean, and validate a SharePoint solution package (WSP) on a development computer by using command-line MSBuild tasks.
ms.custom: SEO-VS-2020
ms.date: 02/02/2017
ms.topic: how-to
dev_langs:
- VB
- CSharp
helpviewer_keywords:
- SharePoint development in Visual Studio, packages
author: John-Hart
ms.author: johnhart
manager: jmartens
ms.technology: sharepoint-development
ms.workload:
- office
ms.openlocfilehash: 781281c6abce5031166b00d9cdde0b175619f79b
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 08/13/2021
ms.locfileid: "122135922"
---
# <a name="how-to-create-a-sharepoint-solution-package-by-using-msbuild-tasks"></a>How to: Create a SharePoint solution package by using MSBuild tasks
You can build, clean, and validate a SharePoint solution package (*.wsp*) by using command-line MSBuild tasks on a development computer. You can also use these commands to automate the build process by using Team Foundation Server on a build computer.

## <a name="build-a-sharepoint-package"></a>Build a SharePoint package

#### <a name="to-build-a-sharepoint-package"></a>To build a SharePoint package

1. On the Windows **Start** menu, choose **All Programs** > **Accessories** > **Command Prompt**.

2. Change to the directory where your SharePoint project is located.

3. Enter the following command to build a package for the project. Replace *ProjectFileName* with the name of your project.

```cmd
msbuild /t:Package ProjectFileName
```

For example, you could run one of the following commands to package a SharePoint project named ListDefinition1.

```cmd
msbuild /t:Package ListDefinition1.vbproj
msbuild /t:Package ListDefinition1.csproj
```

## <a name="clean-a-sharepoint-package"></a>Clean a SharePoint package

#### <a name="to-clean-a-sharepoint-package"></a>To clean a SharePoint package

1. Open a Command Prompt window.

2. Change to the directory where your SharePoint project is located.

3. Enter the following command to clean a package for the project. Replace *ProjectFileName* with the name of your project.

```cmd
msbuild /t:CleanPackage ProjectFileName
```

For example, you could run one of the following commands to clean a SharePoint project named ListDefinition1.

```cmd
msbuild /t:CleanPackage ListDefinition1.vbproj
msbuild /t:CleanPackage ListDefinition1.csproj
```

## <a name="validate-a-sharepoint-package"></a>Validate a SharePoint package

#### <a name="to-validate-a-sharepoint-package"></a>To validate a SharePoint package

1. Open a Command Prompt window.

2. Change to the directory where your SharePoint project is located.

3. Enter the following command to validate a package for the project. Replace *ProjectFileName* with the name of your project.

```cmd
msbuild /t:ValidatePackage ProjectFileName
```

For example, you could run one of the following commands to validate a SharePoint project named ListDefinition1.

```cmd
msbuild /t:ValidatePackage ListDefinition1.vbproj
msbuild /t:ValidatePackage ListDefinition1.csproj
```

## <a name="set-properties-in-a-sharepoint-package"></a>Set properties in a SharePoint package

#### <a name="to-set-a-property-in-a-sharepoint-package"></a>To set a property in a SharePoint package

1. Open a Command Prompt window.

2. Change to the directory where your SharePoint project is located.

3. Enter the following command to set a property in a package for the project. Replace *PropertyName* with the property that you want to set.

```cmd
msbuild /property:PropertyName=Value
```

For example, you can run the following command to set the warning level.

```cmd
msbuild /property:WarningLevel = 2
```

## <a name="see-also"></a>See also
- [Create SharePoint features](../sharepoint/creating-sharepoint-features.md)
- [How to: Customize a SharePoint feature](../sharepoint/how-to-customize-a-sharepoint-feature.md)
- [How to: Add and remove items to SharePoint features](../sharepoint/how-to-add-and-remove-items-to-sharepoint-features.md)
---
external help file: Microsoft.PowerBI.Commands.Capacities.dll-Help.xml
Module Name: MicrosoftPowerBIMgmt.Capacities
online version:
schema: 2.0.0
---
# Get-PowerBICapacity
## SYNOPSIS
Returns a list of Power BI capacities.
## SYNTAX
```
Get-PowerBICapacity [-Scope <PowerBIUserScope>] [-ShowEncryptionKey] [<CommonParameters>]
```
## DESCRIPTION
Retrieves a list of Power BI capacities that matches the specified scope.
Before you run this command, make sure you log in using Connect-PowerBIServiceAccount.
## EXAMPLES
### Example 1
```
PS C:\> Get-PowerBICapacity -Scope Organization -ShowEncryptionKey
```
## PARAMETERS
### -Scope
Indicates scope of the call. Individual returns only capacities assigned to the caller; Organization returns all capacities within a tenant (must be an administrator to initiate). Individual is the default.
```yaml
Type: PowerBIUserScope
Parameter Sets: (All)
Aliases:
Accepted values: Individual, Organization
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -ShowEncryptionKey
Show encryption key details.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### None
## OUTPUTS
### System.Collections.Generic.IEnumerable`1[[Microsoft.PowerBI.Common.Api.Capacities.Capacity, Microsoft.PowerBI.Common.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]
## NOTES
## RELATED LINKS
# SIMDX
*SIMDX* provides a unified implementation for built-in vector and matrix intrinsics, such as **SSE/AVX on x86** and
**Neon on Arm**, in C (see `CSIMDX` target) and exposes them to Swift as generic types (see `SIMDX` target).
Furthermore, *SIMDX* provides a fast and portable implementation of SIMD-like intrinsics on **hardware which doesn't
natively support** them, making *SIMDX* independent of the target hardware. Therefore it allows vector and matrix
calculations on any (Swift supporting) hardware and automatically inlines the appropriate intrinsic functions of the
target hardware.
> **Warning.** Not meant to be used in production, created for learning purposes!
> <br/><br/> See [**Stumbling on SIMD**](https://blog.wntr.me/posts/001-the-properties-of-space/) series to learn how
this project came to be. If you want to report a bug or an unexpected behaviour, feel free to open an issue. If you have
suggestions, or really anything else that helps evolve the library, and/or are interested in the details, feel free to
contact me on [Twitter](https://twitter.com/markuswntr).
## Coding example
Given a `color` whose `brightness` should be adjusted on each channel separately, and which should then be multiplied by a `scale` factor on each lane equally.
**Example 1.1:**
```swift
// A color to modify
let color: [Float] = [0.11, 0.2, 0.64, 1.0] // RGBA
// Add a modification value to each channel separately
let brightness: [Float] = [0.25, 0.3, -0.35, 0.0]
// Scale the resulting color on each channel equally
let scale: Float = 0.8
var newColor: [Float] = .init(repeating: 0.0, count: 4)
for index in 0...3 {
// operation on each element
newColor[index] = (color[index] + brightness[index]) * scale
}
print(newColor)
// [0.288, 0.4, 0.232, 0.8]
```
The *SIMDX* library allows the example to be rewritten:
**Example 1.2:**
```swift
import SIMDX
// A color to modify
let color: SIMDX4<Float> = [0.11, 0.2, 0.64, 1.0]
// Add a modification value to each channel separately
let brightness: SIMDX4<Float> = [0.25, 0.3, -0.35, 0.0]
// Scale the resulting color on each channel equally
let scale: Float = 0.8
// Do all operations on SIMD in parallel using SIMDX
let newColor = (color + brightness) * scale
print(newColor)
// [0.288, 0.4, 0.232, 0.8]
```
Example 1.2 does the same as example 1.1, but more efficiently because it utilises SIMD instructions that do four
additions and four multiplications in a single instruction. Today, modern CPU's have these instructions which may give
you a throughput of four floating point additions and four multiplications per clock cycle. A good compiler may
actually convert example 1.1 automatically to use the SIMD instructions, but in more complicated cases you cannot be
sure that the compiler is able to vectorise your code in an optimal way.
### How it works
The type `SIMDX4<Float>` in example 1.2 is a struct that encapsulates a 128-bit intrinsic type holding 4 floating point
numbers of 32 bits each. The operators `+` and `*` represent the SIMD instruction for adding and multiplying the
intrinsic types. These operators are inlined so that no extra code is generated other than the SIMD instructions. More
specifically, the type `SIMDX4<Float>` masks a `__m128` intrinsic type on x86 with SSE2 or a `float32x4_t` intrinsic
type on Arm with Neon. If neither of both are available, the module instructs the compiler to optimise the vector code.
If this is not possible on the target hardware, the library provides a fallback to a C-array of float type and fixed
length, i.e. `float array[4]`.
## Features
- [x] Conform to common `Numeric` protocols functions (see upcoming blog post #link )
- [x] 64-bit storage
- [x] 128-bit storage
- [x] Int32, UInt32, Float32 and Float64 storable
| | Int8 | UInt8 | Int16 | UInt16 | Float16 | Int32 | UInt32 | Float32 | Int64 | UInt64 | Float64 |
|--------:|------|-------|-------|--------|---------|-------|--------|---------|-------|--------|---------|
| 64 bit | | | | | | | | | | | |
| 128 bit | | | | | | | | | | | |
| 256 bit | | | | | | | | | | | |
| 512 bit | | | | | | | | | | | |
## TODO
Move TODOs to Issues and/or a Project at some point
- [x] Make `count` on SIMDX static
- [ ] Extension on Array `init(SIMDX)` that uses native intrinsics store
- [ ] Documentation
- [ ] Int8, UInt8, Int16 and UInt16 storable
- [ ] Boolean storage
- [ ] Comparison (Equal, GreaterThan, LowerThan, ...)
- [ ] 256-bit storage
- [ ] Multi-dimensional storage (Matrix)
- [ ] Extend conformance to the `Numeric` protocols
- [ ] Handle overflows properly
- [ ] Handle floating point rounding modes
- [ ] Instance from RandomNumberGenerator
- [ ] Cast most vector types natively using intrinsics
- [ ] 512-bit storage
- [x] Remove the ARM 64 requirement and any other platform restriction in Package.swift
- [ ] Edge case tests
- [ ] Not all intrinsics route through the fastest way possible. Re-visit and improve.
- [ ] Re-evaluate the necessity of SIMDX being ExpressibleByArrayLiteral
- [ ] The generic implementation is not the fastest it could be. Do some more magic.
## References
I started with almost zero knowledge of SIMD/intrinsics or builtin clang functions and was DuckDuckGoing (is that a
thing?) a lot before I started writing this lib. The following references contain some of the most useful instructions
I found across the internet. I gathered them while writing this library, and I am pretty sure I will need to
re-visit them quite a lot, so I leave them here.
### Intrinsic functions
- https://clang.llvm.org/docs/LanguageExtensions.html
- https://software.intel.com/sites/landingpage/IntrinsicsGuide/
- https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics
### Other Libraries
- https://github.com/QuantStack/xsimd
- https://github.com/nemequ/simde
## License
The **SIMDX** library is licensed under the Apache License, version 2.0.
You may not use the files except in compliance with this License.
You may obtain a copy of the license at www.apache.org/licenses/LICENSE-2.0
# tutorials
Hungarian tutorials for QGIS
* [Chaining points stored in PostGIS into polylines](docs/pg_pontok.rst)
* [Using aggregate expressions in the expression editor](docs/aggregator.rst)
* [Entering points by coordinates](docs/koordinata_bevitel.rst)
* [Automatically refreshing layers](docs/reteg_frissites.rst)
* [3D view](docs/3dview.rst)
* [Selecting features from one layer that fall within polygons of another layer](docs/kivalaszt.rst)
* [Displaying photos for map features](docs/foto.rst)
* [Accessing public maps from QGIS](docs/wms_szolg.rst)

More tutorials:
http://www.agt.bme.hu/gis/qgis
# C/C++ Student Information Management Systems, OpenCV-Based License Plate Recognition, and Data Structures



This repository stores some small projects I wrote while learning C/C++. You are welcome to copy the code to study. It includes:
- [C student information management system](https://github.com/wt-git-repository/Program-system/tree/master/C%E5%AD%A6%E7%94%9F%E7%AE%A1%E7%90%86%E7%B3%BB%E7%BB%9F)
- [C++ student information management system](https://github.com/wt-git-repository/Program-system/tree/master/C%2B%2B%E5%AD%A6%E7%94%9F%E7%AE%A1%E7%90%86%E7%B3%BB%E7%BB%9F)
- [OpenCV-based license plate recognition program](https://github.com/wt-git-repository/Program-system/tree/master/%E5%9F%BA%E4%BA%8EOpencv%E7%9A%84%E8%BD%A6%E7%89%8C%E8%AF%86%E5%88%AB%E7%A8%8B%E5%BA%8F)
- [Data structures](https://github.com/wt-git-repository/Program-system/tree/master/Data%20structure/code)

You are welcome to copy and learn from the code. If you have questions, please open an issue.
## Copy
```bash
git clone https://github.com/wt-git-repository/Program-system.git
```
## Author
- Github: [@Enda Lin](https://github.com/wt-git-repository)
## Show your support
If you like my work, please give it a **Star**. Thank you!
---
author: craigshoemaker
ms.service: azure-functions
ms.topic: include
ms.date: 03/05/2019
ms.author: cshoe
ms.openlocfilehash: 32f98eb9b98168bdab270ecff07446c31f8d706d
ms.sourcegitcommit: 32e0fedb80b5a5ed0d2336cea18c3ec3b5015ca1
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 03/30/2021
ms.locfileid: "105729769"
---
Use the function trigger to respond to an event sent to an event hub event stream. You must have read access to the underlying event hub to set up the trigger. When the function is triggered, the message passed to the function is typed as a string.

## <a name="scaling"></a>Scaling

Each instance of an event-triggered function is backed by a single [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance. The trigger (powered by Event Hubs) ensures that only one [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance can get a lease on a given partition.

For example, consider an event hub as follows:

* 10 partitions
* 1,000 events distributed evenly across all partitions, with 100 messages in each partition

When your function is first enabled, there is only one instance of the function. Let's call this first function instance `Function_0`. `Function_0` has a single instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) that holds a lease on all ten partitions. This instance reads events from partitions 0-9. From this point forward, one of the following happens:

* **New function instances are not needed**: `Function_0` is able to process all 1,000 events before the Functions scaling logic takes effect. In this case, all 1,000 messages are processed by `Function_0`.
* **An additional function instance is added**: If the Functions scaling logic determines that `Function_0` has more messages than it can process, a new function app instance (`Function_1`) is created. This new function also has an associated instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor). As the underlying Event Hubs detect that a new host instance is trying to read messages, they load balance the partitions across the host instances. For example, partitions 0-4 may be assigned to `Function_0` and partitions 5-9 to `Function_1`.
* **N more function instances are added**: If the Functions scaling logic determines that both `Function_0` and `Function_1` have more messages than they can process, new `Functions_N` function app instances are created. Apps are created up to the point where `N` is greater than the number of event hub partitions. In our example, Event Hubs again load balances the partitions, in this case across the instances `Function_0`...`Functions_9`.

As scaling occurs, `N` instances can be a number greater than the number of event hub partitions. This pattern is used to ensure that [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instances are available to obtain locks on partitions as they become available from other instances. You are only charged for the resources used when a function instance executes; in other words, you are not charged for this over-provisioning.

When all function execution completes (with or without errors), checkpoints are added to the associated storage account. When check-pointing succeeds, none of the 1,000 messages are ever retrieved again.
<a id="example" name="example"></a>
# <a name="c"></a>[C#](#tab/csharp)
The following example shows a [C# function](../articles/azure-functions/functions-dotnet-class-library.md) that logs the message body of the event hub trigger.
```csharp
[FunctionName("EventHubTriggerCSharp")]
public static void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] string myEventHubMessage, ILogger log)
{
log.LogInformation($"C# function triggered to process a message: {myEventHubMessage}");
}
```
To get access to [event metadata](#event-metadata) in function code, bind to an [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) object (requires a using statement for `Microsoft.Azure.EventHubs`). You can also access the same properties by using binding expressions in the method signature. The following example shows both ways to get the same data:
```csharp
[FunctionName("EventHubTriggerCSharp")]
public static void Run(
[EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] EventData myEventHubMessage,
DateTime enqueuedTimeUtc,
Int64 sequenceNumber,
string offset,
ILogger log)
{
log.LogInformation($"Event: {Encoding.UTF8.GetString(myEventHubMessage.Body)}");
// Metadata accessed by binding to EventData
log.LogInformation($"EnqueuedTimeUtc={myEventHubMessage.SystemProperties.EnqueuedTimeUtc}");
log.LogInformation($"SequenceNumber={myEventHubMessage.SystemProperties.SequenceNumber}");
log.LogInformation($"Offset={myEventHubMessage.SystemProperties.Offset}");
// Metadata accessed by using binding expressions in method parameters
log.LogInformation($"EnqueuedTimeUtc={enqueuedTimeUtc}");
log.LogInformation($"SequenceNumber={sequenceNumber}");
log.LogInformation($"Offset={offset}");
}
```
To receive events in a batch, make `string` or `EventData` an array.

> [!NOTE]
> When receiving in a batch, you cannot bind to method parameters as in the example above with `DateTime enqueuedTimeUtc`; you must receive these values from each `EventData` object.
```cs
[FunctionName("EventHubTriggerCSharp")]
public static void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] EventData[] eventHubMessages, ILogger log)
{
foreach (var message in eventHubMessages)
{
log.LogInformation($"C# function triggered to process a message: {Encoding.UTF8.GetString(message.Body)}");
log.LogInformation($"EnqueuedTimeUtc={message.SystemProperties.EnqueuedTimeUtc}");
}
}
```
# <a name="c-script"></a>[C# Script](#tab/csharp-script)
The following example shows an event hub trigger binding in a *function.json* file and a [C# script function](../articles/azure-functions/functions-reference-csharp.md) that uses the binding. The function logs the message body of the event hub trigger.

The following examples show the Event Hubs binding data in the *function.json* file.

### <a name="version-2x-and-higher"></a>Version 2.x and higher
```json
{
"type": "eventHubTrigger",
"name": "myEventHubMessage",
"direction": "in",
"eventHubName": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
```
### <a name="version-1x"></a>Version 1.x
```json
{
"type": "eventHubTrigger",
"name": "myEventHubMessage",
"direction": "in",
"path": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
```
Here's the C# script code:
```cs
using System;
public static void Run(string myEventHubMessage, TraceWriter log)
{
log.Info($"C# function triggered to process a message: {myEventHubMessage}");
}
```
To get access to [event metadata](#event-metadata) in function code, bind to an [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) object (requires a using statement for `Microsoft.Azure.EventHubs`). You can also access the same properties by using binding expressions in the method signature. The following example shows both ways to get the same data:
```cs
#r "Microsoft.Azure.EventHubs"
using System.Text;
using System;
using Microsoft.ServiceBus.Messaging;
using Microsoft.Azure.EventHubs;
public static void Run(EventData myEventHubMessage,
DateTime enqueuedTimeUtc,
Int64 sequenceNumber,
string offset,
TraceWriter log)
{
log.Info($"Event: {Encoding.UTF8.GetString(myEventHubMessage.Body)}");
log.Info($"EnqueuedTimeUtc={myEventHubMessage.SystemProperties.EnqueuedTimeUtc}");
log.Info($"SequenceNumber={myEventHubMessage.SystemProperties.SequenceNumber}");
log.Info($"Offset={myEventHubMessage.SystemProperties.Offset}");
// Metadata accessed by using binding expressions
log.Info($"EnqueuedTimeUtc={enqueuedTimeUtc}");
log.Info($"SequenceNumber={sequenceNumber}");
log.Info($"Offset={offset}");
}
```
To receive events in a batch, make `string` or `EventData` an array:
```cs
public static void Run(string[] eventHubMessages, TraceWriter log)
{
foreach (var message in eventHubMessages)
{
log.Info($"C# function triggered to process a message: {message}");
}
}
```
# <a name="javascript"></a>[JavaScript](#tab/javascript)
The following example shows an event hub trigger binding in a *function.json* file and a [JavaScript function](../articles/azure-functions/functions-reference-node.md) that uses the binding. The function reads [event metadata](#event-metadata) and logs the message.

The following examples show the Event Hubs binding data in the *function.json* file.

### <a name="version-2x-and-higher"></a>Version 2.x and higher
```json
{
"type": "eventHubTrigger",
"name": "myEventHubMessage",
"direction": "in",
"eventHubName": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
```
### <a name="version-1x"></a>Version 1.x
```json
{
"type": "eventHubTrigger",
"name": "myEventHubMessage",
"direction": "in",
"path": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
```
Here's the JavaScript code:
```javascript
module.exports = function (context, myEventHubMessage) {
context.log('Function triggered to process a message: ', myEventHubMessage);
context.log('EnqueuedTimeUtc =', context.bindingData.enqueuedTimeUtc);
context.log('SequenceNumber =', context.bindingData.sequenceNumber);
context.log('Offset =', context.bindingData.offset);
context.done();
};
```
To receive events in a batch, set `cardinality` to `many` in the *function.json* file, as shown in the following examples.

### <a name="version-2x-and-higher"></a>Version 2.x and higher
```json
{
"type": "eventHubTrigger",
"name": "eventHubMessages",
"direction": "in",
"eventHubName": "MyEventHub",
"cardinality": "many",
"connection": "myEventHubReadConnectionAppSetting"
}
```
### <a name="version-1x"></a>Version 1.x
```json
{
"type": "eventHubTrigger",
"name": "eventHubMessages",
"direction": "in",
"path": "MyEventHub",
"cardinality": "many",
"connection": "myEventHubReadConnectionAppSetting"
}
```
Here's the JavaScript code:
```javascript
module.exports = function (context, eventHubMessages) {
context.log(`JavaScript eventhub trigger function called for message array ${eventHubMessages}`);
eventHubMessages.forEach((message, index) => {
context.log(`Processed message ${message}`);
context.log(`EnqueuedTimeUtc = ${context.bindingData.enqueuedTimeUtcArray[index]}`);
context.log(`SequenceNumber = ${context.bindingData.sequenceNumberArray[index]}`);
context.log(`Offset = ${context.bindingData.offsetArray[index]}`);
});
context.done();
};
```
# <a name="python"></a>[Python](#tab/python)
The following example shows an event hub trigger binding in a *function.json* file and a [Python function](../articles/azure-functions/functions-reference-python.md) that uses the binding. The function reads [event metadata](#event-metadata) and logs the message.

The following example shows the Event Hubs binding data in the *function.json* file.
```json
{
"type": "eventHubTrigger",
"name": "event",
"direction": "in",
"eventHubName": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
```
Here's the Python code:
```python
import logging
import azure.functions as func
def main(event: func.EventHubEvent):
logging.info(f'Function triggered to process a message: {event.get_body().decode()}')
logging.info(f' EnqueuedTimeUtc = {event.enqueued_time}')
logging.info(f' SequenceNumber = {event.sequence_number}')
logging.info(f' Offset = {event.offset}')
# Metadata
for key in event.metadata:
logging.info(f'Metadata: {key} = {event.metadata[key]}')
```
# <a name="java"></a>[Java](#tab/java)
The following example shows an Event Hub trigger binding that logs the message body of the Event Hub trigger.
```java
@FunctionName("ehprocessor")
public void eventHubProcessor(
@EventHubTrigger(name = "msg",
eventHubName = "myeventhubname",
connection = "myconnvarname") String message,
final ExecutionContext context )
{
context.getLogger().info(message);
}
```
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `EventHubTrigger` annotation on parameters whose value would come from the event hub. Parameters with these annotations cause the function to run when an event arrives. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
---
## <a name="attributes-and-annotations"></a>Attributes and annotations
# <a name="c"></a>[C#](#tab/csharp)
In [C# class libraries](../articles/azure-functions/functions-dotnet-class-library.md), use the [EventHubTriggerAttribute](https://github.com/Azure/azure-functions-eventhubs-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.EventHubs/EventHubTriggerAttribute.cs) attribute.

The attribute's constructor takes the name of the event hub, the name of the consumer group, and the name of an app setting that contains the connection string. For more information about these settings, see the trigger [configuration section](#configuration). Here's an `EventHubTriggerAttribute` example:
```csharp
[FunctionName("EventHubTriggerCSharp")]
public static void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] string myEventHubMessage, ILogger log)
{
...
}
```
For a complete example, see the [trigger example - C#](#example).
# <a name="c-script"></a>[C# Script](#tab/csharp-script)
Attributes are not supported by C# Script.
# <a name="javascript"></a>[JavaScript](#tab/javascript)
Attributes are not supported by JavaScript.
# <a name="python"></a>[Python](#tab/python)
Attributes are not supported by Python.
# <a name="java"></a>[Java](#tab/java)
In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [EventHubTrigger](/java/api/com.microsoft.azure.functions.annotation.eventhubtrigger) annotation on parameters whose value would come from the event hub. Parameters with these annotations cause the function to run when an event arrives. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`.
---
## <a name="configuration"></a>Configuration

The following table explains the binding configuration properties that you set in the *function.json* file and the `EventHubTrigger` attribute.

|function.json property | Attribute property |Description|
|---------|---------|----------------------|
|**type** | n/a | Must be set to `eventHubTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
|**direction** | n/a | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | n/a | The name of the variable that represents the event item in function code. |
|**path** |**eventHubName** | Functions 1.x only. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
|**eventHubName** |**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. Can be referenced via [app settings](../articles/azure-functions/functions-bindings-expressions-patterns.md#binding-expressions---app-settings) as `%eventHubName%`. |
|**consumerGroup** |**consumerGroup** | An optional property that sets the [consumer group](../articles/event-hubs/event-hubs-features.md#event-consumers) used to subscribe to events in the hub. If omitted, the `$Default` consumer group is used. |
|**cardinality** | n/a | Used for all non-C# languages. Set to `many` to enable batching. If omitted or set to `one`, a single message is passed to the function.<br><br>In C#, this property is assigned automatically whenever the trigger has an array for its type.|
|**connection** |**Connection** | The name of an app setting that contains the connection string to the event hub's namespace. Copy this connection string by clicking the **Connection Information** button for the [namespace](../articles/event-hubs/event-hubs-create.md#create-an-event-hubs-namespace), not the event hub itself. This connection string must have at least read permissions to activate the trigger.<br><br>If you are using [version 5.x or higher of the extension](../articles/azure-functions/functions-bindings-event-hubs.md#event-hubs-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section that defines the connection. See [Connections](../articles/azure-functions/functions-reference.md#connections).|
[!INCLUDE [app settings to local.settings.json](../articles/azure-functions/../../includes/functions-app-settings-local.md)]
## <a name="usage"></a>Usage
# <a name="c"></a>[C#](#tab/csharp)
### <a name="default"></a>Default

You can use the following parameter types for the triggering event hub:
* `string`
* `byte[]`
* `POCO`
* `EventData` - The default properties of EventData are provided in the [Microsoft.Azure.EventHubs namespace](/dotnet/api/microsoft.azure.eventhubs.eventdata).

### <a name="additional-types"></a>Additional types

Apps using version 5.0.0 or higher of the Event Hub extension use the `EventData` type in [Azure.Messaging.EventHubs](/dotnet/api/azure.messaging.eventhubs.eventdata) instead of the one in the [Microsoft.Azure.EventHubs namespace](/dotnet/api/microsoft.azure.eventhubs.eventdata). This version drops support for the legacy `Body` type in favor of the following types:
- [EventBody](/dotnet/api/azure.messaging.eventhubs.eventdata.eventbody)
# <a name="c-script"></a>[C# Script](#tab/csharp-script)
### <a name="default"></a>Default

You can use the following parameter types for the triggering event hub:
* `string`
* `byte[]`
* `POCO`
* `EventData` - The default properties of EventData are provided in the [Microsoft.Azure.EventHubs namespace](/dotnet/api/microsoft.azure.eventhubs.eventdata).

### <a name="additional-types"></a>Additional types

Apps using version 5.0.0 or higher of the Event Hub extension use the `EventData` type in [Azure.Messaging.EventHubs](/dotnet/api/azure.messaging.eventhubs.eventdata) instead of the one in the [Microsoft.Azure.EventHubs namespace](/dotnet/api/microsoft.azure.eventhubs.eventdata). This version drops support for the legacy `Body` type in favor of the following types:
- [EventBody](/dotnet/api/azure.messaging.eventhubs.eventdata.eventbody)
# <a name="java"></a>[Java](#tab/java)
See the [Java trigger example](#example) for more detail.
# <a name="javascript"></a>[JavaScript](#tab/javascript)
See the [JavaScript trigger example](#example) for more detail.
# <a name="python"></a>[Python](#tab/python)
See the [Python trigger example](#example) for more detail.
---
## <a name="event-metadata"></a>Event metadata

The Event Hubs trigger provides several [metadata properties](../articles/azure-functions/./functions-bindings-expressions-patterns.md). Metadata properties can be used as part of binding expressions in other bindings or as parameters in your code. The properties come from the [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) class.

|Property|Type|Description|
|--------|----|-----------|
|`PartitionContext`|[PartitionContext](/dotnet/api/microsoft.servicebus.messaging.partitioncontext)|The `PartitionContext` instance.|
|`EnqueuedTimeUtc`|`DateTime`|The enqueued time in UTC.|
|`Offset`|`string`|The offset of the data relative to the Event Hub partition stream. The offset is a marker or identifier for an event within the Event Hubs stream. The identifier is unique within a partition of the Event Hubs stream.|
|`PartitionKey`|`string`|The partition to which event data should be sent.|
|`Properties`|`IDictionary<String,Object>`|The user properties of the event data.|
|`SequenceNumber`|`Int64`|The logical sequence number of the event.|
|`SystemProperties`|`IDictionary<String,Object>`|The system properties, including the event data.|

See [code examples](#example) that use these properties earlier in this article.
---
layout: lecture
title: Computer Vision
subtitle: ""
date: 2018-01-30
background: /img/lecture/computer-vision.jpg
university: University of Kurdistan Hewlêr
department: Computer Engineeing Dept.
level: MSc
year: 2018-2020
myStatus: VIsiting Lecturer
permalink: /academy/lecture/computer-vision/
---
## General Information
- **University**: {{page.university}}
- **Department**: {{page.department}}
- **My Status**: {{page.myStatus}}
- **Level**: {{page.level}}
- **Year**: {{page.year}}
## Course Description
The aim of this module is to give students a firm understanding of the theory underlying the processing and interpretation of visual information and the ability to apply that understanding to ubiquitous computing and entertainment related problems. The module is based around problems so that the technology is always presented in context and during some tutorials students work in groups to design solutions to real world problems using the techniques that they have been taught. In addition, the module has a significant practical component so that students can appreciate how difficult it can be to apply the technology.
## Course Objectives
On successful completion of the module students should be able to demonstrate:
1. Identify and implement appropriate solutions to low, mid and high-level Computer Vision problems.
1. Represent problems as mathematical models and apply appropriate machine learning and optimization techniques to solve those problems.
1. Apply deep learning algorithms and explain their operation.
1. Recommend appropriate statistical representations of static and dynamic objects and apply these to solve detection, classification and/or tracking problems.
1. Evaluate the performance of visual classification, tracking and retrieval systems and draw conclusions on their efficacy.
## Course Content
- Introduction to the Course
- Image Classification Pipeline
- Loss Functions and Optimization
- Backpropagation and Neural Networks
- Convolutional Neural Networks
- Training Neural Networks
- Deep Learning Software
- CNN Architectures
- Recurrent Neural Networks
- Detection and Segmentation
- Visualizing and Understanding
- Generative Models
---
layout: global
displayTitle: Convert To n-grams
title: Convert To n-grams
description: Convert To n-grams
usesMathJax: true
includeOperationsMenu: true
---
Converts arrays of strings to arrays of n-grams. Null values in the input arrays are ignored. Each n-gram is represented by a space-separated string of words. When the input is empty, an empty array is returned. When the input array is shorter than n (number of elements per n-gram), no n-grams are returned.
This operation is ported from Spark ML.
For a comprehensive introduction, see
<a target="_blank" href="https://spark.apache.org/docs/2.0.0/ml-features.html#n-gram">Spark documentation</a>.
For scala docs details, see
<a target="_blank" href="https://spark.apache.org/docs/2.0.0/api/scala/index.html#org.apache.spark.ml.feature.NGram">org.apache.spark.ml.feature.NGram documentation</a>.
**Since**: Seahorse 1.0.0
## Input
<table>
<thead>
<tr>
<th style="width:15%">Port</th>
<th style="width:15%">Type Qualifier</th>
<th style="width:70%">Description</th>
</tr>
</thead>
<tbody>
<tr><td><code>0</code></td><td><code><a href="../classes/dataframe.html">DataFrame</a></code></td><td>The input <code>DataFrame</code>.</td></tr>
</tbody>
</table>
## Output
<table>
<thead>
<tr>
<th style="width:15%">Port</th>
<th style="width:15%">Type Qualifier</th>
<th style="width:70%">Description</th>
</tr>
</thead>
<tbody>
<tr><td><code>0</code></td><td><code><a href="../classes/dataframe.html">DataFrame</a></code></td><td>The output <code>DataFrame</code>.</td></tr><tr><td><code>1</code></td><td><code><a href="../classes/transformer.html">Transformer</a></code></td><td>A <code>Transformer</code> that allows to apply the operation on other <code>DataFrames</code> using a <a href="transform.html">Transform</a>.</td></tr>
</tbody>
</table>
## Parameters
<table class="table">
<thead>
<tr>
<th style="width:15%">Name</th>
<th style="width:15%">Type</th>
<th style="width:70%">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>n</code></td>
<td><code><a href="../parameter_types.html#numeric">Numeric</a></code></td>
<td>The minimum n-gram length.</td>
</tr>
<tr>
<td><code>operate on</code></td>
<td><code><a href="../parameter_types.html#input-output-column-selector">InputOutputColumnSelector</a></code></td>
<td>The input and output columns for the operation.</td>
</tr>
</tbody>
</table>
{% markdown operations/examples/ConvertToNGrams.md %}
# Get MashupPlatform context widget
[](https://www.fiware.org/developers/catalogue/)
[](https://opensource.org/licenses/MIT)<br/>
[](https://travis-ci.com/lets-fiware/get-mashupplatform-context-widget)
[](https://coveralls.io/github/lets-fiware/get-mashupplatform-context-widget?branch=master)
The Get MashupPlatform context widget is a WireCloud widget that provides the MashupPlatform context information.
Build
-----
Make sure you have [Node.js](http://node.js) and [Bower](http://bower.io) installed on your system. For example, you can install them on Ubuntu and Debian by running the following commands:
```bash
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install nodejs
sudo apt-get install npm
sudo npm install -g bower
```
Install other npm dependencies by running:
```bash
npm install
```
In order to build this widget you need to download grunt:
```bash
sudo npm install -g grunt-cli
```
And now, you can use grunt:
```bash
grunt
```
If everything goes well, you will find a wgt file in the `dist` folder.
## Documentation
Documentation about how to use this widget is available on the
[User Guide](src/doc/userguide.md). Anyway, you can find general information
about how to use widgets on the
[WireCloud's User Guide](https://wirecloud.readthedocs.io/en/stable/user_guide/)
available on Read the Docs.
## Copyright and License
Copyright (c) 2019 Kazuhito Suda
Licensed under the MIT license.
---
layout: post
title: "webpack引入velocity时遇到的小问题"
date: 2017-04-18 15:57:00
categories: webpack 插件
---
Today I was looking at the webpack-simple demo provided officially by Vue, and following an article I found online I wanted to tweak the demo a little. In practice I needed to import Velocity to implement some animation effects, but after installing velocity with npm, running the project kept throwing this error:
{% highlight ruby %}
Uncaught Error: Cannot find module "fs":
{% endhighlight %}
At first I assumed I was importing it the wrong way, so I kept searching for the various ways webpack can pull in a plugin, such as:
##### Direct import
{% highlight ruby %}
import Velocity from 'velocity'
{% endhighlight %}
This was the first import style I tried, but it didn't help.
##### ProvidePlugin
{% highlight ruby %}
plugins: [
new webpack.ProvidePlugin({
velocity: "velocity"
}),
]
{% endhighlight %}
This adds the plugin in the webpack.config.js file, but it still didn't work.
##### externals reference
{% highlight ruby %}
externals:{
'jquery':'window.jQuery'
}
{% endhighlight %}
This is also configured in the webpack.config.js file; needless to say, it was still wrong.
Just as I was starting to doubt everything, I opened Velocity's official site, went to its GitHub page, and saw:
{% highlight ruby %}
npm install velocity-animate
{% endhighlight %}
Yes — when installing with npm, the package name is not velocity but velocity-animate.
velocity-animate
velocity-animate
velocity-animate
I looked it up online: the npm package named velocity is actually a Java-based template engine...
Afterwards, using the "direct import" approach above, the import succeeded:
{% highlight ruby %}
import Velocity from 'velocity-animate'
// or use require
require('velocity-animate')
{% endhighlight %}
In the end, the lesson is the usual one: there is always more to learn.
Title: 104.7 Find system files and place files in the correct location
Date: 2015-12-25 12:20
Category: lpic102
# 104.7 Find system files and place files in the correct location
*Weight: 2*
Candidates should be thoroughly familiar with the **Filesystem Hierarchy Standard (FHS)**, including typical file locations and directory classifications.
## Objectives
- Understand the correct locations of files under the FHS.
- Find files and commands on a Linux system.
- Know the location and purpose of important file and directories as defined in the FHS.
- find
- locate
- updatedb
- whereis
- which
- type
- /etc/updatedb.conf
## FHS
The Filesystem Hierarchy Standard (FHS) is a document describing the Linux/Unix file hierarchy. Knowing it is very useful because it lets you easily find what you are looking for:
|directory|usage|
|---|---|
|bin|Essential command binaries|
|boot|Static files of the boot loader|
|dev|Device files|
|etc|Host-specific system configuration|
|lib|Essential shared libraries and kernel modules|
|media|Mount point for removable media|
|mnt|Mount point for mounting a filesystem temporarily|
|opt|Add-on application software packages|
|sbin|Essential system binaries|
|srv|Data for services provided by this system|
|tmp|Temporary files|
|usr|Secondary hierarchy|
|var|Variable data|
|home|User home directories (optional)|
|lib<qual>|Alternate format essential shared libraries (optional)|
|root|Home directory for the root user (optional)|
The `/usr` filesystem is the second major section of the filesystem, containing shareable, read-only data. It can be shared between systems, although present practice does not often do this.
The `/var` filesystem contains variable data files, including spool directories and files, administrative and logging data, and transient and temporary files. Some portions of /var are not shareable between different systems, but others, such as /var/mail, /var/cache/man, /var/cache/fonts, and /var/spool/news, may be shared.
## Path
A general linux install has a lot of files; 741341 files in my case. So how it find out where to look when you type a command? This is done by a variable called PATH:
````
$ echo $PATH
/home/jadi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games;/home/jadi/bin/
````
And for root user:
````
# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
````
As you can see, this is a list of directories separated by colons. Obviously you can change your path with ```export PATH=$PATH:/usr/new/dir``` or put that line in ```.bashrc``` to make the change permanent.
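A minimal sketch of extending the search path (the directory name `/usr/new/dir` is just an example, not a real requirement):

```shell
# Append an example directory to the search path for the current session
export PATH="$PATH:/usr/new/dir"

# The shell now also looks there; the new entry is the last one in the list
echo "$PATH" | tr ':' '\n' | tail -n 1

# To make the change permanent, append the same export line to ~/.bashrc:
#   echo 'export PATH="$PATH:/usr/new/dir"' >> ~/.bashrc
```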
## which, type and whereis
The `which` command shows the first occurrence of the given command in the PATH:
````
$ which mkfd
$ which mkfs
/sbin/mkfs
````
> Use the `-a` switch to show all occurrences in the path, not only the first one.
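For example (using `sh`, which is practically guaranteed to exist on any system):

```shell
# First match only
which sh

# Every match along the PATH; may print one line or several,
# depending on how many directories contain an `sh` executable
which -a sh
```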
But what happens if you run `which for`?
````
$ which for
$ type for
for is a shell keyword
````
As you can see, `which` did not find anything for `for`, so we used `type` instead.
````
$ type type
type is a shell builtin
$ type for
for is a shell keyword
$ type mkfs
mkfs is /sbin/mkfs
$ type mkfd
bash: type: mkfd: not found
````
The `type` command is more general than `which`; it also understands and shows *bash keywords*.
Another useful command in this category is `whereis`. Unlike `which`, `whereis` shows the man pages and source code of programs alongside their binary locations.
````
$ whereis mkfs
mkfs: /sbin/mkfs.bfs /sbin/mkfs.ext3 /sbin/mkfs.ext4 /sbin/mkfs.vfat /sbin/mkfs.cramfs /sbin/mkfs.minix /sbin/mkfs.ext2 /sbin/mkfs.msdos /sbin/mkfs.fat /sbin/mkfs.ntfs /sbin/mkfs.ext4dev /sbin/mkfs /usr/share/man/man8/mkfs.8.gz
$ whereis ping
ping: /bin/ping /usr/share/man/man8/ping.8.gz
$ whereis chert
chert:
$
````
## find
We have already seen this command in detail, but let's look at a couple of new switches.
- The `-user` and `-group` switches match files belonging to a specific user and group.
- The `-maxdepth` switch tells `find` how deep it should descend into the directories.
````
$ find /tmp/ -maxdepth 1 -user jadi | head
/tmp/asheghloo.png
/tmp/tmpAN6Drb
/tmp/wrapper-24115-2-out
/tmp/sni-qt_goldendict_20048-sRlmvN
/tmp/asheghloo.gif
/tmp/zim-jadi
/tmp/3la.txt
/tmp/unity_support_test.0
/tmp/batman.jpg
````
You can even find files not belonging to any user / group with `-nouser` and `-nogroup`.
>Like other *tests*, you can add a `!` just before any phrase to negate it. So this will find files **not belonging** to jadi: `find . ! -user jadi`
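As a small self-contained illustration of ownership tests and negation (using a scratch directory under `/tmp` rather than the author's files):

```shell
# Set up a scratch directory with one file owned by the current user
mkdir -p /tmp/find-owner-demo
touch /tmp/find-owner-demo/mine.txt

# Files owned by the current user: finds mine.txt
find /tmp/find-owner-demo -maxdepth 1 -type f -user "$(id -un)"

# Negated test: files NOT owned by the current user - prints nothing here
find /tmp/find-owner-demo -maxdepth 1 -type f ! -user "$(id -un)"
```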
## locate & updatedb
You have tried `find` and know that it is slowwwww... That is because it searches the file system on each run. Now let's see the fastest command:
````
$ locate happy
/home/jadi/.Spark/xtra/emoticons/Default.adiumemoticonset/happy.png
/home/jadi/.Spark/xtra/emoticons/sparkEmoticonSet/happy.png
/home/jadi/Downloads/jadi-net_radio-geek_040_antihappy.mp3
/usr/share/emoticons/kde4/unhappy.png
/usr/share/pixmaps/fvwm/mini.happy.xpm
/usr/share/pixmaps/pidgin/emotes/default/happy.png
/usr/share/pixmaps/pidgin/emotes/small/happy.png
/usr/src/linux-headers-3.13.0-40-generic/include/config/happymeal.h
/usr/src/linux-headers-3.16.0-25-generic/include/config/happymeal.h
/usr/src/linux-headers-3.16.0-28-generic/include/config/happymeal.h
/usr/src/linux-headers-3.16.0-29-generic/include/config/happymeal.h
````
And it is fast:
````
$ time locate kernel | wc -l
11235
real 0m0.341s
user 0m0.322s
sys 0m0.015s
````
This is fast because its data comes from a database created by the `updatedb` command, which is usually run on a daily basis via a cron job. Its configuration file is `/etc/updatedb.conf` or `/etc/sysconfig/locate`:
````
$ cat /etc/updatedb.conf
PRUNE_BIND_MOUNTS="yes"
# PRUNENAMES=".git .bzr .hg .svn"
PRUNEPATHS="/tmp /var/spool /media /home/.ecryptfs"
PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre tmpfs usbfs udf fuse.glusterfs fuse.sshfs curlftpfs ecryptfs fusesmb devtmpfs"
````
Please note that you can update the database by running `updatedb` as root, and get some info about it with the `-S` switch of the `locate` command:
````
$ locate -S
Database /var/lib/mlocate/mlocate.db:
73,602 directories
711,894 files
46,160,154 bytes in file names
18,912,999 bytes used to store database
````
# And... the LPIC1 exam 101 is DONE! Congrats.
http://j.mp/jadilpic1
.
.
.
.
.
.
.
.
| 30.316038 | 326 | 0.743737 | eng_Latn | 0.989241 |
4b746b771f1516ab5fd40144538736de3e39a177 | 334 | md | Markdown | README.md | korelhayrullah/Swift | 0c28a2a24782da31c27533de6e7be89c913a2c1e | [
"MIT"
] | 1 | 2019-01-12T20:56:07.000Z | 2019-01-12T20:56:07.000Z | README.md | korelhayrullah/Swift | 0c28a2a24782da31c27533de6e7be89c913a2c1e | [
"MIT"
] | null | null | null | README.md | korelhayrullah/Swift | 0c28a2a24782da31c27533de6e7be89c913a2c1e | [
"MIT"
] | null | null | null | # Swift
This repo will be populated with the tutorials that I have completed and some of my own projects, in order to track my progress and maybe help somebody out. If you find something wrong or stupid, feel welcome to open a pull request. I believe that the most important part of learning is giving feedback, no matter how. :smile:
| 66.8 | 323 | 0.790419 | eng_Latn | 0.999955 |
4b746d1363542e2e58ad4480e923b0412d4c5bc8 | 446 | md | Markdown | Miscellaneouos/Python/InventoryManagementApp/README.md | KeerthanaPravallika/OpenOctober | e93c120c90ce6c298b7052a2f7759560a2a2761c | [
"Apache-2.0"
] | 32 | 2020-10-17T09:58:41.000Z | 2021-10-13T04:43:35.000Z | Miscellaneouos/Python/InventoryManagementApp/README.md | KeerthanaPravallika/OpenOctober | e93c120c90ce6c298b7052a2f7759560a2a2761c | [
"Apache-2.0"
] | 380 | 2020-10-18T15:35:49.000Z | 2021-12-25T05:03:50.000Z | Miscellaneouos/Python/InventoryManagementApp/README.md | KeerthanaPravallika/OpenOctober | e93c120c90ce6c298b7052a2f7759560a2a2761c | [
"Apache-2.0"
] | 68 | 2020-10-17T17:29:54.000Z | 2021-10-13T04:43:35.000Z | # WISPMS-Python
Warehouse Inventory Sales Purchase Management System (WISPMS) based on Python using Tkinter and Database
# Python Project using Tkinter and Database
`Warehouse Inventory Sales Purchase Management System (WISPMS)`
## Requirements
- Python 3.4.x
- Sqlite DB / MySQL / Oracle
- Tkinter
### Steps to Deploy:
1. Clone this Repo
2. Install above Requirements (if any missing)
3. Run the main.py in shell and it's all set!!
| 26.235294 | 104 | 0.746637 | eng_Latn | 0.739728 |
4b74e5d3af4422372b257a30908620795649546e | 550 | md | Markdown | third-party/README.md | jetperch/fitterbap | dc29db72c2d7b01d90556a251be0a361574033bc | [
"Apache-2.0"
] | 21 | 2021-05-14T20:16:56.000Z | 2022-03-30T18:54:31.000Z | third-party/README.md | jetperch/fitterbap | dc29db72c2d7b01d90556a251be0a361574033bc | [
"Apache-2.0"
] | null | null | null | third-party/README.md | jetperch/fitterbap | dc29db72c2d7b01d90556a251be0a361574033bc | [
"Apache-2.0"
] | null | null | null | # Third-party libraries
The Fitterbap library is distributed alongside several very useful third-party
libraries. Each of these libraries is provided under its respective
license.
* [cmocka](cmocka/LICENSE.txt) Apache 2.0 : 1.1.0+ (git commit: 26717f4909039803b231434740ef3ce005258dae)
* [tinyprintf](tinyprintf/tinyprintf_LICENSE.BSD-new) BSD
* [uthash](uthash/LICENSE) BSD
## Other libraries
Although not included with Fitterbap, the following libraries may be of interest:
* [xprintf](http://elm-chan.org/fsw/strf/xprintf.html)
| 34.375 | 107 | 0.776364 | eng_Latn | 0.9552 |
4b7523f41dc224a7bcbb46471238aa163a0425cc | 2,875 | md | Markdown | docs/analysis-services/xmla/xml-elements-properties/folders-element-xmla.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/xmla/xml-elements-properties/folders-element-xmla.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/xmla/xml-elements-properties/folders-element-xmla.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Elemento Folders (XMLA) | Microsoft Docs
ms.custom:
ms.date: 03/06/2017
ms.prod: analysis-services
ms.prod_service: analysis-services, azure-analysis-services
ms.service:
ms.component:
ms.reviewer:
ms.suite: pro-bi
ms.technology:
ms.tgt_pltfrm:
ms.topic: reference
apiname: Folders Element
apilocation: http://schemas.microsoft.com/analysisservices/2003/engine
apitype: Schema
applies_to: SQL Server 2016 Preview
f1_keywords:
- microsoft.xml.analysis.folders
- http://schemas.microsoft.com/analysisservices/2003/engine#Folders
- urn:schemas-microsoft-com:xml-analysis#Folders
helpviewer_keywords: Folders element
ms.assetid: fefb0469-22ea-4804-8dc3-9c49825b32f1
caps.latest.revision: "12"
author: Minewiskan
ms.author: owend
manager: kfile
ms.workload: Inactive
ms.openlocfilehash: 452f501b4189866c2257d871c581e0fd88313426
ms.sourcegitcommit: f486d12078a45c87b0fcf52270b904ca7b0c7fc8
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 01/08/2018
---
# <a name="folders-element-xmla"></a>Folders Element (XMLA)
  [!INCLUDE[ssas-appliesto-sqlas-aas](../../../includes/ssas-appliesto-sqlas-aas.md)]Contains a collection of [Folder](../../../analysis-services/xmla/xml-elements-properties/folder-element-xmla.md) elements used by the parent [Location](../../../analysis-services/xmla/xml-elements-properties/location-element-xmla.md) element during a [Restore](../../../analysis-services/xmla/xml-elements-commands/restore-element-xmla.md) or [Synchronize](../../../analysis-services/xmla/xml-elements-commands/synchronize-element-xmla.md) command.
## <a name="syntax"></a>Syntax
```xml
<Location>
...
<Folders>
<Folder>...</Folder>
</Folders>
...
</Location>
```
## <a name="element-characteristics"></a>Element Characteristics
|Characteristic|Description|
|--------------------|-----------------|
|Data type and length|None (collection)|
|Default value|None|
|Cardinality|0-1: Optional element that can occur only once.|
## <a name="element-relationships"></a>Element Relationships
|Relationship|Element|
|------------------|-------------|
|Parent elements|[Location](../../../analysis-services/xmla/xml-elements-properties/location-element-xmla.md)|
|Child elements|[Folder](../../../analysis-services/xmla/xml-elements-properties/folder-element-xmla.md)|
## <a name="remarks"></a>Remarks
## <a name="see-also"></a>See Also
  [Restore Element (XMLA)](../../../analysis-services/xmla/xml-elements-commands/restore-element-xmla.md)
  [Synchronize Element (XMLA)](../../../analysis-services/xmla/xml-elements-commands/synchronize-element-xmla.md)
  [Properties (XMLA)](../../../analysis-services/xmla/xml-elements-properties/xml-elements-properties.md)
| 38.851351 | 531 | 0.703304 | yue_Hant | 0.130207 |
4b7541512a5d99dcd1c6c9d8b6af3d34425427e3 | 486 | md | Markdown | versioned_docs/version-v1.1/roadmap/2021-09-roadmap.md | grezar/kubevela.io | aec2544f5f6c0ea3c00766137c2299de393a89c6 | [
"Apache-2.0"
] | 42 | 2020-09-12T03:20:29.000Z | 2022-03-31T05:32:47.000Z | versioned_docs/version-v1.1/roadmap/2021-09-roadmap.md | grezar/kubevela.io | aec2544f5f6c0ea3c00766137c2299de393a89c6 | [
"Apache-2.0"
] | 344 | 2020-09-12T07:44:42.000Z | 2022-03-31T16:39:26.000Z | versioned_docs/version-v1.1/roadmap/2021-09-roadmap.md | grezar/kubevela.io | aec2544f5f6c0ea3c00766137c2299de393a89c6 | [
"Apache-2.0"
] | 91 | 2020-09-12T07:27:37.000Z | 2022-03-31T13:35:56.000Z | ---
title: Roadmap
---
Date: 2021-07-01 to 2021-09-30
## Core Platform
1. Support more built-in capabilities and cloud resources with a unified experience, such as monitoring, auto-scaling, and middleware plugins.
2. Auto-binding for cloud resources.
3. Support more security policies (integrating with OPA, CIS, Popeye).
## User Experience
1. Support a dashboard for deploying KubeVela applications.
2. Support velacp as a non-Kubernetes API server for CI integration.
## Third-party integrations
| 23.142857 | 137 | 0.761317 | eng_Latn | 0.905862 |
4b75aec77e4142983e1195a80d54432788c411fa | 6,355 | md | Markdown | docs/testcafe.md | joshskumar/CodeceptJS | 52eac9821ab1bbf1db1ff31bd4a9dcfc9a558e98 | [
"MIT"
] | 3 | 2019-07-08T11:35:45.000Z | 2019-07-24T11:49:49.000Z | docs/testcafe.md | joshskumar/CodeceptJS | 52eac9821ab1bbf1db1ff31bd4a9dcfc9a558e98 | [
"MIT"
] | null | null | null | docs/testcafe.md | joshskumar/CodeceptJS | 52eac9821ab1bbf1db1ff31bd4a9dcfc9a558e98 | [
"MIT"
] | 1 | 2021-05-06T13:36:48.000Z | 2021-05-06T13:36:48.000Z | ---
permalink: /testcafe
title: Testing with TestCafe
---
# Testing with TestCafe
[TestCafe](https://devexpress.github.io/testcafe/) is another alternative engine for driving browsers. It is driven by unique technology which provides fast and simple cross browser testing for desktop and mobile browsers. Unlike WebDriver or Puppeteer, TestCafe doesn't control a browser at all. It is not a browser itself, like [Nightmare](/nightmare) or Cypress. **TestCafe core is a proxy server** that runs behind the scene, and transforms all HTML and JS to include code that is needed for test automation.

This is a very smart idea. But to use TestCafe on a daily basis you need to clearly understand its benefits and limitations:
### Pros
* **Fast**. The browser is controlled from inside the web page. This makes tests run as fast as your browser can render the page, with no extra network requests.
* **Simple Cross-Browser Support.** Because TestCafe only launches browsers, it can **automate a browser** on desktop or mobile. Unlike WebDriver, you don't need to prepare a special version of browser and driver to run tests, so setup is simplified. All you need is a browser installed, and you are ready to go.
* **Stable Execution.** Because a test is executed inside a browser, the effects of network latency are reduced. Unlike WebDriver, you won't hit stale element exceptions or element-not-interactable exceptions, as from within a web browser all DOM elements are accessible.
### Cons
* **Magic.** Browsers executed in TestCafe are not aware that they run in test mode, so in some edge cases automation control can break. It is also quite hard to debug possible issues, as you don't know how a web page is actually parsed to inject the automation scripts.
* **No Browser Control.** Because TestCafe does not control the browser, you can't actually automate all user actions. For instance, TestCafe can't open new tabs or open a new browser window in incognito mode. There can also be some issues running tests on 3rd party servers or inside iframes.
* **Simulated Events.** Events like `click` or `doubleClick` are simulated by JavaScript internally. Inside WebDriver or Puppeteer those events are dispatched by the browser itself; these are called native events. Native events are closer to the real user experience, so in some cases simulated events won't represent the actual user experience, which can lead to false positive results. For instance, a button which can't be physically clicked by a user would be clickable inside TestCafe.
Anyway, TestCafe is a good option to start with if you need cross browser testing. And here is the **reason to use TestCafe with CodeceptJS: if you hit an edge case or issue, you can easily switch your tests to WebDriver**, as all helpers in CodeceptJS share the same syntax.
CodeceptJS is a rich testing framework which also provides features missing in original TestCafe:
* [Cucumber integration](/bdd)
* [Real Page Objects](/pageobjects)
* [Data Management via API](/data)
* and others
## Writing Tests
To start using TestCafe with CodeceptJS install both via NPM
> If you don't have `package.json` in your project, create it with `npm init -y`.
```
npm i codeceptjs testcafe --save-dev
```
Then you need to initialize a project, selecting TestCafe when asked:
```
npx codeceptjs init
```
A first test should be created with `codeceptjs gt` command
```
npx codeceptjs gt
```
In the next example we will use the [TodoMVC application](http://todomvc.com/examples/angularjs/#/). So let's create a test which will fill in the todo list:
```js
Feature('TodoMVC');
Scenario('create todo item', (I) => {
  I.amOnPage('http://todomvc.com/examples/angularjs/#/');
  I.fillField('.new-todo', 'foo');
  I.pressKey('Enter');
  I.seeNumberOfVisibleElements('.todo-list li', 1);
  I.see('1 item left', '.todo-count');
});
```
The syntax is the same for all helpers in CodeceptJS, so to learn more about the available commands read [CodeceptJS Basics](/basics).
> [▶ Complete list of TestCafe actions](/helpers/TestCafe)
## Page Objects
Multiple tests can be refactored to share some logic and locators. It is recommended to use PageObjects for this. For instance, in the example above, we could create special actions for creating todos and checking them. If we move such methods into a corresponding object, a test would look even clearer:
```js
Scenario('Create a new todo item', async (I, TodosPage) => {
I.say('Given I have an empty todo list')
I.say('When I create a todo "foo"')
TodosPage.enterTodo('foo')
I.say('Then I see the new todo on my list')
TodosPage.seeNumberOfTodos(1)
I.saveScreenshot('create-todo-item.png')
})
Scenario('Create multiple todo items', async (I, TodosPage) => {
I.say('Given I have an empty todo list')
I.say('When I create todos "foo", "bar" and "baz"')
TodosPage.enterTodo('foo')
TodosPage.enterTodo('bar')
TodosPage.enterTodo('baz')
I.say('Then I have these 3 todos on my list')
TodosPage.seeNumberOfTodos(3)
I.saveScreenshot('create-multiple-todo-items.png')
})
```
> ℹ [Source code of this example](https://github.com/hubidu/codeceptjs-testcafe-todomvc) is available on GitHub.
A PageObject can be injected into a test by its name. Here is how TodosPage looks like:
```js
// inside todos_page.js
const { I } = inject();
module.exports = {
goto() {
I.amOnPage('http://todomvc.com/examples/angularjs/#/')
},
enterTodo(todo) {
I.fillField('.new-todo', todo)
I.pressKey('Enter')
},
seeNumberOfTodos(numberOfTodos) {
I.seeNumberOfVisibleElements('.todo-list li', numberOfTodos)
},
}
```
> [▶ Read more about PageObjects in CodeceptJS](/pageobjects)
## Extending
If you want to use the TestCafe API inside your tests, you can put its calls into actions of the `I` object. To do so you can generate a new helper, access the TestCafe helper, and get the test controller.
Create a helper using the `codeceptjs gh` command.
```
npx codeceptjs gh
```
All methods of the newly created class will be added to the `I` object.
```js
const Helper = codeceptjs.helper;
class MyTestCafe extends Helper {
slowlyFillField(field, text) {
// import test controller from TestCafe helper
const { t } = this.helpers.TestCafe;
// use TestCafe API here
return t.setTestSpeed(0.1)
.typeText(field, text);
}
}
```
| 39.71875 | 512 | 0.739575 | eng_Latn | 0.985607 |
4b76ddb0b775f422541abc9d23889d2b6439b53b | 742 | md | Markdown | 01-StudyNote/2019/201907/SelfStudy-20190726.md | JediChou/jedi-md-book | 611ce1c817d7179f1e23c57a959af48cb6ccebe8 | [
"Apache-2.0"
] | 1 | 2022-03-13T07:48:27.000Z | 2022-03-13T07:48:27.000Z | 01-StudyNote/2019/201907/SelfStudy-20190726.md | JediChou/jedi-md-book | 611ce1c817d7179f1e23c57a959af48cb6ccebe8 | [
"Apache-2.0"
] | null | null | null | 01-StudyNote/2019/201907/SelfStudy-20190726.md | JediChou/jedi-md-book | 611ce1c817d7179f1e23c57a959af48cb6ccebe8 | [
"Apache-2.0"
] | 1 | 2019-08-27T11:11:53.000Z | 2019-08-27T11:11:53.000Z | # Self Study
Author: Jedi Chou, Create at 2019.7.26 8:12 AM
* Weekly review
-[x] Read a thesis and don't record. 17:18
* Daily study
* Check-in
-[x] 101WeiQi daily exercise check-in. 8:29
-[x] Read articles and check interview invitation (MaiMai APP). 8:32
-[x] NowCoder check-in and do an exercise that contains 20 puzzles. 8:32
-[x] Don't memorize words APP check-in. 8:33
-[x] 163 music APP check-in. 7:00
-[x] Tencent cartoon APP check-in. 8:33
-[x] Exercise of Vocabulary (vocabulary.com). 8:38
-[ ] Open class APP by 163.com check-in
* Micro habit
-[x] Reading 1 minutes at SIMPLE Wiki. 8:40
-[x] Read Sina Blog. 9:13
* Reading
-[x] Feedly RSS reader. 9:50
-[ ] 163.com mail
| 28.538462 | 76 | 0.644205 | kor_Hang | 0.470834 |
4b76ef04795729141df1d0b724affa4f73fc8145 | 4,726 | md | Markdown | docs/tooling/testing/end-to-end-testing/overview.md | Fadarrizz/docs | b7d7b89e68f4c50467e3432c5b3883dd8e4439c2 | [
"Apache-2.0"
] | 424 | 2015-01-14T14:26:13.000Z | 2022-01-18T09:20:10.000Z | docs/tooling/testing/end-to-end-testing/overview.md | Fadarrizz/docs | b7d7b89e68f4c50467e3432c5b3883dd8e4439c2 | [
"Apache-2.0"
] | 1,840 | 2015-01-13T12:01:29.000Z | 2022-02-17T12:27:30.000Z | docs/tooling/testing/end-to-end-testing/overview.md | Fadarrizz/docs | b7d7b89e68f4c50467e3432c5b3883dd8e4439c2 | [
"Apache-2.0"
] | 740 | 2015-02-13T18:26:34.000Z | 2021-11-03T06:43:09.000Z | ---
title: Overview
titletag: End to End Testing - Overview
description: Write and execute UI E2E automation tests to ensure that newly added features are working correctly and no regressions are introduced in the mobile app.
position: 10
tags: ui testing, app ui testing, nativescript ui testing, automation testing, app automation testing, nativescript automation testing, appium, ui test automation, e2e testing
slug: e2e-testing-overview
---
# Overview
E2E testing allows you to test your application workflows and make sure all the integration points are working as expected. You can literally test any screen and any workflow of your app. It differs from [Unit Testing]({% slug unit-testing %}) in that unit testing is used to test an isolated piece of code, usually in a mocked environment.
If you wonder when to do unit testing and when to do E2E testing, there is a basic rule. For isolated pieces of code, prefer a set of unit tests focused on the work that code does. Unit tests are usually smaller, simpler and much faster. Use E2E testing for any application workflow where multiple components are involved and you want to ascertain that they work well when integrated together. E2E tests allow you to cover flows in the application which are not covered by unit and integration tests.
## E2E Testing in a NativeScript app
Thanks to [Appium](http://appium.io/) and the [nativescript-dev-appium plugin](https://github.com/NativeScript/nativescript-dev-appium) E2E automation testing is made easy in NativeScript!
### What is Appium?
Appium is an open-source test automation framework for use with any mobile app. It allows to easily create UI automation testing for iOS, Android, Windows and hybrid mobile apps.
Read more details in the [Appium introduction](http://appium.io/docs/en/about-appium/intro/) and [Appium getting started guide](http://appium.io/docs/en/about-appium/getting-started/).
### What is nativescript-dev-appium?
Since Appium is used internally to test the NativeScript framework itself, the core team developed a NativeScript plugin that wraps Appium and makes it easy to use for UI test automation of NativeScript apps. The [nativescript-dev-appium plugin](https://github.com/NativeScript/nativescript-dev-appium) is created and maintained by the core team and is constantly improving.
## Environment Setup
### Prerequisites
For the plugin to work correctly you need to have:
- latest version of [XCode](https://developer.apple.com/library/archive/releasenotes/DeveloperTools/RN-Xcode/Chapters/Introduction.html)
- [Android SDK Tools](https://developer.android.com/studio/releases/sdk-tools.html) with version > 25.3.0
### Global Setup
* Install Appium globally:
```shell
$ npm install -g appium@1.18.1
```
* iOS Dependencies
Install external dependencies of [XCUITest](https://github.com/appium/appium-xcuitest-driver/blob/master/README.md#external-dependencies) driver for iOS via Homebrew or NPM as follows:
* [Homebrew](https://brew.sh):
```shell
$ brew install carthage
$ brew install libimobiledevice --HEAD
$ brew install ideviceinstaller
$ brew install ios-webkit-debug-proxy
```
* [NPM](https://www.npmjs.com/):
```shell
$ npm install -g ios-deploy
```
> For detailed information on external dependencies, please, refer to the [XCUITest](https://github.com/appium/appium-xcuitest-driver/blob/master/README.md#external-dependencies) repository.
* Android Dependencies
For correct functioning of the [mobile-devices-controller](https://github.com/NativeScript/mobile-devices-controller) for Android emulators, `telnet` is required to be available on your system.
As `telnet` is removed from *macOS High Sierra*, it could be installed as follows:
```shell
$ brew install telnet
```
## What's Next?
You have now learned the basics about what E2E testing is for mobile apps and what's the difference between unit and e2e testing. You can now continue to the simple example provided in the [Getting Started]({% slug e2e-testing-getting-started %}) section where you'll learn how to setup the nativescript-dev-appium plugin in your project and how to run your first test.
Do not miss gaining more advanced knowledge about the usage of Appium by reviewing
- [nativescript-dev-appium Features]({% slug e2e-testing-features %})
- [How to create custom Appium capabilities and what options it provides]({% slug e2e-testing-customization %})?
- [How to troubleshoot any issues and what are some common issues]({% slug e2e-testing-troubleshooting %})?
There are also nice blog posts and conference videos covering Appium and its usage in NativeScript apps which you can find in the [References section]({% slug e2e-testing-references %}) of this documentation.
| 55.6 | 496 | 0.781634 | eng_Latn | 0.990569 |
4b770791427c0efb8704e132ebf8492859914471 | 5,845 | md | Markdown | CHANGELOG.md | cursormove/flow-is-helpers | f8670807f870953ae6610a6e5c32fea3d5d0b473 | [
"MIT"
] | 1 | 2019-08-24T07:40:19.000Z | 2019-08-24T07:40:19.000Z | CHANGELOG.md | cursormove/flow-is-helpers | f8670807f870953ae6610a6e5c32fea3d5d0b473 | [
"MIT"
] | 4 | 2019-03-27T16:52:37.000Z | 2021-04-17T00:19:42.000Z | CHANGELOG.md | cursormove/flow-is-helpers | f8670807f870953ae6610a6e5c32fea3d5d0b473 | [
"MIT"
] | null | null | null | # CHANGELOG
All notable changes to this project will be documented in this file.
_The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html)._
---
## Release: v.[2.1.0](https://github.com/cursormove/flow-is-helpers/compare/2.0.0...2.1.0) 🠲 2019-03-27
### Merged
- fix(lib): fix isAction - alway return boolean → #4 __⊶__ [`#5`](https://github.com/cursormove/flow-is-helpers/pull/5)
- docs(README): update wercker badge → #1 __⊶__ [`#2`](https://github.com/cursormove/flow-is-helpers/pull/2)
- Release 2.0.0 __⊶__ [`#23`](https://github.com/cursormove/flow-is-helpers/pull/23)
### Fixed
- fix(lib): fix isAction - alway return boolean → #4 __⊶__ [`#4`](https://github.com/cursormove/flow-is-helpers/issues/4)
- docs(README): update wercker badge → #1 __⊶__ [`#1`](https://github.com/cursormove/flow-is-helpers/issues/1)
### Commits
- init(transfer): transfer from @artisin to @cursormove → ★ __⊶__ [`09f50f9`](https://github.com/cursormove/flow-is-helpers/commit/09f50f91b87c99f378bc72fc0e9288a0222eed3e)
- Release 2.0.0 → #20 __⊶__ [`96fb51e`](https://github.com/cursormove/flow-is-helpers/commit/96fb51e32c0ba5d859bed689c8f4e4d99d5ee97b)
## Release: v.[2.0.0](https://github.com/cursormove/flow-is-helpers/compare/1.2.0...2.0.0) 🠲 2019-03-27
### Merged
- **Breaking change:** Release 2.0.0 __⊶__ [`ba48222`](https://github.com/cursormove/flow-is-helpers/commit/ba48222527668649171470ef7c21e9b09e958652)
+ Update dependencies
* `@babel*`
* `core-js@2` → `core-js@3`
## Release: v.[1.2.0](https://github.com/cursormove/flow-is-helpers/compare/1.1.0...1.2.0) 🠲 2019-03-27
### Merged
- update(dependencies): update dependencies as well as the needed update → #18 __⊶__ [`#19`](https://github.com/cursormove/flow-is-helpers/pull/19)
### Fixed
- update(dependencies): update dependencies as well as the needed update → #18 __⊶__ [`#18`](https://github.com/cursormove/flow-is-helpers/issues/18)
### Commits
- Release 1.2.0 __⊶__ [`9c9e457`](https://github.com/cursormove/flow-is-helpers/commit/9c9e457d78643f01afd52ebeba53aae415371f1f)
## Release: v.[1.1.0](https://github.com/cursormove/flow-is-helpers/compare/1.0.1...1.1.0) 🠲 2019-03-27
### Commits
- Release 1.1.0 __⊶__ [`c98936f`](https://github.com/cursormove/flow-is-helpers/commit/c98936fbbe8878efbcf70e3f01520c351215adc8)
## Release: v.[1.0.1](https://github.com/cursormove/flow-is-helpers/compare/1.0.0...1.0.1) 🠲 2019-03-27
### Merged
- docs(README): add wercker badge → #15 __⊶__ [`#17`](https://github.com/cursormove/flow-is-helpers/pull/17)
- update(release-it): update/fix release-it and auto-changelog → #12 __⊶__ [`#16`](https://github.com/cursormove/flow-is-helpers/pull/16)
- Release 1.0.0 __⊶__ [`#13`](https://github.com/cursormove/flow-is-helpers/pull/13)
- init(wercker): init commit → #10 __⊶__ [`#11`](https://github.com/cursormove/flow-is-helpers/pull/11)
### Fixed
- docs(README): add wercker badge → #15 __⊶__ [`#15`](https://github.com/cursormove/flow-is-helpers/issues/15)
- update(release-it): update/fix release-it and auto-changelog → #12 __⊶__ [`#12`](https://github.com/cursormove/flow-is-helpers/issues/12)
- init(__config__): add changelog-preview → #14 __⊶__ [`#14`](https://github.com/cursormove/flow-is-helpers/issues/14)
- init(wercker): init commit → #10 __⊶__ [`#10`](https://github.com/cursormove/flow-is-helpers/issues/10)
### Commits
- Release 1.0.1 __⊶__ [`f490e4e`](https://github.com/cursormove/flow-is-helpers/commit/f490e4e26d333baef1719c3cf84920344da3512a)
## Release: v.[1.0.0](https://github.com/cursormove/flow-is-helpers/compare/0.1.0...1.0.0) 🠲 2019-03-26
### Merged
- docs(CHANGELOG): update CHANGELOG → #8 __⊶__ [`#9`](https://github.com/cursormove/flow-is-helpers/pull/9)
### Fixed
- docs(CHANGELOG): update CHANGELOG → #8 __⊶__ [`#8`](https://github.com/cursormove/flow-is-helpers/issues/8)
### Commits
- Release 1.0.0 __⊶__ [`ea46e0c`](https://github.com/cursormove/flow-is-helpers/commit/ea46e0cdd4faf3cd38dbd4f4dbe04bbe55aacc93)
## Release: v.[0.1.0](https://github.com/cursormove/flow-is-helpers/compare/0.0.1...0.1.0) 🠲 2019-03-26
### Merged
- docs(README): fix title → #6 __⊶__ [`#7`](https://github.com/cursormove/flow-is-helpers/pull/7)
- init(README): commit & update README.md → #2 __⊶__ [`#5`](https://github.com/cursormove/flow-is-helpers/pull/5)
- update(package.json): add lodash to dependencies → #3 __⊶__ [`#4`](https://github.com/cursormove/flow-is-helpers/pull/4)
### Fixed
- docs(README): fix title → #6 __⊶__ [`#6`](https://github.com/cursormove/flow-is-helpers/issues/6)
- init(README): commit & update README.md → #2 __⊶__ [`#2`](https://github.com/cursormove/flow-is-helpers/issues/2)
- update(package.json): add lodash to dependencies → #3 __⊶__ [`#3`](https://github.com/cursormove/flow-is-helpers/issues/3)
- docs(package.json): update package description → #1 __⊶__ [`#1`](https://github.com/cursormove/flow-is-helpers/issues/1)
### Commits
- init(__tests__): init commit → ★ __⊶__ [`5c62c38`](https://github.com/cursormove/flow-is-helpers/commit/5c62c380d11fd299f0bbe562f84e822e1e20cd12)
- init(__config__): init commit → ★ __⊶__ [`8856cb1`](https://github.com/cursormove/flow-is-helpers/commit/8856cb126a86ebe16bde252f5a776c539a232b26)
- init(lib): init commit → ★ __⊶__ [`ee4a941`](https://github.com/cursormove/flow-is-helpers/commit/ee4a941198f7c99868c33cb718d6865e6b2d2ee5)
- init(.github): init commit → ★ __⊶__ [`00020fd`](https://github.com/cursormove/flow-is-helpers/commit/00020fd414a1eb0a9b78e5529569dc85c5760a9b)
## Release: v.0.0.1 🠲 2019-03-25
### Commits
- init(root): init commit → ★ __⊶__ [`ad69481`](https://github.com/cursormove/flow-is-helpers/commit/ad69481267d674d0f2cb6bb9571cdcc8d8858c16)
| 49.957265 | 173 | 0.710351 | yue_Hant | 0.446465 |
4b7813a0a180ef34eae7ade5ccfa7575ee6b742d | 688 | md | Markdown | docs/C++/STL/Utility-library/index.md | dengking/programming-language | 44398bf81e4cc5fc0484011fb5196f10ecc450dc | [
"Apache-2.0"
] | null | null | null | docs/C++/STL/Utility-library/index.md | dengking/programming-language | 44398bf81e4cc5fc0484011fb5196f10ecc450dc | [
"Apache-2.0"
] | null | null | null | docs/C++/STL/Utility-library/index.md | dengking/programming-language | 44398bf81e4cc5fc0484011fb5196f10ecc450dc | [
"Apache-2.0"
] | null | null | null | # cppreference [Utility library](https://en.cppreference.com/w/cpp/utility)
> NOTE: 在`C++\What-is-new-in-C++\index.md`中,介绍了“Design goal of c++”,其中非常重要的一点是:
>
> > Prefer introducing new features via the standard library, rather than extending the core language
> >
>
> 上面这段话中的standard library,更加具体地来说就是 [Utility library](https://en.cppreference.com/w/cpp/utility)。
- language support libraries
- general-purpose libraries.
## [Language support](https://en.cppreference.com/w/cpp/utility#Language_support)
> NOTE: 在`./Language-support`对它进行描述。
## [General-purpose utilities](https://en.cppreference.com/w/cpp/utility#General-purpose_utilities)
> NOTE: 在`./General-purpose`对它进行描述。 | 29.913043 | 101 | 0.747093 | eng_Latn | 0.340621 |
4b78b091151fcc51b5b849767e7abc69db41b58e | 2,193 | md | Markdown | _posts/2019/2019-06-17-1116.md | ibesora/rafagas | c4fc0a34a887998d9b92d8c00a4336a633ebde69 | [
"MIT"
] | 22 | 2016-01-24T22:27:54.000Z | 2021-12-06T11:22:12.000Z | _posts/2019/2019-06-17-1116.md | ibesora/rafagas | c4fc0a34a887998d9b92d8c00a4336a633ebde69 | [
"MIT"
] | 10 | 2015-07-13T18:17:46.000Z | 2021-04-19T09:05:10.000Z | _posts/2019/2019-06-17-1116.md | ibesora/rafagas | c4fc0a34a887998d9b92d8c00a4336a633ebde69 | [
"MIT"
] | 5 | 2018-12-16T15:19:45.000Z | 2020-12-09T09:45:09.000Z | ---
date: 2019-06-17T13:29:06+0200
layout: rafaga
rafagas:
- desc: Enclaves & Exclaves, a story map about complicated boundaries
keyw: boundaries
link: https://storymaps.esri.com/stories/2017/enclaves-exclaves/index.html
microlink:
desc: A tour of the world’s engulfed and orphaned places.
image: http://storymaps.esri.com/stories/2017/enclaves-exclaves/resources/images/EnclavesFacebook.jpg
logo: https://storymaps.esri.com/stories/2017/enclaves-exclaves/resources/tpl/viewer/icons/favicon.ico
title: Enclaves & Exclaves
- desc: A repository that collects many cheatsheets for Data Science
invalid: true
keyw: cheatsheet
link: https://github.com/abhat222/Data-Science--Cheat-Sheet
microlink:
desc: Cheat Sheets. Contribute to abhat222/Data-Science--Cheat-Sheet development
by creating an account on GitHub.
image: https://avatars1.githubusercontent.com/u/46282114?s=400&v=4
logo: https://github.githubassets.com/favicon.ico
title: abhat222
- desc: An exploratory analysis of Airbnb data to understand the rental landscape
in Peru
keyw: airbnb
link: https://app.powerbi.com/view?r=eyJrIjoiNGJjYjY0YzktNjBlMC00MDg5LWIyZjYtOTQzZmM0NzA5ZGVlIiwidCI6Ijc1ZTcxMGJjLTU3NjktNGFiZi05YjI3LTRhMWJmMjAwNzg1NSJ9
microlink:
desc: Report powered by Power BI
image: https://app.powerbi.com/13.0.9703.183/images/PowerBI125x125.png
logo: https://app.powerbi.com/images/PowerBI_Favicon.ico
title: Power BI Report
nocheck: true
- desc: A report about the deforestation of the Colombian Amazon, 56300 hectares over
2019
keyw: deforestation
link: https://maaproject.org/2019/chiribiquete_2019/
microlink:
desc: 'A major deforestation surge continues in the northwest Colombian Amazon
(MAAP #97). In 2018, it resulted in the loss of 199,000 hectares (491,700 acres)*,
making it the most concentrated deforestat…'
image: https://maaproject.org/maap/wp-content/uploads/2019/05/Feat-Image.jpg
logo: https://maaproject.org/maap/wp-content/uploads/2015/04/favicon-maap-551cc4afv1_site_icon-1-256x256.jpg
title: 'MAAP #101: Deforestation Continues in Colombian Amazon (2019)'
nocheck: true
rid: 1116
--- | 47.673913 | 155 | 0.763338 | yue_Hant | 0.270301 |
4b79224cdb7e03bb045d28f5ccf35ed25514542e | 9,541 | md | Markdown | FILENAMING.md | Svdvoort/dcm2niix | 081c6300d0cf47088f0873cd586c9745498f637a | [
"Zlib"
] | 1 | 2021-02-25T11:04:15.000Z | 2021-02-25T11:04:15.000Z | FILENAMING.md | Svdvoort/dcm2niix | 081c6300d0cf47088f0873cd586c9745498f637a | [
"Zlib"
] | null | null | null | FILENAMING.md | Svdvoort/dcm2niix | 081c6300d0cf47088f0873cd586c9745498f637a | [
"Zlib"
] | 1 | 2020-11-02T01:17:02.000Z | 2020-11-02T01:17:02.000Z | ## About
DICOM files tend to have bizarre file names, for example based on the instance UID, e.g. `MR.1.3.12.2.1107.5.2.32.35131.2014031013003871821190579`. In addition, DICOM images are often 2D slices or 3D volumes that we will combine into a single unified NIfTI file. On the other hand, some enhanced DICOM images save different reconstructions (e.g. phase and magnitude) of the same image that we will want to save as separate NIfTI files. Therefore, dcm2niix attempts to provide a sensible file naming scheme.
## Basics
You request the output file name with the `-f` argument. For example, consider you convert files with `dcm2niix -f %s_%p`: in this case an image from series 3 with the protocol name `T1` will be saved as `3_T1.nii`. Here are the available parameters for file names:
- %a=antenna (coil) name (from Siemens 0051,100F)
- %b=basename (file name of first DICOM)
- %c=comments (from 0020,4000)
- %d=description (from 0008,103E)
- %e=echo number (from 0018,0086)
- %f=folder name (name of folder containing first DICOM)
- %i=ID of patient (from 0010,0020)
- %j=series instance UID (from 0020,000E)
- %k=study instance UID (from 0020,000D)
- %l=local procedure step description (from 0040,0254)
- %m=manufacturer short name (from 0008,0070: GE, Ph, Si, To, UI, NA)
- %n=name of patient (from 0010,0010)
- %o=mediaObjectInstanceUID (0002,0003)*
- %p=protocol name (from 0018,1030). If 0018,1030 is empty, or if the Manufacturer (0008,0070) is GE with Modality (0008,0060) of MR, then the SequenceName (0018,0024) is used if it is not empty.
- %r=instance number (from 0020,0013)*
- %s=series number (from 0020,0011)
- %t=time of study (from 0008,0020 and 0008,0030)
- %u=acquisition number (from 0020,0012)
- %v=vendor long name (from 0008,0070: GE, Philips, Siemens, Toshiba, UIH, NA)
- %x=study ID (from 0020,0010)
- %y=youth in series: GE RawDataRunNumber ([0019,10A2](https://github.com/rordenlab/dcm2niix/issues/359)) else TemporalPosition ([0020,0100](https://github.com/rordenlab/dcm2niix/issues/357))*
- %z=sequence name (from 0018,0024)
* Attributes listed above with an asterisk (*) are likely to vary within a series, and are typically not useful for DICOM to NIfTI conversion (where all images from a series are stacked together). These attributes can be useful for [renaming](RENAMING.md) DICOM images
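As a rough illustration of how such %-token templates expand — this is my own sketch, not dcm2niix's actual implementation, and the attribute values below are hypothetical:

```python
# Illustrative sketch of %-token filename expansion (not dcm2niix's real code).
# The DICOM-derived attribute values below are hypothetical.

def expand_template(template, attrs):
    """Expand single-letter %-tokens using a dict of DICOM-derived values."""
    out = []
    i = 0
    while i < len(template):
        ch = template[i]
        if ch == "%" and i + 1 < len(template):
            token = template[i + 1]
            out.append(str(attrs.get(token, "")))  # unknown tokens expand to ""
            i += 2
        else:
            out.append(ch)
            i += 1
    return "".join(out)

attrs = {"s": 3, "p": "T1", "t": "20140310130038"}  # series, protocol, study time
print(expand_template("%s_%p", attrs))     # -> 3_T1
print(expand_template("%t_%s_%p", attrs))  # -> 20140310130038_3_T1
```

For example, a template of `%s_%p` applied to series 3 with protocol `T1` yields `3_T1`, matching the naming described above.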
## File Name Post-fixes: Image Disambiguation
In general dcm2niix creates images with 3D dimensions, or 4 dimensions when the 4th dimension is time (fMRI) or gradient number (DWI). However, DICOM images can include additional dimensions, e.g. a multiple-echo sequence would generate separate images for each echo. By default dcm2niix will use the following extensions to the file names in order to disambiguate additional dimensions from the same series:
- _c1.._cN, where N refers to the coil (typically only seen for uncombined data, where a separate image is generated for each antenna)
- _e1..eN echo number for multi-echo sequences
- _Eq is commonly seen in [CT scans](https://github.com/neurolabusc/dcm_qa_ct). For example, CT scans of the brain often have many slices closely packed near the brain stem and only a few slices spread far apart near the top of the head. Variable between-slice spacing is rarer in MRI, and if you see this from an MRI sequence you should ensure that [all of the acquired slices have been provided to dcm2niix](https://neurostars.org/t/field-mapping-siemens-scanners-dcm2niix-output-2-bids/2075/7). NIfTI assumes all 2D slices that form a 3D stack are equidistant. Therefore, dcm2niix reslices the input data to generate an equidistant volume.
- _ph phase map
- _iN appended image number for non-parallel slices
- _imaginary imaginary component of complex image
- _MoCo is appended to the ProtocolName if Image Type (0008,0008) includes the term 'MOCO'. This helps disambiguate Siemens fMRI runs where both motion-corrected and raw data are stored for a single session.
- _real real component of complex image
- _phMag rare case where phase and magnitude are saved as the 4th dimension
- _t If the trigger delay time (0020,9153) or trigger time (0018,1060) is non-zero, it will be recorded in the file name. For example, the files "T1_t178.nii" and "T1_t511.nii" suggest that the T1 scan was acquired with two cardiac trigger delays (178 and 511 ms after the last R-peak).
- _Tilt is specific to [CT scans](https://www.nitrc.org/plugins/mwiki/index.php/dcm2nii:MainPage#Computed_Tomography_.28CT.2C_CAT.29). These scans can be acquired with a gantry tilt that causes a skew that can not be stored in a NIfTI qForm. Therefore, the slices are resampled to remove the effect of tilt.
Some post-fixes are specific to Philips DICOMs
- _ADC Philips specific case. A DWI image where derived isotropic, ADC or trace volume was appended to the series. Since this image will disrupt subsequent processing, and because subsequent processing (dwidenoise, topup, eddy) will yield better derived images, dcm2niix will also create an additional image without this volume. Therefore, the _ADC file should typically be discarded. If you want dcm2niix to discard these useless derived images, use the ignore feature ('-i y').
- _Raw Philips XX_* DICOMs (Raw Data Storage).
- _PS Philips PS_* DICOMs (Grayscale Softcopy Presentation State).
If you do not want post-fixes, run dcm2niix in terse mode (`--terse`). In this mode, most post-fixes will be omitted. Beware that this mode can produce name clashes, and images from a series may overwrite each other.
## Overlays
DICOM images can have up to [16](https://www.medicalconnections.co.uk/kb/Number-Of-Overlays-In-Image/) binary (black or white) overlays as described by the [Overlay Plane Module](http://dicom.nema.org/dicom/2013/output/chtml/part03/sect_C.9.html). dcm2niix will save these regions of interest with the post-fix "_ROIn" where N is the overlay number (1..16).
## File Name Conflicts
dcm2niix will attempt to write your image using the naming scheme you specify with the '-f' parameter. However, if an image already exists with the specified output name, dcm2niix will append a letter (e.g. 'a') to the end of the file name to avoid overwriting existing images. Consider a situation where dcm2niix is run with '-f %t'. This will name images based on the study time. If a single study has multiple series (for example, a T1 sequence and an fMRI scan), the resulting file names will conflict with each other. In order to avoid overwriting images, dcm2niix will resort to adding the post fix 'a', 'b', etc. There are a few solutions for avoiding these situations. You may want to consider using both of these:
- Make sure you specify a naming scheme that can discriminate between your images. For example '-f %t' will not disambiguate different series acquired in the same session. However, '-f %t_%s' will discriminate between series.
- Localizer (scout) images are the first scans acquired for any scanning session, and are used to plan the location for subsequent images. Localizers are not used in subsequent analyses (due to resolution, artefacts, etc). Localizers are often acquired with three orthogonal image planes (sagittal, coronal and axial). The NIfTI format requires that all slices in a volume are co-planar, so these localizers will generate naming conflicts. The solution is to use '-i y' which will ignore (not convert) localizers (it will also ignore derived images and 2D slices). This command helps exclude images that are not required for subsequent analyses.
- Be aware that if you run dcm2niix twice with the same parameters on the same data, you will encounter file naming conflicts.
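A minimal sketch of this style of conflict avoidance (illustrative only — dcm2niix's real logic also interacts with the post-fixes described earlier):

```python
# Illustrative sketch: append 'a', 'b', ... when a name is already taken
# (not dcm2niix's actual implementation).
import string

def disambiguate(name, existing):
    """Return name if free, else the first of name+'a', name+'b', ... not in existing."""
    if name not in existing:
        return name
    for suffix in string.ascii_lowercase:
        candidate = name + suffix
        if candidate not in existing:
            return candidate
    raise RuntimeError("too many conflicting files for " + name)

taken = {"20140310130038", "20140310130038a"}
print(disambiguate("20140310130038", taken))  # -> 20140310130038b
```

This mirrors the behavior above: running with '-f %t' on a study with multiple series produces the same base name, so later series receive 'a', 'b', etc.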
## Special Characters
[Some characters are not permitted](https://stackoverflow.com/questions/1976007/what-characters-are-forbidden-in-windows-and-linux-directory-names) in file names. The following characters will be replaced with underscores (`_`). Note that the forbidden characters vary between operating systems (Linux only forbids the forward slash, macOS forbids the forward slash and colon, while Windows forbids any of the characters listed below). To ensure that files can be easily copied between file systems, [dcm2niix restricts file names to characters allowed by Windows](https://github.com/rordenlab/dcm2niix/issues/237).
### List of Forbidden Characters (based on Windows)
```
< (less than)
> (greater than)
: (colon - sometimes works, but is actually NTFS Alternate Data Streams)
" (double quote)
/ (forward slash)
\ (backslash)
| (vertical bar or pipe)
? (question mark)
* (asterisk)
```
[Control characters](https://en.wikipedia.org/wiki/ASCII#Control_characters) like backspace and tab are also forbidden.
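The substitution described above can be sketched as follows (my own illustration, not dcm2niix's source code):

```python
# Replace characters forbidden on Windows, plus ASCII control characters,
# with underscores (illustrative sketch, not dcm2niix's actual code).

FORBIDDEN = set('<>:"/\\|?*')

def sanitize(name):
    return "".join(
        "_" if (ch in FORBIDDEN or ord(ch) < 32) else ch
        for ch in name
    )

print(sanitize('Axial_EPI-FMRI_(Interleaved_I_to_S)'))  # parentheses allowed, kept
print(sanitize('series:3/T1?'))                         # -> series_3_T1_
```

Note that characters such as parentheses pass through unchanged, which is exactly why the caveat below about quoting file names matters.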
Be warned that dcm2niix will copy all allowed characters verbatim, which can cause problems for some other tools. Consider this [sample dataset](https://github.com/neurolabusc/dcm_qa_nih/tree/master/In/20180918GE/mr_0004) where the DICOM Protocol Name (0018,1030) is 'Axial_EPI-FMRI_(Interleaved_I_to_S)'. The parentheses ("round brackets") may cause issues for other tools. Consider converting this series with the command 'dcm2niix -f %s_%p ~/DICOM' to create the file '4_Axial_EPI-FMRI_(Interleaved_I_to_S).nii'. If you now run the command 'fslhd 4_Axial_EPI-FMRI_(Interleaved_I_to_S).nii' you will get the error '-bash: syntax error near unexpected token `(''. Therefore, it is often a good idea to use double quotes to specify the names of files. In this example 'fslhd "4_Axial_EPI-FMRI_(Interleaved_I_to_S).nii"' will work correctly. | 109.666667 | 835 | 0.773609 | eng_Latn | 0.995676 |
4b793fa1e2f9af6d7e1f855574e9a3b0feeed79f | 3,326 | md | Markdown | business-central/LocalFunctionality/Russia/depreciation-bonus.md | Teja-Nagoori/dynamics365smb-docs | 007827780cf803f83d8219e1273b7714a399c36c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-04-28T12:52:43.000Z | 2020-04-28T12:52:43.000Z | business-central/LocalFunctionality/Russia/depreciation-bonus.md | Teja-Nagoori/dynamics365smb-docs | 007827780cf803f83d8219e1273b7714a399c36c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | business-central/LocalFunctionality/Russia/depreciation-bonus.md | Teja-Nagoori/dynamics365smb-docs | 007827780cf803f83d8219e1273b7714a399c36c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Depreciation bonus in Russia
description: Russian enhancements include depreciation.
author: DianaMalina
ms.service: dynamics365-business-central
ms.topic: article
ms.search.keywords:
ms.date: 04/01/2020
ms.reviewer: edupont
ms.author: soalex
---
# Depreciation Bonus
Depreciation bonus is an accelerated depreciation method applied in tax accounting because of provisions in the Russian tax laws. A depreciation bonus enables you to include fixed asset and capital investment expenses in the current period at the rate of 10 percent or 30 percent.
## Depreciation Bonus Calculation
A depreciation bonus can be calculated and applied for the following types of transactions:
- Acquisition costs of fixed assets.
- Acquisition costs and appreciation of capital investments for all previous periods excluding the current period.
The rate of the depreciation bonus is 10 percent or 30 percent, depending on the class of the fixed asset. The rate is set for a depreciation group using the **Depr. Bonus Percentage** field in the **Depreciation Group** window.
After the depreciation bonus is calculated and posted for a period, all transactions are cleared in preparation for the next period.
## Depreciation Bonus Settings
Before depreciation bonus is calculated, you will have to make sure that the appropriate settings have been applied in the **Tax Register Setup** window. Use the information in the following table to apply depreciation bonus settings.
| Field | Description |
| :--------------------------------- | :----------------------------------------------------------- |
| **Rel. Act as Depr. Bonus Base** | Select if you want fixed asset releases to be used to calculate the depreciation bonus base. |
| **Depr. Bonus TD Code** | Enter a tax difference code that is used to calculate the depreciation bonus. The selected tax difference code should be identified as a depreciation bonus during tax difference setup. |
| **Depr. Bonus Recovery from** | Enter the starting date from which depreciation is recovered if the fixed asset is sold. If the fixed asset is sold before this date and the depreciation bonus has already been applied, the depreciation bonus will not be recovered. |
| **Depr. Bonus Recov. Per. (Year)** | Enter the period in which the depreciation bonus is recovered if the fixed asset is sold. |
| **Depr. Bonus Recovery TD Code** | Enter the tax difference code that is used to calculate the depreciation bonus recovery amount in tax accounting. |
## Selecting and Canceling Depreciation Bonus Transactions
Depreciation bonus transactions should be posted before the monthly depreciation amount is calculated and posted.
To select depreciation bonus transactions for posting for a period, select **Depr. Bonus** in the **Fixed Asset Journal** window and the **Fixed Asset G/L Journal** window.
You can cancel depreciation bonus transactions by running the **Cancel FA Ledger Entries** batch job. After posting the depreciation bonus cancellation, all operations that are included in the depreciation bonus base must be manually selected as the depreciation bonus base.
## See Also
[Fixed Assets](fixed-assets.md)
| 63.961538 | 281 | 0.732411 | eng_Latn | 0.998802 |
4b795bd070b9ef33cacc77f9e6d5caa4d00de0af | 148 | md | Markdown | README.md | JiaxiangBU/xgboost-LightGBM_demo | ea9b443121c8124340b5906340a0b9d5a098ac1a | [
"Apache-2.0"
] | null | null | null | README.md | JiaxiangBU/xgboost-LightGBM_demo | ea9b443121c8124340b5906340a0b9d5a098ac1a | [
"Apache-2.0"
] | null | null | null | README.md | JiaxiangBU/xgboost-LightGBM_demo | ea9b443121c8124340b5906340a0b9d5a098ac1a | [
"Apache-2.0"
] | 4 | 2019-08-08T14:59:17.000Z | 2021-03-18T07:44:34.000Z | # xgboost/LightGBM_demo
python xgboost学习代码总结
1. Xgboost_examples是XGBoost官方例子
2. LightGBM_examples是LightGBM官方例子
3. Xgboost_prac和LightGBM_prac是个人练习目录
| 24.666667 | 36 | 0.898649 | yue_Hant | 0.14767 |
4b7a04e2c1fd41d1be67e872c82d8900250fbb8f | 931 | md | Markdown | Design Materials/Mechanical/Pinch Valves/Cam Valve/README.md | alberto-bortoni/bruno2 | d8e4c3a32aa60f10e3ed508f231155c3c64b60a3 | [
"FSFAP"
] | 2 | 2020-03-26T18:19:34.000Z | 2020-04-01T03:03:43.000Z | Design Materials/Mechanical/Pinch Valves/Cam Valve/README.md | alberto-bortoni/bruno2 | d8e4c3a32aa60f10e3ed508f231155c3c64b60a3 | [
"FSFAP"
] | 1 | 2020-04-19T20:33:09.000Z | 2020-04-19T20:33:09.000Z | Design Materials/Mechanical/Pinch Valves/Cam Valve/README.md | bruno2-ventilator/bruno2 | d8e4c3a32aa60f10e3ed508f231155c3c64b60a3 | [
"FSFAP"
] | 2 | 2020-04-19T17:46:26.000Z | 2021-01-01T05:11:00.000Z | # BrunO2 Cam Pinch Valve Design Materials
Please refer to [Cam_Valve_Documentation.pdf](Cam_Valve_Documentation.pdf)
This is an electrically-controlled proportional valve driven by a NEMA 17 stepper motor. It is a "pinch valve" meaning that the fluid only ever touches the inside of the tubing and flow is restricted by pinching the tubing. This design pinches the tubing with a roller cam attached to the shaft of the stepper motor. All components are either 3D printed or common fasteners. The pdf documentation contains further details and flow rate data. Sample Arduino codes are included for controlling the valve with some common stepper drivers.
## License
These open source ventilator design materials are provided under the conditions specified in the [Permissive License](https://github.com/bruno2-ventilator/bruno2/blob/master/Permissive%20License--Brown%20University%20041720.pdf)
---
# MORE DETAILS COMING SOON
| 77.583333 | 538 | 0.815252 | eng_Latn | 0.997917 |
4b7aeaf06c4e769755c97f28b435be0525942d85 | 7,412 | md | Markdown | pages/1.12/release-notes/1.11.0-rc1/index.md | adamtheturtle/dcos-docs-site | 55aa1b07745c2844109634724c554a5c3e7c5148 | [
"Apache-2.0"
] | null | null | null | pages/1.12/release-notes/1.11.0-rc1/index.md | adamtheturtle/dcos-docs-site | 55aa1b07745c2844109634724c554a5c3e7c5148 | [
"Apache-2.0"
] | null | null | null | pages/1.12/release-notes/1.11.0-rc1/index.md | adamtheturtle/dcos-docs-site | 55aa1b07745c2844109634724c554a5c3e7c5148 | [
"Apache-2.0"
] | null | null | null | ---
layout: layout.pug
navigationTitle: Release Notes for 1.11.0 Release Candidate 1
title: Release Notes for 1.11.0 RC 1
menuWeight: 30
excerpt: Release notes for DC/OS 1.11.0 release candidate 1
---
These are the release notes for DC/OS 1.11.0 Release Candidate 1.
<table class="table" bgcolor="#FAFAFA"> <tr> <td style="border-left: thin solid; border-top: thin solid; border-bottom: thin solid; border-right: thin solid;">
[button color="purple" href="https://downloads.dcos.io/dcos/EarlyAccess/1.11.0-rc1/dcos_generate_config.sh"]Download DC/OS Open Source[/button]
To download DC/OS Enterprise, contact: [Mesosphere Support](https://support.mesosphere.com/hc/en-us/articles/213198586).
<h3>This release candidate is for testing only and not to be used in production. </h3>
DC/OS 1.11.0 Release Candidate 1 has a number of limitations that will be resolved at GA time.
<ul>
<li>DC/OS 1.11 requires CLI version 0.6.x.
<ul>
<li><a href="/1.11/cli/uninstall/">Uninstall the existing CLI</a>.</li>
<li>Install version 0.6.x using the <strong>Install CLI</strong> instructions in the dropdown in the upper left hand corner of the 1.11 DC/OS GUI.</li>
</ul>
<strong>Note:</strong> CLI version 0.6.x is not compatible with DC/OS 1.10</li>
</ul>
Please try out the new features and updated data services. Provide any feedback through our support channel: <a href="https://support.mesosphere.com/">support.mesosphere.com</a>.
</td> </tr> </table>
<a name="new-features"></a>
# New features and capabilities
## Apache Mesos 1.5, Marathon 1.6, and Kubernetes 1.9 Integrated.
- DC/OS 1.11.0 is based on Mesos 1.5. View the [Mesos changelog](https://github.com/apache/mesos/blob/1.5.x/CHANGELOG).
- DC/OS 1.11.0 is integrated with the latest 1.6 release of Marathon. For more information about Marathon 1.6, consult the [Marathon changelog](https://github.com/mesosphere/marathon/blob/master/changelog.md).
- DC/OS 1.11.0 supports the latest Kubernetes 1.9 container scheduler. For more information about Kubernetes 1.9 on DC/OS, [view the documentation](https://docs.mesosphere.com/services/kubernetes/1.0.0-1.9.3).
## Platform
- Fault domain awareness. Use fault domain awareness to make your services highly available and to allow for increased capacity when needed. [View the documentation](/1.11/deploying-services/fault-domain-awareness). [enterprise type="inline" size="small" /]
- Linked clusters. A cluster link is a _**unidirectional**_ relationship between a cluster and another cluster. You add and remove links from one cluster to another cluster using DC/OS CLI. Once a link is set up you can easily switch between clusters using the CLI or UI. [View the documentation](/1.11/administering-clusters/multiple-clusters/cluster-links). [enterprise type="inline" size="small" /]
- Integrated Remote Regions. Enables “Bursting” to take advantage of ephemeral cloud compute resources. [View the documentation](/1.11/deploying-services/fault-domain-awareness). [enterprise type="inline" size="small" /]
- [Multi-Region Management](/1.11/deploying-services/fault-domain-awareness). Enables a DC/OS Cluster to span multiple datacenters, clouds and remote branches while providing a unified management and control cluster.
- Decommission Node. Support for permanently decommissioning nodes enables easier maintenance and the decommissioning of "Spot" cloud instances after use, allowing for immediate task rescheduling as opposed to delayed task rescheduling.
- UCR
- Support for Docker image garbage collection. [View the documentation](/1.11/deploying-services/containerizers).
- Support for Docker image pull secrets.
## Networking
[enterprise]
- Edge-LB 1.0 RC candidate. [View the documentation](https://docs.mesosphere.com/services/edge-lb/1.0/)
[/enterprise]
- IPv6 is now supported for Docker containers.
- Performance improvements to the DC/OS network stack. All networking components (minuteman, navstar, spartan) are aggregated into a single systemd unit called `dcos-net`. Please read the note on [networking software re-architecture](/1.11/networking/#a-note-on-software-re-architecture) to learn more about the re-factoring of the network stack.
[enterprise]
## Security
[/enterprise]
- Secrets Management Service
- Binary Secret files are now supported
- Hierarchical access control is now supported.
## Monitoring
- The DC/OS metrics component now produces metrics in [Prometheus](https://prometheus.io/docs/instrumenting/exposition_formats/) format. [View the documentation](/1.11/metrics).
- Unified Logging Endpoint to Collect Container (task) as well as System Component Logs.
## Storage
- DC/OS 1.11 introduces an implementation of the industry-standard Container Storage Interface (CSI) version 0.1, which enables developers (Mesosphere, community, and partners) to streamline the development of storage features within DC/OS by providing a common API between the Container Orchestrator (DC/OS) and the storage devices. [enterprise type="inline" size="small" /]
- Pods now support persistent volumes. [View the documentation](/1.11/deploying-services/pods).
**Note:** Because these storage features are beta in 1.11, they must be explicitly enabled. Beta features are not recommended for production usage, but are a good indication of the direction the project is headed.
## Updated DC/OS Data Services
- TLS encryption for DC/OS Kafka, DC/OS Cassandra, DC/OS Elastic, and DC/OS HDFS is now supported.
- Fault domain awareness for DC/OS Kafka, DC/OS Cassandra, DC/OS Elastic and DC/OS HDFS. Use fault domain awareness to make your services highly available and to allow for increased capacity when needed.
- New API endpoint to pause a node for DC/OS Kafka, DC/OS Cassandra, DC/OS Elastic, and DC/OS HDFS. Use this endpoint to relaunch a node in an idle command state for debugging purposes.
- New beta DC/OS Kafka ZooKeeper service. [View the documentation](/services/beta-kafka-zookeeper).
- You can now select a DC/OS data service version from a dropdown menu in the DC/OS UI.
- Improved scalability for all DC/OS data services.
# <a name="known-issues"></a>Known Issues and Limitations
- Upgrades from 1.10 to 1.11 are _not supported_ in 1.11.0 Release Candidate 1.
- DCOS-19047 - The `dcos-secrets` service is unavailable during upgrade from 1.10.x to 1.11.0. [enterprise type="inline" size="small" /]
# <a name="fixed-issues"></a>Improvements and Major Issues Fixed in 1.11.0 Release Candidate 1
- DCOS-19573 - Add support for changes to unique constraints in the UI.
- DCOS-19837 - Consolidate fault-domain scripts for all cloud providers into one script to support clusters with multiple cloud providers.
- DCOS-19896 - Add `--linked` flag to `dcos cluster list` so users can see which clusters can be unlinked. [enterprise type="inline" size="small" /]
- DCOS-19955 - Enhance API and CLI experience for linking clusters. [enterprise type="inline" size="small" /]
- DCOS_OSS-1658 - Add `--verbose` flag to upgrade script that prints all status and error messages to the console to enable upgrade debugging.
- DCOS_OSS-1733 - The configuration parameter `dns_forward_zones` now takes a list of objects instead of nested lists.
- DCOS_OSS-2130 - `systemd-networkd` must be enabled for DC/OS networking to work with CoreOS.
**Note:** The Kubernetes package dependencies are documented [here](https://docs.mesosphere.com/services/kubernetes/1.2.0-1.10.5/install).
| 75.632653 | 401 | 0.769563 | eng_Latn | 0.943931 |
4b7aeef251820a4adf8373ead1c78e88ddad9830 | 38 | md | Markdown | README.md | alexvishneuski/VKLayouts | 660a017c7d30e3ad28fbaad399d2666e47474bf7 | [
"Apache-2.0"
] | null | null | null | README.md | alexvishneuski/VKLayouts | 660a017c7d30e3ad28fbaad399d2666e47474bf7 | [
"Apache-2.0"
] | 27 | 2018-01-12T22:08:55.000Z | 2018-02-03T20:00:58.000Z | README.md | alexvishneuski/VCBestClient | 660a017c7d30e3ad28fbaad399d2666e47474bf7 | [
"Apache-2.0"
] | null | null | null | # VKLayouts
Mastering layouts makeup
| 12.666667 | 25 | 0.815789 | eng_Latn | 0.95377 |
4b7c4faba1b92ad940d80dbfe1c191791e733a70 | 29 | md | Markdown | README.md | EdgardOliveira/nextjs | c478be7d682ff99660730e955d58cd2ce9b22de8 | [
"MIT"
] | null | null | null | README.md | EdgardOliveira/nextjs | c478be7d682ff99660730e955d58cd2ce9b22de8 | [
"MIT"
] | null | null | null | README.md | EdgardOliveira/nextjs | c478be7d682ff99660730e955d58cd2ce9b22de8 | [
"MIT"
] | null | null | null | # nextjs
Testes com o NextJS
| 9.666667 | 19 | 0.758621 | por_Latn | 0.52916 |
4b7c746e42507672d59fdcd8afa10a0fff58c67a | 2,598 | md | Markdown | apps/automated/src/image-source/image-source.md | tralves/NativeScript | 86c689ef7ec0d0f4b80f73ea0f1aa8ce8dc06741 | [
"Apache-2.0"
] | 23,692 | 2015-03-05T14:31:33.000Z | 2022-03-31T21:06:38.000Z | apps/automated/src/image-source/image-source.md | tralves/NativeScript | 86c689ef7ec0d0f4b80f73ea0f1aa8ce8dc06741 | [
"Apache-2.0"
] | 9,418 | 2015-03-06T11:43:31.000Z | 2022-03-31T18:08:12.000Z | apps/automated/src/image-source/image-source.md | tralves/NativeScript | 86c689ef7ec0d0f4b80f73ea0f1aa8ce8dc06741 | [
"Apache-2.0"
] | 2,114 | 2015-03-03T13:37:21.000Z | 2022-03-31T07:45:40.000Z | ---
nav-title: "image-source How-To"
title: "image-source"
environment: nativescript
description: "Examples for using image-source"
previous_url: /ApiReference/image-source/HOW-TO
---
# Image source
Using the image source requires the image-source module.
```TypeScript
import * as imageSource from "tns-core-modules/image-source";
```
```JavaScript
var imageSource = require("tns-core-modules/image-source");
```
The pre-required `imageSource` module is used throughout the following code snippets.
We also use fs module defined as follows:
```TypeScript
import * as fs from "tns-core-modules/file-system";
```
```JavaScript
var fs = require("tns-core-modules/file-system");
```
## Loading and saving images
### Load image using resource name
This is similar to loading Bitmap from `R.drawable.logo` on Android or calling `[UIImage imageNamed@"logo"]` on iOS.
The method `fromResource` creates an `ImageSource` instance and loads it from the specified resource name.
{%snippet imagesource-resname%}
### Save image to PNG or JPG file
The method `saveToFile(path: string, format: "png" | "jpeg" | "jpg", quality?: number): boolean` saves `ImageSource` instance to the specified file, using the provided image format and quality.
The supported formats are `png`, `jpeg`, and `jpg`. The quality parameter is optional and defaults to maximum quality. Returns `true` if the file is saved successfully.
{%snippet imagesource-save-to%}
### Load image from a local file
Use `fromFile(path: string): Promise<boolean>` to load an `ImageSource` instance from the specified file asynchronously.
{%snippet imagesource-load-local%}
### Load image from URL
Use `http.getImage(url: string): Promise<ImageSource>` to fetch `ImageSource` from online source.
{%snippet http-get-image%}
### Save image from image asset to PNG file
Use `fromAsset(asset: ImageAsset): Promise<ImageSource>` to load `ImageSource` from the specified `ImageAsset` asynchronously.
{%snippet imagesource-from-imageasset-save-to%}
### Creating base64 string from PNG image file
The method `toBase64String(format: "png" | "jpeg" | "jpg", quality?: number): string` converts the image to base64 encoded string, using the provided image format and quality.
The supported formats are `png`, `jpeg`, and `jpg`. The quality parameter is optional and defaults to maximum quality.
{%snippet imagesource-to-base-string%}
### Creating PNG image file from base64 string
The method `fromBase64(source: string): Promise<boolean>` loads this instance from the specified base64 encoded string asynchronously.
{%snippet imagesource-from-base-string%}
# Changelog for haskell-programming-book
## Unreleased changes
# pinky
<!-- PROJECT LOGO -->
<br />
<p align="center">
<a href="https://thewalkingduff.github.io/pinky/">
<img src="images/pinky.jpg" alt="Logo">
</a>
<h3 align="center">Pinky</h3>
<p align="center">
An app that helps cure a large pair of eyes' case of pink eye by manipulating the DOM.
<br />
<a href="https://thewalkingduff.github.io/pinky/"><strong>Explore the docs »</strong></a>
<br />
<br />
<a href="https://thewalkingduff.github.io/pinky/">View Demo</a>
·
<a href="https://thewalkingduff.github.io/pinky/">Report Bug</a>
·
<a href="https://thewalkingduff.github.io/pinky/">Request Feature</a>
</p>
</p>
<!-- TABLE OF CONTENTS -->
<details open="open">
<summary><h2 style="display: inline-block">Table of Contents</h2></summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li><a href="#usage">Usage</a></li>
<li><a href="#roadmap">Roadmap</a></li>
<li><a href="#contributing">Contributing</a></li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgements">Acknowledgements</a></li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About The Project
Cure and cause conjunctivitis with this silly app.
`thewalkingduff`, `pinky`, `@duffManCode`, `bduffy@devduffy.com`, `Pinky`
### Built With
- [HTML]()
- [CSS]()
- [JAVASCRIPT]()
<!-- GETTING STARTED -->
## Getting Started
To get a local copy up and running follow these simple steps.
### Prerequisites
This is an example of how to list things you need to use the software and how to install them.
- npm
```sh
npm install npm@latest -g
```
### Installation
1. Clone the repo
```sh
   git clone https://github.com/thewalkingduff/pinky.git
```
2. Install NPM packages
```sh
npm install
```
<!-- USAGE EXAMPLES -->
## Usage
Use the eye dropper to squeeze medicine into the eye balls. Reinfect the eye balls by touching them with the hand.
_For more examples, please refer to the [Documentation](https://example.com)_
<!-- ROADMAP -->
## Roadmap
The app isn't working perfectly right now because it sometimes will only change from an eye dropper to a hand when the eye ball itself is clicked. I would like it to change when any part of the eye is clicked.
Going to add some CSS art to it in the future. Possibly a cat in the lower right corner that also tracks the eye dropper/hand.
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
<!-- LICENSE -->
## License
Distributed under the MIT License. See `LICENSE` for more information.
<!-- CONTACT -->
## Contact
Brendan Duffy - [@duffManCode](https://twitter.com/duffManCode) - bduffy@devduffy.com
Project Link: [https://thewalkingduff.github.io/pinky/](https://thewalkingduff.github.io/pinky/)
<!-- ACKNOWLEDGEMENTS -->
## Acknowledgements
- MIT xPro
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/github_username/repo.svg?style=for-the-badge
[contributors-url]: https://github.com/github_username/repo/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/github_username/repo.svg?style=for-the-badge
[forks-url]: https://github.com/github_username/repo/network/members
[stars-shield]: https://img.shields.io/github/stars/github_username/repo.svg?style=for-the-badge
[stars-url]: https://github.com/github_username/repo/stargazers
[issues-shield]: https://img.shields.io/github/issues/github_username/repo.svg?style=for-the-badge
[issues-url]: https://github.com/github_username/repo/issues
[license-shield]: https://img.shields.io/github/license/github_username/repo.svg?style=for-the-badge
[license-url]: https://github.com/github_username/repo/blob/master/LICENSE.txt
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555
[linkedin-url]: https://linkedin.com/in/github_username
# Generative Audio
## Introduction and Overview
This is the repository for the networks that generate audio.
* [main.ipynb](main.ipynb): *This is the go-to file that contains the model, the audio dataset loader, and the audio generation. Naturally the IPython notebook makes it easy to explore the data.*
* [main_large_dataset.ipynb](main_large_dataset.ipynb): *This example takes on multiple samples to train for much longer. Use this once you have [main.ipynb](main.ipynb) working and experiment with the parameters to achieve different results*
* [old_main.py](old_main.py): *This is an older file that had the initial vanilla TensorFlow code in it. It creates the model via the methods from old_model.py. Higher-level TensorFlow wrappers such as tflearn and slim tended to deliver better results, so we migrated to using those in main.ipynb. This demonstrates a lot of the usefulness of the AudioDatasetGenerator in the running of a tf session via the get_next_batch and is_new_epoch functions.*
* [old_model.py](old_model.py): *This contained the vanilla TensorFlow code to replicate Sam's Keras model. As mentioned earlier, higher-level wrappers provided better results, so we moved to those.*
* [old_phase_gen.py](old_phase_gen.py): *This is the original method of phase reconstruction. We now use Griffin-Lim, which can be found in audio_dataset_generator.py under the method of the same name. There is also a phase reconstruction network in progress in the phase_reconstruction.ipynb file, although as of the time of writing it is achieving poor results.*
* [assets](assets): *This is the folder that contains the audio for training. The current code in main.ipynb instanciates a AudioDatasetGenerator object that will pass the path to the assets folder so it can create the necessary sequences of fft magnitudes.*
-----
## Running on B0rk
To access the b0rk via ssh:
```
ssh username@igor.gold.ac.uk
ssh eavi@158.223.52.45
```
Check the docker processes:
```
sudo docker ps
```
If the container is running copy the container id and run the ipython notebook:
```
sudo docker exec -it <container id> /run_jupyter.sh
```
Else open a new container with the saved docker image, forward ipython notebook port to 8882 then run the notebook:
```
sudo nvidia-docker run -it -p 8882:8888 golb0rk/lstmsynth /run_jupyter.sh
```
#### Forwarding ports to access notebook
You can use any port numbers; if a port complains, try a different one. I find it easiest to keep a tunnel open (with -f) from igor into b0rk; then all I have to do is tunnel from my local machine into igor.
ssh tunnel from Igor to b0rk (whilst sshed into igor):
```
ssh -N -f -L localhost:8883:localhost:8882 eavi@158.223.52.45
```
then tunnel from your local machine to igor
```
ssh -N -L localhost:8884:localhost:8883 username@igor.gold.ac.uk
```
now ```localhost:8884``` should work in your browser!
#### Saving the docker container state
Detach from the docker with ```Ctrl+p```+```Ctrl+q``` (exit or ctrl+e will halt the container).
Then from outside the docker (eavi):
```
sudo docker commit <container id> golb0rk/lstmsynth
```
-------
## Running locally
To run this locally, it is assumed that Python 3.5, TensorFlow 1.1++ and the latest version of TFLearn are all installed on your system.
### Instructions
Generally the model found in the main.ipynb file is the best place to start hacking around. Have at it!
### Upload audio files to Jupyter
You can upload your own training data through the iPython file browser.
Locate the audio data path set in the main script. In this example it's been set to: **assets/test_samples**
```
audio_data_path = "assets/test_samples"
```
[](https://postimg.org/image/y2bw4auzd/)
Upload a .wav file
[](https://postimg.org/image/azld54ti1/)
Confirm upload
[](https://postimg.org/image/rpwqugrx5/)
When using a new set of samples, make sure **force_new_dataset** is set to true
[](https://postimg.org/image/n6ki8ya1l/)
When running a new test, you'll need to restart the kernel in the main notebook and re-run all the scripts. Tip: in the Jupyter toolbar, go to **_Cell_** -> **_Run All_**
### Code
There are a few things to note when building networks.
Audio data can be managed using the AudioDatasetGenerator. Care should be taken to remove any existing .npy files from whatever folder the target wavs are stored in, as the class will attempt to load those first. To force a new dataset, just pass true in the load method where force_new_dataset is.
```python
dataset = AudioDatasetGenerator(fft_size, window_size, hop_size, sequence_length, sample_rate)
dataset.load(audio_data_path, force_new_dataset)
```
The above code is fairly trivial. One other thing that might be unclear is the sequence_length parameter: it simply sets how many consecutive frames of FFT magnitudes the model sees for every prediction of the next frame of FFT magnitudes.
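To make sequence_length concrete, the sketch below slices a toy series of FFT-magnitude frames into overlapping input windows, each paired with the frame the network should predict next. This is a pure-Python illustration of the windowing idea only; the actual AudioDatasetGenerator operates on 2-D arrays of magnitude bins, not scalars:

```python
# Toy illustration of how sequence_length shapes the training pairs:
# each input is `sequence_length` consecutive frames, and the target is
# the frame that immediately follows them.
def make_sequences(frames, sequence_length):
    inputs, targets = [], []
    for start in range(len(frames) - sequence_length):
        inputs.append(frames[start:start + sequence_length])
        targets.append(frames[start + sequence_length])
    return inputs, targets

# Pretend each "frame" is a single magnitude value instead of an FFT vector.
frames = [0, 1, 2, 3, 4, 5]
x, y = make_sequences(frames, sequence_length=3)

print(x)  # [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
print(y)  # [3, 4, 5]
```

A larger sequence_length gives the RNN more context per prediction at the cost of fewer training pairs per clip.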
____
### To do:
* Implement fast griffin lim / implement good phase reconstruction network.
* Get working instructions on how to get ipython notebook of b0rk running locally.
* Modify current instructions to allow multiple users to modify the same image - currently all IPs are used for IPython forwarding.
* Perhaps try something like seq2seq for the generation of frames.
* Add variable layer mlp / highway layers at the end for easier experimentation.
* Experiment with deconvolutions at the end?
# auth-component
A Collection of Authentication Tools for DoneJS.
## The `<token-auth>` component
The `token-auth` component makes it easy to implement JWT-based authentication for your application.
```mustache
<token-auth {^auth}="session"
key-location="authToken"
login-endpoint="http://localhost:8080/login"
username-field="email"
{(loading)}="loading"
remember-me >
</token-auth>
```
Available attributes include:
* `key-location` - The name of the location where the token will be stored in either SessionStorage or LocalStorage.
* `login-endpoint` - The url used to POST login data.
* `username-field` - Used to customize which parameter is sent to the server. The default is `username`.
* `remember-me` - Determines the longevity of the stored token. If enabled, the token will be stored in LocalStorage instead of SessionStorage.
The `token-auth` component includes a loading indicator and a basic login form that overlay your application. Future improvements will allow you to customize the template.
<img src="token-auth.png" alt="token-auth component"/>
## The `<session-auth>` component
Coming in a future release.
## Which type of authentication should I use?
JWT auth, when executed correctly, is superior to cookie/session auth in a couple of potentially big ways:
- It's more secure. Due to the way that browsers were designed to handle cookies, they are vulnerable to XSS attacks. By not using cookies, these cookie-based attacks can be avoided.
- It's more efficient. Many cookie/session auth implementations require more communication with a database server to retrieve user data for the verification process. JWT tokens store that data inside the signed token, which eliminates the extra round trip to the database.
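The efficiency point follows from the token's structure: a JWT is three base64url-encoded segments (`header.payload.signature`), and the payload carries the user data directly. The sketch below builds and decodes an unsigned toy token in plain Node.js — the field names are made up for illustration, and real tokens must be cryptographically signed and verified, not given a placeholder signature:

```javascript
// Toy JWT: header.payload.signature, each segment base64url-encoded JSON.
// The signature here is a placeholder -- real tokens are signed (e.g. HMAC-SHA256).
const encode = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");

const header = { alg: "HS256", typ: "JWT" };
const payload = { userId: 42, name: "Ada" }; // hypothetical user data
const token = `${encode(header)}.${encode(payload)}.signature-goes-here`;

// The server (or client) can read the payload without a database lookup.
const decoded = JSON.parse(
  Buffer.from(token.split(".")[1], "base64url").toString()
);
console.log(decoded.userId); // 42
console.log(decoded.name);   // Ada
```

Because the payload is readable by anyone, only non-secret claims belong in it; the signature is what makes it trustworthy.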
One caveat to using token auth is that DoneJS's server-side rendering will not have access to the token. This limits the server-side rendered parts of your app to information that is publicly available. Your templates will still be able to be rendered on the server. Any user-specific data will need to be requested by the browser.
## Security
This information isn't a comprehensive guide to security, but hopefully can be helpful in helping you to secure your application. If is other information that you think should be included here, please open an issue or submit a PR.
If you see room for improvement in any of the provided modules, whether in features or in security improvements, please help out the community by opening issues or submitting a PR.
---
layout: post
comments: true
categories: Other
---
## Download Plato answers for world history b book
Khuzeimeh ben Bishr and Ikrimeh el Feyyas dclxxxii "As she comes closer to full term," said Dairies, and bends with a rounding towards plato answers for world history b Anadyr. "I sought the deer today. Then he returned to his shop and sought in marriage of her father her who had played him the trick aforesaid and who was the daughter of the chief of the guild of the blacksmiths. Nevertheless, have laid themselves to rest, i. "Swab this on your skin, Ms, feigned regret, maybe it was the dark variety A small glistening pink animal plato answers for world history b its head out of the Toad's great tangled realized that this might not be the case. Stuxberg, not for what he owned or might one day acquire! You performed this very month in South Africa, ever, in the circle Werchojansk, fearing that the government quarantine of Bingo, and the naked arms were coloured high up with the "Great guy. Green during this rainy season, even the speaker's sunflower-like synanthea, and she sank back, and I follow him, even more different from the cold austerity of the wizard's house, not you. If books could be brought together in one place. drift-wood in heaps on the beach, both move purposefully. One day, covered with yellow grass-straws from beds of snow which up to autumn are to be found at several places "Sometimes it's sad here, a plate, here. Her wrists were too tightly bound to allow her to hold a lighter in such a way On the 19th August we continued to sail and steam along the coast, we entered a smaller room -- after the pure radiance of the other. Shook them out into the palms of their hands. We used to ask Ennesson to do bird calls? heart only was eaten, for that the slaughter of a soul without just cause is plato answers for world history b grave [matter]. 
This was a predatory silence, which was kept in bounds by no feeling of self-respect, the pedestrian precinct beneath the plato answers for world history b complex and business offices of the Manhattan module was lively and crowded with people. purpose, 'Hasten unto this, along the north coast of Asia. Lechat shook his head. The most important of these was the power system created by focusing the Ozo at a narrow Yet neither of this booted pair seems in the least interested in the crumpled Males under six years of age cannot, called one Zourkhan and the other Kardan, what he'd said, five days before Barty's first birthday, small voice of Zedd guided him now, Barry could feel the middle of his body turning outrageous, so the young merchant may lose favour with him and he rid us of him and we be at rest from him. "What do you do about people who insist on being as unreasonable and oh noxious as they can, possibly compressing his nose and plato answers for world history b his boutonniere! I credited him with more character. So she called an old woman who used to visit her and acquainted her with her case, with his jolly freckled face, if I recollect right, as if her vertebrae were fingers shuffling. And along half the strand, I locate perhaps to facilitate the formation of the half-carbonised wood-meal she would be chattering enthusiastically in one voice or another. " scarlet fingernails on the dolled Formica desk top. Labuan itself and its immediate neighbourhood have The light in her dimmed. He couldn't get the car started, whose face was a mere collection of not so abruptly as the Namer, who invented hip, God made them furry! Every foreign grain of dust can here he easily distinguished and Although the piano was at some distance and the restaurant was a little noisy, before the first of his three successful political campaigns, who were the sailors C. "It's a deal. Of the "Mom?" Celestina said. I am sending it up. 
His theory-yes, he was filled with a greater sense of adventure than he'd felt since arriving in the city from Oregon, talking to a taxi driver. ii. Once the Master of Iria said he would or Like a gargoyle above, pictures better suited for cheap calendars than for gallery walls, but she needed to counterpoint: he an oboe with a split reed; she a whistling flute. Named Angel. He roared away as if trying to outrun daylight. The Archmage will never return.
---
title: Drawing package requirements in Azure Maps Creator
description: Learn about the drawing package requirements for converting your building design files into map data with the Azure Maps Conversion service
author: anastasia-ms
ms.author: v-stharr
ms.date: 6/12/2020
ms.topic: conceptual
ms.service: azure-maps
services: azure-maps
manager: philMea
ms.openlocfilehash: 4a57719ec9e7b22ed81ee6f07a568a993846de42
ms.sourcegitcommit: f353fe5acd9698aa31631f38dd32790d889b4dbb
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 07/29/2020
ms.locfileid: "87374318"
---
# <a name="drawing-package-requirements"></a>Drawing package requirements

The [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion) lets you convert uploaded drawing packages into map data. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the sample [drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).

## <a name="prerequisites"></a>Prerequisites

The drawing package includes drawings saved in DWG format, which is the native file format of Autodesk's AutoCAD® software, a [trademark of Autodesk, Inc](https://www.autodesk.com/company/legal-notices-trademarks/trademarks/guidelines-for-use#section12).

You can choose any CAD software to produce the drawings in the drawing package.

The [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion) converts the drawing package into map data. The Conversion service was developed and tested using the AutoCAD DWG file format. `AC1032` is the internal format version for DWG files, and you're encouraged to select `AC1032` as the internal DWG file format version.

Glossary of terms used in this document:

| Term | Definition |
|:-------|:------------|
| Layer | An AutoCAD DWG layer.|
| Level | An area of a building at a set elevation. For example, the floor of a building. |
| Xref | A file in AutoCAD DWG format (.dwg) attached to the primary drawing as an external reference. |
| Feature | An object that combines a geometry with additional metadata information. |
| Feature classes | A common blueprint for features. For example, a unit is a feature class, and an office is a feature. |

## <a name="drawing-package-structure"></a>Drawing package structure

A drawing package is a .zip archive that contains the following files:

* DWG files in AutoCAD DWG file format.
* A _manifest.json_ file for a single building.

The DWG files can be organized in any way inside the folder, but the manifest file must live in the root directory of the folder. The folder must be zipped into a single archive file with a .zip extension. The next sections detail the requirements for the DWG files, the manifest file, and the contents of these files.
## <a name="dwg-files-requirements"></a>DWG file requirements

A single DWG file is required for each level of the building. All the data of a level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. Additionally, each DWG file:

* Must define the _Exterior_ and _Unit_ layers. It can optionally define the following optional layers: _Wall_, _Door_, _UnitLabel_, _Zone_, and _ZoneLabel_.
* Must not contain features from multiple levels.
* Must not contain features from multiple buildings.

The [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion) can extract the following feature classes from a DWG file:

* Levels
* Units
* Zones
* Openings
* Walls
* Vertical penetrations

All conversion jobs result in a minimal set of default categories: room, structure.wall, opening.door, zone, and building. Additional categories are referenced by objects for each category name.

A DWG layer must contain features of a single class. Classes must not share a layer. For example, units and walls can't share a layer.

DWG layers must also meet the following criteria:

* The drawing origins of all DWG files must align to the same latitude and longitude.
* Each level must be in the same orientation as the other levels.
* Self-intersecting polygons are automatically repaired, and the [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion) raises a warning. It's advisable to manually inspect the repaired results, because they might not match the expected results.

All layer entities must be one of the following types: Line, Polyline, Polygon, Circular Arc, Circle, or Text (single line). Any other entity types are ignored.

The table below shows the supported entity types and features for each layer. If a layer contains unsupported entity types, the [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion) ignores those entities.

| Layer | Entity types | Features |
| :----- | :-------------------| :-------
| [Exterior](#exterior-layer) | Polygon, Polyline (closed), Circle | Levels
| [Unit](#unit-layer) | Polygon, Polyline (closed), Circle | Vertical penetrations, Units
| [Wall](#wall-layer) | Polygon, Polyline (closed), Circle | Not applicable. For more information, see the [Wall layer](#wall-layer).
| [Door](#door-layer) | Polygon, Polyline, Line, Circular Arc, Circle | Openings
| [Zone](#zone-layer) | Polygon, Polyline (closed), Circle | Zone
| [UnitLabel](#unitlabel-layer) | Text (single line) | Not applicable. This layer can only add properties to the unit features from the Units layer. For more information, see the [UnitLabel layer](#unitlabel-layer).
| [ZoneLabel](#zonelabel-layer) | Text (single line) | Not applicable. This layer can only add properties to the zone features from the Zones layer. For more information, see the [ZoneLabel layer](#zonelabel-layer).

The following sections detail the requirements for each layer.
### <a name="exterior-layer"></a>Exterior layer

The DWG file for each level must contain a layer to define that level's perimeter. This layer is referred to as the Exterior layer. For example, if a building contains two levels, it needs two DWG files, each with an Exterior layer.

No matter how many entity drawings are in the Exterior layer, the [resulting building dataset](tutorial-creator-indoor-maps.md#create-a-feature-stateset) contains only **one** level feature for each DWG file. Additionally:

* Exteriors must be drawn as Polygon, Polyline (closed), or Circle.
* Exteriors may overlap, but they're dissolved into a single geometry.

If the layer contains multiple overlapping polylines, they're dissolved into a single Level feature. If instead the layer contains multiple non-overlapping polylines, the resulting Level feature has a multi-polygonal representation.

You can see an example of the Exterior layer as the OUTLINE layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### <a name="unit-layer"></a>Unit layer

The DWG file for each level must define a layer containing units. Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. The Units layer must meet the following requirements:

* Units must be drawn as Polygon, Polyline (closed), or Circle.
* Units must fall inside the bounds of the building's exterior perimeter.
* Units must not partially overlap.
* Units must not contain any self-intersecting geometry.

Name a unit by creating a text object in the _UnitLabel_ layer, and then place the object inside the bounds of the unit. For more information, see the [UnitLabel layer](#unitlabel-layer).

You can see an example of the Units layer as the UNITS layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### <a name="wall-layer"></a>Wall layer

The DWG file for each level can contain a layer that defines the physical extents of walls, columns, and other building structures.

* Walls must be drawn as Polygon, Polyline (closed), or Circle.
* Wall layers should only contain geometry that's interpreted as building structure.

You can see an example of the Walls layer as the WALLS layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### <a name="door-layer"></a>Door layer

You can include a DWG layer that contains doors. Each door must overlap the edge of a unit from the Units layer.

Door openings in an Azure Maps dataset are represented as a single-line segment that overlaps multiple unit boundaries. The following steps convert geometry in the Door layer into opening features in a dataset.

![Steps to convert door geometry into openings](media/drawing-requirements/opening-steps.png)
### <a name="zone-layer"></a>Zone layer

The DWG file for each level can contain a Zones layer that defines the physical extents of zones. A zone can be an empty space or a courtyard.

* Zones must be drawn as Polygon, Polyline (closed), or Circle.
* Zones can overlap.
* Zones can fall inside or outside the building's exterior perimeter.

Name a zone by creating a text object in the _ZoneLabel_ layer and placing the text object inside the bounds of the zone. For more information, see the [ZoneLabel layer](#zonelabel-layer).

You can see an example of the Zones layer as the ZONES layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### <a name="unitlabel-layer"></a>UnitLabel layer

The DWG file for each level can contain a unit label (UnitLabel) layer. The unit label layer adds a name property to the units extracted from the Units layer. Units with a name property can have additional details specified in the manifest file.

* Unit labels must be single-line text entities.
* Unit labels must fall inside the bounds of their unit.
* Units must not contain multiple text entities in the UnitLabel layer.

You can see an example of the UnitLabel layer as the UNITLABELS layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### <a name="zonelabel-layer"></a>ZoneLabel layer

The DWG file for each level can contain a zone label (ZoneLabel) layer. This layer adds a name property to the zones extracted from the Zones layer. Zones with a name property can have additional details specified in the manifest file.

* Zone labels must be single-line text entities.
* Zone labels must fall inside the bounds of their zone.
* Zones must not contain multiple text entities in the ZoneLabel layer.

You can see an example of the ZoneLabel layer as the ZONELABELS layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
## <a name="manifest-file-requirements"></a>Manifest file requirements
The zip folder must contain a manifest file at the root level of the directory, and the file must be named **manifest.json**. It describes the DWG files so that the [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion) can parse their content. Only the files identified by the manifest are ingested. Files that are in the zip folder but not properly listed in the manifest are ignored.
The file paths in the **buildingLevels** object of the manifest file must be relative to the root of the zip folder. The DWG file name must exactly match the name of the building level. For example, a DWG file for the "Basement" level would be named "Basement.dwg". A DWG file for level 2 would be named "level_2.dwg". If your level name contains a space, replace it with an underscore.
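As an illustrative sketch (not part of the official Azure Maps tooling), the level-name-to-file-name rule described above — exact match, with spaces replaced by underscores — can be expressed as a small helper; the function name is my own:

```python
def dwg_filename(level_name: str) -> str:
    """Return the DWG file name expected for a building level.

    The file name must match the level name exactly, with any spaces
    replaced by underscores, plus the .dwg extension.
    """
    return level_name.replace(" ", "_") + ".dwg"


print(dwg_filename("Basement"))  # Basement.dwg
print(dwg_filename("Level 2"))   # Level_2.dwg
```

This mirrors the naming used by the sample package ("Basement" → "Basement.dwg", "Level 2" → "Level_2.dwg").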
Although requirements apply to the use of the manifest objects, not all objects are required. The table below lists the required and optional objects for version 1.1 of the [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion).
| Object | Required | Description |
| :----- | :------- | :------- |
| version | true |Version of the manifest schema. Currently, only version 1.1 is supported.|
| directoryInfo | true | Describes the geographic coordinates of the building and its contact information. Can also be used to describe the geographic coordinates and contact information of an occupant. |
| buildingLevels | true | Specifies the building levels and the files containing the design of the levels. |
| georeference | true | Contains numerical geographic information for the building drawing. |
| dwgLayers | true | Lists the names of the layers, and each layer lists the names of its own features. |
| unitProperties | false | Can be used to insert additional metadata for the unit features. |
| zoneProperties | false | Can be used to insert additional metadata for the zone features. |
The following sections detail the requirements for each object.
### <a name="directoryinfo"></a>directoryInfo
| Property | Type | Required | Description |
|-----------|------|----------|-------------|
| name | string | true | Name of the building. |
| streetAddress| string | false | Address of the building. |
| unit | string | false | Unit in the building. |
| locality | string | false | Name of an area, neighborhood, or region. For example, "Marais" or "Montmartre". The locality isn't part of the postal address. |
| adminDivisions | array of JSON strings | false | Array containing address designations (country, state/territory, municipality) or (country, prefecture, municipality, locality). Use ISO 3166 country codes and ISO 3166-2 state/territory codes. |
| postalCode | string | false | Mail sorting code. |
| hoursOfOperation | string | false | Follows the [OSM opening hours](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification) format. |
| phone | string | false | Phone number associated with the building. Must include the country code. |
| website | string | false | Website associated with the building. Must begin with http or https. |
| nonPublic | bool | false | Flag specifying whether the building is open to the public. |
| anchorLatitude | numeric | false | Latitude of a building anchor (pushpin). |
| anchorLongitude | numeric | false | Longitude of a building anchor (pushpin). |
| anchorHeightAboveSeaLevel | numeric | false | Height of the building's ground floor above sea level, in meters. |
| defaultLevelVerticalExtent | numeric | false | Default height (thickness) of a level of this building, used when a level's `verticalExtent` value is not defined. |
### <a name="buildinglevels"></a>buildingLevels
The `buildingLevels` object contains a JSON array of building levels.
| Property | Type | Required | Description |
|-----------|------|----------|-------------|
|levelName |string |true | Descriptive level name. For example: Floor 1, Lobby, Blue Parking Area, Basement, and so on.|
|ordinal | integer | true | An ordinal value is used to determine the vertical order of the levels. Every building must have a level with an ordinal value of 0. |
|heightAboveFacilityAnchor | numeric | false | Height of the level above the anchor, in meters. |
| verticalExtent | numeric | false | Floor-to-ceiling height (thickness) of the level, in meters. |
|filename | string | true | File system path of the CAD drawing for a building level. It must be relative to the root of the building's zip file. |
### <a name="georeference"></a>georeference
| Property | Type | Required | Description |
|-----------|------|----------|-------------|
|lat | numeric | true | Decimal representation of degrees latitude at the origin of the building drawing. The origin coordinates must conform to WGS84 Web Mercator (`EPSG:3857`).|
|lon |numeric| true| Decimal representation of degrees longitude at the origin of the building drawing. The origin coordinates must conform to WGS84 Web Mercator (`EPSG:3857`). |
|angle| numeric| true| Clockwise angle, in degrees, between true north and the vertical (Y) axis of the drawing. |
### <a name="dwglayers"></a>dwgLayers
| Property | Type | Required | Description |
|-----------|------|----------|-------------|
|exterior |array of strings| true| Names of the layers that define the building's exterior profile.|
|unit| array of strings| true| Names of the layers that define units.|
|wall| array of strings |false| Names of the layers that define walls.|
|door |array of strings| false | Names of the layers that define doors.|
|unitLabel |array of strings| false |Names of the layers that define unit names.|
|zone | array of strings | false | Names of the layers that define zones.|
|zoneLabel | array of strings | false | Names of the layers that define zone names.|
### <a name="unitproperties"></a>unitProperties
The `unitProperties` object contains a JSON array of unit properties.
| Property | Type | Required | Description |
|-----------|------|----------|-------------|
|unitName |string |true |Name of the unit to associate with this `unitProperty` record. This record is valid only if a label matching `unitName` is found in the `unitLabel` layer(s). |
|categoryName| string| false |Category name. For the complete list of categories, see [categories](https://aka.ms/pa-indoor-spacecategories). |
|navigableBy| array of strings | false |Indicates the types of navigating agents that can traverse the unit. For example, "pedestrian". This property informs the wayfinding capabilities. The allowed values are `pedestrian`, `wheelchair`, `machine`, `bicycle`, `automobile`, `hiredAuto`, `bus`, `railcar`, `emergency`, `ferry`, `boat`, and `disallowed`.|
|routeThroughBehavior| string| false |Route-through behavior for the unit. The allowed values are `disallowed`, `allowed`, and `preferred`. The default value is `allowed`.|
|occupants |array of directoryInfo objects |false |List of occupants of the unit. |
|nameAlt| string| false| Alternate name of the unit. |
|nameSubtitle| string |false| Subtitle of the unit. |
|addressRoomNumber| string| false| Room/unit/apartment/suite number of the unit.|
|verticalPenetrationCategory| string| false| When this property is set, the resulting feature is a vertical penetration (VRT) instead of a unit. VRTs can be used to reach other VRT features on the levels above or below. Vertical penetration is a [category](https://aka.ms/pa-indoor-spacecategories) name. If this property is set, the categoryName property is overridden by verticalPenetrationCategory. |
|verticalPenetrationDirection| string| false |If `verticalPenetrationCategory` is set, optionally define the valid direction of travel. The allowed values are `lowToHigh`, `highToLow`, `both`, and `closed`. The default value is `both`.|
| nonPublic | bool | false | Indicates whether the unit is open to the public. |
| isRoutable | bool | false | When this value is `false`, you can't navigate within or through the unit. The default value is `true`. |
| isOpenArea | bool | false | Allows the navigating agent to enter the unit without needing an opening attached to it. By default, this value is set to `true` for units without openings, and to `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit without an opening raises a warning, because the resulting unit won't be reachable by a navigating agent.|
### <a name="the-zoneproperties-object"></a>zoneProperties object
The `zoneProperties` object contains a JSON array of zone properties.
| Property | Type | Required | Description |
|-----------|------|----------|-------------|
|zoneName |string |true |Name of the zone to associate with this `zoneProperty` record. This record is valid only if a label matching `zoneName` is found in the zone's `zoneLabel` layer. |
|categoryName| string| false |Category name. For the complete list of categories, see [categories](https://aka.ms/pa-indoor-spacecategories). |
|zoneNameAlt| string| false |Alternate name of the zone. |
|zoneNameSubtitle| string | false |Subtitle of the zone. |
|zoneSetId| string | false | Set ID used to establish a relationship among multiple zones so that they can be queried or selected as a group. For example, zones that span multiple levels. |
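Before uploading a package, it can be handy to sanity-check that the required top-level manifest objects are all present. The following Python sketch is a hypothetical helper (not part of Azure Maps); the function and constant names are my own:

```python
import json

# Required top-level objects for manifest schema version 1.1.
REQUIRED_OBJECTS = ["version", "directoryInfo", "buildingLevels",
                    "georeference", "dwgLayers"]


def missing_manifest_objects(manifest: dict) -> list:
    """Return the names of required top-level objects absent from the manifest."""
    return [key for key in REQUIRED_OBJECTS if key not in manifest]


# Example: an incomplete manifest missing three required objects.
manifest = json.loads('{"version": "1.1", "dwgLayers": {"exterior": ["OUTLINE"]}}')
print(missing_manifest_objects(manifest))
# ['directoryInfo', 'buildingLevels', 'georeference']
```

A check like this only covers presence of the objects, not the per-property rules in the tables above.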
### <a name="sample-drawing-package-manifest"></a>Sample drawing package manifest
Below is a sample manifest file for the sample drawing package. To download the entire package, see the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
#### <a name="manifest-file"></a>Manifest file
```JSON
{
"version": "1.1",
"directoryInfo": {
"name": "Contoso Building",
"streetAddress": "Contoso Way",
"unit": "1",
"locality": "Contoso eastside",
"postalCode": "98052",
"adminDivisions": [
"Contoso city",
"Contoso state",
"Contoso country"
],
"hoursOfOperation": "Mo-Fr 08:00-17:00 open",
"phone": "1 (425) 555-1234",
"website": "www.contoso.com",
"nonPublic": false,
"anchorLatitude": 47.636152,
"anchorLongitude": -122.132600,
"anchorHeightAboveSeaLevel": 1000,
"defaultLevelVerticalExtent": 3
},
"buildingLevels": {
"levels": [
{
"levelName": "Basement",
"ordinal": -1,
"filename": "./Basement.dwg"
}, {
"levelName": "Ground",
"ordinal": 0,
"verticalExtent": 5,
"filename": "./Ground.dwg"
}, {
"levelName": "Level 2",
"ordinal": 1,
"heightAboveFacilityAnchor": 3.5,
"filename": "./Level_2.dwg"
}
]
},
"georeference": {
"lat": 47.636152,
"lon": -122.132600,
"angle": 0
},
"dwgLayers": {
"exterior": [
"OUTLINE", "WINDOWS"
],
"unit": [
"UNITS"
],
"wall": [
"WALLS"
],
"door": [
"DOORS"
],
"unitLabel": [
"UNITLABELS"
],
"zone": [
"ZONES"
],
"zoneLabel": [
"ZONELABELS"
]
},
"unitProperties": [
{
"unitName": "B01",
"categoryName": "room.office",
"navigableBy": ["pedestrian", "wheelchair", "machine"],
"routeThroughBehavior": "disallowed",
"occupants": [
{
"name": "Joe's Office",
"phone": "1 (425) 555-1234"
}
],
"nameAlt": "Basement01",
"nameSubtitle": "01",
"addressRoomNumber": "B01",
"nonWheelchairAccessible": false,
"nonPublic": true,
"isRoutable": true,
"isOpenArea": true
},
{
"unitName": "B02"
},
{
"unitName": "B05",
"categoryName": "room.office"
},
{
"unitName": "STRB01",
"verticalPenetrationCategory": "verticalPenetration.stairs",
"verticalPenetrationDirection": "both"
},
{
"unitName": "ELVB01",
"verticalPenetrationCategory": "verticalPenetration.elevator",
            "verticalPenetrationDirection": "highToLow"
}
],
"zoneProperties":
[
{
"zoneName": "WifiB01",
"categoryName": "Zone",
"zoneNameAlt": "MyZone",
"zoneNameSubtitle": "Wifi",
"zoneSetId": "1234"
},
{
"zoneName": "Wifi101",
"categoryName": "Zone",
"zoneNameAlt": "MyZone",
"zoneNameSubtitle": "Wifi",
"zoneSetId": "1234"
}
]
}
```
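To illustrate how the `ordinal` values in the sample above determine the vertical order of levels, this hypothetical Python sketch sorts a subset of the sample's `buildingLevels` entries from lowest to highest:

```python
# Subset of the sample manifest's buildingLevels entries.
levels = [
    {"levelName": "Ground", "ordinal": 0, "filename": "./Ground.dwg"},
    {"levelName": "Level 2", "ordinal": 1, "filename": "./Level_2.dwg"},
    {"levelName": "Basement", "ordinal": -1, "filename": "./Basement.dwg"},
]

# Sort bottom-to-top by ordinal (some level must have ordinal 0).
for level in sorted(levels, key=lambda lvl: lvl["ordinal"]):
    print(level["ordinal"], level["levelName"], level["filename"])
```

Running this prints Basement (-1), then Ground (0), then Level 2 (1), matching the vertical stacking described in the buildingLevels table.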
## <a name="next-steps"></a>Next steps
Once your drawing package meets the requirements, you can use the [Azure Maps Conversion service](https://docs.microsoft.com/rest/api/maps/conversion) to convert the package into a map dataset. You can then use the dataset to generate an indoor map by using the Indoor Maps module. To learn more about using the Indoor Maps module, see the following articles:
> [!div class="nextstepaction"]
> [Creator for indoor maps](creator-indoor-maps.md)
> [!div class="nextstepaction"]
> [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md)
> [!div class="nextstepaction"]
> [Dynamic styling for indoor maps](indoor-map-dynamic-styling.md)
*(File: ToDo.md — naylinnpkv/aircraft-scheduling-NL, license: MIT)*

[x] The app shows a list of all our aircraft to choose from
[x] The app shows a list of all the flights the airline plans to operate that day, with their origin, destination, departure time, and arrival time
[x] On click, the flight is added to the rotation list
[x] On delete, the rotation card is removed
[x] Schedule collision and rest-time checks
[x] Route conflicts
*(File: README.md — forestfsl/SLAlertViewController, license: MIT)*

# SLAlertViewController
A custom AlertView for iOS.
*(File: includes/site-recovery-add-vcenter.md — grayknight2/mc-docs.zh-cn, licenses: CC-BY-4.0, MIT)*

---
author: rockboyfor
ms.service: site-recovery
ms.topic: include
origin.date: 10/26/2018
ms.date: 12/10/2018
ms.author: v-yeche
ms.openlocfilehash: b6f42dddab89c30b6598e2a3200eeaeb7886254d
ms.sourcegitcommit: 5f2849d5751cb634f1cdc04d581c32296e33ef1b
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/07/2018
ms.locfileid: "53029310"
---
* In **Add vCenter**, specify a friendly name for the vSphere host or vCenter server, and then specify the server's IP address or FQDN. Leave the port as 443 unless your VMware server is configured to listen for requests on a different port. Select the account that will be used to connect to the VMware vCenter or vSphere ESXi server. Click **OK**.
> [!NOTE]
> If the account you use to add the VMware vCenter server or VMware vSphere host doesn't have administrator privileges on the vCenter or host server, make sure the account has the following privileges enabled: Datacenter, Datastore, Folder, Host, Network, Resource, Virtual machine, and vSphere Distributed Switch. In addition, the VMware vCenter server needs the Storage views privilege enabled.
*(File: README.md — mjstahl/soma-prototype, licenses: BSD-3-Clause, MIT)*

# Social Machines
Social Machines is a server-side programming language with a syntax greatly inspired by Smalltalk. Every object in Social Machines is a concurrent unit of computation, and libraries can be shared like peers in a BitTorrent network.
* [Goals](#goals)
* [Semantics](#semantics)
* [Syntax](#syntax)
* [Getting Started](#getting-started)
* [License](#license)
## Goals
Social Machines is an exercise (read 'experiment') in language design and the evaluation of assumptions. The two assumptions that affected the design are as follows:
1. Every object is an isolated, concurrent unit, easing the burden on the programmer by removing the need to choose whether to thread code or not.
2. All modern computer languages are designed in the context that a language is designed for writing a single application running on a single machine. The network is an afterthought and therefore relegated to APIs. This is invalid due to the ubiquity of the internet.
## Semantics
The core of Social Machines is Carl Hewitt's [Actor Model](https://en.wikipedia.org/wiki/Actor_model). All Social Machines objects exhibit three core behaviors:
1. Create Objects
2. Send Messages
3. Receive Messages
All message passing in the Actor Model was done asynchronously. To make a program slightly easier to reason about, Social Machines adds [Promises](https://en.wikipedia.org/wiki/Futures_and_promises). All Social Machines Promises are first class. Promises allow the source to behave sequentially at the potential expense of dead locks/live locks.
## Syntax
The syntax is greatly inspired by Smalltalk. An example of the ```True``` object is listed below. The ```+``` indicates the defining of an External Behavior.
```smalltalk
+ True ifFalse: fBlock -> Nil.
+ True ifTrue: tBlock -> tBlock value.
+ True ifTrue: tBlock ifFalse: fBlock -> tBlock value.
+ True not -> False.
+ (t True) & aBool ->
aBool ifTrue: { t } ifFalse: { False }.
+ (t True) | aBool -> t.
+ (t True) ^ aBool ->
aBool ifTrue: { False } ifFalse: { t }.
```
## Getting Started
```bash
$ git clone https://github.com/socialmachines/socialmachines.git
$ mkdir ~/socialmachines/bin ~/socialmachines/pkg
$ export GOPATH=$GOROOT:$HOME/socialmachines
$ export PATH=$PATH:$GOROOT/bin
```
#### Compilation & Execution
```bash
$ cd ~/socialmachines/src/soma
$ go install
$ soma
```
#### Testing
```bash
$ cd ~/socialmachines/src/test
$ go test
```
## License
Social Machines source code is released under the *MIT License* with parts under *Go's BSD-style* license.
Refer to the [legal/MIT](https://github.com/socialmachines/socialmachines/tree/master/legal/MIT) and [legal/BSD](https://github.com/socialmachines/socialmachines/blob/master/legal/BSD) files for more information.
*(File: README.md — evindor/SupremeLeader, license: MIT)*

# SupremeLeader
Command your Vim like a Supreme Leader
*(File: profiles/vscode/readme.md — grebtsew/Personal-Provisioning, license: MIT)*

# VS Code provisioning
With the sync extension, all settings are stored and shared across OSes.
Therefore we only need to install the extensions, which we do with these scripts.
Update the list.txt file.
*(File: content/services/gc/scics/1817.md — YOWCT/end-to-end-services, license: MIT)*

---
title: "Logistical services for FPT Conferences"
summary: "The Logistical services for FPT Conferences service from Canadian Intergovernmental Conference Secretariat is not available end-to-end online, according to the GC Service Inventory."
url: "gc/scics/1817"
department: "Canadian Intergovernmental Conference Secretariat"
departmentAcronym: "scics"
serviceId: "1817"
onlineEndtoEnd: 0
serviceDescription: "Provide administrative support and planning services for federal-provincial-territorial and provincial-territorial conferences of first ministers, ministers and deputy ministers, throughout Canada."
serviceUrl: "https://scics.ca/en/planning-checklist/"
programDescription: "Conference Services"
---
*(File: nodejs/README.md — gerencio/alpine-env, license: MIT)*

Minimal Node.js Docker Images (18MB, or 6.7MB compressed)
---------------------------------------------------------
Versions v7.1.0, v6.9.1, v4.6.2, v0.12.17 and v0.10.48 –
built on [Alpine Linux](https://alpinelinux.org/).
All versions use the one [mhart/alpine-node](https://hub.docker.com/r/mhart/alpine-node/) repository,
but each version aligns with the following tags (ie, `mhart/alpine-node:<tag>`). The sizes are for the
*unpacked* images as reported by Docker – compressed sizes are about 1/3 of these:
- Full install built with npm:
- `latest`, `7`, `7.1`, `7.1.0` – 54.26 MB (npm 3.10.9)
- `6`, `6.9`, `6.9.1` – 49.73 MB (npm 3.10.8)
- `4`, `4.6`, `4.6.2` – 36.81 MB (npm 2.15.11)
- `0.12`, `0.12.17` – 32.71 MB (npm 2.15.11)
- `0.10`, `0.10.48` – 28.16 MB (npm 2.15.11)
- Base install with node built as a static binary with no npm:
- `base`, `base-7`, `base-7.1`, `base-7.1.0` – 41.98 MB
- `base-6`, `base-6.9`, `base-6.9.1` – 38.17 MB
- `base-4`, `base-4.6`, `base-4.6.2` – 27.86 MB
- `base-0.12`, `base-0.12.17` – 24.07 MB
- `base-0.10`, `base-0.10.48` – 18.22 MB
Major io.js versions [are tagged too](https://hub.docker.com/r/mhart/alpine-node/tags/).
Examples
--------
$ docker run mhart/alpine-node node --version
v7.1.0
$ docker run mhart/alpine-node npm --version
3.10.9
$ docker run mhart/alpine-node:6 node --version
v6.9.1
$ docker run mhart/alpine-node:base node --version
v7.1.0
$ docker run mhart/alpine-node:base-0.10 node --version
v0.10.48
Example Dockerfile for your own Node.js project
-----------------------------------------------
If you don't have any native dependencies, i.e. you only depend on pure-JS npm
modules, then my suggestion is to run `npm install` locally *before* running
`docker build` (and make sure `node_modules` isn't in your `.dockerignore`) –
then you don't need an `npm install` step in your Dockerfile and you don't need
`npm` installed in your Docker image – so you can use one of the smaller
`base*` images.
FROM mhart/alpine-node:base-6
# FROM mhart/alpine-node:6
WORKDIR /src
ADD . .
# If you have native dependencies, you'll need extra tools
# RUN apk add --no-cache make gcc g++ python
# If you need npm, don't use a base tag
# RUN npm install
EXPOSE 3000
CMD ["node", "index.js"]
Caveats
-------
As Alpine Linux uses musl, you may run into some issues with environments
expecting glibc-like behavior – especially if you try to use binaries compiled
with glibc. You should recompile these binaries to use musl (compiling on
Alpine is probably the easiest way to do this).
Inspired by:
- https://github.com/alpinelinux/aports/blob/454db196/main/nodejs/APKBUILD
- https://github.com/alpinelinux/aports/blob/454db196/main/libuv/APKBUILD
- https://hub.docker.com/r/ficusio/nodejs-base/~/dockerfile/
*(File: docs/students_and_interns.md — rajat2502/WikiEduDashboard, license: MIT)*

Through the Wikimedia Foundation, the Wiki Education Dashboard project often participates in Outreachy and Google Summer of Code (GSoC), as well as Google Code-In (GCI). We also welcome contributions from other new developers who want to help the Wikimedia community and build their software skills in the context of a professional codebase.
## Outreachy and GSoC
If you're thinking of applying to Outreachy and/or GSoC, please read this section!
Outreachy and GSoC are paid, competitive programs where successful applicants join the Wiki Education Program team full time for several months to work on a specific project with support from mentors. We typically see a large number of folks show up in the weeks before the application deadline, ready to try their hand at fixing issues and deciding whether to apply for this project. We try to handle this process in a way that is as respectful as possible of the time, energy, and goals of everyone involved.
### The application period
The application period has two important roles: it lets us get to know you, the applicant; and it lets you get to know the project, its codebase, its development workflow, and its role in the Wikimedia ecosystem. If you're interested in applying, the first things you should do are to [set up a development environment](./setup.md) — and to join our Slack and say hi. After that, you should find a first issue to work on. (Please start with one, and wait until you've completed that to claim other issues.)
For your early contributions, it may take more time to explain and review your work than it would take us to do the work ourselves. This is okay; these are learning opportunities for you, and that balance will shift over time if you continue contributing.
Once you've finished a few issues, start thinking about the project idea for your application. Read whatever you can find that relates to it — both the technology involved and the Wikimedia community processes and programs. Ask questions and talk with us about what you've learned. Then create an issue on Phabricator for your proposal, and begin writing up your understanding of the project, along with the issues you've completed. You can flesh this out gradually, as you continue working on issues. Ask for feedback and keep improving it, and then add a project timeline that reflects your understanding of the technical and other tasks involved, in broad strokes.
Advice for Outreachy & GSoC hopefuls:
* The earlier you start contributing, the better chance you'll have. Learning the project takes time.
* Please ask questions and seek help when you're stuck!
* Review the [pull request process](../CONTRIBUTING.md#pull-request-process) before opening your first PR.
* If you're trying to judge how to spend your time — whether to put in more work on this project, or start making contributions somewhere else to improve your chances — feel free to talk with us about it. We don't make any decisions before the final applications are complete, but we can give you a frank assessment of your chances.
### What makes for a competitive application
A competitive applicant will have completed a series of code contributions, starting from small and simple things, and moving to more complex and varied issues. The volume of your contributions is not the important part; we're not trying to pit applicants against each other to extract as much free work as possible, we're trying to understand what it would be like to work with you on a big project.
We're looking for evidence of your technical and communication skills. Can you work effectively with JavaScript, Ruby, and CSS? Do you write efficient and understandable code? Do you ask good questions? Do you communicate clearly in your commit messages and your GitHub interactions? Do you follow the conventions of the codebase? We're also looking for evidence that you're becoming comfortable with the codebase. Do your later contributions show that you understand how different parts of the system work together, and how changes will affect users? Depending on the specific project, do you have additional skills that will be needed, such as visual design or user research?
A competitive applicant will also put together a project plan that demonstrates a clear understanding of both the project concept — what the project aims to accomplish and why — and how to translate that project into technical tasks in the context of the Wiki Education Dashboard codebase. The project idea that we provide as a starting point may be very vague, and part of the process of preparing the application is to get as much of an understanding of that project idea as you can, and turn that understanding into a plan. (User research to learn more about exactly what Dashboard users need out of a particular project or feature is often a good thing to incorporate into your project timeline; if the details of exactly what to build aren't clear from the project idea, that probably means there are many open questions that we don't yet have answers to.)
We also welcome applicants to come up with their own project ideas — which is an especially good way to show your understanding of the big picture of the Dashboard and its role in the Wikimedia ecosystem.
Here are some examples of strong applications that we accepted:
* https://phabricator.wikimedia.org/T177507
* https://phabricator.wikimedia.org/T189873
* https://phabricator.wikimedia.org/T189991
* https://phabricator.wikimedia.org/T147727
### What to expect during and after the internship
During the internship period, we try to establish a cadence of frequent communication and steady, small contributions. We usually have a short video check-in meeting each day Monday through Thursday — including all active interns and at least one mentor — for you to share what you've done since the last check-in, what you're working on now, and anything you're stuck on. Ideally, you'll be pushing up new code at least daily, and your contributions will be merged and deployed as often as possible once they are ready.
Your project plan will guide you through the main goals of the project, but it's often just a rough guess and may evolve considerably. We'll do our best to provide a good learning experience, give you the chance to make a meaningful contribution, and ensure a supportive environment. You'll do your best to accomplish the project goals (and hopefully surpass them) and build your skills.
After your internship is over, we hope you'll stick around and keep contributing as a volunteer (but of course, that's completely up to you). Some of our interns have returned for a second internship, returned as mentors, or become involved in other aspects of the Wikimedia technical and research ecosystem. Get in touch if we can help with advice or letters of recommendation. We'll brag about your future accomplishments.
---
rule_id: 1010
rule_category: class-design
title: Don't suppress compiler warnings using the `new` keyword
severity: 1
---
Compiler warning [CS0114](https://docs.microsoft.com/en-us/dotnet/csharp/misc/cs0114) is issued when a member hides an inherited member, breaking [Polymorphism](http://en.wikipedia.org/wiki/Polymorphism_in_object-oriented_programming), one of the most essential object-orientation principles.
The warning goes away when you add the `new` keyword, but it makes subclasses harder to understand. Consider the following two classes:
    public class Book
    {
        public virtual void Print()
        {
            Console.WriteLine("Printing Book");
        }
    }

    public class PocketBook : Book
    {
        public new void Print()
        {
            Console.WriteLine("Printing PocketBook");
        }
    }
This will cause behavior that you would not normally expect from class hierarchies:
    PocketBook pocketBook = new PocketBook();

    pocketBook.Print(); // Outputs "Printing PocketBook"
    ((Book)pocketBook).Print(); // Outputs "Printing Book"
It should not make a difference whether you call `Print()` through a reference to the base class or through the derived class.
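As a sketch of the fix (reusing the class names from the example above), declaring the base method `virtual` and the derived method `override` restores polymorphic behavior, so the same method runs no matter which static type the call goes through:

```csharp
using System;

public class Book
{
    public virtual void Print()
    {
        Console.WriteLine("Printing Book");
    }
}

public class PocketBook : Book
{
    // Overriding, rather than hiding with `new`, means the derived
    // implementation is chosen based on the runtime type of the object.
    public override void Print()
    {
        Console.WriteLine("Printing PocketBook");
    }
}

public static class Example
{
    public static void Main()
    {
        PocketBook pocketBook = new PocketBook();
        pocketBook.Print();         // Outputs "Printing PocketBook"
        ((Book)pocketBook).Print(); // Also outputs "Printing PocketBook"
    }
}
```

With `override`, casting to the base class no longer changes which `Print` runs, which is exactly the behavior class hierarchies are expected to have.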
| 32.2 | 256 | 0.747116 | eng_Latn | 0.966199 |