hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
listlengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
listlengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
listlengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
4f6467bfca898ff81390bd66ae62fd4ea122196d
6,758
md
Markdown
packages/hdwallet-provider/README.md
gitpurva/truffle
1a3b17a67bc272eef81b66e44b5dee5d05396722
[ "MIT" ]
1
2021-03-18T09:24:15.000Z
2021-03-18T09:24:15.000Z
packages/hdwallet-provider/README.md
gitpurva/truffle
1a3b17a67bc272eef81b66e44b5dee5d05396722
[ "MIT" ]
6
2021-04-11T21:22:26.000Z
2021-04-13T06:23:01.000Z
packages/hdwallet-provider/README.md
gitpurva/truffle
1a3b17a67bc272eef81b66e44b5dee5d05396722
[ "MIT" ]
2
2022-02-08T12:16:54.000Z
2022-03-20T03:37:39.000Z
# @truffle/hdwallet-provider

HD Wallet-enabled Web3 provider. Use it to sign transactions for addresses derived from a 12 or 24 word mnemonic.

## Install

```
$ npm install @truffle/hdwallet-provider
```

## Requirements

```
Node >= 7.6
Web3 ^1.2.0
```

## General Usage

You can use this provider wherever a Web3 provider is needed, not just in Truffle. For Truffle-specific usage, see the next section.

By default, `HDWalletProvider` uses the first address generated from the mnemonic. If you pass in a specific index, it uses that address instead.

### Instantiation

You can instantiate `hdwallet-provider` with options passed in an object with named keys. You can specify the following options in your object:

| Parameter | Type | Default | Required | Description |
| ------ | ---- | ------- | -------- | ----------- |
| `mnemonic` | `object\|string` | `null` | [ ] | Object containing a `phrase` property and an optional `password` property. `phrase` is a 12 word mnemonic string from which addresses are created. Alternatively, the value for `mnemonic` can be a string containing your mnemonic phrase. |
| `privateKeys` | `string[]` | `null` | [ ] | Array containing one or more private keys. |
| `providerOrUrl` | `string\|object` | `null` | [x] | URI or Ethereum client to send all other non-transaction-related Web3 requests |
| `addressIndex` | `number` | `0` | [ ] | If specified, tells the provider to manage the address at the given index |
| `numberOfAddresses` | `number` | `1` | [ ] | If specified, creates `numberOfAddresses` addresses when instantiated |
| `shareNonce` | `boolean` | `true` | [ ] | If `false`, a new WalletProvider will track its own nonce state |
| `derivationPath` | `string` | `"m/44'/60'/0'/0/"` | [ ] | If specified, tells the wallet engine which derivation path to use when deriving addresses |
| `pollingInterval` | `number` | `4000` | [ ] | If specified, tells the wallet engine to use a custom interval when polling to track blocks. Specified in milliseconds. |

Some examples can be found below:

```javascript
const HDWalletProvider = require("@truffle/hdwallet-provider");
const Web3 = require("web3");
const mnemonicPhrase = "mountains supernatural bird..."; // 12 word mnemonic

let provider = new HDWalletProvider({
  mnemonic: { phrase: mnemonicPhrase },
  providerOrUrl: "http://localhost:8545"
});

// Or, alternatively pass in a zero-based address index.
provider = new HDWalletProvider({
  mnemonic: mnemonicPhrase,
  providerOrUrl: "http://localhost:8545",
  addressIndex: 5
});

// Or, use your own hierarchical derivation path
provider = new HDWalletProvider({
  mnemonic: mnemonicPhrase,
  providerOrUrl: "http://localhost:8545",
  numberOfAddresses: 1,
  shareNonce: true,
  derivationPath: "m/44'/137'/0'/0/"
});

// To make HDWallet less "chatty" over JSON-RPC,
// configure a higher value for the polling interval.
provider = new HDWalletProvider({
  mnemonic: { phrase: mnemonicPhrase },
  providerOrUrl: "http://localhost:8545",
  pollingInterval: 8000
});

// HDWalletProvider is compatible with Web3. Use it in the Web3 constructor,
// just like any other Web3 provider.
const web3 = new Web3(provider);

// Or, if web3 is already initialized, you can call `setProvider` on web3,
// web3.eth, web3.shh and/or web3.bzz
web3.setProvider(provider);

// ...
// Write your code here.
// ...

// At termination, `provider.engine.stop()` should be called to shut down the process cleanly.
provider.engine.stop();
```

**Note: If both a mnemonic and private keys are provided, the mnemonic is used.**

### Using the legacy interface (deprecated)

The legacy interface is deprecated; it is recommended to pass options in an object as detailed above. The following method of passing options is documented primarily for completeness and to avoid confusion. You can specify the following options in the order below. Pass `undefined` if you want to omit a parameter.

| Parameter | Type | Default | Required | Description |
| ------ | ---- | ------- | -------- | ----------- |
| `mnemonic`/`privateKeys` | `string`/`string[]` | `null` | [x] | 12 word mnemonic from which addresses are created, or an array of private keys. |
| `providerOrUrl` | `string\|object` | `null` | [x] | URI or Ethereum client to send all other non-transaction-related Web3 requests |
| `addressIndex` | `number` | `0` | [ ] | If specified, tells the provider to manage the address at the given index |
| `numberOfAddresses` | `number` | `1` | [ ] | If specified, creates `numberOfAddresses` addresses when instantiated |
| `shareNonce` | `boolean` | `true` | [ ] | If `false`, a new WalletProvider will track its own nonce state |
| `derivationPath` | `string` | `"m/44'/60'/0'/0/"` | [ ] | If specified, tells the wallet engine which derivation path to use when deriving addresses |

Instead of a mnemonic, you can alternatively provide a private key or an array of private keys as the first parameter. When providing an array, `addressIndex` and `numberOfAddresses` are fully supported.

```javascript
const HDWalletProvider = require("@truffle/hdwallet-provider");

// Load a single private key as a string
let provider = new HDWalletProvider(
  "3f841bf589fdf83a521e55d51afddc34fa65351161eead24f064855fc29c9580",
  "http://localhost:8545"
);

// Or, pass an array of private keys, and optionally use a certain subset of addresses
const privateKeys = [
  "3f841bf589fdf83a521e55d51afddc34fa65351161eead24f064855fc29c9580",
  "9549f39decea7b7504e15572b2c6a72766df0281cea22bd1a3bc87166b1ca290",
];
provider = new HDWalletProvider(privateKeys, "http://localhost:8545", 0, 2); // start at address_index 0 and load both addresses
```

**NOTE: This is just an example. NEVER hard code production/mainnet private keys in your code or commit them to git. They should always be loaded from environment variables or a secure secret management system.**

## Truffle Usage

You can easily use this within a Truffle configuration. For instance:

truffle-config.js

```javascript
const HDWalletProvider = require("@truffle/hdwallet-provider");

const mnemonicPhrase = "mountains supernatural bird ...";

module.exports = {
  networks: {
    development: {
      host: "localhost",
      port: 8545,
      network_id: "*" // Match any network id
    },
    ropsten: {
      // must be a thunk, otherwise truffle commands may hang in CI
      provider: () =>
        new HDWalletProvider({
          mnemonic: { phrase: mnemonicPhrase },
          providerOrUrl: "https://ropsten.infura.io/v3/YOUR-PROJECT-ID",
          numberOfAddresses: 1,
          shareNonce: true,
          derivationPath: "m/44'/1'/0'/0/"
        }),
      network_id: '3',
    }
  }
};
```
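The advice about loading secrets from the environment can be sketched as follows. This is a minimal illustration, not part of the library; `MNEMONIC_PHRASE` is an assumed variable name, not a Truffle convention:

```javascript
// Minimal sketch: read the mnemonic from an environment variable instead of
// hard-coding it. MNEMONIC_PHRASE is an assumed name for this example.
const mnemonicPhrase = process.env.MNEMONIC_PHRASE || "";
const hasMnemonic = mnemonicPhrase.trim().length > 0;

if (!hasMnemonic) {
  // Fail fast rather than constructing a provider with an empty secret.
  console.warn("MNEMONIC_PHRASE is not set; refusing to construct a provider.");
}
```

In a real `truffle-config.js` you would then pass `mnemonicPhrase` into `new HDWalletProvider({ ... })` only when `hasMnemonic` is true, keeping the secret itself out of source control.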
39.988166
263
0.70657
eng_Latn
0.981084
4f646ee5bfaad330da124bea8bac9529895613f5
12,990
markdown
Markdown
_posts/2008-10-16-amplified-sensitivity-of-porous-chemosensors-based-on-bernoulli-effect.markdown
api-evangelist/patents-2008
fa7a9f82abec3c65c55cab3b2a849b335fd2f75e
[ "Apache-2.0" ]
null
null
null
_posts/2008-10-16-amplified-sensitivity-of-porous-chemosensors-based-on-bernoulli-effect.markdown
api-evangelist/patents-2008
fa7a9f82abec3c65c55cab3b2a849b335fd2f75e
[ "Apache-2.0" ]
null
null
null
_posts/2008-10-16-amplified-sensitivity-of-porous-chemosensors-based-on-bernoulli-effect.markdown
api-evangelist/patents-2008
fa7a9f82abec3c65c55cab3b2a849b335fd2f75e
[ "Apache-2.0" ]
5
2019-07-11T06:07:03.000Z
2020-08-13T12:57:05.000Z
---
title: Amplified sensitivity of porous chemosensors based on bernoulli effect
abstract: A method of vapor sampling and its delivery to the porous sensory element(s) employed in chemical detectors/sensors for vapor(s) identification and quantification. The sampling and delivery system comprises a flow cell in which a sensory membrane is placed parallel to the flow, while an additional flow normal to the membrane is introduced using the Bernoulli effect. The bi-directional flow of vapors increases the interactions between the sensory material and vapor molecules, and enhances sensitivity.
url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=07908902&OS=07908902&RS=07908902
owner: Emitech, Inc
number: 07908902
owner_city: Fall River
owner_country: US
publication_date: 20081016
---

The invention described herein may be manufactured and used by or for the Government of the United States of America for governmental purposes without the payment of any royalties thereon or therefor.

The present invention relates to a novel method for the sampling and delivery of vapors for chemical sensing. This method discloses the application of a new flow cell design, based on the Bernoulli principle, to enhance the sensitivity of a sensor or sensor array that uses a sensory material entrapped in a porous membrane to detect analyte vapors.

The invention generally relates to the sampling of vapors and their delivery to the sensory elements in various chemical sensors or sensor arrays. Usually, any sensor for detecting vapors, especially low-pressure vapors, requires a sampling and delivery system. The most general sampling and delivery method is vapor pumping through a flow channel where the sensory element(s) is placed. The interaction of analyte vapors with the sensory element affects its physical and chemical properties (e.g., electrical conductivity, optical absorption, etc.),
and their changes can be detected, followed by analyte vapor identification and quantification. To increase the sensitivity to analyte vapors, porous materials with a large surface area are employed. The porous medium can be sensitive itself or can be infiltrated with a sensory material. The major problem here is the vapor permeability through the pores, whose topology can be branch-like, or which can be partially clogged, making vapor diffusion inside the pores difficult. This factor can seriously degrade the sensor performance, reducing the high sensitivity that is expected from the large surface area.

If the average pore size of the sensory membrane is small (less than 1–2 μm), the pump cannot provide effective vapor delivery through the pores because of the low flow rate (flow normal to the membrane surface). Increasing the flow rate would require higher pump power and could result in membrane breakdown. Therefore, the sensory membrane should be fixed so that it provides a flow parallel to its surface. In such a configuration, despite the absence of the flow-rate limitation, vapors still cannot penetrate deep inside the membrane. Therefore, there is a need for a delivery method that combines the parallel vapor flow with an effective permeability mechanism through the porous structure.

The present invention provides a novel method of vapor sampling and its delivery to the porous sensory element(s) employed in chemical detectors/sensors for vapor identification and quantification. In this method, an additional channel for vapor delivery through the porous structure is employed, based on the Bernoulli effect. Thus, an amplified sensitivity is anticipated as compared to conventional methods where only one delivery channel is used. In this invention, the sensory element (porous membrane) is placed in the flow channel in a special manner to produce a difference in the static pressure on the two sides of the porous membrane due to the Bernoulli effect.
It can be done by varying the width of two sub-channels or by making an opening connecting the back side of the membrane to the ambient environment (the subject of the vapor detection). The difference in vapor velocity on the two sides of the membrane should result in a static pressure difference due to the Bernoulli effect. Consequently, a flow in the direction normal to the membrane surface should be generated, increasing the vapor interaction with the sensory material and thereby enhancing the detector sensitivity.

Airflow must frequently be sampled for a variety of flow monitoring applications. Such sampling may be performed to examine the ambient air for chemical, biological, and/or radiological particulates. Other purposes may include measuring inertial characteristics of the airflows, such as pressure measurements. Finally, airflow sampling and delivery can be used in various chemical sensors, detectors, and sensor arrays for identification and quantification of specific target vapors (analytes) existing in the air, e.g., industrial toxic compounds, explosives, and chemical/biological warfare agents. In this invention, the sampling/delivery system with improved characteristics will be considered as a part of the above chemical sensors. Thus, it can be incorporated in any chemical sensor to enhance (amplify) the device sensitivity.

First, we will demonstrate that a porous sensory material has a significant advantage over sensors based on flat solid films. Let us consider the fluorescence quenching of a solid sensory film as a transduction mechanism for the detection of analyte vapors. The same approach can be applied to any other transduction mechanism, e.g., conductivity, surface acoustic wave, plasmon resonance, etc. The transduction mechanisms include an optical signal change (e.g., reflectance, absorption, luminescence), a conductivity or resistivity change, a change of the resonance frequency of the acoustic signal, etc.
The two main factors that govern the device sensitivity S = ΔI/I₀ are the initial amplitude of the signal, I₀, and the amplitude after an exposure to analyte vapor, I. The value of I₀ depends on the amount of deposited sensory molecules (polymers), and the value of I is defined by the analyte permeability. Here we keep the other parameters constant, such as emissive yields, binding constants, film porosity, etc. A thin sensory film will provide a low I₀ value and high analyte permeability, so the maximum amount of molecules will be quenched. Experimentally, this means that despite high sensitivity, the initial signal can be comparable with the noise level, especially for a sensory monolayer. For a thick film, the I₀ amplitude can be high, but the vapor permeability is reduced at remote layers under the film surface.

The use of porous materials resolves this problem, since a sufficient amount of the sensory material is combined with high analyte permeability. However, most porous materials (e.g., sol-gels, zeolites, porous glasses, etc.) have random porosity and a broad pore-size distribution, which lead to poor vapor permeability and poor interaction of the analyte molecules with the sensory material inside the pores. This factor can seriously degrade the sensor performance, reducing the high sensitivity expected from the large surface area of a porous membrane. Therefore, a special method should be applied to efficiently deliver the vapors inside the nano/micro-porous membrane. At first sight, a direct airflow through the sensory membrane (normal to the membrane) could resolve this problem. However, the pump cannot provide effective vapor delivery through the pores, since pores of such small size (less than 1–2 μm) block airflow directed normal to the membrane surface. Increasing the flow rate would require higher pump power and could result in membrane breakdown.
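The trade-off between initial signal and permeability can be illustrated numerically. This is a sketch with made-up amplitudes, assuming the sensitivity is the fractional quenching S = (I₀ − I)/I₀; none of the numbers come from the patent:

```javascript
// Hedged sketch: fractional fluorescence quenching as a sensitivity measure,
// assuming S = (I0 - I) / I0, with illustrative (not measured) amplitudes.
const sensitivity = (i0, i) => (i0 - i) / i0;

const thinFilm = sensitivity(10, 2);    // low initial signal, strong quenching
const thickFilm = sensitivity(100, 80); // high initial signal, weak quenching

console.log(thinFilm, thickFilm); // 0.8 0.2
```

The thin film shows the larger fractional response, but its absolute signal (10 units here) may sit near the noise floor, which is exactly the dilemma the porous membrane is meant to resolve.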
Therefore, to maintain a high flow rate, the sensory membrane is usually placed in a flow channel so that the air flows in the direction parallel to the membrane surface. The drawback of this design is the low vapor permeability through the pores.

In this invention, we propose the use of the Bernoulli effect to improve the vapor permeability through the porous sensory membrane. The Bernoulli principle is concerned with the relationship between static and dynamic pressures, P_total = P_static + P_dynamic, where P_dynamic = ρV²/2 is the dynamic pressure, ρ is the vapor density, and V is the vapor velocity. Thus, a higher vapor velocity should result in a lower static pressure. The figure shows the design of the flow cell where the back side of the sensory membrane is in contact with the ambient air through an opening, and its front side is exposed to the parallel flow inside the flow cell. According to the Bernoulli effect, the static pressure at the back of the membrane, being higher than that at its front due to the moving flow, should initiate an additional vapor flow through the porous membrane (dotted arrows). Such an additional flow, normal to the membrane surface, improves the vapor permeability through the sensory element(s) and consequently increases the device sensitivity.

To validate the Bernoulli effect, experiments have been conducted with free-standing aluminum oxide membranes (100 nm pore size) filled with a fluorescent sensory polymer. The luminescent sensory material is selected from any group of sensory polymers and small molecules (e.g., conductive, emissive, non-emissive, acidic, basic), quantum dots, nanotubes or nanorods fabricated from II-VI, III-V, or IV semiconductors (e.g., CdS, CdSe, InP, GaAs, Ge, Si, and doped Si), metal oxides (e.g., TiO₂ and Al₂O₃), Si oxide, and carbon (e.g., carbon nanotubes or fullerenes). A sensory membrane was placed in the flow cell such that it covers the opening in the cell wall while exposing two sides: one side to the inner flow channel and the other to the atmosphere.
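The pressure relation above can be made concrete with a back-of-the-envelope estimate. All numbers below are illustrative assumptions and do not come from the patent:

```javascript
// Bernoulli estimate: P_static + rho * v^2 / 2 is constant along a streamline,
// so the static-pressure difference across the membrane is roughly
// deltaP = rho * (vFront^2 - vBack^2) / 2. All values are assumed for illustration.
const rhoAir = 1.2;  // kg/m^3, air density near room temperature
const vFront = 5.0;  // m/s, assumed flow speed along the membrane front
const vBack = 0.0;   // m/s, still air behind the membrane opening
const deltaP = 0.5 * rhoAir * (vFront ** 2 - vBack ** 2); // pascals
console.log(deltaP); // 15
```

A few pascals to tens of pascals of static-pressure difference is small, but it acts continuously and is enough to drive the slow normal flow through sub-micrometer pores that the text describes.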
The source of an analyte vapor to which the fluorescent polymer is sensitive was brought from the outside into the proximity of the covered opening. No analyte vapors were introduced into the cell through the cell entrance. No fluorescence quenching was induced by the analyte in the absence of the flow, i.e., when the pump was off. However, a sizable response was observed when the flow was introduced into the cell, i.e., when the pump was on, such that the analyte vapors were drawn into the flow cell through the opening and consequently through the thickness of the membrane. This proves that the Bernoulli effect can be employed to amplify the sensitivity by increasing the interaction between the sensory polymer and the analyte molecules.

Another example shows the design of a flow cell divided into two sub-channels by a solid concave wall, with an opening connecting the two sub-channels at the place where they have maximum and minimum widths, and a sensory membrane covering the opening. At the flow cell inlet, the static pressure P is equal in both of the sub-channels because of the same vapor/air velocity. However, different vapor velocities on the two sides of the sensory membrane, as a result of the different channel widths, lead to different static pressures (P₁ ≠ P₂), which induces an additional vapor flow through the pores (dotted arrows). Such an additional flow, normal to the membrane surface, improves the vapor permeability through the sensory element(s) and consequently increases the device sensitivity.

Another example demonstrates the concept design of a miniature optochemical sensor array consisting of two sensory membranes integrated with a wireless device, which uses the Bernoulli principle to amplify the system sensitivity. Its key part is the flow cell integrated with an LED and two photodiodes/filters in the wall of the flow channel. Two sensory elements are mounted on the opposite walls of the flow channel in front of the photodiodes.
Each element represents a porous membrane (alumina or Si) infiltrated with two different sensory emissive polymers to provide chemical diversity for sensor selectivity. The porous membrane is selected from II-VI, III-V, or IV semiconductors (e.g., CdS, CdSe, InP, GaAs, Ge, Si, and doped Si), zeolites, sol-gels, silicon oxide beads (synthetic opal), metal oxides (e.g., TiO₂ and Al₂O₃), porous glass, metal-organic frameworks (MOFs), polymers (e.g., electrospun polymer film, nylon membrane), and carbon nanotubes (buckypaper). Because of the porous structure, the lateral size can be small (4 × 2 mm) and yet can provide a high photoluminescence signal output and enhanced sensitivity. For vapor sampling, a micropump will be integrated in the flow cell exhaust. The microprocessor reads the output signals from the photodiodes through an A/D interface, analyzes them according to the pattern recognition algorithm, and in the case of target vapor detection, displays the alarm signal on the VGI. Then the personnel can trigger the wireless alarm manually (e.g., using a cell phone), or it can be done automatically. A rechargeable lithium-ion battery provides the power supply for the functional modules of the sensor system. The inset to the figure shows how the Bernoulli effect amplifies the device sensitivity as a result of an additional flow directed normal to the porous sensory elements.
295.227273
1,704
0.825019
eng_Latn
0.999823
4f64b7a8d4a2f159ade19089c701d8c33a564612
3,881
md
Markdown
README.md
jaf7/halftime_metro
3ad285666dac26c64e813b43e8baac96c5ec5ca4
[ "MIT", "Unlicense" ]
null
null
null
README.md
jaf7/halftime_metro
3ad285666dac26c64e813b43e8baac96c5ec5ca4
[ "MIT", "Unlicense" ]
null
null
null
README.md
jaf7/halftime_metro
3ad285666dac26c64e813b43e8baac96c5ec5ca4
[ "MIT", "Unlicense" ]
null
null
null
# Halftime Metro

A meetup app that leverages 5 Google Maps APIs to remove the complexity of deciding where to meet in the city. It calculates optimum metro-plus-walking travel times for each party and offers a list of easy walking destinations to choose from. The destination choices are grouped around an optimum travel time midpoint so that each party can arrive within a couple of minutes of each other. It returns detailed directions and a map route for each friend that includes optimized walk times.

The app uses asynchronous fetch requests to the Google Maps Geocode, DistanceMatrix, DirectionsService, DirectionsRenderer and Places APIs. Promises and `Promise.all` iterables are used to leverage the various APIs smoothly. No frontend framework: DOM manipulation in pure JS. The app also uses a minimal backend Rails API for user registration (hosted on Heroku).

## Demo

➡️   Use it [here](https://jaf7.github.io/halftime_metro/)! (Feedback welcome! @janthonyfields)

➡️   Watch a 2 minute demo [here](https://youtu.be/DK8PVKX0Dq8)

## Motivation

This app was born because I couldn't find a service online that offered the best metro/walking route to a meeting place between two origins, based on optimum travel time. While distance is important, within the city (NYC in my case) travel time matters most when planning to meet up. When you're on the go or short of time, it can be difficult and time-consuming to figure out the best subway stops to get off at and the best walking destinations. I wanted a way to make a quick decision and spend more time with my friend and less time figuring out where to meet.
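The `Promise.all` coordination mentioned above can be sketched roughly like this; `fetchTravelTime` is a hypothetical stand-in for the app's real Google Maps calls, not its actual code:

```javascript
// Hypothetical sketch: fan out one async request per origin, then combine
// the results only after every promise has resolved.
const fetchTravelTime = origin =>
  Promise.resolve({ origin, minutes: 10 + origin.length }); // stand-in for a Maps API call

const origins = ["Astoria", "Park Slope"];

Promise.all(origins.map(fetchTravelTime)).then(travelTimes => {
  // Group destination choices around the slower party's travel time.
  const slowest = Math.max(...travelTimes.map(t => t.minutes));
  console.log(`Plan around roughly ${slowest} minutes of travel.`);
});
```

The useful property here is that the combining step runs exactly once, with every result available, instead of juggling callbacks per API.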
## Built Using

* [ES6 JavaScript](http://es6-features.org/) for behavior and DOM manipulation
* Custom CSS including `@keyframes` for animation sequences
* [Bulma](https://bulma.io/)
* [jQuery](https://jquery.com/)
* [jQuery.scrollTo](https://github.com/flesler/jquery.scrollTo) - Lightweight animated scrolling with jQuery
* [Heroku](https://devcenter.heroku.com/) - deployed to Heroku remote
* [Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli) - For local development server, logging, controlling dynos

## Contribute

Fork and clone the repo, install live-server:

```
npm install -g live-server
```

and execute `live-server` while in the root directory. To run the backend locally, see its [repo]() (when running locally the app will look for the API endpoint at `localhost:3001`)

#### TODO

- [ ] Persist logged in users' saved routes to backend or `LocalStorage`
- [ ] Implement Facebook login using [OmniAuth](https://github.com/omniauth/omniauth)
- [ ] Experiment w/ refactoring to use [Ruby Geocoder](https://github.com/alexreisner/geocoder)

## Credits

Thanks to my pairing partner on this project, Caroline Lee!

## License

The MIT License (MIT)

Copyright (c) 2018 by Anthony Fields

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
69.303571
571
0.77815
eng_Latn
0.960364
4f659f1b690fe88c6e0a1b9eb55ee0ffd48f1066
233
md
Markdown
content/theme/hugo-future-imperfect-slim.md
Jinksi/jamstackthemes
0fbdcd73a885478ed2088e127d56190f83c97485
[ "MIT" ]
2
2020-03-21T06:46:46.000Z
2022-03-11T13:18:11.000Z
content/theme/hugo-future-imperfect-slim.md
Jinksi/jamstackthemes
0fbdcd73a885478ed2088e127d56190f83c97485
[ "MIT" ]
null
null
null
content/theme/hugo-future-imperfect-slim.md
Jinksi/jamstackthemes
0fbdcd73a885478ed2088e127d56190f83c97485
[ "MIT" ]
1
2019-11-28T09:31:11.000Z
2019-11-28T09:31:11.000Z
--- title: "Future Imperfect Slim" github: https://github.com/pacollins/hugo-future-imperfect-slim demo: https://themes.gohugo.io/theme/hugo-future-imperfect-slim/ author: Patrick Collins draft: true ssg: - Hugo cms: - No Cms ---
21.181818
64
0.733906
yue_Hant
0.141325
4f65da4c692535cf95aec742b2907ec6837fff32
3,014
md
Markdown
content/zh/publication/The improved DnCNN for linear noise attenuation/index.md
nana33/academic-kickstart
a7d08a27f252efd184b1be8030256d20e5782d74
[ "MIT" ]
null
null
null
content/zh/publication/The improved DnCNN for linear noise attenuation/index.md
nana33/academic-kickstart
a7d08a27f252efd184b1be8030256d20e5782d74
[ "MIT" ]
null
null
null
content/zh/publication/The improved DnCNN for linear noise attenuation/index.md
nana33/academic-kickstart
a7d08a27f252efd184b1be8030256d20e5782d74
[ "MIT" ]
null
null
null
--- title: "The improved DnCNN for linear noise attenuation" authors: - Yue Zheng - Yijun Yuan - Xu Si date: "2019-07-01T00:00:00Z" doi: "" # Schedule page publish date (NOT publication's date). publishDate: "2019-07-01T00:00:00Z" # Publication type. # Legend: 0 = Uncategorized; 1 = Conference paper; 2 = Journal article; # 3 = Preprint / Working Paper; 4 = Report; 5 = Book; 6 = Book section; # 7 = Thesis; 8 = Patent publication_types: ["1"] # Publication name and optional abbreviated publication name. publication: In *2019 SEG 3rd International Workshop* publication_short: In *SEG* abstract: Seismic data are often highly corrupted by different kinds of noise, including linear noise. Therefore, the attenuation of linear noise has been an essential step in seismic data processing. Traditional methods of linear noise suppression are mostly based on the difference of signals and noise in transform domains. However, the application of these traditional methods is limited to some particular assumptions. For this reason, we utilize an algorithm based on deep convolutional neural network (DnCNN) to attenuate linear noise. DnCNN is proposed to suppress Gaussian noise in images. In term of the characteristics of linear noise, we make some improvements to the original DnCNN, like patch size, convolutional kernel number. Tests on two types of synthetic data both indicate that the improved DnCNN algorithm is capable of linear noise attenuation in the seismic data. # Summary. An optional shortened abstract. summary: Seismic data are often highly corrupted by different kinds of noise, including linear noise. We utilize an algorithm based on deep convolutional neural network (DnCNN) to attenuate linear noise.Tests on two types of synthetic data both indicate that the improved DnCNN algorithm is capable of linear noise attenuation in the seismic data. 
tags:
- Source Themes
featured: true

links:
# - name: Custom Link
#   url: http://example.org
url_pdf: publication/The improved DnCNN for linear noise attenuation
url_code: 'https://github.com/nana33/DnCNN_denoise_linear-noise'
# url_dataset: '#'
# url_poster: '#'
# url_project: ''
# url_slides: ''
# url_source: '#'
# url_video: '#'

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
image:
  caption: 'Image credit: [**Unsplash**](https://unsplash.com/photos/pLCdAaMFLTE)'
  focal_point: ""
  preview_only: false

# Associated Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `internal-project` references `content/project/internal-project/index.md`.
# Otherwise, set `projects: []`.
projects:
- internal-project

# Slides (optional).
# Associate this publication with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides: "example"` references `content/slides/example/index.md`.
# Otherwise, set `slides: ""`.
slides: example
---
45.666667
886
0.763769
eng_Latn
0.973841
4f660e126c826a4a8f54278d681ede4e7bd88996
10,053
md
Markdown
articles/azure-arc/data/migrate-to-managed-instance.md
macdrai/azure-docs.fr-fr
59bc35684beaba04a4f4c09a745393e1d91428db
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-arc/data/migrate-to-managed-instance.md
macdrai/azure-docs.fr-fr
59bc35684beaba04a4f4c09a745393e1d91428db
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-arc/data/migrate-to-managed-instance.md
macdrai/azure-docs.fr-fr
59bc35684beaba04a4f4c09a745393e1d91428db
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Migrate a database from SQL Server to Azure Arc enabled SQL Managed Instance
description: Migrate a database from SQL Server to Azure Arc enabled SQL Managed Instance
services: azure-arc
ms.service: azure-arc
ms.subservice: azure-arc-data
author: vin-yu
ms.author: vinsonyu
ms.reviewer: mikeray
ms.date: 09/22/2020
ms.topic: how-to
ms.openlocfilehash: 86563b0a44bade2cedaf76af3c247821756111fe
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/09/2020
ms.locfileid: "90930282"
---

# <a name="migrate-sql-server-to-azure-arc-enabled-sql-managed-instance"></a>Migrate: SQL Server to Azure Arc enabled SQL Managed Instance

This scenario walks you through the steps of migrating a database from a SQL Server instance to Azure SQL Managed Instance in Azure Arc using two different backup and restore methods.

[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]

## <a name="use-azure-blob-storage"></a>Use Azure Blob storage

Use Azure Blob storage for the migration to Azure Arc enabled SQL Managed Instance. This method uses Azure Blob storage as a temporary storage location to back up to and then restore from.

### <a name="prerequisites"></a>Prerequisites

- [Install Azure Data Studio](install-client-tools.md)
- [Install Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
- An Azure subscription

### <a name="step-1-provision-azure-blob-storage"></a>Step 1: Provision Azure Blob storage

1. Follow the steps described in [Create an Azure Blob storage account](../../storage/blobs/storage-blob-create-account-block-blob.md?tabs=azure-portal)
1. Launch Azure Storage Explorer
1.
[Connectez-vous à Azure](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) pour accéder au stockage Blob créé à l’étape précédente. 1. Cliquez avec le bouton droit sur le compte de stockage Blob, puis sélectionnez **Créer un conteneur d’objets blob** pour créer un conteneur où le fichier de sauvegarde sera stocké. ### <a name="step-2-get-storage-blob-credentials"></a>Étape 2 : Obtenir les informations d’identification du stockage Blob 1. Dans l’Explorateur Stockage Azure, cliquez avec le bouton droit sur le conteneur d’objets blob qui vient d’être créé et sélectionnez **Obtenir une signature d’accès partagé** 1. Sélectionnez **Lire**, **Écrire** et **Lister** 1. Sélectionnez **Créer** Prenez note de l’URI et de la chaîne de requête figurant dans cet écran. Ils seront nécessaires dans des étapes ultérieures. Cliquez sur le bouton **Copier** pour enregistrer dans le Bloc-notes/OneNote, etc. 1. Fermez la fenêtre **Signature d’accès partagé**. ### <a name="step-3-backup-database-file-to-azure-blob-storage"></a>Étape 3 : Sauvegarder le fichier de base de données dans le stockage Blob Azure Dans cette étape, nous allons nous connecter au serveur SQL Server source et créer le fichier de sauvegarde de la base de données que vous voulez migrer vers SQL Managed Instance - Azure Arc. 1. Lancer Azure Data Studio 1. Connectez-vous à l’instance SQL Server qui contient la base de données que vous voulez migrer vers SQL Managed Instance - Azure Arc. 1. Cliquez avec le bouton droit sur la base de données et sélectionnez **Nouvelle requête**. 1. Préparez votre requête au format suivant en remplaçant les espaces réservés indiqués par `<...>` en utilisant les informations de la signature d’accès partagé des étapes précédentes. Une fois que vous avez remplacé les valeurs, exécutez la requête. 
```sql IF NOT EXISTS (SELECT * FROM sys.credentials WHERE name = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>') CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>] WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '<SAS_TOKEN>'; ``` 1. De même, préparez la commande **BACKUP DATABASE** comme suit pour créer un fichier de sauvegarde dans le conteneur d’objets blob. Une fois que vous avez remplacé les valeurs, exécutez la requête. ```sql BACKUP DATABASE <database name> TO URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>' ``` 1. Ouvrez l’Explorateur Stockage Azure et vérifiez que le fichier de sauvegarde créé à l’étape précédente est visible dans le conteneur d’objets blob. ### <a name="step-4-restore-the-database-from-azure-blob-storage-to-sql-managed-instance---azure-arc"></a>Étape 4 : Restaurer la base de données depuis le stockage Blob Azure vers SQL Managed Instance - Azure Arc 1. Dans Azure Data Studio, connectez-vous à l’instance SQL Managed Instance - Azure Arc. 1. Développez **Bases de données système**, cliquez avec le bouton droit sur la base de données **master** et sélectionnez **Nouvelle requête**. 1. Dans la fenêtre de l’éditeur de requête, préparez et exécutez la même requête que celle de l’étape précédente pour créer les informations d’identification. ```sql IF NOT EXISTS (SELECT * FROM sys.credentials WHERE name = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>') CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>] WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '<SAS_TOKEN>'; ``` 1. Préparez et exécutez la commande ci-dessous pour vérifier que le fichier de sauvegarde est lisible et intact. 
```console RESTORE FILELISTONLY FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak' ``` 1. Préparez et exécutez la commande **RESTORE DATABASE** comme suit pour restaurer le fichier de sauvegarde dans une base de données sur SQL Managed Instance - Azure Arc ```sql RESTORE DATABASE <database name> FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>' WITH MOVE 'Test' to '/var/opt/mssql/data/<file name>.mdf' ,MOVE 'Test_log' to '/var/opt/mssql/data/<file name>.ldf' ,RECOVERY ,REPLACE ,STATS = 5; GO ``` Découvrez plus en détail la sauvegarde vers une URL ici : [Documentation sur la sauvegarde vers une URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) [Sauvegarder vers une URL en utilisant SQL Server Management Studio (SSMS)](/sql/relational-databases/tutorial-sql-server-backup-and-restore-to-azure-blob-storage-service) ------- ## <a name="method-2-copy-the-backup-file-into-an-azure-sql-managed-instance---azure-arc-pod-using-kubectl"></a>Méthode 2 : Copier le fichier de sauvegarde dans un pod Azure SQL Managed Instance - Azure Arc en utilisant kubectl Cette méthode vous montre comment produire un fichier de sauvegarde que vous créez via n’importe quelle méthode, puis comment le copier dans le stockage local du pod Azure SQL Managed Instance afin de pouvoir effectuer une restauration à partir de là, comme vous le feriez sur un système de fichiers standard sur Windows ou Linux. Dans ce scénario, vous allez utiliser la commande `kubectl cp` pour copier le fichier depuis un emplacement donné dans le système de fichiers du pod. 
### <a name="prerequisites"></a>Prérequis - Installer et configurer kubectl pour le faire pointer vers votre cluster Kubernetes sur lequel les services de données Azure Arc sont déployés - Avoir un outil comme Azure Data Studio ou SQL Server Management Server installé et connecté au serveur SQL Server où vous voulez créer le fichier de sauvegarde, OU avoir un fichier .bak existant déjà créé sur votre système de fichiers local. ### <a name="step-1-backup-the-database-if-you-havent-already"></a>Étape 1 : Sauvegarder la base de données si vous ne l’avez pas déjà fait Sauvegardez la base de données SQL Server dans le chemin de votre fichier local, comme n’importe quelle sauvegarde standard SQL Server sur disque : ```sql BACKUP DATABASE Test TO DISK = 'c:\tmp\test.bak' WITH FORMAT, MEDIANAME = 'Test’ ; GO ``` ### <a name="step-2-copy-the-backup-file-into-the-pods-file-system"></a>Étape 2 : Copier le fichier de sauvegarde dans le système de fichiers du pod Recherchez le nom du pod où l’instance SQL est déployée. C’est généralement similaire à ceci : `pod/<sqlinstancename>-0`. Obtenez la liste de tous les pods en exécutant : ```console kubectl get pods -n <namespace of data controller> ``` Exemple : Copiez le fichier de sauvegarde depuis le stockage local vers le pod SQL du cluster. 
```console kubectl cp <source file location> <pod name>:var/opt/mssql/data/<file name> -n <namespace name> #Example: kubectl cp C:\Backupfiles\test.bak sqlinstance1-0:var/opt/mssql/data/test.bak -n arc ``` ### <a name="step-3-restore-the-database"></a>Étape 3 : Restaurer la base de données Préparez et exécutez la commande RESTORE pour restaurer le fichier de sauvegarde dans Azure SQL Managed Instance - Azure Arc ```sql RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/<file name>.bak' WITH MOVE '<database name>' to '/var/opt/mssql/data/<file name>.mdf' ,MOVE '<database name>' to '/var/opt/mssql/data/<file name>_log.ldf' ,RECOVERY ,REPLACE ,STATS = 5; GO ``` Exemple : ```sql RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/test.bak' WITH MOVE 'test' to '/var/opt/mssql/data/test.mdf' ,MOVE 'test' to '/var/opt/mssql/data/test_log.ldf' ,RECOVERY ,REPLACE ,STATS = 5; GO ``` ## <a name="next-steps"></a>Étapes suivantes [En savoir plus sur les fonctionnalités et les capacités d’Azure Arc enabled SQL Managed Instance](managed-instance-features.md) [Commencer en créant un contrôleur de données](create-data-controller.md) [Vous avez déjà créé un contrôleur de données ? Créez une instance Azure Arc enabled SQL Managed Instance](create-sql-managed-instance.md)
51.030457
480
0.760569
fra_Latn
0.847623
4f662048c3bf4bbcc0684cadb7c6439e9d802fa5
248
md
Markdown
README.md
seltzlab/venations
d16f22de261e271229cc9cfca579f489d5e52dd5
[ "MIT" ]
null
null
null
README.md
seltzlab/venations
d16f22de261e271229cc9cfca579f489d5e52dd5
[ "MIT" ]
null
null
null
README.md
seltzlab/venations
d16f22de261e271229cc9cfca579f489d5e52dd5
[ "MIT" ]
null
null
null
# OPEN VENATIONS ALGORITHM

## A simulation of leaf venation growth

An experiment based on the paper http://algorithmicbotany.org/papers/venation.sig2005.pdf and on the work https://github.com/danielgm/Venation

Built with http://paperjs.org
27.555556
89
0.794355
yue_Hant
0.647557
4f6664641fbe2276b2fa46495eb46636cdd51235
402
md
Markdown
docs/v1/DashboardListDeleteResponse.md
technicalpickles/datadog-api-client-ruby
019b898a554f9cde6ffefc912784f62c0a8cc8d6
[ "Apache-2.0" ]
null
null
null
docs/v1/DashboardListDeleteResponse.md
technicalpickles/datadog-api-client-ruby
019b898a554f9cde6ffefc912784f62c0a8cc8d6
[ "Apache-2.0" ]
1
2021-01-27T05:06:34.000Z
2021-01-27T05:12:09.000Z
docs/v1/DashboardListDeleteResponse.md
ConnectionMaster/datadog-api-client-ruby
dd9451bb7508c1c1211556c995a04215322ffada
[ "Apache-2.0" ]
null
null
null
# DatadogAPIClient::V1::DashboardListDeleteResponse ## Properties | Name | Type | Description | Notes | | ---- | ---- | ----------- | ----- | | **deleted_dashboard_list_id** | **Integer** | ID of the deleted dashboard list. | [optional] | ## Example ```ruby require 'datadog_api_client/v1' instance = DatadogAPIClient::V1::DashboardListDeleteResponse.new( deleted_dashboard_list_id: null ) ```
21.157895
96
0.669154
yue_Hant
0.252419
4f6665d93c2ed47c698f8130da293b01a1d8679f
2,198
md
Markdown
README.md
nguyenanhung/codeigniter4-skeleton
8ccda2484e257e210df2938e417c11137d553b8d
[ "MIT" ]
1
2021-09-09T03:57:06.000Z
2021-09-09T03:57:06.000Z
README.md
nguyenanhung/codeigniter4-skeleton
8ccda2484e257e210df2938e417c11137d553b8d
[ "MIT" ]
null
null
null
README.md
nguyenanhung/codeigniter4-skeleton
8ccda2484e257e210df2938e417c11137d553b8d
[ "MIT" ]
null
null
null
# CodeIgniter4 Framework - Skeleton Application

[![Latest Stable Version](http://poser.pugx.org/nguyenanhung/codeigniter4-skeleton/v)](https://packagist.org/packages/nguyenanhung/codeigniter4-skeleton) [![Total Downloads](http://poser.pugx.org/nguyenanhung/codeigniter4-skeleton/downloads)](https://packagist.org/packages/nguyenanhung/codeigniter4-skeleton) [![Latest Unstable Version](http://poser.pugx.org/nguyenanhung/codeigniter4-skeleton/v/unstable)](https://packagist.org/packages/nguyenanhung/codeigniter4-skeleton) [![License](http://poser.pugx.org/nguyenanhung/codeigniter4-skeleton/license)](https://packagist.org/packages/nguyenanhung/codeigniter4-skeleton) [![PHP Version Require](http://poser.pugx.org/nguyenanhung/codeigniter4-skeleton/require/php)](https://packagist.org/packages/nguyenanhung/codeigniter4-skeleton)

A skeleton for deploying web applications built on the `CodeIgniter4 Framework`, repackaged so that a new application can be spun up quickly.

## CHANGELOG

Changelog information is maintained at https://github.com/nguyenanhung/codeigniter4-skeleton/blob/master/CHANGELOG.md

## Install

Run the following command to install `nguyenanhung/codeigniter4-skeleton` and scaffold a new project:

```shell
composer create-project nguyenanhung/codeigniter4-skeleton [my-app-name]
```

Replace `[my-app-name]` with the directory name of the new project, for example `my-website`:

```shell
composer create-project nguyenanhung/codeigniter4-skeleton my-website
```

## Start Application

Deploy the application quickly with the prebuilt Docker setup.

1. Build the Docker images

```shell
docker-compose build
```

2. Start the application

```shell
docker-compose up -d
```

3. Add the URLs to your hosts file

```shell
sudo vi /etc/hosts

127.0.0.1 app.codeigniter4.io
127.0.0.1 opcache.codeigniter4.io
```

4. Open the service in your browser

```shell
http://app.codeigniter4.io/
```

5. Screenshot page

![https://i.imgur.com/lno3ugO.jpg](https://i.imgur.com/lno3ugO.jpg)

## Contact

| Name        | Email                | Skype            | Facebook      |
| ----------- | -------------------- | ---------------- | ------------- |
| Hung Nguyen | dev@nguyenanhung.com | nguyenanhung5891 | @nguyenanhung |
33.815385
782
0.730209
vie_Latn
0.794771
4f669f1d2363ca3caaf279385e2707e383a5f1e3
9,028
md
Markdown
docs/implementation-notes.md
monish001/react-data-grid
0847e653d83e744a120bdb262ccbe59b15fb3763
[ "MIT" ]
3
2019-09-16T15:34:01.000Z
2019-09-17T10:43:00.000Z
docs/implementation-notes.md
monish001/react-data-grid
0847e653d83e744a120bdb262ccbe59b15fb3763
[ "MIT" ]
null
null
null
docs/implementation-notes.md
monish001/react-data-grid
0847e653d83e744a120bdb262ccbe59b15fb3763
[ "MIT" ]
1
2021-02-14T13:19:54.000Z
2021-02-14T13:19:54.000Z
---
id: implementation-notes
title: Implementation Notes
---

## Data Virtualization

ReactDataGrid has been optimized to render data in a highly efficient manner. The grid data is virtualized both for rows as well as columns, only rendering exactly what is necessary to the viewport to allow for performant grid interaction, for actions such as scrolling and cell navigation.

The calculation for which rows and columns to render to the Canvas happens in the [Viewport](https://github.com/adazzle/react-data-grid/blob/master/packages/react-data-grid/src/Viewport.js). It uses input parameters such as gridHeight, rowHeight, scroll length and direction to determine the visible and overscan indexes of the rows and columns. Rows and columns which fall between the visible indexes are rendered to the Canvas and are visible. Rows and columns which fall outside the visible indexes but inside the overscan indexes are also rendered to the Canvas but are not visible. This buffer of rows and columns allows for smooth scrolling with minimal lag even for data sets that contain thousands of rows and columns.

### Virtualization props

The most important props that the Viewport passes to the Canvas are the following:

- rowOverscanStartIdx - The index of the first overscan (invisible) row to be rendered to the canvas.
- rowOverscanEndIdx - The index of the last overscan (invisible) row to be rendered to the canvas.
- rowVisibleStartIdx - The index of the first visible row to be rendered to the canvas.
- rowVisibleEndIdx - The index of the last visible row to be rendered to the canvas.
- colVisibleStartIdx - The index of the first visible column to be rendered to the canvas.
- colVisibleEndIdx - The index of the last visible column to be rendered to the canvas.
- colOverscanStartIdx - The index of the first overscan (invisible) column to be rendered to the canvas.
- colOverscanEndIdx - The index of the last overscan (invisible) column to be rendered to the canvas.
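The visible/overscan index calculation described above can be sketched in plain JavaScript. This is an illustrative sketch, not the actual Viewport code — the function name, parameter names, and the fixed overscan size of two rows are assumptions for this example:

```javascript
// Derive visible and overscan row indexes from the scroll position.
// A small fixed overscan buffer keeps scrolling smooth without
// rendering rows far outside the viewport.
function getVerticalRangeToRender({ scrollTop, gridHeight, rowHeight, rowsCount, overscanRows = 2 }) {
  const rowVisibleStartIdx = Math.floor(scrollTop / rowHeight);
  const rowVisibleEndIdx = Math.min(
    rowsCount - 1,
    Math.ceil((scrollTop + gridHeight) / rowHeight) - 1
  );
  // Overscan rows sit just outside the visible window, clamped to the grid bounds
  const rowOverscanStartIdx = Math.max(0, rowVisibleStartIdx - overscanRows);
  const rowOverscanEndIdx = Math.min(rowsCount - 1, rowVisibleEndIdx + overscanRows);
  return { rowVisibleStartIdx, rowVisibleEndIdx, rowOverscanStartIdx, rowOverscanEndIdx };
}
```

For example, with `rowHeight: 50`, `gridHeight: 200` and `scrollTop: 100`, rows 2–5 are visible while rows 0–7 are rendered to the Canvas.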
### Virtualization when scrolling

When the grid is being scrolled, it is important that only the minimal necessary amount of rows and columns are rendered to the canvas. One way that ReactDataGrid optimises this range is by using the scroll direction. See the diagrams below for an example of how the rendered rows are calculated on scrolling.

### Scrolling downwards

When scrolling downwards, it is unnecessary to render any columns outside of the visible window. Rendering a buffer of extra columns will only slow down scrolling and create lag. The only buffer that should be rendered is the rows at the bottom of the canvas. These overscan rows will make scrolling appear smoother as the rows already exist in the dom at the time the viewport scrolls to their position.

![Scrolling Down](assets/scroll_down.svg)

### Scrolling upwards

When scrolling upwards, the only buffer that should be rendered is the rows at the top of the canvas.

![Scrolling Down](assets/scroll_up.svg)

### Scrolling right

When scrolling right, it is unnecessary to render any rows outside of the visible window. Rendering a buffer of extra rows will only slow down scrolling and create lag. The only buffer that should be rendered is some overflow columns to the right of the canvas. These overscan columns will make horizontal scrolling appear smoother as the columns already exist in the dom at the time the viewport scrolls to their position.

![Scrolling Right](assets/scroll_right.svg)

### Scrolling Left

When scrolling left, the only buffer that should be rendered is some columns to the left of the canvas. This is the reverse of the previous image.

## Interaction Layer

A recent major change to the core RDG architecture was to migrate all the grid interaction functionality out of the Row and Cell components and into its own separate layer, known as the InteractionMask. The aim of this was

1. To significantly improve grid interaction performance.
   Prior to the change, the Cell component had quite a few expensive operations that were called for each render of each Cell. With grids that displayed many cells, the performance overhead was significant, and the grid interaction was noticeably laggy.

2. To simplify the Row and Cell components, which had become bloated with too much functionality.

The responsibilities of the InteractionMask are

- Render a SelectionMask which is used to control cell selection and navigation on the grid
- Render an EditorContainer when a cell is updated for editing
- Render a DragMask when the DragHandle of a cell is dragged up or down
- Render a CopyMask when a cell is pasted or copied from
- Render a CellRangeSelectionMask when a range of cells is selected

Let's take a look at some of the above functionality in more detail.

### Cell Selection and Navigation

When a cell is selected, either with a mouse click or with the arrow keys of the keyboard, the InteractionMask will render a blue rectangular SelectionMask around the border of the cell that was selected.

![Selection Mask](assets/selection-mask.png)

The InteractionMask keeps track of the selected cell `{idx, rowIdx}` in state. However, the InteractionMask does not know how to relate coordinates on the screen to an actual cell's position on the grid. In order to update the state of the selected cell's position in the InteractionMask, the InteractionMask needs to listen to events from the Cell component. This is possible as each Cell component is aware of its position in the grid. Unfortunately, as InteractionMask and Cell sit at the same level in the component hierarchy, it is difficult for the two components to directly communicate with each other.
### EventBus

To solve this issue as well as similar use cases, we introduced an [EventBus](https://github.com/adazzle/react-data-grid/blob/master/packages/react-data-grid/src/masks/EventBus.js) object to allow for easier communication between sibling components, as well as to provide a way for components to propagate state changes to their descendants deep in the component hierarchy. We originally looked to solve this problem by incorporating a state management solution like Redux, as well as a custom RxJs state manager. These added too much overhead to what we wanted to achieve, and in the end we decided to create a very simple event bus object that is passed down from the root component, allowing components to publish and subscribe to events.

<img src="assets/selection4.svg" alt="drawing" width="1800"/>

### Cell Editing

The InteractionMask also keeps track of which cell is currently being edited. It listens to both keypresses and mouse double clicks and uses the SelectedCell coordinates to open the [EditorContainer](https://github.com/adazzle/react-data-grid/blob/master/packages/common/editors/EditorContainer.js) in the same position as the SelectedCell. The EditorContainer will render an editor as defined by `column.editor`. An editor will remain open until it is committed. By default the EditorContainer will call commit in the following scenarios:

- When the user presses Enter from the primary input of the editor
- When the user presses Tab from the primary input of the editor
- When the primary input of the editor is unfocused
- When the onCommit prop is called manually by the editor

Note that it is entirely possible to change this default behavior by preventing certain events from bubbling up from the editor to the EditorContainer by calling `event.stopPropagation();` from the editor. Once commit has been called, this callback will propagate up to the root ReactDataGrid component and fire an onGridRowsUpdated event.
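The publish/subscribe EventBus described earlier can be sketched as a minimal object along these lines. This is an illustrative sketch, not the library's actual implementation — the `subscribe`/`dispatch` names and the event-type string are assumptions for the example:

```javascript
// Minimal publish/subscribe bus: components register handlers per event type,
// and siblings can dispatch events to them without a direct reference.
class EventBus {
  constructor() {
    this.subscribers = {};
  }

  subscribe(type, handler) {
    (this.subscribers[type] = this.subscribers[type] || []).push(handler);
    // Return an unsubscribe function so components can clean up on unmount
    return () => {
      this.subscribers[type] = this.subscribers[type].filter(h => h !== handler);
    };
  }

  dispatch(type, ...args) {
    (this.subscribers[type] || []).forEach(h => h(...args));
  }
}
```

A Cell could then dispatch a hypothetical `'SELECT_CELL'` event with its `{ idx, rowIdx }` on click, and the InteractionMask could subscribe to that event to update its selected-cell state.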
See the Editing examples for an overview of how this works.

By default, each cell of ReactDataGrid is read-only. Editing can be turned on for a given column as described in this article.

### Cell Update scenarios

When editing is enabled, it is possible to update the values of a cell in the following ways:

* Using the supplied editor of the column. The default editor is the [SimpleTextEditor](https://github.com/adazzle/react-data-grid/blob/master/packages/common/editors/SimpleTextEditor.js).
* Copy/pasting the value from one cell to another <kbd>CTRL</kbd>+<kbd>C</kbd>, <kbd>CTRL</kbd>+<kbd>V</kbd>
* Update multiple cells by dragging the fill handle of a cell up or down to a destination cell.
* Update all cells under a given cell by double clicking the cell's fill handle.

### Enabling cell edit

In order for the cells of a column to be editable, you need to do the following:

1. Set the `editable` property of the column to be true.
2. Provide an `onGridRowsUpdated` handler function.

The below snippet is an example handler that handles all the above update scenarios.

```javascript
onGridRowsUpdated = ({ fromRow, toRow, updated }) => {
  this.setState(state => {
    const rows = state.rows.slice();
    for (let i = fromRow; i <= toRow; i++) {
      rows[i] = { ...rows[i], ...updated };
    }
    return { rows };
  });
};
```
79.893805
743
0.779242
eng_Latn
0.999321
4f66b5c38c27cb2a755e9f78c79c298f9eb7f620
5,572
md
Markdown
_posts/2020-01-13-Series Article of RasPi 01.md
OUCliuxiang/OUC_LiuX.github.io
310ed04a4a62df55f71be17209f3a0aa45d5ae45
[ "MIT" ]
null
null
null
_posts/2020-01-13-Series Article of RasPi 01.md
OUCliuxiang/OUC_LiuX.github.io
310ed04a4a62df55f71be17209f3a0aa45d5ae45
[ "MIT" ]
1
2022-03-01T12:26:59.000Z
2022-03-01T12:26:59.000Z
_posts/2020-01-13-Series Article of RasPi 01.md
OUCliuxiang/OUC_LiuX.github.io
310ed04a4a62df55f71be17209f3a0aa45d5ae45
[ "MIT" ]
2
2019-10-14T06:28:43.000Z
2021-09-09T07:05:10.000Z
---
layout: post
title: Series Article of RasPi -- 01
subtitle: Raspberry Pi notes 01 -- flashing the system and first-boot configuration
date: 2021-07-27
author: OUC_LiuX
header-img: img/wallpic02.jpg
catalog: true
tags:
    - RasPi
---

<head>
    <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({
            tex2jax: {
            skipTags: ['script', 'noscript', 'style', 'textarea', 'pre'],
            inlineMath: [['$','$']]
            }
        });
    </script>
</head>

## Flashing the system

1. Download the flashing tool.

   Download the imaging tool for your operating system from the Raspberry Pi [official site](https://www.raspberrypi.org/software/). My work machine runs Ubuntu, so this walkthrough uses [Imager for Ubuntu](https://downloads.raspberrypi.org/imager/imager_latest_amd64.deb).

   On Raspberry Pi OS itself, it should be installed through the `apt` package manager:

   ```shell
   $ sudo apt-get install rpi-imager
   ```

2. Download the system image.

   Download an image from the [official site](https://www.raspberrypi.org/software/operating-systems/#raspberry-pi-os-32-bit). [Raspberry Pi OS with desktop](https://downloads.raspberrypi.org/raspios_armhf/images/raspios_armhf-2021-05-28/2021-05-07-raspios-buster-armhf.zip) is recommended, because having a graphical desktop can come in handy.

   Then install the imager with `sudo dpkg -i xxx`.

3. Flash the image to the SD card.

   ```shell
   $ rpi-imager
   ```

   Launch the tool (pinning it to the dock is even better). The interface looks like this:

   <div align=center><img src="https://raw.githubusercontent.com/OUCliuxiang/OUCliuxiang.github.io/master/img/raspi/raspi01.png"></div>

   Then choose the SD card and choose the image.

   <div align=center><img src="https://raw.githubusercontent.com/OUCliuxiang/OUCliuxiang.github.io/master/img/raspi/raspi02.png"></div>

   <div align=center><img src="https://raw.githubusercontent.com/OUCliuxiang/OUCliuxiang.github.io/master/img/raspi/raspi03.png"></div>

   Make sure you pick the downloaded image through the `Use custom` entry. If you pick the first entry, Raspberry Pi OS (32-bit), the image will be downloaded again, and since the download source is outside mainland China the download takes a very long time.

   Click Write and enter your sudo password. Then wait ten minutes or so.

## Configuring Wi-Fi and SSH without a screen

Seasoned Linux users are not fond of the desktop, so here is how to configure Wi-Fi and remote SSH access without a display.

After flashing the image, do not remove the SD card. Go into the card's `boot` path; on a Windows machine the only partition you can see should be `boot`.

Create a file named `wpa_supplicant.conf` and write the following content into it:

```conf
country=CN
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="wifi ssid"
psk="password"
key_mgmt=WPA-PSK
priority=1
}
```

Still under the `boot` path, create an empty file named ssh:

```shell
$ touch ssh
```

Note that it has no extension at all. This file disappears after the Raspberry Pi boots.

Insert the SD card into the Raspberry Pi and power it on. From then on, the Pi connects to the configured Wi-Fi and starts the SSH service at every boot.

## Switching apt to the Tsinghua mirror

```shell
$ sudo nano /etc/apt/sources.list
```

Comment out the default source and add the following two lines:

```
deb http://mirrors.tuna.tsinghua.edu.cn/raspbian/raspbian/ buster main non-free contrib
deb-src http://mirrors.tuna.tsinghua.edu.cn/raspbian/raspbian/ buster main non-free contrib
```

Press `ctrl + o` to save and `ctrl + x` to exit.

The Aliyun mirror is an alternative here:

```
deb http://mirrors.aliyun.com/raspbian/raspbian/ buster main contrib non-free rpi
deb-src http://mirrors.aliyun.com/raspbian/raspbian/ buster main contrib non-free rpi
```

```shell
$ sudo nano /etc/apt/sources.list.d/raspi.list
```

Comment out the original content and replace it with:

```
deb http://mirrors.tuna.tsinghua.edu.cn/raspberrypi/ buster main ui
deb-src http://mirrors.tuna.tsinghua.edu.cn/raspberrypi/ buster main ui
```

Finally run

```shell
sudo apt-get update
```

to refresh the sources.

## Changing the default python path and switching the pip mirror

The Raspberry Pi 4B ships with both Python 2 and Python 3 installed.

Check the default Python version: `python --version`

Check the installed python paths: `whereis python`

Check where the default python link points: `which python`

Remove the old python symlink: `sudo rm /usr/bin/python`

Create a new python symlink: `sudo ln -s /usr/bin/python3.7 /usr/bin/python`

Add /usr/bin to the environment PATH: `PATH=/usr/bin:$PATH`

Reopen the terminal or run `source ~/.bashrc`, then check the python version again.

Switch pip to the Tsinghua mirror with a single command:

```bash
$ pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
```

## Overclocking and overvolting

The Raspberry Pi 4B CPU defaults to 700 MHz but allows overclocking up to 2 GHz; the stock voltage can be raised similarly.

First upgrade the firmware and the operating system:

```shell
sudo rpi-update
sudo apt dist-upgrade
```

Then, as root (sudo), edit the /boot/config.txt file and add the following lines:

```shell
force_turbo=0
arm_freq=2000
over_voltage=6
```

This raises the maximum frequency to 2.0 GHz and the maximum voltage by 6 × 0.025 volts. If `force_turbo` is set to 1, the voltage can be raised further still, but doing so changes a fuse inside the chip and may damage the board.

## System configuration

SSH into the RasPi system and run `sudo raspi-config` to enter the Raspberry Pi configuration UI; navigate with the arrow keys $\uparrow\downarrow\leftarrow\rightarrow$ (for the arrow-key markup, see the post [一些Markdown语法记录](https://www.ouc-liux.cn/2021/04/27/Markdown-Grammar/)).

<div align=center><img src="https://raw.githubusercontent.com/OUCliuxiang/OUCliuxiang.github.io/master/img/raspi/raspi04.png"></div>

RasPi 3 and RasPi 4 differ slightly, but only in the ordering and nesting of the menu items. If your menus differ from the screenshots, look around carefully and you will find the item you need.

### Change User Password

Change the password as prompted. The default username is pi and the default password is raspberry (no need to change it again if you already set it at first boot). This password is used for remote SSH login, VNC remote desktop, and whenever administrator (root) privileges are required.

### Interfacing Options

<div align=center><img src="https://raw.githubusercontent.com/OUCliuxiang/OUCliuxiang.github.io/master/img/raspi/raspi05.png"></div>

Enable Camera, SSH, VNC, Serial, and Remote GPIO — all features we will use later.

* Camera: camera module
* SSH: remote ssh communication and login
* VNC: VNC remote desktop login
* Serial: serial-port control
* Remote GPIO: remote GPIO pin control

### Advanced Options

<div align=center><img src="https://raw.githubusercontent.com/OUCliuxiang/OUCliuxiang.github.io/master/img/raspi/raspi06.png"></div>

* Select Expand Filesystem to extend the root filesystem across the whole SD card and make full use of its capacity
* Select Overscan to display across the whole screen
* Select Audio and choose Force 3.5mm ('headphone' jack) so that sound is output through the headphone jack
* Select Resolution and keep the default so the resolution is adjusted automatically to the display

The remaining items need no changes; keep the defaults. When finished, go back and select Finish. A dialog will ask whether to restart; press Enter to reboot now, or let the settings take effect at the next boot.
30.78453
272
0.675341
yue_Hant
0.512627
4f673e09a33d061c5ce4aa74aa1b8f2cbb374aa9
243
md
Markdown
README.md
JacobLibby/webscraper_jobBoards
e153952381d56bc81fc6e8966ef4b3ca281cfd9f
[ "CC0-1.0" ]
null
null
null
README.md
JacobLibby/webscraper_jobBoards
e153952381d56bc81fc6e8966ef4b3ca281cfd9f
[ "CC0-1.0" ]
null
null
null
README.md
JacobLibby/webscraper_jobBoards
e153952381d56bc81fc6e8966ef4b3ca281cfd9f
[ "CC0-1.0" ]
null
null
null
# webscraper_jobBoards A JavaScript webscraper that scrapes current openings from company job boards # References Online Tutorial: https://www.youtube.com/watch?v=-3lqUHeZs_0&ab_channel=CodewithAniaKub%C3%B3w by Code with Ania Kubów
16.2
118
0.798354
eng_Latn
0.565494
4f6772289ae3d782c2c92e502c76ffbc2f2b30d1
181
md
Markdown
online/exp0/README.md
justcatthefish/agh-ctf-2019
498940ba3c92b1f77de0ef197eece956801a45f6
[ "MIT" ]
null
null
null
online/exp0/README.md
justcatthefish/agh-ctf-2019
498940ba3c92b1f77de0ef197eece956801a45f6
[ "MIT" ]
null
null
null
online/exp0/README.md
justcatthefish/agh-ctf-2019
498940ba3c92b1f77de0ef197eece956801a45f6
[ "MIT" ]
null
null
null
# [easy] exp0 ### Exploit challenge #### Task message What does execstack mean? #### Stats Number of solves at the event: 4 Solved by: * pbms * 0x4B1D * WietWarriors * 0xbeefc0de
12.928571
32
0.696133
eng_Latn
0.871264
4f6867bbc38ed4a59311e33580179d8b95b9363f
4,606
md
Markdown
docs-archive-a/2014/sql-server/install/user-defined-functions-are-not-allowed-in-system-function-schema.md
v-alji/sql-docs-archive-pr.pt-br
2791ff90ec3525b2542728436f5e9cece0a24168
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs-archive-a/2014/sql-server/install/user-defined-functions-are-not-allowed-in-system-function-schema.md
v-alji/sql-docs-archive-pr.pt-br
2791ff90ec3525b2542728436f5e9cece0a24168
[ "CC-BY-4.0", "MIT" ]
1
2021-11-25T02:18:31.000Z
2021-11-25T02:26:28.000Z
docs-archive-a/2014/sql-server/install/user-defined-functions-are-not-allowed-in-system-function-schema.md
v-alji/sql-docs-archive-pr.pt-br
2791ff90ec3525b2542728436f5e9cece0a24168
[ "CC-BY-4.0", "MIT" ]
2
2021-09-29T08:52:22.000Z
2021-10-13T09:16:56.000Z
--- title: Funções definidas pelo usuário não são permitidas em system_function_schema | Microsoft Docs ms.custom: '' ms.date: 03/06/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.technology: database-engine ms.topic: conceptual helpviewer_keywords: - system functions [SQL Server] - user-defined functions [SQL Server], system ms.assetid: 3cb54053-ef65-4558-ae96-8686b6b22f4f author: mashamsft ms.author: mathoma ms.openlocfilehash: 7242f9fda74288a2b7354ac0550ff4966e05c555 ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 08/04/2020 ms.locfileid: "87686026" --- # <a name="user-defined-functions-are-not-allowed-in-system_function_schema"></a>Funções definidas pelo usuário não são permitidas no system_function_schema O supervisor de atualização detectou funções definidas pelo usuário que são de Propriedade do usuário não documentado **system_function_schema**. Você não pode criar uma função de sistema definida pelo usuário especificando esse usuário. O nome de usuário **system_function_schema** não existe e a ID de usuário que está associada a esse nome (UID = 4) é reservada para o esquema **Sys** e é restrita somente ao uso interno. ## <a name="component"></a>Componente [!INCLUDE[ssDE](../../includes/ssde-md.md)] ## <a name="description"></a>Descrição O armazenamento de objeto de sistema mudou da seguinte maneira: - Os objetos do sistema são armazenados no banco de dados de **recursos** somente leitura e as atualizações diretas do objeto do sistema não são permitidas. Os objetos do sistema aparecem logicamente no esquema **Sys** de cada banco de dados. Isso mantém a habilidade para invocar funções de sistema de qualquer banco de dados especificando um nome de função de uma parte. Por exemplo, a instrução `SELECT * FROM fn_helpcollations()` pode ser executada de qualquer banco de dados. - A **system_function_schema** de usuário não documentada foi removida. 
- A ID de usuário associada a **system_function_schema** (UID = 4) é reservada para o esquema **Sys** e é restrita somente ao uso interno. Essas alterações têm o seguinte efeito em funções de sistema definidas pelo usuário: - Instruções DDL (linguagem de definição de dados) que fazem referência a **system_function_schema** falharão. Por exemplo, a instrução `CREATE FUNCTION system` _ `function` \_ `schema.fn` \_ `MySystemFunction` ... Não terá sucesso. - Depois de atualizar para o [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] , os objetos existentes que pertencem a **system_function_schema** estão contidos apenas no esquema **Sys** do banco de dados **mestre** . Como os objetos do sistema não podem ser modificados, essas funções nunca podem ser alteradas ou removidas do banco de dados **mestre** . Além disso, essas funções não podem ser invocadas a partir de outros bancos de dados pela especificação apenas de um nome de função de uma parte. ## <a name="corrective-action"></a>Ação corretiva Antes da atualização, faça o seguinte: 1. Altere a propriedade das funções definidas pelo usuário existentes para **dbo** usando o procedimento armazenado do sistema **sp_changeobjectowner** . 2. Considere renomear a função de forma que ela não use o prefixo ‘fn_’. Isso evitará potenciais conflitos de nome com atuais ou futuras funções de sistema. 3. Adicione uma cópia das funções modificadas a todos os bancos de dados que as usam. 4. Substitua referências a **system_function_schema** com **dbo** em todos os scripts que contêm instruções DDL da função definida pelo usuário. 5. Modifique os scripts que chamam essas funções para usar o nome dbo de duas partes **.** _function_name_ou o nome de três partes _database_name_**.** dbo. *function_name*. 
For more information, see the following topics in SQL Server Books Online:

- "sp_changeobjectowner"
- "User-schema separation"
- "Resource database"

## <a name="see-also"></a>See Also

[SQL Server 2014 Upgrade Advisor &#91;new&#93;](sql-server-2014-upgrade-advisor.md)
[Database Engine upgrade issues](../../../2014/sql-server/install/database-engine-upgrade-issues.md)
[Remove statements that modify system objects](../../../2014/sql-server/install/remove-statements-that-modify-system-objects.md)
[Remove statements that drop system objects](../../../2014/sql-server/install/remove-statements-that-drop-system-objects.md)
- New editor: https://selimanac.github.io/behavior3/

# BEHAVIOR3EDITOR

![interface preview](preview.png)

**Behavior3 Editor** is the official visual editor for the **Behavior3** libraries. It can be accessed online, or you can download it to handle local projects.

- Info: http://behavior3.com
- Editor: http://editor.behavior3.com

## Why Behavior3 Editor?

Why should you use b3editor? What makes it different from other editors? Can it compete with commercial alternatives? Check out some characteristics of Behavior3 Editor:

- **Open Source Software**: under the MIT license, you can use this software freely, adapt it to your needs, and even use a specialized internal version in your company. You can also contribute bug fixes, suggestions, and patches to make it better.
- **Open Format**: b3editor can export the modeled trees to JSON files, following an open format. If there is no official reader for your favorite language yet, you can develop your own library and use the trees created here.
- **Formality**: the editor works on the basis created by Behavior3JS, which in turn is based on a formal description of behavior trees. Thus, the editor provides a stable solution for modeling agents for your games or other applications, such as robotics and simulations in general.
- **Focus on Usability**: intuitiveness is the key word for b3editor. We focus on providing an easy, clean, and intuitive tool for programmers and non-programmers. If something is obscure or too difficult to use, report it immediately!
- **Minimalist, but Functional**: b3editor follows a minimalist style, trying to reduce the amount of non-essential information presented on the screen. We focus on the important things: designing behavior trees.
- **Customizable**: create your own node types and customize node instances individually. Create several projects and trees, change titles, and add properties.
- **Big Projects Ahead**: we are working towards a collaborative tool in order to provide an awesome editor for big projects involving several designers working together.
- **Does not depend on other tools/editors/engines.**

## Main features

- **Custom Nodes**: you can create your own node types inside one of the four basic categories: *composite*, *decorator*, *action*, or *condition*.
- **Individual Node Properties**: you can modify node titles, descriptions, and custom properties.
- **Manual and Auto Organization**: organize by dragging nodes around, or just type "a" to auto-organize the whole tree.
- **Create and Manage Multiple Trees**: you can create and manage an unlimited number of trees.
- **Import and Export to JSON**: export your project, tree, or nodes to JSON format, and import them back. Use the JSON in your own custom library or tool. You decide.

## Limitations

Nothing is perfect =(. Behavior3 Editor focuses on Chrome (thus working pretty well on Opera too), so it has some incompatibilities with Firefox, such as the image preview lag when dragging to create a node for the first time, and the ugly scroll bar inside the panels. Not tested on IE!

## Looking for Behavior Tree Libraries?

See http://behavior3.com for a complete list of libraries. *If you have implemented a library compatible with the Behavior3 schematics, tell me and I will link it there.*

## Want to Contribute?

Take a look at the issue list, suggest new features, report bugs, send me pull requests, write documentation and tutorials. There are many ways to contribute; do what you know and you can make Behavior3 Editor better!

- More info: http://behavior3.com/donation
# UnityAccessModifierTest

Test how C# access modifiers work in a Unity3D project.
# 0.4.2

- Bumped dependency versions

# 0.4.1

- Fixed issue not allowing JSON messages to be published correctly using SNS.

# 0.4.0

- Added accepted response method
- Added service package
- Added Simple Notification Service
- Added Simple Email Service
- Key Management Service overhaul

# 0.3.2

- Fixed bug causing `PaginationDetails` to not ignore unexpected attributes

# 0.3.1

- Improved error logging
- Fixed bug that was causing `resolve_query_params` to return a dict instead of the resolved model object

# 0.3.0

- Fixed bug causing `None` to be returned when no default was provided for `encrypted_env_var`.
- Added pagination details to response metadata
- Removed error module
  - `error_handler` now resides in the response module
- Restructured `ErrorDetails`
  - Renamed `field_name` to `location`
  - Removed `FieldErrorDetails`

# 0.2.2

- Fixed bug in the `encrypted_env_var()` function that was using the default value as the found environment variable.

# 0.2.1

- Fixed bug that caused the library to fail when no attrs were passed to the `encrypted_env_var()` function
- `connection_string()` function now accepts a default

# 0.2.0

- Added `env_var` methods
- Added decryption utils for environment variables
- Added dependencies to setup.py
- Added modules to `__init__`. Modules can now be used by simply importing pyocle.

# 0.1.0

- Created initial config module
- Created initial error module
- Created initial form module
- Created initial response module
- Created initial serialization module
- Created initial service module
---
title: "Collect Full Details for virtual users for load testing in Visual Studio"
ms.date: 10/19/2016
ms.topic: conceptual
helpviewer_keywords:
  - "load tests, virtual user activity chart, configuring"
  - "virtual user activity chart, configuring"
ms.assetid: cb22e43b-af4d-4e09-9389-3c3fa00786f7
author: gewarren
ms.author: gewarren
manager: douge
ms.prod: visual-studio-dev15
ms.technology: vs-ide-test
---
# How to: Configure load tests to collect full details to enable virtual user activity in test results

To use the **Virtual User Activity Chart** for your load test, you must configure your load test to collect full details. To do this, select the **All Individual Details** setting for the **Timing Details Storage** property associated with your load test. In this mode, your load test will collect detailed information about every test, page, and transaction.

If you are upgrading a project from a previous version of a Visual Studio load test, use the steps in the following procedure to enable full detail collection.

The **Timing Details Storage** property can be set to any of the following options:

- **All Individual Details:** Collects and stores individual timing data for each test, transaction, and page issued during the test.

  > [!NOTE]
  > The **All Individual Details** option must be selected to enable virtual user data information in your load test results.

- **None:** Does not collect any individual timing details. However, the average values are still available.

- **Statistics Only:** Stores individual timing data, but only as percentile data. This saves space resources.

## To configure the timing details storage property in a load test

1. Open a load test in the **Load Test Editor**.

2. Expand the **Run Settings** node in the load test.

3. Choose the run settings that you want to configure, for example, **Run Settings1 [Active]**.

4. Open the **Properties** window.
   On the **View** menu, select **Properties Window**.

5. Under the **Results** category, choose the **Timing Details Storage** property and select **All Individual Details**.

After you have configured the **All Individual Details** setting for the **Timing Details Storage** property, you can run your load test and view the **Virtual User Activity Chart**. For more information, see [How to: Analyze what virtual users are doing during a load test](../test/how-to-analyze-virtual-user-activity-during-a-load-test.md).

## See also

- [Analyzing virtual user activity in the Details view](../test/analyze-load-test-virtual-user-activity-in-the-details-view.md)
- [Walkthrough: Using the Virtual User Activity Chart to isolate issues](../test/walkthrough-use-the-virtual-user-activity-chart-to-isolate-issues.md)
# Provider

In order to use a Supabase client, you need to provide it via the [Context API](https://reactjs.org/docs/context.html). This may be done with the help of the Provider export.

```tsx
import { createClient } from '@supabase/supabase-js'
import { Provider } from 'react-supabase'

const client = createClient('https://xyzcompany.supabase.co', 'public-anon-key')

const App = () => (
  <Provider value={client}>
    <YourRoutes />
  </Provider>
)
```

All examples and code snippets from now on assume that they are wrapped in a `Provider`.
---
title: Florida lawmakers counter the Cuba-US
tags:
  - Mar 2007
---

Florida lawmakers counter the Cuba-U.S. oil drilling bill.

Newspapers: **Miami Morning News or The Miami Herald**

Page: **3**, Section: **A**
---
layout: post
title: Enumerable IO Streams
keywords: IO, enumerable, streaming, API
---

I've been recently working on CSV generation with ruby in my day job, in order to solve a bottleneck we found because of a DB table whose number of rows grew too large for the infrastructure to handle our poorly optimized code. This led me on a journey of discovery on how to use and play with raw ruby APIs to solve a complex problem.

## The problem

So, let's say we have a `User` ActiveRecord class, and our routine looks like this:

```ruby
class UsersCSV
  def initialize(users)
    @users = users
  end

  def generate
    CSV.generate(force_quotes: true) do |csv|
      csv << %w[id name address has_dog]
      @users.find_each do |user|
        csv << [
          user.id,
          user.name,
          user.address.to_addr,
          user.dog.present?
        ]
      end
    end
  end
end

payload = UsersCSV.new(User.relevant_for_this_csv).generate
aws_bucket.upload(body: StringIO.new(payload))
```

The first thing you might ask is "why are you not using [sequel](https://github.com/jeremyevans/sequel)?". That is a valid question, but for the purpose of this article, let's assume we're stuck with active record (we really kind of are).

The second might be "dude, that address seems to be a relationship, isn't that a classic N+1 no-brainer?". It kind of is, and good for you to notice; I'll get back to it later.

But the third thing is "dude, what happens if you have, like, a million users, and you're generating a CSV for all of them?". And touché! That's what I wanted you to focus on.

You see, this is a standard example you find all over the internet on how to generate CSV data using the `csv` standard library, so it's not like I'm doing something out of the ordinary. I could rewrite the generation to use `CSV.open("path/to/file", "wb")` to pipe the data to a file; however, if I can send the data to the AWS bucket in chunks, why can't I just pipe it as I generate?
There are many ways to do this, but this got me thinking, and I came up with a solution using the `Enumerable` module.

## Enumerable to the rescue

I'll change my code to enumerate the CSV rows as they're generated:

```ruby
class UsersCSV
  include Enumerable

  def initialize(users)
    @users = users
  end

  def each
    yield line(%w[id name address has_dog])
    @users.find_each do |user|
      yield line([
        user.id,
        user.name,
        user.address.to_addr,
        user.dog.present?
      ])
    end
  end

  private

  def line(row)
    # CSV.generate_line formats a single row as a CSV string;
    # CSV.generate (as originally written) expects a string or a block.
    CSV.generate_line(row, force_quotes: true)
  end
end

# I can eager-load the payload
payload = UsersCSV.new(User.relevant_for_this_csv).to_a.join

# or consume it line by line (to_enum builds an external enumerator
# over the block-taking #each)
csv = UsersCSV.new(User.relevant_for_this_csv).to_enum(:each)
headers = csv.next
first_row = csv.next
# ...
```

But this by itself doesn't solve my issue. If you look at the first example, specifically the line:

```ruby
aws_bucket.upload(body: StringIO.new(payload))
```

I'm wrapping the payload in a `StringIO`. That's because my AWS client expects an IO interface. And Enumerables aren't IOs.

## The IO interface

An IO-like object must implement a few methods to be usable by functions which expect the IO interface. In other, more-ruby-words, it must "quack like an IO". And how does an IO quack? Here are a few examples:

* An IO reader must implement `#read(size, buffer)`
* An IO writer must implement `#write(data)`
* A duplex IO must implement both
* A closable IO must implement `#eof?` and `#close`
* A rewindable IO must implement `#rewind`
* IO wrappers must implement `#to_io`

You know some of ruby's classes which implement a few (some, all) of these APIs: `File`, `TCPSocket`, and the aforementioned `StringIO`.

A few ruby APIs expect arguments which implement the IO interface, but aren't necessarily instances of IO:

* `IO.select` can be passed IO wrappers
* `IO.copy_stream(src, dst)` takes an IO reader and an IO writer as arguments.

## Enter Enumerable IO

So, what if our CSV generator can turn itself into a readable IO?
I could deal with this behaviour directly in my routine, but I'd argue that this should be a feature provided by `Enumerable`, i.e. an enumerable could also be cast into an IO. The expectation is risky: the yield-able data must be strings, for example. But for now, I'll just monkey-patch the `Enumerable` module:

```ruby
# practical example of a feature proposed to ruby core:
# https://bugs.ruby-lang.org/issues/15549
module Enumerable
  def to_readable_stream
    Reader.new(self, size)
  end

  class Reader
    attr_reader :bytesize

    def initialize(enum, size = nil)
      @enum = enum
      @bytesize = size
      @buffer = "".b
    end

    def read(bytes, buffer = nil)
      # to_enum builds an external enumerator over the block-taking #each
      @iterator ||= @enum.to_enum(:each)
      buffer ||= @buffer
      buffer.clear

      if @rest
        buffer << @rest
        @rest.clear
      end

      while buffer.bytesize < bytes
        begin
          buffer << @iterator.next
        rescue StopIteration
          # IO#read returns nil at EOF
          return if buffer.empty?
          break
        end
      end

      @rest = buffer.slice!(bytes..-1)
      buffer
    end
  end
end
```

With this extension, I can do the following:

```ruby
csv = UsersCSV.new(User.relevant_scope_for_this_csv).to_readable_stream
aws_bucket.upload(body: csv)
```

And voilà! Enumerable and IO APIs for the win!

Using this solution, there's a performance benefit while using clean ruby APIs. The main benefit is that the payload doesn't need to be kept in memory until the whole CSV is generated, so we get constant memory usage (in our case, this leak was exacerbated by that N+1 problem; the longer you wait for the rows, the longer the CSV payload was being retained).
## Caveat

Depending on what you're using to upload the file, you might still need to buffer to a file first; at work, we use `fog` to manage our S3 uploads, which requires IO-like request bodies to implement `rewind`, therefore the easy way out is to buffer to a tempfile first:

```ruby
csv = UsersCSV.new(User.relevant_scope_for_this_csv).to_readable_stream
file = Tempfile.new
IO.copy_stream(csv, file)
file.rewind
fog_wrapper.upload(file)
```

## Conclusion

There are many ways to skin this cat, but I argue that this way is the easiest to maintain: you can tell any developer that their CSV/XML/${insert format here} generator must implement `#each` and yield formatted lines, and then you just have to pass it to your uploader. You ensure that the payload will not grow linearly, and no one will ever have to read another tutorial on "How to write CSV files in ruby" ever again.

This doesn't mean that all of our problems are solved: as the number of records grows, so does the time needed to generate them. And it will become a bottleneck. So how can you guarantee that the time needed to generate the data won't derail? I'll let you know when I have the answer.
---
title: '#howto - Installing and using the autojump command'
published: 2020-08-17
layout: post
author: Alessandro Zangrandi
author_github: AlexzanDev
tags:
  - python
  - github
  - bash
---

`autojump` is a terminal command similar to `cd`. Unlike the latter, it helps speed up navigation between directories by keeping a history of the ones visited before. If there are several directories with the same name, autojump keeps a history that favors the one with the most visits.

In this guide we will see how to install and use `autojump`.

## Installing autojump

### Debian, Ubuntu and derivatives

To install `autojump` on **Ubuntu, Debian** and derivatives we can use `apt`:

```bash
apt install autojump
```

On Debian-derived distributions (such as Ubuntu) we have to add a line of text to the *.bashrc* (or *.zshrc*) file:

```bash
. /usr/share/autojump/autojump.sh
```

and *source* one of the two files:

```bash
# Bash
source .bashrc

# Zsh
source .zshrc
```

### CentOS, RHEL and derivatives

To install `autojump` on **CentOS, RHEL** and derivatives we can use `yum` instead:

```bash
yum install autojump
```

### Arch Linux

`autojump` is not in the default **Arch Linux** repositories, but in the **AUR** (Arch User Repository). To install it on Arch Linux, we have to use `yay`, which we have already covered in a [dedicated guide](https://linuxhub.it/articles/howto-introduzione-alla-aur-e-aur-helper#title2).

```bash
yay -S autojump
```

### Manual installation

`autojump` can also be installed manually by downloading the Git repository and running the installation script written in **Python**.
Let's make sure we have Python installed (Python 2.6+ and Python 3.0+ are supported):

```bash
python -V
```

clone the repo from GitHub:

```bash
git clone https://github.com/wting/autojump
```

enter the directory and run the Python script:

```bash
cd autojump
python install.py
```

## Using the command

Both the `autojump` command and `j` can be used. The latter, which we will use in this guide, is preferred for convenience.

Before jumping to any directory, let's see how much weight the installation has (and view the history) with the *-s* argument:

```bash
j -s
```

since we haven't visited any directory since we installed the program, the output will be similar to the following:

```bash
________________________________________
0: total weight
0: number of entries
0.00: current directory weight

data: /home/alessandro/.local/share/autojump/autojump.txt
```

To try `autojump`, let's create an example directory and visit it, together with a subdirectory:

```bash
mkdir test
mkdir test/prova
cd test
cd prova
cd --
```

Back at the home directory, let's look at the `autojump` history again with:

```bash
j -s
```

this time the output should be similar to the following:

```bash
10.0: /home/alessandro/test
10.0: /home/alessandro/test/prova
________________________________________
20: total weight
2: number of entries
0.00: current directory weight

data: /home/alessandro/.local/share/autojump/autojump.txt
```

### Jumping to a directory

To **quickly jump to a directory** with `autojump` we can use `j` followed by the name of the chosen directory (it must be in the history):

```
j test
```

### Jumping to a subdirectory

To **jump to a subdirectory** we can instead use `jc` (*c* stands for child directory), followed by the name of the directory.
It is not necessary to enter the absolute path, only its name:

```bash
jc prova
```

### Jumping to a directory with multiple arguments

Don't remember the exact name of a directory you want to go to? No problem, just enter **only part** of its name. For example, we don't know what "test" is called? To enter it we can use:

```bash
j te
j est
j es
j tet
```

and so on. If we also want to enter the subdirectory we can do so, as a multiple argument:

```bash
j test pro
```

In this case as well the names of the two directories can be partial, and `autojump` will recognize them.

### Opening a directory with a file manager

`autojump` lets us open a directory with a file manager simply with the `jo` command, which can also be combined with `jc` if necessary:

```bash
jco prova
```

### Removing deleted directories from the history

When a directory is deleted, `autojump` still remembers its existence. To delete it **from the history as well**, we can proceed as follows.

Delete the directory from our *home* directory:

```
rm -rf test/prova
```

and use the *--purge* argument:

```bash
j --purge
```

The deleted directory will no longer be in the `autojump` history.

For more information, doubts and clarifications, don't hesitate to ask questions on our [Telegram group](https://t.me/linuxpeople).
The app was created to streamline the Prep for Prep Alumni Council Election.

## Getting Started

Install Homebrew (http://brew.sh/)

Execute the following commands (for any brew installation, check the Caveats section for further instructions):

```
brew install git
brew install node
brew install mongodb
npm install -g bower
npm install -g grunt-cli
```

At the pfpaa root directory, execute the following to install dependencies:

```
npm install
```

If you installed mongo through Homebrew, execute the following command to run it in the background:

```
mongod --config /usr/local/etc/mongod.conf
```

In a separate terminal, execute the following to run the server:

```
grunt
```
# Personal Portfolio

> For the starter configuration I'm using [Template-NATH](https://github.com/ArielCalisaya/Template-NATH)

## Preview

Preview the example live on (working on it...)
# G-2150: Avoid comparisons with NULL value, consider using IS [NOT] NULL.

!!! bug "Blocker"
    Portability, Reliability

## Reason

The `null` value can cause confusion both from the standpoint of code review and code execution. You must always use the `is null` or `is not null` syntax when you need to check if a value is or is not `null`.

## Example (bad)

``` sql
declare
   l_value integer;
begin
   if l_value = null then
      null;
   end if;
end;
/
```

## Example (good)

``` sql
declare
   l_value integer;
begin
   if l_value is null then
      null;
   end if;
end;
/
```
# Authentication in the API

To perform operations via the API, you need to authenticate using your account:

{% list tabs %}

- Yandex account

  1. [Get an IAM token](../../iam/operations/iam-token/create.md).
  2. {% include [iam-token-usage](../../_includes/iam-token-usage.md) %}

- Service accounts

  The service supports two authentication methods based on service accounts:

  * Using [API keys](../../iam/concepts/authorization/api-key.md).

    {% include [api-keys-disclaimer](../../_includes/iam/api-keys-disclaimer.md) %}

    1. [Get an API key](../../iam/operations/api-key/create.md).
    2. Specify the received API key when accessing Yandex.Cloud resources via the API. Pass the API key in the `Authorization` header in the following format:

       ```
       Authorization: Api-Key <API key>
       ```

  * Using an [IAM token](../../iam/concepts/authorization/iam-token.md):

    1. [Get an IAM token](../../iam/operations/iam-token/create-for-sa.md).
    2. {% include [iam-token-usage](../../_includes/iam-token-usage.md) %}

- Federated account

  1. [Get an IAM token](../../iam/operations/iam-token/create-for-federation.md).
  2. {% include [iam-token-usage](../../_includes/iam-token-usage.md) %}

{% endlist %}
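As a sketch of assembling the `Authorization` header from a shell script (the key value below is a made-up placeholder, and the commented request line is only an assumption about how you might attach the header):

```shell
# Hypothetical placeholder key; substitute your real API key.
API_KEY="AQVN-example-key"
AUTH_HEADER="Authorization: Api-Key ${API_KEY}"

# A real request would then attach the header, e.g.:
#   curl -H "${AUTH_HEADER}" <service endpoint URL>
echo "${AUTH_HEADER}"
```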
---
title: diskcomp
description: Reference topic for the diskcomp command, which compares the contents of two floppy disks.
ms.prod: windows-server
ms.technology: manage-windows-commands
ms.topic: article
ms.assetid: 4f56f534-a356-4daa-8b4f-38e089341e42
author: coreyp-at-msft
ms.author: coreyp
manager: dongill
ms.date: 10/16/2017
---
# diskcomp

Compares the contents of two floppy disks. If used without parameters, **diskcomp** uses the current drive to compare both disks.

## Syntax

```
diskcomp [<drive1>: [<drive2>:]]
```

### Parameters

| Parameter | Description |
| --------- | ----------- |
| `<drive1>` | Specifies the drive containing one of the floppy disks. |
| `<drive2>` | Specifies the drive containing the other floppy disk. |
| /? | Displays help at the command prompt. |

#### Remarks

- The **diskcomp** command works only with floppy disks. You cannot use **diskcomp** with a hard disk. If you specify a hard disk drive for *drive1* or *drive2*, **diskcomp** displays the following error message:

    ```
    Invalid drive specification
    Specified drive does not exist or is nonremovable
    ```

- If all tracks on the two disks being compared are the same (it ignores a disk's volume number), **diskcomp** displays the following message:

    ```
    Compare OK
    ```

    If the tracks aren't the same, **diskcomp** displays a message similar to the following:

    ```
    Compare error on side 1, track 2
    ```

    When **diskcomp** completes the comparison, it displays the following message:

    ```
    Compare another diskette (Y/N)?
    ```

    If you press **Y**, **diskcomp** prompts you to insert the disk for the next comparison. If you press **N**, **diskcomp** stops the comparison.

- If you omit the *drive2* parameter, **diskcomp** uses the current drive for *drive2*. If you omit both drive parameters, **diskcomp** uses the current drive for both. If the current drive is the same as *drive1*, **diskcomp** prompts you to swap disks as necessary.
- If you specify the same floppy disk drive for *drive1* and *drive2*, **diskcomp** compares them by using one drive and prompts you to insert the disks as necessary. You might have to swap the disks more than once, depending on the capacity of the disks and the amount of available memory.

- **Diskcomp** can't compare a single-sided disk with a double-sided disk, nor a high-density disk with a double-density disk. If the disk in *drive1* isn't of the same type as the disk in *drive2*, **diskcomp** displays the following message:

  ```
  Drive types or diskette types not compatible
  ```

- **Diskcomp** doesn't work on a network drive or on a drive created by the **subst** command. If you attempt to use **diskcomp** with a drive of any of these types, **diskcomp** displays the following error message:

  ```
  Invalid drive specification
  ```

- If you use **diskcomp** with a disk that you made by using **copy**, **diskcomp** might display a message similar to the following:

  ```
  Compare error on side 0, track 0
  ```

  This type of error can occur even if the files on the disks are identical. Although **copy** duplicates information, it doesn't necessarily place it in the same location on the destination disk.

- **diskcomp** exit codes:

  | Exit code | Description |
  | --------- | ----------- |
  | 0 | Disks are the same |
  | 1 | Differences were found |
  | 3 | Hard error occurred |
  | 4 | Initialization error occurred |

  To process exit codes that are returned by **diskcomp**, you can use the *ERRORLEVEL* environment variable on the **if** command line in a batch program.

## Examples

If your computer has only one floppy disk drive (for example, drive A), and you want to compare two disks, type:

```
diskcomp a: a:
```

**Diskcomp** prompts you to insert each disk, as needed.
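The exit-code table above can also be consumed from a script. The following is a minimal, illustrative Python sketch only: it maps the documented codes to messages and does not invoke **diskcomp** itself.

```python
# Map the documented diskcomp exit codes to human-readable outcomes.
DISKCOMP_EXIT_CODES = {
    0: "Disks are the same",
    1: "Differences were found",
    3: "Hard error occurred",
    4: "Initialization error occurred",
}

def describe_exit_code(code: int) -> str:
    """Return the documented meaning of a diskcomp exit code."""
    return DISKCOMP_EXIT_CODES.get(code, "Undocumented exit code")

print(describe_exit_code(0))  # Disks are the same
```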
The following example illustrates how to process a **diskcomp** exit code in a batch program by using the *ERRORLEVEL* environment variable on the **if** command line:

```
rem Checkout.bat compares the disks in drive A and B
echo off
diskcomp a: b:
if errorlevel 4 goto ini_error
if errorlevel 3 goto hard_error
if errorlevel 2 goto break
if errorlevel 1 goto no_compare
if errorlevel 0 goto compare_ok
:ini_error
echo ERROR: Insufficient memory or command invalid
goto exit
:hard_error
echo ERROR: An irrecoverable error occurred
goto exit
:break
echo You just pressed CTRL+C to stop the comparison
goto exit
:no_compare
echo Disks are not the same
goto exit
:compare_ok
echo The comparison was successful; the disks are the same
goto exit
:exit
```

## Additional References

- [Command-Line Syntax Key](command-line-syntax-key.md)
---
title: Create a lab by using Azure DevTest Labs and an Azure Resource Manager template
description: In this quickstart, you use an ARM template (Azure Resource Manager template) to create a lab in Azure DevTest Labs. A lab admin sets up the lab, creates VMs in the lab, and configures policies.
ms.topic: quickstart
ms.custom: subject-armqs
ms.date: 06/26/2020
ms.openlocfilehash: 2b825b4d4485f401199556b6faaef0017f583cc1
ms.sourcegitcommit: 772eb9c6684dd4864e0ba507945a83e48b8c16f0
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 03/19/2021
ms.locfileid: "91461192"
---
# <a name="quickstart-set-up-a-lab-by-using-azure-devtest-labs-arm-template"></a>Quickstart: Set up a lab by using an Azure DevTest Labs ARM template

In this quickstart, you use an ARM template (Azure Resource Manager template) to create a lab with a Windows Server 2019 Datacenter VM.

[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]

In this quickstart, you perform the following tasks:

> [!div class="checklist"]
> * Review the template
> * Deploy the template
> * Verify the template
> * Clean up resources

If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.

[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-dtl-create-lab-windows-vm%2Fazuredeploy.json)

## <a name="prerequisites"></a>Prerequisites

If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.

## <a name="review-the-template"></a>Review the template

The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-dtl-create-lab-windows-vm/).

:::code language="json" source="~/quickstart-templates/101-dtl-create-lab-windows-vm/azuredeploy.json":::

The resources defined in the template are:
- [**Microsoft.DevTestLab/labs**](/azure/templates/microsoft.devtestlab/labs)
- [**Microsoft.DevTestLab labs/virtualnetworks**](/azure/templates/microsoft.devtestlab/labs/virtualnetworks)
- [**Microsoft.DevTestLab labs/virtualmachines**](/azure/templates/microsoft.devtestlab/labs/virtualmachines)

More sample templates can be found at [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Devtestlab).

## <a name="deploy-the-template"></a>Deploy the template

To run the deployment automatically, click the following button:

[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-dtl-create-lab-windows-vm%2Fazuredeploy.json)

1. Create a **new resource group** so that you can clean it up easily later.
1. Select a **location** for the resource group.
1. Enter a **name for the lab**.
1. Enter a **name for the VM**.
1. Enter a **user name** that can access the VM.
1. Enter a **password** for the user.
1. Select **I agree to the terms and conditions stated above**.
1. Then, select **Purchase**.

   :::image type="content" source="./media/create-lab-windows-vm-template/deploy-template-page.png" alt-text="Deploy template page":::

## <a name="validate-the-deployment"></a>Validate the deployment

1. To check the deployment status, select **Notifications** at the top of the page, and then select **Deployment in progress**.

   :::image type="content" source="./media/create-lab-windows-vm-template/deployment-notification.png" alt-text="Deployment notification":::
2. On the **Deployment - Overview** page, wait until the deployment completes. This operation, especially creating the VM, takes a while to finish. Then select **Go to resource group** or the **name of the resource group**, as shown in the following image.

   :::image type="content" source="./media/create-lab-windows-vm-template/navigate-resource-group.png" alt-text="Navigate to resource group":::
3. The **Resource group** page lists the resources in the resource group. Confirm that a lab of type `DevTest Lab` appears among the resources. Dependent resources in the resource group, such as the virtual network and the virtual machine, are also listed.

   :::image type="content" source="./media/create-lab-windows-vm-template/resource-group-home-page.png" alt-text="Home page of the resource group":::
4. To see the lab's home page, select the lab in the resource list. Confirm that the Windows Server 2019 Datacenter VM appears in the **My virtual machines** list.
In the following image, the **Essentials** section is minimized.

:::image type="content" source="./media/create-lab-windows-vm-template/lab-home-page.png" alt-text="Home page for the lab":::

> [!IMPORTANT]
> Keep this page open, and follow the instructions in the next section to clean up the resources so that you don't incur charges for running the lab and VM in Azure. If you plan to continue with the next tutorial to test access to the VM in the lab, clean up the resources after you finish that tutorial.

## <a name="clean-up-resources"></a>Clean up resources

1. Delete the lab first so that you can delete the resource group. You can't delete a resource group that contains a lab. To delete the lab, select **Delete** on the toolbar.

   :::image type="content" source="./media/create-lab-windows-vm-template/delete-lab-button.png" alt-text="Delete lab button":::
2. On the confirmation page, enter the **lab name**, and then select **Delete**.
3. Wait until the lab is deleted. Select the **bell** icon to see notifications for the delete operation. This process takes a while. After you confirm that the lab is deleted, select **Resource groups** on the breadcrumb menu.

   :::image type="content" source="./media/create-lab-windows-vm-template/confirm-lab-deletion.png" alt-text="Confirm VM deletion in notifications":::
1. On the **Resource groups** page, select **Delete resource group** on the toolbar. On the confirmation page, enter the **resource group name**, and then select **Delete**. Check the notifications to confirm that the resource group was deleted.

   :::image type="content" source="./media/create-lab-windows-vm-template/delete-resource-group-button.png" alt-text="Delete resource group button":::

## <a name="next-steps"></a>Next steps

In this quickstart, you created a lab with a VM. To learn how to access the lab, advance to the next tutorial:

> [!div class="nextstepaction"]
> [Tutorial: Access the lab](tutorial-use-custom-lab.md)
# OFSP Covid-19 forms

After reading [this article](https://www.rts.ch/info/suisse/11175710-les-annonces-de-nouveaux-cas-de-coronavirus-se-font-par-fax.html), we decided to help out by creating a really simple tool that simplifies the way data on __Covid-19__ in Switzerland is collected by the OFSP.

We basically created a web version of the forms [available here](https://www.bag.admin.ch/bag/fr/home/krankheiten/infektionskrankheiten-bekaempfen/meldesysteme-infektionskrankheiten/meldepflichtige-ik/meldeformulare.html) that produces a CSV file as output. The idea is that the reporter attaches the __CSV__ file (or files) to the email sent to the HIN secured address covid-19@hin.infreport.ch.

We hope that this code can be used and extended as much as possible.

## For developers

In order to extend or update the forms, these rules need to be followed:

* HTML element ids need to be prefixed with *covid19_* in order to be transformed into CSV data (the id without that prefix will be used as the header name)
* Date picker elements need to be suffixed with *_date*

Theme generated with [admin.ch styles](https://github.com/swiss/styleguide).
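The id-naming rules above can be sketched in a few lines. The helper below is hypothetical (it is not part of the repository) and only illustrates how a CSV header name could be derived from a prefixed element id; the example id `covid19_symptom_onset_date` is invented for illustration.

```python
from typing import Optional

COVID_PREFIX = "covid19_"

def header_from_id(element_id: str) -> Optional[str]:
    """Derive the CSV header name from an HTML element id, per the naming rules.

    Elements without the covid19_ prefix are not exported to CSV.
    """
    if not element_id.startswith(COVID_PREFIX):
        return None
    return element_id[len(COVID_PREFIX):]

def is_date_picker(element_id: str) -> bool:
    """Date picker elements are suffixed with _date."""
    return element_id.endswith("_date")

print(header_from_id("covid19_symptom_onset_date"))  # symptom_onset_date
```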
# Examples

This directory contains examples that are mostly used for documentation, but can also be run/tested manually via the Terraform CLI.

* **provider/provider.tf** example file for the provider index page
* **data-sources/\<full data source name\>/data-source.tf** example file for the named data source page
* **resources/\<full resource name\>/resource.tf** example file for the resource page
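The layout convention above can be checked mechanically. The snippet below is a hypothetical helper (not part of the repository) that tests whether a path follows one of the three documented patterns; the resource name in the example is illustrative.

```python
import re

# One regex per documented example location.
EXAMPLE_PATTERNS = [
    r"provider/provider\.tf",
    r"data-sources/[\w.-]+/data-source\.tf",
    r"resources/[\w.-]+/resource\.tf",
]

def is_documented_example_path(path: str) -> bool:
    """Return True if the path matches one of the documented example locations."""
    return any(re.fullmatch(pattern, path) for pattern in EXAMPLE_PATTERNS)

print(is_documented_example_path("resources/elasticstack_elasticsearch_index/resource.tf"))  # True
```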
---
layout: "default"
title: "Logical information model - documentation"
description: ""
id: "dokumentaatio"
status: "Draft"
---
# Plot subdivision plan at the logical level
{:.no_toc}

1. {:toc}

## General

The logical-level information model defines the data structures common to all objects of a plot subdivision plan (tonttijakosuunnitelma). These structures are applied according to the [application guidelines](../soveltamisohjeet/) drawn up for expressing plot subdivisions, the code lists attached to them, and the [life-cycle rules](../elinkaarisaannot.html) and [quality rules](../laatusaannot.html). The logical information model aims to be as independent as possible of any particular implementation technology or physical representation of the data.

<!--**Graphical rendering of the logical information model** ![Logical model of the plot subdivision plan as a graphical rendering](looginenmalli.png "Logical information model - graphical rendering (Neo4j)") (Download the [diagram with definitions](looginenmalli.png))-->

### Normative references

The plot subdivision plan information model uses the same normative references as the spatial plan (kaava) information model.
These comprise the following documents:

* [ISO 639-2:1998 Codes for the representation of names of languages — Part 2: Alpha-3 code][ISO-639-2]
* [ISO 8601-1:2019 Date and time — Representations for information interchange — Part 1: Basic rules][ISO-8601-1]
* [ISO 19103:2015 Geographic information — Conceptual schema language][ISO-19103]
* [ISO 19107:2019 Geographic information — Spatial schema][ISO-19107]
* [ISO 19108:2002 Geographic information — Temporal schema][ISO-19108]
* [ISO 19109:2015 Geographic information — Rules for application schema][ISO-19109]
* [ISO 19505-2:ISO/IEC 19505-2:2012, Information technology — Object Management Group Unified Modeling Language (OMG UML) — Part 2: Superstructure][ISO-19505-2]

### Conformance to standards

The logical information model of the plot subdivision plan is based on the General Feature Model (GFM) of the [ISO 19109][ISO-19109] standard, which defines the building blocks for specifying application schemas that conform to the ISO family of geographic information standards. Among other things, the GFM describes the metaclasses ```FeatureType```, ```AttributeType``` and ```FeatureAssociationType```.

In the plot subdivision plan information model, all information objects that have an identifier and can occur independently of other objects are defined as feature types via the ```FeatureType``` stereotype. Information objects that have no identifier of their own and can occur only as attribute values of feature types are defined using the ```DataType``` stereotype of the [ISO 19103][ISO-19103] standard. In addition, [HallinnollinenAlue](#hallinnollinenalue) and [Organisaatio](#organisaatio) are modelled only as interfaces (```Interface```), because there is no need to describe them in detail in the plot subdivision plan information model, and it is likely that the information systems maintaining the plans will provide concrete implementing classes for them.
In addition to [ISO 19109][ISO-19109], the plot subdivision plan information model is based on other ISO geographic information standards, the most central of which are [ISO 19103][ISO-19103] (use of UML in modelling geographic information), [ISO 19107][ISO-19107] (modelling of spatial information) and [ISO 19108][ISO-19108] (modelling of temporal information).

### Classes and data types defined elsewhere

#### CharacterString

Describes a generic character string consisting of 0..* characters, together with the string's length, character encoding and maximum length. Defined as an interface in the [ISO 19103][ISO-19103] standard.

#### LanguageString

Describes a language-specific character string. Extends the CharacterString interface by adding a language attribute whose value comes from the LanguageCode code list. According to the [ISO 19103][ISO-19103] definition, the language code may come from any part of the ISO 639 standard.

#### Number

Describes a generic numeric value, which may be an integer, a decimal number or a floating-point number. Defined as an interface in the [ISO 19103][ISO-19103] standard.

#### Integer

Extends the Number interface to describe a number that is an integer with no fractional or decimal part. Defined as an interface in the [ISO 19103][ISO-19103] standard.

#### Decimal

Extends the [Number](#number) interface to describe a decimal number. A number implementing the Decimal interface can be expressed exactly to a precision of one tenth. Defined as an interface in the [ISO 19103][ISO-19103] standard. Decimal numbers are used when decimals must be handled exactly, e.g. in tasks involving money.

#### Real

Extends the [Number](#number) interface to describe a floating-point number of limited precision. A number of the Real interface can express exactly only values that are powers of 1/2 (one half). Defined as an interface in the [ISO 19103][ISO-19103] standard. In practice, the precision depends on the number of bits reserved for storing the number, e.g.
```float (32-bit)``` (7 decimal digits of precision) and ```double (64-bit)``` (15 decimal digits of precision).

#### TM_Object

Common superclass of temporal values, used when a value can be either a single instant or a time interval. Defined as a class in the [ISO 19108][ISO-19108] standard.

#### TM_Instant

Describes a single instant of time as a 0-dimensional temporal geometry that corresponds to a point in space. Defined as a class in the [ISO 19108][ISO-19108] standard. The value of the instant is defined with the ```TM_Position``` class, using date or time fields conforming to the [ISO 8601][ISO-8601-1] standard or a combination of them, or with some other temporal position described via the ```TM_TemporalPosition``` class. The ```indeterminatePosition``` attribute of the latter class makes it possible to express an inexact instant by attaching to the possible value one of the classifications *unknown*, *now*, *before*, *after* or *name*.

#### TM_Period

Describes a time interval with ```begin``` and ```end``` attributes of type [TM_Instant](#tm_instant). Both attributes are mandatory, but they may carry an unknown value by using the ```indeterminatePosition = unknown``` attribute. Defined as a class in the [ISO 19108][ISO-19108] standard.

#### URI

Defines a string-valued Uniform Resource Identifier (URI) in the [ISO 19103][ISO-19103] standard. A URI can be used either as a plain identifier or as a resource locator (Uniform Resource Locator, URL).

#### Geometry

The common interface of all geometry types in the [ISO 19107][ISO-19107] standard. The most typical interfaces extending the Geometry interface of [ISO 19107][ISO-19107] are ```Point```, ```Curve```, ```Surface``` and ```Solid```, as well as ```Collection```, which can be used to describe geometry collections (multipoint, multicurve, multisurface, multisolid).

#### Point

A geometry type consisting of exactly one point. Defined as an interface in the [ISO 19107][ISO-19107] standard.
## General characteristics of the information model

The information model is based on the shared information components of the spatial plan (kaava) information model. The MKP core of the kaava model (maankäyttöpäätökset, "land-use decisions") describes the classes designed to be generally reusable in modelling land-use decisions, together with the code lists attached to them; these are used through the application profile of the plot subdivision plan. In addition to the MKP core, the abstract and other classes of the kaava information model are used extensively, via the code lists and the application profile defined by the plot subdivision plan model.

The UML class diagrams of the plot subdivision plan are available on a separate [UML diagrams page](../uml/doc/).

## Relationship and data flows between the kaava information model and the plot subdivision plan model

Plan regulations of the kaava information model whose ```laji``` attribute has the value *Esitontti* ("pre-plot") or *Sitova tonttijako laadittava* ("binding plot subdivision to be drawn up") of the [Tonttijako code](https://koodistot.suomi.fi/code;registryCode=rytj;schemeCode=RY_KaavamaaraysLaji_AK;codeCode=10) are areal source data for drawing up a plot subdivision plan. For the pre-plots of the plot subdivision plan model, the plan regulations are linked directly from the kaava information model. The linkage between the two models is based on a reference identifier: the ```liittyvanKaavamaarayksenTunnus``` attribute of the plot subdivision plan model's [Kaavamaarays class](#kaavamaarays) is given the value of the ```viittaustunnus``` attribute of the kaava model's [Kaavamaarays class](https://tietomallit.ymparisto.fi/kaavatiedot/v1.0/looginenmalli/dokumentaatio/#kaavamaarays). This avoids producing duplicated plan regulation data. The plot subdivision plan model nevertheless allows the drafter of the plan to determine the floor area computationally for pre-plot objects. The effects of the life cycle of plan data on the life cycle of a pre-plot object are described in the section [Asemakaavan suhde esitonttikohteeseen](../looginenmalli/elinkaarisaannot.html#asemakaavan-suhde-esitonttikohteeseen) of the life-cycle rules page.
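The reference-identifier linkage between the two models can be sketched in a few lines of Python. This is an illustrative sketch only: the class is reduced to the one attribute involved, and the example URI is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Kaavamaarays:
    """Plot-plan-side plan regulation: it carries no regulation content of its
    own, only a reference to the corresponding regulation in the kaava model."""
    liittyvanKaavamaarayksenTunnus: str  # value taken from the kaava model's viittaustunnus

# Hypothetical reference identifier; in practice the value comes from the
# viittaustunnus attribute of the kaava model's Kaavamaarays object.
maarays = Kaavamaarays(
    liittyvanKaavamaarayksenTunnus="https://example.org/kaava/Kaavamaarays/123"
)
print(maarays.liittyvanKaavamaarayksenTunnus)
```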
## Core of land-use decisions (MKP core)

### AbstraktiVersioituObjekti

English name: AbstractVersionedObject

Stereotype: FeatureType (feature type)

Common superclass of all version-managed classes of the plot subdivision plan. Describes the properties and associations shared by all feature types.

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
paikallinenTunnus| [CharacterString](#characterstring) | 0..1 | primary key (id) of the object
nimiavaruus | [URI](#uri) | 0..1 | namespace of the identifiers
viittausTunnus | [URI](#uri) | 0..1 | derived from the namespace, the English name of the class and the local identifier
identiteettiTunnus | [CharacterString](#characterstring) | 0..1 | version-independent identifier of the object
tuottajakohtainenTunnus | [CharacterString](#characterstring) | 0..1 | identifier of the object in the producer's information system
viimeisinMuutos | [TM_Instant](#tm_instant) | 0..1 | the instant when the object's data was last changed in the producer's information system
tallennusAika | [TM_Instant](#tm_instant) | 0..1 | the instant when the object was stored in the information system

### AbstraktiMaankayttoasia

English name: AbstractLandUseMatter

Specializes the class [AbstraktiVersioituObjekti](#abstraktiversioituobjekti).

Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
nimi| [LanguageString](#languagestring) | 0..* | name of the matter
kuvaus | [LanguageString](#languagestring) | 0..* | description text of the matter
metatietokuvaus | [URI](#uri) | 0..1 | reference to an external metadata description

### Asiakirja

English name: Document

Describes the concept of a [reference document of a plot subdivision plan](../kasitemalli/#tonttijakosuunnitelman-viiteasiakirja). Specializes the class [AbstraktiVersioituObjekti](#abstraktiversioituobjekti).
Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
asiakirjatunnus | [URI](#uri) | 0..* | persistent identifier of the document, e.g. a registry number or other document management identifier
laji | [TonttijakosuunnitelmanAsiakirjanLaji](#tonttijakosuunnitelmanasiakirjanlaji) | 1 | type of the document
lisatietolinkki | [URI](#uri) | 0..1 | reference to an external additional-information description of the document
metatietolinkki | [URI](#uri) | 0..1 | reference to an external metadata description of the document
nimi | [LanguageString](#languagestring) | 0..* | name of the document

### AbstraktiTapahtuma

English name: AbstractEvent

Specializes the class [AbstraktiVersioituObjekti](#abstraktiversioituobjekti).

Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
nimi |[LanguageString](#languagestring) | 0..* | name of the event
tapahtumaAika | [TM_Object](#tm_object) | 0..1 | time of the event (instant or interval)
kuvaus | [LanguageString](#languagestring) | 0..* | textual description of the event

### Kasittelytapahtuma

English name: HandlingEvent

Describes the concept of a handling event. Specializes the class [AbstraktiTapahtuma](#abstraktitapahtuma).

Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
laji | [AbstraktiKasittelytapahtumanLaji](#abstraktikasittelytapahtumanlaji) | 1 | type of the handling event

### Vuorovaikutustapahtuma

English name: InteractionEvent

Describes the concept of an interaction event. Specializes the class [AbstraktiTapahtuma](#abstraktitapahtuma).
Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
laji | [AbstraktiVuorovaikutustapahtumanLaji](#abstraktivuorovaikutustapahtumanlaji) | 1 | type of the interaction event

### HallinnollinenAlue

English name: AdministrativeArea

Stereotype: Interface

An administrative area is described in the plot subdivision plan information model only as an interface, because modelling it does not belong to the scope of this model. Implementing information systems must provide the minimum functionality defined by the interface.

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
hallintoaluetunnus | [CharacterString](#characterstring) | 1 | returns the identifier of the administrative area
alue | [geometry](#geometry) | 1 | returns the areal boundary of the administrative area
nimi | [CharacterString](#characterstring) | 1 | returns the name of the administrative area in the selected language

### Organisaatio

English name: Organization

Stereotype: Interface

An organization is described in the plot subdivision plan information model only as an interface, because modelling it does not belong to the scope of this model. Implementing information systems must provide the minimum functionality defined by the interface.
Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
nimi | [CharacterString](#characterstring) | 1 | returns the name of the organization in the selected language

### Code lists

#### TonttijakosuunnitelmanAsiakirjanLaji

English name: PlotPlanDocumentKind

Stereotype: CodeList (code list)

Extensibility: Not extensible

{% include common/codelistref.html registry="rytj" id="RY_TonttijakosuunnitelmanAsiakirjanLaji" name="Tonttijakosuunnitelman asiakirjan laji" %}

#### AbstraktiKasittelytapahtumanLaji

English name: AbstractHandlingEventKind

Stereotype: CodeList (code list)

Extensibility: Not extensible

The kinds of handling events are described in the MKP core package as an abstract code list, which is extended with the concrete values of each land-use decision process in the information model of that process.

#### AbstraktiVuorovaikutustapahtumanLaji

English name: AbstractInteractionEventKind

Stereotype: CodeList (code list)

Extensibility: Not extensible

The kinds of interaction events are described in the MKP core package as an abstract code list, which is extended with the concrete values of each land-use decision process in the information model of that process.
## Plot subdivision plan data

### Tonttijakosuunnitelma

Describes the concept of a plot subdivision plan. Specializes the class AbstraktiMaankayttoasia. Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
laji | [Codelist](#tonttijakosuunnitelmanLaji) | 1 | indicates what kind of plot subdivision plan has been drawn up
tunnus | [CharacterString](#CharacterString) | 1 | unique ID
elinkaarentila | [Codelist](#tonttijakosuunnitelmanElinkaarentila) | 1 | the most common values are pending, approved or in force
kumoutumistieto | [TonttijakosuunnitelmanKumoutumistieto](#TonttijakosuunnitelmanKumoutumistieto) | 0..* | a plot subdivision plan, or part of one, that this plan repeals
vireilletuloAika | [TM_Instant](#TM_Instant) | 0..1 | the time when the plot subdivision plan became pending
hyvaksymisAika | [TM_Instant](#TM_Instant) | 0..1 | the time when the plot subdivision plan was officially approved
digitaalinenAlkupera | [DigitaalinenAlkupera](#DigitaalinenAlkupera) | 0..1 | classification into plans originally created according to the information model and plans digitized afterwards

The aluerajaus attribute inherited from the AbstraktiMaankayttoasia class describes the planning area of the plot subdivision plan.
**Associations**

Role name | Target | Cardinality | Description
-----------------|--------------------|---------------------|-----------------
esitonttikohde | [Kaavakohde](#Kaavakohde) | 1 | spatial object to which plan regulations or recommendations apply
laatija | [TonttijakosuunnitelmanLaatija](#TonttijakosuunnitelmanLaatija) | 1 | drafter of the plot subdivision plan

### TonttijakosuunnitelmanLaatija

Describes the concept of the drafter of a plot subdivision plan. Specializes the class AbstraktiVersioituObjekti. Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
nimi | [CharacterString](#CharacterString) | 1 | name of the drafter
nimike | [LanguageString](#LanguageString) | 0..* | professional or official title

### Abstraktikaavakohde

Specializes the class AbstraktiVersioituObjekti. Stereotype: FeatureType (feature type)

Common abstract superclass of all spatial objects related to a plot subdivision plan. The geometry of an object can be a 2-dimensional point or area, or a 3-dimensional solid. Multipart geometries (multigeometry) are allowed.

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
arvo | [Abstraktiarvo](#Abstraktiarvo) | 0..1 | identifier value of the pre-plot, or the number of a boundary point of the pre-plot
geometria | [geometry](#geometry) | 0..1 | location of the pre-plot object
kohteenPinta-ala | [Number](#Number) | 0..* | area of the pre-plot, or the projected area of a three-dimensional pre-plot
pystysuunteinenRajaus | [Korkeusvali](#Korkeusvali) | 0..1 | the highest and lowest elevation above sea level of a three-dimensional pre-plot

**Associations**

Role name | Target | Cardinality | Description
-----------------|--------------------|---------------------|----------
liittyvaKohde | [Abstraktikaavakohde](#Abstraktikaavakohde) | 0..* | an object that is related to this object.
Each association may carry a role qualifier of type LanguageString, which describes how the object is related to this object.

### Esitonttikohde

Describes the concept of a pre-plot object. Specializes the class AbstraktiKaavakohde. Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
laji | [EsitonttikohdeLaji](#EsitonttikohdeLaji) | 1 | describes the type of the pre-plot object
suhdePeruskiinteistoon | [suhdePeruskiinteistoon](#suhdePeruskiinteistoon) | 0..1 | classification of how the pre-plot object is situated relative to the base property; recorded only for a 3D pre-plot object
elinkaarentila | [TonttijakosuunnitelmanElinkaarentila](#TonttijakosuunnitelmanElinkaarentila) | 1 | life-cycle state of the pre-plot object
muodostustieto | [Muodostustieto](#muodostustieto) | 1..* | information about the source property or properties from which the pre-plot is formed
kaavatilannetieto | [Kaavatilannetieto](#kaavatilannetieto) | 1..* | information about the detailed plans related to the pre-plot object and their effects
rakennettu | [boolean](#boolean) | 0..1 | indicates whether the pre-plot object has been built according to the detailed plan; used, among other things, to identify unbuilt building sites for increased property taxation, and to derive the municipality's planning reserve
rakennuskielto | [boolean](#boolean) | 0..1 | indicates whether a building prohibition applies to the pre-plot object
voimassaoloAika | [TM_Period](#TM_Period) | 0..1 | the interval during which the decision made on the matter, with its plans and regulations, is legally in force

**Associations**

Role name | Target | Cardinality | Description
-----------------|--------------------|---------------------|----------
maarays | [Kaavamaarays](#Kaavamaarays) | 0..* | a verbal regulation included in a plan that steers the planning and construction of areas

### Muodostustieto

Stereotype: DataType (data type)

Information about the source properties from which a pre-plot is formed.
Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
kiinteistoTunnus | [Tunnusarvo](#Tunnusarvo) | 1 | unique identifier of the register unit entered in the cadastre
muodostusPinta-ala | [Number](#Number) | 1 | area of the forming register property in square metres

### Kaavatilannetieto

Stereotype: DataType (data type)

Information about the detailed plans related to a pre-plot and their effects.

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
kaavaTunnus | [URI](#URI) | 1 | identifier of a plan that changes the plan regulations of the pre-plot object or repeals the pre-plot object
kaavalaji | [Kaavalaji](#Kaavalaji) | 1 | classification based on the steering needs of land use, the content requirements of the plan, the process and the responsible administrative authority
kumoaaEsitonttikohteen | [boolean](#boolean) | 1 | if the value is true, the plan repeals the pre-plot object entirely

### Kaavamaarays

Describes the concept of a plan regulation. Specializes the class AbstraktiTietoyksikko. Stereotype: FeatureType (feature type)

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
liittyvanKaavamaarayksenTunnus | [URI](#URI) | 1 | reference identifier of the plan regulation information object, included in a plan, that is related to the pre-plot object

### AbstraktiTietoyksikko

Specializes the class AbstraktiVersioituObjekti. Stereotype: FeatureType (feature type)

Common abstract superclass of all information elements related to plot subdivision plans.

Name | Type | Cardinality | Description
-----------------|---------------------|-----------------|------------------------------------
arvo | [AbstraktiArvo](#AbstraktiArvo) | 0..* | describes a value interpreted by the drafter of the plot subdivision plan, e.g.
rakentamisen määrä ### TonttijakosuunnitelmanKumoutumistieto Stereotyyppi: DataType (tietotyyppi) Kumoamistieto yksilöi mitä tonttijakosuunnitelmia tai niiden esitonttikohteita tonttijakosuunnitelma kumoaa lainvoimaiseksi tullessaan. laji | Tyyppi | Kardinaliteetti | Kuvaus -----------------|---------------------|-----------------|------------------------------------ kumoutuvanTonttijakosuunnitelmanTunnus | [URI](#URI) | 1 | tonttijakosuunnitelma, johon kumoaminen kohdistuu kumoutuvanEsitonttikohteenTunnus | [URI](#URI) | 0..* | esitonttikohde, johon kumoutuminen kohdistuu ### Koodistot #### TonttijakosuunnitelmanLaji Englanninkielinen nimi: PlotPlanKind Stereotyyppi: CodeList (koodisto) Laajennettavuus: Ei laajennettavissa {% include common/codelistref.html registry="rytj" id="RY_TonttijakosuunnitelmanLaji" name="Tonttijakosuunnitelman laji" %} #### TonttijakosuunnitelmanElinkaarentila Englanninkielinen nimi: PlotPlanLifeCycleState Stereotyyppi: CodeList (koodisto) Laajennettavuus: Ei laajennettavissa {% include common/codelistref.html registry="rytj" id="RY_TonttijakosuunnitelmanElinkaarentila" name="Tonttijakosuunnitelman elinkaaren tila" %} #### TonttijakosuunnitelmanAsiakirjanLaji Englanninkielinen nimi: PlotPlanDocumentType Stereotyyppi: CodeList (koodisto) Laajennettavuus: Ei laajennettavissa {% include common/codelistref.html registry="rytj" id="RY_TonttijakosuunnitelmanAsiakirjanLaji" name="Tonttijakosuunnitelmaa koskevan asiakirjan laji" %} #### SuhdePeruskiinteistoon Englanninkielinen nimi: RelationToBaseProperty Stereotyyppi: CodeList (koodisto) Laajennettavuus: Ei laajennettavissa {% include common/codelistref.html registry="rytj" id="RY_SuhdePeruskiinteistoon" name="Esitonttikohteen suhde peruskiinteistöön" %} #### EsitonttikohdeLaji <!--Lisää sisäinen linkki? --> Erikoistaa luokkaa AbstraktiKaavamaarayslaji. 
Englanninkielinen nimi: PreplotKind Stereotyyppi: CodeList (koodisto) Laajennettavuus: Ei laajennettavissa {% include common/codelistref.html registry="rytj" id="RY_EsitonttikohdeLaji" name="Esitonttikohteen laji" %} #### TonttijakosuunnitelmanVuorovaikutustapahtumanLaji <!--Lisää sisäinen linkki? --> Erikoistaa luokkaa AbstraktiVuorovaikutustapahtumanLaji. Englanninkielinen nimi: PlotPlanPublicParticipationEventKind Stereotyyppi: CodeList (koodisto) Laajennettavuus: Ei laajennettavissa {% include common/codelistref.html registry="rytj" id="RY_TonttijakosuunnitelmanVuorovaikutustapahtumanLaji" name="Tonttijakosuunnitelman vuorovaikutustapahtuman laji" %} #### TonttijakosuunnitelmanKasittelytapahtumanLaji <!--Lisää sisäinen linkki? --> Erikoistaa luokkaa AbstraktiKasittelytapahtumanLaji. Englanninkielinen nimi: PlotPlanHandlingEventKind Stereotyyppi: CodeList (koodisto) Laajennettavuus: Ei laajennettavissa {% include common/codelistref.html registry="rytj" id="RY_TonttijakosuunnitelmanKasittelytapahtumanLaji" name="Tonttijakosuunnitelman kasittelytapahtuman laji" %} <!-- linkit standardeihin, joihin mainittu sivun alussa --> [ISO-8601-1]: https://www.iso.org/standard/70907.html "ISO 8601-1:2019 Date and time — Representations for information interchange — Part 1: Basic rules" [ISO-639-2]: https://www.iso.org/standard/4767.html "ISO 639-2:1998 Codes for the representation of names of languages — Part 2: Alpha-3 code" [ISO-19103]: https://www.iso.org/standard/56734.html "ISO 19103:2015 Geographic information — Conceptual schema language" [ISO-19107]: https://www.iso.org/standard/66175.html "ISO 19107:2019 Geographic information — Spatial schema" [ISO-19108]: https://www.iso.org/standard/26013.html "ISO 19108:2002 Geographic information — Temporal schema" [ISO-19109]: https://www.iso.org/standard/59193.html "ISO 19109:2015 Geographic information — Rules for application schema" [ISO-19505-2]: https://www.iso.org/standard/52854.html "ISO/IEC 19505-2:2012, Information 
technology — Object Management Group Unified Modeling Language (OMG UML) — Part 2: Superstructure"
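The Esitonttikohde attribute tables above map naturally onto a record type. The following Python sketch is purely illustrative: it flattens the model's code-list, identifier, and URI types to plain `str` (an assumption for brevity), and the sample identifier values in the usage are invented.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Muodostustieto:
    kiinteistoTunnus: str        # cadastral register unit identifier (1)
    muodostusPintaAla: float     # contributed area in square metres (1)

@dataclass
class Kaavatilannetieto:
    kaavaTunnus: str             # URI of the related local detailed plan (1)
    kaavalaji: str               # plan kind code (1)
    kumoaaEsitonttikohteen: bool # True if the plan repeals the pre-plot object (1)

@dataclass
class Esitonttikohde:
    laji: str                                   # EsitonttikohdeLaji code (1)
    elinkaarentila: str                         # life-cycle state code (1)
    muodostustieto: List[Muodostustieto]        # 1..* forming properties
    kaavatilannetieto: List[Kaavatilannetieto]  # 1..* related plan situations
    suhdePeruskiinteistoon: Optional[str] = None  # 0..1, 3D objects only
    rakennettu: Optional[bool] = None             # 0..1
    rakennuskielto: Optional[bool] = None         # 0..1
```

A record with only the mandatory attributes leaves the optional fields as `None`, mirroring the 0..1 cardinalities in the table.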
57.526652
750
0.749852
fin_Latn
0.999456
fa840f5969027fee481cceeb9e51c84283308239
9
md
Markdown
tests/markdown_spec/example-14.md
fuelingtheweb/prettier
53edfeb1ded1e2729e3f226f1a3fcc3b42516776
[ "MIT" ]
40,139
2017-02-20T22:01:11.000Z
2022-03-31T19:56:19.000Z
tests/markdown_spec/example-14.md
fuelingtheweb/prettier
53edfeb1ded1e2729e3f226f1a3fcc3b42516776
[ "MIT" ]
9,185
2017-02-20T22:02:24.000Z
2022-03-31T20:45:07.000Z
tests/markdown_spec/example-14.md
fuelingtheweb/prettier
53edfeb1ded1e2729e3f226f1a3fcc3b42516776
[ "MIT" ]
4,365
2017-02-21T16:30:33.000Z
2022-03-31T02:49:26.000Z
--
**
__
2.25
2
0.222222
eng_Latn
0.539367
fa845dfa36c7ebc3ac0c9513f74dfa2f9fc0f9a5
1,905
md
Markdown
docs/Snips.md
SuessLabs/VsLinuxDebug
4db143009e4c6b18a83cb57c12083a63fb0437ec
[ "MIT" ]
1
2022-03-22T16:01:46.000Z
2022-03-22T16:01:46.000Z
docs/Snips.md
SuessLabs/RemoteDebug
75f3626e8b82476f7452def6f23d9b875cf56da0
[ "MIT" ]
11
2022-03-30T01:33:23.000Z
2022-03-31T13:11:45.000Z
docs/Snips.md
SuessLabs/RemoteDebug
75f3626e8b82476f7452def6f23d9b875cf56da0
[ "MIT" ]
null
null
null
# Code Snips

## RemoteDebugger.cs

```cs
/*
 * Borrowed from VSMonoDebugger
 *
public async Task BuildStartupProjectAsync()
{
  await ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();

  var failedBuilds = BuildStartupProject();
  if (failedBuilds > 0)
  {
    Window window = _dte.Windows.Item("{34E76E81-EE4A-11D0-AE2E-00A0C90FFFC3}"); // EnvDTE.Constants.vsWindowKindOutput
    OutputWindow outputWindow = (OutputWindow)window.Object;
    outputWindow.ActivePane.Activate();
    outputWindow.ActivePane.OutputString($"{failedBuilds} project(s) failed to build. See error and output window!");

    //// _errorListProvider.Show();

    throw new Exception($"{failedBuilds} project(s) failed to build. See error and output window!");
  }
}

private int BuildStartupProject()
{
  ThreadHelper.ThrowIfNotOnUIThread();

  //// var dte = (DTE)Package.GetGlobalService(typeof(DTE));
  var sb = (SolutionBuild2)_dte.Solution.SolutionBuild;

  try
  {
    var startProject = GetStartupProject();
    var activeConfiguration = _dte.Solution.SolutionBuild.ActiveConfiguration as SolutionConfiguration2;
    var activeConfigurationName = activeConfiguration.Name;
    var activeConfigurationPlatform = activeConfiguration.PlatformName;
    var startProjectName = startProject.FullName;

    sb.BuildProject($"{activeConfigurationName}|{activeConfigurationPlatform}", startProject.FullName, true);
  }
  catch (Exception ex)
  {
    // Build complete solution (fallback solution)
    return BuildSolution();
  }

  return sb.LastBuildInfo;
}

private int BuildSolution()
{
  ThreadHelper.ThrowIfNotOnUIThread();

  var sb = (SolutionBuild2)_dte.Solution.SolutionBuild;
  sb.Build(true);

  return sb.LastBuildInfo;
}
*/
```
30.725806
121
0.675591
yue_Hant
0.888714
fa84959fb8f3c789adae8f32971eeb1f7e047bd0
421
md
Markdown
e2e/regression/cases/034_cli_single/README.md
fhoffa/bqtail
ab61bda7c1c59dd61eefb116d736b233b586afc6
[ "Apache-2.0" ]
45
2019-09-16T13:01:30.000Z
2022-02-07T02:29:12.000Z
e2e/regression/cases/034_cli_single/README.md
fhoffa/bqtail
ab61bda7c1c59dd61eefb116d736b233b586afc6
[ "Apache-2.0" ]
5
2019-12-11T19:26:35.000Z
2021-04-08T20:05:51.000Z
e2e/regression/cases/034_cli_single/README.md
fhoffa/bqtail
ab61bda7c1c59dd61eefb116d736b233b586afc6
[ "Apache-2.0" ]
6
2020-01-07T20:41:28.000Z
2021-06-24T22:57:35.000Z
### Client side batch

### Scenario:

This scenario tests client side bqtail with individual file ingestion.

```bash
export GOOGLE_APPLICATION_CREDENTIALS='${env.HOME}/.secret/${gcpCredentials}.json'
bqtail -r=rule/rule.yaml -s=data/trigger/dummy.json
```

[@rule.yaml](rule/rule.yaml)
```yaml
When:
  Prefix: /data/case034
  Suffix: .json
Dest:
  Table: bqtail.dummy_034
Async: true
OnSuccess:
  - Action: delete
```
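The rule's `When` clause selects which storage objects trigger ingestion by path prefix and suffix. bqtail itself is written in Go; this Python sketch (with an invented helper name) only illustrates that matching logic, not the actual bqtail implementation.

```python
def rule_matches(path: str, prefix: str, suffix: str) -> bool:
    """Return True when an object path satisfies a bqtail-style When
    clause (illustrative only; the real matching lives inside bqtail)."""
    return path.startswith(prefix) and path.endswith(suffix)

# The When values from rule.yaml above.
RULE_WHEN = {"Prefix": "/data/case034", "Suffix": ".json"}
```

With these values, `data/trigger/dummy.json` would be ingested into `bqtail.dummy_034` only when its storage path carries the configured prefix and suffix.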
17.541667
82
0.719715
eng_Latn
0.230612
fa84cd3390b245fdb8d55d6c78b9a6a46630e044
39,400
md
Markdown
docs/CHANGELOG.md
fernando-aristizabal/cahaba
4e27d501867a02d7c0ddab7d456723bfd0380ae3
[ "Info-ZIP" ]
25
2020-10-13T17:45:31.000Z
2022-01-25T18:35:49.000Z
docs/CHANGELOG.md
fernando-aristizabal/cahaba
4e27d501867a02d7c0ddab7d456723bfd0380ae3
[ "Info-ZIP" ]
422
2020-10-06T16:48:38.000Z
2022-02-03T22:43:23.000Z
docs/CHANGELOG.md
fernando-aristizabal/cahaba
4e27d501867a02d7c0ddab7d456723bfd0380ae3
[ "Info-ZIP" ]
7
2020-10-06T16:17:49.000Z
2021-12-07T23:16:05.000Z
All notable changes to this project will be documented in this file. We follow the [Semantic Versioning 2.0.0](http://semver.org/) format.
<br/><br/>

## v3.0.22.7 - 2021-10-08 - [PR #467](https://github.com/NOAA-OWP/cahaba/pull/467)

These "tool" enhancements 1) delineate in-channel vs. out-of-channel geometry to allow more targeted development of key physical drivers influencing the SRC calculations (e.g. bathymetry & Manning's n) #418 and 2) apply a variable/composite Manning's roughness (n) using a user-provided csv with in-channel vs. overbank roughness values #419 & #410.

## Additions
- `identify_src_bankfull.py`: new post-processing tool that ingests a flow csv (e.g. NWM 1.5yr recurrence flow) to approximate the bankfull STG and then calculate the channel vs. overbank proportions using the volume and hydraulic radius variables
- `vary_mannings_n_composite.py`: new post-processing tool that ingests a csv containing feature_id, channel roughness, and overbank roughness and then generates composite n values via the channel ratio variable

## Changes
- `eval_plots.py`: modified the plot legend text to display the full label for development tests
- `inundation.py`: added a new optional argument (-n) and corresponding function to produce a csv containing the stage value (and SRC variables) calculated from the flow-to-stage interpolation.

<br/><br/>

## v3.0.22.6 - 2021-09-13 - [PR #462](https://github.com/NOAA-OWP/cahaba/pull/462)

This new workflow ingests FIM point observations from users and "corrects" the synthetic rating curves to produce the desired FIM extent at locations where feedback is available (locally calibrated FIM).

## Changes
- `add_crosswalk.py`: added `NextDownID` and `order_` attributes to the exported `hydroTable.csv`. This will potentially be used in future enhancements to extend SRC changes to upstream/downstream catchments.
- `adjust_rc_with_feedback.py`: added a new workflow to perform the SRC modifications (revised discharge) using the existing HAND geometry variables combined with the user-provided point location flow and stage data.
- `inundation_wrapper_custom_flow.py`: updated code to allow huc6 processing to generate custom inundation outputs.

<br/><br/>

## v3.0.22.5 - 2021-09-08 - [PR #460](https://github.com/NOAA-OWP/cahaba/pull/460)

Patches an issue where only certain benchmark categories were being used in evaluation.

## Changes
- In `tools/tools_shared_variables.py`, created a variable `MAGNITUDE_DICT` to store benchmark category magnitudes.
- `synthesize_test_cases.py` imports `MAGNITUDE_DICT` and uses it to assign magnitudes.

<br/><br/>

## v3.0.22.4 - 2021-08-30 - [PR #456](https://github.com/NOAA-OWP/cahaba/pull/456)

Renames the BARC-modified variables that are exported to `src_full_crosswalked.csv` to replace the original variables. The default/original variables are renamed with an `orig_` prefix. This change is needed to ensure downstream uses of `src_full_crosswalked.csv` are able to reference the authoritative version of the channel geometry variables (i.e. BARC-adjusted where available).

## Changes
- In `src_full_crosswalked.csv`, default/original variables are renamed with an `orig_` prefix and `SA_div` is renamed to `SA_div_flag`.

<br/><br/>

## v3.0.22.3 - 2021-08-27 - [PR #457](https://github.com/NOAA-OWP/cahaba/pull/457)

This fixes a bug in the `get_metadata()` function in `/tools/tools_shared_functions.py` that arose because of a WRDS update. Previously the `metadata_source` response was returned as independent variables, but now it is returned as a list of strings. Another issue was observed where the `EVALUATED_SITES_CSV` variable was being misdefined (at least on the development VM) through the OS environment variable setting.

## Changes
- In `tools_shared_functions.py`, changed parsing of WRDS `metadata_sources` to account for the new list type.
- In `generate_categorical_fim_flows.py`, changed the way the `EVALUATED_SITES_CSV` path is defined from an OS environment setting to a relative path that will work within the Docker container.

<br/><br/>

## v3.0.22.2 - 2021-08-26 - [PR #455](https://github.com/NOAA-OWP/cahaba/pull/455)

This merge addresses an issue with the bathymetry adjusted rating curve (BARC) calculations exacerbating single-pixel inundation issues for the lower Mississippi River. This fix allows the user to specify a stream order value that will be ignored in BARC calculations (reverting to the original/default rating curve). If/when the "thalweg notch" issue is addressed, this change may be unmade.

## Changes
- Added a new env variable `ignore_streamorders`, set to 10.
- Added new BARC code to set the bathymetry adjusted cross-section area to 0 (reverting to the default SRC values) based on the stream order env variable.

<br/><br/>

## v3.0.22.1 - 2021-08-20 - [PR #447](https://github.com/NOAA-OWP/cahaba/pull/447)

Patches the minimum stream length in the template parameters file.

## Changes
- Changes `max_split_distance_meters` in `params_template.env` to 1500.

<br/><br/>

## v3.0.22.0 - 2021-08-19 - [PR #444](https://github.com/NOAA-OWP/cahaba/pull/444)

This adds a script, `adjust_rc_with_feedback.py`, that will be expanded in future issues. The primary function that performs the HAND value and hydroid extraction is `ingest_points_layer()`, but this may change as the overall synthetic rating curve automatic update mechanism evolves.

## Additions
- Added `adjust_rc_with_feedback.py` with `ingest_points_layer()`, a function to extract HAND and hydroid values for use in an automatic synthetic rating curve updating mechanism.

<br/><br/>

## v3.0.21.0 - 2021-08-18 - [PR #433](https://github.com/NOAA-OWP/cahaba/pull/433)

General repository cleanup, made memory profiling an optional flag, and the API's release feature now saves outputs.

## Changes
- Remove `Dockerfile.prod`, rename `Dockerfile.dev` to just `Dockerfile`, and remove `.dockerignore`.
- Clean up `Dockerfile` and remove any unused* packages or variables.
- Remove any unused* Python packages from the `Pipfile`.
- Move the `CHANGELOG.md`, `SECURITY.md`, and `TERMS.md` files to the `/docs` folder.
- Remove any unused* scripts in the `/tools` and `/src` folders.
- Move `tools/preprocess` scripts into `tools/`.
- Ensure all scripts in the `/src` folder have their code in functions and are being called via a `__main__` function (this will help with implementing memory profiling fully).
- Changed memory profiling to be an optional flag `-m` for `fim_run.sh`.
- Updated FIM API to save all outputs during a "release" job.

<br/><br/>

## v3.0.20.2 - 2021-08-13 - [PR #443](https://github.com/NOAA-OWP/cahaba/pull/443)

This merge modifies `clip_vectors_to_wbd.py` to check for relevant input data.

## Changes
- `clip_vectors_to_wbd.py` now checks that there are NWM stream segments within the buffered HUC boundary.
- `included_huc8_ms.lst` has several additional HUC8s.

<br/><br/>

## v3.0.20.1 - 2021-08-12 - [PR #442](https://github.com/NOAA-OWP/cahaba/pull/442)

This merge improves documentation in various scripts.

## Changes
This PR better documents the following:
- `inundate_nation.py`
- `synthesize_test_cases.py`
- `adjust_thalweg_lateral.py`
- `rem.py`

<br/><br/>

## v3.0.20.0 - 2021-08-11 - [PR #440](https://github.com/NOAA-OWP/cahaba/pull/440)

This merge adds two new scripts into `/tools/` for use in QAQC.

## Additions
- `inundate_nation.py` to produce inundation maps for the entire country for use in QAQC.
- `check_deep_flooding.py` to check for depths of inundation greater than a user-supplied threshold at specific areas defined by a user-supplied shapefile.

<br/><br/>

## v3.0.19.5 - 2021-07-19

Updating `README.md`.

<br/><br/>

## v3.0.19.4 - 2021-07-13 - [PR #431](https://github.com/NOAA-OWP/cahaba/pull/431)

Updating logging and fixing a bug in vector preprocessing.

## Additions
- `fim_completion_check.py` adds a message to the docker log recording any HUCs that were requested but did not finish `run_by_unit.sh`.
- Adds `input_data_edits_changelog.txt` to the inputs folder to track any manual or version/location-specific changes that were made to data used in FIM 3.

## Changes
- Provides unique exit codes to relevant domain checkpoints within `run_by_unit.sh`.
- Bug fixes in `reduce_nhd_stream_density.py` and the `mprof plot` call.
- Improved error handling in `add_crosswalk.py`.

<br/><br/>

## v3.0.19.3 - 2021-07-09

Hot fix to `synthesize_test_cases`.

## Changes
- Fixed the if/elif/else statement in `synthesize_test_cases.py` that resulted in only IFC data being evaluated.

<br/><br/>

## v3.0.19.2 - 2021-07-01 - [PR #429](https://github.com/NOAA-OWP/cahaba/pull/429)

Updates to evaluation scripts to allow for Alpha testing at Iowa Flood Center (IFC) sites. Also, `BAD_SITES` variable updates to omit sites not suitable for evaluation from metric calculations.

## Changes
- The `BAD_SITES` list in `tools_shared_variables.py` was updated and reasons for site omission are documented.
- Refactored `run_test_case.py`, `synthesize_test_cases.py`, `tools_shared_variables.py`, and `eval_plots.py` to allow for IFC comparisons.

<br/><br/>

## v3.0.19.1 - 2021-06-17 - [PR #417](https://github.com/NOAA-OWP/cahaba/pull/417)

Adding a thalweg profile tool to identify significant drops in thalweg elevation. Also setting the lateral thalweg adjustment threshold in hydroconditioning.

## Additions
- `thalweg_drop_check.py` checks the elevation along the thalweg for each stream path downstream of MS headwaters within a HUC.

## Removals
- Removing the `dissolveLinks` arg from `clip_vectors_to_wbd.py`.

## Changes
- Cleaned up code in `split_flows.py` to make it more readable.
- Refactored `reduce_nhd_stream_density.py` and `adjust_headwater_streams.py` to limit MS headwater points in `agg_nhd_headwaters_adj.gpkg`.
- Fixed a bug in the `adjust_thalweg_lateral.py` lateral elevation replacement threshold; changed the threshold to 3 meters.
- Updated `aggregate_vector_inputs.py` to log intermediate processes.

<br/><br/>

## v3.0.19.0 - 2021-06-10 - [PR #415](https://github.com/NOAA-OWP/cahaba/pull/415)

Feature to evaluate performance of alternative CatFIM techniques.

## Additions
- Added `eval_catfim_alt.py` to evaluate performance of alternative CatFIM techniques.

<br/><br/>

## v3.0.18.0 - 2021-06-09 - [PR #404](https://github.com/NOAA-OWP/cahaba/pull/404)

To help analyze the memory consumption of the FIM run process, the python module `memory-profiler` has been added to give insights into where peak memory usage is within the codebase. In addition, the Dockerfile was previously broken due to the Taudem dependency removing the version that was previously being used by FIM. To fix this, and allow new docker images to be built, the Taudem version has been updated to the newest version on the Github repo and thus needs to be thoroughly tested to determine whether this new version has affected the overall FIM outputs.

## Additions
- Added `memory-profiler` to `Pipfile` and `Pipfile.lock`.
- Added an `mprof` (memory-profiler cli utility) call to `time_and_tee_run_by_unit.sh` to create an overall memory usage graph located at `/logs/{HUC}_memory.png` in the outputs directory.
- Added the `@profile` decorator to all functions within scripts used in `run_by_unit.sh` to allow for memory usage tracking, which is then recorded in the `/logs/{HUC}.log` file of the outputs directory.

## Changes
- Changed the Taudem version in `Dockerfile.dev` to `98137bb6541a0d0077a9c95becfed4e56d0aa0ac`.
- Changed all calls of python scripts in `run_by_unit.sh` to be called with the `-m memory-profiler` argument to allow scripts to also track memory usage.
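The v3.0.18.0 entry above wraps functions with memory-profiler's `@profile` decorator. Since `memory-profiler` is a third-party package, here is a hedged standard-library analogue of the same idea using `tracemalloc`; the decorator name and the `peak_bytes` attribute are invented for illustration and are not part of the cahaba codebase.

```python
import functools
import tracemalloc

def profile_memory(func):
    """Record a function's peak allocated memory, loosely analogous to
    memory-profiler's @profile as used in run_by_unit.sh (sketch only)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        try:
            result = func(*args, **kwargs)
        finally:
            # get_traced_memory() returns (current, peak) in bytes
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
        wrapper.peak_bytes = peak  # stash for later inspection/logging
        return result
    return wrapper

@profile_memory
def build_table(n):
    # Stand-in workload: allocate a list so there is something to measure.
    return [i * i for i in range(n)]
```

After a call, `build_table.peak_bytes` holds the peak allocation observed during that call, which can be logged per HUC much like the `/logs/{HUC}_memory.png` output described above.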
<br/><br/> ## v3.0.17.1 - 2021-06-04 - [PR #395](https://github.com/NOAA-OWP/cahaba/pull/395) Bug fix to the `generate_nws_lid.py` script ## Changes - Fixes incorrectly assigned attribute field "is_headwater" for some sites in the `nws_lid.gpkg` layer. - Updated `agg_nhd_headwaters_adj.gpkg`, `agg_nhd_streams_adj.gpkg`, `nwm_flows.gpkg`, and `nwm_catchments.gpkg` input layers using latest NWS LIDs. <br/><br/> ## v3.0.17.0 - 2021-06-04 - [PR #393](https://github.com/NOAA-OWP/cahaba/pull/393) BARC updates to cap the bathy calculated xsec area in `bathy_rc_adjust.py` and allow user to choose input bankfull geometry. ## Changes - Added new env variable to control which input file is used for the bankfull geometry input to bathy estimation workflow. - Modified the bathymetry cross section area calculation to cap the additional area value so that it cannot exceed the bankfull cross section area value for each stream segment (bankfull value obtained from regression equation dataset). - Modified the `rating_curve_comparison.py` plot output to always put the FIM rating curve on top of the USGS rating curve (avoids USGS points covering FIM). - Created a new aggregate csv file (aggregates for all hucs) for all of the `usgs_elev_table.csv` files (one per huc). - Evaluate the FIM Bathymetry Adjusted Rating Curve (BARC) tool performance using the estimated bankfull geometry dataset derived for the NWM route link dataset. <br/><br/> ## v3.0.16.3 - 2021-05-21 - [PR #388](https://github.com/NOAA-OWP/cahaba/pull/388) Enhancement and bug fixes to `synthesize_test_cases.py`. ## Changes - Addresses a bug where AHPS sites without benchmark data were receiving a CSI of 0 in the master metrics CSV produced by `synthesize_test_cases.py`. - Includes a feature enhancement to `synthesize_test_cases.py` that allows for the inclusion of user-specified testing versions in the master metrics CSV. - Removes some of the print statements used by `synthesize_test_cases.py`. 
<br/><br/> ## v3.0.16.2 - 2021-05-18 - [PR #384](https://github.com/NOAA-OWP/cahaba/pull/384) Modifications and fixes to `run_test_case.py`, `eval_plots.py`, and AHPS preprocessing scripts. ## Changes - Comment out return statement causing `run_test_case.py` to skip over sites/hucs when calculating contingency rasters. - Move bad sites list and query statement used to filter out bad sites to the `tools_shared_variables.py`. - Add print statements in `eval_plots.py` detailing the bad sites used and the query used to filter out bad sites. - Update AHPS preprocessing scripts to produce a domain shapefile. - Change output filenames produced in ahps preprocessing scripts. - Update workarounds for some sites in ahps preprocessing scripts. <br/><br/> ## v3.0.16.1 - 2021-05-11 - [PR #380](https://github.com/NOAA-OWP/cahaba/pull/380) The current version of Eventlet used in the Connector module of the FIM API is outdated and vulnerable. This update bumps the version to the patched version. ## Changes - Updated `api/node/connector/requirements.txt` to have the Eventlet version as 0.31.0 <br/><br/> ## v3.0.16.0 - 2021-05-07 - [PR #378](https://github.com/NOAA-OWP/cahaba/pull/378) New "Release" feature added to the FIM API. This feature will allow for automated FIM, CatFIM, and relevant metrics to be generated when a new FIM Version is released. See [#373](https://github.com/NOAA-OWP/cahaba/issues/373) for more detailed steps that take place in this feature. ## Additions - Added new window to the UI in `api/frontend/gui/templates/index.html`. - Added new job type to `api/node/connector/connector.py` to allow these release jobs to run. - Added additional logic in `api/node/updater/updater.py` to run the new eval and CatFIM scripts used in the release feature. ## Changes - Updated `api/frontend/output_handler/output_handler.py` to allow for copying more broad ranges of file paths instead of only the `/data/outputs` directory. 
<br/><br/> ## v3.0.15.10 - 2021-05-06 - [PR #375](https://github.com/NOAA-OWP/cahaba/pull/375) Remove Great Lakes coastlines from WBD buffer. ## Changes - `gl_water_polygons.gpkg` layer is used to mask out Great Lakes boundaries and remove NHDPlus HR coastline segments. <br/><br/> ## v3.0.15.9 - 2021-05-03 - [PR #372](https://github.com/NOAA-OWP/cahaba/pull/372) Generate `nws_lid.gpkg`. ## Additions - Generate `nws_lid.gpkg` with attributes indicating if site is a headwater `nws_lid` as well as if it is co-located with another `nws_lid` which is referenced to the same `nwm_feature_id` segment. <br/><br/> ## v3.0.15.8 - 2021-04-29 - [PR #371](https://github.com/NOAA-OWP/cahaba/pull/371) Refactor NHDPlus HR preprocessing workflow. Resolves issue #238 ## Changes - Consolidate NHD streams, NWM catchments, and headwaters MS and FR layers with `mainstem` column. - HUC8 intersections are included in the input headwaters layer. - `clip_vectors_to_wbd.py` removes incoming stream segment from the selected layers. <br/><br/> ## v3.0.15.7 - 2021-04-28 - [PR #367](https://github.com/NOAA-OWP/cahaba/pull/367) Refactor synthesize_test_case.py to handle exceptions during multiprocessing. Resolves issue #351 ## Changes - refactored `inundation.py` and `run_test_case.py` to handle exceptions without using `sys.exit()`. <br/><br/> ## v3.0.15.6 - 2021-04-23 - [PR #365](https://github.com/NOAA-OWP/cahaba/pull/365) Implement CatFIM threshold flows to Sierra test and add AHPS benchmark preprocessing scripts. ## Additions - Produce CatFIM flows file when running `rating_curve_get_usgs_gages.py`. - Several scripts to preprocess AHPS benchmark data. Requires numerous file dependencies not available through Cahaba. ## Changes - Modify `rating_curve_comparison.py` to ingest CatFIM threshold flows in calculations. - Modify `eval_plots.py` to save all site specific bar plots in same parent directory instead of in subdirectories. 
- Add variables to `env.template` for AHPS benchmark preprocessing. <br/><br/> ## v3.0.15.5 - 2021-04-20 - [PR #363](https://github.com/NOAA-OWP/cahaba/pull/363) Prevent eval_plots.py from erroring out when spatial argument enabled if certain datasets not analyzed. ## Changes - Add check to make sure analyzed dataset is available prior to creating spatial dataset. <br/><br/> ## v3.0.15.4 - 2021-04-20 - [PR #356](https://github.com/NOAA-OWP/cahaba/pull/356) Closing all multiprocessing Pool objects in repo. <br/><br/> ## v3.0.15.3 - 2021-04-19 - [PR #358](https://github.com/NOAA-OWP/cahaba/pull/358) Preprocess NHDPlus HR rasters for consistent projections, nodata values, and convert from cm to meters. ## Additions - `preprocess_rasters.py` reprojects raster, converts to meters, and updates nodata value to -9999. - Cleaned up log messages from `bathy_rc_adjust.py` and `usgs_gage_crosswalk.py`. - Outputs paths updated in `generate_categorical_fim_mapping.py` and `generate_categorical_fim.py`. - `update_raster_profile` cleans up raster crs, blocksize, nodata values, and converts elevation grids from cm to meters. - `reproject_dem.py` imports gdal to reproject elevation rasters because an error was occurring when using rasterio. ## Changes - `burn_in_levees.py` replaces the `gdal_calc.py` command to resolve inconsistent outputs with burned in levee values. <br/><br/> ## v3.0.15.2 - 2021-04-16 - [PR #359](https://github.com/NOAA-OWP/cahaba/pull/359) Hotfix to preserve desired files when production flag used in `fim_run.sh`. ## Changes - Fixed production whitelisted files. <br/><br/> ## v3.0.15.1 - 2021-04-13 - [PR #355](https://github.com/NOAA-OWP/cahaba/pull/355) Sierra test considered all USGS gage locations to be mainstems even though many actually occurred with tributaries. This resulted in unrealistic comparisons as incorrect gages were assigned to mainstems segments. This feature branch identifies gages that are on mainstems via attribute field. 
## Changes - Modifies `usgs_gage_crosswalk.py` to filter out gages from the `usgs_gages.gpkg` layer such that for a "MS" run, only consider gages that contain rating curve information (via `curve` attribute) and are also mainstems gages (via `mainstems` attribute). - Modifies `usgs_gage_crosswalk.py` to filter out gages from the `usgs_gages.gpkg` layer such that for a "FR" run, only consider gages that contain rating curve information (via `curve` attribute) and are not mainstems gages (via `mainstems` attribute). - Modifies how mainstems segments are determined by using the `nwm_flows_ms.gpkg` as a lookup to determine if the NWM segment specified by WRDS for a gage site is a mainstems gage. ## Additions - Adds a `mainstem` attribute field to `usgs_gages.gpkg` that indicates whether a gage is located on a mainstems river. - Adds `NWM_FLOWS_MS` variable to the `.env` and `.env.template` files. - Adds the `extent` argument specified by user when running `fim_run.sh` to `usgs_gage_crosswalk.py`. <br/><br/> ## v3.0.15.0 - 2021-04-08 - [PR #340](https://github.com/NOAA-OWP/cahaba/pull/340) Implementing a prototype technique to estimate the missing bathymetric component in the HAND-derived synthetic rating curves. The new Bathymetric Adjusted Rating Curve (BARC) function is built within the `fim_run.sh` workflow and will ingest bankfull geometry estimates provided by the user to modify the cross section area used in the synthetic rating curve generation. ### Changes - `add_crosswalk.py` outputs the stream order variables to `src_full_crosswalked.csv` and calls the new `bathy_rc_adjust.py` if bathy env variable set to True and `extent=MS`. - `run_by_unit.sh` includes a new csv outputs for reviewing BARC calculations. - `params_template.env` & `params_calibrated.env` contain new BARC function input variables and on/off toggle variable. - `eval_plots.py` now includes additional AHPS eval sites in the list of "bad_sites" (flagged issues with MS flowlines). 
### Additions - `bathy_rc_adjust.py`: - Imports the existing synthetic rating curve table and the bankfull geometry input data (topwidth and cross section area per COMID). - Performs new synthetic rating curve calculations with bathymetry estimation modifications. - Flags issues with the thalweg-notch artifact. <br/><br/> ## v3.0.14.0 - 2021-04-05 - [PR #338](https://github.com/NOAA-OWP/cahaba/pull/338) Create tool to retrieve rating curves from USGS sites and convert to elevation (NAVD88). Intended to be used as part of the Sierra Test. ### Changes - Modify `usgs_gage_crosswalk.py` to: 1) Look for `location_id` instead of `site_no` attribute field in `usgs_gages.gpkg` file. 2) Filter out gages that do not have rating curves included in the `usgs_rating_curves.csv`. - Modify `rating_curve_comparison.py` to perform a check on the age of the user specified `usgs_rating_curves.csv` and alert user to the age of the file and recommend updating if file is older the 30 days. ### Additions - Add `rating_curve_get_usgs_curves.py`. This script will generate the following files: 1) `usgs_rating_curves.csv`: A csv file that contains rating curves (including converted to NAVD88 elevation) for USGS gages in a format that is compatible with `rating_curve_comparisons.py`. As it is is currently configured, only gages within CONUS will have rating curve data. 2) `log.csv`: A log file that records status for each gage and includes error messages. 3) `usgs_gages.gpkg`: A geospatial layer (in FIM projection) of all active USGS gages that meet a predefined criteria. Additionally, the `curve` attribute indicates whether a rating curve is found in the `usgs_rating_curves.csv`. This spatial file is only generated if the `all` option is passed with the `-l` argument. <br/><br/> ## v3.0.13.0 - 2021-04-01 - [PR #332](https://github.com/NOAA-OWP/cahaba/pull/332) Created tool to compare synthetic rating curve with benchmark rating curve (Sierra Test). 
### Changes - Update `aggregate_fim_outputs.py` call argument in `fim_run.sh` from 4 jobs to 6 jobs, to optimize API performance. - Reroutes median elevation data from `add_crosswalk.py` and `rem.py` to new file (depreciating `hand_ref_elev_table.csv`). - Adds new files to `viz_whitelist` in `output_cleanup.py`. ### Additions - `usgs_gage_crosswalk.py`: generates `usgs_elev_table.csv` in `run_by_unit.py` with elevation and additional attributes at USGS gages. - `rating_curve_comparison.py`: post-processing script to plot and calculate metrics between synthetic rating curves and USGS rating curve data. <br/><br/> ## v3.0.12.1 - 2021-03-31 - [PR #336](https://github.com/NOAA-OWP/cahaba/pull/336) Fix spatial option in `eval_plots.py` when creating plots and spatial outputs. ### Changes - Removes file dependencies from spatial option. Does require the WBD layer which should be specified in `.env` file. - Produces outputs in a format consistent with requirements needed for publishing. - Preserves leading zeros in huc information for all outputs from `eval_plots.py`. ### Additions - Creates `fim_performance_points.shp`: this layer consists of all evaluated ahps points (with metrics). Spatial data retrieved from WRDS on the fly. - Creates `fim_performance_polys.shp`: this layer consists of all evaluated huc8s (with metrics). Spatial data retrieved from WBD layer. <br/><br/> ## v3.0.12.0 - 2021-03-26 - [PR #327](https://github.com/NOAA-OWP/cahaba/pull/237) Add more detail/information to plotting capabilities. ### Changes - Merge `plot_functions.py` into `eval_plots.py` and move `eval_plots.py` into the tools directory. - Remove `plots` subdirectory. ### Additions - Optional argument to create barplots of CSI for each individual site. - Create a csv containing the data used to create the scatterplots. <br/><br/> ## v3.0.11.0 - 2021-03-22 - [PR #319](https://github.com/NOAA-OWP/cahaba/pull/298) Improvements to CatFIM service source data generation. 
### Changes
- Renamed `generate_categorical_fim.py` to `generate_categorical_fim_mapping.py`.
- Updated the status outputs of the `nws_lid_sites` layer and saved it in the same directory as the merged `catfim_library` layer.
- Additional stability fixes (such as improved compatibility with WRDS updates).

### Additions
- Added `generate_categorical_fim.py` to wrap `generate_categorical_fim_flows.py` and `generate_categorical_fim_mapping.py`.
- Create new `nws_lid_sites` shapefile located in the same directory as the `catfim_library` shapefile.

<br/><br/>

## v3.0.10.1 - 2021-03-24 - [PR #320](https://github.com/NOAA-OWP/cahaba/pull/320)

Patch to `synthesize_test_cases.py`.

### Changes
- Bug fix to `synthesize_test_cases.py` to allow comparison between `testing` versions and `official` versions.

<br/><br/>

## v3.0.10.0 - 2021-03-12 - [PR #298](https://github.com/NOAA-OWP/cahaba/pull/298)

Preprocessing of flow files for Categorical FIM.

### Additions
- Generate Categorical FIM flow files for each category (action, minor, moderate, major).
- Generate point shapefile of Categorical FIM sites.
- Generate csv of attribute data in shapefile.
- Aggregate all shapefiles and csv files into one file in the parent directory.
- Add flood of record category.

### Changes
- Stability fixes to `generate_categorical_fim.py`.

<br/><br/>

## v3.0.9.0 - 2021-03-12 - [PR #297](https://github.com/NOAA-OWP/cahaba/pull/297)

Enhancements to FIM API.

### Changes
- `fim_run.sh` can now be run with jobs in parallel.
- Viz post-processing can now be selected in the API interface.
- Jobs table shows jobs that end with errors.
- HUC preset lists can now be selected in the interface.
- Better `output_handler` file writing.
- Overall better restart and retry handlers for networking problems.
- Jobs can now be canceled in the API interface.
- Both FR and MS configs can be selected for a single job.
<br/><br/>

## v3.0.8.2 - 2021-03-11 - [PR #296](https://github.com/NOAA-OWP/cahaba/pull/296)

Enhancements to post-processing for Viz-related use cases.

### Changes
- Aggregate grids are projected to Web Mercator during `-v` runs in `fim_run.sh`.
- HUC6 aggregation is parallelized.
- Aggregate grid blocksize is changed from 256 to 1024 for faster postprocessing.

<br/><br/>

## v3.0.8.1 - 2021-03-10 - [PR #302](https://github.com/NOAA-OWP/cahaba/pull/302)

Patched import issue in `tools_shared_functions.py`.

### Changes
- Changed `utils.` to `tools_` in `tools_shared_functions.py` after the recent structural change to the `tools` directory.

<br/><br/>

## v3.0.8.0 - 2021-03-09 - [PR #279](https://github.com/NOAA-OWP/cahaba/pull/279)

Refactored NWS Flood Categorical HAND FIM (CatFIM) pipeline to open source.

### Changes
- Added `VIZ_PROJECTION` to `shared_variables.py`.
- Added missing library referenced in `inundation.py`.
- Cleaned up and converted evaluation scripts in `generate_categorical_fim.py` to open source.
- Removed `util` folders under the `tools` directory.

<br/><br/>

## v3.0.7.1 - 2021-03-02 - [PR #290](https://github.com/NOAA-OWP/cahaba/pull/290)

Renamed benchmark layers in `test_cases` and updated variable names in evaluation scripts.

### Changes
- Updated `run_test_case.py` with new benchmark layer names.
- Updated `run_test_case_calibration.py` with new benchmark layer names.

<br/><br/>

## v3.0.7.0 - 2021-03-01 - [PR #288](https://github.com/NOAA-OWP/cahaba/pull/288)

Restructured the repository. This has no impact on hydrological work done in the codebase and is simply moving files and renaming directories.

### Changes
- Moved the contents of the `lib` folder to a new folder called `src`.
- Moved the contents of the `tests` folder to the `tools` folder.
- Changed any instance of `lib` or `libDir` to `src` or `srcDir`.
<br/><br/>

## v3.0.6.0 - 2021-02-25 - [PR #276](https://github.com/NOAA-OWP/cahaba/pull/276)

Enhancement that creates metric plots and summary statistics using metrics compiled by `synthesize_test_cases.py`.

### Additions
- Added `eval_plots.py`, which produces:
    - Boxplots of CSI, FAR, and POD/TPR
    - Barplot of aggregated CSI scores
    - Scatterplot of CSI comparing two FIM versions
    - CSV of aggregated statistics (CSI, FAR, POD/TPR)
    - CSV of analyzed data and analyzed sites

<br/><br/>

## v3.0.5.3 - 2021-02-23 - [PR #275](https://github.com/NOAA-OWP/cahaba/pull/275)

Bug fixes to new evaluation code.

### Changes
- Fixed a bug in `synthesize_test_cases.py` where the extent (MS/FR) was not being written to the merged metrics file properly.
- Fixed a bug in `synthesize_test_cases.py` where only BLE test cases were being written to the merged metrics file.
- Removed unused imports from `inundation.py`.
- Updated README.md.

<br/><br/>

## v3.0.5.2 - 2021-02-23 - [PR #272](https://github.com/NOAA-OWP/cahaba/pull/272)

Adds HAND synthetic rating curve (SRC) datum elevation values to the `hydroTable.csv` output.

### Changes
- Updated `add_crosswalk.py` to include the "Median_Thal_Elev_m" variable outputs in `hydroTable.csv`.
- Renamed the hydroid attribute in `rem.py` to "Median" in case we want to include other statistics in the future (e.g. min, max, range, etc.).

<br/><br/>

## v3.0.5.1 - 2021-02-22

Fixed `TEST_CASES_DIR` path in `tests/utils/shared_variables.py`.

### Changes
- Removed `"_new"` from the `TEST_CASES_DIR` variable.

<br/><br/>

## v3.0.5.0 - 2021-02-22 - [PR #267](https://github.com/NOAA-OWP/cahaba/pull/267)

Enhancements to allow for evaluation at AHPS sites, the generation of a query-optimized metrics CSV, and the generation of categorical FIM. This merge requires that the `/test_cases` directory be updated for all machines performing evaluation.

### Additions
- `generate_categorical_fim.py` was added to allow production of NWS Flood Categorical HAND FIM (CatFIM) source data.
More changes on this script are to follow in subsequent branches.

### Removals
- `ble_autoeval.sh` and `all_ble_stats_comparison.py` were deleted because `synthesize_test_cases.py` now handles the merging of metrics.
- The code block in `run_test_case.py` that was responsible for printing the colored metrics to screen has been commented out because of the new scale of evaluations (formerly in `run_test_case.py`, now in `shared_functions.py`).
- Removed unused imports from inundation wrappers in `/tools`.

### Changes
- Updated `synthesize_test_cases.py` to allow for AHPS site evaluations.
- Reorganized `run_test_case.py` by moving more functions into `shared_functions.py`.
- Created more shared variables in `shared_variables.py` and updated import statements in relevant scripts.

<br/><br/>

## v3.0.4.4 - 2021-02-19 - [PR #266](https://github.com/NOAA-OWP/cahaba/pull/266)

Rating curves for short stream segments are replaced with rating curves from upstream/downstream segments.

### Changes
- Short stream segments are identified and are reassigned the channel geometry from the upstream/downstream segment.
- `fossid` renamed to `fimid`, and the attribute's starting value is now 1000 to avoid HydroIDs with leading zeroes.
- Addresses issue where HydroIDs were not included in the final hydrotable.
- Added `import sys` to `inundation.py` (missing from previous feature branch).
- Variable names and general workflow are cleaned up.

<br/><br/>

## v3.0.4.3 - 2021-02-12 - [PR #254](https://github.com/NOAA-OWP/cahaba/pull/254)

Modified `rem.py` with a new function to output HAND reference elev.

### Changes
- Function `make_catchment_hydroid_dict` creates a df of pixel catchment ids and overlapping hydroids.
- Merge hydroid df and thalweg minimum elevation df.
- Produces new output containing all catchment ids and min thalweg elevation value, named `hand_ref_elev_table.csv`.
- Overwrites the `demDerived_reaches_split.gpk` layer by adding the additional attribute `Min_Thal_Elev_meters` to view the elevation value for each hydroid.

<br/><br/>

## v3.0.4.2 - 2021-02-12 - [PR #255](https://github.com/NOAA-OWP/cahaba/pull/255)

Addresses issue when running on HUC6 scale.

### Changes
- `src.json` should be fixed and slightly smaller by removing whitespace.
- Rasters are about the same size as running FIM at HUC6 (compressed and tiled; aggregated are slightly larger).
- Naming convention and feature id attribute are only added to the aggregated HUCs.
- HydroIDs are different for HUC6 vs aggregated HUC8s, mostly due to the forced split at HUC boundaries (as long as we use a consistent workflow it shouldn't matter).
- Fixed known issue where an incoming stream that is not included in the final selection would affect aggregate outputs.

<br/><br/>

## v3.0.4.1 - 2021-02-12 - [PR #261](https://github.com/NOAA-OWP/cahaba/pull/261)

Updated MS Crosswalk method to address gaps in FIM.

### Changes
- Fixed typo in stream midpoint calculation in `split_flows.py` and `add_crosswalk.py`.
- `add_crosswalk.py` now restricts the MS crosswalk to NWM MS catchments.
- `add_crosswalk.py` now performs a secondary MS crosswalk selection by nearest NWM MS catchment.

<br/><br/>

## v3.0.4.0 - 2021-02-10 - [PR #256](https://github.com/NOAA-OWP/cahaba/pull/256)

New python script "wrappers" for using `inundation.py`.

### Additions
- Created `inundation_wrapper_nwm_flows.py` to produce inundation outputs using NWM recurrence flows: 1.5 year, 5 year, 10 year.
- Created `inundation_wrapper_custom_flow.py` to produce inundation outputs with a user-created flow file.
- Created new `tools` parent directory to store `inundation_wrapper_nwm_flows.py` and `inundation_wrapper_custom_flow.py`.

<br/><br/>

## v3.0.3.1 - 2021-02-04 - [PR #253](https://github.com/NOAA-OWP/cahaba/pull/253)

Bug fixes to correct mismatched variable name and file path.
### Changes
- Corrected variable name in `fim_run.sh`.
- `acquire_and_preprocess_inputs.py` now creates the `huc_lists` folder and updates the file path.

<br/><br/>

## v3.0.3.0 - 2021-02-04 - [PR #227](https://github.com/NOAA-OWP/cahaba/pull/227)

Post-process to aggregate FIM outputs to HUC6 scale.

### Additions
- Viz outputs aggregated to HUC6 scale; saves outputs to the `aggregate_fim_outputs` folder.

### Changes
- `split_flows.py` now splits streams at HUC8 boundaries to ensure consistent catchment boundaries along edges.
- `aggregate_fim_outputs.sh` has been deprecated but remains in the repo for potential FIM 4 development.
- Replaced geopandas driver arg with getDriver throughout repo.
- Organized parameters in environment files by group.
- Cleaned up variable names in `split_flows.py` and `build_stream_traversal.py`.
- `build_stream_traversal.py` is now assigning HydroID by midpoint instead of centroid.
- Cleanup of `clip_vectors_to_wbd.py`.

<br/><br/>

## v3.0.2.0 - 2021-01-25 - [PR #218](https://github.com/NOAA-OWP/cahaba/pull/218)

Addition of an API service to schedule, run and manage `fim_run` jobs through a user-friendly web interface.

### Additions
- `api` folder that contains all the codebase for the new service.

<br/><br/>

## v3.0.1.0 - 2021-01-21 - [PR #206](https://github.com/NOAA-OWP/cahaba/pull/206)

Preprocess MS and FR stream networks.

### Changes
- Headwater stream segment geometries are adjusted to align with NWM streams.
- Incoming streams are selected using intersection points between NWM streams and HUC4 boundaries.
- `clip_vectors_to_wbd.py` handles local headwaters.
- Removes NHDPlus features categorized as coastline and underground conduit.
- Added streams layer to production whitelist.
- Fixed progress bar in `lib/acquire_and_preprocess_inputs.py`.
- Added `getDriver` to shared `functions.py`.
- Cleaned up variable names and types.
<br/><br/>

## v3.0.0.4 - 2021-01-20 - [PR #230](https://github.com/NOAA-OWP/cahaba/pull/230)

Changed the directory where the `included_huc*.lst` files are being read from.

### Changes
- Changed the directory where the `included_huc*.lst` files are being read from.

<br/><br/>

## v3.0.0.3 - 2021-01-14 - [PR #210](https://github.com/NOAA-OWP/cahaba/pull/210)

Hotfix for handling nodata value in rasterized levee lines.

### Changes
- Resolves bug for HUCs where `$ndv > 0` (Great Lakes region).
- Initialize the `nld_rasterized_elev.tif` using a value of `-9999` instead of `$ndv`.

<br/><br/>

## v3.0.0.2 - 2021-01-06 - [PR #200](https://github.com/NOAA-OWP/cahaba/pull/200)

Patch to address AHPS mapping errors.

### Changes
- Checks `dtype` of `hydroTable.csv` columns to resolve errors caused in `inundation.py` when joining to flow forecast.
- Exits `inundation.py` when all hydrotable HydroIDs are lake features.
- Updates path to latest AHPS site layer.
- Updated [readme](https://github.com/NOAA-OWP/cahaba/commit/9bffb885f32dfcd95978c7ccd2639f9df56ff829).

<br/><br/>

## v3.0.0.1 - 2020-12-31 - [PR #184](https://github.com/NOAA-OWP/cahaba/pull/184)

Modifications to build and run the Docker image more reliably. Cleanup on some pre-processing scripts.

### Changes
- Changed to noninteractive install of GRASS.
- Changed some paths from relative to absolute and cleaned up some python shebang lines.

### Notes
- `aggregate_vector_inputs.py` doesn't work yet. Need to externally download required data to run `fim_run.sh`.

<br/><br/>

## v3.0.0.0 - 2020-12-22 - [PR #181](https://github.com/NOAA-OWP/cahaba/pull/181)

The software released here builds on the flood inundation mapping capabilities demonstrated as part of the National Flood Interoperability Experiment, the Office of Water Prediction's Innovators Program and the National Water Center Summer Institute.
The flood inundation mapping software implements the Height Above Nearest Drainage (HAND) algorithm and incorporates community feedback and lessons learned over several years. The software has been designed to meet the requirements set by stakeholders interested in flood prediction and has been developed in partnership with several entities across the water enterprise.
##### [<< Back to index](README.md)

<a name="module_DiscoveryClient"></a>

## DiscoveryClient
Local network discovery service using UDP broadcast messages and UDP server(s) to locate running services in the local network.

**Example**
```js
Discovery.discover({port: 10001}, (err, msg, info) => {
  if(err) return console.error(err);
  if(msg && info) console.log(`[10001] Message from ${info.address}: ${msg.toString()}`);
});
```
**Example**
```js
class MyDiscovery extends Discovery.Discover {
  onMessage(msg, info) {
    if(msg === 'cookie') super.onMessage(msg, info);
  }
}

const discovery = new Discover(configOrPort);
discovery.on('stop', () => { console.log('stop') });
discovery.on('error', (e) => {
  discovery.removeAllListeners('stop');
  discovery.stop();
  console.error(e);
});
discovery.on('found', ({msg, info}) => { console.log(msg.toString(), info) });
discovery.start();
```

* [DiscoveryClient](#module_DiscoveryClient)
    * _static_
        * [.Discover](#module_DiscoveryClient.Discover) ⇐ <code>EventEmitter</code>
            * [new Discover(config)](#new_module_DiscoveryClient.Discover_new)
            * [.running](#module_DiscoveryClient.Discover+running)
            * [.config](#module_DiscoveryClient.Discover+config)
            * [.address](#module_DiscoveryClient.Discover+address)
            * [.discoveryMessage](#module_DiscoveryClient.Discover+discoveryMessage)
            * [.start()](#module_DiscoveryClient.Discover+start)
            * [.stop()](#module_DiscoveryClient.Discover+stop)
            * [.send(msg, port, host)](#module_DiscoveryClient.Discover+send)
        * [.discover(configOrPort, cb)](#module_DiscoveryClient.discover)
    * _inner_
        * [~DiscoverConfig](#module_DiscoveryClient..DiscoverConfig) : <code>object</code>

<a name="module_DiscoveryClient.Discover"></a>

### discovery.Discover ⇐ <code>EventEmitter</code>
UDP discovery class.
Can be extended to fit specific needs.

**Kind**: static class of [<code>DiscoveryClient</code>](#module_DiscoveryClient)
**Extends**: <code>EventEmitter</code>
**Emits**: <code>event:error</code>, <code>event:start</code>, <code>event:stop</code>, <code>event:found</code>
**Access**: public

* [.Discover](#module_DiscoveryClient.Discover) ⇐ <code>EventEmitter</code>
    * [new Discover(config)](#new_module_DiscoveryClient.Discover_new)
    * [.running](#module_DiscoveryClient.Discover+running)
    * [.config](#module_DiscoveryClient.Discover+config)
    * [.address](#module_DiscoveryClient.Discover+address)
    * [.discoveryMessage](#module_DiscoveryClient.Discover+discoveryMessage)
    * [.start()](#module_DiscoveryClient.Discover+start)
    * [.stop()](#module_DiscoveryClient.Discover+stop)
    * [.send(msg, port, host)](#module_DiscoveryClient.Discover+send)

<a name="new_module_DiscoveryClient.Discover_new"></a>

#### new Discover(config)

| Param | Type | Description |
| --- | --- | --- |
| config | <code>DiscoverConfig</code> \| <code>Number</code> | DiscoverConfig configuration object or port number |

<a name="module_DiscoveryClient.Discover+running"></a>

#### discover.running
Whether discovery is still running.

**Kind**: instance property of [<code>Discover</code>](#module_DiscoveryClient.Discover)
**Read only**: true

<a name="module_DiscoveryClient.Discover+config"></a>

#### discover.config
Current instance config object.

**Kind**: instance property of [<code>Discover</code>](#module_DiscoveryClient.Discover)
**Read only**: true

<a name="module_DiscoveryClient.Discover+address"></a>

#### discover.address
Last known address the discovery socket was/is working on. See: Socket::AddressInfo.

**Kind**: instance property of [<code>Discover</code>](#module_DiscoveryClient.Discover)
**Read only**: true

<a name="module_DiscoveryClient.Discover+discoveryMessage"></a>

#### discover.discoveryMessage
Message to use for the discover handshake.
Can be overloaded by a subclass.

**Kind**: instance property of [<code>Discover</code>](#module_DiscoveryClient.Discover)
**Read only**: true

<a name="module_DiscoveryClient.Discover+start"></a>

#### discover.start()
Start UDP scan.

**Kind**: instance method of [<code>Discover</code>](#module_DiscoveryClient.Discover)

<a name="module_DiscoveryClient.Discover+stop"></a>

#### discover.stop()
Stop UDP scan.

**Kind**: instance method of [<code>Discover</code>](#module_DiscoveryClient.Discover)

<a name="module_DiscoveryClient.Discover+send"></a>

#### discover.send(msg, port, host)
Send message.

**Kind**: instance method of [<code>Discover</code>](#module_DiscoveryClient.Discover)

| Param | Type | Description |
| --- | --- | --- |
| msg | <code>string</code> \| <code>Buffer</code> \| <code>object</code> | Message to send |
| port | <code>Number</code> | Port number |
| host | <code>string</code> | Host to send message to |

<a name="module_DiscoveryClient.discover"></a>

### discovery.discover(configOrPort, cb)
Simple way to discover/broadcast a message with a callback.

**Kind**: static method of [<code>DiscoveryClient</code>](#module_DiscoveryClient)

| Param | Type | Description |
| --- | --- | --- |
| configOrPort | <code>DiscoverConfig</code> \| <code>Number</code> | DiscoverConfig object or port number to run discovery on |
| cb | <code>function</code> | callback |

<a name="module_DiscoveryClient..DiscoverConfig"></a>

### DiscoveryClient~DiscoverConfig : <code>object</code>
Default UDP discovery config.

**Kind**: inner namespace of [<code>DiscoveryClient</code>](#module_DiscoveryClient)
**Properties**

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| port | <code>Number</code> | <code>10001</code> | Default port to scan network nodes on |
| tomeout | <code>Number</code> | <code>1500</code> | Default length of scan in ms |
| broadcast | <code>string</code> | <code>&quot;&#x27;255.255.255.255&#x27;&quot;</code> | Network broadcast address. Use the default to broadcast initial messages through all available interfaces (tested on Linux), or calculate your own broadcast address for a specific interface using its IP and netmask via the formula `broadcast = (~netmask) \| (IP)`. IPv6 should use `ff:ff:ff:ff:ff:ff` as a broadcast address |
| handshake | <code>string</code> | | Message to broadcast |
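The broadcast formula above can be sketched in a few lines of plain JavaScript, applied octet by octet; `broadcastAddress` is a hypothetical helper written for illustration, not part of this module's API:

```javascript
// Compute an IPv4 broadcast address from an interface IP and its netmask,
// applying broadcast = (~netmask) | (IP) to each octet.
function broadcastAddress(ip, netmask) {
  const ipOctets = ip.split('.').map(Number);
  const maskOctets = netmask.split('.').map(Number);
  // ~mask & 0xff inverts each mask octet while staying within 8 bits
  return ipOctets.map((octet, i) => octet | (~maskOctets[i] & 0xff)).join('.');
}

console.log(broadcastAddress('192.168.1.10', '255.255.255.0')); // 192.168.1.255
```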
+++
categories_weight = 50
series_weight = 50
description = "Get Bookings with the new tealight tote."
active = false
date = "2016-11-22T12:30:23-05:00"
title = "Booking on the Go with the Tealight Tote"
series = ""
categories = [ ]
+++

{{< vimeo 125493073 >}}
# Predefined directives in Vue.js

Vue provides a set of predefined directives that let you control which elements get rendered. The directives are:

- `v-if`, `v-else`, `v-else-if`: adds or removes one or more elements from the DOM depending on whether the expression passed as a value evaluates to true or false.
- `v-show`: like `v-if`, it can be used to show or hide an element based on an expression passed as a value, but it cannot be combined with the `v-else-if` and `v-else` directives. Unlike `v-if`, `v-show` uses the CSS **display** property to show or hide an element, and it cannot be applied to `<template>` elements.
- `v-for`: used to render multiple similar elements on screen starting from the data contained in an object or an array.
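A minimal template sketch showing the directives above; the `count`, `visible`, and `items` properties are assumed to exist in the component's data:

```html
<template>
  <!-- v-if / v-else-if / v-else: add or remove elements from the DOM -->
  <p v-if="count > 1">There are several items</p>
  <p v-else-if="count === 1">There is one item</p>
  <p v-else>No items</p>

  <!-- v-show: toggles the CSS display property instead of removing the node -->
  <p v-show="visible">Shown or hidden via display</p>

  <!-- v-for: renders one <li> per element of the items array -->
  <ul>
    <li v-for="item in items" :key="item.id">{{ item.name }}</li>
  </ul>
</template>
```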
# bootstrap-ajax

This plugin is designed to work with Twitter Bootstrap to enable declarative AJAX support.

No more writing the same 20-line `$.ajax` blocks of Javascript over and over again for each snippet of AJAX that you want to support.

Easily extend support on the server side code for this by adding a top-level attribute to the JSON you are already returning called `"html"` that is the rendered content.

Unlike a backbone.js approach to building a web app, bootstrap-ajax leverages server side template rendering engines to render and return HTML fragments.

## Demo

There is a demo project at https://github.com/eldarion/bootstrap-ajax-demo/ which is also online at http://uk013.o1.gondor.io/

## Installation

Copy the files in `js/bootstrap-ajax.js` and optionally `vendor/spin.min.js` to where you keep your web site's static media and then include them in your HTML:

```
<script src="/js/spin.min.js"></script>
<script src="/js/bootstrap-ajax.js"></script>
```

The inclusion of `spin.min.js` is optional.

## Actions

There are currently three actions supported:

1. `a.click`
2. `form.submit`
3. `a.cancel`

### `a.click`

Binding to the `a` tag's click event where the tag has the class `ajax`:

```
<a href="/tasks/12342/done/" class="btn ajax">
    <i class="icon icon-check"></i>
    Done
</a>
```

In addition to the `href` attribute, you can add `data-method="post"` to change the default action from an HTTP GET to an HTTP POST.

### `form.submit`

Convert any form to an AJAX form submission quite easily by adding `ajax` to the form's class attribute:

```
<form class="form ajax" action="/tasks/create/" method="post">...</form>
```

When submitting this form, any `input[type=submit]` or `button[type=submit]` will be disabled immediately, then the data in the form is serialized and sent to the server using the `method` that was declared in the `form` tag.
### `a.cancel`

Any `a` tag that has a `data-cancel-closest` attribute defined will trigger the cancel event handler. This simply removes from the DOM any elements found using the selector defined in the `data-cancel-closest` attribute:

```
<a href="#" data-cancel-closest=".edit-form" class="btn">
    Cancel
</a>
```

## Processing Responses

There are three data attributes looked for in the response JSON data:

1. `location`
2. `html`
3. `fragments`

If `location` is found in the response JSON payload, it is expected to be a URL and the browser will be immediately redirected to that location. If, on the other hand, it is not present, then the processing rules below will be applied based on what attributes are defined.

If you have a `fragments` hash defined, it should contain a list of key/value pairs where the keys are the selectors to content that will be replaced, and the values are the server-side rendered HTML content that will replace the elements that match the selection.

You can define both `html` to be processed by the declarative rules defined below and the `fragments` at the same time. This gives you the ability to, for example, replace the form you submitted with `html` content while at the same time updating multiple bits of content on the page without having to refresh them.

There are five different ways that you can declare how an `html` response without a `location` directive should be processed:

1. Append
2. Refresh
3. Refresh Closest
4. Replace
5. Replace Closest

Here is where it can get fun, as all of the values for these processing directives are just CSS selectors. In addition, they can be multiplexed. You can declare all of them at the same time if you so desire. A CSS selector can easily be written to address multiple different blocks on the page at the same time. Best to just see some examples.
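As an illustration, a hypothetical server response that uses both `html` and `fragments` (the selectors and markup here are invented for the example) might look like:

```
{
    "html": "<form class=\"form ajax\" action=\"/tasks/create/\" method=\"post\">...</form>",
    "fragments": {
        ".done-score": "<div class=\"done-score\">12</div>",
        ".done-list": "<ul class=\"done-list\">...</ul>"
    }
}
```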
### Append

Using `data-append` allows you to specify that the `data.html` returned in the server response's JSON be appended to the elements found in the specified CSS selector:

```
<a href="/tasks/12342/done/" class="btn ajax" data-method="post" data-append=".done-list">
    <i class="icon icon-check"></i>
    Done
</a>
```

### Refresh

Using the `data-refresh` attribute lets you define what elements, if selected by the CSS selector specified for its value, get **_refreshed_**. Elements that are selected will get refreshed with the contents of the url defined in their `data-refresh-url` attribute:

```
<div class="done-score" data-refresh-url="/users/paltman/done-score/">...</div>
<div class="done-list">...</div>

<a href="/tasks/12342/done/" class="btn ajax" data-method="post" data-append=".done-list" data-refresh=".done-score">
    <i class="icon icon-check"></i>
    Done
</a>
```

In this example, the `.done-list` will be appended to with the `data.html` returned from the AJAX post made as a result of clicking the button, and simultaneously, the `.done-score` will refresh itself by fetching (GET) JSON from the url defined in `data-refresh-url` and replacing itself with the contents of the `data.html` that is returned.

### Refresh Closest

This works very much in the same way as `data-refresh`; however, it uses jQuery's `closest` method to interpret the selector.

### Replace

Sometimes you want to neither refresh nor append to existing elements but instead just replace the content with whatever is returned from the server. This is what `data-replace` is for.

### Replace Closest

This works very much in the same way as `data-replace`; however, it uses jQuery's `closest` method to interpret the selector.
```
<div class="done-score" data-refresh-url="/users/paltman/done-score/">...</div>
<div class="done-list">...</div>
<div class="results"></div>

<a href="/tasks/12342/done/" class="btn ajax" data-method="post" data-append=".done-list" data-refresh=".done-score" data-replace=".results">
    <i class="icon icon-check"></i>
    Done
</a>
```

It is rare that you'll add/use all of these processing methods combined like this. Usually it will just be one or the other; however, I add them all here to illustrate the point that they are independently interpreted and executed.

## Spinner

This is an optional include and provides support to show an activity spinner during the life of the callback. You can specify where the spinner should be placed (it defaults to the `a.click` or `form.submit` in question) by declaring `data-spinner` with a CSS selector. You can turn it off altogether by simply specifying `off` as the value instead of a selector.

## Commercial Support

This project, and others like it, have been built in support of many of Eldarion's own sites, and sites of our clients. We would love to help you on your next project, so get in touch by dropping us a note at info@eldarion.com.
Backend repository for the [Betalife app](https://betalife-frontend.netlify.app). See '__Usage__' for an example test on the backend.

[![Codacy Badge](https://api.codacy.com/project/badge/Grade/9a016eb72eea47f9a17050dbbbee9520)](https://app.codacy.com/gh/BuildForSDG/team-001-backend?utm_source=github.com&utm_medium=referral&utm_content=BuildForSDG/team-001-backend&utm_campaign=Badge_Grade_Settings)

## About

This is the backend I created for my team on our Betalife project. This backend uses __MongoDB__ for the database. All features can be tested from the frontend, the [Betalife App](https://betalife-frontend.netlify.app) under the BuildForSDG projects. See also __Example__ for testing out the RESTful API.

## Why

Our Betalife project needed a database, authentication, a REST API and other backend features for the web app.

## Usage

You can directly test the backend on https://betalife-backend.herokuapp.com

#### Example

See a list of all events created from the frontend: https://betalife-backend.herokuapp.com/api/events

Or visit the frontend for the [Betalife App](https://betalife-frontend.netlify.app) to test out all features.

## Setup

Install `npm` or `yarn` if you don't have either of them already installed. We recommend Yarn though.

After cloning the repo to your local machine and moving into the cloned folder, run `yarn install` to get started by installing dependencies.

`src/index.js` is the entry to the project, and source code should go into the `src` folder. All tests are written in the `__tests__` folder.

This starter uses [Parcel](https://parceljs.org/getting_started.html) as the bundler. It is much simpler than WebPack and the others.

#### Hints

- Run `npm install` or `yarn install` to get started. We'll assume you are using Yarn.
- Install additional dependencies: `yarn add <dependency-name> [-D]`
- Run tests: `yarn test`
- Run tests with test coverage info: `yarn test:cover`
- Check the codebase for proper syntax and formatting compliance: `yarn lint`
- Run your app in local dev mode: `yarn start`. This puts the bundled app in a `dist` folder, sets up a local web server at localhost:1234, and continues to watch for your code changes, which it syncs with the local server. This means if you loaded the app in a browser, it will auto-refresh as you code along.

Feel free to use whatever bundler best meets your needs. Parcel was only added as a sample and for those looking for a simple but effective solution to the hassle of bundlers.

## Authors

See https://betalife-frontend.netlify.app/About for all contributors to the Betalife project.

## Contributing

If this project sounds interesting to you and you'd like to contribute, thank you! First, you can send a mail to buildforsdg@andela.com to indicate your interest, why you'd like to support, and what forms of support you can bring to the table. Here is the area we think we'd need the most help with in this project: the Betalife project aims to tackle the No Poverty goal under the UN Sustainable Development Goals.

## Acknowledgements

A big thanks to Andela BuildForSDG for the guidance and support. Also grateful to the developers on my team.

## LICENSE

MIT
# Provisioning Service

## Service startup parameters

On startup, the service launches two HTTP servers: an administrative one and a user-facing one.

The address and/or port the administrative server listens on are set with the `-admin <[addr]:port>` parameter:

- `./provisioning -admin localhost:8000`
- `./provisioning -admin :8000`
- `./provisioning -admin 10.0.0.1:8000`

For the user-facing server, the address and port are set with `-port <addr>`. If a host name is given with `-letsencrypt <host>`, it is used to obtain Let's Encrypt certificates automatically. In that case port 443 (the default, not specified explicitly) is always used for secure connections, and port 80 for the automatic redirect to port 443.

- `./provisioning -port localhost:8080`
- `./provisioning -letsencrypt config.connector73.net`

The obtained certificates are cached and renewed automatically when they expire.

A binary-format file is used as the data store. Its name and path can be set with the `-db <filename>` parameter:

- `./provisioning -db test.db`

## Administrative API

### Service definitions

- `PUT /services/<name>` - defines a service. The request body is JSON or an HTTP form with the list of default service parameters
- `DELETE /services/<name>` - deletes the service definition with the given name
- `GET /services/<name>` - returns the service definition with the given name
- `GET /services` - returns the list of all registered service names

**Example**: `PUT /services/mx`

```json
{
    "address": "89.185.256.135",
    "port": "7778",
    "secure": true,
    "version": 6
}
```

### Named service groups

Services can be combined into named groups, which are used to associate a set of services with users.

- `PUT /groups/<name>` - defines a group. The request body is JSON or an HTTP form with the list of service names used by this group. For each service, its parameters can also be overridden or extended inside that object
- `DELETE /groups/<name>` - deletes the group definition with the given name
- `GET /groups/<name>` - returns the group definition with the given name
- `GET /groups` - returns the list of all registered group names

**Example**: `PUT /groups/test`

```json
{
    "mx": {
        "test": true,
        "version": 7
    },
    "store": null,
    "push": null
}
```

### Users

The user's email is always used as the user name (identifier).

- `PUT /users/<name>` - defines a user. The request body is JSON or an HTTP form with the parameters describing the user
- `DELETE /users/<name>` - deletes the user definition with the given name
- `GET /users/<name>` - returns the user definition with the given name
- `GET /users` - returns the list of all registered user names
- `GET /users/<name>/config` - returns the user's combined service configuration

The following data fields describe a user:

- `name` - optional display name of the user
- `password` - the user's password, either in plain text or as a **bcrypt** hash string. When a plain-text password is stored, it is automatically replaced with the corresponding hash. The password cannot be empty.
- `tenant` - identifier of the Azure AD tenant the user is bound to
- `group` - the group name; cannot be empty
- `services` - JSON with additional per-user service parameters

**Example**: `PUT /users/maximd@xyzrd.com`

```json
{
    "password": "$2a$10$OC8LKbl.fU6xVh.o0bVktejzQwvkzGtkOSZ73GYZBAI1Q872FUUPK",
    "group": "test",
    "services": {
        "mx": {
            "login": {
                "password": "password",
                "user": "dmitrys"
            }
        }
    }
}
```

### User data

The user's email is always used as the user name (identifier).

- `PUT /users/<name>/data` - sets the user's additional data, replacing the previous value
- `PATCH /users/<name>/data` - replaces only the user data given in the request, leaving the rest unchanged. If a key's value is `null`, that data is deleted.
- `DELETE /users/<name>/data` - deletes the user data for the given name
- `GET /users/<name>/data` - returns the user data for the given name

### Administrators

If at least one administrator is defined for the service, all API calls require HTTP Basic authorization.

- `PUT /admins/<name>` - sets the password of the administrator with the given name; the password is passed as JSON `{"password": "password"}` or an HTML form.
- `DELETE /admins/<name>` - deletes the administrator
- `GET /admins/<name>` - returns the hash of the administrator's password
- `GET /admins` - returns the list of registered administrators

### Mail

Several steps are needed to configure sending mail through Gmail:

1. Set the client ID and secret, which must be obtained from Google, with `PUT /gmail`:

```json
{
    "id": "392045688696-...1i5.apps.googleusercontent.com",
    "secret": "Tr2na...IyuDcitAR"
}
```

2. The response contains a URL you must open to copy the authorization confirmation code.

3. Add the received code to the request and send it to the server with `PUT /gmail`:

```json
{
    "id": "392045688696-...1i5.apps.googleusercontent.com",
    "secret": "Tr2na...IyuDcitAR",
    "code": "4/Upw0Q2e...GTShpM"
}
```

After that, the configuration for sending mail is saved.

- `PUT /gmail` - sets or changes the mail settings (described above)
- `GET /gmail` - returns information about the mail settings

### Mail templates

- `PUT /templates/<name>` - defines a named mail message template
- `DELETE /templates/<name>` - deletes the named message template
- `GET /templates/<name>` - returns information about the mail message template
- `GET /templates` - returns the list of registered mail template names
- `POST /templates/<name>/send/<email>` - sends a mail message using the given template; named parameters passed with the request can be used to fill in the template

A mail message template is described by the following fields:

- `subject` - the message subject
- `template` - the text template of the message
- `html` - a flag indicating that the message is sent in HTML rather than plain-text format

**Template example**: `PUT /templates/test`

```json
{
    "subject": "Message subject",
    "template": "<h2>Greetings, O great {{.name}}!</h2>\n<p>You need to install the application <a href=\"{{.url}}\">{{.app}}</a>.</p>",
    "html": true
}
```

Templates use _Go template_ notation. By default, the following values are available in a template:

- `{{.email}}` - the user's address
- `{{.name}}` - the user's name (may be empty)
- any additional named parameters passed with the request

To send a message to a user with this template, make the following request:

`POST /templates/test/send/maximd@xyzrd.com`

```json
{
    "app": "Connector73",
    "url": "app-info://9340234234"
}
```

## User API

### Combined user configuration

- `GET /config` - returns the user's combined configuration, assembled from the group and the service definitions.

The request requires user authorization, passed in the request header as HTTP Basic, or as HTTP Bearer for Azure AD users.

### Changing the user password

- `POST /password` - changes the user's password to a new one. The new password is passed in the request body as an HTTP form or JSON:

```json
{"password": "new password"}
```

The request requires user authorization, passed as HTTP Basic in the request header. That is, a user can change the password only if they know their current password.

Password changes are not supported for Azure AD users.

### Password-reset token

- `POST /reset/<email>`

To reset a password, make this request with the user's email. It requires no authorization. The request creates a special token that can be used to reset the user's password. To avoid security problems and a possible user lockout, a new password is generated only after this token is sent back to the server.

The token is mailed to the user using a template with the predefined name `resetPassword`, which must be registered. The token is available in the template as `{{.token}}`.

Password changes are not supported for Azure AD users.

### Password reset

- `POST /password/<token>` - generates and returns a new password for the user this token was created for.

If the request includes the `?email` parameter, the new password is additionally sent to the user's email address, using a template with the predefined name `newPassword`, which must be registered. The password is available in the template as `{{.password}}`.

### Additional user data

- `GET /data` - returns the user's additional data.

The request requires user authorization, passed in the request header as HTTP Basic, or as HTTP Bearer for Azure AD users.
# Sacred Grove

- **Game**: Twilight Princess
- **Location**: Sacred Grove
- **Name**: [Guardian Statues](https://www.zeldadungeon.net/wiki/Sacred_Grove_Guardians)

![Sacred Grove puzzle screenshot](screenshot.jpg)

## Puzzle

Wolf Link (**L**) sits between two statues (**A** above and **B** below). Link can move up, down, left, or right. Each time he moves, Statue **A** moves in the same direction, and Statue **B** moves in the opposite direction. The goal is to get the statues into the positions marked by **x**.

Link cannot move onto one of the positions occupied by a statue. (This is true even if it would move out of the way; Link always moves first, and then the statues after.) It is game over if a statue jumps on top of Link. If the statues try to go past each other, they bump back into their original places (but Link still moves).

| __.__ | __.__ |       | __.__ | __.__ |
| :---: | :---: | :---: | :---: | :---: |
| __.__ | __x__ | __A__ | __x__ | __.__ |
| __.__ | __.__ | __.__ | __.__ | __.__ |
|       | __.__ | __L__ | __.__ |       |
|       | __.__ | __.__ | __.__ |       |
|       |       | __B__ |       |       |

## Solution

**Note**: This puzzle is symmetrical, so all solutions work on both GameCube and Wii orientations.

| Solution             | States | Notes                                           |
| -------------------- | -----: | ----------------------------------------------- |
| Optimal              |     13 | 1.1s (MiniSat, 2.3 GHz Quad-Core Intel Core i7) |
| [Zelda Dungeon][zd]  |     14 | Steps: L, D, R2, U, L, U2, L, D2, R, U          |
| [Zelda Universe][zu] |     14 | Same as Zelda Dungeon                           |

[![asciicast](https://asciinema.org/a/323988.svg)](https://asciinema.org/a/323988?size=big)

[zd]: https://www.zeldadungeon.net/twilight-princess-walkthrough/the-master-sword/#c11_3
[zu]: https://zeldauniverse.net/guides/twilight-princess/sidequests/guardian-statue-puzzle/
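The repository actually solves these puzzles with Alloy/MiniSat (see the Solution table). Purely as an illustration of the rules described above, here is a small Python sketch of one move of the state space. The board coordinates come from the diagram; the assumption that a statue blocked by the board edge simply stays put is mine and is not stated in the text.

```python
# Cross-shaped board from the diagram above, as (col, row) cells; row 0 is the top.
BOARD = {(c, r) for r in (1, 2) for c in range(5)}
BOARD |= {(c, 0) for c in (0, 1, 3, 4)}
BOARD |= {(c, r) for r in (3, 4) for c in (1, 2, 3)}
BOARD.add((2, 5))

START = ((2, 3), (2, 1), (2, 5))  # (Link, Statue A, Statue B)
DIRS = {"L": (-1, 0), "R": (1, 0), "U": (0, -1), "D": (0, 1)}

def step(state, move):
    """Apply one move; return the new (link, a, b) state, or None if illegal/lost."""
    (lx, ly), a, b = state
    dx, dy = DIRS[move]
    link = (lx + dx, ly + dy)
    # Link moves first and may not step off the board or onto a statue's cell.
    if link not in BOARD or link in (a, b):
        return None
    # A copies the move, B mirrors it; a statue blocked by the edge stays put (assumed).
    na = (a[0] + dx, a[1] + dy)
    nb = (b[0] - dx, b[1] - dy)
    if na not in BOARD:
        na = a
    if nb not in BOARD:
        nb = b
    # Statues that try to pass (or land on) each other bump back to their old cells.
    if na == nb or (na == b and nb == a):
        na, nb = a, b
    # A statue landing on Link is game over.
    if link in (na, nb):
        return None
    return (link, na, nb)

legal = [m for m in DIRS if step(START, m) is not None]
```

Under this encoding, the opening move Down loses immediately (Statue B steps up onto Link), which matches the in-game behavior; a BFS over `step` is one straightforward way to cross-check the solver's move counts.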
# Build Image

```
docker build -t jimlintw/base:alpine -f Dockerfile .
```
# MITRE | ATLAS Navigator Data

[ATLAS Data](https://github.com/mitre-atlas/atlas-data) in STIX and [ATT&CK Navigator layer](https://github.com/mitre-attack/attack-navigator/tree/master/layers) formats for use with the [ATLAS Navigator](https://mitre-atlas.github.io/atlas-navigator/).

## Distributed files

Located in the `dist` directory:

- `case-study-navigator-layers/`
    + Navigator layer files highlighting techniques used by each ATLAS case study.
    + View using the "Navigator Layer" > "View on ATLAS Navigator" sidebar buttons on each case study page accessible from https://atlas.mitre.org/studies.
- `default-navigator-layers/`
    + Navigator layer files highlighting the ATLAS matrix and a case study frequency heatmap.
    + Viewable by default on the [ATLAS Navigator](https://mitre-atlas.github.io/atlas-navigator/).
- `stix-atlas.json`
    + ATLAS matrix expressed as a STIX 2.0 bundle following the [ATT&CK data model](https://github.com/mitre/cti/blob/master/USAGE.md#the-attck-data-model).
    + Used as domain data for the ATLAS Navigator.

## Development

Scripts in the `tools` directory update the files above.

### Installation

Ensure the `atlas-data` submodule is available by cloning this repository with `git clone --recurse-submodules`, or by running `git submodule update --init` on an existing repository.

Once the submodule is available, optionally run the following once to sparse-checkout only the necessary files in the `atlas-data/dist` directory.

```bash
git -C atlas-data config core.sparseCheckout true
echo 'dist/*' >> .git/modules/atlas-data/info/sparse-checkout
git submodule update --force --checkout atlas-data
```

Install dependencies via `pip install -r tools/requirements.txt`

### Usage

When case studies update in `atlas-data`, run

```
python tools/generate_navigator_layer.py --layer case_study
```

When tactics and techniques update in `atlas-data`, run

```
python tools/generate_stix.py
python tools/generate_navigator_layer.py --layer matrix
```

Omit the `--layer` option above to generate all outputs. Run each script with `-h` for further options.

## Related work

ATLAS enables researchers to navigate the landscape of threats to artificial intelligence and machine learning systems. Visit https://atlas.mitre.org for more information.

The ATLAS Navigator is a fork of the [ATT&CK Navigator](https://mitre-attack.github.io/attack-navigator/).
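For orientation, a Navigator layer is a small JSON document that scores techniques for display. The sketch below builds a minimal layer-like dict; the exact field set and `domain` string the `tools/` scripts emit are not shown in this README, so treat them, and the example ATLAS-style technique IDs, as assumptions.

```python
def minimal_layer(name, technique_scores):
    """Build a minimal Navigator-layer-style dict from {techniqueID: score}."""
    return {
        "name": name,
        "domain": "atlas",  # assumed domain string; the real scripts decide this
        "techniques": [
            {"techniqueID": tid, "score": score}
            for tid, score in sorted(technique_scores.items())
        ],
    }

# Example ATLAS-style technique IDs (illustrative, not taken from this README).
layer = minimal_layer("case-study heatmap", {"AML.T0043": 2, "AML.T0040": 1})
```

Dumping `layer` with `json.dumps` yields a file in the same spirit as those under `dist/default-navigator-layers/`.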
### 2015-05-02

#### objective-c

* <img src='https://avatars2.githubusercontent.com/u/438313?v=3&s=40' height='20' width='20'>[ erichoracek / Motif ](https://github.com/erichoracek/Motif): A lightweight and customizable JSON stylesheet framework for iOS
* <img src='https://avatars3.githubusercontent.com/u/6493255?v=3&s=40' height='20' width='20'>[ Draveness / DKNightVersion ](https://github.com/Draveness/DKNightVersion): A lightweight framework adding a night version (night mode) to your app on iOS.
* <img src='https://avatars1.githubusercontent.com/u/86447?v=3&s=40' height='20' width='20'>[ blommegard / APNS-Pusher ](https://github.com/blommegard/APNS-Pusher): A simple debug application for Apple Push Notification Service (APNS)
* <img src='https://avatars0.githubusercontent.com/u/944428?v=3&s=40' height='20' width='20'>[ icepat / ICETutorial ](https://github.com/icepat/ICETutorial): A nice tutorial like the one introduced in the Path 3.X App
* <img src='https://avatars1.githubusercontent.com/u/2410234?v=3&s=40' height='20' width='20'>[ forkingdog / UITableView-FDTemplateLayoutCell ](https://github.com/forkingdog/UITableView-FDTemplateLayoutCell): Template auto layout cell for automatic UITableViewCell height calculation
* <img src='https://avatars0.githubusercontent.com/u/5517281?v=3&s=40' height='20' width='20'>[ KittenYang / KYGooeyMenu ](https://github.com/KittenYang/KYGooeyMenu): Gooey Effects: a sticky fan-shaped menu
* <img src='https://avatars1.githubusercontent.com/u/8357630?v=3&s=40' height='20' width='20'>[ nsdictionary / CoreLock ](https://github.com/nsdictionary/CoreLock): A close imitation of the Alipay unlock screen
* <img src='https://avatars2.githubusercontent.com/u/1070794?v=3&s=40' height='20' width='20'>[ MortimerGoro / MGSwipeTableCell ](https://github.com/MortimerGoro/MGSwipeTableCell): An easy to use UITableViewCell subclass that allows to display swipeable buttons with a variety of transitions.
* <img src='https://avatars1.githubusercontent.com/u/432536?v=3&s=40' height='20' width='20'>[ ReactiveCocoa / ReactiveCocoa ](https://github.com/ReactiveCocoa/ReactiveCocoa): A framework for composing and transforming streams of values
* <img src='https://avatars3.githubusercontent.com/u/7659?v=3&s=40' height='20' width='20'>[ AFNetworking / AFNetworking ](https://github.com/AFNetworking/AFNetworking): A delightful iOS and OS X networking framework
* <img src='https://avatars3.githubusercontent.com/u/1282248?v=3&s=40' height='20' width='20'>[ John-Lluch / SWRevealViewController ](https://github.com/John-Lluch/SWRevealViewController): A UIViewController subclass for presenting side view controllers inspired by the Facebook and Wunderlist apps, done right!
* <img src='https://avatars2.githubusercontent.com/u/68232?v=3&s=40' height='20' width='20'>[ rs / SDWebImage ](https://github.com/rs/SDWebImage): Asynchronous image downloader with cache support with a UIImageView category
* <img src='https://avatars1.githubusercontent.com/u/163390?v=3&s=40' height='20' width='20'>[ facebook / Shimmer ](https://github.com/facebook/Shimmer): An easy way to add a simple, shimmering effect to any view in an iOS app.
* <img src='https://avatars3.githubusercontent.com/u/9983806?v=3&s=40' height='20' width='20'>[ JxbSir / YiYuanYunGou ](https://github.com/JxbSir/YiYuanYunGou): A close imitation of the YiYuanYunGou iOS app (modeled on its Android client)
* <img src='https://avatars2.githubusercontent.com/u/1746532?v=3&s=40' height='20' width='20'>[ xmartlabs / XLForm ](https://github.com/xmartlabs/XLForm): XLForm is the most flexible and powerful iOS library to create dynamic table-view forms. Fully compatible with Swift & Obj-C.
* <img src='https://avatars2.githubusercontent.com/u/479569?v=3&s=40' height='20' width='20'>[ kbhomes / radiant-player-mac ](https://github.com/kbhomes/radiant-player-mac): Turn Google Play Music into a separate, beautiful application that integrates with your Mac.
* <img src='https://avatars3.githubusercontent.com/u/250144?v=3&s=40' height='20' width='20'>[ realm / realm-cocoa ](https://github.com/realm/realm-cocoa): Realm is a mobile database: a replacement for Core Data & SQLite
* <img src='https://avatars1.githubusercontent.com/u/444313?v=3&s=40' height='20' width='20'>[ ResearchKit / ResearchKit ](https://github.com/ResearchKit/ResearchKit): ResearchKit is an open source software framework that makes it easy to create apps for medical research or for other research projects.
* <img src='https://avatars3.githubusercontent.com/u/54485?v=3&s=40' height='20' width='20'>[ martijnwalraven / meteor-ios ](https://github.com/martijnwalraven/meteor-ios): Meteor iOS aims to be an easy way to integrate native iOS apps with a Meteor server through DDP
* <img src='https://avatars0.githubusercontent.com/u/535699?v=3&s=40' height='20' width='20'>[ Huohua / HHRouter ](https://github.com/Huohua/HHRouter): Yet another URL Router for iOS.
* <img src='https://avatars3.githubusercontent.com/u/583809?v=3&s=40' height='20' width='20'>[ beardedspice / beardedspice ](https://github.com/beardedspice/beardedspice): Mac Media Keys for the Masses
* <img src='https://avatars3.githubusercontent.com/u/633862?v=3&s=40' height='20' width='20'>[ icanzilb / JSONModel ](https://github.com/icanzilb/JSONModel): Magical Data Modelling Framework for JSON. Create rapidly powerful, atomic and smart data model classes
* <img src='https://avatars3.githubusercontent.com/u/7659?v=3&s=40' height='20' width='20'>[ mattt / FormatterKit ](https://github.com/mattt/FormatterKit): `stringWithFormat:` for the sophisticated hacker set
* <img src='https://avatars1.githubusercontent.com/u/2301114?v=3&s=40' height='20' width='20'>[ jessesquires / JSQMessagesViewController ](https://github.com/jessesquires/JSQMessagesViewController): An elegant messages UI library for iOS
* <img src='https://avatars2.githubusercontent.com/u/954279?v=3&s=40' height='20' width='20'>[ BradLarson / GPUImage ](https://github.com/BradLarson/GPUImage): An open source iOS framework for GPU-based image and video processing

#### go

* <img src='https://avatars2.githubusercontent.com/u/1299?v=3&s=40' height='20' width='20'>[ hashicorp / vault ](https://github.com/hashicorp/vault): A tool for managing secrets.
* <img src='https://avatars1.githubusercontent.com/u/891222?v=3&s=40' height='20' width='20'>[ coocood / freecache ](https://github.com/coocood/freecache): A cache library for Go with zero GC overhead.
* <img src='https://avatars0.githubusercontent.com/u/1128849?v=3&s=40' height='20' width='20'>[ mholt / caddy ](https://github.com/mholt/caddy): Configurable, general-purpose HTTP/2 web server for any platform.
* <img src='https://avatars3.githubusercontent.com/u/2621?v=3&s=40' height='20' width='20'>[ bradfitz / http2 ](https://github.com/bradfitz/http2): HTTP/2 support for Go (in active development)
* <img src='https://avatars1.githubusercontent.com/u/749551?v=3&s=40' height='20' width='20'>[ docker / docker ](https://github.com/docker/docker): Docker - the open-source application container engine
* <img src='https://avatars0.githubusercontent.com/u/1095328?v=3&s=40' height='20' width='20'>[ dvyukov / go-fuzz ](https://github.com/dvyukov/go-fuzz): Randomized testing for Go
* <img src='https://avatars3.githubusercontent.com/u/1317058?v=3&s=40' height='20' width='20'>[ kabukky / journey ](https://github.com/kabukky/journey): A blog engine written in Go, compatible with Ghost themes.
* <img src='https://avatars3.githubusercontent.com/u/1544861?v=3&s=40' height='20' width='20'>[ prydonius / karn ](https://github.com/prydonius/karn): Manage multiple Git identities
* <img src='https://avatars3.githubusercontent.com/u/104030?v=3&s=40' height='20' width='20'>[ golang / go ](https://github.com/golang/go): The Go programming language
* <img src='https://avatars3.githubusercontent.com/u/5751682?v=3&s=40' height='20' width='20'>[ GoogleCloudPlatform / kubernetes ](https://github.com/GoogleCloudPlatform/kubernetes): Container Cluster Manager from Google
* <img src='https://avatars1.githubusercontent.com/u/7171?v=3&s=40' height='20' width='20'>[ constabulary / gb ](https://github.com/constabulary/gb): gb, the project based build tool for Go
* <img src='https://avatars1.githubusercontent.com/u/5056191?v=3&s=40' height='20' width='20'>[ docker / libnetwork ](https://github.com/docker/libnetwork): networking for containers
* <img src='https://avatars3.githubusercontent.com/u/31996?v=3&s=40' height='20' width='20'>[ avelino / awesome-go ](https://github.com/avelino/awesome-go): A curated list of awesome Go frameworks, libraries and software
* <img src='https://avatars3.githubusercontent.com/u/108725?v=3&s=40' height='20' width='20'>[ fsouza / go-dockerclient ](https://github.com/fsouza/go-dockerclient): Go HTTP client for the Docker remote API.
* <img src='https://avatars2.githubusercontent.com/u/178316?v=3&s=40' height='20' width='20'>[ go-martini / martini ](https://github.com/go-martini/martini): Classy web framework for Go
* <img src='https://avatars0.githubusercontent.com/u/5741620?v=3&s=40' height='20' width='20'>[ mailgun / godebug ](https://github.com/mailgun/godebug): A cross-platform debugger for Go.
* <img src='https://avatars0.githubusercontent.com/u/647?v=3&s=40' height='20' width='20'>[ gliderlabs / logspout ](https://github.com/gliderlabs/logspout): Log routing for Docker container logs
* <img src='https://avatars0.githubusercontent.com/u/173412?v=3&s=40' height='20' width='20'>[ spf13 / hugo ](https://github.com/spf13/hugo): A Fast and Flexible Static Site Generator built with love by spf13 in GoLang
* <img src='https://avatars1.githubusercontent.com/u/11505236?v=3&s=40' height='20' width='20'>[ google / gxui ](https://github.com/google/gxui): An experimental Go cross platform UI library.
* <img src='https://avatars2.githubusercontent.com/u/98356?v=3&s=40' height='20' width='20'>[ google / cayley ](https://github.com/google/cayley): An open-source graph database
* <img src='https://avatars1.githubusercontent.com/u/985208?v=3&s=40' height='20' width='20'>[ google / cadvisor ](https://github.com/google/cadvisor): Analyzes resource usage and performance characteristics of running containers.
* <img src='https://avatars0.githubusercontent.com/u/233907?v=3&s=40' height='20' width='20'>[ astaxie / bat ](https://github.com/astaxie/bat): Go implement CLI, cURL-like tool for humans
* <img src='https://avatars3.githubusercontent.com/u/2601015?v=3&s=40' height='20' width='20'>[ coreos / rkt ](https://github.com/coreos/rkt): rkt is an App Container runtime for Linux
* <img src='https://avatars3.githubusercontent.com/u/21?v=3&s=40' height='20' width='20'>[ github / git-lfs ](https://github.com/github/git-lfs): Git extension for versioning large files
* <img src='https://avatars1.githubusercontent.com/u/1409344?v=3&s=40' height='20' width='20'>[ CiscoCloud / mesos-consul ](https://github.com/CiscoCloud/mesos-consul): Mesos to Consul bridge for service discovery

#### javascript

* <img src='https://avatars3.githubusercontent.com/u/1162160?v=3&s=40' height='20' width='20'>[ Rich-Harris / ramjet ](https://github.com/Rich-Harris/ramjet): Morph DOM elements from one state to another with smooth transitions
* <img src='https://avatars3.githubusercontent.com/u/1480170?v=3&s=40' height='20' width='20'>[ MicrosoftDX / Vorlonjs ](https://github.com/MicrosoftDX/Vorlonjs):
* <img src='https://avatars3.githubusercontent.com/u/121766?v=3&s=40' height='20' width='20'>[ moose-team / friends ](https://github.com/moose-team/friends): P2P chat powered by the web.
* <img src='https://avatars3.githubusercontent.com/u/934293?v=3&s=40' height='20' width='20'>[ bevacqua / dragula ](https://github.com/bevacqua/dragula): Drag and drop so simple it hurts
* <img src='https://avatars3.githubusercontent.com/u/8969849?v=3&s=40' height='20' width='20'>[ google / open-location-code ](https://github.com/google/open-location-code): Open Location Codes are short, generated codes that can be used like street addresses, for places where street addresses don't exist.
* <img src='https://avatars1.githubusercontent.com/u/376661?v=3&s=40' height='20' width='20'>[ mafintosh / chromecasts ](https://github.com/mafintosh/chromecasts): Query your local network for Chromecasts and have them play media
* <img src='https://avatars1.githubusercontent.com/u/521604?v=3&s=40' height='20' width='20'>[ paldepind / flyd ](https://github.com/paldepind/flyd): The minimalistic but powerful, modular, functional reactive programming library in JavaScript.
* <img src='https://avatars1.githubusercontent.com/u/884810?v=3&s=40' height='20' width='20'>[ astoilkov / jsblocks ](https://github.com/astoilkov/jsblocks): Better MV-ish Framework
* <img src='https://avatars3.githubusercontent.com/u/8445?v=3&s=40' height='20' width='20'>[ facebook / react ](https://github.com/facebook/react): A declarative, efficient, and flexible JavaScript library for building user interfaces.
* <img src='https://avatars1.githubusercontent.com/u/231686?v=3&s=40' height='20' width='20'>[ mikolalysenko / l1-path-finder ](https://github.com/mikolalysenko/l1-path-finder): A fast planner for 2D uniform cost grids
* <img src='https://avatars0.githubusercontent.com/u/197597?v=3&s=40' height='20' width='20'>[ facebook / react-native ](https://github.com/facebook/react-native): A framework for building native apps with React.
* <img src='https://avatars2.githubusercontent.com/u/760062?v=3&s=40' height='20' width='20'>[ paypal / react-engine ](https://github.com/paypal/react-engine): a composite render engine for isomorphic express apps to render both plain react views and react-router views
* <img src='https://avatars1.githubusercontent.com/u/339208?v=3&s=40' height='20' width='20'>[ airbnb / javascript ](https://github.com/airbnb/javascript): JavaScript Style Guide
* <img src='https://avatars3.githubusercontent.com/u/216296?v=3&s=40' height='20' width='20'>[ angular / angular.js ](https://github.com/angular/angular.js): HTML enhanced for web apps
* <img src='https://avatars1.githubusercontent.com/u/1380995?v=3&s=40' height='20' width='20'>[ MeoMix / StreamusChromeExtension ](https://github.com/MeoMix/StreamusChromeExtension): A YouTube video player as a Google Chrome extension
* <img src='https://avatars1.githubusercontent.com/u/924201?v=3&s=40' height='20' width='20'>[ manifoldjs / ManifoldJS ](https://github.com/manifoldjs/ManifoldJS): JR Node.js tool for App Generation
* <img src='https://avatars3.githubusercontent.com/u/51664?v=3&s=40' height='20' width='20'>[ bendc / sprint ](https://github.com/bendc/sprint): A tiny, lightning fast jQuery-like library for modern browsers.
* <img src='https://avatars0.githubusercontent.com/u/2007468?v=3&s=40' height='20' width='20'>[ callemall / material-ui ](https://github.com/callemall/material-ui): A CSS Framework and a Set of React Components that Implement Google's Material Design.
* <img src='https://avatars0.githubusercontent.com/u/974035?v=3&s=40' height='20' width='20'>[ soundblogs / react-soundplayer ](https://github.com/soundblogs/react-soundplayer): Create custom SoundCloud players with React
* <img src='https://avatars2.githubusercontent.com/u/853712?v=3&s=40' height='20' width='20'>[ babel / babel ](https://github.com/babel/babel): Babel is a compiler for writing next generation JavaScript.
* <img src='https://avatars1.githubusercontent.com/u/845425?v=3&s=40' height='20' width='20'>[ SamyPesse / betty ](https://github.com/SamyPesse/betty): Open source Google Voice with Receptionist abilities, built on top of Twilio
* <img src='https://avatars1.githubusercontent.com/u/19343?v=3&s=40' height='20' width='20'>[ postcss / postcss ](https://github.com/postcss/postcss): Transforming CSS with JS plugins
* <img src='https://avatars0.githubusercontent.com/u/23123?v=3&s=40' height='20' width='20'>[ Semantic-Org / Semantic-UI ](https://github.com/Semantic-Org/Semantic-UI): Semantic is a UI component framework based around useful principles from natural language.
* <img src='https://avatars0.githubusercontent.com/u/150330?v=3&s=40' height='20' width='20'>[ getify / You-Dont-Know-JS ](https://github.com/getify/You-Dont-Know-JS): A book series on JavaScript. @YDKJS on twitter.
* <img src='https://avatars0.githubusercontent.com/u/7460787?v=3&s=40' height='20' width='20'>[ foam-framework / foam ](https://github.com/foam-framework/foam): Feature-Oriented Active Modeller

#### ruby

* <img src='https://avatars1.githubusercontent.com/u/17538?v=3&s=40' height='20' width='20'>[ discourse / discourse ](https://github.com/discourse/discourse): A platform for community discussion. Free, open, simple.
* <img src='https://avatars2.githubusercontent.com/u/237985?v=3&s=40' height='20' width='20'>[ jekyll / jekyll ](https://github.com/jekyll/jekyll): Jekyll is a blog-aware, static site generator in Ruby
* <img src='https://avatars2.githubusercontent.com/u/568243?v=3&s=40' height='20' width='20'>[ Homebrew / homebrew ](https://github.com/Homebrew/homebrew): The missing package manager for OS X.
* <img src='https://avatars1.githubusercontent.com/u/2248?v=3&s=40' height='20' width='20'>[ interagent / pliny ](https://github.com/interagent/pliny): Write excellent APIs in Ruby * <img src='https://avatars1.githubusercontent.com/u/3124?v=3&s=40' height='20' width='20'>[ rails / rails ](https://github.com/rails/rails): Ruby on Rails * <img src='https://avatars1.githubusercontent.com/u/657707?v=3&s=40' height='20' width='20'>[ pushpop-project / pushpop-slack ](https://github.com/pushpop-project/pushpop-slack): Send messages to Slack via Pushpop! * <img src='https://avatars3.githubusercontent.com/u/795488?v=3&s=40' height='20' width='20'>[ janko-m / as-duration ](https://github.com/janko-m/as-duration): Extraction of ActiveSupport::Duration from Rails * <img src='https://avatars1.githubusercontent.com/u/31698?v=3&s=40' height='20' width='20'>[ eliotsykes / rspec-rails-examples ](https://github.com/eliotsykes/rspec-rails-examples): RSpec cheatsheet & Rails app: Learn how to expertly test Rails apps from a model codebase * <img src='https://avatars1.githubusercontent.com/u/188?v=3&s=40' height='20' width='20'>[ sass / sass ](https://github.com/sass/sass): Sass makes CSS fun again. * <img src='https://avatars0.githubusercontent.com/u/727482?v=3&s=40' height='20' width='20'>[ caskroom / homebrew-cask ](https://github.com/caskroom/homebrew-cask): A CLI workflow for the administration of Mac applications distributed as binaries * <img src='https://avatars3.githubusercontent.com/u/505427?v=3&s=40' height='20' width='20'>[ robertomiranda / has_secure_token ](https://github.com/robertomiranda/has_secure_token): Create uniques random tokens for any model in ruby on rails. 
Backport of ActiveRecord::SecureToken 5 to AR 3.x and 4.x * <img src='https://avatars0.githubusercontent.com/u/42336?v=3&s=40' height='20' width='20'>[ voltrb / volt ](https://github.com/voltrb/volt): A Ruby web framework where your Ruby runs on both server and client * <img src='https://avatars1.githubusercontent.com/u/305940?v=3&s=40' height='20' width='20'>[ gitlabhq / gitlabhq ](https://github.com/gitlabhq/gitlabhq): GitLab is version control for your server * <img src='https://avatars2.githubusercontent.com/u/1299?v=3&s=40' height='20' width='20'>[ mitchellh / vagrant ](https://github.com/mitchellh/vagrant): Vagrant is a tool for building and distributing development environments. * <img src='https://avatars3.githubusercontent.com/u/174777?v=3&s=40' height='20' width='20'>[ pushpop-project / pushpop ](https://github.com/pushpop-project/pushpop): A framework for scheduled integrations between popular services * <img src='https://avatars1.githubusercontent.com/u/216339?v=3&s=40' height='20' width='20'>[ twbs / bootstrap-sass ](https://github.com/twbs/bootstrap-sass): Official Sass port of Bootstrap * <img src='https://avatars2.githubusercontent.com/u/1731?v=3&s=40' height='20' width='20'>[ skwp / dotfiles ](https://github.com/skwp/dotfiles): YADR - The best vim,git,zsh plugins and the cleanest vimrc you've ever seen * <img src='https://avatars1.githubusercontent.com/u/17579?v=3&s=40' height='20' width='20'>[ Thibaut / devdocs ](https://github.com/Thibaut/devdocs): API Documentation Browser * <img src='https://avatars2.githubusercontent.com/u/1043162?v=3&s=40' height='20' width='20'>[ sotownsend / BooJS ](https://github.com/sotownsend/BooJS): Unix swiss army knife for headless browser javascript * <img src='https://avatars3.githubusercontent.com/u/8445?v=3&s=40' height='20' width='20'>[ reactjs / react-rails ](https://github.com/reactjs/react-rails): Ruby gem for automatically transforming JSX and using React in Rails. 
* <img src='https://avatars1.githubusercontent.com/u/193898?v=3&s=40' height='20' width='20'>[ tombenner / ru ](https://github.com/tombenner/ru): Ruby in your shell! * <img src='https://avatars2.githubusercontent.com/u/719?v=3&s=40' height='20' width='20'>[ activeadmin / activeadmin ](https://github.com/activeadmin/activeadmin): The administration framework for Ruby on Rails applications. * <img src='https://avatars2.githubusercontent.com/u/9736?v=3&s=40' height='20' width='20'>[ emberjs / guides ](https://github.com/emberjs/guides): The source for http://guides.emberjs.com * <img src='https://avatars3.githubusercontent.com/u/5958?v=3&s=40' height='20' width='20'>[ peatio / peatio ](https://github.com/peatio/peatio): An open-source, white label cryptocurrency exchange * <img src='https://avatars1.githubusercontent.com/u/235510?v=3&s=40' height='20' width='20'>[ bmichotte / HSTracker ](https://github.com/bmichotte/HSTracker): HSTracker is a Hearthstone deck tracker for Mac OsX
---
seo:
  title: Sending New Emails via SendGrid For New Gravity Forms Submissions
  description: Learn how to send an email whenever you receive a new Gravity Form submission.
  keywords: integration, tutorial, Gravity Forms Submissions, SendGrid, Zapier
title: Sending New Emails via SendGrid For New Gravity Forms Submissions
group: partners
weight: 0
layout: page
navigation:
  show: true
---

If you want to send a new SendGrid email every time you receive a Gravity Form submission, you can do that using [Zapier](http://zapier.com).

You will need:

* a [Gravity Forms](http://www.gravityforms.com) account
* a [SendGrid](http://sendgrid.com) account
* a [Zapier](http://zapier.com) account

## Steps

1. [Getting your accounts ready](#ready)
2. [Connecting your accounts](#connect)

### Getting your accounts ready<a name="ready"></a>

To connect your Gravity Forms account to Zapier, you will need the Gravity Forms plugin with the Zapier add-on installed. You will also need a form created in Gravity Forms. To get started with Gravity Forms, and for information on creating forms and installing the add-ons that you will need, check [here](https://www.gravityhelp.com/documentation/article/getting-started/). Information for getting started with Gravity Forms on Zapier can be found [here](https://zapier.com/help/gravity-forms/#how-get-started-gravity-forms).

To link your Gravity Forms to SendGrid, you must have an active SendGrid account. To learn more about getting started with SendGrid basics, start [here]({{root_url}}/api-reference/). For more information about getting started with SendGrid on Zapier, go [here](https://zapier.com/help/sendgrid/#how-get-started-sendgrid).

### Connecting your accounts<a name="connect"></a>

Click here to [Send new emails via SendGrid for new Gravity Forms submissions](https://zapier.com/zapbook/zaps/4782/send-new-emails-via-sendgrid-for-new-gravity-forms-submissions/).

1. Follow the directions on the first step of the Zap to connect your Gravity Forms account to Zapier.
2. Choose your SendGrid account from the list of accounts, or connect a new account.
3. To connect your SendGrid account to Zapier for the first time, you will enter the credentials of an API/mail account. If you have not created this account, you can do so [here](https://sendgrid.com/credentials).

   ![Credential entry](https://api.monosnap.com/rpc/file/download?id=gAajRq9wMKNTN4HyEKzAMosD71ifb8)

4. Using fields from Gravity Forms, create and customize the **To**, **From**, and **Subject** email message that the Zap will send.

   ![Email entry](https://api.monosnap.com/rpc/file/download?id=5fpmLkDdv82LPlTeYCyhUE7bsFeSIE)

5. Click **Save + Finish**.

Now test the Zap to make sure it works. Once you’re satisfied with the results, new Gravity Forms submissions will automatically send a SendGrid email.

<call-out>
If you ever want to change this Gravity Forms and SendGrid integration, just go to [your Zapier dashboard](https://zapier.com/app/dashboard) and tweak anything you'd like.
</call-out>

You can also check out all that’s possible with [Gravity Forms on Zapier](https://zapier.com/zapbook/gravity-forms/), and other ways to connect [Gravity Forms and SendGrid](https://zapier.com/zapbook/gravity-forms/sendgrid).
# Timed media primary asset reference Schema

```
https://ns.adobe.com/xdm/context/media-timed-asset-reference
```

Asset information about the main content that was played, but present on all ads and chapters that occur during the content playback.

| [Abstract](../../abstract.md) | [Extensible](../../extensions.md) | [Status](../../status.md) | [Identifiable](../../id.md) | [Custom Properties](../../extensions.md) | [Additional Properties](../../extensions.md) | Defined In |
|-------------------------------|-----------------------------------|---------------------------|-----------------------------|------------------------------------------|----------------------------------------------|------------|
| Can be instantiated | Yes | Experimental | No | Forbidden | Permitted | [context/media-timed-asset-reference.schema.json](context/media-timed-asset-reference.schema.json) |

## Schema Hierarchy

* Timed media primary asset reference `https://ns.adobe.com/xdm/context/media-timed-asset-reference`
  * [Extensibility base schema](../common/extensible.schema.md) `https://ns.adobe.com/xdm/common/extensible`
  * [Audio](../external/id3/audio.schema.md) `https://id3.org/id3v2.4/audio`
  * [Series](../external/iptc/series.schema.md) `http://www.iptc.org/series`
  * [Season](../external/iptc/season.schema.md) `http://www.iptc.org/season`
  * [Episode](../external/iptc/episode.schema.md) `http://www.iptc.org/episode`

## Timed media primary asset reference Examples

```json
{
  "@id": "https://data.adobe.io/entities/media-timed-asset-reference/15234430",
  "dc:title": "Floki Begs Helga for Freedom",
  "dc:creator": "Video Author",
  "dc:publisher": "tvonline",
  "xmpDM:duration": 87,
  "id3:Audio": {
    "id3:TRSN": "Q991.3",
    "id3:TPUB": "Atlantic"
  },
  "iptc4xmpExt:Series": {
    "iptc4xmpExt:Name": "show_highlights",
    "iptc4xmpExt:Identifier": "http://mychanneltv.com/series-identifiers/2613953"
  },
  "xdm:showType": "episode",
  "xdm:streamFormat": "long",
  "xdm:streamType": "video",
  "iptc4xmpExt:Season": {
    "iptc4xmpExt:Number": 1
  },
  "iptc4xmpExt:Episode": {
    "iptc4xmpExt:Number": 1
  },
  "iptc4xmpExt:Genre": [
    "sports"
  ],
  "iptc4xmpExt:Rating": [
    {
      "iptc4xmpExt:RatingValue": "OTV",
      "iptc4xmpExt:RatingSourceLink": "http://www.tvmedia.org/ratings.htm"
    }
  ],
  "iptc4xmpExt:Creator": [
    {
      "iptc4xmpExt:Name": "MyChannelTV"
    }
  ]
}
```

```json
{
  "@id": "https://data.adobe.io/entities/media-timed-asset-reference/15234431",
  "dc:creator": "Jimmy Page",
  "dc:title": "Stairway to Heaven",
  "xdm:artist": "Led Zeppelin",
  "xdm:album": "Led Zeppelin IV",
  "xmpDM:duration": 482,
  "xdm:streamType": "audio"
}
```

# Timed media primary asset reference Properties

| Property | Type | Required | Default | Defined by |
|----------|------|----------|---------|------------|
| [@id](#id) | `string` | Optional |  | Timed media primary asset reference (this schema) |
| [dc:title](#dctitle) | `string` | Optional |  | Timed media primary asset reference (this schema) |
| [id3:Audio](#id3audio) | Audio | Optional |  | Timed media primary asset reference (this schema) |
| [iptc4xmpExt:Creator](#iptc4xmpextcreator) | Creator | Optional |  | Timed media primary asset reference (this schema) |
| [iptc4xmpExt:Episode](#iptc4xmpextepisode) | Episode | Optional |  | Timed media primary asset reference (this schema) |
| [iptc4xmpExt:Genre](#iptc4xmpextgenre) | `string[]` | Optional |  | Timed media primary asset reference (this schema) |
| [iptc4xmpExt:Rating](#iptc4xmpextrating) | Rating | Optional |  | Timed media primary asset reference (this schema) |
| [iptc4xmpExt:Season](#iptc4xmpextseason) | Season | Optional |  | Timed media primary asset reference (this schema) |
| [iptc4xmpExt:Series](#iptc4xmpextseries) | Series | Optional |  | Timed media primary asset reference (this schema) |
| [xdm:showType](#xdmshowtype) | `string` | Optional |  | Timed media primary asset reference (this schema) |
| [xdm:streamFormat](#xdmstreamformat) | `string` | Optional |  | Timed media primary asset reference (this schema) |
| [xdm:streamType](#xdmstreamtype) | `enum` | Optional | `"video"` | Timed media primary asset reference (this schema) |
| [xmpDM:album](#xmpdmalbum) | `string` | Optional |  | Timed media primary asset reference (this schema) |
| [xmpDM:artist](#xmpdmartist) | `string` | Optional |  | Timed media primary asset reference (this schema) |
| [xmpDM:duration](#xmpdmduration) | `integer` | Optional |  | Timed media primary asset reference (this schema) |
| `*` | any | Additional |  | this schema *allows* additional properties |

## @id
### Asset ID

Identifier of the content, which can be used to tie back to other industry or CMS IDs.

`@id`

* is optional
* type: `string`
* defined in this schema

### @id Type

`string`

* format: `uri-reference` – URI Reference (according to [RFC3986](https://tools.ietf.org/html/rfc3986))

## dc:title
### Media name

The friendly, human-readable name of the timed media asset.

`dc:title`

* is optional
* type: `string`
* defined in this schema

### dc:title Type

`string`

## id3:Audio
### Audio

Metadata specific to audio content (record label, radio station, etc.).

`id3:Audio`

* is optional
* type: Audio
* defined in this schema

### id3:Audio Type

* [Audio](../external/id3/audio.schema.md) – `https://id3.org/id3v2.4/audio`

## iptc4xmpExt:Creator
### Creator

Party or parties including person or organization which created the video, refinement by the role attribute.

`iptc4xmpExt:Creator`

* is optional
* type: Creator
* defined in this schema

### iptc4xmpExt:Creator Type

Array type: Creator

All items must be of the type:

* [Creator](../external/iptc/creator.schema.md) – `http://www.iptc.org/creator`

## iptc4xmpExt:Episode
### Episode

The episode the show belongs to.

`iptc4xmpExt:Episode`

* is optional
* type: Episode
* defined in this schema

### iptc4xmpExt:Episode Type

* [Episode](../external/iptc/episode.schema.md) – `http://www.iptc.org/episode`

## iptc4xmpExt:Genre
### Genre

Type or grouping of content as defined by content producer.

`iptc4xmpExt:Genre`

* is optional
* type: `string[]`
* defined in this schema

### iptc4xmpExt:Genre Type

Array type: `string[]`

All items must be of the type: `string`

## iptc4xmpExt:Rating
### Content rating

The rating as defined by Parental Guidelines.

`iptc4xmpExt:Rating`

* is optional
* type: Rating
* defined in this schema

### iptc4xmpExt:Rating Type

Array type: Rating

All items must be of the type:

* [Rating](../external/iptc/rating.schema.md) – `http://www.iptc.org/rating`

## iptc4xmpExt:Season
### Season

The season the show belongs to.

`iptc4xmpExt:Season`

* is optional
* type: Season
* defined in this schema

### iptc4xmpExt:Season Type

* [Season](../external/iptc/season.schema.md) – `http://www.iptc.org/season`

## iptc4xmpExt:Series
### Series

The series the show belongs to.

`iptc4xmpExt:Series`

* is optional
* type: Series
* defined in this schema

### iptc4xmpExt:Series Type

* [Series](../external/iptc/series.schema.md) – `http://www.iptc.org/series`

## xdm:showType
### Show type

The type of content, for example, trailer or full episode.

`xdm:showType`

* is optional
* type: `string`
* defined in this schema

### xdm:showType Type

`string`

## xdm:streamFormat
### Stream format

Free-form format of the stream, for example, short or long.

`xdm:streamFormat`

* is optional
* type: `string`
* defined in this schema

### xdm:streamFormat Type

`string`

## xdm:streamType

The type of the media stream.

`xdm:streamType`

* is optional
* type: `enum`
* default: `"video"`
* defined in this schema

The value of this property **must** be equal to one of the [known values below](#xdmstreamtype-known-values).

### xdm:streamType Known Values

| Value | Description |
|-------|-------------|
| `audio` | An audio stream (e.g. podcast, audiobook, radio stream). |
| `video` | A video stream (e.g. Video-On-Demand, live event stream, downloaded movie). |
| `gaming` | A gaming stream (e.g. Twitch, Hitbox). |

## xmpDM:album
### Album

The name of the album that the music recording or video belongs to.

`xmpDM:album`

* is optional
* type: `string`
* defined in this schema

### xmpDM:album Type

`string`

## xmpDM:artist
### Artist

The name of the album artist or group performing the music recording or video.

`xmpDM:artist`

* is optional
* type: `string`
* defined in this schema

### xmpDM:artist Type

`string`

## xmpDM:duration
### Media content length

Length of primary media asset in seconds.

`xmpDM:duration`

* is optional
* type: `integer`
* defined in this schema

### xmpDM:duration Type

`integer`
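As an aside, the two machine-checkable constraints documented for this schema — the `xdm:streamType` enum with its `"video"` default, and the integer `xmpDM:duration` — can be verified without any schema tooling. The following Python sketch is illustrative only and is not part of the XDM toolchain; the function name is invented:

```python
# Known values for xdm:streamType, taken from the table in this document.
ALLOWED_STREAM_TYPES = {"audio", "video", "gaming"}

def check_asset_reference(asset: dict) -> list:
    """Return a list of violations of the documented constraints."""
    errors = []
    # xdm:streamType defaults to "video" and must be one of the known values.
    stream_type = asset.get("xdm:streamType", "video")
    if stream_type not in ALLOWED_STREAM_TYPES:
        errors.append(f"xdm:streamType must be one of {sorted(ALLOWED_STREAM_TYPES)}")
    # xmpDM:duration is an optional integer number of seconds.
    duration = asset.get("xmpDM:duration")
    if duration is not None and not isinstance(duration, int):
        errors.append("xmpDM:duration must be an integer (seconds)")
    return errors

# The second example above passes both checks.
example = {
    "dc:title": "Stairway to Heaven",
    "xmpDM:duration": 482,
    "xdm:streamType": "audio",
}
assert check_asset_reference(example) == []
```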
# Simplex

Solves linear optimization problems using the simplex method.
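The repository's own sources aren't shown here, so as a neutral illustration of the method named above (not this project's actual API — the function name and signature are invented for this sketch), here is a minimal tableau-form simplex for problems of the form "maximize c·x subject to Ax ≤ b, x ≥ 0" with b ≥ 0:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b >= 0),
    using the tableau form of the simplex method with Dantzig's rule."""
    m, n = len(A), len(c)
    # Build the tableau: slack variables turn each inequality into an equality.
    tab = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
           for i, row in enumerate(A)]
    tab.append([-ci for ci in c] + [0.0] * m + [0.0])  # objective row
    basis = [n + i for i in range(m)]                  # slacks start in the basis
    while True:
        # Entering variable: most negative reduced cost.
        col = min(range(n + m), key=lambda j: tab[-1][j])
        if tab[-1][col] >= -1e-9:
            break  # no improving direction: optimal
        # Leaving variable: minimum ratio test keeps the solution feasible.
        ratios = [(tab[i][-1] / tab[i][col], i)
                  for i in range(m) if tab[i][col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)
        basis[row] = col
        # Pivot: normalize the pivot row, then eliminate the column elsewhere.
        piv = tab[row][col]
        tab[row] = [v / piv for v in tab[row]]
        for i in range(m + 1):
            if i != row and abs(tab[i][col]) > 1e-12:
                f = tab[i][col]
                tab[i] = [a - f * p for a, p in zip(tab[i], tab[row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = tab[i][-1]
    return x, tab[-1][-1]

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
x, z = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
print(x, z)  # optimum at x = [4, 0] with objective value 12
```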
# Redis extensions for .NET

This repository contains Redis service extensions with dependency injection for .NET.

## Installing

Add a package reference from NuGet

```
dotnet add package Faactory.Extensions.Redis
```

## Usage

Just configure the DI container with the service extensions.

```csharp
public void ConfigureServices( IServiceCollection services )
{
    ...

    // add your services here.
    services.AddRedisService( options =>
    {
        options.Connection = "redis-connection-string";
    } );
}
```

And then you can inject the `IRedisService` wherever you need it.

```csharp
public class WeatherController : ControllerBase
{
    public WeatherController( IRedisService redisService )
    {
        IConnectionMultiplexer connection = redisService.Connection;
    }
}
```
## `varnish:6`

```console
$ docker pull varnish@sha256:b6adc2db657ec6120661a1d6c395bce501cc3449d13596cf380f6fb8b7df46ba
```

- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms:
  - linux; amd64

### `varnish:6` - linux; amd64

```console
$ docker pull varnish@sha256:4a3cef85679a3e2b6f4ff34b421f91db431fd835f7a6661134886a06cbfb2f58
```

- Docker Version: 18.09.7
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **76.8 MB (76774316 bytes)** (compressed transfer size, not on-disk size)
- Image ID: `sha256:58fb7a9b0b998257bd2088de0b526d304c2efa3b754d73c478579c9862301bb6`
- Entrypoint: `["docker-varnish-entrypoint"]`
- Default Command: `[]`

```dockerfile
# Thu, 23 Apr 2020 00:20:32 GMT
ADD file:9b8be2b52ee0fa31da1b6256099030b73546253a57e94cccb24605cd888bb74d in /
# Thu, 23 Apr 2020 00:20:32 GMT
CMD ["bash"]
# Thu, 23 Apr 2020 19:09:45 GMT
ENV VARNISH_VERSION=6.4.0-1~buster
# Thu, 23 Apr 2020 19:09:45 GMT
ENV VARNISH_SIZE=100M
# Thu, 23 Apr 2020 19:10:12 GMT
RUN set -ex; fetchDeps=" dirmngr gnupg "; apt-get update; apt-get install -y --no-install-recommends apt-transport-https ca-certificates $fetchDeps; key=A9897320C397E3A60C03E8BF821AD320F71BFF3D; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver hkps://hkps.pool.sks-keyservers.net --recv-keys $key; gpg --batch --export $key > /etc/apt/trusted.gpg.d/varnish.gpg; gpgconf --kill all; rm -rf $GNUPGHOME; echo deb https://packagecloud.io/varnishcache/varnish64/debian/ buster main > /etc/apt/sources.list.d/varnish.list; apt-get update; apt-get install -y --no-install-recommends varnish=$VARNISH_VERSION; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps; rm -rf /var/lib/apt/lists/*
# Thu, 23 Apr 2020 19:10:12 GMT
WORKDIR /etc/varnish
# Thu, 23 Apr 2020 19:10:12 GMT
COPY file:4156d91450dca54febf2b6908a0871cf84271dba1069d9641be798ec9f560393 in /usr/local/bin/
# Thu, 23 Apr 2020 19:10:13 GMT
ENTRYPOINT ["docker-varnish-entrypoint"]
# Thu, 23 Apr 2020 19:10:13 GMT
EXPOSE 80 8443
# Thu, 23 Apr 2020 19:10:13 GMT
CMD []
```

- Layers:
  - `sha256:54fec2fa59d0a0de9cd2dec9850b36c43de451f1fd1c0a5bf8f1cf26a61a5da4`
    Last Modified: Thu, 23 Apr 2020 00:25:10 GMT
    Size: 27.1 MB (27098254 bytes)
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:ea9679bef917109b4df69e4e3303870b8b8a9824689236a57d32610c5dc018a6`
    Last Modified: Thu, 23 Apr 2020 19:10:51 GMT
    Size: 49.7 MB (49675609 bytes)
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
  - `sha256:0c71288eaf0af463539b07d00291749ee7e4349ba5b00371a4c2382213e823cc`
    Last Modified: Thu, 23 Apr 2020 19:10:40 GMT
    Size: 453.0 B
    MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
# C-exercises

Following https://www.ime.usp.br/~macmulti/exercicios/
---
title: Create a Data Quality Project | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- data-quality-services
ms.topic: conceptual
f1_keywords:
- sql12.dqs.dqproject.newdqproject.f1
helpviewer_keywords:
- create,data quality project
- data quality project,create
ms.assetid: 19c52d2b-d28e-4449-ab59-5fe0dc326cd9
author: douglaslMS
ms.author: douglasl
manager: craigg
ms.openlocfilehash: 4426c37be0996069ced05954451d1b1f70854319
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/02/2018
ms.locfileid: "48132301"
---
# <a name="create-a-data-quality-project"></a>Create a Data Quality Project

This topic describes how to create a data quality project by using the [!INCLUDE[ssDQSClient](../includes/ssdqsclient-md.md)]. A data quality project is used to perform a cleansing or matching activity in [!INCLUDE[ssDQSnoversion](../includes/ssdqsnoversion-md.md)] (DQS).

## <a name="BeforeYouBegin"></a> Before You Begin

### <a name="Prerequisites"></a> Prerequisites

You must have a relevant knowledge base to use in the data quality project for the cleansing and matching activity.

### <a name="Security"></a> Security

#### <a name="Permissions"></a> Permissions

To create a data quality project, you must have the dqs_kb_editor or dqs_kb_operator role on the DQS_MAIN database.

## <a name="Create"></a> Create a Data Quality Project

1. [!INCLUDE[ssDQSInitialStep](../includes/ssdqsinitialstep-md.md)] [Run the Data Quality Client application](../../2014/data-quality-services/run-the-data-quality-client-application.md).

2. In the [!INCLUDE[ssDQSClient](../includes/ssdqsclient-md.md)] home screen, click **New Data Quality Project**.

3. In the **New Data Quality Project** screen:

   1. In the **Name** box, type a name for the new data quality project.
   2. In the **Description** box, type a description for the new data quality project (optional).
   3. In the **Use knowledge base** list, click to select a knowledge base to use for the data quality project. The **Knowledge base details: \<Knowledge_Base_Name>** area on the right-hand side displays the domain names available in the selected knowledge base.
   4. In the **Select activity** area, click the activity that you want to perform by using this data quality project:
      - **Cleansing**: select this activity to cleanse your source data.
      - **Matching**: select this activity to perform matching. This activity is available only if the knowledge base selected for the data quality project contains a matching policy.

4. Click **Create** to create the data quality project.

## <a name="FollowUp"></a> Follow Up: After Creating a Data Quality Project

After you create a data quality project, a wizard is displayed that you can use to perform the selected activity: cleansing or matching. For more information about the cleansing and matching activities, see [Data Cleansing](../../2014/data-quality-services/data-cleansing.md) and [Data Matching](../../2014/data-quality-services/data-matching.md).
# Laravel + Vue.js bulletin-board SPA

## Overview

A bulletin-board-style SPA with Laravel on the server side and Vue.js on the front end. It supports user registration (including with a Google account), login/logout, and posting and deleting text and images. It runs on Heroku. (I keep adding features little by little as I learn.)

https://laravel-board.herokuapp.com/

## Background and motivation

- Before this application I built a similar bulletin-board application in plain PHP, but my mentor told me that real-world projects almost always use a framework, so I wanted to build an application in a form closer to actual practice.
- The web application I maintained at my previous job had been in maintenance for nearly ten years and had grown so large that nobody understood the whole system. Since only server-side fixes ever came my way, I wanted to build a complete application, however small, to learn the essential flow and mechanisms end to end.
- Testing there meant manually taking screenshots, so I also wanted to get at least somewhat used to writing test code.

#### Why a bulletin-board app?

- I wanted to prioritize learning the modern stack of a single-page application built with Laravel and Vue.js (rather than spending resources on ideas and specifications).
- I judged that even a bulletin board covers the essential building blocks: user registration, login, and posting text and images.

## Points I focused on

- When creating a new API, I wrote the tests first. This made the processing needed in the controllers and validation concrete, which made development easier.
- I kept the amount of code in each controller small so it stays maintainable. At first I did validation in the controllers too, but as they grew bloated I moved validation into form requests, keeping the controllers to the bare minimum.
- Images are handled through a dedicated image model that stores the file name so the actual file can be referenced. When a post is deleted, its attached images are deleted as well, but deleting the actual file by hand every time an image model is deleted is tedious and easy to miss. I therefore used an observer so that whenever an image model is deleted, the actual image file is deleted too.
- On the front end, I implemented the post form, pagination, header, and so on as components so they can be reused across pages.

## Difficulties

- Laravel's defaults are not geared toward SPAs, and CSRF token verification errors occurred on form submission. The official documentation did not explain the verification details, so I read the source of the method that actually performs the CSRF token verification to understand the behavior and resolve the issue.
- This was my first single-page application, and the front-end implementation differs from a Laravel-only app, so it took a while to get used to. I learned by writing up the steps and mechanisms separately while building each page.

## Local development environment

- CentOS 7.6 (local virtual machine)
- Laravel 5.8
- PHP 7.3
- Vue.js 2.6
---
layout: slide
title: "Welcome to our second slide!"
---

This is a test!

**Now it's getting interesting**

Use the left arrow to go back!
# Javel 🎁 Wrap your plain JavaScript objects into customizable Laravel-like models. [Read introduction article](https://lorisleiva.com/introducing-javel/). ![cover](https://user-images.githubusercontent.com/3642397/52560981-55702e80-2dfa-11e9-979f-179e7fccadde.png) ## Installation ```sh npm i javel -D ``` ## Overview ```js import Model from 'javel' class Article extends Model {/* ... */} await Article.all({ /* request */ }) // => [ Article* ] await Article.paginate({ query: { page: 2 } }) // => { data: [ Article* ], current_page: 2, ... } await Article.find(1) // => Article { id: 1, ... } let article = await Article.create({ name: 'My article' }) // => Article { id: 2, name: 'My article' } await article.update({ name: 'My updated article' }) // => Article { id: 2, name: 'My updated article' } await article.delete() // => Deleted from the server article = new Article({ name: 'My draft blog post' }) article.name = 'My new blog post' await article.save() // => Article { id: 3, name: 'My new blog post', ... } ``` ## Getting started Start by creating your base model that all other models will extends from. In there you can override any logic you want or, even better, attach additional behavior using mixins ([see below](#a-chain-of-mixins)). ```js import { Model as BaseModel } from 'javel' export default class Model extends BaseModel { // } ``` Typically, in this base model, you would set up how to reach your server by overriding the `baseUrl` and `makeRequest` methods like this: ```js export default class Model extends BaseModel { baseUrl () { return '/api' } makeRequest ({ method, url, data, query }) { return axios({ method, url, data, params: query }) } } ``` Note that `baseUrl` defaults to `/api` and that `makeRequest` will automatically use axios if it available in the `window` (which is the case by default in Laravel). Next, create specific models for your application. 
```js import Model from './Model.js' export default class Article extends Model { // Your logic here... } ``` Finally you will likely want to configure which URL should be used for each actions (find, create, update, etc.). You might also want to add some behavior right before or after requests are made and customize how to handle the response. You can learn all about this in the [documentation of the `MakesRequests` mixin](docs/MakesRequests.md). ## A chain of mixins Javel uses the [mixwith library](https://github.com/justinfagnani/mixwith.js) to separate each functionality of a Model into dedicated mixins (comparable to how Eloquent uses traits in Laravel). For the sake of convenience, Javel exposes the mixwith's API directly: ```js import { Model as BaseModel, Mixin, mix } from 'javel' // Create a mixin const ImmutableModels = Mixin(superclass => class extends superclass { // }) // Use a mixin class Model extends mix(BaseModel).with(ImmutableModels) { // } ``` You can of course combine as many mixins as you want. ```js import { Model as BaseModel, mix } from 'javel' import { MixinA, MixinB, MixinC } from './mixins' // Use a mixin class Model extends mix(BaseModel).with(MixinA, MixinB, MixinC) { // } ``` Note that the order in which you use your mixins is important. The mixins will be applied using inheritance from right to left. Therefore the previous example is comparable to: ```js class MixinA extends BaseModel {} class MixinB extends MixinA {} class MixinC extends MixinB {} class Model extends MixinC {} ``` Check out [the lifecycle of a base model](docs/lifecycle.md) before creating your own mixins. ## Mixins included in Javel's Model By default, the base Model provided by javel includes the following mixins (in this order, i.e. the lower overrides the higher). You can learn more about each of them by reading their dedicated documentation. 
- [HasAttributes](docs/HasAttributes.md)
  Defines the basis of getting and setting attributes on a Model and provides some useful methods like `primaryKey`, `exists`, `is`, `clone`, etc.
- [HasRelationships](docs/HasRelationships.md)
  Enables models to configure their relationships with each other so that their attributes are automatically wrapped in the right model.
- [KeepsParentRelationship](docs/KeepsParentRelationship.md)
  Ensures each child relationship keeps track of its parent and how to access itself from it. This enables models to climb up the relationship tree and even remove themselves from their parent when deleted.
- [MakesRequests](docs/MakesRequests.md)
  Introduces async actions (`find`, `create`, `update`, etc.) to conveniently request the server and provides all the hooks necessary to customize how to handle your request/response protocol for each model.

## Extra mixins available

Javel also provides some additional mixins that can be useful to plug in or to draw inspiration from when writing your own. Don't hesitate to PR your best mixins and share them with us.

- [GeneratesUniqueKey](docs/GeneratesUniqueKey.md)
  Attaches a unique key to every new model instantiated. If the model has a primary key available, the primary key will be used instead of generating a new unique key.
- [UsesMethodFieldWithFormData](docs/UsesMethodFieldWithFormData.md)
  Transforms the `update` action to use the `POST` method with the `_method=PATCH` field when the provided data is an instance of FormData.
- [IntegratesQueryBuilder](docs/IntegratesQueryBuilder.md)
  An easy way to build a query string compatible with "spatie/laravel-query-builder" *(Has dependencies: [`js-query-builder`](https://github.com/coderello/js-query-builder))*.

## Optional dependencies

Some extra mixins have additional dependencies that need to be resolved. For example, some mixins could wrap a third-party library to make it work with Javel out-of-the-box.
Because these mixins are optional (you can choose not to add them), their dependencies must also be optional so that you don't end up loading lots of dependencies you don't need. This means, when you *do* decide to pull in a mixin that has dependencies, you have to install them yourself and tell Javel how to access them, like this:

```sh
npm i third-party-library
```

```js
import ThirdPartyLibrary from 'third-party-library'
import { Model as BaseModel, mix, SomeMixinThatUsesThirdPartyLibraries, registerModule } from 'javel'

registerModule('third-party-library', ThirdPartyLibrary)

class Model extends mix(BaseModel).with(SomeMixinThatUsesThirdPartyLibraries) {
    //
}
```

<small>Note: This behaviour has been designed as a workaround for webpack's optional `externals`, which [unfortunately create warnings when optional dependencies are not present](https://github.com/webpack/webpack/issues/7713#issuecomment-467888437).</small>
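The mixin chaining described earlier is plain JavaScript under the hood. As a minimal sketch (an illustration only, not Javel's actual implementation), here is how the right-most mixin ends up overriding the ones before it:

```javascript
// Each mixin is a function from a superclass to a subclass.
const MixinA = Base => class extends Base {
  tag () { return 'A' }
}
const MixinB = Base => class extends Base {
  // Overrides MixinA's tag, but can still reach it via super.
  tag () { return 'B:' + super.tag() }
}

class BaseModel {}

// mix(BaseModel).with(MixinA, MixinB) is roughly MixinB(MixinA(BaseModel)):
// later mixins sit closer to the final class, so they win.
class Model extends MixinB(MixinA(BaseModel)) {}

console.log(new Model().tag()) // => B:A
```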
---

*Source: `README.md` from `mivanchenko/CD2` (license: Artistic-2.0)*
# CD2

CD2, current, 2 tracks
---

*Source: `wiki/farming/farming_manual/3bot_farm_mgmt.md` from `OmarElawady/info_threefold` (license: Apache-2.0)*
# 3bot Farm management

This section of your 3Bot lets you create, manage and monitor your farms.

<!-- * [Monitoring the nodes health and usage](inspecting-an-existing-farm) -->

## Creating a new farm

Follow this detailed guide to [create a farm](farm_init).

## Edit a farm

If you want to change any of the details of your farm, click the gear icon on the right side of the farm table, under the actions column. This opens a popup containing a form that lets you edit any of the farm's information.

![gear](img/3bot_farmmgmt_gear.png)

### Changing the owner of a farm

If you want to change the owner of a farm, open the edit popup and change the `3Bot ID` field. Be careful when doing so, because once a farm has changed owner there is no way to get it back unless the new owner agrees to return it. See also the [Farm Migration](farm_migration) section.

## Inspecting an existing farm

The top table shows all the farms that you own:

![overview](img/farm_management_overview.png)

You can click the eye icon in the action column to open the listing of the nodes belonging to the farm in the bottom table. This lists all the nodes linked to the selected farm.

![nodes listing](img/farm_management_nodes.png)

### Deleting a dead node

One or more of your nodes may die due to hardware failures. If you want to clean one up and remove it from the farm, click the trash bin icon on the right side of the nodes table.
---

*Source: `archived/2020-10-06/2020-10-06-068 电影传奇(总策划:崔永元): 《林海雪原》之《杨子荣》.md` from `NodeBE4/teahouse` (license: MIT)*
---
layout: post
title: "068 电影传奇(总策划:崔永元): 《林海雪原》之《杨子荣》"
date: 2020-10-06T01:23:41.000Z
author: 崔永元
from: https://www.youtube.com/watch?v=Plz4dblaopw
tags: [ 崔永元 ]
categories: [ 崔永元 ]
---
<!--1601947421000-->

[068 Movie Legend (chief planner: Cui Yongyuan): "Tracks in the Snowy Forest" — "Yang Zirong"](https://www.youtube.com/watch?v=Plz4dblaopw)

------

<div>
Today we release the episode of Movie Legend (电影传奇) first broadcast in April 2004: "Tracks in the Snowy Forest" — "Yang Zirong", uncovering little-known historical legends and the stories behind the film. Tracks in the Snowy Forest (《林海雪原》) is a war film produced by the PLA's August First Film Studio, directed by Liu Peiran, starring Wang Runshen, Zhang Yongshou and Zhang Liang, and released in 1960. Cui Yongyuan holds the license for Movie Legend, including permanent worldwide online distribution rights. Sharing is welcome, but re-uploading is strictly prohibited! Friends who have posted pirated copies of Movie Legend, please delete them as soon as possible. Thank you for your past promotion.
</div>
---

*Source: `src/pages/pt/appflow/cookbook/private_npm_modules.md` from `krlevkirill/ionic-docs` (license: Apache-2.0)*
---
previousText: 'Using private GIT repositories'
previousUrl: '/docs/appflow/cookbook/private_git'
nextText: 'DevApp: Local Development'
nextUrl: '/docs/appflow/devapp'
---

# Using private NPM modules

Get an authentication token using the npm CLI:

    $ npm token create --read-only
    npm password:
    ┌────────────────┬──────────────────────────────────────┐
    │ token          │ 1a583a54-5515-4058-a3c4-047e5f699d27 │
    ├────────────────┼──────────────────────────────────────┤
    │ cidr_whitelist │                                      │
    ├────────────────┼──────────────────────────────────────┤
    │ readonly       │ true                                 │
    ├────────────────┼──────────────────────────────────────┤
    │ created        │ 2019-01-08T20:53:17.461Z             │
    └────────────────┴──────────────────────────────────────┘

Configure an `NPM_TOKEN` secret in your Appflow environment using the generated token as its value:

![NPM token secret](/docs/assets/img/appflow/cookbook/npm-token-secret.png)

Check in a `.npmrc` file in the root of your project directory with the following line:

    //registry.npmjs.org/:_authToken=${NPM_TOKEN}
---

*Source: `README.md` from `gearfactory/Crab-Rust` (license: MIT)*
# Crab-Rust

Crab RPC Rust framework
---

*Source: `README.md` from `code4wt/mybatis-test` (license: Apache-2.0)*
# mybatis-test

## 1. Introduction

The companion source code for *A Little Book of MyBatis Source Code Analysis* (《一本小小的MyBatis源码分析书》). The book is published as a free e-book and can be downloaded as follows:

Baidu Netdisk: [click to download](https://pan.baidu.com/s/1d0JTkab0gHOApXrMUbHGuQ)

<img src="http://blog-pictures.oss-cn-shanghai.aliyuncs.com/shuji.png" width="200px"/>

## 2. Usage

The file structure of the mybatis-test project is roughly as follows:

```
.
├── sql
│   └── myblog.sql
└── src
    ├── main
    │   ├── java
    │   └── resources
    └── test
        ├── java
        └── resources
```

Usage steps:

1. Run the sql/myblog.sql script to import the data into your database
2. Edit the test/resources/log4j.properties file and change the database configuration to the database you are using
3. Run the test code under the test/java/ folder
---

*Source: `Language/Reference/User-Interface-Help/decimal-data-type.md` from `italicize/VBA-Docs` (licenses: CC-BY-4.0, MIT)*
---
title: Decimal Data Type
keywords: vblr6.chm1099868
f1_keywords:
- vblr6.chm1099868
ms.prod: office
ms.assetid: 5f70e06b-61da-e0be-9f96-7dd84f377c74
ms.date: 06/08/2017
---

# Decimal Data Type

[Decimal variables](../../Glossary/vbe-glossary.md#decimal-data-type) are stored as 96-bit (12-byte) signed integers scaled by a variable power of 10. The power of 10 scaling factor specifies the number of digits to the right of the decimal point, and ranges from 0 to 28.

With a scale of 0 (no decimal places), the largest possible value is +/-79,228,162,514,264,337,593,543,950,335. With 28 decimal places, the largest value is +/-7.9228162514264337593543950335 and the smallest non-zero value is +/-0.0000000000000000000000000001.

**Note** At this time the **Decimal** data type can only be used within a [Variant](../../Glossary/vbe-glossary.md#variant-data-type); that is, you cannot declare a variable to be of type **Decimal**. You can, however, create a **Variant** whose subtype is **Decimal** using the **CDec** function.
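As a brief, hedged sketch of the pattern described in the note above (names and literals are illustrative only): a **Decimal** value is created inside a **Variant** via **CDec**, and its subtype can be checked with **VarType**.

```vb
Sub DecimalDemo()
    Dim v As Variant           ' Decimal can only live inside a Variant
    v = CDec("0.0000000000000000000000000001")

    ' The Variant's subtype is now Decimal
    Debug.Print VarType(v) = vbDecimal

    ' Arithmetic on the Decimal subtype stays exact at up to 28 decimal places
    Debug.Print v + v
End Sub
```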
---

*Source: `dynamicsax2012-technet/usa-about-working-with-commerce-services.md` from `s0pach/DynamicsAX2012-technet` (licenses: CC-BY-4.0, MIT)*
---
title: (USA) About working with Commerce Services
TOCTitle: (USA) About working with Commerce Services
ms:assetid: 027e5a4e-eb54-4abc-b7fb-81db05aa6832
ms:mtpsurl: https://technet.microsoft.com/library/Hh242101(v=AX.60)
ms:contentKeyID: 36055930
author: Khairunj
ms.date: 04/18/2014
mtps_version: v=AX.60
audience: Application User
ms.search.region: USA
---

# (USA) About working with Commerce Services

_**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012_

After you set up an account for Commerce Services for Microsoft Dynamics ERP and set up marketplaces, you can list your products online and process sales orders as customers buy your products.

The following steps describe the basic workflow for working with Commerce Services:

1. In Microsoft Dynamics AX, select the products to sell online, and then synchronize the products with Commerce Services. For more information, see [(USA) Select products to sell online](usa-select-products-to-sell-online.md).

2. Organize the items in Commerce Services. For example, you can create categories and catalogs, and add images to products. For more information, see [(USA) Organize and edit products in Commerce Services](usa-organize-and-edit-products-in-commerce-services.md).

3. List products and catalogs in online marketplaces, such as eBay, and in online stores that your organization creates. For more information, see [(USA) List products and catalogs in online marketplaces](usa-list-products-and-catalogs-in-online-marketplaces.md).

4. Download orders from online marketplaces, and process the orders in Microsoft Dynamics AX. You can download orders on demand. Orders can also be downloaded automatically, based on a synchronization schedule. For more information, see [(USA) Download orders from online marketplaces](usa-download-orders-from-online-marketplaces.md).
You can also specify some settings for Commerce Services, in both Microsoft Dynamics AX and Commerce Services. For example, in Microsoft Dynamics AX, you can specify category hierarchies and shipping charges. In Commerce Services, you can specify departments and categories, shipping methods, payment providers, and payment methods. For more information, see [(USA) Set up and maintain a Commerce Services account](usa-set-up-and-maintain-a-commerce-services-account.md) and [(USA) Modify Commerce Services settings](usa-modify-commerce-services-settings.md).

> [!NOTE]
> <P>This feature is not available if Microsoft Dynamics AX 2012 R3 is installed.</P>

## See also

[(USA) About Commerce Services](usa-about-commerce-services.md)

[(USA) Select online marketplaces](usa-select-online-marketplaces.md)
---

*Source: `_posts/2007-10-10-My Musings.md` from `seanseah/seanseah.github.io` (license: MIT)*
---
layout: post
title: Collection of Thoughts
description:
headline:
categories: personal
tags:
  - reflections
imagefeature:
comments: false
mathjax: null
featured: true
published: true
---

## Musing 1: Consumed by Regret

_All the world's a stage, And all the men and women merely players._

Like a play, there are two types, tragedies and comedies - and I often view mine as a tragedy. It somehow falls back to the glass being half full or half empty analogy. Like a looking glass, a lens could be all that matters in shaping one's opinion. I have been full of **pessimism** and often view the world with a set of negatively tinted lenses. This has affected me and, more often than not, results in making worse decisions.

Some decisions in life are those that we will live to regret. Yet, there is no way for this flow to reverse its direction and we would just have to accept and live with the decisions we made. There isn't really much we can do once we have made the decision. It is often that we tend to reminisce on how things were and what we should have done instead, like History, a lesson that is often criticized for looking at things once past. It is not that looking back is bad; ideally, we look back to learn from past mistakes, but there isn't really a need to dwell too much on it. Any more would just be wallowing in depression.

There's something we can do though. It is to maximise every opportunity that we have, to do the best that we can, such that we can proudly say that there's nothing to regret at the end of it all. Live life to its fullest, by giving your all. There's no point in half-hearted efforts.

Despite the many bumps and the occasional pitfalls that we might encounter, life gives as much as it takes. At times, I wonder why life is so difficult, why there seems to be a constant ebb of life taking from us and why we want to put ourselves through so many painful, tormenting episodes.
It is sometimes hard to find value in life and I used to have the belief that living is akin to suffering, a series of trials and tribulations perhaps. I alluded to life being a tragedy at the onset, yet I do have many beautiful and wonderful things that have happened in my life. It is just unfortunate that I tend to let the small unhappy things cloud over the good things, and it has been a struggle for me to look at things positively.

The way out of it all? Just relax. We shouldn't think too much and should just accept life for the little pleasures it offers, and not brood too much. Life is a long journey and for all the ups and downs, the final destination for everyone is the same, so enjoy the journey while it lasts.

## Musing 2: The Early Bird or Worm

It's been awhile since I was out before the sun came out, and I wasn't exactly looking forward to it either. I don't really have much option but to blame myself for not getting my bus pass earlier but well, perhaps taking the early cycling trip this morning isn't that bad a thing either. That's just how I am. At times, stingy to the extent that I would torture myself just to save a dollar?

It was a 10 min trip to Kembangan station, and the moment I turned out from my house, I could feel the cold wind coming headlong as I pedalled faster; I started to shiver on the bike. Yet, it was surprisingly quite refreshing! I felt rejuvenated and excited to cycle faster. I guess I am just very much a morning person and that little morning ride energised me and got me ready for the day.

Mornings are the most productive time for me. Therefore, I need to make full use of this morning time, and to dedicate my energy and attention span to the tasks that need it the most. One article I read suggested that people should avoid checking emails during the morning, well, because emails are varied, at times include work chatter, and all these are just distracting.
Before you are aware of it, half the morning has passed and so too has the optimal window for productivity.

It was quite strange to see so many school kids getting ready for school, walking to the MRT or to school, a scene that disappears once the sun comes up. Wasn't it just a few years ago that I was part of this very same school-going crowd? Very soon, in a few months' time, I will probably rejoin this crowd as I will have to get up at like 6, put on my uniform, and make my way to camp. Going backwards in time. I'm indeed looking forward to the end of the semester.

## Musing 3: My Decision to Join the Air Force

A trip down memory lane saw me going back 4 years ago, back when I was still a fresh recruit in the army, just having received my A level results, pondering my next step in life. I was undecided on what I wanted to be, and I took up the first job offer that sort of came my way. I would like to think that it was not a reckless decision, but whether it was a well-informed one remains to be seen.

Many factors weighed in on why I chose to be an Air Engineering Officer. One reason was to truncate my National Service. I wondered then if an AEO career would mean evading National Service responsibilities. Instead of two years of National Service, I figured I could substitute it with 5 years of 'Regular' work, get some work experience and money, while I figure out my next step in life. I would also be allowed to defer my current duties and go for studies immediately within the same year. This would give me a "two year" advantage over my peers.

Yet, looking at my friends job hunting now, I think I haven't changed all that much in these 4 years and am still not sure what I want to do. I guess this dilly-dallying over job issues is more to do with my possibly good grades and potential job offers out there, and yet, I am tied down with an SAF LSA scholarship and might be pegged down based on the various scholarship rankings. A sign of jealousy perhaps?
In a materialistic vein, it's all about the money! In a way, I can see this as a career. It isn't exactly a bad job and it does offer certain levels of prospects, and all in all, working with technology is still an interesting thing for me. Furthermore, it could be interesting if I can connect my level of interest and merge it with the aviation stuff. The drawback would be more of how other people view the job? A military job is often looked down upon and unlikely to raise the admiration of people, shucks :) That's my ego speaking.

This has also been a year of rumours, as the AEOs discuss what our job will be like, the initial phase of training, and important gossipy bits such as the pay structure. I love money. Who doesn't? But what am I going to do with all that money down the road? I don't know. Can it actually make me happy? I am not sure; in truth, probably not. It can make me have a comfortable life, but apart from that, does having more money directly make me happy?

Perhaps it is time to review what I really want to do with my life. As I trawl the Internet on what motivates people and advice on the important things to focus on, I came up with a smorgasbord of tips that I hope would benefit me.

1. Spending Time with Family and Friends.

   As I grow older, I spend less time with friends. Spending time with family is also problematic as, more often than not, we are consumed by work. As we grow older, responsibilities increase, work consumes us. But is that truly important? The ages of 20-50 are the prime time of our lives, when our minds and bodies are the most capable. Are we actually putting them to good use by slogging our lives away for someone else?

2. Health

   I hope to die in my sleep, and not have to fight sickness. There's no escaping death, but I would like to imagine or at least hope that I can control the way I die. Yeah right. I have heard horror stories of people suffering from strokes and being bed-ridden for two to three years before they pass away.
You may have friends or relatives in similar situations. It is not uncommon, and I definitely don't think it is enjoyable. Spending my remaining years staring at a ceiling as life passes me by - knowing that there's not much waiting for me even if I were able to recover. Health is important, so that we reduce the probability of this happening. And hopefully, we can have a say in how we decide to leave this world.

3. Being Happy.

   Don't you think we are just too gloomy all the time? Coming from me, this must be the biggest oxymoron ever. But hear me out. *Being Happy* is about a positive state of mind. And laughter, associated with happiness, is linked to all sorts of positive things that make you healthier. Scientifically, a laugh releases oxytocin throughout the body, which evokes happiness and calmness.

   > _We don't laugh because we're happy – we're happy because we laugh._
   > <small>William James</small>

   Honestly, money doesn't make me happy. I can hire a comedian with it perhaps, but there are tons of ways to make one laugh, on the cheap. And being happy starts with just being positive.

4. Giving

   This is something that I need to do more of. Help others; it need not be monetary aid but anything that can benefit or assist others. If anything, life is short and there are others around that are equally in need. We can and should always do more to help. At the same time, do not be selfish. Do not be evil. These two traits would only serve to take away the goodness that you or others have done.

5. Your Legacy

   A wise man once said that money doesn't last past three generations. In fact, leaving too much money behind may not be a wise thing. Haven't you heard of inflation? It destroys money. All the money today would not suffice for others to bat their eyelids at in 20-30 years to come. On the other hand, legacies last for thousands of years. What's your legacy?

## Musing 4: My Personality

Never a big fan of such tests, but well, it's a really fast one.
And surprisingly I think this test is quite accurate.

**Colourgenics**

"You feel as if you have missed out on a great deal that life had to offer and you go about trying to make up for past failures. Naturally at times you get depressed and you try to compensate for your 'missed opportunities' by living your life to the full. This is what, perhaps, may be described as 'living with exaggerated intensity'. In this way you feel you can break the chains of the past and start again - and it could be that you are right.

You are lazy - you dream of a peaceful, calm, uncluttered and uncomplicated life. Your ideal would be to share a permanent base with some person or persons who would be able to demonstrate on-going love, peace and security.

You need a friend - a close friend - and you are willing to become emotionally involved with the right person, but you are very demanding and particular in your choice of partners. You are constantly looking for reassurance and it is perhaps because of this that you tend to be somewhat argumentative, but you try to hold back - careful to avoid open conflict - since this might reduce your prospects of realising your hopes of establishing a warm caring relationship.

Nobody seems to understand you at this moment for everything you suggest or do seems to be taken up the wrong way. All of this misunderstanding is leading to anxiety and stress. The situation naturally is not as you would like it to be - you feel that you are being treated most unfairly and that trust, affection and understanding are being withheld from you and that you are being treated with a demeaning lack of consideration. You consider yourself being denied the appreciation essential to your well-being and self-esteem and that there is nothing you can do about it. You feel that whatever you try to do to change the situation, you are getting nowhere fast.
You would really like to get away from it all but can't find the energy or the strength of mind to make the necessary decision. The tensions that you are trying to cope with are a result of conditions which are really beyond your control. As a consequence of this almost impossible situation and not being able to get your own way, you are subjected to frustration and almost ungovernable anger. You are trying to remedy the situation but the stress that you are experiencing is making the situation even worse. You feel so inadequate that you are not quite sure which way to turn. A good suggestion would be to try to relieve the stress and anxiety by participating in some very active physical activity which will relieve your tension."
---

*Source: `docs-2.0/nebula-explorer/explorer-console.md` from `qizhuluanpaosha/nebula-docs-cn` (license: Apache-2.0)*
# Console

The console feature of Explorer allows users to enter nGQL statements manually and presents the query results visually.

In the upper-right corner of the Explorer page, click ![console](figs/nav-console.png) to open the console page.

The Explorer console works the same way as the one in Studio. For details, see [Console](../nebula-studio/quick-start/st-ug-console.md).
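As a quick illustration (these statements are examples only, not from the source page), a few nGQL statements one might enter in the console:

```
SHOW SPACES;            -- list the available graph spaces
USE basketballplayer;   -- switch to a space (example space from the Nebula docs)
MATCH (v:player) RETURN v LIMIT 3;   -- return three player vertices for visualization
```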
---

*Source: `content/curriculum/guides/2016/3/16.03.02.x.md` from `kenlu89/teachers_institute` (license: MIT)*
---
layout: "unit"
title: "Guide Entry 16.03.02"
path: "/curriculum/guides/2016/3/16.03.02.x.html"
unitTitle: "The Citizenship Complex: Why the Vote Matters in the Race for Freedom and Equality for All"
unitAuthor: "Vancardi Dwight Foster"
keywords: "(Developed for United States History, Facing History and Ourselves, grades 10-11; recommended for United States History, Civics, Facing History and Ourselves, grades 9-12)"
recommendedFor: "Developed for United States History, Facing History and Ourselves, grades 10-11; recommended for United States History, Civics, Facing History and Ourselves, grades 9-12"
---

<main>
<p>
Not all people are born equal or free, but there is an expectation of both when you are a citizen of the United States. Our struggles to earn the base level of representation are quickly forgotten as we look for another group to demonize. In my unit we will discover why George Washington was ahead of his time with his warning about "factions" and how their existence makes freedom and equality harder to bridge. As we trek through time highlighting issues such as the abolition of slavery, support for women's suffrage, and the challenges that face Asian and LGBTQIA communities, my hope is that students understand the sacrifices made to be accepted and to earn the right to vote, but more importantly the difficulty in being welcomed into American society.
</p>
<p>
The “Citizenship Complex” is the process by which groups gain full inclusion. To understand it, one must look to the intersection of law, citizenship and the Constitution. The unit aims to provide a more complex history of our nation, to tell a more earnest story of how the American identity became a mosaic of human struggle, and to offer a more robust and enlightening study of these issues so that as students recognize the power of citizenship they will take a more hopeful view of what our nation will look like in the future.
By engaging in the sophisticated discussions of the past, identifying why some groups supported each other and scapegoated others, and learning about the importance of supporting efforts at inclusion, our students should become more informed, open-minded, and ready for the globalized world of the 21<sup>st</sup> Century.
</p>
<p>
The unit will focus on four groups that have experienced the “Citizenship Complex”: African-American slaves, women, Asian immigrants, and the LGBTQIA community. By comparing these groups over time, we will really be able to unearth the cycles behind the Citizenship Complex and understand what American citizenship means at different times in our country's history.
</p>
<p>
(For use with U.S. History and Civics classes but can also be used with the Facing History and Ourselves Curriculum)
</p>
<p>
Keywords: government, supreme court, citizenship complex, citizenship, freedom, equality, federalism, republicanism, civics, amendments, identity, democracy, self-determination, the other, disenfranchisement, voting, gender, Civil Rights, Constitution, and rights
</p>
<p>
(Developed for United States History, Facing History and Ourselves, grades 10-11; recommended for United States History, Civics, Facing History and Ourselves, grades 9-12)
</p>
</main>
---

*Source: `roadmap/README.md` from `theinfinit/awesome-rust` (license: CC0-1.0)*
<sup>forked from [anshulrgoyal/rust-web-developer-roadmap](https://github.com/anshulrgoyal/rust-web-developer-roadmap)</sup>

# Rust Web Developer Roadmap

> Roadmap to becoming a [Rust](https://www.rust-lang.org/) web developer in 2021

Below you can find a chart demonstrating the path you may take and the libraries you may require to become a Rust web developer. This chart is made with inspiration from the [Golang Developer Roadmap](https://github.com/Alikhll/golang-developer-roadmap/).

[简体中文版](./i18n/zh-CN/README-zh-CN.md)

## Disclaimer

> The purpose of this roadmap is to help beginner Rust web developers navigate through frameworks and libraries in the Rust ecosystem while staying as productive as possible. The libraries and (my personal) recommendations listed under each stage of the following roadmap have been researched to the best of my capacity. You should always do research on your end and build up a solution that best works for you.

## Roadmap

![Roadmap](./rust-web-developer-roadmap.png)

## Resources

1. Prerequisites
   - [Rust](https://www.rust-lang.org/)
   - [The Book](https://doc.rust-lang.org/book/)
   - [Rustlings Course](https://github.com/rust-lang/rustlings/)
   - [Rust by Example](https://doc.rust-lang.org/stable/rust-by-example/)
   - [Async Programming](https://rust-lang.github.io/async-book/)
   - [Rustup](https://www.rust-lang.org/tools/install)
   - [Cargo Book](https://doc.rust-lang.org/cargo/index.html)
   - [Crates.io](https://crates.io/)
2. CLI
   - [clap](https://crates.io/crates/clap)
   - [structopt](https://crates.io/crates/structopt)
   - [argh](https://crates.io/crates/argh)
3. Web Frameworks
   - [actix-web](https://crates.io/crates/actix-web)
   - [gotham](https://crates.io/crates/gotham)
   - [nickel](https://crates.io/crates/nickel)
   - [rocket](https://crates.io/crates/rocket)
   - [tide](https://crates.io/crates/tide)
   - [tower-web](https://crates.io/crates/tower-web)
   - [warp](https://crates.io/crates/warp)
4. ORM
   - [diesel](https://crates.io/crates/diesel)
   - [rustorm](https://crates.io/crates/rustorm)
5. Caching
   - [redis](https://crates.io/crates/redis)
   - [sled](https://crates.io/crates/sled)
6. Logging
   - [log](https://crates.io/crates/log)
   - [env_logger](https://crates.io/crates/env_logger)
   - [flexi_logger](https://crates.io/crates/flexi_logger)
   - [slog](https://crates.io/crates/slog)
   - [fern](https://crates.io/crates/fern)
   - [log4rs](https://crates.io/crates/log4rs)
   - [sentry](https://crates.io/crates/sentry)
7. GRPC Frameworks
   - [grpc](https://crates.io/crates/grpc)
   - [grpcio](https://crates.io/crates/grpcio)
   - [tonic](https://crates.io/crates/tonic)
8. JSON-RPC Framework
   - [jsonrpc-core](https://crates.io/crates/jsonrpc-core)
9. GraphQL Framework
   - [juniper](https://crates.io/crates/juniper)
10. HTTP Clients
    - [reqwest](https://crates.io/crates/reqwest)
    - [curl](https://crates.io/crates/curl)
11. Testing
    - _[Inbuilt](https://doc.rust-lang.org/book/ch11-00-testing.html)_
12. Task Scheduling
    - [clokwerk](https://crates.io/crates/clokwerk)
    - [delay-timer](https://crates.io/crates/delay_timer)
13. Frontend Development
    - [yew](https://crates.io/crates/yew)
    - [wasm-bindgen](https://crates.io/crates/wasm-bindgen)
    - [js-sys](https://crates.io/crates/js-sys)
    - [web-sys](https://crates.io/crates/web-sys)
14. Good to know crates
    - [validator](https://crates.io/crates/validator)
    - [serde](https://crates.io/crates/serde)
    - [r2d2](https://crates.io/crates/r2d2)
    - [lettre](https://crates.io/crates/lettre)
15. Additional Rust Content
    - [Rust in 30 min](https://fasterthanli.me/articles/a-half-hour-to-learn-rust)

## Wrap Up

If you think the roadmap can be improved, please do open a PR with any updates and submit any issues.

## Contribution

The roadmap is built using [Draw.io](https://www.draw.io/). The project file can be found in the `rust-web-developer-roadmap.xml` file. To modify it, open draw.io, click **Open Existing Diagram**, and choose the `xml` file with the project. It will open the roadmap for you. Update it, then upload and update the images in the readme and create a PR (export as png with 50px border width and minify it with [Compressor.io](https://compressor.io/compress)).

- Open a pull request with improvements
- Discuss ideas in issues
- Spread the word

## License

[![License: CC BY-NC-SA 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
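To give a feel for the CLI stage of the roadmap, here is a minimal, dependency-free sketch of the `--key value` argument parsing that crates like `clap`, `structopt`, and `argh` automate for you. `parse_flags` is a hypothetical helper written for illustration only; it is not part of any crate listed above.

```rust
use std::collections::HashMap;

/// Collect `--key value` pairs from a slice of arguments.
/// Real CLI crates add help text, typed values, validation, etc.
fn parse_flags(args: &[String]) -> HashMap<String, String> {
    let mut flags = HashMap::new();
    let mut iter = args.iter();
    while let Some(arg) = iter.next() {
        // A flag is anything starting with "--"; its value is the next token.
        if let Some(key) = arg.strip_prefix("--") {
            if let Some(value) = iter.next() {
                flags.insert(key.to_string(), value.clone());
            }
        }
    }
    flags
}

fn main() {
    // In a real binary you would use std::env::args().skip(1).collect().
    let args: Vec<String> = vec!["--name".into(), "ferris".into()];
    let flags = parse_flags(&args);
    println!("name = {:?}", flags.get("name"));
}
```

Once hand-rolled parsing becomes painful (subcommands, help output, defaults), that is the point where the listed CLI crates pay off.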
[![https://img.shields.io/badge/-Linkedin-blue](https://img.shields.io/badge/-Linkedin-blue)](https://www.linkedin.com/in/joeckson/) ![Twitter URL](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fjoeckson)

<p align="center"><img src="https://github.com/josantosc/josantosc/blob/master/fig2.gif"></p>

### About me

I am always willing to learn new technologies and never tire of studying. I am passionate about software development, machine learning, and natural language processing, and I enjoy sharing knowledge.

Master's student in Computer Science / Natural Language Processing at [UFMA](https://sigaa.ufma.br/sigaa/public/programa/apresentacao_stricto.jsf?lc=pt_BR&idPrograma=1117).

I currently work as a Software Development Analyst at Sinch.

Studying Data Science at [Data Science Academy](https://www.datascienceacademy.com.br/) and Node.js at [Rocketseat](https://rocketseat.com.br/).

### Skills

* Python
* Elixir
* Machine Learning
* Node.js
* Conversational Agents
* DialogFlow
* Rasa
* GCP

<!--
**josantosc/josantosc** is a ✨ _special_ ✨ repository because its `README.md` (this file) appears on your GitHub profile.
-->
---
title: 'Configure remote domain automatic replies: Exchange 2013 Help'
TOCTitle: Configure remote domain automatic replies
ms:assetid: 3d88a1fb-4b62-419a-a50d-ffd868e229d0
ms:mtpsurl: https://technet.microsoft.com/fr-fr/library/JJ657720(v=EXCHG.150)
ms:contentKeyID: 50477944
ms.date: 04/24/2018
mtps_version: v=EXCHG.150
ms.translationtype: HT
---

# Configure remote domain automatic replies

_**Applies to:** Exchange Server 2013_

_**Topic last modified:** 2015-04-08_

You can use the Exchange Management Shell to configure how email messages are sent and received through remote domains. The following walkthrough shows how to use the Exchange Management Shell to configure how Exchange handles automatic replies.

## What do you need to know before you begin?

- Estimated time to complete: 10 minutes
- You can only use the Exchange Management Shell to perform this task.
- You need to be assigned permissions before you can perform this procedure. To see what permissions you need, see the "Remote domains" entry in the [Mail flow permissions](mail-flow-permissions-exchange-2013-help.md) topic.
- For information about keyboard shortcuts that may apply to the procedures in this topic, see [Keyboard shortcuts in the Exchange admin center](keyboard-shortcuts-in-the-exchange-admin-center-exchange-online-protection-help.md).

> [!TIP]
> Having problems? Ask for help in the Exchange forums. Visit the forums at <a href="https://go.microsoft.com/fwlink/p/?linkid=60612">Exchange Server</a>, <a href="https://go.microsoft.com/fwlink/p/?linkid=267542">Exchange Online</a>, and <a href="https://go.microsoft.com/fwlink/p/?linkid=285351">Exchange Online Protection</a>.

## Use the Exchange Management Shell to configure automatic replies

You can use the **Set-RemoteDomain** cmdlet to configure the properties of a remote domain.

This example enables automatic replies to the remote domain named Contoso. This setting is disabled by default.

    Set-RemoteDomain Contoso -AutoReplyEnabled $true

This example allows automatic forwards to the remote domain. This setting is disabled by default.

    Set-RemoteDomain Contoso -AutoForwardEnabled $true
---
layout: base
title: 'Statistics of root in UD_Breton-KEB'
udver: '2'
---

## Treebank Statistics: UD_Breton-KEB: Relations: `root`

This relation is universal.

888 nodes (9%) are attached to their parents as `root`.

888 instances of `root` (100%) are left-to-right (parent precedes child).
Average distance between parent and child is 3.81644144144144.

The following 8 pairs of parts of speech are connected with `root`: -<tt><a href="br_keb-pos-VERB.html">VERB</a></tt> (644; 73% instances), -<tt><a href="br_keb-pos-NOUN.html">NOUN</a></tt> (121; 14% instances), -<tt><a href="br_keb-pos-ADJ.html">ADJ</a></tt> (77; 9% instances), -<tt><a href="br_keb-pos-ADV.html">ADV</a></tt> (16; 2% instances), -<tt><a href="br_keb-pos-PRON.html">PRON</a></tt> (16; 2% instances), -<tt><a href="br_keb-pos-PROPN.html">PROPN</a></tt> (8; 1% instances), -<tt><a href="br_keb-pos-NUM.html">NUM</a></tt> (5; 1% instances), -<tt><a href="br_keb-pos-INTJ.html">INTJ</a></tt> (1; 0% instances).

~~~ conllu
# visual-style 4 bgColor:blue
# visual-style 4 fgColor:white
# visual-style 0 bgColor:blue
# visual-style 0 fgColor:white
# visual-style 0 4 root color:blue
1 N' ne ADV adv Polarity=Neg 4 advmod _ SpaceAfter=No
2 int bezañ VERB vblex Mood=Ind|Number=Plur|Person=3|Tense=Pres|VerbForm=Fin 4 aux _ _
3 ket ket ADV adv _ 4 advmod _ _
4 aet mont VERB vblex Tense=Past|VerbForm=Part 0 root _ _
5 war-raok war-raok ADV adv _ 4 advmod _ SpaceAfter=No
6 . . PUNCT sent _ 4 punct _ _
~~~

~~~ conllu
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 0 bgColor:blue
# visual-style 0 fgColor:white
# visual-style 0 1 root color:blue
1 Poent poent NOUN n Gender=Masc|Number=Sing 0 root _ _
2 eo bezañ VERB vblex Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin 1 cop _ _
3 din da ADP pr _ 4 case _ _
4 _ indirect PRON prn Case=Acc|Number=Sing|Person=1|PronType=Prs 1 obl _ _
5 mont mont VERB vblex VerbForm=Inf 1 csubj _ SpaceAfter=No
6 . . PUNCT sent _ 1 punct _ _
~~~

~~~ conllu
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 0 bgColor:blue
# visual-style 0 fgColor:white
# visual-style 0 1 root color:blue
1 Dav dav ADJ adj _ 0 root _ _
2 e e PART vpart _ 1 aux _ _
3 vije bezañ VERB vblex Number=Sing|Person=3 1 cop _ SpaceAfter=No
4 . . PUNCT sent _ 1 punct _ _
~~~
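Counts like the ones above can be reproduced mechanically from the treebank. Below is a minimal sketch, not part of the UD tooling, that tallies `root` relations and checks the left-to-right direction (the notional parent is node 0). It assumes whitespace-separated columns as rendered on this page; real CoNLL-U files are tab-separated.

```python
def root_stats(conllu_text):
    """Count `root` relations and how many are left-to-right
    (head index smaller than the token index)."""
    roots = 0
    left_to_right = 0
    for line in conllu_text.splitlines():
        line = line.strip()
        # Skip blank lines and comment/visual-style lines.
        if not line or line.startswith("#"):
            continue
        cols = line.split()
        # A plain token line has 10 columns and an integer ID.
        if len(cols) != 10 or not cols[0].isdigit():
            continue
        token_id, head, deprel = int(cols[0]), cols[6], cols[7]
        if deprel == "root":
            roots += 1
            if head.isdigit() and int(head) < token_id:
                left_to_right += 1
    return roots, left_to_right

sample = (
    "1 Dav dav ADJ adj _ 0 root _ _\n"
    "2 e e PART vpart _ 1 aux _ _\n"
)
print(root_stats(sample))  # -> (1, 1)
```

Since the `root` head is always the artificial node 0, every instance is left-to-right, matching the 100% figure reported above.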
---
layout: page
title: Test page 1
description: Bleh
---

This is a sentence where we say some words.

> This is apparently how you do a block quotey thing

If you do the above it might read like a quote, but kbro uses it mostly to make things look like git code stuff and stuff.

```git
this is not special and does not run
```

# The force awakens

## The last jedi

### JJ Abrams may have finally killed this series

- Testing that the page links work for [actual](pages/overview.html) pages.
- Testing that they'll fail for ones that [don't exist](pages/dne.html).
Programming Background
================

Kristi Ramey
9/14/2021

## Reflection on Coding

Back when I was a wee undergrad, I took one intro to programming course for my math major, which used the language C++. I remember fondly my professor, who had the same exact tonality as Bill Cosby (back before Bill Cosby was known as a predator) and had the idiosyncrasy of wiping the chalkboard clean meticulously in long swipes, end to end. I also remember the countless hours I spent on my final project, where I had to write a program to simulate booking passengers at an airport. The reason my project turned into an odyssey was that I had failed to add '+1' to my counter, so my code went off to an oblivion of zeros and ones and kept crashing my computer. Thus started my love of computer science.

Fast forward to entering the Master's program in Statistics at NC State… While this course is my first exposure to intense programming with R, I had dabbled in R labs in previous courses. Most of those 'toe dipping' experiences were following the professor's code line by line and changing one or two fields to fit the problem. Overall, it was a nice experience, but I did have several instances where the professor underestimated my dependence on their examples, and I was left in a tizzy when the file or operation deviated one step past the provided code.

While I wouldn't say I enjoy R, I now have a deeper appreciation for it. I like the rather simple structure of vectors, matrices, data frames, and lists. I LOVE the help resources for R that are available online. Since it is open-sourced, it feels more like a community where people will post questions and get rather straightforward responses. I was a kumquat and fought off R Markdown until this course, and now I see its full glory! No more screen-clipping code and adding it to a OneNote page. My professor has mentioned numerous times that the best and worst thing about R is that most things can be coded in multiple ways.
I also have had two courses that used SAS instead of R. My impression of SAS was of a pal that makes you buy their Amway products. It's been a full year since I coded in SAS, but I remember it being very straightforward, with no CasE iSSues to worry about and semicolons everywhere. Truth be told, that might have been the result of a ton of handholding by the professor. I do remember feeling defeated by the installation process. I had to call tech support, where the person had to take over my computer to install it. Why the issue, you ask? Because my work computer used 'D:' as the personal directory instead of the more common 'C:'. I also remember my license expiring right as a project was due, which gave me more adrenaline than was needed.

Of the two languages, I prefer R, which is why I waited NC State out until they removed the ST555 requirement for Master's candidates (an intensive SAS course). While that gave me one less language to struggle with, it came at a cost. I wish I had taken my programming-intensive course sooner, since it would have given me more confidence and knowledge with R before the concepts became intense. But, c'est la vie; I'm glad I'm in this course now.

## Example R Markdown output

``` r
plot(iris)
```

![](../images/unnamed-chunk-1-1.png)<!-- -->
# Changelog

All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.

## [4.3.0](https://github.com/nicolasdao/schemaglue/compare/v4.2.3...v4.3.0) (2021-10-07)

### [4.2.3](https://github.com/nicolasdao/schemaglue/compare/v4.2.1...v4.2.3) (2021-09-29)

### [4.2.1](https://github.com/nicolasdao/schemaglue/compare/v4.2.0...v4.2.1) (2021-09-29)

## [4.2.0](https://github.com/nicolasdao/schemaglue/compare/v4.1.0...v4.2.0) (2021-09-29)

### Features

* Add support for glueing inline string schema ([835634c](https://github.com/nicolasdao/schemaglue/commit/835634c1fe39c4ccf3680cbc3411551da72cdbfa))

## [4.1.0](https://github.com/nicolasdao/schemaglue/compare/v4.0.6...v4.1.0) (2021-02-25)

### Features

* add typescript definitions ([9d546c1](https://github.com/nicolasdao/schemaglue/commit/9d546c1f1d72b5395609d89891fd23db0c1efb39))

### [4.0.6](https://github.com/nicolasdao/schemaglue/compare/v4.0.5...v4.0.6) (2021-02-01)

### Bug Fixes

* Fixes [#23](https://github.com/nicolasdao/schemaglue/issues/23) ([8017df3](https://github.com/nicolasdao/schemaglue/commit/8017df3807c38faf60c9802233f87e542e6acf60))

### [4.0.5](https://github.com/nicolasdao/schemaglue/compare/v4.0.4...v4.0.5) (2020-07-16)

### Bug Fixes

* Vulnerabilities ([4a9ee0f](https://github.com/nicolasdao/schemaglue/commit/4a9ee0fc1fc1c6f49e68321c89c9bbad0b4fcf3a))

<a name="4.0.4"></a>
## [4.0.4](https://github.com/nicolasdao/schemaglue/compare/v0.4.4...v4.0.4) (2019-04-21)

<a name="0.4.4"></a>
## [0.4.4](https://github.com/nicolasdao/schemaglue/compare/v0.4.3...v0.4.4) (2019-04-21)

<a name="0.4.3"></a>
## [0.4.3](https://github.com/nicolasdao/schemaglue/compare/v4.0.2...v0.4.3) (2019-04-21)

<a name="4.0.2"></a>
## [4.0.2](https://github.com/nicolasdao/schemaglue/compare/v4.0.1...v4.0.2) (2018-09-06)

<a name="4.0.1"></a>
## [4.0.1](https://github.com/nicolasdao/schemaglue/compare/v3.0.2...v4.0.1) (2018-07-10)

<a name="3.0.2"></a>
## [3.0.2](https://github.com/nicolasdao/schemaglue/compare/v3.0.1...v3.0.2) (2018-07-10)

<a name="3.0.1"></a>
## [3.0.1](https://github.com/nicolasdao/schemaglue/compare/v2.0.3...v3.0.1) (2018-07-10)

<a name="2.0.3"></a>
## [2.0.3](https://github.com/nicolasdao/schemaglue/compare/v2.0.2...v2.0.3) (2018-07-10)

<a name="2.0.2"></a>
## [2.0.2](https://github.com/nicolasdao/schemaglue/compare/v2.0.1...v2.0.2) (2018-04-28)

<a name="2.0.1"></a>
## [2.0.1](https://github.com/nicolasdao/schemaglue/compare/v2.0.0...v2.0.1) (2018-03-06)

<a name="2.0.0"></a>
# [2.0.0](https://github.com/nicolasdao/schemaglue/compare/v1.5.1...v2.0.0) (2018-03-06)

### Features

* Add support for '.graphql' files ([7b2b47f](https://github.com/nicolasdao/schemaglue/commit/7b2b47f))

<a name="1.5.1"></a>
## [1.5.1](https://github.com/nicolasdao/schemaglue/compare/v1.5.0...v1.5.1) (2017-10-28)

### Bug Fixes

* Bug [#3](https://github.com/nicolasdao/schemaglue/issues/3). Allow to ignore files and folders using globbing ([e70b8ce](https://github.com/nicolasdao/schemaglue/commit/e70b8ce))

<a name="1.5.0"></a>
# [1.5.0](https://github.com/nicolasdao/schemaglue/compare/v1.4.2...v1.5.0) (2017-10-28)

### Features

* Add support for ignoring some files or folders using the globbing convention ([57932b6](https://github.com/nicolasdao/schemaglue/commit/57932b6))

<a name="1.4.2"></a>
## [1.4.2](https://github.com/nicolasdao/schemaglue/compare/v1.4.1...v1.4.2) (2017-10-28)

<a name="1.4.1"></a>
## [1.4.1](https://github.com/nicolasdao/schemaglue/compare/v1.4.0...v1.4.1) (2017-09-09)

<a name="1.4.0"></a>
# [1.4.0](https://github.com/nicolasdao/schemaglue/compare/v1.3.1...v1.4.0) (2017-09-09)

### Features

* Add support for subscription and add unit tests ([eaeb0b4](https://github.com/nicolasdao/schemaglue/commit/eaeb0b4))

<a name="1.3.1"></a>
## [1.3.1](https://github.com/nicolasdao/schemaglue/compare/v1.2.0...v1.3.1) (2017-08-02)

### Bug Fixes

* Add support for the appconfig file ([b5a82ba](https://github.com/nicolasdao/schemaglue/commit/b5a82ba))

<a name="1.2.0"></a>
# [1.2.0](https://github.com/nicolasdao/schemaglue/compare/v1.1.3-alpha.0...v1.2.0) (2017-07-18)

### Features

* Add the ability to set up the graphql model folder path programmatically + add more doc ([bd97dc6](https://github.com/nicolasdao/schemaglue/commit/bd97dc6))

<a name="1.1.3-alpha.0"></a>
## [1.1.3-alpha.0](https://github.com/nicolasdao/schemaglue/compare/v1.1.2...v1.1.3-alpha.0) (2017-07-18)

### Bug Fixes

* Issue related to exporting module ([a4c548b](https://github.com/nicolasdao/schemaglue/commit/a4c548b))

<a name="1.1.2"></a>
## [1.1.2](https://github.com/nicolasdao/schemaglue/compare/v1.1.1...v1.1.2) (2017-07-18)

### Bug Fixes

* typos ([58a2d62](https://github.com/nicolasdao/schemaglue/commit/58a2d62))

<a name="1.1.1"></a>
## [1.1.1](https://github.com/nicolasdao/schemaglue/compare/v1.1.0...v1.1.1) (2017-07-18)

### Bug Fixes

* Forgot to take into account the Query and Mutation ([51fed1c](https://github.com/nicolasdao/schemaglue/commit/51fed1c))

<a name="1.1.0"></a>
# 1.1.0 (2017-07-18)

### Bug Fixes

* Remove the alpha version ([82c3820](https://github.com/nicolasdao/schemaglue/commit/82c3820))

### Features

* Create the project ([5f5f3e8](https://github.com/nicolasdao/schemaglue/commit/5f5f3e8))
---
image: "https://images-wixmp-ed30a86b8c4ca887773594c2.wixmp.com/f/e83a41f0-1127-4c89-9388-cca19c024bde/debphtk-65468e2c-abdd-4b0a-8c6b-6675ca2839ad.jpg?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1cm46YXBwOiIsImlzcyI6InVybjphcHA6Iiwib2JqIjpbW3sicGF0aCI6IlwvZlwvZTgzYTQxZjAtMTEyNy00Yzg5LTkzODgtY2NhMTljMDI0YmRlXC9kZWJwaHRrLTY1NDY4ZTJjLWFiZGQtNGIwYS04YzZiLTY2NzVjYTI4MzlhZC5qcGcifV1dLCJhdWQiOlsidXJuOnNlcnZpY2U6ZmlsZS5kb3dubG9hZCJdfQ.2Ws2sdlJWuWPcY3kyXElV1We08k_2NQkA0ph9V6GCt8"
alt: "Green Giant"
tags:
  - portrait
  - character
  - "digital art"
  - procreate
---
--- title: 'Azure AD Connect-Synchronisierung: Konfigurieren der Filterung | Microsoft-Dokumentation' description: Erläutert das Konfigurieren der Filterung bei der Azure AD Connect-Synchronisierung. services: active-directory documentationcenter: '' author: billmath manager: daveba editor: '' ms.assetid: 880facf6-1192-40e9-8181-544c0759d506 ms.service: active-directory ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: how-to ms.date: 03/26/2019 ms.subservice: hybrid ms.author: billmath ms.collection: M365-identity-device-management ms.openlocfilehash: 1879df40122549ddc4c57557017fa2c84c883368 ms.sourcegitcommit: 269da970ef8d6fab1e0a5c1a781e4e550ffd2c55 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 08/11/2020 ms.locfileid: "88061505" --- # <a name="azure-ad-connect-sync-configure-filtering"></a>Azure AD Connect-Synchronisierung: Konfigurieren der Filterung Per Filterung können Sie für Ihr lokales Verzeichnis steuern, welche Objekte in Azure Active Directory (Azure AD) angezeigt werden. Die Standardkonfiguration deckt alle Objekte in allen Domänen der konfigurierten Gesamtstrukturen ab. Dies ist die für den Normalfall empfohlene Konfiguration. Benutzer, die Office 365-Workloads wie etwa Exchange Online und Skype for Business verwenden, profitieren von einer vollständigen globalen Adressliste, die zum Senden von E-Mails und Anrufen anderer Personen genutzt werden kann. In der Standardkonfiguration erhalten diese Benutzer die gleiche Funktionalität wie bei einer lokalen Implementierung von Exchange oder Lync. In einigen Fällen ist es jedoch erforderlich, Änderungen an der Standardkonfiguration vorzunehmen. Im Folgenden finden Sie einige Beispiele: * Sie planen, die [Azure AD-Multiverzeichnistopologie](plan-connect-topologies.md#each-object-only-once-in-an-azure-ad-tenant) zu verwenden. Sie müssen einen Filter anwenden, um zu steuern, welche Objekte mit einem bestimmten Azure AD-Verzeichnis synchronisiert werden sollen. 
* Sie führen ein Pilotprojekt für Azure oder Office 365 aus und benötigen nur eine Teilmenge der Benutzer in Azure AD. In dem kleinen Pilotprojekt müssen Sie nicht unbedingt über eine vollständige globale Adressliste verfügen, um die Funktionsweise zu demonstrieren. * Sie haben viele Dienstkonten und andere nicht persönliche Konten, die nicht in Azure AD enthalten sein sollen. * Aus Compliancegründen löschen Sie lokal keine Benutzerkonten. Sie deaktivieren sie nur. In Azure AD sollen aber nur aktive Konten vorhanden sein. In diesem Artikel wird beschrieben, wie Sie die verschiedenen Filtermethoden konfigurieren. > [!IMPORTANT] > Microsoft unterstützt die Änderung oder den Einsatz der Azure AD Connect-Synchronisierung außerhalb dieser formal dokumentierten Aktionen nicht. Eine dieser Aktionen kann zu einem inkonsistenten oder nicht unterstützten Status der Azure AD Connect-Synchronisierung führen. Folglich kann Microsoft auch keinen technischen Support für solche Bereitstellungen leisten. ## <a name="basics-and-important-notes"></a>Grundlagen und wichtige Hinweise In der Azure AD Connect-Synchronisierung können Sie die Filterung jederzeit aktivieren. Wenn Sie mit einer Standardkonfiguration der Verzeichnissynchronisierung beginnen und dann die Filterung konfigurieren, werden die Objekte, die herausgefiltert werden, nicht mehr mit Azure AD synchronisiert. Infolge dieser Änderung werden alle Objekte in Azure AD, die zuvor synchronisiert, aber dann gefiltert wurden, in Azure AD gelöscht. Stellen Sie vor dem Vornehmen von Änderungen an der Filterung sicher, dass Sie die [geplante Aufgabe deaktivieren](#disable-the-scheduled-task). So werden nicht versehentlich Änderungen exportiert, deren Richtigkeit Sie nicht überprüft haben. Da bei der Filterung viele Objekte gleichzeitig entfernt werden können, sollten Sie darauf achten, dass Ihre neuen Filter korrekt sind, bevor Sie mit dem Exportieren von Änderungen nach Azure AD beginnen. 
Es wird dringend empfohlen, nach dem Durchführen der Konfigurationsschritte die [Überprüfungsschritte](#apply-and-verify-changes) auszuführen, bevor Sie exportieren und Änderungen an Azure AD vornehmen. Um das versehentliche Löschen von vielen Objekten zu verhindern, ist das Feature zum [Verhindern versehentlicher Löschungen](how-to-connect-sync-feature-prevent-accidental-deletes.md) standardmäßig aktiviert. Wenn Sie aufgrund einer Filterung viele Objekte löschen (standardmäßig 500), müssen Sie die Schritte in diesem Artikel ausführen, damit die Löschvorgänge auch für Azure AD gelten. Wenn Sie einen älteren Build als November 2015 nutzen ([1.0.9125](reference-connect-version-history.md)), eine Änderung an der Filterkonfiguration vornehmen und die Kennworthashsynchronisierung verwenden, müssen Sie eine vollständige Synchronisierung aller Kennwörter auslösen, nachdem Sie die Konfiguration abgeschlossen haben. Informationen zu Schritten zum Auslösen einer vollständigen Kennwortsynchronisierung finden Sie unter [Auslösen einer vollständigen Synchronisierung aller Kennwörter](tshoot-connect-password-hash-synchronization.md#trigger-a-full-sync-of-all-passwords). Falls Sie Version 1.0.9125 oder höher verwenden, wird mit der normalen Aktion **Vollständige Synchronisierung** auch berechnet, ob Kennwörter synchronisiert werden sollen. Dieser zusätzliche Schritt ist nicht mehr erforderlich. Wenn **Benutzerobjekte** in Azure AD aufgrund eines Filterungsfehlers versehentlich gelöscht wurden, können Sie die Benutzerobjekte in Azure AD neu erstellen, indem Sie Ihre Filterkonfigurationen entfernen. Anschließend können Sie Ihre Verzeichnisse erneut synchronisieren. Mit dieser Aktion werden die Benutzer aus dem Papierkorb in Azure AD wiederhergestellt. Das Löschen anderer Objekttypen kann jedoch nicht rückgängig gemacht werden. 
Wenn Sie beispielsweise eine Sicherheitsgruppe versehentlich löschen, die als Zugriffssteuerungsliste (ACL) für eine Ressource verwendet wurde, können die Gruppe und die zugehörigen ACLs nicht wiederhergestellt werden. Von Azure AD Connect werden nur Objekte gelöscht, die einmal als zum Bereich gehörend betrachtet wurden. Wenn in Azure AD Objekte enthalten sind, die von einem anderen Synchronisierungsmodul erstellt wurden und nicht Teil des Bereichs sind, werden sie durch das Hinzufügen der Filterung nicht entfernt. Wenn Sie beispielsweise mit einem DirSync-Server beginnen, mit dem eine vollständige Kopie Ihres gesamten Verzeichnisses in Azure AD erstellt wurde, und einen neuen Azure AD Connect-Synchronisierungsserver parallel mit der von Beginn an aktivierten Filterung installieren, werden die von DirSync erstellten zusätzlichen Objekte von Azure AD Connect nicht entfernt. Die Filterkonfigurationen werden beibehalten, wenn Sie eine neuere Version von Azure AD Connect installieren oder ein Upgrade darauf durchführen. Es ist immer empfehlenswert, vor dem Ausführen des ersten Synchronisierungszyklus zu verifizieren, dass die Konfiguration nach einem Upgrade auf eine neuere Version nicht versehentlich geändert wurde. Wenn Sie über mehrere Gesamtstrukturen verfügen, müssen die in diesem Thema beschriebenen Filterkonfigurationen auf jede Gesamtstruktur angewendet werden (vorausgesetzt, für alle soll die gleiche Konfiguration gelten). ### <a name="disable-the-scheduled-task"></a>Deaktivieren von geplanten Aufgaben Führen Sie die folgenden Schritte aus, um den integrierten Scheduler zu deaktivieren, der jeweils im Abstand von 30 Minuten einen Synchronisierungszyklus auslöst: 1. Wechseln Sie zu einer PowerShell-Eingabeaufforderung. 2. Führen Sie `Set-ADSyncScheduler -SyncCycleEnabled $False` aus, um den Scheduler zu deaktivieren. 3. Nehmen Sie die in diesem Thema beschriebenen Änderungen vor. 4. 
Run `Set-ADSyncScheduler -SyncCycleEnabled $True` to enable the scheduler again.

**If you use an Azure AD Connect build older than 1.1.105.0**

To disable the scheduled task that triggers a synchronization cycle every three hours, follow these steps:

1. Start **Task Scheduler** from the **Start** menu.
2. Directly under **Task Scheduler Library**, find the task named **Azure AD Sync Scheduler**, right-click it, and select **Disable**.
   ![Task Scheduler](./media/how-to-connect-sync-configure-filtering/taskscheduler.png)
3. You can now make configuration changes and run the sync engine manually from the **Synchronization Service Manager** console.

After you've completed all your filtering changes, remember to **Enable** the task again.

## <a name="filtering-options"></a>Filtering options
You can apply the following filter configuration types to the directory synchronization tool:

* [**Group-based**](#group-based-filtering): Filtering based on a single group can only be configured on initial installation by using the installation wizard.
* [**Domain-based**](#domain-based-filtering): With this option, you can select which domains synchronize to Azure AD. You can also add and remove domains from the sync engine configuration when you make changes to your on-premises infrastructure after you install Azure AD Connect sync.
* [**Organizational unit–based**](#organizational-unitbased-filtering): With this option, you can select which organizational units (OUs) synchronize to Azure AD. This option applies to all object types in the selected OUs. 
* [**Attribute-based**](#attribute-based-filtering): With this option, you can filter objects based on their attribute values. You can also have different filters for different object types.

You can use multiple filtering options at the same time. For example, you can use OU-based filtering to only include objects in one OU, and at the same time use attribute-based filtering to filter the objects further. When you use multiple filtering methods, the filters use a logical "AND" between them.

## <a name="domain-based-filtering"></a>Domain-based filtering
This section provides the steps to configure the domain filter. If you added or removed domains in the forest after you installed Azure AD Connect, you also have to update the filtering configuration.

The preferred way to change domain-based filtering is to run the installation wizard and change [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic. You should only follow these steps if you're unable to run the installation wizard for some reason.

Domain-based filtering configuration consists of these steps:

1. Select the domains that you want to include in the synchronization.
2. For each added and removed domain, adjust the run profiles.
3. 
[Apply and verify the changes.](#apply-and-verify-changes)

### <a name="select-the-domains-to-be-synchronized"></a>Select the domains to be synchronized
There are two ways to select the domains to be synchronized:

- Using the Synchronization Service
- Using the Azure AD Connect wizard

#### <a name="select-the-domains-to-be-synchronized-using-the-synchronization-service"></a>Select the domains to be synchronized by using the Synchronization Service
To set the domain filter, do the following steps:

1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Service** from the **Start** menu.
3. Select **Connectors**, and in the **Connectors** list, select the connector with the type **Active Directory Domain Services**. In **Actions**, select **Properties**.
   ![Connector properties](./media/how-to-connect-sync-configure-filtering/connectorproperties.png)
4. Click **Configure Directory Partitions**.
5. In the **Select directory partitions** list, select and clear domains as needed. Verify that only the partitions that you want to synchronize are selected.
   ![Partitions](./media/how-to-connect-sync-configure-filtering/connectorpartitions.png)

   If you've changed your on-premises Active Directory infrastructure and added or removed domains from the forest, click the **Refresh** button to get an updated list. When you refresh, you're asked for credentials. Provide credentials with read access to Windows Server Active Directory. 
It doesn't have to be the user that's currently shown in the dialog box.
   ![Refresh needed](./media/how-to-connect-sync-configure-filtering/refreshneeded.png)
6. When you're done, close the **Properties** dialog by clicking **OK**. If you removed domains from the forest, a pop-up message says that a domain was removed and that the configuration will be cleaned up.
7. Continue to adjust the run profiles.

#### <a name="select-the-domains-to-be-synchronized-using-the-azure-ad-connect-wizard"></a>Select the domains to be synchronized by using the Azure AD Connect wizard
To set the domain filter, do the following steps:

1. Start the Azure AD Connect wizard.
2. Click **Configure**.
3. Select **Customize synchronization options**, and click **Next**.
4. Enter your Azure AD credentials.
5. On the **Connected Directories** screen, click **Next**.
6. On the **Domain and OU filtering** page, click **Refresh**. New domains now appear, and deleted domains disappear.
   ![Partitions](./media/how-to-connect-sync-configure-filtering/update2.png)

### <a name="update-the-run-profiles"></a>Update the run profiles
If you updated the domain filter, you also need to update the run profiles.

1. In the **Connectors** list, make sure that the connector you changed in the previous step is selected. In **Actions**, select **Configure Run Profiles**.
   ![Connector run profiles 1](./media/how-to-connect-sync-configure-filtering/connectorrunprofiles1.png)
2. 
Find and identify the following profiles:
   * Full Import
   * Full Synchronization
   * Delta Import
   * Delta Synchronization
   * Export
3. For each profile, adjust the **added** and **removed** domains.
   1. For each of the five profiles, do the following steps for each **added** domain:
      1. Select the run profile, and click **New Step**.
      2. On the **Configure Step** page, in the **Type** drop-down menu, select the step type with the same name as the profile that you're configuring. Then click **Next**.
         ![Connector run profiles 2](./media/how-to-connect-sync-configure-filtering/runprofilesnewstep1.png)
      3. On the **Connector Configuration** page, in the **Partition** drop-down menu, select the name of the domain that you've added to your domain filter.
         ![Connector run profiles 3](./media/how-to-connect-sync-configure-filtering/runprofilesnewstep2.png)
      4. To close the **Configure Run Profile** dialog, click **Finish**.
   2. For each of the five profiles, do the following steps for each **removed** domain:
      1. Select the run profile.
      2. If the **Value** of the **Partition** attribute is a GUID, select the run step, and click **Delete Step**.
         ![Connector run profiles 4](./media/how-to-connect-sync-configure-filtering/runprofilesdeletestep.png)
   3. Verify your change. Each domain that you want to synchronize is listed as a step in each run profile.
4. To close the **Configure Run Profiles** dialog, click **OK**.
5. To complete the configuration, you need to run a **Full import** and a **Delta sync**. 
Continue reading the section [Apply and verify changes](#apply-and-verify-changes).

## <a name="organizational-unitbased-filtering"></a>Organizational unit–based filtering
The preferred way to change OU-based filtering is to run the installation wizard and change [domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks that are documented in this topic. You should only follow these steps if you're unable to run the installation wizard for some reason.

To configure organizational unit–based filtering, do the following steps:

1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Service** from the **Start** menu.
3. Select **Connectors**, and in the **Connectors** list, select the connector with the type **Active Directory Domain Services**. In **Actions**, select **Properties**.
   ![Connector properties](./media/how-to-connect-sync-configure-filtering/connectorproperties.png)
4. Click **Configure Directory Partitions**, select the domain that you want to configure, and then click **Containers**.
5. When you're prompted, provide credentials with read access to your on-premises Active Directory. It doesn't have to be the user that's currently shown in the dialog box.
6. In the **Select Containers** dialog, clear the OUs that you don't want to synchronize with the cloud directory, and click **OK**. 
   ![OUs in the Select Containers dialog](./media/how-to-connect-sync-configure-filtering/ou.png)

   * The **Computers** container should be selected for your Windows 10 computers to be successfully synchronized to Azure AD. If your domain-joined computers are located in other OUs, make sure those are selected.
   * The **ForeignSecurityPrincipals** container should be selected if you have multiple forests with trusts. This container allows cross-forest security group membership to be resolved.
   * The **RegisteredDevices** OU should be selected if you enabled the device writeback feature. If you use another writeback feature, such as group writeback, make sure these locations are selected.
   * Select any other OU where users, iNetOrgPersons, groups, contacts, and computers are located. In the picture, all these OUs are located in the ManagedObjects OU.
   * If you use group-based filtering, the OU where the group is located must be included.
   * Note that you can configure whether new OUs that are added after the filtering configuration finishes should be synchronized. See the next section for details.
7. When you're done, close the **Properties** dialog by clicking **OK**.
8. To complete the configuration, you need to run a **Full import** and a **Delta sync**. Continue reading the section [Apply and verify changes](#apply-and-verify-changes). 
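As a sketch of an alternative to starting those two runs manually from Synchronization Service Manager, a full cycle can also be triggered from PowerShell. This assumes the ADSync module that ships with Azure AD Connect is available on the sync server:

```powershell
# Sketch, assuming the ADSync module installed with Azure AD Connect.
Import-Module ADSync

# "Initial" runs the full import and full synchronization steps on all
# connectors; use -PolicyType Delta afterwards for the delta steps.
Start-ADSyncSyncCycle -PolicyType Initial
```

Remember that this only makes sense while the built-in scheduler is disabled, as described earlier in this topic.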
### <a name="synchronize-new-ous"></a>Synchronize new OUs
New OUs that are created after the filtering configuration finishes are synchronized by default. This state is indicated by a selected check box. You can also unselect some sub-OUs. To get this behavior, click the box until it becomes white with a blue check mark (its default state). Then unselect any sub-OUs that you don't want to synchronize.

If all sub-OUs are synchronized, the box is white with a blue check mark.
![OU with all boxes selected](./media/how-to-connect-sync-configure-filtering/ousyncnewall.png)

If some sub-OUs have been unselected, the box is gray with a white check mark.
![OU with some sub-OUs unselected](./media/how-to-connect-sync-configure-filtering/ousyncnew.png)

With this configuration, a new OU that was created under ManagedObjects is synchronized. The Azure AD Connect installation wizard always creates this configuration.

### <a name="dont-synchronize-new-ous"></a>Don't synchronize new OUs
You can configure the sync engine to not synchronize new OUs after the filtering configuration has finished. This state is indicated in the UI by the box appearing solid gray with no check mark. To get this behavior, click the box until it becomes white with no check mark. Then select the sub-OUs that you want to synchronize. 
![OU with the root unselected](./media/how-to-connect-sync-configure-filtering/oudonotsyncnew.png)

With this configuration, a new OU that was created under ManagedObjects is not synchronized.

## <a name="attribute-based-filtering"></a>Attribute-based filtering
Make sure that you're using the November 2015 ([1.0.9125](reference-connect-version-history.md)) or later build for these steps to work.

> [!IMPORTANT]
> Microsoft recommends not to modify the default rules created by **Azure AD Connect**. If you want to modify a rule, clone it and disable the original rule. Make any changes to the cloned rule. Note that by doing so (disabling the original rule), you will miss any bug fixes or features enabled through that rule.

Attribute-based filtering is the most flexible way to filter objects. You can use the power of [declarative provisioning](concept-azure-ad-connect-sync-declarative-provisioning.md) to control almost every aspect of when an object is synchronized to Azure AD.

You can apply filtering both [inbound](#inbound-filtering) from Active Directory to the metaverse, and [outbound](#outbound-filtering) from the metaverse to Azure AD. We recommend that you apply inbound filtering, because that is the easiest to maintain. You should only use outbound filtering if it's required to join objects from more than one forest before the evaluation can take place.

### <a name="inbound-filtering"></a>Inbound filtering
Inbound filtering uses the default configuration, where objects going to Azure AD must have the metaverse attribute cloudFiltered not set to a value in order to be synchronized. 
If this attribute's value is set to **True**, the object isn't synchronized. It shouldn't be set to **False**, by design. To make sure other rules are able to contribute a value, this attribute is only supposed to have the values **True** or **NULL** (absent).

In inbound filtering, you use the power of **scope** to determine which objects to synchronize or not synchronize. This is where you make adjustments to fit your own organization's requirements. The scope module has a **group** and a **clause** to determine when a sync rule is in scope. A group contains one or more clauses. There is a logical "AND" between multiple clauses, and a logical "OR" between multiple groups.

Let's look at an example:
![Scope](./media/how-to-connect-sync-configure-filtering/scope.png)

Read this example as **(department = IT) OR (department = Sales AND c = US)**.

In the following samples and steps, you use the user object as an example, but you can use this for all object types.

In the following samples, the precedence value starts with 50. This can be any number not used, but it should be lower than 100.

#### <a name="negative-filtering-do-not-sync-these"></a>Negative filtering: "do not sync these"
In the following example, you filter out (not synchronize) all users where **extensionAttribute15** has the value **NoSync**.

1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Rules Editor** from the **Start** menu.
3. Make sure **Inbound** is selected, and click **Add New Rule**.
4. 
Give the rule a descriptive name, such as *In from AD – User DoNotSyncFilter*. Select the correct forest, select **User** as the **CS object type**, and select **Person** as the **MV object type**. In **Link Type**, select **Join**. In **Precedence**, enter a value that isn't currently used by another sync rule (such as 50), and then click **Next**.
   ![Inbound 1 description](./media/how-to-connect-sync-configure-filtering/inbound1.png)
5. In **Scoping filter**, click **Add Group**, and then click **Add Clause**. In **Attribute**, select **ExtensionAttribute15**. Make sure that **Operator** is set to **EQUAL**, and enter the value **NoSync** in the **Value** box. Click **Next**.
   ![Inbound 2 scope](./media/how-to-connect-sync-configure-filtering/inbound2.png)
6. Leave **Join rules** empty, and then click **Next**.
7. Click **Add Transformation**, select **Constant** as the **FlowType**, and select **cloudFiltered** as the **Target Attribute**. In the **Source** text box, enter **True**. Click **Add** to save the rule.
   ![Inbound 3 transformation](./media/how-to-connect-sync-configure-filtering/inbound3.png)
8. To complete the configuration, you need to run a **Full sync**. Continue reading the section [Apply and verify changes](#apply-and-verify-changes).

#### <a name="positive-filtering-only-sync-these"></a>Positive filtering: "only sync these"
Expressing positive filtering can be more challenging, because you also have to consider objects that aren't obvious candidates for synchronization, such as conference rooms. 
You're also going to override the default filter in the out-of-the-box rule **In from AD – User Join**. When you create your custom filter, make sure not to include critical system objects, replication conflict objects, special mailboxes, or the service accounts for Azure AD Connect.

Positive filtering requires two sync rules. You need one rule (or several) with the correct scope of objects to synchronize. You also need a second catch-all sync rule that filters out all objects that haven't yet been identified as objects that should be synchronized.

In the following example, you only synchronize user objects where the department attribute has the value **Sales**.

1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Rules Editor** from the **Start** menu.
3. Make sure **Inbound** is selected, and click **Add New Rule**.
4. Give the rule a descriptive name, such as *In from AD – User Sales sync*. Select the correct forest, select **User** as the **CS object type**, and select **Person** as the **MV object type**. In **Link Type**, select **Join**. In **Precedence**, enter a value that isn't currently used by another sync rule (such as 51), and then click **Next**.
   ![Inbound 4 description](./media/how-to-connect-sync-configure-filtering/inbound4.png)
5. In **Scoping filter**, click **Add Group**, and then click **Add Clause**. In **Attribute**, select **department**. 
Make sure that the operator is set to **EQUAL**, and enter the value **Sales** in the **Value** box. Click **Next**.
   ![Inbound 5 scope](./media/how-to-connect-sync-configure-filtering/inbound5.png)
6. Leave **Join rules** empty, and then click **Next**.
7. Click **Add Transformation**, select **Constant** as the **FlowType**, and select **cloudFiltered** as the **Target Attribute**. In the **Source** box, enter **False**. Click **Add** to save the rule.
   ![Inbound 6 transformation](./media/how-to-connect-sync-configure-filtering/inbound6.png)

   This is a special case where you explicitly set cloudFiltered to **False**.
8. We now have to create the catch-all sync rule. Give the rule a descriptive name, such as *In from AD – User Catch-all filter*. Select the correct forest, select **User** as the **CS object type**, and select **Person** as the **MV object type**. In **Link Type**, select **Join**. In **Precedence**, enter a value that isn't currently used by another sync rule (such as 99). You've selected a precedence value that's higher (lower precedence) than the previous sync rule, but you've also left some room so that you can add more filtering sync rules later when you want to start synchronizing additional departments. Click **Next**.
   ![Inbound 7 description](./media/how-to-connect-sync-configure-filtering/inbound7.png)
9. Leave **Scoping filter** empty, and click **Next**. An empty filter indicates that the rule is applied to all objects.
10. Leave **Join rules** empty, and then click **Next**.
11. 
Click **Add Transformation**, select **Constant** as the **FlowType**, and select **cloudFiltered** as the **Target Attribute**. In the **Source** box, enter **True**. Click **Add** to save the rule.
   ![Inbound 3 transformation](./media/how-to-connect-sync-configure-filtering/inbound3.png)
12. To complete the configuration, you need to run a **Full sync**. Continue reading the section [Apply and verify changes](#apply-and-verify-changes).

If you need to, you can create more rules of the first type where you include more objects in the synchronization.

### <a name="outbound-filtering"></a>Outbound filtering
In some cases, it's necessary to do the filtering only after the objects have joined in the metaverse. For example, it might be necessary to look at the mail attribute from the resource forest and the userPrincipalName attribute from the account forest to determine whether an object should be synchronized. In these cases, you create the filtering on the outbound rule.

In this example, you change the filtering so that only users that have both their mail and userPrincipalName attributes ending in @contoso.com are synchronized:

1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Rules Editor** from the **Start** menu.
3. Under **Rules Type**, click **Outbound**.
4. Depending on the version of Connect that you use, either find the rule named **Out to AAD – User Join**, or the rule named **Out to AAD - User Join SOAInAD**, and click **Edit**.
5. In the pop-up, answer **Yes** to create a copy of the rule.
6. 
On the **Description** page, change **Precedence** to an unused value, such as 50.
7. In the left navigation, click **Scoping filter**, and then click **Add clause**. In **Attribute**, select **mail**. In **Operator**, select **ENDSWITH**. In **Value**, enter **\@contoso.com**, and then click **Add clause**. In **Attribute**, select **userPrincipalName**. In **Operator**, select **ENDSWITH**. In **Value**, enter **\@contoso.com**.
8. Click **Save**.
9. To complete the configuration, you need to run a **Full sync**. Continue reading the section [Apply and verify changes](#apply-and-verify-changes).

## <a name="apply-and-verify-changes"></a>Apply and verify changes
After you've made your configuration changes, you must apply them to the objects that are already present in the system. It might also be that objects currently not in the sync engine should be processed, and the sync engine needs to read the source system again to verify its content.

If you changed the configuration by using **domain** or **organizational-unit** filtering, you need to do a **Full import**, followed by a **Delta sync**. If you changed the configuration by using **attribute** filtering, you need to do a **Full sync**.

Do the following steps:

1. Start **Synchronization Service** from the **Start** menu.
2. Select **Connectors**. In the **Connectors** list, select the connector where you made a configuration change earlier. In **Actions**, select **Run**. 
   ![Connector run](./media/how-to-connect-sync-configure-filtering/connectorrun.png)
3. In **Run profiles**, select the operation mentioned in the previous section. If you need to run two actions, run the second one after the first one has finished. (The **State** column shows **Idle** for the selected connector.)

After the synchronization, all changes are staged to be exported. Before you actually make the changes in Azure AD, you should verify that all these changes are correct.

1. Start a command prompt, and go to `%ProgramFiles%\Microsoft Azure AD Sync\bin`.
2. Run `csexport "Name of Connector" %temp%\export.xml /f:x`. You can find the name of the connector in Synchronization Service. For Azure AD, it has a name similar to contoso.com – AAD.
3. Run `CSExportAnalyzer %temp%\export.xml > %temp%\export.csv`.
4. You now have a file in %temp% named export.csv that can be examined in Microsoft Excel. This file contains all the changes that are about to be exported.
5. Make the necessary changes to the data or configuration, and run these steps again (import, sync, and verify) until the changes that are about to be exported are what you expect.

When you're satisfied, export the changes to Azure AD:

1. Select **Connectors**. In the **Connectors** list, select the Azure AD connector. In **Actions**, select **Run**.
2. In **Run profiles**, select **Export**.
3. If your configuration changes delete many objects, you see an error in the export when the number is more than the configured threshold (by default, 500). 
If you see this error, you need to temporarily disable the [prevent accidental deletes](how-to-connect-sync-feature-prevent-accidental-deletes.md) feature.

Now it's time to enable the scheduler again:

1. Start **Task Scheduler** from the **Start** menu.
2. Directly under **Task Scheduler Library**, find the task named **Azure AD Sync Scheduler**, right-click it, and select **Enable**.

## <a name="group-based-filtering"></a>Group-based filtering
You can configure group-based filtering the first time that you install Azure AD Connect by using a [custom installation](how-to-connect-install-custom.md#sync-filtering-based-on-groups). It's intended for a pilot deployment where you want only a small set of objects to be synchronized. When you disable group-based filtering, it can't be enabled again. Using group-based filtering in a custom configuration is *not supported*. Configuring this feature is only supported by using the installation wizard. When you've completed your pilot, use one of the other filtering options in this topic. When you use OU-based filtering in conjunction with group-based filtering, the OUs where the group and its members are located must be included.

When you synchronize multiple AD forests, you can configure group-based filtering by specifying a different group for each AD connector. 
If you want to synchronize a user in one AD forest, and that user has one or more corresponding objects in other AD forests, you must make sure that the user object and all corresponding objects are within the scope of group-based filtering. Examples: * You have a user in one forest that has a corresponding foreign security principal (FSP) object in another forest. Both objects must be within the scope of group-based filtering. Otherwise, the user is not synchronized to Azure AD. * You have a user in one forest that has a corresponding resource account (for example, a linked mailbox) in another forest. In addition, you have configured Azure AD Connect to link the user with the resource account. Both objects must be within the scope of group-based filtering. Otherwise, the user is not synchronized to Azure AD. * You have a user in one forest that has a corresponding mail contact in another forest. In addition, you have configured Azure AD Connect to link the user with the mail contact. Both objects must be within the scope of group-based filtering. Otherwise, the user is not synchronized to Azure AD. ## <a name="next-steps"></a>Next steps - Learn more about configuring [Azure AD Connect sync](how-to-connect-sync-whatis.md). - Learn more about [integrating your on-premises identities with Azure AD](whatis-hybrid-identity.md).
127.107784
958
0.806049
deu_Latn
0.999017
fa9ec715d7d0b085d4cfb1f440288568e18139cc
3,108
md
Markdown
problem06/README.md
mexuaz/Concurrency
e7025dc0fa2b95f66ff608ff7eba5cdef837a7d4
[ "Apache-2.0" ]
null
null
null
problem06/README.md
mexuaz/Concurrency
e7025dc0fa2b95f66ff608ff7eba5cdef837a7d4
[ "Apache-2.0" ]
null
null
null
problem06/README.md
mexuaz/Concurrency
e7025dc0fa2b95f66ff608ff7eba5cdef837a7d4
[ "Apache-2.0" ]
null
null
null
# Problem 06 This problem implements a semi-parallel sort. The procedure is as follows: 1. Break the list of integers into n portions 2. Sort each portion in a different thread 3. When all the sorts are done, merge them into one list Some parts could be made more parallel (for example, merging could start as soon as at least two threads have finished sorting), but to keep the source a little more comprehensible I skipped that approach. ## Performance Evaluation Performance for both C++ and Go is measured by timing. The timing results are in milliseconds for a vector of 100,000,000 integers. I used std::vector for C++ and a slice for Go. The binary produced by the standard Go compiler is optimized by default, and Go does not have different optimization levels [(Golang Compiler)][golang_compiler]; in fact, if you want to disable optimization you use the -N flag. The C++ version, by contrast, was measured at different optimization levels. I used an [Asus N43JQ][ausus_n43jq_specs] equipped with an Intel Core i7 (8 cores) and 8 GB of aftermarket RAM for this benchmark. | Splits | C++ O | C++ O1 | C++ O2 | C++ O3 | Go | |:-------:|:--------:|:--------:|:---------:|:--------:|:-----:| | 1 | 11426 | 11440 | 11687 | 12643 | 40006 | | 2 | 7423 | 7501 | 7320 | 7717 | 26081 | | 4 | 5926 | 5954 | 5618 | 5899 | 19637 | | 8 | 5089 | 5142 | 4681 | 4806 | 16727 | | 16 | 5897 | 5883 | 5302 | 5656 | 18036 | | 32 | 6689 | 6701 | 5963 | 6099 | 18990 | | 64 | 7471 | 7480 | 6594 | 6751 | 20407 | | 128 | 8302 | 8329 | 7244 | 7425 | | | 256 | 9088 | 9099 | 7908 | 8104 | | | 512 | 9891 | 9942 | 8531 | 8770 | | | 1024 | 10748 | 10723 | 9274 | 9405 | | | 2046 | 11482 | 11465 | 9803 | 10044 | | | 4096 | 12399 | 12437 | 10575 | 10850 | | | 8192 | 13340 | 13345 | 11365 | 11663 | | | 16382 | 14347 | 14608 | 12221 | 12506 | | ## Observations: * The best time for both languages was at 8 splits, which I think is because my notebook's CPU has 8 cores. 
* The best time achieved by C++ was 4681 (at O2 optimization) and by Go was 16727; the best C++ result was about 3.6 times faster than Go's. * Increasing the number of threads affected the C++ program more than Go; this could be because Go's goroutines are lighter than C++ threads. * An interesting fact is that the sequential C++ version (1 split) is faster than Go's best result at 8 splits! * Optimization does not contribute much to C++ except as the number of threads increases. * Multi-threading improved the C++ time by about 2.5 times in the best case, and Go by about 2.4 times. If you look at the source code, the Go implementation is similar to the C++ one, which means C++ was simply faster for this problem. [golang_compiler]: https://golang.org/cmd/compile/ [ausus_n43jq_specs]: https://www.asus.com/Laptops/N43JQ/specifications/
47.815385
640
0.619048
eng_Latn
0.994172
fa9eee3c9200c023a5860e68dd3e6674513fa812
2,168
md
Markdown
website/src/_posts/2020-09-companion-2.0.md
profsmallpine/uppy
8c3f0cb66c32836948d26537a729d7e735f92491
[ "MIT" ]
26,768
2015-12-09T10:34:22.000Z
2022-03-31T13:37:19.000Z
website/src/_posts/2020-09-companion-2.0.md
profsmallpine/uppy
8c3f0cb66c32836948d26537a729d7e735f92491
[ "MIT" ]
2,893
2015-11-26T22:37:52.000Z
2022-03-31T23:10:54.000Z
website/src/_posts/2020-09-companion-2.0.md
profsmallpine/uppy
8c3f0cb66c32836948d26537a729d7e735f92491
[ "MIT" ]
2,166
2016-04-13T16:44:34.000Z
2022-03-30T15:54:45.000Z
--- title: "Companion 2.0 is here" date: 2020-09-09 author: ife published: true --- We are happy to announce version 2.0 of Companion! 🎉 After maintaining and improving the 1.x series for over a year, we're now releasing a major version bump of the Companion package. This release is mainly driven by fixing some terminology inconsistencies and aligning with Node.js LTS to ease the maintenance burden. <!--more--> So what are the changes you can expect with Companion 2.0? ## Node >= v10 Node.js 8.x has reached end-of-life. Consequently, Companion 2.0 has dropped support for Node.js 6.x and Node.js 8.x, and now requires at least Node.js 10.20.1. ## Renamed provider options Before 2.0, there were inconsistencies in the provider names. In some places, the Google Drive provider was referred to as *google* (e.g., in `providerOptions`) while in some other places, it was referred to as *drive* (e.g., the server endpoints `/drive/list`). Companion 2.0 now consistently uses the name *drive* everywhere. Similarly, the OneDrive provider was given the consistent name *onedrive*. ## Changed Redirect URIs On the topic of consistent naming, we have also made some changes to the redirect URIs supplied during the OAuth process. For example, in the case of Google Drive, the form of the old redirect URI was `https://mycompanionwebsite.tld/connect/google/callback`. In Companion 2.0, this has changed to `https://mycompanionwebsite.tld/drive/redirect`. This is a Breaking Change: you will need to make the corresponding changes to your redirect URIs on your Providers' API Dashboards. ## Compatibility with Uppy 1.x client Companion 2.0 is compatible with any Uppy 1.x version, so you don't have to worry about upgrading your Uppy client installations when you upgrade Companion on your server. ## Will Companion v1 still receive updates? Companion 1.x will continue to receive security patches until March 1, 2021. 
## Migrating from Companion 1.x to 2.x Given the breaking changes, we've created a [migration tutorial for upgrading from Companion v1 to v2](https://uppy.io/docs/companion/#Migrating-v1-to-v2).
58.594595
480
0.771218
eng_Latn
0.998514
fa9fb8567a17a5fae634f1fca3816c8fb3707ee9
345
md
Markdown
_posts/2008/2008-04-23-eintracht-schlaegt-neuenhain.md
eintracht-stats/eintracht-stats.github.io
9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43
[ "MIT" ]
null
null
null
_posts/2008/2008-04-23-eintracht-schlaegt-neuenhain.md
eintracht-stats/eintracht-stats.github.io
9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43
[ "MIT" ]
1
2021-04-01T17:08:43.000Z
2021-04-01T17:08:43.000Z
_posts/2008/2008-04-23-eintracht-schlaegt-neuenhain.md
eintracht-stats/eintracht-stats.github.io
9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43
[ "MIT" ]
null
null
null
--- layout: post title: "Eintracht beats Neuenhain" --- In a benefit match for the Leberecht foundation of the FNP, Eintracht won 21:0 against SG Bad Soden (first half) and FV 08 Neuenhain (second half). The goals were scored by Amanatidis (5), Mantzios, Mahdavikia (4 each), Caio, Finke (3 each), Weissenberger and Mössmer.
34.5
282
0.747826
deu_Latn
0.971122
fa9ff92f189f20ae1eaa874a1c670ffc4c9bbc2a
1,145
md
Markdown
packages/ckeditor5-html-support/docs/api/html-support.md
linxd/ckeditor5
c7de161a9ae48809ddc93c9c77e63137cbb8785d
[ "MIT" ]
null
null
null
packages/ckeditor5-html-support/docs/api/html-support.md
linxd/ckeditor5
c7de161a9ae48809ddc93c9c77e63137cbb8785d
[ "MIT" ]
null
null
null
packages/ckeditor5-html-support/docs/api/html-support.md
linxd/ckeditor5
c7de161a9ae48809ddc93c9c77e63137cbb8785d
[ "MIT" ]
null
null
null
--- category: api-reference --- # CKEditor 5 General HTML Support feature [![npm version](https://badge.fury.io/js/%40ckeditor%2Fckeditor5-html-support.svg)](https://www.npmjs.com/package/@ckeditor/ckeditor5-html-support) This package implements the General HTML Support feature for CKEditor 5. ## Demo Check out the demos in the {@link features/general-html-support General HTML Support feature} guide. ## Documentation See the {@link features/general-html-support General HTML Support feature} guide. ## Installation ```bash npm install --save @ckeditor/ckeditor5-html-support ``` ## Contribute The source code of this package is available on GitHub in https://github.com/ckeditor/ckeditor5/tree/master/packages/ckeditor5-html-support. ## External links * [`@ckeditor/ckeditor5-html-support` on npm](https://www.npmjs.com/package/@ckeditor/ckeditor5-html-support) * [`ckeditor/ckeditor5-html-support` on GitHub](https://github.com/ckeditor/ckeditor5/tree/master/packages/ckeditor5-html-support) * [Issue tracker](https://github.com/ckeditor/ckeditor5/issues) * [Changelog](https://github.com/ckeditor/ckeditor5/blob/master/CHANGELOG.md)
32.714286
140
0.771179
kor_Hang
0.340194
fa9ffb3b820accc2f885cd1928d6beee9794357e
396
md
Markdown
01/README.md
bitfieldconsulting/ftl-fundamentals-rmarcandier
0826ecabed67df857dccad507c3f207ca8c44562
[ "MIT" ]
1
2020-03-21T04:32:44.000Z
2020-03-21T04:32:44.000Z
01/README.md
bitfieldconsulting/ftl-fundamentals-rmarcandier
0826ecabed67df857dccad507c3f207ca8c44562
[ "MIT" ]
1
2020-03-20T13:30:30.000Z
2020-03-20T13:30:30.000Z
01/README.md
bitfieldconsulting/ftl-fundamentals-bitfield
3e3de845faeb32c3d27c93003afd78dd7c8664c0
[ "MIT" ]
null
null
null
Let's make sure our Go environment is set up and everything is working right. In this directory, run the command: ``` go test ``` If everything is good, we will see this output: ``` PASS ok hello 0.186s ``` This tells us that all the tests passed in the package called `hello`, and that running them took 0.186 seconds. (The exact time can vary.) Nice job! Go on to the next exercise.
24.75
139
0.717172
eng_Latn
0.999833
faa0350f972cd1578fbddde6364d006a5765ded0
1,221
md
Markdown
notes/FUTURE_TOPICS/sum-interface.md
side-projects-42/BGOONZ_BLOG_2.0
dbf8df8afc99963d127155a1ab2e211b592a1994
[ "MIT" ]
null
null
null
notes/FUTURE_TOPICS/sum-interface.md
side-projects-42/BGOONZ_BLOG_2.0
dbf8df8afc99963d127155a1ab2e211b592a1994
[ "MIT" ]
null
null
null
notes/FUTURE_TOPICS/sum-interface.md
side-projects-42/BGOONZ_BLOG_2.0
dbf8df8afc99963d127155a1ab2e211b592a1994
[ "MIT" ]
null
null
null
## Sum numbers from the visitor <span class="task__importance" title="How important is the task, from 1 to 5">importance: 5</span> Create a script that prompts the visitor to enter two numbers and then shows their sum. [Run the demo](sum-interface.html#) P.S. There is a gotcha with types. Solution: `let a = +prompt("The first number?", "");` `let b = +prompt("The second number?", "");` `alert( a + b );` Note the unary plus `+` before `prompt`. It immediately converts the value to a number. Otherwise, `a` and `b` would be strings, and their sum would be their concatenation, that is: `"1" + "2" = "12"`.
23.941176
107
0.686323
eng_Latn
0.903666
faa074b4b6cf9de8d0f46af91ef38556619cc3e3
2,199
md
Markdown
apps/wordpress/htdocs/wp-content/plugins/jetpack/jetpack_vendor/automattic/jetpack-password-checker/CHANGELOG.md
RightSolutions-4U/rightsolutions4uWPNew
0293ae46db482165436db79993d427938a1f721c
[ "Apache-2.0" ]
1
2021-12-29T06:10:14.000Z
2021-12-29T06:10:14.000Z
wp-content/plugins/jetpack/jetpack_vendor/automattic/jetpack-password-checker/CHANGELOG.md
fakhar-ali/mysocceracademy.com
b9e0c2f9cc45546274747d6200640315d467099a
[ "Unlicense" ]
48
2021-12-28T04:28:55.000Z
2022-01-27T03:59:04.000Z
wp-content/plugins/jetpack/jetpack_vendor/automattic/jetpack-password-checker/CHANGELOG.md
fakhar-ali/mysocceracademy.com
b9e0c2f9cc45546274747d6200640315d467099a
[ "Unlicense" ]
null
null
null
# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## [0.2.0] - 2022-01-04 ### Changed - Switch to pcov for code coverage. - Updated package dependencies - Updated package textdomain from `jetpack` to `jetpack-password-checker`. ## [0.1.8] - 2021-12-14 ### Changed - Updated package dependencies. ## [0.1.7] - 2021-11-02 ### Changed - Set `convertDeprecationsToExceptions` true in PHPUnit config. - Update PHPUnit configs to include just what needs coverage rather than include everything then try to exclude stuff that doesn't. ## [0.1.6] - 2021-10-13 ### Changed - Updated package dependencies. ## [0.1.5] - 2021-10-12 ### Changed - Updated package dependencies ## [0.1.4] - 2021-09-28 ### Changed - Updated package dependencies. ## [0.1.3] - 2021-08-30 ### Changed - Run composer update on test-php command instead of phpunit - Tests: update PHPUnit polyfills dependency (yoast/phpunit-polyfills). ## [0.1.2] - 2021-05-25 ### Fixed - Avoid checking in vendor directory. ## [0.1.1] - 2021-04-27 ### Changed - Updated package dependencies. ## 0.1.0 - 2021-03-30 ### Added - Initial release. ### Fixed - Use `composer update` rather than `install` in scripts, as composer.lock isn't checked in. 
[0.2.0]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.8...v0.2.0 [0.1.8]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.7...v0.1.8 [0.1.7]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.6...v0.1.7 [0.1.6]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.5...v0.1.6 [0.1.5]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.4...v0.1.5 [0.1.4]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.3...v0.1.4 [0.1.3]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.2...v0.1.3 [0.1.2]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.1...v0.1.2 [0.1.1]: https://github.com/Automattic/jetpack-password-checker/compare/v0.1.0...v0.1.1
34.359375
131
0.71487
eng_Latn
0.376666
faa0be73ed4cb5a83cbf2c08f75b0f425b306da0
1,899
md
Markdown
_posts/2018-05-01-intern-plans.md
liqimore/note
1c4dfd0b340200a7d6c7193c410207698a5d2e91
[ "MIT" ]
3
2018-06-01T05:35:08.000Z
2018-08-08T11:04:52.000Z
_posts/2018-05-01-intern-plans.md
liqimore/note
1c4dfd0b340200a7d6c7193c410207698a5d2e91
[ "MIT" ]
null
null
null
_posts/2018-05-01-intern-plans.md
liqimore/note
1c4dfd0b340200a7d6c7193c410207698a5d2e91
[ "MIT" ]
null
null
null
--- layout: post title: "Recent plans + JD and Baidu interview experience" description: "I caught the tail end of spring recruitment and gave it a try, and was very lucky to get an offer from JD." date: 2018-04-30 tags: recent news, spring recruitment, interviews, plans comments: true --- ## What I have been busy with lately At the end of March, I saw on Niuke (牛客网) that spring recruitment was about to end. My original plan was to skip spring recruitment and focus on preparing for autumn recruitment, but seeing that I could still submit applications, I could not bear to let such a good opportunity pass, so I gave it a try. I did not have high hopes going into the interviews; I went purely for the thrill (and thrills I certainly found...), but I came away with an offer from JD, and I am very grateful that JD gave me this chance. ## Companies I applied to (in order) 1. NetEase 2. Alibaba (Taobao) 3. Baidu 4. JD 5. Toutiao ## NetEase: I found NetEase's written test quite difficult (my skills were not good enough); I solved only a few of the coding problems and was eliminated at the written-test stage. ## Alibaba Alibaba had only one relatively simple written problem. After I finished it that afternoon, the first-round phone interview came around 9:30 that evening (a Friday). It covered `multithreading, the collections framework, concurrency, GC, and Spring AOP/IOC`, each topic in great detail, from basic usage to source code, to the JVM, to optimization techniques. I answered very poorly. The interviewer was excellent and patiently guided me, but what I did not know I simply did not know. In the end, the interviewer told me: **when learning something, you cannot understand it only at the surface. A quick search may let you meet the requirement, but you should not stop there; keep digging into why it is that way, how it is implemented, and whether there is a better approach.** I benefited greatly; even though the interviewer rejected me, I was still very happy, because a senior had told me how to learn, which can really save a lot of time. It is also clear that Alibaba is a company that values fundamentals: **to join Alibaba, you need a very solid foundation of basic knowledge**. ## Baidu As the leader of BAT, although its growth in recent years has not matched the other companies, Baidu still holds the gateway to the Chinese internet and is very attractive. There is nothing embarrassing about being eliminated in the first round; my knowledge was simply insufficient. The interviewer asked for simple algorithms to traverse and invert a binary tree, and I could not write the second one (on paper). During the interview, the interviewer mentioned grinding practice problems. I do not favor grinding problems; I do not think it reflects a person's real knowledge, only whether they prepared and practiced in advance. That said, I understand the interviewer: in this industry, I cannot think of a better way to interview either. In short, **Baidu requires a strong algorithm foundation**; you must grind and practice a large number of problems to pass the interview. Whether or not you agree with this approach, that is the current reality: if you cannot even handle basic algorithms, why would a company believe you can handle more complex work? So, **only by conquering the algorithm problems do you earn the right to discuss anything else**. I will probably need to grind problems over the summer. ## JD ### First round The interviewer's questions were all practical, such as basic usage of the Spring framework, basic Java operations, and the differences between concepts, mainly collections. Different interviewers have different personalities, so the questions can vary a lot. The interviewer who interviewed me clearly also loves computing and enjoys exploring new technologies; he asked what cutting-edge technologies I had been following lately. He also asked an algorithm question, an original problem from "剑指offer": efficiently selecting the largest numbers, or the most frequently occurring numbers, from a huge set of numbers. Algorithms are my weak point; all I could answer with was brute force, and I did not come up with the best answer. I really need to strengthen my algorithms. After the interview, the interviewer stood up to shake my hand, which made me feel that he deeply respects candidates; I hope we have the chance to work together someday. In my view, no matter a person's technical level, respecting others comes first. ### Second round The second-round interviewer must have been a heavyweight; he said very little. He asked about GC and class loading, chatted casually for a few minutes, and then let me go. For a while I felt this was surely the end, but after leaving I found on the official site that the HR interview simply had not been scheduled yet; I had been spared. ### HR round The HR was a very warm lady. There were two core questions: housing and overtime. I rent an apartment, so the question of where I would live was easily resolved. Overtime is also acceptable to me; dinner is provided in the evening and there is a shuttle bus, so for someone like me, a university intern about to graduate, it has little impact, and that part went smoothly. She also told me a lot about the division of departments inside JD, in great detail. One last thing left a deep impression on me: all three interviewers said I was the only bachelor's student they had interviewed in the past few days... The pressure is really enormous; nowadays a bachelor's degree is just the entry ticket, and in the job market you are competing with master's and doctoral students. To win such a race, you may have to put in several times their effort. ## Toutiao A schedule conflict; I did not take the written test. ## Finally I hope for a rich harvest in the autumn. <small>2018/5/28-02:21</small>
25.32
61
0.796209
zho_Hans
0.475713
faa10afb073428975646d1b4aa2e89dc45c9b483
2,110
md
Markdown
desktop-src/Services/service-structures.md
velden/win32
94b05f07dccf18d4b1dbca13b19fd365a0c7eedc
[ "CC-BY-4.0", "MIT" ]
552
2019-08-20T00:08:40.000Z
2022-03-30T18:25:35.000Z
desktop-src/Services/service-structures.md
velden/win32
94b05f07dccf18d4b1dbca13b19fd365a0c7eedc
[ "CC-BY-4.0", "MIT" ]
1,143
2019-08-21T20:17:47.000Z
2022-03-31T20:24:39.000Z
desktop-src/Services/service-structures.md
velden/win32
94b05f07dccf18d4b1dbca13b19fd365a0c7eedc
[ "CC-BY-4.0", "MIT" ]
1,287
2019-08-20T05:37:48.000Z
2022-03-31T20:22:06.000Z
--- description: 'The following structures are used with services:' ms.assetid: 775ecbeb-3a2a-40dd-b262-b66dea04713d title: Service Structures ms.topic: article ms.date: 05/31/2018 --- # Service Structures The following structures are used with services: - [**ENUM\_SERVICE\_STATUS**](/windows/desktop/api/Winsvc/ns-winsvc-enum_service_statusa) - [**ENUM\_SERVICE\_STATUS\_PROCESS**](/windows/desktop/api/Winsvc/ns-winsvc-enum_service_status_processa) - [**QUERY\_SERVICE\_CONFIG**](/windows/desktop/api/Winsvc/ns-winsvc-query_service_configa) - [**QUERY\_SERVICE\_LOCK\_STATUS**](/windows/desktop/api/Winsvc/ns-winsvc-query_service_lock_statusa) - [**SC\_ACTION**](/windows/desktop/api/Winsvc/ns-winsvc-sc_action) - [**SERVICE\_DELAYED\_AUTO\_START\_INFO**](/windows/desktop/api/Winsvc/ns-winsvc-service_delayed_auto_start_info) - [**SERVICE\_DESCRIPTION**](/windows/desktop/api/Winsvc/ns-winsvc-service_descriptiona) - [**SERVICE\_FAILURE\_ACTIONS**](/windows/desktop/api/Winsvc/ns-winsvc-service_failure_actionsa) - [**SERVICE\_FAILURE\_ACTIONS\_FLAG**](/windows/desktop/api/Winsvc/ns-winsvc-service_failure_actions_flag) - [**SERVICE\_NOTIFY**](/windows/desktop/api/Winsvc/ns-winsvc-service_notify_2a) - [**SERVICE\_PRESHUTDOWN\_INFO**](/windows/desktop/api/Winsvc/ns-winsvc-service_preshutdown_info) - [**SERVICE\_REQUIRED\_PRIVILEGES\_INFO**](/windows/desktop/api/Winsvc/ns-winsvc-service_required_privileges_infoa) - [**SERVICE\_SID\_INFO**](/windows/desktop/api/Winsvc/ns-winsvc-service_sid_info) - [**SERVICE\_STATUS**](/windows/desktop/api/Winsvc/ns-winsvc-service_status) - [**SERVICE\_STATUS\_PROCESS**](/windows/desktop/api/Winsvc/ns-winsvc-service_status_process) - [**SERVICE\_TABLE\_ENTRY**](/windows/desktop/api/Winsvc/ns-winsvc-service_table_entrya) - [**SERVICE\_TRIGGER**](/windows/desktop/api/winsvc/ns-winsvc-service_trigger) - [**SERVICE\_TRIGGER\_INFO**](/windows/desktop/api/winsvc/ns-winsvc-service_trigger_info) - 
[**SERVICE\_TRIGGER\_SPECIFIC\_DATA\_ITEM**](/windows/desktop/api/winsvc/ns-winsvc-service_trigger_specific_data_item)
54.102564
122
0.769194
yue_Hant
0.622447
faa14da5d03f83dd0ffdf64d259e7f622d28fb57
1,270
md
Markdown
_posts/2018-05-13-rxJava4.md
kangsungjin/kangsungjin.github.io
5977221fe5e3cf5713f938675eee953cc0dd1527
[ "MIT" ]
null
null
null
_posts/2018-05-13-rxJava4.md
kangsungjin/kangsungjin.github.io
5977221fe5e3cf5713f938675eee953cc0dd1527
[ "MIT" ]
null
null
null
_posts/2018-05-13-rxJava4.md
kangsungjin/kangsungjin.github.io
5977221fe5e3cf5713f938675eee953cc0dd1527
[ "MIT" ]
2
2021-09-05T16:36:32.000Z
2021-11-24T06:44:26.000Z
--- layout: archive title: "Hello RxJava #4" date: 2018-05-13 07:00:00 author: ks J categories: rxJava tags: [rxJava, scheduler] --- Introduction <hr/> This page is based on RxJava 2 and summarizes what I studied about Schedulers. <br/> <br/> <br/> Usage <hr/> __Basics__{: style="color: #1b557a"} <br > _newThread_{: style="color: #e26716"} - creates a new thread every time a subscriber is added. <br /> _single_{: style="color: #e26716"} - creates a separate single thread and runs subscription work on it. Even when multiple subscription requests come in, that single thread is shared. <br > _computation_{: style="color: #e26716"} - a scheduler used for general computational work. The interval() function runs on the computation scheduler by default. It is a scheduler for CPU-bound computation; it must not be used for I/O work, and its thread pool can grow up to the number of processors. ~~~ java // source code public static Observable<Long> interval(long period, TimeUnit unit){ return interval(period, period, unit, Schedulers.computation()); } ~~~ <br/> _io_{: style="color: #e26716"} - a scheduler used for network requests, file input/output, DB queries, and so on. <br > _trampoline_{: style="color: #e26716"} - a scheduler that does not create a new thread, but instead creates an unbounded waiting queue on the current thread. Not creating a new thread and automatically creating the waiting queue are what distinguish it from the new-thread, computation, and IO schedulers. <hr/> _subscribeOn_{: style="color: #e26716"} - specifies the thread that does the work when subscribe() publishes data. <br > _observeOn_{: style="color: #e26716"} - specifies the thread used when the Observable processes data.<br >
27.608696
178
0.684252
kor_Hang
1.000009
faa165ad6094f0adf6046c84322cc7adfe79c9ba
1,883
md
Markdown
README.md
AmanVirmani/WalkerBot
168244643b7fd8b0e04047abff08d4e3c4caf616
[ "MIT" ]
null
null
null
README.md
AmanVirmani/WalkerBot
168244643b7fd8b0e04047abff08d4e3c4caf616
[ "MIT" ]
null
null
null
README.md
AmanVirmani/WalkerBot
168244643b7fd8b0e04047abff08d4e3c4caf616
[ "MIT" ]
null
null
null
# Turtlebot Walker [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) # Project Overview This project implements a Roomba-type behavior robot using the turtlebot platform in ROS. This repository contains the following files: - include/walker/walker.h - src/walker.cpp - src/main.cpp - launch/demo.launch # Dependencies - ROS Kinetic To install, follow this [link](http://wiki.ros.org/kinetic/Installation) - Ubuntu 16.04 - Turtlebot packages To install turtlebot, type the following: ``` sudo apt-get install ros-kinetic-turtlebot-gazebo ros-kinetic-turtlebot-apps ros-kinetic-turtlebot-rviz-launchers ``` # Building package ``` mkdir -p ~/catkin_ws/src cd ~/catkin_ws/ catkin_make source devel/setup.bash cd src/ git clone https://github.com/AmanVirmani/WalkerBot cd .. catkin_make ``` # Running demo ## Using roslaunch Type the following command in a new terminal: ``` roslaunch walkerbot demo.launch ``` ## Using rosrun Run roscore in a new terminal ``` roscore ``` Launch the turtlebot gazebo simulation ``` roslaunch turtlebot_gazebo turtlebot_world.launch ``` Run the turtlebot node using the following command ``` rosrun walkerbot walkerbot ``` # Recording using rosbag files Record the rostopics using the following command with the launch file (camera output is not recorded): ``` roslaunch walkerbot demo.launch record:=true ``` The recorded bag file will be stored in the results folder To record for a specific time ``` roslaunch walkerbot demo.launch record:=true secs:=30 ``` In the above case, rosbag will record for 30 seconds # Playing bag files Navigate to the results folder ``` cd ~/catkin_ws/src/turtlebot_walker/results ``` Play the bag file ``` rosbag play turtlebotRecord.bag ``` Verify the published topic with the following command ``` rostopic echo /mobile_base/commands/velocity ```
22.963415
132
0.758895
eng_Latn
0.858176
faa19fcc975a5a48f042ae041a51bbbad5416faf
398
md
Markdown
desktop-src/OpenGL/glloadmatrix.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
552
2019-08-20T00:08:40.000Z
2022-03-30T18:25:35.000Z
desktop-src/OpenGL/glloadmatrix.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
1,143
2019-08-21T20:17:47.000Z
2022-03-31T20:24:39.000Z
desktop-src/OpenGL/glloadmatrix.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
1,287
2019-08-20T05:37:48.000Z
2022-03-31T20:22:06.000Z
--- title: glLoadMatrix Functions description: These functions replace the current matrix with an arbitrary matrix. ms.assetid: a55575a8-fd1d-4f5b-a5c7-c158c1ef0fee ms.topic: article ms.date: 05/31/2018 --- # glLoadMatrix Functions These functions replace the current matrix with an arbitrary matrix: - [**glLoadMatrixd**](glloadmatrixd.md) - [**glLoadMatrixf**](glloadmatrixf.md)
17.304348
81
0.751256
eng_Latn
0.797853
faa1a354dd9e66c3c6afc1939713e24f0ed251de
6,412
md
Markdown
content/project/Thermal rectifier/index.md
Shizheng-Wen/Personalweb
7127c0a36fa7b42a5e82f455782bf50f722c86a1
[ "MIT" ]
null
null
null
content/project/Thermal rectifier/index.md
Shizheng-Wen/Personalweb
7127c0a36fa7b42a5e82f455782bf50f722c86a1
[ "MIT" ]
null
null
null
content/project/Thermal rectifier/index.md
Shizheng-Wen/Personalweb
7127c0a36fa7b42a5e82f455782bf50f722c86a1
[ "MIT" ]
null
null
null
--- title: Near-field Radiation-based Thermal Rectifiers summary: I proposed a thermal rectifier reaching a record-high rectification ratio. tags: - Nanoscale Heat Transfer date: "2017-09-27T00:00:00Z" # Optional external URL for project (replaces project detail page). external_link: "" image: caption: Variations of thermal rectification ratio with respect to vacuum gap in previous studies and simulated results of the proposed design focal_point: Smart links: #- icon: twitter # icon_pack: fab # name: Follow # url: https://twitter.com/georgecushen url_code: "" url_pdf: "" url_slides: "" url_video: "" # Slides (optional). # Associate this project with Markdown slides. # Simply enter your slide deck's filename without extension. # E.g. `slides = "example-slides"` references `content/slides/example-slides.md`. # Otherwise, set `slides = ""`. # slides: example --- Thermal radiation is one of the most common phenomena in nature. Any object with a temperature greater than 0 K spontaneously radiates electromagnetic waves (thermal radiation) at all times. In the last century, when quantum mechanics was first established, people believed that the thermal emission of objects was governed by Planck's law at large scales, modified by the corresponding surface emissivity. ![1](./photo/1.jpg) However, in the last twenty years, experimental researchers have found that when the geometric dimensions of an object are comparable to the characteristic wavelength of thermal radiation, the resulting radiative energy exchange can greatly [exceed the limit set by Planck's blackbody law](https://physicsworld.com/a/exposing-the-flaw-in-plancks-law/). This phenomenon is what we call near-field radiative heat transfer. 
Near-field radiative heat transfer has many promising applications due to its high efficiency of energy transmission, including [near-field thermophotovoltaics](https://shizhengwen.netlify.app/publication/jqsrt_2020_ntpv/) and [thermal rectifiers](https://shizhengwen.netlify.app/publication/jqsrt_2019_thermal_rectifer/). Here, I want to introduce the thermal rectifiers in more detail. As we all know, the development of modern electronics and information-processing industries depends on the invention of the electric diode, which relies on the rectification of electron flow. Nevertheless, as circuits become more highly integrated, the heat flux density increases significantly, which can result in high-temperature working conditions where electronic diodes tend to have low efficiency and may even fail. On the other hand, heat flow can also be controlled, which may provide alternative ways to process information under harsh conditions. Thermal rectification is a phenomenon that allows heat to flow preferentially in one direction. It promises to pave the way for future thermal diode/rectifier devices, even thermal computers, and is thus attracting extensive attention. ![2](./photo/2.jpg) The performance of a thermal rectifier can be characterized by the thermal rectification ratio: ![6](./photo/6.jpg) where Qf and Qr represent the net heat fluxes in the forward and reverse scenarios, respectively. As we all know, heat transfer can be realized through three mechanisms: conduction, convection and radiation. Previous research on thermal rectification was primarily based on conduction and convection. Alternatively, radiation-based thermal diodes, which can avoid contact and intrusion, have been proposed. 
![3](./photo/3.jpg) Although previous research has been done in this field, radiation-based thermal rectifiers, as shown in this figure, still have low rectification ratios compared with their conduction-based counterparts, whose ratios can be as high as 100. Therefore, investigations into further improving the performance of radiation-based rectifiers are imperative. ![4](./photo/4.jpg) To create the asymmetry in heat flux, the radiative properties of the material should be sensitive to temperature. Here, I consider intrinsic silicon as the main material for constructing the rectifier. According to semiconductor physics, the radiative properties of silicon can be easily tailored by changing the temperature, due to thermally excited carriers. See the following figures. ![5](./photo/5.jpg) Therefore, we constructed the thermal rectifier from intrinsic Si and a dissimilar material. In the forward bias, electrons in Si are excited at high temperatures, which gives rise to an enhancement of radiative heat transfer. In the reverse bias, electrons are not excited, and the heat transfer is constrained. This asymmetry of heat flux gives rise to a high rectification ratio, and this is the key idea of this work. Further, the effects of gap distances, materials and configurations of nanoparticles on the rectification ratio were also investigated, and we used the dipole approximation to unveil the underlying mechanisms. Detailed information can be found in my paper. This work was finished when I was a junior student at NUAA in Spring 2019, under the advisement of Prof. Xianglei Liu. He is now the vice dean of the College of Energy and Power Engineering. He won the Sigma Xi Best Ph.D. Thesis Award at Georgia Institute of Technology (top 2%), the Raymond Viskanta Young Scientist Award (given to 1-2 people in the field of thermal radiation every year), and so on. It should be noted that this work helped me win the highest research honor at NUAA. 
<img src="./photo/7.jpg" alt="5" style="zoom:50%;" /> Below are some of the reviewers' comments: `In this paper, the authors proposed a high-performance thermal rectification approach based on the thermal effect of intrinsic and doped Si. The thermal rectification ratio can be enhanced to exceeding 10K, which is record-breaking. I found the results very interesting, and therefore recommend its acceptance after a minor revision.` `In this work the authors described a thermal rectification mechanism between two different nanoparticles with respect to their separation distance. They highlight a high (potentially record) rectification coefficient both in near-field and in far field regime with a maximum of rectification in extreme near-field regime around 10nm and explain the physical origin of this strong rectification. The manuscript is pretty well written and the involved physical mechanisms are well described.`
record: faa23482b2b3d3bb7a7b4ade4bbc9f775358fb63 | 725 bytes | md (Markdown) | docs/aws-iam-policies.md | repo vipbeto/domain-protect @ c881ff360c32fa864d826c494483f4026d715861 | licenses ["Apache-2.0"] | forks 1 (2022-03-26)
# AWS IAM policies

For least privilege access control, example AWS IAM policies are provided:

* [domain-protect audit policy](../aws-iam-policies/domain-protect-audit.json) - attach to domain-protect audit role in every AWS account
* [domain-protect audit trust relationship](../aws-iam-policies/domain-protect-audit-trust.json) for domain-protect audit role in every AWS account
* [domain-protect audit trust relationship with External ID](../aws-iam-policies/domain-protect-audit-trust-external-id.json) for domain-protect audit role in every AWS account
* [domain-protect deploy policy](../aws-iam-policies/domain-protect-deploy.json) - attach to IAM group or role assumed by CI/CD pipeline

[back to README](../README.md)
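The linked trust-relationship files are not reproduced here; as a rough illustration only, a cross-account trust policy with an External ID condition typically looks like the following (the account ID and External ID value below are placeholders, not taken from this repository):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
    }
  ]
}
```

The External ID guards against the confused-deputy problem: the security-audit account can only assume the audit role when it supplies the agreed-upon ID.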
record: faa28981f612e7e820393a60fc4bf0270bc38f85 | 13,248 bytes | md (Markdown) | articles/search/cognitive-search-concept-image-scenarios.md | repo LucianoLimaBR/azure-docs.pt-br @ 8e05ae8584aa2620717d69ccb6a2fdfbb46446c7 | licenses ["CC-BY-4.0", "MIT"] | no star/issue/fork events recorded
---
title: Process and extract text from images in an enrichment pipeline
titleSuffix: Azure Cognitive Search
description: Process and extract text and other information from images in Azure Cognitive Search pipelines.
manager: nitinme
author: LuisCabrer
ms.author: luisca
ms.service: cognitive-search
ms.topic: conceptual
ms.date: 11/04/2019
ms.openlocfilehash: 5006bf5bc7eafd464861a3570654539386c5f837
ms.sourcegitcommit: b050c7e5133badd131e46cab144dd5860ae8a98e
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/23/2019
ms.locfileid: "72787750"
---
# <a name="how-to-process-and-extract-information-from-images-in-ai-enrichment-scenarios"></a>How to process and extract information from images in AI enrichment scenarios

Azure Cognitive Search has several capabilities for working with images and image files. During document cracking, you can use the *imageAction* parameter to extract text from photos or pictures containing alphanumeric text, such as the word "STOP" on a stop sign. Other scenarios include generating a text representation of an image, such as "dandelion" for a photo of a dandelion, or the color "yellow". You can also extract metadata about the image, such as its size.

This article covers image processing in more detail and provides guidance for working with images in an AI enrichment pipeline.

<a name="get-normalized-images"></a>

## <a name="get-normalized-images"></a>Get normalized images

As part of document cracking, there is a new set of indexer configuration parameters for handling image files or images embedded in files. These parameters are used to normalize images for further downstream processing. Normalizing images makes them more uniform. Large images are resized to a maximum height and width to make them consumable.

For images that provide orientation metadata, the image rotation is adjusted for vertical loading. Metadata adjustments are captured in a complex type created for each image. You cannot turn off image normalization. Skills that iterate over images expect normalized images. Enabling image normalization on an indexer requires that a skillset be attached to that indexer.

| Configuration parameter | Description |
|--------------------|-------------|
| imageAction | Set to "none" if no action should be taken when image files or embedded images are found. <br/>Set to "generateNormalizedImages" to generate an array of normalized images as part of document cracking.<br/>Set to "generateNormalizedImagePerPage" to generate an array of normalized images where, for PDFs in the data source, each page is rendered to one output image. The functionality is the same as "generateNormalizedImages" for non-PDF file types.<br/>For any option other than "none", the images are exposed in the *normalized_images* field. <br/>The default is "none". This setting is only pertinent to blob data sources, when "dataToExtract" is set to "contentAndMetadata". <br/>A maximum of 1000 images are extracted from a given document. If there are more than 1000 images in a document, the first 1000 are extracted and a warning is generated. |
| normalizedImageMaxWidth | The maximum width (in pixels) for the generated normalized images. The default is 2000. The maximum allowed value is 10000. |
| normalizedImageMaxHeight | The maximum height (in pixels) for the generated normalized images. The default is 2000. The maximum allowed value is 10000.|

> [!NOTE]
> If you set the *imageAction* property to anything other than "none", you cannot set the *parsingMode* property to anything other than "default". You can only set one of these two properties to a non-default value in the indexer configuration. Set the **parsingMode** parameter to `json` (to index each blob as a single document) or `jsonArray` (if your blobs contain JSON arrays and you need each element of the array to be treated as a separate document).

The default of 2000 pixels for the maximum width and height of normalized images is based on the maximum sizes supported by the [OCR skill](cognitive-search-skill-ocr.md) and the [image analysis skill](cognitive-search-skill-image-analysis.md). The [OCR skill](cognitive-search-skill-ocr.md) supports a maximum width and height of 4200 for non-English languages and 10000 for English. If you raise the maximum limits, processing may fail on larger images, depending on your skillset definition and the language of the documents.

Specify imageAction in the [indexer definition](https://docs.microsoft.com/rest/api/searchservice/create-indexer) as follows:

```json
{
  //...rest of your indexer definition goes here ...
  "parameters": {
    "configuration": {
      "dataToExtract": "contentAndMetadata",
      "imageAction": "generateNormalizedImages"
    }
  }
}
```

When *imageAction* is set to any value other than "none", the new *normalized_images* field contains an array of images. Each image is a complex type with the following members:

| Image member | Description |
|--------------------|-----------------------------------------|
| data | Base64-encoded string of the normalized image in JPEG format. |
| width | Width of the normalized image in pixels. |
| height | Height of the normalized image in pixels. |
| originalWidth | The original width of the image before normalization. |
| originalHeight | The original height of the image before normalization. |
| rotationFromOriginal | Counterclockwise rotation in degrees applied to create the normalized image. A value between 0 and 360 degrees. This step reads the metadata generated by a camera or scanner. It is usually a multiple of 90 degrees. |
| contentOffset | The character offset within the content field from which the image was extracted. This field is only applicable to files with embedded images. |
| pageNumber | If the image was extracted or rendered from a PDF, this field contains the page number in the PDF it was extracted or rendered from, starting at 1. If the image was not from a PDF, this field is 0. |

Example value of *normalized_images*:

```json
[
  {
    "data": "BASE64 ENCODED STRING OF A JPEG IMAGE",
    "width": 500,
    "height": 300,
    "originalWidth": 5000,
    "originalHeight": 3000,
    "rotationFromOriginal": 90,
    "contentOffset": 500,
    "pageNumber": 2
  }
]
```

## <a name="image-related-skills"></a>Image-related skills

There are two built-in cognitive skills that take images as input: [OCR](cognitive-search-skill-ocr.md) and [Image Analysis](cognitive-search-skill-image-analysis.md).

Currently, these skills only work with images generated in the document cracking step. As such, the only supported input is `"/document/normalized_images"`.

### <a name="image-analysis-skill"></a>Image Analysis skill

The [Image Analysis skill](cognitive-search-skill-image-analysis.md) extracts a rich set of visual features based on the image content. For example, you can generate a caption for an image, create tags, or identify celebrities and landmarks.

### <a name="ocr-skill"></a>OCR skill

The [OCR skill](cognitive-search-skill-ocr.md) extracts text from image files such as JPGs, PNGs, and bitmaps. It can extract text as well as layout information. The layout information provides bounding boxes for each of the identified strings.

## <a name="embedded-image-scenario"></a>Embedded image scenario

A common scenario involves creating a single string containing all of the file's contents, both plain text and text extracted from images, by performing the following steps:

1. [Extract normalized_images](#get-normalized-images)
1. Run the OCR skill using `"/document/normalized_images"` as input
1. Merge the text representation of those images with the raw text extracted from the file. Use the [Text Merge](cognitive-search-skill-textmerger.md) skill to consolidate the two chunks of text into a single large string.

The following example skillset creates a *merged_text* field that contains the textual content of the document. It also includes the OCR-processed text of each of the embedded images.

#### <a name="request-body-syntax"></a>Request body syntax

```json
{
  "description": "Extract text from images and merge with content text to produce merged_text",
  "skills": [
    {
      "description": "Extract text (plain and structured) from image.",
      "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
      "context": "/document/normalized_images/*",
      "defaultLanguageCode": "en",
      "detectOrientation": true,
      "inputs": [
        { "name": "image", "source": "/document/normalized_images/*" }
      ],
      "outputs": [
        { "name": "text" }
      ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Text.MergeSkill",
      "description": "Create merged_text, which includes all the textual representation of each image inserted at the right location in the content field.",
      "context": "/document",
      "insertPreTag": " ",
      "insertPostTag": " ",
      "inputs": [
        { "name": "text", "source": "/document/content" },
        { "name": "itemsToInsert", "source": "/document/normalized_images/*/text" },
        { "name": "offsets", "source": "/document/normalized_images/*/contentOffset" }
      ],
      "outputs": [
        { "name": "mergedText", "targetName": "merged_text" }
      ]
    }
  ]
}
```

Now that you have a merged_text field, you can map it as a searchable field in the indexer definition. All content of the files, including the text of the images, will be searchable.

## <a name="visualize-bounding-boxes-of-extracted-text"></a>Visualize bounding boxes of extracted text

Another common scenario is visualizing layout information from search results. For example, you may want to highlight where a piece of text was found in an image as part of your search results.

Since the OCR step runs on the normalized images, the layout coordinates are in the normalized image space. When displaying the normalized image, the presence of normalized coordinates is generally not a problem, but in some situations you want to display the original image. In this case, convert each of the coordinate points in the layout to the original image coordinate system.

As a helper, if you need to transform normalized coordinates into the original coordinate space, use the following algorithm:

```csharp
/// <summary>
/// Converts a point in the normalized coordinate space to the original coordinate space.
/// This method assumes the rotation angles are multiples of 90 degrees.
/// </summary>
public static Point GetOriginalCoordinates(Point normalized,
                            int originalWidth,
                            int originalHeight,
                            int width,
                            int height,
                            double rotationFromOriginal)
{
    Point original = new Point();
    double angle = rotationFromOriginal % 360;

    if (angle == 0)
    {
        original.X = normalized.X;
        original.Y = normalized.Y;
    }
    else if (angle == 90)
    {
        original.X = normalized.Y;
        original.Y = (width - normalized.X);
    }
    else if (angle == 180)
    {
        original.X = (width - normalized.X);
        original.Y = (height - normalized.Y);
    }
    else if (angle == 270)
    {
        original.X = height - normalized.Y;
        original.Y = normalized.X;
    }

    // Cast to double so the scaling factor is not truncated by integer division.
    double scalingFactor = (angle % 180 == 0) ?
        (double)originalHeight / height :
        (double)originalHeight / width;

    original.X = (int)(original.X * scalingFactor);
    original.Y = (int)(original.Y * scalingFactor);

    return original;
}
```

## <a name="see-also"></a>See also

+ [Create Indexer (REST)](https://docs.microsoft.com/rest/api/searchservice/create-indexer)
+ [Image Analysis skill](cognitive-search-skill-image-analysis.md)
+ [OCR skill](cognitive-search-skill-ocr.md)
+ [Text Merge skill](cognitive-search-skill-textmerger.md)
+ [How to define a skillset](cognitive-search-defining-skillset.md)
+ [How to map enriched fields](cognitive-search-output-field-mapping.md)
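For quick experimentation outside of .NET, the same coordinate transform can be sketched in Python (a port of the C# helper above under the same multiple-of-90-degrees assumption; this is not part of any Azure SDK):

```python
def get_original_coordinates(nx, ny, original_width, original_height,
                             width, height, rotation_from_original):
    """Map a point (nx, ny) in normalized-image space back to the original image.

    width/height are the normalized image's dimensions; rotation must be a
    multiple of 90 degrees, mirroring the C# helper.
    """
    angle = rotation_from_original % 360
    if angle == 0:
        x, y = nx, ny
    elif angle == 90:
        x, y = ny, width - nx
    elif angle == 180:
        x, y = width - nx, height - ny
    elif angle == 270:
        x, y = height - ny, nx
    else:
        raise ValueError("rotation must be a multiple of 90 degrees")

    # Undo the resize; the normalized width/height axes swap for 90/270 rotations.
    scale = original_height / height if angle % 180 == 0 else original_height / width
    return int(x * scale), int(y * scale)
```

For example, with no rotation and a 10x downscale, the point (50, 30) maps back to (500, 300).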
record: faa45f4801f221f083f42176e127a58a74b6a087 | 33 bytes | md (Markdown) | README.md | repo fabriciofmsilva/schematics @ 8dd05c295b6384c17f1e8a6a9d183482d55948f9 | licenses ["MIT"] | no star/issue/fork events recorded
# schematics

A simple Schematics
record: faa472c0ea7aa98f555af32510e31a4c7c1b52b5 | 208 bytes | md (Markdown) | pages.ro.aws/linux/pkgadd.md | repo unPi-ro/tldr @ 13ffa5e396b4018eeaebf42dd7fff38bfd74638b | licenses ["CC-BY-4.0"] | no star/issue/fork events recorded
# pkgadd

> Add a package to a CRUX system.

- Install a local software package:

`pkgadd {{package_name}}`

- Upgrade an already installed package from a local package:

`pkgadd -u {{package_name}}`
record: faa4dab07cf703e2e83fc4100c251ff6a066b05e | 118 bytes | md (Markdown) | gai-nian-yu-yuan-li/cluster/README.md | repo xingwang-guo/kubernetes-handbook @ 82b34e19853fad015d870bc75c45b3dd26a33800 | licenses ["Apache-2.0"] | no star/issue/fork events recorded
# Cluster Resource Management

To manage heterogeneous hosts with differing configurations, and to simplify the operation and maintenance of Pods, Kubernetes provides many cluster-management configuration and administration features: namespaces partition the cluster into separate spaces, and labels and taints created on nodes are used for Pod scheduling, among other mechanisms.
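The label/taint mechanism mentioned above can be sketched with a hypothetical Pod spec (all names and values below are made up for illustration):

```yaml
# Hypothetical example: first label and taint a node, e.g.
#   kubectl label node node-1 disktype=ssd
#   kubectl taint node node-1 dedicated=gpu:NoSchedule
# then schedule a Pod onto it via a nodeSelector and a matching toleration.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd            # only schedule onto nodes labeled disktype=ssd
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"   # tolerate the node's dedicated=gpu:NoSchedule taint
  containers:
    - name: app
      image: nginx
```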
record: faa6629e9b103b46d6efb1731b97d91f2ead8f9b | 63 bytes | md (Markdown) | README.md | repo TheZacqwerty/LEarn_QuatroTam-Techathon-2022 @ eb20ff30d76ea772388ca140bd4fce56c4b19854 | licenses ["Apache-2.0"] | no star/issue/fork events recorded
# LEarn_QuatroTam-Techathon-2022

A prototype made by QuatroTam
record: faa79d35c5c05da9710e3f0a1f916873d7ab283d | 192 bytes | md (Markdown) | README.md | repo vsemecky/ganimator @ 7b6da2218eec45c753a752669e3f019cdf04e747 | licenses ["MIT"] | issues 1 (2021-08-11)
# Ganimator

Ganimator (GAN Animator) is a Python library for generating beautiful videos from StyleGAN networks. It combines the power of **StyleGAN** and **MoviePy**.

## Still in development...
record: faa8f8274ce7675d813007af11d33d72c95d04fc | 7,987 bytes | md (Markdown) | create-saml-assertion/README.md | repo Axway-API-Management-Plus/scripting-examples @ 0587598bb66a0e182db791f4c8f0aa11dc152ab5 | licenses ["Apache-2.0"] | no star/issue/fork events recorded
# Create SAML Assertion using the API-Gateway

There are cases where the API-Gateway is required to dynamically create a SAML assertion to be sent to downstream applications, for instance when a user has been authenticated and a SAML assertion must be created on behalf of that user. For this purpose, the standard filter "Insert SAML Attribute Assertion" can be used, but this filter requires special input parameters. The scripting filter creates the required attribute `attribute.lookup.list` and can be combined into a policy like so:

![Sample-Policy](./images/sample-policy.png)

## Attribute: attribute.subject.id

The authenticated user the SAML assertion should belong to. Sample: sampleuser

## Attribute: attribute.lookup.list

A list of attributes that should become part of the assertion. In fact, that list must be a `java.util.Map<String, RetrievedAttribute>`. When tracing the attribute you will see something like this:

```
attribute.lookup.list {
    Value: {Key123=key=[Key123] name=[Key123] values=[Value123] namespace=[##nonamespace##] namespaceForAssertion=[urn:vordel:attribute:1.0] useForAssertion=[true], Key0815=key=[Key0815] name=[Key0815] values=[Value0815] namespace=[##nonamespace##] namespaceForAssertion=[urn:vordel:attribute:1.0] useForAssertion=[true]}
    Type: java.util.HashMap
}
```

This creates a SAML assertion like the following, which can be placed in an attribute of choice:

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="Id-d4a3315e7303d90009bde218-2" IssueInstant="2020-01-29T15:25:08Z" Version="2.0">
    <saml:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=*.demo.axway.com,OU=Axway IT,O=Axway Inc.,L=Phoenix,ST=Arizona,C=US</saml:Issuer>
    <saml:Subject>
        <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">Chris</saml:NameID>
        <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
            <saml:SubjectConfirmationData>
                <dsig:KeyInfo
xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" Id="Id-d4a3315e7303d90009bde218-1"> <enc:EncryptedKey xmlns:enc="http://www.w3.org/2001/04/xmlenc#" Id="Id-0001580311508513-0000000068b4b7ec-1"> <enc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5"/> <dsig:KeyInfo Id="Id-0001580311508568-00000000758700a2-2"> <dsig:X509Data> <dsig:X509Certificate>MIIHTzCCBTegAwIBAgIJAKPaQn6ksa7aMA0GCSqGSIb3DQEBBQUAMIGuMQswCQYDVQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlkIChzZWUgY3VycmVudCBhZGRyZXNzIGF0IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXNzKTESMBAGA1UEBRMJQTgyNzQzMjg3MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS4xKTAnBgNVBAMTIENoYW1iZXJzIG9mIENvbW1lcmNlIFJvb3QgLSAyMDA4MB4XDTA4MDgwMTEyMjk1MFoXDTM4MDczMTEyMjk1MFowga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQHEzpNYWRyaWQgKHNlZSBjdXJyZW50IGFkZHJlc3MgYXQgd3d3LmNhbWVyZmlybWEuY29tL2FkZHJlc3MpMRIwEAYDVQQFEwlBODI3NDMyODcxGzAZBgNVBAoTEkFDIENhbWVyZmlybWEgUy5BLjEpMCcGA1UEAxMgQ2hhbWJlcnMgb2YgQ29tbWVyY2UgUm9vdCAtIDIwMDgwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCvAMtwNyuAWko6bHiUfaN/Gh/2NdW928sNRHI+JrKQUrpjOyhYb6WzbZSm891kDFX29ufyIiKAXuFixrYp4YFs8r/lfTJqVKAyGVn+H4vXPWCGhSRv4xGzdz4gljUha7MI2XAuZPeEklPWDrCQiorjh40G072QDuKZoRuGDtqaCrsLYVAGUvGef3bsyw/QHg3PmTA9HMRFEFis1tPo1+XqxQEHd9ZR5gN/ikilTWh1uem8nk4ZcfUyS5xtYBkL+8ydddy/Js2Pk3g5eXNeJQ7KXOt3EgfLZEFHcpOrUMPrCXZkNNI5t3YRCQ12RcSprj1qr7V9ZS+UWBDsXHyvfuK2GNnQm05aSd+pZgvMPMZ4fKecHePOjlO+Bd5gD2vlGts/4+EhySnB8esHnFIbAURRPHsl18TlUlRdJQfKFiC4reRB7noI/plvg6aRArBsNlVq5331lubKgdaX8ZSD6e2wsWsSaR6s+12pxZjptFtYer49okQ6Y1nUCyXeG0+95QGezdIp1Z8XGQpvvwyQ0wlf2eOKNcx5Wk0ZN5K3xMGtr/R5JJqyAQuxr1yW84Ay+1w9mPGgP0revq+ULtlVmhduYJ1jbLhjya6BXBg14JC7vjxPNyK5fuvPnnchpj04gftI2jE9K+OJ9dC1vX7gUMQSibMjmhAxhduub+84Mxh2EQIDAQABo4IBbDCCAWgwEgYDVR0TAQH/BAgwBgEB/wIBDDAdBgNVHQ4EFgQU+SSsD7K1+HnA+mCIG8TZTQKeFxkwgeMGA1UdIwSB2zCB2IAU+SSsD7K1+HnA+mCIG8TZTQKeFxmhgbSkgbEwga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQHEzpNYWRyaWQgKHNlZSBjdXJyZW50IGFkZHJlc3MgYXQgd3d3LmNhbWVyZmlybWEuY29tL2FkZHJlc3MpMRIwEAYDVQQFEwlBODI3NDMyODcxGzAZBgNVBAoTEkFDIENhbWVyZmlybWEgUy5BLjEpMCcGA1UEAxMgQ2hhb
WJlcnMgb2YgQ29tbWVyY2UgUm9vdCAtIDIwMDiCCQCj2kJ+pLGu2jAOBgNVHQ8BAf8EBAMCAQYwPQYDVR0gBDYwNDAyBgRVHSAAMCowKAYIKwYBBQUHAgEWHGh0dHA6Ly9wb2xpY3kuY2FtZXJmaXJtYS5jb20wDQYJKoZIhvcNAQEFBQADggIBAJASryI1wqM58C7e6bXpeHxIvj99RZJe6dqxGfwWPJ+0W2aeaufDuV2I6A+tzyMP3iU6XsxPpcG1Lawk0lgH3qLPaYRgM+gQDROpI9CF5Y57pp49chNyM/WqfcZjHwj0/gF/JM8rLFQJ3uIrbZLGOU8W6jx+ekbURWpGqOt1glanq6B8aBMz9p0w8G8nOSQjKpD9kCk18pPfNKXG9/jvjA9iSnyu0/VU+I22mlaHFoI6M6taIgj3grrqLuBHmrS1RaMFO9ncLkVAO+rcf+g769HsJtg1pDDFOqxXnrN2pSB7+R5KBWIBpih1YJeSDW4+TTdDDZIVnBgizVGZoCkaPF+KMjNbMMeJL0eYD6MDxvbxrN8y8NmBGuScvfaAFPDRLLmF9dijscilIeUcE5fuDr3fKanvNFNb0+RqE4QGtjICxFKuItLcsiFCGtpA8CnJ7AoMXOLQusxI0zcKzBIKinmwPQN/aUv0NCB9szTqjktk9T79syNnFQ0EuPAtwQlRPLJsFfClI9eDdOTlLsn+mCdCxqvGnrDQWzilm1DefhiYtUU79nm06PcaewaD+9CL2rvHvRirCG88gGtAPxkZumWK5r7VXNM21+9AUiRgOGcEMeyP84LG3rlV8zsxkVrctQgVrXYlCg17LofiDKYGvCYQbTed7N14jHyAxfDZd0jQ</dsig:X509Certificate> </dsig:X509Data> </dsig:KeyInfo> <enc:CipherData> <enc:CipherValue>fUJiZ9sfZDvbtSxHprQBvrGtL/2WfFwEd7bi2yu111N7phdNSYrCYHLWOF0YDskj eOkccOCrxcKsZszLoBQpZAzUH1Kjq8utS4qWTkgM7dmlWYoRtCyC5ZqzFnhQPITj cd9/+7IvjSn6UYdUnu48ALfi5v9cTEQMzt4IBwI6ZvxvbQV45Mbb6wgPu0HgWU3H 5omiW2nF/wEh5JFUZN4R+MHXLuASGy94CzkS/Zt/tPjGJmrGbhbC02ZzW2LlLkH3 95ccb4yM01hzbrASJMn7b7uMjC12uuG38FqhMAohI/Hk6oM/nZDjUSViyDfipbd4 TYnCwMB/o+o0zHNFk4dXcW57YrS0jfQGBYZJ7jojrygW/l7QgUib+Hj3zZeY+W+h wbFFlfDdMSSuB7d78hMPvwrGEoaYh1mFFWV6oeCT+UqcI0mYjA9tiQK36pYYKxjg M/7ZI98zfLd2Cq+EqCYHpnnT9zY4i4M7TLSyMpW1IDCeUiUkrVyuN5XFYZyqWps0 U3VyeezCEN6m69A3QyeFX5e3/IQU61kABtAku5lvmWyfBjRTxLYdSY13oaSj9Gu1 W4wbdjfg80USkoxQd/f2AgtTj5kAc5rHR2YI+tGTyiKWSbkyyJxFHCRKCIi/sU6Q m9jR76mpw+JZScho9L3VP7hR7u0znMFzDHZ9ijxOHVk=</enc:CipherValue> </enc:CipherData> </enc:EncryptedKey> </dsig:KeyInfo> </saml:SubjectConfirmationData> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2020-01-29T15:25:07Z" NotOnOrAfter="2020-02-03T15:25:07Z"/> <saml:AttributeStatement> <saml:Attribute Name="Key123" NameFormat="urn:vordel:attribute:1.0"> 
<saml:AttributeValue>Value123</saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="Key0815" NameFormat="urn:vordel:attribute:1.0"> <saml:AttributeValue>Value0815</saml:AttributeValue> </saml:Attribute> </saml:AttributeStatement> </saml:Assertion> ``` ## Script-Examples The following script is used to create the required attribute: `attribute.lookup.list` and can be adjusted as needed. ### Javascript ```javascript var imp = new JavaImporter(java.util, com.vordel.circuit.attribute); with(imp) { function invoke(msg) { var userProperties = new java.util.HashMap(); var list1 = new ArrayList(); var list2 = new ArrayList(); list1.add("Value123"); list2.add("Value0815"); var attribute1 = new RetrievedAttribute(6, "Key123", null, list1); var attribute2 = new RetrievedAttribute(6, "Key0815", null, list2); userProperties.put("Key123", attribute1); userProperties.put("Key0815", attribute2); msg.put("attribute.lookup.list", userProperties); return true; } } ``` ### Groovy N/A ### Jython N/A ## Changelog - 0.0.1 - 29.01.2020 - Initial version ## Limitations/Caveats - N/A ## Contributing Please read [Contributing.md](https://github.com/Axway-API-Management-Plus/Common/blob/master/Contributing.md) for details on our code of conduct, and the process for submitting pull requests to us. ## Team ![alt text][Axwaylogo] Axway Team [Axwaylogo]: https://github.com/Axway-API-Management/Common/blob/master/img/AxwayLogoSmall.png "Axway logo"
record: faa910e49f5798b8992cfaa3ce878197ab48b934 | 5,392 bytes | md (Markdown) | blog/_posts/2019-11-09-tools-for-privacy-and-seurity-online.md | repo jonmbake/jonbake.com @ ae96f9db6d981c08e866b916fbed826c05923b4c | licenses ["CC-BY-3.0"] | stars 2 (2021-01-04 to 2021-08-12), issues 2 (2021-08-18), forks 1 (2021-04-03)
---
layout: default
title: Tools for Privacy and Security Online
tags:
- front-page
---

Online advertising, and in turn bulk personal data collection, is a multi-billion dollar industry. Sometimes it feels as if there is little an individual can do to protect privacy online. That doesn't mean we shouldn't try. There are simple steps we can all take to increase privacy and security online. This post outlines a few tools one can employ to be more secure online.

## Browser Extensions

Browser extensions allow third parties to extend the behavior of browsers. There's a set of extensions that allow for greater browsing privacy by blocking certain unsafe behaviors. It's important to make sure the extension creator is a trusted third party.

### Ad blocker

Ad blockers work by blocking network connections to known tracking sources. For example, many websites use _Google Analytics_ to track user behavior. The data is sent to Google and can be aggregated to track user behavior across the web. Ad blockers will block the tracking code from loading. Some good choices are:

- [uBlock Origin Chrome Extension](https://chrome.google.com/webstore/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm); [uBlock Origin Firefox Extension](https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/)
- [AdBlock Plus](https://adblockplus.org/)

### EFF's Privacy Badger

The [Electronic Frontier Foundation - EFF](https://www.eff.org/) is a non-profit committed to fighting for privacy and user rights on the web. They offer privacy tools available free to the public. One of those tools is [Privacy Badger](https://www.eff.org/privacybadger), which works similarly to an ad blocker like _uBlock Origin_. Adding _Privacy Badger_ adds an extra layer of protection and will likely catch additional invisible trackers.

## Browser Settings

In addition to enabling browser extensions, you can also tweak browser settings to create a more secure browsing experience.
### Disable Third-party Cookies

Cookies are little bits of data that get transmitted with each web request. They are necessary for things like logging into a site and maintaining a session. However, third-party cookies-- cookies set by sites other than the one you're currently interacting with-- are for the most part unnecessary. They are primarily used to track user behavior across the web. They can be disabled in most browsers. In Chrome, you go to _chrome://settings/content/cookies_ and toggle the _Block third-party cookies_ button:

![Enable Block third-party cookies Chrome](/assets/images/blog/2019/11/09/chrome-block-third-party-cookies.png)

### Enable Do Not Track

When _Do Not Track_ is enabled, the browser will send a _DNT (Do not track)_ header with every request. Of course, it is up to the web page owner whether the header will be respected. _GDPR_ and other government regulations are forcing companies to respect _DNT_. With more websites respecting _Do Not Track_, it is a valuable tool to increase user privacy.

## Behavioral

Beyond tools like browser extensions, there are simple behaviors you can adopt to increase your online privacy and security. Here are a few.

### Turn Off Devices When Not In Use

Turning off devices when not in use seems rather impractical, but it is probably one of the best things you can do to increase your online security. Especially with mobile phones, app APIs are constantly being pinged with your location and other sensitive data. Short of hooking up a network sniffer, there is really no way to know what data is being uploaded in the background when your phone is on. The only solution is to turn it off so nothing can be sent.

### Use Private Browser

Private browsing mode creates a new cookie store. This ensures previously set cookies are not sent. It can be an additional safeguard against tracking cookies getting sent to untrusted sites.
### Look for HTTPS

HTTPS encrypts web traffic between you and the website. In Chrome, a web page displays with a little lock icon if the site is using HTTPS. If the icon is not displaying, be aware that any data sent to the site will not be encrypted.

## More Advanced

You can go beyond behavioral changes and browser settings/extensions with these more advanced practices.

### Use a VPN

Even when a site does not use a tracking cookie, your [IP Address](https://en.wikipedia.org/wiki/IP_address) can be used as a tracking device. A Virtual Private Network (VPN) acts as a reverse proxy to the internet, making your IP Address that of the provider. VPN traffic is also encrypted to the VPN server, making it an option for use over an untrusted Wifi connection like at a café.

It is very important to choose a trustworthy VPN provider. The VPN provider has the same level of insight into your online behavior as an [ISP](https://en.wikipedia.org/wiki/Internet_service_provider). They could log all your behavior. Make sure to do a bit of research prior to choosing one.

### Setup a Pi-Hole

A [Pi-Hole](https://pi-hole.net/) is a bit of software that runs on a [Raspberry Pi](https://www.raspberrypi.org/) that effectively acts as a network-wide ad blocker. It's like having uBlock Origin automatically installed on every device connected to the network. It's more of an advanced option because it requires a bit of technical knowledge to set up, but the extra protection is well worth it.
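To make the Do Not Track header mentioned earlier concrete, here is a tiny Python sketch that builds a request carrying the DNT header (it only constructs the request object; no traffic is sent, and the URL is just a placeholder):

```python
import urllib.request

# Build a request that advertises the Do Not Track preference.
# Whether a site actually honors DNT is up to the site itself.
req = urllib.request.Request("https://example.com", headers={"DNT": "1"})

# urllib normalizes header names to capitalized form internally.
print(req.get_header("Dnt"))
```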
record: faa96cca7d211a9b37f765618ca3400b4763468b | 38,556 bytes | md (Markdown) | doc/src/tutorial.md | repo UnofficialJuliaMirrorSnapshots/DataKnots.jl-f3f2b2ad-91c8-5588-b964-d77e2d3bb090 @ 9547884c325a12fb3a9ec78caf099da2d1266b9d | licenses ["MIT"] | no star/issue/fork events recorded
# Embedded Query Interface DataKnots is an embedded query language designed so that accidental programmers can more easily analyze complex data. This tutorial shows how typical query operations can be performed upon a simplified in-memory dataset. ## Getting Started Consider a tiny cross-section of public data from Chicago, represented as nested `Vector` and `NamedTuple` objects. department_data = [ (name = "POLICE", employee = [ (name = "ANTHONY A", position = "POLICE OFFICER", salary = 72510), (name = "JEFFERY A", position = "SERGEANT", salary = 101442), (name = "NANCY A", position = "POLICE OFFICER", salary = 80016)]), (name = "FIRE", employee = [ (name = "DANIEL A", position = "FIREFIGHTER-EMT", salary = 95484), (name = "ROBERT K", position = "FIREFIGHTER-EMT", salary = 103272)])] This hierarchical dataset contains a list of departments, with each department containing associated employees. To query this dataset, we convert it into a `DataKnot`, or *knot*. using DataKnots chicago = DataKnot(:department => department_data) ## Our First Query Let's say we want to return the list of department names from this dataset. We query the `chicago` knot using Julia's index notation with `It.department.name`. department_names = chicago[It.department.name] #=> │ name │ ──┼────────┼ 1 │ POLICE │ 2 │ FIRE │ =# The output, `department_names`, is also a DataKnot. The content of this output knot could be accessed via `get` function. get(department_names) #-> ["POLICE", "FIRE"] ## Navigation In DataKnot queries, `It` means "the current input". The dotted notation lets one navigate a hierarchical dataset. Let's continue our dataset exploration by listing employee names. chicago[It.department.employee.name] #=> │ name │ ──┼───────────┼ 1 │ ANTHONY A │ 2 │ JEFFERY A │ 3 │ NANCY A │ 4 │ DANIEL A │ 5 │ ROBERT K │ =# Navigation context matters. For example, `employee` tuples are not directly accessible from the root of the dataset. 
When a field label, such as `employee`, can't be found, an appropriate error message is displayed. chicago[It.employee] #-> ERROR: cannot find "employee" ⋮ Instead, `employee` tuples can be queried by navigating through `department` tuples. When tuples are returned, they are displayed as a table. chicago[It.department.employee] #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 │ 2 │ JEFFERY A SERGEANT 101442 │ 3 │ NANCY A POLICE OFFICER 80016 │ 4 │ DANIEL A FIREFIGHTER-EMT 95484 │ 5 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# Notice that nested vectors traversed during navigation are flattened into a single output vector. ## Composition & Identity Dotted navigation, such as `It.department.name`, is a syntax shorthand for the `Get()` primitive together with query composition (`>>`). chicago[Get(:department) >> Get(:name)] #=> │ name │ ──┼────────┼ 1 │ POLICE │ 2 │ FIRE │ =# The `Get()` primitive returns values that match a given label. Query composition (`>>`) chains two queries serially, with the output of the first query as input to the second. chicago[Get(:department) >> Get(:employee)] #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 │ 2 │ JEFFERY A SERGEANT 101442 │ 3 │ NANCY A POLICE OFFICER 80016 │ 4 │ DANIEL A FIREFIGHTER-EMT 95484 │ 5 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# The `It` query simply reproduces its input, which makes it the identity with respect to composition (`>>`). Hence, `It` can be woven into any composition without changing the result. chicago[It >> Get(:department) >> Get(:name)] #=> │ name │ ──┼────────┼ 1 │ POLICE │ 2 │ FIRE │ =# This motivates our clever use of `It` as a syntax shorthand. 
chicago[It.department.name] #=> │ name │ ──┼────────┼ 1 │ POLICE │ 2 │ FIRE │ =# In DataKnots, queries are either *primitives*, such as `Get` and `It`, or built from other queries with *combinators*, such as composition (`>>`). Let's explore some other combinators. ## Context & Counting To count the number of departments in this `chicago` dataset we write the query `Count(It.department)`. Observe that the argument provided to `Count()`, `It.department`, is itself a query. chicago[Count(It.department)] #=> ┼───┼ │ 2 │ =# We could also count the total number of employees across all departments. chicago[Count(It.department.employee)] #=> ┼───┼ │ 5 │ =# What if we wanted to count employees by department? Using query composition (`>>`), we can perform `Count` in a nested context. chicago[It.department >> Count(It.employee)] #=> ──┼───┼ 1 │ 3 │ 2 │ 2 │ =# In this output, we see that one department has `3` employees, while the other has `2`. ## Record Construction Let's improve the previous query by including each department's name alongside employee counts. This can be done by using the `Record` combinator. chicago[ It.department >> Record(It.name, Count(It.employee))] #=> │ department │ │ name #B │ ──┼────────────┼ 1 │ POLICE 3 │ 2 │ FIRE 2 │ =# To label a record field we use Julia's `Pair` syntax, (`=>`). chicago[ It.department >> Record(It.name, :size => Count(It.employee))] #=> │ department │ │ name size │ ──┼──────────────┼ 1 │ POLICE 3 │ 2 │ FIRE 2 │ =# This is syntax shorthand for the `Label` primitive. chicago[ It.department >> Record(It.name, Count(It.employee) >> Label(:size))] #=> │ department │ │ name size │ ──┼──────────────┼ 1 │ POLICE 3 │ 2 │ FIRE 2 │ =# Rather than building a record from scratch, one could add a field to an existing record using `Collect`. 
chicago[It.department >> Collect(:size => Count(It.employee))] #=> │ department │ │ name employee{name,position,salary} size │ ──┼───────────────────────────────────────────────────────────────────┼ 1 │ POLICE ANTHONY A, POLICE OFFICER, 72510; JEFFERY A, SERGEA… 3 │ 2 │ FIRE DANIEL A, FIREFIGHTER-EMT, 95484; ROBERT K, FIREFIG… 2 │ =# If a label is set to `nothing` then that field is excluded. This would let us restructure a record as we see fit. chicago[It.department >> Collect(:size => Count(It.employee), :employee => nothing)] #=> │ department │ │ name size │ ──┼──────────────┼ 1 │ POLICE 3 │ 2 │ FIRE 2 │ =# Records can be nested. The following listing includes, for each department, employees' name and salary. chicago[ It.department >> Record(It.name, It.employee >> Record(It.name, It.salary))] #=> │ department │ │ name employee{name,salary} │ ──┼─────────────────────────────────────────────────────────────┼ 1 │ POLICE ANTHONY A, 72510; JEFFERY A, 101442; NANCY A, 80016 │ 2 │ FIRE DANIEL A, 95484; ROBERT K, 103272 │ =# In this output, commas separate tuple fields and semi-colons separate vector elements. ## Reusable Queries Queries can be reused. Let's define `DeptSize` to be a query that computes the number of employees in a department. DeptSize = :size => Count(It.employee) This query can be used in different ways. chicago[Max(It.department >> DeptSize)] #=> ┼───┼ │ 3 │ =# chicago[ It.department >> Record(It.name, DeptSize)] #=> │ department │ │ name size │ ──┼──────────────┼ 1 │ POLICE 3 │ 2 │ FIRE 2 │ =# ## Filtering Data Let's extend the previous query to only show departments with more than one employee. This can be done using the `Filter` combinator. 
chicago[ It.department >> Record(It.name, DeptSize) >> Filter(It.size .> 2)] #=> │ department │ │ name size │ ──┼──────────────┼ 1 │ POLICE 3 │ =# To use regular operators in query expressions, we need to use broadcasting notation, such as `.>` rather than `>` ; forgetting the period is an easy mistake to make. chicago[ It.department >> Record(It.name, DeptSize) >> Filter(It.size > 2)] #=> ERROR: MethodError: no method matching isless(::Int64, ::DataKnots.Navigation) ⋮ =# ## Incremental Composition Combinators let us construct queries incrementally. Let's explore our Chicago data starting with a list of employees. Q = It.department.employee chicago[Q] #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 │ 2 │ JEFFERY A SERGEANT 101442 │ 3 │ NANCY A POLICE OFFICER 80016 │ 4 │ DANIEL A FIREFIGHTER-EMT 95484 │ 5 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# Let's extend this query to show if the salary is over 100k. Q >>= Collect(:gt100k => It.salary .> 100000) The query definition is tracked automatically. Q #=> It.department.employee >> Collect(:gt100k => It.salary .> 100000) =# Let's run `Q` again. chicago[Q] #=> │ employee │ │ name position salary gt100k │ ──┼────────────────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 false │ 2 │ JEFFERY A SERGEANT 101442 true │ 3 │ NANCY A POLICE OFFICER 80016 false │ 4 │ DANIEL A FIREFIGHTER-EMT 95484 false │ 5 │ ROBERT K FIREFIGHTER-EMT 103272 true │ =# We can now filter the dataset to include only high-paid employees. Q >>= Filter(It.gt100k) #=> It.department.employee >> Collect(:gt100k => It.salary .> 100000) >> Filter(It.gt100k) =# Let's run `Q` again. 
chicago[Q] #=> │ employee │ │ name position salary gt100k │ ──┼────────────────────────────────────────────┼ 1 │ JEFFERY A SERGEANT 101442 true │ 2 │ ROBERT K FIREFIGHTER-EMT 103272 true │ =# Well-tested queries may benefit from a `Tag` so that their definitions are suppressed in larger compositions. HighlyCompensated = Tag(:HighlyCompensated, Q) #-> HighlyCompensated chicago[HighlyCompensated] #=> │ employee │ │ name position salary gt100k │ ──┼────────────────────────────────────────────┼ 1 │ JEFFERY A SERGEANT 101442 true │ 2 │ ROBERT K FIREFIGHTER-EMT 103272 true │ =# This tagging can make subsequent compositions easier to read. Q = HighlyCompensated >> It.name #=> HighlyCompensated >> It.name =# chicago[Q] #=> │ name │ ──┼───────────┼ 1 │ JEFFERY A │ 2 │ ROBERT K │ =# ## Aggregate Queries We've demonstrated the `Count` combinator, but `Count` could also be used as a query. In this next example, `Count` receives employees as input, and produces their number as output. chicago[It.department.employee >> Count] #=> ┼───┼ │ 5 │ =# Previously we've only seen *elementwise* queries, which emit an output for each of its input elements. The `Count` query is an *aggregate*, which means it emits an output for its entire input. We may wish to count employees by department. Contrary to expectation, adding parentheses will not change the output. chicago[It.department >> (It.employee >> Count)] #=> ┼───┼ │ 5 │ =# To count employees in *each* department, we use the `Each()` combinator, which evaluates its argument elementwise. chicago[It.department >> Each(It.employee >> Count)] #=> ──┼───┼ 1 │ 3 │ 2 │ 2 │ =# Alternatively, we could use the `Count()` combinator to get the same result. chicago[It.department >> Count(It.employee)] #=> ──┼───┼ 1 │ 3 │ 2 │ 2 │ =# Which form of `Count` to use depends upon what is notationally convenient. For incremental construction, being able to simply append `>> Count` is often very helpful. 
Q = It.department.employee chicago[Q >> Count] #=> ┼───┼ │ 5 │ =# We could then refine the query, and run the exact same command. Q >>= Filter(It.salary .> 100000) chicago[Q >> Count] #=> ┼───┼ │ 2 │ =# ## Summarizing Data To summarize data, we could use query combinators such as `Min`, `Max`, and `Sum`. Let's compute some salary statistics. Salary = It.department.employee.salary chicago[ Record( :count => Count(Salary), :min => Min(Salary), :max => Max(Salary), :sum => Sum(Salary))] #=> │ count min max sum │ ┼──────────────────────────────┼ │ 5 72510 103272 452724 │ =# Just as `Count` has an aggregate query form, so do `Min`, `Max`, and `Sum`. The previous query could be written in aggregate form. chicago[ Record( :count => Salary >> Count, :min => Salary >> Min, :max => Salary >> Max, :sum => Salary >> Sum)] #=> │ count min max sum │ ┼──────────────────────────────┼ │ 5 72510 103272 452724 │ =# Let's calculate salary statistics by department. Salary = It.employee.salary chicago[ It.department >> Record( It.name, :count => Count(Salary), :min => Min(Salary), :max => Max(Salary), :sum => Sum(Salary))] #=> │ department │ │ name count min max sum │ ──┼──────────────────────────────────────┼ 1 │ POLICE 3 72510 101442 253968 │ 2 │ FIRE 2 95484 103272 198756 │ =# Summary combinators can be used to define domain specific measures, such as `PayGap` and `AvgPay`. Salary = It.employee.salary PayGap = :paygap => Max(Salary) .- Min(Salary) AvgPay = :avgpay => Sum(Salary) ./ Count(It.employee) chicago[ It.department >> Record(It.name, PayGap, AvgPay)] #=> │ department │ │ name paygap avgpay │ ──┼─────────────────────────┼ 1 │ POLICE 28932 84656.0 │ 2 │ FIRE 7788 99378.0 │ =# `Unique` is another combinator producing a summary value. Here, we use `Unique` to return distinct positions by department. 
chicago[It.department >> Record(It.name, Unique(It.employee.position))] #=> │ department │ │ name position │ ──┼──────────────────────────────────┼ 1 │ POLICE POLICE OFFICER; SERGEANT │ 2 │ FIRE FIREFIGHTER-EMT │ =# ## Grouping Data So far, we've navigated and summarized data by exploiting its hierarchical organization: the whole dataset $\to$ department $\to$ employee. But what if we want a query that isn't supported by the existing hierarchy? For example, how could we calculate the number of employees for each *position*? A list of distinct positions could be obtained using `Unique`. chicago[It.department.employee.position >> Unique] #=> │ position │ ──┼─────────────────┼ 1 │ FIREFIGHTER-EMT │ 2 │ POLICE OFFICER │ 3 │ SERGEANT │ =# However, `Unique` is not sufficient because positions are not associated to the respective employees. To associate employee records to their positions, we use `Group` combinator: chicago[It.department.employee >> Group(It.position)] #=> │ position employee{name,position,salary} │ ──┼───────────────────────────────────────────────────────────────────┼ 1 │ FIREFIGHTER-EMT DANIEL A, FIREFIGHTER-EMT, 95484; ROBERT K, FIRE…│ 2 │ POLICE OFFICER ANTHONY A, POLICE OFFICER, 72510; NANCY A, POLIC…│ 3 │ SERGEANT JEFFERY A, SERGEANT, 101442 │ =# The query `Group(It.position)` rearranges the dataset into a new hierarchy: position $\to$ employee. We can use the new arrangement to show employee names for each unique position. chicago[It.department.employee >> Group(It.position) >> Record(It.position, It.employee.name)] #=> │ position name │ ──┼─────────────────────────────────────┼ 1 │ FIREFIGHTER-EMT DANIEL A; ROBERT K │ 2 │ POLICE OFFICER ANTHONY A; NANCY A │ 3 │ SERGEANT JEFFERY A │ =# We could further use summary combinators, which lets us answer the original question: What is the number of employees for each position? 
    chicago[
        It.department.employee
        >> Group(It.position)
        >> Record(It.position,
                  :count => Count(It.employee))]
    #=>
      │ position         count │
    ──┼────────────────────────┼
    1 │ FIREFIGHTER-EMT      2 │
    2 │ POLICE OFFICER       2 │
    3 │ SERGEANT             1 │
    =#

Moreover, we could reuse the previously defined employee measures.

    Salary = It.employee.salary
    PayGap = :paygap => Max(Salary) .- Min(Salary)
    AvgPay = :avgpay => Sum(Salary) ./ Count(It.employee)

    chicago[
        It.department.employee
        >> Group(It.position)
        >> Record(It.position, PayGap, AvgPay)]
    #=>
      │ position         paygap  avgpay   │
    ──┼───────────────────────────────────┼
    1 │ FIREFIGHTER-EMT    7788   99378.0 │
    2 │ POLICE OFFICER     7506   76263.0 │
    3 │ SERGEANT              0  101442.0 │
    =#

One could group by any query; here we group employees based upon a salary threshold.

    GT100K = :gt100k => (It.salary .> 100000)

    chicago[
        It.department.employee
        >> Group(GT100K)
        >> Record(It.gt100k, It.employee.name)]
    #=>
      │ gt100k  name                          │
    ──┼───────────────────────────────────────┼
    1 │  false  ANTHONY A; NANCY A; DANIEL A  │
    2 │   true  JEFFERY A; ROBERT K           │
    =#

We could also group by several queries.

    chicago[
        It.department.employee
        >> Group(It.position, GT100K)
        >> Record(It.position, It.gt100k, It.employee.name)]
    #=>
      │ position         gt100k  name                │
    ──┼──────────────────────────────────────────────┼
    1 │ FIREFIGHTER-EMT   false  DANIEL A            │
    2 │ FIREFIGHTER-EMT    true  ROBERT K            │
    3 │ POLICE OFFICER    false  ANTHONY A; NANCY A  │
    4 │ SERGEANT           true  JEFFERY A           │
    =#

## Broadcasting over Queries

Any function could be applied to query arguments using Julia's broadcasting notation.

    chicago[
        It.department.employee >>
        titlecase.(It.name)]
    #=>
    ──┼───────────┼
    1 │ Anthony A │
    2 │ Jeffery A │
    3 │ Nancy A   │
    4 │ Daniel A  │
    5 │ Robert K  │
    =#

Broadcasting can also be used with operators. For example, let's compute and display a 2% Cost Of Living Adjustment ("COLA").
COLA = trunc.(Int, It.salary .* 0.02) chicago[ It.department.employee >> Record(It.name, :old_salary => It.salary, :COLA => "+" .* string.(COLA), :new_salary => It.salary .+ COLA)] #=> │ employee │ │ name old_salary COLA new_salary │ ──┼──────────────────────────────────────────┼ 1 │ ANTHONY A 72510 +1450 73960 │ 2 │ JEFFERY A 101442 +2028 103470 │ 3 │ NANCY A 80016 +1600 81616 │ 4 │ DANIEL A 95484 +1909 97393 │ 5 │ ROBERT K 103272 +2065 105337 │ =# Functions taking a vector argument, such as `mean`, can also be applied to queries. In this example, `mean` computes the average employee salary by department. using Statistics: mean chicago[ It.department >> Record( It.name, :mean_salary => mean.(It.employee.salary))] #=> │ department │ │ name mean_salary │ ──┼─────────────────────┼ 1 │ POLICE 84656.0 │ 2 │ FIRE 99378.0 │ =# ## Keeping Values Suppose we'd like to list employee names together with their department. The naive approach won't work because `department` is not available in the context of an employee. chicago[ It.department >> It.employee >> Record(It.name, It.department.name)] #-> ERROR: cannot find "department" ⋮ This can be overcome by using `Keep` to label an expression's result, so that it is available within subsequent computations. chicago[ It.department >> Keep(:dept_name => It.name) >> It.employee >> Record(It.name, It.dept_name)] #=> │ employee │ │ name dept_name │ ──┼──────────────────────┼ 1 │ ANTHONY A POLICE │ 2 │ JEFFERY A POLICE │ 3 │ NANCY A POLICE │ 4 │ DANIEL A FIRE │ 5 │ ROBERT K FIRE │ =# This pattern also emerges when a filter condition uses a parameter calculated in a parent context. For example, let's list employees with a higher than average salary for their department. 
chicago[ It.department >> Keep(:mean_salary => mean.(It.employee.salary)) >> It.employee >> Filter(It.salary .> It.mean_salary)] #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ JEFFERY A SERGEANT 101442 │ 2 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# ## Query Parameters Parameters let us reuse complex queries without changing their definition. Here we construct a query that depends upon the parameter `AMT`, which is capitalized by convention. PaidOverAmt = It.department >> It.employee >> Filter(It.salary .> It.AMT) >> It.name Query parameters are passed as keyword arguments. chicago[AMT=100000, PaidOverAmt] #=> │ name │ ──┼───────────┼ 1 │ JEFFERY A │ 2 │ ROBERT K │ =# What if we want to return employees who have a greater than average salary? This average could be computed first. MeanSalary = mean.(It.department.employee.salary) mean_salary = chicago[MeanSalary] #=> ┼─────────┼ │ 90544.8 │ =# Then, this value could be passed as our parameter. chicago[PaidOverAmt, AMT=mean_salary] #=> │ name │ ──┼───────────┼ 1 │ JEFFERY A │ 2 │ DANIEL A │ 3 │ ROBERT K │ =# This approach performs composition outside of the query language. To evaluate a query and immediately use it as a parameter within the same query expression, we could use the `Given` combinator. chicago[Given(:AMT => MeanSalary, PaidOverAmt)] #=> │ name │ ──┼───────────┼ 1 │ JEFFERY A │ 2 │ DANIEL A │ 3 │ ROBERT K │ =# ## Query Functions Let's make a function `EmployeesOver` that produces employees with a salary greater than the given threshold. The threshold value `AMT` is evaluated and then made available in the context of each employee with the `Given` combinator. 
EmployeesOver(X) = Given(:AMT => X, It.department >> It.employee >> Filter(It.salary .> It.AMT)) chicago[EmployeesOver(100000)] #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ JEFFERY A SERGEANT 101442 │ 2 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# `EmployeesOver` can take a query as an argument. For example, let's find employees with higher than average salary. MeanSalary = mean.(It.department.employee.salary) chicago[EmployeesOver(MeanSalary)] #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ JEFFERY A SERGEANT 101442 │ 2 │ DANIEL A FIREFIGHTER-EMT 95484 │ 3 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# Note that this combination is yet another query that could be further refined. chicago[EmployeesOver(MeanSalary) >> It.name] #=> │ name │ ──┼───────────┼ 1 │ JEFFERY A │ 2 │ DANIEL A │ 3 │ ROBERT K │ =# Alternatively, this query function could have been defined using `Keep`. We use `Given` because it doesn't leak parameters. Specifically, `It.AMT` is not available outside `EmployeesOver()`. chicago[EmployeesOver(MeanSalary) >> It.AMT] #-> ERROR: cannot find "AMT" ⋮ ## Paging Data Sometimes query results can be quite large. In this case it's helpful to `Take` or `Drop` items from the input. Let's start by listing all 5 employees of our toy database. Employee = It.department.employee chicago[Employee] #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 │ 2 │ JEFFERY A SERGEANT 101442 │ 3 │ NANCY A POLICE OFFICER 80016 │ 4 │ DANIEL A FIREFIGHTER-EMT 95484 │ 5 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# To return only the first 2 records, we use `Take`. chicago[Employee >> Take(2)] #=> │ employee │ │ name position salary │ ──┼───────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 │ 2 │ JEFFERY A SERGEANT 101442 │ =# A negative index counts records from the end of the input. 
So, to return all the records but the last two, we write: chicago[Employee >> Take(-2)] #=> │ employee │ │ name position salary │ ──┼───────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 │ 2 │ JEFFERY A SERGEANT 101442 │ 3 │ NANCY A POLICE OFFICER 80016 │ =# To skip the first two records, returning the rest, we use `Drop`. chicago[Employee >> Drop(2)] #=> │ employee │ │ name position salary │ ──┼───────────────────────────────────┼ 1 │ NANCY A POLICE OFFICER 80016 │ 2 │ DANIEL A FIREFIGHTER-EMT 95484 │ 3 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# To return the 1st half of the employees in the database, we could use `Take` with an argument that computes how many to take. chicago[Employee >> Take(Count(Employee) .÷ 2)] #=> │ employee │ │ name position salary │ ──┼───────────────────────────────────┼ 1 │ ANTHONY A POLICE OFFICER 72510 │ 2 │ JEFFERY A SERGEANT 101442 │ =# ## Extracting Data Given any `DataKnot`, its content can be extracted using `get`. For singular output, `get` returns a scalar value. get(chicago[Count(It.department)]) #-> 2 For plural output, `get` returns a `Vector`. get(chicago[It.department.employee.name]) #-> ["ANTHONY A", "JEFFERY A", "NANCY A", "DANIEL A", "ROBERT K"] For more complex outputs, `get` may return a `@VectorTree`, which is an `AbstractVector` specialized for column-oriented storage. query = It.department >> Record(It.name, :size => Count(It.employee)) vt = get(chicago[query]) display(vt) #=> @VectorTree of 2 × (name = (1:1) × String, size = (1:1) × Int64): (name = "POLICE", size = 3) (name = "FIRE", size = 2) =# ## The `@query` Notation Queries could be written using a convenient path-like notation provided by the `@query` macro. 
In this notation: * bare identifiers are translated to navigation with `Get`; * query combinators, such as `Count(X)`, use lower-case names; * the period (`.`) is used for query composition (`>>`); * aggregate queries, such as `Count`, require parentheses; * records can be constructed using curly brackets, `{}`; and * functions and operators are lifted automatically. The `@query` Notation | Equivalent Query -----------------------------|------------------------------------ `department` | `Get(:department)` `count(department)` | `Count(Get(:department))` `department.count()` | `Get(:department) >> Count` `department.employee` | `Get(:department) >> Get(:employee)` `department.count(employee)` | `Get(:department) >> Count(Get(:employee))` `department{name}` | `Get(:department) >> Record(Get(:name))` A `@query` macro with one argument creates a query object. @query department.name #-> Get(:department) >> Get(:name) This query object could be used to query a `DataKnot` as usual. chicago[@query department.name] #=> │ name │ ──┼────────┼ 1 │ POLICE │ 2 │ FIRE │ =# Alternatively, we can provide the input dataset as an argument to `@query`. @query chicago department.name #=> │ name │ ──┼────────┼ 1 │ POLICE │ 2 │ FIRE │ =# Queries could also be composed by placing the query components in a `begin`/`end` block. @query begin department count(employee) end #-> Get(:department) >> Count(Get(:employee)) Curly brackets `{}` are used to construct `Record` queries. @query department{name, count(employee)} #-> Get(:department) >> Record(Get(:name), Count(Get(:employee))) @query chicago department{name, count(employee)} #=> │ department │ │ name #B │ ──┼────────────┼ 1 │ POLICE 3 │ 2 │ FIRE 2 │ =# Combinators, such as `Filter` and `Keep`, are available, using lower-case names. Operators and functions are automatically lifted to queries. 
using Statistics: mean @query chicago begin department keep(avg_salary => mean(employee.salary)) employee filter(salary > avg_salary) {name, salary} end #=> │ employee │ │ name salary │ ──┼───────────────────┼ 1 │ JEFFERY A 101442 │ 2 │ ROBERT K 103272 │ =# In `@query` notation, query aggregates, such as `Count` and `Unique`, are lower-case and require parentheses. @query chicago department.employee.position.unique().count() #=> ┼───┼ │ 3 │ =# Query parameters are passed as keyword arguments to `@query`. @query chicago begin department employee filter(salary>threshold) end threshold=90544.8 #=> │ employee │ │ name position salary │ ──┼────────────────────────────────────┼ 1 │ JEFFERY A SERGEANT 101442 │ 2 │ DANIEL A FIREFIGHTER-EMT 95484 │ 3 │ ROBERT K FIREFIGHTER-EMT 103272 │ =# To embed regular Julia variables and expressions from within a `@query`, use the interpolation syntax (`$`). threshold = 90544.8 @query chicago begin department.employee filter(salary>$threshold) {name, salary, over => salary - $(trunc(Int, threshold))} end #=> │ employee │ │ name salary over │ ──┼──────────────────────────┼ 1 │ JEFFERY A 101442 10898 │ 2 │ DANIEL A 95484 4940 │ 3 │ ROBERT K 103272 12728 │ =# We can use `@query` to define reusable queries and combinators. salary = @query department.employee.salary stats(x) = @query {min=>min($x), max=>max($x), count=>count($x)} @query chicago $stats($salary) #=> │ min max count │ ┼──────────────────────┼ │ 72510 103272 5 │ =# ## Importing & Exporting Data We can import directly from systems that support the `Tables.jl` interface. Here is a tabular variant of the chicago dataset. 
using CSV employee_data = """ name,department,position,salary,rate "JEFFERY A","POLICE","SERGEANT",101442, "NANCY A","POLICE","POLICE OFFICER",80016, "ANTHONY A","POLICE","POLICE OFFICER",72510, "ALBA M","POLICE","POLICE CADET",,9.46 "JAMES A","FIRE","FIRE ENGINEER-EMT",103350, "DANIEL A","FIRE","FIREFIGHTER-EMT",95484, "ROBERT K","FIRE","FIREFIGHTER-EMT",103272, "LAKENYA A","OEMC","CROSSING GUARD",,17.68 "DORIS A","OEMC","CROSSING GUARD",,19.38 "BRENDA B","OEMC","TRAFFIC CONTROL AIDE",64392, """ |> IOBuffer |> CSV.File chicago′ = DataKnot(:employee => employee_data) chicago′[It.employee] #=> │ employee │ │ name department position salary rate │ ───┼────────────────────────────────────────────────────────────┼ 1 │ JEFFERY A POLICE SERGEANT 101442 │ 2 │ NANCY A POLICE POLICE OFFICER 80016 │ 3 │ ANTHONY A POLICE POLICE OFFICER 72510 │ 4 │ ALBA M POLICE POLICE CADET 9.46 │ 5 │ JAMES A FIRE FIRE ENGINEER-EMT 103350 │ 6 │ DANIEL A FIRE FIREFIGHTER-EMT 95484 │ 7 │ ROBERT K FIRE FIREFIGHTER-EMT 103272 │ 8 │ LAKENYA A OEMC CROSSING GUARD 17.68 │ 9 │ DORIS A OEMC CROSSING GUARD 19.38 │ 10 │ BRENDA B OEMC TRAFFIC CONTROL AIDE 64392 │ =# This tabular data could be filtered to show employees that are paid more than average. Let's also prune the `rate` column. using Statistics: mean highly_compensated = chicago′[Keep(:avg_salary => mean.(It.employee.salary)) >> It.employee >> Filter(It.salary .> It.avg_salary) >> Collect(:rate => nothing)] #=> │ employee │ │ name department position salary │ ──┼──────────────────────────────────────────────────┼ 1 │ JEFFERY A POLICE SERGEANT 101442 │ 2 │ JAMES A FIRE FIRE ENGINEER-EMT 103350 │ 3 │ DANIEL A FIRE FIREFIGHTER-EMT 95484 │ 4 │ ROBERT K FIRE FIREFIGHTER-EMT 103272 │ =# We can then export this data. 
    using DataFrames

    highly_compensated |> DataFrame
    #=>
    4×4 DataFrames.DataFrame
    │ Row │ name      │ department │ position          │ salary │
    │     │ String    │ String     │ String            │ Int64⍰ │
    ├─────┼───────────┼────────────┼───────────────────┼────────┤
    │ 1   │ JEFFERY A │ POLICE     │ SERGEANT          │ 101442 │
    │ 2   │ JAMES A   │ FIRE       │ FIRE ENGINEER-EMT │ 103350 │
    │ 3   │ DANIEL A  │ FIRE       │ FIREFIGHTER-EMT   │ 95484  │
    │ 4   │ ROBERT K  │ FIRE       │ FIREFIGHTER-EMT   │ 103272 │
    =#

## Restructuring Imported Data

After importing tabular data, it is sometimes helpful to restructure it hierarchically to make queries more convenient. We've seen earlier how this could be done with the `Group` combinator.

    chicago′[It.employee >> Group(It.department)]
    #=>
      │ department  employee{name,department,position,salary,rate}        │
    ──┼───────────────────────────────────────────────────────────────────┼
    1 │ FIRE        JAMES A, FIRE, FIRE ENGINEER-EMT, 103350, missing; DA…│
    2 │ OEMC        LAKENYA A, OEMC, CROSSING GUARD, missing, 17.68; DORI…│
    3 │ POLICE      JEFFERY A, POLICE, SERGEANT, 101442, missing; NANCY A…│
    =#

With some labeling, this hierarchy could be transformed so that its structure is compatible with our initial `chicago` dataset.

    Restructure =
        :department =>
            It.employee
            >> Group(It.department)
            >> Record(
                :name => It.department,
                :employee => It.employee >> Collect(:department => nothing))

    chicago′[Restructure]
    #=>
      │ department                                                        │
      │ name    employee{name,position,salary,rate}                       │
    ──┼───────────────────────────────────────────────────────────────────┼
    1 │ FIRE    JAMES A, FIRE ENGINEER-EMT, 103350, missing; DANIEL A, FI…│
    2 │ OEMC    LAKENYA A, CROSSING GUARD, missing, 17.68; DORIS A, CROSS…│
    3 │ POLICE  JEFFERY A, SERGEANT, 101442, missing; NANCY A, POLICE OFF…│
    =#

Using `Collect` we could save this restructured dataset as a top-level field, `department`.
chicago″ = chicago′[Restructure >> Collect] #=> │ employee{name,department,positio… department{name,employee{name,pos…│ ┼─────────────────────────────────────────────────────────────────────┼ │ JEFFERY A, POLICE, SERGEANT, 101… FIRE, [JAMES A, FIRE ENGINEER-EMT…│ =# Then, queries that originally worked with our hierarchical `chicago` dataset would now work with this imported and then restructured `chicago″` data. For example, we could once again compute the average employee salary by department. using Statistics: mean chicago″[ It.department >> Record( It.name, :mean_salary => mean.(It.employee.salary))] #=> │ department │ │ name mean_salary │ ──┼─────────────────────┼ 1 │ FIRE 100702.0 │ 2 │ OEMC 64392.0 │ 3 │ POLICE 84656.0 │ =#
Riak Function Contrib is a joint effort between [Basho](http://basho.com), the company behind Riak, and the Riak community to compile a library of functions and other useful code that can be used in applications everywhere.

# Where to start with Riak Function Contrib

* If you're looking for an overview of the project, simply keep reading
* All the code for this site can be found on the [Riak Function Contrib Repo on GitHub](https://github.com/basho/riak_function_contrib)
* Scroll down to the **Contributing** section to learn how to add a function to the repo
* The list of available MapReduce functions can be found [[here|map-reduce-functions]]
* The Pre- and Post-Commit Functions are [[here|pre-and-post-commits]]
* Other Functions (Importing/Exporting Data, Bucket Reloading, etc.) can be [[found here|other-functions]]

## Functions in Riak

The ability to query Riak beyond the standard GET, PUT, and UPDATE functionality that a key/value store provides is made possible through MapReduce functions. (You can, of course, use [Riak Search](http://wiki.basho.com/display/RIAK/Riak+Search), but it serves a different purpose than MapReduce.) Additionally, Riak provides the ability to run Pre- and Post-Commit Hooks, which are functions that are invoked before or after a riak_object is persisted. MapReduce and Pre- and Post-Commit Hooks enable you to extend Riak's capabilities and build powerful querying and additional functionality into your applications.

## The Role of Riak Function Contrib

One barrier to using these, however, is having to create numerous functions to use with your application. So, in the spirit of truly useful and collaborative projects like [clojure-contrib](https://github.com/richhickey/clojure-contrib), we are aiming to erase that barrier by tapping the collective power of the community to help us build out a library of contributed functions. With that in mind, the goal of Riak Function Contrib is three-fold:

1. Build a robust library of functions that developers can use in applications running on Riak
2. Encourage participation from the community around MapReduce, Pre- and Post-Commit Hooks, and other functions
3. Expand the number of "built-in" functions that ship with Riak

## How To Use This Site

You can use this page to:

* Search for [[MapReduce|map-reduce-functions]], [[Pre-/Post-Commit|pre-and-post-commits]] or [[other functions|other-functions]] that may be suitable for your needs
* Learn how to contribute a function to the repo (see below)
* If you're looking for the Function Contrib Repo on GitHub, [go here](https://github.com/basho/riak_function_contrib)

## Contributing

### Why Contribute to Riak Function Contrib?

* Have you ever driven a Zonda from Florence to Bologna going 250 km/h the entire way?
* Have you ever run with the bulls at Pamplona?
* Have you ever chugged an entire cola without stopping?
* Have you ever rescued seven puppies from a burning building?
* Have you ever done a 265 meter freedive off the coast of New Zealand for kicks?

__*None of these activities are nearly as exhilarating and rewarding as contributing your code to Riak Function Contrib.*__

If you have some code to share, head over to the [Riak Function Contrib repo on GitHub](https://github.com/basho/riak_function_contrib) to get started.
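The function contracts described above are easier to grasp with a concrete sketch. The JavaScript below is illustrative only and is not taken from the contrib repo: `mapHighEarners`, `requireName`, the sample data, and the salary threshold are all invented for this example, and a small `fakeObject` helper stands in for the riak_object structure that Riak would normally pass to these functions.

```javascript
// Illustrative sketch -- names and data are invented for this example.

// MapReduce map phase: Riak calls a map function once per object and
// concatenates the returned lists. The object's JSON payload is carried
// in values[0].data.
function mapHighEarners(value, keyData, arg) {
  var data = JSON.parse(value.values[0].data);
  return data.salary > arg ? [data.name] : [];
}

// Pre-commit hook: return the object unchanged to allow the write, or an
// object of the form {"fail": reason} to reject it before it is persisted.
function requireName(object) {
  var data = JSON.parse(object.values[0].data);
  if (!data.name) {
    return { fail: "a name field is required" };
  }
  return object;
}

// Stand-in for a riak_object, so the sketch can run outside Riak:
function fakeObject(payload) {
  return { values: [{ data: JSON.stringify(payload) }] };
}

mapHighEarners(fakeObject({ name: "JEFFERY A", salary: 101442 }), null, 100000);
// -> ["JEFFERY A"]
requireName(fakeObject({ salary: 0 }));
// -> { fail: "a name field is required" }
```

In a real deployment the stand-in object isn't needed: hooks are attached to buckets via bucket properties, and MapReduce functions are either stored on the Riak node or sent inline with the query.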
---
author: "Stephan Lichtenauer"
title: Matrix Synapse (Nomad)
summary: Matrix Synapse is a secure, decentralised, real-time communication server that can be deployed via nomad.
tags: ["matrix", "synapse", "im", "instant messaging", "nomad"]
---

# Overview

**IMPORTANT NOTE: THIS IS A BETA IMAGE!**

This is a Matrix Synapse jail that can be started with ```pot```, but it can also be deployed via ```nomad```.

For more details about ```nomad``` images, see [about potluck](https://potluck.honeyguide.net/micro/about-potluck/).

# Nomad Job Description Example

Description will follow.