# This code reproduces the results of Figures 2-6
Each script corresponds to one experiment.
To use the real data, download it from this [website](http://cseweb.ucsd.edu/~elkan/posonly/).
---
layout: post
title: "Underwater Fish Detection with Weak Multi-Domain Supervision"
date: 2019-05-26 01:43:58
categories: arXiv_CV
tags: arXiv_CV Object_Detection CNN Classification Detection
author: Dmitry A. Konovalov, Alzayat Saleh, Michael Bradley, Mangalam Sankupellay, Simone Marini, Marcus Sheaves
mathjax: true
---
* content
{:toc}
##### Abstract
Given a sufficiently large training dataset, it is relatively easy to train a modern convolution neural network (CNN) as a required image classifier. However, for the task of fish classification and/or fish detection, if a CNN was trained to detect or classify particular fish species in particular background habitats, the same CNN exhibits much lower accuracy when applied to new/unseen fish species and/or fish habitats. Therefore, in practice, the CNN needs to be continuously fine-tuned to improve its classification accuracy to handle new project-specific fish species or habitats. In this work we present a labelling-efficient method of training a CNN-based fish-detector (the Xception CNN was used as the base) on relatively small numbers (4,000) of project-domain underwater fish/no-fish images from 20 different habitats. Additionally, 17,000 of known negative (that is, missing fish) general-domain (VOC2012) above-water images were used. Two publicly available fish-domain datasets supplied additional 27,000 of above-water and underwater positive/fish images. By using this multi-domain collection of images, the trained Xception-based binary (fish/not-fish) classifier achieved 0.17% false-positives and 0.61% false-negatives on the project's 20,000 negative and 16,000 positive holdout test images, respectively. The area under the ROC curve (AUC) was 99.94%.
##### URL
[http://arxiv.org/abs/1905.10708](http://arxiv.org/abs/1905.10708)
##### PDF
[http://arxiv.org/pdf/1905.10708](http://arxiv.org/pdf/1905.10708)
# WaterMeter
A water meter based on the ESP8266, built with the Arduino IDE.
# Software
You need a Mosquitto MQTT broker, mqttwarn, Perl, and SQLite3.
# Hardware
- Wemos D1 mini, a mini Wi-Fi board with 4MB flash based on the ESP-8266EX - https://www.wemos.cc/en/latest/d1/d1_mini.html
- Micro SD Card Shield for Wemos D1 mini - https://www.wemos.cc/en/latest/d1_mini_shield/micro_sd.html
- Lithium (LiPo) Battery Shield (charging and boost) for Wemos D1 mini - https://www.wemos.cc/en/latest/d1_mini_shield/battery.html
- Triple (x3) Base for Wemos D1 mini - https://www.wemos.cc/en/latest/d1_mini_shield/tripler_base.html
- 12 kOhm 0.125 W resistor for the Battery Shield, used for external power control (see https://github.com/slacky1965/watermeter/blob/master/doc/images/Wemos1.jpg)
- Lithium (LiPo) 18650 battery, 2600 mAh 3.7 V - https://github.com/slacky1965/watermeter/blob/master/doc/images/battery.jpg

Software debounce has been added.
# Communities
- http://www.cnx-software.com/
- http://www.roboticgizmos.com/
- http://hackerboards.com/
- http://www.geeky-gadgets.com/
---
title: 2028 - CacheRootMetadataStop
ms.date: 03/30/2017
ms.assetid: d799b707-ee16-4b04-8b6d-b87c0d60e71d
ms.openlocfilehash: a097c95d6b28a30b1831483bd40c94682c5b6770
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 04/23/2019
ms.locfileid: "62009855"
---
# <a name="2028---cacherootmetadatastop"></a>2028 - CacheRootMetadataStop
## <a name="properties"></a>Свойства
|||
|-|-|
|ID|2028|
|Ключевые слова|WFRuntime|
|Уровень|Verbose|
|Канал|Microsoft-Windows-Application Server-Applications/Debug|
## <a name="description"></a>Описание
Указывает на завершение CacheRootMetadata в действии.
## <a name="message"></a>Сообщение
CacheRootMetadata остановлено для действия %1.
## <a name="details"></a>Подробные сведения
|Имя элемента данных|Тип элемента данных|Описание|
|--------------------|--------------------|-----------------|
|DisplayName|xs:string|Отображаемое имя действия.|
|Домен приложения|xs:string|Строка, возвращаемая AppDomain.CurrentDomain.FriendlyName.|
# HMZ - A high-speed Huffman prefix-free code compressor
Typical compression rates range from 900 MB/s up to 1.25 GB/s.
Typical decompression rates range from 450 MB/s up to 2.3 GB/s.

Included is a utility called hmz with various options to control the compressor.
Here is sample benchmark output:
```
$ ./hmz -b 5 enwik8
File enwik8: size 100000000 bytes, chunk 32768 bytes
Format 1: --> 63498096, 63.4981%, 1081.8177 MB/s, 1221.0867 MB/s
$ ./hmz -b 5 -x 256 enwik8
File enwik8: size 100000000 bytes, chunk 262144 bytes
Format 1: --> 64095843, 64.0958%, 1186.2223 MB/s, 1584.1023 MB/s
$ ./hmz -b 5 rand
File rand: size 104857600 bytes, chunk 32768 bytes
Format 1: --> 104860800, 100.0031%, 2571.1539 MB/s, 9237.6192 MB/s
$ ./hmz -b 5 zero
File zero: size 104857600 bytes, chunk 32768 bytes
Format 1: --> 19200, 0.0183%, 2653.9881 MB/s, 28255.0148 MB/s
```
The software in this suite has only been tested on Intel CPUs. No specific
consideration has been made for big-endian systems; supporting them would
require adding endian-conversion support.
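For background, a Huffman prefix-free code assigns shorter bit strings to more frequent symbols, so no codeword is a prefix of another. The following JavaScript sketch is illustrative only; it is not taken from the hmz sources and assumes nothing about its internals:

```javascript
// Illustrative only: build a Huffman prefix-free code from symbol counts.
function huffmanCode(frequencies) {
  // Start with one leaf node per symbol.
  let nodes = Object.entries(frequencies).map(([symbol, weight]) => ({ symbol, weight }));
  while (nodes.length > 1) {
    // Repeatedly merge the two lightest nodes (a priority queue would be faster).
    nodes.sort((a, b) => a.weight - b.weight);
    const [left, right] = nodes.splice(0, 2);
    nodes.push({ left, right, weight: left.weight + right.weight });
  }
  // Walk the tree: left edges emit "0", right edges emit "1".
  const codes = {};
  (function walk(node, prefix) {
    if (node.symbol !== undefined) {
      codes[node.symbol] = prefix || "0"; // single-symbol edge case
      return;
    }
    walk(node.left, prefix + "0");
    walk(node.right, prefix + "1");
  })(nodes[0], "");
  return codes;
}

console.log(huffmanCode({ a: 45, b: 13, c: 12, d: 16, e: 9, f: 5 }));
// -> { a: '0', c: '100', b: '101', f: '1100', e: '1101', d: '111' }
```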
# Marker Module Integration Test
This is a CosmWasm smart contract that tests the Rust bindings and mocks for the Provenance `marker` module.
This contract has the following functionality.
- Messages
- Bind a name to the contract's address
- Create a marker
- Grant access (all permissions) to a marker
- Finalize a marker
- Activate a marker
- Withdraw coins from the marker to the contract instance
- Transfer coins from the contract instance to an account
- Mint marker coins
- Burn marker coins
- Cancel a marker
- Destroy a marker
- Queries
- Get marker by address
- Get marker by denom
## Build
Compile and optimize the smart contract Wasm.
```bash
make && make optimize
```
## Setup
_NOTE: Address bech32 values and other params may vary._
First, copy the optimized Wasm to your Provenance Blockchain project root.
Then, install the `provenanced` command and genesis a localnet.
```bash
make clean
make install
make localnet-start
```
Create a root name binding for smart contracts (required once per localnet genesis).
```bash
provenanced tx name bind \
"sc" \
$(provenanced keys show -a node0 --home build/node0 --keyring-backend test --testnet) \
"pb" \
--restrict=false \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--fees 5000nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Store the marker integration test smart contract Wasm in Provenance.
```bash
provenanced tx wasm store marker.wasm \
--instantiate-only-address $(provenanced keys show -a node0 --keyring-backend test --home build/node0 --testnet) \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 25000nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Instantiate the contract and bind the name `marker-itv2.sc.pb` to its address.
```bash
provenanced tx wasm instantiate 1 '{"name":"marker-itv2.sc.pb"}' \
--admin $(provenanced keys show -a node0 --keyring-backend test --home build/node0 --testnet) \
--label marker_module_integration_test_v2 \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
## Test 1
Create a restricted marker in a 'proposed' state.
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"create":{"supply":"500","denom":"faustiancoin"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Query the marker by denom
```bash
provenanced q wasm contract-state smart \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"get_by_denom":{"denom":"faustiancoin"}}' \
--testnet -o json | jq
```
Query the marker by address
```bash
provenanced q wasm contract-state smart \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"get_by_address": { "address": "tp1egzwrnxzzlq22ncg3mv8t8zq0zjccwlsdadfdv"}}' \
--testnet -o json | jq
```
Grant access to the marker, so the contract can withdraw and transfer funds in later steps
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"grant_access":{"denom":"faustiancoin"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Finalize the marker
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"finalize":{"denom":"faustiancoin"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Activate the marker
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"activate":{"denom":"faustiancoin"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3600nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Withdraw coins from the marker to the smart contract instance
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"withdraw":{"amount":"400","denom":"faustiancoin"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Transfer coins from the contract to an account
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"transfer":{"amount":"200","denom":"faustiancoin","to":"FIXME"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Mint marker coins
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"mint":{"amount":"100","denom":"faustiancoin"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Burn all remaining coins escrowed in the marker
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"burn":{"amount":"200","denom":"faustiancoin"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
## Test 2
Create another marker in a 'proposed' state.
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"create":{"denom":"chickentendies","supply":"100"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Grant access to the marker, so the contract can cancel and destroy it.
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"grant_access":{"denom":"chickentendies"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Cancel the marker.
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"cancel":{"denom":"chickentendies"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
Destroy the marker.
```bash
provenanced tx wasm execute \
tp18vd8fpwxzck93qlwghaj6arh4p7c5n89x8kskz \
'{"destroy":{"denom":"chickentendies"}}' \
--from node0 \
--keyring-backend test \
--home build/node0 \
--chain-id chain-local \
--gas auto \
--fees 3500nhash \
--broadcast-mode block \
--yes \
--testnet | jq
```
---
layout: post
title: "Conceptos JS"
date: 2016-10-20 20:9:21 +0530
categories: javascript
---

In this post I plan to walk through an assortment of JavaScript concepts for getting to grips with the language.
Let's get started!
## Variables
Variables are named values stored in memory.
```javascript
var foo = 1;
console.log(foo); //1
// reassign the value
foo = 2;
console.log(foo); //2
```
* Don't start a variable name with a digit (~~1valor~~) or a symbol (~~%valor~~), and don't use reserved words such as **function** or **class** as names.
* JavaScript uses **camelCase**, so variables should be declared like this:
```
- costoFactura
- imprimirPorcentaje
- costoTotal
```
Let's continue with variable assignment:
```javascript
var x,
    y,
    z = 1; // only z is assigned a value
// checking
console.log(x); // undefined
console.log(y); // undefined
console.log(z); // 1
// assigning each value independently
var x = 1,
y = 2,
z = 3;
console.log(x); //1
console.log(y); //2
console.log(z); //3
// assignments that reference earlier variables
var x = 1,
y = x + 2,
z = y + 3;
console.log(x); //1
console.log(y); //3
console.log(z); //6
```
### let, var, const

```javascript
var n = 12;
let numero = 1;
const numeros = 123;
```

**var**: if we declare `n` without assigning a value, the console shows **undefined**.
**let**: likewise shows **undefined** when no value is assigned.
**const**: declaring without a value throws **SyntaxError: Missing initializer in const declaration**, so a `const` must ALWAYS be initialized.
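To make these rules concrete, here is a small demo (my own illustrative snippet, not from the original post):

```javascript
console.log(a); // undefined: `var` declarations are hoisted and initialized to undefined
var a = 12;

// console.log(b); // ReferenceError: `let` is hoisted but uninitialized (temporal dead zone)
let b = 1;

const c = 123;  // fine: initialized at declaration
// const d;     // SyntaxError: Missing initializer in const declaration
```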
### Primitive values in variables

```javascript
var numero = 1;
var booleano = true;
var string = 'cadenas de caracteres';
var nulo = null;            // the null type has a single value: null
var indefinido = undefined; // variables that are never assigned a value
                            // default to undefined
```
To find out what type a value holds, use **typeof**. Example:
```javascript
console.log(typeof numero); //number
```
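All the primitives above can be inspected the same way (my own illustrative lines, not from the original post):

```javascript
console.log(typeof 1);         // "number"
console.log(typeof true);      // "boolean"
console.log(typeof 'texto');   // "string"
console.log(typeof undefined); // "undefined"
console.log(typeof null);      // "object"  <- a historical quirk, discussed next
```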
Careful: due to a long-standing quirk, JavaScript has reported `typeof null` as **object** since its creation. To test for null reliably, use a strict equality check:

```javascript
if (nulo === null) {
  console.log('is null');
}
```
### Objects

An object is a reference to a collection that holds a set of properties and values.

```javascript
var objecto = {
  name: 'andres',
  lastName: 'share'
};

// copying the object copies the reference, not the contents
var objectoDos = objecto;
console.log(objectoDos);
// keep in mind that both variables now point to the same object,
// with the same properties and methods
```
### Arrays

Arrays are collections of values with numeric indices.

```javascript
var array = ['a', 'b', 'c', 'd'];
console.log(array[0]); // 'a'
```
### Computed property names
```javascript
var id = 0;
function generarId() {
return "id_" + ++id;
}
var obj = {
[generarId()]: "valor1",
[generarId()]: "valor2"
};
console.log(obj); // { id_1: "valor1", id_2: "valor2" }
```
### for...in

`for...in` iterates over an object's enumerable property keys:
```javascript
var automovil = {
model: "golf",
make: "volskwagen",
year: "2010",
color: "red",
doors: 4
};
for (var key in automovil) {
console.log(key + ":" + automovil[key]);
}
//model:golf
//make:volskwagen
//year:2010
//color:red
//doors:4
```
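A modern alternative (my addition, not in the original post) that skips inherited keys is to iterate `Object.keys`:

```javascript
Object.keys(automovil).forEach(function (key) {
  console.log(key + ':' + automovil[key]);
});
```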
### Converting an object to an array

```javascript
var loc = window.location; // avoid redeclaring the global `location`
var array = [];
for (var key in loc) {
  if (typeof loc[key] === 'string') {
    array.push({ key: key, value: loc[key] });
  }
}
console.log(array);
```
### for...of: iterating over strings, arrays, and maps

```javascript
// keep only the characters that are digits
var string = '123213,,,,sdkljfhksjdfh,345,rhfdk,ujy,,,,,';
var array = [];
for (var value of string) {
  var number = parseInt(value);
  if (!isNaN(number)) {
    array.push(number);
  }
}
console.log(array.join('')); // 123213345
```
### Counting emails per domain

```javascript
var correos = ['[email protected]', '[email protected]', '[email protected]']; // sample input, mine
var mapservicios = {};
for (var correo of correos) {
  var dominio = correo.split('@')[1];
  if (dominio in mapservicios) {
    mapservicios[dominio]++;
  } else {
    mapservicios[dominio] = 1;
  }
}
console.log(mapservicios); // { 'gmail.com': 2, 'yahoo.com': 1 }
```
### Converting the `arguments` object to an array

```javascript
function sumarNumeros() {
  return Array.from(arguments).reduce(function (a, b) { return a + b; });
}

var resultado = sumarNumeros(1, 2, 3, 34, 5, 6, 7, 7);
console.log(resultado); // 65
```
### Hoisting

Function declarations are hoisted to the top of their scope, so they can be called before they appear in the source:

```javascript
// calling the function before its declaration works
mensaje();

function mensaje() {
  console.log('mensaje');
}
```
### Function expressions

```javascript
function foo() {
  console.log('nuevo mensaje');
}

// unlike a declaration, a function expression is not hoisted:
// `msj` must be assigned before it is called
var msj = function () {
  console.log('mensaje');
};
msj();
```
### Anonymous functions

```javascript
// an anonymous function has no name of its own; it must be assigned
// to a variable or passed somewhere, e.g. as a callback that
// iterates over an array:
[1, 2, 2, 3].forEach(function (elemento) {
  console.log(elemento);
});
```
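The same callback can be written more tersely with an ES6 arrow function (my addition, not in the original post):

```javascript
[1, 2, 2, 3].forEach(elemento => console.log(elemento));
```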
#### Named function expressions

```javascript
var almacena = function fool() {
  console.log('mensaje function expre');
};
console.log(almacena.name); // "fool"
```
## IIFE

An IIFE (Immediately Invoked Function Expression) isolates a piece of code from the global scope and runs it immediately:
```javascript
(function(){
console.log('IIFE')
}());
```
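A common use (my own illustrative example) is the module pattern, where an IIFE keeps helpers private and returns a public API:

```javascript
var modulo = (function () {
  var secreto = 42; // not visible outside the IIFE
  return {
    verSecreto: function () { return secreto; }
  };
}());

console.log(modulo.verSecreto()); // 42
```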
### Closures

A function keeps a live link to the variables of the scope where it was declared, even when it is called somewhere else. That captured internal state is what we call a closure.

```javascript
let numero = 1;

function fn() {
  // `fn` closes over `numero` from the enclosing scope
  console.log(numero);
}

function ejecutafn(callback) {
  callback();
}

ejecutafn(fn); // 1
```
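A more idiomatic illustration (my own example, not from the original post) is a counter whose private state survives between calls:

```javascript
function crearContador() {
  let cuenta = 0; // private state captured by the closure
  return function () {
    cuenta++;
    return cuenta;
  };
}

const contador = crearContador();
console.log(contador()); // 1
console.log(contador()); // 2
```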
## 2020-01-12
### Features
- feat(*): first commit
### Other Changes
- first commit
## Form
### 1 Basic usage
```tsx
import React, { useState, Fragment, useCallback } from 'react';
import { Form, Button, FormValue, Validator, noError } from 'rgulu-ui';
export default () => {
const [formData, setFormData] = useState<FormValue>({
username: '',
password: '',
});
const [fields] = useState([
{ name: 'username', label: '用户名', input: { type: 'text' } },
{ name: 'password', label: '密码', input: { type: 'password' } },
]);
const [errors, setErrors] = useState({});
const rules = [
{ key: 'username', required: true },
{ key: 'username', minLength: 8, maxLength: 16 },
{ key: 'username', pattern: /^[A-Za-z0-9]+$/ },
    { key: 'password', required: true },
];
const onSubmit = useCallback(
(e: React.FormEvent<HTMLFormElement>) => {
Validator(formData, rules, errors => {
if (noError(errors)) {
console.log(errors);
}
setErrors(errors);
});
},
[formData, rules],
);
const onChange = useCallback(newValue => {
setFormData({ ...newValue });
}, []);
return (
<Fragment>
{/*{JSON.stringify(errors)}*/}
<Form
value={formData}
fields={fields}
buttons={
<Fragment>
<Button type="submit" level={'main'}>
提交
</Button>
<Button>返回</Button>
</Fragment>
}
onChange={newValue => onChange(newValue)}
onSubmit={onSubmit}
errors={errors}
/>
</Fragment>
);
};
```
### 2 Asynchronous validation
```tsx
import React, { useState, Fragment, useCallback } from 'react';
import { Form, Button, FormValue, Validator, noError } from 'rgulu-ui';
const usernames = ['frank', 'jackjack', 'alice', 'bob'];
const passwords = ['1234', '5678'];
const checkUserName = (
username: string,
succeed: () => void,
fail: () => void,
) => {
setTimeout(() => {
if (usernames.indexOf(username) >= 0) {
fail();
} else {
succeed();
}
}, 200);
};
const checkPassword = (
password: string,
succeed: () => void,
fail: () => void,
) => {
setTimeout(() => {
if (passwords.indexOf(password) >= 0) {
fail();
} else {
succeed();
}
}, 500);
};
const validator = (username: string) => {
return new Promise<string>((resolve, reject) => {
checkUserName(username, resolve, () => reject('账号重复了'));
});
};
const validatorPassword = (password: string) => {
console.log('1111');
return new Promise<string>((resolve, reject) => {
checkPassword(password, resolve, () => reject('密码重复了'));
});
};
export default () => {
const [formData, setFormData] = useState<FormValue>({
username: '',
password: '',
});
const [fields] = useState([
{ name: 'username', label: '用户名', input: { type: 'text' } },
{ name: 'password', label: '密码', input: { type: 'password' } },
]);
const [errors, setErrors] = useState({});
const rules = [
{ key: 'username', required: true },
{ key: 'username', minLength: 8, maxLength: 16 },
{ key: 'username', pattern: /^[A-Za-z0-9]+$/ },
{ key: 'username', validator },
{ key: 'username', validator },
{ key: 'password', required: true },
{ key: 'password', validator: validatorPassword },
{ key: 'password', validator: validatorPassword },
];
const onSubmit = useCallback(
(e: React.FormEvent<HTMLFormElement>) => {
Validator(formData, rules, errors => {
if (noError(errors)) {
console.log(errors);
}
setErrors(errors);
});
},
[formData],
);
const onChange = useCallback(newValue => {
setFormData({ ...newValue });
}, []);
return (
<Fragment>
{/*{JSON.stringify(errors) }*/}
<Form
value={formData}
fields={fields}
buttons={
<Fragment>
<Button type="submit" level={'main'}>
提交
</Button>
<Button>返回</Button>
</Fragment>
}
onChange={newValue => onChange(newValue)}
onSubmit={onSubmit}
errors={errors}
/>
</Fragment>
);
};
```
---
adjusted_budget_summary: null
budget_actual:
dataset_detail_page: /datasets/budgeted-and-actual-provincial-expenditure/budgeted-and-actual-provincial-expenditure
expenditure:
base_financial_year: 2018-19
nominal:
- amount: 994142000
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
real:
- amount: 1092145915
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
notices:
- Please note that the data for 2013 and 2014 has not been published on vulekamali.
- This department did not exist for some years displayed.
budget_actual_programmes:
dataset_detail_page: /datasets/budgeted-and-actual-provincial-expenditure/budgeted-and-actual-provincial-expenditure
notices:
- One or more programmes did not exist for some years displayed.
programmes:
- items:
- amount: 14183000
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
name: Economic Planning
- items:
- amount: 24558000
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
name: Trade and Sector Development
- items:
- amount: 79602000
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
name: Administration
- items:
- amount: 85511000
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
name: Business Regulation and Governance
- items:
- amount: 349210000
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
name: Tourism
- items:
- amount: 441078000
financial_year: 2016-17
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Main appropriation
- amount: null
financial_year: 2013-14
phase: Adjusted appropriation
- amount: null
financial_year: 2013-14
phase: Final Appropriation
- amount: null
financial_year: 2013-14
phase: Audit Outcome
- amount: null
financial_year: 2014-15
phase: Main appropriation
- amount: null
financial_year: 2014-15
phase: Adjusted appropriation
- amount: null
financial_year: 2014-15
phase: Final Appropriation
- amount: null
financial_year: 2014-15
phase: Audit Outcome
- amount: null
financial_year: 2015-16
phase: Main appropriation
- amount: null
financial_year: 2015-16
phase: Adjusted appropriation
- amount: null
financial_year: 2015-16
phase: Final Appropriation
- amount: null
financial_year: 2015-16
phase: Audit Outcome
- amount: null
financial_year: 2016-17
phase: Adjusted appropriation
- amount: null
financial_year: 2016-17
phase: Final Appropriation
- amount: null
financial_year: 2016-17
phase: Audit Outcome
name: Integrated Economic Development
contributed_datasets:
- contributor: International Budget Partnership
name: How Does Civil Society Use Budget Information?
url_path: /datasets/contributed/how-does-civil-society-use-budget-information
- contributor: International Budget Partnership
name: How transparent and participatory are the budgets of Metropolitan Municipalities
in South Africa?
url_path: /datasets/contributed/how-transparent-and-participatory-are-the-budgets-of-metropolitan-municipalities-in-south-africa
- contributor: International Budget Partnership
name: A Guide to Conducting Social Audits in South Africa
url_path: /datasets/contributed/a-guide-to-conducting-social-audits-in-south-africa
- contributor: Studies in Poverty and Inequality Institute
name: Budget Analysis for Advancing Socioeconomic Rights
url_path: /datasets/contributed/budget-analysis-for-advancing-socioeconomic-rights
department_adjusted_budget:
document:
description: ''
format: PDF
name: "Mpumalanga AEPRE 2016-17 \u2013 Vote 6 - Economic Development And Tourism"
url: https://data.vulekamali.gov.za/dataset/88b3ae3d-0e48-4432-98ef-10b488df9a7e/resource/f9714c62-d8e3-4c59-8589-5bb563e7d9fc/download/mpu-vote-06-economic-development-and-tourism.pdf
name: "Mpumalanga AEPRE 2016-17 \u2013 Vote 6 - Economic Development And Tourism"
tables:
description: ''
format: XLSX
name: "Mpumalanga AEPRE 2016-17 \u2013 Vote 6 - Economic Development And Tourism"
url: https://data.vulekamali.gov.za/dataset/88b3ae3d-0e48-4432-98ef-10b488df9a7e/resource/74d6a687-0a87-4c1a-8de1-fd2190c0f0dd/download/mpu-eco-development-and-tourism.xls
department_budget:
document:
description: ''
format: PDF
name: 'Vote 06 : Economic Development And Tourism'
url: https://data.vulekamali.gov.za/dataset/59bfcdc2-0827-4f1b-a539-ef48de207d05/resource/f8875ea5-d113-4e41-b293-4b100de7fd01/download/mpu-vote-06-economic-development-and-tourism.pdf
name: 'Mpumalanga Department: Economic Development And Tourism 2016-17'
tables: null
description: 'Mpumalanga department: Economic Development And Tourism budget data
for the 2016-17 financial year from National Treasury in partnership with IMALI
YETHU.'
economic_classification_by_programme:
dataset_detail_page: /datasets/estimates-of-provincial-expenditure/estimates-of-provincial-expenditure-2016-17
department_data_csv: https://datamanager.vulekamali.gov.za/csv/?api_url=https%3A//openspending.org/api/3/cubes/b9d2af843f3a7ca223eea07fb608e62a%3Aestimates-of-provincial-expenditure-south-africa-2016-17/aggregate/%3Fcut%3Dfinancial_year.financial_year%253A2016%257Cgovernment.government%253A%2522Mpumalanga%2522%257Cdepartment.department%253A%2522Economic%2BDevelopment%2BAnd%2BTourism%2522%26drilldown%3Dbudget_phase.budget_phase%257Cdepartment.department%257Ceconomic_classification_1.economic_classification_1%257Ceconomic_classification_2.economic_classification_2%257Ceconomic_classification_3.economic_classification_3%257Ceconomic_classification_4.economic_classification_4%257Cfinancial_year.financial_year%257Cgovernment.government%257Cprogramme_number.programme%257Cprogramme_number.programme_number%26pagesize%3D10000
programmes:
- items:
- items:
- name: Compensation of employees
total_budget: 55287000
type: economic_classification_2
- name: Goods and services
total_budget: 22433000
type: economic_classification_2
name: Current payments
type: economic_classification_1
- items:
- name: Machinery and equipment
total_budget: 1482000
type: economic_classification_2
name: Payments for capital assets
type: economic_classification_1
- items:
- name: Households
total_budget: 400000
type: economic_classification_2
name: Transfers and subsidies
type: economic_classification_1
name: Administration
type: programme
- items:
- items:
- name: Compensation of employees
total_budget: 15560000
type: economic_classification_2
- name: Goods and services
total_budget: 2250000
type: economic_classification_2
name: Current payments
type: economic_classification_1
- items:
- name: Machinery and equipment
total_budget: 418000
type: economic_classification_2
name: Payments for capital assets
type: economic_classification_1
- items:
- name: Departmental agencies and accounts
total_budget: 67283000
type: economic_classification_2
name: Transfers and subsidies
type: economic_classification_1
name: Business Regulation and Governance
type: programme
- items:
- items:
- name: Compensation of employees
total_budget: 11984000
type: economic_classification_2
- name: Goods and services
total_budget: 2199000
type: economic_classification_2
name: Current payments
type: economic_classification_1
name: Economic Planning
type: programme
- items:
- items:
- name: Goods and services
total_budget: 31471000
type: economic_classification_2
- name: Compensation of employees
total_budget: 24697000
type: economic_classification_2
name: Current payments
type: economic_classification_1
- items:
- name: Public corporations and private enterprises
total_budget: 384910000
type: economic_classification_2
name: Transfers and subsidies
type: economic_classification_1
name: Integrated Economic Development
type: programme
- items:
- items:
- name: Compensation of employees
total_budget: 3022000
type: economic_classification_2
- name: Goods and services
total_budget: 380000
type: economic_classification_2
name: Current payments
type: economic_classification_1
- items:
- name: Departmental agencies and accounts
total_budget: 345808000
type: economic_classification_2
name: Transfers and subsidies
type: economic_classification_1
name: Tourism
type: programme
- items:
- items:
- name: Compensation of employees
total_budget: 11378000
type: economic_classification_2
- name: Goods and services
total_budget: 10715000
type: economic_classification_2
name: Current payments
type: economic_classification_1
- items:
- name: Provinces and municipalities
total_budget: 2465000
type: economic_classification_2
name: Transfers and subsidies
type: economic_classification_1
name: Trade and Sector Development
type: programme
expenditure_over_time:
dataset_detail_page: /datasets/estimates-of-provincial-expenditure/estimates-of-provincial-expenditure-2016-17
expenditure:
base_financial_year: 2018-19
nominal:
- amount: 722549000
financial_year: 2012-13
phase: Audited Outcome
- amount: 739257000
financial_year: 2013-14
phase: Audited Outcome
- amount: 764536000
financial_year: 2014-15
phase: Audited Outcome
- amount: 799481000
financial_year: 2015-16
phase: Adjusted appropriation
- amount: 994142000
financial_year: 2016-17
phase: Main appropriation
- amount: 892685000
financial_year: 2017-18
phase: Medium Term Estimates
- amount: 949319000
financial_year: 2018-19
phase: Medium Term Estimates
real:
- amount: 991786837
financial_year: 2012-13
phase: Audited Outcome
- amount: 958939118
financial_year: 2013-14
phase: Audited Outcome
- amount: 938907342
financial_year: 2014-15
phase: Audited Outcome
- amount: 933584861
financial_year: 2015-16
phase: Adjusted appropriation
- amount: 1092145915
financial_year: 2016-17
phase: Main appropriation
- amount: 936535401
financial_year: 2017-18
phase: Medium Term Estimates
- amount: 949319000
financial_year: 2018-19
phase: Medium Term Estimates
financial_years:
- closest_match:
is_exact_match: true
url_path: /2016-17/provincial/mpumalanga/departments/economic-development-and-tourism
id: 2016-17
is_selected: true
- closest_match:
is_exact_match: true
url_path: /2017-18/provincial/mpumalanga/departments/economic-development-and-tourism
id: 2017-18
is_selected: false
- closest_match:
is_exact_match: true
url_path: /2018-19/provincial/mpumalanga/departments/economic-development-and-tourism
id: 2018-19
is_selected: false
- closest_match:
is_exact_match: false
url_path: /2019-20/provincial/mpumalanga
id: 2019-20
is_selected: false
government:
name: Mpumalanga
slug: mpumalanga
government_functions: []
intro: "## Vision\r\n\r\nAn Inclusive, Global Competitive Economy\r\n\r\n\r\n\r\n\
## Mission\r\n\r\nDrive economic growth that creates decent employment and promote\
\ sustainable\r\ndevelopment through partnership.\r\n\r\n\r\n\r\n## Core functions\
\ and responsibilities\r\n\r\nThe core function and responsibilities of the Department\
\ is to develop policies aimed\r\nat growing the economy to create jobs in the Province\
\ and can be summarized as\r\nfollows:\r\n\r\n* Stimulate economic growth in the\
\ province by facilitating support and development\r\nof business enterprises, promote\
\ economic transformation, and provide strategic\r\neconomic development support\
\ to municipalities\r\n\r\n* Support the development of industry within the key\
\ economic sectors of the\r\nprovince and create a conducive environment for trade\
\ and investment. By ensuring\r\ngrowth in exports and direct investment, facilitate\
\ the implementation of economic\r\ninfrastructure projects and the development\
\ of competitive growth sectors in the Province\r\n\r\n* Regulate the Liquor and\
\ Gambling Industry and to create enabling legislative\r\nenvironment for Business\
\ to operate as well as the facilitation of fair trade and effective\r\nConsumer\
\ Protection.\r\n\r\n* Provision of economic policy direction and strategies in\
\ addition to conducting\r\nresearch on the provincial economy to inform strategy\
\ development by providing\r\ninformation on the economy in order to enable better\
\ decision making, monitoring and\r\nevaluating the impact of provincial policy\
\ and departmental programmes designed for\r\nsustained economic development.\r\n\
\r\n* Ensure tourism sector policy development, regulation and compliance and\r\n\
promotion of sector transformation in the province"
is_vote_primary: true
layout: department
name: Economic Development And Tourism
programme_by_economic_classification:
dataset_detail_page: /datasets/estimates-of-provincial-expenditure/estimates-of-provincial-expenditure-2016-17
department_data_csv: https://datamanager.vulekamali.gov.za/csv/?api_url=https%3A//openspending.org/api/3/cubes/b9d2af843f3a7ca223eea07fb608e62a%3Aestimates-of-provincial-expenditure-south-africa-2016-17/aggregate/%3Fcut%3Dfinancial_year.financial_year%253A2016%257Cgovernment.government%253A%2522Mpumalanga%2522%257Cdepartment.department%253A%2522Economic%2BDevelopment%2BAnd%2BTourism%2522%26drilldown%3Dbudget_phase.budget_phase%257Cdepartment.department%257Ceconomic_classification_1.economic_classification_1%257Ceconomic_classification_2.economic_classification_2%257Ceconomic_classification_3.economic_classification_3%257Ceconomic_classification_4.economic_classification_4%257Cfinancial_year.financial_year%257Cgovernment.government%257Cprogramme_number.programme%257Cprogramme_number.programme_number%26pagesize%3D10000
econ_classes:
- items:
- name: Administration
total_budget: 55287000
type: programme
- name: Integrated Economic Development
total_budget: 24697000
type: programme
- name: Business Regulation and Governance
total_budget: 15560000
type: programme
- name: Economic Planning
total_budget: 11984000
type: programme
- name: Trade and Sector Development
total_budget: 11378000
type: programme
- name: Tourism
total_budget: 3022000
type: programme
name: Current payments - Compensation of employees
type: economic_classification_1_and_2
- items:
- name: Integrated Economic Development
total_budget: 31471000
type: programme
- name: Administration
total_budget: 22433000
type: programme
- name: Trade and Sector Development
total_budget: 10715000
type: programme
- name: Business Regulation and Governance
total_budget: 2250000
type: programme
- name: Economic Planning
total_budget: 2199000
type: programme
- name: Tourism
total_budget: 380000
type: programme
name: Current payments - Goods and services
type: economic_classification_1_and_2
- items:
- name: Administration
total_budget: 1482000
type: programme
- name: Business Regulation and Governance
total_budget: 418000
type: programme
name: Payments for capital assets - Machinery and equipment
type: economic_classification_1_and_2
- items:
- name: Tourism
total_budget: 345808000
type: programme
- name: Business Regulation and Governance
total_budget: 67283000
type: programme
name: Transfers and subsidies - Departmental agencies and accounts
type: economic_classification_1_and_2
- items:
- name: Administration
total_budget: 400000
type: programme
name: Transfers and subsidies - Households
type: economic_classification_1_and_2
- items:
- name: Trade and Sector Development
total_budget: 2465000
type: programme
name: Transfers and subsidies - Provinces and municipalities
type: economic_classification_1_and_2
- items:
- name: Integrated Economic Development
total_budget: 384910000
type: programme
name: Transfers and subsidies - Public corporations and private enterprises
type: economic_classification_1_and_2
programmes:
dataset_detail_page: /datasets/estimates-of-provincial-expenditure/estimates-of-provincial-expenditure-2016-17
department_data_csv: https://datamanager.vulekamali.gov.za/csv/?api_url=https%3A//openspending.org/api/3/cubes/b9d2af843f3a7ca223eea07fb608e62a%3Aestimates-of-provincial-expenditure-south-africa-2016-17/aggregate/%3Fcut%3Dfinancial_year.financial_year%253A2016%257Cgovernment.government%253A%2522Mpumalanga%2522%257Cdepartment.department%253A%2522Economic%2BDevelopment%2BAnd%2BTourism%2522%26drilldown%3Dbudget_phase.budget_phase%257Cdepartment.department%257Ceconomic_classification_1.economic_classification_1%257Ceconomic_classification_2.economic_classification_2%257Ceconomic_classification_3.economic_classification_3%257Ceconomic_classification_4.economic_classification_4%257Cfinancial_year.financial_year%257Cgovernment.government%257Cprogramme_number.programme%257Cprogramme_number.programme_number%26pagesize%3D10000
programme_budgets:
- name: Administration
total_budget: 79602000
- name: Integrated Economic Development
total_budget: 441078000
- name: Trade and Sector Development
total_budget: 24558000
- name: Business Regulation and Governance
total_budget: 85511000
- name: Economic Planning
total_budget: 14183000
- name: Tourism
total_budget: 349210000
selected_financial_year: 2016-17
selected_tab: departments
slug: economic-development-and-tourism
sphere:
name: Provincial
slug: provincial
subprogramme_by_programme:
dataset_detail_page: /datasets/estimates-of-provincial-expenditure/epre-sub-programme-expenditure-2016-17
department_data_csv: https://datamanager.vulekamali.gov.za/csv/?api_url=https%3A//openspending.org/api/3/cubes/b9d2af843f3a7ca223eea07fb608e62a%3Aepre-sub-programme-expenditure-2016-17-v2/aggregate/%3Fcut%3Dfinancial_year.financial_year%253A2016%257Cgovernment.government%253A%2522Mpumalanga%2522%257Cdepartment.department%253A%2522Economic%2BDevelopment%2BAnd%2BTourism%2522%26drilldown%3Ddepartment.department%257Cfinancial_year.financial_year%257Cgovernment.government%257Cphase.phase%257Cprogramme_number.programme%257Cprogramme_number.programme_number%257Csubprogramme.subprogramme%26pagesize%3D10000
programmes:
- items:
- name: Financial Management
total_budget: 36306000
type: subprogramme
- name: Corporate Services
total_budget: 29400000
type: subprogramme
- name: Office of MEC
total_budget: 7487000
type: subprogramme
- name: Senior Management (HOD)
total_budget: 6409000
type: subprogramme
name: Administration
type: programme
- items:
- name: Regulation Services
total_budget: 71690000
type: subprogramme
- name: Consumer Protection
total_budget: 12002000
type: subprogramme
- name: 'CD: Office Support'
total_budget: 1819000
type: subprogramme
name: Business Regulation and Governance
type: programme
- items:
- name: Economic Analysis
total_budget: 4763000
type: subprogramme
- name: Knowledge Management
total_budget: 2894000
type: subprogramme
- name: Economic Policy and Planning
total_budget: 2398000
type: subprogramme
- name: Monitoring and Evaluation
total_budget: 1925000
type: subprogramme
- name: 'Cd: Office Support'
total_budget: 1495000
type: subprogramme
- name: Research and Development
total_budget: 708000
type: subprogramme
name: Economic Planning
type: programme
- items:
- name: Enterprise Development
total_budget: 400029000
type: subprogramme
- name: Regional Directors
total_budget: 29868000
type: subprogramme
- name: Local Economic Development
total_budget: 5424000
type: subprogramme
- name: Economic Empowerment
total_budget: 4268000
type: subprogramme
- name: CD:Office Support
total_budget: 1489000
type: subprogramme
name: Integrated Economic Development
type: programme
- items:
- name: Tourism
total_budget: 349210000
type: subprogramme
name: Tourism
type: programme
- items:
- name: Sector Development
total_budget: 13284000
type: subprogramme
- name: Strategic Initiatives
total_budget: 5182000
type: subprogramme
- name: Trade and Investment Promotion
total_budget: 4410000
type: subprogramme
- name: CD:Office support
total_budget: 1682000
type: subprogramme
name: Trade and Sector Development
type: programme
title: Economic Development And Tourism budget 2016-17 - vulekamali
vote_number: 6
vote_primary:
name: Economic Development And Tourism
slug: economic-development-and-tourism
url_path: /2016-17/provincial/mpumalanga/departments/economic-development-and-tourism
website_url: null
---
[//]: <> GENERATED FILE. Don't edit by hand.
---
title: ICorDebugTypeEnum::Next Method
ms.date: 03/30/2017
api_name:
- ICorDebugTypeEnum.Next
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorDebugTypeEnum::Next
helpviewer_keywords:
- ICorDebugTypeEnum::Next method [.NET Framework debugging]
- Next method, ICorDebugTypeEnum interface [.NET Framework debugging]
ms.assetid: d0fdeba3-c195-4ece-8caf-79b1f40025d2
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 9812fa4248533ccb898c98082e42e288c091f776
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 05/04/2018
ms.locfileid: "33420592"
---
# <a name="icordebugtypeenumnext-method"></a>ICorDebugTypeEnum::Next Method
Gets the specified number (`celt`) of "ICorDebugType" instances from the enumeration, starting at the current position.
## <a name="syntax"></a>Sözdizimi
```
HRESULT Next (
[in] ULONG celt,
[out, size_is(celt), length_is(*pceltFetched)]
ICorDebugType *values[],
[out] ULONG *pceltFetched
);
```
#### <a name="parameters"></a>Parametreler
`celt`
[in] Sayısı `ICorDebugType` alınacak örnekleri.
`values`
[out] Her biri işaret işaretçileri, bir dizi bir `ICorDebugType` nesnesi.
`pceltFetched`
[out] İşaretçi sayısına `ICorDebugType` gerçekte döndürülen örnek. Bu değer null ise `celt` biridir.
## <a name="requirements"></a>Gereksinimler
**Platformlar:** bkz [sistem gereksinimleri](../../../../docs/framework/get-started/system-requirements.md).
**Başlık:** CorDebug.idl, CorDebug.h
**Kitaplığı:** CorGuids.lib
**.NET framework sürümleri:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
## <a name="see-also"></a>Ayrıca Bkz.
| 29.451613 | 114 | 0.710296 | tur_Latn | 0.371156 |
2df4ebdee4a4fafbef886e5766d437075f74a93b | 1,735 | md | Markdown | README.md | herrjemand/flask-fido-u2f | d945ce5be3a76c8f11d56b4c6b07bc8b61047d0c | [
"MIT"
] | 24 | 2016-08-10T19:15:23.000Z | 2021-09-03T12:13:37.000Z | README.md | herrjemand/flask-fido-u2f | d945ce5be3a76c8f11d56b4c6b07bc8b61047d0c | [
"MIT"
] | 7 | 2016-07-12T12:06:28.000Z | 2016-12-24T00:20:24.000Z | README.md | herrjemand/flask-fido-u2f | d945ce5be3a76c8f11d56b4c6b07bc8b61047d0c | [
"MIT"
] | 2 | 2017-04-16T08:37:33.000Z | 2021-09-03T12:13:38.000Z | **DEPRECATED** flask-fido-u2f **DEPRECATED**
---
**DEPRECATED** **DEPRECATED** **DEPRECATED** **DEPRECATED** **DEPRECATED**
PLEASE TAKE A LOOK AT [WEBAUTHN API](https://w3c.github.io/webauthn/#CreateCred-DetermineRpId)
MORE RESOURCES [WEBAUTHN-AWESOME](https://github.com/herrjemand/awesome-webauthn)
Flask plugin to simplify usage and management of U2F devices.
## Installation
`pip install flask-fido-u2f`
## Usage
```python
from flask_fido_u2f import U2F
app = Flask(__name__)
app.config['U2F_APPID'] = 'https://example.com'
app.config['SECRET_KEY'] = 'SomeVeryRandomKeySetYouMust'
u2f = U2F(app)
@u2f.read
def read():
# Returns users U2F devices object
pass
@u2f.save
def save(u2fdata):
# Saves users U2F devices object
pass
@u2f.enroll_on_success
def enroll_on_success():
# Executes on successful U2F enroll
pass
@u2f.enroll_on_fail
def enroll_on_fail(e):
# Executes on U2F enroll fail
# Takes argument e - exception raised
pass
@u2f.sign_on_success
def sign_on_success():
# Executes on successful U2F authentication
pass
@u2f.sign_on_fail
def sign_on_fail(e):
# Executes on U2F sign fail
# Takes argument e - exception raised
pass
```
# Development
## Install dev-dependencies
`pip install -r dev-requirements.txt`
## Run tests
`python -m unittest discover`
## Docs
* [API Docs](https://github.com/herrjemand/flask-fido-u2f/blob/master/docs/api.md)
* [Configuration Docs](https://github.com/herrjemand/flask-fido-u2f/blob/master/docs/configuration.md)
* [FIDO U2F](https://fidoalliance.org/specifications/download/)
## License
[MIT](https://github.com/herrjemand/flask-fido-u2f/blob/master/LICENSE.md) © [Yuriy Ackermann](https://jeman.de/)
| 21.419753 | 113 | 0.723343 | yue_Hant | 0.628244 |
2df556ffbc0637b22c46c0630994364d4465e47a | 1,694 | md | Markdown | README.md | Azure-Samples/traffic-manager-java-manage-simple-profiles | 532004b65d0a00b56e593db9c0afe40c548b56c5 | [
"MIT"
] | null | null | null | README.md | Azure-Samples/traffic-manager-java-manage-simple-profiles | 532004b65d0a00b56e593db9c0afe40c548b56c5 | [
"MIT"
] | null | null | null | README.md | Azure-Samples/traffic-manager-java-manage-simple-profiles | 532004b65d0a00b56e593db9c0afe40c548b56c5 | [
"MIT"
] | 1 | 2021-01-14T10:16:01.000Z | 2021-01-14T10:16:01.000Z | ---
page_type: sample
languages:
- java
products:
- azure
extensions:
services: Trafficmanager
platforms: java
---
# Getting Started with Trafficmanager - Manage Simple Traffic Manager - in Java #
Simple Azure traffic manager sample.
- Create 4 VMs spread across 2 regions
- Create a traffic manager in front of the VMs
- Change/configure traffic manager routing method
## Running this Sample ##
To run this sample:
See [DefaultAzureCredential](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity#defaultazurecredential) and prepare the authentication works best for you. For more details on authentication, please refer to [AUTH.md](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/resourcemanager/docs/AUTH.md).
git clone https://github.com/Azure-Samples/traffic-manager-java-manage-simple-profiles.git
cd traffic-manager-java-manage-simple-profiles
mvn clean compile exec:java
## More information ##
For general documentation as well as quickstarts on how to use Azure Management Libraries for Java, please see [here](https://aka.ms/azsdk/java/mgmt).
Start to develop applications with Java on Azure [here](http://azure.com/java).
If you don't have a Microsoft Azure subscription you can get a FREE trial account [here](http://go.microsoft.com/fwlink/?LinkId=330212).
---
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. | 39.395349 | 340 | 0.772137 | eng_Latn | 0.830187 |
2df557796505d7d53d84feaf9cda6039aba88afa | 2,241 | md | Markdown | content/_changelogs/3.1.3.md | santoshyadavdev/cypress-documentation | 5e1b3f1a14dde56f54fe1be01acad109653380a2 | [
"MIT"
] | 634 | 2017-09-12T16:36:49.000Z | 2022-03-31T21:12:39.000Z | content/_changelogs/3.1.3.md | santoshyadavdev/cypress-documentation | 5e1b3f1a14dde56f54fe1be01acad109653380a2 | [
"MIT"
] | 2,241 | 2016-05-11T16:30:01.000Z | 2022-03-31T20:24:40.000Z | content/_changelogs/3.1.3.md | santoshyadavdev/cypress-documentation | 5e1b3f1a14dde56f54fe1be01acad109653380a2 | [
"MIT"
] | 1,033 | 2016-10-13T14:03:32.000Z | 2022-03-29T13:45:46.000Z | ## 3.1.3
_Released 12/03/2018_
**Bugfixes:**
- Fixed regression introduced in [3.1.1](/guides/references/changelog#3.1.1)
with `requestAnimationFrame` that caused some animations not to run. Fixes
[#2725](https://github.com/cypress-io/cypress/issues/2725).
- Fixed regression introduced in [3.1.2](/guides/references/changelog#3.1.2)
that caused DOM elements passed to [cy.wrap()](/api/commands/wrap) to no
longer yield the proper jQuery array instance. Fixes
[#2820](https://github.com/cypress-io/cypress/issues/2820).
- Fixed regression causing invocations of [`cy.clock()`](/api/commands/clock) to
error on subsequent tests. Fixes
[#2850](https://github.com/cypress-io/cypress/issues/2850).
- Fixed issue where a fix included in
[3.1.2](/guides/references/changelog#3.1.2) did not pass the `windowsHide`
argument to the proper options. Fixes
[#2667](https://github.com/cypress-io/cypress/issues/2667) and
[#2809](https://github.com/cypress-io/cypress/issues/2809).
- Passing [`.check({ force: true })`](/api/commands/check) no longer requires
the checkbox or radio to be visible. Fixes
[#1376](https://github.com/cypress-io/cypress/issues/1376).
**Misc**
- Updated types to support promises as arguments within
[cy.wrap](/api/commands/wrap). Fixes
[#2807](https://github.com/cypress-io/cypress/pull/2807).
- We now expose all jQuery methods and values onto
[`Cypress.$`](/api/utilities/$). Fixes
[#2830](https://github.com/cypress-io/cypress/issues/2830).
- [cy.wait()](/api/commands/wait) now accepts a separate timeout option for
`requestTimeout` and `responseTimeout`. Fixes
[#2446](https://github.com/cypress-io/cypress/issues/2446).
**Documentation Changes:**
- Added `requestTimeout` and `responseTimeout` options to
[cy.wait()](/api/commands/wait)
- Added 'History' table to [cy.wait()](/api/commands/wait)
- Added 'Alias' for assertions that are aliases of each other to
[Assertions](/guides/references/assertions)
**Dependency Updates**
- Upgraded nodemon from `^1.8.1` to `^1.8.7`. Fixes
[#2864](https://github.com/cypress-io/cypress/pull/2864).
- Upgraded request from `^2.27.0` and `^2.28.0` to `^4.0.0`, Fixes
[#2455](https://github.com/cypress-io/cypress/issues/2455).
| 43.096154 | 80 | 0.719322 | eng_Latn | 0.722352 |
2df5cd7cd88d601a026e7fd2956673cf09d0b654 | 16 | md | Markdown | README.md | exotic-hits200s/exotic | 501b28d3fc44cb516cfd4e2b71ed96d1fd814be2 | [
"MIT"
] | null | null | null | README.md | exotic-hits200s/exotic | 501b28d3fc44cb516cfd4e2b71ed96d1fd814be2 | [
"MIT"
] | null | null | null | README.md | exotic-hits200s/exotic | 501b28d3fc44cb516cfd4e2b71ed96d1fd814be2 | [
"MIT"
] | null | null | null | # exotic
Bruhhh
| 5.333333 | 8 | 0.75 | vie_Latn | 0.481311 |
2df69b9c4e14629f4f73b5e9c915b6fb13e44de3 | 392 | md | Markdown | how-to-export-a-rendition/README.md | calvinmorett/plugin-samples | 4ba7046a9a270e049caf1d95167f544794440a63 | [
"MIT"
] | 1 | 2018-10-29T05:34:30.000Z | 2018-10-29T05:34:30.000Z | how-to-export-a-rendition/README.md | MissSheyni/plugin-samples | f6a25fc6447753e686b75ebd5228b4c65f3216ed | [
"MIT"
] | null | null | null | how-to-export-a-rendition/README.md | MissSheyni/plugin-samples | f6a25fc6447753e686b75ebd5228b4c65f3216ed | [
"MIT"
] | null | null | null | # Export artboard as PNG rendition
This sample generates an export rendition of an object in XD as PNG.
[Read the step-by-step guide for this sample](https://adobexdplatform.com/plugin-docs/tutorials/how-to-export-a-rendition/).
## Usage
1. Select an item (artboard)
1. Run "Plugins > Export Rendition"
1. Select the filename and location
1. A UI Modal shows where the file has been saved
| 32.666667 | 124 | 0.765306 | eng_Latn | 0.929828 |
2df78206390880b3074c870f957720e48d646d35 | 53,460 | md | Markdown | lectures.md | bioboot/bggn213_F19 | 944346023e41fc66ea7dcb8b9190699e58a90cfc | [
"Apache-2.0",
"MIT"
] | null | null | null | lectures.md | bioboot/bggn213_F19 | 944346023e41fc66ea7dcb8b9190699e58a90cfc | [
"Apache-2.0",
"MIT"
] | null | null | null | lectures.md | bioboot/bggn213_F19 | 944346023e41fc66ea7dcb8b9190699e58a90cfc | [
"Apache-2.0",
"MIT"
] | null | null | null | ---
layout: page
title: Lectures
menu: true
order: 2
---
All Lectures are Wed/Fri 1:00-4:00 pm in TATA 2501
(<a href="https://goo.gl/maps/Cd8z9Zexx6q">Map</a>). Clicking on the
class topics below will take you to corresponding lecture notes,
homework assignments, pre-class video screen-casts and required reading
material.
<br>
| \# | Date | Topics for Fall 2019 |
| :-: | :----------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 1 | Wed 10/02/19 | [**Welcome to Bioinformatics**](#1) <br> Course introduction, Leaning goals & expectations, Biology is an information science, History of Bioinformatics, Types of data, Application areas and introduction to upcoming course segments, Hands on with major Bioinformatics databases and key online NCBI and EBI resources |
| 2 | Fri 10/04/19 | [**Sequence alignment fundamentals, algorithms and applications**](#2) <br> Homology, Sequence similarity, Local and global alignment, classic Needleman-Wunsch, Smith-Waterman and BLAST heuristic approaches, Hands on with dot plots, Needleman-Wunsch and BLAST algorithms highlighting their utility and limitations |
| 3 | Wed 10/09/19 | [**Advanced sequence alignment and database searching**](#3) <br> Detecting remote sequence similarity, Database searching beyond BLAST, Substitution matrices, Using PSI-BLAST, Profiles and HMMs, Protein structure comparisons |
| 4 | Fri 10/11/19 | [**Bioinformatics data analysis with R**](#4) <br> Why do we use R for bioinformatics? R language basics and the RStudio IDE, Major R data structures and functions, Using R interactively from the RStudio console |
| 5 | Wed 10/16/19 | [**Data exploration and visualization in R**](#5) <br> The exploratory data analysis mindset, Data visualization best practices, Using and customizing base graphics (scatterplots, histograms, bar graphs and boxplots), Building more complex charts with ggplot and rgl |
| 6 | Fri 10/18/19 | [**Why, when and how of writing your own R functions**](#6) <br> The basics of writing your own functions that promote code robustness, reduce duplication and facilitate code re-use |
| 7 | Wed 10/23/19 | [**Bioinformatics R packages from CRAN and BioConductor**](#7) <br> Extending functionality and utility with R packages, Obtaining R packages from CRAN and BioConductor, Working with Bio3D for molecular data |
| 8 | Fri 10/25/19 | [**Introduction to machine learning for Bioinformatics 1**](#8) <br> Unsupervised learning, K-means clustering, Hierarchical clustering, Heatmap representations. Dimensionality reduction, Principal Component Analysis (PCA) |
| 9 | Wed 10/30/19 | [**Unsupervised learning mini-project**](#9) <br> Longer hands-on session with unsupervised learning analysis of cancer cells further highlighting Practical considerations and best practices for the analysis and visualization of high dimensional datasets |
| 10 | Fri 11/01/19 | **Project:** [**Find a gene assignment (Part 1)**](#10) <br> Principles of database searching, sequence analysis, structure analysis along with [**Hands-on with Git**](#10) <br> How to perform common operations with the Git version control system. We will also cover the popular social code-hosting platforms GitHub and BitBucket. |
| 11 | Wed 11/06/19 | [**Structural Bioinformatics (Part 1)**](#11) <br> Protein structure function relationships, Protein structure and visualization resources, Modeling energy as a function of structure |
| 12 | Fri 11/08/19 | [**Bioinformatics in drug discovery and design**](#12) <br> Target identification, Lead identification, Small molecule docking methods, Protein motion and conformational variants, Molecular simulation and drug optimization |
| 13* | Wed 11/13/19 | [**Genome informatics and high throughput sequencing (Part 1)**](#13) <br> Genome sequencing technologies past, present and future; Biological applications of sequencing, Analysis of variation in the genome, and gene expression; The Galaxy platform along with resources from the EBI & UCSC; Sample Galaxy RNA-Seq workflow with FastQC and Bowtie2 |
| 14 | Wed 11/13/19 | [**Transcriptomics and the analysis of RNA-Seq data**](#14) <br> RNA-Seq aligners, Differential expression tests, RNA-Seq statistics, Counts and FPKMs and avoiding P-value misuse, Hands-on analysis of RNA-Seq data with R. <br> **N.B.** Find a gene assignment part 1 due today\! |
| 15 | Fri 11/15/19 | [**Genome annotation and the interpretation of gene lists**](#15) <br> Gene finding and functional annotation, Functional databases KEGG, InterPro, GO ontologies and functional enrichment |
| 16 | Wed 11/20/19 | [**Biological network analysis**](#16) <br> Network based approaches for integrating and interpreting large heterogeneous high throughput data sets; Discovering relationships in ‘omics’ data; Network construction, manipulation, visualization and analysis; Major graph theory and network topology measures and concepts (Degree, Communities, Shortest Paths, Centralities, Betweenness, Random graphs vs scale free); Hands-on with Cytoscape and igraph packages. |
| 17 | Fri 11/22/19 | [**Essential UNIX for bioinformatics**](#17) <br> Bioinformatics on the command line, Why do we use UNIX for bioinformatics? UNIX philosophy, 21 Key commands, Understanding processes, File system structure, Connecting to remote servers, Redirection, streams and pipes, Workflows for batch processing, Organizing computational projects. |
| 18 | Wed 11/27/19 | [**Cancer genomics**](#18) <br> Cancer genomics resources and bioinformatics tools for investigating the molecular basis of cancer. Mining the NCI Genomic Data Commons; Immunoinformatics and immunotherapy; Using genomics and bioinformatics to design a personalized cancer vaccine. Implications for personalized medicine. |
| 19 | Fri 11/29/19 | [**Happy Thanksgiving!**](#19) <br> **N.B.** No class today but please note that the find a gene assignment is due before class next Friday\! |
| 19 | Wed 12/04/19 | [**Course summary**](#19) <br> Summary of learning goals, Student course evaluation time and exam preparation; **Find a gene assignment due before next class\!** |
| 20 | Fri 12/06/19 | [**Final Exam\!**](#20) |
Class material
==============
<a name="1"></a>
1: Welcome to Bioinformatics
----------------------------
**Topics**:
Course introduction, Leaning goals & expectations, Biology is an information science, History of Bioinformatics, Types of data, Application areas and introduction to upcoming course segments, Student 30-second introductions, Introduction to NCBI & EBI resources for the molecular domain of bioinformatics, Hands-on session using NCBI-BLAST, Entrez, GENE, UniProt, Muscle and PDB bioinformatics tools and databases.
**Goals**:
- Understand the increasing necessity for computation in modern life sciences research.
- Get introduced to how bioinformatics is practiced.
- Understand course scope, expectations, logistics and [ethics code]({{ site.baseurl }}/ethics/).
- The goals of the hands-on session is to introduce a range of core bioinformatics databases and associated online services whilst actively investigating the molecular basis of several common human disease.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-1-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-1-bggn213_small.pdf){:.no-push-state}{:target="_blank"}
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-1-bggn213.pdf){:.no-push-state}{:target="_blank"}
- Feedback: [Muddy-Point-Assesment](https://forms.gle/2YGfHU4y7JVyH4bt5){:.no-push-state}{:target="_blank"}
**Homework**:
- [Questions](https://forms.gle/F4HdutxEVSkLpShS6){:.no-push-state}{:target="_blank"}
- Complete the [pre-course survey](https://forms.gle/qM9PTRNie8S49nuTA).
- Setup your [laptop computer]({{ site.baseurl }}/setup/) for this course.
- Get a copy of the course [syllabus]({{ site.baseurl }}/class-material/BGGN213_F19_syllabus.pdf){:.no-push-state},
- Complete the [Office Hours Sign Up Sheet](https://doodle.com/poll/y7k46gqtegcsahqn){:.no-push-state}{:target="_blank"}.
**Readings**:
- PDF1: [What is bioinformatics? An introduction and overview]({{ site.baseurl }}/class-material/bioinformatics_review.pdf){:.no-push-state},
- PDF2: [Advancements and Challenges in Computational Biology]({{ site.baseurl }}/class-material/bioinformatics_challenges_2015.pdf){:.no-push-state}.
**Screen Casts**:
<br/>
<iframe width="560" height="315" src="https://www.youtube.com/embed/P2oSO7YPyfU?rel=0" frameborder="0" allowfullscreen></iframe>
**1 Welcome to BGGN-213:**
Course introduction and logistics.
{:.message}
<br/>
<iframe width="560" height="315" src="https://www.youtube.com/embed/gJNXQfpErLY?rel=0" frameborder="0" allowfullscreen></iframe>
**2 What is Bioinformatics?**
Bioinformatics can mean different things to different people. What will we actually learn in this class?
{:.message}
<br/>
<iframe width="560" height="315" src="https://www.youtube.com/embed/cCim7LrQZLY?rel=0" frameborder="0" allowfullscreen></iframe>
**3 How do we do Bioinformatics?**
Some basic bioinformatics can be done online or with downloaded tools. However, most often we will need a specialized computational setup.
{:.message}
------------------------------------------------------------------------
<a name="2"></a>
2: Sequence alignment fundamentals, algorithms and applications
---------------------------------------------------------------
**Topics**:
Further coverage of *major NCBI & EBI resources* for the molecular domain of bioinformatics with a focus on GenBank, UniProt, Entrez and Gene Ontology. There are many bioinformatics databases (see [handout]({{ site.baseurl }}/class-material/Major_Databases_bggn213.pdf){:.no-push-state}) and being able to judge their utility and quality is important. *Sequence Alignment and Database Searching*:
Homology, Sequence similarity, Local and global alignment, Heuristic approaches, Database searching with BLAST, E-values and evaluating alignment scores and statistics.
**Goals**:
- Be able to query, search, compare and contrast the data contained in major bioinformatics databases (GenBank, GENE, UniProt, PFAM, OMIM, PDB) and describe how these databases intersect.
- Be able to describe how nucleotide and protein sequence and structure data are represented (FASTA, FASTQ, GenBank, UniProt, PDB).
- Be able to describe how dynamic programming works for pairwise sequence alignment
- Appreciate the differences between global and local alignment along with their major application areas.
- Understand how aligning novel sequences with previously characterized genes or proteins provides important insights into their common attributes and evolutionary origins.
- The goals of the hands-on session are to explore the principles underlying the computational tools that can be used to compute and evaluate sequence alignments.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-2-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-2-bggn213_small.pdf){:.no-push-state}{:target="_blank"}
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-2-bggn213.pdf){:.no-push-state}{:target="_blank"}
- Major Databases: [Handout PDF]({{ site.baseurl }}/class-material/Major_Databases.pdf){:.no-push-state}{:target="_blank"},
- Feedback: [Muddy-Point-Assesment](https://forms.gle/DBjDKg5azytyJrv86){:.no-push-state}{:target="_blank"}.
**Homework**:
- [Questions](https://forms.gle/XCauTLKnNK7ikdou6){:.no-push-state}{:target="_blank"},
- [Alignment Problem]({{ site.baseurl }}/class-material/lecture-2-bggn213_homework.pdf){:.no-push-state}{:target="_blank"},
**Readings**:
- Readings: PDF1: [What is dynamic programming?]({{ site.baseurl }}/class-material/Dynamic_programming_primer.pdf){:.no-push-state},
- Readings: PDF2 [Fundamentals of database searching]({{ site.baseurl }}/class-material/Fundamentals.pdf){:.no-push-state}.
------------------------------------------------------------------------
<a name="3"></a>
3: Advanced sequence alignment and database searching
-----------------------------------------------------
**Topics**: Detecting remote sequence similarity, Database searching beyond BLAST, Substitution matrices, Using PSI-BLAST, Profiles and HMMs, Protein structure comparisons. Beginning with command line based database searches.
**Goal**:
- Be able to calculate the alignment score between two nucleotide or protein sequences using a provided scoring matrix
- Understand the limits of homology detection with tools such as BLAST
- Be able to perform PSI-BLAST, HMMER and protein structure based database searches and interpret the results in terms of the biological significance of an e-value.
- Run our first bioinformatics tool from the command line.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-3-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-3-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-3-bggn213.pdf){:.no-push-state}{:target="_blank"},
<!-- - Bonus: [Alignment App](https://bioboot.github.io/bggn213_S19/class-material/nw/){:.no-push-state}{:target="_blank"}, -->
- Feedback: [Muddy-Point-Assesment](https://forms.gle/NNwje57RUJy8AQ5g7){:.no-push-state}
**Homework**:
- [Homework](https://docs.google.com/document/d/1C4hBJCqbk_rO2ImioCHTsXJSHgMNhqaUB2WS0De4HEs/copy){:.no-push-state}{:target="_blank"} click and select "make a copy" then follow instructions,
- DataCamp Sign Up & Homework: [See your UCSD email invite!](https://www.datacamp.com){:.no-push-state},
- [RStudio and R download and setup]({{ site.baseurl }}/setup/).
------------------------------------------------------------------------
<a name="4"></a>
4: Bioinformatics data analysis with R
--------------------------------------
**Topics**: Why do we use R for bioinformatics? R language basics and the RStudio IDE, Major R data structures and functions, Using R interactively from the RStudio console.
**Goal**:
- Understand why we use R for bioinformatics
- Familiarity with R's basic syntax,
- Be able to use R to read and parse comma-separated (.csv) formatted files ready for subsequent analysis,
- Familiarity with major R data structures (vectors, matrices and data.frames),
- Understand the basics of using functions (arguments, vectorizion and re-cycling).
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-4-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-4-bggn213_small.pdf){:.no-push-state}{:target="_blank"}
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-4-bggn213/){:.no-push-state}{:target="_blank"}
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/0ZILA8Y4yb30LL1q2){:.no-push-state}
**Homework**:
- DataCamp course: [Introduction to R](https://www.datacamp.com/enterprise/bioinformatics-bggn213/assignments){:.no-push-state}{:target="_blank"}.
------------------------------------------------------------------------
<a name="5"></a>
5: Data exploration and visualization in R
------------------------------------------
**Topics**: The exploratory data analysis mindset, Data visualization best practices, Simple base graphics (including scatterplots, histograms, bar graphs, dot chats, boxplots and heatmaps), Building more complex charts with ggplot.
**Goal**:
- Appreciate the major elements of exploratory data analysis and why it is important to visualize data.
- Be conversant with data visualization best practices and understand how good visualizations optimize for the human visual system.
- Be able to generate informative graphical displays including scatterplots, histograms, bar graphs, boxplots, dendrograms and heatmaps and thereby gain exposure to the extensive graphical capabilities of R.
- Appreciate that you can build even more complex charts with ggplot and additional R packages such as rgl.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture5-BGGN213-large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture5-BGGN213-small.pdf){:.no-push-state}{:target="_blank"},
- Rmarkdown documents for [plot session 1]({{ site.baseurl }}/class-material/lecture-5-bggn213-draw_circle_points/){:.no-push-state}, and [more advanced plots]({{ site.baseurl }}/class-material/lecture-5-bggn213-plots/){:.no-push-state},
- Lab: [Main **hands-on Worksheet**]({{ site.baseurl }}/class-material/lab-5-bggn213.html){:.no-push-state}{:target="_blank"},
- Lab: [**Supplement 1**: Plotting with color in R]({{ site.baseurl }}/class-material/Rcolor.html){:.no-push-state}{:target="_blank"},
- Lab: [**Supplement 2**: A detailed guide to plotting with base R]({{ site.baseurl }}/class-material/lecture5-BGGN213_lab.pdf){:.no-push-state}{:target="_blank"},
- Example data for hands-on sections [lecture-5-bggn213-rstats.zip]({{ site.baseurl }}/class-material/lecture-5-bggn213-rstats.zip){:.no-push-state},
- SideNote: [Convincing with graphics](https://xkcd.com/833/){:.no-push-state}{:target="_blank"},
- Check-out the new website: [Data-to-Viz](https://www.data-to-viz.com/){:.no-push-state}{:target="_blank"},
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/qIW4O4PUoixTzy7J2){:.no-push-state}{:target="_blank"}.
**Homework**:
- This units homework is all [via **DataCamp** (Intro to R, Intermideate R)](https://www.datacamp.com/){:.no-push-state}{:target="_blank"}.
------------------------------------------------------------------------
<a name="6"></a>
6: Why, when and how of writing your own R functions
----------------------------------------------------
**Topics**: , Using R scripts and Rmarkdown files, Import data in various formats both local and from online sources, The basics of writing your own functions that promote code robustness, reduce duplication and facilitate code re-use.
**Goals**:
- Be able to import data in various flat file formats from both local and online sources.
- Understand the structure and syntax of R functions and how to view the code of any R function.
- Understand when you should be writing functions.
- Be able to follow a step by step process of going from a working code snippet to a more robust function.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-6-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-6-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-6-bggn213.pdf){:.no-push-state}{:target="_blank"},
- Flat files for importing with read.table: [test1.txt]({{ site.baseurl }}/class-material/test1.txt){:.no-push-state}, [test2.txt]({{ site.baseurl }}/class-material/test2.txt){:.no-push-state}, [test3.txt]({{ site.baseurl }}/class-material/test3.txt){:.no-push-state}.
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/GrFc3oDfAwCCj2BA2){:.no-push-state}{:target="_blank"}.
**Homework**:
- See **Q6** of the [hands-on lab sheet above]({{ site.baseurl }}/class-material/lab-6-bggn213.pdf){:.no-push-state}{:target="_blank"}. This entails turning a supplied code snippet into a more robust and re-usable function that will take any of the three listed input proteins and plot the effect of drug binding. Note assessment rubric and submission instructions within document. (Submission deadline: 1pm **next week!**).
- The remainder of this units homework is all [via **DataCamp**](https://www.datacamp.com/){:.no-push-state}.
------------------------------------------------------------------------
<a name="7"></a>
7: Bioinformatics R packages from CRAN and BioConductor
-------------------------------------------------------
**Topics**: More on how to write R functions with worked examples. Further extending functionality and utility with R packages, Obtaining R packages from CRAN and Bioconductor, Working with Bio3D for molecular data, Managing genome-scale data with bioconductor.
**Goals**:
- Be able to find and install R packages from CRAN and bioconductor,
- Understand how to find and use package vignettes, demos, documentation, tutorials and source code repository where available.
- Be able to write and (re)use basic R scripts to aid with reproducibility.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-7-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-7-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Example input for **grade()** function: [student_homework.csv]({{ site.baseurl }}/class-material/student_homework.csv){:.no-push-state},
- [Collaborative Google Doc based notes on selected R packages](https://docs.google.com/document/d/1cbKOOcjTAbqr1E1qkbBbrXXpSkCe5K-aNOXB5EHHEv0/edit?usp=sharing){:.no-push-state}{:target="_blank"},
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/LHI8L0QYVXChcNw02){:.no-push-state}.
**Homework**:
- See **Q6** of the [hands-on lab sheet from the previous class]({{ site.baseurl }}/class-material/lab-6-bggn213.pdf){:.no-push-state}. This entails turning a supplied code snippet into a more robust and re-usable function that will take any of the three listed input proteins and plot the effect of drug binding. Note assessment rubric and submission instructions within document. (Submission deadline: 1pm **next class!**).
- DataCamp [homework](https://www.datacamp.com/){:.no-push-state}.
------------------------------------------------------------------------
<a name="8"></a>
8: Introduction to machine learning for Bioinformatics (Part 1)
--------------------------------------------------------
**Topics**: Unsupervised learning, supervised learning and reinforcement learning; Focus on unsupervised learning, K-means clustering, Hierarchical clustering, Heatmap representations. Dimensionality reduction, visualization and analysis, Principal Component Analysis (PCA)
Practical considerations and best practices for the analysis of high dimensional datasets.
**Goal**:
- Understand the major differences between unsupervised and supervised learning.
- Be able to create k-means and hierarchical cluster models in R
- Be able to describe how the k-means and bottom-up hierarchical cluster algorithms work.
- Know how to visualize and integrate clustering results and select good cluster models.
- Be able to describe in general terms how PCA works and its major objectives.
- Be able to apply PCA to high dimensional datasets and visualize and integrate PCA results (e.g identify outliers, find structure in features and aid in complex dataset visualization).
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-8-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-8-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- WebApp: [Introduction to PCA]({{ site.baseurl }}/class-material/pca/){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on section worksheet for PCA]({{ site.baseurl }}/class-material/lab-8-bggn213.html){:.no-push-state}{:target="_blank"},
- Data files: [UK_foods.csv]({{ site.baseurl }}/class-material/UK_foods.csv){:.no-push-state}, [expression.csv]({{ site.baseurl }}/class-material/expression.csv){:.no-push-state}.
- Feedback: [Muddy point assessment](https://forms.gle/rRdkKbaGR7gcywK76){:.no-push-state}.
------------------------------------------------------------------------
<a name="9"></a>
9: Unsupervised learning mini-project
-------------------------------------
**Topics**: Longer hands-on session with unsupervised learning analysis of cancer cells, Practical considerations and best practices for the analysis and visualization of high dimensional datasets.
**Goals**:
- Be able to import data and prepare for unsupervised learning analysis.
- Be able to apply and test combinations of PCA, k-means and hierarchical clustering to high dimensional datasets and critically review results.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-9-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-9-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-9-bggn213-WEBX.html){:.no-push-state}{:target="_blank"}
- Data file: [WisconsinCancer.csv]({{ site.baseurl }}/class-material/WisconsinCancer.csv){:.no-push-state}, [new_samples.csv]({{ site.baseurl }}/class-material/new_samples.csv){:.no-push-state}.
- Bio3D PCA App: [http://bio3d.ucsd.edu/pca-app/](http://bio3d.ucsd.edu/pca-app/){:.no-push-state}{:target="_blank"}.
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/vHYEbuAmV2uMZEom2){:.no-push-state}
**Reading**:
- Bonus: [StackExchange discussion on PCA](https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa){:.no-push-state}.
- Book: [Statistics for Modern Biology](http://web.stanford.edu/class/bios221/book/index.html)
- 2019 Genome Biology review article [Machine learning and complex biological data](https://genomebiology.biomedcentral.com/articles/10.1186/s13059-019-1689-0){:.no-push-state}{:target="_blank"}.
- 2019 Pre-print, [Accuracy, Robustness and Scalability of Dimensionality Reduction Methods for Single Cell RNAseq Analysis](https://www.biorxiv.org/content/10.1101/641142v2.full){:.no-push-state}{:target="_blank"}.
------------------------------------------------------------------------
<a name="10"></a>
10: **Project:** Find a gene assignment (Part 1)
------------------------------------------------
The [**find-a-gene project**]({{ site.baseurl }}/class-material/Find_A_Gene_Project.pdf){:.no-push-state}{:target="_blank"} is a required assignment for BIMM-143. The objective with this assignment is for you to demonstrate your grasp of database searching, sequence analysis, structure analysis and the R environment that we have covered to date in class.
You may wish to consult the scoring rubric at the end of the above linked project description and the [**example report**]({{ site.baseurl }}/class-material/Find_A_Gene_Project_Example.pdf){:.no-push-state}{:target="_blank"} for format and content guidance.
Your responses to questions Q1-Q4 are due at the beginning of class **Fri Nov 15th** (11/15/19).
The complete assignment, including responses to all questions, is due at the beginning of class **Fri Dec 6th** (12/06/19).
Late responses will not be accepted under any circumstances.
## Bonus: Hands-on with Git
Today’s lecture and hands-on sessions introduce Git, currently the most popular version control system. We will learn how to perform common operations with Git and RStudio. We will also cover the popular social code-hosting platforms GitHub and BitBucket.
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-10-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-10-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on with Git](http://tinyurl.com/rclass-github){:.no-push-state}{:target="_blank"},
- Jenny's *Namming Things* Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture10-naming-slides.pdf){:.no-push-state}{:target="_blank"},
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/gMxIBT5jLbjXHQPE2){:.no-push-state}
------------------------------------------------------------------------
<a name="11"></a>
11: Structural Bioinformatics (Part 1)
--------------------------------------
**Topics**: Protein structure function relationships, Protein structure and visualization resources, Modeling energy as a function of structure, Homology modeling, Predicting functional dynamics, Inferring protein function from structure.
**Goal**:
- View and interpret the structural models in the PDB,
- Understand the classic `Sequence>Structure>Function` via energetics and dynamics paradigm,
- Be able to use VMD for biomolecular visualization and analysis,
- Appreciate the role of bioinformatics in mapping the ENERGY LANDSCAPE of biomolecules,
- Be able to use the Bio3D package for exploratory analysis of protein sequence-structure-function-dynamics relationships.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-11-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-11-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-11-bggn213.pdf){:.no-push-state}{:target="_blank"},
- Software link: [VMD download](http://www.ks.uiuc.edu/Development/Download/download.cgi){:.no-push-state}{:target="_blank"},
- Feedback: [Muddy-Point-Assesment](https://forms.gle/epVKGejGRectHEdp8){:.no-push-state}.
------------------------------------------------------------------------
<a name="12"></a>
12: Bioinformatics in drug discovery and design
-----------------------------------------------
**Topics**: Bioinformatics approaches for drug discovery, Target & lead identification, Receptor/target-based approaches, Small molecule docking methods, Protein motion and conformational variants and functional dynamics; Molecular simulation and drug optimization.
**Goals**:
- Appreciate how bioinformatics can predict functional dynamics & further aid drug discovery,
- Be able to apply open-source *In silico* docking and virtual screening strategies for drug discovery,
- Appreciate how bioinformatics can predict the functional dynamics of biomolecules,
- Be able to use Bio3D for the analysis and prediction of protein flexibility,
- Understand the increasing role of bioinformatics in pharma and the drug discovery process in particular.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-12-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-12-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-12-bggn213.pdf){:.no-push-state}{:target="_blank"},
- Software download links: [AutoDock Tools](http://mgltools.scripps.edu/downloads){:.no-push-state}{:target="_blank"}, [AutoDock Vina](http://vina.scripps.edu/download.html){:.no-push-state}{:target="_blank"},
- For **Mac only** [Xquartz](https://www.xquartz.org){:.no-push-state}{:target="_blank"},
- Optional backup files: [config.txt]({{ site.baseurl }}/class-material/config.txt){:.no-push-state}, [1hsg_protein.pdbqt]({{ site.baseurl }}/class-material/1hsg_protein.pdbqt){:.no-push-state}, [ligand.pdbqt]({{ site.baseurl }}/class-material/ligand.pdbqt){:.no-push-state}, [log.txt]({{ site.baseurl }}/class-material/log.txt){:.no-push-state}, [all.pdbqt]({{ site.baseurl }}/class-material/all.pdbqt){:.no-push-state}
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/nHmtEwJB7xaEZHua2){:.no-push-state}
------------------------------------------------------------------------
<a name="13"></a>
13: Genome informatics and high throughput sequencing (Part 1)
--------------------------------------------------------------
**Topics**: Genome sequencing technologies past, present and future (Sanger, Shotgun, PacBio, Illumina, toward the $500 human genome), Biological applications of sequencing, Variation in the genome, RNA-Sequencing for gene expression analysis; Major genomic databases, tools and visualization resources from the EBI & UCSC, The Galaxy platform for quality control and analysis; Sample Galaxy RNA-Seq workflow with FastQC and Bowtie2
**Goals**:
- Appreciate and describe in general terms the rapid advances in sequencing technologies and the new areas of investigation that these advances have made accessible.
- Understand the process by which genomes are currently sequenced and the bioinformatics processing and analysis required for their interpretation.
- For a genomic region of interest (e.g. the neighborhood of a particular SNP), use a genome browser to view nearby genes, transcription factor binding regions, epigenetic information, etc.
- Be able to use the Galaxy platform for basic RNA-Seq analysis from raw reads to expression value determination.
- Understand the FASTQ file format and the information it holds.
- Understand the [SAM/BAM file format]({{ site.baseurl }}//class-material/sam_format/){:.no-push-state} and the information it holds.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-13-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-13-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-13-bggn213.pdf){:.no-push-state}{:target="_blank"},
- RNA-Seq data files: [HG00109_1.fastq]({{ site.baseurl }}/class-material/HG00109_1.fastq){:.no-push-state}, [HG00109_2.fastq]({{ site.baseurl }}/class-material/HG00109_2.fastq){:.no-push-state}, [genes.chr17.gtf]({{ site.baseurl }}/class-material/genes.chr17.gtf){:.no-push-state}, [Expression genotype results]({{ site.baseurl }}/class-material/rs8067378_ENSG00000172057.6.txt){:.no-push-state}, [Example R script]({{ site.baseurl }}/class-material/lecture13_plot.r){:.no-push-state}{:target="_blank"}, [Example Rmd](https://github.com/bioboot/test_github/blob/master/lecture13_plot.md){:.no-push-state}{:target="_blank"}.
- [SAM/BAM file format description]({{ site.baseurl }}//class-material/sam_format/){:.no-push-state}{:target="_blank"}.
- Feedback: [Muddy-Point-Assesment](https://goo.gl/forms/uokTiQ3YStajFVIl1){:.no-push-state}
## IPs
- 129.114.104.173
- 129.114.16.169
- 129.114.16.94
- 149.165.169.213
- 149.165.171.133
- 149.165.171.141
- 149.165.170.201
- 149.165.169.46
- 149.165.170.231
- 149.165.171.103
- 149.165.171.97
- 149.165.171.92
- 149.165.171.154
- 149.165.171.174
- 149.165.171.9
- 149.165.171.63
- 149.165.171.28
- 149.165.171.138
- 149.165.171.81
- 129.114.16.154
- 149.165.171.156
- 149.165.171.159
- 149.165.171.172
- 149.165.171.161
- 149.165.171.165
- 149.165.171.19
- **HOLD** 149.165.168.59 (BG)
------------------------------------------------------------------------
<a name="14"></a>
14: Transcriptomics and the analysis of RNA-Seq data
----------------------------------------------------
**Topics**:
Analysis of RNA-Seq data with R, Differential expression tests, RNA-Seq statistics, Counts and FPKMs, Normalizing for sequencing depth and gene length, Hands-on analysis of RNA-Seq data with R, DESeq2 analysis. **N.B.** Find a gene assignment part 1 due today!
**Goals**:
- Given an RNA-Seq dataset, find the set of significantly differentially expressed genes and their annotations.
- Gain competency with data import, processing and analysis with DESeq2 and other bioconductor packages.
- Understand the structure of count data and metadata required for running analysis.
- Be able to extract, explore, visualize and export results.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-14-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-14-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Detailed [Bioconductor setup]({{ site.baseurl }}//class-material/bioconductor_setup/){:.no-push-state}{:target="_blank"} instructions.
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-14-bggn213.html){:.no-push-state}{:target="_blank"},
- Data files: [airway_scaledcounts.csv]({{ site.baseurl }}/class-material/airway_scaledcounts.csv){:.no-push-state}, [airway_metadata.csv]({{ site.baseurl }}/class-material/airway_metadata.csv){:.no-push-state}, [annotables_grch38.csv]({{ site.baseurl }}/class-material/annotables_grch38.csv){:.no-push-state}.
- Feedback: **To Update** [Muddy-Point-Assesment](){:.no-push-state}
**Readings**:
- Excellent review article: [Conesa et al. A survey of best practices for RNA-seq data analysis. _Genome Biology_ 17:13 (2016)](http://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0881-8){:.no-push-state}.
- An oldey but a goodie: [Soneson et al. "Differential analyses for RNA-seq: transcript-level estimates improve gene-level inferences." _F1000Research_ 4 (2015)](https://f1000research.com/articles/4-1521/v2).
- Abstract and introduction sections of: [Himes et al. "RNA-Seq transcriptome profiling identifies CRISPLD2 as a glucocorticoid responsive gene that modulates cytokine function in airway smooth muscle cells." _PLoS ONE_ 9.6 (2014): e99625](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0099625){:.no-push-state}.
------------------------------------------------------------------------
<a name="15"></a>
15: Genome annotation and the interpretation of gene lists
----------------------------------------------------------
**Topics**: Gene finding and functional annotation from high throughput sequencing data, Functional databases KEGG, InterPro, GO ontologies and functional enrichment
**Goals**: Perform a GO analysis to identify the pathways relevant to a set of genes (e.g. identified by transcriptomic study or a proteomic experiment). Use both Bioconductor packages and online tools to interpret gene lists and annotate potential gene functions.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-15-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture-15-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet]({{ site.baseurl }}/class-material/lab-15-bggn213.html){:.no-push-state}{:target="_blank"},
- Data files: [GSE37704_featurecounts.csv]({{ site.baseurl }}/class-material/GSE37704_featurecounts.csv){:.no-push-state}, [GSE37704_metadata.csv]({{ site.baseurl }}/class-material/GSE37704_metadata.csv){:.no-push-state}.
- Feedback: [Muddy-Point-Assesment](){:.no-push-state}
**R Knowledge Check**:
[**Quiz Assessment**](https://forms.gle/9bavpC1fP3VrviLw6){:.no-push-state}{:target="_blank"},
**Readings**:
- Good review article: Trapnell C, Hendrickson DG, Sauvageau M, Goff L et al. "*Differential analysis of gene regulation at transcript resolution with RNA-seq*". Nat Biotechnol 2013 Jan;31(1):46-53. [PMID: 23222703](https://www.ncbi.nlm.nih.gov/pubmed/23222703){:.no-push-state}.
------------------------------------------------------------------------
<a name="16"></a>
16: Biological network analysis
-------------------------------
**Topics**: Network graph approaches for integrating and interpreting large heterogeneous high throughput data sets; Discovering relationships in 'omics' data; Network construction, manipulation, visualization and analysis; Graph theory; Major network topology measures and concepts (Degree, Communities, Shortest Paths, Centralities, Betweenness, Random graphs vs scale free); Hands-on with Cytoscape and igraph R packages for network visualization and analysis.
**Goals**:
- Be able to describe the major goals of biological network analysis and the concepts underlying network visualization and analysis.
- Be able to use Cytoscape for network visualization and manipulation.
- Be able to find and instal Cytoscape Apps to extend network analysis functionality.
- Appreciate that the igraph R package has extensive network analysis functionality beyond that in Cytoscape and that the R bioconductor package RCy3 package allows us to bring networks and associated data from R to Cytoscape so we can have the best of both worlds.
**Material**:
- Software Download: [Cytoscape](https://cytoscape.org/download.html){:.no-push-state}{:target="_blank"},
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-16-bggn213_large.pdf){:.no-push-state}{:target="_blank"},, [Small PDF]({{ site.baseurl }}/class-material/lecture-16-bggn213_small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on section worksheet Part 1 (**Networks Visualization**).]({{ site.baseurl }}/class-material/lecture16_bggn213_lab1.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on section worksheet Part 2 (**Networks Analysis**).]({{ site.baseurl }}/class-material/lecture16_bggn213_lab2.html){:.no-push-state}{:target="_blank"},
- Data files:
- [galFiltered.sif]({{ site.baseurl }}/class-material/galFiltered.sif){:.no-push-state},
- [galExpData.csv]({{ site.baseurl }}/class-material/galExpData.csv){:.no-push-state},
- [CytoscapeDemo_01.cys]({{ site.baseurl }}/class-material/CytoscapeDemo_01.cys){:.no-push-state},
- [virus_prok_cor_abundant.tsv]({{ site.baseurl }}/class-material/virus_prok_cor_abundant.tsv){:.no-push-state},
- [phage_ids_with_affiliation.tsv]({{ site.baseurl }}/class-material/phage_ids_with_affiliation.tsv){:.no-push-state},
- [prok_tax_from_silva.tsv]({{ site.baseurl }}/class-material/prok_tax_from_silva.tsv){:.no-push-state}.
- Feedback: [Muddy-Point-Assesment](){:.no-push-state}
------------------------------------------------------------------------
<a name="17"></a>
17: Essential UNIX for bioinformatics
-------------------------------------------
**Topics**: Bioinformatics on the command line, Why do we use UNIX for bioinformatics? UNIX philosophy, 21 Key commands, Understanding processes, File system structure, Connecting to remote servers, Redirection, streams and pipes, Workflows for batch processing, Organizing computational projects.
**Goal:**
- Understand why we use UNIX for bioinformatics
- Use UNIX command-line tools for file system navigation and text file manipulation.
- Have a familiarity with 21 key UNIX commands that we will use ~90% of the time.
- Be able to connect to remote servers from the command line.
- Use existing programs at the UNIX command line to analyze bioinformatics data.
- Understand IO Redirection, Streams and pipes.
- Understand best practices for organizing computational projects.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture17_bggn213-large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture17_bggn213-small.pdf){:.no-push-state}{:target="_blank"},
- Hands-on section worksheet
* [Using remote UNIX machines (Part I, **REQUIRED**)]({{ site.baseurl }}/class-material/17_blast-01/){:.no-push-state}{:target="_blank"},
* [Using remote UNIX machines (Part II, Optional)]({{ site.baseurl }}/class-material/16_blast-02/){:.no-push-state},
* [Using remote UNIX machines (Part III, Optional)]({{ site.baseurl }}/class-material/16_blast-03/){:.no-push-state}.
- Example data set [bggn213_01_unix.zip]({{ site.baseurl }}/class-material/bggn213_01_unix.zip){:.no-push-state},
- [Muddy point assessment](https://goo.gl/forms/W2G06LVrn2pADB2q1){:.no-push-state}.
## IPs
- (01) **54.68.182.225** (HOLD)
- (02) **54.189.169.250**
- (03) **35.163.70.54**
- (04) **52.43.128.251**
- (05) **34.211.228.140**
- (06) **34.215.96.32**
- (07) **HOLD**
- (08) **34.209.87.222**
- (09) **54.149.119.57**
- (10) **54.188.13.147**
- (11) **18.237.84.45**
- (12) **52.41.50.57**
- (13) **18.236.143.14**
- (14) **35.164.171.59**
- (15) **52.25.18.248**
- (16) **52.10.25.88**
- (17) **52.27.151.124**
- (18) **34.222.134.9**
- (19) **34.221.179.45**
- (20) **18.237.252.10**
- (21) **54.202.63.58**
- (22) **34.221.163.124**
- (23-31) **HOLD-**
- (32) **34.211.23.184**
- (33) **52.27.103.10**
- (34) **34.211.99.242**
- (35) **54.245.222.241**
- (36) **54.149.122.32**
- (37) **54.69.247.45**
- (38) **34.221.177.237**
- (39) **34.222.145.209**
- (40) **18.236.154.101**
- (41) **34.215.234.71**
- (42) **18.237.105.164**
- (43) **54.218.159.21**
- (44) **34.221.162.36**
- (45) **34.217.58.181**
- (46) **54.244.198.91**
- (47) **54.190.60.70**
- (48) **52.43.9.18**
- (49) **34.221.95.56**
- (50) **52.38.53.33** (HOLD)
------------------------------------------------------------------------
<a name="18"></a>
18: Cancer genomics
-------------------
**Topics**: Cancer genomics resources and bioinformatics tools for investigating the molecular basis of cancer. Large scale cancer sequencing projects; NCI Genomic Data Commons; What has been learned from genome sequencing of cancer? **Immunoinformatics, immunotherapy and cancer**; Guest lecture from Dr. Bjoern Peters (LAI): Using genomics and bioinformatics to harness a patient’s own immune system to fight cancer. Implications for the development of personalized medicine.
**N.B.** Find a gene assignment due before next class!
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-18-bggn213_large.pdf){:.no-push-state}{:target="_blank"}, [Small PDF]({{ site.baseurl }}/class-material/lecture18_bggn213-small.pdf){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet Part 1.]({{ site.baseurl }}/class-material/lecture18_part1_BGGN213_W19.html){:.no-push-state}{:target="_blank"},
- Lab: [Hands-on Worksheet Part 2.]({{ site.baseurl }}/class-material/lecture18_part2_BGGN213_W19/){:.no-push-state}{:target="_blank"},
- Data files:
- [lecture18_sequences.fa]({{ site.baseurl }}/class-material/lecture18_sequences.fa){:.no-push-state},
- Solutions:
- Example [mutant identification and subsequence extraction with R]({{ site.baseurl }}/class-material/lecture18_part2_example/){:.no-push-state} walk through.
- [subsequences.fa]({{ site.baseurl }}/class-material/subsequences.fa){:.no-push-state},
- [Solutions.pdf]({{ site.baseurl }}/class-material/Solutions.pdf){:.no-push-state}.
- IEDB HLA binding prediction website [http://tools.iedb.org/mhci/](http://tools.iedb.org/mhci/){:.no-push-state}{:target="_blank"}.
- [GitHub Rmd](https://github.com/bioboot/bggn213_classwork_S19/blob/master/class18/class18.md){:.no-push-state}{:target="_blank"}.
- Feedback: [Muddy-Point-Assesment](https://forms.gle/VgGfkeXrByypzWkj8){:.no-push-state}
------------------------------------------------------------------------
<a name="19"></a>
19: Course summary
------------------
**Topics**: Summary of learning goals, Student course evaluation time and exam preparation; Find a gene assignment due. Open study.
**Material**:
- Lecture Slides: [Large PDF]({{ site.baseurl }}/class-material/lecture-19-bggn213_large.pdf){:.no-push-state}{:target="_blank"},
- Hand-out: [**Exam guidelines, topics, and example questions**]({{ site.baseurl }}/class-material/BGGN213_exam_guidlines.pdf){:.no-push-state}{:target="_blank"},
- Ether-pad: [**Feedback**](https://board.net/p/bggn213_f19){:.no-push-state}{:target="_blank"},
- DataCamp: [**Bioinformatics Extension Track**]({{ site.baseurl }}/class-material/datacamp_extras.pdf){:.no-push-state}{:target="_blank"}.
- Other Resources: [Advanced R book](http://adv-r.had.co.nz){:.no-push-state}{:target="_blank"}, [R for Data Science](https://r4ds.had.co.nz){:.no-push-state}{:target="_blank"}.
------------------------------------------------------------------------
<a name="20"></a>
20: Final exam!
---------------
This open-book, open-notes, 150-minute test consists of 35 questions worth 80 points in total. The number of points for each question is indicated in green font at the beginning of the question.
Please remember to:
- Read all questions carefully before starting.
- Put your name, UCSD email and PID number on your test.
- Write all your answers on the space provided in the exam paper.
- Remember that concise answers are preferable to wordy ones.
- Clearly state any simplifying assumptions you make in solving a problem.
- No copies of this exam are to be removed from the classroom.
- No talking or communication (electronic or otherwise) with your fellow students once the exam has begun.
- **Good luck!**
| 71.470588 | 624 | 0.638814 | eng_Latn | 0.641607 |
2df7d3355ef4ce80366ff2143266d06317005ec7 | 6,176 | md | Markdown | content/container/qke_plus/faq/container_faq.md | lijuan777/qingcloud-docs | 7ca17238cf7cfa67295069ab943c166d72c53d6f | [
"Apache-2.0"
] | null | null | null | content/container/qke_plus/faq/container_faq.md | lijuan777/qingcloud-docs | 7ca17238cf7cfa67295069ab943c166d72c53d6f | [
"Apache-2.0"
] | null | null | null | content/container/qke_plus/faq/container_faq.md | lijuan777/qingcloud-docs | 7ca17238cf7cfa67295069ab943c166d72c53d6f | [
"Apache-2.0"
] | null | null | null | ---
title: "容器应用 FAQ"
description:
weight: 15
draft: false
keyword: QKE, 容器, 应用, 镜像
---
## 如何配置镜像仓库?
支持在 QKE 控制台的**集群信息** > **环境参数**页面中通过`registry-mirrors`参数配置您的镜像仓库服务地址。
具体操作请参见[配置镜像仓库](/container/qke_plus/quickstart/cfg_mirror_repo/)。
## 镜像仓库无法连通怎么办?
Kubernetes 上的工作负载需要拉取 Docker 镜像,请确保集群所在私网能够访问相应的镜像仓库。
- 如果使用公网镜像仓库,比如 docker.io,请确保 VPC 绑定了公网 IP。
- 如果使用私有镜像仓库,比如青云提供的 [Harbor 镜像仓库](https://docsv3.qingcloud.com/container/harbor/intro/introduction/),请确保 QKE 所有节点可以访问到 Harbor 的负载均衡器地址。
> **注意**
>
> 如果 Harbor 后端使用的是 QingStor 对象存储,还要确保 QKE 所有节点可以访问到 QingStor 对象存储。
## 如何配置镜像仓库加速?
国内从 DockerHub 拉取镜像有时会遇到困难,此时通过配置镜像加速器来解决该问题。
1. 修改 `/etc/docker/daemon.json` 文件,在文件添加 `registry-mirrors` 。如下所示
```
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn/"]
}
```
> **说明**
>
> 常用的加速地址有:
>
> - Docker 中国区官方镜像:https://registry.docker-cn.com
> - 网易:http://hub-mirror.c.163.com
> - ustc:https://docker.mirrors.ustc.edu.cn
> - 中国科技大学:https://docker.mirrors.ustc.edu.cn
2. 重启 Docker。
```
systemctl restart docker
```
3. 检查是否配置成功。
```
docker info |grep -A 1 Mirrors
```
预期显示:
```
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
```
## 删除节点后挂载存储卷的容器组迁移失败
使用云平台硬盘作为存储服务的节点,当节点被删除后,节点上的有状态副本集的容器组可能会无法在其他节点重新创建。
此时,需要查看集群内被删除容器组挂载存储卷的 volumeattachment 对象是否正常清理,将此 volumeattachment 对象删除后,重新创建的容器组变可以正常挂载存储卷。
操作方法如下:
1. 找到无法重新创建的容器组。
```
# kubectl get po -n demo-project nginx-perf-7
NAME READY STATUS RESTARTS AGE
nginx-perf-7 0/1 ContainerCreating 0 22h
```
2. 查看容器组无法重新创建的原因,显示挂载存储卷失败。
```
# kubectl describe po -n demo-project nginx-perf-7
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 51s (x604 over 22h) kubelet, i-e5ri86tg Unable to mount volumes for pod "nginx-perf-7_demo-project(087b3391-8990-11e9-9b03-525433ce642d)": timeout expired waiting for volumes to attach or mount for pod "demo-project"/"nginx-perf-7". list of unmounted volumes=[nginx-neonsan-pvc]. list of unattached volumes=[nginx-neonsan-pvc default-token-znp5w]
```
3. 找到未挂载上的存储卷 `nginx-neonsan-pvc-nginx-perf-7`。
```
# kubectl get po -n demo-project nginx-perf-7 -oyaml
...
spec:
volumes:
- name: nginx-neonsan-pvc
persistentVolumeClaim:
claimName: nginx-neonsan-pvc-nginx-perf-7
...
```
4. 找到未挂载上的存储卷对应的 PV `pvc-93e24c1d88d711e9`, 到 QingCloud 控制台查看硬盘名为 `pvc-93e24c1d88d711e9` 的硬盘应为可用状态。
```
# kubectl get pvc nginx-neonsan-pvc-nginx-perf-7 -n demo-project
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-neonsan-pvc-nginx-perf-7 Bound pvc-93e24c1d88d711e9 100Gi RWO neonsan 44h
```
5. 找到 PV `pvc-93e24c1d88d711e9` 对应的 volumeattachment 对象名 `csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883`
```
# kubectl get volumeattachment -oyaml|grep pvc-93e24c1d88d711e9 -B 16
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
creationTimestamp: 2019-06-07T03:52:13Z
deletionGracePeriodSeconds: 0
deletionTimestamp: 2019-06-09T00:47:49Z
finalizers:
- external-attacher/csi-qingcloud
name: csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883
resourceVersion: "1178846"
selfLink: /apis/storage.k8s.io/v1/volumeattachments/csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883
uid: a21a70df-88d7-11e9-aed1-525433888127
spec:
attacher: csi-qingcloud
nodeName: i-5n8osu8t
source:
persistentVolumeName: pvc-93e24c1d88d711e9
```
6. 查看未被正常清理的 volumeattachment 对象, status.detachError 显示 `node "XXX" not found`。
```
# kubectl get volumeattachment csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883 -oyaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
creationTimestamp: 2019-06-07T03:52:13Z
deletionGracePeriodSeconds: 0
deletionTimestamp: 2019-06-09T00:51:53Z
finalizers:
- external-attacher/csi-qingcloud
name: csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883
resourceVersion: "1180401"
selfLink: /apis/storage.k8s.io/v1/volumeattachments/csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883
uid: a21a70df-88d7-11e9-aed1-525433888127
spec:
attacher: csi-qingcloud
nodeName: i-5n8osu8t
source:
persistentVolumeName: pvc-93e24c1d88d711e9
status:
attached: true
detachError:
message: node "i-5n8osu8t" not found
time: 2019-06-09T00:52:12Z
```
7. 编辑 volumeattachment 对象,删去 `finalizers` 部分。
```
# kubectl edit volumeattachment csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883 -oyaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
creationTimestamp: 2019-06-07T03:52:13Z
deletionGracePeriodSeconds: 0
deletionTimestamp: 2019-06-09T00:51:53Z
name: csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883
resourceVersion: "1180401"
selfLink: /apis/storage.k8s.io/v1/volumeattachments/csi-8b2ed050e78ad6f3a5491af35c9351358856ae15cc874262ca0b78a1c332b883
uid: a21a70df-88d7-11e9-aed1-525433888127
spec:
attacher: csi-qingcloud
nodeName: i-5n8osu8t
source:
persistentVolumeName: pvc-93e24c1d88d711e9
...
```
8. 观察容器组状态,5 分钟左右可挂载上存储卷,没有其他问题情况下容器组可恢复运行状态。
```
# kubectl get po -n demo-project nginx-perf-7
NAME READY STATUS RESTARTS AGE
nginx-perf-7 1/1 Running 0 23h
```
| 30.126829 | 375 | 0.651069 | yue_Hant | 0.51437 |
2df86cc92e8bc56a3bd299f0475d7fde7067e280 | 18 | md | Markdown | README.md | slamb-us/proj1 | 05a39a95b7a3f53c2166c7d42ebb3107749e22c7 | [
"MIT"
] | null | null | null | README.md | slamb-us/proj1 | 05a39a95b7a3f53c2166c7d42ebb3107749e22c7 | [
"MIT"
] | null | null | null | README.md | slamb-us/proj1 | 05a39a95b7a3f53c2166c7d42ebb3107749e22c7 | [
"MIT"
] | null | null | null | # proj1
Project 1
| 6 | 9 | 0.722222 | eng_Latn | 0.910159 |
2df8a5a7f7260371c044581050aa661f4c91ac4a | 58 | md | Markdown | README.md | hxkiran/VHXTool | 89d193fdfa36f28e4bbf54cc40ea75a1102f5057 | [
"MIT"
] | null | null | null | README.md | hxkiran/VHXTool | 89d193fdfa36f28e4bbf54cc40ea75a1102f5057 | [
"MIT"
] | null | null | null | README.md | hxkiran/VHXTool | 89d193fdfa36f28e4bbf54cc40ea75a1102f5057 | [
"MIT"
] | null | null | null | # VHXTool
VHX Tool for the Hyper-V Hyperflex Health Check
| 19.333333 | 47 | 0.793103 | kor_Hang | 0.576932 |
2df9cd3372fbc250025e6abedd92b83deaf17a0f | 3,276 | md | Markdown | docs/sigep/REQUEST_SHIPPINGS_XML.md | dealencarmarcelo/correios_gem | 94f99ed983949b3b4bafc0e813e70981ce5e6ae0 | [
"MIT"
] | 6 | 2019-05-03T17:12:23.000Z | 2021-06-11T16:39:22.000Z | docs/sigep/REQUEST_SHIPPINGS_XML.md | dealencarmarcelo/correios_gem | 94f99ed983949b3b4bafc0e813e70981ce5e6ae0 | [
"MIT"
] | 1 | 2021-08-04T22:25:36.000Z | 2021-08-04T22:25:36.000Z | docs/sigep/REQUEST_SHIPPINGS_XML.md | dealencarmarcelo/correios_gem | 94f99ed983949b3b4bafc0e813e70981ce5e6ae0 | [
"MIT"
] | 9 | 2019-05-03T17:12:33.000Z | 2021-09-03T17:50:28.000Z | ## Solicitar XML de Entregas
Documentação dos Correios: `solicitação de XML da PLP`
Retorna dados de entregas criadas que já foram postadas nas agências dos Correios, com todos os dados já
validados e corrigidos pelos Correios.
____
### Autenticação
Necessário informar:
* `sigep_user`
* `sigep_password`
### Exemplo de entrada
```ruby
require 'correios_gem'
...
Correios::Sigep.request_shippings_xml({
request_id: '101001'
})
```
### Saída
```ruby
{
:request_id => '101001',
:card => '0067599079',
:global_value => 26.72,
:payment_method => :to_bill,
:shipping_site => {
:name => 'AGF AEROPORTO CONFINS',
:code => '236042'
},
:sender => {
:contract => '9992157880',
:board_id => '10',
:administrative_code => '17000190',
:name => 'Empresa XPTO',
:phone => '3125522552',
:fax => '3125522552',
:email => 'contato@xpto.com.br',
:address => {
:zip_code => '35690000',
:state => 'MG',
:city => 'Florestal',
:neighborhood => 'Jabuticabeiras',
:street => 'Rua General Souza de Melo',
:number => '123',
:additional => '3o Andar'
}
},
:shippings => [
{
:label_number => 'SZ460209415BR',
:service_code => '04162',
:cost_center => 'Comercial',
:description => 'Peças automotivas',
:declared_value => 352.50,
:value => 26.72,
:proof_number => '1573446553',
:cubage => 0.0,
:additional_value => 10.0,
:additional_services => ['25', '1', '49'],
:notes => [
'Frágil',
'Conteúdo cortante'
],
:receiver => {
:name => 'José Maria Trindade',
:phone => '1138833883',
:cellphone => '11997799779',
:email => 'jose.maria@gmail.com',
:address => {
:zip_code => '69350000',
:state => 'RR',
:city => 'Alto Alegre',
:neighborhood => 'Santo Antonio',
:street => 'Rua Machado',
:number => '200',
:additional => 'B'
}
},
:invoice => {
:number => '000120',
:serie => '1',
:kind => 'venda',
:value => 352.5
},
:object => {
:type => :box_prism,
:height => 11.2,
:width => 23,
:length => 12.1,
:diameter => 0,
:weight => 350.5
}
}
]
}
```
* O campo `payment_method` será retornado conforme Anexo 1.
* O campo `shipping_site` é a agência onde os objetos da entrega foram postados.
* O campo `shippings[i].proof_number` é o número de comprovante da postagem.
* O campo `shippings[i].object.type` será retornado conforme Anexo 2.
* Medidas são retornadas em cm e gramas.
### Anexos
__Anexo 1:__
Opções de formas de pagamento:
* `:postal_vouncher` (Vale Postal)
* `:postal_refound` (Reembolso Postal)
* `:exchange_contract` (Contrato de Câmbio)
* `:credit_card` (Cartão de Crédito)
* `:other` (Outros)
* `to_bill` (A faturar)
__Anexo 2:__
Opções de tipos de objetos:
* `:letter_envelope` (Envelope)
* `:box_prism` (Caixa ou Prisma)
* `:cylinder` (Cilindro)
---
[Consultar documentação dos Correios](CORREIOS_DOCUMENT.pdf)
| 24.818182 | 105 | 0.549756 | por_Latn | 0.725017 |
2dfa839baa26347a50c38a2baba2911dbe673c7e | 10,254 | md | Markdown | Markdown/02000s/04000/grief lifted.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | 2 | 2022-01-19T09:04:58.000Z | 2022-01-23T15:44:37.000Z | Markdown/00500s/03000/grief lifted.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | null | null | null | Markdown/00500s/03000/grief lifted.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | 1 | 2022-01-09T17:10:33.000Z | 2022-01-09T17:10:33.000Z | - Into looked passengers [[message]] us luncheon but but. Age to always if raging. By been in and or little habit up. For regarded or dry which song all i. The de later [[dressed empty]] pwh stepped. Going of man the for the which. Shall see killed aid see me the. Continued list was the to you. And excitement of thought lost. But ordinary Scotland of observed is on for. Is dwelling things not looked. Them cant to your stop was remarkable received. Extend reached to indeed made and. Surroundings table November. Difficulties another that in reader. Fighting her the return scattered of [[dressed objects]] besides.
- Been much that out hold of incredible. Dead and in they to heard. No forms that in describe given. Suit skin you men sole. Talked the [[relief]] may with. Success free the looked all of. Served of view even at spirited or. Visible the her event summer two. Great impatiently was was abroad it would in. With notorious the the us carry. Battle wife donate short. T going factory the not any. And he belongs of i. On of brows statue is that. I visit litter forget of. Door room drew himself more lower. [[December]] has in nobles to. License this young dey pwh. Apprehend good he sent him little. Will believe soothing outside day of for. Harry justice heard by to the. Home indicate and it of there. Of high and me far. Done help from his hideous plant you. To or down that situation be water. As Rome on activities of in to. Made him and world by the. Of him desk the their.
- Them considered of present. Jersey of seat until permission the. [[wore hopes]] be an much as over. It of is and the should the. [[lifted]] very shouted what would in remain. The for far board so another. Strongly is and he who or. Lodgings never make i reach accused from. Would named met extremity was criticism sometimes. The stepped urging fathers she. Of early trunk electronic to pipe part. About guard friends i. Three with him one and that. Else take till and kindly under. Only agent him i it for to.
- Email which only me was. Go and v likewise that Mrs while. From man it reigned before heavy disturbance. Her greater life i tell almost they. That it the worship approach received fought. Next or what of were. Side but brought br dear it with.
- Law him said his into. An cups three down could come patience. Many balance would intended soul say may Matthew. First [[loose]] the never of world the his. Client would his steady and rocky. D you that the level tender. And between before count of is. Weeping possibly yet if in any. Of they its united referred life for. Soothed smiling call that. Sun the twelve of and whole and. Season and siege is memory all swell. Down he has for you or ready. The gone eyes i equally donations. To new hail to resting up even. Eloquent where act be complaint wrong or came. May known out haul extending such happy have. Sides exactly hers evermore might about. Is creep the in for affectation of. That his of flower guilt in calm same. Back the these correctly sometimes to them. In consequences this to fairly. In surroundings what of her duty logic back. Strenuous six things pipe side. Kirk the white be of fighting space eyes. Is there in of was doctrines for. Of wheels to glasses he and friend perhaps. For and behold time. His born became use all in. Edward compliance i noting were in grown their. Located and i english yet. Involve hidden ladies either i cover neglect. Been of present his that for. Letter do what thou led. Dr to set gray roll with. Of founded paper shall know wipe. Of had of if end and we. Sandstone of has from by. Like that to again was have or. Sought an not now rough. Life of down his of have.
- Replacement freed was said Ill. In explanation England [[lifted]] is. That resources unless of to finds. Side choose fourteenth threw the. Cry utter taken the that as. Footsteps that stayed to delete two. Country of full eagerly floor few expected. Been i joy whole for there. Was hail and his emancipation. Can most future be without in her. Much up the upon. To seriously to carrying did seemed were. Inhabit man repose from our did September. Lost [[dressed Spain]] arrival so much home and. I over hundreds to very exacting born. Moral of my light scarcely and. Payable the Caesar fugitive [[noise lifted]] replied the into greatest. Vol pronounced reports gave sword law that. And with and but in seeds. Works upon England form against it their. And by shall return can of. Almost fear marked facing at and. And with me to Egypt or.
- And my in in the and for. Of and with perfectly transportation ni saddle. To stone her written proof abroad teach. Him forbid arise that.
- Force not was punishment talking to. While was feeling give by principle. Seen distribute above public if object of with. Has county way hands suddenly of. Round by hold American how opinion. Of joe least on original. And then the quantity he would in. Have the give on to. Conduct [[hopes previous]] laws we anything filename earth and. Horse in to odious carrying. Remarkable so for of contributions places from philosopher. It on cargo even double had. With last the married please.
- The discoveries suffered of her sufferings the. I kind no day ask aught why. Were to and must of was country she. Of very march the humorous. She only wives of me are. Of picture hand footing born head dream. One suppressed contemptuous to turning the choose. The [[sentence hopes]] and tastes as was relations. Same of set get this who. And be towards hearts articles go have. Two of for many wall said me say. Opened objection clearly was the off Edinburgh passed. Queen [[literature]] and too to sneer. When the came and no she high. Tyranny convenience thought waiting cried the an human.
- In distinguished still is new [[legs]] blossoms. Practice was carry from before in. My indeed obtained the stop electric most increasing. Straight and his only liability in. Westward the think to for fact. Would Dutch by of to the i. On be in materials theme. Comb the dollars joe opinions remember. Him it or to mouse put trust and. Loose place i hand with with. The own frame all in even. Air he fine and remained and. I when in here proportions the are. Hand miles really the he. His creeping by she to shall. Glory which my the who wrong started. That either young arrival lessons themselves on. To of accepted much be our. Him between possible man hill i excess. Who as the her hoped is.
- Farmer the the was marble her and i. Her style is sound midst hardly the. And them in my. By night lad downward of days. All specially Charles which much laughter. One is taken such did may. Gasp she so will i pen. Than we trademark putting to authority she in lying. [[accompanied lifted]] the dark expressive the. Finished it profits so another yet. Man and is in overtaken rest from. That when needless unreasonable and to. In an laid manner flung who were. Of ex sons greatly in the of. Essentially sang striking food judgment sure feeling. In to sn pointed to at. His but profitable supposed i asked that. He the gloomy through of wine her. Of against such wooded system last the in.
-
- Precious and but of had are middle.
- The months was persons mark grasp.
- For of and dog and hot.
- Reception statue is was priest he life.
- My not sweat his [[join hopes]] cast.
- Stretched tracing occupations hair striking rest [[minds grand]] to.
- Am why the was turned or moustache.
- And verge from see first that and its the.
- Snare not fine the himself George.
- To give truth only think.
- If he i coffee with Charles.
- He spend she is note of the.
- Intercourse strode the chief said mainly used.
- Was ever married up pain is toward.
- How to pretence if the you than.
- Her rise with of last.
- The story put against upon to no down will [[worst lifted]].
- The their desperate end told telling.
- Every the hath and most help agitation.
- Drops Jordan behind shall previously chambers putting and.
- Of of the necessary women when now.
- Their i in rapidity heartily use test that.
- Other she recognition white his have.
- All on favor impression work other laughed.
- As long were affectionate finest the.
- Rather and pointed of frightened was.
- Their purpose absolute of types be i.
- Part hundred her opinion close been. World and the prove has i room. Him mother Polly house up sharp i. The had by became [[arrival slow]] with forward the. And in always him cause must. Go household the her attract wine and Edward. Run last finger that she Quaker was. Series describe weak sagacity gentle i. Yes her plural establishing threaten include which or. His the stairs December on that through. Pleased heard and hunter tell part. Hay of head just thirsty or and. When for trees be he boy well. Her equipment from witnesses so green or. Warning and at happened and shows doubts we. Charles pair was and put end you interest. Myself has certainly late politely public in gave.
- To rather once easily some a. Far an in [[collection separate]] ordinary top. Obey it here [[wasnt]] like would as. Cast and was or hole. To to said deeply. Mountains themselves shot stated. Ascend by did of relief month. Doubtless she roman possible contains every the. Is up to made her eyes. My for and in bending the. Our were is up in ready. My twins soon be others forbidding impression and. And the space strong let once hanging. Couple with very nights. And and when perceived rely buy break was. To in very men Margaret saw. The in p great. And majestic [[hopes proceeded]] my or in without. To cold not so day all. On elevation he of the. Actual at proofread and it you. The now certain a to that it she. Book fire of never [[dressed]] we hear [[firm December]]. [[admitted suffer]] man he apparently me he proceedings. That balance resources trees rat what Glasgow. Evenings texts be each and have that. Show set and telling bid of my. She again door in on given. Inasmuch [[carrying farther]] include of now. The for sincere cutting the but shall that. Of books am the that in know. Idea rate warning later and on. Credit etc me command the mark as followed. [[affection]] [[extraordinary]] that her not took. The old said hunt Mrs gentry. | 250.097561 | 1,421 | 0.768481 | eng_Latn | 0.999909 |
2dfa9f9e87211d5dce1dabef910e3f1e840fd93e | 62 | md | Markdown | README.md | matt9ucci/Dockerfiles | 46023f642a2a458067030a596c3bfedd264906d4 | [
"MIT"
] | null | null | null | README.md | matt9ucci/Dockerfiles | 46023f642a2a458067030a596c3bfedd264906d4 | [
"MIT"
] | null | null | null | README.md | matt9ucci/Dockerfiles | 46023f642a2a458067030a596c3bfedd264906d4 | [
"MIT"
] | null | null | null | # Dockerfiles
My Dockerfile, docker-compose.yml, script, etc.
| 20.666667 | 47 | 0.774194 | zsm_Latn | 0.173574 |
2dfb204aeab94d83319bde383391554440431821 | 967 | mkd | Markdown | _posts/2015-10-19-jetbrains-account.mkd | joebui/joebui.github.io | f3e06b3b181c740b2e337f5929a06bca04d5a971 | [
"Apache-2.0"
] | null | null | null | _posts/2015-10-19-jetbrains-account.mkd | joebui/joebui.github.io | f3e06b3b181c740b2e337f5929a06bca04d5a971 | [
"Apache-2.0"
] | 2 | 2021-05-18T15:10:55.000Z | 2021-09-27T20:49:50.000Z | _posts/2015-10-19-jetbrains-account.mkd | joebui/joebui.github.io | f3e06b3b181c740b2e337f5929a06bca04d5a971 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: Using Jetbrains products for free
author: joebui
categories: [news]
---
> Have you ever used or heard of the software from Jetbrains?
> If yes then you can notice that there are some great products that
> you have to pay to get full version or buy a whole product such as IntelliJ Ultimate
> or ReSharper plugin. Fortunately, with your student email you can register for a Jetbrains
> student account that allows you to use the products for free in 1 YEAR.
Follow this [link](https://www.jetbrains.com/student/), scroll down and link "Apply now" and follow the instruction
to register. These are the products you can use for free with this account: **IntelliJ IDEA Ultimate, PyCharm Professional,
RubyMine, PhpStorm, WebStorm, AppCode, CLion, ReSharper, dotTrace, dotCover, dotMemory and ReSharper C++**.
After downloading and installing the product(s), you need to login with your Jetbrains student account to activate the license. Enjoy!
| 50.894737 | 134 | 0.778697 | eng_Latn | 0.99837 |
2dfc884e6735291dcaeb93a0fff9f3df7e77c431 | 3,091 | md | Markdown | README.md | open-source-contributions/Facturae-PHP | b0796ed6d6ab497d44f4930db1c995f02d4b7e13 | [
"MIT"
] | 2 | 2021-04-08T01:04:50.000Z | 2021-06-02T08:13:26.000Z | README.md | open-source-contributions/Facturae-PHP | b0796ed6d6ab497d44f4930db1c995f02d4b7e13 | [
"MIT"
] | null | null | null | README.md | open-source-contributions/Facturae-PHP | b0796ed6d6ab497d44f4930db1c995f02d4b7e13 | [
"MIT"
] | null | null | null | # Facturae-PHP
[](https://travis-ci.com/josemmo/Facturae-PHP)
[](https://www.codacy.com/app/josemmo/Facturae-PHP)
[](https://www.codacy.com/app/josemmo/Facturae-PHP)
[](https://packagist.org/packages/josemmo/facturae-php)
[](https://packagist.org/packages/josemmo/facturae-php)
[](https://josemmo.github.io/Facturae-PHP/)
Facturae-PHP es un paquete escrito puramente en PHP que permite generar facturas electrónicas siguiendo el formato estructurado [Facturae](http://www.facturae.gob.es/), **añadirlas firma electrónica** XAdES y sellado de tiempo, e incluso **enviarlas a FACe o FACeB2B** sin necesidad de ninguna librería o clase adicional.
En apenas 25 líneas de código y con un tiempo de ejecución inferior a 0,4 µs es posible generar, firmar y exportar una factura electrónica totalmente válida:
```php
$fac = new Facturae();
$fac->setNumber('FAC201804', '123');
$fac->setIssueDate('2018-04-01');
$fac->setSeller(new FacturaeParty([
"taxNumber" => "A00000000",
"name" => "Perico de los Palotes S.A.",
"address" => "C/ Falsa, 123",
"postCode" => "12345",
"town" => "Madrid",
"province" => "Madrid"
]));
$fac->setBuyer(new FacturaeParty([
"isLegalEntity" => false,
"taxNumber" => "00000000A",
"name" => "Antonio",
"firstSurname" => "García",
"lastSurname" => "Pérez",
"address" => "Avda. Mayor, 7",
"postCode" => "54321",
"town" => "Madrid",
"province" => "Madrid"
]));
$fac->addItem("Lámpara de pie", 20.14, 3, Facturae::TAX_IVA, 21);
$fac->sign("certificado.pfx", null, "passphrase");
$fac->export("mi-factura.xsig");
```
## Requisitos
- PHP 5.6 o superior
- OpenSSL (solo para firmar facturas)
- cURL (solo para *timestamping* y FACe / FACeB2B)
- libXML (solo para FACe y FACeB2B)
## Características
- Generación de facturas 100% conformes con la [Ley 25/2013 del 27 de diciembre](https://www.boe.es/diario_boe/txt.php?id=BOE-A-2013-13722)
- Exportación según las versiones de formato [3.2, 3.2.1 y 3.2.2](http://www.facturae.gob.es/formato/Paginas/version-3-2.aspx) de Facturae
- Firmado de acuerdo a la [política de firma de Facturae 3.1](http://www.facturae.gob.es/formato/Paginas/politicas-firma-electronica.aspx) basada en XAdES
- Sellado de tiempo según el [RFC3161](https://www.ietf.org/rfc/rfc3161.txt)
- Envío automatizado de facturas a **FACe y FACeB2B** 🔥
## Licencia
Facturae-PHP se encuentra bajo [licencia MIT](LICENSE). Eso implica que puedes utilizar este paquete en cualquier proyecto (incluso con fines comerciales), siempre y cuando hagas referencia al uso y autoría de la misma.
| 52.389831 | 321 | 0.71142 | spa_Latn | 0.68939 |
2dfda48f50715cafb4622694103364e3dc41fc62 | 6,736 | md | Markdown | src/markdown-pages/writings/research/dsa-design/designing-for-dsa.md | Arttuv/arttuv-website | c01903c572f7e6973ebf2ce4856a68f87299c159 | [
"MIT"
] | null | null | null | src/markdown-pages/writings/research/dsa-design/designing-for-dsa.md | Arttuv/arttuv-website | c01903c572f7e6973ebf2ce4856a68f87299c159 | [
"MIT"
] | 1 | 2017-06-16T21:09:32.000Z | 2017-06-16T21:23:52.000Z | src/markdown-pages/writings/research/dsa-design/designing-for-dsa.md | Arttuv/arttuv-website | c01903c572f7e6973ebf2ce4856a68f87299c159 | [
"MIT"
] | 1 | 2017-06-16T15:21:54.000Z | 2017-06-16T15:21:54.000Z | ---
path: "/writings/designing-to-support-dsa"
date: "2019-06-26"
title: "Designing to Support (Distributed) Situation Awareness"
tags: ["hci"]
summary: "Situation awareness and distributed situation awareness are interesting concepts, but how does it map to the real world? How can we as designers take the concept and use it to guide our designs, to support rather than hinder the situation awareness of the end users?"
featuredImage: "SA-and-DSA-design-processes-combined.png"
---
Both Situation Awareness and Distributed Situation Awareness are interesting concepts to consider when designing complex systems. Both have their own design methods, processes and measurement methods, which should help system designers to support high SA on end users. But the real question is, how do you use these models to improve design?
I’ve written about SA and DSA before:
[Situation Awareness](/writings/situation-awareness)
Endsley’s three level individual level SA model
[Distributed Situation Awareness](/writings/distributed-situation-awareness-dsa)
Stanton et al. system level DSA model
Endsley, Bolte and Jones have published a book named Designing for situation awareness: An approach to human-centered design on SA-oriented design (Endsley et al 2003). Salmon, Stanton, Walker and Jenkins have published their view on DSA-oriented design in a book Distributed Situation Awareness: Theory, Measurement and Application to Teamwork (Stanton et al 2009).
Both of these are briefly covered here. The figure below is a top level description combining both approaches: analysis-specification-build-measure-repeat -cycles.

## SA-Oriented Design
Endsley, Bolet and Jones divide SA-oriented design process to three main stages: requirements analysis, SA-oriented design principles and SA measurement. SA requirements analysis consists of goal directed task analysis (GDTA) . GDTA is interested in goals of the task, decisions the operator must make to achieve goals and information that is needed to make those decisions. Focus is on so called dynamic information requirements, which means information that changes dynamically during the task, opposed to general system knowledge. GDTA is conducted by interviewing experienced operators and by combining this data with knowledge from other sources, like written materials, documentation, protocols etc.. The GDTA results are then validated by large number of experienced operators.
There are also a number of design principles for SA oriented design listed in the book. Examples of these are “Organise information around goals”, “Present Level 2 information directly” and “Provide assistance for Level 3 SA projections”. Most of the principles concern GDTA results or are general recommendations relating to three level model of SA.
Design principles and GDTA results do not automatically lead to design solutions, which is why designs need to be objectively measured. There are many direct and indirect ways of measuring SA, but direct and objective measurements in conjunction with workload and performance evaluation is the most thorough approach. Situation awareness global assessment technique (SAGAT) is one often used method to measure SA. SAGAT requires the task under analysis to be halted at random periods and batteries of questions to be presented at operators. Answers are then evaluated by subject matter experts.
Whichever the SA measurement method is, the important part is to feed results back to design to improve solutions until SA requirements are satisfied.
## DSA-Oriented Design
DSA design process is quite similar to SA-oriented design process, but the tools and focus are different. DSA is also interested in the tasks operators are performing, but the method to analyze them is hierarchical task analysis (HTA), which uses data from observations, documentation, interviews with subject matter experts, training materials, etc.. HTA is used to identify the SA requirements, which should be further refined to build a propositional network describing the relations between information elements. Information elements will be further categorized (by using HTA, propositional network and subject matter experts) to compatible and transactive elements – meaning information that is used in different ways by different operators, and information that is passed between operators.
Results of the SA requirements analysis is used to create a design specification describing what information, to whom, in what format, for which need should be presented. Specification should link end user tasks to information elements to be presented, as well as what other information elements are likely to be used in conjunction. There are also 18 design guidelines that can be used to guide the design process (Salmon et al 2009, 218-223).
Finally, mock ups should be built based on the design specification, to test how well they support DSA. The nature of testing depends on what is to be tested – mock ups can be tested using a propositional network method, but operators use of prototypes can also be analyzed using critical decision method interviews. Findings from the test should be fed back to the design phase.
## Summary
Top level image at the beginning shows that SA- or DSA-oriented design processes both are based on a basic design-build-measure-repeat -cycle. The real value is in methods that are used to gather and describe requirements, and how SA or DSA is measured. It is important to note that prototyping is encouraged to avoid spending time implementing systems that are not going to fulfill the requirements – SA and DSA can (to some point) be measured using prototypes as well.
Concentrating on the tasks and information elements seems to be the key in designing to support situation awareness, no matter if you consider it on the individual or systems level. It would be interesting to analyse these more deeply to see how the results of these processes differ and does it show somehow in the final designs.
These design methods are also quite heavy and demand time, resources and skills. On the other hand, they are also to be used in situations which require special consideration on performance, security and so on.
## References
Endsley, M. R., Bolte, B., & Jones, D. G. (2003). Designing for situation awareness: An approach to human-centered design. London, UK: Taylor & Francis.
Salmon, P.M., N.A. Stanton, G.H. Walker, and D.P. Jenkins. 2009. Distributed Situation Awareness: Theory, Measurement and Application to Teamwork. Ashgate: Aldershot. | 122.472727 | 796 | 0.809382 | eng_Latn | 0.999498 |
2dfdb41c92bcaeb8c6a6f8b966fc9b10b102fae1 | 181 | md | Markdown | README.md | nilnor/textui | 2c9109dd4fc841ae117a1e3306cd5a88e9089547 | [
"MIT"
] | 1 | 2016-01-28T13:27:06.000Z | 2016-01-28T13:27:06.000Z | README.md | nilnor/textui | 2c9109dd4fc841ae117a1e3306cd5a88e9089547 | [
"MIT"
] | null | null | null | README.md | nilnor/textui | 2c9109dd4fc841ae117a1e3306cd5a88e9089547 | [
"MIT"
] | null | null | null | # Project has moved
The TextUI module has moved to a new location and maintainer. You can now
find it at:
[https://github.com/rgieseke/textui](https://github.com/rgieseke/textui)
| 25.857143 | 73 | 0.762431 | eng_Latn | 0.926095 |
2dfddb006f6fbef5e440828646f201ec1d3cb6bd | 1,029 | mkd | Markdown | _posts/Blog-Notes/Ubuntu-Notes/Network-Notes/Proxy-Notes/google/2016-11-28-we're-sorry-can't-use-google.mkd | Hang-Hu/Hang-Hu.github.io | 506f26a42d5a45e789efa312e2a297ccefafeb1c | [
"MIT"
] | null | null | null | _posts/Blog-Notes/Ubuntu-Notes/Network-Notes/Proxy-Notes/google/2016-11-28-we're-sorry-can't-use-google.mkd | Hang-Hu/Hang-Hu.github.io | 506f26a42d5a45e789efa312e2a297ccefafeb1c | [
"MIT"
] | null | null | null | _posts/Blog-Notes/Ubuntu-Notes/Network-Notes/Proxy-Notes/google/2016-11-28-we're-sorry-can't-use-google.mkd | Hang-Hu/Hang-Hu.github.io | 506f26a42d5a45e789efa312e2a297ccefafeb1c | [
"MIT"
] | null | null | null | ---
layout: post
author: Hang Hu
categories: proxy
tags: Ubuntu Proxy
cover:
---
After grade to the nearest stable version, I can't use google today.
## Solution
Change from google hk to https://www.google.com/# newwindow=1&safe=strict&q=%s
And I get capcha popup, everything goes ok.
Here is the website of capcha:
https://ipv4.google.com/sorry/index?continue=https://www.google.com/search%3Fsclient%3Dpsy-ab%26newwindow%3D1%26safe%3Dstrict%26biw%3D1536%26bih%3D738%26q%3Dgoogle%2B%25E9%25AA%258C%25E8%25AF%2581%25E7%25A0%2581%26oq%3Dgoogle%2B%25E9%25AA%258C%25E8%25AF%2581%25E7%25A0%2581%26gs_l%3Dhp.3..0j0i12k1l3.6650.11988.1.12608.18.16.1.0.0.0.508.5321.2-1j13j0j1.15.0....0...1c.1j4.64.psy-ab..11.5.1426.0..0i67k1j0i20k1.Sm7owGyNHlM%26pbx%3D1%26bav%3Don.2,or.r_cp.%26bvm%3Dbv.135258522,d.amc%26ech%3D1%26psi%3DwBf7V7_FLqnZjwSolo_QBA.1476073418860.5%26ei%3D0hf7V7_UAoLWjwTN4YeABA%26emsg%3DNCSR%26noj%3D1&q=CGMSBGuyw9MY36_svwUiGQDxp4NLbq94tHUhbSid8_LfQw6TFEvqric
www.google.com/sg (Singapore) is better for connection
| 49 | 649 | 0.789116 | yue_Hant | 0.224391 |
2dfe3fc33158142bef267e0f45a5e850fb9504ee | 218 | md | Markdown | spring-boot-modules/spring-boot-keycloak/README.md | flyingcanopy/tutorials | 455ef5198f501a62b66de34dbb1f05e4125f185b | [
"MIT"
] | 4 | 2020-04-09T21:03:25.000Z | 2022-01-07T21:27:06.000Z | spring-boot-modules/spring-boot-keycloak/README.md | flyingcanopy/tutorials | 455ef5198f501a62b66de34dbb1f05e4125f185b | [
"MIT"
] | 1 | 2022-01-06T22:25:08.000Z | 2022-01-06T22:25:08.000Z | spring-boot-modules/spring-boot-keycloak/README.md | flyingcanopy/tutorials | 455ef5198f501a62b66de34dbb1f05e4125f185b | [
"MIT"
] | 6 | 2020-05-22T12:04:11.000Z | 2021-09-28T09:56:30.000Z | ## Spring Boot Keycloak
This module contains articles about Keycloak in Spring Boot projects.
## Relevant articles:
- [A Quick Guide to Using Keycloak with Spring Boot](https://www.baeldung.com/spring-boot-keycloak)
| 31.142857 | 99 | 0.779817 | eng_Latn | 0.42346 |
2dfe577586eecced08a66e419b89bb3d9e3dfb99 | 690 | md | Markdown | dmu0/dmu0_PanSTARRS/readme.md | raphaelshirley/lsst-ir-fusion | 1eac69a6d24db437f615e464f895f5dd33ba259b | [
"Apache-2.0"
] | null | null | null | dmu0/dmu0_PanSTARRS/readme.md | raphaelshirley/lsst-ir-fusion | 1eac69a6d24db437f615e464f895f5dd33ba259b | [
"Apache-2.0"
] | null | null | null | dmu0/dmu0_PanSTARRS/readme.md | raphaelshirley/lsst-ir-fusion | 1eac69a6d24db437f615e464f895f5dd33ba259b | [
"Apache-2.0"
] | null | null | null | # PanSTARRS catalogue for photometric and astrometric calibration
This is a reduced PanSTARRS catalogue in the LSST format to be used by the LSST photometric stack for calibrating raw exposure images. It has been cut at 19th magnitude to reduce its size. We may wish to use teh full catalogue for final processing.
This is currently being used for testing. There is a fits file for each 'shard'.
See the discussion here:
https://community.lsst.org/t/pan-starrs-reference-catalog-in-lsst-format/1572
For testing we used the light version cut at 19 mag in r made by James Mullaney. The full version can be downlaoded here:
http://tigress-web.princeton.edu/~pprice/ps1_pv3_3pi_20170110/
| 49.285714 | 248 | 0.798551 | eng_Latn | 0.992537 |
2dfe7fa1ad0b36594a5f2175f68b57c52e2f3314 | 23 | md | Markdown | README.md | tcarmelveilleux/leonbot | 30087d530cda6454d92f94b8b1391e8e3a93da30 | [
"MIT"
] | null | null | null | README.md | tcarmelveilleux/leonbot | 30087d530cda6454d92f94b8b1391e8e3a93da30 | [
"MIT"
] | null | null | null | README.md | tcarmelveilleux/leonbot | 30087d530cda6454d92f94b8b1391e8e3a93da30 | [
"MIT"
] | null | null | null | # leonbot
Leon's Robot
| 7.666667 | 12 | 0.73913 | deu_Latn | 0.148966 |
2dfefd974904c0c4c26684e06261777ac69dc94c | 1,008 | md | Markdown | README.md | nateshmbhat/grpc-dart | 8b1dd0154e98ec4171090aa098864bd32865b8af | [
"Apache-2.0"
] | 1 | 2020-08-18T11:09:23.000Z | 2020-08-18T11:09:23.000Z | README.md | nateshmbhat/grpc-dart | 8b1dd0154e98ec4171090aa098864bd32865b8af | [
"Apache-2.0"
] | null | null | null | README.md | nateshmbhat/grpc-dart | 8b1dd0154e98ec4171090aa098864bd32865b8af | [
"Apache-2.0"
] | 1 | 2020-08-18T11:09:25.000Z | 2020-08-18T11:09:25.000Z | The [Dart](https://www.dart.dev/) implementation of
[gRPC](https://grpc.io/): A high performance, open source, general RPC framework that puts mobile and HTTP/2 first.
[](https://travis-ci.org/grpc/grpc-dart)
[](https://pub.dev/packages/grpc)
# Usage
See the [Dart gRPC Quick Start](https://grpc.io/docs/quickstart/dart).
[grpc-web](https://github.com/grpc/grpc-web) in a browser context is supported by
`package:grpc/grpc_web.dart`.
# Status
If you experience issues, or if you have feature requests,
please [open an issue](https://github.com/dart-lang/grpc-dart/issues).
Note that we have limited bandwidth to accept PRs, and that all PRs will require signing the [EasyCLA](https://lfcla.com).
# Notes
This library requires Dart SDK version 2.2.0 or later.
It currently supports the [Flutter](https://flutter.dev) and
[Dart native](https://dart.dev/platforms) platforms.
| 36 | 122 | 0.741071 | eng_Latn | 0.601114 |
2dff378697928db21fe613bf21571929d8643595 | 1,492 | md | Markdown | TIL/Network/cURL.md | yomandawg/TIL | 7bd244fcb984959a85bcae92a2e12fe31f3f17ec | [
"MIT"
] | 1 | 2020-03-22T13:02:11.000Z | 2020-03-22T13:02:11.000Z | dist/TIL/Network/cURL.md | yomandawg/TIL | 7bd244fcb984959a85bcae92a2e12fe31f3f17ec | [
"MIT"
] | 4 | 2021-08-31T20:45:43.000Z | 2022-02-27T10:21:43.000Z | dist/TIL/Network/cURL.md | yomandawg/TIL | 7bd244fcb984959a85bcae92a2e12fe31f3f17ec | [
"MIT"
] | null | null | null | # cURL
> *c*lient *U*niform *R*esource *L*ocator
>
> 다양한 통신 프로토콜을 이용하여 데이터를 전송하기 위한 library 및 _CLI_(command-line interface) software
- [github source code](https://github.com/curl/curl)
- [tutorial](https://curl.haxx.se/docs/manual.html)
```bash
# Linux
$ sudo apt-get install curl
```
## 특징
- 무료, 오픈소스 software
- URL syntax를 사용해 다양한 프로토콜 요청을 가능케 한다.
- FTP, HTTP(HTTPS), RTSP, LDAP, TELNET 등
- http 메서드(GET, POST, PUT, DELETE 등)를 cmdline에서 url과 함께 전송할 수 있기 때문에 웹개발에 유용
- ex) `curl -X GET https://curl.haxx.se` - 해당 url에 GET request를 보낸다
## 문법
- **curl** - URL을 보낸다
- `curl [options / URL]`
- **options**
- -# (progress) : 상태 바 표시
- -o (output) : output 파일 write
- `curl -o localpage.html http://www.netscape.com/`
- -i (include) : http 헤더 표시
- -v (verbose): request 및 response를 주고받는 과정을 표시; 추가 정보 표시
- -s : 진행 메세지 표시 없애기
- -u : http에 user 및 password 전송
- `curl -u name:passwd http://machine.domain/full/path/to/file`
- -k (insecure) : SSL 강제 허가
- **-X** (request)
- curl에 request를 함께 전송할 때
- `-X POST` / `-X PUT`
- **-H** (header)
- curl에 header를 추가할 때
- `-H "Content-Type: application/json"`
- **-d** (data)
- curl에 데이터를 포함할 때
- `-d '{"key: "value"}` / `-d @data.json`
## 예제
1. `curl -d "param1=value1¶m2=value2" -X POST http://localhost:3000/data`
2. `curl -d "@data.txt" -X POST http://localhost:3000/data`
3. `curl -d '{"key1":"value1", "key2":"value2"}' -H "Content-Type: application/json" -X POST http://localhost:3000/data`
| 24.866667 | 120 | 0.619303 | kor_Hang | 0.997667 |
2dff753d18ad8463e968c5a5c12fb11f7f446855 | 1,128 | md | Markdown | _articles-en/how_to_invite_an_employee.md | debaship/multilingual-switcher | a72987a63ca2769b0e8e8ceb2780f7d25d3ca266 | [
"MIT"
] | 1 | 2019-09-21T05:35:28.000Z | 2019-09-21T05:35:28.000Z | _articles-en/how_to_invite_an_employee.md | tourhunter-com/help | aabc8f96dcd3ece0aaf504e0926aeaf9929460b3 | [
"MIT"
] | 18 | 2019-11-11T06:44:41.000Z | 2022-02-26T05:46:46.000Z | _articles-en/how_to_invite_an_employee.md | tourhunter-com/help | aabc8f96dcd3ece0aaf504e0926aeaf9929460b3 | [
"MIT"
] | 25 | 2019-10-07T07:21:33.000Z | 2022-02-23T09:41:48.000Z | ---
title: How to invite an employee?
layout: article
excerpt: Part of a post
categories:
- getting-started
- employees
subcategories:
getting-started: employees
tags:
- employee-settings
lang: en
permalink: "/en/:name/"
ref: how-to-invite-an-employee
cat: some
---
### **Step 1**
Go to "Manage > Employees".
### **Step 2**
Click on the "+ Add new employee" button.

### **Step 3**
Fill the required fields "Name" and "E-mail" in the appeared window.

E-mail of a new user is checked with data in "Personal Info". Users that already exist can not be invited twice.
If there is no match, then after click on "+ Create Password" two fields "Password" and "Confirm password" appears. So new user can get a password.
### **Step 4**
Select a role in the dropdown or click on "+ New Role" to add new role for employee who will be invited.
By default in drop-down "Role" displayed value "Please select".
### **Step 5**
Click on "Add" button to invite an employee. | 24.521739 | 147 | 0.726064 | eng_Latn | 0.950494 |
2dffccd4a21855be2f4a35ae160ec5e1490d5ecd | 2,295 | md | Markdown | docs/intrinsics/incgsbyte-incgsword-incgsdword-incgsqword.md | anmrdz/cpp-docs.es-es | f3eff4dbb06be3444820c2e57b8ba31616b5ff60 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/intrinsics/incgsbyte-incgsword-incgsdword-incgsqword.md | anmrdz/cpp-docs.es-es | f3eff4dbb06be3444820c2e57b8ba31616b5ff60 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/intrinsics/incgsbyte-incgsword-incgsdword-incgsqword.md | anmrdz/cpp-docs.es-es | f3eff4dbb06be3444820c2e57b8ba31616b5ff60 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: __incgsbyte, __incgsword, __incgsdword, __incgsqword | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-tools
ms.topic: reference
f1_keywords:
- __incgsdword
- __incgsqword_cpp
- __incgsword_cpp
- __incgsword
- __incgsbyte
- __incgsbyte_cpp
- __incgsqword
- __incgsdword_cpp
dev_langs:
- C++
helpviewer_keywords:
- __incgsbyte intrinsic
- __incgsword intrinsic
- __incgsqword intrinsic
- __incgsdword intrinsic
ms.assetid: 06bfdf4f-7643-4fe0-8455-60ce3068073e
author: corob-msft
ms.author: corob
ms.workload:
- cplusplus
ms.openlocfilehash: 56cdce805e2048cff22007a89da42c736dd14fd4
ms.sourcegitcommit: 799f9b976623a375203ad8b2ad5147bd6a2212f0
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 09/19/2018
ms.locfileid: "46373295"
---
# <a name="incgsbyte-incgsword-incgsdword-incgsqword"></a>__incgsbyte, __incgsword, __incgsdword, __incgsqword
**Específicos de Microsoft**
Agregue uno para el valor en una ubicación de memoria especificada por un desplazamiento relativo al principio de la `GS` segmento.
## <a name="syntax"></a>Sintaxis
```
void __incgsbyte(
unsigned long Offset
);
void __incgsword(
unsigned long Offset
);
void __incgsdword(
unsigned long Offset
);
void __incgsqword(
unsigned long Offset
);
```
#### <a name="parameters"></a>Parámetros
*Desplazamiento*<br/>
[in] El desplazamiento desde el principio del `GS`.
## <a name="requirements"></a>Requisitos
|Función intrínseca|Arquitectura|
|---------------|------------------|
|`__incgsbyte`|x64|
|`__incgsword`|x64|
|`__incgsdword`|x64|
|`__incgsqword`|x64|
## <a name="remarks"></a>Comentarios
Estas funciones intrínsecas solo están disponibles en modo kernel, y las rutinas solo están disponibles como intrínsecos.
**FIN de Específicos de Microsoft**
## <a name="see-also"></a>Vea también
[__addgsbyte, \__addgsword, \__addgsdword, \__addgsqword](../intrinsics/addgsbyte-addgsword-addgsdword-addgsqword.md)<br/>
[__readgsbyte, \__readgsdword, \__readgsqword, \__readgsword](../intrinsics/readgsbyte-readgsdword-readgsqword-readgsword.md)<br/>
[__writegsbyte, \__writegsdword, \__writegsqword, \__writegsword](../intrinsics/writegsbyte-writegsdword-writegsqword-writegsword.md)<br/>
[Intrínsecos del controlador](../intrinsics/compiler-intrinsics.md) | 27.321429 | 138 | 0.763399 | spa_Latn | 0.173509 |
2dffdaf10dbf78e3ca5303e477a93b8c7d5be68c | 11,949 | md | Markdown | articles/virtual-machines/windows/run-command-managed.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:21.000Z | 2021-03-12T23:37:21.000Z | articles/virtual-machines/windows/run-command-managed.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/windows/run-command-managed.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Ausführen von Skripts auf einer Windows-VM in Azure mithilfe von verwalteter Skriptausführung (Vorschau)
description: In diesem Thema wird beschrieben, wie Skripts auf einem virtuellen Azure Windows-Computer mithilfe des aktualisierten Features „Skriptausführung“ ausgeführt werden.
services: automation
ms.service: virtual-machines
ms.collection: windows
author: cynthn
ms.author: cynthn
ms.date: 10/28/2021
ms.topic: how-to
ms.reviewer: jushiman
ms.custom: devx-track-azurepowershell
ms.openlocfilehash: 7a1617fea3ea7de71a040852f4906ecffe1b0432
ms.sourcegitcommit: e41827d894a4aa12cbff62c51393dfc236297e10
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/04/2021
ms.locfileid: "131557887"
---
# <a name="preview-run-scripts-in-your-windows-vm-by-using-managed-run-commands"></a>Vorschau: Ausführen von Skripts auf Ihrer Windows-VM mithilfe verwalteter Skriptausführung
**Gilt für:** :heavy_check_mark: Windows-VMs :heavy_check_mark: Flexible Skalierungsgruppen
> [!IMPORTANT]
> **Verwaltete Skriptausführung** ist aktuell als öffentliche Vorschauversion verfügbar.
> Diese Vorschauversion wird ohne Vereinbarung zum Servicelevel bereitgestellt und ist nicht für Produktionsworkloads vorgesehen. Manche Features werden möglicherweise nicht unterstützt oder sind nur eingeschränkt verwendbar. Weitere Informationen finden Sie unter [Zusätzliche Nutzungsbestimmungen für Microsoft Azure-Vorschauen](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Das Feature „Skriptausführung“ verwendet den VM-Agent, um Skripts auf einer Azure Windows-VM auszuführen. Diese Skripts können für die allgemeine Computer- oder Anwendungsverwaltung verwendet werden. Mit ihrer Hilfe können Sie VM-Zugriffs- und -Netzwerkprobleme schnell diagnostizieren und beheben und die VM wieder in einen funktionierenden Zustand versetzen.
Die *aktualisierte* verwaltete Skriptausführung verwendet den gleichen VM-Agent-Kanal zum Ausführen von Skripts und bietet die folgenden Verbesserungen im Vergleich zur [ursprünglichen aktionsorientierten Skriptausführung](run-command.md):
- Unterstützung für aktualisierte Skriptausführung über eine ARM-Bereitstellungsvorlage
- Parallele Ausführung mehrerer Skripts
- Sequenzielle Ausführung von Skripts
- RunCommand-Skript kann abgebrochen werden
- Vom Benutzer angegebenes Skripttimeout
- Unterstützung für zeitintensive Skripts (Stunden/Tage)
- Sicheres Übergeben von Geheimnissen (Parameter, Kennwörter)
## <a name="register-for-preview"></a>Registrieren für die Vorschau
Sie müssen Ihr Abonnement registrieren, um verwaltete Skriptausführung während der öffentlichen Vorschauphase verwenden zu können. Wechseln Sie zu [Einrichten von Previewfunktionen im Azure-Abonnement](../../azure-resource-manager/management/preview-features.md), um Registrierungsanweisungen zu erhalten, und verwenden Sie den Featurenamen `RunCommandPreview`.
## <a name="azure-cli"></a>Azure CLI
Im folgenden Beispiel wird der Befehl [az vm run-command](/cli/azure/vm/run-command) verwendet, um ein Shellskript auf einer Azure Windows-VM auszuführen.
### <a name="execute-a-script-with-the-vm"></a>Ausführen eines Skripts mit der VM
Dieser Befehl übermittelt das Skript an die VM, führt es aus und gibt die erfasste Ausgabe zurück.
```azurecli-interactive
az vm run-command create --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --script "Write-Host Hello World!"
```
### <a name="list-all-deployed-runcommand-resources-on-a-vm"></a>Auflisten aller bereitgestellten RunCommand-Ressourcen auf einer VM
Dieser Befehl gibt eine vollständige Liste der zuvor bereitgestellten Skriptausführungen zusammen mit ihren Eigenschaften zurück.
```azurecli-interactive
az vm run-command list --name "myVM" --resource-group "myRG"
```
### <a name="get-execution-status-and-results"></a>Abrufen des Ausführungsstatus und der Ergebnisse
Dieser Befehl ruft den aktuellen Ausführungsstatus ab, z. B. die aktuelle Ausgabe, die Start-/Endzeit, den Exitcode und den Endzustand der Ausführung.
```azurecli-interactive
az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" –expand
```
### <a name="delete-runcommand-resource-from-the-vm"></a>Löschen der RunCommand-Ressource von der VM
Entfernen Sie die zuvor auf der VM bereitgestellte RunCommand-Ressource. Wenn die Skriptausführung aktuell noch stattfindet, wird die Ausführung beendet.
```azurecli-interactive
az vm run-command delete --name "myRunCommand" --vm-name "myVM" --resource-group "myRG"
```
## <a name="powershell"></a>PowerShell
### <a name="execute-a-script-with-the-vm"></a>Ausführen eines Skripts mit der VM
Dieser Befehl übermittelt das Skript an die VM, führt es aus und gibt die erfasste Ausgabe zurück.
```powershell-interactive
Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" – Script "Write-Host Hello World!"
```
### <a name="list-all-deployed-runcommand-resources-on-a-vm"></a>Auflisten aller bereitgestellten RunCommand-Ressourcen auf einer VM
Dieser Befehl gibt eine vollständige Liste der zuvor bereitgestellten Skriptausführungen zusammen mit ihren Eigenschaften zurück.
```powershell-interactive
Get-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM"
```
### <a name="get-execution-status-and-results"></a>Abrufen des Ausführungsstatus und der Ergebnisse
Dieser Befehl ruft den aktuellen Ausführungsstatus ab, z. B. die aktuelle Ausgabe, die Start-/Endzeit, den Exitcode und den Endzustand der Ausführung.
```powershell-interactive
Get-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" -Status
```
### <a name="delete-runcommand-resource-from-the-vm"></a>Löschen der RunCommand-Ressource von der VM
Entfernen Sie die zuvor auf der VM bereitgestellte RunCommand-Ressource. Wenn die Skriptausführung aktuell noch stattfindet, wird die Ausführung beendet.
```powershell-interactive
Remove-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName"
```
## <a name="rest-api"></a>REST-API
Um eine neue Skriptausführung bereitzustellen, führen Sie einen PUT-Befehl direkt auf der VM aus, und geben Sie einen eindeutigen Namen für die Skriptausführungsinstanz an.
```rest
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands/<runCommandName>?api-version=2019-12-01
```
```json
{
"location": "<location>",
"properties": {
"source": {
"script": "Write-Host Hello World!",
"scriptUri": "<URI>",
"commandId": "<Id>"
},
"parameters": [
{
"name": "param1",
"value": "value1"
},
{
"name": "param2",
"value": "value2"
}
],
"protectedParameters": [
{
"name": "secret1",
"value": "value1"
},
{
"name": "secret2",
"value": "value2"
}
],
"runAsUser": "userName",
"runAsPassword": "userPassword",
"timeoutInSeconds": 3600,
"outputBlobUri": "<URI>",
"errorBlobUri": "<URI>"
}
}
```
### <a name="notes"></a>Notizen
- Sie können ein Inlineskript, einen Skript-URI oder eine integrierte [Skriptbefehls-ID](run-command.md#available-commands) als Eingabequelle angeben.
- Für eine Befehlsausführung wird nur ein Typ von Quelleingabe unterstützt.
- Die Skriptausführung unterstützt die Ausgabe in Storage-Blobs, die zum Speichern großer Skriptausgaben verwendet werden können.
- Die Skriptausführung unterstützt Fehlerausgabe an Storage-Blobs.
### <a name="list-running-instances-of-run-command-on-a-vm"></a>Auflisten ausgeführter Instanzen der Skriptausführung auf einer VM
```rest
GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands?api-version=2019-12-01
```
### <a name="get-output-details-for-a-specific-run-command-deployment"></a>Abrufen von Ausgabedetails für eine bestimmte Skriptausführungsbereitstellung
```rest
GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands/<runCommandName>?$expand=instanceView&api-version=2019-12-01
```
### <a name="cancel-a-specific-run-command-deployment"></a>Abbrechen einer bestimmten Skriptausführungsbereitstellung
Um eine ausgeführte Bereitstellung abzubrechen, können Sie PUT oder PATCH für die ausgeführte Instanz der Skriptausführung ausführen und ein leeres Skript im Anforderungstext angeben. Dadurch wird die laufende Ausführung abgebrochen.
Sie können auch die Instanz der Skriptausführung löschen.
```rest
DELETE /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands/<runCommandName>?api-version=2019-12-01
```
### <a name="deploy-scripts-in-an-ordered-sequence"></a>Bereitstellen von Skripts in einer geordneten Reihenfolge
Um Skripts sequenziell bereitzustellen, verwenden Sie eine Bereitstellungsvorlage, die eine `dependsOn`-Beziehung zwischen sequenziellen Skripts angibt.
```json
{
"type": "Microsoft.Compute/virtualMachines/runCommands",
"name": "secondRunCommand",
"apiVersion": "2019-12-01",
"location": "[parameters('location')]",
"dependsOn": <full resourceID of the previous other Run Command>,
"properties": {
"source": {
"script": "Write-Host Hello World!"
},
"timeoutInSeconds": 60
}
}
```
### <a name="execute-multiple-run-commands-sequentially"></a>Execute multiple Run Commands sequentially
By default, if you deploy multiple RunCommand resources by using a deployment template, they are executed simultaneously on the VM. If the scripts have a dependency and a preferred order of execution, you can use the `dependsOn` property to make them run sequentially.
In this example, **secondRunCommand** executes after **firstRunCommand**.
```json
{
"$schema":"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion":"1.0.0.0",
"resources":[
{
"type":"Microsoft.Compute/virtualMachines/runCommands",
"name":"[concat(parameters('vmName'),'/firstRunCommand')]",
"apiVersion":"2019-12-01",
"location":"[parameters('location')]",
"dependsOn":[
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
],
"properties":{
"source":{
"script":"Write-Host First: Hello World!"
},
"parameters":[
{
"name":"param1",
"value":"value1"
},
{
"name":"param2",
"value":"value2"
}
],
"timeoutInSeconds":20
}
},
{
"type":"Microsoft.Compute/virtualMachines/runCommands",
"name":"[concat(parameters('vmName'),'/secondRunCommand')]",
"apiVersion":"2019-12-01",
"location":"[parameters('location')]",
"dependsOn":[
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'),'runcommands/firstRunCommand')]"
],
"properties":{
"source":{
"scriptUrl":"http://github.com/myscript.ps1"
},
"timeoutInSeconds":60
}
}
]
}
```
## <a name="next-steps"></a>Next steps
To learn about other ways to run scripts and commands remotely on your VM, see [Run scripts in your Windows VM](run-scripts-in-vm.md).
| 45.43346 | 402 | 0.7255 | deu_Latn | 0.95244 |
9300a304530ffec42230b026e3996a462dedb08a | 360 | md | Markdown | catalog/yarukinashi-eiyuutan/en-US_yarukinashi-eiyuutan.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/yarukinashi-eiyuutan/en-US_yarukinashi-eiyuutan.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/yarukinashi-eiyuutan/en-US_yarukinashi-eiyuutan.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Yarukinashi Eiyuutan

- **type**: light-novel
- **original-name**: やる気なし英雄譚
- **start-date**: 2014-06-25
## Tags
- fantasy
## Authors
- MID (Art)
- Tsuda
- Houkou (Story)
## Links
- [My Anime list](https://myanimelist.net/manga/73985/Yarukinashi_Eiyuutan)
| 16.363636 | 78 | 0.658333 | kor_Hang | 0.094969 |
930268c81a6b1a44950b04b50ae47cfbe94ddbd2 | 2,044 | md | Markdown | src/_content/articles/2007-05-03-une-ou-deux-bulles.md | alienlebarge/alienlebargech-v3 | 4c6756fe61bcd44b1d0ccd0607ecb5a3030ced88 | [
"MIT"
] | 1 | 2021-09-20T12:34:05.000Z | 2021-09-20T12:34:05.000Z | src/_content/articles/2007-05-03-une-ou-deux-bulles.md | alienlebarge/alienlebargech-v3 | 4c6756fe61bcd44b1d0ccd0607ecb5a3030ced88 | [
"MIT"
] | 63 | 2019-04-12T13:25:00.000Z | 2022-03-04T11:35:05.000Z | src/_content/articles/2007-05-03-une-ou-deux-bulles.md | alienlebarge/alienlebargech-v3 | 4c6756fe61bcd44b1d0ccd0607ecb5a3030ced88 | [
"MIT"
] | 2 | 2020-01-06T19:21:43.000Z | 2022-03-17T01:11:52.000Z | ---
date: 2007-05-03
title: One or two bubbles?
categories:
- Project
tags:
- Design
- Digitapéro
- Graphics
- Logo
- Project
status: publish
published: true
meta:
_utw_tags_0: a:3:{i:0;O:8:"stdClass":1:{s:3:"tag";s:6:"Design";}i:1;O:8:"stdClass":1:{s:3:"tag";s:12:"digit.apéro";}i:2;O:8:"stdClass":1:{s:3:"tag";s:6:"Projet";}}
_edit_last: '1'
tweetbackscheck: '1234375556'
shorturls: a:7:{s:9:"permalink";s:57:"https://www.alienlebarge.ch/2007/05/03/une-ou-deux-bulles/";s:7:"tinyurl";s:25:"https://tinyurl.com/d3jhq6";s:4:"isgd";s:17:"https://is.gd/iAjq";s:5:"bitly";s:18:"https://bit.ly/eOd5";s:5:"snipr";s:22:"https://snipr.com/behds";s:5:"snurl";s:22:"https://snurl.com/behds";s:7:"snipurl";s:24:"https://snipurl.com/behds";}
twittercomments: a:0:{}
tweetcount: '0'
tmac_last_id: ''
---
If you have been following the logo saga, this article will feel familiar.
The previous idea was nice, but once scaled down it no longer works at all. There is too much detail for it to be striking.
<img src="https://dlgjp9x71cipk.cloudfront.net/2007/05/logopasbeau.JPG" alt="The scaled-down logo" />
So it had to be simplified as much as possible. The two main elements, the bubbles and the little beer mugs, were interesting, which is why I tried to keep them: the bubbles represent dialogue and communication, and the mug is the trigger of that very conversation.
<!--more-->
The result is not bad. One might just wonder whether the message is now that beer is the topic of conversation. If that is the case, no harm done: beer is one of the site's main subjects. Indeed, the site should eventually offer a beer-collection management module.
Now a new question arises: the one-bubble version or the two-bubble version?
<a href="https://dlgjp9x71cipk.cloudfront.net/2007/05/nouveaulogo.png" title="The new logo"><img src="https://dlgjp9x71cipk.cloudfront.net/2007/05/nouveaulogo.png" alt="The new logo" height="404" width="501" /></a>
| 53.789474 | 358 | 0.726517 | fra_Latn | 0.899954 |
93038bfd54b7ab138602719e5da97b61be06d16c | 1,106 | md | Markdown | TODO.md | AverageDemo/Pesticide-Legacy | 25566640c8b4626bf290174a263aa16432d024a9 | [
"MIT"
] | null | null | null | TODO.md | AverageDemo/Pesticide-Legacy | 25566640c8b4626bf290174a263aa16432d024a9 | [
"MIT"
] | null | null | null | TODO.md | AverageDemo/Pesticide-Legacy | 25566640c8b4626bf290174a263aa16432d024a9 | [
"MIT"
] | null | null | null | # Pesticide
Some things I have to fix / add:
ADD:
Dev-groups based on issue project<br />
2FA<br />
OAuth<br />
GitHub / SVN / Perforce Integration<br />
Administrator panel and settings<br />
Ability to customize site title and various other things<br />
Search features<br />
A page where you can view ALL issues (for Developers+), not just recent issues<br />
Forgot Password<br />
Protected routes (via Router, not component)<br />
Admin panel design & code clean-up<br />
FIX:
The re-rendering and pre-fetching of data
- This will fix old issues showing for a moment on a new page as well as the loading of the home page - This will likely be done by querying the store for a specific article rather than the database. As for the home page there will be a proper loading _thing_ implemented
Fix the way recently closed issues gets populated
Touch up the category dropdown on issue submission
The fact that I didn't even use async / await because lazy. Don't cringe, this will be fixed.
Notes:
The admin panel was rushed in because I needed the functionality. This will be cleaned up a lot
| 32.529412 | 273 | 0.75859 | eng_Latn | 0.99914 |
9303f35ea73e354b2d6626e18abe6079a77ab123 | 1,385 | md | Markdown | README.md | meepen/ror2-modloader | 83f94742cac21b163b1e6b9f626f151ccc21e367 | [
"MIT"
] | 26 | 2019-04-04T14:48:11.000Z | 2021-08-21T13:21:31.000Z | README.md | meepen/ror2-modloader | 83f94742cac21b163b1e6b9f626f151ccc21e367 | [
"MIT"
] | 4 | 2019-04-06T17:17:54.000Z | 2019-04-10T08:54:47.000Z | README.md | meepen/ror2-modloader | 83f94742cac21b163b1e6b9f626f151ccc21e367 | [
"MIT"
] | 2 | 2019-04-06T11:36:43.000Z | 2019-05-10T22:23:04.000Z | # ror2-modloader
#### A way to dynamically load mods in Risk of Rain 2
## What mods are there for this?
- [MaxPlayers mod](https://github.com/meepen/ror2-maxplayers-mod) by Meepen
- [Huntress 270 Degrees Run mod](https://github.com/meepen/ror2-huntress-sprint-mod) edited to work by Meepen
- Want to add your own mod to this list? Make a pull request [here](https://github.com/meepen/ror2-modloader/pulls)!
## How do I install this?
Download the latest release and follow the instructions in `HOW TO INSTALL.txt`
To install mods after installing, just place them in your `Mods` folder in `Risk of Rain 2`!
## Why does this include a mscorlib.dll?
This is required to provide Reflection.Emit, as the .NET Standard shipped with the game does not support it. These are taken from Unity's Editor for 4.5. They can be found in `net4.5` and must be put in the `Managed` folder if you build this yourself.
## How do I build this?
### Prerequisites
- [premake5](https://github.com/premake/premake-core/releases)
- [this modloader](https://github.com/meepen/ror2-modloader)
- A compiler for c++ and c#
### Building
- Run `premake5 <compiler>` (for example with Visual Studio 2017 `premake5 vs2017`)
- Build the project (vs2017 project is in project/ror2-modloader.sln)
### Release
- Run `readyrelease.exe` in the main folder
- This should make a `release.zip` for you to distribute. | 38.472222 | 251 | 0.740794 | eng_Latn | 0.983071 |
930422aab0bcc2c1c2d690c1ff9695a7574c4cd5 | 464 | md | Markdown | project/docs/Event_type.md | TIBHannover/ConfIDent_schema | 6fe50c88a2eda852167a50d45f649d04e6a46244 | [
"MIT"
] | 1 | 2022-03-22T17:39:27.000Z | 2022-03-22T17:39:27.000Z | project/docs/Event_type.md | TIBHannover/ConfIDent_schema | 6fe50c88a2eda852167a50d45f649d04e6a46244 | [
"MIT"
] | null | null | null | project/docs/Event_type.md | TIBHannover/ConfIDent_schema | 6fe50c88a2eda852167a50d45f649d04e6a46244 | [
"MIT"
] | null | null | null |
# Slot: type
A property to provide the format of an academic event according to the possible values of the [Event Type](EventType.md) enum.
URI: [confident:Event_type](https://raw.githubusercontent.com/TIBHannover/ConfIDent_schema/main/src/linkml/confident_schema.yaml#Event_type)
## Domain and Range
[Event](Event.md) → <sub>0..1</sub> [EventType](EventType.md)
## Parents
* is_a: [type](type.md)
## Children
## Used by
* [Event](Event.md)
| 19.333333 | 140 | 0.719828 | eng_Latn | 0.59723 |
9304826ac45992bce09077cb53738744f56bcbe3 | 791 | md | Markdown | CHANGELOG.md | jksevend/side_navigation | ad3eb658224c66d6eea7ca6511bfc26729e84bcc | [
"MIT"
] | 9 | 2021-07-26T16:07:12.000Z | 2022-01-14T16:14:57.000Z | CHANGELOG.md | jksevend/side_navigation | ad3eb658224c66d6eea7ca6511bfc26729e84bcc | [
"MIT"
] | 13 | 2021-08-14T09:46:28.000Z | 2022-03-30T18:54:47.000Z | CHANGELOG.md | jksevend/side_navigation | ad3eb658224c66d6eea7ca6511bfc26729e84bcc | [
"MIT"
] | 3 | 2021-08-13T17:41:42.000Z | 2022-02-05T20:47:57.000Z | ## 0.0.1
* Release
## 0.0.2
* Added basic widget tests
* Images on tablet/phones added to README
## 0.0.3
* New functionality:
- The bar animates its width once it is expanded or shrunk
* Testing of functionality
## 0.0.4
* You can now choose the background color of the `SideNavigationBar`
## 0.0.5
* ``SideNavigationBarHeader`` and ``SideNavigationBarFooter`` is now available
* You can now specify whether ``SideNavigationBar`` should be ``initiallyExpanded`` or not
## 0.0.6
* Resolved ``flutter format`` issues
## 0.0.7
* ``SideNavigationBarHeader`` and ``SideNavigationBarFooter`` are now optional
* ``SideNavigationBarTheme`` is now available to customize various components
* To provide a way to listen to the state change of the bar a ``SideBarToggler`` is introduced. | 29.296296 | 95 | 0.735777 | eng_Latn | 0.962559 |
9305ff319523c8cf91b7430078790e2ff65e4204 | 5,948 | md | Markdown | README.md | davidkarlsen/openebs-charts | f9f4b1f6893f2525b4f8acd391f66b2a7cb56b0e | [
"Apache-2.0"
] | 59 | 2017-10-23T18:10:07.000Z | 2022-03-22T08:42:07.000Z | README.md | davidkarlsen/openebs-charts | f9f4b1f6893f2525b4f8acd391f66b2a7cb56b0e | [
"Apache-2.0"
] | 114 | 2019-04-05T15:16:13.000Z | 2022-03-08T06:46:06.000Z | README.md | davidkarlsen/openebs-charts | f9f4b1f6893f2525b4f8acd391f66b2a7cb56b0e | [
"Apache-2.0"
] | 129 | 2017-11-29T02:19:03.000Z | 2022-01-17T08:04:52.000Z | # OpenEBS Helm Chart and other artifacts
[](https://github.com/openebs/charts/actions)
[](https://opensource.org/licenses/Apache-2.0)
[](https://app.fossa.com/projects/git%2Bgithub.com%2Fopenebs%2Fcharts?ref=badge_shield)
[](https://kubernetes.slack.com/messages/openebs)
<img width="200" align="right" alt="OpenEBS Logo" src="https://raw.githubusercontent.com/cncf/artwork/HEAD/projects/openebs/stacked/color/openebs-stacked-color.png" xmlns="http://www.w3.org/1999/html">
This repository contains OpenEBS Helm charts and other example artifacts like openebs-operator.yaml or example YAMLs. The content in this repository is published using GitHub pages at https://openebs.github.io/charts/.
## OpenEBS Helm Chart
The helm chart is located under [./charts/openebs/](./charts/openebs/) directory.
OpenEBS helm chart is an umbrella chart that pulls together engine specific charts. The engine charts are included as dependencies in [Chart.yaml](charts/openebs/Chart.yaml).
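For example, a minimal sketch of installing the umbrella chart with Helm (the repo URL is the GitHub Pages location mentioned above; the release name and namespace are illustrative assumptions):
```
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace
```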
The OpenEBS helm chart includes common components that are used by multiple engines, such as:
- Node Disk Manager related components
- Dynamic LocalPV (hostpath and device) Provisioner related components
- Security Policies like RBAC, PSP, Kyverno
Engine charts included as dependencies are:
- [cStor](https://github.com/openebs/cstor-operators/tree/HEAD/deploy/helm/charts)
- [Jiva](https://github.com/openebs/jiva-operator/tree/HEAD/deploy/helm/charts)
- [ZFS Local PV](https://github.com/openebs/zfs-localpv/tree/HEAD/deploy/helm/charts)
- [LVM Local PV](https://github.com/openebs/lvm-localpv/tree/HEAD/deploy/helm/charts)
- [Dynamic NFS](https://github.com/openebs/dynamic-nfs-provisioner/tree/develop/deploy/helm/charts)
Some of the other charts that will be included in the upcoming releases are:
- [Rawfile Local PV](https://github.com/openebs/rawfile-localpv/tree/HEAD/deploy/charts/rawfile-csi)
- [Mayastor](https://github.com/openebs/mayastor/tree/develop/chart)
- [Dashboard](https://github.com/openebs/monitoring/tree/develop/deploy/charts/openebs-monitoring)
> **Note:** cStor and Jiva out-of-tree provisioners will be replaced by the respective CSI charts listed above. OpenEBS users are expected to install the cStor and Jiva CSI components and migrate the pools and volumes. The steps to migrate are available at: https://github.com/openebs/upgrade
### Releasing a new version
- Raise a PR with the required changes to the HEAD branch.
- Tag the [maintainers](./MAINTAINERS) for review
- Once changes are reviewed and merged, the changes are picked up by [Helm Chart releaser](https://github.com/helm/chart-releaser-action) GitHub Action. The chart releaser will:
- Upload the new version of the charts to the [GitHub releases](https://github.com/openebs/charts/releases).
- Update the helm repo index file and push to the [GitHub Pages branch](https://github.com/openebs/charts/tree/gh-pages).
## OpenEBS Artifacts
The artifacts are located in the [GitHub Pages(gh-pages) branch](https://github.com/openebs/charts/tree/gh-pages).
The files can be accessed either as GitHub raw file URLs or as hosted files. For example, the OpenEBS operator can be applied as follows:
- As github raw file URL:
```
kubectl apply -f https://raw.githubusercontent.com/openebs/charts/gh-pages/openebs-operator.yaml
```
- As hosted URL:
```
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```
This is a collection of YAMLs or scripts that help to perform some OpenEBS tasks like:
- YAML file to setup OpenEBS via kubectl.
- [OpenEBS Commons Operator](https://github.com/openebs/charts/blob/gh-pages/openebs-operator.yaml)
- [OpenEBS cStor](https://github.com/openebs/charts/blob/gh-pages/cstor-operator.yaml)
- [OpenEBS Jiva](https://github.com/openebs/charts/blob/gh-pages/jiva-operator.yaml)
- [OpenEBS Hostpath](https://github.com/openebs/charts/blob/gh-pages/hostpath-operator.yaml)
- [OpenEBS Hostpath and Device](https://github.com/openebs/charts/blob/gh-pages/openebs-operator-lite.yaml)
- [OpenEBS LVM Local PV](https://github.com/openebs/charts/blob/gh-pages/lvm-operator.yaml)
- [OpenEBS ZFS Local PV](https://github.com/openebs/charts/blob/gh-pages/zfs-operator.yaml)
- [OpenEBS NFS PV](https://github.com/openebs/charts/blob/gh-pages/nfs-operator.yaml)
- YAML file to install OpenEBS prerequisites on hosts using nsenter pods via kubectl.
- [Setup iSCSI on Ubuntu](https://github.com/openebs/charts/blob/gh-pages/openebs-ubuntu-setup.yaml)
- [Setup iSCSI on Amazon Linux](https://github.com/openebs/charts/blob/gh-pages/openebs-amazonlinux-setup.yaml)
- Scripts to push the OpenEBS container images to a custom registry for air-gapped environments.
- and more.
## Contributing
See [CONTRIBUTING.md](./CONTRIBUTING.md).
## Community, discussion, and support
You can reach the maintainers of this project at:
- [Kubernetes Slack](http://slack.k8s.io/) channels:
* [#openebs](https://kubernetes.slack.com/messages/openebs/)
* [#openebs-dev](https://kubernetes.slack.com/messages/openebs-dev/)
- [Mailing List](https://lists.cncf.io/g/cncf-openebs-users)
For more ways of getting involved with community, check our [community page](https://github.com/openebs/openebs/tree/HEAD/community).
### Code of conduct
Participation in the OpenEBS community is governed by the [CNCF Code of Conduct](./CODE-OF-CONDUCT.md).
## License
[](https://app.fossa.com/projects/git%2Bgithub.com%2Fopenebs%2Fcharts?ref=badge_large)
| 57.747573 | 287 | 0.766644 | eng_Latn | 0.6108 |
9306086b487453a2151c9fedbe7d5d9dfa51f30a | 14,238 | md | Markdown | UwpToIotHub.md | sandervandevelde/uwp-iot-device | 619b8f3b64f6e33e99a8b09cbccda7807cda139d | [
"MIT"
] | 2 | 2016-10-25T08:08:55.000Z | 2019-02-24T15:45:16.000Z | UwpToIotHub.md | sandervandevelde/uwp-iot-device | 619b8f3b64f6e33e99a8b09cbccda7807cda139d | [
"MIT"
] | null | null | null | UwpToIotHub.md | sandervandevelde/uwp-iot-device | 619b8f3b64f6e33e99a8b09cbccda7807cda139d | [
"MIT"
] | null | null | null | ## Connecting to an IoT Hub using a UWP app

This is an example integration between a UWP app and Azure IoT Hub. This integration shows features like creating devices in the Azure IoT Hub device registry as well as sending telemetry to the IoT Hub.
*Note: In this workshop, we will create uniquely named Azure resources. The suggested names could be reserved already.*
*Note: The IoT Hub also offers the ability of sending commands back to devices. This is not part of this workshop.*
### Prerequisites
1. A Windows 10 computer with internet access
2. Visual Studio 2015 Community edition of higher [https://www.visualstudio.com/vs/community/](https://www.visualstudio.com/vs/community/)
3. Universal Windows App Development Tools (Windows SDK) [https://developer.microsoft.com/en-US/windows/downloads/windows-10-sdk](https://developer.microsoft.com/en-US/windows/downloads/windows-10-sdk)
4. Visual Studio Extension 'Connected Service for Azure IoT Hub' [https://visualstudiogallery.msdn.microsoft.com/e254a3a5-d72e-488e-9bd3-8fee8e0cd1d6](https://visualstudiogallery.msdn.microsoft.com/e254a3a5-d72e-488e-9bd3-8fee8e0cd1d6)
5. Node.js [https://nodejs.org/en/](https://nodejs.org/en/). _(We prefer Version 6.6)_
6. Azure account [create here](https://azure.microsoft.com/en-us/free/) _([Azure passes](https://www.microsoftazurepass.com/howto) will be present for those who have no Azure account)_
7a. [IoT Hub Explorer](https://github.com/Azure/azure-iot-sdks/tree/master/tools/iothub-explorer) _(for Command-Line interface based usage)_
7b. or [Device Explorer](https://github.com/Azure/azure-iot-sdks/blob/master/tools/DeviceExplorer/) _(for GUI based usage)_
### Objectives
In this workshop, you will learn:
1. Creating an IoT Hub in the Azure Portal
2. Creating a new UWP App
3. Connect to the IoT Hub and register the app like a device
4. Generate and send dummy telemetry
5. Check the arrival of the telemetry
## Creating an Azure IoT Hub in the Azure portal

Follow these steps to create an Azure IoT Hub.
1. Log into the [Azure portal](https://portal.azure.com/). You will be asked to provide Azure credentials if needed
2. On the left, a number of common Azure services are shown. Select `More Services` to open a list with all available services

3. Filter it with `IoT Hub`

4. Select `IoT Hub` and a new blade will be shown. Select `Add` and you will be asked to enter the information needed to create an IoT Hub

5. Enter a unique IoT Hub name eg. `IoTWorkshopih`. A green sign will be shown if the name is unique
6. Enter a unique Resource Group eg. `IoTWorkshoprg`. A green sign will be shown if the name is unique
7. Select `West Europe` for the location

8. Press `Create` and the portal will start creating the service. Once it is created, a notification is shown. In the right upper corner, a bell represents the list of all notifications shown

Creating an IoT Hub takes some time. Meanwhile, we will start with the app which will connect to the IoT Hub later on.
## creating a new UWP App

We will create a UWP app in Visual Studio. These apps are called Universal Windows Apps because they are supported by all sorts of devices running Windows 10. This includes laptops, PC's, Mobile devices like phones and tablets, the Xbox One, The Surface Hub, The Hololens and even the Raspberry Pi.
1. Start Visual Studio
2. On the Start Page or using the Menu, select New Project...

3. In the dialog, select the `Blank App (Universal Windows)` template

4. Select `Ok`. If you are asked which minimal platform version must be loaded, just press `Ok` again

*Note: If you do not have the Windows 10 Anniversary edition installed, please select the previous SDK*
5. Recompile the app and check if the build completes without errors. Press `F6` or use the menu `BUILD|Build Solution`

6. Start the app by pressing `F5` or use the menu `DEBUG|Start Debugging`
7. The app starts and an empty form is shown
The app is created. You are now ready to add a connection to the IoT Hub.
## Connect to the IoT Hub and register the app like a device

Let's add a connection to IoT hub and register the app like a real device.
1. Stop the running app using, if the app is on top, `ALT-F4` or the menu `DEBUG|Stop debugging`
2. Go to the solution Explorer to the right. You can see the application has one page called MainPage.xaml

3. Right click `References` and select `Add Connected Service`

4. A welcome screen for the extension will be shown. Select `Azure IoT Hub` and click `Configure` to add it as a Connected Service

5. Select `Hardcode shared access key` as Security Mode. Confirm with `OK`
6. Now you will be asked to select the IoT Hub you want to connect to. By this time, the Hub should have been created. *If you have multiple Azure accounts, please check whether the correct one is selected*

7. Select your IoT Hub and press `Add`
8. The next page of the wizard is shown. A little screen pops up asking to select or add a device. Our app will represent a device and therefore access must be granted. Select `New Device`

9. Enter a unique `device name` eg 'DummyDevice'

10. The device is registered, unique credentials are created and these will be used by our app. Select `OK`

11. The necessary Nuget libraries are added and eventually you will be directed to [https://github.com/Azure/azure-iot-hub-vs-cs/wiki/C%23-Usage](https://github.com/Azure/azure-iot-hub-vs-cs/wiki/C%23-Usage) for more information
12. In the Solution Explorer of Visual Studio a new file named 'AzureIoTHub.cs' is added. This provides all logic to connect to the IoT Hub

The AzureIoTHub can be called by our App. Let's do that.
## Generate and send dummy telemetry

Let's put a button on the main page of the app to send some telemetry. But first let's check out the 'AzureIoTHub.cs' file.
1. `Open` the file named 'AzureIoTHub.cs'
2. The file contains a class named 'AzureIoTHub' which has two methods: 'SendDeviceToCloudMessageAsync' and 'ReceiveCloudToDeviceMessageAsync'. *In this workshop, we will only send telemetry*
3. The method to send data is not that intelligent. It only sends a text message. Add the following code just below it
```csharp
public static async Task SendDeviceToCloudMessageAsync(Telemetry telemetry)
{
var _deviceClient = DeviceClient.CreateFromConnectionString(deviceConnectionString, TransportType.Amqp);
var message = new Message(Encoding.ASCII.GetBytes(Newtonsoft.Json.JsonConvert.SerializeObject(telemetry)));
await _deviceClient.SendEventAsync(message);
}
public class Telemetry
{
public int waterLevel { get; set; }
}
```
4. We have defined the Telemetry class, which holds a water level value. Random levels are sent to the IoT Hub by this method, with the telemetry converted to JSON
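For reference, the serialized message body produced by this method is a small JSON document like the following (the value is random, so yours will differ):
```json
{ "waterLevel": 42 }
```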
5. `Open` the file named 'MainPage.xaml'. The empty page will be shown both in a visual editor and a textual 'XAML' editor
6. The page contains one component, a grid. But that is merely a container for other visual components
7. In the XAML editor, within the grid, `add`
```xaml
<StackPanel>
<Button Name="btnSend" Content="Send" FontSize="60" Click="btnSend_Click" />
<TextBlock Name="tbReceived" Text="---" FontSize="60" />
</StackPanel>
```
8. A button and a text block are put on the screen. Go to the code which will be executed when the button is clicked. Put the cursor in btnSend_Click and press `F12`
9. The file 'MainPage.xaml.cs' is shown. All code behind the page is shown here. `Replace` the empty 'btnSend_Click' method with
```csharp
private async void btnSend_Click(object sender, RoutedEventArgs e)
{
await ShowMessage("Sending...");
var t = new AzureIoTHub.Telemetry
{
waterLevel = _random.Next(1, 68)
};
try
{
await AzureIoTHub.SendDeviceToCloudMessageAsync(t);
await ShowMessage("Telemetry sent");
}
catch (Exception ex)
{
await ShowMessage(ex.Message);
}
}
private Random _random = new Random((int)DateTime.Now.Ticks);
private async Task ShowMessage(string text)
{
await Dispatcher.RunAsync(
CoreDispatcherPriority.Normal, () =>
{
tbReceived.Text = text;
});
}
```
10. The method 'btnSend_Click' now generates a random value and sends it to the IoT Hub using the access token of the device 'DummyDevice'
11. New library references are introduced in this code. `Add` two using statements at the top of the editor
```csharp
using System.Threading.Tasks;
using Windows.UI.Core;
```
12. The app is now ready. `Run` the app and press the button. If the message 'Telemetry sent' is shown, our telemetry was accepted by the IoT Hub

Now we have sent telemetry to the Event Hub. Let's check if it's arrived.
## Monitoring the arrival of the telemetry in Azure

We can check the arrival of messages in the Azure IoT Hub. This can be done using a UI app named Device Explorer or using a Command-Line tool named IoT Hub Explorer. `Choose one`
### Collect Azure IoT Hub secrets
The integration requires an Azure IoT Hub Shared access policy key name with `Registry read, write and Device connect` permissions. In this example, we use the **iothubowner** policy which has these permissions enabled by default.
1. Check the Azure portal. The resource group and the IoT Hub should be created by now

2. On the left, select `Resource groups`. A list of resource groups is shown

3. Select the resource group `IoTWorkshoprg`. It will open a new blade with all resources in this group
4. Select the IoT Hub `IoTWorkshopih`. It will open a new blade with the IoT Hub

5. The IoTHub has not received any messages yet. Check the general settings for `Shared access policies`

6. **Write down** the `name` of the IoT Hub eg. `IoTWorkshopih`
7. Navigate to the 'iothubowner' policy and **write down** this `Connection String-Primary Key`

This is the secret needed from the Azure IoT Hub.
### Monitoring using UI
We can check the arrival of the messages in the Azure IoT Hub using the Device Explorer. This tool is UI based, please check the installation requirements.
1. Start the `Device Explorer` from the desktop of using the start menu
2. On the Configuration Tab, insert the IoT Hub `Connection String-primary key` and the `name` of the IoT Hub (as Protocol Gateway Hostname)
3. Press `Update`
4. On the Management tab, your device should already be available; it was registered when you added it through the Connected Service wizard

5. On the Data tab, Select your `Device ID` and press `Monitor`
6. This will result in the following messages
```
Receiving events...
10/07/16 23:14:10> Device: [DummyDevice], Data:[{"waterLevel":13}]
10/07/16 23:14:12> Device: [DummyDevice], Data:[{"waterLevel":18}]
10/07/16 23:14:13> Device: [DummyDevice], Data:[{"waterLevel":2}]
```
### Monitoring using Command-line
We can check the arrival of the messages in the Azure IoT Hub using the IoT Hub Explorer. This tool is Command-Line based, please check the installation requirements.
*Note: See the [full example](https://github.com/Azure/azure-iot-sdks/tree/master/tools/iothub-explorer) for more options of this tool.*
1. Create a new folder eg. `c:\iothubexplorer`
2. In a dos-box, navigate to the new folder
3. In this folder, run the following command `npm install -g iothub-explorer@latest` in your command-line environment, to install the latest (pre-release) version of the iothub-explorer tool
4. Login to the IoT Hub Explorer by supplying your *remembered* IoT Hub `Connection String-primary key` using the command `iothub-explorer login "[your connection string]"`
5. A session with the IoT Hub will start and it will last for approx. one hour:
```
Session started, expires Tue Sep 27 2016 18:35:37 GMT+0200 (W. Europe Daylight Time)
```
6. To monitor the device-to-cloud messages from a device, use the following command `iothub-explorer "[your connection string]" monitor-events [device name]` and `fill in` your *remembered* IoT Hub 'Connection String-primary key' and *remembered* device name
7. This will result in the following messages
```
Monitoring events from device DummyDevice
Event received:
{
"waterLevel": 12
}
```
## Conclusion
The messages are shown here too. These messages are now available in Azure.
Next Step: You are now ready to process your data in an Azure Function. | 45.343949 | 298 | 0.736831 | eng_Latn | 0.971218 |
93069993d1894603bf1b5f3b3a195794efde52f4 | 2,014 | md | Markdown | _posts/2005-11-18-from-mp3-to-sls.md | notthetup/blog | 7d8ff463c3fad048ed89643b8236874989fc2be9 | [
"MIT"
] | null | null | null | _posts/2005-11-18-from-mp3-to-sls.md | notthetup/blog | 7d8ff463c3fad048ed89643b8236874989fc2be9 | [
"MIT"
] | null | null | null | _posts/2005-11-18-from-mp3-to-sls.md | notthetup/blog | 7d8ff463c3fad048ed89643b8236874989fc2be9 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
title: From MP3 to SLS
tags:
- Audio
---
Now for some **techie stuff**.
Recently, I have been exposed to a lot of good techie stuff. There were some really cool seminars/tutorials in I2R. And me being me, I had to go for all of them, even when they were during the exam period. I feel that such things should matter more than exams. But anyway.
So the highlight was this series of talks by Dr. Jürgen Herre from the Fraunhofer Institute for Integrated Circuits (IIS). He has been contributing to MPEG audio standards since the time of MPEG-1. So he's pretty big in the field.
The talks were so cool. He talked about the evolution of the MPEG standards. It's really amazing to see that. I mean, how much progress people have made! What's even more amazing is the amount of effort that has been put into these things!
For the uninitiated, MPEG-1, or specifically MPEG-1 Layer 3, is what you know as mp3.
In the talks Dr. Jürgen covered the latest MPEG standard, MPEG-4. For general audio, one can achieve 'good quality', or rather acceptably good quality, at as low as 24kbps! And he demoed it. And it sounded nice. A bit synthetic, but only if you are very, very picky about things. But come on, for 24kbps I'd take that any day.
Today Dr. Jürgen gave a 'tutorial' about multi-channel encoding: the stuff they do so that you can have your precious 5.1 audio over channels meant for stereo or mono. Another amazing field. Man, I wish I could understand half the stuff they were talking about. I mean, I kind of got it, but not fully. There is so much that people have done. MPEG Surround, the new standard, gives 'good quality' 5.1 at as low as 42kbps! Just amazing.
**Parametric Coding is the way to go!**
I am so grateful to I2R for allowing me to do my FYP there. I get a chance to meet and rub shoulders (in a figurative way) with all these great people. I hope I2R has more such _cool talks_.
Anyway, so much for MPEG and mp3s. I still prefer my **ATRAC**. So back to my _Mini-Disk_.
| 69.448276 | 437 | 0.760675 | eng_Latn | 0.999763 |
9306f4ddbd88f3e17efc4ddaee2b3f5792e69801 | 26 | md | Markdown | terms/haskell-rts/README.md | mandober/debrief.haskell | ddbcd08948926caf75552db444c32c2bcc242df7 | [
"MIT"
] | null | null | null | terms/haskell-rts/README.md | mandober/debrief.haskell | ddbcd08948926caf75552db444c32c2bcc242df7 | [
"MIT"
] | null | null | null | terms/haskell-rts/README.md | mandober/debrief.haskell | ddbcd08948926caf75552db444c32c2bcc242df7 | [
"MIT"
] | null | null | null | # Haskell Topics: Runtime
| 13 | 25 | 0.769231 | deu_Latn | 0.457427 |
9306f89cce1f619cfb92c1179f43a2b8d6e037c1 | 20,616 | md | Markdown | WindowsServerDocs/storage/folder-redirection/deploy-folder-redirection.md | felpasl/windowsserverdocs.pt-br | a6cef3c6bd8ed3d48aad6dc10ecd7a826091d636 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/storage/folder-redirection/deploy-folder-redirection.md | felpasl/windowsserverdocs.pt-br | a6cef3c6bd8ed3d48aad6dc10ecd7a826091d636 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/storage/folder-redirection/deploy-folder-redirection.md | felpasl/windowsserverdocs.pt-br | a6cef3c6bd8ed3d48aad6dc10ecd7a826091d636 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Deploy Folder Redirection with Offline Files
description: How to use Windows Server to deploy Folder Redirection with Offline Files to Windows client computers.
ms.prod: windows-server
ms.topic: article
author: JasonGerend
ms.author: jgerend
ms.technology: storage
ms.date: 06/06/2019
ms.localizationpriority: medium
ms.openlocfilehash: 21172d9d3e6d91af691986bfd84b0e32049f3b88
ms.sourcegitcommit: 6aff3d88ff22ea141a6ea6572a5ad8dd6321f199
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/27/2019
ms.locfileid: "71401965"
---
# <a name="deploy-folder-redirection-with-offline-files"></a>Deploy Folder Redirection with Offline Files
>Applies to: Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Vista, Windows Server 2019, Windows Server 2016, Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2, Windows Server (Semi-Annual Channel)
This topic describes how to use Windows Server to deploy Folder Redirection with Offline Files to Windows client computers.
For a list of recent changes to this topic, see the [Change history](#change-history).
> [!IMPORTANT]
> Due to the security changes made in [MS16-072](https://support.microsoft.com/help/3163622/ms16-072-security-update-for-group-policy-june-14-2016), we updated [Step 3: Create a GPO for Folder Redirection](#step-3-create-a-gpo-for-folder-redirection) in this topic so that Windows can properly apply the Folder Redirection policy (and not revert the redirected folders on affected PCs).
## <a name="prerequisites"></a>Prerequisites
### <a name="hardware-requirements"></a>Hardware requirements
Folder Redirection requires an x64-based or x86-based computer; it is not supported on Windows® RT.
### <a name="software-requirements"></a>Software requirements
Folder Redirection has the following software requirements:
- To administer Folder Redirection, you must be signed in as a member of the Domain Admins security group, the Enterprise Admins security group, or the Group Policy Creator Owners security group.
- Client computers must run Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Server 2019, Windows Server 2016, Windows Server (Semi-Annual Channel), Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, or Windows Server 2008.
- Client computers must be joined to the Active Directory Domain Services (AD DS) domain that you are managing.
- A computer must be available with Group Policy Management and Active Directory Administration Center installed.
- A file server must be available to host redirected folders.
- If the file share uses DFS Namespaces, the DFS folders (links) must have a single target to prevent users from making conflicting edits on different servers.
- If the file share uses DFS Replication to replicate the contents with another server, users must be able to access only the source server to prevent users from making conflicting edits on different servers.
- When using a clustered file share, disable continuous availability on the file share to avoid performance issues with Folder Redirection and Offline Files. Additionally, Offline Files might not transition to offline mode for 3-6 minutes after a user loses access to a continuously available file share, which could frustrate users who aren't yet using the Always Offline mode of Offline Files.
> [!NOTE]
> Some newer features of Folder Redirection have additional client computer and Active Directory schema requirements. For more information, see [Deploy primary computers](deploy-primary-computers.md), [Disable Offline Files on folders](disable-offline-files-on-folders.md), [Enable Always Offline mode](enable-always-offline.md), and [Enable optimized folder moving](enable-optimized-moving.md).
## <a name="step-1-create-a-folder-redirection-security-group"></a>Step 1: Create a Folder Redirection security group
If your environment is not already set up with Folder Redirection, the first step is to create a security group that contains all the users to which you want to apply Folder Redirection policy settings.
Here's how to create a security group for Folder Redirection:
1. Open Server Manager on a computer with Active Directory Administration Center installed.
2. On the **Tools** menu, select **Active Directory Administration Center**. Active Directory Administration Center appears.
3. Right-click the appropriate domain or OU, select **New**, and then select **Group**.
4. In the **Create Group** window, in the **Group** section, specify the following settings:
- In **Group name**, type the name of the security group, for example: **Folder Redirection Users**.
- In **Group scope**, select **Security**, and then select **Global**.
5. In the **Members** section, select **Add**. The Select Users, Contacts, Computers, Service Accounts or Groups dialog box appears.
6. Type the names of the users or groups to which you want to deploy Folder Redirection, select **OK**, and then select **OK** again.
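If you prefer scripting, the same group can be created with PowerShell. The following is a minimal sketch (it assumes the ActiveDirectory module is available, and the OU path and member names are illustrative placeholders that you should adjust for your domain):
```PowerShell
# Create a global security group for Folder Redirection (adjust -Path to your OU)
New-ADGroup -Name "Folder Redirection Users" `
    -SamAccountName "FolderRedirectionUsers" `
    -GroupCategory Security `
    -GroupScope Global `
    -Path "OU=Groups,DC=corp,DC=contoso,DC=com"

# Add the users who should receive redirected folders
Add-ADGroupMember -Identity "Folder Redirection Users" -Members "user1","user2"
```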
## <a name="step-2-create-a-file-share-for-redirected-folders"></a>Step 2: Create a file share for redirected folders
If you don't already have a file share for redirected folders, use the following procedure to create a file share on a server running Windows Server 2012.
> [!NOTE]
> Some functionality might be different or unavailable if you create the file share on a server running another version of Windows Server.
Here's how to create a file share on Windows Server 2019, Windows Server 2016, and Windows Server 2012:
1. In the Server Manager navigation pane, select **File and Storage Services**, and then select **Shares** to display the Shares page.
2. In the **Shares** tile, select **Tasks**, and then select **New Share**. The New Share Wizard appears.
3. On the **Select Profile** page, select **SMB Share - Quick**. If you have File Server Resource Manager installed and are using folder management properties, instead select **SMB Share - Advanced**.
4. On the **Share Location** page, select the server and volume on which you want to create the share.
5. On the **Share Name** page, type a name for the share (for example, **Users$**) in the **Share name** box.
>[!TIP]
>When creating the share, hide it by putting a ```$``` after the share name. This hides the share from casual browsers.
6. On the **Other Settings** page, clear the Enable continuous availability checkbox, if present, and optionally select the **Enable access-based enumeration** and **Encrypt data access** checkboxes.
7. On the **Permissions** page, select **Customize permissions...**. The Advanced Security Settings dialog box appears.
8. Select **Disable inheritance**, and then select **Convert inherited permissions into explicit permissions on this object**.
9. Set the permissions as described in Table 1 and shown in Figure 1, removing permissions for unlisted groups and accounts and adding special permissions to the Folder Redirection Users group that you created in Step 1.

**Figure 1** Setting the permissions for the redirected folders file share
10. If you chose the **SMB Share - Advanced** profile, on the **Management Properties** page, select the **User Files** Folder Usage value.
11. If you chose the **SMB Share - Advanced** profile, on the **Quota** page, optionally select a quota to apply to users of the share.
12. On the **Confirmation** page, select **Create**.
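As an alternative to the wizard, a minimal PowerShell sketch can create the hidden share (the drive letter, path, and domain name below are illustrative assumptions; the NTFS permissions from Table 1 still need to be set separately):
```PowerShell
# Create the folder and share it as a hidden SMB share with access-based enumeration
New-Item -Path "D:\Users" -ItemType Directory
New-SmbShare -Name "Users$" -Path "D:\Users" `
    -FolderEnumerationMode AccessBased `
    -FullAccess "CORP\Folder Redirection Users"
```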
### <a name="required-permissions-for-the-file-share-hosting-redirected-folders"></a>Required permissions for the file share hosting redirected folders
| User Account | Access | Applies to |
| --------- | --------- | --------- |
| System | Full Control | This folder, subfolders, and files |
| Administrators | Full Control | This folder only |
| Creator/Owner | Full Control | Subfolders and files only |
| Security group of users needing to put data on the share (Folder Redirection Users) | List folder/read data *(advanced permissions)* <br /><br />Create folders/append data *(advanced permissions)* <br /><br />Read attributes *(advanced permissions)* <br /><br />Read extended attributes *(advanced permissions)* <br /><br />Read permissions *(advanced permissions)* | This folder only |
| Other groups and accounts | None (remove) | |
## <a name="step-3-create-a-gpo-for-folder-redirection"></a>Step 3: Create a GPO for Folder Redirection
If you don't already have a GPO created for Folder Redirection settings, use the following procedure to create one.
Here's how to create a GPO for Folder Redirection:
1. Open Server Manager on a computer with Group Policy Management installed.
2. On the **Tools** menu, select **Group Policy Management**.
3. Right-click the domain or OU where you want to set up Folder Redirection, and then select **Create a GPO in this domain, and Link it here**.
4. In the **New GPO** dialog box, type a name for the GPO (for example, **Folder Redirection Settings**), and then select **OK**.
5. Right-click the newly created GPO, and then clear the **Link Enabled** checkbox. This prevents the GPO from being applied until you finish configuring it.
6. Select the GPO. In the **Security Filtering** section of the **Scope** tab, select **Authenticated Users**, and then select **Remove** to prevent the GPO from being applied to everyone.
7. In the **Security Filtering** section, select **Add**.
8. In the **Select User, Computer, or Group** dialog box, type the name of the security group you created in Step 1 (for example, **Folder Redirection Users**), and then select **OK**.
9. Select the **Delegation** tab, select **Add**, type **Authenticated Users**, select **OK**, and then select **OK** again to accept the default Read permissions.
This step is necessary because of the security changes made in [MS16-072](https://support.microsoft.com/help/3163622/ms16-072-security-update-for-group-policy-june-14-2016).
> [!IMPORTANT]
> Due to the security changes made in [MS16-072](https://support.microsoft.com/help/3163622/ms16-072-security-update-for-group-policy-june-14-2016), you now must give the Authenticated Users group delegated Read permissions to the Folder Redirection GPO; otherwise, the GPO is not applied to users, or, if it is already applied, the GPO is removed, redirecting folders back to the local PC. For more information, see [Deploying Group Policy Security Update MS16-072](https://blogs.technet.microsoft.com/askds/2016/06/22/deploying-group-policy-security-update-ms16-072-kb3163622/).
## <a name="step-4-configure-folder-redirection-with-offline-files"></a>Step 4: Configure Folder Redirection with Offline Files
After creating a GPO for Folder Redirection settings, edit the Group Policy settings to enable and configure Folder Redirection, as discussed in the following procedure.
> [!NOTE]
> Offline Files is enabled by default for redirected folders on Windows client computers, and disabled on computers running Windows Server, unless changed by the user. To use Group Policy to control whether Offline Files is enabled, use the **Allow or Disallow use of the Offline Files feature** policy setting.
> For information about some of the other Offline Files Group Policy settings, see [Enable Advanced Offline Files Functionality](<https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn270369(v%3dws.11)>) and [Configuring Group Policy for Offline Files](<https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2003/cc759721(v%3dws.10)>).
Here's how to configure Folder Redirection in Group Policy:
1. In Group Policy Management, right-click the GPO you created (for example, **Folder Redirection Settings**), and then select **Edit**.
2. In the Group Policy Management Editor window, navigate to **User Configuration**, then **Policies**, then **Windows Settings**, and then **Folder Redirection**.
3. Right-click a folder that you want to redirect (for example, **Documents**), and then select **Properties**.
4. In the **Properties** dialog box, in the **Setting** box, select **Basic - Redirect everyone's folder to the same location**.
> [!NOTE]
> To apply Folder Redirection to client computers running Windows XP or Windows Server 2003, select the **Settings** tab and select the **Also apply redirection policy to Windows 2000, Windows 2000 Server, Windows XP, and Windows Server 2003 operating systems** checkbox.
5. In the **Target folder location** section, select **Create a folder for each user under the root path**, and then in the **Root Path** box, type the path to the file share that stores redirected folders, for example: **\\\\fs1.corp.contoso.com\\users$**
6. Select the **Settings** tab, and in the **Policy Removal** section, optionally select **Redirect the folder back to the local userprofile location when the policy is removed** (this setting can help make Folder Redirection behave more predictably for administrators and users).
7. Select **OK**, and then select **Yes** in the warning dialog box.
## <a name="step-5-enable-the-folder-redirection-gpo"></a>Step 5: Enable the Folder Redirection GPO
After you finish configuring the Folder Redirection Group Policy settings, the next step is to enable the GPO, permitting it to be applied to the affected users.
> [!TIP]
> If you plan to deploy primary computer support or other policy settings, do so now, before you enable the GPO. This prevents user data from being copied to non-primary computers before primary computer support is enabled.
Here's how to enable the Folder Redirection GPO:
1. Open Group Policy Management.
2. Right-click the GPO that you created, and then select **Link Enabled**. A checkbox appears next to the menu item.
## <a name="step-6-test-folder-redirection"></a>Step 6: Test Folder Redirection
To test Folder Redirection, sign in to a computer with a user account configured for Folder Redirection, and then confirm that the folders and profiles are redirected.
Here's how to test Folder Redirection:
1. Sign in to a primary computer (if you enabled primary computer support) with a user account for which you have enabled Folder Redirection.
2. If the user has previously signed in to the computer, open an elevated command prompt, and then type the following command to ensure that the latest Group Policy settings are applied to the client computer:
```PowerShell
gpupdate /force
```
3. Open File Explorer.
4. Right-click a redirected folder (for example, the My Documents folder in the Documents library), and then select **Properties**.
5. Select the **Location** tab and confirm that the path displays the file share you specified instead of a local path.
## <a name="appendix-a-checklist-for-deploying-folder-redirection"></a>Appendix A: Checklist for deploying Folder Redirection
| Status | Action |
| --- | --- |
| ☐<br>☐<br>☐ | 1. Prepare the domain<br>- Join computers to the domain<br>- Create user accounts |
| ☐<br><br><br> | 2. Create a security group for Folder Redirection<br>- Group name:<br>- Members: |
| ☐<br><br> | 3. Create a file share for redirected folders<br>- File share name: |
| ☐<br><br> | 4. Create a GPO for Folder Redirection<br>- GPO name: |
| ☐<br><br>☐<br>☐<br>☐<br>☐<br>☐ | 5. Configure Folder Redirection and Offline Files policy settings<br>- Redirected folders:<br>- Windows 2000, Windows XP, and Windows Server 2003 support enabled?<br>- Offline Files enabled? (enabled by default on Windows client computers)<br>- Always Offline mode enabled?<br>- Background file synchronization enabled?<br>- Optimized move of redirected folders enabled? |
| ☐<br><br>☐<br><br>☐<br>☐ | 6. (Optional) Enable primary computer support<br>- Computer-based or user-based?<br>- Designate primary computers for users<br>- Location of user and primary computer mappings:<br>- (Optional) Enable primary computer support for Folder Redirection<br>- (Optional) Enable primary computer support for Roaming User Profiles |
| ☐ | 7. Enable the Folder Redirection GPO |
| ☐ | 8. Test Folder Redirection |
## <a name="change-history"></a>Change history
The following table summarizes some of the most important changes to this topic.
| Date | Description | Reason |
| --- | --- | --- |
| January 18, 2017 | Added a step to [Step 3: Create a GPO for Folder Redirection](#step-3-create-a-gpo-for-folder-redirection) to delegate Read permissions to Authenticated Users, which is now required because of a Group Policy security update. | Customer feedback |
## <a name="more-information"></a>More information
* [Folder Redirection, Offline Files, and Roaming User Profiles overview](folder-redirection-rup-overview.md)
* [Deploy primary computers for Folder Redirection and Roaming User Profiles](deploy-primary-computers.md)
* [Enable Advanced Offline Files functionality](enable-always-offline.md)
* [Microsoft's Support Statement Around Replicated User Profile Data](https://blogs.technet.microsoft.com/askds/2010/09/01/microsofts-support-statement-around-replicated-user-profile-data/)
* [Sideload Apps with DISM](<https://docs.microsoft.com/previous-versions/windows/it-pro/windows-8.1-and-8/hh852635(v=win.10)>)
* [Troubleshooting packaging, deployment, and query of Windows Runtime-based apps](https://msdn.microsoft.com/library/windows/desktop/hh973484.aspx) | 101.058824 | 663 | 0.774835 | por_Latn | 0.999049 |
930751775e3f326faf35f8e41b03d3b4706d51b3 | 3,633 | md | Markdown | timeline/2016-11/perceptron-1/README.md | yuenshome/yuenshome.github.io | 5d9c4f27fc58d62dde1eb90b49affff51417e22a | [
"MIT"
] | 73 | 2018-11-29T08:15:58.000Z | 2022-02-14T08:45:24.000Z | timeline/2016-11/perceptron-1/README.md | yuenshome/yuenshome.github.io | 5d9c4f27fc58d62dde1eb90b49affff51417e22a | [
"MIT"
] | 136 | 2017-11-04T07:51:31.000Z | 2021-12-24T11:10:52.000Z | timeline/2016-11/perceptron-1/README.md | yuenshome/yuenshome.github.io | 5d9c4f27fc58d62dde1eb90b49affff51417e22a | [
"MIT"
] | 15 | 2019-02-28T11:51:36.000Z | 2022-02-14T08:45:26.000Z | [](https://yuenshome.github.io)
<script type="text/javascript" async src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML"> </script>
# Perceptron Series 1: The Perceptron Model and Learning Strategy
Study notes on Chapter 2, "The Perceptron", of *Statistical Learning Methods*. These are mostly excerpts, along with example-based implementations of the algorithm.
<blockquote>The perceptron is a linear model for binary classification. Its input is the feature vector of a sample (instance), and its output is the sample's class ($-1$ or $+1$). Because it yields a separating hyperplane that divides instances in the input space (feature space) into positive and negative classes, it <strong><span style="color: #ff0000;">is a discriminative model</span></strong>.
Perceptron learning amounts to finding the separating hyperplane that linearly divides the training data. This is done by minimizing a loss function <strong><span style="color: #ff0000;">based on misclassified samples</span></strong> using gradient descent, which yields the model parameters.
It is simple and easy to implement, and <strong><span style="color: #ff0000;">comes in a primal form and a dual form</span></strong>. The perceptron was proposed by Rosenblatt in 1957 and <strong><span style="color: #ff0000;">is the foundation of neural networks and support vector machines</span></strong>.</blockquote>
[toc]
<!--more-->
<h1>1. The Perceptron Model</h1>
<strong>Definition 1 (Perceptron)</strong> Suppose the input space (feature space) is $\chi \subseteq \Re^n$ and the output space is $\gamma = \{+1,-1\}$ . An input $x \in \chi$ denotes the feature vector of an instance (sample), corresponding to a point in the input space (feature space); an output $y \in \gamma$ denotes the class of the instance. The following function from the input space to the output space
$$
f(x) = \text{sign}(w \cdot x + b)
$$
is called a perceptron. Here, $w$ and $b$ are the perceptron model parameters: $w \in \Re^n$ is called the weight (or weight vector), $b \in \Re$ is called the bias, and $w \cdot x$ denotes the inner product of $w$ and $x$ . $\text{sign}$ is the sign function, i.e.
$$
\begin{eqnarray}
\text{sign(x)} =
\begin{cases}
+1, & x \geq 0 \\
-1, & x < 0
\end{cases}
\end{eqnarray}
$$
The perceptron is a linear classification model and a discriminative model. Its hypothesis space is the set of all linear classification models (linear classifiers) defined on the feature space, i.e., the set of functions $\{f|f(x)=w \cdot x + b\}$ .
The perceptron has the following geometric interpretation: the linear equation
$$
w \cdot x + b = 0
$$
corresponds to a hyperplane $S$ in the feature space $\Re^n$ , where $w$ is the normal vector of the hyperplane and $b$ is its intercept. The hyperplane divides the feature space into two parts, and the points (feature vectors) in the two parts are classified as positive and negative, respectively. The hyperplane $S$ is therefore called the separating hyperplane, as shown in the figure below.
<img class="aligncenter" src="./assets/perceptron%20model.png" alt="" width="389" height="323" />
Perceptron learning is based on a training data set (the feature vectors of instance samples together with their true classes) $T = \{(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)\}$ , where $x_i \in \chi = \Re^n, y_i \in \gamma = \{+1, -1\} , i=1,2,...,N$ . The goal is to obtain the perceptron model (i.e., $f(x) = \text{sign}(w \cdot x + b)$ ), that is, the model parameters $w,b$ . The learned model can then output the corresponding class for a new input instance.
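As a minimal sketch of the model itself (NumPy-based; the weights and bias below are arbitrary illustrative values, not learned parameters):
```python
import numpy as np

def perceptron_predict(x, w, b):
    """Perceptron decision function f(x) = sign(w . x + b).

    Uses the convention sign(0) = +1, matching the definition above.
    """
    return 1 if np.dot(w, x) + b >= 0 else -1

# Illustrative parameters: w = (1, -1), b = 0.5
print(perceptron_predict(np.array([2.0, 1.0]), np.array([1.0, -1.0]), 0.5))  # +1
```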
<h1>2. Perceptron Learning Strategy</h1>
<h2>2.1 Linear Separability of a Data Set</h2>
<strong>Definition 2 (linear separability of a data set)</strong> Given a data set $T=\{(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)\}$ , where $x_i \in \chi = \Re^n , y_i \in \gamma = \{+1, -1\}, i=1,2,...,N$ , if there exists some hyperplane $S$ , namely
$$
w \cdot x + b = 0
$$
that divides the positive and negative instances of the dataset entirely correctly onto the two sides of the hyperplane (i.e., $w \cdot x_i + b > 0$ for every instance $i$ with $y_i=+1$, and $w \cdot x_i + b < 0$ for every instance $i$ with $y_i = -1$), then the dataset $T$ is called a linearly separable dataset; otherwise, $T$ is said to be linearly inseparable.
<h2>2.2 Perceptron Learning Strategy</h2>
Assume the training set is linearly separable. The goal of perceptron learning is to find a separating hyperplane that divides the positive and negative instances of the training set entirely correctly. To find such a hyperplane, i.e., to determine the model parameters $w,b$, we need a learning strategy: define an (empirical) loss function and minimize it.
A natural choice of loss function is the total number of misclassified points. However, such a loss function is not a continuous, differentiable function of the parameters $w,b$, and so is hard to optimize. Another choice, the one the perceptron adopts, is the total distance from the misclassified points to the hyperplane $S$. To derive it, first write down <strong><span style="color: #ff0000;">the distance from a point $x_0$ in the input space $\Re^n$ to the hyperplane $S$:</span></strong>
$$ \frac{1}{||w||} |w \cdot x_0 + b| $$
Here $||w||$ is the $L_2$ norm of $w$.
Next, for a misclassified sample $(x_i, y_i)$,
$$
-y_i (w \cdot x_i + b) > 0
$$
holds, because $y_i = -1$ when $w \cdot x_i + b > 0 $ and $y_i = +1$ when $w \cdot x_i + b < 0$. Consequently, the distance from a single misclassified point $x_i$ to the hyperplane $S$ is $-\frac{1}{||w||} y_i (w \cdot x_i + b)$, and summing over the set $M$ of misclassified points gives the total distance
$$
-\frac{1}{||w||} \sum_{x_i \in M } y_i (w \cdot x_i + b)
$$
Dropping the factor $\frac{1}{||w||}$ yields the loss function of perceptron learning.
Given a training set $T=\{(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)\}$, where $x_i \in \chi = \Re^n , y_i \in \gamma = \{+1, -1\}, i=1,2,..., N$, the loss function of learning the perceptron $\text{sign}(w \cdot x + b)$ is defined as:
$$
L(w,b) = -\sum_{x_i \in M } y_i(w \cdot x_i + b)
$$
where $M$ is the set of misclassified points. This loss function is the empirical risk function of perceptron learning.
Clearly, the loss function $L(w,b)$ is non-negative. If there are no misclassified points, the loss is $0$; moreover, the fewer the misclassified points and the closer they lie to the hyperplane, the smaller the loss. For a particular sample point, the loss is a linear function of the parameters $w,b$ when the point is misclassified, and $0$ when it is correctly classified. Hence, for a given training set $T$, the loss function $L(w,b)$ is continuous and differentiable in $w$ and $b$.
The perceptron learning strategy is to choose, within the hypothesis space, the model parameters $w,b$ that minimize the loss function $L(w,b) = -\sum_{x_i \in M } y_i(w \cdot x_i + b)$; this gives the perceptron model.
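To make the model and the loss function concrete, here is a minimal NumPy sketch (the variable names, the `eta` learning rate, and the update loop are illustrative assumptions, not part of the original notes):

```python
import numpy as np

def sign(x):
    # sign(x) = +1 if x >= 0, -1 otherwise, matching the definition above.
    return np.where(x >= 0.0, 1.0, -1.0)

def predict(w, b, X):
    # f(x) = sign(w . x + b), applied row-wise to an (N, n) matrix X.
    return sign(X @ w + b)

def perceptron_loss(w, b, X, y):
    # L(w, b) = -sum over misclassified points of y_i * (w . x_i + b).
    margins = y * (X @ w + b)
    return -np.sum(margins[margins <= 0.0])

def sgd_epoch(w, b, X, y, eta=1.0):
    # One pass of stochastic gradient descent on L(w, b) (primal form).
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0.0:   # point is misclassified
            w = w + eta * yi * xi      # single-point gradient step on w
            b = b + eta * yi           # single-point gradient step on b
    return w, b
```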
| 39.48913 | 234 | 0.634462 | yue_Hant | 0.529698 |
93078df3e717dcee92a4eeb07fb60eec65530db5 | 81 | md | Markdown | body-origin.md | ftx/OpenWRT-Rockchip | 25bd1984c8e2a22e0b73707402d1a3616f7548e3 | [
"MIT"
] | null | null | null | body-origin.md | ftx/OpenWRT-Rockchip | 25bd1984c8e2a22e0b73707402d1a3616f7548e3 | [
"MIT"
] | null | null | null | body-origin.md | ftx/OpenWRT-Rockchip | 25bd1984c8e2a22e0b73707402d1a3616f7548e3 | [
"MIT"
] | null | null | null | ***OpenWRT master with Kernel 5.10***
For: NanoPi-R2S NanoPi-R4S OrangePi-R1-plus | 40.5 | 43 | 0.753086 | kor_Hang | 0.626856 |
93082367c92dcbaea57f2b3fca39fffab63f1187 | 108 | md | Markdown | about.md | zbw898218/zbw898218.github.io | 6b5ebbfc33933b0021ad1780880191b4ca3c7082 | [
"Apache-2.0"
] | null | null | null | about.md | zbw898218/zbw898218.github.io | 6b5ebbfc33933b0021ad1780880191b4ca3c7082 | [
"Apache-2.0"
] | null | null | null | about.md | zbw898218/zbw898218.github.io | 6b5ebbfc33933b0021ad1780880191b4ca3c7082 | [
"Apache-2.0"
] | null | null | null | ---
layout: page
title: "ABOUT"
description: "A record of my Java learning journey"
header-img: "img/zhihu.jpg"
---
A devoted fan of Fengzhihen (风之痕).
| 7.714286 | 29 | 0.638889 | eng_Latn | 0.307539 |
93082fc8fff30ea3c04cdd9b13e1c1efff191414 | 5,864 | md | Markdown | docs/integration-services/integration-services-features-supported-by-the-editions-of-sql-server.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/integration-services-features-supported-by-the-editions-of-sql-server.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/integration-services/integration-services-features-supported-by-the-editions-of-sql-server.md | PowerBee-AK/sql-docs.de-de | f6f4854db855a89c4e49dc0557fa456da060b3c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Integration Services features supported by the editions of SQL Server 2016
title: Integration Services features supported by the editions of SQL Server | Microsoft Docs
ms.custom: ''
ms.date: 07/26/2017
ms.prod: sql
ms.prod_service: integration-services
ms.reviewer: ''
ms.technology: integration-services
ms.topic: conceptual
ms.assetid: e5018225-68bb-4f34-ae4a-ead79d8ad13a
author: chugugrace
ms.author: chugu
ms.openlocfilehash: 2a8b60c36831622713103786727c4225ac0c351c
ms.sourcegitcommit: a9e982e30e458866fcd64374e3458516182d604c
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 01/11/2021
ms.locfileid: "98095647"
---
# <a name="integration-services-features-supported-by-the-editions-of-sql-server"></a>Integration Services features supported by the editions of SQL Server
[!INCLUDE[sqlserver-ssis](../includes/applies-to-version/sqlserver-ssis.md)]
This topic provides details about the SQL Server Integration Services (SSIS) features supported by the different editions of [!INCLUDE[ssNoVersion_md](../includes/ssnoversion-md.md)].
For the features supported by the Evaluation and Developer editions, see the Enterprise edition features in the following tables.
For the latest release notes and what's-new information, see the following articles:
- [SQL Server 2016 release notes](../sql-server/sql-server-2016-release-notes.md)
- [What's new in Integration Services in SQL Server 2016](../integration-services/what-s-new-in-integration-services-in-sql-server-2016.md)
- [What's new in Integration Services in SQL Server 2017](../integration-services/what-s-new-in-integration-services-in-sql-server-2017.md)
**Try SQL Server 2016!**
The SQL Server Evaluation edition is available for a 180-day trial period.
> [](https://www.microsoft.com/evalcenter/evaluate-sql-server-2016) **[Download SQL Server 2016 from the Evaluation Center.](https://www.microsoft.com/evalcenter/evaluate-sql-server-2016)**
## <a name="new-integration-services-features-in-sql-server-2017"></a><a name="ISNew"></a> New Integration Services features in SQL Server 2017
|Feature|Enterprise|Standard|Web|Express with Advanced Services|Express|
|-------------|----------------|--------------|---------|------------------------------------|------------------------|
|Scale Out Master|Yes|||||
|Scale Out Worker|Yes|Yes<sup>1</sup>|TBD|TBD|TBD|
|Support for Microsoft Dynamics AX and Microsoft Dynamics CRM in OData components <sup>2</sup>|Yes|Yes||||
|Linux support|Yes|Yes|||Yes|
<sup>1</sup> If you run packages that require Enterprise-only features in Scale Out, the Scale Out Workers must also run on instances of SQL Server Enterprise.
<sup>2</sup> This feature is also supported in SQL Server 2016 with Service Pack 1.
## <a name="sql-server-import-and-export-wizard"></a><a name="IEWiz"></a> SQL Server Import and Export Wizard
|Feature|Enterprise|Standard|Web|Express with Advanced Services|Express|
|-------------|----------------|--------------|---------|------------------------------------|------------------------|
|SQL Server Import and Export Wizard|Yes|Yes|Yes|Yes<sup>1</sup>|Yes<sup>1</sup>|
<sup>1</sup> The DTSWizard.exe executable is not provided with SQL on Linux. However, dtexec on Linux can run a package created by DTSWizard on Windows.
## <a name="integration-services"></a> Integration Services
|Feature|Enterprise|Standard|Web|Express with Advanced Services|Express|
|-------------|----------------|--------------|---------|------------------------------------|------------------------|
|Built-in data source connectors|Yes|Yes||||
|Built-in tasks and transformations|Yes|Yes||||
|ODBC source and destination|Yes|Yes||||
|Azure data source connectors and tasks|Yes|Yes||||
|Hadoop/HDFS connectors and tasks|Yes|Yes||||
|Basic data profiling tools|Yes|Yes||||
## <a name="integration-services---advanced-sources-and-destinations"></a>Integration Services - Advanced sources and destinations
|Feature|Enterprise|Standard|Web|Express with Advanced Services|Express|
|-------------|----------------|--------------|---------|------------------------------------|------------------------|
|High-performance Oracle source and destination by Attunity|Yes|||||
|High-performance Teradata source and destination by Attunity|Yes|||||
|SAP BW source and destination|Yes|||||
|Data mining model training destination|Yes|||||
|Dimension processing destination|Yes|||||
|Partition processing destination|Yes|||||
## <a name="integration-services---advanced-tasks-and-transformations"></a> Integration Services - Advanced tasks and transformations
|Feature|Enterprise|Standard|Web|Express with Advanced Services|Express|
|-------------|----------------|--------------|---------|------------------------------------|------------------------|
|Change Data Capture components by Attunity <sup>1</sup>|Yes|||||
|Data mining query transformation|Yes|||||
|Fuzzy grouping and fuzzy lookup transformations|Yes|||||
|Term extraction and term lookup transformations|Yes|||||
<sup>1</sup> The Change Data Capture components by Attunity require Enterprise edition. The Change Data Capture Service and the Change Data Capture Designer, however, do not require Enterprise edition. You can use the Designer and the Service on a computer where SSIS is not installed.
9308eb814d98e03dc7a666d01f3a70a091fddf88 | 2,990 | md | Markdown | exampleSite/content/post/shortcodes/snippet/index.md | mrBrutus/hugo-plc-docs-theme | ccff80ca5572d3c9ba348c75c861a96f923a7e6f | [
"MIT"
] | null | null | null | exampleSite/content/post/shortcodes/snippet/index.md | mrBrutus/hugo-plc-docs-theme | ccff80ca5572d3c9ba348c75c861a96f923a7e6f | [
"MIT"
] | null | null | null | exampleSite/content/post/shortcodes/snippet/index.md | mrBrutus/hugo-plc-docs-theme | ccff80ca5572d3c9ba348c75c861a96f923a7e6f | [
"MIT"
] | 1 | 2021-07-01T06:53:16.000Z | 2021-07-01T06:53:16.000Z | ---
title: snippet
description: For re-using snippets or text sections.
author: mrBrutus
tags:
- shortcode
---
The `snippet` shortcode can be used for re-using snippets or text sections --
Create a text section once and insert it in many pages.
Organize these snippets in one or more snippet bundles.
*syntax:*
```md
{{</* snippet bundle="<name-of-the-snippet-bundle>" file="<relative-path-of-the-snippet-file>" */>}}
```
- `bundle` must be passed the name of the snippets bundle
- `file` must be passed the file path relative to the snippets bundle:
- *Without* file extension for markdown files which shall be rendered as usual.
- *With* file extension if the snippet shall be inserted as codeblock.
## Usage
Place the snippet content in the `snippets` folder:
*my-snippets/some-folder/my-sample-text.md:*
```md
This is a text used in many pages.
- lorem
- ipsum
```
Then use the shortcode throughout your pages:
```md
---
title: some page
---
## Inserted text section
{{</* snippet bundle="my-snippets" file="some-folder/my-sample-text" */>}}
```
## Content
The shortcode supports Markdown and source code.
**Markdown snippets:**
- Are rendered as any other markdown content
- May contain local images (stored in the snippets folder)
- May include other shortcodes
**Source code snippets:**
- Are rendered as code blocks
{{< note >}}
The language selection for the code block is taken from the file extension (e.g. `yaml` for `<some-file.yaml>`).
{{< /note >}}
## Snippets bundle
The snippets bundle must contain a `_index.md` with below front-matter so that none of the files in this bundle will
be rendered as HTML pages.
*my-snippets/_index.md:*
```md
---
title: my-snippets
cascade:
_build:
render: false
list: false
publishResources: false
---
```
## Examples
### 1) Simple markdown snippet
The snippet in `{{</* snippet bundle="my-snippets" file="shortcode-docs/md-snippet1" */>}}` has the following content:
{{< snippet bundle="my-snippets" file="shortcode-docs/md-snippet1.md" >}}
Which renders as:
{{< snippet bundle="my-snippets" file="shortcode-docs/md-snippet1" >}}
### 2) Markdown snippet with note shortcode
The snippet in `{{</* snippet bundle="my-snippets" file="shortcode-docs/md-snippet2" */>}}` has the following content:
{{< snippet bundle="my-snippets" file="shortcode-docs/md-snippet2.md" >}}
Which renders as:
{{< snippet bundle="my-snippets" file="shortcode-docs/md-snippet2" >}}
### 3) Markdown snippet with image
The snippet in `{{</* snippet bundle="my-snippets" file="shortcode-docs/md-snippet3" */>}}` has the following content:
{{< snippet bundle="my-snippets" file="shortcode-docs/md-snippet3.md" >}}
Which renders as:
{{< snippet bundle="my-snippets" file="shortcode-docs/md-snippet3" >}}
### 4) Source code snippet
`{{</* snippet bundle="my-snippets" file="shortcode-docs/yaml-snippet1.yaml" */>}}` renders as:
{{< snippet bundle="my-snippets" file="shortcode-docs/yaml-snippet1.yaml" >}}
| 24.308943 | 118 | 0.705351 | eng_Latn | 0.875694 |
930a5201ef760697ee5ecaf942b5900b546195fc | 127 | md | Markdown | template/content/tsc-reps/thierry-supplisson.md | pcoccoli/website | 48acd604697a1dd56a03316c329e320c4619ac61 | [
"MIT"
] | 3 | 2020-09-10T09:39:15.000Z | 2021-06-30T02:12:12.000Z | template/content/tsc-reps/thierry-supplisson.md | pcoccoli/website | 48acd604697a1dd56a03316c329e320c4619ac61 | [
"MIT"
] | 25 | 2020-04-16T18:56:47.000Z | 2021-12-07T10:44:06.000Z | template/content/tsc-reps/thierry-supplisson.md | pcoccoli/website | 48acd604697a1dd56a03316c329e320c4619ac61 | [
"MIT"
] | 8 | 2020-06-18T15:54:06.000Z | 2021-11-02T19:53:15.000Z | ---
tsc_rep_name: "Thierry Supplisson"
tsc_rep_title: "Senior Member of Technical Staff"
company: "IBM Security"
github: ""
--- | 21.166667 | 49 | 0.740157 | eng_Latn | 0.495796 |
930b437641078661acf0a731f8fdaaeb1f95eb79 | 365 | md | Markdown | _posts/2021-07-08/2021-06-18-F-What-hole-would-you-choose-20210618211503106201.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-18-F-What-hole-would-you-choose-20210618211503106201.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-18-F-What-hole-would-you-choose-20210618211503106201.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | ---
title: "(F) What hole would you choose?"
metadate: "hide"
categories: [ God Pussy ]
image: "https://preview.redd.it/v0da786lw0671.jpg?auto=webp&s=775adfb8b4711bc01372dbcbfe9ee145e4e39873"
thumb: "https://preview.redd.it/v0da786lw0671.jpg?width=1080&crop=smart&auto=webp&s=e016112ff53d65aab20564bff703b8711871b4a7"
visit: ""
---
(F) What hole would you choose?
| 36.5 | 125 | 0.767123 | yue_Hant | 0.111154 |
930bb24fce9af9d78dbf10dd4e7b5f1289a3b220 | 8,316 | md | Markdown | documents/amazon-vpc-user-guide/doc_source/sharing-managed-prefix-lists.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | 5 | 2021-08-13T09:20:58.000Z | 2021-12-16T22:13:54.000Z | documents/amazon-vpc-user-guide/doc_source/sharing-managed-prefix-lists.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | documents/amazon-vpc-user-guide/doc_source/sharing-managed-prefix-lists.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | # Working with shared prefix lists<a name="sharing-managed-prefix-lists"></a>
Customer\-managed prefix lists integrate with AWS Resource Access Manager \(AWS RAM\)\. With AWS RAM, you share resources that you own across AWS accounts by creating a *resource share*\. It specifies the resources to share, and the consumers with whom to share them\. Consumers can be individual AWS accounts, or organizational units or an entire organization in AWS Organizations\.
For more information about AWS RAM, see the *[AWS RAM User Guide](https://docs.aws.amazon.com/ram/latest/userguide/)*\.
The owner of a prefix list can share a prefix list with the following:
+ Specific AWS accounts inside or outside of its organization in AWS Organizations
+ An organizational unit inside its organization in AWS Organizations
+ Its entire organization in AWS Organizations
Consumers with whom a prefix list has been shared can view the prefix list and its entries, and they can reference the prefix list in their AWS resources\.
**Topics**
+ [Prerequisites for sharing prefix lists](#sharing-prereqs)
+ [Sharing a prefix list](#sharing-share)
+ [Identifying a shared prefix list](#sharing-identify)
+ [Identifying references to a shared prefix list](#sharing-identify-references)
+ [Unsharing a shared prefix list](#sharing-unshare)
+ [Shared prefix list permissions](#sharing-perms)
+ [Billing and metering](#sharing-billing)
+ [Quotas](#sharing-limits)
## Prerequisites for sharing prefix lists<a name="sharing-prereqs"></a>
+ To share a prefix list, you must own it in your AWS account\. You cannot share a prefix list that has been shared with you\. You cannot share an AWS\-managed prefix list\.
+ To share a prefix list with your organization or an organizational unit in AWS Organizations, you must enable sharing with AWS Organizations\. For more information, see [ Enable sharing with AWS Organizations](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs) in the *AWS RAM User Guide*\.
## Sharing a prefix list<a name="sharing-share"></a>
To share a prefix list, you must add it to a resource share\. If you do not have a resource share, you must first create one using the [AWS RAM console](https://console.aws.amazon.com/ram)\.
If you are part of an organization in AWS Organizations, and sharing within your organization is enabled, consumers in your organization are automatically granted access to the shared prefix list\. Otherwise, consumers receive an invitation to join the resource share and are granted access to the shared prefix list after accepting the invitation\.
You can create a resource share and share a prefix list that you own using the AWS RAM console, or the AWS CLI\.
**To create a resource share and share a prefix list using the AWS RAM console**
Follow the steps in [Create a resource share](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-create) in the *AWS RAM User Guide*\. For **Select resource type**, choose **Prefix Lists**, and then select the check box for your prefix list\.
**To add a prefix list to an existing resource share using the AWS RAM console**
To add a managed prefix that you own to an existing resource share, follow the steps in [Updating a resource share](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing.html#working-with-sharing-update) in the *AWS RAM User Guide*\. For **Select resource type**, choose **Prefix Lists**, and then select the check box for your prefix list\.
**To share a prefix list that you own using the AWS CLI**
Use the following commands to create and update a resource share \(a sample invocation is shown after the list\):
+ [create\-resource\-share](https://docs.aws.amazon.com/cli/latest/reference/ram/create-resource-share.html)
+ [associate\-resource\-share](https://docs.aws.amazon.com/cli/latest/reference/ram/associate-resource-share.html)
+ [update\-resource\-share](https://docs.aws.amazon.com/cli/latest/reference/ram/update-resource-share.html)
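For example, a hypothetical invocation that shares a prefix list with another account might look like the following \(the Region, prefix list ID, and account IDs are placeholders\):

```
aws ram create-resource-share \
    --name prefix-list-share \
    --resource-arns arn:aws:ec2:us-east-1:111122223333:prefix-list/pl-0123456789abcdefg \
    --principals 444455556666
```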
## Identifying a shared prefix list<a name="sharing-identify"></a>
Owners and consumers can identify shared prefix lists using the Amazon VPC console and AWS CLI\.
**To identify a shared prefix list using the Amazon VPC console**
1. Open the Amazon VPC console at [https://console\.aws\.amazon\.com/vpc/](https://console.aws.amazon.com/vpc/)\.
1. In the navigation pane, choose **Managed Prefix Lists**\.
1. The page displays the prefix lists that you own and the prefix lists that are shared with you\. The **Owner ID** column shows the AWS account ID of the prefix list owner\.
1. To view the resource share information for a prefix list, select the prefix list and choose **Sharing** in the lower pane\.
**To identify a shared prefix list using the AWS CLI**
Use the [describe\-managed\-prefix\-lists](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-managed-prefix-lists.html) command\. The command returns the prefix lists that you own and the prefix lists that are shared with you\. `OwnerId` shows the AWS account ID of the prefix list owner\.
## Identifying references to a shared prefix list<a name="sharing-identify-references"></a>
Owners can identify the consumer\-owned resources that are referencing a shared prefix list by using the Amazon VPC console and AWS CLI\.
**To identify references to a shared prefix list using the Amazon VPC console**
1. Open the Amazon VPC console at [https://console\.aws\.amazon\.com/vpc/](https://console.aws.amazon.com/vpc/)\.
1. In the navigation pane, choose **Managed Prefix Lists**\.
1. Select the prefix list and choose **Associations** in the lower pane\.
1. The IDs of the resources that are referencing the prefix list are listed in the **Resource ID** column\. The owners of the resources are listed in the **Resource Owner** column\.
**To identify references to a shared prefix list using the AWS CLI**
Use the [get\-managed\-prefix\-list\-associations](https://docs.aws.amazon.com/cli/latest/reference/ec2/get-managed-prefix-list-associations.html) command\.
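For example, a hypothetical call for a specific prefix list \(the ID is a placeholder\):

```
aws ec2 get-managed-prefix-list-associations \
    --prefix-list-id pl-0123456789abcdefg
```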
## Unsharing a shared prefix list<a name="sharing-unshare"></a>
When you unshare a prefix list, consumers can no longer view the prefix list or its entries in their account, and they cannot reference the prefix list in their resources\. If the prefix list is already referenced in the consumer's resources, those references continue to function as normal, and you can continue to [view those references](#sharing-identify-references)\. If you update the prefix list to a new version, the references use the latest version\.
To unshare a shared prefix list that you own, you must remove it from the resource share\. You can do this using the AWS RAM console, or the AWS CLI\.
**To unshare a shared prefix list that you own using the AWS RAM console**
See [Updating a resource share](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing.html#working-with-sharing-update) in the *AWS RAM User Guide*\.
**To unshare a shared prefix list that you own using the AWS CLI**
Use the [disassociate\-resource\-share](https://docs.aws.amazon.com/cli/latest/reference/ram/disassociate-resource-share.html) command\.
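For example \(both ARNs below are placeholders\):

```
aws ram disassociate-resource-share \
    --resource-share-arn arn:aws:ram:us-east-1:111122223333:resource-share/7ab63972-b505-7e2a-420d-6f5d3EXAMPLE \
    --resource-arns arn:aws:ec2:us-east-1:111122223333:prefix-list/pl-0123456789abcdefg
```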
## Shared prefix list permissions<a name="sharing-perms"></a>
### Permissions for owners<a name="perms-owner"></a>
Owners are responsible for managing a shared prefix list and its entries\. Owners can view the IDs of the AWS resources that reference the prefix list\. However, they cannot add or remove references to a prefix list in AWS resources that are owned by consumers\.
Owners cannot delete a prefix list if the prefix list is referenced in a resource that's owned by a consumer\.
### Permissions for consumers<a name="perms-consumer"></a>
Consumers can view the entries in a shared prefix list, and they can reference a shared prefix list in their AWS resources\. However, consumers can't modify, restore, or delete a shared prefix list\.
## Billing and metering<a name="sharing-billing"></a>
There are no additional charges for sharing prefix lists\.
## Quotas<a name="sharing-limits"></a>
For more information about quotas \(limits\) related to AWS RAM, see [Service limits](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html#what-is-limits) in the *AWS RAM User Guide*\. | 74.25 | 459 | 0.768398 | eng_Latn | 0.983481 |
930c0664ff6cc162c720825a751c22978dcb16fc | 751 | md | Markdown | index.md | kazukasahara/kasahara-lab.github.io | ef02bb68663f12bb8f56bcb51cf3af271953d7c2 | [
"MIT"
] | null | null | null | index.md | kazukasahara/kasahara-lab.github.io | ef02bb68663f12bb8f56bcb51cf3af271953d7c2 | [
"MIT"
] | null | null | null | index.md | kazukasahara/kasahara-lab.github.io | ef02bb68663f12bb8f56bcb51cf3af271953d7c2 | [
"MIT"
] | null | null | null | ---
title: Kasahara Lab
layout: home
group: home
---
# Welcome to the Kasahara Lab!
{: .display-4}
<br>
We are part of the [Department of Bioengineering and Therapeutic Sciences](http://bts.ucsf.edu/), the [Macromolecular Structure Group](http://msg.ucsf.edu/), and the [California Institute of Quantitative Biosciences (QB3)](http://qb3.org/).
{: .welcomefont}
Research in the lab is focused on discovering the fundamental principles of macromolecular structure and dynamics. We are interested in defining conformational states that are essential for function and understanding how conformational transitions couple to biological mechanisms.
{: .welcomefont}
We are located in Genentech Hall at the Mission Bay Campus of UCSF.
{: .welcomefont}
| 41.722222 | 281 | 0.773635 | eng_Latn | 0.982117 |
930d4b93ea1ef7e1b942c9d2b0409f23579689fe | 2,643 | md | Markdown | _posts/2015-02-20-atom-package-for-fedora-21.md | ariestiyansyah/try | 682084de5219a1252b8a5f87ca7c0d8fcbcbac5c | [
"MIT"
] | 1 | 2018-12-23T23:43:35.000Z | 2018-12-23T23:43:35.000Z | _posts/2015-02-20-atom-package-for-fedora-21.md | ariestiyansyah/move | 2a5512b198c70625b5dd3674b90824385b2f1a4e | [
"MIT"
] | null | null | null | _posts/2015-02-20-atom-package-for-fedora-21.md | ariestiyansyah/move | 2a5512b198c70625b5dd3674b90824385b2f1a4e | [
"MIT"
] | null | null | null | ---
title: Atom Package for Fedora 32-bit
author: ariestiyansyah
description: This simple step to create atom package for fedora 32bit
layout: post
permalink: /atom-package-for-fedora-32-bit
categories:
- fedora
- code
tags:
- fedora
- atom editor
- 32bit
- code
---
Today I did some research on Atom in Fedora 32-bit. As we know, the Atom editor
for Linux is only officially available for the 64-bit architecture, but the Atom team provides steps to compile
the Atom editor on GitHub (yeah, sounds good).
- [Atom build instructions in linux](https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md)
- [Another Way to install atom](https://gist.github.com/mojavelinux/225d01e621f467db1c75)
Here is my way and what I've done when trying to create an RPM for the Atom editor.
# Install Development tools, Fedora Packager and compiler requirements for development
Run these commands:
<pre>
$ sudo yum -y install @development-tools
$ sudo yum -y install fedora-packager
$ sudo yum -y install make gcc gcc-c++ glibc-devel libgnome-keyring-devel
</pre>
Don't forget to add the current user to the `mock` group by running the usermod command:
<pre>
$ sudo usermod -a -G mock yourusername
</pre>
# Install Node and Node Package Manager (NPM)
Simply run:
<pre>
$ sudo yum -y install nodejs npm
</pre>
__UPDATE__ :
On my system I can't run any npm package using sudo, so we need to create the
links; here are the commands:
<pre>
$ sudo ln -s /usr/local/bin/node /usr/bin/node
$ sudo ln -s /usr/local/lib/node /usr/lib/node
$ sudo ln -s /usr/local/bin/npm /usr/bin/npm
</pre>
# Clone the atom repository on github
<pre>
$ git clone git@github.com:atom/atom.git
$ cd atom
</pre>
It will clone up to 100 MB of files from GitHub.
After the clone is done, remove gyp because it will conflict with node-gyp; don't
worry, you can install it later ;).
# Run the build
Now you can build the Atom package by running the `script/rpmbuild` command in the Atom
root directory. The process will create the `/tmp/atom-build` directory containing
folders called `Atom`, `icons` and `rpm`, and will also create the `atom.desktop` and
`atom.spec` files.
The build is done and the RPM package has been created; it is stored in the
`/tmp/atom-build/rpm` folder, so you can install the package now.
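To install the generated package, something like the following should work (the exact file name depends on the Atom version that was built):
<pre>
$ sudo yum -y localinstall /tmp/atom-build/rpm/atom-*.rpm
</pre>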
There is also a way to install Atom without creating the RPM package, by running these commands from the root directory of atom:
<pre>
$ script/build
$ script/grunt install
</pre>
If you don't need to waste your time compiling the app, you can download the Atom
RPM for Fedora 21 (32-bit)
[here](https://github.com/ariestiyansyah/atom/raw/master/rpms/atom-0.182.0-0.1.fc21.i686.rpm).
Happy coding :D
| 30.034091 | 124 | 0.75104 | eng_Latn | 0.967027 |
930d7a0cc62bb28a57e2880bf5ae872b10005103 | 370 | md | Markdown | BaseDesign.md | kazenetu/ConvertCStoTS | 4ec0068fbdaf3f2a59302efef071093250be7145 | [
"MIT"
] | null | null | null | BaseDesign.md | kazenetu/ConvertCStoTS | 4ec0068fbdaf3f2a59302efef071093250be7145 | [
"MIT"
] | null | null | null | BaseDesign.md | kazenetu/ConvertCStoTS | 4ec0068fbdaf3f2a59302efef071093250be7145 | [
"MIT"
] | null | null | null | # Implementation So Far and Its Problems
In the implementation up to [0.9.1](https://github.com/kazenetu/ConvertCStoTS/releases/tag/0.9.1),
the responsibilities of C# analysis and TypeScript conversion are poorly separated:
* The C# analysis step performs everything up to and including the TypeScript conversion
* The TypeScript conversion step only handles file input/output
# Proposed Improvement
Clarify the roles of C# analysis and TypeScript conversion (a sketch follows the list):
* C# analysis
  * Read the .cs files
  * Build the SemanticModel and a list of auxiliary information
* TypeScript conversion
  * Convert to TypeScript based on the "SemanticModel and auxiliary information list"
  * Write out the .ts files
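A minimal C# sketch of what this separation might look like (all type and member names below are hypothetical, not taken from the repository):

```csharp
// Hypothetical two-phase design: analyze first, convert second.
public sealed class AnalyzeResult
{
    // Holds the SemanticModel plus the list of auxiliary information.
}

public interface ICSharpAnalyzer
{
    // Reads a .cs file and produces the analysis result.
    AnalyzeResult Analyze(string csFilePath);
}

public interface ITypeScriptConverter
{
    // Converts an analysis result into TypeScript source text.
    string Convert(AnalyzeResult analyzed);
}
```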
Note: build these as separate classes and switch over once they are reasonably complete. | 23.125 | 76 | 0.805405 | yue_Hant | 0.38924 |
930da605e95fa8781e46c6b5f42ff2c46a717fa0 | 1,232 | md | Markdown | _pages/casting.md | emory/inc-mm-v4 | 9cd3a8e3bf3eeecae18f4ee5dfbd57bded3730e4 | [
"MIT"
] | null | null | null | _pages/casting.md | emory/inc-mm-v4 | 9cd3a8e3bf3eeecae18f4ee5dfbd57bded3730e4 | [
"MIT"
] | null | null | null | _pages/casting.md | emory/inc-mm-v4 | 9cd3a8e3bf3eeecae18f4ee5dfbd57bded3730e4 | [
"MIT"
] | null | null | null | ---
layout: single
permalink: /casting/
date: 2013-08-31
last_modified_at: "2017-07-19 09:12:33"
title: "Casting for Photography Projects"
excerpt: "I only take the pictures."
description: "Photography Projects: Castings"
category: [castings, projects]
tags: [projects, photography, casting]
header:
overlay_image: /assets/images/skip.jpg
overlay_filter: rgba(200, 0, 140, 0.30)
caption: Emory Lundberg
teaser: /assets/images/skip-500x500.jpg
sidebar:
nav: "home"
---
# Current Projects
These are some creative projects I'm working on that I am very interested in finding subjects for. If you wish to contact me about any of them, please contact me using one of the methods outlined on [my 'about' page](/about/).
*C'mon, don't be shy, you're awesome!*
----
### Mona Lisa's Smile
My most ambitious project is [Mona Lisa's Smile](/casting/mona-lisa/), where I am photographing portraits of women with a big secret.
### Every Advantage
Every Advantage is a photojournalism project where I'm writing about privilege from the perspective of a straight white male livin' in America. Once I have a better grasp of what I want to do with this I'll no doubt be asking for subjects and recommendations.
### Selphie
TBD
| 31.589744 | 259 | 0.748377 | eng_Latn | 0.980542 |
930e79471e6d63ac5811e14b181cec739213f1ef | 4,625 | md | Markdown | Lync/LyncServer/lync-server-2013-delete-an-existing-collection-of-cdr-configuration-settings.md | chethankumarshetty1986/OfficeDocs-SkypeForBusiness | 7387d631cf895992906a46d3b7576a2ac76f5b4d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Lync/LyncServer/lync-server-2013-delete-an-existing-collection-of-cdr-configuration-settings.md | chethankumarshetty1986/OfficeDocs-SkypeForBusiness | 7387d631cf895992906a46d3b7576a2ac76f5b4d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Lync/LyncServer/lync-server-2013-delete-an-existing-collection-of-cdr-configuration-settings.md | chethankumarshetty1986/OfficeDocs-SkypeForBusiness | 7387d631cf895992906a46d3b7576a2ac76f5b4d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Lync Server 2013: Delete an existing collection of CDR configuration settings'
description: "Lync Server 2013: Delete an existing collection of CDR configuration settings."
ms.reviewer:
ms.author: v-lanac
author: lanachin
f1.keywords:
- NOCSH
TOCTitle: Delete an existing collection of CDR configuration settings
ms:assetid: 8ebf5da8-c0fc-498c-8d85-527d3be8479a
ms:mtpsurl: https://technet.microsoft.com/en-us/library/JJ688128(v=OCS.15)
ms:contentKeyID: 49733726
ms.date: 07/23/2014
manager: serdars
mtps_version: v=OCS.15
---
# Delete an existing collection of CDR configuration settings in Lync Server 2013
_**Topic Last Modified:** 2013-02-23_
Call Detail Recording (CDR) enables you to track usage of such things as peer-to-peer instant messaging sessions, Voice over Internet Protocol (VoIP) phone calls, and conferencing calls. This usage data includes information about who called whom, when they called, and how long they talked.
When you install Microsoft Lync Server 2013, a single, global collection of CDR configuration settings is created for you. Administrators also have the option of creating custom setting collections that can be applied to individual sites. By design, settings configured at the site scope take precedence over settings configured at the global scope. If you delete site-scoped settings, then CDR will be managed in that site by using the global settings.
Note that you can also “delete” the global settings. However, the global settings will not actually be removed. Instead, all the properties in that collection will be reset to their default values. For example, by default purging is enabled in a collection of CDR configuration settings. Suppose you modify the global collection so that purging is disabled. If you later delete the global settings, all the properties will be reset to their default values. In this case, that means that purging will once again be enabled.
You can remove CDR configuration settings by using the Lync Server Control Panel or the [Remove-CsCdrConfiguration](https://docs.microsoft.com/powershell/module/skype/Remove-CsCdrConfiguration) cmdlet.
## To remove CDR configuration settings with Lync Server Control Panel
1. In Lync Server Control Panel, click **Monitoring and Archiving**.
2. On the **Call Detail Recording** tab, select the collection (or collections) of CDR settings to be removed. To select multiple collections, click the first collection, hold down the Ctrl key, and click additional collections.
3. Click **Edit**, and then click **Delete**.
4. In the Lync Server Control Panel dialog box, click **OK**.
## Removing CDR Configuration Settings by Using Windows PowerShell Cmdlets
You can delete call detail recording configuration settings by using Windows PowerShell and the **Remove-CsCdrConfiguration** cmdlet. You can run this cmdlet either from the Lync Server 2013 Management Shell or from a remote session of Windows PowerShell. For details about using remote Windows PowerShell to connect to Lync Server, see the Lync Server Windows PowerShell blog article "Quick Start: Managing Microsoft Lync Server 2010 Using Remote PowerShell" at [https://go.microsoft.com/fwlink/p/?linkId=255876](https://go.microsoft.com/fwlink/p/?linkid=255876).
## To remove a specified collection of CDR configuration settings
- This command removes the CDR configuration settings applied to the Redmond site:
Remove-CsCdrConfiguration -Identity "site:Redmond"
## To remove all the CDR configuration settings applied to the site scope
- This command removes all the CDR configuration settings applied to the site scope:
Get-CsCdrConfiguration -Filter "site:*" | Remove-CsCdrConfiguration
## To remove all the CDR configuration settings that disable call detail recording
- This command removes all the CDR configuration settings where Call Detail recording has been disabled:
Get-CsCdrConfiguration | Where-Object {$_.EnableCDR -eq $False} | Remove-CsCdrConfiguration
For more information, see the help topic for the [Remove-CsCdrConfiguration](https://docs.microsoft.com/powershell/module/skype/Remove-CsCdrConfiguration) cmdlet.
| 42.045455 | 564 | 0.774919 | eng_Latn | 0.950374 |
930e928ec7cd1f90965a1cc09fa2fb186d4666d4 | 178 | md | Markdown | README.md | awtwhite/atom-winterdawn-syntax | 9f6ac762da84ee5c41a340314961272a93ce6beb | [
"MIT"
] | null | null | null | README.md | awtwhite/atom-winterdawn-syntax | 9f6ac762da84ee5c41a340314961272a93ce6beb | [
"MIT"
] | null | null | null | README.md | awtwhite/atom-winterdawn-syntax | 9f6ac762da84ee5c41a340314961272a93ce6beb | [
"MIT"
] | null | null | null | # winterdawn-syntax
A light syntax theme for Atom using a subtle colour palette.
Pairs great with [Atom Material UI theme](https://github.com/atom-material/atom-material-ui).
| 29.666667 | 93 | 0.780899 | eng_Latn | 0.559283 |
930ea0ddcbedcbbe4ea52dee95a172b19273d1d9 | 10,211 | md | Markdown | _posts/2019-11-03-ffts-in-rust.md | limads/limads.github.io | 2ce99b5b3ee8210cf4d26599e600465d967f4564 | [
"MIT"
] | null | null | null | _posts/2019-11-03-ffts-in-rust.md | limads/limads.github.io | 2ce99b5b3ee8210cf4d26599e600465d967f4564 | [
"MIT"
] | null | null | null | _posts/2019-11-03-ffts-in-rust.md | limads/limads.github.io | 2ce99b5b3ee8210cf4d26599e600465d967f4564 | [
"MIT"
] | null | null | null | ---
layout: post
title: Fast (and memory-safe) Fourier transforms in Rust
---
## The basics
MKL's [DFTI](https://software.intel.com/en-us/mkl-developer-reference-c-fft-functions) module has routines for performing Fast Fourier Transforms (FFTs) using thread-based parallelism, and seems to be a nice option to solve signal-processing problems in Rust projects, given how easy it is to use C libraries through the FFI. After generating [bindings](https://limads.github.io/2019/10/27/linking-against-mkl-in-rust-programs/) to the C module, it wasn't too hard to write a more idiomatic higher-level API that offers some compile-time guarantees. I'll describe the use and reasoning behind the API here, which should be applicable to other modules of the library.
The API of our binding will use the [ndarray::ArrayBase](https://docs.rs/ndarray/0.13.0/ndarray/struct.ArrayBase.html) data structure, which is a container generic over several properties:
- Over the basic scalar type (`f32`, `f64`, and so on);
- Over the ownership model (`Array` for an owned memory region, `ArrayView` for non-owned immutable references and `ArrayViewMut` for non-owned mutable references);
- Over the dimension of the data it holds (`Dim<usize>` for 1D arrays, `Dim<(usize, usize)`> for 2d arrays, and so on);
This generality lets us just plug in a complex number type such as `num::Complex<f32>` as the basic scalar type of the array that will store the forward output of the Fourier transform. The C interface does not work with complex number types, but rather with plain single-precision or double-precision values, following the convention of representing complex numbers in their cartesian form contiguously in memory:

(The above layout is valid only for the `DFTI_CONJUGATE_EVEN_STORAGE = DFTI_COMPLEX_COMPLEX` setting of the DFTI descriptor)
Luckily, this is the same memory layout as `Array<Complex<D>>`, as long as we supply a numeric type D (either `f32` or `f64`) and dimension sizes that match the C array. MKL has an economical way of representing the output of the forward transform called conjugate-even storage, so called because whenever we take the FFT of a real signal (which is usually the case when we are working with signals coming from actual measurements), the resulting transform is even-symmetric (up to complex conjugation), so only the first portion of the signal relative to the point of symmetry needs to be represented. Since each element of the complex output fills two scalar positions in memory, the halved output occupies a memory region which is almost the same size as the input.
## The API
We provide users with a struct `FFTPlan` that follows the same generic design as `ndarray`. We make `FFTPlan` generic with respect to the scalar type `A` and the dimension of the output `I`, so the user can detect any issues with those parameters at compile time, and so we can reduce the amount of checking we have to do at runtime. After the struct is created, the user can repeatedly call the `.forward(data)` or `.backward(data)` methods to perform the forward and inverse transforms:
```rust
// Create FFTPlan
let n : usize = 100;
let mut fft_plan = FFTPlan::<f32, [Ix;1]>::new(
[n as usize].into_dimension())?;
// Create test sin wave with angular frequency 4
let mut a = Array1::<f32>::linspace(0.0, 4.0 * 2.0 * PI, n);
a = a.map(|x| { x.sin() });
// Perform forward transform
match fft_plan.forward(a.view()) {
Ok(b) => {
        // b is an ArrayViewMut<Complex<f32>, Dim<[usize; 1]>> with the results.
        println!("Time domain: {:?}\n", a);
println!("Frequency domain: {:?}\n",
(1.0 / (n as f32))*to_amplitude_array(b.view()));
},
Err(s) => { println!("{}\n\n", s); }
}
```
The decision of returning an `ArrayViewMut` whose referenced `Array` is owned by `FFTPlan` avoids two things:
- Having to initialize a new memory region for each transform;
- The potentially costly copy to a container owned by the user.
Since the calculated data is not re-used by FFTPlan in any other way (it just owns it), the user can modify it as needed without incurring the copy overhead. If the user does something that leaves the state of the underlying array invalid for the next transform, such as changing its dimensions, the next call to `.forward()` will simply return `Err(s)` reporting a dimension size mismatch. Doing the whole manipulation inside a `match` or `if let` block not only guarantees the user is dealing with a valid output, but improves clarity of the program, since the lifetime of the mutable reference the user has access to is explicit.
If the user does not need to modify the array, he can take a `.view()` from the reference instead; or if he actually wants to own a copy of the result, he can call `.to_owned()` inside the block.
Continuing with the example, the `to_amplitude_array()` is just a function to recover the amplitude information from the cartesian representation returned:
```rust
pub fn to_amplitude_array<A,D>(
arr : ArrayView<Complex<A>,D>)
-> Array<A, D>
where D : Dimension,
A : Float + Clone {
arr.map(|a| { a.to_polar().0 })
}
```
After applying it to the array and normalizing it, we can visualize the content printed on the screen and check that the transform works:

## Implementation
All information MKL needs to perform a FFT (dimension, precision, memory layout) is stored into a `DFTI_DESCRIPTOR` object, which we will wrap together with the owned memory regions that will contain the transform outputs:
```rust
pub struct FFTPlan<A, I : IntoDimension>
where Dim<I> : Dimension,
A : From<f32> + Copy + Zero {
// Points to the owned descriptor. Valid for the lifetime of the struct.
handle : *mut DFTI_DESCRIPTOR,
// Owned memory region for the output of .forward()
forward_buffer : Array<Complex<A>, Dim<I>>,
// Owned memory region for the output of .backward()
backward_buffer : Array<A, Dim<I>>,
}
```
In the MKL C API, all the information the `DFTI_DESCRIPTOR` requires is set at runtime, which is at odds with our design using generics. To bridge this gap, we write specialized implementation blocks for the kinds of transforms we need, that only contain a `new()` associated function, which will call the C API with the correct parameters encoded as literals on each implementation. This has the added benefit of only allowing the user to instantiate our object with types supported by the MKL (`f32` and `f64`). The example below instantiates a FFTPlan to solve the problem of 1D real Fourier transforms:
```rust
impl FFTPlan<f32, [Ix;1]> {
pub fn new (
input_dims : Dim<[Ix;1]>)
-> Result<FFTPlan<f32, [Ix;1]>, String> {
let mut backward_buffer =
FFTPlan::<f32, [Ix;1]>::initialize_backward_buffer(input_dims);
let mut forward_buffer =
FFTPlan::<f32, [Ix;1]>::initialize_forward_buffer(input_dims);
let sz0 = input_dims.into_pattern();
// Encapsulates C API. Pass number of dimensions
//in tuple, and whether or not to use double precision.
let handle = build_descriptor( (sz0, 0), false)?;
Ok( FFTPlan::<f32, [Ix;1]>{
handle, input_dims, forward_buffer, backward_buffer} )
}
```
The `build_descriptor()` dispatches our arguments to the MKL C constants required at the descriptor initialization:
```rust
fn build_descriptor(dims : (usize, usize), double_prec : bool)
-> Result<*mut DFTI_DESCRIPTOR, String> {
/* (...) Transform our args to MKL constants */
unsafe {
let mut descriptor : DFTI_DESCRIPTOR =
std::mem::uninitialized();
let mut handle : *mut DFTI_DESCRIPTOR =
&mut descriptor as *mut DFTI_DESCRIPTOR;
let mut handle_ptr : *mut *mut DFTI_DESCRIPTOR =
&mut handle as *mut *mut DFTI_DESCRIPTOR;
let fail = DftiCreateDescriptor(
handle,
prec_const,
prec_value,
ndims as c_long,
dims_arr
) as u32;
        check_dfti_status(fail, "Could not construct Planner.")?;
/* (...) Set remaining properties */
        if fail == DFTI_NO_ERROR {
            Ok(handle)
        } else {
            Err("Could not construct descriptor.".into())
        }
    }
}
```
Now that the correct descriptor is instantiated, the actual FFT computation can be generic over dimension and scalar type:
```rust
impl<A, I : IntoDimension> FFTPlan<A, I>
where Dim<I> : Dimension,
A : From<f32> + Copy + Zero {
pub fn forward(&mut self, arr : ArrayView<A, Dim<I>>)
    -> Result<ArrayViewMut<Complex<A>, Dim<I>>, String> {
/* A runtime dimension check is required because
we opted for our users to be able to manipulate mutable references
to the underlying owned buffer for performance reasons -
But our API is still safe because it will return an Error on
the event of dimension mismatch. */
self.check_valid_forward_dim(arr.raw_dim())?;
/* (...) Get pointers to input and buffer arrays via ArrayBase::mem_ptr() */
unsafe {
let status = DftiComputeForward(
self.handle, in_ptr, out_ptr);
if status == 0 {
return Ok(self.forward_buffer.view_mut());
} else {
check_dfti_status(status as u32, "Error computing DFT.")?;
}
}
}
/* (...) Auxiliary bounds-checking functions, buffer initialization, etc. */
}
```
As soon as I get some details right, I plan to make this available on [crates.io](https://crates.io). Adding to this FFT API another set of routines that calls into the MKL convolution operations is all it takes for a production-ready crate with basic signal and image processing routines that benefit from MKL performance and the memory safety afforded by the Rust compiler.
## Relevant links:
[Fast Fourier Transform (wiki)](https://en.wikipedia.org/wiki/Fast_Fourier_transform)
[MKL FFT functions documentation](https://software.intel.com/en-us/mkl-developer-reference-c-fft-functions)
[ndarray repository](https://github.com/rust-ndarray/ndarray)
| 52.096939 | 802 | 0.712271 | eng_Latn | 0.993082 |
930f881456242adb1d8b27b50ac2a740740009f9 | 342 | md | Markdown | _posts/2016/2016-10-13-Baco.md | jiangzerui/jiangzerui.github.io | bca5f095718d9ca0642e0c7870afb6152080b80c | [
"MIT"
] | null | null | null | _posts/2016/2016-10-13-Baco.md | jiangzerui/jiangzerui.github.io | bca5f095718d9ca0642e0c7870afb6152080b80c | [
"MIT"
] | null | null | null | _posts/2016/2016-10-13-Baco.md | jiangzerui/jiangzerui.github.io | bca5f095718d9ca0642e0c7870afb6152080b80c | [
"MIT"
] | null | null | null | ---
layout: post
title: "Shushu's English Name"
date: 2016-10-13
categories: Family
---

>**Shushu gave himself an English name; what an amazing little guy!**
The moment I got home today, Grandma told me that Shushu had come up with an English name for himself. Transliterated, it is "Baco". Way to go, little Shushu! When you call him "Baco" he actually answers, and if you ask who Baco is, the little guy pats his own chest. Too funny...
Back in the day, your dad finished foreign-language school and was still agonizing over what Chinese name to pick, yet you had yours sorted out before the age of two 🤔🤗, which makes Dady quite envious~ Grow up well, kiddo; you'll surely be even better than babi 😀 | 36.5 | 85 | 0.789474 | yue_Hant | 0.948412 |
93109c4f2aa45ed0be0373867cb300ddd002ad68 | 22 | md | Markdown | README.md | nitss007/fff-grapgql-goodread | b76143899713117bf2b8ed9a0a69519e7938e9fe | [
"MIT"
] | null | null | null | README.md | nitss007/fff-grapgql-goodread | b76143899713117bf2b8ed9a0a69519e7938e9fe | [
"MIT"
] | null | null | null | README.md | nitss007/fff-grapgql-goodread | b76143899713117bf2b8ed9a0a69519e7938e9fe | [
"MIT"
] | null | null | null | # fff-grapgql-goodread | 22 | 22 | 0.818182 | eng_Latn | 0.741083 |
9310dde0c22c5a48084936d4bb6a3e8af6f92251 | 3,013 | md | Markdown | articles/marketplace/cloud-partner-portal-orig/si-getting-started.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/marketplace/cloud-partner-portal-orig/si-getting-started.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/marketplace/cloud-partner-portal-orig/si-getting-started.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Get started with Seller Insights
description: An introduction to the Seller Insights feature on the Cloud Partner Portal.
author: dsindona
ms.service: marketplace
ms.subservice: partnercenter-marketplace-publisher
ms.topic: conceptual
ms.date: 09/14/2018
ms.author: dsindona
ms.openlocfilehash: b86c2c8b8d0e44adffa0411799b9be01b9f54a9d
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/28/2020
ms.locfileid: "80285346"
---
<a name="getting-started-with-seller-insights"></a>Getting started with Seller Insights
====================================
This article describes the Seller Insights feature that is available to you on the [Cloud Partner Portal](https://cloudpartner.azure.com/#insights).
<a name="insights-tour"></a>Insights tour
-------------
The Cloud Partner Portal offers you insights about your customers and about usage of the Azure and VS marketplaces. Here is a brief overview of the different data and insights you are likely to work with the most.
<a name="top-navigation-bar"></a>Top navigation bar
------------------
After you select **Insights** on the left menu bar, a navigation bar appears at the top listing the Insights modules you have access to.
1. **Summary** - this tab shows charts, trends, and values of the data that publishers look for most often.
2. **Payout** - this tab shows payout information and related transactions in chart and downloadable formats.
3. **Orders & usage** - this tab shows order and usage information in chart and downloadable formats.
4. **Customer** - this tab lists information about customers and their purchases.
5. **Deployments** - this tab shows deployment success and failure information both in chart formats and at the event level.
6. **Downloads** - downloading large data sets is easier and less disruptive with the new download experience.
You may find that you can see only a limited set of the modules listed above.
Only users with *owner* permissions can view the **Payout** and **Customer** modules, because of sensitive customer and company information. If you need access to these modules, you can have an owner role in your organization change your permissions.
<a name="tips"></a>Tips:
-----
- Be sure to adjust the dates to display the information you are most interested in.
- Download the transaction-level data to perform further analysis of the information provided by Seller Insights.
- If you are looking for payout or customer information, make sure you are signed in with the owner role and not the contributor role. For more information about user permissions, see [Manage users](./cloud-partner-portal-manage-users.md).
<a name="finding-more-help"></a>Finding more help
-----------------
- [Seller Insights definitions](./si-insights-definitions-v4.md) - find definitions for metrics and data
- [Getting started with Seller Insights](./si-getting-started.md) - an introduction to the Seller Insights feature.
| 47.078125 | 259 | 0.768005 | ces_Latn | 0.999974 |
93115df129d2e3e105507288b73fdb1cf7ac1b4c | 293 | md | Markdown | README.md | bukezhi/daohang | 94b7bbc79c307f870fcd404d10954847302edf89 | [
"MIT"
] | null | null | null | README.md | bukezhi/daohang | 94b7bbc79c307f870fcd404d10954847302edf89 | [
"MIT"
] | null | null | null | README.md | bukezhi/daohang | 94b7bbc79c307f870fcd404d10954847302edf89 | [
"MIT"
] | null | null | null | # daohang
[](https://github.com/bukezhi/daohang/actions/workflows/generate.yml)
bukezhi's navigation site, generated by [gena](https://github.com/x1ah/gena)
Site address: https://bukezhi.github.io/daohang/ | 41.857143 | 154 | 0.774744 | zul_Latn | 0.242119 |
9312cdbc9d7d76b5f45cbeab3e6a20de4bbc403d | 608 | md | Markdown | DESIGNNOTES.md | windyroad/hyperstate | 81309f8288551990bff5c629ba79d8e0557855ef | [
"Apache-2.0"
] | 3 | 2017-07-15T12:14:06.000Z | 2018-12-13T08:40:45.000Z | DESIGNNOTES.md | windyroad/hyperstate | 81309f8288551990bff5c629ba79d8e0557855ef | [
"Apache-2.0"
] | 13 | 2016-06-16T22:36:08.000Z | 2016-07-01T13:10:29.000Z | DESIGNNOTES.md | windyroad/hyperstate | 81309f8288551990bff5c629ba79d8e0557855ef | [
"Apache-2.0"
] | 7 | 2016-06-27T02:20:53.000Z | 2020-04-29T11:53:43.000Z | # Persistence
We don't want to do massive serialisation to and from an SQL database. That's just wasteful and makes no real sense as
we are going to be joining entities via the database or searching the database for entities (we'd use a search engine
for that).
Serialisation and deserialisation are expensive, so let's try to avoid them as much as possible.
Does it make sense to use HDFS directly? Keep what we can in memory and let HDFS serialise to disk when needed. Is that possible?
What about Chronicle Map?
Or, can we be abstract and support multiple persistence mechanisms that don't require JPA? | 46.769231 | 129 | 0.789474 | eng_Latn | 0.999481 |
93136d25622a9c48042639c014d4c9c0c8ed0c23 | 5,838 | md | Markdown | articles/azure-government/documentation-government-dataandstorage.md | OpenLocalizationTestOrg/azure-docs-pr15_nl-BE | 0820e32985e69325be0aaa272636461e11ed9eca | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-government/documentation-government-dataandstorage.md | OpenLocalizationTestOrg/azure-docs-pr15_nl-BE | 0820e32985e69325be0aaa272636461e11ed9eca | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-government/documentation-government-dataandstorage.md | OpenLocalizationTestOrg/azure-docs-pr15_nl-BE | 0820e32985e69325be0aaa272636461e11ed9eca | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
    pageTitle="Azure Government documentation | Microsoft Azure"
    description="This provides a comparison of features and guidance on developing applications for Azure Government"
services="Azure-Government"
cloud="gov"
documentationCenter=""
authors="ryansoc"
manager="zakramer"
editor=""/>
<tags
ms.service="multiple"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="azure-government"
ms.date="09/30/2016"
ms.author="ryansoc"/>
# <a name="azure-government-data-and-storage"></a>Azure Government data and storage
## <a name="azure-storage"></a>Azure Storage
For details on this service and how to use it, see the [public Azure Storage documentation](https://azure.microsoft.com/documentation/services/storage/).
### <a name="variations"></a>Variations
The URLs for storage accounts in Azure Government are different:
Service type|Azure public|Azure Government
---|---|---
Blob storage|*.blob.core.windows.net|*.blob.core.usgovcloudapi.net
Queue storage|*.queue.core.windows.net|*.queue.core.usgovcloudapi.net
Table storage|*.table.core.windows.net|*.table.core.usgovcloudapi.net
>[AZURE.NOTE] All code and scripts must take these endpoints into account. See [Configure Azure Storage connection strings](../storage-configure-connection-string.md#creating-a-connection-string-to-the-explicit-storage-endpoint).
For more information about the APIs, see the <a href="https://msdn.microsoft.com/en-us/library/azure/mt616540.aspx">CloudStorageAccount constructor</a>.
The endpoint suffix to use in these overloads is core.usgovcloudapi.net.
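For example, a storage connection string for a hypothetical account named `myaccount` in Azure Government would use this suffix (the account name and key below are placeholders):

```
DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<your-key>;EndpointSuffix=core.usgovcloudapi.net
```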
### <a name="considerations"></a>Considerations
The following information identifies the Azure Government boundary for Azure Storage:
| Regulated/controlled data permitted | Regulated/controlled data not permitted |
|--------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Data entered, stored, and processed in an Azure Storage product can contain export-controlled data. Static authenticators, such as passwords and smart card PINs for access to Azure platform components. Private keys of certificates used to manage Azure platform components. Other security information/secrets, such as certificates, encryption keys, master keys, and storage keys stored in Azure services. | Azure Storage metadata is not permitted to contain export-controlled data. This metadata includes all configuration data entered when creating and maintaining your storage product. Do not enter regulated/controlled data into the following fields: resource groups, deployment names, resource names, resource tags
## <a name="premium-storage"></a>Premium Storage
For more information about this service and how to use it, see [Premium Storage: high-performance storage for Azure Virtual Machine workloads](../storage/storage-premium-storage.md).
### <a name="variations"></a>Variations
Premium Storage is generally available in US Gov Virginia. This includes the DS-series Virtual Machines.
### <a name="considerations"></a>Considerations
The same considerations for stored data as above apply to Premium Storage accounts.
## <a name="sql-database"></a>SQL Database
See the <a href="https://msdn.microsoft.com/en-us/library/bb510589.aspx">Security Center for SQL Server Database Engine</a> and the [Azure SQL Database public documentation](https://azure.microsoft.com/documentation/services/sql-database/) for additional guidance on configuring metadata visibility and security best practices.
### <a name="variations"></a>Variations
SQL Database V12 is generally available in Azure Government.
The address of Azure SQL servers in Azure Government is different:
Service type|Azure public|Azure Government
---|---|---
SQL Database|*.database.windows.net|*.database.usgovcloudapi.net
### <a name="considerations"></a>Considerations
The following information identifies the Azure Government boundary for Azure SQL Database:
| Regulated/controlled data permitted | Regulated/controlled data not permitted |
|--------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| All data stored and processed in Microsoft Azure SQL can contain Azure Government-regulated data. Use database tools for data transfer of Azure Government-regulated data. | Azure SQL metadata is not permitted to contain export-controlled data. This metadata includes all configuration data entered when creating and maintaining your storage product. Do not enter regulated/controlled data into the following fields: database name, subscription name, resource groups, server name, server admin login, deployment names, resource names, resource tags
## <a name="next-steps"></a>Next steps
For additional information and updates, subscribe to the <a href="https://blogs.msdn.microsoft.com/azuregov/">Microsoft Azure Government blog</a>.
| 66.340909 | 804 | 0.683625 | nld_Latn | 0.989951 |
9315125c940e04ddf44e8c1e0746e3b61adf7975 | 6,731 | md | Markdown | source/localizable/tutorial/autocomplete-component.md | brianarn/emberjs-guides | cdeda028bb335028ecb72513657337f77ec281f5 | [
"MIT"
] | null | null | null | source/localizable/tutorial/autocomplete-component.md | brianarn/emberjs-guides | cdeda028bb335028ecb72513657337f77ec281f5 | [
"MIT"
] | null | null | null | source/localizable/tutorial/autocomplete-component.md | brianarn/emberjs-guides | cdeda028bb335028ecb72513657337f77ec281f5 | [
"MIT"
] | null | null | null | As they search for a rental, users might also want to narrow their search
to a specific city. Let's build a component that will let them search for
properties within a city, and also suggest cities to them as they type.
To begin, let's generate our new component. We'll call this component
`filter-listing`.
```shell
ember g component filter-listing
```
As before, this creates a Handlebars template
(`app/templates/components/filter-listing.hbs`) and a JavaScript file
(`app/components/filter-listing.js`).
The Handlebars template looks like this:
```app/templates/components/filter-listing.hbs
City: {{input value=filter key-up=(action 'autoComplete' filter)}}
<button {{action 'search'}}>Search</button>
<ul>
{{#each filteredList as |item|}}
<li {{action 'choose' item.city}}>{{item.city}}</li>
{{/each}}
</ul>
```
It contains an [`{{input}}`](../../templates/input-helpers) helper that
renders as a text field that the user can type in to look for properties
in a given city. The `value` property of the `input` will be bound to the
`filter` property in our component. The `key-up` property
will be bound to a `autoComplete` action in our backing object, and passes
the `filter` property as a parameter.
It also contains a button, whose `action` parameter is bound to the
`search` action in our component.
Lastly, it contains an unordered list, that uses the `filteredList`
property for data, and displays the `city` property of each item in the
list. Clicking the list item will fire the `choose` action, which will
populate the `input` field with the name of the `city` in the clicked list
item.
Here is what the component's JavaScript looks like:
```app/components/filter-listing.js
import Ember from 'ember';

export default Ember.Component.extend({
filter: null,
filteredList: null,
actions: {
autoComplete() {
this.get('autoComplete')(this.get('filter'));
},
search() {
this.get('search')(this.get('filter'));
},
choose(city) {
this.set('filter',city);
}
}
});
```
There's a property for each of the `filter` and `filteredList`, and
actions as described above. What's interesting is that only the `choose`
action is defined by the component. The actual logic of each of the
`autoComplete` and `search` actions are pulled from the component's
properties, which means that those actions need to be [passed]
(../../components/triggering-changes-with-actions/#toc_passing-the-action-to-the-component)
in by the calling object, a pattern known as _closure actions_.
To see how this works, change your `index.hbs` template to look like this:
```app/templates/index.hbs
<h1>Welcome to Super Rentals</h1>
We hope you find exactly what you're looking for in a place to stay.
<br /><br />
{{filter-listing filteredList=filteredList
autoComplete=(action 'autoComplete') search=(action 'search')}}
{{#each model as |rentalUnit|}}
{{rental-listing rental=rentalUnit}}
{{/each}}
{{#link-to 'about'}}About{{/link-to}}
{{#link-to 'contact'}}Click here to contact us.{{/link-to}}
```
We've added the `filter-listing` component to our `index.hbs` template. We
then pass in the functions and properties we want the `filter-listing`
component to use, so that the `index` page can define some of how it wants
the component to behave, and so the component can use those specific
functions and properties.
For this to work, we need to introduce a `controller` into our app.
Generate a controller for the `index` page by running the following:
```shell
ember g controller index
```
Now, define your new controller like so:
```app/controllers/index.js
import Ember from 'ember';

export default Ember.Controller.extend({
filteredList: null,
actions: {
autoComplete(param) {
if(param !== "") {
this.store.query('rental', {city: param}).then((result) => {
this.set('filteredList',result);
});
}
else {
      this.set('filteredList', null); // clear the suggestion list
}
},
search(param) {
if(param !== "") {
this.store.query('rental', {city: param}).then((result) => {
this.set('model',result);
});
}
else {
      this.set('model', null); // clear the search results
}
}
}
});
```
As you can see, we define a property in the controller called
`filteredList`, that is referenced from within the `autoComplete` action.
When the user types in the text field in our component, this is the
action that is called. This action filters the `rental` data to look for
records in data that match what the user has typed thus far. When this
action is executed, the result of the query is placed in the
`filteredList` property, which is used to populate the autocomplete list
in the component.
We also define a `search` action here that is passed in to the component,
and called when the search button is clicked. This is slightly different
in that the result of the query is actually used to update the `model`
of the `index` route, and that changes the full rental listing on the
page.
For these actions to work, we need to modify the Mirage `config.js` file
to look like this, so that it can respond to our queries.
```app/mirage/config.js
export default function() {
this.get('/rentals', function(db,request) {
let rentals = [{
type: 'rentals',
id: 1,
attributes: {
title: 'Grand Old Mansion',
owner: 'Veruca Salt',
city: 'San Francisco',
type: 'Estate',
bedrooms: 15,
image: 'https://upload.wikimedia.org/wikipedia/commons/c/cb/Crane_estate_(5).jpg'
}
}, {
type: 'rentals',
id: 2,
attributes: {
title: 'Urban Living',
owner: 'Mike Teavee',
city: 'Seattle',
type: 'Condo',
bedrooms: 1,
image: 'https://upload.wikimedia.org/wikipedia/commons/0/0e/Alfonso_13_Highrise_Tegucigalpa.jpg'
}
}, {
type: 'rentals',
id: 3,
attributes: {
title: 'Downtown Charm',
owner: 'Violet Beauregarde',
city: 'Portland',
type: 'Apartment',
bedrooms: 3,
image: 'https://upload.wikimedia.org/wikipedia/commons/f/f7/Wheeldon_Apartment_Building_-_Portland_Oregon.jpg'
}
}];
if(request.queryParams.city !== undefined) {
let filteredRentals = rentals.filter(function(i) {
return i.attributes.city.toLowerCase().indexOf(request.queryParams.city.toLowerCase()) !== -1;
});
return { data: filteredRentals };
}
else {
return { data: rentals };
}
});
}
```
With these changes, users can search for properties in a given city, with
a search field that provides suggestions as they type.
| 33.157635 | 120 | 0.670777 | eng_Latn | 0.989822 |
9316b25b1e5c7de1976bfd39ceb21abd45ec8e90 | 3,157 | md | Markdown | README.md | skitt/submariner-bot | ab686964b6bbcf143c360afd467f9461af72d10d | [
"Apache-2.0"
] | 1 | 2022-03-03T05:12:11.000Z | 2022-03-03T05:12:11.000Z | README.md | skitt/submariner-bot | ab686964b6bbcf143c360afd467f9461af72d10d | [
"Apache-2.0"
] | 54 | 2020-02-21T16:23:42.000Z | 2022-03-17T10:39:09.000Z | README.md | skitt/submariner-bot | ab686964b6bbcf143c360afd467f9461af72d10d | [
"Apache-2.0"
] | 6 | 2020-02-21T16:07:14.000Z | 2021-05-20T09:51:16.000Z | # submariner-bot
## How it works
submariner-bot listens for webhook events on port 3000 over http. Those events
are in the github webhook event format
described [here](https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads)
We use a [library](https://github.com/go-playground/webhooks/tree/master/github) that provides a good interface to handle those events
which are handled [here](https://github.com/submariner-io/submariner-bot/blob/devel/pkg/handler/handler.go):
## Developing and testing locally
You need Go to run and test submariner-bot locally:
```bash
export GO111MODULE=on
export GITHUB_TOKEN=<a token, you can create one in https://github.com/settings/tokens>
export WEBHOOK_SECRET=your-random-phrase # this is a password for anybody accessing submariner-bot
export SSH_PK=/your/ssh/private/key
go run pkg/main/main.go # or just do it from your favorite IDE
```
Then you can push events to localhost:3000.
There are probably tools to simulate events but if you want to simulate it on your host you can create a public endpoint with
[ngrok](https://ngrok.com/).
You should sign up for the free version so your tunnels will not be time-limited.
On another terminal (keep it open):
```bash
$ ngrok authtoken <this is provided when you sign-up, optional>
$ ngrok http 3000
ngrok by @inconshreveable (Ctrl+C to quit)
Session Status online
Account your-email (Plan: Free)
Version 2.3.40
Region United States (us)
Web Interface http://127.0.0.1:4040
Forwarding http://f05c3eb7fe60.ngrok.io -> http://localhost:3000
Forwarding https://f05c3eb7fe60.ngrok.io -> http://localhost:3000
Connections ttl opn rt1 rt5 p50 p90
0 0 0.00 0.00 0.00 0.00
```
At this point you'll need a submariner admin to [setup your webhook](https://github.com/organizations/submariner-io/settings/hooks/new)
while in development with the forwarding address `https://f05c3eb7fe60.ngrok.io` and your WEBHOOK_SECRET.
Try not to close ngrok to avoid the webhook URL from changing (starting a new session)
## setup
```bash
export NS=pr-brancher-webhook
kubectl create namespace $NS
# create a bot account with permission to your repos, and create a token in your bot account: https://github.com/settings/tokens
kubectl create -n $NS secret generic pr-brancher-secrets --from-file=ssh_pk=./id_rsa --from-literal=githubToken=$GITHUB_TOKEN
kubectl apply -n $NS -f deployment/role.yml
kubectl apply -n $NS -f deployment/deployment.yaml
kubectl apply -n $NS -f deployment/service.yml
```
### setup with https/letsencrypt
```bash
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
kubectl apply -f deployment/letsencrypt-prod-issuer.yaml # you may need to edit the class in the yaml based on your ingress
```
## update image
```bash
kubectl rollout restart deployment/pr-brancher
```
| 40.474359 | 135 | 0.705417 | eng_Latn | 0.893573 |
9316d8079603b668bc38b69ee8c18b2236975000 | 1,272 | md | Markdown | README.md | mintproject/mint-ui-lit | 20f923eae6b8700f10cd3a12e9dae14a238d0f4c | [
"Apache-2.0"
] | 2 | 2019-09-19T18:22:56.000Z | 2020-06-17T06:56:20.000Z | README.md | mintproject/mint-ui-lit | 20f923eae6b8700f10cd3a12e9dae14a238d0f4c | [
"Apache-2.0"
] | 342 | 2019-08-06T16:28:33.000Z | 2022-03-31T20:05:07.000Z | README.md | mintproject/mint-ui-lit | 20f923eae6b8700f10cd3a12e9dae14a238d0f4c | [
"Apache-2.0"
] | 1 | 2019-07-15T20:25:01.000Z | 2019-07-15T20:25:01.000Z | # mint-ui-lit [](https://travis-ci.com/mintproject/mint-ui-lit)
MINT assists analysts to easily use sophisticated simulation models and data in order to explore the role of weather and climate in water on food availability in select regions of the world.
## Installation
The portal is connected to a Hasura GraphQL database, an execution engine, and the model catalog. You can follow this repository to install them: [MINT installation package](https://github.com/mintproject/installation_public)

To connect the UI with the other services, please copy the configuration sample file `./src/config/config.json.sample`:
```bash
$ cp ./src/config/config.json.sample ./src/config/config.json
```
### Using Docker
Build the image
```
$ docker build --file .Dockerfile-actions -t mint_ui .
```
Push the image
```bash
$ docker tag mint_ui <your_username>/mint_ui
$ docker push <your_username>/mint_ui
```
### Without Docker
To create the production build use:
```
yarn build
```
You can start the development server with:
```
yarn start
```
## Deploy using GitHub actions
Remember to encrypt the configuration file before committing:
```bash
gpg --symmetric --cipher-algo AES256 src/config/config-tacc.json; mv src/config/config-tacc.json.gpg .
```
| 24 | 234 | 0.748428 | eng_Latn | 0.933492 |
931733126c8c0dd61df7fbc0d8562ec9d2d50814 | 3,441 | md | Markdown | README.md | umtkyck/socketsocketcan | ddb087bbbed17320b8fc43d9dc80473cd88ebb2d | [
"MIT"
] | 10 | 2019-02-13T16:29:18.000Z | 2021-06-22T12:24:51.000Z | README.md | umtkyck/socketsocketcan | ddb087bbbed17320b8fc43d9dc80473cd88ebb2d | [
"MIT"
] | 4 | 2019-02-20T08:01:21.000Z | 2021-07-30T13:33:10.000Z | README.md | umtkyck/socketsocketcan | ddb087bbbed17320b8fc43d9dc80473cd88ebb2d | [
"MIT"
] | 7 | 2019-02-13T17:08:12.000Z | 2021-07-03T02:31:34.000Z | # SocketCAN over TCP
For when you want to use the [Python CAN package](https://github.com/hardbyte/python-can) but are running on a resource-constrained device, such as a Raspberry Pi.
I've found that for a busy bus (> 1000 messages/second), my Raspberry Pi was dropping messages: we'd reached the limit of what Python could achieve. However, it's still a really powerful library for processing messages and integrating other services.
This package lets you run `python-can` on a more powerful machine, by setting up a TCP connection to the Linux device. A small C client runs on the Linux device to send/receive CAN messages (with timestamps). This is perfect for listening to a bus and sending occasional messages. I haven't tested this\* with anything that demands high-frequency periodic sending of messages, but using the CAN broadcast manager will probably be better than using this directly.
\* *I haven't tested this much at all. Sorry. I did have 3 terminals running `cangen` at a 1ms interval for 10 minutes, and didn't have any issues (it was identical to `candump` except where `candump` locked up and dropped a few hundred messages).*
# CAN Set-up
Enable `vcan` or `can0` (or whichever) on the Linux device:
```
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set vcan0 up
```
or
```
sudo ip link set can0 up type can bitrate 1000000
```
# Installation
## Client
Copy the `client` folder to your Linux device and run `make`
By default the client is set to receive the message you've sent (personally I've found this really useful for logging because the timestamps are aligned), but you can disable it in `client.c`: `#define RECV_OWN_MSGS 0`
## Server
Using a virtual environment is really recommended here.
`pip install -e path/to/socketsocketcan-repo`
# Usage
On creating a `TCPBus` object, it will block until a connection with the client is made (i.e. start the client now). Then, you can use the bus to `send()` and `recv()` messages as usual. By default, the bus accepts connections from any host; you can limit this using the `hostname="A_PARTICULAR_HOSTNAME"` keyword argument.
The `TCPBus` runs a couple of threads in the background: one to write 'sent' messages to the socket, so that the client can read them in and actually put them on the CAN bus, and another to receive messages from the client and put them in a queue so the `recv` method can retrieve them.
As the quotes above implied, calling `send()` doesn't actually send the message. It puts the message on a queue, which a thread will write to the socket as soon as possible. (I haven't looked at how long it actually takes between putting it on the queue and it actually being sent: it will vary depending on resources, and you'd have to make sure the clocks on the two computers are synchronised to the sub-millisecond level.)
The client has a similar structure: there is one thread to poll the CAN bus for new messages and put them in a buffer, a second thread to copy from that buffer to the TCP socket, and a third to read from the TCP socket and send the messages via the CAN Bus.
## Client
`./client CANCHANNEL HOSTNAME PORT`
e.g. `./client vcan0 my-can-server 5000`
## Server
```
from socketsocketcan import TCPBus
from can import Message
bus = TCPBus(5000) #start the client now.
bus.send(Message(arbitration_id=0x100,data=list(range(5))))
print(bus.recv()) # this will be the message you just sent (unless you disabled RECV_OWN_MSGS in the client)
```
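
A minimal listening loop on top of the example above might look like this (assuming `recv()` follows the usual python-can semantics of returning `None` on timeout):

```python
while True:
    msg = bus.recv(timeout=1.0)  # wait up to 1 s for a frame from the client
    if msg is not None:
        print(msg)
```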
| 62.563636 | 462 | 0.766347 | eng_Latn | 0.999126 |
931926dad4729fbf92ecf4e91c4b08b5e6247c4c | 393 | md | Markdown | docs/craOuc3TXGjgsCOMxbNulA.md | SCARaw/OpenConstructionSet | 740c3261daa29683175b6c592546e6bbe5b37cbe | [
"MIT"
] | null | null | null | docs/craOuc3TXGjgsCOMxbNulA.md | SCARaw/OpenConstructionSet | 740c3261daa29683175b6c592546e6bbe5b37cbe | [
"MIT"
] | null | null | null | docs/craOuc3TXGjgsCOMxbNulA.md | SCARaw/OpenConstructionSet | 740c3261daa29683175b6c592546e6bbe5b37cbe | [
"MIT"
] | null | null | null | #### [OpenConstructionSet](index.md 'index')
### [OpenConstructionSet.Collections](index.md#OpenConstructionSet_Collections 'OpenConstructionSet.Collections').[OcsCollection<T>](CpJitxHTJ7jJqLOu30sQbg.md 'OpenConstructionSet.Collections.OcsCollection<T>')
## OcsCollection<T>.OcsCollection() Constructor
Create a new empty collection.
```csharp
public OcsCollection();
```
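
A hypothetical usage sketch (`MyItem` stands in for whatever element type the collection is declared with):

```csharp
// Create a new, empty collection.
var collection = new OcsCollection<MyItem>();
```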
| 49.125 | 222 | 0.791349 | yue_Hant | 0.936177 |
93192a1ea979f6f6ea7512b309500cd0c5a46a13 | 1,040 | md | Markdown | _posts/2017-08-07-Modern-Trousseau-Dove-beaded-Sleeveless-SweepBrush-Train-SheathColumn.md | nicedaymore/nicedaymore.github.io | 4328715a75c752dd765c77f1bdad68267ad04b61 | [
"MIT"
] | null | null | null | _posts/2017-08-07-Modern-Trousseau-Dove-beaded-Sleeveless-SweepBrush-Train-SheathColumn.md | nicedaymore/nicedaymore.github.io | 4328715a75c752dd765c77f1bdad68267ad04b61 | [
"MIT"
] | null | null | null | _posts/2017-08-07-Modern-Trousseau-Dove-beaded-Sleeveless-SweepBrush-Train-SheathColumn.md | nicedaymore/nicedaymore.github.io | 4328715a75c752dd765c77f1bdad68267ad04b61 | [
"MIT"
] | null | null | null | ---
layout: post
date: 2017-08-07
title: "Modern Trousseau Dove beaded Sleeveless Sweep/Brush Train Sheath/Column"
category: Modern Trousseau
tags: [Modern Trousseau,Modern Trousseau ,Sheath/Column,Sweetheart,Sweep/Brush Train,Sleeveless]
---
### Modern Trousseau Dove beaded
Just **$469.99**
### Sleeveless Sweep/Brush Train Sheath/Column
<table><tr><td>BRANDS</td><td>Modern Trousseau </td></tr><tr><td>Silhouette</td><td>Sheath/Column</td></tr><tr><td>Neckline</td><td>Sweetheart</td></tr><tr><td>Hemline/Train</td><td>Sweep/Brush Train</td></tr><tr><td>Sleeve</td><td>Sleeveless</td></tr></table>
<a href="https://www.readybrides.com/en/modern-trousseau/46545-modern-trousseau-dove-beaded.html"><img src="//img.readybrides.com/102378/modern-trousseau-dove-beaded.jpg" alt="Modern Trousseau Dove beaded" style="width:100%;" /></a>
<!-- break -->
Buy it: [https://www.readybrides.com/en/modern-trousseau/46545-modern-trousseau-dove-beaded.html](https://www.readybrides.com/en/modern-trousseau/46545-modern-trousseau-dove-beaded.html)
| 65 | 260 | 0.741346 | yue_Hant | 0.532055 |
93193738ee718b9cb4be6cfb73e3ac3d2ab9f0bd | 2,600 | md | Markdown | windows-driver-docs-pr/print/supporting-cmyk-color-space.md | AndrewGaspar/windows-driver-docs | 10fd59af49d010138c1b62aeeab6bc37249c4566 | [
"CC-BY-3.0"
] | 4 | 2018-03-20T00:56:26.000Z | 2021-06-07T15:58:40.000Z | windows-driver-docs-pr/print/supporting-cmyk-color-space.md | AndrewGaspar/windows-driver-docs | 10fd59af49d010138c1b62aeeab6bc37249c4566 | [
"CC-BY-3.0"
] | null | null | null | windows-driver-docs-pr/print/supporting-cmyk-color-space.md | AndrewGaspar/windows-driver-docs | 10fd59af49d010138c1b62aeeab6bc37249c4566 | [
"CC-BY-3.0"
] | 1 | 2019-01-07T18:13:22.000Z | 2019-01-07T18:13:22.000Z | ---
title: Supporting CMYK Color Space
author: windows-driver-content
description: Supporting CMYK Color Space
MS-HAID:
- 'printicm\_04eda0f7-5f28-43e5-9cb3-3a0d6a385c8b.xml'
- 'print.supporting\_cmyk\_color\_space'
MSHAttr:
- 'PreferredSiteName:MSDN'
- 'PreferredLib:/library/windows/hardware'
ms.assetid: b8ac5f1a-c903-4313-b7de-0335f4c44367
keywords: ["CMYK color space WDK print", "BR_CMYKCOLOR", "XO_FROM_CMYK"]
---
# Supporting CMYK Color Space
## <a href="" id="ddk-supporting-cmyk-color-space-gg"></a>
Regardless of whether color management is being handled by the application, system, driver, or device, a [printer graphics DLL](printer-graphics-dll.md) must indicate whether it supports the [*CMYK*](https://msdn.microsoft.com/library/windows/hardware/ff556274#wdkgloss-cmyk) color space. This is done by setting the GCAPS\_CMYKCOLOR flag in the [**DEVINFO**](https://msdn.microsoft.com/library/windows/hardware/ff552835) structure. If this flag is set and CMYK profiles are in use, then GDI sends CMYK color data, instead of RGB data, to the printer graphics DLL for bitmaps, brushes, and pens. GDI also sets the following flags:
- The BR\_CMYKCOLOR flag in the **flColorType** member of the [**BRUSHOBJ**](https://msdn.microsoft.com/library/windows/hardware/ff538261) structure.
- The XO\_FROM\_CMYK flag in the **flXlate** member of the [**XLATEOBJ**](https://msdn.microsoft.com/library/windows/hardware/ff570634) structure.
Note that if the driver supports CMYK color space, it must also support halftoning. Thus if the driver sets the GCAPS\_CMYKCOLOR flag in DEVINFO, it must also set GCAPS\_HALFTONE.
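
For illustration, a sketch of how a printer graphics DLL would typically advertise this (member and flag names as defined in winddi.h; the surrounding DrvEnablePDEV code is omitted):

```cpp
// Inside DrvEnablePDEV, when filling in the DEVINFO structure for GDI:
pDevInfo->flGraphicsCaps |= (GCAPS_CMYKCOLOR | GCAPS_HALFTONE);
```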
| 68.421053 | 938 | 0.786154 | eng_Latn | 0.630022 |
9319cdfbdca15ed94d1c3c6f2c73e5f16cda0954 | 8,273 | md | Markdown | README.md | GlenRice-NOAA/vyperdatum | 600fc2f16fe5a95dc3a26336d5d8cd23421fef5f | [
"CC0-1.0"
] | 2 | 2021-12-01T15:57:34.000Z | 2021-12-03T14:37:48.000Z | README.md | GlenRice-NOAA/vyperdatum | 600fc2f16fe5a95dc3a26336d5d8cd23421fef5f | [
"CC0-1.0"
] | 6 | 2020-12-04T18:07:12.000Z | 2020-12-22T16:59:07.000Z | README.md | GlenRice-NOAA/vyperdatum | 600fc2f16fe5a95dc3a26336d5d8cd23421fef5f | [
"CC0-1.0"
] | 3 | 2020-12-04T18:04:25.000Z | 2021-06-24T15:29:11.000Z | # Vyperdatum
Python module that drives PROJ to use VDatum grids in a simple and clear way. Requires that VDatum be installed already (you can find VDatum [here](https://vdatum.noaa.gov/)). Developed in Python 3.
VDatum is "a free software tool being developed jointly by NOAA's [National Geodetic Survey (NGS)](https://www.ngs.noaa.gov/), [Office of Coast Survey (OCS)](https://nauticalcharts.noaa.gov/), and [Center for Operational Oceanographic Products and Services (CO-OPS)](https://tidesandcurrents.noaa.gov/)...to vertically transform geospatial data among a variety of tidal, orthometric and ellipsoidal vertical datums".
Vyperdatum allows VDatum to be used in production bathymetric processing software in a clean and precise way. In addition, Vyperdatum builds a custom Compound and Vertical CRS object that documents the resulting transformation well, so that the inverse transformation can be accurately applied later to get back to the pivot datum (NAD83(2011)/EPSG:6319).
## Installation
Vyperdatum is not on PyPi, but can be installed using pip.
(For Windows Users) Download and install Visual Studio Build Tools 2019 (If you have not already): [MSVC Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
Download and install conda (If you have not already): [conda installation](https://docs.conda.io/projects/conda/en/latest/user-guide/install/)
Download and install git (If you have not already): [git installation](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
Some dependencies need to be installed from the conda-forge channel. I have an example below of how to build this environment using conda.
Perform these in order:
`conda create -n vyper python=3.8.8 `
`conda activate vyper `
`conda install -c conda-forge gdal=3.2.1`
`pip install git+https://github.com/noaa-ocs-hydrography/vyperdatum.git#egg=vyperdatum `
## Quickstart
Vyperdatum offers two main classes:
- VyperPoints - for transforming 2d/3d point datasets
- VyperRaster - for transforming GDAL supported raster datasets
For either of these objects, the first run needs to set the path to the VDatum installation in order for Vyperdatum to initialize properly:
from vyperdatum.points import VyperPoints
vp = VyperPoints(vdatum_directory='path/to/vdatum')
From there it is simple to start performing transformations. Use the following examples to get started:
-- we assume vdatum_directory has already been set in these examples --
- Basic vertical transformation from NAD83 height to MLLW
vp = VyperPoints()
x = np.array([-76.19698, -76.194, -76.198])
y = np.array([37.1299, 37.1399, 37.1499])
z = np.array([10.5, 11.0, 11.5])
# source ('nad83') = nad83(2011)/nad83(2011)height
# destination ('mllw') = nad83/mllw
vp.transform_points(6319, 'mllw', x, y, z=z)
# this is a shortcut for vp.transform_points((6319, 'ellipse'), 'mllw', x, y, z=z)
vp.x
Out: array([-76.19698, -76.194 , -76.198 ])
vp.y
Out: array([37.1299, 37.1399, 37.1499])
vp.z
Out: array([47.735, 48.219, 48.685])
vp.unc
Out: array([0.115, 0.115, 0.115])
print(vp.out_crs.to_wkt())
COMPOUNDCRS["NAD83(2011) + mllw",
GEOGCRS["NAD83(2011)",
DATUM["NAD83 (National Spatial Reference System 2011)",
ELLIPSOID["GRS 1980",6378137,298.257222101,LENGTHUNIT["metre",1]]],
PRIMEM["Greenwich",0,ANGLEUNIT["degree",0.0174532925199433]],
CS[ellipsoidal,2],
AXIS["geodetic latitude (Lat)",north,ORDER[1],ANGLEUNIT["degree",0.0174532925199433]],
AXIS["geodetic longitude (Lon)",east,ORDER[2],ANGLEUNIT["degree",0.0174532925199433]],
USAGE[SCOPE["Horizontal component of 3D system."],
AREA["Puerto Rico - onshore and offshore. United States (USA) onshore and offshore - Alabama; Alaska; Arizona; Arkansas; California; Colorado; Connecticut; Delaware; Florida; Georgia; Idaho; Illinois; Indiana; Iowa; Kansas; Kentucky; Louisiana; Maine; Maryland; Massachusetts; Michigan; Minnesota; Mississippi; Missouri; Montana; Nebraska; Nevada; New Hampshire; New Jersey; New Mexico; New York; North Carolina; North Dakota; Ohio; Oklahoma; Oregon; Pennsylvania; Rhode Island; South Carolina; South Dakota; Tennessee; Texas; Utah; Vermont; Virginia; Washington; West Virginia; Wisconsin; Wyoming. US Virgin Islands - onshore and offshore."],
BBOX[14.92,167.65,74.71,-63.88]],ID["EPSG",6318]],
VERTCRS["mllw",
VDATUM["mllw"],
CS[vertical,1],
AXIS["gravity-related height (H)",up,LENGTHUNIT["metre",1,ID["EPSG",9001]]],
REMARK["vdatum=vdatum_4.1.2_20201203,vyperdatum=0.1.6,base_datum=[NAD83(2011)],
regions=[MDVAchb12_8301],
pipelines=[+proj=pipeline +step +proj=vgridshift grids=core\\geoid12b\\g2012bu0.gtx +step +inv +proj=vgridshift grids=MDVAchb12_8301\\tss.gtx +step +proj=vgridshift grids=MDVAchb12_8301\\mllw.gtx]"]]]'
- 3d Transformation from EPSG:3631(NC StatePlane)/MLLW to NAD83/MLLW.
vp = VyperPoints()
x = np.array([898745.505, 898736.854, 898728.203])
y = np.array([256015.372, 256003.991, 255992.610])
z = np.array([10.5, 11.0, 11.5])
# here we use input horizontal/vertical datums for both input and output datum
vp.transform_points((3631, 'mllw'), (6319, 'mllw'), x, y, z=z)
vp.x
Out: array([-75.7918, -75.7919, -75.792 ])
vp.y
Out: array([36.0157, 36.0156, 36.0155])
vp.z
Out: array([10.5, 11. , 11.5])
vp.unc
Out: array([0.028, 0.028, 0.028])
- GeoTIFF transformation - GeoTIFF with horizontal=EPSG:26919, vertical=NAD83(2011) height (assumed) to EPSG:26919/MLLW.
from vyperdatum.raster import VyperRaster
new_file = r"C:\data\tiff\output.tiff"
test_file = r"C:\data\tiff\test.tiff"
# source EPSG:26919 read automatically, NAD83 height assumed
vr = VyperRaster(test_file)
# optional step saying the raster is at horiz=26919, vert=ellipse
# vr.set_input_datum(26919, 'ellipse')
# output=mllw input=ellipse
layers, layernames, layernodata = vr.transform_raster('mllw', 'ellipse', allow_points_outside_coverage=True, output_filename=new_file)
print(vr.out_crs.to_compound_wkt())
COMPOUNDCRS["NAD83 / UTM zone 19N + mllw",
PROJCRS["NAD83 / UTM zone 19N",
BASEGEOGCRS["NAD83",DATUM["North American Datum 1983",ELLIPSOID["GRS 1980",6378137,298.257222101,LENGTHUNIT["metre",1]]],
PRIMEM["Greenwich",0,ANGLEUNIT["degree",0.0174532925199433]],ID["EPSG",4269]],
CONVERSION["UTM zone 19N",METHOD["Transverse Mercator",ID["EPSG",9807]],
PARAMETER["Latitude of natural origin",0,ANGLEUNIT["degree",0.0174532925199433],ID["EPSG",8801]],
PARAMETER["Longitude of natural origin",-69,ANGLEUNIT["degree",0.0174532925199433],ID["EPSG",8802]],
PARAMETER["Scale factor at natural origin",0.9996,SCALEUNIT["unity",1],ID["EPSG",8805]],
PARAMETER["False easting",500000,LENGTHUNIT["metre",1],ID["EPSG",8806]],
PARAMETER["False northing",0,LENGTHUNIT["metre",1],ID["EPSG",8807]]],CS[Cartesian,2],
AXIS["easting",east,ORDER[1],LENGTHUNIT["metre",1]],
AXIS["northing",north,ORDER[2],LENGTHUNIT["metre",1]],ID["EPSG",26919]],
VERTCRS["mllw",
VDATUM["mllw"],
CS[vertical,1],AXIS["gravity-related height (H)",up,LENGTHUNIT["metre",1,ID["EPSG",9001]]],
REMARK["vdatum=vdatum_4.1.2_20201203,vyperdatum=0.1.6,base_datum=[NAD83(2011)],
regions=[MENHMAgome23_8301],
pipelines=[+proj=pipeline +step +proj=vgridshift grids=core\\geoid12b\\g2012bu0.gtx +step +inv +proj=vgridshift grids=MENHMAgome23_8301\\tss.gtx +step +proj=vgridshift grids=MENHMAgome23_8301\\mllw.gtx]"]]]'
| 57.055172 | 659 | 0.658769 | eng_Latn | 0.424724 |
9319dd79a17d853611661a72cd06f88f00073f2d | 7,406 | md | Markdown | ChIP-seq/details.md | xizhihui/pipelines | 3e2765f999e03c6623fd756b14a7c427b3ddeb9c | [
"MIT"
] | 1 | 2020-10-28T03:33:56.000Z | 2020-10-28T03:33:56.000Z | ChIP-seq/details.md | xizhihui/pipelines | 3e2765f999e03c6623fd756b14a7c427b3ddeb9c | [
"MIT"
] | 1 | 2022-03-16T06:04:51.000Z | 2022-03-16T06:16:08.000Z | ChIP-seq/details.md | xizhihui/pipelines | 3e2765f999e03c6623fd756b14a7c427b3ddeb9c | [
"MIT"
] | null | null | null | # Details about the pipeline
## Workflow
This pipeline contains these modules:
* preprocess: quality control on raw reads
* alignment: mapping to the reference, extracting uniquely mapped reads, and deduplicating
* callpeak: peak calling with MACS2
* qcchip: quality control on peak calling results
* downstream: downstream analysis such as peak annotation, motif analysis, and enrichment analysis
* report: statistics over all the output, reported as an html

## Preprocess
By convention, quality control on raw reads is a necessary first step in NGS analysis, and it is performed here with **FastQC** and **Trimmomatic**. The default parameters for Trimmomatic are:
* TRAILING:5
* LEADING:4
* SLIDINGWINDOW:4:5
* MINLEN:25
* ILLUMINACLIP:adapter.fa:2:30:10
You can change it with yaml syntax in `config.yaml`:
```
trimmomatic_pe:
- param1
trimmomatic_se:
- param2
```
## Alignment
After mapping to the reference genome, mapping quality and uniquely mapped reads are both important in a ChIP-seq analysis. Here we use "samtools" to filter out low-MAPQ reads and keep uniquely mapped reads with the command `samtools view -q 20 -b in.bam > out.bam`.
MACS2 can do the deduplication with the parameter "--keep-dup" through a default value "1", that's why many people won't deduplicate the bams. However, for the sake of estimation of extension size used in the peak model, we deduplicate the bams with "samtools rmdup".
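
A minimal sketch of those two steps (file names are placeholders):

```bash
samtools view -q 20 -b raw.bam > unique.bam   # keep reads with MAPQ >= 20
samtools rmdup unique.bam dedup.bam           # remove PCR duplicates
```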
## Callpeak
### peak calling
There are two types of peaks in the output of MACS2, narrow peaks, broad peaks, and gapped peaks. In most situations, transcription factors' ChIP results are the narrow peaks and the broad peaks are from the result of the ChIP experiment with histone modifiction. As for the gapped peaks, they emerge with broad peaks in MACS2. In the [crazyhottommy's pilot-analysis](https://github.com/crazyhottommy/ChIP-seq-analysis/blob/master/part1_peak_calling.md#results-from-pilot-analysis), using "--broad" definetly improve the identification of peaks(or more appropriately:enriched regions), and thus we use the `--broad` to do the histone modification peak calling. However, a narrow peak calling is still used for the transcription factors peak calling. These can be done with different setting of MACS2 paramters.
```
# broad peak calling, extsize is from the run_SPP.R result
macs2 callpeak --broad --broad-cutoff 0.1 --nomodel --extsize $extsize <other options>
# narrow peak calling, let optional parameter be default
macs2 callpeak <other options>
```
### replicates' reproducibility
In some ChIP-seq analyses, people first merge all the replicate BAMs and call peaks on the merged data with MACS2; then they call peaks on each replicate separately; finally, they regard the peaks present in both the merged and the per-replicate peak sets as the final peaks. However, in general it is not a good idea to combine the reads from biological replicates, because of the variance within samples in the sample groups and between groups. As is mentioned in [hbctraining's course](https://github.com/hbctraining/In-depth-NGS-Data-Analysis-Course/blob/master/sessionV/lessons/07_handling-replicates-idr.md), IDR suits this situation. Why IDR? hbctraining gives:
1. IDR avoids choices of initial cutoffs, which are not comparable for different callers
2. IDR does not depend on arbitrary thresholds and so all regions/peaks are considered.
3. It is based on ranks, so does not require the input signals to be calibrated or with a specific fixed scale (only order matters).
In the ENCODE ChIP-seq pipeline, the IDR framework is only applied to transcription factors. Why? I haven't figured it out yet, so I decided to follow ENCODE. As for histone modification peak calling, I will regard the peaks shared by all replicates as the reproducible peaks, using "bedtools". There are three parts to an IDR analysis:
* Peak consistency between true replicates
* Peak consistency between pooled pseudoreplicates
* Self-consistency analysis
**Here we just do the first part in the pipeline.**
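
For the histone case, extracting the peaks shared between two replicates can be sketched with bedtools like this (replicate file names are placeholders):

```bash
# keep rep1 peaks that overlap at least one rep2 peak
bedtools intersect -a rep1_peaks.broadPeak -b rep2_peaks.broadPeak -u > reproducible_peaks.bed
```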
### blacklist
Functional genomics experiments based on next-gen sequencing (e.g. ChIP-seq, MNase-seq, DNase-seq, FAIRE-seq) that measure biochemical activity of various elements in the genome often produce artifact signal in certain regions of the genome. It is important to keep track of and filter artifact regions that tend to show artificially high signal (excessive unstructured anomalous reads mapping). This can also be done with "bedtools". For details, see below.
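
A common way to drop peaks in blacklisted regions, again with bedtools (`blacklist.bed` stands for the published blacklist of your genome build):

```bash
bedtools intersect -a peaks.bed -b blacklist.bed -v > peaks.no_blacklist.bed
```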
## Quality control on ChIP
We use the R package "ChIPQC" for the ChIP quality assessment. In its report, we can get three main values that indicate the quality of a ChIP.
### SSD
The SSD score is a measure used to indicate evidence of enrichment. It provides a measure of read pileup across the genome and is computed by looking at the standard deviation of signal pile-up along the genome normalised to the total number of reads. A "good" or enriched sample typically has regions of significant read pile-up so a higher SSD is more indicative of better enrichment. Basically, SSD scores are dependent on the degree of total genome wide signal pile-up, and therefore they are sensitive to regions of artificially high signal in addition to genuine ChIP enrichment. So we need to look closely at the rest of the output of ChIPQC to be sure that the high SSD in samples is actually a result of ChIP enrichment and not some unknown artifact(s).
### RiP: Fraction of Reads in Peaks
RiP (also called FRiP) reports the percentage of reads that overlap within called peaks. This is another good indication of how ”enriched” the sample is, or the success of the immunoprecipitation. It can be considered a ”signal-to-noise” measure of what proportion of the library consists of fragments from binding sites vs. background reads. RiP values will vary depending on the protein of interest:
* A typical good quality TF with successful enrichment would exhibit a RiP around 5% or higher.
* A good quality PolII would exhibit a RiP of 30% or higher.
* There are also known examples of good datasets with FRiP < 1% (i.e. RNAPIII).
In our dataset, RiP percentages are higher for the Nanog replicates as compared to Pou5f1, with Pou5f1-rep2 being very low. This is perhaps an indication that the poor SSD scores for Nanog may not be predictive of poor quality.
### RiBL: Reads overlapping in Blacklisted Regions
It is important to keep track of and filter artifact regions that tend to show artificially high signal (likely due to excessive unstructured anomalous reads mapping). The blacklisted regions typically appear uniquely mappable so simple mappability filters do not remove them. These regions are often found at specific types of repeats such as centromeres, telomeres and satellite repeats. The signal from blacklisted regions has been shown to contribute to confound peak callers and fragment length estimation.
## References
* [hbctraining](https://github.com/hbctraining/In-depth-NGS-Data-Analysis-Course/)
* [crazyhotommy](https://github.com/crazyhottommy/ChIP-seq-analysis/blob/master/part0_quality_control.md#encode-guidlines)
* [chilin](http://cistrome.org/chilin/_downloads/instructions.pdf)
* [ENCODE](http://www.ncbi.nlm.nih.gov/pubmed/22955991)
* [Phantompeakqualtools](https://github.com/kundajelab/phantompeakqualtools)
* [Irreproducibility Discovery Rate](https://github.com/nboley/idr) | 69.214953 | 810 | 0.78963 | eng_Latn | 0.99763 |
931b65d52f3b318ea6de58af7135f112dc45f675 | 1,338 | md | Markdown | _posts/Develop/2021-05-02-SAML.md | BlueHorn07/BlueHorn07.github.io | 94e9c31739ff76657953e0f4754be11a0cdb9d5b | [
"MIT"
] | 1 | 2020-06-16T11:41:09.000Z | 2020-06-16T11:41:09.000Z | _posts/Develop/2021-05-02-SAML.md | BlueHorn07/BlueHorn07.github.io | 94e9c31739ff76657953e0f4754be11a0cdb9d5b | [
"MIT"
] | null | null | null | _posts/Develop/2021-05-02-SAML.md | BlueHorn07/BlueHorn07.github.io | 94e9c31739ff76657953e0f4754be11a0cdb9d5b | [
"MIT"
] | null | null | null | ---
title: "SAML"
layout: post
tags: ["develop", "security"]
---
This post is a personal write-up from my study of **SAML**. Corrections and advice are always welcome :)
<hr/>
\<SAML (Security Assertion Markup Language)\>[^1] is <span class="half_HL">a markup language, or format, for describing security authentication information</span>. You can think of it as XML that carries authentication/authorization information.
SAML is mainly used to implement SSO. SSO is an authentication scheme that lets users access multiple services with a single login. More precisely, an Identity Provider and a Service Provider exchange information via requests/responses written in the SAML format.
👉 [SAML Response Examples](https://www.samltool.com/generic_sso_res.php)
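
To get a feel for the format, the core of such a response is an assertion about a subject, roughly like this (a heavily trimmed sketch with placeholder values; real responses also carry signatures, conditions, and more, as the full examples linked above show):

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject>
    <saml:NameID>user@example.com</saml:NameID>
  </saml:Subject>
</saml:Assertion>
```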
#### SAML vs. OAuth
**SAML** is an XML-based standard data format for exchanging authentication/authorization data. It is mainly used to implement SSO.
**OAuth** is an authentication technology for when users want to use a third-party service without handing over a separate ID/password: authentication is delegated to a trusted authority. It is a standard security mechanism and is based on JSON rather than XML.
<div class="img-wrapper">
<img src="https://leedo1982.github.io/post-img/2018-10-09/diff_saml_oauth_term.jpg" width="600px">
<p>Image from <a href="https://leedo1982.github.io/blog/2018/10/09/Saml-Oauth/">here</a></p>
</div>
For more details, see [이도원's post](https://leedo1982.github.io/blog/2018/10/09/Saml-Oauth/)!
<hr/>
#### References
- [boomkim's post](https://boomkim.github.io/2018/07/11/rough-draft-of-saml/)
- [ITWORLD](https://www.itworld.co.kr/news/108736)
- [이도원's post](https://leedo1982.github.io/blog/2018/10/09/Saml-Oauth/)
<hr/>
[^1]: Pronounced "sam-el" (샘엘). | 31.857143 | 175 | 0.697309 | kor_Hang | 1.000003 |
931c571166389027a76b223458928f8dcd8cc03e | 427 | md | Markdown | docs/sdk/com.robotemi.sdk.sequence/-on-sequence-play-status-changed-listener/on-sequence-play-status-changed.md | EddyJeon/temisdkTest | d787ae5461f997879dd11ba821df53cf029cca67 | [
"Apache-2.0"
] | 167 | 2019-06-14T15:28:02.000Z | 2022-03-17T10:06:22.000Z | docs/sdk/com.robotemi.sdk.sequence/-on-sequence-play-status-changed-listener/on-sequence-play-status-changed.md | EddyJeon/temisdkTest | d787ae5461f997879dd11ba821df53cf029cca67 | [
"Apache-2.0"
] | 231 | 2019-06-21T08:10:30.000Z | 2022-03-31T08:27:51.000Z | docs/sdk/com.robotemi.sdk.sequence/-on-sequence-play-status-changed-listener/on-sequence-play-status-changed.md | EddyJeon/temisdkTest | d787ae5461f997879dd11ba821df53cf029cca67 | [
"Apache-2.0"
] | 75 | 2019-08-14T07:20:10.000Z | 2022-03-07T12:34:51.000Z | [sdk](../../index.md) / [com.robotemi.sdk.sequence](../index.md) / [OnSequencePlayStatusChangedListener](index.md) / [onSequencePlayStatusChanged](./on-sequence-play-status-changed.md)
# onSequencePlayStatusChanged
`abstract fun onSequencePlayStatusChanged(status: `[`Int`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-int/index.html)`): `[`Unit`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-unit/index.html) | 85.4 | 210 | 0.758782 | yue_Hant | 0.438412 |
931c75ccd9b245ab7b2bffcda54680ad004d75f9 | 111 | md | Markdown | CONTRIBUTING.md | GitZoneTools/npmts | 668fd0998fb7605e719e3cd3a0ed083376d8c802 | [
"MIT"
] | 1 | 2016-04-05T20:31:19.000Z | 2016-04-05T20:31:19.000Z | CONTRIBUTING.md | GitZoneTools/npmts | 668fd0998fb7605e719e3cd3a0ed083376d8c802 | [
"MIT"
] | 13 | 2016-01-20T02:23:59.000Z | 2016-05-05T21:37:02.000Z | CONTRIBUTING.md | GitZoneTools/npmts | 668fd0998fb7605e719e3cd3a0ed083376d8c802 | [
"MIT"
] | null | null | null | # Contribution Guide
This module is developed on the fabulous GitLab.com:
https://gitlab.com/pushrocks/npmts | 27.75 | 54 | 0.792793 | eng_Latn | 0.799387 |
931c79206976ce0bb5260330d5755958320bff9e | 522 | md | Markdown | docs/Referenties.md | Fred125/repo | e67951ecc489343580595f5a4abe1cd8ea53bd07 | [
"MIT"
] | null | null | null | docs/Referenties.md | Fred125/repo | e67951ecc489343580595f5a4abe1cd8ea53bd07 | [
"MIT"
] | null | null | null | docs/Referenties.md | Fred125/repo | e67951ecc489343580595f5a4abe1cd8ea53bd07 | [
"MIT"
] | null | null | null | ---
sort: 6
---
# References
| Ref | Description |
|-----|-------------|
| [Aquo begrippen](https://aquo.begrippenxl.nl/nl/)| The online dictionary of water management terms |
| [GEMMA Online](https://www.wilmaonline.nl/index.php/Hoofdpagina) | Reference architecture for municipalities |
| [WILMA Online](https://www.wilmaonline.nl/index.php/Hoofdpagina) | Reference architecture for water authorities |
| [WSWC](https://www.wswc.nl/) | Public software catalogue of/for water authorities (reference components, packages, vendors) |
| 40.153846 | 135 | 0.726054 | nld_Latn | 0.824231 |
931df8e616404d6546fb394b6a6c557df1d63027 | 3,968 | md | Markdown | CONTRIBUTING.md | JellyWX/hikari | 86d9232ca4ead3ba069658aff3f792eeb1d209ba | [
"MIT"
] | null | null | null | CONTRIBUTING.md | JellyWX/hikari | 86d9232ca4ead3ba069658aff3f792eeb1d209ba | [
"MIT"
] | null | null | null | CONTRIBUTING.md | JellyWX/hikari | 86d9232ca4ead3ba069658aff3f792eeb1d209ba | [
"MIT"
] | null | null | null | # Hikari contribution guidelines
First off, we would like to thank you for taking the time to help improve Hikari, it's greatly appreciated. We have
some contribution guidelines that you should follow to ensure that your contribution is at its best.
# Code of conduct
Hikari has a code of conduct that must be followed at all times by all the members of the project. Breaking the code
of conduct can lead to a ban from the project and a report to GitHub.
You can read the code of conduct [here](https://github.com/hikari-py/hikari/blob/master/CODE_OF_CONDUCT.md).
# Versioning scheme
This project follows the versioning scheme stated by [PEP 440](https://www.python.org/dev/peps/pep-0440/).
The development version number is increased automatically after each release in the `master` branch in the master
repository.
Please also refer to the [Semantic Versioning specification](https://semver.org/) for more information.
# Deprecation process
The removal or renaming of anything in the public-facing API must go through a deprecation process, which should
match the versioning scheme. There are utilities under `hikari.internal.deprecation` to aid with it.
# Towncrier
To aid with the generation of `CHANGELOG.md` as well as the releases changelog we use `towncrier`.
For every pull request made to this project, there should be a short explanation of the change under `changes/`
with the following format: `{pull_request_number}.{type}.md`.
Possible types are:
- `feature`: Signifying a new feature.
- `bugfix`: Signifying a bugfix.
- `doc`: Signifying a documentation improvement.
- `removal`: Signifying a deprecation or removal of public API.
- `internal`: Signifying an internal change to the code that is not of interest to the users.
Examples include: code reformatting, CI changes, etc.
The best way to create the fragments is to run `towncrier create {pull_request_number}.{type}.md` after creating the
pull request, edit the created file, and commit the changes.
Multiple fragment types can be created per pull request if it covers multiple areas.
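For example, a fragment for a (hypothetical) pull request #123 that fixes a bug would be created with:

```bash
towncrier create 123.bugfix.md
```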
# Branches
We would like to keep consistency in naming branches in the remote.
To push branches directly to the remote, you will have to name them like this:
- `feature/issue-number-small-info-on-branch`
- This should be used for branches that require more tasks to merge into before going as one MR into `master`.
- `bugfix/issue-number-small-info-on-branch`
- This should be used for bugfixes.
- `task/issue-number-small-info-on-branch`
- This should be the default for any commit that doesn't fall in any of the cases above.
`issue-number` is optional (only use if issue exists) and can be left out. `small-info-on-branch` should be replaced
with a small description of the branch.
# Nox
We have nox to help out with running pipelines locally and provides some helpful functionality.
Nox is similar to tox, but uses a pure Python configuration instead of an INI based configuration. Nox and tox are
both tools for generating virtual environments and running commands in those environments. Examples of usage include
installing, configuring, and running flake8, running pytest, etc.
You can check all the available nox commands by running `nox -l`.
Before committing we recommend you to run `nox` to run all important pipelines and make sure the pipelines won't fail.
You may run a single pipeline with `nox -s name` or multiple pipelines with `nox -s name1 name3 name9`.
# Pipelines
We have several jobs to ensure that the code is at its best that in can be.
This includes:
- `test`
- Run tests and installation of the package on different OS's and python versions.
- `linting`
- Linting (`flake8`), type checking (`mypy`), safety (`safety`) and spelling (`codespell`).
- `twemoji`
- Force test all discord emojis.
- `pages`
- Generate webpage + documentation.
All jobs will need to succeed before anything gets merged.
| 43.130435 | 118 | 0.768649 | eng_Latn | 0.99961 |
931e033d0f9d16d3915033a3ee89dd83f6774d38 | 169 | md | Markdown | _posts/0000-01-02-RefahA.md | RefahA/github-slideshow | c319b4da640d683ac9d7ca1164c13851495e2a20 | [
"MIT"
] | null | null | null | _posts/0000-01-02-RefahA.md | RefahA/github-slideshow | c319b4da640d683ac9d7ca1164c13851495e2a20 | [
"MIT"
] | 3 | 2020-09-09T20:56:10.000Z | 2020-09-09T23:07:54.000Z | _posts/0000-01-02-RefahA.md | RefahA/github-slideshow | c319b4da640d683ac9d7ca1164c13851495e2a20 | [
"MIT"
] | null | null | null | ---
layout: slide
title: "Welcome to our second slide!"
---
"Your dream doesn't have an expiration date,take a deep breath and try again"
Use the left arrow to go back!
| 24.142857 | 77 | 0.727811 | eng_Latn | 0.998811 |
931e13b1d275dcf43aa380c494ad2bf046469a0d | 315 | md | Markdown | mysql/Convert-data.md | janis-rullis/sql | 31d8f474f40712e68cef07706f96ffd4c8b98156 | [
"MIT"
] | null | null | null | mysql/Convert-data.md | janis-rullis/sql | 31d8f474f40712e68cef07706f96ffd4c8b98156 | [
"MIT"
] | 8 | 2020-10-26T16:49:34.000Z | 2020-12-19T10:14:39.000Z | mysql/Convert-data.md | Janis-Rullis-IT/sql | 15cf29f74fcbfddba50725cbfc91bd5109ea0b6d | [
"MIT"
] | null | null | null | # Convert data
## [Float to int](https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_round)
```sql
SELECT ROUND(15.23);
```
> 15
## [Float to tens (not ones)](https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_round)
```sql
SELECT ROUND(15.23, -1);
```
> 20
| 17.5 | 113 | 0.685714 | yue_Hant | 0.621753 |
931e91770f5f44d53e80406a7686cdad17cf425b | 4,103 | md | Markdown | README.md | geraldo-netto/ton | 34a15111bbc4e16be871daa6771a60043627250b | [
"BSD-3-Clause"
] | 2 | 2019-04-22T05:50:51.000Z | 2020-08-02T18:15:57.000Z | README.md | geraldo-netto/ton | 34a15111bbc4e16be871daa6771a60043627250b | [
"BSD-3-Clause"
] | null | null | null | README.md | geraldo-netto/ton | 34a15111bbc4e16be871daa6771a60043627250b | [
"BSD-3-Clause"
] | null | null | null | # TON
TON is a mass data generator script written in python
## Usage
```bash
netto@bella:~/ton$ python ton.py hwmetrics.json
1881-09-27 14:52:36;hg;ARM;N;AC;38.98
1373-08-13 23:43:56;hg;AMD;N;AB;30.18
1100-12-19 18:54:41;hg;AMD;Y;AC;68.69
1479-07-24 12:20:20;hg;ARM;Y;AB;93.21
1775-01-07 13:30:12;hg;ARM;Y;CA;63.47
1510-11-05 01:43:15;hg;Intel;N;AB;37.75
1630-03-17 16:38:15;hg;Intel;Y;BA;87.15
1830-12-11 14:47:50;hg;ARM;Y;CB;86.69
1844-11-28 10:46:00;hg;Intel;N;BA;23.21
elapsed time: 0:00:00.000648
```
### Explaining [hwmetrics.json](examples/hwmetrics.json)
[hwmetrics.json](examples/hwmetrics.json) declares how data must be created and it has the following format:
```json
{
"encoding": "UTF-8",
"rows": 10,
"format": "$integer_variable$;$string_variable$;$char_variable$;$boolean_variable$;$decimal_variable$",
"types": {
"integer_variable": {
"type": "integer",
"minValue": 1100,
"maxValue": 2050,
"padWithZero": true
},
"string_variable": {
"type": "string",
"values": ["AMD", "INTEL", "ARM"]
},
"char_variable": {
"type": "char",
"values": ["A", "B", "C"],
"maxChar": 2
},
"boolean_variable": {
"type": "boolean",
"whenTrue": "Y",
"whenFalse": "N"
},
"decimal_variable": {
"type": "decimal",
"minValue": 0.0,
"maxValue": 100.0,
"decimals": 2,
"padWithZero": true
}
}
}
```
"rows" is the number of rows TON will create, it must be an integer
"format" is the output string that each row will have. All values enclosed by '$' is read as a variable.
e.g.: $title$ will create a variable title that must be mapped inside the types block.
"types" contains a list of variables previously declared on format
The following data types are allowed inside types:
#### boolean creates boolean value
```json
"variableName": {
"type": "boolean",
"whenTrue": "Y",
"whenFalse": "N"
}
```
#### char creates single character value
"maxChar": 2 => maximum number of characters to be created
```json
"variableName": {
"type": "char",
"values": ["A", "B", "C"],
"maxChar": 2
}
```
#### string creates string values
```json
"variableName": {
"type": "string",
"values": ["AMD", "INTEL", "ARM"]
}
```
#### integer creates integer values
"minValue": 1100 => defines the minimum value to be created
"maxValue": 2050 => defines the maximum value to be created
"padWithZero": true => completes the created integer with zeros
```json
"variableName": {
"type": "integer",
"minValue": 1100,
"maxValue": 2050,
"padWithZero": true
}
```
#### decimal creates decimal values
"minValue": 0.0 => defines the minimum value to be created
"maxValue": 100.0 => defines the maximum value to be created
"decimals": 2 => defines the maximum value to be created
"padWithZero": true => completes the created float with zeros
```json
"variableName": {
"type": "decimal",
"minValue": 0.0,
"maxValue": 100.0,
"decimals": 2,
"padWithZero": true
}
```
#### lmhash creates windows 2000/xp hashes based on their literal values
Also, lmhash has a special parameter called id that is used to extract the field name.
e.g.: Variable $word$ applied with lmhash will create an md4 hash;
Variable $word[id]$ applied with lmhash will display the current value
So: "$word[id]$ => $word$" will generate this kind of output: "love => 85deeec2d12f917783b689ae94990716"
Please, check full [winhash.json example](examples/winhash.json)
```json
"variableName": {
"type": "lmhash",
"values": ["AMD", "Intel", "ARM"]
}
```
Please, check the following example for other use cases:
* [dna.json example - creating DNA sequences in the same format as 23andMe, tellmeGen, ...](examples/dna.json)
* [winhash.json example - creating rainbow table of Windows 2000/XP](examples/winhash.json)
## TODO
* refactoring
* tests
* add sequential number generation
* allow changing the statistical data distribution (currently, it's all based on a normal curve)
## License
[BSD-3](https://opensource.org/licenses/BSD-3-Clause)
| 27.911565 | 112 | 0.655862 | eng_Latn | 0.887627 |
931e9200b718451a746836c3d0287fbfd5b59e05 | 4,745 | md | Markdown | packages/astro-rss/README.md | xnuray98s/astro | f609b5e9f2ea313c62d9f5a94a97e267a1a3b64b | [
"MIT"
] | 2,820 | 2021-11-23T20:08:08.000Z | 2022-03-31T17:56:11.000Z | packages/astro-rss/README.md | xnuray98s/astro | f609b5e9f2ea313c62d9f5a94a97e267a1a3b64b | [
"MIT"
] | 1,073 | 2021-11-23T19:09:42.000Z | 2022-03-31T22:27:32.000Z | packages/astro-rss/README.md | xnuray98s/astro | f609b5e9f2ea313c62d9f5a94a97e267a1a3b64b | [
"MIT"
] | 259 | 2021-11-24T05:17:58.000Z | 2022-03-31T12:23:23.000Z | # @astrojs/rss 📖
This package brings fast RSS feed generation to blogs and other content sites built with [Astro](https://astro.build/). For more information about RSS feeds in general, see [aboutfeeds.com](https://aboutfeeds.com/).
## Installation
Install the `@astrojs/rss` package into any Astro project using your preferred package manager:
```bash
# npm
npm i @astrojs/rss
# yarn
yarn add @astrojs/rss
# pnpm
pnpm i @astrojs/rss
```
## Example usage
The `@astrojs/rss` package provides helpers for generating RSS feeds within [Astro endpoints][astro-endpoints]. This unlocks both static builds _and_ on-demand generation when using an [SSR adapter](https://docs.astro.build/en/guides/server-side-rendering/#enabling-ssr-in-your-project).
For instance, say you need to generate an RSS feed for all posts under `src/pages/blog/`. Start by [adding a `site` to your project's `astro.config` for link generation](https://docs.astro.build/en/reference/configuration-reference/#site). Then, create an `rss.xml.js` file under your project's `src/pages/` directory, and use [Vite's `import.meta.glob` helper](https://vitejs.dev/guide/features.html#glob-import) like so:
```js
// src/pages/rss.xml.js
import rss from '@astrojs/rss';
export const get = () => rss({
title: 'Buzz’s Blog',
description: 'A humble Astronaut’s guide to the stars',
// pull in the "site" from your project's astro.config
site: import.meta.env.SITE,
items: import.meta.glob('./blog/**/*.md'),
});
```
Read **[Astro's RSS docs][astro-rss]** for full usage examples.
## `rss()` configuration options
The `rss` default export offers a number of configuration options. Here's a quick reference:
```js
rss({
// `<title>` field in output xml
title: 'Buzz’s Blog',
// `<description>` field in output xml
description: 'A humble Astronaut’s guide to the stars',
// provide a base URL for RSS <item> links
site: import.meta.env.SITE,
// list of `<item>`s in output xml
items: import.meta.glob('./**/*.md'),
// (optional) absolute path to XSL stylesheet in your project
stylesheet: '/rss-styles.xsl',
// (optional) inject custom xml
customData: '<language>en-us</language>',
// (optional) add arbitrary metadata to opening <rss> tag
xmlns: { h: 'http://www.w3.org/TR/html4/' },
});
```
### title
Type: `string (required)`
The `<title>` attribute of your RSS feed's output xml.
### description
Type: `string (required)`
The `<description>` attribute of your RSS feed's output xml.
### site
Type: `string (required)`
The base URL to use when generating RSS item links. We recommend using `import.meta.env.SITE` to pull in the "site" from your project's astro.config. Still, feel free to use a custom base URL if necessary.
### items
Type: `RSSFeedItem[] | GlobResult (required)`
Either a list of formatted RSS feed items or the result of [Vite's `import.meta.glob` helper](https://vitejs.dev/guide/features.html#glob-import). See [Astro's RSS items documentation](https://docs.astro.build/en/guides/rss/#generating-items) for usage examples to choose the best option for you.
When providing a formatted RSS item list, see the `RSSFeedItem` type reference below:
```ts
type RSSFeedItem = {
/** Link to item */
link: string;
/** Title of item */
title: string;
/** Publication date of item */
pubDate: Date;
/** Item description */
description?: string;
/** Append some other XML-valid data to this item */
customData?: string;
};
```
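For example, a hand-written items list might look like the following sketch (the link, title, date, and description are placeholders):

```js
import rss from '@astrojs/rss';

export const get = () => rss({
  title: 'Buzz’s Blog',
  description: 'A humble Astronaut’s guide to the stars',
  site: import.meta.env.SITE,
  items: [
    {
      link: '/blog/first-post',
      title: 'First post',
      pubDate: new Date('2022-01-01'),
      description: 'An optional summary of the post.',
    },
  ],
});
```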
### stylesheet
Type: `string (optional)`
An absolute path to an XSL stylesheet in your project. If you don’t have an RSS stylesheet in mind, we recommend the [Pretty Feed v3 default stylesheet](https://github.com/genmon/aboutfeeds/blob/main/tools/pretty-feed-v3.xsl), which you can download from GitHub and save into your project's `public/` directory.
### customData
Type: `string (optional)`
A string of valid XML to be injected between your feed's `<description>` and `<item>` tags. This is commonly used to set a language for your feed:
```js
import rss from '@astrojs/rss';
export const get = () => rss({
...
customData: '<language>en-us</language>',
});
```
### xmlns
Type: `Record<string, string> (optional)`
An object mapping a set of `xmlns` suffixes to strings of metadata on the opening `<rss>` tag.
For example, this object:
```js
rss({
...
xmlns: { h: 'http://www.w3.org/TR/html4/' },
})
```
Will inject the following XML:
```xml
<rss xmlns:h="http://www.w3.org/TR/html4/"...
```
---
For more on building with Astro, [visit the Astro docs][astro-rss].
[astro-rss]: https://docs.astro.build/en/guides/rss/#using-astrojsrss-recommended
[astro-endpoints]: https://docs.astro.build/en/core-concepts/astro-pages/#non-html-pages
| 31.423841 | 422 | 0.704531 | eng_Latn | 0.829428 |
9320329ba9aa9fa31763e95e9918aa6bb4fcfec7 | 19,458 | md | Markdown | WindowsServerDocs/identity/ad-ds/plan/security-best-practices/Appendix-H--Securing-Local-Administrator-Accounts-and-Groups.md | TSlivede/windowsserverdocs.de-de | 94efc4447d5eac158ab05bc87f9fcec15c317872 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/identity/ad-ds/plan/security-best-practices/Appendix-H--Securing-Local-Administrator-Accounts-and-Groups.md | TSlivede/windowsserverdocs.de-de | 94efc4447d5eac158ab05bc87f9fcec15c317872 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/identity/ad-ds/plan/security-best-practices/Appendix-H--Securing-Local-Administrator-Accounts-and-Groups.md | TSlivede/windowsserverdocs.de-de | 94efc4447d5eac158ab05bc87f9fcec15c317872 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.assetid: ea015cbc-dea9-4c72-a9d8-d6c826d07608
title: Appendix H - Securing Local Administrator Accounts and Groups
description: ''
author: MicrosoftGuyJFlo
ms.author: joflore
manager: mtillman
ms.date: 05/31/2017
ms.topic: article
ms.prod: windows-server-threshold
ms.technology: identity-adds
ms.openlocfilehash: 71eea3f623968172076708dbea34d5bbf4a07684
ms.sourcegitcommit: 0d0b32c8986ba7db9536e0b8648d4ddf9b03e452
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/17/2019
ms.locfileid: "59858691"
---
# <a name="appendix-h-securing-local-administrator-accounts-and-groups"></a>Anhang H: Schützen lokaler Administratorkonten und-Gruppen
>Gilt für: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
## <a name="appendix-h-securing-local-administrator-accounts-and-groups"></a>Anhang H: Schützen lokaler Administratorkonten und-Gruppen
In allen Versionen von Windows, die derzeit in der mainstream-Support ist das lokale Administratorkonto standardmäßig deaktiviert, dadurch wird das Konto nicht vom Pass-the-Hash und anderen Angriffen mit gestohlenen Anmeldeinformationen. Allerdings: in Umgebungen, die ältere Betriebssysteme enthalten oder in der lokale Administratorkonten aktiviert wurden, können diese Konten Gefährdung über MemberServer und Arbeitsstationen verteilt wie oben beschrieben verwendet werden. Jeder lokalen Administratorkontos und Gruppe sollten gesichert werden, wie in den folgenden schrittweisen Anleitungen beschrieben.
Ausführliche Informationen zu Überlegungen zum Sichern der integrierte Administrator (BA)-Gruppen finden Sie unter [implementieren mit Minimalprivilegien Verwaltungsmodellen](../../../ad-ds/plan/security-best-practices/Implementing-Least-Privilege-Administrative-Models.md).
#### <a name="controls-for-local-administrator-accounts"></a>Steuerelemente für die Konten für lokale Administratoren
Für das lokale Administratorkonto in jeder Domäne in der Gesamtstruktur sollten Sie die folgenden Einstellungen konfigurieren:
- Konfigurieren der Gruppenrichtlinienobjekte, um der Domäne-Administratorkonto verwenden, in der Domäne Angehörige Systeme zu beschränken.
- Fügen Sie das Administratorkonto in ein oder mehrere Gruppenrichtlinienobjekte, die Sie erstellen und auf Arbeitsstationen und Mitgliedsservern Server Organisationseinheiten in jeder Domäne verknüpfen, um in der folgenden Benutzerrechte **Computer Computerkonfiguration\Richtlinien\Windows unter Sicherheitseinstellungen\Lokale Policies\ Zuweisen von Benutzerrechten**:
- Zugriff vom Netzwerk auf diesen Computer verweigern
- Anmelden als Batchauftrag verweigern
- Anmelden als Dienst verweigern
- Anmelden über Remotedesktopdienste verweigern
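To spot-check whether these deny rights have taken effect on a given computer, you can export the effective local security policy and search for the corresponding privilege constants. The following is only a sketch (the export path is arbitrary, and the commands must be run from an elevated prompt):

```powershell
# Export the effective local security policy (requires elevation; path is arbitrary).
secedit /export /cfg C:\Temp\secpol.cfg
# Look for the four "deny" user rights configured above.
Select-String -Path C:\Temp\secpol.cfg -Pattern "SeDenyNetworkLogonRight|SeDenyBatchLogonRight|SeDenyServiceLogonRight|SeDenyRemoteInteractiveLogonRight"
```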
#### <a name="step-by-step-instructions-to-secure-local-administrators-groups"></a>Schrittweise Anleitungen zum Schützen der lokalen Administratorgruppe
###### <a name="configuring-gpos-to-restrict-administrator-account-on-domain-joined-systems"></a>Konfigurieren von Gruppenrichtlinienobjekten, um das Administratorkonto auf der Domäne Angehörige Systeme zu beschränken.
1. In **Server-Manager**, klicken Sie auf **Tools**, und klicken Sie auf **Gruppenrichtlinienverwaltung**.
2. Erweitern Sie in der Konsolenstruktur <Forest>\Domains\\<Domain>, und klicken Sie dann **Group Policy Objects** (wo <Forest> ist der Name der Gesamtstruktur und <Domain> ist der Name der Domäne, wo Sie möchten, Legen Sie die Gruppenrichtlinie).
3. In der Konsolenstruktur mit der Maustaste **Group Policy Objects**, und klicken Sie auf **neu**.

4. In the **New GPO** dialog box, type **<GPO Name>**, and click **OK** (where <GPO Name> is the name of this GPO).

5. In the details pane, right-click **<GPO Name>**, and click **Edit**.
6. Navigate to **Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies**, and click **User Rights Assignment**.
![lokalen Administratoren](media/Appendix-H--Securing-Local-Administrator-Accounts-and-Groups/SAD_117.gif)
7. Configure the user rights to prevent the local Administrator account from accessing member servers and workstations over the network as follows:
1. Double-click **Deny access to this computer from the network** and select **Define these policy settings**.
2. Click **Add User or Group**, type the user name of the local Administrator account, and click **OK**. This user name will be **Administrator**, the default when Windows is installed.

3. Click **OK**.
> [!IMPORTANT]
> When you add the Administrator account to these settings, you specify whether you are configuring a local Administrator account or a domain Administrator account by how you label the account. For example, to add the TAILSPINTOYS domain's Administrator account to these deny rights, you would browse to the Administrator account for the TAILSPINTOYS domain, which would appear as TAILSPINTOYS\Administrator. If you type **Administrator** in these user rights settings in the Group Policy Object Editor, you will restrict the local Administrator account on every computer to which the GPO is applied, as described earlier.
8. Configure the user rights to prevent the local Administrator account from logging on as a batch job as follows:
1. Double-click **Deny log on as a batch job** and select **Define these policy settings**.
2. Click **Add User or Group**, type the user name of the local Administrator account, and click **OK**. This user name will be **Administrator**, the default when Windows is installed.

3. Click **OK**.
> [!IMPORTANT]
> When you add the Administrator account to these settings, you specify whether you are configuring a local Administrator account or a domain Administrator account by how you label the account. For example, to add the TAILSPINTOYS domain's Administrator account to these deny rights, you would browse to the Administrator account for the TAILSPINTOYS domain, which would appear as TAILSPINTOYS\Administrator. If you type **Administrator** in these user rights settings in the Group Policy Object Editor, you will restrict the local Administrator account on every computer to which the GPO is applied, as described earlier.
9. Configure the user rights to prevent the local Administrator account from logging on as a service as follows:
1. Double-click **Deny log on as a service** and select **Define these policy settings**.
2. Click **Add User or Group**, type the user name of the local Administrator account, and click **OK**. This user name will be **Administrator**, the default when Windows is installed.

3. Click **OK**.
> [!IMPORTANT]
> When you add the Administrator account to these settings, you specify whether you are configuring a local Administrator account or a domain Administrator account by how you label the account. For example, to add the TAILSPINTOYS domain's Administrator account to these deny rights, you would browse to the Administrator account for the TAILSPINTOYS domain, which would appear as TAILSPINTOYS\Administrator. If you type **Administrator** in these user rights settings in the Group Policy Object Editor, you will restrict the local Administrator account on every computer to which the GPO is applied, as described earlier.
10. Configure the user rights to prevent the local Administrator account from accessing member servers and workstations through Remote Desktop Services as follows:
1. Double-click **Deny log on through Remote Desktop Services** and select **Define these policy settings**.
2. Click **Add User or Group**, type the user name of the local Administrator account, and click **OK**. This user name will be **Administrator**, the default when Windows is installed.

3. Click **OK**.
> [!IMPORTANT]
> When you add the Administrator account to these settings, you specify whether you are configuring a local Administrator account or a domain Administrator account by how you label the account. For example, to add the TAILSPINTOYS domain's Administrator account to these deny rights, you would browse to the Administrator account for the TAILSPINTOYS domain, which would appear as TAILSPINTOYS\Administrator. If you type **Administrator** in these user rights settings in the Group Policy Object Editor, you will restrict the local Administrator account on every computer to which the GPO is applied, as described earlier.
11. To exit **Group Policy Management Editor**, click **File**, and click **Exit**.
12. In **Group Policy Management**, link the GPO to the member server and workstation OUs as follows:
1. Navigate to <Forest>\Domains\\<Domain> (where <Forest> is the name of the forest and <Domain> is the name of the domain where you want to set the Group Policy).
2. Right-click the OU that the GPO will be applied to, and click **Link an existing GPO**.

3. Select the GPO that you just created, and click **OK**.

4. Create links to all other OUs that contain workstations.
5. Create links to all other OUs that contain member servers.
#### <a name="verification-steps"></a>Verification Steps
##### <a name="verify-deny-access-to-this-computer-from-the-network-gpo-settings"></a>Verify "Deny access to this computer from the network" GPO Settings
From any member server or workstation that is not affected by the GPO changes (such as a jump server), attempt to access a member server or workstation over the network that is affected by the GPO changes. To verify the GPO settings, attempt to map the system drive by using the **NET USE** command.
1. Log on locally to any member server or workstation that is not affected by the GPO changes.
2. With the mouse, move the pointer into the upper-right or lower-right corner of the screen. When the **Charms** bar appears, click **Search**.
3. In the **Search** box, type **command prompt**, right-click **Command Prompt**, and then click **Run as administrator** to open an elevated command prompt.
4. When prompted to approve the elevation, click **Yes**.

5. In the **Command Prompt** window, type **net use \\\\<Server Name>\c$ /user:<Server Name>\Administrator**, where <Server Name> is the name of the member server or workstation you are attempting to access over the network.
> [!NOTE]
> The local Administrator credentials must be from the same system you are attempting to access over the network.
6. The following screenshot shows the error message that should appear.

##### <a name="verify-deny-log-on-as-a-batch-job-gpo-settings"></a>Überprüfen Sie die GPO-Einstellungen "Anmelden als Batchauftrag verweigern"
Melden Sie von jeder Mitgliedsserver oder Arbeitsstation, die von der GPO-Änderungen betroffen sind sich lokal.
###### <a name="create-a-batch-file"></a>Erstellen Sie eine Datei
1. Zeigen Sie mit der Maus auf, in der oberen rechten oder unteren rechten Ecke des Bildschirms. Wenn die **Charms** Leiste angezeigt wird, klicken Sie auf **Suche**.
2. In der **Suche** geben **Editor**, und klicken Sie auf **Editor**.
3. In **Editor**, Typ **Dir c:**.
4. Klicken Sie auf **Datei**, und klicken Sie auf **speichern**.
5. In der **Dateiname** geben **<Filename>bat** (, in denen <Filename> ist der Name der neuen Batchdatei).
###### <a name="schedule-a-task"></a>Planen einer Aufgabe
1. Zeigen Sie mit der Maus auf, in der oberen rechten oder unteren rechten Ecke des Bildschirms. Wenn die **Charms** Leiste angezeigt wird, klicken Sie auf **Suche**.
2. In der **Suche** Feld Geben Sie aufgabenplanung, und klicken Sie auf **Taskplaner**.
> [!NOTE]
> Auf Computern Windows 8 ausgeführt wird, in der **Suche** geben **Planen von Aufgaben**, und klicken Sie auf **Planen von Aufgaben**.
3. Klicken Sie auf **Aktion**, und klicken Sie auf **Create Task**.
4. In der **Create Task** (Dialogfeld), Typ **<Task Name>** (, in denen <Task Name> ist der Name der neuen Aufgabe).
5. Klicken Sie auf die **Aktionen** Registerkarte, und klicken Sie auf **neu**.
6. In der **Aktion** auf **ein Programm starten**.
7. In der **Programm/Skript** auf **Durchsuchen**, und wählen Sie die Batchdatei erstellt, der **eine Batchdatei erstellen** aus, und klicken Sie auf **Öffnen**.
8. Klicken Sie auf **OK**.
9. Klicken Sie auf die Registerkarte **Allgemein**.
10. In der **Sicherheitsoptionen** auf **Benutzer oder Gruppe ändern**.
11. Geben Sie den Namen des Systems lokalen Administratorkontos an, klicken Sie auf **Namen überprüfen**, und klicken Sie auf **OK**.
12. Wählen Sie **ausgeführt wird, ob der Benutzer oder nicht angemeldet ist** und **speichern Sie das Kennwort nicht**. Der Task wird nur auf lokale Computerressourcen zugreifen.
13. Klicken Sie auf **OK**.
14. Es sollte ein Dialogfeld angezeigt, die Anmeldeinformationen für anfordernden Benutzerkonto zum Ausführen des Tasks.
15. Klicken Sie nach Eingabe der Anmeldeinformationen an, auf **OK**.
16. Ein Dialogfeld ähnlich der folgenden sollte angezeigt werden.

###### <a name="verify-deny-log-on-as-a-service-gpo-settings"></a>Überprüfen Sie die GPO-Einstellungen "Anmelden als Dienst verweigern"
1. Melden Sie von jeder Mitgliedsserver oder Arbeitsstation, die von der GPO-Änderungen betroffen sind sich lokal.
2. Zeigen Sie mit der Maus auf, in der oberen rechten oder unteren rechten Ecke des Bildschirms. Wenn die **Charms** Leiste angezeigt wird, klicken Sie auf **Suche**.
3. In der **Suche** geben **Services**, und klicken Sie auf **Services**.
4. Doppelklicken Sie auf die **Druckspooler**.
5. Klicken Sie auf die Registerkarte **Anmelden**.
6. In **melden Sie sich als** auf **dieses Konto**.
7. Klicken Sie auf **Durchsuchen**, geben die Systemvariable lokalen Administratorkonto an, und klicken Sie auf **Namen überprüfen**, und klicken Sie auf **OK**.
8. In der **Kennwort** und **Bestätigungskennwort** Felder, geben Sie das ausgewählte Konto-Kennwort ein, und klicken Sie auf **OK**.
9. Klicken Sie auf **OK** drei weitere Male.
10. Mit der rechten Maustaste **Druckspooler** , und klicken Sie auf **Neustart**.
11. Wenn der Dienst neu gestartet wird, wird ein Dialogfeld ähnlich der folgenden sollte angezeigt werden.

###### <a name="revert-changes-to-the-printer-spooler-service"></a>Wiederherstellen von Änderungen an den Druckerwarteschlangendienst
1. Melden Sie von jeder Mitgliedsserver oder Arbeitsstation, die von der GPO-Änderungen betroffen sind sich lokal.
2. Zeigen Sie mit der Maus auf, in der oberen rechten oder unteren rechten Ecke des Bildschirms. Wenn die **Charms** Leiste angezeigt wird, klicken Sie auf **Suche**.
3. In der **Suche** geben **Services**, und klicken Sie auf **Services**.
4. Doppelklicken Sie auf die **Druckspooler**.
5. Klicken Sie auf die Registerkarte **Anmelden**.
6. In der **melden Sie sich als**: die Option **lokale Einstellungen dem Systemkonto**, und klicken Sie auf **OK**.
###### <a name="verify-deny-log-on-through-remote-desktop-services-gpo-settings"></a>Überprüfen Sie die GPO-Einstellungen "Anmelden über Remotedesktopdienste verweigern"
1. Zeigen Sie mit der Maus auf, in der oberen rechten oder unteren rechten Ecke des Bildschirms. Wenn die **Charms** Leiste angezeigt wird, klicken Sie auf **Suche**.
2. In der **Suche** geben **Remotedesktopverbindung**, und klicken Sie auf **Remotedesktopverbindung**.
3. In der **Computer** Feld, geben Sie den Namen des Computers ein, die Sie verwenden möchten, Herstellen einer Verbindung mit, und klicken Sie auf **Connect**. (Sie können auch die IP-Adresse anstelle des Computernamens eingeben.)
4. Geben Sie bei Aufforderung Anmeldeinformationen für das System den lokalen **Administrator** Konto.
5. Ein Dialogfeld ähnlich der folgenden sollte angezeigt werden.

| 72.876404 | 727 | 0.772587 | deu_Latn | 0.996505 |
9320852085e5fc760eb9b267daea47aaaec8cc59 | 76 | md | Markdown | Class Work/Debugging-Techniques-Lab/exercise-3/README.md | Pondorasti/SPD-2.3 | 42728c1f2dfc371fb6bdf1ba008c5d41266f2fa8 | [
"MIT"
] | null | null | null | Class Work/Debugging-Techniques-Lab/exercise-3/README.md | Pondorasti/SPD-2.3 | 42728c1f2dfc371fb6bdf1ba008c5d41266f2fa8 | [
"MIT"
] | null | null | null | Class Work/Debugging-Techniques-Lab/exercise-3/README.md | Pondorasti/SPD-2.3 | 42728c1f2dfc371fb6bdf1ba008c5d41266f2fa8 | [
"MIT"
] | null | null | null | # Sorting & Searching
To run the code, simply run:
```
python3 main.py
``` | 10.857143 | 28 | 0.657895 | eng_Latn | 0.993973 |
9320d668873d8667e7198f35a7c27a1d240734d2 | 972 | md | Markdown | README.md | matheusmorita/nft_minting_dapp | 37cfb06a51394c84adac4500578dd170cb2a0195 | [
"MIT"
] | null | null | null | README.md | matheusmorita/nft_minting_dapp | 37cfb06a51394c84adac4500578dd170cb2a0195 | [
"MIT"
] | null | null | null | README.md | matheusmorita/nft_minting_dapp | 37cfb06a51394c84adac4500578dd170cb2a0195 | [
"MIT"
] | null | null | null | # Welcome to DrunkDogs 🐶aaa

# Drunk Dogs NFT minting dapp 🍹

# You can mint your Drunk Dog
To mint, you need MetaMask installed and configured for the Rinkeby testnet.
Then you just need to visit: https://drunkdogs.netlify.app/
and mint up to 20 Drunk Dogs.
## Installation 🛠️
If you are cloning the project, run this first; otherwise you can download the source code from the releases page and skip this step.
```sh
git clone https://github.com/matheusmorita/nft_minting_dapp.git
```
Make sure you have node.js installed so you can use npm, then run:
```sh
npm install
```
After that, you can run:
```sh
npm run start
```
Or create the build if you are ready to deploy.
```sh
npm run build
```
Now you can host the contents of the build folder on a server.
That's it! You're done.
| 21.130435 | 134 | 0.743827 | eng_Latn | 0.973424 |
93229a779938c06ffb1aa32c5151a03abb6a60ae | 30 | md | Markdown | README.md | Sylvain25/Smart_BeeHive | f87bc3f1439e9d35b6a72eac5c89ac84463449b7 | [
"CC0-1.0"
] | null | null | null | README.md | Sylvain25/Smart_BeeHive | f87bc3f1439e9d35b6a72eac5c89ac84463449b7 | [
"CC0-1.0"
] | 1 | 2021-01-19T13:05:12.000Z | 2021-01-19T13:32:43.000Z | README.md | Sylvain25/Smart_BeeHive | f87bc3f1439e9d35b6a72eac5c89ac84463449b7 | [
"CC0-1.0"
] | null | null | null | # Smart_BeeHive
Smart Beehive
| 10 | 15 | 0.833333 | kor_Hang | 0.768996 |
9322f19d5ea819a042bff8e78b08f01cc447a8b6 | 454 | md | Markdown | components/general/param/README.md | lask/esp-iot-solution | 40cec135eace36cf11ab72988e27546a2c0d025b | [
"Apache-2.0"
] | 45 | 2018-03-15T07:03:50.000Z | 2020-05-11T18:41:02.000Z | components/general/param/README.md | Conanzhou/esp-iot-solution | f2e0a8ee2448b26171fe361add1b6786b7cf43a0 | [
"Apache-2.0"
] | 31 | 2019-04-18T13:29:56.000Z | 2021-02-12T11:07:59.000Z | components/general/param/README.md | Conanzhou/esp-iot-solution | f2e0a8ee2448b26171fe361add1b6786b7cf43a0 | [
"Apache-2.0"
] | 12 | 2018-02-28T14:40:59.000Z | 2019-12-26T19:06:09.000Z | # Component: param
* This component provides APIs to save/load parameters to/from SPI flash with protection, which means the parameters are guaranteed to be complete at any time.
* Call iot_param_save() to save a parameter to flash.
* Call iot_param_load() to load a parameter from flash.
### NOTE:
> Call nvs_flash_init() first if you want to use this component.
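A minimal usage sketch is shown below; the namespace and key strings are made up for illustration, error handling is omitted, and the exact signatures should be checked against the component header:
```c
#include "nvs_flash.h"
#include "iot_param.h"

typedef struct {
    int boot_count;
} app_param_t;

void app_main(void)
{
    // The component is built on NVS, so initialize NVS first.
    nvs_flash_init();

    app_param_t param = { .boot_count = 0 };
    // Load the previously saved value; on the very first boot nothing is
    // stored yet, so the load fails and the default above is kept.
    iot_param_load("app_ns", "app_param", &param);

    param.boot_count++;
    iot_param_save("app_ns", "app_param", &param, sizeof(param));
}
```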
# Todo:
* To use `nvs` APIs for saving/loading data
* To add magic code and `checksum` | 32.428571 | 157 | 0.748899 | eng_Latn | 0.989607 |
93231314a76a0aea452c03fbeaef5c24b3f8272f | 395 | md | Markdown | README.md | geffzhang/apisix-and-dapr | 865d04de84b5a140de2212cbea620d7b780c2c86 | [
"MIT"
] | 10 | 2021-11-15T07:21:55.000Z | 2022-02-23T08:03:10.000Z | README.md | geffzhang/apisix-and-dapr | 865d04de84b5a140de2212cbea620d7b780c2c86 | [
"MIT"
] | null | null | null | README.md | geffzhang/apisix-and-dapr | 865d04de84b5a140de2212cbea620d7b780c2c86 | [
"MIT"
] | 1 | 2021-11-15T09:37:13.000Z | 2021-11-15T09:37:13.000Z | # apisix-and-dapr
[如何与 Dapr 集成打造 Apache APISIX 网关控制器](https://apisix.apache.org/zh/blog/2021/11/17/dapr-with-apisix/)
[How to integrate with Dapr to build Apache APISIX Gateway Controller](https://apisix.apache.org/blog/2021/11/17/dapr-with-apisix)
[Enable Dapr with Apache APISIX Ingress Controller](https://blog.dapr.io/posts/2022/01/13/enable-dapr-with-apache-apisix-ingress-controller/)
| 49.375 | 141 | 0.777215 | kor_Hang | 0.378383 |
9323313b88dc22ed166e891faed4638c0a088f5c | 1,334 | md | Markdown | README.md | bmutziu/k8s-hello-mutating-webhook | 719b5d238d38fd470b793f64c06269161f312c11 | [
"MIT"
] | 26 | 2020-10-05T21:51:38.000Z | 2022-02-21T12:57:02.000Z | README.md | bmutziu/k8s-hello-mutating-webhook | 719b5d238d38fd470b793f64c06269161f312c11 | [
"MIT"
] | 3 | 2020-11-25T05:47:31.000Z | 2022-03-04T12:16:25.000Z | README.md | bmutziu/k8s-hello-mutating-webhook | 719b5d238d38fd470b793f64c06269161f312c11 | [
"MIT"
] | 11 | 2020-12-16T14:23:48.000Z | 2022-02-21T12:57:03.000Z | ## K8s Hello Mutating Webhook
A Kubernetes Mutating Admission Webhook example, using Go.
This is a companion repository for the Article [Building a Kubernetes Mutating Admission Webhook: A “magic” way to inject a file into Pod Containers](https://medium.com/@didil/building-a-kubernetes-mutating-admission-webhook-7e48729523ed)
This is proof of concept code, make sure to review carefully before using in a production system.
#### Run tests
```
$ make test
```
#### Deploy
Define shell env:
```
# define env vars
$ export CONTAINER_REPO=quay.io/my-user/my-repo
$ export CONTAINER_VERSION=x.y.z
```
Build/Push Webhook
```
$ make docker-build
$ make docker-push
```
* for this example you'll need to make the container repository public unless you'll be specifying ImagePullSecrets on the Pod
Deploy to K8s cluster
```
$ make k8s-deploy
```
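Under the hood, the deployment registers the webhook with the API server via a `MutatingWebhookConfiguration`. The sketch below is illustrative only (the names, namespace, path, and CA bundle are assumptions; check the repo's manifests for the real values):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: hello-webhook                  # illustrative name
webhooks:
  - name: hello-webhook.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: hello-webhook            # the webhook Service
        namespace: default
        path: /mutate                  # assumed handler path
      caBundle: <base64-encoded CA certificate>
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    objectSelector:
      matchLabels:
        hello: "true"                  # matches the label used in the example below
```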
#### Mutated pod example
```
$ k run busybox-1 --image=busybox --restart=Never -l=hello=true -- sleep 3600
$ k exec busybox-1 -it -- ls /etc/config/hello.txt
# The output should be:
/etc/config/hello.txt
$ k exec busybox-1 -it -- sh -c "cat /etc/config/hello.txt"
# The output should be:
Hello from the admission controller !
```
We successfully mutated our pod spec and added an arbitrary volume/file in there, yay!
#### Cleanup
Delete all k8s resources
```
$ make k8s-delete-all
```
| 26.68 | 238 | 0.733133 | eng_Latn | 0.886619 |
9324366b2e8f70df7eb9bfab353f07ad62a20764 | 462 | md | Markdown | docs/user-guide/rules/head-script-disabled.md | PaleBluDot/HTMLHint | ebaea6c5a51b977d3517e77477915f1d37e9fc52 | [
"MIT"
] | 1,470 | 2015-01-01T16:29:52.000Z | 2018-08-20T11:58:34.000Z | docs/user-guide/rules/head-script-disabled.md | PaleBluDot/HTMLHint | ebaea6c5a51b977d3517e77477915f1d37e9fc52 | [
"MIT"
] | 678 | 2018-09-12T03:44:30.000Z | 2022-03-31T18:30:57.000Z | docs/user-guide/rules/head-script-disabled.md | PaleBluDot/HTMLHint | ebaea6c5a51b977d3517e77477915f1d37e9fc52 | [
"MIT"
] | 262 | 2015-01-01T16:29:53.000Z | 2018-08-15T08:11:43.000Z | ---
id: head-script-disabled
title: head-script-disabled
---
The `<script>` tag cannot be used inside `<head>`.
Level: `warning`
## Config value
1. true: enable rule
2. false: disable rule
The following pattern are **not** considered violations:
<!-- prettier-ignore -->
```html
<body>
<script src="test.js"></script>
</body>
```
The following pattern is considered violation:
<!-- prettier-ignore -->
```html
<head>
<script src="test.js"></script>
</head>
```
| 14.4375 | 56 | 0.664502 | eng_Latn | 0.957388 |
9324edf84aa477ca28850bcb705ce7c81a94c9e5 | 4,016 | md | Markdown | aspnet/web-api/overview/security/forms-authentication.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-api/overview/security/forms-authentication.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-api/overview/security/forms-authentication.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: web-api/overview/security/forms-authentication
title: Autenticación de formularios en ASP.NET Web API | Microsoft Docs
author: MikeWasson
description: Describe el uso de la autenticación de formularios en ASP.NET Web API.
ms.author: riande
ms.date: 12/12/2012
ms.assetid: 9f06c1f2-ffaa-4831-94a0-2e4a3befdf07
msc.legacyurl: /web-api/overview/security/forms-authentication
msc.type: authoredcontent
ms.openlocfilehash: 147bfab76e48497f35a72b28cd935f40ec4193bf
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 03/06/2020
ms.locfileid: "78484381"
---
# <a name="forms-authentication-in-aspnet-web-api"></a>Autenticación mediante formularios en ASP.NET Web API
por [Mike Wasson](https://github.com/MikeWasson)
La autenticación mediante formularios usa un formulario HTML para enviar las credenciales del usuario al servidor. No es un estándar de Internet. La autenticación de formularios solo es adecuada para las API Web a las que se llama desde una aplicación Web, de modo que el usuario pueda interactuar con el formulario HTML.
| Ventajas | Desventajas |
| --- | --- |
| -Fácil de implementar: integrado en ASP.NET. -Usa el proveedor de pertenencia a ASP.NET, que facilita la administración de cuentas de usuario. | -No es un mecanismo de autenticación HTTP estándar; utiliza cookies HTTP en lugar del encabezado de autorización estándar. -Requiere un cliente de explorador. -Las credenciales se envían como texto sin formato. -Es vulnerable a la falsificación de solicitudes entre sitios (CSRF); requiere medidas anti-CSRF. -Difícil de usar desde clientes que no son de explorador. Login requiere un explorador. -Las credenciales de usuario se envían en la solicitud. -Algunos usuarios deshabilitan las cookies. |
En Resumen, la autenticación de formularios en ASP.NET funciona de la siguiente manera:
1. El cliente solicita un recurso que requiere autenticación.
2. Si el usuario no está autenticado, el servidor devuelve HTTP 302 (encontrado) y redirige a una página de inicio de sesión.
3. El usuario escribe las credenciales y envía el formulario.
4. El servidor devuelve otro HTTP 302 que redirige de nuevo al URI original. Esta respuesta incluye una cookie de autenticación.
5. El cliente vuelve a solicitar el recurso. La solicitud incluye la cookie de autenticación, por lo que el servidor concede la solicitud.

For more information, see [An Overview of Forms Authentication](../../../web-forms/overview/older-versions-security/introduction/an-overview-of-forms-authentication-cs.md).
## <a name="using-forms-authentication-with-web-api"></a>Using Forms Authentication with Web API
To create an application that uses forms authentication, select the "Internet Application" template in the MVC 4 project wizard. This template creates MVC controllers for account management. You can also use the "Single Page Application" template, available in the ASP.NET 2012 Update.
In your Web API controllers, you can restrict access by using the `[Authorize]` attribute, as described in [Using the [Authorize] Attribute](authentication-and-authorization-in-aspnet-web-api.md#auth3).
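For example, a minimal Web API controller restricted with `[Authorize]` might look like the following sketch (the controller and action are placeholders, not from a specific sample):

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class ValuesController : ApiController
{
    // Anonymous requests to this action get a 401, which forms
    // authentication then turns into a 302 redirect to the login page.
    [Authorize]
    public IEnumerable<string> Get()
    {
        return new List<string> { "value1", "value2" };
    }
}
```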
Forms authentication uses a session cookie to authenticate requests. Browsers automatically send all relevant cookies to the target website. This makes forms authentication vulnerable to cross-site request forgery (CSRF) attacks; see [Preventing Cross-Site Request Forgery (CSRF) Attacks](preventing-cross-site-request-forgery-csrf-attacks.md).
Forms authentication does not encrypt the user's credentials. Therefore, forms authentication is not secure unless used with SSL. See [Working with SSL in Web API](working-with-ssl-in-web-api.md).
| 81.959184 | 645 | 0.804781 | spa_Latn | 0.974859 |
93254bd79a69e24f45cee6fb48e9c3bf094b2d5b | 797 | md | Markdown | templates/docs/email.md | s1s5/healthchecks | be7e100c75db23e7d0da31fb370eb0ab5d9bcefa | [
"BSD-3-Clause"
] | 2 | 2020-07-20T18:27:22.000Z | 2020-08-14T01:58:30.000Z | templates/docs/email.md | s1s5/healthchecks | be7e100c75db23e7d0da31fb370eb0ab5d9bcefa | [
"BSD-3-Clause"
] | 6 | 2021-03-18T23:58:28.000Z | 2021-09-22T18:37:07.000Z | templates/docs/email.md | MaxwellDPS/healthchecks | 3730c67c803e707ae51b01bacf2929bd053ee22f | [
"BSD-3-Clause"
] | 1 | 2021-01-29T13:36:14.000Z | 2021-01-29T13:36:14.000Z | # Email
As an alternative to HTTP/HTTPS requests, you can "ping" checks by
sending an emails to special email addresses.

## Use Case: Newsletter Delivery Monitoring
Consider a cron job which runs weekly and sends weekly newsletters
to a list of e-mail addresses. You have already set up a check to get alerted
when your cron job fails to run. But what you ultimately want to check is if
**your emails are getting sent and delivered**.
The solution: set up another check, and add its email address to your list of
recipient email addresses. Set its Period to 1 week. As long as your weekly email
script runs correctly, and there are no email delivery issues,
SITE_NAME will regularly receive an email, and the check and will stay up.
| 41.947368 | 81 | 0.784191 | eng_Latn | 0.999548 |
93259caf8aedaefe898f52b32e9667dcd09bda7b | 4,390 | md | Markdown | content/events/2017-vancouver/propose.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 6 | 2016-11-14T14:08:29.000Z | 2018-05-09T18:57:06.000Z | content/events/2017-vancouver/propose.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 461 | 2016-11-11T19:23:06.000Z | 2019-07-21T16:10:04.000Z | content/events/2017-vancouver/propose.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 15 | 2016-11-11T15:07:53.000Z | 2019-01-18T04:55:24.000Z | +++
date = "2016-11-09T22:35:17-08:00"
title = "propose"
type = "event"
+++
{{< cfp_dates >}}
# Propose a Talk!
Deadline: February 1, 2017
## Steps
1. Submit a proposal here: [_https://goo.gl/forms/KUn6l1RZMuG8MewO2_](https://goo.gl/forms/KUn6l1RZMuG8MewO2)
1. Get a notification whether your proposal was accepted. Accepted speakers will be notified shortly after the deadline.
## Proposal Checklist
* Title
* Speaker name (optionally, company/position)
* Abstract
* Talk type (Ignite (5 min) or regular (30 min))
* Twitter handle (or public means of contact)
* Email (won’t be published, unless specified by you)
* Phone number (won’t be published)
* Bio (max 200 words, we reserve the right to shorten it for publishing/layout reasons)
## FAQ
### Does DevOpsDays Vancouver have a code of conduct?
Yes! It’s [_here_](https://www.devopsdays.org/events/2017-vancouver/conduct/). If you have a problem related to a breach of the code of conduct, text or speak with any DevOpsDays volunteers to ask about this.
### But my topic seems too basic!
We can't emphasize this enough: DevOpsDays is a conference for everyone involved in operations and devops - that means experts, but also people new to these methodologies and ideas. The selection committee will make sure there's a good balance as long as people like you submit good talks of all levels.
### I have a rough idea, but I need a sounding board.
We’re all about community. Talk to some of your coworkers about it, or maybe drop in the #vandevops channel on freenode to see what some other Vancouver-area operations people think about your ideas.
### What’s your selection process?
Our selection process involves two criteria:
1. A talk that contributes to improvement and breadth in the Vancouver DevOps community.
2. A high quality talk that’s educational and engaging to uninitiated, beginner, intermediate and/or advanced practitioners.
### Does my talk fit?
The majority of our audience are people engaged in technical operations roles. They may be a dedicated operations person, a business person heavily engaged with the ops team, or developers that work closely with the ops team or who have one foot in operations themselves. Within the context of operations, subjects could include:
* architecture and design
* deployment
* testing
* configuration and orchestration
* monitoring
* business value
* risk assessment
* reliability
* automation
* staffing
_We've tried to make this list comprehensive; if you feel your talk fits but isn't covered here, the list is probably wrong, not you_
### What does a proposal look like?
#### Ignite Talk (5 minutes)
**Title:** *Data Sanitization*
**Speaker name:** *Bobbi Tables*
**Abstract:** *In our highly-connected world with several different identities, usernames, and shared accounts, how do we properly write APIs that don't corrupt one another? This presentation will be a quick look at tips and tricks for protecting your own services from other supposedly "trusted" services.*
**Twitter handle:** *@bobbi_tables*
**Email:** *bobbitables@fakemail.com*
**Bio:** *[picture] Bobbi Tables is a DBA completing her first year in the industry. She enjoys hiking, cooking, and breaking your tweetz.*
#### Presentation (30 minutes)
**Title:** *Implementing DevOps: Lessons learned*
**Speaker name:** *Collin Murray*
**Abstract:** *Bunnyware as a company has grown on many axes: teams, staff, services, and infrastructure. We made relatively early attempts to embrace DevOps - we established a DevOps team in early 2011 (disbanding it a year later) and have 100+ internal Puppet modules with Continuous Deployment to production - yet we still struggle to articulate DevOps within the company and bridge the Dev vs Ops divide, especially when everyone is busy and has some service to ship. This talk covers a history of "DevOps" at Bunnyware, covering what worked and more importantly what didn't, and with the benefit of hindsight suggests some possible real-world approaches for rolling out DevOps in your organisation.*
**Twitter handle:** *@dops*
**Email:** *cm@fakemail.com*
**Bio:** *[picture] Collin Murray is a Principal Systems Engineer at Bigco, with over ten years of experience scaling systems.*
### I have another question… ?
Contact the organisers at [_organizers-vancouver-2017@devopsdays.org_](mailto:organizers-vancouver-2017@devopsdays.org)
| 44.343434 | 694 | 0.763326 | eng_Latn | 0.997321 |
932852f15530a5cfc893dece322f344a400b9c00 | 9,275 | md | Markdown | README.md | thomhines/cinch | 71b619d8d9e747d32c34e437c5cd6ae52ef2f9ea | [
"MIT"
] | 1 | 2015-04-26T15:44:34.000Z | 2015-04-26T15:44:34.000Z | README.md | thomhines/cinch | 71b619d8d9e747d32c34e437c5cd6ae52ef2f9ea | [
"MIT"
] | null | null | null | README.md | thomhines/cinch | 71b619d8d9e747d32c34e437c5cd6ae52ef2f9ea | [
"MIT"
] | null | null | null | Cinch 0.8
=========
A simple, streamlined way to combine, compress, and cache web files.
Description
-----------
Cinch allows developers to automatically handle JS/CSS compression and concatenation (combining multiple files into one), reducing file sizes and page load times. There's virtually no installation process; simply change your JS/CSS links to point to Cinch with a list of the files you want to load and it will do the rest.
Furthermore, it's perfect for both development and production environments. Cinch will look for new changes to your JS/CSS files, and if it finds any it will quickly build a static cache file to send to your users.
##### For more up-to-date details, check out the [cinch website](http://projects.thomhines.com/cinch/).
#### Features:
- Automatic minification of JS/CSS, which removes unnecessary spaces and comments
- Converts common pre-processor formats (LESS, SCSS, SASS, and CoffeeScript) into standard CSS/JS automatically
- Combines multiple files into one file to reduce HTTP connections between the server and your users
- Caches files on server if no new changes have been detected to the source files
- Built-in access to tons of common libraries, frameworks and software packages, such as jQuery, Angular, Bootstrap, and more available through the [Bower](http://bower.io/) package manager.
- Live Reload refreshes styles and scripts in your browser automatically when changes are detected to your web files
- Serves '304 Not Modified' headers to users if the user already has the latest code in the browser's cache
- Uses gzip to further compress output files when available
- Adds CSS vendor prefixes automatically, along with a bunch of CSS enhancements
- [Bourbon](http://bourbon.io/) mixins included and automatically added to any Sass files
Basic usage
-----------
Just upload the 'cinch' folder to **the root folder of your site**, and replace all of your `<script>` or `<link>` tags in your HTML with just one tag that links to all of your JS/CSS files.
### Example
<script src="/js/jquery.min.js" type="text/javascript"></script>
<script src="/js/functions.js" type="text/javascript"></script>
<script src="/js/scripts.js" type="text/javascript"></script>
looks like this in cinch:
<script src="/cinch/?files=/js/jquery.min.js,/js/functions.js,/js/scripts.js" type="text/javascript"></script>
#### More Examples
The following example will load up three javascript files (jQuery from Google Hosted Libraries, /js/functions.js, /js/ajax.js) and disable minification.
<script type="text/javascript" src="/cinch/?files=[jquery/1.10.2],/js/functions.js,/js/ajax.js&min=false"></script>
The next example will load up three CSS files (css/reset.css, css/layout.css, css/text.css), disable minification for reset.css (by adding the '!' to the file path for that file), and will force Cinch to create a new cache file on the server every time the page is reloaded.
<link type="text/css" media="all" href="/cinch/?files=!/css/reset.css,/css/layout.css,/css/text.css&force=true">
### Settings
In order to use any of the setting below, just add them to the query string in the `<script>` or `<link>` tag, separated by the '&' character. All settings work for both JS and CSS type files.
#### REQUIRED
- **files=(comma separated list of files)** - List of JS or CSS files to include
*NOTE*: Files should contain the relative path from the **site root** to the files being listed (e.g. `/js/scripts.js`).
##### OPTIONS
- **!(/path/to/filename)** - To disable minification on individual files, simply add '!' to the beginning of that file's path in the comma separated list.
Example: `?files=!/js/plugin.min.js,!/js/scripts.js`
- **[bower-package-name(/version)]** - To include an external library from the list below, enclose the name of the library and the version number(optional) in a pair of square brackets, separated by a forward slash (/). If no version is given, the latest version of the libary will be used.
Example: `?files=[jquery]` or `?files=[jquery/1.10.2]`
A full list of Bower packages can be found on the [Bower](http://bower.io/search/) website.
#### OPTIONAL SETTINGS
*Values marked with a star are the default and will be used if no value is given.*
- **type=( js | css | auto* )** - Indicate which type of files are being sent to Cinch
- **js**: Process files as javascript
- **css**: Process files as CSS
- **auto***: Cinch will do its best to automatically detect which type of files are being used, based on the extension of the first file in the list.
- **force=( true | false* )** - Force Cinch to rebuild the cache and update the user's browser with the newest code on every page load, even if no changes have been detected.
- **min=( true* | false | pack )** - Enable/disable minification on files.
- NOTE: Files marked with a '!' in order to avoid minification will not be minified regardless of this setting's value.
- NOTE: The 'pack' setting minifies *and* obfuscates files. This setting applies only to javascript files. Standard minification will be applied to CSS files if this setting is used.
- **debug=( true* | false )** - When enabled, output files display errors. Otherwise, errors are ignored.
- **reload=( true | false )** - Automatically checks for changes to your web files and reloads those files if a new version is found.
- NOTE: Since this setting is javascript-based, live reloading requires that cinch process at least one link to javascript.
- NOTE: This setting can only be enabled on a javascript link, not CSS.
### Requirements
- **PHP 5+** - Core functionality (minification and concatenation)
- **PHP 5.1?** - Sass/SCSS Compiler (Just a guess as to which version is necessary)
- **PHP 5.1+** - LESS Compiler
- **PHP 5.3+** - CoffeeScript Compiler
### FAQs
- **Cinch isn't working. Why is that?**
There could be a lot of things causing cinch not to run properly on your site: invalid links to cinch or your web files, errors in your code, etc. If you're getting a 404 error on your cinch links, then make sure cinch is properly loaded on your server and your links are correct. If your site is loading cinch, then you can check the top of the output files to see if cinch ran into any bugs or errors. Debug output in cinch is enabled by default.
- **How do I upgrade to a newer version of cinch?**
Just overwrite the cinch folder with the new version! All of your dependencies will be automatically re-downloaded and all of your cache files will be rebuilt the next time you visit your page. After you've rebuilt your cache files, don't forget to set the 'PRODUCTION' constant at the top of the cinch/cinch.php file if you want to protect your cache folder.
- **How do I link to a package that has both CSS and JS files?**
For packages like Bootstrap that have both CSS and JavaScript components, just include the package in both your CSS and JS links. Cinch will automatically separate the files into the correct types. If cinch isn't properly detecting which file type you are trying to use, add the type=css or type=js property to your file link.
- **I have so many cache files!**
Don't worry! This happens a lot as part of the development process. You can delete all of the files in your /cinch/cache folder and cinch will rebuild all of your cache files automatically. Or just wait a month and cinch's automatic clean-up scripts will delete old cache files.
### Other Notes and Goodies
- If you want to speed up performance and prevent new cache files from being created on your server, simply set the PRODUCTION constant in cinch/cinch.php to TRUE (see the sketch after this list). Production mode bypasses most of cinch's code to serve up the cached web files as quickly as possible. NOTE: New changes to any of the raw web files will not be reflected in the cache files.
- The [Bourbon](http://bourbon.io/) mixins library has been packaged with cinch, and will automatically be imported into your Sass files on execution. If you don't need them, no problem; the only extra bulk it will add to your stylesheets will be based on which mixins you use.
- CSS vendor prefixes are added automatically, along with smart CSS minification, color conversions, and more, thanks to [Javier Marín's](https://github.com/javiermarinros) css_optimizer. No need to write 5 lines of CSS to accommodate each browser anymore.
- A separate cache file is created for each combination of JS/CSS files that you use, so that different pages with different requirements can still run as quickly as possible. In order to prevent this folder from being overloaded on a busy development server, the cache is automatically cleared about once a month.
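As a sketch, enabling production mode is a one-line change near the top of cinch/cinch.php (the exact surrounding code may differ):

```php
<?php
// cinch/cinch.php: set to TRUE once your cache files are built; set back
// to FALSE whenever you need cinch to pick up new changes to your files.
define('PRODUCTION', TRUE);
```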
### Special Thanks
Cinch is made with the help of:
- [css_optimizer](https://github.com/javiermarinros/css_optimizer) by [Javier Marín](https://github.com/javiermarinros)
- Nicolas Martin's [PHP port](http://joliclic.free.fr/php/javascript-packer/en/) of Dean Edward's [Packer](http://dean.edwards.name/packer/)
- [JsShrink](https://github.com/vrana/JsShrink/) by Jakub Vrána
- [LESS/SCSS Processing](http://leafo.net/lessphp/)/[scssphp](http://leafo.net/scssphp/) by [leafo](http://leafo.net/)
- [CoffeeScript Processing](https://github.com/alxlit/coffeescript-php) by alxlit | 61.019737 | 449 | 0.748571 | eng_Latn | 0.99391 |
93288e52c0657f504f9307293e21ff67e6ce0b31 | 669 | md | Markdown | client/readme.md | Cradical/jobBoard | ec3ba3ee181e5b8ca2a155311c2a8869715e343f | [
"MIT"
] | null | null | null | client/readme.md | Cradical/jobBoard | ec3ba3ee181e5b8ca2a155311c2a8869715e343f | [
"MIT"
] | null | null | null | client/readme.md | Cradical/jobBoard | ec3ba3ee181e5b8ca2a155311c2a8869715e343f | [
"MIT"
] | null | null | null | ## Job Board - A Sandbox App
### Description
Building a small fullstack app to experiment with Apollo GraphQL and learn a bit more.
### Notes
- This is built in old school React (Class Components)
- You will also need to clone the [server repo]() in order to run this application locally.
## Getting Started
Follow the simple commands in order to get started with this app:
- Clone the repo:
```bash
git clone https://github.com/Cradical/jobBoard.git
```
- Install Dependencies
```bash
cd jobBoard && npm install
```
- Start the app! _Keep in mind you will need to get the server portion up and running before starting this app_
```bash
npm start
```
| 20.272727 | 111 | 0.723468 | eng_Latn | 0.996651 |